CN110567324A - multi-target group threat degree prediction device and method based on DS evidence theory - Google Patents

multi-target group threat degree prediction device and method based on DS evidence theory Download PDF

Info

Publication number
CN110567324A
CN110567324A
Authority
CN
China
Prior art keywords
target
layer
track
targets
threat degree
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910830059.6A
Other languages
Chinese (zh)
Other versions
CN110567324B (en
Inventor
夏春秋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Vision Technology Co Ltd
Original Assignee
Shenzhen Vision Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Vision Technology Co Ltd filed Critical Shenzhen Vision Technology Co Ltd
Priority to CN201910830059.6A priority Critical patent/CN110567324B/en
Publication of CN110567324A publication Critical patent/CN110567324A/en
Application granted granted Critical
Publication of CN110567324B publication Critical patent/CN110567324B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • FMECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F41WEAPONS
    • F41HARMOUR; ARMOURED TURRETS; ARMOURED OR ARMED VEHICLES; MEANS OF ATTACK OR DEFENCE, e.g. CAMOUFLAGE, IN GENERAL
    • F41H11/00Defence installations; Defence devices
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01DMEASURING NOT SPECIALLY ADAPTED FOR A SPECIFIC VARIABLE; ARRANGEMENTS FOR MEASURING TWO OR MORE VARIABLES NOT COVERED IN A SINGLE OTHER SUBCLASS; TARIFF METERING APPARATUS; MEASURING OR TESTING NOT OTHERWISE PROVIDED FOR
    • G01D21/00Measuring or testing not otherwise provided for
    • G01D21/02Measuring two or more variables by means not covered by a single other subclass
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks


Abstract

The invention provides a multi-target group threat degree prediction device and method based on DS evidence theory. A sensor device acquires information about a target and the region where the target is located, and a convolutional neural network extracts and classifies target features. Features are clustered across the multiple targets; common features, and non-common features meeting preset requirements, are retained, and the targets are assigned values according to the feature threat weights. Targets in consecutive frames are associated, the same target is marked, and association over multiple frames yields the track segment of the marked target, which is then judged. A threat degree prediction space is established; threat degree weights are assigned to target features, abnormal tracks, effective tracks, and whether a target enters the protected area, and the threat degree of the target cluster is predicted by DS evidence theory to obtain a threat degree value. The method improves the structure of the convolutional neural network to compensate for the resolution loss of the original image after convolution and pooling, and uses DS evidence theory to improve the threat degree prediction for the multi-target cluster so that countermeasures can be taken quickly.

Description

Multi-target group threat degree prediction device and method based on DS evidence theory
Technical Field
The invention relates to the field of target detection and identification, in particular to a device and a method for predicting a multi-target group threat degree based on a DS evidence theory.
Background
In the combat process of a modern air-defense weapon system, target threat assessment has become an important step in the auxiliary decision making of the combat command and control system, and the assessment result directly influences tactical decisions and target fire distribution. As modern warfare develops, the battlefield situation changes rapidly and uncertain situational information keeps increasing, so that assessment of aerial target threats becomes a multi-field, multi-level uncertain knowledge-reasoning problem. Multi-attribute decision making is easy to implement, performs well, and is widely applied.
In multi-attribute decision theory there are many methods for determining a comprehensive evaluation index function, but the weight of each index of the target threat assessment is set to be constant. A weight vector that remains fixed across varying situations may produce unreasonable integration results in practical problems, i.e. a "state imbalance" problem, so that the integrated target threat value may not reflect the real situation.
Aiming at this problem, the invention provides a multi-target group threat degree prediction device and method based on DS evidence theory: target information is collected by multiple sensors to obtain information about the target and the region where it is located; target categories and feature categories are preset, and a convolutional neural network extracts and classifies target features. The multi-target features are clustered into shared and non-shared features; shared features, and non-shared features meeting preset requirements, are retained, and the targets are assigned values according to the feature threat weights. Targets in consecutive frames are associated, the same target is marked, and association over multiple frames yields the track segment of the marked target, whose track is then judged. A threat degree prediction space is established, threat degree weights are assigned to target features, abnormal tracks, effective tracks, and whether a target enters the protected area, and the threat degree of the target cluster is predicted by DS evidence theory to obtain a threat degree value.
The method improves the structure of the convolutional neural network by introducing a residual structure that compensates for the loss of resolution of the original image after convolution and pooling; it also improves the threat degree prediction for the multi-target cluster by assigning threat degree weights to effective tracks, abnormal tracks, and target features that tend to enter a set range at the next moment, so that countermeasures can be taken quickly. The invention establishes a threat degree prediction space in which the threat degree of the target cluster is predicted based on DS evidence theory to obtain a threat degree prediction result. The method yields an effective and reasonable threat judgment, satisfies threat judgment requirements on the target, and can be widely applied in target detection and identification, remote-sensing mapping, unmanned control, and related fields.
Disclosure of Invention
The invention provides a multi-target group threat degree prediction device and method based on DS evidence theory: multiple sensors collect target information to obtain information about the target and the region where it is located; target categories and feature categories are preset, and a convolutional neural network extracts and classifies the target features. The multi-target features are clustered into shared and non-shared features; shared features, and non-shared features meeting preset requirements, are retained, and the targets are assigned values according to the feature threat weights. Targets in consecutive frames are associated, the same target is marked, and association over multiple frames yields the track segment of the marked target, whose track is then judged. A threat degree prediction space is established, threat degree weights are assigned to target features, abnormal tracks, effective tracks, and whether a target enters the protected area, and the threat degree of the target cluster is predicted by DS evidence theory to obtain a threat degree value.
The method improves the structure of the convolutional neural network by introducing a residual structure that compensates for the loss of resolution of the original image after convolution and pooling; it also improves the threat degree prediction for the multi-target cluster by assigning threat degree weights to effective tracks, abnormal tracks, and target features that tend to enter a set range at the next moment, so that countermeasures can be taken quickly. The invention establishes a threat degree prediction space in which the threat degree of the target cluster is predicted based on DS evidence theory to obtain a threat degree prediction result. The method yields an effective and reasonable threat judgment, satisfies threat judgment requirements on the target, and can be widely applied in target detection and identification, remote-sensing mapping, unmanned control, and related fields.
The invention provides a multi-target group threat degree prediction method based on DS evidence theory, comprising the following steps:
Step 1, acquiring target information with multiple sensors to obtain information about the target and the area where it is located;
Step 2, presetting target categories and feature categories, inputting the acquired target information into a convolutional neural network, extracting target features, and classifying the features; an up-sampling layer is connected after the convolutional layers, and a residual structure is added after the pooling layers, to compensate for the loss of resolution of the new image obtained from the original input image through each convolutional layer;
Step 3, clustering the features of the targets obtained in step 2 according to the features among the multiple targets, dividing them into shared features and non-shared features, retaining the non-shared features meeting the preset requirements, and assigning values to the targets according to the preset feature threat weights;
Step 4, based on two consecutive frame moments, associating the targets appearing in the previous and following frames and judging whether they are the same target; if so, the target association succeeds. The same target is marked, association is continued over multiple frames, and the track segment of the marked target is obtained.
Through step 4, a plurality of targets and their corresponding track segments are obtained.
Step 5, judging the track of each track segment obtained for a successfully associated target:
Effective track: the target is present for at least n consecutive frames; a threat degree weight is assigned to the effective track.
Disappearance track: after disappearing in a certain frame the target no longer appears in any later frame; the corresponding track is a disappearance track.
Abnormal track: if a single target appears in a track that repeatedly disappears and reappears more than a set number of times and is continuous for at most n frames, the track segment is judged to be an abnormal track; a threat degree weight is assigned to the abnormal track.
Predicted track: tracks, including effective tracks and abnormal tracks, are predicted based on the current time; if a single target's track tends to enter the set range at the next moment, a threat degree weight is assigned to that track.
Step 6, establishing a threat degree prediction space: the threat degree of the target cluster is predicted based on DS evidence theory to obtain a threat degree prediction result.
Further, the target information acquisition also requires time registration and space registration. Time registration synchronizes in time the dynamic position data and track of the target obtained by the lidar sensor with the motion-track state data of the target in the fused video image; the data obtained by the lidar, infrared, and visible-light sensors and the fused infrared/visible-light data are processed by separate threads to achieve time synchronization. The fused video image denotes a video image obtained by fusing the infrared video and the visible-light video. Space registration maps the sensor data into a uniform coordinate system using the transformation relationship between each sensor's local coordinate system and the global coordinate system.
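The space registration described above can be illustrated with a minimal 2-D sketch: each sensor's local coordinates are mapped into the global frame through a rotation and a translation. The function name and the pose parameters below are hypothetical, for illustration only; the patent does not specify the transformation in this form.

```python
import math

def local_to_global(point, sensor_pose):
    """Map a point from a sensor's local frame to the global frame.

    sensor_pose = (x0, y0, theta): the sensor's position and heading
    in the global coordinate system (illustrative 2-D case).
    """
    x, y = point
    x0, y0, theta = sensor_pose
    c, s = math.cos(theta), math.sin(theta)
    # Rotate by the sensor heading, then translate to the sensor position.
    return (x0 + c * x - s * y, y0 + s * x + c * y)

# A target 1 m ahead of a sensor located at (10, 0) and facing +90 degrees
# lands at (10, 1) in the global frame.
gx, gy = local_to_global((1.0, 0.0), (10.0, 0.0, math.pi / 2))
```

In a real system the same rigid-body transform is applied in 3-D, with the pose obtained from sensor calibration.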
In step 2, all targets at the current moment are detected by the convolutional neural network to detect the target features; feature detection and the detection of all targets share the convolutional layers of the network.
The training data set is divided according to the land, water, and air domains, each class containing multiple data; the training set is input into the convolutional neural network for training with one scene per batch, each scene containing at least one target, and the model is saved every 200 frames of iteration. Training is completed with the Adam optimization algorithm and the model is updated; the updated model serves as the detection model and outputs the category, number, and features of the targets.
The convolutional neural network consists of convolutional layers, excitation layers, pooling layers, and up-sampling layers:
Layer 1 is the convolutional input layer: the original image is input; 16 convolution kernels of size 5 × 5 are set, with padding 2, stride 1, and ReLU as the activation function.
Layer 2 is a convolution operation layer and an average pooling layer: an image is input; the convolutional layer has 32 kernels of size 5 × 5, padding 2, stride 1, and a ReLU activation function; the down-sampling layer uses a 2 × 2 kernel with stride 2 and outputs an average-pooled, down-sampled image.
Layer 3 is a convolution operation layer, an average pooling layer, and an up-sampling layer: the image from layer 2 is convolved with 64 kernels of size 3 × 3, padding 1, stride 1, and a ReLU activation function; the down-sampling layer uses a 2 × 2 kernel with stride 2, performs average-pooled down-sampling, and is followed by regularization; the up-sampling layer uses a 2 × 2 kernel with stride 2.
Layer 4 is a convolution operation layer, a maximum pooling layer, and an up-sampling layer: the image from layer 3 is convolved with 32 kernels of size 3 × 3, padding 1, and a ReLU activation function; the down-sampling layer uses a 2 × 2 kernel for maximum pooling; the up-sampling layer uses a 2 × 2 kernel with stride 2; an image is output.
Layer 5 is a convolution operation layer and a maximum pooling layer: the image from layer 4 is convolved with 16 kernels of size 3 × 3, padding 1, and a ReLU activation function; the down-sampling layer uses a 2 × 2 kernel for maximum pooling and outputs a down-sampled image.
Layer 6 consists of two fully connected layers: 2048 neurons are connected to the residual structure and the output of the layer-5 feature map, after which Dropout randomly discards node information to obtain new neurons; the Dropout layer retains only 50% of the outputs.
Layer 7 is the output layer, which outputs the target features through the classifier.
To compensate for the difference between the resolution of the image produced by each convolutional layer and that of the original image: (1) a superposed residual structure is used for compensation, the residual structure linearly adding the output features of the layer-2 average pooling layer to the output features of the layer-5 maximum pooling layer; (2) the outputs of the layer-3 and layer-4 convolution operation layers are connected to up-sampling layers that restore the resolution of the original image.
The target is detected through the convolutional neural network, the target features are extracted, and the classification of the target features is obtained.
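The resolution bookkeeping behind the layer descriptions above follows the standard output-size formula for convolution and pooling. The sketch below (helper names and the 64-pixel input are illustrative assumptions, not from the patent) verifies that a 5 × 5 convolution with padding 2 and stride 1 preserves the input size, a 2 × 2 pooling with stride 2 halves it, and a 2 × 2 up-sampling with stride 2 restores it — which is exactly why the residual and up-sampling connections can be added element-wise.

```python
def conv_out(n, kernel, pad, stride):
    """Output side length of a convolution or pooling layer (square input)."""
    return (n + 2 * pad - kernel) // stride + 1

def upsample_out(n, factor=2):
    """Output side length after up-sampling by the given factor."""
    return n * factor

n = 64                                            # illustrative input resolution
l1 = conv_out(n, 5, 2, 1)                         # layer 1: 5x5 conv, pad 2 -> 64
l2 = conv_out(conv_out(l1, 5, 2, 1), 2, 0, 2)     # conv, then 2x2 avg pool -> 32
l3 = upsample_out(conv_out(conv_out(l2, 3, 1, 1), 2, 0, 2))  # pool -> 16, upsample -> 32
```

Because layer 3's up-sampled output matches layer 2's pooled resolution, the residual addition between feature maps is shape-consistent.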
In step 3, feature clustering is performed among the multiple targets; the clusters are divided into shared features and non-shared features, the non-shared features meeting the preset requirements are retained, and the targets are assigned values according to the preset feature threat weights.
When judging the threat degree of a multi-target set, the targets grouped into one target-set class may share multiple identical features, namely common features, as well as independent features of a single target, namely non-common features.
For the common features, a threat degree weight is assigned according to the feature type.
For the non-common feature parts of the multiple targets, the importance of a non-common feature is judged by its threat degree weight W(A) within the feature region, and a weight threshold W0(A) is preset; if the weight is larger than the preset threshold, the non-common feature is retained and this feature type is assigned a threat degree weight.
The weight of the threat degree of a non-common feature within the feature region is:

W(A) = (F(A) − Fijkmin(A)) / (Fijkmax(A) − Fijkmin(A))

where N denotes the number of target classes and i the i-th target class, 1 ≤ i ≤ N; M denotes the number of feature classes and j the j-th feature set, 1 ≤ j ≤ M; Rijk denotes features 1 to k in the j-th feature set of the i-th target; F(A) denotes the ratio of feature A among the k features; W(A) denotes the weight of feature A within the target class, feature class, and feature set to which it belongs; Fijkmax(A) denotes the maximum of the ratio of feature A among features 1 to k, i.e. the maximum found over the i-th target class and the j-th feature set; and Fijkmin(A) denotes the minimum of the ratio of feature A among features 1 to k.
A weight threshold is preset: when W(A) is larger than W0(A), the non-common feature A is retained. The registered and fused target features then contain the common features and the non-common features that satisfy this condition.
Step 4 specifically comprises: based on two consecutive frame moments, the targets appearing in the previous and following frames are associated; taking the current frame as the reference, it is judged whether the targets of the current frame and the previous frame are the same target, and likewise for the following frames. If they are the same target, the target association succeeds; the same target is marked, association is continued over multiple frames, and the track segment of the marked target is obtained.
Assume that target A and target B are targets in consecutive frames; the two are associated to judge whether they are the same target. Because a target may be occluded, the target is divided into several small patches and each patch is given a different weight; when computing the apparent similarity, the overall apparent similarity and the similarity of the corresponding patches are computed at the same time, and whether the two targets are the same is then judged comprehensively.
The judging method comprises: (1) the correlation of the image regions where the targets are located;
where I1 and I2 denote the image regions corresponding to targets A and B respectively, sim(I1, I2) denotes the similarity of the two images, and ∘ denotes the dot-product operation;
(2) the target is divided into small patches, and the similarity between corresponding patches is computed.
Targets A and B are divided into s patches, and the similarity between patches is calculated:
where Ak denotes the k-th patch in target A and Bk the k-th patch in target B;
wk denotes the weight of the k-th patch in the target, and the Euclidean distance between the colors at position (x, y) of the k-th patches of targets A and B is used.
The correlation of the image regions and the similarity between corresponding patches are combined into a comprehensive matching similarity; if it is greater than the preset target-matching similarity threshold, the targets are considered successfully associated across the frames and A and B represent the same target. All targets of the previous and following frames are traversed; if the comprehensive matching similarity is not greater than the preset threshold, the target association fails.
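A minimal sketch of the two-part association test described above — overall region similarity plus weighted per-patch color distance — could look like this. The specific similarity measures, the 50/50 mixing, and the threshold are illustrative assumptions; the patent's exact formulas are given only as images.

```python
import math

def region_similarity(i1, i2):
    """Cosine similarity between two flattened image regions (overall appearance)."""
    dot = sum(a * b for a, b in zip(i1, i2))
    n1 = math.sqrt(sum(a * a for a in i1))
    n2 = math.sqrt(sum(b * b for b in i2))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def patch_similarity(patches_a, patches_b, weights):
    """Weighted score from per-patch color distances (occlusion-aware part)."""
    total = 0.0
    for ca, cb, w in zip(patches_a, patches_b, weights):
        d = math.dist(ca, cb)          # Euclidean color distance at patch k
        total += w / (1.0 + d)         # closer colors -> higher score
    return total / sum(weights)

def same_target(i1, i2, pa, pb, w, threshold=0.8):
    """Comprehensive matching similarity vs. a preset threshold."""
    score = 0.5 * region_similarity(i1, i2) + 0.5 * patch_similarity(pa, pb, w)
    return score > threshold

# Identical region pixels and identical patch colors -> same target.
same = same_target([1.0, 2.0, 3.0], [1.0, 2.0, 3.0],
                   [(10, 10, 10)], [(10, 10, 10)], [1.0])
```

Giving visible patches larger weights wk makes the score robust when part of the target is occluded, which is the motivation stated above.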
Further, target association also handles targets whose association fails; failure of association over consecutive frames may be caused by the disappearance or appearance of a target. Target disappearance is defined as follows: if the target track at time t−1 has no association relationship with any target at time t, the target is taken as a disappeared target. Target appearance is defined as follows: if among the targets at the current moment there is a target with no association relationship to any target track of the previous moment, it is taken as a new target at the current moment.
In step 5, track prediction is performed based on the current time: Kalman filtering is applied to the effective tracks and the abnormal tracks at the current moment to predict their tracks. The position of each target is recorded in real time, so that for targets that have not entered the set area at the current moment it can be judged whether they will enter the range of the set area at the next moment.
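The Kalman-filter prediction used in step 5 can be sketched for a 1-D constant-velocity model; the state layout, the noise value, and the area-entry check below are illustrative assumptions, not the patent's formulation.

```python
def kalman_predict(x, P, dt=1.0, q=0.01):
    """Prediction step of a Kalman filter, 1-D constant-velocity model.

    x = [position, velocity]; P is the 2x2 state covariance;
    q is a small process-noise term (illustrative value).
    """
    # State transition F = [[1, dt], [0, 1]]: x' = F x
    x_pred = [x[0] + dt * x[1], x[1]]
    # Covariance: P' = F P F^T + Q, with Q = q * I for brevity
    p00 = P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q
    p01 = P[0][1] + dt * P[1][1]
    p10 = P[1][0] + dt * P[1][1]
    p11 = P[1][1] + q
    return x_pred, [[p00, p01], [p10, p11]]

def will_enter_area(x, P, area, dt=1.0):
    """Judge whether the predicted position falls inside the set range."""
    (pos, _), _ = kalman_predict(x, P, dt)
    lo, hi = area
    return lo <= pos <= hi

# A target at position 9 moving at +2 per frame is predicted to reach 11,
# inside the protected range [10, 20].
I2 = [[1.0, 0.0], [0.0, 1.0]]
entering = will_enter_area([9.0, 2.0], I2, (10.0, 20.0))
```

A full tracker would follow each predict step with a measurement update; only the predicted mean is needed for the area-entry judgment described above.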
In step 6, the correctness of the predicted track is judged by DS evidence theory and the optimal track is output, as follows:
(1) The target real-time data are recorded;
(2) A sample space matrix for threat degree prediction is established, D = {L, M, H}, where L denotes low risk, M medium risk, and H high risk;
(3) The real-time target data are classified based on the sample space matrix and divided by clustering into target feature information, target position information, and target track information; the respective threat degree weights are output and converted into probabilities to obtain the evidences m1, m2, and m3, where the basic probability assignment of evidence mi is denoted mi(Ai);
(4) m1, m2, and m3 are synthesized by DS evidence theory, and the class satisfying the synthesis decision rule is output as the final result.
Further, DS evidence theory is applied to synthesize m1, m2, and m3, and the class satisfying the synthesis decision rule is output as the final result, as follows:
The predicted track situation of the target is taken as the basic proposition A, and the target spatial information, image information, and target track information are taken as the basic evidences.
First, the basic probability is calculated:
where (1 − ΣBel) denotes the basic probability assignment still available for allocation, and α denotes the degree of influence of the belief function Bel and the plausibility function Pl on the basic probability assignment;
Δmxyz(n) denotes the difference between the evidences with respect to the n-th characteristic index, Δmmin(n) the minimum and Δmmax(n) the maximum difference of the three levels, and S(mi) the evidence support;
The evidence support reflects the degree to which an evidence is supported by the other evidences: the larger the value of S(mi), the smaller the distance between the evidences and the greater their mutual support, where D(mi) is the distance between the evidences;
Smax(mi) denotes the maximum evidence support and Smin(mi) the minimum evidence support.
Then the basic probability assignment is calculated:
mi(Ai) = S(mi)′ × P(mi)    (10)
Finally, the evidences are synthesized; the synthesized probability result is output and the threat degree of the target cluster is judged.
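The final synthesis follows Dempster's rule of combination. A minimal sketch over the sample space D = {L, M, H} is shown below; the example mass values are illustrative, and in the patent the masses would first be obtained from the support-weighted assignments of Eq. (10).

```python
def dempster_combine(m1, m2):
    """Dempster's rule: combine two basic probability assignments.

    Masses are dicts mapping frozenset focal elements to probabilities.
    """
    combined, conflict = {}, 0.0
    for a, pa in m1.items():
        for b, pb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + pa * pb
            else:
                conflict += pa * pb        # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("evidences are totally conflicting")
    # Normalize by (1 - K), where K is the total conflict
    return {a: p / (1.0 - conflict) for a, p in combined.items()}

H = frozenset({"H"})
THETA = frozenset({"L", "M", "H"})            # full frame of discernment
m1 = {H: 0.7, THETA: 0.3}                     # e.g. evidence from target features
m2 = {H: 0.6, THETA: 0.4}                     # e.g. evidence from the target track
m12 = dempster_combine(m1, m2)                # m12[H] == 0.88
```

Three evidences m1, m2, m3 are combined by applying the rule pairwise, since Dempster's rule is associative; the class with the dominant combined mass is then output under the synthesis decision rule.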
The invention also provides a multi-target group threat degree prediction device based on DS evidence theory, comprising: (1) a power supply; (2) a multi-sensor module; (3) a target processing module; (4) a track processing module; (5) a threat degree processing module; (6) a wireless communication module; (7) a display terminal.
The multi-sensor module (2) collects target and environment information data and inputs them into the target processing module (3), which performs target detection and feature extraction on the target information, classifies and associates the features, and associates targets across frames to obtain successfully and unsuccessfully associated targets; the position information of the targets is recorded in real time and input, together with the targets, into the track processing module (4) for track analysis and prediction; the position, feature, and track information are output to the threat degree processing module (5), and the target and track information are output to the display terminal (7). The power supply (1) powers the whole device so that it can operate independently, and the wireless communication module (6) provides network connectivity for the whole device. The threat degree processing module (5) assigns threat degree weights to the position, feature, and track information respectively and predicts the threat degree of the target cluster based on these weights.
This efficient threat assessment method is simple and easy to implement; it can effectively cope with multi-level, multi-type, multi-directional continuous enemy attacks, distribute firepower scientifically, improve the efficiency of combat command decisions, and effectively avoid state imbalance.
drawings
FIG. 1 is a flow chart of an implementation of the device and method for predicting the threat degree of a multi-target group based on the DS evidence theory.
FIG. 2 is a device framework diagram of a multi-objective group threat degree prediction device and method based on DS evidence theory.
FIG. 3 is a convolutional neural network structure diagram of the device and method for predicting the threat degree of multiple target groups based on DS evidence theory.
FIG. 4 is a threat degree prediction diagram of the device and method for predicting the threat degree of a multi-target group based on the DS evidence theory.
Detailed Description
It should be noted that the embodiments in the present application and the features of the embodiments can be combined with each other provided there is no conflict; the invention is described in further detail below with reference to the drawings and specific embodiments.
Fig. 1 is a flowchart of an implementation of the device and method for predicting the threat degree of a multi-target group based on DS evidence theory of the present invention, mainly showing: targets are detected by multiple sensors and registered in time and space; target features are detected by the convolutional neural network; the multi-target features, comprising common and non-common features, are matched and fused based on feature clustering within the same frame; targets are marked based on association across consecutive frames, the track segments of the marked targets are obtained, and the segments are judged as abnormal tracks, disappearance tracks, or effective tracks; a sample space is established, the evidences are divided into target feature information, target position information, and target track information, the weights are calculated, the evidences are synthesized with DS evidence theory, and the class satisfying the synthesis decision rule is output as the final result.
step 1, acquiring target information by using multiple sensors, and acquiring information of a target and an area where the target is located; the system at least comprises a laser radar sensor, an infrared sensor and a visible light sensor;
the acquired data is temporally and spatially registered.
The time registration synchronizes, in time, the dynamic position data and track of the target obtained by the laser radar sensor with the motion track state data of the target in the fused video image; the data obtained by the laser radar sensor, the infrared sensor and the visible light sensor, together with the fused infrared-visible data, are processed in separate threads to achieve time synchronization. The fused video image is the video image obtained by fusing an infrared video with a visible light video;
the spatial registration specifically includes mapping sensor data information to a uniform coordinate system using a transformation relationship between a local coordinate system and a global coordinate system of each of the plurality of sensors.
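As a concrete illustration, the spatial registration step can be sketched as a rigid transform mapping points from each sensor's local frame into the global frame. The rotation R and translation t below are hypothetical calibration values, not parameters given in this specification.

```python
import numpy as np

def local_to_global(points, R, t):
    """Map Nx3 points from a sensor's local frame into the global frame.

    R: 3x3 rotation matrix, t: length-3 translation of the sensor relative
    to the global origin (both assumed known from extrinsic calibration).
    """
    points = np.asarray(points, dtype=float)
    return points @ R.T + t

# Hypothetical sensor rotated 90 degrees about z and offset by (1, 0, 0).
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0,              0,             1]])
t = np.array([1.0, 0.0, 0.0])
out = local_to_global([[1.0, 0.0, 0.0]], R, t)  # -> approx [[1., 1., 0.]]
```

Once every sensor's data is expressed in this common frame, the registered measurements can be fused directly.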
Step 2, presetting target categories and feature categories, inputting the acquired target information into a convolutional neural network, extracting the target features and classifying them; preferably, a sampling layer is connected after the convolutional layers and a residual structure is added after the pooling layers, so as to compensate for the loss of resolution suffered by the original input image as it passes through each convolutional layer;
further, the convolutional neural network comprises a convolutional layer, an excitation layer, a pooling layer and an upsampling layer;
FIG. 3 is a schematic diagram of the convolutional neural network structure of the device and method for predicting the threat degree of a multi-target group based on the DS evidence theory, mainly showing that the convolutional neural network comprises a convolutional layer, an excitation layer, a pooling layer and an up-sampling layer; a sampling layer is connected after the convolutional layer, and a residual structure is added after the pooling layer;
wherein the 1st layer is a convolution input layer: the original image is input; 16 convolution kernels of size 5 × 5 are set, with padding 2 and stride 1; the activation function is set to the ReLU function;
the 2nd layer is a convolution operation layer plus an average pooling layer: an image is input; the convolution layer has 32 convolution kernels of size 5 × 5, padding 2, stride 1, with a ReLU linear activation function; the down-sampling layer uses a 2 × 2 kernel with stride 2 and performs average-pooling down-sampling;
the 3rd layer is a convolution operation layer, an average pooling layer and an up-sampling layer: the image from the 2nd layer is input and convolved with 64 kernels of size 3 × 3, padding 1, stride 1, with a ReLU linear activation function; the down-sampling layer uses a 2 × 2 kernel with stride 2, performs average-pooling down-sampling and then applies regularization; the up-sampling layer uses a 2 × 2 kernel with stride 2;
the 4th layer is a convolution operation layer, a maximum pooling layer and an up-sampling layer: the image from the 3rd layer is input and convolved with 32 kernels of size 3 × 3, padding 1, with a ReLU activation function; the down-sampling layer performs maximum pooling with a 2 × 2 kernel; the up-sampling layer uses a 2 × 2 kernel with stride 2; an image is output;
wherein the 5th layer is a convolution operation layer plus a maximum pooling layer: the image from the 4th layer is input and convolved with 16 kernels of size 3 × 3, padding 1, with a ReLU activation function; the down-sampling layer performs maximum pooling with a 2 × 2 kernel and outputs the down-sampled image;
wherein the 6th layer consists of two fully connected layers: 2048 neurons connect the residual structure and the feature map output of the 5th layer, after which Dropout randomly drops node information to obtain new neurons; the Dropout layer retains only 50% of the outputs;
and the 7th layer is the output layer, in which the target features are output through the classifier.
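The layer dimensions above can be checked with a short sketch: assuming an n × n input (this specification does not state the input resolution, so 64 is illustrative), the side length of the feature map after each of the five convolution stages follows the usual convolution/pooling size formulas.

```python
def conv_out(n, k, p, s):
    """Output side length of a convolution: kernel k, padding p, stride s."""
    return (n + 2 * p - k) // s + 1

def pool_out(n, k, s):
    """Output side length of a pooling layer: kernel k, stride s."""
    return (n - k) // s + 1

def feature_map_trace(n):
    """Side length after each of the five conv stages described above."""
    trace = []
    n = conv_out(n, 5, 2, 1); trace.append(n)                      # layer 1: 5x5, pad 2
    n = pool_out(conv_out(n, 5, 2, 1), 2, 2); trace.append(n)      # layer 2: conv + avg pool
    n = 2 * pool_out(conv_out(n, 3, 1, 1), 2, 2); trace.append(n)  # layer 3: pool then 2x up-sample
    n = 2 * pool_out(conv_out(n, 3, 1, 1), 2, 2); trace.append(n)  # layer 4: pool then 2x up-sample
    n = pool_out(conv_out(n, 3, 1, 1), 2, 2); trace.append(n)      # layer 5: conv + max pool
    return trace

trace = feature_map_trace(64)  # -> [64, 32, 32, 32, 16]
```

The trace shows why the up-sampling layers matter: without them the resolution would keep halving, whereas layers 3 and 4 restore it before the final stage.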
Further, for ease of processing, the detection of target features and the detection of all targets share the convolutional layers of the convolutional neural network.
Further, a convolutional neural network is used to detect the targets, obtain the target features and pre-train: the training data set is divided according to the land, water and air domains, each class containing a plurality of data items; the training set is fed into the convolutional neural network in batches of scenes, each scene containing at least one target, and data is saved every 200 frames during iteration; training is completed with the Adam optimization algorithm and the model is updated; the updated model is used as the detection model, outputting the type, number and features of the targets.
Further, in order to compensate for the difference between the resolution of the image produced by each convolutional layer and that of the original image: (1) compensation is performed with a superposed residual structure, which linearly adds the output features of the 2nd-layer average-pooling sampling layer to those of the 5th-layer maximum-pooling sampling layer; (2) the outputs of the 3rd-layer and 4th-layer convolution operation layers are connected to an up-sampling layer matching the resolution of the original image;
And detecting the target through the convolutional neural network, extracting the target characteristics and obtaining the classification of the target characteristics.
step 3, clustering the characteristics of the target characteristics obtained in the step 2, clustering the characteristics according to the characteristics among multiple targets, dividing the characteristics into shared characteristics and non-shared characteristics, and reserving the non-shared characteristics meeting the preset requirements; assigning values to a plurality of targets according to a preset characteristic threat weight;
Preferably, the threat degree of the multi-target set is judged; since the targets are grouped into a target set class, they may share a number of identical features, i.e. common features, alongside the independent features, i.e. non-common features, of individual targets. For example, if target A has feature 1, target B has feature 2 and target C has features 1, 2 and 3, features 1 and 2 are kept and feature 3 is checked against the preset requirement;
for the common characteristics, endowing a threat degree weight value according to the characteristic type;
For the non-common feature parts of multiple targets, the importance of a non-common feature is judged according to its threat degree weight W(A) in the feature region; a weight threshold W0(A) is preset, and if the weight is greater than the preset threshold, the non-common feature is retained and its feature type is given a threat degree weight;
the weight formula of the threat degree of the non-shared characteristics in the characteristic region is as follows:
wherein N denotes the number of target classes and i the i-th target class, between 1 and N; M denotes the number of feature classes and j the j-th feature set, between 1 and M; Rij denotes the j-th feature set of the i-th target, containing features 1 to kb; Rijk denotes the k-th of features 1 to kb in the j-th feature set of the i-th target; W(A) denotes the weight of target A within the target class, feature class and feature set to which it belongs;
wherein Fijkmax(A) denotes the maximum of the ratio of feature A among features 1 to k, i.e. the maximum from 1 to k is sought within the i-th target class and j-th feature set; Fijkmin(A) denotes the minimum of the ratio of feature A among features 1 to k;
Presetting a weight value:
When W(A) is greater than W0(A), the non-common feature A is retained; the registered and fused target features contain the common features and those non-common features that satisfy the condition.
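The retention rule for non-common features described above can be sketched as follows. The feature sets reproduce the A/B/C example from the text; the threat weight W and threshold W0 are illustrative values, since the exact weight formula appears only in the specification's figures.

```python
from collections import Counter

def split_features(feature_sets):
    """Features held by more than one target are common; features held
    by only one target are non-common (per the example: A={1}, B={2},
    C={1, 2, 3})."""
    counts = Counter(f for s in feature_sets.values() for f in s)
    common = {f for f, c in counts.items() if c > 1}
    non_common = {f for f, c in counts.items() if c == 1}
    return common, non_common

sets = {"A": {1}, "B": {2}, "C": {1, 2, 3}}
common, non_common = split_features(sets)   # common {1, 2}, non-common {3}

W = {3: 0.7}   # hypothetical threat weight W(A) of feature 3 in its region
W0 = 0.5       # hypothetical preset threshold W0(A)
kept = {f for f in non_common if W.get(f, 0.0) > W0}  # feature 3 is retained
```

Only features that survive this filter, together with the common features, are carried forward into the fused multi-target feature set.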
step 4, based on two continuous frame moments, associating targets appearing in a previous frame and a later frame, judging whether the targets are the same target, and if the targets are the same target, indicating that the target association is successful; marking the same target, continuously associating multiple frames to obtain a track segment of the marked target, and obtaining a plurality of targets and track segments corresponding to the targets through the step 4;
Because a target may be occluded, it is divided into a number of small tiles and each tile is given a different weight; when the apparent similarity is computed, the overall apparent similarity and the similarity of the corresponding tiles are computed together, and whether the two targets are the same target is then judged comprehensively. Assume that target A and target B are targets in consecutive frames; the two are associated to judge whether they are the same target,
the judging method comprises the following steps: (1) the relevance of the image area where the target is located;
wherein I1 and I2 denote the image areas corresponding to the two targets A and B respectively, sim(I1, I2) denotes the similarity of the two images, and ∘ denotes the dot multiplication operation;
(2) Dividing small blocks into the target, and correspondingly similarity among the small blocks;
dividing the targets A and B into s small blocks, and calculating the similarity between the small blocks:
wherein Ak denotes the k-th tile in target A and Bk the k-th tile in target B;
wk denotes the weight of the k-th tile in the target, and the Euclidean distance term denotes the distance between the colors of targets A and B at the position (x, y) of the k-th tile;
The correlation of the image areas where the targets lie and the similarity between the corresponding tiles are combined by similarity matching into a comprehensive matching similarity, which is compared with a preset target-matching similarity threshold; if it is greater, the target is considered successfully associated across the previous and next frames, and A and B denote the same target. All targets of the two frames are traversed; if the comprehensive matching similarity does not exceed the preset threshold, the target association fails.
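A minimal sketch of the tile-based appearance matching: the two target patches are split into tiles and a weighted similarity is accumulated from per-tile color distances. The distance-to-similarity mapping and the tile weights are assumptions; the text only specifies that tiles are weighted and compared by Euclidean color distance.

```python
import numpy as np

def tile_similarity(img_a, img_b, s, weights):
    """Split two equally-sized target patches into s horizontal tiles and
    accumulate a weighted similarity from per-tile color distances."""
    tiles_a = np.array_split(img_a, s, axis=0)
    tiles_b = np.array_split(img_b, s, axis=0)
    sims = []
    for ta, tb in zip(tiles_a, tiles_b):
        d = np.linalg.norm(ta.astype(float) - tb.astype(float))
        sims.append(1.0 / (1.0 + d))  # map a distance in [0, inf) to (0, 1]
    return float(np.dot(weights, sims))

a = np.zeros((4, 4, 3))       # two identical dummy patches
b = np.zeros((4, 4, 3))
sim = tile_similarity(a, b, 2, [0.5, 0.5])  # identical patches -> 1.0
```

In a full matcher this tile score would be combined with the whole-patch similarity and compared against the preset association threshold.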
Further, for a target whose association fails, the failure across multiple consecutive frames may be caused by the target disappearing or appearing. Disappearance is defined as follows: if the target track at time t-1 has no association with any target at time t, the target is treated as a disappearing target; if a target has no association with any target track of the previous moment, it is treated as a new target of the current moment and is marked starting from the current frame, yielding the track segment corresponding to the new target.
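The association bookkeeping described above — unmatched old tracks become disappearing targets, unmatched detections open new tracks — can be sketched as follows; the detection representation and the matcher are placeholders for the similarity test of step 4.

```python
import itertools

def make_tracker():
    """Returns (tracks, step). step(detections, associate) updates the
    track table: tracks with no matching detection vanish, and unmatched
    detections open new tracks with fresh ids."""
    tracks = {}
    ids = itertools.count()

    def step(detections, associate):
        matched = set()
        vanished = []
        for tid, last in list(tracks.items()):
            hit = next((i for i, d in enumerate(detections)
                        if i not in matched and associate(last, d)), None)
            if hit is None:           # no association at time t -> disappearing
                vanished.append(tid)
                del tracks[tid]
            else:                     # association succeeded -> extend track
                matched.add(hit)
                tracks[tid] = detections[hit]
        for i, d in enumerate(detections):
            if i not in matched:      # new target -> start a new track
                tracks[next(ids)] = d
        return vanished

    return tracks, step

tracks, step = make_tracker()
step([5], lambda a, b: a == b)                 # frame 1: new target -> track 0
vanished = step([5, 9], lambda a, b: a == b)   # frame 2: 5 associates, 9 is new
```

Repeating `step` over many frames yields exactly the marked track segments that step 5 then classifies.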
Step 5, judging the track of the track section obtained by the target successfully associated;
Wherein, the track includes: valid, abnormal, vanishing, and potential trajectories;
Further, an effective track is one present in at least n consecutive frames; a threat degree weight is assigned to the effective track;
Further, a vanishing track corresponds to a target that, after disappearing in a certain frame, never appears again;
Further, for an abnormal track: if a single target appears more than a set number of times, frequently disappearing and reappearing, and has at most n consecutive frames of track, the track segment is judged to be abnormal; a threat degree weight is assigned to the abnormal track;
Further, for a potential track, the position of the target is recorded in real time; for a target that has not entered the set area at the current moment, if its track tends to enter the set range at the next moment, the track is called a potential track and is given a threat degree weight;
Predicting the track, namely predicting the track based on the current moment, wherein the predicting comprises effective track prediction, abnormal track prediction and potential track prediction; performing track prediction on the effective track and the abnormal track at the current moment by using Kalman filtering;
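The Kalman prediction of effective and abnormal tracks can be sketched with a standard constant-velocity model; the state layout [px, py, vx, vy] and the noise level q are assumptions, as this specification does not fix the filter model.

```python
import numpy as np

def kalman_predict(x, P, dt=1.0, q=1e-2):
    """One predict step of a constant-velocity Kalman filter for a 2-D
    track state x = [px, py, vx, vy] with covariance P."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)
    Q = q * np.eye(4)          # assumed process-noise covariance
    x_pred = F @ x             # propagate the state one step forward
    P_pred = F @ P @ F.T + Q   # propagate the uncertainty
    return x_pred, P_pred

x0 = np.array([0.0, 0.0, 1.0, 2.0])        # at origin, moving (1, 2) per frame
x1, P1 = kalman_predict(x0, np.eye(4))     # predicted position (1, 2)
```

The predicted position feeds the potential-track judgment: a track whose prediction falls inside the set range at the next moment is the "tendency to enter" case described above.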
step 6, establishing a threat degree prediction space, comprising: and predicting the threat degree of the target cluster based on DS evidence theory to obtain a threat degree prediction result.
(1) recording target real-time data;
(2) establishing a sample space matrix for threat degree prediction, D = {L, M, H}, wherein L denotes low risk, M medium risk and H high risk;
(3) classifying the real-time target data based on the sample space matrix and dividing the features, according to the clustering, into target feature information, target position information and target track information; the respective threat degree weights are output and converted into probabilities to obtain the evidences m1, m2 and m3; the basic probability assignment of evidence mi is denoted mi(Ai);
(4) and (5) synthesizing m1, m2 and m3 by using DS evidence theory, and outputting the classes meeting the synthesis decision rule as final results.
Taking the predicted track situation of the target as the basic proposition A, and the target space information, image information and target track information as the basic evidence, first calculate the basic probability:
wherein (1 - ΣBel) denotes the basic probability mass still assignable; α denotes the degree of influence of the belief function Bel and the plausibility function Pl on the basic probability assignment:
Δmxyz(n) denotes the difference between the pieces of evidence with respect to the n-th characteristic index; its minimum and maximum over the three levels give the minimum and maximum three-level differences; S(mi) denotes the evidence support;
wherein, the evidence support degree is as follows:
The evidence support reflects the degree to which an item of evidence is supported by the other evidence: the larger S(mi), the smaller the distance between the evidences and the greater their mutual support; D(mi) is the distance between the evidences;
wherein Smax(mi) denotes the maximum evidence support and Smin(mi) the minimum evidence support;
then calculating the basic probability assignment:
mi(Ai) = S(mi)′ × P(mi)  (10)
and finally, synthesis:
And finally, outputting a synthetic probability result and judging the threat degree of the target cluster.
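The evidence synthesis of step 6 can be illustrated with Dempster's rule of combination over the sample space D = {L, M, H}; the masses m1, m2 and m3 below are illustrative values standing in for the converted threat-degree weights.

```python
def dempster(m1, m2):
    """Combine two basic probability assignments over the singleton
    hypotheses of D = {L, M, H} with Dempster's rule; mass assigned to
    the full frame (ignorance) is keyed by 'D'."""
    combined = {}
    conflict = 0.0
    for a, pa in m1.items():
        for b, pb in m2.items():
            if a == b:
                inter = a
            elif a == "D":
                inter = b
            elif b == "D":
                inter = a
            else:
                inter = None  # disjoint singletons -> conflicting mass
            if inter is None:
                conflict += pa * pb
            else:
                combined[inter] = combined.get(inter, 0.0) + pa * pb
    k = 1.0 - conflict  # normalization factor
    return {h: v / k for h, v in combined.items()}

m1 = {"L": 0.1, "M": 0.3, "H": 0.6}   # e.g. from target feature information
m2 = {"L": 0.2, "M": 0.2, "H": 0.6}   # e.g. from target position information
m3 = {"L": 0.3, "M": 0.3, "H": 0.4}   # e.g. from target track information
m12 = dempster(m1, m2)
m123 = dempster(m12, m3)              # synthesized cluster threat masses
```

The class with the largest combined mass (here H, high risk) is the one output when it satisfies the synthesis decision rule.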
FIG. 2 is a device framework diagram of the multi-target group threat degree prediction device and method based on the DS evidence theory. The device of the invention mainly comprises: (2) a multi-sensor module; (3) a target processing module; (4) a trajectory processing module; (5) a threat degree processing module; (7) a display terminal;
it also comprises a power supply (1) and a wireless communication module (6);
The multi-sensor module (2) collects target and surrounding-environment information data and inputs them to the target processing module (3), which performs target detection and feature extraction on the target information, classifies and associates the features, and associates the targets of consecutive frames to obtain successfully and unsuccessfully associated targets while recording their position information in real time. The targets and position information are input to the track processing module (4) for track analysis and prediction; the position, feature and track information is output to the threat degree processing module (5), and the target information, track information and threat degree prediction value are output to the display terminal (7). The threat degree processing module (5) assigns threat degree weights to the position, feature and track information respectively and predicts the threat degree of the target cluster from these weights.
(1) The power supply powers the whole device so that it operates as an independently powered unit; (6) the wireless communication module provides network connectivity for the entire device.
The (3) target processing module further comprises a 301 target matching and fusion unit and a 302 target association unit;
the 301 target matching and fusion unit registers and fuses, under time registration, the features of the target point cloud information and the image information: the target point cloud information is converted into image information, image features are extracted and matched against the features of the fused infrared and visible light image; common features are fused, the weights of non-common features are calculated, and a non-common feature is retained when its weight exceeds the preset weight.
The 302 target association unit is connected to the target matching unit; the matched and fused features are input to it, the current frame image is associated with the previous t frame images, and the state of the target is judged from the association result.
the (4) track processing module comprises a 401 track generation unit, a 402 track deletion unit, a 403 track prediction unit and a 404 track updating unit;
the 401 track generation unit is used to generate tracks, for example the new track started by a newly appearing target whose association failed;
the 402 track deletion unit handles the tracks of targets whose association failed;
the 403 track prediction unit first judges whether a track is an effective track, an abnormal track or a predicted potential track: a track existing in at most n consecutive frames, formed when a single target appears a number of times exceeding the set threshold, is regarded as abnormal; a track of a single target that appears a number of times exceeding the set threshold and lasts more than n frames is regarded as effective; the effective track is predicted by means of a Kalman filter.
wherein the 404 track updating unit is used to update the tracks.
Fig. 4 shows the threat degree prediction values of the multi-target group threat degree prediction device and method based on the DS evidence theory of the present invention, i.e. the threat degree prediction obtained for multiple targets with the technical solution of the invention. For example, the threat degree prediction values of enemies 312, 400 and 401, which fall within the protected area at the current moment, are large; "enemy attributes" belong to the "target feature information" of the target attributes, while "arrival direction" belongs to the "position information" and "track information" of the current frame; the threat degree values of enemies 310 and 311 at the current moment are smaller than those of 312, 400 and 401.
The invention provides a multi-target group threat degree prediction device and method based on the DS evidence theory. Target information is acquired with a multi-fusion sensor device and target features are extracted with an improved convolutional neural network, avoiding the resolution loss caused by down-sampling. The common and non-common features of the targets are divided by clustering over the multi-target features; threat degree weights are given to the common features, and the non-common feature weights are calculated to judge whether a non-common feature should be given a threat degree weight, avoiding inaccurate threat degree weights for the multi-target cluster caused by weighting unimportant features. Target position information, track information and feature information are established, the probabilities of the evidence are calculated on the basis of the DS evidence theory, and the class that satisfies the synthesis decision rule is output as the threat degree prediction result. The device and method can be widely applied in fields such as target detection and recognition and unmanned control; they can realize target threat assessment during the combat of a modern air defense weapon system and provide decisions for a combat command and control system, the threat degree assessment directly influencing tactical decisions and target/fire allocation.
it will be appreciated by persons skilled in the art that the invention is not limited to details of the foregoing embodiments and that the invention can be embodied in other specific forms without departing from the spirit or scope of the invention. In addition, various modifications and alterations of this invention may be made by those skilled in the art without departing from the spirit and scope of this invention, and such modifications and alterations should also be viewed as being within the scope of this invention. It is therefore intended that the following appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.

Claims (10)

1. A multi-target group threat degree prediction method based on a DS evidence theory is characterized by comprising the following steps:
Step 1, acquiring target information by using multiple sensors, and acquiring information of a target and an area where the target is located;
Step 2, presetting a target category and a feature category, inputting the acquired target information into a convolutional neural network, extracting target features, and classifying the features; the method comprises the steps that a sampling layer is connected after a convolutional layer, and a residual error structure is added after a pooling layer, so that the loss of the resolution of a new image obtained by an original input image through each convolutional layer is made up;
step 3, clustering the characteristics of the targets obtained in the step 2, clustering the characteristics according to the characteristics among multiple targets, dividing the characteristics into shared characteristics and non-shared characteristics, reserving the non-shared characteristics meeting the preset requirements, and assigning values to the multiple targets according to preset characteristic threat weights;
Step 4, based on two continuous frame moments, associating targets appearing in a previous frame and a later frame, judging whether the targets are the same target, and if the targets are the same target, indicating that the target association is successful; marking the same target, continuously associating multiple frames, and acquiring a track segment of the marked target;
obtaining a plurality of targets and track sections corresponding to the targets through the step 4;
Step 5, judging the track of the track section obtained by the target which is successfully associated;
effective track: at least n consecutive frames are present; assigning a threat degree weight to the effective track;
Disappearance trajectory: after a certain frame disappears, the target does not appear after the frame any more, and the corresponding track is a disappearing track;
Abnormal tracks: if a single target appears in a track which frequently disappears and appears for more than a set number of times and only has n frames of continuity at most, judging that the track section is an abnormal track; assigning a threat degree weight to the abnormal track;
Predicting the track, and predicting the track based on the current time, wherein the predicting comprises the following steps: an effective trajectory and an abnormal trajectory; if a single target track is in a set range in a trend at the next moment, the track is called as a potential track, and then a threat degree weight is given to the track;
step 6, establishing a threat degree prediction space, comprising: and predicting the threat degree of the target cluster based on DS evidence theory to obtain a threat degree prediction result.
2. The multi-target group threat degree prediction method based on the DS evidence theory as claimed in claim 1, wherein the step 4 further comprises associating failed targets, and continuous multi-frame association failure is possibly caused by disappearance or appearance of the targets, and disappearance of the targets is defined as that if no association relationship exists between a target track at the time t-1 and each target at the time t, the target is taken as a disappearing target; and if the target does not have an association relation with each target in the target track at the previous moment, taking the target as a new target at the current moment, and marking the new target by starting the current frame to obtain a track section corresponding to the new target.
3. The multi-objective group threat degree prediction method based on DS evidence theory as claimed in claim 1, wherein the step 1 further comprises performing temporal registration and spatial registration,
the time registration specifically comprises the steps of synchronizing the dynamic position data and the track of the target acquired by the laser radar sensor with the motion track state data of the target in the fused video image in time, and respectively processing the data acquired by the laser radar sensor, the infrared sensor and the visible light sensor and the fused infrared and visible light data by adopting multiple threads to achieve time synchronization; the fused video image represents a video image obtained by fusing an infrared video and a visible light video;
The spatial registration specifically includes mapping sensor data information to a uniform coordinate system using a transformation relationship between a local coordinate system and a global coordinate system of each of the plurality of sensors.
4. The multi-target group threat degree prediction method based on the DS evidence theory as claimed in claim 1, wherein a convolutional neural network is used for detecting all targets at the current moment so as to detect target characteristics; the detection target features and all the detection targets share convolution layers of a convolution neural network;
dividing a training data set according to the land, water and air field, wherein each kind comprises a plurality of data, the training set is input into a convolutional neural network for training by taking a scene as a batch, the scene at least comprises one target, and the data is stored every 200 frames in an iterative manner; training is completed by using an Adam optimization algorithm, and the model is updated; taking the updated model as a detection model, and outputting results of the type, the number and the characteristics of the targets;
the convolutional neural network consists of a convolutional layer, an excitation layer, a pooling layer and an upper sampling layer;
The 1st layer is a convolution input layer: the original image is input; 16 convolution kernels of size 5 × 5 are set, with padding 2 and stride 1; the activation function is set to the ReLU function;
the 2nd layer is a convolution operation layer and an average pooling layer: an image is input; the convolution layer has 32 convolution kernels of size 5 × 5, padding 2, stride 1, with a ReLU linear activation function; the down-sampling layer uses a 2 × 2 kernel with stride 2 and performs average-pooling down-sampling;
the 3rd layer is a convolution operation layer, an average pooling layer and an up-sampling layer: the image from the 2nd layer is input and convolved with 64 kernels of size 3 × 3, padding 1, stride 1, with a ReLU linear activation function; the down-sampling layer uses a 2 × 2 kernel with stride 2, performs average-pooling down-sampling and then applies regularization; the up-sampling layer uses a 2 × 2 kernel with stride 2;
the 4th layer is a convolution operation layer, a maximum pooling layer and an up-sampling layer: the image from the 3rd layer is input and convolved with 32 kernels of size 3 × 3, padding 1, with a ReLU activation function; the down-sampling layer performs maximum pooling with a 2 × 2 kernel; the up-sampling layer uses a 2 × 2 kernel with stride 2; an image is output;
the 5th layer is a convolution operation layer and a maximum pooling layer: the image from the 4th layer is input and convolved with 16 kernels of size 3 × 3, padding 1, with a ReLU activation function; the down-sampling layer performs maximum pooling with a 2 × 2 kernel and outputs the down-sampled image;
the 6th layer consists of two fully connected layers: 2048 neurons connect the residual structure and the feature map output of the 5th layer, after which Dropout randomly drops node information to obtain new neurons; the Dropout layer retains only 50% of the outputs;
and the 7th layer is the output layer, in which the target features are output through the classifier.
In order to compensate for the difference between the resolution of the image produced by each convolutional layer and that of the original image: (1) compensation is performed with a superposed residual structure, which linearly adds the output features of the 2nd-layer average-pooling sampling layer to those of the 5th-layer maximum-pooling sampling layer; (2) the outputs of the 3rd-layer and 4th-layer convolution operation layers are connected to an up-sampling layer matching the resolution of the original image;
And detecting the target through the convolutional neural network, extracting the target characteristics and obtaining the classification of the target characteristics.
5. The DS evidence theory-based multi-target group threat degree prediction method as claimed in claim 1, wherein in step 3 the threat degree of the multi-target set is judged; the multiple targets classified into one target set class may share multiple identical features, namely common features, while each single target retains independent features, namely non-common features;
For the common features, a threat degree weight is assigned according to the feature type;
For the non-common feature parts of multiple targets, the importance of a non-common feature is judged from the weight W(A) of its threat degree within the feature region; a weight threshold W0(A) is preset, and if the weight is larger than this preset threshold, the non-common feature is retained and a threat degree weight is assigned to that feature type;
The weight of the threat degree of a non-common feature in the feature region is computed as follows:
wherein N denotes the number of target classes and i denotes the i-th target class, 1 ≤ i ≤ N; M denotes the number of feature classes and j denotes the j-th feature set, 1 ≤ j ≤ M; Rijk denotes features 1 to k in the j-th feature set of the i-th target; W(A) denotes the weight of target A within the target class, feature class and feature set to which it belongs;
wherein Fijkmax(A) denotes the maximum value of the ratio of feature A among features 1 to k, i.e. the maximum found over 1 to k within the i-th target class and j-th feature set, and Fijkmin(A) denotes the minimum value of the ratio of feature A among features 1 to k;
The weight threshold is preset:
When W(A) is larger than W0(A), the non-common feature A is retained; the registered and fused target features then contain the common features together with the non-common features that satisfy this condition.
6. The multi-target group threat degree prediction method based on the DS evidence theory as claimed in claim 1, wherein in step 4 the targets are associated: taking the current frame as reference, it is judged whether a target of the current frame and a target of the previous frame are the same target, and so on for each pair of consecutive frames;
Because a target may be occluded, the target is divided into several small patches and each patch is given a different weight; when computing the apparent similarity, the overall apparent similarity and the similarity of the corresponding patches are computed simultaneously, and whether the two targets are the same target is then judged comprehensively;
Assume that target A and target B are targets in consecutive frames; the two targets are associated to judge whether they are the same target.
The judging method is as follows: (1) the correlation of the image areas where the targets are located;
wherein I1 and I2 denote the image areas corresponding to the two targets A and B respectively, sim(I1, I2) denotes the similarity of the two images, and a dot-product operation is used in its computation;
(2) the target is divided into patches and the similarity between corresponding patches is computed;
Targets A and B are each divided into s patches, and the similarity between corresponding patches is calculated:
wherein Ak denotes the k-th patch in target A and Bk denotes the k-th patch in target B;
wk denotes the weight of the k-th patch within the target, and the Euclidean distance between the colors of targets A and B at location (x, y) of the k-th patch is used;
The correlation of the image areas where the targets are located and the similarity between corresponding patches are combined into a comprehensive matching similarity; if this similarity is greater than a preset target matching similarity threshold, the targets are considered successfully associated across the two frames, and A and B represent the same target; all targets of the two frames are traversed, and if the comprehensive matching similarity never exceeds the preset threshold, the target association fails.
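A minimal sketch of the comprehensive matching described above. The normalized dot-product for the whole-region similarity, the exp(−d) mapping of the Euclidean color distance, and the 0.5/0.5 blend are all assumptions for illustration; the claim fixes none of these specifics.

```python
import math

def dot_sim(a, b):
    """Normalized dot-product similarity of two flattened regions (lists of floats)."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)) + 1e-12
    return num / den

def mean_color(patch):
    """Mean RGB of a patch given as a list of (r, g, b) pixels."""
    n = len(patch)
    return tuple(sum(p[c] for p in patch) / n for c in range(3))

def patch_sim(patches_a, patches_b, weights):
    """Weighted sum over patches of exp(-Euclidean color distance) (assumed mapping)."""
    total = 0.0
    for pa, pb, w in zip(patches_a, patches_b, weights):
        ca, cb = mean_color(pa), mean_color(pb)
        d = math.sqrt(sum((x - y) ** 2 for x, y in zip(ca, cb)))
        total += w * math.exp(-d)
    return total

def associate(region_a, region_b, patches_a, patches_b, weights, thresh=0.6, alpha=0.5):
    """Blend whole-region and patch similarity; True if A and B are judged the same target."""
    score = (alpha * dot_sim(region_a, region_b)
             + (1 - alpha) * patch_sim(patches_a, patches_b, weights))
    return score > thresh, score

# Hypothetical identical targets in consecutive frames.
region = [0.2, 0.5, 0.9, 0.4]                            # flattened appearance features
patches = [[(10, 20, 30), (12, 22, 32)], [(200, 180, 160)]]
same, score = associate(region, region, patches, patches, weights=[0.5, 0.5])
print(same)                                              # identical inputs -> True
```

Weighting the patches separately lets an occluded patch (low weight) drag the score down less than the visible ones, which is the motivation the claim gives for the patch division.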
7. The DS evidence theory-based multi-target group threat degree prediction method as claimed in claim 1, wherein predicting the trajectory based on the current time in step 5 comprises: performing trajectory prediction on the valid tracks and the abnormal tracks at the current time using Kalman filtering; the position of each target is recorded in real time, so that for targets that have not entered the set area at the current time it is predicted whether they will enter the set area at the next time.
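The Kalman prediction step above can be sketched with a constant-velocity model; the motion model, noise values and set-area bounds below are illustrative assumptions, not taken from the claim.

```python
def mat_mul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def mat_add(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def transpose(A):
    return [list(c) for c in zip(*A)]

def kalman_predict(x, P, F, Q):
    """One prediction step: x' = F x, P' = F P F^T + Q."""
    x_new = [sum(f * xi for f, xi in zip(row, x)) for row in F]
    P_new = mat_add(mat_mul(mat_mul(F, P), transpose(F)), Q)
    return x_new, P_new

dt = 1.0
F = [[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]]   # constant velocity
Q = [[1e-3 if i == j else 0.0 for j in range(4)] for i in range(4)]
P = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
x = [10.0, 5.0, 2.0, -1.0]             # hypothetical position (px, py), velocity (vx, vy)
x, P = kalman_predict(x, P, F, Q)
print(x[:2])                            # predicted position -> [12.0, 4.0]

def inside_zone(pos, lo, hi):
    """Does the predicted position fall inside the square set area [lo, hi]^2?"""
    return all(lo <= p <= hi for p in pos)

print(inside_zone(x[:2], 0.0, 20.0))    # -> True: target predicted to be in the area
```

A full tracker would alternate this predict step with a measurement update, but the claim only uses prediction to test entry into the set area at the next time step.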
8. The DS evidence theory-based multi-target group threat degree prediction method as claimed in claim 1, wherein in step 6 the correctness of the predicted trajectory is judged using the DS evidence theory and the optimal trajectory is output, comprising the following steps:
(1) recording target real-time data;
(2) establishing a sample space matrix D = {L, M, H} for threat degree prediction, where L denotes low risk, M denotes medium risk and H denotes high risk;
(3) classifying the real-time target data based on the sample space matrix, clustering the features into target feature information, target position information and target track information; the respective threat degree weights are output and converted into probabilities to obtain the evidences m1, m2 and m3, the basic probability assignment of evidence mi being denoted mi(Ai);
(4) synthesizing m1, m2 and m3 using the DS evidence theory, and outputting the class satisfying the synthesis decision rule as the final result.
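Step (4) can be sketched with Dempster's rule of combination over the frame {L, M, H}; the BPA values assigned to the three evidences below are hypothetical.

```python
def dempster_combine(m1, m2):
    """Dempster's rule for two BPAs over singleton hypotheses {L, M, H}."""
    combined, conflict = {}, 0.0
    for a, pa in m1.items():
        for b, pb in m2.items():
            if a == b:
                combined[a] = combined.get(a, 0.0) + pa * pb
            else:
                conflict += pa * pb          # mass assigned to incompatible pairs
    if conflict >= 1.0:
        raise ValueError("total conflict: the evidences cannot be combined")
    k = 1.0 - conflict                       # normalization factor
    return {h: v / k for h, v in combined.items()}

# Hypothetical BPAs from feature (m1), position (m2) and track (m3) evidence.
m1 = {"L": 0.1, "M": 0.3, "H": 0.6}
m2 = {"L": 0.2, "M": 0.2, "H": 0.6}
m3 = {"L": 0.1, "M": 0.2, "H": 0.7}
m123 = dempster_combine(dempster_combine(m1, m2), m3)
print(max(m123, key=m123.get))               # class satisfying the decision rule -> H
```

Dempster's rule is associative for consistent evidence, so combining pairwise as above yields the same result regardless of order; a real decision rule would typically also require the winning mass to exceed the runner-up by some margin.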
9. The DS evidence theory-based multi-target group threat degree prediction method as claimed in claim 8, wherein the DS evidence theory is applied to synthesize m1, m2 and m3, and the class satisfying the synthesis decision rule is output as the final result, as follows:
The predicted trajectory situation of the target is taken as the basic proposition A, and the target space information, image information and target track information are taken as the basic evidences.
First, the basic probability is calculated:
wherein (1 − ΣBel) denotes the basic probability mass still available for assignment; α denotes the degree of influence of the belief function Bel and the plausibility function Pl on the basic probability assignment:
Δmxyz(n) denotes the difference between the evidences with respect to the n-th characteristic index, together with its minimum and maximum differences over the three levels; S(mi) denotes the evidence support;
Wherein, the evidence support degree is as follows:
The evidence support reflects the degree to which an evidence is supported by the other evidences: the larger the value of S(mi), the smaller the distance between the evidences and the greater the support; D(mi) is the distance between the evidences;
wherein Smax(mi) denotes the maximum evidence support and Smin(mi) denotes the minimum evidence support;
Then the basic probability assignment is calculated:
mi(Ai) = S(mi)″ · P(mi)    (10)
Finally, the synthesis:
The synthesized probability result is output and the threat degree of the target cluster is judged.
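The evidence-support idea above (support grows as the distance between evidences shrinks) can be sketched with an assumed Euclidean distance between BPA vectors; the patent's exact distance D(mi) and support formula are given only as images, so this is an illustration, not the claimed formula.

```python
import math

def bpa_distance(m1, m2, frame=("L", "M", "H")):
    """Assumed Euclidean distance between two BPAs over the same frame."""
    return math.sqrt(sum((m1.get(h, 0.0) - m2.get(h, 0.0)) ** 2 for h in frame))

def supports(evidences):
    """S(mi): larger when an evidence lies closer to the others (1 - mean distance)."""
    out = []
    for i, mi in enumerate(evidences):
        dists = [bpa_distance(mi, mj) for j, mj in enumerate(evidences) if j != i]
        out.append(1.0 - sum(dists) / len(dists))
    return out

m1 = {"L": 0.1, "M": 0.3, "H": 0.6}
m2 = {"L": 0.2, "M": 0.2, "H": 0.6}
m3 = {"L": 0.7, "M": 0.2, "H": 0.1}   # outlier evidence, far from the other two
s = supports([m1, m2, m3])
print(s.index(min(s)))                 # the outlier m3 receives the least support -> 2
```

Discounting each evidence by its support before combination, as in formula (10), limits how much a single conflicting sensor can sway the synthesized threat degree.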
10. A multi-target group threat degree prediction device based on the DS evidence theory, comprising: (1) a power source; (2) a multi-sensor module; (3) a target processing module; (4) a trajectory processing module; (5) a threat degree processing module; (6) a wireless communication module; (7) a display terminal;
The multi-sensor module (2) collects data on the targets and the surrounding environment and inputs them to the target processing module (3), which performs target detection and feature extraction on the target information, classifies and associates the features, and associates the targets of consecutive frames to obtain successfully and unsuccessfully associated targets; the position information of the targets is recorded in real time and, together with the targets, is input to the trajectory processing module (4) for track analysis and prediction; the position, feature and track information is output to the threat degree processing module (5), and the target information, track information and predicted threat degree are output to the display terminal (7). The power source (1) supplies the whole device so that it can operate independently on its own power, and the wireless communication module (6) provides the network connection for the whole device. The threat degree processing module (5) assigns threat degree weights to the position information, feature information and track information respectively, and predicts the threat degree of the target cluster based on these weights.
CN201910830059.6A 2019-09-04 2019-09-04 Multi-target group threat degree prediction device and method based on DS evidence theory Active CN110567324B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910830059.6A CN110567324B (en) 2019-09-04 2019-09-04 Multi-target group threat degree prediction device and method based on DS evidence theory


Publications (2)

Publication Number Publication Date
CN110567324A true CN110567324A (en) 2019-12-13
CN110567324B CN110567324B (en) 2021-10-22

Family

ID=68777483


Country Status (1)

Country Link
CN (1) CN110567324B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5912986A (en) * 1994-06-21 1999-06-15 Eastman Kodak Company Evidential confidence measure and rejection technique for use in a neural network based optical character recognition system
CN1389710A (en) * 2002-07-18 2003-01-08 上海交通大学 Multiple-sensor and multiple-object information fusing method
US20180103302A1 (en) * 2016-10-10 2018-04-12 Utilidata, Inc. Systems and methods for system measurements integrity determination
CN108520526A (en) * 2017-02-23 2018-09-11 南宁市富久信息技术有限公司 A kind of front side dynamic disorder object detecting method
CN110133573A (en) * 2019-04-23 2019-08-16 四川九洲电器集团有限责任公司 A kind of autonomous low latitude unmanned plane system of defense based on the fusion of multielement bar information


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111339871A (en) * 2020-02-18 2020-06-26 中国电子科技集团公司第二十八研究所 Target group distribution pattern studying and judging method and device based on convolutional neural network
CN111339871B (en) * 2020-02-18 2022-09-16 中国电子科技集团公司第二十八研究所 Target group distribution pattern studying and judging method and device based on convolutional neural network
CN111337941A (en) * 2020-03-18 2020-06-26 中国科学技术大学 Dynamic obstacle tracking method based on sparse laser radar data
CN111337941B (en) * 2020-03-18 2022-03-04 中国科学技术大学 Dynamic obstacle tracking method based on sparse laser radar data
CN112418071A (en) * 2020-11-20 2021-02-26 浙江科技学院 Method for identifying threat degree of flyer target to protected low-altitude unmanned aerial vehicle based on cluster analysis
CN112418071B (en) * 2020-11-20 2021-08-24 浙江科技学院 Method for identifying threat degree of flyer target to protected low-altitude unmanned aerial vehicle based on cluster analysis
CN112903008A (en) * 2021-01-15 2021-06-04 泉州师范学院 Mountain landslide early warning method based on multi-sensing data fusion technology
CN112903008B (en) * 2021-01-15 2023-01-10 泉州师范学院 Mountain landslide early warning method based on multi-sensing data fusion technology

Also Published As

Publication number Publication date
CN110567324B (en) 2021-10-22


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant