CN110969637A - Multi-threat target reconstruction and situation awareness method based on a generative adversarial network - Google Patents

Multi-threat target reconstruction and situation awareness method based on a generative adversarial network

Info

Publication number
CN110969637A
CN110969637A (application CN201911210172.0A; granted publication CN110969637B)
Authority
CN
China
Prior art keywords
target
threat
image
reconstruction
layers
Prior art date
Legal status
Granted
Application number
CN201911210172.0A
Other languages
Chinese (zh)
Other versions
CN110969637B (en)
Inventor
夏春秋 (Xia Chunqiu)
Current Assignee
Shenzhen Vision Technology Co Ltd
Original Assignee
Shenzhen Vision Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Vision Technology Co Ltd
Priority claimed from application CN201911210172.0A
Publication of CN110969637A
Application granted
Publication of CN110969637B
Legal status: Active
Anticipated expiration

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 — Image analysis
    • G06T 7/13 — Segmentation; Edge detection
    • G06T 7/136 — Segmentation; Edge detection involving thresholding
    • G06T 17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 2207/10048 — Image acquisition modality: Infrared image
    • G06T 2207/20081 — Special algorithmic details: Training; Learning
    • G06T 2207/20084 — Special algorithmic details: Artificial neural networks [ANN]
    • G06T 2207/20221 — Image combination: Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Aiming, Guidance, Guns With A Light Source, Armor, Camouflage, And Targets (AREA)

Abstract

The invention discloses a multi-threat target reconstruction and situation awareness method based on a generative adversarial network. The method collects the navigation states of multiple threat targets in a supervised area, marks timestamps, and forms time-synchronized raw scene point cloud data, infrared images and visible light images; it fuses the infrared and visible light images and then fuses the image with the point cloud data to reconstruct the three-dimensional scene and the targets; generative adversarial networks output simulated trajectories and reconstructed targets, yielding the simulated trajectory segments corresponding to each threat target; our side's target-monitoring search area is updated according to threat-degree-aware variable weights, and each target's threat degree is output by an annealing algorithm to realize fire allocation. The method obtains the targets' fused spectral information and spatial attitude and position, improves robustness, avoids misidentification caused by occlusion, continuously and accurately tracks, monitors and jams the targets, allocates fire to the threatening targets within our side's monitoring range according to distance-threat-degree variable weights, searches for the optimal solution, and improves large-scale intelligent cluster grouping.

Description

Multi-threat target reconstruction and situation awareness method based on a generative adversarial network
Technical Field
The invention relates to the fields of artificial intelligence, multi-sensor measurement and multi-threat-target situation awareness, and in particular to a multi-threat target reconstruction and situation awareness method based on a generative adversarial network.
Background
Battlefield target threat-degree evaluation provides a highly reliable basis for weapon-system fire allocation, effectively shortens the time a commander needs for situation awareness and strategy formulation, and improves combat efficiency and quality. Weapon-system fire allocation refers to the process of assigning a certain number of one side's equipment of given types to each piece of the opposing side's equipment, on the basis of comprehensively considering factors such as the combat missions being executed, the situation, and the performance of both sides' combat equipment.
At present, the mainstream algorithms include the ant colony algorithm, the artificial bee colony algorithm and the like; they suffer from scarce early-stage pheromone, easily fall into local optima, and converge slowly on large search spaces, so they cannot be applied directly to air-combat fire allocation. Multi-attribute decision methods quantify several qualitative or quantitative target attributes that influence the threat degree, and then compute a comprehensive evaluation value of the target by combining a weight vector with some combination rule. However, target threat-degree evaluation and fire allocation form a dynamic, multivariable, multi-constraint combinatorial optimization problem with adversarial, active and uncertain characteristics, which is difficult to solve with traditional methods.
Disclosure of Invention
To avoid the defects of the prior art, the invention provides a multi-threat target reconstruction and situation awareness method based on a generative adversarial network, which collects the navigation states of the multiple threat targets in the supervised area, marks timestamps, and forms time-synchronized raw scene point cloud data, infrared images and visible light images; fuses the infrared and visible light images, and fuses the image with the point cloud data to realize three-dimensional scene and target reconstruction; uses generative adversarial networks to output simulated trajectories and reconstructed targets, obtaining the simulated trajectory segments corresponding to the multiple threat targets; and updates our side's target-monitoring search area according to the threat-degree-aware variable weights, outputting each target's threat degree with an annealing algorithm to realize fire allocation.
The method obtains the targets' fused spectral information and spatial attitude and position information, improves robustness, avoids misidentification caused by occlusion, continuously and accurately tracks, monitors and jams the targets, allocates fire to the threatening targets within our side's monitoring range according to the distance-threat-degree variable weights, searches for the optimal solution, and improves large-scale intelligent cluster grouping.
To achieve the above object, the invention provides a multi-threat target reconstruction and situation awareness method based on a generative adversarial network, which mainly comprises:
collecting the navigation states of multiple threat targets in the supervised area, marking timestamps, and forming time-synchronized raw scene point cloud data, infrared images and visible light images;
fusing the infrared image and the visible light image, and fusing the image with the point cloud data to realize three-dimensional scene and target reconstruction;
constructing generative adversarial networks, outputting simulated trajectories and simulated reconstructed targets, and obtaining the simulated trajectory segments corresponding to the multiple threat targets;
updating our side's target-monitoring search area according to the threat-degree-aware variable weights, outputting each target's threat degree through an annealing algorithm, and performing our side's fire allocation to the targets.
Fusing the infrared image and the visible light image specifically comprises the following steps:
defogging the extracted infrared and visible light images, filtering small noise points by image binarization, and extracting the contour areas of multiple edges with an adaptive edge algorithm to obtain the maximum contour of each target, the targets comprising enemy targets and the surrounding scene;
detecting feature points of the infrared and visible light images and the targets' contour points in both images, and fitting when the contour area is larger than a preset threshold to obtain the preprocessed infrared and visible light images;
performing K-level NSCT decomposition on the preprocessed infrared and visible images, and constructing the average gradient and a Q-factor matrix to obtain the low-frequency and high-frequency sub-band coefficients;
processing the high-frequency image information: applying PCNN processing to the corresponding high-frequency sub-band coefficients, taking the sub-band coefficients as the PCNN's external input excitation, and computing separately for the highest decomposition scale K and for the remaining K−1 scales;
processing the low-frequency image information: giving the low-frequency sub-band coefficients of high-energy image regions a higher pixel weight during fusion, then normalizing the variance, comparing it with a preset variance threshold, and fusing according to different rules;
performing the inverse NSCT transform and reconstructing the fused low-frequency coefficient and each high-frequency sub-band coefficient to obtain the fused image; taking the central coordinates of the two images as the target positions, then mapping the target center of the visible light image into the infrared image to obtain the visible-infrared image, thereby obtaining the fusion of the target area's azimuth and angle.
Realizing the three-dimensional scene and target reconstruction specifically comprises the following steps:
creating a three-dimensional voxel grid for the three-dimensional points of the input point cloud data, scanning the coordinates of all points, finding the maxima X_max, Y_max, Z_max and minima X_min, Y_min, Z_min along the three coordinate axes, and determining the side length L of the large cubic grid; if L is larger than the preset side length L_0, dividing it into several voxel grids along the X, Y and Z directions;
presetting a point count N_O, comparing the number of points n in each voxel grid with the preset point-count threshold in turn, and deleting any voxel grid whose point count n is smaller than the preset value;
comparing again the side lengths L_i of the small cubic grids with the preset side length L_0: if L_i > L_0, continuing to divide into smaller cubes; if L_i ≤ L_0, traversing the points in the voxel grid and approximately replacing the other points in the grid by its center of gravity, computed as:
d_i = ‖(x_i, y_i, z_i) − (x_c, y_c, z_c)‖,  d_min = min{d_i : 0 ≤ i ≤ n}
where d_i denotes the distance from point (x_i, y_i, z_i) to the region center (x_c, y_c, z_c) of its voxel grid, and the point attaining the minimum distance is taken as the center of gravity;
d_j = ‖(x_j, y_j, z_j) − (x_0, y_0, z_0)‖,  d_max = max{d_j : 0 ≤ j ≤ n−1}
where d_j denotes the distance from point (x_j, y_j, z_j) to the center of gravity (x_0, y_0, z_0) of its voxel grid, d_max is the maximum such distance, and the corresponding point is the farthest point found;
retaining the center-of-gravity point (x_0, y_0, z_0) within each voxel grid, removing erroneous point pairs with RANSAC, and processing all voxel grids to obtain the filtered point cloud data; setting a threshold τ: if τ ≤ d_max, points are retained according to d_j, otherwise only the center-of-gravity point is retained; the center of gravity and the points within the maximum distance are the retained points.
Further, the average curvature of the point cloud is calculated from the retained points, and the voxel with the minimum average curvature is taken as the seed voxel for region growing to form supervoxels; accurate extraction of the target contour feature points and localization of the feature regions are realized by estimating the average curvature of the supervoxels' external curved geometric features.
Constructing the generative adversarial networks specifically comprises: two generator networks are used to obtain several pieces of simulated trajectory data corresponding to the simulated reconstructed targets. The first generator network takes the point cloud supervoxels and the visible-infrared image as input; the adversarial network is trained until the generator produces simulated target data with the same distribution as the real target data, and the simulated reconstructed target is output, with the simulated reconstructed surrounding scene output likewise. The second generator network takes the targets' real trajectory data as input; the adversarial network is trained until the generator produces simulated trajectory data with the same distribution as the real trajectory data, after which the generator is used to produce several groups of simulated trajectory data.
Further, the first generator network performs the point cloud-image fusion through 3 convolutional layers, 4 dilated convolutional layers, 3 deconvolutional layers and a final convolutional layer, and outputs the reconstructed target and three-dimensional scene after registration training;
the 3 convolutional layers have kernel sizes 7×7, 5×5 and 3×3, stride 2, and 64, 128 and 256 feature maps respectively; the 4 dilated convolutions all have 3×3 kernels, dilation factors 2, 4, 8 and 16, stride 1, and 256 feature maps each; the 3 deconvolutional layers have 3×3 kernels, stride 2, and 128, 64 and 32 feature maps respectively, with padding applied through the deconvolutional layers; the final convolutional layer has a 3×3 kernel, stride 1, and 3 feature maps; a BN layer and a LeakyReLU layer are added after each convolutional layer's output, and the final convolutional layer's output is activated with a Tanh function.
Further, the second generator network outputs the virtual target model through 3 convolutional layers, 6 residual layers, 3 deconvolutional layers and a final convolutional layer;
the 3 convolutional layers have kernel sizes 7×7, 5×5 and 3×3 and 64, 128 and 256 feature maps respectively; each of the 6 residual layers comprises two convolutional layers with a residual connection, 3×3 kernels and 256 feature maps; the 3 deconvolutional layers all have 3×3 kernels and 256, 128 and 64 feature maps respectively; the final convolutional layer has a 3×3 kernel, stride 2, and 3 feature maps; each convolutional layer of the second generator network is likewise followed by a BN layer and a LeakyReLU activation layer, and the last layer is activated with a Tanh function.
Updating our side's target-monitoring search area according to the threat-degree-aware variable weights specifically comprises: since changes in the trajectory state of enemy targets within our monitored area cause different degrees of threat, a threat assessment model is established based on target type, strike capability, defensive capability, information reliability and support; the change in the threat-degree weight values is used to dynamically set our side's target-monitoring search range, providing a decision basis for target selection and determining the center of gravity of the engagement.
Further, the threat-degree-aware variable weight values are calculated as follows:
our side is set to have N targets in the monitored area, with n ∈ {1, 2, …, N} denoting the n-th friendly target; there are M enemy targets, with m ∈ {1, 2, …, M} denoting the m-th threat target; different threat targets have different threat-degree indicators K, with k ∈ {1, 2, …, K} denoting the k-th threat indicator;
the state weight values of the threat-degree indicators are constructed from the threat targets' positions and threat-degree evaluation:
(formula rendered only as an image in the source)
where w_k denotes the state weight of threat indicator k, w_k(X) the state weight value of threat indicator k for our target and the threat target at their corresponding positions, w_mk the weight value of indicator k of target m, and X_mk the k-th threat indicator of threat target m within our target n's search range;
X̄_m = (1/K) Σ_{k=1}^{K} X_mk
denotes the average of the K threat indicators of the m-th threat target; σ is the variable-weight factor with value range [−0.5, 0.5]; δ_m denotes the threat weight corresponding to m, whose value is related to target m's trajectory;
X denotes the positional relation between our targets and the threat targets, represented as an N×M matrix (shown only as an image in the source);
g_k(X) denotes the state weight change value of threat indicator k between our target and the threat target:
(formula rendered only as an image in the source)
the search step is set according to the threat degree:
X_nm = X_n(m−1) + (rd − 0.5rd) · H_step   (6)
(the formula defining H_step is rendered only as an image in the source)
where rd is the range diameter of our target's random monitoring area, 0.5rd its radius, H_step the adaptive step-size adjustment factor, w_min the minimum threat value of a threat target within our target's monitoring area, and w_max the maximum such threat value; the final expression (also shown only as an image) denotes the current optimal solution, i.e. the threat target m_0 whose threat degree is greatest in the monitored area.
Further, taking the monitoring range of any friendly target as the unit, the annealing algorithm computes the corresponding position function of the threat targets from the d threat targets found in the monitored area, with the trajectories of the threat targets entering the range as the initial population;
the threat values of the threat targets in two adjacent friendly target-monitoring areas are selected, the two fitness values f(m) are computed, crossover is performed with crossover probability pc = 0.7 and mutation with mutation probability pm = 0.02 to obtain a new population, and ΔE = f(m) − f(m0) is obtained; the threat value of a threat target is computed from its threat weight;
an acceptance decision is then executed: ΔE is evaluated at the corresponding position of a threat target m newly entering the monitoring area; if ΔE < 0, the new fire allocation for m is accepted; if ΔE ≥ 0, the new model m is accepted with probability P = exp(−ΔE/T_k), where T_k is the current temperature;
when the model is accepted, m0 = m is set; ΔE takes the threat target with the greatest threat as the objective function; whether the convergence condition is met is then judged, and if so, the optimal solution is output, giving our side the optimal fire allocation according to the identified threat targets' positions, trajectories and threat-degree values; T decreases geometrically, and the procedure ends when T < 0.0001.
The method obtains the targets' fused spectral information and spatial attitude and position information, realizes accurate three-dimensional positioning, and obtains real-time three-dimensional point cloud imaging; it modifies the classic generative adversarial network structure by constructing two generator networks that output simulated targets and simulated trajectories, improving robustness and avoiding misidentification caused by occlusion, so that targets can be continuously and accurately tracked, monitored and jammed; taking the positional priority order from our side as the basic constraint, it designs threat-degree variable weights, computes multi-target threat values from them, realizes fire allocation to the threat targets, searches for the optimal solution, and improves large-scale intelligent cluster grouping.
Drawings
FIG. 1 is the flowchart of the multi-threat target reconstruction and situation awareness method based on a generative adversarial network according to the present invention.
Fig. 2 is the flowchart of visible light and infrared video image processing in the multi-threat target reconstruction and situation awareness method based on a generative adversarial network according to the present invention.
Fig. 3 is the flowchart of acquiring the simulated multi-threat targets and their corresponding simulated trajectory segments in the multi-threat target reconstruction and situation awareness method based on a generative adversarial network according to the present invention.
Fig. 4 is a fire-allocation display effect diagram of the multi-threat target reconstruction and situation awareness method based on a generative adversarial network according to the present invention.
Detailed Description
It should be noted that the embodiments in the present application, and the features within them, can be combined with each other when no conflict arises. The invention is described in further detail below with reference to the drawings and specific embodiments.
Fig. 1 is the flowchart of the multi-threat target reconstruction and situation awareness method based on a generative adversarial network according to the present invention, which mainly comprises:
step 1, collecting the navigation states of multiple threat targets in the supervised area, marking timestamps, and forming time-synchronized raw scene point cloud data, infrared images and visible light images;
step 2, fusing the infrared image and the visible light image, and fusing the image with the point cloud data to realize three-dimensional scene and target reconstruction;
step 3, constructing generative adversarial networks, outputting simulated trajectories and simulated reconstructed targets, and obtaining the simulated trajectory segments corresponding to the multiple threat targets;
step 4, updating our side's target-monitoring search area according to the threat-degree-aware variable weights, outputting each threat target's threat degree through an annealing algorithm, and allocating our side's fire to the targets.
The navigation states of the multiple threat targets in the supervised area are collected: the raw scene point cloud data are obtained with a lidar sensor, the infrared images with an infrared sensor, and the visible light images with a visible light camera; the coordinate systems are unified in advance to calibrate the targets' coordinates, and the navigation states of the multiple threat targets are timestamped for multi-target time synchronization.
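As an illustration of the time-synchronization step, the following minimal Python sketch aligns lidar, infrared and visible-light frame timestamps by nearest-neighbor matching; the function names and the 10 ms tolerance are illustrative assumptions, not values from the patent:

```python
# Hypothetical helper: align three sensor streams on a common timeline by
# nearest-timestamp matching; the 0.01 s tolerance is an illustrative assumption.
from bisect import bisect_left

def nearest(stamps, t):
    """Return the timestamp in the sorted list `stamps` closest to t."""
    i = bisect_left(stamps, t)
    candidates = stamps[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda s: abs(s - t))

def synchronize(lidar_ts, ir_ts, vis_ts, tol=0.01):
    """Yield (lidar, ir, vis) timestamp triples that agree within `tol` seconds."""
    for t in lidar_ts:
        t_ir, t_vis = nearest(ir_ts, t), nearest(vis_ts, t)
        if abs(t_ir - t) <= tol and abs(t_vis - t) <= tol:
            yield t, t_ir, t_vis
```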
Fusing the infrared image and the visible light image: since infrared and visible light each have their own limitations and advantages in different environments, the infrared and visible light images of the same time frame are fused. Fig. 2, the flowchart of visible light and infrared video image processing in the multi-threat target situation awareness method based on a generative adversarial network, mainly shows this fusion processing; step 2 specifically comprises:
firstly, defogging the extracted infrared and visible light images, filtering small noise points by image binarization, and extracting the contour areas of multiple edges with an adaptive edge algorithm to obtain the maximum contour of each target, the targets comprising enemy targets and the surrounding scene;
detecting feature points of the infrared and visible light images and the targets' contour points in both images, and fitting when the contour area is larger than a preset threshold to obtain the preprocessed infrared and visible light images;
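As an illustration of this preprocessing stage, here is a minimal OpenCV sketch; the defogging step is application-specific and omitted, and the area floor and the median-derived Canny thresholds are illustrative assumptions rather than values from the patent:

```python
# Hedged sketch of the binarization + contour-extraction stage described above.
import cv2
import numpy as np

def extract_max_contour(gray: np.ndarray, min_area: float = 500.0):
    """Binarize, detect edges, and return the largest contour above min_area."""
    # Otsu binarization filters small noise points
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Adaptive edge extraction: Canny thresholds derived from the median intensity
    med = float(np.median(gray))
    edges = cv2.Canny(binary, 0.66 * med, 1.33 * med)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = [c for c in contours if cv2.contourArea(c) >= min_area]
    return max(contours, key=cv2.contourArea) if contours else None
```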
performing K-level NSCT decomposition on the preprocessed infrared and visible images, and constructing the average gradient and a Q-factor matrix to obtain the low-frequency and high-frequency sub-band coefficients;
processing the high-frequency image information: PCNN processing is applied to the corresponding high-frequency sub-band coefficients, the sub-band coefficients are taken as the PCNN's external input excitation, and the highest decomposition scale K and the remaining K−1 scales are computed separately;
the specific steps are as follows:
PCNN processing is applied to the corresponding high-frequency sub-band coefficients, the sub-band coefficients are taken as the PCNN's external input excitation, and the PCNN link-strength β values are computed adaptively:
(formula rendered only as an image in the source)
involving the energy of the M×N region of the high-frequency coefficient matrix centered at (x, y) and the decomposition coefficient of the image's K-level NSCT transform at (x, y);
to highlight the target detail information of the source images in the fused image, the fusion coefficients at the highest decomposition scale K are determined by taking the coefficient with the larger absolute value, so the corresponding fusion rule can be expressed as:
F(x, y) = I_1(x, y) if |I_1(x, y)| ≥ |I_2(x, y)|, otherwise F(x, y) = I_2(x, y)
where I_1 and I_2 are the high-frequency sub-band decomposition coefficients of image A and image B;
the remaining K−1 scales other than the highest scale K serve as the PCNN's neuron input; the number of firings of each pixel in each sub-image of infrared image A and visible light image B is computed, and the high-frequency fusion coefficient is determined from the firing counts according to the following rule:
(formula rendered only as an image in the source)
where T_1 and T_2 are the firing counts of I_1 and I_2 output by the PCNN pulses, W_1 and W_2 are the weights of infrared image A and visible light image B in the high-frequency sub-band coefficients, and Thresh is a threshold;
processing the low-frequency image information: the low-frequency sub-band coefficients of high-energy image regions are given a higher pixel weight during fusion; the variance is then normalized and compared with a preset variance threshold, and fusion follows different rules accordingly;
the specific steps are as follows:
the pixel saliency is computed first, expressed as:
(formula rendered only as an image in the source)
where I_S(i, j) denotes the image pixel value, U_S the image pixel mean, S = ir, vis denotes the infrared and visible pictures respectively, and U_R the regional mean;
then, pixels of high-energy image regions are given a higher weight during fusion; w_ir and w_vis denote the weights of the infrared and visible images respectively, F_L(x, y) the fused low-frequency component, E_vis the energy of the visible region, and E_ir the energy of the infrared region;
when a pixel lies in the target area, the pixel energy of the infrared image is relatively concentrated, so the infrared region energy is large and the corresponding visible region energy relatively small; the visible image is therefore given a small weight, set below 0.3:
(weight formula rendered only as an image in the source)
F_L(x, y) = w_vis × vis_L(x, y) + (1 − w_vis) × ir_L(x, y)   (8)
when the visible image's pixel energy is relatively concentrated and its region energy large while the infrared region energy is relatively small, the infrared image is given a small weight, likewise below 0.3:
(weight formula rendered only as an image in the source)
F_L(x, y) = w_ir × ir_L(x, y) + (1 − w_ir) × vis_L(x, y)   (10)
finally, the local variance is normalized:
(formula rendered only as an image in the source)
where Q_vis denotes the variance of the visible image region and Q_ir the variance of the infrared image region;
when the normalized local variance difference is large, i.e. G(i, j) > T, with T the preset variance threshold, the two image regions differ greatly, and the coefficient of the region with the larger variance is selected:
(formula rendered only as an image in the source)
when the normalized local variance difference is small, i.e. G(i, j) < T:
(formula rendered only as an image in the source)
where C_F(x, y) denotes the fused low-frequency coefficient; PCNN processing is then applied to the low-frequency sub-band coefficients, with 4 times the coefficient value as the PCNN's external input, and T takes a value between 0.3 and 0.4.
The inverse NSCT transform is then performed, and the fused low-frequency coefficient and each high-frequency sub-band coefficient are reconstructed to obtain the fused image; the central coordinates of the two images are taken as the target positions, the target center of the visible light image is mapped into the infrared image to obtain the visible-infrared image, and the fusion of the target area's azimuth and angle is thereby obtained.
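The fusion rules just described can be sketched as follows; this assumes the NSCT sub-band coefficients and PCNN firing counts are computed by other modules, and the low-frequency weight of 0.25 simply instantiates the "below 0.3" rule from the text:

```python
# Hedged sketch of the high/low-frequency fusion rules; inputs are NumPy arrays.
import numpy as np

def fuse_highest_scale(c_a: np.ndarray, c_b: np.ndarray) -> np.ndarray:
    """Highest scale K: keep the coefficient with the larger absolute value."""
    return np.where(np.abs(c_a) >= np.abs(c_b), c_a, c_b)

def fuse_by_firing_counts(c_a, c_b, t_a, t_b):
    """Remaining K-1 scales: keep the coefficient whose PCNN fired more often."""
    return np.where(t_a >= t_b, c_a, c_b)

def fuse_low_frequency(vis_l, ir_l, e_vis, e_ir, w_small=0.25):
    """Low frequency: give the lower-energy modality a small weight (< 0.3)."""
    w_vis = np.where(e_ir >= e_vis, w_small, 1.0 - w_small)
    return w_vis * vis_l + (1.0 - w_vis) * ir_l
```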
Realizing the three-dimensional scene and target reconstruction specifically comprises the following steps:
creating a three-dimensional voxel grid for the three-dimensional points of the input point cloud data, scanning the coordinates of all points, finding the maxima X_max, Y_max, Z_max and minima X_min, Y_min, Z_min along the three coordinate axes, and determining the side length L of the large cubic grid; if L is larger than the preset side length L_0, dividing it into several voxel grids along the X, Y and Z directions;
presetting a point count N_O, comparing the number of points n in each voxel grid with the preset point-count threshold in turn, and deleting any voxel grid whose point count n is smaller than the preset value;
comparing again the side lengths L_i of the small cubic grids with the preset side length L_0: if L_i > L_0, continuing to divide into smaller cubes; if L_i ≤ L_0, traversing the points in the voxel grid and approximately replacing the other points in the grid by its center of gravity, computed as:
d_i = ‖(x_i, y_i, z_i) − (x_c, y_c, z_c)‖,  d_min = min{d_i : 0 ≤ i ≤ n}
where d_i denotes the distance from point (x_i, y_i, z_i) to the region center (x_c, y_c, z_c) of its voxel grid, and the point attaining the minimum distance is taken as the center of gravity;
d_j = ‖(x_j, y_j, z_j) − (x_0, y_0, z_0)‖,  d_max = max{d_j : 0 ≤ j ≤ n−1}
where d_j denotes the distance from point (x_j, y_j, z_j) to the center of gravity (x_0, y_0, z_0) of its voxel grid, d_max is the maximum such distance, and the corresponding point is the farthest point found;
retaining the center-of-gravity point (x_0, y_0, z_0) within each voxel grid, removing erroneous point pairs with RANSAC, and processing all voxel grids to obtain the filtered point cloud data; setting a threshold τ: if τ ≤ d_max, points are retained according to d_j, otherwise only the center-of-gravity point is retained; the center of gravity and the points within the maximum distance are the retained points.
From the retained point cloud points, the average curvature of the point cloud is computed, and the voxel with the minimum average curvature is taken as the seed voxel for region growing to form supervoxels; accurate extraction of the target contour feature points and localization of the feature regions are realized by estimating the average curvature of the supervoxels' external curved geometric features.
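A NumPy sketch of the voxel-grid filtering with gravity-center replacement described above follows; the parameter names l0 and n_min stand in for L_0 and N_O, and the RANSAC outlier-pair removal is omitted:

```python
# Minimal sketch of the voxel downsampling step (no subdivision or RANSAC).
import numpy as np

def voxel_centroid_filter(points: np.ndarray, l0: float, n_min: int) -> np.ndarray:
    """Replace each voxel of side l0 by its gravity-center point (the point
    closest to the voxel's region center), dropping voxels with < n_min points."""
    mins = points.min(axis=0)
    idx = np.floor((points - mins) / l0).astype(np.int64)   # voxel index per point
    _, inverse, counts = np.unique(idx, axis=0, return_inverse=True,
                                   return_counts=True)
    inverse = inverse.ravel()
    kept = []
    for v in np.nonzero(counts >= n_min)[0]:                # discard sparse voxels
        pts = points[inverse == v]
        center = pts.mean(axis=0)                           # region center
        d = np.linalg.norm(pts - center, axis=1)
        kept.append(pts[np.argmin(d)])                      # gravity-center point
    return np.asarray(kept)
```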
Fig. 3, the flowchart of acquiring the simulated multi-threat targets and their corresponding simulated trajectory segments in the multi-threat target reconstruction and situation awareness method based on a generative adversarial network, mainly shows the two generator networks separately processing the point cloud data, the fused infrared-visible image and the trajectories, and outputting the simulated multi-threat targets and their corresponding simulated trajectory segments through the discriminators.
The first generator network of the generative adversarial networks takes the point cloud supervoxels and the visible-infrared image as input, performs the point cloud-image fusion and registration training, and the adversarial network is trained until the generator produces simulated target data with the same distribution as the real target data; the simulated reconstructed target is output, and the simulated reconstructed surrounding scene likewise. The 3 convolutional layers have kernel sizes 7×7, 5×5 and 3×3, stride 2, and 64, 128 and 256 feature maps respectively; the 4 dilated convolutions all have 3×3 kernels, dilation factors 2, 4, 8 and 16, stride 1, and 256 feature maps each; the 3 deconvolutional layers have 3×3 kernels, stride 2, and 128, 64 and 32 feature maps respectively, with padding applied through the deconvolutional layers; the final convolutional layer has a 3×3 kernel, stride 1, and 3 feature maps; a BN layer and a LeakyReLU layer are added after each convolutional layer's output, and the final convolutional layer's output is activated with a Tanh function.
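A hedged PyTorch sketch of this first-generator layer stack follows; the input channel count and the padding/output-padding choices are assumptions needed to make the strides line up, not values specified in the patent:

```python
# Sketch of the first generator: 3 convs, 4 dilated convs, 3 deconvs, final conv;
# BN + LeakyReLU after each layer, Tanh on the output, per the description above.
import torch.nn as nn

def block(conv: nn.Module) -> nn.Sequential:
    return nn.Sequential(conv, nn.BatchNorm2d(conv.out_channels), nn.LeakyReLU(0.2))

class FirstGenerator(nn.Sequential):
    def __init__(self, in_ch: int = 4):  # assumed: fused image + depth channel
        layers = [
            block(nn.Conv2d(in_ch, 64, 7, stride=2, padding=3)),
            block(nn.Conv2d(64, 128, 5, stride=2, padding=2)),
            block(nn.Conv2d(128, 256, 3, stride=2, padding=1)),
        ]
        for d in (2, 4, 8, 16):          # dilated convs, stride 1, 256 maps each
            layers.append(block(nn.Conv2d(256, 256, 3, padding=d, dilation=d)))
        for c_in, c_out in ((256, 128), (128, 64), (64, 32)):  # deconvs, stride 2
            layers.append(block(nn.ConvTranspose2d(c_in, c_out, 3, stride=2,
                                                   padding=1, output_padding=1)))
        layers += [nn.Conv2d(32, 3, 3, stride=1, padding=1), nn.Tanh()]
        super().__init__(*layers)
```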
The second generator network takes the targets' real trajectory data as input; the adversarial network is trained until the generator produces simulated trajectory data with the same distribution as the real trajectory data, after which the generator is used to produce several groups of simulated trajectory data through 3 convolutional layers, 6 residual layers, 3 deconvolutional layers and a final convolutional layer.
The 3 convolutional layers have kernel sizes 7×7, 5×5 and 3×3 and 64, 128 and 256 feature maps respectively; each of the 6 residual layers comprises two convolutional layers with a residual connection, 3×3 kernels and 256 feature maps; the 3 deconvolutional layers all have 3×3 kernels and 256, 128 and 64 feature maps respectively; the final convolutional layer has a 3×3 kernel, stride 2, and 3 feature maps; each convolutional layer of the second generator network is likewise followed by a BN layer and a LeakyReLU activation layer, and the last layer is activated with a Tanh function.
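Under the same assumptions, the second generator's residual layers can be sketched as:

```python
# Companion sketch: one residual layer of the second generator (two 3x3 convs
# with a residual connection, 256 feature maps each), per the description above.
import torch.nn as nn

class ResidualLayer(nn.Module):
    def __init__(self, ch: int = 256):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch))

    def forward(self, x):
        return x + self.body(x)  # residual connection

# The full stack chains 3 downsampling convs, 6 ResidualLayer blocks, 3 deconvs
# and a final 3x3 conv with Tanh, mirroring the FirstGenerator sketch above.
```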
Updating our side's target-monitoring search area according to the threat-degree-aware variable weights: since changes in the trajectory state of enemy targets within our monitored area cause different degrees of threat, a threat assessment model is established with the threat targets' trajectory positions as the basic factor, based on target type, strike capability, defensive capability, information reliability and support; the change in the threat-degree weight values is used to dynamically set our side's target-monitoring search range, providing a decision basis for target selection and determining the center of gravity of the engagement.
Further, in the calculation of the threat-degree weight change values, our side is set to have N targets in the monitored area, with n ∈ {1, 2, …, N} denoting the n-th friendly target; there are M enemy targets, with m ∈ {1, 2, …, M} denoting the m-th threat target; different threat targets have different threat-degree indicators K, with k ∈ {1, 2, …, K} denoting the k-th threat indicator;
the state weight values of the threat-degree indicators are constructed from the threat targets' positions and threat-degree evaluation:
(formula rendered only as an image in the source)
where w_k denotes the state weight of threat indicator k, w_k(X) the state weight value of threat indicator k for our target and the threat target at their corresponding positions, w_mk the weight value of indicator k of target m, and X_mk the k-th threat indicator of threat target m within our target n's search range;
X̄_m = (1/K) Σ_{k=1}^{K} X_mk
denotes the average of the K threat indicators of the m-th threat target; σ is the variable-weight factor with value range [−0.5, 0.5]; δ_m denotes the threat weight corresponding to m, whose value is related to target m's trajectory;
X denotes the positional relation between our targets and the threat targets, represented as an N×M matrix (shown only as an image in the source);
g_k(X) denotes the state weight change value of threat indicator k between our target and the threat target:
(formula rendered only as an image in the source)
the search step is set according to the threat degree:
X_nm = X_n(m−1) + (rd − 0.5rd) · H_step   (6)
(the formula defining H_step is rendered only as an image in the source)
where rd is the range diameter of our target's random monitoring area, 0.5rd its radius, H_step the adaptive step-size adjustment factor, w_min the minimum threat value of a threat target within our target's monitoring area, and w_max the maximum such threat value; the final expression (also shown only as an image) denotes the current optimal solution, i.e. the threat target m_0 whose threat degree is greatest in the monitored area.
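Because the exact forms of w_k(X), g_k(X) and H_step appear only as images in the source, the following sketch instantiates equation (6) with an assumed normalized-linear H_step; it is an illustration of the variable-weight step update, not the patented formula:

```python
# Illustrative sketch of the search-step update; the H_step form is an assumption.
def adaptive_step(w_m: float, w_min: float, w_max: float) -> float:
    """Assumed step factor: larger threat value -> larger step toward the target."""
    return (w_m - w_min) / max(w_max - w_min, 1e-9)

def update_position(x_prev: float, rd: float, w_m: float, w_min: float,
                    w_max: float) -> float:
    """X_nm = X_n(m-1) + (rd - 0.5*rd) * H_step, with H_step as assumed above."""
    h_step = adaptive_step(w_m, w_min, w_max)
    return x_prev + (rd - 0.5 * rd) * h_step
```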
FIG. 4 shows the fire-allocation display effect of the multi-threat target reconstruction and situation awareness method based on a generative adversarial network: in the upper picture, the boxed parts are our monitored target areas and the circled parts are enemy targets, with 12 threat targets having entered the monitored area; the lower picture shows the friend-foe situation fire-allocation diagram with the distribution of our 11 targets against the 12 threat targets, the coordinate values being relative distances in a two-dimensional display. A key priority detection area is first established: the length and width of the monitoring area are input and the monitoring area is generated; fire deployment is generated from the sensor data and the number and trajectories of the threat targets; the deployment schemes are optimally computed, the deployment-node coordinates of our targets are stored in a database, and the final fire deployment scheme is displayed.
The annealing algorithm used to find the optimal allocation mainly comprises the following steps:
taking the monitoring range of any friendly target as the unit, computing the corresponding position function of the threat targets from the d threat targets found in the monitored area, with the trajectories of the threat targets entering the range as the initial population;
selecting the threat values of the threat targets in two adjacent friendly target-monitoring areas, computing the two fitness values f(m), performing crossover with crossover probability pc = 0.7 and mutation with mutation probability pm = 0.01 to obtain a new population, and obtaining ΔE = f(m) − f(m0), the threat value of a threat target being computed from its threat weight;
executing an acceptance decision: ΔE is evaluated at the corresponding position of a threat target m newly entering the monitoring area; if ΔE < 0, the new fire allocation for m is accepted; if ΔE ≥ 0, the new model m is accepted with probability P = exp(−ΔE/T_k), where T_k is the current temperature;
when the model is accepted, m0 = m is set; ΔE takes the threat target with the greatest threat as the objective function; whether the convergence condition is met is then judged, and if so, the optimal solution is output, giving our side the optimal fire allocation according to the identified threat targets' positions, trajectories and threat-degree values; T decreases geometrically, and the procedure ends when T < 0.0001.
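A compact sketch of this annealing loop with the genetic crossover/mutation operators follows; fitness, crossover and mutate are placeholders for the threat-value computations, the ΔE sign convention follows the text above, and the geometric cooling factor 0.95 is an assumption:

```python
# Hedged sketch of the hybrid annealing loop described above.
import math
import random

def anneal(population, fitness, crossover, mutate,
           pc=0.7, pm=0.01, t=1.0, alpha=0.95, t_end=1e-4):
    best = population[0]                             # current solution m0
    while t > t_end:
        a, b = random.sample(population, 2)          # two adjacent monitoring areas
        child = crossover(a, b) if random.random() < pc else a
        if random.random() < pm:
            child = mutate(child)
        delta_e = fitness(child) - fitness(best)     # ΔE = f(m) - f(m0)
        if delta_e < 0 or random.random() < math.exp(-delta_e / t):
            best = child                             # accept: set m0 = m
        t *= alpha                                   # geometric cooling
    return best
```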
It will be appreciated by persons skilled in the art that the invention is not limited to details of the foregoing embodiments and that the invention can be embodied in other specific forms without departing from the spirit or scope of the invention. In addition, various modifications and alterations of this invention may be made by those skilled in the art without departing from the spirit and scope of this invention, and such modifications and alterations should also be viewed as being within the scope of this invention. It is therefore intended that the following appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.

Claims (10)

1. A multi-threat target reconstruction and situation awareness method based on a generative adversarial network, characterized by mainly comprising:
collecting the navigation states of multiple threat targets in our side's supervised target area, marking timestamps, and forming time-synchronized raw scene point cloud data, infrared images and visible light images;
fusing the infrared image and the visible light image, and fusing the image with the point cloud data to realize three-dimensional scene and target reconstruction;
constructing generative adversarial networks, outputting simulated trajectories and simulated reconstructed targets, and obtaining the simulated trajectory segments corresponding to the multiple threat targets;
updating our side's target-monitoring search area according to the threat-degree-aware variable weights, outputting each target's threat degree through an annealing algorithm, and performing our side's fire allocation to the targets.
2. The multi-threat target reconstruction and situation awareness method based on a generative adversarial network according to claim 1, characterized in that fusing the infrared image and the visible light image specifically comprises:
defogging the extracted infrared and visible light images, filtering small noise points by image binarization, and extracting the contour areas of multiple edges with an adaptive edge algorithm to obtain the maximum contour of each target, the targets comprising enemy targets and the surrounding scene;
detecting feature points of the infrared and visible light images and the targets' contour points in both images, and fitting when the contour area is larger than a preset threshold to obtain the preprocessed infrared and visible light images;
performing K-level NSCT decomposition on the preprocessed infrared and visible images, and constructing the average gradient and a Q-factor matrix to obtain the low-frequency and high-frequency sub-band coefficients;
processing the high-frequency image information: applying PCNN processing to the corresponding high-frequency sub-band coefficients, taking the sub-band coefficients as the PCNN's external input excitation, and computing separately for the highest decomposition scale K and for the remaining K−1 scales;
processing the low-frequency image information: giving the low-frequency sub-band coefficients of high-energy image regions a higher pixel weight during fusion, then normalizing the variance, comparing it with a preset variance threshold, and fusing according to different rules;
performing the inverse NSCT transform and reconstructing the fused low-frequency coefficient and each high-frequency sub-band coefficient to obtain the fused image; taking the central coordinates of the two images as the target positions, then mapping the target center of the visible light image into the infrared image to obtain the visible-infrared image, thereby obtaining the fusion of the target area's azimuth and angle.
3. The multi-threat target reconstruction and situation awareness method based on a generative adversarial network according to claim 1, characterized in that realizing the three-dimensional scene and target reconstruction specifically comprises:
creating a three-dimensional voxel grid for the three-dimensional points of the input point cloud data, scanning the coordinates of all points, finding the maxima X_max, Y_max, Z_max and minima X_min, Y_min, Z_min along the three coordinate axes, and determining the side length L of the large cubic grid; if L is larger than the preset side length L_0, dividing it into several voxel grids along the X, Y and Z directions;
presetting a point count N_O, comparing the number of points n in each voxel grid with the preset point-count threshold in turn, and deleting any voxel grid whose point count n is smaller than the preset value;
comparing again the side lengths L_i of the small cubic grids with the preset side length L_0: if L_i > L_0, continuing to divide into smaller cubes; if L_i ≤ L_0, traversing the points in the voxel grid and approximately replacing the other points in the grid by its center of gravity, computed as:
d_i = ‖(x_i, y_i, z_i) − (x_c, y_c, z_c)‖,  d_min = min{d_i : 0 ≤ i ≤ n}
where d_i denotes the distance from point (x_i, y_i, z_i) to the region center (x_c, y_c, z_c) of its voxel grid, and the point attaining the minimum distance is taken as the center of gravity;
d_j = ‖(x_j, y_j, z_j) − (x_0, y_0, z_0)‖,  d_max = max{d_j : 0 ≤ j ≤ n−1}
where d_j denotes the distance from point (x_j, y_j, z_j) to the center of gravity (x_0, y_0, z_0) of its voxel grid, d_max is the maximum such distance, and the corresponding point is the farthest point found;
retaining the center-of-gravity point (x_0, y_0, z_0) within each voxel grid, removing erroneous point pairs with RANSAC, and processing all voxel grids to obtain the filtered point cloud data; setting a threshold τ: if τ ≤ d_max, points are retained according to d_j, otherwise only the center-of-gravity point is retained; the center of gravity and the points within the maximum distance are the retained points.
4. The multi-threat target reconstruction and situation awareness method based on a generative adversarial network according to claim 3, characterized in that the average curvature of the point cloud is calculated from the retained points, and the voxel with the minimum average curvature is taken as the seed voxel for region growing to form supervoxels; accurate extraction of the target contour feature points and localization of the feature regions are realized by estimating the average curvature of the supervoxels' external curved geometric features.
5. The multi-threat target reconstruction and situation awareness method based on a generative adversarial network according to claim 1, characterized in that constructing the generative adversarial networks specifically comprises: two generator networks are used to obtain several pieces of simulated trajectory data corresponding to the simulated reconstructed targets;
the first generator network takes the point cloud supervoxels and the visible-infrared image as input; the adversarial network is trained until the generator produces simulated target data with the same distribution as the real target data, and the simulated reconstructed target is output, with the simulated reconstructed surrounding scene output likewise;
the second generator network takes the targets' real trajectory data as input; the adversarial network is trained until the generator produces simulated trajectory data with the same distribution as the real trajectory data, after which the generator is used to produce several groups of simulated trajectory data.
6. The multi-threat target reconstruction and situation awareness method based on a generative adversarial network according to claim 5, characterized in that the first generator network performs the point cloud-image fusion through 3 convolutional layers, 4 dilated convolutional layers, 3 deconvolutional layers and a final convolutional layer, and outputs the reconstructed target and three-dimensional scene after registration training;
the 3 convolutional layers have kernel sizes 7×7, 5×5 and 3×3, stride 2, and 64, 128 and 256 feature maps respectively; the 4 dilated convolutions all have 3×3 kernels, dilation factors 2, 4, 8 and 16, stride 1, and 256 feature maps each; the 3 deconvolutional layers have 3×3 kernels, stride 2, and 128, 64 and 32 feature maps respectively, with padding applied through the deconvolutional layers; the final convolutional layer has a 3×3 kernel, stride 1, and 3 feature maps; a BN layer and a LeakyReLU layer are added after each convolutional layer's output, and the final convolutional layer's output is activated with a Tanh function.
7. The multi-threat target reconstruction and situation awareness method based on a generative adversarial network according to claim 5, characterized in that the second generator network generates the several sets of simulated trajectory data through 3 convolutional layers, 6 residual layers, 3 deconvolutional layers and a final convolutional layer;
the 3 convolutional layers have kernel sizes 7×7, 5×5 and 3×3 and 64, 128 and 256 feature maps respectively; each of the 6 residual layers comprises two convolutional layers with a residual connection, 3×3 kernels and 256 feature maps; the 3 deconvolutional layers all have 3×3 kernels and 256, 128 and 64 feature maps respectively; the final convolutional layer has a 3×3 kernel, stride 2, and 3 feature maps; each convolutional layer of the second generator network is likewise followed by a BN layer and a LeakyReLU activation layer, and the last layer is activated with a Tanh function.
8. The multi-threat target reconstruction and situation awareness method based on a generative adversarial network according to claim 1, characterized in that updating our side's target-monitoring search area according to the threat-degree-aware variable weights specifically comprises: with the threat targets' trajectory positions as the basic factor, a threat assessment model is established based on target type, strike capability, defensive capability, information reliability and support, and the change in the threat-degree weight values is used to dynamically set our side's target-monitoring search range, providing a decision basis for target selection and determining the center of gravity of the engagement.
9. The multi-threat target reconstruction and situation awareness method based on generation countermeasure network as claimed in claim 8, wherein the threat degree awareness variable weight is calculated as follows:
let the number of our targets in the monitoring area be N, with n ∈ {1, 2, ..., N} denoting the n-th of our targets; let the number of enemy targets be M, with m ∈ {1, 2, ..., M} denoting the m-th threat target; different threat targets have different threat degree indicators, K in number, with k ∈ {1, 2, ..., K} denoting the k-th threat indicator;
according to the position of the threat target and the threat degree evaluation, the state weight value of each threat degree indicator is constructed:
w_k(X) = w_mk · g_k(X) / Σ_{j=1}^{K} w_mj · g_j(X)    (3)

where w_k denotes the state weight of threat degree indicator k, w_k(X) the state weight value of threat degree indicator k for our target and the threat target at their corresponding positions, w_mk the constant weight of indicator k of target m, and X_mk the k-th threat indicator of threat target m within the search range of our target n,
and X̄_m = (1/K) Σ_{k=1}^{K} X_mk denotes the mean of the K threat indicators corresponding to the m-th threat target; σ is the variable weight factor, with value range [−0.5, 0.5]; δ_m denotes the threat weight corresponding to target m, whose value is related to the track of target m;
where X denotes the positional relation between our targets and the threat targets, represented as an N × M matrix:

X = (X_nm), n = 1, ..., N, m = 1, ..., M    (4)
and g_k(X), the state variable weight of threat indicator k for our target and the threat target, is

g_k(X) = exp(σ · (X_mk − X̄_m))    (5)
the search step is set according to the threat degree:

X_nm = X_n(m−1) + (rd − 0.5rd) · H_step    (6)
H_step = (w_max − w_m) / (w_max − w_min)    (7)

where rd is the diameter of our target's random monitoring area and 0.5rd its radius, H_step is the adaptive step-size adjustment factor, w_m is the threat value of threat target m, and w_min and w_max are the minimum and maximum threat values of the threat targets in our target's monitoring area, and
m_0 = arg max_{m ∈ {1, ..., M}} w_m    (8)

denotes the current optimal solution, i.e. the threat target m_0 whose threat degree is greatest in the monitored area.
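A compact numeric sketch of this variable-weight calculation follows; the exponential form of g_k and the normalisation in w_k(X) are the reconstructions given above and should be read as assumptions, with X and w0 as M × K NumPy arrays and delta of length M:

    import numpy as np

    def variable_weights(X, w0, delta, sigma=0.3):
        # X[m, k]: threat indicator k of threat target m
        # w0[m, k]: constant weight w_mk;  delta[m]: track-related threat weight
        Xbar = X.mean(axis=1, keepdims=True)                 # mean of the K indicators, per target
        g = np.exp(sigma * (X - Xbar))                       # state variable weight g_k(X), eq. (5)
        wx = w0 * g / (w0 * g).sum(axis=1, keepdims=True)    # normalised weight w_k(X), eq. (3)
        threat = delta * (wx * X).sum(axis=1)                # aggregate threat value per target
        return wx, threat

    # example: M = 3 threat targets, K = 4 indicators
    X = np.array([[0.8, 0.6, 0.9, 0.7],
                  [0.4, 0.5, 0.3, 0.6],
                  [0.9, 0.9, 0.8, 0.7]])
    w0 = np.full((3, 4), 0.25)
    delta = np.array([1.0, 0.8, 1.2])
    wx, threat = variable_weights(X, w0, delta)
    m0 = int(np.argmax(threat))                              # current optimal solution, eq. (8)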
10. The multi-threat target reconstruction and situation awareness method based on generation countermeasure network as claimed in claim 9, wherein the annealing algorithm mainly comprises:
taking the monitoring range of any one of our targets as the unit, and computing, for the d threat targets found by searching the monitoring area, the corresponding position function of each threat target from its track on entering the range, these positions serving as the initial population;
selecting the threat values of the threat targets in two adjacent monitoring areas of our targets and computing the two fitness values f(m); performing crossover with crossover probability pc = 0.7 and then mutation with mutation probability pm = 0.02 to obtain a new population, and computing ΔE = f(m) − f(m0), where the threat value of a threat target is calculated from its threat weight;
performing the acceptance decision: ΔE is evaluated for the position of the threat target m that has newly entered the monitoring area; if ΔE < 0, the new firepower allocation for m is accepted; if ΔE ≥ 0, the new model m is accepted with probability P_k = exp(−ΔE/T_k), where T_k is the current temperature;
when the model is accepted, set m0 = m; the objective function takes the threat target with the greatest threat degree; whether the convergence condition is met is then judged, and if so the optimal solution is output and the optimal firepower allocation is assigned to our targets according to the positions, tracks and threat degree values of the identified threat targets; T decreases geometrically, and the algorithm ends when T < 0.0001.
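The acceptance loop of this claim can be sketched as below; the candidate encoding (position vectors as tuples), the one-point crossover and the perturbation mutation are illustrative assumptions, and ΔE is written as f(m0) − f(m) so that a larger threat value f is always favoured under the standard minimisation form of the Metropolis rule:

    import math
    import random

    def crossover(a, b):
        # one-point crossover on two position vectors
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    def mutate(m):
        # slightly perturb one coordinate of the position vector
        m = list(m)
        i = random.randrange(len(m))
        m[i] += random.uniform(-0.5, 0.5)
        return tuple(m)

    def anneal(candidates, f, pc=0.7, pm=0.02, T=1.0, alpha=0.95):
        # candidates: position vectors of threat targets in the monitoring areas
        # f: threat-value fitness computed from the variable weights of claim 9
        m0 = max(candidates, key=f)              # start from the most threatening target
        while T >= 0.0001:
            a, b = random.sample(candidates, 2)  # targets from two adjacent areas
            m = crossover(a, b) if random.random() < pc else a
            if random.random() < pm:
                m = mutate(m)
            dE = f(m0) - f(m)                    # negative when the new threat is larger
            if dE < 0 or random.random() < math.exp(-dE / T):
                m0 = m                           # Metropolis acceptance
            T *= alpha                           # geometric cooling, stops below 0.0001
        return m0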
CN201911210172.0A 2019-12-02 2019-12-02 Multi-threat target reconstruction and situation awareness method based on generation countermeasure network Active CN110969637B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911210172.0A CN110969637B (en) 2019-12-02 2019-12-02 Multi-threat target reconstruction and situation awareness method based on generation countermeasure network

Publications (2)

Publication Number Publication Date
CN110969637A true CN110969637A (en) 2020-04-07
CN110969637B CN110969637B (en) 2023-05-02

Family

ID=70032473

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911210172.0A Active CN110969637B (en) 2019-12-02 2019-12-02 Multi-threat target reconstruction and situation awareness method based on generation countermeasure network

Country Status (1)

Country Link
CN (1) CN110969637B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070210953A1 (en) * 2006-03-13 2007-09-13 Abraham Michael R Aircraft collision sense and avoidance system and method
CN105654232A (en) * 2015-12-24 2016-06-08 大连陆海科技股份有限公司 Coastal monitoring and defense decision-making system based on multi-dimensional space fusion and method thereof
CN107832885A (en) * 2017-11-02 2018-03-23 南京航空航天大学 A kind of fleet Algorithm of Firepower Allocation based on adaptive-migration strategy BBO algorithms
CN108564129A (en) * 2018-04-24 2018-09-21 电子科技大学 A kind of track data sorting technique based on generation confrontation network
CN110415342A (en) * 2019-08-02 2019-11-05 深圳市唯特视科技有限公司 A kind of three-dimensional point cloud reconstructing device and method based on more merge sensors
CN110428008A (en) * 2019-08-02 2019-11-08 深圳市唯特视科技有限公司 A kind of target detection and identification device and method based on more merge sensors

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
S. AKBARI et al.: "A new framework of a decision support system for air to air combat tasks" *
姚跃亭 et al.: "Air-defense target assignment based on an improved genetic algorithm" *
宋遐淦 et al.: "Application of an improved simulated annealing genetic algorithm in cooperative air combat" *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113671981A (en) * 2020-05-14 2021-11-19 北京理工大学 Remote laser guidance aircraft control system and control method thereof
CN111722220A (en) * 2020-06-08 2020-09-29 北京理工大学 Rocket target identification system based on parallel heterogeneous sensor
CN111722220B (en) * 2020-06-08 2022-08-26 北京理工大学 Rocket target identification system based on parallel heterogeneous sensor
CN111899353A (en) * 2020-08-11 2020-11-06 长春工业大学 Three-dimensional scanning point cloud hole filling method based on generation countermeasure network
CN112365582A (en) * 2020-11-17 2021-02-12 电子科技大学 Countermeasure point cloud generation method, storage medium and terminal
CN112365582B (en) * 2020-11-17 2022-08-16 电子科技大学 Countermeasure point cloud generation method, storage medium and terminal
CN112801403A (en) * 2021-02-10 2021-05-14 武汉科技大学 Method and system for predicting potential threat degree of aerial target based on SSA-BP
CN112884802A (en) * 2021-02-24 2021-06-01 电子科技大学 Anti-attack method based on generation
CN112990363A (en) * 2021-04-21 2021-06-18 中国人民解放军国防科技大学 Battlefield electromagnetic situation sensing and utilizing method
CN113192182A (en) * 2021-04-29 2021-07-30 山东产研信息与人工智能融合研究院有限公司 Multi-sensor-based live-action reconstruction method and system
CN114722407A (en) * 2022-03-03 2022-07-08 中国人民解放军战略支援部队信息工程大学 Image protection method based on endogenous countermeasure sample
CN114722407B (en) * 2022-03-03 2024-05-24 中国人民解放军战略支援部队信息工程大学 Image protection method based on endogenic type countermeasure sample

Also Published As

Publication number Publication date
CN110969637B (en) 2023-05-02

Similar Documents

Publication Publication Date Title
CN110969637A (en) Multi-threat target reconstruction and situation awareness method based on generation countermeasure network
CN106845621B (en) Dense population number method of counting and system based on depth convolutional neural networks
JP5487298B2 (en) 3D image generation
CN111814875B (en) Ship sample expansion method in infrared image based on pattern generation countermeasure network
CN109029363A (en) A kind of target ranging method based on deep learning
CN110570363A (en) Image defogging method based on Cycle-GAN with pyramid pooling and multi-scale discriminator
CN106600675A (en) Point cloud synthesis method based on constraint of depth map
CN110243390B (en) Pose determination method and device and odometer
CN110349117A (en) A kind of infrared image and visible light image fusion method, device and storage medium
CN116258817B (en) Automatic driving digital twin scene construction method and system based on multi-view three-dimensional reconstruction
CN108682039A (en) A kind of binocular stereo vision measurement method
CN107560592A (en) A kind of precision ranging method for optronic tracker linkage target
CN112614163B (en) Target tracking method and system integrating Bayesian track reasoning
CN103700109B (en) SAR image change detection based on multiple-objection optimization MOEA/D and fuzzy clustering
CN110147816A (en) A kind of acquisition methods of color depth image, equipment, computer storage medium
CA3138812A1 (en) Automatic crop classification system and method
CN113674335A (en) Depth imaging method, electronic device, and storage medium
CN111914938A (en) Image attribute classification and identification method based on full convolution two-branch network
CN111242972B (en) On-line cross-scale multi-fluid target matching tracking method
CN109740455B (en) Crowd evacuation simulation method and device
CN109377447B (en) 2021-03-23 Contourlet transform image fusion method based on the cuckoo search algorithm
CN115239607A (en) Method and system for self-adaptive fusion of infrared and visible light images
Yang et al. Block based dense stereo matching using adaptive cost aggregation and limited disparity estimation
Basaru et al. HandyDepth: Example-based stereoscopic hand depth estimation using Eigen Leaf Node Features
Ravichandran et al. Entropy optimized image fusion: Using particle swarm technology and discrete wavelet transform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant