CN108596071A - Cross-band infrared image transformation method based on a gradient-constrained generative adversarial network - Google Patents


Info

Publication number
CN108596071A
CN108596071A
Authority
CN
China
Prior art keywords
network
sub-network
infrared image
spectral band
different band
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810351344.5A
Other languages
Chinese (zh)
Other versions
CN108596071B (en)
Inventor
杨卫东
朱军
王祯瑞
蒋哲兴
钟胜
邹博文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong University of Science and Technology filed Critical Huazhong University of Science and Technology
Priority to CN201810351344.5A priority Critical patent/CN108596071B/en
Publication of CN108596071A publication Critical patent/CN108596071A/en
Application granted granted Critical
Publication of CN108596071B publication Critical patent/CN108596071B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration using local operators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/40Image enhancement or restoration using histogram techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/08Feature extraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10048Infrared image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20132Image cropping

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Investigating Or Analysing Materials By Optical Means (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a cross-band infrared image transformation method based on a gradient-constrained generative adversarial network, including: transforming an input infrared image into an infrared image of a different spectral band using a generator sub-network. Training of the generator sub-network includes: dividing a number of matched pairs of infrared images from two spectral bands into training pairs and test pairs, and constructing a generative adversarial network comprising a generator sub-network and a discriminator sub-network; feeding the training pairs into the generative adversarial network, computing the losses of the discriminator and generator sub-networks with a gradient-constrained loss function, and updating the discriminator and generator sub-networks by backpropagation; dividing the test pairs into a number of test image blocks, feeding the test image blocks into the generator sub-network to obtain transformed images, and, when the mean of the local correlation values between the transformed images and the test image blocks is greater than or equal to a threshold, obtaining the trained generator sub-network. The image transformation results of the present invention are good.

Description

Cross-band infrared image transformation method based on a gradient-constrained generative adversarial network
Technical field
The invention belongs to the field of infrared image processing, and more particularly relates to a cross-band infrared image transformation method based on a gradient-constrained generative adversarial network.
Background technology
With scientific and technological progress, camouflage and jamming techniques continue to develop. Single-band, broadband target detection and recognition can no longer meet the demands of target recognition under complex conditions, and multi-band detection together with multi-feature information fusion will become an important means of target recognition. Infrared multispectral images provide both the spatial and the spectral information of a target, supplying the data basis for infrared target feature extraction, fusion, and recognition.
Infrared imaging technology is developing rapidly and has major application value in many fields, contributing significantly to the national economy and national defense. How to quickly and efficiently develop infrared imaging systems that can accurately detect targets under various complex infrared backgrounds and interference has become one of the key problems of modern infrared technology.
However, for technical reasons, the barrier to developing multispectral and hyperspectral imaging remote sensors and detection systems in the long-wave infrared band is high. Such systems are largely held by developed countries, existing equipment is scarce, and most of it is project-specific, such as the SEBASS, TIRIS, and AHI imaging spectrometer systems. Developing long-wave infrared multispectral and hyperspectral detection instruments, and studying target/background spectral feature analysis and detection algorithms, requires extensive simulation and semi-physical test environments. Owing to the shortage of infrared multispectral and hyperspectral imaging spectrometers, it is difficult to obtain enough real data to build such simulation environments, so obtaining multispectral simulated images that approximate real scenes is a problem in urgent need of a solution.
Summary of the invention
In view of the above defects or improvement needs of the prior art, the present invention provides a cross-band infrared image transformation method based on a gradient-constrained generative adversarial network, thereby solving the technical problems of the prior art: high cost, long time, and the difficulty of obtaining multispectral simulated images that approximate real scenes.
To achieve the above object, the present invention provides a cross-band infrared image transformation method based on a gradient-constrained generative adversarial network, including: transforming an input infrared image into an infrared image of a different spectral band using a generator sub-network,
wherein training of the generator sub-network includes:
(1) dividing a number of matched pairs of infrared images from two spectral bands into training pairs and test pairs, and constructing a generative adversarial network comprising a generator sub-network and a discriminator sub-network;
(2) feeding the training pairs into the generative adversarial network, computing the losses of the discriminator and generator sub-networks with a gradient-constrained loss function, and updating the discriminator and generator sub-networks by backpropagation;
(3) dividing the test pairs into a number of test image blocks, feeding the test image blocks into the generator sub-network to obtain transformed images, and obtaining the trained generator sub-network when the mean of the local correlation values between the transformed images and the test image blocks is greater than or equal to a threshold.
Further, constructing the generative adversarial network includes:
Based on a convolutional neural network, down-sampling operations are replaced by convolutional layers with stride k, and fully connected layers are replaced by convolutional layers; a batch-normalization (BN) layer is added after each convolutional layer. In the generator sub-network, all layers use the ReLU activation function except the output layer, which uses tanh; the discriminator sub-network uses LeakyReLU as its activation function.
Further, the threshold ranges from 0.8 to 1.
Further, the loss function includes a discriminator loss function and a generator loss function. The discriminator loss function adds a weight-update-amplitude constraint on the discriminator to the basic GAN loss function, and the generator loss function adds a gradient-feature difference constraint and a pixel-level difference constraint to the basic GAN loss function.
Further, the generator loss function is:
L_G = L_GAN(G, D) + λ1·L_l2(G) + λ2·L_feat(G)
where L_GAN(G, D) is the GAN loss function, L_l2(G) is the pixel-level difference, L_feat(G) is the gradient-feature difference, λ1 is the pixel-level difference weight, λ2 is the gradient-feature difference weight, G is the generator sub-network, and L_G is the generator loss function.
Further, the pixel-level difference is:
L_l2(G) = (1/(M·N)) · E_{x,y}[ |y − G(x)|_2 ]
where M and N are respectively the number of rows and columns of the image G(x) output by the generator sub-network during training, |y − G(x)|_2 is the Euclidean distance between y and G(x), y is the different-band infrared image of the training pair used for comparison with G(x), x is the infrared image of the training pair fed into the generator sub-network for transformation, and E_{x,y}[ |y − G(x)|_2 ] is the expectation of |y − G(x)|_2 over the training pairs.
Further, the gradient-feature difference is:
L_feat(G) = E_{x,y}[ |Ncc(f_y, f_G(x)) − 1| ]
where f_y is the gradient feature of the different-band infrared image y of the training pair used for comparison with G(x), f_G(x) is the gradient feature of the image G(x) output by the generator sub-network during training, Ncc(f_y, f_G(x)) is the normalized cross-correlation value of f_y and f_G(x), and E_{x,y}[ |Ncc(f_y, f_G(x)) − 1| ] is the expectation of |Ncc(f_y, f_G(x)) − 1| over the training pairs.
Further, the discriminator loss function is:
L_D = −L_GAN(G, D) + λ3·L_penalty
where L_GAN(G, D) is the GAN loss function, L_penalty is the weight-update-amplitude loss term, λ3 is the weight-update-amplitude weight, D is the discriminator sub-network, and L_D is the discriminator loss function.
Further, the weight-update-amplitude term is:
L_penalty = E_x̂[ ‖∇_x̂ D(x̂)‖² ]
where ‖∇_x̂ D(x̂)‖² is the squared norm of the gradient of the discriminator output with respect to its input x̂, and x̂ is the interpolated image obtained by linear interpolation, using a random matrix, between the image output by the generator sub-network and the corresponding different-band infrared image of the training pair.
Further, the entries of the random matrix are uniformly distributed in [0, 1], and the size of the random matrix is the same as that of the corresponding different-band infrared image of the training pair.
In general, compared with the prior art, the above technical scheme conceived by the present invention achieves the following beneficial effects:
(1) The present invention performs cross-band infrared image transformation using the generator sub-network of a generative adversarial network. The transformation results are good, the network generalizes well, and multispectral simulated images approximating real scenes can be obtained at low cost and high efficiency.
(2) In training the generative adversarial network, the present invention proposes a novel loss function: on the basis of the original GAN loss function, the loss terms L_l2(G) and L_feat(G) are added. L_l2(G) constrains the generated image at the pixel level, and L_feat(G) constrains the gradient features of the generated image at the feature level. These two loss terms noticeably improve the quality of the generated images and speed up network convergence.
(3) The present invention provides a new cross-band infrared image transformation method. In training the generative adversarial network, the loss term L_penalty is added on top of L_l2(G) and L_feat(G) to prevent small differences in the input samples from causing large changes in the generator output; this improves training speed and stability while also improving the quality of the generated images.
(4) When testing the network, the present invention divides the test pairs into a number of test image blocks, feeds the test image blocks into the generator sub-network to obtain transformed images, and evaluates the training effect of the generator sub-network using the mean of the local correlation values between the transformed images and the test image blocks; the final generator sub-network thus transforms images better.
Description of the drawings
Fig. 1 is a flow chart of the cross-band infrared image transformation method provided by an embodiment of the present invention;
Fig. 2 is a schematic diagram of the network structure and loss functions provided by an embodiment of the present invention;
Fig. 3 is a schematic diagram of the training method of the generative adversarial network provided by an embodiment of the present invention;
Fig. 4(a) is a schematic diagram of the loss before adding the weight-update-amplitude constraint, provided by an embodiment of the present invention;
Fig. 4(b) is a schematic diagram of the loss after adding the weight-update-amplitude constraint, provided by an embodiment of the present invention;
Fig. 5(a1)-Fig. 5(j1) are the infrared images of test pairs a-j provided by an embodiment of the present invention;
Fig. 5(a2)-Fig. 5(j2) are the transformed images of the infrared images of test pairs a-j;
Fig. 5(a3)-Fig. 5(j3) are the different-band infrared images of test pairs a-j;
Fig. 5(a4)-Fig. 5(j4) are the difference statistics histograms between the transformed image and the different-band infrared image for test pairs a-j;
Fig. 6 is a line chart of the local correlation values before and after transformation, provided by an embodiment of the present invention.
Detailed description of the embodiments
To make the objects, technical schemes, and advantages of the present invention clearer, the present invention is further described below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to illustrate the present invention and are not intended to limit it. In addition, the technical features involved in the various embodiments described below may be combined with each other as long as they do not conflict.
As shown in Fig. 1, a cross-band infrared image transformation method based on a gradient-constrained generative adversarial network includes: transforming an input infrared image into an infrared image of a different spectral band using a generator sub-network,
wherein training of the generator sub-network includes:
(1) dividing a number of matched pairs of infrared images from two spectral bands into training pairs and test pairs, and constructing a generative adversarial network comprising a generator sub-network and a discriminator sub-network;
(2) feeding the training pairs into the generative adversarial network, computing the losses of the discriminator and generator sub-networks with a gradient-constrained loss function, and updating the discriminator and generator sub-networks by backpropagation;
(3) dividing the test pairs into a number of test image blocks, feeding the test image blocks into the generator sub-network to obtain transformed images, and obtaining the trained generator sub-network when the mean of the local correlation values between the transformed images and the test image blocks is greater than or equal to a threshold; the threshold ranges from 0.8 to 1.
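The stopping rule of step (3) reduces to comparing a mean against a threshold. A minimal pure-Python sketch follows; the function name is illustrative and the 0.9 threshold is just one value from the 0.8-1 range given in the description:

```python
def generator_trained(local_ncc_values, threshold=0.9):
    """Step (3) check: the generator counts as trained once the mean of the
    local correlation values between transformed images and test image
    blocks reaches the threshold (range 0.8-1 per the description)."""
    mean_ncc = sum(local_ncc_values) / len(local_ncc_values)
    return mean_ncc >= threshold

# e.g. local correlation values measured over test image blocks
print(generator_trained([0.95, 0.93, 0.97], threshold=0.9))  # True
print(generator_trained([0.70, 0.75, 0.72], threshold=0.9))  # False
```

In practice the threshold trades off training time against transformation quality within the stated 0.8-1 range.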
As shown in Fig. 2, constructing the generative adversarial network includes:
Based on a convolutional neural network, down-sampling operations are replaced by convolutional layers with stride k, and fully connected layers are replaced by convolutional layers; a batch-normalization (BN) layer is added after each convolutional layer. In the generator sub-network, all layers use the ReLU activation function except the output layer, which uses tanh; the discriminator sub-network uses LeakyReLU as its activation function. Here k is 2.
The loss function includes a discriminator loss function and a generator loss function. The discriminator loss function adds a weight-update-amplitude constraint on the discriminator to the basic GAN loss function, and the generator loss function adds a gradient-feature difference constraint and a pixel-level difference constraint to the basic GAN loss function.
The generator loss function is:
L_G = L_GAN(G, D) + λ1·L_l2(G) + λ2·L_feat(G)
where L_GAN(G, D) is the GAN loss function, L_l2(G) is the pixel-level difference, L_feat(G) is the gradient-feature difference, λ1 is the pixel-level difference weight, λ2 is the gradient-feature difference weight, G is the generator sub-network, and L_G is the generator loss function. λ1 ranges from 50 to 80, and λ2 ranges from 20 to 30.
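The weighted sum above is plain arithmetic once the three terms are available. A small sketch with illustrative midpoint weights (λ1 = 60, λ2 = 25 — the description only fixes ranges) and made-up loss values:

```python
def generator_loss(l_gan, l_l2, l_feat, lam1=60.0, lam2=25.0):
    """Weighted generator loss L_G = L_GAN + lam1*L_l2 + lam2*L_feat.
    lam1 in [50, 80] and lam2 in [20, 30] per the description; the
    midpoint defaults here are illustrative, not values from the patent."""
    return l_gan + lam1 * l_l2 + lam2 * l_feat

# made-up component values, just to show the weighting
print(round(generator_loss(1.0, 0.02, 0.1), 6))  # 4.7
```

Because λ1 and λ2 are large relative to the GAN term, the pixel-level and gradient-feature constraints dominate early training, which is consistent with the stated goal of faster convergence.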
The pixel-level difference is:
L_l2(G) = (1/(M·N)) · E_{x,y}[ |y − G(x)|_2 ]
where M and N are respectively the number of rows and columns of the image G(x) output by the generator sub-network during training, |y − G(x)|_2 is the Euclidean distance between y and G(x), y is the different-band infrared image of the training pair used for comparison with G(x), x is the infrared image of the training pair fed into the generator sub-network for transformation, and E_{x,y}[ |y − G(x)|_2 ] is the expectation of |y − G(x)|_2 over the training pairs.
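For a single training pair, the pixel-level difference described above can be sketched in pure Python as below. The function name is illustrative; the per-image normalization by M·N follows the description, and in training the value is averaged over all pairs:

```python
import math

def pixel_level_difference(y, g_x):
    """Pixel-level term for one pair: the Euclidean distance |y - G(x)|_2
    between target y and generator output G(x), normalized by the image
    size M*N. Images are given as 2-D lists of equal shape."""
    m, n = len(y), len(y[0])
    sq = sum((y[i][j] - g_x[i][j]) ** 2 for i in range(m) for j in range(n))
    return math.sqrt(sq) / (m * n)

y   = [[1.0, 2.0], [3.0, 4.0]]
g_x = [[1.0, 2.0], [3.0, 1.0]]   # differs only in the last pixel, by 3
print(pixel_level_difference(y, g_x))  # sqrt(9)/4 = 0.75
```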
The gradient-feature difference is:
L_feat(G) = E_{x,y}[ |Ncc(f_y, f_G(x)) − 1| ]
where f_y is the gradient feature of the different-band infrared image y of the training pair used for comparison with G(x), f_G(x) is the gradient feature of the image G(x) output by the generator sub-network during training, Ncc(f_y, f_G(x)) is the normalized cross-correlation value of f_y and f_G(x), and E_{x,y}[ |Ncc(f_y, f_G(x)) − 1| ] is the expectation of |Ncc(f_y, f_G(x)) − 1| over the training pairs.
The gradient features of y and G(x) are computed with the Sobel operator to obtain f_y and f_G(x), as follows:
The Sobel operator extracts image gradient features in the following steps:
The operator consists of two 3×3 matrices, a horizontal matrix and a vertical matrix; convolving each of them with the image gives approximations of the horizontal and vertical brightness differences, respectively. Let A denote an infrared image of the training pair, and let S_x and S_y denote the horizontal and vertical gradient features; the formulas are:
S_x = K_x ⊛ A,  S_y = K_y ⊛ A
with the standard Sobel kernels
K_x = [ −1 0 +1; −2 0 +2; −1 0 +1 ],  K_y = [ +1 +2 +1; 0 0 0; −1 −2 −1 ]
where ⊛ denotes the convolution operation.
The image gradient feature S of A is computed from the horizontal gradient S_x and the vertical gradient S_y as:
S = √(S_x² + S_y²)
The gradient features of y and G(x) are computed with the Sobel operator as described, yielding f_y and f_G(x). The normalized cross-correlation of the two gradient-feature maps is
Ncc(f_y, f_G(x)) = Σ_{m,n} f_y(m, n)·f_G(x)(m, n) / √( Σ_{m,n} f_y(m, n)² · Σ_{m,n} f_G(x)(m, n)² )
where M and N are the numbers of rows and columns of the image, (m, n) are the pixel coordinates, and the value of Ncc lies in [0, 1].
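A minimal pure-Python sketch of the Sobel gradient feature and the normalized cross-correlation described above, using the standard 3×3 Sobel kernels and a valid (borderless) convolution; function names are illustrative:

```python
import math

KX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal Sobel kernel
KY = [[1, 2, 1], [0, 0, 0], [-1, -2, -1]]   # vertical Sobel kernel

def sobel_magnitude(img):
    """Gradient-feature map S = sqrt(Sx^2 + Sy^2) on interior pixels
    (valid convolution, so the output is 2 smaller in each dimension)."""
    m, n = len(img), len(img[0])
    out = [[0.0] * (n - 2) for _ in range(m - 2)]
    for i in range(1, m - 1):
        for j in range(1, n - 1):
            sx = sum(KX[a][b] * img[i - 1 + a][j - 1 + b]
                     for a in range(3) for b in range(3))
            sy = sum(KY[a][b] * img[i - 1 + a][j - 1 + b]
                     for a in range(3) for b in range(3))
            out[i - 1][j - 1] = math.hypot(sx, sy)
    return out

def ncc(f1, f2):
    """Normalized cross-correlation of two gradient-feature maps; for
    non-negative gradient magnitudes the result lies in [0, 1]."""
    num = sum(a * b for r1, r2 in zip(f1, f2) for a, b in zip(r1, r2))
    d1 = sum(a * a for r in f1 for a in r)
    d2 = sum(b * b for r in f2 for b in r)
    return num / math.sqrt(d1 * d2)

img = [[0, 0, 1, 1]] * 4          # image with a single vertical edge
f = sobel_magnitude(img)
print(f)                          # [[4.0, 4.0], [4.0, 4.0]]
print(ncc(f, f))                  # identical features -> 1.0
```

Identical gradient maps give Ncc = 1, which is exactly the value the L_feat term pushes the generator toward.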
The discriminator loss function is:
L_D = −L_GAN(G, D) + λ3·L_penalty
where L_GAN(G, D) is the GAN loss function, L_penalty is the weight-update-amplitude term, λ3 is the weight-update-amplitude weight, D is the discriminator sub-network, and L_D is the discriminator loss function; λ3 ranges from 30 to 50.
The weight-update-amplitude term is:
L_penalty = E_x̂[ ‖∇_x̂ D(x̂)‖² ]
where ‖∇_x̂ D(x̂)‖² is the squared norm of the gradient of the discriminator output with respect to its input x̂, and x̂ = ε⊙A′ + (1 − ε)⊙B is the interpolated image obtained by element-wise linear interpolation, with a random matrix ε, between the image A′ output by the generator sub-network and the corresponding different-band infrared image B of the training pair. The entries of the random matrix ε are uniformly distributed in [0, 1], and the size of ε is the same as that of the corresponding different-band infrared image of the training pair.
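The interpolation that produces x̂ can be sketched as follows. This only builds the interpolated image with a per-pixel uniform random matrix ε; computing the discriminator gradient at x̂ requires an autodiff framework and is omitted here:

```python
import random

def interpolation_image(a_prime, b, rng=random.Random(0)):
    """Build x_hat = eps*A' + (1 - eps)*B, where eps is a random matrix
    with entries uniform in [0, 1] and the same size as the different-band
    infrared image B. The discriminator gradient is then evaluated at
    x_hat to form the L_penalty term (not shown here)."""
    return [[(e := rng.random()) * a_prime[i][j] + (1 - e) * b[i][j]
             for j in range(len(b[0]))]
            for i in range(len(b))]

a_prime = [[0.0, 0.0], [0.0, 0.0]]   # toy generator output
b       = [[1.0, 1.0], [1.0, 1.0]]   # toy different-band image
x_hat = interpolation_image(a_prime, b)
# every entry of x_hat lies between the two images' pixel values
print(all(0.0 <= v <= 1.0 for row in x_hat for v in row))  # True
```

Sampling ε per pixel (rather than per image) matches the description's statement that the random matrix has the same size as the target image.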
As shown in Fig. 3, the training method of the generative adversarial network includes:
1) initializing the G network (generator) and the D network (discriminator);
2) feeding image A into the G network to obtain image A′, feeding (A′, B) and (A, B) separately into the D network to compute the loss, then fixing the G network and updating the D network;
3) fixing the D network, recomputing the loss function, and updating the G network.
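The alternating order of steps 1)-3) can be shown with a schematic skeleton. The update functions below are trivial stand-ins (real training would backpropagate through the convolutional sub-networks), so only the call order is meaningful:

```python
def train_gan(steps, update_d, update_g, d_params, g_params, data):
    """Alternating scheme from Fig. 3: for each pair (A, B), 2) fix G and
    update D on (A', B) and (A, B); 3) fix D and update G. The 'networks'
    here are just parameter lists, purely to illustrate the loop."""
    for _ in range(steps):
        for a, b in data:
            a_prime = [g_params[0] * v for v in a]        # stand-in for G(A)
            d_params = update_d(d_params, a_prime, a, b)  # G fixed
            g_params = update_g(g_params, d_params, a, b) # D fixed
    return d_params, g_params

# trivial stand-in updates, recording only the call order
calls = []
d, g = train_gan(
    1,
    lambda d, ap, a, b: calls.append("D") or d,
    lambda g, d, a, b: calls.append("G") or g,
    [0.0], [1.0], [([1.0, 2.0], [2.0, 4.0])])
print(calls)  # ['D', 'G'] -- discriminator updated first, then generator
```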
As shown in Fig. 4(a) and 4(b), during training of the generative adversarial network, the loss term L_penalty is added on top of L_l2(G) and L_feat(G) to prevent small differences in the input samples from causing large changes in the generator output; this improves training speed and stability while also improving the quality of the generated images.
Fig. 5 shows, for test pairs a-j, the input infrared image, its transformed image, the different-band infrared image, and the difference statistics histogram between the transformed image and the different-band infrared image.
The transformed image of the infrared image and the different-band infrared image of the test pair are cut into patches of size 64 × 64 with stride 64 × 64. The local correlation value Ncc(A′_j, B_j) of each pair of patches is computed with a cross-correlation algorithm, where j is the index of the current patch, A′_j is patch j of the transformed image A′, B_j is patch j of the different-band infrared image B of the test pair, and the correlation value lies in [0, 1].
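The non-overlapping patch split (size 64, stride 64) can be sketched in pure Python; a 2×2 patch size on a toy image is used here for brevity, and the function name is illustrative:

```python
def split_into_patches(img, size):
    """Cut an image into non-overlapping size x size patches (stride =
    size, as in the 64 x 64 / stride 64 scheme of the description).
    Patches are returned in row-major order; leftover borders smaller
    than one patch are dropped."""
    m, n = len(img), len(img[0])
    return [[row[j:j + size] for row in img[i:i + size]]
            for i in range(0, m - size + 1, size)
            for j in range(0, n - size + 1, size)]

img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
patches = split_into_patches(img, 2)
print(len(patches))   # 4 patches of size 2x2
print(patches[0])     # [[1, 2], [5, 6]]
```

The same split is applied to A′ and B, so patch j of one image aligns with patch j of the other when computing Ncc(A′_j, B_j).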
The mean μ and variance σ of the correlations over all patch pairs of the transformed image and the different-band infrared image of the test pair are computed as:
μ = (1/N) · Σ_j Ncc(A′_j, B_j)
σ = (1/N) · Σ_j (Ncc(A′_j, B_j) − μ)²
where N is the number of patch pairs;
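Under the formulas above, the patch-correlation statistics reduce to a population mean and variance. A small sketch with made-up correlation values (note the description calls σ the variance, not its square root):

```python
def correlation_statistics(ncc_values):
    """Mean mu and population variance sigma of the per-patch local
    correlation values, following the formulas in the description."""
    n = len(ncc_values)
    mu = sum(ncc_values) / n
    sigma = sum((v - mu) ** 2 for v in ncc_values) / n
    return mu, sigma

mu, sigma = correlation_statistics([0.9, 0.95, 1.0])
print(round(mu, 4))     # 0.95
print(round(sigma, 6))  # 0.001667
```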
The analysis of the computed results is shown in Table 1, where μ_AB is the mean correlation between the infrared image A of the test pair before transformation and the different-band infrared image B of the test pair, and μ_A′B is the mean correlation between the transformed image A′ and the different-band infrared image B of the test pair; σ_AB is the correlation variance between the infrared image A of the test pair before transformation and the different-band infrared image B, and σ_A′B is the correlation variance between the transformed image A′ and the different-band infrared image B of the test pair.
Table 1
As shown in Fig. 6, the closer the correlation mean μ computed for the two images after transformation is to 1 and the closer the variance σ is to 0, the more similar the two images are and the better the image-transformation effect of the model; conversely, the worse the effect.
It will be readily appreciated by those skilled in the art that the foregoing is merely a description of preferred embodiments of the present invention and is not intended to limit the invention; any modifications, equivalents and improvements made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A different-spectral-band infrared image transformation method based on a gradient-constrained generative adversarial network, characterized by comprising: transforming an infrared image to be measured into a different-spectral-band infrared image using a generator sub-network;
wherein the training of the generator sub-network comprises:
(1) dividing several matched pairs of infrared images from two spectral bands into training sample pairs and test sample pairs, and constructing a generative adversarial network, the generative adversarial network comprising a generator sub-network and a discriminator sub-network;
(2) feeding the training sample pairs into the generative adversarial network, computing the losses of the discriminator sub-network and the generator sub-network with a loss function based on a gradient constraint, and updating the discriminator sub-network and the generator sub-network by back-propagation;
(3) dividing the test sample pairs into several test image blocks, feeding the test image blocks into the generator sub-network to obtain transformed images, and obtaining the trained generator sub-network when the mean of the local correlation values between the transformed images and the test image blocks is greater than or equal to a threshold.
2. The different-spectral-band infrared image transformation method based on a gradient-constrained generative adversarial network according to claim 1, characterized in that constructing the generative adversarial network comprises:
based on a convolutional neural network, replacing the down-sampling operation with convolutional layers of stride k and replacing the fully connected layers with convolutional layers; adding a BN layer after each convolutional layer; in the generator sub-network, using the tanh function for the output layer and the ReLU function for all remaining layers; and using LeakyReLU as the activation function in the discriminator sub-network.
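A minimal sketch of the down-sampling arithmetic behind claim 2: a stride-k convolution shrinks the feature map the same way a pooling layer would. The kernel size and padding below are illustrative assumptions, not values taken from the patent.

```python
def conv_out(n, k, s, p=0):
    """Spatial size after a convolution: floor((n + 2p - k) / s) + 1."""
    return (n + 2 * p - k) // s + 1

# A stride-2, kernel-4, pad-1 convolution halves a 64x64 feature map to
# 32x32, playing the role of a pooling (down-sampling) layer.
half = conv_out(64, k=4, s=2, p=1)
```

Using strided convolutions instead of pooling keeps the down-sampling learnable, which is the usual motivation for this substitution in GAN generators and discriminators.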
3. The different-spectral-band infrared image transformation method based on a gradient-constrained generative adversarial network according to claim 1 or 2, characterized in that the threshold ranges from 0.8 to 1.
4. The different-spectral-band infrared image transformation method based on a gradient-constrained generative adversarial network according to claim 1 or 2, characterized in that the loss function comprises the loss function of the discriminator sub-network and the loss function of the generator sub-network; the loss function of the discriminator sub-network adds a weight-update-amplitude constraint to the loss function of the generative adversarial network, and the loss function of the generator sub-network adds a gradient-feature-difference constraint and a pixel-level-difference constraint to the loss function of the generative adversarial network.
5. The different-spectral-band infrared image transformation method based on a gradient-constrained generative adversarial network according to claim 4, characterized in that the loss function of the generator sub-network is:

L_G = L_GAN(G, D) + λ1·L_l2(G) + λ2·L_feat(G)

where L_GAN(G, D) is the loss function of the generative adversarial network, L_l2(G) is the pixel-level difference, L_feat(G) is the gradient-feature difference, λ1 is the pixel-level-difference weight, λ2 is the gradient-feature-difference weight, G is the generator sub-network, D is the discriminator sub-network, and L_G is the loss function of the generator sub-network.
6. The different-spectral-band infrared image transformation method based on a gradient-constrained generative adversarial network according to claim 5, characterized in that the pixel-level difference is:

L_l2(G) = (1/(M·N)) · E_{x,y}[|y − G(x)|_2]

where M and N respectively denote the number of rows and columns of the image G(x) output by the generator sub-network during training, |y − G(x)|_2 denotes the Euclidean distance between y and G(x), y is the different-spectral-band infrared image of a training sample pair used for comparison with G(x), x is the infrared image of the training sample pair input to the generator sub-network for transformation, and E_{x,y}[|y − G(x)|_2] is the expectation of |y − G(x)|_2 over the training sample pairs.
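The pixel-level term of claim 6 can be sketched for a single sample pair as below. The per-pixel normalization by M·N is an assumption made for this sketch; the published text defines M, N and the Euclidean distance but omits the explicit formula.

```python
import numpy as np

def pixel_l2_loss(y, g_x):
    """Pixel-level difference for one pair: the Euclidean distance between
    the target image y and the generated image G(x), divided by the M*N
    pixel count (normalization by M*N is an assumption of this sketch).
    The expectation in the claim is the mean of this value over pairs."""
    m, n = y.shape
    return float(np.sqrt(((y - g_x) ** 2).sum()) / (m * n))
```

For identical images the loss is 0, and it grows with the Euclidean distance between the generated and target images.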
7. The different-spectral-band infrared image transformation method based on a gradient-constrained generative adversarial network according to claim 5, characterized in that the gradient-feature difference is:

L_feat(G) = E_{x,y}[|Ncc(f_y, f_G(x)) − 1|]

where f_y is the gradient feature of the different-spectral-band infrared image y of a training sample pair used for comparison with G(x), f_G(x) is the gradient feature of the image G(x) output by the generator sub-network during training, Ncc(f_y, f_G(x)) is the cross-correlation value of f_y and f_G(x), and E_{x,y}[|Ncc(f_y, f_G(x)) − 1|] is the expectation of |Ncc(f_y, f_G(x)) − 1| over the training sample pairs.
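A per-sample sketch of the gradient-feature term in claim 7. The patent does not name the gradient operator, so simple forward finite differences stand in for the gradient features here; Ncc is taken as zero-mean normalized cross-correlation, as in the patch evaluation.

```python
import numpy as np

def grad_features(img):
    """Gradient feature vector: horizontal and vertical finite differences,
    flattened and concatenated (the choice of operator is an assumption)."""
    gx = np.diff(img, axis=1)
    gy = np.diff(img, axis=0)
    return np.concatenate([gx.ravel(), gy.ravel()])

def ncc(a, b, eps=1e-8):
    """Zero-mean normalized cross-correlation of two feature vectors."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + eps))

def grad_feat_loss(y, g_x):
    """|Ncc(f_y, f_G(x)) - 1| for one sample pair; the expectation in the
    claim is the mean of this value over the training pairs."""
    return abs(ncc(grad_features(y), grad_features(g_x)) - 1.0)
```

The loss is 0 when the generated image reproduces the gradient structure of the target exactly, which is the edge-preserving constraint the claim is after.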
8. The different-spectral-band infrared image transformation method based on a gradient-constrained generative adversarial network according to claim 4, characterized in that the loss function of the discriminator sub-network is:

L_D = −L_GAN(G, D) + λ3·L_penalty

where L_GAN(G, D) is the loss function of the generative adversarial network, L_penalty is the weight-update amplitude, λ3 is the weight-update-amplitude weight, D is the discriminator sub-network, and L_D is the loss function of the discriminator sub-network.
9. The different-spectral-band infrared image transformation method based on a gradient-constrained generative adversarial network according to claim 8, characterized in that the weight-update amplitude is:

L_penalty = E_x̂[(||∇_x̂ D(x̂)||_2 − 1)²]

where ||∇_x̂ D(x̂)||² denotes the sum of squares of the gradient of the discriminator sub-network output with respect to the input x̂, and x̂ is the interpolated image obtained by linear interpolation, using a random matrix, between the image output by the generator sub-network and the corresponding different-spectral-band infrared image of the training sample pair.
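Claims 9-10 describe a WGAN-GP-style gradient penalty; the element-wise interpolation step can be sketched as below. The gradient-norm term itself requires automatic differentiation through the discriminator and is omitted here.

```python
import numpy as np

def interpolate(real, fake, rng=None):
    """Claim 9/10 interpolation: an element-wise random matrix eps, uniform
    on [0, 1] and the same size as the real different-band image, blends the
    real image with the generator output to produce the point x_hat at which
    the discriminator's gradient norm is penalized."""
    rng = rng or np.random.default_rng()
    eps = rng.uniform(0.0, 1.0, size=real.shape)
    return eps * real + (1.0 - eps) * fake
```

Because eps is drawn per element, every pixel of x̂ lies on the segment between the corresponding real and generated pixel values, so the penalty is evaluated on images "between" the two distributions.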
10. The different-spectral-band infrared image transformation method based on a gradient-constrained generative adversarial network according to claim 9, characterized in that the parameters of the random matrix follow a uniform distribution on [0, 1], and the size of the random matrix is the same as the corresponding different-spectral-band infrared image of the training sample pair.
CN201810351344.5A 2018-04-18 2018-04-18 Different-spectral-band infrared image transformation method for generating countermeasure network based on gradient constraint Active CN108596071B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810351344.5A CN108596071B (en) 2018-04-18 2018-04-18 Different-spectral-band infrared image transformation method for generating countermeasure network based on gradient constraint

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810351344.5A CN108596071B (en) 2018-04-18 2018-04-18 Different-spectral-band infrared image transformation method for generating countermeasure network based on gradient constraint

Publications (2)

Publication Number Publication Date
CN108596071A true CN108596071A (en) 2018-09-28
CN108596071B CN108596071B (en) 2020-07-10

Family

ID=63611209

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810351344.5A Active CN108596071B (en) 2018-04-18 2018-04-18 Different-spectral-band infrared image transformation method for generating countermeasure network based on gradient constraint

Country Status (1)

Country Link
CN (1) CN108596071B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111709903A (en) * 2020-05-26 2020-09-25 中国科学院长春光学精密机械与物理研究所 Infrared and visible light image fusion method

Also Published As

Publication number Publication date
CN108596071B (en) 2020-07-10

Similar Documents

Publication Publication Date Title
CN109993220B (en) Multi-source remote sensing image classification method based on double-path attention fusion neural network
CN111738124B (en) Remote sensing image cloud detection method based on Gabor transformation and attention
CN108399362A (en) A kind of rapid pedestrian detection method and device
CN110532859A (en) Remote Sensing Target detection method based on depth evolution beta pruning convolution net
CN110728324B (en) Depth complex value full convolution neural network-based polarimetric SAR image classification method
CN111523521A (en) Remote sensing image classification method for double-branch fusion multi-scale attention neural network
CN111353531B (en) Hyperspectral image classification method based on singular value decomposition and spatial spectral domain attention mechanism
CN109558806A (en) The detection method and system of high score Remote Sensing Imagery Change
CN107944483B (en) Multispectral image classification method based on dual-channel DCGAN and feature fusion
CN109389080A (en) Hyperspectral image classification method based on semi-supervised WGAN-GP
CN103955926A (en) Method for remote sensing image change detection based on Semi-NMF
CN107967474A (en) A kind of sea-surface target conspicuousness detection method based on convolutional neural networks
CN109426773A (en) A kind of roads recognition method and device
CN106228130A (en) Remote sensing image cloud detection method of optic based on fuzzy autoencoder network
WO2021205424A2 (en) System and method of feature detection in satellite images using neural networks
Feng et al. NPALoss: Neighboring pixel affinity loss for semantic segmentation in high-resolution aerial imagery
CN117726550B (en) Multi-scale gating attention remote sensing image defogging method and system
Ma et al. A spectral grouping-based deep learning model for haze removal of hyperspectral images
CN115393404A (en) Double-light image registration method, device and equipment and storage medium
CN105913451B (en) A kind of natural image superpixel segmentation method based on graph model
CN114299382A (en) Hyperspectral remote sensing image classification method and system
CN108596071A (en) The different spectral coverage infrared image transform method of confrontation network is generated based on gradient constraint
CN110503113A (en) A kind of saliency object detection method restored based on low-rank matrix
CN116977747B (en) Small sample hyperspectral classification method based on multipath multi-scale feature twin network
CN116630723A (en) Hyperspectral ground object classification method based on large-kernel attention mechanism and MLP (Multi-level particle swarm optimization) mixing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant