CN109902715A - Infrared dim and small target detection method based on a context aggregation network - Google Patents

Infrared dim and small target detection method based on a context aggregation network

Info

Publication number
CN109902715A
Authority
CN
China
Prior art keywords: model, stdnet, training, image, context
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910049019.8A
Other languages
Chinese (zh)
Other versions
CN109902715B (en)
Inventor
王欢
石曼淑
任明武
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN201910049019.8A priority Critical patent/CN109902715B/en
Publication of CN109902715A publication Critical patent/CN109902715A/en
Application granted granted Critical
Publication of CN109902715B publication Critical patent/CN109902715B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an infrared dim and small target detection method based on a context aggregation network. First, an STDNet model is built. Then a training set is constructed from synthetic samples, and a weighted mean-square-error loss function for the STDNet model is designed according to the local signal-to-clutter ratio. Next, the STDNet model is trained on this training set. The model is then fine-tuned on sample images from real application scenarios that contain genuine dim targets, using a learning rate lower than that of the initial training stage. Finally, a test sample is fed into the trained STDNet model, and its output is thresholded, connected regions are extracted, and their centroids are computed, completing the infrared dim and small target detection. By exploiting both global and local features, the invention detects dim targets while strongly suppressing complex background clutter.

Description

Infrared dim and small target detection method based on a context aggregation network
Technical field
The invention belongs to the field of infrared image analysis, and specifically relates to an infrared small target detection method based on a context aggregation network.
Background art
The targets addressed by infrared dim and small target detection are not only small in size but also very weak in intensity, and are often submerged in complex backgrounds, which leads to high miss rates and high false alarm rates. Existing infrared dim target detection methods still have shortcomings in solving this problem: filtering-based methods tend to produce many false alarms at background edges; methods based on contrast and saliency are sensitive to complex edge clutter and salt-and-pepper noise; methods based on decomposing the image into background and target matrices are sensitive to sparse background clutter; and methods based on conventional machine learning are limited by their receptive field size, so their false alarm rates remain high. In short, robust dim target detection in infrared images remains challenging.
Summary of the invention
The purpose of the invention is to provide an infrared dim and small target detection method based on a context aggregation network that solves both the severe imbalance between target and background pixel counts and the false alarms produced by complex background clutter, with a strong ability to suppress background interference and enhance targets.
The technical solution that achieves this purpose is an infrared dim and small target detection method based on a context aggregation network, comprising the following steps:
Step 1: build the STDNet model, as follows:
Step 11: select a context aggregation network with a maximum dilation factor of 64 as the basic framework;
Step 12: connect three context network modules one after another, so that the dilation factor of the resulting network first grows, then shrinks, then grows again as powers of 2: the dilation factor of the first context network module increases from 1 to 64, that of the second decreases from 64 to 1, and that of the third increases from 1 to 64 again; each layer of the resulting network applies three operations, Conv + ABN + leaky-ReLU, where Conv denotes convolution, ABN denotes adaptive batch normalization, and leaky-ReLU denotes the activation function;
Step 13: append an output layer to the resulting model, the output layer being a pure convolutional layer with a 1 × 1 kernel and no batch normalization or activation function;
Step 14: directly connect convolutional layers that share the same dilation factor;
Step 2: construct a training set from synthetic samples;
Step 3: design the weighted mean-square-error loss function of the STDNet model according to the local signal-to-clutter ratio;
Step 4: train the STDNet model on the training set;
Step 5: fine-tune the STDNet model on sample images from real application scenarios that contain genuine dim targets, using a learning rate lower than that of step 4; feed a test sample into the trained STDNet model, threshold the model output, extract connected regions, and compute their centroids, completing the infrared dim and small target detection.
Preferably, the convolution kernel in step 12 is 3 × 3, and adaptive batch normalization is defined as ABN(x) = w1·x + w2·BN(x), where w1 and w2 are learned scalar weights, BN(x) is the batch normalization operator, and ABN(x) is the adaptive batch normalization; the leaky ReLU activation function is Φ(x) = max(0.2x, x).
Preferably, the synthetic samples in step 2 are constructed as follows:
Step 21: extract image patches of fixed size from natural-scene infrared images as the background images of the training samples;
Step 22: randomly choose a target position on each background image, and generate a target whose gray-level distribution follows a two-dimensional Gaussian function with a preset size and signal-to-clutter ratio;
Step 23: superimpose the target from step 22 at the chosen position on the background image from step 21 to obtain a synthetic sample; these samples form the training set.
Preferably, the weighted mean-square-error loss function of step 3 is designed from the local signal-to-clutter ratio as follows:
Step 31: introduce a per-pixel target/background weight ξj into the L2 loss to obtain the weighted loss function L(θ) = (1/(N·P)) Σ_{i=1..N} Σ_{j=1..P} ξj·(Oij(θ) − Yij)², where N is the number of training images, P the number of pixels in a single image, Oij(θ) the output value of the j-th pixel of the i-th image, Yij the value of the j-th pixel of the i-th ground-truth image, and ξj the weight assigned to a target or background pixel, namely ξj = ρ1 if j ∈ Ta and ξj = 1 if j ∈ Bk, where j indexes pixels, Bk and Ta denote the background and target classes, and ρ1 is a real number greater than 1;
Step 32: compute the local signal-to-clutter ratio image of each image in the training set, threshold it with th = w3·Mean(O(θ)) + (1 − w3)·Max(O(θ)) to obtain a binary local signal-to-clutter ratio map, and collect the pixels whose value is 1 in that map into the set Hs;
Step 33: from the pixels selected in step 32, define the new weight ξ″j, namely ξ″j = ρ if j ∈ Ta ∪ Hs and ξ″j = 1 otherwise, where Ta denotes the target class, Hs is the pixel set selected in step 32, and ρ is a real number greater than 1;
Step 34: substitute the new weight ξ″j for the weight ξj of step 31 to obtain the final loss function.
Preferably, step 4 trains the STDNet model on the training set as follows: feed the training set in batches to the improved context aggregation network; after the entire training set has been traversed once, shuffle it again and run the next epoch; repeat until the model converges and then stop training.
Compared with the prior art, the invention has notable advantages: it effectively imitates the way a human observer detects dim targets with the naked eye; it has a large receptive field and can exploit global information to better distinguish genuine dim targets from complex background clutter; its background suppression ability is strong; its detection rate is high and its false alarm rate low; and it is widely applicable.
Description of the drawings
Fig. 1 is the STDNet model architecture of the invention.
Fig. 2 illustrates the stages of the local signal-to-clutter ratio computation: Fig. 2(a) is the original image, in which the thin circle marks the real target and the thick circle marks the clutter most similar to the target; Fig. 2(b) is the multi-scale local signal-to-clutter ratio map; Fig. 2(c) is the result of thresholding and binarizing Fig. 2(b).
Fig. 3 shows infrared images of mixed scenes.
Fig. 4 shows ROC curves on three infrared sequences and one single-frame infrared image set: Fig. 4(a) on the aircraft image set, Fig. 4(b) on the cloudy-sky image set, Fig. 4(c) on the air-flow image set, and Fig. 4(d) on the single-frame infrared image set.
Fig. 5 shows, for the invention and existing methods, the detection results on one representative image from each of the three infrared sequences and the single-frame image set, together with the corresponding ground-truth target maps: Fig. 5(a) on the aircraft sequence, Fig. 5(b) on the cloudy-sky sequence, Fig. 5(c) on the air-flow sequence, and Fig. 5(d) on the single-frame image set.
Detailed description of embodiments
An infrared dim and small target detection method based on a context aggregation network comprises the following steps:
Step 1: build the STDNet model. The backbone of the STDNet model is a context aggregation network (CAN). The invention improves the CAN model so that it is better suited to dim target detection. As shown in Fig. 1, the network comprises 20 layers in total. Each rectangular block denotes a convolutional layer; the number below it is the layer index, and the number inside it is the dilation factor used by that layer. The first context aggregation module consists of 7 convolutional layers, the height of each block indicating the size of the corresponding dilation factor; the dilation factors, written below each layer, are 1, 2, 4, 8, 16, 32, 64. The second context aggregation module comprises 6 layers with dilation factors 32, 16, 8, 4, 2, 1. The third context aggregation module comprises 6 layers with dilation factors 2, 4, 8, 16, 32, 64. Selective connections (solid lines in the figure) pass each layer's feature maps forward to the later convolutional layers that share its dilation factor, stacking the feature maps along the channel dimension; that is, the feature maps of an earlier convolutional layer are passed directly to the connected later layers, so that feature maps are shared across layers. Taking the 15th layer (inside the third sub-CAN) as an example: its dilation factor is 4 and its preceding layer has dilation factor 2, so its input is the concatenation of the output feature maps of the 2nd, 12th, and 14th layers, all three of which have the same dilation factor 2. An output layer follows the three context aggregation modules; it is a convolutional layer with a 1 × 1 kernel, dilation factor 1, and no activation function. The input and output of the whole model are single-channel images; each intermediate layer outputs 24 feature maps; and all feature maps and the output image have the same width and height as the input infrared image.
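The layer count and dilation schedule described above can be sanity-checked with a short sketch. The dilation factors and the 3 × 3 kernel size are taken from the description; the receptive-field arithmetic for a stack of dilated 3 × 3 convolutions (each layer widens the field by 2·d pixels) is standard:

```python
# Dilation schedule of the three context aggregation modules, as listed
# in the description (19 dilated 3x3 conv layers), plus one 1x1 output layer.
module1 = [1, 2, 4, 8, 16, 32, 64]   # grows from 1 to 64
module2 = [32, 16, 8, 4, 2, 1]       # shrinks back to 1
module3 = [2, 4, 8, 16, 32, 64]      # grows to 64 again
dilations = module1 + module2 + module3

# A 3x3 convolution with dilation d widens the receptive field by 2*d
# pixels; the final 1x1 output layer adds nothing.
receptive_field = 1 + sum(2 * d for d in dilations)

total_layers = len(dilations) + 1  # 20 layers, matching the description
```

With this schedule the receptive field reaches 633 × 633 pixels, larger than the 128 × 128 training patches, which is why the network can use genuinely global context when separating dim targets from clutter.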
As the network structure shows, it contains two kinds of convolutional layer: the final 1 × 1 convolution is a pure convolutional layer without any activation function, while every remaining layer is the combination of three operations, namely convolution (Conv), adaptive batch normalization (ABN), and the leaky ReLU activation function Φ(x) = max(0.2x, x). ABN, the adaptive batch normalization, is defined as:
ABN(x) = w1·x + w2·BN(x)   (1)
where w1 and w2 are learned scalar weights and BN(x) is the batch normalization operator.
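As a minimal illustration of formula (1) and the activation, ABN and the leaky ReLU can be written in NumPy; the w1 and w2 arguments here are plain floats standing in for the learned scalar weights:

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    # Plain batch normalization over the whole feature map (no affine part).
    return (x - x.mean()) / np.sqrt(x.var() + eps)

def abn(x, w1, w2):
    # Adaptive batch normalization, formula (1): a learned mix of the
    # identity path and the batch-normalized path.
    return w1 * x + w2 * batch_norm(x)

def leaky_relu(x):
    # Leaky ReLU activation: max(0.2*x, x).
    return np.maximum(0.2 * x, x)
```

With w2 = 0 this reduces to the identity and with w1 = 0 to ordinary batch normalization, so the learned mix lets each layer choose how much normalization it wants.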
Step 2: infrared dim target images are hard to obtain, yet training the model requires many samples. To obtain enough training samples, the invention constructs them synthetically: an existing infrared thermal camera was used to capture mixed natural-terrain-and-sky scenes (containing no targets) under different weather conditions to serve as the background images in which dim targets reside, as shown in Fig. 3. The training samples are synthesized as follows:
(1) extract image patches of fixed size from the natural-scene infrared images as the background images of the training samples;
(2) randomly choose a target position on each background image, and generate a target whose gray-level distribution follows a two-dimensional Gaussian function with a preset size and signal-to-clutter ratio;
(3) superimpose the target from (2) at the chosen position on the background image from (1) to obtain a synthetic sample; these samples form the training set.
In certain embodiments, a sliding window is used to generate enough training samples from the scene images: the window slides 5 pixels at a time and crops a 128 × 128 image patch. On each patch, one training sample is created by randomly superimposing a two-dimensional Gaussian target; the resulting samples form the training set.
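The synthesis procedure above can be sketched as follows. `synthesize_sample` is a hypothetical helper, and setting the Gaussian amplitude from the desired signal-to-clutter ratio via the local background statistics is an assumption about the exact recipe:

```python
import numpy as np

def synthesize_sample(background, size=5, scr=3.0, rng=None):
    """Superimpose a 2-D Gaussian target on a background patch at a random
    position; return the synthetic image and the ground-truth mask."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = background.shape
    cy = rng.integers(size, h - size)
    cx = rng.integers(size, w - size)

    # Target amplitude chosen so that peak contrast over the local
    # background standard deviation matches the preset SCR (an assumption).
    local = background[cy - size:cy + size + 1, cx - size:cx + size + 1]
    amplitude = scr * local.std()

    # 2-D Gaussian gray-level profile; sigma chosen so the target roughly
    # fits the preset size (an assumption).
    y, x = np.mgrid[0:h, 0:w]
    sigma = size / 3.0
    target = amplitude * np.exp(-(((y - cy) ** 2 + (x - cx) ** 2)
                                  / (2.0 * sigma ** 2)))
    truth = (target > 0.1 * amplitude).astype(np.float32)
    return background + target, truth

bg = np.random.default_rng(0).normal(100.0, 5.0, (128, 128))
img, truth = synthesize_sample(bg, rng=np.random.default_rng(1))
```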
Step 3: design the weighted mean-square-error loss function of the STDNet model according to the local signal-to-clutter ratio. The loss functions commonly used to train deep networks are the L2 loss, the L1 loss, and the cross-entropy loss; since the L2 loss performs best when training the original CAN model, STDNet also uses the L2 loss. Let Oi(θ) be the model output for the i-th image and Yi the corresponding ground truth; the L2 loss is then defined as
L2(θ) = (1/N) Σ_{i=1..N} ‖Oi(θ) − Yi‖²   (2)
where θ denotes all network parameters and N the number of training samples.
To enable the STDNet model to learn to detect all small targets while generating as few false alarms as possible, two key issues must be addressed in the loss function. One is the imbalance between the numbers of target and background pixels; the other is the decoy regions that may cause STDNet to produce many false alarms. The invention solves both problems by modifying the L2 loss above so as to increase the weight of target pixels and the weight of potentially false-alarm pixels. This is explained below with specific embodiments:
(1) Increasing the target pixel weight
In certain embodiments, target pixels make up only about 1/600 of the background pixels of the whole training set (each training image is 128 × 128). Because of this imbalance between target and background counts, the L2 loss is bound to be biased and to neglect the importance of small targets. A direct remedy is a weighted L2 loss that handles the imbalance by assigning a large weight to targets and a small weight to the background. The weight ξj is therefore defined to give pixels of different classes (target or background) different values:
ξj = ρ1 if j ∈ Ta, and ξj = 1 if j ∈ Bk   (3)
where j denotes the j-th pixel, Bk and Ta denote the background and target classes, and ρ1 is a real number greater than 1. Formula (3) assigns a larger weight to real target pixels to counter the pixel-level imbalance. Introducing formula (3) into formula (2) yields the weighted loss function
L(θ) = (1/(N·P)) Σ_{i=1..N} Σ_{j=1..P} ξj·(Oij(θ) − Yij)²   (4)
where N and P respectively denote the number of training images and the number of pixels per image, Oij(θ) is the output value of the j-th pixel of the i-th image, and Yij is the value of the j-th pixel of the i-th ground-truth image. This weighted loss function is crucial for maintaining a high detection rate, because missing a target pixel incurs a heavier penalty.
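The weighted mean-square-error loss just derived can be sketched directly in NumPy, with the per-pixel weights assembled beforehand into a map (ρ at target and decoy pixels, 1 elsewhere):

```python
import numpy as np

def weighted_l2_loss(outputs, truths, weights):
    """Weighted mean-square-error loss.

    outputs, truths: arrays of shape (N, H, W).
    weights: per-pixel weight map broadcastable to the same shape."""
    n = outputs.shape[0]
    p = outputs.shape[1] * outputs.shape[2]
    return float((weights * (outputs - truths) ** 2).sum() / (n * p))
```

With all weights equal to 1 this reduces to the ordinary per-pixel mean-square error, so the weighting is a strict generalization of the plain L2 loss.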
(2) Increasing the weight of potentially false-alarm pixels in the training set
Although the imbalance problem is clearly taken into account, the model is still observed to leave a large residual in background regions. This is related to another issue: potentially false-alarm pixels. A weighted L2 loss may give targets a larger weight, yet there can still be background regions that look extremely similar to targets. In other words, exactly these regions are the most likely false alarms; assigning larger weights only to targets leaves the model unable to handle such potential "decoys". As shown in Fig. 2, even though the real target (thin circle) has higher brightness, many clutter patches (thick circle) in the infrared background look very similar to it.
Analysis of the decoy pixels produced by STDNet shows that false-alarm pixels usually differ markedly in brightness from their local surrounding background. The local SCR (signal-to-clutter ratio) is a common measure of the contrast between a target and its neighboring background, and it characterizes that contrast well. The local SCR within a fixed window is defined as
SCR = |ut − ub| / σb   (5)
where ut is the mean intensity of the target, and ub and σb are the mean intensity and standard deviation of the local background surrounding it. Formula (5) yields a confidence map of the same size as the original image. To capture decoys of different sizes, multiple windows can be used, e.g. 3 × 3, 5 × 5, up to m × m, where m is the maximum window size (m = 15). The confidence maps from the different window scales are then fused by pixel-wise maximum pooling to obtain the multi-scale local SCR.
However, if a large local SCR alone is used as the criterion for selecting false-alarm pixels during training, some false-alarm pixels in background borders and strongly textured regions may be missed. The reason is that pixels in these regions usually have a large standard deviation σb, which lowers their local SCR (whereas a pixel with a higher local SCR is more likely to be a false alarm). Replacing the numerator |ut − ub| in formula (5) by (ut − ub)² amplifies the effect of the brightness difference, so that false-alarm pixels in background borders and strongly textured regions can also be extracted.
Although targets are small, their sizes still vary, so the maximum possible target size must be set and the above procedure repeated uniformly over several window sizes, realizing a multi-scale analysis of the local SCR. The final SCR value of each pixel is the maximum over all scales, and the whole map is normalized to [0, 1] to obtain the final local SCR map. During training, background pixels with larger local SCR values are more likely to become false alarms, and these are the ones the model must attend to. A threshold operation therefore retains the background pixels of the training set with larger local SCR, and thresholding yields a binary local SCR map. The threshold operation is defined as:
th = w3·Mean(O(θ)) + (1 − w3)·Max(O(θ))   (6)
Based on the above analysis, background pixels with a high local SCR should also be given a large weight, so a second weighting coefficient is defined:
ξ′j = ρ2 if j ∈ Hs, and ξ′j = 1 otherwise   (7)
where Hs denotes the false-alarm pixel set computed from the local SCR above, and ρ2 is a real number greater than 1; this setting is intended to reduce the false alarm rate. Since a high detection rate and a low false alarm rate are equally important for infrared dim target detection, ρ1 = ρ2 = ρ is set, and merging formulas (3) and (7) gives:
ξ″j = ρ if j ∈ Ta ∪ Hs, and ξ″j = 1 otherwise   (8)
where ρ is empirically set to 10. With this combined weight, ξ″j replaces ξj in the weighted loss function above to obtain the final loss function.
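A simplified sketch of the multi-scale local SCR map and the threshold of formula (6). It treats each pixel as the "target" and its surrounding window as the local background, uses the squared numerator (ut − ub)² discussed above, and fuses scales by pixel-wise maximum; the box-filter implementation and the exact window handling are assumptions:

```python
import numpy as np

def box_mean(img, k):
    # Local mean over a k x k window (k odd), via padded integral images.
    r = k // 2
    p = np.pad(img, r, mode='edge')
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))
    h, w = img.shape
    s = (c[k:k + h, k:k + w] - c[0:h, k:k + w]
         - c[k:k + h, 0:w] + c[0:h, 0:w])
    return s / (k * k)

def local_scr(img, max_win=15, eps=1e-6):
    # Multi-scale local SCR with the squared numerator (u_t - u_b)^2:
    # fuse the per-scale maps by pixel-wise maximum, then normalize to [0, 1].
    out = np.zeros_like(img, dtype=float)
    for k in range(3, max_win + 1, 2):
        mean = box_mean(img, k)
        var = np.maximum(box_mean(img * img, k) - mean ** 2, 0.0)
        scr = (img - mean) ** 2 / (np.sqrt(var) + eps)
        out = np.maximum(out, scr)
    out -= out.min()
    return out / (out.max() + eps)

def false_alarm_set(scr_map, w3=0.5):
    # Threshold of formula (6) applied to the SCR map; True marks the
    # potential false-alarm pixels that form the set Hs.
    th = w3 * scr_map.mean() + (1.0 - w3) * scr_map.max()
    return scr_map > th
```

On a synthetic frame with one bright pixel, that pixel receives the highest normalized SCR and survives the threshold, which is the behavior wanted from the set Hs.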
Step 4: train the STDNet model on the training set. Specifically, feed the training set in batches to the improved context aggregation network; after the entire training set has been traversed once, shuffle it and run the next epoch; repeat until the model converges and then stop training.
In certain embodiments, the batch size is set to 10 and AdaGrad is used for optimization. STDNet is trained from scratch, with the learning rate reduced over time from 0.01 to 0.001 and then to 0.0001. All network weights in STDNet are initialized from a Gaussian distribution with mean 0 and standard deviation 0.02. The whole training process stops after 300,000 iterations, and the local signal-to-clutter ratio weight map is updated every 100,000 iterations.
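AdaGrad, the optimizer used above, keeps a per-parameter running sum of squared gradients and divides each step by its square root; a generic sketch (not the authors' training code) on a toy quadratic:

```python
import numpy as np

class AdaGrad:
    """Minimal AdaGrad: accumulate squared gradients, scale each step."""
    def __init__(self, lr=0.01, eps=1e-8):
        self.lr, self.eps = lr, eps
        self.g2 = None

    def step(self, params, grads):
        if self.g2 is None:
            self.g2 = np.zeros_like(params)
        self.g2 += grads ** 2
        # Per-parameter step size shrinks as gradients accumulate.
        return params - self.lr * grads / (np.sqrt(self.g2) + self.eps)

# Minimize f(w) = w^2 (gradient 2w) as a sanity check.
opt = AdaGrad(lr=0.5)
w = np.array([4.0])
for _ in range(200):
    w = opt.step(w, 2.0 * w)
```

The per-parameter step size decaying automatically is what lets a single initial learning rate serve very differently scaled weights.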
Step 5: model deployment and online detection
Fine-tune the STDNet model on sample images from real application scenarios that contain genuine dim targets, using a learning rate lower than that of the training stage; in certain embodiments the learning rate is set to 0.00001. This completes model deployment.
A test image is fed directly into the trained STDNet network model. The model's computation produces an output, i.e. a gray-level image of the same size as the original that expresses dim-target confidence: the larger a gray value, the more likely the pixel belongs to a target.
The model output is binarized with the threshold given by formula (6); the connected regions of the binary image are extracted and the centroid of each connected region is computed. Each connected region then corresponds to one dim target, and detection of the test image is complete.
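The post-processing of step 5 (thresholding with formula (6), connected-region extraction, centroid computation) can be sketched with a simple 4-connected flood fill; a real implementation might use a library labeling routine instead:

```python
import numpy as np
from collections import deque

def detect_targets(confidence, w3=0.5):
    """Threshold the confidence map (formula (6)), extract 4-connected
    regions, and return one centroid (row, col) per region."""
    th = w3 * confidence.mean() + (1.0 - w3) * confidence.max()
    binary = confidence > th
    visited = np.zeros_like(binary, dtype=bool)
    h, w = binary.shape
    centroids = []
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and not visited[sy, sx]:
                # Flood-fill one connected region and collect its pixels.
                queue, pixels = deque([(sy, sx)]), []
                visited[sy, sx] = True
                while queue:
                    y, x = queue.popleft()
                    pixels.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny, nx] and not visited[ny, nx]):
                            visited[ny, nx] = True
                            queue.append((ny, nx))
                ys, xs = zip(*pixels)
                centroids.append((sum(ys) / len(ys), sum(xs) / len(xs)))
    return centroids
```

Each returned centroid stands for one detected dim target.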
As shown in Fig. 4, ROC curves on the four sequences compare the STDNet network model with 13 representative methods (the max-median filter Maxmedian, morphological filtering Tophat, the support-vector-value anisotropic filter bank DSVT, the multi-scale facet model MFM, the low-rank-and-sparse-decomposition-based infrared small target detection CLSDM, the local contrast method LCM, the weighted local difference measure WLDM, the patch-similarity method PatchSim, the infrared patch-image model IPI, the non-negative infrared patch-image model NIPPS, the infrared dim target detection method RIPT based on structure tensors and sparse reweighting, the function-link neural network FCnet, and the feed-forward neural network Front). The model of the invention achieves the highest detection performance and attains a high detection rate at the lowest false alarm rate. As shown in Fig. 5, one representative image from each subset is given together with the hand-labeled ground-truth target image and the detection results of all methods; while the method of the invention detects the targets correctly, it leaves the smallest background residual, i.e. its background suppression is the best.

Claims (5)

1. An infrared dim and small target detection method based on a context aggregation network, characterized by comprising the following steps:
Step 1: build the STDNet model, as follows:
Step 11: select a context aggregation network with a maximum dilation factor of 64 as the basic framework;
Step 12: connect three context network modules one after another, so that the dilation factor of the resulting network first grows, then shrinks, then grows again as powers of 2: the dilation factor of the first context network module increases from 1 to 64, that of the second decreases from 64 to 1, and that of the third increases from 1 to 64 again; each layer of the resulting network applies three operations, Conv + ABN + leaky-ReLU, where Conv denotes convolution, ABN denotes adaptive batch normalization, and leaky-ReLU denotes the activation function;
Step 13: append an output layer to the resulting model, the output layer being a pure convolutional layer with a 1 × 1 kernel and no batch normalization or activation function;
Step 14: directly connect convolutional layers that share the same dilation factor;
Step 2: construct a training set from synthetic samples;
Step 3: design the weighted mean-square-error loss function of the STDNet model according to the local signal-to-clutter ratio;
Step 4: train the STDNet model on the training set;
Step 5: fine-tune the STDNet model on sample images from real application scenarios that contain genuine dim targets, using a learning rate lower than that of step 4; feed a test sample into the trained STDNet model, threshold the model output, extract connected regions, and compute their centroids, completing the infrared dim and small target detection.
2. The infrared dim and small target detection method based on a context aggregation network according to claim 1, characterized in that the convolution kernel in step 12 is 3 × 3; adaptive batch normalization is defined as ABN(x) = w1·x + w2·BN(x), where w1 and w2 are learned scalar weights, BN(x) is the batch normalization operator, and ABN(x) is the adaptive batch normalization; and the leaky ReLU activation function is Φ(x) = max(0.2x, x).
3. The infrared dim and small target detection method based on a context aggregation network according to claim 1, characterized in that the synthetic samples of step 2 are constructed as follows:
Step 21: extract image patches of fixed size from natural-scene infrared images as the background images of the training samples;
Step 22: randomly choose a target position on each background image, and generate a target whose gray-level distribution follows a two-dimensional Gaussian function with a preset size and signal-to-clutter ratio;
Step 23: superimpose the target from step 22 at the chosen position on the background image from step 21 to obtain a synthetic sample; these samples form the training set.
4. The infrared dim and small target detection method based on a context aggregation network according to claim 1, characterized in that the weighted mean-square-error loss function of step 3 is designed from the local signal-to-clutter ratio as follows:
Step 31: introduce a per-pixel target/background weight ξj into the L2 loss to obtain the weighted loss function L(θ) = (1/(N·P)) Σ_{i=1..N} Σ_{j=1..P} ξj·(Oij(θ) − Yij)², where N is the number of training images, P the number of pixels in a single image, Oij(θ) the output value of the j-th pixel of the i-th image, Yij the value of the j-th pixel of the i-th ground-truth image, and ξj the weight assigned to a target or background pixel, namely ξj = ρ1 if j ∈ Ta and ξj = 1 if j ∈ Bk, where j indexes pixels, Bk and Ta denote the background and target classes, and ρ1 is a real number greater than 1;
Step 32: compute the local signal-to-clutter ratio image of each image in the training set, threshold it with th = w3·Mean(O(θ)) + (1 − w3)·Max(O(θ)) to obtain a binary local signal-to-clutter ratio map, and collect the pixels whose value is 1 in that map into the set Hs;
Step 33: from the pixels selected in step 32, define the new weight ξ″j, namely ξ″j = ρ if j ∈ Ta ∪ Hs and ξ″j = 1 otherwise, where Ta denotes the target class, Hs is the pixel set selected in step 32, and ρ is a real number greater than 1;
Step 34: substitute the new weight ξ″j for the weight ξj of step 31 to obtain the final loss function.
5. The infrared dim and small target detection method based on a context aggregation network according to claim 1, characterized in that step 4 trains the STDNet model on the training set as follows: feed the training set in batches to the improved context aggregation network; after the entire training set has been traversed once, shuffle it again and run the second epoch; repeat until the model converges and then stop training.
CN201910049019.8A 2019-01-18 2019-01-18 Infrared dim target detection method based on context aggregation network Active CN109902715B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910049019.8A CN109902715B (en) 2019-01-18 2019-01-18 Infrared dim target detection method based on context aggregation network


Publications (2)

Publication Number Publication Date
CN109902715A true CN109902715A (en) 2019-06-18
CN109902715B CN109902715B (en) 2022-09-06

Family

ID=66943831

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910049019.8A Active CN109902715B (en) 2019-01-18 2019-01-18 Infrared dim target detection method based on context aggregation network

Country Status (1)

Country Link
CN (1) CN109902715B (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110472583A * 2019-08-16 2019-11-19 广东工业大学 Facial micro-expression recognition system based on deep learning
CN110706208A (en) * 2019-09-13 2020-01-17 东南大学 Infrared dim target detection method based on tensor mean square minimum error
CN111243042A (en) * 2020-02-28 2020-06-05 浙江德尚韵兴医疗科技有限公司 Ultrasonic thyroid nodule benign and malignant characteristic visualization method based on deep learning
CN111353581A (en) * 2020-02-12 2020-06-30 北京百度网讯科技有限公司 Lightweight model acquisition method and device, electronic equipment and storage medium
CN111598899A (en) * 2020-05-18 2020-08-28 腾讯科技(深圳)有限公司 Image processing method, image processing apparatus, and computer-readable storage medium
CN112288026A (en) * 2020-11-04 2021-01-29 南京理工大学 Infrared weak and small target detection method based on class activation diagram
CN113297574A (en) * 2021-06-11 2021-08-24 浙江工业大学 Activation function adaptive change model stealing defense method based on reinforcement learning reward mechanism
CN113450413A (en) * 2021-07-19 2021-09-28 哈尔滨工业大学 Ship target detection method based on GF4 single-frame image
CN113781375A (en) * 2021-09-10 2021-12-10 厦门大学 Vehicle-mounted vision enhancement method based on multi-exposure fusion
CN114463619A (en) * 2022-04-12 2022-05-10 西北工业大学 Infrared dim target detection method based on integrated fusion features
CN114818838A (en) * 2022-06-30 2022-07-29 中国科学院国家空间科学中心 Low signal-to-noise ratio moving point target detection method based on pixel time domain distribution learning
CN117078998A (en) * 2023-06-30 2023-11-17 成都飞机工业(集团)有限责任公司 Target detection method, device, equipment and medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107229918A * 2017-05-26 2017-10-03 西安电子科技大学 SAR image target detection method based on fully convolutional neural networks
CN109002848A * 2018-07-05 2018-12-14 西华大学 Small target detection method based on a feature mapping neural network

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107229918A * 2017-05-26 2017-10-03 西安电子科技大学 SAR image target detection method based on fully convolutional neural networks
CN109002848A * 2018-07-05 2018-12-14 西华大学 Small target detection method based on a feature mapping neural network

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110472583B * 2019-08-16 2022-04-19 广东工业大学 Facial micro-expression recognition system based on deep learning
CN110472583A * 2019-08-16 2019-11-19 广东工业大学 Facial micro-expression recognition system based on deep learning
CN110706208A (en) * 2019-09-13 2020-01-17 东南大学 Infrared dim target detection method based on tensor mean square minimum error
CN111353581A (en) * 2020-02-12 2020-06-30 北京百度网讯科技有限公司 Lightweight model acquisition method and device, electronic equipment and storage medium
CN111353581B (en) * 2020-02-12 2024-01-26 北京百度网讯科技有限公司 Lightweight model acquisition method and device, electronic equipment and storage medium
CN111243042A (en) * 2020-02-28 2020-06-05 浙江德尚韵兴医疗科技有限公司 Ultrasonic thyroid nodule benign and malignant characteristic visualization method based on deep learning
CN111598899A (en) * 2020-05-18 2020-08-28 腾讯科技(深圳)有限公司 Image processing method, image processing apparatus, and computer-readable storage medium
CN112288026A (en) * 2020-11-04 2021-01-29 南京理工大学 Infrared weak and small target detection method based on class activation diagram
CN113297574B (en) * 2021-06-11 2022-08-02 浙江工业大学 Activation function adaptive change model stealing defense method based on reinforcement learning reward mechanism
CN113297574A (en) * 2021-06-11 2021-08-24 浙江工业大学 Activation function adaptive change model stealing defense method based on reinforcement learning reward mechanism
CN113450413A (en) * 2021-07-19 2021-09-28 哈尔滨工业大学 Ship target detection method based on GF4 single-frame image
CN113781375A (en) * 2021-09-10 2021-12-10 厦门大学 Vehicle-mounted vision enhancement method based on multi-exposure fusion
CN113781375B (en) * 2021-09-10 2023-12-08 厦门大学 Vehicle-mounted vision enhancement method based on multi-exposure fusion
CN114463619A (en) * 2022-04-12 2022-05-10 西北工业大学 Infrared dim target detection method based on integrated fusion features
CN114818838A (en) * 2022-06-30 2022-07-29 中国科学院国家空间科学中心 Low signal-to-noise ratio moving point target detection method based on pixel time domain distribution learning
CN114818838B (en) * 2022-06-30 2022-09-13 中国科学院国家空间科学中心 Low signal-to-noise ratio moving point target detection method based on pixel time domain distribution learning
CN117078998A (en) * 2023-06-30 2023-11-17 成都飞机工业(集团)有限责任公司 Target detection method, device, equipment and medium

Also Published As

Publication number Publication date
CN109902715B (en) 2022-09-06

Similar Documents

Publication Publication Date Title
CN109902715A (en) Infrared dim and small target detection method based on a context aggregation network
CN111259850B (en) Pedestrian re-identification method integrating random batch mask and multi-scale representation learning
CN105518709B (en) Method, system and computer program product for face recognition
CN105825511B (en) Image background clarity detection method based on deep learning
CN110148162A (en) Heterologous image matching method based on composition operators
CN107316295A (en) Fabric defect detection method based on a deep neural network
CN113065558A (en) Lightweight small target detection method combined with attention mechanism
CN107563999A (en) Chip defect recognition method based on convolutional neural networks
CN109325395A (en) Image recognition method, convolutional neural network model training method and device
CN109800631A (en) Fluorescence-encoded micro-beads image detection method based on masked-region convolutional neural networks
CN104899866B (en) Intelligent infrared small target detection method
CN109711288A (en) Remote sensing ship detection method based on feature pyramid and distance-constrained FCN
CN106446930A (en) Deep convolutional neural network-based robot working scene identification method
CN107016357A (en) Video pedestrian detection method based on temporal convolutional neural networks
CN110378232B (en) Improved SSD dual-network method for rapid detection of examinee positions in examination rooms
CN104103033B (en) View synthesis method
CN104182985B (en) Remote sensing image change detection method
CN108052881A (en) Method and apparatus for real-time detection of multi-class entity objects in construction site images
CN108280440A (en) Fruit forest recognition method and system
CN106530271B (en) Infrared image saliency detection method
CN108447055A (en) SAR image change detection based on SPL and CCN
CN107563433A (en) Infrared small target detection method based on convolutional neural networks
CN104200478B (en) Low-resolution touch screen image defect detection method based on sparse representation
CN106228130B (en) Remote sensing image cloud detection method based on fuzzy autoencoder network
CN107909109A (en) SAR image classification method based on saliency and multi-scale deep network model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant