CN110119728A - Remote sensing image cloud detection method based on a multiscale-fusion semantic segmentation network - Google Patents


Info

Publication number
CN110119728A
CN110119728A (application CN201910436645.2A; granted publication CN110119728B)
Authority
CN
China
Prior art keywords
remote sensing
convolution kernel
sensing images
size
cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910436645.2A
Other languages
Chinese (zh)
Other versions
CN110119728B (en
Inventor
彭宇 (Peng Yu)
郭玥 (Guo Yue)
于希明 (Yu Ximing)
马宁 (Ma Ning)
姚博文 (Yao Bowen)
刘大同 (Liu Datong)
彭喜元 (Peng Xiyuan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Institute of Technology
Original Assignee
Harbin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Institute of Technology
Priority to CN201910436645.2A
Publication of CN110119728A
Application granted
Publication of CN110119728B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Multimedia (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

A remote sensing image cloud detection method based on a multiscale-fusion semantic segmentation network, belonging to the technical field of cloud detection in remote sensing images. The invention addresses the low detection accuracy of existing cloud detection methods based on manually extracted features. The invention extracts shallow features with the first three sub-network stages and deep features with the last two sub-network stages, then fuses the extracted deep features with the shallow features. This fully exploits both the rich detail information contained in the shallow features and the rich semantic information contained in the deep features, merging the advantages of the two so that the segmentation of deep-feature boundaries is finer; the best cloud detection performance is reached by optimizing the ratio of deep to shallow features. The cloud-area detection error of the invention is below 1%. The invention can be applied to the technical field of cloud detection in remote sensing images.

Description

Remote sensing image cloud detection method based on a multiscale-fusion semantic segmentation network
Technical field
The invention belongs to the technical field of cloud detection in remote sensing images, and specifically relates to a remote sensing image cloud detection method.
Background technique
Remote sensing is an important means of obtaining information about earth resources and the environment, and cloud is the main factor degrading the quality of satellite remote sensing images. Under normal circumstances, about 50% of the earth's surface is covered by cloud, and the presence of cloud greatly complicates remote sensing image processing. Cloud-covered remote sensing images carry little usable information, yet they occupy large amounts of system storage and transmission bandwidth, reducing the utilization of satellite data. At present, apart from synthetic aperture radar sensors, which can penetrate cloud layers to acquire surface information, no sensor fully solves the cloud-cover problem of remote sensing images, and most current image data is still acquired by sensors operating in the visible band. High-accuracy cloud detection for visible-band remote sensing images has therefore become the key to improving the utilization of remote sensing data.
Cloud detection methods have evolved from manual judgment to computer processing. Early cloud detection and classification relied mainly on visual inspection by observers and therefore depended heavily on their subjective experience; with the massive growth of remote sensing data, manual judgment has become impractical, and automatic, fast and effective cloud detection and classification has become a research focus at satellite data processing centers.
Computer-based cloud detection is performed on the basis of extracted cloud features; feature extraction has continually moved toward deeper features, and the extraction mode has shifted from manual to automatic. The most intuitive difference between cloud and ground objects is the gray-level feature: cloud appears whitish in an image, and methods that detect cloud directly by gray-level thresholds are called threshold methods. Threshold methods are computationally simple, but they require prior knowledge, are affected by many factors, and have low detection accuracy. Since the gray-level feature cannot represent all characteristics of cloud, later cloud detection methods exploited further features, including frequency features, texture features, and so on. For example, some researchers divide an image into multiple parts, extract the gray-level, frequency and texture features of each part for training, and finally classify with a support vector machine (SVM).
The gray-level, frequency and texture features of cloud are shallow, manually extracted features, and cloud detection methods based on manually extracted features have the following problems:
(1) features are often extracted directly from the whole image, so, given the complexity of cloud shapes, images containing only small amounts of cloud are easily missed;
(2) extracting only shallow features makes it hard to distinguish ground objects whose features resemble cloud, giving poor robustness;
(3) only the rough position of cloud can be determined, so the accuracy of cloud-amount estimation is low.
Because of these problems with manually extracted features, existing cloud detection methods based on manual feature extraction have low detection accuracy.
Summary of the invention
The purpose of the invention is to solve the low detection accuracy of existing cloud detection methods based on manually extracted features.
The technical solution adopted by the invention to solve the above technical problem is a remote sensing image cloud detection method based on a multiscale-fusion semantic segmentation network, comprising the following steps:
Step 1: randomly select N0 images from a real panchromatic visible-band remote sensing image data set as the original remote sensing images;
preprocess the N0 original remote sensing images to obtain N0 preprocessed remote sensing images;
Step 2: input the N0 preprocessed remote sensing images as the training set into the semantic segmentation network for training; the convolution kernel parameters of the convolutional layers are updated continually during training, and training stops when the set maximum number of iterations is reached, yielding the trained semantic segmentation network;
Step 3: for a remote sensing image to be detected, preprocess it using the method of Step 1 to obtain the preprocessed image to be detected;
input the preprocessed image into the trained semantic segmentation network of Step 2 to obtain the cropped image output by the network;
pass the cropped image through a softmax classifier to obtain a binary image of the same size as the cropped image; in the binary image, pixels with non-zero gray value represent cloud regions and pixels with gray value 0 represent non-cloud regions, realizing cloud detection for the remote sensing image to be detected.
The beneficial effects of the invention are as follows: the invention proposes a remote sensing image cloud detection method based on a multiscale-fusion semantic segmentation network. Shallow features are extracted by the first three sub-network stages and deep features by the last two sub-network stages, and the extracted deep features are then fused with the shallow features. This fully exploits both the rich detail information contained in the shallow features and the rich semantic information contained in the deep features, merging the advantages of the two so that the segmentation of deep-feature boundaries is finer; the best cloud detection performance is reached by optimizing the ratio of deep to shallow features. The invention improves cloud detection accuracy, with a cloud-area detection error below 1%.
Detailed description of the invention
Fig. 1 is the flow chart of the remote sensing image cloud detection method based on the multiscale-fusion semantic segmentation network of the invention;
Fig. 2 is the schematic network structure of the semantic segmentation network of the invention;
Fig. 3 is the flow chart of training the semantic segmentation network;
Fig. 4 is a schematic diagram of the deconvolution operation;
Fig. 5 is a schematic diagram of the bilinear-kernel computation of the deconvolution layer;
Fig. 6 is the original test image of scene 1 chosen by the invention;
Fig. 7 is the original test image of scene 2 chosen by the invention;
Fig. 8 is the original test image of scene 3 chosen by the invention;
Fig. 9 is the cloud detection result on the test image of scene 1 using the maximum between-class variance (Otsu) method;
Fig. 10 is the cloud detection result on the test image of scene 2 using the maximum between-class variance (Otsu) method;
Fig. 11 is the cloud detection result on the test image of scene 3 using the maximum between-class variance (Otsu) method;
Fig. 12 is the cloud detection result on the test image of scene 1 using the multi-feature extraction method;
Fig. 13 is the cloud detection result on the test image of scene 2 using the multi-feature extraction method;
Fig. 14 is the cloud detection result on the test image of scene 3 using the multi-feature extraction method;
Fig. 15 shows the ground-truth annotations of the test images of scenes 1, 2 and 3;
Fig. 16 shows the cloud detection results on the test images of scenes 1, 2 and 3 using the FCN method;
Fig. 17 shows the cloud detection results on the test images of scenes 1, 2 and 3 using the U-net method;
Fig. 18 shows the cloud detection results on the test images of scenes 1, 2 and 3 using the Deeplab V3+ method;
Fig. 19 shows the cloud detection results on the test images of scenes 1, 2 and 3 using the WMSFNet method of the invention.
Specific embodiment
Specific embodiment 1: as shown in Fig. 1, the remote sensing image cloud detection method based on the multiscale-fusion semantic segmentation network of this embodiment comprises the following steps:
Step 1: randomly select N0 images from a real panchromatic visible-band remote sensing image data set as the original remote sensing images;
preprocess the N0 original remote sensing images to obtain N0 preprocessed remote sensing images;
the data set used in Step 1 is the set of real panchromatic visible-band remote sensing images at 2 m resolution captured by the Gaofen-1 satellite;
Step 2: input the N0 preprocessed remote sensing images as the training set into the semantic segmentation network (weighted multi-scale fusion network, WMSFNet) for training; the convolution kernel parameters of the convolutional layers are updated continually during training until the set maximum number of iterations is reached, yielding the trained semantic segmentation network;
the multiscale-fusion semantic segmentation network adds fusion layers on top of a semantic segmentation network;
Step 3: for a remote sensing image to be detected, preprocess it using the method of Step 1 to obtain the preprocessed image to be detected;
input the preprocessed image into the trained semantic segmentation network of Step 2 to obtain the cropped image output by the network;
pass the cropped image through a softmax classifier to obtain a binary image of the same size as the cropped image; in the binary image, pixels with non-zero gray value represent cloud regions and pixels with gray value 0 represent non-cloud regions, realizing cloud detection for the remote sensing image to be detected.
The WMSFNet cloud detection framework of this embodiment is shown in Fig. 1. Given an input image, the image is first preprocessed: the gray mean of each channel of the image is subtracted from the gray value of every pixel in that channel, which accelerates computation.
Convolutional layers extract deep image features and pooling layers reduce the feature dimensionality, enabling cloud to be distinguished from ground objects. Deconvolution layers then upsample the result, producing a binary image of the same size as the input image.
In the binary image, pixels with gray value 0 represent the non-cloud regions of the image and pixels with non-zero gray value represent the cloud regions; therefore, the fraction of non-zero pixels over all pixels in the binary image directly gives the cloud fraction of the original input image.
When the cloud fraction exceeds a set threshold, most of the image is cloud and it contains very little useful information, so the image can be discarded.
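As a sketch of the cloud-fraction statistic described above (the function names and the rejection threshold value are illustrative, not from the patent), the fraction of non-zero pixels in the output binary mask can be counted directly:

```python
import numpy as np

def cloud_fraction(binary_mask):
    """Fraction of non-zero (cloud) pixels in a binary detection mask."""
    return np.count_nonzero(binary_mask) / binary_mask.size

def reject_if_mostly_cloud(binary_mask, threshold=0.9):
    """Discard images whose cloud fraction exceeds a set threshold."""
    return cloud_fraction(binary_mask) > threshold

# A 4x4 mask with 4 cloud pixels -> cloud fraction 0.25.
mask = np.zeros((4, 4), dtype=np.uint8)
mask[0, :] = 255
print(cloud_fraction(mask))          # 0.25
print(reject_if_mostly_cloud(mask))  # False
```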
Given an input image, the invention first uses VGGNet as the backbone to extract features. Since cloud detection is a pixel-level prediction task, a prediction image of the same size as the original image must be generated so that every pixel is classified. Deep learning algorithms commonly handle classification tasks with fully connected layers, converting the two-dimensional image into a one-dimensional label, but the pixel-level prediction task does not need this conversion; the fully connected layers of VGGNet are therefore replaced with convolutional layers.
The feature extraction of WMSFNet takes VGGNet as its backbone; Table 1 shows the structures of VGGNet and WMSFNet. In the original VGGNet, the convolutional layers with the same output feature-map size form one stage. By the nature of VGGNet, only the pooling layers halve the size of the input feature map; the other layers do not change it. Therefore, after each stage the output feature map is reduced to half the size of the input feature map.
Table 1. Network structures of VGGNet and WMSFNet
The configuration of each WMSFNet layer (ignoring the zero-padding in the first convolution) is shown in Table 2:
Table 2. Configuration of each layer of the WMSFNet network
Pooling shrinks the feature maps, but a binary image of the same size as the original image must ultimately be produced; the pooled maps must therefore be upsampled, which is realized with deconvolution layers. However, if the output of the last stage is directly upsampled to the original image size, the detection result at cloud and ground-object edges is very coarse. The invention therefore adopts the idea of multiscale fusion, fusing the shallow detail features with the deep semantic features: the deep semantic features raise cloud detection accuracy, while the shallow detail features sharpen the detection at cloud edges.
The invention performs cloud detection with the proposed WMSFNet network, which has the following characteristics:
1) traditional cloud detection methods require manual feature extraction and threshold setting, demanding rich experience from the researcher; WMSFNet can be trained end to end without manual parameter tuning, simplifying the realization of cloud detection;
2) training WMSFNet requires the input images to be pre-annotated into cloud and non-cloud regions, which are learned separately, so the network is insensitive to the shape of the cloud;
3) WMSFNet automatically extracts the deep features of cloud, realizes pixel-level prediction, and fully fuses the shallow detail features with the deep semantic features, so the segmentation boundaries are finer;
4) WMSFNet realizes pixel-level prediction and finally outputs a binary image of the same size as the input image, indicating the cloud and cloud-free regions respectively.
Compared with conventional methods, the invention fuses only two levels of features, shallow and deep, at a ratio of 1:3, and achieves a good cloud-region detection effect: the shallow detail features improve the detection of cloud edges, while the deep semantic features improve detection accuracy and reduce misjudgment. The method can be applied to high-accuracy cloud detection of visible-band remote sensing images.
Specific embodiment 2: this embodiment differs from embodiment 1 in the preprocessing of the N0 original remote sensing images to obtain N0 preprocessed remote sensing images, whose detailed process is:
for any original remote sensing image, compute the mean M of the gray values of each channel of the image, then subtract M from the gray value of every pixel, giving the preprocessed remote sensing image corresponding to this original image; the gray value of each pixel of the corresponding preprocessed image is:
O′(i, j) = O(i, j) − M    (1)
where O(i, j) is the gray value at pixel (i, j) of the original remote sensing image, and O′(i, j) is the gray value at pixel (i, j) of the corresponding preprocessed image;
similarly, the preprocessed image is computed for each of the N0 original remote sensing images, giving N0 preprocessed remote sensing images.
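The per-channel mean subtraction of formula (1) can be sketched in a few lines of NumPy (the function name and the toy image are illustrative, not from the patent):

```python
import numpy as np

def preprocess(image):
    """Per-channel mean subtraction: O'(i,j) = O(i,j) - M, formula (1).

    `image` is an H x W x C array; M is the gray mean of each channel.
    """
    img = image.astype(np.float64)
    channel_means = img.mean(axis=(0, 1))   # one mean M per channel
    return img - channel_means              # broadcast over all pixels

# Toy 2x2 single-channel "image": mean is 2.5, so values become centered.
img = np.array([[[1.0], [2.0]], [[3.0], [4.0]]])
out = preprocess(img)
print(out[..., 0])  # values centered to [[-1.5, -0.5], [0.5, 1.5]]
```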
Specific embodiment 3: as shown in Figs. 2 and 3, this embodiment differs from embodiment 2 in the detailed process of Step 2:
the N0 preprocessed remote sensing images are input as the training set into the semantic segmentation network; before training starts, the network parameters of the semantic segmentation network must be initialized, and training begins once the initialization is complete;
the semantic segmentation network comprises 15 convolutional layers, 5 pooling layers, 2 deconvolution layers and 2 crop layers, namely:
2 convolutional layers, kernel size 3×3, 64 kernels;
1 pooling layer, kernel size 2×2, 64 channels;
2 convolutional layers, kernel size 3×3, 128 kernels;
1 pooling layer, kernel size 2×2, 128 channels;
3 convolutional layers, kernel size 3×3, 256 kernels;
1 pooling layer, kernel size 2×2, 256 channels;
3 convolutional layers, kernel size 3×3, 512 kernels;
1 pooling layer, kernel size 2×2, 512 channels;
3 convolutional layers, kernel size 3×3, 512 kernels;
1 pooling layer, kernel size 2×2, 512 channels;
1 convolutional layer, kernel size 7×7, 4096 kernels;
1 convolutional layer, kernel size 1×1, 4096 kernels;
1 deconvolution layer, kernel size 8×8, 2 kernels;
1 crop layer;
1 deconvolution layer, kernel size 16×16, 2 kernels;
1 crop layer.
The deconvolution layer with kernel size 8×8 and 2 kernels upsamples the feature map output by the convolutional layer with kernel size 1×1 and 4096 kernels; the size of the upsampled feature map is four times the size of the feature map output by that convolutional layer.
The upsampled feature map is therefore also four times the size of the map output by the last pooling layer with kernel size 2×2 and 512 channels, because passing that pooling layer's output through the convolutional layers with kernel size 7×7 (4096 kernels) and kernel size 1×1 (4096 kernels) does not change the feature-map size.
The upsampled feature map and the feature map output by the last convolutional layer with kernel size 3×3 and 512 kernels are fused by pixel-wise weighted averaging, giving the fused feature map; the deconvolution layer with kernel size 16×16 and 2 kernels then upsamples the fused feature map, and the upsampled fused feature map is eight times the size of the fused feature map;
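The pixel-wise weighted averaging can be sketched as below. The (1, 3) weight split follows the 1:3 fusion ratio mentioned in the description, but which of the two maps receives the larger weight, and the function name, are assumptions here:

```python
import numpy as np

def fuse(shallow, deep, ratio=(1.0, 3.0)):
    """Pixel-wise weighted average of a shallow feature map and an
    upsampled deep feature map of the same shape. The default weights
    follow the 1:3 ratio stated in the description; the orientation
    (shallow gets 1, deep gets 3) is an assumption."""
    w_s, w_d = ratio
    return (w_s * shallow + w_d * deep) / (w_s + w_d)

shallow = np.full((2, 2), 4.0)
deep = np.full((2, 2), 8.0)
print(fuse(shallow, deep))  # every element is (1*4 + 3*8)/4 = 7.0
```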
the upsampled fused feature map is cut by a crop layer; the cropped image has the same size as the preprocessed remote sensing image;
during training, the convolution kernel parameters of the convolutional layers of the semantic segmentation network are continually updated by the BP algorithm; iteration stops when the set maximum number of iterations N is reached, yielding the trained semantic segmentation network.
The deep learning framework used by the invention is Caffe, and the programming language is Python.
WMSFNet is a fully convolutional network. Its convolutional layers follow the same algorithm as the convolutional layers of other deep learning networks; what is special is the introduction of deconvolution layers. In the forward-propagation stage, each iteration produces a certain error between the training result and the training label; this error leads to recognition mistakes, so the convolution kernel parameters of the convolutional layers must be adjusted continually through the learning process to obtain suitable values.
The convolution kernel parameters of the deconvolution layers in WMSFNet do not participate in training, i.e. they are fixed throughout the training process.
Computation of a convolutional layer: the layer receives N_C feature maps as input; each input feature map is convolved with a shifting k×k window, generating one pixel of an output feature map; the stride s of the shifting window is usually smaller than k; in total, N_F output feature maps are produced, which form the input feature maps of the next convolutional layer. The convolutional layer receives an input feature map of size N_C*H*W and a set of kernels of size N_F*N_C*k*k, and produces a set of output feature maps of size N_F*H_O*W_O, where the sizes H_O and W_O are given by the following equations (p is the zero-padding):
H_O = (H + 2*p − k)/s + 1
W_O = (W + 2*p − k)/s + 1
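The convolutional output-size equations can be checked with a small helper (illustrative, not from the patent):

```python
def conv_output_size(H, k, s=1, p=0):
    """H_O = (H + 2*p - k)//s + 1 for a convolutional layer with
    kernel size k, stride s and zero-padding p."""
    return (H + 2 * p - k) // s + 1

# 7x7 convolution, stride 1, no padding, on a 16x16 map -> 10x10.
print(conv_output_size(16, k=7))        # 10
# 3x3 convolution with padding 1 preserves size (VGG-style).
print(conv_output_size(224, k=3, p=1))  # 224
```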
Deconvolution is in fact a transposed convolution. For example, for a deconvolution kernel of size k, stride s and zero-padding p, computing the deconvolution is equivalent to a convolution with a kernel of size k, stride 1 and zero-padding k − p − 1, with s − 1 zeros inserted between adjacent input units. Similarly to a convolutional layer, the deconvolution layer receives an input feature map of size N_C*H*W and a set of kernels of size N_F*N_C*k*k, and produces a set of output feature maps of size N_F*H_O*W_O, with H_O and W_O given by the following equations:
H_O = s*(H − 1) + k − 2*p
W_O = s*(W − 1) + k − 2*p
The detailed computation of the deconvolution operation is shown in Fig. 4: the left part of Fig. 4 shows the input feature map, and the right part shows the output feature map of the deconvolution layer, where the kernel size k of the deconvolution layer is 4, s is 2 and p is 0; the input size is 4×4, the zero-padded input size is 13×13, and the output size is 10×10.
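The equivalence between the transposed convolution and its stride-1 convolution view can be verified numerically for the Fig. 4 example (helper names are illustrative):

```python
def deconv_output_size(H, k, s, p=0):
    """H_O = s*(H - 1) + k - 2*p for a deconvolution (transposed
    convolution) layer."""
    return s * (H - 1) + k - 2 * p

def equivalent_conv_input_size(H, k, s, p=0):
    """Size of the zero-padded input in the equivalent stride-1
    convolution: s-1 zeros between adjacent units plus k-p-1 border
    zeros on each side."""
    return H + (H - 1) * (s - 1) + 2 * (k - p - 1)

# The Fig. 4 example: k=4, s=2, p=0, 4x4 input.
padded = equivalent_conv_input_size(4, k=4, s=2)
print(padded)                          # 13 (zero-padded input)
print(padded - 4 + 1)                  # 10 (stride-1 conv output)
print(deconv_output_size(4, k=4, s=2)) # 10 (direct formula agrees)
```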
The kernel of the deconvolution layer is a bilinear kernel, obtained by bilinear interpolation. If the coordinate of the current point of the kernel is (i, j) and the coordinate of the center is (a, b), the value D of the kernel at the current point is computed as:
D = [1 − |i − a|/2] * [1 − |j − b|/2]
The computation is illustrated with a bilinear kernel of size 4×4, as shown in Fig. 5. Taking the second cell as an example, with coordinate value (0, 1), the weight of this point is:
[1 − |1 − 1.5|/2] * [1 − |0 − 1.5|/2] = 0.75 * 0.25 = 0.1875
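A sketch of the bilinear-kernel construction, generalized from the 4×4 example under the assumption that the divisor 2 in the formula is ceil(k/2) and that the center is the geometric center of the kernel:

```python
import math
import numpy as np

def bilinear_kernel(k):
    """k x k bilinear deconvolution kernel:
    D(i, j) = (1 - |i - a|/f) * (1 - |j - b|/f),
    with (a, b) the kernel center and f = ceil(k/2), matching the
    4x4 example where f = 2 and the center is (1.5, 1.5)."""
    f = math.ceil(k / 2)
    c = f - 1 if k % 2 == 1 else f - 0.5   # center along one axis
    w = 1 - np.abs(np.arange(k) - c) / f   # 1-D bilinear weights
    return np.outer(w, w)                  # separable 2-D kernel

kern = bilinear_kernel(4)
print(kern[0, 1])  # (1 - 1.5/2) * (1 - 0.5/2) = 0.25 * 0.75 = 0.1875
```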
In the WMSFNet network structure, only the pooling layers change the size of the output feature map. If the input feature map size is H, then after the fifth-stage pooling the output feature map size is H/2^5. This is followed by a convolutional layer with kernel size 7*7; denoting the output of this layer by H6, the output feature map size is obtained as:
H6 = (H/2^5 - 7)/1 + 1 = (H - 192)/2^5
Therefore, the size finally entering the deconvolution layer is H6. In addition, the algorithm cannot process images whose height or width is not greater than 192 pixels. To solve this problem, the input image is zero-padded by 100 pixels when the first convolution is performed; the output feature map size H6 then becomes:
H6 = (H + 6)/2^5
The next step upsamples the output of H6 by a factor of 32. Denoting the deconvolution output by H7, the output feature map size is:
H7 = (H6 - 1)*32 + 64 = ((H + 6)/2^5 - 1)*32 + 64 = H + 38
Clearly, H7 differs in size from the input image H, so a crop layer is needed to cut H7 to the same size as H. The crop position must be specified so that the algorithm knows where to cut; from the expression for H7, the crop offset should be set to 19.
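The size bookkeeping above (100-pixel padding, H6 = (H+6)/2^5, H7 = H+38, crop offset 19) can be checked with a short script. This is a sketch under the assumption that (H+6) is divisible by 32, so the identity holds exactly:

```python
def wmsfnet_sizes(h):
    # H6 = (H + 6)/2**5 after 100-pixel padding (per the text); the 32x
    # deconvolution then gives H7 = (H6 - 1)*32 + 64 = H + 38.
    assert (h + 6) % 32 == 0, "exact identity needs (H + 6) divisible by 32"
    h6 = (h + 6) // 32
    h7 = (h6 - 1) * 32 + 64
    return h6, h7, (h7 - h) // 2     # crop offset on each side

h6, h7, offset = wmsfnet_sizes(250)
print(h7 - 250, offset)  # -> 38 19
```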
Specific embodiment 4: this embodiment differs from specific embodiment 3 in that the loss function adopted by the semantic segmentation network is J(W, b); the cropped image is fed into a softmax classifier to obtain a binary image of the same size as the cropped image, and the value of the loss function J(W, b) is calculated from the obtained binary image:
Wherein: Sj' denotes the j'-th value of the output vector S of the softmax classifier, j' = 1, 2, ..., T, and T is the total number of values in the output vector S; aj' denotes the j'-th value of the input vector a fed into the softmax classifier, and e is the natural constant; yj' is a 1*T vector, yj' = {0, 0, ..., 0, 1, 0, ..., 0, 0}, in which the single 1 is the j'-th element of yj' and all other elements are 0.
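The loss formulas themselves are rendered as images in the original filing and do not survive in the text above. A standard softmax cross-entropy consistent with the symbol definitions (Sj' the softmax output, aj' its input, yj' a one-hot vector) can be sketched as follows; the logits and label here are illustrative, not from the patent:

```python
import math

def softmax(a):
    # S_j = exp(a_j) / sum_k exp(a_k); subtract max(a) for numerical stability.
    m = max(a)
    e = [math.exp(x - m) for x in a]
    z = sum(e)
    return [x / z for x in e]

def cross_entropy(a, y):
    # J = -sum_j y_j * log(S_j); with one-hot y this is -log(S_true_class).
    s = softmax(a)
    return -sum(yj * math.log(sj) for yj, sj in zip(y, s))

a = [2.0, 0.5]   # classifier inputs for (cloud, non-cloud) at one pixel
y = [1, 0]       # one-hot label: this pixel is cloud
print(round(cross_entropy(a, y), 4))
```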
Specific embodiment 5: this embodiment differs from specific embodiment 4 in that the kernel parameters of the convolutional layers of the semantic segmentation network are continually updated by the BP algorithm. The specific process is as follows:
During training, each iteration updates the kernel parameters of the convolutional layers of the semantic segmentation network according to formula (4);
Wherein the updated quantities are the transfer parameters from the i-th neuron of convolutional layer l to the j-th neuron of convolutional layer l+1 and the bias term of the i-th neuron of convolutional layer l, and α is the learning rate.
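Formula (4) itself is an image in the original filing; the update described — subtracting the learning rate times the gradient from each transfer parameter — is ordinary gradient descent, which can be sketched generically (an illustration, not the patent's exact expression; the weights and gradients below are made up):

```python
def sgd_step(weights, grads, lr):
    # W_ij <- W_ij - alpha * dJ/dW_ij for every kernel parameter;
    # the bias terms are updated by the same rule.
    return [w - lr * g for w, g in zip(weights, grads)]

w = [0.5, -0.3]   # two illustrative kernel parameters
g = [0.2, -0.1]   # their gradients dJ/dW from back-propagation
print(sgd_step(w, g, 0.1))
```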
The purpose of training is to make the cost function J(W, b) smaller and smaller. As in other deep learning algorithms, the kernel parameters of the convolutional layers are adjusted by the back-propagation (BP) algorithm. In each iteration, the parameters responsible for poor recognition are updated so that the new parameters detect cloud better, until the set number of training iterations is reached and the final trained model is obtained.
WMSFNet can detect cloud comparatively accurately, mainly because the convolution kernels of the network's convolutional layers, after BP training, extract cloud features more effectively. The shallow kernels extract shallow features of cloud such as gray level and texture, while the deep kernels extract abstract semantic features of cloud; these features are finally fused to obtain a good cloud detection result.
Specific embodiment 6: this embodiment differs from specific embodiment 3 in that the obtained upsampled feature map and the feature map output by the last convolutional layer with kernel size 3*3 and 512 kernels are averaged with per-pixel weighting to obtain the fused feature map. The specific process is as follows:
Wherein: Ai″j″ is the pixel value at pixel (i″, j″) of the upsampled feature map, Bi″j″ is the pixel value at pixel (i″, j″) of the feature map output by the last convolutional layer with kernel size 3*3 and 512 kernels, α′ and β′ are weight coefficients, and Ci″j″ is the pixel value at pixel (i″, j″) of the fused feature map.
The convolutional layers of the first three stages of the network extract shallow features, while those of the last two stages extract deep features; shallow features carry rich detail information, deep features carry rich semantic information, and fusion combines the advantages of both. Because the pooling-layer output feature map of the third stage and that of the fifth stage differ in size, they cannot be fused by direct pixel-wise addition. Therefore, the fifth-stage pooling output feature map is first upsampled to 4 times its original size with a deconvolution layer, then averaged pixel-wise with weighting against the third-stage pooling output feature map, and finally the fused feature map is upsampled to 8 times its original size with a deconvolution layer, yielding a binary image of the same size as the input image.
Specific embodiment 7: this embodiment differs from specific embodiment 6 in that the fusion ratio of the upsampled feature map to the feature map output by the last convolutional layer with kernel size 3*3 and 512 kernels is 1:3.
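With a fusion ratio of 1:3, normalized weights would be α′ = 0.25 and β′ = 0.75 (our inference from the ratio; the patent states only the ratio). The per-pixel weighted average Ci″j″ = α′·Ai″j″ + β′·Bi″j″ can then be sketched as follows, with illustrative 2*2 maps:

```python
def fuse(a_map, b_map, alpha=0.25, beta=0.75):
    # C_ij = alpha * A_ij + beta * B_ij, element-wise over equal-size maps.
    return [[alpha * a + beta * b for a, b in zip(ra, rb)]
            for ra, rb in zip(a_map, b_map)]

A = [[0.0, 4.0], [8.0, 2.0]]   # upsampled deep feature map (illustrative)
B = [[4.0, 0.0], [0.0, 2.0]]   # shallow feature map (illustrative)
print(fuse(A, B))  # -> [[3.0, 1.0], [2.0, 2.0]]
```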
Experimental verification and analysis
To assess the performance of the WMSFNet network in cloud detection, the present invention selects test images from the 2-meter-resolution panchromatic visible remote sensing imagery captured by the Gaofen-1 satellite; the image resolution is 256 × 256 pixels. For the WMSFNet-based cloud detection method, 100 images are selected for training and 20 images for test verification.
To verify that the proposed method performs better than other methods on panchromatic visible remote sensing images, the present invention compares it both with cloud detection methods that extract shallow features and with other advanced semantic segmentation methods that automatically extract deep features, fully demonstrating the effectiveness of the WMSFNet network in cloud detection.
The present invention chooses three different scenes to illustrate the cloud detection effect, as shown in Figures 6, 7 and 8:
Scene 1 is the simplest scene, containing only cloud besides the background. Scene 2 contains not only cloud but also land, with a sea-land boundary of significant gray-level change. Scene 3 is relatively complex, with buildings similar in features to cloud contained in the background.
Cloud detection methods based on hand-crafted features mainly include threshold methods and multi-feature extraction methods. The threshold method is a common cloud detection approach, since cloud has distinct gray-level features relative to ground objects. The maximum between-class variance (Otsu) method from the document "Automatic cloud detection of Gaofen-1 satellite imagery" is applied to the panchromatic visible remote sensing images of scenes 1, 2 and 3; the detection results are shown in Figures 9, 10 and 11 respectively.
A single gray-level feature cannot summarize all the characteristics of cloud. Accordingly, the document "Feature extraction in remote sensing cloud image recognition" proposes a multi-feature extraction method that extracts the gray-level, frequency and texture features of cloud and uses an SVM classifier to determine whether an image contains cloud; the SVM parameters can be optimized by a genetic algorithm. This method first divides an image into many small blocks and predicts with the SVM whether each block contains cloud. Cloud detection based on the multi-feature extraction method is performed on the original test images of scenes 1, 2 and 3; the resulting cloud detection effects are shown in Figures 12-14.
In the output image, dark parts represent cloud regions and light parts represent cloud-free regions. The experimental results show that the classification accuracy of cloud detection with the SVM can reach 89%. Because the image is divided into equal blocks, a block may contain both cloud and cloud-free regions, so this method can only extract the cloud area roughly and its detection accuracy is low.
In practical engineering use, it is necessary not only to judge whether an image contains cloud but also to evaluate the cloud amount. A remote sensing satellite can hardly capture completely cloud-free scenes when acquiring images, and a small amount of cloud does not occlude the useful information; if such images were still treated as unusable, useful information would be lost. Detecting the cloud amount therefore becomes an unavoidable problem. The difference between the cloud area percentage of the predicted image and that of the ground-truth image is recorded as the cloud area detection error. The true cloud area percentage is computed by labeling the cloud regions by hand and dividing the number of cloud pixels by the total number of pixels in the image. The cloud area detection value of the multi-feature extraction method is obtained by dividing the number of dark blocks in the output image by the total number of blocks, as shown in Table 3.
Table 3
As shown in Table 3, the method based on multi-feature extraction + SVM can only achieve a rough detection of the cloud area: in simple scenes the cloud area detection error is below 20%, but if the scene contains ground objects similar in features to cloud, the error exceeds 30%. In summary, the detection performance of cloud detection methods based on shallow features is poor.
Besides gray-level, frequency and texture features, cloud has many more abstract deep features, and stacked convolutional layers can extract such abstract semantic features of cloud. When feature extraction is performed with AlexNet, whose classifier replaces the SVM, the classification accuracy can reach 94%. AlexNet is a relatively simple network; a network with a more complex structure would give better classification accuracy, but classification-based methods can only estimate the cloud amount roughly. Therefore, the present invention finally adopts semantic segmentation to extract the cloud amount accurately.
The present invention compares the cloud detection performance of the WMSFNet network with FCN, the common semantic segmentation method U-net, and the advanced semantic segmentation method DeepLab V3+ under the three different scenes. Figure 15 shows the original images of scenes 1, 2 and 3 together with their corresponding label maps; Figures 16, 17, 18 and 19 show the cloud detection results of FCN, U-net, DeepLab V3+ and WMSFNet respectively.
In the first and simplest scene, all semantic segmentation methods achieve good results. In the second scene, U-net performs poorly, also identifying the sea-land boundary with significant gray-level change as cloud. In the third, relatively complex scene, the background contains buildings similar in features to cloud; the traditional threshold method cannot handle this scene, while the semantic segmentation methods other than U-net do not identify these buildings as cloud.
From Figures 15-19, FCN and DeepLab V3+ (unlike U-net) can roughly detect the outline of cloud, but the large input stride of FCN makes its edge detection coarse; DeepLab V3+, owing to its atrous convolution and conditional random field, delineates cloud edges more finely, but cloud pixels at the image boundary are prone to misclassification. Figures 15-19 also show that the cloud detection result of WMSFNet is better than the other methods: because it fuses shallow detail features with deep semantic features, it achieves good cloud edge detection, making the extraction of the cloud amount more accurate.
Many criteria are commonly used to measure the accuracy of image segmentation; the present invention adopts pixel-wise accuracy criteria. Assume there are k+1 classes in total, and Pij denotes the number of pixels that belong to class i but are predicted as class j; thus Pii denotes correctly labeled pixels, while Pij and Pji (i ≠ j) denote mislabeled pixels.
(1) Pixel accuracy (PA): the proportion of correctly labeled pixels among all pixels.
(2) Mean pixel accuracy (MPA): the proportion of correctly classified pixels is computed for each class and then averaged.
(3) Mean intersection over union (MIoU): the ratio of the intersection to the union of the ground-truth set and the prediction set is computed for each class and then averaged.
(4) Frequency weighted intersection over union (FWIoU): on the basis of MIoU, each class is weighted by its frequency of occurrence.
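The four metrics can be written down directly from the confusion matrix P, where P[i][j] counts pixels of true class i predicted as class j. A sketch for the two-class cloud/non-cloud case (the toy matrix is illustrative, not the patent's data):

```python
def seg_metrics(P):
    n = len(P)
    total = sum(sum(row) for row in P)
    row = [sum(P[i]) for i in range(n)]                     # pixels of true class i
    col = [sum(P[i][j] for i in range(n)) for j in range(n)]  # pixels predicted as j
    pa = sum(P[i][i] for i in range(n)) / total             # pixel accuracy
    mpa = sum(P[i][i] / row[i] for i in range(n)) / n       # mean pixel accuracy
    iou = [P[i][i] / (row[i] + col[i] - P[i][i]) for i in range(n)]
    miou = sum(iou) / n                                     # mean IoU
    fwiou = sum(row[i] / total * iou[i] for i in range(n))  # frequency-weighted IoU
    return pa, mpa, miou, fwiou

# Toy confusion matrix: 90 cloud pixels correct, 10 cloud missed,
# 5 non-cloud called cloud, 95 non-cloud correct.
pa, mpa, miou, fwiou = seg_metrics([[90, 10], [5, 95]])
print(round(pa, 4), round(miou, 4))  # -> 0.925 0.8604
```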
The cloud area detection error is calculated as the difference between the cloud area percentage of the predicted image and that of the ground-truth image. After passing through the WMSFNet network, an input image yields a binary image of the same size, in which pixels with gray value 0 represent the cloud-free regions and pixels with nonzero gray value represent the cloud regions. The cloud area percentage of the predicted image is obtained simply by counting the proportion of nonzero-gray-value pixels among all pixels of the binary image.
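The cloud area percentage of a predicted binary image is simply the fraction of nonzero pixels, and the detection error follows directly (a sketch; the mask and ground-truth value below are illustrative):

```python
def cloud_area_percentage(mask):
    # Fraction of pixels with nonzero gray value (the cloud region).
    pixels = [v for row in mask for v in row]
    return sum(1 for v in pixels if v != 0) / len(pixels)

pred = [[0, 255, 255, 0],
        [0, 255, 255, 0],
        [0, 0, 255, 0],
        [0, 0, 0, 0]]
true_pct = 0.25                       # hand-labeled ground truth (illustrative)
pred_pct = cloud_area_percentage(pred)
error = abs(pred_pct - true_pct)      # cloud area detection error
print(pred_pct, error)  # -> 0.3125 0.0625
```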
The quantitative indices of the different semantic segmentation methods are shown in Table 4. The indices include PA, MPA, MIoU, FWIoU and the cloud area detection error. WMSFNet improves on the other semantic segmentation methods in all five indices; for the data set of the present invention, the WMSFNet cloud detection method achieves a better detection result than the other methods.
Table 4 Comparison of quantitative indices
The experimental results show that WMSFNet performs well under different scenes, achieving a pixel classification accuracy of 95.39% and a cloud area detection error better than 1%.
The above examples of the present invention only explain its computation model and calculation flow in detail and do not limit the embodiments of the present invention. For those of ordinary skill in the art, other variations or changes in different forms can be made on the basis of the above description; not all embodiments can be exhausted here, and all obvious changes or variations derived from the technical solution of the present invention remain within the protection scope of the present invention.

Claims (7)

1. A remote sensing image cloud detection method based on a multi-scale fusion semantic segmentation network, characterized in that the method comprises the following steps:
Step 1: randomly select N0 images from a real panchromatic visible remote sensing image data set as original remote sensing images;
preprocess the N0 original remote sensing images to obtain N0 preprocessed remote sensing images;
Step 2: input the N0 preprocessed remote sensing images as a training set into the semantic segmentation network for training, continually updating the kernel parameters of the convolutional layers of the semantic segmentation network during training, and stop training when the set maximum number of iterations is reached, obtaining the trained semantic segmentation network;
Step 3: for a remote sensing image to be detected, preprocess it with the method of step 1 to obtain the preprocessed remote sensing image to be detected;
input the preprocessed remote sensing image to be detected into the semantic segmentation network trained in step 2 to obtain the cropped image output by the semantic segmentation network;
pass the cropped image through a softmax classifier to obtain a binary image of the same size as the cropped image, in which pixels with nonzero gray value represent cloud regions and pixels with gray value 0 represent non-cloud regions, thereby realizing cloud detection of the remote sensing image to be detected.
2. The remote sensing image cloud detection method based on a multi-scale fusion semantic segmentation network according to claim 1, characterized in that the preprocessing of the N0 original remote sensing images to obtain N0 preprocessed remote sensing images comprises the following specific process:
for any original remote sensing image, compute the mean value M of the gray levels of each channel of the image, then subtract the mean value M from the gray level of each pixel of the image to obtain the preprocessed remote sensing image corresponding to the original one, i.e. the gray value of each pixel of the preprocessed remote sensing image is:
O'(i, j) = O(i, j) - M   (1)
wherein O(i, j) is the gray value at pixel (i, j) of the original remote sensing image, and O'(i, j) is the gray value at pixel (i, j) of the corresponding preprocessed remote sensing image;
similarly, the preprocessed remote sensing image corresponding to each of the N0 original remote sensing images is computed, yielding the N0 preprocessed remote sensing images.
3. The remote sensing image cloud detection method based on a multi-scale fusion semantic segmentation network according to claim 2, characterized in that the specific process of step 2 is:
input the N0 preprocessed remote sensing images as a training set into the semantic segmentation network; before training starts, the network parameters of the semantic segmentation network need to be initialized, and the training process starts after the initialization is completed;
the semantic segmentation network comprises 15 convolutional layers, 5 pooling layers, 2 deconvolution layers and 2 crop layers, namely:
2 convolutional layers with kernel size 3*3 and 64 kernels;
1 pooling layer with kernel size 2*2 and 64 kernels;
2 convolutional layers with kernel size 3*3 and 128 kernels;
1 pooling layer with kernel size 2*2 and 128 kernels;
3 convolutional layers with kernel size 3*3 and 256 kernels;
1 pooling layer with kernel size 2*2 and 256 kernels;
3 convolutional layers with kernel size 3*3 and 512 kernels;
1 pooling layer with kernel size 2*2 and 512 kernels;
3 convolutional layers with kernel size 3*3 and 512 kernels;
1 pooling layer with kernel size 2*2 and 512 kernels;
1 convolutional layer with kernel size 7*7 and 4096 kernels;
1 convolutional layer with kernel size 1*1 and 4096 kernels;
1 deconvolution layer with kernel size 8*8 and 2 kernels;
1 crop layer;
1 deconvolution layer with kernel size 16*16 and 2 kernels;
1 crop layer;
the feature map output by the convolutional layer with kernel size 1*1 and 4096 kernels is upsampled with the deconvolution layer with kernel size 8*8 and 2 kernels to obtain the upsampled feature map, whose size is four times that of the feature map output by the convolutional layer with kernel size 1*1 and 4096 kernels;
the obtained upsampled feature map and the feature map output by the last convolutional layer with kernel size 3*3 and 512 kernels are averaged with per-pixel weighting to obtain the fused feature map; the fused feature map is upsampled with the deconvolution layer with kernel size 16*16 and 2 kernels to obtain the upsampled fused feature map, whose size is eight times that of the fused feature map;
the upsampled fused feature map passes through a crop layer to obtain the cropped image, which has the same size as the preprocessed remote sensing image;
during training, the kernel parameters of the convolutional layers of the semantic segmentation network are continually updated by the BP algorithm; iteration stops when the set maximum number of iterations N is reached, and the trained semantic segmentation network is obtained.
4. The remote sensing image cloud detection method based on a multi-scale fusion semantic segmentation network according to claim 3, characterized in that the loss function adopted by the semantic segmentation network is J(W, b); the cropped image is fed into a softmax classifier to obtain a binary image of the same size as the cropped image, and the value of the loss function J(W, b) is calculated from the obtained binary image;
wherein Sj' denotes the j'-th value of the output vector S of the softmax classifier, j' = 1, 2, ..., T, and T is the total number of values in the output vector S; aj' denotes the j'-th value of the input vector a fed into the softmax classifier, and e is the natural constant; yj' is a 1*T vector, yj' = {0, 0, ..., 0, 1, 0, ..., 0, 0}, in which the single 1 is the j'-th element of yj' and all other elements are 0.
5. The remote sensing image cloud detection method based on a multi-scale fusion semantic segmentation network according to claim 4, characterized in that the kernel parameters of the convolutional layers of the semantic segmentation network are continually updated by the BP algorithm, the specific process being:
during training, each iteration updates the kernel parameters of the convolutional layers of the semantic segmentation network according to formula (4);
wherein the updated quantities are the transfer parameters from the i-th neuron of convolutional layer l to the j-th neuron of convolutional layer l+1 and the bias term of the i-th neuron of convolutional layer l, and α is the learning rate.
6. The remote sensing image cloud detection method based on a multi-scale fusion semantic segmentation network according to claim 3, characterized in that the obtained upsampled feature map and the feature map output by the last convolutional layer with kernel size 3*3 and 512 kernels are averaged with per-pixel weighting to obtain the fused feature map, the specific process being:
wherein Ai″j″ is the pixel value at pixel (i″, j″) of the upsampled feature map, Bi″j″ is the pixel value at pixel (i″, j″) of the feature map output by the last convolutional layer with kernel size 3*3 and 512 kernels, α′ and β′ are weight coefficients, and Ci″j″ is the pixel value at pixel (i″, j″) of the fused feature map.
7. The remote sensing image cloud detection method based on a multi-scale fusion semantic segmentation network according to claim 6, characterized in that the fusion ratio of the upsampled feature map to the feature map output by the last convolutional layer with kernel size 3*3 and 512 kernels is 1:3.
CN201910436645.2A 2019-05-23 2019-05-23 Remote sensing image cloud detection method based on multi-scale fusion semantic segmentation network Active CN110119728B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910436645.2A CN110119728B (en) 2019-05-23 2019-05-23 Remote sensing image cloud detection method based on multi-scale fusion semantic segmentation network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910436645.2A CN110119728B (en) 2019-05-23 2019-05-23 Remote sensing image cloud detection method based on multi-scale fusion semantic segmentation network

Publications (2)

Publication Number Publication Date
CN110119728A true CN110119728A (en) 2019-08-13
CN110119728B CN110119728B (en) 2023-12-05

Family

ID=67523101

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910436645.2A Active CN110119728B (en) 2019-05-23 2019-05-23 Remote sensing image cloud detection method based on multi-scale fusion semantic segmentation network

Country Status (1)

Country Link
CN (1) CN110119728B (en)

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110598600A (en) * 2019-08-27 2019-12-20 广东工业大学 Remote sensing image cloud detection method based on UNET neural network
CN110781770A (en) * 2019-10-08 2020-02-11 高新兴科技集团股份有限公司 Living body detection method, device and equipment based on face recognition
CN110910390A (en) * 2019-11-11 2020-03-24 大连理工大学 Panoramic three-dimensional color point cloud semantic segmentation method based on depth distortion convolution
CN111079683A (en) * 2019-12-24 2020-04-28 天津大学 Remote sensing image cloud and snow detection method based on convolutional neural network
CN111144304A (en) * 2019-12-26 2020-05-12 上海眼控科技股份有限公司 Vehicle target detection model generation method, vehicle target detection method and device
CN111523381A (en) * 2020-03-13 2020-08-11 上海眼控科技股份有限公司 Method and equipment for updating land utilization information in numerical weather forecast
CN111523546A (en) * 2020-04-16 2020-08-11 湖南大学 Image semantic segmentation method, system and computer storage medium
CN111553289A (en) * 2020-04-29 2020-08-18 中国科学院空天信息创新研究院 Remote sensing image cloud detection method and system
CN111553925A (en) * 2020-04-27 2020-08-18 南通智能感知研究院 End-to-end crop image segmentation method and system based on FCN
CN111611968A (en) * 2020-05-29 2020-09-01 中国科学院西北生态环境资源研究院 Processing method of remote sensing image and remote sensing image processing model
CN111738954A (en) * 2020-06-24 2020-10-02 北京航空航天大学 Single-frame turbulence degradation image distortion removal method based on double-layer cavity U-Net model
CN111783968A (en) * 2020-06-30 2020-10-16 山东信通电子股份有限公司 Power transmission line monitoring method and system based on cloud edge cooperation
CN111798461A (en) * 2020-06-19 2020-10-20 武汉大学 Pixel-level remote sensing image cloud area detection method for guiding deep learning by coarse-grained label
CN111797712A (en) * 2020-06-16 2020-10-20 南京信息工程大学 Remote sensing image cloud and cloud shadow detection method based on multi-scale feature fusion network
CN111951284A (en) * 2020-08-12 2020-11-17 湖南神帆科技有限公司 Optical remote sensing satellite image refined cloud detection method based on deep learning
CN112001403A (en) * 2020-08-11 2020-11-27 北京化工大学 Image contour detection method and system
CN112149492A (en) * 2020-07-06 2020-12-29 北京航空航天大学 Remote sensing image accurate cloud detection method based on reinforcement genetic learning
CN112489054A (en) * 2020-11-27 2021-03-12 中北大学 Remote sensing image semantic segmentation method based on deep learning
CN112508031A (en) * 2020-12-22 2021-03-16 北京航空航天大学 Unsupervised remote sensing image semantic segmentation method and model from virtual to reality
CN112784894A (en) * 2021-01-18 2021-05-11 西南石油大学 Automatic labeling method for rock slice microscopic image
CN112819837A (en) * 2021-02-26 2021-05-18 南京大学 Semantic segmentation method based on multi-source heterogeneous remote sensing image
CN113239830A (en) * 2021-05-20 2021-08-10 北京航空航天大学 Remote sensing image cloud detection method based on full-scale feature fusion
CN113743300A (en) * 2021-09-03 2021-12-03 中化现代农业有限公司 Semantic segmentation based high-resolution remote sensing image cloud detection method and device
CN113792653A (en) * 2021-09-13 2021-12-14 山东交通学院 Method, system, equipment and storage medium for cloud detection of remote sensing image
CN114092801A (en) * 2021-10-28 2022-02-25 国家卫星气象中心(国家空间天气监测预警中心) Remote sensing image cloud detection method and device based on depth semantic segmentation

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107341517A (en) * 2017-07-07 2017-11-10 哈尔滨工业大学 The multiple dimensioned wisp detection method of Fusion Features between a kind of level based on deep learning
CN107944354A (en) * 2017-11-10 2018-04-20 南京航空航天大学 A kind of vehicle checking method based on deep learning
US20180144477A1 (en) * 2016-06-15 2018-05-24 Beijing Sensetime Technology Development Co.,Ltd Methods and apparatuses, and computing devices for segmenting object
CN108447048A (en) * 2018-02-23 2018-08-24 天津大学 Convolutional neural networks characteristics of image processing method based on concern layer
CN108491757A (en) * 2018-02-05 2018-09-04 西安电子科技大学 Remote sensing image object detection method based on Analysis On Multi-scale Features study
CN108830855A (en) * 2018-04-02 2018-11-16 华南理工大学 A kind of full convolutional network semantic segmentation method based on the fusion of multiple dimensioned low-level feature
US20190057507A1 (en) * 2017-08-18 2019-02-21 Samsung Electronics Co., Ltd. System and method for semantic segmentation of images

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
DUC MY VO et al.: "Semantic image segmentation using fully convolutional neural networks with multi-scale images and multi-scale dilated convolutions", Multimedia Tools and Applications, vol. 77 *
DENG Guohui et al.: "Research on semantic segmentation of high-resolution remote sensing data based on an improved fully convolutional neural network", pages 1-13 *

Cited By (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110598600A (en) * 2019-08-27 2019-12-20 广东工业大学 Remote sensing image cloud detection method based on UNET neural network
CN110781770B (en) * 2019-10-08 2022-05-06 高新兴科技集团股份有限公司 Living body detection method, device and equipment based on face recognition
CN110781770A (en) * 2019-10-08 2020-02-11 高新兴科技集团股份有限公司 Living body detection method, device and equipment based on face recognition
CN110910390A (en) * 2019-11-11 2020-03-24 大连理工大学 Panoramic three-dimensional color point cloud semantic segmentation method based on depth distortion convolution
CN110910390B (en) * 2019-11-11 2022-10-21 大连理工大学 Panoramic three-dimensional color point cloud semantic segmentation method based on depth distortion convolution
CN111079683A (en) * 2019-12-24 2020-04-28 天津大学 Remote sensing image cloud and snow detection method based on convolutional neural network
CN111079683B (en) * 2019-12-24 2023-12-12 天津大学 Remote sensing image cloud and snow detection method based on convolutional neural network
CN111144304A (en) * 2019-12-26 2020-05-12 上海眼控科技股份有限公司 Vehicle target detection model generation method, vehicle target detection method and device
CN111523381A (en) * 2020-03-13 2020-08-11 上海眼控科技股份有限公司 Method and equipment for updating land utilization information in numerical weather forecast
CN111523546A (en) * 2020-04-16 2020-08-11 湖南大学 Image semantic segmentation method, system and computer storage medium
CN111553925A (en) * 2020-04-27 2020-08-18 南通智能感知研究院 End-to-end crop image segmentation method and system based on FCN
CN111553289A (en) * 2020-04-29 2020-08-18 中国科学院空天信息创新研究院 Remote sensing image cloud detection method and system
CN111611968A (en) * 2020-05-29 2020-09-01 中国科学院西北生态环境资源研究院 Processing method of remote sensing image and remote sensing image processing model
CN111611968B (en) * 2020-05-29 2022-02-01 中国科学院西北生态环境资源研究院 Processing method of remote sensing image and remote sensing image processing model
CN111797712B (en) * 2020-06-16 2023-09-15 南京信息工程大学 Remote sensing image cloud and cloud shadow detection method based on multi-scale feature fusion network
CN111797712A (en) * 2020-06-16 2020-10-20 南京信息工程大学 Remote sensing image cloud and cloud shadow detection method based on multi-scale feature fusion network
CN111798461B (en) * 2020-06-19 2022-04-01 武汉大学 Pixel-level remote sensing image cloud area detection method for guiding deep learning by coarse-grained label
CN111798461A (en) * 2020-06-19 2020-10-20 武汉大学 Pixel-level remote sensing image cloud area detection method for guiding deep learning by coarse-grained label
CN111738954B (en) * 2020-06-24 2022-11-25 北京航空航天大学 Single-frame turbulence degradation image distortion removal method based on double-layer cavity U-Net model
CN111738954A (en) * 2020-06-24 2020-10-02 北京航空航天大学 Single-frame turbulence degradation image distortion removal method based on double-layer cavity U-Net model
CN111783968B (en) * 2020-06-30 2024-05-31 山东信通电子股份有限公司 Power transmission line monitoring method and system based on cloud edge cooperation
CN111783968A (en) * 2020-06-30 2020-10-16 山东信通电子股份有限公司 Power transmission line monitoring method and system based on cloud edge cooperation
CN112149492B (en) * 2020-07-06 2022-08-30 北京航空航天大学 Remote sensing image accurate cloud detection method based on reinforcement genetic learning
CN112149492A (en) * 2020-07-06 2020-12-29 北京航空航天大学 Remote sensing image accurate cloud detection method based on reinforcement genetic learning
CN112001403B (en) * 2020-08-11 2023-12-15 北京化工大学 Image contour detection method and system
CN112001403A (en) * 2020-08-11 2020-11-27 北京化工大学 Image contour detection method and system
CN111951284A (en) * 2020-08-12 2020-11-17 湖南神帆科技有限公司 Optical remote sensing satellite image refined cloud detection method based on deep learning
CN111951284B (en) * 2020-08-12 2022-04-22 湖南神帆科技有限公司 Optical remote sensing satellite image refined cloud detection method based on deep learning
CN112489054A (en) * 2020-11-27 2021-03-12 中北大学 Remote sensing image semantic segmentation method based on deep learning
CN112508031B (en) * 2020-12-22 2022-09-02 北京航空航天大学 Unsupervised remote sensing image semantic segmentation method and model from virtual to reality
CN112508031A (en) * 2020-12-22 2021-03-16 北京航空航天大学 Unsupervised remote sensing image semantic segmentation method and model from virtual to reality
CN112784894A (en) * 2021-01-18 2021-05-11 西南石油大学 Automatic labeling method for rock slice microscopic image
CN112819837A (en) * 2021-02-26 2021-05-18 南京大学 Semantic segmentation method based on multi-source heterogeneous remote sensing image
CN112819837B (en) * 2021-02-26 2024-02-09 南京大学 Semantic segmentation method based on multi-source heterogeneous remote sensing image
CN113239830B (en) * 2021-05-20 2023-01-17 北京航空航天大学 Remote sensing image cloud detection method based on full-scale feature fusion
CN113239830A (en) * 2021-05-20 2021-08-10 北京航空航天大学 Remote sensing image cloud detection method based on full-scale feature fusion
CN113743300A (en) * 2021-09-03 2021-12-03 中化现代农业有限公司 Semantic segmentation based high-resolution remote sensing image cloud detection method and device
CN113792653A (en) * 2021-09-13 2021-12-14 山东交通学院 Method, system, equipment and storage medium for cloud detection of remote sensing image
CN113792653B (en) * 2021-09-13 2023-10-20 山东交通学院 Method, system, equipment and storage medium for cloud detection of remote sensing image
CN114092801A (en) * 2021-10-28 2022-02-25 国家卫星气象中心(国家空间天气监测预警中心) Remote sensing image cloud detection method and device based on depth semantic segmentation

Also Published As

Publication number Publication date
CN110119728B (en) 2023-12-05

Similar Documents

Publication Publication Date Title
CN110119728A (en) Remote sensing images cloud detection method of optic based on Multiscale Fusion semantic segmentation network
CN110414377B (en) Remote sensing image scene classification method based on scale attention network
CN110210463B (en) Precise ROI-fast R-CNN-based radar target image detection method
CN108573276B (en) Change detection method based on high-resolution remote sensing image
CN103049763B (en) Context-constraint-based target identification method
CN110163213B (en) Remote sensing image segmentation method based on disparity map and multi-scale depth network model
Huang et al. Morphological building/shadow index for building extraction from high-resolution imagery over urban areas
CN104392468B (en) Moving target detection method based on improved visual background extraction
CN113160062B (en) Infrared image target detection method, device, equipment and storage medium
CN107133955B (en) A multi-level combined co-saliency detection method
CN110276264A (en) A crowd density estimation method based on foreground segmentation maps
CN104517095B (en) A head segmentation method based on depth images
CN104392228A (en) Unmanned aerial vehicle image target class detection method based on conditional random field model
CN109766936A (en) Image change detection method based on information transfer and attention mechanism
CN112560675B (en) Bird visual target detection method combining YOLO and rotation-fusion strategy
CN111462027B (en) Multi-focus image fusion method based on multi-scale gradient and matting
CN111160407A (en) Deep learning target detection method and system
CN103106658A (en) Method for rapid extraction of island and reef coastlines
CN106991686A (en) A level-set contour tracking method based on superpixel optical flow fields
CN110633727A (en) Deep neural network ship target fine-grained identification method based on selective search
CN106991411A (en) Refined extraction method for remote sensing targets based on deep shape priors
CN115393734A (en) SAR image ship contour extraction method based on fast R-CNN and CV model combined method
CN114495170A (en) Pedestrian re-identification method and system based on local self-attention inhibition
Zhang et al. Nearshore vessel detection based on Scene-mask R-CNN in remote sensing image
Widyantara et al. Gamma correction-based image enhancement and canny edge detection for shoreline extraction from coastal imagery

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant