CN109741340B - Ice cover radar image ice layer refined segmentation method based on FCN-ASPP network - Google Patents

Ice cover radar image ice layer refined segmentation method based on FCN-ASPP network

Info

Publication number: CN109741340B (application CN201811538248.8A)
Authority: CN (China)
Prior art keywords: image, layer, pixel, aspp, fcn
Legal status: Active
Inventors: 蔡轶珩, 马杰, 胡绍斌, 李媛媛
Current and original assignee: Beijing University of Technology
Application filed by Beijing University of Technology
Priority to CN201811538248.8A
Published as CN109741340A; granted and published as CN109741340B
Other languages: Chinese (zh)
Landscapes: Image Analysis (AREA)
Abstract

An ice cover radar image ice layer fine segmentation method based on an FCN-ASPP network, in the field of computer vision and pattern recognition. The method takes radar amplitude images as training samples for the network and applies data augmentation to compensate for the scarcity of ice layer image data, broadening its applicability. Lee filtering is performed on the ice cover image, with a threshold judgment added to the filtering step so that edge information is preserved as much as possible. An FCN-ASPP ice layer segmentation depth network is constructed, and the ASPP layer is improved so that the network's extraction of small-scale features is strengthened. The preliminary classification result is further processed with a CRF, refining the segmentation on top of end-to-end pixel-level segmentation. In addition, the network largely realizes autonomous learning.

Description

Ice cover radar image ice layer refined segmentation method based on FCN-ASPP network
Technical Field
The invention belongs to the field of computer vision and pattern recognition, and relates to a deep-learning-based method for segmenting ice layers in ice cover radar images.
Background
In recent years, global warming has severely threatened our living environment. Over recent decades, the Antarctic ice cap has melted at an accelerating rate, raising sea levels, significantly affecting ocean currents, and potentially even triggering serious geological disasters. Understanding data such as ice cap thickness and distribution, and how they change over time, is therefore essential for understanding and predicting the impact of glacier melt. Radar sensors are among the instruments that can penetrate the ice layer and provide large-area under-ice terrain information. Because air, ice and rock have different dielectric constants, radar waves are backscattered differently as they pass through different media, so the media can be distinguished in the radar echo map, and key information such as ice sheet thickness and the distribution of structures beneath the ice cap can be extracted.
Traditional radar image segmentation methods fall into three categories: model-based, edge-based and region-based. These methods, however, depend on prior knowledge, require manually set segmentation parameters, and yield insufficiently refined segmentations. Moreover, they typically rely on manually extracted features, which take considerable time to adapt to a new data set.
Compared with traditional methods, deep learning can automatically learn feature representations suited to the problem at hand, saving the labor cost of manual feature extraction. The fully convolutional network (FCN) is an end-to-end segmentation method from deep learning with pixel-level segmentation capability; given enough training data it can outperform classic machine learning, and since almost all parameters are obtained by iterative learning, few parameters need manual tuning. However, the FCN still segments too coarsely. The present method builds on the FCN, strengthening its refined segmentation capability so that it better meets the requirements of ice layer radar image segmentation.
Disclosure of Invention
The invention provides a segmentation method of an ice cover radar map based on a deep learning network.
1. Preprocessing
The ice cover radar map is preprocessed in a targeted way according to its characteristics. First, the radar amplitude image is converted into a 255-level grayscale map. Since different subsurface media reflect radar waves differently, different texture and grayscale features appear in the converted grayscale map, which later serve as the basis for classification.
The original radar amplitude image is acquired by airborne radar. It is preprocessed according to its characteristics so that it becomes better suited to ice cover segmentation and the accuracy of the algorithm improves: the pixel values of the original radar amplitude image are logarithmically converted, the converted pixel value corresponding to each radar amplitude is calculated, and the image is then normalized to 255 gray levels.
Because of the imaging mode specific to radar systems, the raw radar image contains a large amount of granular speckle noise, which seriously interferes with segmentation. The invention therefore applies a Lee filter to the radar grayscale image to remove the influence of speckle noise on classification. After conventional Lee filtering, however, the edges of the image become blurred and boundaries become hard to segment. The invention therefore adds a threshold judgment to the filtering: a coefficient of variation C is introduced to decide whether a pixel lies on an edge, and edge regions are left unfiltered so that edge features are preserved.
The filtered grayscale image is cropped to 500 × 389 pixels and used as the input training image. In addition, label images of the same size as the input training image are annotated manually, with noise, ice layer, bedrock layer and air layer marked as pixel gray levels 0, 1, 2 and 3 respectively.
2. Data amplification and construction of training samples
Training a deep network requires a large amount of data, and the existing data alone are far from sufficient. To improve the training and detection performance of the algorithm, data augmentation is applied to compensate for the scarcity of ice cover image data. The augmentation includes translation and scaling, which increases the data volume, aids learning and training, and improves the applicability and generalization of the method. The augmentation is as follows:
1) The preprocessed image is translated left, right, up and down by 20 pixels, so that the network learns translation invariance during training.
2) The preprocessed image is scaled down to 50% and 75% of its original size, further augmenting the sample data.
3. FCN-ASPP network construction
The invention constructs the FCN-ASPP network.
Although the FCN (fully convolutional network) achieves end-to-end pixel-level image segmentation, its many early downsampling steps mean that a training picture loses a large amount of detail information through 5 downsampling stages, making the classification result insufficiently accurate. The method therefore simplifies the downsampling path, reducing 5 downsampling stages to 3, and adds a BN (batch normalization) layer after each convolution to standardize the data in batches and speed up network convergence. The specific network structure comprises an input layer, two convolution layers (C1, C2), two BN layers (B1, B2), two activation function layers (R1, R2), a first downsampling layer (pool1), two convolution layers (C3, C4), two BN layers (B3, B4), two activation function layers (R3, R4), a second downsampling layer (pool2), two convolution layers (C5, C6), two BN layers (B5, B6), two activation function layers (R5, R6) and a third downsampling layer (pool3). Training pictures are fed through the downsampling path, and the feature maps of different sizes produced by the downsampling stages serve as the input of the next part of the network.
Because the edges of the ice cover image are extremely irregular, a multi-scale dilated pyramid structure (ASPP, Atrous Spatial Pyramid Pooling) is constructed to further strengthen the network's classification of detail features and raise ice layer classification accuracy. The ASPP network of this method contains five parallel convolution branches: to the original ASPP, which has only four parallel dilated convolution layers, an ordinary convolution layer is added to extract detail information better, giving five convolution branches at different dilation rates and five parallel feature maps at different scales. These feature maps are then fused into one feature map for subsequent processing. The ASPP network improves the whole network's perception of local detail information.
4. Refined segmentation result
To further mine the relationships between pixels in the training image and improve segmentation accuracy, the training image and the segmentation result map obtained from training are used together as the input of a conditional random field (CRF) for detail processing, yielding the final ice cover segmentation image.
Advantageous effects
1. By constructing the FCN-ASPP network, the invention rapidly and automatically extracts features from the ice layer image at different levels, learns the relationship between each pixel and its surrounding neighborhoods, expresses the high-level features of the ice cover image well, and extracts more detail information. A BN layer is added after each convolution layer to normalize the data and improve network convergence. The internal characteristics of ice, bedrock and noise are thus better distinguished, end-to-end pixel-level ice layer segmentation is achieved, and classification efficiency is greatly improved over traditional single-pixel judgment.
2. The invention further processes the final output of the FCN-ASPP network together with the original image, using a conditional random field to further mine the relationship between each pixel in the ice layer image and its surrounding neighborhoods and improve classification accuracy, so that the ice layer segmentation result agrees well with the manually annotated segmentation. Ice layer image segmentation is automated, greatly reducing the consumption of manpower and material resources.
3. The invention adopts multi-level feature extraction, making full use of the high resolution of shallow features and the stronger semantic information of deep features, achieving refined and robust ice layer segmentation.
4. The invention improves the ASPP structure by adding an ordinary convolution branch to the four parallel dilated convolution branches of the original ASPP, further strengthening the network's extraction of small-scale features.
5. The invention applies Lee filtering to the radar image. To preserve the edge information of the image, a threshold judgment is added to the filtering: a coefficient of variation is introduced, filtering is applied when the coefficient is below the threshold, and the original pixel value is kept otherwise. This improves the robustness of the segmentation.
Drawings
FIG. 1 is an overall flow diagram of the present invention;
FIG. 2 is a schematic diagram of a dilated convolution jump structure;
FIG. 3 is a schematic diagram of dilated convolution kernels with different dilation rates;
FIG. 4 is an ASPP network structure;
FIG. 5 is an FCN-ASPP network structure;
FIG. 6 is the ice layer segmentation result;
Detailed Description
The following detailed description is made with reference to the accompanying drawings:
The technical block diagram of the invention is shown in FIG. 1. The specific implementation steps are as follows:
1. Preprocessing
In the first step, the acquired radar amplitude image is logarithmically converted: the pixel value b_i corresponding to the radar amplitude a_i is calculated by formula (1).
b_i = 20 × log10(a_i)   (1)
In the second step, the image is normalized to 255 gray levels by formula (2).
c_i = 255 × (b_i − min) / (max − min)   (2)
where b_i is the logarithmically converted pixel value, c_i the normalized pixel value, and max and min are the maximum and minimum pixel values before normalization.
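As a minimal NumPy sketch (the function name is illustrative, not from the patent), formulas (1) and (2) amount to:

```python
import numpy as np

def amplitude_to_gray(a):
    """Convert radar amplitudes to a 255-level grayscale image.

    b_i = 20 * log10(a_i)            -- formula (1)
    c_i = 255 * (b - min)/(max - min) -- formula (2), min-max normalization
    """
    b = 20.0 * np.log10(a)
    return 255.0 * (b - b.min()) / (b.max() - b.min())
```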
In the third step, Lee filtering is applied to the original image to remove the coherent speckle noise of the radar image, and the result is stored as a training image. Speckle noise is multiplicative and is modeled as
y = x · v   (3)
where y is the noisy image, x the noise-free image, and v the noise signal. A 7 × 7 moving window is designed as the filter. To prevent the edge blurring caused by Lee filtering from adversely affecting segmentation, a simple threshold judgment is added: a coefficient of variation C is introduced, and the threshold C of the image is first obtained as
C = 1/√N   (4)
where N is the number of looks of the radar image. The coefficient of variation C_i of the window centered on pixel i is then obtained from formula (5), where var(y_i) is the variance and ȳ_i the mean of the pixels within the window.
C_i = √var(y_i) / ȳ_i   (5)
When C_i ≤ C, the Lee filtering method is adopted. When C_i > C, the pixel is kept and not filtered.
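A brute-force sketch of the thresholded filtering step. One assumption is made here: in homogeneous windows the Lee estimate is simplified to the window mean (the exact weight form of the Lee estimator is not spelled out in the text), while edge windows keep the original pixel, as described above.

```python
import numpy as np

def lee_filter_thresholded(img, win=7, looks=1):
    """Edge-preserving Lee filtering with a coefficient-of-variation test.

    Threshold C = 1/sqrt(N), with N the number of looks; per-window
    coefficient of variation C_i = std/mean.  Windows with C_i <= C are
    smoothed (here simply replaced by the window mean, an assumption);
    windows with C_i > C are treated as edges and the centre pixel kept.
    """
    pad = win // 2
    padded = np.pad(img.astype(float), pad, mode="reflect")
    out = img.astype(float).copy()
    C = 1.0 / np.sqrt(looks)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            w = padded[r:r + win, c:c + win]
            m = w.mean()
            Ci = w.std() / m if m > 0 else 0.0
            if Ci <= C:              # homogeneous area: filter
                out[r, c] = m
            # else: edge pixel, keep the original value
    return out
```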
In the fourth step, the normalized grayscale image is cropped to 500 × 389 pixels as the input training image. In addition, label images of the same size as the input training image are annotated manually, with noise, ice layer, bedrock layer and air layer marked as pixel gray levels 0, 1, 2 and 3 respectively.
2. Data amplification and construction of training samples
Training a deep network requires a large amount of data, and the existing ice layer images alone are far from sufficient. The training data are therefore augmented in several ways to increase the data volume and improve training and detection.
In the first step, the preprocessed image is translated by 20 pixels to the left, right, up and down respectively, so that the network learns translation invariance.
In the second step, the preprocessed image and the images from the first step are scaled down to 50% and 75% of the original size, further augmenting the data.
In the third step, the ice layer label images undergo the same data augmentation, giving classification labels in one-to-one correspondence with the samples. Of the constructed training sample data, 3/4 are used as the training set and 1/4 as the validation set.
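The augmentation steps above can be sketched as follows (NumPy; the function names are illustrative, and zero-filling the border exposed by translation is an assumption, since the text does not say how that border is handled):

```python
import numpy as np

def shift(img, dy, dx):
    """Translate by (dy, dx) pixels; the exposed border is zero-filled."""
    out = np.zeros_like(img)
    h, w = img.shape
    ys, xs = max(dy, 0), max(dx, 0)
    ye, xe = h + min(dy, 0), w + min(dx, 0)
    out[ys:ye, xs:xe] = img[ys - dy:ye - dy, xs - dx:xe - dx]
    return out

def scale_nn(img, factor):
    """Nearest-neighbour downscale -- keeps label values intact, so the
    same routine can be applied to the label images."""
    h, w = img.shape
    rows = (np.arange(int(h * factor)) / factor).astype(int)
    cols = (np.arange(int(w * factor)) / factor).astype(int)
    return img[np.ix_(rows, cols)]

def augment(img, d=20):
    """One image -> six augmented copies: 4 translations of d pixels
    plus 50% and 75% rescales."""
    shifts = [(0, -d), (0, d), (-d, 0), (d, 0)]
    return ([shift(img, dy, dx) for dy, dx in shifts]
            + [scale_nn(img, f) for f in (0.5, 0.75)])
```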
3. FCN-ASPP network construction, training and testing
In the first step, the FCN-ASPP network is built.
The FCN-ASPP network is built on the basis of the FCN: two stages of convolution and pooling layers are removed from the downsampling part of the FCN, and the ASPP layer is added between the downsampling and upsampling parts.
The network structure specifically comprises an input layer, two convolution layers (C1, C2), two BN layers (B1, B2), two activation function layers (R1, R2), a first downsampling layer (pool1), two convolution layers (C3, C4), two BN layers (B3, B4), two activation function layers (R3, R4), a second downsampling layer (pool2), two convolution layers (C5, C6), two BN layers (B5, B6), two activation function layers (R5, R6), and a third downsampling layer (pool3). Each downsampling stage applies two closely connected 3 × 3 convolution kernels with a stride of 1. The BN layers then standardize the data using the computed mean and variance, with a transform-reconstruction method.
y = γx + β   (6)
where x is the original data, y the normalized data, and the transform parameters γ and β are obtained by training. The input data of the BN layer are linearly transformed by γ and β to give the final normalization result. The pooling layers are max-pooling layers with a 2 × 2 kernel and a stride of 2. Training pictures are fed into the downsampling path, and the downsampling stages produce feature maps of different sizes.
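The transform-reconstruction of formula (6) can be sketched over a batch of feature values (a minimal NumPy version; the per-channel running statistics of a real BN layer are omitted):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Batch normalization, formula (6): standardize each feature with the
    batch mean and variance, then reconstruct as y = gamma * x_hat + beta."""
    x_hat = (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)
    return gamma * x_hat + beta
```

During training γ and β are learned, so the network can undo the standardization whenever that is optimal.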
ASPP layer section: because the low-level feature resolution of the FCN is high, and the high-level information embodies stronger semantic information, in order to fully utilize convolution features of different levels in a deep network, the original FCN is further improved, and output feature maps of pool1, pool2 and pool3 are used as the input of four parallel expansion convolution layers (C7-1, C7-2, C7-3 and C7-4) of the ASPP. Each layer respectively passes through two convolution layers, two activation function layers, a pooling layer and a full-connection layer, and a BN layer is added behind each convolution layer for normalizing data. (C7-1, BN4-1, R7-1, C8-1, BN5-1, R8-1, pool4-1, FC5-1), (C7-2, BN4-2, R7-2, C8-2, BN5-2, R8-2, pool4-2, FC5-2), (C7-3, BN4-3, R7-3, C8-3, BN 8-3, R8-3, pool 8-3, FC 8-3), (C8-4, BN 8-4, R8-4, C8-4, BN 8-4, R8-4, pool 8-4, FC 8-4), (C8-5, R8-5, C8-5, R8-5, and R8-5, C8-5. And fusing the FC5-1, FC5-2, FC5-3, FC5-4 and FC5-5 layer output characteristic maps to obtain a fused characteristic map (session).
The convolution layers of the ASPP use dilated convolution. Because the edges of the ice cover image are extremely irregular, the multi-scale dilated pyramid structure (ASPP, Atrous Spatial Pyramid Pooling) further strengthens the network's classification of detail features and raises ice layer classification accuracy. The ASPP of this method contains five parallel convolution branches: an ordinary convolution layer is added to the original ASPP, which has only four parallel dilated convolution layers, to extract detail information better, giving five convolution branches at different dilation rates and five parallel feature maps at different scales. These feature maps are then fused into one feature map. In this part, a BN layer is added after each convolution layer to normalize the data and speed up network convergence.
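To illustrate the dilation mechanism, here is a toy NumPy version of one dilated convolution plus a five-branch fusion. The dilation rates and the uniform averaging kernels below are assumptions for illustration only; the patent does not state the rates it uses, and real branches have learned kernels.

```python
import numpy as np

def dilated_conv2d(img, kernel, rate):
    """'Valid' 2-D correlation with a dilated (atrous) kernel.

    A k x k kernel at dilation rate r samples the input on a grid with
    holes; its effective receptive field is k + (k - 1)(r - 1)."""
    k = kernel.shape[0]
    eff = k + (k - 1) * (rate - 1)
    h, w = img.shape
    out = np.zeros((h - eff + 1, w - eff + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = img[i:i + eff:rate, j:j + eff:rate]
            out[i, j] = (patch * kernel).sum()
    return out

def aspp_fuse(feature, rates=(2, 3, 4, 5)):
    """Five parallel branches: one ordinary 3x3 convolution (rate 1, the
    branch this method adds) plus four dilated 3x3 convolutions, cropped
    to a common size and averaged as a stand-in for feature fusion."""
    k = np.full((3, 3), 1.0 / 9.0)      # uniform kernel, illustration only
    maps = [dilated_conv2d(feature, k, 1)]
    maps += [dilated_conv2d(feature, k, r) for r in rates]
    hm = min(m.shape[0] for m in maps)
    wm = min(m.shape[1] for m in maps)
    return np.mean([m[:hm, :wm] for m in maps], axis=0)
```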
Upsampling part: comprising an upsampling layer and an output layer. The fused feature maps obtained from the three ASPP layers are the input of the upsampling layer, which upsamples by bilinear interpolation. The feature maps are then fused into one feature map, and the pixels are classified by a softmax classifier to obtain the ice cover segmentation image.
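The upsampling-and-classification step can be sketched for a single 2-D score map per class (function names and shapes are illustrative; corner-aligned bilinear weights are one common convention, assumed here):

```python
import numpy as np

def bilinear_upsample(fm, out_h, out_w):
    """Bilinear interpolation of a 2-D feature map to (out_h, out_w),
    aligning the corner samples."""
    h, w = fm.shape
    ys = np.linspace(0.0, h - 1.0, out_h)
    xs = np.linspace(0.0, w - 1.0, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = fm[np.ix_(y0, x0)] * (1 - wx) + fm[np.ix_(y0, x1)] * wx
    bot = fm[np.ix_(y1, x0)] * (1 - wx) + fm[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

def classify(score_maps):
    """Per-pixel softmax over class score maps (shape: classes x H x W),
    then argmax -> label image with values 0..n_classes-1."""
    e = np.exp(score_maps - score_maps.max(axis=0, keepdims=True))
    probs = e / e.sum(axis=0, keepdims=True)
    return probs.argmax(axis=0)
```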
Training process: inputting the glacier radar image and the classification label image into an FCN-ASPP network to obtain a classification result image with the same size as the original image. And the loss function adopts a cross entropy function, and the network optimization is carried out by minimizing the difference value of the input label image and the classification predicted image. And storing the trained model.
4. Refinement of the segmentation results
The segmentation result map output by the front-end FCN-ASPP and the original grayscale image are taken as the input of a fully connected conditional random field (CRF) for further processing. The energy function used by the fully connected CRF is
E = Σ_{x∈N} ψ_u(x) + Σ_{x≠y} ψ_p(x, y)   (7)
where pixel x ∈ N and N is the set of all pixels of the picture. The first term of formula (7) is the unary energy function
ψ_u(x) = −log P_i(x)   (8)
where P_i(x) is the distribution probability of class label i at pixel x in the segmentation result map output by the front-end FCN-ASPP; in this invention i takes the values 0, 1, 2 and 3.
The second term of formula (7) is the binary (pairwise) energy function
ψ_p(x, y) = μ_i(x, y) [ ω_1 exp(−‖p_x − p_y‖²/(2σ_α²) − |I_x − I_y|²/(2σ_β²)) + ω_2 exp(−‖p_x − p_y‖²/(2σ_γ²)) ]   (9)
where x and y denote two different pixels, p_x and p_y their positions, and I_x and I_y their gray levels; the gray levels and positions come from the input original grayscale image. μ_i(x, y) is a label compatibility function: with pixel x labeled i, μ_i(x, y) is 0 when pixel y also carries label i and 1 otherwise, the label of y being taken from the segmentation result map output by the front-end FCN-ASPP. The closer pixels x and y are and the smaller their gray difference, the larger the value of the exponential kernels.
Four energy functions E_0(x), E_1(x), E_2(x), E_3(x) of pixel x are obtained by calculation, and the label corresponding to the minimum energy is taken as the new label of pixel x. Traversing all pixels of the input segmentation result map yields a new, more accurate segmentation result map. The parameters are set according to the characteristics of the radar image: ω_1 and ω_2 are set to 49, σ_α and σ_γ to 5, and σ_β to 3.
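The energy minimization above can be sketched by brute-force evaluation for one pixel (an illustrative sketch only; practical dense-CRF implementations use efficient mean-field inference rather than this direct enumeration, and the function names are assumptions):

```python
import numpy as np

def crf_energy(i, x, probs, labels, pos, gray,
               w1=49.0, w2=49.0, sa=5.0, sb=3.0, sg=5.0):
    """Energy of assigning label i to pixel x.

    probs[x, i] : class probability P_i(x) from the FCN-ASPP output
    labels[y]   : current label of pixel y (FCN-ASPP segmentation map)
    pos[y]      : (row, col) position; gray[y]: gray level, both taken
                  from the input original grayscale image
    Unary term -log P_i(x); pairwise term with the parameter values
    given in the description (w1 = w2 = 49, sa = sg = 5, sb = 3).
    """
    e = -np.log(probs[x, i])
    for y in range(len(labels)):
        if y == x:
            continue
        mu = 0.0 if labels[y] == i else 1.0      # label compatibility
        dp = float(np.sum((pos[x] - pos[y]) ** 2))
        dg = float((gray[x] - gray[y]) ** 2)
        appearance = w1 * np.exp(-dp / (2 * sa**2) - dg / (2 * sb**2))
        smoothness = w2 * np.exp(-dp / (2 * sg**2))
        e += mu * (appearance + smoothness)
    return e

def relabel(x, probs, labels, pos, gray):
    """New label of pixel x: the i in {0, 1, 2, 3} minimizing E_i(x)."""
    return int(np.argmin([crf_energy(i, x, probs, labels, pos, gray)
                          for i in range(4)]))
```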

Claims (3)

1. An ice cover radar image ice layer fine segmentation method based on an FCN-ASPP network is characterized by comprising the following steps:
1) pretreatment of
performing logarithmic conversion on the pixel values of the original radar amplitude image; calculating the converted pixel value corresponding to each radar amplitude; then normalizing the image to 255 gray levels;
further processing the radar grayscale image with a Lee filtering algorithm;
taking the filtered grayscale image as the input training image; annotating label images of the same size as the input training images, with noise, ice layer, bedrock layer and air layer marked as pixel gray levels 0, 1, 2 and 3 respectively;
2) data amplification and construction of training samples
3) FCN-ASPP network construction
The FCN network structure specifically comprises an input layer, two convolution layers (C1, C2), two BN layers (B1, B2), two activation function layers (R1, R2), a first down-sampling layer (pool1), two convolution layers (C3, C4), two BN layers (B3, B4), two activation function layers (R3, R4), a second down-sampling layer (pool2), two convolution layers (C5, C6), two BN layers (B5, B6), two activation function layers (R5, R6) and a third down-sampling layer (pool 3);
the output feature maps of pool1, pool2 and pool3 are used as the input of the four parallel dilated convolution layers of the ASPP, namely C7-1, C7-2, C7-3 and C7-4, and of an ordinary convolution layer C7-5; each branch passes through two convolution layers, two activation function layers R, a pooling layer pool and a fully connected layer FC, with a BN layer added after each convolution layer to normalize the data; the whole ASPP layer is: (C7-1, BN4-1, R7-1, C8-1, BN5-1, R8-1, pool4-1, FC5-1), (C7-2, BN4-2, R7-2, C8-2, BN5-2, R8-2, pool4-2, FC5-2), (C7-3, BN4-3, R7-3, C8-3, BN5-3, R8-3, pool4-3, FC5-3), (C7-4, BN4-4, R7-4, C8-4, BN5-4, R8-4, pool4-4, FC5-4), (C7-5, BN4-5, R7-5, C8-5, BN5-5, R8-5, pool4-5, FC5-5); the output feature maps of FC5-1, FC5-2, FC5-3, FC5-4 and FC5-5 are fused to obtain the fused feature map;
the fused feature maps obtained from the three ASPP layers are input to an upsampling layer and upsampled by a bilinear interpolation algorithm; the feature maps are then fused into one feature map, and the pixels are classified by a softmax classifier to obtain the ice layer segmentation image;
in the training process, the glacier radar images and the classification label images are input into the FCN-ASPP network to obtain a classification result map of the same size as the original image; the loss function is the cross-entropy function, and the network is optimized by minimizing the difference between the input label image and the classification prediction image; the trained model is saved;
4) and further processing the preliminary segmentation result obtained by the FCN-ASPP by adopting a CRF algorithm.
2. The method of claim 1, wherein: the Lee filtering process specifically includes:
designing a 7 × 7 moving window as a filter to filter the image; adding a simple threshold judgment: a coefficient of variation C is introduced, and the threshold C of the image is first calculated as
C = 1/√N
where N is the number of looks of the radar image; the coefficient of variation C_i of the window centered on pixel i is calculated by the formula
C_i = √var(y_i) / ȳ_i
where var(y_i) is the variance and ȳ_i the mean of the pixels within the window;
when C_i ≤ C, the Lee filtering method is adopted; when C_i > C, the pixel is kept and not filtered.
3. The method of claim 1, wherein the CRF processing specifically comprises:
taking the segmentation result image output by the front-end FCN-ASPP and the original grayscale image as the input of a fully connected conditional random field (CRF) for further processing; the energy function used by the fully connected CRF is
E = Σ_{x∈N} ψ_u(x) + Σ_{x≠y} ψ_p(x, y)
where pixel x ∈ N and N is the set of all pixels of the picture; the first term of the formula is the unary energy function
ψ_u(x) = −log P_i(x)
where P_i(x) is the distribution probability of class label i at pixel x in the segmentation result map output by the front-end FCN-ASPP, with i taking the values 0, 1, 2 and 3;
the second term of the formula is the binary energy function
ψ_p(x, y) = μ_i(x, y) [ ω_1 exp(−‖p_x − p_y‖²/(2σ_α²) − |I_x − I_y|²/(2σ_β²)) + ω_2 exp(−‖p_x − p_y‖²/(2σ_γ²)) ]
where x and y denote two different pixels, p_x and p_y their positions, and I_x and I_y their gray levels, the gray levels and positions being obtained from the input original grayscale image; μ_i(x, y) is a label compatibility function: with pixel x labeled i, μ_i(x, y) takes the value 0 when pixel y also belongs to label i and 1 otherwise, the label of pixel y being obtained from the segmentation result map output by the front-end FCN-ASPP; the closer pixels x and y are and the smaller their gray difference, the larger the value of the exponential kernels;
four energy functions E_0(x), E_1(x), E_2(x), E_3(x) of pixel x are obtained by calculation, and the label corresponding to the minimum energy is taken as the new label of pixel x; all pixels of the input segmentation result map are traversed to obtain a new segmentation result map, with parameters ω_1 and ω_2 set to 49, σ_α and σ_γ set to 5, and σ_β set to 3.
CN201811538248.8A 2018-12-16 2018-12-16 Ice cover radar image ice layer refined segmentation method based on FCN-ASPP network Active CN109741340B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811538248.8A CN109741340B (en) 2018-12-16 2018-12-16 Ice cover radar image ice layer refined segmentation method based on FCN-ASPP network


Publications (2)

Publication Number Publication Date
CN109741340A CN109741340A (en) 2019-05-10
CN109741340B true CN109741340B (en) 2020-10-16

Family

ID=66360375

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811538248.8A Active CN109741340B (en) 2018-12-16 2018-12-16 Ice cover radar image ice layer refined segmentation method based on FCN-ASPP network

Country Status (1)

Country Link
CN (1) CN109741340B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110276332B (en) * 2019-06-28 2021-12-24 北京奇艺世纪科技有限公司 Video feature processing method and device
CN111667499A (en) * 2020-06-05 2020-09-15 济南博观智能科技有限公司 Image segmentation method, device and equipment for traffic signal lamp and storage medium
CN111738338B (en) * 2020-06-23 2021-06-18 征图新视(江苏)科技股份有限公司 Defect detection method applied to motor coil based on cascaded expansion FCN network
CN112819000A (en) * 2021-02-24 2021-05-18 长春工业大学 Streetscape image semantic segmentation system, streetscape image semantic segmentation method, electronic equipment and computer readable medium
CN114445726B (en) * 2021-12-13 2022-08-02 广东省国土资源测绘院 Sample library establishing method and device based on deep learning

Citations (4)

Publication number Priority date Publication date Assignee Title
CN102542551A (en) * 2010-12-13 2012-07-04 北京师范大学 Automatic change detection technology for floating ice at edges of polar ice sheets
CN102567726A (en) * 2010-12-13 2012-07-11 北京师范大学 Technology for automatically extracting floating ice in polar ice sheet edge regions
CN106971396A (en) * 2017-03-10 2017-07-21 中国科学院遥感与数字地球研究所 Ice sheet freeze thawing detection method based on super-pixel
CN107103280A (en) * 2017-03-10 2017-08-29 中国科学院遥感与数字地球研究所 Polar ice sheet freeze thawing detection method

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US10580131B2 (en) * 2017-02-23 2020-03-03 Zebra Medical Vision Ltd. Convolutional neural network for segmentation of medical anatomical images

Non-Patent Citations (1)

Title
Jonathan Long et al., "Fully Convolutional Networks for Semantic Segmentation", CVPR 2015, pp. 3431-3440. *


Similar Documents

Publication Publication Date Title
CN109741340B (en) Ice cover radar image ice layer refined segmentation method based on FCN-ASPP network
CN108921799B (en) Remote sensing image thin cloud removing method based on multi-scale collaborative learning convolutional neural network
CN111461258B (en) Remote sensing image scene classification method of coupling convolution neural network and graph convolution network
CN108985238B (en) Impervious surface extraction method and system combining deep learning and semantic probability
CN111915592B (en) Remote sensing image cloud detection method based on deep learning
CN111340738B (en) Image rain removing method based on multi-scale progressive fusion
CN110084234B (en) Sonar image target identification method based on example segmentation
CN110598600A (en) Remote sensing image cloud detection method based on UNET neural network
CN111428762B (en) Interpretable remote sensing image ground feature classification method combining deep data learning and ontology knowledge reasoning
CN110570433B (en) Image semantic segmentation model construction method and device based on generation countermeasure network
Mohajerani et al. Cloud and cloud shadow segmentation for remote sensing imagery via filtered jaccard loss function and parametric augmentation
CN113435411B (en) Improved DeepLabV3+ based open pit land utilization identification method
CN112488025B (en) Double-temporal remote sensing image semantic change detection method based on multi-modal feature fusion
CN113312993B (en) Remote sensing data land cover classification method based on PSPNet
CN112561876A (en) Image-based pond and reservoir water quality detection method and system
CN111178438A (en) ResNet 101-based weather type identification method
CN110991257A (en) Polarization SAR oil spill detection method based on feature fusion and SVM
CN113887472A (en) Remote sensing image cloud detection method based on cascade color and texture feature attention
CN111008644A (en) Ecological change monitoring method based on local dynamic energy function FCN-CRF model
CN111274878B (en) Satellite cloud image classification method and system
CN115049841A (en) Depth unsupervised multistep anti-domain self-adaptive high-resolution SAR image surface feature extraction method
CN114937206A (en) Hyperspectral image target detection method based on transfer learning and semantic segmentation
CN113052215A (en) Sonar image automatic target identification method based on neural network visualization
CN115311508A (en) Single-frame image infrared dim target detection method based on depth U-type network
CN115861756A (en) Earth background small target identification method based on cascade combination network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant