CN109409240A - A SegNet remote sensing image semantic segmentation method combining random walk - Google Patents

A SegNet remote sensing image semantic segmentation method combining random walk

Info

Publication number: CN109409240A (application CN201811139786.XA; granted publication CN109409240B)
Authority: CN (China)
Prior art keywords: SegNet, random walk, classification, segmentation, image
Legal status: Granted; Active
Other languages: Chinese (zh)
Inventors: 江洁 (Jiang Jie), 何永强 (He Yongqiang), 刘思滢 (Liu Siying)
Current and original assignee: Beihang University
Application filed by Beihang University; priority to CN201811139786.XA

Classifications

    • G06V20/13 Scenes; Scene-specific elements; Terrestrial scenes; Satellite images
    • G06F18/2414 Pattern recognition; Classification techniques based on distances to training or reference patterns; Smoothing the distance, e.g. radial basis function networks [RBFN]
    • G06F18/2431 Pattern recognition; Classification techniques relating to the number of classes; Multiple classes
    • G06V10/267 Image preprocessing; Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V10/30 Image preprocessing; Noise filtering

Abstract

The present invention relates to a SegNet remote sensing image semantic segmentation method combining random walk, which is divided into a SegNet initial segmentation step and a random walk segmentation optimization step. In the SegNet initial segmentation step, SegNet outputs an initial semantic segmentation image together with class strength information. In the random walk optimization step, random walk seed regions are first selected: class saliency indices are computed for the different classes from the class strength information output by SegNet, and a threshold is set to choose seed regions of the different classes. Secondly, undirected edge weights are computed from the original image gradient and the SegNet class strength information. Thirdly, starting from the seed regions and using the undirected edge weights, random walk is performed over the whole initial segmentation image, finally yielding an optimized segmentation result for the entire image. By performing random walk over the entire image, the present invention analyses and controls prediction errors, greatly reduces edge burrs and patch-like classification errors, and accomplishes high-precision semantic segmentation of remote sensing images.

Description

A SegNet remote sensing image semantic segmentation method combining random walk
Technical field
The present invention relates to a SegNet remote sensing image semantic segmentation method combining random walk (Random-Walk-SegNet), and belongs to the field of information technology.
Background art
In recent years remote sensing technology has developed rapidly, and remote sensing image processing is increasingly applied in fields such as disaster analysis, urban monitoring and resource management. Remote sensing image change detection is one of its key technologies: from images of different periods it can detect what kind of change has occurred in a specific region within a certain time, and to what degree. Semantic segmentation is a major key problem in remote sensing image change detection; through semantic segmentation, the ground-object class to which each pixel in the image belongs can be obtained, and on this basis the change information between two images can be obtained by comparison.
Image semantic segmentation is the process of partitioning the pixels of an image according to semantic differences. In the remote sensing field, multi-scale, multi-type semantic segmentation has always been an emphasis and a difficulty of remote sensing image processing. With the wide use of high-resolution remote sensing images, new challenges for remote sensing image semantic segmentation have appeared: 1) an image contains objects of many kinds, including large houses, roads and small cars; the great differences in scale make it difficult to segment all of them finely at the same time, so segmentation at different levels and different scales is needed; 2) image detail is very rich, and for objects of the same type the increase in detail increases their spectral variability, for example twigs, roofs and road signs, so the within-class variance grows and classification becomes harder; 3) segmentation and classification are mainly based on image texture rather than on the grey level of a single pixel, so the design of the segmentation model must make full use of the global information of each object type in the original image.
Image semantic segmentation methods have gone through a development course from pixel-threshold-based and clustering-based approaches to graph-cut-based approaches. These traditional semantic segmentation methods mostly segment on low-level image features such as pixel grey level; for images with complex and variable detail they are too simple and coarse, and high precision is very hard to reach. Semantic segmentation methods based on deep learning, for their part, suffer from inaccurate edge localisation and excessive noise spikes.
Remote sensing image semantic segmentation based on deep learning has greatly improved segmentation precision, but with the wide use of high-resolution remote sensing images, the increase in image detail brings strong interference to class learning: rough edges and patch-like classification errors appear easily, increasing the difficulty of accurate segmentation.
Much work has been done on deep-learning-based semantic segmentation algorithms for the remote sensing field. In "Efficient piecewise training of deep structured models for semantic segmentation" (Lin G, Shen C, van den Hengel A, et al. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016: 3194-3203), image semantic segmentation is performed by combining a CNN with a conditional random field, exploiting complex contextual information. In "Semantic segmentation of earth observation data using multimodal and multi-scale deep networks" (Audebert N, Le Saux B, Lefèvre S. In Asian Conference on Computer Vision, Springer, Cham, 2016: 180-196), a deep fully convolutional network (DFCNN) is constructed that uses multiple convolutional layers for rapid aggregated prediction at multiple scales, enhancing detail resolution. The algorithm proposed in "Classification with an edge: improving semantic image segmentation with boundary detection" (Marmanis D, Schindler K, Wegner J D, et al. ISPRS Journal of Photogrammetry and Remote Sensing, 2018, 135: 158-172) combines SegNet with the edge detection network HED, improving segmentation precision. Although these various improved methods raise semantic segmentation precision overall, the identification and localisation of ground-object edges still carries large errors: edge segmentation is not smooth enough, and patch-like noise is frequent. As for other high-precision segmentation algorithms, the neural networks have many layers and the error-control model is integrated into the trained network, so model construction is complicated. For example, "Gated Convolutional Neural Network for Semantic Segmentation in High-Resolution Images" (Wang H, Wang Y, Zhang Q, et al. Remote Sensing, 2017, 9(5): 446) proposes an information entropy control model (ECM) on the basis of ResNet-101 that can effectively control segmentation errors, but obvious errors appear in scenarios with insufficient training, which is an unavoidable problem of error-control algorithms integrated into the network model.
Up to now, no semantic segmentation method has combined the deep-learning-based SegNet with the traditional random walk algorithm.
Summary of the invention
The technical problem solved by the present invention: overcoming the deficiencies of the prior art, a SegNet remote sensing image semantic segmentation method combining random walk is provided; by performing random walk over the entire image, prediction errors are analysed and controlled, edge burrs and patch-like classification errors are greatly reduced, and high-precision semantic segmentation of remote sensing images is accomplished.
The technical solution of the present invention: a SegNet remote sensing image semantic segmentation method combining random walk, comprising SegNet semantic segmentation result optimization combined with random walk, divided into a SegNet initial segmentation step and a random walk segmentation optimization step;
In the SegNet initial segmentation step, the original remote sensing image is first input and passed through SegNet, which finally outputs the SegNet initial semantic segmentation image and the strength information of each class;
The random walk optimization step optimizes the segmentation of the image after SegNet initial segmentation. In the first step, random walk seed regions are selected: according to the class strength information output by SegNet, class saliency indices are computed for the different classes, a threshold is set, and seed regions of the different classes are chosen. In the second step, the weights of the edges of the random walk undirected graph are computed from the original image gradient and the SegNet class strength information. In the third step, starting from the seed regions chosen in the first step and using the undirected edge weights of the second step, random walk is performed on the whole initial segmentation image, expanding the region of each class and finally obtaining the optimized segmentation result for the entire image.
In the random walk optimization step, random walk seed regions are chosen based on the class saliency information output by SegNet, as follows:
At each pixel location the last convolutional layer of SegNet outputs a 6-dimensional vector Z = (z_1, z_2, z_3, z_4, z_5, z_6)^T. From the ratio of the second-largest element to the largest element, a simple class saliency index Sa is constructed, defined as:
Sa = 1 - z_2nd / z_max    (1)
where z_max is the largest element of vector Z and z_2nd the second-largest element. The value range of Sa is [0, 1]; the larger Sa is, the more salient the prediction corresponding to the largest element is relative to the other predicted values, and the more reliable the current predicted class.
In the random walk optimization step, the weights of the edges of the random walk undirected graph are set by fusing the original image gradient with the SegNet class strength information, as follows:
The weight is constructed according to the following formula:
w_ij = exp(-α(h_i - h_j)^2 - β(g_i - g_j)^2)    (2)
where each pixel in the image is regarded as a node of the undirected graph; h_i and h_j are the strength values of the current predicted class at two adjacent nodes of the original input remote sensing image, output directly by the last convolutional layer of the SegNet decoder; α and β are two free parameters, taken as 10 and 50 respectively; and g_i and g_j are the image grey levels of the two adjacent nodes. Before the weights are computed, the image grey levels and the class strength information are each normalised.
Compared with the prior art, the advantages of the present invention are:
(1) The present invention first uses SegNet as the basic framework to realise initial semantic segmentation of the image, obtaining a preliminary classification result and class saliency information. Then the seed regions of the random walk are chosen according to the class saliency information output by SegNet, and the weights of the edges of the random walk undirected graph are designed by fusing the original image gradient with the SegNet class strength information. Random walk over the entire image analyses and controls prediction errors, greatly reduces edge burrs and patch-like classification errors, and accomplishes high-precision semantic segmentation of remote sensing images.
(2) The present invention uses the SegNet output information to set the random walk seed regions and weights and to optimize the segmentation result. Remote sensing image semantic segmentation experiments on the data set provided by ISPRS using the method of the present invention achieve a segmentation precision of 89.9%. Experiments confirm that fusing the results of multiple detection windows and introducing the random walk method effectively improves the output quality of SegNet and optimizes the semantic segmentation result. In addition, the random walk method needs no training, occupies no network resources, and can run independently offline, so it performs well under weak training scenarios. Compared with the mainstream methods on the official ISPRS benchmark leaderboard, the method of the present invention achieves higher precision in the detection of all kinds of ground-object targets, is not prone to large-area adhesion, greatly reduces edge burrs and patch-like classification errors, and localises edges more accurately, so that high-precision remote sensing image semantic segmentation can be achieved.
(3) The present invention uses random walk to control semantic segmentation errors. As an independent back-end optimization algorithm, the random walk selects salient regions of the deep neural network output as seed regions and re-segments the low-saliency regions of the output; it does not need to be integrated into the network for training, and it behaves more stably under various scenarios.
Description of the drawings
Fig. 1 is the overall flow chart of the method of the present invention;
Fig. 2 is a schematic diagram of the SegNet structure;
Fig. 3 shows an original image and its ground truth, in which (a) is road surface, (b) is building, (c) is low vegetation, (d) is trees, and (e) is car;
Fig. 4 is a schematic diagram of simple random walk segmentation, in which (a) shows the seed points and the segmentation result, and (b), (c) and (d) show the walking probabilities of classes 1, 2 and 3 respectively;
Fig. 5 is a schematic diagram of the influence of the saliency threshold, in which (a) is the ground truth of the semantic segmentation, (b) is the SegNet output, (c) is the error of the network output relative to the true value, i.e. the difference between (b) and (a), and (d)-(f) are the non-seed region detection results corresponding to thresholds Thr of 0.5, 0.3 and 0.1;
Fig. 6 is a schematic diagram of RWSNet segmentation results, in which (a) is original image 1, (b) is segmentation result 1, (c) is original image 2, and (d) is segmentation result 2.
Detailed description of the embodiments
The following describes the present invention in detail with reference to the accompanying drawings and embodiments.
As shown in Figure 1, the method is specifically implemented by the following steps:
(1) SegNet initial segmentation
In traditional SegNet image detection, regions at the edge of the detection image use less information for classification than the central region, so the segmentation precision there is lower than in the window centre region.
The present invention uses a method in which the same pixel position is predicted repeatedly through multiple detection windows. A window identical in size to the network input image slides over the image to obtain sampled window images; each window image is input to SegNet, which outputs the predicted class and class strength information of each pixel of the original window image, i.e. a per-pixel prediction result. When the stride of the sliding window is smaller than the window size, the detection results of the same pixel position under different detection windows can be superimposed, so a pixel that lies at the edge of one detection window also gets the chance to lie at the centre of other detection windows. With this ensemble-learning idea the spatial perception ability of the method is improved, and thus the overall segmentation effect is promoted.
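The overlapping-window fusion described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the patent's implementation: the function name `sliding_window_fuse` and the toy `predict` callback standing in for SegNet are our own.

```python
import numpy as np

def sliding_window_fuse(image, predict, win=4, step=2, n_classes=3):
    """Average per-pixel class scores over overlapping windows.

    `predict` maps a (win, win) patch to a (n_classes, win, win) score array,
    playing the role of SegNet's per-pixel output.
    """
    h, w = image.shape
    scores = np.zeros((n_classes, h, w))
    counts = np.zeros((h, w))
    for y in range(0, h - win + 1, step):
        for x in range(0, w - win + 1, step):
            scores[:, y:y+win, x:x+win] += predict(image[y:y+win, x:x+win])
            counts[y:y+win, x:x+win] += 1
    scores /= np.maximum(counts, 1)   # average the superimposed detections
    return scores.argmax(axis=0), counts
```

With `step < win`, interior pixels are covered by several windows (the `counts` map shows how many), which is exactly the ensemble effect exploited above.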
The SegNet structure used by the present invention is shown in Fig. 2. The encoder is the first 13 layers of VGG-16 and contains pooling layers using max pooling. The decoder likewise has 13 layers whose structure corresponds to the encoder: each pooling layer has a corresponding upsampling layer in the decoder, and upsampling restores the spatial resolution of the image. The decoder finally connects to a Softmax layer that predicts the target class pixel by pixel. The network input is a high-resolution remote sensing image, and the output is a semantic segmentation image of the same resolution.
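A distinctive point of SegNet's encoder-decoder pairing is that each upsampling layer reuses the argmax positions recorded by its corresponding max-pooling layer, so activations return to the locations they came from. A small NumPy sketch of this index-preserving pool/unpool pair (the helper names are illustrative, and a real implementation would operate on feature-map tensors, not a single 2-D array):

```python
import numpy as np

def max_pool_2x2(x):
    """2x2 max pooling that also records argmax positions (SegNet style)."""
    h, w = x.shape
    pooled = np.zeros((h // 2, w // 2))
    idx = np.zeros((h // 2, w // 2), dtype=int)   # flat index into x
    for i in range(h // 2):
        for j in range(w // 2):
            patch = x[2*i:2*i+2, 2*j:2*j+2]
            k = patch.argmax()
            pooled[i, j] = patch.flat[k]
            idx[i, j] = (2*i + k // 2) * w + (2*j + k % 2)
    return pooled, idx

def max_unpool_2x2(pooled, idx, shape):
    """Place pooled values back at their recorded positions; zeros elsewhere."""
    out = np.zeros(shape)
    out.flat[idx.ravel()] = pooled.ravel()
    return out
```

Carrying the indices across to the decoder is what lets SegNet restore spatial resolution without learning separate deconvolution filters for the upsampling step.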
1. SegNet training parameter settings
The training parameters used by the network of the invention are given in Table 1. SegNet was trained with input sizes of 128 × 128 and 256 × 256, denoted SegNet-1 and SegNet-2 respectively. Optimisation uses stochastic gradient descent. The initial decoder learning rate is 0.01, reduced to 0.001 after 20,000 iterations for SegNet-1 and after 40,000 iterations for SegNet-2. The encoder is initialised with a pre-trained VGG-16 network, and its learning rate is set to half the decoder learning rate.
Table 1. Network training parameters
2. SegNet training method combined with the sliding window strategy
The image data used for training all come from the two-dimensional remote sensing image semantic segmentation contest held by the International Society for Photogrammetry and Remote Sensing (ISPRS). The images cover the Vaihingen area near Stuttgart, Germany, with a ground sampling distance (GSD) of 9 cm. The images in the data set are cut out of one whole ortho-rectified image; there are 33 original images in total, with a resolution of roughly 2000 × 2500.
The original images are three-channel 8-bit TIFF files containing the information of three bands: near infrared, visible red and green (Near Infrared, Red and Green, IRRG). ISPRS provides the annotation results of 16 of the images (numbers 1, 3, 5, 7, 9, 11, 13, 15, 17, 21, 26, 28, 30, 32, 34 and 37); the label images are three-channel 8-bit RGB images.
In the label images, as shown in Fig. 3, (a) represents road surface, (b) building, (c) low vegetation, (d) trees and (e) car, five classes of ground-object targets in total.
Besides these five labelled classes, some images contain background-class targets. Such targets have very cluttered features, covering rivers, containers, swimming pools, tennis courts and the like, and their proportion in all images is very low (only 0.88%); they are ignored in the actual training to keep the method free from interference.
From the 16 groups of remote sensing images with label images, the present invention takes 4 groups (numbers 5, 7, 23 and 30) as the validation set; the remaining 12 groups are used to generate the training set. Finally the validation set and the training set are trained together to realise a higher-precision segmentation result.
A window identical in size to the network input image slides over the validation set images, and the window images are input to the network, which outputs the corresponding per-pixel prediction results. When the stride of the sliding window is smaller than the window size, the same pixel can be detected under multiple windows, and the detection results of the multiple windows are fused for class prediction. The strides of the SegNet-1 sliding window are taken as 128 (non-overlapping windows), 64 and 32; the strides of the SegNet-2 sliding detection window are taken as 256 (non-overlapping), 128 and 64. Two findings emerged in the experiments: 1) the SegNet with the larger input image size gives better semantic segmentation precision; 2) fusing the results of multiple detection windows improves segmentation precision. Therefore the 256 × 256 input size of SegNet-2 is finally chosen, the stride of the network sliding detection window is taken as 256 (non-overlapping), 128 and 64 respectively, and the prediction results of the multiple windows are finally fused.
(2) Random walk method based on the SegNet output information
Random walk is a special form of Brownian motion. When the random walk method is applied to an image, the entire image is usually regarded as an undirected graph G = (V, E, W). Each pixel in the image is regarded as a node v_i ∈ V of the undirected graph, adjacent nodes form the edges e_ij ∈ E of the graph, and the weight w_ij ∈ W of an edge is given by information such as the colour, texture and gradient between the two pixels.
When the random walk method runs, seed regions (nodes carrying class label information) must be specified in the image. During the walk, the probability of each unlabelled point reaching the seed region of each class is computed, and the class with the largest probability is regarded as the class of that unlabelled point, completing the image segmentation. A simple illustration is shown in Fig. 4: (a) is the random walk division result for three different seed regions, (b) is the probability that an unlabelled point belongs to the seed region of class L1, and similarly (c) and (d) are the probability maps that an unlabelled point belongs to the seed regions of classes L2 and L3; the class with the larger probability is regarded as the class of the point to be labelled.
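In Grady's formulation, these reaching probabilities are not simulated but obtained by solving a linear system on the graph Laplacian. A self-contained 1-D toy version shows the mechanics; it is a sketch under our own assumptions (dense solve, chain graph, hypothetical function name), keeping only the gradient term of the edge weight:

```python
import numpy as np

def random_walker_1d(gray, seeds, beta=50.0):
    """Grady-style random walker on a 1-D chain of pixels.

    gray  : (n,) intensities in [0, 1]
    seeds : dict {pixel_index: label}, labels 0..K-1
    Returns per-pixel hard labels.
    """
    n = len(gray)
    # edge weights between neighbours: gradient term of Eq. (2)
    w = np.exp(-beta * np.diff(gray) ** 2)
    # assemble the graph Laplacian of the chain
    L = np.zeros((n, n))
    for i, wi in enumerate(w):
        L[i, i] += wi
        L[i + 1, i + 1] += wi
        L[i, i + 1] -= wi
        L[i + 1, i] -= wi
    labels = sorted(set(seeds.values()))
    marked = sorted(seeds)
    free = [i for i in range(n) if i not in seeds]
    probs = np.zeros((n, len(labels)))
    for k, lab in enumerate(labels):
        b = np.array([1.0 if seeds[m] == lab else 0.0 for m in marked])
        # solve L_U x = -B b for the unseeded nodes
        A = L[np.ix_(free, free)]
        rhs = -L[np.ix_(free, marked)] @ b
        probs[free, k] = np.linalg.solve(A, rhs)
        for m in marked:
            probs[m, k] = 1.0 if seeds[m] == lab else 0.0
    return probs.argmax(axis=1)
```

On a chain with a sharp intensity jump, the low-weight edge at the jump blocks the walk, so each side inherits the label of its nearer seed.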
The present invention proposes a random walk method based on the SegNet output information, using the network output information for seed region division and for setting the weights of adjacent edges.
(3) Seed region division based on the SegNet output information
At each pixel location the last convolutional layer of SegNet outputs a 6-dimensional vector Z = (z_1, z_2, z_3, z_4, z_5, z_6)^T (the background class takes no part in network training and is output only). Each element characterises the predicted strength of the corresponding class, and the class corresponding to the largest element is the predicted class of that pixel location.
Inspired by the nearest neighbour distance ratio (NNDR) matching strategy, the present invention constructs a simple class saliency index Sa on the basis of the 6-dimensional vector Z output by the last convolutional layer of SegNet, using the ratio of the second-largest element of Z to the largest element, defined as:
Sa = 1 - z_2nd / z_max    (3)
where z_max is the largest element of vector Z and z_2nd the second-largest element; the value range of Sa is [0, 1]. The larger Sa is, the more salient the prediction corresponding to the largest element is relative to the other predicted values, and the more reliable the current predicted class. As a simple class saliency index, the definition of Sa agrees with direct human visual intuition: in a region where two classes cannot be clearly distinguished the saliency is low and Sa is close to 0, while in a region that can be clearly distinguished the saliency is strong and Sa is close to 1. A threshold Thr is set on Sa: a region whose saliency falls below Thr is not salient enough and is marked as a non-seed region; conversely, a region above the threshold is marked as a seed region and retained.
Different values of the saliency threshold Thr in the random walk optimization method were tested; some of the test results are shown in Fig. 5, where (a)-(c) are respectively the ground truth of the semantic segmentation, the SegNet output and its error, and (d)-(f) are the non-seed region detection results corresponding to thresholds of 0.5, 0.3 and 0.1. An ideal seed region division method marks as much of the erroneously detected region in the SegNet segmentation result as possible as non-seed region while retaining enough correctly labelled region as seed region. The present invention sets Thr to 0.5: the regions satisfying Sa > 0.5 are kept as seed regions, which identifies the misclassified regions to the greatest extent while retaining enough correct regions as seed regions for the random walk.
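The index of Eq. (3) and the thresholding at Thr = 0.5 amount to a few array operations. A minimal NumPy sketch (the function name is illustrative, and the small epsilon guarding against an all-zero score vector is our addition):

```python
import numpy as np

def seed_mask(scores, thr=0.5):
    """Per-pixel saliency Sa = 1 - z_2nd / z_max from class-score maps.

    scores: (C, H, W) non-negative class strengths (SegNet's last conv output).
    Returns (labels, seed) where seed marks pixels with Sa > thr.
    """
    srt = np.sort(scores, axis=0)             # ascending along the class axis
    z_max, z_2nd = srt[-1], srt[-2]
    sa = 1.0 - z_2nd / np.maximum(z_max, 1e-12)
    return scores.argmax(axis=0), sa > thr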
(4) Weight design combining the SegNet output information
On the basis of the traditional practice of assigning edge weights from gradient information, the present invention combines the class strength information output by SegNet and designs an improved weight construction method:
w_ij = exp(-α(h_i - h_j)^2 - β(g_i - g_j)^2)    (4)
where h_i and h_j are the strength values of the current predicted class at two adjacent nodes, output directly by the last convolutional layer of the SegNet decoder, and α and β are two free parameters. Before the weights are computed, the image grey levels and the class strengths are each normalised. The starting point of designing such a weight is to let the method walk outward on the basis of the network's original output result until it collides with a region of larger gradient (where the grey-level difference is large and the edge weight is low), where the walk terminates, thereby smoothing the classification edges and reducing patch-like classification errors.
The two free parameters above are set to α = 10 and β = 50.
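Eq. (4), including the normalisation step described above, can be sketched for horizontally adjacent pixel pairs; vertical edges would use the analogous `np.diff(..., axis=0)`. The function and variable names are illustrative, not from the patent:

```python
import numpy as np

def edge_weights(h, g, alpha=10.0, beta=50.0):
    """Eq. (4): w_ij = exp(-alpha (h_i-h_j)^2 - beta (g_i-g_j)^2)
    for horizontally adjacent pixels; h and g normalised to [0, 1] first."""
    norm = lambda a: (a - a.min()) / max(a.max() - a.min(), 1e-12)
    h, g = norm(h), norm(g)
    dh = np.diff(h, axis=1)   # class-strength difference between neighbours
    dg = np.diff(g, axis=1)   # grey-level difference between neighbours
    return np.exp(-alpha * dh ** 2 - beta * dg ** 2)
```

With β > α, a grey-level jump suppresses the weight more strongly than an equal class-strength jump, which is what makes the walk terminate at image gradients as described above.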
As shown in Fig. 6, the present invention can accurately segment different ground objects. In the conventional scenes of Fig. 6 (a) and (b), buildings, road surfaces, cars and so on are accurately identified, and the edges are smoother. In the complex scenes of Fig. 6 (c) and (d), the shadows in the image are numerous and the interference is large, so that even the human eye distinguishes the objects with difficulty, yet the present invention achieves a good segmentation effect with little patch-like noise.
On the whole, because the present invention uses prediction error analysis and control means, its semantic segmentation result is better than that of other methods. In addition, the random walk method needs no training, occupies no network resources, and can run independently offline; it performs well under weak training scenarios and achieves a segmentation precision of 89.9% on the test set.
The above is the SegNet image segmentation method combining random walk proposed by the present invention. It should be noted that the foregoing is only a preferred embodiment of the present invention and is not intended to limit the scope of the present invention. Any modifications, equivalent replacements, improvements and the like made within the spirit and scope of the present invention are all included in the scope of protection of the present invention.

Claims (3)

1. A SegNet remote sensing image semantic segmentation method combining random walk, characterised in that: SegNet semantic segmentation result optimization is combined with random walk and divided into a SegNet initial segmentation step and a random walk segmentation optimization step;
in the SegNet initial segmentation step, the original remote sensing image is first input and passed through SegNet, which finally outputs the SegNet initial semantic segmentation image and the strength information of each class;
the random walk optimization step optimizes the segmentation of the image after SegNet initial segmentation and is realised as follows: in the first step, random walk seed regions are chosen: according to the class strength information output by SegNet, class saliency indices of the different classes are computed, a threshold is set, and seed regions of the different classes are chosen; in the second step, the weights of the edges of the random walk undirected graph are computed from the original image gradient and the SegNet class strength information; in the third step, starting from the seed regions chosen in the first step and combining the undirected edge weights of the second step, random walk is performed on the whole initial segmentation image, the region of each class is expanded, and the optimized segmentation result of the entire image is finally obtained.
2. the SegNet remote sensing images semantic segmentation method of combination random walk according to claim 1, it is characterised in that: In the random walk Optimized Segmentation step, the classification conspicuousness information based on SegNet output chooses random walk seed zone Domain, its step are as follows:
at each pixel position, the last convolutional layer of SegNet outputs a 6-dimensional vector Z = (z1, z2, z3, z4, z5, z6)^T; the class significance index Sa is constructed from the ratio of the second-largest element to the largest element, defined as:

Sa = 1 - z_2nd / z_max    (1)

where z_max is the largest element of the vector Z and z_2nd the second-largest element; the value range of Sa is [0, 1], and a larger Sa means that the prediction corresponding to the largest element is more significant relative to the other elements, i.e. the current predicted class is more reliable.
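A minimal sketch of formula (1) and the thresholded seed selection it supports: sort the per-pixel class vector and take one minus the ratio of the second-largest to the largest element. The function names and the 0.8 threshold are illustrative assumptions, not values from the patent.

```python
import numpy as np

def class_significance(z):
    """Sa = 1 - z_2nd / z_max for a class-score vector (or a per-pixel
    stack of vectors) output by SegNet's last convolutional layer.
    Scores are assumed positive so that Sa lies in [0, 1]."""
    z = np.asarray(z, dtype=float)
    top2 = np.sort(z, axis=-1)[..., -2:]   # (second-largest, largest)
    return 1.0 - top2[..., 0] / top2[..., 1]

def seed_mask(scores, threshold=0.8):
    """Seed region: pixels whose predicted class is significant enough.
    The 0.8 threshold is an assumed example, not the patent's value."""
    return class_significance(scores) >= threshold
```

A pixel where one class clearly dominates gets Sa close to 1 and becomes a seed; a pixel where the top two scores are close gets a small Sa and is left for the random walk to decide.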
3. The SegNet remote sensing image semantic segmentation method combined with random walk according to claim 1, characterized in that in the random walk optimized segmentation step, the gradient of the original image and the class intensity information of SegNet are fused to set the weights of the edges of the random walk undirected graph, as follows:
the weight is constructed according to the following formula:

w_ij = exp(-α(h_i - h_j)^2 - β(g_i - g_j)^2)    (2)

where each pixel in the image is regarded as a node of the undirected graph; h_i and h_j are the intensity values of the current predicted class at two adjacent nodes, output directly by the last convolutional layer of the SegNet decoder; g_i and g_j are the grey values of the two adjacent nodes in the original input remote sensing image, whose difference gives the image gradient; α and β are two free parameters: α represents the weight of the class intensity information output by SegNet in the undirected edge weight computation, with value range [5, 20] and a preferred value of 10, and β represents the weight of the original image gradient information in the undirected edge weight computation, with value range [40, 65] and a preferred value of 50; both the image grey values and the class intensity information are normalized before the weight computation.
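Formula (2) can be evaluated for all horizontal and vertical neighbour pairs of a 4-connected grid at once. The sketch below uses the patent's preferred values α = 10 and β = 50 and assumes, as the claim states, that both inputs have already been normalized to [0, 1]; the function name is an assumption.

```python
import numpy as np

def edge_weights(h, g, alpha=10.0, beta=50.0):
    """w_ij = exp(-alpha*(h_i - h_j)^2 - beta*(g_i - g_j)^2) over the
    4-connected grid; h is the normalized class intensity map from the
    SegNet decoder, g the normalized grey-level image, both in [0, 1]."""
    h = np.asarray(h, dtype=float)
    g = np.asarray(g, dtype=float)
    # horizontal edges: pixel (r, c) <-> (r, c+1)
    w_h = np.exp(-alpha * (h[:, :-1] - h[:, 1:]) ** 2
                 - beta * (g[:, :-1] - g[:, 1:]) ** 2)
    # vertical edges: pixel (r, c) <-> (r+1, c)
    w_v = np.exp(-alpha * (h[:-1] - h[1:]) ** 2
                 - beta * (g[:-1] - g[1:]) ** 2)
    return w_h, w_v
```

Identical neighbours get weight 1 (the walk crosses freely); a jump in either the predicted-class intensity or the image grey level drives the weight toward 0, so the walk tends to stop at object boundaries.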
CN201811139786.XA 2018-09-28 2018-09-28 SegNet remote sensing image semantic segmentation method combined with random walk Active CN109409240B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811139786.XA CN109409240B (en) 2018-09-28 2018-09-28 SegNet remote sensing image semantic segmentation method combined with random walk

Publications (2)

Publication Number Publication Date
CN109409240A true CN109409240A (en) 2019-03-01
CN109409240B CN109409240B (en) 2022-02-11

Family

ID=65465567

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811139786.XA Active CN109409240B (en) 2018-09-28 2018-09-28 SegNet remote sensing image semantic segmentation method combined with random walk

Country Status (1)

Country Link
CN (1) CN109409240B (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110096948A (en) * 2019-03-15 2019-08-06 中国科学院西安光学精密机械研究所 Remote sensing image recognition methods based on characteristic aggregation convolutional network
CN110674676A (en) * 2019-08-02 2020-01-10 杭州电子科技大学 Road confidence estimation fuzzy frame method based on semantic segmentation
CN110837836A (en) * 2019-11-05 2020-02-25 中国科学技术大学 Semi-supervised semantic segmentation method based on maximized confidence
CN111401380A (en) * 2020-03-24 2020-07-10 北京工业大学 RGB-D image semantic segmentation method based on depth feature enhancement and edge optimization
CN111444924A (en) * 2020-04-20 2020-07-24 中国科学院声学研究所南海研究站 Method and system for detecting plant diseases and insect pests and analyzing disaster grades
CN111462149A (en) * 2020-03-05 2020-07-28 中国地质大学(武汉) Example human body analysis method based on visual saliency
CN111753834A (en) * 2019-03-29 2020-10-09 中国水利水电科学研究院 Planting land structure semantic segmentation method and device based on deep neural network
CN112464745A (en) * 2020-11-09 2021-03-09 中国科学院计算机网络信息中心 Ground feature identification and classification method and device based on semantic segmentation
CN113177592A (en) * 2021-04-28 2021-07-27 上海硕恩网络科技股份有限公司 Image segmentation method and device, computer equipment and storage medium
CN113486762A (en) * 2021-06-30 2021-10-08 中南大学 Small obstacle detection method based on SegNet-SL network
CN114170493A (en) * 2021-12-02 2022-03-11 江苏天汇空间信息研究院有限公司 Method for improving semantic segmentation precision of remote sensing image
CN115049936A (en) * 2022-08-12 2022-09-13 武汉大学 High-resolution remote sensing image-oriented boundary enhancement type semantic segmentation method

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102915445A (en) * 2012-09-17 2013-02-06 杭州电子科技大学 Method for classifying hyperspectral remote sensing images of improved neural network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LEO GRADY et al.: "Random Walks for Image Segmentation", IEEE Transactions on Pattern Analysis and Machine Intelligence *
QIN Chanchan: "Research on Image Segmentation Methods Based on the Random Walk Algorithm", China Doctoral Dissertations Full-text Database, Information Science and Technology *

Also Published As

Publication number Publication date
CN109409240B (en) 2022-02-11

Similar Documents

Publication Publication Date Title
CN109409240A (en) A kind of SegNet remote sensing images semantic segmentation method of combination random walk
CN109614985B (en) Target detection method based on densely connected feature pyramid network
CN106909902B (en) Remote sensing target detection method based on improved hierarchical significant model
CN111598174B (en) Model training method based on semi-supervised antagonistic learning and image change analysis method
US9129191B2 (en) Semantic object selection
CN109685067A Image semantic segmentation method based on region and depth residual network
US9129192B2 (en) Semantic object proposal generation and validation
Wang et al. A deep neural network with spatial pooling (DNNSP) for 3-D point cloud classification
CN107862261A Image crowd counting method based on multi-scale convolutional neural networks
Lian et al. DeepWindow: Sliding window based on deep learning for road extraction from remote sensing images
CN106611423B SAR image segmentation method based on ridgelet filter and deconvolution structural model
CN105930868A (en) Low-resolution airport target detection method based on hierarchical reinforcement learning
CN102279929B (en) Remote-sensing artificial ground object identifying method based on semantic tree model of object
CN106683102B SAR image segmentation method based on ridgelet filter and convolutional structure learning model
CN109543632A Deep network pedestrian detection method based on shallow feature fusion guidance
CN110853026A (en) Remote sensing image change detection method integrating deep learning and region segmentation
CN106446933A (en) Multi-target detection method based on context information
CN106408030A SAR image classification method based on middle-layer semantic attributes and convolutional neural network
CN110807485B (en) Method for fusing two-classification semantic segmentation maps into multi-classification semantic map based on high-resolution remote sensing image
CN113240691A (en) Medical image segmentation method based on U-shaped network
CN112767413B (en) Remote sensing image depth semantic segmentation method integrating region communication and symbiotic knowledge constraints
CN104408731A (en) Region graph and statistic similarity coding-based SAR (synthetic aperture radar) image segmentation method
CN110222772B (en) Medical image annotation recommendation method based on block-level active learning
CN110348311B (en) Deep learning-based road intersection identification system and method
Feng et al. Improved deep fully convolutional network with superpixel-based conditional random fields for building extraction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant