CN110532914A - Building detection method based on fine feature learning - Google Patents

Building detection method based on fine feature learning

Info

Publication number
CN110532914A
CN110532914A (application CN201910768818.0A)
Authority
CN
China
Prior art keywords
dense
feature
image
feature map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910768818.0A
Other languages
Chinese (zh)
Inventor
王爽
侯彪
何佩
周立刚
曹思宇
赵栋
焦李成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University
Priority to CN201910768818.0A
Publication of CN110532914A
Current legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/082 Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/176 Urban or other man-made structures


Abstract

The invention discloses a building detection method based on fine feature learning. The implementation steps are as follows: construct a dense convolutional network; generate a training sample set and a test sample set; preprocess the images in the sample sets; extract fine feature maps of the optical remote sensing images with dense convolution blocks; fuse the feature maps with a top-down method; train the dense convolutional network with a deep supervision loss combined with an edge loss; finally, feed the test samples into the trained dense convolutional network to obtain the final detection results. The invention extracts fine feature maps of optical remote sensing images with the constructed dense convolutional network and trains the network with the edge-combined deep supervision loss, reducing memory usage while preserving the richness of building features and improving building detection accuracy.

Description

Building detection method based on fine feature learning
Technical field
The invention belongs to the technical field of image processing, and more specifically to a building detection method based on fine feature learning within the field of building detection. The invention can be used to detect buildings in optical remote sensing images.
Background technique
Object detection is one of the key problems in computer vision. Building detection takes images captured by remote sensing satellites as its data source and locates the buildings in the images with image processing techniques. Building detection in optical remote sensing images plays an important role in urban planning and land monitoring. With the further development of remote sensing technology, the available optical remote sensing data are increasingly abundant, but because the optical and infrared remote sensing images taken by satellites are vulnerable to illumination, cloud cover, and other non-deterministic weather conditions, high-accuracy building detection remains a major challenge.
Wuhan University proposed a building detection method in its patent application "Remote sensing image building detection method based on multi-scale multi-feature fusion" (application number CN201710220588.5, publication number CN107092871A). The method first downsamples the remote sensing image to obtain an image pyramid composed of images at different scales; computes the edge images of the image pyramid; computes and fuses multiple groups of features on the edge images at different scales to build a feature model; performs window selection according to the feature model and local neighborhood non-maximum suppression to obtain target windows; expands or shrinks each target window within a small range to obtain a rectangular window; rotates the rectangular window according to the principal direction of the target window to obtain the optimal target window; and extracts buildings according to the optimal target window. The method addresses the variation in building shapes and sizes by detecting buildings at multiple scales. Its remaining shortcoming is that, because it detects buildings using only their edge features, detection accuracy is relatively low when the background texture information is complex.
In the paper "Densely connected convolutional networks" (Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016), Huang Gao et al. proposed a new method applicable to object detection. The method first extracts input image features with dense convolution blocks, in which the input of each convolutional layer is the concatenation of the outputs of all preceding convolutional layers in the block together with the block input. Object feature maps are then generated with multiple cascaded dense convolution blocks, where the input of each dense convolution block is the channel-wise concatenation of the feature maps output by the previous dense convolution block, and the final detection result is obtained from these maps. The feature map outputs of the different dense blocks are propagated progressively, so every layer's feature maps contribute to the final loss. The advantage of this design is that it realizes implicit deep supervision, avoids vanishing gradients, and performs well in enhancing feature propagation and extraction, so it can extract fine feature maps of an image. Its remaining shortcoming is that the number of feature maps keeps multiplying with network depth, which makes the computational complexity excessive and the memory requirements demanding.
Summary of the invention
The purpose of the invention is to address the above shortcomings of the prior art by proposing a building detection method based on fine feature learning, solving the problems of excessive computational complexity when detecting buildings in optical remote sensing images and of low building detection accuracy when background texture information is complex.
The idea for realizing the invention is: first, construct a dense convolutional network and generate a training sample set and a test sample set; then extract fine feature maps of the training samples with dense convolution blocks, fuse the feature maps with a top-down method, and train the dense convolutional network with the edge-combined deep supervision loss; finally, feed the test samples into the trained dense convolutional network to obtain the final detection results.
The specific steps of the invention are as follows.
Step 1, construct the dense convolutional network:
Build a dense convolutional network cascaded from 5 dense convolution blocks of identical structure. Each dense convolution block consists of 9 layers, in order: input layer → first convolutional layer → first fusion layer → second convolutional layer → second fusion layer → third convolutional layer → third fusion layer → fourth convolutional layer → fourth fusion layer;
Set the number of channels of the input layer to 3; set the numbers of convolution kernels of the first to fourth convolutional layers to 64, 64, 128, and 128 respectively; set the size of every convolution kernel to 3 × 3;
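A minimal PyTorch sketch of one such dense convolution block, consistent with the layer widths above and with the fusion rule detailed in step 4 (padding, activation, and all identifier names are assumptions; the patent specifies neither):

```python
import torch
import torch.nn as nn

def conv3x3(cin, cout):
    # 3 x 3 kernels per step 1; padding=1 and ReLU are assumptions
    return nn.Sequential(nn.Conv2d(cin, cout, kernel_size=3, padding=1),
                         nn.ReLU(inplace=True))

class DenseBlock(nn.Module):
    """One 9-layer dense convolution block: four conv layers, each fusion
    layer concatenating the earlier conv outputs on the channel axis."""
    def __init__(self, in_channels=3):
        super().__init__()
        self.conv1 = conv3x3(in_channels, 64)
        self.conv2 = conv3x3(64, 64)
        self.conv3 = conv3x3(64 + 64, 128)        # fed cat(conv1, conv2)
        self.conv4 = conv3x3(64 + 64 + 128, 128)  # fed cat(conv1..conv3)

    def forward(self, x):
        f1 = self.conv1(x)
        f2 = self.conv2(f1)
        f3 = self.conv3(torch.cat([f1, f2], dim=1))
        f4 = self.conv4(torch.cat([f1, f2, f3], dim=1))
        # fourth fusion layer: the block's fine feature map (384 channels)
        return torch.cat([f1, f2, f3, f4], dim=1)
```

Note that, per the fusion rule of step 4, the block input itself is not re-concatenated into the later fusions; only the conv outputs are.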
Step 2, generate the training sample set and the test sample set:
Select at least 16 optical remote sensing pictures of size 960 × 960 × 3 from an optical remote sensing data set; use 75% of the pictures to form the training set and the remaining 25% to form the test set;
Randomly crop every optical remote sensing picture in the training set, with overlap allowed, into training samples of size 480 × 480 × 3; cropping each picture 150 times yields the training sample set;
Crop every optical remote sensing picture in the test set, without overlap, into 4 test samples of size 480 × 480 × 3 to form the test sample set;
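A sketch of this cropping scheme under stated assumptions (images as NumPy arrays of shape H × W × 3; crop positions drawn uniformly, which the patent does not specify):

```python
import numpy as np

def random_crops(img, size=480, n=150, rng=np.random.default_rng(0)):
    """n randomly positioned, possibly overlapping size x size crops."""
    h, w, _ = img.shape
    ys = rng.integers(0, h - size + 1, n)
    xs = rng.integers(0, w - size + 1, n)
    return [img[y:y + size, x:x + size] for y, x in zip(ys, xs)]

def grid_crops(img, size=480):
    """Non-overlapping size x size tiles; a 960 x 960 picture yields 4."""
    h, w, _ = img.shape
    return [img[y:y + size, x:x + size]
            for y in range(0, h - size + 1, size)
            for x in range(0, w - size + 1, size)]
```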
Step 3, preprocess the images in the sample sets:
Apply grayscale stretching to the training and test sample set images, then normalize every pixel of the stretched images to a value between 0 and 1, obtaining the preprocessed training sample set and test sample set;
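A sketch of the preprocessing; a linear min-max stretch is assumed, since the patent does not give the exact stretching function:

```python
def preprocess(img):
    """Grayscale-stretch to the full dynamic range, then scale to [0, 1]."""
    img = img.astype(np.float32)
    lo, hi = img.min(), img.max()
    stretched = (img - lo) / (hi - lo + 1e-8) * 255.0  # linear stretch
    return stretched / 255.0                           # normalize to [0, 1]
```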
Step 4, extract fine feature maps of the optical remote sensing images with the dense convolution blocks:
First, input the preprocessed training samples into the first dense convolution block; concatenate all feature maps output by the first convolutional layer with all feature maps output by the second convolutional layer on the channel axis and feed the result to the third convolutional layer, obtaining the feature maps output by the third convolutional layer; concatenate all feature maps output by the first through third convolutional layers on the channel axis and feed the result to the fourth convolutional layer, obtaining the feature maps output by the fourth convolutional layer; concatenate all feature maps output by the first through fourth convolutional layers on the channel axis to obtain the fine feature map of low-level semantic information;
Second, pass the fine feature map of low-level semantic information through the second dense convolution block in the same way as in the first step, obtaining the fine feature map of the next-lowest level of semantic information;
Third, in the same way as the second step, pass the result through the third, fourth, and fifth dense convolution blocks to obtain the feature map of high-level semantic information;
Step 5, fuse the feature maps with a top-down method:
First, apply a 2× upsampling deconvolution to the feature map of high-level semantic information to obtain an upsampled feature map;
Second, apply a 1 × 1 convolution to the output of the fourth dense convolution block to obtain a feature map with half as many channels, and concatenate this channel-halved feature map with the upsampled feature map from the first step on the channel axis to obtain the first fused feature map;
Third, apply a 2× upsampling deconvolution to the first fused map to obtain its upsampled feature map; apply a channel-halving 1 × 1 convolution to the output of the third dense convolution block; concatenate the convolved feature map with the upsampled feature map of the first fused map on the channel axis to obtain the second fused feature map;
Fourth, in the same way as the third step, process the second fused feature map together with the output of the second dense convolution block to obtain the third fused feature map, then process the third fused feature map together with the output of the first dense convolution block to obtain the fourth fused feature map;
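A sketch of this top-down fusion, continuing the sketches above. The deconvolution kernel sizes are assumptions (kernel 2, stride 2 gives an exact 2× upsampling); the channel counts follow from the 384-channel block outputs:

```python
class TopDownFusion(nn.Module):
    """Top-down fusion: 2x-deconvolve the running map, channel-halve the
    next-lower block output with a 1 x 1 conv, concatenate on channels."""
    def __init__(self, ch=384):
        super().__init__()
        self.lateral = nn.ModuleList(
            nn.Conv2d(ch, ch // 2, kernel_size=1) for _ in range(4))
        ups, c = [], ch
        for _ in range(4):
            ups.append(nn.ConvTranspose2d(c, c, kernel_size=2, stride=2))
            c += ch // 2                    # channels grow after each concat
        self.ups = nn.ModuleList(ups)

    def forward(self, feats):               # feats from DenseBackbone
        fused, x = [], feats[4]             # start from the high-level map
        for up, lat, skip in zip(self.ups, self.lateral, feats[3::-1]):
            x = torch.cat([lat(skip), up(x)], dim=1)
            fused.append(x)                 # first..fourth fused feature maps
        return fused
```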
Step 6, train the dense convolutional network with the edge-combined deep supervision loss:
First, apply convolution and deconvolution operations to each fused feature map in turn, compute a cross-entropy loss on each feature map so obtained, and sum all of these cross-entropy losses to obtain the fusion cross-entropy loss;
Second, apply a 16× upsampling operation to the feature map of high-level semantic information output by the fifth dense convolution block to obtain a logits image; pass the logits image through a sigmoid to obtain the prediction map; obtain the edge images of the prediction map and the ground-truth map with the Sobel edge detection algorithm; compute the cross-entropy loss between the two edge images to obtain the edge cross-entropy loss; add the fusion cross-entropy loss and the edge cross-entropy loss to obtain the edge-combined deep supervision loss;
Third, compute the partial derivative of the edge-combined deep supervision loss with respect to each parameter of the dense convolutional network to be optimized, and update each such parameter with its partial derivative, obtaining the dense convolutional network with updated parameters; feed the training set into the network with updated parameters and compute the edge-combined deep supervision loss again;
Fourth, iterate the third step until the edge-combined deep supervision losses of two successive iterations differ by no more than 0.01, obtaining the trained dense convolutional network;
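A minimal sketch of this loss, continuing the sketches above and assuming binary cross-entropy with single-channel logits. The detailed description below thresholds both Sobel edge images at their mean; a soft edge magnitude is substituted on the prediction side here so the edge term stays differentiable, which is this sketch's assumption rather than the patent's statement:

```python
import torch.nn.functional as F

KX = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
KY = KX.transpose(2, 3)                     # Sobel kernels, horizontal/vertical

def sobel_mag(img):
    """|Gx| + |Gy| brightness-difference map, scaled to [0, 1]."""
    g = F.conv2d(img, KX, padding=1).abs() + F.conv2d(img, KY, padding=1).abs()
    return g / (g.max() + 1e-8)

def edge_combined_loss(fused_logits, final_logits, gt):
    """fused_logits: per-fusion-level logits already deconvolved to the
    ground-truth resolution; final_logits: 16x-upsampled block-5 logits."""
    fusion_loss = sum(F.binary_cross_entropy_with_logits(z, gt)
                      for z in fused_logits)
    pred_edge = sobel_mag(torch.sigmoid(final_logits))
    gt_mag = sobel_mag(gt)
    gt_edge = (gt_mag > gt_mag.mean()).float()   # mean-thresholded edge image
    edge_loss = F.binary_cross_entropy(pred_edge.clamp(1e-6, 1 - 1e-6), gt_edge)
    return fusion_loss + edge_loss
```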
Step 7, detect buildings:
Input the test sample set into the trained dense convolutional network and output the detection results for the test sample set.
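A hypothetical end-to-end composition of the sketches above for this inference step; the 1 × 1 prediction head and the 0.5 binarization threshold are assumptions, since the patent only says the prediction map is obtained from the 16×-upsampled block-5 output:

```python
backbone = DenseBackbone()
head = nn.Conv2d(384, 1, kernel_size=1)     # assumed 1 x 1 prediction head

def detect(test_batch):                     # tensor of shape (N, 3, 480, 480)
    feats = backbone(test_batch)
    logits = F.interpolate(head(feats[4]), scale_factor=16.0)  # 16x upsample
    return torch.sigmoid(logits) > 0.5      # binary building mask (assumed)
```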
Compared with the prior art, the invention has the following advantages:
First, the invention extracts fine feature maps of the optical remote sensing image with dense convolution blocks: the feature maps output by the first through fourth convolutional layers of a block are concatenated on the channel axis to obtain the fine feature map of low-level semantic information, and the feature maps of higher-level semantic information are then extracted with 4 further identical dense convolution blocks. This overcomes the prior art's problems of redundant feature maps, large memory usage, and high computational complexity, so the invention reduces the computational complexity of building detection without reducing accuracy, improving detection efficiency.
Second, the invention computes a cross-entropy loss on every feature map fused by the top-down method and trains the dense convolutional network with the edge-combined deep supervision loss. This overcomes the low detection accuracy for buildings against complex background texture information, enabling the invention to keep building features distinguishable and thereby improve building detection accuracy.
Description of the drawings
Fig. 1 is the flow chart of the invention;
Fig. 2 shows the simulation results of the invention.
Detailed description of the embodiments
The invention is described in further detail below with reference to the accompanying drawings.
The steps of the invention are described in further detail with reference to Fig. 1.
Step 1, construct the dense convolutional network.
Build a dense convolutional network cascaded from 5 dense convolution blocks of identical structure. Each dense convolution block consists of 9 layers, in order: input layer → first convolutional layer → first fusion layer → second convolutional layer → second fusion layer → third convolutional layer → third fusion layer → fourth convolutional layer → fourth fusion layer.
Set the number of channels of the input layer to 3; set the numbers of convolution kernels of the first to fourth convolutional layers to 64, 64, 128, and 128 respectively; set the size of every convolution kernel to 3 × 3.
Step 2, generate the training sample set and the test sample set.
Select at least 16 optical remote sensing pictures of size 960 × 960 × 3 from an optical remote sensing data set; use 75% of the pictures to form the training set and the remaining 25% to form the test set.
Randomly crop every optical remote sensing picture in the training set, with overlap allowed, into training samples of size 480 × 480 × 3; cropping each picture 150 times yields the training sample set.
Crop every optical remote sensing picture in the test set, without overlap, into 4 test samples of size 480 × 480 × 3 to form the test sample set.
Step 3, preprocess the images in the sample sets.
Apply grayscale stretching to the training and test sample set images, then normalize every pixel of the stretched images to a value between 0 and 1, obtaining the preprocessed training sample set and test sample set.
Step 4, extract fine feature maps of the optical remote sensing images with the dense convolution blocks.
First, input the preprocessed training samples into the first dense convolution block; concatenate all feature maps output by the first convolutional layer with all feature maps output by the second convolutional layer on the channel axis and feed the result to the third convolutional layer, obtaining the feature maps output by the third convolutional layer; concatenate all feature maps output by the first through third convolutional layers on the channel axis and feed the result to the fourth convolutional layer, obtaining the feature maps output by the fourth convolutional layer; concatenate all feature maps output by the first through fourth convolutional layers on the channel axis to obtain the fine feature map of low-level semantic information.
Second, pass the fine feature map of low-level semantic information through the second dense convolution block in the same way as in the first step, obtaining the fine feature map of the next-lowest level of semantic information.
Third, in the same way as the second step, pass the result through the third, fourth, and fifth dense convolution blocks to obtain the feature map of high-level semantic information.
Step 5, fuse the feature maps with a top-down method.
First, apply a 2× upsampling deconvolution to the feature map of high-level semantic information to obtain an upsampled feature map.
Second, apply a 1 × 1 convolution to the output of the fourth dense convolution block to obtain a feature map with half as many channels, and concatenate this channel-halved feature map with the upsampled feature map from the first step on the channel axis to obtain the first fused feature map.
Third, apply a 2× upsampling deconvolution to the first fused map to obtain its upsampled feature map; apply a channel-halving 1 × 1 convolution to the output of the third dense convolution block; concatenate the convolved feature map with the upsampled feature map of the first fused map on the channel axis to obtain the second fused feature map.
Fourth, in the same way as the third step, process the second fused feature map together with the output of the second dense convolution block to obtain the third fused feature map, then process the third fused feature map together with the output of the first dense convolution block to obtain the fourth fused feature map.
Step 6, train the dense convolutional network with the edge-combined deep supervision loss.
First, apply convolution and deconvolution operations to each fused feature map in turn, compute a cross-entropy loss on each feature map so obtained, and sum all of these cross-entropy losses to obtain the fusion cross-entropy loss.
Second, apply a 16× upsampling operation to the feature map of high-level semantic information output by the fifth dense convolution block to obtain a logits image; pass the logits image through a sigmoid to obtain the prediction map; obtain the edge images of the prediction map and the ground-truth map with the Sobel edge detection algorithm; compute the cross-entropy loss between the two edge images to obtain the edge cross-entropy loss; add the fusion cross-entropy loss and the edge cross-entropy loss to obtain the edge-combined deep supervision loss.
The ground-truth map is obtained by labeling every region in each training sample: regions containing buildings are labeled 1 and regions containing no buildings are labeled 0; all labeled regions of a training sample together form its ground-truth map.
The Sobel edge detection algorithm proceeds as follows: convolve the prediction map or the ground-truth map with the Sobel operator horizontally and vertically to obtain the horizontal and vertical brightness-difference approximation maps; add the two maps and take the absolute value to obtain the grayscale approximation map; average all pixel values of the grayscale approximation map; the pixels of the grayscale approximation map whose values exceed the average form the edge image of the prediction map or of the ground-truth map.
Third, compute the partial derivative of the edge-combined deep supervision loss with respect to each parameter of the dense convolutional network to be optimized, and update each such parameter with its partial derivative, obtaining the dense convolutional network with updated parameters; feed the training set into the network with updated parameters and compute the edge-combined deep supervision loss again.
Fourth, iterate the third step until the edge-combined deep supervision losses of two successive iterations differ by no more than 0.01, obtaining the trained dense convolutional network.
Step 7, detect buildings.
Input the test sample set into the trained dense convolutional network and output the detection results for the test sample set.
The effect of the invention is further described below with simulation experiments.
1. Simulation conditions:
The hardware platform of the simulation experiments: a Dell computer with an Intel(R) E5-2603 CPU at 1.60 GHz and a GeForce GTX 1080 GPU with 8 GB of video memory.
The software platform of the simulation experiments: Ubuntu 16.0, Python 3.5, pytorch-gpu 1.1.0.
2. Simulation content and analysis of results:
The simulation experiments apply the invention and two prior-art methods (the FCN detection method and the U-Net detection method) to the input QuickBird optical remote sensing data set to detect buildings and obtain detection results.
The two prior-art methods used in the simulation experiments are:
The prior-art FCN detection method, namely the building detection method proposed by Darrell T. et al. in "Fully convolutional networks for semantic segmentation" (IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 4, 2014), abbreviated as the FCN detection method.
The prior-art U-Net detection method, namely the building detection method proposed by Olaf Ronneberger et al. in "U-Net: Convolutional networks for biomedical image segmentation" (International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 234-241, 2015), abbreviated as the U-Net detection method.
The input images used in the simulation experiments are all the images of the public QuickBird optical remote sensing data set, collected by the U.S. company DigitalGlobe. The data set contains 16 images of size 960 × 960 × 3, from which the simulation experiments randomly select 12 to form the training set and the remaining 4 to form the test set.
Simulation experiment 1 tests the method of the invention under the above simulation conditions; the detection results are shown in Fig. 2.
Simulation experiment 2 tests the prior-art FCN method under the above simulation conditions and obtains its detection results.
Simulation experiment 3 tests the prior-art U-Net method under the above simulation conditions and obtains its detection results.
To verify the building detection effect of the invention, two evaluation indices (accuracy and F-score) are used to evaluate the detection results of the three methods; higher accuracy and F-score indicate more accurate building detection. For every picture in the test sample set, the prediction for every pixel is compared with the corresponding pixel of the ground-truth map; the accuracy and F-score of each picture are computed with the formulas given below, averaged over all pictures in the test sample set, and listed in Table 1:
Table 1. Comparison of detection results in the simulation experiments of the invention.
Method            Average accuracy    Average F-score
FCN               0.6608              0.7619
U-Net             0.7140              0.7831
Proposed method   0.8112              0.8307
The pixel counts used in these formulas are defined as follows: TP denotes the total number of pixels predicted as building regions whose ground truth is also building; TN denotes the total number of pixels predicted as non-building regions whose ground truth is also non-building; FP denotes the total number of pixels predicted as building regions whose ground truth is non-building; FN denotes the total number of pixels predicted as non-building regions whose ground truth is building.
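The patent text references the formulas without reproducing them; in terms of the counts just defined, the standard definitions, presumably the ones intended, are:

```latex
\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}, \qquad
P = \frac{TP}{TP + FP}, \qquad R = \frac{TP}{TP + FN}, \qquad
F = \frac{2PR}{P + R}
```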
As can be seen from Table 1, compared with the existing FCN and U-Net methods, the invention achieves an average accuracy of 0.8112 and an average F-score of 0.8307; both indices are higher than those of the two prior-art methods, demonstrating that the invention obtains higher building detection accuracy.
The effect of the invention is further described below with reference to Fig. 2.
Fig. 2 shows the detection results obtained by the invention on the QuickBird optical remote sensing data set under the above experimental conditions. The test set of the simulation experiments contains 4 pictures: Fig. 2(a) is the detection result of the first picture; Fig. 2(b) shows the ground-truth map of the first picture, corresponding to Fig. 2(a); Fig. 2(c) is the detection result of the second picture; Fig. 2(d) shows the ground-truth map of the second picture, corresponding to Fig. 2(c); Fig. 2(e) is the detection result of the third picture; Fig. 2(f) shows the ground-truth map of the third picture, corresponding to Fig. 2(e); Fig. 2(g) is the detection result of the fourth picture; Fig. 2(h) shows the ground-truth map of the fourth picture, corresponding to Fig. 2(g). Comparing each detection result in Fig. 2 with its ground-truth map shows that the building regions in the detection results closely match the building regions of the ground-truth maps, and the detection results accurately reproduce the building contours of the ground-truth maps.
The above simulation experiments show that the method of the invention extracts fine building features with the constructed dense convolutional network, solves the problem of low building detection accuracy against complex background texture information, and improves building detection precision while reducing the amount of computation; it is an optical remote sensing image building detection method with high accuracy.

Claims (3)

1. A building detection method based on fine feature learning, characterized in that fine feature maps of optical remote sensing images are extracted with dense convolution blocks, the feature maps are fused with a top-down method, and the dense convolutional network is trained with an edge-combined deep supervision loss; the method comprises the following specific steps:
Step 1, construct a dense convolutional network:
build a dense convolutional network cascaded from 5 dense convolution blocks of identical structure, each dense convolution block consisting of 9 layers, in order: input layer → first convolutional layer → first fusion layer → second convolutional layer → second fusion layer → third convolutional layer → third fusion layer → fourth convolutional layer → fourth fusion layer;
set the number of channels of the input layer to 3; set the numbers of convolution kernels of the first to fourth convolutional layers to 64, 64, 128, and 128 respectively, with every convolution kernel of size 3 × 3;
Step 2, generate a training sample set and a test sample set:
select at least 16 optical remote sensing pictures of size 960 × 960 × 3 from an optical remote sensing data set; use 75% of the pictures to form a training set and the remaining 25% to form a test set;
randomly crop every optical remote sensing picture in the training set, with overlap allowed, into training samples of size 480 × 480 × 3, cropping each picture 150 times to obtain the training sample set;
crop every optical remote sensing picture in the test set, without overlap, into 4 test samples of size 480 × 480 × 3 to form the test sample set;
Step 3, preprocess the images in the sample sets:
apply grayscale stretching to the training and test sample set images, then normalize every pixel of the stretched images to a value between 0 and 1, obtaining the preprocessed training sample set and test sample set;
Step 4, extract fine feature maps of the optical remote sensing images with the dense convolution blocks:
first, input the preprocessed training samples into the first dense convolution block; concatenate all feature maps output by the first convolutional layer with all feature maps output by the second convolutional layer on the channel axis and feed the result to the third convolutional layer, obtaining the feature maps output by the third convolutional layer; concatenate all feature maps output by the first through third convolutional layers on the channel axis and feed the result to the fourth convolutional layer, obtaining the feature maps output by the fourth convolutional layer; concatenate all feature maps output by the first through fourth convolutional layers on the channel axis to obtain the fine feature map of low-level semantic information;
second, pass the fine feature map of low-level semantic information through the second dense convolution block in the same way as in the first step, obtaining the fine feature map of the next-lowest level of semantic information;
third, in the same way as the second step, pass the result through the third, fourth, and fifth dense convolution blocks to obtain the feature map of high-level semantic information;
Step 5, fuse the feature maps with a top-down method:
first, apply a 2× upsampling deconvolution to the feature map of high-level semantic information to obtain an upsampled feature map;
second, apply a 1 × 1 convolution to the output of the fourth dense convolution block to obtain a feature map with half as many channels, and concatenate this channel-halved feature map with the upsampled feature map from the first step on the channel axis to obtain the first fused feature map;
third, apply a 2× upsampling deconvolution to the first fused map to obtain its upsampled feature map; apply a channel-halving 1 × 1 convolution to the output of the third dense convolution block; concatenate the convolved feature map with the upsampled feature map of the first fused map on the channel axis to obtain the second fused feature map;
fourth, in the same way as the third step, process the second fused feature map together with the output of the second dense convolution block to obtain the third fused feature map, then process the third fused feature map together with the output of the first dense convolution block to obtain the fourth fused feature map;
Step 6, train the dense convolutional network with the edge-combined deep supervision loss:
first, apply convolution and deconvolution operations to each fused feature map in turn, compute a cross-entropy loss on each feature map so obtained, and sum all of these cross-entropy losses to obtain the fusion cross-entropy loss;
second, apply a 16× upsampling operation to the feature map of high-level semantic information output by the fifth dense convolution block to obtain a logits image; pass the logits image through a sigmoid to obtain the prediction map; obtain the edge images of the prediction map and the ground-truth map with the Sobel edge detection algorithm; compute the cross-entropy loss between the two edge images to obtain the edge cross-entropy loss; add the fusion cross-entropy loss and the edge cross-entropy loss to obtain the edge-combined deep supervision loss;
third, compute the partial derivative of the edge-combined deep supervision loss with respect to each parameter of the dense convolutional network to be optimized, and update each such parameter with its partial derivative, obtaining the dense convolutional network with updated parameters; feed the training set into the network with updated parameters and compute the edge-combined deep supervision loss again;
fourth, iterate the third step until the edge-combined deep supervision losses of two successive iterations differ by no more than 0.01, obtaining the trained dense convolutional network;
Step 7, detect buildings:
input the test sample set into the trained dense convolutional network and output the detection results for the test sample set.
2. The building detection method based on fine feature learning according to claim 1, characterized in that the ground-truth map in the second sub-step of step 6 is obtained by labeling every region in each training sample: regions containing buildings are labeled 1 and regions containing no buildings are labeled 0, and all labeled regions of a training sample together form its ground-truth map.
3. The building detection method based on fine feature learning according to claim 1, characterized in that the Sobel edge detection algorithm in the second sub-step of step 6 proceeds as follows: convolve the prediction map or the ground-truth map with the Sobel operator horizontally and vertically to obtain the horizontal and vertical brightness-difference approximation maps; add the two maps and take the absolute value to obtain the grayscale approximation map; average all pixel values of the grayscale approximation map; the pixels of the grayscale approximation map whose values exceed the average form the edge image of the prediction map or of the ground-truth map.
CN201910768818.0A 2019-08-20 2019-08-20 Building detection method based on fine feature learning Pending CN110532914A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910768818.0A CN110532914A (en) Building detection method based on fine feature learning


Publications (1)

Publication Number Publication Date
CN110532914A true CN110532914A (en) 2019-12-03

Family

ID=68663697

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910768818.0A Pending CN110532914A (en) Building detection method based on fine feature learning

Country Status (1)

Country Link
CN (1) CN110532914A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190206044A1 (en) * 2016-01-20 2019-07-04 Ez3D, Llc System and method for structural inspection and construction estimation using an unmanned aerial vehicle
CN109389051A (en) * 2018-09-20 2019-02-26 华南农业大学 A kind of building remote sensing images recognition methods based on convolutional neural networks
CN109583456A (en) * 2018-11-20 2019-04-05 西安电子科技大学 Infrared surface object detection method based on Fusion Features and dense connection
CN109816695A (en) * 2019-01-31 2019-05-28 中国人民解放军国防科技大学 Target detection and tracking method for infrared small unmanned aerial vehicle under complex background
CN109871798A (en) * 2019-02-01 2019-06-11 浙江大学 A kind of remote sensing image building extracting method based on convolutional neural networks

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Shuang Wang et al., "An Improved Fully Convolutional Network for Learning Rich Building Features," 2019 IEEE International Geoscience and Remote Sensing Symposium. *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111080652A (en) * 2019-12-23 2020-04-28 西安电子科技大学 Optical remote sensing image segmentation method based on multi-scale lightweight cavity convolution
CN111161250A (en) * 2019-12-31 2020-05-15 北京云智宇航科技有限公司 Multi-scale remote sensing image dense house detection method and device
CN111161250B (en) * 2019-12-31 2023-05-26 南遥科技(广东)有限公司 Method and device for detecting dense houses by using multi-scale remote sensing images
CN111626298A (en) * 2020-04-17 2020-09-04 中国科学院声学研究所 Real-time image semantic segmentation device and segmentation method
CN111626298B (en) * 2020-04-17 2023-08-18 中国科学院声学研究所 Real-time image semantic segmentation device and segmentation method
CN112084859A (en) * 2020-08-06 2020-12-15 浙江工业大学 Building segmentation method based on dense boundary block and attention mechanism
CN112084859B (en) * 2020-08-06 2024-02-09 浙江工业大学 Building segmentation method based on dense boundary blocks and attention mechanism
CN111968088A (en) * 2020-08-14 2020-11-20 西安电子科技大学 Building detection method based on pixel and region segmentation decision fusion
CN111968088B (en) * 2020-08-14 2023-09-15 西安电子科技大学 Building detection method based on pixel and region segmentation decision fusion
CN117635645A (en) * 2023-12-08 2024-03-01 兰州交通大学 Juxtaposed multi-scale fusion edge detection model under complex dense network
CN117635645B (en) * 2023-12-08 2024-06-04 兰州交通大学 Juxtaposed multi-scale fusion edge detection model under complex dense network

Similar Documents

Publication Publication Date Title
CN110532914A (en) Building detection method based on fine feature learning
CN113240691B (en) Medical image segmentation method based on a U-shaped network
CN109034210A (en) Target detection method based on super-feature fusion and a multi-scale pyramid network
CN109598290A (en) Image small-target detection method based on combined hierarchical detection
CN103279765B (en) Steel wire rope surface damage detection method based on image matching
CN107346420A (en) Text detection and localization method for natural scenes based on deep learning
CN107204010A (en) Monocular image depth estimation method and system
CN109446925A (en) Electric device maintenance algorithm based on convolutional neural networks
CN106228528B (en) Multi-focus image fusion method based on decision maps and sparse representation
CN109800770A (en) Method, system, and device for real-time target detection
CN107844743A (en) Automatic image multi-caption generation method based on a multi-scale hierarchical residual network
CN110991274B (en) Pedestrian fall detection method based on a Gaussian mixture model and a neural network
CN109948471A (en) Traffic haze visibility detection method based on an improved InceptionV4 network
CN104992403B (en) Hybrid-operator image retargeting method based on visual similarity measurement
CN111709387B (en) Building segmentation method and system for high-resolution remote sensing images
CN112712546A (en) Target tracking method based on a twin (siamese) neural network
CN108520203A (en) Multi-target feature extraction method based on fused adaptive outer bounding boxes and cross-pooling features
CN103700117A (en) Robust optical flow field estimation method based on the TV-L1 variational model
CN108460336A (en) Pedestrian detection method based on deep learning
CN107564007A (en) Scene segmentation refinement method and system fusing global information
CN103226825B (en) Remote sensing image change detection method based on a low-rank sparse model
Lin et al. Optimal CNN-based semantic segmentation model of cutting slope images
CN113902792A (en) Building height detection method and system based on an improved RetinaNet network, and electronic device
CN106504219A (en) Road enhancement method for high-resolution remote sensing images using constrained-path morphology
CN112200766A (en) Industrial product surface defect detection method based on a region-association neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20191203