CN110110679A - Ground feature coverage rate calculation method based on fully convolutional network and conditional random field - Google Patents

Ground feature coverage rate calculation method based on fully convolutional network and conditional random field

Info

Publication number
CN110110679A
CN110110679A (application CN201910395844.3A)
Authority
CN
China
Prior art keywords
segmentation
fully convolutional
ground feature
coverage rate
random field
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910395844.3A
Other languages
Chinese (zh)
Inventor
段昶 (Duan Chang)
罗兴奕 (Luo Xingyi)
朱策 (Zhu Ce)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Map Technology Co Ltd
Original Assignee
Chengdu Map Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Map Technology Co Ltd filed Critical Chengdu Map Technology Co Ltd
Priority to CN201910395844.3A
Publication of CN110110679A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/13 Satellite images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/176 Urban or other man-made structures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/182 Network patterns, e.g. roads or rivers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/188 Vegetation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a ground feature coverage rate calculation method based on a fully convolutional network and a conditional random field, comprising the following steps: constructing a fully convolutional neural network; producing training data: annotating the acquired remote sensing images pixel by pixel according to the classes to be segmented, applying data augmentation to the remote sensing images, and constructing a semantic segmentation dataset; training the fully convolutional neural network: feeding the semantic segmentation dataset into the constructed network and iteratively updating the network parameters until the training result meets a preset convergence condition; remote sensing image segmentation: performing semantic segmentation on the image to be segmented with the trained network, obtaining preliminary segmentation results for each ground feature class; segmentation result optimization; and calculating ground feature coverage rate information. Without using professional software, the present invention solves problems that conventional segmentation algorithms cannot.

Description

Ground feature coverage rate calculation method based on fully convolutional network and conditional random field
Technical field
The present invention relates to ground feature coverage rate algorithms for remote sensing images, and in particular to a ground feature coverage rate calculation method based on a fully convolutional network and a conditional random field.
Background technique
Ground feature coverage rate information is an important part of the information carried by remote sensing images. Existing approaches mostly use professional software such as ENVI to roughly estimate coverage from the multispectral information of remote sensing images, or count ground feature coverage directly from the images. Counting coverage directly from remote sensing images currently requires traditional remote sensing image segmentation methods. However, traditional methods, such as those using brightness or texture features, K-means clustering, or HOG, usually require extensive prior information or even manual feature extraction; their applicability is limited and their generalization ability is weak. In many cases, traditional methods cannot meet current requirements in accuracy and speed.
Summary of the invention
The technical problem to be solved by the present invention is that ground feature coverage rate information currently cannot be counted accurately and quickly from remote sensing images. The object of the present invention is to provide a ground feature coverage rate calculation method based on a fully convolutional network and a conditional random field, solving the problem that no current method counts ground feature coverage rate information accurately and quickly from remote sensing images.
The present invention is achieved through the following technical solutions:
A ground feature coverage rate calculation method based on a fully convolutional network and a conditional random field, comprising the following steps:
S1, constructing a fully convolutional neural network;
S2, producing training data: annotating the acquired remote sensing images pixel by pixel according to the classes to be segmented, applying data augmentation to the remote sensing images, and constructing a semantic segmentation dataset;
S3, training the fully convolutional neural network: feeding the semantic segmentation dataset obtained in step S2 into the fully convolutional neural network constructed in step S1, and iteratively updating the network parameters until the training result meets a preset convergence condition;
S4, remote sensing image segmentation: performing semantic segmentation on the image to be segmented with the fully convolutional neural network trained in step S3, obtaining preliminary segmentation results for each ground feature class;
S5, segmentation result optimization: post-processing the segmentation results of step S4 by introducing a conditional random field, which reconstructs and optimizes the segmentation results to obtain segmentation results of higher precision;
S6, calculating ground feature coverage rate information: performing the coverage rate calculation for each ground feature class according to the segmentation results of step S5.
The ground feature classes include water body, vegetation, building, and road.
Without using professional software, the present invention solves problems that conventional segmentation algorithms cannot. Even when the remote sensing images are strongly affected by factors such as weather and illumination, it still counts ground feature coverage rate information quickly and accurately, and can even provide professional application departments with visualization results carrying useful information such as class and area. The present invention is an improvement on and innovation over traditional remote sensing coverage calculation: an automatic method that uses deep learning to obtain ground feature coverage rate information directly from remote sensing images, realizing fast and accurate acquisition of ground feature classes and their coverage rates in remote sensing images. Compared with traditional approaches, it computes faster, is more accurate, and adapts more widely.
The fully convolutional neural network constructed in step S1 adds, on the basis of the ResNet-50 convolutional neural network, parallel atrous convolution modules with different dilation rates, turning the model into a network model with image segmentation capability.
Step S3 uses the back-propagation mechanism: classification is computed with the softmax function, the error between the predicted and true values is computed, and the cross-entropy between the two serves as the back-propagation signal; the network parameters are adjusted iteratively, and the poly method is introduced to adjust the learning rate, until the network reaches the preset convergence condition.
The remote sensing image segmentation method in step S4: the remote sensing image to be segmented is cut into N × N sub-blocks, which are fed sequentially into the fully convolutional neural network; the segmentation result of each sub-block is obtained, and the sub-blocks are stitched back in their original order to obtain the segmentation result of the whole remote sensing image.
In step S5, optimization of the segmentation results is realized by improving the existing fully convolutional network: a conditional random field is used to optimize each class's segmentation result and adjust its pixel extent, optimizing the class assignment at image edges and improving segmentation precision.
The method of calculating each ground feature coverage rate in step S6: according to the segmentation result, traverse and count the number of pixels belonging to each ground feature class, compute each class's plane area, and derive its coverage rate.
Compared with the prior art, the present invention has the following advantages and beneficial effects:
1. The ground feature coverage rate calculation method based on a fully convolutional network and a conditional random field solves, without professional software, problems that conventional segmentation algorithms cannot;
2. Even when remote sensing images are strongly affected by factors such as weather and illumination, the method still counts ground feature coverage rate information quickly and accurately, and can even provide professional application departments with visualization results carrying useful information such as class and area;
3. The method realizes fast and accurate acquisition of ground feature classes and their coverage rate information from remote sensing images; compared with traditional approaches, it computes faster, is more accurate, and adapts more widely.
Detailed description of the invention
The accompanying drawings described herein provide further understanding of the embodiments of the present invention and constitute a part of this application; they do not limit the embodiments of the present invention. In the drawings:
Fig. 1 is a flow diagram of the method of the present invention;
Fig. 2 is a schematic diagram of the network model structure of the present invention;
Fig. 3 shows an image segmentation result without the conditional random field;
Fig. 4 shows an image segmentation result with the conditional random field introduced.
Specific embodiment
To make the objectives, technical solutions, and advantages of the present invention clearer, the invention is described in further detail below with reference to the embodiments and the accompanying drawings. The exemplary embodiments of the invention and their explanations are used only to explain the invention, not to limit it.
Embodiment 1
As shown in Fig. 1, the ground feature coverage rate calculation method based on a fully convolutional network and a conditional random field according to the present invention comprises the following steps:
S1, constructing a fully convolutional neural network;
S2, producing training data: annotating the acquired remote sensing images pixel by pixel according to the classes to be segmented, applying data augmentation to the remote sensing images, and constructing a semantic segmentation dataset;
S3, training the fully convolutional neural network: feeding the semantic segmentation dataset obtained in step S2 into the fully convolutional neural network constructed in step S1, and iteratively updating the network parameters until the training result meets a preset convergence condition;
S4, remote sensing image segmentation: performing semantic segmentation on the image to be segmented with the fully convolutional neural network trained in step S3, obtaining preliminary segmentation results for each ground feature class;
S5, segmentation result optimization: post-processing the segmentation results of step S4 by introducing a conditional random field, which reconstructs and optimizes the segmentation results to obtain segmentation results of higher precision;
S6, calculating ground feature coverage rate information: performing the coverage rate calculation for each ground feature class according to the segmentation results of step S5.
The ground feature classes include water body, vegetation, building, and road.
Without using professional software, the present invention solves problems that conventional segmentation algorithms cannot. Even when the remote sensing images are strongly affected by factors such as weather and illumination, it still counts ground feature coverage rate information quickly and accurately, and can even provide professional application departments with visualization results carrying useful information such as class and area. The present invention is an improvement on and innovation over traditional remote sensing coverage calculation: an automatic method that uses deep learning to obtain ground feature coverage rate information directly from remote sensing images, realizing fast and accurate acquisition of ground feature classes and their coverage rates in remote sensing images. Compared with traditional approaches, it computes faster, is more accurate, and adapts more widely.
Embodiment 2
Based on embodiment 1, the fully convolutional neural network constructed in step S1 adds, on the basis of the ResNet-50 convolutional neural network, parallel atrous convolution modules with different dilation rates, turning the model into a network model with image segmentation capability.
The network structure is shown in Fig. 2; the specific structure is as follows:
From input to output, connected in sequence: one convolutional layer, one pooling layer, 4 residual structure block modules, one parallel atrous convolution module, and a 1 × 1 convolutional layer.
The residual structure is added to extract features better; the parallel atrous convolution module is designed to extract multi-scale feature information and improve the segmentation result; the 1 × 1 convolution is introduced to keep the input image size unrestricted and to retain spatial information.
The first convolutional layer has scale 3 × 3 with 256 kernels and convolution stride 1; the first pooling layer has scale 2 × 2 and uses average pooling. The 4 residual structure block modules are: residual structure block1 containing 3 block modules, residual structure block2 containing 4 block modules, residual structure block3 containing 6 block modules, and residual structure block4 containing 3 block modules, where each block module is composed of three 3 × 3 convolutional layers, with the first convolutional layer also connected directly to the third (the residual shortcut).
The parallel atrous convolution module consists of 1 group of 1 × 1 convolutions with 256 kernels, side by side with 3 groups of 3 × 3 atrous convolutional layers, each group with 256 kernels and dilation rates 6, 12, and 18 respectively.
All feature branches are concatenated and fed into a final 1 × 1 convolutional layer with 256 kernels to obtain the final result.
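By way of illustration only, a network of the shape just described can be sketched in PyTorch roughly as follows, assuming torchvision's ResNet-50 as the backbone; the class count and the bilinear upsampling at the end are illustrative assumptions, not details fixed by the text:

    import torch
    import torch.nn as nn
    from torchvision.models import resnet50

    class ASPP(nn.Module):
        """Parallel atrous convolutions: a 1x1 branch plus 3x3 branches at dilation rates 6, 12, 18."""
        def __init__(self, in_ch, out_ch=256, rates=(6, 12, 18)):
            super().__init__()
            self.branches = nn.ModuleList(
                [nn.Conv2d(in_ch, out_ch, 1)] +
                [nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates])
            # concatenate all feature branches, then fuse with a final 1x1 convolution
            self.project = nn.Conv2d(out_ch * (1 + len(rates)), out_ch, 1)

        def forward(self, x):
            return self.project(torch.cat([b(x) for b in self.branches], dim=1))

    class FCNSegmenter(nn.Module):
        def __init__(self, num_classes):
            super().__init__()
            backbone = resnet50()
            # keep the stem and the four residual stages (3, 4, 6, 3 blocks); drop avgpool/fc
            self.backbone = nn.Sequential(*list(backbone.children())[:-2])
            self.aspp = ASPP(2048)
            self.classifier = nn.Conv2d(256, num_classes, 1)  # 1x1 conv retains spatial information
            self.up = nn.Upsample(scale_factor=32, mode='bilinear', align_corners=False)

        def forward(self, x):
            return self.up(self.classifier(self.aspp(self.backbone(x))))

    # logits for one 256 x 256 RGB tile; 5 classes (4 ground features + background) is an assumption
    model = FCNSegmenter(num_classes=5)
    out = model(torch.randn(1, 3, 256, 256))  # shape (1, 5, 256, 256)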
Embodiment 3
Based on the above embodiments, in step S2 the network model is trained with supervised training, which requires a large amount of training data with ground-truth labels for the training process. The specific implementation is as follows:
S2.1, annotate the acquired remote sensing data images pixel by pixel according to the classes to be segmented;
S2.2, cut the annotated remote sensing images with a sliding-window cutting algorithm into labeled sub-image blocks of size 256 × 256;
S2.3, augment the sub-image blocks with 90°, 180°, and 270° rotations, up-down and left-right mirroring, 0.5×, 1.5×, and 2× scaling, and added Gaussian and salt-and-pepper noise, expanding the data volume to 16 times the original;
S2.4, randomly split the augmented dataset into network training data and network test data at a ratio of 8:2.
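By way of illustration only, the sliding-window cutting of S2.2 and the rotation and mirroring operations of S2.3 might look like the NumPy sketch below; scaling and noise injection are omitted for brevity, and the function names are illustrative:

    import numpy as np

    def sliding_crops(image, label, size=256, stride=256):
        """Cut an annotated image and its label map into size x size labeled blocks (S2.2)."""
        h, w = image.shape[:2]
        for y in range(0, h - size + 1, stride):
            for x in range(0, w - size + 1, stride):
                yield image[y:y + size, x:x + size], label[y:y + size, x:x + size]

    def rotations_and_mirrors(img, lbl):
        """Rotation and mirroring part of S2.3; scaling and noise are handled separately."""
        out = []
        for k in range(4):  # 0, 90, 180, 270 degree rotations
            ri, rl = np.rot90(img, k), np.rot90(lbl, k)
            out.append((ri, rl))
            out.append((np.flipud(ri), np.flipud(rl)))  # up-down mirror
            out.append((np.fliplr(ri), np.fliplr(rl)))  # left-right mirror
        return out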
Embodiment 4
Based on the above embodiments, the specific implementation of training the fully convolutional neural network in step S3 is as follows: the produced training data are fed into the constructed fully convolutional network, whose output is classified with the softmax function

P(k) = exp(z_k) / Σ_j exp(z_j)

where k is the label class and z_k is the network output for class k. The error between the network output and the true value of the original label is computed with the cross-entropy loss function:

L = -Σ_k y_k · log(p_k)

where p_k is the prediction result for class k and y_k is the true label value for class k.

The prediction error is back-propagated with the stochastic gradient descent algorithm (SGD) to update the parameters of the network, and the learning rate is updated with the poly policy:

lr = lr_0 × (1 - iter / max_iter)^power

where iter is the current iteration number, max_iter is the maximum iteration number, and power is a hyperparameter set to 0.9. The initial learning rate lr_0 is set to 0.01, and the maximum number of iterations (max_iter) is set to 20000.

Every m training rounds (m = 50), validation-set data are fed to the network and the model's pixel accuracy (PA) on the validation set is computed:

PA = Σ_i p_ii / Σ_i Σ_j p_ij

where p_ii is the number of pixels belonging to class i and predicted as class i, and p_ij is the number of pixels belonging to class i but classified as class j.

The network model is assessed by the pixel accuracy PA obtained on the validation set, until the network reaches the preset convergence condition or training reaches the maximum number of iterations M (M = 20000).
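By way of illustration only, the training procedure described above (SGD with cross-entropy loss and the poly learning-rate policy, initial rate 0.01, 20000 iterations) may be sketched in PyTorch as follows; the momentum value is a common SGD default and an assumption, not specified above:

    import torch
    import torch.nn as nn

    def poly_lr(base_lr, it, max_iter, power=0.9):
        """Poly policy: lr = lr_0 * (1 - iter / max_iter) ** power."""
        return base_lr * (1 - it / max_iter) ** power

    def train(model, loader, max_iter=20000, base_lr=0.01):
        opt = torch.optim.SGD(model.parameters(), lr=base_lr, momentum=0.9)
        loss_fn = nn.CrossEntropyLoss()  # softmax classification + cross-entropy in one module
        it = 0
        while it < max_iter:
            for img, lbl in loader:  # img: (B, 3, H, W) floats; lbl: (B, H, W) class indices
                for g in opt.param_groups:
                    g['lr'] = poly_lr(base_lr, it, max_iter)
                opt.zero_grad()
                loss = loss_fn(model(img), lbl)
                loss.backward()  # back-propagate the prediction error
                opt.step()
                it += 1
                if it >= max_iter:
                    return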
Embodiment 5
Based on the above embodiment, the method for distant image segmentation is realized in step S4 are as follows: remote sensing images to be split are cut into N × N sub-block, sequentially inputs full convolutional neural networks, obtains the segmentation result of each sub-block, then by former segmentation sequential concatenation, obtain whole The segmentation result of width remote sensing images.Specific embodiment is as follows: by the RGB remote sensing images I benefit that resolution ratio to be predicted is H × W The image block B of 256 × 256 sizes is cut into sliding window cutting algorithml, l representative image block serial number is sequentially defeated by image block The full convolutional network for entering trained completion predicted, the image block after being dividedIt seeks by cutting is suitable to image block Spliced, obtains the segmentation result of whole picture remote sensing images I
Embodiment 6
Based on the above embodiments, in step S5 the optimization of the segmentation results is realized by improving the existing fully convolutional network: a conditional random field is used to optimize each class's segmentation result and adjust its pixel extent, optimizing the class assignment at image edges and improving segmentation precision.
The specific implementation is as follows:
After the segmentation result I' is obtained with the fully convolutional neural network, the conditional random field is introduced. The feature vectors at the pixels are denoted I = {I_1, I_2, ..., I_n}, where n is the number of image pixels, and a global observation is made over I. Let x = {L_1, L_2, ..., L_num_cls} denote the possible pixel labels and X = {X_1, X_2, ..., X_n} the output label sequence (each X_i takes its value in x).
At this point, for the segmentation result, given the observation sequence I, the conditional probability of the label sequence X is:

P(X | I) = (1 / Z(I)) · exp(-E(x | I))

where Z(I) is the normalizing factor, the sum of exp(-E(x | I)) over all different label assignments x.

E(x | I) is the energy function, with expression E(x) = Σ_i θ_i(x_i) + Σ_ij θ_ij(x_i, x_j), where x_i and x_j denote the labels of pixels i and j.

θ_i(x_i) is the unary potential, with expression:

θ_i(x_i) = -log P(x_i)

θ_ij(x_i, x_j) is the binary (pairwise) potential, with expression:

θ_ij(x_i, x_j) = μ(x_i, x_j) · Σ_{m=1}^{K} ω_m · k_m(f_i, f_j)

In the formula above, μ(x_i, x_j) is the label compatibility function (1 when x_i ≠ x_j and 0 otherwise), and k_m(f_i, f_j) is a Gaussian kernel between f_i and f_j. In the present invention, the kernels combine bilateral position and color terms, and K is taken as 2, i.e. m takes 1 and 2.

The first kernel depends on pixel position (p) and pixel color intensity (I):

k_1(f_i, f_j) = exp(-|p_i - p_j|^2 / (2σ_α^2) - |I_i - I_j|^2 / (2σ_β^2))

The second kernel depends only on pixel position (p):

k_2(f_i, f_j) = exp(-|p_i - p_j|^2 / (2σ_γ^2))

σ_α, σ_β, σ_γ are hyperparameters, taking the values 1, 1, and 1.5 respectively.

f_i is the feature vector of pixel i, represented as (x, y, r, g, b). ω_m are the corresponding weights.

The energy function E(x | I) is composed of the unary potential and the binary potential.

Finally, the binary potential encourages points that are close in position and similar in features to take the same class. If the binary potential is relatively large, the conditional probability of the final label sequence is small, indicating the pixels do not belong to the same class.

Using the above principle, the optimization of the segmentation result I' is completed, giving the final result I''.
In this embodiment the conditional random field optimizes each class's segmentation result and adjusts its pixel extent; the effect of the class optimization at image edges is shown in Fig. 3 and Fig. 4. After the conditional random field is introduced in Fig. 4, compared with Fig. 3, the edges of all object classes in the image are optimized and become smoother, so the segmentation of every object class in the whole image is more accurate.
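By way of illustration only, a dense conditional random field of the kind described above can be applied with the open-source pydensecrf library. The kernel widths below follow the values σ_α = 1, σ_β = 1, σ_γ = 1.5 given above, while the compat values stand in for the weights ω_m and are assumptions:

    import numpy as np
    import pydensecrf.densecrf as dcrf
    from pydensecrf.utils import unary_from_softmax

    def crf_refine(softmax_probs, rgb_image, num_iters=5):
        """Refine per-pixel class probabilities (C, H, W) with a dense CRF."""
        c, h, w = softmax_probs.shape
        d = dcrf.DenseCRF2D(w, h, c)
        d.setUnaryEnergy(unary_from_softmax(softmax_probs))  # unary term: -log P(x_i)
        d.addPairwiseGaussian(sxy=1.5, compat=3)  # position-only kernel k_2
        d.addPairwiseBilateral(sxy=1, srgb=1, compat=5,
                               rgbim=np.ascontiguousarray(rgb_image, dtype=np.uint8))  # bilateral kernel k_1
        q = d.inference(num_iters)
        return np.argmax(np.array(q).reshape(c, h, w), axis=0)  # refined label map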
Embodiment 7
Based on the above embodiments, this embodiment gives a concrete example for the vegetation ground feature coverage rate. The method for calculating the vegetation coverage rate: according to the segmentation result, traverse and count the number of pixels belonging to each ground feature class, compute each class's plane area, and derive its coverage rate. The specific implementation is as follows:
Let r_k be the part of the segmented image I' belonging to ground feature class k (k is the ground feature class). Applying the indicator function 1[pixel ∈ r_k] over the image, compute the pixel-count sum S_k; the coverage rate of the kth ground feature class is then calculated as

C_k = ε · S_k / (H × W)

where H × W is the total number of pixels and ε is a correction factor, taken as 0.9.
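By way of illustration only, the coverage rate calculation may be sketched as follows:

    import numpy as np

    def coverage_rates(label_map, classes, eps=0.9):
        """C_k = eps * S_k / (H * W): S_k is the pixel count of class k in the
        segmented label map, eps the correction factor from the text."""
        total = label_map.size  # H * W
        return {k: eps * float(np.sum(label_map == k)) / total for k in classes}

    # class codes 1..4 for water body, vegetation, building, road are an assumed labeling
    rates = coverage_rates(np.random.randint(0, 5, (1024, 1024)), classes=[1, 2, 3, 4])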
The specific embodiments described above further detail the objectives, technical solutions, and beneficial effects of the present invention. It should be understood that the foregoing is merely a specific embodiment of the present invention and is not intended to limit its scope of protection; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.

Claims (7)

1. A ground feature coverage rate calculation method based on a fully convolutional network and a conditional random field, characterized by comprising the following steps:
S1, constructing a fully convolutional neural network;
S2, producing training data: annotating the acquired remote sensing images pixel by pixel according to the classes to be segmented, applying data augmentation to the remote sensing images, and constructing a semantic segmentation dataset;
S3, training the fully convolutional neural network: feeding the semantic segmentation dataset obtained in step S2 into the fully convolutional neural network constructed in step S1, and iteratively updating the network parameters until the training result meets a preset convergence condition;
S4, remote sensing image segmentation: performing semantic segmentation on the image to be segmented with the fully convolutional neural network trained in step S3, obtaining preliminary segmentation results for each ground feature class;
S5, segmentation result optimization: post-processing the segmentation results of step S4 by introducing a conditional random field, which reconstructs and optimizes the segmentation results to obtain segmentation results of higher precision;
S6, calculating ground feature coverage rate information: performing the coverage rate calculation for each ground feature class according to the segmentation results of step S5.
2. The ground feature coverage rate calculation method based on a fully convolutional network and a conditional random field according to claim 1, characterized in that the fully convolutional neural network constructed in step S1 adds, on the basis of the ResNet-50 convolutional neural network, parallel atrous convolution modules with different dilation rates, turning the model into a network model with image segmentation capability.
3. The ground feature coverage rate calculation method based on a fully convolutional network and a conditional random field according to claim 1, characterized in that step S3 uses the back-propagation mechanism: classification is computed with the softmax function, the error between the predicted and true values is computed, and the cross-entropy between the two serves as the back-propagation signal; the network parameters are adjusted iteratively, and the poly method is introduced to adjust the learning rate, until the network reaches the preset convergence condition.
4. The ground feature coverage rate calculation method based on a fully convolutional network and a conditional random field according to claim 1, characterized in that the remote sensing image segmentation method in step S4 is: the remote sensing image to be segmented is cut into N × N sub-blocks, which are fed sequentially into the fully convolutional neural network; the segmentation result of each sub-block is obtained, and the sub-blocks are stitched back in their original order to obtain the segmentation result of the whole remote sensing image.
5. The ground feature coverage rate calculation method based on a fully convolutional network and a conditional random field according to claim 1, characterized in that in step S5, optimization of the segmentation results is realized by improving the existing fully convolutional network: a conditional random field is used to optimize each class's segmentation result and adjust its pixel extent, optimizing the class assignment at image edges and improving segmentation precision.
6. The ground feature coverage rate calculation method based on a fully convolutional network and a conditional random field according to claim 1, characterized in that the method of calculating each ground feature coverage rate in step S6 is: according to the segmentation result, traverse and count the number of pixels belonging to each ground feature class, compute each class's plane area, and derive its coverage rate.
7. The ground feature coverage rate calculation method based on a fully convolutional network and a conditional random field according to claim 1, characterized in that the ground feature classes include water body, vegetation, building, and road.
CN201910395844.3A 2019-05-13 2019-05-13 Ground feature coverage rate calculation method based on fully convolutional network and conditional random field Pending CN110110679A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910395844.3A CN110110679A (en) 2019-05-13 2019-05-13 Ground feature coverage rate calculation method based on fully convolutional network and conditional random field

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910395844.3A CN110110679A (en) 2019-05-13 2019-05-13 Ground feature coverage rate calculation method based on fully convolutional network and conditional random field

Publications (1)

Publication Number Publication Date
CN110110679A true CN110110679A (en) 2019-08-09

Family

ID=67489727

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910395844.3A Pending CN110110679A (en) 2019-05-13 2019-05-13 Ground feature coverage rate calculation method based on fully convolutional network and conditional random field

Country Status (1)

Country Link
CN (1) CN110110679A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180259970A1 (en) * 2017-03-10 2018-09-13 TuSimple System and method for occluding contour detection
CN108681692A (en) * 2018-04-10 2018-10-19 华南理工大学 Increase Building recognition method in a kind of remote sensing images based on deep learning newly
CN108876796A (en) * 2018-06-08 2018-11-23 长安大学 A kind of lane segmentation system and method based on full convolutional neural networks and condition random field
CN109657082A (en) * 2018-08-28 2019-04-19 武汉大学 Remote sensing images multi-tag search method and system based on full convolutional neural networks

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YI Meng et al., "Semantic Classification Method for Aerial Images Based on an Improved Fully Convolutional Neural Network", Computer Engineering (《计算机工程》) *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111062950A (en) * 2019-11-29 2020-04-24 南京恩博科技有限公司 Method, storage medium and equipment for multi-class forest scene image segmentation
CN111104976A (en) * 2019-12-12 2020-05-05 南京大学 Time sequence image-based blue-green algae coverage rate calculation method
CN110991430A (en) * 2020-03-02 2020-04-10 中科星图股份有限公司 Ground feature identification and coverage rate calculation method and system based on remote sensing image
CN112215815A (en) * 2020-10-12 2021-01-12 杭州视在科技有限公司 Bare soil coverage automatic detection method for construction site
CN113449594A (en) * 2021-05-25 2021-09-28 湖南省国土资源规划院 Multilayer network combined remote sensing image ground semantic segmentation and area calculation method
CN113449594B (en) * 2021-05-25 2022-11-11 湖南省国土资源规划院 Multilayer network combined remote sensing image ground semantic segmentation and area calculation method

Similar Documents

Publication Publication Date Title
CN110110679A (en) Atural object coverage rate calculation method based on full convolutional network and condition random field
CN111738124B (en) Remote sensing image cloud detection method based on Gabor transformation and attention
CN107392925B (en) Remote sensing image ground object classification method based on super-pixel coding and convolutional neural network
CN109447994A (en) In conjunction with the remote sensing image segmentation method of complete residual error and Fusion Features
CN108549893A (en) A kind of end-to-end recognition methods of the scene text of arbitrary shape
CN109493346A (en) It is a kind of based on the gastric cancer pathology sectioning image dividing method more lost and device
CN109948547A (en) Urban green space landscape evaluation method, device, storage medium and terminal device
CN107229904A (en) A kind of object detection and recognition method based on deep learning
CN108681692A (en) Increase Building recognition method in a kind of remote sensing images based on deep learning newly
CN108010034A (en) Commodity image dividing method and device
CN106446914A (en) Road detection based on superpixels and convolution neural network
CN105631415A (en) Video pedestrian recognition method based on convolution neural network
CN110853026A (en) Remote sensing image change detection method integrating deep learning and region segmentation
CN114494821B (en) Remote sensing image cloud detection method based on feature multi-scale perception and self-adaptive aggregation
CN106339753A (en) Method for effectively enhancing robustness of convolutional neural network
CN111489370B (en) Remote sensing image segmentation method based on deep learning
CN107358182A (en) Pedestrian detection method and terminal device
CN114187450A (en) Remote sensing image semantic segmentation method based on deep learning
CN110084817A (en) Digital elevation model production method based on deep learning
CN106683102A (en) SAR image segmentation method based on ridgelet filters and convolution structure model
CN113610905B (en) Deep learning remote sensing image registration method based on sub-image matching and application
CN108776777A (en) The recognition methods of spatial relationship between a kind of remote sensing image object based on Faster RCNN
CN111768326B (en) High-capacity data protection method based on GAN (gas-insulated gate bipolar transistor) amplified image foreground object
CN112001293A (en) Remote sensing image ground object classification method combining multi-scale information and coding and decoding network
CN111667461B (en) Abnormal target detection method for power transmission line

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20190809)