CN110443822A - Semantic-edge-assisted fine extraction method for high-resolution remote sensing targets - Google Patents

Semantic-edge-assisted fine extraction method for high-resolution remote sensing targets Download PDF

Info

Publication number
CN110443822A
CN110443822A (application CN201910638370.0A / CN201910638370A)
Authority
CN
China
Prior art keywords
target
boundary
edge
image
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910638370.0A
Other languages
Chinese (zh)
Other versions
CN110443822B (en)
Inventor
夏列钢
张雄波
吴炜
杨海平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN201910638370.0A priority Critical patent/CN110443822B/en
Publication of CN110443822A publication Critical patent/CN110443822A/en
Application granted granted Critical
Publication of CN110443822B publication Critical patent/CN110443822B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

An edge-assisted fine extraction method for high-resolution remote sensing targets. First, deep convolutional neural networks are designed for the remote sensing target extraction task, and samples are produced to train a target detection model and an edge detection model. The remote sensing image is then input to the target detection model to extract ground-object bounding boxes, thereby determining each target's type and position range; the image is also input to the edge detection model to obtain a ground-object edge-strength map. Finally, for the target in each bounding box, boundary extraction is performed with the edge-strength map of the corresponding position as guidance: edges of higher strength are thinned directly to form the boundary, while fuzzy and especially broken edges are repaired under the constraint of boundary completeness, yielding the final fine target boundary. The target boundaries may be vectorized into polygon features according to task requirements. The invention achieves fine extraction of high-resolution remote sensing targets.

Description

Semantic-edge-assisted fine extraction method for high-resolution remote sensing targets
Technical field
The invention belongs to the field of remote sensing and to the fields of target detection and edge extraction in computer vision, and proposes a two-stage fine extraction method for remote sensing targets that combines deep-learning-based target detection and edge detection.
Technical background
Target extraction is an important means of deriving information from remote sensing data. As the spatial resolution of remote sensing imagery keeps improving, the targets observable in the imagery and the required extraction accuracy rise correspondingly. Although instance segmentation from general computer vision can serve some applications, it falls far short of the accuracy required for practical ground-object boundaries; the commonly used two-stage detection-plus-mask approach (e.g. Mask R-CNN) is therefore difficult to apply directly to fine target extraction from high-resolution remote sensing imagery. In fact, deep-learning-based target detection only needs to locate a target and determine its bounding box, so its accuracy is generally high; what actually limits extraction quality is the mask segmentation stage, which must determine the true ground-object boundary precisely. Because ground objects vary in complex ways in spectral appearance and spatial form, boundary recognition from the local data inside a detection box is still difficult for current deep segmentation models to perform with good accuracy. This is the main line of improvement of the present method: introducing semantic edges to raise the boundary extraction accuracy, thereby achieving fine extraction of high-resolution remote sensing targets as a whole and reaching practical usability.
Since the advent of deep learning, mainstream target detection algorithms fall roughly into two types: region-proposal-based detection convolutional neural networks and end-to-end integrated detection networks. End-to-end detectors pursue higher detection speed by taking the result of a single pass as the final detection; compared with multi-stage training, their accuracy appears less satisfactory than that of region-proposal-based methods. In the region-proposal family, R-CNN was the pioneering proposal: it replaced the earlier cumbersome exhaustive sliding-window scheme by extracting a small number of suitable candidate regions via selective search, normalizing the candidate regions in size, extracting features with a deep convolutional neural network, and finally classifying with an SVM to obtain the detection result. The later Fast R-CNN, Faster R-CNN, FPN, Mask R-CNN and other detection algorithms are all targeted improvements on this basis. Since detection algorithms are only required to locate and classify targets, they achieve good detection accuracy.
Edge detection algorithms must determine object boundaries accurately, which is technically more difficult than mere target localization. Among deep-learning edge detection algorithms, RCF is a comparatively effective convolutional neural network model. Its main idea is to fine-tune a VGG model: the convolutional layers of the five VGG stages are reduced in dimension to obtain per-stage feature fusion maps, each stage's map is deconvolved and its loss value computed individually, and finally the five stages are fused at multiple scales into a final feature fusion map whose loss value is also computed. Compared with HED, which uses only the last convolutional layer of each stage, using the features of all convolutional layers brings an improvement in results. For strong boundaries, the edges carry rich semantic information and convolution captures global context, so edge detection works well. For weak boundaries, however, the target edges provide little effective semantic information — shadow occlusion by tall objects may even erase the distinction between target and non-target edges entirely — so the network cannot recognize the object edge and the extracted edges break. In practical terms a broken edge cannot be regarded as a complete boundary extraction, so repairing target boundaries toward completeness is an indispensable part of the boundary extraction task.
This method attempts to combine the advantages of target detection and edge extraction: under the guidance of the high-accuracy detection bounding boxes, object boundaries are located finely from the edge detection results, and occluded or disturbed edges are actively inferred, completed and connected based on the detection results, thereby achieving fine extraction of remote sensing targets.
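As a rough orientation, the box-guided two-stage flow described above can be sketched in a few lines of Python (the function name, the list-of-tuples box format and the cropping logic are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def extract_targets(edge_strength, boxes, threshold=0.5):
    """Crop the edge-strength map to each detected bounding box and
    binarize it; thinning and boundary repair would follow."""
    results = []
    for (x1, y1, x2, y2) in boxes:
        patch = edge_strength[y1:y2, x1:x2]             # box-guided crop
        results.append((patch >= threshold).astype(np.uint8))
    return results
```

The point of the crop is that all later boundary work happens only inside detected boxes, which is what keeps spurious background edges out of the final result.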
Summary of the invention
The present invention overcomes the defect that ground-object edge extraction in high-resolution remote sensing imagery is imprecise and incomplete due to factors such as fuzzy boundaries and mutual occlusion of ground objects, and proposes an edge-assisted fine extraction method for high-resolution remote sensing targets.
The invention uses edge assistance to overcome the inaccurate mask boundaries of general instance segmentation, while the two-stage process ensures, with the aid of the bounding boxes, that target boundaries are complete, overcoming the tendency of general edge extraction to break; fine extraction of high-resolution remote sensing targets is thus finally achieved.
To achieve the above object, the technical solution proposed by the present invention is as follows:
An edge-assisted fine extraction method for high-resolution remote sensing targets, comprising the following steps:
Step 1: produce remote sensing target extraction samples by drawing the fine boundary of each target against the imagery and determining its type. Two different annotations are generated for the same image sample: target bounding-box annotations as required by the target detection model, and target boundary annotations as required by the edge detection model. Specifically:
Step 1.1: acquire high-resolution remote sensing imagery: use optical satellite remote sensing data with visible-to-near-infrared sensors, or aerial data acquired with a general optical camera; depending on resolution requirements, multispectral imagery may be used directly or fused panchromatic imagery may be used;
Step 1.2: cut the remote sensing imagery: select typical target locations in the production area and cut the imagery into tiles of a uniform pixel size;
Step 1.3: produce deep learning training samples: draw the training samples with ArcGIS or other GIS software, annotating the boundary of each ground-object target to obtain the corresponding .shp file; generate bounding-box annotations as required by the target detection model and boundary annotations as required by the edge detection model;
Step 1.4: acquire 200 or more images with corresponding annotations as training samples according to task requirements, and set up a separate test sample set for measuring detection accuracy;
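The uniform tiling of step 1.2 can be sketched as follows (the function name and the non-overlapping tiling scheme are assumptions; the patent only requires a unified tile size, with 512 px used in the embodiment):

```python
import numpy as np

def tile_image(image, tile=512):
    """Cut an image array into non-overlapping tiles of a uniform pixel
    size; trailing remainders smaller than a tile are dropped."""
    h, w = image.shape[:2]
    return [image[y:y + tile, x:x + tile]
            for y in range(0, h - tile + 1, tile)
            for x in range(0, w - tile + 1, tile)]
```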
Step 2: train the target detection model and the edge detection model with the prepared image samples. A Faster R-CNN neural network is trained to obtain the target detection model and an RCF neural network is trained to obtain the edge detection model; the networks may be replaced or modified according to the production target. Specifically:
Step 2.1: design the deep convolutional neural networks: for training the target detection model and the edge detection model, the two networks selected are Faster R-CNN and RCF respectively; they may be replaced or modified according to the production target;
Step 2.2: initialize the weights: initialize the RCF network weights with a VGG pre-trained model, and initialize the Faster R-CNN weights with a model pre-trained on the COCO dataset;
Step 2.3: set the training hyperparameters; the specific values are fixed after model tuning:
RCF training parameters: iterations = 8000, batch_size = 4, learning-rate update policy = step, learning-rate update steps = [3200, 4800, 6400, 8000], initial learning rate = 0.001, learning-rate update coefficient = 0.1;
Faster R-CNN training parameters: number of training stages = 3, iteration rounds per stage = [40, 120, 40], iterations per round = 1000, validation interval = 50, batch_size = 4, learning-rate update policy = step, learning-rate update step = 1000, initial learning rate = 0.001, learning-rate momentum = 0.9, weight decay = 0.0001;
Step 2.4: train the models: input the training samples to the RCF model and train with the hyperparameters of step 2.3 to obtain an edge detection model that extracts ground-object edge contours; input the training samples to Faster R-CNN and train with the hyperparameters of step 2.3 to obtain a target detection model that extracts ground-object bounding boxes;
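The "step" learning-rate policy configured in step 2.3 can be illustrated in plain Python (the dictionary keys and the multiplicative-decay reading of the update coefficient follow common Caffe-style conventions and are assumptions, not spelled out in the patent):

```python
# RCF training parameters from step 2.3, collected as a plain dict
rcf_params = {
    "iterations": 8000,
    "batch_size": 4,
    "lr_policy": "step",
    "lr_steps": [3200, 4800, 6400, 8000],
    "base_lr": 0.001,
    "lr_coeff": 0.1,   # learning-rate update coefficient
}

def lr_at(iteration, params):
    """Learning rate under the 'step' policy: multiply the base rate by
    the update coefficient at each configured step boundary reached."""
    lr = params["base_lr"]
    for step in params["lr_steps"]:
        if iteration >= step:
            lr *= params["lr_coeff"]
    return lr
```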
Step 3: input the image samples used for testing into the target detection model and the edge detection model to obtain ground-object bounding boxes and an edge-strength map. The edge-strength map indicates the likelihood that each position in the remote sensing image is an object edge; a bounding box is a rectangular frame indicating a target's position range and type. Specifically:
Step 3.1: input the high-resolution remote sensing image to the target detection model to obtain the rectangular bounding boxes of ground-object targets; input it to the edge detection model to obtain the ground-object edge-strength map;
Step 3.2: parameterize the bounding boxes: convert each rectangular bounding box output by the target detection model into its bottom-left vertex coordinate and top-right vertex coordinate. Specifically:
x1 = x - w, y1 = y - h
x2 = x + w, y2 = y + h
where x, y, w, h respectively denote the center abscissa, the center ordinate, the width and the height of the rectangular bounding box;
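The parameterization of step 3.2 is a one-line conversion; note that in the patent's formulas w and h appear to act as half-extents, since the corners are obtained as x ± w and y ± h (a sketch with illustrative names):

```python
def box_to_corners(x, y, w, h):
    """Bottom-left and top-right vertices from the center-based
    parameters, exactly as in the step 3.2 formulas."""
    return (x - w, y - h), (x + w, y + h)
```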
Step 4: thin the edge-strength map of step 3 into boundaries of single-pixel width. Specifically:
Step 4.1: convert the ground-object edge-strength map from a grayscale image to a binary map according to a set threshold. Let binary_image(x, y) denote the edge estimate for the ground object at image position (x, y) (1 means edge, 0 means non-edge); it may be expressed as:
binary_image(x, y) = 1 if edge_strength(x, y) ≥ threshold, and 0 otherwise,
where threshold is a real number on the interval [0, 1], settable by the user, with initial default value 0.5, and x, y are the horizontal and vertical image coordinates.
Step 4.2: for the multi-pixel-wide thick edge lines in the binary map, repeatedly apply erosion toward the center of each edge line until every edge line is only one pixel wide, achieving skeleton extraction.
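Steps 4.1–4.2 amount to thresholding followed by thinning. A self-contained sketch using Zhang–Suen thinning — one common erosion-style thinning scheme; the patent does not name a specific algorithm — might look like:

```python
import numpy as np

def binarize(edge_strength, threshold=0.5):
    """Step 4.1: threshold the grayscale edge-strength map into a binary map."""
    return (edge_strength >= threshold).astype(np.uint8)

def thin(binary):
    """Step 4.2: iteratively erode thick edge lines until they are one
    pixel wide (Zhang-Suen thinning)."""
    img = np.pad(binary.astype(np.uint8), 1)
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            deletions = []
            for y in range(1, img.shape[0] - 1):
                for x in range(1, img.shape[1] - 1):
                    if not img[y, x]:
                        continue
                    # 8-neighbourhood, clockwise from north
                    p = [img[y-1, x], img[y-1, x+1], img[y, x+1],
                         img[y+1, x+1], img[y+1, x], img[y+1, x-1],
                         img[y, x-1], img[y-1, x-1]]
                    b = sum(p)                                 # neighbour count
                    a = sum(p[i] == 0 and p[(i + 1) % 8] == 1  # 0->1 transitions
                            for i in range(8))
                    if step == 0:
                        ok = p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0
                    else:
                        ok = p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0
                    if 2 <= b <= 6 and a == 1 and ok:
                        deletions.append((y, x))
            for y, x in deletions:
                img[y, x] = 0
                changed = True
    return img[1:-1, 1:-1]
```

A useful property of this scheme, consistent with step 4.2's intent, is that lines already one pixel wide pass through unchanged.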
Step 5: with the ground-object bounding boxes of step 3 as constraints, repair the single-pixel-wide boundaries of step 4 to obtain complete, fine polygonal ground-object boundaries. Specifically:
Step 5.1: find the unclosed parts of the ground-object boundary lines and classify them into three types: boundary breaks at the image border (the skeleton extraction algorithm leaves the boundary incomplete at the image border), boundary breaks inside the image (parts the boundary detection model failed to identify correctly), and boundary stubs inside the image (the skeleton extraction algorithm leaves excess edge lines on the target boundary).
Step 5.2: handle the three types of boundary breaks with three different methods.
Step 5.2.1: for breaks at the image border, with the edge-strength map of the corresponding position as guidance, select pixels of higher edge-strength value to fill the boundary break; if a gap to the image border remains, connect the break to the image border with a straight line perpendicular to the border, so that a closed target boundary line is formed together with the image border.
Step 5.2.2: for breaks inside the image, reset the threshold and, with the edge-strength map of the corresponding position as guidance, map the pixels whose edge-strength value exceeds the new threshold into the binary map; if the boundary break remains unclosed, connect the two breakpoints according to the original geometric properties. The steps for repairing breakpoints by the original geometric properties are as follows.
Step 5.2.2.1: take out the innermost boundary line connected with the two breakpoints.
Step 5.2.2.2: divide the innermost boundary line into several parts according to the variation of its slope, thereby defining the approximate geometric shape of the boundary.
Step 5.2.2.3: repair from the breakpoints into a closed figure according to that geometric shape, so that the repaired part keeps the original geometric properties.
Step 5.2.3: for isolated boundary lines inside the image, i.e. boundary lines far from all other breaks, delete them.
Step 5.3: perform skeleton extraction again, thinning the parts filled in step 5.2 into single-pixel-wide target boundary lines.
Step 5.4: traverse the image again and delete all unclosed target boundary lines, obtaining the complete, fine polygonal ground-object boundaries.
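For step 5.1, the breakpoints of a single-pixel-wide boundary can be located as skeleton pixels with exactly one 8-connected neighbour (a simple illustrative criterion; the patent does not spell out its detection rule):

```python
import numpy as np

def skeleton_endpoints(skel):
    """Breakpoints of a 1-px-wide boundary: skeleton pixels that have
    exactly one 8-connected neighbour."""
    img = np.pad(skel.astype(np.uint8), 1)
    points = []
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            if img[y, x] and img[y-1:y+2, x-1:x+2].sum() - 1 == 1:
                points.append((y - 1, x - 1))   # back to unpadded coordinates
    return points
```

Pixels found this way would then be paired and bridged by the repair rules of steps 5.2.1–5.2.2, depending on whether they lie at the image border or inside it.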
By adopting the above technical solution, the present invention has the following advantages and beneficial effects:
1. The combination of a target detection model and an edge detection model avoids the inaccurate mask boundaries of general instance segmentation algorithms and also solves the frequent breaks and thick contours of general boundary extraction algorithms, guaranteeing complete, refined target boundaries.
2. Compared with traditional manual identification of remote sensing targets, the invention is much faster: for an image containing several hundred targets the invention completes identification in under one second, whereas manual delineation usually takes several hours. It also has higher accuracy, since the end-to-end design avoids the accuracy loss caused by mistakes in manual drawing.
3. Using deep neural network algorithms from machine learning, the invention can identify multiple kinds of targets and extract their boundaries merely by changing the training samples, without redesigning the algorithm, and thus has strong reusability and robustness.
Description of the drawings
Fig. 1 is a schematic diagram of the edge-assisted fine extraction method for high-resolution remote sensing targets of the invention;
Fig. 2 is a sample image for the target detection model in an embodiment of the invention;
Fig. 3 is a sample image for the edge detection model in an embodiment of the invention;
Fig. 4 is an example of the target detection model's prediction results in an embodiment of the invention;
Fig. 5 is an example of the edge detection model's prediction results in an embodiment of the invention;
Fig. 6 is a comparative example of final boundary results in an embodiment of the invention.
Specific embodiments
To make the objectives, technical solutions and advantages of the present invention clearer, the invention is further elaborated below with reference to the accompanying drawings and embodiments. It should be appreciated that the specific embodiments described here are only used to explain the present invention, not to limit it; the described embodiments are only some, not all, of the embodiments of the invention. The components of the embodiments of the invention described and illustrated in the drawings may generally be arranged and designed in a variety of different configurations.
Therefore, the following detailed description of the embodiments of the invention provided in the drawings is not intended to limit the claimed scope of the invention, but merely represents selected embodiments. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without creative work fall within the protection scope of the invention.
Fig. 1 is a schematic diagram of the edge-assisted fine extraction method for high-resolution remote sensing targets.
Referring to Fig. 1, a preferred embodiment of the invention comprises the following steps:
Step 1: produce remote sensing target extraction samples by drawing the fine boundary of each target against the imagery and determining its type;
Step 2: train the target detection model and the edge detection model with the prepared image samples;
Step 3: input the image samples used for testing into the target detection model and the edge detection model to obtain ground-object bounding boxes and an edge-strength map;
Step 4: thin the edge-strength map of step 3 into boundaries of single-pixel width;
Step 5: with the ground-object bounding boxes of step 3 as constraints, repair the single-pixel-wide boundaries of step 4 to obtain complete, fine polygonal ground-object boundaries.
According to the above embodiment, the detailed steps of step 1 are as follows:
Step 1.1: acquire high-resolution remote sensing imagery: use optical satellite remote sensing data with visible-to-near-infrared sensors, or aerial data acquired with a general optical camera; depending on resolution requirements, multispectral imagery may be used directly or fused panchromatic imagery may be used.
Step 1.2: cut the remote sensing imagery: select typical target locations in the production area and cut the imagery uniformly into tiles of 512×512 pixels (the cut extent should have a certain randomness and broad coverage relative to the size of the production area).
Step 1.3: produce deep learning training samples: draw the training samples with ArcGIS or other GIS software; in this embodiment the boundary of each ground-object target is annotated and the corresponding .shp file is obtained; bounding-box annotations are generated as required by the target detection model, as shown in Fig. 2, and boundary annotations are generated as required by the edge detection model, as shown in Fig. 3.
Step 1.4: generally acquire 200 or more images with corresponding annotations as training samples according to task requirements; if detection accuracy is to be measured, a separate test sample set may be established.
The detailed steps of step 2 are as follows:
According to the above embodiment, step 2.1: design the deep convolutional neural networks: for training the target detection model and the edge detection model, the two networks selected in the present invention are Faster R-CNN and RCF respectively; they may be replaced or modified according to the production target.
Step 2.2: initialize the weights: initialize the RCF network weights with a VGG pre-trained model, and initialize the Faster R-CNN weights with a model pre-trained on the COCO dataset.
Step 2.3: set the training hyperparameters; the specific values after model tuning are as follows.
RCF training parameters: iterations = 8000, batch_size = 4, learning-rate update policy = step, learning-rate update steps = [3200, 4800, 6400, 8000], initial learning rate = 0.001, learning-rate update coefficient = 0.1.
Faster R-CNN training parameters: number of training stages = 3, iteration rounds per stage = [40, 120, 40], iterations per round = 1000, validation interval = 50, batch_size = 4, learning-rate update policy = step, learning-rate update step = 1000, initial learning rate = 0.001, learning-rate momentum = 0.9, weight decay = 0.0001.
Step 2.4: train the models: input the training samples to the RCF model and train with the hyperparameters of step 2.3 to obtain an edge detection model that extracts ground-object edge contours; input the training samples to Faster R-CNN and train with the hyperparameters of step 2.3 to obtain a target detection model that extracts ground-object bounding boxes.
According to the above embodiment, the detailed steps of step 3 are as follows:
Step 3.1: input the high-resolution remote sensing image to the target detection model to obtain the rectangular bounding boxes of ground-object targets, as shown in Fig. 4; input it to the edge detection model to obtain the ground-object edge-strength map, as shown in Fig. 5.
Step 3.2: parameterize the bounding boxes: convert each rectangular bounding box output by the target detection model into its bottom-left vertex coordinate and top-right vertex coordinate. Specifically:
x1 = x - w, y1 = y - h
x2 = x + w, y2 = y + h
where x, y, w, h respectively denote the center abscissa, the center ordinate, the width and the height of the rectangular bounding box.
According to the above embodiment, the detailed steps of step 4 are as follows:
Step 4.1: convert the ground-object edge-strength map from a grayscale image to a binary map according to a set threshold. Let binary_image(x, y) denote the edge estimate for the ground object at image position (x, y) (1 means edge, 0 means non-edge); it may be expressed as:
binary_image(x, y) = 1 if edge_strength(x, y) ≥ threshold, and 0 otherwise,
where threshold is a real number on the interval [0, 1], settable by the user, with initial default value 0.5, and x, y are the horizontal and vertical image coordinates.
Step 4.2: for the multi-pixel-wide thick edge lines in the binary map, repeatedly apply erosion toward the center of each edge line until every edge line is only one pixel wide, achieving skeleton extraction.
According to the above embodiment, the detailed steps of step 5 are as follows:
Step 5.1: find the unclosed parts of the ground-object boundary lines and classify them into three types: boundary breaks at the image border (the skeleton extraction algorithm leaves the boundary incomplete at the image border), boundary breaks inside the image (parts the boundary detection model failed to identify correctly), and boundary stubs inside the image (the skeleton extraction algorithm leaves excess edge lines on the target boundary).
Step 5.2: handle the three types of boundary breaks with three different methods.
Step 5.2.1: for breaks at the image border, with the edge-strength map of the corresponding position as guidance, select pixels of higher edge-strength value to fill the boundary break; if a gap to the image border remains, connect the break to the image border with a straight line perpendicular to the border, so that a closed target boundary line is formed together with the image border.
Step 5.2.2: for breaks inside the image, reset the threshold and, with the edge-strength map of the corresponding position as guidance, map the pixels whose edge-strength value exceeds the new threshold into the binary map; if the boundary break remains unclosed, connect the two breakpoints according to the original geometric properties. The steps for repairing breakpoints by the original geometric properties are as follows.
Step 5.2.2.1: take out the innermost boundary line connected with the two breakpoints.
Step 5.2.2.2: divide the innermost boundary line into several parts according to the variation of its slope, thereby defining the approximate geometric shape of the boundary.
Step 5.2.2.3: repair from the breakpoints into a closed figure according to that geometric shape, so that the repaired part keeps the original geometric properties.
Step 5.2.3: for isolated boundary lines inside the image, i.e. boundary lines far from all other breaks, delete them.
Step 5.3: perform skeleton extraction again, thinning the parts filled in step 5.2 into single-pixel-wide target boundary lines.
Step 5.4: traverse the image again and delete all unclosed target boundary lines, obtaining the complete, fine ground-object boundary shown in Fig. 6-A; Fig. 6-B shows the boundary predicted by Mask R-CNN, and comparison of the two shows that the present invention has a clear accuracy advantage.
The content described in the embodiments of this specification merely enumerates forms of realization of the inventive concept; the protection scope of the invention should not be construed as limited to the specific forms stated in the embodiments, and also covers equivalent technical means that those skilled in the art can conceive according to the inventive concept.

Claims (1)

1. A fine extraction method for high-resolution remote sensing targets based on edge assistance, comprising the following steps:
Step 1: Produce remote sensing target-extraction samples: draw fine target boundaries against the imagery and determine the target types; generate two different annotations for the same image sample, i.e., target bounding-box annotations as required by the object detection model and target boundary annotations as required by the edge detection model; specifically:
Step 1.1: Obtain high-resolution remote sensing imagery: use optical satellite remote sensing data acquired with a visible-to-near-infrared sensor, or aerial data acquired with a general optical camera; depending on the resolution requirement, multispectral imagery or fused panchromatic imagery may be used directly;
Step 1.2: Crop the remote sensing imagery: select typical target areas in the production region and crop the imagery to a uniform pixel size;
Step 1.3: Produce deep learning training samples: draw the training samples with ArcGIS or other GIS software and annotate the boundary of each ground object to obtain the corresponding .shp file; generate target bounding-box annotations as required by the object detection model and target boundary annotations as required by the edge detection model;
Step 1.4: Collect at least 200 images with corresponding annotations as training samples according to the task requirements, and set aside separate test samples for accuracy evaluation;
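The uniform cropping of Step 1.2 can be sketched as follows. The tile size and the plain nested-list image representation are illustrative assumptions; production cropping would operate on georeferenced rasters.

```python
def crop_to_tiles(image, tile_size):
    """Cut a 2-D image (list of rows) into non-overlapping square tiles
    of a uniform pixel size. Edge strips smaller than tile_size are
    discarded so that every training sample has the same dimensions
    (an illustrative policy; the claim only requires a uniform size)."""
    rows, cols = len(image), len(image[0])
    tiles = []
    for r in range(0, rows - tile_size + 1, tile_size):
        for c in range(0, cols - tile_size + 1, tile_size):
            tile = [row[c:c + tile_size] for row in image[r:r + tile_size]]
            tiles.append(tile)
    return tiles

# A 4x4 image cut into 2x2 tiles yields four tiles.
img = [[r * 4 + c for c in range(4)] for r in range(4)]
tiles = crop_to_tiles(img, 2)
```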
Step 2: Train the object detection model and the edge detection model with the prepared image samples: train a Faster R-CNN neural network to obtain the object detection model, and train an RCF neural network to obtain the edge detection model; the networks may be replaced or modified according to the production target; specifically:
Step 2.1: Design the deep convolutional neural networks: RCF and Faster R-CNN are chosen for training the edge detection model and the object detection model, respectively; both networks may be replaced or modified according to the production target;
Step 2.2: Initialize the weights: initialize the RCF network weights with a VGG pre-trained model, and initialize the Faster R-CNN weights with a model pre-trained on the COCO dataset;
Step 2.3: Set the training hyperparameters: configure the hyperparameters, with specific values set after model tuning;
RCF training parameters: number of iterations = 8000, batch_size = 4, learning-rate update policy = step, learning-rate update steps = [3200, 4800, 6400, 8000], initial learning rate = 0.001, learning-rate update coefficient = 0.1;
Faster R-CNN training parameters: number of training stages = 3, epochs per stage = [40, 120, 40], iterations per epoch = 1000, validation interval = 50, batch_size = 4, learning-rate update policy = step, learning-rate update step = 1000, initial learning rate = 0.001, learning-rate momentum = 0.9, weight decay = 0.0001;
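The hyperparameters of Step 2.3 can be collected into configuration dictionaries, shown below together with the step-decay learning-rate rule they imply. The key names are illustrative assumptions (actual names depend on the training framework); the values are those listed above.

```python
# Illustrative hyperparameter configurations mirroring Step 2.3.
RCF_CONFIG = {
    "iterations": 8000,
    "batch_size": 4,
    "lr_policy": "step",
    "lr_steps": [3200, 4800, 6400, 8000],
    "base_lr": 0.001,
    "lr_gamma": 0.1,  # learning-rate update coefficient
}

FASTER_RCNN_CONFIG = {
    "stages": 3,
    "epochs_per_stage": [40, 120, 40],
    "iterations_per_epoch": 1000,
    "validation_interval": 50,
    "batch_size": 4,
    "lr_policy": "step",
    "lr_step": 1000,
    "base_lr": 0.001,
    "momentum": 0.9,
    "weight_decay": 0.0001,
}

def lr_at(iteration, base_lr, steps, gamma):
    """Step-decay schedule: multiply the learning rate by gamma at
    each listed update step."""
    passed = sum(1 for s in steps if iteration >= s)
    return base_lr * (gamma ** passed)
```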
Step 2.4: Input the samples and train the models: input the training samples into the RCF model and train it with the hyperparameters of Step 2.3 to obtain an edge detection model capable of extracting ground-object edge contours; input the training samples into Faster R-CNN and train it with the hyperparameters of Step 2.3 to obtain an object detection model capable of extracting ground-object bounding boxes;
Step 3: Input the image samples used for testing into the object detection model and the edge detection model to obtain the ground-object bounding boxes and the edge strength map; the edge strength map indicates, for each position in the remote sensing image, the likelihood that the position lies on an object edge; a ground-object bounding box is a rectangular annotation indicating the target's position range and type; specifically:
Step 3.1: Input the high-resolution remote sensing image into the object detection model to obtain the rectangular bounding boxes of the ground objects; input the image into the edge detection model to obtain the ground-object edge strength map;
Step 3.2: Parameterize the bounding boxes: convert the rectangular bounding-box parameters output by the object detection model into bottom-left and top-right vertex coordinates; specifically:
x1 = x - w, y1 = y - h
x2 = x + w, y2 = y + h
where x, y, w, h denote the center abscissa, center ordinate, half-width, and half-height of the rectangular bounding box, respectively;
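The parameterization of Step 3.2 is a direct coordinate conversion. In the sketch below, w and h are taken as half-extents so that the expressions x - w and x + w span the full box (an interpretation; the claim's wording on whether w, h are full or half dimensions is ambiguous).

```python
def box_center_to_corners(x, y, w, h):
    """Convert a detector box given as (center, half-extents) into
    bottom-left (x1, y1) and top-right (x2, y2) vertex coordinates,
    matching the formulas in Step 3.2."""
    x1, y1 = x - w, y - h
    x2, y2 = x + w, y + h
    return (x1, y1), (x2, y2)
```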
Step 4: Thin the edge strength map of Step 3 into boundaries one pixel wide; specifically:
Step 4.1: according to the threshold value of setting, ground object target edge strength figure being converted to binary map by grayscale image;Use binary_ Image (x, y) indicates that (1 indicates to be edge edge estimate of situation of the ground object target at image (x, y), and 0 indicates not to be side Edge), it may be expressed as:
Wherein threshold is the real number on section [0,1], can be by user setting, initial default value 0.5;X, y are image Transverse and longitudinal coordinate;
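The thresholding of Step 4.1 can be sketched in pure Python; representing the edge strength map as a nested list of floats in [0, 1] is an illustrative simplification (a real implementation would use an image array library).

```python
def binarize(edge_strength, threshold=0.5):
    """Map an edge strength map to a binary map: 1 where the strength
    reaches the threshold (likely edge), 0 elsewhere. The default of
    0.5 matches the claim's initial value."""
    return [[1 if v >= threshold else 0 for v in row]
            for row in edge_strength]

strength = [[0.1, 0.9, 0.4],
            [0.6, 0.5, 0.2]]
binary = binarize(strength)
```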
Step 4.2: For edge lines several pixels wide in the binary map, repeatedly erode toward the centerline of each edge line until every edge line is only one pixel wide, thereby achieving skeleton extraction;
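The centerline erosion of Step 4.2 can be sketched with the Zhang-Suen thinning algorithm, a standard iterative-erosion method substituted here for illustration; the claim does not specify which thinning scheme is used.

```python
def zhang_suen_thin(img):
    """Thin foreground (1) regions of a binary image (list of rows)
    toward one-pixel-wide skeletons with Zhang-Suen thinning."""
    img = [row[:] for row in img]
    h, w = len(img), len(img[0])

    def neighbours(r, c):
        # P2..P9, clockwise starting from the pixel above (r-1, c).
        return [img[r-1][c], img[r-1][c+1], img[r][c+1], img[r+1][c+1],
                img[r+1][c], img[r+1][c-1], img[r][c-1], img[r-1][c-1]]

    changed = True
    while changed:
        changed = False
        for step in (0, 1):  # the two Zhang-Suen sub-iterations
            to_clear = []
            for r in range(1, h - 1):
                for c in range(1, w - 1):
                    if not img[r][c]:
                        continue
                    n = neighbours(r, c)
                    b = sum(n)  # number of foreground neighbours
                    # a: number of 0 -> 1 transitions around the pixel
                    a = sum(n[i] == 0 and n[(i + 1) % 8] == 1
                            for i in range(8))
                    if step == 0:
                        cond = n[0]*n[2]*n[4] == 0 and n[2]*n[4]*n[6] == 0
                    else:
                        cond = n[0]*n[2]*n[6] == 0 and n[0]*n[4]*n[6] == 0
                    if 2 <= b <= 6 and a == 1 and cond:
                        to_clear.append((r, c))
            for r, c in to_clear:
                img[r][c] = 0
                changed = True
    return img

# Thinning a 3-pixel-wide bar removes pixels but leaves a skeleton.
bar = [[0] * 7 for _ in range(7)]
for r in range(2, 5):
    for c in range(1, 6):
        bar[r][c] = 1
skel = zhang_suen_thin(bar)
```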
Step 5: Using the ground-object bounding boxes of Step 3 as constraints, repair the single-pixel edges of Step 4 to obtain complete, fine polygonal ground-object boundaries; the detailed steps are as follows:
Step 5.1: Find the unclosed parts of the ground-object boundary lines and classify them into three types: boundary breaks at the image border, boundary breaks inside the image, and boundary line-ends inside the image; breaks at the image border are boundaries left incomplete at the border by the skeleton extraction algorithm, breaks inside the image are parts that the edge detection model failed to identify correctly, and line-ends inside the image are excess boundary lines introduced by the skeleton extraction algorithm;
Step 5.2: Handle the three types of boundary breaks with three different methods;
Step 5.2.1: For boundary breaks at the image border, use the edge strength map at the corresponding position as a guide, select the pixels with higher edge strength values to fill the broken object boundary, and, if a gap to the image border still remains, connect the break to the border with a straight line perpendicular to the image border, so that the object boundary line and the image border together form a closed contour;
Step 5.2.2: For boundary breaks inside the image, reset the threshold and, guided by the edge strength map at the corresponding position, map the pixels whose edge strength exceeds the new threshold into the binary map; if the broken object boundary is still not closed, connect the two breakpoints according to the boundary's original geometric properties; the steps for repairing a breakpoint from the original geometry are as follows:
Step 5.2.2.1: Extract the innermost boundary line connected to the two breakpoints;
Step 5.2.2.2: Divide the innermost boundary line into several segments according to changes of slope, and determine the approximate geometric shape of the boundary;
Step 5.2.2.3: Repair the breakpoints into a closed figure according to this geometric shape, so that the patched segment keeps the original geometric properties;
Step 5.2.3: Delete isolated boundary lines inside the image, i.e., boundary lines that are far from all other broken lines;
Step 5.3: Perform skeleton extraction again to thin the segments filled in Step 5.2 into object boundary lines one pixel wide;
Step 5.4: Traverse the image again and delete all unclosed object boundary lines to obtain the complete, fine polygonal ground-object boundaries.
CN201910638370.0A 2019-07-16 2019-07-16 Semantic edge-assisted high-resolution remote sensing target fine extraction method Active CN110443822B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910638370.0A CN110443822B (en) 2019-07-16 2019-07-16 Semantic edge-assisted high-resolution remote sensing target fine extraction method


Publications (2)

Publication Number Publication Date
CN110443822A true CN110443822A (en) 2019-11-12
CN110443822B CN110443822B (en) 2021-02-02

Family

ID=68430338

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910638370.0A Active CN110443822B (en) 2019-07-16 2019-07-16 Semantic edge-assisted high-resolution remote sensing target fine extraction method

Country Status (1)

Country Link
CN (1) CN110443822B (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111818557A (en) * 2020-08-04 2020-10-23 中国联合网络通信集团有限公司 Network coverage problem identification method, device and system
CN111967526A (en) * 2020-08-20 2020-11-20 东北大学秦皇岛分校 Remote sensing image change detection method and system based on edge mapping and deep learning
CN112084872A (en) * 2020-08-10 2020-12-15 浙江工业大学 High-resolution remote sensing target accurate detection method fusing semantic segmentation and edge
CN112084871A (en) * 2020-08-10 2020-12-15 浙江工业大学 High-resolution remote sensing target boundary extraction method based on weak supervised learning
CN113128388A (en) * 2021-04-14 2021-07-16 湖南大学 Optical remote sensing image change detection method based on space-time spectrum characteristics
CN113160258A (en) * 2021-03-31 2021-07-23 武汉汉达瑞科技有限公司 Method, system, server and storage medium for extracting building vector polygon
CN114241326A (en) * 2022-02-24 2022-03-25 自然资源部第三地理信息制图院 Progressive intelligent production method and system for ground feature elements of remote sensing images
CN114485694A (en) * 2020-11-13 2022-05-13 元平台公司 System and method for automatically detecting building coverage area
CN115273154A (en) * 2022-09-26 2022-11-01 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Thermal infrared pedestrian detection method and system based on edge reconstruction and storage medium
CN115797633A (en) * 2022-12-02 2023-03-14 中国科学院空间应用工程与技术中心 Remote sensing image segmentation method, system, storage medium and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7341841B2 (en) * 2003-07-12 2008-03-11 Accelr8 Technology Corporation Rapid microbial detection and antimicrobial susceptibility testing
CN103247032A (en) * 2013-04-26 2013-08-14 中国科学院光电技术研究所 Weak extended target positioning method based on attitude compensation
US20180150958A1 (en) * 2016-11-30 2018-05-31 Brother Kogyo Kabushiki Kaisha Image processing apparatus, method and computer-readable medium for binarizing scanned data
US20180253828A1 (en) * 2017-03-03 2018-09-06 Brother Kogyo Kabushiki Kaisha Image processing apparatus that specifies edge pixel in target image by calculating edge strength
CN109712140A (en) * 2019-01-02 2019-05-03 中楹青创科技有限公司 Method and device of the training for the full link sort network of evaporating, emitting, dripping or leaking of liquid or gas detection


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Tianjun Wu et al.: "Prior Knowledge-Based Automatic Object-Oriented Hierarchical Classification for Updating Detailed Land Cover Maps", ISRS *
Jia Tao: "Research on Object Detection Technology Based on Convolutional Neural Networks", China Master's Theses Full-text Database, Information Science and Technology *




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant