CN112070037B - Road extraction method, device, medium and equipment based on remote sensing image - Google Patents
- Publication number
- CN112070037B (application number CN202010953874.4A)
- Authority
- CN
- China
- Prior art keywords
- road
- remote sensing
- image
- sensing image
- distance
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/182—Network patterns, e.g. roads or rivers
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
Abstract
Provided are a method, device, medium and equipment for extracting roads based on remote sensing images. The method comprises the following steps: delineating roads in the remote sensing image and converting them into a binary image; segmenting the remote sensing image and the binary image to produce remote sensing image samples and binary image samples; performing an edge signed-distance transformation on the binary image samples to obtain edge signed-distance image samples in one-to-one correspondence with the binary image samples; training a road extraction model with the corresponding remote sensing image samples, binary image samples and edge signed-distance image samples; and extracting roads from the remote sensing image with the trained road extraction model. When roads are extracted, road continuity is markedly improved and most occlusion problems can be overcome.
Description
Technical Field
The invention belongs to the field of information technology, and in particular relates to a road extraction method, device, medium and equipment based on remote sensing images.
Background
Road information is one of the most important geographic information elements and plays an important role in applications such as automatic driving and disaster emergency response. With the development of remote sensing technology and the maturation of deep learning, CNN-based road extraction algorithms for remote sensing imagery of high spatial and temporal resolution have appeared in large numbers and make large-scale road monitoring possible. Remote sensing imagery has therefore quickly become an important data source for automatic road network extraction, and research on algorithms that automatically extract roads from remote sensing images has become a focus.
However, current research concentrates mainly on road extraction in urban areas. Roads in remote areas are often narrow and of variable width, and suffer from severe occlusion by tree canopies and shadows, so road extraction algorithms designed for urban areas often perform poorly there; in particular, the problem of discontinuous, broken roads is severe.
Disclosure of Invention
The present invention is directed to solving the problems described above. Specifically, the invention provides a road extraction method, a road extraction device, a road extraction medium and road extraction equipment based on remote sensing images.
According to a first aspect of the present disclosure, there is provided a method for extracting a road based on a remote sensing image, including:
delineating roads in the remote sensing image and converting them into a binary image;
segmenting the remote sensing image and the binary image to produce remote sensing image samples and binary image samples, where the remote sensing image samples and the binary image samples correspond to each other one to one;
performing an edge signed-distance transformation on the binary image samples to obtain edge signed-distance image samples in one-to-one correspondence with the binary image samples;
training a road extraction model with the corresponding remote sensing image samples, binary image samples and edge signed-distance image samples;
and extracting roads from the remote sensing image with the trained road extraction model.
Performing the edge signed-distance transformation on the binary image samples comprises:
determining, for each pixel in the binary image sample, the distance D_i to the nearest point on a road edge, and determining the edge signed distance of every pixel based on D_i;
where D_i = ED(x_i, x_j), x_i is the pixel position, x_j is the position of the pixel on the road edge closest to x_i, and ED is the Euclidean distance.
Determining the edge signed distance of every pixel based on D_i comprises:
computing the edge signed distance BSD_i = Htanh(α·ln(1 + D_i)) for x_i ∈ F and BSD_i = -Htanh(α·ln(1 + D_i)) for x_i ∈ B, where Htanh is the HardTanh function, α is a scaling coefficient, F is the foreground region and B is the background region.
The road extraction model comprises a ResNet backbone, a distance regression task branch and a classification task branch.
Training the road extraction model with the corresponding remote sensing image samples, binary image samples and edge signed-distance image samples comprises:
inputting a remote sensing image sample into the ResNet backbone to obtain a remote sensing image feature map;
inputting the remote sensing image feature map into the distance regression task branch, and computing a first loss with a first loss function between the output of the distance regression task branch and the edge signed-distance image sample;
and inputting the shallow features and deep features obtained from the distance regression task branch into the classification task branch, and computing a second loss with a second loss function between the output of the classification task branch and the binary image sample.
The road extraction method based on remote sensing images further comprises:
determining a final loss based on the first loss and the second loss, and training the road extraction model with the final loss.
According to another aspect of the present disclosure, there is provided a road extraction device based on remote sensing images, comprising:
a binary image production module, configured to delineate roads in the remote sensing image and convert them into a binary image;
a sample production module, configured to segment the remote sensing image and the binary image to produce remote sensing image samples and binary image samples, the remote sensing image samples corresponding one to one with the binary image samples;
an edge signed-distance transformation module, configured to perform the edge signed-distance transformation on the binary image samples to obtain edge signed-distance image samples in one-to-one correspondence with the binary image samples;
a model training module, configured to train a road extraction model with the corresponding remote sensing image samples, binary image samples and edge signed-distance image samples;
and a road extraction module, configured to extract roads from the remote sensing image with the trained road extraction model.
According to another aspect of the present disclosure, there is provided a computer-readable storage medium on which a computer program is stored, where the computer program, when executed, implements the steps of the remote sensing image-based road extraction method.
According to another aspect of the present disclosure, there is provided a computer device comprising a processor, a memory and a computer program stored on the memory, where the steps of the remote sensing image-based road extraction method are implemented when the computer program is executed by the processor.
In this disclosure, an edge signed-distance transformation is applied to the binary image of the remote sensing image to obtain edge signed-distance image samples, and a multi-branch road extraction model is trained with the remote sensing image samples, binary image samples and edge signed-distance image samples. The real-valued distance information in the edge signed-distance image helps the model recognize road boundaries better and classify road pixels by learning distance-to-boundary features rather than spectral features alone, thereby correcting malformed or broken road shapes.
Other features and advantages of the invention will become apparent from the following description of exemplary embodiments, which is to be read in conjunction with the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the invention and together with the description, serve to explain the principles of the invention. In the drawings, like reference numerals are used to indicate like elements. The drawings in the following description are illustrative of some, but not all embodiments of the invention. For a person skilled in the art, other figures can be derived from these figures without inventive effort.
Fig. 1 is a flowchart illustrating a method for extracting a road based on a remote sensing image according to an exemplary embodiment.
Fig. 2 is a schematic diagram of a remote sensing image according to an exemplary embodiment.
FIG. 3 is a diagram illustrating a binary image according to an exemplary embodiment.
FIG. 4 is a schematic diagram illustrating an edge signed-distance image in accordance with an exemplary embodiment.
FIG. 5 is a diagram illustrating a road extraction model, according to an exemplary embodiment.
FIG. 6 is a diagram illustrating predicted results of a road extraction model according to an exemplary embodiment.
Fig. 7 is a block diagram illustrating a road extraction device based on a remote sensing image according to an exemplary embodiment.
Fig. 8 is a block diagram illustrating a computer device for remote sensing image-based road extraction according to an exemplary embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention. It should be noted that the embodiments and features of the embodiments in the present application may be arbitrarily combined with each other without conflict.
Fig. 1 is a flowchart illustrating a method for extracting a road based on a remote sensing image according to an exemplary embodiment. Referring to fig. 1, the method for extracting a road based on a remote sensing image includes:
step S11: delineating roads in the remote sensing image and converting them into a binary image;
step S12: segmenting the remote sensing image and the binary image to produce remote sensing image samples and binary image samples, where the remote sensing image samples and the binary image samples correspond to each other one to one;
step S13: performing an edge signed-distance transformation on the binary image samples to obtain edge signed-distance image samples in one-to-one correspondence with the binary image samples;
step S14: training a road extraction model with the corresponding remote sensing image samples, binary image samples and edge signed-distance image samples;
step S15: extracting roads from the remote sensing image with the trained road extraction model.
In step S11, the roads in the remote sensing image are delineated; a labeling tool such as ArcGIS or Labelme may be used. Taking ArcGIS as an example, a new feature layer is created over the remote sensing image and assigned the same geographic coordinate system as the original image; the roads in the image are then traced along edge pixels at a suitable zoom level by manual visual interpretation with the editing tools, and a vector file in shapefile format is generated. The vector map is then converted into a raster to obtain a binary raster image, i.e. the binary image, in which 0 is background and 1 is road.
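As an illustration of this rasterization step, the following is a minimal Python sketch that burns road vector features into a binary mask aligned with the remote sensing image. The file names are hypothetical and the use of geopandas/rasterio is an assumption; the patent only requires that the vector file be converted into a binary raster.

```python
import geopandas as gpd
import rasterio
from rasterio import features

with rasterio.open("remote_sensing_image.tif") as src:        # hypothetical path
    roads = gpd.read_file("roads.shp").to_crs(src.crs)         # same CRS as the image
    mask = features.rasterize(
        ((geom, 1) for geom in roads.geometry),                 # road pixels -> 1
        out_shape=(src.height, src.width),
        transform=src.transform,
        fill=0,                                                 # background -> 0
        dtype="uint8",
    )
    meta = src.meta.copy()
    meta.update(count=1, dtype="uint8")

with rasterio.open("road_binary_image.tif", "w", **meta) as dst:
    dst.write(mask, 1)
```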
In step S12, a sample segmentation rule is set, specifying the size of a single sample and the overlap ratio between adjacent samples. The remote sensing image and the binary image are segmented according to this rule to obtain a number of remote sensing image samples and binary image samples, which serve as training samples for the subsequent road extraction model.
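A minimal sketch of such a segmentation rule is shown below; the tile size of 512 pixels and the 25% overlap are illustrative values only, since the patent does not fix them.

```python
def tile_pairs(image, mask, tile=512, overlap=0.25):
    """Cut matching (remote sensing image, binary image) tiles with a fixed overlap ratio.

    image: array of shape (bands, H, W); mask: array of shape (H, W).
    """
    step = int(tile * (1 - overlap))   # stride between adjacent tiles
    h, w = mask.shape
    samples = []
    for y in range(0, h - tile + 1, step):
        for x in range(0, w - tile + 1, step):
            samples.append((image[:, y:y + tile, x:x + tile],
                            mask[y:y + tile, x:x + tile]))
    return samples
```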
In step S13, the edge signed-distance transformation is performed on the binary image samples to obtain edge signed-distance image samples in one-to-one correspondence with the binary image samples. When the model is trained with binary images alone, the deep classification model only decides whether each pixel belongs to the road class or the non-road class and may ignore the strong spatial-distribution correlation among road pixels; as a result, occluded road pixels are often classified as non-road, producing broken, discontinuous roads in the extraction result.
The edge signed distance is therefore introduced here. In one embodiment, performing the edge signed-distance transformation on the binary image samples comprises:
determining, for each pixel in the binary image sample, the distance D_i to the nearest point on a road edge, and determining the edge signed distance of every pixel based on D_i.
Inspired by the signed distance transform (SDT), for a given pixel position x_i the signed distance value SDT_i is SDT_i = ED(x_i, B) if x_i ∈ F, and SDT_i = -ED(x_i, F) if x_i ∈ B,
where ED is the Euclidean distance and F and B are the foreground and background regions, respectively. The signed distance value SDT_i represents the distance from a pixel to its nearest pixel of the other class.
Accordingly, the distance from a given pixel to the nearest pixel on the road edge is defined as D_i = ED(x_i, x_j), where x_i is the pixel position, x_j is the position of the pixel on the road edge closest to x_i, and ED is the Euclidean distance.
The edge signed distance BSD_i is then BSD_i = Htanh(α·ln(1 + D_i)) for x_i ∈ F and BSD_i = -Htanh(α·ln(1 + D_i)) for x_i ∈ B, where Htanh is the HardTanh function, α is a scaling coefficient, F is the foreground region and B is the background region. Since the computation is carried out on the binary image, the foreground region is road and the background region is non-road.
In the above formula, the distance D_i is log-mapped and scaled so that the edge signed distance becomes less sensitive to distance changes as a pixel moves farther from the road edge; attention is thus concentrated on the road edge and its neighborhood, while attention to background regions far from the road edge is weakened. The distance values are then normalized to [-1, 1] with the HardTanh function and the scaling coefficient α, which speeds up model training and further suppresses background pixels far from the road, filtering out some noise. Assigning road pixels a positive edge signed distance, non-road pixels a negative one, and road-boundary pixels a value of zero effectively helps the model distinguish roads from non-roads. Fig. 2 is a schematic diagram of a remote sensing image according to an exemplary embodiment, Fig. 3 shows the corresponding binary image, and Fig. 4 the corresponding edge signed-distance image; the three correspond to one another one to one.
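The edge signed-distance transformation above can be sketched in Python as follows. This is a minimal illustration rather than the patented implementation: the distance to the road edge is approximated with a per-class Euclidean distance transform, and α = 0.3 is an arbitrary illustrative value.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def edge_signed_distance(mask, alpha=0.3):
    """mask: binary array, 1 = road (foreground), 0 = background."""
    road = mask.astype(bool)
    # Distance of each pixel to the nearest pixel of the other class,
    # used here as an approximation of the distance D_i to the road edge.
    d = np.where(road, distance_transform_edt(road), distance_transform_edt(~road))
    # Htanh(alpha * ln(1 + D_i)): log mapping, scaling, then clamping to [-1, 1].
    bsd = np.clip(alpha * np.log1p(d), -1.0, 1.0)
    bsd[~road] *= -1.0          # negative sign for background (non-road) pixels
    return bsd
```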
In step S14, the road extraction model is trained with the corresponding remote sensing image samples, binary image samples and edge signed-distance image samples: the remote sensing images, binary images and edge signed-distance images produced in steps S11 to S13 serve as the training samples for the road extraction model.
In one embodiment, the road extraction model includes a ResNet backbone, a distance regression task branch and a classification task branch.
FIG. 5 is a diagram illustrating a road extraction model according to an exemplary embodiment. Referring to fig. 5, it shows a ResNet backbone 51, a distance regression task branch 52 and a classification task branch 53.
Training the road extraction model with the corresponding remote sensing image samples, binary image samples and edge signed-distance image samples comprises:
inputting the remote sensing image sample into the ResNet backbone to obtain a remote sensing image feature map;
inputting the remote sensing image feature map into the distance regression task branch, and computing a first loss with a first loss function between the output of the distance regression task branch and the edge signed-distance image sample;
inputting the shallow features and deep features obtained from the distance regression task branch into the classification task branch, and computing a second loss with a second loss function between the output of the classification task branch and the binary image sample;
and determining a final loss based on the first loss and the second loss, and training the road extraction model with the final loss.
The road extraction model will be described below with reference to fig. 5.
In this embodiment, ResNet is chosen as the backbone network. Since forest-boundary roads are generally narrow and occupy only a few pixels, the model needs to retain the high-frequency details of the input image; the stride of the last two residual blocks of the ResNet backbone is therefore changed to 1, which yields a feature map at 1/8 of the input image size. In addition, dilated convolution is used to enlarge the receptive field of the convolution layers, enriching the spatial context information. The remote sensing image sample (5c in the figure) is fed into the ResNet backbone, and the resulting remote sensing image feature map is passed to the distance regression task branch (52).
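A minimal sketch of this backbone modification, using torchvision's ResNet, is shown below. The choice of resnet50 is an assumption (the patent only specifies a ResNet backbone); replace_stride_with_dilation sets the stride of the last two residual stages to 1 and substitutes dilated convolution, so the output feature map is 1/8 of the input size.

```python
import torch
from torchvision.models import resnet50

# Dilate the last two residual stages instead of striding them.
backbone = resnet50(weights=None, replace_stride_with_dilation=[False, True, True])
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-2])  # drop avgpool/fc

x = torch.randn(1, 3, 512, 512)       # a remote sensing image tile (assumed size)
feat = feature_extractor(x)           # shape (1, 2048, 64, 64), i.e. 1/8 of 512
```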
In the distance regression task branch, a 3×3 convolution layer (Conv) first reduces the dimensionality of the remote sensing image feature map produced by the backbone network. A batch normalization layer (BN) then normalizes the distribution of each input batch towards a standard normal distribution to reduce internal covariate shift and accelerate learning, and Tanh is used as the activation function, producing a shallow feature map. The shallow feature map then passes through a 1×1 convolution layer (Conv) and a Tanh activation to produce a deep feature map, giving a low-resolution distance-value prediction. The Tanh function limits the distance values to the range [-1, 1], consistent with the range of the label values in the samples obtained by the edge signed-distance transformation (5a in the figure). Upsampling then yields an edge signed-distance prediction (Outputs-A) of the same size as the remote sensing image sample (5c in the figure). In the distance regression branch, the first loss (Loss-1 in the figure) is computed with the L1 loss function, using the edge signed-distance image samples as ground truth.
In the classification task branch (53 in the figure), the shallow features and deep features obtained from the distance regression task branch are concatenated directly and fed into the classification task branch; sharing the shallow and deep feature maps generated in the distance regression task branch between the distance regression task and the binary classification task fuses the distance features with the semantic features. A 3×3 convolution layer, a batch normalization layer and a ReLU activation layer then produce a semantic feature map, and a 1×1 convolution layer yields a classification probability map. The classification probability map is upsampled to obtain a classification prediction (Outputs-B) of the same size as the remote sensing image sample. In the classification task branch, the binary image sample (5b in the figure) is used as ground truth, and the second loss (Loss-2 in the figure) is computed with a cross-entropy loss function.
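The two branches can be sketched as the following PyTorch module. This is a minimal illustration under stated assumptions: the channel widths, the single-channel reading of the "deep feature map", and the bilinear upsampling are choices made for the sketch, not details taken from the patent.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RoadExtractionHead(nn.Module):
    def __init__(self, in_ch=2048, mid_ch=256):
        super().__init__()
        # Distance regression branch: 3x3 Conv + BN + Tanh -> shallow features,
        # then 1x1 Conv + Tanh -> deep features / low-resolution BSD prediction.
        self.shallow = nn.Sequential(nn.Conv2d(in_ch, mid_ch, 3, padding=1),
                                     nn.BatchNorm2d(mid_ch), nn.Tanh())
        self.deep = nn.Sequential(nn.Conv2d(mid_ch, 1, 1), nn.Tanh())
        # Classification branch: concat(shallow, deep) -> 3x3 Conv + BN + ReLU
        # -> 1x1 Conv -> road / non-road logits.
        self.cls = nn.Sequential(nn.Conv2d(mid_ch + 1, mid_ch, 3, padding=1),
                                 nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
                                 nn.Conv2d(mid_ch, 1, 1))

    def forward(self, feat, out_size):
        shallow = self.shallow(feat)
        bsd = self.deep(shallow)                                  # values in [-1, 1]
        logits = self.cls(torch.cat([shallow, bsd], dim=1))       # distance/semantic fusion
        # Upsample both predictions (Outputs-A, Outputs-B) to the input image size.
        bsd = F.interpolate(bsd, size=out_size, mode="bilinear", align_corners=False)
        logits = F.interpolate(logits, size=out_size, mode="bilinear", align_corners=False)
        return bsd, logits
```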
As described above, the model adopts a multi-task training strategy. The regression task branch uses the L1 loss function, and the first loss is L_bsd = (1/N)·Σ_i |BSD_i^gt - BSD_i|, where N is the total number of pixels, BSD_i^gt is the ground-truth value of pixel i, and BSD_i is the predicted edge signed-distance value. The classification task branch uses the cross-entropy loss function, and the second loss is L_cls = -(1/N)·Σ_i [y_i·log(p_i) + (1 - y_i)·log(1 - p_i)], where N is the total number of pixels, y_i is the ground-truth class of pixel i, and p_i is the predicted class probability.
The final multi-task loss is L_final = L_bsd + λ·L_cls, where λ is a hyperparameter that balances the magnitude difference between the two losses and can also be used to control the relative weighting of the classification and regression tasks.
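A minimal sketch of this multi-task loss follows, assuming the classification branch outputs road/non-road logits of the same shape as the binary label; λ = 1.0 is an illustrative default, not a value from the patent, and BCEWithLogitsLoss is used as the binary form of the cross-entropy loss.

```python
import torch.nn as nn

l1 = nn.L1Loss()              # first loss: L1 between predicted and ground-truth BSD
bce = nn.BCEWithLogitsLoss()  # second loss: binary cross-entropy on road / non-road

def final_loss(bsd_pred, bsd_gt, cls_logits, road_mask, lam=1.0):
    loss_bsd = l1(bsd_pred, bsd_gt)
    loss_cls = bce(cls_logits, road_mask.float())
    return loss_bsd + lam * loss_cls      # L_final = L_bsd + lambda * L_cls
```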
The road extraction model is trained with the final loss until convergence; after the model is validated, it is used to extract roads.
Viewed as a whole, the distance regression task branch can be regarded as providing intermediate supervision for the classification task branch.
As described above, an edge signed-distance image sample is obtained by applying the edge signed-distance transformation to the binary image of the remote sensing image, and a multi-branch road extraction model is trained with the remote sensing image samples, binary image samples and edge signed-distance image samples. The real-valued distance information in the edge signed-distance image helps the model recognize road boundaries better and classify road pixels by learning distance-to-boundary features rather than spectral features alone, thereby correcting malformed or broken road shapes.
FIG. 6 is a diagram illustrating prediction results of the road extraction model according to an exemplary embodiment. Reference numeral 61 denotes a remote sensing image, 62 the road ground truth, 63 the prediction using binary classification only, and 64 the prediction using the edge signed distance. As Fig. 6 shows, the road extraction method based on remote sensing images provided herein introduces a distance regression task into the road extraction model as supervision; when roads are extracted, road continuity is markedly improved and most occlusion problems can be overcome.
Fig. 7 is a block diagram illustrating a road extraction device based on remote sensing images according to an exemplary embodiment. Referring to fig. 7, the remote sensing image-based road extraction device includes: a binary image production module 701, a sample production module 702, an edge signed-distance transformation module 703, a model training module 704 and a road extraction module 705.
The binary image production module 701 is configured to delineate roads in the remote sensing image and convert them into a binary image.
The sample production module 702 is configured to segment the remote sensing image and the binary image to produce remote sensing image samples and binary image samples, where the remote sensing image samples and the binary image samples correspond to each other one to one.
The edge signed-distance transformation module 703 is configured to perform the edge signed-distance transformation on the binary image samples to obtain edge signed-distance image samples in one-to-one correspondence with the binary image samples.
The model training module 704 is configured to train a road extraction model with the corresponding remote sensing image samples, binary image samples and edge signed-distance image samples.
The road extraction module 705 is configured to extract roads from the remote sensing image with the trained road extraction model.
Fig. 8 is a block diagram illustrating a computer device 800 for remote sensing image-based road extraction according to an exemplary embodiment. For example, the computer device 800 may be provided as a server. Referring to fig. 8, the computer device 800 includes a processor 801; the number of processors may be one or more as needed. The computer device 800 also includes a memory 802 for storing instructions, such as application programs, that are executable by the processor 801; the number of memories may likewise be one or more, and each may store one or more application programs. The processor 801 is configured to execute the instructions to perform the remote sensing image-based road extraction method, comprising:
delineating roads in the remote sensing image and converting them into a binary image;
segmenting the remote sensing image and the binary image to produce remote sensing image samples and binary image samples, where the remote sensing image samples correspond one to one with the binary image samples;
performing an edge signed-distance transformation on the binary image samples to obtain edge signed-distance image samples in one-to-one correspondence with the binary image samples;
training a road extraction model with the corresponding remote sensing image samples, binary image samples and edge signed-distance image samples;
and extracting roads from the remote sensing image with the trained road extraction model.
As will be appreciated by one skilled in the art, the embodiments herein may be provided as a method, apparatus (device), or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media having computer-usable program code embodied in the medium. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data, including, but not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the computer, and the like. In addition, communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media as known to those skilled in the art.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (devices) and computer program products according to embodiments herein. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that an article or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such article or apparatus. Without further limitation, an element defined by the phrase "comprising … …" does not exclude the presence of additional like elements in the article or device comprising the element.
While the preferred embodiments herein have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including the preferred embodiment and all changes and modifications that fall within the scope of this disclosure.
It will be apparent to those skilled in the art that various changes and modifications may be made herein without departing from the spirit and scope thereof. Thus, it is intended that such changes and modifications be included herein, provided they come within the scope of the appended claims and their equivalents.
Claims (7)
1. A road extraction method based on remote sensing images, characterized by comprising:
delineating roads in a remote sensing image and converting them into a binary image;
segmenting the remote sensing image and the binary image to produce remote sensing image samples and binary image samples, the remote sensing image samples corresponding one to one with the binary image samples;
performing an edge signed-distance transformation on the binary image samples to obtain edge signed-distance image samples in one-to-one correspondence with the binary image samples;
training a road extraction model with the corresponding remote sensing image samples, binary image samples and edge signed-distance image samples;
and extracting roads from the remote sensing image with the trained road extraction model;
wherein performing the edge signed-distance transformation on the binary image samples comprises:
determining, for each pixel in the binary image sample, the distance D_i to the nearest point on a road edge, and determining the edge signed distance of every pixel based on D_i,
where D_i = ED(x_i, x_j), x_i is the pixel position, x_j is the position of the pixel on the road edge closest to x_i, and ED is the Euclidean distance;
and determining the edge signed distance of every pixel based on D_i comprises: computing the edge signed distance BSD_i = Htanh(α·ln(1 + D_i)) for x_i ∈ F and BSD_i = -Htanh(α·ln(1 + D_i)) for x_i ∈ B, where Htanh is the HardTanh function, α is a scaling coefficient, F is the foreground region and B is the background region.
2. The road extraction method based on remote sensing images according to claim 1, characterized in that the road extraction model comprises a ResNet backbone, a distance regression task branch and a classification task branch.
3. The road extraction method based on remote sensing images according to claim 2, characterized in that training the road extraction model with the corresponding remote sensing image samples, binary image samples and edge signed-distance image samples comprises:
inputting the remote sensing image sample into the ResNet backbone to obtain a remote sensing image feature map;
inputting the remote sensing image feature map into the distance regression task branch, and computing a first loss with a first loss function between the output of the distance regression task branch and the edge signed-distance image sample;
and inputting the shallow features and deep features obtained from the distance regression task branch into the classification task branch, and computing a second loss with a second loss function between the output of the classification task branch and the binary image sample.
4. The road extraction method based on remote sensing images according to claim 3, characterized by further comprising:
determining a final loss based on the first loss and the second loss, and training the road extraction model with the final loss.
5. A road extraction device based on remote sensing images, characterized by comprising:
a binary image production module, configured to delineate roads in the remote sensing image and convert them into a binary image;
a sample production module, configured to segment the remote sensing image and the binary image to produce remote sensing image samples and binary image samples, the remote sensing image samples corresponding one to one with the binary image samples;
an edge signed-distance transformation module, configured to perform an edge signed-distance transformation on the binary image samples to obtain edge signed-distance image samples in one-to-one correspondence with the binary image samples;
a model training module, configured to train a road extraction model with the corresponding remote sensing image samples, binary image samples and edge signed-distance image samples;
and a road extraction module, configured to extract roads from the remote sensing image with the trained road extraction model;
wherein performing the edge signed-distance transformation on the binary image samples comprises:
determining, for each pixel in the binary image sample, the distance D_i to the nearest point on a road edge, and determining the edge signed distance of every pixel based on D_i,
where D_i = ED(x_i, x_j), x_i is the pixel position, x_j is the position of the pixel on the road edge closest to x_i, and ED is the Euclidean distance;
and determining the edge signed distance of every pixel based on D_i comprises: computing the edge signed distance BSD_i = Htanh(α·ln(1 + D_i)) for x_i ∈ F and BSD_i = -Htanh(α·ln(1 + D_i)) for x_i ∈ B, where Htanh is the HardTanh function, α is a scaling coefficient, F is the foreground region and B is the background region.
6. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed, implements the steps of the method according to any one of claims 1-4.
7. A computer device comprising a processor, a memory and a computer program stored on the memory, characterized in that the processor, when executing the computer program, implements the steps of the method according to any one of claims 1-4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010953874.4A CN112070037B (en) | 2020-09-11 | 2020-09-11 | Road extraction method, device, medium and equipment based on remote sensing image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010953874.4A CN112070037B (en) | 2020-09-11 | 2020-09-11 | Road extraction method, device, medium and equipment based on remote sensing image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112070037A CN112070037A (en) | 2020-12-11 |
CN112070037B true CN112070037B (en) | 2022-09-30 |
Family
ID=73697025
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010953874.4A Active CN112070037B (en) | 2020-09-11 | 2020-09-11 | Road extraction method, device, medium and equipment based on remote sensing image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112070037B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113139975B (en) * | 2021-04-19 | 2023-11-17 | 国交空间信息技术(北京)有限公司 | Road feature-based pavement segmentation method and device |
CN113361371B (en) * | 2021-06-02 | 2023-09-22 | 北京百度网讯科技有限公司 | Road extraction method, device, equipment and storage medium |
CN115641512B (en) * | 2022-12-26 | 2023-04-07 | 成都国星宇航科技股份有限公司 | Satellite remote sensing image road identification method, device, equipment and medium |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101126812A (en) * | 2007-09-27 | 2008-02-20 | 武汉大学 | High resolution ratio remote-sensing image division and classification and variety detection integration method |
CN109800736A (en) * | 2019-02-01 | 2019-05-24 | 东北大学 | A kind of method for extracting roads based on remote sensing image and deep learning |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101126812A (en) * | 2007-09-27 | 2008-02-20 | 武汉大学 | High resolution ratio remote-sensing image division and classification and variety detection integration method |
CN109800736A (en) * | 2019-02-01 | 2019-05-24 | 东北大学 | A kind of method for extracting roads based on remote sensing image and deep learning |
Non-Patent Citations (3)
Title |
---|
Distance transform regression for spatially-aware deep semantic segmentation; Nicolas Audebert et al.; Computer Vision and Image Understanding; 2019-09-06 *
Improved Anchor-Free Instance Segmentation for Building Extraction from High-Resolution Remote Sensing Images; Tong Wu et al.; Remote Sensing; 2020-09-08 *
CV image segmentation model combined with convolutional restricted Boltzmann machines; Li Xiaohui et al.; Laser & Optoelectronics Progress; 2020-08-28 *
Also Published As
Publication number | Publication date |
---|---|
CN112070037A (en) | 2020-12-11 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |