CN114882473A - Road extraction method and system based on full convolution neural network

Road extraction method and system based on full convolution neural network

Info

Publication number
CN114882473A
CN114882473A
Authority
CN
China
Prior art keywords
road
convolution
layer
deconvolution
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210605408.6A
Other languages
Chinese (zh)
Inventor
高飞
孔令哲
王俊
陈鹏辉
罗喜伶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Innovation Research Institute of Beihang University
Original Assignee
Hangzhou Innovation Research Institute of Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Innovation Research Institute of Beihang University filed Critical Hangzhou Innovation Research Institute of Beihang University
Priority to CN202210605408.6A priority Critical patent/CN114882473A/en
Publication of CN114882473A publication Critical patent/CN114882473A/en
Pending legal-status Critical Current


Classifications

    • G06V 20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V 10/774: Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V 10/80: Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V 10/82: Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a road extraction method and system based on a full convolution neural network. The method improves the network model on the basis of the FCN to obtain a deep convolutional neural network; trains the neural network on a training set with minimization of the loss function as the objective to obtain a road extraction network model; and inputs the SAR image for testing into the road extraction model to obtain the road network in the SAR image. The invention thereby realizes SAR image road extraction.

Description

Road extraction method and system based on full convolution neural network
Technical Field
The invention relates to the field of road extraction, in particular to a road extraction method and system based on a full convolution neural network.
Background
Synthetic Aperture Radar (SAR) is a modern high-resolution microwave imaging radar. By actively transmitting electromagnetic waves toward a target and imaging from the received and analyzed echo signals, it achieves all-weather, day-and-night operation. SAR covers a wide range of operating bands, uses multiple polarization modes, has a certain ability to penetrate occlusions such as vegetation, cloud and fog, and offers strong advantages in observing targets on the ground and sea surface. Synthetic aperture radar is therefore widely used in both military and civilian applications.
SAR systems can acquire large numbers of high-resolution images, in which roads are an important target. Roads are a principal component of the modern traffic system and a primary object of recognition and recording in maps and information systems. Road detection has important geographic, economic and military significance and is widely applied to traffic management, road monitoring, city planning, map updating and related tasks. Accurately extracting roads from SAR images is therefore of interest to many researchers.
In the SAR image road extraction task, semantic segmentation networks are widely used because they automatically extract target features at every level with a simple processing pipeline. The fully convolutional network (FCN) is the most common algorithm among semantic segmentation networks; its structure is simple and practical, and it usually serves as the design basis for higher-performance semantic segmentation networks.
Disclosure of Invention
The invention aims to provide a road extraction method and a road extraction system based on a full convolution neural network, so as to solve the problem of road extraction from SAR images.
The invention provides a road extraction method based on a full convolution neural network, which comprises the following steps,
s1, establishing a deep convolutional neural network;
s2, training the deep convolutional neural network on the SAR image training set with minimization of the loss function as the objective to obtain a road extraction network model;
and S3, inputting the SAR image for testing into the road extraction model to obtain the road network in the SAR image.
The invention also provides a road extraction system based on a full convolution neural network, comprising:
a building module, used for establishing the deep convolutional neural network;
a training module, used for training the deep convolutional neural network on an SAR image training set with minimization of the loss function as the objective, to obtain a road extraction network model;
a test module, used for inputting the SAR image for testing into the road extraction model to obtain the road network in the SAR image.
By adopting the embodiment of the invention, the SAR image road extraction can be realized.
The foregoing is only an overview of the technical solution of the present invention. In order that the technical means of the present invention may be understood more clearly, and that the above and other objects, features and advantages of the present invention may become more readily apparent, embodiments of the invention are described below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flow chart of a road extraction method for a full convolution neural network according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a full convolution neural network structure of a road extraction method of the full convolution neural network according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a multi-scale feature extraction module of the road extraction method for the full convolution neural network according to the embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a dilated convolution module of the road extraction method of the full convolution neural network according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a deconvolution layer structure of a road extraction method for a full convolution neural network according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a road connectivity enhancement network structure of a road extraction method for a full convolution neural network according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a road extraction result of the road extraction method of the full convolution neural network according to the embodiment of the present invention;
fig. 8 is a schematic diagram of a road extraction system of a full convolution neural network according to an embodiment of the present invention.
Description of reference numerals:
810: establishing a module; 820: a training module; 830: and a testing module.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the following embodiments, and it should be apparent that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", and the like, indicate orientations and positional relationships based on those shown in the drawings, and are used only for convenience of description and simplicity of description, and do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be considered as limiting the present invention.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, features defined as "first" or "second" may explicitly or implicitly include one or more of the described features. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise. Furthermore, the terms "mounted," "connected," and "connected" are to be construed broadly and may, for example, be fixedly connected, detachably connected, or integrally connected; may be mechanically or electrically connected; and may be connected directly or indirectly through intervening media, or may be an internal communication between two elements. The specific meanings of the above terms in the present invention can be understood in specific cases by those skilled in the art.
Method embodiment
According to an embodiment of the present invention, a road extraction method based on a full convolution neural network is provided. Fig. 1 is a flowchart of the road extraction method of the full convolution neural network according to the embodiment of the present invention; as shown in Fig. 1, the method specifically includes:
(1) improving the network model on the basis of the FCN to obtain a deep convolutional neural network;
(2) training the neural network on the training set with minimization of the loss function as the objective to obtain a road extraction network model;
(3) inputting the SAR image for testing into the road extraction model to obtain the road network in the SAR image.
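As a concrete illustration of step (2), the following is a minimal PyTorch sketch of the training loop. The schedule (saving the model whenever the loss decreases, multiplying the learning rate by 0.2 after 4 stagnant epochs, and stopping after 7) follows the description in claim 8 below; the Adam optimizer, the initial learning rate and the binary cross-entropy loss are assumptions not fixed by the patent text.

```python
# Training-loop sketch under the claim-8 schedule; optimizer/lr/loss are assumed.
import torch

def train(model, loader, device="cuda", lr=1e-3, max_epochs=200):
    opt = torch.optim.Adam(model.parameters(), lr=lr)   # assumed optimizer
    loss_fn = torch.nn.BCELoss()                        # assumed loss
    best_loss, stale = float("inf"), 0
    model.to(device)
    for epoch in range(max_epochs):
        epoch_loss = 0.0
        for image, truth in loader:
            image, truth = image.to(device), truth.to(device)
            prob = model(image)                 # road prediction probability map
            loss = loss_fn(prob, truth)
            opt.zero_grad()
            loss.backward()                     # back propagation
            opt.step()
            epoch_loss += loss.item()
        if epoch_loss < best_loss:              # loss decreased: keep this model
            best_loss, stale = epoch_loss, 0
            torch.save(model.state_dict(), "road_extraction.pt")
        else:
            stale += 1
            if stale == 4:                      # 4 stagnant epochs: lr <- 0.2 * lr
                for group in opt.param_groups:
                    group["lr"] *= 0.2
            if stale >= 7:                      # 7 stagnant epochs: converged, stop
                break
```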
Fig. 2 is a schematic structural diagram of the full convolution neural network of the road extraction method according to an embodiment of the present invention. As shown in Fig. 2, the deep convolutional neural network constructed by improving the FCN in step (1) comprises, in order: an input layer, a multi-scale feature extraction layer, a dilated convolution connection layer, a deconvolution layer, a road connectivity enhancement network layer and an output layer.
The input layer comprises a convolution layer with kernel size 7 × 7, stride 2 and padding 3, a batch normalization (BN) layer, a ReLU activation function layer, and a max pooling layer with size 3 × 3, stride 2 and padding 1, and is used to reduce the size of the input image.
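A minimal PyTorch sketch of the input layer as described is given below; the channel counts (3 in, 64 out) are assumptions, as the patent does not state them.

```python
# Input layer sketch: 7x7 conv (stride 2, pad 3) -> BN -> ReLU -> 3x3 max pool
# (stride 2, pad 1). Together they reduce the spatial size by a factor of 4.
import torch
import torch.nn as nn

input_layer = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False),  # assumed channels
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=3, stride=2, padding=1),
)

x = torch.randn(1, 3, 512, 512)       # dummy SAR image tile
print(input_layer(x).shape)           # torch.Size([1, 64, 128, 128]): 4x smaller
```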
The multi-scale feature extraction layer comprises four units, consisting of 3, 4, 6 and 3 multi-scale feature extraction modules, respectively. Fig. 3 is a schematic structural diagram of the multi-scale feature extraction module of the road extraction method according to an embodiment of the present invention. As shown in Fig. 3, a multi-scale feature extraction module first extracts features from its input with a 3 × 3 convolution, then splits the feature map evenly into three parts along the channel dimension; the three parts pass through 3 × 3, 5 × 5 and 7 × 7 convolutions respectively to obtain features at different scales; the three resulting feature maps are merged along the channel dimension with a 1 × 1 convolution; and finally the input features are added to the merged feature map to avoid vanishing gradients. Each convolution layer is followed by a batch normalization (BN) layer and a ReLU activation function layer. In the feature extraction module, the 5 × 5 and 7 × 7 convolution kernels mainly attend to regional features of roads at different scales, the 3 × 3 convolution kernel mainly learns detailed road features, and the various features are fused to obtain comprehensive road features.
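A sketch of one multi-scale feature extraction module under the stated design is given below; the channel width is an assumption, since the patent does not give exact channel counts, and the width must be divisible by three for the even channel split.

```python
# Multi-scale module sketch: 3x3 conv -> channel split into 3 -> 3x3 / 5x5 / 7x7
# branches -> concat -> 1x1 merge -> residual add.
import torch
import torch.nn as nn

def conv_bn_relu(cin, cout, k):
    return nn.Sequential(
        nn.Conv2d(cin, cout, k, padding=k // 2, bias=False),
        nn.BatchNorm2d(cout),
        nn.ReLU(inplace=True),
    )

class MultiScaleBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        assert channels % 3 == 0, "channels must split evenly into three parts"
        c = channels // 3
        self.pre = conv_bn_relu(channels, channels, 3)    # initial 3x3 extraction
        self.branch3 = conv_bn_relu(c, c, 3)              # detail features
        self.branch5 = conv_bn_relu(c, c, 5)              # regional features
        self.branch7 = conv_bn_relu(c, c, 7)              # larger-scale regional features
        self.merge = conv_bn_relu(channels, channels, 1)  # 1x1 channel merge

    def forward(self, x):
        f = self.pre(x)
        a, b, c = torch.chunk(f, 3, dim=1)                # even split along channels
        f = torch.cat([self.branch3(a), self.branch5(b), self.branch7(c)], dim=1)
        return self.merge(f) + x                          # residual add avoids vanishing gradients
```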
Fig. 4 is a schematic structural diagram of the dilated convolution module of the road extraction method according to an embodiment of the present invention. As shown in Fig. 4, the dilated convolution connection layer comprises four consecutive dilated convolutions with dilation rates d of 1, 2, 4 and 8, each followed by a ReLU activation function layer. The outputs of the dilated convolutions are added to form the output feature map, which enlarges the receptive field, fuses multi-scale features and strengthens feature extraction.
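The dilated convolution connection layer can be sketched as follows; the 3 × 3 kernel size and the channel width are assumptions, and the padding is set equal to the dilation rate so that the spatial size is preserved.

```python
# Dilated-convolution block sketch: four chained dilated convs (d = 1, 2, 4, 8),
# each followed by ReLU; the four intermediate outputs are summed.
import torch
import torch.nn as nn

class DilatedBlock(nn.Module):
    def __init__(self, channels, rates=(1, 2, 4, 8)):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=r, dilation=r),
                nn.ReLU(inplace=True),
            )
            for r in rates
        )

    def forward(self, x):
        outputs = []
        for conv in self.convs:
            x = conv(x)            # the four dilated convolutions are chained
            outputs.append(x)
        return sum(outputs)        # summing the four stages fuses the scales
```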
Fig. 5 is a schematic diagram of the deconvolution layer structure of the road extraction method according to an embodiment of the present invention. As shown in Fig. 5, the deconvolution layer comprises five deconvolution units. The first four deconvolution units each consist of a 3 × 3 deconvolution layer, a batch normalization (BN) layer and a ReLU activation function layer; the last deconvolution unit consists, in order, of a 4 × 4 deconvolution layer, two 3 × 3 ordinary convolutions and a Sigmoid activation function. After the fifth, fourth and third deconvolution units, the feature maps output by the corresponding units in the multi-scale feature extraction layer are processed by the attention mechanism module and added to the feature maps at the deconvolution unit for feature fusion. The deconvolution layer upsamples and restores the spatial structure of the road image, and the attention mechanism module applies a second round of attention enhancement over the channel domain and spatial domain of the feature map.
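A sketch of one deconvolution unit with its attention-gated skip connection is given below. The patent does not specify the internal design of the attention mechanism module; a CBAM-style channel-then-spatial attention and a stride of 2 for the 3 × 3 deconvolution are assumptions.

```python
# Decoder-unit sketch: 3x3 transposed conv (stride 2 assumed) + BN + ReLU, with
# the encoder skip feature passed through an assumed channel+spatial attention
# module before being added.
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    """Assumed CBAM-like attention over channel domain, then spatial domain."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )
        self.spatial = nn.Sequential(nn.Conv2d(1, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, x):
        x = x * self.channel(x)                                # channel weighting
        return x * self.spatial(x.mean(dim=1, keepdim=True))   # spatial weighting

class DeconvUnit(nn.Module):
    def __init__(self, cin, cout):
        super().__init__()
        self.up = nn.Sequential(
            nn.ConvTranspose2d(cin, cout, 3, stride=2, padding=1, output_padding=1),
            nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
        )
        self.attn = ChannelSpatialAttention(cout)

    def forward(self, x, skip):
        return self.up(x) + self.attn(skip)    # attention-weighted skip is added
```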
Fig. 6 is a schematic diagram of the road connectivity enhancement network structure of the road extraction method according to an embodiment of the present invention. As shown in Fig. 6, the road connectivity enhancement network consists of two parts, an encoder and a decoder. The encoder extracts information from the road prediction probability map using four 3 × 3 convolutions, each followed by a batch normalization (BN) layer and a ReLU activation function. After the second, third and fourth 3 × 3 convolutions, a 2 × 2 max pooling with stride 2 downsamples the current feature map by a factor of 2, yielding feature maps whose spatial sizes are 1/2, 1/4 and 1/8 of the input, respectively. The decoder learns a correction value for each pixel in the road prediction probability map using four 3 × 3 convolutions; before the first, second and third convolutions, 2× bilinear-interpolation upsampling gradually restores the spatial structure of the road prediction probability map, and after each upsampling the correspondingly sized feature map from the encoder is channel-concatenated with the current feature map to fuse features of different levels. After the fourth 3 × 3 convolution, the output is added to the input road prediction probability map, correcting the prediction while avoiding the loss of original image information. Finally, a Sigmoid activation function produces the connectivity-enhanced road prediction probability map.
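A sketch of the connectivity enhancement network is given below. Only the topology (four encoder convolutions with three poolings, four decoder convolutions with three bilinear upsamplings, skip concatenations, a residual add and a final Sigmoid) follows the description; the channel widths are assumptions.

```python
# Connectivity-enhancement sketch: small encoder-decoder that refines the
# 1-channel road probability map with a residual correction. Input H and W
# must be divisible by 8.
import torch
import torch.nn as nn
import torch.nn.functional as F

def cbr(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1, bias=False),
        nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
    )

class ConnectivityNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.e1, self.e2 = cbr(1, 32), cbr(32, 64)        # assumed widths
        self.e3, self.e4 = cbr(64, 128), cbr(128, 128)
        self.pool = nn.MaxPool2d(2, stride=2)
        self.d1, self.d2, self.d3 = cbr(256, 128), cbr(192, 64), cbr(96, 32)
        self.d4 = nn.Conv2d(32, 1, 3, padding=1)

    def forward(self, p):                       # p: road prediction probability map
        f1 = self.e1(p)                         # full resolution
        f2 = self.pool(self.e2(f1))             # 1/2
        f3 = self.pool(self.e3(f2))             # 1/4
        f4 = self.pool(self.e4(f3))             # 1/8
        up = lambda x: F.interpolate(x, scale_factor=2, mode="bilinear",
                                     align_corners=False)
        x = self.d1(torch.cat([up(f4), f3], dim=1))   # fuse with 1/4-size skip
        x = self.d2(torch.cat([up(x), f2], dim=1))    # fuse with 1/2-size skip
        x = self.d3(torch.cat([up(x), f1], dim=1))    # fuse with full-size skip
        return torch.sigmoid(self.d4(x) + p)          # residual correction + Sigmoid
```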
The output layer applies a threshold segmentation operation to the connectivity-enhanced road prediction probability map: pixel values greater than the threshold are set to 1 and pixel values smaller than the threshold are set to 0, producing a road prediction binary map as the final road extraction result. Testing showed that the optimal segmentation threshold is 0.24.
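The threshold segmentation of the output layer reduces to a one-line operation, sketched here with the empirically chosen threshold of 0.24:

```python
# Output-layer sketch: pixels above the threshold become road (1), the rest
# become background (0).
import torch

def binarize(prob_map: torch.Tensor, threshold: float = 0.24) -> torch.Tensor:
    return (prob_map > threshold).to(torch.uint8)
```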
Fig. 7 is a schematic diagram of road extraction results of the road extraction method according to an embodiment of the present invention. As shown in Fig. 7, the figure illustrates test SAR images, road truth values and the road extraction results of the neural network model of the invention, with results from three different scenes (town, suburb and country) selected to illustrate the model's performance in different situations. The first column is the SAR image to be extracted, the second column is the manually labeled road network truth value, the third column is the road extraction result before correction by the connectivity enhancement network, and the fourth column is the result after correction. Before correction, the neural network model already extracts most roads in the SAR image, but the extracted roads are fragmented, with many breakpoints between road segments; after correction by the connectivity enhancement network, the number of road discontinuities is clearly reduced, the connectivity of the extracted roads is enhanced, and the road network is more complete. Recall, precision and F1-score are introduced to measure the final road extraction result. ResUnet, which is frequently used in road extraction, achieves 72.0% recall, 73.8% precision and 72.9% F1-score on the SAR data set, whereas the neural network model of the invention achieves 73.8% recall, 79.5% precision and 76.5% F1-score; all three indices are higher than those of ResUnet, indicating a better road extraction effect.
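For reference, the three evaluation indices cited above can be computed pixel-wise between the predicted binary map and the ground truth as follows (a sketch; the patent does not prescribe an implementation):

```python
# Pixel-wise recall, precision and F1-score between prediction and truth.
import torch

def road_metrics(pred: torch.Tensor, truth: torch.Tensor, eps: float = 1e-8):
    pred, truth = pred.bool(), truth.bool()
    tp = (pred & truth).sum().item()      # road pixels correctly predicted
    fp = (pred & ~truth).sum().item()     # background predicted as road
    fn = (~pred & truth).sum().item()     # road pixels missed
    recall = tp / (tp + fn + eps)
    precision = tp / (tp + fp + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)
    return recall, precision, f1
```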
The principle of the invention is as follows: convolution kernels of multiple sizes extract and fuse multi-scale features, giving the model stronger feature extraction capability; dilated convolution enlarges the receptive field while preserving spatial information; and feeding the road prediction probability map into the connectivity enhancement network for learning further strengthens the extraction of the road network and reduces the discontinuities in the road extraction result. Combining the three yields a good SAR image road extraction effect.
The invention is an SAR image road extraction method based on a full convolution neural network with a corrected prediction probability map, built as an improved FCN model. Convolution kernels of different sizes extract features of different scales from the input road image, and the features are fused by convolution into a feature map that strengthens the feature expression of the model; a dilated convolution operation on the resulting feature map obtains a larger receptive field while preserving spatial information; deconvolution upsamples and restores image details, and attention-processed features of different levels are fused during deconvolution to obtain the road prediction probability map; the road prediction probability map is input into the connectivity enhancement network, which corrects the predicted probability of each pixel and reduces the discontinuities in the extracted road network; finally, threshold segmentation converts the connectivity-enhanced road prediction probability map into a road prediction binary map, giving the road extraction result. By introducing multi-scale feature extraction, feature fusion and the connectivity enhancement network, the method achieves better precision in SAR image road extraction and has a degree of generality.
The invention has the following beneficial effects: multi-scale feature extraction enhances the feature extraction capability of the network model, and the connectivity enhancement network corrects the road prediction probability map and reduces the discontinuities in the extracted roads.
System embodiment
According to an embodiment of the present invention, a road extraction system based on a full convolution neural network is provided. Fig. 8 is a schematic diagram of the road extraction system of the full convolution neural network according to the embodiment of the present invention; as shown in Fig. 8, the system specifically includes:
the establishing module 810, used for establishing the deep convolutional neural network;
the training module 820, used for training the deep convolutional neural network on an SAR image training set with minimization of the loss function as the objective, to obtain a road extraction network model;
the test module 830, used for inputting the SAR image for testing into the road extraction model to obtain the road network in the SAR image.
The embodiment of the present invention is a system embodiment corresponding to the above method embodiment, and specific operations of each module may be understood with reference to the description of the method embodiment, which is not described herein again.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented by a general-purpose computing device; they may be centralized on a single computing device or distributed across a network of multiple computing devices. Alternatively, they may be implemented by program code executable by a computing device, so that they may be stored in a storage device and executed by a computing device, and in some cases the steps shown or described may be performed in an order different from that described herein; or they may be separately fabricated into individual integrated circuit modules, or multiple ones of them may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; however, these modifications or alternative technical solutions of the embodiments of the present invention do not depart from the scope of the present invention.

Claims (10)

1. A road extraction method based on a full convolution neural network is characterized by comprising the following steps,
s1, establishing a deep convolutional neural network;
s2, training the deep convolutional neural network on the SAR image training set with minimization of the loss function as the objective to obtain a road extraction network model;
and S3, inputting the SAR image for testing into the road extraction model to obtain the road network in the SAR image.
2. The method according to claim 1, wherein S1 specifically comprises: establishing a deep convolutional neural network in which an input layer, a multi-scale feature extraction layer, a dilated convolution connection layer, a deconvolution layer, a road connectivity enhancement network layer and an output layer are connected in sequence.
3. The method of claim 2, wherein the input layer comprises a convolution layer, a BN layer, a ReLU activation function layer and a max pooling layer connected in sequence, and the input layer is used to reduce the size of the input image.
4. The method according to claim 3, wherein the multi-scale feature extraction layer comprises 4 units connected in sequence, the first unit, the second unit, the third unit and the fourth unit consisting of 3, 4, 6 and 3 multi-scale feature extraction modules, respectively; a single multi-scale feature extraction module extracts features from its input using a 3 × 3 convolution to obtain a feature map, then splits the feature map evenly into three parts along the channel dimension, obtains features at different scales by applying 3 × 3, 5 × 5 and 7 × 7 convolutions to the three parts respectively, obtains a fused feature map using a 1 × 1 convolution after the convolutions, and finally adds the input features to the fused feature map; the first unit takes the size-reduced input image as input, and the fourth unit outputs the fused road features.
5. The method according to claim 4, wherein the dilated convolution connection layer comprises four dilated convolutions connected in sequence, each of the first dilated convolution, the second dilated convolution, the third dilated convolution and the fourth dilated convolution being followed by a ReLU activation function layer; the first dilated convolution takes the fused road features as input, and the outputs of the first, second, third and fourth dilated convolutions are added to obtain the enhanced road features.
6. The method of claim 5, wherein the deconvolution layer comprises five sequentially connected deconvolution units;
the first four deconvolution units are each formed by sequentially connecting a 3 × 3 deconvolution layer, a batch normalization layer and a ReLU activation function layer; the first four deconvolution units are the fifth deconvolution unit, the fourth deconvolution unit, the third deconvolution unit and the second deconvolution unit, and the input of the fifth deconvolution unit is the enhanced road features;
and the last deconvolution unit, the first deconvolution unit, comprises a 4 × 4 deconvolution layer, two 3 × 3 ordinary convolution layers and a Sigmoid activation function connected in sequence; the feature maps output by the first unit, the second unit and the third unit are processed by an attention mechanism module and then added to the feature maps output by the fifth deconvolution unit, the fourth deconvolution unit and the third deconvolution unit respectively for feature fusion, and the road prediction probability map is output from the first deconvolution unit.
7. The method of claim 6, wherein the road connectivity enhancement network comprises:
an encoder and a decoder;
the encoder comprises four 3 × 3 convolutions, each followed by a batch normalization layer and a ReLU activation function and each followed by a max pooling; the first 3 × 3 encoder convolution takes the road prediction probability map as input, and the four 3 × 3 convolutions extract information from the road prediction probability map;
the decoder comprises four 3 × 3 convolutions, which learn a correction value for each pixel in the road prediction probability map; 2× bilinear-interpolation upsampling gradually restores the spatial structure of the road prediction probability map before the first, second and third decoder convolutions; after each upsampling, the correspondingly sized feature map in the encoder is channel-concatenated with the current feature map to fuse features of different levels; after the fourth 3 × 3 convolution, the output result is added to the input road prediction probability map; and finally a Sigmoid activation function yields the connectivity-enhanced road prediction probability map;
the output layer is used for setting pixel values greater than the threshold to 1 and pixel values smaller than the threshold to 0.
8. The method according to claim 7, wherein S2 specifically comprises: inputting a training set image into the constructed deep convolutional neural network to obtain a road prediction probability map; calculating a loss function value in combination with the road truth value; updating the neural network parameters through a back propagation algorithm; after each training epoch, comparing the current loss function value with that of the previous epoch and, if the loss function value has decreased, saving the current network model; when the loss function value has not decreased for 4 consecutive epochs, updating the learning rate to 0.2 times its original value; and when the loss function value has not decreased for 7 consecutive epochs, judging that the model has converged and ending the training.
9. The method according to claim 8, wherein S3 specifically includes:
inputting the SAR image for testing into the trained road extraction network model to obtain a road prediction binary map, and determining the optimal segmentation threshold by varying the segmentation threshold of the output layer and comparing recall and precision, so as to obtain the optimal road extraction binary map.
10. A road extraction system based on a full convolution neural network is characterized by comprising,
a building module, used for establishing a deep convolutional neural network;
a training module, used for training the deep convolutional neural network on an SAR image training set with minimization of the loss function as the objective, to obtain a road extraction network model; and
a test module, used for inputting the SAR image for testing into the road extraction model to obtain the road network in the SAR image.
CN202210605408.6A 2022-05-30 2022-05-30 Road extraction method and system based on full convolution neural network Pending CN114882473A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210605408.6A CN114882473A (en) 2022-05-30 2022-05-30 Road extraction method and system based on full convolution neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210605408.6A CN114882473A (en) 2022-05-30 2022-05-30 Road extraction method and system based on full convolution neural network

Publications (1)

Publication Number Publication Date
CN114882473A (en) 2022-08-09

Family

ID=82678960

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210605408.6A Pending CN114882473A (en) 2022-05-30 2022-05-30 Road extraction method and system based on full convolution neural network

Country Status (1)

Country Link
CN (1) CN114882473A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116524322A (en) * 2023-04-10 2023-08-01 北京盛安同力科技开发有限公司 SAR image recognition method based on deep neural network


Similar Documents

Publication Title
CN110263705B (en) Two-stage high-resolution remote sensing image change detection system oriented to remote sensing technical field
CN111652321B (en) Marine ship detection method based on improved YOLOV3 algorithm
CN111126359B (en) High-definition image small target detection method based on self-encoder and YOLO algorithm
CN110728658A (en) High-resolution remote sensing image weak target detection method based on deep learning
CN110781756A (en) Urban road extraction method and device based on remote sensing image
CN112862774B (en) Accurate segmentation method for remote sensing image building
CN114565860B (en) Multi-dimensional reinforcement learning synthetic aperture radar image target detection method
CN109146891B (en) Hippocampus segmentation method and device applied to MRI and electronic equipment
CN113361367B (en) Underground target electromagnetic inversion method and system based on deep learning
CN113160265A (en) Construction method of prediction image for brain corpus callosum segmentation for corpus callosum state evaluation
CN113011305A (en) SAR image road extraction method and device based on semantic segmentation and conditional random field
CN115294468A (en) SAR image ship identification method for improving fast RCNN
CN114882473A (en) Road extraction method and system based on full convolution neural network
CN114332633B (en) Radar image target detection and identification method and equipment and storage medium
CN114677602A (en) Front-view sonar image target detection method and system based on YOLOv5
CN110517272B (en) Deep learning-based blood cell segmentation method
CN113569720B (en) Ship detection method, system and device
CN114419490A (en) SAR ship target detection method based on attention pyramid
CN111967516B (en) Pixel-by-pixel classification method, storage medium and classification equipment
CN110751201B (en) SAR equipment task failure cause reasoning method based on textural feature transformation
CN114677575A (en) Scene migration method and device and electronic equipment
CN116310832A (en) Remote sensing image processing method, device, equipment, medium and product
CN111210416A (en) Anatomical structure prior-guided brain region-of-interest rapid segmentation method and system
CN116310899A (en) YOLOv 5-based improved target detection method and device and training method
CN116129234A (en) Attention-based 4D millimeter wave radar and vision fusion method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination