CN110853058A - High-resolution remote sensing image road extraction method based on visual saliency detection - Google Patents
- Publication number: CN110853058A
- Application number: CN201911098716.9A
- Authority: CN (China)
- Prior art keywords: remote sensing image, detection, saliency, road
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/11 — Image analysis; segmentation; region-based segmentation
- G06T5/70
- G06T7/12 — Image analysis; segmentation; edge-based segmentation
- G06T7/64 — Analysis of geometric attributes of convexity or concavity
- G06T2207/30181 — Earth observation
- G06T2207/30184 — Infrastructure
Abstract
The invention discloses a high-resolution remote sensing image road extraction method based on visual saliency detection, realized by the following technical scheme. First, the original remote sensing image is preprocessed by denoising, shadow detection and removal, superpixel extraction, and image cutting. In the boundary-weight saliency map detection algorithm, convex hull detection and a boundary probability algorithm provide prior information for salient region detection. In the background-connectivity saliency detection algorithm, background connectivity serves as the saliency detection criterion, and a saliency map based on background connectivity is computed. The boundary-weight saliency map, taken as prior information, is then fused with the background-connectivity saliency map in the gradient domain, and the image is reconstructed with Haar wavelets to obtain an improved saliency map. Finally, the GrabCut algorithm automatically generates a binary mask map for road extraction, thereby improving the accuracy of road detection and extraction.
Description
Technical Field
The invention belongs to the technical field of remote sensing image road extraction methods, and particularly relates to a high-resolution remote sensing image road extraction method based on visual saliency detection.
Background
Road extraction is an important component of remote sensing image analysis and is widely applied to urban and rural planning, rational land use, emergency response, vehicle navigation, and other tasks. High-resolution remote sensing technology has greatly enriched observation and measurement of the ground. Because a high-resolution remote sensing image has richer spatial structure, geometric texture, and topological relations, helping people recognize ground objects more effectively, processing remote sensing image data by suitable technical means to obtain the information of interest under different requirements is an important direction in the remote sensing field.
In a remote sensing image the types of ground features are various; among them, road information is important basic geographic information and one of the fastest-updated kinds in a city. With the improvement of image resolution, the land features contained in an image become more abundant and its detail features richer, so extracting road information from an image containing many land features and abundant detail suffers great interference: shadows of trees and buildings along the road, green belts, vehicles on the road, and temporary construction areas can all affect accurate extraction. When roads are extracted by manual interpretation, their positions can be found quickly; besides rich experience and intuition, this relies on the inherent characteristics of roads, which also underpin research on road extraction algorithms. Whatever the method, the following road features are generally used in road extraction:
(1) Geometric features. A road in a high-resolution remote sensing image generally appears as two parallel straight lines or curves, possibly with local occlusion by shadows or other objects. The width of the same road section varies little, and the curvature of a curved road is not too large. In most cases roads run as lines, and different connection shapes occur only where two roads intersect.
(2) Radiation characteristics. The overall gray value of a road generally contrasts with that of neighboring ground objects, but roads built from different materials have different radiation characteristics, and street trees, lane lines, vehicles, pedestrians, and the like may be present on the road and interfere with its extraction.
(3) Topological characteristics. Road distribution is relatively uniform and does not break suddenly, and roads connect with one another to form a network over larger areas. Urban road networks are denser than suburban ones, and mountain roads are very sparse.
(4) A contextual characteristic. Besides the information of the road itself, the ground features adjacent to the road can also play a certain auxiliary role. For example, vegetation may appear beside some roads, and features beside roads in cities, suburbs, mountainous areas, and the like may have differences.
The improvement in the spatial and temporal resolution of remote sensing images brings both opportunities and challenges to road extraction: detail differences between roads are amplified, and many ground objects whose shapes and textures resemble roads cause additional interference. Although existing algorithms can be applied to the interpretation of high-resolution remote sensing images, several defects and shortcomings remain, summarized as follows:
(1) ① Because the theoretical basis of existing methods lacks a visual perception strategy, the segmentation result obtained is often the mathematically optimal solution rather than the visually optimal one, so it usually differs greatly from human visual judgment. ② In other visual segmentation methods, parameter estimation and selection are determined by experience and the specific segmentation target, without basic theoretical support, which reduces the generality of these methods.
(2) ① High-resolution remote sensing images express the ground in great detail, so many ground targets with spectral features and textures similar to roads are difficult to distinguish, which affects the speed and accuracy of road extraction. ② Because high-resolution remote sensing images have relatively low spectral resolution, traditional road extraction methods cannot obtain sufficient spectral differences; under this condition, roads are difficult to distinguish and recognize.
(3) ① Because of the very detailed description of ground detail, ground targets in high-resolution remote sensing images have distinctive shape expression, which traditional shape recognition methods cannot handle. ② In describing the shape of ground targets, traditional recognition methods do not weight the shape descriptors, so they cannot adapt to complex ground targets in high-resolution remote sensing images.
The human visual system can perceive and understand complex scenes in real time, quickly and accurately picking out important or interesting information; the main reason is that humans possess sophisticated visual perception and attention mechanisms. The real visual world is complex and contains a large number of stimuli and targets, while the visual system is an information processing system with limited computing resources. At any moment, the visual perception system selects only part of the stimuli in the whole scene for conversion and transmission, and the visual attention mechanism is what realizes this function. It is part of the visual perception model and works together with modules such as learning and memory; under limited resources it can effectively handle shifts of attention focus and the separation of salient objects, and it is the precondition for cognitive processes such as information perception and memory to complete smoothly. Typically, the region or object of interest occupies only around 20% of the entire image, the rest being less relevant background, so computing over the entire image wastes a large amount of storage, analysis, and run time. Saliency detection can obtain the effective salient target region and accurately suppress redundant background information. The saliency detection result of an image can therefore serve as a preprocessing stage for many applications in computer vision, providing highly effective prior knowledge for subsequent processing.
The timely updating of the road plays an important role in city planning and construction, traffic management, GIS data acquisition, space database updating and image understanding. Therefore, the realization of the rapid and accurate automatic extraction of the high-resolution remote sensing image road based on the visual perception technology has great significance for the continuously expanding requirements.
Disclosure of Invention
Aiming at the above defects in the prior art, the high-resolution remote sensing image road extraction method based on visual saliency detection provided by the invention solves the problems of the heavy computational load of the saliency detection process and the insufficient road extraction accuracy of existing remote sensing image road extraction methods.
In order to achieve the purpose of the invention, the invention adopts the technical scheme that: the high-resolution remote sensing image road extraction method based on visual saliency detection comprises the following steps:
s1, acquiring a remote sensing image of the road to be extracted and preprocessing the remote sensing image to obtain a remote sensing image block;
s2, carrying out image saliency detection based on boundary probability on the remote sensing image blocks to generate a saliency map based on boundary weight;
s3, carrying out significance detection based on background weighted contrast on the remote sensing image block to generate a significance map based on background connectivity;
s4, fusing the saliency map based on the boundary weight and the saliency map based on the background connectivity to obtain a fused saliency map;
S5, performing binarization processing on the fusion saliency map, and extracting the salient road region by taking the binarized image as a mask, thereby realizing remote sensing image road extraction.
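The control flow of steps S1 to S5 can be sketched as a pipeline. The sketch below is a hypothetical skeleton: the function and parameter names are illustrative (not from the patent), and each stage is injected as a callable so the trivial stand-ins can later be swapped for real implementations.

```python
import numpy as np

def extract_roads(image, preprocess, boundary_saliency, background_saliency,
                  fuse, binarize):
    """Skeleton of steps S1-S5; each stage is supplied as a callable."""
    masks = []
    for block in preprocess(image):         # S1: denoise, de-shadow, cut, superpixels
        s_bw = boundary_saliency(block)     # S2: boundary-weight saliency map
        s_bc = background_saliency(block)   # S3: background-connectivity saliency map
        s_fused = fuse(s_bw, s_bc)          # S4: gradient-domain fusion
        masks.append(binarize(s_fused))     # S5: binary road mask
    return masks

# Trivial stand-ins, only to exercise the control flow.
img = np.linspace(0.0, 1.0, 16).reshape(4, 4)
masks = extract_roads(
    img,
    preprocess=lambda im: [im],                   # single "block"
    boundary_saliency=lambda b: b,
    background_saliency=lambda b: 1.0 - b,
    fuse=lambda a, b: (a + b) / 2.0,              # averages to ~0.5 everywhere
    binarize=lambda s: s > 0.25,
)
```

Injecting the stages as callables keeps the skeleton testable before any of the real algorithms (SLIC, convex-hull saliency, GrabCut) are plugged in.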
Further, the step S1 is specifically:
s11, sequentially carrying out denoising and color space conversion processing on the remote sensing image;
s12, carrying out shadow detection on the remote sensing image after color space conversion, and removing a shadow area in the remote sensing image;
s13, cutting the remote sensing image with the shadow removed into image blocks with the same size;
S14, processing all the image blocks through a superpixel algorithm to obtain the corresponding remote sensing image blocks.
Further, in step S12, a global adaptive threshold is calculated according to the global average luminance value of the remote sensing image after color space conversion, the shadow detection of the remote sensing image is performed by a global adaptive threshold algorithm, and the detected shadow region is removed from the remote sensing image.
Further, the step S2 is specifically:
s21, performing convex hull detection on the remote sensing image block based on a Harris point convex hull detection algorithm, and determining a salient point;
s22, calculating a corresponding boundary probability mean value according to the relation between the superpixel corresponding to the saliency point and the detected convex hull;
S23, detecting the image boundary saliency of the remote sensing image block based on the calculated boundary probability mean value, and generating a saliency map based on the boundary weight.
Further, in step S22, the mean boundary probability value along the edge of the t-th superpixel inside the convex hull is:

$$r_t^{pb} = \frac{1}{|E_t|} \sum_{p \in E_t} PB(p)$$

where $E_t$ is the set of edge pixels along the t-th superpixel and $PB(p)$ is the boundary probability value at pixel $p$.
Further, the step S3 is specifically:
s31, taking the background connectivity measurement of the remote sensing image block as the prior information of the target significance detection measurement;
S32, using the prior information as the weight term of the background-weighted contrast of the remote sensing image block, performing background saliency detection on the remote sensing image block, and generating a saliency map based on background connectivity.
Further, the step S31 is specifically:
In step S31, the background connectivity metric $BC(I_R)$ of the remote sensing image is calculated as:

$$BC(I_R) = \frac{\sum_{i \in I_R} \mathbf{1}(i \in B)}{\sqrt{|I_R|}}$$

where $I_R$ is a superpixel region connected to the image boundary, $i$ is a superpixel in the remote sensing image block, and $B$ is the set of boundary blocks of the remote sensing image block.
Further, the step S4 is specifically:
s41, respectively transforming the saliency map based on the boundary weight and the saliency map based on the background connectivity into a gradient domain through a discretization model;
s42, calculating the maximum gradient amplitude of the saliency map based on the boundary weight and the saliency map based on the background connectivity in the gradient domain respectively to obtain corresponding gradient saliency maps;
S43, reconstructing the gradient-domain saliency map through a Haar wavelet gradient reconstruction algorithm to obtain the corresponding fusion saliency map.
Further, in step S43, the maximum gradient magnitude $M(x, y)$ of the fusion saliency map is calculated as:

$$M(x, y) = \max_{n} \sqrt{\left(\nabla_x S_n(x, y)\right)^2 + \left(\nabla_y S_n(x, y)\right)^2}$$

where $\nabla_x S_n$ and $\nabla_y S_n$ are the gradient components in the x and y directions when the n-th saliency map is transformed to the gradient domain by the discretization model, and $n \in \{1, 2, \dots, N\}$ is the sequence number of the gradient saliency map to be fused.
Further, in step S5, the fusion saliency map is automatically binarized by the GrabCut algorithm to generate a binary mask map, and the salient road region is extracted with the binary mask map as the mask, thereby realizing remote sensing image road extraction.
Compared with the prior art, the invention has the following beneficial effects.
(1) The significance detection algorithm is simple, efficient and accurate: the method is used for detecting the salient region of the remote sensing image by using a visual saliency detection algorithm, wherein the salient map detection algorithm based on the boundary weight provides prior information for the accurate detection of the salient region by using convex hull detection and a boundary probability algorithm; in the background connectivity based significance detection algorithm, the background connectivity algorithm has the characteristics of simplicity and high efficiency, and the super-pixels are used as the calculation units in the calculation process, so that the operation time of the algorithm is greatly saved.
(2) Road detection accuracy is high: the remote sensing image quality is improved by using denoising and shadow detection algorithms in the remote sensing image preprocessing process; carrying out image fusion in a gradient domain by taking saliency map detection based on boundary weight as prior information and saliency map detection based on background connectivity, and reconstructing an image by using Haar wavelet to obtain an improved saliency map; and finally, the improved GrabCut algorithm is used for automatically generating a binary mask map for road extraction, so that the accuracy of road detection and extraction is improved.
Drawings
Fig. 1 is a flowchart of a high-resolution remote sensing image road extraction method based on visual saliency detection provided by the invention.
Fig. 2 is a remote sensing image of a road to be extracted in the embodiment of the invention.
FIG. 3 is a boundary probability image in an embodiment of the present invention.
Fig. 4 is a saliency map based on boundary weights in an embodiment of the present invention.
Fig. 5 is a saliency map based on background connectivity in an embodiment of the present invention.
Fig. 6 is a fusion saliency map in an embodiment of the present invention.
FIG. 7 is a binary mask map in an embodiment of the present invention.
Fig. 8 is a schematic diagram of a road extracted from a remote sensing image according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention is provided to facilitate understanding by those skilled in the art, but it should be understood that the invention is not limited to the scope of the embodiments. To those skilled in the art, various changes falling within the spirit and scope of the invention as defined by the appended claims will be apparent, and all matter produced using the inventive concept is protected.
Example 1:
as shown in fig. 1, the method for extracting a high-resolution remote sensing image road based on visual saliency detection includes the following steps:
s1, acquiring a remote sensing image of the road to be extracted and preprocessing the remote sensing image to obtain a remote sensing image block;
s2, carrying out image saliency detection based on boundary probability on the remote sensing image blocks to generate a saliency map based on boundary weight;
s3, carrying out significance detection based on background weighted contrast on the remote sensing image block to generate a significance map based on background connectivity;
s4, fusing the saliency map based on the boundary weight and the saliency map based on the background connectivity to obtain a fused saliency map;
S5, performing binarization processing on the fusion saliency map, and extracting the salient road region by taking the binarized image as a mask, thereby realizing remote sensing image road extraction.
Example 2:
In embodiment 1 above, when salient target detection is performed on the remote sensing image, ground-object shadows usually seriously degrade the performance of the image saliency detection algorithm. Therefore, before saliency detection is performed, shadow detection and removal must first be carried out to improve the quality of the remote sensing image and the effectiveness of the saliency detection algorithm. In the embodiment of the invention, shadows in the remote sensing image are detected and removed with a global adaptive shadow detection algorithm, so step S1 of embodiment 1 specifically comprises:
s11, sequentially carrying out denoising and color space conversion processing on the remote sensing image;
Specifically, when processing the remote sensing image shown in fig. 2, a guided filter is first applied to the whole image for denoising, and the image is then converted from the RGB color space to the CIE Lab color space;
s12, carrying out shadow detection on the remote sensing image after color space conversion, and removing a shadow area in the remote sensing image;
Specifically, a global adaptive threshold is calculated from the global average brightness value of the remote sensing image after color space conversion, shadow detection is performed with this global adaptive threshold algorithm, and the detected shadow area is removed from the remote sensing image using the global RGB information;
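The patent states only that the shadow threshold is derived from the global average brightness; the exact rule is not given. A minimal sketch of one plausible global adaptive threshold (mean minus one standard deviation of the luminance channel — an assumption, not the patent's formula) is:

```python
import numpy as np

def shadow_mask(luminance):
    """Global adaptive threshold: pixels darker than (mean - std) of the
    luminance channel are flagged as shadow. The mean-minus-std rule is an
    assumption; the patent only says the threshold derives from the global
    average brightness."""
    mu = luminance.mean()
    sigma = luminance.std()
    return luminance < (mu - sigma)

# Toy example: a bright image with one dark (shadowed) corner.
img = np.full((8, 8), 200.0)
img[:3, :3] = 20.0
mask = shadow_mask(img)
```

On the toy image the 3×3 dark corner falls below the adaptive threshold while the bright background does not.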
s13, cutting the remote sensing image with the shadow removed into image blocks with the same size;
in the embodiment of the invention, the remote sensing image after the shadow is removed is cut into image blocks with the fixed size of 500 multiplied by 500;
s14, processing all image blocks through a super-pixel algorithm to obtain corresponding remote sensing image blocks;
in the embodiment of the invention, the image block is processed by the SLIC superpixel algorithm, and a superpixel index map is established.
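A minimal sketch of S13/S14 follows. The tiling matches the 500×500 cutting described above; the patent uses the SLIC algorithm for superpixels, but to keep the sketch dependency-free a regular grid stands in for SLIC, purely to illustrate the superpixel index map structure.

```python
import numpy as np

def cut_blocks(image, size=500):
    """S13: tile the image into size x size blocks (edge remainders are
    dropped here for simplicity; the patent does not specify border
    handling)."""
    h, w = image.shape[:2]
    return [image[r:r + size, c:c + size]
            for r in range(0, h - size + 1, size)
            for c in range(0, w - size + 1, size)]

def grid_superpixels(block, cell=50):
    """S14 stand-in: a regular-grid index map. The patent uses SLIC; a grid
    is used here only to show the 'superpixel index map' structure."""
    h, w = block.shape[:2]
    rows = np.arange(h) // cell
    cols = np.arange(w) // cell
    return rows[:, None] * ((w + cell - 1) // cell) + cols[None, :]

blocks = cut_blocks(np.zeros((1000, 1500)))
labels = grid_superpixels(blocks[0])
```

A real pipeline would replace `grid_superpixels` with SLIC (e.g. scikit-image's `segmentation.slic`), which adapts the superpixel boundaries to image content.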
Example 3:
In embodiment 1, an important feature of roads in remote sensing images is a clearly outlined edge. To improve the detection accuracy of the salient region, the embodiment of the invention uses a boundary-weight saliency detection method to obtain a boundary-weight saliency map of this feature as a saliency prior map; step S2 of embodiment 1 is therefore specifically:
s21, performing convex hull detection on the remote sensing image block based on a Harris point convex hull detection algorithm, and determining a salient point;
s22, calculating a corresponding boundary probability mean value according to the relation between the superpixel corresponding to the saliency point and the detected convex hull;
and S23, detecting the image boundary significance of the remote sensing image block based on the calculated boundary probability average value, and generating a significance map based on the boundary weight.
In step S22 of the present embodiment, the boundary probability (PB) map generated from the relationship between the superpixels corresponding to the salient points and the detected convex hull is shown in fig. 3. The mean boundary probability value along the edge of the t-th superpixel inside the convex hull is:

$$r_t^{pb} = \frac{1}{|E_t|} \sum_{p \in E_t} PB(p)$$

where $E_t$ is the set of edge pixels along the t-th superpixel and $PB(p)$ is the boundary probability value at pixel $p$.
In an embodiment of the invention, the average color and centroid position of the t-th superpixel are denoted $r_t^c$ and $r_t^l$, respectively. The convex hull obtained by Harris point detection divides the image into an inner region and an outer region, whose average colors are denoted $c_{in}$ and $c_{out}$, respectively. The boundary information $r_t^{pb}$ is then combined with the color information to compute the boundary weight map as:

$$w_t = r_t^{pb} \times d(r_t^c, c_{out})$$

where $w_t$ is the weight of the t-th superpixel and $d(r_t^c, c_{out})$ is the Euclidean distance between the mean color of the t-th superpixel and the mean color of the region outside the convex hull. Assuming the convex hull comprises N superpixels, the saliency of the t-th superpixel based on the weighted convex hull is calculated from color and spatial information as:

$$S(t) = \sum_{n=1}^{N} w_n \, d_c(r_t, r_n) \, \exp\!\left(-\frac{d_s(r_t, r_n)}{\lambda}\right)$$

where $d_c(r_t, r_n)$ and $d_s(r_t, r_n)$ are the color and spatial Euclidean distances between the n-th superpixel and the t-th superpixel, $w_n$ is the weight of the n-th superpixel, and $\lambda$ is a weight that balances the importance of color and location.
The saliency map based on the boundary weights obtained based on the above process is shown in fig. 4 as a saliency prior map.
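Embodiment 3's weighting can be sketched on toy per-superpixel features. The weight w_t = r_t^pb × d(r_t^c, c_out) follows the text above; the exponential spatial falloff used for the final contrast is an assumed form (the extracted text omits the exact saliency formula), and all inputs are illustrative:

```python
import numpy as np

def boundary_weight_saliency(colors, positions, pb, in_hull, lam=0.5):
    """Sketch of embodiment 3's boundary-weight saliency.
    colors:    (N, 3) mean color per superpixel
    positions: (N, 2) centroid per superpixel
    pb:        (N,)   mean boundary probability per superpixel
    in_hull:   (N,)   bool, superpixel lies inside the detected convex hull
    The contrast sum_n w_n * ||c_t - c_n|| * exp(-||p_t - p_n|| / lam) is an
    assumed reconstruction, not the patent's verbatim formula."""
    c_out = colors[~in_hull].mean(axis=0)              # mean color outside hull
    w = pb * np.linalg.norm(colors - c_out, axis=1)    # w_t = r_t^pb * d(r_t^c, c_out)
    sal = np.zeros(len(colors))
    for t in np.where(in_hull)[0]:
        dc = np.linalg.norm(colors - colors[t], axis=1)
        ds = np.linalg.norm(positions - positions[t], axis=1)
        sal[t] = np.sum(w[in_hull] * dc[in_hull] * np.exp(-ds[in_hull] / lam))
    return sal / (sal.max() + 1e-12)                   # normalise to [0, 1]

# Toy data: two distinct superpixels inside the hull, two identical outside.
colors = np.array([[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0], [0, 0, 1.0]])
positions = np.array([[0.0, 0], [1, 0], [5, 5], [6, 5]])
pb = np.array([0.9, 0.9, 0.1, 0.1])
in_hull = np.array([True, True, False, False])
sal = boundary_weight_saliency(colors, positions, pb, in_hull)
```

Superpixels outside the hull receive zero saliency, while the hull's interior superpixels are ranked by weighted color/spatial contrast.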
Example 4:
In embodiment 1 above, the target area is connected to the image boundary far less than the background area; that is, when a region of an image block (or superpixel) is closely connected to the image boundary, the region is considered background. Saliency detection therefore needs to account for the background area. Based on this theory, step S3 specifically comprises:
s31, taking the background connectivity measurement of the remote sensing image block as the prior information of the target significance detection measurement;
and S32, using the prior information as a weight item of the background weighted contrast of the remote sensing image block, carrying out background saliency detection on the remote sensing image block, and generating a saliency map based on background connectivity.
In step S31 above, the background connectivity metric $BC(I_R)$ of the remote sensing image is calculated as:

$$BC(I_R) = \frac{\sum_{i \in I_R} \mathbf{1}(i \in B)}{\sqrt{|I_R|}}$$

where $I_R$ is a superpixel region connected to the image boundary, $i$ is a superpixel in the remote sensing image block, and $B$ is the set of boundary blocks of the remote sensing image block.

The background connectivity value of superpixel $i$ is mapped to a background probability: when the background connectivity is large, the probability is close to 1; when the boundary connectivity is small, it is close to 0. It is defined as:

$$\omega_i^{bg} = 1 - \exp\!\left(-\frac{BC(i)^2}{2\delta^2}\right)$$

where $\delta$ is a weight parameter with $\delta \in [0.5, 2.5]$; $\omega_i^{bg}$ is used as the weighting term of the background contrast to calculate the region saliency value.
The saliency map based on the background connectivity obtained based on the above process is shown in fig. 5.
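A minimal sketch of the background-connectivity criterion on a superpixel label map, assuming the standard boundary-connectivity form BC(R) = (number of R's pixels on the image border) / sqrt(|R|) and the mapping ω = 1 − exp(−BC² / 2δ²) with δ in [0.5, 2.5]:

```python
import numpy as np

def background_connectivity(labels):
    """BC per region on a label map. Assumed reconstruction
    BC(R) = (# of R's pixels on the image border) / sqrt(area of R):
    regions hugging the image border score high (likely background)."""
    border = np.zeros_like(labels, dtype=bool)
    border[0, :] = border[-1, :] = border[:, 0] = border[:, -1] = True
    bc = {}
    for r in np.unique(labels):
        in_r = labels == r
        bc[r] = (in_r & border).sum() / np.sqrt(in_r.sum())
    return bc

def background_probability(bc, delta=1.0):
    """Map BC to a background weight in [0, 1): near 1 for strongly
    border-connected regions, 0 for interior regions (delta in [0.5, 2.5])."""
    return {r: 1.0 - np.exp(-v**2 / (2.0 * delta**2)) for r, v in bc.items()}

# Toy label map: region 0 surrounds an interior region 1.
labels = np.zeros((6, 6), dtype=int)
labels[2:4, 2:4] = 1
bc = background_connectivity(labels)
prob = background_probability(bc)
```

The interior region never touches the border, so its background probability is exactly zero, while the surrounding region is confidently flagged as background.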
Example 5:
For the two saliency maps obtained in step S2 and step S3 of embodiment 1, a suitable method is needed for image fusion; a conventional weighted image fusion method cannot fully utilize the saliency information. The embodiment of the invention therefore proposes a saliency map fusion method based on gradient optimization, so step S4 of embodiment 1 specifically comprises:
s41, respectively transforming the saliency map based on the boundary weight and the saliency map based on the background connectivity into a gradient domain through a discretization model;
s42, calculating the maximum gradient amplitude of the saliency map based on the boundary weight and the saliency map based on the background connectivity in the gradient domain respectively to obtain corresponding gradient saliency maps;
and S43, reconstructing the gradient domain saliency map through a gradient reconstruction algorithm of a Haar wavelet to obtain a corresponding fusion saliency map.
In step S43, the maximum gradient magnitude $M(x, y)$ of the fusion saliency map is calculated as:

$$M(x, y) = \max_{n} \sqrt{\left(\nabla_x S_n(x, y)\right)^2 + \left(\nabla_y S_n(x, y)\right)^2}$$

where $\nabla_x S_n$ and $\nabla_y S_n$ are the gradient components in the x and y directions when the n-th saliency map is transformed to the gradient domain by the discretization model; the components of the fusion gradient $G = (G_x, G_y)$ at each pixel are taken from the saliency map attaining this maximum. Here $n \in \{1, 2, \dots, N\}$ is the sequence number of the gradient saliency map to be fused, and N = 2 when the local and global saliency maps are fused.

Specifically, obtaining the fused saliency map requires reconstructing the image in the gradient domain. The relationship between the fusion gradient $G$ and the fused saliency map $S$ can be expressed as:

$$\nabla S = G$$

In this process, the Haar wavelet decomposition coefficients of the saliency map are obtained from the fusion gradient, and these coefficients are then synthesized to obtain the fusion saliency map.
The fusion saliency map obtained by the above process is shown in fig. 6.
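The gradient-domain fusion of embodiment 5 can be sketched as follows: per pixel, keep the gradient of whichever saliency map has the larger magnitude, then recover an image whose gradient approximates the fused field. The selection step matches the M(x, y) criterion, but for simplicity the sketch reconstructs with a plain Jacobi iteration on the Poisson equation instead of the patent's Haar wavelet synthesis.

```python
import numpy as np

def fuse_gradients(maps):
    """S41-S42: per pixel, keep the gradient of the saliency map with the
    largest magnitude M(x,y) = max_n sqrt(gx_n^2 + gy_n^2)."""
    gx = np.stack([np.gradient(m, axis=1) for m in maps])
    gy = np.stack([np.gradient(m, axis=0) for m in maps])
    pick = np.sqrt(gx**2 + gy**2).argmax(axis=0)
    rows, cols = np.indices(pick.shape)
    return gx[pick, rows, cols], gy[pick, rows, cols]

def reconstruct(gx, gy, iters=500):
    """Recover S with grad(S) ~ (gx, gy) by Jacobi iteration on the Poisson
    equation (the patent instead synthesizes S from Haar wavelet
    coefficients; this solver is a stand-in)."""
    div = np.gradient(gx, axis=1) + np.gradient(gy, axis=0)
    s = np.zeros_like(gx)
    for _ in range(iters):
        p = np.pad(s, 1, mode='edge')
        neigh = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
        s = (neigh - div) / 4.0
    return s - s.min()

# Toy maps: a strong bump and a weak copy; fusion keeps the strong gradients.
x = np.arange(16.0)
X, Y = np.meshgrid(x, x)
a = np.exp(-((X - 8)**2 + (Y - 8)**2) / 10.0)
b = 0.2 * a
gxf, gyf = fuse_gradients([a, b])
s = reconstruct(gxf, gyf)
```

Because the weak map's gradients are everywhere smaller, the fused field equals the strong map's gradients, and the reconstruction recovers a bump-shaped map.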
Example 6:
In step S5 of embodiment 1, the fusion saliency map is automatically binarized by the GrabCut algorithm to generate a binary mask map, and the salient road region is extracted with the binary mask map as the mask, thereby realizing road extraction from the remote sensing image.
The binary mask map obtained through the above process is shown in fig. 7, and the extracted road region is shown in fig. 8.
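The patent binarizes the fused saliency map with GrabCut. To keep this sketch dependency-free, Otsu's threshold stands in for GrabCut (an explicit substitution — the patent itself uses GrabCut, typically available as OpenCV's cv2.grabCut):

```python
import numpy as np

def otsu_threshold(saliency, bins=256):
    """Stand-in for S5's GrabCut step: Otsu's method picks the threshold
    that maximizes between-class variance of the saliency histogram."""
    hist, edges = np.histogram(saliency, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)                # weight of class 0 (bins <= k)
    m0 = np.cumsum(p * centers)      # cumulative mean mass of class 0
    mt = m0[-1]                      # global mean
    with np.errstate(divide='ignore', invalid='ignore'):
        var_b = (mt * w0 - m0)**2 / (w0 * (1.0 - w0))
    k = np.nan_to_num(var_b).argmax()
    return edges[k + 1]              # upper edge of the best split bin

def road_mask(saliency):
    return saliency >= otsu_threshold(saliency)

# Toy fused saliency map: dim background with a bright vertical road stripe.
s = np.full((10, 10), 0.1)
s[:, 4:6] = 0.9
mask = road_mask(s)
```

On the bimodal toy map the threshold lands between the two modes, so the binary mask keeps exactly the bright stripe, mirroring how the GrabCut mask isolates the salient road region.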
Compared with the prior art, the invention has the following beneficial effects.
(1) The saliency detection algorithm is simple, efficient and accurate: the method detects the salient region of the remote sensing image with a visual saliency detection algorithm, in which the boundary-weight-based saliency map detection uses convex hull detection and a boundary probability algorithm to provide prior information for accurate detection of the salient region; in the background-connectivity-based saliency detection, the background connectivity algorithm is simple and efficient, and superpixels are used as the computation units, which greatly reduces the running time of the algorithm.
(2) High road detection accuracy: denoising and shadow detection algorithms improve the remote sensing image quality during preprocessing; the boundary-weight-based saliency map serves as prior information and is fused with the background-connectivity-based saliency map in the gradient domain, and the image is reconstructed with the Haar wavelet to obtain an improved saliency map; finally, the improved GrabCut algorithm automatically generates a binary mask map for road extraction, improving the accuracy of road detection and extraction.
Claims (10)
1. The high-resolution remote sensing image road extraction method based on visual saliency detection is characterized by comprising the following steps of:
s1, acquiring a remote sensing image of the road to be extracted and preprocessing the remote sensing image to obtain a remote sensing image block;
s2, carrying out image saliency detection based on boundary probability on the remote sensing image blocks to generate a saliency map based on boundary weight;
s3, carrying out significance detection based on background weighted contrast on the remote sensing image block to generate a significance map based on background connectivity;
s4, fusing the saliency map based on the boundary weight and the saliency map based on the background connectivity to obtain a fused saliency map;
and S5, performing binarization processing on the fusion saliency map, and extracting a saliency road region by taking the image after binarization processing as a mask, thereby realizing remote sensing image road extraction.
2. The method for extracting the high-resolution remote sensing image road based on the visual saliency detection as claimed in claim 1, wherein said step S1 is specifically:
s11, sequentially carrying out denoising and color space conversion processing on the remote sensing image;
s12, carrying out shadow detection on the remote sensing image after color space conversion, and removing a shadow area in the remote sensing image;
s13, cutting the remote sensing image with the shadow removed into image blocks with the same size;
and S14, processing all the image blocks through a super-pixel algorithm to obtain corresponding remote sensing image blocks.
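Step S13 above (cutting into equally sized image blocks) can be sketched as follows; the block size of 256 is an illustrative choice, not a value stated in the patent, and edge remainders are simply dropped here (padding would be an equally valid choice).

```python
import numpy as np

def cut_into_blocks(image, block=256):
    """Step S13: cut the (shadow-removed) remote sensing image into
    equally sized square blocks; remainders at the edges are dropped."""
    h, w = image.shape[:2]
    return [image[r:r + block, c:c + block]
            for r in range(0, h - block + 1, block)
            for c in range(0, w - block + 1, block)]
```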
3. The method for extracting the high-resolution remote sensing image road based on the visual saliency detection as claimed in claim 2, wherein in said step S12, a global adaptive threshold is calculated according to the global average luminance value of the remote sensing image after the color space conversion, the shadow detection of the remote sensing image is performed by a global adaptive threshold algorithm, and the detected shadow region is removed from the remote sensing image.
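A minimal sketch of the shadow handling in claim 3, assuming the luminance channel is already available from the color space conversion. The scale factor k and the fill-in of shadow pixels with the non-shadow mean are illustrative assumptions, since the claim does not state how the global adaptive threshold is derived from the global average luminance beyond being a function of it.

```python
import numpy as np

def remove_shadows(luminance, k=0.5):
    """Global adaptive threshold from the global mean luminance (claim 3
    sketch): pixels darker than k * mean are treated as shadow and
    replaced by the mean of the non-shadow pixels."""
    t = k * luminance.mean()          # global adaptive threshold
    shadow = luminance < t            # detected shadow mask
    out = luminance.copy()
    out[shadow] = luminance[~shadow].mean()
    return out, shadow
```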
4. The method for extracting the high-resolution remote sensing image road based on the visual saliency detection as claimed in claim 1, wherein said step S2 specifically is:
s21, performing convex hull detection on the remote sensing image block based on a Harris point convex hull detection algorithm, and determining a salient point;
s22, calculating a corresponding boundary probability mean value according to the relation between the superpixel corresponding to the saliency point and the detected convex hull;
and S23, detecting the image boundary significance of the remote sensing image block based on the calculated boundary probability average value, and generating a significance map based on the boundary weight.
5. The method for extracting the high-resolution remote-sensing image road based on the visual saliency detection as claimed in claim 4, wherein in said step S22, the mean value of the boundary probability values along the edge of the t-th superpixel in said convex hull is:
pb̄_t = (1/|E_t|) Σ_{p ∈ E_t} pb(p)
in the formula, E_t is the set of edge pixels along the t-th superpixel, and pb(p) is the boundary probability value at edge pixel p;
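The claim-5 quantity — the mean boundary probability along a superpixel's edge — can be sketched as below, assuming a per-pixel boundary probability map pb and a superpixel label image are given; the 4-neighborhood edge test is an illustrative choice, not specified by the patent.

```python
import numpy as np

def edge_probability_mean(pb, labels, t):
    """Mean boundary probability over E_t, the edge pixels of
    superpixel t (sketch of the claim-5 quantity)."""
    region = labels == t
    # Replicate the border so image-boundary pixels are not forced onto E_t.
    pad = np.pad(region, 1, mode="edge")
    interior = (pad[1:-1, 1:-1] & pad[:-2, 1:-1] & pad[2:, 1:-1]
                & pad[1:-1, :-2] & pad[1:-1, 2:])
    edge = region & ~interior          # E_t: region pixels touching another label
    return pb[edge].mean()
```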
6. The method for extracting the high-resolution remote sensing image road based on the visual saliency detection as claimed in claim 1, wherein said step S3 specifically is:
s31, taking the background connectivity measurement of the remote sensing image block as the prior information of the target significance detection measurement;
and S32, using the prior information as a weight item of the background weighted contrast of the remote sensing image block, carrying out background saliency detection on the remote sensing image block, and generating a saliency map based on background connectivity.
7. The method for extracting the high-resolution remote sensing image road based on the visual saliency detection as claimed in claim 6, wherein in said step S31, the background connectivity metric BC(I_R) of the remote sensing image is calculated as:
BC(I_R) = |{ i | i ∈ I_R, i ∈ B }| / sqrt(|I_R|)
in the formula, I_R is a superpixel region connected to the image boundaries; i is a superpixel in the remote sensing image block; and B is the boundary block set of the remote sensing image block.
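The claim-7 metric resembles the standard boundary-connectivity measure (how much of a region lies on the image boundary, normalized by the square root of the region size). A minimal sketch under that assumption, with superpixels represented by integer ids:

```python
import numpy as np

def background_connectivity(region_ids, boundary_ids):
    """Boundary-connectivity style metric (sketch of the claim-7
    quantity): count of region superpixels lying on the image
    boundary, divided by the square root of the region size."""
    region = set(region_ids)
    on_boundary = region & set(boundary_ids)   # superpixels of I_R that are in B
    return len(on_boundary) / np.sqrt(len(region))
```

A region tightly connected to the boundary scores high (background prior); a compact interior region such as a road segment scores low, which is why the metric serves as prior information in step S32.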
8. The method for extracting the high-resolution remote sensing image road based on the visual saliency detection as claimed in claim 1, wherein said step S4 specifically is:
s41, respectively transforming the saliency map based on the boundary weight and the saliency map based on the background connectivity into a gradient domain through a discretization model;
s42, calculating the maximum gradient amplitude of the saliency map based on the boundary weight and the saliency map based on the background connectivity in the gradient domain respectively to obtain corresponding gradient saliency maps;
and S43, reconstructing the gradient domain saliency map through a gradient reconstruction algorithm of a Haar wavelet to obtain a corresponding fusion saliency map.
9. The method for extracting the high-resolution remote sensing image road based on the visual saliency detection as claimed in claim 8, wherein in said step S43, the maximum gradient amplitude M(x, y) of the fused saliency map is calculated as:
M(x, y) = max_{n ∈ N} sqrt( (Gx_n(x, y))^2 + (Gy_n(x, y))^2 )
in the formula, Gx_n and Gy_n are respectively the gradient components in the x direction and the y direction when the n-th saliency map is transformed into the gradient domain by the discretization model; n is the sequence number of the gradient saliency map to be fused, n ∈ N = {1, 2, 3, ..., N}.
10. The method for extracting the high-resolution remote sensing image road based on the visual saliency detection as claimed in claim 1, wherein in step S5, the fusion saliency map is automatically binarized by the GrabCut algorithm to generate a binary mask map, and the salient road region is extracted according to the binary mask map to realize road extraction from the remote sensing image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911098716.9A CN110853058B (en) | 2019-11-12 | 2019-11-12 | High-resolution remote sensing image road extraction method based on visual saliency detection |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110853058A true CN110853058A (en) | 2020-02-28 |
CN110853058B CN110853058B (en) | 2023-01-03 |
Family
ID=69601618
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911098716.9A Active CN110853058B (en) | 2019-11-12 | 2019-11-12 | High-resolution remote sensing image road extraction method based on visual saliency detection |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110853058B (en) |
Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6075905A (en) * | 1996-07-17 | 2000-06-13 | Sarnoff Corporation | Method and apparatus for mosaic image construction |
CN102682287A (en) * | 2012-04-17 | 2012-09-19 | 电子科技大学 | Pedestrian detection method based on saliency information |
CN103413275A (en) * | 2013-07-26 | 2013-11-27 | 北京工业大学 | Retinex night image enhancement method based on gradient zero norm minimum |
CN104504670A (en) * | 2014-12-11 | 2015-04-08 | 上海理工大学 | Multi-scale gradient domain image fusion algorithm |
CN105005761A (en) * | 2015-06-16 | 2015-10-28 | 北京师范大学 | Panchromatic high-resolution remote sensing image road detection method in combination with significance analysis |
US20160163058A1 (en) * | 2013-07-31 | 2016-06-09 | Yichen Wei | Geodesic saliency using background priors |
US20170083762A1 (en) * | 2015-06-22 | 2017-03-23 | Photomyne Ltd. | System and Method for Detecting Objects in an Image |
CN107862702A (en) * | 2017-11-24 | 2018-03-30 | 大连理工大学 | A kind of conspicuousness detection method of combination boundary connected and local contrast |
CN107862673A (en) * | 2017-10-31 | 2018-03-30 | 北京小米移动软件有限公司 | Image processing method and device |
CN108830883A (en) * | 2018-06-05 | 2018-11-16 | 成都信息工程大学 | Vision attention SAR image object detection method based on super-pixel structure |
CN109146798A (en) * | 2018-07-10 | 2019-01-04 | 西安天盈光电科技有限公司 | image detail enhancement method |
CN109345496A (en) * | 2018-09-11 | 2019-02-15 | 中国科学院长春光学精密机械与物理研究所 | A kind of image interfusion method and device of total variation and structure tensor |
US20190102918A1 (en) * | 2016-03-23 | 2019-04-04 | University Of Iowa Research Foundation | Devices, Systems and Methods Utilizing Framelet-Based Iterative Maximum-Likelihood Reconstruction Algorithms in Spectral CT |
CN110084107A (en) * | 2019-03-19 | 2019-08-02 | 安阳师范学院 | A kind of high-resolution remote sensing image method for extracting roads and device based on improvement MRF |
CN110276270A (en) * | 2019-05-30 | 2019-09-24 | 南京邮电大学 | A kind of high-resolution remote sensing image building area extracting method |
Non-Patent Citations (4)
Title |
---|
IOANA S. SEVCENCO, ET AL.: "A wavelet based method for image reconstruction from gradient data with applications", 《MULTIDIMENSIONAL SYSTEMS AND SIGNAL PROCESSING》 *
CHEN JIE, ET AL.: "Multi-scale watershed segmentation of high-resolution multispectral remote sensing images using wavelet transform", 《JOURNAL OF REMOTE SENSING》 *
CHEN BINGCAI, ET AL.: "Image saliency detection fusing boundary connectivity and local contrast", 《CHINESE JOURNAL OF COMPUTERS》 *
HUANG JIA: "Research on automatic extraction of low-grade roads based on high-resolution remote sensing images", 《CHINA MASTERS' THESES FULL-TEXT DATABASE, INFORMATION SCIENCE AND TECHNOLOGY》 *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111415357A (en) * | 2020-03-19 | 2020-07-14 | 长光卫星技术有限公司 | Portable shadow extraction method based on color image |
CN111415357B (en) * | 2020-03-19 | 2023-04-07 | 长光卫星技术股份有限公司 | Portable shadow extraction method based on color image |
CN111597909A (en) * | 2020-04-22 | 2020-08-28 | 吉林省大河智能科技有限公司 | Fire detection and judgment method based on visual saliency |
CN111597909B (en) * | 2020-04-22 | 2023-05-02 | 吉林省大河智能科技有限公司 | Fire detection judging method based on visual saliency |
Also Published As
Publication number | Publication date |
---|---|
CN110853058B (en) | 2023-01-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108573276B (en) | Change detection method based on high-resolution remote sensing image | |
CN107341470B (en) | Power transmission line detection method based on aerial images | |
CN111310773A (en) | Efficient license plate positioning method of convolutional neural network | |
CN110853026A (en) | Remote sensing image change detection method integrating deep learning and region segmentation | |
CN107818303B (en) | Unmanned aerial vehicle oil and gas pipeline image automatic contrast analysis method, system and software memory | |
CN111797712A (en) | Remote sensing image cloud and cloud shadow detection method based on multi-scale feature fusion network | |
CN106294705A (en) | A kind of batch remote sensing image preprocess method | |
CN113160062B (en) | Infrared image target detection method, device, equipment and storage medium | |
CN107610092B (en) | Pavement crack dynamic detection method based on video stream | |
CN113409267B (en) | Pavement crack detection and segmentation method based on deep learning | |
CN106157323A (en) | The insulator division and extracting method that a kind of dynamic division threshold value and block search combine | |
CN103870834A (en) | Method for searching for sliding window based on layered segmentation | |
CN104715251A (en) | Salient object detection method based on histogram linear fitting | |
CN110853058B (en) | High-resolution remote sensing image road extraction method based on visual saliency detection | |
CN111462044A (en) | Greenhouse strawberry detection and maturity evaluation method based on deep learning model | |
CN112561899A (en) | Electric power inspection image identification method | |
CN108710862A (en) | A kind of high-resolution remote sensing image Clean water withdraw method | |
CN113327255A (en) | Power transmission line inspection image processing method based on YOLOv3 detection, positioning and cutting and fine-tune | |
CN111414954A (en) | Rock image retrieval method and system | |
CN104008374B (en) | Miner's detection method based on condition random field in a kind of mine image | |
CN106971402B (en) | SAR image change detection method based on optical assistance | |
CN117274627A (en) | Multi-temporal snow remote sensing image matching method and system based on image conversion | |
TW202225730A (en) | High-efficiency LiDAR object detection method based on deep learning through direct processing of 3D point data to obtain a concise and fast 3D feature to solve the shortcomings of complexity and time-consuming of the current voxel network model | |
CN105205485B (en) | Large scale image partitioning algorithm based on maximum variance algorithm between multiclass class | |
CN113392704B (en) | Mountain road sideline position detection method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||