CN116630811B - River extraction method, river extraction device, terminal equipment and readable storage medium - Google Patents

River extraction method, river extraction device, terminal equipment and readable storage medium

Info

Publication number
CN116630811B
Authority
CN
China
Prior art keywords
river
spots
image
filling
deep learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310673435.1A
Other languages
Chinese (zh)
Other versions
CN116630811A (en)
Inventor
刘力荣
唐新明
刘克
甘宇航
罗征宇
尤淑撑
金华星
杜磊
何芸
牟兴林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ministry Of Natural Resources Land Satellite Remote Sensing Application Center
Original Assignee
Ministry Of Natural Resources Land Satellite Remote Sensing Application Center
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ministry Of Natural Resources Land Satellite Remote Sensing Application Center filed Critical Ministry Of Natural Resources Land Satellite Remote Sensing Application Center
Priority to CN202310673435.1A priority Critical patent/CN116630811B/en
Publication of CN116630811A publication Critical patent/CN116630811A/en
Application granted granted Critical
Publication of CN116630811B publication Critical patent/CN116630811B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/10 - Terrestrial scenes
    • G06V20/182 - Network patterns, e.g. roads or rivers
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 - 2D [Two Dimensional] image generation
    • G06T11/40 - Filling a planar surface by adding surface attributes, e.g. colour or texture
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/778 - Active pattern-learning, e.g. online learning of image or video features

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to the field of image processing and provides a river extraction method, a river extraction device, a terminal device and a readable storage medium. The river extraction method comprises the following steps: acquiring a panchromatic grayscale image and a multispectral image corresponding to a satellite remote sensing image; performing automatic semantic segmentation of the river on the multispectral image to obtain initial river patches; performing morphological erosion on the initial river patches to obtain eroded river patches, and acquiring positive sample points for interactive deep learning from the eroded river patches; performing edge detection and filling on the panchromatic grayscale image to obtain a coarsely filled river region; acquiring negative sample points for interactive deep learning based on the coarsely filled river region; and accurately filling the coarsely filled river region with the positive sample points and the negative sample points to obtain river patches with refined edges. The method effectively addresses automatic river extraction and the problem of inaccurate edges while saving the cost of producing large-scale samples.

Description

River extraction method, river extraction device, terminal equipment and readable storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a river extraction method, a river extraction device, a terminal device, and a readable storage medium.
Background
Across a vast territory, rivers are numerous and varied in type, they occupy an important place in terrain feature analysis, and river information plays a major role in water resource planning and utilization, drought and flood disaster prevention, and similar tasks. To make full use of river resources, rivers must first be fully recognized and analyzed. With the development of aerospace technology, satellite remote sensing images offer easy acquisition, strong timeliness, high accuracy and wide coverage, and massive remote sensing data are applied to research on acquiring land surface cover information. Rivers appear as irregular linear objects in remote sensing images, and the traditional manual or human-computer interactive way of extracting them is time-consuming and labor-intensive, and can no longer meet the growing demand of management departments such as natural resources and water conservancy authorities for high-precision, high-frequency river monitoring. Therefore, making full use of remote sensing interpretation, image processing, artificial intelligence and big data technology to extract river information automatically, and achieving high-precision, high-frequency acquisition of river information while reducing cost, is of great significance for natural resource monitoring, hydraulic engineering construction, flood disaster prevention and related applications.
At present, automatic river extraction methods based on remote sensing images mainly include the following. First, river extraction based on mathematical morphology: the remote sensing image is preprocessed to enhance the contrast of the river, the river water area is obtained through target segmentation, and a complete river water area is finally formed through region connection. Second, automatic river extraction with genetic algorithms, which seek the optimal solution through repeated crossover iterations and survival-of-the-fittest selection. Third, deep learning semantic segmentation based on fully convolutional networks: the water body and the background in the remote sensing image are labeled, the constructed convolutional neural network is trained on these labels, and river identification is then performed. Semantic segmentation with fully convolutional neural networks is currently the mainstream method.
Among these three schemes, semantic segmentation of remote sensing images by deep learning improves the level of automation and the precision to a certain extent compared with the first two, but it still has shortcomings. On the one hand, existing semantic segmentation models easily lose the rich texture information of remote sensing images during feature extraction, making accurate segmentation difficult; on the other hand, training a semantic segmentation model usually requires a large number of balanced, high-quality samples, the cost is high, and for extraction tasks over large areas the generalization ability of the model is weak. As a result, automatically extracted river results often cannot be applied directly in operational work and require further optimization. The current refinement methods for automatically extracted rivers mainly include manual repair, threshold segmentation, contour detection and the like. These treatments refine the automatically extracted river edges to different degrees, but problems such as labor consumption and inaccurate refinement remain.
Disclosure of Invention
In view of this, embodiments of the present application provide a river extraction method, apparatus, terminal device, and readable storage medium.
In a first aspect, embodiments of the present application provide a river extraction method, including:
acquiring a panchromatic grayscale image and a multispectral image corresponding to a satellite remote sensing image;
performing automatic semantic segmentation of the river on the multispectral image to obtain initial river patches;
performing morphological erosion on the initial river patches to obtain eroded river patches, and acquiring positive sample points for interactive deep learning from the eroded river patches;
performing edge detection and filling on the panchromatic grayscale image to obtain a coarsely filled river region;
acquiring negative sample points for interactive deep learning based on the coarsely filled river region;
and accurately filling the coarsely filled river region with the positive sample points and the negative sample points to obtain river patches with refined edges.
In some embodiments, acquiring positive sample points for interactive deep learning from the eroded river patches comprises:
uniformly selecting a preset number of key points from the eroded river patches according to the spatial position of the river, and using the preset number of key points both as positive sample points for interactive deep learning and as initial seed points in the filling process.
In some embodiments, performing edge detection and filling on the panchromatic grayscale image to obtain a coarsely filled river region comprises:
performing Canny edge detection on the panchromatic grayscale image to obtain a corresponding contour binary image;
and filling the contour binary image with a flood fill algorithm using the key points selected from the eroded river patches as initial seed points, and taking the difference set between the contour binary images before and after filling as the coarsely filled river region.
In some embodiments, before the Canny edge detection is performed on the panchromatic grayscale image, the method further comprises:
performing image preprocessing on the panchromatic grayscale image, wherein the image preprocessing comprises histogram equalization and median filtering.
In some embodiments, acquiring negative sample points for interactive deep learning based on the coarsely filled river region comprises:
building an outward buffer around the coarsely filled river region, and uniformly selecting a preset number of points along the edge of the buffer as negative sample points for interactive deep learning.
In some embodiments, accurately filling the coarsely filled river region with the positive sample points and the negative sample points to obtain river patches with refined edges comprises:
summing the pixel coordinates of each positive sample point and each negative sample point respectively, and generating a positive sample point sequence and a negative sample point sequence in descending or ascending order of the coordinate sums;
and sequentially inputting the positive and negative sample point sequences into an edge-constrained interactive deep learning model according to a preset insertion rule to refine and fill the coarsely filled region, so as to obtain river patches with refined edges.
In some embodiments, the multispectral image comprises spectral data of a blue band, a green band and a near-infrared band, and performing automatic semantic segmentation of the river on the multispectral image to obtain initial river patches comprises:
combining the spectral data of the blue band, the green band and the near-infrared band to obtain a pseudo-color image, and performing automatic semantic segmentation of the river on the pseudo-color image to extract initial river patches.
In a second aspect, embodiments of the present application provide a river extraction apparatus, including:
an acquisition module, configured to acquire a panchromatic grayscale image and a multispectral image corresponding to a satellite remote sensing image;
an initial extraction module, configured to perform automatic semantic segmentation of the river on the multispectral image to obtain initial river patches;
a positive sample point acquisition module, configured to perform morphological erosion on the initial river patches to obtain eroded river patches, and to acquire positive sample points for interactive deep learning from the eroded river patches;
a coarse filling module, configured to perform edge detection and filling on the panchromatic grayscale image to obtain a coarsely filled river region;
a negative sample point acquisition module, configured to acquire negative sample points for interactive deep learning based on the coarsely filled river region;
and a fine extraction module, configured to accurately fill the coarsely filled river region with the positive sample points and the negative sample points to obtain river patches with refined edges.
In a third aspect, embodiments of the present application provide a terminal device, the terminal device including a processor and a memory, the memory storing a computer program, the processor being configured to execute the computer program to implement the river extraction method.
In a fourth aspect, embodiments of the present application provide a readable storage medium storing a computer program which, when executed on a processor, implements the river extraction method.
The embodiment of the application has the following beneficial effects:
according to the river extraction method, full-automatic and high-precision river extraction is realized by utilizing full-color gray level images and multispectral images corresponding to satellite remote sensing images, wherein the multispectral images are utilized for carrying out automatic semantic segmentation on the river so as to obtain initial river flow spots; morphological corrosion treatment is carried out on the initial river map spots, and positive sample points of the interactive deep learning model are obtained from the corroded river map spots; meanwhile, edge detection and filling are carried out on the full-color gray level image, and a rough filling area of a river and a negative sample point of interactive deep learning are obtained; and finally, accurately filling the river by utilizing the positive and negative sample points to obtain the river pattern spots with refined edges. According to the method, interactive deep learning segmentation is combined on the basis of automatic semantic segmentation of the image, river edge refinement post-treatment based on an automatic segmentation result is realized, and the problem of inaccurate river automatic extraction result edge is solved; in addition, positive and negative sample points are automatically obtained and utilized to conduct river edge refinement treatment, and the problems that a large number of samples are needed to be supported by a depth semantic segmentation model based on supervised learning, input cost is high and the extracted river pattern spots are low in accuracy in the prior art are solved.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings only illustrate some embodiments of the present application and should therefore not be regarded as limiting the scope; other related drawings can be obtained from these drawings by a person skilled in the art without inventive effort.
FIG. 1 shows two satellite remote sensing images containing river information;
FIG. 2 shows a first flow chart of a river extraction method of an embodiment of the present application;
FIG. 3 shows a comparison between the original remote sensing images of FIG. 1 and the initial river patches obtained by the method of the embodiments of the present application;
FIG. 4 shows a comparison between the initial river patches of FIG. 3 and the eroded river patches obtained by the method of the embodiments of the present application;
FIG. 5 shows a second flow chart of a river extraction method of an embodiment of the present application;
FIG. 6 shows a comparison between the original remote sensing images of FIG. 3 and the contour binary images obtained by the method of the embodiments of the present application;
FIG. 7 shows a comparison between the contour binary images of FIG. 6 and the flood fill results obtained by the method of the embodiments of the present application;
FIG. 8 shows a third flow chart of a river extraction method according to an embodiment of the present application;
FIG. 9 shows a schematic diagram of the sample point input sequence of the river extraction method according to an embodiment of the present application;
FIG. 10 shows a comparison between the flood fill results of FIG. 7 and the interactive deep learning fill results obtained by the method of the embodiments of the present application;
FIG. 11 shows a schematic structural diagram of a river extraction apparatus according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments.
The components of the embodiments of the present application, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, as provided in the accompanying drawings, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present application without making any inventive effort, are intended to be within the scope of the present application.
The terms "comprises", "comprising", "having" and their cognates, as used in the various embodiments of the present application, are intended only to indicate a particular feature, number, step, operation, element, component, or combination thereof, and should not be interpreted as excluding the presence or addition of one or more other features, numbers, steps, operations, elements, components, or combinations thereof. Furthermore, the terms "first", "second", "third" and the like are used merely to distinguish between descriptions and should not be construed as indicating or implying relative importance.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the various embodiments of this application belong. Such terms (including those defined in commonly used dictionaries) are to be interpreted as having a meaning consistent with their meaning in the context of the relevant art, and are not to be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Some embodiments of the present application are described in detail below with reference to the accompanying drawings. The embodiments described below and features of the embodiments may be combined with each other without conflict.
As described in the background, river semantic segmentation of remote sensing images with deep learning improves the level of automation and precision to a certain extent, but problems such as incomplete extraction, confusion with surrounding ground objects, many false extractions and inaccurate edges still exist, as shown for example in FIG. 1. Compared with natural images, satellite remote sensing images contain rich texture and detail information, but existing semantic segmentation models easily lose this texture information during feature extraction, making accurate segmentation of river patches difficult. On the other hand, the training cost of current deep semantic segmentation models is high, and especially for large-area extraction tasks the generalization ability of the model is weak, so that automatically extracted river results often suffer from fragmented objects, holes inside patches, incomplete extraction and the like, which directly affects their operational application.
Refinement methods for automatically extracted rivers are also used at present, such as manual repair, threshold segmentation and contour detection. Manual repair relies on refining the automatic extraction result by hand; threshold segmentation compares the gray-level distribution and sets a threshold according to the difference between the river region and the background gray level, thereby improving extraction precision; contour detection performs edge detection on river edges according to the river edge features in the remote sensing image and then merges regions according to texture or gray-level features to improve extraction accuracy. However, these treatments still have problems such as labor consumption and complicated operation.
On this basis, and aiming at the problems that river patches obtained by automatic extraction with deep learning semantic segmentation are incomplete and fragmented and therefore difficult to apply in practice, the present application provides a post-processing method for edge refinement on top of a deep learning semantic segmentation model, tailored to the characteristics of automatic river extraction from remote sensing images. A river patch refinement post-processing method combining traditional image processing with interactive segmentation is constructed, fully automatic and high-precision river segmentation and extraction are realized, and the problem of inaccurate edges in automatically extracted river results is effectively solved while the cost of producing large-scale samples is saved.
The river extraction method will be described with reference to specific examples.
Fig. 2 shows a flow chart of a river extraction method according to an embodiment of the present application. Exemplarily, the river extraction method includes the following steps:
S100, acquiring a panchromatic grayscale image and a multispectral image corresponding to a satellite remote sensing image.
It will be appreciated that satellite remote sensing images have rich texture features and detail information, as shown in FIG. 1 (a) and (b), and rivers usually appear as irregular lines in them. To achieve accurate segmentation of river patches, this embodiment makes full use of the panchromatic and multispectral information of the satellite remote sensing image: a pseudo-color image derived from the multispectral information and an automatic semantic segmentation technique based on deep learning are used to extract river patches automatically and obtain an initial extraction result; then image edge detection, morphological processing, interactive deep learning and related techniques are used to refine the river edges, so that river patch information with accurately segmented edges is obtained.
By processing the captured satellite remote sensing image, the corresponding panchromatic grayscale image and multispectral image can be extracted from it, and river information on the land surface can be obtained comprehensively from these two kinds of image information. For example, taking the data of the Chinese ZY-3 (Ziyuan-3) satellite as an example, a nadir-view panchromatic image and a multispectral image with a ground resolution of 2.1 meters can be extracted, where the multispectral image may contain information from different spectral bands such as blue (R1), green (R2), red (R3) and near-infrared (R4).
Generally, in the true-color image obtained by combining blue (R1), green (R2) and red (R3), rivers in China are usually displayed in different shades of blue, green, yellow or black. In the near-infrared band (R4), water absorbs strongly while vegetation reflects strongly, so water and land boundaries can be distinguished more clearly, especially in spring and summer when vegetation flourishes. Therefore, to make full use of the strong discrimination between riverside water bodies and vegetation in the near-infrared band, and in accordance with the reflectance spectrum characteristics of water, the present application uses a pseudo-color image obtained by combining blue (R1), green (R2) and the near-infrared band (R4) for automatic semantic segmentation of the river; the water contour is then clearer, which provides a foundation for high-precision extraction and refinement of river edges.
S200, performing automatic semantic segmentation of the river on the multispectral image to obtain initial river patches.
Illustratively, the spectral data of blue (R1), green (R2) and near-infrared (R4) may be combined to obtain a pseudo-color image, which is input into a pre-trained deep learning model for automatic semantic segmentation of the river, so as to extract initial river patches. The deep learning model can be trained in advance using classical deep learning networks such as DeepLabv3+, UNet or HRNet. Specifically, during training, the obtained pseudo-color images can be used as training data: a certain number of river semantic segmentation samples are labeled, and the selected deep learning network is trained with these samples to obtain the automatic river semantic segmentation model.
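As a minimal sketch of this band combination step (the helper name, the 2%/98% percentile stretch and the channel order below are illustrative assumptions rather than requirements of this embodiment), the pseudo-color composite could be built as follows before being passed to the segmentation network:

    import numpy as np

    def make_pseudo_color(blue, green, nir):
        """Stack blue (R1), green (R2) and near-infrared (R4) bands into a
        3-channel pseudo-color image stretched to 8 bits."""
        def stretch(band):
            lo, hi = np.percentile(band, (2, 98))            # illustrative contrast stretch
            scaled = np.clip((band - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
            return (scaled * 255).astype(np.uint8)
        # Channel order (NIR, G, B) is an assumption; any fixed order works as
        # long as the segmentation model was trained with the same convention.
        return np.dstack([stretch(nir), stretch(green), stretch(blue)])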
Taking the original remote sensing images containing rivers shown in FIG. 3 (a) and (c) as examples, the initial river patches shown in FIG. 3 (b) and (d) can be obtained by the above extraction. It can be understood that when river extraction is performed only with a traditional deep learning model, the automatically extracted river patches often suffer from incomplete results, discontinuities, holes and inaccurate boundaries, owing to the number, quality and diversity of the samples and the generalization ability of the automatic semantic segmentation model.
Therefore, this embodiment further combines processing methods such as edge detection, morphological erosion and filling to obtain positive and negative sample points for interactive deep learning based on the initial extraction result, generates a sample point sequence based on pixel coordinates, and then accurately fills the river region with an edge-constrained interactive deep learning framework to obtain refined river edges, thereby producing a more accurate river extraction result.
S300, performing morphological erosion on the initial river patches to obtain eroded river patches, and acquiring positive sample points for interactive deep learning from the eroded river patches.
In general, dilation and erosion are basic morphological operations mainly used to find the maximal and minimal regions in an image. Their core is to convolve a kernel over the image matrix from left to right and from top to bottom; the erosion operation sets each pixel to the minimum value within the kernel coverage, so the target patch "shrinks". In this embodiment, processing the river patches with morphological erosion filters out some wrongly extracted non-target pixels in the initial river patches, providing a data basis for subsequently obtaining reliable positive sample points.
In one embodiment, the erosion kernel may be, for example, a square matrix whose elements are all 1; the size of the matrix determines how much the patch shrinks. In the above-mentioned experiment with ZY-3 satellite images, a size of 5×5 may be selected. It will be appreciated that this is merely an example and the size may be adapted to actual requirements without limitation.
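A minimal sketch of this erosion step with OpenCV is given below, assuming the initial result is available as a binary uint8 mask; the function name is illustrative:

    import cv2
    import numpy as np

    def erode_river_mask(initial_mask, ksize=5):
        """Shrink the automatically segmented river mask so that wrongly
        classified pixels near the patch boundary are removed; the 5x5
        all-ones kernel mirrors the size quoted above for the ZY-3 test."""
        kernel = np.ones((ksize, ksize), np.uint8)
        return cv2.erode(initial_mask, kernel, iterations=1)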
Illustratively, eroded river patches are obtained after the morphological erosion; for example, for the initial river patches shown in FIG. 4 (a) and (c), the eroded results shown in FIG. 4 (b) and (d) are obtained respectively. It can be seen that the erosion effectively removes the false extraction points and a large number of noise points at the outer boundary of the automatically extracted river patches.
Interactive deep learning, also known as deep-learning-based interactive segmentation, lets the user provide certain interaction information, such as clicks, bounding boxes, closed curves or open curves, to separate the foreground and background the user wants. In this embodiment, patch refinement post-processing is realized by combining this interactive segmentation technique, so the problem of inaccurate edges in automatically extracted river results can be solved while saving the cost of producing large-scale samples.
Further, after the erosion, a preset number of key points are uniformly selected from the eroded river patches according to the spatial position of the river, and these key points serve both as positive sample points for the subsequent interactive deep learning and as initial seed points for the subsequent filling.
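One possible way to pick such points is to stride evenly through the foreground pixels of the eroded mask in scan-line order; this sampling rule is an assumption consistent with the requirement that the points be spread over the river patch:

    import numpy as np

    def sample_positive_points(eroded_mask, n_points=10):
        """Return roughly evenly spaced (row, col) points inside the eroded
        river patches, used both as positive clicks and as flood-fill seeds."""
        rows, cols = np.nonzero(eroded_mask)                 # scan-line order
        if rows.size == 0:
            return []
        step = max(rows.size // n_points, 1)                 # even stride
        picks = range(0, rows.size, step)
        return [(int(rows[i]), int(cols[i])) for i in picks][:n_points]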
S400, performing edge detection and filling on the panchromatic grayscale image to obtain the coarsely filled river region.
In this embodiment, rough river edges are acquired from the panchromatic grayscale image, and the coarsely filled river region is obtained through edge detection, flood filling and similar processing. Optionally, before Canny edge detection is performed on the panchromatic grayscale image, the method further comprises: performing image preprocessing on the panchromatic grayscale image, including histogram equalization and median filtering, which improves the contrast of the image and reduces image noise while preserving edge information as far as possible.
In one embodiment, as shown in FIG. 5, step S400 includes the following sub-steps:
s410, canny edge detection is carried out on the full-color gray level image so as to obtain a corresponding contour binary image.
For example, a classical Canny algorithm or the like may be used for edge detection, and a corresponding contour binary image may be obtained. For example, in the remote sensing image shown in fig. 3, the contour binary image obtained by the edge detection is shown in fig. 6 (a) and (b) for the full-color gray scale image. It will be appreciated that the Canny algorithm herein is but one of the possible edge detection algorithms, and that other edge algorithms such as Roberts, sobel, laplacian are possible and are not limited thereto.
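The preprocessing and edge detection described above can be sketched as follows; the median-filter kernel size and the Canny thresholds are illustrative defaults that would normally be tuned per scene:

    import cv2

    def detect_edges(pan_gray, low_thresh=50, high_thresh=150):
        """Histogram equalization and median filtering of the panchromatic
        grayscale image, followed by Canny edge detection, giving the
        contour binary image."""
        equalized = cv2.equalizeHist(pan_gray)        # raise contrast
        denoised = cv2.medianBlur(equalized, 5)       # suppress noise, keep edges
        return cv2.Canny(denoised, low_thresh, high_thresh)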
S420, filling the contour binary image with a flood fill algorithm using the key points selected from the eroded river patches as initial seed points, and taking the difference set between the contour binary images before and after filling as the coarsely filled river region.
Further, after the contour binary image is obtained, a corresponding filling algorithm may be used for region filling; here a flood fill algorithm may be adopted, which searches for pixels of the same region according to the difference between pixel gray values to realize region segmentation. Specifically, the preset number of key points selected in step S300 are used as initial seed points for the flood fill, and the flooded area, i.e. the area whose pixel gray values match the seed region, is filled. For example, for the contour binary images shown in FIG. 6, the flood fill results shown in FIG. 7 (b) and (d) can be obtained. The filled binary image is then differenced with the unfilled original binary image, whereby the coarsely filled river region is obtained.
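A minimal sketch of this flood-fill step with OpenCV is given below; the seed points are assumed to be the (row, col) pairs from step S300, and the fill value and helper name are illustrative:

    import cv2
    import numpy as np

    def coarse_fill_region(contour_binary, seed_points):
        """Flood-fill the contour binary image from the positive key points and
        keep the difference set between the filled and unfilled images as the
        coarsely filled river region."""
        filled = contour_binary.copy()
        h, w = contour_binary.shape
        flood_mask = np.zeros((h + 2, w + 2), np.uint8)   # OpenCV needs a 1-px border
        for r, c in seed_points:
            if filled[r, c] == 0:                         # only fill open (non-edge) pixels
                cv2.floodFill(filled, flood_mask, (c, r), 255)
        return cv2.subtract(filled, contour_binary)       # newly filled pixels only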
S500, acquiring negative sample points for interactive deep learning based on the coarsely filled river region.
Illustratively, after the coarsely filled river region is obtained, an outward buffer is built around it, and a preset number of points are uniformly selected along the edge of the buffer as negative sample points for interactive deep learning. Optionally, the number of negative sample points may be equal to the number of positive sample points.
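One way to realize the buffer and the point selection is sketched below, assuming the buffer is built by dilating the coarse region and the negative points are spaced along the resulting outer contour; both choices, and the buffer width, are assumptions, since the text only requires a buffer and points along its edge:

    import cv2
    import numpy as np

    def sample_negative_points(coarse_region, buffer_px=15, n_points=10):
        """Build an outward buffer around the coarse river region by dilation
        and take evenly spaced (row, col) points on its boundary as negative
        samples for the interactive model."""
        kernel = np.ones((2 * buffer_px + 1, 2 * buffer_px + 1), np.uint8)
        buffered = cv2.dilate(coarse_region, kernel)
        contours, _ = cv2.findContours(buffered, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_NONE)
        points = []
        for contour in contours:
            step = max(len(contour) // n_points, 1)
            for pt in contour[::step][:n_points]:
                x, y = pt[0]
                points.append((int(y), int(x)))           # back to (row, col)
        return points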
S600, accurately filling the coarsely filled river region with the positive sample points and the negative sample points to obtain river patches with refined edges.
Finally, after the positive and negative sample points have been extracted automatically, the river region is finely filled based on interactive deep learning with edge constraints. The automatically generated positive and negative sample points are input into the trained interactive deep learning model in a certain order to simulate manual interactive clicks, so that automatic optimized filling of the river region is realized. It will be appreciated that an interactive deep learning model differs from other deep learning models in that during image segmentation the user intervenes in and controls the segmentation to help complete it. This not only compensates for the limited accuracy of purely automatic segmentation by a model, but is also more accurate and efficient than purely manual segmentation for images such as remote sensing images.
In one embodiment, as shown in fig. 8, step S600 may include the sub-steps of:
and S610, respectively carrying out pixel coordinate summation on each positive sample point and each negative sample point, and generating a positive sample point sequence and a negative sample point sequence based on the pixel coordinates according to the descending order or the ascending order.
For example, if the pixel coordinates of a sample point are (x, y), the sum is (x+y); then, according to the increasing or decreasing order of these sums and the principle that positive and negative sample points are inserted alternately, a positive and negative sample point sequence can be generated, as shown in FIG. 9.
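A short sketch of this sequence generation follows; ascending order is used here, descending order is equally valid per the text, and the interleaving rule stands in for the preset insertion rule:

    def build_click_sequence(positive_points, negative_points):
        """Sort each point set by the sum of its pixel coordinates and
        interleave positives and negatives into one click sequence;
        the label 1 marks a positive click and 0 a negative click."""
        pos = sorted(positive_points, key=lambda p: p[0] + p[1])
        neg = sorted(negative_points, key=lambda p: p[0] + p[1])
        sequence = []
        for i in range(max(len(pos), len(neg))):
            if i < len(pos):
                sequence.append((pos[i], 1))
            if i < len(neg):
                sequence.append((neg[i], 0))
        return sequence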
S620, sequentially inputting the positive and negative sample point sequences into an edge-constrained interactive deep learning model according to a preset insertion rule to refine and fill the coarsely filled region, so as to obtain river patches with refined edges.
The interactive deep learning model can be obtained by pre-training; the specific training process is not described here. For example, the edge-constrained deep learning model may adopt the EdgeFlow interactive deep learning framework, which works in a coarse-to-fine manner, makes full use of the interaction and image information for multi-stage feature fusion, and uses lightweight atrous convolution blocks in the network to improve learning efficiency; meanwhile, edge constraints are used to further improve the stability of the segmentation mask. It can be understood that when the interactive deep learning model performs edge segmentation, the segmentation mask of the river edge is strongly influenced by each sample point as the points are input one by one, so that a more accurate patch extraction result is obtained.
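The EdgeFlow network itself is not reproduced here; the sketch below only illustrates how the automatically generated click sequence could drive any click-based interactive segmentation model, with interactive_model standing in as a hypothetical callable rather than an actual EdgeFlow API:

    def refine_river_patch(image, click_sequence, interactive_model, init_mask=None):
        """Feed the generated clicks one by one into a click-based interactive
        segmentation model and return the final refined river mask.
        interactive_model(image, clicks, prev_mask) is an assumed interface."""
        clicks, mask = [], init_mask
        for point, is_positive in click_sequence:
            clicks.append((point, is_positive))
            mask = interactive_model(image, clicks, mask)   # updated segmentation mask
        return mask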
For example, for the flood fill results shown in FIG. 10 (a) and (c), the filling results shown in FIG. 10 (b) and (d) are obtained after interactive deep learning filling. As the test results show, the river edge refinement method proposed in the present application effectively solves problems such as incomplete and fragmented initial automatic extraction results and inaccurate edges, effectively improves the accuracy of river extraction, and offers an approach for applying automatic river extraction results in routine operations.
The river extraction method described above makes full use of the panchromatic and multispectral information of the satellite remote sensing image: a pseudo-color image is derived from the multispectral image, and automatic semantic segmentation of the river is performed with a conventional deep learning technique to obtain initial river patches; edge detection such as Canny, morphological erosion, flood filling and other processing are then combined to automatically obtain positive and negative sample points for interactive segmentation; a sample point sequence is generated based on pixel coordinates, and the river region is accurately filled with an edge-constrained interactive deep learning model, so that river patches with refined edges are obtained. With only a small number of river samples, the method can effectively solve problems such as high cost, fragmented results, missed detections and inaccurate edges in river extraction from high-resolution remote sensing images, and improves the accuracy and efficiency of automatic river patch extraction.
Fig. 11 shows a schematic structural diagram of the river extraction apparatus 10 according to an embodiment of the present application. Exemplarily, the river extraction apparatus 10 includes:
The acquisition module 100 is configured to acquire a panchromatic grayscale image and a multispectral image corresponding to a satellite remote sensing image.
The initial extraction module 200 is configured to perform automatic semantic segmentation of the river on the multispectral image to obtain initial river patches.
The positive sample point acquisition module 300 is configured to perform morphological erosion on the initial river patches to obtain eroded river patches, and to acquire positive sample points for interactive deep learning from the eroded river patches.
The coarse filling module 400 is configured to perform edge detection and filling on the panchromatic grayscale image to obtain the coarsely filled river region.
The negative sample point acquisition module 500 is configured to acquire negative sample points for interactive deep learning based on the coarsely filled river region.
The fine extraction module 600 is configured to accurately fill the coarsely filled river region with the positive sample points and the negative sample points to obtain river patches with refined edges.
It will be appreciated that the apparatus of this embodiment corresponds to the river extraction method of the above embodiment, and that the options in the above embodiment are equally applicable to this embodiment, so that the description will not be repeated here.
The present application also provides a terminal device, such as a computer, which exemplarily includes a processor and a memory, wherein the memory stores a computer program and the processor executes the computer program, so that the terminal device performs the functions of the modules in the above river extraction method or river extraction apparatus.
The processor may be an integrated circuit chip with signal processing capability. The processor may be a general-purpose processor, including at least one of a central processing unit (CPU), a graphics processing unit (GPU) and a network processor (NP), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor capable of implementing or executing the methods, steps and logic blocks disclosed in the embodiments of the present application.
The memory may be, but is not limited to, a random access memory (RAM), a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), etc. The memory is used to store a computer program, and the processor executes the computer program correspondingly after receiving an execution instruction.
The present application also provides a readable storage medium for storing the computer program for use in the above terminal device.
The storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may also be implemented in other manners. The apparatus embodiments described above are merely illustrative. For example, the flowcharts and block diagrams in the figures illustrate the architecture, functionality and operation of possible implementations of apparatuses, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowcharts or block diagrams may represent a module, segment or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in alternative implementations, the functions noted in a block may occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially concurrently, or sometimes in the reverse order, depending on the functionality involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks therein, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
In addition, functional modules or units in the embodiments of the present application may be integrated together to form a single part, or each module may exist alone, or two or more modules may be integrated to form a single part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a smart phone, a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application.
The foregoing is merely specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes and substitutions are intended to be covered by the scope of the present application.

Claims (10)

1. A river extraction method, comprising:
acquiring a panchromatic grayscale image and a multispectral image corresponding to a satellite remote sensing image;
performing automatic semantic segmentation of the river on the multispectral image to obtain initial river patches;
performing morphological erosion on the initial river patches to obtain eroded river patches, and acquiring positive sample points for interactive deep learning from the eroded river patches;
performing edge detection and filling on the panchromatic grayscale image to obtain a coarsely filled river region;
acquiring negative sample points for interactive deep learning based on the coarsely filled river region;
and accurately filling the coarsely filled river region with the positive sample points and the negative sample points to obtain river patches with refined edges.
2. The river extraction method of claim 1, wherein the acquiring positive sample points for interactive deep learning from the eroded river patches comprises:
uniformly selecting a preset number of key points from the eroded river patches according to the spatial position of the river, and using the preset number of key points both as positive sample points for interactive deep learning and as initial seed points required for acquiring the coarsely filled region.
3. The river extraction method according to claim 1 or 2, wherein the performing edge detection and filling on the panchromatic grayscale image to obtain a coarsely filled river region comprises:
performing Canny edge detection on the panchromatic grayscale image to obtain a corresponding contour binary image;
and filling the contour binary image with a flood fill algorithm using the key points selected from the eroded river patches as initial seed points, and taking the difference set between the contour binary images before and after filling as the coarsely filled river region.
4. The river extraction method of claim 3, wherein before the Canny edge detection is performed on the panchromatic grayscale image, the method further comprises:
performing image preprocessing on the panchromatic grayscale image, wherein the image preprocessing comprises histogram equalization and median filtering.
5. The river extraction method of claim 1, wherein the acquiring negative sample points for interactive deep learning based on the coarsely filled river region comprises:
building an outward buffer around the coarsely filled river region, and uniformly selecting a preset number of points along the edge of the buffer as negative sample points for interactive deep learning.
6. The river extraction method of claim 1, wherein the accurately filling the coarsely filled river region with the positive sample points and the negative sample points to obtain river patches with refined edges comprises:
summing the pixel coordinates of each positive sample point and each negative sample point respectively, and generating a positive sample point sequence and a negative sample point sequence in descending or ascending order of the coordinate sums;
and sequentially inputting the positive and negative sample point sequences into an edge-constrained interactive deep learning model according to a preset insertion rule to refine and fill the coarsely filled region, so as to obtain river patches with refined edges.
7. The river extraction method of claim 1, wherein the multispectral image comprises spectral data of a blue band, a green band and a near-infrared band, and the performing automatic semantic segmentation of the river on the multispectral image to obtain initial river patches comprises:
combining the spectral data of the blue band, the green band and the near-infrared band to obtain a pseudo-color image, and performing automatic semantic segmentation of the river on the pseudo-color image to extract initial river patches.
8. A river extraction device, comprising:
an acquisition module, configured to acquire a panchromatic grayscale image and a multispectral image corresponding to a satellite remote sensing image;
an initial extraction module, configured to perform automatic semantic segmentation of the river on the multispectral image to obtain initial river patches;
a positive sample point acquisition module, configured to perform morphological erosion on the initial river patches to obtain eroded river patches, and to acquire positive sample points for interactive deep learning from the eroded river patches;
a coarse filling module, configured to perform edge detection and filling on the panchromatic grayscale image to obtain a coarsely filled river region;
a negative sample point acquisition module, configured to acquire negative sample points for interactive deep learning based on the coarsely filled river region;
and a fine extraction module, configured to accurately fill the coarsely filled river region with the positive sample points and the negative sample points to obtain river patches with refined edges.
9. A terminal device, comprising a processor and a memory, the memory storing a computer program, the processor being configured to execute the computer program to implement the river extraction method according to any one of claims 1-7.
10. A readable storage medium, storing a computer program which, when executed on a processor, implements the river extraction method according to any one of claims 1-7.
CN202310673435.1A 2023-06-07 2023-06-07 River extraction method, river extraction device, terminal equipment and readable storage medium Active CN116630811B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310673435.1A CN116630811B (en) 2023-06-07 2023-06-07 River extraction method, river extraction device, terminal equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310673435.1A CN116630811B (en) 2023-06-07 2023-06-07 River extraction method, river extraction device, terminal equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN116630811A CN116630811A (en) 2023-08-22
CN116630811B true CN116630811B (en) 2024-01-02

Family

ID=87609787

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310673435.1A Active CN116630811B (en) 2023-06-07 2023-06-07 River extraction method, river extraction device, terminal equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN116630811B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108416784A (en) * 2018-02-06 2018-08-17 石家庄铁道大学 Completed region of the city boundary rapid extracting method, device and terminal device
CN110298211A (en) * 2018-03-21 2019-10-01 北京大学 A kind of Methods Deriving Drainage Network based on deep learning and high-resolution remote sensing image
CN111027497A (en) * 2019-12-17 2020-04-17 西安电子科技大学 Weak and small target rapid detection method based on high-resolution optical remote sensing image
CN111046772A (en) * 2019-12-05 2020-04-21 国家海洋环境监测中心 Multi-temporal satellite remote sensing island shore line and development and utilization information extraction method
CN116189009A (en) * 2023-03-17 2023-05-30 阿里巴巴达摩院(杭州)科技有限公司 Training method of remote sensing image processing model, electronic equipment and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108256419B (en) * 2017-12-05 2018-11-23 交通运输部规划研究院 A method of port and pier image is extracted using multispectral interpretation
CN113409336B (en) * 2021-06-23 2022-03-01 生态环境部卫星环境应用中心 Method, device, medium and equipment for extracting area and frequency of river dry-out and flow-break

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108416784A (en) * 2018-02-06 2018-08-17 石家庄铁道大学 Completed region of the city boundary rapid extracting method, device and terminal device
CN110298211A (en) * 2018-03-21 2019-10-01 北京大学 A kind of Methods Deriving Drainage Network based on deep learning and high-resolution remote sensing image
CN111046772A (en) * 2019-12-05 2020-04-21 国家海洋环境监测中心 Multi-temporal satellite remote sensing island shore line and development and utilization information extraction method
CN111027497A (en) * 2019-12-17 2020-04-17 西安电子科技大学 Weak and small target rapid detection method based on high-resolution optical remote sensing image
CN116189009A (en) * 2023-03-17 2023-05-30 阿里巴巴达摩院(杭州)科技有限公司 Training method of remote sensing image processing model, electronic equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Application of domestic high-resolution remote sensing satellites to nationwide surface water remote sensing monitoring; 尤淑撑; Satellite Application (No. 06); pp. 1-6 *
Research on urban water body extraction algorithms based on high-resolution remote sensing images; 陈星壮; China Master's Theses Full-text Database (No. 01); pp. 13-27 *

Also Published As

Publication number Publication date
CN116630811A (en) 2023-08-22

Similar Documents

Publication Publication Date Title
CN111986099B (en) Tillage monitoring method and system based on convolutional neural network with residual error correction fused
CN108921799B (en) Remote sensing image thin cloud removing method based on multi-scale collaborative learning convolutional neural network
CN108171698B (en) Method for automatically detecting human heart coronary calcified plaque
CN109558806B (en) Method for detecting high-resolution remote sensing image change
CN110765934B (en) Geological disaster identification method based on multi-source data fusion
CN110298211B (en) River network extraction method based on deep learning and high-resolution remote sensing image
CN111862143B (en) Automatic monitoring method for river dike collapse
CN107247927B (en) Method and system for extracting coastline information of remote sensing image based on tassel cap transformation
Li et al. Accurate water extraction using remote sensing imagery based on normalized difference water index and unsupervised deep learning
CN111881816B (en) Long-time-sequence river and lake ridge culture area monitoring method
CN106650812A (en) City water body extraction method for satellite remote sensing image
Wang et al. The poor generalization of deep convolutional networks to aerial imagery from new geographic locations: an empirical study with solar array detection
CN110889840A (en) Effectiveness detection method of high-resolution 6 # remote sensing satellite data for ground object target
CN111339989A (en) Water body extraction method, device, equipment and storage medium
CN109886170A (en) A kind of identification of oncomelania intelligent measurement and statistical system
CN112926399A (en) Target object detection method and device, electronic equipment and storage medium
CN112037244A (en) Landsat-8 image culture pond extraction method combining index and contour indicator SLIC
CN109801306B (en) Tidal ditch extraction method based on high resolution remote sensing image
CN115937707A (en) SAR image water body extraction method based on multi-scale residual error attention model
CN116758049A (en) Urban flood three-dimensional monitoring method based on active and passive satellite remote sensing
CN114998658A (en) Intertidal zone beach extraction method and system based on tidal flat index
CN113989287A (en) Urban road remote sensing image segmentation method and device, electronic equipment and storage medium
CN116630811B (en) River extraction method, river extraction device, terminal equipment and readable storage medium
CN113240620A (en) Highly adhesive and multi-size brain neuron automatic segmentation method based on point markers
CN116895019A (en) Remote sensing image change detection method and system based on dynamic weighted cross entropy loss

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant