CN116343070B - Intelligent interpretation method for aerial survey image ground object elements - Google Patents


Info

Publication number
CN116343070B
CN116343070B (application CN202310571985.2A)
Authority
CN
China
Prior art keywords
neural network
network model
sample data
aerial survey
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310571985.2A
Other languages
Chinese (zh)
Other versions
CN116343070A (en)
Inventor
王勇
袁鹏
余大杰
王俊
唐国民
马随阳
谢齐
孟昌
余永周
望曹俊杰
吕英豪
李金乾
廖阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Tiandiying Surveying And Mapping Technology Co ltd
Original Assignee
Wuhan Tiandiying Surveying And Mapping Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Tiandiying Surveying And Mapping Technology Co ltd filed Critical Wuhan Tiandiying Surveying And Mapping Technology Co ltd
Priority to CN202310571985.2A priority Critical patent/CN116343070B/en
Publication of CN116343070A publication Critical patent/CN116343070A/en
Application granted granted Critical
Publication of CN116343070B publication Critical patent/CN116343070B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/17 Terrestrial scenes taken from planes or by drones
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/04 Interpretation of pictures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/182 Network patterns, e.g. roads or rivers
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A10/00 TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE at coastal zones; at river basins
    • Y02A10/40 Controlling or monitoring, e.g. of flood or hurricane; Forecasting, e.g. risk assessment or mapping

Abstract

The application provides an intelligent interpretation method for the ground object elements of aerial survey images. It relates to the field of unmanned aerial vehicle (UAV) aerial survey, in particular to deep learning with neural network models, and can be applied to the automatic extraction of linear and blocky ground object elements. The scheme comprises the following steps: S1, acquiring aerial survey image data and preprocessing it to obtain test sample data; S2, inputting the test sample data into a first neural network model to obtain an output result; S3, regularizing the output result with an algorithm to obtain the ground object elements. The method addresses the following problems: original images that are too large to be used directly as training data; occlusions that interfere with road extraction and leave road information incomplete and disconnected; small sample sizes that lower recognition accuracy on varied image roads and weaken generalization; and blurred water-land boundaries caused by high sediment content in the water, which affect binarization and edge extraction.

Description

Intelligent interpretation method for aerial survey image ground object elements
Technical Field
The application relates to the field of unmanned aerial vehicle aerial survey, in particular to the field of neural network model deep learning.
Background
Deep learning with neural network models is a new stage in the development of machine learning within artificial intelligence. Because it offers strong pre-trained models, strong model expressive capacity, and fast computation and inference, it can effectively solve problems such as characterizing the features of complex objects and analyzing the associations among complex factors, and it is currently among the best approaches to object extraction.
Meanwhile, UAV aerial survey enriches the variety of measurable elements, but element extraction in current aerial surveys still depends on inefficient manual identification and collection. Moreover, the workload of updating and mapping land-area topography in long-river channel survey projects is large, and the inefficient practice of manually extracting channel shoreline elements cannot meet the demand for rapid topographic updating.
Disclosure of Invention
The application provides an intelligent interpretation method of aerial survey image ground object elements.
The application solves the following problems at the present stage: occlusions interfere with road extraction, leaving road information incomplete and disconnected; small sample sizes from few sources reduce image road recognition accuracy and weaken generalization capability; and unclear water-land boundaries caused by high sediment content in the water strongly affect binarization and edge extraction.
According to an aspect of the embodiments of the present disclosure, there is provided an intelligent interpretation method of aerial survey image ground object elements, including:
acquiring aerial survey image data, and preprocessing the aerial survey image data to obtain test sample data;
inputting the test sample data into a first neural network model to obtain an output result;
and performing regularization on the output result using an algorithm to obtain the ground object elements.
According to another aspect of the disclosed embodiments, the regularization of the output result is performed using algorithms that include the Marching Cubes algorithm and the Douglas-Peucker algorithm.
According to another aspect of the embodiments of the present disclosure, wherein acquiring aerial survey image data and preprocessing the aerial survey image data to obtain test sample data includes: cutting, enhancing and adjusting the aerial survey image data to obtain test sample data.
In accordance with another aspect of embodiments of the present disclosure,
the Marching Cubes algorithm comprises rough adjustment and fine adjustment;
the rough adjustment is to adjust the area, the length and the angle of the output result;
the fine adjustment is an adjustment of the long side and the short side of the output result.
According to another aspect of the embodiments of the present disclosure, there is provided a method for extracting a water foam line, including:
acquiring aerial survey image data, and preprocessing the aerial survey image data to obtain test sample data;
inputting the test sample data into a first neural network model to obtain an output result;
filling the hole of the output result according to the Flood fill algorithm to obtain a partitioned water area;
and detecting and extracting the partitioned water area according to the Canny operator to obtain a water foam line result.
According to another aspect of the disclosed embodiments, the parameters of the Flood fill algorithm include a start node, a target color, and a replacement color; the Flood fill algorithm determines the nodes connected to the start node through paths of the target color and replaces the target color with the replacement color.
According to another aspect of the disclosed embodiments, the detecting and extracting the partitioned water area according to the Canny operator, to obtain the water foam line result, includes:
extracting the edges of the partitioned water area according to the Canny operator and smoothing those edges to obtain a smoothed result;
performing erosion and dilation on the smoothed result to obtain a processed result; and adding the processed result to the partitioned water area to obtain the water foam line result.
According to another aspect of the embodiments of the present disclosure, there is provided a training method of a neural network model, including:
acquiring aerial survey image data, and preprocessing the aerial survey image data to obtain test sample data;
dividing the test sample data to obtain training set sample data, verification set sample data and test set sample data;
inputting training set sample data into a DenseNeXt network in a second neural network model to perform feature extraction to obtain a first extracted feature;
respectively inputting the first extraction features into two ASPP modules and a bottleneck layer to obtain second extraction features, third extraction features, and bottleneck layer input features, and fusing the second and third extraction features to obtain fused features; adding the fused features and the bottleneck layer input features to obtain an output result;
and carrying out parameter adjustment on the second neural network model according to the output result and the verification set to obtain a super parameter, and determining the first neural network model according to the super parameter.
According to another aspect of the disclosed embodiments, wherein the dividing the test sample data to obtain the training set sample data, the validation set sample data, and the test set sample data includes:
the training data is used to solve for network parameters that minimize the loss function;
verification data is used to minimize overfitting;
the test data is used to test the classification ability of the first neural network model after the second neural network model training is completed.
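The three-way split described above can be sketched as follows; the 70/15/15 proportions and the shuffling seed are illustrative assumptions, since the patent does not state the actual ratios:

```python
import random

def split_samples(samples, train_frac=0.7, val_frac=0.15, seed=42):
    """Shuffle and split a list of sample tiles into train/validation/test sets.

    The 70/15/15 ratio is an illustrative assumption; the patent does not
    specify the split proportions.
    """
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test

train, val, test = split_samples(list(range(100)))
print(len(train), len(val), len(test))  # 70 15 15
```

The held-out test set is only touched once, after hyperparameter tuning on the validation set, which matches the roles the three subsets play above.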
According to another aspect of the disclosed embodiments, an intelligent interpretation system for aerial survey image ground object elements is provided, in which the extraction system for ground object element images interprets the ground object elements using any one of the methods described above.
The embodiment of the disclosure adopts the above technical scheme and has at least the following beneficial effects:
according to the embodiment of the disclosure, the learning ability of the neural network model to the aerial survey images with different sizes is enhanced by cutting the aerial survey images, and the problem that the aerial survey images cannot be directly used as training data of the neural network model because the aerial survey images are large is solved; by carrying out model change on the deep LabV3+ neural network model, the function of thinning the segmentation result is enhanced, meanwhile, feature extraction can be carried out on samples with any resolution, and richer semantic information is obtained, so that more road information is extracted, and the problems that interference blocking is formed on road extraction by a shelter and the road information is incomplete and not communicated are solved; through increasing the number of samples through the mode of data augmentation before model training, increase the various possibility of sample simultaneously, reach the purpose that improves discernment accuracy and generalization ability, solved sample volume inadequately, the source is few, leads to the problem that discernment accuracy reduces to various image roads, the generalization ability reduces.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of embodiments of the disclosure.
Drawings
In order to more clearly illustrate the embodiments of the present disclosure or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present disclosure, and other drawings may be obtained according to these drawings without inventive effort to a person of ordinary skill in the art.
Fig. 1 is an operation chart of the deep-learning-based automatic extraction of ground object elements from UAV images in an embodiment of the disclosure;
FIG. 2 is a diagram of an intelligent interpretation method of aerial survey image ground object elements in an embodiment of the disclosure;
FIG. 3 is a block diagram of the modified DeepLabV3+ neural network model in an embodiment of the present disclosure;
FIG. 4 is the bottleneck layer design of the DenseNeXt network in the modified DeepLabV3+ neural network model in an embodiment of the present disclosure;
FIG. 5 is an extraction technology roadmap of a feature element image in an embodiment of the disclosure;
FIG. 6 is a comparison before and after regularization in an embodiment of the present disclosure;
FIG. 7 is another comparison before and after regularization in an embodiment of the present disclosure;
FIG. 8 is a flow chart of a method of extracting a water foam line in an embodiment of the present disclosure;
FIG. 9 is an extraction result of a water foam line in an embodiment of the disclosure;
fig. 10 is a diagram of a training method of a neural network model in an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions, and advantages of the present application clearer, the technical solutions of the present application are described in detail below. It will be apparent that the described embodiments are only some, not all, of the embodiments of the application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments herein without inventive effort fall within the scope of the application as defined by the claims.
The terms "first," "second," "third," "fourth," and the like in embodiments of the present disclosure distinguish similar objects and do not necessarily describe a particular order or sequence. Furthermore, the terms "comprise" and "have," and their variations, are intended to cover non-exclusive inclusion: a process, method, system, article, or apparatus comprising a series of steps or elements is not necessarily limited to those explicitly listed, but may include other steps or elements not explicitly listed or inherent to it.
UAV aerial survey technology carrying a remote sensing platform has the advantages of low cost, simple data acquisition, high image resolution, strong timeliness, ease of operation, and few external influencing factors, and is attracting increasing attention. The close combination of UAVs and mapping provides a more efficient and simpler surveying mode than traditional mapping, and has become one of the new main forces in the field of aerial remote sensing. However, algorithms for detecting and processing aerial survey image information have developed slowly: conventional ground object element extraction relies mainly on time-consuming, labor-intensive manual acquisition, and the resulting ground object information is of low accuracy and cannot be fully utilized. How to extract ground object elements rapidly, intelligently, and automatically, and fully exploit the ground object information in aerial survey images, is therefore one of the difficulties that current remote sensing technology must solve. Meanwhile, most traditional methods extract ground object elements from aerial survey images using image information alone with conventional segmentation, classification, and edge detection techniques; these methods have great limitations and face many difficulties in practical application. The embodiments of the disclosure therefore introduce a new idea on top of the original algorithms: the concrete operation of the deep-learning-based automatic extraction of ground object elements from UAV images is shown in fig. 1.
Deep learning utilizes massive data and neural network models: by learning and training a neural network model on the attribute characteristics of the various ground objects in massive UAV images, the classification and prediction accuracy of the resulting detection model is continuously improved until it meets requirements. A deep learning model has strong generalization capability and a deeper network structure; it can abstract the inherent characteristics of data without manually designed description rules, avoiding the target feature selection problem of traditional detection, improving the model's ability to describe target features, and providing a more general and concise approach to the automatic extraction of ground object elements.
The embodiment of the disclosure provides an intelligent interpretation method for aerial survey image ground object elements, as shown in fig. 2, comprising the following steps:
s1, acquiring aerial survey image data, and preprocessing the aerial survey image data to obtain test sample data;
s2, inputting the test sample data into a first neural network model to obtain an output result;
the first neural network model is generated by training the second neural network to be an improved deep LabV3+ neural network model, and the current deep LabV3+ neural network model is insufficient. Firstly, the useful information is lost due to gradual reduction of the space dimension of input data in the characteristic extraction process of the coding end, and detail recovery can not be well realized during decoding; secondly, although the boundary extraction capability of the deep LabV3+ neural network model on the target can be improved by introducing the ASPP module, the relation between local features of the target cannot be completely simulated, so that the target segmentation has a cavitation phenomenon, and the accuracy of the target segmentation is reduced; finally, in order to pursue segmentation accuracy, xaccept with more network layers and larger parameter quantity is selected as a feature extraction network, and the convolution mode in the ASPP module is common convolution, so that the parameter quantity is further increased, the model depth is deepened, the parameter quantity is increased, the complexity of the model is increased, the requirement on hardware is higher, the network training difficulty is increased, the network training speed is slower, and the convergence is slower.
To improve the network segmentation performance, the existing DeepLabV3+ model is improved against these defects as follows:
aiming at the problem of large Xnaption network parameters extracted by the characteristics of the traditional deep LabV & lt3+ & gt model, a lightweight high-performance backbone network is provided, which is named as DenseNeXt and replaces the Xnaption network in the traditional deep LabV & lt3+ & gt. The proposed DenseNeXt network integrates the ConvNeXt design idea on the basis of the traditional DenseNet, so that the parameter calculation amount of the model is greatly reduced, the memory occupation is reduced, and the calculation speed of the model is improved; meanwhile, in order to further improve the extraction capability of the deep LabV3+ model on the high-level semantic features, after the features of the input image are extracted by the DenseNeXt network, the image features are fused by using two ASPP modules, so that more high-level semantic information is obtained, and the extraction of the edge features is enhanced.
The structure of the modified DeepLabV3+ model is shown in fig. 3 and includes the following:
the ratio of the 4-phase block stacks of the proposed DenseNeXt network is set to 1:1:3:1. The specific layer numbers of each stage are 8, 24 and 8 respectively. The proposed DenseNeXt network designs two branches, one for the depth separable convolution of a 7 x 7 size convolution kernel and the other for the depth separable convolution of a 3 x 3 size convolution kernel. And adding the output feature graphs, and finally splicing the output feature graphs with the input feature graphs of the bottleneck layer to serve as the output feature graphs of the bottleneck layer, so that the model can obtain the multi-scale feature extraction capability. The bottleneck layer of the DenseNeXt network design is shown in FIG. 4.
The improved deep LabV3+ neural network model in the embodiment of the disclosure greatly reduces the parameter calculation amount of the model in model training, reduces memory occupation, and improves the calculation speed of the model.
And S3, performing regularization on the output result using an algorithm to obtain the ground object elements.
In one possible embodiment, acquiring the aerial survey image data and preprocessing it to obtain test sample data includes:
cutting and enhancing the aerial survey image data and adjusting it to a consistent size to obtain test sample data.
Because aerial images are usually large and deep learning cannot support training on large-size data, the sample data is first cut into 512x512 blocks. To counter the overfitting caused by a small training set and insufficient feature extraction during training, the original aerial survey image dataset is augmented: images are expanded by horizontal flipping, vertical rotation, center cropping, random brightness-contrast changes, elastic transformation, Gaussian noise, channel transposition, and similar operations. Meanwhile, the samples are screened for blank areas, blurred images, and incomplete labeling.
The test sample data is then labeled manually to obtain test sample data labels.
The house and road identification in this project requires labeling the target areas on the original image so that the deep learning model can learn the characteristics of houses and roads and distinguish them from the other areas in the image. The production of data labels is therefore particularly important.
The method further comprises evaluating the accuracy of the extracted image: when the evaluation condition is met, the extracted image is output; when it is not, the extracted image is returned to the first neural network model for further deep learning until the evaluation condition is met and the extracted image is output. The technical roadmap for extracting ground object element images is shown in fig. 5.
In one possible implementation, boundary extraction is implemented with the Marching Cubes algorithm, which comprises a coarse adjustment that eliminates obvious errors in the segmentation polygons and a further fine adjustment of line directions and node positions, where:
the coarse adjustment determines a threshold for the output result and removes polygons whose area is below it; deletes edges shorter than a given length; removes acute angles using a threshold; and removes overly flat angles using a threshold;
the fine adjustment finds the long sides using a threshold and adds the direction of the longest side to a list of principal directions; adds the directions of the other sides to the list according to an angle threshold and adjusts the long and short sides according to the list and angles; merges parallel lines whose direct distance is below a threshold; and connects parallel lines whose distance exceeds the threshold.
After boundary extraction with the Marching Cubes algorithm, the Douglas-Peucker algorithm is finally used to simplify the polygons; comparisons before and after regularization are shown in figs. 6 and 7.
The embodiments of the disclosure adopt a two-step approach, the Marching Cubes algorithm followed by the Douglas-Peucker algorithm, eliminating the irregular boundaries and details along house edges and improving edge precision.
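A minimal sketch of the Douglas-Peucker simplification used in the second step; the tolerance value and the sample polyline are illustrative:

```python
def douglas_peucker(points, eps):
    """Douglas-Peucker polyline simplification: recursively keep the point
    farthest from the chord if it deviates by more than eps."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = (dx * dx + dy * dy) ** 0.5 or 1.0
    # perpendicular distance of each interior point to the chord
    best_i, best_d = 0, 0.0
    for i in range(1, len(points) - 1):
        px, py = points[i]
        d = abs(dy * (px - x1) - dx * (py - y1)) / norm
        if d > best_d:
            best_i, best_d = i, d
    if best_d <= eps:
        return [points[0], points[-1]]  # all points within tolerance
    left = douglas_peucker(points[:best_i + 1], eps)
    right = douglas_peucker(points[best_i:], eps)
    return left[:-1] + right  # drop the duplicated split point

line = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
print(douglas_peucker(line, 1.0))
```

For closed building outlines the polygon is treated as a polyline between two fixed corner vertices; points deviating less than the tolerance from the regularized edge are discarded, which removes the stair-step noise left by pixel-level segmentation.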
Traditional methods for extracting the water foam line include threshold segmentation, edge detection operators, data mining, and object-oriented methods. Such traditional image processing methods have problems such as a low degree of automation and results that depend on manually selected parameters.
Deep learning has the advantages of learning image features autonomously, avoiding manual intervention, and producing remarkable results, and is widely applied to aerial survey image classification, semantic segmentation, and related fields. Given the characteristics of the water edge, extracting it from aerial survey images is essentially a binary classification problem; a deep learning method can fully mine the information hidden in the original image and use the known data as far as possible to make the water edge extraction more accurate, providing a new approach to the water edge extraction problem.
For the automatic extraction of water foam lines, the embodiments of the disclosure provide an improved method based on the first neural network model: the water area image is segmented, the water region is extracted, and the water foam line is then extracted with a traditional edge detection algorithm combined with image morphology. The goal is to improve the accuracy and efficiency of extracting the water surface region and make the algorithm practical. The flow chart of the water foam line extraction method in the embodiment of the disclosure is shown in fig. 8 and includes:
acquiring aerial survey image data, and preprocessing the aerial survey image data to obtain test sample data;
inputting the test sample data into a first neural network model to obtain an output result;
filling the hole of the output result according to the Flood fill algorithm to obtain a partitioned water area;
because the water surface often contains ground objects such as ships, reefs and bridges, the water surface area identified by the neural network model is easy to have holes, and the subsequent extraction of the edges of the water foam lines is affected, the defect phenomenon of the water surface is repaired by adopting a Flood fill algorithm in the project.
Flood fill is a classical algorithm that extracts several connected points from one region to distinguish it from other neighboring regions; its idea resembles a flood spreading from one area to all reachable areas. The Flood fill algorithm accepts three parameters: a start node, a target color, and a replacement color. It traverses the nodes to find those connected to the start node through paths of the target color and changes their color to the replacement color. There are many ways to implement Flood fill, all of which explicitly or implicitly use a queue or a stack; depending on whether nodes diagonal to the current node are considered, the algorithm divides into a four-way variant, which ignores diagonal neighbors, and an eight-way variant, which includes them.
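The four-way variant can be sketched as a breadth-first fill; the grid and color values below are illustrative:

```python
from collections import deque

def flood_fill(grid, start, target, replacement):
    """Four-way flood fill: starting from `start`, recolor every node
    reachable through target-colored neighbors with `replacement`."""
    if target == replacement:
        return grid
    rows, cols = len(grid), len(grid[0])
    r0, c0 = start
    if grid[r0][c0] != target:
        return grid
    q = deque([start])
    grid[r0][c0] = replacement
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # 4-connectivity
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == target:
                grid[nr][nc] = replacement
                q.append((nr, nc))
    return grid

# 0 = hole inside the water mask (1); filling from a hole pixel closes it
mask = [[1, 1, 1],
        [1, 0, 1],
        [1, 0, 1]]
flood_fill(mask, (1, 1), 0, 1)
print(mask)  # [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
```

The eight-way variant only changes the neighbor offsets to include the four diagonals; using an explicit queue instead of recursion avoids stack overflow on large water masks.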
And detecting and extracting the partitioned water area according to the Canny operator to obtain a water foam line result.
On the basis of the segmented water surface region, its edge is extracted with the classical Canny edge detection operator, and an edge smoothing technique handles the non-smooth portions of the water foam line. The edge extracted by the Canny operator is eroded twice and dilated once, then overlaid on the original image, eliminating jagged and defective edge artifacts; the result is shown in fig. 9. For the segmented image, clear and complete water foam lines can be detected rapidly and effectively through the Canny edge detection operator and morphological processing.
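The erode-twice, dilate-once smoothing can be sketched on a binary mask as follows; the 3x3 structuring element and the toy mask are illustrative assumptions:

```python
import numpy as np

def erode(mask):
    """Binary erosion with a 3x3 structuring element (borders treated as 0)."""
    p = np.pad(mask, 1)
    out = np.ones_like(mask)
    for di in range(3):
        for dj in range(3):  # a pixel survives only if its whole 3x3 patch is 1
            out &= p[di:di + mask.shape[0], dj:dj + mask.shape[1]]
    return out

def dilate(mask):
    """Binary dilation with a 3x3 structuring element."""
    p = np.pad(mask, 1)
    out = np.zeros_like(mask)
    for di in range(3):
        for dj in range(3):  # a pixel turns on if any 3x3 neighbor is 1
            out |= p[di:di + mask.shape[0], dj:dj + mask.shape[1]]
    return out

def smooth_edge(mask):
    """Erode twice then dilate once, as the text describes, to strip
    jagged one-pixel artifacts from the extracted edge."""
    return dilate(erode(erode(mask)))

mask = np.zeros((9, 9), dtype=np.int64)
mask[1:8, 1:8] = 1
mask[0, 4] = 1  # one-pixel jag sticking out of the region
result = smooth_edge(mask)
print(result[0, 4], result[4, 4])  # 0 1
```

Because erosion is applied once more than dilation, the result is also slightly shrunk overall; in practice this opening-like operation trades a small, uniform contraction for the removal of jagged protrusions before the edge is overlaid on the original image.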
In the embodiment of the disclosure, the water foam line is extracted with a line-to-surface strategy: the extraction of a line segment is converted into the extraction of a surface area. The improved DeepLabV3+ is used to segment the water area image and extract the water surface area, and the water foam line is then extracted by combining a traditional edge detection algorithm with image morphology. The aim is to improve the accuracy and efficiency of water surface area extraction and to make the algorithm practical.
In a possible embodiment, a training method of a neural network model is provided, as shown in fig. 9, which includes:
acquiring aerial survey image data, and preprocessing the aerial survey image data to obtain test sample data;
dividing the test sample data to obtain training set sample data, verification set sample data and test set sample data;
inputting training set sample data into a DenseNeXt network in a second neural network model to perform feature extraction to obtain a first extracted feature;
respectively inputting the first extraction features into two ASPP models and a bottleneck layer to obtain second extraction features, third extraction features and bottleneck layer input features, and fusing the second extraction features and the third extraction features to obtain fused features; adding the fusion characteristics and bottleneck layer input characteristics to obtain an output result;
and carrying out parameter adjustment on the second neural network model according to the output result and the verification set to obtain hyperparameters, and determining the first neural network model according to the hyperparameters.
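The training steps above can be sketched end to end in Python. Everything here is a schematic stand-in: the 70/15/15 split ratios, the additive fusion of the two ASPP outputs, and the grid search over candidate hyperparameters are all assumptions, since the patent only names the steps without specifying them:

```python
import random
from itertools import product

def split_samples(samples, train=0.7, val=0.15, seed=42):
    """Divide the sample data into training / validation / test subsets
    (the 70/15/15 ratios are an assumption)."""
    items = list(samples)
    random.Random(seed).shuffle(items)
    n_train = int(len(items) * train)
    n_val = int(len(items) * val)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

def elementwise_add(a, b):
    """Element-wise sum of two equally shaped 2-D feature maps."""
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def decoder_output(aspp_a, aspp_b, bottleneck):
    """Fuse the two ASPP outputs, then add the bottleneck-layer input.
    Assumption: both steps are additive; the text only says 'fuse' and 'add'."""
    return elementwise_add(elementwise_add(aspp_a, aspp_b), bottleneck)

def select_hyperparams(param_grid, train_and_eval):
    """Keep the hyperparameter combination with the best validation score
    (higher is better) - a plain grid search standing in for the
    'parameter adjustment' step."""
    best_score, best_params = float("-inf"), None
    keys = sorted(param_grid)
    for values in product(*(param_grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = train_and_eval(params)
        if score > best_score:
            best_score, best_params = score, params
    return best_params
```

In practice `train_and_eval` would train the second neural network model on the training set and score it on the validation set; the selected hyperparameters then determine the first neural network model, whose classification capability is finally checked on the held-out test set.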
In a possible implementation manner, the embodiment of the disclosure utilizes the abbreviated-code drawing function of southern CASS to realize an automatic drawing process guided by abbreviated codes, and automatically generates the standard dwg drawing according to the ground feature element node coordinates extracted by deep learning, so that the research and development content can be realized efficiently and the drawing specifications can be met to the greatest extent.
According to the feature element coding specification of the southern CASS mapping software, attributes are automatically added and assigned to the extracted feature vectors of buildings, roads, water foam lines and the like: first the six georeferencing parameters are extracted from the TIF file, then the coordinate information of the feature outlines is calculated, each feature is stored in a list, and finally the result is stored in a dat file.
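A minimal sketch of this export step is given below. The six-parameter affine geotransform is the standard GeoTIFF convention; the `point_id,code,y,x,h` record layout is an assumption about the CASS-style dat format, not taken from the patent:

```python
import csv

def pixel_to_world(gt, col, row):
    """Apply the six georeferencing parameters of a GeoTIFF
    (origin_x, pixel_w, rot1, origin_y, rot2, pixel_h) to a pixel."""
    x = gt[0] + col * gt[1] + row * gt[2]
    y = gt[3] + col * gt[4] + row * gt[5]
    return x, y

def write_cass_dat(features, path):
    """Write feature outlines to a dat file, one vertex per record.
    `features` is a list of (feature_code, [(x, y), ...]) pairs; the
    record layout point_id,code,y,x,h is an assumed CASS-style format."""
    with open(path, "w", newline="") as fh:
        writer = csv.writer(fh)
        point_id = 1
        for code, outline in features:
            for x, y in outline:
                writer.writerow([point_id, code, f"{y:.3f}", f"{x:.3f}", "0.000"])
                point_id += 1
```

Each extracted outline vertex is mapped from pixel to world coordinates with `pixel_to_world` before being written, so the dat file can be imported into AutoCAD with its attributes attached.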
When the CAD one-key mapping function of the system developed in this project is started, the AutoCAD software is opened automatically, the generated dat file storing the vector attributes is imported into AutoCAD for automatic mapping, and the result can finally be saved as a dwg map, which facilitates the work of professionals.
In one possible implementation manner, the embodiment of the disclosure provides an intelligent interpretation system for aerial image ground feature elements, which is characterized in that the intelligent interpretation system for aerial image ground feature elements adopts any one of the methods to interpret the ground feature elements.
It should be appreciated that the steps of the flows shown above may be reordered, added or deleted in various forms. For example, the steps described in the embodiments of the present disclosure may be performed in parallel, sequentially or in a different order, so long as the desired result of the technical solution disclosed in the embodiments of the present disclosure can be achieved; the present disclosure is not limited herein.
The above detailed description should not be construed as limiting the scope of the embodiments of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions, improvements, etc. which are within the spirit and principles of the embodiments of the present disclosure are intended to be included within the scope of the embodiments of the present disclosure.

Claims (6)

1. An intelligent interpretation method for aerial survey image ground feature elements is characterized by comprising the following steps:
s1, acquiring aerial survey image data, and preprocessing the aerial survey image data to obtain test sample data;
step S2, inputting the test sample data into a first neural network model to obtain an output result, wherein the first neural network model specifically comprises:
dividing the test sample data to obtain training set sample data, verification set sample data and test set sample data;
inputting the training set sample data into a DenseNeXt network in a second neural network model to perform feature extraction to obtain a first extracted feature, wherein the DenseNeXt network blends ConvNeXt on the basis of DenseNet;
wherein the training data is used to solve for network parameters that minimize the loss function; the validation data is used to minimize overfitting; the test data are used for testing the classification capability of the first neural network model after the training of the second neural network model is finished;
respectively inputting the first extraction features into two ASPP models and a bottleneck layer to obtain second extraction features, third extraction features and bottleneck layer input features, and fusing the second extraction features and the third extraction features to obtain fusion features; adding the fusion characteristic and the bottleneck layer input characteristic to obtain an output result;
performing parameter adjustment on the second neural network model according to the output result and the verification set to obtain a hyperparameter, and determining a first neural network model according to the hyperparameter;
step S3, according to the output result, regularizing the output result by adopting an algorithm to obtain a ground feature element;
the algorithm comprises a Marching Cubes algorithm and a Douglas-Peucker algorithm.
2. The method of claim 1, wherein acquiring aerial survey image data and preprocessing the aerial survey image data to obtain test sample data comprises: and cutting, enhancing and adjusting the aerial survey image data to obtain the test sample data.
3. A method as claimed in claim 1, wherein,
the Marching Cubes algorithm comprises: coarse adjustment and fine adjustment;
the rough adjustment is to adjust the area, the length and the angle of the output result;
the fine adjustment is an adjustment of the long side and the short side of the output result.
4. The extraction method of the water foam line is characterized by comprising the following steps of:
acquiring aerial survey image data, and preprocessing the aerial survey image data to obtain test sample data;
inputting the test sample data into a first neural network model to obtain an output result, wherein the first neural network model specifically comprises:
dividing the test sample data to obtain training set sample data, verification set sample data and test set sample data;
inputting the training set sample data into a DenseNeXt network in a second neural network model to perform feature extraction to obtain a first extracted feature;
wherein the DenseNeXt network is integrated with ConvNeXt on the basis of DenseNet;
wherein the training data is used to solve for network parameters that minimize the loss function; the validation data is used to minimize overfitting; the test data are used for testing the classification capability of the first neural network model after the training of the second neural network model is finished;
respectively inputting the first extraction features into two ASPP models and a bottleneck layer to obtain second extraction features, third extraction features and bottleneck layer input features, and fusing the second extraction features and the third extraction features to obtain fusion features; adding the fusion characteristic and the bottleneck layer input characteristic to obtain an output result;
performing parameter adjustment on the second neural network model according to the output result and the verification set to obtain a hyperparameter, and determining a first neural network model according to the hyperparameter;
filling the hole of the output result according to the Flood fill algorithm to obtain a partitioned water area;
and detecting and extracting the partitioned water area according to the Canny operator to obtain a water foam line result.
5. The method of claim 4, wherein the parameters of the Flood fill algorithm include a start node, a target color and a replacement color, and wherein the Flood fill algorithm comprises determining the nodes that are connected to the start node through a path of the target color, and replacing the target color of those nodes with the replacement color.
6. The method of claim 4, wherein the detecting and extracting the partitioned water area according to a Canny operator to obtain a water spray line result comprises:
extracting the edge of the partitioned water area according to the Canny operator, and performing smoothing treatment on the edge of the partitioned water area to obtain a smoothing result;
performing erosion and dilation treatment according to the smoothing result to obtain a processing result; and superimposing the processing result on the partitioned water area to obtain a water foam line result.
CN202310571985.2A 2023-05-22 2023-05-22 Intelligent interpretation method for aerial survey image ground object elements Active CN116343070B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310571985.2A CN116343070B (en) 2023-05-22 2023-05-22 Intelligent interpretation method for aerial survey image ground object elements

Publications (2)

Publication Number Publication Date
CN116343070A CN116343070A (en) 2023-06-27
CN116343070B true CN116343070B (en) 2023-10-13

Family

ID=86882625

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310571985.2A Active CN116343070B (en) 2023-05-22 2023-05-22 Intelligent interpretation method for aerial survey image ground object elements

Country Status (1)

Country Link
CN (1) CN116343070B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103954897A (en) * 2014-05-20 2014-07-30 电子科技大学 Intelligent power grid high voltage insulation damage monitoring system and method based on ultraviolet imagery
CN108647648A (en) * 2018-05-14 2018-10-12 电子科技大学 A kind of Ship Recognition system and method under visible light conditions based on convolutional neural networks
CN109903304A (en) * 2019-02-25 2019-06-18 武汉大学 A kind of contour of building automatic Extraction Algorithm based on convolutional Neural metanetwork and polygon regularization
CN113344953A (en) * 2021-04-21 2021-09-03 中国计量大学 Unmanned aerial vehicle-based machine vision tidal bore flow velocity measurement method
CN115331102A (en) * 2022-07-29 2022-11-11 长江空间信息技术工程有限公司(武汉) Remote sensing image river and lake shoreline intelligent monitoring method based on deep learning
WO2023039959A1 (en) * 2021-09-17 2023-03-23 海南大学 Remote sensing image marine and non-marine area segmentation method based on pyramid mechanism
CN116091951A (en) * 2023-04-07 2023-05-09 华南农业大学 Method and system for extracting boundary line between farmland and tractor-ploughing path
CN116129111A (en) * 2022-12-22 2023-05-16 上海欣能信息科技发展有限公司 Power line semantic segmentation method for improving deep Labv3+ model

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11282208B2 (en) * 2018-12-24 2022-03-22 Adobe Inc. Identifying target objects using scale-diverse segmentation neural networks

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CD_HIEFNet: Cloud Detection Network Using Haze Optimized Transformation Index and Edge Feature for Optical Remote Sensing Imagery; Guo, Q et al.; MDPI; pp. 1-25 *
Design and Implementation of an Intelligent Remote Sensing Image Interpretation System; Wang Yiming; China Master's Theses Electronic Journal; pp. 22-60 *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant