CN109242015B - Water area detection method and device based on visual monitoring of air-based platform - Google Patents


Info

Publication number
CN109242015B
CN109242015B (application CN201810997602.7A)
Authority
CN
China
Prior art keywords
water area
area
convolution
porous
output
Prior art date
Legal status
Active
Application number
CN201810997602.7A
Other languages
Chinese (zh)
Other versions
CN109242015A (en
Inventor
曹先彬
甄先通
李岩
沈佳怡
Current Assignee
Beihang University
Original Assignee
Beihang University
Priority date
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN201810997602.7A priority Critical patent/CN109242015B/en
Publication of CN109242015A publication Critical patent/CN109242015A/en
Application granted granted Critical
Publication of CN109242015B publication Critical patent/CN109242015B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06F18/2411 — Classification techniques based on the proximity to a decision surface, e.g. support vector machines
    • G06F18/253 — Fusion techniques of extracted features
    • G06N3/045 — Neural networks; combinations of networks
    • G06T7/11 — Region-based segmentation
    • G06T7/62 — Analysis of geometric attributes of area, perimeter, diameter or volume

Abstract

The invention discloses a water area detection method and device based on visual monitoring by an air-based platform, and relates to the technical field of image processing. The device comprises an image collector, a CNN feature extractor, a branching device of a porous (atrous, i.e. dilated) convolution network, a feature fusion device, a classifier and an area converter. Firstly, a convolutional neural network extracts the high-level semantic information in the picture of the water area to be monitored to obtain an output feature map, which is input into the porous convolution network. Then, according to the edge information of the image, the hyperspectral information of all pixel points in each region is averaged to obtain the expected feature of that region. A linear SVM classifier training model classifies the sample vectors of expected features and outputs the classification result, giving the water area S of the image. The real area of the water region is then calculated from the legend parameter gamma and the image water area S. The invention has strong robustness, can monitor changes in the real area of the water region, and can help predict natural disasters.

Description

Water area detection method and device based on visual monitoring of air-based platform
Technical Field
The invention relates to the technical field of image processing, in particular to a water area detection method and device based on visual monitoring by an air-based platform.
Background
Water area monitoring refers to monitoring and extracting water regions from aerial images combined with specific geographic information data. It can measure the size of water bodies such as waterfalls, lakes, rivers and reservoirs in real time, so that natural disasters such as rainstorms, debris flows and droughts can be monitored and predicted, helping people take protective measures in time to avoid harm; it therefore has very important practical significance.
Traditional water area monitoring methods consume huge manpower and material resources. A monitoring method based on an air-based platform visual monitoring system is mobile, provides large data volumes and updates quickly, so it can effectively cope with large monitoring areas and dispersed monitoring targets; the technology therefore has great advantages for measuring the water body area of large-range water regions.
With the development of the related technology of the near space vehicle and the test flight verification of the airship and other platforms in recent years, the air-based platform visual monitoring system can carry task loads such as remote sensing, imaging and communication, and provides more comprehensive, timely and clear monitoring pictures for technicians. However, due to the influence of weather conditions, cloud cover shielding and illumination conditions, the common visible light picture cannot meet the requirement of monitoring the area of the water area.
Disclosure of Invention
The invention introduces hyperspectral pictures to monitor changes in the water area, and in particular provides a water area detection method and device based on visual monitoring by an air-based platform, aiming to solve the high difficulty of existing water area monitoring.
The water area detection method based on visual monitoring by an air-based platform comprises the following specific steps:
step one, obtaining a picture of a certain water area to be monitored, extracting the high-level semantic information in the picture with a Convolutional Neural Network (CNN), and obtaining an output feature map;
the picture of the area of the water area to be monitored is an RGB image of 3 channels, and the size is H x W3; h is the height of the picture and W is the width of the picture. Sequentially passing the picture of the area of the water area to be monitored through three convolution layers connected in series by a convolution neural network; three convolutional layers have the same size of 3 x 3 convolutional kernel: obtaining a characteristic graph with the size H x W x 64 through the first convolution layer; passing through the second convolution layer to obtain a feature size H W128; and (4) obtaining a final output characteristic diagram with the size of H x W256 and the number of output channels of 256 through the third convolution layer.
The high-level semantic information is the fuzzy edge information of the different regions in the picture of the water area to be monitored; more accurate edge information is subsequently obtained through the porous convolution network.
And step two, inputting the output feature map into a porous convolution network to carry out porous convolution over multiple branches with different receptive fields.
The porous convolution network convolves the output feature map with five different receptive fields: a 1 x 1 convolution kernel, and 3 x 3 porous convolution kernels with dilation rates 2, 4, 8 and 16 respectively. This yields 5 porous-convolution output feature maps, each of size H x W x 256 with 256 output channels.
Step three, fusing the different receptive-field features of the porous convolution network's output feature maps to obtain an output feature map with 1 channel;
the method specifically comprises the following steps:
firstly, the output feature maps of the 5 porous convolution branches are concatenated along the channel dimension to obtain a feature map of size H x W x 1280.
Then, the H x W x 1280 feature map passes sequentially through three convolution layers connected in series in a fusion network; each convolution kernel is 3 x 3.
The first convolution layer yields a feature map of size H x W x 128;
the second convolution layer yields a feature map of size H x W x 64;
and the third convolution layer yields a feature map of size H x W x 1.
Finally, the fusion network outputs a feature map of size H x W x 1 with 1 channel.
The output feature map with 1 channel contains the edge information of the original image, which is thereby divided into regions of different sizes.
And step four, according to the edge information of the 1-channel output feature map and the divided regions of different sizes, averaging the hyperspectral information of all pixel points in each region to obtain the expected feature of that region, used as a sample vector for final classification.
The method specifically comprises the following steps:
firstly, the spectral features in the 0.4–2.5 μm band are selected as the objects of classification;
then, the spectral features of each pixel point in the 0.4–2.5 μm band are extracted.
Finally, according to the divided regions of different sizes, the spectral features of all pixel points in each region are averaged to obtain the expected feature of each region, which represents the characteristics of that region.
And step five, training a model with a linear SVM classifier, classifying the sample vectors of expected features, outputting water/non-water classification results, and obtaining the water area S of the original image.
The linear SVM classifier training model essentially turns the division of regions into a binary classification model whose result is 0 or 1: 0 indicates the expected feature comes from a non-water region, and 1 indicates it comes from a water region.
And finally, counting all segments whose classification result is 1 and calculating the water area S in the original image from the number of pixel points in each segment.
And step six, calculating the real area of the water region according to the legend parameter gamma, obtained from the image collector parameters and the parameters of the air-based platform visual monitoring system, and the water area S of the original image.
Calculating the real area S' of the water area part according to the output water area S, wherein the specific calculation formula is as follows:
S′=gamma2*S。
and step seven, monitoring changes in the real area S′ of the water region and predicting some natural disasters in advance.
The water area detection device based on the visual monitoring of the air-based platform comprises an image collector, a CNN characteristic extractor, a branching device of a porous convolution network, a characteristic fusion device, a classifier and an area converter;
the branching device of the porous convolution network comprises 5 branching devices which are respectively: 1 × 1 convolution kernel, 2 ratio of 3 × 3 porous convolution kernels, 4 ratio of 3 × 3 porous convolution kernels, 8 ratio of 3 × 3 porous convolution kernels, and 16 ratio of 3 × 3 porous convolution kernels.
The image collector captures images of the water area to be monitored, obtains the picture to be monitored, and transmits it to the CNN feature extractor. The CNN feature extractor processes the picture with a preset monitoring model, extracting the high-level semantic information in the picture to obtain an output feature map. The 5 branching devices of the porous convolution network then convolve the output feature map with different receptive fields, obtaining 5 branch output feature maps. The feature fusion device concatenates the branch output feature maps of different receptive fields and convolves the result again to obtain an output feature map containing edge information, which divides the original image into different regions. The expected feature of each region, the mean of its hyperspectral information, is computed as a classification sample vector; the linear SVM classifier classifies the expected features into water and non-water regions, and the water area of the original image is counted. Finally, the area converter calculates the real area of the water region from the legend parameter gamma, obtained from the camera parameters and the parameters of the air-based platform visual monitoring system, and the image water area; natural disasters can be predicted from changes in the real area.
The invention has the advantages that:
1. The water area detection method based on visual monitoring by an air-based platform effectively utilizes the hyperspectral pictures acquired by the air-based platform visual monitoring system and fully combines the spatial and spectral information they contain, achieving the purpose of monitoring changes in the water area.
2. In the water area detection method based on visual monitoring by an air-based platform, feature maps with different receptive fields are fused through multi-branch porous convolution, yielding highly robust edge information; this makes the image segmentation more accurate and lays a technical foundation for the subsequent accurate calculation of the water area.
3. The water area detection device based on visual monitoring by an air-based platform implements the algorithm as six modules: an image collector, a CNN feature extractor, a porous convolution branching device, a feature fusion device, a classifier and an area converter. This modular design reduces the complexity of the physical implementation and makes it convenient for operators to debug the sub-modules locally.
Drawings
FIG. 1 is a flow chart of the water area detection method based on visual monitoring by an air-based platform provided by the invention;
fig. 2 is a schematic diagram of the water area detection device based on visual monitoring by an air-based platform provided by the invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples.
The invention provides a water area detection method and device based on visual monitoring by an air-based platform. All pixel points of the original image are grouped according to the division given by the edge information. To obtain a stable classification result, the expectation of the spectral information of each group of pixel points is taken as the sample vector for final classification. A linear SVM classifier is then used to train the model, predict water and non-water regions, and output the classification result. Finally, the real area of the water region is calculated from the legend parameter gamma obtained from the camera parameters, the parameters of the air-based platform visual monitoring system, and so on.
The water area detection method based on visual monitoring by an air-based platform, as shown in fig. 1, comprises the following specific steps:
the method comprises the steps of firstly, obtaining a picture to be monitored, extracting high-level semantic information in the picture based on a Convolutional Neural Network (CNN), and obtaining an output characteristic diagram.
The picture of the water area to be monitored is a 3-channel RGB image of size H x W x 3, where H is the height of the picture and W is the width. The convolutional neural network passes the picture through three convolution layers connected in series; all three convolution layers use 3 x 3 convolution kernels. The first convolution layer yields a feature map of size H x W x 64; the second convolution layer yields a feature map of size H x W x 128; and the third convolution layer yields the final output feature map of size H x W x 256, with 256 output channels.
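As a shape check of the three serial convolution layers just described, the channel widths 64/128/256 and the preserved H x W spatial size can be traced with a small helper. This is a hypothetical sketch: the example input size 240 x 320 and the "same" padding of 1 are assumptions, not stated in the patent.

```python
def conv_out_shape(h, w, c_out, kernel=3, padding=1, stride=1):
    """Output shape of a square convolution; padding=1 keeps H x W for 3 x 3 kernels."""
    h_out = (h + 2 * padding - kernel) // stride + 1
    w_out = (w + 2 * padding - kernel) // stride + 1
    return (h_out, w_out, c_out)

H, W = 240, 320              # assumed example input size
shape = (H, W, 3)            # 3-channel RGB input, H x W x 3
for c_out in (64, 128, 256):  # channel widths from the text
    shape = conv_out_shape(shape[0], shape[1], c_out)
print(shape)                 # final output feature map: H x W x 256
```

With padding 1 every layer preserves the spatial size, so only the channel count changes from layer to layer.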
The high-level semantic information is the fuzzy edge information of the different regions in the picture of the water area to be monitored; more accurate edge information is subsequently obtained through the porous convolution network.
And step two, inputting the output feature map into a porous convolution network to carry out porous convolution over multiple branches with different receptive fields.
The porous convolution network comprises one conventional convolution branch and 4 porous convolution branches: a 1 x 1 convolution kernel, and 3 x 3 porous convolution kernels with dilation rates 2, 4, 8 and 16. These convolve the output feature map with five different receptive fields, yielding 5 porous-convolution output feature maps, each of size H x W x 256 with 256 output channels.
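The "ratio" of a porous (atrous) convolution is its dilation rate, and the standard identity k_eff = k + (k - 1)(rate - 1) gives the effective spatial extent of the dilated kernel; this identity is general background, not a formula from the patent, but it shows how the five branches see increasingly large neighborhoods:

```python
def effective_kernel(k, rate):
    """Effective spatial extent of a k x k porous (atrous) convolution kernel."""
    return k + (k - 1) * (rate - 1)

# the five branches: plain 1 x 1, then 3 x 3 with dilation rates 2, 4, 8, 16
branches = [(1, 1), (3, 2), (3, 4), (3, 8), (3, 16)]
extents = [effective_kernel(k, r) for k, r in branches]
print(extents)  # [1, 5, 9, 17, 33]
```

The widest branch thus covers a 33 x 33 neighborhood while still using only nine weights, which is why the multi-branch design captures edges at several scales cheaply.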
And step three, fusing the different receptive-field features of the porous convolution network's output feature maps to obtain an output feature map with 1 channel.
The method specifically comprises the following steps:
firstly, the output feature maps of the 5 porous convolution branches are concatenated along the channel dimension to obtain a feature map of size H x W x 1280. Then, the H x W x 1280 feature map passes sequentially through three convolution layers connected in series in a fusion network; each convolution kernel is 3 x 3.
The first convolution layer yields a feature map of size H x W x 128;
the second convolution layer yields a feature map of size H x W x 64;
and the third convolution layer yields a feature map of size H x W x 1.
Finally, the fusion network outputs a feature map of size H x W x 1 with 1 channel.
The output feature map with 1 channel contains the edge information of the original image, which is thereby divided into regions of different sizes.
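The concatenation at the start of the fusion network can be illustrated with NumPy; this is a sketch on made-up zero data and a made-up small spatial size, with only the channel counts mirroring the text:

```python
import numpy as np

H, W = 8, 8  # made-up small spatial size for illustration
# 5 branch output feature maps, each H x W x 256
branch_maps = [np.zeros((H, W, 256)) for _ in range(5)]
fused_in = np.concatenate(branch_maps, axis=-1)  # spliced along channels
print(fused_in.shape)  # the H x W x 1280 input to the fusion convolutions
```

The three 3 x 3 fusion convolutions then reduce the 1280 channels to 128, 64, and finally 1.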
And step four, according to the edge information of the 1-channel output feature map and the divided regions of different sizes, averaging the hyperspectral information of all pixel points in each region to obtain the expected feature of that region, used as a sample vector for final classification.
The method specifically comprises the following steps:
firstly, the spectral features in the 0.4–2.5 μm band are selected as the objects of classification;
research shows that the spectral characteristics of water are mainly determined by the material composition of the water itself and are influenced by various states of the water. Relatively pure natural surface water absorbs electromagnetic waves in the 0.4–2.5 μm band significantly more strongly than most other surface materials. To obtain a stable classification result, the spectral features in and near the 0.4–2.5 μm band are selected as the objects of classification.
Then, the spectral features of each pixel point in the 0.4–2.5 μm band are extracted.
Finally, according to the divided regions of different sizes, the spectral features of all pixel points in each region are averaged to obtain the expected feature of each region, which represents the characteristics of that region.
And obtaining the expected features of each region according to the edge information of the output feature map with the channel number of 1, and using the expected features as a sample vector for final classification.
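The per-region averaging of step four amounts to a group-by mean over segment labels. A minimal NumPy sketch follows; the 2 x 3 label map and the 4-band spectra are made-up illustration data, not values from the patent:

```python
import numpy as np

# labels[i, j]: segment index of pixel (i, j); spectra[i, j]: its band vector
labels = np.array([[0, 0, 1],
                   [0, 1, 1]])
spectra = np.arange(2 * 3 * 4, dtype=float).reshape(2, 3, 4)  # 4 bands per pixel

def region_means(labels, spectra):
    """Expected feature (mean spectral vector) of each segment."""
    return {int(r): spectra[labels == r].mean(axis=0)
            for r in np.unique(labels)}

feats = region_means(labels, spectra)  # one sample vector per region
```

Each value in `feats` is one sample vector for the classifier of step five.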
And step five, utilizing a linear Support Vector Machine (SVM) classifier training model to classify the sample vectors with the expected characteristics, outputting classification results of the water area and the non-water area, and obtaining the water area S.
The classifier parameter C of the linear SVM classifier is set to 100; the model is trained on training-set data to obtain a classification model, which is then tested on test-set data to obtain the final classification result. The training set is a data set of labeled expected features, and the test set is a data set of unlabeled expected features.
The linear SVM classifier training model essentially turns the division of regions into a binary classification model whose result is 0 or 1: 0 indicates the expected feature comes from a non-water region, and 1 indicates it comes from a water region.
And finally, counting all segments whose classification result is 1 and calculating the water area S in the original image from the number of pixel points in each segment.
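The patent trains an off-the-shelf linear SVM with C = 100. As an illustrative stand-in (not the patent's implementation), a minimal hinge-loss sub-gradient trainer on toy separable data; the data, learning rate, and epoch count are all assumptions made for the example:

```python
import numpy as np

def train_linear_svm(X, y, C=100.0, lr=1e-3, epochs=1000):
    """Sub-gradient descent on 0.5*||w||^2 + (C/n) * sum(hinge loss); y in {-1, +1}."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        viol = y * (X @ w + b) < 1                # margin violators
        w -= lr * (w - C * (y[viol, None] * X[viol]).sum(axis=0) / n)
        b -= lr * (-C * y[viol].sum() / n)
    return w, b

# toy separable data: a sample is "water" (+1) iff its mean feature exceeds 0.5
rng = np.random.default_rng(0)
X = rng.random((40, 3))
y = np.where(X.mean(axis=1) > 0.5, 1, -1)
w, b = train_linear_svm(X, y)
accuracy = float((np.sign(X @ w + b) == y).mean())
```

In practice a library solver would replace the hand-rolled loop; the point is only that each region's expected feature becomes one row of X and the sign of X @ w + b gives the water/non-water decision.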
And step six, calculating the real area of the water region according to the legend parameter gamma, obtained from the image collector parameters, the air-based platform visual monitoring system parameters and the like, and the water area S of the original image.
Calculating the real area S' of the water area part according to the output water area S, wherein the specific calculation formula is as follows:
S′=gamma2*S。
and step seven, monitoring changes in the real area S′ of the water region and predicting some natural disasters in advance.
The water area detection device based on visual monitoring by an air-based platform, as shown in fig. 2, comprises an image collector, a CNN feature extractor, a porous convolution branching device, a feature fusion device, a classifier and an area converter.
The image collector captures images of the water area to be monitored, obtains the picture to be monitored, and transmits it to the CNN feature extractor; the picture to be monitored is a hyperspectral picture;
the CNN feature extractor monitors the picture to be monitored by using a preset monitoring model to obtain a monitoring result, and extracts high-level semantic information in the picture to obtain an output feature map.
The 5 porous convolution branching devices perform porous convolutions with different receptive fields on the output feature map of the CNN feature extractor, obtaining 5 branch output feature maps.
The feature fusion device concatenates the branch output feature maps of different receptive fields along the channel dimension and then performs convolution operations to obtain the edge information of the original image.
The linear SVM classifier takes the expected features of the different regions as input samples and classifies them, obtaining the water/non-water discrimination result.
The area converter converts the water area of the picture into the real area of the water region using the known legend parameter gamma.
The working process of the water area detection device based on visual monitoring by an air-based platform is as follows:
the image collector collects images of the area of the water area to be monitored, obtains pictures to be monitored and transmits the pictures to the CNN characteristic extractor; the CNN feature extractor monitors the picture to be monitored by using a preset monitoring model to obtain a monitoring result, and extracts high-level semantic information in the picture to obtain an output feature map. And then 5 branching devices of the porous convolution network perform convolution on different fields of the output characteristic diagram to obtain 5 branching output characteristic diagrams. And splicing the output feature maps convolved with different receptive fields through a feature fusion device, convolving the spliced output feature maps again to obtain an output feature map containing edge information, and dividing the original image into different areas. And calculating expected features of the hyperspectral information mean values of different areas as classification sample vectors, classifying the expected features through a linear SVM classifier to obtain discrimination results of the water areas and non-water areas, and counting the water area of the original image. And finally, calculating the real area of the water area part by the area converter according to the legend parameter gamma obtained by the camera parameter and the parameters of the visual monitoring system of the empty foundation platform and the water area of the original image, and predicting natural disasters according to the change of the real area.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (6)

1. A water area detection method based on visual monitoring by an air-based platform, characterized by comprising the following specific steps:
step one, obtaining a picture of a certain water area to be monitored, extracting the high-level semantic information in the picture with a Convolutional Neural Network (CNN), and obtaining an output feature map;
the high-level semantic information is fuzzy edge information of different areas in the picture of the area of the water area to be monitored;
step two, inputting the output feature map into a porous convolution network to carry out porous convolution over multiple branches with different receptive fields;
the porous convolution network convolves the output feature map with five different receptive fields: a 1 x 1 convolution kernel, and 3 x 3 porous convolution kernels with dilation rates 2, 4, 8 and 16 respectively, obtaining 5 porous-convolution output feature maps, each of size H x W x 256 with 256 output channels;
step three, fusing the different receptive-field features of the porous convolution network's output feature maps to obtain an output feature map with 1 channel;
the output feature map with 1 channel contains the edge information of the original image, which is thereby divided into regions of different sizes;
step four, according to the edge information of the 1-channel output feature map and the divided regions of different sizes, averaging the hyperspectral information of all pixel points in each region to obtain the expected feature of that region, used as a sample vector for final classification;
step five, training a model with a linear SVM classifier, classifying the sample vectors of expected features, outputting water/non-water classification results, and obtaining the water area S;
the linear SVM classifier training model essentially turns the division of regions into a binary classification model whose result is 0 or 1; 0 indicates the expected feature comes from a non-water region, and 1 indicates it comes from a water region;
finally, counting all segments whose classification result is 1 and calculating the water area S in the original image from the number of pixel points in each segment;
step six, calculating the real area of the water region according to the legend parameter gamma, obtained from the image collector parameters and the parameters of the air-based platform visual monitoring system, and the water area S of the original image;
calculating the real area S' of the water area part according to the output water area S, wherein the specific calculation formula is as follows:
S′ = gamma² * S
and step seven, monitoring changes in the real area S′ of the water region and predicting some natural disasters in advance.
2. The water-area detection method based on visual monitoring from an air-based platform as claimed in claim 1, wherein step one is specifically:
the picture of the water area to be monitored is a 3-channel RGB image of size H × W × 3, where H is the height and W is the width of the picture; the picture is passed sequentially through three serially connected convolution layers of a convolutional neural network, all with 3 × 3 convolution kernels: the first convolution layer yields a feature map of size H × W × 64; the second convolution layer yields a feature map of size H × W × 128; the third convolution layer yields the final output feature map of size H × W × 256, with 256 output channels.
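Since the claim keeps the spatial size H × W unchanged through all three layers, the convolutions are implicitly stride-1 with 'same' padding; the shape bookkeeping can be sketched as follows (the 64 × 64 input size is an illustrative assumption):

```python
def conv_same_shape(h, w, c_in, c_out, k=3):
    """Output shape of a k x k convolution with stride 1 and 'same' padding:
    the spatial size is preserved; only the channel count changes.
    (k and c_in affect the layer's weights, not its output shape.)"""
    return (h, w, c_out)

shape = (64, 64, 3)           # toy H x W x 3 RGB input (size assumed)
for c_out in (64, 128, 256):  # the three serial convolution layers
    shape = conv_same_shape(*shape, c_out=c_out)
print(shape)  # final output feature map: H x W x 256
```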
3. The water-area detection method based on visual monitoring from an air-based platform as claimed in claim 1, wherein step three is specifically:
firstly, the output feature maps of the 5 porous convolution branches are spliced along the channel dimension to obtain a feature map of size H × W × 1280; this feature map is then passed sequentially through three serially connected convolution layers of a fusion network, all with 3 × 3 convolution kernels:
the first convolution layer yields a feature map of size H × W × 128;
the second convolution layer yields a feature map of size H × W × 64;
the third convolution layer yields a feature map of size H × W × 1;
finally, the fusion network outputs an output feature map of size H × W × 1, with 1 channel.
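The splicing step can be sketched with NumPy (the 32 × 32 spatial size is an illustrative assumption): five H × W × 256 branch outputs concatenated along the channel axis give H × W × 1280, which the fusion convolutions then squeeze down to a single channel:

```python
import numpy as np

H, W = 32, 32  # toy spatial size (assumed)
# Five branch outputs of the porous convolution network, each H x W x 256
branches = [np.zeros((H, W, 256)) for _ in range(5)]
fused = np.concatenate(branches, axis=-1)  # splice along the channel axis
print(fused.shape)  # (32, 32, 1280)
# The fusion network's 3 x 3 convolutions then reduce the channel count
# 1280 -> 128 -> 64 -> 1 while keeping the spatial size H x W.
```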
4. The water-area detection method based on visual monitoring from an air-based platform as claimed in claim 1, wherein step four is specifically:
firstly, the spectral features in the 0.4–2.5 μm band are selected as the classification objects;
then, the 0.4–2.5 μm spectral features of each pixel point are extracted;
finally, according to the divided regions of different sizes, the spectral features of all pixel points in each region are averaged to obtain the expected feature of that region, which represents the region.
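The per-region averaging above can be sketched in NumPy (the 2 × 2 image, 3 bands, and two regions are illustrative assumptions):

```python
import numpy as np

def region_expected_features(spectra, region_labels):
    """Average the per-pixel spectral features inside each segmented region.

    spectra: H x W x B array of band features (e.g. the 0.4-2.5 um bands);
    region_labels: H x W integer map from the edge-based segmentation.
    Returns {region_id: mean feature vector}, the 'expected features'.
    """
    features = {}
    for rid in np.unique(region_labels):
        mask = region_labels == rid
        features[int(rid)] = spectra[mask].mean(axis=0)
    return features

# Toy 2 x 2 image with 3 bands, split into two regions (left / right)
spectra = np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
                    [[2.0, 2.0, 2.0], [4.0, 4.0, 4.0]]])
labels = np.array([[0, 1],
                   [0, 1]])
feats = region_expected_features(spectra, labels)
print(feats[0], feats[1])  # [1.5 1.5 1.5] [3.5 3.5 3.5]
```

Each returned vector is one sample vector for the linear SVM of step five.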
5. A detection device applying the water-area detection method based on visual monitoring from an air-based platform, characterized by comprising an image collector, a CNN feature extractor, the branches of a porous convolution network, a feature fusion device, a classifier and an area converter;
the image collector captures images of the water area to be monitored, obtains the pictures to be monitored and transmits them to the CNN feature extractor; the CNN feature extractor processes the picture with a preset monitoring model, extracting the high-level semantic information of the picture to obtain an output feature map; the 5 branches of the porous convolution network then convolve the output feature map with different receptive fields to obtain 5 branch output feature maps; the feature fusion device splices the output feature maps of the different receptive fields and convolves the spliced result again to obtain an output feature map containing edge information, which divides the original image into different regions; the mean of the hyperspectral information of each region is taken as its expected feature and used as the classification sample vector; the expected features are classified by a linear SVM classifier to discriminate water areas from non-water areas, and the water area of the original image is counted; finally, the area converter calculates the real area of the water-area part from the legend parameter γ, obtained from the camera parameters and the parameters of the air-based platform visual monitoring system, together with the water area of the original image, and natural disasters are predicted from changes in the real area.
6. The water-area detection device based on visual monitoring from an air-based platform as claimed in claim 5, wherein the porous convolution network comprises 5 branches, respectively: a 1 × 1 convolution kernel, a 3 × 3 porous convolution kernel with rate 2, a 3 × 3 porous convolution kernel with rate 4, a 3 × 3 porous convolution kernel with rate 8, and a 3 × 3 porous convolution kernel with rate 16.
CN201810997602.7A 2018-08-29 2018-08-29 Water area detection method and device based on visual monitoring of air-based platform Active CN109242015B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810997602.7A CN109242015B (en) 2018-08-29 2018-08-29 Water area detection method and device based on visual monitoring of air-based platform

Publications (2)

Publication Number Publication Date
CN109242015A CN109242015A (en) 2019-01-18
CN109242015B true CN109242015B (en) 2020-04-10

Family

ID=65068127

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810997602.7A Active CN109242015B (en) 2018-08-29 2018-08-29 Water area detection method and device based on visual monitoring of air-based platform

Country Status (1)

Country Link
CN (1) CN109242015B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111982031B (en) * 2020-08-24 2021-12-31 衡阳市大雁地理信息有限公司 Water surface area measuring method based on unmanned aerial vehicle vision
CN113344885A (en) * 2021-06-15 2021-09-03 温州大学 River floating object detection method based on cascade convolution neural network

Citations (4)

Publication number Priority date Publication date Assignee Title
CN101114023A (en) * 2007-08-28 2008-01-30 北京交通大学 Lake and marshland flooding remote sense monitoring methods based on model
CN103679728A (en) * 2013-12-16 2014-03-26 中国科学院电子学研究所 Water area automatic segmentation method of SAR image of complicated terrain and device
CN107679477A (en) * 2017-09-27 2018-02-09 深圳市未来媒体技术研究院 Face depth and surface normal Forecasting Methodology based on empty convolutional neural networks
CN108376392A (en) * 2018-01-30 2018-08-07 复旦大学 A kind of image motion ambiguity removal method based on convolutional neural networks

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
CN107563303B (en) * 2017-08-09 2020-06-09 中国科学院大学 Robust ship target detection method based on deep learning
CN107807986B (en) * 2017-10-31 2019-12-17 中南大学 remote sensing image intelligent understanding method for describing ground object space relation semantics

Non-Patent Citations (3)

Title
Dynamic change monitoring and environmental assessment of the Shuikou reservoir area of the Minjiang River using satellite remote sensing images; Luo Cailian; China Master's Theses Full-text Database, Engineering Science and Technology; 2005-06-15; full text *
Analysis of water-area and storage-capacity changes of the Xin'anjiang Reservoir based on aerial photogrammetry; Zhao Xinhua et al.; Dam and Safety; 2016-10-31 (No. 5); full text *
Application of remote sensing technology in monitoring water areas of Jiangsu Province; Gao Shipei; Journal of Yangtze River Scientific Research Institute; 2017-07-31 (No. 7); full text *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant