CN114821376A - Unmanned aerial vehicle image geological disaster automatic extraction method based on deep learning - Google Patents



Publication number
CN114821376A
Authority
CN
China
Prior art keywords: unmanned aerial vehicle image, geological disaster
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210735764.XA
Other languages
Chinese (zh)
Other versions
CN114821376B (en)
Inventor
张蕴灵
崔玉萍
侯芸
刘晨
董元帅
杨璇
张艳红
Current Assignee
Checsc Highway Maintenance And Test Technology Co ltd
Inner Mongolia Beijiang Transportation Construction And Development Co ltd
China Highway Engineering Consultants Corp
CHECC Data Co Ltd
Original Assignee
Checsc Highway Maintenance And Test Technology Co ltd
Inner Mongolia Beijiang Transportation Construction And Development Co ltd
China Highway Engineering Consultants Corp
CHECC Data Co Ltd
Priority date
Filing date
Publication date
Application filed by Checsc Highway Maintenance And Test Technology Co ltd, Inner Mongolia Beijiang Transportation Construction And Development Co ltd, China Highway Engineering Consultants Corp, CHECC Data Co Ltd filed Critical Checsc Highway Maintenance And Test Technology Co ltd
Priority to CN202210735764.XA (CN114821376B)
Publication of CN114821376A
Application granted
Publication of CN114821376B
Priority to GB2217794.3A (GB2621645A)
Priority to PCT/CN2022/125257 (WO2024000927A1)
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/10: Terrestrial scenes
    • G06V 20/17: Terrestrial scenes taken from planes or by drones
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/20: Image preprocessing
    • G06V 10/40: Extraction of image or video features
    • G06V 10/70: Arrangements using pattern recognition or machine learning
    • G06V 10/82: Arrangements using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Remote Sensing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a deep-learning-based method for automatically extracting geological disasters from unmanned aerial vehicle images, comprising: S1, acquiring an unmanned aerial vehicle image; S2, preprocessing the acquired unmanned aerial vehicle image to obtain a preprocessed unmanned aerial vehicle image; S3, extracting features from the preprocessed unmanned aerial vehicle image to obtain feature information of the unmanned aerial vehicle image; and S4, inputting the feature information of the unmanned aerial vehicle image into a trained neural network model to obtain a geological disaster extraction result. The method preprocesses the acquired unmanned aerial vehicle image and extracts its features, then performs adaptive geological disaster recognition with a trained neural network model to extract the corresponding geological disaster information, thereby improving the efficiency and accuracy of geological disaster detection, effectively reducing labor cost, and raising the level of intelligence of geological disaster detection.

Description

Unmanned aerial vehicle image geological disaster automatic extraction method based on deep learning
Technical Field
The invention relates to the technical field of unmanned aerial vehicle image processing, in particular to an unmanned aerial vehicle image geological disaster automatic extraction method based on deep learning.
Background
As geological disaster problems in China grow increasingly serious, high-tech means are needed to assist investigation, and their use is an inevitable trend in future geological disaster surveys. Unmanned aerial vehicle remote sensing is of great help in improving the accuracy and effectiveness of geological disaster investigation: it provides high-resolution remote sensing data and geological images, offering important data support for geological disaster assessment, monitoring and post-disaster reconstruction.
In the prior art, after images returned by an unmanned aerial vehicle are obtained, geological disaster information is generally judged and extracted from them manually. However, because technicians are limited by their professional knowledge and experience, and standards are difficult to unify across different technicians, misjudgment and missed judgment of geological disaster information commonly occur. Manually inspecting returned images also demands high-intensity labor, so this approach suffers from high labor cost and low efficiency, and can hardly meet modern requirements for obtaining geological disaster information.
Disclosure of Invention
In view of these problems, the invention aims to provide a deep-learning-based automatic extraction method for geological disasters in unmanned aerial vehicle images.
The purpose of the invention is realized by adopting the following technical scheme:
in a first aspect, the invention provides an unmanned aerial vehicle image geological disaster automatic extraction method based on deep learning, which comprises the following steps:
s1, acquiring the unmanned aerial vehicle image;
s2, preprocessing the acquired unmanned aerial vehicle image to obtain a preprocessed unmanned aerial vehicle image;
s3, extracting features based on the preprocessed unmanned aerial vehicle image to obtain feature information of the unmanned aerial vehicle image;
and S4, inputting the characteristic information of the unmanned aerial vehicle image into the trained neural network model to obtain a geological disaster extraction result.
In one embodiment, step S1 includes:
acquiring the unmanned aerial vehicle images collected for the target area and transmitted back by the unmanned aerial vehicle in real time.
In one embodiment, step S2 includes:
s21, carrying out projection transformation, radiation correction, image registration and image cutting on the acquired unmanned aerial vehicle image to obtain a standard unmanned aerial vehicle image;
and S22, performing enhancement processing according to the acquired standard unmanned aerial vehicle image to obtain a preprocessed unmanned aerial vehicle image.
In one embodiment, step S3 includes:
s31, carrying out RGB channel separation based on the preprocessed unmanned aerial vehicle image, and respectively obtaining red channel component characteristics, green channel component characteristics and blue channel component characteristics of the unmanned aerial vehicle image;
s32, extracting the infrared reflectivity characteristics based on the preprocessed unmanned aerial vehicle image to obtain the infrared reflectivity characteristics of the unmanned aerial vehicle image;
s33, extracting texture features based on the preprocessed unmanned aerial vehicle image to obtain the texture features of the unmanned aerial vehicle image;
s34, extracting vegetation index features based on the preprocessed unmanned aerial vehicle image to obtain the vegetation index features of the unmanned aerial vehicle image;
s35 is fused according to the red channel component characteristic, the green channel component characteristic, the blue channel component characteristic, the infrared reflectivity characteristic, the unmanned aerial vehicle image and the vegetation index characteristic of the obtained unmanned aerial vehicle image, and a feature matrix of the unmanned aerial vehicle image is obtained.
In one embodiment, step S4 includes:
s41, acquiring a feature matrix of the unmanned aerial vehicle image;
s42, acquiring a reference image feature matrix corresponding to the unmanned aerial vehicle image;
and S43, inputting the feature matrix of the unmanned aerial vehicle image and the reference feature matrix into the trained neural network model as input sets to obtain a geological disaster extraction result output by the neural network model.
In one embodiment, the trained neural network model comprises an input layer, a first convolutional layer, a second convolutional layer, a pooling layer, a first fully-connected layer, a second fully-connected layer and a softmax layer which are connected in this order;
the input of the input layer is the feature matrix of the unmanned aerial vehicle image and the corresponding reference feature matrix; the first and second convolution layers each comprise 32 convolution kernels, of size 3 x 3 and 5 x 5 respectively; the pooling layer is a 3 x 3 max-pooling layer; the first fully-connected layer comprises 128 neurons and the second fully-connected layer comprises 16 neurons, outputting a feature vector reflecting whether the unmanned aerial vehicle image contains a geological disaster and the geological disaster type; and the softmax layer classifies the feature vector output by the second fully-connected layer and outputs the geological disaster extraction result.
In one embodiment, the method further comprises:
SB1 trains neural network models, including:
constructing a training set: acquiring two groups of unmanned aerial vehicle images collected at the same position in different time periods, wherein the image from the later time period is used as a first image and the image from the earlier time period is used as a second image; the first image and the second image can be unmanned aerial vehicle images of the same position taken before and after a geological disaster occurs, wherein the geological disaster includes cracks and landslides;
respectively extracting features according to the first image and the second image to obtain a feature matrix corresponding to the first image and a feature matrix corresponding to the second image, and constructing a training set by using the feature matrix corresponding to the first image, the feature matrix corresponding to the second image and the correspondingly calibrated geological disaster extraction identification;
training the neural network model based on the constructed training set, testing the neural network model by adopting the testing set, and obtaining the trained neural network model when the passing rate of the neural network model reaches a preset standard.
In a second aspect, the invention provides an unmanned aerial vehicle image geological disaster automatic extraction system based on deep learning, wherein the system is used for realizing the unmanned aerial vehicle image geological disaster automatic extraction method based on deep learning according to any one of the embodiments of the first aspect.
The invention has the following beneficial effects: it provides a deep-learning-based automatic extraction method for geological disasters in unmanned aerial vehicle images, which preprocesses the acquired unmanned aerial vehicle image and extracts its features, then performs adaptive geological disaster recognition with a trained neural network model to extract the corresponding geological disaster information, thereby improving the efficiency and accuracy of geological disaster detection, effectively reducing labor cost, and raising the level of intelligence of geological disaster detection.
Drawings
The invention is further illustrated by means of the accompanying drawings; the embodiments in the drawings do not constitute any limitation of the invention, and a person skilled in the art can derive other drawings from the following drawings without inventive effort.
Fig. 1 is a schematic flow chart of an automatic extraction method of an unmanned aerial vehicle image geological disaster based on deep learning according to an embodiment of the present invention.
Detailed Description
The invention is further described in connection with the following application scenarios.
Referring to fig. 1, an embodiment of the invention provides an automatic extraction method of an unmanned aerial vehicle image geological disaster based on deep learning, which includes:
s1, acquiring the unmanned aerial vehicle image;
in one embodiment, step S1 includes:
acquiring the unmanned aerial vehicle images collected for the target area and transmitted back by the unmanned aerial vehicle in real time.
The method can be performed by a server, an intelligent device or the like: the server receives, in real time, the remote sensing images collected and returned by the unmanned aerial vehicle for the target area (region), and performs geological disaster extraction on the acquired remote sensing images, so that geological disaster information present in the unmanned aerial vehicle images can be extracted promptly and accurately.
S2, preprocessing the acquired unmanned aerial vehicle image to obtain a preprocessed unmanned aerial vehicle image;
in one embodiment, step S2 includes:
s21, carrying out projection transformation, radiation correction, image registration, image cutting and the like on the acquired unmanned aerial vehicle image to obtain a standard unmanned aerial vehicle image;
because of receiving the influence of different factors such as atmospheric refraction, earth's surface radiation resolution ratio, the unmanned aerial vehicle image that leads to acquireing easily has the distortion, leads to appearing the error when quality disaster information is drawed according to the unmanned aerial vehicle image, consequently, to the unmanned aerial vehicle image that acquires, at first carries out preprocessing such as projection transformation, radiation correction, image registration and image cutting to the unmanned aerial vehicle image to obtain standard unmanned aerial vehicle image.
Projection transformation maps the unmanned aerial vehicle image into a standard coordinate system and projects it to a suitable scale level. Radiation correction removes influences introduced during acquisition, such as the camera angle relative to the ground, flight speed, atmospheric refraction and earth curvature, eliminating distortion in the image. Image registration and image cutting then align the unmanned aerial vehicle image with the target area, yielding a standard unmanned aerial vehicle image corresponding to the target area.
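The patent names the S21 steps but does not disclose their algorithms. As one illustrative stand-in for the radiation-correction step, dark-object subtraction removes an additive path-radiance offset estimated from the darkest pixels of a band; the function name and the percentile parameter are assumptions for this sketch, not taken from the patent.

```python
import numpy as np

def dark_object_subtraction(band, percentile=1.0):
    # Estimate the path-radiance offset from the darkest pixels (assumed to
    # correspond to near-zero surface reflectance) and subtract it, clamping
    # the result at zero so no negative reflectance values remain.
    band = band.astype(np.float64)
    offset = np.percentile(band, percentile)
    return np.clip(band - offset, 0.0, None)
```

In a full S21 pipeline this would run per band, after projection transformation and before registration and cutting.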
And S22, performing enhancement processing according to the acquired standard unmanned aerial vehicle image to obtain a preprocessed unmanned aerial vehicle image.
During collection, the unmanned aerial vehicle image is easily affected by high-altitude illumination, air conditions and the like, so it is prone to brightness deviation or blurring, which affects the accuracy of geological disaster information extraction. Therefore, after the standard unmanned aerial vehicle image is obtained, it is further enhanced; this helps improve the clarity of the image and highlight its feature information, indirectly improving the accuracy of subsequent geological disaster feature extraction from the image.
In one embodiment, the step S22 of performing enhancement processing according to the acquired standard unmanned aerial vehicle image includes:
converting the standard unmanned aerial vehicle image from RGB space to gray-scale space to obtain a first gray-scale image; the gray-scale-space transfer function used is:
[Formula (1), the gray-scale-space transfer function h1(x, y), appears only as an image in the original and is not reproduced here]
where h1(x, y) represents the gray level at pixel position (x, y) in the first gray-scale image, and r(x, y), g(x, y) and b(x, y) represent the R-, G- and B-component levels at pixel position (x, y) in the standard unmanned aerial vehicle image;
and carrying out local gray level adjustment processing according to the acquired first gray level image to obtain a second gray level image, wherein the adopted local gray level adjustment processing function is as follows:
[Formula (2), the local gray-level adjustment function h2(x, y), appears only as an image in the original and is not reproduced here]
where h2(x, y) represents the gray level at pixel position (x, y) in the second gray-scale image; h1T20 and h1T80 represent the gray levels of the pixels at the first 20% and first 80% positions when all pixels of the first gray-scale image are sorted by gray level from large to small; and max(R(x, y), G(x, y), B(x, y)) represents the maximum of the R, G and B components at pixel position (x, y);
and carrying out global gray level adjustment processing according to the obtained second gray level image to obtain a third gray level image, wherein the adopted global gray level adjustment processing function is as follows:
[Formula (3), the global gray-level adjustment function h3(x, y), appears only as an image in the original and is not reproduced here]
where h3(x, y) represents the gray level at pixel position (x, y) in the third gray-scale image; h(x, y) represents the average gray level of all pixels in the neighborhood centered on pixel (x, y) in the second gray-scale image; h2med represents the median gray level of the second gray-scale image; max(h2) and min(h2) represent the maximum and minimum gray levels of the second gray-scale image; hT represents a set standard gray level with value range [150, 170]; and ω1, ω2 and ω3 represent set weight coefficients, where ω1 has value range [0.2, 0.4], ω2 has value range [0.3, 0.5], ω3 has value range [0.2, 0.4], and ω1 + ω2 + ω3 has value range [1, 1.1];
And converting the third gray level image into an RGB space from a gray level space according to the obtained third gray level image to obtain the preprocessed unmanned aerial vehicle image.
During collection, the unmanned aerial vehicle image is easily affected by high-altitude illumination, air conditions and the like, so brightness deviation or blurring readily occurs. The above embodiment therefore proposes an enhancement scheme tailored to unmanned aerial vehicle images. The standard unmanned aerial vehicle image is first converted from RGB space to gray-scale space. Local gray-level adjustment is then applied to the converted gray-scale image for local sharpness adjustment; an improved local adjustment function performs adaptive equalization, in particular on over-bright and over-dark (brightness-deviated) pixels, and the RGB-space characteristics of each pixel are taken into account to control the degree of local adjustment, avoiding pixel distortion during this step. Global gray-level adjustment is further applied on top of the local adjustment, stretching the image as a whole and adjusting global brightness, which effectively improves the contrast of the unmanned aerial vehicle image. Finally, the resulting gray-scale image is converted back to RGB space to obtain the preprocessed unmanned aerial vehicle image.
The enhanced unmanned aerial vehicle image presents details more effectively and highlights detail features, while maintaining overall clarity and avoiding the distortion caused by traditional image processing, laying a foundation for subsequent geological disaster feature extraction from the image.
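The four stages of the S22 pipeline (RGB to gray, local adjustment, global adjustment, back to RGB) can be sketched as below. The patent's actual transfer functions appear only as formula images and are not reproduced in this text, so standard stand-ins are assumed: luminance weights for the grayscale conversion, clipping at the top-20%/top-80% gray levels for the local adjustment, and a weighted blend of the clipped image, a full-range stretch and the target level hT for the global adjustment.

```python
import numpy as np

def enhance_drone_image(img_rgb, h_t=160.0, w=(0.3, 0.4, 0.3)):
    # h_t in [150, 170] and the weights follow the ranges the patent gives for
    # hT and the omega coefficients; the formulas themselves are stand-ins,
    # not the patent's (undisclosed) ones.
    img = img_rgb.astype(np.float64)
    # Step 1: RGB -> grayscale (standard luminance weights assumed for h1)
    h1 = 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]
    # Step 2: local adjustment, clipping over-bright / over-dark pixels at the
    # gray levels of the top-20% and top-80% pixels in descending order,
    # i.e. the 80th and 20th percentiles of h1
    hi, lo = np.percentile(h1, 80), np.percentile(h1, 20)
    h2 = np.clip(h1, lo, hi)
    # Step 3: global adjustment, blending the clipped image, a full-range
    # stretch and the target standard level h_t with weights w1..w3
    rng = h2.max() - h2.min()
    stretched = (h2 - h2.min()) / rng * 255.0 if rng > 0 else np.full_like(h2, h_t)
    h3 = np.clip(w[0] * h2 + w[1] * stretched + w[2] * h_t, 0.0, 255.0)
    # Step 4: back to RGB by scaling each channel with the per-pixel gray gain
    gain = np.where(h1 > 0, h3 / np.maximum(h1, 1e-6), 1.0)
    return np.clip(img * gain[..., None], 0.0, 255.0).astype(np.uint8)
```

The default weights sum to 1.0, which lies inside the patent's stated range [1, 1.1] for ω1 + ω2 + ω3.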
S3, extracting features based on the preprocessed unmanned aerial vehicle image to obtain feature information of the unmanned aerial vehicle image;
In one embodiment, in step S3, the feature extraction based on the preprocessed unmanned aerial vehicle image includes:
s31, carrying out RGB channel separation based on the preprocessed unmanned aerial vehicle image, and respectively obtaining red channel component characteristics, green channel component characteristics and blue channel component characteristics of the unmanned aerial vehicle image;
s32, extracting the infrared reflectivity characteristics based on the preprocessed unmanned aerial vehicle image to obtain the infrared reflectivity characteristics of the unmanned aerial vehicle image;
s33, extracting texture features based on the preprocessed unmanned aerial vehicle image to obtain the texture features of the unmanned aerial vehicle image;
s34, extracting vegetation index features based on the preprocessed unmanned aerial vehicle image to obtain the vegetation index features of the unmanned aerial vehicle image;
s35 is fused according to the red channel component characteristic, the green channel component characteristic, the blue channel component characteristic, the infrared reflectivity characteristic, the unmanned aerial vehicle image and the vegetation index characteristic of the obtained unmanned aerial vehicle image, and a feature matrix of the unmanned aerial vehicle image is obtained.
The above embodiment provides a technical scheme for unmanned aerial vehicle image feature extraction. A multi-dimensional feature matrix is constructed from the RGB channel information, infrared reflectance information, texture information and NDVI (normalized difference vegetation index) information of the unmanned aerial vehicle image, so that the geological feature information in the image is reflected in multiple dimensions, which helps increase the diversity of the extracted information and improve the accuracy of geological disaster information extraction.
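A minimal sketch of the S31 to S35 fusion, under stated assumptions: the patent does not specify its texture descriptor or how the infrared band is supplied, so texture is taken here as the local gray-level standard deviation over 3 x 3 windows, the near-infrared reflectance is passed in as a separate band, and NDVI uses the usual (NIR - R) / (NIR + R) definition.

```python
import numpy as np

def extract_feature_matrix(img_rgb, img_nir):
    img = img_rgb.astype(np.float64)
    red, green, blue = img[..., 0], img[..., 1], img[..., 2]   # S31: RGB separation
    nir = img_nir.astype(np.float64)                           # S32: infrared band
    # S33: texture as the standard deviation over each 3x3 neighborhood
    pad = np.pad((red + green + blue) / 3.0, 1, mode="edge")
    texture = np.lib.stride_tricks.sliding_window_view(pad, (3, 3)).std(axis=(-2, -1))
    # S34: normalized difference vegetation index
    ndvi = (nir - red) / np.maximum(nir + red, 1e-6)
    # S35: fuse the six per-pixel features into an H x W x 6 feature matrix
    return np.stack([red, green, blue, nir, texture, ndvi], axis=-1)
```

The resulting H x W x 6 array is the feature matrix later fed (with its reference counterpart) to the neural network model.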
And S4, inputting the characteristic information of the unmanned aerial vehicle image into the trained neural network model to obtain a geological disaster extraction result.
In one embodiment, the step S4 of inputting the feature information of the unmanned aerial vehicle image into the trained neural network model to obtain the result of extracting the geological disaster includes:
s41, acquiring a feature matrix of the unmanned aerial vehicle image;
s42, acquiring a reference image feature matrix corresponding to the unmanned aerial vehicle image;
and S43, inputting the feature matrix of the unmanned aerial vehicle image and the reference feature matrix into the trained neural network model as input sets to obtain a geological disaster extraction result output by the neural network model.
In one scenario, extracting a geological disaster requires comparing current data with earlier data, so that the occurrence of a disaster can be accurately detected and its information extracted. Therefore, when the input set is constructed, a feature matrix is extracted from the image information returned by the unmanned aerial vehicle in real time; at the same time, the image most recently acquired at the corresponding position is retrieved, and its feature matrix is extracted in the same way. The feature matrix of the currently returned image and the feature matrix of the earlier image of the same position together form the input set, which is fed into the trained neural network model.
In one embodiment, the trained neural network model comprises an input layer, a first convolutional layer, a second convolutional layer, a pooling layer, a first fully-connected layer, a second fully-connected layer and a softmax layer which are connected in this order;
the input of the input layer is the feature matrix of the unmanned aerial vehicle image and the corresponding reference feature matrix; the first and second convolution layers each comprise 32 convolution kernels, of size 3 x 3 and 5 x 5 respectively; the pooling layer is a 3 x 3 max-pooling layer; the first fully-connected layer comprises 128 neurons and the second fully-connected layer comprises 16 neurons, outputting a feature vector reflecting whether the unmanned aerial vehicle image contains a geological disaster and the geological disaster type; and the softmax layer classifies the feature vector output by the second fully-connected layer and outputs the geological disaster extraction result.
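The layer sequence above fixes kernel counts and sizes but not padding or strides. The sketch below traces tensor shapes through the network assuming 'valid' (no-padding) stride-1 convolutions and a stride-3 pooling window; with other conventions the spatial sizes would differ.

```python
def cnn_layer_shapes(h, w, c):
    # Shape trace for: conv(32 @ 3x3) -> conv(32 @ 5x5) -> max-pool(3x3)
    # -> FC(128) -> FC(16) -> softmax, per the described architecture.
    shapes = {"input": (h, w, c)}
    h, w = h - 2, w - 2                 # 3x3 valid convolution, stride 1
    shapes["conv1"] = (h, w, 32)
    h, w = h - 4, w - 4                 # 5x5 valid convolution, stride 1
    shapes["conv2"] = (h, w, 32)
    h, w = h // 3, w // 3               # 3x3 max pooling, assumed stride 3
    shapes["pool"] = (h, w, 32)
    shapes["fc1"] = (128,)              # 128 neurons, ReLU activation
    shapes["fc2"] = (16,)               # 16-dim feature vector fed to softmax
    return shapes
```

For a hypothetical 64 x 64 input with 6 feature channels, this gives 62 x 62 after conv1, 58 x 58 after conv2 and 19 x 19 after pooling.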
In one embodiment, the neural network model uses an activation function Relu.
The neural network model constructed according to the invention performs adaptive feature extraction on the two feature matrices in the input set, judges whether a geological disaster has occurred by comparing the earlier and later feature information, further identifies the type of geological disaster, and accurately completes the extraction of geological disaster information.
Geological disasters include cracks, landslides and the like.
In one embodiment, the method further comprises:
SB1 trains neural network models, including:
constructing a training set: acquiring two groups of unmanned aerial vehicle images collected at the same position in different time periods, wherein the image from the later time period is used as a first image and the image from the earlier time period is used as a second image; the first image and the second image can be unmanned aerial vehicle images of the same position taken before and after a geological disaster occurs, wherein the geological disaster includes cracks and landslides;
respectively extracting features according to the first image and the second image to obtain a feature matrix corresponding to the first image and a feature matrix corresponding to the second image, and constructing a training set by using the feature matrix corresponding to the first image, the feature matrix corresponding to the second image and the correspondingly calibrated geological disaster extraction identification;
training the neural network model based on the constructed training set, testing the neural network model by adopting the testing set, and obtaining the trained neural network model when the passing rate of the neural network model reaches a preset standard.
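The SB1 procedure can be sketched as follows; the sample layout and the label encoding (0 = none, 1 = crack, 2 = landslide) are illustrative assumptions, as is equating the "passing rate" with test-set accuracy.

```python
import numpy as np

def build_sample(feat_current, feat_reference, label):
    # One training sample pairs the feature matrix of the later (current)
    # image with that of the earlier (reference) image of the same location,
    # together with the calibrated geological disaster label.
    return {"first": feat_current, "second": feat_reference, "label": label}

def pass_rate(predictions, labels):
    # Fraction of test samples the model classifies correctly; training stops
    # once this reaches the preset standard.
    return float(np.mean(np.asarray(predictions) == np.asarray(labels)))
```

A training loop would call `build_sample` over all image pairs, hold out a test split, and compare `pass_rate` against the preset threshold after each evaluation.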
The neural network model is trained based on the method, so that the effect and the accuracy of the neural network model can be ensured, and the efficiency and the reliability of extracting geological disasters based on the unmanned aerial vehicle image are improved.
The embodiment of the invention provides a deep-learning-based automatic extraction method for geological disasters in unmanned aerial vehicle images, which preprocesses the acquired unmanned aerial vehicle image and extracts its features, then performs adaptive geological disaster recognition with a trained neural network model to extract the corresponding geological disaster information, thereby improving the efficiency and accuracy of geological disaster detection, effectively reducing labor cost, and raising the level of intelligence of geological disaster detection.
Based on the deep-learning-based automatic extraction method for unmanned aerial vehicle image geological disasters shown in fig. 1, the invention further provides a deep-learning-based automatic extraction system for unmanned aerial vehicle image geological disasters, wherein the system is used to implement the method shown in fig. 1 and the specific embodiments corresponding to its steps, which are not repeated here.
It should be noted that the functional units/modules in the embodiments of the present invention may be integrated into one processing unit/module, each unit/module may exist alone physically, or two or more units/modules may be integrated into one unit/module. The integrated units/modules may be implemented in the form of hardware or in the form of software functional units/modules.
From the above description of embodiments, it is clear to a person skilled in the art that the embodiments described herein can be implemented in hardware, software, firmware, middleware, code or any suitable combination thereof. For a hardware implementation, the processor may be implemented in one or more of the following units: an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a processor, a controller, a microcontroller, a microprocessor, other electronic units designed to perform the functions described herein, or a combination thereof. For a software implementation, some or all of the procedures of an embodiment may be performed by a computer program instructing associated hardware. In practice, the program may be stored on, or transmitted as one or more instructions or code over, a computer-readable medium. Computer-readable media include both computer storage media and communication media, the latter including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a computer. Computer-readable media can include, but are not limited to, RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
Finally, it should be noted that the above embodiments are only used for illustrating the technical solutions of the present invention, not for limiting its protection scope. Although the present invention is described in detail with reference to the preferred embodiments, those skilled in the art should understand that modifications or equivalent substitutions can be made to the technical solutions of the present invention without departing from their spirit and scope.

Claims (9)

1. An unmanned aerial vehicle image geological disaster automatic extraction method based on deep learning, characterized by comprising:
S1, acquiring an unmanned aerial vehicle image;
S2, preprocessing the acquired unmanned aerial vehicle image to obtain a preprocessed unmanned aerial vehicle image;
S3, extracting features from the preprocessed unmanned aerial vehicle image to obtain feature information of the unmanned aerial vehicle image;
S4, inputting the feature information of the unmanned aerial vehicle image into a trained neural network model to obtain a geological disaster extraction result.
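For illustration, the S1–S4 pipeline of claim 1 can be sketched as a minimal skeleton. The preprocessing, feature-extraction and model functions below are illustrative stand-ins (pixel normalization, per-channel means and a simple threshold rule), not the patent's actual implementations:

```python
import numpy as np

def preprocess(image):
    # Illustrative stand-in for S2: scale pixel values to [0, 1].
    return image.astype(np.float64) / 255.0

def extract_features(image):
    # Illustrative stand-in for S3: per-channel mean as a tiny feature vector.
    return image.mean(axis=(0, 1))

def model(features):
    # Illustrative stand-in for S4: flag a "disaster" when the red
    # response dominates (purely a placeholder decision rule).
    return "disaster" if features[0] > features[1:].mean() else "normal"

def extract_geological_disaster(image):
    # S1 (image acquisition) is assumed already done; chain S2 -> S3 -> S4.
    return model(extract_features(preprocess(image)))

# Synthetic 4x4 RGB image with a strong red channel.
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[..., 0] = 200
print(extract_geological_disaster(img))  # prints "disaster"
```

In a real system each stand-in would be replaced by the corresponding claimed step (claims 3, 4 and 6 below).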
2. The unmanned aerial vehicle image geological disaster automatic extraction method based on deep learning of claim 1, wherein step S1 comprises:
acquiring, in real time, the unmanned aerial vehicle images collected and transmitted back by the unmanned aerial vehicle for the target area.
3. The unmanned aerial vehicle image geological disaster automatic extraction method based on deep learning of claim 2, wherein step S2 comprises:
S21, carrying out projection transformation, radiometric correction, image registration and image cropping on the acquired unmanned aerial vehicle image to obtain a standard unmanned aerial vehicle image;
S22, performing enhancement processing on the acquired standard unmanned aerial vehicle image to obtain the preprocessed unmanned aerial vehicle image.
4. The unmanned aerial vehicle image geological disaster automatic extraction method based on deep learning of claim 3, wherein step S3 comprises:
S31, carrying out RGB channel separation on the preprocessed unmanned aerial vehicle image to respectively obtain the red channel component feature, the green channel component feature and the blue channel component feature of the unmanned aerial vehicle image;
S32, extracting infrared reflectivity features from the preprocessed unmanned aerial vehicle image to obtain the infrared reflectivity feature of the unmanned aerial vehicle image;
S33, extracting texture features from the preprocessed unmanned aerial vehicle image to obtain the texture feature of the unmanned aerial vehicle image;
S34, extracting vegetation index features from the preprocessed unmanned aerial vehicle image to obtain the vegetation index feature of the unmanned aerial vehicle image;
S35, fusing the obtained red channel component feature, green channel component feature, blue channel component feature, infrared reflectivity feature, texture feature and vegetation index feature of the unmanned aerial vehicle image to obtain the feature matrix of the unmanned aerial vehicle image.
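A sketch of steps S31–S35, assuming an H x W x 4 input with an infrared band as the fourth channel. The texture measure (a local 3 x 3 standard deviation) and the vegetation index (NDVI) are illustrative choices, since the claim does not fix those formulas:

```python
import numpy as np

def extract_feature_matrix(rgbn):
    """Sketch of S31-S35 for an H x W x 4 image (R, G, B, NIR)."""
    r, g, b, nir = (rgbn[..., i].astype(np.float64) for i in range(4))

    # S33 stand-in: local 3x3 standard deviation of the red band as texture.
    h, w = r.shape
    pad = np.pad(r, 1, mode="edge")
    windows = np.stack([pad[i:i + h, j:j + w]
                        for i in range(3) for j in range(3)], axis=-1)
    texture = windows.std(axis=-1)

    # S34 stand-in: NDVI, assuming the 4th band is near-infrared.
    ndvi = (nir - r) / (nir + r + 1e-9)

    # S35: fuse the six per-pixel features into an H x W x 6 feature matrix.
    return np.stack([r, g, b, nir, texture, ndvi], axis=-1)

feat = extract_feature_matrix(np.random.randint(0, 255, (8, 8, 4)))
print(feat.shape)  # prints (8, 8, 6)
```

In practice a gray-level co-occurrence matrix is a common alternative texture descriptor, and any vegetation index can be substituted in S34 as long as the fusion in S35 stays per-pixel.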
5. The unmanned aerial vehicle image geological disaster automatic extraction method based on deep learning of claim 4, wherein step S4 comprises:
S41, acquiring the feature matrix of the unmanned aerial vehicle image;
S42, acquiring a reference image feature matrix corresponding to the unmanned aerial vehicle image;
S43, inputting the feature matrix of the unmanned aerial vehicle image and the reference image feature matrix into the trained neural network model as an input set to obtain the geological disaster extraction result output by the neural network model.
6. The unmanned aerial vehicle image geological disaster automatic extraction method based on deep learning of claim 5, wherein the trained neural network model comprises an input layer, a first convolution layer, a second convolution layer, a pooling layer, a first fully connected layer, a second fully connected layer and a softmax layer which are connected in sequence;
the input of the input layer is the feature matrix of the unmanned aerial vehicle image and the corresponding reference feature matrix; the first convolution layer and the second convolution layer each comprise 32 convolution kernels, with kernel sizes of 3 x 3 and 5 x 5 respectively; the pooling layer performs maximum pooling with a size of 3 x 3; the first fully connected layer comprises 128 neurons; the second fully connected layer comprises 16 neurons and outputs a feature vector reflecting whether the unmanned aerial vehicle image contains a geological disaster and the geological disaster type; and the softmax layer classifies according to the feature vector output by the second fully connected layer and outputs the geological disaster extraction result.
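The layer sizes in claim 6 imply concrete tensor shapes. The short trace below assumes stride-1 "valid" convolutions, pooling with stride equal to its 3 x 3 kernel, and a 64 x 64 input — all assumptions, since the claim specifies none of stride, padding or input size:

```python
def conv_out(n, k):
    # Valid convolution, stride 1 (assumed): output side length.
    return n - k + 1

def pool_out(n, k, s):
    # Max pooling with kernel k and stride s (stride 3 assumed).
    return (n - k) // s + 1

side = 64                    # assumed input side length
side = conv_out(side, 3)     # first conv layer, 32 kernels of 3x3 -> 62
side = conv_out(side, 5)     # second conv layer, 32 kernels of 5x5 -> 58
side = pool_out(side, 3, 3)  # 3x3 max pooling, stride 3 -> 19
flat = side * side * 32      # flattened features fed to the 128-neuron layer
print(side, flat)  # prints 19 11552
```

The flattened size then passes through the 128-neuron and 16-neuron fully connected layers before the softmax classification.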
7. The unmanned aerial vehicle image geological disaster automatic extraction method based on deep learning of claim 6, wherein the method further comprises:
SB1, training the neural network model, comprising:
constructing a training set: acquiring two groups of unmanned aerial vehicle images collected at the same position in different time periods, wherein the unmanned aerial vehicle image from the earlier time period is used as a first image and the unmanned aerial vehicle image from the later time period is used as a second image; the first image and the second image are unmanned aerial vehicle images of the same position before and after a geological disaster occurs, wherein the geological disaster comprises cracks and landslides;
respectively extracting features from the first image and the second image to obtain a feature matrix corresponding to the first image and a feature matrix corresponding to the second image, and constructing the training set from the feature matrix corresponding to the first image, the feature matrix corresponding to the second image and the correspondingly calibrated geological disaster extraction labels;
training the neural network model on the constructed training set, testing the neural network model with a test set, and obtaining the trained neural network model when the pass rate of the neural network model reaches a preset standard.
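The training-set construction of step SB1 can be sketched as pairing the before/after feature matrices with a calibrated label. Stacking the two matrices along the channel axis is one straightforward fusion; the claim does not prescribe how the two matrices are combined:

```python
import numpy as np

def build_training_set(triples):
    """Build (input, label) samples from triples of
    (first_image_features, second_image_features, label)."""
    samples = []
    for feat_before, feat_after, label in triples:
        # Concatenate the two H x W x C feature matrices channel-wise.
        x = np.concatenate([feat_before, feat_after], axis=-1)
        samples.append((x, label))
    return samples

before = np.zeros((8, 8, 6))   # feature matrix of the earlier image
after = np.ones((8, 8, 6))     # feature matrix of the later image
train = build_training_set([(before, after, "landslide")])
print(train[0][0].shape, train[0][1])  # prints (8, 8, 12) landslide
```

The resulting samples would then be split into training and test sets, with the test-set pass rate deciding when training stops.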
8. The unmanned aerial vehicle image geological disaster automatic extraction method based on deep learning of claim 3, wherein in step S22, the enhancement processing of the acquired standard unmanned aerial vehicle image comprises:
converting the acquired standard unmanned aerial vehicle image from RGB space to gray scale space to obtain a first gray image, wherein the gray scale space transfer function adopted is:
[Formula 1: gray scale space transfer function, published as an image in the original document]
in the formula, h1(x, y) represents the gray level at the pixel (x, y) position in the first gray image; R(x, y), G(x, y) and B(x, y) respectively represent the R, G and B component values at the pixel (x, y) position in the standard unmanned aerial vehicle image;
carrying out local gray level adjustment processing on the acquired first gray image to obtain a second gray image, wherein the local gray level adjustment processing function adopted is:
[Formula 2: local gray level adjustment processing function, published as an image in the original document]
in the formula, h2(x, y) represents the gray level at the pixel (x, y) position in the second gray image; h1T20 and h1T80 respectively represent the gray levels at the first 20% and 80% positions when all pixel points of the first gray image are sorted by gray level from large to small; max(R(x, y), G(x, y), B(x, y)) represents the maximum of the R, G and B component values at the pixel (x, y) position;
carrying out global gray level adjustment processing on the obtained second gray image to obtain a third gray image, wherein the global gray level adjustment processing function adopted is:
[Formula 3: global gray level adjustment processing function, published as an image in the original document]
in the formula, h3(x, y) represents the gray level at the pixel (x, y) position in the third gray image; h(x, y) represents the average gray level of all pixels in a neighborhood region centered on pixel (x, y) in the second gray image; h2med represents the median gray level of the second gray image; max(h2) and min(h2) respectively represent the maximum and minimum gray levels of the second gray image; hT represents a set standard gray level, where hT has a value range of [150, 170]; ω1, ω2 and ω3 respectively represent set weight coefficients, where ω1 has a value range of [0.2, 0.4], ω2 has a value range of [0.3, 0.5], ω3 has a value range of [0.2, 0.4], and ω1+ω2+ω3 has a value range of [1, 1.1];
converting the obtained third gray image from gray scale space back to RGB space to obtain the preprocessed unmanned aerial vehicle image.
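The three transfer functions of claim 8 are published only as formula images, so their exact forms are not recoverable from this text. The sketch below therefore shows only the auxiliary quantities the claim does define: a grayscale conversion (standard luminance weights — an assumption, since the patent's own function is not reproduced), the h1T20/h1T80 levels from a descending sort, and the median/extremes used by the global adjustment:

```python
import numpy as np

def to_gray(rgb):
    # Standard luminance weights -- an assumption; the patent's actual
    # gray scale transfer function is published only as an image.
    r, g, b = (rgb[..., i].astype(np.float64) for i in range(3))
    return 0.299 * r + 0.587 * g + 0.114 * b

def threshold_levels(gray):
    # h1T20 / h1T80: gray levels 20% and 80% of the way through the
    # pixels sorted from largest to smallest (claim 8's definition).
    flat = np.sort(gray.ravel())[::-1]
    i20 = int(0.2 * (flat.size - 1))
    i80 = int(0.8 * (flat.size - 1))
    return flat[i20], flat[i80]

gray = to_gray(np.random.randint(0, 256, (16, 16, 3)))
h1t20, h1t80 = threshold_levels(gray)          # local-adjustment thresholds
h2med = np.median(gray)                        # median gray level
h2max, h2min = gray.max(), gray.min()          # gray level extremes
print(h1t20 >= h1t80)  # prints True: descending order
```

These quantities would feed Formulas 2 and 3 together with the weight coefficients ω1–ω3 in their stated ranges.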
9. An unmanned aerial vehicle image geological disaster automatic extraction system based on deep learning, characterized in that the system is used for realizing the unmanned aerial vehicle image geological disaster automatic extraction method based on deep learning according to any one of claims 1 to 8.
CN202210735764.XA 2022-06-27 2022-06-27 Unmanned aerial vehicle image geological disaster automatic extraction method based on deep learning Active CN114821376B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202210735764.XA CN114821376B (en) 2022-06-27 2022-06-27 Unmanned aerial vehicle image geological disaster automatic extraction method based on deep learning
GB2217794.3A GB2621645A (en) 2022-06-27 2022-10-14 Deep learning based method for automatic geological disaster extraction from unmanned aerial vehicle image
PCT/CN2022/125257 WO2024000927A1 (en) 2022-06-27 2022-10-14 Deep learning based method for automatic geological disaster extraction from unmanned aerial vehicle image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210735764.XA CN114821376B (en) 2022-06-27 2022-06-27 Unmanned aerial vehicle image geological disaster automatic extraction method based on deep learning

Publications (2)

Publication Number Publication Date
CN114821376A true CN114821376A (en) 2022-07-29
CN114821376B CN114821376B (en) 2022-09-20

Family

ID=82522305

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210735764.XA Active CN114821376B (en) 2022-06-27 2022-06-27 Unmanned aerial vehicle image geological disaster automatic extraction method based on deep learning

Country Status (2)

Country Link
CN (1) CN114821376B (en)
WO (1) WO2024000927A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115908954A (en) * 2023-03-01 2023-04-04 四川省公路规划勘察设计研究院有限公司 Geological disaster hidden danger identification system and method based on artificial intelligence and electronic equipment
WO2024000927A1 (en) * 2022-06-27 2024-01-04 中咨数据有限公司 Deep learning based method for automatic geological disaster extraction from unmanned aerial vehicle image
GB2621645A (en) * 2022-06-27 2024-02-21 Checc Data Co Ltd Deep learning based method for automatic geological disaster extraction from unmanned aerial vehicle image

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
CN117689481B (en) * 2024-02-04 2024-04-19 国任财产保险股份有限公司 Natural disaster insurance processing method and system based on unmanned aerial vehicle video data

Citations (9)

Publication number Priority date Publication date Assignee Title
CN106548465A (en) * 2016-11-25 2017-03-29 福建师范大学 A kind of Enhancement Method of multi-spectrum remote sensing image
CN110008854A (en) * 2019-03-18 2019-07-12 中交第二公路勘察设计研究院有限公司 Unmanned plane image Highway Geological Disaster recognition methods based on pre-training DCNN
CN110532974A (en) * 2019-09-03 2019-12-03 成都理工大学 High-definition remote sensing information on geological disasters extraction method based on deep learning
CN112037144A (en) * 2020-08-31 2020-12-04 哈尔滨理工大学 Low-illumination image enhancement method based on local contrast stretching
CN112364849A (en) * 2021-01-13 2021-02-12 四川省安全科学技术研究院 High-order landslide geological disaster intelligent identification method
WO2021184891A1 (en) * 2020-03-20 2021-09-23 中国科学院深圳先进技术研究院 Remotely-sensed image-based terrain classification method, and system
WO2021218119A1 (en) * 2020-04-30 2021-11-04 中国科学院深圳先进技术研究院 Image toning enhancement method and method for training image toning enhancement neural network
CN114596495A (en) * 2022-03-17 2022-06-07 湖南科技大学 Sand slide identification and automatic extraction method based on Sentinel-2A remote sensing image
WO2022133194A1 (en) * 2020-12-17 2022-06-23 Trustees Of Tufts College Deep perceptual image enhancement

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
CN113205039B (en) * 2021-04-29 2023-07-28 广东电网有限责任公司东莞供电局 Power equipment fault image recognition disaster investigation system and method based on multiple DCNN networks
CN114821376B (en) * 2022-06-27 2022-09-20 中咨数据有限公司 Unmanned aerial vehicle image geological disaster automatic extraction method based on deep learning


Non-Patent Citations (2)

Title
XUEYANG FU et al.: "Remote Sensing Image Enhancement Using Regularized-Histogram Equalization and DCT", IEEE *
TANG Min et al.: "Local enhancement method for unmanned aerial vehicle images and its application in image matching", Remote Sensing for Land and Resources *


Also Published As

Publication number Publication date
CN114821376B (en) 2022-09-20
WO2024000927A1 (en) 2024-01-04

Similar Documents

Publication Publication Date Title
CN114821376B (en) Unmanned aerial vehicle image geological disaster automatic extraction method based on deep learning
WO2022222352A1 (en) Remote-sensing panchromatic and multispectral image distributed fusion method based on residual network
CN113065558A (en) Lightweight small target detection method combined with attention mechanism
CN108875821A (en) The training method and device of disaggregated model, mobile terminal, readable storage medium storing program for executing
CN112801230B (en) Intelligent acceptance method for unmanned aerial vehicle of power distribution line
CN110070571B (en) Phyllostachys pubescens morphological parameter detection method based on depth camera
CN109241867B (en) Method and device for recognizing digital rock core image by adopting artificial intelligence algorithm
CN113160053B (en) Pose information-based underwater video image restoration and splicing method
CN116757988B (en) Infrared and visible light image fusion method based on semantic enrichment and segmentation tasks
WO2019228450A1 (en) Image processing method, device, and equipment, and readable medium
CN113688817A (en) Instrument identification method and system for automatic inspection
CN114821440B (en) Mobile video stream content identification and analysis method based on deep learning
CN116824347A (en) Road crack detection method based on deep learning
CN114565539B (en) Image defogging method based on online knowledge distillation
CN117576461A (en) Semantic understanding method, medium and system for transformer substation scene
GB2621645A (en) Deep learning based method for automatic geological disaster extraction from unmanned aerial vehicle image
CN117541574A (en) Tongue diagnosis detection method based on AI semantic segmentation and image recognition
CN113239828A (en) Face recognition method and device based on TOF camera module
US20230290110A1 (en) Systems and methods for generating synthetic satellite image training data for machine learning models
CN116778322A (en) Method for removing continuous water surface structure of water surface bridge based on high-resolution image
CN111539931A (en) Appearance abnormity detection method based on convolutional neural network and boundary limit optimization
CN116433528A (en) Image detail enhancement display method and system for target area detection
CN110827375A (en) Infrared image true color coloring method and system based on low-light-level image
CN116092019A (en) Ship periphery abnormal object monitoring system, storage medium thereof and electronic equipment
CN115690934A (en) Master and student attendance card punching method and device based on batch face recognition

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant