GB2621645A - Deep learning based method for automatic geological disaster extraction from unmanned aerial vehicle image - Google Patents

Deep learning based method for automatic geological disaster extraction from unmanned aerial vehicle image

Info

Publication number
GB2621645A
GB2621645A GB2217794.3A GB202217794A
Authority
GB
United Kingdom
Prior art keywords
image
uav
geological disaster
uav image
grayscale
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
GB2217794.3A
Other versions
GB202217794D0 (en)
Inventor
Hou Yun
Zhang Yunling
Yang Xuan
Dong Yuanshuai
Wu Hangbin
Cui Li
Hu Lin
Zhang Xueliang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Checc Highway Maintenance And Test Tech Co Ltd
China Highway Eng Consultants Corp
Tongji University
CHECC Data Co Ltd
Original Assignee
Checc Highway Maintenance And Test Tech Co Ltd
China Highway Eng Consultants Corp
Tongji University
CHECC Data Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN202210735764.XA (granted as CN114821376B)
Application filed by Checc Highway Maintenance And Test Tech Co Ltd, China Highway Eng Consultants Corp, Tongji University, CHECC Data Co Ltd filed Critical Checc Highway Maintenance And Test Tech Co Ltd
Publication of GB202217794D0
Publication of GB2621645A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/17Terrestrial scenes taken from planes or by drones
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Remote Sensing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A deep learning based method for automatic geological disaster extraction from an unmanned aerial vehicle image, belonging to the technical field of unmanned aerial vehicle image processing. The method comprises: S1, acquiring an unmanned aerial vehicle image; S2, preprocessing the acquired unmanned aerial vehicle image to obtain a preprocessed unmanned aerial vehicle image; S3, performing feature extraction on the preprocessed unmanned aerial vehicle image to obtain feature information of the unmanned aerial vehicle image; and S4, inputting the feature information of the unmanned aerial vehicle image into a trained neural network model to obtain a geological disaster extraction result. The method can perform preprocessing and feature extraction on the acquired unmanned aerial vehicle image, and perform self-adaptive geological disaster recognition by using the trained neural network model to extract corresponding geological disaster inspection information, thus helping to improve the efficiency and accuracy of geological disaster detection, effectively reducing labor costs, and improving the intelligence level of geological disaster detection.

Description

METHOD FOR AUTOMATICALLY EXTRACTING GEOLOGICAL DISASTER INFORMATION FROM UNMANNED AERIAL VEHICLE IMAGE BASED ON DEEP LEARNING
[1] This application claims priority to Chinese Patent Application No. 202210735764.X, filed with the China National Intellectual Property Administration on June 27, 2022, and entitled "METHOD FOR AUTOMATICALLY EXTRACTING GEOLOGICAL DISASTER INFORMATION FROM UNMANNED AERIAL VEHICLE IMAGE BASED ON DEEP LEARNING", which is incorporated herein by reference in its entirety.
TECHNICAL FIELD
[2] The present disclosure relates to a technical field of unmanned aerial vehicle (UAV) image processing, and in particular, to a method for automatically extracting geological disaster information from a UAV image based on deep learning.
BACKGROUND
[3] As geological disasters in China grow increasingly severe, it is necessary to apply high-tech means to assist in investigation, which is also an inevitable trend in the development of geological disaster investigation. UAV remote sensing technology is of great help in improving the accuracy and effectiveness of geological disaster investigation. High-resolution remote sensing data and geological images may be obtained by using UAV remote sensing technology, which provides important data support for geological disaster evaluation, monitoring and post-disaster reconstruction.
[4] In the conventional art, after images returned by a UAV are obtained, geological disaster information is usually determined and extracted manually from the returned UAV images. However, the limited professional knowledge and experience of technicians, and the difficulty of unifying standards across different technicians, cause wrong determinations, missed determinations and the like in geological disaster information extraction. In addition, observing the returned images manually requires high-intensity labor, which incurs high labor costs and low efficiency, making it difficult to meet the modern requirements of geological disaster information acquisition.
SUMMARY
[5] In view of the foregoing problems, the present disclosure aims to provide a method for automatically extracting geological disaster information from a UAV image based on deep learning.
[06] The objective of the present disclosure is implemented by using the following technical solutions:
[07] According to a first aspect, the present disclosure provides a method for automatically extracting geological disaster information from the UAV image based on deep learning, including:
[08] S1: acquiring the UAV image;
[09] S2: preprocessing the acquired UAV image to obtain a preprocessed UAV image;
[010] S3: extracting features based on the preprocessed UAV image to obtain feature information of the UAV image; and
[011] S4: inputting the feature information of the UAV image into a trained neural network model to obtain a geological disaster information extraction result.
[12] In an implementation, S1 includes:
[13] acquiring the UAV image collected and returned by a UAV from a target region in real time.
[014] In an implementation, S2 includes:
[015] S21: performing projection transformation, radiation correction, image registration and image clipping on the acquired UAV image to obtain a standard UAV image; and
[016] S22: performing enhancement processing based on the standard UAV image to obtain the preprocessed UAV image.
[17] In an implementation, S3 includes:
[18] S31: performing red green blue (RGB) channel separation based on the preprocessed UAV image to obtain red channel component features, green channel component features, and blue channel component features of the UAV image;
[19] S32: extracting infrared reflectivity features based on the preprocessed UAV image to obtain infrared reflectivity features of the UAV image;
[020] S33: extracting texture features based on the preprocessed UAV image to obtain texture features of the UAV image;
[021] S34: extracting vegetation index features based on the preprocessed UAV image to obtain vegetation index features of the UAV image; and
[022] S35: performing fusion based on the obtained red channel component features, green channel component features, blue channel component features, infrared reflectivity features, texture features, and vegetation index features, to obtain a feature matrix of the UAV image.
[023] In an implementation, S4 includes:
[024] S41: obtaining the feature matrix of the UAV image;
[025] S42: obtaining a reference image feature matrix corresponding to the UAV image; and
[026] S43: inputting the feature matrix of the UAV image and the reference image feature matrix as an input set into the trained neural network model to obtain the geological disaster information extraction result output by the neural network model.
[27] In an implementation, the trained neural network model includes an input layer, a first convolutional layer, a second convolutional layer, a pooling layer, a first fully connected layer, a second fully connected layer, and a softmax layer which are sequentially connected;
[28] where input of the input layer is the feature matrix of the UAV image and the corresponding reference image feature matrix; the first convolutional layer and the second convolutional layer each include 32 convolution kernels, with sizes of 3×3 and 5×5 respectively; the pooling layer is max-pooling with a size of 3×3; the first fully connected layer includes 128 neurons, and the second fully connected layer includes 16 neurons, with an output of a feature vector that reflects whether the UAV image contains geological disaster information and the types of geological disasters; and the softmax layer performs classification based on the feature vector output by the second fully connected layer, and outputs the geological disaster information extraction result.
[29] In an implementation, the method further includes:
[30] SB1: training the neural network model, including:
[31] obtaining two groups of UAV images captured in different time periods at a same position, where the UAV image captured in the earlier time period is taken as a first image, and the UAV image captured in the later time period is taken as a second image; the first image and the second image may be UAV images before and after a geological disaster at the same position, and the geological disasters include cracks and landslides;
[32] extracting features based on the first image and the second image respectively to obtain a feature matrix corresponding to the first image and a feature matrix corresponding to the second image, and constructing a training set by using the feature matrix corresponding to the first image, the feature matrix corresponding to the second image, and correspondingly-calibrated geological disaster information extraction marks; and
[33] training the neural network model based on the constructed training set, testing the neural network model by using a test set, and obtaining the trained neural network model when a pass rate of the neural network model reaches a predetermined standard.
[34] According to a second aspect, the present disclosure provides a system for automatically extracting geological disaster information from the UAV image based on deep learning, where the system is used to implement the method for automatically extracting geological disaster information from the UAV image based on deep learning according to any one of the implementations of the first aspect.
[35] The present disclosure has the following beneficial effects: the present disclosure provides the method for automatically extracting geological disaster information from the UAV image based on deep learning, such that preprocessing and feature extraction may be performed based on the acquired UAV image, and self-adaptive geological disaster identification and extraction of corresponding geological disaster inspection information are performed based on the trained neural network model. This helps to improve the efficiency and accuracy of geological disaster detection, effectively reduces labor costs, and improves the intelligence level of geological disaster detection.
BRIEF DESCRIPTION OF THE DRAWINGS
[36] The present disclosure is further described by using the accompanying drawing, but the embodiments in the accompanying drawing do not constitute any limitation to the present disclosure. For a person of ordinary skill in the art, other accompanying drawings may be further obtained based on the following accompanying drawing without creative efforts.
[37] FIG. 1 is a schematic flowchart of a method for automatically extracting geological disaster information from a UAV image based on deep learning according to an embodiment of the present disclosure.
DETAILED DESCRIPTION OF THE EMBODIMENTS
[38] The technical solutions of the embodiments of the present disclosure are clearly and completely described below with reference to the accompanying drawing. Apparently, the described embodiments are merely some rather than all of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present disclosure without creative efforts shall fall within the protection scope of the present disclosure.
[39] A method for automatically extracting geological disaster information from a UAV image based on deep learning shown in the embodiment in FIG. 1 includes S1, S2, S3 and S4.
[40] Si: A UAV image is acquired.
[41] In an implementation, S1 includes:
[42] acquiring the UAV image collected and returned by a UAV from a target region in real time.
[43] The method proposed above may be implemented on a server, a smart device, etc. The server receives, in real time, a remote sensing image collected from the target region and returned by a UAV, and extracts and processes geological disaster information based on the acquired remote sensing image, so that geological disaster information present in the UAV image may be extracted in a timely and accurate manner.
[44] S2: The acquired UAV image is preprocessed to obtain a preprocessed UAV image.
[45] In an implementation, S2 includes S21 and S22.
[46] S21: Projection transformation, radiation correction, image registration, image clipping, etc. are performed on the acquired UAV image to obtain a standard UAV image.
[47] Due to the influence of different factors such as atmospheric refraction and land-surface radiation resolution, the acquired UAV image is easily distorted, resulting in errors in extracting geological disaster information from the UAV image. Therefore, preprocessing such as projection transformation, radiation correction, image registration and image clipping is first performed on the acquired UAV image to obtain the standard UAV image.
[48] Through the projection transformation, the UAV image may be mapped to a standard coordinate system, which helps to project the UAV image to an appropriate scale. In addition, through the radiation correction, the influence of the ground angle, velocity, air refraction, ground curvature, etc. during collection of the UAV image may be eliminated, thereby removing distortion from the UAV image. Furthermore, through the image registration and the image clipping, the acquired UAV image may be matched against the target region, so as to obtain the standard UAV image corresponding to the target region.
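By way of illustration only, the registration and clipping steps of S21 might be sketched in Python with OpenCV as follows. The function name `register_and_clip`, the ORB/RANSAC choices, and the assumption that projection transformation and radiation correction are handled upstream from sensor and flight metadata are all illustrative and not fixed by the disclosure.

```python
import cv2
import numpy as np

def register_and_clip(uav_bgr, ref_bgr, clip_rect):
    """Illustrative sketch of the registration and clipping steps of S21.

    Projection transformation and radiation correction are assumed to be
    handled upstream and are not reproduced here.
    """
    gray_uav = cv2.cvtColor(uav_bgr, cv2.COLOR_BGR2GRAY)
    gray_ref = cv2.cvtColor(ref_bgr, cv2.COLOR_BGR2GRAY)

    # Detect ORB keypoints and binary descriptors in both images.
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(gray_uav, None)
    kp2, des2 = orb.detectAndCompute(gray_ref, None)

    # Match descriptors and keep the strongest correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Estimate a homography robustly and warp the UAV frame onto the
    # reference geometry (image registration).
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = ref_bgr.shape[:2]
    registered = cv2.warpPerspective(uav_bgr, H, (w, h))

    # Image clipping: crop the rectangle covering the target region.
    x, y, cw, ch = clip_rect
    return registered[y:y + ch, x:x + cw]
```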
[49] S22: Enhancement processing is performed based on the standard UAV image to obtain the preprocessed UAV image.
[50] The UAV image is prone to be affected by high-altitude illumination, air conditions, etc. during collection, such that the UAV image is easily subjected to brightness deviation or unsharpness, etc., which affects accuracy of geological disaster information extraction. Therefore, after the standard UAV image is obtained, further enhancing the UAV image may help to improve sharpness of the UAV image, highlight feature information therein, and indirectly improve accuracy of subsequent geological disaster feature extraction based on the UAV image.
[51] In an implementation, in S22, performing enhancement processing based on the standard UAV image includes:
[52] converting the standard UAV image from RGB space to gray space to obtain a first grayscale image, where the gray space conversion function used is:

$$h_1(x,y) = 0.299 \times r(x,y) + 0.587 \times g(x,y) + 0.114 \times b(x,y)$$

[53] where $h_1(x,y)$ represents a gray level of a pixel $(x,y)$ in the first grayscale image; $r(x,y)$, $g(x,y)$ and $b(x,y)$ represent the R, G and B component levels of the pixel $(x,y)$ in the standard UAV image, respectively;
[54] performing local grayscale adjustment processing based on the obtained first grayscale image to obtain a second grayscale image, where the local grayscale adjustment processing function used is:

$$h_2(x,y) = \begin{cases} h_1(x,y) - \alpha \times \left[\max\big(r(x,y), g(x,y), b(x,y)\big) - h_1(x,y)\right] \times \dfrac{h_1(x,y) - h_{T20}}{255 - h_{T20}}, & h_1(x,y) \ge h_{T20} \\ h_1(x,y), & h_{T80} < h_1(x,y) < h_{T20} \\ \min\Big(h_{T80},\ h_1(x,y) + \beta \times \big[\max\big(r(x,y), g(x,y), b(x,y)\big) - \min\big(r(x,y), g(x,y), b(x,y)\big)\big]\Big), & h_1(x,y) \le h_{T80} \end{cases}$$

[055] where $h_2(x,y)$ represents the gray level of the pixel $(x,y)$ in the second grayscale image; $h_{T20}$ and $h_{T80}$ represent the gray levels corresponding to the 20%th pixel and the 80%th pixel, respectively, in a sequence obtained by ranking the gray levels of all pixels in the first grayscale image from large to small; $\max(r(x,y), g(x,y), b(x,y))$ and $\min(r(x,y), g(x,y), b(x,y))$ represent the maximum and minimum values among the R, G and B component levels of the pixel $(x,y)$; and $\alpha$ and $\beta$ represent predetermined adjustment coefficients;
[056] performing global grayscale adjustment processing based on the obtained second grayscale image to obtain a third grayscale image, where the global grayscale adjustment processing function used is:

$$h_3(x,y) = \omega_1 \times h_{2\delta}(x,y) \times \frac{h_T - h_{2med}}{255} + \omega_2 \times h_2(x,y) + \omega_3 \times \frac{h_T - h_{2med}}{\max(h_2) - \min(h_2)} \times 255$$

[057] where $h_3(x,y)$ represents the gray level of the pixel $(x,y)$ in the third grayscale image; $h_{2\delta}(x,y)$ represents the average gray level of all pixels in a neighborhood range of the second grayscale image with the pixel $(x,y)$ as a center; $h_{2med}$ represents the median gray level of the second grayscale image; $\max(h_2)$ and $\min(h_2)$ represent the maximum and minimum gray levels of the second grayscale image, respectively; and $h_T$ represents a predetermined standard gray level, where $h_T$ has a value range of [150,170]; $\omega_1$, $\omega_2$ and $\omega_3$ represent predetermined weight coefficients, where $\omega_1$ has a value range of [0.2,0.4], $\omega_2$ has a value range of [0.3,0.5], $\omega_3$ has a value range of [0.2,0.4], and $\omega_1+\omega_2+\omega_3$ has a value range of [1,1.1]; and
[058] converting the obtained third grayscale image from the gray space back to the RGB space to obtain the preprocessed UAV image.
[059] In view of the problem that the UAV image is prone to being affected by high-altitude illumination, air conditions, etc. during collection, and is therefore easily subject to brightness deviation, unsharpness and the like, the foregoing implementation proposes a technical solution for specifically enhancing the UAV image. The standard UAV image is first converted from RGB space to gray space. Based on the grayscale image obtained through conversion, local grayscale adjustment for local sharpness is performed first: an improved local grayscale adjustment processing function is proposed, and self-adaptive equalization adjustment is performed specifically for extra-bright and extra-dark (brightness-deviated) pixels in the grayscale image. During local grayscale adjustment, the RGB spatial features of the pixels are specially considered to control the degree of local grayscale adjustment, which avoids the pixel distortion otherwise caused in the local grayscale adjustment process. Global grayscale adjustment is then performed on the locally adjusted grayscale image, which applies overall stretching and global brightness adjustment to the grayscale image and effectively improves the contrast of the UAV image; finally, the obtained grayscale image is converted back to the RGB space to obtain the preprocessed UAV image. The enhanced UAV image effectively improves the display of details in the image, highlights detail features, and ensures the overall sharpness of the image, avoiding the distortion caused by conventional image processing and laying a foundation for subsequent geological disaster feature extraction based on the UAV image.
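The enhancement chain of S22 can be summarized in the following Python sketch, which follows the formulas as reconstructed above. The values chosen for α, β, h_T and the weights ω1–ω3 (the latter within the ranges stated in [057]), the 3×3 neighborhood used for h_2δ, and the per-pixel gain used to map the result back to RGB space are illustrative assumptions rather than values fixed by the disclosure.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def enhance(rgb, alpha=0.5, beta=0.5, h_t=160.0, w=(0.3, 0.4, 0.3)):
    """Sketch of S22 following the reconstructed enhancement formulas."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]

    # First grayscale image: RGB -> gray space.
    h1 = 0.299 * r + 0.587 * g + 0.114 * b

    # Gray levels of the 20%th / 80%th pixels, ranked from large to small.
    h_t20 = np.percentile(h1, 80)
    h_t80 = np.percentile(h1, 20)

    cmax = np.maximum(np.maximum(r, g), b)
    cmin = np.minimum(np.minimum(r, g), b)

    # Local grayscale adjustment: darken extra-bright pixels and lift
    # extra-dark ones, leaving mid-tones unchanged (second grayscale image).
    h2 = h1.copy()
    bright = h1 >= h_t20
    h2[bright] = h1[bright] - alpha * (cmax[bright] - h1[bright]) \
        * (h1[bright] - h_t20) / (255.0 - h_t20 + 1e-6)
    dark = h1 <= h_t80
    h2[dark] = np.minimum(h_t80, h1[dark] + beta * (cmax[dark] - cmin[dark]))

    # Global grayscale adjustment (third grayscale image).
    h2d = uniform_filter(h2, size=3)      # neighborhood mean h_2delta
    h2med = np.median(h2)
    h3 = (w[0] * h2d * (h_t - h2med) / 255.0
          + w[1] * h2
          + w[2] * (h_t - h2med) / (h2.max() - h2.min() + 1e-6) * 255.0)

    # Gray space -> RGB: rescale each channel by the gray-level gain.
    gain = h3 / np.maximum(h1, 1e-6)
    return np.clip(rgb * gain[..., None], 0, 255).astype(np.uint8)
```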
[60] S3: Features are extracted based on the preprocessed UAV image to obtain feature information of the UAV image.
[61] In an implementation, in S3, extracting features based on the preprocessed UAV image includes S31, S32, S33, S34 and S35.
[62] S31: RGB channel separation is performed based on the preprocessed UAV image to obtain red channel component features, green channel component features, and blue channel component features of the UAV image.
[63] S32. Infrared reflectivity features are extracted based on the preprocessed UAV image, to obtain infrared reflectivity features of the UAV image.
[64] S33: Texture features are extracted based on the preprocessed UAV image to obtain texture features of the UAV image.
[65] S34: Vegetation index features are extracted based on the preprocessed UAV image to obtain vegetation index features of the UAV image.
[66] S35: Fusion is performed based on the obtained red channel component features, green channel component features, blue channel component features, infrared reflectivity features, texture features, and vegetation index features, to obtain a feature matrix of the UAV image.
[67] The foregoing implementation provides a technical solution for UAV image feature extraction, which constructs a multi-dimensional feature matrix from the RGB channel information, infrared reflectivity information, texture information and normalized difference vegetation index (NDVI) information of the UAV image, so that geological feature information in the UAV image is reflected through multi-dimensional features, thereby helping to improve the diversity of extracted information and the accuracy of geological disaster information extraction.
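A minimal Python sketch of S31–S35 follows, assuming a four-band (R, G, B, near-infrared) UAV product. The local standard deviation stands in for the texture features of S33, since the disclosure does not fix a texture descriptor, and NDVI is computed in the usual way for S34; the function name `build_feature_matrix` is illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def build_feature_matrix(bands):
    """Sketch of S31-S35 for a four-band (R, G, B, NIR) UAV product."""
    r, g, b, nir = (bands[..., i].astype(np.float64) for i in range(4))

    # S32: infrared reflectivity feature (here, the NIR band itself).
    infrared = nir

    # S33: texture as the local standard deviation over a 5x5 window
    # (an assumption; the disclosure does not fix a texture descriptor).
    gray = 0.299 * r + 0.587 * g + 0.114 * b
    local_mean = uniform_filter(gray, size=5)
    texture = np.sqrt(np.maximum(uniform_filter(gray ** 2, size=5)
                                 - local_mean ** 2, 0.0))

    # S34: normalized difference vegetation index.
    ndvi = (nir - r) / np.maximum(nir + r, 1e-6)

    # S35: fuse all per-pixel features into an H x W x 6 feature matrix.
    return np.stack([r, g, b, infrared, texture, ndvi], axis=-1)
```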
[68] S4: The feature information of the UAV image is inputted into a trained neural network model to obtain a geological disaster information extraction result.
[69] In an implementation, in S4, inputting the feature information of the UAV image into the trained neural network model to obtain the geological disaster information extraction result includes S41, S42 and S43.
[70] S41: The feature matrix of the UAV image is obtained.
[71] S42: A reference image feature matrix corresponding to the UAV image is obtained.
[72] S43: The feature matrix of the UAV image and the reference image feature matrix are input as an input set into the trained neural network model to obtain the geological disaster information extraction result output by the neural network model.
[73] In one scenario, extraction of geological disaster information requires comparing data at the current moment with previous data, so as to accurately detect the occurrence of geological disasters and extract information. Therefore, when the input set is constructed, the corresponding feature matrix is extracted based on the image information returned by the UAV in real time. In addition, based on the position corresponding to that image information, the image collected at the previous moment (last time) at the same position is acquired, and a corresponding feature matrix is extracted in the same manner based on the image collected at the previous moment. The feature matrix of the image returned at the current moment and the feature matrix of the image at the same position at the previous moment are taken as an input set, and the input set is input into the trained neural network model.
[74] In an implementation, the trained neural network model includes an input layer, a first convolutional layer, a second convolutional layer, a pooling layer, a first fully connected layer, a second fully connected layer, and a softmax layer which are sequentially connected;
[75] where input of the input layer is the feature matrix of the UAV image and the corresponding reference image feature matrix; the first convolutional layer and the second convolutional layer each include 32 convolution kernels, with sizes of 3×3 and 5×5 respectively; the pooling layer is max-pooling with a size of 3×3; the first fully connected layer includes 128 neurons, and the second fully connected layer includes 16 neurons, with an output of a feature vector that reflects whether the UAV image contains geological disaster information and the types of geological disasters; and the softmax layer performs classification based on the feature vector output by the second fully connected layer, and outputs the geological disaster information extraction result.
[76] In an implementation, the activation function used by the neural network model is ReLU.
[77] Based on the neural network model constructed in the present disclosure, self-adaptive feature extraction may be performed on the data of the two feature matrices in the input set, and whether a geological disaster has occurred may be determined by comparing the earlier and later feature information. In addition, the type of the geological disaster may be further identified, so that extraction of geological disaster information is accurately completed.
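The topology described in [74]–[76] might be sketched in PyTorch as follows. The per-matrix channel count, the input resolution, the channel-wise stacking of the two feature matrices, and the treatment of the 16-neuron output as the direct input to the softmax are assumptions where the disclosure is silent.

```python
import torch
import torch.nn as nn

class DisasterNet(nn.Module):
    """Sketch of the disclosed topology: two convolutional layers of 32
    kernels (3x3 then 5x5), 3x3 max-pooling, fully connected layers of
    128 and 16 neurons, and a softmax classifier, with ReLU per [76]."""

    def __init__(self, feat_channels=6, feat_hw=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2 * feat_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=3),
        )
        side = feat_hw // 3
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * side * side, 128),
            nn.ReLU(),
            nn.Linear(128, 16),  # 16-d feature vector of the second FC layer
        )

    def forward(self, cur_feat, ref_feat):
        # Input set: feature matrix of the current UAV image stacked with
        # the reference (previous-moment) feature matrix along channels.
        x = torch.cat([cur_feat, ref_feat], dim=1)
        v = self.classifier(self.features(x))
        # Softmax layer: here the 16-d vector is classified directly; the
        # disclosure leaves the mapping from neurons to categories open.
        return torch.softmax(v, dim=1)
```

For example, `DisasterNet()(torch.randn(1, 6, 64, 64), torch.randn(1, 6, 64, 64))` returns a per-category probability vector for one pair of feature matrices.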
[78] The geological disasters include cracks, landslides, etc.
[79] In an implementation, the method further includes SB1.
[80] SB1: The neural network model is trained, which includes:
[81] obtaining two groups of UAV images captured in different time periods at a same position, where the UAV image captured in the earlier time period is taken as a first image, and the UAV image captured in the later time period is taken as a second image; the first image and the second image may be UAV images before and after a geological disaster at the same position, and the geological disasters include cracks and landslides;
[82] extracting features based on the first image and the second image respectively to obtain a feature matrix corresponding to the first image and a feature matrix corresponding to the second image, and constructing a training set by using the feature matrix corresponding to the first image, the feature matrix corresponding to the second image, and correspondingly calibrated geological disaster information extraction marks; and
[83] training the neural network model based on the constructed training set, testing the neural network model by using a test set, and obtaining the trained neural network model when a pass rate of the neural network model reaches a predetermined standard.
[84] Training the neural network model by the foregoing method may ensure the effectiveness and accuracy of the neural network model, and improve the efficiency and reliability of geological disaster information extraction based on the UAV image.
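For completeness, a conventional training and testing loop matching SB1 might look like the following sketch. The optimizer, learning rate, epoch budget and the 0.95 pass-rate threshold are assumptions; the disclosure only requires training on the constructed set and acceptance once a predetermined pass rate is reached on the test set.

```python
import torch
from torch import nn, optim

def train_model(model, train_loader, test_loader, epochs=20, pass_rate=0.95):
    """Illustrative SB1 loop; hyperparameters are assumptions."""
    opt = optim.Adam(model.parameters(), lr=1e-3)
    nll = nn.NLLLoss()  # the model already outputs softmax probabilities

    for _ in range(epochs):
        model.train()
        # Each sample: first-image features, second-image features, mark.
        for first_feat, second_feat, mark in train_loader:
            opt.zero_grad()
            log_probs = torch.log(model(first_feat, second_feat) + 1e-9)
            nll(log_probs, mark).backward()
            opt.step()

    # Pass rate on the test set decides whether training is accepted.
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for first_feat, second_feat, mark in test_loader:
            pred = model(first_feat, second_feat).argmax(dim=1)
            correct += (pred == mark).sum().item()
            total += mark.numel()
    return correct / total >= pass_rate
```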
[85] The foregoing implementation of the present disclosure provides the method for automatically extracting geological disaster information from the UAV image based on deep learning, so that preprocessing and feature extraction may be performed based on the acquired UAV image, and self-adaptive geological disaster identification and extraction of corresponding geological disaster inspection information are performed based on the trained neural network model. This helps to improve the efficiency and accuracy of geological disaster detection, effectively reduces labor costs, and improves the intelligence level of geological disaster detection.
[86] Based on the method for automatically extracting geological disaster information from the UAV image based on deep learning shown in FIG. 1, the present disclosure further provides a system for automatically extracting geological disaster information from the UAV image based on deep learning, where the system is used to implement the method shown in FIG. 1 and the specific embodiments corresponding to the steps of the method; the description is not repeated herein.
[87] It should be noted that functional units/modules in the embodiments of the present disclosure may be integrated into one processing unit/module, or each of the units/modules may exist alone physically, or two or more units/modules may be integrated into one unit/module. The foregoing integrated unit/module may be implemented either in a form of hardware or in a form of a software functional unit/module.
[88] From the foregoing description of the implementations, a person skilled in the art may clearly understand that the embodiments described herein may be implemented in hardware, software, firmware, middleware, code or any suitable combination thereof. For hardware implementation, a processor may be implemented in one or more of the following units: an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field programmable gate array (FPGA), a processor, a controller, a microcontroller, a microprocessor, another electronic unit designed to implement the functions described herein, or a combination thereof. For software implementation, some or all of the processes of the embodiments may be completed by a computer program instructing related hardware. During implementation, the foregoing program may be stored in a computer-readable medium or transmitted as one or more instructions or code on the computer-readable medium. The computer-readable medium includes a computer storage medium and a communication medium, where the communication medium includes any medium that facilitates transfer of a computer program from one place to another. The storage medium may be any available medium accessible by a computer. The computer-readable medium may include, but is not limited to, a random access memory (RAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or another optical storage, a magnetic disk storage medium or another magnetic storage device, or any other medium that may be used to carry or store desired program code in the form of an instruction or a data structure and that may be accessed by a computer.
[89] Finally, it should be noted that the foregoing embodiments are provided merely to describe the technical solutions of the present disclosure, rather than to limit the protection scope of the present disclosure. Although the present disclosure is described in detail with reference to preferred embodiments, a person of ordinary skill in the art should understand that modifications or equivalent replacements may be made to the technical solutions of the present disclosure without departing from the essence and scope of the technical solutions of the present disclosure.

Claims (9)

WHAT IS CLAIMED IS:
1. A method for automatically extracting geological disaster information from an unmanned aerial vehicle (UAV) image based on deep learning, comprising:
S1: acquiring the UAV image;
S2: preprocessing the acquired UAV image to obtain a preprocessed UAV image;
S3: extracting features based on the preprocessed UAV image to obtain feature information of the UAV image; and
S4: inputting the feature information of the UAV image into a trained neural network model to obtain a geological disaster information extraction result.
2. The method for automatically extracting geological disaster information from the UAV image based on deep learning according to claim 1, wherein S1 comprises:
acquiring the UAV image collected and returned by a UAV from a target region in real time.
3. The method for automatically extracting geological disaster information from the UAV image based on deep learning according to claim 2, wherein S2 comprises:
S21: performing projection transformation, radiation correction, image registration and image clipping on the acquired UAV image to obtain a standard UAV image; and
S22: performing enhancement processing based on the standard UAV image to obtain the preprocessed UAV image.
4. The method for automatically extracting geological disaster information from the UAV image based on deep learning according to claim 3, wherein S3 comprises:
S31: performing red green blue (RGB) channel separation based on the preprocessed UAV image to obtain red channel component features, green channel component features, and blue channel component features of the UAV image;
S32: extracting infrared reflectivity features based on the preprocessed UAV image to obtain infrared reflectivity features of the UAV image;
S33: extracting texture features based on the preprocessed UAV image to obtain texture features of the UAV image;
S34: extracting vegetation index features based on the preprocessed UAV image to obtain vegetation index features of the UAV image; and
S35: performing fusion based on the obtained red channel component features, green channel component features, blue channel component features, infrared reflectivity features, texture features, and vegetation index features, to obtain a feature matrix of the UAV image.
5. The method for automatically extracting geological disaster information from the UAV image based on deep learning according to claim 4, wherein S4 comprises:
S41: obtaining the feature matrix of the UAV image;
S42: obtaining a reference image feature matrix corresponding to the UAV image; and
S43: inputting the feature matrix of the UAV image and the reference image feature matrix as an input set into the trained neural network model to obtain the geological disaster information extraction result output by the neural network model.
6. The method for automatically extracting geological disaster information from the UAV image based on deep learning according to claim 5, wherein the trained neural network model comprises an input layer, a first convolutional layer, a second convolutional layer, a pooling layer, a first fully connected layer, a second fully connected layer, and a softmax layer which are sequentially connected;
wherein input of the input layer is the feature matrix of the UAV image and the corresponding reference image feature matrix; the first convolutional layer and the second convolutional layer each comprise 32 convolution kernels, with sizes of 3×3 and 5×5 respectively; the pooling layer is max-pooling with a size of 3×3; the first fully connected layer comprises 128 neurons, and the second fully connected layer comprises 16 neurons, with an output of a feature vector that reflects whether the UAV image contains geological disaster information and the types of geological disasters; and the softmax layer performs classification based on the feature vector output by the second fully connected layer, and outputs the geological disaster information extraction result.
7. The method for automatically extracting geological disaster information from the UAV image based on deep learning according to claim 6, wherein the method further comprises:
SB1: training the neural network model, comprising:
obtaining two groups of UAV images captured in different time periods at a same position, wherein the UAV image captured in the earlier time period is taken as a first image, and the UAV image captured in the later time period is taken as a second image; the first image and the second image are UAV images before and after a geological disaster at the same position, and the geological disasters comprise cracks and landslides;
extracting features based on the first image and the second image respectively to obtain a feature matrix corresponding to the first image and a feature matrix corresponding to the second image, and constructing a training set by using the feature matrix corresponding to the first image, the feature matrix corresponding to the second image, and correspondingly-calibrated geological disaster information extraction marks; and
training the neural network model based on the constructed training set, testing the neural network model by using a test set, and obtaining the trained neural network model when a pass rate of the neural network model reaches a predetermined standard.
8. The method for automatically extracting geological disaster information from the UAV image based on deep learning according to claim 3, wherein in S22, performing enhancement processing based on the standard UAV image comprises:
converting the standard UAV image from RGB space to gray space to obtain a first grayscale image, wherein a gray space conversion function used is:

$$h_1(x,y) = 0.299 \times r(x,y) + 0.587 \times g(x,y) + 0.114 \times b(x,y)$$

wherein $h_1(x,y)$ represents a gray level of a pixel $(x,y)$ in the first grayscale image; $r(x,y)$, $g(x,y)$ and $b(x,y)$ represent the R, G and B component levels of the pixel $(x,y)$ in the standard UAV image, respectively;
performing local grayscale adjustment processing on the obtained first grayscale image to obtain a second grayscale image, wherein a local grayscale adjustment processing function used is:

$$h_2(x,y) = \begin{cases} h_1(x,y) - \alpha \times \left[\max\big(r(x,y), g(x,y), b(x,y)\big) - h_1(x,y)\right] \times \dfrac{h_1(x,y) - h_{T20}}{255 - h_{T20}}, & h_1(x,y) \ge h_{T20} \\ h_1(x,y), & h_{T80} < h_1(x,y) < h_{T20} \\ \min\Big(h_{T80},\ h_1(x,y) + \beta \times \big[\max\big(r(x,y), g(x,y), b(x,y)\big) - \min\big(r(x,y), g(x,y), b(x,y)\big)\big]\Big), & h_1(x,y) \le h_{T80} \end{cases}$$

wherein $h_2(x,y)$ represents a gray level of the pixel $(x,y)$ in the second grayscale image; $h_{T20}$ and $h_{T80}$ represent gray levels corresponding to a 20%th pixel and an 80%th pixel, respectively, in a sequence obtained by ranking gray levels of all pixels in the first grayscale image from large to small; $\max(r(x,y), g(x,y), b(x,y))$ and $\min(r(x,y), g(x,y), b(x,y))$ represent the maximum and minimum values among the R component level, the G component level, and the B component level of the pixel $(x,y)$; and $\alpha$ and $\beta$ represent predetermined adjustment coefficients;
performing global grayscale adjustment processing based on the obtained second grayscale image to obtain a third grayscale image, wherein a global grayscale adjustment processing function used is:

$$h_3(x,y) = \omega_1 \times h_{2\delta}(x,y) \times \frac{h_T - h_{2med}}{255} + \omega_2 \times h_2(x,y) + \omega_3 \times \frac{h_T - h_{2med}}{\max(h_2) - \min(h_2)} \times 255$$

wherein $h_3(x,y)$ represents a gray level of the pixel $(x,y)$ in the third grayscale image; $h_{2\delta}(x,y)$ represents an average gray level of all pixels in a neighborhood range of the second grayscale image with the pixel $(x,y)$ as a center; $h_{2med}$ represents a median gray level of the second grayscale image; $\max(h_2)$ and $\min(h_2)$ represent a maximum gray level and a minimum gray level of the second grayscale image, respectively; and $h_T$ represents a predetermined standard gray level, wherein $h_T$ has a value range of [150,170]; $\omega_1$, $\omega_2$ and $\omega_3$ represent predetermined weight coefficients, wherein $\omega_1$ has a value range of [0.2,0.4], $\omega_2$ has a value range of [0.3,0.5], $\omega_3$ has a value range of [0.2,0.4], and $\omega_1+\omega_2+\omega_3$ has a value range of [1,1.1]; and
converting the obtained third grayscale image from the gray space to the RGB space to obtain the preprocessed UAV image.
9. A system for automatically extracting geological disaster information from a UAV image based on deep learning, wherein the system is used to implement the method for automatically extracting geological disaster information from the UAV image based on deep learning according to any one of claims 1 to 8.
GB2217794.3A 2022-06-27 2022-10-14 Deep learning based method for automatic geological disaster extraction from unmanned aerial vehicle image Pending GB2621645A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210735764.XA CN114821376B (en) 2022-06-27 2022-06-27 Unmanned aerial vehicle image geological disaster automatic extraction method based on deep learning
PCT/CN2022/125257 WO2024000927A1 (en) 2022-06-27 2022-10-14 Deep learning based method for automatic geological disaster extraction from unmanned aerial vehicle image

Publications (2)

Publication Number Publication Date
GB202217794D0 GB202217794D0 (en) 2023-01-11
GB2621645A true GB2621645A (en) 2024-02-21

Family

ID=89834518

Family Applications (1)

Application Number Title Priority Date Filing Date
GB2217794.3A Pending GB2621645A (en) 2022-06-27 2022-10-14 Deep learning based method for automatic geological disaster extraction from unmanned aerial vehicle image

Country Status (1)

Country Link
GB (1) GB2621645A (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118013428B (en) * 2024-04-10 2024-06-07 四川省华地建设工程有限责任公司 Geological disaster risk assessment method and system based on artificial intelligence

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106548465A (en) * 2016-11-25 2017-03-29 福建师范大学 A kind of Enhancement Method of multi-spectrum remote sensing image
CN110008854A (en) * 2019-03-18 2019-07-12 中交第二公路勘察设计研究院有限公司 Unmanned plane image Highway Geological Disaster recognition methods based on pre-training DCNN
CN110532974A (en) * 2019-09-03 2019-12-03 成都理工大学 High-definition remote sensing information on geological disasters extraction method based on deep learning
CN112037144A (en) * 2020-08-31 2020-12-04 哈尔滨理工大学 Low-illumination image enhancement method based on local contrast stretching
CN113205039A (en) * 2021-04-29 2021-08-03 广东电网有限责任公司东莞供电局 Power equipment fault image identification and disaster investigation system and method based on multiple DCNNs
CN114596495A (en) * 2022-03-17 2022-06-07 湖南科技大学 Sand slide identification and automatic extraction method based on Sentinel-2A remote sensing image
WO2022133194A1 (en) * 2020-12-17 2022-06-23 Trustees Of Tufts College Deep perceptual image enhancement
CN114821376A (en) * 2022-06-27 2022-07-29 中咨数据有限公司 Unmanned aerial vehicle image geological disaster automatic extraction method based on deep learning

Also Published As

Publication number Publication date
GB202217794D0 (en) 2023-01-11

Similar Documents

Publication Publication Date Title
CN114821376B (en) Unmanned aerial vehicle image geological disaster automatic extraction method based on deep learning
CN108875821A (en) The training method and device of disaggregated model, mobile terminal, readable storage medium storing program for executing
US12033327B2 (en) Methods, systems and apparatus for processing medical chest images
CN109684967A (en) A kind of soybean plant strain stem pod recognition methods based on SSD convolutional network
CN109712084B (en) Image restoration method, image restoration system and flat panel detector
US20220398698A1 (en) Image processing model generation method, processing method, storage medium, and terminal
CN101983507A (en) Automatic redeye detection
CN112529827A (en) Training method and device for remote sensing image fusion model
GB2621645A (en) Deep learning based method for automatic geological disaster extraction from unmanned aerial vehicle image
CN114821440B (en) Mobile video stream content identification and analysis method based on deep learning
CN110827375B (en) Infrared image true color coloring method and system based on low-light-level image
CN117456371B (en) Group string hot spot detection method, device, equipment and medium
CN113379611B (en) Image processing model generation method, processing method, storage medium and terminal
CN112749664A (en) Gesture recognition method, device, equipment, system and storage medium
CN115861922B (en) Sparse smoke detection method and device, computer equipment and storage medium
CN115690934A (en) Master and student attendance card punching method and device based on batch face recognition
CN116433528A (en) Image detail enhancement display method and system for target area detection
CN112700396A (en) Illumination evaluation method and device for face picture, computing equipment and storage medium
CN115240245A (en) Face living body detection method, face living body detection device and electronic equipment
CN115410154A (en) Method for identifying thermal fault of electrical equipment of wind power engine room
CN115471745A (en) Network model and device for plant identification and electronic equipment
CN113379610B (en) Training method of image processing model, image processing method, medium and terminal
CN114972065A (en) Training method and system of color difference correction model, electronic equipment and mobile equipment
CN110443259B (en) Method for extracting sugarcane from medium-resolution remote sensing image
CN107145734A (en) A kind of automatic acquisition of medical data and input method and its system

Legal Events

Date Code Title Description
789A Request for publication of translation (sect. 89(a)/1977)

Free format text: PCT PUBLICATION NOT PUBLISHED