CN112307992A - Automatic mangrove plant identification method based on unmanned aerial vehicle visible light remote sensing - Google Patents


Info

Publication number
CN112307992A
CN112307992A (application CN202011214732.2A)
Authority
CN
China
Prior art keywords
unmanned aerial
aerial vehicle
mangrove
automatic
visible light
Prior art date
Legal status
Pending
Application number
CN202011214732.2A
Other languages
Chinese (zh)
Inventor
李瑞利
沈小雪
翟朝阳
张志�
张月琪
江鎞倩
Current Assignee
Peking University Shenzhen Graduate School
Original Assignee
Peking University Shenzhen Graduate School
Priority date
Filing date
Publication date
Application filed by Peking University Shenzhen Graduate School filed Critical Peking University Shenzhen Graduate School
Priority to CN202011214732.2A
Publication of CN112307992A
Legal status: Pending

Classifications

    • G06V 20/188 Scenes; Terrestrial scenes; Vegetation
    • G06F 18/214 Pattern recognition; Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06N 3/045 Neural networks; Combinations of networks
    • G06T 17/00 Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 5/80 Image enhancement or restoration; Geometric correction
    • G06T 7/11 Image analysis; Segmentation; Region-based segmentation
    • G06V 10/267 Image preprocessing; Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06T 2207/10028 Image acquisition modality; Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • Biomedical Technology (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an automatic mangrove plant identification method based on unmanned aerial vehicle (UAV) visible light remote sensing. Visible light remote sensing data collected by the UAV are preprocessed to obtain the data set needed to train the model; a digital surface model of the study plot is then generated by three-dimensional point cloud reconstruction. The SegNet semantic segmentation model is improved by adding a group of convolutional neural networks that capture the physical structure features of the mangrove plants, and the model is trained jointly with features extracted from the visible light orthographic images; the trained model then classifies and identifies mangrove plants automatically. The method is not tailored to a particular study plot and adopts pixel-level classification, so it has strong universality; by using the physical structure information of the mangrove forest as a constraint it learns the forest more comprehensively, effectively improves the accuracy of species identification, and can readily be applied to mangrove research and monitoring in different areas.

Description

Automatic mangrove plant identification method based on unmanned aerial vehicle visible light remote sensing
Technical Field
The invention belongs to the technical field of mangrove forest remote sensing monitoring, and particularly relates to an unmanned aerial vehicle visible light remote sensing-based automatic mangrove plant identification method.
Background
Low-altitude unmanned aerial vehicle (UAV) remote sensing offers low cost, flexible data acquisition, high image spatial resolution and real-time image capture; it is particularly advantageous for small-area, low-altitude work and is an important supplement to traditional aerial and satellite remote sensing. In recent years UAVs have attracted much attention for forest mapping, crop management and other vegetation monitoring tasks, and mangrove UAV remote sensing research, as one branch of this work, is still in its early stages.
At present, mangrove plant species identification based on UAV visible light remote sensing mainly revolves around object-level identification algorithms. Object-oriented identification extracts spectral, textural and geometric features of image patches through image segmentation, and related features are screened manually to classify species. However, the segmentation parameters and feature types required by such studies must be chosen specifically for the characteristics of each survey plot, so the approach generalises poorly and is not conducive to long-term, effective monitoring of mangrove ecosystems.
With the continuous development of computer vision in recent years, pixel-level image segmentation algorithms based on deep learning can solve these problems and have therefore become a research focus. Pixel-level identification uses an end-to-end artificial intelligence algorithm, backed by high-performance computing, to classify image pixels directly, enabling recognition and understanding of UAV visible light remote sensing data with a simple processing pipeline and strong universality. However, current research in this direction achieves classification mainly from the spectral features of mangrove plants in colour aerial images and does not make full use of information such as the plants' physical structure for more comprehensive recognition.
Disclosure of Invention
In order to overcome the defects in the prior art, the invention provides an unmanned aerial vehicle visible light remote sensing-based automatic mangrove plant identification method, which comprises the following steps:
s1: collecting unmanned aerial vehicle visible light remote sensing data of a mangrove forest ecosystem to obtain an original image of the unmanned aerial vehicle;
s2: preprocessing, data labeling and cutting an original image of the unmanned aerial vehicle to obtain a training set, a test set and a verification set required by a training network;
s3: carrying out three-dimensional point cloud reconstruction on the unmanned aerial vehicle original image subjected to preprocessing, data annotation and cutting processing to obtain a mangrove plant digital surface model of a flight area;
s4: inputting the training set obtained in S2 and the corresponding mangrove plant digital surface model in S3 into an improved SegNet neural network for training, and performing parameter optimization through verification set verification loop iteration to construct a mangrove plant type automatic identification model;
s5: inputting the data in the test set into the automatic classification model of the mangrove plant species obtained in S4, obtaining a species identification result, verifying the accuracy of the mangrove plant classification model, improving parameters and optimizing;
s6: storing the mangrove plant classification model after the precision verification in a server;
s7: the method comprises the steps that the visible light remote sensing data of the unmanned aerial vehicle are acquired in real time and are transmitted back to a server through a data link, and the server calls a mangrove plant classification model to recognize the visible light remote sensing data of the unmanned aerial vehicle, so that automatic classification and identification of mangrove plants are achieved.
Specifically, in S1, when collecting the UAV visible light remote sensing data of the mangrove ecosystem, the UAV is a DJI Phantom 4 RTK with centimetre-level positioning accuracy in both the horizontal and vertical directions and an image resolution of 5472 × 3648 pixels. The flight parameters are a height of 80 m, a flight speed of 3 m/s, a forward (course) overlap of 90% and a side overlap of 80%; the lens shoots orthographic images vertically downwards, and a single flight takes 18 min on average.
Specifically, in S2, the following steps are performed:
s21: correcting distortion in the UAV original images according to the distortion parameters of the UAV gimbal camera lens;
s22: annotating the distortion-corrected UAV original images to determine the plant identification class labels;
s23: cropping the annotated UAV original images, augmenting the image data and screening the data collected in the effective area containing mangrove plants;
s24: dividing the cropped UAV original images into a training set, a test set and a verification set in corresponding proportions.
Specifically, in S3, the following steps are performed:
s31: carrying out three-dimensional point cloud reconstruction on the unmanned aerial vehicle original image subjected to preprocessing, data annotation and cutting processing according to the flight parameters of the unmanned aerial vehicle;
s32: and acquiring a mangrove plant digital surface model of the flight area based on the three-dimensional point cloud reconstruction result.
Specifically, in S4, the following steps are performed:
s41: extracting features from the training set obtained in S2 through the first 13 convolutional layers of the VGG-16 network;
s42: extracting features from the corresponding mangrove plant digital surface model in S3 through the first 13 convolutional layers of the VGG-16 network;
s43: combining the feature maps extracted in S41 and S42 to obtain high-dimensional feature information constrained by the physical structure information of the mangrove plants;
s44: up-sampling and deconvolving the high-dimensional feature information to obtain a pixel-level semantic segmentation image model the same size as the original image;
s45: optimising the parameters of the pixel-level semantic segmentation image model with the verification set obtained in step S2 to obtain the automatic mangrove plant species classification model.
Advantageous effects:
In the automatic mangrove plant identification method, when the pixel-level deep learning classification algorithm is constructed, the SegNet semantic segmentation algorithm is improved: a group of convolutional neural networks is added to extract feature information from the mangrove forest digital surface model (DSM), and the physical structure information provided by the DSM assists the judgement of mangrove species, improving identification accuracy. The improved pixel-level method raises the precision of automatic mangrove plant species identification; at the same time, because the method is not designed for a specific study plot, it has strong universality and can readily be applied to mangrove investigation and monitoring in different areas.
Drawings
FIG. 1 is a flow chart of an automatic mangrove plant identification method based on unmanned aerial vehicle visible light remote sensing of the invention;
FIG. 2 is a diagram of the improved deep learning network framework of the present invention.
Detailed Description
The present invention is further illustrated by the following examples, which are only a subset of the possible embodiments; they explain the invention and do not limit its scope.
As shown in fig. 1, a mangrove plant automatic identification method based on unmanned aerial vehicle visible light remote sensing comprises the following steps:
s1: collecting unmanned aerial vehicle visible light remote sensing data of a mangrove forest ecosystem to obtain an original image of the unmanned aerial vehicle;
specifically, in S1, when collecting the UAV visible light remote sensing data of the mangrove ecosystem, the UAV is a DJI Phantom 4 RTK with centimetre-level positioning accuracy in both the horizontal and vertical directions and an image resolution of 5472 × 3648 pixels. The flight parameters are a height of 80 m, a flight speed of 3 m/s, a forward (course) overlap of 90% and a side overlap of 80%; the lens shoots orthographic images vertically downwards, and a single flight takes 18 min on average.
S2: preprocessing, data labeling and cutting an original image of the unmanned aerial vehicle to obtain a training set, a test set and a verification set required by a training network;
specifically, in S2, the following steps are performed:
s21: correcting distortion in the UAV original images according to the distortion parameters of the UAV gimbal camera lens;
s22: annotating the distortion-corrected UAV original images to determine the plant identification class labels. For example, regions of the image are identified and labelled, marking which areas belong to each tree species and which are bare land, so that each recognised region carries the corresponding class label;
s23: cropping the annotated UAV original images, augmenting the image data and screening the data collected in the effective area containing mangrove plants;
s24: dividing the images obtained in step S23 into a training set, a test set and a verification set in corresponding proportions, for example 60%, 20% and 20%;
s3: carrying out three-dimensional point cloud reconstruction on the unmanned aerial vehicle original image subjected to preprocessing, data annotation and cutting processing to obtain a mangrove plant digital surface model of a flight area;
specifically, in S3, the following steps are performed:
s31, performing three-dimensional point cloud reconstruction on the preprocessed, annotated and cropped UAV original images according to the UAV flight parameters (such as height, course overlap and side overlap);
s32, acquiring a mangrove plant digital surface model of the flight area based on the three-dimensional point cloud reconstruction result;
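In practice the DSM of S31–S32 comes from photogrammetry software; the following toy Python sketch only illustrates the underlying idea of rasterising a reconstructed 3-D point cloud into a surface grid by keeping the highest elevation in each cell. The sample points, the 1 m cell size and the function name are invented for the example.

```python
def points_to_dsm(points, cell_size, width, height):
    """points: iterable of (x, y, z) tuples; returns a height grid (list of rows).
    Keeping the maximum z per cell approximates the canopy surface."""
    dsm = [[None] * width for _ in range(height)]
    for x, y, z in points:
        col = int(x / cell_size)
        row = int(y / cell_size)
        if 0 <= row < height and 0 <= col < width:
            if dsm[row][col] is None or z > dsm[row][col]:
                dsm[row][col] = z
    return dsm

# three hypothetical reconstructed points over a 2x1 grid of 1 m cells
cloud = [(0.2, 0.3, 2.1), (0.7, 0.4, 5.6), (1.5, 0.1, 3.0)]
dsm = points_to_dsm(cloud, cell_size=1.0, width=2, height=1)
```

The first cell receives two points and keeps the higher canopy return (5.6 m); real DSM generation additionally interpolates empty cells and filters outliers.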
s4: inputting the training set obtained in S2 and the corresponding mangrove plant digital surface model in S3 into the improved SegNet neural network shown in FIG. 2 for training, and performing parameter optimization through verification set verification loop iteration to construct a mangrove plant species automatic identification model;
specifically, in S4, the following steps are performed:
s41: extracting features from the training set obtained in S2 through the first 13 convolutional layers of the VGG-16 network; the extracted features are mainly spectral features of the mangrove plants;
s42: extracting features from the corresponding mangrove plant digital surface model in S3 through the first 13 convolutional layers of the VGG-16 network; the extracted features are mainly physical structure features such as the height of the mangroves;
s43: combining the feature maps extracted in S41 and S42 to obtain high-dimensional feature information constrained by the physical structure information of the mangrove plants;
s44: up-sampling and deconvolving the high-dimensional feature information to obtain a pixel-level semantic segmentation image model the same size as the original image;
s45: optimising the parameters of the pixel-level semantic segmentation image model with the verification set obtained in step S2 to obtain the automatic mangrove plant species classification model.
S5: inputting the data in the test set into the automatic classification model of the mangrove plant species obtained in S4, obtaining a species identification result, verifying the accuracy of the mangrove plant classification model, improving parameters and optimizing;
s6: storing the mangrove plant classification model after the precision verification in a server;
s7: the method comprises the steps that the visible light remote sensing data of the unmanned aerial vehicle are acquired in real time and are transmitted back to a server through a data link, and the server calls a mangrove plant classification model to recognize the visible light remote sensing data of the unmanned aerial vehicle, so that automatic classification and identification of mangrove plants are achieved.
Further, when the labelled images are preprocessed in S2, the image size may be fixed to 736 × 736 pixels, and the DSM data in S3 are correspondingly cropped to 736 × 736 pixels.
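The fixed-size tiling can be sketched as follows: a helper computes where non-overlapping full tiles fit inside a frame, so image and DSM crops stay aligned. The function name and the decision to drop partial edge tiles are assumptions; only the 736-pixel tile size and the 5472 × 3648 camera frame come from the text.

```python
TILE = 736  # fixed crop size used for both the image and DSM data

def tile_origins(height, width, tile=TILE):
    """Top-left corners of the non-overlapping tile-sized crops that fit fully
    inside a height x width frame (partial edge tiles are simply dropped here)."""
    return [(r, c)
            for r in range(0, height - tile + 1, tile)
            for c in range(0, width - tile + 1, tile)]

# a 5472x3648 frame yields a 4x7 grid of full 736-pixel tiles
origins = tile_origins(3648, 5472)
```

Cropping the RGB image and the DSM with the same origin list guarantees pixel-for-pixel correspondence between the two network inputs.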
Further, in S3, ContextCapture software may be selected to perform mangrove forest three-dimensional point cloud reconstruction, and DSM data of the research plot is further generated.
Further, in S4, the hardware may be an NVIDIA GeForce RTX 2080 Ti graphics card with an Intel i9-10900K processor; the training learning rate is 5 × 10⁻⁶ and the number of training iterations is 100.
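For reference, the hardware and hyper-parameters above can be collected into the kind of configuration dict a training script might consume. The values are the ones stated in the text; the key names and the dict itself are illustrative, not part of the patent.

```python
# illustrative training configuration mirroring the embodiment's parameters
TRAIN_CONFIG = {
    "gpu": "NVIDIA GeForce RTX 2080 Ti",
    "cpu": "Intel i9-10900K",
    "learning_rate": 5e-6,   # 5 x 10^-6, as specified above
    "iterations": 100,
    "tile_size": (736, 736), # fixed crop size for images and DSM data
}
```

Keeping such settings in one structure makes the reported setup easy to reproduce or vary.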
In the automatic mangrove plant identification method, when the pixel-level deep learning classification algorithm is constructed, the SegNet semantic segmentation algorithm is improved: a group of convolutional neural networks is added to extract feature information from the mangrove forest digital surface model (DSM), and the physical structure information provided by the DSM assists the judgement of mangrove species, improving identification accuracy. The improved pixel-level method raises the precision of automatic mangrove plant species identification; at the same time, because the method is not designed for a specific study plot, it has strong universality and can readily be applied to mangrove investigation and monitoring across multiple study areas.
Although the present invention has been described with reference to a preferred embodiment, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (5)

1. An unmanned aerial vehicle visible light remote sensing-based automatic mangrove plant identification method is characterized in that: the method comprises the following steps:
s1: collecting unmanned aerial vehicle visible light remote sensing data of a mangrove forest ecosystem to obtain an original image of the unmanned aerial vehicle;
s2: preprocessing, data labeling and cutting an original image of the unmanned aerial vehicle to obtain a training set, a test set and a verification set required by a training network;
s3: carrying out three-dimensional point cloud reconstruction on the unmanned aerial vehicle original image subjected to preprocessing, data annotation and cutting processing to obtain a mangrove plant digital surface model of a flight area;
s4: inputting the training set obtained in S2 and the corresponding mangrove plant digital surface model in S3 into an improved SegNet neural network for training, and performing parameter optimization through verification set verification loop iteration to construct a mangrove plant type automatic identification model;
s5: inputting the data in the test set into the automatic classification model of the mangrove plant species obtained in S4, obtaining a species identification result, verifying the accuracy of the mangrove plant classification model, improving parameters and optimizing;
s6: storing the mangrove plant classification model after the precision verification in a server;
s7: the method comprises the steps that the visible light remote sensing data of the unmanned aerial vehicle are acquired in real time and are transmitted back to a server through a data link, and the server calls a mangrove plant classification model to recognize the visible light remote sensing data of the unmanned aerial vehicle, so that automatic classification and identification of mangrove plants are achieved.
2. The automatic mangrove plant identification method according to claim 1, characterized in that:
specifically, in S1, when collecting the UAV visible light remote sensing data of the mangrove ecosystem, the UAV is a DJI Phantom 4 RTK, the positioning accuracy in the horizontal and vertical directions is centimetre level, the image resolution is 5472 × 3648 pixels, the flight parameters are a height of 80 m, a flight speed of 3 m/s, a course overlap of 90% and a side overlap of 80%, the lens shoots orthographic images vertically downwards, and a single flight takes 18 min on average.
3. The automatic mangrove plant identification method according to claim 1, characterized in that:
specifically, in S2, the following steps are performed:
s21: correcting distortion in the UAV original images according to the distortion parameters of the UAV gimbal camera lens;
s22: annotating the distortion-corrected UAV original images to determine the plant identification class labels;
s23: cropping the annotated UAV original images, augmenting the image data and screening the data collected in the effective area containing mangrove plants;
s24: dividing the cropped UAV original images into a training set, a test set and a verification set in corresponding proportions.
4. The automatic mangrove plant identification method according to claim 1, characterized in that:
specifically, in S3, the following steps are performed:
s31: carrying out three-dimensional point cloud reconstruction on the unmanned aerial vehicle original image subjected to preprocessing, data annotation and cutting processing according to the flight parameters of the unmanned aerial vehicle;
s32: and acquiring a mangrove plant digital surface model of the flight area based on the three-dimensional point cloud reconstruction result.
5. The automatic mangrove plant identification method according to claim 1, characterized in that:
specifically, in S4, the following steps are performed:
s41: extracting features from the training set obtained in S2 through the first 13 convolutional layers of the VGG-16 network;
s42: extracting features from the corresponding mangrove plant digital surface model in S3 through the first 13 convolutional layers of the VGG-16 network;
s43: combining the characteristic graphs extracted in S41 and S42 to obtain high-dimensional characteristic information constrained by physical structure information of the mangrove plants;
s44: performing up-sampling and deconvolution on the high-dimensional characteristic information to obtain a pixel-level semantic segmentation image model with the same size as the original image;
s45: and optimizing parameters of the pixel-level semantic segmentation image model according to the verification set obtained in the step S2 to obtain an automatic classification model of the mangrove plant species.
CN202011214732.2A, filed 2020-11-04 (priority 2020-11-04): Automatic mangrove plant identification method based on unmanned aerial vehicle visible light remote sensing. Status: Pending. Published as CN112307992A.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011214732.2A CN112307992A (en) 2020-11-04 2020-11-04 Automatic mangrove plant identification method based on unmanned aerial vehicle visible light remote sensing


Publications (1)

Publication Number Publication Date
CN112307992A 2021-02-02

Family

ID=74324798

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011214732.2A Pending CN112307992A (en) 2020-11-04 2020-11-04 Automatic mangrove plant identification method based on unmanned aerial vehicle visible light remote sensing

Country Status (1)

Country Link
CN (1) CN112307992A (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108230329A (en) * 2017-12-18 2018-06-29 孙颖 Semantic segmentation method based on multiple dimensioned convolutional neural networks
CN108681706A (en) * 2018-05-15 2018-10-19 哈尔滨工业大学 A kind of double source remotely-sensed data semantic segmentation method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
王红军: 《基于知识的机电系统故障诊断与预测技术》 (Knowledge-Based Fault Diagnosis and Prediction Technology for Mechatronic Systems), 中国财富出版社, 31 January 2014 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113128405A (en) * 2021-04-20 2021-07-16 北京航空航天大学 Plant identification and model construction method combining semantic segmentation and point cloud processing
CN113239829A (en) * 2021-05-17 2021-08-10 哈尔滨工程大学 Cross-dimension remote sensing data target identification method based on space occupation probability characteristics
CN117456369A (en) * 2023-12-25 2024-01-26 广东海洋大学 Visual recognition method for intelligent mangrove growth condition
CN117456369B (en) * 2023-12-25 2024-02-27 广东海洋大学 Visual recognition method for intelligent mangrove growth condition

Similar Documents

Publication Publication Date Title
CN112307992A (en) Automatic mangrove plant identification method based on unmanned aerial vehicle visible light remote sensing
CN110569786B (en) Fruit tree identification and quantity monitoring method and system based on unmanned aerial vehicle data acquisition
US20210118165A1 (en) Geospatial object geometry extraction from imagery
CN112560623B (en) Unmanned aerial vehicle-based rapid mangrove plant species identification method
US20230162496A1 (en) System and method for assessing pixels of satellite images of agriculture land parcel using ai
US10546216B1 (en) Recurrent pattern image classification and registration
CN112881294B (en) Unmanned aerial vehicle-based mangrove forest stand health degree evaluation method
CN114067219A (en) Farmland crop identification method based on semantic segmentation and superpixel segmentation fusion
Zhao et al. Tree canopy differentiation using instance-aware semantic segmentation
CN115689928B (en) Method and system for removing duplication of transmission tower inspection images under visible light
CN111552762A (en) Orchard planting digital map management method and system based on fruit tree coding
Liu et al. An efficient approach based on UAV orthographic imagery to map paddy with support of field-level canopy height from point cloud data
Gómez‐Sapiens et al. Improving the efficiency and accuracy of evaluating aridland riparian habitat restoration using unmanned aerial vehicles
WO2023043317A1 (en) Method and system for delineating agricultural fields in satellite images
Mohamad et al. A screening approach for the correction of distortion in UAV data for coral community mapping
Luo et al. An evolutionary shadow correction network and a benchmark UAV dataset for remote sensing images
CN117167208A (en) Wind driven generator blade damage intelligent inspection system and method based on UAV and CNN
CN116030324A (en) Target detection method based on fusion of spectral features and spatial features
CN115358991A (en) Method and system for identifying seedling leaking quantity and position of seedlings
Kalmukov et al. Methods for Automated Remote Sensing and Counting of Animals
CN115294467A (en) Detection method and related device for tea diseases
CN113989509A (en) Crop insect pest detection method, crop insect pest detection system and crop insect pest detection equipment based on image recognition
CN113989253A (en) Farmland target object information acquisition method and device
Subramaniam et al. Real Time Monitoring of Forest Fires and Wildfire Spread Prediction
Chakraborty et al. UAV sensing-based semantic image segmentation of litchi tree crown using deep learning

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 20210202)