CN112949612A - High-resolution remote sensing image coastal zone ground object classification method based on unmanned aerial vehicle - Google Patents


Info

Publication number: CN112949612A
Application number: CN202110434975.5A
Authority: CN (China)
Inventors: 张胜景, 赵瑞山, 李守军
Assignee (listed): Liaoning Technical University
Filing date: 2021-04-22
Publication date: 2021-06-11
Legal status: Pending
Prior art keywords: unmanned aerial vehicle, training, image, remote sensing

Classifications

    • G06V20/13 — Satellite images (scenes; scene-specific elements; terrestrial scenes)
    • G06F18/24 — Classification techniques (pattern recognition; analysing)
    • G06F18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06V10/26 — Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
    • G06V10/267 — Segmentation by performing operations on regions, e.g. growing, shrinking or watersheds

Abstract

The invention discloses a method for classifying coastal-zone ground objects in high-resolution unmanned-aerial-vehicle remote sensing images, covering image acquisition, data-set production, deep-learning model optimization, and accuracy verification. Unmanned-aerial-vehicle remote sensing images are collected over an experimental area, and the coastal-zone ground objects are divided into categories. An improved PSPNet semantic segmentation algorithm is applied to the high-resolution coastal-zone imagery: because remote sensing backgrounds are more complex and variable than those of natural images, a pyramid pooling module is introduced, which supplies the global-scene category cues that conventional models lack and effectively improves classification accuracy. Because national coastal zones cover a wide area and the data set contains a large number of images, the average-pooling stride and convolution kernel size are redefined and the backbone feature-extraction network is replaced with MobileNetV2, reducing the training time of the semantic segmentation network model. The method offers a wide recognition range, high classification accuracy, low cost, and a short cycle; it effectively improves classification accuracy, saves classification time, and reduces labor and material costs.

Description

High-resolution remote sensing image coastal zone ground object classification method based on unmanned aerial vehicle
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a high-resolution remote sensing image coastal zone ground object classification method based on an unmanned aerial vehicle.
Background
The coastal zone is the transition zone between sea and land. Coastal economies have developed rapidly in recent years, yet understanding of the dynamics of land use still lags behind understanding of the cities themselves, and the conflict between coastline resource bottlenecks and ecological constraints grows increasingly acute. To analyze coastal-zone change trends and their driving factors, and to restore and protect the coastal zone, an efficient and accurate means of ground-object recognition and classification is essential.
At present, ground-object classification relies mainly on manual visual interpretation and traditional machine learning (see, e.g., Chinese patent application 202011418779.0, a sea-land segmentation method suitable for processing large-scene optical remote sensing data). These approaches classify slowly, consume substantial labor and material resources for sample selection and evaluation, generalize poorly, and lack robustness. China has numerous islands and an extremely long coastline, so a fast and accurate means of identifying and classifying ground objects over large coastal areas is urgently needed. Meanwhile, unmanned-aerial-vehicle remote sensing has developed rapidly: it acquires information quickly, costs little, and delivers high-resolution imagery, so acquiring the required images by unmanned-aerial-vehicle remote sensing has gradually become a popular approach. However, coastal-zone ground-object classification from remote sensing imagery still generally uses traditional machine learning (see, e.g., Chinese patent application 201910319782.8, a remote-sensing image fusion and coastal-zone classification method based on improved reliability factors), whose feature-representation capability is limited: it extracts only shallow features such as edges and textures, cannot break through in classification accuracy, and, because of hand-crafted feature engineering, cannot achieve end-to-end training and prediction.
In recent years the rapid development of deep learning has attracted wide attention; such methods learn target features from massive image data and support end-to-end training and prediction. For example, U-Net was originally designed for medical image segmentation and effectively handles the small sample sizes typical of medical data sets. SegNet improves on the classic FCN network model, reducing memory usage and increasing efficiency. Compared with natural images, however, remote sensing backgrounds are more complex and variable, background characteristics differ greatly between regions, ground objects vary enormously in size, and the demands on the receptive field are correspondingly stricter.
In summary, to address the high cost, long duration, and low accuracy of existing coastal-zone ground-object identification and classification methods, the invention improves on the PSPNet deep-learning semantic segmentation algorithm, yielding a method with a short construction cycle, high efficiency, and high classification accuracy that can identify and classify coastal-zone ground objects over large areas and provides technical support for restoring and protecting the coastal zone.
Disclosure of Invention
Because remote sensing backgrounds are more complex and variable than those of natural images, ground objects differ enormously in size, and the demands on the receptive field are strict, a pyramid pooling module is introduced to combine context information, solving the traditional models' lack of global-scene category cues and effectively improving classification accuracy. Because China's islands are numerous, its coastal zones extensive, and the data set very large, the average-pooling stride and convolution kernel size are redefined and the backbone feature-extraction network is replaced with MobileNetV2, reducing the training time of the semantic segmentation network model and improving the efficiency of the classification work.
In order to achieve the above object, the present invention comprises the steps of:
S1: determine the spatial extent of the experimental area, acquire high-resolution RGB coastal-zone remote sensing imagery with the unmanned aerial vehicle, stitch the acquired images with mapping software to obtain a digital orthophoto map (DOM), and define the coastal-zone ground-object categories;
S2: convert the digital orthophoto from S1 to a gray-scale image, remap its pixel values to 0-6, and cut it into 684 x 456-pixel tiles; build a PASCAL VOC-format data set from the tiles and divide it into a training set, a validation set, and a test set; apply data augmentation to the data set;
S3: train a PSPNet semantic segmentation model on the training set obtained in S2, improve the model algorithm according to the mean intersection over union (MIoU) and training time obtained, redefine the average-pooling stride and convolution kernel size, and replace the PSPNet backbone feature-extraction network;
S4: train the optimized semantic segmentation network model obtained in S3 on the training and validation sets obtained in S2;
S5: carry out coastal-zone ground-object classification experiments with the test set obtained in S2 and the semantic segmentation model trained in S4;
S6: adjust the training parameters according to the S5 test results and repeat until the semantic segmentation network model with the highest accuracy and shortest training time is obtained;
S7: classify the coastal-zone ground objects with the coastal-zone semantic segmentation network model obtained in S6.
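The S1-S7 workflow above can be sketched as a driver script; every function name, argument, and return value below is an illustrative assumption, not part of the patent:

```python
# Illustrative stubs for the S1-S7 workflow described above.

def s1_acquire_and_stitch():
    """Fly the survey, stitch images into a DOM, define the 7 categories."""
    return {"dom": "dom.tif", "classes": list(range(7))}

def s2_build_dataset(dom):
    """Gray-scale remap to 0-6, cut 684x456 tiles, VOC-format split, augment."""
    return {"train": [], "val": [], "test": []}

def s3_s4_train(dataset):
    """Train PSPNet; redefine pooling stride/kernel, swap in MobileNetV2."""
    return {"backbone": "MobileNetV2", "ppm_bins": 5}

def s5_s6_tune(model, dataset):
    """Evaluate on the test set; loop over epochs 90-100, batch size 6-8,
    learning rate 1e-3..1e-4 until MIoU is highest and training fastest."""
    model["epochs"], model["batch"], model["lr"] = 100, 8, 1e-3
    return model

def s7_classify(model, dataset):
    """Apply the final model to the coastal-zone test imagery."""
    return "classification map"

data = s1_acquire_and_stitch()
dataset = s2_build_dataset(data["dom"])
model = s5_s6_tune(s3_s4_train(dataset), dataset)
result = s7_classify(model, dataset)
```

The stubs only fix the data flow between steps; each body would be replaced by the concrete operations detailed in the sections below.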
Further, the data processing in step S1 mainly includes the following steps:
(1) the data set is collected with a 20.48-megapixel camera carried by the unmanned aerial vehicle at a flying height of 100 meters;
(2) the coastal zone is divided into seven major categories: beach, building, sea water, vegetation, road, and other ground features.
Further, in step S2 the data set contains 19200 images; the data-set augmentation comprises translating and rotating the images, which avoids overfitting during training and strengthens the robustness of the semantic segmentation model.
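The translation and rotation augmentation described here can be sketched in NumPy; the shift ranges and function names are illustrative assumptions, not the patent's own:

```python
import numpy as np

def translate(img, dx, dy, fill=0):
    """Shift an image/mask array by (dx, dy) pixels, padding with `fill`."""
    h, w = img.shape[:2]
    out = np.full_like(img, fill)
    if abs(dx) >= w or abs(dy) >= h:
        return out  # shifted entirely out of frame
    out[max(dy, 0):min(h, h + dy), max(dx, 0):min(w, w + dx)] = \
        img[max(-dy, 0):min(h, h - dy), max(-dx, 0):min(w, w - dx)]
    return out

def rotate90(img, k=1):
    """Rotate by k * 90 degrees; lossless for integer label masks."""
    return np.rot90(img, k)

def augment(image, mask, rng):
    """Apply one random translation + rotation to an image/mask pair,
    keeping the two geometrically aligned."""
    dx, dy = (int(v) for v in rng.integers(-5, 6, size=2))
    k = int(rng.integers(0, 4))
    return (rotate90(translate(image, dx, dy), k),
            rotate90(translate(mask, dx, dy), k))
```

Applying identical transforms to the image and its label mask is what keeps the augmented pair usable for segmentation training.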
Further, step S3 includes the following steps:
(1) MIoU is the accuracy metric for the training results of a semantic segmentation network in deep learning; it is defined as the mean, over all classes, of the ratio of the intersection to the union of the true and predicted pixel sets. The MIoU is calculated as:

MIOU = \frac{1}{k+1} \sum_{i=0}^{k} \frac{TP}{TP + FP + FN}

where TP is the number of samples whose true value is positive and whose predicted value is also positive; FP is the number of samples with a negative true value and a positive predicted value; FN is the number of samples with a positive true value and a negative predicted value; and k+1 is the total number of classes;
(2) the model loss function consists of two parts; its cross-entropy term L is:

L = -\sum_{c=1}^{M} y_c \log(P_c)

where L is the loss, M is the number of categories, y_c is a one-hot vector whose elements take only the values 0 and 1 (1 if the class matches the sample's class, 0 otherwise), and P_c is the predicted probability that the sample belongs to class c.
S = \frac{TP}{TP + FP + FN}

where S is the accuracy score, TP is the number of samples whose true value is positive and whose predicted value is also positive, FP is the number of samples with a negative true value and a positive predicted value, and FN is the number of samples with a positive true value and a negative predicted value;
(3) the algorithm modification comprises redefining the average-pooling stride and convolution kernel size, changing the 6 x 6 feature region partitioned by the original PSPNet pyramid pooling module into a 5 x 5 region and reducing the computation of the pooling operation; the PSPNet backbone feature-extraction network ResNet50 is replaced with MobileNetV2, which extracts features with standard convolutions that first expand and then reduce the channel dimension, lowering the time and space complexity of the convolution layers and saving training time.
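The MIoU metric from item (1) can be computed from a confusion matrix; a minimal NumPy sketch (function and variable names are ours, not the patent's):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, num_classes):
    """(k+1) x (k+1) matrix; entry [i, j] counts pixels of true class i
    that were predicted as class j."""
    idx = num_classes * y_true.ravel() + y_pred.ravel()
    return np.bincount(idx, minlength=num_classes ** 2).reshape(
        num_classes, num_classes)

def mean_iou(y_true, y_pred, num_classes):
    """MIoU = mean over classes of TP / (TP + FP + FN)."""
    cm = confusion_matrix(y_true, y_pred, num_classes)
    tp = np.diag(cm)
    fp = cm.sum(axis=0) - tp  # predicted as class i but not class i
    fn = cm.sum(axis=1) - tp  # class i but predicted otherwise
    denom = tp + fp + fn
    iou = np.where(denom > 0, tp / np.maximum(denom, 1), 0.0)
    return float(iou.mean())
```

For the patent's seven-category scheme, `num_classes` would be 7 (pixel values 0-6).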
Further, in step S4 the network model improved by the algorithm in S3 is trained on the training set obtained in S2.
Further, in step S6 the training parameters are adjusted according to the S5 test results: the number of iterations to 90-100, the batch size to 6-8, and the learning rate to 0.001-0.0001.
Further, the images used for coastal-zone ground-object classification in step S7 are those of the test set produced in step S2.
Therefore, the coastal-zone ground-object classification method based on high-resolution unmanned-aerial-vehicle remote sensing images provides support for restoring and protecting the coastal zone and is of value for further research on ground-object classification of remote sensing images.
Drawings
The description of the present disclosure will become apparent and readily understood in conjunction with the following drawings, in which:
FIG. 1 is a flow chart of a classification method of land features in a coastal zone based on a high-resolution remote sensing image of an unmanned aerial vehicle according to the invention;
FIG. 2 is a diagram of data set production results;
FIG. 3 is a schematic diagram of a pyramid module;
FIG. 4 is a graph of the results of a classification experiment for terrain in a coastal zone;
Detailed Description
According to the steps shown in fig. 1, the classification method of the land features in the coastal zone based on the high-resolution remote sensing image of the unmanned aerial vehicle is explained in detail.
Step 1: and determining the space range of the experimental area, and acquiring remote sensing image data of the high-resolution RGB (red, green and blue) coastal zone of the unmanned aerial vehicle. The method comprises the following specific steps:
(1) the remote sensing images are collected with a 20.48-megapixel camera carried by the unmanned aerial vehicle at a flying height of 100 meters;
(2) splicing the collected high-resolution remote sensing images of the unmanned aerial vehicle by using mapping software to obtain a digital ortho-image (DOM) image;
(3) the coastal zone is divided into seven major categories (as shown in Table 1): beach, building, sea water, vegetation, road, and other ground features.
TABLE 1 (coastal-zone ground-object category scheme; reproduced in the source only as an image)
Step 2: the data set production mainly comprises the following specific steps:
(1) convert the digital orthophoto from step 1 to a gray-scale image, remap its pixel values to 0-6, and cut it into 684 x 456-pixel tiles;
(2) augment the data set, applying rotation, translation, scaling, and similar operations with a Python program;
(3) build a PASCAL VOC-format data set from the tiles (as shown in FIG. 2) and divide it into a training set, a validation set, and a test set.
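Steps (1)-(3) above can be sketched as follows; the gray-value-to-class mapping is purely illustrative, since the patent's actual assignment (Table 1) is published only as an image:

```python
import numpy as np

TILE_W, TILE_H = 684, 456  # tile size stated in the patent

# Illustrative mapping from original gray values to class ids 0-6;
# the real mapping follows the patent's Table 1.
GRAY_TO_CLASS = {0: 0, 40: 1, 80: 2, 120: 3, 160: 4, 200: 5, 240: 6}

def remap_labels(gray):
    """Remap gray values to the class ids 0-6."""
    out = np.zeros_like(gray)
    for g, c in GRAY_TO_CLASS.items():
        out[gray == g] = c
    return out

def tile(img, tw=TILE_W, th=TILE_H):
    """Cut an image into non-overlapping tw x th tiles (ragged edges dropped)."""
    h, w = img.shape[:2]
    return [img[y:y + th, x:x + tw]
            for y in range(0, h - th + 1, th)
            for x in range(0, w - tw + 1, tw)]
```

The resulting tiles and remapped masks would then be written out in PASCAL VOC directory layout and split into training, validation, and test sets.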
Step 3: the algorithm improvement mainly comprises the following specific steps:
(1) MIoU is the accuracy metric for the training results of a semantic segmentation network in deep learning; it is defined as the mean, over all classes, of the ratio of the intersection to the union of the true and predicted pixel sets. The MIoU is calculated as:

MIOU = \frac{1}{k+1} \sum_{i=0}^{k} \frac{TP}{TP + FP + FN}

where TP is the number of samples whose true value is positive and whose predicted value is also positive; FP is the number of samples with a negative true value and a positive predicted value; FN is the number of samples with a positive true value and a negative predicted value; and k+1 is the total number of classes.
(2) The model loss function consists of two parts; its cross-entropy term L is:

L = -\sum_{c=1}^{M} y_c \log(P_c)

where L is the loss, M is the number of categories, y_c is a one-hot vector whose elements take only the values 0 and 1 (1 if the class matches the sample's class, 0 otherwise), and P_c is the predicted probability that the sample belongs to class c.
S = \frac{TP}{TP + FP + FN}

where S is the accuracy score, TP is the number of samples whose true value is positive and whose predicted value is also positive, FP is the number of samples with a negative true value and a positive predicted value, and FN is the number of samples with a positive true value and a negative predicted value.
(3) Redefine the average-pooling stride and convolution kernel size, changing the 6 x 6 feature region partitioned by the original PSPNet pyramid pooling module (shown in FIG. 3) into a 5 x 5 region and reducing the computation of the pooling operation; replace the PSPNet backbone feature-extraction network ResNet50 with MobileNetV2, which extracts features with standard convolutions that first expand and then reduce the channel dimension, lowering the time and space complexity of the convolution layers and saving training time, as shown in Table 2.
TABLE 2 (backbone comparison; reproduced in the source only as an image)
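The pooled-grid change in item (3) can be illustrated with a single bins x bins adaptive average pool; this is a simplified one-branch sketch of our own — the real PSPNet pyramid runs several parallel bin sizes followed by 1 x 1 convolutions and upsampling, which are omitted here:

```python
import numpy as np

def adaptive_avg_pool(feat, bins):
    """Average-pool an H x W feature map into a bins x bins grid,
    as one branch of a pyramid pooling module does."""
    h, w = feat.shape
    out = np.empty((bins, bins))
    for i in range(bins):
        for j in range(bins):
            # bin [i, j] covers rows floor(i*h/bins)..ceil((i+1)*h/bins)
            r0, r1 = (i * h) // bins, ((i + 1) * h + bins - 1) // bins
            c0, c1 = (j * w) // bins, ((j + 1) * w + bins - 1) // bins
            out[i, j] = feat[r0:r1, c0:c1].mean()
    return out
```

Pooling into a 5 x 5 grid instead of 6 x 6 leaves 25 rather than 36 pooled cells per branch, which is the source of the reduced pooling computation claimed above.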
Step 4: train the optimized semantic segmentation network model obtained in step 3 on the training and validation sets obtained in step 2.
Step 5: carry out coastal-zone ground-object classification experiments with the test set obtained in step 2 and the semantic segmentation model trained in step 4.
Step 6: adjust the parameters according to the step-5 test results as follows: the number of iterations to 90-100, the batch size to 6-8, and the learning rate to 0.001-0.0001.
Step 7: classify the coastal-zone ground objects with the coastal-zone semantic segmentation network model obtained in step 6 (as shown in FIG. 4); the training time is greatly reduced.
The invention addresses the long-standing reliance of traditional high-resolution remote sensing image classification and identification, particularly ground-object classification, on manual visual interpretation, which is prone to misjudging and missing targets and yields low detection accuracy; moreover, coastal zones are extensive, data volumes are very large, and training is slow. The invention therefore applies deep learning to ground-object classification of high-resolution unmanned-aerial-vehicle coastal-zone remote sensing images, improving both the intelligence and the efficiency of the classification work.
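The training-time saving attributed to the MobileNetV2 backbone can be illustrated by comparing parameter counts of a standard 3 x 3 convolution with the depthwise-separable factorization at the core of the MobileNet family (channel sizes here are illustrative; MobileNetV2 additionally wraps the depthwise convolution in 1 x 1 expand/project layers):

```python
def standard_conv_params(cin, cout, k=3):
    """Parameters of a standard k x k convolution (bias omitted)."""
    return k * k * cin * cout

def depthwise_separable_params(cin, cout, k=3):
    """k x k depthwise convolution + 1 x 1 pointwise convolution."""
    return k * k * cin + cin * cout

cin, cout = 256, 256
std = standard_conv_params(cin, cout)          # 9 * 256 * 256 = 589824
sep = depthwise_separable_params(cin, cout)    # 2304 + 65536  = 67840
ratio = sep / std                              # = 1/cout + 1/k**2, about 0.115
```

The factorization shrinks this layer to roughly 11.5% of the standard convolution's parameters, consistent with the reduced time and space complexity claimed for the MobileNetV2 backbone.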
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (8)

1. A high-resolution remote sensing image coastal zone ground object classification method based on an unmanned aerial vehicle is characterized by comprising the following steps:
S1: determining the spatial extent of an experimental area, acquiring high-resolution RGB coastal-zone remote sensing imagery with the unmanned aerial vehicle, stitching the acquired images with mapping software to obtain a digital orthophoto map (DOM), and defining the coastal-zone ground-object categories;
S2: converting the digital orthophoto from S1 to a gray-scale image, remapping its pixel values to 0-6, and cutting it into 684 x 456-pixel tiles; applying data augmentation to the data set; building a PASCAL VOC-format data set from the tiles and dividing it into a training set, a validation set, and a test set;
S3: training a PSPNet semantic segmentation model on the training set obtained in S2, improving the model algorithm according to the mean intersection over union (MIoU) and training time obtained, redefining the average-pooling stride and convolution kernel size, and replacing the PSPNet backbone feature-extraction network;
S4: training the optimized semantic segmentation network model obtained in S3 on the training and validation sets obtained in S2;
S5: carrying out coastal-zone ground-object classification experiments with the test set obtained in S2 and the semantic segmentation model trained in S4;
S6: adjusting the training parameters according to the S5 test results and repeating until the semantic segmentation network model with the highest accuracy and shortest training time is obtained;
S7: classifying the coastal-zone ground objects with the coastal-zone semantic segmentation network model obtained in S6.
2. The method for classifying coastal-zone ground objects based on high-resolution unmanned-aerial-vehicle remote sensing images as claimed in claim 1, wherein step S1 comprises the steps of:
(1) the data set is collected with a 20.48-megapixel camera carried by the unmanned aerial vehicle at a flying height of 100 meters;
(2) the coastal zone is divided into seven major categories: beach, building, sea water, vegetation, road, and other ground features.
3. The method for classifying coastal-zone ground objects based on high-resolution unmanned-aerial-vehicle remote sensing images as claimed in claim 1, wherein step S2 comprises the steps of:
(1) converting the digital orthophoto to a gray-scale image, remapping its pixel values to 0-6, and cutting it into 684 x 456-pixel tiles;
(2) the data set contains 19200 images in total; the data-set augmentation comprises translating and rotating the images, which avoids overfitting during training and strengthens the robustness of the semantic segmentation model.
4. The method for classifying coastal-zone ground objects based on high-resolution unmanned-aerial-vehicle remote sensing images as claimed in claim 1, wherein step S3 comprises the following steps:
(1) MIoU is the accuracy metric for the training results of a semantic segmentation network in deep learning; it is defined as the mean, over all classes, of the ratio of the intersection to the union of the true and predicted pixel sets. The MIoU is calculated as:

MIOU = \frac{1}{k+1} \sum_{i=0}^{k} \frac{TP}{TP + FP + FN}

where TP is the number of samples whose true value is positive and whose predicted value is also positive; FP is the number of samples with a negative true value and a positive predicted value; FN is the number of samples with a positive true value and a negative predicted value; and k+1 is the total number of classes;
(2) the model loss function consists of two parts; its cross-entropy term L is:

L = -\sum_{c=1}^{M} y_c \log(P_c)

where L is the loss, M is the number of categories, y_c is a one-hot vector whose elements take only the values 0 and 1 (1 if the class matches the sample's class, 0 otherwise), and P_c is the predicted probability that the sample belongs to class c.
S = \frac{TP}{TP + FP + FN}

where S is the accuracy score, TP is the number of samples whose true value is positive and whose predicted value is also positive, FP is the number of samples with a negative true value and a positive predicted value, and FN is the number of samples with a positive true value and a negative predicted value;
(3) the algorithm modification comprises redefining the average-pooling stride and convolution kernel size, changing the 6 x 6 feature region partitioned by the original PSPNet pyramid pooling module into a 5 x 5 region and reducing the computation of the pooling operation; the PSPNet backbone feature-extraction network ResNet50 is replaced with MobileNetV2, which extracts features with standard convolutions that first expand and then reduce the channel dimension, lowering the time and space complexity of the convolution layers and saving training time.
5. The method for classifying coastal-zone ground objects based on high-resolution unmanned-aerial-vehicle remote sensing images as claimed in claim 1, wherein in step S4 the network model improved by the algorithm in step S3 is trained on the training set obtained in step S2.
6. The method for classifying coastal-zone ground objects based on high-resolution unmanned-aerial-vehicle remote sensing images as claimed in claim 1, wherein in step S5 the coastal-zone ground objects acquired in step S1 are classified with the model trained in step S4.
7. The method for classifying coastal-zone ground objects based on high-resolution unmanned-aerial-vehicle remote sensing images as claimed in claim 1, wherein in step S6 the training parameters are adjusted according to the S5 test results: the number of iterations to 90-100, the batch size to 6-8, and the learning rate to 0.001-0.0001.
8. The method for classifying coastal-zone ground objects based on high-resolution unmanned-aerial-vehicle remote sensing images as claimed in claim 1, wherein the images used for coastal-zone ground-object classification in step S7 are those of the test set produced in step S2.
CN202110434975.5A 2021-04-22 2021-04-22 High-resolution remote sensing image coastal zone ground object classification method based on unmanned aerial vehicle Pending CN112949612A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110434975.5A CN112949612A (en) 2021-04-22 2021-04-22 High-resolution remote sensing image coastal zone ground object classification method based on unmanned aerial vehicle


Publications (1)

Publication Number Publication Date
CN112949612A true CN112949612A (en) 2021-06-11

Family

ID=76233252

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110434975.5A Pending CN112949612A (en) 2021-04-22 2021-04-22 High-resolution remote sensing image coastal zone ground object classification method based on unmanned aerial vehicle

Country Status (1)

Country Link
CN (1) CN112949612A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113743208A (en) * 2021-07-30 2021-12-03 南方海洋科学与工程广东省实验室(广州) Unmanned aerial vehicle array-based white dolphin number statistical method and system
CN113935369A (en) * 2021-10-20 2022-01-14 华南农业大学 Method for constructing mountain nectar garden road recognition semantic segmentation model
CN115205688A (en) * 2022-09-07 2022-10-18 浙江甲骨文超级码科技股份有限公司 Tea tree planting area extraction method and system

Citations (5)

Publication number Priority date Publication date Assignee Title
CN110147794A (en) * 2019-05-21 2019-08-20 东北大学 A kind of unmanned vehicle outdoor scene real time method for segmenting based on deep learning
CN111325713A (en) * 2020-01-21 2020-06-23 浙江省北大信息技术高等研究院 Wood defect detection method, system and storage medium based on neural network
CN112418229A (en) * 2020-11-03 2021-02-26 上海交通大学 Unmanned ship marine scene image real-time segmentation method based on deep learning
CN112634261A (en) * 2020-12-30 2021-04-09 上海交通大学医学院附属瑞金医院 Stomach cancer focus detection method and device based on convolutional neural network
CN112669325A (en) * 2021-01-06 2021-04-16 大连理工大学 Video semantic segmentation method based on active learning


Non-Patent Citations (1)

Title
ZHUN LI et al.: "Semantic segmentation of landslide images in Nyingchi region based on PSPNet network", ICISCE, no. 7, pp. 1269-1273, XP033975080, DOI: 10.1109/ICISCE50968.2020.00256 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination