CN113469011A - Planning land feature identification method and device based on remote sensing image classification algorithm - Google Patents

Planning land feature identification method and device based on remote sensing image classification algorithm Download PDF

Info

Publication number
CN113469011A
CN113469011A (application number CN202110717242.2A)
Authority
CN
China
Prior art keywords
image
remote sensing
pixel
super
features
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110717242.2A
Other languages
Chinese (zh)
Inventor
龚波涛
朱琦锋
季彤天
陈树藩
王华云
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Tiexin Geographic Information Co ltd
State Grid Shanghai Electric Power Co Ltd
Original Assignee
Shanghai Tiexin Geographic Information Co ltd
State Grid Shanghai Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Tiexin Geographic Information Co ltd, State Grid Shanghai Electric Power Co Ltd filed Critical Shanghai Tiexin Geographic Information Co ltd
Priority to CN202110717242.2A
Publication of CN113469011A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/243Classification techniques relating to the number of classes
    • G06F18/24323Tree-organised classifiers

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a method and a device for identifying ground features in a planning area based on a remote sensing image classification algorithm. The method preprocesses the original image, trains a classifier on image annotations and on image features obtained through superpixel segmentation and feature extraction, and finally completes the identification and classification of ground features in the image using the trained random forest classifier. Compared with the prior art, the method has the advantages of high recognition efficiency and high accuracy.

Description

Planning land feature identification method and device based on remote sensing image classification algorithm
Technical Field
The invention relates to the technical field of image processing, in particular to a method and a device for identifying a planning ground feature based on a remote sensing image classification algorithm.
Background
During planning in the early stage of power grid construction, in order to optimize the planning scheme, the distribution of different types of ground features in the planning area must be known and the compensation amount for the planning area calculated, so the digital images need to be segmented and identified.
With the development of machine learning technology, digital image analysis technology based on remote sensing images, such as surface coverage classification, is also developed to a certain extent, and the ground feature type distribution of a planning region can be directly obtained by analyzing digital images.
A superpixel is produced by a pixel-grouping method that merges similar adjacent pixels into a whole, allowing larger images to be processed directly and reducing the edge-information loss caused by image tiling. However, conventional superpixel methods still have accuracy problems in pixel classification, which affects downstream work such as image recognition.
Disclosure of Invention
The invention aims to provide a method and a device for identifying a ground object in a planning place based on a remote sensing image classification algorithm.
The purpose of the invention can be realized by the following technical scheme:
a planning land feature identification method based on a remote sensing image classification algorithm comprises the following steps:
step S1: preprocessing an original aerial image, and annotating the complete remote sensing image: labeling each pixel as one of three classes (vegetation, building or water system) and outputting a pixel-by-pixel labeled graph;
step S2: partitioning the complete remote sensing image and the corresponding labeled graph into a plurality of samples, wherein each sample consists of one remote sensing image and one labeled graph;
step S3: carrying out Gaussian blur on an input remote sensing image;
step S4: determining the required number of superpixels k, and calculating the average spacing between adjacent superpixel centers

S = sqrt(N / k)

where N is the total number of pixels in the image, then randomly selecting initial clustering centers on the image according to the superpixel average spacing S;
step S5: within the 3 × 3 neighborhood of each initially selected clustering center, moving the center to the position with the minimum gradient;
step S6: performing k-means clustering on all pixels, with the distance between two pixels calculated as follows:

D = sqrt( d_c^2 + (d_s / S)^2 · m^2 )

where m controls the closeness between superpixels, d_c represents color proximity and d_s represents spatial proximity.
The color proximity and spatial proximity between two pixels i and j are defined as follows:

d_c = sqrt( Σ_{s∈B} (I(x_i, y_i, s) - I(x_j, y_j, s))^2 )

d_s = sqrt( (x_i - x_j)^2 + (y_i - y_j)^2 )

where I(x_i, y_i, s) and I(x_j, y_j, s) represent the values of the two pixels on spectral band s, and B represents the set of spectral bands; color proximity controls superpixel uniformity, and spatial proximity controls superpixel compactness;
step S7: in each clustering iteration, computing only the pixels within a 2S × 2S region around each superpixel center, and continuing until the residual error E converges below a threshold;
step S8: for each super pixel, counting the number of each category of the internal pixels, and marking the category with the largest number as the category of the super pixel;
step S9: extracting the characteristics of each super pixel, and standardizing the extracted characteristics;
step S10: marking the super-pixel standardized features and the super-pixel categories as training samples, and training a random forest classifier;
step S11: inputting a remote sensing image to be recognized, processing it with the same superpixel segmentation and feature extraction methods (without image annotations), classifying the features with the trained random forest classifier, and outputting the ground feature classes of the recognized image.
Further, the extracted features of the method comprise 128-dimensional HSV color histogram features, 24-dimensional Gabor texture features and 24-dimensional GLCM features;
further, the method normalizes the extracted features by the following formula:

x'_ij = (x_ij - μ_j) / σ_j

where x_ij and x'_ij respectively represent the jth feature of the ith superpixel sample before and after normalization, μ_j represents the mean of the jth feature, and σ_j represents the standard deviation of the jth feature;
further, the image block sample size of this method is 4000 × 4000.
Further, the method for preprocessing the original aerial image comprises distortion correction, image denoising, image defogging, image splicing and the like.
A planned ground feature recognition device based on a remote sensing image classification algorithm comprises a memory, a processor and a program, wherein the processor executes the following steps:
step S1: preprocessing an original aerial image, and annotating the complete remote sensing image: labeling each pixel as one of three classes (vegetation, building or water system) and outputting a pixel-by-pixel labeled graph;
step S2: partitioning the complete remote sensing image and the corresponding labeled graph into a plurality of samples, wherein each sample consists of one remote sensing image and one labeled graph;
step S3: carrying out Gaussian blur on an input remote sensing image;
step S4: determining the required number of superpixels k, and calculating the average spacing between adjacent superpixel centers

S = sqrt(N / k)

where N is the total number of pixels in the image, then randomly selecting initial clustering centers on the image according to the superpixel average spacing S;
step S5: within the 3 × 3 neighborhood of each initially selected clustering center, moving the center to the position with the minimum gradient;
step S6: performing k-means clustering on all pixels, with the distance between two pixels calculated as follows:

D = sqrt( d_c^2 + (d_s / S)^2 · m^2 )

where m controls the closeness between superpixels, d_c represents color proximity and d_s represents spatial proximity.
The color proximity and spatial proximity between two pixels i and j are defined as follows:

d_c = sqrt( Σ_{s∈B} (I(x_i, y_i, s) - I(x_j, y_j, s))^2 )

d_s = sqrt( (x_i - x_j)^2 + (y_i - y_j)^2 )

where I(x_i, y_i, s) and I(x_j, y_j, s) represent the values of the two pixels on spectral band s, and B represents the set of spectral bands; color proximity controls superpixel uniformity, and spatial proximity controls superpixel compactness;
step S7: in each clustering iteration, computing only the pixels within a 2S × 2S region around each superpixel center, and continuing until the residual error E converges below a threshold;
step S8: for each super pixel, counting the number of each category of the internal pixels, and marking the category with the largest number as the category of the super pixel;
step S9: extracting the characteristics of each super pixel, and standardizing the extracted characteristics;
step S10: marking the super-pixel standardized features and the super-pixel categories as training samples, and training a random forest classifier;
step S11: inputting a remote sensing image to be recognized, processing it with the same superpixel segmentation and feature extraction methods (without image annotations), classifying the features with the trained random forest classifier, and outputting the ground feature classes of the recognized image.
Further, the features extracted in the device executing step comprise 128-dimensional HSV color histogram features, 24-dimensional Gabor texture features and 24-dimensional GLCM features;
further, the device performs the step of normalizing the extracted features by the following formula:

x'_ij = (x_ij - μ_j) / σ_j

where x_ij and x'_ij respectively represent the jth feature of the ith superpixel sample before and after normalization, μ_j represents the mean of the jth feature, and σ_j represents the standard deviation of the jth feature;
further, the image blocking sample size in the apparatus performing step is 4000 × 4000.
Further, the method for preprocessing the original aerial image in the step of executing the device comprises distortion correction, image denoising, image defogging, image splicing and the like.
Compared with the prior art, the invention has the following beneficial effects:
1) the method uses a superpixel-based image segmentation method, which reduces salt-and-pepper noise, and extracts multi-dimensional image features, achieving higher output efficiency and more reliable classification results;
2) the method extracts image features of different types, making the output results more accurate.
drawings
FIG. 1 is a flow chart of a method of the present invention;
FIG. 2 is an exemplary diagram of a sample of a remote sensing image;
FIG. 3 is an exemplary diagram of a superpixel segmentation result;
FIG. 4 is an exemplary illustration of a superpixel label;
fig. 5 is a diagram of an example of a test remote sensing image prediction output.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. The present embodiment is implemented on the premise of the technical solution of the present invention, and a detailed implementation manner and a specific operation process are given, but the scope of the present invention is not limited to the following embodiments.
The invention provides a technical scheme that: a method and a device for identifying ground objects in a planning place based on a remote sensing image classification algorithm are disclosed, wherein the method comprises the following steps:
step S1: preprocessing the original aerial images, specifically including distortion correction, image denoising, image defogging and the like, then stitching them, and annotating the complete remote sensing image: labeling each pixel as one of three classes (vegetation, building or water system) and outputting a pixel-by-pixel labeled graph;
step S2: dividing the complete remote sensing image and the corresponding labeled graph into a plurality of 4000 x 4000 samples, wherein each sample consists of one remote sensing image and one labeled graph, as shown in FIG. 2;
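The tiling of step S2 can be sketched as follows; this is an illustrative example only, and the function name and NumPy-based interface are our own, not part of the patent:

```python
import numpy as np

def tile_pairs(image, label_map, tile=4000):
    """Step S2 sketch: split a large remote-sensing image and its per-pixel
    labeled graph into aligned square samples (edge tiles may be smaller)."""
    samples = []
    h, w = image.shape[:2]
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            samples.append((image[y:y + tile, x:x + tile],
                            label_map[y:y + tile, x:x + tile]))
    return samples
```

Each returned pair keeps the image tile and its labeled-graph tile aligned, so every sample remains self-contained for training.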
step S3: carrying out Gaussian blur on the input remote sensing image as the input of a subsequent segmentation algorithm;
step S4: determining the required number of superpixels k, and calculating the average spacing between adjacent superpixel centers

S = sqrt(N / k)

where N is the total number of pixels in the image, then randomly selecting initial clustering centers on the image according to the superpixel average spacing S;
step S5: within the 3 × 3 neighborhood of each initially selected clustering center, moving the center to the position with the minimum gradient;
step S6: performing k-means clustering on all pixels, with the distance between two pixels calculated as follows:

D = sqrt( d_c^2 + (d_s / S)^2 · m^2 )

where m controls the closeness between superpixels, d_c represents color proximity and d_s represents spatial proximity.
The color proximity and spatial proximity between two pixels i and j are defined as follows:

d_c = sqrt( Σ_{s∈B} (I(x_i, y_i, s) - I(x_j, y_j, s))^2 )

d_s = sqrt( (x_i - x_j)^2 + (y_i - y_j)^2 )

where I(x_i, y_i, s) and I(x_j, y_j, s) represent the values of the two pixels on spectral band s, and B represents the set of spectral bands; color proximity controls superpixel uniformity, and spatial proximity controls superpixel compactness;
step S7: to reduce time complexity, each clustering iteration does not compute the distance from every pixel to all cluster centers; instead, only the pixels within a 2S × 2S region around each superpixel center are computed. The process continues until the residual error E converges below a threshold, where E is the sum of the spatial distances between each superpixel center's position before and after updating. The superpixel segmentation result for the remote sensing image of FIG. 2 is shown in FIG. 3;
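The segmentation of steps S4–S7 can be sketched as a minimal SLIC-style loop. This is an illustrative approximation only: the 3 × 3 gradient-minimum seed adjustment of step S5 is omitted, and the function name, iteration count and window handling are our own assumptions, not the patent's:

```python
import numpy as np

def simple_slic(image, k=100, m=10.0, iters=5):
    """Minimal SLIC-style superpixel clustering (steps S4-S7), illustrative
    only: grid seeds at spacing S = sqrt(N/k), then localized k-means with
    D = sqrt(d_c^2 + (d_s/S)^2 m^2). Expects an (H, W, C) array."""
    img = image.astype(float)
    h, w = img.shape[:2]
    S = max(int(np.sqrt(h * w / k)), 1)
    ys, xs = np.meshgrid(np.arange(S // 2, h, S),
                         np.arange(S // 2, w, S), indexing="ij")
    centers = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    labels = np.full((h, w), -1, dtype=int)
    dist = np.full((h, w), np.inf)
    for _ in range(iters):
        dist[:] = np.inf
        for idx, (cy, cx) in enumerate(centers):
            # restrict the search to a window around the center (step S7)
            y0, y1 = max(int(cy) - S, 0), min(int(cy) + S + 1, h)
            x0, x1 = max(int(cx) - S, 0), min(int(cx) + S + 1, w)
            patch = img[y0:y1, x0:x1]
            center_color = img[int(cy), int(cx)]
            dc = np.sqrt(((patch - center_color) ** 2).sum(axis=-1))  # d_c
            yy, xx = np.mgrid[y0:y1, x0:x1]
            ds = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2)             # d_s
            D = np.sqrt(dc ** 2 + (ds / S) ** 2 * m ** 2)             # combined
            win = dist[y0:y1, x0:x1]
            better = D < win
            win[better] = D[better]
            labels[y0:y1, x0:x1][better] = idx
        for idx in range(len(centers)):  # recenter on member pixels
            pts = np.argwhere(labels == idx)
            if len(pts):
                centers[idx] = pts.mean(axis=0)
    return labels
```

A fixed iteration count stands in for the residual-error convergence test described above; a production implementation would track E, the sum of center displacements, instead.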
step S8: for each super pixel, counting the number of each category of the internal pixels, and taking the category with the largest number as the category label of the super pixel, as shown in fig. 4;
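The majority-vote labeling of step S8 can be sketched as follows (the function and argument names are illustrative, not the patent's):

```python
import numpy as np

def label_superpixels(seg, pixel_labels, n_classes=3):
    """Step S8 sketch: each superpixel takes the majority class of its
    pixels (e.g. 0 = vegetation, 1 = building, 2 = water system)."""
    sp_labels = {}
    for sp in np.unique(seg):
        counts = np.bincount(pixel_labels[seg == sp].ravel(),
                             minlength=n_classes)
        sp_labels[sp] = int(counts.argmax())
    return sp_labels
```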
then extracting the super-pixel characteristics, including the following contents:
and extracting HSV color histogram features. And representing the input remote sensing image by HSV color space, equally dividing hue H, saturation S and brightness V into 8, 4 and 4 sections respectively, and counting the proportion of pixels falling into each section in the superpixel to generate a 128-dimensional feature vector.
Gabor texture feature extraction. The Gabor filter function is expressed as follows:

g(x, y; λ, θ, φ, σ, γ) = exp( -(x'^2 + γ^2 y'^2) / (2σ^2) ) · cos( 2π x' / λ + φ )

x' = x cos θ + y sin θ
y' = -x sin θ + y cos θ

where λ is the wavelength, specified in pixels; θ specifies the orientation of the Gabor filter's parallel stripes; φ represents the phase offset; σ represents the standard deviation of the Gaussian factor; and γ represents the spatial aspect ratio.
Since the super-pixels are irregular in shape and uncertain in size and number of pixels, the method adopted in the application is to perform multi-directional (0 °, 45 °, 90 °, 135 °) and multi-kernel size (5, 7, 9, 11, 13, 15) Gabor filtering on the whole image, and then average the filtering output in each super-pixel to form the 24-dimensional Gabor texture feature of the super-pixel.
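A sketch of this filter bank, built directly from the Gabor formula above; the wavelength choice λ = ksize/2 and the σ and γ defaults are our assumptions, since the patent does not fix them:

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(ksize, lam, theta, phi=0.0, sigma=None, gamma=0.5):
    """Real-valued Gabor kernel from the formula above:
    g = exp(-(x'^2 + gamma^2 y'^2) / (2 sigma^2)) * cos(2 pi x'/lam + phi)."""
    sigma = 0.5 * lam if sigma is None else sigma    # assumed default
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xp = x * np.cos(theta) + y * np.sin(theta)       # x' = x cos t + y sin t
    yp = -x * np.sin(theta) + y * np.cos(theta)      # y' = -x sin t + y cos t
    return (np.exp(-(xp ** 2 + gamma ** 2 * yp ** 2) / (2 * sigma ** 2))
            * np.cos(2 * np.pi * xp / lam + phi))

def gabor_features(gray, seg):
    """24-dim per-superpixel feature: 4 orientations x 6 kernel sizes,
    each filter response averaged inside the superpixel."""
    feats = {sp: [] for sp in np.unique(seg)}
    for theta in (0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
        for ksize in (5, 7, 9, 11, 13, 15):
            resp = convolve(gray.astype(float),
                            gabor_kernel(ksize, lam=ksize / 2, theta=theta))
            for sp in feats:
                feats[sp].append(resp[seg == sp].mean())
    return {sp: np.asarray(v) for sp, v in feats.items()}
```

Filtering the whole image once per kernel and then averaging inside each superpixel avoids having to define a fixed-size window for irregularly shaped superpixels, matching the approach described above.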
GLCM feature extraction. Let the gray level of pixel A(x, y) in the image be i and the gray level of another pixel B be j, with the distance between A and B being d and the direction angle being θ; the probability that A and B occur together is counted as P(i, j, d, θ). The mathematical expression of the joint probability P(i, j, d, θ) is as follows:

P(i, j, d, θ) = { [(x, y), (x + dx, y + dy)] | f(x, y) = i, f(x + dx, y + dy) = j }

Because the shape and size of a superpixel are not fixed, the side length of the GLCM calculation window is set to S, centered on the superpixel center, which is obtained by averaging the coordinates of all pixels in the superpixel. If the image has G gray levels, the GLCM matrix has size G × G; to reduce the amount of calculation, the original remote sensing image is converted to a grayscale map with 16 gray levels. After the GLCM matrix is computed, P(i, j, d, θ)/N^2 is taken as its normalized value, thereby reducing dimensional gaps between the data.
Finally, texture statistics of the superpixel are computed from the normalized GLCM as texture parameters. The invention selects six statistics, namely contrast, dissimilarity, homogeneity, angular second moment, energy and correlation, and the GLCM is calculated in the four directions 0°, 45°, 90° and 135°, yielding a 24-dimensional GLCM texture feature vector in total.
Step S9: extracting the features of each superpixel, including the 128-dimensional HSV color histogram features, 24-dimensional Gabor texture features and 24-dimensional GLCM texture features, and standardizing the extracted features with the following formula:

x'_ij = (x_ij - μ_j) / σ_j

where x_ij and x'_ij respectively represent the jth feature of the ith superpixel sample before and after standardization, μ_j represents the mean of the jth feature, and σ_j represents the standard deviation of the jth feature;
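The standardization formula of step S9 in code (an illustrative helper; the guard for constant features is our addition, not discussed in the patent):

```python
import numpy as np

def standardize(X):
    """Step S9: x'_ij = (x_ij - mu_j) / sigma_j per feature column j,
    for an (n_samples, n_features) matrix X."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    sigma[sigma == 0] = 1.0  # guard against constant features
    return (X - mu) / sigma, mu, sigma
```

Returning μ and σ lets the same training-set statistics be reused to standardize the features of images to be recognized in step S11.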
step S10: marking the super-pixel standard features and the super-pixel categories as training samples, and training a random forest classifier;
step S11: inputting a remote sensing image to be recognized, processing it with the same superpixel segmentation and feature extraction methods (without image annotations), classifying the features with the trained random forest classifier, and outputting the ground feature classes of the recognized image, as shown in fig. 5.
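Steps S10–S11 can be sketched with scikit-learn's random forest (the estimator count and other hyperparameters are our assumptions; the patent does not specify them):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_and_predict(train_feats, train_labels, test_feats):
    """Steps S10-S11 sketch: fit a random forest on standardized superpixel
    feature vectors, then predict a land-cover class per test superpixel."""
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(train_feats, train_labels)
    return clf.predict(test_feats)
```

Each row of `train_feats` is one superpixel's 176-dimensional standardized vector (128 HSV + 24 Gabor + 24 GLCM), and its label is the majority class from step S8; the predictions are then painted back onto the superpixels to produce the output map.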
For vegetation types, the items to be calculated are: the land compensation fee, calculated as six to ten times the average annual output value of the previous three years; the resettlement subsidy, calculated from the number of agricultural persons to be resettled, i.e. the amount of requisitioned cultivated land divided by the per-capita cultivated land of the requisitioned unit before requisition; and the young-crop compensation fee, calculated as one third to one quarter of the output value of one season. For building types, the items to be calculated are: the house value compensation fee, calculated according to the market price of similar real estate in the city; and the relocation subsidy, calculated according to the local housing requisition standard. For water system types, such as farmland water conservancy facilities or artificial fishponds, relocation and compensation fees are calculated with reference to the relevant standards.
The invention also provides a planned ground feature recognition device based on the remote sensing image classification algorithm, which comprises a memory, a processor and a program, wherein the processor executes the steps S1-S11.

Claims (10)

1. A planning land feature recognition method based on a remote sensing image classification algorithm is characterized by comprising the following steps:
step S1: preprocessing an original aerial image, and annotating the complete remote sensing image: labeling each pixel as one of three classes (vegetation, building or water system) and outputting a pixel-by-pixel labeled graph;
step S2: partitioning the complete remote sensing image and the corresponding labeled graph into a plurality of samples, wherein each sample consists of one remote sensing image and one labeled graph;
step S3: carrying out Gaussian blur on an input remote sensing image;
step S4: determining the required number of superpixels k, and calculating the average spacing between adjacent superpixel centers

S = sqrt(N / k)

where N is the total number of pixels in the image, then randomly selecting initial clustering centers on the image according to the superpixel average spacing S;
step S5: within the 3 × 3 neighborhood of each initially selected clustering center, moving the center to the position with the minimum gradient;
step S6: performing k-means clustering on all pixels, with the distance between two pixels calculated as follows:

D = sqrt( d_c^2 + (d_s / S)^2 · m^2 )

where m controls the closeness between superpixels, d_c represents color proximity and d_s represents spatial proximity;
the color proximity and spatial proximity between two pixels i and j are defined as follows:

d_c = sqrt( Σ_{s∈B} (I(x_i, y_i, s) - I(x_j, y_j, s))^2 )

d_s = sqrt( (x_i - x_j)^2 + (y_i - y_j)^2 )

where I(x_i, y_i, s) and I(x_j, y_j, s) represent the values of the two pixels on spectral band s, and B represents the set of spectral bands; color proximity controls superpixel uniformity, and spatial proximity controls superpixel compactness;
step S7: in each clustering iteration, computing only the pixels within a 2S × 2S region around each superpixel center, and continuing until the residual error E converges below a threshold;
step S8: for each super pixel, counting the number of each category of the internal pixels, and marking the category with the largest number as the category of the super pixel;
step S9: extracting the characteristics of each super pixel, and standardizing the extracted characteristics;
step S10: marking the super-pixel standardized features and the super-pixel categories as training samples, and training a random forest classifier;
step S11: inputting a remote sensing image to be recognized, processing it with the same superpixel segmentation and feature extraction methods (without image annotations), classifying the features with the trained random forest classifier, and outputting the ground feature classes of the recognized image.
2. The method for identifying the planned terrain based on the remote sensing image classification algorithm as claimed in claim 1, wherein the extracted features include 128-dimensional HSV color histogram features, 24-dimensional Gabor texture features and 24-dimensional GLCM features.
3. The method for identifying the planned terrain based on the remote sensing image classification algorithm according to claim 1, characterized in that the method normalizes the extracted features by the following formula:

x'_ij = (x_ij - μ_j) / σ_j

where x_ij and x'_ij respectively represent the jth feature of the ith superpixel sample before and after normalization, μ_j represents the mean of the jth feature, and σ_j represents the standard deviation of the jth feature.
4. The method for identifying the planned land features based on the remote sensing image classification algorithm according to claim 1, wherein the size of the image block sample is 4000 x 4000.
5. The method for identifying the planned terrain based on the remote sensing image classification algorithm as claimed in claim 1, wherein the method for preprocessing the original aerial image comprises distortion correction, image de-noising, image defogging and image splicing.
6. A planned ground feature recognition device based on a remote sensing image classification algorithm comprises a memory, a processor and a program, and is characterized in that the processor executes the following steps:
step S1: preprocessing an original aerial image, and annotating the complete remote sensing image: labeling each pixel as one of three classes (vegetation, building or water system) and outputting a pixel-by-pixel labeled graph;
step S2: partitioning the complete remote sensing image and the corresponding labeled graph into a plurality of samples, wherein each sample consists of one remote sensing image and one labeled graph;
step S3: carrying out Gaussian blur on an input remote sensing image;
step S4: determining the required number of superpixels k, and calculating the average spacing between adjacent superpixel centers

S = sqrt(N / k)

where N is the total number of pixels in the image, then randomly selecting initial clustering centers on the image according to the superpixel average spacing S;
step S5: within the 3 × 3 neighborhood of each initially selected clustering center, moving the center to the position with the minimum gradient;
step S6: performing k-means clustering on all pixels, with the distance between two pixels calculated as follows:

D = sqrt( d_c^2 + (d_s / S)^2 · m^2 )

where m controls the closeness between superpixels, d_c represents color proximity and d_s represents spatial proximity;
the color proximity and spatial proximity between two pixels i and j are defined as follows:

d_c = sqrt( Σ_{s∈B} (I(x_i, y_i, s) - I(x_j, y_j, s))^2 )

d_s = sqrt( (x_i - x_j)^2 + (y_i - y_j)^2 )

where I(x_i, y_i, s) and I(x_j, y_j, s) represent the values of the two pixels on spectral band s, and B represents the set of spectral bands; color proximity controls superpixel uniformity, and spatial proximity controls superpixel compactness;
step S7: in each clustering iteration, computing only the pixels within a 2S × 2S region around each superpixel center, and continuing until the residual error E converges below a threshold;
step S8: for each super pixel, counting the number of each category of the internal pixels, and marking the category with the largest number as the category of the super pixel;
step S9: extracting the characteristics of each super pixel, and standardizing the extracted characteristics;
step S10: marking the super-pixel standardized features and the super-pixel categories as training samples, and training a random forest classifier;
step S11: inputting a remote sensing image to be recognized, processing it with the same superpixel segmentation and feature extraction methods (without image annotations), classifying the features with the trained random forest classifier, and outputting the ground feature classes of the recognized image.
7. The planned terrain recognition device based on the remote sensing image classification algorithm, as claimed in claim 6, wherein the features extracted in the device implementation steps include 128-dimensional HSV color histogram features, 24-dimensional Gabor texture features, and 24-dimensional GLCM features.
8. The planned terrain feature recognition device based on the remote sensing image classification algorithm according to claim 6, wherein the device performs the step of normalizing the extracted features by the following formula:

x'_ij = (x_ij - μ_j) / σ_j

where x_ij and x'_ij respectively represent the jth feature of the ith superpixel sample before and after normalization, μ_j represents the mean of the jth feature, and σ_j represents the standard deviation of the jth feature.
9. A planned ground feature recognition device based on a remote sensing image classification algorithm according to claim 6, wherein the device executes the image block sample size in the step of 4000 x 4000.
10. The planned terrain recognition device based on the remote sensing image classification algorithm, as claimed in claim 6, wherein the device performs the steps of preprocessing the original aerial image including distortion correction, image denoising, image defogging, and image stitching.
CN202110717242.2A 2021-07-31 2021-07-31 Planning land feature identification method and device based on remote sensing image classification algorithm Pending CN113469011A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110717242.2A CN113469011A (en) 2021-07-31 2021-07-31 Planning land feature identification method and device based on remote sensing image classification algorithm


Publications (1)

Publication Number Publication Date
CN113469011A (en) 2021-10-01

Family

ID=77873239

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110717242.2A Pending CN113469011A (en) 2021-07-31 2021-07-31 Planning land feature identification method and device based on remote sensing image classification algorithm

Country Status (1)

Country Link
CN (1) CN113469011A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108280397A (en) * 2017-12-25 2018-07-13 西安电子科技大学 Human body image hair detection method based on depth convolutional neural networks
US20210209426A1 (en) * 2018-09-29 2021-07-08 Shenzhen University Image Fusion Classification Method and Device

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108280397A (en) * 2017-12-25 2018-07-13 西安电子科技大学 Human body image hair detection method based on depth convolutional neural networks
US20210209426A1 (en) * 2018-09-29 2021-07-08 Shenzhen University Image Fusion Classification Method and Device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHENGQIN LI: "Superpixel Segmentation using Linear Spectral Clustering", IEEE, 15 December 2015, pages 1356-1363 *
许艳松: "Automatic identification of debris flow regions with an improved fuzzy clustering algorithm" (in Chinese), China Water Transport, 15 May 2021, pages 46-49 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination