CN113469011A - Planning land feature identification method and device based on remote sensing image classification algorithm - Google Patents
- Publication number
- CN113469011A CN113469011A CN202110717242.2A CN202110717242A CN113469011A CN 113469011 A CN113469011 A CN 113469011A CN 202110717242 A CN202110717242 A CN 202110717242A CN 113469011 A CN113469011 A CN 113469011A
- Authority
- CN
- China
- Prior art keywords
- image
- remote sensing
- pixel
- super
- features
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000000034 method Methods 0.000 title claims abstract description 37
- 238000007635 classification algorithm Methods 0.000 title claims abstract description 19
- 238000012549 training Methods 0.000 claims abstract description 12
- 238000007637 random forest analysis Methods 0.000 claims abstract description 11
- 238000007781 pre-processing Methods 0.000 claims abstract description 10
- 230000011218 segmentation Effects 0.000 claims abstract description 9
- 238000000605 extraction Methods 0.000 claims abstract description 6
- 230000003595 spectral effect Effects 0.000 claims description 10
- 238000012545 processing Methods 0.000 claims description 7
- 238000010606 normalization Methods 0.000 claims description 6
- 238000012937 correction Methods 0.000 claims description 5
- 238000003064 k means clustering Methods 0.000 claims description 5
- 238000000638 solvent extraction Methods 0.000 claims description 4
- 238000002372 labelling Methods 0.000 abstract 1
- 238000004364 calculation method Methods 0.000 description 5
- 238000010586 diagram Methods 0.000 description 3
- 238000005516 engineering process Methods 0.000 description 2
- 239000000284 extract Substances 0.000 description 2
- 238000001914 filtration Methods 0.000 description 2
- 239000011159 matrix material Substances 0.000 description 2
- 239000013598 vector Substances 0.000 description 2
- 238000012935 Averaging Methods 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000000903 blocking effect Effects 0.000 description 1
- 238000004422 calculation algorithm Methods 0.000 description 1
- 238000010276 construction Methods 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 230000018109 developmental process Effects 0.000 description 1
- 238000011496 digital image analysis Methods 0.000 description 1
- 230000006870 function Effects 0.000 description 1
- 238000003709 image segmentation Methods 0.000 description 1
- 230000001788 irregular Effects 0.000 description 1
- 238000010801 machine learning Methods 0.000 description 1
- 238000013508 migration Methods 0.000 description 1
- 230000005012 migration Effects 0.000 description 1
- 230000010363 phase shift Effects 0.000 description 1
- 238000012360 testing method Methods 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/243—Classification techniques relating to the number of classes
- G06F18/24323—Tree-organised classifiers
Landscapes
- Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Cheminformatics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Probability & Statistics with Applications (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to a method and a device for identifying ground objects in a planning area based on a remote sensing image classification algorithm. The method preprocesses the original image, performs superpixel segmentation and image feature extraction, trains on the image annotation results and image features, and finally completes the identification and classification of ground objects in the image with the random forest classifier generated by training. Compared with the prior art, the method has the advantages of high recognition efficiency and high accuracy.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a method and a device for identifying a planning ground feature based on a remote sensing image classification algorithm.
Background
In the early planning stage of power grid construction, in order to optimize the planning scheme, the distribution of different types of ground features in the planning area must be known and the compensation amount for the planning area must be calculated, so the digital image needs to be segmented and identified.
With the development of machine learning technology, digital image analysis technology based on remote sensing images, such as surface coverage classification, is also developed to a certain extent, and the ground feature type distribution of a planning region can be directly obtained by analyzing digital images.
A superpixel method is a pixel grouping method that merges similar adjacent pixels into a whole, allowing larger images to be processed directly and reducing the edge information loss caused by image cutting. However, conventional superpixel methods still have accuracy problems in pixel classification, which affects downstream work such as image recognition.
Disclosure of Invention
The invention aims to provide a method and a device for identifying a ground object in a planning place based on a remote sensing image classification algorithm.
The purpose of the invention can be realized by the following technical scheme:
a planning land feature identification method based on a remote sensing image classification algorithm comprises the following steps:
step S1: preprocessing an original aerial image, and performing image annotation on the complete remote sensing image: classifying each pixel into three classes, vegetation, building and water system, and outputting a pixel-by-pixel label map;
step S2: partitioning the complete remote sensing image and the corresponding labeled graph into a plurality of samples, wherein each sample consists of one remote sensing image and one labeled graph;
step S3: carrying out Gaussian blur on an input remote sensing image;
step S4: determining the required number k of superpixels, and calculating the average distance between adjacent superpixels as S = √(N/k), where N is the total number of pixels in the image; then selecting the initial clustering centers on the image according to the superpixel average distance S;
step S5: within the 3 × 3 neighborhood of each initially selected clustering center, moving the center to the position with the minimum gradient;
step S6: k-means clustering is performed on all pixels, with the distance between two pixels calculated as:

D = √(d_c² + (d_s/S)²·m²)

where m controls the compactness of the superpixels, d_c represents the color proximity, and d_s represents the spatial proximity;

the color proximity and spatial proximity between two pixels i and j are defined as:

d_c = √( Σ_{s∈B} [I(x_i, y_i, s) − I(x_j, y_j, s)]² )

d_s = √( (x_i − x_j)² + (y_i − y_j)² )

in the formula, I(x_i, y_i, s) and I(x_j, y_j, s) represent the values of the two pixels on spectral band s, and B represents the set of spectral bands; the color proximity controls superpixel uniformity, and the spatial proximity controls superpixel compactness;
step S7: at each clustering iteration, only the pixels within the 2S × 2S region around each superpixel center are computed; the iteration continues until the residual error E converges within a threshold;
step S8: for each super pixel, counting the number of each category of the internal pixels, and marking the category with the largest number as the category of the super pixel;
step S9: extracting the characteristics of each super pixel, and standardizing the extracted characteristics;
step S10: marking the super-pixel standardized features and the super-pixel categories as training samples, and training a random forest classifier;
step S11: inputting a remote sensing image to be recognized, processing it with the above superpixel segmentation (without image annotation) and feature extraction methods, classifying the features using the trained random forest classifier, and outputting the ground object classes of the recognized image.
Further, the extracted features of the method comprise 128-dimensional HSV color histogram features, 24-dimensional Gabor texture features and 24-dimensional GLCM features;
further, the method normalizes the extracted features by the following formula:

x′_ij = (x_ij − μ_j) / σ_j

in the formula, x_ij and x′_ij respectively represent the jth feature of the ith superpixel sample before and after normalization, μ_j represents the mean of the jth feature, and σ_j represents the standard deviation of the jth feature;
further, the image block sample size of this method is 4000 × 4000.
Further, the method for preprocessing the original aerial image comprises distortion correction, image denoising, image defogging, image splicing and the like.
A planned ground feature recognition device based on a remote sensing image classification algorithm comprises a memory, a processor and a program, wherein the processor executes the following steps:
step S1: preprocessing an original aerial image, and performing image annotation on the complete remote sensing image: classifying each pixel into three classes, vegetation, building and water system, and outputting a pixel-by-pixel label map;
step S2: partitioning the complete remote sensing image and the corresponding labeled graph into a plurality of samples, wherein each sample consists of one remote sensing image and one labeled graph;
step S3: carrying out Gaussian blur on an input remote sensing image;
step S4: determining the required number k of superpixels, and calculating the average distance between adjacent superpixels as S = √(N/k), where N is the total number of pixels in the image; then selecting the initial clustering centers on the image according to the superpixel average distance S;
step S5: within the 3 × 3 neighborhood of each initially selected clustering center, moving the center to the position with the minimum gradient;
step S6: k-means clustering is performed on all pixels, with the distance between two pixels calculated as:

D = √(d_c² + (d_s/S)²·m²)

where m controls the compactness of the superpixels, d_c represents the color proximity, and d_s represents the spatial proximity;

the color proximity and spatial proximity between two pixels i and j are defined as:

d_c = √( Σ_{s∈B} [I(x_i, y_i, s) − I(x_j, y_j, s)]² )

d_s = √( (x_i − x_j)² + (y_i − y_j)² )

in the formula, I(x_i, y_i, s) and I(x_j, y_j, s) represent the values of the two pixels on spectral band s, and B represents the set of spectral bands; the color proximity controls superpixel uniformity, and the spatial proximity controls superpixel compactness;
step S7: at each clustering iteration, only the pixels within the 2S × 2S region around each superpixel center are computed; the iteration continues until the residual error E converges within a threshold;
step S8: for each super pixel, counting the number of each category of the internal pixels, and marking the category with the largest number as the category of the super pixel;
step S9: extracting the characteristics of each super pixel, and standardizing the extracted characteristics;
step S10: marking the super-pixel standardized features and the super-pixel categories as training samples, and training a random forest classifier;
step S11: inputting a remote sensing image to be recognized, processing it with the above superpixel segmentation (without image annotation) and feature extraction methods, classifying the features using the trained random forest classifier, and outputting the ground object classes of the recognized image.
Further, the features extracted in the device executing step comprise 128-dimensional HSV color histogram features, 24-dimensional Gabor texture features and 24-dimensional GLCM features;
further, the device performs the step of normalizing the extracted features by the following formula:

x′_ij = (x_ij − μ_j) / σ_j

in the formula, x_ij and x′_ij respectively represent the jth feature of the ith superpixel sample before and after normalization, μ_j represents the mean of the jth feature, and σ_j represents the standard deviation of the jth feature;
further, the image blocking sample size in the apparatus performing step is 4000 × 4000.
Further, the method for preprocessing the original aerial image in the step of executing the device comprises distortion correction, image denoising, image defogging, image splicing and the like.
Compared with the prior art, the invention has the following beneficial effects:
1) the method uses an image segmentation method based on the superpixel, reduces salt and pepper noise, extracts multi-dimensional image features, has higher output efficiency, and obtains more reliable classification results;
2) the method extracts different types of image features, so that the output result is more accurate;
drawings
FIG. 1 is a flow chart of a method of the present invention;
FIG. 2 is an exemplary diagram of a sample of a remote sensing image;
FIG. 3 is an exemplary diagram of a superpixel segmentation result;
FIG. 4 is an exemplary illustration of a superpixel label;
fig. 5 is a diagram of an example of a test remote sensing image prediction output.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. The present embodiment is implemented on the premise of the technical solution of the present invention, and a detailed implementation manner and a specific operation process are given, but the scope of the present invention is not limited to the following embodiments.
The invention provides a technical scheme that: a method and a device for identifying ground objects in a planning place based on a remote sensing image classification algorithm are disclosed, wherein the method comprises the following steps:
step S1: preprocessing the original aerial images, specifically including distortion correction, image denoising, image defogging and the like, then stitching them, and performing image annotation on the complete remote sensing image: each pixel is classified into three classes, vegetation, building and water system, and a pixel-by-pixel label map is output;
step S2: dividing the complete remote sensing image and the corresponding labeled graph into a plurality of 4000 x 4000 samples, wherein each sample consists of one remote sensing image and one labeled graph, as shown in FIG. 2;
step S3: carrying out Gaussian blur on the input remote sensing image as the input of a subsequent segmentation algorithm;
step S4: determining the required number k of superpixels, and calculating the average distance between adjacent superpixels as S = √(N/k), where N is the total number of pixels in the image; then selecting the initial clustering centers on the image according to the superpixel average distance S;
step S5: within the 3 × 3 neighborhood of each initially selected clustering center, moving the center to the position with the minimum gradient;
step S6: k-means clustering is performed on all pixels, with the distance between two pixels calculated as:

D = √(d_c² + (d_s/S)²·m²)

where m controls the compactness of the superpixels, d_c represents the color proximity, and d_s represents the spatial proximity;

the color proximity and spatial proximity between two pixels i and j are defined as:

d_c = √( Σ_{s∈B} [I(x_i, y_i, s) − I(x_j, y_j, s)]² )

d_s = √( (x_i − x_j)² + (y_i − y_j)² )

in the formula, I(x_i, y_i, s) and I(x_j, y_j, s) represent the values of the two pixels on spectral band s, and B represents the set of spectral bands; the color proximity controls superpixel uniformity, and the spatial proximity controls superpixel compactness;
step S7: to reduce the time complexity, at each clustering iteration the distance is not computed between a pixel and every other pixel; instead, only the pixels within the 2S × 2S region around each superpixel center are computed. This continues until the residual error E converges within the threshold, where E is the sum of the spatial distances between all superpixel centers before and after updating. FIG. 3 shows the superpixel segmentation result of the remote sensing image in FIG. 2;
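The local k-means procedure of steps S4–S7 can be sketched as follows. This is a simplified NumPy illustration, not the patented implementation: the compactness value m = 10, the fixed iteration count, and the function name are assumptions, and the gradient-based center adjustment of step S5 and the residual-error test of step S7 are omitted for brevity.

```python
import numpy as np

def slic_superpixels(image, k, m=10.0, n_iter=5):
    """Simplified SLIC: local k-means in joint color-position space.

    Each pixel is compared only with centers whose 2S x 2S search
    window covers it, as in step S7."""
    H, W, C = image.shape
    S = int(np.sqrt(H * W / k))                  # grid interval S = sqrt(N/k), step S4
    yy, xx = np.mgrid[0:H, 0:W]
    # seed cluster centers on a regular grid of spacing S
    gy, gx = np.mgrid[S // 2:H:S, S // 2:W:S]
    gy, gx = gy.ravel(), gx.ravel()
    centers = np.column_stack([gy, gx, image[gy, gx].reshape(len(gy), C)]).astype(float)
    labels = np.full((H, W), -1)
    for _ in range(n_iter):
        dist = np.full((H, W), np.inf)
        for ci, c in enumerate(centers):
            # 2S x 2S search window around the center (step S7)
            y0, y1 = max(int(c[0]) - S, 0), min(int(c[0]) + S + 1, H)
            x0, x1 = max(int(c[1]) - S, 0), min(int(c[1]) + S + 1, W)
            patch = image[y0:y1, x0:x1].reshape(-1, C)
            py, px = yy[y0:y1, x0:x1].ravel(), xx[y0:y1, x0:x1].ravel()
            d_c = np.sqrt(((patch - c[2:]) ** 2).sum(axis=1))    # color proximity
            d_s = np.sqrt((py - c[0]) ** 2 + (px - c[1]) ** 2)   # spatial proximity
            D = np.sqrt(d_c ** 2 + (d_s / S) ** 2 * m ** 2)      # step S6 distance
            sub_d = dist[y0:y1, x0:x1].ravel()
            sub_l = labels[y0:y1, x0:x1].ravel()
            better = D < sub_d
            sub_d[better], sub_l[better] = D[better], ci
            dist[y0:y1, x0:x1] = sub_d.reshape(y1 - y0, x1 - x0)
            labels[y0:y1, x0:x1] = sub_l.reshape(y1 - y0, x1 - x0)
        # move each center to the mean of its assigned pixels
        for ci in range(len(centers)):
            mask = labels == ci
            if mask.any():
                centers[ci, 0], centers[ci, 1] = yy[mask].mean(), xx[mask].mean()
                centers[ci, 2:] = image[mask].mean(axis=0)
    return labels
```

Restricting each assignment to the 2S × 2S window is what makes the clustering roughly linear in the number of pixels rather than quadratic.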
step S8: for each super pixel, counting the number of each category of the internal pixels, and taking the category with the largest number as the category label of the super pixel, as shown in fig. 4;
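The majority vote of step S8 can be written compactly; a small NumPy sketch (function and variable names are illustrative):

```python
import numpy as np

def label_superpixels(superpixel_ids, pixel_classes):
    """Step S8: assign each superpixel the most frequent class of its pixels.

    superpixel_ids and pixel_classes are integer arrays of the same shape."""
    sp_class = {}
    for sp in np.unique(superpixel_ids):
        classes, counts = np.unique(pixel_classes[superpixel_ids == sp],
                                    return_counts=True)
        sp_class[int(sp)] = int(classes[np.argmax(counts)])  # majority class
    return sp_class
```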
then extracting the super-pixel characteristics, including the following contents:
and extracting HSV color histogram features. And representing the input remote sensing image by HSV color space, equally dividing hue H, saturation S and brightness V into 8, 4 and 4 sections respectively, and counting the proportion of pixels falling into each section in the superpixel to generate a 128-dimensional feature vector.
Gabor texture feature extraction: the function of the Gabor filter is expressed as follows:

g(x, y; λ, θ, φ, σ, γ) = exp(−(x′² + γ²y′²)/(2σ²)) · cos(2πx′/λ + φ)

x′ = x·cos θ + y·sin θ

y′ = −x·sin θ + y·cos θ

where λ is the wavelength, specified in pixels; θ specifies the orientation of the parallel stripes of the Gabor filter; φ represents the phase offset; σ represents the standard deviation of the Gaussian factor; and γ represents the aspect ratio.
Since the super-pixels are irregular in shape and uncertain in size and number of pixels, the method adopted in the application is to perform multi-directional (0 °, 45 °, 90 °, 135 °) and multi-kernel size (5, 7, 9, 11, 13, 15) Gabor filtering on the whole image, and then average the filtering output in each super-pixel to form the 24-dimensional Gabor texture feature of the super-pixel.
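A sketch of the 24-dimensional Gabor feature (4 orientations × 6 kernel sizes), using FFT-based circular convolution for brevity; the wavelength λ, σ and γ defaults are assumptions, since the text only fixes the orientations and kernel sizes:

```python
import numpy as np

def gabor_kernel(ksize, theta, lam=8.0, sigma=4.0, gamma=0.5, phi=0.0):
    """Real Gabor kernel: exp(-(x'^2 + gamma^2 y'^2)/(2 sigma^2)) * cos(2 pi x'/lam + phi)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xp = x * np.cos(theta) + y * np.sin(theta)
    yp = -x * np.sin(theta) + y * np.cos(theta)
    return (np.exp(-(xp ** 2 + gamma ** 2 * yp ** 2) / (2 * sigma ** 2))
            * np.cos(2 * np.pi * xp / lam + phi))

def gabor_features(gray, mask):
    """Mean absolute Gabor response inside one superpixel (boolean mask):
    4 orientations x 6 kernel sizes -> 24-dimensional feature vector."""
    feats = []
    F = np.fft.fft2(gray)
    for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):   # 0, 45, 90, 135 degrees
        for ksize in (5, 7, 9, 11, 13, 15):                  # the six kernel sizes
            K = np.fft.fft2(gabor_kernel(ksize, theta), s=gray.shape)
            resp = np.real(np.fft.ifft2(F * K))              # circular convolution
            feats.append(np.abs(resp)[mask].mean())          # average over the superpixel
    return np.array(feats)
```

Filtering the whole image once and averaging inside each superpixel mask sidesteps the irregular superpixel shapes, exactly as the paragraph above describes.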
GLCM feature extraction: let the gray level of a pixel A(x, y) in the image be i and that of another pixel B be j, with the two pixels separated by distance d along direction angle θ. The joint probability of A and B occurring together is denoted P(i, j, d, θ), expressed as:

P(i, j, d, θ) = #{ [(x, y), (x + dx, y + dy)] | f(x, y) = i, f(x + dx, y + dy) = j }

Because the shape and size of a superpixel are not fixed, the side length of the GLCM computation window of a superpixel is set to S, centered at the superpixel center, which is obtained by averaging the coordinates of all pixels in the superpixel. If the image has G gray levels, the GLCM matrix is of size G × G; to reduce the computation, the original remote sensing image is converted to a grayscale map with 16 gray levels. After the GLCM matrix is computed, P(i, j, d, θ)/N² is taken as its normalized result, reducing the dimensional gaps between the data.
Finally, texture statistics of the superpixel are computed from the normalized GLCM as the texture parameters. The invention selects six statistics, contrast, dissimilarity, homogeneity, angular second moment, energy and correlation, and the GLCM is computed over the four directions 0°, 45°, 90° and 135°, giving a 24-dimensional GLCM texture feature vector in total.
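The GLCM statistics can be sketched as follows for a 16-level image; the (dy, dx) offsets for the four directions at distance d = 1, and the small epsilon in the correlation term, are assumptions consistent with common GLCM practice:

```python
import numpy as np

def glcm(img, dy, dx, levels=16):
    """Normalized gray-level co-occurrence matrix for one (dy, dx) offset."""
    H, W = img.shape
    y0, y1 = max(0, -dy), min(H, H - dy)
    x0, x1 = max(0, -dx), min(W, W - dx)
    a = img[y0:y1, x0:x1]                       # reference pixels (level i)
    b = img[y0 + dy:y1 + dy, x0 + dx:x1 + dx]   # neighbor pixels (level j)
    P = np.zeros((levels, levels))
    np.add.at(P, (a.ravel(), b.ravel()), 1)     # count co-occurrences
    return P / P.sum()

def glcm_stats(P, levels=16):
    """The six statistics named in the text, for one normalized GLCM."""
    i, j = np.mgrid[0:levels, 0:levels]
    contrast = (P * (i - j) ** 2).sum()
    dissimilarity = (P * np.abs(i - j)).sum()
    homogeneity = (P / (1.0 + (i - j) ** 2)).sum()
    asm = (P ** 2).sum()                        # angular second moment
    energy = np.sqrt(asm)
    mu_i, mu_j = (P * i).sum(), (P * j).sum()
    s_i = np.sqrt((P * (i - mu_i) ** 2).sum())
    s_j = np.sqrt((P * (j - mu_j) ** 2).sum())
    corr = (P * (i - mu_i) * (j - mu_j)).sum() / (s_i * s_j + 1e-12)
    return [contrast, dissimilarity, homogeneity, asm, energy, corr]

def glcm_features(gray16):
    """Six statistics x four directions -> 24-dimensional texture vector."""
    feats = []
    for dy, dx in ((0, 1), (-1, 1), (-1, 0), (-1, -1)):  # 0, 45, 90, 135 degrees
        feats.extend(glcm_stats(glcm(gray16, dy, dx)))
    return np.array(feats)
```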
Step S9: extracting the characteristics of each super pixel, wherein the characteristics comprise 128-dimensional HSV color histogram characteristics, 24-dimensional Gabor texture characteristics and 24-dimensional GLCM texture characteristics, and standardizing the extracted characteristics, and the processing formula is as follows:
in the formula, xij、x′ijRespectively representing the j-th feature, mu, of the ith super-pixel sample before and after normalizationjMeans, σ, representing the jth featurejRepresents the standard deviation of the jth feature;
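A minimal sketch of the per-feature standardization of step S9 (the zero-variance guard is an addition for numerical safety, not part of the source):

```python
import numpy as np

def standardize(X):
    """Z-score each column of X (rows = superpixel samples, columns = features):
    x'_ij = (x_ij - mu_j) / sigma_j."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    sigma[sigma == 0] = 1.0       # guard against constant features
    return (X - mu) / sigma
```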
step S10: taking the standardized superpixel features and the superpixel categories as training samples, and training a random forest classifier;
step S11: inputting a remote sensing image to be recognized, processing it with the above superpixel segmentation (without image annotation) and feature extraction methods, classifying the features using the trained random forest classifier, and outputting the ground object classes of the recognized image, as shown in fig. 5.
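The source does not name a library for steps S10–S11; as an assumption, the training and prediction could be sketched with scikit-learn's RandomForestClassifier on the 176-dimensional (128 + 24 + 24) feature vectors, with random stand-in data in place of the real superpixel features:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Illustrative stand-ins for the real superpixel features and labels:
# 176-d = 128 HSV + 24 Gabor + 24 GLCM; classes 0/1/2 = vegetation/building/water.
rng = np.random.default_rng(5)
X_train = rng.random((300, 176))
y_train = rng.integers(0, 3, 300)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)          # step S10: train on labeled superpixel samples

X_new = rng.random((20, 176))      # features of an image to be recognized
pred = clf.predict(X_new)          # step S11: per-superpixel ground object class
```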
For the vegetation class, the items to be calculated are: the land compensation fee, computed as six to ten times the average annual output value of the preceding three years; the resettlement subsidy, computed from the number of agricultural persons to be resettled, i.e. the area of requisitioned cultivated land divided by the per-capita cultivated land area of the requisitioned unit before requisition; and the young-crop compensation fee, computed as one third to one quarter of the output value of the current season. For the building class, the items to be calculated are: the house value compensation fee, computed according to the market price of comparable real estate in the same city; and the relocation subsidy, computed according to the local housing requisition standard. For water system types such as farmland water conservancy facilities or artificial fishponds, relocation and compensation fees are calculated with reference to the relevant standards.
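The vegetation-class arithmetic above can be illustrated with a toy calculator; all names, default rates and example figures are hypothetical, and actual amounts are governed by the referenced standards:

```python
def vegetation_compensation(avg_annual_output, season_output,
                            land_multiplier=6,
                            requisitioned_area=0.0, per_capita_area=1.0):
    """Toy sketch of the three vegetation-class items: land compensation at
    6-10x the three-year average annual output value, resettlement head count,
    and a young-crop fee of 1/3 to 1/4 of the season's output value."""
    land_fee = land_multiplier * avg_annual_output      # 6 <= multiplier <= 10
    persons_to_resettle = requisitioned_area / per_capita_area
    seedling_fee = season_output / 3                    # upper end of 1/3..1/4
    return land_fee, persons_to_resettle, seedling_fee
```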
The invention also provides a planned ground feature recognition device based on the remote sensing image classification algorithm, which comprises a memory, a processor and a program, wherein the processor executes the steps S1-S11.
Claims (10)
1. A planning land feature recognition method based on a remote sensing image classification algorithm is characterized by comprising the following steps:
step S1: preprocessing an original aerial image, and performing image annotation on the complete remote sensing image: classifying each pixel into three classes, vegetation, building and water system, and outputting a pixel-by-pixel label map;
step S2: partitioning the complete remote sensing image and the corresponding labeled graph into a plurality of samples, wherein each sample consists of one remote sensing image and one labeled graph;
step S3: carrying out Gaussian blur on an input remote sensing image;
step S4: determining the required number k of superpixels, and calculating the average distance between adjacent superpixels as S = √(N/k), where N is the total number of pixels in the image; then selecting the initial clustering centers on the image according to the superpixel average distance S;
step S5: within the 3 × 3 neighborhood of each initially selected clustering center, moving the center to the position with the minimum gradient;
step S6: k-means clustering is performed on all pixels, with the distance between two pixels calculated as:

D = √(d_c² + (d_s/S)²·m²)

where m controls the compactness of the superpixels, d_c represents the color proximity, and d_s represents the spatial proximity;

the color proximity and spatial proximity between two pixels i and j are defined as:

d_c = √( Σ_{s∈B} [I(x_i, y_i, s) − I(x_j, y_j, s)]² )

d_s = √( (x_i − x_j)² + (y_i − y_j)² )

in the formula, I(x_i, y_i, s) and I(x_j, y_j, s) represent the values of the two pixels on spectral band s, and B represents the set of spectral bands; the color proximity controls superpixel uniformity, and the spatial proximity controls superpixel compactness;
step S7: at each clustering iteration, only the pixels within the 2S × 2S region around each superpixel center are computed; the iteration continues until the residual error E converges within a threshold;
step S8: for each super pixel, counting the number of each category of the internal pixels, and marking the category with the largest number as the category of the super pixel;
step S9: extracting the characteristics of each super pixel, and standardizing the extracted characteristics;
step S10: marking the super-pixel standardized features and the super-pixel categories as training samples, and training a random forest classifier;
step S11: inputting a remote sensing image to be recognized, processing it with the above superpixel segmentation (without image annotation) and feature extraction methods, classifying the features using the trained random forest classifier, and outputting the ground object classes of the recognized image.
2. The method for identifying the planned terrain based on the remote sensing image classification algorithm as claimed in claim 1, wherein the extracted features include 128-dimensional HSV color histogram features, 24-dimensional Gabor texture features and 24-dimensional GLCM features.
3. The method for identifying the planned terrain based on the remote sensing image classification algorithm according to claim 1, characterized in that the method normalizes the extracted features by the following formula:

x′_ij = (x_ij − μ_j) / σ_j

in the formula, x_ij and x′_ij respectively represent the jth feature of the ith superpixel sample before and after normalization, μ_j represents the mean of the jth feature, and σ_j represents the standard deviation of the jth feature.
4. The method for identifying the planned land features based on the remote sensing image classification algorithm according to claim 1, wherein the size of the image block sample is 4000 x 4000.
5. The method for identifying the planned terrain based on the remote sensing image classification algorithm as claimed in claim 1, wherein the method for preprocessing the original aerial image comprises distortion correction, image de-noising, image defogging and image splicing.
6. A planned ground feature recognition device based on a remote sensing image classification algorithm comprises a memory, a processor and a program, and is characterized in that the processor executes the following steps:
step S1: preprocessing an original aerial image, and performing image annotation on the complete remote sensing image: classifying each pixel into three classes, vegetation, building and water system, and outputting a pixel-by-pixel label map;
step S2: partitioning the complete remote sensing image and the corresponding labeled graph into a plurality of samples, wherein each sample consists of one remote sensing image and one labeled graph;
step S3: carrying out Gaussian blur on an input remote sensing image;
step S4: determining the required number k of superpixels, and calculating the average distance between adjacent superpixels as S = √(N/k), where N is the total number of pixels in the image; then selecting the initial clustering centers on the image according to the superpixel average distance S;
step S5: within the 3 × 3 neighborhood of each initially selected clustering center, moving the center to the position with the minimum gradient;
step S6: k-means clustering is performed on all pixels, with the distance between two pixels calculated as:

D = √(d_c² + (d_s/S)²·m²)

where m controls the compactness of the superpixels, d_c represents the color proximity, and d_s represents the spatial proximity;

the color proximity and spatial proximity between two pixels i and j are defined as:

d_c = √( Σ_{s∈B} [I(x_i, y_i, s) − I(x_j, y_j, s)]² )

d_s = √( (x_i − x_j)² + (y_i − y_j)² )

in the formula, I(x_i, y_i, s) and I(x_j, y_j, s) represent the values of the two pixels on spectral band s, and B represents the set of spectral bands; the color proximity controls superpixel uniformity, and the spatial proximity controls superpixel compactness;
step S7: at each clustering iteration, only the pixels within the 2S × 2S region around each superpixel center are computed; the iteration continues until the residual error E converges within a threshold;
step S8: for each super pixel, counting the number of each category of the internal pixels, and marking the category with the largest number as the category of the super pixel;
step S9: extracting the characteristics of each super pixel, and standardizing the extracted characteristics;
step S10: marking the super-pixel standardized features and the super-pixel categories as training samples, and training a random forest classifier;
step S11: inputting a remote sensing image to be recognized, processing it with the above superpixel segmentation (without image annotation) and feature extraction methods, classifying the features using the trained random forest classifier, and outputting the ground object classes of the recognized image.
7. The planned terrain recognition device based on the remote sensing image classification algorithm, as claimed in claim 6, wherein the features extracted in the device implementation steps include 128-dimensional HSV color histogram features, 24-dimensional Gabor texture features, and 24-dimensional GLCM features.
8. The planned terrain feature recognition device based on the remote sensing image classification algorithm according to claim 6, characterized in that the device performs the step of normalizing the extracted features by the following formula:

x′_ij = (x_ij − μ_j) / σ_j

in the formula, x_ij and x′_ij respectively represent the jth feature of the ith superpixel sample before and after normalization, μ_j represents the mean of the jth feature, and σ_j represents the standard deviation of the jth feature.
9. A planned ground feature recognition device based on a remote sensing image classification algorithm according to claim 6, wherein the device executes the image block sample size in the step of 4000 x 4000.
10. The planned terrain recognition device based on the remote sensing image classification algorithm, as claimed in claim 6, wherein the device performs the steps of preprocessing the original aerial image including distortion correction, image denoising, image defogging, and image stitching.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110717242.2A CN113469011A (en) | 2021-07-31 | 2021-07-31 | Planning land feature identification method and device based on remote sensing image classification algorithm |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110717242.2A CN113469011A (en) | 2021-07-31 | 2021-07-31 | Planning land feature identification method and device based on remote sensing image classification algorithm |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113469011A true CN113469011A (en) | 2021-10-01 |
Family
ID=77873239
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110717242.2A Pending CN113469011A (en) | 2021-07-31 | 2021-07-31 | Planning land feature identification method and device based on remote sensing image classification algorithm |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113469011A (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108280397A (en) * | 2017-12-25 | 2018-07-13 | 西安电子科技大学 | Human body image hair detection method based on depth convolutional neural networks |
US20210209426A1 (en) * | 2018-09-29 | 2021-07-08 | Shenzhen University | Image Fusion Classification Method and Device |
Non-Patent Citations (2)
Title |
---|
ZHENGQIN LI: "Superpixel Segmentation Using Linear Spectral Clustering", IEEE, 15 December 2015 (2015-12-15), pages 1356-1363 * |
XU YANSONG: "Automatic Identification of Debris Flow Areas with an Improved Fuzzy Clustering Algorithm" (in Chinese), China Water Transport, 15 May 2021 (2021-05-15), pages 46-49 * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Zhang et al. | A feature difference convolutional neural network-based change detection method | |
US11783569B2 (en) | Method for classifying hyperspectral images on basis of adaptive multi-scale feature extraction model | |
He et al. | Hybrid first and second order attention Unet for building segmentation in remote sensing images | |
CN108573276B (en) | Change detection method based on high-resolution remote sensing image | |
CN109146889B (en) | Farmland boundary extraction method based on high-resolution remote sensing image | |
Jia et al. | Multiple feature-based superpixel-level decision fusion for hyperspectral and LiDAR data classification | |
Sirmacek et al. | A probabilistic framework to detect buildings in aerial and satellite images | |
An et al. | Scene learning for cloud detection on remote-sensing images | |
Han et al. | A novel computer vision-based approach to automatic detection and severity assessment of crop diseases | |
CN106503739A (en) | The target in hyperspectral remotely sensed image svm classifier method and system of combined spectral and textural characteristics | |
CN106339674A (en) | Hyperspectral image classification method based on edge preservation and graph cut model | |
CN111160273A (en) | Hyperspectral image space spectrum combined classification method and device | |
WO2022267388A1 (en) | Mangrove hyperspectral image classification method and apparatus, and electronic device and storage medium | |
Sukhia et al. | Content-based remote sensing image retrieval using multi-scale local ternary pattern | |
US20220004740A1 (en) | Apparatus and Method For Three-Dimensional Object Recognition | |
Shen et al. | Biomimetic vision for zoom object detection based on improved vertical grid number YOLO algorithm | |
CN109034213B (en) | Hyperspectral image classification method and system based on correlation entropy principle | |
Chen et al. | Object-based multi-modal convolution neural networks for building extraction using panchromatic and multispectral imagery | |
CN113435254A (en) | Sentinel second image-based farmland deep learning extraction method | |
Ahmad et al. | Hybrid dense network with attention mechanism for hyperspectral image classification | |
Jenifa et al. | Classification of cotton leaf disease using multi-support vector machine | |
CN113496148A (en) | Multi-source data fusion method and system | |
Guo et al. | Dual-concentrated network with morphological features for tree species classification using hyperspectral image | |
Kuswidiyanto et al. | Airborne hyperspectral imaging for early diagnosis of kimchi cabbage downy mildew using 3D-ResNet and leaf segmentation | |
Huang et al. | Research on crop planting area classification from remote sensing image based on deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||