CN112131912A - Remote sensing image underlying surface extraction method - Google Patents
Remote sensing image underlying surface extraction method
- Publication number
- CN112131912A (application CN201910555254.2A)
- Authority
- CN
- China
- Prior art keywords
- pixel point
- remote sensing
- underlying surface
- data
- gray level
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/176—Urban or other man-made structures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/40—Analysis of texture
- G06T7/41—Analysis of texture based on statistical description of texture
- G06T7/45—Analysis of texture based on statistical description of texture using co-occurrence matrix computation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/182—Network patterns, e.g. roads or rivers
Abstract
The invention relates to a remote sensing image underlying surface extraction method. A plurality of representative pixel points (A) are selected in a plurality of known remote sensing images; in a given band, a gray level co-occurrence matrix is generated with the pixel point (A) as center, and the energy, entropy, contrast, inverse difference, dissimilarity and correlation texture parameters of the pixel point (A) in that band are calculated. From these parameters, a decision tree model is constructed with a random forest model. Unknown remote sensing images are then scanned point by point according to the decision tree model to finally obtain the underlying surface categories, and the underlying surface categories in the classification result are color-rendered.
Description
Technical Field
The invention relates to a method for extracting the underlying surface of a remote sensing image, and in particular to a processing method that extracts the underlying surface of a remote sensing image by combining image processing and machine learning.
Background
When the underlying surface is extracted from a remote sensing image, errors exist in the extraction of river shorelines and in the extraction of land feature information such as greenbelts, roads, roofs, pavements and bare land; these errors are large, resulting in a heavy workload of manual intervention.
At present, remote sensing image classification and identification in Envi software can only be trained on a single remote sensing image, i.e. the training data and the classification model are obtained from that single image. Because Envi trains on a single remote sensing image, the trained classification model recognizes the image used for training well, but the generalization capability of the model is limited.
Disclosure of Invention
The invention aims to provide a method for extracting an underlying surface of a remote sensing image, which can classify and identify greenbelts, roads, roofs, pavements, bare lands and water bodies in the remote sensing image and then represent the identified greenbelts, roads, roofs, pavements, bare lands and water bodies as different colors.
In order to achieve the purpose of the invention, the following technical scheme is adopted in the application:
the invention discloses a remote sensing image underlying surface extraction method, which comprises the following steps:
(one) extracting data
Selecting a plurality of representative pixel points (A) in a plurality of known remote sensing images and extracting their band data (DN1, DN2, DN3), wherein the representative pixel points (A) cover c underlying surface categories: roads, greenbelts, pavements, water bodies, bare land and roofs;
secondly, generating a gray level co-occurrence matrix by taking the pixel point A as the center
(I) In order to reduce the size of the gray level co-occurrence matrix, the maximum gray value is limited to a specified number of levels k, and the gray value of each band in the remote sensing image is reduced to

g' = g × k / G

where G is the maximum gray value of the original band, g' is the reduced gray value of the pixel point, and g is the gray value of the pixel point before reduction;
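The gray level reduction of step (I) can be sketched in code as follows. This is a minimal illustration assuming the common variant g' = floor(g × k / (G + 1)), so that the reduced values fall in 0 to k - 1; the function name quantize_band and the NumPy usage are illustrative, not part of the patented method.

```python
import numpy as np

def quantize_band(band, k=16):
    """Reduce a band's gray values to k levels (0..k-1).

    Assumes the linear reduction g' = floor(g * k / (G + 1)),
    where G is the band's original maximum gray value; the exact
    rounding convention is an assumption, since the patent only
    specifies a linear reduction to k levels.
    """
    band = np.asarray(band, dtype=np.float64)
    g_max = band.max()
    return np.floor(band * k / (g_max + 1.0)).astype(np.int64)
```

With k = 16, an 8-bit band (maximum 255) maps 0 to 0 and 255 to 15, matching the 16-level reduction used in the detailed description.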
(II) Within the band where the pixel point (A) is located, a window is taken with the pixel point (A) as its geometric center, its length being m pixel points and its width being n pixel points. With the pixel point at the lower-left corner of the window as the origin of coordinates, the m pixel points along the length as the X axis and the n pixel points along the width as the Y axis, a window coordinate system is established, and the gray values of all pixel points within it are read. The gray value pairs (g1, g2) of two adjacent pixel points are then found along the X axis direction and the directions parallel to it, along the Y axis direction and the directions parallel to it, or along the diagonal at 45 degrees to the X and Y axes and the directions parallel to it, and the number of adjacent pixel point pairs in the window is counted;
(III) A gray level co-occurrence matrix of k rows and k columns is established, and the number of occurrences of each gray value pair (g1, g2) of two adjacent pixel points in the window is recorded in it: each adjacent pair with gray values (g1, g2) adds 1 to row g1, column g2 of the matrix, and the gray value pairs of all adjacent pixel point pairs in the window are recorded in the gray level co-occurrence matrix;
(IV) The data in the gray level co-occurrence matrix are normalized, i.e. each count in the gray level co-occurrence matrix is divided by the number of adjacent pixel point pairs in the window, giving the probability of occurrence of each gray value pair;
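Steps (II) to (IV), i.e. reading the window, counting adjacent gray value pairs along a chosen direction and normalizing, can be sketched as follows. The function glcm and its offset parameter are assumptions introduced for illustration: (0, 1) corresponds to the X-axis direction, (1, 0) to the Y-axis direction and (1, 1) to the 45-degree diagonal.

```python
import numpy as np

def glcm(window, k=16, offset=(0, 1)):
    """Normalized k x k gray level co-occurrence matrix of a window.

    `window` holds reduced gray levels (0..k-1).  `offset` is the
    (row, col) displacement to the second pixel of each adjacent
    pair: (0, 1) scans along the X axis, (1, 0) along the Y axis,
    (1, 1) along the diagonal at 45 degrees to both axes.
    """
    window = np.asarray(window)
    m = np.zeros((k, k), dtype=np.float64)
    dr, dc = offset
    rows, cols = window.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                # record the pair (g1, g2) in row g1, column g2
                m[window[r, c], window[r2, c2]] += 1.0
    pairs = m.sum()  # number of adjacent pixel point pairs
    return m / pairs if pairs else m
```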
Thirdly, calculating the energy, entropy, contrast, inverse difference, dissimilarity and correlation texture parameters of the pixel point (A) in the band from the gray level co-occurrence matrix
The calculation formula of the energy (ASM) of the pixel point (A) is:

ASM = Σ_i Σ_j p(i, j)^2

where p(i, j) is the value in the ith row and jth column of the normalized gray level co-occurrence matrix;
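Given the normalized matrix p(i, j), the six parameters of this step can be computed as below. The patent prints only the energy formula, so the other five follow the standard Haralick texture definitions; treating them as such is an assumption, and the dictionary keys are illustrative names.

```python
import numpy as np

def texture_features(p):
    """Six texture parameters of a normalized k x k GLCM p.

    Energy (ASM) follows the patent's formula; entropy, contrast,
    inverse difference, dissimilarity and correlation use the
    standard Haralick definitions (an assumption).
    """
    p = np.asarray(p, dtype=np.float64)
    k = p.shape[0]
    i, j = np.indices((k, k))
    asm = np.sum(p ** 2)                              # energy
    ent = -np.sum(p[p > 0] * np.log2(p[p > 0]))       # entropy
    con = np.sum((i - j) ** 2 * p)                    # contrast
    idm = np.sum(p / (1.0 + (i - j) ** 2))            # inverse difference
    dis = np.sum(np.abs(i - j) * p)                   # dissimilarity
    mu_i, mu_j = np.sum(i * p), np.sum(j * p)
    sd_i = np.sqrt(np.sum((i - mu_i) ** 2 * p))
    sd_j = np.sqrt(np.sum((j - mu_j) ** 2 * p))
    cor = (np.sum((i - mu_i) * (j - mu_j) * p) / (sd_i * sd_j)
           if sd_i > 0 and sd_j > 0 else 0.0)         # correlation
    return {"ASM": asm, "ENT": ent, "CON": con,
            "IDM": idm, "DIS": dis, "COR": cor}
```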
(IV) finding out all data of a plurality of representative pixel points (A) in a plurality of remote sensing images
Repeating step (two) and step (three), the six texture parameters (energy, entropy, contrast, inverse difference, dissimilarity, correlation) of the pixel point (A) are found for each band. The band data (DN1, DN2, DN3) of the pixel point (A), together with the six texture parameters under each band, are taken as one sample datum, so that one sample datum contains n = 3 + 3 × 6 = 21 attributes. In the same way, the band data (DN1, DN2, DN3) and the per-band texture parameters of a plurality of representative pixel points (A) in a plurality of remote sensing images are found, forming a training sample set;
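Assembling the 21 attributes of one sample datum (three band values plus six texture parameters per band, 3 + 3 × 6 = 21) can be sketched as follows; the helper name sample_vector and the dictionary keys are illustrative assumptions.

```python
def sample_vector(dn, textures_per_band):
    """One 21-attribute sample: band data (DN1, DN2, DN3) followed
    by six texture parameters for each of the three bands.

    `textures_per_band` is a list of three dicts keyed by the
    texture parameter names (illustrative keys).
    """
    keys = ("ASM", "ENT", "CON", "IDM", "DIS", "COR")
    vec = list(dn)
    for t in textures_per_band:
        vec.extend(t[key] for key in keys)
    return vec
```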
(V) constructing a decision tree model by using a random forest model
(1) Randomly selecting, with replacement, a samples from the training sample set to establish a decision tree, the attributes of the training samples being a_1, a_2, ..., a_n with n = 21. For the current sample set D, the proportion of samples whose underlying surface category is c is p_c, and the information entropy of D is:

Ent(D) = -Σ_c p_c log2(p_c)
(2) Among the a training samples, division is performed according to a certain attribute a_i of the samples: attribute a_i divides the a training samples into V branches (v = 1, 2, ..., V), and D^v is the sample set contained in the vth branch. The information gain obtained by dividing with attribute a_i is:

Gain(D, a_i) = Ent(D) - Σ_{v=1..V} (|D^v| / |D|) × Ent(D^v)

where, for each sample set D^v, the proportion of samples whose underlying surface category is c is p_c and the information entropy of D^v is Ent(D^v) = -Σ_c p_c log2(p_c);
Repeating the above steps, the information gain obtained by dividing the a training samples according to each attribute is calculated; the information gains of the 21 attributes are compared, and the attribute with the maximum information gain is taken as the first node,
(3) Under the attribute of the first node there are V branches, and the value or value range of each branch is marked. Step (2) is then repeated: in the sample set D^v under the vth branch, the information gains obtained by dividing with the remaining 20 attributes are calculated and compared, and the attribute with the maximum information gain under each of branches 1, ..., V is taken as the node on that branch under the first node,
(4) Each node divides its sample set into V branches (the number V may differ from node to node); step (3) is repeated to find the nodes on the newly divided V branches and on their next layer of branches, and so on until the underlying surface category is obtained, completing the construction of one decision tree,
(5) repeating the step (1) to the step (4) for b times to construct a decision tree model,
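The entropy and information gain calculations of steps (1) and (2) can be sketched as follows; the class labels are arbitrary, and the base-2 logarithm follows the usual ID3-style definition, which is an assumption since the printed formulas are missing from the text.

```python
import math
from collections import Counter

def entropy(labels):
    """Information entropy Ent(D) = -sum_c p_c * log2(p_c), where
    p_c is the proportion of samples of underlying surface class c."""
    n = len(labels)
    return -sum((cnt / n) * math.log2(cnt / n)
                for cnt in Counter(labels).values())

def info_gain(labels, branches):
    """Gain(D, a) = Ent(D) - sum_v |D_v|/|D| * Ent(D_v), where
    `branches` lists the label sets D_v produced by attribute a."""
    n = len(labels)
    return entropy(labels) - sum(
        (len(b) / n) * entropy(b) for b in branches)
```

The attribute with the maximum gain becomes the node, exactly as in step (2).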
(VI) classifying the unknown remote sensing images
According to the decision tree model, the unknown remote sensing image is scanned point by point, and each scanned point is processed by steps (two) to (three) so that 21 attribute data are obtained from each scanning point. The 21 attribute data of each scanning point are input into the decision tree model and classified in each decision tree according to the values or value ranges marked on the nodes and branches, finally yielding an underlying surface category per tree. The underlying surface categories obtained for each scanning point in the b decision trees are counted, and the most frequent category is taken as the final classification result of that scanning point;
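The point-by-point voting of step (six) can be sketched as follows; each tree is represented as any callable that maps a 21-attribute vector to an underlying surface label, which is an assumption made only for this sketch.

```python
from collections import Counter

def forest_predict(trees, attributes):
    """Classify one scanned point: run its 21 attribute data through
    each of the b decision trees and take the most frequent
    underlying surface category as the final classification."""
    votes = Counter(tree(attributes) for tree in trees)
    return votes.most_common(1)[0][0]
```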
(VII) color rendering is carried out on the classified result
Rendering the roads in the classification result of step (six) red, with the band data set to 255, 0, 0; the greenbelt green, with band data 0, 255, 0; the pavement blue, with band data 0, 0, 255; the water body yellow, with band data 255, 255, 0; the bare land white, with band data 255, 255, 255; and the roof black, with band data 0, 0, 0.
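The color rendering of step (seven) maps each category to the RGB band data given above; a minimal sketch follows, in which the English label strings are illustrative assumptions.

```python
import numpy as np

# RGB band data from step (seven); the label strings are assumptions
PALETTE = {
    "road": (255, 0, 0),           # red
    "greenbelt": (0, 255, 0),      # green
    "pavement": (0, 0, 255),       # blue
    "water": (255, 255, 0),        # yellow
    "bare land": (255, 255, 255),  # white
    "roof": (0, 0, 0),             # black
}

def render(class_map):
    """Turn a 2-D array of category labels into an RGB image."""
    class_map = np.asarray(class_map, dtype=object)
    out = np.zeros(class_map.shape + (3,), dtype=np.uint8)
    for name, rgb in PALETTE.items():
        out[class_map == name] = rgb
    return out
```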
The invention discloses a remote sensing image underlying surface extraction method, which comprises the following steps: in the step (one), the number of the selected representative pixel points (A) in each underlying surface category is the same.
The invention discloses a remote sensing image underlying surface extraction method, which comprises the following steps: in step (II), the length of the window is 3 to 10 pixel points, and the width of the window is 3 to 10 pixel points.
The invention discloses a remote sensing image underlying surface extraction method, which comprises the following steps: in step (I), the specified maximum gray value is 10 to 20.
The invention discloses a remote sensing image underlying surface extraction method, which comprises the following steps: in step (five), the number V of branches under each attribute may differ, and V is 1 to 30.
The invention discloses a remote sensing image underlying surface extraction method, which comprises the following steps: in the step (five), the number a of the samples is 800 to 1000.
The invention discloses a remote sensing image underlying surface extraction method, which comprises the following steps: in the step (five), b decision trees are constructed, wherein b is 50-1000.
Compared with an unprocessed remote sensing image, a remote sensing image processed by the remote sensing image underlying surface extraction method of the invention shows the underlying surface categories more clearly: roads, greenbelts, pavements, water bodies, bare land and roofs.
Drawings
FIG. 1 is a schematic diagram of a window coordinate in step (I) of the method for extracting the underlying surface of the remote sensing image according to the present invention;
FIG. 2 is a schematic diagram of two adjacent pixel point pairs found in FIG. 1;
FIG. 3 is a schematic diagram of a decision tree model constructed in the fifth step in the remote sensing image underlying surface extraction method of the present invention;
FIG. 4 is an original remote sensing image;
FIG. 5 is a schematic diagram of the classification, recognition and rendering result after the original remote sensing image has been processed by the remote sensing image underlying surface extraction method.
Detailed Description
The method for extracting the underlying surface of the remote sensing image comprises the following steps:
(one) extracting data
As shown in fig. 4, a plurality of representative pixel points (A) are selected in a plurality of known remote sensing images and their band data (DN1, DN2, DN3) are extracted. The representative pixel points (A) cover c underlying surface categories: roads, greenbelts, pavements, water bodies, bare land and roofs; the number of selected representative pixel points (A) is the same in each underlying surface category;
secondly, generating a gray level co-occurrence matrix by taking the pixel point A as the center
(I) In order to reduce the size of the gray level co-occurrence matrix, the maximum gray value is limited to k = 16 levels, and the gray value of each band in the remote sensing image is reduced to

g' = g × k / G

where G is the maximum gray value of the original band, g' is the reduced gray value of the pixel point, and g is the gray value of the pixel point before reduction;
(II) Within the band where the pixel point (A) is located, as shown in fig. 1, a window is taken with the pixel point (A) as its geometric center, its length and width each being a set number of pixel points. With the pixel point at the lower-left corner of the window as the origin of coordinates, the pixel points along the length as the X axis and the pixel points along the width as the Y axis, a window coordinate system is established, and the gray values of all pixel points within it are read. As shown in fig. 2, the gray value pairs (g1, g2) of two adjacent pixel points are found along the diagonal at 45 degrees to the X and Y axes and the directions parallel to it, and the number of adjacent pixel point pairs in the window is found to be 16;
(III) A gray level co-occurrence matrix of k rows and k columns is established, and the number of occurrences of each gray value pair (g1, g2) of two adjacent pixel points in the window is recorded in it: each adjacent pair with gray values (g1, g2) adds 1 to row g1, column g2 of the matrix, and the gray value pairs of all adjacent pixel point pairs in the window are recorded in the gray level co-occurrence matrix;
(IV) The data in the gray level co-occurrence matrix are normalized, i.e. each count in the gray level co-occurrence matrix is divided by the number of adjacent pixel point pairs in the window (here 16 pairs), giving the probability of occurrence of each gray value pair;
Thirdly, calculating the energy, entropy, contrast, inverse difference, dissimilarity and correlation texture parameters of the pixel point (A) in the band from the gray level co-occurrence matrix
The calculation formula of the energy (ASM) of the pixel point (A) is:

ASM = Σ_i Σ_j p(i, j)^2

where p(i, j) is the value in the ith row and jth column of the normalized gray level co-occurrence matrix;
(IV) finding out all data of a plurality of representative pixel points (A) in a plurality of remote sensing images
Repeating step (two) and step (three), the six texture parameters (energy, entropy, contrast, inverse difference, dissimilarity, correlation) of the pixel point (A) are found for each band. The band data (DN1, DN2, DN3) of the pixel point (A), together with the six texture parameters under each band, are taken as one sample datum, so that one sample datum contains n = 3 + 3 × 6 = 21 attributes. In the same way, the band data (DN1, DN2, DN3) and the per-band texture parameters of a plurality of representative pixel points (A) in a plurality of remote sensing images are found, forming a training sample set;
(V) constructing a decision tree model by using a random forest model
(1) 800 samples are randomly selected, with replacement, from the training sample set to establish a decision tree, the attributes of the training samples being a_1, a_2, ..., a_n with n = 21. For the current sample set D, the proportion of samples whose underlying surface category is c (road, greenbelt, pavement, water body, bare land or roof) is p_c, and the information entropy of D is:

Ent(D) = -Σ_c p_c log2(p_c)
(2) As shown in fig. 3, among the 800 training samples, division is performed according to the energy texture ASM of the first band DN1 of the pixel point (A): the 800 training samples are divided into V = 3 branches, whose sample sets D^1, D^2 and D^3 contain 260, 240 and 300 samples respectively. The information gain obtained by dividing with the energy texture ASM of the first band DN1 is:

Gain(D, ASM) = Ent(D) - Σ_{v=1..3} (|D^v| / |D|) × Ent(D^v)

where, for each sample set D^v, the proportion of samples whose underlying surface category is c is p_c and the information entropy of D^v is Ent(D^v) = -Σ_c p_c log2(p_c);
Repeating the above steps, the information gain obtained by dividing the 800 training samples according to each of the remaining 20 attributes is calculated; the information gains of the 21 attributes are compared, and the attribute with the maximum information gain is taken as the first node. In fig. 3 the 21 attributes are replaced by numbers, and the value or value range of each branch is marked, as an illustrative description;
(3) Under the attribute of the first node there are 3 branches, and the value or value range of each branch is marked. Step (2) is then repeated: in the sample sets of 260, 240 and 300 samples under the 3 branches, the information gains obtained by dividing with the remaining 20 attributes are calculated and compared respectively, and the attribute with the maximum information gain under the 1st, 2nd and 3rd branch is taken as the node on the 1st, 2nd and 3rd branch under the first node,
(4) Each node divides its sample set into 2 branches; step (3) is repeated to find the nodes on the newly divided 2 branches and on their next layer of branches, and so on until the underlying surface category is obtained, completing the construction of one decision tree,
(5) Steps (1) to (4) are repeated b = 80 times, constructing the decision tree model shown in fig. 3,
(VI) classifying the unknown remote sensing images
According to the decision tree model, the unknown remote sensing image is scanned point by point, and each scanned point is processed by steps (two) to (three) so that 21 attribute data are obtained from each scanning point. The 21 attribute data of each scanning point are input into the decision tree model and classified in each decision tree according to the values or value ranges marked on the nodes and branches, finally yielding an underlying surface category per tree. The underlying surface categories obtained for each scanning point in the b decision trees are counted, and the most frequent category is taken as the final classification result of that scanning point;
(VII) color rendering is carried out on the classified result
As shown in fig. 5, the roads in the classification result of step (six) are rendered red, with the band data set to 255, 0, 0; the greenbelt green, with band data 0, 255, 0; the pavement blue, with band data 0, 0, 255; the water body yellow, with band data 255, 255, 0; the bare land white, with band data 255, 255, 255; and the roof black, with band data 0, 0, 0. Because the figures of the patent document cannot be colored, each color is replaced in fig. 5 by a distinct hatching mark.
The above-mentioned embodiments are merely illustrative of the preferred embodiments of the present invention, and do not limit the scope of the present invention, and various modifications and improvements of the technical solution of the present invention by those skilled in the art should fall within the protection scope defined by the claims of the present invention without departing from the spirit of the present invention.
Claims (7)
1. A remote sensing image underlying surface extraction method comprises the following steps:
(one) extracting data
Selecting a plurality of representative pixel points (A) in a plurality of known remote sensing images and extracting their band data (DN1, DN2, DN3), wherein the representative pixel points (A) cover c underlying surface categories: roads, greenbelts, pavements, water bodies, bare land and roofs;
secondly, generating a gray level co-occurrence matrix by taking the pixel point A as the center
(I) In order to reduce the size of the gray level co-occurrence matrix, the maximum gray value is limited to a specified number of levels k, and the gray value of each band in the remote sensing image is reduced to

g' = g × k / G

where G is the maximum gray value of the original band, g' is the reduced gray value of the pixel point, and g is the gray value of the pixel point before reduction;
(II) Within the band where the pixel point (A) is located, a window is taken with the pixel point (A) as its geometric center, its length being m pixel points and its width being n pixel points. With the pixel point at the lower-left corner of the window as the origin of coordinates, the m pixel points along the length as the X axis and the n pixel points along the width as the Y axis, a window coordinate system is established, and the gray values of all pixel points within it are read. The gray value pairs (g1, g2) of two adjacent pixel points are then found along the X axis direction and the directions parallel to it, along the Y axis direction and the directions parallel to it, or along the diagonal at 45 degrees to the X and Y axes and the directions parallel to it, and the number of adjacent pixel point pairs in the window is counted;
(III) A gray level co-occurrence matrix of k rows and k columns is established, and the number of occurrences of each gray value pair (g1, g2) of two adjacent pixel points in the window is recorded in it: each adjacent pair with gray values (g1, g2) adds 1 to row g1, column g2 of the matrix, and the gray value pairs of all adjacent pixel point pairs in the window are recorded in the gray level co-occurrence matrix;
(IV) The data in the gray level co-occurrence matrix are normalized, i.e. each count in the gray level co-occurrence matrix is divided by the number of adjacent pixel point pairs in the window, giving the probability of occurrence of each gray value pair;
Thirdly, calculating the energy, entropy, contrast, inverse difference, dissimilarity and correlation texture parameters of the pixel point (A) in the band from the gray level co-occurrence matrix
The calculation formula of the energy (ASM) of the pixel point (A) is:

ASM = Σ_i Σ_j p(i, j)^2

where p(i, j) is the value in the ith row and jth column of the normalized gray level co-occurrence matrix;
(IV) finding out all data of a plurality of representative pixel points (A) in a plurality of remote sensing images
Repeating step (two) and step (three), the six texture parameters (energy, entropy, contrast, inverse difference, dissimilarity, correlation) of the pixel point (A) are found for each band. The band data (DN1, DN2, DN3) of the pixel point (A), together with the six texture parameters under each band, are taken as one sample datum, so that one sample datum contains n = 3 + 3 × 6 = 21 attributes. In the same way, the band data (DN1, DN2, DN3) and the per-band texture parameters of a plurality of representative pixel points (A) in a plurality of remote sensing images are found, forming a training sample set;
(V) constructing a decision tree model by using a random forest model
(1) Randomly selecting, with replacement, a samples from the training sample set to establish a decision tree, the attributes of the training samples being a_1, a_2, ..., a_n with n = 21. For the current sample set D, the proportion of samples whose underlying surface category is c is p_c, and the information entropy of D is:

Ent(D) = -Σ_c p_c log2(p_c)
(2) Among the a training samples, division is performed according to a certain attribute a_i of the samples: attribute a_i divides the a training samples into V branches (v = 1, 2, ..., V), and D^v is the sample set contained in the vth branch. The information gain obtained by dividing with attribute a_i is:

Gain(D, a_i) = Ent(D) - Σ_{v=1..V} (|D^v| / |D|) × Ent(D^v)

where, for each sample set D^v, the proportion of samples whose underlying surface category is c is p_c and the information entropy of D^v is Ent(D^v) = -Σ_c p_c log2(p_c);
Repeating the above steps, the information gain obtained by dividing the a training samples according to each attribute is calculated; the information gains of the 21 attributes are compared, and the attribute with the maximum information gain is taken as the first node;
(3) Under the attribute of the first node there are V branches, and the value or value range of each branch is marked. Step (2) is then repeated: in the sample set D^v under the vth branch, the information gains obtained by dividing with the remaining 20 attributes are calculated and compared, and the attribute with the maximum information gain under each of branches 1, ..., V is taken as the node on that branch under the first node;
(4) Each node divides its sample set into V branches (the number V may differ from node to node); step (3) is repeated to find the nodes on the newly divided V branches and on their next layer of branches, and so on until the underlying surface category is obtained, completing the construction of one decision tree;
(5) repeating the step (1) to the step (4) for b times to construct a decision tree model;
(VI) classifying the unknown remote sensing images
According to the decision tree model, the unknown remote sensing image is scanned point by point, and each scanned point is processed by steps (two) to (three) so that 21 attribute data are obtained from each scanning point. The 21 attribute data of each scanning point are input into the decision tree model and classified in each decision tree according to the values or value ranges marked on the nodes and branches, finally yielding an underlying surface category per tree. The underlying surface categories obtained for each scanning point in the b decision trees are counted, and the most frequent category is taken as the final classification result of that scanning point;
(VII) color rendering is carried out on the classified result
Rendering the roads in the classification result of step (six) red, with the band data set to 255, 0, 0; the greenbelt green, with band data 0, 255, 0; the pavement blue, with band data 0, 0, 255; the water body yellow, with band data 255, 255, 0; the bare land white, with band data 255, 255, 255; and the roof black, with band data 0, 0, 0.
2. The method for extracting the remote sensing image underlying surface as claimed in claim 1, wherein: in the step (one), the number of the selected representative pixel points (A) in each underlying surface category is the same.
5. The method for extracting the remote sensing image underlying surface as claimed in claim 4, wherein: in step (five), the number of branches divided into V under each attribute is different, and V is 1 to 30.
6. The method for extracting the underlying surface of the remote sensing image as claimed in claim 5, wherein: in the step (V), the number a of the samples is 800 to 1000.
7. The method for extracting the underlying surface of the remote sensing image as claimed in claim 6, wherein: in the step (five), b decision trees are constructed, wherein b is 50-100.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910555254.2A CN112131912A (en) | 2019-06-25 | 2019-06-25 | Remote sensing image underlying surface extraction method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112131912A true CN112131912A (en) | 2020-12-25 |
Family
ID=73850061
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910555254.2A Pending CN112131912A (en) | 2019-06-25 | 2019-06-25 | Remote sensing image underlying surface extraction method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112131912A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112949657A (en) * | 2021-03-09 | 2021-06-11 | 河南省现代农业大数据产业技术研究院有限公司 | Forest land distribution extraction method and device based on remote sensing image texture features |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104915636A (en) * | 2015-04-15 | 2015-09-16 | 北京工业大学 | Remote sensing image road identification method based on multistage frame significant characteristics |
CN105279519A (en) * | 2015-09-24 | 2016-01-27 | 四川航天系统工程研究所 | Remote sensing image water body extraction method and system based on cooperative training semi-supervised learning |
CN108229425A (en) * | 2018-01-29 | 2018-06-29 | 浙江大学 | A kind of identifying water boy method based on high-resolution remote sensing image |
CN108256424A (en) * | 2017-12-11 | 2018-07-06 | 中交信息技术国家工程实验室有限公司 | A kind of high-resolution remote sensing image method for extracting roads based on deep learning |
US20190171879A1 (en) * | 2017-12-04 | 2019-06-06 | Transport Planning and Research Institute Ministry of Transport | Method of extracting warehouse in port from hierarchically screened remote sensing image |
History
- 2019-06-25: Application CN201910555254.2A filed in China (CN); published as CN112131912A; status active, Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107392925B (en) | Remote sensing image ground object classification method based on super-pixel coding and convolutional neural network | |
CN110544251B (en) | Dam crack detection method based on multi-migration learning model fusion | |
CN107657257A (en) | A kind of semantic image dividing method based on multichannel convolutive neutral net | |
CN107038416B (en) | Pedestrian detection method based on binary image improved HOG characteristics | |
CN104809481A (en) | Natural scene text detection method based on adaptive color clustering | |
CN111105389B (en) | Road surface crack detection method integrating Gabor filter and convolutional neural network | |
CN114387528A (en) | Pine nematode disease monitoring space-air-ground integrated monitoring method | |
CN105469098A (en) | Precise LINDAR data ground object classification method based on adaptive characteristic weight synthesis | |
CN110807485B (en) | Method for fusing two-classification semantic segmentation maps into multi-classification semantic map based on high-resolution remote sensing image | |
CN104077612B (en) | A kind of insect image-recognizing method based on multiple features rarefaction representation technology | |
CN109740572A (en) | A kind of human face in-vivo detection method based on partial color textural characteristics | |
CN111709901A (en) | Non-multiple multi/hyperspectral remote sensing image color homogenizing method based on FCM cluster matching and Wallis filtering | |
CN112417931B (en) | Method for detecting and classifying water surface objects based on visual saliency | |
CN108985363A (en) | A kind of cracks in reinforced concrete bridge classifying identification method based on RBPNN | |
CN112488050A (en) | Color and texture combined aerial image scene classification method and system | |
CN107992856A (en) | High score remote sensing building effects detection method under City scenarios | |
CN105868683A (en) | Channel logo identification method and apparatus | |
CN103295019B (en) | A kind of Chinese fragment self-adaptive recovery method based on probability-statistics | |
CN115661777A (en) | Semantic-combined foggy road target detection algorithm | |
CN116740121A (en) | Straw image segmentation method based on special neural network and image preprocessing | |
CN112131912A (en) | Remote sensing image underlying surface extraction method | |
CN113505856A (en) | Hyperspectral image unsupervised self-adaptive classification method | |
CN111882573A (en) | Cultivated land plot extraction method and system based on high-resolution image data | |
CN113516193B (en) | Image processing-based red date defect identification and classification method and device | |
CN116206156A (en) | Pavement crack classification and identification method under shadow interference |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||