CN112131912A - Remote sensing image underlying surface extraction method - Google Patents

Remote sensing image underlying surface extraction method

Info

Publication number
CN112131912A
CN112131912A (application CN201910555254.2A)
Authority
CN
China
Prior art keywords
pixel point
remote sensing
underlying surface
data
gray level
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910555254.2A
Other languages
Chinese (zh)
Inventor
曲兆松
谢宗彦
王希花
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sinfotek Science And Technology Co ltd
Original Assignee
Beijing Sinfotek Science And Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sinfotek Science And Technology Co ltd filed Critical Beijing Sinfotek Science And Technology Co ltd
Priority to CN201910555254.2A priority Critical patent/CN112131912A/en
Publication of CN112131912A publication Critical patent/CN112131912A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/176 Urban or other man-made structures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/40 Analysis of texture
    • G06T7/41 Analysis of texture based on statistical description of texture
    • G06T7/45 Analysis of texture based on statistical description of texture using co-occurrence matrix computation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/182 Network patterns, e.g. roads or rivers

Abstract

The invention relates to a remote sensing image underlying surface extraction method. A plurality of representative pixel points (A) are selected in a plurality of known remote sensing images; in each band, a gray level co-occurrence matrix is generated with the pixel point (A) as the center, and the energy texture, entropy, contrast, inverse difference, dissimilarity and correlation parameters of the pixel point (A) in that band are calculated. From these parameters a decision tree model is constructed with a random forest model; unknown remote sensing images are then scanned point by point according to the decision tree model to obtain the underlying surface category of each point, and the underlying surface categories in the classification result are color-rendered.

Description

Remote sensing image underlying surface extraction method
Technical Field
The invention relates to a method for extracting the underlying surface of a remote sensing image, in particular to a processing method that applies image processing and machine learning to extract the underlying surface of a remote sensing image.
Background
When the underlying surface is extracted from remote sensing images, errors occur in extracting river channel shorelines and in extracting land feature information such as greenbelts, roads, roofs, pavements and bare land; these errors are large, resulting in a heavy workload of manual intervention.
At present, remote sensing image classification and identification in ENVI software can only be trained on a single remote sensing image; that is, the training data and the classification model are obtained from a single image. Because ENVI trains on a single remote sensing image, the trained classification model recognizes the image used for training well, but the generalization ability of the model is limited.
Disclosure of Invention
The invention aims to provide a remote sensing image underlying surface extraction method that can classify and identify greenbelts, roads, roofs, pavements, bare land and water bodies in a remote sensing image and then render each identified category in a different color.
In order to achieve the purpose of the invention, the following technical scheme is adopted in the application:
the invention discloses a remote sensing image underlying surface extraction method, which comprises the following steps:
(one) extracting data
Selecting a plurality of representative pixel points (A) in a plurality of known remote sensing images and extracting their band data (DN1, DN2, DN3), wherein the representative pixel points (A) cover c underlying surface categories, which comprise: roads, greenbelts, pavements, water bodies, bare land and roofs;
(two) Generating a gray level co-occurrence matrix with the pixel point (A) as the center
(I) To reduce the size of the gray level co-occurrence matrix, the maximum gray value is designated as $k$, and the gray values of one band of the remote sensing image are reduced accordingly:

$g' = \mathrm{round}\!\left( \frac{g \times k}{G_{\max}} \right)$

wherein $G_{\max}$ is the maximum gray value of the original band, $g'$ is the reduced gray value of the pixel point, and $g$ is the gray value of the pixel point before reduction (a code sketch of sub-steps (I) to (IV) follows sub-step (IV) below);
(II) Within the band in which the pixel point (A) lies, a window of length $m$ pixel points and width $n$ pixel points is taken with the pixel point (A) as its geometric center. Taking the pixel point at the lower left corner of the window as the coordinate origin, the length of $m$ pixel points as the X axis and the width of $n$ pixel points as the Y axis, a window coordinate system is established. The gray values of all pixel points within the window coordinate system are found, and the gray value pairs $(g_1, g_2)$ of adjacent pixel point pairs are found along the X axis direction and directions parallel to the X axis, along the Y axis direction and directions parallel to the Y axis, or along a diagonal at 45 degrees to the X and Y axes and directions parallel to that diagonal; the number of adjacent pixel point pairs in the window is then counted;
(III) A gray level co-occurrence matrix of $k$ rows and $k$ columns is established, and the number of occurrences of the gray value pairs $(g_1, g_2)$ of adjacent pixel point pairs in the window is recorded in it: each occurrence of a gray value pair $(g_1, g_2)$ adds 1 to the entry in row $g_1$, column $g_2$ of the matrix, until the gray value pairs $(g_1, g_2)$ of all adjacent pixel point pairs in the window have been recorded in the gray level co-occurrence matrix;
(IV) The data in the gray level co-occurrence matrix are normalized: each count in the matrix is divided by the number of adjacent pixel point pairs in the window, giving the probability $p(i,j)$ of occurrence of each gray value pair;
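A minimal sketch of sub-steps (I) to (IV) under one offset direction, as referenced above; function and variable names are assumptions rather than the patented implementation, and gray levels are mapped to 0..k-1 here so that the k x k matrix can be indexed directly:

    import numpy as np

    def reduce_gray_levels(band, k=16):
        """Sub-step (I): rescale one band's gray values onto k levels, 0..k-1.

        Assumes a non-empty band with a positive maximum gray value.
        """
        g_max = float(band.max())
        return np.round(band.astype(np.float64) * (k - 1) / g_max).astype(np.int32)

    def glcm(window, k, dx=1, dy=1):
        """Sub-steps (II)-(IV): count gray value pairs at offset (dx, dy), then
        normalise the counts to probabilities p(i, j).

        Offsets (1, 0), (0, 1) and (1, 1) correspond to the X, Y and 45-degree
        directions named in the text.
        """
        mat = np.zeros((k, k), dtype=np.float64)
        h, w = window.shape
        for y in range(h):
            for x in range(w):
                x2, y2 = x + dx, y + dy
                if 0 <= x2 < w and 0 <= y2 < h:
                    mat[window[y, x], window[y2, x2]] += 1.0  # record one pair
        total = mat.sum()
        return mat / total if total > 0 else mat  # probabilities p(i, j)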
(three) Calculating the energy texture, entropy, contrast, inverse difference, dissimilarity and correlation parameters of the pixel point (A) in the band from the gray level co-occurrence matrix

The energy texture of the pixel point (A) is calculated as:

$ASM = \sum_{i=1}^{k} \sum_{j=1}^{k} p(i,j)^{2}$

The entropy of the pixel point (A) is calculated as:

$ENT = -\sum_{i=1}^{k} \sum_{j=1}^{k} p(i,j) \ln p(i,j)$

The contrast of the pixel point (A) is calculated as:

$CON = \sum_{i=1}^{k} \sum_{j=1}^{k} (i-j)^{2} \, p(i,j)$

The dissimilarity of the pixel point (A) is calculated as:

$DIS = \sum_{i=1}^{k} \sum_{j=1}^{k} |i-j| \, p(i,j)$

The correlation of the pixel point (A) is calculated as:

$COR = \sum_{i=1}^{k} \sum_{j=1}^{k} \frac{(i-\mu_i)(j-\mu_j)\, p(i,j)}{\sigma_i \sigma_j}$

wherein $\mu$ is the mean and $\sigma$ is the standard deviation:

$\mu_i = \sum_{i=1}^{k} i \sum_{j=1}^{k} p(i,j)$,  $\mu_j = \sum_{j=1}^{k} j \sum_{i=1}^{k} p(i,j)$,

$\sigma_i = \sqrt{\sum_{i=1}^{k} (i-\mu_i)^{2} \sum_{j=1}^{k} p(i,j)}$,  $\sigma_j = \sqrt{\sum_{j=1}^{k} (j-\mu_j)^{2} \sum_{i=1}^{k} p(i,j)}$

The inverse difference of the pixel point (A) is calculated as:

$IDM = \sum_{i=1}^{k} \sum_{j=1}^{k} \frac{p(i,j)}{1+(i-j)^{2}}$

wherein $p(i,j)$ is the value in row $i$, column $j$ of the normalized gray level co-occurrence matrix;
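A sketch of the six parameters computed from the normalized matrix p, following the standard Haralick forms named above (the patented variants may differ in detail; the dictionary keys are assumptions):

    import numpy as np

    def texture_features(p):
        """Compute ASM, ENT, CON, DIS, COR and IDM from a normalised GLCM p."""
        k = p.shape[0]
        i, j = np.indices((k, k))
        eps = 1e-12                                   # guards log(0) and 0/0
        mu_i = (i * p).sum()
        mu_j = (j * p).sum()
        sigma_i = np.sqrt((((i - mu_i) ** 2) * p).sum())
        sigma_j = np.sqrt((((j - mu_j) ** 2) * p).sum())
        return {
            "ASM": (p ** 2).sum(),                            # energy texture
            "ENT": -(p * np.log(p + eps)).sum(),              # entropy
            "CON": (((i - j) ** 2) * p).sum(),                # contrast
            "DIS": (np.abs(i - j) * p).sum(),                 # dissimilarity
            "COR": (((i - mu_i) * (j - mu_j) * p).sum()
                    / (sigma_i * sigma_j + eps)),             # correlation
            "IDM": (p / (1.0 + (i - j) ** 2)).sum(),          # inverse difference
        }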
(four) Finding out all the data of the plurality of representative pixel points (A) in the plurality of remote sensing images

Steps (two) and (three) are repeated to find the parameters ($ASM$, $ENT$, $CON$, $IDM$, $DIS$, $COR$) of the pixel point (A) under each band. The band data (DN1, DN2, DN3) of the pixel point (A), together with ($ASM$, $ENT$, $CON$, $IDM$, $DIS$, $COR$) under each band, are taken as one sample, so that one sample contains n = 21 attributes (three band values plus six texture parameters for each of the three bands). By analogy, the band data (DN1, DN2, DN3) and the per-band parameters ($ASM$, $ENT$, $CON$, $IDM$, $DIS$, $COR$) of the plurality of representative pixel points (A) in the plurality of remote sensing images are found, forming a training sample set;
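A sketch of assembling the n = 21 attributes of one pixel from the helpers above (the window half-width and all names are assumptions, and the pixel is assumed to lie away from the image border):

    def sample_for_pixel(bands, y, x, k=16, half=2):
        """bands: three 2-D arrays (one per band); returns the 21-attribute vector."""
        attrs = [float(b[y, x]) for b in bands]                  # DN1, DN2, DN3
        for b in bands:
            reduced = reduce_gray_levels(b, k)
            win = reduced[y - half:y + half + 1, x - half:x + half + 1]
            feats = texture_features(glcm(win, k, dx=1, dy=1))   # 45-degree pairs
            attrs += [feats[name]
                      for name in ("ASM", "ENT", "CON", "DIS", "COR", "IDM")]
        return attrs                                             # 3 + 3 * 6 = 21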
(five) Constructing a decision tree model by using a random forest model
(1) Randomly select $a$ samples, with replacement, from the training sample set and establish a decision tree. The attributes of the training samples are $a_1, a_2, \ldots, a_n$ with n = 21. The proportion $p_c$ of samples whose underlying surface category is $c$ is calculated in the current sample set $D$; the information entropy of $D$ is then:

$\mathrm{Ent}(D) = -\sum_{c=1}^{C} p_c \log_2 p_c$    ---- formula (1)
(2) Among the $a$ training samples, divide according to a certain attribute $a_t$ of the samples: if dividing by attribute $a_t$ splits the $a$ training samples into V branches, then $D^v$ ($v = 1, 2, \ldots, V$) is the sample set contained in the v-th branch, and the information gain obtained by dividing the sample set $D$ by attribute $a_t$ is:

$\mathrm{Gain}(D, a_t) = \mathrm{Ent}(D) - \sum_{v=1}^{V} \frac{|D^v|}{|D|}\, \mathrm{Ent}(D^v)$    ---- formula (2)
The proportion $p_c$ of samples whose underlying surface category is $c$ is calculated in the sample set $D^v$; the information entropy of $D^v$ is then:

$\mathrm{Ent}(D^v) = -\sum_{c=1}^{C} p_c \log_2 p_c$    ---- formula (3)
The above steps are repeated to calculate the information gain of dividing the $a$ training samples by each attribute in turn; the information gains of the 21 attributes are compared, and the attribute with the maximum information gain is taken as the first node (a code sketch of formulas (1) to (3) and the tree-bagging loop follows step (5) below);
(3) There are V branches under the attribute of the first node, and the value or value range of each branch is marked. Step (2) is then repeated: in the sample set $D^v$ under the v-th branch, the information gain obtained by dividing by each of the remaining 20 attributes is calculated; the information gains of the 20 attributes are compared within the sample sets $D^v$ under the 1st ... V-th branches, and the attribute with the maximum information gain under each of the 1st ... V-th branches is taken as the node on that branch under the first node;

(4) Each sample set $D^v$ is in turn divided into V branches; step (3) is repeated to find the nodes on the newly divided V branches and the next layer of branches below those nodes, and so on until an underlying surface category is reached, completing the construction of one decision tree;
(5) Steps (1) to (4) are repeated b times to construct the decision tree model;
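A sketch of formulas (1) to (3) and the bagging loop of step (five), as referenced above. The helper names are assumptions, and scikit-learn's binary entropy-criterion tree stands in for the multiway information-gain tree described in the text:

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def entropy(labels):
        """Formulas (1) and (3): Ent(D) = -sum_c p_c * log2(p_c)."""
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    def information_gain(labels, branch_labels):
        """Formula (2): Ent(D) minus the size-weighted entropies of the V branches."""
        n = len(labels)
        return entropy(labels) - sum(len(b) / n * entropy(b) for b in branch_labels)

    def build_forest(X, y, a=800, b=80, seed=0):
        """Step (five): grow b trees, each on a samples drawn with replacement.

        X, y: numpy arrays of shape (N, 21) and (N,).
        """
        rng = np.random.default_rng(seed)
        forest = []
        for _ in range(b):
            idx = rng.integers(0, len(X), size=a)     # bootstrap sample
            tree = DecisionTreeClassifier(criterion="entropy").fit(X[idx], y[idx])
            forest.append(tree)
        return forest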
(six) Classifying the unknown remote sensing images
According to the decision tree model, the unknown remote sensing image is scanned point by point, and each scanned point is processed by steps (two) to (three), yielding 21 attribute data per scanning point. The 21 attribute data of each scanning point are input into the decision tree model and classified in each decision tree according to the values or value ranges marked on the nodes and branches, finally yielding an underlying surface category. The underlying surface categories obtained for each scanning point in the b decision trees are counted, and the category that occurs most often is taken as the final classification result for that scanning point;
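A sketch of the per-point majority vote (tree and attribute names as in the sketches above):

    from collections import Counter

    def classify_pixel(forest, attrs):
        """Step (six): majority vote of the b trees for one 21-attribute vector."""
        votes = Counter(tree.predict([attrs])[0] for tree in forest)
        return votes.most_common(1)[0][0]    # the most frequent category wins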
(seven) Color rendering of the classification result
The roads in the classification result of step (six) are rendered red, with band data set to (255, 0, 0); greenbelts are rendered green, with band data (0, 255, 0); pavements are rendered blue, with band data (0, 0, 255); water bodies are rendered yellow, with band data (255, 255, 0); bare land is rendered white, with band data (255, 255, 255); roofs are rendered black, with band data (0, 0, 0).
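A sketch of the rendering step; the integer category codes are assumptions, while the RGB triples are those listed above:

    import numpy as np

    PALETTE = {
        0: (255, 0, 0),        # road: red
        1: (0, 255, 0),        # greenbelt: green
        2: (0, 0, 255),        # pavement: blue
        3: (255, 255, 0),      # water body: yellow
        4: (255, 255, 255),    # bare land: white
        5: (0, 0, 0),          # roof: black
    }

    def render(class_map):
        """Map an H x W array of category codes to an H x W x 3 RGB image."""
        out = np.zeros(class_map.shape + (3,), dtype=np.uint8)
        for code, rgb in PALETTE.items():
            out[class_map == code] = rgb
        return out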
In the remote sensing image underlying surface extraction method of the invention: in step (one), the number of selected representative pixel points (A) is the same for each underlying surface category.
In the remote sensing image underlying surface extraction method of the invention: in sub-step (II), the window length m is 3 to 10 pixel points and the window width n is 3 to 10 pixel points.
In the remote sensing image underlying surface extraction method of the invention: in sub-step (I), the designated maximum gray value k is 10 to 20.
In the remote sensing image underlying surface extraction method of the invention: in step (five), the number V of branches divided under each attribute may differ, V being 1 to 30.
In the remote sensing image underlying surface extraction method of the invention: in step (five), the number a of samples is 800 to 1000.
In the remote sensing image underlying surface extraction method of the invention: in step (five), b decision trees are constructed, b being 50 to 100.
Compared with an unprocessed remote sensing image, a remote sensing image processed by the remote sensing image underlying surface extraction method of the invention shows the underlying surface categories more clearly: roads, greenbelts, pavements, water bodies, bare land and roofs.
Drawings
FIG. 1 is a schematic diagram of the window coordinate system in sub-step (II) of the remote sensing image underlying surface extraction method of the present invention;
FIG. 2 is a schematic diagram of the adjacent pixel point pairs found in FIG. 1;
FIG. 3 is a schematic diagram of the decision tree model constructed in step (five) of the remote sensing image underlying surface extraction method of the present invention;
FIG. 4 is an original remote sensing image;
FIG. 5 is a schematic diagram of the classification, recognition and rendering result obtained after the original remote sensing image of FIG. 4 is processed by the remote sensing image underlying surface extraction method of the present invention.
Detailed Description
The method for extracting the underlying surface of the remote sensing image comprises the following steps:
(one) extracting data
As shown in FIG. 4, a plurality of representative pixel points (A) are selected in a plurality of known remote sensing images and their band data (DN1, DN2, DN3) are extracted. The representative pixel points (A) cover c underlying surface categories, which comprise: roads, greenbelts, pavements, water bodies, bare land and roofs; the number of selected representative pixel points (A) is the same for each underlying surface category;
(two) Generating a gray level co-occurrence matrix with the pixel point (A) as the center
(I) To reduce the size of the gray level co-occurrence matrix, the maximum gray value is designated as k = 16, and the gray values of one band of the remote sensing image are reduced to at most 16:

$g' = \mathrm{round}\!\left( \frac{g \times k}{G_{\max}} \right)$

wherein $G_{\max}$ is the maximum gray value of the original band, $g'$ is the reduced gray value of the pixel point, and $g$ is the gray value of the pixel point before reduction;
(II) Within the band in which the pixel point (A) lies, as shown in FIG. 1, a window of length 5 pixel points and width 5 pixel points is taken with the pixel point (A) as its geometric center. Taking the pixel point at the lower left corner of the window as the coordinate origin, the length of 5 pixel points as the X axis and the width of 5 pixel points as the Y axis, a window coordinate system is established. The gray values of all pixel points within the window coordinate system are found and, as shown in FIG. 2, the gray value pairs $(g_1, g_2)$ of adjacent pixel point pairs are found along the diagonal at 45 degrees to the X and Y axes and directions parallel to that diagonal; the number of such adjacent pixel point pairs in the window is 16;
(III) A gray level co-occurrence matrix of k rows and k columns is established, and the number of occurrences of the gray value pairs $(g_1, g_2)$ of adjacent pixel point pairs in the window is recorded in it: each occurrence of a gray value pair $(g_1, g_2)$ adds 1 to the entry in row $g_1$, column $g_2$ of the matrix, until the gray value pairs $(g_1, g_2)$ of all adjacent pixel point pairs in the window have been recorded in the gray level co-occurrence matrix;
(IV) The data in the gray level co-occurrence matrix are normalized: each count in the matrix is divided by the number of adjacent pixel point pairs in the window, here 16 pairs, giving the probability $p(i,j)$ of occurrence of each gray value pair;
(three) Calculating the energy texture, entropy, contrast, inverse difference, dissimilarity and correlation parameters of the pixel point (A) in the band from the gray level co-occurrence matrix

The energy texture of the pixel point (A) is calculated as:

$ASM = \sum_{i=1}^{k} \sum_{j=1}^{k} p(i,j)^{2}$

The entropy of the pixel point (A) is calculated as:

$ENT = -\sum_{i=1}^{k} \sum_{j=1}^{k} p(i,j) \ln p(i,j)$

The contrast of the pixel point (A) is calculated as:

$CON = \sum_{i=1}^{k} \sum_{j=1}^{k} (i-j)^{2} \, p(i,j)$

The dissimilarity of the pixel point (A) is calculated as:

$DIS = \sum_{i=1}^{k} \sum_{j=1}^{k} |i-j| \, p(i,j)$

The correlation of the pixel point (A) is calculated as:

$COR = \sum_{i=1}^{k} \sum_{j=1}^{k} \frac{(i-\mu_i)(j-\mu_j)\, p(i,j)}{\sigma_i \sigma_j}$

wherein $\mu$ is the mean and $\sigma$ is the standard deviation:

$\mu_i = \sum_{i=1}^{k} i \sum_{j=1}^{k} p(i,j)$,  $\mu_j = \sum_{j=1}^{k} j \sum_{i=1}^{k} p(i,j)$,

$\sigma_i = \sqrt{\sum_{i=1}^{k} (i-\mu_i)^{2} \sum_{j=1}^{k} p(i,j)}$,  $\sigma_j = \sqrt{\sum_{j=1}^{k} (j-\mu_j)^{2} \sum_{i=1}^{k} p(i,j)}$

The inverse difference of the pixel point (A) is calculated as:

$IDM = \sum_{i=1}^{k} \sum_{j=1}^{k} \frac{p(i,j)}{1+(i-j)^{2}}$

wherein $p(i,j)$ is the value in row $i$, column $j$ of the normalized gray level co-occurrence matrix;
(four) Finding out all the data of the plurality of representative pixel points (A) in the plurality of remote sensing images

Steps (two) and (three) are repeated to find the parameters ($ASM$, $ENT$, $CON$, $IDM$, $DIS$, $COR$) of the pixel point (A) under each band. The band data (DN1, DN2, DN3) of the pixel point (A), together with ($ASM$, $ENT$, $CON$, $IDM$, $DIS$, $COR$) under each band, are taken as one sample, so that one sample contains n = 21 attributes. By analogy, the band data (DN1, DN2, DN3) and the per-band parameters ($ASM$, $ENT$, $CON$, $IDM$, $DIS$, $COR$) of the plurality of representative pixel points (A) in the plurality of remote sensing images are found, forming a training sample set;
(five) Constructing a decision tree model by using a random forest model
(1) Randomly select a = 800 samples, with replacement, from the training sample set and establish a decision tree. The attributes of the training samples are $a_1, a_2, \ldots, a_n$ with n = 21. The proportion $p_c$ of samples whose underlying surface category is $c$ is calculated in the current sample set $D$, for example the proportions $p_1, p_2, \ldots, p_6$ of the six underlying surface categories; the information entropy of $D$ is then:

$\mathrm{Ent}(D) = -\sum_{c=1}^{C} p_c \log_2 p_c$    ---- formula (1)
(2) As shown in FIG. 3, among the 800 training samples, division is made according to the energy texture ASM of the first band DN1 of the pixel point (A); the 800 training samples are divided into V = 3 branches, whose sample sets $D^v$ contain 260, 240 and 300 samples respectively. The information gain obtained by dividing the sample set $D$ by the energy texture ASM of the first band DN1 is:

$\mathrm{Gain}(D, a_t) = \mathrm{Ent}(D) - \sum_{v=1}^{V} \frac{|D^v|}{|D|}\, \mathrm{Ent}(D^v)$    ---- formula (2)
The proportion $p_c$ of samples whose underlying surface category is $c$ is calculated in each sample set $D^v$; the information entropy of $D^v$ is then:

$\mathrm{Ent}(D^v) = -\sum_{c=1}^{C} p_c \log_2 p_c$    ---- formula (3)
The above steps are repeated to calculate the information gain of dividing the 800 training samples by each of the remaining 20 attributes; the information gains of the 21 attributes are compared, and the attribute with the maximum information gain is taken as the first node. In FIG. 3, the 21 attributes are replaced by numbers, and the value or value range marked on each branch is illustrative;
(3) There are 3 branches under the attribute of the first node, and the value or value range of each branch is marked. Step (2) is then repeated: in the sample sets of 260, 240 and 300 samples under the 3 branches, the information gain obtained by dividing by each of the remaining 20 attributes is calculated; the information gains of the 20 attributes are compared within each of these sample sets, and the attribute with the maximum information gain under the 1st, 2nd and 3rd branch respectively is taken as the node on the 1st, 2nd and 3rd branch under the first node;
(4) Each sample set $D^v$ is in turn divided into 2 parts; step (3) is repeated to find the nodes on the newly divided 2 branches and the next layer of branches below those nodes, and so on until an underlying surface category is reached, completing the construction of one decision tree;
(5) Steps (1) to (4) are repeated b = 80 times, constructing the decision tree model shown in FIG. 3;
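For the 3-way split of the 800 samples above, the size weights $|D^v|/|D|$ in formula (2) can be checked directly:

    sizes = [260, 240, 300]                      # samples in the three branches
    weights = [s / sum(sizes) for s in sizes]    # the |D^v| / |D| terms
    print(weights)                               # [0.325, 0.3, 0.375]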
(six) Classifying the unknown remote sensing images
According to the decision tree model, the unknown remote sensing image is scanned point by point, and each scanned point is processed by steps (two) to (three), yielding 21 attribute data per scanning point. The 21 attribute data of each scanning point are input into the decision tree model and classified in each decision tree according to the values or value ranges marked on the nodes and branches, finally yielding an underlying surface category. The underlying surface categories obtained for each scanning point in the b decision trees are counted, and the category that occurs most often is taken as the final classification result for that scanning point;
(seven) Color rendering of the classification result
As shown in FIG. 5, the roads in the classification result of step (six) are rendered red, with band data set to (255, 0, 0); because the drawings of a patent document cannot be colored, red is replaced by a reference label in the figure. Likewise, greenbelts are rendered green, with band data (0, 255, 0); pavements are rendered blue, with band data (0, 0, 255); water bodies are rendered yellow, with band data (255, 255, 0); bare land is rendered white, with band data (255, 255, 255); and roofs are rendered black, with band data (0, 0, 0); each color is replaced by its own reference label in the figure.
The above-mentioned embodiments are merely illustrative of the preferred embodiments of the present invention, and do not limit the scope of the present invention, and various modifications and improvements of the technical solution of the present invention by those skilled in the art should fall within the protection scope defined by the claims of the present invention without departing from the spirit of the present invention.

Claims (7)

1. A remote sensing image underlying surface extraction method, comprising the following steps:
(one) extracting data
Selecting a plurality of representative pixel points (A) in a plurality of known remote sensing images and extracting their band data (DN1, DN2, DN3), wherein the representative pixel points (A) cover c underlying surface categories, which comprise: roads, greenbelts, pavements, water bodies, bare land and roofs;
(two) generating a gray level co-occurrence matrix with the pixel point (A) as the center
(I) To reduce the size of the gray level co-occurrence matrix, the maximum gray value is designated as $k$, and the gray values of one band of the remote sensing image are reduced accordingly:

$g' = \mathrm{round}\!\left( \frac{g \times k}{G_{\max}} \right)$

wherein $G_{\max}$ is the maximum gray value of the original band, $g'$ is the reduced gray value of the pixel point, and $g$ is the gray value of the pixel point before reduction;
(II) Within the band in which the pixel point (A) lies, a window of length $m$ pixel points and width $n$ pixel points is taken with the pixel point (A) as its geometric center. Taking the pixel point at the lower left corner of the window as the coordinate origin, the length of $m$ pixel points as the X axis and the width of $n$ pixel points as the Y axis, a window coordinate system is established. The gray values of all pixel points within the window coordinate system are found, and the gray value pairs $(g_1, g_2)$ of adjacent pixel point pairs are found along the X axis direction and directions parallel to the X axis, along the Y axis direction and directions parallel to the Y axis, or along a diagonal at 45 degrees to the X and Y axes and directions parallel to that diagonal; the number of adjacent pixel point pairs in the window is then counted;
(III) A gray level co-occurrence matrix of $k$ rows and $k$ columns is established, and the number of occurrences of the gray value pairs $(g_1, g_2)$ of adjacent pixel point pairs in the window is recorded in it: each occurrence of a gray value pair $(g_1, g_2)$ adds 1 to the entry in row $g_1$, column $g_2$ of the matrix, until the gray value pairs $(g_1, g_2)$ of all adjacent pixel point pairs in the window have been recorded in the gray level co-occurrence matrix;
(IV) The data in the gray level co-occurrence matrix are normalized: each count in the matrix is divided by the number of adjacent pixel point pairs in the window, giving the probability $p(i,j)$ of occurrence of each gray value pair;
(three) calculating the energy texture, entropy, contrast, inverse difference, dissimilarity and correlation parameters of the pixel point (A) in the band from the gray level co-occurrence matrix

The energy texture of the pixel point (A) is calculated as:

$ASM = \sum_{i=1}^{k} \sum_{j=1}^{k} p(i,j)^{2}$

The entropy of the pixel point (A) is calculated as:

$ENT = -\sum_{i=1}^{k} \sum_{j=1}^{k} p(i,j) \ln p(i,j)$

The contrast of the pixel point (A) is calculated as:

$CON = \sum_{i=1}^{k} \sum_{j=1}^{k} (i-j)^{2} \, p(i,j)$

The dissimilarity of the pixel point (A) is calculated as:

$DIS = \sum_{i=1}^{k} \sum_{j=1}^{k} |i-j| \, p(i,j)$

The correlation of the pixel point (A) is calculated as:

$COR = \sum_{i=1}^{k} \sum_{j=1}^{k} \frac{(i-\mu_i)(j-\mu_j)\, p(i,j)}{\sigma_i \sigma_j}$

wherein $\mu$ is the mean and $\sigma$ is the standard deviation:

$\mu_i = \sum_{i=1}^{k} i \sum_{j=1}^{k} p(i,j)$,  $\mu_j = \sum_{j=1}^{k} j \sum_{i=1}^{k} p(i,j)$,

$\sigma_i = \sqrt{\sum_{i=1}^{k} (i-\mu_i)^{2} \sum_{j=1}^{k} p(i,j)}$,  $\sigma_j = \sqrt{\sum_{j=1}^{k} (j-\mu_j)^{2} \sum_{i=1}^{k} p(i,j)}$

The inverse difference of the pixel point (A) is calculated as:

$IDM = \sum_{i=1}^{k} \sum_{j=1}^{k} \frac{p(i,j)}{1+(i-j)^{2}}$

wherein $p(i,j)$ is the value in row $i$, column $j$ of the normalized gray level co-occurrence matrix;
(four) finding out all the data of the plurality of representative pixel points (A) in the plurality of remote sensing images

Steps (two) and (three) are repeated to find the parameters ($ASM$, $ENT$, $CON$, $IDM$, $DIS$, $COR$) of the pixel point (A) under each band. The band data (DN1, DN2, DN3) of the pixel point (A), together with ($ASM$, $ENT$, $CON$, $IDM$, $DIS$, $COR$) under each band, are taken as one sample, so that one sample contains n = 21 attributes. By analogy, the band data (DN1, DN2, DN3) and the per-band parameters ($ASM$, $ENT$, $CON$, $IDM$, $DIS$, $COR$) of the plurality of representative pixel points (A) in the plurality of remote sensing images are found, forming a training sample set;
(five) constructing a decision tree model by using a random forest model

(1) Randomly select $a$ samples, with replacement, from the training sample set and establish a decision tree. The attributes of the training samples are $a_1, a_2, \ldots, a_n$ with n = 21. The proportion $p_c$ of samples whose underlying surface category is $c$ is calculated in the current sample set $D$; the information entropy of $D$ is then:

$\mathrm{Ent}(D) = -\sum_{c=1}^{C} p_c \log_2 p_c$    ---- formula (1)
(2) Among the $a$ training samples, divide according to a certain attribute $a_t$ of the samples: if dividing by attribute $a_t$ splits the $a$ training samples into V branches, then $D^v$ ($v = 1, 2, \ldots, V$) is the sample set contained in the v-th branch, and the information gain obtained by dividing the sample set $D$ by attribute $a_t$ is:

$\mathrm{Gain}(D, a_t) = \mathrm{Ent}(D) - \sum_{v=1}^{V} \frac{|D^v|}{|D|}\, \mathrm{Ent}(D^v)$    ---- formula (2)
The proportion $p_c$ of samples whose underlying surface category is $c$ is calculated in the sample set $D^v$; the information entropy of $D^v$ is then:

$\mathrm{Ent}(D^v) = -\sum_{c=1}^{C} p_c \log_2 p_c$    ---- formula (3)
The above steps are repeated to calculate the information gain of dividing the $a$ training samples by each attribute in turn; the information gains of the 21 attributes are compared, and the attribute with the maximum information gain is taken as the first node;
(3) There are V branches under the attribute of the first node, and the value or value range of each branch is marked. Step (2) is then repeated: in the sample set $D^v$ under the v-th branch, the information gain obtained by dividing by each of the remaining 20 attributes is calculated; the information gains of the 20 attributes are compared within the sample sets $D^v$ under the 1st ... V-th branches, and the attribute with the maximum information gain under each of the 1st ... V-th branches is taken as the node on that branch under the first node;
(4) Each sample set $D^v$ is in turn divided into V branches; step (3) is repeated to find the nodes on the newly divided V branches and the next layer of branches below those nodes, and so on until an underlying surface category is reached, completing the construction of one decision tree;
(5) Steps (1) to (4) are repeated b times to construct the decision tree model;
(six) classifying the unknown remote sensing images
According to the decision tree model, the unknown remote sensing image is scanned point by point, and each scanned point is processed by steps (two) to (three), yielding 21 attribute data per scanning point. The 21 attribute data of each scanning point are input into the decision tree model and classified in each decision tree according to the values or value ranges marked on the nodes and branches, finally yielding an underlying surface category. The underlying surface categories obtained for each scanning point in the b decision trees are counted, and the category that occurs most often is taken as the final classification result for that scanning point;
(seven) color rendering of the classification result
The roads in the classification result of step (six) are rendered red, with band data set to (255, 0, 0); greenbelts are rendered green, with band data (0, 255, 0); pavements are rendered blue, with band data (0, 0, 255); water bodies are rendered yellow, with band data (255, 255, 0); bare land is rendered white, with band data (255, 255, 255); roofs are rendered black, with band data (0, 0, 0).
2. The remote sensing image underlying surface extraction method as claimed in claim 1, wherein: in step (one), the number of selected representative pixel points (A) is the same for each underlying surface category.
3. The remote sensing image underlying surface extraction method as claimed in claim 2, wherein: in sub-step (II), the window length m is 3 to 10 pixel points and the window width n is 3 to 10 pixel points.
4. The remote sensing image underlying surface extraction method as claimed in claim 3, wherein: in sub-step (I), the designated maximum gray value k is 10 to 20.
5. The remote sensing image underlying surface extraction method as claimed in claim 4, wherein: in step (five), the number V of branches divided under each attribute may differ, V being 1 to 30.
6. The remote sensing image underlying surface extraction method as claimed in claim 5, wherein: in step (five), the number a of samples is 800 to 1000.
7. The remote sensing image underlying surface extraction method as claimed in claim 6, wherein: in step (five), b decision trees are constructed, b being 50 to 100.
CN201910555254.2A 2019-06-25 2019-06-25 Remote sensing image underlying surface extraction method Pending CN112131912A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910555254.2A CN112131912A (en) 2019-06-25 2019-06-25 Remote sensing image underlying surface extraction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910555254.2A CN112131912A (en) 2019-06-25 2019-06-25 Remote sensing image underlying surface extraction method

Publications (1)

Publication Number Publication Date
CN112131912A true CN112131912A (en) 2020-12-25

Family

ID=73850061

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910555254.2A Pending CN112131912A (en) 2019-06-25 2019-06-25 Remote sensing image underlying surface extraction method

Country Status (1)

Country Link
CN (1) CN112131912A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112949657A (en) * 2021-03-09 2021-06-11 河南省现代农业大数据产业技术研究院有限公司 Forest land distribution extraction method and device based on remote sensing image texture features

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104915636A (en) * 2015-04-15 2015-09-16 北京工业大学 Remote sensing image road identification method based on multistage frame significant characteristics
CN105279519A (en) * 2015-09-24 2016-01-27 四川航天系统工程研究所 Remote sensing image water body extraction method and system based on cooperative training semi-supervised learning
CN108229425A (en) * 2018-01-29 2018-06-29 浙江大学 A kind of identifying water boy method based on high-resolution remote sensing image
CN108256424A (en) * 2017-12-11 2018-07-06 中交信息技术国家工程实验室有限公司 A kind of high-resolution remote sensing image method for extracting roads based on deep learning
US20190171879A1 (en) * 2017-12-04 2019-06-06 Transport Planning and Research Institute Ministry of Transport Method of extracting warehouse in port from hierarchically screened remote sensing image

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104915636A (en) * 2015-04-15 2015-09-16 北京工业大学 Remote sensing image road identification method based on multistage frame significant characteristics
CN105279519A (en) * 2015-09-24 2016-01-27 四川航天系统工程研究所 Remote sensing image water body extraction method and system based on cooperative training semi-supervised learning
US20190171879A1 (en) * 2017-12-04 2019-06-06 Transport Planning and Research Institute Ministry of Transport Method of extracting warehouse in port from hierarchically screened remote sensing image
CN108256424A (en) * 2017-12-11 2018-07-06 中交信息技术国家工程实验室有限公司 A kind of high-resolution remote sensing image method for extracting roads based on deep learning
CN108229425A (en) * 2018-01-29 2018-06-29 浙江大学 A kind of identifying water boy method based on high-resolution remote sensing image

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112949657A (en) * 2021-03-09 2021-06-11 河南省现代农业大数据产业技术研究院有限公司 Forest land distribution extraction method and device based on remote sensing image texture features

Similar Documents

Publication Publication Date Title
CN107392925B (en) Remote sensing image ground object classification method based on super-pixel coding and convolutional neural network
CN110544251B (en) Dam crack detection method based on multi-migration learning model fusion
CN107657257A (en) A kind of semantic image dividing method based on multichannel convolutive neutral net
CN107038416B (en) Pedestrian detection method based on binary image improved HOG characteristics
CN104809481A (en) Natural scene text detection method based on adaptive color clustering
CN111105389B (en) Road surface crack detection method integrating Gabor filter and convolutional neural network
CN114387528A (en) Pine nematode disease monitoring space-air-ground integrated monitoring method
CN105469098A (en) Precise LINDAR data ground object classification method based on adaptive characteristic weight synthesis
CN110807485B (en) Method for fusing two-classification semantic segmentation maps into multi-classification semantic map based on high-resolution remote sensing image
CN104077612B (en) A kind of insect image-recognizing method based on multiple features rarefaction representation technology
CN109740572A (en) A kind of human face in-vivo detection method based on partial color textural characteristics
CN111709901A (en) Non-multiple multi/hyperspectral remote sensing image color homogenizing method based on FCM cluster matching and Wallis filtering
CN112417931B (en) Method for detecting and classifying water surface objects based on visual saliency
CN108985363A (en) A kind of cracks in reinforced concrete bridge classifying identification method based on RBPNN
CN112488050A (en) Color and texture combined aerial image scene classification method and system
CN107992856A (en) High score remote sensing building effects detection method under City scenarios
CN105868683A (en) Channel logo identification method and apparatus
CN103295019B (en) A kind of Chinese fragment self-adaptive recovery method based on probability-statistics
CN115661777A (en) Semantic-combined foggy road target detection algorithm
CN116740121A (en) Straw image segmentation method based on special neural network and image preprocessing
CN112131912A (en) Remote sensing image underlying surface extraction method
CN113505856A (en) Hyperspectral image unsupervised self-adaptive classification method
CN111882573A (en) Cultivated land plot extraction method and system based on high-resolution image data
CN113516193B (en) Image processing-based red date defect identification and classification method and device
CN116206156A (en) Pavement crack classification and identification method under shadow interference

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination