CN116129278A - Land utilization classification and identification system based on remote sensing images - Google Patents

Land utilization classification and identification system based on remote sensing images

Info

Publication number
CN116129278A
Authority
CN
China
Prior art keywords
pixel point
image
remote sensing
obtaining
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310368802.7A
Other languages
Chinese (zh)
Other versions
CN116129278B (en)
Inventor
朱坤庆
宫玉鑫
房立伟
魏士春
王春雨
马文龙
王登喜
王晴
孙艳丽
袁晶晶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wrangler Shandong Survey And Mapping Group Co ltd
Original Assignee
Wrangler Shandong Survey And Mapping Group Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wrangler Shandong Survey And Mapping Group Co ltd filed Critical Wrangler Shandong Survey And Mapping Group Co ltd
Priority to CN202310368802.7A priority Critical patent/CN116129278B/en
Publication of CN116129278A publication Critical patent/CN116129278A/en
Application granted granted Critical
Publication of CN116129278B publication Critical patent/CN116129278B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/10Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a land utilization classification and identification system based on remote sensing images, which relates to the field of image processing and comprises: an image acquisition module, used for acquiring a full-color remote sensing image and multispectral remote sensing images; a preprocessing module, used for obtaining the probability that each pixel point in the multispectral remote sensing images belongs to each land utilization category, so as to obtain the characteristic value vector of each pixel point and a characteristic image, and for acquiring a plurality of downsampled images of the full-color remote sensing image together with the similarity between pixel points in the downsampled images; a data processing module, used for obtaining, stage by stage, the characteristic value vector of each pixel point in the full-color remote sensing image from the similarity between pixel points in the target image and the characteristic value vectors of the pixel points; and an identification module, used for obtaining the land utilization category corresponding to each pixel point from the characteristic value vector of each pixel point in the full-color remote sensing image. The invention improves the accuracy of land utilization category identification.

Description

Land utilization classification and identification system based on remote sensing images
Technical Field
The invention relates to the technical field of image processing, in particular to a land utilization classification and identification system based on remote sensing images.
Background
Land utilization classification makes it possible to use land more effectively and to realize its maximum value, and it is widely applied in urban planning, environmental evaluation and other fields, so accurate land utilization classification and identification is of great significance. With the continuous development of science and technology in China, remote sensing images have gradually replaced manually surveyed mapping data, and because high-resolution remote sensing images contain rich ground feature information, their application in land utilization classification has become increasingly widespread.
However, remote sensing images are easily affected by weather, which reduces their accuracy. Multispectral remote sensing images obtained from different wave bands have insufficient resolution, while high-resolution full-color remote sensing images covering all wave bands contain no color information and therefore cannot be used on their own for land classification and identification. In the prior art the two are therefore usually fused to obtain a fused image with both high resolution and multiband color information, and the color information in the fused image is used for subsequent land utilization classification. However, because the color information in the fused image comes from the multispectral remote sensing image of insufficient resolution, the color information actually appears in large continuous patches on the high-resolution fused image; in other words, the precision of the color information of the fused image is insufficient, and the land utilization category identification result obtained by analyzing the fused image contains errors.
Disclosure of Invention
The invention provides a land utilization classification and identification system based on remote sensing images, in order to solve the problem that the fusion of existing full-color remote sensing images and multispectral remote sensing images lacks precision, which makes the land utilization classification and identification result inaccurate.
The land utilization classification and identification system based on remote sensing images disclosed by the invention comprises:
an image acquisition module, used for acquiring a full-color remote sensing image of a region to be classified and multispectral remote sensing images under a plurality of different band combinations;
a preprocessing module, used for obtaining the probability that each pixel point in each multispectral remote sensing image belongs to the corresponding land utilization category, and forming the characteristic value vector of the pixel point at each position from the probabilities that the pixel points at the same position in the multispectral remote sensing images belong to each land utilization category, so as to obtain a characteristic image;
the preprocessing module is further used for downsampling the full-color remote sensing image stage by stage to obtain a plurality of downsampled images, taking the downsampled image with the same resolution as the characteristic image as the target image, and obtaining the similarity between each pixel point in all the downsampled images and each of its neighborhood pixel points;
a data processing module, used for obtaining the characteristic value vector of each pixel point in the target image from the similarity between each pixel point in the target image and each of its neighborhood pixel points and the characteristic value vector of each pixel point in the corresponding characteristic image;
the data processing module is further used for obtaining the characteristic value vector of each pixel point in the upper-stage downsampled image of the target image from the characteristic value vectors of the pixel points in the target image and the similarity between each pixel point in the upper-stage downsampled image and each of its neighborhood pixel points, and then proceeding upward stage by stage to obtain the characteristic value vector of each pixel point in the full-color remote sensing image;
an identification module, used for obtaining the land utilization category corresponding to each pixel point according to the probabilities of the land utilization categories contained in the characteristic value vector of each pixel point in the full-color remote sensing image.
Further, the method for obtaining the probability that each pixel point in the multispectral image belongs to the corresponding land utilization category comprises the following steps:
acquiring HSV values of each pixel point in the multispectral remote sensing image;
acquiring an H value range of each multispectral remote sensing image belonging to a corresponding land utilization category;
and obtaining the probability that each pixel point in the multispectral image belongs to the corresponding land utilization category by utilizing the H value of each pixel point in each multispectral remote sensing image and the H value range of the corresponding land utilization category.
Further, the preprocessing module further comprises the step of combining probabilities of the corresponding land utilization categories of the pixel points at the same position in each multispectral remote sensing image to obtain a characteristic value vector of the pixel points at each position.
Further, in the preprocessing module, the method for obtaining the similarity between each pixel point on the target image and each neighborhood pixel point comprises the following steps:
obtaining an LBP value of each pixel point on a target image;
and obtaining the similarity of each pixel point and each neighborhood pixel point according to the LBP value and the gray value of each pixel point and each neighborhood pixel point in the target image.
Further, the expression for obtaining the similarity between each pixel point on the target image and each neighborhood pixel point is as follows:
$$X_i^j = 1 - \sqrt{\frac{\overline{\left|L_i - L_i^j\right|} + \overline{\left|g_i - g_i^j\right|}}{2}}$$

wherein $X_i^j$ represents the similarity between the $i$-th pixel point in the target image and its $j$-th neighborhood pixel point; $L_i$ represents the LBP value of the $i$-th pixel point in the target image; $L_i^j$ represents the LBP value of the $j$-th neighborhood pixel point of the $i$-th pixel point in the target image; $g_i$ represents the gray value of the $i$-th pixel point in the target image; $g_i^j$ represents the gray value of the $j$-th neighborhood pixel point of the $i$-th pixel point in the target image; the overline denotes normalization of the corresponding absolute difference to the interval $[0, 1]$.
Further, in the data processing module, the expression for obtaining the characteristic value vector of each pixel point in the target image is:

$$F_i = \frac{n}{d}\sum_{j=1}^{8}\frac{X_i^j}{\sum_{j=1}^{8}X_i^j}\,T_i^j + \left(1 - \frac{n}{d}\right)T_i$$

wherein $F_i$ represents the characteristic value vector of the $i$-th pixel point in the target image; $d$ represents the number of downsamplings of the full-color remote sensing image; $n$ represents the number of updates; $X_i^j$ represents the similarity between the $i$-th pixel point in the target image and its $j$-th neighborhood pixel point; $T_i^j$ represents the characteristic value vector, in the corresponding characteristic image, of the $j$-th neighborhood pixel point of the $i$-th pixel point in the target image; $T_i$ represents the characteristic value vector of the $i$-th pixel point in the target image in the corresponding characteristic image.
Further, during the step-by-step downsampling, the pixel point in each stage downsampling image corresponds to a plurality of pixel points in the adjacent upper stage downsampling image, namely, each pixel point in the target image corresponds to a plurality of pixel points in the upper stage downsampling image of the target image.
Further, the identification module further comprises dividing the land into a plurality of utilization categories through the land utilization categories corresponding to each pixel point.
The beneficial effects of the invention are as follows: in the land utilization classification and identification system based on remote sensing images according to the invention, a plurality of multispectral remote sensing images are acquired so as to obtain the probability that each pixel point in the region to be classified belongs to each land utilization category, and thereby a characteristic image containing these probabilities; the high-resolution full-color remote sensing image is downsampled to obtain a target image with the same resolution as the characteristic image, and the probabilities obtained from the multispectral remote sensing images, namely the characteristic value vectors of the pixel points, are transferred to the target image; according to the similarity between each pixel point and its neighborhood pixel points in every downsampled image, the characteristic value vectors are then transferred stage by stage up to the full-color remote sensing image at the top level. Compared with the existing approach of directly fusing the multispectral remote sensing image with the full-color remote sensing image, transferring the characteristic value vectors stage by stage in combination with the similarity between pixel points is more accurate, and it reduces the situation in the prior art where large patches of pixels with identical color information appear on the high-resolution full-color remote sensing image and cause errors at the edges of the land utilization classification result, so that the land utilization category identification result is more accurate.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic structural diagram of a land utilization classification and identification system based on remote sensing images.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
An embodiment of a land utilization classification and identification system based on remote sensing images of the present invention, as shown in fig. 1, comprises: an image acquisition module 10, a preprocessing module 11, a data processing module 12 and an identification module 13.
Image acquisition module 10: used for acquiring the full-color remote sensing image of the region to be classified and multispectral remote sensing images under a plurality of different band combinations.
Specifically, the invention needs to obtain the land utilization categories of the region to be classified. In the prior art, by selecting different band combinations, a multispectral remote sensing image can be displayed as different false-color images, which allows targeted identification of ground objects. For example, under the 543 band combination vegetation is displayed as red, so the 543 band combination can be selected for more accurate identification of vegetation; the 564 band combination can be selected to effectively identify water bodies; the 764 band combination can be selected to effectively identify construction land; and the 652 band combination can be selected to effectively identify cultivated land.
Therefore, in this scheme, multispectral remote sensing images of the region to be classified are acquired under the four different band combinations that effectively identify vegetation, cultivated land, water body and construction land respectively, and a high-resolution full-color remote sensing image of the region to be classified is acquired.
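For illustration, the following is a minimal sketch of assembling a false-color composite from already co-registered band rasters; the helper name and the dictionary-of-arrays interface are assumptions, and the 5-4-3 default simply mirrors the vegetation example above.

```python
import numpy as np

def false_color_composite(bands: dict, combo=(5, 4, 3)) -> np.ndarray:
    """Stack three spectral bands into an RGB false-color image.

    `bands` maps a band number to a 2-D reflectance array; `combo` gives the
    bands mapped to the R, G and B channels (e.g. 5-4-3, under which
    vegetation appears red, as in the description above).
    """
    rgb = np.stack([bands[b].astype(np.float32) for b in combo], axis=-1)
    # Scale each channel to [0, 1] for display and further processing.
    rgb -= rgb.min(axis=(0, 1), keepdims=True)
    rgb /= np.maximum(rgb.max(axis=(0, 1), keepdims=True), 1e-6)
    return rgb
```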
Preprocessing module 11: used for obtaining the probability that each pixel point in each multispectral remote sensing image belongs to the corresponding land utilization category, and forming the characteristic value vector of the pixel point at each position from the probabilities that the pixel points at the same position in the multispectral remote sensing images belong to each land utilization category, so as to obtain a characteristic image; and for downsampling the full-color remote sensing image stage by stage to obtain a plurality of downsampled images, taking the downsampled image with the same resolution as the characteristic image as the target image, and obtaining the similarity between each pixel point in all the downsampled images and each of its neighborhood pixel points.
Specifically, the acquired multispectral remote sensing images of the four different band combinations are converted into HSV space. HSV is a color space created by A. R. Smith in 1978 according to the visual characteristics of colors, also called the hexcone model; each color in this space is described by hue (H), saturation (S) and value (V). The invention uses the H value: each multispectral remote sensing image is used to calculate the probability of one land utilization category, and the H value range corresponding to the land utilization category of each multispectral remote sensing image is obtained. For example, the 543 band combination is used to calculate the probability that each pixel point in the multispectral remote sensing image belongs to a vegetation region; the H value range of red is denoted $[H_{\min}, H_{\max}]$, and the probability that each pixel point belongs to a vegetation region is calculated from the H value of the pixel point in the 543 band combination image by the following expression:

$$P_i^1 = \begin{cases} \dfrac{H_i - H_{\min}}{H_{\max} - H_{\min}}, & H_{\min} \le H_i \le H_{\max} \\ 0, & \text{otherwise} \end{cases}$$

wherein $P_i^1$ represents the probability that the $i$-th pixel point in the multispectral remote sensing image belongs to a vegetation region; $H_i$ represents the H value of the $i$-th pixel point in the multispectral remote sensing image; $H_{\min}$ represents the minimum H value corresponding to red in HSV space; $H_{\max}$ represents the maximum H value corresponding to red in HSV space. When the H value of a pixel point lies within the red H value range $[H_{\min}, H_{\max}]$, its probability of belonging to a vegetation region takes a value between 0 and 1, and the more strongly the pixel point expresses red, the larger this probability; when the H value is not within the red H value range, the probability of belonging to a vegetation region is 0.

The probabilities that each pixel point in the other three multispectral images belongs to a water body, cultivated land and construction land are obtained in the same way as the vegetation probability, and are denoted by $P_i^2$, $P_i^3$ and $P_i^4$ respectively.
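As an illustration of the H-value rule, here is a short sketch assuming an 8-bit (or [0, 1] float) RGB false-color composite and OpenCV's hue convention (H in 0-179); the linear ramp inside the hue interval is an assumed form of the mapping, since the text only states that the probability is 0 outside the range and grows with the strength of the target color.

```python
import cv2
import numpy as np

def category_probability(rgb: np.ndarray, h_min: float, h_max: float) -> np.ndarray:
    """Probability that each pixel belongs to one land-use category, derived
    from its hue in HSV space: 0 outside [h_min, h_max], a linear ramp
    inside the interval (an assumed form of the mapping)."""
    if rgb.dtype != np.uint8:                       # accept the [0, 1] float composite above
        rgb = (np.clip(rgb, 0.0, 1.0) * 255).astype(np.uint8)
    hsv = cv2.cvtColor(rgb, cv2.COLOR_RGB2HSV)      # H in [0, 179] for 8-bit input
    h = hsv[..., 0].astype(np.float32)
    prob = np.clip((h - h_min) / max(h_max - h_min, 1e-6), 0.0, 1.0)
    prob[(h < h_min) | (h > h_max)] = 0.0
    return prob
```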
The multispectral remote sensing images of the four different band combinations of the region to be classified are then combined: the probabilities that the pixel points at the same position belong to the different land utilization categories are combined to obtain the characteristic value vector of the pixel point at each position, namely the characteristic value vector of the $i$-th pixel point is $(P_i^1, P_i^2, P_i^3, P_i^4)$, wherein $P_i^1$ represents the probability that the $i$-th pixel point of the characteristic image belongs to vegetation, $P_i^2$ represents the probability that the $i$-th pixel point of the characteristic image belongs to a water body, $P_i^3$ represents the probability that the $i$-th pixel point of the characteristic image belongs to cultivated land, and $P_i^4$ represents the probability that the $i$-th pixel point of the characteristic image belongs to construction land.

The characteristic value vectors of the pixel points at all positions are combined to form the characteristic image.
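A possible way to assemble the characteristic image from the four probability maps, reusing the hypothetical category_probability helper above; the ordering vegetation, water body, cultivated land, construction land matches the vector $(P_i^1, P_i^2, P_i^3, P_i^4)$ described in the text.

```python
import numpy as np

def build_feature_image(composites, hue_ranges):
    """Stack the four per-category probability maps into an H x W x 4
    characteristic image; composites and hue_ranges are given in the order
    vegetation, water body, cultivated land, construction land."""
    maps = [category_probability(img, lo, hi)
            for img, (lo, hi) in zip(composites, hue_ranges)]
    return np.stack(maps, axis=-1).astype(np.float32)
```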
The full-color remote sensing image of the region to be classified is downsampled: the sampling level of the full-color remote sensing image is set to 1; after the first downsampling, the sampling level of the obtained downsampled image is 2; that downsampled image is downsampled again to give a downsampled image with sampling level 3, and so on, until a downsampled image with the same resolution as the characteristic image is obtained, at which point the downsampling stops and the last-stage downsampled image is taken as the target image. The downsampled images of the levels corresponding to the full-color remote sensing image are denoted $I_1, I_2, \ldots, I_{d+1}$, wherein $I_{d+1}$ represents the target image, $I_1$ is the full-color remote sensing image, $I_2$ represents the downsampled image with sampling level 2 obtained after the first downsampling of the full-color remote sensing image, and $d$ is the number of downsamplings.
The downsampled image with the same resolution as the feature image is used as the target image, so that the color information in the multispectral image is converted into the downsampled image in a one-to-one correspondence manner, and then is converted into the full-color remote sensing image with high resolution step by step, and land utilization type identification is performed.
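A sketch of the stage-by-stage downsampling, assuming a factor of 2 per side per stage (so each pixel covers 4 pixels of the level above, consistent with the 4:1 correspondence used later) via OpenCV's pyrDown; the stopping rule simply compares sizes with the characteristic image.

```python
import cv2

def build_pyramid(pan, target_shape):
    """Downsample the 2-D panchromatic image stage by stage until its size
    matches the characteristic image; the last level is the target image."""
    levels = [pan]                      # level 1 is the full-color image itself
    while levels[-1].shape[0] > target_shape[0] and levels[-1].shape[1] > target_shape[1]:
        levels.append(cv2.pyrDown(levels[-1]))
    return levels                       # len(levels) - 1 downsamplings were performed
```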
When the color information is transferred, that is, when the characteristic value vectors of the pixel points are transferred, the similarity between pixel points in the downsampled images must be taken into account, so the similarity between pixel points in each level of downsampled image, in terms of texture features and gray values, needs to be analyzed first.
The method for calculating the similarity between the pixel point and the neighborhood pixel point in each stage of downsampling image is the same, and in this embodiment, the calculation is performed by taking the last stage of downsampling image, i.e. the target image as an example.
Specifically, the method for obtaining the LBP value (LBP, short for Local Binary Pattern) of each pixel point on the target image is prior art and is not described here. According to the LBP value and gray value of each pixel point and of each of its neighborhood pixel points in the target image, the similarity between each pixel point and each of its neighborhood pixel points is obtained; the expression for the similarity between each pixel point on the target image and each of its neighborhood pixel points is:
$$X_i^j = 1 - \sqrt{\frac{\overline{\left|L_i - L_i^j\right|} + \overline{\left|g_i - g_i^j\right|}}{2}}$$

wherein $X_i^j$ represents the similarity between the $i$-th pixel point in the target image and the $j$-th pixel point in its 8-neighborhood; $L_i$ represents the LBP value of the $i$-th pixel point in the target image; $L_i^j$ represents the LBP value of the $j$-th neighborhood pixel point of the $i$-th pixel point in the target image; $g_i$ represents the gray value of the $i$-th pixel point in the target image; $g_i^j$ represents the gray value of the $j$-th neighborhood pixel point of the $i$-th pixel point in the target image; the overline denotes normalization of the corresponding absolute difference to the interval $[0, 1]$.

The LBP value is the texture feature value of a pixel point. $\overline{\left|L_i - L_i^j\right|}$ is the normalized difference between the texture feature values of the $i$-th pixel point and the $j$-th pixel point in its 8-neighborhood, and takes a value between 0 and 1: the larger the value, the larger the texture feature difference between the $i$-th pixel point and its $j$-th neighborhood pixel point; the smaller the value, the smaller the texture feature difference. $\overline{\left|g_i - g_i^j\right|}$ is the normalized gray difference between the $i$-th pixel point and the $j$-th pixel point in its 8-neighborhood, and likewise takes a value between 0 and 1: the larger the value, the larger the gray difference; the smaller the value, the smaller the gray difference. The texture feature difference and the gray difference are added, giving a value between 0 and 2, and divided by 2 so that the result lies between 0 and 1; the square root of this result is taken and subtracted from 1, which maps the combined difference negatively. $X_i^j$ is therefore a number between 0 and 1: the closer it is to 0, the more dissimilar the $i$-th pixel point and its $j$-th neighborhood pixel point; the closer it is to 1, the more similar they are.
So far, the similarity between each pixel point in the target image and each neighborhood pixel point is obtained, and the similarity between each pixel point on each level of downsampling image and each neighborhood pixel point is obtained by the same method.
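A sketch of the LBP-plus-gray similarity, using scikit-image's local_binary_pattern for the LBP values; dividing both differences by 255 is an assumed normalization, since the description only states that each difference is scaled to [0, 1].

```python
import numpy as np
from skimage.feature import local_binary_pattern

def neighbour_similarity(gray: np.ndarray) -> np.ndarray:
    """Similarity X between every pixel and each of its 8 neighbours:
    X = 1 - sqrt((|dLBP| + |dGray|) / 2), with both differences
    normalized to [0, 1]. Returns an H x W x 8 array."""
    lbp = local_binary_pattern(gray, P=8, R=1, method="default")  # values 0..255
    g = gray.astype(np.float32)
    l = lbp.astype(np.float32)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    gp = np.pad(g, 1, mode="edge")
    lp = np.pad(l, 1, mode="edge")
    h, w = g.shape
    sims = np.empty((h, w, 8), dtype=np.float32)
    for k, (dy, dx) in enumerate(offsets):
        gn = gp[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        ln = lp[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        d_l = np.abs(l - ln) / 255.0                 # normalized LBP difference
        d_g = np.abs(g - gn) / 255.0                 # normalized gray difference
        sims[..., k] = 1.0 - np.sqrt((d_l + d_g) / 2.0)
    return sims
```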
The data processing module 12: used for obtaining the characteristic value vector of each pixel point in the target image from the similarity between each pixel point in the target image and each of its neighborhood pixel points and the characteristic value vector of each pixel point in the corresponding characteristic image; for obtaining the characteristic value vector of each pixel point in the upper-stage downsampled image of the target image from the characteristic value vectors of the pixel points in the target image and the similarity between each pixel point in the upper-stage downsampled image and each of its neighborhood pixel points; and for proceeding upward stage by stage to obtain the characteristic value vector of each pixel point in the full-color remote sensing image.
The similarity between each pixel point and each of its neighborhood pixel points in every level of downsampled image has been obtained in the preprocessing module. This similarity reflects the relation between pixel points in a downsampled image: the greater the similarity between two pixel points, the more similar their characteristic value vectors in the characteristic image, and the closer the two pixel points are to belonging to the same land utilization category. The characteristic value vectors of the pixel points in the downsampled images can therefore be transferred level by level to the high-resolution full-color remote sensing image according to the similarity between pixel points, which ensures the accuracy of the land utilization category identification result.
Specifically, the resolution of the target image is the same as that of the characteristic image, so the characteristic value vector of each pixel point in the target image is obtained by updating it with the characteristic value vectors of the pixel points in the characteristic image and the similarity between pixel points in the target image. The characteristic value vector of each pixel point in the target image is obtained according to the following formula:
$$F_i = \frac{n}{d}\sum_{j=1}^{8}\frac{X_i^j}{\sum_{j=1}^{8}X_i^j}\,T_i^j + \left(1 - \frac{n}{d}\right)T_i$$

wherein $F_i$ represents the characteristic value vector of the $i$-th pixel point in the target image; $d$ represents the number of downsamplings of the full-color remote sensing image; $n$ represents the number of updates, and since the update from the characteristic image to the target image is the first update, $n$ takes the value 1 here; $X_i^j$ represents the similarity between the $i$-th pixel point in the target image and its $j$-th neighborhood pixel point; $T_i^j$ represents the characteristic value vector, in the corresponding characteristic image, of the $j$-th neighborhood pixel point of the $i$-th pixel point in the target image; $T_i$ represents the characteristic value vector of the $i$-th pixel point in the target image in the corresponding characteristic image.

$\frac{X_i^j}{\sum_{j=1}^{8}X_i^j}$ represents the ratio of the similarity between the $j$-th neighborhood pixel point and the $i$-th pixel point to the sum of the similarities between the $i$-th pixel point and its 8 neighborhood pixel points. The larger this ratio, the greater the similarity between the neighborhood pixel point and the $i$-th pixel point, and the closer the characteristic value vector of the neighborhood pixel point is to that of the $i$-th pixel point. Multiplying this ratio by the characteristic value vector of the corresponding neighborhood pixel point gives the characteristic value vector that this neighborhood pixel point assigns to the $i$-th pixel point; summing the vectors assigned by all 8 neighborhood pixel points gives a preliminary estimate of the characteristic value vector of the $i$-th pixel point. Considering that the greater the sampling level of a downsampled image, the lower its resolution and the more blurred its texture, the similarity between pixel points provides less assistance in transferring characteristic value vectors at deeper levels, so the ratio $\frac{n}{d}$ of the number of updates to the total number of downsamplings is used as the weight of this preliminary estimate. Conversely, the smaller the number of updates, the more accurate the original characteristic value vector of the pixel point in the characteristic image, so the original characteristic value vector is weighted by $1-\frac{n}{d}$: the smaller the number of updates, the larger this weight. In this way a more accurate characteristic value vector of each pixel point on the updated target image is obtained.
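A sketch of the first update (characteristic image to target image), implementing the weighted combination above with a plain loop over the 8 neighborhood offsets; feature_img is the assumed H x W x 4 characteristic image and sims the H x W x 8 similarity array from the previous sketch.

```python
import numpy as np

def update_target(feature_img, sims, n, d):
    """First transfer of characteristic value vectors (n = 1): each pixel's
    vector is n/d times the similarity-weighted average of its 8 neighbours'
    vectors in the characteristic image plus (1 - n/d) times its own vector."""
    h, w, c = feature_img.shape
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    padded = np.pad(feature_img, ((1, 1), (1, 1), (0, 0)), mode="edge")
    weights = sims / np.maximum(sims.sum(axis=-1, keepdims=True), 1e-6)  # X_ij / sum_j X_ij
    weighted = np.zeros_like(feature_img)
    for k, (dy, dx) in enumerate(offsets):
        neigh = padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w, :]
        weighted += weights[..., k:k + 1] * neigh
    return (n / d) * weighted + (1.0 - n / d) * feature_img
```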
The characteristic value vector of each pixel point on the target image has now been obtained, and is used to estimate the characteristic value vectors of the pixel points in the upper-stage downsampled image of the target image. When the characteristic value vectors of the target image were computed from the characteristic image, the two images had the same resolution; the target image and its upper-stage downsampled image, however, differ in resolution. In this embodiment the ratio between successive downsampled images is 4, so one pixel point of the target image corresponds to four pixel points in the upper-stage downsampled image of the target image, and the characteristic value vector of each pixel point in the target image is used to calculate the characteristic value vectors of the corresponding four pixel points in the upper-stage downsampled image.
Specifically, the characteristic value vector of each pixel point in the downsampled image after the second update, namely the upper-stage downsampled image of the target image, is obtained according to the following formula:

$$F'_i = \frac{2}{d}\sum_{j=1}^{8}\frac{X_i^j}{\sum_{j=1}^{8}X_i^j}\,F_i^j + \left(1 - \frac{2}{d}\right)F_i$$

wherein $F'_i$ represents the characteristic value vector of the $i$-th pixel point in the upper-stage downsampled image of the target image; $d$ represents the number of downsamplings of the full-color remote sensing image; the number 2 represents the number of updates, the update from the characteristic image to the target image being the first update, so this is the second update; $X_i^j$ represents the similarity between the $i$-th pixel point in the upper-stage downsampled image of the target image and its $j$-th neighborhood pixel point; $F_i^j$ represents the characteristic value vector, in the target image, of the pixel point corresponding to the $j$-th neighborhood pixel point of the $i$-th pixel point in the upper-stage downsampled image of the target image; $F_i$ represents the characteristic value vector, in the target image, of the pixel point corresponding to the $i$-th pixel point in the upper-stage downsampled image of the target image. In the weight $1-\frac{2}{d}$, the 2 is the number of updates; the weights are multiplied by the characteristic value vectors of the corresponding pixel points, and since the smaller the number of updates the larger this weight, a more accurate characteristic value vector of each pixel point in the upper-stage downsampled image of the target image is obtained.
The characteristic value vectors of the pixel points in the remaining downsampled images are then updated stage by stage in the same way, until the characteristic value vector of each pixel point in the full-color remote sensing image is obtained.
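A sketch of the upward, stage-by-stage propagation, reusing the hypothetical update_target above; sims_per_level is an assumed precomputed list of per-level similarity arrays, mapping a pixel to its corresponding coarse pixel by integer division of coordinates by 2 is an assumption tied to the factor-2 pyramid, and the weight n/d is clamped to 1 as a safeguard for the topmost updates.

```python
import numpy as np

def propagate_upwards(levels, feature_img, sims_per_level):
    """levels: downsampled images from the full-color image (index 0) down to
    the target image (index -1); sims_per_level[k]: H x W x 8 similarities for
    levels[k]. Returns the characteristic value vectors at full resolution."""
    d = max(len(levels) - 1, 1)                  # number of downsamplings
    current = update_target(feature_img, sims_per_level[-1], n=1, d=d)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    for n, k in enumerate(range(len(levels) - 2, -1, -1), start=2):
        h, w = levels[k].shape[:2]
        ys, xs = np.mgrid[0:h, 0:w]
        # Vector, in the coarser level, of the pixel corresponding to each fine pixel.
        coarse = current[np.minimum(ys // 2, current.shape[0] - 1),
                         np.minimum(xs // 2, current.shape[1] - 1)]
        weights = sims_per_level[k] / np.maximum(sims_per_level[k].sum(-1, keepdims=True), 1e-6)
        padded = np.pad(coarse, ((1, 1), (1, 1), (0, 0)), mode="edge")
        weighted = np.zeros_like(coarse)
        for j, (dy, dx) in enumerate(offsets):
            weighted += weights[..., j:j + 1] * padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w, :]
        w_sim = min(n / d, 1.0)                  # patent weight is n/d; clamped as a safeguard
        current = w_sim * weighted + (1.0 - w_sim) * coarse
    return current
```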
The identification module 13: used for obtaining the land utilization category of the area corresponding to each pixel point from the characteristic value vector of each pixel point in the full-color remote sensing image.
The characteristic value vector of each pixel point in the full-color remote sensing image has been obtained in the data processing module, and the characteristic value vector of each pixel point contains the probabilities that the area corresponding to that pixel point belongs to each land utilization category. For the $i$-th pixel point on the full-color remote sensing image, the corresponding characteristic value vector $(P_i^1, P_i^2, P_i^3, P_i^4)$ contains four characteristic values, namely the probability values for vegetation, water body, cultivated land and construction land. Each probability value lies between 0 and 1, but the region to be classified may also contain land utilization categories other than vegetation, water body, cultivated land and construction land.
Therefore, the maximum of the four characteristic values of each pixel point in the full-color remote sensing image, namely the maximum of the probability values of the four land utilization categories, is obtained. If the maximum probability value is greater than the probability threshold of 0.5 (the probability threshold can be set according to the specific situation), the land utilization category corresponding to the maximum probability value is taken as the land utilization category of the area corresponding to the pixel point; if the maximum probability value is not greater than 0.5, the area corresponding to the pixel point is considered to belong to another land utilization category, that is, it does not belong to vegetation, cultivated land, water body or construction land.
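A sketch of the decision rule in this module: take the arg-max of the four probabilities and fall back to an assumed "other" class when the maximum does not exceed the 0.5 threshold.

```python
import numpy as np

CATEGORIES = ["vegetation", "water body", "cultivated land", "construction land", "other"]

def classify(feature_full: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Per-pixel land-use label from the H x W x 4 probability vectors: the
    category of the maximum probability if it exceeds the threshold,
    otherwise index 4 ("other")."""
    best = feature_full.argmax(axis=-1)
    best_p = feature_full.max(axis=-1)
    labels = np.where(best_p > threshold, best, 4)
    return labels.astype(np.int32)
```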
In summary, the invention provides a land utilization classification and identification system based on remote sensing images: a plurality of multispectral remote sensing images are acquired to obtain the probability that each pixel point in the region to be classified belongs to each land utilization category, and thereby a characteristic image containing these probabilities; the high-resolution full-color remote sensing image is downsampled to obtain a target image with the same resolution as the characteristic image, and the probabilities obtained from the multispectral remote sensing images, namely the characteristic value vectors of the pixel points, are transferred to the target image; according to the similarity between each pixel point and its neighborhood pixel points in every downsampled image, the characteristic value vectors are then transferred stage by stage up to the full-color remote sensing image at the top level. Compared with the existing approach of directly fusing the multispectral remote sensing image with the full-color remote sensing image, this stage-by-stage transfer of the characteristic value vectors in combination with the similarity between pixel points is more accurate, and it reduces the situation in the prior art where large patches of pixels with identical color information appear on the high-resolution full-color remote sensing image and cause errors at the edges of the land utilization classification result, so that the land utilization category identification result is more accurate.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, alternatives, and improvements that fall within the spirit and scope of the invention.

Claims (8)

1. A land utilization classification and identification system based on remote sensing images, characterized by comprising:
an image acquisition module, used for acquiring a full-color remote sensing image of a region to be classified and multispectral remote sensing images under a plurality of different band combinations;
a preprocessing module, used for obtaining the probability that each pixel point in each multispectral remote sensing image belongs to the corresponding land utilization category, and forming the characteristic value vector of the pixel point at each position from the probabilities that the pixel points at the same position in the multispectral remote sensing images belong to each land utilization category, so as to obtain a characteristic image;
the preprocessing module being further used for downsampling the full-color remote sensing image stage by stage to obtain a plurality of downsampled images, taking the downsampled image with the same resolution as the characteristic image as the target image, and obtaining the similarity between each pixel point in all the downsampled images and each of its neighborhood pixel points;
a data processing module, used for obtaining the characteristic value vector of each pixel point in the target image from the similarity between each pixel point in the target image and each of its neighborhood pixel points and the characteristic value vector of each pixel point in the corresponding characteristic image;
the data processing module being further used for obtaining the characteristic value vector of each pixel point in the upper-stage downsampled image of the target image from the characteristic value vectors of the pixel points in the target image and the similarity between each pixel point in the upper-stage downsampled image and each of its neighborhood pixel points, and then proceeding upward stage by stage to obtain the characteristic value vector of each pixel point in the full-color remote sensing image;
an identification module, used for obtaining the land utilization category corresponding to each pixel point according to the probabilities of the land utilization categories contained in the characteristic value vector of each pixel point in the full-color remote sensing image.
2. The land use classification and identification system based on remote sensing images as set forth in claim 1, wherein the method for obtaining the probability that each pixel point in the multispectral image belongs to the corresponding land use category comprises the following steps:
acquiring HSV values of each pixel point in the multispectral remote sensing image;
acquiring an H value range of each multispectral remote sensing image belonging to a corresponding land utilization category;
and obtaining the probability that each pixel point in the multispectral image belongs to the corresponding land utilization category by utilizing the H value of each pixel point in each multispectral remote sensing image and the H value range of the corresponding land utilization category.
3. The land use classification and identification system based on remote sensing images according to claim 1, wherein the preprocessing module further comprises combining probabilities of corresponding land use categories of pixels at the same position in each multispectral remote sensing image to obtain eigenvalue vectors of the pixels at each position.
4. The land use classification and identification system based on remote sensing images as set forth in claim 1, wherein the preprocessing module obtains the similarity between each pixel point on the target image and each neighboring pixel point by:
obtaining an LBP value of each pixel point on a target image;
and obtaining the similarity of each pixel point and each neighborhood pixel point according to the LBP value and the gray value of each pixel point and each neighborhood pixel point in the target image.
5. The land use classification and identification system based on remote sensing images as set forth in claim 4, wherein the expression for obtaining the similarity between each pixel point on the target image and each neighboring pixel point is:
$$X_i^j = 1 - \sqrt{\frac{\overline{\left|L_i - L_i^j\right|} + \overline{\left|g_i - g_i^j\right|}}{2}}$$

wherein $X_i^j$ represents the similarity between the $i$-th pixel point in the target image and its $j$-th neighborhood pixel point; $L_i$ represents the LBP value of the $i$-th pixel point in the target image; $L_i^j$ represents the LBP value of the $j$-th neighborhood pixel point of the $i$-th pixel point in the target image; $g_i$ represents the gray value of the $i$-th pixel point in the target image; $g_i^j$ represents the gray value of the $j$-th neighborhood pixel point of the $i$-th pixel point in the target image; the overline denotes normalization of the corresponding absolute difference to the interval $[0, 1]$.
6. The land use classification and identification system based on remote sensing images as set forth in claim 1, wherein the data processing module obtains the expression of the eigenvalue vector of each pixel point in the target image as follows:
$$F_i = \frac{n}{d}\sum_{j=1}^{8}\frac{X_i^j}{\sum_{j=1}^{8}X_i^j}\,T_i^j + \left(1 - \frac{n}{d}\right)T_i$$

wherein $F_i$ represents the characteristic value vector of the $i$-th pixel point in the target image; $d$ represents the number of downsamplings of the full-color remote sensing image; $n$ represents the number of updates; $X_i^j$ represents the similarity between the $i$-th pixel point in the target image and its $j$-th neighborhood pixel point; $T_i^j$ represents the characteristic value vector, in the corresponding characteristic image, of the $j$-th neighborhood pixel point of the $i$-th pixel point in the target image; $T_i$ represents the characteristic value vector of the $i$-th pixel point in the target image in the corresponding characteristic image.
7. The land use classification and identification system based on remote sensing images according to claim 1, wherein, during the stage-by-stage downsampling, each pixel point in each downsampled image corresponds to a plurality of pixel points in the adjacent upper-stage downsampled image, that is, each pixel point in the target image corresponds to a plurality of pixel points in the upper-stage downsampled image of the target image.
8. The land use classification and identification system based on remote sensing images as claimed in claim 1, wherein said identification module further comprises dividing the land into a plurality of utilization categories by the land use category to which each pixel corresponds.
CN202310368802.7A 2023-04-10 2023-04-10 Land utilization classification and identification system based on remote sensing images Active CN116129278B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310368802.7A CN116129278B (en) 2023-04-10 2023-04-10 Land utilization classification and identification system based on remote sensing images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310368802.7A CN116129278B (en) 2023-04-10 2023-04-10 Land utilization classification and identification system based on remote sensing images

Publications (2)

Publication Number Publication Date
CN116129278A true CN116129278A (en) 2023-05-16
CN116129278B CN116129278B (en) 2023-06-30

Family

ID=86295924

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310368802.7A Active CN116129278B (en) 2023-04-10 2023-04-10 Land utilization classification and identification system based on remote sensing images

Country Status (1)

Country Link
CN (1) CN116129278B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116310772A (en) * 2023-05-18 2023-06-23 德州华恒环保科技有限公司 Water environment pollution identification method based on multispectral image

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103679675A (en) * 2013-11-29 2014-03-26 航天恒星科技有限公司 Remote sensing image fusion method oriented to water quality quantitative remote sensing application
US20170076438A1 (en) * 2015-08-31 2017-03-16 Cape Analytics, Inc. Systems and methods for analyzing remote sensing imagery
US20180286052A1 (en) * 2017-03-30 2018-10-04 4DM Inc. Object motion mapping using panchromatic and multispectral imagery from single pass electro-optical satellite imaging sensors
CN109697475A (en) * 2019-01-17 2019-04-30 中国地质大学(北京) A kind of muskeg information analysis method, remote sensing monitoring component and monitoring method
CN110263717A (en) * 2019-06-21 2019-09-20 中国科学院地理科学与资源研究所 It is a kind of incorporate streetscape image land used status determine method
CN110599424A (en) * 2019-09-16 2019-12-20 北京航天宏图信息技术股份有限公司 Method and device for automatic image color-homogenizing processing, electronic equipment and storage medium
CN111681207A (en) * 2020-05-09 2020-09-18 宁波大学 Remote sensing image fusion quality evaluation method
CN112036246A (en) * 2020-07-30 2020-12-04 长安大学 Construction method of remote sensing image classification model, remote sensing image classification method and system
CN112149547A (en) * 2020-09-17 2020-12-29 南京信息工程大学 Remote sensing image water body identification based on image pyramid guidance and pixel pair matching
CN113191440A (en) * 2021-05-12 2021-07-30 济南大学 Remote sensing image instance classification method, system, terminal and storage medium
CN113312993A (en) * 2021-05-17 2021-08-27 北京大学 Remote sensing data land cover classification method based on PSPNet
WO2021184891A1 (en) * 2020-03-20 2021-09-23 中国科学院深圳先进技术研究院 Remotely-sensed image-based terrain classification method, and system
CN113887344A (en) * 2021-09-16 2022-01-04 同济大学 Ground feature element classification method based on satellite remote sensing multispectral and panchromatic image fusion
CN115564692A (en) * 2022-09-07 2023-01-03 宁波大学 Panchromatic-multispectral-hyperspectral integrated fusion method considering width difference
CN115578660A (en) * 2022-11-09 2023-01-06 牧马人(山东)勘察测绘集团有限公司 Land block segmentation method based on remote sensing image
CN115631372A (en) * 2022-10-18 2023-01-20 菏泽市土地储备中心 Land information classification management method based on soil remote sensing data
WO2023000159A1 (en) * 2021-07-20 2023-01-26 海南长光卫星信息技术有限公司 Semi-supervised classification method, apparatus and device for high-resolution remote sensing image, and medium
CN115713694A (en) * 2023-01-06 2023-02-24 东营国图信息科技有限公司 Land surveying and mapping information management method

Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103679675A (en) * 2013-11-29 2014-03-26 航天恒星科技有限公司 Remote sensing image fusion method oriented to water quality quantitative remote sensing application
US20170076438A1 (en) * 2015-08-31 2017-03-16 Cape Analytics, Inc. Systems and methods for analyzing remote sensing imagery
US20180286052A1 (en) * 2017-03-30 2018-10-04 4DM Inc. Object motion mapping using panchromatic and multispectral imagery from single pass electro-optical satellite imaging sensors
CN109697475A (en) * 2019-01-17 2019-04-30 中国地质大学(北京) A kind of muskeg information analysis method, remote sensing monitoring component and monitoring method
CN110263717A (en) * 2019-06-21 2019-09-20 中国科学院地理科学与资源研究所 It is a kind of incorporate streetscape image land used status determine method
CN110599424A (en) * 2019-09-16 2019-12-20 北京航天宏图信息技术股份有限公司 Method and device for automatic image color-homogenizing processing, electronic equipment and storage medium
WO2021184891A1 (en) * 2020-03-20 2021-09-23 中国科学院深圳先进技术研究院 Remotely-sensed image-based terrain classification method, and system
CN111681207A (en) * 2020-05-09 2020-09-18 宁波大学 Remote sensing image fusion quality evaluation method
CN112036246A (en) * 2020-07-30 2020-12-04 长安大学 Construction method of remote sensing image classification model, remote sensing image classification method and system
CN112149547A (en) * 2020-09-17 2020-12-29 南京信息工程大学 Remote sensing image water body identification based on image pyramid guidance and pixel pair matching
CN113191440A (en) * 2021-05-12 2021-07-30 济南大学 Remote sensing image instance classification method, system, terminal and storage medium
CN113312993A (en) * 2021-05-17 2021-08-27 北京大学 Remote sensing data land cover classification method based on PSPNet
WO2023000159A1 (en) * 2021-07-20 2023-01-26 海南长光卫星信息技术有限公司 Semi-supervised classification method, apparatus and device for high-resolution remote sensing image, and medium
CN113887344A (en) * 2021-09-16 2022-01-04 同济大学 Ground feature element classification method based on satellite remote sensing multispectral and panchromatic image fusion
CN115564692A (en) * 2022-09-07 2023-01-03 宁波大学 Panchromatic-multispectral-hyperspectral integrated fusion method considering width difference
CN115631372A (en) * 2022-10-18 2023-01-20 菏泽市土地储备中心 Land information classification management method based on soil remote sensing data
CN115578660A (en) * 2022-11-09 2023-01-06 牧马人(山东)勘察测绘集团有限公司 Land block segmentation method based on remote sensing image
CN115713694A (en) * 2023-01-06 2023-02-24 东营国图信息科技有限公司 Land surveying and mapping information management method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
FANG GAO等: "A high-resolution panchromatic-multispectral satellite image fusion method assisted with building segmentation", 《COMPUTERS AND GEOSCIENCES 》, pages 1 - 17 *
DING Xing: "Coastline Detection and Coastal Zone Ground Object Classification in Remote Sensing Images Based on Superpixels", China Master's Theses Full-text Database, Basic Sciences, vol. 2020, no. 6, pages 010-5 *
LIU Tianyu: "Research on Image Fusion and Ground Object Classification of GF-2 and Sentinel-2 Imagery", China Master's Theses Full-text Database, Basic Sciences, vol. 2023, no. 1, pages 008-352 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116310772A (en) * 2023-05-18 2023-06-23 德州华恒环保科技有限公司 Water environment pollution identification method based on multispectral image

Also Published As

Publication number Publication date
CN116129278B (en) 2023-06-30

Similar Documents

Publication Publication Date Title
CN109446992B (en) Remote sensing image building extraction method and system based on deep learning, storage medium and electronic equipment
CN109389163B (en) Unmanned aerial vehicle image classification system and method based on topographic map
CN112016436A (en) Remote sensing image change detection method based on deep learning
CN108428220B (en) Automatic geometric correction method for ocean island reef area of remote sensing image of geostationary orbit satellite sequence
CN112949407B (en) Remote sensing image building vectorization method based on deep learning and point set optimization
CN110598564B (en) OpenStreetMap-based high-spatial-resolution remote sensing image transfer learning classification method
CN111553922B (en) Automatic cloud detection method for satellite remote sensing image
CN112285710B (en) Multi-source remote sensing reservoir water storage capacity estimation method and device
CN107688777B (en) Urban green land extraction method for collaborative multi-source remote sensing image
Shaoqing et al. The comparative study of three methods of remote sensing image change detection
CN116129278B (en) Land utilization classification and identification system based on remote sensing images
CN111881801B (en) Newly-added construction land remote sensing monitoring method and equipment based on invariant detection strategy
CN111738113A (en) Road extraction method of high-resolution remote sensing image based on double-attention machine system and semantic constraint
CN107688776B (en) Urban water body extraction method
CN112364289B (en) Method for extracting water body information through data fusion
CN114266958A (en) Cloud platform based mangrove remote sensing rapid and accurate extraction method
CN109671038B (en) Relative radiation correction method based on pseudo-invariant feature point classification layering
CN112329790B (en) Quick extraction method for urban impervious surface information
CN113486975A (en) Ground object classification method, device, equipment and storage medium for remote sensing image
CN110569797A (en) earth stationary orbit satellite image forest fire detection method, system and storage medium thereof
CN114022459A (en) Multi-temporal satellite image-based super-pixel change detection method and system
CN116433940A (en) Remote sensing image change detection method based on twin mirror network
CN117853949B (en) Deep learning method and system for identifying cold front by using satellite cloud image
CN109064490B (en) Moving target tracking method based on MeanShift
CN112184785B (en) Multi-mode remote sensing image registration method based on MCD measurement and VTM

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A Land Use Classification and Recognition System Based on Remote Sensing Images

Effective date of registration: 20231114

Granted publication date: 20230630

Pledgee: Bank of Beijing Co.,Ltd. Jinan Branch

Pledgor: Wrangler (Shandong) Survey and Mapping Group Co.,Ltd.

Registration number: Y2023980065472

PE01 Entry into force of the registration of the contract for pledge of patent right