CN116129278A - Land utilization classification and identification system based on remote sensing images - Google Patents
- Publication number
- CN116129278A (application number CN202310368802.7A)
- Authority
- CN
- China
- Prior art keywords
- pixel point
- image
- remote sensing
- obtaining
- images
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A40/00—Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
- Y02A40/10—Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture
Abstract
The invention discloses a land utilization classification and identification system based on remote sensing images, relating to the field of image processing. The system comprises: an image acquisition module, for acquiring a panchromatic remote sensing image and multispectral remote sensing images; a preprocessing module, for obtaining the probability that each pixel in the multispectral remote sensing images belongs to each land utilization category and thereby obtaining a feature vector for each pixel and a feature image, and for obtaining a series of downsampled images of the panchromatic remote sensing image together with the similarity between pixels in the downsampled images; a data processing module, for obtaining the feature vector of each pixel in the panchromatic remote sensing image level by level, using the similarity between pixels in the target image and the feature vectors of the pixels; and an identification module, for obtaining the land utilization category corresponding to each pixel from the feature vector of that pixel in the panchromatic remote sensing image. The invention improves the accuracy of land utilization category identification.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a land utilization classification and identification system based on remote sensing images.
Background
Land utilization classification allows land to be used more effectively and its value to be maximized, and it is widely applied in urban planning, environmental assessment and similar fields, so accurate land utilization classification and identification is of great significance. With the continuous development of science and technology, remote sensing images have gradually replaced manually surveyed mapping data, and high-resolution remote sensing images, which contain rich ground-feature information, are increasingly applied to land utilization classification.
However, remote sensing images are easily affected by weather, which reduces their accuracy. Multispectral remote sensing images obtained from different bands have insufficient spatial resolution, while panchromatic remote sensing images, which cover all bands and offer high resolution, contain no color information and therefore cannot be used on their own for land classification and identification. The prior art therefore usually fuses the two to obtain a fused image that combines high resolution with multiband color information, and performs subsequent land utilization classification using the color information in the fused image. However, because the color information in the fused image comes from the multispectral remote sensing image of insufficient resolution, it appears as large continuous patches on the high-resolution fused image; that is, the precision of the color information in the fused image is insufficient, and the land utilization category identification results obtained by analyzing the fused image contain errors.
Disclosure of Invention
The invention provides a land utilization classification and identification system based on remote sensing images, aiming to solve the problem that the fusion of existing panchromatic and multispectral remote sensing images has insufficient precision and leads to inaccurate land utilization classification and identification results.
The land utilization classification and identification system based on remote sensing images disclosed by the invention comprises:
an image acquisition module: used for acquiring a panchromatic remote sensing image of the region to be classified and multispectral remote sensing images under several different band combinations;
a preprocessing module: used for obtaining the probability that each pixel in each multispectral remote sensing image belongs to the corresponding land utilization category, and forming the feature vector of the pixel at each position from the probabilities that the pixels at that position in the multispectral remote sensing images belong to the respective land utilization categories, so as to obtain a feature image;
downsampling the panchromatic remote sensing image level by level to obtain a series of downsampled images, and taking the downsampled image with the same resolution as the feature image as the target image; obtaining the similarity between each pixel and each of its neighborhood pixels in every downsampled image;
a data processing module: used for obtaining the feature vector of each pixel in the target image from the similarity between that pixel and each of its neighborhood pixels and the feature vectors of the corresponding pixels in the feature image;
obtaining the feature vector of each pixel in the downsampled image one level above the target image from the feature vectors of the pixels in the target image and the similarity between each pixel in that upper-level image and its neighborhood pixels; and proceeding upward level by level until the feature vector of each pixel in the panchromatic remote sensing image is obtained;
and an identification module: used for obtaining the land utilization category corresponding to each pixel according to the land-utilization-category probabilities contained in the feature vector of that pixel in the panchromatic remote sensing image.
Further, the method for obtaining the probability that each pixel in a multispectral remote sensing image belongs to the corresponding land utilization category comprises the following steps:
acquiring the HSV value of each pixel in the multispectral remote sensing image;
acquiring the H value range of the land utilization category corresponding to each multispectral remote sensing image;
and obtaining the probability that each pixel in the multispectral remote sensing image belongs to the corresponding land utilization category from the H value of that pixel and the H value range of the corresponding land utilization category.
Further, the preprocessing module also combines the probabilities of the corresponding land utilization categories of the pixels at the same position in the multispectral remote sensing images to obtain the feature vector of the pixel at each position.
Further, in the preprocessing module, the method for obtaining the similarity between each pixel on the target image and each of its neighborhood pixels comprises the following steps:
obtaining the LBP value of each pixel on the target image;
and obtaining the similarity between each pixel and each of its neighborhood pixels from the LBP values and gray values of that pixel and its neighborhood pixels in the target image.
Further, the expression for obtaining the similarity between each pixel on the target image and each of its neighborhood pixels is:

$$W_{i,j} = 1 - \sqrt{\frac{\left|LBP_i - LBP_{i,j}\right| + \left|G_i - G_{i,j}\right|}{2}}$$

wherein $W_{i,j}$ represents the similarity between the i-th pixel in the target image and its j-th neighborhood pixel; $LBP_i$ represents the LBP value of the i-th pixel in the target image; $LBP_{i,j}$ represents the LBP value of the j-th neighborhood pixel of the i-th pixel in the target image; $G_i$ represents the gray value of the i-th pixel in the target image; and $G_{i,j}$ represents the gray value of the j-th neighborhood pixel of the i-th pixel in the target image.
Further, in the data processing module, the expression for obtaining the feature vector of each pixel in the target image is:

$$F_i^{(t)} = \frac{t}{n}\sum_{j=1}^{8}\frac{W_{i,j}}{\sum_{k=1}^{8}W_{i,k}}\,F_{i,j}^{(0)} + \left(1-\frac{t}{n}\right)F_i^{(0)}$$

wherein $F_i^{(t)}$ represents the feature vector of the i-th pixel in the target image; $n$ represents the number of downsampling operations applied to the panchromatic remote sensing image; $t$ represents the update count; $W_{i,j}$ represents the similarity between the i-th pixel in the target image and its j-th neighborhood pixel; $F_{i,j}^{(0)}$ represents the feature vector, in the corresponding feature image, of the j-th neighborhood pixel of the i-th pixel in the target image; and $F_i^{(0)}$ represents the feature vector of the i-th pixel in the corresponding feature image.
Further, during the level-by-level downsampling, each pixel in a downsampled image at one level corresponds to several pixels in the adjacent upper-level (higher-resolution) downsampled image; in particular, each pixel in the target image corresponds to several pixels in the downsampled image one level above the target image.
Further, the identification module also divides the land into a plurality of utilization categories according to the land utilization category corresponding to each pixel.
The beneficial effects of the invention are as follows: the land utilization classification and identification system based on remote sensing images acquires several multispectral remote sensing images to obtain the probability that each pixel in the region to be classified belongs to each land utilization category, and from these probabilities builds a feature image whose pixels carry the category probabilities as feature vectors. The high-resolution panchromatic remote sensing image is downsampled to obtain a target image with the same resolution as the feature image, and the feature vectors derived from the multispectral remote sensing images are transferred onto the target image. Using the similarity between each pixel and its neighborhood pixels in every downsampled image, the feature vectors are then propagated level by level up to the panchromatic remote sensing image at the top level. Compared with the existing approach of directly fusing the multispectral and panchromatic remote sensing images, transferring the pixel feature vectors level by level into the high-resolution panchromatic image while taking the similarity between pixels into account is more accurate; it reduces the situation in the prior art where large patches of pixels with identical color information appear on the high-resolution fused image and cause errors at the edges of the land utilization classification result, so the land utilization category identification result is more accurate.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings required for the embodiments or for the description of the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings can be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is a schematic structural diagram of a land utilization classification and identification system based on remote sensing images.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
An embodiment of a land utilization classification and identification system based on remote sensing images of the present invention, as shown in fig. 1, comprises: an image acquisition module 10, a preprocessing module 11, a data processing module 12 and an identification module 13.
Image acquisition module 10: used for acquiring the panchromatic remote sensing image of the region to be classified and multispectral remote sensing images under several different band combinations.
Specifically, the invention needs to obtain the land utilization types of the region to be classified. In the prior art, different band combinations are selected so that multispectral remote sensing images are displayed as different false-color images, which allows targeted identification of ground features. For example, under the 5-4-3 band combination vegetation is displayed in red, so this combination can be selected for more accurate identification of vegetation; the 5-6-4 band combination can be selected to effectively identify water bodies, the 7-6-4 band combination to effectively identify construction land, and the 6-5-2 band combination to effectively identify cultivated land.
Therefore, in this scheme, multispectral remote sensing images of the region to be classified are acquired under the four band combinations that effectively identify vegetation, cultivated land, water body and construction land respectively, and a high-resolution panchromatic remote sensing image of the region to be classified is acquired as well.
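Purely as an illustrative sketch, and not as part of the patent text, the band-combination choices described above can be written down as a small configuration table; the dictionary name and the Landsat-8-style interpretation of the 5-4-3 / 5-6-4 / 7-6-4 / 6-5-2 numbers are assumptions:

```python
# Hypothetical sketch: RGB band assignments (false-color composites) per target category,
# assuming Landsat-8-style band numbering as implied by the combinations named above.
BAND_COMBINATIONS = {
    "vegetation":        (5, 4, 3),   # vegetation appears red in this composite
    "water":             (5, 6, 4),
    "construction_land": (7, 6, 4),
    "cultivated_land":   (6, 5, 2),
}
```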
Preprocessing module 11: used for obtaining the probability that each pixel in each multispectral remote sensing image belongs to the corresponding land utilization category, and forming the feature vector of the pixel at each position from the probabilities that the pixels at that position in the multispectral remote sensing images belong to the respective land utilization categories, so as to obtain a feature image; downsampling the panchromatic remote sensing image level by level to obtain a series of downsampled images, and taking the downsampled image with the same resolution as the feature image as the target image; and obtaining the similarity between each pixel and each of its neighborhood pixels in every downsampled image.
Specifically, the acquired multispectral remote sensing images of the four band combinations are converted into HSV space. HSV is a color space created by A. R. Smith in 1978 according to the visual characteristics of colors, also called the hexcone model; each color in this space is represented by hue (H), saturation (S) and value, i.e. brightness (V). The invention uses the H value. Each multispectral remote sensing image is used to calculate the probability of one land utilization category, so the H value range of the land utilization category corresponding to each multispectral remote sensing image is obtained first. For example, the 5-4-3 band combination is used to calculate the probability that each pixel in that multispectral remote sensing image belongs to a vegetation region: vegetation appears red, so the H value range $[H_{min}, H_{max}]$ corresponding to red is taken, and the probability that each pixel belongs to a vegetation region is calculated from the H value of that pixel in the 5-4-3 composite. The expression for the probability that a pixel belongs to the vegetation region is:
$$P_i^{1} = \begin{cases} \dfrac{H_i - H_{min}}{H_{max} - H_{min}}, & H_{min} \le H_i \le H_{max} \\ 0, & \text{otherwise} \end{cases}$$

wherein $P_i^{1}$ represents the probability that the i-th pixel in the multispectral remote sensing image belongs to the vegetation region; $H_i$ represents the H value of the i-th pixel in the multispectral remote sensing image; $H_{min}$ represents the minimum H value of the range corresponding to red in HSV space; and $H_{max}$ represents the maximum H value of that range. When the H value of a pixel lies within the red H value range $[H_{min}, H_{max}]$ the probability is computed as above, and when the H value is not in the red range the probability of belonging to the vegetation region is 0; the redder the pixel, the larger its probability of belonging to the vegetation region.
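The following Python fragment is a minimal sketch of this step, assuming OpenCV's 0-179 hue scale and a linear ramp inside the H range (the patent only states that the probability is 0 outside the range and grows with redness inside it); the function name is hypothetical:

```python
import cv2
import numpy as np

def category_probability(bgr_image, h_min, h_max):
    """Per-pixel probability of one land-use category from the hue channel.

    Sketch only: the linear ramp inside [h_min, h_max] is an assumption; the
    description above only fixes the behaviour outside the range (probability 0).
    """
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)  # OpenCV hue lies in [0, 179]
    h = hsv[:, :, 0].astype(np.float32)
    prob = (h - h_min) / float(h_max - h_min)         # assumed normalization inside the range
    prob = np.clip(prob, 0.0, 1.0)
    prob[(h < h_min) | (h > h_max)] = 0.0             # outside the range -> probability 0
    return prob
```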
The probabilities that each pixel belongs to water body, cultivated land and construction land are obtained from the other three multispectral images in the same way as the vegetation probability, and are denoted $P_i^{2}$, $P_i^{3}$ and $P_i^{4}$ respectively.
The multispectral remote sensing images of the four band combinations of the region to be classified are combined to obtain, for the pixel at each position, the probabilities that it belongs to the different land utilization categories, and these probabilities are assembled into the feature vector of the pixel at that position. That is, the feature vector of the i-th pixel is $(P_i^{1}, P_i^{2}, P_i^{3}, P_i^{4})$, wherein $P_i^{1}$ represents the probability that the i-th pixel in the feature image belongs to vegetation, $P_i^{2}$ the probability that it belongs to water body, $P_i^{3}$ the probability that it belongs to cultivated land, and $P_i^{4}$ the probability that it belongs to construction land.
The feature vectors of the pixels at all positions together form the feature image.
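A minimal sketch of assembling the feature image, assuming the four per-category probability maps have already been computed (for example with a function like the hypothetical category_probability above); all names here are illustrative:

```python
import numpy as np

def build_feature_image(prob_vegetation, prob_water, prob_cultivated, prob_construction):
    """Stack the four per-category probability maps into an H x W x 4 feature image,
    so that each pixel carries the feature vector (P1, P2, P3, P4)."""
    return np.stack(
        [prob_vegetation, prob_water, prob_cultivated, prob_construction],
        axis=-1,
    )
```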
The panchromatic remote sensing image of the region to be classified is downsampled: the sampling level of the panchromatic remote sensing image is set to 1; after the first downsampling the resulting downsampled image has sampling level 2; the image obtained after the first downsampling is downsampled again to give a downsampled image of sampling level 3, and so on, until a downsampled image with the same resolution as the feature image is obtained, at which point the downsampling stops and this last-level downsampled image is taken as the target image. The downsampled images of the levels corresponding to the panchromatic remote sensing image are denoted $I_1, I_2, \dots, I_n$, wherein $I_n$ represents the target image, $I_1$ is the panchromatic remote sensing image, and $I_2$ represents the downsampled image of sampling level 2 obtained after the first downsampling of the panchromatic remote sensing image.
The downsampled image with the same resolution as the feature image is used as the target image so that the color information from the multispectral images can be transferred onto the downsampled image in one-to-one pixel correspondence, and then transferred level by level up to the high-resolution panchromatic remote sensing image, on which land utilization type identification is performed.
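A minimal sketch of the level-by-level downsampling, assuming a factor of 2 per axis (so that one pixel corresponds to four pixels one level up, as assumed later in this embodiment) and a panchromatic image whose size reduces cleanly to the feature-image resolution; the helper name is hypothetical:

```python
import cv2

def build_pyramid(panchromatic, target_shape):
    """Downsample the panchromatic image level by level (here by a factor of 2 per axis)
    until the resolution matches the feature image; the last level is the target image."""
    levels = [panchromatic]                    # sampling level 1 = full resolution
    while levels[-1].shape[0] > target_shape[0] and levels[-1].shape[1] > target_shape[1]:
        levels.append(cv2.pyrDown(levels[-1])) # next sampling level, half the size per axis
    return levels                              # levels[-1] plays the role of the target image
```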
When transferring the color information, that is, the feature vectors of the pixels, the similarity between pixels in the downsampled images must be taken into account, so the similarity of texture features and gray values between pixels in the downsampled image at each level is analyzed first.
The similarity between a pixel and its neighborhood pixels is calculated in the same way in every downsampled level; in this embodiment the calculation is illustrated on the last-level downsampled image, i.e. the target image.
Specifically, the LBP value (LBP stands for Local Binary Pattern) of each pixel on the target image is obtained; the method is prior art and is not described here. From the LBP value and gray value of each pixel and of each of its neighborhood pixels in the target image, the similarity between the pixel and each neighborhood pixel is obtained. The expression for the similarity between each pixel on the target image and each of its neighborhood pixels is:

$$W_{i,j} = 1 - \sqrt{\frac{\left|LBP_i - LBP_{i,j}\right| + \left|G_i - G_{i,j}\right|}{2}}$$

wherein $W_{i,j}$ represents the similarity between the i-th pixel in the target image and its j-th neighborhood pixel; $LBP_i$ represents the LBP value of the i-th pixel in the target image; $LBP_{i,j}$ represents the LBP value of the j-th neighborhood pixel of the i-th pixel; $G_i$ represents the gray value of the i-th pixel in the target image; and $G_{i,j}$ represents the gray value of the j-th neighborhood pixel of the i-th pixel. The LBP value is the texture feature value of a pixel, and both the LBP values and the gray values are normalized before the differences are taken. $\left|LBP_i - LBP_{i,j}\right|$ is the difference between the texture feature values of the i-th pixel and its j-th neighborhood pixel among the 8 neighbors; it is a normalized value between 0 and 1, and the larger it is, the larger the texture difference between the two pixels, while a smaller value indicates a smaller texture difference. $\left|G_i - G_{i,j}\right|$ is the gray-level difference between the i-th pixel and its j-th neighborhood pixel; it also lies between 0 and 1, and the larger it is, the larger the gray-level difference, while a smaller value indicates a smaller gray-level difference. The texture difference and the gray-level difference are added, giving a value between 0 and 2, so the sum is divided by 2 to bring the result back to between 0 and 1; the square root of this value is then subtracted from 1, which maps the combined difference negatively. $W_{i,j}$ is therefore a number between 0 and 1: the closer it is to 0, the less similar the i-th pixel and its j-th neighborhood pixel are, and the closer it is to 1, the more similar they are.
At this point the similarity between each pixel in the target image and each of its neighborhood pixels has been obtained, and the similarity between each pixel and its neighborhood pixels in every other downsampled level is obtained in the same way.
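A minimal sketch of the similarity computation, assuming the basic 8-neighbor LBP code and normalization of the LBP and gray values by their maximum possible value, which the text above describes only qualitatively; the library choice (scikit-image) and all names are assumptions:

```python
import numpy as np
from skimage.feature import local_binary_pattern

def neighbor_similarity(gray):
    """Similarity W between each pixel and its 8 neighbors on one pyramid level.

    Sketch only: LBP is the basic 8-neighbor code (values 0..255), and LBP and gray
    values are normalized to [0, 1] before the differences are taken.
    Returns an array of shape (H, W, 8), one similarity per neighbor direction.
    """
    lbp = local_binary_pattern(gray, P=8, R=1, method="default") / 255.0
    g = gray.astype(np.float32) / 255.0
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    sims = np.zeros(gray.shape + (8,), dtype=np.float32)
    for k, (dy, dx) in enumerate(offsets):
        lbp_n = np.roll(np.roll(lbp, dy, axis=0), dx, axis=1)  # neighbor LBP (edges wrap; sketch)
        g_n = np.roll(np.roll(g, dy, axis=0), dx, axis=1)      # neighbor gray value
        sims[:, :, k] = 1.0 - np.sqrt((np.abs(lbp - lbp_n) + np.abs(g - g_n)) / 2.0)
    return sims
```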
Data processing module 12: used for obtaining the feature vector of each pixel in the target image from the similarity between that pixel and each of its neighborhood pixels and the feature vectors of the corresponding pixels in the feature image; obtaining the feature vector of each pixel in the downsampled image one level above the target image from the feature vectors of the pixels in the target image and the similarity between each pixel in that upper-level image and its neighborhood pixels; and proceeding upward level by level until the feature vector of each pixel in the panchromatic remote sensing image is obtained.
The similarity between each pixel and its neighborhood pixels in every downsampled level was obtained in the preprocessing module. This similarity reflects the relationship between pixels in a downsampled image: the greater the similarity between two pixels, the more similar their feature vectors are and the more likely the two pixels belong to the same land utilization category. The feature vectors of the pixels in the downsampled images can therefore be transferred level by level, guided by the pixel similarities, up to the high-resolution panchromatic remote sensing image, which ensures the accuracy of the land utilization category recognition result.
Specifically, the resolution of the target image is the same as that of the feature image, so the feature vector of each pixel in the target image is obtained by updating it with the feature vectors of the pixels in the feature image and the similarities between pixels in the target image. The feature vector of each pixel in the target image is obtained according to the following formula:

$$F_i^{(t)} = \frac{t}{n}\sum_{j=1}^{8}\frac{W_{i,j}}{\sum_{k=1}^{8}W_{i,k}}\,F_{i,j}^{(0)} + \left(1-\frac{t}{n}\right)F_i^{(0)}$$

wherein $F_i^{(t)}$ represents the feature vector of the i-th pixel in the target image; $n$ represents the number of downsampling operations applied to the panchromatic remote sensing image; $t$ represents the number of the current update, and since the update from the feature image to the target image is the first update, here $t = 1$; $W_{i,j}$ represents the similarity between the i-th pixel in the target image and its j-th neighborhood pixel; $F_{i,j}^{(0)}$ represents the feature vector, in the corresponding feature image, of the j-th neighborhood pixel of the i-th pixel in the target image; and $F_i^{(0)}$ represents the feature vector of the i-th pixel in the corresponding feature image.
$\frac{W_{i,j}}{\sum_{k=1}^{8}W_{i,k}}$ is the ratio of the similarity between the j-th neighborhood pixel and pixel i to the sum of the similarities between pixel i and its 8 neighborhood pixels. The larger this ratio, the greater the similarity between that neighborhood pixel and pixel i is considered to be, and the closer the feature vector of the neighborhood pixel is to the feature vector of pixel i. Multiplying the ratio by the feature vector of the corresponding neighborhood pixel gives the feature vector contribution that this neighborhood pixel assigns to pixel i, and summing the contributions of all 8 neighborhood pixels gives a preliminary estimate of the feature vector of pixel i. Considering that the greater the sampling level during downsampling, the lower the resolution of the downsampled image and the more blurred its texture, the similarity between pixels provides less support for the transferred feature vectors at higher sampling levels; therefore the ratio of the update count to the total number of downsampling operations, $t/n$, is used as the weight of the preliminary estimate. Conversely, the smaller the update count, the more accurate the original feature vector of the pixel taken from the feature image, so the pixel's own feature vector is weighted by $\left(1-\frac{t}{n}\right)$, which is larger when the update count is smaller. In this way a more accurate feature vector is obtained for each pixel on the updated target image.
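A minimal sketch of one update step under the weighting rule reconstructed above; the array layout, the wrap-around edge handling and the helper name are assumptions:

```python
import numpy as np

def update_feature_vectors(prior, sims, t, n):
    """One update step of the weighted rule sketched above.

    prior : (H, W, 4) feature vectors carried over from the previous level
            (for t = 1 these are simply the feature-image vectors).
    sims  : (H, W, 8) similarities between each pixel and its 8 neighbors at this level.
    Returns the updated (H, W, 4) feature vectors.
    """
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    weights = sims / np.clip(sims.sum(axis=2, keepdims=True), 1e-8, None)  # W_ij / sum_k W_ik
    estimate = np.zeros_like(prior)
    for k, (dy, dx) in enumerate(offsets):
        neighbor_prior = np.roll(np.roll(prior, dy, axis=0), dx, axis=1)   # neighbor's prior vector
        estimate += weights[:, :, k:k + 1] * neighbor_prior
    return (t / n) * estimate + (1.0 - t / n) * prior
```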
The feature vector of each pixel on the target image having been obtained, the feature vectors of the pixels in the downsampled image one level above the target image are estimated from the feature vectors of the pixels in the target image. When the feature vectors of the target image were computed from the feature image, the two images had the same resolution; the target image and the downsampled image one level above it, however, have different resolutions. In this embodiment the size ratio between adjacent downsampled levels is 4, so one pixel of the target image corresponds to four pixels in the downsampled image one level above it, and the feature vector of each pixel in the target image is used to compute the feature vectors of the corresponding four pixels in that upper-level image.
Specifically, the feature vector of each pixel in the downsampled image one level above the target image, i.e. the image obtained at the second update, is obtained according to the following formula:

$$F_i^{(2)} = \frac{2}{n}\sum_{j=1}^{8}\frac{W_{i,j}}{\sum_{k=1}^{8}W_{i,k}}\,F_{i,j}^{(1)} + \left(1-\frac{2}{n}\right)F_i^{(1)}$$

wherein $F_i^{(2)}$ represents the feature vector of the i-th pixel in the downsampled image one level above the target image; $n$ represents the number of downsampling operations applied to the panchromatic remote sensing image; the value 2 is the update count, since the update from the feature image to the target image was the first update and this is the second; $W_{i,j}$ represents the similarity between the i-th pixel in this upper-level downsampled image and its j-th neighborhood pixel; $F_{i,j}^{(1)}$ represents the feature vector of the pixel in the target image corresponding to the j-th neighborhood pixel of the i-th pixel in this upper-level downsampled image; and $F_i^{(1)}$ represents the feature vector of the pixel in the target image corresponding to the i-th pixel in this upper-level downsampled image. As before, the ratio $2/n$ weights the similarity-based estimate and $\left(1-\frac{2}{n}\right)$ weights the feature vector of the corresponding pixel; the smaller the update count, the larger the latter weight, giving a more accurate feature vector for each pixel on the updated image.
The feature vectors are then updated level by level upward from the image obtained at the second update, using the same method, until the feature vector of each pixel in the panchromatic remote sensing image is obtained.
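A minimal sketch of the level-by-level propagation, reusing the hypothetical helpers sketched above and realizing the one-to-four pixel correspondence by nearest-neighbor upsampling; the handling of the total update count n is an assumption made to keep the weight t/n within [0, 1]:

```python
import cv2
import numpy as np

def propagate_to_full_resolution(feature_image, pyramid):
    """Propagate feature vectors from the feature image up the pyramid.

    pyramid is the list returned by build_pyramid (first entry = panchromatic image,
    last entry = target image). The 1-to-4 pixel correspondence between adjacent
    levels is realized here by nearest-neighbor upsampling.
    """
    n = len(pyramid)                          # total number of update steps (assumption)
    prior = feature_image                     # same resolution as the target image
    for t, level in enumerate(reversed(pyramid), start=1):   # target image first, top level last
        if prior.shape[:2] != level.shape[:2]:
            prior = cv2.resize(prior, (level.shape[1], level.shape[0]),
                               interpolation=cv2.INTER_NEAREST)   # one pixel -> four pixels
        sims = neighbor_similarity(level)
        prior = update_feature_vectors(prior, sims, t, n)
    return prior                              # (H, W, 4) feature vectors at full resolution
```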
Identification module 13: used for obtaining the land utilization category of the area corresponding to each pixel from the feature vector of that pixel in the panchromatic remote sensing image.
The feature vector of each pixel in the panchromatic remote sensing image was obtained in the data processing module, and it contains the probabilities that the area corresponding to the pixel belongs to each land utilization category. For the i-th pixel in the panchromatic remote sensing image, the corresponding feature vector $(P_i^{1}, P_i^{2}, P_i^{3}, P_i^{4})$ contains four feature values, namely the probability values for vegetation, water body, cultivated land and construction land. Each probability value lies between 0 and 1, but the region to be classified may also contain land utilization categories other than vegetation, water body, cultivated land and construction land.
Therefore, for each pixel in the panchromatic remote sensing image the maximum of its four feature values, i.e. the maximum of the probability values of the four land utilization categories, is taken. If this maximum probability value is greater than the probability threshold of 0.5 (the threshold can be set according to the specific situation), the land utilization category corresponding to the maximum probability value is taken as the land utilization category of the area corresponding to that pixel; if the maximum probability value is not greater than 0.5, the area corresponding to the pixel is regarded as belonging to another land utilization category, i.e. it does not belong to vegetation, cultivated land, water body or construction land.
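A minimal sketch of this per-pixel decision rule with the 0.5 threshold; the category names, their ordering and the function name are assumptions:

```python
import numpy as np

CATEGORIES = ["vegetation", "water", "cultivated_land", "construction_land"]  # order of P1..P4

def classify(feature_vectors, threshold=0.5):
    """Assign a land-use label per pixel from the (H, W, 4) feature vectors:
    the arg-max category if its probability exceeds the threshold, otherwise 'other'."""
    best = feature_vectors.argmax(axis=-1)                 # index of the largest probability
    best_prob = feature_vectors.max(axis=-1)
    labels = np.array(CATEGORIES, dtype=object)[best]      # category name per pixel
    labels[best_prob <= threshold] = "other"               # below threshold -> other land use
    return labels
```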
In summary, the invention provides a land utilization classification and identification system based on remote sensing images, which acquires several multispectral remote sensing images to obtain the probability that each pixel in the region to be classified belongs to each land utilization category and thereby builds a feature image carrying these category probabilities; downsamples the high-resolution panchromatic remote sensing image to obtain a target image with the same resolution as the feature image, and transfers the feature vectors derived from the multispectral remote sensing images, i.e. the per-pixel category probabilities, onto the target image; and, using the similarity between each pixel and its neighborhood pixels in every downsampled image, propagates the feature vectors level by level up to the panchromatic remote sensing image at the top level. Compared with the existing approach of directly fusing the multispectral and panchromatic remote sensing images, transferring the pixel feature vectors level by level into the high-resolution panchromatic image while taking the similarity between pixels into account is more accurate; it reduces the situation in the prior art where large patches of pixels with identical color information appear on the high-resolution fused image and cause errors at the edges of the land utilization classification result, so the land utilization type identification result is more accurate.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, alternatives, and improvements that fall within the spirit and scope of the invention.
Claims (8)
1. A land utilization classification and identification system based on remote sensing images, characterized by comprising:
an image acquisition module: used for acquiring a panchromatic remote sensing image of the region to be classified and multispectral remote sensing images under several different band combinations;
a preprocessing module: used for obtaining the probability that each pixel in each multispectral remote sensing image belongs to the corresponding land utilization category, and forming the feature vector of the pixel at each position from the probabilities that the pixels at that position in the multispectral remote sensing images belong to the respective land utilization categories, so as to obtain a feature image;
downsampling the panchromatic remote sensing image level by level to obtain a series of downsampled images, and taking the downsampled image with the same resolution as the feature image as the target image; obtaining the similarity between each pixel and each of its neighborhood pixels in every downsampled image;
a data processing module: used for obtaining the feature vector of each pixel in the target image from the similarity between that pixel and each of its neighborhood pixels and the feature vectors of the corresponding pixels in the feature image;
obtaining the feature vector of each pixel in the downsampled image one level above the target image from the feature vectors of the pixels in the target image and the similarity between each pixel in that upper-level image and its neighborhood pixels; and proceeding upward level by level until the feature vector of each pixel in the panchromatic remote sensing image is obtained;
and an identification module: used for obtaining the land utilization category corresponding to each pixel according to the land-utilization-category probabilities contained in the feature vector of that pixel in the panchromatic remote sensing image.
2. The land utilization classification and identification system based on remote sensing images as set forth in claim 1, wherein the method for obtaining the probability that each pixel in a multispectral remote sensing image belongs to the corresponding land utilization category comprises the following steps:
acquiring the HSV value of each pixel in the multispectral remote sensing image;
acquiring the H value range of the land utilization category corresponding to each multispectral remote sensing image;
and obtaining the probability that each pixel in the multispectral remote sensing image belongs to the corresponding land utilization category from the H value of that pixel and the H value range of the corresponding land utilization category.
3. The land utilization classification and identification system based on remote sensing images as set forth in claim 1, wherein the preprocessing module also combines the probabilities of the corresponding land utilization categories of the pixels at the same position in the multispectral remote sensing images to obtain the feature vector of the pixel at each position.
4. The land utilization classification and identification system based on remote sensing images as set forth in claim 1, wherein the preprocessing module obtains the similarity between each pixel on the target image and each of its neighborhood pixels by:
obtaining the LBP value of each pixel on the target image;
and obtaining the similarity between each pixel and each of its neighborhood pixels from the LBP values and gray values of that pixel and its neighborhood pixels in the target image.
5. The land utilization classification and identification system based on remote sensing images as set forth in claim 4, wherein the expression for obtaining the similarity between each pixel on the target image and each of its neighborhood pixels is:

$$W_{i,j} = 1 - \sqrt{\frac{\left|LBP_i - LBP_{i,j}\right| + \left|G_i - G_{i,j}\right|}{2}}$$

wherein $W_{i,j}$ represents the similarity between the i-th pixel in the target image and its j-th neighborhood pixel; $LBP_i$ represents the LBP value of the i-th pixel in the target image; $LBP_{i,j}$ represents the LBP value of the j-th neighborhood pixel of the i-th pixel in the target image; $G_i$ represents the gray value of the i-th pixel in the target image; and $G_{i,j}$ represents the gray value of the j-th neighborhood pixel of the i-th pixel in the target image.
6. The land utilization classification and identification system based on remote sensing images as set forth in claim 1, wherein the expression used by the data processing module to obtain the feature vector of each pixel in the target image is:

$$F_i^{(t)} = \frac{t}{n}\sum_{j=1}^{8}\frac{W_{i,j}}{\sum_{k=1}^{8}W_{i,k}}\,F_{i,j}^{(0)} + \left(1-\frac{t}{n}\right)F_i^{(0)}$$

wherein $F_i^{(t)}$ represents the feature vector of the i-th pixel in the target image; $n$ represents the number of downsampling operations applied to the panchromatic remote sensing image; $t$ represents the update count; $W_{i,j}$ represents the similarity between the i-th pixel in the target image and its j-th neighborhood pixel; $F_{i,j}^{(0)}$ represents the feature vector, in the corresponding feature image, of the j-th neighborhood pixel of the i-th pixel in the target image; and $F_i^{(0)}$ represents the feature vector of the i-th pixel in the corresponding feature image.
7. The land utilization classification and identification system based on remote sensing images as set forth in claim 1, wherein, during the level-by-level downsampling, each pixel in a downsampled image at one level corresponds to several pixels in the adjacent upper-level downsampled image, i.e. each pixel in the target image corresponds to several pixels in the downsampled image one level above the target image.
8. The land utilization classification and identification system based on remote sensing images as claimed in claim 1, wherein said identification module also divides the land into a plurality of utilization categories according to the land utilization category corresponding to each pixel.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310368802.7A CN116129278B (en) | 2023-04-10 | 2023-04-10 | Land utilization classification and identification system based on remote sensing images |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310368802.7A CN116129278B (en) | 2023-04-10 | 2023-04-10 | Land utilization classification and identification system based on remote sensing images |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116129278A true CN116129278A (en) | 2023-05-16 |
CN116129278B CN116129278B (en) | 2023-06-30 |
Family
ID=86295924
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310368802.7A Active CN116129278B (en) | 2023-04-10 | 2023-04-10 | Land utilization classification and identification system based on remote sensing images |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116129278B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116310772A (en) * | 2023-05-18 | 2023-06-23 | 德州华恒环保科技有限公司 | Water environment pollution identification method based on multispectral image |
Citations (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103679675A (en) * | 2013-11-29 | 2014-03-26 | 航天恒星科技有限公司 | Remote sensing image fusion method oriented to water quality quantitative remote sensing application |
US20170076438A1 (en) * | 2015-08-31 | 2017-03-16 | Cape Analytics, Inc. | Systems and methods for analyzing remote sensing imagery |
US20180286052A1 (en) * | 2017-03-30 | 2018-10-04 | 4DM Inc. | Object motion mapping using panchromatic and multispectral imagery from single pass electro-optical satellite imaging sensors |
CN109697475A (en) * | 2019-01-17 | 2019-04-30 | 中国地质大学(北京) | A kind of muskeg information analysis method, remote sensing monitoring component and monitoring method |
CN110263717A (en) * | 2019-06-21 | 2019-09-20 | 中国科学院地理科学与资源研究所 | It is a kind of incorporate streetscape image land used status determine method |
CN110599424A (en) * | 2019-09-16 | 2019-12-20 | 北京航天宏图信息技术股份有限公司 | Method and device for automatic image color-homogenizing processing, electronic equipment and storage medium |
CN111681207A (en) * | 2020-05-09 | 2020-09-18 | 宁波大学 | Remote sensing image fusion quality evaluation method |
CN112036246A (en) * | 2020-07-30 | 2020-12-04 | 长安大学 | Construction method of remote sensing image classification model, remote sensing image classification method and system |
CN112149547A (en) * | 2020-09-17 | 2020-12-29 | 南京信息工程大学 | Remote sensing image water body identification based on image pyramid guidance and pixel pair matching |
CN113191440A (en) * | 2021-05-12 | 2021-07-30 | 济南大学 | Remote sensing image instance classification method, system, terminal and storage medium |
CN113312993A (en) * | 2021-05-17 | 2021-08-27 | 北京大学 | Remote sensing data land cover classification method based on PSPNet |
WO2021184891A1 (en) * | 2020-03-20 | 2021-09-23 | 中国科学院深圳先进技术研究院 | Remotely-sensed image-based terrain classification method, and system |
CN113887344A (en) * | 2021-09-16 | 2022-01-04 | 同济大学 | Ground feature element classification method based on satellite remote sensing multispectral and panchromatic image fusion |
CN115564692A (en) * | 2022-09-07 | 2023-01-03 | 宁波大学 | Panchromatic-multispectral-hyperspectral integrated fusion method considering width difference |
CN115578660A (en) * | 2022-11-09 | 2023-01-06 | 牧马人(山东)勘察测绘集团有限公司 | Land block segmentation method based on remote sensing image |
CN115631372A (en) * | 2022-10-18 | 2023-01-20 | 菏泽市土地储备中心 | Land information classification management method based on soil remote sensing data |
WO2023000159A1 (en) * | 2021-07-20 | 2023-01-26 | 海南长光卫星信息技术有限公司 | Semi-supervised classification method, apparatus and device for high-resolution remote sensing image, and medium |
CN115713694A (en) * | 2023-01-06 | 2023-02-24 | 东营国图信息科技有限公司 | Land surveying and mapping information management method |
Non-Patent Citations (3)
Title |
---|
FANG GAO et al.: "A high-resolution panchromatic-multispectral satellite image fusion method assisted with building segmentation", Computers and Geosciences, pages 1-17 *
DING XING: "Superpixel-based coastline detection and coastal zone land-cover classification in remote sensing images", China Master's Theses Full-text Database, Basic Sciences, vol. 2020, no. 6, pages 010-5 *
LIU TIANYU: "Research on image fusion and land-cover classification of GF-2 and Sentinel-2 imagery", China Master's Theses Full-text Database, Basic Sciences, vol. 2023, no. 1, pages 008-352 *
Also Published As
Publication number | Publication date |
---|---|
CN116129278B (en) | 2023-06-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109446992B (en) | Remote sensing image building extraction method and system based on deep learning, storage medium and electronic equipment | |
CN109389163B (en) | Unmanned aerial vehicle image classification system and method based on topographic map | |
CN112016436A (en) | Remote sensing image change detection method based on deep learning | |
CN108428220B (en) | Automatic geometric correction method for ocean island reef area of remote sensing image of geostationary orbit satellite sequence | |
CN112949407B (en) | Remote sensing image building vectorization method based on deep learning and point set optimization | |
CN110598564B (en) | OpenStreetMap-based high-spatial-resolution remote sensing image transfer learning classification method | |
CN111553922B (en) | Automatic cloud detection method for satellite remote sensing image | |
CN112285710B (en) | Multi-source remote sensing reservoir water storage capacity estimation method and device | |
CN107688777B (en) | Urban green land extraction method for collaborative multi-source remote sensing image | |
Shaoqing et al. | The comparative study of three methods of remote sensing image change detection | |
CN116129278B (en) | Land utilization classification and identification system based on remote sensing images | |
CN111881801B (en) | Newly-added construction land remote sensing monitoring method and equipment based on invariant detection strategy | |
CN111738113A (en) | Road extraction method of high-resolution remote sensing image based on double-attention machine system and semantic constraint | |
CN107688776B (en) | Urban water body extraction method | |
CN112364289B (en) | Method for extracting water body information through data fusion | |
CN114266958A (en) | Cloud platform based mangrove remote sensing rapid and accurate extraction method | |
CN109671038B (en) | Relative radiation correction method based on pseudo-invariant feature point classification layering | |
CN112329790B (en) | Quick extraction method for urban impervious surface information | |
CN113486975A (en) | Ground object classification method, device, equipment and storage medium for remote sensing image | |
CN110569797A (en) | earth stationary orbit satellite image forest fire detection method, system and storage medium thereof | |
CN114022459A (en) | Multi-temporal satellite image-based super-pixel change detection method and system | |
CN116433940A (en) | Remote sensing image change detection method based on twin mirror network | |
CN117853949B (en) | Deep learning method and system for identifying cold front by using satellite cloud image | |
CN109064490B (en) | Moving target tracking method based on MeanShift | |
CN112184785B (en) | Multi-mode remote sensing image registration method based on MCD measurement and VTM |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
PE01 | Entry into force of the registration of the contract for pledge of patent right |
Denomination of invention: A Land Use Classification and Recognition System Based on Remote Sensing Images; Effective date of registration: 20231114; Granted publication date: 20230630; Pledgee: Bank of Beijing Co., Ltd. Jinan Branch; Pledgor: Wrangler (Shandong) Survey and Mapping Group Co., Ltd.; Registration number: Y2023980065472 |
|
PE01 | Entry into force of the registration of the contract for pledge of patent right |