CN113628218A - Sub-pixel mapping method based on multi-scale target infrared information - Google Patents

Sub-pixel mapping method based on multi-scale target infrared information

Info

Publication number
CN113628218A
Authority
CN
China
Prior art keywords
sub
target
pixel
information
scale
Prior art date
Legal status: Granted
Application number
CN202010379859.3A
Other languages
Chinese (zh)
Other versions
CN113628218B (en)
Inventor
王鹏
肖子逸
张海仁
蒲子琪
Current Assignee
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN202010379859.3A priority Critical patent/CN113628218B/en
Publication of CN113628218A publication Critical patent/CN113628218A/en
Application granted granted Critical
Publication of CN113628218B publication Critical patent/CN113628218B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/40 Analysis of texture
    • G06T7/49 Analysis of texture based on structural texture description, e.g. using primitives or placement rules
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10048 Infrared image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems


Abstract

The invention discloses a sub-pixel mapping method based on multi-scale target infrared information. The method comprises the following steps: (1) in the multi-scale path, a fine abundance image with multi-scale object information is generated through a Laplacian pyramid deep learning network, segmentation and an extended random walker algorithm; (2) in the infrared path, a fine abundance image with rich infrared information is obtained by calculating a normalized difference target index; (3) the two fine abundance images with different information obtained in step (1) and step (2) are combined by a linear integration method to obtain a finer abundance image with multi-scale target infrared information; (4) according to this finer abundance image, a particle swarm optimization algorithm is adopted to obtain the final sub-pixel mapping result for land cover target identification. By combining the multi-scale information and the infrared information of the image, the invention obtains mapping results with higher spatial resolution.

Description

Sub-pixel mapping method based on multi-scale target infrared information
Technical Field
The invention relates to the technical field of remote sensing information processing, in particular to a sub-pixel mapping method based on multi-scale target infrared information.
Background
Mapping information of land cover targets such as vegetation, water and buildings is one of the basic data for research on ecosystem prediction, environmental pollution, population monitoring and the like. Mapping information for land cover targets may be obtained using Operational Land Imager (OLI) images from the Landsat 8 satellite launched in February 2013. The Landsat 8 OLI image has a spatial resolution of 30 m and eight bands in the visible, near-infrared and shortwave-infrared regions of the electromagnetic spectrum, plus an additional panchromatic band at 15 m spatial resolution. However, due to the limitations of sensors and the diversity of land cover categories, the resolution of the acquired OLI images is sometimes low and the number of mixed pixels is large, which hinders accurate mapping of land cover targets. To solve this problem, sub-pixel mapping (SPM) methods are applied to coarse remote sensing images, including Landsat 8 OLI images, to obtain mapping information at the sub-pixel level. SPM divides each mixed pixel into S × S sub-pixels according to the scale factor S, and converts the coarse fraction image obtained through spectral unmixing of the original remote sensing image into a hard classification image with higher spatial resolution.
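For illustration only (not part of the patent text), the following Python sketch shows how the scale factor S relates the unmixed fraction of a coarse pixel to the number of target sub-pixels that sub-pixel mapping must place inside it:

```python
# Illustrative sketch: relation between a coarse fraction image and sub-pixel counts in SPM.
import numpy as np

def target_subpixel_counts(fraction_image, S):
    """For each coarse pixel, the number of sub-pixels (out of S*S) that should
    receive the target label, derived from its unmixed fraction value."""
    return np.round(fraction_image * S * S).astype(int)

# Example: a 2 x 2 coarse fraction image and S = 6 (36 sub-pixels per coarse pixel).
fractions = np.array([[0.25, 0.80],
                      [0.00, 0.50]])
print(target_subpixel_counts(fractions, S=6))
# [[ 9 29]
#  [ 0 18]]
```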
In recent years, research on SPM has developed rapidly. Most SPM methods are based on various forms of spatial correlation. In particular, more efficient SPM methods based on object spatial dependence (OSD) have recently been proposed. Depending on the manner in which the SPM result is generated, there are two types of SPM methods: initialization-then-optimization methods and soft-then-hard methods. In initialization-then-optimization methods, land cover category labels are randomly assigned to the sub-pixels and then optimized by gradually exchanging the spatial positions of the sub-pixels. The pixel-swapping algorithm, the minimum perimeter method, the neighborhood method and the Moran's I method are all of this type. To obtain better results, artificial intelligence algorithms such as simulated annealing, particle swarm optimization and genetic algorithms have been used to optimize these methods. However, these algorithms typically require complex physical structures and long computation times. The soft-then-hard method is more widely used because of its simple physical meaning and fast operation. It comprises two steps: 1) sub-pixel sharpening and 2) class assignment. Sub-pixel sharpening increases the resolution of the coarse fraction image to produce a fine proportion image, in which each sub-pixel carries the proportion of the land cover target. The spatial attraction model, the Hopfield neural network, the back-propagation neural network, indicator co-kriging and some super-resolution algorithms can realize sub-pixel sharpening. According to these proportions, a class label is then assigned to each sub-pixel by a class assignment method. Class assignment methods include linear optimization, highest-soft-attribute-value-first assignment, object units, class units and so on. Since sub-pixel mapping is an ill-posed problem, the accuracy of the mapping result can also be improved by auxiliary data, such as multiple sub-pixel shifted images, light detection and ranging (LiDAR) data, fused images, fine-scale information, etc.
However, the object information used in existing SPM methods is single-scale, which may affect the resulting SPM result. Furthermore, the infrared information in Landsat 8 OLI images is not used for land cover targets in existing SPM methods. To solve the above problems, a sub-pixel land cover target mapping method based on multi-scale object and infrared information for Landsat 8 OLI images (MOII) is proposed. MOII includes an object term and an infrared term. For the object term, a coarse fraction image of the land cover target is first obtained by unmixing the original coarse Landsat 8 OLI image, and at the same time the first principal component of the Landsat 8 OLI image is obtained through principal component analysis (PCA). The coarse fraction image and the multi-scale first principal component are then trained with a Laplacian pyramid deep learning network to produce an upsampled fraction image and an upsampled first principal component. Next, objects are generated by segmenting the upsampled first principal component. Finally, the class proportion of each object is obtained by fusing the upsampled fraction image with the generated objects, and the class proportions are processed by the extended random walker (ERW) algorithm to generate an object term with multi-scale object information. On the other hand, an infrared term with rich infrared information is derived by calculating a normalized difference target index (NDTI). In the Landsat 8 OLI image, the basis for selecting the two wavelength bands is that the reflectance of the target increases sharply from the near-infrared band to the other band. The infrared term is obtained by minimizing the difference in spectral index between the observed NDTI value (NDTI_obe) and the simulated NDTI value (NDTI_sim). By combining the object term with the infrared term, an objective function with multi-scale target infrared information is generated. According to this objective function, the final SPM result for land cover target identification is obtained using particle swarm optimization (PSO).
Disclosure of Invention
Images generated by the Operational Land Imager (OLI) of Landsat 8 can provide important mapping information for land cover targets such as vegetation, water and buildings. Due to the limitations of sensors and the diversity of land cover categories, the resolution of the acquired Landsat 8 OLI images is sometimes very coarse, which prevents accurate mapping of land cover targets. To address this problem, sub-pixel mapping (SPM) methods have been proposed for mapping land cover targets at the sub-pixel scale. However, in existing SPM methods the target information is single-scale and the infrared information is not fully utilized.
In order to improve the mapping precision of land cover targets, the invention provides a sub-pixel mapping method based on multi-scale target infrared information (MOII), which comprises the following steps:
(1) In the multi-scale path, a fine abundance image with multi-scale object information is generated through a Laplacian pyramid deep learning network, segmentation and an extended random walker (ERW) algorithm.
(2) In the infrared path, a fine abundance image with rich infrared information is obtained by calculating a normalized difference target index (NDTI).
(3) The two fine abundance images with different information obtained in step (1) and step (2) are combined by a linear integration method to obtain a finer abundance image with multi-scale target infrared information.
(4) According to the finer abundance image, a particle swarm optimization (PSO) algorithm is adopted to obtain the final sub-pixel mapping (SPM) result for land cover target identification.
Preferably, in step (1), the processing based on the Laplacian pyramid deep learning network, segmentation and the extended random walker algorithm is specifically as follows: in order to obtain multi-scale information of the object, an object term T_obj is constructed through the Laplacian pyramid deep learning network, followed in turn by image segmentation and the extended random walker (ERW) algorithm.
First, we obtain a coarse fraction image of the land cover target by spectrally unmixing the original coarse Landsat 8 OLI image, and at the same time obtain the first principal component of the Landsat 8 OLI image using PCA.
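A minimal sketch of the PCA part of this step, assuming a scikit-learn based implementation (the patent does not prescribe a particular library), for extracting the first principal component of a multiband image:

```python
# Sketch: first principal component of a (rows, cols, bands) image with scikit-learn.
import numpy as np
from sklearn.decomposition import PCA

def first_principal_component(image):
    """image: (rows, cols, bands) array -> (rows, cols) first principal component."""
    rows, cols, bands = image.shape
    X = image.reshape(-1, bands).astype(float)
    pc1 = PCA(n_components=1).fit_transform(X)   # project each pixel onto the first component
    return pc1.reshape(rows, cols)

# Example with a random 6-band image standing in for a Landsat 8 OLI subset.
pc1 = first_principal_component(np.random.rand(30, 30, 6))
print(pc1.shape)  # (30, 30)
```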
In the second step, the coarse fraction image and the multi-scale first principal component are trained using the Laplacian pyramid deep learning network to produce an upsampled fraction image and an upsampled first principal component.
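The patent trains a Laplacian pyramid deep learning network for this upsampling; the sketch below only illustrates the underlying multi-scale Laplacian pyramid representation with classical OpenCV operations, not the trained network itself:

```python
# Sketch: classical Laplacian pyramid decomposition (illustration of the multi-scale
# representation only; the patent's trained upsampling network is not reproduced here).
import cv2
import numpy as np

def laplacian_pyramid(img, levels=3):
    """Decompose a single-band float image into a Laplacian pyramid."""
    pyramid, current = [], img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(current)
        up = cv2.pyrUp(down, dstsize=(current.shape[1], current.shape[0]))
        pyramid.append(current - up)        # band-pass detail at this scale
        current = down
    pyramid.append(current)                 # coarsest residual
    return pyramid

bands = laplacian_pyramid(np.random.rand(120, 120))
print([b.shape for b in bands])  # [(120, 120), (60, 60), (30, 30), (15, 15)]
```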
In the third step, target objects are generated by segmenting the upsampled first principal component, where Q is defined as the subdivision proportion parameter that determines the merging termination condition and the size of the objects. The segmentation criterion is given by:
H = θ × H_spectral + (1 − θ) × H_shape (1)
where H is the regional heterogeneity; θ is a free parameter that balances the spectral heterogeneity H_spectral and the shape heterogeneity H_shape, and is generally set to 0.5. H_spectral and H_shape are given by formulas (2) and (3):

H_spectral = Σ_{b=1..B} w_b × σ_b (2)

H_shape = θ_shape × H_compact + (1 − θ_shape) × H_smooth, with H_smooth = a / r and H_compact = a / √N (3)

where b denotes a spectral band (b = 1, 2, ..., B; B is the total number of spectral bands); σ_b is the standard deviation of the spectral values of the b-th band within the segmented region; w_b is the free spectral parameter of the b-th band, set here to 1 for all selected bands; H_smooth and H_compact respectively represent the smoothness and compactness of the segmented target region; a is the actual boundary length of the segmented region, r is the boundary length of its bounding rectangle, and N is the number of sub-pixels in the segmented region. θ_shape is a free parameter, typically set here to 0.4.
Between adjacent regions, the two objects with the minimum heterogeneity are merged. When the heterogeneity H of the merged region exceeds the preset subdivision proportion parameter Q, the merging process terminates and the objects are extracted.
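A sketch of the merging criterion under the stated parameter settings (θ = 0.5, θ_shape = 0.4, band weights w_b = 1); the exact forms of equations (2) and (3) follow the reconstruction given above:

```python
# Sketch: region heterogeneity used to decide whether two adjacent regions may merge.
import numpy as np

def heterogeneity(region, theta=0.5, theta_shape=0.4, band_weights=None):
    """region: dict with 'spectra' (N, B) array, 'boundary_len' a, 'rect_boundary_len' r.
    Returns H = theta * H_spectral + (1 - theta) * H_shape, as in eqs. (1)-(3)."""
    spectra = np.asarray(region["spectra"], dtype=float)
    N, B = spectra.shape
    w = np.ones(B) if band_weights is None else np.asarray(band_weights)
    h_spectral = float(np.sum(w * spectra.std(axis=0)))            # eq. (2)
    a, r = region["boundary_len"], region["rect_boundary_len"]
    h_smooth, h_compact = a / r, a / np.sqrt(N)                    # shape components
    h_shape = theta_shape * h_compact + (1 - theta_shape) * h_smooth   # eq. (3)
    return theta * h_spectral + (1 - theta) * h_shape              # eq. (1)

region = {"spectra": np.random.rand(25, 6), "boundary_len": 20, "rect_boundary_len": 20}
print(heterogeneity(region))
# Merging stops once the merged region's H exceeds the subdivision parameter Q.
```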
Finally, after the above processing steps, the upsampled first principal component is divided into K objects O_k (k = 1, 2, ..., K). Object O_k contains N_k sub-pixels, and the upsampled fraction image provides the class proportion L(p_i) of each sub-pixel p_i (i = 1, 2, ..., N_k). The class proportion G(O_k) of object O_k is obtained by averaging the soft proportions of the sub-pixels within it:

G(O_k) = (1 / N_k) × Σ_{i=1..N_k} L(p_i) (4)
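A minimal sketch of equation (4), computing the class proportion of each segmented object as the mean of the sub-pixel proportions it contains:

```python
# Sketch of eq. (4): object class proportion = mean of its sub-pixel class proportions.
import numpy as np

def object_class_proportions(fine_fraction, object_labels):
    """fine_fraction: upsampled fraction image (H, W); object_labels: integer map (H, W)
    assigning each sub-pixel to one of the K objects. Returns {object id: G(O_k)}."""
    return {int(k): float(fine_fraction[object_labels == k].mean())
            for k in np.unique(object_labels)}

fractions = np.array([[0.9, 0.8, 0.1],
                      [0.7, 0.2, 0.0]])
labels = np.array([[0, 0, 1],
                   [0, 1, 1]])
print(object_class_proportions(fractions, labels))   # approximately {0: 0.8, 1: 0.1}
```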
The term T(i)_obj corresponding to the i-th sub-pixel is obtained through the ERW algorithm and is given by equation (5):

T(i)_obj = λ × T_among(G) + (1 − λ) × T_within(G) (5)
where T_among(G) represents the information between objects, T_within(G) represents the information within each object, G = [G(O_1), G(O_2), ..., G(O_K)] is a column vector, and λ is a free parameter set to 0.5.

T_among(G) is obtained by the following formula:

T_among(G) = G^T L G (6)

L is the Laplacian matrix:

L_kk = Σ_j w_kj; L_kj = −w_kj if objects O_k and O_j are adjacent (k ≠ j); L_kj = 0 otherwise (7)

where w_kj = exp(−μ(v_k − v_j)^2) is the edge weight computed from the spectral difference between the k-th object O_k and the j-th object O_j. μ is a free variable, set here to 0.6. The spectral value v_k of the k-th object is calculated as:

v_k = (1 / N_k) × Σ_{i=1..N_k} v_i^k (8)

where v_i^k is the spectral value of the i-th sub-pixel of O_k.
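A sketch (an assumed implementation, not code from the patent) of the object-level graph behind T_among(G) = G^T L G: edge weights w_kj = exp(−μ(v_k − v_j)^2) between adjacent objects and the Laplacian L = D − W:

```python
# Sketch: object-level Laplacian matrix from object mean spectral values (eqs. (6)-(8)).
import numpy as np

def object_laplacian(mean_values, adjacency, mu=0.6):
    """mean_values: (K,) mean spectral value v_k of each object (eq. (8));
    adjacency: (K, K) boolean matrix of object adjacency. Returns the Laplacian L = D - W."""
    v = np.asarray(mean_values, dtype=float)
    W = np.exp(-mu * (v[:, None] - v[None, :]) ** 2) * adjacency   # weights on edges only
    np.fill_diagonal(W, 0.0)
    D = np.diag(W.sum(axis=1))
    return D - W

v_k = np.array([0.2, 0.3, 0.9])
adj = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=bool)
L = object_laplacian(v_k, adj, mu=0.6)
print(np.allclose(L.sum(axis=1), 0))  # True: rows of a Laplacian sum to zero
```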
T_within(G) is defined as:

T_within(G) = (G − 1)^T Λ (G − 1) + G^T Λ_B G (9)

where 1 is a column vector of ones, Λ is a diagonal matrix whose diagonal values are the land cover target class proportions of the objects, and Λ_B is also a diagonal matrix whose diagonal values represent the background class proportions of the objects. The object term T_obj can be obtained by establishing a linear optimization model. The linear optimization model minimizes T(i)_obj over all sub-pixels:

T_obj = min Σ_{i=1..N} T(i)_obj (10)

subject to the constraint that, within each coarse pixel, the number of sub-pixels labelled as the target is consistent with its class proportion (11).
because the Laplace pyramid deep learning adopts a multi-scale training image and the ERW algorithm considers the object information, the target item T with the multi-scale target information is obtained through the processing flowobj
Preferably, in step (2), the algorithm based on the normalized difference target index (NDTI) is specifically as follows: in order to make full use of the infrared information, a new infrared term T_inf is proposed, which aims to minimize the difference in spectral index between the observed NDTI value (NDTI_obe) and the simulated NDTI value (NDTI_sim).
In the Landsat 8 OLI image, NDTI_obe is obtained by calculating the difference between the spectral reflectances of the target in the near-infrared band and another band. According to the land cover target, a different band is selected to combine with the near-infrared band. The NDTI_obe value is given by:

NDTI_obe = (R_obe^X − R_obe^Y) / (R_obe^X + R_obe^Y) (12)

where R_obe^X and R_obe^Y are the observed reflectances of each mixed pixel in band X and band Y, which are obtained directly from the original Landsat 8 OLI image.
Suppose R_t^X and R_t^Y are the target reflectances in bands X and Y, and R_b^X and R_b^Y are the corresponding reflectances of the background. For each mixed pixel, the proportion f_t of the target in the two bands is obtained by dividing the number of target sub-pixels by the total number of sub-pixels, and the proportion of the background in the two bands is f_b = 1 − f_t. The reflectance of each mixed pixel is regarded as a linear mixture of the spectra of all the sub-pixels it contains. Thus, the simulated reflectances R_sim^X and R_sim^Y of each mixed pixel in the two bands are calculated using equations (13) and (14), respectively:

R_sim^X = f_t × R_t^X + f_b × R_b^X (13)

R_sim^Y = f_t × R_t^Y + f_b × R_b^Y (14)

NDTI_sim is given by the following equation:

NDTI_sim = (R_sim^X − R_sim^Y) / (R_sim^X + R_sim^Y) (15)
The infrared term T_inf is then obtained by minimizing the difference between NDTI_obe and NDTI_sim:

T_inf = min (NDTI_obe − NDTI_sim)^2 (16)
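A sketch of the infrared term, following equations (12)-(16) as reconstructed above; the band reflectances and labelling in the example are arbitrary placeholders:

```python
# Sketch of the infrared term: observed NDTI, simulated NDTI from a candidate labelling,
# and the squared difference that the method minimizes (eqs. (12)-(16)).
import numpy as np

def infrared_term(R_obe_X, R_obe_Y, labels, R_t, R_b):
    """R_obe_X, R_obe_Y: observed reflectances of one mixed pixel in bands X and Y;
    labels: (S, S) binary sub-pixel labels (1 = target); R_t, R_b: (reflectance in X,
    reflectance in Y) of the pure target and of the background."""
    f_t = labels.mean()                                   # proportion of target sub-pixels
    f_b = 1.0 - f_t
    R_sim_X = f_t * R_t[0] + f_b * R_b[0]                 # eq. (13)
    R_sim_Y = f_t * R_t[1] + f_b * R_b[1]                 # eq. (14)
    ndti_obe = (R_obe_X - R_obe_Y) / (R_obe_X + R_obe_Y)  # eq. (12)
    ndti_sim = (R_sim_X - R_sim_Y) / (R_sim_X + R_sim_Y)  # eq. (15)
    return (ndti_obe - ndti_sim) ** 2                     # eq. (16), to be minimized

labels = np.array([[1, 1, 0], [1, 0, 0], [0, 0, 0]])      # S = 3, one third target
print(infrared_term(0.30, 0.22, labels, R_t=(0.55, 0.20), R_b=(0.18, 0.25)))
```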
Preferably, in step (3), the linear integration method is specifically as follows: the infrared term T_inf and the object term T_obj are combined through a trade-off coefficient λ to generate an objective function T with multi-scale target infrared information. The purpose of MOII is to minimize T:
min T = (1 − λ) T_obj + λ T_inf (17)
Preferably, in step (4), the particle swarm optimization algorithm is specifically as follows: the objective function is optimized using particle swarm optimization (PSO) to obtain the SPM result. First, target or background coordinates are randomly assigned to each sub-pixel. Next, the sub-pixel coordinates are iteratively updated until the minimum value of the objective function T is obtained. At each iteration, target coordinates are changed into background coordinates and vice versa. If T decreases, the change is accepted and iteration continues; otherwise, the change is rejected. PSO terminates when less than 0.1% of the coordinates change. Class labels are then assigned to all sub-pixels to obtain the sub-pixel mapping result.
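The sketch below implements the accept-if-T-decreases label-flipping loop described above as plain hill climbing; the particle and velocity bookkeeping of a full PSO is omitted:

```python
# Sketch: simple hill-climbing variant of the label-flipping search (not the full PSO).
import numpy as np

def optimize_labels(labels, objective, max_iters=10000, stop_fraction=0.001):
    labels = labels.copy()
    best = objective(labels)
    changed = labels.size                            # force at least one sweep
    rng = np.random.default_rng(0)
    for _ in range(max_iters):
        if changed < stop_fraction * labels.size:    # stop when < 0.1% of coordinates change
            break
        changed = 0
        for idx in rng.permutation(labels.size):
            i, j = np.unravel_index(idx, labels.shape)
            labels[i, j] = 1 - labels[i, j]          # target <-> background swap
            t = objective(labels)
            if t < best:
                best, changed = t, changed + 1       # accept the improving flip
            else:
                labels[i, j] = 1 - labels[i, j]      # reject and undo
    return labels

# Toy objective: prefer labels that match a reference map.
reference = (np.random.default_rng(1).random((12, 12)) > 0.5).astype(int)
result = optimize_labels(np.zeros_like(reference), lambda x: np.abs(x - reference).sum())
print((result == reference).all())  # True
```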
The invention has the beneficial effects that: the method improves existing mapping models for land cover targets, and the new method obtains results with higher positioning accuracy.
Drawings
Fig. 1 is a schematic flow chart of generating a fine abundance image with multi-scale object information through the Laplacian pyramid deep learning network, segmentation and the extended random walker algorithm.
FIG. 2 is a schematic flow chart of the method of the present invention.
Fig. 3 is the simulated original coarse satellite image generated by down-sampling data set 1 by a factor of S = 6.

Fig. 4 is the simulated original coarse satellite image generated by down-sampling data set 2 by a factor of S = 6.
Fig. 5(a) is the sub-pixel mapping result of processing data set 1 using the HNNA method.

Fig. 5(b) is the sub-pixel mapping result of processing data set 1 using the RBF method.

Fig. 5(c) is the sub-pixel mapping result of processing data set 1 using the DPSO method.

Fig. 5(d) is the sub-pixel mapping result of processing data set 1 using the MOII method.

Fig. 6(a) is the sub-pixel mapping result of processing data set 2 using the HNNA method.

Fig. 6(b) is the sub-pixel mapping result of processing data set 2 using the RBF method.

Fig. 6(c) is the sub-pixel mapping result of processing data set 2 using the DPSO method.

Fig. 6(d) is the sub-pixel mapping result of processing data set 2 using the MOII method.
FIG. 7 is a schematic diagram of a sub-pixel mapping method based on multi-scale target infrared information.
Detailed Description
A sub-pixel mapping method based on multi-scale target infrared information comprises the following steps:

(1) An original Landsat 8 OLI image is input, and the object term T_obj with multi-scale information is obtained using the Laplacian pyramid deep learning network, segmentation and ERW; the processing flow is shown in Fig. 1. At the same time, the infrared term T_inf with rich infrared information is obtained by calculating the NDTI of the original Landsat 8 OLI image.

(2) The infrared term T_inf and the object term T_obj are combined through a trade-off coefficient λ to generate an objective function T with multi-scale target infrared information.

(3) T is minimized: the objective function is optimized using particle swarm optimization (PSO) to obtain the SPM result. First, target or background coordinates are randomly assigned to each sub-pixel. Next, the sub-pixel coordinates are iteratively updated until the minimum value of the objective function T is obtained. At each iteration, target coordinates are changed into background coordinates and vice versa. If T decreases, the change is accepted and iteration continues; otherwise, the change is rejected. PSO terminates when less than 0.1% of the coordinates change. Class labels are then assigned to all sub-pixels to obtain the sub-pixel mapping result.
A block diagram of the implementation of the sub-pixel mapping method based on multi-scale target infrared information (Sub-pixel Land Cover Target Mapping Based on Multi-scale Object and Infrared Information for the Landsat 8 OLI Image, MOII) is shown in Fig. 2.
Two Landsat 8 OLI data sets were collected as input. Each data set has six bands: red, green, blue, near infrared, shortwave infrared 1 and shortwave infrared 2.
Fig. 3 is the simulated original coarse satellite image generated by down-sampling data set 1 by a factor of S = 6.

Fig. 4 is the simulated original coarse satellite image generated by down-sampling data set 2 by a factor of S = 6.
Fig. 5 shows the sub-pixel mapping results for data set 1, where: (a) the sub-pixel mapping method based on the Hopfield neural network with anisotropic spatial dependence (HNNA); (b) the sub-pixel mapping method based on the radial basis function neural network (RBF); (c) the sub-pixel mapping method based on the discrete particle swarm optimization algorithm (DPSO); and (d) the sub-pixel mapping method based on multi-scale target infrared information (MOII).

Fig. 6 shows the sub-pixel mapping results for data set 2, where: (a) the sub-pixel mapping method based on the Hopfield neural network with anisotropic spatial dependence (HNNA); (b) the sub-pixel mapping method based on the radial basis function neural network (RBF); (c) the sub-pixel mapping method based on the discrete particle swarm optimization algorithm (DPSO); and (d) the sub-pixel mapping method based on multi-scale target infrared information (MOII).
The performance of the four SPM methods on the two data sets was evaluated using the percentage of correctly classified pixels (PCC) and the Kappa coefficient. According to Tables 1 and 2, MOII obtains the highest PCC (%) and Kappa. Compared with the DPSO method, the PCC (%) and Kappa of MOII increase by about 2.2% and 0.036 on data set 1, and by about 3.1% and 0.059 on data set 2, which verifies the sub-pixel mapping advantage of the proposed MOII method.
Table 1. Evaluation indexes of the four methods on data set 1 (table reproduced as an image in the original).

Table 2. Evaluation indexes of the four methods on data set 2 (table reproduced as an image in the original).
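As a sketch (not from the patent), the two evaluation indexes can be computed from a confusion matrix as follows:

```python
# Sketch: PCC (overall accuracy, %) and Kappa coefficient from reference and predicted maps.
import numpy as np

def pcc_and_kappa(reference, predicted):
    ref, pred = np.asarray(reference).ravel(), np.asarray(predicted).ravel()
    n = ref.size
    classes = np.unique(np.concatenate([ref, pred]))
    cm = np.array([[np.sum((ref == a) & (pred == b)) for b in classes] for a in classes])
    po = np.trace(cm) / n                                   # observed agreement
    pe = np.sum(cm.sum(axis=0) * cm.sum(axis=1)) / n ** 2   # chance agreement
    return 100.0 * po, (po - pe) / (1 - pe)

ref = np.array([0, 0, 1, 1, 1, 0, 1, 0])
pred = np.array([0, 1, 1, 1, 0, 0, 1, 0])
pcc, kappa = pcc_and_kappa(ref, pred)
print(round(pcc, 1), round(kappa, 3))  # 75.0 0.5
```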

Claims (5)

1. A sub-pixel mapping method based on multi-scale target infrared information, characterized by comprising the following steps:
(1) in the multi-scale path, generating a fine abundance image with multi-scale object information through a Laplacian pyramid deep learning network, segmentation and an extended random walker (ERW) algorithm;
(2) in the infrared path, obtaining a fine abundance image with rich infrared information by calculating a normalized difference target index (NDTI);
(3) combining the two fine abundance images with different information obtained in step (1) and step (2) by a linear integration method to obtain a finer abundance image with multi-scale target infrared information;
(4) according to the finer abundance image, adopting a particle swarm optimization (PSO) algorithm to obtain the final sub-pixel mapping (SPM) result for land cover target identification.
2. The sub-pixel mapping method based on multi-scale target infrared information according to claim 1, characterized in that in step (1), in order to obtain multi-scale information of the object, an object term T_obj is constructed through the Laplacian pyramid deep learning network, followed in turn by image segmentation and the extended random walker (ERW) algorithm;

first, a coarse fraction image of the land cover target is obtained by spectrally unmixing the original coarse Landsat 8 OLI image, and at the same time the first principal component of the Landsat 8 OLI image is obtained using principal component analysis (PCA);
in the second step, the coarse fraction image and the multi-scale first principal component are trained using the Laplacian pyramid deep learning network, thereby generating an upsampled fraction image and an upsampled first principal component;
in the third step, target objects are generated by segmenting the upsampled first principal component, where Q is defined as the subdivision proportion parameter that determines the merging termination condition and the size of the objects; the segmentation criterion is given by:
H = θ × H_spectral + (1 − θ) × H_shape (1)
wherein H is the regional heterogeneity; θ is a free parameter that balances the spectral heterogeneity H_spectral and the shape heterogeneity H_shape, and is generally set to 0.5; H_spectral and H_shape are given by formulas (2) and (3):

H_spectral = Σ_{b=1..B} w_b × σ_b (2)

H_shape = θ_shape × H_compact + (1 − θ_shape) × H_smooth, with H_smooth = a / r and H_compact = a / √N (3)

wherein b denotes a spectral band (b = 1, 2, ..., B; B is the total number of spectral bands); σ_b is the standard deviation of the spectral values of the b-th band within the segmented region; w_b is the free spectral parameter of the b-th band, set here to 1 for all selected bands; H_smooth and H_compact respectively represent the smoothness and the compactness of the segmented target region; a is the actual boundary length of the segmented region, r is the boundary length of its bounding rectangle, and N is the number of sub-pixels in the segmented region; θ_shape is a free parameter, here typically set to 0.4;
between adjacent regions, the two objects with the minimum heterogeneity are merged; when the heterogeneity H of the merged region exceeds the preset subdivision proportion parameter Q, the merging process terminates and the objects are extracted;
finally, after the above processing steps, the upsampled first principal component is divided into K objects O_k (k = 1, 2, ..., K); object O_k contains N_k sub-pixels, and the upsampled fraction image provides the class proportion L(p_i) of each sub-pixel p_i (i = 1, 2, ..., N_k); the class proportion G(O_k) of object O_k is obtained by averaging the soft proportions of the sub-pixels within it:

G(O_k) = (1 / N_k) × Σ_{i=1..N_k} L(p_i) (4)
the term T(i)_obj corresponding to the i-th sub-pixel is obtained through the ERW algorithm and is given by equation (5):

T(i)_obj = λ × T_among(G) + (1 − λ) × T_within(G) (5)
wherein T_among(G) represents the information between objects, T_within(G) represents the information within each object, G = [G(O_1), G(O_2), ..., G(O_K)] is a column vector, and λ is a free parameter set to 0.5;

T_among(G) is obtained by the following formula:

T_among(G) = G^T L G (6)

L is the Laplacian matrix:

L_kk = Σ_j w_kj; L_kj = −w_kj if objects O_k and O_j are adjacent (k ≠ j); L_kj = 0 otherwise (7)

wherein w_kj = exp(−μ(v_k − v_j)^2) is the edge weight computed from the spectral difference between the k-th object O_k and the j-th object O_j; μ is a free variable, set here to 0.6; the spectral value v_k of the k-th object is calculated as:

v_k = (1 / N_k) × Σ_{i=1..N_k} v_i^k (8)

wherein v_i^k is the spectral value of the i-th sub-pixel of O_k;
T_within(G) is defined as:

T_within(G) = (G − 1)^T Λ (G − 1) + G^T Λ_B G (9)

wherein 1 is a column vector of ones, Λ is a diagonal matrix whose diagonal values are the land cover target class proportions of the objects, and Λ_B is also a diagonal matrix whose diagonal values represent the background class proportions of the objects; the object term T_obj can be obtained by establishing a linear optimization model; the linear optimization model minimizes T(i)_obj over all sub-pixels:

T_obj = min Σ_{i=1..N} T(i)_obj (10)

subject to the constraint that, within each coarse pixel, the number of sub-pixels labelled as the target is consistent with its class proportion (11);
because the Laplace pyramid deep learning adopts a multi-scale training image and the ERW algorithm considers the object information, the target item T with the multi-scale target information is obtained through the processing flowobj
3. The sub-pixel mapping method based on multi-scale target infrared information according to claim 1, characterized in that in step (2), in order to make full use of the infrared information, a new infrared term T_inf is proposed, which aims to minimize the difference in spectral index between the observed NDTI value (NDTI_obe) and the simulated NDTI value (NDTI_sim);
in the Landsat 8 OLI image, NDTI_obe is obtained by calculating the difference between the spectral reflectances of the target in the near-infrared band and another band; according to the land cover target, a different band is selected to combine with the near-infrared band; the NDTI_obe value is given by:

NDTI_obe = (R_obe^X − R_obe^Y) / (R_obe^X + R_obe^Y) (12)

wherein R_obe^X and R_obe^Y are the observed reflectances of each mixed pixel in band X and band Y, obtained directly from the original Landsat 8 OLI image;
suppose R_t^X and R_t^Y are the target reflectances in bands X and Y, and R_b^X and R_b^Y are the corresponding reflectances of the background; for each mixed pixel, the proportion f_t of the target in the two bands is obtained by dividing the number of target sub-pixels by the total number of sub-pixels, and the proportion of the background in the two bands is f_b = 1 − f_t; the reflectance of each mixed pixel is regarded as a linear mixture of the spectra of all the sub-pixels it contains; thus, the simulated reflectances R_sim^X and R_sim^Y of each mixed pixel in the two bands are calculated using equations (13) and (14), respectively:

R_sim^X = f_t × R_t^X + f_b × R_b^X (13)

R_sim^Y = f_t × R_t^Y + f_b × R_b^Y (14)

NDTI_sim is given by the following equation:

NDTI_sim = (R_sim^X − R_sim^Y) / (R_sim^X + R_sim^Y) (15)
the infrared term T_inf is then obtained by minimizing the difference between NDTI_obe and NDTI_sim:

T_inf = min (NDTI_obe − NDTI_sim)^2 (16)
4. The sub-pixel mapping method based on multi-scale target infrared information according to claim 1, characterized in that in step (3), the infrared term T_inf and the object term T_obj are combined through a trade-off coefficient λ to generate an objective function T with multi-scale target infrared information; the purpose of MOII is to minimize T:

min T = (1 − λ) T_obj + λ T_inf (17)
5. The sub-pixel mapping method based on multi-scale target infrared information according to claim 1, characterized in that in step (4), particle swarm optimization (PSO) is used to optimize the objective function and obtain the SPM result: first, target or background coordinates are randomly assigned to each sub-pixel; next, the sub-pixel coordinates are iteratively updated until the minimum value of the objective function T is obtained; at each iteration, target coordinates are changed into background coordinates and vice versa; if T decreases, the change is accepted and iteration continues; otherwise, the change is rejected; PSO terminates when less than 0.1% of the coordinates change, and class labels are assigned to all sub-pixels to obtain the sub-pixel mapping result.
CN202010379859.3A 2020-05-07 2020-05-07 Sub-pixel mapping method based on multi-scale target infrared information Active CN113628218B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010379859.3A CN113628218B (en) 2020-05-07 2020-05-07 Sub-pixel mapping method based on multi-scale target infrared information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010379859.3A CN113628218B (en) 2020-05-07 2020-05-07 Sub-pixel mapping method based on multi-scale target infrared information

Publications (2)

Publication Number Publication Date
CN113628218A true CN113628218A (en) 2021-11-09
CN113628218B CN113628218B (en) 2024-05-17

Family

ID=78377004

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010379859.3A Active CN113628218B (en) 2020-05-07 2020-05-07 Sub-pixel mapping method based on multi-scale target infrared information

Country Status (1)

Country Link
CN (1) CN113628218B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2012100257A4 (en) * 2012-03-08 2012-04-05 Beijing Normal University Method for Radiometric Information Restoration of Mountainous Shadows in Remotely Sensed Images
CN110070518A (en) * 2019-03-15 2019-07-30 南京航空航天大学 It is a kind of based on dual path support under high spectrum image Super-resolution Mapping

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2012100257A4 (en) * 2012-03-08 2012-04-05 Beijing Normal University Method for Radiometric Information Restoration of Mountainous Shadows in Remotely Sensed Images
CN110070518A (en) * 2019-03-15 2019-07-30 南京航空航天大学 It is a kind of based on dual path support under high spectrum image Super-resolution Mapping

Also Published As

Publication number Publication date
CN113628218B (en) 2024-05-17


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant