CN111611931A - Method, device and terminal for analyzing response attenuation of sensing instrument

Info

Publication number: CN111611931A
Application number: CN202010440427.9A
Authority: CN (China)
Prior art keywords: image, image pair, attenuation, sensing instrument
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN111611931B
Inventors: 叶允明, 胡秀清, 李旭涛, 叶智字
Current assignees: Shenzhen Graduate School Harbin Institute of Technology; National Satellite Meteorological Center
Original assignee: Shenzhen Graduate School Harbin Institute of Technology
Application filed by Shenzhen Graduate School Harbin Institute of Technology
Priority to CN202010440427.9A; publication of CN111611931A; application granted; publication of CN111611931B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/13 Satellite images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks


Abstract

The invention provides a method, a device and a terminal for analyzing the response attenuation of a sensing instrument. The method comprises the following steps: acquiring a set of target image pairs to be analyzed; inputting each target image pair in the set, in turn, into a preset convolutional neural network model to obtain the relative attenuation ratio of each target image pair; and fitting the sensing-instrument response attenuation curve based on the relative attenuation ratios of the target image pairs in the set. The method avoids the difficulty of invariant-pixel detection, improves operability, reduces the difficulty of application, and thus facilitates its popularization and use.

Description

Method, device and terminal for analyzing response attenuation of sensing instrument
Technical Field
The invention relates to the technical field of radiation correction, in particular to a method, a device and a terminal for analyzing response attenuation of a sensing instrument.
Background
With the progress of remote sensing technology, remote sensing is applied ever more widely, and the requirements on the precision and quality of the observation data provided by satellites grow accordingly. In particular, when data are acquired at different times over the same place, special attention must be paid to the influence of sensor decay on the response to the observed Earth radiation signal, so that the data acquired by the sensor accurately reflect the real trend of the measured geophysical quantity rather than false variation produced by sensor response decay. For this reason, multi-temporal remote sensing images need to undergo radiation correction to eliminate the differences between them caused by instrument attenuation.
In the prior art, the invariant pixels of an image scene are identified by an iteratively reweighted multivariate change detection method, the relative decay of the sensor is determined by orthogonal regression over the invariant pixels of two images, a long time series of many image pairs is analyzed, a curve of sensor decay over time is obtained by polynomial fitting, and radiation correction of the remote sensing images is performed based on this curve.
In this response attenuation analysis based on iteratively reweighted multivariate change detection, the relative decay of the sensor can be determined only after the invariant pixels have been identified, and identifying invariant pixels is difficult; the method is therefore hard to put into practice.
Disclosure of Invention
The invention addresses the problem that the existing sensing-instrument response attenuation analysis method needs to identify invariant pixels and is therefore difficult to realize.
In order to solve the above problem, the present invention provides a method for analyzing response attenuation of a sensing instrument, including:
acquiring a target image pair set to be analyzed; sequentially inputting each target image pair in the target image pair set into a preset convolutional neural network model to obtain a relative attenuation ratio of each target image pair; fitting to generate the sensing instrument response attenuation curve based on the relative attenuation ratio of each target image pair in the set of target image pairs.
Optionally, before the acquiring a set of target image pairs to be analyzed, the method further includes:
constructing a training sample set, wherein the training sample set is a set of labeled image pairs, and the label is a relative attenuation ratio of the image pairs; and training the preset convolutional neural network model based on the training sample set.
Optionally, the constructing a training sample set includes:
selecting a candidate image set from a preset image library based on a first preset screening rule; generating a set of candidate image pairs based on the set of candidate images; and selecting the image pair set used as the training sample from the candidate image pair set based on a second preset screening rule.
Optionally, after the selecting, from the candidate image pair sets based on the second preset screening rule, an image pair set used as a training sample, the method further includes:
determining a relative attenuation ratio value of each image pair in the set of image pairs used as training samples; correcting each image pair based on the relative attenuation ratio of each image pair, and then giving random attenuation to each image in the image pair to generate a new image pair; determining a relative attenuation ratio of the new image pair based on the random attenuation, the new image pair being used as a training sample;
alternatively,
determining a relative attenuation ratio of each image in the image pair set used as the training sample relative to an initial image, wherein the initial image is an image without response attenuation of a sensing instrument; correcting each image based on the relative attenuation ratio of each image relative to the initial image, and then randomly attenuating each image to generate a new image and a new image pair; based on the random attenuation, a relative attenuation ratio of the new image pair is determined, the new image pair being used as a training sample.
Optionally, after the selecting, from the candidate image pair sets based on the second preset screening rule, an image pair set used as a training sample, the method further includes:
giving random attenuation to each image in the set of image pairs used as training samples to generate a new image pair; determining a relative attenuation ratio of the new image pair based on the random attenuation, the new image pair being used as a training sample.
Optionally, the first preset screening rule includes at least one of:
the area shown in the image is at least one of desert and bare land;
the satellite observation angles of all pixels in the image are smaller than a first preset angle;
there are no missing-value pixels in the image;
there is no cloud in the image.
Optionally, the second preset screening rule includes at least one of:
the shooting time of the image pair satisfies: T1-T2 ═ 365.25 × y ± n, y ∈ Z, n ∈ [1,60], where T1 and T2 respectively denote the capturing times of two images in an image pair, y denotes a year parameter, and n denotes a date parameter;
the satellite observation angle difference of the corresponding pixel of the image pair is smaller than a second preset angle.
The invention also provides a sensing-instrument response attenuation analysis device, which comprises:
an acquisition unit, configured to acquire a set of target image pairs to be analyzed;
a processing unit, configured to input each target image pair in the set, in turn, into a preset convolutional neural network model to obtain the relative attenuation ratio of each target image pair;
a fitting unit, configured to fit the sensing-instrument response attenuation curve based on the relative attenuation ratios of the target image pairs in the set.
The invention also provides a sensing instrument response attenuation analysis terminal, which comprises a computer readable storage medium and a processor, wherein the computer readable storage medium is used for storing a computer program, and the computer program is read by the processor and runs to realize the sensing instrument response attenuation analysis method.
The invention further provides a computer-readable storage medium, which stores a computer program, and when the computer program is read and executed by a processor, the method for analyzing response attenuation of a sensing instrument is implemented.
With this method, the target image pairs to be analyzed are collected and input, one after another, into the preset convolutional neural network model to obtain the relative attenuation ratio of each target image pair, and the sensing-instrument response attenuation curve is generated by fitting on the basis of the obtained relative attenuation ratios. This avoids the difficulty of invariant-pixel detection, improves operability, reduces the difficulty of application, and thus facilitates popularization of the method.
Drawings
FIG. 1 is a schematic view of an embodiment of a method for analyzing response attenuation of a sensing instrument according to the present invention;
FIG. 2 is a schematic diagram of a model structure of a preset convolutional neural network model according to the response attenuation analysis method of the sensing instrument;
FIG. 3 is a schematic diagram of another embodiment of the response attenuation analysis method of the sensing instrument according to the present invention;
FIG. 4 is a schematic diagram of an embodiment of the response attenuation analyzing method of the sensing instrument according to the present invention after step S10 is refined;
FIG. 5 is a schematic diagram of another embodiment of the response attenuation analyzing method of the sensing instrument of the present invention after step S10 is refined.
Description of reference numerals:
1-pair of images; 2-a convolutional layer; 3-a pooling layer; 4-full connection layer.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below.
For the understanding of the present invention, a brief introduction will be made to the source of the radiation error of the remote sensing image.
Radiation error is the difference between the measured value obtained when a sensor observes the reflected or radiated energy of a target and the target's true physical quantity, such as its spectral reflectance or spectral radiance. Radiation error distorts the remote sensing image, and to facilitate subsequent analysis and application of the image it must be corrected so as to eliminate or at least reduce it. The main sources of radiation error are: errors caused by the sensor; errors caused by atmospheric scattering and absorption; and errors caused by terrain, solar altitude, and the like.
Wherein, the radiation error caused by the sensor comprises the radiation error caused by the response attenuation of the sensing instrument.
The invention provides a response attenuation analysis method for a sensing instrument, which is used for eliminating radiation errors caused by instrument attenuation between multi-temporal remote sensing images. FIG. 1 is a schematic view of an embodiment of a response attenuation analysis method of a sensing instrument according to the present invention. As shown in fig. 1, the method for analyzing response attenuation of the sensing instrument includes:
step S30, acquiring a target image pair set to be analyzed;
The target image pair set is built from remote sensing images actually acquired by the sensor channel to be analyzed, and the response attenuation curve is established for that channel on the basis of these images.
Optionally, to reduce radiation errors caused by factors other than sensing-instrument attenuation, the set of target image pairs complies with certain preset rules. Step S30 then includes constructing the set of target image pairs, specifically as follows. The remote sensing images acquired by the sensor channel to be analyzed are taken as the base data. From these, the images complying with a preset region rule are selected; for convenience of description this is called the first image set. From the first image set, images containing pixels whose satellite observation angle exceeds a preset value are screened out, and optionally images with cloud and/or images with missing values are screened out as well; the result is called the second image set. The images of the second image set are then paired two by two to obtain a set of image pairs, from which the pairs whose shooting times comply with a preset time-interval rule and whose satellite-observation-angle difference complies with a preset angle rule are selected; these pairs are the target image pairs.
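The screening described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the record fields, the numeric thresholds, and the per-image (rather than per-pixel) angle check are all simplifying assumptions.

```python
from dataclasses import dataclass
from datetime import date
from itertools import combinations

@dataclass
class SceneImage:
    """Hypothetical record for one remote sensing image; the field
    names are our assumptions, not taken from the patent."""
    acquired: date          # shooting time
    view_angle: float       # largest satellite observation angle (degrees)
    has_missing: bool       # any missing-value pixels
    has_cloud: bool
    region: str             # e.g. "desert", "bare_land"

def build_target_pairs(images, max_angle=40.0, max_angle_diff=5.0, n_days=60):
    """Sketch of the screening in the text: single-image rules first
    (region, observation angle, missing values, cloud), then pairwise
    rules (time interval near a whole number of years, small angle
    difference). Threshold values are invented for illustration."""
    first = [im for im in images if im.region in ("desert", "bare_land")]
    second = [im for im in first
              if im.view_angle < max_angle
              and not im.has_missing and not im.has_cloud]
    pairs = []
    for a, b in combinations(second, 2):
        dt = abs((a.acquired - b.acquired).days)
        # T1 - T2 = 365.25*y +/- n  <=>  dt lies within n days of a
        # whole number of years
        y = round(dt / 365.25)
        if abs(dt - 365.25 * y) <= n_days and \
           abs(a.view_angle - b.view_angle) < max_angle_diff:
            pairs.append((a, b))
    return pairs
```

The two-stage structure mirrors the text: the "first image set" and "second image set" correspond to the two list comprehensions, and the pairwise loop applies the time-interval and angle-difference rules.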
The preset region rule refers to areas with low, stable surface reflectivity. Surface vegetation in deserts, bare land and the like is sparse, so desert and bare land can be preset as regions complying with the rule, and the remote sensing images corresponding to desert and/or bare land are selected from the base data as the first image set. In addition, because deserts and bare land are dry, human activity there is scarce, and in non-dust weather the influence of atmospheric water vapor and aerosol is low.
The larger the satellite observation angle, the larger the apparent-reflectance distortion it causes; if an image contains pixels whose satellite observation angle exceeds a certain value, it may interfere with the response attenuation analysis of the sensing instrument and reduce its accuracy, so such images are screened out. Similarly, images with missing values or cloud, or with too many missing values or too much cloud, are screened out, which improves the purity of the target image pair set to be analyzed and thus the accuracy of the response attenuation analysis.
Optionally, the images may also be screened by brightness, so that high-, medium- and low-brightness images are balanced in the set of target image pairs subsequently used for response attenuation analysis.
To reduce seasonal effects, the two images of a target image pair are taken in the same month, or in similar months, of the same year or of different years. Optionally, the preset time-interval rule is: the shooting times of the two images of the target image pair satisfy T1 - T2 = 365.25 × y ± n, y ∈ Z, n ∈ [1, 60], where T1 and T2 are respectively the shooting times of the two images in the pair (in days), and y is an integer (negative, zero, or positive).
The preset angle rule is that the satellite observation angle difference value of corresponding pixels in the two images of the target image pair is smaller than a certain value, so that the interference of the satellite observation angle on the response attenuation analysis of the sensing instrument can be reduced.
Step S40, inputting each target image pair in the target image pair set into a preset convolution neural network model in sequence to obtain the relative attenuation ratio of each target image pair;
the preset convolution neural network model is a pre-trained network model used for calculating the relative attenuation of two images, the input of the preset convolution neural network model is an image pair, and the output of the preset convolution neural network model is the relative attenuation ratio of the image pair.
Optionally, the preset convolutional neural network model has the structure shown in FIG. 2 and comprises 12 convolutional layers and 3 fully connected layers, 15 layers in total. All convolution kernels are of size 3 × 3, the numbers of kernels of the convolutional layers are {16, 16, 32, 32, 64, 64, 128, 128, 128, 128}, the activation functions are rectified linear units (ReLU), and every two convolutional layers 2 are followed by a mean-pooling layer 3.
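A sketch of this architecture in PyTorch follows, for illustration only. Stacking the two images of a pair as two input channels, the fully connected widths, the input size, and the last two kernel counts are all assumptions: the text lists only ten kernel counts for twelve convolutional layers, so the last two are assumed to stay at 128.

```python
import torch
import torch.nn as nn

class AttenuationCNN(nn.Module):
    """Sketch of the 15-layer network described in the text: 12
    convolutional layers (3x3 kernels, ReLU) with a mean-pooling layer
    after every two convolutional layers, followed by 3 fully connected
    layers regressing a single relative attenuation ratio."""
    def __init__(self, in_size=64):
        super().__init__()
        # Ten counts from the text, last two assumed to repeat 128.
        counts = [16, 16, 32, 32, 64, 64, 128, 128, 128, 128, 128, 128]
        layers, in_ch = [], 2   # the two images of a pair as channels (assumption)
        for i, out_ch in enumerate(counts):
            layers += [nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU()]
            if i % 2 == 1:
                layers.append(nn.AvgPool2d(2))  # mean pooling
            in_ch = out_ch
        self.features = nn.Sequential(*layers)
        feat = 128 * (in_size // 2 ** 6) ** 2   # six pooling stages
        self.head = nn.Sequential(              # widths are assumptions
            nn.Linear(feat, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, pair):
        x = self.features(pair)
        return self.head(x.flatten(1))
```

With a 64 × 64 input, the six pooling stages reduce the spatial size to 1 × 1, so the first fully connected layer sees a 128-dimensional feature vector.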
Each target image pair in the target image pair set is input into the preset convolutional neural network model in turn, yielding the relative attenuation ratio of each pair, i.e. a series of relative attenuation ratios.
Step S50, fitting to generate the sensing instrument response attenuation curve based on the relative attenuation ratio of each target image pair in the set of target image pairs.
After obtaining the relative attenuation ratio values of all the target image pairs, a response attenuation curve of the sensor along with the change of time is fitted based on the relative attenuation ratio values of all the target image pairs.
The response decay curve may be fitted with a polynomial. When generating the response attenuation curve by polynomial fitting, the trend of the attenuation over time is determined from the relative attenuation ratios of all target image pairs, so that the minimum order that fits the trend adequately can be chosen and overfitting avoided. Optionally, the minimum fitting order is determined from the extreme points appearing in the decay-over-time trend.
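The fitting step can be sketched with an ordinary least-squares polynomial fit. The data values below are invented for illustration; in practice they would come from the network outputs of step S40.

```python
import numpy as np

# Hypothetical inputs: a reference time (days since launch) for each
# target image pair and the relative attenuation ratio the network
# produced for it; both arrays are invented for illustration.
t = np.array([100.0, 400.0, 700.0, 1000.0, 1300.0])
ratio = np.array([0.998, 0.990, 0.984, 0.979, 0.975])

# A decay trend with no extreme points needs only a low order; here
# order 2 is enough to capture the slight flattening over time.
coeffs = np.polyfit(t, ratio, deg=2)
decay_curve = np.poly1d(coeffs)
```

The resulting `decay_curve` can then be evaluated at any acquisition time to correct the corresponding remote sensing image.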
Each sensor channel has a certain independence. A target image pair set is obtained by screening the remote sensing images generated by a sensor channel, and the response attenuation curve corresponding to that channel is generated by fitting on the basis of that set; different sensor channels thus have their own corresponding response attenuation curves, and the minimum fitting orders of those curves differ.
The fitted sensing-instrument response attenuation curve is then used to correct the radiation errors caused by the sensor.
With this method, the target image pairs to be analyzed are collected and input, one after another, into the preset convolutional neural network model to obtain the relative attenuation ratio of each target image pair, and the sensing-instrument response attenuation curve is generated by fitting on the basis of the obtained relative attenuation ratios. This avoids the difficulty of invariant-pixel detection, improves operability, reduces the difficulty of application, and thus facilitates popularization of the method.
Optionally, as shown in fig. 3, before performing step S30, a model training process of the preset convolutional neural network model is further included, including:
step S10, constructing a training sample set, wherein the training sample set is a set of labeled image pairs, and the label is the relative attenuation ratio of the image pairs;
the training samples are image pairs used for training the model, and the preset convolutional neural network model training uses supervised deep learning, so that the training samples are labeled, and the labels are relative attenuation ratios of the image pairs. The labels of the image pairs can be determined based on absolute calibration, on-satellite calibration and other modes, and the absolute calibration mode can be selected as multi-site calibration. For example, if the scaling formula determined based on the multi-field scaling method is p (t) 9.444 × 10-5 × t +1.064, then for the image pair with the shooting time t1 and t2 (where t1 and t2 refer to the number of satellite transmission days), t1 and t2 are input into the scaling formula, respectively, to obtain the corresponding scaling coefficients pt1 and pt2, and the labels are pt1/pt 2.
The training sample can be constructed based on a remote sensing image obtained by detecting the sensor channel to be analyzed, or a remote sensing image obtained by detecting the same sensor channel as the sensor channel to be analyzed.
Optionally, the remote sensing images acquired by the sensor channel to be analyzed, or by a sensor channel of the same type, are used as the base database. First, the images complying with the preset region rule are selected from the base database; for convenience of description this is called the third image set. Then, images in the third image set containing pixels whose satellite observation angle exceeds a preset value are screened out, and optionally images with cloud and/or missing values are screened out as well; the result is called the fourth image set. Finally, the images of the fourth image set are paired two by two to obtain a set of image pairs, from which the pairs whose shooting times comply with the preset time-interval rule and whose satellite-observation-angle difference complies with the preset angle rule are selected; these pairs serve as training samples.
Optionally, the training sample set includes images with high, medium and low brightness, so that sample diversity is realized, and the trained model is ensured to have better universality.
Optionally, the training sample set does not have images with a large number of missing values or a large number of clouds, so that data pureness of a target image pair set to be analyzed is improved, and accuracy of response attenuation analysis is improved.
Optionally, the images in the training sample set may be images that have undergone atmospheric correction, terrain correction and the like; that is, the images used for training the model retain, as far as possible, only the radiation errors caused by sensing-instrument decay, so as to improve the accuracy of the response attenuation analysis.
Step S20, training the preset convolutional neural network model based on the training sample set.
Based on the training sample set, the convolutional neural network model is trained with an existing model-training procedure, and its parameters are optimized continually until the model with the best parameters is obtained; this model serves as the preset convolutional neural network model for subsequent practical use.
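This training step can be sketched as a standard supervised regression loop. The loss (mean squared error on the ratio), the optimizer, and the data-loader interface are our assumptions; the text only says an existing training procedure is used.

```python
import torch
import torch.nn as nn

def train_model(model, train_loader, epochs=10, lr=1e-4):
    """Generic supervised training loop for a ratio-regression model.
    train_loader is assumed to yield (pair_tensor, ratio_label) batches;
    MSE loss and Adam are assumptions, not specified in the text."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for pairs, labels in train_loader:
            opt.zero_grad()
            pred = model(pairs).squeeze(1)  # one ratio per pair
            loss = loss_fn(pred, labels)
            loss.backward()
            opt.step()
    return model
```

Any model mapping a batched image-pair tensor to one value per pair can be passed in; validation-based early stopping would be the usual way to pick the "optimal" parameters the text mentions.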
The training sample set may be chosen as the remote sensing images produced by a certain sensor channel on a certain satellite. A convolutional neural network model trained on remote sensing data from part of the operating period of that channel has a degree of generality in computing the relative attenuation ratio of two images: it can be used to estimate the response attenuation curve of the same sensor channel of the same satellite over other parts of its orbital lifetime, and also to estimate the attenuation curves of the same sensor on different satellites.
Alternatively, as shown in fig. 4, step S10 includes:
step S11, selecting a candidate image set from a preset image library based on a first preset screening rule;
the preset image library is composed of remote sensing images obtained by detecting the sensor channel to be analyzed, or is composed of remote sensing images obtained by detecting the same sensor channel as the sensor channel to be analyzed.
First, single images meeting certain conditions are selected. Because the model to be trained must detect the response attenuation of the sensing instrument, the images participating in training must have image differences caused by other factors reduced as far as possible, so that, as nearly as possible, only the image differences caused by sensing-instrument response attenuation remain.
Optionally, the first preset filtering rule includes at least one of:
the area shown in the image is at least one of desert and bare land; the satellite observation angles of all pixels in the image are smaller than a first preset angle; there are no missing-value pixels in the image; there is no cloud in the image.
Vegetation on desert and bare land is sparse, the surface reflectivity is low and varies little with the seasons, human activity on dry desert and bare land is scarce, and in non-dust weather the influence of atmospheric water vapor and aerosol is low. Therefore images whose displayed area is desert, bare land, or both are selected as training samples. This reduces radiation errors caused by the atmosphere and improves the accuracy of the sensing-instrument attenuation analysis.
The larger the satellite observation angle is, the larger the apparent reflectivity distortion is, so that if the image has pixels with the satellite observation angle larger than a certain value, interference may be caused to response attenuation analysis of the sensing instrument, and the analysis accuracy is reduced.
Missing-value pixels and cloud interfere with the image difference, so removing images that contain them ensures, to the greatest extent, the reliability of the training images.
Step S12, generating a candidate image pair set based on the candidate image set;
and pairwise pairing based on the candidate image set to generate a candidate image pair set.
Step S13, selecting a set of image pairs to be used as a training sample from the set of candidate image pairs based on a second preset screening rule.
And selecting the image pairs which are suitable for being used as the training samples from the candidate image pair set on the basis of a second preset screening rule.
Optionally, the second preset screening rule includes at least one of:
the shooting times of the image pair satisfy: T1 - T2 = 365.25 × y ± n, y ∈ Z, n ∈ [1, 60], where T1 and T2 are respectively the shooting times of the two images in an image pair, y is a year parameter and n is a date parameter; that is, y constrains the number of years between the two shots and n constrains the offset in days around a whole number of years; the satellite-observation-angle difference of corresponding pixels of the image pair is smaller than a second preset angle.
To reduce seasonal influence, the shooting times of the two images of a target image pair lie in the same month, or in similar months, of the same year or of different years. Specifically, the interval between the shooting times T1 and T2 of the two images lies within 365.25 × y ± n days (y ∈ Z, n ∈ [1, 60]). The size of n can be chosen according to the usage requirements: the smaller n is, the smaller the seasonal influence, and vice versa.
Considering that leap years have 366 days and non-leap years have 365 days, 365.25 is taken as the time length of one year.
The shooting time interval of the image pair serving as the training sample is restrained to a certain extent, so that the change of the earth surface reflectivity caused by seasonal changes is reduced, the interference of the earth surface reflectivity on the attenuation analysis of the sensing instrument is reduced, and the accuracy of the attenuation analysis of the sensing instrument is improved.
To reduce the interference that satellite-observation-angle differences cause to the response attenuation analysis of the sensing instrument, image pairs whose angle difference is too large are screened out; specifically, only the image pairs in which the satellite-observation-angle difference of the corresponding pixels of the two images is smaller than the second preset angle are retained for the training sample set.
Among the image pairs obtained from actual acquisitions, pairs with short shooting-time intervals far outnumber pairs with long intervals; in other words, the image-pair samples available for model training are unbalanced, which also means the labels of the training samples are extremely unbalanced, and this is very unfavourable for training a deep learning model. Therefore, besides the image pair set selected from the candidate image pairs as one part of the training samples, pseudo samples need to be constructed to achieve sample balance. The following examples of constructing pseudo samples are given.
Optionally, step S13 is followed by:
determining a relative attenuation ratio value of each image pair in the set of image pairs used as training samples; correcting each image pair based on the relative attenuation ratio of each image pair, and then giving random attenuation to each image in the image pair to generate a new image pair; based on the random attenuation, a relative attenuation ratio of the new image pair is determined, the new image pair being used as a training sample.
For convenience of description, the image pairs selected into the training-sample set are referred to as real image pairs. The relative attenuation ratio of a real image pair is determined by means such as absolute calibration or on-satellite calibration; these are prior art and are not described again here. Each image pair is corrected based on its relative attenuation ratio as follows: taking one image of the real pair as the reference image, the other image is corrected so that the two images have the same attenuation; each image is then given a random attenuation, and the relative attenuation ratio of the resulting new pair follows directly from the given random attenuations. In this way a large number of pseudo samples can be constructed, balancing the samples and hence the labels, and improving the accuracy of the trained model.
For example, take the two images (img1, img2) of a certain image pair, where img1 has been attenuated to 80% of its original value and img2 to 75%, so the relative attenuation ratio is 0.75/0.8. To correct (img1, img2), img2 is corrected using img1 as the reference image so that img2 is also attenuated to 80% of its original value; at this point, an invariant point whose value was 1 at launch has the value 0.8 in both img1 and img2, i.e. the two images have the same attenuation. The two images are then given two different random attenuations, to 90% and 70% of the corrected value respectively, yielding a new image pair in which the invariant-point values are 0.8 × 0.9 = 0.72 and 0.8 × 0.7 = 0.56. The scaling coefficients are pt1 = 1/(0.8 × 0.9) and pt2 = 1/(0.8 × 0.7), so the relative attenuation ratio pt1/pt2 of the new image pair is 0.7/0.9.
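The correct-then-attenuate construction described above can be sketched as follows. Plain Python lists stand in for images, and the explicit s1/s2 arguments (falling back to random draws) and the attenuation bounds are illustrative assumptions, not values fixed by the patent.

```python
import random

def make_pseudo_pair(img1, img2, ratio, s1=None, s2=None, atten_range=(0.5, 1.0)):
    """Build a pseudo sample from a real pair with known relative attenuation
    ratio `ratio` (= attenuation of img2 / attenuation of img1).
    Step 1: correct img2 so both images share img1's attenuation.
    Step 2: give each image an independent random attenuation s1, s2.
    The new pair's label is then s2/s1, since the shared attenuation cancels."""
    if s1 is None:
        s1 = random.uniform(*atten_range)
    if s2 is None:
        s2 = random.uniform(*atten_range)
    img2_corrected = [v / ratio for v in img2]  # img2 now has img1's attenuation
    new_pair = ([v * s1 for v in img1], [v * s2 for v in img2_corrected])
    return new_pair, s2 / s1
```

Running it with the numbers from the worked example (invariant-point values 0.8 and 0.75, random attenuations 0.9 and 0.7) reproduces the invariant-point values 0.72 and 0.56 and the label 0.7/0.9.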
Optionally, step S13 is followed by:
determining a relative attenuation ratio of each image in the image pair set used as the training sample relative to an initial image, wherein the initial image is an image without response attenuation of a sensing instrument; correcting each image based on the relative attenuation ratio of each image relative to the initial image, and then randomly attenuating each image to generate a new image and a new image pair; based on the random attenuation, a relative attenuation ratio of the new image pair is determined, the new image pair being used as a training sample.
For convenience of description, the image pairs selected into the training-sample set are referred to as real image pairs. The relative attenuation ratio of each image relative to the initial image is determined by means such as absolute calibration or on-satellite calibration; these are prior art and are not described again here. The initial image is the remote sensing image acquired on the satellite's first day in orbit, before any response attenuation of the sensing instrument. Each real image in a real pair is corrected based on its attenuation value, with the initial image as the reference image, so that the attenuation values of the two corrected images equal that of the initial image; the corrected images are then given different random attenuation values to generate a new image pair, whose relative attenuation ratio can be calculated from the given random attenuations. In this way a large number of pseudo samples can be constructed, balancing the samples and hence the labels, and improving the accuracy of the trained model.
For example, take the two images (img1, img2) of a certain image pair, where the invariant point in img1 has been attenuated to 80% of its original value and the invariant point in img2 to 75%. Using the image from the satellite's first day as the reference image (in which the invariant point has the value 1), img1 and img2 are corrected so that their invariant-point values are both 1. The two images are then given two different random attenuations, to 90% and 70% of the corrected value respectively, yielding a new image pair in which the invariant-point values become 0.9 and 0.7. The scaling coefficients are pt1 = 1/(1 × 0.9) and pt2 = 1/(1 × 0.7), so the relative attenuation ratio pt1/pt2 of the new image pair is 0.7/0.9.
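The second construction, which corrects each image back to the launch-day reference before re-attenuating, can be sketched in the same style (list-based images and explicit s1/s2 arguments are illustrative assumptions):

```python
def make_pseudo_pair_from_initial(img1, a1, img2, a2, s1, s2):
    """a1, a2: known attenuations of img1, img2 relative to the initial
    (no-attenuation) image. Each image is first divided by its attenuation to
    restore the launch-day level, then given a random attenuation s1 or s2.
    The label pt1/pt2 of the new pair is s2/s1."""
    new1 = [v / a1 * s1 for v in img1]  # correct to reference, then attenuate
    new2 = [v / a2 * s2 for v in img2]
    return (new1, new2), s2 / s1
```

With a1 = 0.8, a2 = 0.75, s1 = 0.9 and s2 = 0.7 this reproduces the example's invariant-point values 0.9 and 0.7 and the label 0.7/0.9.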
Alternatively, as shown in fig. 5, step S13 is followed by:
step S14, giving random attenuation to each image in the image pair set used as the training sample to generate a new image pair;
step S15, determining a relative attenuation ratio of the new image pair based on the random attenuation, using the new image pair as a training sample.
For convenience of description, the image pairs selected into the training-sample set are referred to as real image pairs, where the attenuation value of each real image in a pair is known. On top of the existing attenuation of the real pair, the two images are given different random attenuations to generate a new image pair, and the relative attenuation ratio of the new pair is calculated from both the existing attenuations and the random attenuations. In this way a large number of pseudo samples can be constructed, balancing the samples and hence the labels, and improving the accuracy of the trained model.
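The third construction skips the correction step entirely; a minimal sketch, assuming list-based images and known per-image attenuations a1, a2:

```python
def make_pseudo_pair_direct(img1, a1, img2, a2, s1, s2):
    """Apply random attenuations s1, s2 directly on top of the existing
    attenuations a1, a2. The new pair's total attenuations are a1*s1 and
    a2*s2, so its relative attenuation ratio pt1/pt2 is (a2*s2)/(a1*s1)."""
    new_pair = ([v * s1 for v in img1], [v * s2 for v in img2])
    return new_pair, (a2 * s2) / (a1 * s1)
```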
Optionally, the random attenuation given to a real image in the above embodiments is constrained by the apparent reflectivity of the real image. Different sensor channels yield images with different ranges of apparent reflectivity, and the apparent reflectivity of a new image obtained after random attenuation must still lie within the range of the corresponding sensor channel. Since the label is a continuous value and the number of generated pseudo samples is much greater than the number of real samples, the attenuation given during pseudo-sample construction is completely random within these constraints, so that the samples tend toward balance.
In an optional implementation, the value range of the apparent reflectivity of remote sensing images acquired by the target sensor and the possible attenuation range of the sensor are determined, and a random attenuation value is then drawn based on these two ranges; here the target sensor is the sensor channel to be analyzed, or a sensor channel of the same type. Because different sensor channels yield images with different ranges of apparent reflectivity, limiting the random attenuation by the apparent reflectivity ensures the plausibility of the generated new image pairs and hence the accuracy of the trained model.
For example, suppose the apparent reflectivity of remote sensing images acquired by the target sensor lies in the range [a, b] (such as [1, 100]) and the sensor's attenuation range is [k1, k2] times the original value (such as [0.5, 1]). Let the value ranges of the image pair img1 and img2 before random attenuation be [a1, b1] and [a2, b2]. If random attenuations of s1 and s2 times the original are given, the new image pair is (img1 × s1, img2 × s2) with label s2/s1. Here s1 takes values in [a/a1, b/b1], and s2 takes values in [max(s1 × k1, a/a2), min(s1 × k2, b/b2)]; this ensures that the apparent reflectivity of the images obtained after random attenuation remains within [a, b], and that s2/s1 remains within [k1, k2].
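The constrained sampling above can be sketched directly from those interval formulas. The default values of a, b, k1, k2 mirror the example figures in the text and are otherwise arbitrary; the sketch assumes the resulting interval for s2 is non-empty, which holds for the example parameters.

```python
import random

def constrained_attenuations(r1, r2, a=1.0, b=100.0, k1=0.5, k2=1.0):
    """r1, r2: (min, max) apparent-reflectivity ranges of img1 and img2 before
    random attenuation. Draw s1 so img1*s1 stays in [a, b], then s2 so that
    img2*s2 stays in [a, b] and the label s2/s1 stays in [k1, k2]."""
    a1, b1 = r1
    a2, b2 = r2
    s1 = random.uniform(a / a1, b / b1)
    s2 = random.uniform(max(s1 * k1, a / a2), min(s1 * k2, b / b2))
    return s1, s2
```

Drawing repeatedly with, say, both pre-attenuation ranges equal to [2, 50] always yields attenuated values inside [1, 100] and labels inside [0.5, 1].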
The invention also provides a response attenuation analysis device of the sensing instrument.
In one embodiment, the sensing instrument response attenuation analyzing apparatus includes:
an acquisition unit for acquiring a target image pair set to be analyzed;
the processing unit is used for sequentially inputting each target image pair in the target image pair set into a preset convolutional neural network model to obtain a relative attenuation ratio of each target image pair;
a fitting unit for fitting to generate the sensing instrument response attenuation curve based on the relative attenuation ratio of each target image pair in the set of target image pairs.
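The three units above (acquisition, processing, fitting) chain together in an obvious way; a minimal sketch, in which `model` is a stand-in callable for the trained convolutional neural network and a simple least-squares line stands in for the attenuation-curve fit (the patent does not fix the curve's functional form):

```python
def analyze_attenuation(model, target_pairs, times):
    """target_pairs: the acquired target image pair set; times: elapsed time
    (e.g. days since launch) associated with each pair. Returns the slope and
    intercept of a straight-line fit to the predicted relative ratios."""
    ratios = [float(model(pair)) for pair in target_pairs]  # processing unit
    # fitting unit: least-squares line ratio = slope * t + intercept
    n = len(times)
    mt = sum(times) / n
    mr = sum(ratios) / n
    slope = sum((t - mt) * (r - mr) for t, r in zip(times, ratios)) / \
        sum((t - mt) ** 2 for t in times)
    return slope, mr - slope * mt
```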
Optionally, the sensing instrument response attenuation analyzing apparatus includes:
a sample construction unit for constructing a training sample set, wherein the training sample set is a set of labeled image pairs, the labels being relative attenuation ratios of the image pairs;
and the model training unit is used for training the preset convolutional neural network model based on the training sample set.
Optionally, the sample construction unit is further configured to select a candidate image set from a preset image library based on a first preset screening rule; generating a set of candidate image pairs based on the set of candidate images; and selecting the image pair set used as the training sample from the candidate image pair set based on a second preset screening rule.
Optionally, the sample construction unit is further configured to determine a relative attenuation ratio value of each image pair in the set of image pairs used as training samples; correcting each image pair based on the relative attenuation ratio of each image pair, and then giving random attenuation to each image in the image pair to generate a new image pair; based on the random attenuation, a relative attenuation ratio of the new image pair is determined, the new image pair being used as a training sample.
Optionally, the sample construction unit is further configured to determine a relative attenuation ratio of each image to an initial image in the set of image pairs serving as the training sample, where the initial image is an image without response attenuation of the sensing instrument; correcting each image based on the relative attenuation ratio of each image relative to the initial image, and then randomly attenuating each image to generate a new image and a new image pair; based on the random attenuation, a relative attenuation ratio of the new image pair is determined, the new image pair being used as a training sample.
Optionally, the sample construction unit is further configured to give a random attenuation to each image in the set of image pairs used as the training sample, and generate a new image pair; determining a relative attenuation ratio of the new image pair based on the random attenuation, the new image pair being used as a training sample.
Optionally, the first preset filtering rule includes at least one of:
the image display area is at least one of desert and bare land;
the satellite observation angles of the pixels in the image are all smaller than a first preset angle;
there is no pixel with a missing value in the image;
there is no cloud in the image.
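Conjoining the four conditions of the first preset screening rule gives a simple predicate. The attribute names and the concrete angle threshold below are illustrative assumptions, not identifiers from the patent:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CandidateImage:
    surface_type: str          # e.g. "desert" or "bare_land"
    view_angles: List[float]   # per-pixel satellite observation angles (degrees)
    has_missing_pixels: bool
    has_cloud: bool

def passes_first_rule(img: CandidateImage, first_preset_angle: float = 40.0) -> bool:
    """All four conditions of the first preset screening rule must hold."""
    return (
        img.surface_type in {"desert", "bare_land"}
        and max(img.view_angles) < first_preset_angle
        and not img.has_missing_pixels
        and not img.has_cloud
    )
```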
Optionally, the second preset screening rule includes at least one of:
the shooting time of the image pair satisfies: T1 - T2 = 365.25 × y ± n, y ∈ Z, n ∈ [1,60], where T1 and T2 respectively denote the capturing times of two images in an image pair, y denotes a year parameter, and n denotes a date parameter;
the satellite observation angle difference of the corresponding pixel of the image pair is smaller than a second preset angle.
The invention further provides a sensing instrument response attenuation analysis terminal. The terminal comprises a computer-readable storage medium and a processor; the storage medium stores a computer program which, when read and executed by the processor, implements the sensing instrument response attenuation analysis method described above. The related content and beneficial effects are as described above and are not repeated here.
The present invention further provides a computer-readable storage medium, where a computer program is stored, and when the computer program is read and executed by a processor, the method for analyzing response attenuation of a sensing instrument is implemented, and related contents and beneficial effects are as described above, and are not described herein again.
Although the present disclosure has been described above, the scope of the present disclosure is not limited thereto. Various changes and modifications may be effected therein by one of ordinary skill in the pertinent art without departing from the spirit and scope of the present disclosure, and these changes and modifications are intended to be within the scope of the present disclosure.

Claims (10)

1. A method for response attenuation analysis of a sensing instrument, comprising:
acquiring a target image pair set to be analyzed;
sequentially inputting each target image pair in the target image pair set into a preset convolutional neural network model to obtain a relative attenuation ratio of each target image pair;
fitting to generate the sensing instrument response attenuation curve based on the relative attenuation ratio of each target image pair in the set of target image pairs.
2. The sensing instrument response attenuation analysis method of claim 1, further comprising, prior to said acquiring a set of target image pairs to be analyzed:
constructing a training sample set, wherein the training sample set is a set of labeled image pairs, and the label is a relative attenuation ratio of the image pairs;
and training the preset convolutional neural network model based on the training sample set.
3. The sensing instrument response decay analysis method of claim 2, wherein the constructing a training sample set comprises:
selecting a candidate image set from a preset image library based on a first preset screening rule;
generating a set of candidate image pairs based on the set of candidate images;
and selecting the image pair set used as the training sample from the candidate image pair set based on a second preset screening rule.
4. The method for analyzing response attenuation of a sensing instrument as claimed in claim 3, wherein after selecting the set of image pairs to be used as the training sample from the set of candidate image pairs based on the second predetermined filtering rule, the method further comprises:
determining a relative attenuation ratio value of each image pair in the set of image pairs used as training samples; correcting each image pair based on the relative attenuation ratio of each image pair, and then giving random attenuation to each image in the image pair to generate a new image pair; determining a relative attenuation ratio of the new image pair based on the random attenuation, the new image pair being used as a training sample;
or, alternatively,
determining a relative attenuation ratio of each image in the image pair set used as the training sample relative to an initial image, wherein the initial image is an image without response attenuation of a sensing instrument; correcting each image based on the relative attenuation ratio of each image relative to the initial image, and then randomly attenuating each image to generate a new image and a new image pair; based on the random attenuation, a relative attenuation ratio of the new image pair is determined, the new image pair being used as a training sample.
5. The method for analyzing response attenuation of a sensing instrument as claimed in claim 3, wherein after selecting the set of image pairs to be used as the training sample from the set of candidate image pairs based on the second predetermined filtering rule, the method further comprises:
giving random attenuation to each image in the set of image pairs used as training samples to generate a new image pair;
determining a relative attenuation ratio of the new image pair based on the random attenuation, the new image pair being used as a training sample.
6. The sensing instrument response decay analysis method of any one of claims 3-5, wherein the first preset screening rule comprises at least one of:
the image display area is at least one of desert and bare land;
the satellite observation angles of the pixels in the image are all smaller than a first preset angle;
there is no pixel with a missing value in the image;
there is no cloud in the image.
7. The sensing instrument response decay analysis method of any one of claims 3-5, wherein the second preset screening rule comprises at least one of:
the shooting time of the image pair satisfies: T1 - T2 = 365.25 × y ± n, y ∈ Z, n ∈ [1,60], where T1 and T2 respectively denote the capturing times of two images in an image pair, y denotes a year parameter, and n denotes a date parameter;
the satellite observation angle difference of the corresponding pixel of the image pair is smaller than a second preset angle.
8. A sensing instrument response attenuation analysis apparatus, comprising:
an acquisition unit for acquiring a target image pair set to be analyzed;
the processing unit is used for sequentially inputting each target image pair in the target image pair set into a preset convolutional neural network model to obtain a relative attenuation ratio of each target image pair;
a fitting unit for fitting to generate the sensing instrument response attenuation curve based on the relative attenuation ratio of each target image pair in the set of target image pairs.
9. A sensing instrument response attenuation analysis terminal comprising a computer readable storage medium storing a computer program and a processor, the computer program being read and executed by the processor to implement the sensing instrument response attenuation analysis method according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when read and executed by a processor, implements the sensor instrument response attenuation analysis method according to any one of claims 1 to 7.
CN202010440427.9A 2020-05-22 2020-05-22 Response attenuation analysis method and device for sensing instrument and terminal Active CN111611931B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010440427.9A CN111611931B (en) 2020-05-22 2020-05-22 Response attenuation analysis method and device for sensing instrument and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010440427.9A CN111611931B (en) 2020-05-22 2020-05-22 Response attenuation analysis method and device for sensing instrument and terminal

Publications (2)

Publication Number Publication Date
CN111611931A true CN111611931A (en) 2020-09-01
CN111611931B CN111611931B (en) 2024-04-09

Family

ID=72205171

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010440427.9A Active CN111611931B (en) 2020-05-22 2020-05-22 Response attenuation analysis method and device for sensing instrument and terminal

Country Status (1)

Country Link
CN (1) CN111611931B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105957086A (en) * 2016-05-09 2016-09-21 西北工业大学 Remote sensing image change detection method based on optimized neural network model
CN110740230A (en) * 2019-10-23 2020-01-31 北京集创北方科技股份有限公司 Image acquisition method, residual image attenuation parameter measurement system, image acquisition device, electronic apparatus, and computer-readable storage medium
CN111179372A (en) * 2019-12-31 2020-05-19 上海联影智能医疗科技有限公司 Image attenuation correction method, device, computer equipment and storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105957086A (en) * 2016-05-09 2016-09-21 西北工业大学 Remote sensing image change detection method based on optimized neural network model
CN110740230A (en) * 2019-10-23 2020-01-31 北京集创北方科技股份有限公司 Image acquisition method, residual image attenuation parameter measurement system, image acquisition device, electronic apparatus, and computer-readable storage medium
CN111179372A (en) * 2019-12-31 2020-05-19 上海联影智能医疗科技有限公司 Image attenuation correction method, device, computer equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHEN Nannan et al.: "An EMD-based adaptive radiometric equalization method for SAR images" *

Also Published As

Publication number Publication date
CN111611931B (en) 2024-04-09

Similar Documents

Publication Publication Date Title
Goslee Analyzing remote sensing data in R: the landsat package
Sawicki et al. The CFHT large area U-band deep survey (CLAUDS)
Morfitt et al. Landsat-8 Operational Land Imager (OLI) radiometric performance on-orbit
CN111461052A (en) Migration learning-based method for identifying lodging regions of wheat in multiple growth periods
Minařík et al. Radiometric and atmospheric corrections of multispectral μMCA camera for UAV spectroscopy
Pahlevan et al. Uncertainties in coastal ocean color products: Impacts of spatial sampling
CN112345458A (en) Wheat yield estimation method based on multispectral image of unmanned aerial vehicle
CN108154479A (en) A kind of method that remote sensing images are carried out with image rectification
CN102324098A (en) Relative radiation calibration method in combination with laboratory calibration and even landscape statistics
Iordache et al. A dynamic unmixing framework for plant production system monitoring
CN109712089A (en) Method suitable for the infrared shortwave load relative detector calibration of export-oriented remote sensing satellite
Casa et al. Retrieval of crop canopy properties: a comparison between model inversion from hyperspectral data and image classification
CN116519557A (en) Aerosol optical thickness inversion method
JP2018527667A (en) Detection of point light sources with different emission intensities in a series of images with different point spread functions
CN117115669B (en) Object-level ground object sample self-adaptive generation method and system with double-condition quality constraint
CN104198397B (en) The method that chamber crop nutrient content is detected under N P and K reciprocation
Towers et al. Effect of the illumination angle on NDVI data composed of mixed surface values obtained over vertical-shoot-positioned vineyards
CN111611931B (en) Response attenuation analysis method and device for sensing instrument and terminal
CN111199557A (en) Quantitative analysis method and system for decay of remote sensor
CN116469000A (en) Inversion method and device for forest ground biomass and leaf area index
CN115452167A (en) Satellite remote sensor cross calibration method and device based on invariant pixel
Pearson et al. The first source counts at 18 μm from the AKARI NEP Survey
CN112945897B (en) Continuous terahertz image non-uniformity correction method
Gascon et al. Using multi‐directional high‐resolution imagery from POLDER sensor to retrieve leaf area index
CN102855625A (en) Method of obtaining rice canopy cluster indexes by using hemispherical photography method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210422

Address after: 518055 Taoyuan Street, Nanshan District, Shenzhen City, Guangdong Province, Harbin University of Technology Campus

Applicant after: HARBIN INSTITUTE OF TECHNOLOGY (SHENZHEN)

Applicant after: NATIONAL SATELLITE METEOROLOGICAL CENTER

Address before: 1611, building L, Harbin Institute of Technology campus, Xili University Town, Nanshan District, Shenzhen City, Guangdong Province

Applicant before: HARBIN INSTITUTE OF TECHNOLOGY (SHENZHEN)

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant