CN111815525B - Scene-based radiation calibration method and system - Google Patents


Info

Publication number
CN111815525B
CN111815525B (application CN202010515427.0A)
Authority
CN
China
Prior art keywords
scene
imaging
imaging data
preset event
calibration
Prior art date
Legal status
Active
Application number
CN202010515427.0A
Other languages
Chinese (zh)
Other versions
CN111815525A
Inventor
谢成荫
杨峰
任维佳
杜志贵
Current Assignee
Spacety Co ltd Changsha
Original Assignee
Spacety Co ltd Changsha
Priority date
Filing date
Publication date
Application filed by Spacety Co ltd Changsha
Priority to CN202010515427.0A
Publication of CN111815525A
Application granted
Publication of CN111815525B

Classifications

    • G06T5/80
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C25/00: Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10032: Satellite or aerial image; Remote sensing

Abstract

The invention relates to a scene-based radiometric calibration method and system. The method comprises: when a flight platform passes over a uniform scene, performing push-broom imaging with an included angle α between the along-track direction of the flight platform and the arrangement direction of the sensor's linear detector arrays, so as to acquire imaging data of the ground scene, wherein the arrangement direction of at least one linear detector array is constrained to be neither parallel nor perpendicular to the along-track direction of the flight platform; and performing pre-calibration processing based on the imaging data.

Description

Scene-based radiation calibration method and system
The present application is a divisional application of the invention application entitled "Scene-based adaptive correction method and system for a satellite-borne remote sensing instrument", application number 201911262809.0, filed December 11, 2019.
Technical Field
The invention belongs to the technical field of remote sensing. It relates to radiometric calibration methods, to adaptive radiometric calibration, and to scene-based radiometric calibration correction, and in particular to a scene-based adaptive correction method for a satellite-borne remote sensing instrument.
Background
A linear-array push-broom optical sensor exhibits non-uniform response because of response and bias differences among its detector elements, the inherent noise and dark-current non-uniformity of each element, and differences in the sensor's peripheral circuitry. Each detector element therefore images slightly differently, which appears on the image as various kinds of random noise. Relative radiometric calibration uses a high-precision radiometric reference to calibrate these imaging-system errors and determine the response relationship of each detector element, so the accuracy of the radiometric reference directly determines the precision of the relative radiometric calibration.
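As a concrete illustration (not part of the patent text), the per-element response differences described above are commonly modeled as a linear gain and offset for each detector element, and relative radiometric correction inverts that model column by column. The following Python sketch assumes this simple linear detector model; the array layout (rows = scan lines, columns = detector elements) is an assumption for illustration:

```python
import numpy as np

def apply_relative_correction(dn, gain, bias):
    """Correct raw digital numbers column by column: DN' = gain[i]*DN + bias[i]."""
    return dn * gain[np.newaxis, :] + bias[np.newaxis, :]

# Simulate a uniform scene (true radiance 100) seen through non-uniform detectors.
rng = np.random.default_rng(0)
true_gain = rng.uniform(0.8, 1.2, size=4)      # detector response non-uniformity
true_bias = rng.uniform(-5.0, 5.0, size=4)     # detector offset non-uniformity
raw = true_gain * 100.0 + true_bias            # one scan line over a uniform scene
raw = np.tile(raw, (6, 1))                     # six identical scan lines

# The calibration coefficients invert the per-element detector model.
corr_gain = 1.0 / true_gain
corr_bias = -true_bias / true_gain
corrected = apply_relative_correction(raw, corr_gain, corr_bias)
```

After correction, the uniform scene produces a uniform image, which is exactly the non-uniformity ("striping") removal the calibration aims at.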
At present, the main relative radiometric calibration methods for remote sensing satellites include: laboratory calibration with an integrating sphere before launch, on-board calibration in orbit using an on-board calibration lamp or diffuse reflection panel, on-orbit vicarious calibration over a uniform ground site, and on-orbit statistical calibration using imagery from the satellite's full life cycle.
For example, Chinese patent document CN106871925A discloses an on-orbit, comprehensively and dynamically adjusted relative radiometric calibration method for a remote sensing satellite: (1) acquire images for each radiance level of a laboratory integrating sphere and derive a coefficient table; (2) during on-orbit testing, with satellite safety as the basic constraint, perform 90° yaw imaging as often as possible while the satellite is in a preset state, acquire the strip data of the 90° yaw maneuvers, and build a 90° yaw radiometric calibration lookup table; (3) obtain a lookup table for each time interval during on-orbit testing or operational imaging; (4) when performing relative radiometric calibration of an image taken in a given satellite state, judge whether a 90° yaw lookup table exists for that state; if so, call it to perform the relative radiometric calibration; if not, judge whether the imaging time is less than a preset time, and if it is, calibrate with the coefficient table; otherwise, call the lookup table nearest in time and perform the relative radiometric calibration.
For example, Chinese patent publication CN109671038A discloses a relative radiometric correction method based on hierarchical classification of pseudo-invariant features (PIFs), addressing the low accuracy of existing methods on remote sensing images that contain dominant land-cover areas such as coastlines and islands. The method comprises: 1. obtaining land-cover sub-images by classifying the remote sensing image; 2. determining an initial relative radiometric correction model and initial PIFs for each sub-image by spectral nonlinear regression analysis; 3. determining a refined nonlinear relative radiometric correction model and refined PIFs by gradient-based nonlinear regression analysis; 4. applying the refined PIFs and refined nonlinear model to correct each sub-image; 5. merging the corrected sub-images into a complete image.
In the above calibration methods, the integrating sphere, calibration lamp, or diffuse reflection panel serves as a high-precision radiometric reference, while uniform ground targets (desert, ocean, snow, etc.) and the massive sample sets of on-orbit statistical calibration serve as references whose properties are assumed on the basis of probability and statistics. However, vibration during launch and changes in the space environment after launch alter the response state of each detector element, and that response decays as the satellite's time in orbit accumulates, so laboratory calibration cannot guarantee high-precision radiometric calibration over the satellite's full life cycle. On-board radiometric calibration can reach higher precision and frequency, but not every satellite carries on-board calibration equipment, and that equipment itself degrades, reducing calibration precision. On-orbit statistical calibration requires massive sample imagery or uniform-site data, which cannot satisfy calibration in the satellite's early orbital phase or the satellite's high-frequency calibration needs.
As remote sensing satellites have become more agile, researchers have proposed exploiting that agility to perform relative radiometric calibration with 90° yaw imaging of the satellite or camera; this has been applied on satellites such as QuickBird, RapidEye and Landsat.
For example, document [1] (Long Liang, Wang Zhongmin. An on-orbit radiometric calibration method based on satellite agility [J]. Spacecraft Recovery & Remote Sensing, 2013, 34(4): 77-85) discloses relative radiometric calibration using 90° yaw imaging of the satellite or camera, i.e. the side-slither calibration method. Because the focal plane of the satellite sensor is laid out as multiple CCDs in a staggered parallel arrangement, and to ensure that all detector elements of every CCD receive the same entrance-pupil radiance during yaw calibration, the satellite's yaw relative radiometric calibration images a uniform ground calibration site. This greatly relaxes the uniformity requirement on the ground scene and helps improve the on-orbit radiometric calibration result. The method works well for sensors whose detector elements respond linearly, but since each detector element's response function differs between brightness intervals, and different elements have different response functions, the scheme cannot calibrate the full dynamic range of every element. For example, so that the calibration scenes cover the response range of the focal-plane detectors more completely, various land-cover types are usually selected as 90° yaw calibration scenes, such as dried lake beds, tropical rain forests, polar ice caps, and deserts. In practice this follows the traditional scene-selection principle of vicarious site calibration.
Scenes with higher emitted or reflected radiance, such as deserts and polar regions, are particularly suited to calibrating the upper end of the detector's dynamic range; uniform scenes of low radiance are comparatively hard to obtain. Low-radiance vegetation changes its radiative characteristics with the seasons, and choosing seawater as a low-radiance calibration scene additionally requires suitable conditions of wind and solar elevation angle. To obtain a uniform low-radiance calibration scene, document [1] performs 90° yaw calibration using the moon as the scene for the portion of the dynamic range near the bottom. Using the moon as the calibration scene has three main advantages: 1. the entire lunar surface can serve as a low-radiance calibration scene; 2. the influence of atmospheric variation on the calibration is essentially negligible; 3. the calibration is carried out in the orbit's shadow region or while passing over the earth's poles, so it does not conflict with the time slots of normal data-acquisition tasks. However, this approach requires multiple staggered parallel CCD sensors on both sides of the satellite's track, which increases the satellite's weight; the added sensors also need a series of control, feedback, and correction devices, which raises the satellite's cost and sacrifices capacity for other functional payloads.
For sensor focal planes assembled by optical butting, and given that each detector element's response function differs between brightness intervals, document [2] (Li Litao. Field-free relative radiometric calibration of the Yaogan-25 satellite [J]) discloses relative radiometric calibration by field-free 90° yaw imaging. In 90° yaw calibration the satellite platform or camera is rotated by 90° and the drift angle caused by the earth's rotation is corrected, so that the linear CCD array is parallel to the satellite's push-broom direction and the satellite acquires radiometric calibration data by push-broom imaging along the orbit. The method further comprises: 1. strip-noise suppression and contrast enhancement; 2. histogram specification of the yaw calibration data; 3. solving for the calibration parameters. In the method of document [2], the linear CCD detector elements pass in sequence over the same ground object during yaw calibration imaging, and the ground-object radiance acquired by the array is taken as completely equal over the imaging time of all detector elements of the array (3.2 s for Yaogan-25), without considering variation within that interval. The yaw calibration thus provides a high-precision radiometric reference for calibrating the radiometric response relationship of each detector element: every element is guaranteed to image the same ground object, which supplies that high-precision reference. Being field-free, the satellite's relative calibration does not depend on ground uniformity or on uniform land cover, which supports high-frequency on-orbit calibration of the sensor; and because no uniform ground site is required, the ground radiance can cover the sensor's entire response dynamic range, providing data for full-dynamic-range radiometric calibration. However, the method of document [2] has several problems. First, it uses histogram specification of the detector-element histograms to compute relative calibration parameters over Yaogan-25's full dynamic range, but does not resolve the residual error noted in document [2] that arises when a calibration derived statistically from medium- and high-brightness uniform-area data is applied to low-brightness data. As stated in document [2], the Yaogan-25 detector elements are not perfectly linear over the whole response dynamic range; the same element responds differently, and nonlinearly, in high-, medium- and low-reflectance uniform regions, so residual errors necessarily remain when low-brightness data are corrected with statistics from medium- and high-brightness uniform regions. Finally, although 90° yaw calibration can select a uniform calibration scene to reduce correction residuals, which helps when the scene's detected dynamic range at a given moment is relatively small, the radiometric characteristics of the calibrated uniform site change dynamically. The ocean, for example, differs in sea-surface temperature and radiance across sea areas, seasons, and latitudes, so the correction coefficients must be updated continually according to the actual detection conditions.
Furthermore, understanding may differ among those skilled in the art, and the inventors studied numerous documents and patents while making the present invention, so the text does not recite every detail of all of them. This by no means implies that the invention lacks these prior-art features; on the contrary, the invention may incorporate all such prior-art features, and the applicant reserves the right to add related prior art to the background section.
Disclosure of Invention
To address the inability of existing relative radiometric calibration methods to cover the full response range of the focal-plane detectors in a single calibration pass, the invention exploits a sensor that can rotate about the yaw axis of the flight platform. While 90° yaw imaging is performed, other linear detector arrays are set at several different oblique angles to the flight platform's along-track direction, so that the high-, medium- and low-reflectance regions of a scene are detected as quickly as possible. This provides trigger data for the ground base station and the flight platform's computing terminal and allows the imaging data to be classified, so that the detectors' response range is covered completely and data from different response ranges are not mixed, which would otherwise reduce the accuracy of the calibration data. In addition, to address the calibration residual caused by the dynamic variation of the scene's radiometric characteristics, the invention divides the imaging period into sub-periods according to the relation between the variation amplitude and time, selects the sub-periods in which the statistical mean and variance of the detector output signals are approximately constant, and constructs a linear correction model to correct the pre-calibrated imaging data in real time.
A scene-based adaptive correction method for a satellite-borne remote sensing instrument comprises at least: imaging in linear-array push-broom mode with a sensor that is carried by a flight platform and can rotate about the flight platform's yaw axis. The computing payload of the flight platform and/or the computing terminal of the ground base station performs the following steps: on the trigger of a preset event, performing push-broom imaging with an included angle between the flight platform's along-track direction and the arrangement direction of the sensor's linear detector array, so as to acquire imaging data of the ground scene, and performing pre-calibration processing on the acquired imaging data; and correcting the pre-calibrated imaging data in real time based on the degree and speed of dynamic variation of the ground scene over the duration of the preset event.
According to a preferred embodiment, the preset events comprise at least a first preset event corresponding to a high-reflectance scene, a second preset event corresponding to a medium-reflectance scene, and a third preset event corresponding to a low-reflectance scene, constructed from information sent by other flight platforms and from prior knowledge. The first preset event includes at least a first termination event for terminating imaging of the first preset event. The second preset event includes at least a second termination event for terminating imaging of the second preset event. The third preset event includes at least a third termination event for terminating imaging of the third preset event.
According to a preferred embodiment, in the case that at least one of the first, second and third preset events is triggered, the computing payload and/or computing terminal performs the following steps: recording the first trigger time point and first initial detector element triggered by the first preset event, the second trigger time point and second initial detector element triggered by the second preset event, and the third trigger time point and third initial detector element triggered by the third preset event; recording the first termination time point and first terminating detector element triggered by the first termination event, the second termination time point and second terminating detector element triggered by the second termination event, and the third termination time point and third terminating detector element triggered by the third termination event; and classifying the imaging data to form, within the dynamic range, first imaging data responsive to the high-reflectance scene, second imaging data responsive to the medium-reflectance scene, and third imaging data responsive to the low-reflectance scene.
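The classification step above can be sketched as follows. This is an illustrative assumption about the data layout, not the patent's implementation: each event contributes a (trigger time, termination time) window, and scan lines acquired inside a window are assigned to that event's reflectance class:

```python
import numpy as np

def classify_imaging_data(lines, timestamps, events):
    """lines: (T, N) array of scan lines; timestamps: (T,) acquisition times;
    events: dict class_name -> (t_trigger, t_terminate). Returns per-class arrays."""
    classified = {}
    for name, (t0, t1) in events.items():
        mask = (timestamps >= t0) & (timestamps <= t1)
        classified[name] = lines[mask]
    return classified

# Ten scan lines from three detectors, with three hypothetical event windows.
timestamps = np.arange(10.0)
lines = np.arange(10)[:, None] * np.ones((1, 3))
events = {"high_reflectance": (0.0, 2.0),
          "medium_reflectance": (3.0, 5.0),
          "low_reflectance": (6.0, 9.0)}
parts = classify_imaging_data(lines, timestamps, events)
```

Keeping the three classes separate is what lets each response interval of a detector element be calibrated against data from that interval only.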
According to a preferred embodiment, before the flight platform triggers the recording of a preset event, the computing payload of the flight platform and/or the computing terminal of the ground base station performs the following steps:
the flight platform performs push-broom imaging with at least one row of the sensor's linear detector arrays arranged so that different detector elements image the same scene unit in sequence;
the arrangement direction of at least one row of linear detector arrays constrains the included angle to be neither parallel nor perpendicular to the flight platform's along-track direction during push-broom imaging, so that the first preset event and/or the second preset event and/or the third preset event are triggered with maximum probability.
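The geometry behind these two steps can be sketched as follows. This is an assumed simplification, not the patent's formulation: with the linear array tilted by angle α relative to the along-track direction, adjacent detector elements are offset along-track by pitch·cos(α), so detector i sweeps over the ground point first seen by detector 0 after a delay of i·pitch·cos(α)/v:

```python
import math

def revisit_delay(i, pitch_m, alpha_deg, ground_speed_mps):
    """Delay (s) until detector i images the ground point first seen by detector 0,
    for an array tilted alpha_deg from the along-track direction."""
    return i * pitch_m * math.cos(math.radians(alpha_deg)) / ground_speed_mps

# alpha = 0 deg: array parallel to the track (side-slither), maximum revisit delay,
# so every element images the same scene unit in sequence.
# alpha = 90 deg: nominal push-broom, zero delay, each element sees a distinct point.
```

An oblique α between these extremes trades sequential same-scene imaging against cross-track coverage, which is why the patent constrains at least one array to be neither parallel nor perpendicular to the track.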
According to a preferred embodiment, the computing payload and/or computing terminal performs pre-calibration processing on the acquired imaging data as follows:
constructing an initial image, in units of the pixels generated by the detector elements, from imaging data comprising at least the first imaging data, the second imaging data, and the third imaging data;
denoising the initial image and applying pixel-wise high-frequency amplification to enhance the detail of the straight line formed in the initial image by pixels that image the same scene unit;
and shifting the gray value of each pixel after denoising and enhancement, so that pixels in the same row of the initial image are images of the same scene unit by different detector elements, and pixels in the same column are images by the same detector element.
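The shifting step can be illustrated as follows (an assumed implementation): in side-slither imaging each successive detector element images a given scene unit one scan line later, so the raw image stores the same scene unit along a diagonal; shifting column i up by i rows aligns the data so that each row holds one scene unit imaged by all elements and each column holds one element's responses:

```python
import numpy as np

def align_side_slither(raw):
    """Shift column i of the raw (rows x cols) image up by i rows, keeping only
    the fully-covered rows, so each output row is one scene unit seen by all cols."""
    rows, cols = raw.shape
    out = np.full((rows - cols + 1, cols), np.nan)
    for i in range(cols):
        out[:, i] = raw[i:i + rows - cols + 1, i]
    return out

# Scene units 0..5 swept by 3 detectors: detector i sees unit s at row s + i.
scene = np.arange(6.0)
raw = np.full((8, 3), np.nan)
for i in range(3):
    raw[i:i + 6, i] = scene
aligned = align_side_slither(raw)
```

After alignment, row statistics compare different detector elements on the same scene unit, which is exactly what the subsequent correction model needs.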
According to a preferred embodiment, after the computing payload and/or computing terminal pre-calibrates the imaging data, the following steps are performed:
judging whether the degree of dynamic variation of the ground scene over the duration of the preset event exceeds a first threshold;
and, if the degree of dynamic variation does not exceed the first threshold, correcting the pre-calibrated imaging data in real time with a linear correction model. The multiplier term of a pixel's correction model is estimated from the mean square error of the ground scene, and the constant term is estimated from the mean and mean square error of the ground scene.
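One concrete (assumed) form of this linear model, consistent with estimating the multiplier from the scene's second-order statistic and the constant from its mean: for each detector element i, take gain gᵢ = σ_scene/σᵢ and bias bᵢ = μ_scene − gᵢ·μᵢ, so corrected outputs share the scene's mean and standard deviation:

```python
import numpy as np

def fit_linear_correction(aligned, scene_mean, scene_std):
    """aligned: (S, N) array, one scene unit per row, one detector per column.
    Returns per-detector (gain, bias) matching each column to the scene stats."""
    mu = aligned.mean(axis=0)
    sigma = aligned.std(axis=0)
    gain = scene_std / sigma
    bias = scene_mean - gain * mu
    return gain, bias

rng = np.random.default_rng(1)
scene = rng.uniform(50.0, 150.0, size=200)           # shared scene radiance
true_gain = np.array([0.9, 1.0, 1.1])
true_bias = np.array([4.0, -2.0, 1.0])
aligned = scene[:, None] * true_gain + true_bias     # each detector's output
gain, bias = fit_linear_correction(aligned, scene.mean(), scene.std())
corrected = aligned * gain + bias
```

Because each column is an affine function of the same scene, matching first- and second-order statistics recovers the scene exactly in this simulation; on real data it removes the relative striping between elements.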
According to a preferred embodiment, when the degree of dynamic variation exceeds the first threshold, the computing payload of the flight platform and/or the computing terminal of the ground base station performs the following steps:
segmenting the duration of the preset event into first unit times;
searching for at least one first time set composed of consecutive first unit times in which the degree of dynamic variation does not exceed the first threshold;
and judging whether the ratio of the number of first unit times in the first time set to the degree of dynamic variation within the first time set meets a second threshold. When that ratio meets the second threshold, the pre-calibrated imaging data are corrected in real time with the linear correction model.
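The segmentation and search above can be sketched as follows; the scoring of each unit time and the exact ratio test are assumptions for illustration:

```python
import numpy as np

def find_stable_runs(change, thr1, thr2):
    """change: per-unit-time dynamic-variation degree. Returns the (start, stop)
    index pairs of maximal runs below thr1 whose length/total-change >= thr2."""
    runs, start = [], None
    for k, c in enumerate(list(change) + [thr1 + 1.0]):  # sentinel closes last run
        if c <= thr1 and start is None:
            start = k
        elif c > thr1 and start is not None:
            length = k - start
            total = float(np.sum(change[start:k]))
            if total == 0.0 or length / total >= thr2:
                runs.append((start, k))
            start = None
    return runs

# Seven unit times; indices 3 and 6 are too dynamic, the rest form two stable runs.
change = np.array([0.1, 0.2, 0.1, 5.0, 0.3, 0.2, 4.0])
runs = find_stable_runs(change, thr1=1.0, thr2=2.0)
```

Only the imaging data falling inside accepted runs would then be handed to the linear correction model, keeping the "statistics approximately constant" assumption valid.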
According to a preferred embodiment, after the computing payload and/or computing terminal has corrected the pre-calibrated imaging data in real time, it computes, by histogram specification on the corrected imaging data, the first calibration data parameters of the first imaging data, the second calibration data parameters of the second imaging data, and the third calibration data parameters of the third imaging data.
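Histogram specification can be sketched with a quantile-mapping implementation (a common concrete form; choosing the pooled data of all detectors as the reference distribution is an assumption for illustration):

```python
import numpy as np

def histogram_match_column(col, ref, n_quantiles=256):
    """Map one detector's values so its distribution matches the reference:
    value -> its quantile position in col -> the reference value at that quantile."""
    q = np.linspace(0.0, 1.0, n_quantiles)
    col_q = np.quantile(col, q)     # detector's quantile values
    ref_q = np.quantile(ref, q)     # reference quantile values
    return np.interp(col, col_q, ref_q)

rng = np.random.default_rng(2)
ref = rng.uniform(40.0, 200.0, 5000)      # reference (e.g. pooled) distribution
col = 0.8 * ref + 10.0                    # one detector with gain/offset error
matched = histogram_match_column(col, ref)
```

Because the simulated detector error is affine, the quantile mapping recovers the reference values exactly; per-class application (first, second, third imaging data) keeps each response interval's lookup separate.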
A scene-based adaptive correction system for a satellite-borne remote sensing instrument comprises at least a flight platform and a sensor that can rotate about the flight platform's yaw axis and image in linear-array push-broom mode. The system also includes a pre-calibration processing module and a correction module. The computing payload of the flight platform and/or the computing terminal of the ground base station is configured to perform push-broom imaging, on the trigger of a preset event, with an included angle between the flight platform's along-track direction and the arrangement direction of the sensor's linear detector array, so as to acquire imaging data of the ground scene. The pre-calibration processing module pre-calibrates the imaging data in response to an instruction from the computing payload and/or computing terminal. The correction module, in response to an instruction from the computing payload and/or computing terminal, corrects the pre-calibrated imaging data in real time based on the degree and speed of dynamic variation of the ground scene over the duration of the preset event.
The preset events of the computing payload and/or computing terminal comprise at least a first preset event corresponding to a high-reflectance scene, a second preset event corresponding to a medium-reflectance scene, and a third preset event corresponding to a low-reflectance scene, constructed from signals sent by other flight platforms and from prior knowledge. The first preset event comprises at least a first termination event for terminating imaging of the first preset event, the second preset event comprises at least a second termination event for terminating imaging of the second preset event, and the third preset event comprises at least a third termination event for terminating imaging of the third preset event.
The beneficial technical effects of the invention include one or more of the following:
1. To address the calibration error caused by variation of the scene's radiometric dynamic characteristics, the invention divides the imaging period into several sub-periods according to the relation between variation amplitude and time, selects the sub-periods in which the statistical mean and variance of the detector output signals are approximately constant, and, on the approximation that the statistical mean of each detector element's output signal is constant, applies the constructed linear correction model to correct the pre-calibrated imaging data of those sub-periods in real time, preventing the relative radiometric calibration error from propagating;
2. Imaging can be classified according to the triggering of different ground scenes, so that imaging data in different response ranges are separated for radiometric calibration and correction and the response characteristics of a single detector element in intervals of different radiance are obtained. The detector's response range is thus covered completely while data from different response ranges are kept unmixed, which would otherwise reduce the accuracy of the calibration data;
3. In the invention, push-broom imaging is performed with a detector array arranged neither parallel nor perpendicular to the flight platform's along-track direction, so the sensor images a large area of the scene before the detector array whose relative included angle α is 0° images it. The first, second and third preset events are thereby triggered with maximum probability, real-time information about the calibration scene is obtained in advance, and trigger information for the first, second and third preset events is provided for the subsequent calibration imaging. Moreover, when the relative included angle α is 0°, imaging of the calibration scene captures its large-scale dynamic changes in real time, supplying the mean and variance of the dynamic variation of the scene's radiometric characteristics to the subsequent steps, from which the correction gain and bias of the corresponding pixels are estimated;
4. The invention estimates the constant term of a pixel's correction model from the mean and variance of the ground scene, and the pre-calibrated imaging data can be corrected in real time by simple iterative processing. Because a linear model is used for the correction, the algorithm is simple and has low time complexity, and the calibration coefficients adapt to the dynamic changes of the calibration scene. This reduces the computational cost of the correction, avoids consuming large resources to process the imaging data, simplifies the flight platform's payload design, extends the flight platform's service life, and increases the processing speed of the imaging data.
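The "simple iterative processing" of effect 4 can be sketched with exponentially weighted running statistics; the specific update rule below is an assumption, not the patent's algorithm:

```python
import numpy as np

class AdaptiveCorrector:
    """Maintain running mean/variance per detector element and refresh the
    linear correction coefficients each scan line, so the calibration adapts
    as the scene's radiometric statistics drift."""

    def __init__(self, n_detectors, ref_mean, ref_std, alpha=0.05):
        self.alpha = alpha
        self.mean = np.full(n_detectors, float(ref_mean))
        self.var = np.full(n_detectors, float(ref_std) ** 2)
        self.ref_mean, self.ref_std = ref_mean, ref_std

    def update(self, line):
        a = self.alpha
        self.mean = (1 - a) * self.mean + a * line
        self.var = (1 - a) * self.var + a * (line - self.mean) ** 2
        gain = self.ref_std / np.sqrt(self.var)
        bias = self.ref_mean - gain * self.mean
        return line * gain + bias

# Uniform scene N(100, 10) seen by three detectors with different gain/offset.
rng = np.random.default_rng(3)
corrector = AdaptiveCorrector(3, ref_mean=100.0, ref_std=10.0, alpha=0.05)
true_gain = np.array([2.0, 1.0, 0.5])
true_bias = np.array([5.0, -3.0, 0.0])
corrected = [corrector.update(true_gain * s + true_bias)
             for s in rng.normal(100.0, 10.0, size=3000)]
tail = np.array(corrected[-1000:])
```

After the transient, each detector's corrected output converges toward the reference statistics without reprocessing past data, which is the low-cost adaptivity the effect describes.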
Drawings
FIG. 1 is a schematic illustration of a flying platform push broom imaging in a preferred embodiment of the method of the present invention;
FIG. 2 is a schematic flow diagram of a preferred embodiment of the method of the present invention;
FIG. 3 is a schematic push-broom view of the method of the present invention with an included angle of 90° between the linear sensor array and the flight platform's along-track direction;
FIG. 4 is a schematic push-broom view of the method of the present invention with an included angle of 0° between the linear sensor array and the flight platform's along-track direction;
FIG. 5 is a pixel arrangement diagram of a preferred embodiment of the method of the invention;
FIG. 6 is a pixel arrangement diagram after pre-scaling processing of a preferred embodiment of the method of the present invention; and
FIG. 7 is a schematic diagram of the modular connections of a preferred embodiment of the system of the present invention.
List of reference numerals
1: flying platform    2: sensor
3: preset event    4: computing terminal of ground base station
α: included angle    5: first imaging data
6: second imaging data    7: third imaging data
8: pre-scaling processing module    9: correction module
11: computation load    21: probe element
31: first preset event    32: second preset event
33: third preset event    5a: first calibration data parameter
6a: second calibration data parameter    7a: third calibration data parameter
311: first termination event    321: second termination event
331: third termination event
Detailed Description
The following is a detailed description with reference to fig. 1 to 7.
Example 1
This embodiment discloses a correction method, which may be a radiometric calibration method, a relative radiometric calibration method, an adaptive radiometric calibration method, a relative radiometric calibration correction method, a direct adaptive radiometric calibration correction method, or a scene-based adaptive correction method for a satellite-borne remote sensing instrument. The method can be implemented by the system of the present invention and/or other replaceable components. Without conflict or contradiction, the preferred implementations of other embodiments may be incorporated into this embodiment in whole and/or in part.
Preferably, radiometric calibration is the process of establishing a quantitative link between the amount of radiation and the output of the detector. Its purpose is to eliminate errors of the sensor itself and to determine the accurate radiance at the entrance pupil of the sensor. Radiometric calibration techniques for space cameras or sensors mainly comprise two parts: relative radiometric calibration (also known as uniformity correction) and absolute radiometric calibration. Relative radiometric calibration corrects the responsivity differences between the pixels (probe elements) of the detector; the causes of such responsivity and bias differences include, besides differing manufacturing process levels, non-uniformity of the sensor itself, non-uniformity introduced during operation of the sensor, non-uniformity related to external inputs, and the influence of the optical system. Since visible-light sensor technology is mature, current visible-light detection devices generally do not need uniformity correction; relative radiometric calibration is therefore mainly applied to infrared bands. The responses of the probe elements of a focal plane device to radiation are inconsistent and follow no fixed relation, and the responsivity of typical photosensitive elements (probe elements) is not linear, which makes non-uniformity correction very difficult. Preferably, as shown in FIG. 4, the imaging data shown in FIG. 5 can be obtained by a 90° yaw, i.e., with the direction in which the probe elements 21 are arranged parallel to the imaging direction. As shown in FIG. 5, pixels of the same calibration scene in the same row, e.g., A or B or C or D or E, are imaged by different probe elements 21, while the pixels A, B, C, D, E of the same column are generated by the same probe element.
In the above arrangement, each probe element on the sensor 2 theoretically images the same scene in turn, other influencing factors aside. For example, the first column of probe elements in FIG. 5 images pixels A, B, C, D, E in turn, as do the second and third columns, so that every probe element images the same scene unit A. However, the radiation characteristics of the same scene also change dynamically: the same sea differs across seasons, wind conditions and solar altitude angles, so the radiation characteristics of the calibration scene may change within a single imaging period.
In summary, the invention addresses the calibration error caused by changes in the dynamic radiation characteristics of the scene. According to the relation between the change amplitude and time, the imaging period is differenced into a plurality of sub-periods; the sub-periods in which the statistical mean and variance of the probe element output signals are approximately constant are selected, and the pre-scaled imaging data of those sub-periods are corrected in real time by the constructed linear correction model, so that the relative radiometric calibration error is not propagated.
A scene-based adaptive correction method for a satellite-borne remote sensing instrument comprises at least the steps shown in FIG. 2.
The method comprises the following steps:
s100: push scan imaging with sensor 2 is performed with flying platform 1 based on triggering of preset event 3. Preferably, the flying platform 1 performs push-broom imaging in the manner as in fig. 1. Preferably, the flying platform 1 carries a calculation load 11 for calculation as well as the sensor 2. The sensor 2 may be a CCD of a line array. A CCD refers to a charge coupled device, which is a semiconductor device. A CCD is a detecting element that uses an electric charge quantity to represent the signal magnitude and uses a coupling mode to transmit signals. The CCD has the advantages of self-scanning, wide sense wave spectrum, small distortion, small size, light weight, low system noise, low power consumption and long service life. CCD is widely used in digital photography, astronomy, especially in optical remote sensing, optical and spectrum telescope and high-speed photography. Preferably, the method comprises the steps of. Push-broom imaging refers to a CCD made of semiconductor materials, a linear array or a surface array sensor is formed, a wide-angle optical system is adopted, a strip track is swept like a brush by means of movement of a flight platform 1 in the whole view field, and a two-dimensional image of the ground along the flight direction is acquired.
Preferably, the sensor 2 is rotatable about the aerial platform 1 yaw axis. The aeroplane axis refers to the axis of the flying platform 1 in its flying direction. Preferably, the sensor 2 is constituted by a plurality of probe elements 21 arranged in a line. Preferably, the probe element 21 may be a photosensitive element within a CCD. Preferably, an included angle α is formed between the arrangement direction of the linear array of the probe elements 21 of the sensor 2 and the flying direction of the flying platform 1 along the track, as shown in fig. 2 and 3. Preferably, as shown in fig. 3, the sensor 2 is at an angle α of 90 ° to the orbit or heading of the flying platform 1. As shown in fig. 2, the sensor 2 is at an angle α of 0 ° to the orbit or heading of the flight platform 1.
Preferably, after the push-broom imaging of the flying platform 1, the computation load 11 of the flying platform 1 and the computing terminal 4 of the ground base station can acquire the imaging data of the ground scene collected by the sensor 2. Preferably, the computation load 11 refers to a computing chip or circuit, such as a CPU, GPU, integrated circuit, FPGA, single-chip microcomputer, MCU, or ARM-architecture chip. Preferably, the computing terminal 4 refers to a computing device such as a computer or a server.
Preferably, the computation load 11 of the flying platform 1 and/or the computing terminal 4 of the ground base station control the rotation of the sensor 2 based on the triggering of the preset event 3. Preferably, the sensor 2 is provided with at least one linear array formed by a plurality of probe elements 21. Preferably, the computation load 11 and/or the computing terminal 4 control different linear arrays of probe elements 21 to rotate according to the triggering of the preset event 3, so that different linear arrays perform push-broom imaging at different included angles α. Preferably, imaging data of the calibration ground scene can be obtained from the push-broom imaging of the ground scene by the sensor 2.
According to a preferred embodiment, the preset events 3 comprise at least a first preset event 31 corresponding to a high-reflection scene, a second preset event 32 corresponding to a medium-reflection scene, and a third preset event 33 corresponding to a low-reflection scene. Preferably, to account for the differing response functions of a single probe element 21 in response intervals of different radiance and the differing response functions of different probe elements 21, and to cover the full dynamic range of each probe element 21 of the sensor 2 in the radiometric calibration, the method disclosed in this embodiment selects several types of ground scene as calibration scenes. The ground scenes are divided into high-reflection, medium-reflection and low-reflection scenes according to the dynamic range of the probe elements 21 and the radiation reflection characteristics of the individual sites. Preferably, taking the panchromatic 450 nm to 900 nm (visible to near-infrared) band as an example, a ground scene with reflectance of 35% or above is set as a high-reflection scene, a reflectance within 15%-34% as a medium-reflection scene, and a reflectance below 15% as a low-reflection scene. This arrangement has the beneficial effect that imaging can be classified according to the triggering of different ground scenes, so that imaging data in different response ranges can be separately calibrated and corrected, the response characteristics of a single probe element 21 in intervals of different radiance can be obtained, the response range of the probe element can be covered comprehensively, and mixing of data from different response ranges, which would reduce the accuracy of the calibration data, is avoided.
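The reflectance thresholds above can be illustrated with a minimal sketch; the function name and return labels are illustrative, not part of the patent:

```python
def classify_scene(reflectance):
    """Classify a ground scene by its reflectance in the 450-900 nm band.

    Thresholds follow the text: >= 35% high, 15%-34% medium, < 15% low.
    `reflectance` is a fraction in [0, 1].
    """
    if reflectance >= 0.35:
        return "high"
    if reflectance >= 0.15:
        return "medium"
    return "low"
```

A scene with 40% reflectance is thus classified as high-reflection and would trigger the first preset event 31.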
Preferably, the accuracy of the multi-scene radiometric calibration of the sensor 2 is affected by several links, among which the specific characteristics of the scene are the primary precondition for applying the invention; it must therefore be ensured that the surface characteristics, atmospheric characteristics and uniform area of the selected scene meet the specific requirements of on-orbit field calibration. The detailed selection principles are as follows:
1. the scene reflection characteristics cover high, medium and low ground object types;
2. the spatial and reflection characteristics of a single scene are relatively uniform, and the reflectance varies smoothly within the band range of the sensor 2;
3. the scene is located in a high-altitude area where the surrounding atmosphere is relatively clean and stable;
4. each scene can be covered by a single-orbit satellite remote sensing observation image;
5. the uniform area of the scene is larger than 10 pixels × 10 pixels of the sensor 2 to be calibrated, and there are no large occluding objects around the scene;
6. each scene has the traffic conditions needed to carry out satellite-ground synchronous observation tests.
Preferably, according to the above conditions and selection by prior knowledge, corresponding uniform scenes can be obtained. For example, in the Dunhuang radiometric correction field of China, the high-reflection scene is located on the north side of the field; the total area of the site is about 6 km × 4 km, the uniform high-reflection area is 400 m × 400 m at geographic coordinates N40°28′, E94°22′, the reflectance of the area in the visible to near-infrared band is about 35%-45%, and the spectral reflectance seen by the probe elements 21 of different sensors 2 varies by less than 1%. The medium-reflection scene may be the resource-satellite site of the Dunhuang radiometric correction field, located on the Gobi desert about 30 km west of Dunhuang city. Its total area is about 30 km × 35 km, the medium-reflection scene area is 550 m × 550 m at geographic coordinates N40°05′27.75″, E94°23′39″, and the altitude is 1229 m. The site has high stability and uniformity, the reflectance in the visible to near-infrared band is about 15%-30%, and the spectral reflectance seen by the probe elements 21 of different sensors 2 varies by about 1%-2%. The low-reflection scene can be the water body of South Lake on the south side of the radiometric correction field; the site area in summer and autumn is about 3.5 km × 1.2 km at geographic coordinates N39°52′, E94°07′, the average water depth is about 5 m, and the water body is unpolluted and uniform in its characteristics.
Preferably, the first preset event 31, the second preset event 32 and the third preset event 33 may be constructed from existing prior knowledge and may be continuously updated according to signals transmitted by other flying platforms 1. Preferably, because the ground scene is affected by the atmosphere, the seasons, wind and solar altitude, its radiation changes dynamically and continuously, so the imaging of the flying platform 1 is triggered in time based on scene radiation information sent by other flying platforms. Preferably, the first preset event 31 comprises at least the event of the flying platform 1 entering a high-reflection scene. The first preset event 31 further comprises a first termination event 311 for terminating the imaging of the first preset event 31. The first termination event 311 is the flying platform 1 leaving the high-reflection scene. Likewise, the second preset event 32 includes at least the flying platform 1 entering a medium-reflection scene. The second preset event 32 further includes a second termination event 321 for terminating the imaging of the second preset event 32. The second termination event 321 includes the flying platform 1 leaving the medium-reflection scene. The third preset event 33 comprises at least the flying platform 1 entering a low-reflection scene, and a third termination event 331 for terminating the imaging of the third preset event 33. The third termination event includes at least the flying platform 1 leaving the low-reflection scene. Preferably, the flying platform 1 learns that it has entered a high-reflection, medium-reflection or low-reflection scene through prior knowledge and information sent by other flying platforms or ground base stations.
Alternatively, the flying platform 1 obtains the reflectance information, coordinates, longitude and latitude of scenes matching the high-reflection class, and judges whether it has entered a high-reflection scene according to its real-time flight data and the reflectance, uniformity and other properties of the ground scene fed back by the sensor 2. Preferably, the flying platform 1 learns whether it has left a high-reflection, medium-reflection or low-reflection scene through prior knowledge and information sent by other flying platforms or ground base stations; alternatively, it judges this itself from the same information. Through this arrangement, the flying platform 1 can trigger the corresponding preset event 3 in time. In addition, while no preset event 3 is triggered, the sensor 2 of the flying platform 1 stays in a sleep state, which prevents the sensor 2 from being switched on for long periods and saves energy; this is particularly suitable for existing microsatellites and helps prolong the on-orbit service life of the flying platform 1.
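The enter/leave triggering and the sleep-until-triggered behavior described above can be sketched as a small state machine. This is a hedged illustration only: the class name, the bounding-box scene description and the event labels are assumptions, not the patent's on-board data format.

```python
class PresetEventMonitor:
    """Illustrative monitor for preset events 3 (names are hypothetical).

    Each scene is given as (reflectance_class, bounding_box) where the
    bounding box is (lat_min, lat_max, lon_min, lon_max) taken from prior
    knowledge. Entering a box triggers the event; leaving it terminates it.
    """

    def __init__(self, scenes):
        self.scenes = scenes      # list of (cls, bbox) records
        self.active = set()       # reflectance classes currently triggered

    def update(self, lat, lon):
        """Return the ('trigger', cls) / ('terminate', cls) transitions
        caused by the platform's new position."""
        events = []
        for cls, (la0, la1, lo0, lo1) in self.scenes:
            inside = la0 <= lat <= la1 and lo0 <= lon <= lo1
            if inside and cls not in self.active:
                self.active.add(cls)
                events.append(("trigger", cls))
            elif not inside and cls in self.active:
                self.active.remove(cls)
                events.append(("terminate", cls))
        return events

    @property
    def sensor_awake(self):
        # The sensor sleeps whenever no preset event is active.
        return bool(self.active)
```

Flying through the box of a high-reflection scene yields a trigger transition on entry and a terminate transition on exit, with the sensor awake only in between.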
According to a preferred embodiment, in case at least one of the first preset event 31, the second preset event 32, the third preset event 33 is triggered, the computing load 11 and/or the computing terminal 4 performs the following steps:
1. recording a first trigger time point and a first initial probe element triggered by a first preset event 31, a second trigger time point and a second initial probe element triggered by a second preset event 32, and a third trigger time point and a third initial probe element triggered by a third preset event 33;
2. recording a first termination time point and a first termination probe element triggered by the first termination event 311, a second termination time point and a second termination probe element triggered by the second termination event 321, and a third termination time point and a third termination probe element triggered by the third termination event 331;
Preferably, the imaging data conforming to the first preset event 31 among the imaging data output by the sensor 2 can be obtained from the first start probe element, the first termination probe element, the first trigger time point and the first termination time point. Likewise, the imaging data conforming to the second preset event 32 can be obtained from the second start probe element, the second termination probe element, the second trigger time point and the second termination time point, and the imaging data conforming to the third preset event 33 from the third start probe element, the third termination probe element, the third trigger time point and the third termination time point. With this arrangement, by marking the time points and probe elements at which the corresponding preset events are triggered and terminated, the flying platform 1 can image not only the first preset event 31, the second preset event 32 or the third preset event 33 singly, but also several of them simultaneously.
Preferably, the imaging data are classified based on the above start time points, termination time points, start probe elements and termination probe elements. For example, when the first preset event 31 is triggered, the computation load 11 or the computing terminal 4 records the first start time and the first start probe element that begins imaging in the sensor 2; when the first termination event is triggered, it records the first termination probe element and the first termination time, so that all probe elements of the same line in the sensor 2 located between the first start probe element and the first termination probe element image the first preset event 31, yielding the first imaging data 5. The imaging time of the first imaging data 5 is calculated from the first start time and the first termination time for real-time correction in the subsequent steps. Likewise, the second imaging data 6 are obtained through the second start and termination probe elements, with imaging time given by the second start and termination times, and the third imaging data 7 through the third start and termination probe elements, with imaging time given by the third start and termination times. Preferably, the first imaging data 5 are the imaging data of the response to the high-reflection scene within the dynamic range, the second imaging data 6 of the response to the medium-reflection scene, and the third imaging data 7 of the response to the low-reflection scene.
According to a preferred embodiment, before the flying platform 1 triggers and records the preset event 3, the computation load 11 of the flying platform 1 and/or the computing terminal 4 of the ground base station perform the following steps:
1. The flying platform 1 performs push-broom imaging along the arrangement direction of the linear array of probe elements 21 of at least one row of the sensor 2, as shown in FIG. 4; that is, the flying platform 1 controls the sensor 2 to perform push-broom imaging with the included angle α equal to 0°. With this arrangement, different probe elements 21 of the same linear array image the same scene unit in turn, as shown in FIG. 5. A, B, C, D, E in FIG. 5 are pixels of the same scene imaged by one probe element 21 in chronological order, i.e., from first to last. A refers to a first region in the scene; B to a second region adjacent to the first; C to a third region adjacent to the second; D to a fourth region adjacent to the third; and E to a fifth region adjacent to the fourth. As shown in FIG. 5, every column contains A, B, C, D, E: the A, B, C, D, E of one column are the pixels imaged by one and the same probe element 21, while the several A (or B or C) pixels of one row were imaged by different probe elements 21.
2. The arrangement direction of the linear array of probe elements 21 of at least one row of the sensor 2 forms an included angle α that is neither parallel nor perpendicular to the along-track direction of the flying platform 1. Preferably, at least one linear array of probe elements 21 of the sensor 2 whose included angle α is not 0° is rotatable; after rotation the included angle α is neither 0° nor 90°, i.e., the arrangement direction of the linear array is neither parallel nor perpendicular to the along-track direction of the flying platform 1. With this arrangement for push-broom imaging, the sensor 2 can image the scene over a large range before the linear array with included angle α of 0° images it, the first preset event 31, the second preset event 32 and the third preset event 33 are triggered with maximum probability, and real-time information of the calibration scene is acquired in advance, providing trigger information of the three preset events for the subsequent calibration imaging. Moreover, while the linear array with included angle α of 0° is imaging, the large-range imaging of the calibration scene acquires its dynamic changes in real time, so that the mean and mean square error of the dynamic changes of the radiation characteristics of the calibration scene are supplied to the subsequent step S300, where the correction gain and offset of each pixel are estimated from that radiation mean and mean square error.
S200: performing pre-scaling processing on the obtained imaging data;
According to a preferred embodiment, the computation load 11 of the flying platform 1 and/or the computing terminal 4 of the ground base station pre-scale the obtained imaging data as follows:
1. An initial image is generated based on the imaging data, which comprise at least the first imaging data 5, the second imaging data 6 and the third imaging data 7. Preferably, the initial image is as shown in FIG. 5 and is made up of the pixels generated by the probe elements 21. For example, along the flight direction of the flying platform 1, the first probe element of a line of probe elements 21 with included angle α of 0° generates an A pixel in the first imaging sample. In the second sample the first probe element generates a B pixel and the second probe element an A pixel; in the third sample the first probe element generates a C pixel, the second a B pixel and the third an A pixel.
2. Denoising is performed on the initial image. Preferably, the initial image is composed of the pixels of each column of probe elements 21, and A, B, C, D, E are arranged diagonally in it. Since the pixels imaging the same scene unit come from different probe elements 21 whose response functions differ, and since the dynamic radiation characteristics of the calibration scene vary, the A pixels of the initial image differ from one another, and the stripe noise of the initial image therefore appears along the diagonals.
Preferably, the denoising filters the image data of the initial image in the frequency domain to separate out the low-frequency noise; an exponential filter may be used. In this arrangement, the initial image is Fourier-transformed to obtain its frequency-domain spectrum, the low-frequency noise component is filtered out to leave the high-frequency component of the initial image, and the high-frequency component is amplified to enhance the detail of the diagonal straight lines formed by the pixels imaging the same scene unit.
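The frequency-domain step above (suppress the low-frequency component, amplify the high-frequency residual) has a simple spatial-domain analogue, shown here as a self-contained 1-D sketch rather than the patent's exact filter: a moving average estimates the low-frequency component and the amplified residual enhances high-frequency detail.

```python
def enhance_high_freq(signal, window=3, gain=2.0):
    """Spatial-domain analogue of low-frequency suppression with
    high-frequency amplification (illustrative, not the patent's filter).

    A centered moving average of width `window` estimates the low-frequency
    component; the residual (high-frequency part) is amplified by `gain`.
    """
    n = len(signal)
    out = []
    for i in range(n):
        lo = max(0, i - window // 2)
        hi = min(n, i + window // 2 + 1)
        low_freq = sum(signal[lo:hi]) / (hi - lo)   # local low-frequency estimate
        out.append(low_freq + gain * (signal[i] - low_freq))
    return out
```

A flat signal passes through unchanged, while an isolated spike (high-frequency detail, like the diagonal line pixels) is boosted.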
3. A shift is performed based on the gray value of each pixel in the denoised and enhanced initial image. Preferably, since a number of pixels are arranged diagonally in the initial image, the straight line they form can be detected by the LSD (Line Segment Detector) method. Preferably, LSD detects a straight line as follows:
a. The scale is set to 1, i.e., no Gaussian down-sampling is performed. Preferably, this is because down-sampling would destroy the nonlinear response relationships between the individual probe elements 21 in the initial image, so the sampled data could not be used for calibration, or the calibration would fail.
b. The gradient value and gradient direction of each pixel are calculated and pseudo-ordered. Preferably, the larger the gradient value, the more pronounced the edge point, and the more suitable it is as a seed point. However, since fully sorting the gradient values costs too much time, the gradient values are simply divided into 1024 levels covering the gradient range from 0 to 255, and this binning has linear time cost. Preferably, seed points are searched downward from the highest of the 1024 levels; pixels with the same gradient level are placed in the same linked list, giving 1024 linked lists, which are assembled into a state table ordered by gradient value from small to large. All points in the state table are initially set to the not-used state.
c. Points with gradient value smaller than p are excluded; the point with the maximum gradient value in the state table is taken out as a starting point, and the surrounding directions are searched within the angle tolerance. Preferably, region growing is performed by searching the surrounding directions within the angle tolerance, i.e., in directions similar to the gradient angle. Preferably, a rectangle R is fitted to the grown region. Preferably, p may be the expectation of all gradient values, or may be set manually. Preferably, points with gradient values smaller than p tend to occur in smooth regions or in pure low-frequency noise, which would seriously affect the calculation of the line angle; in LSD, pixels with gradient magnitude smaller than p are therefore rejected from the construction of the rectangle R. Preferably, the rectangular fitting of the grown region is essentially a shift of the gray values of the pixel data, not a sampling of the data points.
d. It is judged whether the density of aligned points in the rectangle R meets the threshold F. Preferably, if not, the rectangle R is truncated into a plurality of rectangles that do satisfy the threshold F. Preferably, the threshold F may be set to one third of the number of probe elements 21 actually participating in imaging in the sensor 2, so that lines of shorter length are eliminated.
Through this arrangement, the diagonal rectangle R formed by like pixels in the initial image can be detected, and the included angle α between the diagonal and the along-track direction of the flying platform can also be obtained.
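The pseudo-ordering of step b can be sketched as follows. This is an illustrative reimplementation of the 1024-level binning idea only, not the reference LSD code; the function name and parameters are assumptions.

```python
def pseudo_order(gradients, levels=1024, g_max=255.0):
    """Pseudo-order pixel indices by gradient magnitude in linear time.

    Instead of fully sorting, magnitudes are quantized into `levels` bins
    covering [0, g_max] (the 1024 levels over the 0-255 gradient range of
    the text). Bins are then read from the highest level downward, so seed
    points with the largest gradients come first.
    """
    bins = [[] for _ in range(levels)]
    for idx, g in enumerate(gradients):
        level = min(levels - 1, int(g / g_max * (levels - 1)))
        bins[level].append(idx)
    order = []
    for level in range(levels - 1, -1, -1):   # search downward from the top level
        order.extend(bins[level])
    return order
```

Binning costs O(n + levels) rather than the O(n log n) of a full sort, which is why LSD accepts the approximate ordering.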
4. The line-detected initial image is processed according to the included angle α and the following relation, so that the pixels of the same row of the initial image become images of the same scene unit by different probe elements 21, and the pixels of the same column images by the same probe element 21, as shown in FIG. 6:
where DN is the gray data of the initial image stored row-wise as a one-dimensional array, DN[m + n·t] represents the gray value of the pixel in the n-th column of the m-th row of the initial image, t denotes the number of probe elements 21 participating in imaging, K₁ = tan α, and K₂ = tan(90° − α).
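As an illustration of this realignment, the following sketch handles only the simplest 0°-angle case, in which probe element p sees scene unit u at sample time u + p (a simplifying assumption; the patent's relation additionally covers arbitrary angles α via K₁ and K₂). The function name and 2-D layout are illustrative.

```python
def deskew(image):
    """Align diagonally-arranged pixels of the same scene unit.

    `image[time][probe]` is the raw push-broom output for the 0-degree
    case where probe p images scene unit u at sample time u + p. The
    result is indexed `aligned[scene_unit][probe]`, so each row holds one
    scene unit as seen by every probe element (the layout of FIG. 6).
    """
    n_t = len(image)          # number of time samples (rows)
    n_p = len(image[0])       # number of probe elements (columns)
    n_units = n_t - n_p + 1   # scene units every probe element has seen
    return [[image[u + p][p] for p in range(n_p)] for u in range(n_units)]
```

With synthetic data where `image[t][p] = t - p` (each scene unit labeled by its index), every aligned row becomes constant, i.e., all probe elements' views of one scene unit line up.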
S300: real-time correction is performed based on the degree of dynamic-characteristic change of the ground scene within the duration of the preset event and on the pre-scaled imaging data. Preferably, after the computation load 11 and/or the computing terminal 4 of the flying platform 1 pre-scale the imaging data, the following steps are performed:
1. It is judged whether the degree of dynamic-characteristic change of the ground scene within the duration of the preset event exceeds a first threshold. Preferably, the duration refers to the imaging time of the linear array of probe elements 21 for the preset event 3. Preferably, the first threshold refers to a degree or rate of change of the reflectance of the scene used for relative radiometric calibration of 2% or 5% over the imaging period. Preferably, when the degree of dynamic-characteristic change does not exceed the first threshold, the linear correction model is used to correct the pre-scaled imaging data in real time. Preferably, when the rate of change does not exceed 2%, the statistical mean of the signal output by each probe element 21 can be regarded as approximately constant and the statistical variances of the signals input to the probe elements 21 as equal, so a linear correction model can be used. The linear correction model is
Y_i(n) = G_i(n) · X_i(n) + O_i(n),
where n is the iteration number of the linear correction model, X_i(n) is the original output of the i-th probe element 21, Y_i(n) is the corrected output value, G_i(n) is the multiplier term of the correction model, i.e., the gain, and O_i(n) is the constant term of the correction model, i.e., the offset. Preferably, the mean square error of the ground scene can be used to estimate the multiplier term of the pixel's correction model, and the mean and mean square error of the ground scene to estimate its constant term. Preferably, let a_i(n) be the mean square error and β_i(n) the mean of the scene as imaged by the i-th probe element, and let a(n) and β(n) be the mean square error and mean of the calibration scene over all probe elements; the iteration is then
G_i(n) = a(n) / a_i(n), O_i(n) = β(n) − G_i(n) · β_i(n).
Through the above two iterative formulas, the pre-scaled imaging data can be corrected in real time. Because a linear model is used for the correction, the algorithm is simple and its time complexity is low, and the calibration coefficients can be adjusted adaptively according to the dynamic changes of the calibration scene.
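The gain/offset estimation above can be sketched in batch form, assuming per-probe statistics computed over a stable sub-period. This is a hedged sketch of statistics-matching correction consistent with the model's definitions, not the patent's exact iteration; the function name and the dead-probe guard are illustrative.

```python
def correct(raw, scene_mean, scene_std):
    """Match each probe element's output statistics to the calibration
    scene's statistics (sketch of the linear correction model).

    For probe i with sample mean mu_i and mean square error s_i, the gain
    is G_i = scene_std / s_i and the offset O_i = scene_mean - G_i * mu_i,
    so every corrected output series shares the scene's mean and spread.
    `raw[i]` is the list of samples from probe element i.
    """
    corrected = []
    for samples in raw:
        mu = sum(samples) / len(samples)
        var = sum((x - mu) ** 2 for x in samples) / len(samples)
        s = var ** 0.5 or 1.0            # illustrative guard for a dead probe
        g = scene_std / s                # multiplier term (gain)
        o = scene_mean - g * mu          # constant term (offset)
        corrected.append([g * x + o for x in samples])
    return corrected
```

Two probes seeing the same scene with different gains/offsets are mapped onto identical corrected series, which is exactly the uniformity that relative radiometric calibration seeks.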
2. In the event that the degree of variation of the dynamic characteristics exceeds a first threshold value, the calculation load 11 of the flying platform 1 and/or the calculation terminal 4 of the ground base station perform the following steps:
a. The duration of the preset event is segmented by a first unit time. Preferably, the imaging time period in which the degree of dynamic characteristic change of the calibration scene exceeds the first threshold is subdivided using a differential approach, and at least one first time set within the period in which the degree of dynamic characteristic change does not exceed the first threshold is searched for. Preferably, the first time set includes a plurality of mutually adjacent first unit times, i.e., the plurality of first unit times are continuous without interruption. Preferably, it is judged whether the ratio of the number of first unit times in the first time set to the degree of dynamic characteristic change within the first time set satisfies a second threshold. Preferably, the second threshold is the ratio of the imaging time of 1024 pixels to a 5% degree of change in the dynamic characteristics of the scene within that imaging time. Preferably, if the second threshold is not satisfied, the statistical mean of the signal output by each probe element 21 is not considered constant and real-time correction cannot be performed.
b. In the case where the ratio of the number of first unit times in the first time set to the degree of dynamic characteristic change within the first time set satisfies the second threshold, the statistical mean of the signal output by each probe element 21 is considered approximately constant, and the linear correction model can be used to correct the pre-calibrated imaging data in real time. With this arrangement, using the differential principle, a time period within the imaging period in which the statistical mean and variance of the output signals of the probe elements 21 are approximately unchanged can be found wherever possible, and a linear correction model is constructed from the mean and variance of the calibration scene within that period. In fact, in actual relative radiometric calibration, the radiation characteristics of most calibration scenes vary slowly, or vary slowly over a certain period of time; by exploiting this characteristic of the calibration scene, the residual errors of calibration can be corrected in real time during the imaging period of all probe elements 21, improving the accuracy of the calibration data.
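Steps a-b above can be sketched as follows: the duration is discretised into first unit times (one change-rate sample each), maximal runs below the first threshold are collected, and a ratio test is applied to each run. The function names and the exact form of the second-threshold ratio are assumptions, not taken from the patent.

```python
def stable_windows(change_rates, first_threshold):
    """Return maximal runs [start, end) of consecutive first unit times
    whose change rate stays at or below first_threshold."""
    windows, start = [], None
    for i, r in enumerate(change_rates):
        if r <= first_threshold:
            if start is None:
                start = i
        elif start is not None:
            windows.append((start, i))
            start = None
    if start is not None:
        windows.append((start, len(change_rates)))
    return windows

def meets_second_threshold(window, change_rates, second_threshold):
    """Hypothetical ratio test: number of first unit times in the window
    divided by the window's change degree must reach second_threshold."""
    start, end = window
    degree = max(max(change_rates[start:end]), 1e-9)
    return (end - start) / degree >= second_threshold
```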
3. Based on the imaging data corrected in real time, the first calibration data parameter 5a of the first imaging data 5, the second calibration data parameter 6a of the second imaging data 6, and the third calibration data parameter 7a of the third imaging data 7 are obtained respectively by histogram specification calculation. Preferably, the calibration parameters are calculated using histogram specification based on the probe elements, with the following processing flow:
1. Establish a cumulative probability distribution function for each probe element from the initial image processed as above, according to the following formula, and select the cumulative probability distribution function of a desired probe element as the ideal reference cumulative probability distribution function. The formula is as follows:

P_j(k) = Σ_{i=0..k} pn_j(i) / dpn(j)

where k is the probe-element imaging gray level, pn_j(i) is the number of pixels of the j-th probe element whose gray level is i, and dpn(j) is the total number of pixels imaged by the j-th probe element.
2. Using the ideal reference cumulative probability distribution function as the standard, perform histogram specification processing on the cumulative probability distribution function of each probe element according to the following formula to obtain the relative radiometric calibration parameters of each probe element. The formula is as follows:
f(k − x) ≤ f(k) ≤ f(k + y)

where x and y take values in the range [0, 2^bits − 1], and bits is the quantization bit depth with which the sensor 2 acquires the sensor image. Through this algorithm, the radiation response differences between the probe elements 21 can be well reflected, and after the corresponding relative radiometric calibration parameters are applied, the change of the overall column-value distribution conforms to the law of actual scene variation and to the radiance differences between CCD taps.
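The per-element histogram-specification flow can be sketched as follows. This is a generic implementation of histogram matching across probe-element columns, assuming the first element's cumulative distribution is chosen as the ideal reference; the function name and the look-up-table output form are illustrative, not from the patent.

```python
import numpy as np

def relative_calibration_luts(image, bits=8):
    """image: 2-D integer array, one column per probe element. Build each
    element's cumulative probability distribution and map it onto a
    reference element's distribution (histogram specification), returning
    one gray-level look-up table per element."""
    levels = 2**bits
    cdfs = []
    for col in image.T:
        hist = np.bincount(col.ravel(), minlength=levels)[:levels]
        cdfs.append(np.cumsum(hist) / hist.sum())
    ref = cdfs[0]  # assumed choice of the "ideal" reference element
    luts = []
    for cdf in cdfs:
        # for each gray level k, find the reference level whose cumulative
        # probability first reaches this element's cumulative probability
        luts.append(np.searchsorted(ref, cdf, side="left").clip(0, levels - 1))
    return luts
```

Applying `luts[j]` to the j-th element's column maps its gray-level distribution onto the reference, which plays the role of that element's relative radiometric calibration parameters.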
Example 2
This embodiment discloses a correction system, which may be a radiometric calibration system, a relative radiometric calibration system, an adaptive radiometric calibration system, a relative radiometric calibration correction system, a direct adaptive radiometric calibration correction system, a scene-based radiometric calibration correction system, or a scene-based adaptive correction system for a satellite-borne remote sensing instrument. The method disclosed in this embodiment is implemented, for example, by the various components of the system of the present invention. Without conflict or contradiction, the preferred implementations of the other embodiments may supplement this embodiment in whole and/or in part.
As shown in fig. 7, a scene-based adaptive correction system for a satellite-borne remote sensing instrument comprises at least a flight platform 1, a sensor 2, a ground base station, a pre-calibration processing module 8, and a correction module 9. Preferably, the flight platform 1 may be an aircraft, a spacecraft, or a missile. The aircraft may be a balloon, an airship, an airplane, or the like. The spacecraft may be an artificial earth satellite, a manned spacecraft, a space probe, a space shuttle, or the like. Preferably, the sensor 2 is an advanced optical system mounted on the flight platform 1, a sensor that can be used to obtain information about targets on the earth. The sensor 2 may be a space camera, a CCD, or the like, or may be a sensor array composed of a plurality of CCDs. Preferably, the sensor 2 is composed of probe elements 21, i.e., photosensitive elements, arranged in a line. The sensor 2 may be a linear-array CCD. A CCD is a charge-coupled device, a semiconductor device; it is a detecting element that represents signal magnitude by charge quantity and transmits the signal by coupling. Preferably, the sensor 2 is rotatable about the yaw axis of the flight platform 1. Preferably, the yaw axis here refers to the axis of the flight platform 1 along its flight direction. As shown in fig. 3, the sensor 2 is at an angle α of 90° to the orbit or heading of the flight platform 1. As shown in fig. 4, the sensor 2 is at an angle α of 0° to the orbit or heading of the flight platform 1. Preferably, the sensor 2 images in a linear push-broom fashion.
Push-broom imaging means that a CCD made of semiconductor material forms a linear-array or area-array sensor 2 which, with a wide-angle optical system and by means of the motion of the flight platform 1, sweeps a strip-shaped track like a brush across the whole field of view, acquiring a two-dimensional image of the ground along the flight direction. Preferably, the flight platform 1 also carries a computing load 11. Preferably, the computing load 11 refers to a computing chip, circuit, or the like, such as a CPU, GPU, integrated circuit, FPGA, single-chip microcomputer, MCU, or ARM-architecture chip. Preferably, the ground base station comprises at least the computing terminal 4. The computing terminal 4 refers to a computing device such as a computer or a server. Preferably, the pre-calibration processing module 8 comprises at least registers, a storage medium, and a computing chip. The registers are used to store instructions for the computing load 11 and the computing terminal 4. The instructions contain at least operation control information. The storage medium is used to store the processed data. Preferably, the storage medium may be random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. The computing chip may be a CPU, GPU, integrated circuit, FPGA, single-chip microcomputer, MCU, ARM-architecture chip, or the like. Preferably, the pre-calibration processing module 8 may also employ the computing load 11 for the computation and processing of data. Preferably, the correction module 9 likewise comprises at least registers, a storage medium, and a computing chip.
Preferably, the computing load 11 of the flight platform 1 and/or the computing terminal 4 of the ground base station are configured to control the rotation of the sensor 2 based on the triggering of a preset event 3. Preferably, the sensor 2 is provided with at least one linear array formed by a plurality of probe elements 21. Preferably, the sensor 2 is rotatable about the yaw axis of the flight platform 1. The yaw axis refers to the axis of the flight platform 1 along its flight direction. Preferably, the sensor 2 is composed of a plurality of probe elements 21 arranged in a line. Preferably, a probe element 21 may be a photosensitive element within a CCD. Preferably, there is an included angle α between the arrangement direction of the linear array of probe elements 21 of the sensor 2 and the along-track flight direction of the flight platform 1, as shown in figs. 3 and 4. Preferably, as shown in fig. 3, the sensor 2 is at an angle α of 90° to the orbit or heading of the flight platform 1. As shown in fig. 4, the sensor 2 is at an angle α of 0° to the orbit or heading of the flight platform 1.
Preferably, the computing load 11 and/or the computing terminal 4 control different linear arrays of probe elements 21 to rotate according to the triggering of the preset event 3, so that the different linear arrays of probe elements 21 perform push-broom imaging at different included angles α, obtaining imaging data.
According to a preferred embodiment, the preset events 3 comprise at least a first preset event 31 corresponding to a high-reflection scene, a second preset event 32 corresponding to a medium-reflection scene, and a third preset event 33 corresponding to a low-reflection scene. Preferably, because the response function of a single probe element of the sensor 2 differs across response intervals of different radiance, and the response functions of different probe elements differ from one another, the method disclosed in this embodiment selects multiple types of ground scenes as calibration scenes so as to satisfy radiometric calibration over the full dynamic range of each probe element of the sensor. Preferably, ground scenes are divided into high-reflection, medium-reflection, and low-reflection scenes according to the dynamic range of the existing probe elements and the radiation reflection characteristics of the individual scenes. Preferably, taking the 450 nm-900 nm panchromatic band, i.e., the visible-to-near-infrared band, as an example, a ground scene with a reflectivity of 35% or higher is set as a high-reflection scene, a reflectivity within 15%-34% is set as a medium-reflection scene, and a reflectivity below 15% is set as a low-reflection scene. Through this arrangement, the invention has the beneficial effect that imaging can be classified according to the triggering of different ground scenes, so that imaging data in different response ranges are classified, the response characteristics of a single probe element in response ranges of different radiance are obtained, the response range of the probe element is comprehensively covered, and mixing of data from different response ranges, which would reduce the accuracy of the calibration data, is avoided.
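As a concrete illustration, the scene classification above reduces to a simple threshold rule (reflectivity expressed as a fraction; the function name is illustrative, not from the patent):

```python
def classify_scene(reflectivity):
    """Classify a ground scene by mean reflectivity in the 450-900 nm
    panchromatic band, following the thresholds given above."""
    if reflectivity >= 0.35:
        return "high"    # triggers the first preset event 31
    if reflectivity >= 0.15:
        return "medium"  # triggers the second preset event 32
    return "low"         # triggers the third preset event 33
```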
Preferably, the first preset event 31, the second preset event 32, and the third preset event 33 may be constructed from existing prior knowledge, and may be continuously updated according to signals transmitted by other flight platforms 1. Preferably, because the ground scene is affected by the atmosphere, latitude, wind, and solar altitude, its radiation changes continuously and dynamically, so the imaging of the flight platform 1 is triggered in time based on radiation information about the scene sent by other flight platforms. Preferably, the first preset event 31 comprises at least the event of the flight platform 1 entering a high-reflection scene. The first preset event 31 further comprises a first termination event 311 for terminating the imaging of the first preset event 31. The first termination event 311 is the flight platform 1 leaving the high-reflection scene. Likewise, the second preset event 32 comprises at least the flight platform 1 entering a medium-reflection scene. The second preset event 32 further comprises a second termination event 321 for terminating the imaging of the second preset event 32. The second termination event 321 comprises the flight platform 1 leaving the medium-reflection scene. The third preset event 33 comprises at least the flight platform 1 entering a low-reflection scene, and a third termination event 331 for terminating the imaging of the third preset event 33. The third termination event comprises at least the flight platform 1 leaving the low-reflection scene. Preferably, the flight platform 1 is informed that it has entered a high-reflection, medium-reflection, or low-reflection scene through prior knowledge and information sent by other flight platforms or the ground base station.
Alternatively, the flight platform 1 obtains reflectivity information, coordinates, longitude-latitude information, and the like for scenes conforming to a high-reflection scene, and based on this information, judges whether it has entered a high-reflection scene from its own real-time flight data and from the reflectivity, uniformity, etc. of the ground scene fed back by the sensor 2. Preferably, the flight platform 1 is informed whether it has left a high-reflection, medium-reflection, or low-reflection scene through prior knowledge and information sent by other flight platforms or the ground base station. Alternatively, the flight platform 1 obtains reflectivity information, coordinates, longitude-latitude information, and the like for scenes conforming to a high-reflection scene, and based on this information, judges whether it has left the high-reflection scene from its own real-time flight data and from the reflectivity, uniformity, etc. of the ground scene fed back by the sensor 2. Through the above arrangement, the flight platform 1 can trigger the corresponding preset event 3 in time based on this information. In addition, when no preset event 3 is triggered, the sensor 2 of the flight platform 1 is in a sleep state; keeping the sensor 2 from remaining on for long periods saves energy, which is particularly suitable for existing microsatellites and helps extend the on-orbit service life of the flight platform 1.
According to a preferred embodiment, in case at least one of the first preset event 31, the second preset event 32, the third preset event 33 is triggered, the computing load 11 and/or the computing terminal 4 performs the following steps:
1. recording a first trigger time point and a first initial probe element triggered by a first preset event 31, a second trigger time point and a second initial probe element triggered by a second preset event 32, and a third trigger time point and a third initial probe element triggered by a third preset event 33;
2. recording a first termination time point and a first termination probe element triggered by the first termination event 311, a second termination time point and a second termination probe element triggered by the second termination event 321, and a third termination time point and a third termination probe element triggered by the third termination event 331;
Preferably, the imaging data conforming to the first preset event 31 within the imaging data output by the sensor 2 can be obtained from the first start probe element, the first termination probe element, the first trigger time point, and the first termination time point. Likewise, the imaging data conforming to the second preset event 32 within the imaging data output by the sensor 2 can be obtained from the second start probe element, the second termination probe element, the second trigger time point, and the second termination time point. The imaging data conforming to the third preset event 33 within the imaging data output by the sensor 2 can be obtained from the third start probe element, the third termination probe element, the third trigger time point, and the third termination time point. Through the above arrangement, by marking the time points and probe elements at which the corresponding preset events are triggered and terminated, the flight platform 1 can image not only the first preset event 31, the second preset event 32, or the third preset event 33 individually, but also the first preset event 31, the second preset event 32, and the third preset event 33 simultaneously.
Preferably, the imaging data is classified based on the above start time points, termination time points, start probe elements, and termination probe elements. For example, when the first preset event 31 is triggered, the computing load 11 or the computing terminal 4 records the first start time and the first start probe element at which imaging begins in the sensor 2; when the first termination event is triggered, the computing load 11 or the computing terminal 4 records the first termination probe element and the first termination time, so that all probe elements in the same line of the sensor 2 located between the first start probe element and the first termination probe element image the first preset event 31, and the first imaging data 5 is acquired. The imaging time of the first imaging data 5 is calculated from the first start time and the first termination time, for real-time correction in the subsequent steps.
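The bookkeeping in this step can be sketched as follows, under an assumed data layout (one timestamped line per sample, probe elements indexed along the line); the function name and record format are illustrative, not from the patent.

```python
def extract_event_data(frames, trigger, terminate):
    """frames: list of (timestamp, line_of_samples).
    trigger/terminate: (time_point, probe_index) records for one preset
    event. Returns the sub-block imaged between the two records: lines
    whose timestamp lies in the event window, restricted to the probe
    elements between the start and termination probe elements."""
    t0, p0 = trigger
    t1, p1 = terminate
    return [line[p0:p1 + 1] for ts, line in frames if t0 <= ts <= t1]
```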
According to a preferred embodiment, the flight platform 1 performs push-broom imaging along the arrangement direction of the linear array of probe elements 21 of at least one row of sensors 2, as shown in fig. 4. The flight platform 1 controls the sensor 2 to perform push-broom imaging with the included angle α at 0°. With this arrangement, different probe elements 21 located in the same linear array can image the same scene unit in sequence, as shown in fig. 5. A, B, C, D, and E in fig. 5 are pixels imaged in chronological order by the same probe element 21 over the same scene. Preferably, chronological order means the order from earliest to latest. A refers to a first region in the same scene; B refers to a second region adjacent to the first region; C refers to a third region adjacent to the second region; D refers to a fourth region adjacent to the third region; E refers to a fifth region adjacent to the fourth region. As shown in fig. 5, each column has A, B, C, D, and E; the A, B, C, D, E of one column are the same column of pixels imaged by the same probe element 21, while the several A's (or B's or C's) in the same row are pixels imaged by different probe elements 21.
Preferably, the arrangement direction of the linear array of probe elements 21 of at least one row of sensors 2 defines an included angle α that is neither parallel nor perpendicular to the along-track direction of the flight platform 1. Preferably, at least one linear array of probe elements 21 of the sensor 2 whose included angle α is not 0° is rotatable. After rotation, the included angle α is neither 0° nor 90°, i.e., the arrangement direction of the linear array of probe elements 21 is neither parallel nor perpendicular to the along-track direction of the flight platform 1. By performing push-broom imaging in this configuration, the sensor 2 can image the scene over a large area before the linear-array imaging of the probe elements 21 whose relative included angle α is 0°, so the first preset event 31, the second preset event 32, and the third preset event 33 are triggered with maximum probability, and real-time information on the calibration scene can be acquired in advance, providing trigger information for the first preset event 31, the second preset event 32, and the third preset event 33 for the subsequent calibration imaging. Moreover, while the probe elements 21 at relative included angle α of 0° perform linear-array imaging, the wide-area imaging of the calibration scene captures its dynamic changes in real time, providing the subsequent correction module 9 with the mean and mean square error of the dynamic changes of the calibration scene's radiation characteristics, from which the correction gain and offset corresponding to each pixel are estimated.
Preferably, the pre-calibration processing module 8 performs pre-calibration processing on the imaging data in response to instructions from the computing load 11 and/or the computing terminal 4. Preferably, the computing load 11 and/or the computing terminal 4 send processing instructions to the pre-calibration processing module 8 upon the termination of the preset event 3. Preferably, the pre-calibration processing module 8 generates the initial image based on imaging data comprising at least the first imaging data 5, the second imaging data 6, and the third imaging data 7. Preferably, the initial image is as shown in fig. 5. The initial image is made up of pixels generated by the probe elements 21. For example, the first probe element of a line of probe elements 21 at included angle α of 0° generates pixel A in the first imaging sample along the flight direction of the flight platform 1. In the second imaging sample, the first probe element generates pixel B and the second probe element generates pixel A. In the third sample, the first probe element generates pixel C, the second probe element generates pixel B, and the third probe element generates pixel A.
Preferably, the pre-calibration processing module 8 performs denoising processing on the initial image. Preferably, the initial image is composed of the pixel columns of the probe elements 21, and A, B, C, D, E in the initial image are arranged diagonally. Since the pixels of each column are the imaging of different probe elements 21, the response functions of different probe elements 21 differ, and the dynamic radiation characteristics of the calibration scene vary, the pixels A of each column differ, so the streak noise of the initial image appears along diagonal lines. Preferably, the denoising processing removes low-frequency noise by passing the image data of the initial image through a low-pass filter. Preferably, the low-pass filter may be an exponential low-pass filter. With this arrangement, the initial image is Fourier-transformed to obtain its frequency-domain spectrum, the low-frequency noise components are filtered out via the low-pass filter to obtain the high-frequency components of the initial image, and the high-frequency components are amplified to enhance the detail of the diagonal straight lines formed in the initial image by pixels imaging the same scene unit.
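A minimal sketch of the frequency-domain processing just described: Fourier transform, suppression of the low-frequency band with an exponential-family low-pass mask, and amplification of the high-frequency remainder. The cutoff and boost values, and the order-2 (Gaussian-shaped) member of the exponential filter family, are illustrative assumptions.

```python
import numpy as np

def enhance_diagonal_detail(initial_image, cutoff=0.05, boost=2.0):
    """Fourier-transform the initial image, remove the low-frequency band
    (where the slowly varying background lives) using an exponential
    low-pass-derived mask, and amplify the remaining high-frequency
    detail, e.g. the diagonal lines formed by same-scene-unit pixels."""
    f = np.fft.fftshift(np.fft.fft2(initial_image.astype(float)))
    rows, cols = initial_image.shape
    y, x = np.ogrid[:rows, :cols]
    # normalised distance from the DC bin (at the centre after fftshift)
    d = np.hypot((y - rows / 2) / rows, (x - cols / 2) / cols)
    lowpass = np.exp(-((d / cutoff) ** 2))   # exponential low-pass, order 2
    highpass = 1.0 - lowpass
    return np.fft.ifft2(np.fft.ifftshift(f * highpass * boost)).real
```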
Preferably, the pre-calibration processing module 8 shifts the gray value of each pixel in the denoised and enhanced initial image. Preferably, since the pixels are arranged diagonally in the initial image, the straight lines composed of these pixels can be detected with the LSD (Line Segment Detector) method. Preferably, LSD detects straight lines as follows:
a. The scale is chosen as 1, i.e., no Gaussian downsampling is performed. Preferably, this is because downsampling would disrupt the nonlinear response relationships between the individual probe elements 21 in the initial image, so the calibration data would be unusable or the calibration would fail.
b. Calculate the gradient value and gradient direction of each pixel and perform pseudo-ordering. Preferably, the larger the gradient value, the more pronounced the edge point, and the more suitable it is as a seed point. However, since fully sorting the gradient values would cost too much time, the gradient values are simply divided into 1024 levels covering the gradient range from 0 to 255, and this ordering has linear time cost. Preferably, seed points are searched downward from the level with the highest gradient value among the 1024 levels; pixels with the same gradient level are placed in the same linked list, yielding 1024 linked lists, and a status table containing the 1024 linked lists is assembled in order of gradient value from small to large. All points in the status table are initially set to the not-used state.
c. Set points whose gradient value is smaller than p to the unusable state, take the point with the largest gradient value from the status table as the starting point, and search the surrounding directions within the angular tolerance. Preferably, region growing is performed by searching the surroundings within the angular tolerance, i.e., along directions similar to the gradient angle. Preferably, rectangle fitting is performed on the grown region to generate a rectangle R. Preferably, p may be the expectation of all gradient values, or may be set manually. Preferably, points with gradient values smaller than p tend to occur in smooth regions, or are merely low-frequency noise, and would seriously affect the calculation of line angles; therefore, in LSD, pixels with gradient magnitude smaller than p are excluded from participating in the construction of rectangle R. Preferably, the rectangle fitting of the grown region is essentially a shift of the gray values of the pixel data, not a sampling of the data points.
d. Judge whether the density of aligned points in the rectangle R satisfies a threshold F. Preferably, if it is not satisfied, the rectangle R is truncated into a plurality of rectangular boxes that satisfy the threshold F. Preferably, the threshold F may be set to one third of the number of probe elements 21 actually participating in imaging in the sensor 2, so that lines of shorter length can be eliminated.
Through this arrangement, the diagonal rectangle R formed by pixels of the same scene unit in the initial image can be detected, and the included angle α between the diagonal and the along-track direction of the flight platform can also be obtained.
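The pseudo-ordering of step b above can be sketched as follows: gradient magnitudes are quantised into 1024 levels over the 0-255 range and bucketed, so seed points can be visited from strongest to weakest in linear time. The function name and return form are illustrative, not from the patent or the LSD reference implementation.

```python
import numpy as np

def pseudo_order_seeds(grad, levels=1024, max_grad=255.0):
    """Quantise gradient magnitudes into `levels` bins over [0, max_grad]
    and bucket pixel coordinates per bin, highest-gradient bin first, so
    region growing can start from the strongest edges without a full sort."""
    bins = np.minimum((grad / max_grad * levels).astype(int), levels - 1)
    buckets = [[] for _ in range(levels)]
    for (r, c), b in np.ndenumerate(bins):
        buckets[b].append((r, c))
    return buckets[::-1]  # seed search proceeds from the highest bin down
```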
Preferably, the pre-calibration processing module 8 processes the initial image after line detection according to the included angle α and the following formula, so that the pixels in the same row of the initial image are the imaging of different probe elements 21 on the same scene unit, and the pixels in the same column are the imaging of the same probe element 21, as shown in fig. 6. The formula is as follows:
where DN is the gray data of the one-dimensional initial image stored row by row, DN[m + n·t] represents the gray value of the pixel in the m-th row and n-th column of the initial image, t denotes the number of probe elements 21 participating in imaging, K_1 = tan α, and K_2 = tan(90° − α).
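Under an assumed row-major layout for DN and an assumed per-column shift rule derived from the detected angle α, the rearrangement can be sketched as follows. This illustrates only the intent (aligning each scene unit into one row, one probe element per column); the patent's exact formula is given only as an image.

```python
import numpy as np

def deskew_initial_image(dn, t, alpha_deg):
    """Rearrange the one-dimensional gray data dn of the initial image
    (assumed row-major, t probe elements per row) so that each row holds
    different probe elements' images of the same scene unit and each
    column holds one probe element. The shift rule -- column n moves up
    by round(n * tan(90 deg - alpha)) rows -- is an assumption."""
    rows = len(dn) // t
    img = np.asarray(dn, dtype=float).reshape(rows, t)
    k = np.tan(np.radians(90.0 - alpha_deg))
    out = np.full_like(img, np.nan)
    for n in range(t):
        shift = int(round(n * k))
        if 0 <= shift < rows:
            out[: rows - shift, n] = img[shift:, n]
    return out
```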
Preferably, the correction module 9 corrects the pre-calibrated imaging data in response to instructions from the computing load 11 and/or the computing terminal 4. Preferably, the correction module 9 corrects the pre-calibrated imaging data in real time based on the degree and speed of dynamic characteristic change of the ground scene over the duration of the preset event. Preferably, the correction module 9 judges whether the degree of dynamic characteristic change of the ground scene over the duration of the preset event exceeds a first threshold. Preferably, the duration refers to the imaging time of the linear array of probe elements 21 for the preset event 3. Preferably, the first threshold refers to a degree or rate of change of 2% or 5% in the reflectivity of the scene used for relative radiometric calibration over the imaging period. Preferably, in the case where the degree of dynamic characteristic change does not exceed the first threshold, a linear correction model is used to correct the pre-calibrated imaging data in real time. Preferably, in the case where the rate of change does not exceed 2%, the statistical mean of the signal output by each probe element 21 can be regarded as approximately constant, and the statistical variances of the signals input to the probe elements 21 are all equal, so a linear correction model can be used for correction. The linear correction model is:
Y_i(n) = G_i(n)·X_i(n) + O_i(n)

where n is the number of iterations of the linear correction model, X_i(n) is the original output of the i-th probe element 21, Y_i(n) is the corrected output value, G_i(n) is the multiplier term of the correction model, i.e., the gain, and O_i(n) is the constant term of the correction model, i.e., the offset. Preferably, the mean square error of the ground scene can be used to estimate the multiplier term of the correction model for the pixel, and the mean and mean square error of the ground scene can be used to estimate the constant term of the correction model for the pixel. Preferably, let α_i(n) be the mean square error of the imaged scene and β_i(n) be the mean of the imaged scene. The formulas are as follows:

G_i(n) = S/α_i(n), O_i(n) = M − G_i(n)·β_i(n)

where S and M are the reference mean square error and mean of the calibration scene.
Through the two iteration formulas, the imaging data after pre-calibration processing can be corrected in real time. Since a linear model is used for the correction, the algorithm is simple and its computational complexity is low, and the calibration coefficients can be adaptively adjusted according to the dynamic changes of the calibration scene.
Preferably, in the case where the degree of dynamic characteristic change exceeds the first threshold, the duration of the preset event is segmented by a first unit time. Preferably, the imaging time period in which the degree of dynamic characteristic change of the calibration scene exceeds the first threshold is subdivided using a differential approach, and at least one first time set within the period in which the degree of dynamic characteristic change does not exceed the first threshold is searched for. Preferably, the first time set includes a plurality of mutually adjacent first unit times, i.e., the plurality of first unit times are continuous without interruption. Preferably, it is judged whether the ratio of the number of first unit times in the first time set to the degree of dynamic characteristic change within the first time set satisfies a second threshold. Preferably, the second threshold is the ratio of the imaging time of 1024 pixels to a 5% degree of change in the dynamic characteristics of the scene within that imaging time. Preferably, if the second threshold is not satisfied, the statistical mean of the signal output by each probe element 21 is not considered constant and real-time correction cannot be performed. Preferably, in the case where the ratio of the number of first unit times in the first time set to the degree of dynamic characteristic change within the first time set satisfies the second threshold, the statistical mean of the signal output by each probe element 21 is considered approximately constant, and the linear correction model can be used to correct the pre-calibrated imaging data in real time.
With this arrangement, using the idea of differentiation, a time period within the imaging period can be found, as far as possible, in which the statistical mean and variance of the output signals of the probe elements 21 are approximately constant, and a linear correction model is constructed from the mean and variance of the calibration scene within that period. In fact, in actual relative radiometric calibration, the radiation characteristics of most calibration scenes vary slowly, or vary slowly over a certain period of time; by exploiting this characteristic of the calibration scene, the residual errors of calibration can be corrected in real time during the imaging period of all the probe elements 21, improving the accuracy of the calibration data.
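The segmentation-and-search step described above can be sketched as follows (a minimal illustration, not the patent's exact procedure; function names and the numeric thresholds in the usage below are hypothetical):

```python
def find_stable_windows(variation, threshold, min_length):
    """Scan per-unit-time dynamic-variation values and return the maximal runs
    of consecutive first unit times whose variation stays at or below the
    first threshold, keeping only runs of at least min_length unit times."""
    windows, start = [], None
    for i, v in enumerate(variation):
        if v <= threshold:
            if start is None:
                start = i
        else:
            if start is not None and i - start >= min_length:
                windows.append((start, i))
            start = None
    if start is not None and len(variation) - start >= min_length:
        windows.append((start, len(variation)))
    return windows

def ratio_satisfied(window, variation, second_threshold):
    """Check the second criterion: the number of unit times in the window,
    divided by the total dynamic variation within it, must reach the
    second threshold for the linear correction model to be applicable."""
    start, end = window
    total = sum(variation[start:end])
    return (end - start) / max(total, 1e-12) >= second_threshold
```

For example, with `variation = [0.1, 0.2, 0.9, 0.1, 0.1, 0.1]`, a first threshold of 0.5 and a minimum run length of 2, the search returns the windows `(0, 2)` and `(3, 6)`; each can then be tested against the second threshold before the linear correction is applied.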
Preferably, after correction by the correction module 9, the calculation load 11 and/or the calculation terminal 4 obtains, from the real-time-corrected imaging data and using a histogram-specification calculation, the first calibration data parameter 5a of the first imaging data 5, the second calibration data parameter 6a of the second imaging data 6, and the third calibration data parameter 7a of the third imaging data 7, respectively. Preferably, the calibration parameters are calculated by probe-element-based histogram specification, with the following processing flow:
1. A cumulative probability distribution function is established for each probe element from the initial image processed as described above, according to the following formula, and the cumulative probability distribution function of a desired probe element is selected as the ideal reference cumulative probability distribution function. The formula is as follows:

F_j(k) = ( Σ_{i=0}^{k} pn(i) ) / dpn(j)

where k is the imaging gray level of the probe element, pn(k) is the number of pixels of the probe element whose gray level is k, and dpn(j) is the total number of pixels imaged by the j-th probe element.
2. Taking the ideal reference cumulative probability distribution function as the standard, histogram specification is applied to the cumulative probability distribution function of each probe element according to the following formula, yielding the relative radiometric calibration parameters of each probe element. The formula is as follows:
f(k-x) ≤ f(k) ≤ f(k+y)
where x and y take values in the range [0, 2^bits − 1], and bits is the quantization bit depth with which the sensor 2 acquires the sensor image. With this algorithm, the radiation response differences between the probe elements 21 can be well reflected, and after the corresponding relative radiometric calibration parameters are applied, the overall distribution of column values changes in a way that accords with the law of actual scene variation and the radiance differences between CCD taps.
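As one illustration of probe-element-based histogram specification, the following minimal sketch (function names hypothetical; a standard nearest-CDF-value mapping is assumed, since the patent's formula images are not reproduced here) builds the per-element cumulative distribution and derives a gray-level lookup table against the reference element:

```python
import numpy as np

def cumulative_distribution(column, bits=8):
    """Cumulative probability distribution of one probe element's gray levels,
    computed from the column of pixels that this element produced."""
    hist = np.bincount(column.ravel(), minlength=2 ** bits).astype(float)
    cdf = np.cumsum(hist)
    return cdf / cdf[-1]

def histogram_specification(cdf_src, cdf_ref, bits=8):
    """For each gray level k of the source probe element, find the reference
    gray level whose cumulative probability is closest; the resulting lookup
    table plays the role of the relative radiometric calibration parameters."""
    levels = 2 ** bits
    lut = np.empty(levels, dtype=np.int64)
    for k in range(levels):
        lut[k] = int(np.argmin(np.abs(cdf_ref - cdf_src[k])))
    return lut
```

When the source and reference elements already share the same gray-level statistics, the lookup table reduces to the identity mapping, which is a convenient sanity check.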
It should be noted that the above-described embodiments are exemplary, and that a person skilled in the art, in light of the present disclosure, may devise various solutions that all fall within the scope of the present disclosure. It should be understood by those skilled in the art that the present description and drawings are illustrative and do not limit the claims. The scope of the invention is defined by the claims and their equivalents.

Claims (10)

1. A scene-based radiometric calibration method, the method comprising:
under the condition that the flying platform (1) passes over a uniform scene, performing push-broom imaging in such a manner that the arrangement direction of the probe elements (21) of the sensor (2) forms an included angle α with the along-track direction of the flying platform (1), so as to acquire imaging data of the ground scene; in this process, based on the triggering of a preset event (3), recording the trigger time point, starting probe element, termination time point and terminating probe element of the preset event (3), so as to obtain the imaging data output by the sensor (2) that conforms to the preset event (3), and classifying the acquired imaging data,
wherein,
the arrangement direction of at least one row of the linear array of the probe elements (21) is neither parallel nor perpendicular to the along-track direction of the flying platform (1), thereby defining the included angle α; performing a pre-calibration process based on the imaging data, wherein an initial image in units of pixels generated by the probe elements (21) is constructed based on imaging data comprising at least the classified imaging data; denoising is performed on the initial image, and the gray values of the pixels after denoising and enhancement are shifted so that pixels in the same row of the initial image are images of the same scene unit taken by different probe elements (21), and pixels in the same column are images taken by the same probe element (21).
2. The radiometric calibration method according to claim 1, wherein
the preset events (3) at least comprise a first preset event (31) conforming to a high-reflectance scene, a second preset event (32) conforming to a medium-reflectance scene, and a third preset event (33) conforming to a low-reflectance scene, which are constructed based on information sent by other flying platforms (1) and on prior knowledge.
3. The radiometric calibration method according to claim 2, wherein the first preset event (31) comprises at least a first termination event (311) for terminating imaging of the first preset event (31), the second preset event (32) comprises at least a second termination event (321) for terminating imaging of the second preset event (32), and the third preset event (33) comprises at least a third termination event (331) for terminating imaging of the third preset event (33).
4. The radiometric calibration method according to claim 3, wherein, in the case that at least one of the first preset event (31), the second preset event (32) and the third preset event (33) is triggered, the following steps are performed:
recording a first trigger time point and a first initial probe element triggered by the first preset event (31), a second trigger time point and a second initial probe element triggered by the second preset event (32), and a third trigger time point and a third initial probe element triggered by the third preset event (33);
recording a first termination time point and a first terminating probe element triggered by the first termination event (311), a second termination time point and a second terminating probe element triggered by the second termination event (321), and a third termination time point and a third terminating probe element triggered by the third termination event (331);
classifying the imaging data to form first imaging data (5) corresponding to a high-reflectance scene, second imaging data (6) of a medium-reflectance scene, and third imaging data (7) of a low-reflectance scene within the dynamic range.
5. The radiometric calibration method according to claim 4, wherein the following steps are performed before the flying platform (1) triggers recording of the preset event (3):
the flying platform (1) performs push-broom imaging along the arrangement direction of at least one row of the linear array of probe elements (21) of the sensor (2), so that different probe elements (21) sequentially image the same scene unit.
6. The radiometric calibration method according to claim 5, wherein the imaging data after the pre-calibration processing is corrected in real time based on the degree of dynamic characteristic variation of the ground scene over the duration of the preset event (3).
7. The radiometric calibration method according to claim 6, wherein the pre-calibration processing based on the obtained imaging data comprises the following steps:
constructing an initial image in units of pixels generated by the probe elements (21) based on imaging data comprising at least the first imaging data (5), the second imaging data (6), and the third imaging data (7);
high-frequency amplification is performed in units of the pixels to enhance details of a straight line formed by the pixels imaging the same scene unit in the initial image.
8. The radiometric calibration method according to claim 7, wherein the real-time correction of the imaging data after the pre-calibration processing comprises the following steps:
judging whether the dynamic characteristic change degree of the ground scene in the duration time of the preset event (3) exceeds a first threshold value or not;
correcting the pre-calibrated imaging data in real time using a linear correction model in the case that the degree of dynamic characteristic variation does not exceed the first threshold.
9. The radiometric calibration method according to claim 8, wherein, in the case that the degree of dynamic characteristic variation exceeds the first threshold, the following steps are performed:
segmenting the duration of the preset event (3) into first unit times;
searching for at least one first time set which consists of consecutive first unit times and whose degree of dynamic characteristic variation does not exceed the first threshold;
determining whether the ratio of the number of first unit times in the first time set to the degree of dynamic characteristic variation within the first time set satisfies a second threshold, wherein,
in the case that the ratio of the number of first unit times in the first time set to the degree of dynamic characteristic variation within the first time set satisfies the second threshold, the imaging data after the pre-calibration processing is corrected in real time using a linear correction model.
10. A scene-based radiometric calibration system applying the radiometric calibration method according to any one of claims 1 to 9, the system comprising at least a flying platform (1) and a sensor (2), characterized in that it further comprises a pre-calibration processing module (8), wherein,
under the condition that the flying platform (1) passes over a uniform scene, a ground base station or the calculation load (11) of the flying platform (1) controls the probe elements (21) of the sensor (2) to perform push-broom imaging in such a manner that the arrangement direction of their linear array forms an included angle α with the along-track direction of the flying platform (1), so as to acquire imaging data of the ground scene, wherein,
the arrangement direction of at least one row of the linear arrays of the probe elements (21) defines the included angle alpha in a rail-wise manner which is neither parallel nor perpendicular to the flying platform (1);
the pre-calibration processing module (8) performs a pre-calibration process based on the imaging data.
CN202010515427.0A 2019-12-11 2019-12-11 Scene-based radiation calibration method and system Active CN111815525B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010515427.0A CN111815525B (en) 2019-12-11 2019-12-11 Scene-based radiation calibration method and system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911262809.0A CN110689505B (en) 2019-12-11 2019-12-11 Scene-based satellite-borne remote sensing instrument self-adaptive correction method and system
CN202010515427.0A CN111815525B (en) 2019-12-11 2019-12-11 Scene-based radiation calibration method and system

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201911262809.0A Division CN110689505B (en) 2019-12-11 2019-12-11 Scene-based satellite-borne remote sensing instrument self-adaptive correction method and system

Publications (2)

Publication Number Publication Date
CN111815525A CN111815525A (en) 2020-10-23
CN111815525B true CN111815525B (en) 2024-04-09

Family

ID=69117777

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202010515427.0A Active CN111815525B (en) 2019-12-11 2019-12-11 Scene-based radiation calibration method and system
CN201911262809.0A Active CN110689505B (en) 2019-12-11 2019-12-11 Scene-based satellite-borne remote sensing instrument self-adaptive correction method and system

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201911262809.0A Active CN110689505B (en) 2019-12-11 2019-12-11 Scene-based satellite-borne remote sensing instrument self-adaptive correction method and system

Country Status (1)

Country Link
CN (2) CN111815525B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112257343B (en) * 2020-10-22 2023-03-17 上海卫星工程研究所 High-precision ground track repetitive track optimization method and system
CN112954239B (en) * 2021-01-29 2022-07-19 中国科学院长春光学精密机械与物理研究所 On-board CMOS image dust pollution removal and recovery system and recovery method

Citations (2)

Publication number Priority date Publication date Assignee Title
CN107093196A (en) * 2017-04-10 2017-08-25 武汉大学 The in-orbit relative radiometric calibration method of video satellite area array cameras
CN110120077A (en) * 2019-05-06 2019-08-13 航天东方红卫星有限公司 A kind of in-orbit relative radiometric calibration method of area array cameras based on attitude of satellite adjustment

Family Cites Families (13)

Publication number Priority date Publication date Assignee Title
DE19703629A1 (en) * 1997-01-31 1998-08-06 Daimler Benz Aerospace Ag Method for autonomously determining the position of a satellite
TW201042220A (en) * 2009-01-22 2010-12-01 Inspired Surgical Technologies Inc Actuated feedforward controlled solar tracking system
US9194953B2 (en) * 2010-10-21 2015-11-24 Sony Corporation 3D time-of-light camera and method
CN102469580A (en) * 2010-11-18 2012-05-23 上海启电信息科技有限公司 mobile positioning service system based on wireless sensing technology
CN104267739A (en) * 2014-10-17 2015-01-07 成都国卫通信技术有限公司 Satellite signal tracking device and method
CN105222788B (en) * 2015-09-30 2018-07-06 清华大学 The automatic correcting method of the matched aircraft Route Offset error of feature based
CN105300407B (en) * 2015-10-09 2018-10-23 中国船舶重工集团公司第七一七研究所 A kind of marine dynamic starting method for single axis modulation laser gyro inertial navigation system
CN105551053A (en) * 2015-12-01 2016-05-04 中国科学院上海技术物理研究所 Fast geometric precise correction method of mini-planar array satellite-borne TDI CCD camera
CN106600589B (en) * 2016-12-09 2019-08-30 中国科学院合肥物质科学研究院 A kind of spaceborne spectrometer direction method for registering based on coastline regional remote sensing figure
CN107705267B (en) * 2017-10-18 2020-06-26 中国科学院电子学研究所 Optical satellite image geometric correction method based on control vector
CN108776955B (en) * 2018-04-16 2020-08-18 国家卫星气象中心 Real-time correction method and correction device for remote sensing image
CN109188468B (en) * 2018-09-13 2021-11-23 上海垣信卫星科技有限公司 Ground monitoring system for monitoring satellite running state
CN110411444B (en) * 2019-08-22 2024-01-09 深圳赛奥航空科技有限公司 Inertial navigation positioning system and positioning method for underground mining mobile equipment

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN107093196A (en) * 2017-04-10 2017-08-25 武汉大学 The in-orbit relative radiometric calibration method of video satellite area array cameras
CN110120077A (en) * 2019-05-06 2019-08-13 航天东方红卫星有限公司 A kind of in-orbit relative radiometric calibration method of area array cameras based on attitude of satellite adjustment

Non-Patent Citations (2)

Title
Research on a relative correction algorithm for remote sensing images based on uniform sites; 赵燕, 易维宁, 杜丽丽, 黄红莲; Journal of Atmospheric and Environmental Optics (02); full text *
Field-free relative radiometric calibration of Yaogan-25; 张过, 李立涛; Acta Geodaetica et Cartographica Sinica (08); full text *

Also Published As

Publication number Publication date
CN110689505A (en) 2020-01-14
CN111815525A (en) 2020-10-23
CN110689505B (en) 2020-07-17
CN111815524A (en) 2020-10-23

Similar Documents

Publication Publication Date Title
Sterckx et al. The PROBA-V mission: Image processing and calibration
Suzuki et al. Initial inflight calibration for Hayabusa2 optical navigation camera (ONC) for science observations of asteroid Ryugu
CN110120077B (en) Area array camera in-orbit relative radiation calibration method based on satellite attitude adjustment
Reid et al. Imager for Mars Pathfinder (IMP) image calibration
CN107093196B (en) Video satellite area-array camera on-orbit relative radiation calibration method
Humm et al. Flight calibration of the LROC narrow angle camera
Liu et al. A new method for cross-calibration of two satellite sensors
CN109120848B (en) Method for adjusting integration series of space camera
CN111815525B (en) Scene-based radiation calibration method and system
Dev et al. Estimation of solar irradiance using ground-based whole sky imagers
Uprety et al. Calibration improvements in S-NPP VIIRS DNB sensor data record using version 2 reprocessing
CN110782429B (en) Imaging quality evaluation method based on satellite-borne remote sensing camera
Bruegge et al. The MISR radiometric calibration process
Andre et al. Instrumental concept and performances of the POLDER instrument
Oberst et al. The imaging performance of the SRC on Mars Express
Kameche et al. In-flight MTF stability assessment of ALSAT-2A satellite
CN111815524B (en) Correction system and method for radiation calibration
Cede et al. Raw EPIC data calibration
Shimada et al. Calibration of advanced visible and near infrared radiometer
Abolghasemi et al. Design and performance evaluation of the imaging payload for a remote sensing satellite
Ravindra et al. Instrument data metrics evaluator for tradespace analysis of earth observing constellations
Doelling et al. MTSAT-1R visible imager point spread correction function, Part I: The need for, validation of, and calibration with
Stone et al. Potential for calibration of geostationary meteorological imagers using the Moon
Payne et al. The AEROS mission: multi-spectral ocean science measurement network via small satellite connectivity
US20180042190A1 (en) Water irrigation restriction violation system and associated methods

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant