CN111815525A - Radiation calibration method and system based on scene - Google Patents


Info

Publication number
CN111815525A
Authority
CN
China
Prior art keywords
scene
imaging
imaging data
probe
preset event
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010515427.0A
Other languages
Chinese (zh)
Other versions
CN111815525B (en)
Inventor
谢成荫
杨峰
任维佳
杜志贵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changsha Tianyi Space Technology Research Institute Co Ltd
Original Assignee
Changsha Tianyi Space Technology Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changsha Tianyi Space Technology Research Institute Co Ltd filed Critical Changsha Tianyi Space Technology Research Institute Co Ltd
Priority to CN202010515427.0A priority Critical patent/CN111815525B/en
Publication of CN111815525A publication Critical patent/CN111815525A/en
Application granted granted Critical
Publication of CN111815525B publication Critical patent/CN111815525B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/80: Geometric correction
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C25/00: Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10032: Satellite or aerial image; Remote sensing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Manufacturing & Machinery (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Input (AREA)

Abstract

The invention relates to a scene-based radiometric calibration method and system, the method comprising the following steps: when a flying platform passes over a uniform scene, performing push-broom imaging in such a manner that the arrangement direction of the probe-element linear array of the sensor forms an included angle α with the along-track direction of the flying platform, so as to acquire imaging data of the ground scene, wherein the arrangement direction of at least one row of the probe-element linear array defines the included angle α by being neither parallel nor perpendicular to the along-track direction of the flying platform; and performing pre-calibration processing based on the imaging data.

Description

Radiation calibration method and system based on scene
The present application is a divisional application of the invention application entitled "Scene-based adaptive correction method and system for a satellite-borne remote sensing instrument", with application No. 201911262809.0 and a filing date of December 11, 2019.
Technical Field
The invention belongs to the technical field of remote sensing and relates to radiometric calibration methods, adaptive radiometric calibration methods, radiometric calibration correction methods and scene-based radiometric calibration correction methods, and particularly relates to a scene-based adaptive calibration method for a satellite-borne remote sensing instrument.
Background
Because of response and bias non-uniformity among probe elements, inconsistent inherent noise and dark current of each probe element, and response inconsistency caused by differences in the sensor's peripheral circuitry, each probe element of a linear-array push-broom optical sensor has its own response characteristics. The imaging of the individual probe elements therefore differs to some degree, which appears as various random noise patterns on the image. Relative radiometric calibration uses a high-precision radiometric reference to calibrate the errors of the imaging system and to determine the response relationship between the probe elements; the accuracy of the radiometric reference therefore directly determines the precision of the relative radiometric calibration.
At present, the main methods for relative radiometric calibration of remote sensing satellites include: laboratory calibration with an integrating sphere before launch; on-orbit calibration with on-board calibration lamps or diffuse reflection plates; on-orbit field calibration based on uniform ground sites; and on-orbit statistical calibration over the full-life-cycle image archive of the satellite.
For example, Chinese patent publication No. CN106871925A discloses an on-orbit comprehensive dynamic adjustment method for relative radiometric calibration of a remote sensing satellite, which includes: (1) obtaining images corresponding to each radiance level of an integrating sphere in the laboratory, and deriving a coefficient table from them; (2) during the in-orbit test, taking satellite safety as the basic constraint and, whenever the satellite is in a preset state, performing right-angle yaw imaging as many times as possible to obtain right-angle-yaw strip data and derive a right-angle-yaw radiometric calibration lookup table; (3) obtaining a lookup table for each time interval during the in-orbit test or during operational imaging; (4) when performing relative radiometric calibration on an image acquired in a given state, judging whether a right-angle-yaw radiometric calibration lookup table for that state exists; if so, calling it for the relative radiometric calibration; if not, judging whether the imaging time is earlier than a preset time, and if so, using the coefficient table for the radiometric calibration; otherwise, calling the lookup table closest in time for the relative radiometric calibration.
For example, Chinese patent publication No. CN109671038A discloses a relative radiation correction method based on hierarchical classification of pseudo-invariant feature points (PIFs). Existing radiation correction methods achieve low relative radiation correction precision on remote sensing images that contain dominant ground-object areas such as coastlines and islands. The method comprises the following steps: first, acquiring ground-feature sub-images based on remote sensing image classification; second, determining an initial relative radiation correction model and initial PIFs for each sub-image based on spectral nonlinear regression analysis; third, determining a refined nonlinear relative radiation correction model and refined PIFs based on gradient-refined nonlinear regression analysis; fourth, performing relative radiation correction on the sub-image to be corrected using the refined PIFs and the refined nonlinear model; and fifth, synthesizing the corrected sub-images into a complete image.
In the above calibration methods, the integrating sphere, calibration lamp or diffuse reflection plate serves as a high-precision radiation reference, while the massive sample sets of uniform ground targets (such as desert, sea and snow) used for on-orbit statistical calibration serve as a radiation reference whose validity rests on assumptions from probability and statistics. However, owing to vibration during launch and to changes in the space environment, the response state of each probe element of the satellite sensor changes, or the response of the sensor decays as time on orbit accumulates, so laboratory calibration cannot guarantee high-precision radiometric calibration over the satellite's whole life cycle. Although on-board radiometric calibration can achieve higher accuracy and frequency, not all satellites carry on-board calibration equipment, and such equipment itself suffers state decay, which reduces calibration accuracy. On-orbit statistical calibration requires massive sample imagery or uniform-field data and therefore cannot satisfy calibration needs in the early on-orbit phase or high-frequency calibration requirements.
With the increasing agility of remote sensing satellites, researchers have proposed performing relative radiometric calibration by imaging with the satellite or camera yawed by 90°; exploiting satellite agility, this has been applied to satellites such as QuickBird, RapidEye and Landsat 8.
For example, document [1], Longlian, Wang Zhongmin, "An in-orbit radiometric calibration method based on satellite agility" [J], Spacecraft Recovery & Remote Sensing, 2013, 34(4): 77-85, discloses a method of relative radiometric calibration by imaging with the satellite or camera yawed by 90°, i.e. the side-slither calibration method. The focal plane of the satellite sensor is arranged as multiple CCDs in a staggered, parallel layout, and to ensure that all probe elements of each CCD receive the same entrance-pupil radiance during yaw radiometric calibration, the yaw relative radiometric calibration images ground objects within a uniform ground calibration site. The method greatly relaxes the requirement on the uniformity of the ground scene and helps improve the on-orbit radiometric calibration effect. It works well for sensors whose probe elements respond nearly linearly; however, because the response function of a given probe element differs between brightness-response intervals, and the response functions of different probe elements also differ, the scheme cannot achieve radiometric calibration over the full dynamic range of every probe element. For example, in order for the calibration target (scene) to cover the response range of the focal-plane probe elements as completely as possible, various types of ground objects are generally selected as calibration scenes during 90° yaw calibration, such as dry lake beds, tropical rainforests, polar ice sheets and deserts. This is in fact consistent with conventional site-based scene selection principles.
High-radiance scenes such as deserts and polar regions are especially suitable for calibrating the part of the detector dynamic range near the top, whereas uniform low-radiance scenes are comparatively hard to obtain: the radiance of low-radiance vegetation changes with the seasons, and choosing sea water as a low-radiance calibration scene additionally requires suitable wind, solar elevation and other conditions. To obtain a uniform low-radiance calibration scene, document [1] performs 90° yaw calibration using the moon as the calibration scene for the part of the detector dynamic range near the bottom. Choosing the moon as the calibration scene has three main advantages: 1. the whole lunar surface can serve as a low-radiance calibration scene; 2. the influence of atmospheric variation during calibration is essentially negligible; 3. the calibration is carried out in the shadowed part of the orbit or while passing over the earth's poles, so the calibration period does not conflict with normal data-acquisition tasks. However, the method requires several staggered, parallel CCD sensors to be arranged on both sides of the satellite's heading, which increases the satellite's weight; furthermore, a series of control, feedback and correction devices is needed to drive and sense the added sensors, which raises the cost of the satellite and sacrifices capacity for other functional payloads.
The sensor focal plane of Yaogan-25 is arranged by optical butting, and the response functions of its probe elements differ between brightness-response intervals. Document [2], Liristao, "Yaogan-25 field-free relative radiometric calibration" [J], 2017(08): 75-82, discloses a field-free 90° yaw radiometric calibration method that does not depend on ground uniformity: the satellite platform or camera is rotated by 90° while the drift angle caused by the earth's rotation is corrected, so that the linear-array CCD sensor becomes parallel to the push-broom direction of the satellite track; the satellite then push-scans along the track to obtain radiometric calibration data for relative radiometric calibration. The method further comprises: 1. strip-noise suppression and contrast enhancement; 2. delimiting the yaw calibration data; 3. solving the calibration parameters. In the method of document [2], the linear-array CCD probe elements pass over the same ground object in turn during yaw radiometric calibration imaging; disregarding atmospheric change over the imaging time of the whole linear array (3.2 s for Yaogan-25), the ground-object radiances obtained by the probe elements are exactly equal. Yaw radiometric calibration therefore provides a high-precision radiation reference for calibrating the radiometric response relationship among the probe elements and ensures that every probe element images the same ground object. Field-free relative radiometric calibration does not depend on ground uniformity, i.e. places no uniformity requirement on the ground objects, and supports high-frequency on-orbit calibration of the sensor; moreover, since it does not rely on a uniform ground site, the radiance range of the ground objects can cover the whole dynamic range of the sensor response, providing data support for full-dynamic-range radiometric calibration. However, the method of document [2] has the following problems. First, the full-dynamic-range relative radiometric calibration parameters of Yaogan-25 are solved by histogram specification over the sensor probe elements, and document [2] itself notes a residual error when data from low-brightness areas are corrected using statistical calibration results from medium- and high-brightness uniform areas. As stated in document [2], the Yaogan-25 probe elements are not perfectly linear over the whole response dynamic range; that is, the responses of the same probe element in high-, medium- and low-reflection uniform regions differ and are nonlinearly related, so a residual error is inevitable when low-brightness data are corrected with statistics from medium- and high-brightness uniform regions. Finally, the calibration residual can be reduced by selecting a uniform calibration scene for the 90° yaw calibration, so a calibration scene whose detected dynamic range is relatively small at a given moment facilitates 90° yaw radiometric calibration; however, the radiometric characteristics of the calibration field change dynamically. In the ocean, for example, the temperature and radiometric characteristics of the sea-surface target differ across sea areas, seasons and latitudes, so the calibration coefficients must be updated continually according to actual detection conditions.
Furthermore, on the one hand because of differences in understanding among those skilled in the art, and on the other hand because the inventors studied a large number of documents and patents when making the present invention but space does not permit listing all their details and contents, the present invention should by no means be regarded as lacking these prior-art features; on the contrary, the present invention may incorporate all the features of the prior art, and the applicant reserves the right to add related prior art to the background.
Disclosure of Invention
Aiming at the problem that existing relative radiometric calibration methods cannot cover the full response range of the focal-plane probe elements in a single calibration, the invention exploits the sensor's ability to rotate about the flight axis of the flight platform: while 90° yaw imaging is performed, other probe-element linear arrays form several different oblique angles with the track of the flight platform so as to detect the high-, medium- and low-reflection areas of the scene as quickly and completely as possible. This provides trigger data for the ground base station and the computing terminal of the flight platform and allows the imaging data to be classified, so that the response range of the probe elements is covered comprehensively while mixing of data from different response ranges, which would reduce the accuracy of the calibration data, is avoided. In addition, aiming at the calibration residual caused by dynamic changes in the scene's radiometric characteristics, the invention adopts a time-period differentiation method: according to the relation between change amplitude and time, it selects time periods in which the statistical mean and variance of the probe-element output signal are approximately unchanged, and constructs a linear correction model with which the pre-calibrated imaging data are corrected in real time.
A scene-based adaptive correction method for a satellite-borne remote sensing instrument comprises at least the following: a sensor carried by a flight platform and rotatable about the flight axis of the platform images in linear-array push-broom mode. The computing load of the flight platform and/or the computing terminal of the ground base station performs the following steps: triggered by a preset event, performing push-broom imaging with the along-track direction of the flight platform at an included angle to the arrangement direction of the sensor's probe-element linear array, so as to acquire imaging data of the ground scene, and performing pre-calibration processing on the acquired imaging data; and correcting the pre-calibrated imaging data in real time based on the degree and speed of the dynamic-characteristic change of the ground scene over the duration of the preset event.
According to a preferred embodiment, the preset events at least include a first preset event corresponding to a high-reflection scene, a second preset event corresponding to a medium-reflection scene and a third preset event corresponding to a low-reflection scene, constructed from information sent by other flight platforms and from prior knowledge. The first preset event includes at least a first termination event for terminating imaging of the first preset event; the second preset event includes at least a second termination event for terminating imaging of the second preset event; and the third preset event includes at least a third termination event for terminating imaging of the third preset event.
According to a preferred embodiment, when at least one of the first, second and third preset events is triggered, the computing load and/or the computing terminal performs the following steps: recording the first trigger time point and first initial probe element triggered by the first preset event, the second trigger time point and second initial probe element triggered by the second preset event, and the third trigger time point and third initial probe element triggered by the third preset event; recording the first termination time point and first termination probe element triggered by the first termination event, the second termination time point and second termination probe element triggered by the second termination event, and the third termination time point and third termination probe element triggered by the third termination event; and classifying the imaging data into first imaging data responding to the high-reflection scene within the dynamic range, second imaging data responding to the medium-reflection scene, and third imaging data responding to the low-reflection scene.
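As an illustration of the bookkeeping described above, the following sketch slices a push-broom line sequence into the three reflectance classes using recorded trigger and termination line indices. All names, array shapes and event tuples are hypothetical, not taken from the patent:

```python
import numpy as np

def classify_imaging_data(lines, events):
    """Slice a push-broom line sequence into per-scene-class segments.

    lines  : (n_lines, n_detectors) array of raw DN values, one row per
             acquisition instant.
    events : list of (label, start_line, end_line) tuples recorded when a
             preset event and its matching termination event fired, e.g.
             ("high", 120, 480) for the first preset event.
    Returns a dict mapping "high"/"medium"/"low" to stacked line blocks.
    """
    segments = {"high": [], "medium": [], "low": []}
    for label, start, end in events:
        segments[label].append(lines[start:end])
    # Concatenate the blocks belonging to the same reflectance class so
    # that each class is calibrated on its own response interval.
    return {k: np.concatenate(v) if v else np.empty((0, lines.shape[1]))
            for k, v in segments.items()}

lines = np.random.default_rng(0).integers(0, 1024, size=(1000, 64))
events = [("high", 100, 300), ("low", 400, 500), ("high", 700, 900)]
out = classify_imaging_data(lines, events)
print(out["high"].shape)   # (400, 64): two "high" blocks concatenated
```

Keeping the three classes in separate arrays mirrors the patent's goal of calibrating each response interval on its own data.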
According to a preferred embodiment, before the flight platform triggers the recording of a preset event, the computing load of the flight platform and/or the computing terminal of the ground base station performs the following steps:
the flight platform performs push-broom imaging along the arrangement direction of at least one row of the sensor's probe-element linear array, so that different probe elements image the same scene unit in sequence;
and push-broom imaging is performed with the arrangement direction of at least one row of the probe-element linear array constrained to be neither parallel nor perpendicular to the along-track direction of the flight platform, so that the first preset event and/or the second preset event and/or the third preset event are triggered with maximum probability.
According to a preferred embodiment, the computing load and/or the computing terminal performs the pre-calibration processing on the obtained imaging data as follows:
constructing an initial image, in units of the pixels generated by the probe elements, from imaging data comprising at least the first imaging data, the second imaging data and the third imaging data;
denoising the initial image and amplifying high frequencies pixel by pixel, so as to enhance the detail of the straight lines formed in the initial image by the pixels imaging the same scene unit;
and shifting each pixel according to its denoised and enhanced gray value, so that pixels in the same row of the initial image are the images of the same scene unit by different probe elements, and pixels in the same column are the images by the same probe element.
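The row/column re-alignment in the last step can be sketched as follows, assuming an idealized side-slither geometry in which probe element i observes a given scene unit exactly i lines after probe element 0 (the function name and array shapes are illustrative, not from the patent):

```python
import numpy as np

def deskew(image):
    """Shift each detector column so that one image row holds the
    responses of *different* detectors to the *same* scene unit.

    image : (n_lines, n_detectors) array; detector i sees a given scene
            unit i lines later than detector 0 during 90-degree yaw
            push-broom, so column i is taken i rows lower.
    """
    n_lines, n_det = image.shape
    out = np.empty((n_lines - n_det + 1, n_det), dtype=image.dtype)
    for i in range(n_det):
        out[:, i] = image[i:n_lines - n_det + 1 + i, i]
    return out

# Each scene unit u is imaged by detector i at line u + i; after the
# shift every row should be constant across detectors.
n_lines, n_det = 12, 4
scene = np.arange(n_lines, dtype=float)  # radiance of successive scene units
img = np.empty((n_lines, n_det))
for i in range(n_det):
    img[:, i] = np.roll(scene, i)        # detector i lags by i lines
img_aligned = deskew(img)
print(np.allclose(img_aligned, img_aligned[:, :1]))  # True: rows constant
```

After this shift, column statistics characterize a single probe element and row statistics characterize a single scene unit, which is exactly what the calibration steps below need.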
According to a preferred embodiment, after the pre-calibration processing of the imaging data, the computing load and/or the computing terminal performs the following steps:
judging whether the degree of dynamic-characteristic change of the ground scene over the duration of the preset event exceeds a first threshold;
and, when the degree of change does not exceed the first threshold, correcting the pre-calibrated imaging data in real time with a linear correction model, in which the multiplier term of a pixel's correction model is estimated from the mean square error of the ground scene, and the constant term is estimated from the mean and mean square error of the ground scene.
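A common concrete realization of such a linear correction model is moment matching: the multiplier equalizes each probe element's spread against the scene statistics, and the constant term then equalizes the means. The sketch below assumes this realization; it is one standard statistical choice, not necessarily the patent's exact estimator, and all names are illustrative:

```python
import numpy as np

def linear_correction(data, ref_mean=None, ref_std=None):
    """Moment-matching estimate of the per-detector linear model
    y = g * x + b, assuming every detector viewed statistically the
    same scene during the (stable) time window.

    data : (n_lines, n_detectors) DN samples from the stable window.
    The multiplier g matches each detector's spread to the scene
    reference; the constant b then matches the means.
    """
    mu = data.mean(axis=0)
    sigma = data.std(axis=0)
    if ref_mean is None:
        ref_mean = mu.mean()            # scene mean as common reference
    if ref_std is None:
        ref_std = sigma.mean()          # scene spread as common reference
    g = ref_std / sigma                 # multiplier term
    b = ref_mean - g * mu               # constant term
    return g * data + b, g, b

rng = np.random.default_rng(1)
truth = rng.uniform(100, 200, size=(500, 1))          # shared scene radiance
gains, biases = rng.uniform(0.8, 1.2, 8), rng.uniform(-5, 5, 8)
raw = truth * gains + biases                          # detector nonuniformity
corr, g, b = linear_correction(raw)
print(np.ptp(corr.std(axis=0)) < 1e-9)                # True: spreads equalized
```

Because the estimator uses only first- and second-order statistics, it can be updated incrementally as new lines arrive, matching the low computational overhead claimed later in the text.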
According to a preferred embodiment, in the case where the degree of variation of the dynamic characteristics exceeds a first threshold value, the calculation load of the flight platform and/or the calculation terminal of the ground base station performs the following steps:
segmenting the duration of the preset event into first unit times;
searching for at least one first time set consisting of consecutive first unit times within which the degree of dynamic-characteristic change does not exceed the first threshold;
and judging whether the ratio of the number of first unit times in the first time set to the degree of dynamic-characteristic change within the set satisfies a second threshold; when it does, correcting the pre-calibrated imaging data in real time with the linear correction model.
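The window search described above can be sketched as follows. Here `variation` stands for the per-unit-time degree of dynamic-characteristic change (for example, the change of the probe-element output mean and variance over each first unit time), and the second-threshold ratio test can then be applied to the returned windows; all names are illustrative:

```python
import numpy as np

def find_stable_windows(variation, threshold, min_len=1):
    """Split the event duration into unit-time samples and return runs of
    consecutive units whose dynamic-characteristic variation stays at or
    below `threshold` (the "first time sets" of the text).

    variation : per-unit-time change measure.
    Returns a list of (start, stop) index pairs, stop exclusive.
    """
    stable = variation <= threshold
    windows, start = [], None
    for i, ok in enumerate(stable):
        if ok and start is None:
            start = i                       # a stable run begins
        elif not ok and start is not None:
            if i - start >= min_len:
                windows.append((start, i))  # close a long-enough run
            start = None
    if start is not None and len(stable) - start >= min_len:
        windows.append((start, len(stable)))
    return windows

v = np.array([0.1, 0.2, 0.9, 0.1, 0.1, 0.1, 0.8])
print(find_stable_windows(v, threshold=0.5, min_len=2))  # [(0, 2), (3, 6)]
```

The ratio test then amounts to accepting a window when its length divided by its accumulated variation exceeds the second threshold, i.e. when it is both long and quiet enough.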
According to a preferred embodiment, after correcting the pre-calibrated imaging data in real time, the computing load and/or the computing terminal obtains, by histogram-specification computation on the corrected imaging data, a first calibration data parameter of the first imaging data, a second calibration data parameter of the second imaging data and a third calibration data parameter of the third imaging data.
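Histogram specification maps each probe element's DN distribution onto a common reference distribution; the resulting per-DN lookup table plays the role of the calibration data parameters. The following is a minimal sketch under assumed names, bin settings and a synthetic reference, not the patent's exact procedure:

```python
import numpy as np

def histogram_match(column, ref_cdf, bins):
    """Histogram specification for one probe element: build a per-bin
    lookup table mapping the element's quantiles onto a reference
    cumulative distribution (e.g. the pooled CDF of all elements of the
    same reflectance class)."""
    hist, _ = np.histogram(column, bins=bins)
    cdf = np.cumsum(hist) / hist.sum()
    # For each source bin quantile, find the reference DN at the same quantile.
    return np.interp(cdf, ref_cdf, bins[1:])

rng = np.random.default_rng(2)
bins = np.linspace(0.0, 1023.0, 257)               # 256 DN bins, 10-bit range
ref = rng.normal(500, 60, 20000)                   # reference distribution
det = ref * 1.1 + 30                               # element with gain/offset error
ref_hist, _ = np.histogram(ref, bins=bins)
ref_cdf = np.cumsum(ref_hist) / ref_hist.sum()
lut = histogram_match(det, ref_cdf, bins)
corrected = lut[np.digitize(det, bins) - 1]        # apply the lookup table
```

Applying the lookup table pulls the element's quantiles onto the reference's, which is the essence of solving full-dynamic-range calibration parameters by histogram specification as discussed for document [2].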
A scene-based adaptive correction system for a satellite-borne remote sensing instrument comprises at least a flight platform and a sensor that can rotate about the flight axis of the flight platform and image in linear-array push-broom mode. The system further comprises a pre-calibration processing module and a correction module. The computing load of the flight platform and/or the computing terminal of the ground base station is configured to perform push-broom imaging, triggered by a preset event, with the along-track direction of the flight platform at an included angle to the arrangement direction of the sensor's probe-element linear array, so as to acquire imaging data of the ground scene. The pre-calibration processing module performs pre-calibration processing on the imaging data in response to an instruction from the computing load and/or the computing terminal. The correction module, in response to an instruction from the computing load and/or the computing terminal, corrects the pre-calibrated imaging data in real time based on the degree and speed of the dynamic-characteristic change of the ground scene over the duration of the preset event.
The preset events of the computing load and/or the computing terminal at least include a first preset event corresponding to a high-reflection scene, a second preset event corresponding to a medium-reflection scene and a third preset event corresponding to a low-reflection scene, constructed from signals sent by other flight platforms and from prior knowledge. The first preset event includes at least a first termination event for terminating imaging of the first preset event, the second preset event includes at least a second termination event for terminating imaging of the second preset event, and the third preset event includes at least a third termination event for terminating imaging of the third preset event.
The beneficial technical effects of the invention comprise one or more of the following:
1. Aiming at the calibration errors caused by changes in the radiometric dynamic characteristics of the scene, the imaging period is differentiated, according to the relation between change amplitude and time, into several sub-periods; the sub-periods in which the statistical mean and variance of the probe-element output signal are approximately unchanged are selected, and, on the assumption that the statistical mean of the signal output by each probe element is approximately constant, the constructed linear correction model corrects the pre-calibrated imaging data of those sub-periods in real time, preventing relative radiometric calibration errors from propagating;
2. Imaging can be classified according to the triggering of different ground scenes, so that imaging data from different response ranges are calibrated and corrected separately; the response characteristics of a single probe element in different radiance-response intervals are obtained and the response range of the probe element is covered comprehensively, while mixing of data from different response ranges, which would reduce the accuracy of the calibration data, is avoided;
3. By performing push-broom imaging with the probe-element linear array arranged neither parallel nor perpendicular to the along-track direction of the flight platform, the sensor can image the scene over a wide area before the probe linear array at relative included angle α of 0° images, triggering the first, second and third preset events with maximum probability and acquiring real-time information about the calibration scene in advance, thereby providing trigger information for those preset events for the next calibration imaging. Moreover, while the probe linear array at relative included angle α of 0° is imaging, wide-area imaging of the calibration scene captures its dynamic change in real time, supplying the subsequent steps with the mean and variance of the dynamic change of the scene's radiometric characteristics, from which the correction gain and bias of each pixel are estimated;
4. The method estimates the constant term of a pixel's correction model from the mean and variance of the ground scene and corrects the pre-calibrated imaging data in real time by simple iterative processing. Because the correction uses a linear model, the algorithm is simple and its time complexity low; the calibration coefficients can be adjusted adaptively to the dynamic change of the calibration scene, which reduces the computational overhead of the correction, avoids consuming large resources to process the imaging data, simplifies the payload design of the flight platform, extends its service life, and increases the imaging-data processing speed.
Drawings
FIG. 1 is a schematic representation of a flying platform push-broom imaging in a preferred embodiment of the method of the present invention;
FIG. 2 is a schematic flow diagram of a preferred embodiment of the method of the present invention;
FIG. 3 is a push-broom schematic diagram of the sensor linear array and the flying platform when the included angle between the sensor linear array and the flying platform along the rail direction is 90 degrees;
FIG. 4 is a push-broom schematic diagram when an included angle between a sensor linear array and a flying platform along the rail direction is 0 degree according to the method of the invention;
FIG. 5 is a diagram of the ordering of the pixels of a preferred embodiment of the method of the invention;
FIG. 6 is a diagram of the ordering of pixels subjected to pre-calibration processing in accordance with a preferred embodiment of the method of the present invention; and
fig. 7 is a schematic block diagram of a preferred embodiment of the system of the present invention.
List of reference numerals
1: flying platform; 2: sensor;
3: preset event; 4: computing terminal of ground base station;
α: included angle; 5: first imaging data;
6: second imaging data; 7: third imaging data;
8: pre-calibration processing module; 9: correction module;
11: computing load; 21: probe element;
31: first preset event; 32: second preset event;
33: third preset event; 5a: first calibration data parameter;
6a: second calibration data parameter; 7a: third calibration data parameter;
311: first termination event; 321: second termination event;
331: third termination event
Detailed Description
The following detailed description is made with reference to fig. 1 to 7.
Example 1
The embodiment discloses a correction method, which may be referred to as a radiometric calibration method, a relative radiometric calibration method, an adaptive radiometric calibration method, a constant-statistics adaptive radiometric calibration method, a scene-based radiometric calibration method, or a scene-based adaptive calibration method for a satellite-borne remote sensing instrument. The method may be implemented by the system of the present invention and/or other alternative components; for example, the method disclosed in this embodiment is implemented using the various components of the system of the present invention. The preferred embodiments of the present invention are described in whole and/or in part in the context of other embodiments, which can supplement the present embodiment without conflict or inconsistency.
Radiometric calibration is preferably a process that establishes a quantitative link between the amount of radiation and the detector output. Its purpose is to eliminate errors of the sensor itself and determine the accurate radiance at the entrance pupil of the sensor. Radiometric calibration techniques for space cameras or sensors mainly consist of two parts: relative radiometric calibration (also called uniformity correction) and absolute radiometric calibration. Relative radiometric calibration is a process of correcting the differing responsivity of the different pixels (probe elements) of the detector. Besides differences in fabrication process level, which cause differing responsivity and bias, other factors contribute: non-uniformity of the sensor itself, non-uniformity introduced while the sensor operates, non-uniformity related to the external input, and the influence of the optical system. As sensor fabrication has matured, existing visible-light detection devices generally no longer need uniformity correction, so relative radiometric calibration is mainly used in the infrared band. The responses of the multiple probe elements of a focal-plane device to radiation are inconsistent and bear no fixed relation to one another, and the responsivity of a typical photosensitive element (probe element) is not linear, which makes non-uniformity correction very difficult. Preferably, as shown in fig. 4, imaging data as shown in fig. 5 can be obtained by a 90° yaw, i.e., with the arrangement direction of the linear array of probe elements 21 parallel to the imaging direction. As shown in fig. 5, pixels in the same row, such as A or B or C or D or E, are images of the same calibration scene taken by different probe elements 21. The pixels A, B, C, D, E in the same column are pixels generated by the same probe element.
Through this arrangement, and without considering other influencing factors, each probe element on the sensor 2 theoretically images the same scene in turn. For example, in fig. 5, the first column of probe elements images pixels A, B, C, D, E, the second column images pixels A, B, C, D, E, and the third column likewise, so that every probe element images the same scene A. However, the radiation characteristics of the same scene may also change dynamically; for example, the radiation characteristics of the calibration scene may change within the same imaging period because the same ocean is observed in different seasons, under different winds and at different solar elevation angles.
In summary, the invention divides the imaging period according to the relation between the amplitude of scene dynamic-characteristic variation and time to generate a plurality of sub-periods, selects those sub-periods in which the statistical mean and mean square error of the probe output signal are approximately unchanged, and corrects the pre-calibrated imaging data of those sub-periods in real time through the constructed linear correction model, thereby avoiding propagation of relative radiometric calibration errors.
A scene-based satellite-borne remote sensing instrument self-adaptive correction method at least comprises the step flow shown in figure 2.
The method comprises the following steps:
s100: push-broom imaging with the flying platform 1 with the sensors 2 is performed based on the triggering of the preset event 3. Preferably, the flying platform 1 is push-broom imaged in the manner as shown in fig. 1. Preferably, the flying platform 1 carries a calculation load 11 for the calculation and the sensors 2. The sensor 2 may be a line array CCD. The CCD refers to a charge coupled device, which is a kind of semiconductor device. The CCD is a detecting element which uses electric charge to represent the magnitude of a signal and transmits the signal in a coupling manner. The CCD has a series of advantages of self-scanning, wide sensing wave spectrum, small distortion, small volume, light weight, low system noise, small power consumption and long service life. CCD is widely used in digital photography, astronomy, especially optical remote sensing, optical and spectrum telescope and high speed photography. Preferably. The push-broom imaging is to use a CCD made of semiconductor material to form a linear array or an area array sensor, and to use a wide-angle optical system to sweep out a strip-shaped track like a brush by means of the movement of the flying platform 1 in the whole field of view, so as to obtain a two-dimensional image of the ground along the flying direction.
Preferably, the sensor 2 is rotatable about the flight axis of the platform 1, the flight axis being the axis of the flying platform 1 along its direction of flight. Preferably, the sensor 2 is formed by a plurality of probe elements 21 arranged in a line. Preferably, the probe element 21 may be a photosensitive element within a CCD. Preferably, the linear array of probe elements 21 of the sensor 2 is arranged at an included angle α to the along-track flight direction of the flying platform 1, as shown in figs. 3 and 4. Preferably, as shown in fig. 3, the included angle α between the sensor 2 and the orbit or heading of the flying platform 1 is 90°. As shown in fig. 4, the included angle α between the sensor 2 and the orbit or heading of the flying platform 1 is 0°.
Preferably, after the flying platform 1 performs push-broom imaging, the computing load 11 of the flying platform 1 and the computing terminal 4 of the ground base station can acquire the imaging data of the ground scene collected by the sensor 2. Preferably, the computing load 11 refers to a computing chip or circuit, such as a CPU, GPU, integrated circuit, FPGA, single-chip microcomputer, MCU, or ARM-architecture series chip. Preferably, the computing terminal 4 refers to a computing device such as a computer or server.
Preferably, the computing load 11 of the flying platform 1 and/or the computing terminal 4 of the ground base station controls the rotation of the sensor 2 based on the triggering of the preset event 3. Preferably, the sensor 2 is provided with at least one linear array of a plurality of probe elements 21. Preferably, the computing load 11 and/or the computing terminal 4 controls the different linear arrays of probe elements 21 to rotate according to the triggering of the preset event 3, so that different linear arrays of probe elements 21 perform push-broom imaging at different included angles α. Preferably, from the push-broom imaging of the ground scene by the sensor 2, imaging data for the targeted ground scene can be obtained.
According to a preferred embodiment, the preset events 3 comprise at least a first preset event 31 corresponding to a high-reflection scene, a second preset event 32 corresponding to a medium-reflection scene and a third preset event 33 corresponding to a low-reflection scene. Preferably, because the response function of a single probe element 21 of the sensor 2 differs between radiance response intervals, and the response functions of different probe elements 21 differ from one another, the method disclosed in this embodiment selects multiple types of ground scene as calibration scenes in order to cover radiometric calibration of the full dynamic range of each probe element 21. Preferably, ground scene sites are divided into high-reflection, medium-reflection and low-reflection scenes according to the dynamic range of the existing probe elements 21 and the radiance reflection characteristics of the individual sites. Preferably, taking the 450 nm to 900 nm panchromatic band, i.e. the visible to near-infrared band, as an example, a high-reflection scene is one whose ground reflectance is higher than 35%, a medium-reflection scene one whose reflectance is within 15% to 34%, and a low-reflection scene one whose reflectance is below 15%. Through this arrangement, the invention has the beneficial effect that imaging can be triggered and classified according to the different ground scenes, so that imaging data in different response ranges can be classified for radiometric calibration and correction, the response characteristics of a single probe element 21 in different radiance response intervals can be obtained, and the response ranges of the probe elements are comprehensively covered, preventing data from different response ranges from being mixed, which would reduce the accuracy of the calibration data.
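The scene classification just described can be sketched as a simple threshold function (the 35% and 15% boundaries are from this embodiment; the function name and the use of fractional reflectance are illustrative):

```python
def classify_scene(reflectance):
    """Classify a ground scene by its 450-900 nm (visible/near-infrared)
    reflectance, using the thresholds named in this embodiment."""
    if reflectance > 0.35:      # higher than 35%: high-reflection scene
        return "high"
    if reflectance >= 0.15:     # 15% to 34%: medium-reflection scene
        return "medium"
    return "low"                # below 15%: low-reflection scene

print(classify_scene(0.40))  # high
print(classify_scene(0.20))  # medium
```

Each class maps to one preset event: high triggers the first preset event 31, medium the second preset event 32, low the third preset event 33.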
Preferably, the accuracy of multi-scene radiometric calibration of the sensor 2 is affected by many links in the chain, among which the specific characteristics of the scene are the first prerequisite for using the present invention. It must therefore be ensured that the surface characteristics, atmospheric characteristics and uniform area of the selected scenes meet the specific requirements of on-orbit site calibration. The detailed selection principles are as follows:
1. the scene reflection characteristics cover high, medium and low ground-object types;
2. the spatial and emission characteristics of a single scene are relatively uniform, and the reflectivity varies smoothly over the band range of the sensor 2;
3. the scene is located in a high-altitude area, with relatively dry, clean and stable ambient atmosphere;
4. each scene can be covered by a single-orbit observation image of the remote sensing satellite;
5. the uniform area of the scene is larger than 10 pixels × 10 pixels of the sensor 2 to be calibrated, with no large occluding objects around the scene;
6. each scene has traffic access for carrying out satellite-ground synchronous observation tests.
Preferably, according to the above conditions, suitable uniform scenes can be selected from prior knowledge. For example, in the Dunhuang radiometric calibration site in China, the high-reflection scene is located on the north side of the site; the total site area is about 6 km × 4 km, the uniform high-reflection area is 400 m × 400 m, the geographic coordinates are N40°28′, E94°22′, the visible/near-infrared reflectance of the area is about 35% to 45%, and the spectral-reflectance variation among the probe elements 21 of different sensors 2 is less than 1%. The medium-reflection scene may take the resource-satellite site of the Dunhuang calibration site, located on the Gobi desert about 30 km west of Dunhuang City. Its total area is about 30 km × 35 km, the medium-reflection area is 550 m × 550 m, the geographic coordinates are N40°05′27.75″, E94°23′39″, and the altitude is 1229 m. The site has high stability and uniformity, the visible/near-infrared reflectance is about 15% to 30%, and the spectral-reflectance variation among the probe elements 21 of different sensors 2 is about 1% to 2%. For the low-reflection scene, the South Lake water body on the south side of the calibration site can be selected; in summer and autumn its area is about 3.5 km × 1.2 km, the geographic coordinates are N39°52′, E94°07′, the average water depth is about 5 m, and the water body is unpolluted with uniform characteristics.
Preferably, the first preset event 31, the second preset event 32 and the third preset event 33 may be constructed from existing prior knowledge and may also be continuously updated from signals transmitted by other flight platforms 1. Preferably, because the radiation of a ground scene changes continuously and dynamically under the influence of the atmosphere, latitude, wind and solar elevation angle, the flight platform 1 can be triggered to image in time based on radiation information about the scene sent by other flight platforms. Preferably, the first preset event 31 comprises at least the event that the flying platform 1 enters a high-reflection scene. The first preset event 31 further comprises a first termination event 311 for terminating the imaging of the first preset event 31; the first termination event 311 is the flight platform 1 leaving the high-reflection scene. Likewise, the second preset event 32 comprises at least the flight platform 1 entering a medium-reflection scene, and further comprises a second termination event 321 for terminating the imaging of the second preset event 32; the second termination event 321 comprises the flight platform 1 leaving the medium-reflection scene. The third preset event 33 comprises at least the flight platform 1 entering a low-reflection scene, and further comprises a third termination event 331 for terminating the imaging of the third preset event 33; the third termination event comprises at least the flight platform 1 leaving the low-reflection scene. Preferably, the flight platform 1 learns that it has entered a high-reflection, medium-reflection or low-reflection scene through prior knowledge and information sent by other flight platforms or ground base stations.
Or, the flying platform 1 obtains reflectivity information, coordinates, longitude and latitude information and the like conforming to the high-reflection scene, and judges whether it has entered the high-reflection scene based on its real-time flight data and on the reflectivity, uniformity and other information of the ground scene fed back by the sensor 2. Preferably, whether the flight platform 1 has left a high-reflection, medium-reflection or low-reflection scene is likewise learned through prior knowledge and information sent by other flight platforms or ground base stations. Or, the flying platform 1 obtains reflectivity information, coordinates, longitude and latitude information and the like conforming to the high-reflection scene, and judges whether it has left the high-reflection scene based on its real-time flight data and on the reflectivity, uniformity and other information of the ground scene fed back by the sensor 2. Through the above arrangement, the flying platform 1 can trigger the corresponding preset event 3 in time based on the above information. Moreover, when no preset event 3 is triggered by the flight platform 1, the sensor 2 is in a sleep state; keeping the sensor 2 from staying on for long periods saves energy, which is particularly suitable for existing microsatellites and helps prolong the on-orbit service life of the flight platform 1.
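The trigger/sleep behaviour described above can be sketched as follows (class and method names are illustrative; the real platform would fuse prior knowledge, inter-platform messages and sensor feedback rather than a single scene label):

```python
class PresetEventTrigger:
    """Minimal sketch: the sensor sleeps until a preset event 3 fires and
    sleeps again when the matching termination event fires."""

    def __init__(self):
        self.active_event = None  # "high", "medium", "low", or None (sleep)

    def update(self, scene):
        """scene: classification of the ground under the platform, or None
        when the platform is over no calibration scene."""
        if scene is None:
            self.active_event = None   # termination event: left the scene
        else:
            self.active_event = scene  # preset event: entered the scene
        return self.active_event is not None  # True -> sensor is imaging

trigger = PresetEventTrigger()
imaging = trigger.update("high")  # platform enters a high-reflection scene
```

While `update` returns `False`, the sensor stays in its sleep state, matching the energy-saving behaviour described above.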
According to a preferred embodiment, in case of at least one of the first preset event 31, the second preset event 32, the third preset event 33 triggering, the computing load 11 and/or the computing terminal 4 performs the following steps:
1. recording the first trigger time point and first start probe element triggered by the first preset event 31, the second trigger time point and second start probe element triggered by the second preset event 32, and the third trigger time point and third start probe element triggered by the third preset event 33;
2. recording the first termination time point and first termination probe element triggered by the first termination event 311, the second termination time point and second termination probe element triggered by the second termination event 321, and the third termination time point and third termination probe element triggered by the third termination event 331;
preferably, the imaging data corresponding to the first preset event 31 within the imaging data output by the sensor 2 can be obtained from the first start probe element and first termination probe element together with the first trigger time point and first termination time point. Similarly, the imaging data corresponding to the second preset event 32 can be obtained from the second start probe element, second termination probe element, second trigger time point and second termination time point, and the imaging data corresponding to the third preset event 33 can be obtained from the third start probe element, third termination probe element, third trigger time point and third termination time point. Through this arrangement, by marking the time points and probe elements that trigger and terminate the corresponding preset events, the flying platform 1 can image the first preset event 31, the second preset event 32 or the third preset event 33 not only singly but also simultaneously.
Preferably, the imaging data is classified based on the start time point, the end time point, the start probe, and the end probe described above. For example, in the case of triggering the first preset event 31, the computation load 11 or the computation terminal 4 records the first start time and the first start probe of imaging start in the sensor 2, and in the case of triggering the first termination event, the computation load 11 or the computation terminal 4 records the first termination probe and the first termination time, so that all probes located in the same line in the sensor 2 and between the first start probe and the first termination probe image the first preset event 31 to obtain the first imaging data 5. The imaging time of the first imaging data 5 is obtained by calculation of the first start time and the first end time for real-time correction in the subsequent steps. Likewise, second imaging data 6 are obtained by a second start probe and a second end probe. The imaging time of the second imaging data 6 is obtained by the second start time and the second end time. Third imaging data 7 are obtained by the third start probe and the third stop probe. The imaging time of the third imaging data 7 is obtained by the third start time and the third end time. Preferably, the first imaging data 5 is imaging data responsive to a highly reflective scene in a dynamic range. The second imaging data 6 is imaging data responsive to a medium reflective scene in dynamic range. The third imaging data 7 is imaging data responsive to a low reflection scene in the dynamic range.
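A minimal sketch of how the recorded start/termination time points and probe elements delimit one event's imaging data (the `[time][probe]` layout and the inclusive bounds are assumptions; the patent does not fix a data format):

```python
def slice_event_data(frames, start_time, end_time, start_probe, end_probe):
    """Extract, from the full push-broom output, the imaging data of one
    preset event: rows between the trigger and termination time points,
    columns between the start and termination probe elements (inclusive).
    `frames` is a 2D list indexed [time][probe]."""
    return [row[start_probe:end_probe + 1]
            for row in frames[start_time:end_time + 1]]

# synthetic output: frames[t][p] encodes time t and probe p as t*10 + p
frames = [[t * 10 + p for p in range(5)] for t in range(6)]
event_data = slice_event_data(frames, 1, 3, 2, 4)
```

The same call, with the second or third set of recorded bounds, yields the second imaging data 6 or third imaging data 7.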
According to a preferred embodiment, before the flight platform 1 triggers the recording of the preset event 3, the calculation load 11 of the flight platform 1 and/or the calculation terminal 4 of the ground base station performs the following steps:
1. the flying platform 1 performs push-scan imaging in the arrangement direction of the linear array of the probe elements 21 of at least one row of sensors 2, as shown in fig. 4. The flying platform 1 controls the sensor 2 to perform push-broom imaging with an included angle alpha of 0 degree. By the arrangement mode, different probe elements 21 positioned in the same linear array can sequentially image the same scene unit, as shown in fig. 5. A, B, C, D, E in FIG. 5 are pixels of the same probe 21 that image the same scene in time sequence. Preferably, chronological refers to going from first to last in time. A refers to the first region in the same scene. B refers to a second area adjacent to the first area in the same scene. C refers to a third region adjacent to the second region in the same field. D refers to a fourth area adjacent to the third area in the same scene. E refers to a fifth area adjacent to the fourth area in the scene. As shown in FIG. 5, each column in FIG. 5 has A, B, C, D, E, A, B, C, D, E in the same column indicates the column of pixels imaged by the same probe element 21, and A or B or C in the same row are pixels imaged by different probe elements 21.
2. The arrangement direction of the linear array of the probe elements 21 of at least one row of sensors 2 defines an included angle alpha in a manner of being neither parallel to nor perpendicular to the along-track direction of the flying platform 1. Preferably, the array of at least one array of detector elements 21 of the sensor 2 having an angle α other than 0 ° is rotatable. After rotation, the included angle α is not 0 ° or 90 °, that is, the linear array of the probe elements 21 is arranged in a direction which is neither parallel to nor perpendicular to the along-track direction of the flying platform 1. By adopting the setting mode to carry out push-broom imaging, the sensor 2 can image a scene in a large range before the linear array imaging of the probe element 21 with the relative included angle alpha of 0 degree, and trigger the first preset event 31, the second preset event 32 and the third preset event 33 with the maximum probability, so that the real-time information of a calibration scene can be obtained in advance, and the trigger information of the first preset event 31, the second preset event 32 and the third preset event 33 is provided for the next calibration imaging. Moreover, when the linear array of the probe element 21 with the relative included angle α of 0 ° is imaged, the dynamic change of the calibration scene can be obtained in real time by imaging the calibration scene in a large range, so as to provide the mean value and the mean square error of the dynamic change of the radiation characteristic of the calibration scene for the subsequent step S300, and the correction gain and the offset corresponding to the pixel element are estimated based on the radiation mean value and the mean square error of the calibration scene.
S200: pre-calibration processing is performed on the obtained imaging data;
according to a preferred embodiment, the steps by which the computing load 11 of the flying platform 1 and/or the computing terminal 4 of the ground base station pre-calibrate the obtained imaging data are as follows:
1. An initial image is generated based on imaging data comprising at least the first imaging data 5, the second imaging data 6 and the third imaging data 7. Preferably, the initial image is as shown in fig. 5. The initial image is organized in units of the pixels generated by the probe elements 21. For example, the first probe element of the linear array of probe elements 21 with included angle α of 0° images the A pixel at the first sampling along the flight direction of the flight platform 1. At the second sampling, the first probe element generates the B pixel and the second probe element generates the A pixel. At the third sampling, the first probe element generates the C pixel, the second probe element the B pixel, and the third probe element the A pixel.
2. Denoising is performed on the initial image. Preferably, the initial image is formed column by column from the pixels of the probe elements 21, and A, B, C, D, E in the initial image are arranged diagonally. Because each column of pixels is imaged by a different probe element 21 and the response functions of different probe elements 21 differ, and because of the dynamic radiation characteristics of the calibration scene, the A pixel of each column in the initial image differs, producing oblique stripe noise in the initial image.
Preferably, the denoising process passes the image data of the initial image through a low-pass filter, which may be an exponential low-pass filter. With this arrangement, the initial image is Fourier-transformed to obtain its frequency-domain spectrum, the low-frequency components carrying the low-frequency noise are filtered off by the low-pass filter to leave the high-frequency components of the initial image, and amplifying the high-frequency components enhances the details of the oblique straight lines formed by pixels that image the same scene unit.
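The Fourier-domain enhancement described here can be sketched as follows (a hard circular mask stands in for the exponential low-pass filter; the cutoff and gain values are illustrative):

```python
import numpy as np

def enhance_high_freq(image, cutoff=4, gain=2.0):
    """Sketch of the described processing: Fourier-transform the initial
    image, separate low and high frequencies with a circular mask, amplify
    the high-frequency part (which carries the oblique same-scene lines),
    and invert the transform."""
    F = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    y, x = np.ogrid[:h, :w]
    dist = np.hypot(y - h // 2, x - w // 2)
    low = dist <= cutoff                      # low-pass mask around DC
    F_enhanced = np.where(low, F, gain * F)   # amplify high frequencies
    return np.real(np.fft.ifft2(np.fft.ifftshift(F_enhanced)))

img = np.eye(8)                # toy image with a diagonal line
out = enhance_high_freq(img)
```

Because the DC component is inside the low-pass mask and left untouched, the overall image mean is preserved while the diagonal detail is amplified.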
3. Shifting is performed based on the gray value of each pixel in the denoised and enhanced initial image. Preferably, since the initial image comprises a plurality of diagonally arranged pixels, the straight lines formed by these pixels can be detected by the LSD (Line Segment Detector) method. Preferably, the steps of LSD line detection are as follows:
a. The scale factor is set to 1, meaning that no Gaussian down-sampling is performed. Preferably, down-sampled data cannot be used for calibration, or causes calibration to fail, because sampling destroys the nonlinear response relationship among the probe elements 21 in the original image.
b. The gradient value and gradient direction of each pixel are calculated and pseudo-ordered. Preferably, the larger the gradient value, the more prominent the edge point and hence the more suitable it is as a seed point. However, because fully sorting the gradient values costs too much time, the gradient values are simply divided into 1024 levels covering the gradient range from 0 to 255, making the ordering a linear-time operation. Preferably, seed points are searched downward in sequence starting from the level with the highest gradient value: pixels with the same gradient level are placed in the same linked list, 1024 linked lists are obtained, and a state table containing the 1024 linked lists is assembled in order of gradient value. All points in the state table are initialized to a not-used state.
c. Points with gradient values smaller than p are set to an unusable state; the point with the largest gradient value is taken from the state table and used as a starting point from which the surrounding directions within the angle tolerance are searched. Preferably, searching the surrounding directions within the angle tolerance means growing the region along directions whose gradient angles are similar. Preferably, a rectangle is fitted to the grown region to generate a rectangle R. Preferably, p may be the expectation of all gradient values or may be set manually. Preferably, points with gradient values smaller than p tend to appear in smooth regions, or contain only low-frequency noise, and can severely distort the calculation of the line angle; therefore, in LSD, pixels with gradient magnitude less than p are excluded from participating in the construction of the rectangle R. Preferably, the rectangular fitting of the grown region is essentially a shifting of the pixel gray-value data, not a sampling of data points.
d. It is judged whether the density of aligned points in the rectangle R satisfies a threshold F. Preferably, if the threshold F is not satisfied, the rectangle R is truncated to form a plurality of rectangular frames that do satisfy it. Preferably, the threshold F may be set to one third of the number of probe elements 21 of the sensor 2 actually participating in imaging, so that short straight lines are eliminated.
Through the above arrangement, the oblique-line rectangle R formed by like pixels in the initial image can be detected, and the included angle α between the oblique line and the along-track axis of the flying platform can be obtained.
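The pseudo-ordering of step b, bucketing gradient magnitudes into 1024 levels instead of fully sorting them, can be sketched as:

```python
def pseudo_sort_gradients(grad_magnitudes, levels=1024, max_grad=255.0):
    """Sketch of the LSD pseudo-ordering: quantize gradient magnitudes
    into `levels` bins covering 0..max_grad (linear-time bucketing, not a
    full sort) and return the non-empty buckets from strongest gradient
    to weakest, as ordered seed-point candidates."""
    buckets = [[] for _ in range(levels)]
    for idx, g in enumerate(grad_magnitudes):
        level = min(int(g / max_grad * (levels - 1)), levels - 1)
        buckets[level].append(idx)
    # strongest gradients first: the most prominent edge points seed
    # the region growing of step c
    return [b for b in reversed(buckets) if b]

seeds = pseudo_sort_gradients([10.0, 255.0, 0.0, 128.0])
```

Each bucket plays the role of one linked list in the state table; iterating the returned list reproduces the highest-gradient-first search order.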
4. The initial image after the line detection is processed according to the included angle α and the following formula, so that the pixels in the same row in the initial image are images of the same scene unit by different probe elements 21, and the pixels in the same column are images of the same probe element 21, as shown in fig. 6. The formula is as follows:
(formula reproduced as an image in the original publication)
wherein DN is the one-dimensional gray data of the initial image stored row by row, DN[m + n·t] denotes the gray value of the pixel in the m-th row and n-th column of the initial image, t denotes the number of probe elements 21 participating in imaging, K1 = tan α, and K2 = tan(90° − α).
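For the simple 0° case, where the oblique alignment advances one pixel per probe column, the realignment of step 4 reduces to shifting each column by its index (a sketch; the general case uses the detected angle α and the K1, K2 factors):

```python
def deslant(image):
    """Sketch of the step-4 realignment for the 0-degree case: pixel
    [t][p] of the push-broom output images scene unit t - p, so reading
    image[r + p][p] puts every scene unit onto one row, as in fig. 6.
    `image` is a list of rows (sampling times) over probe columns."""
    t_rows, probes = len(image), len(image[0])
    out_rows = t_rows - (probes - 1)   # keep only fully covered scene units
    return [[image[r + p][p] for p in range(probes)] for r in range(out_rows)]

# scene units imaged by 3 probes: raw[t][p] holds scene unit t - p
raw = [[t - p for p in range(3)] for t in range(5)]
aligned = deslant(raw)
```

After realignment, each row holds one scene unit seen by every probe element and each column holds one probe element's outputs, which is the layout the correction step requires.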
S300: real-time correction is performed on the pre-calibration-processed imaging data based on the degree of dynamic-characteristic change of the ground scene within the duration of the preset event. Preferably, after the pre-calibration processing of the imaging data, the computing load 11 of the flying platform 1 and/or the computing terminal 4 performs the following steps:
1. and judging whether the dynamic characteristic change degree of the ground scene in the duration time of the preset event exceeds a first threshold value. Preferably, the duration refers to the imaging time of the line probe element 21 to the preset event 3. Preferably, the first threshold refers to a scene for relative radiometric calibration, the degree or rate of change of reflectivity of which is 2% or 5% over the imaging period. Preferably, the pre-scaled imaging data is corrected in real time using a linear correction model in the event that the degree of dynamic characteristic change does not exceed the first threshold. Preferably, the statistical average of the signal output by each probe 21 can be considered approximately constant without the rate of change exceeding 2%. The statistical variance of the signals input to the probe 21 is equal. A linear correction model can be used for the correction. The linear correction model is:
Y_i(n) = K_i(n) · X_i(n) + B_i(n)

wherein n is the number of iterations of the linear correction model, X_i(n) is the original output of the i-th probe element 21, and Y_i(n) is the corrected output value; K_i(n) is the multiplier term of the correction model, i.e. the gain, and B_i(n) is the constant term of the correction model, i.e. the bias. Preferably, the mean square error of the ground scene can be used to estimate the multiplier term of the pixel's correction model, and the mean value together with the mean square error of the ground scene can be used to estimate the constant term. Preferably, let a_i(n) be the mean square error of the imaged scene, β_i(n) be the mean of the imaged scene, and ā(n) and β̄(n) be the reference mean square error and mean of the calibration scene. The iterative formulas are:

K_i(n) = ā(n) / a_i(n)

B_i(n) = β̄(n) − K_i(n) · β_i(n)
Through the above two iterative formulas, the imaging data after the pre-calibration processing can be corrected in real time. The linear model used for correction keeps the algorithm simple and of low complexity, and the calibration coefficients can be adaptively adjusted according to the dynamic change of the calibration scene.
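The gain/bias correction above can be sketched as follows. This is a hedged, non-iterative batch version: the reference statistics `ref_mean` and `ref_ms` stand in for the calibration scene's mean and mean square error, and the function name is illustrative, since the patent's exact iterative update is not reproduced in the text:

```python
import numpy as np

def relative_correction(raw, ref_mean, ref_ms):
    """Linear relative-radiometric correction of one probe element.
    raw: samples X_i(n) of detector i over the calibration window.
    ref_mean / ref_ms: target scene mean and mean square error
    (assumed reference statistics of the calibration scene)."""
    beta = raw.mean()               # beta_i(n): mean of the imaged scene
    a = raw.std()                   # a_i(n): mean square error of the scene
    gain = ref_ms / a               # multiplier term (gain)
    bias = ref_mean - gain * beta   # constant term (bias)
    return gain * raw + bias        # Y_i(n) = gain * X_i(n) + bias
```

After the correction, every detector's output is pulled to the same scene statistics, which is what makes the stripe pattern between detectors disappear.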
2. In the case that the degree of dynamic characteristic change exceeds the first threshold, the computing load 11 of the flying platform 1 and/or the computing terminal 4 of the ground base station performs the following steps:
a. The duration of the preset event is segmented by a first unit time. Preferably, the imaging time period of the calibration scene in which the degree of dynamic feature change exceeds the first threshold is subdivided by a difference-based approach, and at least one first time set in which the degree of dynamic feature change does not exceed the first threshold is sought within that period. Preferably, the first time set comprises a plurality of mutually adjacent first unit times, i.e. the first unit times are continuous with no break between them. Preferably, it is determined whether the ratio of the number of first unit times in the first time set to the degree of dynamic characteristic change within the first time set satisfies a second threshold. Preferably, the second threshold is, for example, the ratio of the imaging time of 1024 pixels to a 5% degree of change of the dynamic characteristics of the scene within that imaging time. Preferably, if the ratio is exceeded, the statistical mean of the signal output by each probe element 21 is considered not constant and real-time correction cannot be performed.
b. In the case that the ratio of the number of first unit times in the first time set to the degree of dynamic characteristic change within the first time set satisfies the second threshold, the statistical mean of the signal output by each probe element 21 is considered approximately constant, and the linear correction model can be used to correct the pre-calibrated imaging data in real time. Through this arrangement, the time period within the imaging period in which the statistical mean and variance of the output signal of the probe elements 21 are approximately constant can be found as far as possible by the difference-based principle, and the linear correction model is constructed from the mean and variance of the calibration scene in that time period. In fact, in actual relative radiometric calibration, the radiation characteristic of most calibration scenes changes slowly, or changes slowly within a certain time period, and by virtue of this characteristic the calibration residual can be corrected in real time within the time period over which all the probe elements 21 are imaged, improving the accuracy of the calibration data.
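The subdivision in steps a and b amounts to finding a contiguous run of unit-time segments whose change degree stays under the first threshold. A minimal sketch, with an illustrative function name and a simple per-segment change-degree list as input:

```python
def longest_stable_window(change_degrees, first_threshold):
    """Find the longest run of consecutive unit-time segments whose
    dynamic-feature change degree stays within the first threshold
    (a sketch of the patent's difference-based subdivision).
    Returns (start index, run length)."""
    best, cur = (0, 0), None
    for i, d in enumerate(change_degrees):
        if d <= first_threshold:
            if cur is None:
                cur = i                      # a new stable run starts here
            if i - cur + 1 > best[1]:
                best = (cur, i - cur + 1)    # keep the longest run so far
        else:
            cur = None                       # run broken: reset
    return best
```

The returned run plays the role of the "first time set"; the second-threshold ratio test would then be applied to its length and its accumulated change degree.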
3. Based on the imaging data corrected in real time, a first calibration data parameter 5a of the first imaging data 5, a second calibration data parameter 6a of the second imaging data 6, and a third calibration data parameter 7a of the third imaging data 7 are obtained respectively by histogram regularization calculation. Preferably, the calibration parameters are calculated by a rule based on the probe-element histogram, and the processing flow is as follows:
1. Establishing a cumulative probability distribution function for each probe element from the processed initial image according to the following formula, and selecting the cumulative probability distribution function of one probe element as the ideal reference cumulative probability distribution function:

f_j(k) = ( Σ_{i=0..k} pn_j(i) ) / dpn(j)

wherein k is the imaging gray level of the probe element, pn_j(k) is the number of pixels of the j-th probe element whose gray level is k, and dpn(j) is the total number of pixels imaged by the j-th probe element.
2. Performing histogram regularization on the cumulative probability distribution function of each probe element, taking the ideal reference cumulative probability distribution function f as the reference, according to the following formulas, to obtain the relative radiometric calibration parameters of each probe element:

f(k − x) ≤ f_j(k) ≤ f(k + y)

k′ = arg min_{k′ ∈ [k−x, k+y]} | f(k′) − f_j(k) |

i.e. for each gray level k of the j-th probe element, the mapped level k′ is the level within the search window whose reference cumulative probability is closest to f_j(k), wherein the value range of x and y is [0, 2^bits − 1] and bits is the quantization bit depth with which the sensor 2 acquires the image. Through this algorithm, the radiation response differences between the probe elements 21 can be better reflected, and after the corresponding relative radiometric calibration parameters are applied, the overall column-value distribution change conforms to the law of actual scene change and to the radiance differences between CCD taps.
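The per-detector histogram regularization reduces to classic histogram matching against the reference cumulative distribution. A minimal sketch, with the search-window bounds x, y of the patent simplified to a full search and illustrative names throughout:

```python
import numpy as np

def relative_calibration_lut(detector_pixels, reference_cdf, levels=256):
    """Build a gray-level look-up table mapping one detector's
    histogram onto an ideal reference cumulative distribution
    (histogram matching; a sketch, not the patent's exact windowed
    search)."""
    hist = np.bincount(detector_pixels, minlength=levels)   # pn_j(k)
    cdf = np.cumsum(hist) / detector_pixels.size            # f_j(k)
    # for each level k pick k' whose reference CDF is closest to f_j(k)
    lut = np.array([int(np.abs(reference_cdf - c).argmin()) for c in cdf])
    return lut
```

Applying `lut` to the detector's pixels makes its gray-level distribution follow the reference detector's, which is exactly the relative calibration parameter set per probe element.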
Example 2
The present embodiment discloses a calibration system, which may be a radiometric calibration system, a relative radiometric calibration system, an adaptive radiometric calibration system, a scene-based radiometric calibration system, or a scene-based adaptive calibration system for a satellite-borne remote sensing instrument, and the system may be implemented by the system of the present invention and/or other replaceable components. For example, the method disclosed in the present embodiment is implemented using the various components of the system of the present invention. The preferred embodiments of the present invention are described in whole and/or in part in the context of other embodiments, which can supplement the present embodiment, without resulting in conflict or inconsistency.
As shown in fig. 7, the scene-based adaptive correction system for a satellite-borne remote sensing instrument comprises at least a flying platform 1, a sensor 2, a ground base station, a pre-calibration processing module 8 and a correction module 9. The flying platform 1 may preferably be an aircraft, a spacecraft or a missile. The aircraft may be a balloon, an airship, an airplane, and the like. The spacecraft may be an artificial earth satellite, a manned spacecraft, a space probe, a space shuttle, etc. Preferably, the sensor 2 is an advanced optical system mounted on the flying platform 1 and can be used for acquiring information of earth targets. The sensor 2 may be a space camera, a CCD or a similar sensor, or may be a sensor array constituted by a plurality of CCDs. The probe elements 21, i.e. the light-sensitive elements, within the sensor 2 preferably constitute the sensor 2 in a linear arrangement. The sensor 2 may be a linear-array CCD. CCD refers to a charge-coupled device, a kind of semiconductor device; it is a detecting element which uses electric charge to represent the magnitude of a signal and transfers the signal in a coupled manner. Preferably, the sensor 2 is rotatable about the flight axis of the flying platform 1. Preferably, the flight axis refers to the axis of the flying platform 1 in its direction of flight. As shown in fig. 3, the included angle α between the sensor 2 and the orbit or heading of the flying platform 1 is 90°. As shown in fig. 2, the included angle α between the sensor 2 and the orbit or heading of the flying platform 1 is 0°. Preferably, the sensor 2 images in a line push-broom fashion.
Push-broom imaging means that a linear-array or area-array sensor 2 formed from a semiconductor CCD, together with a wide-angle optical system, sweeps out a strip-shaped track like a brush across the whole field of view through the movement of the flying platform 1, obtaining a two-dimensional image of the ground along the flight direction. Preferably, the flying platform 1 is also loaded with a computing load 11. Preferably, the computing load 11 refers to a computing chip, circuit, etc., such as a CPU, a GPU, an integrated circuit, an FPGA, a single-chip microcomputer, an MCU, an ARM-architecture series chip, etc. Preferably, the ground base station comprises at least a computing terminal 4. The computing terminal 4 refers to a computing device such as a computer or a server. Preferably, the pre-calibration processing module 8 comprises at least a register, a storage medium and a computing chip. The register is used to store instructions of the computing load 11 and the computing terminal 4. The instructions include at least operation control information. The storage medium is used for storing the processed data. Preferably, the storage medium may be random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. The computing chip may be a CPU, a GPU, an integrated circuit, an FPGA, a single-chip microcomputer, an MCU, an ARM-architecture series chip, or the like. Preferably, the pre-calibration processing module 8 may also use the computing load 11 for the computation and processing of data. Preferably, the correction module 9 likewise comprises at least a register, a storage medium and a computing chip.
Preferably, the computing load 11 of the flying platform 1 and/or the computing terminal 4 of the ground base station are configured so as to be able to control the rotation of the sensor 2 based on the triggering of the preset event 3. Preferably, the sensor 2 is provided with at least one line array of a plurality of probe elements 21. Preferably, the sensor 2 is rotatable about the axis of flight of the platform 1. The axis of flight refers to the axis of the flying platform 1 in its direction of flight. Preferably, the sensor 2 is formed by a plurality of probe elements 21 arranged in a line. Preferably, the probe 21 may be a light sensitive element within a CCD. Preferably, the linear array of the probe elements 21 of the sensor 2 is arranged at an angle α with the flying direction of the flying platform 1 along the track, as shown in fig. 3 and 4. Preferably, as shown in fig. 3, the angle α between the sensor 2 and the orbit or heading of the flying platform 1 is 90 °. As shown in fig. 4, the angle α between the sensor 2 and the orbit or heading of the flying platform 1 is 0 °.
Preferably, the computing load 11 and/or the computing terminal 4 controls the different linear-array probe elements 21 to rotate according to the triggering of the preset event 3, so that the different linear-array probe elements 21 perform push-broom imaging at different included angles α, thereby obtaining imaging data.
According to a preferred embodiment, the preset events 3 comprise at least a first preset event 31 corresponding to a high reflection scene, a second preset event 32 corresponding to a medium reflection scene and a third preset event 33 corresponding to a low reflection scene. Preferably, because the response function of a single probe element of the sensor 2 differs between response intervals of different radiances, and the response functions of different probe elements also differ, in order to achieve radiometric calibration over the full dynamic range of each probe element of the sensor, the method disclosed in this embodiment selects multiple types of ground scenes as calibration scenes. Preferably, the ground scenes are divided into high reflection scenes, medium reflection scenes and low reflection scenes according to the dynamic range of the existing probe elements and the radiation reflection characteristics of a single field. Preferably, taking the 450 nm to 900 nm panchromatic band, i.e. the visible-near infrared band, as an example, a scene whose reflectivity is higher than 35% is set as a high reflection scene, a scene whose reflectivity is within 15% to 35% is set as a medium reflection scene, and a scene whose reflectivity is below 15% is set as a low reflection scene. Through this arrangement, the invention has the beneficial effects that imaging can be triggered and classified according to different ground scenes, so that imaging data in different response ranges can be radiometrically calibrated separately, the response characteristics of a single probe element in different radiance response intervals can be obtained, the response range of each probe element is covered comprehensively, and mixing of data from different response ranges, which would reduce the accuracy of the calibration data, is avoided.
Preferably, the first preset event 31, the second preset event 32 and the third preset event 33 may be constructed according to existing a priori knowledge, and may also be continuously updated according to signals transmitted from other flight platforms 1. Preferably, since the radiation of a ground scene changes continuously and dynamically under the influence of the atmosphere, humidity, wind and the solar altitude angle, the flight platform 1 can be triggered to image in time based on radiation information about the scene sent by other flight platforms. Preferably, the first preset event 31 comprises at least the flight platform 1 entering a high reflection scene. The first preset event 31 further comprises a first termination event 311 for terminating the imaging of the first preset event 31. The first termination event 311 is the flight platform 1 leaving the high reflection scene. Likewise, the second preset event 32 comprises at least the flight platform 1 entering a medium reflection scene. The second preset event 32 further comprises a second termination event 321 for terminating the imaging of the second preset event 32. The second termination event 321 comprises the flight platform 1 leaving the medium reflection scene. The third preset event 33 comprises at least the flight platform 1 entering a low reflection scene, and further comprises a third termination event 331 for terminating the imaging of the third preset event 33. The third termination event 331 comprises at least the flight platform 1 leaving the low reflection scene. Preferably, the flight platform 1 is informed that it is entering a high reflection scene, a medium reflection scene or a low reflection scene through a priori knowledge and through information sent by other flight platforms or ground base stations.
Alternatively, the flying platform 1 obtains the reflectivity information, coordinates, longitude and latitude information and the like of areas that qualify as high reflection scenes, and based on this information, together with its real-time flight data and the reflectivity, uniformity and other information of the ground scene fed back by the sensor 2, the flying platform 1 judges whether it has entered a high reflection scene. Preferably, the flying platform 1 is likewise informed, through a priori knowledge and information sent by other flight platforms or the ground base station, whether it has left a high reflection scene, a medium reflection scene or a low reflection scene. Alternatively, the flying platform 1 judges, based on the obtained reflectivity information, coordinates and longitude and latitude information of high reflection scenes, its real-time flight data and the reflectivity, uniformity and other information of the ground scene fed back by the sensor 2, whether it has left the high reflection scene. Through the above arrangement, the flying platform 1 can trigger the corresponding preset event 3 in time based on the above information. Moreover, when no preset event 3 is triggered, the sensor 2 is in a sleep state, which prevents the sensor 2 from being in an on state for a long time and thus saves energy; this is particularly suitable for existing micro-satellites and is beneficial to prolonging the on-orbit service life of the flying platform 1.
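The scene classification that drives the event triggering follows directly from the reflectivity thresholds of this embodiment (>35% high, 15–35% medium, <15% low, for the 450–900 nm band). A minimal sketch with illustrative event names:

```python
def classify_scene(reflectivity):
    """Map ground-scene reflectivity (as a fraction) to the preset
    event it triggers, using the embodiment's thresholds for the
    450-900 nm panchromatic band."""
    if reflectivity > 0.35:
        return "first_preset_event"   # high reflection scene (31)
    if reflectivity >= 0.15:
        return "second_preset_event"  # medium reflection scene (32)
    return "third_preset_event"       # low reflection scene (33)
```

Entering and leaving a scene would be detected by running this classifier on the reflectivity fed back by the sensor 2 (or received from other platforms) and watching for a change of class.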
According to a preferred embodiment, in the case that at least one of the first preset event 31, the second preset event 32 and the third preset event 33 is triggered, the computing load 11 and/or the computing terminal 4 performs the following steps:
1. recording a first trigger time point and a first start probe element triggered by the first preset event 31, a second trigger time point and a second start probe element triggered by the second preset event 32, and a third trigger time point and a third start probe element triggered by the third preset event 33;
2. recording a first termination time point and a first termination probe element triggered by the first termination event 311, a second termination time point and a second termination probe element triggered by the second termination event 321, and a third termination time point and a third termination probe element triggered by the third termination event 331;
preferably, the imaging data corresponding to the first preset event 31 within the imaging data output by the sensor 2 can be obtained through the first start probe element, the first termination probe element, the first trigger time point and the first termination time point. Similarly, the imaging data corresponding to the second preset event 32 can be obtained through the second start probe element, the second termination probe element, the second trigger time point and the second termination time point, and the imaging data corresponding to the third preset event 33 can be obtained through the third start probe element, the third termination probe element, the third trigger time point and the third termination time point. Through this arrangement, by marking the time points and probe elements that trigger and terminate the corresponding preset events, the flying platform 1 can not only image the first preset event 31, the second preset event 32 or the third preset event 33 individually, but can also image them simultaneously.
Preferably, the imaging data is classified based on the above start time points, termination time points, start probe elements and termination probe elements. For example, in the case that the first preset event 31 is triggered, the computing load 11 or the computing terminal 4 records the first start time and the first start probe element at which imaging starts in the sensor 2, and in the case that the first termination event is triggered, the computing load 11 or the computing terminal 4 records the first termination probe element and the first termination time, so that all probe elements located in the same line of the sensor 2 between the first start probe element and the first termination probe element image the first preset event 31 to obtain the first imaging data 5. The imaging time of the first imaging data 5 is obtained by calculation from the first start time and the first termination time, for real-time correction in the subsequent steps.
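The classification step reduces to cutting a rectangular block out of the sensor output using the recorded markers. A minimal sketch, assuming the output is organized as lines × probe elements and that time points have already been converted to line indices (names are illustrative):

```python
def slice_event_data(image, start_probe, end_probe, start_line, end_line):
    """Extract the imaging data belonging to one preset event from the
    sensor output, using the recorded trigger/termination probe
    elements and time points (as line indices). `image` is a 2-D
    array of lines x probe elements; bounds are inclusive."""
    return image[start_line:end_line + 1, start_probe:end_probe + 1]
```

The same call with the second or third event's markers yields the second imaging data 6 or the third imaging data 7.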
According to a preferred embodiment, the flying platform 1 performs push-broom imaging along the arrangement direction of the linear array of probe elements 21 of at least one row of the sensor 2, as shown in fig. 4. The flying platform 1 controls the sensor 2 to perform push-broom imaging with an included angle α of 0°. With this arrangement, different probe elements 21 located in the same linear array image the same scene unit in sequence, as shown in fig. 5. A, B, C, D and E in fig. 5 are pixels of the same probe element 21 imaging the same scene in chronological order. Preferably, chronological order means from first to last in time. A refers to the first region in the same scene; B refers to a second region adjacent to the first region; C refers to a third region adjacent to the second region; D refers to a fourth region adjacent to the third region; E refers to a fifth region adjacent to the fourth region. As shown in fig. 5, each column contains A, B, C, D, E: the pixels A, B, C, D, E in the same column are the column of pixels imaged by the same probe element 21, while the A (or B, or C) pixels in the same row are pixels imaged by different probe elements 21.
Preferably, the linear array of probe elements 21 of at least one row of the sensor 2 is arranged in a direction defining an included angle α that is neither parallel nor perpendicular to the along-track direction of the flying platform 1. Preferably, at least one linear array of probe elements 21 of the sensor 2 whose included angle α is not 0° is rotatable. After rotation, the included angle α is neither 0° nor 90°, that is, the linear array of probe elements 21 is arranged in a direction neither parallel nor perpendicular to the along-track direction of the flying platform 1. By performing push-broom imaging in this arrangement, the sensor 2 can image a scene over a large area before the linear array of probe elements 21 with included angle α of 0° images it, and can trigger the first preset event 31, the second preset event 32 and the third preset event 33 with the greatest probability, so that real-time information of the calibration scene is obtained in advance and trigger information of the first preset event 31, the second preset event 32 and the third preset event 33 is provided for the subsequent calibration imaging. Moreover, while the linear array of probe elements 21 with included angle α of 0° is imaging, the dynamic change of the calibration scene can be obtained in real time from the wide-area imaging of the calibration scene, so that the mean value and mean square error of the dynamic change of the radiation characteristics of the calibration scene are provided to the subsequent correction module 9, and the correction gain and bias corresponding to each pixel are estimated based on the radiation mean value and mean square error of the calibration scene.
Preferably, the pre-calibration processing module 8 performs pre-calibration processing on the imaging data in response to instructions from the computing load 11 and/or the computing terminal 4. Preferably, the computing load 11 and/or the computing terminal 4 sends a processing instruction to the pre-calibration processing module 8 upon termination of the preset event 3. Preferably, the pre-calibration processing module 8 generates an initial image based on the imaging data, which comprises at least the first imaging data 5, the second imaging data 6 and the third imaging data 7. Preferably, the initial image is as shown in fig. 5. The initial image is in units of pixels generated by the probe elements 21. For example, the first probe element of the linear array of probe elements 21 with included angle α of 0° generates an A pixel at the first imaging sampling along the flight direction of the flying platform 1. At the second imaging sampling the first probe element generates a B pixel and the second probe element generates an A pixel. At the third sampling the first probe element generates a C pixel, the second probe element generates a B pixel, and the third probe element generates an A pixel.
Preferably, the pre-calibration processing module 8 performs de-noising processing on the initial image. Preferably, the initial image is formed from the pixels of each column of probe elements 21, and A, B, C, D, E in the initial image are arranged diagonally. Since the pixels of each column are imaged by different probe elements 21, whose response functions differ, and since the radiation characteristics of the calibration scene are dynamic, the A pixels of different columns in the initial image differ, producing diagonal stripe noise in the initial image. Preferably, the de-noising processing filters the image data of the initial image to remove the low-frequency noise. Preferably, an exponential filter may be used. In this arrangement, the initial image is Fourier-transformed to obtain its frequency-domain spectrum, the low-frequency noise components are filtered out to leave the high-frequency components of the initial image, and by amplifying the high-frequency components the details of the diagonal straight lines formed by pixels imaging the same scene unit can be enhanced.
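The de-noising and enhancement step above can be sketched as a frequency-domain high-frequency emphasis. This is a hedged illustration, not the patent's exact filter: the exponential response, the cutoff `d0` and the `boost` factor are all assumed parameters:

```python
import numpy as np

def suppress_low_freq_stripes(img, d0=10.0, boost=1.5):
    """Frequency-domain de-striping sketch: Fourier-transform the
    image, attenuate low-frequency components with an exponential
    response, and amplify the remaining high frequencies."""
    F = np.fft.fftshift(np.fft.fft2(img.astype(float)))
    rows, cols = img.shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)  # distance from DC
    H = 1.0 - np.exp(-(D / d0) ** 2)   # exponential high-pass response
    filtered = F * (1.0 + boost * H)   # high-frequency emphasis
    return np.real(np.fft.ifft2(np.fft.ifftshift(filtered)))
```

Because H is 0 at the DC term, the overall brightness (image mean) is preserved while the diagonal line details carried by the high frequencies are strengthened.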
Preferably, the pre-calibration processing module 8 performs the shifting on the basis of the gray value of each pixel in the de-noised and enhanced initial image. Preferably, since the initial image contains a plurality of pixels arranged diagonally, the straight line formed by these pixels can be detected by the LSD (Line Segment Detector) method. Preferably, the steps of LSD line detection are as follows:
a. The scale factor is set to 1, which means that no Gaussian down-sampling is performed. Preferably, down-sampled data cannot be used for calibration, or causes calibration to fail, because the sampling destroys the nonlinear response relationship between the probe elements 21 in the original image.
b. The gradient value and gradient direction of each pixel are calculated and pseudo-ordered. Preferably, the larger the gradient value, the more prominent the edge point, and thus the more suitable it is as a seed point. However, because the time overhead of fully sorting the gradient values is too large, the gradient values are simply divided into 1024 levels covering the gradient range from 0 to 255, so that the ordering takes only linear time. Preferably, seed points are searched downward in order starting from level 1024 with the highest gradient value; pixels with the same gradient level are placed in the same linked list, giving 1024 linked lists, and a state table containing the 1024 linked lists is assembled in order of gradient value. All points in the state table are initialized to the not-used state.
c. Points with gradient value smaller than p are excluded from use; the point with the largest gradient value in the state table is taken out, and starting from that point a search is made in the surrounding directions within the angle tolerance. Preferably, searching in the surrounding directions within the angle tolerance means performing region growing along directions whose gradient angles are similar. Preferably, a rectangle R is generated by rectangle fitting of the grown region. Preferably, p may be the expectation of all gradient values, or may be set manually. Preferably, points with gradient values smaller than p tend to appear in smooth regions or to contain only low-frequency noise, and would severely affect the calculation of the line angle. Therefore, in LSD, pixels whose gradient magnitude is less than p are rejected from participating in the construction of the rectangle R. Preferably, the rectangle fitting of the grown region is essentially a shifting process of the gray values of the pixel data, not a sampling of the data points.
d. It is judged whether the density of aligned points in the rectangle R meets a threshold F. Preferably, if the threshold F is not met, the rectangle R is truncated into a plurality of rectangular frames that each meet the threshold F. Preferably, the threshold F may be set to one third of the number of probe elements 21 of the sensor 2 actually participating in imaging, so that short straight lines can be eliminated.
Through the above arrangement, the diagonal rectangle R formed by like pixels in the initial image can be detected, and the included angle α between the diagonal line and the along-track axis of the flying platform can be obtained.
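The pseudo-ordering of step b can be sketched as linear-time bucket binning of the gradient magnitudes into 1024 levels; the function name and the flat-index output are illustrative:

```python
import numpy as np

def pseudo_sort_gradients(grad_mag, n_bins=1024):
    """Pseudo-ordering step of LSD: quantize gradient magnitudes
    (range 0..255) into 1024 bins and visit pixels from the strongest
    bin downward, in linear time rather than a full sort. Returns
    flat pixel indices in approximate descending-gradient order."""
    bins = [[] for _ in range(n_bins)]
    flat = grad_mag.ravel()
    idx = np.minimum((flat / 255.0 * (n_bins - 1)).astype(int), n_bins - 1)
    for p, b in enumerate(idx):
        bins[b].append(p)          # each bin plays the linked-list role
    order = []
    for b in range(n_bins - 1, -1, -1):  # highest gradient level first
        order.extend(bins[b])
    return order
```

Region growing would then take seed points from the front of this order, skipping pixels already marked as used.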
Preferably, the pre-calibration processing module 8 processes the initial image after the line detection according to the included angle α and the following formula, so that the pixels in the same row of the initial image are images of the same scene unit by different probe elements 21, and the pixels in the same column are images of the same probe element 21, as shown in fig. 6. The formula is:

DN′[m + n·t] = DN[(m + n·K1) + n·t]

wherein DN is the one-dimensional gray data of the initial image stored line by line, DN[m + n·t] denotes the gray value of the pixel in the m-th row and n-th column of the initial image, t is the number of probe elements 21 participating in imaging, and K1 = tan α, K2 = tan(90° − α).
Preferably, the correction module 9 corrects the pre-calibrated imaging data in response to instructions from the computing load 11 and/or the computing terminal 4. Preferably, the correction module 9 performs real-time correction of the pre-calibrated imaging data based on the degree and speed of dynamic characteristic change of the ground scene over the duration of the preset event. Preferably, the correction module 9 judges whether the degree of dynamic characteristic change of the ground scene within the duration of the preset event exceeds a first threshold. Preferably, the duration refers to the imaging time of the line of probe elements 21 for the preset event 3. Preferably, the first threshold refers to the degree or rate of change of the reflectivity, over the imaging period, of the scene used for relative radiometric calibration, for example 2% or 5%. Preferably, in the case that the degree of dynamic characteristic change does not exceed the first threshold, the pre-calibrated imaging data is corrected in real time using a linear correction model. Preferably, when the rate of change does not exceed 2%, the statistical mean of the signal output by each probe element 21 can be considered approximately constant and the statistical variances of the signals input to the probe elements 21 are equal, so a linear correction model can be used for the correction. The linear correction model is:
Y_i(n) = K_i(n) · X_i(n) + B_i(n)

wherein n is the number of iterations of the linear correction model, X_i(n) is the original output of the i-th probe element 21, and Y_i(n) is the corrected output value; K_i(n) is the multiplier term of the correction model, i.e. the gain, and B_i(n) is the constant term of the correction model, i.e. the bias. Preferably, the mean square error of the ground scene can be used to estimate the multiplier term of the pixel's correction model, and the mean value together with the mean square error of the ground scene can be used to estimate the constant term. Preferably, let a_i(n) be the mean square error of the imaged scene, β_i(n) be the mean of the imaged scene, and ā(n) and β̄(n) be the reference mean square error and mean of the calibration scene. The iterative formulas are:

K_i(n) = ā(n) / a_i(n)

B_i(n) = β̄(n) − K_i(n) · β_i(n)
Through the above two iterative formulas, the imaging data after the pre-calibration processing can be corrected in real time. The linear model used for correction keeps the algorithm simple and of low complexity, and the calibration coefficients can be adaptively adjusted according to the dynamic change of the calibration scene.
Preferably, in the case that the degree of change of the dynamic characteristics exceeds the first threshold, the duration of the preset event is segmented into first unit times. Preferably, the imaging period of the calibration scene in which the degree of change of the dynamic characteristics exceeds the first threshold is subdivided, following the idea of differentiation, to find within it at least one first time set in which the degree of change of the dynamic characteristics does not exceed the first threshold. Preferably, the first time set comprises a plurality of adjacent first unit times, that is, the first unit times are consecutive without interruption. Preferably, it is determined whether the ratio of the number of first unit times in the first time set to the degree of change of the dynamic characteristics within the first time set satisfies a second threshold. Preferably, the second threshold is the ratio of the imaging time of 1024 pixels to a 5% degree of change of the dynamic characteristics of the scene within that imaging time. Preferably, if the ratio does not satisfy the second threshold, the statistical mean of the signal output by each probe element 21 is considered not to be constant, and real-time correction cannot be performed. Preferably, in the case that the ratio of the number of first unit times in the first time set to the degree of change of the dynamic characteristics within the first time set satisfies the second threshold, the statistical mean of the signal output by each probe element 21 is considered approximately constant, and the linear correction model may be used to correct the pre-calibrated imaging data in real time.
With this arrangement, the principle of differentiation can be used to find, within the imaging period, time segments over which the statistical mean and variance of the output signal of the probe elements 21 are approximately constant, and the linear correction model is constructed from the mean and variance of the calibration scene within those segments. In practice, during relative radiometric calibration the radiation characteristics of most calibration scenes change slowly, or remain nearly constant within a certain time period; exploiting this property, the calibration residual can be corrected in real time within the period in which all probe elements 21 image the scene, improving the accuracy of the calibration data.
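The segmentation into first unit times and the search for usable first time sets can be sketched as follows. This is a simplified illustration: the per-unit change-degree metric and the form of the ratio test are assumptions, and the function names are illustrative:

```python
def find_stable_segments(change_degree, first_threshold):
    """Return maximal runs (start, end) of consecutive first unit times whose
    dynamic-characteristic change degree does not exceed first_threshold.

    change_degree: sequence with one change-degree value per first unit time.
    """
    segments, start = [], None
    for t, d in enumerate(change_degree):
        if d <= first_threshold:
            if start is None:
                start = t          # a stable run begins
        elif start is not None:
            segments.append((start, t - 1))  # run broken by a large change
            start = None
    if start is not None:
        segments.append((start, len(change_degree) - 1))
    return segments

def segment_usable(segment, change_degree, second_threshold):
    """Check whether the ratio of the number of first unit times in the
    segment to the accumulated change degree inside it meets second_threshold."""
    start, end = segment
    n_units = end - start + 1
    total_change = sum(change_degree[start:end + 1]) or 1e-9  # avoid div by 0
    return n_units / total_change >= second_threshold
```

A segment that passes `segment_usable` would then be corrected with the linear model; segments that fail are skipped, since the statistical mean of the probe output cannot be treated as constant there.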
Preferably, after the correction by the correction module 9, the computing load 11 and/or the computing terminal 4 obtain, based on the imaging data corrected in real time, the first calibration data parameter 5a of the first imaging data 5, the second calibration data parameter 6a of the second imaging data 6 and the third calibration data parameter 7a of the third imaging data 7 by histogram normalization calculation. Preferably, the calibration parameters are calculated according to a probe-histogram-based rule, with the following processing flow:
1. A cumulative probability distribution function is established for each probe element from the processed initial image according to the following formula, and the cumulative probability distribution function of one probe element is selected as the ideal reference cumulative probability distribution function:

F_j(k) = (Σ_{i=0}^{k} pn(i)) / dpn(j)

wherein k is the imaging grey level of the probe element, pn(i) is the number of pixels whose grey level is i, and dpn(j) is the total number of pixels imaged by the j-th probe element.
2. Taking the ideal reference cumulative probability distribution function as a reference, histogram regularization is performed on the cumulative probability distribution function of each probe element according to the following formulas to obtain the relative radiometric calibration parameters of each probe element:

f(k−x) ≤ f(k) ≤ f(k+y)

k′ = arg min_{k−x ≤ m ≤ k+y} | F_ref(m) − F_j(k) |

wherein the values of x and y range over [2, 2^bits − 1], and bits is the quantization bit depth with which the sensor 2 acquires the sensor image. Through this algorithm, the radiation response difference between the probe elements 21 can be better reflected, and after the corresponding relative radiometric calibration parameters are applied, the value distribution variation of each whole column conforms to the law of actual scene change and to the radiance difference between CCD taps.
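The per-probe cumulative distribution and the histogram regularization step can be sketched as follows. This is a simplified illustration: `probe_cdf` and `match_to_reference` are hypothetical names, and the nearest-match search here ignores the [k−x, k+y] window constraint for brevity:

```python
import numpy as np

def probe_cdf(pixels, levels):
    """Cumulative probability distribution F_j(k) of one probe element:
    the fraction of its pixels whose grey level is <= k."""
    counts = np.bincount(pixels, minlength=levels)  # pn(k) per grey level
    return np.cumsum(counts) / pixels.size          # divided by dpn(j)

def match_to_reference(pixels, ref_cdf, levels):
    """Histogram regularization: map each grey level k of this probe element
    to the reference grey level whose cumulative probability is closest to
    F_j(k), then apply the resulting look-up table to the pixels."""
    cdf = probe_cdf(pixels, levels)
    lut = np.array([int(np.argmin(np.abs(ref_cdf - cdf[k])))
                    for k in range(levels)])
    return lut[pixels]
```

Applying `match_to_reference` to every probe element with a common reference CDF equalizes the column statistics, which is the effect the relative radiometric calibration parameters are meant to achieve.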
It should be noted that the above-mentioned embodiments are exemplary, and that those skilled in the art, having benefit of the present disclosure, may devise various arrangements that are within the scope of the present disclosure and that fall within the scope of the invention. It should be understood by those skilled in the art that the present specification and figures are illustrative only and are not limiting upon the claims. The scope of the invention is defined by the claims and their equivalents.

Claims (10)

1. A method for scene-based radiometric calibration, the method comprising:
under the condition that a flying platform (1) passes through a uniform scene, performing push-broom imaging in such a way that the along-track direction of the flying platform (1) forms an included angle alpha with the arrangement direction of the linear array of probe elements (21) of a sensor (2), so as to acquire imaging data of the ground scene,
wherein the arrangement direction of at least one row of the linear array of probe elements (21) is constrained by the included angle alpha so as to be neither parallel nor perpendicular to the along-track direction of the flying platform (1);
performing a pre-scaling process based on the imaging data.
2. The radiometric calibration method of claim 1, wherein, during the push-broom imaging to acquire imaging data of the ground scene,
recording a trigger time point, a starting probe element, a termination time point and a termination probe element of a preset event (3) based on the triggering of the preset event (3), wherein,
the preset events (3) at least comprise a first preset event (31) corresponding to a high-reflectance scene, a second preset event (32) corresponding to a medium-reflectance scene and a third preset event (33) corresponding to a low-reflectance scene, wherein the first preset event (31) is constructed on the basis of information sent by other flight platforms (1) and prior knowledge.
3. A method of radiometric calibration according to claim 2, characterized in that the first preset events (31) comprise at least a first termination event (311) for terminating the imaging of the first preset events (31), the second preset events (32) comprise at least a second termination event (321) for terminating the imaging of the second preset events (32), and the third preset events (33) comprise at least a third termination event (331) for terminating the imaging of the third preset events (33).
4. A radiometric calibration method according to claim 3, characterized in that, in case of at least one of the triggering of the first (31), second (32) and third (33) preset event, the following steps are performed:
recording a first trigger time point and a first initial probe element triggered by the first preset event (31), a second trigger time point and a second initial probe element triggered by the second preset event (32) and a third trigger time point and a third initial probe element triggered by the third preset event (33);
recording a first termination time point and a first termination probe element triggered by a first termination event (311), a second termination time point and a second termination probe element triggered by a second termination event (321), and a third termination time point and a third termination probe element triggered by a third termination event (331);
the imaging data is classified to form first imaging data (5) responsive to a high reflectance scene in a dynamic range, second imaging data (6) responsive to a medium reflectance scene, and third imaging data (7) responsive to a low reflectance scene.
5. The radiometric calibration method according to claim 4, characterized in that, before the flight platform (1) triggers the recording of the preset event (3), the following steps are performed:
the flight platform (1) performs push-broom imaging in the arrangement direction of the linear arrays of the probe elements (21) of at least one row of the sensors (2), so that different probe elements (21) sequentially image the same scene unit.
6. A radiometric calibration method according to claim 5, characterized in that the pre-calibrated imaging data is corrected in real time based on the extent of dynamic characteristic variations of the ground scene over the duration of the preset event (3).
7. The radiometric calibration method of claim 6, wherein the pre-calibration processing based on the imaging data is performed as follows:
constructing an initial image in units of picture elements generated by the probe (21) based on imaging data comprising at least the first imaging data (5), the second imaging data (6) and the third imaging data (7);
denoising the initial image, and performing high-frequency amplification by taking the pixel as a unit so as to enhance the details of a straight line formed by the pixels imaging the same scene unit in the initial image;
and shifting, based on the grey value of each pixel after denoising and enhancement, so that pixels in the same row of the initial image correspond to different probe elements (21) imaging the same scene unit, and pixels in the same column correspond to the same probe element (21).
8. The radiometric calibration method of claim 7, wherein the step of real-time correcting the pre-calibrated imaging data is as follows:
judging whether the dynamic characteristic change degree of the ground scene in the duration time of the preset event exceeds a first threshold value or not;
and under the condition that the dynamic characteristic change degree does not exceed the first threshold value, utilizing a linear correction model to perform real-time correction on the pre-calibrated imaging data.
9. The radiometric calibration method according to claim 8, characterized in that, in case the degree of dynamic characteristic variation exceeds a first threshold value, the following steps are performed:
segmenting the duration of the preset event according to a first unit time;
searching for at least one first time set formed of consecutive first unit times in which the degree of change of the dynamic characteristics does not exceed the first threshold;
determining whether a ratio of a number of first units of time in a first set of times to the degree of change in the dynamic characteristic in the first set of times satisfies a second threshold, wherein,
and under the condition that the ratio of the number of first unit times in the first time set to the degree of change of the dynamic characteristics within the first time set satisfies the second threshold, correcting the pre-calibrated imaging data in real time by using a linear correction model.
10. A scene-based radiometric calibration system, comprising at least a flying platform (1) and a sensor (2), characterized in that it further comprises a pre-calibration processing module (8), wherein,
under the condition that a flying platform (1) passes through a uniform scene, a ground base station or a computing load (11) of the flying platform (1) controls the probe elements (21) of a sensor (2) to perform push-broom imaging in such a way that the along-track direction of the flying platform (1) forms an included angle alpha with the arrangement direction of the linear array of probe elements, so as to acquire imaging data of the ground scene,
wherein the arrangement direction of at least one row of the linear array of probe elements (21) is constrained by the included angle alpha so as to be neither parallel nor perpendicular to the along-track direction of the flying platform (1);
and the pre-calibration processing module (8) performs pre-calibration processing based on the imaging data.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010515427.0A CN111815525B (en) 2019-12-11 2019-12-11 Scene-based radiation calibration method and system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911262809.0A CN110689505B (en) 2019-12-11 2019-12-11 Scene-based satellite-borne remote sensing instrument self-adaptive correction method and system
CN202010515427.0A CN111815525B (en) 2019-12-11 2019-12-11 Scene-based radiation calibration method and system

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201911262809.0A Division CN110689505B (en) 2019-12-11 2019-12-11 Scene-based satellite-borne remote sensing instrument self-adaptive correction method and system

Publications (2)

Publication Number Publication Date
CN111815525A true CN111815525A (en) 2020-10-23
CN111815525B CN111815525B (en) 2024-04-09

Family

ID=69117777

Family Applications (3)

Application Number Title Priority Date Filing Date
CN202010515427.0A Active CN111815525B (en) 2019-12-11 2019-12-11 Scene-based radiation calibration method and system
CN202010515300.9A Active CN111815524B (en) 2019-12-11 2019-12-11 Correction system and method for radiation calibration
CN201911262809.0A Active CN110689505B (en) 2019-12-11 2019-12-11 Scene-based satellite-borne remote sensing instrument self-adaptive correction method and system

Family Applications After (2)

Application Number Title Priority Date Filing Date
CN202010515300.9A Active CN111815524B (en) 2019-12-11 2019-12-11 Correction system and method for radiation calibration
CN201911262809.0A Active CN110689505B (en) 2019-12-11 2019-12-11 Scene-based satellite-borne remote sensing instrument self-adaptive correction method and system

Country Status (1)

Country Link
CN (3) CN111815525B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112257343B (en) * 2020-10-22 2023-03-17 上海卫星工程研究所 High-precision ground track repetitive track optimization method and system
CN112954239B (en) * 2021-01-29 2022-07-19 中国科学院长春光学精密机械与物理研究所 On-board CMOS image dust pollution removal and recovery system and recovery method
CN117576362B (en) * 2024-01-16 2024-05-24 国科大杭州高等研究院 Low-resolution thermal infrared image aircraft identification method based on shielding ratio

Citations (3)

Publication number Priority date Publication date Assignee Title
US20120098935A1 (en) * 2010-10-21 2012-04-26 Sony Corporation 3d time-of-flight camera and method
CN107093196A (en) * 2017-04-10 2017-08-25 武汉大学 The in-orbit relative radiometric calibration method of video satellite area array cameras
CN110120077A (en) * 2019-05-06 2019-08-13 航天东方红卫星有限公司 A kind of in-orbit relative radiometric calibration method of area array cameras based on attitude of satellite adjustment

Family Cites Families (14)

Publication number Priority date Publication date Assignee Title
DE19703629A1 (en) * 1997-01-31 1998-08-06 Daimler Benz Aerospace Ag Method for autonomously determining the position of a satellite
AU2010206766A1 (en) * 2009-01-22 2011-08-11 Kenneth Oosting Actuated feedforward controlled solar tracking system
CN102469580A (en) * 2010-11-18 2012-05-23 上海启电信息科技有限公司 mobile positioning service system based on wireless sensing technology
CN104267739A (en) * 2014-10-17 2015-01-07 成都国卫通信技术有限公司 Satellite signal tracking device and method
CN105222788B (en) * 2015-09-30 2018-07-06 清华大学 The automatic correcting method of the matched aircraft Route Offset error of feature based
CN105300407B (en) * 2015-10-09 2018-10-23 中国船舶重工集团公司第七一七研究所 A kind of marine dynamic starting method for single axis modulation laser gyro inertial navigation system
CN105447853B (en) * 2015-11-13 2018-07-13 深圳市道通智能航空技术有限公司 Flight instruments, flight control system and method
CN105551053A (en) * 2015-12-01 2016-05-04 中国科学院上海技术物理研究所 Fast geometric precise correction method of mini-planar array satellite-borne TDI CCD camera
CN106600589B (en) * 2016-12-09 2019-08-30 中国科学院合肥物质科学研究院 A kind of spaceborne spectrometer direction method for registering based on coastline regional remote sensing figure
CN107705267B (en) * 2017-10-18 2020-06-26 中国科学院电子学研究所 Optical satellite image geometric correction method based on control vector
CN108776955B (en) * 2018-04-16 2020-08-18 国家卫星气象中心 Real-time correction method and correction device for remote sensing image
CN109188468B (en) * 2018-09-13 2021-11-23 上海垣信卫星科技有限公司 Ground monitoring system for monitoring satellite running state
CN110009688A (en) * 2019-03-19 2019-07-12 北京市遥感信息研究所 A kind of infrared remote sensing image relative radiometric calibration method, system and remote sensing platform
CN110411444B (en) * 2019-08-22 2024-01-09 深圳赛奥航空科技有限公司 Inertial navigation positioning system and positioning method for underground mining mobile equipment

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
US20120098935A1 (en) * 2010-10-21 2012-04-26 Sony Corporation 3d time-of-flight camera and method
CN107093196A (en) * 2017-04-10 2017-08-25 武汉大学 The in-orbit relative radiometric calibration method of video satellite area array cameras
CN110120077A (en) * 2019-05-06 2019-08-13 航天东方红卫星有限公司 A kind of in-orbit relative radiometric calibration method of area array cameras based on attitude of satellite adjustment

Non-Patent Citations (2)

Title
ZHANG Guo; LI Litao: "Field-free relative radiometric calibration of Yaogan-25", Acta Geodaetica et Cartographica Sinica, no. 08 *
ZHAO Yan; YI Weining; DU Lili; HUANG Honglian: "Research on a relative correction algorithm for remote sensing images based on uniform sites", Journal of Atmospheric and Environmental Optics, no. 02 *

Also Published As

Publication number Publication date
CN111815525B (en) 2024-04-09
CN110689505B (en) 2020-07-17
CN111815524B (en) 2024-04-23
CN110689505A (en) 2020-01-14
CN111815524A (en) 2020-10-23


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant