CN114708204B - Push-broom imaging optimization processing method and system - Google Patents

Push-broom imaging optimization processing method and system

Info

Publication number
CN114708204B
CN114708204B (granted publication of application CN202210291593.6A)
Authority
CN
China
Prior art keywords
pixel array
pixel
contrast
imaging
signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210291593.6A
Other languages
Chinese (zh)
Other versions
CN114708204A (en)
Inventor
谢成荫
杜志贵
杨峰
任维佳
Current Assignee
Spacety Co ltd Changsha
Original Assignee
Spacety Co ltd Changsha
Priority date
Filing date
Publication date
Application filed by Spacety Co ltd Changsha filed Critical Spacety Co ltd Changsha
Priority to CN202210291593.6A priority Critical patent/CN114708204B/en
Publication of CN114708204A publication Critical patent/CN114708204A/en
Application granted granted Critical
Publication of CN114708204B publication Critical patent/CN114708204B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS — G06 Computing; calculating or counting — G06T Image data processing or generation, in general
    • G06T7/0002 — Image analysis; inspection of images, e.g. flaw detection
    • G06T2207/10032 — Indexing scheme for image analysis; satellite or aerial image; remote sensing
    • G06T2207/30168 — Image quality inspection


Abstract

The invention relates to a push-broom imaging optimization processing method and system. When a sensor probe performs push-broom imaging, a computing terminal analyzes the signal-to-noise ratio and contrast of the pixel array acquired by the push-broom imaging and obtains correction parameters comprising at least the signal-to-noise ratio and contrast of the pixel array. After the pixels in the pixel array are partitioned based on the correction parameters, the crosstalk areas are eliminated to generate an initial residual image. Denoising is then performed on the initial residual image, the high-frequency content of each pixel's initial residual image is amplified to generate an enhanced residual image, and the enhanced residual image is proportionally enlarged and cropped to replace the initial residual image, thereby generating a first pixel array that enhances imaging details; the imaging effect is finally optimized based on the gray value of each probe element in the first pixel array.

Description

Push-broom imaging optimization processing method and system
Divisional application statement:
This application is a divisional application of the patent application No. 201910920501.4, filed on September 26, 2019, entitled "Imaging quality assessment method based on a satellite-borne remote sensing camera".
Technical Field
The invention belongs to the technical field of remote sensing imaging, and particularly relates to a push-broom imaging optimization processing method and system.
Background
The push-broom space remote sensing camera performs scanning imaging by using the relative motion between the satellite and the ground. In a fixed orbit the relative velocity of the satellite and the ground remains substantially constant, so the higher the resolution, the shorter the integration time. To solve the problem of insufficient exposure energy caused by the short integration time, a TDI (Time Delay Integration) CCD is generally adopted as the remote sensing camera detector. The advantage of the TDI CCD device is that the same target is integrated multiple times by multi-stage photosensitive elements during motion, and the weak signals of all stages are superposed to obtain an enhanced signal, thereby improving the signal-to-noise ratio of the system. However, TDI CCD imaging places high requirements on satellite attitude control accuracy; any control accuracy error introduces a large TDI CCD accumulated error, which reduces the imaging Modulation Transfer Function (MTF), and the larger the number of integration stages, the more obvious the MTF reduction. To ensure TDI CCD imaging quality, researchers have conducted corresponding research work.
For example, literature [1] (Wang D, Zhang T, Kuang H. Clocking smear analysis and reduction for multi-phase TDI CCD in remote sensing system [J]. Optics Express, 2011, 19(6): 4868-80) conducted an in-depth analysis of the blurring phenomenon caused by the clock, and proposed to reduce it by changing the timing relationship of the transfer clock, thereby ensuring that the MTF is not lowered.
For example, document [2] (Deng X. Factors Affecting the TDI-CCD Camera Image Quality in the Lunar Orbiter [J]. Spacecraft Engineering, 2010) discloses the effect of image-shift matching on the MTF, with analysis and mathematical modeling. Document [3] (Zhuang Xuxia, Wang Zhile, Ruan Ningjuan, et al. Analysis of the influence of image shift on the imaging quality of a satellite-borne TDI CCD camera [J]. Spacecraft Recovery and Remote Sensing, 2013(6)) analyzes the mechanism by which image shift causes image blurring and geometric deformation, and studies evaluation parameters for image-shift degradation and geometric aspects. Based on the working principle of the TDI CCD, a space-variant degradation model of the TDI CCD camera under the influence of image motion is established; with this model, the image quality degradation under image motion of different forms and directions is calculated and analyzed, the degradation caused by non-push-broom image motion is analyzed from the two angles of image blurring and geometric deformation, and the relation between image motion, the spread function and the transfer function is established.
For example, document [4] (In-depth analysis of the signal-to-noise ratio of TDI-CCD cameras [J]. Infrared Technology, 2008, 30(12): 683-687) specifically studies the signal-to-noise ratio and noise reduction techniques of the TDI CCD. However, the above documents each start from only one of the two aspects, signal-to-noise ratio and image shift, and do not consider the constraint relation among signal-to-noise ratio, image shift and MTF. In fact, increasing the number of integration stages is beneficial to improving the signal-to-noise ratio of the camera, but places higher requirements on the attitude control precision of the satellite. If the control precision of the satellite attitude is not satisfactory, the MTF of the camera is lowered. A compromise must therefore be chosen in setting the number of integration stages. The signal-to-noise ratio of an image reflects its radiometric resolution, while the MTF reflects its spatial resolution. Therefore, document [5] (Xue Xucheng, Dan Junxia, Lv Hengyi, et al. Optimization setting of the TDI CCD integration stages and gain of a space remote sensing camera [J]. Optics and Precision Engineering, 2011(4): 857-863) proposes to combine the signal-to-noise ratio and the MTF as the optimization index of image quality and to optimize the number of integration stages. Increasing the integration stages can solve the problem of insufficient light energy and effectively improve the signal-to-noise ratio, but simultaneously reduces the MTF of the image; document [5] therefore solves the insufficient-light-energy problem by increasing the gain, and the increase of gain has no influence on the signal-to-noise ratio or the MTF.
Document [6] (Li Litao. No-field relative radiometric calibration of Remote Sensing No. 25 [J]. Acta Geodaetica et Cartographica Sinica, 2017(08): 75-82) discloses a method of radiometric calibration by no-field 90° yaw that does not depend on a uniform ground scene. The 90° yaw radiometric calibration rotates the satellite platform or the camera by 90° while correcting the drift angle caused by the earth's rotation, so that the linear-array CCD sensor is parallel to the push-broom direction of the satellite; the satellite then acquires radiometric calibration data by push-broom imaging along the orbit and performs relative radiometric calibration. The method further comprises the steps of: 1. stripe-noise suppression and contrast enhancement; 2. specifying the yaw calibration data; 3. solving the calibration parameters. The calibration image in the relative radiometric calibration method disclosed in document [6] can be evaluated using document [5], and corrected based on the evaluation result, so as to improve the calibration effect.
However, evaluating the relative radiometric calibration image disclosed in document [6] using document [5] presents several problems. First, the advantage of the TDI CCD is that the integration time can be increased to improve the signal-to-noise ratio, but the TDI CCD has strict requirements on image-shift matching, which become stricter as the number of integration stages increases; mismatched image shift lowers the MTF. In particular, when the remote sensing camera performs relative radiometric calibration by push-broom imaging of a uniform ground scene, the camera must, because of the earth's rotation, rotate by a corresponding angle according to the drift angle so that the CCD lines image along the actual image-motion direction; while rotating according to the drift angle, the CCD must also correct its integration time in real time. When the charge transfer speed of the CCD does not match the image motion speed, the MTF of the image drops significantly, so the integration time is strictly constrained. Second, when the CCD performs 90° yaw radiometric calibration, the pixels imaging the same scene in the calibration image are arranged diagonally, as shown in FIG. 3; since the imaging response functions of different probe elements to the same scene differ, stripe noise is generated in the image, and this stripe noise reduces the MTF. Finally, to guarantee that the data of the same ground object in different columns can serve as a reference during yaw radiometric calibration, the imaging data of the same ground object from each row of different probe elements is distinguished using the detected included angle of the yaw calibration data; an agile satellite therefore needs an accurate angle-measuring instrument, which increases the burden on the satellite platform, and because the calibration result depends on the accuracy of that included angle, the angle data itself must be calibrated, requiring additional overhead from the satellite payload. In addition, the dynamic radiation-characteristic changes of the scene, the uncertainty of the CCD linear-array steering and the uncertainty of the response of each CCD probe element bring great difficulty to the correction of non-uniformity. For example, by 90° yaw, i.e., with the arrangement direction of the probe-element array parallel to the imaging direction, imaging data as shown in FIG. 3 can be obtained. In FIG. 3, each row contains the pixels of the same calibration scene, e.g., A, B, C, D or E, imaged by the probe elements, and the pixels A, B, C, D, E in different columns represent pixels generated by the same probe element. In theory, ignoring other influencing factors, each probe element of the sensor images the same scene in turn: the first column of probe elements in FIG. 3 images pixels A, B, C, D, E in sequence, the second column images pixels A, B, C, D, E, and the third column images pixels A, B, C, D, E, so that each probe element images the same scene A. However, the dynamic radiation-characteristic changes of the scene, the uncertainty of the CCD linear-array steering and the uncertainty of each CCD probe element's response all cause the signal-to-noise ratio, contrast and sharpness of the same pixel generated by different probe elements to differ. Uncertainty of the CCD linear-array steering refers to the accuracy problem of CCD linear-array rotation: to reduce the influence of the earth's rotation, the CCD linear array must be rotated to compensate for the resulting drift angle so that it images along the actual flight direction of the satellite. However, due to vibration, changes in the space environment and inherent rotation deviation of the mechanism, the steering of the CCD linear array may have a certain uncertainty, so that the satellite cannot image along the arrangement direction of the CCD linear array; pixels of other calibration scenes may then appear in each row's image A of the same calibration scene in FIG. 4, i.e., crosstalk appears, and a large amount of stripe noise exists in the calibration image.
In summary, the imaging quality of the relative radiometric calibration needs to be evaluated: by evaluating the pixel array generated from the calibration scene, parameters such as the signal-to-noise ratio and contrast of the image can be estimated, so that the crosstalk part of each pixel is identified and locked; the calibration image with the crosstalk parts removed is the initial residual image. Extracting the part of the initial residual image with the best uniformity for relative calibration processing can greatly improve the accuracy of relative calibration.
Furthermore, on the one hand, differences arise from the understanding of those skilled in the art; on the other hand, since the inventors studied numerous documents and patents when making the present invention, the text does not recount all details and contents of the listed references in full. This by no means implies that the present invention lacks these prior-art features; on the contrary, the present invention may possess all of them, and the applicant reserves the right to add related prior art to the background section.
Disclosure of Invention
Aiming at the prior-art problem that the result of 90° yaw calibration depends on the accuracy of the included angle of the yaw radiometric calibration data, the invention evaluates the processed imaging data with respect to the uncertainty of that angle and performs a non-uniformity evaluation of each row of pixels according to the evaluation result. Since, owing to the angle uncertainty, some pixels in each row may contain imaging data of different scene units, each pixel in each row is evaluated for non-uniformity, only the imaging data of the same scene unit is extracted from each pixel, and the influence of non-uniform calibration-scene radiation caused by pixels containing data of different scene units is eliminated.
When the satellite platform performs push-broom imaging along the arrangement direction of the sensor probe elements, a load of the satellite platform and/or a computing terminal of a ground base station executes the following steps: performing quality evaluation based on the pixel array acquired by the push-broom imaging of the sensor probe elements, and acquiring at least the correction parameters of the pixel array; and, after partitioning the pixels in the pixel array based on the correction parameters, eliminating the crosstalk areas to generate an initial residual image. Aiming at the problem that some pixels may contain imaging data of different scene units because the 90° yaw calibration result depends on the accuracy of the included angle of the yaw radiometric calibration data, i.e., the problem of crosstalk areas, the imaging quality assessment method first obtains the initial crosstalk areas by a preliminary evaluation using the signal-to-noise ratio, and then re-evaluates them by contrast estimation to refine the crosstalk areas within the initial crosstalk areas, so that the crosstalk areas can be determined and identified accurately and completely; eliminating the crosstalk areas removes the influence of non-uniform calibration-scene radiation caused by pixels containing data of different scene units and improves the accuracy of the relative radiometric calibration.
According to a preferred embodiment, the load and/or the computing terminal performs the quality assessment steps as follows: estimating the signal-to-noise ratio of the pixel array based on priori knowledge; dividing an initial crosstalk area for the pixel array based on the signal-to-noise ratio of the pixel array, and performing contrast estimation; wherein the correction parameters include at least a signal-to-noise ratio estimate and a contrast estimate.
According to a preferred embodiment, the load and/or the computing terminal performs the signal-to-noise ratio estimation according to the following steps: dividing the pixel array on the basis of at least one pixel to form a plurality of sub-pixel arrays, and estimating local noise based on the sub-pixel arrays; the maximum gray value of the divided pixel array is used as the estimate of the signal. Based on the characteristic that the sensor probe element images pixel by pixel, the invention adopts a signal-to-noise-ratio estimation method built on the idea of blocking and partitioning, which improves the accuracy of the noise estimate as much as possible; moreover, in practical application, using the maximum gray value in the image as the signal estimate yields a signal-to-noise ratio closer to the practical value. By contrast, other noise-estimation methods, such as the laboratory integrating-sphere method, do not fully consider the influence of the complex space environment on the camera's signal-to-noise ratio in their theoretical estimation, so the theoretical estimate is higher than the signal-to-noise ratio obtained by laboratory measurement.
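The blockwise signal-to-noise estimation described above can be sketched as follows; the block size, the use of a low percentile to pick out flat sub-blocks, and the function name are illustrative assumptions rather than the patent's exact procedure.

```python
import numpy as np

def estimate_snr(pixel_array, block=8):
    """Blockwise SNR sketch: estimate noise from the quietest (flattest)
    sub-blocks and take the maximum gray value as the signal estimate.
    Block size and the 10th-percentile choice are assumptions."""
    h, w = pixel_array.shape
    stds = [pixel_array[i:i + block, j:j + block].std()
            for i in range(0, h - block + 1, block)
            for j in range(0, w - block + 1, block)]
    noise = max(np.percentile(stds, 10), 1e-12)  # flat-region noise level
    signal = pixel_array.max()                   # brightest response as signal
    return float(signal / noise)
```

For a nearly uniform calibration scene this reduces to roughly the maximum gray value divided by the standard deviation of the flattest blocks, matching the flat-region noise model described later in the embodiment.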
According to a preferred embodiment, when the load and/or computing terminal performs contrast estimation on the pixel array, it performs the following steps: performing straight-line detection on the pixels in the pixel array that image the same scene, so as to divide a plurality of rectangular areas; estimating the signal-to-noise ratio separately for each of the rectangular areas, and dividing an initial crosstalk area for each pixel within them. The method uses the LSD (Line Segment Detector) straight-line detection method to detect and divide the pixels in the pixel array that image the same scene into a plurality of inclined rectangular areas; performing the contrast estimation within these rectangular areas reduces the computation area and the time complexity of the contrast estimation and increases the accuracy and discrimination of the contrast estimation, so that the non-uniform area in each pixel can be effectively identified.
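LSD is a published line-detection algorithm; as a dependency-free illustration of the region division it enables here, the sketch below simply assumes that same-scene pixels lie along anti-diagonals of the 90° yaw image (as FIG. 3 suggests) and extracts each diagonal band directly, standing in for the detected inclined rectangles.

```python
import numpy as np

def same_scene_bands(pixel_array):
    """Group samples that image the same scene, assuming they lie on
    anti-diagonals of the yaw-calibration image. In the patent the
    oblique regions are found by LSD line detection; here the diagonal
    geometry is taken as known (a simplifying assumption)."""
    h, w = pixel_array.shape
    return [np.array([pixel_array[r, k - r]
                      for r in range(h) if 0 <= k - r < w])
            for k in range(h + w - 1)]
```

Each returned band plays the role of one inclined rectangular area on which the per-region signal-to-noise and contrast estimates are then computed.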
According to a preferred embodiment, the contrast estimation performed by the load and/or the computing terminal comprises at least: acquiring a first contrast estimate within the rectangular area based on the maximum and minimum luminance in the rectangular area; acquiring a second contrast estimate based on the first contrast estimate and the maximum and minimum luminance of each pixel in the rectangular area; and deleting the crosstalk area within the initial crosstalk area based on the second contrast estimate to generate the initial residual image of each pixel.
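A minimal sketch of the two-level contrast estimation above; the Michelson form and the rule for combining the two estimates are assumptions, since this summary only fixes that both estimates are built from luminance maxima and minima.

```python
import numpy as np

def michelson(lmax, lmin):
    """Michelson contrast from a luminance maximum and minimum."""
    return (lmax - lmin) / (lmax + lmin + 1e-12)

def contrast_estimates(region):
    """First estimate: contrast of the whole rectangular region.
    Second estimate: first estimate combined with the mean per-pixel
    (here per-row) contrast -- the averaging rule is an assumption."""
    c1 = michelson(region.max(), region.min())
    per_pel = [michelson(row.max(), row.min()) for row in region]
    c2 = 0.5 * (c1 + float(np.mean(per_pel)))
    return c1, c2
```

On a perfectly uniform region both estimates vanish, so only regions with genuine luminance spread contribute crosstalk candidates.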
According to a preferred embodiment, in case the load and/or the computing terminal calculates the second contrast estimate, the load and/or the computing terminal calculates the second contrast estimate based on the pixels generated at the first time within each rectangular region.
According to a preferred embodiment, the load and/or computing terminal deletes the crosstalk area within the initial crosstalk area to generate the initial residual image of each pixel as follows: the portion of each initial crosstalk area whose luminance exceeds the first threshold of the second contrast estimate is deleted.
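The deletion rule can be sketched as a brightness mask; the exact form of the "first threshold of the second contrast estimate" is not spelled out in this summary, so the threshold below (mean brightness scaled by the contrast estimate, with a hypothetical factor k) is a stand-in.

```python
import numpy as np

def remove_crosstalk(pel, c2, k=1.5):
    """Drop samples whose brightness exceeds a threshold derived from
    the second contrast estimate c2. The threshold form and the factor
    k are assumptions for illustration."""
    threshold = pel.mean() * (1.0 + k * c2)
    return pel[pel <= threshold]
```

Samples surviving the mask form the pixel's initial residual image; the bright outliers attributed to neighbouring calibration scenes are discarded.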
According to a preferred embodiment, the load and/or the computing terminal performs calibration data processing based on the initial residual image as follows: denoising is performed on the initial residual image, and the high-frequency content of each pixel's initial residual image is amplified to generate an enhanced residual image; the enhanced residual image is proportionally enlarged and cropped to replace the initial residual image, thereby generating a first pixel array that enhances the details of the straight line formed by pixels imaging the same scene unit; and the gray value of each probe element in the first pixel array is shifted so that each row of pixels in the first pixel array contains the images of the same scene formed by the several sensor probe elements, and each column of pixels contains the images formed by different sensor probe elements.
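The denoise → high-frequency amplification → proportional enlarge-and-crop chain can be sketched with plain numpy; the 3×3 mean filter, the unsharp gain alpha, the nearest-neighbour upscale and the centre crop are all illustrative choices, not the patent's prescribed operators.

```python
import numpy as np

def enhance_residual(img, alpha=1.0, scale=2):
    """Denoise with a 3x3 mean filter, amplify the high-frequency
    residue (unsharp masking), upscale by `scale` and centre-crop back
    to the original size; parameter values are assumptions."""
    h, w = img.shape
    pad = np.pad(img, 1, mode='edge')
    blur = sum(pad[i:i + h, j:j + w]
               for i in range(3) for j in range(3)) / 9.0
    sharp = img + alpha * (img - blur)             # boost high frequency
    up = np.kron(sharp, np.ones((scale, scale)))   # proportional enlargement
    top, left = (up.shape[0] - h) // 2, (up.shape[1] - w) // 2
    return up[top:top + h, left:left + w]          # crop to original size
```

Because the crop restores the original dimensions, the enhanced residual image can replace the initial residual image in the first pixel array without disturbing the row/column layout.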
An imaging quality evaluation system based on a satellite-borne remote sensing camera comprises at least a load and a sensor probe element carried on a satellite platform and a computing terminal of a ground base station in communication with the satellite platform. When the satellite platform performs push-broom imaging along the sensor probe-element arrangement direction, the load and/or the ground computing terminal is configured to: perform quality evaluation based on the pixel array acquired by the push-broom imaging of the sensor probe elements and acquire at least the correction parameters of the pixel array; and, after partitioning the pixels in the pixel array based on the correction parameters, eliminate the crosstalk areas to generate an initial residual image.
According to a preferred embodiment, the load and/or the computing terminal is configured to perform a quality assessment as follows: estimating the signal-to-noise ratio of the pixel array based on priori knowledge; dividing an initial crosstalk area for the pixel array based on the signal-to-noise ratio of the pixel array, and performing contrast estimation; wherein the correction parameters include at least a signal-to-noise ratio estimate and a contrast estimate.
Drawings
FIG. 1 is a schematic flow diagram of a preferred embodiment of the method of the present invention;
FIG. 2 is a block diagram of an operative embodiment of the system of the present invention;
FIG. 3 is an image array generated using 90° yaw relative radiation calibration in the prior art; and
Fig. 4 is an image array of the image array generated in fig. 3 after a prescribed process.
List of reference numerals
1: flight platform    2: sensor
3: computing terminal    11: load
Detailed Description
The following is a detailed description with reference to fig. 3 to 4.
Crosstalk area: because vibration, changes in the space environment and inherent rotation deviation of the mechanism can make the steering of the CCD linear array uncertain to some degree, the satellite cannot image along the arrangement direction of the CCD linear array; pixels of other calibration scenes may therefore appear among the pixels of the same calibration scene imaged in each row of FIG. 4, i.e., crosstalk occurs.
Example 1
This embodiment discloses an evaluation method, which may be an imaging quality evaluation method, a relative radiometric calibration imaging quality evaluation method, or an imaging quality evaluation method based on a satellite-borne remote sensing camera, and which can be realized by the system of the invention and/or other substitutable components. The method disclosed in this embodiment is implemented, for example, by the various components of the system of the invention. In the absence of conflict or contradiction, the preferred implementations of other embodiments may be incorporated in whole and/or in part into this embodiment.
Preferably, radiometric calibration is the process of establishing a data link between the amount of radiation and the output of the detector. Its purpose is to eliminate errors of the sensor itself and to determine the accurate radiation value at the entrance pupil of the sensor. The radiometric calibration technique of a remote sensing camera or sensor mainly comprises two parts: relative radiometric calibration (also called uniformity correction) and absolute radiometric calibration. Relative radiometric calibration corrects the differences in responsivity of the detector's different pixels (probe elements); the causes of such responsivity and bias differences include, besides differences in process level, other factors such as the non-uniformity of the sensor itself, non-uniformity introduced during operation of the sensor, non-uniformity related to external inputs, and the influence of the optical system. Since sensor technology is mature, current visible-light detection devices generally do not need uniformity correction, so relative radiometric calibration is mainly used for infrared wave bands. The responses of the several probe elements of the focal-plane device to radiation are inconsistent and follow no fixed relation, and the responsivity of general photosensitive elements (probe elements) is not linear, which brings great difficulty to the correction of non-uniformity. Preferably, the calibration uncertainty is determined by the various uncertainties in the transfer of the calibration measurement data chain. Different measurement chains introduce different measurement errors, which are finally synthesized into the calibration precision. The absolute accuracy of radiometric calibration is its metrological uncertainty.
The calibration uncertainty must be determined according to the guidelines for the expression of measurement uncertainty approved by the International Organization for Standardization: first a data model of the measurand and the various influence quantities is established, then the propagation relation of the influence quantities and the standard uncertainty of each component are obtained by partial differentiation of the data model, and finally the total combined standard uncertainty of the measurand is calculated according to the combined-uncertainty formula. Preferably, the uncertainty factors of calibration at least comprise the dynamic radiation-characteristic changes of the scene, the uncertainty of the CCD linear-array steering and the uncertainty of each CCD probe element's response. For example, by 90° yaw, i.e., with the arrangement direction of the probe-element array parallel to the imaging direction, imaging data as shown in FIG. 3 can be obtained. In FIG. 3, each row contains the pixels of the same calibration scene, e.g., A, B, C, D or E, imaged by the probe elements, and the pixels A, B, C, D, E in different columns represent pixels generated by the same probe element. In this arrangement, ignoring other influencing factors, each probe element on the sensor 2 theoretically images the same scene in turn. For example, the first column of probe elements in FIG. 3 images pixels A, B, C, D, E in sequence, the second column images pixels A, B, C, D, E, and the third column images pixels A, B, C, D, E, so that each probe element images the same scene A. However, the dynamic radiation-characteristic changes of the scene, the uncertainty of the CCD linear-array steering and the uncertainty of each CCD probe element's response all cause the signal-to-noise ratio, contrast and sharpness of the same pixel generated by different probe elements to differ.
Preferably, uncertainty of the CCD linear-array steering refers to the accuracy problem of CCD linear-array rotation. To reduce the influence of the earth's rotation, the CCD linear array must be rotated to compensate for the resulting drift angle so that it images along the actual flight direction of the satellite. However, due to vibration, changes in the space environment and inherent rotation deviation of the mechanism, the steering of the CCD linear array may have a certain uncertainty, so that the satellite cannot image along the arrangement direction of the CCD linear array; pixels of other calibration scenes may then appear in each row's image A of the same calibration scene in FIG. 4, i.e., crosstalk appears, and a large amount of stripe noise exists in the calibration image.
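The realignment that turns the diagonal FIG. 3 layout into the row-aligned FIG. 4 layout can be illustrated as a per-column row shift; the one-row-per-column offset and the wrap-around handling are simplifying assumptions made only for this sketch.

```python
import numpy as np

def realign_columns(arr):
    """Shift column c up by c rows so that same-scene pixels, which lie
    on diagonals in the 90-degree-yaw image, end up on the same row.
    Wrap-around via np.roll is a simplification of real edge cropping."""
    return np.column_stack([np.roll(arr[:, c], -c)
                            for c in range(arr.shape[1])])
```

With an ideal (uncertainty-free) steering, every row of the realigned array holds one scene only; steering uncertainty is exactly what breaks this property and produces the crosstalk described above.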
Preferably, as known from the radiometric calibration procedure, the remote sensing camera generally performs relative calibration first and then absolute calibration. There is a certain link and mutual influence between relative and absolute calibration. On the one hand, the accuracy of the relative calibration affects the pixels. For example, when the Modulation Transfer Function (MTF) is used as the pixel evaluation means, the MTF is essentially the image contrast; relative calibration corrects the non-uniformity of the probe elements, so if the relative calibration is inaccurate, the spatial contrast of the pixels necessarily carries an error, which affects the MTF of the image. On the other hand, absolute calibration does not affect the image quality, but it is carried out on the premise that the relative calibration is accurate; therefore, if the error of the relative calibration cannot be eliminated, the absolute calibration is necessarily affected, and the radiometric calibration error propagates through the process.
In summary, the imaging quality of the relative radiometric calibration needs to be evaluated: by evaluating the pixel array generated from the calibration scene, parameters such as the signal-to-noise ratio and contrast of the image can be estimated, so that the crosstalk parts in each pixel are identified and locked; removing the crosstalk parts and extracting the parts with the best uniformity for relative calibration processing can greatly improve the accuracy of relative calibration.
An imaging quality evaluation method based on a satellite-borne remote sensing camera comprises at least the steps shown in FIG. 1, as follows:
S100: the load 11 of the satellite platform 1 and/or the computing terminal 3 of the ground base station performs quality evaluation based on the pixel array acquired by push-broom imaging of the sensor probe elements. Preferably, at least the correction parameters of the pixel array can be obtained through the quality evaluation. Preferably, the satellite platform 1 performs push-broom imaging in a 90° yaw mode. Push-broom imaging refers to forming a linear-array or area-array sensor from CCDs made of semiconductor materials and, with a wide-angle optical system, sweeping a strip-shaped track like a brush across the whole field of view by means of the movement of the flight platform 1, thereby acquiring a two-dimensional image of the ground along the flight direction. Preferably, each sensor probe element generates one pixel for each calibration scene. Preferably, the push-broom imaging produces a pixel array as shown in FIG. 3.
Preferably, the quality evaluation of the load 11 and/or the computing terminal 3 comprises at least an evaluation of the signal-to-noise ratio and the contrast of the generated array of picture elements. Preferably, the correction parameters include at least a signal-to-noise ratio estimate and a contrast estimate. Preferably, the load 11 and/or the computing terminal 3 performs the quality assessment as follows:
S101: and estimating the signal-to-noise ratio of the pixel array based on the priori knowledge. Preferably, the CCD linear array probe of the space optical remote sensing camera is interfered by various random factors in radiation transmission and photoelectric conversion of imaging, so that various types of noise are generated. Preferably, the noise is always represented in the form of noise points in the imaged remote sensing image. The amount of noise, i.e., the magnitude of the signal-to-noise ratio, affects the imaging performance of the sensor. The quality assessment of the remote sensing image can be accomplished at least in part by estimating the noise or signal-to-noise ratio of the remote sensing image. Preferably, the signal-to-noise ratio is defined as the ratio of the square of the signal power to the square of the noise power. Preferably, the image may be divided into a flat region and a textured region. In human eye vision, noise in flat areas is of greater concern. The variance of the flat region is typically used as the image noise estimate. Preferably, the on-orbit testing of the signal-to-noise ratio can be based on relative radiometric scaling, selecting a region of uniform radiation, the scene, for imaging. Preferably, uniform scenes with different reflectivities, such as ice caps with high reflectivity, gobi with medium reflectivity, desert and water with low reflectivity, are selected, and the mean value and variance of the image are calculated to obtain signal and noise estimation respectively.
Preferably, the satellite platform 1 performs push-broom imaging in a 90° yaw mode, and the obtained image is divided in units of pixels. Preferably, for a pixel array as shown in FIG. 3, the image formed by the pixel array can be considered divisible into a flat area and a stripe area. Preferably, the pixel array may be partitioned into groups of 2 pixels or groups of 3 pixels. Preferably, since the flat regions and the edges have different local variances, the maximum of the local variances of the image may be used as the variance of the image signal, and the ratio of the maximum to the minimum local variance of the image is used as the signal-to-noise estimate of the image. Preferably, the local variance of each pixel in the array image is:

δ²(i,j) = 1/[(2p+1)(2q+1)] · Σ_{k=−p}^{p} Σ_{l=−q}^{q} [g(i+k, j+l) − μ_g]²
wherein μ_g is the local mean, which can be obtained by the following formula:

μ_g = 1/[(2p+1)(2q+1)] · Σ_{k=−p}^{p} Σ_{l=−q}^{q} g(i+k, j+l)
Where p and q define the regional window size of the local variance. For example, if 3 rows are selected for division into groups, then p=1 and q=3. Preferably, provided the noise is not so large as to drown out the image information, the local variance of a texture region is typically greater than that of a flat region containing noise. On this basis, regions with small variance are judged to be flat and regions with large variance to be textured. Preferably, a times the variance of the entire pixel array can also be used as the threshold separating texture regions from flat regions: where the local variance is greater than a times the variance of the entire pixel array, the region is judged to be textured; where it is smaller, the region is judged to be flat. Preferably, the mean of the local variances over the flat regions, (1/N)·Σ_N δ², is taken as the final noise estimate. Preferably, the maximum gray value in the pixel array is taken as the input signal, and the signal-to-noise ratio is estimated as the ratio of the square of the maximum gray value to (1/N)·Σ_N δ². Preferably, the parameter a is affected not only by the camera itself but also by the environment when the satellite is in orbit, and cannot be obtained quantitatively by analytical calculation. Preferably, a series of pictures with uniform gray distribution and almost no noise is selected, different levels of Gaussian white noise are added, and the parameter a is adjusted; the value of a that minimizes the error between the estimated noise and the added noise is then taken. With this arrangement, the accuracy of the noise estimation is improved as far as possible.
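As a concrete illustration, the flat-region noise estimate and maximum-gray signal estimate described above can be sketched as follows. This is a minimal reimplementation under stated assumptions: the window half-sizes p and q, the threshold factor a, and the brute-force window loop are illustrative choices, not the patent's exact code.

```python
import numpy as np

def local_variance_snr(img, p=1, q=3, a=1.0):
    """Estimate SNR as (max gray)^2 / mean local variance of flat regions.

    p, q are half-sizes of the local window; `a` scales the whole-array
    variance used as the flat/texture threshold (the patent tunes `a`
    empirically on near-noise-free images with added Gaussian noise).
    """
    img = img.astype(np.float64)
    H, W = img.shape
    local_var = np.empty_like(img)
    for i in range(H):
        for j in range(W):
            r0, r1 = max(0, i - p), min(H, i + p + 1)
            c0, c1 = max(0, j - q), min(W, j + q + 1)
            local_var[i, j] = img[r0:r1, c0:c1].var()
    thresh = a * img.var()        # a times the variance of the whole pixel array
    flat = local_var < thresh     # small local variance -> flat region
    noise = local_var[flat].mean() if flat.any() else local_var.mean()
    return img.max() ** 2 / (noise + 1e-12)  # square of max gray over noise estimate
```

On a nearly uniform scene with small additive noise, the estimate is dominated by the flat-region variance, as the text intends.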
In practical application, satellite on-orbit testing usually adopts relative radiometric calibration, selecting uniform scenes of different reflectivities within the dynamic range of the sensor probe imaging, such as high-reflectivity ice caps, medium-reflectivity gobi and desert, and low-reflectivity bodies of water. When a calibration image is obtained for a low-reflectivity scene such as a water body, the mean gray value of the image is too small and the resulting signal-to-noise ratio is underestimated. The invention uses the maximum gray value in the image as the signal estimate, obtaining a signal-to-noise ratio closer to the actual value. In addition, other noise-estimation methods, such as the laboratory integrating-sphere method, do not fully account for the influence of the complex space environment on the camera's signal-to-noise ratio, so their theoretical estimates come out higher than the signal-to-noise ratio measured in the laboratory; a signal-to-noise estimate measured from the actual pixel array accords better with the true ratio.
S102: the initial crosstalk zone is divided for the pixel array based on the signal-to-noise ratio of the pixel array. Preferably, due to the fact that the result of the 90-degree yaw calibration data depends on the accuracy of the yaw radiometric calibration data included angle, when the included angle steering angle is uncertain, satellites cannot be imaged along the arrangement direction of the sensor probe element linear array, and therefore pixels of the same calibration scene imaged by each row in fig. 4 may appear in pixels of other calibration scenes. Preferably, because the response parameters of each sensor probe cell are different, the signal-to-noise ratio is different when the same sensor probe cell images different scenes. When different sensors 2 image the same scene, their signal to noise ratios are different. Based on the principle, the signal-to-noise ratio estimation of the sub-areas can be performed through the pixel array, and whether each pixel has a crosstalk area or not is judged according to the distribution of the signal-to-noise ratio estimation values. Preferably, the divided crosstalk zone is estimated as an initial crosstalk zone according to the signal-to-noise ratio. Because the uncertainty of the signal-to-noise ratio estimation value is large, the initial crosstalk area divided by the signal-to-noise ratio estimation cannot accurately position the imaging data of the non-identical scene unit in each pixel. It is necessary to divide a uniform imaging area of the same scene by estimating the contrast in the initial crosstalk area.
Preferably, in case the payload 11 and/or the computing terminal 3 performs contrast estimation on the array of picture elements, the payload 11 and/or the computing terminal 3 performs the following steps:
S103: and carrying out linear detection on a plurality of pixels in the same scene based on the pixel array to divide a plurality of rectangular areas. Preferably, since a plurality of pixels are arranged diagonally in the pixel array, a straight line formed by the plurality of pixels can be detected according to the LSD method. Preferably, the LSD detects a straight line as follows:
a. The scaling parameter is set to 1, i.e. no Gaussian downsampling is performed. Preferably, Gaussian sampling is omitted because it would disrupt the nonlinear response relationship between the individual sensor probe elements in the pixel array.
b. The gradient value and gradient direction of each pixel point are calculated and pseudo-ordered. Preferably, the larger the gradient value, the more pronounced the edge point, and the more suitable it is as a seed point. However, since fully sorting the gradient values costs too much time, the gradient values are simply divided into 1024 levels covering the gradient range from 0 to 255, and this binning has linear time cost. Preferably, seed points are searched downward from the highest gradient level, 1024; pixel points with the same gradient level are placed in the same linked list, yielding 1024 linked lists, which are assembled into a state table in descending order of gradient value. All points in the state table are initially set to the not-used state.
c. Points with gradient values smaller than p are set to the used (unusable) state; the point with the maximum gradient value in the state table is taken out as a starting point, and its surroundings are searched within the angle tolerance. Preferably, region diffusion proceeds by searching the surrounding directions within the angle tolerance, i.e. along directions similar to the gradient angle. Preferably, a rectangular fit is performed on the diffused region to generate a rectangle R. Preferably, p may be the expectation of all gradient values, or may be set manually. Preferably, points with gradient values smaller than p tend to occur in smooth areas or contain only low-frequency noise, which would seriously distort the calculation of the line angle; thus in LSD, pixels with gradient magnitude smaller than p are rejected from participating in the construction of the rectangle R. Preferably, the rectangular fitting of the diffused region is essentially a shifting of the gray values of the pixel data, not a sampling of the data points.
d. Whether the density of aligned points in the rectangle R satisfies the threshold F is judged. Preferably, if it is not satisfied, the rectangle R is truncated into a plurality of rectangular boxes that do satisfy the threshold F. Preferably, the threshold F may be set to one third of the number of probe elements actually participating in imaging in the sensor, so that straight lines of shorter length are eliminated.
With this arrangement, the oblique rectangle R formed by the pixels of the same scene in the pixel array can be detected, and the included angle α between the oblique line and the along-track direction of the flight platform can also be obtained.
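The linear-time pseudo-ordering of step b can be sketched in NumPy as follows; the gradient operator (`np.gradient`) and the mapping of magnitudes onto 1024 bins are illustrative assumptions rather than the patent's exact procedure.

```python
import numpy as np

def pseudo_order_seeds(img, levels=1024):
    """Compute gradient magnitude/angle per pixel and pseudo-sort the
    pixels into `levels` bins in linear time, highest magnitude first,
    instead of fully sorting them (LSD step b)."""
    img = img.astype(np.float64)
    gy, gx = np.gradient(img)          # row and column gradients
    mag = np.hypot(gx, gy)
    angle = np.arctan2(gy, gx)
    # map magnitudes onto discrete levels covering [0, mag.max()]
    scale = (levels - 1) / (mag.max() + 1e-12)
    bins = [[] for _ in range(levels)]
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            bins[int(mag[i, j] * scale)].append((i, j))
    # iterate seed candidates from the highest-gradient bin downwards
    seeds = [p for b in reversed(bins) for p in b]
    return seeds, angle
```

On an image with a vertical step edge, the first seeds returned lie on the edge columns, which is the behaviour the pseudo-ordering is meant to deliver.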
According to a preferred embodiment, the step of contrast estimation by the load 11 and/or by the computing terminal 3 comprises at least:
s104: the first contrast estimation value in the rectangular region R is obtained based on the maximum value and the minimum value of the luminance in the rectangular region R. Preferably, the first contrast estimate is calculated by the following formula:
Wherein C is the first contrast estimate. I max is the maximum value of the luminance in the rectangular region R. I min is a minimum value of luminance in the rectangular region R. By the arrangement mode, the average value of contrast ratios among different sensor probe elements for imaging the same scene can be obtained.
S105: the second contrast estimation value is obtained based on the first contrast estimation value and the maximum and minimum values of the brightness of each pixel within the rectangular region R. Preferably, the second contrast estimate is calculated by the following formula:
Wherein C' is the second contrast estimate for the pixel. I max is the maximum value of the brightness of each picture element within the rectangular region R. I min is a minimum value of the brightness of each pixel within the rectangular region R. Preferably, the estimated value of the second contrast is a ratio of a brightness range of each pixel in the rectangular region R to a contrast average value in the rectangular region R. By the arrangement mode, the contrast estimation of each pixel can be obtained more accurately and rapidly.
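Taken together, and assuming the first estimate is the Michelson contrast and the second is the per-pixel brightness range normalised by it (consistent with how C and C′ are described in S104 and S105), the two estimates can be sketched as:

```python
def first_contrast(i_max, i_min):
    """First estimate: Michelson-style contrast over the rectangle R."""
    return (i_max - i_min) / (i_max + i_min)

def second_contrast(pel_i_max, pel_i_min, c_first):
    """Second estimate: one pixel's brightness range over the mean contrast C."""
    return (pel_i_max - pel_i_min) / c_first
```

For a rectangle with luminance range 100..200, C = 1/3; a pixel inside it whose brightness spans 140..150 then gets C′ = 30.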
Preferably, in the case where the payload 11 and/or the computing terminal 3 calculates the second contrast estimate, it does so based on the pixels generated at the first sampling time within each rectangular region R. Preferably, owing to the characteristics of 90° yaw push-broom imaging, most of the first-generated pixels of the imaged pixel array contain no other scenes. Therefore, taking the pixels generated at the first time within each rectangular region R as the reference reduces the time complexity of the contrast-estimation calculation, improves the accuracy of crosstalk-zone division, and allows the crosstalk zone to be removed more accurately.
S200: after the pixels in the pixel array are partitioned based on the correction parameters, the crosstalk area is eliminated to generate an initial afterimage. By the arrangement mode, after each pixel in the pixel array is partitioned, a rejection operation is adopted to reject the crosstalk area in the initial crosstalk area. The reject operation has two functions, one of which is to accurately select the crosstalk area within the initial crosstalk area, and the other of which is to delete the selected crosstalk area. Whether imaging of other scene components exists in each pixel imaging can be detected through a rejection operation, and by deleting imaging components of other scenes in the pixels, calibration imaging data can be obtained more accurately under the condition that the accuracy of yaw radiometric calibration data included angles is not required, namely, under the condition that satellite loads are not required to be increased, software and hardware overhead is not required to calibrate the included angle data, and errors are reduced for relative radiometric calibration.
Preferably, the crosstalk area within the initial crosstalk zone is deleted based on the second contrast estimate, thereby generating the initial residual image of each pixel. With this arrangement, on the basis of the first contrast estimate of pixels imaged by different probe elements, the second contrast estimate screens the areas of each pixel's initial crosstalk zone whose contrast deviation is large. Screening with the first contrast estimate alone, i.e. the mean value, is inaccurate: its range is too wide and may cover non-crosstalk areas. The present calculation method screens out the crosstalk area rapidly, saves the computing cost of the satellite load, and ensures that no crosstalk remains after the initial crosstalk zone is screened. Although this screening removes somewhat too much area, it guarantees that the retained area is free of crosstalk, thereby ensuring the accuracy of the subsequent radiometric calibration data.
Preferably, the crosstalk area within the initial crosstalk zone is divided as follows: it is the portion of each initial crosstalk zone whose brightness deviation exceeds the first threshold of the second contrast estimate. Preferably, the second contrast estimate is calculated from the pixels generated at the first time within the rectangular region R. Preferably, the first threshold is 5% of the second contrast estimate: a contrast difference exceeding 5% of the second contrast estimate is evident to the human eye, so such a pixel comprises the imaging of two different scenes. With this arrangement, the existence of a crosstalk area can be judged from the non-uniformity of the contrast and signal-to-noise ratio of each pixel's imaging; the crosstalk area can be divided more accurately, the pixels containing other scene components in push-broom imaging are identified and removed, and the calibration imaging data can be obtained more accurately, reducing the error of the relative radiometric calibration.
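A hedged sketch of the rejection rule: comparing each pixel's brightness deviation against 5% of the second contrast estimate is taken from the text, while the use of the zone median as the brightness reference is an assumption introduced here for illustration.

```python
import numpy as np

def crosstalk_mask(zone, c_second, rel_thresh=0.05):
    """Flag the portion of an initial crosstalk zone whose brightness
    deviates from the zone reference by more than rel_thresh * C'."""
    zone = np.asarray(zone, dtype=np.float64)
    ref = np.median(zone)                      # assumed brightness reference
    return np.abs(zone - ref) > rel_thresh * c_second
```

The returned boolean mask marks the samples to delete before the residual image is formed.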
S300: the payload 11 and/or the computing terminal 3 performs a scaling data processing based on the initial residual image, as follows:
S301: denoising processing is carried out based on the initial residual image, and the high frequency of the initial residual image of each pixel is amplified to generate an enhanced residual image. Preferably, the array of picture elements is as shown in fig. 3. The array of picture elements is in units of picture elements generated by the sensor probe elements. For example, a first probe element generates an a-pel at a first imaging sample along the flight direction of the flight platform 1. The first probe element generates a B pixel in the second imaging sample, and the second probe element generates an A pixel in the second imaging sample. The first probe element generates a C pixel in the third sampling, the second probe element generates a B pixel in the third sampling, and the third probe element generates an A pixel in the third sampling. Preferably, the denoising process is performed based on an array of pixels. Preferably, the pixel array is composed of pixels of sensor probe cells of each column, and A, B, C, D, E in the initial image are arranged diagonally. Because the probe elements of each column are different, and the response functions of the different probe elements are different, and because of the dynamic radiation characteristics of the scaled scene, the pels A of each column in the initial image are different, resulting in diagonal stripe noise of the pel array.
Preferably, the denoising process filters the image data of the initial image in the frequency domain to remove the low-frequency noise component. Preferably, the filter may be an exponential filter. With this arrangement, the initial image is Fourier-transformed to obtain its frequency spectrum, the low-frequency noise component is filtered out to leave the high-frequency component of the pixels, and amplifying this high-frequency component enhances the detail of the oblique straight lines formed by pixels imaging the same scene unit.
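A sketch of this frequency-domain step: the low-frequency component is isolated with an exponential transfer function, the remainder is treated as the high-frequency (diagonal-line) detail, and that detail is amplified. The cutoff `d0` and the `gain` are assumed parameters, not values from the patent.

```python
import numpy as np

def enhance_high_freq(img, d0=10.0, gain=1.5):
    """Fourier-transform the image, isolate the low-frequency component
    with an exponential low-pass transfer, and amplify the remaining
    high-frequency detail (sketch of S301)."""
    img = img.astype(np.float64)
    F = np.fft.fftshift(np.fft.fft2(img))
    H, W = img.shape
    u = np.arange(H) - H // 2
    v = np.arange(W) - W // 2
    D = np.hypot(*np.meshgrid(u, v, indexing="ij"))   # distance from DC
    lp = np.exp(-(D / d0) ** 2)                       # exponential low-pass transfer
    low = np.real(np.fft.ifft2(np.fft.ifftshift(F * lp)))
    high = img - low                                  # high-frequency (line) detail
    return low + gain * high                          # amplified high frequencies
```

A constant image has no high-frequency content, so it passes through unchanged, which is a quick sanity check on the filter.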
Preferably, the enhanced residual image is scaled up proportionally and cropped to size to replace the initial residual image. With this arrangement, the original pixel array from which crosstalk areas were removed over a wide range can be filled in, so that all pixels have a consistent size, which facilitates subsequent processing and enhances the straight-line detail formed by pixels imaging the same scene unit. Preferably, the proportional scaling and cropping sacrifice some content and detail within the sensor pixel array, reducing the amount of data for relative radiometric calibration. In fact, however, an abundance of calibration data sources can bring the opposite technical effect: more data does not guarantee the uniformity of the calibration image data, and heterogeneous calibration data is unfavorable to relative radiometric calibration. A single proportional enlargement therefore preserves the uniformity of the image data within a pixel. In addition, because the second contrast estimate is a coarse screening method, the initial residual images after rejection contain little calibration data and differ in size, which is unfavorable to subsequent image processing; proportional enlargement and cropping unify their image sizes.
S302: preferably, a shift is applied according to the gray value of each probe element within the first pixel array, so that the pixels of each row in the first pixel array are the imaging of the same scene by a plurality of sensor probe elements, and the pixels of each column are the imaging of different sensor probe elements. As shown in FIG. 4, the pixel array is processed according to the included angle α and the following formula:
where DN is the gray data of the initial image stored one-dimensionally in image-line order, and DN[m + n·t] represents the gray value of the pixel in column n of row m of the initial image; t represents the number of probe elements participating in imaging.
K1 = tan α, K2 = tan(90° − α)
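The patent's shift formula is only partially reproduced above, so the following is a hedged sketch of the idea for the ideal case α = 45°: in the raw array each scene's pixels lie on one diagonal, and shifting row m left by m positions puts one scene per column. The DN[m + n·t] indexing with K1 = tan α generalises this to arbitrary angles.

```python
import numpy as np

def align_rows(pel, t):
    """Shift row m of the pixel array left by its diagonal offset so that
    each column holds one scene imaged by different probe elements.
    Assumes alpha = 45 degrees and `t` probe elements per image line."""
    out = np.zeros_like(pel)
    for m in range(pel.shape[0]):
        out[m] = np.roll(pel[m], -(m % t))  # left-shift row m by m positions
    return out
```

After alignment, each column is constant in the synthetic diagonal test case, i.e. one scene per column as S302 requires.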
S303: and carrying out histogram prescribing processing on the cumulative probability distribution function of each probe by taking the ideal reference cumulative probability distribution function as a standard according to the following formula to obtain the relative radiometric calibration parameters of each probe element. The formula is as follows:
f(k-x)≤f(k)≤f(k+y)
wherein the value range of x and y is [0, 2^bits − 1], and bits is the quantization bit depth of the image obtained by the sensor 2. Through this algorithm, the radiation response difference between the probe elements is well reflected, and after the corresponding relative radiometric calibration parameters are applied, the overall column-value distribution varies in accordance with the law of actual scene change and the radiance difference between CCD taps.
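Histogram specification of one probe element's column against an ideal reference can be sketched as below. The nearest-CDF lookup is the standard formulation of the technique; the patent's exact tolerance rule f(k−x) ≤ f(k) ≤ f(k+y) is not reproduced, and the 8-bit quantisation is an assumption.

```python
import numpy as np

def match_to_reference(col, ref, bits=8):
    """Map each gray level of one probe element's column `col` to the
    level whose reference cumulative distribution is closest, using the
    ideal reference sample `ref` (histogram specification sketch)."""
    levels = 2 ** bits
    def cdf(x):
        h = np.bincount(np.asarray(x).ravel(), minlength=levels).astype(np.float64)
        c = np.cumsum(h)
        return c / c[-1]
    c_src, c_ref = cdf(col), cdf(ref)
    # lookup table: for each source level k, the reference level with closest CDF
    lut = np.array([int(np.argmin(np.abs(c_ref - c_src[k]))) for k in range(levels)])
    return lut[col]
```

The lookup table itself plays the role of the relative radiometric calibration parameter for that probe element.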
Preferably, because the response function of a sensor probe cell differs between response intervals of different radiance, and the response functions of different probe cells also differ, the method disclosed in this embodiment selects multiple types of ground scenes as calibration scenes, in order to cover the radiometric calibration of the full dynamic range of each probe cell of the sensor. Preferably, the ground scene fields are divided into high-, medium- and low-reflection scenes according to the dynamic range of the existing probe cells and the radiation reflection characteristics of the individual fields. Preferably, taking the 450 nm-900 nm panchromatic band, i.e. the visible to near-infrared band, as an example, a ground scene with reflectivity of 35% or more is set as a high-reflection scene, reflectivity within 15%-34% as a medium-reflection scene, and reflectivity below 15% as a low-reflection scene. With this arrangement, the invention has the beneficial effect that imaging can be classified according to which ground scene triggers it, so that imaging data of different response ranges are separated, the response characteristics of a single probe element in response ranges of different radiance are obtained, the response range of the probe element is comprehensively covered, and mixing of data from different response ranges, which would reduce the accuracy of the calibration data, is avoided.
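The scene classification above reduces to a simple threshold rule (values straight from the text; the band dependence of real reflectivities is ignored in this sketch):

```python
def classify_scene(reflectivity):
    """Classify a calibration scene by reflectivity in the 450-900 nm band:
    >= 35% high reflection, 15%-34% medium reflection, below 15% low."""
    if reflectivity >= 0.35:
        return "high"
    if reflectivity >= 0.15:
        return "medium"
    return "low"
```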
Preferably, the accuracy of multi-scene radiometric calibration of sensor probe elements is affected by several links in the chain, among which the specific characteristics of the scene are the primary precondition used in the invention; it must therefore be ensured that the surface characteristics, atmospheric characteristics and uniform area of the selected scenes meet the specific requirements of on-orbit field calibration. The detailed selection principles are as follows:
a. The scene reflection characteristics cover high-, medium- and low-reflectivity ground object types;
b. the spatial and emission characteristics of a single scene are relatively uniform, and the reflectivity varies smoothly within the band range of the sensor 2;
c. the scene is located in a high-altitude area where the surrounding atmosphere is relatively clean and stable;
d. each scene can be covered by a single-orbit satellite remote sensing observation image;
e. the uniform area of the scene is larger than 10 pixels × 10 pixels of the sensor 2 to be calibrated, and there are no large occluding targets around the scene;
f. each scene has the traffic conditions needed to carry out satellite-ground synchronous observation tests.
Preferably, according to the above conditions and selection from prior knowledge, corresponding uniform scenes can be obtained. For example, in the Dunhuang radiation correction field in China, the high-reflection scene is located on the north side of the field; the total area of the field is about 6 km × 4 km, the uniform high-reflection scene area is 400 m × 400 m, the geographic coordinates are N40°28′, E94°22′, the reflectivity of the area in the visible to near-infrared band is about 35%-45%, and the spectral reflectivity between different sensor 2 probe elements varies by less than 1%. The medium-reflection scene may be the Dunhuang radiation correction field resource-satellite field, located on the Gobi desert about 30 km west of Dunhuang city; the total area is about 30 km × 35 km, the medium-reflection scene area is 550 m × 550 m, the geographic coordinates are N40°05′27.75″, E94°23′39″, and the altitude is 1229 m. The field region has high stability and uniformity, the reflectivity in the visible to near-infrared band is about 15%-30%, and the spectral reflectivity between different sensor 2 probe elements varies by about 1%-2%. The low-reflection scene can be the water body of South Lake on the south side of the radiation correction field; the field area in summer and autumn is about 3.5 km × 1.2 km, the geographic coordinates are N39°52′, E94°07′, the average water depth is about 5 m, and the water body is unpolluted and uniform in its characteristics.
Example 2
This embodiment discloses an evaluation system, which may be an imaging quality evaluation system, an imaging quality evaluation system for a remote sensing camera, an imaging quality evaluation system based on a satellite-borne remote sensing camera, or a correction system based on imaging quality evaluation, and which can be realized by the system of the invention and/or other alternative parts thereof. The method disclosed in the preceding embodiment is implemented, for example, by the various components of the system of the invention. Without conflict or contradiction, the preferred implementations of other embodiments may serve, in whole and/or in part, as supplements to this embodiment.
As shown in FIG. 2, an imaging quality evaluation system based on a satellite-borne remote sensing camera comprises at least a load 11 and sensor probe elements mounted on a satellite platform 1, and a computing terminal 3 of a ground base station in communication with the satellite platform 1. Preferably, the satellite platform 1 may be a spacecraft, such as an artificial Earth satellite, a manned spacecraft, a space probe or a space shuttle. Preferably, the sensor probe element belongs to an advanced optical system mounted on the satellite platform 1 and may be used to obtain information on ground targets. The sensor probe element may be a sensor such as a remote sensing camera or a CCD, or a linear array of probe elements formed by a plurality of CCDs. Preferably, the probe elements, i.e. the photosensitive elements, within the sensor 2 constitute the sensor in a linear arrangement. The sensor probe element may be a linear-array CCD. A CCD, or charge-coupled device, is a semiconductor device: a detecting element that represents the signal magnitude by the quantity of electric charge and transmits the signal by coupling. Preferably, the sensor probe element is rotatable about the yaw axis of the satellite platform 1. Preferably, this axis refers to the axis of the satellite platform 1 along its flight direction. Preferably, the sensor probe elements image in a linear push-broom fashion: push-broom imaging refers to forming a linear-array or area-array sensor from CCDs made of semiconductor materials and, with a wide-angle optical system, sweeping a strip-shaped track like a brush across the whole field of view by means of the movement of the satellite platform 1, thereby acquiring a two-dimensional image of the ground along the flight direction. Preferably, the satellite platform 1 also carries the load 11.
Preferably, the payload 11 may be a computing chip, a circuit, etc., such as a CPU, GPU, integrated circuit, FPGA, single chip, MCU, serial chip of ARM architecture, etc. Preferably, the ground base station comprises at least the computing terminal 3. The computing terminal 3 refers to a computing device such as a computer or a server. Preferably, the system comprises at least a storage medium. The storage medium is used to store the processed data. Preferably, the storage medium may be Random Access Memory (RAM), memory, read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
Preferably, in the case where the satellite platform 1 performs push-broom imaging along the sensor probe element arrangement direction, the load 11 and/or the ground calculation terminal 3 are configured to be able to perform quality evaluation based on the pixel array acquired by the sensor probe element push-broom imaging. Preferably, the load 11 and/or the computing terminal 3 are at least able to obtain correction parameters of the array of picture elements. Preferably, the satellite platform 1 performs push-broom imaging in a 90 ° yaw mode. Preferably, each sensor probe element generates one pel for each scaled scene. Preferably, the push-broom imaging produces an array of picture elements as shown in FIG. 3.
Preferably, the quality evaluation of the load 11 and/or the computing terminal 3 comprises at least an evaluation of the signal-to-noise ratio and the contrast of the generated array of picture elements. Preferably, the correction parameters include at least a signal-to-noise ratio estimate and a contrast estimate. Preferably, the load 11 and/or the computing terminal 3 performs the quality assessment as follows:
S10: the signal-to-noise ratio of the pixel array is estimated based on prior knowledge. Preferably, the CCD linear-array probe of a space optical remote sensing camera is disturbed by various random factors during the radiative transfer and photoelectric conversion of imaging, which produces several types of noise. Preferably, this noise always appears in the form of noise points in the imaged remote sensing image. The amount of noise, i.e. the magnitude of the signal-to-noise ratio, affects the imaging performance of the sensor, so the quality assessment of the remote sensing image can be accomplished at least in part by estimating its noise or signal-to-noise ratio. Preferably, the signal-to-noise ratio is defined as the ratio of the signal power (the square of the signal amplitude) to the noise power (the noise variance). Preferably, the image may be divided into flat regions and textured regions; in human vision, noise in flat regions attracts more attention, so the variance of the flat regions is typically used as the image noise estimate. Preferably, the on-orbit test of the signal-to-noise ratio can be based on relative radiometric calibration, selecting uniformly radiating regions, i.e. the scenes, for imaging. Preferably, uniform scenes with different reflectivities are selected, such as ice caps with high reflectivity, gobi and desert with medium reflectivity, and water with low reflectivity, and the mean and variance of the image are calculated to obtain the signal and noise estimates respectively.
Preferably, the satellite platform 1 performs push-broom imaging in a 90-degree yaw mode, and the obtained image is divided in units of pixels. Preferably, for a pixel array as shown in FIG. 3, the image formed by the pixel array can be considered divisible into flat areas and stripe areas. Preferably, the pixel array may be partitioned by taking 2 rows of pixels as a group, or by taking 3 rows of pixels as a group. Preferably, since flat regions and edges have different local variances, the maximum value of the local variances of the image may be taken as the variance of the image signal, and the ratio of the maximum to the minimum local variance of the image used as the signal-to-noise ratio estimate of the image. Preferably, the local variance at each pixel of the array image is:

δ²(i, j) = (1/(p·q)) Σ_{(x, y)∈W(i, j)} [g(x, y) − μ_g]²

where W(i, j) is the p×q window around pixel (i, j) and g(x, y) is the gray value at (x, y).
wherein μ_g is the local mean value, which can be obtained by the following formula:

μ_g = (1/(p·q)) Σ_{(x, y)∈W(i, j)} g(x, y)
Where p and q are the window sizes of the local variance. For example, if 3 rows are selected for division into groups, then p=1 and q=3. Preferably, provided the noise is not so strong as to drown out the image information, the local variance of a texture region of the image is typically greater than that of a flat region containing noise. On this basis, regions with small variance are determined to be flat regions and regions with large variance to be texture regions. Preferably, a times the variance of the entire pixel array can also be used as the threshold for segmenting texture regions from flat regions. Preferably, a region is determined to be a texture region when its variance is greater than a times the variance of the entire pixel array, and a flat region when its variance is less than a times the variance of the entire pixel array. Preferably, the average value (1/N) Σ_N δ² of the flat-region variances is calculated as the final noise estimate, where N is the number of flat regions. Preferably, the maximum gray value in the pixel array is taken as the input signal, and the signal-to-noise ratio is estimated as the ratio of the square of the maximum gray value to (1/N) Σ_N δ². Preferably, the parameter a is affected not only by the camera itself but also by the environment when the satellite is in orbit, so the parameter a cannot be obtained by analytical calculation. Preferably, a series of pictures with uniform gray-level distribution and almost no noise is selected, white Gaussian noise of different levels is added, and the parameter a is adjusted; the quantitative value of a is the one that minimizes the error between the estimated noise and the added noise. With this arrangement, the accuracy of the noise estimation can be improved as much as possible.
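The noise- and signal-estimation procedure of step S10 can be sketched as follows. This is an illustrative Python sketch, not part of the claimed method: the non-overlapping window layout, the fallback when no flat window is found, and all function and variable names are assumptions.

```python
import numpy as np

def estimate_snr(image, p=1, q=3, a=1.0):
    """Flat-region SNR estimate as described above (illustrative sketch).

    Local variances are computed over non-overlapping p x q windows; windows
    whose variance is below a * (global variance) are treated as flat, the
    mean of their variances is the noise estimate, and the square of the
    maximum gray value is taken as the signal estimate."""
    img = np.asarray(image, dtype=np.float64)
    h, w = img.shape
    local_vars = np.array([
        img[i:i + p, j:j + q].var()          # variance about local mean mu_g
        for i in range(0, h - p + 1, p)
        for j in range(0, w - q + 1, q)
    ])
    flat = local_vars[local_vars < a * img.var()]    # flat-region windows
    noise = flat.mean() if flat.size else local_vars.min()
    return img.max() ** 2 / noise            # SNR = signal power / noise power
```

For a uniform calibration scene, the returned value approximates the square of the scene's maximum gray level divided by the estimated noise variance, matching the ratio described in the text.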
In practical applications, satellite on-orbit testing usually adopts relative radiometric calibration, selecting uniform scenes of different reflectivities within the dynamic range of sensor probe-element imaging, such as high-reflectivity ice caps, medium-reflectivity gobi and desert, and low-reflectivity water bodies. When a calibration image is acquired over a low-reflectivity scene such as a water body, the mean gray value of the image is too small, and the resulting signal-to-noise ratio is underestimated. The invention uses the maximum gray value in the image as the signal estimate, yielding a signal-to-noise ratio closer to the actual value. In addition, other noise-estimation methods, such as the laboratory integrating-sphere method, do not fully consider the influence of the complex space environment on the camera's signal-to-noise ratio in their theoretical estimates, so the theoretical estimate is higher than the signal-to-noise ratio obtained by laboratory measurement; the signal-to-noise ratio estimate obtained by measuring the actual pixel array better matches the actual signal-to-noise ratio.
S20, dividing an initial crosstalk region of the pixel array based on the signal-to-noise ratio of the pixel array. Preferably, since the result of 90-degree yaw calibration depends on the accuracy of the yaw steering angle used for yaw radiometric calibration, when the steering angle is uncertain the satellite cannot image exactly along the arrangement direction of the sensor probe-element linear array, so the pixels of the same calibration scene imaged by each row in FIG. 4 may contain pixels of other calibration scenes. Preferably, because the response parameters of each sensor probe element differ, the signal-to-noise ratio of the same sensor probe element differs when imaging different scenes, and the signal-to-noise ratios of different sensor probe units 2 differ when imaging the same scene. Based on this principle, signal-to-noise ratio estimation can be performed on sub-regions of the pixel array, and the distribution of the estimates used to judge whether each pixel has a crosstalk region. Preferably, the crosstalk region divided according to the signal-to-noise ratio estimate is taken as the initial crosstalk region. Because the uncertainty of the signal-to-noise ratio estimate is large, the initial crosstalk region divided by it cannot accurately locate the imaging data of non-identical scene units within each pixel; it is therefore necessary to divide the uniform imaging region of the same scene by estimating the contrast within the initial crosstalk region.
Preferably, in the case where the payload 11 and/or the computing terminal 3 performs contrast estimation on the pixel array, the payload 11 and/or the computing terminal 3 performs the following steps:
S21, performing straight-line detection on a plurality of pixels of the same scene in the pixel array to divide a plurality of rectangular regions. Preferably, since a plurality of pixels are arranged diagonally in the pixel array, the straight line formed by these pixels can be detected by the LSD (Line Segment Detector) method. Preferably, the LSD detects a straight line as follows:
A. The scale factor is chosen as 1, i.e., no Gaussian sampling is performed. Preferably, Gaussian sampling is omitted because sampling would disrupt the nonlinear response relationship between the individual sensor probe elements in the pixel array.
B. The gradient value and gradient direction of each pixel point are calculated and pseudo-ordered. Preferably, the larger the gradient value, the more pronounced the edge point, and therefore the more suitable it is as a seed point. However, since fully sorting the gradient values costs too much time, the gradient values are simply divided into 1024 levels covering the gradient range from 0 to 255, so that the ordering has linear time overhead. Preferably, seed points are searched downward from the level with the highest gradient value; pixel points with the same quantized gradient value are placed in the same linked list, yielding 1024 linked lists, which are assembled into a state table in descending order of gradient value. All points in the state table are initially set to the unused state.
C. Points with a gradient value smaller than p are set to the used state, the point with the largest gradient value in the state table is taken out, and, with it as the starting point, the surrounding directions are searched within the angle tolerance. Preferably, region growing is performed by searching the surrounding directions within the angle tolerance, i.e., along directions similar to the gradient angular direction. Preferably, rectangular fitting is performed on the grown region to generate a rectangle R. Preferably, p may be the expectation of all gradient values, or may be set manually. Preferably, points with gradient values smaller than p tend to occur in smooth areas, or only in low-frequency noise, which can seriously affect the calculation of the line angle; thus in LSD, pixels with gradient magnitude smaller than p are rejected from participating in the construction of the rectangle R. Preferably, the rectangular fitting of the grown region is essentially a shifting of the gray values of the pixel data, not a sampling of the data points.
D. It is judged whether the density of aligned points in the rectangle R meets the threshold F. Preferably, if it is not met, the rectangle R is truncated into a plurality of rectangular boxes that each satisfy the threshold F. Preferably, the threshold F may be set to one third of the number of probe elements of the sensor actually participating in imaging, so that straight lines of shorter length are eliminated.
By this arrangement, the diagonal rectangle R formed by identical pixels in the pixel array can be detected, and the included angle α between the diagonal line and the along-track direction of the flight platform can also be obtained.
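The gradient computation and pseudo-ordering of steps A and B can be sketched as follows. This is an illustrative Python sketch, not part of the claimed method: the 2×2 gradient operator and the 1024-level quantization follow the published LSD algorithm for 8-bit images, and all function and variable names are assumptions.

```python
import numpy as np

def pseudo_order_seeds(gray, levels=1024):
    """Sketch of LSD's gradient and pseudo-ordering steps for 8-bit input.

    Gradient magnitudes are quantized into `levels` bins over [0, 255] so
    seed points can be visited in roughly descending gradient magnitude
    without a full O(n log n) sort."""
    g = np.asarray(gray, dtype=np.float64)
    # 2x2 gradient as in LSD: differences of the four neighbouring pixels
    gx = (g[:-1, 1:] + g[1:, 1:] - g[:-1, :-1] - g[1:, :-1]) / 2.0
    gy = (g[1:, :-1] + g[1:, 1:] - g[:-1, :-1] - g[:-1, 1:]) / 2.0
    mag = np.hypot(gx, gy)
    angle = np.arctan2(gy, gx)               # gradient direction per point
    bins = np.minimum((mag / 255.0 * levels).astype(int), levels - 1)
    # one "linked list" of pixel coordinates per level, highest level first,
    # standing in for the state table of step B
    state = {b: list(zip(*np.nonzero(bins == b)))
             for b in sorted(np.unique(bins), reverse=True)}
    return mag, angle, state
```

A full LSD implementation would then grow regions from these seeds within the angle tolerance and fit the rectangle R, as in steps C and D.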
According to a preferred embodiment, the contrast estimation performed by the payload 11 and/or the computing terminal 3 comprises at least the following steps:
S22, acquiring a first contrast estimate within the rectangular region R based on the maximum and minimum brightness values in the rectangular region R. Preferably, the first contrast estimate is calculated by the following formula:

C = (I_max − I_min) / (I_max + I_min)
Wherein C is the first contrast estimate, I_max is the maximum brightness value in the rectangular region R, and I_min is the minimum brightness value in the rectangular region R. By this arrangement, the average contrast between the different sensor probe elements imaging the same scene can be obtained.
S23, acquiring a second contrast estimate based on the first contrast estimate and the maximum and minimum brightness values of each pixel in the rectangular region R. Preferably, the second contrast estimate is calculated by the following formula:

C′ = (I_max − I_min) / C
Wherein C′ is the second contrast estimate of the pixel, I_max is the maximum brightness value of each pixel within the rectangular region R, and I_min is the minimum brightness value of each pixel within the rectangular region R. Preferably, the second contrast estimate is the ratio of the brightness range of each pixel in the rectangular region R to the average contrast in the rectangular region R. By this arrangement, the contrast estimate of each pixel can be obtained more accurately and rapidly.
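The two contrast estimates of S22 and S23 can be sketched as follows. This is an illustrative Python sketch, not the patent's implementation: it assumes the first estimate C is the ratio (I_max − I_min)/(I_max + I_min) over the whole rectangle R, the second estimate C′ is each pel's brightness range divided by C, and each row of the input holds one pel's samples; all function and variable names are assumptions.

```python
import numpy as np

def contrast_estimates(region):
    """Illustrative contrast estimates for a rectangular region R.

    C  -- first estimate: (I_max - I_min) / (I_max + I_min) over all of R.
    C' -- second estimate: per-pel brightness range divided by C, where
          each row of `region` is assumed to hold one pel's samples."""
    r = np.asarray(region, dtype=np.float64)
    i_max, i_min = r.max(), r.min()
    c = (i_max - i_min) / (i_max + i_min)          # first contrast estimate
    c_prime = (r.max(axis=1) - r.min(axis=1)) / c  # second estimate per pel
    return c, c_prime
```

A pel whose C′ is disproportionately large relative to the others in R is a candidate for containing samples from more than one calibration scene.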
Preferably, in the case where the payload 11 and/or the computing terminal 3 calculates the second contrast estimate, the payload 11 and/or the computing terminal 3 calculates it based on the pixels generated first in time within each rectangular region R. Preferably, owing to the characteristics of 90° yaw push-broom imaging, most of the first-generated pixels of the imaged pixel array contain no other scenes. Therefore, by taking the first-generated pixels in each rectangular region R as the reference, the time complexity of the contrast-estimation calculation can be reduced, the accuracy of crosstalk-region division can be improved, and the crosstalk region can be removed more accurately.
Preferably, after partitioning the pixels within the pixel array based on the correction parameters, the payload 11 and/or the computing terminal 3 culls the crosstalk region to generate an initial residual image. Preferably, the crosstalk region within the initial crosstalk region is deleted based on the second contrast estimate, thereby generating an initial residual image of each pixel. Preferably, the crosstalk region within the initial crosstalk region is divided as follows: the portion of each initial crosstalk region whose brightness exceeds a first threshold of the second contrast estimate. Preferably, the second contrast estimate is calculated for the pixels generated first in time within the rectangular region R. Preferably, the first threshold is 5% of the second contrast estimate: a deviation of more than 5% of the second contrast estimate is visually perceptible to the human eye, and therefore such a pixel contains the imaging of two different scenes. Through this arrangement, the existence of a crosstalk region can be judged from the non-uniformity of the contrast and signal-to-noise ratio of each pixel's imaging, the crosstalk region can be divided more accurately, pixels containing components of other scenes in push-broom imaging can be identified and removed, the calibration imaging data can be obtained more accurately, and errors in relative radiometric calibration are reduced.
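The 5% culling rule can be sketched as follows. This is an illustrative Python sketch, not the patent's implementation: the choice of the first-generated pel as the reference, the NaN masking, and all function and variable names are assumptions.

```python
import numpy as np

def cull_crosstalk(region, c_prime, threshold=0.05):
    """Remove crosstalk within an initial crosstalk region (sketch).

    Samples whose brightness deviates from the first-generated reference
    pel by more than `threshold` (5%) of the second contrast estimate
    `c_prime` are flagged as crosstalk and masked out as NaN."""
    r = np.asarray(region, dtype=np.float64)
    ref = r.flat[0]                     # first-generated pel as reference
    limit = threshold * c_prime         # first threshold: 5% of C'
    keep = np.abs(r - ref) <= limit     # True where imaging is retained
    return np.where(keep, r, np.nan)    # crosstalk samples removed as NaN
```

The NaN-masked samples correspond to the portions culled before the remaining data are used for relative radiometric calibration.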
It should be noted that the above-described embodiments are exemplary, and that a person skilled in the art, in light of the present disclosure, may devise various solutions that fall within the scope of the present disclosure. It should be understood by those skilled in the art that the present description and drawings are illustrative and do not limit the claims. The scope of the invention is defined by the claims and their equivalents.

Claims (10)

1. An optimization processing method for push-broom imaging, characterized in that, when the sensor probe unit (2) performs push-broom imaging, the computing terminal (3) executes the following steps:
Analyzing the signal-to-noise ratio and the contrast of the pixel array acquired based on the push-broom imaging of the sensor probe unit (2), and acquiring correction parameters of the pixel array at least comprising the signal-to-noise ratio and the contrast;
Partitioning pixels in the pixel array based on the correction parameters, and eliminating crosstalk areas to generate an initial residual image;
Denoising is performed based on the initial residual image, the high-frequency components of the initial residual image of each pixel are amplified to generate an enhanced residual image, and the enhanced residual image is proportionally enlarged and cropped to replace the initial residual image, thereby generating a first pixel array that enhances imaging details; shifting is then performed based on the gray value of each probe element in the first pixel array to optimize the imaging effect.
2. The optimization processing method according to claim 1, characterized in that the step of the computing terminal (3) performing the optimization processing is as follows:
Estimating the signal-to-noise ratio of the pixel array based on prior knowledge;
Dividing an initial crosstalk region for the pixel array based on the signal-to-noise ratio of the pixel array, and performing contrast estimation.
3. The optimization processing method according to claim 2, characterized in that the computing terminal (3) performs signal-to-noise ratio estimation according to the following steps:
Dividing the pixel array based on at least one pixel to form a plurality of sub-pixel arrays, and estimating local noise based on the sub-pixel arrays;
The maximum value of the gray scale of the divided pixel array is used as the estimated value of the signal.
4. The optimization processing method according to claim 3, characterized in that, in the case where the computing terminal (3) performs contrast estimation on the pixel array, the computing terminal (3) performs the following steps:
Performing straight-line detection on a plurality of pixels in the pixel array that image the same scene, to divide a plurality of rectangular regions;
And respectively estimating the signal to noise ratio by taking a plurality of rectangular areas as units, and dividing an initial crosstalk area for each pixel in the rectangular areas.
5. The optimization processing method according to claim 4, wherein the contrast estimation step of the computing terminal (3) comprises at least:
acquiring a first contrast estimation value in the rectangular area based on a maximum value and a minimum value of brightness in the rectangular area;
Acquiring a second contrast estimation value based on the first contrast estimation value and the maximum value and the minimum value of the brightness of each pixel in the rectangular area;
And deleting the crosstalk area in the initial crosstalk area based on the second contrast estimation value, so as to generate an initial residual image of each pixel.
6. The optimization processing method according to claim 5, wherein in the case where the computing terminal (3) computes the second contrast estimation value, the computing terminal (3) computes the second contrast estimation value based on the pixels generated at the first time within each rectangular region.
7. The optimization processing method according to claim 6, wherein the computing terminal (3) deletes the crosstalk area within the initial crosstalk area to generate an initial residual image of each pixel according to the following steps:
The portion of the luminance within each initial crosstalk zone that exceeds the first threshold of the second contrast estimate is deleted.
8. The optimization processing method according to claim 7, wherein the computing terminal (3) performs calibration data processing based on the initial residual image, comprising the following step:
And shifting the gray value of each probe element in the first pixel array so that the pixels of each row in the first pixel array form images of the same scene by a plurality of sensor probe elements (2), and the pixels of each column form images of different sensor probe elements (2).
9. A push-broom imaging optimization processing system, comprising at least a sensor probe unit (2) and a computing terminal (3), characterized in that the computing terminal (3) executes the following steps when the sensor probe unit (2) performs push-broom imaging:
Analyzing the signal-to-noise ratio and the contrast of the pixel array acquired based on the push-broom imaging of the sensor probe unit (2), and acquiring correction parameters of the pixel array at least comprising the signal-to-noise ratio and the contrast;
Partitioning pixels in the pixel array based on the correction parameters, and eliminating crosstalk areas to generate an initial residual image;
Denoising is performed based on the initial residual image, the high-frequency components of the initial residual image of each pixel are amplified to generate an enhanced residual image, and the enhanced residual image is proportionally enlarged and cropped to replace the initial residual image, thereby generating a first pixel array that enhances imaging details; shifting is then performed based on the gray value of each probe element in the first pixel array to optimize the imaging effect.
10. The optimization processing system according to claim 9, characterized in that the computing terminal (3) is configured to perform the optimization processing as follows:
Estimating the signal-to-noise ratio of the pixel array based on prior knowledge;
Dividing an initial crosstalk region for the pixel array based on the signal-to-noise ratio of the pixel array, and performing contrast estimation.
CN202210291593.6A 2019-09-26 2019-09-26 Push-broom imaging optimization processing method and system Active CN114708204B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210291593.6A CN114708204B (en) 2019-09-26 2019-09-26 Push-broom imaging optimization processing method and system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910920501.4A CN110782429B (en) 2019-09-26 2019-09-26 Imaging quality evaluation method based on satellite-borne remote sensing camera
CN202210291593.6A CN114708204B (en) 2019-09-26 2019-09-26 Push-broom imaging optimization processing method and system

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201910920501.4A Division CN110782429B (en) 2019-09-26 2019-09-26 Imaging quality evaluation method based on satellite-borne remote sensing camera

Publications (2)

Publication Number Publication Date
CN114708204A CN114708204A (en) 2022-07-05
CN114708204B true CN114708204B (en) 2024-06-11

Family

ID=69384574

Family Applications (3)

Application Number Title Priority Date Filing Date
CN202210291593.6A Active CN114708204B (en) 2019-09-26 2019-09-26 Push-broom imaging optimization processing method and system
CN202210308576.9A Active CN114708211B (en) 2019-09-26 2019-09-26 Optimization processing method and system for satellite remote sensing imaging
CN201910920501.4A Active CN110782429B (en) 2019-09-26 2019-09-26 Imaging quality evaluation method based on satellite-borne remote sensing camera

Family Applications After (2)

Application Number Title Priority Date Filing Date
CN202210308576.9A Active CN114708211B (en) 2019-09-26 2019-09-26 Optimization processing method and system for satellite remote sensing imaging
CN201910920501.4A Active CN110782429B (en) 2019-09-26 2019-09-26 Imaging quality evaluation method based on satellite-borne remote sensing camera

Country Status (1)

Country Link
CN (3) CN114708204B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112001263B (en) * 2020-07-28 2024-02-09 国家卫星气象中心(国家空间天气监测预警中心) Method and system for selecting reference probe of linear array scanning remote sensor
CN112669265B (en) * 2020-12-17 2022-06-21 华中科技大学 Method for realizing surface defect detection based on Fourier transform and image gradient characteristics
CN113724202B (en) * 2021-08-03 2023-10-13 哈尔滨工程大学 Image sensor correction effect quantitative evaluation method based on one-dimensional Fourier transform

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2737376A1 (en) * 1995-07-28 1997-01-31 Centre Nat Etd Spatiales Image acquisition system for use with earth scanning satellite - uses push-broom scanning technique with two detector strips and sampling by quincunx of pixels using specified relative displacements for detectors
CN103024303A (en) * 2011-09-20 2013-04-03 比亚迪股份有限公司 Image pixel array and image pixel unit
WO2015007065A1 (en) * 2013-07-18 2015-01-22 西南交通大学 Method for reducing image fuzzy degree of tdi-ccd camera
CN108154479A (en) * 2016-12-02 2018-06-12 航天星图科技(北京)有限公司 A kind of method that remote sensing images are carried out with image rectification
CN108269243A (en) * 2018-01-18 2018-07-10 福州鑫图光电有限公司 The Enhancement Method and terminal of a kind of signal noise ratio (snr) of image
CN108961163A (en) * 2018-06-28 2018-12-07 长光卫星技术有限公司 A kind of high-resolution satellite image super-resolution reconstruction method

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11339020A (en) * 1998-05-28 1999-12-10 Hitachi Medical Corp Image processor
US20080114546A1 (en) * 2006-11-15 2008-05-15 Space Systems/Loral, Inc. Image navigation and registration accuracy improvement using parametric systematic error correction
US8452123B2 (en) * 2008-01-18 2013-05-28 California Institute Of Technology Distortion calibration for optical sensors
US8405748B2 (en) * 2010-07-16 2013-03-26 Omnivision Technologies, Inc. CMOS image sensor with improved photodiode area allocation
JP2013066140A (en) * 2011-08-31 2013-04-11 Sony Corp Imaging device, signal processing method, and program
US20170084007A1 (en) * 2014-05-15 2017-03-23 Wrnch Inc. Time-space methods and systems for the reduction of video noise
CN105304656B (en) * 2014-06-23 2018-06-22 上海箩箕技术有限公司 Photoelectric sensor
CN105185805B (en) * 2015-09-28 2018-01-19 合肥芯福传感器技术有限公司 Umbrella type structure pixel and pixel array for MEMS imaging sensors
US10444415B2 (en) * 2017-02-14 2019-10-15 Cista System Corp. Multispectral sensing system and method
CN109712089A (en) * 2018-12-14 2019-05-03 航天恒星科技有限公司 Method suitable for the infrared shortwave load relative detector calibration of export-oriented remote sensing satellite
CN110111274B (en) * 2019-04-28 2020-06-19 张过 Method for calibrating exterior orientation elements of satellite-borne push-broom optical sensor

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A relative radiometric correction method for push-broom satellite images; Li Haichao et al.; Opto-Electronic Engineering; 2011-01-15 (No. 01); full text *

Also Published As

Publication number Publication date
CN110782429B (en) 2022-04-15
CN114708211B (en) 2024-06-11
CN110782429A (en) 2020-02-11
CN114708211A (en) 2022-07-05
CN114708204A (en) 2022-07-05


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant