Disclosure of Invention
The invention aims to solve the technical problems in the prior art and provides a method for quickly splicing large-area sub-meter-level night scene remote sensing images.
In order to solve the technical problems, the technical scheme of the invention is as follows:
a method for quickly splicing large-area sub-meter-level night scene remote sensing images comprises the following steps:
step one: relative radiation correction
Carrying out normalization correction on the quantized value of each pixel's radiance response in the original night scene remote sensing image, reducing or eliminating the response differences among the detection elements of the sensor so that their responses to radiance are uniform and consistent; relative radiometric correction is performed using the relative radiometric calibration results;
step two: removing isolated noise
First, the original data is separated into the three bands R, G, B, and each band is median filtered, i.e.
Imed(R,G,B)=medfilt(Iori) (1)
where Iori is the raw data and Imed is the median-filtered image;
then, the median filtering image with the isolated noise points removed is subjected to binarization processing, namely
Ibw(R,G,B)=im2bw(Imed(R,G,B),thre) (2)
where Ibw is the binarized image and thre is the threshold of the binarization processing;
the binarized image is multiplied point by point with the original data, i.e.
Idenoise(i,j)=Ibw(i,j)×Iori(i,j) (3)
where Idenoise is the denoised image and Idenoise(i, j) is the gray value at row i, column j of the image;
step three: uncontrolled area network adjustment based on RPC
Performing GPU-based SIFT feature point extraction and matching on the overlapping areas of adjacent scene images of the multipoint imaging task, performing RPC-based uncontrolled area network adjustment on the multi-frame night scene data using the matching results, and eliminating the errors in the RPC;
step four: even light and even color treatment
Performing Mask dodging and Wallis transformation-based color homogenizing treatment on the night scene image;
dodging adopts the Mask difference method: a background image of the original image is obtained by Gaussian low-pass filtering, and the background image is then subtracted from the original image to obtain an image with uniform brightness distribution, the subtraction adopting the formula shown in equation (4);
Iout(x,y)=Iin(x,y)-Iback(x,y)+offset (4)
offset in the formula (4) is a grayscale offset amount;
performing piecewise linear stretching according to the maximum value fmax, minimum value fmin, and mean value fmean of the original image gray levels and the maximum value gmax, minimum value gmin, and mean value gmean of the result image gray levels; the stretching formula is given by equation (5):
g'(x,y)=(g(x,y)-fmin)×(gmean-gmin)/(fmean-fmin)+gmin, for fmin ≤ g(x,y) ≤ fmean
g'(x,y)=(g(x,y)-fmean)×(gmax-gmean)/(fmax-fmean)+gmean, for fmean < g(x,y) ≤ fmax (5)
in the formula (5), g (x, y) is the dodging result image, and g' (x, y) is the image after the dodging result image stretching treatment;
the Wallis transformation can be expressed as:
f(x,y)=[g(x,y)-mg]×c·sf/(c·sg+(1-c)·sf)+b·mf+(1-b)·mg (6)
in formula (6), g(x, y) and f(x, y) are the gray values of the original image and of the Wallis transformation result image, respectively; mg and sg are the local gray-level mean and standard deviation of the original image; mf and sf are the target values of the local gray-level mean and standard deviation of the result image; c ∈ [0,1] is the expansion constant of the image variance, and b ∈ [0,1] is the brightness coefficient of the image; as b → 1 the image mean is forced to mf, and as b → 0 the image mean is forced to mg;
Step five: RPC-based orthorectification and image resampling
And performing orthorectification based on RPC on the night scene image acquired by multipoint imaging, and resampling the rectified image to acquire a final spliced image.
In the above technical solution, in the fifth step, the RPC-based orthorectification step is as follows:
1) calculating angular point object coordinates
According to the image-space coordinates of the four corners of the image and the corresponding initial object-space coordinates given by the regularization coefficients, the ground coordinates are obtained by RPC forward calculation and the initial affine transformation coefficients are solved; the object-space point coordinates corresponding to the image point coordinates are then calculated based on the RPC;
2) constructing a resultant image
Obtaining the image coverage range (lat0~lat1, lon0~lon1) from the minimum and maximum latitude and longitude of the object-space plane coordinates of the ground range corresponding to multipoint imaging, setting the orthoimage resolution gsd, and calculating the image size (W, H) as W=(lon1-lon0)/gsd, H=(lat1-lat0)/gsd (with gsd expressed in degrees);
3) pixel-by-pixel traversal mapping to original image
Each pixel (s, l) of the orthoimage is mapped to an initial object-space plane coordinate (lat, lon) by the projection formula lon=lon0+s×gsd, lat=lat1-l×gsd;
acquiring the elevation H of a (lat, lon) position according to DEM data, substituting the elevation H into an RPC model formula, and calculating to obtain an image point coordinate (x, y);
4) interpolating gray values and assigning
Interpolating gray scale on the original image according to the image point coordinates (x, y) obtained by inverse calculation in the step 3); and after the gray level p is calculated, assigning the position of the result image (s, l), and finally outputting the night scene remote sensing spliced image.
In the above technical solution, in step 1), the obtaining of the object space point coordinates by RPC forward calculation includes the following steps:
(1) giving an initial object space plane coordinate value (Lon, Lat) according to the initial affine transformation coefficient parameters aiming at the image point coordinates;
(2) reading the DEM data according to the given initial object-space plane coordinates to obtain the elevation value H, solving the corresponding image point coordinates with the RPC, and solving new affine transformation coefficients from the image point and object point coordinates;
(3) giving the object-space plane coordinates for the image point coordinates according to the new affine transformation coefficients and reading the corresponding DEM elevation value; if the difference between the elevation values obtained in the two successive solutions is smaller than a threshold, the solution is complete; otherwise, the process is repeated until the object-space elevation difference between two successive calculations is smaller than the threshold.
In the above technical solution, in step 4), the interpolation is performed by bilinear interpolation, with the formula: p=p(i,j)×(1-dx)×(1-dy)+p(i+1,j)×dx×(1-dy)+p(i+1,j+1)×dx×dy+p(i,j+1)×(1-dx)×dy.
The invention has the following beneficial effects:
Building on existing image splicing algorithms, the invention fully considers the imaging characteristics of multipoint imaging by an optical satellite (large changes in satellite attitude, night-scene targets that appear as light spots accompanied by isolated noise points, and the large data volume of large-area-array high-resolution images), and processes the original images through relative radiometric correction, image denoising, RPC-based uncontrolled area network adjustment, light and color balancing, RPC-based orthorectification, image resampling, and the like to obtain a large-range night scene remote sensing spliced image.
According to the method for quickly splicing large-area-array sub-meter-level night scene remote sensing images, the spliced image is obtained by processing the original images through relative radiometric correction, image denoising, RPC-based uncontrolled area network adjustment, light and color balancing, RPC-based orthorectification, image resampling, and the like, and fast processing is realized through GPU acceleration; thus the accuracy of the algorithm is guaranteed while the processing speed is greatly improved, and the algorithm is simple, easy to implement, and easy to apply directly in engineering processing.
Detailed Description
According to the method, the original color image obtained by the sensor is subjected to relative radiometric correction to obtain a night scene image with consistent radiation; image denoising is performed for isolated noise points; RPC-based uncontrolled area network adjustment is performed on the denoised night scene images; light and color balancing is applied; RPC-based orthorectification is performed; and finally splicing lines are searched among the multiple color-consistent night scene images to obtain an ideal night scene spliced image.
The present invention will be described in detail with reference to the accompanying drawings.
Referring to fig. 1: a method for quickly splicing large-area sub-meter-level night scene remote sensing images comprises the following steps:
step one: relative radiation correction
Relative radiation correction, also called normalization of the sensor's detection elements, is the normalization correction of the quantized values (DN) of the radiance responses of all pixels, reducing or eliminating the response differences among the detection elements of the sensor so that their responses to radiance are uniform and consistent. Relative radiometric correction is performed using the relative radiometric calibration results.
Step two: removing isolated noise
First, the original data is separated into the three bands R, G, B, and each band is median filtered, i.e.
Imed(R,G,B)=medfilt(Iori) (1)
where Iori is the raw data and Imed is the median-filtered image. Because the highlight noise in the image is isolated, the median filtering algorithm filters it out well.
Then, the median filtering image with the isolated noise points removed is subjected to binarization processing, namely
Ibw(R,G,B)=im2bw(Imed(R,G,B),thre) (2)
where Ibw is the binarized image and thre is the threshold of the binarization processing. Through binarization, the background noise and foreground information of the image can be separated.
The binarized image is multiplied point by point with the original data, i.e.
Idenoise(i,j)=Ibw(i,j)×Iori(i,j) (3)
where Idenoise is the denoised image and Idenoise(i, j) is the gray value at row i, column j of the image. The denoised image removes the background noise and the isolated highlight noise points in the dark areas of the image while retaining the high-frequency information in the bright areas.
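As a minimal illustration, the denoising of formulas (1)-(3) can be sketched as follows. The helper `median3`, the function name `denoise_night_image`, and the array layout are assumptions of this sketch, not names from the invention (the original uses MATLAB-style `medfilt`/`im2bw` calls).

```python
import numpy as np

def median3(band):
    # 3x3 median filter with edge replication -- a minimal stand-in
    # for the medfilt call in formula (1).
    h, w = band.shape
    p = np.pad(band, 1, mode="edge")
    stack = np.stack([p[di:di + h, dj:dj + w]
                      for di in range(3) for dj in range(3)])
    return np.median(stack, axis=0)

def denoise_night_image(img, thre=6):
    # img: (H, W, 3) array holding the R, G, B bands.
    img = np.asarray(img)
    out = np.zeros_like(img)
    for band in range(img.shape[2]):
        med = median3(img[..., band].astype(float))   # formula (1)
        bw = (med > thre).astype(img.dtype)           # formula (2): binarize at thre
        out[..., band] = bw * img[..., band]          # formula (3): mask the raw data
    return out
```

An isolated bright pixel is absent from the median image, so its mask value is 0 and it is removed, while extended light spots survive the median filter and are preserved from the raw data.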
Step three: uncontrolled area network adjustment based on RPC
The rational function model expresses the image point coordinates (r, c) as ratios of polynomials in the corresponding ground point coordinates (X, Y, Z), i.e.
rn=P1(Xn,Yn,Zn)/P2(Xn,Yn,Zn), cn=P3(Xn,Yn,Zn)/P4(Xn,Yn,Zn)
where (rn, cn) and (Xn, Yn, Zn) are the normalized coordinates obtained by translating and scaling the pixel coordinates (r, c) and ground point coordinates (X, Y, Z), with values between -1.0 and +1.0. The coefficients of the polynomials are called rational function coefficients (RPC), and through the RPC model a relation between the image coordinate system and the ground coordinate system can be established.
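The forward projection through the rational function model can be sketched as below. The dictionary layout (offset/scale fields and four 20-term coefficient arrays in the standard RPC monomial order) follows the common RPC00B convention and is an assumption of this sketch, not a format defined by the invention.

```python
import numpy as np

def rfm_project(lat, lon, h, rpc):
    # Normalize ground coordinates to [-1, 1] (translation and scaling).
    P = (lat - rpc["LAT_OFF"]) / rpc["LAT_SCALE"]
    L = (lon - rpc["LON_OFF"]) / rpc["LON_SCALE"]
    H = (h - rpc["H_OFF"]) / rpc["H_SCALE"]
    # The 20 cubic monomials of the standard RPC model.
    m = np.array([1, L, P, H, L * P, L * H, P * H, L * L, P * P, H * H,
                  P * L * H, L ** 3, L * P * P, L * H * H, L * L * P,
                  P ** 3, P * H * H, L * L * H, P * P * H, H ** 3])
    # Polynomial ratios give the normalized image coordinates (rn, cn).
    rn = np.dot(rpc["NUM_L"], m) / np.dot(rpc["DEN_L"], m)
    cn = np.dot(rpc["NUM_S"], m) / np.dot(rpc["DEN_S"], m)
    # De-normalize to pixel row/column.
    return (rn * rpc["LINE_SCALE"] + rpc["LINE_OFF"],
            cn * rpc["SAMP_SCALE"] + rpc["SAMP_OFF"])
```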
Without ground control, the RPC model generated directly from satellite attitude and orbit parameters often contains large systematic errors, which affect image positioning accuracy and prevent overlapping areas from coinciding accurately after orthorectification of multi-view images; therefore, the night scene images acquired by multipoint imaging must undergo RPC-based uncontrolled area network adjustment. In the adjustment process, feature points are matched in the overlapping areas of the night scene images using SIFT matching, which is invariant to rotation; to realize fast splicing of the night scene images, GPU-based SIFT matching is adopted, shortening the conventional CPU-based SIFT matching time by a factor of about 140.
Using the acquired homonymous (tie) points, uncontrolled area network adjustment is performed on the night scene images acquired by multipoint imaging, and the RPC model errors of the single-scene images to be spliced are determined, so that after the errors are eliminated the ground objects corresponding to the overlapping-area images have almost no relative geographic position deviation.
Step four: even light and even color treatment
Because the acquired night scene images differ in color to varying degrees due to acquisition time, shooting angle, external light sources, atmospheric attenuation, and other factors, Mask-based dodging and Wallis-based color balancing are performed on the night scene images.
Dodging adopts the Mask difference method: a background image of the original image is obtained by Gaussian low-pass filtering, and the background image is then subtracted from the original image to obtain an image with uniform brightness distribution; the subtraction adopts the formula shown in equation (4).
Iout(x,y)=Iin(x,y)-Iback(x,y)+offset (4)
The offset in equation (4) is a grayscale offset amount.
In order to increase the contrast of adjacent details and the overall contrast of the whole image, the processed image needs to be stretched piecewise linearly according to the maximum value fmax, minimum value fmin, and mean value fmean of the original image gray levels and the maximum value gmax, minimum value gmin, and mean value gmean of the result image gray levels. The stretching formula is given by equation (5):
g'(x,y)=(g(x,y)-fmin)×(gmean-gmin)/(fmean-fmin)+gmin, for fmin ≤ g(x,y) ≤ fmean
g'(x,y)=(g(x,y)-fmean)×(gmax-gmean)/(fmax-fmean)+gmean, for fmean < g(x,y) ≤ fmax (5)
in the formula (5), g (x, y) is a dodging-result image, and g' (x, y) is an image obtained by stretching the dodging-result image. The piecewise linear stretching does not need to add additional parameters, and can restore the gray scale dynamic range of the processed image to be within the gray scale range of the original image.
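The piecewise linear stretch can be sketched as follows, assuming the standard two-segment form that maps [fmin, fmean] to [gmin, gmean] and [fmean, fmax] to [gmean, gmax]; the function and parameter names are illustrative.

```python
import numpy as np

def piecewise_stretch(g, f_min, f_mean, f_max, g_min, g_mean, g_max):
    # Two-segment linear stretch: gray levels at or below f_mean map to
    # [g_min, g_mean], gray levels above f_mean map to [g_mean, g_max].
    g = np.asarray(g, dtype=float)
    low = g_min + (g - f_min) * (g_mean - g_min) / (f_mean - f_min)
    high = g_mean + (g - f_mean) * (g_max - g_mean) / (f_max - f_mean)
    return np.where(g <= f_mean, low, high)
```

Because each segment is anchored at the shared point (f_mean, g_mean), the mapping is continuous and needs no extra parameters beyond the six gray-level statistics.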
In order to adjust the color balance among the night scene remote sensing images, a Wallis transformation-based color balancing method is adopted; the Wallis transform suppresses noise while enhancing the local contrast of the original sub-meter-level images and has a local adaptive capability. The Wallis transformation can be expressed as:
f(x,y)=[g(x,y)-mg]×c·sf/(c·sg+(1-c)·sf)+b·mf+(1-b)·mg (6)
in formula (6), g(x, y) and f(x, y) are the gray values of the original image and of the Wallis transformation result image, respectively; mg and sg are the local gray-level mean and standard deviation of the original image; mf and sf are the target values of the local gray-level mean and standard deviation of the result image; c ∈ [0,1] is the expansion constant of the image variance, and b ∈ [0,1] is the brightness coefficient of the image; as b → 1 the image mean is forced to mf, and as b → 0 the image mean is forced to mg.
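The Wallis transform can be sketched as below, computing the local statistics mg and sg over a sliding window; the window size, the helper `_local_stats`, and the default values of c and b are assumptions of this sketch.

```python
import numpy as np

def _local_stats(a, win=3):
    # Local mean and standard deviation over a win x win window
    # (edges replicated).
    pad = win // 2
    h, w = a.shape
    p = np.pad(a, pad, mode="edge")
    stack = np.stack([p[di:di + h, dj:dj + w]
                      for di in range(win) for dj in range(win)])
    return stack.mean(axis=0), stack.std(axis=0)

def wallis(g, mf, sf, c=0.8, b=0.9, win=3):
    # f = (g - mg) * c*sf / (c*sg + (1-c)*sf) + b*mf + (1-b)*mg
    g = np.asarray(g, dtype=float)
    mg, sg = _local_stats(g, win)
    gain = c * sf / (c * sg + (1 - c) * sf)
    return (g - mg) * gain + b * mf + (1 - b) * mg
```

With b = 1, a uniform patch is pushed exactly to the target mean mf, matching the limiting behavior described above.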
Step five: RPC-based orthorectification and image resampling
The large-area-array night scene remote sensing image is acquired by central projection, and a certain tilt angle exists during image acquisition. To eliminate the deformation caused by image tilt, terrain relief, and the like, the night scene images acquired by multipoint imaging must undergo RPC-based orthorectification, and the rectified images are resampled to obtain the final spliced image. The RPC-based orthorectification steps are as follows:
1) calculating angular point object coordinates
According to the image-space coordinates of the four corners of the image and the corresponding initial object-space coordinates given by the regularization coefficients, the ground coordinates are obtained by RPC forward calculation and the initial affine transformation coefficients are solved. The object point coordinates corresponding to the image point coordinates are then calculated based on the RPC, specifically by the following steps:
(1) giving an initial object space plane coordinate value (Lon, Lat) according to the initial affine transformation coefficient parameters aiming at the image point coordinates;
(2) reading the DEM data according to the given initial object-space plane coordinates to obtain the elevation value H, solving the corresponding image point coordinates with the RPC, and solving new affine transformation coefficients from the image point and object point coordinates;
(3) giving the object-space plane coordinates for the image point coordinates according to the new affine transformation coefficients and reading the corresponding DEM elevation value; if the difference between the elevation values obtained in the two successive solutions is smaller than a threshold, the solution is complete; otherwise, the process is repeated until the object-space elevation difference between two successive calculations is smaller than the threshold.
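The iterative elevation refinement of steps (1)-(3) can be sketched as follows; `dem_lookup`, `affine`, and `rpc_ground` are assumed callables standing in for the DEM read, the affine prediction, and the RPC-based ground-coordinate solution, and are not APIs from the invention.

```python
def rpc_forward_point(x, y, affine, dem_lookup, rpc_ground,
                      tol=1e-3, max_iter=20):
    # Step (1): initial object-space plane coordinates from the affine fit.
    lon, lat = affine(x, y)
    h_prev = dem_lookup(lon, lat)
    for _ in range(max_iter):
        # Step (2): ground coordinates at the current elevation via the RPC.
        lon, lat = rpc_ground(x, y, h_prev)
        # Step (3): re-read the DEM; stop when the elevation converges.
        h = dem_lookup(lon, lat)
        if abs(h - h_prev) < tol:
            break
        h_prev = h
    return lon, lat, h
```

Over moderate terrain the loop typically converges in a few iterations, because each elevation update only shifts the plane coordinates slightly.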
2) Constructing a resultant image
Obtaining the image coverage range (lat0~lat1, lon0~lon1) from the minimum and maximum latitude and longitude of the object-space plane coordinates of the ground range corresponding to multipoint imaging, setting the orthoimage resolution gsd, and calculating the image size (W, H) as W=(lon1-lon0)/gsd, H=(lat1-lat0)/gsd (with gsd expressed in degrees);
3) pixel-by-pixel traversal mapping to original image
Each pixel (s, l) of the orthoimage is mapped to an initial object-space plane coordinate (lat, lon) by the projection formula lon=lon0+s×gsd, lat=lat1-l×gsd. The elevation H at (lat, lon) is then obtained from the DEM data and substituted into the RPC model formula to calculate the image-space coordinates (x, y).
4) Interpolating gray values and assigning
The gray value is interpolated on the original image from the image point coordinates (x, y) obtained by the inverse calculation in 3). Common interpolation methods are nearest-neighbor, bilinear, and bicubic; to balance efficiency and accuracy, bilinear interpolation is adopted, with the formula:
p=p(i,j)*(1-dx)*(1-dy)+p(i+1,j)*dx*(1-dy)+p(i+1,j+1)*dx*dy+p(i,j+1)*(1-dx)*dy
and after the gray level p is calculated, assigning the position of the result image (s, l), and finally outputting the night scene remote sensing spliced image (see figures 2 and 3).
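The bilinear interpolation of step 4) can be written directly from the formula above; `p` is the original image array indexed as p[i, j], and dx, dy are the fractional parts of the back-projected coordinates (a minimal sketch, without the bounds checking a production resampler would need).

```python
import numpy as np

def bilinear(p, i, j, dx, dy):
    # Gray value at the fractional position between the four neighbors
    # p(i, j), p(i+1, j), p(i+1, j+1), p(i, j+1).
    return (p[i, j] * (1 - dx) * (1 - dy)
            + p[i + 1, j] * dx * (1 - dy)
            + p[i + 1, j + 1] * dx * dy
            + p[i, j + 1] * (1 - dx) * dy)
```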
According to the method for quickly splicing the large-area-array sub-meter-level night scene remote sensing images, the large-range night scene remote sensing spliced images are obtained by processing the original images through relative radiometric correction, image denoising, uncontrolled area network adjustment based on RPC, light and color evening, orthorectification based on RPC, image resampling and the like, and quick processing of the algorithm is realized through GPU acceleration, so that the accuracy of the algorithm is guaranteed, the processing speed is greatly improved, the algorithm is simple and easy to implement, and the method is easy to directly apply to engineering processing.
The night scene image splicing method is described in detail below by taking the Video-03 satellite launched by Chang Guang Satellite Technology Co., Ltd. as an example.
Video-03 carries a video camera with a principal distance (focal length) of 3200 mm; the sub-satellite-point resolution is 0.92 m, and the size of a single acquired night scene frame is 12000 × 5000 pixels. On April 1, 2017, at 5:43, Video-03 captured night scenes of London; the shooting point is at longitude -0.179° and latitude 51.4628°. For this multipoint night scene imaging task, the method for quickly splicing large-area-array sub-meter-level night scene remote sensing images comprises the following steps:
step one: relative radiation correction
And carrying out relative radiation correction on the night scene image according to the relative radiation calibration result.
Step two: isolated noise point denoising
First, the R, G, B band information is separated from the raw data; the sizes of the R, G, and B bands are 3000 × 1250, 6000 × 2500, and 3000 × 1250 pixels, respectively. Median filtering is performed on each band using formula (1) to obtain the filtered image Imed(R,G,B) with the highlight noise removed.
Then, the median-filtered image Imed(R, G, B) with the isolated noise points filtered out is binarized according to formula (2); based on the acquired image data, the binarization threshold thre is set to 6, yielding the binarized image Ibw. Through binarization, the background noise and foreground information of the image can be effectively separated. The binarized image is multiplied point by point with the raw data according to formula (3) to obtain the denoised image Idenoise.
The obtained de-noised image not only removes background noise and isolated highlight noise points at dark positions of the image, but also retains high-frequency information at bright positions of the image.
Step three: uncontrolled area network adjustment based on RPC
GPU-based SIFT feature point extraction and matching are performed on the overlapping areas of adjacent scene images of the multipoint imaging task, and RPC-based uncontrolled area network adjustment is performed on the multi-frame night scene data using the matching results to eliminate the errors in the RPC.
Step four: light and color balancing
And performing Mask principle-based dodging processing and Wallis transformation-based color homogenizing processing on the multi-frame night scene images to obtain night scene remote sensing images with consistent colors.
Step five: RPC-based orthorectification and resampling
Image point displacements caused by the imaging angle and terrain relief are eliminated through RPC-based orthorectification; the size of the spliced image is calculated from the object-space range of the images, the corresponding position in the single-scene night scene images is calculated pixel by pixel, and bilinear interpolation is performed to obtain the final multipoint-spliced night scene remote sensing image.
It should be understood that the above examples are only for clarity of illustration and are not intended to limit the embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to exhaust all embodiments here. Obvious variations or modifications derived therefrom remain within the scope of the invention.