CN107563964B - Rapid splicing method for large-area-array sub-meter-level night scene remote sensing images - Google Patents

Publication number: CN107563964B (grant; earlier published as application CN107563964A)
Application number: CN201710722702.4A
Authority: CN (China)
Legal status: Active
Prior art keywords: image, value, rpc, night scene, point
Inventors: 武红宇, 白杨, 王灵丽, 谷文双, 潘征, 陆晗, 钟兴
Assignee: Chang Guang Satellite Technology Co Ltd
Other languages: Chinese (zh)

Landscapes

  • Image Processing (AREA)

Abstract

The invention relates to a rapid splicing method for large-area-array sub-meter-level night scene remote sensing images, comprising the following steps: step one, relative radiometric correction; step two, removal of isolated noise points; step three, RPC-based uncontrolled area network adjustment; step four, dodging and color balancing; step five, RPC-based orthorectification and image resampling. The method obtains a wide-area night scene remote sensing mosaic by applying relative radiometric correction, image denoising, RPC-based uncontrolled area network adjustment, dodging and color balancing, RPC-based orthorectification, and image resampling to the original images, and achieves rapid processing through GPU acceleration. This guarantees the accuracy of the algorithm while greatly improving the processing speed; the algorithm is simple, easy to implement, and readily applied directly in engineering processing.

Description

Rapid splicing method for large-area-array sub-meter-level night scene remote sensing images
Technical Field
The invention relates to the field of remote sensing image processing, and in particular to a rapid splicing method for large-area-array sub-meter-level night scene remote sensing images.
Background
Image stitching, also known as image mosaicking, is the process of joining two or more images with a certain degree of overlap into a single integrated image. In remote sensing applications, smaller images, or images from different sensors, are usually processed and stitched together to obtain imagery covering a larger area. Simple splicing alone causes obvious geometric dislocation and radiometric differences at the seams: the geometric dislocation arises from an incorrect spatial relative position relationship between the spliced images or from geometric distortion in local areas, while the radiometric differences arise from the images being acquired in different seasons or by different types of sensors. In image mosaicking, therefore, the geometric dislocation and radiometric differences between the spliced images should be eliminated so that the result better satisfies practical applications.
A night scene image is earth-surface data acquired by a remote sensing satellite at night. Using the sensor's low-light imaging capability, city lights, and even low-intensity visible radiation sources such as small settlements, traffic flows, fishing-boat lights, and fire points, can be detected effectively, capturing the footprint of human activity against a dark background. Night scene remote sensing imagery is widely applied in socio-economic parameter estimation, regional development research, urbanization monitoring, light pollution studies, and other fields, and can objectively reflect socio-economic trends. However, night scene acquisition is limited by factors such as the sensor manufacturing process and the satellite orbit altitude, so the coverage of a single image is limited: for example, a single 0.92 m resolution image from the 12K × 5K very-large-area-array sensor of the Chang Guang Satellite Video 03 satellite covers a ground area of only about 50 square kilometers. A single large-area-array sub-meter-level image therefore cannot capture, in one pass, wide-area data covering the main area of a city. Instead, the satellite performs a multi-point imaging task to obtain several remote sensing images over the task area, and these images are spliced into the final product using the overlap between adjacent images. Several complications arise. The satellite attitude differs between tasks, so each image carries a different perspective distortion. The sensor must use high gain and long exposure times at night, which inevitably introduces high-brightness noise, chroma noise, and other noise into the image; moreover, the night scene image is dark overall, which hinders visual interpretation and the search for splicing lines in the overlap areas. Distortion correction, denoising, and dodging and color-balancing are therefore required for the night scene images acquired by different imaging tasks. The RPC coefficients of a single-frame night scene image contain a certain error, so RPC-based uncontrolled area network adjustment is required; the adjustment extracts and matches SIFT feature points in the overlap areas, and a GPU-based SIFT matching algorithm is adopted to raise the processing speed. Finally, because the image is formed by central projection with a certain shooting angle, RPC-based orthorectification must be applied, and the rectified images are resampled to obtain the final seamless mosaic.
Image mosaicking must balance the smoothness of the image gray surface against image sharpness, and splicing algorithms for remote sensing images mainly involve image registration and synthesis techniques. In 2009, Yang Lina et al. published research on tone processing methods before mosaicking of SPOT5 images acquired in different seasons in Remote Sensing Technology and Application; patchy ground features and irregular fragmentary features with large tonal differences were processed with image-processing methods such as numerical adjustment, grid editing and filling, and feature-information extraction and classification, reducing the tonal differences, and the method shows strong applicability. In 2006, Li Deren, Wang Mi, and Pan Jun published "Automatic dodging processing and its application to optical remote sensing images" in Geomatics and Information Science of Wuhan University; the paper proposes an automatic dodging method and workflow for brightness and color imbalance both within a single optical remote sensing image and among multiple images over an area, implements the principle and workflow in dodging software, and achieves good results in practical engineering applications.
In 2001, Liu Xiaolong published "Mosaicking of digital orthoimages based on image matching and edge-joining correction" in the Journal of Remote Sensing; the paper introduces digital orthoimage mosaicking, including preprocessing, the mosaicking process, and application prospects, where preprocessing comprises color balancing (for radiometric correction) and image matching with edge-joining correction (for correcting geometric differences). Few existing methods, however, address the splicing of large-area-array sub-meter-level night scene remote sensing images.
Disclosure of Invention
The invention aims to solve the above technical problems in the prior art and provides a rapid splicing method for large-area-array sub-meter-level night scene remote sensing images.
In order to solve the technical problems, the technical scheme of the invention is as follows:
A rapid splicing method for large-area-array sub-meter-level night scene remote sensing images comprises the following steps:
the method comprises the following steps: relative radiation correction
Carrying out normalization correction on the quantized value of each pixel radiance information response of the original night scene remote sensing image, reducing or eliminating the response difference of each detection element of the sensor, and enabling the response of the detection element to the radiance to be uniform and consistent; performing relative radiometric correction using the relative radiometric calibration results;
step two: removing isolated noise
First, the original data is separated into the R, G, B information of the three bands, and each band is median-filtered, i.e.
I_med(R,G,B) = medfilt(I_ori)   (1)
where I_ori is the raw data and I_med is the median-filtered image;
then, the median-filtered image, from which the isolated noise points have been removed, is binarized, i.e.
I_bw(R,G,B) = im2bw(I_med(R,G,B), thre)   (2)
where I_bw is the binarized image and thre is the binarization threshold;
the binarized image is multiplied point by point with the original data, i.e.
I_denoise(i,j) = I_bw(i,j) × I_ori(i,j)   (3)
where I_denoise is the denoised image and I_denoise(i,j) is the gray value at row i, column j of the image;
step three: uncontrolled area network adjustment based on RPC
Performing GPU-based SIFT feature point extraction and matching on the overlap areas of adjacent scenes of the multi-point imaging task, then performing RPC-based uncontrolled area network adjustment on the multi-frame night scene data using the matching results, eliminating the errors in the RPC;
step four: even light and even color treatment
Performing Mask-based dodging and Wallis-transform-based color balancing on the night scene images;
dodging uses the Mask difference method: a background image of the original image is obtained by Gaussian low-pass filtering, and the original image and the background image are then subtracted to obtain an image with uniform brightness distribution, the subtraction following formula (4);
I_out(x,y) = I_in(x,y) − I_back(x,y) + offset   (4)
where offset in formula (4) is a gray-level offset;
a piecewise linear stretch is then performed according to the maximum f_max, minimum f_min, and mean f_mean of the original image gray levels and the maximum g_max, minimum g_min, and mean g_mean of the result image gray levels; the stretching formula is given by equation (5):
g'(x,y) = f_min + (g(x,y) − g_min)·(f_mean − f_min)/(g_mean − g_min), for g_min ≤ g(x,y) ≤ g_mean
g'(x,y) = f_mean + (g(x,y) − g_mean)·(f_max − f_mean)/(g_max − g_mean), for g_mean < g(x,y) ≤ g_max   (5)
in formula (5), g(x,y) is the dodging result image and g'(x,y) is the stretched dodging result image;
the Wallis transformation can be expressed as:
Figure GDA0002523828980000044
in the formula (6), g (x, y) and f (x, y) are gray values of the original image and the Wallis transformation result image respectively; m isgAnd mfRespectively, the local gray level mean value and the standard deviation of the original image; sgAnd sfRespectively, the local gray level mean and standard deviation of the resulting imageTarget value of difference c ∈ [0,1 ]]B ∈ [0,1 ] is the spreading constant of the image variance]As the luminance coefficient of the image, when b → 1, the image mean is forced to mfWhen b → 0, the image mean is forced to mg
Step five: RPC-based orthorectification and image resampling
And performing orthorectification based on RPC on the night scene image acquired by multipoint imaging, and resampling the rectified image to acquire a final spliced image.
In the above technical solution, in the fifth step, the RPC-based orthorectification step is as follows:
1) calculating angular point object coordinates
Respectively carrying out forward calculation with the RPC to obtain ground coordinates from the image space coordinates of the four image corners and the corresponding initial object space coordinates given by the regularization coefficients, solving an initial affine transformation coefficient, and then calculating the object space point coordinates corresponding to the image point coordinates based on the RPC;
2) constructing a resultant image
Obtaining the image coverage range (lat0–lat1, lon0–lon1) from the minimum and maximum latitude and longitude of the ground-range object space plane coordinates corresponding to the multi-point imaging, setting the orthoimage resolution gsd, and calculating the image size (W, H):
W = (lon1 − lon0)/gsd,  H = (lat1 − lat0)/gsd
3) pixel-by-pixel traversal mapping to original image
Each pixel (s, l) of the orthoimage is mapped to an initial value (lat, lon) of the object space plane coordinates by the projection formula:
lon = lon0 + s·gsd,  lat = lat1 − l·gsd
acquiring the elevation H of a (lat, lon) position according to DEM data, substituting the elevation H into an RPC model formula, and calculating to obtain an image point coordinate (x, y);
4) interpolating gray values and assigning
Interpolate the gray value on the original image at the image point coordinates (x, y) obtained by the inverse calculation in step 3); after the gray level p is calculated, assign it to position (s, l) of the result image, and finally output the night scene remote sensing mosaic.
In the above technical solution, in step 1), the obtaining of the object space point coordinates by RPC forward calculation includes the following steps:
(1) giving an initial object space plane coordinate value (Lon, Lat) according to the initial affine transformation coefficient parameters aiming at the image point coordinates;
(2) reading the DEM data at the given initial object space plane coordinates to obtain the elevation value H, solving the corresponding image point coordinates with the RPC, and solving a new affine transformation coefficient from the image point and object space point coordinates;
(3) giving object space plane coordinates for the image point coordinates according to the new affine transformation coefficient parameters and reading the corresponding DEM elevation value; if the difference between the elevations obtained in the two successive solutions is smaller than a threshold, the solution is complete; otherwise the process is repeated until the object space elevation difference between two successive calculations is smaller than the threshold.
In the above technical solution, in step 4), the interpolation is performed by bilinear interpolation, with the formula: p = p(i,j)·(1−dx)·(1−dy) + p(i+1,j)·dx·(1−dy) + p(i+1,j+1)·dx·dy + p(i,j+1)·(1−dx)·dy.
The invention has the following beneficial effects:
based on the existing image splicing processing algorithm, the invention fully considers the imaging characteristics that the satellite attitude changes greatly when the optical satellite performs multipoint imaging, and the night scene image imaging object is a light spot and has an isolated noise point, and the characteristics that the large-area-array high-resolution image data volume is large, and carries out the processing of relative radiation correction, image denoising, RPC-based uncontrolled area network adjustment, uniform light and color, RPC-based orthorectification, image resampling and the like on the original image to obtain the large-range night scene remote sensing spliced image.
According to the method for quickly splicing the large-area-array sub-meter-level night scene remote sensing images, the large-area-array sub-meter-level night scene remote sensing spliced images are obtained by processing the original images through relative radiometric correction, image denoising, RPC-based uncontrolled area network adjustment, RPC-based orthorectification for uniform light and uniform color, image resampling and the like, and quick processing of the algorithm is realized through GPU acceleration, so that the accuracy of the algorithm is guaranteed, the processing speed is greatly improved, the algorithm is simple and easy to implement, and the method is easy to directly apply to engineering processing.
Drawings
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
FIG. 1 is a flow chart of the rapid splicing method for large-area-array sub-meter-level night scene remote sensing images.
Fig. 2 and fig. 3 are comparison diagrams of the effect of the night scene image before and after splicing, wherein fig. 2 is an image before splicing, and fig. 3 is an image after splicing.
Detailed Description
In the method, the original color image obtained by the sensor undergoes relative radiometric correction to yield a night scene image with consistent radiometry; the image is denoised to remove isolated noise points; RPC-based uncontrolled area network adjustment is applied to the denoised night scene images; dodging and color balancing are applied; RPC-based orthorectification is performed; and finally splicing lines are searched among the multiple color-consistent night scene images to obtain an ideal night scene mosaic.
The present invention will be described in detail with reference to the accompanying drawings.
Referring to fig. 1: a method for quickly splicing large-area sub-meter-level night scene remote sensing images comprises the following steps:
the method comprises the following steps: relative radiation correction
The relative radiation correction is also called as normalization processing of the detection elements of the sensor, and is processing of normalization correction on the quantized values (DN) of all pixel radiance information responses, so that the response difference of all the detection elements of the sensor is reduced or eliminated, and the responses of the detection elements to the radiance are uniform and consistent. Relative radiometric corrections are made using the results of the relative radiometric calibration.
Step two: removing isolated noise
First, the original data is separated into the R, G, B information of the three bands, and each band is median-filtered, i.e.
I_med(R,G,B) = medfilt(I_ori)   (1)
where I_ori is the raw data and I_med is the median-filtered image. Because the high-brightness noise in the image consists of isolated points, the median filter removes it well.
Then, the median-filtered image, from which the isolated noise points have been removed, is binarized, i.e.
I_bw(R,G,B) = im2bw(I_med(R,G,B), thre)   (2)
where I_bw is the binarized image and thre is the binarization threshold. The binarization separates the background noise from the foreground information of the image.
The binarized image is multiplied point by point with the original data, i.e.
I_denoise(i,j) = I_bw(i,j) × I_ori(i,j)   (3)
where I_denoise is the denoised image and I_denoise(i,j) is the gray value at row i, column j of the image. The denoised image removes the background noise and the isolated high-brightness noise points in the dark parts of the image while retaining the high-frequency information in the bright parts.
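The three-step denoising chain of Eqs. (1)–(3), median filter, threshold binarization, and point-wise masking, can be sketched in NumPy. The function names, the edge-replicated padding, and the use of a fixed 3×3 window are illustrative choices, not taken from the patent:

```python
import numpy as np

def median3x3(band):
    """3x3 median filter with edge-replicated borders -- Eq. (1)."""
    padded = np.pad(band, 1, mode="edge")
    h, w = band.shape
    # Stack the nine shifted views of the padded band and take the
    # per-pixel median over them.
    stack = np.stack([padded[i:i + h, j:j + w]
                      for i in range(3) for j in range(3)])
    return np.median(stack, axis=0)

def denoise_band(band, thre):
    """Eqs. (1)-(3): median-filter, binarize at `thre`, mask the original."""
    med = median3x3(band)
    bw = (med >= thre).astype(band.dtype)  # Eq. (2): foreground mask
    return bw * band                       # Eq. (3): isolated points vanish
```

Applied separately to the R, G, B bands, an isolated bright pixel survives the mask multiplication only if its neighborhood is also bright, which is exactly the behavior described above.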
Step three: uncontrolled area network adjustment based on RPC
The rational function model expresses the image point coordinates (r, c) as ratios of polynomials in the corresponding ground point coordinates (X, Y, Z), i.e.
r_n = P1(X_n, Y_n, Z_n) / P2(X_n, Y_n, Z_n)
c_n = P3(X_n, Y_n, Z_n) / P4(X_n, Y_n, Z_n)
where (r_n, c_n) and (X_n, Y_n, Z_n) are the normalized coordinates of the pixel coordinates (r, c) and the ground point coordinates (X, Y, Z) after translation and scaling, with values between −1.0 and +1.0; the coefficients of the polynomials are called rational function coefficients (RPC), and through the model a relation between the image coordinate system and the ground coordinate system can be established.
Without ground control, an RPC model generated directly from the satellite attitude and orbit parameters often contains large systematic errors, which degrade image positioning accuracy and prevent the overlap areas from registering accurately after orthorectification of the multi-view images; RPC-based uncontrolled area network adjustment must therefore be applied to the night scene images acquired by multi-point imaging. During the adjustment, feature points are matched in the overlap areas of the night scene images using SIFT matching, which is robust to rotation; to achieve rapid splicing, GPU-based SIFT matching is adopted, shortening the conventional CPU-based SIFT matching time by a factor of about 140.
Using the obtained homonymous (tie) points, uncontrolled area network adjustment is performed on the night scene images acquired by multi-point imaging, and the error of the RPC model of each single image to be spliced is determined; after this error is eliminated, the ground objects corresponding to the overlap areas show almost no relative geographic position deviation.
Step four: even light and even color treatment
Because the acquired night scene images differ in color to varying degrees, owing to acquisition time, shooting angle, external light sources, atmospheric attenuation, and other factors, Mask-based dodging and Wallis-based color balancing are applied to the night scene images.
Dodging uses the Mask difference method: a background image of the original image is obtained by Gaussian low-pass filtering, and the original image and the background image are then subtracted to obtain an image with uniform brightness distribution, the subtraction following formula (4).
I_out(x,y) = I_in(x,y) − I_back(x,y) + offset   (4)
The offset in formula (4) is a gray-level offset.
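A minimal sketch of the Mask difference method of Eq. (4), using a direct (unoptimized) Gaussian convolution for the low-pass background; the kernel size, sigma, and offset defaults are illustrative assumptions, and a production version would use a separable or FFT-based filter:

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Normalized 2-D Gaussian kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def mask_dodging(img, ksize=51, sigma=20.0, offset=127.0):
    """Eq. (4): background = Gaussian low-pass of img; out = img - back + offset."""
    kernel = gaussian_kernel(ksize, sigma)
    pad = ksize // 2
    padded = np.pad(img, pad, mode="edge")
    back = np.zeros_like(img, dtype=float)
    # Direct 2-D convolution -- slow but explicit; shows the Mask principle.
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            back[i, j] = np.sum(padded[i:i + ksize, j:j + ksize] * kernel)
    return img - back + offset
```

On a perfectly uniform image the background equals the image, so the output collapses to the constant offset; on a real image, only detail that deviates from the local background survives.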
To increase the contrast of adjacent details and raise the overall contrast of the whole image, the processed image must then undergo a piecewise linear stretch, performed according to the maximum f_max, minimum f_min, and mean f_mean of the original image gray levels and the maximum g_max, minimum g_min, and mean g_mean of the result image gray levels.
The stretching formula is given by equation (5):
g'(x,y) = f_min + (g(x,y) − g_min)·(f_mean − f_min)/(g_mean − g_min), for g_min ≤ g(x,y) ≤ g_mean
g'(x,y) = f_mean + (g(x,y) − g_mean)·(f_max − f_mean)/(g_max − g_mean), for g_mean < g(x,y) ≤ g_max   (5)
In formula (5), g(x,y) is the dodging result image and g'(x,y) is the stretched dodging result image. The piecewise linear stretch requires no additional parameters and restores the gray-scale dynamic range of the processed image to that of the original image.
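Assuming the stretch is the standard two-segment mapping that carries the result-image range [g_min, g_max] back onto the original range [f_min, f_max] through the two means (an interpretation, since the original equation image is not reproduced here), it can be sketched as:

```python
import numpy as np

def piecewise_stretch(g, g_min, g_mean, g_max, f_min, f_mean, f_max):
    """Two-segment linear stretch: [g_min, g_mean] -> [f_min, f_mean] and
    [g_mean, g_max] -> [f_mean, f_max], restoring the original dynamic range."""
    g = np.asarray(g, dtype=float)
    low = f_min + (g - g_min) * (f_mean - f_min) / (g_mean - g_min)
    high = f_mean + (g - g_mean) * (f_max - f_mean) / (g_max - g_mean)
    return np.where(g <= g_mean, low, high)
```

The endpoints and the mean map exactly (g_min → f_min, g_mean → f_mean, g_max → f_max), and no extra parameters are needed, matching the property stated above.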
To adjust the color balance among the night scene remote sensing images, a Wallis-transform-based color balancing method is used; the Wallis transform suppresses noise while enhancing the local contrast of the original sub-meter-level images, and is locally adaptive. The Wallis transformation can be expressed as:
f(x,y) = [g(x,y) − m_g] · c·s_f/(c·s_g + (1 − c)·s_f) + b·m_f + (1 − b)·m_g   (6)
In formula (6), g(x,y) and f(x,y) are the gray values of the original image and the Wallis result image, respectively; m_g and s_g are the local gray-level mean and standard deviation of the original image; m_f and s_f are the target values of the local gray-level mean and standard deviation of the result image; c ∈ [0,1] is the spreading constant of the image variance, and b ∈ [0,1] is the brightness coefficient of the image: as b → 1 the image mean is forced to m_f, and as b → 0 it is forced to m_g.
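A sketch of the Wallis transform of Eq. (6); for brevity this version uses global rather than sliding-window local statistics (the text calls for local ones), and the default c and b values are illustrative:

```python
import numpy as np

def wallis(g, m_f, s_f, c=0.8, b=0.9):
    """Eq. (6): push the image mean/std-dev (m_g, s_g) toward targets (m_f, s_f).
    c is the variance spreading constant, b the brightness coefficient."""
    g = np.asarray(g, dtype=float)
    m_g, s_g = g.mean(), g.std()
    gain = (c * s_f) / (c * s_g + (1.0 - c) * s_f)
    return (g - m_g) * gain + b * m_f + (1.0 - b) * m_g
```

With c = 1 and b = 1 the output mean and standard deviation equal the targets exactly, which is the b → 1 limit described in the text.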
Step five: RPC-based orthorectification and image resampling
The large-area-array night scene remote sensing image is formed by central projection, and a certain tilt angle exists at acquisition. To eliminate the deformation caused by image tilt, terrain relief, and other factors, RPC-based orthorectification must be applied to the night scene images acquired by multi-point imaging, and the rectified images are resampled to obtain the final mosaic. The RPC-based orthorectification steps are as follows:
1) calculating angular point object coordinates
Using the RPC, forward-calculate the ground coordinates from the image space coordinates of the four image corners and the corresponding initial object space coordinates given by the regularization coefficients, and solve the initial affine transformation coefficient; then calculate the object space point coordinates corresponding to the image point coordinates based on the RPC. Specifically, the following steps can be adopted:
(1) giving an initial object space plane coordinate value (Lon, Lat) according to the initial affine transformation coefficient parameters aiming at the image point coordinates;
(2) reading the DEM data at the given initial object space plane coordinates to obtain the elevation value H, solving the corresponding image point coordinates with the RPC, and solving a new affine transformation coefficient from the image point and object space point coordinates;
(3) giving object space plane coordinates for the image point coordinates according to the new affine transformation coefficient parameters and reading the corresponding DEM elevation value; if the difference between the elevations obtained in the two successive solutions is smaller than a threshold, the solution is complete; otherwise the process is repeated until the object space elevation difference between two successive calculations is smaller than the threshold.
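Steps (1)–(3) amount to a fixed-point iteration between the planimetric solution and the DEM elevation. A hedged sketch, in which `rpc_inverse` and `dem_height` are hypothetical callables standing in for the RPC model (with its current affine refinement) and the DEM lookup:

```python
def ground_from_image(x, y, rpc_inverse, dem_height, tol=0.1, max_iter=20, h0=0.0):
    """Iteratively solve the object-space point for image point (x, y):
    guess an elevation, invert to plane coordinates, re-read the DEM, and
    repeat until two successive elevations differ by less than `tol`.
    rpc_inverse(x, y, h) -> (lon, lat); dem_height(lon, lat) -> h."""
    h = h0
    lon = lat = None
    for _ in range(max_iter):
        lon, lat = rpc_inverse(x, y, h)   # plane coords at current elevation
        h_new = dem_height(lon, lat)      # elevation the DEM reports there
        if abs(h_new - h) < tol:          # convergence test of step (3)
            return lon, lat, h_new
        h = h_new
    return lon, lat, h
```

Over moderate terrain the loop typically converges in a few iterations, since the elevation correction shrinks each round.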
2) Constructing a resultant image
Obtaining the image coverage range (lat0–lat1, lon0–lon1) from the minimum and maximum latitude and longitude of the ground-range object space plane coordinates corresponding to the multi-point imaging, setting the orthoimage resolution gsd, and calculating the image size (W, H):
W = (lon1 − lon0)/gsd,  H = (lat1 − lat0)/gsd
3) pixel-by-pixel traversal mapping to original image
Each pixel (s, l) of the orthoimage is mapped to an initial value (lat, lon) of the object space plane coordinates by the projection formula:
lon = lon0 + s·gsd,  lat = lat1 − l·gsd
and acquiring the elevation H of the (lat, lon) position according to the DEM data, and substituting the elevation H into an RPC model formula to calculate and obtain image space coordinates (x, y).
4) Interpolating gray values and assigning
Interpolate the gray value on the original image at the image point coordinates (x, y) obtained by the inverse calculation in 3). Three interpolation methods are available, nearest neighbor, bilinear, and bicubic; to balance efficiency and accuracy, bilinear interpolation is adopted, with the formula:
p=p(i,j)*(1-dx)*(1-dy)+p(i+1,j)*dx*(1-dy)+p(i+1,j+1)*dx*dy+p(i,j+1)*(1-dx)*dy
and after the gray level p is calculated, assigning the position of the result image (s, l), and finally outputting the night scene remote sensing spliced image (see figures 2 and 3).
With the rapid splicing method for large-area-array sub-meter-level night scene remote sensing images of the invention, a wide-area night scene remote sensing mosaic is obtained by applying relative radiometric correction, image denoising, RPC-based uncontrolled area network adjustment, dodging and color balancing, RPC-based orthorectification, and image resampling to the original images, and rapid processing is achieved through GPU acceleration; this guarantees the accuracy of the algorithm while greatly improving the processing speed, and the algorithm is simple, easy to implement, and readily applied directly in engineering processing.
The night scene image denoising and splicing method is described in detail below, taking the Video 03 satellite launched by Chang Guang Satellite Technology Co Ltd as an example.
The Video 03 satellite carries a camera with a principal distance of 3200 mm; the resolution at the sub-satellite point is 0.92 m, and a single acquired night scene frame is 12000 × 5000 pixels. On April 1, 2017 at 5:43, Video 03 imaged night scenes of London; the imaging target is at longitude −0.179° and latitude 51.4628°. For this multi-point night scene imaging task, the rapid splicing method for large-area-array sub-meter-level night scene remote sensing images proceeds as follows:
the method comprises the following steps: relative radiation correction
And carrying out relative radiation correction on the night scene image according to the relative radiation calibration result.
Step two: isolated noise point denoising
Firstly, the original data is separated into the R, G, B information of the three bands, with sizes of 3000 × 1250 pixels, 6000 × 2500 pixels, and 3000 × 1250 pixels respectively; median filtering is applied to each band with formula (1) to obtain the filtered image I_med(R,G,B), free of high-brightness noise.
Then, the median-filtered image I_med(R,G,B), from which the isolated noise points have been filtered, is binarized according to formula (2); based on the acquired image data, the binarization threshold thre is set to 6, giving the binarized image I_bw. The binarization effectively separates the background noise and the foreground information of the image. The binarized image is multiplied point by point with the original data according to formula (3) to obtain the denoised image I_denoise.
The denoised image removes the background noise and the isolated high-brightness noise points in the dark parts of the image while retaining the high-frequency information in the bright parts.
Step three: uncontrolled area network adjustment based on RPC
GPU-based SIFT feature point extraction and matching is performed on the overlapping areas of adjacent scene images of the multi-point imaging task, and the matching results are used to perform RPC-based uncontrolled area network adjustment on the multi-frame night scene data, eliminating errors in the RPC.
Step four: dodging and color balancing
Mask-based dodging and Wallis-transformation-based color balancing are applied to the multi-frame night scene images to obtain night scene remote sensing images with consistent color.
Step five: RPC-based orthorectification and resampling
Image point displacements caused by the imaging angle and terrain relief are eliminated through RPC-based orthorectification; the size of the spliced image is computed from the object-space extent of the images, the position of each spliced-image pixel within the single-scene night scene images is calculated point by point, and bilinear interpolation yields the final multi-point spliced night scene remote sensing image.
It should be understood that the above example is given only for clarity of illustration and is not intended to limit the embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to list all embodiments exhaustively, and obvious variations or modifications derived therefrom remain within the scope of the invention.

Claims (3)

1. A method for quickly splicing large-area-array sub-meter-level night scene remote sensing images, characterized by comprising the following steps:
step one: relative radiation correction
Normalize the quantized radiance-response value of each pixel of the original night scene remote sensing image, reducing or eliminating the response differences among the sensor's detector elements so that their response to radiance is uniform and consistent; the relative radiometric calibration results are used to perform the correction;
step two: isolated noise removal
First, the R, G, and B band information is separated from the data corrected in step one, and median filtering is applied to each band:
I_med(R,G,B) = medfilt(I_ori)   (1)
where I_ori is the raw data and I_med is the median-filtered image;
then the median-filtered image, from which the isolated noise points have been removed, is binarized:
I_bw(R,G,B) = im2bw(I_med(R,G,B), thre)   (2)
where I_bw is the binarized image and thre is the binarization threshold;
the binarized image is multiplied point by point with the raw data:
I_denoise(i,j) = I_bw(i,j) × I_ori(i,j)   (3)
where I_denoise is the denoised image and I_denoise(i,j) is the gray value at row i, column j of the image;
step three: uncontrolled area network adjustment based on RPC
Perform GPU-based SIFT feature point extraction and matching on the overlapping areas of adjacent scene images of the multi-point imaging task after the isolated noise removal of step two, and use the matching results to perform RPC-based uncontrolled area network adjustment on the multi-frame night scene data, eliminating errors in the RPC;
step four: dodging and color balancing
Perform Mask-based dodging and Wallis-transformation-based color balancing on the night scene images adjusted in step three;
Dodging uses the Mask difference method: a background image of the original image is obtained by Gaussian low-pass filtering, and the background image is then subtracted from the original image to obtain an image with uniform brightness distribution, the subtraction following formula (4):
I_out(x,y) = I_in(x,y) − I_back(x,y) + offset   (4)
where offset is a gray-scale offset, I_out(x,y) is the processed result image, I_in(x,y) is the original image, and I_back(x,y) is the background image;
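A minimal sketch of the Mask difference method of formula (4); the Gaussian width `sigma` and the `offset` value below are illustrative assumptions, not values from the patent:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mask_dodging(img, sigma=50.0, offset=128.0):
    """Formula (4): estimate the low-frequency background I_back with a
    Gaussian low-pass filter, subtract it, and re-center with `offset`."""
    img = np.asarray(img, dtype=np.float64)
    background = gaussian_filter(img, sigma=sigma)  # I_back: uneven-illumination estimate
    return img - background + offset                # I_out = I_in - I_back + offset
```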
according to the maximum value f of the gray level of the original imagemaxMinimum value fminAverage value of
Figure FDA0002523828970000021
And the maximum value g of the image gray scale of the dodging resultmaxMinimum value gminAverage value of
Figure FDA0002523828970000022
Performing piecewise linear stretching; the stretching formula is given by equation (5):
Figure FDA0002523828970000023
in the formula (5), g (x, y) is the dodging result image, and g' (x, y) is the image after the dodging result image stretching treatment;
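The two-segment stretch of equation (5) can be sketched as follows, with the target extremes f_min, f_mean, f_max supplied by the caller and the statistics of g computed globally (a simplifying assumption):

```python
import numpy as np

def piecewise_stretch(g, f_min, f_mean, f_max):
    """Two-segment linear stretch: map [g_min, g_mean] onto [f_min, f_mean]
    and [g_mean, g_max] onto [f_mean, f_max]."""
    g = np.asarray(g, dtype=np.float64)
    g_min, g_max, g_mean = g.min(), g.max(), g.mean()
    low = (f_mean - f_min) / (g_mean - g_min) * (g - g_min) + f_min
    high = (f_max - f_mean) / (g_max - g_mean) * (g - g_mean) + f_mean
    return np.where(g <= g_mean, low, high)
```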
the Wallis transformation is expressed as:
f(x,y) = [p(x,y) − m_g] × (c·s_f)/(c·s_g + (1 − c)·s_f) + b·m_f + (1 − b)·m_g   (6)
where p(x,y) and f(x,y) are the gray values of the original image to be color-balanced and of the Wallis result image, respectively; m_g and s_g are the local gray-level mean and standard deviation of the original image; m_f and s_f are the target values of the local gray-level mean and standard deviation of the result image; c ∈ [0,1] is the expansion constant of the image variance, and b ∈ [0,1] is the image brightness coefficient: as b → 1 the image mean is forced toward m_f, and as b → 0 toward m_g;
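A sketch of the Wallis transformation of formula (6); here m_g and s_g are computed over the whole image rather than in local windows, a simplification of the local form described above:

```python
import numpy as np

def wallis_transform(p, m_f, s_f, c=0.8, b=0.9):
    """Formula (6) with global statistics: m_g, s_g come from the image itself;
    m_f, s_f are the target mean and standard deviation; c is the variance
    expansion constant and b the brightness coefficient."""
    p = np.asarray(p, dtype=np.float64)
    m_g, s_g = p.mean(), p.std()
    gain = (c * s_f) / (c * s_g + (1.0 - c) * s_f)
    return (p - m_g) * gain + b * m_f + (1.0 - b) * m_g
```

With c = 1 and b = 1 the output mean and standard deviation match the targets exactly, illustrating the b → 1 behavior noted above.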
Step five: RPC-based orthorectification and image resampling
Perform RPC-based orthorectification on the night scene images obtained by multi-point imaging after the dodging and color balancing of step four, and resample the rectified images to obtain the final spliced image; the specific steps are as follows:
1) calculating angular point object coordinates
For the image-space coordinates of the four corners of the image and the corresponding initial object-space coordinates given by the regularization coefficients, perform forward calculation with the RPC to obtain ground coordinates and solve the initial affine transformation coefficients; then calculate, based on the RPC, the object-space point coordinates corresponding to the image point coordinates;
2) constructing a resultant image
Obtain the image coverage range (lat0–lat1, lon0–lon1) from the minimum and maximum latitudes and longitudes of the object-space plane coordinates of the ground extent covered by the multi-point imaging, set the orthoimage resolution gsd, and calculate the image size (W, H):
W = (lon1 − lon0)/gsd,  H = (lat1 − lat0)/gsd   (7)
3) pixel-by-pixel traversal mapping to original image
For each pixel (s, l) of the orthoimage, calculate the initial object-space plane coordinates (lat, lon) by the projection formula:
lon = lon0 + s × gsd,  lat = lat1 − l × gsd   (8)
obtain the elevation h at position (lat, lon) from the DEM data, substitute it into the RPC model, and calculate the image point coordinates (x, y);
4) interpolating gray values and assigning
Interpolate the gray value on the original image at the image point coordinates (x, y) obtained by the inverse calculation of step 3); after the gray value p is computed, assign it to position (s, l) of the result image, and finally output the spliced night scene remote sensing image.
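Steps 2) through 4) can be sketched as a pixel-by-pixel loop; `rpc_ground_to_image` and `dem_elevation` are hypothetical caller-supplied callbacks standing in for the RPC ground-to-image model and the DEM lookup, and the north-up projection mapping is an assumption consistent with step 3):

```python
import numpy as np

def orthorectify(src, lat1, lon0, gsd, W, H, rpc_ground_to_image, dem_elevation):
    """Steps 2)-4): for each result pixel (s, l), project to object space,
    map through the RPC model to source image coordinates (x, y), and
    bilinearly interpolate the gray value."""
    out = np.zeros((H, W), dtype=np.float64)
    rows, cols = src.shape
    for l in range(H):
        for s in range(W):
            lon = lon0 + s * gsd                      # column -> longitude
            lat = lat1 - l * gsd                      # row -> latitude (north-up)
            h = dem_elevation(lat, lon)               # DEM lookup at (lat, lon)
            x, y = rpc_ground_to_image(lat, lon, h)   # RPC model: ground -> image
            i, j = int(np.floor(x)), int(np.floor(y))
            if 0 <= i < rows - 1 and 0 <= j < cols - 1:
                dx, dy = x - i, y - j                 # bilinear interpolation
                out[l, s] = (src[i, j] * (1 - dx) * (1 - dy)
                             + src[i + 1, j] * dx * (1 - dy)
                             + src[i + 1, j + 1] * dx * dy
                             + src[i, j + 1] * (1 - dx) * dy)
    return out
```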
2. The method for quickly splicing large-area-array sub-meter-level night scene remote sensing images according to claim 1, wherein in step 1) of step five, obtaining the object-space point coordinates through RPC calculation comprises the following steps:
(1) for the image point coordinates, give initial object-space plane coordinate values (lat, lon) according to the initial affine transformation coefficients;
(2) read the DEM data at the given initial object-space plane coordinates to obtain the elevation value h, solve for the corresponding image point coordinates using the RPC, and solve new affine transformation coefficients from the image point and object-space point coordinates;
(3) for the image point coordinates, give object-space plane coordinates according to the new affine transformation coefficients and read the corresponding DEM elevation value; if the difference between the elevations obtained in two successive solutions is smaller than a threshold, the solution is finished; otherwise, repeat the process until the object-space elevation difference between two successive calculations is smaller than the threshold.
3. The method for quickly splicing large-area-array sub-meter-level night scene remote sensing images according to claim 1, wherein in step 4) of step five, bilinear interpolation is used, with the formula:
p=p(i,j)*(1-dx)*(1-dy)+p(i+1,j)*dx*(1-dy)+p(i+1,j+1)*dx*dy+p(i,j+1)*(1-dx)*dy;
where i is the coordinate value obtained by rounding x down to an integer, j is the coordinate value obtained by rounding y down to an integer, dx = x − i and dy = y − j; p(i,j), p(i+1,j), p(i+1,j+1), and p(i,j+1) are the gray values at the corresponding positions in the image, and p is the computed gray value at position (x, y).
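A direct transcription of the claim-3 formula, assuming non-negative coordinates so that `int()` performs the downward rounding:

```python
def bilinear(p, x, y):
    """Bilinear interpolation exactly as in claim 3: i, j are x, y rounded
    down; dx, dy are the fractional parts; p is indexed p[i][j]."""
    i, j = int(x), int(y)          # floor for non-negative coordinates
    dx, dy = x - i, y - j
    return (p[i][j] * (1 - dx) * (1 - dy)
            + p[i + 1][j] * dx * (1 - dy)
            + p[i + 1][j + 1] * dx * dy
            + p[i][j + 1] * (1 - dx) * dy)
```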
CN201710722702.4A 2017-08-22 2017-08-22 Rapid splicing method for large-area-array sub-meter-level night scene remote sensing images Active CN107563964B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710722702.4A CN107563964B (en) 2017-08-22 2017-08-22 Rapid splicing method for large-area-array sub-meter-level night scene remote sensing images


Publications (2)

Publication Number Publication Date
CN107563964A CN107563964A (en) 2018-01-09
CN107563964B true CN107563964B (en) 2020-09-04

Family

ID=60976225

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710722702.4A Active CN107563964B (en) 2017-08-22 2017-08-22 Rapid splicing method for large-area-array sub-meter-level night scene remote sensing images

Country Status (1)

Country Link
CN (1) CN107563964B (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104282006A (en) * 2014-09-30 2015-01-14 中国科学院国家天文台 High-resolution image splicing method based on CE-2 data
CN106373088A (en) * 2016-08-25 2017-02-01 中国电子科技集团公司第十研究所 Quick mosaic method for aviation images with high tilt rate and low overlapping rate


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Regional Orthorectification of Satellite Remote Sensing Images; Wang Taoyang et al.; Geomatics and Information Science of Wuhan University; July 2014; pp. 838–842 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address
Address after: No. 1299, Mingxi Road, Beihu science and Technology Development Zone, Changchun City, Jilin Province
Patentee after: Changguang Satellite Technology Co.,Ltd.
Address before: 130032 No. 1759, Mingxi Road, Gaoxin North District, Changchun City, Jilin Province
Patentee before: CHANG GUANG SATELLITE TECHNOLOGY Co.,Ltd.
PE01 Entry into force of the registration of the contract for pledge of patent right
Denomination of invention: A Fast Splicing Method for Large Area Array Submeter Level Night Scene Remote Sensing Images
Granted publication date: 20200904
Pledgee: Jilin credit financing guarantee Investment Group Co.,Ltd.
Pledgor: Changguang Satellite Technology Co.,Ltd.
Registration number: Y2024220000032