CN115953297A - Remote sensing image super-resolution reconstruction and enhancement method and device - Google Patents


Publication number
CN115953297A
Authority
CN
China
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211685920.2A
Other languages
Chinese (zh)
Other versions
CN115953297B (en)
Inventor
何建军
熊桢
陈婷
祝晴
张乐勇
王智勇
Current Assignee
Twenty First Century Aerospace Technology Co ltd
Original Assignee
Twenty First Century Aerospace Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Twenty First Century Aerospace Technology Co ltd filed Critical Twenty First Century Aerospace Technology Co ltd
Priority to CN202211685920.2A priority Critical patent/CN115953297B/en
Publication of CN115953297A publication Critical patent/CN115953297A/en
Application granted granted Critical
Publication of CN115953297B publication Critical patent/CN115953297B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)

Abstract

The application relates to a remote sensing image super-resolution reconstruction and enhancement method and device. The method comprises the following steps: preprocessing an image; resampling the original image to generate a high-resolution image; performing surface fitting on the image to suppress image noise; performing high-frequency enhancement on the image while suppressing noise; performing linear feature enhancement on the image; reconstructing the surroundings of the linear features; adjusting the brightness of the image; and outputting the super-resolution image. The technical scheme of the application combines the advantages of spatial enhancement and frequency-domain enhancement, adds edge reconstruction and noise suppression, and the produced super-resolution image has high definition, vivid color and rich tonal gradation.

Description

Remote sensing image super-resolution reconstruction and enhancement method and device
Technical Field
The application relates to the technical field of remote sensing application, in particular to a remote sensing image super-resolution reconstruction and enhancement method and device.
Background
Super-resolution is a technique for processing a low-resolution image to improve its resolution, definition and interpretability, making it easier to extract useful information from the image. Since the concept of super-resolution was proposed in 1964, many super-resolution algorithms have been developed. Broadly, these algorithms can be classified by the number of input images into single-image and multi-image methods, and by technique into machine learning or deep learning based methods and rule-based image processing methods. Deep-learning-based algorithms require a large amount of training data, the quality of the super-resolved image depends on that data, and it is difficult for them to exploit the imaging characteristics of the camera to improve image quality. Single-image processing techniques can be divided into linear feature enhancement in image space and high-frequency information enhancement in frequency-domain space, but the related art still relies on a single threshold and a single processing form, which leads to problems such as insufficient improvement of image definition and increased noise, and cannot meet the demands of large-area super-resolution imaging of remote sensing images.
Disclosure of Invention
In order to solve the problem that current super-resolution reconstruction does not sufficiently improve image definition, the application provides a remote sensing image super-resolution reconstruction and enhancement method and device.
In a first aspect, the present application provides a method for reconstructing and enhancing remote sensing image super-resolution, which adopts the following technical scheme:
a remote sensing image super-resolution reconstruction and enhancement method comprises the following steps:
acquiring an image and preprocessing the image;
resampling the preprocessed image;
performing surface fitting on the resampled image;
performing frequency domain enhancement on the image subjected to surface fitting;
performing linear feature enhancement based on the image after frequency domain enhancement;
reconstructing the surrounding environment of the linear characteristic;
performing brightness stretching on the image after the reconstruction of the surrounding environment;
and outputting the super-resolution image.
Optionally, the performing surface fitting on the resampled image includes:
and performing surface fitting, according to the least squares principle, for each image point of the resampled image using the point and its eight neighborhood points.
Optionally, the performing frequency domain enhancement on the image after surface fitting includes:
carrying out Fourier transformation on the image subjected to surface fitting;
carrying out high-frequency filtering on the Fourier-transformed image by using a high-frequency filter;
carrying out inverse Fourier transform on the image subjected to high-frequency filtering;
carrying out overflow limiting processing on the Fourier inverse transformation result;
calculating, for each pixel point of the image, the maximum gray difference between the pixel and its four neighborhood points, and comparing the maximum gray difference with a first set threshold; if the maximum gray difference is smaller than the first set threshold, discarding the overflow-limited gray value and restoring the corresponding gray value before frequency-domain enhancement; and if the maximum gray difference is greater than or equal to the first set threshold, using the overflow-limited gray value.
Optionally, the performing linear feature enhancement based on the image after frequency domain enhancement includes:
extracting ridge lines and valley lines based on the image after frequency domain enhancement;
extracting ridge line points and valley line points which meet set requirements as linear characteristics;
and performing linear characteristic enhancement on the linear characteristic.
Optionally, the extracting the ridge line and the valley line based on the image after the frequency domain enhancement includes:
the ridge line extraction adopts the following mode:
defining a ridge line graph P, and setting each pixel value of P to be 1;
sequentially acquiring the gray value of each pixel point of the image;
searching gray values of four neighborhood points of the pixel point, namely upper, lower, left and right;
setting to 0 the pixel of the ridge line graph corresponding to the point with the minimum gray value among the pixel point and its four neighborhood points;
after traversing the complete graph, obtaining an initial ridge line;
traversing each ridge line point in sequence;
calculating the maximum gray difference between the ridge line point and four neighborhood points of the upper, lower, left and right;
if the maximum gray difference is smaller than a second set threshold, the point is removed from the ridge line point;
traversing all initial ridge line points to obtain ridge line points;
the valley line extraction adopts the following mode:
defining a valley line graph K, and setting each pixel value of K to be 1;
sequentially acquiring the gray value of each pixel point of the image;
searching gray values of four neighborhood points of the pixel point, namely upper, lower, left and right;
setting to 0 the pixel of the valley line graph corresponding to the point with the maximum gray value among the pixel point and its four neighborhood points;
after traversing the complete graph, obtaining an initial valley line;
traversing each valley line point in sequence;
calculating the maximum gray difference between the valley line point and four neighborhood points at the upper, lower, left and right sides of the valley line point;
if the maximum gray difference is smaller than a third set threshold value, the point is removed from the valley line point;
and traversing all initial valley line points to obtain the valley line points.
Optionally, the extracting of ridge line points and valley line points that meet the set requirements as linear features includes:
the linear characteristic of the ridge line point is extracted by the following method:
traversing each ridge line point in sequence;
dividing a circle of the ridge line point into a plurality of equal parts by taking the ridge line point as a center;
searching for a ridge line point along each of the divided directions until a next point is not a ridge line point;
counting the number of ridge line points in each direction;
assigning to the ridge line point, as its weight, the largest number of ridge line points counted over the directions;
traversing the ridge line point again, and if the weight of the ridge line point is judged to be smaller than a fourth set threshold value, abandoning the ridge line point; judging that the remaining ridge line points meet the set requirements as linear characteristics;
the linear features of the valley line points are extracted in the following manner:
traversing each valley line point in sequence;
dividing the full circle around the valley line point into a plurality of equal parts, taking the valley line point as the center;
searching for a valley line point along each of the divided directions until a next point is not a valley line point;
counting the number of valley line points in each direction;
assigning to the valley line point, as its weight, the largest number of valley line points counted over the directions;
traversing the valley line points again, and if the weight of the valley line points is judged to be smaller than a fourth set threshold value, abandoning the valley line points; and the remaining valley line points are judged to meet the set requirements and serve as linear characteristics.
Optionally, the performing linear feature enhancement on the linear feature includes:
the linear characteristic enhancement of the ridge line adopts the following mode:
sequentially traversing linear characteristic points of the ridge line;
calculating a new gray value of the linear feature point; wherein the new gray value = the current gray value of the linear feature point x the enhancement coefficient; wherein the enhancement factor is greater than 1 and less than 2;
the new grey value is limited: if the new gray value is judged to be larger than 1, the new gray value is 1;
the valley line linear feature enhancement adopts the following mode:
sequentially traversing linear feature points of the valley line;
calculating a new gray value of the linear feature point; wherein the new gray value = the current gray value of the linear feature point x the enhancement coefficient; wherein the enhancement factor is greater than 0 and less than 1.
Optionally, the reconstructing the surrounding environment of the linear feature includes:
performing expansion processing on the linear characteristic for n times; n is greater than or equal to 2;
performing median filtering on the points in the expansion region from the outside in, starting from layer n; the points participating in the median filtering may come only from the current layer and outer layers.
Optionally, the performing luminance stretching on the image obtained by reconstructing the surrounding environment includes:
traversing each pixel point to obtain a gray value a;
calculating a gray scale stretch coefficient x = a + 0.6;
the gray scale stretch coefficient is limited: if x is less than 0.9, x is 0.9, and if x is more than 1.1, x is 1.1; if x is judged to be more than or equal to 0.9 and less than or equal to 1.1, x is kept unchanged;
calculating a new gray value of the pixel point, b = x × a;
the new grey values are limited: if b is larger than 1, b is 1; if b is judged to be less than or equal to 1, b is kept unchanged.
In a second aspect, the remote sensing image super-resolution reconstruction and enhancement device provided by the application adopts the following technical scheme: a remote sensing image super-resolution reconstruction and enhancement device comprises:
the preprocessing module is used for acquiring and preprocessing images;
the resampling module is used for resampling the preprocessed image;
the processing module is used for carrying out surface fitting on the resampled image; performing frequency domain enhancement on the image subjected to surface fitting; performing linear feature enhancement based on the image after the frequency domain enhancement; reconstructing the surrounding environment of the linear characteristic; performing brightness stretching on the image after the reconstruction of the surrounding environment;
and the output module is used for outputting the super-resolution image.
In summary, the present application includes at least the following advantageous technical effects:
1. The method combines the advantages of spatial enhancement and frequency-domain enhancement, adds edge reconstruction and noise suppression, and the produced super-resolution image has high definition, vivid color and rich tonal gradation.
Drawings
FIG. 1 is a flow chart of a remote sensing image super-resolution reconstruction and enhancement method in an embodiment of the present application;
FIG. 2 is a block diagram of a process of performing frequency domain enhancement on an image after surface fitting according to an embodiment of the present application;
FIG. 3 is a block diagram of a flow chart of a ridge line extraction algorithm in an embodiment of the present application;
FIG. 4 is a block flow diagram of a linear feature extraction algorithm in an embodiment of the present application;
FIG. 5 is a block flow diagram of an edge environment reconstruction algorithm according to an embodiment of the present application;
FIG. 6 is a block flow diagram of a luminance stretching algorithm in an embodiment of the present application;
FIG. 7 is a graph of a distribution of weights in an embodiment of the present application;
FIG. 8 is a block flow diagram of another ridge line extraction algorithm in an embodiment of the present application;
FIG. 9 is a block flow diagram of another linear feature extraction algorithm in an embodiment of the present application;
FIG. 10 is a block flow diagram of another edge environment reconstruction algorithm in the embodiment of the present application;
FIG. 11 is a block flow diagram of another luminance stretching algorithm in an embodiment of the present application;
FIG. 12 is a graph of gray scale stretch coefficient versus gray scale value in an embodiment of the present application;
fig. 13 is a panchromatic original image in the embodiment of the present application;
FIG. 14 is a super-resolution diagram in an embodiment of the present application;
FIG. 15 is a block diagram of a remote sensing image super-resolution reconstruction and enhancement device in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The embodiment of the application discloses a remote sensing image super-resolution reconstruction and enhancement method.
Referring to fig. 1, a remote sensing image super-resolution reconstruction and enhancement method includes the following steps:
s1, acquiring an image and preprocessing the image.
Image data is acquired and imported, supporting a plurality of image formats including but not limited to TIFF, JPG, PNG, etc., and supporting a single band image, a color RGB image and a multispectral image.
The projection information of the original image is inherited, and the projection parameters are modified as follows so that the new projection information matches the super-resolution image.
GeoTransform[1]/=s;
GeoTransform[5]/=s;
The first term modifies the pixel size of the super-resolution image in the X direction; s is the magnification.
The second term modifies the pixel size of the super-resolution image in the Y direction; s is the magnification.
The RPC (Rational Polynomial Coefficients) file of the original image is modified to match the super-resolution image.
RPC.dfLINE_OFF*=s;
RPC.dfSAMP_OFF*=s;
RPC.dfLINE_SCALE*=s;
RPC.dfSAMP_SCALE*=s;
LINE_OFF represents the row offset, SAMP_OFF the column offset, LINE_SCALE the row scale, and SAMP_SCALE the column scale.
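The two metadata adjustments above (geotransform pixel sizes divided by the magnification, RPC offsets and scales multiplied by it) can be sketched as follows. The GDAL-style six-element geotransform tuple and the plain dictionary holding the RPC fields are assumed containers for illustration; only the arithmetic follows the text.

```python
# Sketch of the projection and RPC bookkeeping above; container types are
# illustrative assumptions, the arithmetic mirrors the patent text.

def scale_metadata(geotransform, rpc, s):
    """Adjust geotransform/RPC so they match an s-times enlarged image."""
    gt = list(geotransform)
    gt[1] /= s  # GeoTransform[1]: pixel size in the X direction
    gt[5] /= s  # GeoTransform[5]: pixel size in the Y direction
    rpc = dict(rpc)
    for key in ("LINE_OFF", "SAMP_OFF", "LINE_SCALE", "SAMP_SCALE"):
        rpc[key] *= s  # row/column offsets and scales grow with magnification
    return tuple(gt), rpc

gt, rpc = scale_metadata(
    (100.0, 0.5, 0.0, 200.0, 0.0, -0.5),  # hypothetical 0.5 m geotransform
    {"LINE_OFF": 5000, "SAMP_OFF": 6000,
     "LINE_SCALE": 5000, "SAMP_SCALE": 6000},
    2)  # s = 2 for a 2x super-resolution
```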
The image is processed in blocks; the block size can be determined according to the processing performance of the computer, avoiding the problem of insufficient computer memory.
The image is normalized and converted to floating point. The image data itself is 8-bit unsigned integer data in the range 0-255, which needs to be normalized to between 0 and 1 for convenience of processing.
The image is converted from RGB to YIQ (luminance, in-phase, quadrature).
It should be understood that different pretreatment methods can be flexibly selected based on actual requirements.
And S2, resampling the preprocessed image.
The original image is sampled so that the newly generated image size is consistent with the user's requirements.
And S3, performing surface fitting on the resampled image.
The embodiment of the application provides a method for suppressing image noise based on surface fitting. And performing surface fitting on each image point of the resampled image according to the image point and the eight neighborhood points thereof according to a least square principle.
And S4, performing frequency domain enhancement on the image subjected to surface fitting.
The embodiment of the application provides an algorithm for enhancing the high frequency of an image and suppressing noise. This both highlights features in the image and prevents noise from being amplified.
Referring to fig. 2, the method mainly comprises the following steps:
and S41, carrying out Fourier transform on the image subjected to surface fitting.
And S42, performing high-frequency filtering on the image after Fourier transform by using the high-frequency filter.
Here a Gaussian filter is used as the high-frequency filter.
And S43, performing inverse Fourier transform on the image subjected to high-frequency filtering.
And S44, performing overflow limiting processing on the Fourier inverse transformation result.
Specifically, when the inverse Fourier transform result is less than 0, it is set to 0; when it is greater than 1, it is set to 1; values between 0 and 1 remain unchanged.
S45, calculating the maximum gray difference value between each pixel point of the image and four neighborhood points of the image.
Here, the maximum gray difference refers to the maximum absolute gray value difference between the pixel and its neighborhood points; the sign is not considered.
And S46, comparing the maximum gray difference value with a first set threshold value.
The first setting threshold can be flexibly set. For example, the first set threshold is set to be greater than 0.05 and less than 0.2.
And S47, if the gray value is smaller than the first set threshold, discarding the gray value subjected to the overflow limiting processing, and restoring the gray value to the corresponding gray value before the frequency domain enhancement.
And S48, if the gray value is larger than or equal to the first set threshold, using the gray value after the overflow limiting processing.
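Steps S41-S48 can be sketched as below. The exact transfer function of the Gaussian high-frequency filter, its width sigma, and the threshold value are illustrative assumptions; the text specifies only a Gaussian high-frequency filter, overflow limiting to [0, 1], and the restore-if-low-contrast rule.

```python
import numpy as np

def freq_enhance(img, sigma=10.0, t1=0.1):
    """Gaussian high-frequency emphasis with a noise guard (steps S41-S48).
    img: float image normalized to [0, 1]; sigma and t1 are illustrative."""
    h, w = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))                # S41: Fourier transform
    y, x = np.ogrid[:h, :w]
    d2 = (y - h / 2) ** 2 + (x - w / 2) ** 2
    hp = 1.0 + (1.0 - np.exp(-d2 / (2 * sigma ** 2)))    # S42: boost high frequencies
    g = np.real(np.fft.ifft2(np.fft.ifftshift(F * hp)))  # S43: inverse transform
    g = np.clip(g, 0.0, 1.0)                             # S44: overflow limiting
    # S45-S48: keep the enhanced value only where local contrast is large
    pad = np.pad(img, 1, mode="edge")
    diffs = [np.abs(img - pad[2:, 1:-1]), np.abs(img - pad[:-2, 1:-1]),
             np.abs(img - pad[1:-1, 2:]), np.abs(img - pad[1:-1, :-2])]
    maxdiff = np.maximum.reduce(diffs)
    return np.where(maxdiff < t1, img, g)
```

On a perfectly flat image the guard restores every pixel, so the output equals the input, which is exactly the intended noise-suppression behaviour.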
And S5, performing linear feature enhancement based on the image subjected to frequency domain enhancement.
In the optional embodiment of the application, the linear feature enhancement is divided into three parts, namely ridge line/valley line extraction, linear feature extraction and linear feature enhancement.
The embodiment of the application provides an algorithm for extracting remarkable ridge lines and valley lines, so that the extracted ridge lines and valley lines are consistent with topographic feature lines. The specific algorithm for ridge line extraction refers to fig. 3:
s511, a ridge line map P is defined, and each pixel value of P is set to 1.
S512, sequentially obtaining the gray value of each pixel point of the image.
S513, searching gray values of four neighborhood points of the pixel point, namely the upper neighborhood point, the lower neighborhood point, the left neighborhood point and the right neighborhood point.
If the pixel point lies on the image boundary, the neighborhood points that do not exist are ignored and only the existing neighborhood points are used.
And S514, setting the pixel of the ridge line map corresponding to the point with the minimum gray value among the pixel point and its four neighborhood points to a specific value N, which is set to 0 in this embodiment.
And the gray values of other pixel points are unchanged.
And S515, traversing the complete graph to obtain an initial ridge line.
That is, the points remaining in the ridge line map after the traversal form the initial ridge line.
And S516, traversing each ridge line point in sequence.
S517, calculating the maximum gray difference between the ridge line point and four neighborhood points, up, down, left and right.
Here, the maximum gray difference refers to the maximum absolute gray value difference between the point and its neighborhood points; the sign is not considered.
And S518, if the maximum gray difference is smaller than a second set threshold value, eliminating the point from the ridge line point.
The second setting threshold can be flexibly set. For example, the second set threshold is set to be the same as the first set threshold.
And S519, traversing all initial ridge line points to obtain the ridge line points.
That is, the initial ridge line points whose maximum gray difference with their neighborhood points is greater than or equal to the second set threshold are retained as the extracted ridge line points.
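One possible reading of the ridge line extraction steps S511-S519 is sketched below. Because the translated text is ambiguous about whether the ridge candidates are the marked or the unmarked pixels, this sketch keeps as candidates the pixels never marked as the 5-point local minimum, then prunes candidates whose maximum neighborhood gray difference falls below the second set threshold (the value 0.1 is an assumption).

```python
import numpy as np

def extract_ridge_points(img, t2=0.1):
    """Sketch of S511-S519 under one reading of the text; img in [0, 1]."""
    h, w = img.shape
    P = np.ones((h, w), dtype=np.uint8)             # S511: ridge map, all 1
    for i in range(h):                              # S512-S515
        for j in range(w):
            # the pixel plus its existing up/down/left/right neighbours
            nbrs = [(i, j)]
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                if 0 <= i + di < h and 0 <= j + dj < w:
                    nbrs.append((i + di, j + dj))
            P[min(nbrs, key=lambda p: img[p])] = 0  # the minimum cannot be a ridge
    ridge = []
    for i in range(h):                              # S516-S519: contrast pruning
        for j in range(w):
            if P[i, j] == 0:
                continue
            diffs = [abs(img[i, j] - img[i + di, j + dj])
                     for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                     if 0 <= i + di < h and 0 <= j + dj < w]
            if max(diffs) >= t2:
                ridge.append((i, j))
    return ridge
```

On a bright horizontal line over a dark background, only the line pixels survive both the minimum marking and the contrast test.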
The extraction mode of the valley line is similar to that of the ridge line, and specifically comprises the following steps:
Defining a valley line map K and setting each pixel value of K to 1; sequentially acquiring the gray value of each pixel point of the image; looking up the gray values of the four neighborhood points (up, down, left, right) of the pixel point; setting the pixel of the valley line map corresponding to the point with the maximum gray value among the pixel point and its four neighborhood points to a specific value N, which is set to 0 in this embodiment; after traversing the whole image, obtaining the initial valley line; traversing each valley line point in turn; calculating the maximum gray difference between the valley line point and its four neighborhood points; if the maximum gray difference is smaller than a third set threshold, removing the point from the valley line; and traversing all initial valley line points to obtain the valley line points.
And the third setting threshold value can be flexibly set. For example, the third set threshold is set to be the same as the second set threshold.
Regarding linear feature extraction, the linear features of the ridge line points are extracted as follows, with reference to fig. 4:
and S521, traversing each ridge line point in sequence.
And S522, dividing the full 360 degrees around the ridge line point into a plurality of equal parts, with the ridge line point as the center.
For example 360, 144, 72 or 36 equal parts; the number of divisions can be set flexibly.
S523, search for a ridge line point along each divided direction until the next point is not a ridge line point.
And S524, counting the number of the ridge line points in each direction.
Note that ridge line points in two directions 180 degrees apart are counted together; for example, points in the 0-degree and 180-degree directions are merged for the calculation.
And S525, giving the point number with the most ridge line points in each direction as a weight to the ridge line points.
S526, traversing the ridge line points again, and if the weight of the ridge line points is judged to be smaller than a fourth set threshold value, discarding the ridge line points; the remaining ridge points are determined to satisfy the set requirements as linear characteristics.
And the fourth setting threshold can be flexibly set. For example, 6, 10, 15, etc.
The remaining ridge line points are those whose weight is greater than or equal to the fourth set threshold.
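The directional search of steps S521-S526 can be sketched as follows. The number of directions, the rounding of ray offsets to pixel coordinates, and the threshold value are illustrative assumptions.

```python
import math

def linear_features(ridge_points, n_dirs=36, t4=6):
    """Sketch of S521-S526: for each ridge point, count consecutive ridge
    points along each direction (opposite directions merged), take the best
    directional run as the weight, and keep points with weight >= t4."""
    rset = set(ridge_points)
    weights = {}
    for (i, j) in ridge_points:
        best = 0
        for k in range(n_dirs // 2):          # opposite directions merged
            ang = 2 * math.pi * k / n_dirs
            run = 0
            for sign in (1, -1):              # e.g. 0 and 180 degrees together
                step = 1
                while True:
                    di = round(sign * step * math.sin(ang))
                    dj = round(sign * step * math.cos(ang))
                    if (i + di, j + dj) not in rset:
                        break
                    run += 1
                    step += 1
            best = max(best, run)
        weights[(i, j)] = best
    return [p for p in ridge_points if weights[p] >= t4]
```

A run of ten collinear ridge points gives every point a weight of nine (the rest of the line), so all survive a threshold of six, while a four-point run is discarded entirely.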
The linear feature extraction of the valley line points is similar to that of the ridge line points and adopts the following mode: sequentially traversing each valley line point; dividing the full circle around the valley line point into a plurality of equal parts, with the valley line point as the center; searching for valley line points along each of the divided directions until the next point is not a valley line point; counting the number of valley line points in each direction; assigning the largest of these counts to the valley line point as its weight; traversing the valley line points again and, if the weight of a valley line point is smaller than the fourth set threshold, discarding it; the remaining valley line points are judged to meet the set requirements and serve as linear features.
The linear characteristic enhancement of the ridge line adopts the following mode:
sequentially traversing linear characteristic points of the ridge line; calculating a new gray value of the linear feature point; wherein, the new gray value = the current gray value of the linear feature point x the enhancement coefficient; wherein the enhancement factor is greater than 1 and less than 2; the new grey values are limited: and if the new gray value is judged to be larger than 1, taking 1 as the new gray value.
The valley line linear feature enhancement adopts the following mode:
sequentially traversing the linear feature points of the valley line; calculating a new gray value for each linear feature point, where the new gray value = the current gray value of the linear feature point × the enhancement coefficient, and the enhancement coefficient is greater than 0 and less than 1.
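The two enhancement rules above can be sketched as one function; the specific coefficients 1.3 (ridge, in the open interval (1, 2)) and 0.7 (valley, in (0, 1)) are illustrative choices.

```python
import numpy as np

def enhance_features(img, ridge_pts, valley_pts, k_ridge=1.3, k_valley=0.7):
    """Brighten ridge feature points and darken valley feature points."""
    out = img.copy()
    for i, j in ridge_pts:
        out[i, j] = min(1.0, img[i, j] * k_ridge)  # ridge: coefficient in (1, 2), cap at 1
    for i, j in valley_pts:
        out[i, j] = img[i, j] * k_valley           # valley: coefficient in (0, 1)
    return out
```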
S6, reconstructing the surrounding environment with linear characteristics.
In an embodiment of the present application, an algorithm for reconstructing the surroundings of the linear features is provided. This eliminates noise and highlights the linear features even more. For the edge environment reconstruction algorithm, refer to fig. 5:
s61, performing expansion processing on the linear characteristic n times, and marking the expansion area 1,2, … n.
And S62, performing median filtering on the points in the expansion region from the outside in, starting from layer n. The points participating in the median filtering may come only from the current layer and outer layers, not from inner layers.
For example, when filtering a layer-3 point, only points of layer 3, layer 4 and beyond may participate; layer-2 and layer-1 points may not.
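A sketch of the layered reconstruction (S61-S62), under the assumption that each dilation ring is labelled with the step at which it was added and that the median is taken over a 3 × 3 neighborhood restricted to the current ring, outer rings, and pixels outside the dilated region:

```python
import numpy as np

def reconstruct_edges(img, feature_mask, n=2):
    """S61-S62: dilate the linear features n times (4-connected here, an
    assumption), label each ring 1..n, then median-filter ring pixels from
    the outside in using only same-ring-or-further-out pixels."""
    h, w = img.shape
    layer = np.where(feature_mask, 0, np.inf)   # 0 = feature, inf = untouched
    for step in range(1, n + 1):                # S61: rings labelled 1..n
        prev = layer < step
        grow = np.zeros_like(prev)
        grow[1:, :] |= prev[:-1, :]
        grow[:-1, :] |= prev[1:, :]
        grow[:, 1:] |= prev[:, :-1]
        grow[:, :-1] |= prev[:, 1:]
        layer[grow & (layer == np.inf)] = step
    out = img.copy()
    for step in range(n, 0, -1):                # S62: outer rings first
        for i, j in zip(*np.where(layer == step)):
            vals = [out[a, b]
                    for a in range(max(0, i - 1), min(h, i + 2))
                    for b in range(max(0, j - 1), min(w, j + 2))
                    if layer[a, b] >= step]     # current ring and further out only
            out[i, j] = float(np.median(vals))
    return out
```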
And S7, performing brightness stretching on the image after the reconstruction of the surrounding environment.
In the embodiment of the present application, an algorithm for performing luminance stretching on an image is provided, so that the contrast of the image is increased. Referring to fig. 6:
and S71, traversing each pixel point to obtain a gray value a.
And S72, calculating the gray scale stretching coefficient x = a + 0.6.
And S73, limiting the gray scale stretch coefficient.
Specifically, if x is less than 0.9, x is 0.9;
if x is larger than 1.1, taking x as 1.1;
if x is judged to be more than or equal to 0.9 and less than or equal to 1.1, x is kept unchanged.
And S74, calculating a new gray value of the pixel point, b = x × a.
And S75, limiting the new gray value.
If b is larger than 1, b is 1;
if b is judged to be less than or equal to 1, b is kept unchanged.
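Steps S71-S75 use only the constants given in the text and can be written directly:

```python
import numpy as np

def stretch_brightness(img):
    """S71-S75: per-pixel stretch coefficient x = a + 0.6 clamped to
    [0.9, 1.1], then new gray value b = x * a clamped to at most 1."""
    x = np.clip(img + 0.6, 0.9, 1.1)  # S72-S73: stretch coefficient per pixel
    return np.minimum(img * x, 1.0)   # S74-S75: new gray value, capped at 1
```

Mid-gray and bright pixels are lifted (a = 0.5 becomes 0.55), while dark pixels (a + 0.6 < 0.9) are slightly darkened, which is what increases the contrast.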
And S8, outputting the super-resolution image.
1. The processed YIQ image is inverse-transformed to generate an RGB image.
2. The block-processed images are stitched together.
3. The processed super-resolution image is written to the file format required by the user.
In an alternative embodiment of the present application, the adaptive processing may be performed in a specific preprocessing process.
Taking 2× super-resolution of the Beijing-3 small-satellite panchromatic image as an example: the panchromatic resolution of the Beijing-3A satellite is 0.5 m and the multispectral resolution is 2 m; to improve image definition, 2× super-resolution is performed. The specific embodiment is as follows:
step 1: acquiring an image for preprocessing
1. Image data is acquired, supporting a variety of image formats including, but not limited to, TIFF, JPG, PNG, etc., and supporting single band images, color RGB images, and multispectral images.
2. The projection information of the original image is inherited, but the projection parameters are modified so that the new projection information matches the super-resolution image.
GeoTransform[1]/=2
GeoTransform[5]/=2
The first term modifies the pixel size of the super-resolution image in the X direction; 2 is the magnification.
The second term modifies the pixel size of the super-resolution image in the Y direction; 2 is the magnification.
3. The RPC file of the original image is modified to match the super-resolution image.
RPC.dfLINE_OFF*=2;
RPC.dfSAMP_OFF*=2;
RPC.dfLINE_SCALE*=2;
RPC.dfSAMP_SCALE*=2。
4. The image is processed in blocks to avoid running out of computer memory.
5. The image is converted to floating point; if the imported image is of integer type, it is converted to floating point values between 0 and 1.
6. The image is converted from RGB to YIQ. The general formula is as follows:
Y=(0.299*R+0.587*G+0.114*B)
I=(0.595716*R-0.274453*G-0.321263*B)
Q=(0.211456*R-0.522591*G+0.311135*B)
This step is not required for panchromatic images.
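The RGB-to-YIQ conversion of step 1.6 is a single matrix product per pixel. A NumPy sketch using exactly the coefficients above (the function name is illustrative):

```python
import numpy as np

# Rows: Y, I, Q; columns: R, G, B (coefficients from step 1.6).
M_RGB2YIQ = np.array([
    [0.299,     0.587,     0.114],
    [0.595716, -0.274453, -0.321263],
    [0.211456, -0.522591,  0.311135],
])

def rgb_to_yiq(rgb):
    """rgb: (..., 3) float array in [0, 1]; returns YIQ of the same shape."""
    return np.asarray(rgb) @ M_RGB2YIQ.T

y, i, q = rgb_to_yiq([1.0, 1.0, 1.0])   # pure white
```

For pure white the chroma channels vanish (the I and Q rows sum to zero), which is a quick sanity check on the coefficients.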
Step 2: resampling based on preprocessed image
The original image is resampled so that the newly generated image size meets the user's requirement.
In this embodiment, the Lanczos algorithm is used to resample the image. For an input pixel position (x, y), the weights L(x) and L(y) at the different positions in the window are calculated as follows:
L(x) = sinc(x)·sinc(x/a), if −a < x < a; L(x) = 0, otherwise;
L(y) = sinc(y)·sinc(y/a), if −a < y < a; L(y) = 0, otherwise;
where sinc(t) = sin(πt)/(πt) and a is half the window length. The points in the template are then averaged with these weights:
S(x, y) = Σ_i Σ_j f(x_i, y_j)·L(x − x_i)·L(y − y_j)
Fig. 7 shows the weight distribution when a = 3. Here the Lanczos sampling window is 7×7.
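The weighting of step 2 can be sketched as follows, assuming the standard Lanczos kernel L(x) = sinc(x)·sinc(x/a) with a = 3 (helper names are illustrative; border handling is simplified by skipping out-of-image taps):

```python
import numpy as np

def lanczos_weight(x, a=3):
    """L(x) = sinc(x) * sinc(x/a) for |x| < a, else 0.
    np.sinc(t) is sin(pi t)/(pi t), so the pi is built in."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < a, np.sinc(x) * np.sinc(x / a), 0.0)

def lanczos_resample(img, x, y, a=3):
    """Weighted average over the 7x7 template around position (x, y)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    acc, wsum = 0.0, 0.0
    for j in range(y0 - a, y0 + a + 1):
        for i in range(x0 - a, x0 + a + 1):
            if 0 <= j < img.shape[0] and 0 <= i < img.shape[1]:
                w = float(lanczos_weight(x - i, a) * lanczos_weight(y - j, a))
                acc += w * img[j, i]
                wsum += w
    return acc / wsum
```

The weights are normalized by their sum, so a constant image is reproduced exactly at any fractional position.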
Step 3: Performing surface fitting on the resampled image
The embodiment of the application provides an algorithm for suppressing image noise based on surface fitting.
A quadric surface is fitted to each image point and its eight neighborhood points; such a surface follows the image surface closely, preserving the sharpness of the original image while removing noise.
The surface fitting formula is as follows:
f(x, y) = ax² + by² + cxy + dx + ey + f
The coefficients are solved by the least squares method so that the error is minimized. Here the error is
E = Σ_{i=1..9} (f(x_i, y_i) − z_i)²,
where z_i is the gray value at the i-th of the nine points (the image point and its eight neighbors).
The following conditions need to be satisfied:
∂E/∂a = 0, ∂E/∂b = 0, ∂E/∂c = 0, ∂E/∂d = 0, ∂E/∂e = 0, ∂E/∂f = 0.
Written in matrix form, this gives the normal equations AX = W, where X = (a, b, c, d, e, f)ᵀ, A is the 6×6 matrix of sums (over the nine points) of products of the basis terms x², y², xy, x, y, 1, and W is the vector of sums of z_i times each basis term.
Then X = A⁻¹W.
After solving for the coefficient vector X, calculate the gray value z of the fitted point:
z = ax² + by² + cxy + dx + ey + f;
if z is less than 0, z is set to 0; if z is greater than 1, z is set to 1.
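The fit above is a 9-equation, 6-unknown least-squares problem per pixel. A NumPy sketch, assuming local coordinates x, y ∈ {−1, 0, 1} with the center pixel at the origin, so the fitted value at the center is simply the constant term f:

```python
import numpy as np

def fit_quadric_center(patch):
    """Fit f(x,y) = a x^2 + b y^2 + c xy + d x + e y + f to a 3x3 patch
    by least squares and return the fitted value at the centre, clipped
    to [0, 1] as in the text."""
    ys, xs = np.mgrid[-1:2, -1:2]
    x, y = xs.ravel().astype(float), ys.ravel().astype(float)
    z = np.asarray(patch, dtype=float).ravel()
    A = np.column_stack([x**2, y**2, x * y, x, y, np.ones(9)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)   # solves the normal equations
    return float(np.clip(coeffs[5], 0.0, 1.0))       # f(0, 0) = f
```

An exactly quadratic patch is reproduced exactly; noisy patches are smoothed toward the fitted surface.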
Step 4: Performing frequency domain enhancement on the surface-fitted image
The embodiment of the application provides an algorithm that enhances the high frequencies of the image while suppressing noise. This both highlights features in the image and prevents noise from being amplified.
1. Fourier transform:
F(u, v) = Σ_{x=0..M−1} Σ_{y=0..N−1} f(x, y)·e^(−j2π(ux/M + vy/N))
The image is Fourier transformed.
2. Construct the high-frequency filter; a Gaussian filter is used here:
Filter(r, c) = Gaussian high-frequency response with gain Gain, if s > R (the expression is given as an equation image in the original);
Filter(r, c) = 1, if s ≤ R;
where (r, c) denotes the image point coordinates, Gain is the gain factor, and R is the filter radius; here R = 300 and Gain = 1.5.
s² = (r − y₀)² + (c − x₀)², where (x₀, y₀) is the center of the window image and s is the distance from the window image point (r, c) to the center.
3. Perform high-frequency filtering on the image:
F_Filter(r,c)=F(r,c)*Filter(r,c);
The Gaussian filter constructed above is applied to the Fourier-transformed image F(r, c).
4. Inverse Fourier transform:
f′(x, y) = (1/(MN))·Σ_{u=0..M−1} Σ_{v=0..N−1} F_Filter(u, v)·e^(j2π(ux/M + vy/N))
5. Overflow limiting (clipping):
if f′(x, y) < 0, f′(x, y) = 0;
if f′(x, y) > 1, f′(x, y) = 1.
That is, when the inverse Fourier transform result is less than 0, it is set to 0; when it is greater than 1, it is set to 1; values between 0 and 1 remain unchanged.
6. Noise suppression.
For each pixel of the image, calculate the maximum gray difference to its four neighborhood points (up, down, left, and right):
Max_Diff = max{abs(f(x, y) − f(x₀, y₀))};
compare the maximum gray difference with a threshold T:
if Max_Diff < T, f′(x, y) = f(x, y); here T = 0.08.
That is, if the maximum gray difference is less than 0.08, the clipped gray value is discarded and the pixel is restored to its gray value before the frequency domain enhancement; if it is 0.08 or more, the clipped gray value is used.
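The noise-suppression rule can be sketched as below. The text does not state on which image Max_Diff is measured; this sketch assumes the enhanced image, and border pixels are handled by edge padding (both are assumptions):

```python
import numpy as np

def suppress_noise(before, after, T=0.08):
    """Keep the frequency-enhanced value only where the pixel differs from
    at least one 4-neighbour by T or more; otherwise restore `before`."""
    p = np.pad(after, 1, mode="edge")
    max_diff = np.max(np.stack([
        np.abs(after - p[:-2, 1:-1]),   # up
        np.abs(after - p[2:, 1:-1]),    # down
        np.abs(after - p[1:-1, :-2]),   # left
        np.abs(after - p[1:-1, 2:]),    # right
    ]), axis=0)
    return np.where(max_diff < T, before, after)
```

Uniform changes (no local contrast) are rolled back, while genuine edges survive the test.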
Step 5: Performing linear feature enhancement on the frequency-domain-enhanced image
Linear feature enhancement is divided into three parts: ridge/valley line extraction, linear feature extraction, and linear feature enhancement.
Ridge/valley line extraction. The invention develops an algorithm for extracting salient ridge lines and valley lines, so that the extracted lines coincide with the topographic feature lines. Ridge line extraction is divided into the following 8 steps; for the specific algorithm, refer to fig. 8:
1) Take the value of each pixel of the image in turn.
2) Look up the values of its four neighborhood pixels: up, down, left, and right.
3) When extracting the ridge line, set the pixel value of the point with the minimum value to 0; when extracting the valley line, set the pixel value of the point with the maximum value to 0.
4) Traverse the whole image to obtain the initial ridge line.
5) Traverse each initial ridge line point in turn.
6) Calculate the maximum gray difference between the point and its adjacent points.
7) If the maximum gray difference is less than the threshold (0.08), remove the point from the ridge line points.
8) After traversing all initial ridge line points, the ridge line points are obtained.
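The 8 steps above can be sketched directly (an unoptimized double loop; how ties are broken when several points in a cross share the minimum value is not specified in the text, so the first such point is taken here):

```python
import numpy as np

def extract_ridge(img, thresh=0.08):
    """Steps 1)-8): zero out the local minimum of every 5-point cross,
    then cull surviving points with low local contrast."""
    h, w = img.shape
    ridge = np.ones((h, w), dtype=bool)
    for r in range(h):
        for c in range(w):
            pts = [(r, c)] + [(r + dr, c + dc)
                              for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                              if 0 <= r + dr < h and 0 <= c + dc < w]
            rm, cm = min(pts, key=lambda p: img[p])   # local minimum...
            ridge[rm, cm] = False                     # ...is not a ridge point
    for r in range(h):
        for c in range(w):
            if not ridge[r, c]:
                continue
            diffs = [abs(img[r, c] - img[rr, cc])
                     for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                     if 0 <= rr < h and 0 <= cc < w]
            if max(diffs) < thresh:                   # weak point: cull it
                ridge[r, c] = False
    return ridge
```

For valley lines the same code applies with `min` replaced by `max`. On a synthetic image with one bright row, exactly that row survives.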
The algorithm for extracting linear features based on the ridge/valley lines is as follows (see fig. 9):
1) Traverse each ridge line point in turn.
2) Divide the full 360 degrees around the point into a number of equal parts, here 72.
3) Search along each direction for consecutive ridge line points until the next point is not a ridge line point.
4) Count the number of ridge line points in each direction.
Note that the ridge line points in two directions 180 degrees apart are counted together; for example, the points in the 0-degree and 180-degree directions are merged.
5) Assign the largest count over all directions to the ridge line point as its weight.
6) Traverse the ridge line points again; if the weight of a point is less than the threshold (here, 10 pixels), discard the point.
7) The remaining ridge line points are the linear feature points.
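Steps 1)–7) can be sketched as follows, assuming straight-line stepping with rounding to the nearest pixel in each of the 72 directions (the stepping scheme is an assumption; the text only says the search proceeds in each direction):

```python
import numpy as np

def linear_feature_points(ridge, n_dirs=72, min_len=10):
    """Keep only ridge points that lie on a run of at least `min_len`
    ridge points along some direction (opposite directions merged)."""
    h, w = ridge.shape
    angles = np.deg2rad(np.arange(n_dirs) * 360.0 / n_dirs)
    keep = np.zeros_like(ridge)
    for y, x in zip(*np.nonzero(ridge)):
        counts = np.zeros(n_dirs)
        for k, a in enumerate(angles):
            dy, dx = np.sin(a), np.cos(a)
            step = 1
            while True:
                yy, xx = int(round(y + dy * step)), int(round(x + dx * step))
                if not (0 <= yy < h and 0 <= xx < w) or not ridge[yy, xx]:
                    break
                counts[k] += 1
                step += 1
        merged = counts[:n_dirs // 2] + counts[n_dirs // 2:]  # 0/180 together
        if merged.max() >= min_len:   # weight = best direction's point count
            keep[y, x] = True
    return keep
```

On a straight 15-pixel ridge every point sees a merged run of 14 neighbours, so the whole line is kept.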
The embodiment of the application also provides an algorithm for enhancing linear features based on the ridge and valley lines. Enhancement of the ridge line linear features is divided into the following 3 steps:
1) Traverse the linear feature points of the ridge line in turn.
2) Calculate a new gray value for the point:
new gray value = original gray value × enhancement coefficient. Here the enhancement coefficient is set to 1.2.
3) If the new gray value is greater than 1, set it to 1.
Enhancement of the valley line linear features is divided into the following 2 steps:
1) Traverse the linear feature points of the valley line in turn.
2) Calculate a new gray value for the point:
new gray value = original gray value × enhancement coefficient. Here the enhancement coefficient is set to 0.8.
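Both enhancement passes collapse to a masked multiply-and-clip. A NumPy sketch (the mask-based formulation is an assumption; the text describes a per-point traversal, but the arithmetic is the same):

```python
import numpy as np

def enhance_linear_features(img, ridge_mask, valley_mask,
                            ridge_coeff=1.2, valley_coeff=0.8):
    """Brighten ridge feature points (clipped at 1) and darken valley ones."""
    out = img.copy()
    out[ridge_mask] = np.minimum(out[ridge_mask] * ridge_coeff, 1.0)
    out[valley_mask] *= valley_coeff
    return out
```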
Step 6: Reconstructing the surrounding environment of the linear features
The embodiment of the application provides an algorithm for reconstructing the surroundings of the linear features, which removes noise and further highlights the linear features. Edge environment reconstruction is divided into the following 2 steps; for the specific algorithm, refer to fig. 10.
1) Dilate the linear features n times, marking the dilation layers 1, 2, 3, …, n; here n = 3.
2) Starting from layer n, median-filter the points in the dilated region from the outside in. Only points in the current layer and in outer layers may participate in the median filter; inner-layer points may not. For example, when filtering a layer-3 point, only layer-3 points and points outside layer 3 (layer 4 and beyond) can participate; layer-2 and layer-1 points cannot.
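The two steps can be sketched as below, assuming 4-neighbour dilation and a 3×3 median window (neither is specified in the text); layer n is the outermost ring, and unmarked background points outside the rings also count as "outer":

```python
import numpy as np

def dilate_layers(feature, n=3):
    """Dilate the feature mask n times with a 4-neighbour structuring
    element; the k-th new ring of pixels is labelled k (k = 1..n)."""
    labels = np.zeros(feature.shape, dtype=int)
    region = feature.copy()
    for k in range(1, n + 1):
        grown = region.copy()
        grown[1:, :] |= region[:-1, :]
        grown[:-1, :] |= region[1:, :]
        grown[:, 1:] |= region[:, :-1]
        grown[:, :-1] |= region[:, 1:]
        labels[grown & ~region] = k
        region = grown
    return labels

def median_reconstruct(img, feature, labels, n=3):
    """Median-filter ring points from the outermost ring (n) inward; only
    the current ring, outer rings and untouched background take part."""
    out = img.copy()
    h, w = img.shape
    for k in range(n, 0, -1):
        allowed = (labels >= k) | ((labels == 0) & ~feature)
        updates = {}
        for y, x in zip(*np.nonzero(labels == k)):
            vals = [out[yy, xx]
                    for yy in range(max(0, y - 1), min(h, y + 2))
                    for xx in range(max(0, x - 1), min(w, x + 2))
                    if allowed[yy, xx]]
            updates[(y, x)] = float(np.median(vals))
        for (y, x), v in updates.items():
            out[y, x] = v
    return out
```

A noisy pixel next to a feature point is replaced by the median of its allowed neighbours, while the feature point itself is never touched or consulted by inner rings.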
Step 7: Performing brightness stretching on the image after feature reconstruction
The embodiment of the application provides an algorithm for stretching the brightness of the image, increasing its contrast; refer to fig. 11.
1) Traverse each pixel point and obtain its gray value a.
2) Calculate the gray stretch coefficient x = a + 0.6. If x is less than 0.9, set x to 0.9; if x is greater than 1.1, set x to 1.1. The relationship between the stretch coefficient and the gray value is shown in fig. 12.
3) Calculate the new gray value b = x·a; if b is greater than 1, set b to 1.
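Steps 1)–3) vectorize directly in NumPy:

```python
import numpy as np

def stretch_brightness(img):
    """b = clip(a + 0.6, 0.9, 1.1) * a, then clipped to at most 1."""
    x = np.clip(img + 0.6, 0.9, 1.1)   # per-pixel stretch coefficient
    return np.minimum(x * img, 1.0)
```

Bright pixels (a ≥ 0.5) get the full 1.1× boost while dark pixels (a ≤ 0.3) are slightly darkened by the 0.9 floor, which is what increases the contrast.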
Step 8: Outputting the super-resolution image
1. Perform the inverse transformation on the processed YIQ image to generate an RGB image:
R=Y+0.9563*I+0.6210*Q
G=Y-0.2721*I-0.6474*Q
B=Y-1.1070*I+1.7046*Q
2. Stitch the block-processed images back together.
3. Write the processed super-resolution image to the file format required by the user.
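The inverse transform of step 8.1 is again one matrix product per pixel. A NumPy sketch with the coefficients above, using the forward matrix from step 1.6 to check the round trip (the loose tolerance reflects the four-decimal constants):

```python
import numpy as np

# Forward matrix from step 1.6 and inverse matrix from step 8.1.
M_RGB2YIQ = np.array([
    [0.299,     0.587,     0.114],
    [0.595716, -0.274453, -0.321263],
    [0.211456, -0.522591,  0.311135],
])
M_YIQ2RGB = np.array([
    [1.0,  0.9563,  0.6210],
    [1.0, -0.2721, -0.6474],
    [1.0, -1.1070,  1.7046],
])

def yiq_to_rgb(yiq):
    """yiq: (..., 3) float array; returns RGB of the same shape."""
    return np.asarray(yiq) @ M_YIQ2RGB.T

rgb = np.array([0.2, 0.5, 0.8])
roundtrip = yiq_to_rgb(rgb @ M_RGB2YIQ.T)
```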
The effect of the remote sensing image super-resolution processing is shown in figs. 13 and 14, where fig. 13 is the panchromatic original image and fig. 14 is the super-resolution image; the resolution is significantly improved compared with the original image.
Based on the same design concept, the embodiment also discloses a remote sensing image super-resolution reconstruction and enhancement device.
Referring to fig. 15, a remote sensing image super-resolution reconstruction and enhancement device includes:
a preprocessing module 151, configured to acquire an image and perform preprocessing;
a resampling module 152 for resampling the preprocessed image;
the processing module 153 is configured to perform surface fitting on the resampled image; performing frequency domain enhancement on the image subjected to surface fitting; performing linear feature enhancement based on the image after frequency domain enhancement; reconstructing the surrounding environment of the linear characteristic; performing brightness stretching on the image after the reconstruction of the surrounding environment;
and an output module 154, configured to output the super-resolution image.
The device can carry out the relevant steps of the remote sensing image super-resolution reconstruction and enhancement method; for details, refer to the description above, which is not repeated here.
The above embodiments are only intended to describe the technical solutions of the present application in detail and to help understand its method and core idea; they should not be construed as limiting the present application. Those skilled in the art will appreciate that various modifications and substitutions can be made without departing from the scope of the present disclosure.

Claims (10)

1. A remote sensing image super-resolution reconstruction and enhancement method is characterized by comprising the following steps:
acquiring an image and preprocessing the image;
resampling the preprocessed image;
performing surface fitting on the resampled image;
performing frequency domain enhancement on the image subjected to surface fitting;
performing linear feature enhancement based on the image after frequency domain enhancement;
reconstructing the surrounding environment of the linear characteristic;
performing brightness stretching on the image after the reconstruction of the surrounding environment;
and outputting the super-resolution image.
2. The method of claim 1, wherein the surface fitting the resampled image comprises:
and performing surface fitting on each image point of the resampled image according to the image point and the eight neighborhood points thereof according to a least square principle.
3. The method of claim 1, wherein the frequency domain enhancing the surface-fitted image comprises:
carrying out Fourier transform on the image subjected to surface fitting;
performing high-frequency filtering on the Fourier-transformed image by adopting a high-frequency filter;
carrying out inverse Fourier transform on the image subjected to high-frequency filtering;
carrying out overflow limiting processing on the Fourier inverse transformation result;
calculating the maximum gray difference value of each pixel point of the image and four neighborhood points of the image, and comparing the maximum gray difference value with a first set threshold value; if the gray value is smaller than the first set threshold, discarding the gray value subjected to overflow limiting processing, and restoring the gray value to a corresponding gray value before frequency domain enhancement; and if the gray value is larger than or equal to the first set threshold, using the gray value after the overflow limiting processing.
4. The method according to any one of claims 1-3, wherein the performing linear feature enhancement based on the frequency domain enhanced image comprises:
extracting ridge lines and valley lines based on the image after frequency domain enhancement;
extracting ridge line points and valley line points which meet set requirements as linear characteristics;
and performing linear characteristic enhancement on the linear characteristic.
5. The method of claim 4, wherein the extracting the ridge line and the valley line based on the frequency domain enhanced image comprises:
the ridge line extraction adopts the following mode:
defining a ridge line graph P, and setting each pixel value of P to be 1;
sequentially acquiring the gray value of each pixel point of the image;
searching gray values of four neighborhood points of the pixel point, namely upper, lower, left and right;
among the pixel point and its four neighborhood points, setting the value in the ridge line graph at the position of the minimum gray value to 0;
after traversing the complete graph, obtaining an initial ridge line;
traversing each ridge line point in sequence;
calculating the maximum gray difference between the ridge line point and four neighborhood points of the upper, lower, left and right;
if the maximum gray difference is smaller than a second set threshold, the point is removed from the ridge line point;
traversing all initial ridge line points to obtain ridge line points;
the method for extracting the mountain valley line comprises the following steps:
defining a valley line graph K, and setting each pixel value of K to be 1;
sequentially acquiring the gray value of each pixel point of the image;
searching gray values of four neighborhood points of the pixel point, namely upper, lower, left and right;
among the pixel point and its four neighborhood points, setting the value in the valley line graph at the position of the maximum gray value to 0;
traversing the complete graph to obtain an initial valley line;
sequentially traversing each valley line point;
calculating the maximum gray difference between the valley line point and four neighborhood points at the upper, lower, left and right sides of the valley line point;
if the maximum gray difference is smaller than a third set threshold value, the point is removed from the valley line point;
and traversing all the initial valley line points to obtain valley line points.
6. The method according to claim 4, wherein the extracting ridge line points and valley line points that satisfy a setting requirement as linear features includes:
the linear characteristic of the ridge line point is extracted by the following method:
traversing each ridge line point in sequence;
dividing a circle of the ridge line point into a plurality of equal parts by taking the ridge line point as a center;
searching for a ridge line point along each of the divided directions until a next point is not a ridge line point;
counting the number of ridge line points in each direction;
assigning the largest ridge line point count over all directions to the ridge line point as its weight;
traversing the ridge line points again, and if the weight of a ridge line point is judged to be smaller than a fourth set threshold, discarding the point; the remaining ridge line points are judged to meet the set requirement and serve as linear features;
the linear characteristic of the mountain and valley line points is extracted by the following method:
traversing each valley line point in sequence;
dividing a circle around the valley line point into a plurality of equal parts, with the valley line point as the center;
searching for a valley line point along each divided direction until the next point is not a valley line point;
counting the number of valley line points in each direction;
assigning the largest valley line point count over all directions to the valley line point as its weight;
traversing the valley line points again, and if the weight of the valley line points is judged to be smaller than a fourth set threshold value, abandoning the valley line points; and the remaining valley points are judged to meet the set requirements and serve as linear characteristics.
7. The method of claim 4, wherein said performing linear feature enhancement on the linear feature comprises:
the linear characteristic enhancement of the ridge line adopts the following mode:
sequentially traversing linear characteristic points of the ridge line;
calculating a new gray value of the linear feature point; wherein the new gray value = the current gray value of the linear feature point x the enhancement coefficient; wherein the enhancement factor is greater than 1 and less than 2;
the new grey values are limited: if the new gray value is judged to be larger than 1, the new gray value is 1;
the linear characteristic enhancement of the mountain valley line adopts the following mode:
sequentially traversing linear characteristic points of the mountain valley line;
calculating a new gray value of the linear feature point; wherein the new gray value = the current gray value of the linear feature point x the enhancement coefficient; wherein the enhancement factor is greater than 0 and less than 1.
8. The method of claim 1, wherein the reconstructing the ambient environment of the linear feature comprises:
performing expansion processing on the linear characteristic for n times; n is greater than or equal to 2;
performing median filtering on the linear feature points in the dilated region from the outside in, starting from layer n; only points in the current layer and in outer layers participate in the median filtering.
9. The method of claim 1, wherein luminance stretching the reconstructed image of the surrounding environment comprises:
traversing each pixel point to obtain a gray value a;
calculating a gray scale stretch coefficient x = a +0.6;
the gray scale stretch coefficient is limited: if x is less than 0.9, x is 0.9, and if x is more than 1.1, x is 1.1; if x is judged to be more than or equal to 0.9 and less than or equal to 1.1, x is kept unchanged;
calculating a new gray value b = x a of the pixel point;
the new grey value is limited: if b is larger than 1, b is 1; if b is judged to be less than or equal to 1, b is kept unchanged.
10. A remote sensing image super-resolution reconstruction and enhancement device is characterized by comprising:
the preprocessing module is used for acquiring and preprocessing an image;
the resampling module is used for resampling the preprocessed image;
the processing module is used for performing surface fitting on the resampled image; performing frequency domain enhancement on the image subjected to surface fitting; performing linear feature enhancement based on the image after frequency domain enhancement; reconstructing the surrounding environment of the linear characteristic; performing brightness stretching on the image after the reconstruction of the surrounding environment;
and the output module is used for outputting the super-resolution image.
CN202211685920.2A 2022-12-27 2022-12-27 Remote sensing image super-resolution reconstruction and enhancement method and device Active CN115953297B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211685920.2A CN115953297B (en) 2022-12-27 2022-12-27 Remote sensing image super-resolution reconstruction and enhancement method and device


Publications (2)

Publication Number Publication Date
CN115953297A true CN115953297A (en) 2023-04-11
CN115953297B CN115953297B (en) 2023-12-22

Family

ID=87290951

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211685920.2A Active CN115953297B (en) 2022-12-27 2022-12-27 Remote sensing image super-resolution reconstruction and enhancement method and device

Country Status (1)

Country Link
CN (1) CN115953297B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1874499A (en) * 2006-05-12 2006-12-06 北京理工大学 High dynamic equipment for reconstructing image in high resolution
CN101540039A (en) * 2008-03-21 2009-09-23 李金宗 Method for super resolution of single-frame images
CN104484577A (en) * 2014-12-30 2015-04-01 华中科技大学 Detection method based on ridge energy correction for ribbon underground target in mountain land
US20160027189A1 (en) * 2014-07-22 2016-01-28 Xerox Corporation Method and apparatus for using super resolution encoding to provide edge enhancements for non-saturated objects
CN107123089A (en) * 2017-04-24 2017-09-01 中国科学院遥感与数字地球研究所 Remote sensing images super-resolution reconstruction method and system based on depth convolutional network
CN110660022A (en) * 2019-09-10 2020-01-07 中国人民解放军国防科技大学 Image super-resolution reconstruction method based on surface fitting
WO2021097916A1 (en) * 2019-11-18 2021-05-27 中国科学院苏州生物医学工程技术研究所 Method and system for reconstructing high-fidelity image, computer device, and storage medium
CN114092325A (en) * 2021-09-24 2022-02-25 熵智科技(深圳)有限公司 Fluorescent image super-resolution reconstruction method and device, computer equipment and medium


Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
CSDN: "A survey of remote sensing image super-resolution reconstruction", Retrieved from the Internet <URL:http://t.csdn.cn/gPUw7> *
HE, HQ ET AL.: "Remote sensing image super-resolution using deep-shallow cascaded convolutional neural networks", SENSOR REVIEW, vol. 39, no. 05, pages 629 - 635 *
DING HAIYONG: "Research on super-resolution reconstruction technology for remote sensing images", China Doctoral Dissertations Full-text Database (Information Science and Technology), no. 02, pages 140 - 58 *
LI YUNFENG; LI SHENGYANG; HAN XIXI: "A super-resolution algorithm for surveillance images based on multi-neighborhood information", Computer Engineering, no. 06, pages 267 - 270 *
LI RAN ET AL.: "An automatic matching method for multi-source remote sensing images based on SIFT features", Science of Surveying and Mapping, vol. 36, no. 03, pages 8 - 10 *
YANG LI ET AL.: "Design and implementation of a high-resolution remote sensing satellite image acquisition system", Proceedings of the 4th China High Resolution Earth Observation Conference, pages 980 - 988 *
WANG JINGMENG ET AL.: "Super-resolution reconstruction of remote sensing images based on cross pixels and non-uniform B-spline surfaces", Remote Sensing for Land and Resources, vol. 27, no. 01, pages 144 - 149 *
WANG HONG: "Research on terrain feature extraction algorithms", China Master's Theses Full-text Database (Information Science and Technology), no. 02, pages 138 - 1868 *

Also Published As

Publication number Publication date
CN115953297B (en) 2023-12-22


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant