CN111784617B - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
CN111784617B
CN111784617B (application CN202010526199.7A)
Authority
CN
China
Prior art keywords
image
stripped
domain
component image
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010526199.7A
Other languages
Chinese (zh)
Other versions
CN111784617A (en)
Inventor
孙凌
张群燕
张鹏
Current Assignee
National Satellite Meteorological Center
Original Assignee
National Satellite Meteorological Center
Priority date
Filing date
Publication date
Application filed by National Satellite Meteorological Center
Priority to CN202010526199.7A
Publication of CN111784617A
Application granted
Publication of CN111784617B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T5/70 Denoising; Smoothing
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the invention provides an image processing method and device, wherein the method comprises the following steps: obtaining a to-be-stripped domain in a remote sensing image, wherein the to-be-stripped domain comprises pixels to be stripped; performing iterative decomposition on the original image in the to-be-stripped domain to obtain a first target component image with stripes and a second target component image without stripes after iteration; processing the first target component image to obtain a low-frequency image; and combining the low-frequency image and the second target component image to obtain a target image. The embodiment of the invention improves the image quality.

Description

Image processing method and device
Technical Field
The present invention relates to the field of communications technologies, and in particular, to an image processing method and apparatus.
Background
Remote sensing data from multi-element scanning imagers generally exhibit stripe and banding phenomena. The stripe phenomenon is a row-to-row image discontinuity within a single frame, mainly caused by inconsistent radiation responses among the multiple probe elements; banding is a brightness discontinuity between adjacent frames.
Three types of processing methods are currently used for the stripe phenomenon. The first is on-board normalization: electronics linearly transform and round the pixel brightness value (Digital Number, DN) of the originally sampled remote sensing image before downlink. This greatly suppresses stripes but cannot eliminate the phenomenon. The second type is cancellation based on computer image algorithms such as wavelet transform, median filtering, and Fourier transform. These algorithms depend on certain spectral characteristics of the stripe noise and remove it by spectral analysis of the image; they can achieve a good de-striping effect on a single image, but because the correction parameters are extracted from the image itself, the parameters are not stable enough to meet the requirements of automated processing. The third type, empirical distribution function matching, relies on collecting statistics over larger samples and equalizing based on the differences derived from those statistics. Such methods generally improve the original image, but they lack the flexibility to reflect local variations in the stripe noise, so the stripes may not be completely removed.
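As a toy illustration of the second class of methods (spectral cancellation), the sketch below notches out the high cross-scan frequencies of the image's row-mean profile, since stripe offsets are nearly constant along each scan line. This is a hypothetical simplification for intuition only, not the patent's method; the function name and the `keep` parameter are invented here.

```python
import numpy as np

def destripe_fourier_notch(img, keep=3):
    # Stripe offsets are (nearly) constant along each scan line, so they
    # concentrate in the row-mean profile; keeping only the lowest
    # cross-scan frequencies of that profile separates scene from stripes.
    row_means = img.mean(axis=1)
    spec = np.fft.rfft(row_means)
    spec[keep:] = 0                      # notch out high cross-scan frequencies
    smooth = np.fft.irfft(spec, n=row_means.size)
    return img - (row_means - smooth)[:, None]
```

Because the correction profile is estimated from the image itself, this sketch exhibits exactly the instability the text describes: the result depends on scene content, not only on the sensor.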
Disclosure of Invention
The embodiment of the invention provides an image processing method and device, which are used for removing the stripe phenomenon in remote sensing images and improving data quality.
The embodiment of the invention provides an image processing method, which comprises the following steps:
obtaining a to-be-stripped domain in a remote sensing image, wherein the to-be-stripped domain comprises pixels to be stripped;
performing iterative decomposition on the original image in the to-be-stripped domain to obtain a first target component image with stripes and a second target component image without stripes after iteration;
processing the first target component image to obtain a low-frequency image;
and combining the low-frequency image and the second target component image to obtain a target image.
Optionally, the obtaining the to-be-stripped domain in the remote sensing image includes:
the field to be stripped is obtained by the following formula:
the preset conditions include: R(x,y) > R_max, R(x,y) < R_min, |R(x+1,y) - R(x,y)| > D_x, or |R(x,y+1) - R(x,y)| > D_y;
D_x = min[A × D_{x,0.99}, D_max1];
D_y = min[A × D_{y,0.99}, D_max2];
wherein M(x,y) = 1 represents the to-be-stripped domain, M(x,y) = 0 represents the region of the remote sensing image outside the to-be-stripped domain, R(x,y) represents a pixel in the remote sensing image, R_max represents the maximum possible reflectivity of a particular band, R_min represents the minimum possible reflectivity of said particular band, D_x represents a first gradient threshold along the scanning direction of the image, D_y represents a second gradient threshold across the scanning direction of the image, A represents a preset value, D_max1 represents an empirical value along the scanning direction, D_max2 represents an empirical value across the scanning direction, D_{x,0.99} represents the absolute gradient value at a cumulative probability density of 0.99 along the scanning direction, and D_{y,0.99} represents the absolute gradient value at a cumulative probability density of 0.99 across the scanning direction.
Optionally, the performing iterative decomposition on the original image in the to-be-stripped domain to obtain a first target component image with a strip and a second target component image without a strip after iteration, including:
gradually extracting component images without stripes from the original image in the to-be-stripped domain through iterative calculation, and merging the component images without the stripes, which are obtained through the gradual extraction, to obtain the second target component image;
and determining images except the second target component image in the original image in the to-be-stripped domain as the first target component image.
Optionally, the step of extracting the component image without the stripe from the original image in the domain to be stripped by iterative calculation includes:
initializing a component image with stripes to be an original image in the domain to be stripped, and initializing a component image without stripes to be 0;
calculating the Laplacian of the limited gradient for the component image with the fringes by the following first formula:
L_i(x,y) = [v_{i-1}(x-1,y) - 2v_{i-1}(x,y) + v_{i-1}(x+1,y)] + [v_{i-1}(x,y+1) - v_{i-1}(x,y)]M(x,y) + [v_{i-1}(x,y-1) - v_{i-1}(x,y)]M(x,y-1);
calculating the discrete poisson equation for the component image without the stripes by the following second formula:
L_i(x,y) = u_i(x-1,y) + u_i(x+1,y) + u_i(x,y-1) + u_i(x,y+1) - 4u_i(x,y);
performing discrete Fourier transform on the first formula to obtain a frequency domain equation:
L_i(k_x,k_y) = [2cos(πk_x/N_x) + 2cos(πk_y/N_y) - 4]u_i(k_x,k_y);
obtaining the component image without stripes after the i-th iteration, u_i(x,y), based on the frequency-domain equation and the second formula;
Based on the following third formula, removing the component image without the stripes from the component image with the stripes to obtain the component image with the stripes after the ith iteration:
v_i(x,y) = v_{i-1}(x,y) - u_i(x,y);
wherein L_i(x,y) represents the Laplacian corresponding to the i-th iteration, v_{i-1}(x,y) represents the component image with stripes corresponding to the (i-1)-th iteration, M(x,y) is the binary mask with value 1 inside the to-be-stripped domain, u_i(x,y) represents the component image without stripes after the i-th iteration, v_i(x,y) represents the component image with stripes corresponding to the i-th iteration, and L_i(k_x,k_y) represents the frequency-domain form of the Laplacian.
Optionally, the processing the first target component image to obtain a low-frequency image includes:
and processing the first target component image by adopting a nonlinear filter through the following formula to obtain the low-frequency image:
wherein V_N(x,y) represents the low-frequency image, x represents a pixel along the scanning direction, y represents a pixel across the scanning direction, and H represents the filter-domain range across the scanning direction.
Optionally, before the obtaining the to-be-stripped domain in the remote sensing image, the method further includes:
performing dynamic interpolation expansion on the remote sensing image in the edge row direction to obtain an expanded remote sensing image;
correspondingly, the obtaining the to-be-stripped domain in the remote sensing image comprises the following steps:
and obtaining a to-be-stripped domain in the extended remote sensing image.
Optionally, after the combining the low-frequency image and the second target component image to obtain a target image, the method further includes:
acquiring a pixel value of a pixel in the target image;
and when the pixel value of the target image is larger than the maximum pixel value of the original image in the to-be-stripped domain or the pixel value of the target image is smaller than the minimum pixel value of the original image, replacing the pixel in the target image with the pixel in the original image.
Optionally, after the combining the low-frequency image and the second target component image to obtain a target image, the method further includes:
detecting the quality of the target image through the following fourth and fifth formulas;
the fourth formula is:
NIF = Σ(|d_{y,old}| - |d_{y,new}|) / Σ|d_{y,old}|;
the fifth formula is:
NDF = 1 - Σ(|d_{x,old} - d_{x,new}|) / Σ|d_{x,old}|;
wherein NIF represents a normalized improvement factor, d_{y,old} represents the gradient of the original image across the scan direction, and d_{y,new} represents the gradient of the target image across the scan direction; NDF represents a normalized distortion factor, d_{x,old} represents the gradient of the original image along the scan direction, and d_{x,new} represents the gradient of the target image along the scan direction;
and determining that the quality of the target image reaches a preset quality threshold when the value of the normalized improvement factor is in a first range, and determining that the target image retains horizontal gradient data beyond a preset threshold when the value of the normalized distortion factor is in a second range.
The embodiment of the invention also provides an image processing device, which comprises:
the first acquisition module is used for acquiring a to-be-stripped domain in the remote sensing image, wherein the to-be-stripped domain contains pixels to be stripped;
the second acquisition module is used for carrying out iterative decomposition on the original image in the to-be-stripped domain to obtain a first target component image with the strips and a second target component image without the strips after iteration;
the third acquisition module is used for processing the first target component image to obtain a low-frequency image;
and a fourth acquisition module, configured to combine the low-frequency image and the second target component image to obtain a target image.
The embodiment of the invention provides an electronic device, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor realizes the steps of the image processing method when executing the computer program.
Embodiments of the present invention provide a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the image processing method.
According to the image processing method and device of the embodiments of the invention, the to-be-stripped domain in the remote sensing image is obtained; the original image in the to-be-stripped domain is iteratively decomposed into a first target component image with stripes and a second target component image without stripes; the first target component image is then processed to obtain a low-frequency image; and finally the low-frequency image and the second target component image are combined to obtain the target image. In this way, real image information is extracted from the original image as far as possible and only the first target component image is filtered, so the real image characteristics are well preserved, the computational complexity is low, and the data quality is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart illustrating steps of an image processing method according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating the overall steps of an image processing method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram showing the contrast between an original image, an iterated image, and a filtered image in an embodiment of the present invention;
FIG. 4 is a histogram of count value statistics analysis before and after striping in an embodiment of the present invention;
FIG. 5 is a chart showing the statistical frequency histogram of the count value variation before and after stripping according to the embodiment of the present invention;
FIG. 6 is a diagram showing the comparison of count values before and after striping across the scanning direction in an embodiment of the present invention;
FIG. 7 is a block diagram of an image processing apparatus according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
As shown in fig. 1, a flowchart of steps of an image processing method according to an embodiment of the present invention includes the following steps:
step 101: and obtaining a domain to be stripped in the remote sensing image.
Specifically, the remote sensing image can be an FY-4/AGRI primary data image.
In addition, specifically, the field to be stripped contains the pixel to be stripped.
Preferably, the field to be stripped may contain all the pixels that need to be stripped. Namely, all pixels needing to be stripped in the remote sensing image are acquired in the step.
Step 102: and carrying out iterative decomposition on the original image in the to-be-stripped domain to obtain a first target component image with the strips and a second target component image without the strips after iteration.
In this step, specifically, after obtaining the domain to be stripped, the original image in the domain to be stripped is subjected to iterative decomposition, so as to obtain a first target component image with the stripe and a second target component image without the stripe after iteration.
Step 103: and processing the first target component image to obtain a low-frequency image.
In particular, after decomposing the original image in the domain to be de-striped into a first target component image and a second target component image, most of the true image features are contained in the second target component image, while the streak disturbances are contained in the first target component image.
However, since the first target component image may still include the slowly-varying (low-frequency) image features that should belong to the second target component image, in this step, nonlinear filtering processing may be performed on the first target component image obtained after iteration, that is, the stripe interference (characterized by high spatial frequency) and the slowly-varying features (low spatial frequency) may be separated, so as to obtain a low-frequency image.
Step 104: and combining the low-frequency image and the second target component image to obtain a target image.
In this step, specifically, after the low-frequency image is obtained, the low-frequency image and the second target component image are combined to obtain a final target image.
In this way, in this embodiment the to-be-stripped domain in the remote sensing image is obtained; the original image in that domain is iteratively decomposed into a first target component image with stripes and a second target component image without stripes; the first target component image is processed to obtain a low-frequency image; and the low-frequency image and the second target component image are combined to obtain the target image. Real image information is thus extracted from the original image as far as possible, and only the first target component image is processed, so the real image characteristics are well preserved and the computational complexity is low.
Further, in this embodiment, when obtaining a domain to be stripped in a remote sensing image, the domain to be stripped may be obtained by the following formula:
the preset conditions include: R(x,y) > R_max, R(x,y) < R_min, |R(x+1,y) - R(x,y)| > D_x, or |R(x,y+1) - R(x,y)| > D_y;
D_x = min[A × D_{x,0.99}, D_max1];
D_y = min[A × D_{y,0.99}, D_max2];
wherein M(x,y) = 1 represents the to-be-stripped domain, M(x,y) = 0 represents the region of the remote sensing image outside the to-be-stripped domain, R(x,y) represents a pixel in the remote sensing image, R_max represents the maximum possible reflectivity of a particular band, R_min represents the minimum possible reflectivity of said particular band, D_x represents a first gradient threshold along the scanning direction of the image, D_y represents a second gradient threshold across the scanning direction of the image, A represents a preset value, D_max1 represents an empirical value along the scanning direction, D_max2 represents an empirical value across the scanning direction, D_{x,0.99} represents the absolute gradient value at a cumulative probability density of 0.99 along the scanning direction, and D_{y,0.99} represents the absolute gradient value at a cumulative probability density of 0.99 across the scanning direction.
Specifically, the value of A may be 1.2. In addition, here, x denotes a pixel in the scanning direction of the original image, y denotes a pixel in the cross-scanning direction, (x+1) denotes the next pixel adjacent to pixel x in the scanning direction, and (y+1) denotes the next pixel adjacent to pixel y in the cross-scanning direction.
That is, a binary mask can be introduced in this embodiment, where M(x,y) = 0 denotes a pixel outside the to-be-stripped domain and M(x,y) = 1 denotes a pixel inside it. Further, this embodiment dynamically acquires the gradient thresholds in the scanning direction (x) and the cross-scanning direction (y) of the image from the formula D = min[A × D_0.99, D_max] applied in each direction, and then obtains the to-be-stripped domain M(x,y).
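The mask construction above can be sketched in NumPy. The patent's closed-form mask expression is not reproduced in this text, so the polarity assumed here (a pixel meeting any preset condition is excluded from the to-be-stripped domain, M = 0) and the helper name `destriping_mask` are assumptions, not the patent's definition.

```python
import numpy as np

def destriping_mask(R, R_min, R_max, A=1.2, D_max1=np.inf, D_max2=np.inf):
    # Dynamic gradient thresholds: D = min[A * D_0.99, D_max] per direction.
    gx = np.abs(np.diff(R, axis=1))          # |R(x+1,y) - R(x,y)|, scan direction
    gy = np.abs(np.diff(R, axis=0))          # |R(x,y+1) - R(x,y)|, cross-scan
    Dx = min(A * np.quantile(gx, 0.99), D_max1)
    Dy = min(A * np.quantile(gy, 0.99), D_max2)
    # Assumed polarity: out-of-range reflectivity or a gradient above the
    # dynamic thresholds excludes the pixel (M = 0); all others get M = 1.
    M = np.ones(R.shape, dtype=np.uint8)
    M[(R > R_max) | (R < R_min)] = 0
    edge = np.zeros(R.shape, dtype=bool)
    edge[:, :-1] |= gx > Dx
    edge[:-1, :] |= gy > Dy
    M[edge] = 0
    return M
```

Tying the thresholds to the 0.99 gradient quantile makes the mask adapt to each scene's dynamic range, as the text describes.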
In addition, in this embodiment, when the original image in the to-be-stripped domain is decomposed to obtain a first target component image with a stripe and a second target component image without a stripe, the component image without a stripe may be gradually extracted from the original image in the to-be-stripped domain through iterative computation, and the component images without a stripe obtained by the gradual extraction may be combined to obtain the second target component image; then, an image other than the second target component image in the original image in the to-be-stripped domain is determined as a first target component image.
Specifically, when component images without stripes are gradually extracted from the original image in the to-be-stripped domain through iterative computation, the method may include the following steps:
initializing a component image with stripes to be an original image in the domain to be stripped, and initializing a component image without stripes to be 0;
calculating the Laplacian of the limited gradient for the component image with the fringes by the following first formula:
L_i(x,y) = [v_{i-1}(x-1,y) - 2v_{i-1}(x,y) + v_{i-1}(x+1,y)] + [v_{i-1}(x,y+1) - v_{i-1}(x,y)]M(x,y) + [v_{i-1}(x,y-1) - v_{i-1}(x,y)]M(x,y-1);
calculating the discrete poisson equation for the component image without the stripes by the following second formula:
L_i(x,y) = u_i(x-1,y) + u_i(x+1,y) + u_i(x,y-1) + u_i(x,y+1) - 4u_i(x,y);
performing discrete Fourier transform on the first formula to obtain a frequency domain equation:
L_i(k_x,k_y) = [2cos(πk_x/N_x) + 2cos(πk_y/N_y) - 4]u_i(k_x,k_y);
obtaining the component image without stripes after the i-th iteration, u_i(x,y), based on the frequency-domain equation and the second formula;
Based on the following third formula, removing the component image without the stripes from the component image with the stripes to obtain the component image with the stripes after the ith iteration:
v_i(x,y) = v_{i-1}(x,y) - u_i(x,y);
wherein L_i(x,y) represents the Laplacian corresponding to the i-th iteration, v_{i-1}(x,y) represents the component image with stripes corresponding to the (i-1)-th iteration, M(x,y) is the binary mask with value 1 inside the to-be-stripped domain, u_i(x,y) represents the component image without stripes after the i-th iteration, v_i(x,y) represents the component image with stripes corresponding to the i-th iteration, and L_i(k_x,k_y) represents the frequency-domain form of the Laplacian.
That is, after the N-th iteration, the original image in the to-be-stripped domain may be decomposed as:
R(x,y) = Σ_{i=1..N} u_i(x,y) + v_N(x,y);
where N represents the number of iterations, Σ_{i=1..N} u_i(x,y) represents the sum of the component images without stripes extracted after each of the N iterations, i.e. the second target component image, and v_N(x,y) represents the first target component image after N iterations.
That is, when the image is decomposed to obtain the first target component image and the second target component image in the present embodiment, the following procedure may be included:
firstly, initializing a component image with stripes to an original image in a domain to be stripped, and initializing a component image without stripes to 0, namely:
v_0(x,y) = R(x,y), u_0(x,y) = 0, where v_0(x,y) represents the component image with stripes, u_0(x,y) represents the component image without stripes, and R(x,y) represents the original image in the to-be-stripped domain.
Then, for each iteration i (i = 1…N), a component image u_i(x,y) without stripes is extracted from the component image v_{i-1}(x,y) with stripes. In this process, since the stripes run along the scanning direction (x), the gradients in the x direction are little affected by them; therefore, when extracting the stripe-free component, all x-direction gradient data and part of the y-direction gradient data (M = 1) are adopted, the y-direction data in the to-be-stripped domain (M = 0) are not considered, and the Laplacian is calculated from these limited gradients (see the first formula). The component image without stripes should have the same Laplacian as the one based on the limited gradients, which yields a discrete Poisson equation (see the second formula). To simplify the solution, a discrete Fourier transform is applied to the first formula to obtain the frequency-domain equation; solving it gives u_i(k_x,k_y), whose inverse transform yields the solution u_i(x,y) of the second formula, i.e. the component image without stripes after the i-th iteration.
Finally, after each iteration the component image without stripes is removed from the component image with stripes, i.e. v_i(x,y) = v_{i-1}(x,y) - u_i(x,y). After N iterations, the image in the to-be-stripped domain can be decomposed as R(x,y) = Σ_{i=1..N} u_i(x,y) + v_N(x,y), where Σ_{i=1..N} u_i(x,y) represents the sum of the component images without stripes after each of the N iterations, i.e. the second target component image, and v_N(x,y) represents the component image with stripes after N iterations, i.e. the first target component image.
The value of N may be 8.
In this way, based on the to-be-stripped domain, this embodiment gradually extracts the "no-stripe" component containing the real image features from the original image through iterative calculation: the "stripe" component is initialized to the original image, the Laplacian of the "stripe" component is calculated from the limited gradients, a discrete Poisson equation for the "no-stripe" component is constructed and solved by discrete Fourier transform, the contribution of the "no-stripe" component is obtained and eliminated from the "stripe" component, and the "no-stripe" components are accumulated, thereby improving the data quality.
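The iteration above can be sketched as follows. Two details are assumptions not fixed by this text: reflective image edges, and the DCT-II as the discrete transform (its basis produces exactly the cosine eigenvalues 2cos(πk_x/N_x) + 2cos(πk_y/N_y) - 4 of the frequency-domain equation). All function names are hypothetical.

```python
import numpy as np
from scipy.fft import dctn, idctn

def limited_laplacian(v, M):
    # First formula: full second differences along scan (x = axis 1),
    # mask-gated one-sided differences across scan (y = axis 0).
    vp = np.pad(v, 1, mode='edge')            # reflective boundary (assumption)
    c = vp[1:-1, 1:-1]
    Lx = vp[1:-1, :-2] - 2.0 * c + vp[1:-1, 2:]
    Mp = np.pad(M, ((1, 0), (0, 0)), mode='edge')
    fwd = (vp[2:, 1:-1] - c) * M              # [v(x,y+1) - v(x,y)] M(x,y)
    bwd = (vp[:-2, 1:-1] - c) * Mp[:-1]       # [v(x,y-1) - v(x,y)] M(x,y-1)
    return Lx + fwd + bwd

def solve_poisson_dct(L):
    # Frequency-domain equation: the DCT-II diagonalises the 5-point
    # Laplacian with eigenvalues 2cos(pi*kx/Nx) + 2cos(pi*ky/Ny) - 4.
    Ny, Nx = L.shape
    Lhat = dctn(L, norm='ortho')
    eig = (2.0 * np.cos(np.pi * np.arange(Nx) / Nx)[None, :]
           + 2.0 * np.cos(np.pi * np.arange(Ny) / Ny)[:, None] - 4.0)
    eig[0, 0] = 1.0                           # null mode: avoid divide-by-zero
    uhat = Lhat / eig
    uhat[0, 0] = 0.0                          # pin the free constant (zero mean)
    return idctn(uhat, norm='ortho')

def destripe_iterations(R, M, n_iter=8):
    # Third formula: peel "no-stripe" components u_i off the "stripe"
    # component v_i; returns (sum of u_i, v_N).
    v = R.copy()
    u_sum = np.zeros_like(R)
    for _ in range(n_iter):
        u = solve_poisson_dct(limited_laplacian(v, M))
        v = v - u
        u_sum += u
    return u_sum, v
```

With an all-ones mask the limited Laplacian degenerates to the ordinary Neumann Laplacian, which the DCT solver inverts exactly up to an additive constant; the decomposition identity u_sum + v_N = R holds by construction.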
Further, in the present embodiment, when the first target component image is processed to obtain a low-frequency image, a nonlinear filter may be used, and the first target component image is processed to obtain the low-frequency image by the following formula:
wherein V_N(x,y) represents the low-frequency image, x represents a pixel along the scanning direction, y represents a pixel across the scanning direction, and H represents the filter-domain range across the scanning direction.
Specifically, after the first target component image and the second target component image are obtained, since most of the true image features are contained in the second target component image, i.e., in the "no-streak" component image, streak interference is contained in the first target component image, i.e., in the "streak" component image. However, the "striped" component may still contain image features that should belong to a gradual (low frequency) transition of the "no-striped" component. Thus, finally, a nonlinear filter is applied to the "striped" component of the image, separating the banding interference (characterized by high spatial frequencies) from the ramp features (low spatial frequencies).
Further, since the streak disturbance varies rapidly in the cross-scan direction (y) and slowly in the scan direction (x), the filter domain is extended to include several adjacent pixels in the cross-scan direction, and contains only one pixel in the scan direction. The filter domain range (H) across the scan direction should match the repetition period of the fringes, which is determined by the number of probe elements (or number of rows per scan). The low frequency image at this time is:
at this time, after obtaining the low-frequency image, the second target component image, which is the "no-streak" component image, and the filtered low-frequency image may be combined to obtain the de-streaked target image, which is
R destriped (x, y) represents a target image, V N (x, y) represents a low frequency image, < ->Representing a second object component image.
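The exact nonlinear filter formula is not reproduced in this text. A plausible stand-in that matches the description (a window of H pixels across the scanning direction, a single pixel along it, nonlinear) is a running median; treat the choice of median and the function names below as assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter

def lowfreq_component(v_N, H):
    # Hypothetical stand-in for the patent's nonlinear filter: a running
    # median over H pixels across scan (axis 0) and one pixel along scan
    # (axis 1). H should match the stripe repetition period, i.e. the
    # number of detector rows per scan.
    return median_filter(v_N, size=(H, 1), mode='nearest')

def recombine(u_sum, v_N, H):
    # Target image: second target component plus the filtered
    # low-frequency part of the first target component.
    return u_sum + lowfreq_component(v_N, H)
```

A median is a natural choice here because it passes slowly varying features through unchanged while rejecting the rapid cross-scan oscillation of the stripes.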
In addition, before the to-be-stripped domain in the remote sensing image is obtained, the embodiment can also perform dynamic interpolation expansion of the edge row and column directions on the remote sensing image to obtain an expanded remote sensing image; at this time, correspondingly, when the to-be-stripped domain in the remote sensing image is obtained, the to-be-stripped domain in the extended remote sensing image can be obtained.
Therefore, by carrying out dynamic interpolation expansion on the edge row direction of the remote sensing image, the time complexity of solving the poisson equation is reduced, and the calculation speed is improved.
In addition, in this embodiment, after the low-frequency image and the second target component image are combined to obtain a target image, the pixel value of the pixel in the target image may also be obtained, and when the pixel value of the target image is greater than the maximum pixel value of the original image in the stripe to be removed domain, or when the pixel value of the target image is less than the minimum pixel value of the original image, the pixel in the target image is replaced with the pixel in the original image.
Specifically, after the target image is obtained, the embodiment compares the pixel value of the target image with the maximum pixel value or the minimum pixel value of the original image in the to-be-stripped domain by obtaining the pixel value of the pixel in the target image, and when the pixel value of the target image obtained by comparison is larger than the maximum pixel value of the original image in the to-be-stripped domain or the pixel value of the target image is smaller than the minimum pixel value of the original image, the pixel in the target image is replaced with the pixel in the original image, so that the image quality of the target image is ensured, and the quality control of the stripped target image is realized.
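The quality-control replacement above can be sketched in one vectorised step; the helper name is hypothetical, and the comparison against the global minimum and maximum of the original image follows the reading of the text above.

```python
import numpy as np

def clamp_to_original(target, original):
    # Fall back to the original pixel wherever the de-striped value falls
    # outside the original image's pixel-value range.
    bad = (target > original.max()) | (target < original.min())
    return np.where(bad, original, target)
```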
In addition, in this embodiment, after the low-frequency image and the second target component image are combined to obtain the target image, the quality of the target image may also be detected by the following fourth formula and fifth formula;
the fourth formula is:
NIF = ∑(|d_{y,old}| - |d_{y,new}|) / ∑|d_{y,old}|;
the fifth formula is:
NDF = 1 - ∑(|d_{x,old} - d_{x,new}|) / ∑|d_{x,old}|;
wherein NIF represents the normalization improvement factor, d_{y,old} represents the gradient of the original image across the scan direction, and d_{y,new} represents the gradient of the target image across the scan direction; NDF represents the normalization distortion factor, d_{x,old} represents the gradient of the original image along the scan direction, and d_{x,new} represents the gradient of the target image along the scan direction;
and it is determined that the quality of the target image reaches a preset quality threshold when the value of the normalization improvement factor is in a first range, and it is determined that horizontal gradient data exceeding a preset threshold is retained in the target image when the value of the normalization distortion factor is in a second range.
Specifically, the first range may be 18% to 21% and the second range may be 92% to 95%, although neither is specifically limited herein.
In this way, by calculating the normalization improvement factor and/or the normalization distortion factor, it can be detected whether the image quality of the target image has been well improved: a normalization improvement factor in the range of 18% to 21% indicates that the image quality is well improved, and a normalization distortion factor of about 92% indicates that most of the horizontal gradient is retained in the de-striped data.
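Assuming the image is stored as a 2-D array with the cross-scan direction along axis 0 and the along-scan direction along axis 1 (an assumption about the data layout; the function name is illustrative), the fourth and fifth formulas can be sketched as:

```python
import numpy as np

def nif_ndf(original, target):
    # Finite differences stand in for the gradients of the formulas.
    d_y_old = np.diff(original, axis=0)  # cross-scan gradient, original image
    d_y_new = np.diff(target, axis=0)    # cross-scan gradient, target image
    d_x_old = np.diff(original, axis=1)  # along-scan gradient, original image
    d_x_new = np.diff(target, axis=1)    # along-scan gradient, target image
    nif = np.sum(np.abs(d_y_old) - np.abs(d_y_new)) / np.sum(np.abs(d_y_old))
    ndf = 1.0 - np.sum(np.abs(d_x_old - d_x_new)) / np.sum(np.abs(d_x_old))
    return nif, ndf
```

For an unchanged image NIF is 0 (no cross-scan gradient removed) and NDF is 1 (all along-scan gradient retained), which matches the reading of the two factors above.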
The present invention will be specifically described below by way of specific examples.
For example, take the Level-1 (primary) product data of the FY-4A scanning radiometer (FY-4A/AGRI), which has 6 visible/near-infrared channels in total. Channels 1 and 3 are 1 km resolution channels with 16-detector parallel scanning; channel 2 is a 500 m resolution channel with 128-detector parallel scanning; channels 4, 5 and 6 are 2 km resolution channels with 8-detector parallel scanning. The stripe phenomenon is caused by differences in radiation response between the detector elements. De-striping may be performed by the flow shown in fig. 2, that is:
First, the channel data and the calibration lookup-table data of the FY-4A imager Level-1 product are read, where the channel data are DN values; gradients of the channel data are computed along the scan direction and across the scan direction, and the gradient thresholds D_x and D_y are obtained dynamically.
Specifically, when dynamically obtaining the gradient thresholds D_x and D_y, the DN data to be processed are first prepared; it is then judged whether the pixel values of the current pixel and the next pixel contain abnormal values, the absolute gradient value is calculated, and probability statistics are performed on the gradient values; the probabilities are accumulated, and when the cumulative probability exceeds 99%, 1.2 times the gradient (in reflectivity) at that point is taken as the dynamic threshold; finally, the dynamic threshold is compared with the empirical threshold, and the smaller value is taken as the current threshold.
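The dynamic-threshold step can be sketched as follows; `np.quantile` stands in for the cumulative-probability statistics described above, and the function and parameter names are illustrative:

```python
import numpy as np

def dynamic_threshold(grad_abs, empirical, factor=1.2, cum_prob=0.99):
    # Gradient magnitude at the 99% cumulative-probability point,
    # scaled by 1.2, then capped by the empirical threshold.
    q = np.quantile(grad_abs, cum_prob)
    return min(factor * q, empirical)
```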
The channel data DN values are then converted to reflectivity data using the calibration lookup table, and the reflectivity data are subjected to dynamic interpolation expansion in the edge row and column directions, which reduces the time complexity of solving the Poisson equation and improves the calculation speed.
For example, the image for band 1 may be expanded from original 10992 rows by 10992 columns to 11664 rows by 11664 columns; the specific algorithm flow is as follows:
First, the extended row count is computed as the larger of (current row count + 16) and 1.1 times the current row count, and the extended column count as the larger of (current column count + 16) and 1.1 times the current column count; the number of edge rows and columns to be added is thus obtained. The average reflectivity of all pixels in the first row, first column, last row and last column is then calculated, and distance-weighted interpolation is performed on the top, bottom, left, right and four corner regions using this average together with the data of the nearest row and column, finally yielding the image to be de-striped.
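The extended-size rule can be sketched as below. Note that applying the stated rule to 10992 yields 12091, not the 11664 of the worked example above, so the embodiment presumably applies a further adjustment (for instance to an FFT-friendly size) that is not spelled out here; this sketch implements only the stated rule, and the function name is illustrative:

```python
def expanded_size(n):
    # The larger of (n + 16) and 1.1 * n, per the stated rule.
    return max(n + 16, int(round(1.1 * n)))
```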
Then, the to-be-stripped domain is obtained, yielding a binary mask image.
Then, the "striped" component is initialized to the original-image reflectivity data, and the "stripeless" component is initialized to 0.
Then, iterative calculation is performed, with the number of iterations set to 8. In each iteration, a Poisson equation is constructed for the "stripeless" component u_i: the limited-gradient Laplacian is calculated from the "striped" component image v_{i-1}, and the Poisson equation is solved analytically to obtain the "stripeless" component image u_i to be extracted; the components are then updated. After N iterations are completed, the second target component image is obtained from the accumulated "stripeless" components and the first target component image from the updated "striped" component; if the N iterations are not yet complete, the iterative decomposition continues.
For example, fig. 3 shows a comparative schematic of the original image, the iterated images, and the filtered image. Here V_0 represents the initialized "striped" component image, U_0 the initialized "stripeless" component image, V_m the "striped" component image after m iterations, U_m the "stripeless" component image after m iterations, W_m the filtered low-frequency image, and R the final de-striped result image.
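Under the same layout assumption (axis 0 as x along the scan, axis 1 as y across the scan), the iteration can be sketched with a DCT-based Poisson solve, since the cosine form of the frequency-domain equation in the claims corresponds to a discrete cosine transform with Neumann boundaries. Boundary handling (wrap-around differences) and all names here are simplifications of this sketch, not the patent's exact implementation:

```python
import numpy as np
from scipy.fft import dctn, idctn

def limited_laplacian(v, m):
    # First formula; with m == 1 everywhere this reduces to the
    # standard 4-neighbour Laplacian of the second formula.
    left, right = np.roll(v, 1, axis=0), np.roll(v, -1, axis=0)
    up, down = np.roll(v, -1, axis=1), np.roll(v, 1, axis=1)
    m_prev = np.roll(m, 1, axis=1)  # M(x, y-1)
    return (left - 2 * v + right) + (up - v) * m + (down - v) * m_prev

def poisson_solve(lap):
    # Frequency-domain relation:
    # L(kx,ky) = [2cos(pi*kx/Nx) + 2cos(pi*ky/Ny) - 4] * u(kx,ky)
    nx, ny = lap.shape
    L = dctn(lap, norm='ortho')
    kx = np.arange(nx)[:, None]
    ky = np.arange(ny)[None, :]
    denom = 2 * np.cos(np.pi * kx / nx) + 2 * np.cos(np.pi * ky / ny) - 4
    denom[0, 0] = 1.0  # the DC term is undetermined ...
    U = L / denom
    U[0, 0] = 0.0      # ... pin the solution to zero mean
    return idctn(U, norm='ortho')

def iterative_decompose(original, mask, n_iter=8):
    # v: "striped" component (initialized to the original image);
    # u_total: accumulated "stripeless" component (initialized to 0).
    v = original.astype(float).copy()
    u_total = np.zeros_like(v)
    for _ in range(n_iter):
        u = poisson_solve(limited_laplacian(v, mask))
        v = v - u          # third formula
        u_total += u
    return v, u_total      # first / second target component images
```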
Then, filtering is applied to the iterated "striped" component using a nonlinear filter to obtain the low-frequency image information, which is combined with the iterated "stripeless" component image (that is, the second target component image) to obtain the final target image. Quality inspection is then performed on the final target image: the pixel values of the pixels in the target image are obtained, and when a pixel value of the target image is greater than the maximum pixel value of the original image in the to-be-stripped domain, or less than the minimum pixel value of the original image, that pixel in the target image is replaced with the corresponding pixel of the original image.
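One plausible reading of the nonlinear filter, whose weight C1 = e^{C2} with C2 = -[V_{N(x,z)} - V_{N(x,y)}]^2/(2δ^2) is a bilateral-style range weight over a cross-scan window H, is sketched below; the window half-width and δ are illustrative choices, not values from the patent:

```python
import numpy as np

def crossscan_nonlinear_filter(v, half_width=4, delta=1.0):
    # Filter each row across the scan direction (axis 0): neighbours z
    # in the window H are weighted by C1 = exp(-(v[z]-v[y])**2 / (2*delta**2)).
    ny = v.shape[0]
    out = np.empty_like(v, dtype=float)
    for y in range(ny):
        z0, z1 = max(0, y - half_width), min(ny, y + half_width + 1)
        window = v[z0:z1]
        w = np.exp(-((window - v[y]) ** 2) / (2.0 * delta ** 2))
        out[y] = (w * window).sum(axis=0) / w.sum(axis=0)
    return out
```

Because the weight decays with intensity difference, large stripe steps contribute little, so the output keeps only the smooth low-frequency part of the "striped" component.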
For example, fig. 4 shows a schematic diagram of the statistical analysis of count values before and after de-striping; it can be seen from fig. 4 that the DN values do not change greatly before and after the de-striping process, and most of the useful information is retained.
In addition, fig. 5 shows a histogram of the count-value change before and after de-striping; in fig. 5, probability represents the probability distribution, and stripe−raw represents the de-striped count value minus the initial count value. As can be seen from fig. 5, the count values change little before and after de-striping; most values are essentially unchanged, and the mean change is only about 30.
In addition, fig. 6 compares the count values across the scan direction before and after de-striping; as can be seen from fig. 6, the spike ("burr") artifacts are alleviated by the de-striping process.
Finally, a Normalized Improvement Factor (NIF) and a Normalized Distortion Factor (NDF) are calculated.
Specifically, NIF quantifies the relative change in the gradient across the scan direction caused by de-striping; NIF is always positive, and when its value is in the range of 18% to 21%, the image quality is considered to be well improved.
Furthermore, NDF quantifies the relative change in the gradient along the scan direction caused by de-striping; when NDF is in the range of 92% to 95%, it indicates that most of the horizontal gradient has been retained in the de-striped data.
Thus, the striping processing process of the remote sensing data is completed.
In this way, the stripe phenomenon in the remote sensing image is removed through the process, and the data quality is improved.
In addition, as shown in fig. 7, a block diagram of an image processing apparatus according to an embodiment of the present invention includes:
a first obtaining module 701, configured to obtain a to-be-stripped domain in a remote sensing image, where the to-be-stripped domain includes pixels to be stripped;
a second obtaining module 702, configured to iteratively decompose the original image in the to-be-stripped domain to obtain a first target component image with a stripe and a second target component image without a stripe after iteration;
a third obtaining module 703, configured to perform nonlinear filtering processing on the first target component image to obtain a low frequency image;
and a fourth obtaining module 704, configured to combine the low-frequency image and the second target component image to obtain a target image.
It should be noted that, the device can implement all the method steps in the method embodiment and achieve the same technical effects, and will not be described in detail herein.
In addition, as shown in fig. 8, an entity structure diagram of an electronic device according to an embodiment of the present invention may include: processor 810, communication interface (Communications Interface) 820, memory 830, and communication bus 840, wherein processor 810, communication interface 820, memory 830 accomplish communication with each other through communication bus 840. The processor 810 may call a computer program stored on the memory 830 and executable on the processor 810 to perform the steps of:
obtaining a to-be-stripped domain in a remote sensing image, wherein the to-be-stripped domain comprises pixels to be stripped;
performing iterative decomposition on the original image in the to-be-stripped domain to obtain a first target component image with stripes and a second target component image without stripes after iteration;
processing the first target component image to obtain a low-frequency image;
and combining the low-frequency image and the second target component image to obtain a target image.
Further, the logic instructions in the memory 830 described above may be implemented in the form of software functional units and may be stored in a computer-readable storage medium when sold or used as a stand-alone product. Based on this understanding, the technical solution of the present invention may be embodied, essentially or in the part contributing to the prior art, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
The embodiments of the present invention further provide a non-transitory computer readable storage medium, on which a computer program is stored, which when executed by a processor, is implemented to perform the method provided in the foregoing embodiments and achieve the same technical effects, and will not be described in detail herein.
The apparatus embodiments described above are merely illustrative, wherein the elements illustrated as separate elements may or may not be physically separate, and the elements shown as elements may or may not be physical elements, may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art will understand and implement the present invention without undue burden.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus necessary general hardware platforms, or of course may be implemented by means of hardware. Based on this understanding, the foregoing technical solution may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (7)

1. An image processing method, comprising:
obtaining a to-be-stripped domain in a remote sensing image, wherein the to-be-stripped domain comprises pixels to be stripped;
performing iterative decomposition on the original image in the to-be-stripped domain to obtain a first target component image with stripes and a second target component image without stripes after iteration;
the step of performing iterative decomposition on the original image in the to-be-stripped domain to obtain a first target component image with the strip and a second target component image without the strip after iteration, including:
gradually extracting component images without stripes from the original image in the to-be-stripped domain through iterative calculation, and merging the component images without the stripes, which are obtained through the gradual extraction, to obtain the second target component image;
the step-by-step extraction of the component image without the stripes from the original image in the domain to be stripped through iterative calculation comprises the following steps:
initializing a component image with stripes to be an original image in the domain to be stripped, and initializing a component image without stripes to be 0;
calculating the Laplacian of the limited gradient for the component image with the fringes by the following first formula:
L_i(x,y) = [v_{i-1}(x-1,y) - 2v_{i-1}(x,y) + v_{i-1}(x+1,y)] + [v_{i-1}(x,y+1) - v_{i-1}(x,y)]·M(x,y) + [v_{i-1}(x,y-1) - v_{i-1}(x,y)]·M(x,y-1);
calculating the discrete poisson equation for the component image without the stripes by the following second formula:
L_i(x,y) = u_i(x-1,y) + u_i(x+1,y) + u_i(x,y-1) + u_i(x,y+1) - 4u_i(x,y);
performing discrete Fourier transform on the first formula to obtain a frequency domain equation:
L_i(k_x,k_y) = [2cos(πk_x/N_x) + 2cos(πk_y/N_y) - 4]·u_i(k_x,k_y);
obtaining the stripe-free component image u_i(x,y) after the ith iteration based on the frequency-domain equation and the second formula;
Based on the following third formula, removing the component image without the stripes from the component image with the stripes to obtain the component image with the stripes after the ith iteration:
v_i(x,y) = v_{i-1}(x,y) - u_i(x,y);
wherein L_i(x,y) represents the Laplacian corresponding to the ith iteration, v_{i-1}(x,y) represents the striped component image corresponding to the (i-1)th iteration, M(x,y) has a value of 1, u_i(x,y) represents the stripe-free component image after the ith iteration, v_i(x,y) represents the striped component image corresponding to the ith iteration, and L_i(k_x,k_y) represents the frequency-domain equation;
determining an image except the second target component image in the original image in the to-be-stripped domain as the first target component image;
processing the first target component image to obtain a low-frequency image;
the processing the first target component image to obtain a low-frequency image includes:
and processing the first target component image by adopting a nonlinear filter through the following formula to obtain the low-frequency image:
C1 = e^{C2};
C2 = -[V_{N(x,z)} - V_{N(x,y)}]^2 / (2δ^2);
wherein V_{N(x,y)} represents the low-frequency image, x represents the pixel position along the scan direction, y represents the pixel position across the scan direction, and H represents the filter domain range across the scan direction;
and combining the low-frequency image and the second target component image to obtain a target image.
2. The image processing method according to claim 1, wherein the acquiring the to-be-stripped domain in the remote sensing image includes:
the to-be-stripped domain is obtained by the following formula:
the preset conditions include: R(x,y) > R_max, R(x,y) < R_min, |R(x+1,y) - R(x,y)| > D_x, or |R(x,y+1) - R(x,y)| > D_y;
D_x = min[A × D_{x,0.99}, D_{max1}];
D_y = min[A × D_{y,0.99}, D_{max2}];
wherein M(x,y) = 1 represents the to-be-stripped domain, M(x,y) = 0 represents the region of the remote sensing image outside the to-be-stripped domain, R(x,y) represents a pixel in the remote sensing image, R_max represents the maximum possible reflectivity of the particular band, R_min represents the minimum possible reflectivity of the particular band, D_x represents the first gradient threshold along the scan direction of the image, D_y represents the second gradient threshold across the scan direction of the image, A represents a preset value, D_{max1} represents the empirical value along the scan direction, D_{max2} represents the empirical value across the scan direction, D_{x,0.99} represents the absolute gradient value at a cumulative probability density of 0.99 along the scan direction, and D_{y,0.99} represents the absolute gradient value at a cumulative probability density of 0.99 across the scan direction.
3. The image processing method according to claim 1, further comprising, before the obtaining the to-be-stripped domain in the remote sensing image:
performing dynamic interpolation expansion on the remote sensing image in the edge row direction to obtain an expanded remote sensing image;
correspondingly, the obtaining the to-be-stripped domain in the remote sensing image comprises the following steps:
and obtaining a to-be-stripped domain in the extended remote sensing image.
4. The image processing method according to claim 1, wherein after the combining the low-frequency image and the second target component image to obtain a target image, further comprising:
acquiring a pixel value of a pixel in the target image;
and when the pixel value of the target image is larger than the maximum pixel value of the original image in the to-be-stripped domain or the pixel value of the target image is smaller than the minimum pixel value of the original image, replacing the pixel in the target image with the pixel in the original image.
5. The image processing method according to claim 1, wherein after the combining the low-frequency image and the second target component image to obtain a target image, further comprising:
detecting the quality of the target image through the following fourth and fifth formulas;
the fourth formula is:
NIF = ∑(|d_{y,old}| - |d_{y,new}|) / ∑|d_{y,old}|;
the fifth formula is:
NDF = 1 - ∑(|d_{x,old} - d_{x,new}|) / ∑|d_{x,old}|;
wherein NIF represents the normalization improvement factor, d_{y,old} represents the gradient of the original image across the scan direction, and d_{y,new} represents the gradient of the target image across the scan direction; NDF represents the normalization distortion factor, d_{x,old} represents the gradient of the original image along the scan direction, and d_{x,new} represents the gradient of the target image along the scan direction;
and determining that the quality of the target image reaches a preset quality threshold value when the value of the normalization improvement factor is in a first range, and determining that horizontal gradient data exceeding the preset threshold value exists in the target image when the value of the normalization distortion factor is in a second range.
6. An image processing apparatus, comprising:
the first acquisition module is used for acquiring a to-be-stripped domain in the remote sensing image, wherein the to-be-stripped domain contains pixels to be stripped;
the second acquisition module is used for carrying out iterative decomposition on the original image in the to-be-stripped domain to obtain a first target component image with the strips and a second target component image without the strips after iteration;
the step of performing iterative decomposition on the original image in the to-be-stripped domain to obtain a first target component image with the strip and a second target component image without the strip after iteration, including:
gradually extracting component images without stripes from the original image in the to-be-stripped domain through iterative calculation, and merging the component images without the stripes, which are obtained through the gradual extraction, to obtain the second target component image;
the step-by-step extraction of the component image without the stripes from the original image in the domain to be stripped through iterative calculation comprises the following steps:
initializing a component image with stripes to be an original image in the domain to be stripped, and initializing a component image without stripes to be 0;
calculating the Laplacian of the limited gradient for the component image with the fringes by the following first formula:
L_i(x,y) = [v_{i-1}(x-1,y) - 2v_{i-1}(x,y) + v_{i-1}(x+1,y)] + [v_{i-1}(x,y+1) - v_{i-1}(x,y)]·M(x,y) + [v_{i-1}(x,y-1) - v_{i-1}(x,y)]·M(x,y-1);
calculating the discrete poisson equation for the component image without the stripes by the following second formula:
L_i(x,y) = u_i(x-1,y) + u_i(x+1,y) + u_i(x,y-1) + u_i(x,y+1) - 4u_i(x,y);
performing discrete Fourier transform on the first formula to obtain a frequency domain equation:
L_i(k_x,k_y) = [2cos(πk_x/N_x) + 2cos(πk_y/N_y) - 4]·u_i(k_x,k_y);
obtaining the stripe-free component image u_i(x,y) after the ith iteration based on the frequency-domain equation and the second formula;
Based on the following third formula, removing the component image without the stripes from the component image with the stripes to obtain the component image with the stripes after the ith iteration:
v_i(x,y) = v_{i-1}(x,y) - u_i(x,y);
wherein L_i(x,y) represents the Laplacian corresponding to the ith iteration, v_{i-1}(x,y) represents the striped component image corresponding to the (i-1)th iteration, M(x,y) has a value of 1, u_i(x,y) represents the stripe-free component image after the ith iteration, v_i(x,y) represents the striped component image corresponding to the ith iteration, and L_i(k_x,k_y) represents the frequency-domain equation;
determining an image other than the second target component image in the original image in the to-be-stripped domain as the first target component image;
a third acquisition module, configured to process the first target component image to obtain a low-frequency image;
the processing the first target component image to obtain a low-frequency image includes:
and processing the first target component image by adopting a nonlinear filter through the following formula to obtain the low-frequency image:
C1 = e^{C2};
C2 = -[V_{N(x,z)} - V_{N(x,y)}]^2 / (2δ^2);
wherein V_{N(x,y)} represents the low-frequency image, x represents the pixel position along the scan direction, y represents the pixel position across the scan direction, and H represents the filter domain range across the scan direction;
and a fourth acquisition module, configured to combine the low-frequency image and the second target component image to obtain a target image.
7. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the image processing method according to any one of claims 1 to 5 when the computer program is executed.
CN202010526199.7A 2020-06-09 2020-06-09 Image processing method and device Active CN111784617B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010526199.7A CN111784617B (en) 2020-06-09 2020-06-09 Image processing method and device

Publications (2)

Publication Number Publication Date
CN111784617A CN111784617A (en) 2020-10-16
CN111784617B true CN111784617B (en) 2023-08-15

Family

ID=72755823

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010526199.7A Active CN111784617B (en) 2020-06-09 2020-06-09 Image processing method and device

Country Status (1)

Country Link
CN (1) CN111784617B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6035065A (en) * 1996-06-10 2000-03-07 Fuji Xerox Co., Ltd. Image processing coefficient determination method, image processing coefficient calculation system, image processing system, image processing method, and storage medium
CN107481205A (en) * 2017-08-23 2017-12-15 电子科技大学 A kind of Terahertz image fringes noise processing method and system
CN109934772A (en) * 2019-03-11 2019-06-25 深圳岚锋创视网络科技有限公司 A kind of image interfusion method, device and portable terminal
CN111031241A (en) * 2019-12-09 2020-04-17 Oppo广东移动通信有限公司 Image processing method and device, terminal and computer readable storage medium
CN111238644A (en) * 2020-01-20 2020-06-05 西安工业大学 White light interference removing method for interference spectrum of DFDI instrument

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A wavelet moment-matching method for removing stripe noise from remote sensing images; Zhang Xia et al.; Remote Sensing Technology and Application; Vol. 33, No. 2; 305-312 *

Also Published As

Publication number Publication date
CN111784617A (en) 2020-10-16

Similar Documents

Publication Publication Date Title
Claus et al. Videnn: Deep blind video denoising
CN110517327B (en) Underwater image enhancement method based on color correction and contrast stretching
Zhang et al. Joint image denoising using adaptive principal component analysis and self-similarity
JP2001527305A (en) Estimation of frequency dependence and gray level dependence of noise in images
Thakur et al. Agsdnet: Attention and gradient-based sar denoising network
CN110533665B (en) SAR image processing method for inhibiting scallop effect and sub-band splicing effect
CN113989168B (en) Self-adaptive non-local mean value filtering method for spiced salt noise
Thai et al. Generalized signal-dependent noise model and parameter estimation for natural images
CN111815537B (en) Novel image blind solution deblurring method
CN114519676A (en) Bayer format-based raw image denoising device and method
CN111784617B (en) Image processing method and device
Chen et al. Time fractional diffusion equation based on caputo fractional derivative for image denoising
CN111402173B (en) Mixed noise removing method and device, electronic equipment and storage medium
Getreuer Contour stencils for edge-adaptive image interpolation
CN108288267B (en) Dark channel-based non-reference evaluation method for image definition of scanning electron microscope
Park et al. False contour reduction using neural networks and adaptive bi-directional smoothing
CN113160069B (en) Hyperspectral image denoising method based on image signal
CN113379629B (en) Satellite image denoising method, device, computer equipment and storage medium
Kerouh et al. Wavelet-based blind blur reduction
CN111028159B (en) Image stripe noise suppression method and system
RU2405200C2 (en) Method and device for fast noise filtration in digital images
Kumar et al. An efficient image denoising approach to remove random valued impulse noise by truncating data inside sliding window
Sudheesh et al. Selective weights based median filtering approach for impulse noise removal of brain MRI images
CN115661006B (en) Seabed landform image denoising method
WO2015128302A1 (en) Method and apparatus for filtering and analyzing a noise in an image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant