CN111738929A - Image processing method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN111738929A
Authority
CN
China
Prior art keywords
image
value
gray value
target
gray
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010384687.9A
Other languages
Chinese (zh)
Other versions
CN111738929B (en)
Inventor
蔡永华
范怀涛
王宇
张志敏
王沛
邓云凯
禹卫东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Aerospace Information Research Institute of CAS
Original Assignee
Aerospace Information Research Institute of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Aerospace Information Research Institute of CAS filed Critical Aerospace Information Research Institute of CAS
Priority to CN202010384687.9A priority Critical patent/CN111738929B/en
Publication of CN111738929A publication Critical patent/CN111738929A/en
Application granted granted Critical
Publication of CN111738929B publication Critical patent/CN111738929B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/70: Denoising; Smoothing
    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation


Abstract

The embodiment of the invention discloses an image processing method and apparatus, an electronic device and a storage medium, which can improve the image quality of radar imaging. The method comprises the following steps: acquiring the gray value of each original pixel in each of at least two original images, each original pixel being a pixel contained in that original image; correcting each original image under the action of a preset low-pass filtering model, based on the gray value standard deviation of each azimuth direction in the image, to obtain at least two first corrected images; acquiring the gray value mean of each azimuth direction in each first corrected image, and correcting each first corrected image under the action of the preset low-pass filtering model based on those means, to obtain at least two target images; and stitching the at least two target images into one composite image to complete the image processing.

Description

Image processing method and device, electronic equipment and storage medium
Technical Field
The invention relates to the field of synthetic aperture radar imaging, in particular to an image processing method and device, electronic equipment and a storage medium.
Background
Since its emergence in the late 1950s, Synthetic Aperture Radar (SAR) has developed over some sixty years into one of the important means of high-resolution Earth observation and global monitoring. As an active remote sensor operating in the microwave band, SAR, unlike optical sensors, is not limited by sunlight or weather and can observe the Earth day and night, in all weather conditions and from all directions, and therefore has important applications in modern microwave remote sensing. To further shorten the global observation period and monitor rapidly changing, large-scale surface phenomena, scanning synthetic aperture radar (ScanSAR) adopts a Burst operating mode to obtain a larger mapping swath over sea and land environments, and has become an important direction in the development of spaceborne SAR technology.
Radiation correction is a key technology for obtaining a wide-swath ScanSAR image with normal radiometry. Early research on ScanSAR radiation correction focused mainly on the signal-processing stage, eliminating the scallop effect and stripe non-uniformity during imaging of the echo data; this approach depends on a large amount of prior data such as radar parameters, cannot perform correction from the problem image alone, and has high algorithmic complexity. Radiation correction methods based on image post-processing have therefore been widely studied by scholars at home and abroad. After the echo data have been imaged, such methods rely only on the features and information of the image itself to correct the scallop effect and stripe non-uniformity appearing in it, eliminating periodic stripes and wide bright bands; before image stitching, the brightness, contrast and brightness trend of the overlap region of every two adjacent images are adjusted to be consistent, so that no visible seam appears in the stitched image and neither the visual effect nor later processing is affected. However, when prior information such as radar parameters is lacking, or when only a single problem image is to be corrected, these prior-art methods are not applicable and cannot improve the image quality of radar imaging.
Disclosure of Invention
Embodiments of the present invention are intended to provide an image processing method and apparatus, an electronic device, and a storage medium, which can improve image quality of radar imaging.
The technical scheme of the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides an image processing method, including:
acquiring a gray value of each original pixel in each original image from at least two original images; each original pixel is a pixel contained in each original image;
correcting each original image under the action of a preset low-pass filtering model based on the gray value standard deviation of each azimuth direction in each original image to obtain at least two first corrected images;
acquiring a mean value of gray values of each azimuth direction in each first correction image, and correcting each first correction image under the action of the preset low-pass filtering model based on the mean value of gray values of each azimuth direction to obtain at least two target images;
and splicing the at least two target images into a composite image to finish the image processing process.
In the foregoing scheme, the correcting each original image under the action of a preset low-pass filtering model based on the standard deviation of the gray scale value in each azimuth direction in each original image to obtain at least two first corrected images includes:
filtering the gray value standard deviation of each azimuth direction in each original image by using the preset low-pass filtering model to obtain a standard deviation estimation value of each azimuth direction; the gray value standard deviation of each azimuth direction corresponds to the standard deviation estimation value of each azimuth direction one by one;
taking the ratio of the gray value standard deviation of each azimuth direction to the standard deviation estimation value of the azimuth direction as the gray value gain of each azimuth direction;
and correcting each original image according to the gray value gain of each azimuth direction so as to obtain at least two first corrected images.
In the foregoing solution, the correcting each original image according to the gray value gain in each azimuth direction to obtain the at least two first corrected images includes:
correspondingly dividing the gray value of each original pixel with the gray value gain of each azimuth direction to obtain a first correction gray value corresponding to each original pixel;
and correcting each original pixel according to the first correction gray value to obtain each first correction pixel, so as to obtain a first correction image corresponding to each original image, and further obtain the at least two first correction images.
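The multiplicative correction described above (per-azimuth standard deviation, low-pass estimate, gain, per-column division) can be sketched in numpy. This is an illustrative sketch, not the patent's implementation: Gaussian smoothing stands in for the preset low-pass filtering model (the description suggests, e.g., a first-order Kalman filter), the azimuth direction is assumed to run along image columns, and `sigma` is an assumed tuning value.

```python
import numpy as np

def lowpass(x, sigma):
    """Gaussian low-pass over a 1-D sequence (stand-in for the preset
    low-pass filtering model, e.g. a first-order Kalman filter)."""
    half = int(4 * sigma)
    t = np.arange(-half, half + 1)
    k = np.exp(-0.5 * (t / sigma) ** 2)
    k /= k.sum()
    # reflect-pad so the smoothed sequence keeps its length and edges
    return np.convolve(np.pad(x, half, mode="reflect"), k, mode="valid")

def correct_scallop_gain(image, sigma=25.0):
    """Multiplicative (contrast) scallop correction:
    std per azimuth column -> smooth estimate -> gain -> divide."""
    image = np.asarray(image, dtype=float)
    col_std = image.std(axis=0)                      # gray-value std per azimuth
    col_std_est = lowpass(col_std, sigma)            # standard-deviation estimate
    gain = col_std / np.maximum(col_std_est, 1e-12)  # gray-value gain per column
    return image / gain[np.newaxis, :]               # first corrected image
```

Dividing each column by its gain leaves every column with the smoothed standard deviation, which suppresses the periodic contrast striping.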
In the above scheme, the obtaining a mean value of gray values of each azimuth direction in each first corrected image, and correcting each first corrected image under the action of the preset low-pass filtering model based on the mean value of gray values of each azimuth direction to obtain at least two target images includes:
filtering the gray value mean value of each azimuth direction in each first correction image by using the preset low-pass filtering model to obtain a mean value estimation value of each azimuth direction; the gray value mean value of each azimuth direction corresponds to the mean value estimation value of each azimuth direction one by one;
taking the difference value of the gray value mean value of each azimuth direction and the mean value estimation value of the azimuth direction as the gray value deviation of each azimuth direction;
and correcting each first correction image according to the gray value deviation of each azimuth direction, thereby obtaining the at least two target images.
In the foregoing solution, the correcting each first corrected image according to the gray value deviation in each azimuth direction to obtain the at least two target images includes:
acquiring a gray value of each first correction pixel in each first correction image;
correspondingly subtracting the gray value of each first correction pixel from the gray value deviation of each azimuth direction to obtain a target gray value corresponding to each first correction pixel;
and correcting the first correction pixel of each first correction image according to the target gray value of each first correction pixel to obtain each target pixel, so as to obtain a target image corresponding to each first correction image, and further obtain the at least two target images.
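The additive correction above (per-azimuth mean, low-pass estimate, deviation, per-column subtraction) admits an analogous sketch, under the same assumptions: Gaussian smoothing as a stand-in for the preset low-pass filtering model, azimuth along image columns, and `sigma` an assumed value.

```python
import numpy as np

def lowpass(x, sigma):
    """Gaussian low-pass over a 1-D sequence (stand-in for the preset
    low-pass filtering model, e.g. a first-order Kalman filter)."""
    half = int(4 * sigma)
    t = np.arange(-half, half + 1)
    k = np.exp(-0.5 * (t / sigma) ** 2)
    k /= k.sum()
    return np.convolve(np.pad(x, half, mode="reflect"), k, mode="valid")

def correct_scallop_bias(image, sigma=25.0):
    """Additive (brightness) scallop correction on a first corrected image:
    mean per azimuth column -> smooth estimate -> deviation -> subtract."""
    image = np.asarray(image, dtype=float)
    col_mean = image.mean(axis=0)             # gray-value mean per azimuth
    col_mean_est = lowpass(col_mean, sigma)   # mean estimate
    deviation = col_mean - col_mean_est       # gray-value deviation per column
    return image - deviation[np.newaxis, :]   # target image
```

After subtraction, each column's mean equals the smoothed estimate, removing the periodic brightness striping.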
In the above scheme, after obtaining a mean value of gray values of each azimuth direction in each first corrected image, and correcting each first corrected image under the action of the preset low-pass filtering model based on the mean value of gray values of each azimuth direction to obtain at least two target images, the method further includes:
in each target image, obtaining a mean value of gray values of each target image in each distance direction according to the gray value of each target pixel;
for each target image, based on the gray value mean value of each distance direction, performing filtering correction on each target image under the action of a Gaussian filtering model to obtain a second corrected image of each target image, and further obtain at least two second corrected images;
and splicing the at least two second correction images into a final composite image to finish the image processing process.
In the foregoing solution, the performing, for each target image, filtering and correcting the target image under the action of a gaussian filter model based on the mean value of the grayscale values in each distance direction to obtain a second corrected image of each target image, and further obtain at least two second corrected images includes:
calculating the mean value of the gray values of all target pixels in each target image to serve as the overall gray value mean of each target image;
for each target image, correspondingly calculating the ratio of the overall gray value mean value of each target image to the gray value mean value of each distance direction in the target image to obtain the gray value mean value ratio of each distance direction;
filtering the gray value average ratio of each distance direction through the Gaussian filtering model to obtain a compensation factor of each distance direction;
in each target image, multiplying the gray value of each target pixel correspondingly by the compensation factor of each distance direction to obtain the corrected gray value of each target pixel;
and obtaining each second correction pixel according to the correction gray value of each target pixel, thereby obtaining a second correction image of each target image and further obtaining at least two second correction images.
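A sketch of the range-direction compensation above, again illustrative rather than the patent's implementation: a pure-numpy Gaussian filter stands in for the Gaussian filtering model, the range (distance) direction is assumed to run along image rows, and `sigma` is an assumed value.

```python
import numpy as np

def lowpass(x, sigma):
    """Gaussian low-pass over a 1-D sequence, reflect-padded at the edges."""
    half = int(4 * sigma)
    t = np.arange(-half, half + 1)
    k = np.exp(-0.5 * (t / sigma) ** 2)
    k /= k.sum()
    return np.convolve(np.pad(x, half, mode="reflect"), k, mode="valid")

def correct_range_banding(image, sigma=5.0):
    """Range-direction brightness compensation:
    overall mean / per-range-line mean, Gaussian-filtered into a
    compensation factor, multiplied into each range line."""
    image = np.asarray(image, dtype=float)
    row_mean = image.mean(axis=1)                       # one mean per range line
    ratio = image.mean() / np.maximum(row_mean, 1e-12)  # overall mean / line mean
    factor = lowpass(ratio, sigma)                      # compensation factor
    return image * factor[:, np.newaxis]                # second corrected image
```

Multiplying each range line by its smoothed ratio pulls the line means back toward the overall image mean, flattening wide bright or dark bands.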
In the above scheme, the stitching the at least two target images into one composite image to complete the image processing process includes:
acquiring a geographical positioning result of each target image;
according to the geographic positioning result of each target image, calculating an overlapping area between each two adjacent target images in the at least two target images to obtain a first overlapping area and a second overlapping area; the first overlap region is a region in a previous target image of the every two adjacent target images; the second overlapping area is an area in a subsequent target image of the every two adjacent target images;
taking the previous target image of every two adjacent target images as a standard image, and adjusting the next target image of every two adjacent target images under the action of a uniform color filter model and Gaussian convolution operation on the basis of the first overlapping area and the second overlapping area to obtain a final adjusted image corresponding to the next target image;
and for every two adjacent target images in the at least two target images, continuously splicing the standard images in every two adjacent target images with the final adjustment image until the last two adjacent target images are processed, obtaining the composite image, and finishing the image processing process.
In the foregoing scheme, the taking a previous target image of every two adjacent target images as a standard image, and adjusting a subsequent target image of every two adjacent target images under the action of a uniform color filter model and Gaussian convolution operation based on the first overlapping area and the second overlapping area to obtain a final adjusted image corresponding to the subsequent target image includes:
respectively calculating a first gray value mean value and a first gray value standard deviation of the first overlapping area;
respectively calculating a second gray value mean value and a second gray value standard deviation of the second overlapped area;
performing brightness uniformity processing on the next target image under the action of a uniform color filter model based on the first gray value mean value, the first gray value standard deviation, the second gray value mean value and the second gray value standard deviation to obtain an adjusted image corresponding to the next target image;
calculating the gray value mean value of each azimuth direction in the first overlapping area to obtain a third gray value mean value sequence;
calculating the gray value mean value of each azimuth in the overlapping area of the adjusted image and the standard image according to the adjusted image to obtain a fourth gray value mean value sequence;
and adjusting the adjusted image according to the ratio of the third gray value mean sequence to the fourth gray value mean sequence through Gaussian convolution operation to obtain the final adjusted image.
In the foregoing scheme, the adjusting the adjusted image according to the ratio of the third gray value mean sequence to the fourth gray value mean sequence through Gaussian convolution operation to obtain the final adjusted image includes:
performing a Gaussian convolution operation on the ratio of the third gray value mean sequence to the fourth gray value mean sequence with a Gaussian kernel of preset length to obtain an operation result;
multiplying the operation result by the gray value of each adjusting pixel in the adjusting image to obtain a final adjusting image; the each adjustment pixel is each pixel included in the adjustment image.
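The two-stage adjustment above, a uniform-colour (Wallis-style) transfer followed by Gaussian convolution of the overlap column-mean ratio, can be sketched as follows. This is one illustrative reading of the scheme, not the patent's implementation; `ksize` and `ksigma` (the preset kernel length and width) are assumed values.

```python
import numpy as np

def wallis_match(next_img, ov_prev, ov_next):
    """Uniform-colour (Wallis-style) step: map the mean/std of the next
    image's overlap region (ov_next) onto the mean/std of the previous
    image's overlap region (ov_prev), applied to the whole next image."""
    m1, s1 = ov_prev.mean(), ov_prev.std()
    m2, s2 = ov_next.mean(), ov_next.std()
    return (next_img - m2) * (s1 / max(s2, 1e-12)) + m1

def azimuth_gain(ov_prev, ov_adj, ksize=31, ksigma=7.0):
    """Per-azimuth column-mean ratio of the two overlaps (the third and
    fourth mean sequences), convolved with a Gaussian kernel of preset
    length to give the operation result."""
    ratio = ov_prev.mean(axis=0) / np.maximum(ov_adj.mean(axis=0), 1e-12)
    t = np.arange(ksize) - (ksize - 1) / 2.0
    kernel = np.exp(-0.5 * (t / ksigma) ** 2)
    kernel /= kernel.sum()
    return np.convolve(ratio, kernel, mode="same")
```

The smoothed gain would then be multiplied into the gray values of the adjusted image, as in the step above; how the overlap columns map onto the full image is left to the implementation.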
In a second aspect, an embodiment of the present invention provides an image processing apparatus, including: the device comprises an acquisition unit, a correction unit and a splicing unit; wherein,
the acquiring unit is used for acquiring the gray value of each original pixel in each original image in at least two original images; each original pixel is a pixel contained in each original image;
the correction unit is used for correcting each original image under the action of a preset low-pass filtering model based on the gray value standard deviation of each azimuth direction in each original image to obtain at least two first corrected images; acquiring a mean value of gray values of each azimuth direction in each first correction image, and correcting each first correction image under the action of the preset low-pass filtering model based on the mean value of gray values of each azimuth direction to obtain at least two target images;
and the splicing unit is used for splicing the at least two target images into a composite image to finish the image processing process.
In a third aspect, an embodiment of the present invention provides an electronic device, including:
a memory for storing executable instructions;
and the processor is used for realizing the image processing method when executing the executable instructions stored in the memory.
In a fourth aspect, an embodiment of the present invention provides a storage medium storing executable instructions for causing a processor to implement the image processing method described above when executed.
The embodiment of the invention provides an image processing method and apparatus, an electronic device and a storage medium. The image processing method comprises: acquiring the gray value of each original pixel in each of at least two original images, each original pixel being a pixel contained in that original image; correcting each original image under the action of a preset low-pass filtering model, based on the gray value standard deviation of each azimuth direction in the image, to obtain at least two first corrected images; acquiring the gray value mean of each azimuth direction in each first corrected image, and correcting each first corrected image under the action of the preset low-pass filtering model based on those means, to obtain at least two target images; and stitching the at least two target images into one composite image to complete the image processing. With the method of the embodiment of the invention, the image processing apparatus performs two rounds of filtering and two rounds of gray value correction on the original image with the low-pass filter, based on the gray values of the original image and of the first corrected image respectively, so as to eliminate the scallop-effect stripes in the azimuth direction of the original image and improve the image quality of radar imaging.
Drawings
FIG. 1 is a schematic diagram of the working mechanism of a satellite-borne ScanSAR imaging system according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of an alternative image processing method according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart of an alternative image processing method according to an embodiment of the present invention;
FIG. 4 is a schematic flow chart of an alternative image processing method according to an embodiment of the present invention;
FIG. 5(a) is a schematic diagram of a ScanSAR image without scallop effect correction;
FIG. 5(b) is a schematic diagram of an image of FIG. 5(a) after scallop effect correction by the method in the embodiment of the invention;
FIG. 6 is a schematic flow chart of an alternative image processing method according to an embodiment of the present invention;
FIG. 7(a) is a schematic diagram of a ScanSAR image without correction of banding non-uniformity;
FIG. 7(b) is a schematic diagram of an image of FIG. 7(a) after correction of banding non-uniformity by the method according to the embodiment of the present invention;
FIG. 8 is a schematic flow chart of an alternative image processing method according to an embodiment of the present invention;
FIG. 9(a) is a schematic diagram showing the image stitching result without any processing;
FIG. 9(b) is a schematic diagram showing the effect of using a classical Wallis filtering process on FIG. 9 (a);
FIG. 9(c) is a schematic diagram illustrating the effect of image stitching by the method in the embodiment of the present invention;
FIG. 10 is a schematic flow chart of an alternative image processing method according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention;
fig. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention.
The image processing method provided by the embodiment of the invention is suitable for the satellite-borne ScanSAR imaging system shown in FIG. 1. FIG. 1 shows the working mechanism of a satellite-borne ScanSAR: a satellite 100 carrying a synthetic aperture radar sensor travels along a satellite orbit 120 at height H. Within one synthetic aperture time, the ScanSAR changes the antenna elevation to switch beams and scans several times along the range direction to acquire echo data of a plurality of sub-bands 110_1, 110_2 … 110_n; the echo data of each sub-band are then imaged, and the images stitched, to obtain a synthetic aperture image of very large swath width. Since a ScanSAR image is formed in slant-range geometry, in the embodiment of the invention the along-track direction is referred to as the azimuth direction, and the direction along which the radar wave is transmitted as the range (distance) direction.
Fig. 2 is an alternative flow chart of the method provided by the embodiment of the present invention, which will be described with reference to the steps shown in fig. 2.
S101, acquiring a gray value of each original pixel in each original image from at least two original images; each original pixel is a pixel included in each original image.
In the embodiment of the invention, the image processing device takes an image obtained by imaging the echo data on each sub-band of the radar as an original image, so as to obtain at least two original images.
In the embodiment of the invention, because a ground target is illuminated by only part of the antenna pattern while the radar switches beams, fine light-and-dark stripes, known as the scallop effect, appear along the azimuth direction of the image formed from the received echo data. The scallop effect is perpendicular, periodic and gradual in character: the stripes are essentially perpendicular to the azimuth direction, their brightness varies periodically along the azimuth direction, and the error is largest in the middle of each burst and decreases gradually toward both sides. In addition, the severity of the scallop effect depends on the imaging algorithm used, so the image processing apparatus needs to eliminate the scallop effect in the at least two original images before stitching them.
In the embodiment of the present invention, the image processing apparatus may acquire, in each of the at least two original images, the gray value of each original pixel in that image for use in subsequent gray-value processing. An original pixel refers to a pixel in an original image.
In the embodiment of the invention, the gray value represents the color depth and the brightness and darkness of each pixel point in the original image.
S102, correcting each original image under the action of a preset low-pass filtering model based on the gray value standard deviation of each azimuth direction in each original image to obtain at least two first corrected images.
In the embodiment of the present invention, for each original image, the image processing apparatus may use the gray scale value standard deviation of the original image in each azimuth direction as an observation value sequence, perform filtering by using a preset low-pass filtering model, and then perform gray scale value correction on the original image according to the filtering result of the gray scale value standard deviation in each azimuth direction, to obtain a first corrected image corresponding to the original image. The image processing device processes at least two original images by using the same method to obtain at least two first correction images.
In the embodiment of the invention, the stripes caused by the scallop effect differ markedly in contrast from the surrounding image. Assuming that the scallop stripes are caused by a multiplicative factor, the multiplicative relationship between the gray values of the original image and of the scallop-corrected image can be modelled as in formula (1):
S_c(i,j) = g_c(i,j) · S_m(i,j)    (1)
In formula (1), i and j are the position coordinates of each pixel in the range direction and the azimuth direction, respectively, of the original image affected by the scallop effect; S_c(i,j) is the gray value of the original pixel at (i,j); g_c(i,j) is the gray value gain of S_c(i,j); and S_m(i,j) is the gray value obtained after correcting S_c(i,j) by the gain g_c(i,j), i.e. the gray value of the corresponding first corrected pixel in the first corrected image.
From formula (1), the gray value gain g_c(i,j) can correct the multiplicative factor of the scallop effect. Statistically, the standard deviation of an image's gray values characterises its contrast. The image processing apparatus may therefore first take the gray value standard deviation of each azimuth direction in the original image as the feature of that azimuth position, use these standard deviations as the observation sequence, perform statistical estimation with a preset low-pass filtering model, such as a first-order Kalman filter, to obtain an optimal estimate of the gray value standard deviation of each azimuth direction, and then derive the correction gain g_c(i,j) from these optimal estimates.
In the embodiment of the invention, each azimuth direction in the original image corresponds one-to-one to a pixel column. For each original image, the image processing apparatus first calculates the gray value standard deviation of each column of original pixels from the gray values of the original pixels, thereby obtaining the gray value standard deviation of each azimuth direction.
In some embodiments, the original image S may be represented by an M × N matrix of pixels, as shown in equation (2):
    S = ⎡ s(1,1)  s(1,2)  …  s(1,N) ⎤
        ⎢ s(2,1)  s(2,2)  …  s(2,N) ⎥
        ⎢   ⋮       ⋮     ⋱    ⋮   ⎥
        ⎣ s(M,1)  s(M,2)  …  s(M,N) ⎦    (2)
In formula (2), M is the number of pixel rows of the original image S and N is the number of pixel columns. For an original image represented as in formula (2), the image processing apparatus may obtain the gray value standard deviation of each column of original pixels, yielding 1 × N standard deviations as the gray value standard deviation of each azimuth direction in the original image S.
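As a toy illustration of the M × N pixel matrix of formula (2) and the resulting 1 × N statistic (the values are hypothetical, not from the patent):

```python
import numpy as np

# A 3 x 4 toy "image": M = 3 range lines, N = 4 azimuth columns.
S = np.array([[1.0, 2.0, 3.0,  4.0],
              [1.0, 4.0, 3.0,  8.0],
              [1.0, 6.0, 3.0, 12.0]])

# One gray-value standard deviation per azimuth column -> shape (4,),
# i.e. the 1 x N sequence used as the observation sequence.
col_std = S.std(axis=0)
```

Constant columns (columns 0 and 2) give a standard deviation of zero, while the varying columns give positive values, so the sequence directly reflects per-azimuth contrast.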
In the embodiment of the invention, the preset low-pass filtering model processes the input noisy observation sequence, estimates its statistical properties, and outputs an estimate corresponding to each observation as the filtering result. When the input sequence is the gray value standard deviation of each azimuth direction in the original image, the model outputs an estimate corresponding to each of these standard deviations. In this way, the image processing apparatus can calculate the gray value gain for each azimuth direction from the gray value standard deviation of that azimuth direction and its estimate.
In some embodiments, the preset low-pass filtering model may be a first-order Kalman filter, or alternatively a Gaussian filter, mean filter, median filter or another filter serving the same function.
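Where a first-order Kalman filter serves as the preset low-pass filtering model, a minimal scalar sketch might look like the following; `q` and `r` are assumed tuning constants, not values from the patent.

```python
import numpy as np

def kalman_smooth_1d(obs, q=1e-4, r=1e-2):
    """First-order (scalar) Kalman filter over an observation sequence.

    obs: 1-D sequence of noisy observations (e.g. per-azimuth statistics).
    q:   process-noise variance (how fast the true signal may drift).
    r:   measurement-noise variance (how noisy each observation is).
    Returns the filtered estimate for each observation.
    """
    obs = np.asarray(obs, dtype=float)
    x = obs[0]          # state estimate, initialised from the first sample
    p = 1.0             # estimate variance
    est = np.empty_like(obs)
    est[0] = x
    for k in range(1, len(obs)):
        p = p + q                       # predict: constant-state model
        k_gain = p / (p + r)            # Kalman gain
        x = x + k_gain * (obs[k] - x)   # update with the new observation
        p = (1.0 - k_gain) * p
        est[k] = x
    return est
```

With small q relative to r, the filter trusts its running estimate more than each noisy sample, giving a smooth low-pass track of the observation sequence.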
In the embodiment of the invention, having obtained the gray value gain g_c(i,j), the image processing apparatus may correct the gray value of each original pixel with the gain g_c(i,j) according to formula (1), and obtain the first corrected image from the corrected gray values.
S103, obtaining a gray value mean value of each azimuth direction in each first correction image, and correcting each first correction image under the action of a preset low-pass filtering model based on the gray value mean value of each azimuth direction to obtain at least two target images.
In the embodiment of the invention, each first correction image is an image which is obtained by correcting the scallop effect in each original image by the image processing device by a multiplicative factor and weakening the contrast difference between the scallop fringe and the adjacent area. After the image processing device obtains each first corrected image, the image processing device may use the preset low-pass model again, perform filtering on each first corrected image again based on the mean value of the gray values of each azimuth direction, perform re-correction on each first corrected image according to the result of the second filtering, and finally obtain at least two target images without scallop effect.
In the embodiment of the present invention, the image processing apparatus sets the pixel included in the target image as the target pixel.
In the embodiment of the present invention, the image processing apparatus calculates, column by column in each first corrected image, the mean of the gray values of the first corrected pixels in that column, as the gray value mean of each azimuth direction in each first corrected image.
In the embodiment of the present invention, the first correction pixels are pixels included in each of the first correction images.
In the embodiment of the invention, besides the contrast difference, the stripes of the scallop effect also exhibit a brightness inconsistent with that of adjacent areas in the original image. Thus, assuming that the stripes of the scallop effect are also caused by an additive factor, the additive-factor relationship model can be as shown in formula (3), as follows:
Sm(i,j)=S0(i,j)+om(i,j) (3)
in the formula (2), Sm(i, j) is the gray value of each first correction pixel in the first corrected image,om(i, j) is the gray value S of each first correction pixelmThe deviation of the gray values, S, present in (i, j)0(i, j) is the gray value S of each target pixel of the image in the target without scallop effect0(i,j)。
In the embodiment of the invention, obtaining the gray value deviation om(i, j) through formula (3) makes it possible to correct the additive factor in the scallop effect. Statistically, the mean of the gray values of an image characterizes its brightness. Therefore, the image processing apparatus may first use the gray value mean of each azimuth direction in each first corrected image as an observation, perform statistical estimation through a preset low-pass filtering model, such as a first-order Kalman filter, to obtain an optimal estimate of the gray value mean of each azimuth direction, and then obtain the gray value deviation om(i, j) for correction based on that optimal estimate.
In the embodiment of the present invention, gc(i, j) and om(i, j) are both functions of the azimuth position only, and remain the same for the same azimuth at different distance positions.
In the embodiment of the invention, after obtaining the gray value deviation om(i, j), the image processing apparatus may apply a second correction to the gray value Sm(i, j) of each first corrected image using the gray value deviation om(i, j) according to formula (3), finally obtaining a target image without scallop effect corresponding to each first corrected image.
It should be noted that, in the embodiment of the present invention, the two first-order Kalman filters used to filter the gray value standard deviation and the gray value mean in sequence may be set with the same parameters, or may use different filtering model parameters, which is not limited in the embodiment of the present invention.
In some embodiments, the image processing apparatus may construct a first-order Kalman filter as the preset low-pass filtering model through formulas (4) to (8), which are used in the filtering processes in S102 and S103, where formulas (4) to (8) are as follows:

X(k|k-1) = X(k-1|k-1)    (4)

P(k|k-1) = P(k-1|k-1) + Q    (5)

Kg(k) = P(k|k-1)/[P(k|k-1) + R]    (6)

X(k|k) = X(k|k-1) + Kg(k)·[Z(k) − X(k|k-1)]    (7)

P(k|k) = [1 − Kg(k)]·P(k|k-1)    (8)
in the embodiment of the invention, the Kalman filter is an algorithm that uses a linear system state equation to optimally estimate the system state from the system's input and output observation data. Since the observed data include the effects of noise and interference in the system, the optimal estimation can also be seen as a filtering process. In formulas (4) and (5), X(k|k-1) represents the one-step state prediction from step k-1 to step k, X(k-1|k-1) represents the minimum mean square error estimate of the previous step, P(k|k-1) and P(k-1|k-1) are the covariances corresponding to X(k|k-1) and X(k-1|k-1), and Q represents the process noise variance.
In the embodiment of the present invention, formula (6) is an update equation; in formulas (6) to (8), Kg represents the Kalman gain, i.e., a weight representing the ratio between the observed value and the estimated value in formula (7), R denotes the observation noise variance, Z denotes the observed value, X(k|k) is the estimate at step k, i.e., the filtered output, and P(k|k) is the covariance corresponding to the estimate at step k.
in some embodiments, Q may be set to 1×10⁻⁶ and R to 0.01, and the initial value of P updated in each iteration is set to 0.01, so as to obtain a corresponding first-order Kalman filter as the preset low-pass filtering model for filtering each original image or each first corrected image.
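As a concrete illustration, a first-order Kalman filter following formulas (4) to (8) with these parameter values (Q = 1×10⁻⁶, R = 0.01, initial P = 0.01) can be sketched in a few lines of NumPy. This is a minimal sketch; the function name and interface are illustrative, not part of the patent:

```python
import numpy as np

def kalman_smooth(z, Q=1e-6, R=0.01, P0=0.01):
    """First-order Kalman filter over an observation sequence z.

    The state transition is the identity, so the prediction step is
    X(k|k-1) = X(k-1|k-1) and P(k|k-1) = P(k-1|k-1) + Q, per
    formulas (4) and (5); the update uses formulas (6) to (8)."""
    z = np.asarray(z, dtype=float)
    x = z[0]            # initialize the state with the first observation
    P = P0              # initial estimate covariance
    out = np.empty_like(z)
    out[0] = x
    for k in range(1, len(z)):
        # prediction step, formulas (4) and (5)
        x_pred = x
        P_pred = P + Q
        # update step, formulas (6) to (8)
        Kg = P_pred / (P_pred + R)          # Kalman gain
        x = x_pred + Kg * (z[k] - x_pred)   # filtered estimate
        P = (1.0 - Kg) * P_pred
        out[k] = x
    return out
```

With these settings the gain Kg shrinks over iterations, so the output tracks the slowly varying component of the observation sequence while suppressing the periodic scallop fluctuation.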
And S104, splicing at least two target images into a composite image to finish the image processing process.
In the embodiment of the invention, the at least two target images are images from which the scallop effect has been eliminated. Therefore, after the image processing device obtains the at least two target images, it can stitch the target images together to finally obtain a composite image, completing the image processing process; the resulting composite image is free of the scallop effect, and the image quality of the final composite image is improved.
It can be understood that, in the embodiment of the present invention, the image processing apparatus can perform two rounds of filtering and gray value correction on the original image through the low-pass filter, based on the gray value standard deviation and the gray value mean respectively, so as to eliminate the azimuth-direction stripes of the scallop effect in the original image and improve the image quality of the echo data imaging; moreover, because the image processing device corrects the original image in two separate filtering passes, the amount of calculation in each filtering operation is reduced and the image processing speed is increased.
In the embodiment of the present invention, based on fig. 2 and S102, based on the gray scale value standard deviation of each direction in each original image, each original image is corrected under the action of a preset low-pass filtering model to obtain at least two first corrected images, which may be specifically as shown in fig. 3, including S1021-S1023, as follows:
s1021, filtering the gray value standard deviation of each azimuth direction in each original image by using a preset low-pass filtering model to obtain a standard deviation estimation value of each azimuth direction; the gray value standard deviation of each azimuth direction corresponds to the standard deviation estimation value of each azimuth direction one by one.
In the embodiment of the invention, the image processing device inputs the gray value standard deviation of each azimuth direction as an observation sequence into the preset low-pass filtering model, performs filtering estimation on the gray value standard deviation of each azimuth direction through the preset low-pass filtering model, and retains the estimation result of each step, thereby correspondingly obtaining the estimate of the gray value standard deviation of each azimuth direction as the filtering result.
In the embodiment of the invention, the gray value standard deviation of each azimuth direction corresponds to the standard deviation estimation value of each azimuth direction one by one.
in some embodiments, for the 1 × N gray value standard deviations D obtained by formula (2), the image processing apparatus may obtain, through the preset low-pass filtering model, the 1 × N standard deviation estimates D̂ corresponding to the 1 × N gray value standard deviations D.
And S1022, taking the ratio of the gray value standard deviation of each azimuth direction to the standard deviation estimated value of the azimuth direction as the gray value gain of each azimuth direction.
In the embodiment of the present invention, after obtaining the standard deviation estimated value of each azimuth, the image processing apparatus calculates a ratio of the gray-scale value standard deviation of each azimuth to the standard deviation estimated value corresponding to the azimuth as the gray-scale value gain of each azimuth, as shown in equation (9):
gc(j) = D(j)/D̂(j)    (9)

in formula (9), D(j) is the gray value standard deviation of the j-th azimuth direction (the j-th column of original pixels) in each original image, j is the column index of the original pixels, D̂(j) is the standard deviation estimate corresponding to the j-th column of original pixels, and gc(j) is the gray value gain corresponding to the j-th column of original pixels. The image processing apparatus can obtain the gray value gain of each azimuth direction in each original image through formula (9).
And S1023, correcting each original image according to the gray value gain of each azimuth direction, thereby obtaining at least two first corrected images.
In the embodiment of the present invention, after obtaining the gray value gain in each azimuth direction, the image processing apparatus may correct each original image according to the gray value gain in each azimuth direction, and remove the contrast difference caused by the gray value gain from the gray value of each original pixel to obtain the first corrected image corresponding to each original image. The image processing device performs the same processing on at least two original images to obtain at least two first corrected images.
In some embodiments of the present invention, S1023 may specifically include S201-S202, as follows:
s201, correspondingly dividing the gray value of each original pixel by the gray value gain of its azimuth direction to obtain a first corrected gray value corresponding to each original pixel.
In the embodiment of the present invention, based on the formula (1), it can be known that removing the corresponding gray value gain from the gray value of the original pixel can eliminate the scallop effect caused by the multiplicative factor, as shown in the formula (10), as follows:
Sm(i, j) = S(i, j)/gc(j)    (10)

in formula (10), for an original pixel at distance position i and azimuth position j in an original image, the image processing apparatus divides the gray value S(i, j) of the original pixel by the gray value gain gc(j) of the pixel column in which the original pixel is located, obtaining the first corrected gray value Sm(i, j) after correction of the gray value S(i, j).
S202, correcting each original pixel according to the first correction gray value to obtain each first correction pixel, so that a first correction image corresponding to each original image is obtained, and at least two first correction images are obtained.
In the embodiment of the present invention, for each original pixel point in an original image, the image processing apparatus may replace the original gray value of the original pixel point with the first correction gray value of the original pixel point, and use the original pixel after replacement of each gray value as each first correction pixel, thereby updating the whole gray value of the original image to obtain the first correction image corresponding to the original image.
In the embodiment of the invention, the image processing device processes at least two original images in the same way to obtain at least two first corrected images.
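The multiplicative correction of S1021–S1023 (formulas (2), (9), and (10)) can be sketched as follows. The `smooth` argument stands in for the preset low-pass filtering model (any callable mapping the 1 × N standard deviation sequence to its estimates); all names are illustrative, not from the patent:

```python
import numpy as np

def correct_multiplicative(img, smooth):
    """Sketch of S1021-S1023 for one original image (rows = distance,
    columns = azimuth directions)."""
    # gray value standard deviation of each azimuth direction (each
    # column), as in formula (2)
    D = img.std(axis=0)
    # S1021: standard deviation estimate from the preset low-pass
    # filtering model
    D_hat = smooth(D)
    # S1022: gray value gain of each azimuth direction, formula (9)
    g_c = D / D_hat
    # S1023 / S201-S202: divide each column by its gain, formula (10)
    return img / g_c[np.newaxis, :]
```

Applying the same function to each original image yields the at least two first corrected images.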
In the embodiment of the present invention, based on fig. 2 or fig. 3, the gray-scale value mean value of each azimuth direction in each first corrected image is obtained in S103, and based on the gray-scale value mean value of each azimuth direction, each first corrected image is corrected under the action of a preset low-pass filtering model to obtain at least two target images, and the obtaining of the at least two target images may specifically be as shown in fig. 4, which includes S1031 to S1033, as follows:
s1031, filtering the gray value mean value of each azimuth direction in each first correction image by using a preset low-pass filtering model to obtain a mean value estimation value of each azimuth direction; the gray value mean value of each azimuth direction corresponds to the mean value estimation value of each azimuth direction one by one.
In the embodiment of the invention, the image processing device inputs the gray value mean of each azimuth direction as an observation sequence into the preset low-pass filtering model, performs filtering estimation on the gray value mean of each azimuth direction through the preset low-pass filtering model, and retains the filtering result of each step, thereby correspondingly obtaining the estimate of the gray value mean of each azimuth direction as the mean estimate of each azimuth direction. The mean estimate of each azimuth direction is the estimate obtained by the image processing device by filtering the gray value mean of each azimuth direction through the preset low-pass filtering model.
And S1032, taking the difference value between the gray value mean value of each azimuth direction and the mean value estimated value of the azimuth direction as the gray value deviation of each azimuth direction.
In this embodiment of the present invention, the image processing apparatus may use a difference between the mean gray-scale value in each azimuth direction and the mean estimate value in the azimuth direction as the gray-scale value deviation in each azimuth direction according to formula (11), as follows:
om(j) = M(j) − M̂(j)    (11)

in formula (11), om(j) represents the gray value deviation of the j-th column of first correction pixels, i.e., the gray value deviation of the j-th azimuth direction; M(j) represents the gray value mean of the j-th column of first correction pixels, i.e., the gray value mean of the j-th azimuth direction; and M̂(j) is the mean estimate of the j-th column of first correction pixels, i.e., the mean estimate of the j-th azimuth direction.
And S1033, correcting each first correction image according to the gray value deviation of each azimuth direction, so as to obtain at least two target images.
In the embodiment of the present invention, the image processing apparatus may correct the stripe brightness difference of the scallop effect through the gray value deviation in each azimuth direction, and specifically includes S301 to S302, as follows:
s301, obtaining a gray value of each first correction pixel in each first correction image.
S302, correspondingly subtracting the gray value of each first correction pixel from the gray value deviation of each azimuth direction to obtain a target gray value corresponding to each first correction pixel.
In each first corrected image, the gray value deviation of the azimuth direction of each first correction pixel is correspondingly subtracted from the gray value of that first correction pixel to obtain the target gray value corresponding to each first correction pixel; the azimuth direction of a first correction pixel is the pixel column in which the first correction pixel is located.
In the embodiment of the present invention, based on the formula (3), the formula (12) may be obtained, and the image processing apparatus may obtain, by using the formula (12), for each first correction image, the target gray-scale value of each first correction pixel by subtracting the gray-scale value deviation corresponding to the first correction pixel from the gray-scale value of each first correction pixel in the first correction image, as follows:
S0(i, j) = Sm(i, j) − om(j)    (12)

in formula (12), S0(i, j) represents the gray value, after correction of the gray value deviation, of the first correction pixel at distance position i and azimuth position j.
s303, correcting the first correction pixel of each first correction image according to the target gray value of each first correction pixel to obtain each target pixel, so as to obtain a target image corresponding to each first correction image, and further obtain at least two target images.
In the embodiment of the present invention, after obtaining the target gray-scale value of each first correction pixel in one first correction image, the image processing apparatus may use the target gray-scale value of each first correction pixel to replace the original gray-scale value of the first correction pixel, so as to obtain the target image corresponding to the first correction image. The image processing device obtains at least two target images by using the same method for at least two first correction images.
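Analogously, the additive correction of S1031–S1033 (formulas (11) and (12)) reduces to subtracting a column-mean deviation. A minimal sketch, with `smooth` again standing in for the preset low-pass filtering model and all names illustrative:

```python
import numpy as np

def correct_additive(img, smooth):
    """Sketch of S1031-S1033 for one first corrected image."""
    # gray value mean of each azimuth direction (each column)
    M = img.mean(axis=0)
    # S1031: mean estimate from the preset low-pass filtering model
    M_hat = smooth(M)
    # S1032: gray value deviation of each azimuth direction, formula (11)
    o_m = M - M_hat
    # S1033 / S301-S303: subtract the deviation from each column,
    # formula (12)
    return img - o_m[np.newaxis, :]
```

Applying the same function to each first corrected image yields the at least two target images.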
It can be understood that, in the embodiment of the present invention, the image processing apparatus may perform filtering correction on the original image based on the standard deviation of the gray scale value of the original image through a preset low-pass filtering model to obtain a first corrected image, so as to weaken the contrast difference of the scallop effect fringes in the original image, perform filtering correction on the average value of the gray scale value based on the first corrected image, weaken the brightness difference of the scallop effect fringes in the original image, and thus improve the image quality; furthermore, the preset low-pass filtering model only needs to filter one parameter of the standard deviation or the mean value of the gray values during each filtering, so that the calculated amount is small, and the speed of image processing is further improved.
In some embodiments, when the preset filtering model is a Kalman filter, a ScanSAR image without scallop effect correction may be as shown in fig. 5(a). In the image in fig. 5(a), scallop stripes that change periodically along the azimuth direction can be clearly observed, details of the image are lost, and normal interpretation of the image is disturbed; the graph in fig. 5(a) shows that the mean and standard deviation of the pixel gray values fluctuate strongly and that the continuity of the gray value distribution is poor. By processing the image in fig. 5(a) with the method in the embodiment of the present invention, the scallop-corrected image shown in fig. 5(b) can be obtained: the image in fig. 5(b) has no obvious scallop stripes, the details of fig. 5(a) that were masked by the scallop effect are recovered, the jagged azimuth-direction fluctuation of the mean and standard deviation seen in the graph of fig. 5(a) is eliminated, and the periodic variation of image brightness and contrast is corrected.
In this embodiment of the present invention, based on any one of the methods shown in fig. 2 to 4, after S103 obtains a mean value of gray values of each azimuth direction in each first corrected image, and corrects each first corrected image under the action of a preset low-pass filtering model based on the mean value of gray values of each azimuth direction, to obtain at least two target images, as shown in fig. 6, S401-S403 may be further included, as follows:
s401, in each target image, obtaining a mean value of gray values of each target image in each distance direction according to the gray value of each target pixel.
In the embodiment of the invention, in the ScanSAR imaging process, the distance-direction radiation gain is mainly influenced by the slant range and by the modulation of the distance-direction antenna pattern, so the distance-direction patterns of different beams differ greatly, and wide-range non-uniform stripes can appear along the image distance direction after imaging.

In the embodiment of the application, the stripe non-uniformity phenomenon has no periodicity but is directional: the non-uniform stripes are almost perpendicular to the distance direction, and the non-uniformity shows a gradual, smooth trend along the distance direction. Since the stripe non-uniformity phenomenon is independent of the azimuth scallop effect, the image processing device can correct the stripe non-uniformity after ScanSAR imaging once the scallop effect has been corrected using the method in S102-S103.

In the embodiment of the invention, the stripe non-uniformity phenomenon is caused by the differences between the distance-direction patterns of different beams, so correcting it requires an accurate estimate of the distance-direction pattern. Once the distance-direction pattern of each image is known, it can be compensated: multiplying the image along the distance direction by a compensation function uniformly corrects the radiation distribution of the image.
In the embodiment of the invention, for a target image S0 of size M × N, the image processing device may calculate, row by row, the gray value mean of the target pixels in each row of the target image, obtaining an M × 1 vector Mr as the gray value mean of each distance direction in the target image.
In the embodiment of the invention, the mean value of the gray values of each distance direction represents the brightness distribution of the target image in the distance direction.
S402, aiming at each target image, based on the gray value mean value of each distance direction, filtering correction is carried out on each target image under the action of a Gaussian filter model, a second correction image of each target image is obtained, and at least two second correction images are obtained.
In this embodiment of the present invention, for each target image, the image processing apparatus may substitute a gaussian filter model for each distance direction in the target image, so as to perform smooth filtering on the luminance distribution of the target image in the distance direction through the gaussian filter model, and obtain a second corrected image of the target image. The image processing apparatus applies the same method to each target image, thereby obtaining a second corrected image of each target image.
In the embodiment of the present invention, S402 may specifically include S4021 to S4025, as follows:
s4021, calculating the mean of the gray values of all the target pixels in each target image as the overall gray value mean of each target image.
In the embodiment of the present invention, for each target image, the image processing apparatus may calculate a mean value of gray values of all target pixels in the target image as an overall mean value of gray values of the target image.
S4022, for each target image, correspondingly calculating the ratio of the overall gray value mean value of each target image to the gray value mean value of each distance direction in the target image to obtain the gray value mean value ratio of each distance direction.
In the embodiment of the present invention, the image processing apparatus calculates the ratio of the overall gray value mean of each target image to the gray value mean of each distance direction in the target image, obtaining the gray value mean ratio of each distance direction.
S4023, filtering the gray value average ratio of each distance direction through a Gaussian filtering model to obtain a compensation factor of each distance direction.
In the embodiment of the present invention, the image processing apparatus uses a Gaussian filter to filter the gray value mean ratio of each distance direction according to formula (13), so as to obtain a compensation factor for brightness compensation of each row of target pixels in the target image, as follows:

F(n) = r(n) ⊛ g    (13)

in formula (13), F(n) represents the compensation factor, r(n) is the gray value mean ratio of the n-th distance direction obtained in S4022, ⊛ denotes convolution, and g represents a discrete Gaussian kernel of length L and standard deviation σ, as shown in formula (14):

g(n) = [1/(√(2π)·σ)]·exp(−n²/(2σ²))    (14)
in some embodiments, L may be 800, σ may be 200, or may be determined according to different image situations, specifically, selected according to actual situations, and the embodiments of the present invention are not limited.
S4024, in each target image, multiplying the gray value of each target pixel by the compensation factor of its distance direction to obtain the corrected gray value of each target pixel.
In the embodiment of the present invention, after obtaining the compensation factor F, the image processing apparatus may compensate the gray scale value of each target pixel in the target image by using the compensation factor, so as to obtain the corrected gray scale value of each target pixel, as shown in formula (15):
S1(i,j)=F(i)·S(i,j) (i∈[1,M],j∈[1,N]) (15)
in formula (15), S(i, j) is the gray value of the target pixel in the i-th row and j-th column of the target image, and F(i) is the compensation factor corresponding to the target pixels of the i-th row; the image processing device multiplies S(i, j) by F(i) to obtain the corrected gray value S1(i, j) of that target pixel.
S4025, obtaining each second correction pixel according to the correction gray value of each target pixel, thereby obtaining a second correction image of each target image, and further obtaining at least two second correction images.
In the embodiment of the present invention, after the image processing device obtains the corrected grayscale value of each target pixel for each target image, the original grayscale value of each target pixel may be replaced by the corrected grayscale value of each target pixel to obtain a second corrected image corresponding to each target image, and the compensation and correction process for the stripe non-uniformity of the target image is completed.
In the embodiment of the present invention, the image processing apparatus processes at least two target images using the same method, thereby obtaining at least two second correction images.
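The distance-direction compensation of S4021–S4025 (formulas (13) to (15)) can be sketched as below. The kernel construction and the use of `numpy.convolve` are one plausible realization of the Gaussian filtering step, assumed for illustration, not the patent's exact implementation:

```python
import numpy as np

def correct_range_stripes(img, L=800, sigma=200.0):
    """Sketch of S4021-S4025 for one target image (rows = distance
    directions); default L and sigma follow the values given in the text."""
    # S4021: overall gray value mean of the target image
    overall = img.mean()
    # gray value mean of each distance direction (each row), Mr in the text
    M_r = img.mean(axis=1)
    # S4022: gray value mean ratio of each distance direction
    ratio = overall / M_r
    # S4023: smooth the ratio with a normalized discrete Gaussian kernel
    # of length L and standard deviation sigma (formulas (13) and (14))
    n = np.arange(L) - (L - 1) / 2.0
    g = np.exp(-n ** 2 / (2 * sigma ** 2))
    g /= g.sum()
    F = np.convolve(ratio, g, mode="same")   # compensation factor F
    # S4024/S4025: multiply each row by its compensation factor,
    # formula (15)
    return img * F[:, np.newaxis]
```

Note that `mode="same"` zero-pads at the boundaries, so rows near the top and bottom edges are attenuated slightly; a production implementation would need an explicit boundary treatment.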
And S403, splicing the at least two second corrected images into a final composite image to finish the image processing process.
In this embodiment of the present invention, the image processing apparatus may stitch at least two second corrected images into one final composite image, thereby completing the image processing process.
In the embodiment of the present invention, the method for splicing at least two second corrected images by the image processing apparatus to obtain the final composite image is the same as S104 in principle, and is not described herein again.
It can be understood that, in the embodiment of the present invention, the image processing apparatus may further eliminate the stripe non-uniformity phenomenon based on the original target image from which the scallop effect is eliminated, and perform filtering correction on the distance-wise gray-scale value average value by using a gaussian filter, so as to obtain a target image with better image quality. Then, the image processing apparatus can perform image stitching based on the target image from which the scallop effect and the banding non-uniformity are removed, thereby further improving the image imaging quality.
In some embodiments, as shown in fig. 7(a), which shows a ScanSAR image without correction of the stripe non-uniformity, it can be seen from the image of fig. 7(a) that the middle-upper part of the distance direction is stronger and brighter, while the bottom part is weaker and darker. The distance-direction brightness distribution graph of fig. 7(a) shows a high peak toward the middle-upper part, decreasing toward both sides. The stripe non-uniformity phenomenon affects the overall perception of the image, tends to suggest a false radiation intensity distribution, hinders interpretation and analysis of the image, and causes great difficulty in image post-processing. Fig. 7(b) shows an image after the stripe non-uniformity phenomenon is corrected by the method in the embodiment of the present invention: as shown in the distance-direction brightness distribution graph of fig. 7(b), the brightness distribution is uniform, and in the image of fig. 7(b) the wide bright band has disappeared, the top-to-bottom transition is uniform, and the stripe non-uniformity phenomenon is effectively eliminated.
In the embodiment of the present invention, based on fig. 2 to 6, the step S104 of stitching at least two second corrected images into a final synthesized image to complete the image processing process may specifically be as shown in fig. 8, including the steps S1041 to S1044, as follows:
s1041, obtaining a geographical positioning result of each target image.
In the embodiment of the invention, the image processing device can acquire the geographic positioning result of each target image from the satellite parameter corresponding to the target image.
S1042, according to the geographical positioning result of each target image, calculating an overlapping area between each two adjacent target images in at least two target images to obtain a first overlapping area and a second overlapping area; the first overlap region is a region in a previous target image of every two adjacent target images; the second overlapping area is an area in the next target image of every two adjacent target images.
In the embodiment of the invention, due to the influence of various factors such as the slant range from the satellite to the target, the radar viewing angle, the antenna pattern, and atmospheric attenuation, the radiation intensity of images formed from different sub-swaths can differ; when such images are stitched into a wide image, an obvious seam appears at the junction, and the brightness on the two sides of the seam is uneven. Without reasonable and effective correction, this radiation non-uniformity seriously affects the visual effect of the image, interferes with image interpretation, and hinders post-processing such as feature extraction and image stitching.
Therefore, in the embodiment of the present invention, after the image processing device obtains the target image, the image processing device may calculate an overlapping area between two adjacent images according to the image geographic positioning result, and eliminate the spliced seam based on the overlapping area.
In the embodiment of the invention, in every two adjacent target images, the image processing device takes the area which is overlapped with the geographical positioning of the next target image in the previous target image as a first overlapping area; and taking the area which is overlapped with the previous target image in the geographical positioning mode in the next target image as a second overlapping area.
And S1043, taking the previous target image in every two adjacent target images as a standard image, and adjusting the next target image in every two adjacent target images under the action of a uniform color filter model and Gaussian convolution operation on the basis of the first overlapping area and the second overlapping area to obtain a final adjusted image corresponding to the next target image.
In the embodiment of the present invention, after obtaining the overlap area of every two adjacent target images, the image processing apparatus may adjust a subsequent target image of every two adjacent target images through a uniform color filter model based on the gray values of the first overlap area and the second overlap area between every two adjacent target images, and use the adjusted subsequent target image as the adjustment image.
In the embodiment of the invention, because the actual splicing processing is directional, the image processing device takes the previous target image as the standard image and adjusts the next target image in the adjacent target images. For example, if the order of image stitching is stitching from left to right, the image on the right is adjusted by taking the image on the left of the adjacent target images as the standard image, and so on.
In some embodiments of the present invention, the uniform color filter model may be a Wallis filter. The Wallis filter maps the gray value mean and variance of a local image region to given target values, so that the gray value mean and variance at different positions of the image become approximately equal; it thereby increases the contrast of low-contrast regions, decreases the contrast of high-contrast regions, and enhances fine gray-level detail in the image. Other uniform color and uniform light filtering models may be selected according to actual filtering requirements, and the embodiment of the invention is not limited thereto.
In the embodiment of the invention, if the target image is corrected by the uniform color filter model only once, the correction may fail when the brightness trend of the overlapping region is exactly opposite to that of the target image. Therefore, the embodiment of the invention may further perform Gaussian convolution processing on the adjusted image, making up for the limitation that the classical Wallis filter fails when the brightness trends of the overlapping regions are opposite, and obtaining the final adjusted image.
In some embodiments of the present invention, S1043 may specifically include S501-S506, as follows:
S501, calculating a first gray value mean value and a first gray value standard deviation of the first overlapping area respectively.
In the embodiment of the invention, for a first overlapping area in every two adjacent target images, the image processing device calculates the average value of the gray values of all pixels in the first overlapping area as the first gray value average value; the image processing device calculates a standard deviation of the gradation values of all the pixels in the first overlap region as a first gradation value standard deviation.
And S502, respectively calculating a second gray value mean value and a second gray value standard deviation of the second overlapped area.
In the embodiment of the invention, for the second overlapped area of every two adjacent target images, the image processing device calculates the average value of the gray values of all pixels in the second overlapped area as the average value of the second gray values; the image processing device calculates a standard deviation of the gradation values of all the pixels in the second overlap area as a second gradation value standard deviation.
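As an illustrative sketch (not the patent's own implementation), the overlap statistics of S501–S502 can be computed with NumPy. The function name `overlap_stats` and the assumption that the overlap strips lie along the azimuth (row) edges of each image are hypothetical; the patent derives the overlap areas from geographic positioning results.

```python
import numpy as np

def overlap_stats(prev_img, next_img, overlap_rows):
    """Compute the S501-S502 statistics of two adjacent images.

    prev_img, next_img: 2-D grayscale arrays (azimuth x range).
    overlap_rows: number of overlapping azimuth lines, assumed to lie at
    the trailing edge of prev_img and the leading edge of next_img.
    """
    first_overlap = prev_img[-overlap_rows:, :]   # region in the previous image
    second_overlap = next_img[:overlap_rows, :]   # region in the next image
    m_alpha, s_alpha = first_overlap.mean(), first_overlap.std()   # S501
    m_beta, s_beta = second_overlap.mean(), second_overlap.std()   # S502
    return m_alpha, s_alpha, m_beta, s_beta
```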
S503, based on the first gray value mean value, the first gray value standard deviation, the second gray value mean value and the second gray value standard deviation, performing brightness uniformity processing on the subsequent target image under the action of the uniform color filter model to obtain an adjusted image corresponding to the subsequent target image.
In the embodiment of the present invention, an algorithm corresponding to the uniform color filter model is shown in formula (16), and the image processing apparatus may perform, by using formula (16), luminance uniformity processing on each target pixel in the subsequent target image based on the first gray value mean, the first gray value standard deviation, the second gray value mean, and the second gray value standard deviation, to obtain an adjusted gray value of each target pixel in the subsequent target image, as follows:
S′β = (sα / sβ) · (Sβ − mβ) + mα　(16)

in formula (16), Sβ is the subsequent target image of every two adjacent target images, sα is the first gray value standard deviation, mα is the first gray value mean, sβ is the second gray value standard deviation, mβ is the second gray value mean, and S′β is the adjusted image obtained after Sβ is processed by the uniform color filter model.
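Formula (16) is the standard Wallis-style stretch: it shifts and scales the subsequent image so that its overlap-region statistics match those of the standard image. A minimal NumPy sketch follows; the function name `wallis_adjust` is an assumption for illustration.

```python
import numpy as np

def wallis_adjust(next_img, m_alpha, s_alpha, m_beta, s_beta):
    """Formula (16): map the subsequent image's overlap statistics
    (m_beta, s_beta) onto the standard image's (m_alpha, s_alpha)."""
    return (next_img - m_beta) * (s_alpha / s_beta) + m_alpha
```

Applied to the second overlapping area itself, the result has mean m_alpha and standard deviation s_alpha, which is exactly the equalization the filter is meant to achieve.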
S504, calculating the gray value mean value of each azimuth direction in the first overlapping area to obtain a third gray value mean value sequence.
In the embodiment of the invention, the image processing device recalculates the gray value mean value of each azimuth in the first overlapping area in the standard image to obtain a third gray value mean value sequence.
And S505, calculating the gray value mean value of each azimuth in the overlapped area of the adjusted image and the standard image according to the adjusted image to obtain a fourth gray value mean value sequence.
In an embodiment of the present invention, since the second overlapping area is an area in the subsequent target image, after the image processing apparatus adjusts the subsequent target image into the adjusted image, it may recalculate the gray value mean of each azimuth direction in the overlapping area between the adjusted image and the standard image, so as to obtain the fourth gray value mean sequence.
S506, adjusting the adjusted image according to the ratio of the third gray value mean sequence to the fourth gray value mean sequence through Gaussian convolution operation to obtain a final adjusted image.
In the embodiment of the present invention, after the image processing device obtains the third and fourth gray value mean sequences, it may perform Gaussian convolution processing on the adjusted image according to the ratio of the third gray value mean sequence to the fourth gray value mean sequence, so as to obtain the final adjusted image.
In the embodiment of the present invention, the Gaussian convolution operation formula may be shown as formula (17), as follows:

S″β = S′β · [(Mα / Mβ) ∗ g(n)]　(17)

in formula (17), S′β is the adjusted image of every two adjacent target images after filtering by the uniform color filter model, Mα is the third gray value mean sequence, and Mβ is the fourth gray value mean sequence. To avoid jitter effects, the gray value mean ratio Mα/Mβ is low-pass filtered by a convolution (∗) with the Gaussian kernel g(n), and the smoothed ratio is applied to S′β to obtain the final adjusted image S″β after the Gaussian convolution operation.
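The smoothing-and-rescaling step of formula (17) can be sketched as follows. This assumes one mean-ratio value per azimuth line, applied multiplicatively line by line; the function names and the kernel length/width defaults (`n=9`, `sigma=2.0`) are illustrative assumptions, not values given in the patent.

```python
import numpy as np

def gaussian_kernel(n, sigma):
    """Discrete Gaussian kernel g(n), normalized to unit sum."""
    x = np.arange(n) - (n - 1) / 2.0
    g = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return g / g.sum()

def ratio_correction(adj_img, m_third, m_fourth, n=9, sigma=2.0):
    """Formula (17): low-pass filter the per-azimuth mean ratio M_alpha/M_beta
    with a Gaussian kernel, then rescale each azimuth line of the
    Wallis-adjusted image by the smoothed ratio."""
    ratio = np.asarray(m_third, float) / np.asarray(m_fourth, float)
    smooth = np.convolve(ratio, gaussian_kernel(n, sigma), mode="same")
    return adj_img * smooth[:, None]  # one factor per azimuth line
```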
And S1044, continuously splicing the standard image and the final adjustment image in each two adjacent target images of the at least two target images until the final two adjacent target images are processed to obtain a composite image, and finishing the image processing process.
In the embodiment of the invention, the image processing device carries out image splicing on the standard image and the final adjustment image in every two adjacent target images in at least two target images so as to eliminate image splicing seams, and after the image processing device finishes processing all every two adjacent target images in the at least two target images, a composite image is obtained, and the image processing process is finished.
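One simple stitching policy consistent with S1044 — keeping the standard image's pixels over the overlap and appending only the non-overlapping rows of the final adjusted image — can be sketched as follows. The azimuth-wise (row) layout and the function name are assumptions; the patent does not fix a particular overlap-merging rule.

```python
import numpy as np

def stitch_pair(standard_img, final_adj_img, overlap_rows):
    """Concatenate a standard image with its final adjusted neighbor,
    dropping the duplicated overlap rows from the adjusted image."""
    return np.vstack([standard_img, final_adj_img[overlap_rows:, :]])
```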
It can be understood that, in the embodiment of the present invention, after one Wallis filtering is completed, an average ratio is added to supplement description of the luminance trend of the overlap region, so as to make up for the limitation that a classical Wallis filter fails when the luminance trends of the overlap region are opposite, obtain a better image stitching effect, and improve image quality.
In some embodiments, the result of image stitching without any processing is shown in fig. 9(a): owing to the difference in radiation intensity between the two images before stitching, the upper half of the image is brighter and the lower half darker, so the stitching seam is very obvious and a step discontinuity appears on the azimuth brightness distribution curve. Fig. 9(b) shows the effect of passing fig. 9(a) through a classical Wallis filter; the azimuth brightness distribution curve already shows a uniform distribution, but a seam still exists in the image: the brightness of the upper half changes from bright to dark while that of the lower half changes from dark to bright, i.e., the brightness trends are opposite, yet the mean and standard deviation of the two halves are almost the same, which is the limitation of the classical Wallis filter. Fig. 9(c) shows the effect of eliminating the stitching seam by the method in the embodiment of the invention: the upper and lower images join properly, the transition is smooth, the azimuth brightness distribution curve is uniform, and the stitching seam is effectively eliminated.
The embodiment of the invention provides an image processing method that can be applied to the Gaofen-3 (GF-3) satellite, China's first C-band multi-polarization high-resolution synthetic aperture radar satellite and the only civil microwave remote sensing imaging satellite in the "national high-resolution earth observation system" major project, which supports multiple imaging modes including a scanning (ScanSAR) working mode. The high-quality ScanSAR data of the GF-3 satellite can be used for scientific research experiments and is of great significance for verifying spaceborne ScanSAR image radiation correction algorithms.
The embodiment of the invention selects a ScanSAR image acquired by the GF-3 satellite in 2016 for testing. The selected image covers Chifeng City, Inner Mongolia Autonomous Region, China, a region spanning the southwestern section of the Greater Khingan Mountains and the northern end of the Qilaotu Mountain range. Scattering in this area is relatively uniform, so radiation non-uniformity is easy to observe in the image, making it suitable for testing radiation correction of spaceborne ScanSAR images. In the embodiment of the invention, the method in S601-S609 can be adopted to correct and stitch the spaceborne ScanSAR image, as follows:
S601, acquiring the gray value of each original pixel in each original image in at least two original images.
S602, correcting each original image under the action of a preset low-pass filtering model based on the gray value standard deviation of each azimuth direction in each original image to obtain at least two first corrected images.
S603, obtaining a gray value mean value of each azimuth direction in each first correction image, and correcting each first correction image under the action of a preset low-pass filtering model based on the gray value mean value of each azimuth direction to obtain at least two target images.
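The two azimuth corrections of S602–S603 — divide each azimuth line by its gray value gain (standard deviation over its low-pass estimate), then subtract its gray value deviation (mean minus its low-pass estimate) — can be sketched as follows. A moving average stands in for the patent's unspecified "preset low-pass filtering model", and the window length and edge handling are assumptions.

```python
import numpy as np

def moving_average(seq, n=15):
    """Simple stand-in for the 'preset low-pass filtering model'."""
    k = np.ones(n) / n
    return np.convolve(seq, k, mode="same")

def azimuth_correct(img, n=15):
    """S602-S603 sketch on a 2-D image (azimuth = rows, range = columns)."""
    std = img.std(axis=1)
    gain = std / moving_average(std, n)       # gray value gain per line (S602)
    first = img / gain[:, None]               # first corrected image
    mean = first.mean(axis=1)
    dev = mean - moving_average(mean, n)      # gray value deviation (S603)
    return first - dev[:, None]               # target image
```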
And S604, in each target image, obtaining the mean value of the gray values of each distance direction of each target image according to the gray value of each target pixel.
S605, filtering and correcting each target image under the action of a Gaussian filter model according to the gray value average value of each distance direction of each target image to obtain a second corrected image of each target image, and further obtain at least two second corrected images.
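The range-direction compensation of S604–S605 can be sketched as follows: a per-range-bin gray mean, its ratio to the overall image mean, smoothed by a Gaussian filter into a compensation factor that is multiplied column-wise onto the image. The kernel parameters (`n=21`, `sigma=5.0`) are illustrative assumptions, not values from the patent.

```python
import numpy as np

def range_compensate(img, n=21, sigma=5.0):
    """S604-S605 sketch (azimuth = rows, range = columns)."""
    col_mean = img.mean(axis=0)            # gray value mean of each range bin
    ratio = img.mean() / col_mean          # overall mean / per-bin mean
    x = np.arange(n) - (n - 1) / 2.0
    g = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    g /= g.sum()                           # normalized Gaussian filter model
    factor = np.convolve(ratio, g, mode="same")   # compensation factor
    return img * factor[None, :]           # second corrected image
```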
S606, acquiring the geographic positioning result of each second correction image.
In the embodiment of the present invention, the principle of the method for acquiring the geolocation result of each second calibration image in S606 is the same as that in S1041, and details are not repeated here.
S607, according to the geographic positioning result of each second corrected image, calculating the overlapping area between every two adjacent second corrected images in the at least two second corrected images to obtain a third overlapping area and a fourth overlapping area; the third overlapping area is an area in the previous second corrected image of every two adjacent second corrected images; the fourth overlapping area is an area in the subsequent second corrected image of every two adjacent second corrected images.
In the embodiment of the present invention, the principle of the method for obtaining the third overlapping area and the fourth overlapping area in S607 is the same as that of the method for obtaining the first overlapping area and the second overlapping area in S1042, and details are not repeated here.
And S608, taking the previous second corrected image in every two adjacent second corrected images as a standard image, and adjusting the next second corrected image in every two adjacent second corrected images under the action of the uniform color filtering model and the Gaussian convolution operation on the basis of the third overlapped area and the fourth overlapped area to obtain a final adjusted image corresponding to the next second corrected image.
In the embodiment of the present invention, the principle of the method for obtaining the final adjustment image corresponding to the next second correction image in S608 is the same as that of the method for obtaining the final adjustment image corresponding to the next target image in S1043, and details are not repeated here.
And S609, continuously splicing the standard image and the final adjustment image in each two adjacent second correction images in the at least two second correction images until the last two adjacent second correction images are processed to obtain a composite image, and finishing the image processing process.
In the embodiment of the present invention, the principle of the method for obtaining the synthetic image in S609 is the same as S1044, and details are not repeated here.
It can be understood that, in the embodiment of the present invention, the image processing device may sequentially perform image processing procedures of azimuth scallop effect elimination, distance stripe non-uniformity correction, and stitching seam elimination on at least two original images obtained by radar scanning, so as to finally obtain a composite image, thereby greatly improving the image quality of radar imaging.
An embodiment of the present invention provides an image processing apparatus 2, as shown in fig. 11, the image processing apparatus 2 including: an acquisition unit 200, a correction unit 201 and a splicing unit 202; wherein,
the acquiring unit 200 is configured to acquire, in at least two original images, a gray value of each original pixel in each original image; each original pixel is a pixel contained in each original image;
the correcting unit 201 is configured to correct each original image under the action of a preset low-pass filtering model based on a standard deviation of a gray value of each azimuth direction in each original image, so as to obtain at least two first corrected images; acquiring a mean value of gray values of each azimuth direction in each first correction image, and correcting each first correction image under the action of the preset low-pass filtering model based on the mean value of gray values of each azimuth direction to obtain at least two target images;
the stitching unit 202 is configured to stitch the at least two target images into a composite image, so as to complete an image processing process.
In some embodiments of the present invention, the correcting unit 201 is further configured to filter, in each original image, the standard deviation of the gray value in each direction using the preset low-pass filtering model, so as to obtain an estimated value of the standard deviation in each direction; the gray value standard deviation of each azimuth direction corresponds to the standard deviation estimation value of each azimuth direction one by one; taking the ratio of the gray value standard deviation of each azimuth direction to the standard deviation estimation value of the azimuth direction as the gray value gain of each azimuth direction; and correcting each original image according to the gray value gain of each azimuth direction so as to obtain at least two first corrected images.
In some embodiments of the present invention, the correcting unit 201 is further configured to divide the gray-level value of each original pixel by the gray-level gain of each azimuth direction, so as to obtain a first corrected gray-level value corresponding to each original pixel; and correcting each original pixel according to the first correction gray value to obtain each first correction pixel, so as to obtain a first correction image corresponding to each original image, and further obtain the at least two first correction images.
In some embodiments of the present invention, the correcting unit 201 is further configured to filter the gray value mean of each azimuth direction in each first corrected image by using the preset low-pass filtering model to obtain a mean estimated value of each azimuth direction, where the gray value mean of each azimuth direction corresponds one-to-one to the mean estimated value of that azimuth direction; take the difference between the gray value mean of each azimuth direction and the mean estimated value of that azimuth direction as the gray value deviation of each azimuth direction; and correct each first corrected image according to the gray value deviation of each azimuth direction, thereby obtaining the at least two target images.
In some embodiments of the present invention, the correcting unit 201 is further configured to obtain a gray value of each first correction pixel in each first correction image; correspondingly subtracting the gray value of each first correction pixel from the gray value deviation of each azimuth direction to obtain a target gray value corresponding to each first correction pixel; and correcting the first correction pixel of each first correction image according to the target gray value of each first correction pixel to obtain each target pixel, so as to obtain a target image corresponding to each first correction image, and further obtain the at least two target images.
In some embodiments of the invention, the image processing apparatus 2 further comprises a compensation correction unit, wherein,
the compensation correction unit is configured to acquire a mean value of gray values of each orientation in each first corrected image, correct each first corrected image under the action of the preset low-pass filtering model based on the mean value of gray values of each orientation to obtain at least two target images, and obtain, in each target image, a mean value of gray values of each distance direction of each target image according to a gray value of each target pixel; for each target image, based on the gray value mean value of each distance direction, performing filtering correction on each target image under the action of a Gaussian filtering model to obtain a second corrected image of each target image, and further obtain at least two second corrected images; and splicing the at least two second correction images into a final composite image to finish the image processing process.
In some embodiments of the present invention, the compensation correction unit is further configured to calculate, in each target image, the mean of the gray values of all target pixels as the overall gray value mean of each target image; for each target image, correspondingly calculate the ratio of the overall gray value mean of each target image to the gray value mean of each distance direction in the target image to obtain the gray value mean ratio of each distance direction; filter the gray value mean ratio of each distance direction through the Gaussian filtering model to obtain a compensation factor of each distance direction; in each target image, correspondingly multiply the gray value of each target pixel by the compensation factor of each distance direction to obtain the corrected gray value of each target pixel; and obtain each second correction pixel according to the corrected gray value of each target pixel, thereby obtaining a second corrected image of each target image and further obtaining at least two second corrected images.
In some embodiments of the present invention, the stitching unit 202 is further configured to obtain a geographic positioning result of each target image; according to the geographic positioning result of each target image, calculating an overlapping area between each two adjacent target images in the at least two target images to obtain a first overlapping area and a second overlapping area; the first overlap region is a region in a previous target image of the every two adjacent target images; the second overlapping area is an area in a subsequent target image of the every two adjacent target images; taking the previous target image of every two adjacent target images as a standard image, and adjusting the next target image of every two adjacent target images under the action of a uniform color filter model and Gaussian convolution operation on the basis of the first overlapping area and the second overlapping area to obtain a final adjusted image corresponding to the next target image; and for every two adjacent target images in the at least two target images, continuously splicing the standard images in every two adjacent target images with the final adjustment image until the last two adjacent target images are processed, obtaining the composite image, and finishing the image processing process.
In some embodiments of the present invention, the splicing unit 202 is further configured to calculate a first mean gray-scale value and a first standard deviation gray-scale value of the first overlapping area, respectively; respectively calculating a second gray value mean value and a second gray value standard deviation of the second overlapped area; performing brightness uniformity processing on the next target image under the action of a uniform color filter model based on the first gray value mean value, the first gray value standard deviation, the second gray value mean value and the second gray value standard deviation to obtain an adjusted image corresponding to the next target image; calculating the gray value mean value of each azimuth direction in the first overlapping area to obtain a third gray value mean value sequence; calculating the gray value mean value of each azimuth in the overlapping area of the adjusted image and the standard image according to the adjusted image to obtain a fourth gray value mean value sequence; and adjusting the adjusted image according to the ratio of the third gray value mean sequence to the fourth gray value mean sequence through Gaussian convolution operation to obtain the final adjusted image.
In some embodiments of the present invention, the splicing unit 202 is further configured to perform a Gaussian convolution operation on the ratio of the third gray value mean sequence to the fourth gray value mean sequence with a Gaussian kernel of a preset length to obtain an operation result; and multiply the operation result by the gray value of each adjusting pixel in the adjusted image to obtain the final adjusted image; each adjusting pixel is a pixel included in the adjusted image.
An embodiment of the present invention provides an electronic device 5, and as shown in fig. 12, the electronic device 5 includes: a processor 54, a memory 55 and a communication bus 56, the memory 55 being in communication with the processor 54 via the communication bus 56, the memory 55 storing one or more programs executable by the processor 54, the processor 54 performing the image processing method as described in any one of the above when the one or more programs are executed.
The disclosed embodiments provide a computer readable storage medium storing one or more programs executable by one or more processors 54 to implement an image processing method as in any above.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention.

Claims (13)

1. An image processing method, comprising:
acquiring a gray value of each original pixel in each original image from at least two original images; each original pixel is a pixel contained in each original image;
correcting each original image under the action of a preset low-pass filtering model based on the gray value standard difference of each azimuth direction in each original image to obtain at least two first corrected images;
acquiring a mean value of gray values of each azimuth direction in each first correction image, and correcting each first correction image under the action of the preset low-pass filtering model based on the mean value of gray values of each azimuth direction to obtain at least two target images;
and splicing the at least two target images into a composite image to finish the image processing process.
2. The method according to claim 1, wherein the correcting each original image under the action of a preset low-pass filtering model based on the standard deviation of the gray-scale value of each azimuth direction in each original image to obtain at least two first corrected images comprises:
filtering the gray value standard deviation of each azimuth direction in each original image by using the preset low-pass filtering model to obtain a standard deviation estimation value of each azimuth direction; the gray value standard deviation of each azimuth direction corresponds to the standard deviation estimation value of each azimuth direction one by one;
taking the ratio of the gray value standard deviation of each azimuth direction to the standard deviation estimation value of the azimuth direction as the gray value gain of each azimuth direction;
and correcting each original image according to the gray value gain of each azimuth direction so as to obtain at least two first corrected images.
3. The method of claim 2, wherein said correcting each of said original images according to said gray value gain for each orientation to obtain said at least two first corrected images comprises:
correspondingly dividing the gray value of each original pixel with the gray value gain of each azimuth direction to obtain a first correction gray value corresponding to each original pixel;
and correcting each original pixel according to the first correction gray value to obtain each first correction pixel, so as to obtain a first correction image corresponding to each original image, and further obtain the at least two first correction images.
4. The method according to any one of claims 1 to 3, wherein the obtaining of the mean value of the gray-scale values of each orientation in each first corrected image, and based on the mean value of the gray-scale values of each orientation, correcting each first corrected image under the action of the preset low-pass filtering model to obtain at least two target images comprises:
filtering the gray value mean value of each azimuth direction in each first correction image by using the preset low-pass filtering model to obtain a mean value estimation value of each azimuth direction; the gray value mean value of each azimuth direction corresponds to the mean value estimation value of each azimuth direction one by one;
taking the difference value of the gray value mean value of each azimuth direction and the mean value estimation value of the azimuth direction as the gray value deviation of each azimuth direction;
and correcting each first correction image according to the gray value deviation of each azimuth direction, thereby obtaining the at least two target images.
5. The method of claim 4, wherein said correcting each first corrected image according to the gray value deviation of each orientation to obtain the at least two target images comprises:
acquiring a gray value of each first correction pixel in each first correction image;
correspondingly subtracting the gray value of each first correction pixel from the gray value deviation of each azimuth direction to obtain a target gray value corresponding to each first correction pixel;
and correcting the first correction pixel of each first correction image according to the target gray value of each first correction pixel to obtain each target pixel, so as to obtain a target image corresponding to each first correction image, and further obtain the at least two target images.
6. The method according to any one of claims 1 to 5, wherein after obtaining a mean value of gray-level values of each azimuth direction in each first corrected image, and correcting each first corrected image under the action of the preset low-pass filtering model based on the mean value of gray-level values of each azimuth direction to obtain at least two target images, the method further comprises:
in each target image, obtaining a mean value of gray values of each target image in each distance direction according to the gray value of each target pixel;
for each target image, based on the gray value mean value of each distance direction, performing filtering correction on each target image under the action of a Gaussian filtering model to obtain a second corrected image of each target image, and further obtain at least two second corrected images;
and splicing the at least two second correction images into a final composite image to finish the image processing process.
7. The method according to claim 6, wherein the performing, for each target image, filter correction on each target image under the effect of a gaussian filter model based on the mean of the grayscale values of each distance direction to obtain a second corrected image of each target image, and further obtain at least two second corrected images, comprises:
calculating the mean value of the gray values of all target pixels in each target image to serve as the overall gray value mean value of each target image;
for each target image, correspondingly calculating the ratio of the overall gray value mean value of each target image to the gray value mean value of each distance direction in the target image to obtain the gray value mean value ratio of each distance direction;
filtering the gray value average ratio of each distance direction through the Gaussian filtering model to obtain a compensation factor of each distance direction;
in each target image, multiplying the gray value of each target pixel by the compensation factor of each distance direction correspondingly to obtain the corrected gray value of each target pixel;
and obtaining each second correction pixel according to the correction gray value of each target pixel, thereby obtaining a second correction image of each target image and further obtaining at least two second correction images.
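The distance-direction (range) compensation of claim 7 can be sketched as below. This is an illustrative reading only: range is assumed to run along array axis 1, a hand-rolled normalized Gaussian kernel stands in for the patent's Gaussian filtering model, and `sigma` with its 3-sigma truncation are assumed values:

```python
import numpy as np

def range_compensate(image: np.ndarray, sigma: float = 3.0) -> np.ndarray:
    """Claim-7 style range-direction gain correction.

    Assumes range varies along axis 1 and all column means are non-zero;
    the kernel width `sigma` is a hypothetical choice.
    """
    overall_mean = image.mean()                        # overall gray value mean
    col_means = image.mean(axis=0)                     # gray value mean per range column
    ratio = overall_mean / col_means                   # mean ratio per range column
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()                             # normalized Gaussian kernel
    factors = np.convolve(ratio, kernel, mode="same")  # smoothed compensation factors
    return image * factors[None, :]                    # apply per-column gain
```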
8. The method according to any one of claims 1 to 7, wherein said stitching said at least two target images into a composite image to finish the image processing process comprises:
acquiring a geographical positioning result of each target image;
according to the geographic positioning result of each target image, calculating an overlapping area between each two adjacent target images in the at least two target images to obtain a first overlapping area and a second overlapping area; the first overlap region is a region in a previous target image of the every two adjacent target images; the second overlapping area is an area in a subsequent target image of the every two adjacent target images;
taking the previous target image of every two adjacent target images as a standard image, and adjusting the next target image of every two adjacent target images under the action of a uniform color filter model and Gaussian convolution operation on the basis of the first overlapping area and the second overlapping area to obtain a final adjusted image corresponding to the next target image;
and for every two adjacent target images in the at least two target images, continuously splicing the standard images in every two adjacent target images with the final adjustment image until the last two adjacent target images are processed, obtaining the composite image, and finishing the image processing process.
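The overlap computation of claim 8 depends on the geographical positioning results. Reduced to one dimension (along-track extent), it can be sketched as below; the `(start, end)` tuple representation of an extent is an assumption for illustration, whereas real geolocation would involve 2-D footprints:

```python
def overlap_regions(extent_a, extent_b):
    """Given geolocated extents (start, end) of two adjacent images,
    return the overlap expressed in each image's own coordinates.

    extent_a is the previous image, extent_b the subsequent one
    (the claim-8 setting); returns None when they do not overlap.
    """
    start = max(extent_a[0], extent_b[0])
    end = min(extent_a[1], extent_b[1])
    if start >= end:
        return None                                   # no overlap
    first = (start - extent_a[0], end - extent_a[0])  # region in previous image
    second = (start - extent_b[0], end - extent_b[0]) # region in subsequent image
    return first, second
```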
9. The method according to claim 8, wherein the taking the previous target image of every two adjacent target images as a standard image, and adjusting the next target image of every two adjacent target images under the action of a uniform color filter model and Gaussian convolution operation based on the first overlapping area and the second overlapping area to obtain a final adjusted image corresponding to the next target image, comprises:
respectively calculating a first gray value mean value and a first gray value standard deviation of the first overlapping area;
respectively calculating a second gray value mean value and a second gray value standard deviation of the second overlapping area;
performing brightness uniformity processing on the next target image under the action of a uniform color filter model based on the first gray value mean value, the first gray value standard deviation, the second gray value mean value and the second gray value standard deviation to obtain an adjusted image corresponding to the next target image;
calculating the gray value mean value of each azimuth direction in the first overlapping area to obtain a third gray value mean value sequence;
calculating the gray value mean value of each azimuth direction in the overlapping area of the adjusted image and the standard image according to the adjusted image to obtain a fourth gray value mean value sequence;
and adjusting the adjusted image according to the ratio of the third gray value mean value sequence to the fourth gray value mean value sequence through Gaussian convolution operation to obtain the final adjusted image.
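The brightness-uniformity step of claim 9 matches the second overlap region's statistics to the first's. A minimal sketch of that step, assuming a global affine gray value mapping (the patent's "uniform color filter model" is not specified here, so this particular form is an assumption):

```python
import numpy as np

def match_overlap_stats(target: np.ndarray,
                        ref_overlap: np.ndarray,
                        tgt_overlap: np.ndarray) -> np.ndarray:
    """Linearly remap `target` so its overlap statistics match the reference.

    ref_overlap / tgt_overlap are the first / second overlapping areas
    of claim 9; the global affine mapping is an assumed realization of
    the 'uniform color filter model'.
    """
    mu_r, sd_r = ref_overlap.mean(), ref_overlap.std()  # first mean / std
    mu_t, sd_t = tgt_overlap.mean(), tgt_overlap.std()  # second mean / std
    return (target - mu_t) * (sd_r / sd_t) + mu_r       # match mean and std
```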
10. The method according to claim 9, wherein the adjusting the adjusted image according to the ratio of the third gray value mean value sequence to the fourth gray value mean value sequence through Gaussian convolution operation to obtain the final adjusted image comprises:
performing Gaussian convolution operation on the ratio of the third gray value mean value sequence to the fourth gray value mean value sequence and a Gaussian kernel with a preset length to obtain an operation result;
and multiplying the operation result by the gray value of each adjustment pixel in the adjusted image to obtain the final adjusted image; each adjustment pixel is a pixel contained in the adjusted image.
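Claim 10's final step smooths the per-azimuth mean ratio with a fixed-length Gaussian kernel and applies the result as a row-wise gain. A sketch under assumed parameters (`length`, the "preset length", and `sigma` are illustrative values, not taken from the patent):

```python
import numpy as np

def apply_ratio_gain(adjusted: np.ndarray,
                     ref_means: np.ndarray,
                     adj_means: np.ndarray,
                     length: int = 7, sigma: float = 1.5) -> np.ndarray:
    """Smooth the per-azimuth mean ratio with a Gaussian kernel of preset
    length and apply it as a row-wise gain (claim-10 style).

    ref_means / adj_means are the third / fourth gray value mean
    sequences of claim 9; azimuth is assumed along axis 0.
    """
    x = np.arange(length) - (length - 1) / 2
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()                             # normalized Gaussian kernel
    ratio = ref_means / adj_means                      # third / fourth mean ratio
    gain = np.convolve(ratio, kernel, mode="same")     # the 'operation result'
    return adjusted * gain[:, None]                    # per-azimuth-row gain
```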
11. An image processing apparatus characterized by comprising: the device comprises an acquisition unit, a correction unit and a splicing unit; wherein,
the acquiring unit is used for acquiring the gray value of each original pixel in each original image in at least two original images; each original pixel is a pixel contained in each original image;
the correction unit is used for correcting each original image under the action of a preset low-pass filtering model based on the gray value standard deviation of each azimuth direction in each original image to obtain at least two first corrected images; acquiring a mean value of gray values of each azimuth direction in each first correction image, and correcting each first correction image under the action of the preset low-pass filtering model based on the mean value of gray values of each azimuth direction to obtain at least two target images;
and the splicing unit is used for splicing the at least two target images into a composite image to finish the image processing process.
12. An electronic device, comprising:
a memory for storing executable instructions;
a processor for implementing the method of any one of claims 1 to 10 when executing executable instructions stored in the memory.
13. A storage medium having stored thereon executable instructions for causing a processor to perform the method of any one of claims 1 to 10 when executed.
CN202010384687.9A 2020-05-08 2020-05-08 Image processing method and device, electronic equipment and storage medium Active CN111738929B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010384687.9A CN111738929B (en) 2020-05-08 2020-05-08 Image processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111738929A true CN111738929A (en) 2020-10-02
CN111738929B CN111738929B (en) 2022-08-30

Family

ID=72646983

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010384687.9A Active CN111738929B (en) 2020-05-08 2020-05-08 Image processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111738929B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6618510B1 (en) * 1999-02-05 2003-09-09 Nec Corporation Method and apparatus for processing image data
CN104715255A (en) * 2015-04-01 2015-06-17 电子科技大学 Landslide information extraction method based on SAR (Synthetic Aperture Radar) images
CN106097249A (en) * 2016-06-21 2016-11-09 中国科学院电子学研究所 A kind of diameter radar image anastomosing and splicing method and device
CN108564532A (en) * 2018-03-30 2018-09-21 合肥工业大学 Large scale distance satellite-borne SAR image method for embedding


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CUI Aixin et al.: "FPGA-Based Spaceborne SAR Imaging Signal Processing Technology", Modern Radar *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112819729A (en) * 2021-02-23 2021-05-18 中国科学院空天信息创新研究院 Image correction method and device, computer storage medium and equipment
CN114125431A (en) * 2021-11-22 2022-03-01 北京市遥感信息研究所 Non-uniformity calibration correction method for static track optical large-area array camera
CN114125431B (en) * 2021-11-22 2023-06-23 北京市遥感信息研究所 Non-uniformity calibration correction method for stationary track optical large area array camera
CN115937050A (en) * 2023-03-02 2023-04-07 图兮数字科技(北京)有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN115937050B (en) * 2023-03-02 2023-10-13 图兮数字科技(北京)有限公司 Image processing method, device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN111738929B (en) 2022-08-30

Similar Documents

Publication Publication Date Title
CN111738929B (en) Image processing method and device, electronic equipment and storage medium
Filipponi Sentinel-1 GRD preprocessing workflow
CA2877547C (en) System and method for residual analysis of images
US20080018524A1 (en) System and method for estimating airborne radar antenna pointing errors
CN111768337B (en) Image processing method and device and electronic equipment
CN107958442A (en) Gray correction method and device in several Microscopic Image Mosaicings
KR20170014167A (en) Method and Apparatus for Correcting Ionospheric Distortion based on multiple aperture interferometry
AU2019311751B2 (en) Image turbulence correction using tile approach
CN117975287B (en) Key parameter analysis method for early identification of landslide hazard InSAR
CN109272465B (en) Aviation image color consistency processing algorithm
KR100870894B1 (en) Method of automatic geometric correction for linear pushbroom image
Bezvesilniy et al. Synthetic aperture radar systems for small aircrafts: Data processing approaches
Walker Flux Density Calibration on the VLBA
CN109752697B (en) Method for measuring relative radiation performance of large-scanning-angle sliding spotlight SAR (synthetic aperture radar) satellite system
Kirk et al. Comparison of digital terrain models from two photoclinometry methods
CN116385898A (en) Satellite image processing method and system
CN113469899B (en) Optical remote sensing satellite relative radiation correction method based on radiation energy reconstruction
CN112964229B (en) Satellite-ground combined observation determination method for target day area coverage
CN112750077B (en) Parallelized synthetic aperture radar image sub-band splicing effect processing method
JP2022185296A (en) Positioning method of satellite image
CN110910436B (en) Distance measuring method, device, equipment and medium based on image information enhancement technology
Roncella et al. A monte carlo simulation study on the dome effect
US20160267054A1 (en) Method for determining optimum reference data number for smoothing measured data and method for correcting measured data
CN108983230B (en) Ionosphere chromatography construction method based on SAR (synthetic aperture radar) azimuth offset
CN113076515A (en) Method for evaluating performance of ground surface layer adaptive optical system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant