CN111899200A - Infrared image enhancement method based on 3D filtering - Google Patents

Infrared image enhancement method based on 3D filtering

Info

Publication number
CN111899200A
CN111899200A (application CN202010794350.5A)
Authority
CN
China
Legal status (assumed; not a legal conclusion)
Granted
Application number
CN202010794350.5A
Other languages
Chinese (zh)
Other versions
CN111899200B (en)
Inventor
He Ming (贺明)
Current Assignee (the listed assignees may be inaccurate)
Teemsun Beijing Technology Co ltd
Original Assignee
Teemsun Beijing Technology Co ltd
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by Teemsun Beijing Technology Co ltd filed Critical Teemsun Beijing Technology Co ltd
Priority to CN202010794350.5A
Publication of CN111899200A
Application granted
Publication of CN111899200B
Current legal status: Active

Classifications

    • G06T5/70
    • G06T5/00 Image enhancement or restoration
        • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T7/00 Image analysis
        • G06T7/20 Analysis of motion
            • G06T7/207 Analysis of motion for motion estimation over a hierarchy of resolutions
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
        • G06T2207/10 Image acquisition modality
            • G06T2207/10048 Infrared image

    (Section hierarchy: G PHYSICS; G06 COMPUTING, CALCULATING OR COUNTING; G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL.)

Abstract

The invention discloses an infrared image enhancement method based on 3D filtering. First, an improved guided filter performs fast guided filtering on the infrared image to obtain a base image and a detail image. Optical-flow motion estimation is then performed on local blocks of the detail image to obtain its motion vectors, which are used to align consecutive frames. The detail image is filtered over three dimensions (spatial, grayscale, and temporal similarity), and the filtered detail image is adaptively weighted and fused with the base image to obtain the final enhanced infrared image.

Description

Infrared image enhancement method based on 3D filtering
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to an infrared image enhancement method based on 3D filtering.
Background
With the continuous development of science and technology, infrared imaging, which combines infrared and imaging technologies, is ever more widely applied and is already used in many fields such as security monitoring, military target detection and tracking, and medical treatment. In practice, measurement of an object's temperature is easily disturbed by heat transfer, thermal radiation, and atmospheric attenuation, which results in images with low contrast and unclear texture detail. In particular, the low contrast between target and background makes the two difficult to distinguish in an infrared image and greatly hinders target recognition and tracking. Studying infrared image enhancement algorithms is therefore very important.
Traditional image enhancement algorithms are divided into spatial-domain and frequency-domain enhancement. Spatial-domain enhancement processes pixel gray values directly; the main methods include gray-level stretching, histogram equalization, and unsharp masking. Frequency-domain enhancement first transforms the image to the frequency domain and then applies a frequency-domain filter to achieve the enhancement. Neither simple spatial-domain nor frequency-domain enhancement alone can satisfy current system requirements of both suppressing noise and enhancing detail.
Disclosure of Invention
In view of the above shortcomings of the prior art, the infrared image enhancement method based on 3D filtering provided by the invention solves the problem that existing infrared image enhancement methods struggle to suppress noise and enhance detail at the same time.
To achieve this purpose, the invention adopts the following technical scheme. An infrared image enhancement method based on 3D filtering comprises the following steps:
s1, acquiring an original infrared image, and dividing the original infrared image into a basic image and a detail image through a guiding filter;
s2, carrying out motion estimation on local image blocks in the detail image by adopting a local optical flow method to obtain motion vectors of the detail image;
s3, carrying out 3D filtering on the detail image based on the motion vector to obtain a 3D filtered detail image;
S4, carrying out adaptive weighted synthesis on the 3D filtered detail image and the base image to obtain an enhanced infrared image.
Further, the formula of the guided filter in step S1 is:

$$q_u = a_k I_u + b_k, \quad \forall u \in w_k$$

where $q_u$ is the output image, $I_f$ is the guide image, $w_k$ is a pixel-block window, the subscript $u$ denotes a pixel, and $a_k$ and $b_k$ are the window coefficients of the pixel-block window, with

$$a_k = \frac{\sigma_k^2}{\sigma_k^2 + \varepsilon}, \qquad b_k = (1 - a_k)\,\mu_k$$

where $\sigma_k^2$ is the variance of the guide image in the window, $\varepsilon$ is the linear-regression coefficient, and $\mu_k$ is the mean of the image to be smoothed in the window.
Further, the step S1 is specifically:
s11, taking the original infrared image as a guide image in a guide filter;
s12, respectively calculating a guide filtering window and a coefficient window of a guide filter based on the guide image to obtain a corresponding basic image;
S13, subtracting the base image from the original infrared image to obtain the corresponding detail image.
Further, the size of the guided filtering window is 8 × 8, and the size of the coefficient window is 4 × 4.
Further, step S2 is specifically:
S21, determining the optical-flow calculation model for motion estimation in a two-dimensional plane:

$$I_x(u) V_x + I_y(u) V_y + I_t(u) = 0, \quad u = 1, 2, \ldots, n$$

where $I_x(u)$ and $I_y(u)$ are the spatial-dimension information of pixel $u$ of the detail image, $I_t(u)$ is the time-dimension information of pixel $u$ of the detail image, and $V_x$ and $V_y$ are the components of the motion vector $(V_x, V_y)$ in the horizontal and vertical directions;
S22, solving the optical-flow calculation model by the weighted least-squares method, giving the motion vector $(V_x, V_y)$ as:

$$\begin{pmatrix} V_x \\ V_y \end{pmatrix} = \left(A^T W A\right)^{-1} A^T W \mathbf{b}$$

where the rows of $A$ are $\left(I_x(u), I_y(u)\right)$, $\mathbf{b} = -\left(I_t(1), \ldots, I_t(n)\right)^T$, and $W = \mathrm{diag}(\omega)$ with weight coefficient $\omega$;
S23, substituting the set intermediate calculation parameters into the expression for $(V_x, V_y)$, obtaining:

$$V_x = \frac{AAxy \cdot AByt - AAyy \cdot ABxt}{AAxx \cdot AAyy - AAxy^2}, \qquad V_y = \frac{AAxy \cdot ABxt - AAxx \cdot AByt}{AAxx \cdot AAyy - AAxy^2}$$

where $AAxx$, $AAyy$, $AAxy$, $ABxt$, and $AByt$ are the set intermediate calculation parameters, with

$$AAxx = \sum_i \omega I_x(i)^2, \quad AAyy = \sum_i \omega I_y(i)^2, \quad AAxy = \sum_i \omega I_x(i) I_y(i), \quad ABxt = \sum_i \omega I_x(i) I_t(i), \quad AByt = \sum_i \omega I_y(i) I_t(i).$$
Further, step S3 is specifically:
S31, determining a 3 × 3 local window image around each pixel in the detail image of the current frame;
S32, using the motion vector to determine the corresponding local window images in the two frames of detail images preceding the current frame;
S33, taking the center of each 3 × 3 local window image as the center point, performing 3D filtering on the 3 × 3 local window images over the spatial similarity, grayscale similarity, and temporal similarity dimensions to obtain the 3D-filtered detail image.
Further, the expression of the 3D-filtered detail image is:

$$h(y) = \frac{\sum_{(i,j) \in S} w_s(i,j)\, w_r(i,j)\, w_k(i,j)\, h_{ij}(y)}{\sum_{(i,j) \in S} w_s(i,j)\, w_r(i,j)\, w_k(i,j)}$$

where $h(y)$ is the gray value of the detail image after 3D filtering, $S$ is the nine-point neighborhood with center $(k, l)$, $h_{ij}(y)$ is the gray value of the $(i,j)$-th neighborhood element, $w_r(i,j)$ is the grayscale similarity factor of the $(i,j)$-th neighborhood element, $w_s(i,j)$ is its spatial proximity factor, $w_k(i,j)$ is the time-domain factor, $(i,j)$ is the label of a neighborhood element, and $i, j$ are its horizontal and vertical coordinates, with

$$w_s(i,j) = \exp\!\left(-\frac{(i-k)^2 + (j-l)^2}{2\sigma_s^2}\right), \quad w_r(i,j) = \exp\!\left(-\frac{\left(f(i,j) - f(k,l)\right)^2}{2\sigma_r^2}\right), \quad w_k(i,j) = \exp\!\left(-\frac{\left(f_t(i,j) - f_{t-1}(i,j)\right)^2}{2\sigma_t^2}\right)$$

where $\sigma_s^2$ is the spatial-position variance, $f(i,j)$ is the original infrared image, $\sigma_r^2$ is the gray-value variance, $f(k,l)$ is the gray value of the pixel at the center point of the current-frame image, $\sigma_t^2$ is the time variance, and $f_t$ and $f_{t-1}$ denote the current and motion-compensated previous frames.
Further, in step S4, the enhanced infrared image $f_{out}(i,j)$ is:

$$f_{out}(i,j) = LP[f(i,j)] + \alpha \cdot h(y)$$

where $\alpha$ is a weighting factor, $LP[f(i,j)]$ is the base image after guided filtering, and $h(y)$ is the detail image after 3D filtering.
The invention has the following beneficial effects:
The method first applies an improved guided filter to perform fast guided filtering on the infrared image, obtaining a base image and a detail image; it then performs optical-flow motion estimation on local blocks of the detail image to obtain the detail image's motion vectors, uses those vectors to align consecutive frames, filters the detail image over the spatial, grayscale, and temporal dimensions, and adaptively weights and fuses the filtered detail image with the base image to obtain the final enhanced infrared image.
Drawings
Fig. 1 is a flowchart of an infrared image enhancement method based on 3D filtering according to the present invention.
Fig. 2 is a comparison diagram of the first infrared image enhancement effect provided by the present invention.
Fig. 3 is a comparison diagram of the second infrared image enhancement effect provided by the present invention.
Fig. 4 is a comparison diagram of the third infrared image enhancement effect provided by the present invention.
Detailed Description
The following description of the embodiments of the present invention is provided to help those skilled in the art understand the invention, but it should be understood that the invention is not limited to the scope of these embodiments. For those skilled in the art, as long as various changes fall within the spirit and scope of the invention as defined by the appended claims, all inventions and creations that make use of the inventive concept are protected.
Example 1:
as shown in fig. 1, an infrared image enhancement method based on 3D filtering includes the following steps:
s1, acquiring an original infrared image, and dividing the original infrared image into a basic image and a detail image through a guiding filter;
s2, carrying out motion estimation on local image blocks in the detail image by adopting a local optical flow method to obtain motion vectors of the detail image;
s3, carrying out 3D filtering on the detail image based on the motion vector to obtain a 3D filtered detail image;
S4, carrying out adaptive weighted synthesis on the 3D filtered detail image and the base image to obtain an enhanced infrared image.
In step S1, the guided filter, like the bilateral filter, performs edge-preserving smoothing filtering. Its formula is:

$$q_u = a_k I_u + b_k, \quad \forall u \in w_k$$

where $q_u$ is the output image, $I_f$ is the guide image, $w_k$ is a pixel-block window, the subscript $u$ denotes a pixel, and $a_k$ and $b_k$ are the window coefficients of the pixel-block window. Determining $a_k$ and $b_k$ according to the minimum mean-square-error criterion gives:

$$a_k = \frac{\frac{1}{|w|}\sum_{u \in w_k} I_u P_u - \mu_k \bar{P}_k}{\sigma_k^2 + \varepsilon}$$

$$b_k = \bar{P}_k - a_k \mu_k$$

where $\mu_k$ is the mean of the guide image in the window, $\sigma_k^2$ is the variance of the guide image in the window, $|w|$ is the number of pixels in the window, $P_u$ denotes the pixels of the image to be smoothed, $\bar{P}_k$ is the mean of the image to be smoothed in the window, and the linear-regression coefficient $\varepsilon$ determines the smoothing degree of the filter.
In this embodiment the guided filter is improved: a large window is used as the guided-filtering window and a small window as the coefficient window, which greatly increases the computation speed and the efficiency of the algorithm. Since the guide image is the input image itself, $a_k$ and $b_k$ simplify to:

$$a_k = \frac{\sigma_k^2}{\sigma_k^2 + \varepsilon}, \qquad b_k = (1 - a_k)\,\mu_k$$
based on the guiding filter, the method for guiding and filtering the original infrared image in step S1 specifically includes:
s11, taking the original infrared image as a guide image in a guide filter;
s12, respectively calculating a guide filtering window and a coefficient window of a guide filter based on the guide image to obtain a corresponding basic image;
S13, subtracting the base image from the original infrared image to obtain the corresponding detail image.
Specifically, in this embodiment the size of the guided filtering window is 8 × 8 and the size of the coefficient window is 4 × 4; using a coefficient window smaller than the filtering window reduces the amount of calculation.
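By way of illustration only, the two-window decomposition of step S1 can be sketched in NumPy as follows; the function names and the regularization value `eps` are assumptions for the sketch, not values fixed by the text, and the guide image is taken to be the input image itself as described above:

```python
import numpy as np

def box_mean(img, r):
    """r x r sliding-window mean via an integral image (edge-padded)."""
    h, w = img.shape
    pad = r // 2
    p = np.pad(img.astype(np.float64),
               ((pad, r - 1 - pad), (pad, r - 1 - pad)), mode="edge")
    ii = np.zeros((h + r, w + r))
    ii[1:, 1:] = p.cumsum(0).cumsum(1)
    # sum over each r x r window, then normalize to a mean
    return (ii[r:, r:] - ii[:-r, r:] - ii[r:, :-r] + ii[:-r, :-r]) / (r * r)

def guided_decompose(img, r_filter=8, r_coef=4, eps=100.0):
    """Split a frame into base + detail with a self-guided filter:
    window statistics use the large r_filter window, while the
    coefficients a_k, b_k are averaged over the smaller r_coef window."""
    I = img.astype(np.float64)
    mu = box_mean(I, r_filter)
    var = box_mean(I * I, r_filter) - mu * mu
    a = var / (var + eps)        # a_k = sigma_k^2 / (sigma_k^2 + eps)
    b = (1.0 - a) * mu           # b_k = (1 - a_k) * mu_k
    base = box_mean(a, r_coef) * I + box_mean(b, r_coef)
    detail = I - base
    return base, detail
```

On a constant image the window variance is zero, so `a` vanishes, the base equals the input, and the detail image is identically zero; edges with variance well above `eps` are passed through to the base image largely unsmoothed, which is the edge-preserving property used here.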
In step S2, an image sequence $g(\mathbf{x})$ can be represented with a three-dimensional column vector $\mathbf{x} = (x, y, t)^T$, where $x$ and $y$ are spatial components and $t$ is a temporal component. Under the brightness-constancy constraint, object motion in the space-time domain produces a brightness pattern with a definite direction; brightness constancy is assumed as:

$$I(x, y, t) = I'(x + dx, y + dy, t + dt)$$

where $I$ and $I'$ denote adjacent frames of the image and $dx$ and $dy$ denote the incremental displacement of a pixel in the $x$ and $y$ directions over time $dt$. Under the small-motion assumption, expanding the above formula in a Taylor series and omitting higher-order terms gives the optical-flow equation:

$$I_x V_x + I_y V_y + I_t = 0$$

where $V_x$ and $V_y$ are the components of the optical-flow vector in the horizontal and vertical directions, and $I_x$, $I_y$, and $I_t$ denote the spatial and temporal dimension information of the image, respectively.
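The Taylor step implicit above can be made explicit as a short derivation (the standard small-motion expansion, with $V_x = dx/dt$ and $V_y = dy/dt$):

```latex
I(x+dx,\, y+dy,\, t+dt) \approx I(x,y,t) + I_x\,dx + I_y\,dy + I_t\,dt
```

Substituting this into $I(x,y,t) = I'(x+dx, y+dy, t+dt)$ cancels the $I(x,y,t)$ terms; dividing the remainder $I_x\,dx + I_y\,dy + I_t\,dt = 0$ by $dt$ gives $I_x V_x + I_y V_y + I_t = 0$.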
In a three-dimensional world, if points belonging to the same object plane have the same velocity, points projected to the two-dimensional plane also have the same velocity.
Based on the above, in this embodiment, step S2 is specifically:
S21, determining the optical-flow calculation model for motion estimation in a two-dimensional plane:

$$I_x(u) V_x + I_y(u) V_y + I_t(u) = 0, \quad u = 1, 2, \ldots, n$$

where $I_x(u)$ and $I_y(u)$ are the spatial-dimension information of pixel $u$ of the detail image, $I_t(u)$ is the time-dimension information of pixel $u$ of the detail image, and $V_x$ and $V_y$ are the components of the motion vector $(V_x, V_y)$ in the horizontal and vertical directions;
S22, solving the optical-flow calculation model by the weighted least-squares method, giving the motion vector $(V_x, V_y)$ as:

$$\begin{pmatrix} V_x \\ V_y \end{pmatrix} = \left(A^T W A\right)^{-1} A^T W \mathbf{b}$$

where the rows of $A$ are $\left(I_x(u), I_y(u)\right)$, $\mathbf{b} = -\left(I_t(1), \ldots, I_t(n)\right)^T$, and $W = \mathrm{diag}(\omega)$ with weight coefficient $\omega$;
S23, substituting the set intermediate calculation parameters into the expression for $(V_x, V_y)$, obtaining:

$$V_x = \frac{AAxy \cdot AByt - AAyy \cdot ABxt}{AAxx \cdot AAyy - AAxy^2}, \qquad V_y = \frac{AAxy \cdot ABxt - AAxx \cdot AByt}{AAxx \cdot AAyy - AAxy^2}$$

where $AAxx$, $AAyy$, $AAxy$, $ABxt$, and $AByt$ are the set intermediate calculation parameters, with

$$AAxx = \sum_i \omega I_x(i)^2, \quad AAyy = \sum_i \omega I_y(i)^2, \quad AAxy = \sum_i \omega I_x(i) I_y(i), \quad ABxt = \sum_i \omega I_x(i) I_t(i), \quad AByt = \sum_i \omega I_y(i) I_t(i).$$
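A minimal numerical sketch of steps S21 to S23 (illustrative only; the derivative arrays `Ix`, `Iy`, `It` are assumed to be precomputed over an n-pixel block, e.g. by finite differences between frames):

```python
import numpy as np

def lk_motion_vector(Ix, Iy, It, w=None):
    """Weighted least-squares solution of Ix*Vx + Iy*Vy + It = 0,
    written with the same intermediate sums AAxx..AByt as the text."""
    if w is None:
        w = np.ones_like(Ix, dtype=np.float64)
    AAxx = np.sum(w * Ix * Ix)
    AAyy = np.sum(w * Iy * Iy)
    AAxy = np.sum(w * Ix * Iy)
    ABxt = np.sum(w * Ix * It)
    AByt = np.sum(w * Iy * It)
    det = AAxx * AAyy - AAxy ** 2
    if abs(det) < 1e-12:          # degenerate block (aperture problem)
        return 0.0, 0.0
    Vx = (AAxy * AByt - AAyy * ABxt) / det
    Vy = (AAxy * ABxt - AAxx * AByt) / det
    return Vx, Vy
```

With exact brightness-constancy data the true motion is recovered: if `It = -(Ix*Vx + Iy*Vy)` for some motion vector, the function returns that vector.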
the step S3 is specifically:
s31, determining a local window image of 3 x 3 around each pixel in the detail image of the current frame;
s32, determining local window images in the first two frames of detail images of the current frame of detail image by using the motion vector;
and S33, performing 3D filtering on the 3X 3 local window images by taking the 3X 3 local window image center as a center point and performing three dimensions of spatial similarity factor, gray scale similarity factor and time similarity factor to obtain a 3D filtered detail image.
Wherein, the expression of the detail image after 3D filtering is:
Figure BDA0002624988650000082
wherein h (y) is the gray value of the detail image after 3D filtering, S represents the nine neighborhood space with the central point (k, l), hij(y) is the gray value of the (i, j) th neighborhood detector, wr(i, j) is the gray level similarity factor of the (i, j) th neighborhood detecting element, which decreases with the increase of the gray level difference, ws(i, j) is the spatial proximity factor of the (i, j) th neighborhood probe element, which decreases with increasing Euclidean distance from the center point, wk(i, j) is a time domain factor, which decreases with the time domain gray scale difference, (i, j) is the label of the neighborhood detecting element, i, j is the horizontal and vertical coordinates of the neighborhood detecting element, wherein,
Figure BDA0002624988650000083
Figure BDA0002624988650000084
is the spatial position variance, f (i, j) is the original infrared image,
Figure BDA0002624988650000085
is the variance of gray value, f (k, l) is the gray value of the pixel at the center point of the current frame image,
Figure BDA0002624988650000086
is the time variance.
In the 3D filtering process, in flat image regions with little inter-frame motion, the gray differences within the neighborhood are small and the bilateral weights reduce to a Gaussian low-pass filter; near abrupt gray-level changes, the filter replaces the original value with the average gray level of the elements near the edge pixel whose gray values are similar. The three-dimension detail filter therefore smooths the image while preserving its edges.
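As an illustrative sketch of this three-factor weighting (the σ values and the function name are assumptions; the text does not fix them), one detail pixel can be filtered from its current and motion-compensated windows as follows:

```python
import numpy as np

def filter_pixel_3d(windows, sigma_s=1.0, sigma_r=10.0, sigma_t=10.0):
    """3D-filter one detail pixel. windows[0] is the current frame's 3x3
    patch; windows[1:] are motion-compensated 3x3 patches from earlier
    frames. Three weights per element: spatial distance to the center,
    gray difference to the center value, and time-domain gray difference
    to the corresponding current-frame element."""
    k = l = 1                                  # center of a 3x3 window
    center = windows[0][k, l]
    num = den = 0.0
    for patch in windows:
        for i in range(3):
            for j in range(3):
                ws = np.exp(-((i - k) ** 2 + (j - l) ** 2) / (2 * sigma_s ** 2))
                wr = np.exp(-(patch[i, j] - center) ** 2 / (2 * sigma_r ** 2))
                wk = np.exp(-(patch[i, j] - windows[0][i, j]) ** 2 / (2 * sigma_t ** 2))
                num += ws * wr * wk * patch[i, j]
                den += ws * wr * wk
    return num / den
```

In a flat region every gray difference vanishes, so `wr` and `wk` become 1 and the filter reduces to a spatial Gaussian average, matching the behavior described above; the normalized output always stays within the gray range of the contributing windows.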
Based on the above process, the enhanced infrared image $f_{out}(i,j)$ obtained in step S4 is:

$$f_{out}(i,j) = LP[f(i,j)] + \alpha \cdot h(y)$$

where $\alpha$ is a weighting factor, $LP[f(i,j)]$ is the base image after guided filtering, and $h(y)$ is the detail image after 3D filtering.
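The recombination of step S4 can be sketched directly; the fixed gain `alpha` and the 8-bit clipping range are illustrative assumptions, since the adaptive choice of the weighting factor is not detailed here:

```python
import numpy as np

def fuse(base, detail_filtered, alpha=2.0):
    """f_out = LP[f] + alpha * h: add the amplified, denoised detail
    back onto the guided-filtered base image, clipped to 8-bit range."""
    out = base + alpha * detail_filtered
    return np.clip(out, 0.0, 255.0)
```

A gain above 1 boosts the denoised detail relative to the base image, which is what raises the perceived contrast of fine structure in the enhanced result.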
Example 2:
the infrared image is enhanced by the method of the invention to obtain the effect contrast images (a is the original infrared image, and b is the enhanced infrared image) of fig. 2-4, and the images enhanced by the algorithm can be seen from the images, so that the noise can be effectively filtered, the image contrast is improved, and the image details are greatly improved.

Claims (8)

1. An infrared image enhancement method based on 3D filtering is characterized by comprising the following steps:
s1, acquiring an original infrared image, and dividing the original infrared image into a basic image and a detail image through a guiding filter;
s2, carrying out motion estimation on local image blocks in the detail image by adopting a local optical flow method to obtain motion vectors of the detail image;
s3, carrying out 3D filtering on the detail image based on the motion vector to obtain a 3D filtered detail image;
S4, carrying out adaptive weighted synthesis on the 3D filtered detail image and the base image to obtain an enhanced infrared image.
2. The infrared image enhancement method based on 3D filtering as claimed in claim 1, wherein the formula of the guided filter in step S1 is:

$$q_u = a_k I_u + b_k, \quad \forall u \in w_k$$

where $q_u$ is the output image, $I_f$ is the guide image, $w_k$ is a pixel-block window, the subscript $u$ denotes a pixel, and $a_k$ and $b_k$ are the window coefficients of the pixel-block window, with

$$a_k = \frac{\sigma_k^2}{\sigma_k^2 + \varepsilon}, \qquad b_k = (1 - a_k)\,\mu_k$$

where $\sigma_k^2$ is the variance of the guide image in the window, $\varepsilon$ is the linear-regression coefficient, and $\mu_k$ is the mean of the image to be smoothed in the window.
3. The infrared image enhancement method based on 3D filtering according to claim 2, wherein the step S1 specifically includes:
s11, taking the original infrared image as a guide image in a guide filter;
s12, respectively calculating a guide filtering window and a coefficient window of a guide filter based on the guide image to obtain a corresponding basic image;
S13, subtracting the base image from the original infrared image to obtain the corresponding detail image.
4. The method of claim 3, wherein the size of the guiding filter window is 8 x 8 and the size of the coefficient window is 4 x 4.
5. The infrared image enhancement method based on 3D filtering according to claim 1, wherein the step S2 specifically includes:
s21, determining an optical flow calculation model for motion estimation in a two-dimensional plane:
$$I_x(u) V_x + I_y(u) V_y + I_t(u) = 0, \quad u = 1, 2, \ldots, n$$

where $I_x(u)$ and $I_y(u)$ are the spatial-dimension information of pixel $u$ of the detail image, $I_t(u)$ is the time-dimension information of pixel $u$ of the detail image, and $V_x$ and $V_y$ are the components of the motion vector $(V_x, V_y)$ in the horizontal and vertical directions;
S22, solving the optical-flow calculation model by the weighted least-squares method, giving the motion vector $(V_x, V_y)$ as:

$$\begin{pmatrix} V_x \\ V_y \end{pmatrix} = \left(A^T W A\right)^{-1} A^T W \mathbf{b}$$

where the rows of $A$ are $\left(I_x(u), I_y(u)\right)$, $\mathbf{b} = -\left(I_t(1), \ldots, I_t(n)\right)^T$, and $W = \mathrm{diag}(\omega)$ with weight coefficient $\omega$;
S23, substituting the set intermediate calculation parameters into the expression for $(V_x, V_y)$, obtaining:

$$V_x = \frac{AAxy \cdot AByt - AAyy \cdot ABxt}{AAxx \cdot AAyy - AAxy^2}, \qquad V_y = \frac{AAxy \cdot ABxt - AAxx \cdot AByt}{AAxx \cdot AAyy - AAxy^2}$$

where $AAxx$, $AAyy$, $AAxy$, $ABxt$, and $AByt$ are the set intermediate calculation parameters, with

$$AAxx = \sum_i \omega I_x(i)^2, \quad AAyy = \sum_i \omega I_y(i)^2, \quad AAxy = \sum_i \omega I_x(i) I_y(i), \quad ABxt = \sum_i \omega I_x(i) I_t(i), \quad AByt = \sum_i \omega I_y(i) I_t(i).$$
6. the infrared image enhancement method based on 3D filtering according to claim 5, wherein the step S3 specifically includes:
s31, determining a local window image of 3 x 3 around each pixel in the detail image of the current frame;
s32, determining local window images in the first two frames of detail images of the current frame of detail image by using the motion vector;
S33, taking the center of each 3 × 3 local window image as the center point, performing 3D filtering on the 3 × 3 local window images over the spatial similarity, grayscale similarity, and temporal similarity dimensions to obtain the 3D-filtered detail image.
7. The infrared image enhancement method based on 3D filtering as claimed in claim 6, characterized in that the expression of the detail image after 3D filtering is:
$$h(y) = \frac{\sum_{(i,j) \in S} w_s(i,j)\, w_r(i,j)\, w_k(i,j)\, h_{ij}(y)}{\sum_{(i,j) \in S} w_s(i,j)\, w_r(i,j)\, w_k(i,j)}$$

where $h(y)$ is the gray value of the detail image after 3D filtering, $S$ is the nine-point neighborhood with center $(k, l)$, $h_{ij}(y)$ is the gray value of the $(i,j)$-th neighborhood element, $w_r(i,j)$ is the grayscale similarity factor of the $(i,j)$-th neighborhood element, $w_s(i,j)$ is its spatial proximity factor, $w_k(i,j)$ is the time-domain factor, $(i,j)$ is the label of a neighborhood element, and $i, j$ are its horizontal and vertical coordinates, with

$$w_s(i,j) = \exp\!\left(-\frac{(i-k)^2 + (j-l)^2}{2\sigma_s^2}\right), \quad w_r(i,j) = \exp\!\left(-\frac{\left(f(i,j) - f(k,l)\right)^2}{2\sigma_r^2}\right), \quad w_k(i,j) = \exp\!\left(-\frac{\left(f_t(i,j) - f_{t-1}(i,j)\right)^2}{2\sigma_t^2}\right)$$

where $\sigma_s^2$ is the spatial-position variance, $f(i,j)$ is the original infrared image, $\sigma_r^2$ is the gray-value variance, $f(k,l)$ is the gray value of the pixel at the center point of the current-frame image, $\sigma_t^2$ is the time variance, and $f_t$ and $f_{t-1}$ denote the current and motion-compensated previous frames.
8. The infrared image enhancement method based on 3D filtering as claimed in claim 1, wherein in step S4, the enhanced infrared image $f_{out}(i,j)$ is:

$$f_{out}(i,j) = LP[f(i,j)] + \alpha \cdot h(y)$$

where $\alpha$ is a weighting factor, $LP[f(i,j)]$ is the base image after guided filtering, and $h(y)$ is the detail image after 3D filtering.
CN202010794350.5A 2020-08-10 2020-08-10 Infrared image enhancement method based on 3D filtering Active CN111899200B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010794350.5A CN111899200B (en) 2020-08-10 2020-08-10 Infrared image enhancement method based on 3D filtering


Publications (2)

Publication Number Publication Date
CN111899200A 2020-11-06
CN111899200B 2021-06-22

Family

ID=73246713

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010794350.5A Active CN111899200B (en) 2020-08-10 2020-08-10 Infrared image enhancement method based on 3D filtering

Country Status (1)

Country Link
CN (1) CN111899200B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113822352A (en) * 2021-09-15 2021-12-21 中北大学 Infrared dim target detection method based on multi-feature fusion

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015172235A1 (en) * 2014-05-15 2015-11-19 Tandemlaunch Technologies Inc. Time-space methods and systems for the reduction of video noise


Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Christian Richardt et al.: "Coherent Spatiotemporal Filtering, Upsampling and Rendering of RGBZ Videos", Eurographics
V. V. Titkov et al.: "Application of Lucas-Kanade algorithm with weight coefficient bilateral filtration for the digital image correlation method", IOP Conf. Series: Materials Science and Engineering
Zhang Jinlin et al.: "Moving target detection based on an improved Lucas-Kanade optical flow estimation model", Microcomputer Information
Li Miaomiao: "Research on key technologies of digital detail enhancement for infrared images", China Master's Theses Full-text Database (Information Science and Technology)
Su Zelin et al.: "Infrared small target detection based on 3D Max-median and 3D Max-mean filtering", Video Engineering
Xie Jianbin et al.: "Visual Perception and Intelligent Video Surveillance", 31 March 2012


Also Published As

Publication number Publication date
CN111899200B (en) 2021-06-22


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
    Address after: 100094 room 901, 9/F, building 4, zone 1, 81 Beiqing Road, Haidian District, Beijing
    Applicant after: Guoke Tiancheng Technology Co.,Ltd.
    Address before: 100094 room 901, 9/F, building 4, zone 1, 81 Beiqing Road, Haidian District, Beijing
    Applicant before: TEEMSUN (BEIJING) TECHNOLOGY Co.,Ltd.
GR01 Patent grant