CN111223060A - Image processing method based on self-adaptive PLIP model - Google Patents


Info

Publication number
CN111223060A
CN111223060A
Authority
CN
China
Prior art keywords: image, PLIP, graph, diagram, image block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010007691.3A
Other languages
Chinese (zh)
Other versions
CN111223060B (en)
Inventor
王俊平
张宏杰
Current Assignee
Xidian University
Original Assignee
Xidian University
Priority date
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN202010007691.3A priority Critical patent/CN111223060B/en
Publication of CN111223060A publication Critical patent/CN111223060A/en
Application granted granted Critical
Publication of CN111223060B publication Critical patent/CN111223060B/en
Legal status: Active (granted)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20004: Adaptive image processing
    • G06T2207/20012: Locally adaptive
    • G06T2207/20016: Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • G06T2207/20024: Filtering details
    • G06T2207/20212: Image combination
    • G06T2207/20221: Image fusion; Image merging

Abstract

The invention discloses an image processing method based on a self-adaptive PLIP model. First, the luminance component map of the image to be processed is obtained and subjected to adaptive PLIP processing and detail processing to produce a detail-enhanced map; the luminance component map is also passed through a guided image filter to obtain a filtered map; the detail-enhanced map, the filtered map, and the luminance component map are then fused with an exposure fusion method to obtain a fused map; finally, the fused map is converted back to the RGB color space to obtain the result map. The invention can perform noise suppression, detail enhancement, edge preservation, and other processing on natural images, and has the advantages of high detail accuracy and uniform brightness in the result map.

Description

Image processing method based on self-adaptive PLIP model
Technical Field
The invention belongs to the technical field of image processing, and further relates to an image enhancement method based on an adaptive Parameterized Logarithmic Image Processing (PLIP) model in the field of natural image processing. The invention can perform noise suppression, detail enhancement, edge preservation, and other processing on low-illumination images.
Background
Low-illumination images captured in remote sensing, military, industrial, and medical applications suffer from heavy noise, blurred overall detail, and low average brightness. The PLIP model avoids the classical problem of pixel values leaving the valid gray-value interval under conventional arithmetic, but it cannot adaptively compute its transformation parameters according to the characteristics of different image regions, which causes loss of detail. Research on an adaptive PLIP model is therefore well suited to digital image processing.
Nanjing University of Science and Technology, in its patent "Image enhancement method based on improved Retinex algorithm" (filing date: 30 July 2019; application number: 201910691758.7; publication number: 110473152A), proposes an image enhancement method that estimates the luminance image with guided filtering instead of Gaussian filtering. The method applies a Sobel edge detector to the input image to obtain weight factors for its multi-scale guided-filtering image, and produces the result image with the Retinex method. For a color image, it converts the image from the red-green-blue (RGB) color space to the luminance-chrominance YUV color space by a conversion formula, enhances it with the Retinex method, and converts it back to RGB for display. The method has the following drawbacks: the RGB-to-YUV conversion involves additions and subtractions, and since the pixel gray-value interval is [0, 255], these operations can produce negative values or values above 255, pushing gray values out of range; the Retinex processing treats regions with different characteristics identically, and because weight factors are not computed adaptively from the characteristics of each region, image detail is lost; and the method cannot enhance image detail while simultaneously maintaining the overall structure and correcting brightness.
The University of Electronic Science and Technology of China, in its patent "An image enhancement method based on Retinex algorithm and guided filtering" (filing date: 26 March 2019; application number: 201910231635.5; publication number: 109978789A), proposes an image enhancement method combining an improved Retinex algorithm with guided filtering. The method preprocesses the image with a nonlinear transformation before applying the Multi-Scale Retinex with Color Restoration (MSRCR) algorithm to obtain an MSRCR-enhanced map; it simultaneously converts the original image to the HSV color space, applies the Multi-Scale Retinex (MSR) algorithm to the value (V) component and adaptively adjusts the hue (H) and saturation (S) components to obtain an HSV-space enhanced map; it subtracts the guided-filtered image from the original image to extract an edge-information map; it averages the edge-information map, the HSV-space enhanced map, and the MSRCR-enhanced map into a pre-output image; and it applies guided filtering to the pre-output image once more to obtain the output image. The method has the following drawback: the average-fusion step ignores the differing brightness of individual pixels, which leaves the fused result with uneven brightness.
Disclosure of Invention
The invention aims to provide an image processing method based on an adaptive PLIP model that overcomes the detail loss and uneven result brightness of prior-art natural-image enhancement.
The idea of the invention is as follows: sequentially apply adaptive PLIP processing and detail processing to the luminance component map of the image to be processed, guided by the information entropy of its different regions; preserve the image structure with a guided image filter; combine the strengths of the different processed maps with an exposure fusion method; and convert the result map from HSV space to RGB space for convenient display.
The method comprises the following specific steps:
(1) obtaining the luminance component map of the image to be processed:
(1a) inputting the natural image to be processed; if it is a color image, converting it from the RGB color space to the HSV color space, extracting the value component to obtain the luminance component map of the image to be processed, and evenly partitioning the luminance component map into blocks;
(1b) calculating the information entropy of each image block with an information-entropy formula, and taking the mean of the entropies of all blocks in the luminance component map as the information entropy of the luminance component map;
(2) performing adaptive PLIP processing on the luminance component map:
(2a) calculating the transformation parameter of the parameterized logarithmic image processing (PLIP) model for each image block with the following transformation-parameter formula:
[Equation shown as an image in the original: Figure BDA0002355921710000021]
where λ_i denotes the PLIP model transformation parameter of the i-th image block, α denotes an adjustment factor with value 1.5, E_i denotes the information entropy of the i-th image block, E_max denotes the maximum of the information entropies of all image blocks in the luminance component map, and E_min denotes the minimum of the information entropies of all image blocks in the luminance component map;
(2b) performing the PLIP forward transform on each image block with the forward-transform formula of the PLIP model to obtain the PLIP forward-transform map of each image block;
(3) performing detail processing on the luminance component map:
(3a) calculating the reflection component of each image block with the following reflection-component formula:
[Equation shown as an image in the original: Figure BDA0002355921710000031]
where R_i denotes the reflection component of the i-th image block, N denotes the total number of blocks into which the luminance component map is partitioned, Σ denotes summation, ln denotes the logarithm with base e, the symbol shown as an image [Figure BDA0002355921710000032] denotes the PLIP forward-transform map of the i-th image block, the operator between it and the kernels (rendered only in the image) denotes convolution, G_1 denotes a Gaussian kernel of scale 80, E denotes the information entropy of the luminance component map, G_2 denotes a Gaussian kernel of scale 30, and G_3 denotes a Gaussian kernel of scale 200;
(3b) performing the PLIP inverse transform on the reflection component of each image block with the inverse-transform formula of the PLIP model to obtain the PLIP inverse-transform map of each image block;
(3c) placing the PLIP inverse-transform map of each image block back at the block's position in the luminance component map before partitioning to obtain the detail-enhanced map;
(4) obtaining the filtered map of the image to be processed:
inputting the luminance component map into a guided image filter to obtain the filtered map;
(5) obtaining the fused map of the image to be processed with an exposure fusion method:
(5a) calculating the value of each pixel in the three fusion-weight maps with a Gaussian function formula;
(5b) constructing, according to the Gaussian pyramid rule, Gaussian pyramids of the detail-enhanced map, the filtered map, and the luminance component map, and a Gaussian pyramid of the fusion-weight map corresponding to each map;
(5c) multiplying the top level of the Gaussian pyramid of each of the detail-enhanced map, the filtered map, and the luminance component map by the top level of the Gaussian pyramid of its corresponding fusion-weight map to obtain the detail-enhanced weighted map, the filtered weighted map, and the luminance-component weighted map;
(5d) adding the detail-enhanced weighted map, the filtered weighted map, and the luminance-component weighted map to obtain a preliminary fused map, constructing the Laplacian pyramid of the preliminary fused map according to the Laplacian pyramid rule, and taking the top level of that Laplacian pyramid as the fused map;
(6) converting the fused map from HSV space to RGB space and outputting the result map.
Compared with the prior art, the invention has the following advantages:
First, because the invention applies the PLIP forward transform to each image block of the luminance component map during enhancement, it avoids the prior-art defect in which additions and subtractions on pixel gray values produce negative values or values above 255 and push gray values out of the valid interval. The invention therefore preserves the local detail information of the original natural image; the details of the processed image agree more closely with the original, improving the detail accuracy of the processed image.
Second, because the invention obtains the fused map of the image to be processed with an exposure fusion method, it avoids the prior-art defect in which average fusion of the various enhancement results ignores the differing brightness of individual pixels and yields uneven brightness. The invention fuses according to the brightness of each pixel of the natural image, so the processed image is more uniformly lit and its average brightness improves.
Third, because the invention computes the PLIP model transformation parameter of each image block of the luminance component map with the transformation-parameter formula, it avoids the prior-art defect in which regions with different characteristics are processed identically and weight factors are not adapted to each region, losing detail. The invention adapts its processing to the characteristics of the different regions of the natural image, enriching the detail of the processed image.
Drawings
FIG. 1 is a flow chart of the present invention;
Fig. 2 shows the simulation experiment results of the invention: Fig. 2(a) is the input image of the simulation experiment; Fig. 2(b) is the simulation result of the prior-art image enhancement method based on Retinex algorithm and guided filtering; Fig. 2(c) is the simulation result of the invention.
Detailed Description
The invention is further described below with reference to Fig. 1.
Step 1: obtain the luminance component map of the image to be processed.
Input the natural image to be processed; if it is a color image, convert it from the RGB color space to the HSV color space and extract the value component to obtain the luminance component map of the image to be processed, then evenly partition the luminance component map into blocks.
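As an illustration of the luminance extraction above: in HSV, the value component is simply the per-pixel maximum of the three RGB channels, so the luminance component map can be obtained without a full color-space conversion. The function name and the assumption of float RGB values in [0, 1] are illustrative, not from the patent.

```python
# Sketch of the luminance (HSV value) extraction in step 1. Assumes an
# (H, W, 3) float RGB array in [0, 1]; in HSV, V = max(R, G, B) per pixel.
import numpy as np

def value_channel(rgb: np.ndarray) -> np.ndarray:
    """HSV value (V) component of an (H, W, 3) RGB array."""
    return rgb.max(axis=2)
```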
Compute the information entropy of each block with the following information-entropy formula, and take the mean of the entropies of all blocks in the luminance component map as the information entropy of the luminance component map.
[Equation shown as an image in the original: Figure BDA0002355921710000051]
where k denotes the maximum gray value over all pixel points of each image block, j denotes the minimum gray value over all pixel points of each image block, log_2 denotes the base-2 logarithm, and p_m denotes the probability that a pixel of the image block has gray value m.
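The block partition and per-block entropy computation can be sketched as follows. The block grid size and the 8-bit gray range are illustrative assumptions, since the patent only requires an even partition.

```python
# Sketch of step 1: partition a luminance map evenly into blocks and compute
# each block's information entropy, plus their mean (the map's entropy).
import numpy as np

def block_entropy(block: np.ndarray) -> float:
    """Shannon entropy (bits) of an 8-bit gray-level block."""
    hist = np.bincount(block.ravel(), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]                      # sum over gray values that occur
    return float(-np.sum(p * np.log2(p)))

def partition_entropies(luma: np.ndarray, rows: int, cols: int):
    """Split `luma` evenly into rows x cols blocks; return per-block entropies
    and their mean (taken as the entropy of the whole luminance map)."""
    h, w = luma.shape
    bh, bw = h // rows, w // cols
    blocks = [luma[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
              for r in range(rows) for c in range(cols)]
    ents = np.array([block_entropy(b) for b in blocks])
    return ents, float(ents.mean())
```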
Step 2: apply adaptive PLIP processing to the luminance component map.
Compute the transformation parameter of the parameterized logarithmic image processing (PLIP) model for each image block with the following transformation-parameter formula.
[Equation shown as an image in the original: Figure BDA0002355921710000052]
where λ_i denotes the PLIP model transformation parameter of the i-th image block, α denotes an adjustment factor with value 1.5, E_i denotes the information entropy of the i-th image block, E_max denotes the maximum of the information entropies of all image blocks in the luminance component map, and E_min denotes the minimum of the information entropies of all image blocks in the luminance component map.
Perform the PLIP forward transform on each image block with the following forward-transform formula of the PLIP model to obtain the PLIP forward-transform map of each image block.
[Equation shown as an image in the original: Figure BDA0002355921710000053]
where β denotes a log factor with value 1.2, and the symbol shown as an image [Figure BDA0002355921710000054] denotes the i-th image block of the luminance component map.
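The forward and inverse PLIP transforms here and in step 3 are shown only as images in the source, so the sketch below uses the common parameterized logarithmic isomorphic transform from the PLIP literature, with the block transformation parameter lam (λ_i) and the log factor beta = 1.2 named in the text; the exact parameterization of the patent may differ.

```python
# Hedged sketch of the PLIP forward/inverse pair (steps 2b and 3b), following
# the common PLIP isomorphic transform; the patent's exact formulas are
# images, so this form is an assumption.
import numpy as np

BETA = 1.2  # log factor named in the text

def plip_forward(g: np.ndarray, lam: float, beta: float = BETA) -> np.ndarray:
    """phi(g) = -lam * beta * ln(1 - g / lam), defined for 0 <= g < lam."""
    return -lam * beta * np.log(1.0 - g / lam)

def plip_inverse(p: np.ndarray, lam: float, beta: float = BETA) -> np.ndarray:
    """phi^{-1}(p) = lam * (1 - exp(-p / (lam * beta))); exact inverse of above."""
    return lam * (1.0 - np.exp(-p / (lam * beta)))
```

By construction, plip_inverse(plip_forward(g)) = g for 0 ≤ g < lam, which is what lets step 3 process the reflection component in the transformed domain and return without loss.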
Step 3: apply detail processing to the luminance component map.
Compute the reflection component of each image block with the following reflection-component formula.
[Equation shown as an image in the original: Figure BDA0002355921710000055]
where R_i denotes the reflection component of the i-th image block, N denotes the total number of blocks into which the luminance component map is partitioned, Σ denotes summation, ln denotes the logarithm with base e, the symbol shown as an image [Figure BDA0002355921710000061] denotes the PLIP forward-transform map of the i-th image block, the operator between it and the kernels (rendered only in the image) denotes convolution, G_1 denotes a Gaussian kernel of scale 80, E denotes the information entropy of the luminance component map, G_2 denotes a Gaussian kernel of scale 30, and G_3 denotes a Gaussian kernel of scale 200.
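The reflection-component formula combines the log of the PLIP forward-transform map with logs of its Gaussian-blurred versions, in the manner of multiscale Retinex. Since the exact weighting (and the role of the entropy E) is visible only in the equation image, the sketch below assumes equal weights across the scales and interprets "scale" as the Gaussian sigma.

```python
# Hedged sketch of step 3a: a multiscale-Retinex-style reflection component.
# Equal 1/len(scales) weighting and the epsilon guard are assumptions; the
# scales 30, 80, 200 correspond to the Gaussian kernels G2, G1, G3 in the
# text, with "scale" read as sigma.
import numpy as np
from scipy.ndimage import gaussian_filter

def reflection(f: np.ndarray, scales=(30.0, 80.0, 200.0),
               eps: float = 1e-6) -> np.ndarray:
    """Difference of logs between the map and its Gaussian-blurred surround."""
    f = f.astype(np.float64)
    acc = np.zeros_like(f)
    for s in scales:
        acc += np.log(f + eps) - np.log(gaussian_filter(f, s) + eps)
    return acc / len(scales)
```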
Perform the PLIP inverse transform on the reflection component of each image block with the following inverse-transform formula of the PLIP model to obtain the PLIP inverse-transform map of each image block.
[Equation shown as an image in the original: Figure BDA0002355921710000062]
where the symbol shown as an image [Figure BDA0002355921710000063] denotes the PLIP inverse-transform map of the i-th image block, exp denotes the exponential operation, and β denotes a log factor with value 1.2.
Place the PLIP inverse-transform map of each block back at the block's position in the luminance component map before partitioning to obtain the detail-enhanced map.
Step 4: obtain the filtered map of the image to be processed.
Input the luminance component map into a guided image filter to obtain the filtered map. The guided image filter used in one embodiment of the invention is the guided image filter in MATLAB R2018a.
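The patent uses MATLAB's built-in guided image filter; as a stand-in, the following is a minimal self-guided filter in the style of He et al., with assumed window radius r and regularizer eps. The luminance map serves as both guide and input, which preserves edges while smoothing flat regions.

```python
# Hedged sketch of step 4: a guided image filter (He et al. formulation).
# The radius r and regularizer eps are illustrative assumptions; the patent
# relies on MATLAB's built-in filter instead.
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I: np.ndarray, p: np.ndarray,
                  r: int = 8, eps: float = 1e-2) -> np.ndarray:
    """Edge-preserving smoothing of p guided by I (both float, roughly [0, 1])."""
    mean = lambda x: uniform_filter(x, size=2 * r + 1)  # box mean over window
    mI, mp = mean(I), mean(p)
    cov_Ip = mean(I * p) - mI * mp
    var_I = mean(I * I) - mI * mI
    a = cov_Ip / (var_I + eps)        # local linear coefficient
    b = mp - a * mI
    return mean(a) * I + mean(b)      # q = mean_a * I + mean_b
```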
Step 5: obtain the fused map of the image to be processed with an exposure fusion method.
Compute the value of each pixel in the three fusion-weight maps with the following Gaussian function formula.
[Equation shown as an image in the original: Figure BDA0002355921710000064]
where B_t denotes the fusion weight of the pixel point t, and C denotes the value of the pixel point at the same position as t in the result map corresponding to the fusion-weight map.
Construct, according to the Gaussian pyramid rule, Gaussian pyramids of the detail-enhanced map, the filtered map, and the luminance component map, and a Gaussian pyramid of the fusion-weight map corresponding to each map.
Multiply the top level of the Gaussian pyramid of each of the detail-enhanced map, the filtered map, and the luminance component map by the top level of the Gaussian pyramid of its corresponding fusion-weight map to obtain the detail-enhanced weighted map, the filtered weighted map, and the luminance-component weighted map.
Add the detail-enhanced weighted map, the filtered weighted map, and the luminance-component weighted map to obtain a preliminary fused map, construct the Laplacian pyramid of the preliminary fused map according to the Laplacian pyramid rule, and take the top level of that Laplacian pyramid as the fused map.
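Steps 5a-5d can be sketched as follows. The patent's Gaussian weight formula and its parameters appear only as an image, so the sketch uses a standard well-exposedness weight centered at mid-gray with sigma = 0.2 (both assumptions), and a single-level weighted blend stands in for the Gaussian/Laplacian pyramid machinery.

```python
# Hedged sketch of step 5: exposure-fusion weighting and blending. The
# mid-gray target 0.5 and sigma 0.2 are assumptions; the pyramid construction
# of steps 5b-5d is replaced by a single-level blend for brevity.
import numpy as np

def fusion_weights(images, sigma: float = 0.2) -> np.ndarray:
    """Per-pixel Gaussian weights favoring mid-tones, normalized across inputs."""
    w = np.stack([np.exp(-((im - 0.5) ** 2) / (2.0 * sigma ** 2))
                  for im in images])
    return w / w.sum(axis=0, keepdims=True)

def fuse(images) -> np.ndarray:
    """Weighted blend of the detail, filtered, and luminance maps (all in [0, 1])."""
    w = fusion_weights(images)
    return np.sum(w * np.stack(images), axis=0)
```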
Step 6: convert the fused map from HSV space to RGB space and output the result map.
The effects of the present invention will be described in further detail below with reference to simulation experiments.
1. Simulation experiment conditions are as follows:
The hardware platform of the simulation experiment is a computer with an Intel i5-6300HQ CPU at 2.3 GHz and 8 GB of memory.
The software platform of the simulation experiment is the Windows 10 operating system with MATLAB R2018a.
The input image used in the simulation experiment is a natural image of size 107 × 142 pixels in JPG format.
2. Simulation content and result analysis thereof:
The simulation experiment processes the input natural image (Fig. 2(a)) with the invention and with one prior-art method (the image enhancement method based on Retinex algorithm and guided filtering); the results are shown in Fig. 2(b) and Fig. 2(c).
The prior art adopted in the simulation experiment is the patent of the University of Electronic Science and Technology of China, "An image enhancement method based on Retinex algorithm and guided filtering" (filing date: 26 March 2019; application number: 201910231635.5; publication number: 109978789A), referred to for short as the image enhancement method based on Retinex algorithm and guided filtering.
The effect of the invention is further described with reference to the simulation results of Fig. 2. Fig. 2(a) is the natural image input to the simulation experiment; Fig. 2(b) is the result of the prior-art image enhancement method based on Retinex algorithm and guided filtering; Fig. 2(c) is the result of the invention. As Fig. 2(c) shows, the result of the invention loses less detail and has more uniform brightness than the prior-art result, demonstrating that the processing effect of the invention is better than that of the prior art.
To evaluate the detail richness and brightness uniformity of the invention relative to the prior-art image enhancement method based on Retinex algorithm and guided filtering, the information entropy of each method's simulation result is computed with the evaluation index of the following formula, and the results are collected in Table 1:
[Equation shown as an image in the original: Figure BDA0002355921710000071]
where the symbol shown as an image [Figure BDA0002355921710000081] denotes the information entropy of each simulation result map, ρ denotes the maximum gray value over all pixel points of each simulation result map, q denotes the minimum gray value over all pixel points of each simulation result map, log_2 denotes the base-2 logarithm, and p_η denotes the probability that a pixel of the result map has gray value η.
Table 1. Objective evaluation of the processing results of the invention and the prior art in the simulation experiment

Method                                                                      Information entropy
Image enhancement method based on Retinex algorithm and guided filtering    6.0580
The invention                                                               7.2041
As Table 1 shows, the information entropy of the invention's result is 7.2041, clearly higher than the 6.0580 of the prior-art method, which proves that the invention attains higher accuracy of image detail.
The above simulation experiments show that, by performing the PLIP forward transform on the luminance component map with the forward-transform formula of the PLIP model, the invention solves the prior-art problem of gray values of natural images leaving the valid interval when additions and subtractions on pixel gray values produce negative values or values above 255, and is a highly practical natural-image processing method.

Claims (5)

1. An image processing method based on an adaptive parameterized logarithmic image processing (PLIP) model, characterized in that adaptive PLIP processing and detail processing are performed in sequence on the luminance component map of the image to be processed according to the information entropy of different regions of the image, comprising the following steps:
(1) obtaining the luminance component map of the image to be processed:
(1a) inputting the natural image to be processed; if it is a color image, converting it from the RGB color space to the HSV color space, extracting the value component to obtain the luminance component map of the image to be processed, and evenly partitioning the luminance component map into blocks;
(1b) calculating the information entropy of each image block with an information-entropy formula, and taking the mean of the entropies of all blocks in the luminance component map as the information entropy of the luminance component map;
(2) performing adaptive PLIP processing on the luminance component map:
(2a) calculating the transformation parameter of the parameterized logarithmic image processing (PLIP) model for each image block with the following transformation-parameter formula:
[Equation shown as an image in the original: Figure FDA0002355921700000011]
where λ_i denotes the PLIP model transformation parameter of the i-th image block, α denotes an adjustment factor with value 1.5, E_i denotes the information entropy of the i-th image block, E_max denotes the maximum of the information entropies of all image blocks in the luminance component map, and E_min denotes the minimum of the information entropies of all image blocks in the luminance component map;
(2b) performing the PLIP forward transform on each image block with the forward-transform formula of the PLIP model to obtain the PLIP forward-transform map of each image block;
(3) performing detail processing on the luminance component map:
(3a) calculating the reflection component of each image block with the following reflection-component formula:
[Equation shown as an image in the original: Figure FDA0002355921700000012]
where R_i denotes the reflection component of the i-th image block, N denotes the total number of blocks into which the luminance component map is partitioned, Σ denotes summation, ln denotes the logarithm with base e, the symbol shown as an image [Figure FDA0002355921700000013] denotes the PLIP forward-transform map of the i-th image block, the operator between it and the kernels (rendered only in the image) denotes convolution, G_1 denotes a Gaussian kernel of scale 80, E denotes the information entropy of the luminance component map, G_2 denotes a Gaussian kernel of scale 30, and G_3 denotes a Gaussian kernel of scale 200;
(3b) performing the PLIP inverse transform on the reflection component of each image block with the inverse-transform formula of the PLIP model to obtain the PLIP inverse-transform map of each image block;
(3c) placing the PLIP inverse-transform map of each image block back at the block's position in the luminance component map before partitioning to obtain the detail-enhanced map;
(4) obtaining the filtered map of the image to be processed:
inputting the luminance component map into a guided image filter to obtain the filtered map;
(5) obtaining the fused map of the image to be processed with an exposure fusion method:
(5a) calculating the value of each pixel in the three fusion-weight maps with a Gaussian function formula;
(5b) constructing, according to the Gaussian pyramid rule, Gaussian pyramids of the detail-enhanced map, the filtered map, and the luminance component map, and a Gaussian pyramid of the fusion-weight map corresponding to each map;
(5c) multiplying the top level of the Gaussian pyramid of each of the detail-enhanced map, the filtered map, and the luminance component map by the top level of the Gaussian pyramid of its corresponding fusion-weight map to obtain the detail-enhanced weighted map, the filtered weighted map, and the luminance-component weighted map;
(5d) adding the detail-enhanced weighted map, the filtered weighted map, and the luminance-component weighted map to obtain a preliminary fused map, constructing the Laplacian pyramid of the preliminary fused map according to the Laplacian pyramid rule, and taking the top level of that Laplacian pyramid as the fused map;
(6) converting the fused map from HSV space to RGB space and outputting the result map.
2. The image processing method based on the adaptive PLIP model according to claim 1, characterized in that the information-entropy formula in step (1b) is as follows:
[Equation shown as an image in the original: Figure FDA0002355921700000021]
wherein k represents the maximum gray value among all pixel points of each image block, j represents the minimum gray value among all pixel points of each image block, log2Denotes base 2 logarithmic operation, pmAnd (4) representing the probability of all the m-gray-scale values of each image block in the image block.
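The entropy of claim 2 is fully determined by its where-clause and can be computed directly; the function name below is illustrative:

```python
import numpy as np

def block_entropy(block):
    # Information entropy of an image block (claim 2):
    # H = -sum over gray values m of p_m * log2(p_m), where p_m is the
    # probability of gray value m among the block's pixels. Gray values
    # absent from the block contribute nothing to the sum.
    _, counts = np.unique(np.asarray(block), return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())
```

A constant block has entropy 0, and a block split evenly between two gray values has entropy 1 bit.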
3. The image processing method based on the adaptive PLIP model with parameterized logarithm according to claim 1, wherein the positive transformation formula of the PLIP model in step (2b) is as follows:
Figure FDA0002355921700000031
wherein β represents a logarithmic factor with a value of 1.2, and
Figure FDA0002355921700000032
represents the i-th image block of the brightness component graph.
4. The image processing method based on the adaptive PLIP model with parameterized logarithm according to claim 3, wherein the inverse transformation formula of the PLIP model in step (3b) is as follows:
Figure FDA0002355921700000033
wherein C_i represents the PLIP inverse transformation diagram of the i-th image block, and exp represents the exponential operation with the natural constant e as the base.
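The exact PLIP transform formulas of claims 3 and 4 are contained in unreproduced figures. The sketch below uses a standard parameterized-logarithmic pair consistent with the surrounding text (a log factor β = 1.2 in the forward transform, a base-e exponential in the inverse); the constant M and the exact functional form are assumptions, not the claimed formulas:

```python
import numpy as np

M = 256.0   # gray-level range of the PLIP model (assumed)
BETA = 1.2  # logarithmic factor beta, value stated in claim 3

def plip_forward(f):
    # Positive (isomorphic) transform, assumed form:
    # phi(f) = -M * beta * ln(1 - f / M).
    f = np.asarray(f, dtype=np.float64)
    return -M * BETA * np.log(1.0 - f / M)

def plip_inverse(t):
    # Matching inverse transform, using the base-e exponential:
    # f = M * (1 - exp(-t / (M * beta))).
    t = np.asarray(t, dtype=np.float64)
    return M * (1.0 - np.exp(-t / (M * BETA)))
```

The two functions are exact mutual inverses, so a forward-then-inverse round trip recovers the input, mirroring the role the pair plays in steps (2b) and (3b).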
5. The image processing method based on the adaptive PLIP model with parameterized logarithm according to claim 1, wherein the Gaussian function in step (5a) is as follows:
Figure FDA0002355921700000034
wherein B_t represents the value of the pixel point t, and C represents the value of the pixel point at the same position as t in the result graph corresponding to the fusion weight graph.
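The Gaussian function of claim 5 is also in an unreproduced figure. A common form for such a fusion weight, shown here as an assumption, rewards pixel values B_t close to a reference value C; the spread parameter `sigma` is likewise assumed:

```python
import numpy as np

def gaussian_weight(B, C, sigma=0.2):
    # Assumed Gaussian fusion weight: exp(-(B_t - C)^2 / (2*sigma^2)).
    # The weight is near 1 when the pixel value B is close to the
    # reference value C and decays as the two diverge.
    B = np.asarray(B, dtype=np.float64)
    return np.exp(-((B - C) ** 2) / (2.0 * sigma ** 2))
```

Applied per pixel to each of the three source graphs, this yields the 3 fusion weight graphs of step (5a).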
CN202010007691.3A 2020-01-05 2020-01-05 Image processing method based on self-adaptive PLIP model Active CN111223060B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010007691.3A CN111223060B (en) 2020-01-05 2020-01-05 Image processing method based on self-adaptive PLIP model

Publications (2)

Publication Number Publication Date
CN111223060A true CN111223060A (en) 2020-06-02
CN111223060B CN111223060B (en) 2021-01-05

Family

ID=70828118

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010007691.3A Active CN111223060B (en) 2020-01-05 2020-01-05 Image processing method based on self-adaptive PLIP model

Country Status (1)

Country Link
CN (1) CN111223060B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090245689A1 (en) * 2008-03-27 2009-10-01 Panetta Karen A Methods and apparatus for visual sub-band decomposition of signals
CN104574328A (en) * 2015-01-06 2015-04-29 北京环境特性研究所 Color image enhancement method based on histogram segmentation
US20160225126A1 (en) * 2008-09-26 2016-08-04 Google Inc. Method for image processing using local statistics convolution
CN107451974A (en) * 2017-07-31 2017-12-08 北京电子工程总体研究所 A kind of adaptive rendering display methods of high dynamic range images
CN107862666A (en) * 2017-11-22 2018-03-30 新疆大学 Mixing Enhancement Methods about Satellite Images based on NSST domains
CN109961415A (en) * 2019-03-26 2019-07-02 常州工学院 A kind of adaptive gain underwater picture Enhancement Method based on HSI space optics imaging model
CN110175964A (en) * 2019-05-30 2019-08-27 大连海事大学 A kind of Retinex image enchancing method based on laplacian pyramid
CN110278425A (en) * 2019-07-04 2019-09-24 潍坊学院 Image enchancing method, device, equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
KAREN PANETTA, ET AL.: "Parameterized logarithmic framework for image enhancement", IEEE Transactions on Systems, Man and Cybernetics *
ZHANG Guangyan: "Research on a new logarithmic image processing model and its applications", China Masters' Theses Full-text Database, Information Science and Technology *

Also Published As

Publication number Publication date
CN111223060B (en) 2021-01-05

Similar Documents

Publication Publication Date Title
US11127122B2 (en) Image enhancement method and system
Wang et al. Adaptive image enhancement method for correcting low-illumination images
Celik Spatial mutual information and PageRank-based contrast enhancement and quality-aware relative contrast measure
CN108090886B (en) High dynamic range infrared image display and detail enhancement method
CN111598791B (en) Image defogging method based on improved dynamic atmospheric scattering coefficient function
CN103268598A (en) Retinex-theory-based low-illumination low-altitude remote sensing image enhancing method
CN104574293A (en) Multiscale Retinex image sharpening algorithm based on bounded operation
CN104318529A (en) Method for processing low-illumination images shot in severe environment
CN111415304A (en) Underwater vision enhancement method and device based on cascade deep network
CN107256539B (en) Image sharpening method based on local contrast
CN112541869A (en) Retinex image defogging method based on matlab
Parihar et al. A comprehensive analysis of fusion-based image enhancement techniques
CN106981052B (en) Adaptive uneven brightness variation correction method based on variation frame
Muniraj et al. Underwater image enhancement by modified color correction and adaptive Look-Up-Table with edge-preserving filter
CN113344810A (en) Image enhancement method based on dynamic data distribution
Jindal et al. Bio-medical image enhancement based on spatial domain technique
CN111223060B (en) Image processing method based on self-adaptive PLIP model
CN110175509A (en) A kind of round-the-clock eye circumference recognition methods based on cascade super-resolution
CN113222859B (en) Low-illumination image enhancement system and method based on logarithmic image processing model
CN109886901B (en) Night image enhancement method based on multi-channel decomposition
CN113012079A (en) Low-brightness vehicle bottom image enhancement method and device and storage medium
Neelima et al. Performance Evaluation of Clustering Based Tone Mapping Operators with State-of-Art Methods
CN113160073B (en) Remote sensing image haze removal method combining rolling deep learning and Retinex theory
Yin et al. Enhancement of Low-Light Image using Homomorphic Filtering, Unsharp Masking, and Gamma Correction
Zhu et al. Improved Adaptive Retinex Image Enhancement Algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant