WO2020082593A1 - Method and device for enhancing image contrast - Google Patents

Method and device for enhancing image contrast

Info

Publication number
WO2020082593A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
value
pixel
component
gray
Prior art date
Application number
PCT/CN2018/124517
Other languages
English (en)
French (fr)
Inventor
邓宇帆
Original Assignee
深圳市华星光电技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市华星光电技术有限公司
Publication of WO2020082593A1 publication Critical patent/WO2020082593A1/zh

Classifications

    • G06T 5/92
    • G06T 5/80
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/90 - Determination of colour characteristics
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10024 - Color image

Definitions

  • The present application relates to the technical field of digital image processing, and in particular to a method and device for enhancing image contrast.
  • Histogram equalization is a method in the field of image processing that adjusts contrast using the image histogram. As shown in Figure 1, histogram equalization spreads the gray histogram of the original image from a relatively concentrated gray interval into a uniform distribution over the entire gray range, which increases the dynamic range of pixel gray values and thereby enhances the overall contrast of the image. This method has an obvious effect on images that are too dark or too bright overall, but it causes the image to lose some detail.
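For reference, this baseline can be written in a few lines of Python/numpy. This is a minimal sketch of standard histogram equalization, not code from the application; the function name is our own:

    import numpy as np

    def equalize_histogram(gray):
        # Classic histogram equalization for an 8-bit grayscale image.
        hist = np.bincount(gray.ravel(), minlength=256)
        cdf = hist.cumsum()                    # cumulative distribution
        cdf_min = cdf[cdf > 0][0]              # first occupied gray level
        scale = max(cdf[-1] - cdf_min, 1)      # guard against constant images
        # Remap gray levels so the cumulative distribution becomes uniform.
        lut = np.clip(np.round((cdf - cdf_min) / scale * 255), 0, 255).astype(np.uint8)
        return lut[gray]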
  • The purpose of the present application is to provide a method for enhancing the contrast of an image, so as to solve the problem that the prior art loses part of the image detail when increasing image contrast.
  • A method for enhancing image contrast includes the following steps: converting the source image from the RGB color space to the YCbCr color space; obtaining the luminance component Y, the blue chrominance component Cb and the red chrominance component Cr of the source image in the YCbCr space; adjusting the luminance component Y to obtain a processed image; and converting the processed image to the RGB color space to obtain a contrast-enhanced image.
  • Adjusting the luminance component Y to obtain the processed image includes the following steps:
  • performing gamma curve correction on the non-color image corresponding to the luminance component Y to stretch the low gray levels and the high gray levels respectively, obtaining a dark-area detail image and a bright-area detail image respectively;
  • the pulse coupled neural network model is used to fuse the dark-area detail image and the bright-area detail image to obtain a contrast-enhanced luminance component Y1;
  • the contrast-enhanced luminance component Y1, the blue chrominance component Cb and the red chrominance component Cr constitute the processed image.
  • Fusing the dark-area detail image and the bright-area detail image with the pulse coupled neural network model includes the following steps: calculating the absolute values of the gray gradient values of the pixel (i, j) in the two detail images as a first stimulus value and a second stimulus value; calculating the absolute values of the difference between the gray value of the pixel (i, j) and gray level 128 in the two detail images as a first linking strength value and a second linking strength value; taking the first stimulus value and the first linking strength value as the input of the pixel (i, j) in a first channel and iterating N times to obtain a first ignition matrix corresponding to the dark-area detail image; taking the second stimulus value and the second linking strength value as the input of the pixel (i, j) in a second channel and iterating N times to obtain a second ignition matrix corresponding to the bright-area detail image; and comparing the ignition values of the pixel (i, j) in the two ignition matrices to obtain the contrast-enhanced luminance component Y1.
  • N is an integer greater than 0, the pixel (i, j) represents the pixel located in row i and column j, and i and j are both positive integers;
  • the pulse coupled neural network model includes the first channel and the second channel.
  • Comparing the ignition values of the pixel (i, j) in the first ignition matrix and the second ignition matrix includes the following steps:
  • if the ignition value of the pixel (i, j) in the first ignition matrix is greater than its ignition value in the second ignition matrix, the fused gray value of the pixel (i, j) is the gray value of the pixel (i, j) in the dark-area detail image;
  • if the ignition value of the pixel (i, j) in the first ignition matrix is less than or equal to its ignition value in the second ignition matrix, the fused gray value of the pixel (i, j) is the gray value of the pixel (i, j) in the bright-area detail image;
  • the fused gray values of the pixels (i, j) constitute the contrast-enhanced luminance component Y1.
  • Any one of the Laplacian operator, the Laplacian-of-Gaussian operator, the Canny operator and the Sobel operator is used to calculate the gray gradient values of the pixel (i, j) in the dark-area detail image and the bright-area detail image.
  • The formula of the Laplacian operator is: grads_{i,j} = lum(i-1,j) + lum(i+1,j) + lum(i,j-1) + lum(i,j+1) - 4·lum(i,j),
  • where lum(i,j) represents the gray value of the pixel (i,j); lum(i-1,j), lum(i+1,j), lum(i,j-1) and lum(i,j+1) represent the gray values of the pixels (i-1,j), (i+1,j), (i,j-1) and (i,j+1) respectively; and grads_{i,j} is the gray gradient value of the pixel (i,j).
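In Python/numpy this gradient computation is a single convolution. A hedged sketch (the kernel follows the formula above; the border mode and the helper name gradient_stimulus are our assumptions):

    import numpy as np
    from scipy.ndimage import convolve

    # 4-neighbour Laplacian kernel matching grads_{i,j} above.
    LAPLACIAN = np.array([[0,  1, 0],
                          [1, -4, 1],
                          [0,  1, 0]], dtype=np.float64)

    def gradient_stimulus(lum):
        # Return |grads_{i,j}| for every pixel; used as the PCNN stimulus value.
        grads = convolve(lum.astype(np.float64), LAPLACIAN, mode='nearest')
        return np.abs(grads)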
  • Performing gamma curve correction on the non-color image corresponding to the luminance component Y to stretch the low gray levels and the high gray levels respectively includes the following steps:
  • the non-color image corresponding to the luminance component Y is corrected with a gamma curve to stretch the low gray levels, obtaining the dark-area detail image;
  • the non-color image corresponding to the luminance component Y is corrected with a gamma curve to stretch the high gray levels, obtaining the bright-area detail image;
  • the function corresponding to the gamma curve is y = 255·(x/255)^(γ/2.2), where x is the gray value of the pixel (i, j) in the non-color image corresponding to the luminance component Y, γ is the gamma coefficient, and y is the gray value of the pixel (i, j) in the stretched luminance component Y; when γ is greater than 0 and less than 2.2, the non-color image corresponding to the luminance component Y is gamma-corrected to stretch the low gray levels, and when γ is greater than 2.2, the non-color image corresponding to the luminance component Y is gamma-corrected to stretch the high gray levels.
  • The formula for converting the source image from the RGB color space to the YCbCr color space is:
  • Y = 0.299R + 0.587G + 0.114B;
  • Cb = -0.169R - 0.331G + 0.500B;
  • Cr = 0.500R - 0.419G - 0.081B;
  • where R represents the value of the red component of the source image in the RGB color space, G the value of the green component, and B the value of the blue component; Y represents the value of the luminance component, Cb the value of the blue chrominance component, and Cr the value of the red chrominance component of the source image converted to the YCbCr color space.
  • The formula for converting the processed image to the RGB color space is:
  • R = Y1 + 1.403·Cr;
  • G = Y1 - 0.344·Cb - 0.714·Cr;
  • B = Y1 + 1.773·Cb;
  • where Y1, Cr and Cb are the value of the luminance component of the processed image and the values of the red and blue chrominance components of the source image converted to the YCbCr space, and R, G and B are the values of the red, green and blue components of the processed image in the RGB color space.
  • Another object of the present application is to provide a device for enhancing image contrast.
  • A device for enhancing image contrast includes:
  • a first conversion module, used to convert the source image from the RGB color space to the YCbCr color space;
  • an acquisition module, configured to obtain the luminance component Y, the blue chrominance component Cb and the red chrominance component Cr of the source image in the YCbCr space;
  • a brightness adjustment module, for adjusting the luminance component Y to obtain a processed image;
  • a second conversion module, used to convert the processed image to the RGB color space to obtain a contrast-enhanced image;
  • the brightness adjustment module includes:
  • a gray-level stretching unit, used to perform gamma curve correction on the non-color image corresponding to the luminance component Y to stretch the low gray levels and the high gray levels respectively, so as to obtain a dark-area detail image and a bright-area detail image respectively;
  • a fusion unit, configured to fuse the dark-area detail image and the bright-area detail image with a pulse coupled neural network model to obtain a contrast-enhanced luminance component Y1;
  • the contrast-enhanced luminance component Y1, the blue chrominance component Cb and the red chrominance component Cr constitute the processed image.
  • The fusion unit includes:
  • a first calculation subunit, used to calculate the absolute values of the gray gradient values of the pixel (i, j) in the dark-area detail image and the bright-area detail image as the first stimulus value and the second stimulus value respectively;
  • a second calculation subunit, used to calculate the absolute values of the difference between the gray value of the pixel (i, j) and gray level 128 in the dark-area detail image and the bright-area detail image as the first linking strength value and the second linking strength value respectively;
  • a first ignition matrix acquisition subunit, configured to take the first stimulus value and the first linking strength value as the input of the pixel (i, j) in the first channel and iterate N times to obtain the first ignition matrix corresponding to the dark-area detail image;
  • a second ignition matrix acquisition subunit, configured to take the second stimulus value and the second linking strength value as the input of the pixel (i, j) in the second channel and iterate N times to obtain the second ignition matrix corresponding to the bright-area detail image;
  • a judgment subunit, configured to compare the ignition values of the pixel (i, j) in the first ignition matrix and the second ignition matrix and obtain the contrast-enhanced luminance component Y1;
  • N is an integer greater than 0, the pixel (i, j) represents the pixel located in row i and column j, and i and j are both positive integers;
  • the pulse coupled neural network model includes the first channel and the second channel.
  • The judgment subunit compares the ignition values of the pixel (i, j) in the first ignition matrix and the second ignition matrix to obtain the contrast-enhanced luminance component Y1 as follows:
  • if the ignition value of the pixel (i, j) in the first ignition matrix is greater than its ignition value in the second ignition matrix, the fused gray value of the pixel (i, j) is the gray value of the pixel (i, j) in the dark-area detail image;
  • if the ignition value of the pixel (i, j) in the first ignition matrix is less than or equal to its ignition value in the second ignition matrix, the fused gray value of the pixel (i, j) is the gray value of the pixel (i, j) in the bright-area detail image;
  • the fused gray values of the pixels (i, j) constitute the contrast-enhanced luminance component Y1.
  • The first calculation subunit uses any one of the Laplacian operator, the Laplacian-of-Gaussian operator, the Canny operator and the Sobel operator to calculate the gray gradient values of the pixel (i, j) in the dark-area detail image and the bright-area detail image.
  • The formula used by the first calculation subunit to calculate the gray gradient values of the pixel (i, j) in the dark-area detail image and the bright-area detail image with the Laplacian operator is:
  • grads_{i,j} = lum(i-1,j) + lum(i+1,j) + lum(i,j-1) + lum(i,j+1) - 4·lum(i,j);
  • where lum(i,j) represents the gray value of the pixel (i,j); lum(i-1,j), lum(i+1,j), lum(i,j-1) and lum(i,j+1) represent the gray values of the pixels (i-1,j), (i+1,j), (i,j-1) and (i,j+1) respectively; and grads_{i,j} is the gray gradient value of the pixel (i,j).
  • The gray-level stretching unit includes:
  • a first stretching subunit, configured to correct the non-color image corresponding to the luminance component Y with a gamma curve to stretch the low gray levels and obtain the dark-area detail image;
  • a second stretching subunit, configured to correct the non-color image corresponding to the luminance component Y with a gamma curve to stretch the high gray levels and obtain the bright-area detail image;
  • when γ is greater than 0 and less than 2.2, the non-color image corresponding to the luminance component Y is gamma-corrected to stretch the low gray levels; when γ is greater than 2.2, the non-color image corresponding to the luminance component Y is gamma-corrected to stretch the high gray levels.
  • The formula used by the first conversion module to convert the source image from the RGB color space to the YCbCr color space is:
  • Y = 0.299R + 0.587G + 0.114B;
  • Cb = -0.169R - 0.331G + 0.500B;
  • Cr = 0.500R - 0.419G - 0.081B;
  • where R, G and B represent the values of the red, green and blue components of the source image in the RGB color space, and Y, Cb and Cr represent the values of the luminance, blue chrominance and red chrominance components of the source image converted to the YCbCr color space.
  • The formula used by the second conversion module to convert the processed image to the RGB color space is:
  • R = Y1 + 1.403·Cr;
  • G = Y1 - 0.344·Cb - 0.714·Cr;
  • B = Y1 + 1.773·Cb;
  • where Y1, Cr and Cb are the value of the luminance component of the processed image and the values of the red and blue chrominance components of the source image converted to the YCbCr space, and R, G and B are the values of the red, green and blue components of the processed image in the RGB color space.
  • This application converts the source image from the RGB color space to the YCbCr color space to extract the luminance component, performs gamma curve correction on the non-color image corresponding to the luminance component to obtain a dark-area detail image with enhanced dark-detail contrast and a bright-area detail image with enhanced bright-detail contrast, uses the pulse coupled neural network model to extract the detail-rich, wide-gray-range regions of the two detail images and fuse them together into an adjusted, contrast-enhanced luminance component, and converts the contrast-enhanced luminance component together with the blue chrominance component and the red chrominance component back to the RGB color space to obtain a contrast-enhanced image. At the same time, the details of the contrast-enhanced image are preserved, and the contrast-enhanced image also exhibits reduced noise.
  • Figure 1 shows the image and its gray histogram before and after histogram equalization, where panels A and B are the original image and its gray histogram before processing, and panels C and D are the processed image and its gray histogram;
  • FIG. 2 is a flowchart of a method for enhancing image contrast according to an embodiment of the application
  • Fig. 3 is a flow chart of using a pulse coupled neural network model to fuse the dark area detail image and the bright area detail image;
  • FIG. 4 is a schematic diagram of an apparatus for enhancing image contrast according to an embodiment of the application.
  • Referring to FIG. 2, a flowchart of a method for enhancing image contrast according to an embodiment of the present application includes the following steps. First, the source image is converted from the RGB color space to the YCbCr color space.
  • RGB is the most common color space for recording color images.
  • The RGB color space is composed of a red component, a green component and a blue component.
  • The red component, the green component and the blue component each range in value from 0 to 255.
  • The larger the value of a color component, the higher the brightness of that component; that is, luminance information exists in all three color components. The three color components are equally important and highly correlated, so when the brightness of a pixel of a color image is adjusted, the color of that pixel also changes.
  • In the YCbCr color space, Y represents the luminance component,
  • and Cr and Cb represent the red chrominance component and the blue chrominance component respectively.
  • Y, Cr and Cb all range in value from 0 to 255.
  • The luminance signal (Y) and the chrominance signals (Cr and Cb) are independent of each other.
  • When the luminance component Y is enhanced, the chrominance signals are therefore not affected.
  • For a two-dimensional image, the spatial position of a pixel is represented by two components. Specifically, in this application it is written (i, j), where i indicates that the pixel is located in row i and j indicates that it is located in column j.
  • This application converts the source image from the RGB color space to the YCbCr color space and extracts the luminance information of the YCbCr space for adjustment, so the other information of the image is not affected. The conversion from the RGB color space
  • to the YCbCr color space is a linear conversion, with the formula:
  • Y = 0.299R + 0.587G + 0.114B;
  • Cb = -0.169R - 0.331G + 0.500B;
  • Cr = 0.500R - 0.419G - 0.081B;
  • where R represents the value of the red component of the source image in the RGB color space, G the value of the green component, and B the value of the blue component; Y represents the value of the luminance component, Cb the value of the blue chrominance component, and Cr the value of the red chrominance component of the source image converted to the YCbCr color space.
  • Then, the non-color image corresponding to the luminance component Y is subjected to gamma curve correction to stretch the low gray levels and the high gray levels respectively, obtaining a dark-area detail image and a bright-area detail image.
  • The dark-area detail image is obtained by low-gray-level stretching of the non-color image corresponding to the luminance component Y: the gray dynamic range corresponding to the dark-area details is widened while the gray dynamic range corresponding to the bright-area details
  • is compressed, so the contrast of the dark-area details is enhanced.
  • The bright-area detail image is obtained by high-gray-level stretching of the non-color image corresponding to the luminance component Y: the gray dynamic range corresponding to the bright-area details
  • is widened while the gray dynamic range corresponding to the dark-area details is compressed, so the contrast of the bright-area details is enhanced.
  • By gamma-correcting the non-color image corresponding to the luminance component Y, an image with enhanced dark-area detail contrast and an image with enhanced bright-area detail contrast are obtained respectively.
  • The pulse coupled neural network model is then used to fuse the dark-area detail image and the bright-area detail image to obtain the contrast-enhanced luminance component Y1; the contrast-enhanced luminance component Y1, the blue chrominance component Cb and the red chrominance component Cr constitute the processed image.
  • The Pulse Coupled Neural Network (PCNN) was proposed by Eckhorn et al. based on the synchronous pulse oscillation and pulse emission phenomena observed in neurons of the cat visual cortex.
  • A pulse coupled neural network is a feedback network formed by connecting a number of neurons.
  • In image processing, each neuron corresponds to a pixel in the image, and the input of the neuron corresponds to gray-value-related information of that pixel. Since pixels are discrete, the input signal of the pulse coupled neural network model is also discrete.
  • Each neuron consists of three parts, namely the input field, the linking field and the pulse generator, and the mathematical description of the neuron model corresponding to each pixel (i, j) can be simplified to the following formulas:
  • Input field: F_{i,j}(n) = I_{i,j};
  • Linking field: L_{i,j}(n) = exp(-α_L)·L_{i,j}(n-1) + Σ_{k,l} W_{ij,kl}·Y_{ij,kl}(n-1), and U_{i,j}(n) = F_{i,j}(n)·(1 + β_{i,j}·L_{i,j}(n));
  • Pulse generator: T_{i,j}(n) = exp(-α_T)·T_{i,j}(n-1) + v_T·Y_{i,j}(n-1), with Y_{i,j}(n) = 1 when U_{i,j}(n) > T_{i,j}(n) and Y_{i,j}(n) = 0 when U_{i,j}(n) ≤ T_{i,j}(n). (3)
  • In formula (3), I is the image to be fused, I_{i,j} is the gray-related value of the image to be fused at the pixel (i, j) and is used as the input stimulus of F_{i,j}(n), and n denotes the n-th iteration in the PCNN.
  • L_{i,j}(n) represents the neighbourhood influence value of the pixel (i, j); α_L represents the time decay constant of the linking path; W_{ij,kl} represents the weight of the output of the neighbouring pixel in row (i+k), column (j+l); Y_{ij,kl}(n-1) represents the output of the pixel in row (i+k), column (j+l) at the (n-1)-th iteration; U_{i,j}(n) represents the internal activity of the pixel (i, j) at the n-th iteration; β_{i,j} represents the linking strength value; and k and l define the range of the other neurons that provide linking input to the neuron corresponding to the current pixel (i, j).
  • T_{i,j}(n) is the threshold of the pixel (i, j) at the n-th iteration; α_T and v_T represent the time decay constant and the amplification factor of the neuron's adjustable threshold.
  • The output value Y_{i,j}(n) of the pixel (i, j) defined in formula (3) is processed with formula (4) to obtain the ignition value (the total number of firings) of the pixel (i, j) after n iterations: Sum_{i,j}(n) = Sum_{i,j}(n-1) + Y_{i,j}(n). (4) Before the ignition value Sum_{i,j}(N) over N iterations is computed, the state is initialised as F(0) = Y(0) = T(0) = U(0) = Sum(0) = 0.
  • The surrounding pixels referenced by the linking field form a 3×3 neighbourhood, and the value of W is empirical, for example W = [[0.5, 1, 0.5], [1, 0, 1], [0.5, 1, 0.5]], that is:
  • Σ_{k,l} W_{ij,kl}·Y_{ij,kl}(n-1) = 0.5·Y_{i-1,j-1}(n-1) + Y_{i-1,j}(n-1) + 0.5·Y_{i-1,j+1}(n-1) + Y_{i,j-1}(n-1) + Y_{i,j+1}(n-1) + 0.5·Y_{i+1,j-1}(n-1) + Y_{i+1,j}(n-1) + 0.5·Y_{i+1,j+1}(n-1); α_L, α_T and v_T are also empirical values, for example α_L = 0.01, α_T = 0.1 and v_T = 25.
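To make the iteration concrete, here is a minimal Python/numpy sketch of the simplified neuron model of formulas (3) and (4), using the empirical W, α_L, α_T and v_T quoted above; the function name and the zero-padded border are our assumptions:

    import numpy as np
    from scipy.ndimage import convolve

    W = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])   # empirical 3x3 linking weights

    def pcnn_fire_counts(F, beta, n_iter, alpha_L=0.01, alpha_T=0.1, v_T=25.0):
        # F    : stimulus matrix, e.g. |gray gradient| of each pixel
        # beta : linking strength matrix, e.g. |gray value - 128| of each pixel
        # Returns the ignition matrix Sum after n_iter iterations.
        L = np.zeros_like(F, dtype=np.float64)   # F(0)=Y(0)=T(0)=U(0)=Sum(0)=0
        T = np.zeros_like(L)
        Y = np.zeros_like(L)
        fire_sum = np.zeros_like(L)
        for _ in range(n_iter):
            L = np.exp(-alpha_L) * L + convolve(Y, W, mode='constant')  # linking field
            U = F * (1.0 + beta * L)                                    # internal activity
            T = np.exp(-alpha_T) * T + v_T * Y                          # dynamic threshold
            Y = (U > T).astype(np.float64)                              # fire if U > T
            fire_sum += Y                                               # formula (4)
        return fire_sum

Running this once on the dark-area detail image and once on the bright-area detail image yields the first and second ignition matrices that are compared below.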
  • The contrast-enhanced, wide-gray-range dark-area details of the dark-area detail image and the contrast-enhanced, wide-gray-range bright-area details of the bright-area detail image are extracted and fused together, so that after fusion
  • the contrast of both the dark-area details and the bright-area details is enhanced within a single image, while neither the dark-area nor the bright-area details of the image are lost.
  • Because the influence of neighbouring pixels is taken into account, the contrast-enhanced image is also an image with reduced noise.
  • The processed image is converted to the RGB color space using: R = Y1 + 1.403·Cr; G = Y1 - 0.344·Cb - 0.714·Cr; B = Y1 + 1.773·Cb;
  • where Y1, Cr and Cb are the value of the luminance component of the processed image and the values of the red and blue chrominance components of the source image converted to the YCbCr space, and R, G and B are the values of the red, green and blue components of the processed image in the RGB color space.
  • The above scheme converts the source image from the RGB color space to the YCbCr color space to extract the luminance component, performs gamma curve correction on the non-color image corresponding to the luminance component to obtain a dark-area detail image with enhanced dark-detail contrast and a bright-area detail image with enhanced bright-detail contrast, uses the pulse coupled neural network model to extract the detail-rich, wide-gray-range regions of the two detail images and fuse them together into an adjusted, contrast-enhanced luminance component, and converts the contrast-enhanced luminance component, the blue chrominance component and the red chrominance component back to the RGB color space to obtain a contrast-enhanced image. At the same time, the details of the contrast-enhanced image are preserved, and the contrast-enhanced image also exhibits reduced noise.
  • Referring to FIG. 3, a flow chart of fusing the dark-area detail image and the bright-area detail image with the pulse coupled neural network model includes the following steps: calculating the absolute values of the gray gradient values of the pixel (i, j) in the two detail images as the first and second stimulus values; calculating the absolute values of the difference between the gray value of the pixel (i, j) and gray level 128 in the two detail images as the first and second linking strength values; iterating each channel N times to obtain the first ignition matrix corresponding to the dark-area detail image and the second ignition matrix corresponding to the bright-area detail image; and comparing the ignition values of the pixel (i, j) in the two ignition matrices to obtain the contrast-enhanced luminance component Y1.
  • N is an integer greater than 0, the pixel (i, j) represents the pixel located in row i and column j, and i and j are both positive integers;
  • the pulse coupled neural network model includes a first channel and a second channel.
  • A "contrast enhancement algorithm" generally has two requirements: (1) for the image as a whole, the bright areas become brighter, the dark areas become darker, the gray-level range expands, and the overall contrast of the image increases; (2) for local parts of the image, the brightness levels of adjacent pixels are pulled apart, making local details rich.
  • The absolute value of the gray gradient of the pixel (i, j) and the absolute value of the difference between the gray value of the pixel (i, j) and gray level 128 are taken as the two inputs of the PCNN model.
  • The absolute gradient value is used as the stimulus value of the PCNN to measure local detail.
  • The absolute difference between the gray value of the pixel (i, j) and gray level 128 is used as the linking strength value of the PCNN to measure the gray-level range.
  • The larger the absolute difference between the gray value and gray level 128, the further the brightness deviates from the middle value, and the more it helps to expand the overall gray-level range.
  • The two inputs jointly affect the ignition values output by the PCNN model. For example, if the gradient values of the pixel (i, j) in the dark-area detail image and the bright-area detail image are equal, but the absolute difference between the gray value and gray level 128 is larger in the dark-area detail image, then after the PCNN computation the ignition value of the dark-area detail image will exceed that of the bright-area detail image, and in the final fused image the gray value of this pixel will be taken from the dark-area detail image.
  • Comparing the ignition values of the pixel (i, j) in the first ignition matrix and the second ignition matrix includes the following steps:
  • if the ignition value of the pixel (i, j) in the first ignition matrix is greater than its ignition value in the second ignition matrix, the fused gray value of the pixel (i, j) is the gray value of the pixel (i, j) in the dark-area detail image;
  • if the ignition value of the pixel (i, j) in the first ignition matrix is less than or equal to its ignition value in the second ignition matrix, the fused gray value of the pixel (i, j) is the gray value of the pixel (i, j) in the bright-area detail image;
  • the fused gray values of the pixels (i, j) constitute the contrast-enhanced luminance component Y1.
  • Any one of the Laplacian operator, the Laplacian-of-Gaussian operator, the Canny operator and the Sobel operator may be used to calculate the gray gradient values of the pixels (i, j).
  • In this embodiment, the Laplacian operator is used to calculate the gray gradient values of the pixel (i, j) in the dark-area detail image and the bright-area detail image respectively, with the formula:
  • grads_{i,j} = lum(i-1,j) + lum(i+1,j) + lum(i,j-1) + lum(i,j+1) - 4·lum(i,j); (6)
  • lum(i,j) represents the gray value of the pixel (i,j);
  • lum(i-1,j), lum(i+1,j), lum(i,j-1) and lum(i,j+1) represent the gray values of the pixels (i-1,j), (i+1,j), (i,j-1) and (i,j+1) respectively;
  • grads_{i,j} is the gray gradient value of the pixel (i,j).
  • Performing gamma curve correction on the non-color image corresponding to the luminance component Y to stretch the low gray levels and the high gray levels respectively includes the following steps:
  • the non-color image corresponding to the luminance component Y is corrected with the gamma curve to stretch the low gray levels, obtaining the dark-area detail image;
  • the non-color image corresponding to the luminance component Y is corrected with the gamma curve to stretch the high gray levels, obtaining the bright-area detail image;
  • when γ is greater than 0 and less than 2.2, the non-color image corresponding to the luminance component Y is gamma-corrected to stretch the low gray levels; when γ is greater than 2.2, the non-color image corresponding to the luminance component Y is gamma-corrected to stretch the high gray levels. Specifically, γ = 2 may be used for the dark-area detail image and γ = 2.4 for the bright-area detail image.
  • When the value of γ is greater than 0 and less than 2.2, the dark areas of the image expand toward the bright areas; when the value of γ is greater than 2.2, the image expands from the bright areas toward the dark areas. Through gamma correction, a dark-area detail image with enhanced dark-detail contrast and a bright-area detail image with enhanced bright-detail contrast are thus obtained respectively.
  • The method for enhancing image contrast of the present application is executed by an electronic device with image processing capabilities, such as a television, a camera device, a monitoring device, a tablet computer or a server.
  • Referring to FIG. 4, an apparatus 30 for enhancing image contrast according to an embodiment of the present application includes:
  • a first conversion module 31, used to convert the source image from the RGB color space to the YCbCr color space;
  • an acquisition module 32, used to obtain the luminance component Y, the blue chrominance component Cb and the red chrominance component Cr of the source image in the YCbCr space;
  • a brightness adjustment module 33, used to adjust the luminance component Y to obtain the processed image;
  • a second conversion module 34, used to convert the processed image to the RGB color space to obtain a contrast-enhanced image;
  • the brightness adjustment module 33 includes:
  • a gray-level stretching unit 331, used to perform gamma curve correction on the non-color image corresponding to the luminance component Y to stretch the low gray levels and the high gray levels respectively, obtaining a dark-area detail image and a bright-area detail image respectively;
  • a fusion unit 332, used to fuse the dark-area detail image and the bright-area detail image with a pulse coupled neural network model to obtain a contrast-enhanced luminance component Y1;
  • the contrast-enhanced luminance component Y1, the blue chrominance component Cb and the red chrominance component Cr constitute the processed image.
  • the fusion unit 332 includes:
  • a first calculation subunit, used to calculate the absolute values of the gray gradient values of the pixel (i, j) in the dark-area detail image and the bright-area detail image as the first stimulus value and the second stimulus value respectively;
  • a second calculation subunit, used to calculate the absolute values of the difference between the gray value of the pixel (i, j) and gray level 128 in the dark-area detail image and the bright-area detail image as the first linking strength value and the second linking strength value respectively;
  • a first ignition matrix acquisition subunit, used to take the first stimulus value and the first linking strength value as the input of the pixel (i, j) in the first channel and iterate N times, obtaining the first ignition matrix corresponding to the dark-area detail image;
  • a second ignition matrix acquisition subunit, used to take the second stimulus value and the second linking strength value as the input of the pixel (i, j) in the second channel and iterate N times, obtaining the second ignition matrix corresponding to the bright-area detail image;
  • a judgment subunit, used to compare the ignition values of the pixel (i, j) in the first ignition matrix and the second ignition matrix, obtaining the contrast-enhanced luminance component Y1;
  • N is an integer greater than 0, the pixel (i, j) represents the pixel located in row i and column j, and i and j are both positive integers;
  • the pulse coupled neural network model includes a first channel PCNN1 and a second channel PCNN2.
  • The judgment subunit compares the ignition values of the pixel (i, j) in the first ignition matrix and the second ignition matrix to obtain the contrast-enhanced luminance component Y1 as follows:
  • if the ignition value of the pixel (i, j) in the first ignition matrix is greater than its ignition value in the second ignition matrix, the fused gray value of the pixel (i, j) is the gray value of the pixel (i, j) in the dark-area detail image;
  • if the ignition value of the pixel (i, j) in the first ignition matrix is less than or equal to its ignition value in the second ignition matrix, the fused gray value of the pixel (i, j) is the gray value of the pixel (i, j) in the bright-area detail image;
  • the fused gray values of the pixels (i, j) constitute the contrast-enhanced luminance component Y1.
  • Any one of the Laplacian operator, the Laplacian-of-Gaussian operator, the Canny operator and the Sobel operator may be used to calculate the gray gradient values of the pixel (i, j) in the dark-area detail image and the bright-area detail image.
  • The gray-level stretching unit 331 includes:
  • a first stretching subunit, used to correct the non-color image corresponding to the luminance component Y with the gamma curve to stretch the low gray levels, obtaining the dark-area detail image;
  • a second stretching subunit, used to correct the non-color image corresponding to the luminance component Y with the gamma curve to stretch the high gray levels, obtaining the bright-area detail image;
  • the function corresponding to the gamma curve is y = 255·(x/255)^(γ/2.2), where x is the gray value of the pixel (i, j) in the non-color image corresponding to the luminance component Y, γ is the gamma coefficient, and y is the gray value of the pixel (i, j) in the stretched luminance component Y;
  • when γ is greater than 0 and less than 2.2, the non-color image corresponding to the luminance component Y is gamma-corrected to stretch the low gray levels; when γ is greater than 2.2, the non-color image corresponding to the luminance component Y is gamma-corrected to stretch the high gray levels.
  • When the device for enhancing image contrast provided by the above embodiments enhances image contrast, the division into the functional modules described above is only an example.
  • In practical applications, the above functions may be allocated to different functional modules as needed; that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above.

Abstract

A method and device for enhancing image contrast. The source image is converted from the RGB space to the YCbCr space to extract the luminance component, and gamma curve correction is performed on the corresponding non-color image to obtain a dark-area detail image and a bright-area detail image; a PCNN model is used to fuse the dark-area detail image and the bright-area detail image to obtain a contrast-enhanced luminance component; the contrast-enhanced luminance component and the unadjusted components are converted back to the RGB color space to obtain a contrast-enhanced image.

Description

Method and device for enhancing image contrast
TECHNICAL FIELD
The present application relates to the technical field of digital image processing, and in particular to a method and device for enhancing image contrast.
BACKGROUND
Histogram equalization is a method in the field of image processing that adjusts contrast using the image histogram. As shown in FIG. 1, histogram equalization spreads the gray histogram of the original image from a relatively concentrated gray interval into a uniform distribution over the entire gray range, which increases the dynamic range of pixel gray values and thereby enhances the overall contrast of the image. This method has an obvious effect on images that are too dark or too bright overall, but it causes the image to lose some detail.
Therefore, it is necessary to propose a technical solution to the problem that the prior art loses part of the image detail when increasing image contrast.
TECHNICAL PROBLEM
The purpose of the present application is to provide a method for enhancing image contrast, so as to solve the problem that the prior art loses part of the image detail when increasing image contrast.
TECHNICAL SOLUTION
A method for enhancing image contrast includes the following steps:
converting a source image from the RGB color space to the YCbCr color space;
obtaining the luminance component Y, the blue chrominance component Cb and the red chrominance component Cr of the source image in the YCbCr space;
adjusting the luminance component Y to obtain a processed image;
converting the processed image to the RGB color space to obtain a contrast-enhanced image;
wherein adjusting the luminance component Y to obtain the processed image includes the following steps:
performing gamma curve correction on the non-color image corresponding to the luminance component Y to stretch the low gray levels and the high gray levels respectively, obtaining a dark-area detail image and a bright-area detail image respectively;
fusing the dark-area detail image and the bright-area detail image by using a pulse coupled neural network model to obtain a contrast-enhanced luminance component Y1;
the contrast-enhanced luminance component Y1, the blue chrominance component Cb and the red chrominance component Cr constituting the processed image.
In the above method for enhancing image contrast, fusing the dark-area detail image and the bright-area detail image by using the pulse coupled neural network model includes the following steps:
calculating the absolute values of the gray gradient values of the pixel (i, j) in the dark-area detail image and in the bright-area detail image as a first stimulus value and a second stimulus value respectively;
calculating the absolute values of the difference between the gray value of the pixel (i, j) and gray level 128 in the dark-area detail image and in the bright-area detail image as a first linking strength value and a second linking strength value respectively;
taking the first stimulus value and the first linking strength value as the input of the pixel (i, j) in a first channel and iterating N times to obtain a first ignition matrix corresponding to the dark-area detail image;
taking the second stimulus value and the second linking strength value as the input of the pixel (i, j) in a second channel and iterating N times to obtain a second ignition matrix corresponding to the bright-area detail image;
comparing the ignition values of the pixel (i, j) in the first ignition matrix and the second ignition matrix to obtain the contrast-enhanced luminance component Y1;
where N is an integer greater than 0, the pixel (i, j) represents the pixel located in row i and column j, and i and j are both positive integers;
the pulse coupled neural network model includes the first channel and the second channel.
In the above method for enhancing image contrast, comparing the ignition values of the pixel (i, j) in the first ignition matrix and the second ignition matrix includes the following steps:
if the ignition value of the pixel (i, j) in the first ignition matrix is greater than the ignition value of the pixel (i, j) in the second ignition matrix, the fused gray value of the pixel (i, j) is the gray value of the pixel (i, j) in the dark-area detail image;
if the ignition value of the pixel (i, j) in the first ignition matrix is less than or equal to the ignition value of the pixel (i, j) in the second ignition matrix, the fused gray value of the pixel (i, j) is the gray value of the pixel (i, j) in the bright-area detail image;
the fused gray values of the pixels (i, j) constitute the contrast-enhanced luminance component Y1.
In the above method for enhancing image contrast, any one of the Laplacian operator, the Laplacian-of-Gaussian operator, the Canny operator and the Sobel operator is used to calculate the gray gradient values of the pixel (i, j) in the dark-area detail image and the bright-area detail image.
In the above method for enhancing image contrast, the formula of the Laplacian operator is:
grads_{i,j} = lum(i-1,j) + lum(i+1,j) + lum(i,j-1) + lum(i,j+1) - 4·lum(i,j)
where lum(i,j) represents the gray value of the pixel (i,j); lum(i-1,j), lum(i+1,j), lum(i,j-1) and lum(i,j+1) represent the gray values of the pixels (i-1,j), (i+1,j), (i,j-1) and (i,j+1) respectively; and grads_{i,j} is the gray gradient value of the pixel (i,j).
In the above method for enhancing image contrast, performing gamma curve correction on the non-color image corresponding to the luminance component Y to stretch the low gray levels and the high gray levels respectively includes the following steps:
correcting the non-color image corresponding to the luminance component Y with a gamma curve to stretch the low gray levels, obtaining the dark-area detail image;
correcting the non-color image corresponding to the luminance component Y with a gamma curve to stretch the high gray levels, obtaining the bright-area detail image;
the function corresponding to the gamma curve is y = 255·(x/255)^(γ/2.2), where x is the gray value of the pixel (i,j) in the non-color image corresponding to the luminance component Y, γ is the gamma coefficient, and y is the gray value of the pixel (i,j) in the stretched luminance component Y;
when γ is greater than 0 and less than 2.2, the non-color image corresponding to the luminance component Y is gamma-corrected to stretch the low gray levels; when γ is greater than 2.2, the non-color image corresponding to the luminance component Y is gamma-corrected to stretch the high gray levels.
In the above method for enhancing image contrast, when γ = 2, the non-color image corresponding to the luminance component Y is gamma-corrected to stretch the low gray levels, yielding the dark-area detail image; when γ = 2.4, the non-color image corresponding to the luminance component Y is gamma-corrected to stretch the high gray levels, yielding the bright-area detail image.
In the above method for enhancing image contrast, the formula for converting the source image from the RGB color space to the YCbCr color space is:
Y = 0.299R + 0.587G + 0.114B;
Cb = -0.169R - 0.331G + 0.500B;
Cr = 0.500R - 0.419G - 0.081B;
where R represents the value of the red component of the source image in the RGB color space, G the value of the green component, and B the value of the blue component; Y represents the value of the luminance component, Cb the value of the blue chrominance component, and Cr the value of the red chrominance component of the source image converted to the YCbCr color space.
In the above method for enhancing image contrast, the formula for converting the processed image to the RGB color space is:
R = Y1 + 1.403·Cr;
G = Y1 - 0.344·Cb - 0.714·Cr;
B = Y1 + 1.773·Cb;
where Y1, Cr and Cb are, respectively, the value of the luminance component of the processed image, the value of the red chrominance component of the source image converted to the YCbCr space, and the value of the blue chrominance component of the source image converted to the YCbCr space; R, G and B are, respectively, the values of the red, green and blue components of the processed image in the RGB color space.
Another object of the present application is to provide a device for enhancing image contrast.
A device for enhancing image contrast includes:
a first conversion module, used to convert a source image from the RGB color space to the YCbCr color space;
an acquisition module, used to obtain the luminance component Y, the blue chrominance component Cb and the red chrominance component Cr of the source image in the YCbCr space;
a brightness adjustment module, used to adjust the luminance component Y to obtain a processed image;
a second conversion module, used to convert the processed image to the RGB color space to obtain a contrast-enhanced image;
where the brightness adjustment module includes:
a gray-level stretching unit, used to perform gamma curve correction on the non-color image corresponding to the luminance component Y to stretch the low gray levels and the high gray levels respectively, obtaining a dark-area detail image and a bright-area detail image respectively;
a fusion unit, used to fuse the dark-area detail image and the bright-area detail image with a pulse coupled neural network model to obtain a contrast-enhanced luminance component Y1;
the contrast-enhanced luminance component Y1, the blue chrominance component Cb and the red chrominance component Cr constituting the processed image.
In the above device for enhancing image contrast, the fusion unit includes:
a first calculation subunit, used to calculate the absolute values of the gray gradient values of the pixel (i, j) in the dark-area detail image and in the bright-area detail image as a first stimulus value and a second stimulus value respectively;
a second calculation subunit, used to calculate the absolute values of the difference between the gray value of the pixel (i, j) and gray level 128 in the dark-area detail image and in the bright-area detail image as a first linking strength value and a second linking strength value respectively;
a first ignition matrix acquisition subunit, used to take the first stimulus value and the first linking strength value as the input of the pixel (i, j) in a first channel and iterate N times, obtaining the first ignition matrix corresponding to the dark-area detail image;
a second ignition matrix acquisition subunit, used to take the second stimulus value and the second linking strength value as the input of the pixel (i, j) in a second channel and iterate N times, obtaining the second ignition matrix corresponding to the bright-area detail image;
a judgment subunit, used to compare the ignition values of the pixel (i, j) in the first ignition matrix and the second ignition matrix, obtaining the contrast-enhanced luminance component Y1;
where N is an integer greater than 0, the pixel (i, j) represents the pixel located in row i and column j, and i and j are both positive integers;
the pulse coupled neural network model includes the first channel and the second channel.
In the above device for enhancing image contrast, the judgment subunit compares the ignition values of the pixel (i, j) in the first ignition matrix and the second ignition matrix to obtain the contrast-enhanced luminance component Y1 as follows:
if the ignition value of the pixel (i, j) in the first ignition matrix is greater than the ignition value of the pixel (i, j) in the second ignition matrix, the fused gray value of the pixel (i, j) is the gray value of the pixel (i, j) in the dark-area detail image;
if the ignition value of the pixel (i, j) in the first ignition matrix is less than or equal to the ignition value of the pixel (i, j) in the second ignition matrix, the fused gray value of the pixel (i, j) is the gray value of the pixel (i, j) in the bright-area detail image;
the fused gray values of the pixels (i, j) constitute the contrast-enhanced luminance component Y1.
In the above device for enhancing image contrast, the first calculation subunit uses any one of the Laplacian operator, the Laplacian-of-Gaussian operator, the Canny operator and the Sobel operator to calculate the gray gradient values of the pixel (i, j) in the dark-area detail image and the bright-area detail image.
In the above device for enhancing image contrast, the formula used by the first calculation subunit to calculate the gray gradient values of the pixel (i, j) in the dark-area detail image and the bright-area detail image with the Laplacian operator is:
grads_{i,j} = lum(i-1,j) + lum(i+1,j) + lum(i,j-1) + lum(i,j+1) - 4·lum(i,j);
where lum(i,j) represents the gray value of the pixel (i,j); lum(i-1,j), lum(i+1,j), lum(i,j-1) and lum(i,j+1) represent the gray values of the pixels (i-1,j), (i+1,j), (i,j-1) and (i,j+1) respectively; and grads_{i,j} is the gray gradient value of the pixel (i,j).
In the above device for enhancing image contrast, the gray-level stretching unit includes:
a first stretching subunit, used to correct the non-color image corresponding to the luminance component Y with a gamma curve to stretch the low gray levels, obtaining the dark-area detail image;
a second stretching subunit, used to correct the non-color image corresponding to the luminance component Y with a gamma curve to stretch the high gray levels, obtaining the bright-area detail image;
the function corresponding to the gamma curve is y = 255·(x/255)^(γ/2.2), where x is the gray value of the pixel (i,j) in the non-color image corresponding to the luminance component Y, γ is the gamma coefficient, and y is the gray value of the pixel (i,j) in the stretched luminance component Y;
when γ is greater than 0 and less than 2.2, the non-color image corresponding to the luminance component Y is gamma-corrected to stretch the low gray levels; when γ is greater than 2.2, the non-color image corresponding to the luminance component Y is gamma-corrected to stretch the high gray levels.
In the above device for enhancing image contrast, when the first stretching subunit corrects the non-color image corresponding to the luminance component Y with the gamma curve to stretch the low gray levels, γ = 2;
when the second stretching subunit corrects the non-color image corresponding to the luminance component Y with the gamma curve to stretch the high gray levels, γ = 2.4.
In the above device for enhancing image contrast, the formula used by the first conversion module to convert the source image from the RGB color space to the YCbCr color space is:
Y = 0.299R + 0.587G + 0.114B;
Cb = -0.169R - 0.331G + 0.500B;
Cr = 0.500R - 0.419G - 0.081B;
In the above formulas, R represents the value of the red component of the source image in the RGB color space, G the value of the green component, and B the value of the blue component; Y represents the value of the luminance component, Cb the value of the blue chrominance component, and Cr the value of the red chrominance component of the source image converted to the YCbCr color space.
In the above device for enhancing image contrast, the formula used by the second conversion module to convert the processed image to the RGB color space is:
R = Y1 + 1.403·Cr;
G = Y1 - 0.344·Cb - 0.714·Cr;
B = Y1 + 1.773·Cb;
In the above formulas, Y1, Cr and Cb are, respectively, the value of the luminance component of the processed image, the value of the red chrominance component of the source image converted to the YCbCr space, and the value of the blue chrominance component of the source image converted to the YCbCr space; R, G and B are, respectively, the values of the red, green and blue components of the processed image in the RGB color space.
BENEFICIAL EFFECTS
The present application converts the source image from the RGB color space to the YCbCr color space to extract the luminance component, performs gamma curve correction on the non-color image corresponding to the luminance component to obtain a dark-area detail image with enhanced dark-detail contrast and a bright-area detail image with enhanced bright-detail contrast, uses the pulse coupled neural network model to extract the detail-rich, wide-gray-range regions of the dark-area detail image and the bright-area detail image and fuse them together to obtain an adjusted, contrast-enhanced luminance component, and converts the contrast-enhanced luminance component together with the blue chrominance component and the red chrominance component back to the RGB color space to obtain a contrast-enhanced image. At the same time, the details of the contrast-enhanced image are preserved, and the contrast-enhanced image also has reduced noise.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 shows an image and its gray histogram before and after histogram equalization, in which panels A and B are the original image and its gray histogram before processing, and panels C and D are the processed image and its gray histogram;
FIG. 2 is a flowchart of a method for enhancing image contrast according to an embodiment of the present application;
FIG. 3 is a flowchart of fusing the dark-area detail image and the bright-area detail image with a pulse coupled neural network model;
FIG. 4 is a schematic diagram of a device for enhancing image contrast according to an embodiment of the present application.
EMBODIMENTS OF THE INVENTION
The following description of the embodiments refers to the accompanying drawings to illustrate specific embodiments in which the present application can be practised. Directional terms mentioned in the present application, such as [up], [down], [front], [back], [left], [right], [inner], [outer] and [side], refer only to the directions of the accompanying drawings; they are used to explain and to aid understanding of the present application, not to limit it. In the drawings, units with similar structures are denoted by the same reference numerals.
As shown in FIG. 2, a flowchart of a method for enhancing image contrast according to an embodiment of the present application includes:
S10: converting the source image from the RGB color space to the YCbCr color space.
It should be understood that RGB is the most common color space for recording color images. The RGB color space is composed of a red (Red) component, a green (Green) component and a blue (Blue) component, each of which ranges in value from 0 to 255. The larger the value of a color component, the higher the brightness of that component; that is, luminance information exists in all three color components. The three color components are equally important and highly correlated, so when the brightness of a pixel of a color image is adjusted, the color of the pixel also changes. In the YCbCr color space, Y represents the luminance component, and Cr and Cb represent the red chrominance component and the blue chrominance component respectively; Y, Cr and Cb all range in value from 0 to 255. The luminance signal (Y) and the chrominance signals (Cr and Cb) are independent of each other, so enhancing the luminance component Y does not affect the chrominance signals. In addition, for a two-dimensional image the spatial position of a pixel is represented by two components; specifically, in the present application it is written (i, j), where i indicates that the pixel is located in row i and j indicates that it is located in column j.
The present application converts the source image from the RGB color space to the YCbCr color space and extracts the luminance information of the YCbCr color space for adjustment, so that the other information of the image is not affected; moreover, the conversion of the source image from the RGB color space to the YCbCr color space is a linear conversion, with the formula:
Y = 0.299R + 0.587G + 0.114B;
Cb = -0.169R - 0.331G + 0.500B;
Cr = 0.500R - 0.419G - 0.081B; (1)
In the above formula (1), R represents the value of the red component of the source image in the RGB color space, G the value of the green component, and B the value of the blue component; Y represents the value of the luminance component, Cb the value of the blue chrominance component, and Cr the value of the red chrominance component of the source image converted to the YCbCr color space.
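A short Python/numpy sketch of formula (1), vectorised over the whole image (note that, exactly as in formula (1), Cb and Cr come out zero-centred rather than offset by 128):

    import numpy as np

    # Rows of the matrix give Y, Cb and Cr according to formula (1).
    RGB2YCBCR = np.array([[ 0.299,  0.587,  0.114],
                          [-0.169, -0.331,  0.500],
                          [ 0.500, -0.419, -0.081]])

    def rgb_to_ycbcr(rgb):
        # rgb: H x W x 3 array with values in [0, 255]; returns (Y, Cb, Cr) planes.
        ycbcr = rgb.astype(np.float64) @ RGB2YCBCR.T
        return ycbcr[..., 0], ycbcr[..., 1], ycbcr[..., 2]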
S11: obtaining the luminance component Y, the blue chrominance component Cb and the red chrominance component Cr of the source image in the YCbCr space.
S12: adjusting the luminance component Y to obtain the processed image.
Specifically, gamma curve correction is performed on the non-color image corresponding to the luminance component Y to stretch the low gray levels and the high gray levels respectively, obtaining a dark-area detail image and a bright-area detail image respectively.
The dark-area detail image is obtained by low-gray-level stretching of the non-color image corresponding to the luminance component Y: the gray dynamic range corresponding to the dark-area details is widened while the gray dynamic range corresponding to the bright-area details is compressed, so the contrast of the dark-area details is enhanced. The bright-area detail image is obtained by high-gray-level stretching of the non-color image corresponding to the luminance component Y: the gray dynamic range corresponding to the bright-area details is widened while the gray dynamic range corresponding to the dark-area details is compressed, so the contrast of the bright-area details is enhanced. By performing gamma curve correction on the non-color image corresponding to the luminance component Y, the present application obtains an image with enhanced dark-area detail contrast and an image with enhanced bright-area detail contrast, as sketched below.
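A minimal sketch of this stretching, using the gamma function y = 255·(x/255)^(γ/2.2) given later in connection with FIG. 3 (the helper name is our own):

    import numpy as np

    def gamma_stretch(y, gamma):
        # y_out = 255 * (x/255) ** (gamma/2.2).
        # gamma < 2.2 stretches the low gray levels (dark-area detail image);
        # gamma > 2.2 stretches the high gray levels (bright-area detail image).
        x = np.asarray(y, dtype=np.float64)
        return 255.0 * (x / 255.0) ** (gamma / 2.2)

    # e.g. dark = gamma_stretch(Y, 2.0); bright = gamma_stretch(Y, 2.4)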
Next, the pulse coupled neural network model is used to fuse the dark-area detail image and the bright-area detail image to obtain the contrast-enhanced luminance component Y1; the contrast-enhanced luminance component Y1, the blue chrominance component Cb and the red chrominance component Cr constitute the processed image.
It should be understood that the Pulse Coupled Neural Network (PCNN) was proposed by Eckhorn et al. based on the synchronous pulse oscillation and pulse emission phenomena observed in neurons of the cat visual cortex. Specifically, a pulse coupled neural network is a feedback network formed by connecting a number of neurons. In image processing, each neuron corresponds to a pixel of the image, and the input of the neuron corresponds to gray-value-related information of that pixel; since pixels are discrete, the input signal of the PCNN model is also discrete. Each neuron consists of three parts, namely the input field, the linking field and the pulse generator, and the mathematical description of the neuron model corresponding to each pixel (i, j) can be simplified to the following formulas:
Input field: F_{i,j}(n) = I_{i,j};
Linking field: L_{i,j}(n) = exp(-α_L)·L_{i,j}(n-1) + Σ_{k,l} W_{ij,kl}·Y_{ij,kl}(n-1),
U_{i,j}(n) = F_{i,j}(n)·(1 + β_{i,j}·L_{i,j}(n));
Pulse generator: T_{i,j}(n) = exp(-α_T)·T_{i,j}(n-1) + v_T·Y_{i,j}(n-1),
Y_{i,j}(n) = 1 when U_{i,j}(n) > T_{i,j}(n);
Y_{i,j}(n) = 0 when U_{i,j}(n) ≤ T_{i,j}(n). (3)
In the above formulas (3): in the input field, I is the image to be fused, I_{i,j} is the gray-related value of the image to be fused at the pixel (i, j) and is used as the input stimulus of F_{i,j}(n), and n denotes the n-th iteration in the PCNN;
in the linking field, L_{i,j}(n) represents the neighbourhood influence value of the pixel (i, j); α_L represents the time decay constant of the linking path; W_{ij,kl} represents the weight of the output of the neighbouring pixel in row (i+k), column (j+l); Y_{ij,kl}(n-1) represents the output of the pixel in row (i+k), column (j+l) at the (n-1)-th iteration; U_{i,j}(n) represents the internal activity of the pixel (i, j) at the n-th iteration; β_{i,j} represents the linking strength value; and k and l define the range of the other neurons that provide linking input to the neuron corresponding to the current pixel (i, j);
in the pulse generator, T_{i,j}(n) is the threshold of the pixel (i, j) at the n-th iteration, and α_T and v_T represent the time decay constant and the amplification factor of the neuron's adjustable threshold.
The output value Y_{i,j}(n) of the pixel (i, j) defined in formula (3) is processed with formula (4) to obtain the ignition value (the total number of firings) of the pixel (i, j) after n iterations; formula (4) is:
Sum_{i,j}(n) = Sum_{i,j}(n-1) + Y_{i,j}(n). (4)
Before the ignition value Sum_{i,j}(N) of the pixel (i, j) over N iterations is computed, the state must be initialised: F(0) = Y(0) = T(0) = U(0) = Sum(0) = 0.
In general, the surrounding pixels referenced by the linking field form a 3×3 neighbourhood, and the value of W is empirical, for example:
0.5 1 0.5
1 0 1
0.5 1 0.5
that is, Σ_{k,l} W_{ij,kl}·Y_{ij,kl}(n-1) = 0.5·Y_{i-1,j-1}(n-1) + Y_{i-1,j}(n-1) + 0.5·Y_{i-1,j+1}(n-1) + Y_{i,j-1}(n-1) + Y_{i,j+1}(n-1) + 0.5·Y_{i+1,j-1}(n-1) + Y_{i+1,j}(n-1) + 0.5·Y_{i+1,j+1}(n-1);
α_L, α_T and v_T are also empirical values, for example α_L = 0.01, α_T = 0.1 and v_T = 25.
By using the pulse coupled neural network model, the contrast-enhanced, wide-gray-range dark-area details of the dark-area detail image and the contrast-enhanced, wide-gray-range bright-area details of the bright-area detail image are extracted and fused together, so that after fusion the contrast of both the dark-area details and the bright-area details is enhanced within a single image while neither is lost. In addition, the pulse coupled neural network model takes the influence of neighbouring pixels into account, so the contrast-enhanced image is also an image with reduced noise.
S13: converting the processed image to the RGB color space to obtain the contrast-enhanced image.
The formula used to convert the processed image to the RGB color space is:
R = Y1 + 1.403·Cr;
G = Y1 - 0.344·Cb - 0.714·Cr;
B = Y1 + 1.773·Cb; (5)
In the above formula (5), Y1, Cr and Cb are, respectively, the value of the luminance component of the processed image, the value of the red chrominance component of the source image in the YCbCr space, and the value of the blue chrominance component of the source image in the YCbCr space; R, G and B are, respectively, the values of the red, green and blue components of the processed image in the RGB color space.
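A matching Python/numpy sketch of formula (5); clipping to the 8-bit range is our addition for display:

    import numpy as np

    def ycbcr_to_rgb(y1, cb, cr):
        # Inverse transform per formula (5); result clipped to [0, 255].
        r = y1 + 1.403 * cr
        g = y1 - 0.344 * cb - 0.714 * cr
        b = y1 + 1.773 * cb
        return np.clip(np.stack([r, g, b], axis=-1), 0, 255).astype(np.uint8)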
The above scheme converts the source image from the RGB color space to the YCbCr color space to extract the luminance component, performs gamma curve correction on the non-color image corresponding to the luminance component to obtain a dark-area detail image with enhanced dark-detail contrast and a bright-area detail image with enhanced bright-detail contrast, uses the pulse coupled neural network model to extract the detail-rich, wide-gray-range regions of the dark-area detail image and the bright-area detail image and fuse them together to obtain an adjusted, contrast-enhanced luminance component, and converts the contrast-enhanced luminance component together with the blue chrominance component and the red chrominance component back to the RGB color space to obtain a contrast-enhanced image. At the same time, the details of the contrast-enhanced image are preserved, and the contrast-enhanced image also has reduced noise.
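Putting the steps together, here is a hedged end-to-end sketch of the scheme, reusing the helper functions sketched earlier in this document; the iteration count n_iter (the N of the PCNN) is our choice, as the text does not fix it:

    import numpy as np

    def enhance_contrast(rgb, n_iter=20):
        # S10-S13 in sequence.
        y, cb, cr = rgb_to_ycbcr(rgb)            # S10, S11: extract Y, Cb, Cr
        dark = gamma_stretch(y, 2.0)             # S12: stretch low gray levels
        bright = gamma_stretch(y, 2.4)           # S12: stretch high gray levels
        sum_dark = pcnn_fire_counts(gradient_stimulus(dark),
                                    np.abs(dark - 128.0), n_iter)
        sum_bright = pcnn_fire_counts(gradient_stimulus(bright),
                                      np.abs(bright - 128.0), n_iter)
        y1 = np.where(sum_dark > sum_bright, dark, bright)  # S12: per-pixel fusion
        return ycbcr_to_rgb(y1, cb, cr)          # S13: back to RGB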
Further, as shown in FIG. 3, the flowchart of fusing the dark-area detail image and the bright-area detail image with the pulse coupled neural network model includes the following steps:
calculating the absolute values of the gray gradient values of the pixel (i, j) in the dark-area detail image and in the bright-area detail image as the first stimulus value and the second stimulus value respectively;
calculating the absolute values of the difference between the gray value of the pixel (i, j) and gray level 128 in the dark-area detail image and in the bright-area detail image as the first linking strength value and the second linking strength value respectively;
taking the first stimulus value and the first linking strength value as the input of the pixel (i, j) in the first channel and iterating N times to obtain the first ignition matrix corresponding to the dark-area detail image;
taking the second stimulus value and the second linking strength value as the input of the pixel (i, j) in the second channel and iterating N times to obtain the second ignition matrix corresponding to the bright-area detail image;
comparing the ignition values of the pixel (i, j) in the first ignition matrix and the second ignition matrix to obtain the contrast-enhanced luminance component Y1;
where N is an integer greater than 0, the pixel (i, j) represents the pixel located in row i and column j, and i and j are both positive integers;
the pulse coupled neural network model includes the first channel and the second channel.
A "contrast enhancement algorithm" generally has two requirements: (1) for the image as a whole, the bright areas become brighter, the dark areas become darker, the gray-level range expands, and the overall contrast of the image increases; (2) for local parts of the image, the brightness levels of adjacent pixels are pulled apart, making local details rich. The present application takes the absolute value of the gray gradient of the pixel (i, j) and the absolute value of the difference between the gray value of the pixel (i, j) and gray level 128 as the two inputs of the PCNN model. The absolute gradient value serves as the stimulus value of the PCNN and measures local detail; the absolute difference between the gray value and gray level 128 serves as the linking strength value of the PCNN and measures the gray-level range. It can be understood that the larger the absolute difference between the gray value and gray level 128, the further the brightness deviates from the middle value, and the more it helps to expand the overall gray-level range. The two inputs jointly affect the ignition values output by the PCNN model. For example, if the gradient values of the pixel (i, j) in the dark-area detail image and the bright-area detail image are equal, but the absolute difference between the gray value and gray level 128 is larger in the dark-area detail image, then after the PCNN computation the ignition value of the dark-area detail image will exceed that of the bright-area detail image, and in the final fused image the gray value of this pixel will be taken from the dark-area detail image.
Further, comparing the ignition values of the pixel (i, j) in the first ignition matrix and the second ignition matrix includes the following steps:
if the ignition value of the pixel (i, j) in the first ignition matrix is greater than its ignition value in the second ignition matrix, the fused gray value of the pixel (i, j) is the gray value of the pixel (i, j) in the dark-area detail image;
if the ignition value of the pixel (i, j) in the first ignition matrix is less than or equal to its ignition value in the second ignition matrix, the fused gray value of the pixel (i, j) is the gray value of the pixel (i, j) in the bright-area detail image;
the fused gray values of the pixels (i, j) constitute the contrast-enhanced luminance component Y1.
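Vectorised, the selection rule above is a single comparison (a sketch; where the ignition values are equal, the bright-area image wins, matching the "less than or equal" branch):

    import numpy as np

    def fuse_by_ignition(dark, bright, sum_dark, sum_bright):
        # Keep the dark image's gray value where its ignition value is strictly
        # larger; otherwise keep the bright image's gray value.
        return np.where(sum_dark > sum_bright, dark, bright)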
Further, any one of the Laplacian operator, the Laplacian-of-Gaussian operator, the Canny operator and the Sobel operator may be used to calculate the gray gradient values of the pixel (i, j) in the dark-area detail image and the bright-area detail image. Specifically, in this embodiment the Laplacian operator is used to calculate the gray gradient values of the pixel (i, j) in the dark-area detail image and the bright-area detail image respectively, with the formula:
grads_{i,j} = lum(i-1,j) + lum(i+1,j) + lum(i,j-1) + lum(i,j+1) - 4·lum(i,j); (6)
In formula (6), lum(i,j) represents the gray value of the pixel (i,j); lum(i-1,j), lum(i+1,j), lum(i,j-1) and lum(i,j+1) represent the gray values of the pixels (i-1,j), (i+1,j), (i,j-1) and (i,j+1) respectively; and grads_{i,j} is the gray gradient value of the pixel (i,j).
Further, performing gamma curve correction on the non-color image corresponding to the luminance component Y to stretch the low gray levels and the high gray levels respectively includes the following steps:
correcting the non-color image corresponding to the luminance component Y with a gamma curve to stretch the low gray levels, obtaining the dark-area detail image;
correcting the non-color image corresponding to the luminance component Y with a gamma curve to stretch the high gray levels, obtaining the bright-area detail image;
the function corresponding to the gamma curve is y = 255·(x/255)^(γ/2.2), where x is the gray value of the pixel (i,j) in the non-color image corresponding to the luminance component Y, γ is the gamma coefficient, and y is the gray value of the pixel (i,j) in the stretched luminance component Y;
when γ is greater than 0 and less than 2.2, the non-color image corresponding to the luminance component Y is gamma-corrected to stretch the low gray levels; when γ is greater than 2.2, the non-color image corresponding to the luminance component Y is gamma-corrected to stretch the high gray levels.
Specifically, when γ = 2 the non-color image corresponding to the luminance component Y is gamma-corrected to stretch the low gray levels, yielding the dark-area detail image; when γ = 2.4 it is gamma-corrected to stretch the high gray levels, yielding the bright-area detail image. When the value of γ is greater than 0 and less than 2.2, the dark areas of the image expand toward the bright areas; when the value of γ is greater than 2.2, the image expands from the bright areas toward the dark areas. In this embodiment, gamma correction thus yields a dark-area detail image with enhanced dark-detail contrast and a bright-area detail image with enhanced bright-detail contrast.
The method for enhancing image contrast of the present application is executed by an electronic device with image processing capabilities, such as a television, a camera device, a monitoring device, a tablet computer or a server.
As shown in FIG. 4, a device 30 for enhancing image contrast according to an embodiment of the present application includes:
a first conversion module 31, used to convert the source image from the RGB color space to the YCbCr color space;
an acquisition module 32, used to obtain the luminance component Y, the blue chrominance component Cb and the red chrominance component Cr of the source image in the YCbCr space;
a brightness adjustment module 33, used to adjust the luminance component Y to obtain the processed image;
a second conversion module 34, used to convert the processed image to the RGB color space to obtain a contrast-enhanced image;
where the brightness adjustment module 33 includes:
a gray-level stretching unit 331, used to perform gamma curve correction on the non-color image corresponding to the luminance component Y to stretch the low gray levels and the high gray levels respectively, obtaining a dark-area detail image and a bright-area detail image respectively;
a fusion unit 332, used to fuse the dark-area detail image and the bright-area detail image with a pulse coupled neural network model to obtain a contrast-enhanced luminance component Y1;
the contrast-enhanced luminance component Y1, the blue chrominance component Cb and the red chrominance component Cr constituting the processed image.
Further, the fusion unit 332 includes:
a first calculation subunit, used to calculate the absolute values of the gray gradient values of the pixel (i, j) in the dark-area detail image and in the bright-area detail image as the first stimulus value and the second stimulus value respectively;
a second calculation subunit, used to calculate the absolute values of the difference between the gray value of the pixel (i, j) and gray level 128 in the dark-area detail image and in the bright-area detail image as the first linking strength value and the second linking strength value respectively;
a first ignition matrix acquisition subunit, used to take the first stimulus value and the first linking strength value as the input of the pixel (i, j) in the first channel and iterate N times, obtaining the first ignition matrix corresponding to the dark-area detail image;
a second ignition matrix acquisition subunit, used to take the second stimulus value and the second linking strength value as the input of the pixel (i, j) in the second channel and iterate N times, obtaining the second ignition matrix corresponding to the bright-area detail image;
a judgment subunit, used to compare the ignition values of the pixel (i, j) in the first ignition matrix and the second ignition matrix, obtaining the contrast-enhanced luminance component Y1;
where N is an integer greater than 0, the pixel (i, j) represents the pixel located in row i and column j, and i and j are both positive integers;
the pulse coupled neural network model includes a first channel PCNN1 and a second channel PCNN2.
Further, the judgment subunit compares the ignition values of the pixel (i, j) in the first ignition matrix and the second ignition matrix to obtain the contrast-enhanced luminance component Y1 as follows:
if the ignition value of the pixel (i, j) in the first ignition matrix is greater than its ignition value in the second ignition matrix, the fused gray value of the pixel (i, j) is the gray value of the pixel (i, j) in the dark-area detail image;
if the ignition value of the pixel (i, j) in the first ignition matrix is less than or equal to its ignition value in the second ignition matrix, the fused gray value of the pixel (i, j) is the gray value of the pixel (i, j) in the bright-area detail image;
the fused gray values of the pixels (i, j) constitute the contrast-enhanced luminance component Y1.
Further, any one of the Laplacian operator, the Laplacian-of-Gaussian operator, the Canny operator and the Sobel operator may be used to calculate the gray gradient values of the pixel (i, j) in the dark-area detail image and the bright-area detail image.
Further, the gray-level stretching unit 331 includes:
a first stretching subunit, used to correct the non-color image corresponding to the luminance component Y with a gamma curve to stretch the low gray levels, obtaining the dark-area detail image;
a second stretching subunit, used to correct the non-color image corresponding to the luminance component Y with a gamma curve to stretch the high gray levels, obtaining the bright-area detail image;
the function corresponding to the gamma curve is y = 255·(x/255)^(γ/2.2), where x is the gray value of the pixel (i, j) in the non-color image corresponding to the luminance component Y, γ is the gamma coefficient, and y is the gray value of the pixel (i, j) in the stretched luminance component Y;
when γ is greater than 0 and less than 2.2, the non-color image corresponding to the luminance component Y is gamma-corrected to stretch the low gray levels; when γ is greater than 2.2, the non-color image corresponding to the luminance component Y is gamma-corrected to stretch the high gray levels.
The principle by which the device for enhancing image contrast of this embodiment improves image contrast, and its beneficial effects, are the same as those of the above method for enhancing image contrast and are not described in detail here.
It should be noted that, when the device for enhancing image contrast provided by the above embodiments enhances image contrast, the division into the functional modules described above is only an example; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above.
In summary, although the present application has been disclosed above with preferred embodiments, these preferred embodiments are not intended to limit the present application. Those of ordinary skill in the art may make various changes and modifications without departing from the spirit and scope of the present application, and therefore the scope of protection of the present application is defined by the claims.

Claims (18)

  1. A method for enhancing image contrast, comprising the following steps:
    converting a source image from the RGB color space to the YCbCr color space;
    obtaining the luminance component Y, the blue chrominance component Cb and the red chrominance component Cr of the source image in the YCbCr space;
    adjusting the luminance component Y to obtain a processed image;
    converting the processed image to the RGB color space to obtain a contrast-enhanced image;
    wherein adjusting the luminance component Y to obtain the processed image comprises the following steps:
    performing gamma curve correction on the non-color image corresponding to the luminance component Y to stretch the low gray levels and the high gray levels respectively, obtaining a dark-area detail image and a bright-area detail image respectively;
    fusing the dark-area detail image and the bright-area detail image by using a pulse coupled neural network model to obtain a contrast-enhanced luminance component Y1;
    the contrast-enhanced luminance component Y1, the blue chrominance component Cb and the red chrominance component Cr constituting the processed image.
  2. The method for enhancing image contrast according to claim 1, wherein fusing the dark-area detail image and the bright-area detail image by using the pulse coupled neural network model comprises the following steps:
    calculating the absolute values of the gray gradient values of the pixel (i, j) in the dark-area detail image and in the bright-area detail image as a first stimulus value and a second stimulus value;
    calculating the absolute values of the difference between the gray value of the pixel (i, j) and gray level 128 in the dark-area detail image and in the bright-area detail image as a first linking strength value and a second linking strength value;
    taking the first stimulus value and the first linking strength value as the input of the pixel (i, j) in a first channel and iterating N times to obtain a first ignition matrix corresponding to the dark-area detail image;
    taking the second stimulus value and the second linking strength value as the input of the pixel (i, j) in a second channel and iterating N times to obtain a second ignition matrix corresponding to the bright-area detail image;
    comparing the ignition values of the pixel (i, j) in the first ignition matrix and the second ignition matrix to obtain the contrast-enhanced luminance component Y1;
    wherein N is an integer greater than 0, the pixel (i, j) represents the pixel located in row i and column j, and i and j are both positive integers;
    the pulse coupled neural network model comprises the first channel and the second channel.
  3. The method for enhancing image contrast according to claim 2, wherein comparing the ignition values of the pixel (i, j) in the first ignition matrix and the second ignition matrix comprises the following steps:
    if the ignition value of the pixel (i, j) in the first ignition matrix is greater than the ignition value of the pixel (i, j) in the second ignition matrix, the fused gray value of the pixel (i, j) is the gray value of the pixel (i, j) in the dark-area detail image;
    if the ignition value of the pixel (i, j) in the first ignition matrix is less than or equal to the ignition value of the pixel (i, j) in the second ignition matrix, the fused gray value of the pixel (i, j) is the gray value of the pixel (i, j) in the bright-area detail image;
    the fused gray values of the pixels (i, j) constituting the contrast-enhanced luminance component Y1.
  4. The method for enhancing image contrast according to claim 2, wherein any one of the Laplacian operator, the Laplacian-of-Gaussian operator, the Canny operator and the Sobel operator is used to calculate the gray gradient values of the pixel (i, j) in the dark-area detail image and the bright-area detail image.
  5. The method for enhancing image contrast according to claim 4, wherein the formula of the Laplacian operator is:
    grads_{i,j} = lum(i-1,j) + lum(i+1,j) + lum(i,j-1) + lum(i,j+1) - 4·lum(i,j),
    wherein lum(i,j) represents the gray value of the pixel (i,j); lum(i-1,j), lum(i+1,j), lum(i,j-1) and lum(i,j+1) represent the gray values of the pixels (i-1,j), (i+1,j), (i,j-1) and (i,j+1) respectively; and grads_{i,j} is the gray gradient value of the pixel (i,j).
  6. The method for enhancing image contrast according to claim 1, wherein performing gamma curve correction on the non-color image corresponding to the luminance component Y to stretch the low gray levels and the high gray levels respectively comprises the following steps:
    correcting the non-color image corresponding to the luminance component Y with a gamma curve to stretch the low gray levels, obtaining the dark-area detail image;
    correcting the non-color image corresponding to the luminance component Y with a gamma curve to stretch the high gray levels, obtaining the bright-area detail image;
    wherein the function corresponding to the gamma curve is y = 255·(x/255)^(γ/2.2), x is the gray value of the pixel (i,j) in the non-color image corresponding to the luminance component Y, γ is the gamma coefficient, and y is the gray value of the pixel (i,j) in the stretched luminance component Y;
    when γ is greater than 0 and less than 2.2, the non-color image corresponding to the luminance component Y is gamma-corrected to stretch the low gray levels, and when γ is greater than 2.2, the non-color image corresponding to the luminance component Y is gamma-corrected to stretch the high gray levels.
  7. The method for enhancing image contrast according to claim 6, wherein when γ = 2, the non-color image corresponding to the luminance component Y is gamma-corrected to stretch the low gray levels, obtaining the dark-area detail image; and when γ = 2.4, the non-color image corresponding to the luminance component Y is gamma-corrected to stretch the high gray levels, obtaining the bright-area detail image.
  8. The method for enhancing image contrast according to claim 1, wherein the formula for converting the source image from the RGB color space to the YCbCr color space is:
    Y = 0.299R + 0.587G + 0.114B;
    Cb = -0.169R - 0.331G + 0.500B;
    Cr = 0.500R - 0.419G - 0.081B;
    wherein R represents the value of the red component of the source image in the RGB color space, G the value of the green component, and B the value of the blue component; Y represents the value of the luminance component, Cb the value of the blue chrominance component, and Cr the value of the red chrominance component of the source image converted to the YCbCr color space.
  9. The method for enhancing image contrast according to claim 1, wherein the formula for converting the processed image to the RGB color space is:
    R = Y1 + 1.403·Cr;
    G = Y1 - 0.344·Cb - 0.714·Cr;
    B = Y1 + 1.773·Cb;
    wherein Y1, Cr and Cb are, respectively, the value of the luminance component of the processed image, the value of the red chrominance component of the source image converted to the YCbCr space, and the value of the blue chrominance component of the source image converted to the YCbCr space; R, G and B are, respectively, the values of the red, green and blue components of the processed image in the RGB color space.
  10. A device for enhancing image contrast, comprising:
    a first conversion module, used to convert a source image from the RGB color space to the YCbCr color space;
    an acquisition module, used to obtain the luminance component Y, the blue chrominance component Cb and the red chrominance component Cr of the source image in the YCbCr space;
    a brightness adjustment module, used to adjust the luminance component Y to obtain a processed image;
    a second conversion module, used to convert the processed image to the RGB color space to obtain a contrast-enhanced image;
    wherein the brightness adjustment module comprises:
    a gray-level stretching unit, used to perform gamma curve correction on the non-color image corresponding to the luminance component Y to stretch the low gray levels and the high gray levels respectively, obtaining a dark-area detail image and a bright-area detail image respectively;
    a fusion unit, used to fuse the dark-area detail image and the bright-area detail image with a pulse coupled neural network model to obtain a contrast-enhanced luminance component Y1;
    the contrast-enhanced luminance component Y1, the blue chrominance component Cb and the red chrominance component Cr constituting the processed image.
  11. The device for enhancing image contrast according to claim 10, wherein the fusion unit comprises:
    a first calculation subunit, used to calculate the absolute values of the gray gradient values of the pixel (i, j) in the dark-area detail image and in the bright-area detail image as a first stimulus value and a second stimulus value;
    a second calculation subunit, used to calculate the absolute values of the difference between the gray value of the pixel (i, j) and gray level 128 in the dark-area detail image and in the bright-area detail image as a first linking strength value and a second linking strength value;
    a first ignition matrix acquisition subunit, used to take the first stimulus value and the first linking strength value as the input of the pixel (i, j) in a first channel and iterate N times, obtaining a first ignition matrix corresponding to the dark-area detail image;
    a second ignition matrix acquisition subunit, used to take the second stimulus value and the second linking strength value as the input of the pixel (i, j) in a second channel and iterate N times, obtaining a second ignition matrix corresponding to the bright-area detail image;
    a judgment subunit, used to compare the ignition values of the pixel (i, j) in the first ignition matrix and the second ignition matrix, obtaining the contrast-enhanced luminance component Y1;
    wherein N is an integer greater than 0, the pixel (i, j) represents the pixel located in row i and column j, and i and j are both integers greater than 0;
    the pulse coupled neural network model comprises the first channel and the second channel.
  12. The device for enhancing image contrast according to claim 11, wherein the judgment subunit compares the ignition values of the pixel (i, j) in the first ignition matrix and the second ignition matrix to obtain the contrast-enhanced luminance component Y1 as follows:
    if the ignition value of the pixel (i, j) in the first ignition matrix is greater than the ignition value of the pixel (i, j) in the second ignition matrix, the fused gray value of the pixel (i, j) is the gray value of the pixel (i, j) in the dark-area detail image;
    if the ignition value of the pixel (i, j) in the first ignition matrix is less than or equal to the ignition value of the pixel (i, j) in the second ignition matrix, the fused gray value of the pixel (i, j) is the gray value of the pixel (i, j) in the bright-area detail image;
    the fused gray values of the pixels (i, j) constituting the contrast-enhanced luminance component Y1.
  13. The device for enhancing image contrast according to claim 11, wherein the first calculation subunit uses any one of the Laplacian operator, the Laplacian-of-Gaussian operator, the Canny operator and the Sobel operator to calculate the gray gradient values of the pixel (i, j) in the dark-area detail image and the bright-area detail image.
  14. The device for enhancing image contrast according to claim 13, wherein the formula used by the first calculation subunit to calculate the gray gradient values of the pixel (i, j) in the dark-area detail image and the bright-area detail image with the Laplacian operator is:
    grads_{i,j} = lum(i-1,j) + lum(i+1,j) + lum(i,j-1) + lum(i,j+1) - 4·lum(i,j),
    wherein lum(i,j) represents the gray value of the pixel (i,j); lum(i-1,j), lum(i+1,j), lum(i,j-1) and lum(i,j+1) represent the gray values of the pixels (i-1,j), (i+1,j), (i,j-1) and (i,j+1) respectively; and grads_{i,j} is the gray gradient value of the pixel (i,j).
  15. The device for enhancing image contrast according to claim 10, wherein the gray-level stretching unit comprises:
    a first stretching subunit, used to correct the non-color image corresponding to the luminance component Y with a gamma curve to stretch the low gray levels, obtaining the dark-area detail image;
    a second stretching subunit, used to correct the non-color image corresponding to the luminance component Y with a gamma curve to stretch the high gray levels, obtaining the bright-area detail image;
    wherein the function corresponding to the gamma curve is y = 255·(x/255)^(γ/2.2), x is the gray value of the pixel (i,j) in the non-color image corresponding to the luminance component Y, γ is the gamma coefficient, and y is the gray value of the pixel (i,j) in the stretched luminance component Y;
    when γ is greater than 0 and less than 2.2, the non-color image corresponding to the luminance component Y is gamma-corrected to stretch the low gray levels; when γ is greater than 2.2, the non-color image corresponding to the luminance component Y is gamma-corrected to stretch the high gray levels.
  16. The device for enhancing image contrast according to claim 15, wherein
    when the first stretching subunit corrects the non-color image corresponding to the luminance component Y with the gamma curve to stretch the low gray levels, γ = 2;
    when the second stretching subunit corrects the non-color image corresponding to the luminance component Y with the gamma curve to stretch the high gray levels, γ = 2.4.
  17. The device for enhancing image contrast according to claim 10, wherein the formula used by the first conversion module to convert the source image from the RGB color space to the YCbCr color space is:
    Y = 0.299R + 0.587G + 0.114B;
    Cb = -0.169R - 0.331G + 0.500B;
    Cr = 0.500R - 0.419G - 0.081B;
    in the above formulas, R represents the value of the red component of the source image in the RGB color space, G the value of the green component, and B the value of the blue component; Y represents the value of the luminance component, Cb the value of the blue chrominance component, and Cr the value of the red chrominance component of the source image converted to the YCbCr color space.
  18. The device for enhancing image contrast according to claim 10, wherein the formula used by the second conversion module to convert the processed image to the RGB color space is:
    R = Y1 + 1.403·Cr;
    G = Y1 - 0.344·Cb - 0.714·Cr;
    B = Y1 + 1.773·Cb;
    in the above formulas, Y1, Cr and Cb are, respectively, the value of the luminance component of the processed image, the value of the red chrominance component of the source image converted to the YCbCr space, and the value of the blue chrominance component of the source image converted to the YCbCr space; R, G and B are, respectively, the values of the red, green and blue components of the processed image in the RGB color space.
PCT/CN2018/124517 2018-10-26 2018-12-27 Method and device for enhancing image contrast WO2020082593A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811259739.9 2018-10-26
CN201811259739.9A CN109658341B (zh) 2018-10-26 2018-10-26 Method and device for enhancing image contrast

Publications (1)

Publication Number Publication Date
WO2020082593A1 true WO2020082593A1 (zh) 2020-04-30

Family

ID=66110277

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/124517 WO2020082593A1 (zh) 2018-10-26 2018-12-27 Method and device for enhancing image contrast

Country Status (2)

Country Link
CN (1) CN109658341B (zh)
WO (1) WO2020082593A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115861359A (zh) * 2022-12-16 2023-03-28 兰州交通大学 Adaptive segmentation and extraction method for images of floating debris on a water surface
CN117455780A (zh) * 2023-12-26 2024-01-26 广东欧谱曼迪科技股份有限公司 Enhancement method and device for endoscopic dark-field images, electronic device and storage medium

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111968039B (zh) * 2019-05-20 2023-08-22 北京航空航天大学 Day-and-night general image processing method, device and equipment based on a silicon sensor camera
CN112446228B (zh) * 2019-08-27 2022-04-01 北京易真学思教育科技有限公司 Video detection method and device, electronic equipment and computer storage medium
CN110619610B (zh) * 2019-09-12 2023-01-10 紫光展讯通信(惠州)有限公司 Image processing method and device
WO2021179142A1 (zh) * 2020-03-09 2021-09-16 华为技术有限公司 Image processing method and related device
CN112598612B (zh) * 2020-12-23 2023-07-07 南京邮电大学 Flicker-free low-light video enhancement method and device based on illuminance decomposition
CN112700752B (zh) * 2021-01-14 2022-04-12 凌云光技术股份有限公司 Brightness adjustment method
CN113470156A (zh) * 2021-06-23 2021-10-01 网易(杭州)网络有限公司 Texture map blending method and device, electronic equipment and storage medium
CN113643651B (zh) * 2021-07-13 2022-08-09 深圳市洲明科技股份有限公司 Image enhancement method and device, computer equipment and storage medium
CN115050326B (zh) * 2022-08-15 2022-11-04 禹创半导体(深圳)有限公司 Adaptive visible dimming method for OLED under strong light
CN116363017B (zh) * 2023-05-26 2023-10-24 荣耀终端有限公司 Image processing method and device
CN116894795B (zh) * 2023-09-11 2023-12-26 归芯科技(深圳)有限公司 Image processing method and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030031376A1 (en) * 2001-08-13 2003-02-13 Casper Liu Image enhancement method
CN101178875A (zh) * 2006-11-10 2008-05-14 精工爱普生株式会社 Image display control device
CN102496152A (zh) * 2011-12-01 2012-06-13 四川虹微技术有限公司 Histogram-based adaptive image contrast enhancement method
US20170301075A1 (en) * 2016-04-13 2017-10-19 Realtek Semiconductor Corp. Image contrast enhancement method and apparatus thereof
CN108629738A (zh) * 2017-03-16 2018-10-09 阿里巴巴集团控股有限公司 Image processing method and device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101383912B (zh) * 2008-10-23 2010-12-08 上海交通大学 Intelligent automatic color adjustment method for television cameras
CN102110289B (zh) * 2011-03-29 2012-09-19 东南大学 Color image contrast enhancement method based on a variational framework
CN104616268A (zh) * 2015-02-17 2015-05-13 天津大学 Underwater image restoration method based on a turbulence model
CN107481206A (zh) * 2017-08-28 2017-12-15 湖南友哲科技有限公司 Background equalization algorithm for microscope images
CN108122213B (zh) * 2017-12-25 2019-02-12 北京航空航天大学 YCrCb-based low-contrast image enhancement method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030031376A1 (en) * 2001-08-13 2003-02-13 Casper Liu Image enhancement method
CN101178875A (zh) * 2006-11-10 2008-05-14 精工爱普生株式会社 Image display control device
CN102496152A (zh) * 2011-12-01 2012-06-13 四川虹微技术有限公司 Histogram-based adaptive image contrast enhancement method
US20170301075A1 (en) * 2016-04-13 2017-10-19 Realtek Semiconductor Corp. Image contrast enhancement method and apparatus thereof
CN108629738A (zh) * 2017-03-16 2018-10-09 阿里巴巴集团控股有限公司 Image processing method and device

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115861359A (zh) * 2022-12-16 2023-03-28 兰州交通大学 Adaptive segmentation and extraction method for images of floating debris on a water surface
CN115861359B (zh) * 2022-12-16 2023-07-21 兰州交通大学 Adaptive segmentation and extraction method for images of floating debris on a water surface
CN117455780A (zh) * 2023-12-26 2024-01-26 广东欧谱曼迪科技股份有限公司 Enhancement method and device for endoscopic dark-field images, electronic device and storage medium
CN117455780B (zh) * 2023-12-26 2024-04-09 广东欧谱曼迪科技股份有限公司 Enhancement method and device for endoscopic dark-field images, electronic device and storage medium

Also Published As

Publication number Publication date
CN109658341B (zh) 2021-01-01
CN109658341A (zh) 2019-04-19

Similar Documents

Publication Publication Date Title
WO2020082593A1 (zh) Method and device for enhancing image contrast
CN103593830B (zh) Low-illumination video image enhancement method
CN109785240B (zh) Low-illumination image enhancement method and device, and image processing equipment
US10521887B2 (en) Image processing device and image processing method
CN108876742B (zh) Image color enhancement method and device
WO2019056549A1 (zh) Image enhancement method and image processing device
US10771709B2 (en) Evaluation device, evaluation method, and camera system
JP2004064792A (ja) Color correction device and method
WO2021218603A1 (zh) Image processing method and projection system
CN111970432A (zh) Image processing method and image processing device
CN110060222A (zh) Image correction method and device, and endoscope system
KR20200089410A (ko) Low-light image correction method based on optimal gamma correction
WO2021073330A1 (zh) Video signal processing method and device
KR20230146974A (ko) Method and device for improving image brightness
WO2020118902A1 (zh) Image processing method and image processing system
CN111107330A (zh) Color cast correction method in Lab space
CN107027017A (zh) Image white balance adjustment method and device, image processing chip and storage device
JP5410378B2 (ja) Video signal correction device and video signal correction program
CN105208362B (zh) Automatic image color cast correction method based on the gray balance principle
JP4719559B2 (ja) Image quality improvement device and program
TWI479878B (zh) Correction of pseudo-color pixels in digital images
CN107292829B (zh) Image processing method and device
CN107680068A (zh) Digital image enhancement method considering image naturalness
CN105303515B (zh) Color cast correction method for special illumination conditions in a closed experimental chamber
KR20160025876A (ko) Method and device for enhancing image contrast

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18937843

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18937843

Country of ref document: EP

Kind code of ref document: A1