CN112884661A - Image processing apparatus, display apparatus, computer-readable storage medium, and image processing method


Info

Publication number
CN112884661A
CN112884661A (application CN202011537468.6A)
Authority
CN
China
Prior art keywords
pixel data
value
partition
gray scale
processing
Prior art date
Legal status
Pending
Application number
CN202011537468.6A
Other languages
Chinese (zh)
Inventor
宋振坤
杨杰
毕育欣
孙炎
Current Assignee
BOE Technology Group Co Ltd
Beijing BOE Optoelectronics Technology Co Ltd
Original Assignee
BOE Technology Group Co Ltd
Beijing BOE Optoelectronics Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by BOE Technology Group Co Ltd and Beijing BOE Optoelectronics Technology Co Ltd
Priority to CN202011537468.6A
Publication of CN112884661A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/90Dynamic range modification of images or parts thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/147Digital output to display device ; Cooperation and interconnection of the display device with other functional units using display panels
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image

Abstract

The invention provides an image processing apparatus and method, a display apparatus, and a computer-readable storage medium. The image processing apparatus includes a brightness/saturation calculation unit for calculating a brightness value and a saturation value of the current frame picture according to input RGB pixel data; a normalization processing unit for performing normalization processing on the input RGB pixel data; an RGBW determining unit for determining RGBW pixel data according to the normalized RGB pixel data and the brightness and saturation values; an inverse normalization processing unit for performing inverse normalization processing on the RGBW pixel data; and a sub-pixel rendering unit for arranging the inverse-normalized RGBW pixel data and determining the output RGBW pixel data. The invention avoids the problems of insufficient display brightness and low contrast, and achieves the technical effect of enhancing the contrast of the displayed picture while improving the display brightness.

Description

Image processing apparatus, display apparatus, computer-readable storage medium, and image processing method
Technical Field
The present invention relates to the field of display technologies, and in particular, to an image processing apparatus and method, a display apparatus, and a computer-readable storage medium.
Background
Currently, in the field of display technologies, image display devices such as liquid crystal displays (LCDs) and organic light-emitting diode (OLED) displays often adopt RGBW display technology in order to display higher luminance while reducing the power consumption of the display. RGBW technology uses a new pixel arrangement (RGBW arrangement) composed of red (R) sub-pixels, green (G) sub-pixels, blue (B) sub-pixels, and white (W) sub-pixels: at the same backlight luminance, an RGBW screen can display higher luminance, and at the same display luminance, an RGBW screen consumes less power.
Because the video signals in common use today, for example those transmitted over a Video Graphics Array (VGA) interface or a Digital Visual Interface (DVI), carry data in RGB format, RGBW data must be obtained by converting the RGB data so that an RGBW display screen can correctly display a color picture. At present, an FPGA or a customized IC is often used to implement the data processing and drive the display module.
However, the existing methods for converting RGB data into RGBW data suffer from insufficient display brightness and low contrast, which reduces the display quality of the display screen.
Disclosure of Invention
Embodiments of the present invention provide an image processing apparatus and method, a display apparatus, and a computer-readable storage medium, which are used for improving the brightness of a display screen without changing power consumption, and enhancing the contrast of the display screen while ensuring that the display brightness is improved.
To achieve the above object, an embodiment of the present invention provides an image processing apparatus including: a brightness saturation calculating unit, configured to calculate a brightness value and a saturation value of the current frame picture according to input RGB pixel data; a normalization processing unit, configured to perform normalization processing on the input RGB pixel data; an RGBW determining unit, configured to determine RGBW pixel data according to the normalized RGB pixel data and the brightness value and the saturation value; an inverse normalization processing unit, configured to perform inverse normalization processing on the RGBW pixel data; and a sub-pixel rendering unit, configured to arrange the inverse-normalized RGBW pixel data and determine the output RGBW pixel data.
Optionally, the RGBW determining unit includes: a gray scale mapping processing unit, configured to perform gray scale mapping processing on the normalized RGB pixel data; a partition contrast enhancement processing unit, configured to perform partition contrast enhancement processing on the gray-scale-mapped RGB pixel data; and an RGBW conversion unit, configured to convert the partition-contrast-enhanced RGB pixel data into RGBW pixel data.
Optionally, the gray scale mapping processing unit is configured to: calculating a gain coefficient K_scaler, and performing gray scale mapping processing on the normalized RGB pixel data by using the gain coefficient K_scaler obtained through calculation, wherein the gain coefficient K_scaler is obtained through calculation according to the following formula:
K_scaler=128+scaler_in×S×V/255
wherein scaler_in is an externally configurable coefficient with a value range of 0 to 2, S is the saturation value, and V is the brightness value.
Optionally, the grayscale mapping processing unit is configured to: performing gray scale mapping processing on the normalized RGB pixel data through the following formula:
R1=K_scaler×R0
G1=K_scaler×G0
B1=K_scaler×B0
wherein, R0 is the gray scale value of the normalized red sub-pixel, G0 is the gray scale value of the normalized green sub-pixel, B0 is the gray scale value of the normalized blue sub-pixel, R1 is the gray scale value of the red sub-pixel after gray scale mapping, G1 is the gray scale value of the green sub-pixel after gray scale mapping, and B1 is the gray scale value of the blue sub-pixel after gray scale mapping.
Optionally, the partition contrast enhancement processing unit is configured to: convert the RGB image subjected to gray scale mapping processing into a YUV color space to obtain a YUV image comprising YUV components, keep the UV components unchanged, divide the Y component image into a plurality of partitions of the same size, and respectively count the number s(i) of sub-pixels of different gray scales in each partition, where i = 0, 1, 2, …, 255; and when s(i) is greater than a first threshold value, set the value of s(i) to the first threshold value, and when s(i) is less than a second threshold value, set the value of s(i) to the second threshold value.
Optionally, the partition contrast enhancement processing unit is configured to: accumulate the number s(i) of sub-pixels of different gray scales in each partition to obtain the cumulative number p(i) of sub-pixels in the current frame image,
p(i) = s(0) + s(1) + … + s(i),  i = 0, 1, 2, …, 255
then, mapping the next frame image by using the gray scale distribution condition of the current frame image according to the following formula:
y_out(i) = 255 × p(i) / p(255),  i = 0, 1, 2, …, 255
wherein i is an input gray-scale value and y_out(i) is the gray-scale value of the Y component after the partition contrast enhancement processing; the processed YUV pixel data formed by the Y component after the partition contrast enhancement processing and the original UV components is then converted back to the RGB color space to obtain RGB pixel data after the partition contrast enhancement processing, wherein R1′ is the gray-scale value of the red sub-pixel after the partition contrast enhancement processing, G1′ is the gray-scale value of the green sub-pixel after the partition contrast enhancement processing, and B1′ is the gray-scale value of the blue sub-pixel after the partition contrast enhancement processing.
Optionally, the RGBW converting unit is configured to: converting the RGB pixel data after the partition contrast enhancement processing into RGBW pixel data by the following formula:
W=min(R1′,G1′,B1′)
[formula defining the coefficient k, provided as an image in the original publication]
RO=k×R1′-W
GO=k×G1′-W
BO=k×B1′-W
WO=w_scaler×W
wherein w_scaler is an externally configurable coefficient with a value range of 0 to 2.
The embodiment of the invention also provides an image processing method, which comprises the following steps: calculating a brightness value and a saturation value of a current frame picture according to input RGB pixel data; normalizing the input RGB pixel data; determining RGBW pixel data according to the RGB pixel data after normalization processing and the brightness value and the saturation value; performing inverse normalization processing on the RGBW pixel data; and arranging the RGBW pixel data after the inverse normalization processing, and determining the output RGBW pixel data.
Optionally, the step of determining RGBW pixel data according to the RGB pixel data after the normalization processing and the brightness value and the saturation value includes: carrying out gray scale mapping processing on the RGB pixel data after normalization processing; performing partition contrast enhancement processing on the RGB pixel data subjected to the gray scale mapping processing; and converting the RGB pixel data subjected to the partition contrast enhancement processing into RGBW pixel data.
Optionally, the step of performing gray scale mapping processing on the RGB pixel data after the normalization processing includes: calculating a gain coefficient K_scaler, and performing gray scale mapping processing on the normalized RGB pixel data by using the gain coefficient K_scaler obtained through calculation, wherein the gain coefficient K_scaler is obtained through calculation according to the following formula:
K_scaler=128+scaler_in×S×V/255
wherein scaler_in is an externally configurable coefficient with a value range of 0 to 2, S is the saturation value, and V is the brightness value.
Optionally, the step of performing gray scale mapping processing on the RGB pixel data after the normalization processing includes: performing gray scale mapping processing on the normalized RGB pixel data through the following formula:
R1=K_scaler×R0
G1=K_scaler×G0
B1=K_scaler×B0
wherein, R0 is the gray-scale value of the normalized red sub-pixel, G0 is the gray-scale value of the normalized green sub-pixel, B0 is the gray-scale value of the normalized blue sub-pixel, R1 is the gray-scale value of the red sub-pixel after gray-scale mapping, G1 is the gray-scale value of the green sub-pixel after gray-scale mapping, and B1 is the gray-scale value of the blue sub-pixel after gray-scale mapping.
Optionally, the step of performing partition contrast enhancement processing on the RGB pixel data after the gray scale mapping processing includes: converting the RGB image subjected to gray scale mapping processing into a YUV color space to obtain a YUV image comprising YUV components, keeping the UV components unchanged, dividing the Y component image into a plurality of partitions of the same size, and respectively counting the number s(i) of pixels of different gray scales in each partition, where i = 0, 1, 2, …, 255; and when s(i) is greater than a first threshold value, setting the value of s(i) to the first threshold value, and when s(i) is less than a second threshold value, setting the value of s(i) to the second threshold value.
Optionally, the step of performing partition contrast enhancement processing on the RGB pixel data after the gray scale mapping processing includes: accumulating the number s(i) of pixels of different gray scales in each partition to obtain the cumulative number p(i) of pixels in the current frame image,
p(i) = s(0) + s(1) + … + s(i),  i = 0, 1, 2, …, 255
then, mapping the next frame image by using the gray scale distribution condition of the current frame image according to the following formula:
y_out(i) = 255 × p(i) / p(255),  i = 0, 1, 2, …, 255
wherein i is an input gray-scale value and y_out(i) is the gray-scale value of the Y component after the partition contrast enhancement processing; the processed YUV pixel data formed by the Y component after the partition contrast enhancement processing and the original UV components is then converted back to the RGB color space to obtain RGB pixel data after the partition contrast enhancement processing, R1′ is the gray-scale value of the red sub-pixel after the partition contrast enhancement processing, G1′ is the gray-scale value of the green sub-pixel after the partition contrast enhancement processing, and B1′ is the gray-scale value of the blue sub-pixel after the partition contrast enhancement processing.
Optionally, the step of converting the RGB pixel data after the partition contrast enhancement processing into RGBW pixel data includes: converting the RGB pixel data after the partition contrast enhancement processing into RGBW pixel data by the following formula:
W=min(R1′,G1′,B1′)
[formula defining the coefficient k, provided as an image in the original publication]
RO=k×R1′-W
GO=k×G1′-W
BO=k×B1′-W
WO=w_scaler×W
wherein w_scaler is an externally configurable coefficient with a value range of 0 to 2.
The embodiment of the invention also provides a display device, comprising: a memory; and one or more processors, the memory and the one or more processors being connected to each other, wherein the memory stores computer-executable instructions for controlling the one or more processors, and the instructions, when executed, implement the image processing method described above.
An embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements the image processing method described above.
Drawings
Fig. 1 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating an original image and an image gray scale distribution without gray scale mapping according to an embodiment of the present invention;
FIG. 3 shows an image and image gray scale distribution after gray scale mapping processing according to an embodiment of the present invention;
fig. 4 shows an original image and its gray scale distribution without gray scale mapping processing, an image and its gray scale distribution after gray scale mapping processing, and an image and its gray scale distribution after partition contrast enhancement processing according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating a method for computing a weighted average of gray-scale values of neighborhood partition images according to an embodiment of the present invention;
FIG. 6 is a flowchart of an image processing method according to an embodiment of the present invention;
fig. 7 is a flowchart of an image processing method according to a second embodiment of the present invention.
Detailed Description
In order to make those skilled in the art better understand the technical solution of the present invention, the image processing apparatus and method, the display apparatus and the computer readable storage medium provided by the present invention are described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention. As shown in fig. 1, the apparatus includes: a brightness saturation calculation unit 11, a normalization processing unit 12, an RGBW determination unit 13, an inverse normalization processing unit 17, and a sub-pixel rendering unit 18. The brightness saturation calculation unit 11 is configured to calculate a brightness value and a saturation value of a current frame picture according to input RGB pixel data; the normalization processing unit 12 is configured to perform normalization processing on the input RGB pixel data; the RGBW determining unit 13 is configured to determine RGBW pixel data according to the normalized RGB pixel data and the brightness value and the saturation value; the inverse normalization processing unit 17 is configured to perform inverse normalization processing on the RGBW pixel data; and the sub-pixel rendering unit 18 is configured to arrange the RGBW pixel data after the inverse normalization processing and determine the output RGBW pixel data.
In this embodiment, the brightness saturation calculation unit 11 is configured to calculate a brightness value and a saturation value of the current frame picture according to the input RGB pixel data. In image processing, commonly used color spaces include the RGB, CMYK, HSV, and HSL models. The RGB model takes a three-dimensional coordinate form and is commonly used for color display and image processing. The HSV model is a perceptual color model that defines colors by hue (H), saturation (S), and value (V, i.e., brightness); it is closer to the way the human eye perceives colors and focuses on color representation, so in computer graphics and display applications the RGB model is often converted into the HSV model for calculation. The embodiment of the invention adopts a conventional conversion formula in the prior art to convert the RGB model into the HSV model, so as to obtain the brightness value (Value) and the saturation value (Saturation) of the current frame picture.
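As a concrete illustration of how such a brightness/saturation calculation might look, the following Python sketch uses the standard RGB-to-HSV definitions (V = max(R, G, B), S = (max - min) / max). The patent only refers to a conventional prior-art conversion formula, so the exact formula, the scale of V, and the reduction of per-pixel values to a single frame-level value by averaging are assumptions.

```python
import numpy as np

def frame_brightness_saturation(rgb):
    """Hypothetical sketch of the brightness/saturation calculation (unit 11).

    rgb: H x W x 3 uint8 array of input RGB pixel data.
    Uses the standard RGB->HSV definitions V = max(R, G, B) and
    S = (max - min) / max; averaging the per-pixel values into one
    frame-level V and S is an assumption, since the text only states that
    a brightness value and a saturation value are computed per frame.
    """
    rgb = rgb.astype(np.float64)
    cmax = rgb.max(axis=2)
    cmin = rgb.min(axis=2)
    v = cmax                                             # brightness on a 0..255 scale
    s = np.where(cmax > 0, (cmax - cmin) / np.where(cmax > 0, cmax, 1), 0.0)
    return float(v.mean()), float(s.mean())              # V in 0..255, S in 0..1
```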
In addition, in the present embodiment, the normalization processing unit 12 is configured to perform normalization processing on the input RGB pixel data. Each sub-pixel value in the RGB pixel data lies in the range 0-255, so the RGB pixel data is mapped into the range 0-1 for processing, which makes the subsequent data processing more convenient and faster. The normalization processing unit 12 is configured to subject the input RGB pixel data (Rin/Gin/Bin) to normalization processing by the following formulas,
[normalization formulas for R0, G0, and B0, provided as images in the original publication]
wherein γ is 2.2.
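Because the normalization formulas are reproduced only as images in the original publication, the exact form is not recoverable from the text. A minimal sketch consistent with "map 0-255 data into 0-1 with γ = 2.2" is the power-law form below; the specific formula used by the patent is an assumption.

```python
import numpy as np

GAMMA = 2.2

def normalize(channel, gamma=GAMMA):
    """Assumed normalization (unit 12): x0 = (x_in / 255) ** gamma.

    channel: array of 8-bit sub-pixel values (Rin, Gin, or Bin).
    Returns values in the 0..1 range.
    """
    return (np.asarray(channel, dtype=np.float64) / 255.0) ** gamma
```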
Further, the RGBW determining unit 13 includes: a gray scale mapping processing unit 14, a divisional contrast enhancement processing unit 15, and an RGBW converting unit 16. The gray scale mapping processing unit 14 is configured to perform gray scale mapping processing on the RGB pixel data after the normalization processing; the partition contrast enhancement processing unit 15 is configured to perform partition contrast enhancement processing on the RGB pixel data after the gray scale mapping processing; and the RGBW converting unit 16 is configured to convert the RGB pixel data subjected to the divisional contrast enhancement processing into RGBW pixel data.
Specifically, the gray scale mapping processing unit 14 is configured to calculate a gain coefficient K_scaler and perform gray scale mapping processing on the normalized RGB pixel data by using the calculated gain coefficient K_scaler, where the gain coefficient K_scaler is calculated by the following formula:
K_scaler=128+scaler_in×S×V/255
wherein scaler_in is an externally configurable coefficient with a value range of 0 to 2, S is the saturation value obtained after the RGB model is converted into the HSV model, and V is the brightness value obtained after the RGB model is converted into the HSV model. Gray scale mapping processing is then performed on the normalized RGB pixel data, using the calculated gain coefficient K_scaler, through the following formulas:
R1=K_scaler×R0
G1=K_scaler×G0
B1=K_scaler×B0
wherein, R0 is the gray scale value of the normalized red sub-pixel, G0 is the gray scale value of the normalized green sub-pixel, B0 is the gray scale value of the normalized blue sub-pixel, R1 is the gray scale value of the red sub-pixel after gray scale mapping, G1 is the gray scale value of the green sub-pixel after gray scale mapping, and B1 is the gray scale value of the blue sub-pixel after gray scale mapping.
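A direct transcription of the two formulas above into Python might look as follows; how K_scaler is meant to interact with the 0-1 normalized data (for example, whether a further rescaling is applied) is not fully specified in the text, so no additional rescaling is assumed here.

```python
def gray_scale_mapping(r0, g0, b0, s, v, scaler_in=1.0):
    """Sketch of the gray scale mapping step (unit 14).

    Implements K_scaler = 128 + scaler_in * S * V / 255 and
    R1 = K_scaler * R0 (and likewise for G and B) exactly as stated.
    scaler_in is the externally configurable coefficient (0..2).
    """
    k_scaler = 128.0 + scaler_in * s * v / 255.0
    return k_scaler * r0, k_scaler * g0, k_scaler * b0
```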
It should be noted that the gray scale mapping processing unit 14 is provided in the present invention to increase the gray scale values and thereby increase the brightness of the image within a certain range. Fig. 2 shows an original image and its gray scale distribution without gray scale mapping processing according to an embodiment of the present invention, and fig. 3 shows the image and its gray scale distribution after gray scale mapping processing according to an embodiment of the present invention. Comparing fig. 2 and fig. 3, it can be seen that the gray scale mapping processing increases the gray scale values of the image, so that the overall display brightness is increased.
Further, the partition contrast enhancement processing unit 15 is configured to convert the RGB image after the gray scale mapping processing into a YUV color space to obtain a YUV image including YUV components, keep the UV components unchanged, and divide the Y component image into a plurality of partitions of the same size; the partitioning depends on the size, resolution, and other characteristics of the screen and should be chosen reasonably according to the practical situation. The number s(i) of sub-pixels of different gray scales in each partition is then counted, where i = 0, 1, 2, …, 255; when s(i) is greater than a first threshold value, the value of s(i) is set to the first threshold value, and when s(i) is less than a second threshold value, the value of s(i) is set to the second threshold value, the first threshold value being greater than the second threshold value. Further, the numbers s(i) of sub-pixels of different gray scales in each partition are accumulated to obtain the cumulative number p(i) of sub-pixels in the current frame image,
p(i) = s(0) + s(1) + … + s(i),  i = 0, 1, 2, …, 255
then, the next frame image is mapped by using the gray scale distribution of the current frame image according to the following formula:
y_out(i) = 255 × p(i) / p(255),  i = 0, 1, 2, …, 255
wherein i is an input gray-scale value and y_out(i) is the gray-scale value of the Y component after the partition contrast enhancement processing; the processed YUV pixel data formed by the Y component after the partition contrast enhancement processing and the original UV components is then converted back to the RGB color space to obtain RGB pixel data after the partition contrast enhancement processing, where R1′ is the gray-scale value of the red sub-pixel after the partition contrast enhancement processing, G1′ is the gray-scale value of the green sub-pixel after the partition contrast enhancement processing, and B1′ is the gray-scale value of the blue sub-pixel after the partition contrast enhancement processing.
The present application performs gray scale mapping on the red (R), green (G), and blue (B) sub-pixels in the RGB pixel data respectively and then converts the RGB image into the YUV color space to obtain a YUV image including YUV components, where Y represents luminance and U and V represent the chrominance (color-difference) components. Image and video files coded in the YUV format occupy less bandwidth in transmission, because more luminance information and less color-difference information can be stored. In the present application, the UV components are kept unchanged and the partition contrast enhancement processing is performed only on the Y component image, and the enhanced Y component and the original UV values are then converted back to the RGB color space, so that the overall contrast of the image becomes more balanced and the contrast of the displayed picture is enhanced while the display brightness is improved. Hereinafter, unless otherwise specified, all pixels refer to sub-pixels.
Specifically, the setting of the first threshold and the second threshold in the embodiment of the present invention is explained as follows. To account for variations in actual images and to avoid the problems of an over-bright image or of weakened targets, the invention limits the number of pixel points of each gray scale in each partition by setting a first threshold (an upper threshold) and a second threshold (a lower threshold), thereby limiting the number of pixel points of each gray scale in the whole image. Specifically, when the gray values of most pixels in the image are distributed near 0, a large number of low-gray-level pixels may end up with very high gray values after processing if no upper threshold is set, making the whole image too bright; and when most of the image content consists of background pixels and the number of target pixels is small, the few target pixels may end up close to the background pixels after processing if no lower threshold is set, so that the targets in the image are weakened or even disappear.
The specific calculation performed by the partition contrast enhancement processing unit is explained below with a simple example, to show more clearly how the overall distribution of the image changes after the upper and lower thresholds are set. For ease of understanding, the whole image is treated as a single partition, assumed to be 25 × 25 (625 pixel points), with the following gray scale distribution of the original image in the partition:
the number of the pixel points with the gray value of 10 is 100;
the number of the pixels with the gray value of 20 is 200;
the number of the pixel points with the gray value of 30 is 200;
the number of the pixels with the gray value of 200 is 100;
the number of pixels with the gray value of 255 is 25;
if no threshold limit is set, after contrast enhancement processing, the result is obtained as:
the gray value of the pixel points with the gray value of 10 in the original image is changed into:
255×100/625≈41,
and similarly, the gray values of the pixel points with the gray values of 20/30/200/255 in the original image after processing are respectively:
255×300/625≈122,
255×500/625=204,
255×600/625≈245,
255×625/625=255.
if the upper threshold is set to be 100 (explained by taking the upper threshold as an example), the processed gray values of the pixel points with the gray value of 10/20/30/200/255 in the original image are respectively:
255×100/425=60,
255×200/425=120,
255×300/425=180,
255×400/425=240,
255×425/425=255。
here, 425 is 100 (the number of pixels having a grayscale of 10) +100 (the number of pixels having a grayscale of 20) +100 (the number of pixels having a grayscale of 30) +100 (the number of pixels having a grayscale of 200) +25 (the number of pixels having a grayscale of 255).
As can be seen from the above example, the overall contrast of the image processed by the partition contrast enhancement processing unit is more balanced, and the contrast of the display screen is enhanced while the display brightness is ensured to be improved.
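The per-partition computation can be summarized by the following Python sketch, which reproduces the threshold case of the worked example above (a single 25 × 25 partition with an upper threshold of 100). The RGB/YUV conversion and the handling of multiple partitions are omitted; whether the lower threshold is also applied to gray levels with zero count is not specified in the text and is left as an option here.

```python
import numpy as np

def partition_mapping(gray_counts, upper=None, lower=None):
    """Sketch of the per-partition contrast-enhancement mapping (unit 15).

    gray_counts: s(i), the number of sub-pixels of each gray level i
    (length 256) inside one partition. Counts are clipped to the optional
    upper/lower thresholds, accumulated into p(i), and turned into the
    mapping y_out(i) = 255 * p(i) / p(255).
    """
    s = np.asarray(gray_counts, dtype=np.float64)
    if upper is not None:
        s = np.minimum(s, upper)
    if lower is not None:
        s = np.maximum(s, lower)      # note: this also raises empty gray levels
    p = np.cumsum(s)                  # p(i) = s(0) + s(1) + ... + s(i)
    return np.round(255.0 * p / p[255]).astype(int)

# Threshold case of the worked example: one 25 x 25 partition, upper threshold 100.
s = np.zeros(256)
s[10], s[20], s[30], s[200], s[255] = 100, 200, 200, 100, 25
y_out = partition_mapping(s, upper=100)
print([y_out[i] for i in (10, 20, 30, 200, 255)])   # -> [60, 120, 180, 240, 255]
```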
In addition, fig. 4 shows an original image and its gray scale distribution without gray scale mapping processing, the image and its gray scale distribution after gray scale mapping processing, and the image and its gray scale distribution after partition contrast enhancement processing according to an embodiment of the present invention. As can be seen from fig. 4, after the partition contrast enhancement processing, and compared with the original gray scale distribution, the brightness of the displayed image is improved without changing the power consumption, and the contrast of the displayed image is enhanced while the display brightness is improved.
It should be noted that, in order to avoid image distortion caused by pixels at the partition boundaries having different gray scale values, the image at the partition boundary positions must transition smoothly; therefore, each partition boundary position needs to be processed by a weighted average of the gray scale values of the neighboring partitions' images (if neighboring partitions exist), so as to ensure the smoothness of the whole displayed image. The weighted average can be calculated as follows:
Result = k1 × I1 + (1 - k1) × I2
wherein I1 and I2 respectively represent the gray scale values of the same pixel point as processed by two horizontally adjacent partitions: I1 is the value from the left partition and I2 is the value from the right partition, and k1 is a weighting coefficient related to the distance of the current pixel point from the partition boundary, as shown in fig. 5, which is a schematic diagram of the calculation method for the weighted average of the gray scale values of neighboring partition images in the embodiment of the present invention. In fig. 5, the left frame and the right frame respectively represent the two adjacent partitions, which overlap horizontally over a width of L pixel points; the final pixel value of each pixel point in the overlapping region is the result of the weighted average calculated according to the above formula, and the weighting coefficient k1 for the pixel position in fig. 5 is calculated as follows:
k1=L2/L=(L–L1)/L
where L is the width of the horizontal overlap in pixels, L1 is the number of pixels from the current pixel to the left boundary of the overlap region, L2 is the number of pixels from the current pixel to the right boundary of the overlap region, and L1 + L2 = L.
To illustrate the setting of the weights more clearly, assume that the number of pixels from the current pixel point in fig. 5 to the right boundary is greater than the number of pixels to the left boundary, i.e., L2 > L1. The pixel point is then closer to the center of the left partition and farther from the center of the right partition, so the result produced by the left partition should be given a higher weight. By the above formula, the value of k1 is greater than the value of 1 - k1, that is, I1 is given a higher weight and I2 a lower weight, so the gray scale value at the partition boundary is determined more reasonably and the image at the partition boundary positions transitions smoothly.
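A minimal sketch of this boundary smoothing for one pixel in the horizontal overlap region is shown below; extending it over a full overlap strip (and to vertical overlaps, if any) is left out.

```python
def blend_overlap(i1, i2, l1, l):
    """Weighted average across a horizontal partition boundary.

    i1, i2: gray-scale results produced for the same pixel by the left and
    right partitions; l: width of the overlap region in pixels; l1: distance
    of the pixel from the left edge of the overlap, so l2 = l - l1.
    Uses k1 = L2 / L = (L - L1) / L, giving pixels that sit closer to the
    left partition's interior a larger weight on the left partition's result.
    """
    k1 = (l - l1) / l
    return k1 * i1 + (1.0 - k1) * i2
```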
Further, the RGBW conversion unit 16 is configured to convert the RGB pixel data after the partition contrast enhancement processing into RGBW pixel data. Since the input signal is RGB data, when the display panel has an RGBW pixel architecture, a data conversion from RGB to RGBW needs to be performed. Algorithms commonly used in the industry for converting RGB into RGBW include replacement-type and enhancement-type algorithms. The RGBW conversion unit 16 according to the embodiment of the present invention is configured to convert the RGB pixel data after the partition contrast enhancement processing into RGBW pixel data by the following formulas:
W=min(R1′,G1′,B1′)
[formula defining the coefficient k, provided as an image in the original publication]
RO=k×R1′-W
GO=k×G1′-W
BO=k×B1′-W
WO=w_scaler×W
wherein w_scaler is an externally configurable coefficient with a value range of 0 to 2. The display brightness of the image can be improved by setting the external gain coefficient w_scaler.
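In code, the conversion could be sketched as below. W = min(R1′, G1′, B1′) and the four output equations follow the text directly; the formula defining k, however, is given only as an image in the original, so the enhancement-style choice k = (max + W) / max used here is purely an assumption and not the patent's own definition.

```python
def rgb_to_rgbw(r1, g1, b1, w_scaler=1.0):
    """Sketch of the RGBW conversion step (unit 16).

    r1, g1, b1: contrast-enhanced sub-pixel values (R1', G1', B1').
    w_scaler: externally configurable coefficient, value range 0..2.
    """
    w = min(r1, g1, b1)
    m = max(r1, g1, b1)
    k = (m + w) / m if m > 0 else 1.0   # ASSUMED form of k; the patent gives k only as an image
    ro = k * r1 - w
    go = k * g1 - w
    bo = k * b1 - w
    wo = w_scaler * w
    return ro, go, bo, wo
```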
Further, the inverse normalization processing unit 17 is configured to perform inverse normalization processing on the RGBW pixel data, after which the processed RGBW pixel data is displayed by the display driving device. The inverse normalization processing unit 17 performs Gamma processing as follows:
[inverse-normalization (Gamma) formulas for the R, G, B, and W outputs, provided as images in the original publication]
wherein γ is 2.2.
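The inverse-normalization (Gamma) formulas are likewise given only as images. Assuming the normalization sketched earlier, its inverse would be x_out = 255 × x^(1/γ), applied to each of the R, G, B, and W channels; this form is an assumption, not the patent's stated formula.

```python
import numpy as np

GAMMA = 2.2

def denormalize(channel, gamma=GAMMA):
    """Assumed inverse normalization (unit 17): x_out = 255 * x ** (1 / gamma).

    channel: array of normalized values; clipped to 0..1 before the
    inverse gamma is applied, then scaled back to the 0..255 range.
    """
    x = np.clip(np.asarray(channel, dtype=np.float64), 0.0, 1.0)
    return 255.0 * x ** (1.0 / gamma)
```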
Further, the sub-pixel rendering unit 18 is configured to arrange the RGBW pixel data after the inverse normalization processing and determine the output RGBW pixel data. Each main pixel is composed of four sub-pixels, namely red + green + blue + white. Sub-pixel rendering (SPR) is a pixel rendering technique that outputs pixel data conforming to the desired visual effect by periodically arranging the sub-pixels in a reasonable order, for example in different arrangements such as "red + green", "green + blue", or "blue + red". By adopting the sub-pixel rendering unit 18, the transmittance of the display device can be further improved, the power consumption of the display device can be reduced, the PPI (pixels per inch) of the display device can be increased, and the picture details can be enriched.
An embodiment of the present invention further provides an image processing method, as shown in fig. 6, fig. 6 shows a flowchart of an image processing method provided in the embodiment of the present invention, and the method includes:
step 101, calculating a brightness value and a saturation value of a current frame picture according to input RGB pixel data;
step 102, normalizing the input RGB pixel data;
step 103, determining RGBW pixel data according to the RGB pixel data after normalization processing and the brightness value and the saturation value;
step 104, performing inverse normalization processing on the RGBW pixel data; and
step 105, arranging the RGBW pixel data after the inverse normalization processing, and determining the output RGBW pixel data.
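The five steps can be tied together as in the following illustrative driver; all helper names (frame_brightness_saturation, normalize, determine_rgbw, denormalize, subpixel_render) are hypothetical stand-ins for the units described above, not functions defined by the patent.

```python
def process_frame(rgb, scaler_in=1.0, w_scaler=1.0):
    """Illustrative end-to-end pipeline for steps 101-105 (names are hypothetical)."""
    v, s = frame_brightness_saturation(rgb)                        # step 101
    r0 = normalize(rgb[..., 0])                                    # step 102
    g0 = normalize(rgb[..., 1])
    b0 = normalize(rgb[..., 2])
    rgbw = determine_rgbw(r0, g0, b0, s, v, scaler_in, w_scaler)   # step 103: mapping, contrast, RGBW
    rgbw = denormalize(rgbw)                                       # step 104
    return subpixel_render(rgbw)                                   # step 105
```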
As shown in fig. 7, fig. 7 is a flowchart illustrating an image processing method according to a second embodiment of the present invention, and step 103 may include:
step 201, performing gray scale mapping processing on the RGB pixel data after normalization processing;
step 202, converting RGB pixel data subjected to gray scale mapping into a YUV color space to obtain a YUV image comprising YUV components, keeping the UV components unchanged, and performing partition contrast enhancement processing on the Y component image;
step 203, converting the processed YUV pixel data formed by the Y component after the partition contrast enhancement processing and the original UV component back to an RGB color space to obtain RGB pixel data after the partition contrast enhancement processing; and
and step 204, converting the RGB pixel data after the partition contrast enhancement processing into RGBW pixel data.
Wherein, step 201 specifically includes:
calculating a gain coefficient K_scaler, and performing gray scale mapping processing on the normalized RGB pixel data by using the gain coefficient K_scaler obtained through calculation, wherein the gain coefficient K_scaler is obtained through calculation according to the following formula:
K_scaler=128+scaler_in×S×V/255
wherein scaler_in is an externally configurable coefficient with a value range of 0 to 2, S is the saturation value, and V is the brightness value.
Performing gray scale mapping processing on the normalized RGB pixel data through the following formula:
R1=K_scaler×R0
G1=K_scaler×G0
B1=K_scaler×B0
wherein, R0 is the gray scale value of the normalized red sub-pixel, G0 is the gray scale value of the normalized green sub-pixel, B0 is the gray scale value of the normalized blue sub-pixel, R1 is the gray scale value of the red sub-pixel after gray scale mapping, G1 is the gray scale value of the green sub-pixel after gray scale mapping, and B1 is the gray scale value of the blue sub-pixel after gray scale mapping.
Wherein, step 202 specifically includes:
converting the RGB image subjected to gray scale mapping processing into a YUV color space to obtain a YUV image comprising YUV components, keeping the UV components unchanged, dividing the Y component image into a plurality of partitions of the same size, and respectively counting the number s(i) of sub-pixels of different gray scales in each partition, where i = 0, 1, 2, …, 255; and
setting the value of s(i) to a first threshold value when s(i) is greater than the first threshold value,
when s(i) is less than a second threshold value, setting the value of s(i) to the second threshold value.
Accumulating the number s(i) of sub-pixels of different gray scales in each partition to obtain the cumulative number p(i) of sub-pixels in the current frame image,
p(i) = s(0) + s(1) + … + s(i),  i = 0, 1, 2, …, 255
then, mapping the next frame image by using the gray scale distribution condition of the current frame image according to the following formula:
y_out(i) = 255 × p(i) / p(255),  i = 0, 1, 2, …, 255
wherein i is an input gray-scale value and y_out(i) is the gray-scale value of the Y component after the partition contrast enhancement processing; the processed YUV pixel data formed by the Y component after the partition contrast enhancement processing and the original UV components is then converted back to the RGB color space to obtain RGB pixel data after the partition contrast enhancement processing, R1′ is the gray-scale value of the red sub-pixel after the partition contrast enhancement processing, G1′ is the gray-scale value of the green sub-pixel after the partition contrast enhancement processing, and B1′ is the gray-scale value of the blue sub-pixel after the partition contrast enhancement processing.
Wherein, step 204 specifically includes:
converting the RGB pixel data after the partition contrast enhancement processing into RGBW pixel data by the following formula:
W=min(R1′,G1′,B1′)
[formula defining the coefficient k, provided as an image in the original publication]
RO=k×R1′-W
GO=k×G1′-W
BO=k×B1′-W
WO=w_scaler×W
wherein w_scaler is an externally configurable coefficient with a value range of 0 to 2.
In addition, for the description of the implementation details and the technical effects of the above methods, reference may be made to the description of the foregoing embodiments of the apparatus, and further description is omitted here.
It should be understood that each unit of the image processing apparatus in the first embodiment may be implemented by hardware, or by a combination of hardware and software. For example, each unit of the apparatus shown in fig. 1 may be a central processing unit (CPU), an application processor (AP), a digital signal processor (DSP), a field-programmable gate array (FPGA), a microcontroller unit (MCU), a filter, an integrated circuit (IC), or an application-specific integrated circuit (ASIC) having the respective functions described in the embodiments of the present disclosure. Alternatively, the respective units of the apparatus shown in fig. 1 may be realized by a combination of a processor, a memory, and a computer program stored in the memory, the processor reading and executing the computer program from the memory so as to function as the respective units of the apparatus shown in fig. 1.
The third embodiment of the invention also provides a display device, which comprises a memory; one or more processors; the memory and the one or more processors are connected to each other; and the memory stores computer-executable instructions for controlling the one or more processors to implement the image processing method of the second embodiment.
An embodiment four of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the image processing method according to the foregoing embodiment two is implemented.
The image processing apparatus and method, display apparatus, and computer-readable storage medium provided by the embodiments of the invention avoid the prior-art problems of insufficient display brightness and low contrast that reduce the display quality of the displayed picture. First, the input RGB image is pre-processed, and a certain degree of gray scale mapping is performed during the pre-processing; at the same time, in order to guarantee the image contrast after the gray scale mapping, a partition-based contrast enhancement method is applied to the gray-scale-mapped image to improve the contrast. RGBW conversion and SPR processing are then performed, and the final processing result drives the RGBW display module to display the RGBW signal. The technical effects of improving the brightness of the displayed picture without changing the power consumption and of enhancing the contrast of the displayed picture while the display brightness is improved are thereby achieved.
It should be noted that the above-mentioned embodiments of the present disclosure may be combined with each other without explicit conflict.
It is to be understood that the above embodiments are merely exemplary embodiments that are employed to illustrate the principles of the present disclosure, and that the present disclosure is not limited thereto. It will be apparent to those skilled in the art that various changes and modifications can be made herein without departing from the scope of the disclosure as defined by the appended claims.

Claims (16)

1. An image processing apparatus characterized by comprising:
the brightness saturation calculating unit is used for calculating a brightness value and a saturation value of the current frame picture according to the input RGB pixel data;
the normalization processing unit is used for performing normalization processing on input RGB pixel data;
the RGBW determining unit is used for determining RGBW pixel data according to the RGB pixel data after normalization processing and the brightness value and the saturation value;
the inverse normalization processing unit is used for performing inverse normalization processing on the RGBW pixel data; and
and the sub-pixel rendering unit is used for arranging the RGBW pixel data after the inverse normalization processing and determining the output RGBW pixel data.
2. The image processing apparatus according to claim 1, wherein the RGBW determining unit includes:
the gray scale mapping processing unit is used for carrying out gray scale mapping processing on the RGB pixel data after the normalization processing;
the partition contrast enhancement processing unit is used for carrying out partition contrast enhancement processing on the RGB pixel data subjected to the gray scale mapping processing; and
and the RGBW conversion unit is used for converting the RGB pixel data after the partition contrast enhancement processing into RGBW pixel data.
3. The image processing apparatus according to claim 2, wherein the grayscale mapping processing unit is configured to:
calculating a gain coefficient K_scaler, and performing gray scale mapping processing on the normalized RGB pixel data by using the gain coefficient K_scaler obtained through calculation, wherein the gain coefficient K_scaler is obtained through calculation according to the following formula:
K_scaler=128+scaler_in×S×V/255
wherein scaler_in is an externally configurable coefficient with a value range of 0 to 2, S is the saturation value, and V is the brightness value.
4. The image processing apparatus according to claim 3, wherein the grayscale mapping processing unit is configured to:
performing gray scale mapping processing on the normalized RGB pixel data through the following formula:
R1=K_scaler×R0
G1=K_scaler×G0
B1=K_scaler×B0
wherein, R0 is the gray scale value of the normalized red sub-pixel, G0 is the gray scale value of the normalized green sub-pixel, B0 is the gray scale value of the normalized blue sub-pixel, R1 is the gray scale value of the red sub-pixel after gray scale mapping, G1 is the gray scale value of the green sub-pixel after gray scale mapping, and B1 is the gray scale value of the blue sub-pixel after gray scale mapping.
5. The image processing apparatus according to any one of claims 2 to 4, wherein the partition contrast enhancement processing unit is configured to:
converting the RGB image subjected to gray scale mapping processing into a YUV color space to obtain a YUV image comprising YUV components, keeping the UV components unchanged, dividing the Y component image into a plurality of partitions with the same size, and respectively counting the number s(i) of sub-pixels with different gray scales in each partition, wherein i = 0, 1, 2, …, 255; and
setting the value of s(i) to a first threshold value when s(i) is greater than the first threshold value,
when s(i) is less than a second threshold value, setting the value of s(i) to the second threshold value.
6. The image processing apparatus according to claim 5, wherein the partition contrast enhancement processing unit is configured to:
accumulating the number s(i) of sub-pixels with different gray scales in each partition to obtain the cumulative number p(i) of sub-pixels in the current frame image,
p(i) = s(0) + s(1) + … + s(i),  i = 0, 1, 2, …, 255
then, mapping the next frame image by using the gray scale distribution condition of the current frame image according to the following formula:
y_out(i) = 255 × p(i) / p(255),  i = 0, 1, 2, …, 255
wherein i is the input gray level value and y_out(i) is the gray level value of the Y component after the partition contrast enhancement processing,
and then converting the processed YUV pixel data formed by the Y component after the partition contrast enhancement processing and the original UV components back to an RGB color space to obtain RGB pixel data after the partition contrast enhancement processing, wherein R1′ is a gray-scale value of a red sub-pixel after the partition contrast enhancement processing, G1′ is a gray-scale value of a green sub-pixel after the partition contrast enhancement processing, and B1′ is a gray-scale value of a blue sub-pixel after the partition contrast enhancement processing.
7. The image processing apparatus according to claim 6, wherein the RGBW conversion unit is configured to:
converting the RGB pixel data after the partition contrast enhancement processing into RGBW pixel data by the following formula:
W=min(R1′,G1′,B1′)
[formula defining the coefficient k, provided as an image in the original publication]
RO=k×R1′-W
GO=k×G1′-W
BO=k×B1′-W
WO=w_scaler×W
wherein w_scaler is an externally configurable coefficient with a value range of 0 to 2.
8. An image processing method, characterized in that the method comprises:
calculating a brightness value and a saturation value of a current frame picture according to input RGB pixel data;
normalizing the input RGB pixel data;
determining RGBW pixel data according to the RGB pixel data after normalization processing and the brightness value and the saturation value;
performing inverse normalization processing on the RGBW pixel data; and
and arranging the RGBW pixel data after the inverse normalization processing, and determining the output RGBW pixel data.
9. The image processing method of claim 8, wherein the step of determining RGBW pixel data from the normalized RGB pixel data and the lightness and saturation values comprises:
carrying out gray scale mapping processing on the RGB pixel data after normalization processing;
performing partition contrast enhancement processing on the RGB pixel data subjected to the gray scale mapping processing; and
and converting the RGB pixel data after the partition contrast enhancement processing into RGBW pixel data.
10. The image processing method as claimed in claim 9, wherein the step of subjecting the normalized RGB pixel data to the gray-scale mapping process comprises:
calculating a gain coefficient K_scaler, and performing gray scale mapping processing on the normalized RGB pixel data by using the gain coefficient K_scaler obtained through calculation, wherein the gain coefficient K_scaler is obtained through calculation according to the following formula:
K_scaler=128+scaler_in×S×V/255
wherein scaler_in is an externally configurable coefficient with a value range of 0 to 2, S is the saturation value, and V is the brightness value.
11. The image processing method as claimed in claim 10, wherein the step of subjecting the normalized RGB pixel data to the gray-scale mapping process comprises:
performing gray scale mapping processing on the normalized RGB pixel data through the following formula:
R1=K_scaler×R0
G1=K_scaler×G0
B1=K_scaler×B0
wherein, R0 is the gray scale value of the normalized red sub-pixel, G0 is the gray scale value of the normalized green sub-pixel, B0 is the gray scale value of the normalized blue sub-pixel, R1 is the gray scale value of the red sub-pixel after gray scale mapping, G1 is the gray scale value of the green sub-pixel after gray scale mapping, and B1 is the gray scale value of the blue sub-pixel after gray scale mapping.
12. The image processing method according to any one of claims 9 to 11, wherein the step of performing the partition contrast enhancement processing on the RGB pixel data after the gray scale mapping processing includes:
converting the RGB image subjected to gray scale mapping processing into a YUV color space to obtain a YUV image comprising YUV components, keeping the UV components unchanged, dividing the Y component image into a plurality of partitions with the same size, and respectively counting the number s(i) of sub-pixels with different gray scales in each partition, wherein i = 0, 1, 2, …, 255; and
setting the value of s(i) to a first threshold value when s(i) is greater than the first threshold value,
when s(i) is less than a second threshold value, setting the value of s(i) to the second threshold value.
13. The image processing method according to claim 12, wherein the step of performing the partition contrast enhancement processing on the RGB pixel data after the gray scale mapping processing includes:
accumulating the number s(i) of sub-pixels with different gray scales in each partition to obtain the cumulative number p(i) of sub-pixels in the current frame image,
p(i) = s(0) + s(1) + … + s(i),  i = 0, 1, 2, …, 255
then, mapping the next frame image by using the gray scale distribution condition of the current frame image according to the following formula:
y_out(i) = 255 × p(i) / p(255),  i = 0, 1, 2, …, 255
wherein i is the input gray level value and y_out(i) is the gray level value of the Y component after the partition contrast enhancement processing,
and then converting the processed YUV pixel data formed by the Y component after the partition contrast enhancement processing and the original UV components back to an RGB color space to obtain RGB pixel data after the partition contrast enhancement processing, wherein R1′ is a gray-scale value of a red sub-pixel after the partition contrast enhancement processing, G1′ is a gray-scale value of a green sub-pixel after the partition contrast enhancement processing, and B1′ is a gray-scale value of a blue sub-pixel after the partition contrast enhancement processing.
14. The image processing method according to claim 13, wherein the step of converting the RGB pixel data after the partition contrast enhancement processing into RGBW pixel data comprises:
converting the RGB pixel data after the partition contrast enhancement processing into RGBW pixel data by the following formula:
W=min(R1′,G1′,B1′)
[formula defining the coefficient k, provided as an image in the original publication]
RO=k×R1′-W
GO=k×G1′-W
BO=k×B1′-W
WO=w_scaler×W
wherein w_scaler is an externally configurable coefficient with a value range of 0 to 2.
15. A display device, comprising:
a memory;
one or more processors;
wherein the memory and the one or more processors are connected to each other; and
the memory stores computer-executable instructions for controlling the one or more processors to implement the image processing method of any one of claims 8 to 14 when executed.
16. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the image processing method according to any one of claims 8 to 14.
Application CN202011537468.6A, filed 2020-12-23, priority date 2020-12-23, status: Pending.
Title: Image processing apparatus, display apparatus, computer-readable storage medium, and image processing method.
Publication: CN112884661A, published 2021-06-01.
Family ID: 76043859.
Country: China (CN).

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114023276A (en) * 2021-10-12 2022-02-08 苏州蓝博控制技术有限公司 Adaptive soft display control method, control system, control device and computer readable storage medium for liquid crystal display device
CN114023276B (en) * 2021-10-12 2022-09-27 苏州蓝博控制技术有限公司 Adaptive soft display control method, control system, control device and computer readable storage medium for liquid crystal display device
CN117237258A (en) * 2023-11-14 2023-12-15 山东捷瑞数字科技股份有限公司 Night vision image processing method, system, equipment and medium based on three-dimensional engine
CN117237258B (en) * 2023-11-14 2024-02-09 山东捷瑞数字科技股份有限公司 Night vision image processing method, system, equipment and medium based on three-dimensional engine

Similar Documents

Publication Publication Date Title
CN109979401B (en) Driving method, driving apparatus, display device, and computer readable medium
CN108053797B (en) driving method and driving device of display device
JP5430068B2 (en) Display device
CN107863083B (en) Driving method and driving device of display device
CN107978289B (en) Driving method and driving device of display device
CN104900205B (en) Liquid-crystal panel and drive method therefor
US20210098541A1 (en) Method for driving a display panel, display driving device and electronic device
CN107863084B (en) Driving method and driving device of display device
US20100007679A1 (en) Display apparatus, method of driving display apparatus, drive-use integrated circuit, driving method employed by drive-use integrated circuit, and signal processing method
KR100772906B1 (en) Method and apparatus for displaying image signal
US9728160B2 (en) Image processing method of a display for reducing color shift
US11263987B2 (en) Method of enhancing contrast and a dual-cell display apparatus
CN108962167B (en) Data processing method and device, driving method, display panel and storage medium
CN104952410B (en) The display ameliorative way and its equipment of liquid crystal panel
US11398195B2 (en) Backlight brightness processing method and system, backlight brightness adjustment method, storage medium
CN108122546B (en) Display apparatus and image processing method thereof
US20180226031A1 (en) Driving methods and driving devices of display panels
CN112884661A (en) Image processing apparatus, display apparatus, computer-readable storage medium, and image processing method
US9953558B1 (en) Transparent display apparatus and method for driving transparent display panel thereof
CN114267291A (en) Gray scale data determination method, device and equipment and screen driving board
WO2018113050A1 (en) Drive method and drive apparatus of display panel
US11321812B2 (en) Display method, display device, virtual reality display device, virtual reality device, and storage medium
CN109377966B (en) Display method, system and display device
US9311886B2 (en) Display device including signal processing unit that converts an input signal for an input HSV color space, electronic apparatus including the display device, and drive method for the display device
US20200160492A1 (en) Image Adjustment Method and Device, Image Display Method and Device, Non-Transitory Storage Medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination