CN109817170B - Pixel compensation method and device and terminal equipment - Google Patents


Info

Publication number
CN109817170B
CN109817170B (application CN201711166453.1A)
Authority
CN
China
Prior art keywords
image
backlight
pixel
brightness value
compensation
Prior art date
Legal status
Active
Application number
CN201711166453.1A
Other languages
Chinese (zh)
Other versions
CN109817170A (en)
Inventor
张涛
巫红英
李昌禄
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201711166453.1A
Priority to PCT/CN2018/115836 (published as WO2019101005A1)
Publication of CN109817170A
Application granted
Publication of CN109817170B


Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G3/00 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G3/20 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix, no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
    • G09G3/34 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix, no fixed position being assigned to or needed to be assigned to the individual characters or partial characters, by control of light from an independent source
    • G09G3/36 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix, no fixed position being assigned to or needed to be assigned to the individual characters or partial characters, by control of light from an independent source using liquid crystals

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Chemical & Material Sciences (AREA)
  • Crystallography & Structural Chemistry (AREA)
  • Liquid Crystal Display Device Control (AREA)
  • Control Of Indicators Other Than Cathode Ray Tubes (AREA)

Abstract

The application provides a pixel compensation method and apparatus and a terminal device. The method comprises the following steps: acquiring a first backlight image of an image to be processed, wherein pixel points in the first backlight image correspond one-to-one to backlight units in a backlight array, and the backlight array is used to provide backlight for displaying the image to be processed; dividing the pixel points of the first backlight image according to their position attributes to obtain a plurality of regions of the first backlight image; filtering and amplifying the images of the plurality of regions to obtain a second backlight image with the same resolution as the image to be processed, wherein, during filtering, different spatial filtering is applied to the images of at least two of the plurality of regions; and performing pixel compensation on the image to be processed according to the brightness values of the second backlight image. This improves the display effect.

Description

Pixel compensation method and device and terminal equipment
Technical Field
The present application relates to the field of display pixel compensation technology, and more particularly, to a pixel compensation method, apparatus and terminal device.
Background
In order to improve the dynamic range of a display, the backlight module of the display may be divided into a plurality of regions. The brightness of each region of the backlight module is then adjusted according to the distribution of the image content, and pixel compensation is performed on the image pixel values, so as to improve the dynamic range of the display while ensuring the display effect.
In the conventional scheme, when pixel compensation is performed on an image, backlight smoothing is first performed according to the brightness of each backlight area, and pixel compensation is then performed on the smoothed areas to ensure the display effect. Specifically, during backlight smoothing, the same filtering template is used to low-pass filter the entire backlight area before pixel compensation. However, because the backlight module diffuses light differently in different areas, filtering with a single low-pass template cannot simulate the diffusion of the backlight well, and the final display effect of the display is poor.
Disclosure of Invention
The application provides a pixel compensation method, a pixel compensation device and terminal equipment, so as to improve the display effect.
In a first aspect, a pixel compensation method is provided, which includes: acquiring a first backlight image of an image to be processed, wherein pixel points in the first backlight image correspond to backlight units in a backlight array one by one, and the backlight array is used for providing backlight for displaying the image to be processed; dividing the pixel points of the first backlight image according to the position attributes of the pixel points to obtain a plurality of regions of the first backlight image; respectively filtering and amplifying the images of the plurality of regions to obtain a second backlight image with the same resolution as that of the image to be processed, wherein when the images of the plurality of regions are filtered, different spatial filtering is applied to the images of at least two regions of the plurality of regions; and performing pixel compensation on the image to be processed according to the brightness value of the second backlight image.
The backlight array comprises a plurality of backlight units, and the number of the backlight units in the backlight array is the same as the number of the pixel points in the first backlight image.
In the application, the images in at least two areas in the plurality of areas are respectively filtered by adopting different types of spatial filtering, and compared with a mode of filtering by adopting a unified filtering template, the method can better simulate backlight diffusion, so that the images after pixel compensation can obtain better display effect when being displayed.
With reference to the first aspect, in some implementation manners of the first aspect, dividing the pixel points of the first backlight image according to the position attributes of the pixel points to obtain a plurality of regions of the first backlight image includes: dividing pixel points with the same position attribute into the same region, wherein having the same position attribute means having the same number of adjacent pixel points or the same distance from the center position of the first backlight image.
By dividing pixel points with the same position attribute into the same region, the same spatial filtering can be applied to pixel points with the same position attribute and different filtering to pixel points at different positions. This models the light diffusion at different positions well, and the display effect can therefore be improved.
Optionally, the same position attribute may also mean that the number of adjacent pixel points is within a preset range, for example, pixel points whose number of adjacent pixel points is less than or equal to 5 may belong to the same position attribute, and pixel points whose number of adjacent pixel points is greater than 5 may be considered as belonging to another position attribute.
With reference to the first aspect, in some implementation manners of the first aspect, the dividing the pixel points of the first backlight image according to the position attributes of the pixel points to obtain a plurality of regions of the first backlight image includes: dividing the positions of pixel points with 8 adjacent pixel points into a first area; dividing the positions of the pixel points with 5 adjacent pixel points into a second area; and dividing the pixel point position with 3 adjacent pixel points into a third area.
With reference to the first aspect, in certain implementation manners of the first aspect, the performing pixel compensation on the image to be processed according to the brightness value of the second backlight image includes: acquiring a first brightness value of any pixel point from the image to be processed; acquiring a second brightness value of a pixel point at a position corresponding to any one pixel point from the second backlight image; acquiring a maximum brightness value, wherein the maximum brightness value is a brightness upper limit value of the second backlight image or a brightness upper limit value which can be displayed by the backlight unit; determining a compensation coefficient of any pixel point according to the maximum brightness value and the second brightness value; and compensating the first brightness value of any pixel point according to the compensation coefficient to obtain a target brightness value of any pixel point.
In the process of obtaining the second backlight image, different spatial filtering is adopted for different areas, so that the backlight diffusion phenomenon can be better simulated, and therefore when pixel compensation is carried out on a subsequent image to be processed, pixel compensation can be carried out on pixel points to be processed according to the real brightness value of backlight, and a better pixel compensation effect is obtained.
With reference to the first aspect, in some implementation manners of the first aspect, the compensating the first luminance value of the any one pixel point according to the compensation coefficient to obtain a target luminance value of the any one pixel point includes: and determining the product of the first brightness value and the compensation coefficient as a target brightness value of any pixel point.
With reference to the first aspect, in certain implementations of the first aspect, the compensation coefficient is obtained according to the following formula:
K = (log2(BLmax / BL))^(1.0/γ)
where K is the compensation coefficient, BLmax is the maximum brightness value, BL is the second brightness value, and γ is a preset gamma coefficient, which may generally be 2.2.
Because the human eye processes image signals in an approximately logarithmic way, taking the logarithm during pixel compensation effectively improves the contrast of the image. In addition, because BLmax in the above formula takes the maximum brightness value, the compensation coefficient K cannot become too large, which avoids pixel overflow during pixel compensation.
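Read this way (with the flattened exponent restored as a superscript on the logarithm), the coefficient can be sketched in a few lines. This is a hedged illustration, not the patent's implementation; the function name and the assumption that BL stays in (0, BLmax] are mine:

```python
import math

def compensation_coefficient(bl_max, bl, gamma=2.2):
    """K = (log2(bl_max / bl)) ** (1.0 / gamma).

    bl_max: brightness upper limit of the second backlight image (or of the
    backlight unit); bl: second brightness value at the pixel position.
    Assumes 0 < bl <= bl_max, so the logarithm is non-negative.
    """
    return math.log2(bl_max / bl) ** (1.0 / gamma)

# When the local backlight is at half the maximum, log2(2) = 1 and K = 1,
# so the pixel luminance passes through unchanged.
```

Note that K grows above 1 only when the local backlight falls below half the maximum, which matches the text's point that capping BLmax keeps the coefficient from becoming too large.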
With reference to the first aspect, in some implementation manners of the first aspect, when the format of the image to be processed is RGB, the obtaining a first luminance value of any one pixel point in the image to be processed includes: determining the maximum component value of the three component values of any pixel point as the first brightness value; or, the first brightness value is calculated according to the three component values of any pixel point.
Optionally, when the first luminance value is calculated from the three component values of any one pixel point, it may be calculated by the formula Y = ((R × 299) + (G × 587) + (B × 114))/1000, where Y denotes the first luminance value, and R, G and B denote the three component values of the pixel point to be processed, respectively.
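The two ways of obtaining the first luminance value described above (maximum component, or the weighted sum) can be sketched as follows; the function names are illustrative, not from the patent:

```python
def luma_max(r, g, b):
    """Fast option: the largest of the three RGB components."""
    return max(r, g, b)

def luma_weighted(r, g, b):
    """Weighted option: Y = ((R*299) + (G*587) + (B*114)) / 1000."""
    return ((r * 299) + (g * 587) + (b * 114)) / 1000
```

The weights sum to 1000, so a pure white pixel (255, 255, 255) maps back to 255 exactly.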
In a second aspect, there is provided a pixel compensation arrangement comprising means for performing the method of the first aspect or its various implementations.
In a third aspect, a pixel compensation device is provided, which includes: a memory for storing a program; a processor for executing the program stored by the memory, the processor being configured to perform the method of the first aspect or its various implementations when the program is executed.
In a fourth aspect, a terminal device is provided, where the terminal device includes the pixel compensation apparatus in the second aspect, and a display, where the display is configured to display an image obtained by performing pixel compensation on the image to be processed by the pixel compensation apparatus.
In a fifth aspect, a terminal device is provided, where the terminal device includes: a memory for storing a program; a processor for executing the program stored by the memory, the processor being configured to perform the method of the first aspect or its various implementations when the program is executed.
In a sixth aspect, a computer-readable medium is provided, storing program code for execution by a device, the program code comprising instructions for performing the method of the first aspect or its various implementations.
Drawings
FIG. 1 is a schematic flow chart diagram of a pixel compensation method of an embodiment of the present application;
FIG. 2 is a schematic diagram of a backlight array;
FIG. 3 is a schematic diagram of a first backlight image;
FIG. 4 is a schematic illustration of a plurality of regions of a first backlight image;
FIG. 5 is a schematic illustration of a plurality of regions of a first backlight image;
FIG. 6 is a schematic illustration of a plurality of regions of a first backlight image;
FIG. 7 is a schematic flow chart diagram of a pixel compensation method of an embodiment of the present application;
FIG. 8 is a schematic diagram of a filtering template;
FIG. 9 is a schematic diagram of a pixel compensation curve;
FIG. 10 is a schematic block diagram of a pixel compensation arrangement of an embodiment of the present application;
fig. 11 is a schematic block diagram of a terminal device according to an embodiment of the present application.
Detailed Description
The technical solution in the present application will be described below with reference to the accompanying drawings.
The dynamic range of an image is generally the ratio of the brightest luminance to the darkest luminance: the larger the ratio, the larger the dynamic range, the more levels the image can display, and the better the display effect.
The natural brightness range is wide: from the night sky (10^-3 cd/m^2) to the sun itself (10^5 cd/m^2), there are about 8 orders of magnitude of luminance. Thanks to its own accommodation mechanism, the human eye can capture about 5 orders of magnitude of luminance from nature. However, current displays can generally present a dynamic range of only 2-3 orders of magnitude and cannot reflect the real information of a natural scene as observed by the human eye.
To improve the dynamic range of a display and bring it closer to the dynamic range of natural scenes, so that the real information of those scenes is reflected better, area (local) backlight dimming techniques are generally adopted. They comprise two major parts: backlight brightness extraction and pixel compensation.
The backlight brightness extraction refers to dividing the backlight module into a plurality of partitions (the shape of each partition can be a rectangular area), then dynamically extracting characteristic parameters capable of representing the brightness information of each partition according to the image content corresponding to each partition, and then determining the brightness of the backlight unit of each partition according to the parameters.
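As a rough illustration of backlight brightness extraction, the sketch below splits a luminance image into rectangular partitions and extracts one brightness value per partition. The patent only speaks of "characteristic parameters", so using the maximum is an assumption for illustration:

```python
def extract_backlight(image, rows, cols):
    """Split a 2-D luminance image (list of lists) into rows x cols
    rectangular partitions and take one brightness value per partition.
    The maximum is used as the characteristic parameter here; the patent
    leaves the exact feature open. Assumes the image dimensions are
    divisible by rows and cols.
    """
    ph = len(image) // rows       # partition height in pixels
    pw = len(image[0]) // cols    # partition width in pixels
    return [[max(image[y][x]
                 for y in range(r * ph, (r + 1) * ph)
                 for x in range(c * pw, (c + 1) * pw))
             for c in range(cols)]
            for r in range(rows)]
```

The returned grid is exactly the "first backlight image" of step 101: one value per backlight partition.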
After the brightness of each partition of the backlight module is determined, the brightness of the pixels should be adjusted to some extent, so that the color of the image does not deviate significantly before and after adjustment, ensuring the display effect. Once the backlight brightness is obtained, the diffusion of the backlight needs to be simulated to obtain the brightness value corresponding to each pixel point of the Liquid Crystal Display (LCD) panel; pixel compensation is then performed according to these brightness values. How to simulate backlight diffusion and how to perform pixel compensation while guaranteeing the display effect are therefore the problems to be solved.
The pixel compensation method according to the embodiment of the present application is described in detail below with reference to fig. 1.
Fig. 1 is a schematic flow chart of a pixel compensation method according to an embodiment of the present application. The method shown in fig. 1 may be performed by a liquid crystal display device, a smart terminal, a tablet computer, a desktop computer, or the like capable of displaying video. The method shown in fig. 1 includes steps 101 to 104, and the steps 101 to 104 are described in detail below with reference to specific examples.
101. A first backlight image of an image to be processed is acquired.
The first backlight image is a two-dimensional matrix formed by luminance values of a backlight array of a display device for displaying the image to be processed, and pixel points in the first backlight image are in one-to-one correspondence with backlight units in the backlight array (the backlight array is used for providing backlight for displaying the image to be processed), and the luminance value of each pixel point is used for representing the luminance of the corresponding backlight unit in the backlight array. In addition, the backlight array includes the same number of backlight units as the number of pixel points in the first backlight image.
It should be understood that the backlight array of the display device is composed of a plurality of backlight units, the brightness of each backlight unit can be adjusted individually (the brightness of each backlight unit can be adjusted by the control unit of the display), and the area where each backlight unit is located can be referred to as a partition of the backlight array.
For example, if the backlight array of the display is composed of M × N backlight units (M and N are both integers greater than or equal to 1), the whole area of the backlight array is composed of M × N partitions; accordingly, the first backlight image includes M × N pixel points, and each pixel point corresponds to one partition of the backlight array.
As shown in fig. 2, the entire area of the backlight array is composed of 6 × 10 partitions, each of which is provided with one backlight unit. Accordingly, the first backlight image is shown in fig. 3, and the first backlight image includes 6 × 10 pixel points, and each pixel point corresponds to one partition in the backlight array in fig. 2.
102. And dividing the pixel points of the first backlight image according to the position attributes of the pixel points to obtain a plurality of regions of the first backlight image.
When the pixel points of the first backlight image are divided, pixel points with the same position attribute (also called position type) can be placed in the same region. That is, pixel points in the same region share a position attribute and therefore have similar characteristics, so the same filtering template can be applied to them in the subsequent filtering.
Optionally, as an embodiment, performing region division on the pixel point of the first backlight image according to the position attribute of the pixel point to obtain a plurality of regions of the first backlight image includes: and dividing the pixel points with the same position attribute into the same area.
Having the same position attribute may mean that the number of adjacent pixel points is the same, or that the distance from the center position of the first backlight image is the same.
Specifically, in the first backlight image, if the number of adjacent pixel points of some pixel points is the same, the position attributes of the pixel points can be considered to be the same.
Optionally, as an embodiment, dividing the pixel points of the first backlight image according to the position attribute of the pixel points to obtain a plurality of regions of the first backlight image includes: dividing the positions of pixel points with 8 adjacent pixel points into a first area; dividing the positions of the pixel points with 5 adjacent pixel points into a second area; and dividing the pixel point position with 3 adjacent pixel points into a third area.
For example, as shown in fig. 4, in the first region, the number of adjacent pixel points of each pixel point is 8, the number of adjacent pixel points of each pixel point in the second region is 5, and the number of adjacent pixel points of each pixel point in the third region is 3.
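The three-region division by neighbour count described above can be sketched as follows; this is a hedged illustration, since the patent prescribes the rule but not any code:

```python
def neighbor_count(i, j, rows, cols):
    """Number of 8-connected neighbours of grid position (i, j)."""
    count = 0
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if (di, dj) != (0, 0) and 0 <= i + di < rows and 0 <= j + dj < cols:
                count += 1
    return count

def divide_regions(rows, cols):
    """Map neighbour counts to the three regions of the passage:
    8 neighbours -> first region (interior), 5 -> second (edges),
    3 -> third (corners)."""
    region = {8: 1, 5: 2, 3: 3}
    return [[region[neighbor_count(i, j, rows, cols)] for j in range(cols)]
            for i in range(rows)]
```

On the 6 × 10 grid of fig. 2/3 this yields exactly four corner pixels in the third region, the remaining border pixels in the second, and the interior in the first.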
Alternatively, in the first backlight image, if the distances between some pixel points and the center position of the first backlight image are equal, the position attributes of the pixel points may also be considered to be the same.
Specifically, pixel points whose distance from the center position of the first backlight image is smaller than a first distance may be divided into one region, and pixel points whose distance from the center position is larger than the first distance may be divided into another region.
For example, as shown in fig. 5, the pixel points having the distance from the center position of the first backlight image greater than two pixel points are divided into the first region, and the pixel points having the distance from the center position of the first backlight image less than or equal to two pixel points are divided into the second region. In fig. 5, the pixel points in the first region are mainly the pixel points closer to the center position, and the pixel points in the second region are mainly the pixel points farther from the center position. It should be understood that the distance from each pixel point to the center of the first backlight image refers to the horizontal distance or the vertical distance from the edge of the pixel point to the center.
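A sketch of the distance-based division follows. The passage only speaks of the horizontal or vertical distance to the center, so combining the two as a Chebyshev distance (the larger of the horizontal and vertical offsets) is an assumption made here for illustration:

```python
def divide_by_distance(rows, cols, threshold):
    """Split backlight-image pixels into two regions by their distance to
    the centre position: distance > threshold -> region 1 (outer),
    distance <= threshold -> region 2 (central). Distance is taken as the
    Chebyshev distance in pixel units (an assumption; the patent text
    mentions horizontal or vertical distance to the centre).
    """
    cy, cx = (rows - 1) / 2.0, (cols - 1) / 2.0
    return [[1 if max(abs(i - cy), abs(j - cx)) > threshold else 2
             for j in range(cols)]
            for i in range(rows)]
```

With threshold 2 on a 6 × 10 grid this reproduces the fig. 5 split: a central block in the second region and everything else in the first.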
Optionally, the same position attribute may also mean that the number of adjacent pixel points is within a preset range, for example, in the first backlight image, a pixel point whose number of adjacent pixel points is less than or equal to 5 may be considered as belonging to the same position attribute, and a pixel point whose number of adjacent pixel points is greater than 5 may be considered as belonging to another position attribute.
For example, as shown in fig. 6, pixel points having a number of adjacent pixel points greater than 5 are divided into a first region (the number of adjacent pixel points of each pixel point in the first region is 8), and pixel points having a number of adjacent pixel points less than or equal to 5 are divided into a second region (the number of adjacent pixel points of the pixel points in the second region is 5 or 3).
Optionally, it may also be considered that pixel points located at the edge of the backlight image have the same position attribute, and pixel points located inside the backlight image have the same position attribute. As also shown in fig. 6, the pixel points at the extreme edge in the first backlight image may be divided into the first region, and the remaining pixel points may be divided into the second region.
It should be understood that the same region division result can be obtained by different division methods, for example, the division result shown in fig. 6 can be obtained by dividing the region according to whether the region is at an edge or not, or can be obtained by dividing according to whether the number of adjacent pixel points is within a preset range or not.
By dividing the pixel points with the same position attribute into the same region, the same spatial filtering can be adopted for the pixel points with the same position attribute, and the filtering effect can be improved.
103. And respectively filtering and amplifying the images of the plurality of areas to obtain a second backlight image with the same resolution as that of the image to be processed.
Wherein, when filtering the images of the plurality of regions, the images of at least two of the plurality of regions are applied with different spatial filtering.
In addition, the application of different spatial filtering to the images of the at least two regions may mean that different filtering templates (specifically, low-pass filtering templates) may be adopted when performing filtering processing on the images of the at least two regions, and the filtering coefficients of the different filtering templates may be different.
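A minimal sketch of step 103: each backlight pixel is low-pass filtered with the template assigned to its region, and the result is then enlarged to the display resolution. Both the 3 × 3 templates and the nearest-neighbour amplification are illustrative assumptions; the patent fixes neither the filter coefficients nor the interpolation method:

```python
def filter_and_upscale(backlight, regions, kernels, scale):
    """Filter each backlight pixel with the low-pass template chosen for its
    region, then enlarge by nearest-neighbour replication so the second
    backlight image matches the resolution of the image to be processed.
    Kernel weights are renormalised at borders where neighbours are missing.
    """
    rows, cols = len(backlight), len(backlight[0])
    filtered = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            kern = kernels[regions[i][j]]
            acc, wsum = 0.0, 0.0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    y, x = i + di, j + dj
                    if 0 <= y < rows and 0 <= x < cols:
                        w = kern[di + 1][dj + 1]
                        acc += w * backlight[y][x]
                        wsum += w
            filtered[i][j] = acc / wsum
    # "amplification": nearest-neighbour upscale to the display resolution
    return [[filtered[i // scale][j // scale]
             for j in range(cols * scale)]
            for i in range(rows * scale)]

# Illustrative templates: a broad box filter for interior pixels (region 1)
# and a more peaked one for border pixels (region 2), mimicking weaker
# light diffusion near the panel edge.
BOX = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
PEAKED = [[0, 1, 0], [1, 4, 1], [0, 1, 0]]
```

In a real pipeline the per-region kernels would be tuned to the measured diffusion of the backlight module; the point of the sketch is only that at least two regions receive different spatial filtering.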
104. And performing pixel compensation on the image to be processed according to the brightness value of the second backlight image.
In the application, the images in at least two areas in the plurality of areas are filtered by adopting different types of spatial filtering, and compared with a mode of filtering by adopting a unified filtering template, the method can better simulate backlight diffusion, so that the images after pixel compensation can obtain better display effect when being displayed.
It should be understood that, in the present application, pixel compensation for an image to be processed refers to compensating or adjusting the brightness value of each pixel point in the image to be processed.
Because the second backlight image has the same image resolution as the image to be processed, each pixel point in the image to be processed has a corresponding position in the second backlight image, which represents the backlight brightness value of the image to be processed at the position, and the pixel of the corresponding position of the image to be processed can be compensated according to the pixel value of the second backlight image.
Optionally, as an embodiment, performing pixel compensation on the image to be processed according to the brightness value of the second backlight image specifically includes: acquiring a first brightness value of any pixel point from an image to be processed; acquiring a second brightness value of a pixel point at a position corresponding to the any pixel point from the second backlight image; obtaining a maximum brightness value; determining a compensation coefficient of any pixel point according to the maximum brightness value and the second brightness value; and according to the compensation coefficient, compensating the first brightness value of any pixel point to obtain a target brightness value of the pixel point.
The maximum brightness value refers to a brightness upper limit value of the second backlight image or a brightness upper limit value that can be displayed by the backlight unit.
In the process of obtaining the second backlight image, different spatial filtering is adopted for different areas, so that the backlight diffusion phenomenon can be better simulated, and therefore when pixel compensation is carried out on a subsequent image to be processed, pixel compensation can be carried out on pixel points to be processed according to the real brightness value of backlight, and a better pixel compensation effect is obtained.
Optionally, when the first luminance value of any one pixel is compensated according to the compensation coefficient to obtain the target luminance value of any one pixel, a product of the compensation coefficient and the first luminance value of any one pixel may be specifically used as the target luminance value of any one pixel.
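Putting the steps together, whole-image compensation under the logarithmic coefficient of this embodiment can be sketched as follows (an illustration assuming luminance images stored as nested lists, with the second backlight image already at the same resolution as the image to be processed):

```python
import math

def compensate_image(image_y, backlight_y, bl_max, gamma=2.2):
    """For each pixel, derive K from the second backlight image at the same
    position and return the target luminance K * Y, where
    K = (log2(bl_max / bl)) ** (1.0 / gamma)."""
    out = []
    for img_row, bl_row in zip(image_y, backlight_y):
        out.append([y * math.log2(bl_max / bl) ** (1.0 / gamma)
                    for y, bl in zip(img_row, bl_row)])
    return out
```

A pixel over a backlight region at half the maximum brightness keeps its luminance; pixels over dimmer regions are boosted.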
The compensation coefficient can be calculated by the formula K = (log2(BLmax / BL))^(1.0/γ), where K is the compensation coefficient, BLmax is the maximum brightness value, BL is the second brightness value, and γ is a preset gamma coefficient; γ may be an empirical value determined through experiments and may generally take 2.2.
Because the human eye processes image signals in an approximately logarithmic way, taking the logarithm during pixel compensation effectively improves the contrast of the image, thereby obtaining a better pixel compensation effect. In addition, because BLmax in the above formula takes the maximum brightness value, the compensation coefficient K cannot become too large, which avoids pixel overflow during pixel compensation.
In addition, when the first luminance value of any one pixel point is determined, if the image format of the image to be processed is RGB, then the maximum component value of the three component values of the any one pixel point may be determined as the first luminance value, or the first luminance value may be calculated according to the three component values of the any one pixel point.
When the first luminance value is calculated from the three component values of the pixel to be processed, it may be calculated by the formula Y = ((R × 299) + (G × 587) + (B × 114))/1000, where Y represents the first luminance value, and R, G and B represent the three component values of the pixel to be processed, respectively.
Directly determining the maximum of the three component values as the first brightness value allows the first brightness value to be determined quickly and reduces the calculation complexity to a certain extent. Calculating the first brightness value from the three component values yields a more accurate brightness value, which allows better pixel compensation subsequently.
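The two alternatives above can be sketched as follows; the helper names are hypothetical:

```python
def luminance_fast(r, g, b):
    # Fast alternative: take the largest of the three component values.
    return max(r, g, b)

def luminance_weighted(r, g, b):
    # Weighted formula from the text: Y = ((R*299)+(G*587)+(B*114))/1000.
    return ((r * 299) + (g * 587) + (b * 114)) / 1000
```

The weighted variant costs three multiplications per pixel but tracks perceived brightness more closely, since the weights sum to 1000 and emphasize the green channel.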
The embodiments of the present application will be described in more detail below with reference to specific examples, and it should be understood that the embodiment shown in fig. 7 is intended to assist those skilled in the art in understanding the embodiments of the present application, and is not intended to limit the embodiments of the present application to the specific values and specific scenarios shown in the method of fig. 7. Various equivalent modifications or changes may be made by those skilled in the art based on the embodiment shown in fig. 7, and these modifications or changes also fall within the scope of the embodiments of the present application.
Fig. 7 is a schematic flow chart of a pixel compensation method according to an embodiment of the present application. The method shown in fig. 7 may be performed by a liquid crystal display device, a smart terminal, a tablet computer, a desktop computer, or the like capable of displaying video. The method shown in fig. 7 specifically includes the following steps:
201. A first backlight image is determined.
The backlight array of the display device includes a plurality of backlight units, and the brightness value of each backlight unit can be dynamically adjusted according to the image content corresponding to the backlight unit, so that the dynamic range of the display device is increased. The brightness values of all backlight units in the backlight array after backlight adjustment can form a first backlight image, and the brightness value of each pixel point in the first backlight image corresponds to the brightness of each backlight unit in the backlight array.
The size of the first backlight image is related to the number of backlight units included in the backlight array. Assuming that the backlight array includes M × N backlight units, the luminance values of the pixels of the first backlight image also form a matrix of size M × N, and the first backlight image BL_init can be represented by formula (1):

BL_init =
[ BL(1,1)  BL(1,2)  ...  BL(1,N)
  BL(2,1)  BL(2,2)  ...  BL(2,N)
  ...
  BL(M,1)  BL(M,2)  ...  BL(M,N) ]   (1)
202. And dividing pixel points of the first backlight image.
It should be understood that each pixel point in the first backlight image corresponds to one backlight unit in the backlight array, and the brightness value of each pixel point is used to represent the brightness of the corresponding backlight unit.
The first backlight image BL_init consists of M × N pixel points in total. Different pixel points in this image may have different numbers of adjacent pixel points: a pixel point at the edge of the first backlight image has fewer adjacent pixel points than a pixel point in its interior. In other words, the first backlight image contains different regions, with pixel points in some regions having fewer adjacent pixel points and those in other regions having more, so in the process of light diffusion, the light diffusion and mixing conditions also differ between regions.
Therefore, in order to better simulate the backlight diffusion phenomenon of different types of areas, the pixel points of the first backlight image can be divided according to the position of each pixel point in the first backlight image, so that different areas are obtained.
As shown in fig. 4, for a first backlight image composed of 6 × 10 pixels, the first backlight image may be divided into three regions, where the number of adjacent pixels of each pixel in the first region is 3, the number of adjacent pixels of each pixel in the second region is 5, and the number of adjacent pixels of each pixel in the third region is 8.
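A sketch of this position-based division, assuming the three regions are distinguished purely by adjacent-pixel count (corners, edges, interior); the function name and region numbering are illustrative:

```python
def classify_regions(m, n):
    """Label each cell of an m x n backlight array by neighbor count:
    3 neighbors -> region 1 (corners), 5 neighbors -> region 2 (edges),
    8 neighbors -> region 3 (interior)."""
    regions = [[0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            on_row_edge = i in (0, m - 1)
            on_col_edge = j in (0, n - 1)
            if on_row_edge and on_col_edge:
                regions[i][j] = 1   # corner cell: 3 neighbors
            elif on_row_edge or on_col_edge:
                regions[i][j] = 2   # edge cell: 5 neighbors
            else:
                regions[i][j] = 3   # interior cell: 8 neighbors
    return regions

# The 6 x 10 example from fig. 4: 4 corners, 24 edge cells, 32 interior cells.
regions = classify_regions(6, 10)
```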
203. Different regions of the first backlight image are mixed using different filter templates, respectively, to obtain a light-mixed backlight image BL_mixed.
When the first backlight image obtained in step 202 is divided into three regions as shown in fig. 4, the first, second and third regions in the first backlight image may be mixed by using the first, second and third filter templates shown in fig. 8.
Specifically, for the first region, light mixing processing may be performed using formula (2) to obtain the brightness value of each pixel point in the first region after the processing.
BL_mixed_red(i,j) = a × BL(i,j) + b × BL(i,j+1) + c × BL(i+1,j) + d × BL(i+1,j+1)   (2)
For the second area, the formula (3) may be adopted to perform light mixing processing, so as to obtain a brightness value of each pixel point in the second area after the light mixing processing.
BL_mixed_green(i,j) = a × BL(i,j) + b × (BL(i,j-1) + BL(i,j+1)) + c × BL(i+1,j) + d × (BL(i+1,j-1) + BL(i+1,j+1))   (3)
For the third area, the formula (4) may be adopted to perform light mixing processing, so as to obtain a brightness value of each pixel point in the third area after the light mixing processing.
BL_mixed_blue(i,j) = a × BL(i,j) + b × (BL(i,j-1) + BL(i,j+1)) + c × (BL(i-1,j) + BL(i+1,j)) + d × (BL(i-1,j-1) + BL(i-1,j+1) + BL(i+1,j-1) + BL(i+1,j+1))   (4)
In the above equations (2) to (4), a, b, c, and d are the coefficients of the first, second, and third filter templates, respectively. The coefficient a is located at the center of the diffusion template and represents the fraction of light energy remaining at the diffusion center after diffusion, while b, c, and d are distributed around the center and represent the diffusion capacity of the center toward its surroundings. Assuming that light energy is conserved during propagation, the coefficients of the first filter template satisfy a + b + c + d = 1, the coefficients of the second filter template satisfy a + 2b + c + 2d = 1, and the coefficients of the third filter template satisfy a + 2b + 2c + 4d = 1.
The specific values of a, b, c, and d can be obtained through a large number of experiments; the coefficients of the different filter templates are as follows:

First-type filter template: a = 0.4, b = 0.2, c = 0.2, d = 0.2

Second-type filter template: a = 0.38, b = 0.15, c = 0.12, d = 0.1

Third-type filter template: a = 0.38, b = 0.112, c = 0.02, d = 0.06
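As an illustration of how one of these templates is applied, the following sketch implements the interior-region (third-template) light mixing in the pattern of equation (4), using the experimental coefficients quoted above. The exact assignment of b, c, and d to horizontal, vertical, and diagonal neighbors is an assumption, since equation (4) is rendered as an image in the original publication:

```python
def mix_interior(bl, i, j, a=0.38, b=0.112, c=0.02, d=0.06):
    """Light mixing for an interior cell (8 neighbors): the center keeps
    fraction a of its light and receives contributions weighted b
    (horizontal), c (vertical), and d (diagonal) from its neighbors.
    Defaults are the third-template experimental values from the text."""
    return (a * bl[i][j]
            + b * (bl[i][j - 1] + bl[i][j + 1])
            + c * (bl[i - 1][j] + bl[i + 1][j])
            + d * (bl[i - 1][j - 1] + bl[i - 1][j + 1]
                   + bl[i + 1][j - 1] + bl[i + 1][j + 1]))
```

With the quoted third-template values, a uniform field of brightness 100 mixes to 100 × (0.38 + 2·0.112 + 2·0.02 + 4·0.06) = 88.4, so these experimental coefficients sum to slightly less than 1, modeling some light loss.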
204. The light-mixed backlight image BL_mixed is expanded to obtain an expanded backlight image BL_expand.
When expanding the light-mixed backlight image, a linear interpolation method may be adopted; further, a bilinear interpolation method may be used. Each expansion may double the light-mixed backlight image in each dimension: for example, if the light-mixed matrix BL_mixed before expansion has size M × N, the expanded matrix BL_expand has size 2M × 2N. It should be understood that any interpolation method can be used to expand the light-mixed backlight image.
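A self-contained sketch of one 2× bilinear expansion step; the clamping behavior at the borders is an implementation choice, not specified in the text:

```python
def expand_2x(bl):
    """Double an M x N matrix to 2M x 2N by bilinear interpolation,
    clamping sample coordinates at the borders."""
    m, n = len(bl), len(bl[0])
    out = [[0.0] * (2 * n) for _ in range(2 * m)]
    for y in range(2 * m):
        for x in range(2 * n):
            # Map the output pixel center back into source coordinates.
            sy = min(max((y - 0.5) / 2.0, 0.0), m - 1)
            sx = min(max((x - 0.5) / 2.0, 0.0), n - 1)
            y0, x0 = int(sy), int(sx)
            y1, x1 = min(y0 + 1, m - 1), min(x0 + 1, n - 1)
            fy, fx = sy - y0, sx - x0
            top = bl[y0][x0] * (1 - fx) + bl[y0][x1] * fx
            bot = bl[y1][x0] * (1 - fx) + bl[y1][x1] * fx
            out[y][x] = top * (1 - fy) + bot * fy
    return out
```

Repeating this step (and following it with the final interpolation of step 206) grows the backlight matrix toward the resolution of the image to be processed while keeping the transitions between regions smooth.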
205. It is determined whether the backlight smoothing between the respective regions of the expanded backlight image BL_expand achieves a preset effect.
If BL_expand achieves the preset smoothing effect, step 206 is performed; if not, steps 203 and 204 above can be repeated until a good smoothing effect is achieved between the regions of BL_expand, that is, until there are no distinct block boundaries between the regions of BL_expand.
206. The expanded backlight image is enlarged to the size of the image to be processed to obtain a target backlight image BL_final for pixel compensation.
It should be understood that when enlarging BL_expand, if the size of the image to be processed is 160 × 160 and the size of BL_expand is 64 × 64, BL_expand can first be doubled and then scaled to the same size as the image to be processed by an interpolation method (specifically, a bilinear interpolation method may be adopted) to obtain the target backlight image BL_final.
207. And converting the image to be processed from the RGB format to the YUV format.
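The patent does not name a specific RGB-to-YUV matrix; a common choice consistent with the luminance weights used elsewhere in the text (299/587/114) is the BT.601 conversion, sketched here as an assumption:

```python
def rgb_to_yuv(r, g, b):
    """BT.601-style RGB -> YUV conversion (an assumption; the patent
    does not specify the matrix). U and V are offset by 128 so that
    neutral gray maps to (Y, 128, 128)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.14713 * r - 0.28886 * g + 0.436 * b + 128.0
    v = 0.615 * r - 0.51499 * g - 0.10001 * b + 128.0
    return y, u, v
```

Working in YUV lets steps 208 and 209 compensate luminance (Y) and chroma (U, V) separately before converting back to RGB.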
208. Pixel compensation is performed on the image to be processed according to the target backlight image BL_final.
Specifically, the pixel compensation may be performed in a logarithmic manner using equation (5).
Y'(i,j) = K(i,j) × Y(i,j), where K(i,j) = [log2(BL'max / BL'(i,j))]^(1.0/γ)   (5)
In formula (5), Y(i,j) is the pixel luminance of pixel point (i,j) in the image to be processed before compensation, and Y'(i,j) is its pixel luminance after compensation. BL'max is the maximum brightness value of the target backlight image BL_final, taken here to avoid pixel overflow when pixel compensation is performed, and BL'(i,j) is the backlight brightness in BL_final corresponding to pixel point (i,j). K(i,j) is the luminance compensation coefficient of the pixel, and γ is the gamma coefficient, generally 2.2.
In order to prevent color distortion of the image after the local dimming, the UV component needs to be compensated as well as the Y component, and specifically, the UV component may also be compensated according to the formula (6).
U'(i,j) = K(i,j) × U(i,j), V'(i,j) = K(i,j) × V(i,j)   (6)
The curve for pixel compensation based on equations (5) and (6) is shown as the logarithmic curve in fig. 9. In fig. 9, as the original pixel luminance Y_in gradually increases, the degree of pixel compensation (the amount by which Y_out is raised) gradually decreases. That is, for low-brightness pixels, because the backlight is reduced more, a greater degree of compensation is needed to offset the reduction in display brightness caused by the reduced backlight and to ensure a display effect basically consistent with that under full backlight; for high-brightness pixels, the backlight reduction is smaller, so only a small degree of compensation is needed to achieve a display effect basically consistent with that under full backlight. Therefore, the pixel compensation in the present application can effectively avoid pixel overflow distortion caused by an excessive degree of pixel compensation.
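Combining equations (5) and (6), a per-pixel compensation step might look as follows; the clipping at 255 and the treatment of U/V as offset-128 signed chroma are assumptions, not stated in the embodiment:

```python
import math

def compensate_pixel(y, u, v, bl, bl_max, gamma=2.2):
    """Logarithmic pixel compensation in the pattern of equations (5)
    and (6): Y' = K * Y with K = [log2(BL_max / BL)]^(1 / gamma), and
    the U/V chroma scaled by the same K to keep color undistorted.
    Clipping at 255 (an assumption) guards against overflow."""
    k = math.log2(bl_max / bl) ** (1.0 / gamma)
    y_out = min(y * k, 255.0)
    # Assumption: U and V are stored with a 128 offset, so scale the
    # signed chroma around that offset.
    u_out = (u - 128.0) * k + 128.0
    v_out = (v - 128.0) * k + 128.0
    return y_out, u_out, v_out
```

At half backlight (K = 1) a pixel passes through unchanged; dimming the backlight further raises K and boosts the pixel, matching the curve in fig. 9.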
209. And converting the pixel-compensated image to be processed from the YUV format to the RGB format to obtain the pixel-compensated image.
The main purpose of pixel compensation is to ensure that the color of the image is not distorted after the backlight brightness is adjusted.
Optionally, in order to further improve the contrast of the image, contrast stretching may be performed on the image after pixel compensation. Since the subjective effect of backlight adjustment and pixel compensation is that dark parts of the image become darker and bright parts become brighter, the pixel-compensated image can be divided by brightness into a low-brightness segment, a middle-brightness segment, and a high-brightness segment. Exponential transformations can be applied to the low-brightness and high-brightness segments, performing gray-scale compression on the low-brightness segment and gray-scale expansion on the high-brightness segment, while a linear transformation is applied to the middle-brightness segment; the specific transformation can be expressed by formula (7).
(Formula (7), rendered as an image in the original publication, defines the piecewise transformation: exponential gray-scale compression below H1, a linear mapping between H1 and H2, and exponential gray-scale expansion above H2.)
In formula (7), Y'(i,j) is the luminance after pixel compensation, Y''(i,j) is the pixel luminance after contrast stretching, and H1 and H2 are the pixel luminance values below which 10% and 90% of the pixels of the compensated image fall; they are also the segmentation points of the luminance transformation curve. The brightness of the low-brightness segment is compressed and becomes darker, while the brightness of the high-brightness segment is expanded and becomes brighter, so that the brightness differences in the image are more obvious and the image contrast is effectively improved.
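The segmentation points H1 and H2 can be found from the brightness distribution of the compensated image; this sketch uses a simple sorted-array percentile, which is one of several reasonable conventions (the patent does not specify one):

```python
def segment_points(luma_values):
    """H1 and H2: luminance values below which roughly 10% and 90% of
    the compensated pixels fall; they split the histogram into the low,
    middle, and high segments used by the piecewise stretch."""
    ordered = sorted(luma_values)
    n = len(ordered)
    h1 = ordered[n // 10]          # ~10th percentile
    h2 = ordered[(9 * n) // 10]    # ~90th percentile
    return h1, h2
```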
In order to ensure that the color is not distorted, the UV component also needs to be transformed by the same multiple as the Y component, and specifically, the UV component can be processed by the same method using the formula (8).
U''(i,j) = U'(i,j) × Y''(i,j) / Y'(i,j), V''(i,j) = V'(i,j) × Y''(i,j) / Y'(i,j)   (8)
Here, Y'(i,j) is the luminance after pixel compensation, Y''(i,j) is the pixel luminance after contrast stretching, K(i,j) is the luminance compensation coefficient of the pixel, U'(i,j) and V'(i,j) are the U and V components after pixel compensation, and U''(i,j) and V''(i,j) are the U and V components after contrast stretching.
The pixel compensation method of the embodiments of the present application has been described in detail above with reference to fig. 1 to 9; the pixel compensation device and the terminal device of the embodiments of the present application are described in detail below with reference to fig. 10 and 11. It should be understood that the pixel compensation device and the terminal device shown in fig. 10 and 11 can perform the pixel compensation method of the embodiments of the present application and can implement its steps; for the sake of brevity, repeated descriptions are appropriately omitted when describing the pixel compensation device and the terminal device.
Fig. 10 is a schematic block diagram of a pixel compensation device according to an embodiment of the present application. The pixel compensation apparatus 300 shown in fig. 10 can perform the pixel compensation method according to the embodiment of the present application, and the pixel compensation apparatus 300 specifically includes:
an obtaining module 301, configured to obtain a first backlight image of an image to be processed, where pixel points in the first backlight image correspond to backlight units in a backlight array in a one-to-one manner, and the backlight array is configured to provide backlight for displaying the image to be processed;
the processing module 302 is configured to divide the pixel points of the first backlight image according to the position attributes of the pixel points to obtain multiple regions of the first backlight image;
the processing module 302 is further configured to filter and amplify the images of the multiple regions respectively to obtain a second backlight image with the same resolution as that of the image to be processed, where, when filtering the images of the multiple regions, different spatial filtering is applied to the images of at least two of the multiple regions;
the pixel compensation module 303 is configured to perform pixel compensation on the image to be processed according to the brightness value of the second backlight image.
In the application, the images in at least two areas in the plurality of areas are respectively filtered by adopting different types of spatial filtering, and compared with a mode of filtering by adopting a unified filtering template, the method can better simulate backlight diffusion, so that the images after pixel compensation can obtain better display effect when being displayed.
Optionally, as an embodiment, the processing module 302 is specifically configured to:
dividing the positions of pixel points with 8 adjacent pixel points into a first area;
dividing the positions of the pixel points with 5 adjacent pixel points into a second area;
and dividing the pixel point position with 3 adjacent pixel points into a third area.
Optionally, as an embodiment, the pixel compensation module 303 is specifically configured to:
acquiring a first brightness value of any pixel point from the image to be processed;
acquiring a second brightness value of a pixel point at a position corresponding to any one pixel point from the second backlight image;
acquiring a maximum brightness value, wherein the maximum brightness value is a brightness upper limit value of the second backlight image or a brightness upper limit value which can be displayed by the backlight unit;
determining a compensation coefficient of any pixel point according to the maximum brightness value and the second brightness value;
and compensating the first brightness value of any pixel point according to the compensation coefficient to obtain a target brightness value of any pixel point.
Optionally, as an embodiment, the pixel compensation module 303 is specifically configured to:
and determining the product of the first brightness value and the compensation coefficient as a target brightness value of any pixel point.
Optionally, as an embodiment, the compensation coefficient is obtained according to the following formula:
K = [log2(BLmax / BL)]^(1.0/γ)   (9)

where K is the compensation coefficient, BLmax is the maximum brightness value, BL is the second brightness value, and γ is a preset gamma coefficient.
Optionally, as an embodiment, when the format of the image to be processed is RGB, the obtaining module 301 is specifically configured to:
determining the maximum component value of the three component values of any pixel point as the first brightness value;
or,
and calculating the first brightness value according to the three component values of any pixel point.
Fig. 11 is a schematic block diagram of a terminal device according to an embodiment of the present application. The terminal device 400 shown in fig. 11 includes a pixel compensation device 401 and a display 402, where the pixel compensation device 401 may be the pixel compensation device 300 shown in fig. 10, the pixel compensation device 401 in the terminal device can perform the pixel compensation method of the embodiment of the present application to perform pixel compensation on the image to be processed, and the display 402 may display the image after the pixel compensation device 401 performs the pixel compensation.
The present application also provides a pixel compensation device, which includes: a memory for storing a program; a processor for executing the program stored in the memory, and when the program is executed, the processor is used for executing the pixel compensation method of the embodiment of the application.
The present application further provides a terminal device, the terminal device includes: a memory for storing a program; a processor for executing the program stored in the memory, and when the program is executed, the processor is used for executing the pixel compensation method of the embodiment of the application.
The present application further provides a computer-readable medium storing program code for execution by a device, the program code including instructions for performing the pixel compensation method of embodiments of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (13)

1. A pixel compensation method, comprising:
acquiring a first backlight image of an image to be processed, wherein pixel points in the first backlight image correspond to backlight units in a backlight array one by one, and the backlight array is used for providing backlight for displaying the image to be processed;
dividing the pixel points of the first backlight image according to the position attributes of the pixel points to obtain a plurality of regions of the first backlight image;
respectively filtering and amplifying the images of the plurality of regions to obtain a second backlight image with the same resolution as that of the image to be processed, wherein when the images of the plurality of regions are filtered, different spatial filtering is applied to the images of at least two regions of the plurality of regions;
and performing pixel compensation on the image to be processed according to the brightness value of the second backlight image.
2. The method of claim 1, wherein the dividing the pixel points of the first backlight image according to the location attributes of the pixel points to obtain the plurality of regions of the first backlight image comprises:
dividing the positions of pixel points with 8 adjacent pixel points into a first area;
dividing the positions of the pixel points with 5 adjacent pixel points into a second area;
and dividing the pixel point position with 3 adjacent pixel points into a third area.
3. The method of claim 1 or 2, wherein the pixel compensating the image to be processed according to the luminance value of the second backlight image comprises:
acquiring a first brightness value of any pixel point from the image to be processed;
acquiring a second brightness value of a pixel point at a position corresponding to any one pixel point from the second backlight image;
acquiring a maximum brightness value, wherein the maximum brightness value is a brightness upper limit value of the second backlight image or a brightness upper limit value which can be displayed by the backlight unit;
determining a compensation coefficient of any pixel point according to the maximum brightness value and the second brightness value;
and compensating the first brightness value of any pixel point according to the compensation coefficient to obtain a target brightness value of any pixel point.
4. The method according to claim 3, wherein said compensating the first luminance value of the arbitrary pixel according to the compensation coefficient to obtain the target luminance value of the arbitrary pixel comprises:
and determining the product of the first brightness value and the compensation coefficient as a target brightness value of any pixel point.
5. The method of claim 3, wherein the compensation factor is obtained according to the following equation:
K = [log2(BLmax / BL)]^(1.0/γ)

where K is the compensation coefficient, BLmax is the maximum brightness value, BL is the second brightness value, and γ is a preset gamma coefficient.
6. The method according to claim 3, wherein when the image format of the image to be processed is RGB, the obtaining a first luminance value of any one pixel point in the image to be processed includes:
determining the maximum component value of the three component values of any pixel point as the first brightness value;
or,
calculating the first brightness value according to the three component values of any one pixel point, wherein the first brightness value is obtained according to the following formula:
Y=((R*299)+(G*587)+(B*114))/1000
wherein Y represents the first luminance value, and R, G and B represent three component values of the arbitrary one pixel point, respectively.
7. A pixel compensation apparatus, comprising:
the device comprises an acquisition module, a processing module and a display module, wherein the acquisition module is used for acquiring a first backlight image of an image to be processed, pixel points in the first backlight image correspond to backlight units in a backlight array one by one, and the backlight array is used for providing backlight for displaying the image to be processed;
the processing module is used for dividing the pixel points of the first backlight image according to the position attributes of the pixel points to obtain a plurality of areas of the first backlight image;
the processing module is further configured to filter and amplify the images of the multiple regions respectively to obtain a second backlight image with the same resolution as that of the image to be processed, where, when filtering the images of the multiple regions, different spatial filtering is applied to the images of at least two of the multiple regions;
and the pixel compensation module is used for carrying out pixel compensation on the image to be processed according to the brightness value of the second backlight image.
8. The apparatus of claim 7, wherein the processing module is specifically configured to:
dividing the positions of pixel points with 8 adjacent pixel points into a first area;
dividing the positions of the pixel points with 5 adjacent pixel points into a second area;
and dividing the pixel point position with 3 adjacent pixel points into a third area.
9. The apparatus of claim 7 or 8, wherein the pixel compensation module is specifically configured to:
acquiring a first brightness value of any pixel point from the image to be processed;
acquiring a second brightness value of a pixel point at a position corresponding to any one pixel point from the second backlight image;
acquiring a maximum brightness value, wherein the maximum brightness value is a brightness upper limit value of the second backlight image or a brightness upper limit value which can be displayed by the backlight unit;
determining a compensation coefficient of any pixel point according to the maximum brightness value and the second brightness value;
and compensating the first brightness value of any pixel point according to the compensation coefficient to obtain a target brightness value of any pixel point.
10. The apparatus of claim 9, wherein the pixel compensation module is specifically configured to:
and determining the product of the first brightness value and the compensation coefficient as a target brightness value of any pixel point.
11. The apparatus of claim 9, wherein the compensation factor is obtained according to the following equation:
K = [log2(BLmax / BL)]^(1.0/γ)

where K is the compensation coefficient, BLmax is the maximum brightness value, BL is the second brightness value, and γ is a preset gamma coefficient.
12. The apparatus according to claim 9, wherein when the format of the image to be processed is RGB, the obtaining module is specifically configured to:
determining the maximum component value of the three component values of any pixel point as the first brightness value;
or,
calculating the first brightness value according to the three component values of any one pixel point, wherein the first brightness value is obtained according to the following formula:
Y=((R*299)+(G*587)+(B*114))/1000
wherein Y represents the first luminance value, and R, G and B represent three component values of the arbitrary one pixel point, respectively.
13. A terminal device, characterized in that the terminal device comprises the pixel compensation device according to any one of claims 7-12 and a display, wherein the display is used for displaying the image after the pixel compensation device performs pixel compensation on the image to be processed.
CN201711166453.1A 2017-11-21 2017-11-21 Pixel compensation method and device and terminal equipment Active CN109817170B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201711166453.1A CN109817170B (en) 2017-11-21 2017-11-21 Pixel compensation method and device and terminal equipment
PCT/CN2018/115836 WO2019101005A1 (en) 2017-11-21 2018-11-16 Pixel compensation method and apparatus, and terminal device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711166453.1A CN109817170B (en) 2017-11-21 2017-11-21 Pixel compensation method and device and terminal equipment

Publications (2)

Publication Number Publication Date
CN109817170A CN109817170A (en) 2019-05-28
CN109817170B true CN109817170B (en) 2020-09-29

Family

ID=66600396

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711166453.1A Active CN109817170B (en) 2017-11-21 2017-11-21 Pixel compensation method and device and terminal equipment

Country Status (2)

Country Link
CN (1) CN109817170B (en)
WO (1) WO2019101005A1 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI712025B (en) * 2019-12-25 2020-12-01 友達光電股份有限公司 Driving method for pixel circuit
CN112348759A (en) * 2020-11-25 2021-02-09 Oppo广东移动通信有限公司 Image display method and apparatus, terminal and readable storage medium
CN112508820A (en) * 2020-12-18 2021-03-16 维沃移动通信有限公司 Image processing method and device and electronic equipment
CN113781968A (en) * 2021-09-08 2021-12-10 深圳创维-Rgb电子有限公司 Backlight adjusting method, device, equipment and storage medium
CN114333710A (en) * 2021-12-29 2022-04-12 青岛信芯微电子科技股份有限公司 Image compensation method, device, display equipment, chip and medium
WO2023133770A1 (en) * 2022-01-13 2023-07-20 硅谷数模(苏州)半导体股份有限公司 Image display method, image display apparatus, and display system
CN114420061B (en) * 2022-01-27 2023-05-23 深圳Tcl数字技术有限公司 Screen brightness adjusting method and device, storage medium and display device

Citations (2)

Publication number Priority date Publication date Assignee Title
CN102074208A (en) * 2009-11-24 2011-05-25 乐金显示有限公司 Liquid crystal display and local dimming control method thereof
CN102097066A (en) * 2009-12-11 2011-06-15 乐金显示有限公司 Driving method for local dimming of liquid crystal display device and apparatus using the same

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
KR100189178B1 (en) * 1995-05-19 1999-06-01 오우라 히로시 Lcd panel test apparatus having means for correcting data difference among test apparatus
JP3512535B2 (en) * 1995-05-19 2004-03-29 株式会社アドバンテスト Panel image quality inspection apparatus and image quality correction method thereof
CN100461247C (en) * 2006-12-11 2009-02-11 友达光电股份有限公司 Method for controlling brightness of image subarea
CN102045493A (en) * 2009-10-21 2011-05-04 华晶科技股份有限公司 Noise suppression method for digital image
KR101611913B1 (en) * 2009-12-18 2016-04-14 엘지디스플레이 주식회사 Method for driving local dimming of liquid crystal display device and apparatus thereof
CN104112433B (en) * 2013-04-22 2016-06-29 华为技术有限公司 A kind of image compensation method and device
TWI536341B (en) * 2014-03-21 2016-06-01 緯創資通股份有限公司 Display compensating method and display compensating system

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN102074208A (en) * 2009-11-24 2011-05-25 乐金显示有限公司 Liquid crystal display and local dimming control method thereof
CN102097066A (en) * 2009-12-11 2011-06-15 乐金显示有限公司 Driving method for local dimming of liquid crystal display device and apparatus using the same

Also Published As

Publication number Publication date
CN109817170A (en) 2019-05-28
WO2019101005A1 (en) 2019-05-31

Similar Documents

Publication Publication Date Title
CN109817170B (en) Pixel compensation method and device and terminal equipment
US10672112B2 (en) Method and system for real-time noise removal and image enhancement of high-dynamic range images
CN109191395B (en) Image contrast enhancement method, device, equipment and storage medium
US8831372B2 (en) Image processing device, image processing method and storage medium storing image processing program
CN108702514B (en) High dynamic range image processing method and device
CN105850114A (en) Method for inverse tone mapping of an image
CN108022223B (en) Tone mapping method based on logarithm mapping function blocking processing fusion
CN108009997B (en) Method and device for adjusting image contrast
CN108076384B (en) image processing method, device, equipment and medium based on virtual reality
US10013747B2 (en) Image processing method, image processing apparatus and display apparatus
CN109686342B (en) Image processing method and device
CN111489322B (en) Method and device for adding sky filter to static picture
CN112017222A (en) Video panorama stitching and three-dimensional fusion method and device
CN108074220A (en) A kind of processing method of image, device and television set
CN108305232B (en) A kind of single frames high dynamic range images generation method
WO2012015020A1 (en) Method and device for image enhancement
CN107993189B (en) Image tone dynamic adjustment method and device based on local blocking
CN105654424B (en) Adjustment ratio display methods, display system, display device and the terminal of image
CN106709888B (en) A kind of high dynamic range images production method based on human vision model
CN108564633B (en) Gray scale image compression method and device and computer equipment
JP5410378B2 (en) Video signal correction apparatus and video signal correction program
CN116453470B (en) Image display method, device, electronic equipment and computer readable storage medium
CN109308690B (en) Image brightness balancing method and terminal
Zhang et al. Adaptive local contrast enhancement for the visualization of high dynamic range images
CN114078094A (en) Image edge brightness correction method, device and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant