CN113935911A - High dynamic range video image processing method, computer device and computer readable storage medium - Google Patents

High dynamic range video image processing method, computer device and computer readable storage medium

Info

Publication number
CN113935911A
Authority
CN
China
Prior art keywords: value, dynamic range, RGB color, image, color value
Prior art date
Legal status
Pending
Application number
CN202111131866.2A
Other languages
Chinese (zh)
Inventor
张景良
Current Assignee
Allwinner Technology Co Ltd
Original Assignee
Allwinner Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Allwinner Technology Co Ltd
Priority to CN202111131866.2A
Publication of CN113935911A
Legal status: Pending

Classifications

    • G06T5/94
    • G06T5/70
    • G06T7/13 Edge detection
    • G06T7/90 Determination of colour characteristics
    • G06T2207/10016 Video; Image sequence
    • G06T2207/20192 Edge enhancement; Edge preservation
    • G06T2207/20208 High dynamic range [HDR] image processing

Abstract

The invention provides a high dynamic range video image processing method, a computer device and a computer readable storage medium. The method comprises the steps of counting local brightness information of an input image, carrying out edge detection on the input image, and obtaining an edge intensity weighted value of each pixel; dividing the input image into a plurality of areas, calculating an area dark part value of each area, and calculating a dark part intensity value of each pixel; performing edge enhancement on the input image, performing color space conversion on the edge-enhanced image, and performing electro-optical conversion; carrying out local brightness interception on the RGB color value data, namely intercepting the RGB color value data according to the maximum brightness value of the monitor; and performing color gamut conversion on the intercepted RGB color value data, performing tone mapping, and performing photoelectric conversion on the tone-mapped RGB color value data. The invention also provides a computer device and a computer readable storage medium for realizing the method. The invention can improve the quality of video images after dynamic range conversion.

Description

High dynamic range video image processing method, computer device and computer readable storage medium
Technical Field
The present invention relates to the field of image processing, and in particular, to a method for compressing a high dynamic range video image, and further to a computer device and a computer-readable storage medium for implementing the method.
Background
With the development of display technology in electronic devices, users have ever higher requirements for the images displayed by electronic devices, and the dynamic range (Dynamic Range) of an image is a parameter often used to evaluate image display quality. In the field of digital image processing, the dynamic range of an image refers to the ratio between the maximum and minimum values of the brightness levels that the image can represent. The unit of brightness is candela per square meter (cd/m²), or nit. The larger the dynamic range of the image brightness, the higher the contrast of the image, the more details can be displayed, and the more faithfully the image can represent a real scene.
Most current video images are Standard Dynamic Range (SDR) video images, which can represent a luminance range of about 0.0002 nit to 200 nit. However, this dynamic range is far lower than the dynamic range that the human eye can perceive. To make the display effect of images more vivid, High Dynamic Range (HDR) technology is gradually being popularized and applied; a high dynamic range video image can generally represent a luminance range from 0.001 nit to 10000 nit, can display more colors, has a wider color gamut, and records more details.
High dynamic range video requires a high dynamic range display in order to be shown correctly. However, most displays used in homes are conventional standard dynamic range displays whose maximum brightness only reaches 200 nits to 300 nits and whose color gamut is narrow, so high dynamic range video images cannot be displayed correctly on them.
For this reason, it is necessary to convert a video image of a high dynamic range into a video image that can be displayed by a display suitable for general household use by an image dynamic range compression technique. At present, the mainstream image dynamic range compression algorithm mainly performs processing such as tone mapping and color gamut mapping on an image, and the image dynamic range compression algorithm is mainly divided into a global dynamic range compression algorithm, a local dynamic range compression algorithm and a mixed dynamic range compression algorithm. The global dynamic range compression algorithm adopts the same dynamic range compression operator for the whole image, and the method has the advantages of high speed, high efficiency and strong robustness, and has the defects that the contrast of the compressed image is low, and the image details are not as excellent as those of the local dynamic range compression algorithm. The local dynamic range compression algorithm divides an image into a plurality of regions, then carries out information statistics on each region, and finally adopts different dynamic range compression operators for the image of each region according to statistical information. The hybrid dynamic range compression algorithm combines global and local methods to maximize the avoidance of halo while preserving image detail.
Most currently used dynamic range compression techniques comprise three parts: image detail enhancement, tone mapping, and color gamut mapping. Image detail enhancement techniques use blur filtering to split an image into a base image and a detail image, and recombine the base image with the detail image after tone mapping and color gamut mapping, so as to prevent the loss of image detail caused by tone-mapping compression, as in the schemes disclosed in Chinese patent applications CN201610798065.4 and CN200810188656.5. In addition, Chinese patent application CN201480009785.7 discloses obtaining a detail image by differencing the images before and after tone mapping, and then recombining the detail image with the original image to obtain a new image. This method can enhance image details, but it does not consider the influence of noise.
For another example, the solution disclosed in Chinese patent application CN20140009785.7 modifies the curve according to statistical information of the whole image, while the solutions disclosed in Chinese patent applications CN201210572466 and CN201610950433.2 modify the curve through local information. In addition, in the aspect of color gamut mapping, the prior art mostly uses a simple matrix multiplication, such as the scheme disclosed in Chinese patent application CN201680069324.8. However, the truncation involved in the matrix multiplication causes color distortion.
In addition, Chinese patent applications CN201480009785.7, CN200810188656.5 and CN201680069324.8 disclose several methods for processing high dynamic range images. Some of them adopt a hybrid high dynamic range compression method and process images by detail enhancement, color gamut mapping and tone mapping. However, these methods do not address the insufficient contrast of the processed images, in particular the unclear detail in dark areas, and do not involve dark-area processing of the images, so the quality of the processed images is unsatisfactory and color distortion may even occur.
Disclosure of Invention
The first purpose of the invention is to provide a high dynamic range video image processing method which can improve the image contrast and ensure the definition of the details of the image in the dark area.
The second objective of the present invention is to provide a computer device for implementing the above-mentioned high dynamic range video image processing method.
A third object of the present invention is to provide a computer readable storage medium storing a computer program for implementing the above high dynamic range video image processing method.
In order to achieve the first object, the method for processing a high dynamic range video image provided by the present invention includes obtaining an input image; counting local brightness information of the input image, carrying out edge detection on the input image, and obtaining an edge intensity weighted value of each pixel; dividing the input image into a plurality of non-overlapping areas, calculating an area dark part value of each area, and calculating a dark part intensity value of each pixel by applying the dark part values of the areas; performing edge enhancement on the input image, performing color space conversion on the edge-enhanced image, and performing electro-optical conversion to obtain RGB color value data of a linear optical signal; carrying out local brightness interception on the RGB color value data, namely intercepting the RGB color value data according to the maximum brightness value of the monitor; and performing color gamut conversion on the intercepted RGB color value data, performing tone mapping, and performing photoelectric conversion on the tone-mapped RGB color value data to form an output color value of an output image.
According to the scheme, the edge enhancement processing is carried out on the image, so that the edge area in the image can be enhanced, and the contrast of the image is improved. In addition, by intercepting the local brightness of the RGB color value data and intercepting the RGB color value according to the maximum brightness value of different monitors, the dark part details of the image can be increased, so that the dark part details of the image can be well reserved, and the quality of the processed image is improved.
Preferably, the edge enhancement of the input image comprises: and calculating the difference value between the original color value of the input image and the filtered color value to obtain a detail image color value, and calculating the color value of each pixel after edge enhancement by applying the detail image color value, the original color value and the edge intensity weighted value.
Therefore, the color value of each pixel after edge enhancement is calculated through the color value of the detail image, the original color value and the edge strength weighted value, the detail definition at the edge of the image can be improved, and the detail of the image is clearer.
Further, the local brightness clipping of the RGB color value data includes: and normalizing the RGB color value data to be within a preset range interval in proportion according to the maximum brightness value of the monitor.
It can be seen that the RGB color value data is processed according to the maximum luminance value of the monitor, so that the finally output image can be suitable for the display performance of the monitor with less distortion of the image.
Further, when the RGB color value data is normalized to a preset range interval in proportion, a ratio of the RGB color value data to a maximum luminance value reference value is calculated, wherein the maximum luminance value reference value is dynamically adjusted: and adjusting the maximum brightness value of the monitor according to the intensity value of the dark part of each pixel to obtain a maximum brightness value reference value.
Therefore, the normalized RGB color value data are more suitable for the dark area condition of the image through the dynamic adjustment of the maximum brightness value reference value, and the detail definition of the dark area can be improved.
Preferably, the color gamut conversion of the clipped RGB color value data includes: and converting the intercepted RGB color value data into the RGB color value of the target color gamut by applying a multiplication matrix, and converting the RGB color value which exceeds a preset range into a compression range.
Further, converting the RGB color values beyond the preset range into the compressed range includes: and converting the RGB color values larger than the upper limit value of the preset range into a range between the upper limit value of the preset range and the upper limit value of the compression range, and converting the RGB color values smaller than the lower limit value of the preset range into a range between the lower limit value of the preset range and the lower limit value of the compression range.
Therefore, the problem that RGB color values exceeding the preset range cannot be displayed correctly can be avoided, and the color fidelity of the processed image is improved.
Further, the calculating the dark portion intensity value of each pixel by using the dark portion values of the plurality of regions comprises: determining a plurality of areas adjacent to the pixel, and calculating the dark part intensity value of the pixel by applying the area dark part values of the adjacent areas in a bilinear interpolation method.
Therefore, the reasonable dark part intensity value of each pixel can be calculated, the image edge detection is carried out according to the reasonable dark part intensity value, and the definition of the edge area of the dark area in the image can be improved.
Further, the calculating the area dark part value of each area comprises: and for each pixel point in the area, calculating a difference value between the dark part threshold value and the color value of the pixel point, then calculating a ratio of the difference value to the dark part threshold value, and if the ratio is greater than 0, counting the ratio into the area dark part value of the area.
The dark part value of each area can be calculated in a simple manner by the calculation method, so that the calculation amount is simplified, and the processing speed of the image can be increased.
In order to achieve the second object, the present invention provides a computer device comprising a processor and a memory, wherein the memory stores a computer program, and the computer program realizes the steps of the high dynamic range video image processing method when being executed by the processor.
To achieve the third object, the present invention provides a computer-readable storage medium having a computer program stored thereon, where the computer program is executed by a processor to implement the steps of the high dynamic range video image processing method.
Drawings
Fig. 1 is a flow chart of an embodiment of a high dynamic range video image processing method of the present invention.
FIG. 2 is a schematic diagram illustrating dark portion intensity value calculation of a pixel in an embodiment of the high dynamic range video image processing method of the present invention.
Fig. 3 is a compression curve when the maximum value of the screen brightness is 0.5 in the embodiment of the high dynamic range video image processing method of the present invention.
The invention is further explained with reference to the drawings and the embodiments.
Detailed Description
The high dynamic range video image processing method of the present invention is applied to an electronic device having an image display function, and preferably, the electronic device has a display adapted to display a standard dynamic range image. The high dynamic range video image processing method is used for converting a high dynamic range image into a standard dynamic range image to be displayed by the display. Further, the electronic device is provided with a processor and a memory, the memory stores a computer program, and the high dynamic range video image processing method is realized through the computer program.
The embodiment of the high dynamic range video image processing method comprises the following steps:
the high dynamic range video image processing method of this embodiment mainly comprises the following steps: local information statistics, strong edge enhancement, electro-optical conversion, local brightness interception, adaptive color gamut conversion, tone mapping and photoelectric conversion. The steps of this embodiment are described below with reference to fig. 1.
First, step S1 is executed to obtain an input image, in this embodiment, the input image is a high dynamic range video image, the color value of the image is YUV, and in the subsequent steps, this embodiment needs to convert the high dynamic range YUV image into a standard dynamic range RGB image.
Then, step S2 is executed to perform statistics on the local luminance information of the input image, specifically, the local luminance information statistics mainly includes performing strong edge detection on the Y component in the input image and performing dark area detection.
In this embodiment, strong edge detection mainly detects the strong edge regions in the input image, and the statistical information is then passed to the image edge enhancement module. Specifically, the input image is first filtered; the filter may be a Gaussian filter or an edge-preserving low-pass filter. For example, with Y_in denoting the Y component of the input image and Y_b denoting the filtered output image, the filtering can be expressed as:

Y_b = F(Y_in, H_b) (formula 1)

In formula 1, F(Y_in, H_b) denotes applying a filter with kernel H_b to the input image Y_in, where H_b is a low-pass kernel.
The filtered image Y_b is then subjected to edge detection to obtain the edge strength G_e, which can be expressed as:

G_e = F(Y_b, H_g) (formula 2)

In formula 2, H_g may be a Sobel edge detection kernel; preferably, the output value of the edge strength G_e is clamped to the interval 0 to 255.
Since edge enhancement is performed subsequently and noise may affect the edge detection, this embodiment maps the edge strength G_e to an edge intensity weight value W_e for each pixel, for example using the following formula:

W_e = max(min((G_e - MinTh_e) / (MaxTh_e - MinTh_e), 1), 0) (formula 3)

In formula 3, MinTh_e and MaxTh_e are the lower and upper thresholds of the edge strength, respectively, so that the edge intensity weight value W_e of each pixel is mapped to the interval 0 to 1: when the edge strength G_e is less than the lower threshold MinTh_e, the weight W_e is 0, and when the edge strength G_e is greater than the upper threshold MaxTh_e, the weight W_e is 1.
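As a rough illustration of formulas 1 to 3, the sketch below computes the edge intensity weight with NumPy and SciPy. It is a minimal sketch, not the patented implementation: a Gaussian kernel stands in for H_b, a Sobel operator for H_g, and the function name and threshold values are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def edge_weight(y_in, min_th=32.0, max_th=96.0):
    """Formulas 1-3: low-pass filtering, edge strength and weight mapping.

    y_in: luma (Y) plane as a float array; min_th / max_th are illustrative
    values for MinTh_e and MaxTh_e.
    """
    # Formula 1: low-pass filtering (a Gaussian kernel stands in for H_b).
    y_b = gaussian_filter(y_in, sigma=1.5)

    # Formula 2: Sobel edge strength G_e, clamped to [0, 255].
    g_e = np.hypot(sobel(y_b, axis=0), sobel(y_b, axis=1))
    g_e = np.clip(g_e, 0, 255)

    # Formula 3: map the edge strength to a weight W_e in [0, 1].
    w_e = np.clip((g_e - min_th) / (max_th - min_th), 0.0, 1.0)
    return y_b, w_e
```

The filtered image Y_b is returned alongside W_e because both are reused by the edge enhancement of step S3 described below.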
In addition, step S2 also performs dark area detection. Specifically, the input image is divided into a plurality of non-overlapping regions, for example each region being 128 pixels wide and 72 pixels high. The proportion of dark pixels in each region is then counted to obtain the region dark value D_block of that region. For example, for one region, the region dark value D_block is first set to 0, and the darkness degree of each pixel in the region is evaluated: when the color value is 0, the darkness degree takes the maximum value 1, and when the color value is greater than or equal to the dark threshold Th_d, the darkness degree is 0. The darkness degrees of the pixels in the region are accumulated in turn into the region dark value D_block. Specifically, for each pixel in the region, the difference between the dark threshold Th_d and the color value of the pixel is calculated, the ratio of this difference to the dark threshold Th_d is calculated, and if the ratio is greater than 0 it is added to the region dark value D_block. Thus the region dark value D_block of one region can be expressed by the following formula:

D_block = D_block + max((Th_d - Y_block(i, j)) / Th_d, 0) (formula 4)

where Y_block is the color component of the region in the input image, i.e. the Y component, with i ∈ [1, 72] and j ∈ [1, 128], and Th_d is a preset dark threshold. Preferably, after the region dark values D_block of all regions have been accumulated, the region dark values D_block are normalized so that the result is clamped within the interval [0, 1].
Then, the dark intensity value of each pixel is calculated from the region dark values D_block of the regions. Specifically, a plurality of regions adjacent to a pixel are determined, and the dark intensity value of the pixel is calculated from the region dark values of the adjacent regions by bilinear interpolation. As shown in fig. 2, for a pixel 11, the four regions adjacent to the pixel 11 are determined; for example, the position of the center point of each region is determined, and the four regions whose center points are closest to the pixel 11 in the four directions upper left, upper right, lower left and lower right are selected, for example the regions 21, 22, 23 and 24 in fig. 2, whose region dark values are D_block(i, j), D_block(i+1, j), D_block(i, j+1) and D_block(i+1, j+1) respectively. Bilinear interpolation is then applied to these four region dark values to obtain the dark intensity value W_d of the pixel 11.

For example, the distance between the center point of the region 21 and the pixel 11 is used as a weight and multiplied by the region dark value D_block(i, j) of the region 21 to obtain the weighted value of the region 21; the weighted values of the other three regions are calculated in the same way, the average of the four weighted values is calculated, and this average is used as the dark intensity value W_d of the pixel 11.
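A minimal sketch of the dark area statistics is given below. The 128×72 block size, the dark threshold ratio of formula 4 and the bilinear interpolation between neighboring regions follow the description above; the normalization by the number of pixels per region, the sampling relative to region centers and the function names are assumptions of this sketch.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def region_dark_values(y_in, block_w=128, block_h=72, th_d=64.0):
    """Formula 4: accumulate max((Th_d - Y)/Th_d, 0) per region, clamp to [0, 1]."""
    h, w = y_in.shape
    rows, cols = h // block_h, w // block_w   # partial border blocks ignored here
    d_block = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            block = y_in[r*block_h:(r+1)*block_h, c*block_w:(c+1)*block_w]
            d_block[r, c] = np.maximum((th_d - block) / th_d, 0.0).sum()
    # Assumed normalization: average darkness per pixel, clamped to [0, 1].
    d_block /= block_h * block_w
    return np.clip(d_block, 0.0, 1.0)

def dark_intensity(d_block, shape, block_w=128, block_h=72):
    """Per-pixel dark intensity W_d: bilinear interpolation of the region dark
    values, sampled at each pixel position relative to the region centers."""
    h, w = shape
    rr = (np.arange(h) + 0.5) / block_h - 0.5   # pixel rows in region-grid coords
    cc = (np.arange(w) + 0.5) / block_w - 0.5   # pixel cols in region-grid coords
    grid = np.meshgrid(rr, cc, indexing="ij")
    return map_coordinates(d_block, grid, order=1, mode="nearest")
```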
Then, step S3 is executed to perform edge enhancement on the input image. Since this embodiment subsequently performs processing such as tone mapping and color gamut mapping, which reduces the contrast of the image, step S3 first performs edge enhancement so that the contrast reduction does not become obvious enough to affect image quality. However, enhancing all edges of the image would also amplify noise; therefore, to prevent noise amplification, this embodiment enhances only the strong edges, for example according to the edge detection result of step S2.
Specifically, since step S2 has already filtered the Y component Y_in of the input image to obtain the filtered image Y_b, step S3 can use the filtered image Y_b directly to obtain the detail image Y_d, thereby reducing the amount of computation of the edge enhancement process. The Y_d value of each pixel of the detail image is calculated, for example, using the following formula:

Y_d = Y_in - Y_b (formula 5)

It will be appreciated that, for one pixel, Y_d is the detail image color value of the pixel, Y_in is the original color value of the input image, and Y_b is the filtered color value; thus, the detail image color value of a pixel is the difference between the original color value and the filtered color value.

After the detail image Y_d is obtained, edge enhancement is calculated using the edge intensity weight value W_e obtained in step S2 to obtain the edge-enhanced image Y_e, for example using the following formula:

Y_e = Y_in + W_e * Y_d (formula 6)
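Formulas 5 and 6 can be applied directly to the outputs of the earlier edge-weight sketch; the short sketch below assumes those outputs and is illustrative only.

```python
def enhance_edges(y_in, y_b, w_e):
    """Formulas 5 and 6: detail image and weighted strong-edge enhancement."""
    y_d = y_in - y_b            # formula 5: detail image
    return y_in + w_e * y_d     # formula 6: enhance only where W_e is large
```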
Since the image data obtained in step S3 has been processed in a non-linear space, while both the color gamut mapping and the tone mapping are performed in a linear space, this embodiment needs to convert the image data into a linear light signal, i.e. to perform step S4. This embodiment uses the EOTF curve defined in SMPTE (Society of Motion Picture and Television Engineers) ST 2084:2014 to convert the image-coded signal into a linear light signal.
Specifically, the edge-enhanced image is first subjected to color space conversion, i.e. the YUV color value data of the image is converted into RGB color value data, and the RGB color value data is then converted into a linear light signal, in nits, using the following formula, the output signal lying in the interval [0, 10000]:

L = 10000 * (max(N^(1/m2) - c1, 0) / (c2 - c3 * N^(1/m2)))^(1/m1) (formula 7)

where N represents the input non-linear electrical signal, L represents the output linear light signal, and m1, m2, c1, c2 and c3 are predetermined constants. The non-linear electrical signals R', G' and B' obtained by converting the edge-enhanced YUV color value data are substituted in turn into formula 7 as the input parameter N, and the linear light signal L corresponding to each color value is obtained; the color values of the linear light signal are denoted R, G and B respectively.
Then, step S5 is executed to perform local brightness clipping on the image. In addition to the image data, a high dynamic range video stream carries auxiliary information used to assist the decoder in decoding; this auxiliary information usually includes the image width and height, the color gamut information, the monitor maximum brightness value, and so on. The monitor maximum brightness value is the maximum brightness of the monitor used during video production and represents the maximum brightness the video can embody. Step S5 clips the color values R, G and B of the image signal according to the monitor maximum brightness value, for example using the following formula:
R_clip = max(min(R * 10000 / SrcNit, 1), 0)
G_clip = max(min(G * 10000 / SrcNit, 1), 0)
B_clip = max(min(B * 10000 / SrcNit, 1), 0) (formula 8)

where R_clip, G_clip and B_clip denote the clipped color values of the image signal, lying in the interval [0, 1], and SrcNit is the monitor maximum brightness value, which can be obtained from the auxiliary information. It can be seen that step S5 scales the RGB color value data into the interval [0, 1] according to the maximum brightness value of the monitor.
Since the details of dark areas in the input image often cannot be displayed clearly, in order to increase the brightness of the dark areas of the image, this embodiment uses the dark intensity value W_d to reduce the monitor maximum brightness value SrcNit accordingly, obtaining the maximum brightness reference value SrcNitAdj. Thus, for each pixel (i, j), the maximum brightness reference value SrcNitAdj(i, j) is dynamically adjusted, i.e. the monitor maximum brightness value SrcNit is adjusted according to the dark intensity value W_d of the pixel to obtain the maximum brightness reference value SrcNitAdj(i, j) corresponding to that pixel, calculated according to the following formula:

SrcNitAdj(i, j) = SrcNit - W_d(i, j) * LightUpValue (formula 9)

where LightUpValue is the brightness value to be subtracted, usually chosen as 100, although other suitable values may be chosen according to actual needs. The SrcNitAdj(i, j) calculated by formula 9 is then substituted for SrcNit in formula 8.
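A minimal sketch of formulas 8 and 9 follows. It assumes the linear-light values are normalized so that 1.0 corresponds to 10000 nit (as produced by dividing the output of formula 7 by 10000); the guard against a non-positive adjusted reference and the function names are additional assumptions.

```python
import numpy as np

def clip_local_brightness(r, g, b, w_d, src_nit, light_up_value=100.0):
    """Formulas 8 and 9: normalize RGB to [0, 1] using a per-pixel
    maximum-brightness reference that is lowered in dark regions."""
    # Formula 9: lower the monitor peak brightness where the dark intensity is high.
    src_nit_adj = np.maximum(src_nit - w_d * light_up_value, 1.0)  # guard (assumed)

    def clip_one(c):
        # Formula 8, with SrcNit replaced by the adjusted per-pixel reference.
        return np.clip(c * 10000.0 / src_nit_adj, 0.0, 1.0)

    return clip_one(r), clip_one(g), clip_one(b)
```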
Next, step S6 is executed to perform adaptive color gamut conversion on the image. Since the high dynamic range video image belongs to the BT.2020 color gamut, it needs to be converted to the BT.709 color gamut in which the standard dynamic range video image lies. The conversion can be implemented with a multiplication matrix; this embodiment uses the conversion coefficients given in report ITU-R BT.2407, for example as in the following formula:
R_709 = 1.6605 * R_clip - 0.5876 * G_clip - 0.0728 * B_clip
G_709 = -0.1246 * R_clip + 1.1329 * G_clip - 0.0083 * B_clip
B_709 = -0.0182 * R_clip - 0.1006 * G_clip + 1.1187 * B_clip (formula 10)
as can be seen from equation 10, the converted color value data exceeds the interval range of [0,1 ]. Some existing processing methods directly intercept data beyond the interval range of [0,1] to a minimum value of 0 and a maximum value of 1, for example, directly take the value of data smaller than 0 as 0, and directly take the value of data larger than 1 as 1. However, such a processing method will cause a problem of color distortion, and in order to avoid this, the present embodiment needs to compress the portion beyond the range of [0,1 ]. Specifically, the maximum value and the minimum value of each color channel after conversion are detected first, and for example, the following formula is calculated:
R_max = max(R_709), R_min = min(R_709)
G_max = max(G_709), G_min = min(G_709)
B_max = max(B_709), B_min = min(B_709) (formula 11)
Then, the part beyond the range of [0,1] interval is compressed, specifically, using the following formula:
if (R_709 > T_max) then R_709 = T_max + (R_709 - T_max) * (1 - T_max) / (R_max - T_max)
if (G_709 > T_max) then G_709 = T_max + (G_709 - T_max) * (1 - T_max) / (G_max - T_max)
if (B_709 > T_max) then B_709 = T_max + (B_709 - T_max) * (1 - T_max) / (B_max - T_max)
if (R_709 < T_min) then R_709 = T_min - (T_min - R_709) * T_min / (T_min - R_min)
if (G_709 < T_min) then G_709 = T_min - (T_min - G_709) * T_min / (T_min - G_min)
if (B_709 < T_min) then B_709 = T_min - (T_min - B_709) * T_min / (T_min - B_min) (formula 12)
where T_max and T_min denote the upper and lower thresholds of the preset compression range. It can be seen that when a color value R_709, G_709 or B_709 lies between T_max and its maximum value, it is compressed into the range T_max to 1, and when it lies between its minimum value and T_min, it is compressed into the range 0 to T_min. That is, RGB color values greater than the upper limit of the preset range are mapped into the range between the preset upper limit and the compression-range upper limit, and RGB color values smaller than the lower limit of the preset range are mapped into the range between the preset lower limit and the compression-range lower limit.
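The gamut conversion and soft compression of formulas 10 to 12 might be sketched as follows. The matrix coefficients are the commonly cited BT.2020-to-BT.709 values from ITU-R BT.2407; the threshold values for T_min and T_max, and the function name, are illustrative assumptions.

```python
import numpy as np

# BT.2020 -> BT.709 conversion matrix (coefficients as reported in ITU-R BT.2407).
M_2020_TO_709 = np.array([
    [ 1.6605, -0.5876, -0.0728],
    [-0.1246,  1.1329, -0.0083],
    [-0.0182, -0.1006,  1.1187],
])

def convert_and_compress(rgb_clip, t_min=0.05, t_max=0.95):
    """Formulas 10-12: matrix gamut conversion followed by soft compression of
    out-of-range values instead of hard clipping.

    rgb_clip: array of shape (..., 3); t_min / t_max are illustrative thresholds.
    """
    rgb709 = rgb_clip @ M_2020_TO_709.T                    # formula 10
    out = rgb709.copy()
    for ch in range(3):
        c = rgb709[..., ch]
        c_max, c_min = c.max(), c.min()                    # formula 11
        if c_max > t_max:                                  # formula 12, upper part
            hi = c > t_max
            out[..., ch][hi] = t_max + (c[hi] - t_max) * (1 - t_max) / (c_max - t_max)
        if c_min < t_min:                                  # formula 12, lower part
            lo = c < t_min
            out[..., ch][lo] = t_min - (t_min - c[lo]) * t_min / (t_min - c_min)
    return out
```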
The processing up to step S6 leaves the image signal scaled relative to the monitor maximum brightness, but the maximum brightness of most home displays is much less than that of the monitor, so step S7 is performed to tone map the image and compress the image signal so that it matches the brightness of the display actually used for playback. Specifically, the present embodiment performs processing using the following compression curve.
[Formula 13: the tone-mapping compression curve, which appears only as an image in the original publication; it maps the color values R_709, G_709 and B_709 to compressed values as a function of the display maximum brightness DispNit.]
where DispNit is the maximum screen brightness of the display actually used for playback, lying in the interval [0, 1]. Fig. 3 shows the compression curve when DispNit is 0.5; it can be seen that the curve compresses only the upper part of the range.
In this embodiment, the color values R_709, G_709 and B_709 obtained in step S6 are substituted into formula 13 to obtain the brightness-compressed image signals R_TMO, G_TMO and B_TMO. After compression, the values are normalized so that the output lies within the interval [0, 1]. Specifically, the normalization may be performed using the following formula:

R_TMO = R_TMO / DispNit
G_TMO = G_TMO / DispNit
B_TMO = B_TMO / DispNit (formula 14)
Then, step S8 is executed to convert the linear optical signal into an encoded electrical signal. In this embodiment, a gamma function is used as the conversion function, for example:

OETF(x) = x^(1/gamma) (formula 15)

Specifically, R_TMO, G_TMO and B_TMO obtained from formula 14 are substituted into formula 15 to obtain the final output results R_out, G_out and B_out, which are the color values output for each pixel. Finally, step S9 is executed to output the color values R_out, G_out and B_out of each pixel, forming the output image.
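Since formula 13 is available only as an image, the sketch below takes the compression curve as a caller-supplied function and implements only the normalization of formula 14 and the gamma encoding of formula 15; the gamma value of 2.2 and the assumed signature of the curve function are illustrative.

```python
import numpy as np

def tone_map_and_encode(rgb709, disp_nit, compress_curve, gamma=2.2):
    """Formulas 13-15: tone mapping, normalization and gamma encoding.

    compress_curve: stand-in for the compression curve of formula 13 (given
    only as an image in the original); assumed to map [0, 1] values to
    [0, disp_nit]. gamma=2.2 is an illustrative choice.
    """
    rgb_tmo = compress_curve(rgb709, disp_nit)   # formula 13 (assumed signature)
    rgb_tmo = rgb_tmo / disp_nit                 # formula 14: normalize to [0, 1]
    return np.power(np.clip(rgb_tmo, 0.0, 1.0), 1.0 / gamma)   # formula 15
```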
It can be seen that, because this embodiment performs local brightness clipping on the image (the operation of step S5), the amplification of noise can be well suppressed while the local contrast of the image is maintained. In addition, because edge enhancement and dark-area processing are performed on the image, the dark details of the image can be enhanced, so that the display effect of the dark areas of the image is better, and the problem of color distortion caused by color gamut mapping is also prevented.
The embodiment of the computer device comprises:
the computer apparatus of this embodiment may be an electronic device with a video display function, the electronic device having a processor, a memory, and a computer program stored in the memory and executable on the processor, such as a program implementing the above-mentioned high dynamic range video image processing method, and the processor implements the steps of the above-mentioned high dynamic range video image processing method when executing the computer program.
For example, a computer program may be partitioned into one or more modules that are stored in a memory and executed by a processor to implement the modules of the present invention. One or more of the modules may be a series of computer program instruction segments capable of performing certain functions, which are used to describe the execution of the computer program in the terminal device.
It should be noted that the terminal device may be a desktop computer, a notebook, a palm computer, a cloud server, or other computing devices. The terminal device may include, but is not limited to, a processor, a memory. It will be understood by those skilled in the art that the schematic diagram of the present invention is merely an example of a terminal device, and does not constitute a limitation of the terminal device, and may include more or less components than those shown, or combine some components, or different components, for example, the terminal device may further include an input-output device, a network access device, a bus, etc.
The Processor may be a Central Processing Unit (CPU), or may be other general-purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), an off-the-shelf Programmable Gate Array (FPGA) or other Programmable logic device, a discrete Gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor or the processor may be any conventional processor or the like, the processor being the control center of the terminal device and connecting the various parts of the entire terminal device using various interfaces and lines.
The memory may be used to store computer programs and/or modules, and the processor may implement various functions of the terminal device by running or executing the computer programs and/or modules stored in the memory and invoking data stored in the memory. The memory may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. In addition, the memory may include high speed random access memory, and may also include non-volatile memory, such as a hard disk, a memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), at least one magnetic disk storage device, a Flash memory device, or other volatile solid state storage device.
A computer-readable storage medium:
the computer program stored in the computer device may be stored in a computer-readable storage medium if it is implemented in the form of a software functional unit and sold or used as a separate product. Based on such understanding, all or part of the flow in the method according to the above embodiments may be implemented by a computer program, which may be stored in a computer readable storage medium and used by a processor to implement the steps of the method for processing high dynamic range video images.
Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer readable medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution media, and the like. It should be noted that the content of the computer readable medium may be suitably increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, in accordance with legislation and patent practice, the computer readable medium does not include electrical carrier signals and telecommunications signals.
By applying the high dynamic range video image processing method described above, a high dynamic range video image can be converted quickly and effectively into an image suitable for an ordinary standard dynamic range display, while good contrast and clear dark-area detail are maintained in the converted image.
Finally, it should be emphasized that the present invention is not limited to the above embodiments; for example, the parameters used in the local brightness interception may be varied, or the specific calculation of the dark intensity value of each pixel may be varied, and such variations should also fall within the protection scope of the present invention.

Claims (10)

1. A high dynamic range video image processing method, comprising:
acquiring an input image;
the method is characterized in that:
counting local brightness information of the input image, carrying out edge detection on the input image, and obtaining an edge intensity weighted value of each pixel;
dividing the input image into a plurality of non-overlapping areas, calculating an area dark part value of each area, and calculating a dark part intensity value of each pixel by applying the area dark part values of the areas;
performing edge enhancement on the input image, performing color space conversion on the image subjected to edge enhancement, and performing electro-optical conversion to obtain RGB color value data of a linear optical signal;
and carrying out local brightness interception on the RGB color value data: intercepting the RGB color value data according to the maximum brightness value of the monitor;
and performing color gamut conversion on the intercepted RGB color value data, performing tone mapping, and performing photoelectric conversion on the RGB color value data subjected to tone mapping to form an output color value of an output image.
2. The high dynamic range video image processing method of claim 1, wherein:
edge enhancing the input image includes: and calculating the difference value between the original color value of the input image and the filtered color value to obtain a detail image color value, and calculating the color value of each pixel after edge enhancement by applying the detail image color value, the original color value and the edge intensity weighted value.
3. The high dynamic range video image processing method of claim 1, wherein:
the performing local luminance clipping on the RGB color value data includes: and normalizing the RGB color value data to be within a preset range according to the proportion according to the maximum brightness value of the monitor.
4. The high dynamic range video image processing method of claim 3, wherein:
when the RGB color value data are normalized to a preset range according to a proportion, calculating the ratio of the RGB color value data to a maximum brightness value reference value, wherein the maximum brightness value reference value is dynamically adjusted: and adjusting the maximum brightness value of the monitor according to the dark part intensity value of each pixel to obtain the maximum brightness value reference value.
5. The high dynamic range video image processing method according to any one of claims 1 to 4, characterized in that:
performing color gamut transformation on the clipped RGB color value data includes: and converting the intercepted RGB color value data into the RGB color value of the target color gamut by applying a multiplication matrix, and converting the RGB color value which exceeds a preset range into a compression range.
6. The high dynamic range video image processing method of claim 5, further comprising:
converting the RGB color values beyond the preset range into the compression range comprises the following steps: and converting the RGB color values larger than the upper limit value of the preset range into the range between the upper limit value of the preset range and the upper limit value of the compression range, and converting the RGB color values smaller than the lower limit value of the preset range into the range between the lower limit value of the preset range and the lower limit value of the compression range.
7. The high dynamic range video image processing method according to any one of claims 1 to 4, characterized in that:
applying the dark portion values of the plurality of regions to calculate the dark portion intensity value for each of the pixels comprises: and determining a plurality of areas adjacent to the pixel, and calculating the dark part intensity value of the pixel by applying the dark part values of the areas adjacent to the pixel in a bilinear interpolation method.
8. The high dynamic range video image processing method according to any one of claims 1 to 4, characterized in that:
calculating the area dark portion value for each of the areas comprises: and for each pixel point in the area, calculating a difference value between the dark part threshold value and the color value of the pixel point, then calculating a ratio of the difference value to the dark part threshold value, and if the ratio is greater than 0, counting the ratio into the area dark part value of the area.
9. A computer device, characterized in that it comprises a processor and a memory, the memory storing a computer program which, when executed by the processor, carries out the steps of the high dynamic range video image processing method according to any one of claims 1 to 8.
10. A computer-readable storage medium having stored thereon a computer program, characterized in that: the computer program, when executed by a processor, performs the steps of the high dynamic range video image processing method of any one of claims 1 to 8.
CN202111131866.2A 2021-09-26 2021-09-26 High dynamic range video image processing method, computer device and computer readable storage medium Pending CN113935911A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111131866.2A CN113935911A (en) 2021-09-26 2021-09-26 High dynamic range video image processing method, computer device and computer readable storage medium


Publications (1)

Publication Number Publication Date
CN113935911A true CN113935911A (en) 2022-01-14

Family

ID=79276873

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111131866.2A Pending CN113935911A (en) 2021-09-26 2021-09-26 High dynamic range video image processing method, computer device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN113935911A (en)


Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114529459A (en) * 2022-04-25 2022-05-24 东莞市兆丰精密仪器有限公司 Method, system and medium for enhancing image edge
CN114529459B (en) * 2022-04-25 2022-08-02 东莞市兆丰精密仪器有限公司 Method, system and medium for enhancing image edge
CN116363232A (en) * 2022-07-14 2023-06-30 上海玄戒技术有限公司 Color gamut compression method, device, electronic equipment, chip and storage medium
CN116363232B (en) * 2022-07-14 2024-02-09 上海玄戒技术有限公司 Color gamut compression method, device, electronic equipment, chip and storage medium
CN115293994A (en) * 2022-09-30 2022-11-04 腾讯科技(深圳)有限公司 Image processing method, image processing device, computer equipment and storage medium
CN115293994B (en) * 2022-09-30 2022-12-16 腾讯科技(深圳)有限公司 Image processing method, image processing device, computer equipment and storage medium
CN116072059A (en) * 2023-02-27 2023-05-05 卡莱特云科技股份有限公司 Image display method and device, electronic equipment and storage medium
CN116167950A (en) * 2023-04-26 2023-05-26 镕铭微电子(上海)有限公司 Image processing method, device, electronic equipment and storage medium
CN116167950B (en) * 2023-04-26 2023-08-04 镕铭微电子(上海)有限公司 Image processing method, device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN113935911A (en) High dynamic range video image processing method, computer device and computer readable storage medium
US7020332B2 (en) Method and apparatus for enhancing a digital image by applying an inverse histogram-based pixel mapping function to pixels of the digital image
JP6461165B2 (en) Method of inverse tone mapping of image
JP5159208B2 (en) Image correction method and apparatus
KR101311817B1 (en) Image detail enhancement
CN107680056B (en) Image processing method and device
JP6602789B2 (en) System and method for local contrast enhancement
US8159616B2 (en) Histogram and chrominance processing
CN111292269B (en) Image tone mapping method, computer device, and computer-readable storage medium
US8238687B1 (en) Local contrast enhancement of images
JPH08251432A (en) Real time picture enhancing technique
CN111161188A (en) Method for reducing image color noise, computer device and computer readable storage medium
CN112634384A (en) Method and device for compressing high dynamic range image
DE102020200310A1 (en) Method and system for reducing haze for image processing
CN108280836B (en) Image processing method and device
CN114998122A (en) Low-illumination image enhancement method
US8824795B2 (en) Digital image processing method and device for lightening said image
CN115239578A (en) Image processing method and device, computer readable storage medium and terminal equipment
JP3807266B2 (en) Image processing device
CN111031301A (en) Method for adjusting color gamut space, storage device and display terminal
CN101685537B (en) Region gain is utilized to correct with the method strengthening image
CN112954232A (en) Moving image processing method, moving image processing apparatus, camera, and storage medium
KR101073497B1 (en) Apparatus for enhancing image and method therefor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination