CN107786865A - Method and apparatus for processing video frames - Google Patents

Method and apparatus for processing video frames

Info

Publication number
CN107786865A
CN107786865A (application CN201610798824.7A); granted as CN107786865B
Authority
CN
China
Prior art keywords
boundary line
component
interval
mean
video frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610798824.7A
Other languages
Chinese (zh)
Other versions
CN107786865B (en)
Inventor
文锦松
顾国华
徐科
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Kress Semiconductor Technology Co ltd
Original Assignee
Shenzhen ZTE Microelectronics Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen ZTE Microelectronics Technology Co Ltd filed Critical Shenzhen ZTE Microelectronics Technology Co Ltd
Priority to CN201610798824.7A priority Critical patent/CN107786865B/en
Priority to PCT/CN2017/082524 priority patent/WO2018040599A1/en
Publication of CN107786865A publication Critical patent/CN107786865A/en
Application granted granted Critical
Publication of CN107786865B publication Critical patent/CN107786865B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/64Circuits for processing colour signals
    • H04N9/68Circuits for processing colour signals for controlling the amplitude of colour signals, e.g. automatic chroma control circuits
    • H04N9/69Circuits for processing colour signals for controlling the amplitude of colour signals, e.g. automatic chroma control circuits for modifying the colour signals by gamma correction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/77Circuits for processing the brightness signal and the chrominance signal relative to each other, e.g. adjusting the phase of the brightness signal relative to the colour signal, correcting differential gain or differential phase

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Of Color Television Signals (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Picture Signal Circuits (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the invention disclose a method and an apparatus for processing video frames. The method may include: determining an adaptive curve corresponding to a high dynamic range (HDR) video frame according to the Y component of the HDR video frame; adjusting the Y component of the HDR video frame according to the adaptive curve, and outputting the processed Y component; and converting the C components of the HDR video frame according to a preset mapping relation, then performing color saturation enhancement on the converted C components according to preset color intervals to obtain the processed C components.

Description

Video frame processing method and device
Technical Field
The present invention relates to video processing technologies, and in particular, to a method and an apparatus for processing video frames.
Background
High dynamic range (HDR) display can expand the brightness range of a display, show more detail in both bright and dark regions, and bring richer colors and more vivid, natural detail to the picture, so that the picture is closer to what the human eye actually perceives.
The brightness range perceivable by the human eye is about 10⁻³ to 10⁶ nit, and the instantaneous contrast it can perceive reaches about 10000:1. The brightness of current consumer-grade LCDs is generally around 300 to 400 nit, with a contrast ratio of about 2000:1. The brightness resolution of the human eye is thus much higher than that of current mainstream display devices. HDR display technology therefore aims to increase the luminance range of the display; in popular terms, it makes the displayed bright brighter and the displayed black darker.
The current limit on display contrast is determined by the ITU BT.709 standard published by the International Telecommunication Union (ITU) in 1990. The Electro-Optical Transfer Function (EOTF) in that standard was set based on the characteristics of the Cathode Ray Tube (CRT) displays of the time. The brightness of a CRT display generally does not exceed 100 nit, however, so for displays now reaching 400 nit or even 1000 nit, the BT.709 standard constrains the presentation of the best display effect. Therefore, many organizations in the field of broadcast television have proposed new HDR display technology in which HDR video is played through an HDR display, so as to achieve a higher dynamic range.
However, most televisions currently in use are Standard Dynamic Range (SDR) displays based on the ITU BT.709 standard. Such a display cannot deliver the HDR effect when playing HDR content.
Disclosure of Invention
In order to solve the foregoing technical problems, embodiments of the present invention provide a method and an apparatus for processing a video frame, so that an HDR effect can be produced when an SDR display plays HDR video.
The technical scheme of the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides a method for processing a video frame, where the method includes:
determining an adaptive curve corresponding to the HDR video frame according to a Y component of the HDR video frame;
adjusting the Y component of the HDR video frame according to the adaptive curve, and outputting the processed Y component;
and converting the C component of the HDR video frame according to a preset mapping relation, and then performing color saturation enhancement on the converted C component according to a preset color interval to obtain a processed C component.
In the above solution, the determining an adaptive curve corresponding to an HDR video frame according to a Y component of the HDR video frame specifically includes:
obtaining metadata for the HDR video frame from a Y component of the HDR video frame; wherein the metadata is to indicate feature information of the HDR video frame;
acquiring output values of an upper boundary line and a lower boundary line at each interval endpoint according to metadata of the HDR video frame and a preset interval number;
determining an upper boundary line and a lower boundary line according to the preset number of intervals and the output values of the upper boundary line and the lower boundary line at the end points of the intervals;
and acquiring the adaptive curve according to a preset fitting algorithm and the upper and lower boundary lines.
In the above scheme, the metadata of the HDR video frame is a luminance mean of the HDR frame.
In the above scheme, the obtaining output values of the upper boundary line and the lower boundary line at the end points of each interval according to the metadata of the HDR video frame and the preset number of intervals specifically includes:
performing gamma conversion according to the brightness mean value of the HDR frame and a formula 1 to obtain a gamma output value;
wherein input is a luminance mean value of an input HDR frame, output is a gamma output value, and gamma is a gamma distortion coefficient;
acquiring a derivative gradient[n] of the gamma output value according to formula 2;
wherein n is the point sequence number in the interval;
acquiring an output value at the endpoint of the low light interval according to a derivative of the gamma output value and a preset low light division threshold;
and acquiring an output value of the end point of the highlight section according to the symmetrical relation between the highlight section and the low light section and the output value of the end point of the low light section.
In the above solution, the determining the upper boundary line and the lower boundary line according to the preset number of intervals and the output values of the upper boundary line and the lower boundary line at the end points of each interval specifically includes:
after the output value at the end point of the low-light interval and the output value at the end point of the high-light interval are obtained, the curve of the upper boundary line up_line in each interval is obtained according to equation 3;
where Δy1 denotes the output difference over the low-light interval, and Δy2 denotes the output difference over the mid-light interval; the midpoint of the mid-light interval is the intersection of the upper boundary line and the lower boundary line;
and the lower boundary line down_line is set to a straight line with a slope of 1.
In the above scheme, obtaining the adaptive curve according to a preset fitting algorithm and the upper boundary line and the lower boundary line specifically includes:
acquiring the adaptive curve LUT according to the upper boundary line up_line, the lower boundary line down_line and formula 4;
where mean_up is 1024, and the expression clip3(mean, 0, mean_up) means: when mean < 0, the value of the expression is 0; when mean > mean_up, the value is mean_up; and when 0 ≤ mean ≤ mean_up, the value is mean.
In the foregoing scheme, the converting the C component of the HDR video frame according to a preset mapping relationship, and then performing color saturation enhancement on the converted C component according to a preset color interval to obtain a processed C component includes:
converting the C component from the BT2020 domain to the BT709 domain according to a preset conversion matrix;
and performing saturation enhancement on the converted C component according to a preset color interval and an enhancement coefficient corresponding to each color interval to obtain a processed C component.
In a second aspect, an embodiment of the present invention provides an apparatus for processing a video frame, where the apparatus includes: the device comprises a determining unit, a Y component adjusting unit and a C component processing unit; wherein,
the determining unit is used for determining an adaptive curve corresponding to the HDR video frame according to a Y component of the HDR video frame;
the Y component adjusting unit is used for adjusting the Y component of the HDR video frame according to the adaptive curve and outputting the processed Y component;
and the C component processing unit is used for converting the C component of the HDR video frame according to a preset mapping relation, and then performing color saturation enhancement on the converted C component according to a preset color interval to obtain a processed C component.
In the above scheme, the determining unit is configured to obtain metadata of the HDR video frame according to a Y component of the HDR video frame; wherein the metadata is to indicate feature information of the HDR video frame;
acquiring output values of an upper boundary line and a lower boundary line at each interval endpoint according to metadata of the HDR video frame and a preset interval number;
determining an upper boundary line and a lower boundary line according to the preset number of intervals and the output values of the upper boundary line and the lower boundary line at the end points of the intervals;
and acquiring the adaptive curve according to a preset fitting algorithm and the upper and lower boundary lines.
In the above solution, the metadata of the HDR video frame is a luminance mean of the HDR frame.
In the foregoing solution, the determining unit is configured to:
performing gamma conversion according to the brightness mean value of the HDR frame and a formula 5 to obtain a gamma output value;
wherein input is a luminance mean value of an input HDR frame, output is a gamma output value, and gamma is a gamma distortion coefficient;
and obtaining a derivative gradient[n] of the gamma output value according to formula 6;
wherein n is the point sequence number in the interval;
acquiring an output value at the endpoint of the low light interval according to the derivative of the gamma output value and a preset low light division threshold;
and acquiring an output value of the end point of the highlight section according to the symmetrical relation between the highlight section and the low light section and the output value of the end point of the low light section.
In the foregoing solution, the determining unit is configured to:
after the output value at the end point of the low-light interval and the output value at the end point of the high-light interval are acquired, the curve of the upper boundary line up_line in each interval is acquired according to equation 7;
where Δy1 denotes the output difference over the low-light interval, and Δy2 denotes the output difference over the mid-light interval; the midpoint of the mid-light interval is the intersection of the upper boundary line and the lower boundary line;
and the lower boundary line down_line is set to a straight line with a slope of 1.
In the foregoing solution, the determining unit is configured to:
obtaining the adaptive curve LUT according to the upper boundary line up_line, the lower boundary line down_line and equation 8;
where mean_up is 1024, and the expression clip3(mean, 0, mean_up) means: when mean < 0, the value of the expression is 0; when mean > mean_up, the value is mean_up; and when 0 ≤ mean ≤ mean_up, the value is mean.
In the foregoing solution, the C component processing unit is configured to:
converting the C component from the BT2020 domain to the BT709 domain according to a preset conversion matrix;
and performing saturation enhancement on the converted C component according to a preset color interval and an enhancement coefficient corresponding to each color interval to obtain a processed C component.
Embodiments of the invention provide a method and a device for processing video frames; the Y component and the C component of an HDR video frame are respectively processed with an adaptive curve and with color saturation enhancement, so that the processed video frame produces an HDR effect when played on an SDR display.
Drawings
FIG. 1 is a graph comparing the HDR photoelectric conversion curve with that of an SDR display according to the present invention;
fig. 2 is a schematic flow chart of a video frame processing method according to an embodiment of the present invention;
fig. 3 is a schematic flow chart of determining an adaptive curve according to an embodiment of the present invention;
FIG. 4 is a schematic graph illustrating a curve according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a process for obtaining output values of an upper boundary line and a lower boundary line at end points of each interval according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a curve base according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of another curve base provided by an embodiment of the present invention;
fig. 8 is a schematic flowchart of a process for obtaining a processed C component according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of a video frame processing apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention.
The embodiment of the invention firstly briefly explains the problem caused by directly playing HDR video on an SDR display.
As shown in fig. 1, which compares the HDR photoelectric conversion curve with that of an SDR display: the HDR curve in the left graph takes the ST2084 standard as an example, and its normalized optical signal from 0 to 1 represents 0 nit to 10000 nit; this curve is steep in the low-light portion and gentle in the mid- and high-light portions. The SDR curve in the right graph takes the BT709 standard as an example, and its normalized optical signal from 0 to 1 represents 0 nit to 100 nit; it can be seen that the right curve is steeper in the low-light portion than the left curve, and also less gentle in the mid- and high-light portions.
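Both transfer characteristics referenced here are published standards, so they can be sketched directly. The snippet below is a minimal illustration (not part of the patent) that evaluates the ST 2084 (PQ) EOTF and the BT.709 inverse OETF, scaled to a 100 nit display, at the same normalized signal level; at mid-range signals the PQ curve yields far more light, which is why an unprocessed HDR signal looks over-bright on an SDR display:

```python
def pq_eotf(n: float) -> float:
    """ST 2084 (PQ) EOTF: normalized signal n in [0, 1] -> luminance in nit (0..10000)."""
    m1 = 2610 / 16384          # 0.1593017578125
    m2 = 2523 / 4096 * 128     # 78.84375
    c1 = 3424 / 4096           # 0.8359375
    c2 = 2413 / 4096 * 32     # 18.8515625
    c3 = 2392 / 4096 * 32     # 18.6875
    p = n ** (1 / m2)
    return 10000 * (max(p - c1, 0) / (c2 - c3 * p)) ** (1 / m1)

def bt709_inverse_oetf(v: float) -> float:
    """BT.709 inverse OETF: signal v in [0, 1] -> scene light, scaled here to a 100 nit display."""
    l = v / 4.5 if v < 0.081 else ((v + 0.099) / 1.099) ** (1 / 0.45)
    return 100 * l
```

For example, at a mid-range signal of 0.5 the PQ curve gives roughly 92 nit while the BT.709 interpretation gives roughly 26 nit, matching the mid-light mismatch described in the text.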
Real-world footage is shot with either HDR equipment or SDR equipment, and most of the shot video is concentrated in the mid-light portion, that is, in a range around 0.5 on the abscissa.
However, if HDR video around 0.5 on the horizontal axis is interpreted as an SDR signal, the value of the electrical signal is clearly too high, and because the mid-light portion of the left curve is gentler than that of the right curve, the differences between optical signals are clearly reduced. Therefore, when HDR video is played directly through an SDR display without any processing, the mid-light portion is obviously too bright and the contrast obviously too low; HDR video played on an SDR display generally appears over-bright and washed out.
In addition, since HDR video content is in the BT2020 domain while an SDR display works in the BT709 domain, playing HDR video directly through an SDR display also makes the picture less colorful.
In view of the above problems and the reasons for the problems, the embodiments of the present invention provide a method and an apparatus for processing video frames to achieve the HDR effect when an SDR display plays an HDR video.
Example one
Referring to fig. 2, a method for processing a video frame according to an embodiment of the present invention is shown, where the method may include:
s201: determining an adaptive curve corresponding to the HDR video frame according to a Y component of the HDR video frame;
s202: adjusting the Y component of the HDR video frame according to the adaptive curve, and outputting the processed Y component;
s203: and converting the C component of the HDR video frame according to a preset mapping relation, and then performing color saturation enhancement on the converted C component according to a preset color interval to obtain a processed C component.
By the scheme, the Y component and the C component of the HDR video frame are respectively subjected to adaptive curve and color saturation enhancement processing, so that the HDR effect is generated when the processed video frame is played on an SDR display.
For example, referring to fig. 3, for determining an adaptive curve corresponding to the HDR video frame according to the Y component of the HDR video frame in step S201, specifically, the method may include:
s2011: acquiring metadata of an HDR video frame according to a Y component of the HDR video frame; wherein the metadata is to indicate feature information of the HDR video frame;
in this embodiment, the metadata of the HDR video frame is a luminance mean of the HDR frame.
S2012: acquiring output values of an upper boundary line and a lower boundary line at each interval endpoint according to metadata of the HDR video frame and a preset interval number;
s2013: determining an upper boundary line and a lower boundary line according to the number of preset intervals and the output values of the upper boundary line and the lower boundary line at the end points of each interval;
s2014: and acquiring the adaptive curve according to a preset fitting algorithm and the upper and lower boundary lines.
For the above example, it should be noted that, as labeled in fig. 4, the dashed line is the adaptive curve; it can be seen that the adaptive curve adaptively varies between the upper boundary line up_line and the lower boundary line down_line. The abscissa in fig. 4 represents the 10-bit HDR input value, which can be regarded as the value of the Y component of the current HDR video frame; the ordinate is the output value, which can be regarded as the adjusted Y component value.
Since a video frame may include three areas (low light, mid light and high light), the end point B of the low-light area of the upper boundary line up_line determines the pull-up point for increasing low-light contrast; the position of point B determines the low-light pull-up amplitude and the pull-down amplitude of the mid light between B and C. The starting point D of the high-light area of the upper boundary line up_line may be directly set as the symmetric point of point B; it serves to enhance the high light and determines the enhancement of the mid light between C and D.
The adjustment of points B and D to the low-light and high-light areas of the video frame is key to producing an HDR effect when the HDR video frame is played on an SDR display. Therefore, referring to fig. 5, obtaining the output values of the upper boundary line and the lower boundary line at the end points of each interval according to the metadata of the HDR video frame and the preset number of intervals in step S2012 specifically includes:
s20121: performing gamma conversion according to the brightness mean value of the HDR frame and a formula 1 to obtain a gamma output value;
wherein input is a luminance mean value of an input HDR frame, output is a gamma output value, and gamma is a gamma distortion coefficient;
it should be noted that, referring to fig. 6, a curve base of gamma conversion when γ traverses from 1.0 to 2.2 is shown, and fig. 6 is a curve base of an upper boundary up _ line; it can be seen that when γ is 1.0, it represents 1 output, without any compression; the larger the γ, the more significant the compression for low light, the more black the effect will be. While the contrast of the detail will be more apparent in the next lower light portion.
Accordingly, when γ traverses from 1.0 to 2.2, gamma conversion with the reciprocal of γ as the distortion coefficient yields the curve bases of the lower boundary line down_line, as shown in fig. 7.
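As a minimal sketch, assuming the gamma conversion has the normalized form output = input^γ (this form matches the low-light compression behavior described above; the patent's formula 1 is not reproduced in this text), the two curve-base families can be written as:

```python
def up_line_base(x: float, gamma: float) -> float:
    """Curve base of the upper boundary line (fig. 6), assuming the form
    x**gamma: gamma = 1.0 is the identity, and larger gamma compresses
    low light more strongly."""
    return x ** gamma

def down_line_base(x: float, gamma: float) -> float:
    """Curve base of the lower boundary line (fig. 7): the same conversion
    with the reciprocal of gamma as the distortion coefficient."""
    return x ** (1.0 / gamma)
```

With γ = 1.0 both bases are the identity; as γ grows, the two families bend away from the identity line in opposite directions, giving the boundary pair between which the adaptive curve varies.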
S20122: acquiring a derivative of the gamma output value according to formula 2;
where gradient[n] is the derivative of the gamma output value, and n is the point sequence number in the interval;
s20123: acquiring an output value at the endpoint of the low light interval according to a derivative of the gamma output value and a preset low light division threshold;
s20124: and acquiring an output value of the end point of the highlight section according to the symmetrical relation between the highlight section and the low light section and the output value of the end point of the low light section.
It should be noted that, for obtaining the output value at the end point of the low-light interval according to the derivative of the gamma output value and the preset low-light division threshold in S20123, the output value can be obtained by equation 3;
where B_split is the output value at point B, the end point of the low-light interval; g_thr is the division threshold for point B (the larger g_thr is, the more values are classified as low light); and point B is the first point at which the derivative is greater than g_thr.
For obtaining the output value at the end point of the high-light interval in S20124, according to the symmetric relationship between the high-light interval and the low-light interval and the output value at the end point of the low-light interval, the output value can be calculated by equation 4.
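Since the bodies of formulas 2 to 4 are not reproduced in this text, the following is only a plausible sketch of the endpoint search they describe: a forward-difference derivative of the sampled gamma curve, point B as the first sample whose derivative exceeds g_thr, and point D placed symmetrically to B about the input range:

```python
def find_b_point(curve, g_thr):
    """Hypothetical sketch of formulas 2-4: locate the low-light endpoint B as
    the first sample whose forward-difference derivative exceeds g_thr, and the
    high-light endpoint D as its symmetric counterpart. `curve` is a list of
    gamma output values sampled at equal spacing over [0, 1]."""
    n = len(curve)
    # gradient[k]: forward-difference derivative, scaled by the sample spacing
    gradient = [(curve[k + 1] - curve[k]) * (n - 1) for k in range(n - 1)]
    b = next(k for k, g in enumerate(gradient) if g > g_thr)
    d = (n - 1) - b  # D is symmetric to B about the midpoint of the input range
    return b, d
```

For a curve base x^1.5 sampled at 11 points and g_thr = 0.66 (the embodiment's values), this places B near 0.2 of the input range, with D symmetric near 0.8.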
thus, the obtaining of the output values at the section end points, and accordingly, the determining of the upper boundary line and the lower boundary line according to the preset number of the sections and the output values of the upper boundary line and the lower boundary line at the section end points in step S2013 may include:
after the output value at the end point of the low-light interval and the output value at the end point of the high-light interval are obtained, the curve of the upper boundary line up_line in each interval is obtained according to equation 5;
and the lower boundary line down_line is set to a straight line with a slope of 1.
Specifically, referring to fig. 4, the upper boundary line up_line and the lower boundary line down_line each comprise four intervals, as shown in equation 6.
each segment can be designed using a primary curve, a secondary curve, or even an N-th curve, but it is necessary to ensure that the derivative is positive.
In this embodiment, the lower boundary line down _ line is designed as a gamma curve with a gamma value equal to 1.0, i.e., a straight line with a slope of 1, so that the segments a-E are all an expression: y = x;
the upper boundary line up _ line is designed as a curve base with a gamma value equal to 1.5, g \/thr of 0.66, and the B, D coordinates are determined. Therefore, the AB, BC and CD segments are all straight lines, and the DE segment is a curve. As shown in equation 6. Point C is an intersection of the upper boundary up _ line and the lower boundary down _ line, which is 512 in this embodiment.
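The segment layout can be sketched as follows. This is a hypothetical reconstruction of the piecewise structure of equation 6 in which every segment, including D-E, is kept linear for brevity (the patent uses a curve for D-E); the inputs b and b_split, and the symmetry of D about the range, are assumptions drawn from the description above:

```python
def up_line(x, b, b_split, c=512, full=1023):
    """Hypothetical sketch of equation 6: piecewise upper boundary line over a
    10-bit input range, with segments A-B, B-C, C-D, D-E. `b` is the abscissa
    of point B, `b_split` its output value; D is taken symmetric to B."""
    d = full - b
    d_split = full - b_split  # D symmetric to B about the centre of the range
    if x < b:                                    # segment A-B
        return b_split * x / b
    if x < c:                                    # segment B-C
        return b_split + (c - b_split) * (x - b) / (c - b)
    if x < d:                                    # segment C-D
        return c + (d_split - c) * (x - c) / (d - c)
    # segment D-E (a straight line here; the patent uses a curve)
    return d_split + (full - d_split) * (x - d) / (full - d)
```

With b_split < 512 < full - b_split the segments join continuously, pass through point C = (512, 512), and keep a positive derivative throughout, as the text requires.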
After the upper boundary line up_line and the lower boundary line down_line are obtained, step S2014 of acquiring the adaptive curve according to the preset fitting algorithm and the upper and lower boundary lines may include:
acquiring the adaptive curve LUT according to the upper boundary line up_line, the lower boundary line down_line and equation 7;
where mean_up is 1024, and the expression clip3(mean, 0, mean_up) means: when mean < 0, the value of the expression is 0; when mean > mean_up, the value is mean_up; and when 0 ≤ mean ≤ mean_up, the value is mean.
It should be understood that the adaptive curve LUT is the dashed line shown in fig. 4 and indicates the correspondence between the Y component of the HDR video frame and the processed Y component. The Y component of the HDR video frame can therefore be used as the input value of this curve, and the corresponding output value is the processed Y component, as described in step S202.
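The clip3 clamp is fully specified in the text; the blend itself is not (the body of equation 7 is not reproduced here). The sketch below implements clip3 exactly as stated and treats the blend as a linear interpolation between the two boundary lines weighted by the clamped luminance mean, which is one plausible reading of a curve that adaptively varies between down_line and up_line:

```python
def clip3(x, lo, hi):
    """clip3(mean, 0, mean_up) as described in the text: clamp x into [lo, hi]."""
    return max(lo, min(x, hi))

def adaptive_lut(up_line, down_line, mean, mean_up=1024):
    """Hypothetical sketch of equation 7: blend the sampled boundary lines into
    the adaptive curve LUT, weighted by the clamped luminance mean. The exact
    blend in the patent is not reproduced; linear interpolation is assumed."""
    w = clip3(mean, 0, mean_up) / mean_up
    return [d + (u - d) * w for u, d in zip(up_line, down_line)]
```

A dark frame (small mean) keeps the LUT near down_line (the identity), while a bright frame pushes it toward up_line, matching the adaptive behavior shown as the dashed line in fig. 4.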
For example, referring to fig. 8, for step S203, after the C component of the HDR video frame is converted according to the preset mapping relationship, and then the color saturation of the converted C component is enhanced according to the preset color interval, to obtain a processed C component, specifically, the method may include:
s2031: converting the C component from the BT2020 domain to the BT709 domain according to a preset conversion matrix;
s2032: and performing saturation enhancement on the converted C component according to a preset color interval and an enhancement coefficient corresponding to each color interval to obtain a processed C component.
Specifically, for step S2031, the conversion from the YUV BT2020 domain to the BT709 domain can be performed by equation 8.
in this embodiment, the transformation matrixCan be made ofIt can be understood that those skilled in the art can select different transformation matrices according to actual situations and requirements, and details are not described in this embodiment.
For step S2032, the preset color intervals may be 6 color intervals: red, yellow, green, cyan, blue and magenta; the corresponding saturation enhancement coefficients (gain) are 164, 164, 196, 164, 196 and 164, respectively.
The converted C component is then subjected to saturation enhancement according to equation 9 to obtain the processed C component;
where cb_in is cb after conversion by equation 8, and cr_in is cr after conversion by equation 8.
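The body of equation 9 is not reproduced in this text, so the following is only a plausible sketch: derive a hue angle from (cb_in, cr_in), select one of the six 60-degree color intervals, and scale both chroma components by gain/128. Both the interval ordering and the fixed-point divisor 128 are assumptions (the latter suggested by the 164/196 coefficient values):

```python
import math

def enhance_saturation(cb_in, cr_in, gains=(164, 164, 196, 164, 196, 164)):
    """Hypothetical sketch of equation 9: per-hue-interval saturation
    enhancement. The red/yellow/green/cyan/blue/magenta interval order and
    the gain/128 fixed-point scaling are assumptions, not the patent's text."""
    hue = math.degrees(math.atan2(cr_in, cb_in)) % 360
    gain = gains[int(hue // 60)]          # pick the 60-degree interval's gain
    return cb_in * gain / 128, cr_in * gain / 128
```

Scaling both chroma components by the same factor increases saturation while leaving the hue angle unchanged, which is the usual intent of per-interval chroma gains.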
The embodiment provides a video frame processing method, which is characterized in that a Y component and a C component of an HDR video frame are respectively subjected to adaptive curve and color saturation enhancement processing, so that the HDR effect is generated when the processed video frame is played on an SDR display.
Example two
Based on the same technical concept as the foregoing embodiment, referring to fig. 9, it shows a video frame processing apparatus 90 provided by an embodiment of the present invention, where the apparatus 90 includes: a determination unit 901, a Y component adjustment unit 902, and a C component processing unit 903; wherein,
the determining unit 901 is configured to determine an adaptive curve corresponding to an HDR video frame according to a Y component of the HDR video frame;
the Y component adjusting unit 902 is configured to adjust a Y component of the HDR video frame according to the adaptive curve, and output the processed Y component;
the C component processing unit 903 is configured to convert the C component of the HDR video frame according to a preset mapping relationship, and then perform color saturation enhancement on the converted C component according to a preset color interval to obtain a processed C component.
In the above solution, the determining unit 901 is configured to obtain metadata of the HDR video frame according to a Y component of the HDR video frame; wherein the metadata is to indicate feature information of the HDR video frame;
acquiring output values of an upper boundary line and a lower boundary line at each interval endpoint according to metadata of the HDR video frame and a preset interval number;
determining an upper boundary line and a lower boundary line according to the preset number of intervals and the output values of the upper boundary line and the lower boundary line at the end points of the intervals;
and acquiring the adaptive curve according to a preset fitting algorithm and the upper and lower boundary lines.
In the above scheme, the metadata of the HDR video frame is a luminance mean of the HDR frame.
In the foregoing solution, the determining unit 901 is configured to:
performing gamma conversion according to the brightness mean value of the HDR frame and a formula 10 to obtain a gamma output value;
wherein input is a luminance average value of an input HDR frame, output is a gamma output value, and gamma is a gamma distortion coefficient;
and, obtaining a derivative gradient[n] of the gamma output value according to equation 11;
wherein n is the point sequence number in the interval;
acquiring an output value at the endpoint of the low light interval according to the derivative of the gamma output value and a preset low light division threshold;
and acquiring an output value of the end point of the highlight section according to the symmetrical relation between the highlight section and the low light section and the output value of the end point of the low light section.
In the above solution, the determining unit 901 is configured to:
after the output values at the end points of the low light interval and the highlight interval are obtained, the curve of the upper boundary line up_line in each interval is obtained according to formula 12;
wherein Δy1 denotes the output difference over the low light interval, and Δy2 denotes the output difference over the mid-light interval; the midpoint of the mid-light interval is the intersection point of the upper boundary line and the lower boundary line;
and the lower boundary line down_line is set to a straight line with a slope of 1.
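The boundary-line construction can be sketched as follows. Formula 12 is not reproduced here, so the upper boundary line is assumed piecewise linear through illustrative control points: lifted by Δy1 at the low light end point, crossing the lower boundary line at the mid-light midpoint as the text states, and mirrored in the highlight interval; the mid-light lift Δy2 is folded into straight segments for brevity.

```python
import bisect

def down_line(x):
    """Lower boundary line: a straight line with slope 1 (the identity)."""
    return x

def make_up_line(x_low, x_high, dy1):
    """Upper boundary line as piecewise-linear interpolation through assumed
    control points. The resulting S-shape is above the identity in the low
    light interval, meets it at the mid-light midpoint, and falls below it
    symmetrically in the highlight interval -- an illustrative reading, not
    the patented formula 12."""
    x_mid = 0.5 * (x_low + x_high)            # intersection with down_line
    xs = [0.0, x_low, x_mid, x_high, 1.0]
    ys = [0.0, x_low + dy1, x_mid, x_high - dy1, 1.0]

    def up_line(x):
        i = min(max(bisect.bisect_right(xs, x), 1), len(xs) - 1)
        t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
        return ys[i - 1] + t * (ys[i] - ys[i - 1])

    return up_line
```

For example, `make_up_line(0.25, 0.75, 0.05)` lifts the curve to 0.30 at the low light end point and returns exactly 0.5 at the midpoint, where the two boundary lines intersect.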
In the foregoing solution, the determining unit 901 is configured to:
acquiring the adaptive curve LUT according to the upper boundary line up_line, the lower boundary line down_line, and formula 13;
wherein mean_up is 1024; and the expression clip3(mean, 0, mean_up) represents: when mean < 0, the value of the expression is 0; when mean > mean_up, the value of the expression is mean_up; and when 0 ≤ mean ≤ mean_up, the value of the expression is mean.
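The clip3 expression is fully specified by the text and is sketched directly below. The fitting step that blends the two boundary lines into the LUT is not (formula 13 is not reproduced), so the weighting used here, leaning toward up_line for dark frames, is an illustrative assumption.

```python
def clip3(x, lo, hi):
    """clip3(mean, 0, mean_up): returns 0 when mean < 0, mean_up when
    mean > mean_up, and mean itself when 0 <= mean <= mean_up."""
    return max(lo, min(x, hi))

def fit_lut(up_vals, down_vals, mean, mean_up=1024):
    """One plausible formula-13-style fit (assumption, not the patented
    formula): blend the boundary lines with a weight derived from the
    clipped luminance mean; a small mean gives more of up_line's lift."""
    w = clip3(mean, 0, mean_up) / mean_up
    return [d + (u - d) * (1.0 - w) for u, d in zip(up_vals, down_vals)]
```

With `mean = 0` the LUT coincides with up_line, with `mean = mean_up` it coincides with down_line, and intermediate means interpolate between the two.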
In the foregoing solution, the C component processing unit 903 is configured to:
converting the C component from the BT2020 domain to the BT709 domain according to a preset conversion matrix;
and performing saturation enhancement on the converted C component according to a preset color interval and an enhancement coefficient corresponding to each color interval to obtain a processed C component.
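The C-component path can be sketched as a matrix conversion followed by a per-interval chroma gain. The matrix below is an identity placeholder, NOT the actual BT2020-to-BT709 conversion matrix, and the hue intervals and enhancement coefficients are assumed values for illustration only.

```python
import math

# Identity placeholder so the sketch runs -- the preset conversion matrix
# of the patent is not reproduced in this text.
CONV_MATRIX = [[1.0, 0.0, 0.0],
               [0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0]]

def convert_c(m, ycbcr):
    """Apply the preset 3x3 conversion matrix to a (Y, Cb, Cr) triple."""
    return [sum(m[r][c] * ycbcr[c] for c in range(3)) for r in range(3)]

def enhance_saturation(cb, cr, intervals):
    """Scale the chroma pair by the gain of the preset color (hue) interval
    it falls in; interval boundaries (degrees) and gains are assumptions."""
    hue = math.degrees(math.atan2(cr, cb)) % 360.0
    gain = 1.0
    for (lo, hi), g in intervals.items():
        if lo <= hue < hi:
            gain = g
            break
    return cb * gain, cr * gain
```

Scaling Cb and Cr by a common gain increases saturation while leaving the hue angle unchanged, which matches the per-color-interval enhancement described above.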
This embodiment provides a video frame processing apparatus 90, which applies adaptive-curve processing to the Y component and color saturation enhancement to the C component of an HDR video frame, so that the processed video frame retains an HDR-like effect when played on an SDR display.
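Putting the two paths together, the following is a minimal sketch of the apparatus-90 pipeline, with the adaptive curve reduced to a clamped LUT lookup and the saturation enhancement reduced to a single gain; both simplifications are stand-ins for the fuller steps described above.

```python
def apply_adaptive_curve(y_plane, lut):
    """Remap each Y sample through the adaptive curve (LUT lookup);
    indices outside the table are clamped to its ends."""
    last = len(lut) - 1
    return [lut[min(max(int(y), 0), last)] for y in y_plane]

def process_frame(y_plane, c_plane, lut, sat_gain):
    """Hypothetical pipeline: Y through the adaptive curve, C through a
    (here single-gain) saturation enhancement."""
    y_out = apply_adaptive_curve(y_plane, lut)
    c_out = [c * sat_gain for c in c_plane]
    return y_out, c_out
```

In a full implementation the LUT would come from the boundary-line fit and the gain would vary per color interval; this sketch only shows how the two component paths compose.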
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention has been described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention.

Claims (14)

1. A method for processing video frames, the method comprising:
determining an adaptive curve corresponding to the HDR video frame according to a Y component of the HDR video frame;
adjusting the Y component of the HDR video frame according to the adaptive curve, and outputting the processed Y component;
and converting the C component of the HDR video frame according to a preset mapping relation, and then performing color saturation enhancement on the converted C component according to a preset color interval to obtain a processed C component.
2. The method according to claim 1, wherein the determining an adaptive curve corresponding to the HDR video frame according to the Y component of the HDR video frame specifically comprises:
obtaining metadata for the HDR video frame from a Y component of the HDR video frame; wherein the metadata is to indicate feature information of the HDR video frame;
acquiring output values of an upper boundary line and a lower boundary line at each interval end point according to the metadata of the HDR video frame and a preset number of intervals;
determining the upper boundary line and the lower boundary line according to the preset number of intervals and the output values of the upper boundary line and the lower boundary line at each interval end point;
and acquiring the adaptive curve according to a preset fitting algorithm, the upper boundary line, and the lower boundary line.
3. The method of claim 2, wherein the metadata of the HDR video frame is a luminance mean of the HDR frame.
4. The method as claimed in claim 3, wherein the obtaining the output values of the upper boundary line and the lower boundary line at the end points of each section according to the metadata of the HDR video frame and the preset number of sections specifically comprises:
performing gamma conversion according to the luminance mean of the HDR frame and formula 1 to obtain a gamma output value;
wherein input is the luminance mean of the input HDR frame, output is the gamma output value, and gamma is the gamma distortion coefficient;
obtaining the derivative gradient[n] of the gamma output value according to formula 2;
wherein n is the point sequence number in the interval;
acquiring an output value at the endpoint of a low light interval according to a derivative of the gamma output value and a preset low light division threshold;
and acquiring an output value of the end point of the highlight section according to the symmetrical relation between the highlight section and the low light section and the output value of the end point of the low light section.
5. The method of claim 4, wherein determining the upper boundary line and the lower boundary line according to the preset number of intervals and the output values of the upper boundary line and the lower boundary line at the end points of each interval comprises:
after the output values at the end points of the low light interval and the highlight interval are obtained, the curve of the upper boundary line up_line in each interval is obtained according to formula 3;
wherein Δy1 denotes the output difference over the low light interval, and Δy2 denotes the output difference over the mid-light interval; the midpoint of the mid-light interval is the intersection point of the upper boundary line and the lower boundary line;
the lower boundary line down_line is set to a straight line with a slope of 1.
6. The method according to claim 2, wherein obtaining the adaptive curve according to a preset fitting algorithm and the upper boundary line and the lower boundary line specifically comprises:
acquiring the adaptive curve LUT according to the upper boundary line up_line, the lower boundary line down_line, and formula 4;
wherein mean_up is 1024; and the expression clip3(mean, 0, mean_up) represents: when mean < 0, the value of the expression is 0; when mean > mean_up, the value of the expression is mean_up; and when 0 ≤ mean ≤ mean_up, the value of the expression is mean.
7. The method as claimed in claim 1, wherein the converting the C component of the HDR video frame according to a preset mapping relationship, and then performing color saturation enhancement on the converted C component according to a preset color interval to obtain a processed C component comprises:
converting the C component from the BT2020 domain to the BT709 domain according to a preset conversion matrix;
and performing saturation enhancement on the converted C component according to a preset color interval and an enhancement coefficient corresponding to each color interval to obtain a processed C component.
8. An apparatus for processing video frames, the apparatus comprising: the device comprises a determining unit, a Y component adjusting unit and a C component processing unit; wherein,
the determining unit is used for determining an adaptive curve corresponding to the HDR video frame according to a Y component of the HDR video frame;
the Y component adjusting unit is used for adjusting the Y component of the HDR video frame according to the adaptive curve and outputting the processed Y component;
and the C component processing unit is used for converting the C component of the HDR video frame according to a preset mapping relation, and then performing color saturation enhancement on the converted C component according to a preset color interval to obtain a processed C component.
9. The apparatus of claim 8, wherein the determining unit is configured to obtain metadata of the HDR video frame according to a Y component of the HDR video frame; wherein the metadata is to indicate feature information of the HDR video frame;
acquiring output values of an upper boundary line and a lower boundary line at each interval end point according to the metadata of the HDR video frame and a preset number of intervals;
determining the upper boundary line and the lower boundary line according to the preset number of intervals and the output values of the upper boundary line and the lower boundary line at each interval end point;
and acquiring the adaptive curve according to a preset fitting algorithm, the upper boundary line, and the lower boundary line.
10. The apparatus of claim 9, wherein the metadata of the HDR video frame is a luminance mean of the HDR frame.
11. The apparatus of claim 10, wherein the determining unit is configured to:
performing gamma conversion according to the luminance mean of the HDR frame and formula 5 to obtain a gamma output value;
wherein input is the luminance mean of the input HDR frame, output is the gamma output value, and gamma is the gamma distortion coefficient;
and obtaining the derivative gradient[n] of the gamma output value according to formula 6;
wherein n is the point sequence number in the interval;
acquiring an output value at the endpoint of the low light interval according to the derivative of the gamma output value and a preset low light division threshold;
and acquiring an output value of the end point of the highlight section according to the symmetrical relation between the highlight section and the low light section and the output value of the end point of the low light section.
12. The apparatus of claim 11, wherein the determining unit is configured to:
after the output values at the end points of the low light interval and the highlight interval are obtained, the curve of the upper boundary line up_line in each interval is obtained according to formula 7;
wherein Δy1 denotes the output difference over the low light interval, and Δy2 denotes the output difference over the mid-light interval; the midpoint of the mid-light interval is the intersection point of the upper boundary line and the lower boundary line;
and the lower boundary line down_line is set to a straight line with a slope of 1.
13. The apparatus of claim 9, wherein the determining unit is configured to:
acquiring the adaptive curve LUT according to the upper boundary line up_line, the lower boundary line down_line, and formula 8;
wherein mean_up is 1024; and the expression clip3(mean, 0, mean_up) represents: when mean < 0, the value of the expression is 0; when mean > mean_up, the value of the expression is mean_up; and when 0 ≤ mean ≤ mean_up, the value of the expression is mean.
14. The apparatus of claim 8, wherein the C component processing unit is configured to:
converting the C component from the BT2020 domain to the BT709 domain according to a preset conversion matrix;
and performing saturation enhancement on the converted C component according to a preset color interval and an enhancement coefficient corresponding to each color interval to obtain a processed C component.
CN201610798824.7A 2016-08-31 2016-08-31 Method and apparatus for processing video frames Active CN107786865B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201610798824.7A CN107786865B (en) 2016-08-31 2016-08-31 Method and apparatus for processing video frames
PCT/CN2017/082524 WO2018040599A1 (en) 2016-08-31 2017-04-28 Method and device for processing video frame, and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610798824.7A CN107786865B (en) 2016-08-31 2016-08-31 Method and apparatus for processing video frames

Publications (2)

Publication Number Publication Date
CN107786865A true CN107786865A (en) 2018-03-09
CN107786865B CN107786865B (en) 2019-11-26

Family

ID=61299953

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610798824.7A Active CN107786865B (en) Method and apparatus for processing video frames

Country Status (2)

Country Link
CN (1) CN107786865B (en)
WO (1) WO2018040599A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109327734A * 2018-11-27 2019-02-12 成都索贝数码科技股份有限公司 Method for down-converting HDR video to SDR video based on dynamic photometry
CN110691277A (en) * 2018-07-05 2020-01-14 华为技术有限公司 Video signal processing method and device
CN113906497A (en) * 2019-05-24 2022-01-07 交互数字Ce专利控股公司 Apparatus and method for converting between brightness levels

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101707666A (en) * 2009-11-26 2010-05-12 北京中星微电子有限公司 Adjusting method and device with high dynamic range
CN101916555A (en) * 2010-07-26 2010-12-15 武汉大学 Method for displaying and reproducing high dynamic range images
CN103024300A (en) * 2012-12-25 2013-04-03 华为技术有限公司 Device and method for high dynamic range image display
CN103327323A (en) * 2012-03-14 2013-09-25 杜比实验室特许公司 Efficient tone-mapping of high-bit-depth video to low-bit-depth display
CN103747225A (en) * 2014-01-23 2014-04-23 福州大学 High-dynamic-range image dual-screen display method based on color space switching
CN104618700A (en) * 2015-01-29 2015-05-13 电子科技大学 Enhanced display method for color high dynamic range image
CN105744378A (en) * 2016-02-15 2016-07-06 深圳Tcl数字技术有限公司 Video playing method and device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101082992A (en) * 2007-07-06 2007-12-05 浙江大学 Drawing of real time high dynamic range image and display process
CN101901475B (en) * 2010-06-12 2011-10-05 北京理工大学 High dynamic range image tone mapping method based on retina adaptive model
CN103295194B (en) * 2013-05-15 2015-11-04 中山大学 The controlled tone mapping method with Hemifusus ternatanus of brightness
CN104408752B (en) * 2014-12-11 2017-07-04 西安电子科技大学 A kind of high dynamic range images compression method based on mixed shades mapping algorithm

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101707666A (en) * 2009-11-26 2010-05-12 北京中星微电子有限公司 Adjusting method and device with high dynamic range
CN101916555A (en) * 2010-07-26 2010-12-15 武汉大学 Method for displaying and reproducing high dynamic range images
CN103327323A (en) * 2012-03-14 2013-09-25 杜比实验室特许公司 Efficient tone-mapping of high-bit-depth video to low-bit-depth display
CN103024300A (en) * 2012-12-25 2013-04-03 华为技术有限公司 Device and method for high dynamic range image display
CN103747225A (en) * 2014-01-23 2014-04-23 福州大学 High-dynamic-range image dual-screen display method based on color space switching
CN104618700A (en) * 2015-01-29 2015-05-13 电子科技大学 Enhanced display method for color high dynamic range image
CN105744378A (en) * 2016-02-15 2016-07-06 深圳Tcl数字技术有限公司 Video playing method and device

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110691277A (en) * 2018-07-05 2020-01-14 华为技术有限公司 Video signal processing method and device
US11750785B2 (en) 2018-07-05 2023-09-05 Huawei Technologies Co., Ltd. Video signal processing method and apparatus
CN110691277B (en) * 2018-07-05 2024-03-05 华为技术有限公司 Video signal processing method and device
CN109327734A * 2018-11-27 2019-02-12 成都索贝数码科技股份有限公司 Method for down-converting HDR video to SDR video based on dynamic photometry
CN109327734B (en) * 2018-11-27 2020-11-13 成都索贝数码科技股份有限公司 Method for down-converting HDR video to SDR video based on dynamic photometry
CN113906497A (en) * 2019-05-24 2022-01-07 交互数字Ce专利控股公司 Apparatus and method for converting between brightness levels

Also Published As

Publication number Publication date
CN107786865B (en) 2019-11-26
WO2018040599A1 (en) 2018-03-08

Similar Documents

Publication Publication Date Title
CN107154059B (en) High dynamic range video processing method
KR102117522B1 (en) Display management for high dynamic range video
KR101490727B1 (en) Method for image data transformation
KR20150120993A (en) Display management for high dynamic range video
US10332481B2 (en) Adaptive display management using 3D look-up table interpolation
CN111064942B (en) Image processing method and apparatus
CN104954771A (en) Image processing apparatus that performs tone correction, image processing method, and storage medium
CN101507258A (en) Image gamut mapping
CN107786865B (en) Method and apparatus for processing video frames
CN104284168A (en) Image color enhancing method and system
CN110570384A (en) method and device for carrying out illumination equalization processing on scene image, computer equipment and computer storage medium
JP6190482B1 (en) Display control device, display device, television receiver, display control device control method, control program, and recording medium
JP2014220815A (en) Image color adjusting method and electronic device using the same
JP2002536920A (en) Histogram equalization method
US8284316B2 (en) Real-time image processing circuit capable of enhancing brightness contrast and color saturation
KR101634652B1 (en) Method and apparatus for intensificating contrast in image
TWI697873B (en) Image saturation adjusting method and device
JP2009505469A (en) Method and apparatus for setting contrast in digital image processing
CN111031301A (en) Method for adjusting color gamut space, storage device and display terminal
CN103517048B Adaptive edge enhancement
CN116167950B (en) Image processing method, device, electronic equipment and storage medium
JPH10173958A (en) Video signal processor
KR100461018B1 (en) Natural color reproduction method and apparatus on DTV
US8587725B2 (en) Method of digital signal processing
JP2011172009A (en) Device and method for processing image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20221021

Address after: 610213 Industrial land in Southwest Airport Industrial Development Zone, Shuangliu District, Chengdu, Sichuan

Patentee after: Chengdu Kress Semiconductor Technology Co.,Ltd.

Address before: No.1 Factory building, Dameisha, Yantian District, Shenzhen, Guangdong 518085

Patentee before: SANECHIPS TECHNOLOGY Co.,Ltd.