CN116684629A - Video encoding and decoding methods, video encoding and decoding devices, electronic equipment and media - Google Patents


Info

Publication number: CN116684629A
Application number: CN202310849838.7A
Authority: CN (China)
Prior art keywords: video, HDR, SDR, video frame, luminance
Legal status: Pending (assumed; not a legal conclusion)
Other languages: Chinese (zh)
Inventors: 程鹏, 刘文强
Current assignee: Vivo Mobile Communication Co Ltd
Original assignee: Vivo Mobile Communication Co Ltd
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202310849838.7A
Publication of CN116684629A

Classifications

    • H04N 19/40 — video transcoding, i.e. partial or full decoding of a coded input stream followed by re-encoding of the decoded output stream
    • H04N 19/172 — adaptive coding in which the coding unit is a picture, frame or field
    • H04N 19/186 — adaptive coding in which the coding unit is a colour or chrominance component
    • H04N 19/85 — pre-processing or post-processing specially adapted for video compression

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The application discloses a video encoding method, a video decoding method, corresponding devices, an electronic device, and a medium, and belongs to the technical field of video. The video encoding method comprises the following steps: performing luminance/chrominance channel separation on each video frame of a recorded high-dynamic-range (HDR) video to obtain luminance channel data and chrominance channel data of each video frame; determining, according to the luminance channel data and the chrominance channel data of each video frame, the luminance information and chrominance information of each frame after it is mapped from HDR to standard dynamic range (SDR), and determining a dynamic element and a reconstruction layer, wherein the reconstruction layer is used for reconstructing each video frame from SDR luminance to HDR luminance, and the dynamic element is used for establishing the luminance and chrominance mapping of each video frame between HDR and SDR; encoding each video frame according to its luminance information and chrominance information to generate an SDR video; and outputting an encoding result of the HDR video, wherein the encoding result comprises the SDR video, the dynamic element, and the reconstruction layer.

Description

Video encoding and decoding methods, video encoding and decoding devices, electronic equipment and media
Technical Field
The application belongs to the technical field of video, and particularly relates to a video coding and decoding method, a video coding and decoding device, electronic equipment and a medium.
Background
Currently, video mainly follows two standards: standard dynamic range (Standard Dynamic Range, SDR) and high dynamic range (High Dynamic Range, HDR). HDR is better than SDR in color and display, greatly improving color space, color depth, and brightness. However, existing HDR has poor compatibility and cannot be used on devices that do not support HDR.
Disclosure of Invention
The embodiments of the present application aim to provide a video encoding method, a video decoding method, corresponding devices, an electronic device, and a medium, which can solve the problem that existing HDR has poor compatibility and cannot be used on devices that do not support HDR.
In a first aspect, an embodiment of the present application provides a video encoding method, applied to a first electronic device, where the method includes:
performing luminance and chrominance channel separation on each video frame in the recorded HDR video to obtain luminance channel data and chrominance channel data of each video frame;
determining luminance information and chrominance information of each video frame after mapping from HDR to SDR according to luminance channel data and chrominance channel data of each video frame respectively, and determining a dynamic element and a reconstruction layer, wherein the reconstruction layer is used for reconstructing each video frame from SDR luminance to HDR luminance, and the dynamic element is used for establishing luminance and chrominance mapping of each video frame between HDR and SDR;
coding each video frame according to the brightness information and the chromaticity information of each video frame to generate an SDR video;
and outputting an encoding result of the HDR video, wherein the encoding result of the HDR video comprises the SDR video, the dynamic element and the reconstruction layer.
In a second aspect, an embodiment of the present application provides a video decoding method, applied to a second electronic device, where the method includes:
receiving an encoding result of an HDR video, wherein the encoding result comprises an SDR video corresponding to the HDR video, a dynamic element and a reconstruction layer, the reconstruction layer is used for reconstructing each video frame from SDR brightness to the HDR brightness, and the dynamic element is used for establishing brightness and chromaticity mapping between HDR and SDR of each video frame in the HDR video;
and extracting the SDR video from the coding result for playing, or decoding to obtain a target HDR video according to the SDR video, the dynamic element and the reconstruction layer for playing.
In a third aspect, an embodiment of the present application provides a video encoding apparatus, provided in a first electronic device, including:
the channel separation module is used for carrying out brightness and chromaticity channel separation on each video frame in the recorded HDR video to obtain brightness channel data and chromaticity channel data of each video frame;
a first determining module, configured to determine luminance information and chrominance information after mapping each video frame from HDR to SDR according to luminance channel data and chrominance channel data of each video frame, and determine a dynamic element and a reconstruction layer, where the reconstruction layer is configured to reconstruct each video frame from SDR luminance to HDR luminance, and the dynamic element is configured to establish luminance and chrominance mapping between HDR and SDR for each video frame;
the coding module is used for coding each video frame according to the brightness information and the chromaticity information of each video frame to generate an SDR video;
and the output module is used for outputting the encoding result of the HDR video, wherein the encoding result of the HDR video comprises the SDR video, the dynamic element and the reconstruction layer.
In a fourth aspect, an embodiment of the present application provides a video decoding apparatus, provided in a second electronic device, including:
a receiving module, configured to receive an encoding result of an HDR video, where the encoding result includes an SDR video corresponding to the HDR video, a dynamic element, and a reconstruction layer, where the reconstruction layer is configured to reconstruct each video frame from SDR luminance to HDR luminance, and the dynamic element is configured to establish luminance and chromaticity mapping between HDR and SDR for each video frame in the HDR video;
and a decoding module, configured to extract the SDR video from the encoding result for playback, or to decode a target HDR video from the SDR video, the dynamic element, and the reconstruction layer for playback.
In a fifth aspect, an embodiment of the present application provides an electronic device comprising a processor and a memory storing a program or instructions executable on the processor, the program or instructions implementing the steps of the method as described in the first aspect or the steps of the method as described in the second aspect when executed by the processor.
In a sixth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor, implement the steps of the method as described in the first aspect, or implement the steps of the method as described in the second aspect.
In a seventh aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement a method according to the first aspect, or to implement a method according to the second aspect.
In an eighth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executable by at least one processor to implement a method as described in the first aspect, or to implement a method as described in the second aspect.
In the embodiment of the application, luminance/chrominance channel separation is performed on each video frame of the recorded high-dynamic-range (HDR) video to obtain luminance channel data and chrominance channel data of each frame; according to these data, the luminance information and chrominance information of each frame after mapping from HDR to standard dynamic range (SDR) are determined, together with a dynamic element and a reconstruction layer, where the reconstruction layer is used to reconstruct each frame from SDR luminance back to HDR luminance and the dynamic element is used to establish the luminance and chrominance mapping of each frame between HDR and SDR; each frame is then encoded according to its luminance and chrominance information to generate an SDR video; and the encoding result of the HDR video, comprising the SDR video, the dynamic element, and the reconstruction layer, is output. In this way, the recorded HDR video is encoded into an SDR-standard video from which the dynamic element and the reconstruction layer have been extracted, so that an electronic device that does not support HDR can directly extract and play the SDR video, while an electronic device that supports HDR can decode the SDR video together with the reconstruction layer and the dynamic element to obtain and play the HDR video. The HDR video can thus be played on different electronic devices, improving the compatibility of HDR.
Drawings
Fig. 1 is a flowchart of a video encoding method according to an embodiment of the present application;
FIG. 2 is a flow chart of an HDR encoding algorithm provided by an embodiment of the present application;
fig. 3 is a flowchart of a video decoding method according to an embodiment of the present application;
FIG. 4 is a flow chart of an HDR decoding algorithm provided by an embodiment of the present application;
fig. 5 is a flowchart of an SDR decoding algorithm provided by an embodiment of the present application;
fig. 6 is a flowchart of a video encoding and decoding method according to an embodiment of the present application;
FIG. 7 is a schematic diagram of HDR video recording and sharing according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a video encoding apparatus according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a video decoding apparatus according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 11 is a hardware configuration diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions of the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which are obtained by a person skilled in the art based on the embodiments of the present application, fall within the scope of protection of the present application.
The terms "first", "second", and the like in the description and claims are used to distinguish similar objects and do not necessarily describe a particular order or sequence. It should be understood that the terms so used are interchangeable where appropriate, so that the embodiments of the present application can be implemented in orders other than those illustrated or described herein. Objects distinguished by "first", "second", etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one or more. Furthermore, in the description and claims, "and/or" denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The video encoding method and the video decoding method provided by the embodiment of the application are described in detail below through specific embodiments and application scenes thereof with reference to the accompanying drawings.
Referring to fig. 1, fig. 1 is a flowchart of a video encoding method according to an embodiment of the present application, which is applied to a first electronic device, as shown in fig. 1, and the method includes the following steps:
and 101, performing luminance and chrominance channel separation on each video frame in the recorded HDR video to obtain luminance channel data and chrominance channel data of each video frame.
In the embodiment of the application, in order to improve the compatibility of HDR and ensure that an HDR video can be played on devices that do not support HDR, the recorded HDR video is encoded with a compatibility encoding algorithm into three streams: an SDR-standard video, a dynamic element, and a reconstruction layer. When a device that only supports SDR obtains an HDR video encoded in this manner, it can decode and play the corresponding SDR-standard video, instead of failing to open it as in the prior art. When a device that supports HDR obtains such a video, it can decode an HDR-standard video from the SDR-standard video, the dynamic element, and the reconstruction layer, and play it in the HDR standard with a higher-quality playback effect. The compatibility of HDR is thereby improved.
Here, SDR (standard dynamic range) is a video standard that has been in use since cathode ray tube (CRT) displays. SDR is the default video format used in televisions, displays, and projectors, and most video content, whether games, movies, or online videos, still uses SDR. HDR (high dynamic range) is a newer standard for images and video, widely adopted in movies, video, and even games, which depicts scenes more faithfully with a wider color space, greater color depth, and higher brightness. In terms of color and display, SDR is inferior to HDR in all respects; HDR significantly improves color space, brightness, and color depth, but its compatibility is currently poor.
In the HDR video encoding manner of the embodiment of the present application, during HDR video recording, the luminance and chrominance channels of each frame are separated, and luminance and chrominance mapping is then performed per channel for each frame, i.e., each frame is mapped from HDR to SDR, so that SDR-format encoding is performed according to the mapped luminance and chrominance information and a corresponding SDR video is output.
Specifically, in this step, the first electronic device may be an HDR-capable device with which the user records an HDR video. During recording, the device camera outputs an original HDR signal (i.e., HLG/PQ BT.2020) and sends it to the encoding algorithm, which performs luminance and chrominance channel separation on the recorded video frame by frame to obtain the luminance channel data and chrominance channel data of each frame. As shown in fig. 2, the original HDR signal is linearized, converted to the long/medium/short-wavelength (LMS) cone-response space, transformed, and so on, to separate the HDR signal into data in the ICtCp color space, where I is the luminance channel data and CtCp is the chrominance channel data: I represents luminance, Ct represents the blue-yellow axis, and Cp represents the red-green axis.
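As an illustrative sketch only, the channel separation described above can be expressed with the public BT.2100 ICtCp conversion: linear BT.2020 RGB is converted to the LMS cone-response space, PQ-encoded, and transformed into the I (luminance) and Ct/Cp (chrominance) channels. The matrices and PQ constants below are the standard BT.2100/SMPTE ST 2084 values, not values taken from this application.

```python
import numpy as np

# BT.2100 conversion matrices (standard values, not from the application)
RGB_TO_LMS = np.array([[1688, 2146, 262],
                       [683, 2951, 462],
                       [99, 309, 3688]]) / 4096.0
LMS_TO_ICTCP = np.array([[2048, 2048, 0],
                         [6610, -13613, 7003],
                         [17933, -17390, -543]]) / 4096.0

def pq_oetf(fd):
    """SMPTE ST 2084 (PQ) inverse EOTF: absolute luminance in cd/m^2 -> [0, 1]."""
    m1, m2 = 2610 / 16384, 2523 / 4096 * 128
    c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
    y = np.clip(fd / 10000.0, 0.0, 1.0) ** m1
    return ((c1 + c2 * y) / (1 + c3 * y)) ** m2

def separate_ictcp(rgb_linear):
    """Split linear BT.2020 RGB (cd/m^2) into luminance I and chrominance (Ct, Cp)."""
    lms = rgb_linear @ RGB_TO_LMS.T       # linear RGB -> cone-response LMS
    lms_pq = pq_oetf(lms)                 # PQ-encode each cone channel
    ictcp = lms_pq @ LMS_TO_ICTCP.T       # -> I (luma), Ct (blue-yellow), Cp (red-green)
    return ictcp[..., 0], ictcp[..., 1:]  # luminance channel, chrominance channels
```

An achromatic input yields zero chrominance (Ct = Cp = 0), and a 10,000 cd/m^2 white maps to I = 1, which is the intended behavior of the separation step.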
Step 102, determining luminance information and chrominance information of each video frame after mapping from HDR to SDR according to luminance channel data and chrominance channel data of each video frame, and determining a dynamic element and a reconstruction layer, wherein the reconstruction layer is used for reconstructing each video frame from SDR luminance to HDR luminance, and the dynamic element is used for establishing luminance and chrominance mapping of each video frame between HDR and SDR.
In this step, the luminance information of each video frame after it is mapped from HDR to SDR may be calculated from the separated luminance channel data of each frame. Specifically, as shown in fig. 2, the HDR luminance information of each pixel in each frame may be obtained from the luminance channel data, and then mapped/compressed to SDR luminance information based on a luminance mapping relationship between HDR and SDR. The luminance mapping relationship may be preset, or may be calculated from scene distribution information in the luminance channel data, for example from which object information is contained in the frame, exploiting the fact that luminance values differ between different objects.
As shown in fig. 2, the chrominance information of each video frame after it is mapped from HDR to SDR may be calculated from the separated chrominance channel data of each frame. Specifically, the HDR chrominance information of each pixel in each frame may be obtained from the chrominance channel data, and then mapped/compressed to SDR chrominance information based on a gamut mapping relationship between HDR and SDR. The gamut mapping relationship may be preset, or may be calculated from scene distribution information in the chrominance channel data, for example from which object information is contained in the frame, exploiting the fact that chrominance values differ between different objects.
In the embodiment of the application, the HDR luminance information of each pixel in each frame can be obtained from the separated luminance channel data, the mapping parameters between the HDR luminance and the SDR luminance of each pixel (i.e., the luminance mapping parameters) can be determined, and the compressed SDR luminance information of each pixel together with the SDR-to-HDR luminance reconstruction information can be used as the data of the reconstruction layer. Similarly, the HDR chrominance information of each pixel can be obtained from the separated chrominance channel data, and the mapping parameters between the HDR chrominance and the SDR chrominance (i.e., the chrominance mapping parameters) can be determined. The luminance mapping parameters and the chrominance mapping parameters may then be used as dynamic metadata.
Each video frame can be reconstructed from SDR luminance to HDR luminance according to the reconstruction layer, and the luminance and chrominance mapping of each frame between HDR and SDR can be established according to the dynamic element, so that when decoding the HDR video, the reconstruction layer and the dynamic element can be combined to restore the SDR video to the HDR video.
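A minimal sketch of the split described above, with a simple Reinhard-style curve standing in for the unspecified luminance mapping relationship (the function names, field names, and curve choice are illustrative, not from the application):

```python
import numpy as np

def tone_map(i_hdr, peak=0.75):
    """Map HDR luma (PQ-coded, in [0, 1]) down to an SDR-range base layer.

    Illustrative Reinhard-style curve; the application leaves the mapping open.
    """
    return i_hdr * peak / (i_hdr + peak)

def encode_frame_luma(i_hdr, peak=0.75):
    """Split HDR luma into an SDR base, a reconstruction residual, and metadata."""
    i_sdr = tone_map(i_hdr, peak)
    residual = i_hdr - i_sdr                       # HDR luminance reconstruction info (RL)
    metadata = {"curve": "reinhard", "peak": peak,  # luminance mapping parameters (DM)
                "content_max": float(i_hdr.max()),  # actual content luminance
                "target_max": float(i_sdr.max())}   # target display luminance
    return i_sdr, residual, metadata

def reconstruct_frame_luma(i_sdr, residual):
    """Decoder side: rebuild HDR luma from the SDR base plus the reconstruction layer."""
    return i_sdr + residual
```

The point of the split is that the base plus the residual recovers the original HDR luminance exactly, while the base alone remains a usable SDR signal.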
And 103, encoding each video frame according to the brightness information and the chromaticity information of each video frame to generate the SDR video.
After the SDR luminance information and SDR chrominance information of each video frame are obtained through luminance mapping and color gamut mapping, each frame can be encoded based on them, for example with YUV encoding, to generate an SDR video. As shown in fig. 2, after the original HDR signal passes through the encoding algorithm, one BT.709 video stream, i.e., the SDR video, can be output.
And step 104, outputting an encoding result of the HDR video, wherein the encoding result of the HDR video comprises the SDR video, the dynamic element and the reconstruction layer.
The dynamic element and the reconstruction layer determined in step 102 may be combined with the SDR video generated in step 103 to obtain the encoding result of the HDR video; that is, the encoding result of the recorded HDR video comprises three streams: the mapped-and-converted SDR video, the dynamic element, and the reconstruction layer. Thus, when the HDR video is played on electronic devices supporting different video standards, even a device that does not support HDR can decode the SDR video and play it in the SDR standard, instead of failing to play it. An electronic device that supports HDR can decode and convert the encoding result to obtain the HDR video and play it in the HDR standard, preserving the HDR effect of the video, or it can choose to play the decoded SDR video in the SDR standard as required. Therefore, the embodiment of the application achieves compatibility of the HDR video on SDR devices while preserving the HDR effect on HDR devices.
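The two playback paths described above can be sketched as follows; the dictionary keys naming the three streams are illustrative, since the application does not fix a container layout, and per-frame luma is modeled as a plain float for brevity:

```python
def decode(encoding_result, device_supports_hdr):
    """Choose a playback path from the three-stream encoding result.

    encoding_result holds the three streams named in the text: 'sdr_video',
    'dynamic_element', 'reconstruction_layer' (key names are illustrative).
    """
    if not device_supports_hdr:
        # SDR-only device: extract and play the base SDR stream directly.
        return encoding_result["sdr_video"]
    # HDR device: rebuild each frame's HDR luminance from the SDR base plus the
    # reconstruction layer (application of the dynamic element is omitted here).
    return [base + residual
            for base, residual in zip(encoding_result["sdr_video"],
                                      encoding_result["reconstruction_layer"])]
```

An SDR-only device therefore never touches the extra streams, which is exactly the compatibility property the encoding result is designed for.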
Optionally, determining the dynamic element and the reconstruction layer according to the luminance channel data and the chrominance channel data of each video frame respectively includes:
determining luminance mapping information and HDR luminance reconstruction information of each video frame between HDR and SDR according to the luminance channel data of each video frame, storing the luminance mapping information in the dynamic element, and storing the mapped SDR luminance information and the HDR luminance reconstruction information of each video frame in the reconstruction layer;
and determining chrominance mapping information of each video frame between HDR and SDR according to the chrominance channel data of each video frame, and storing the chrominance mapping information in the dynamic element.
In one embodiment, the luminance mapping information, HDR luminance reconstruction information, and chrominance mapping information of each video frame between HDR and SDR may be calculated from the luminance channel data and chrominance channel data of each frame, respectively. The luminance mapping information and chrominance mapping information of each frame are stored frame by frame as dynamic element (DM) data, while the HDR luminance reconstruction information and the mapped SDR luminance information of each frame are stored frame by frame as reconstruction layer (RL) data.
Specifically, the HDR luminance information of each pixel in each video frame may be obtained from the separated luminance channel data, and then mapped/compressed to SDR luminance information based on the luminance mapping relationship or parameters between HDR and SDR, yielding the HDR luminance information of each pixel and its corresponding SDR luminance information. The HDR luminance information of each pixel can thus be split into two parts: the mapped SDR luminance information, and the remaining SDR-to-HDR luminance information, referred to as the HDR luminance reconstruction information, which is used to reconstruct the SDR luminance back into the HDR luminance.
The HDR luminance information and corresponding SDR luminance information of each pixel may be stored frame by frame in the dynamic element; that is, the dynamic element may include the target display luminance of each frame (the SDR luminance information), the actual content luminance (the HDR luminance information), and mapping curve information such as the luminance mapping relationship or parameters and the key points in the frame. The mapped SDR luminance information and the HDR luminance reconstruction information of each pixel may be stored frame by frame in the reconstruction layer; that is, the reconstruction layer may include the mapped SDR luminance information and the HDR luminance reconstruction information of each frame. It should be noted that, in some implementations, the reconstruction layer may further include a small amount of color information; that is, part of the color mapping information may optionally be stored in the reconstruction layer. The dynamic element may also store the mapping relationships between different HDR luminance ranges.
The luminance mapping relationship or parameters may be preset or default, for example a fixed luminance mapping relationship or parameter set between HDR and SDR, or may be calculated from scene distribution information in the luminance channel data, where the scene distribution information may indicate which object information is contained in a video frame, exploiting the fact that luminance values differ between different objects.
Similarly, the HDR chrominance information of each pixel point in each video frame may be obtained from the separated chrominance channel data of each video frame, and then the HDR chrominance information of each pixel point in each video frame is mapped/compressed to the SDR chrominance information based on the chrominance mapping relationship or parameters between the HDR and the SDR, so as to obtain the HDR chrominance information of each pixel point in each video frame and the corresponding SDR chrominance information, and may be used as a dynamic element, that is, the dynamic element may further include the HDR chrominance information of each video frame, and the compressed color information, that is, the SDR chrominance information. The chromaticity mapping relationship or parameter may be preset or default, for example, a fixed chromaticity mapping relationship or parameter exists between HDR and SDR, or may be calculated according to scene distribution information in chromaticity channel data, where the scene distribution information may refer to which object information is included in a video frame, and chromaticity values between different object information have differences.
With this embodiment, a reconstruction layer for reconstructing the HDR luminance information and a dynamic element for mapping the display to the HDR standard can be extracted, ensuring the compatibility of the HDR video encoding result.
Optionally, determining the luminance mapping information of each video frame between HDR and SDR according to the luminance channel data of each video frame includes:
counting the object information in each video frame according to the luminance channel data of each video frame;
determining the luminance mapping parameters of the key pixels in each video frame between HDR and SDR according to the object information in each video frame;
and mapping the HDR luminance of the key pixels in each video frame to SDR luminance according to these luminance mapping parameters, to obtain the luminance mapping information of the key pixels in each video frame between HDR and SDR.
In one implementation, the compatibility coding algorithm in the embodiment of the present application may calculate the luminance mapping parameter between HDR and SDR by counting the object information in each video frame, that is, determine the luminance mapping relationship between HDR and SDR, and then determine, based on the calculated luminance mapping parameter, the luminance information of the key pixel points in each video frame after mapping from HDR to SDR.
Specifically, as shown in fig. 2, the object/scene information in each video frame may be counted according to the luminance channel data of each video frame, for example, the sun, persons, automobiles, buildings, and other objects of differing brightness, since luminance differs between different objects. Luminance mapping parameters of the different objects are then calculated based on the counted object information, and anchor points are placed on each video frame using Bezier curves to determine the key points in each video frame, including the key points of the different objects. Finally, for the key points in each video frame, the luminance information after mapping the HDR luminance information to SDR is calculated according to the luminance mapping parameter of the corresponding object, yielding the luminance mapping information of each key pixel point between HDR and SDR in each video frame, which is stored frame by frame as dynamic metadata.
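The embodiment does not fix the shape of the per-object luminance mapping curve. As a hedged illustration, dynamic-metadata schemes commonly express such HDR-to-SDR tone curves as Bernstein/Bezier polynomials evaluated over anchor points; a minimal sketch, with anchor values invented purely for illustration, could look like this:

```python
import numpy as np
from math import comb

def bezier_tone_curve(x, anchors):
    """Evaluate an Nth-order Bernstein/Bezier tone curve at normalized
    luminance x in [0, 1]; `anchors` are the curve's control values
    (here standing in for hypothetical per-object mapping parameters)."""
    n = len(anchors) - 1
    x = np.asarray(x, dtype=float)
    y = np.zeros_like(x)
    for k, p in enumerate(anchors):
        # Bernstein basis polynomial of degree n, index k
        y += p * comb(n, k) * x**k * (1.0 - x)**(n - k)
    return y

# Hypothetical anchors for a bright object: lift the mid-tones while
# pinning black (0.0) and peak white (1.0).
sdr_luma = bezier_tone_curve([0.0, 0.5, 1.0], anchors=[0.0, 0.7, 1.0])
```

Storing one such anchor set per object per frame would match the frame-by-frame dynamic metadata described above; the actual curve form and parameters in this embodiment are derived from the scene statistics, not fixed here.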
With this embodiment, the luminance mapping information of each key pixel point in a video frame between HDR and SDR can be accurately calculated while reducing the amount of computation.
Optionally, the determining chromaticity mapping information between HDR and SDR for each video frame according to chromaticity channel data of each video frame includes:
and mapping each video frame from a wide color gamut to a narrow color gamut based on the chromaticity mapping parameters according to the chromaticity channel data of each video frame to obtain chromaticity mapping information of each video frame between HDR and SDR.
In one embodiment, the HDR chromaticity information of each pixel point in each video frame may be obtained from the separated chromaticity channel data, and each video frame is then mapped from a wide color gamut to a narrow color gamut, i.e., BT.709, based on the chromaticity mapping parameter between HDR and SDR, so as to obtain the SDR chromaticity information of each pixel point after mapping. Here the wide color gamut corresponds to the color gamut of the HDR chromaticity, and the narrow color gamut corresponds to the color gamut of the SDR chromaticity.
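As a rough illustration (not the patent's own formula), mapping from a wide gamut such as BT.2020 to the narrow BT.709 gamut can be performed on linear RGB with the standard 3x3 primaries-conversion matrix from ITU-R BT.2087, clipping values that fall outside the target gamut:

```python
import numpy as np

# Linear-light BT.2020 -> BT.709 primaries conversion (ITU-R BT.2087).
BT2020_TO_BT709 = np.array([
    [ 1.6605, -0.5876, -0.0728],
    [-0.1246,  1.1329, -0.0083],
    [-0.0182, -0.1006,  1.1187],
])

def map_wide_to_narrow(rgb_2020):
    """Map linear BT.2020 RGB (..., 3) to linear BT.709 RGB.
    Hard clipping is a simple stand-in for this embodiment's chroma
    mapping; production pipelines use gentler gamut-mapping operators."""
    rgb_709 = rgb_2020 @ BT2020_TO_BT709.T
    return np.clip(rgb_709, 0.0, 1.0)
```

Neutral colors survive the conversion nearly unchanged, while saturated wide-gamut colors (e.g., a pure BT.2020 red) are pulled to the BT.709 gamut boundary by the clip.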
With this implementation, the chromaticity mapping information of each key pixel point in a video frame between HDR and SDR can be calculated quickly and accurately.
Optionally, the method further comprises:
And sending the encoding result of the HDR video to a second electronic device, so that the second electronic device can select to play the HDR video or the SDR video.
In one embodiment, after the recorded HDR video is encoded, the encoding result of the HDR video may be shared with other electronic devices, and each receiving device may select the playing format according to whether it supports HDR.
Specifically, after the encoding result of the HDR video is shared with the second electronic device: if the second electronic device only supports SDR, i.e., is an SDR device, it may extract the SDR video from the encoding result and play it; if the second electronic device supports HDR, i.e., is an HDR device, it may fully decode the encoding result to obtain and play the HDR video.
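The branch just described can be sketched as follows; the container field names are illustrative assumptions, since the patent does not fix a bitstream syntax:

```python
def choose_playback(encoding_result, supports_hdr):
    """Pick the playback path on the receiving (second) device.

    `encoding_result` is assumed to be a dict with the keys 'sdr_video',
    'dynamic_metadata' and 'reconstruction_layer' (hypothetical names).
    """
    if not supports_hdr:
        # SDR device: extract and play the base SDR stream directly.
        return ('SDR', encoding_result['sdr_video'])
    # HDR device: hand all three streams to the full HDR decode
    # (the decode step itself is elided in this sketch).
    return ('HDR', (encoding_result['sdr_video'],
                    encoding_result['dynamic_metadata'],
                    encoding_result['reconstruction_layer']))
```
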
Thus, through this embodiment, the encoding result of the HDR video can be shared with different electronic devices and played on each of them, achieving the purpose of sharing the HDR video and improving HDR compatibility.
According to the video coding method, luminance and chrominance channel separation is performed on each video frame in the recorded high dynamic range (HDR) video to obtain luminance channel data and chrominance channel data of each video frame; the luminance information and chrominance information of each video frame after mapping from HDR to standard dynamic range (SDR) are determined according to the luminance channel data and the chrominance channel data, and a dynamic element and a reconstruction layer are determined, where the reconstruction layer is used for reconstructing each video frame from SDR luminance to HDR luminance and the dynamic element is used for establishing the luminance and chrominance mapping of each video frame between HDR and SDR; each video frame is coded according to its luminance information and chrominance information to generate an SDR video; and an encoding result of the HDR video is output, the encoding result including the SDR video, the dynamic element, and the reconstruction layer. In this way, the recorded HDR video is encoded into an SDR-standard video with the dynamic element and the reconstruction layer extracted, so that an electronic device that does not support HDR can directly extract and play the SDR video, while an electronic device that supports HDR can decode the SDR video, the reconstruction layer, and the dynamic element to obtain and play the HDR video, thereby allowing the HDR video to be played on different electronic devices and improving HDR compatibility.
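Putting the pieces together, the encoding result can be thought of as a three-stream container: the base SDR video plus per-frame dynamic element (DM) and reconstruction layer (RL) data. A minimal sketch, with field names that are assumptions rather than the patent's syntax:

```python
from dataclasses import dataclass, field
from typing import Any, List

@dataclass
class HdrEncodingResult:
    """Three-way output stream of the compatibility encoder: base SDR
    video plus dynamic element (DM) and reconstruction layer (RL) data,
    both stored frame by frame."""
    sdr_video: Any                                                 # encoded SDR stream
    dynamic_metadata: List[Any] = field(default_factory=list)      # DM per frame
    reconstruction_layer: List[Any] = field(default_factory=list)  # RL per frame
```

An SDR receiver would read only `sdr_video`, while an HDR receiver would consume all three fields during decoding.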
Fig. 3 is a flowchart of a video decoding method according to an embodiment of the present application, applied to a second electronic device. As shown in fig. 3, the method includes the following steps:
step 301, receiving an encoding result of an HDR video, where the encoding result includes an SDR video corresponding to the HDR video, a dynamic element, and a reconstruction layer, where the reconstruction layer is configured to reconstruct each video frame from SDR luminance to HDR luminance, and the dynamic element is configured to establish luminance and chrominance mapping between HDR and SDR for each video frame in the HDR video.
In the embodiment of the application, to ensure both the HDR effect and compatibility, a compatibility decoding algorithm may be adopted to decode the encoding result of the HDR video; that is, different decoding modes are used according to whether the playing device supports HDR. Specifically, an electronic device that only supports SDR, i.e., an SDR device, may directly extract the SDR video from the encoding result and play it in the SDR standard; an electronic device that supports HDR, i.e., an HDR device, may inverse-encode the encoding result according to the encoding manner of the HDR video to obtain the HDR video and play it in the HDR standard.
Specifically, in this step, the second electronic device may receive the encoding result of an HDR video shared by another electronic device, the encoding result including the SDR video, dynamic element, and reconstruction layer corresponding to the HDR video. The encoding result may be obtained by the other electronic device using the compatibility encoding manner introduced in the method embodiment shown in fig. 1; as shown in fig. 2, the encoding result of the HDR video is output as three-way stream data comprising the SDR video, the dynamic element, and the reconstruction layer, and the specific encoding manner is not repeated here.
step 302, extracting the SDR video from the encoding result for playing, or decoding according to the SDR video, the dynamic element, and the reconstruction layer to obtain a target HDR video for playing.
In the embodiment of the present application, when the second electronic device does not support HDR, that is, the encoding result of the HDR video is shared with an electronic device that does not support HDR, the second electronic device may directly extract the SDR video from the encoding result and play it.
When the second electronic device supports HDR, that is, the encoding result of the HDR video is shared with an electronic device that supports HDR, the second electronic device may inverse-encode the encoding result according to the encoding manner of the HDR video. For example, it may reconstruct the HDR luminance information by combining the SDR video, the reconstruction layer, and the dynamic element, perform display mapping from SDR to HDR, convert the SDR video into a target HDR video, and play the target HDR video in the HDR standard, thereby preserving the HDR effect, that is, improving the display effect of the HDR video.
Optionally, the decoding, according to the SDR video, the dynamic element and the reconstruction layer, to obtain a target HDR video and playing the target HDR video includes:
reconstructing luminance information of the HDR according to the SDR video and the reconstruction layer to obtain a reconstructed HDR video;
mapping the reconstructed HDR video from a narrow color gamut to a wide color gamut according to the dynamic element, and mapping the brightness of the reconstructed HDR video to a brightness range supported by the second electronic equipment to obtain a target HDR video;
playing the target HDR video.
In one embodiment, an HDR-enabled electronic device, i.e., an HDR device, may reconstruct the luminance information of the HDR video from the encoding result and convert the SDR video into the target HDR video via chromaticity mapping and luminance mapping.
Specifically, an HDR decoding algorithm as shown in fig. 4 may be adopted: according to the SDR video and the reconstruction layer, the SDR video is converted from the YUV color space to the RGB color space to obtain SDR RGB data, and the HDR luminance information is reconstructed, i.e., the reconstructed HDR video is obtained. Then, in combination with the dynamic element, frame-by-frame chromaticity display mapping is performed on the reconstructed HDR video, specifically mapping from the narrow color gamut to the wide color gamut, so as to recover the HDR chromaticity. Finally, according to the luminance range supported by the second electronic device and the mapping relationships between different HDR luminance ranges stored in the dynamic element, the luminance of the reconstructed HDR video is mapped to the luminance range supported by the second electronic device, obtaining the HDR RGB data, i.e., the target HDR video, and completing the conversion from SDR video to HDR video.
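A heavily simplified sketch of that decode path follows, assuming (the patent does not fix these details) that the reconstruction layer is stored as a per-pixel multiplicative luminance gain and that the dynamic element's range mapping is reduced to a clip at the display's peak:

```python
import numpy as np

def reconstruct_hdr_frame(sdr_rgb, rl_gain, device_peak_nits,
                          sdr_peak_nits=100.0):
    """Rebuild an HDR frame from decoded SDR RGB plus reconstruction-layer
    data, then fit it to the device's supported luminance range.

    Assumptions: `sdr_rgb` is linear RGB in [0, 1] with shape (H, W, 3),
    and `rl_gain` is a per-pixel multiplicative gain with shape (H, W)."""
    hdr_linear = sdr_rgb * rl_gain[..., None]      # restore HDR highlights
    hdr_nits = hdr_linear * sdr_peak_nits          # to absolute luminance
    return np.clip(hdr_nits, 0.0, device_peak_nits)
```

In a full implementation the final clip would be replaced by the per-range mapping relationships stored in the dynamic element, and the narrow-to-wide gamut expansion would be applied before the luminance fit.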
It should be noted that when the encoding result of the HDR video is shared with an SDR device, an SDR decoding algorithm as shown in fig. 5 may be adopted: the SDR video in the encoding result is directly taken and displayed, and the dynamic metadata and the reconstruction layer data are discarded.
With this implementation, the encoding result of the HDR video can be decoded to obtain the original HDR video for playing, ensuring the HDR effect of the video.
The video decoding method in the embodiment of the application receives an encoding result of an HDR video, the encoding result including an SDR video corresponding to the HDR video, a dynamic element, and a reconstruction layer, where the reconstruction layer is used for reconstructing each video frame from SDR luminance to HDR luminance and the dynamic element is used for establishing the luminance and chrominance mapping of each video frame in the HDR video between HDR and SDR; the SDR video is then extracted from the encoding result for playing, or a target HDR video is decoded from the SDR video, the dynamic element, and the reconstruction layer for playing. In this way, because the shared encoding result of the HDR video includes the SDR video, the dynamic element, and the reconstruction layer, the SDR video can be extracted and played on an electronic device that does not support HDR, while the HDR video can be obtained and played on an electronic device that supports HDR by decoding the SDR video, the reconstruction layer, and the dynamic element, improving HDR compatibility.
In connection with the above description, the flow of the video encoding and decoding method in the embodiment of the present application may be as shown in fig. 6, and includes the following steps:
step 61, recording an HDR video by using an HDR device;
step 62, encoding the recorded HDR video using a compatibility encoding algorithm;
Step 63, outputting the coding result comprising the SDR video, the dynamic element and the reconstruction layer;
step 64, sharing the encoding result of the HDR video to the HDR device;
step 64', sharing the encoding result of the HDR video to the SDR device;
step 65, decoding the HDR video using a compatibility decoding algorithm;
step 66, decoding to obtain an HDR video;
step 66', decoding to obtain SDR video;
step 67, the decoded HDR video or SDR video is played normally.
Fig. 7 shows a schematic diagram of the flow in which an HDR device records and encodes an HDR video, decodes it locally, and shares it with an SDR device for decoding. Electronic device 1 supports HDR recording and playing; while recording the HDR video, it encodes the recorded HDR video frames with the compatibility coding algorithm and outputs an encoding result comprising the SDR video, the dynamic element (DM), and the reconstruction layer (RL); when the HDR video needs to be played, it decodes the encoding result with the compatibility decoding algorithm to obtain and play the HDR video. Electronic device 1 shares the encoding result with electronic device 2, which does not support HDR recording and playing; electronic device 2 decodes the encoding result with the compatibility decoding algorithm and directly extracts the SDR video for playing.
According to the video coding method provided by the embodiment of the application, the execution subject can be a video coding device. In the embodiment of the present application, a video encoding method performed by a video encoding device is taken as an example, and the video encoding device provided by the embodiment of the present application is described.
Referring to fig. 8, fig. 8 is a schematic structural diagram of a video encoding apparatus provided in an embodiment of the present application, where the video encoding apparatus is disposed in a first electronic device, as shown in fig. 8, the video encoding apparatus 800 includes:
the channel separation module 801 is configured to separate luminance and chrominance channels of each video frame in the recorded HDR video, so as to obtain luminance channel data and chrominance channel data of each video frame;
a first determining module 802, configured to determine luminance information and chrominance information after each video frame is mapped from HDR to SDR according to luminance channel data and chrominance channel data of each video frame, and determine a dynamic element and a reconstruction layer, where the reconstruction layer is configured to reconstruct each video frame from SDR luminance to HDR luminance, and the dynamic element is configured to establish luminance and chrominance mapping between HDR and SDR for each video frame;
a coding module 803, configured to code each video frame according to the luminance information and the chrominance information of each video frame, and generate an SDR video;
An output module 804, configured to output an encoding result of the HDR video, where the encoding result of the HDR video includes the SDR video, the dynamic element, and the reconstruction layer.
Optionally, the video encoding apparatus 800 further includes:
the second determining module is used for determining brightness mapping information and HDR brightness reconstruction information of each video frame between HDR and SDR according to brightness channel data of each video frame, storing the brightness mapping information into a dynamic element, and storing the mapped SDR brightness information and the HDR brightness reconstruction information of each video frame into the reconstruction layer;
and the third determining module is used for determining chromaticity mapping information of each video frame between HDR and SDR according to chromaticity channel data of each video frame, and storing the chromaticity mapping information into the dynamic element.
Optionally, the second determining module includes:
the statistics unit is used for counting object information in each video frame according to the brightness channel data of each video frame;
the determining unit is used for determining brightness mapping parameters of each key pixel point between HDR and SDR in each video frame according to the object information in each video frame;
the first mapping unit is configured to map, according to the luminance mapping parameter of each key pixel point in each video frame between HDR and SDR, the HDR luminance of each key pixel point in each video frame to SDR luminance, so as to obtain luminance mapping information of each key pixel point in each video frame between HDR and SDR.
Optionally, the third determining module is configured to map each video frame from a wide color gamut to a narrow color gamut based on the chromaticity mapping parameter according to chromaticity channel data of each video frame, so as to obtain chromaticity mapping information of each video frame between HDR and SDR.
Optionally, the video encoding apparatus 800 further includes:
and the sending module is used for sending the encoding result of the HDR video to the second electronic device, so that the second electronic device can select to play the HDR video or the SDR video.
The video coding device in the embodiment of the application performs luminance and chrominance channel separation on each video frame in the recorded high dynamic range (HDR) video to obtain luminance channel data and chrominance channel data of each video frame; determines, according to the luminance channel data and the chrominance channel data, the luminance information and chrominance information of each video frame after mapping from HDR to standard dynamic range (SDR), and determines a dynamic element and a reconstruction layer, where the reconstruction layer is used for reconstructing each video frame from SDR luminance to HDR luminance and the dynamic element is used for establishing the luminance and chrominance mapping of each video frame between HDR and SDR; codes each video frame according to its luminance information and chrominance information to generate an SDR video; and outputs an encoding result of the HDR video, the encoding result including the SDR video, the dynamic element, and the reconstruction layer. In this way, the recorded HDR video is encoded into an SDR-standard video with the dynamic element and the reconstruction layer extracted, so that an electronic device that does not support HDR can directly extract and play the SDR video, while an electronic device that supports HDR can decode the SDR video, the reconstruction layer, and the dynamic element to obtain and play the HDR video, thereby allowing the HDR video to be played on different electronic devices and improving HDR compatibility.
The video encoding device provided in the embodiment of the present application can implement each process implemented by the method embodiment of fig. 1, and in order to avoid repetition, details are not repeated here.
According to the video decoding method provided by the embodiment of the application, the execution subject can be a video decoding device. In the embodiment of the present application, a video decoding method performed by a video decoding device is taken as an example, and the video decoding device provided by the embodiment of the present application is described.
Referring to fig. 9, fig. 9 is a schematic structural diagram of a video decoding apparatus according to an embodiment of the present application, where the video decoding apparatus is disposed on a second electronic device, as shown in fig. 9, the video decoding apparatus 900 includes:
a receiving module 901, configured to receive an encoding result of an HDR video, where the encoding result includes an SDR video corresponding to the HDR video, a dynamic element, and a reconstruction layer, where the reconstruction layer is configured to reconstruct each video frame from SDR luminance to HDR luminance, and the dynamic element is configured to establish luminance and chrominance mapping between HDR and SDR for each video frame in the HDR video;
and a decoding module 902, configured to extract the SDR video from the encoding result for playing, or decode to obtain a target HDR video according to the SDR video, the dynamic element and the reconstruction layer, and play the target HDR video.
Optionally, the decoding module 902 includes:
a reconstruction unit, configured to reconstruct luminance information of an HDR according to the SDR video and the reconstruction layer, to obtain a reconstructed HDR video;
a second mapping unit, configured to map, according to the dynamic element, the reconstructed HDR video from a narrow color gamut to a wide color gamut, and map luminance of the reconstructed HDR video to a luminance range supported by the second electronic device, to obtain a target HDR video;
and the playing unit is used for playing the target HDR video.
The video decoding device in the embodiment of the application receives an encoding result of an HDR video, the encoding result including an SDR video corresponding to the HDR video, a dynamic element, and a reconstruction layer, where the reconstruction layer is used for reconstructing each video frame from SDR luminance to HDR luminance and the dynamic element is used for establishing the luminance and chrominance mapping of each video frame in the HDR video between HDR and SDR; and extracts the SDR video from the encoding result for playing, or decodes a target HDR video from the SDR video, the dynamic element, and the reconstruction layer for playing. In this way, because the shared encoding result of the HDR video includes the SDR video, the dynamic element, and the reconstruction layer, the SDR video can be extracted and played on an electronic device that does not support HDR, while the HDR video can be obtained and played on an electronic device that supports HDR by decoding the SDR video, the reconstruction layer, and the dynamic element, improving HDR compatibility.
The video decoding device provided by the embodiment of the present application can implement each process implemented by the method embodiment of fig. 3, and in order to avoid repetition, a description thereof will not be repeated here.
The video encoding device and the video decoding device in the embodiments of the present application may be electronic devices, or may be components in electronic devices, such as integrated circuits or chips. The electronic device may be a terminal or a device other than a terminal. By way of example, the electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a mobile internet device (Mobile Internet Device, MID), an augmented reality (Augmented Reality, AR)/virtual reality (Virtual Reality, VR) device, a robot, a wearable device, an ultra-mobile personal computer (Ultra-Mobile Personal Computer, UMPC), a netbook, or a personal digital assistant (Personal Digital Assistant, PDA), and may also be a server, a network attached storage (Network Attached Storage, NAS), a personal computer (Personal Computer, PC), a television (Television, TV), a teller machine, or a self-service machine, etc.; the embodiments of the present application are not specifically limited in this regard.
The video encoding device and the video decoding device in the embodiments of the present application may be devices having an operating system. The operating system may be an Android operating system, an ios operating system, or other possible operating systems, and the embodiment of the present application is not limited specifically.
Optionally, as shown in fig. 10, the embodiment of the present application further provides an electronic device 1000, including a processor 1001 and a memory 1002, where the memory 1002 stores a program or an instruction that can be executed on the processor 1001, and the program or the instruction implements each step of the embodiment of the video encoding method or implements each step of the embodiment of the video decoding method when executed by the processor 1001, and the steps achieve the same technical effects, and are not repeated herein.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device.
Fig. 11 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1100 includes, but is not limited to: radio frequency unit 1101, network module 1102, audio output unit 1103, input unit 1104, sensor 1105, display unit 1106, user input unit 1107, interface unit 1108, memory 1109, and processor 1110.
Those skilled in the art will appreciate that the electronic device 1100 may further include a power source (e.g., a battery) for powering the various components, and the power source may be logically connected to the processor 1110 through a power management system so as to perform functions such as managing charging, discharging, and power consumption. The electronic device structure shown in fig. 11 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than illustrated, combine certain components, or arrange the components differently, which is not described in detail herein.
In one embodiment, the electronic device 1100 is used as a first electronic device, and the processor 1110 is configured to:
performing luminance and chrominance channel separation on each video frame in the recorded HDR video to obtain luminance channel data and chrominance channel data of each video frame, wherein the first electronic device is an electronic device supporting HDR;
determining luminance information and chrominance information of each video frame after mapping from HDR to SDR according to luminance channel data and chrominance channel data of each video frame respectively, and determining a dynamic element and a reconstruction layer, wherein the reconstruction layer is used for reconstructing each video frame from SDR luminance to HDR luminance, and the dynamic element is used for establishing luminance and chrominance mapping of each video frame between HDR and SDR;
coding each video frame according to the brightness information and the chromaticity information of each video frame to generate SDR video;
and outputting an encoding result of the HDR video, wherein the encoding result of the HDR video comprises the SDR video, the dynamic element and the reconstruction layer.
Optionally, the processor 1110 is further configured to:
according to the brightness channel data of each video frame, determining brightness mapping information and HDR brightness reconstruction information of each video frame between HDR and SDR, storing the brightness mapping information into a dynamic element, and storing the mapped SDR brightness information and the HDR brightness reconstruction information of each video frame into the reconstruction layer;
And determining chromaticity mapping information of each video frame between HDR and SDR according to chromaticity channel data of each video frame, and storing the chromaticity mapping information into the dynamic element.
Optionally, the processor 1110 is further configured to:
according to the brightness channel data of each video frame, counting object information in each video frame;
determining brightness mapping parameters of key pixel points in each video frame between HDR and SDR according to the object information in each video frame;
and according to the brightness mapping parameters of the key pixel points in each video frame between the HDR and the SDR, mapping the HDR brightness of the key pixel points in each video frame to the SDR brightness, and obtaining brightness mapping information of the key pixel points in each video frame between the HDR and the SDR.
Optionally, the processor 1110 is further configured to:
and mapping each video frame from a wide color gamut to a narrow color gamut based on the chromaticity mapping parameters according to the chromaticity channel data of each video frame to obtain chromaticity mapping information of each video frame between HDR and SDR.
Optionally, the radio frequency unit 1101 is configured to send the encoding result of the HDR video to a second electronic device, so that the second electronic device can select to play the HDR video or the SDR video.
In another embodiment, the electronic device 1100 is used as a second electronic device, and the radio frequency unit 1101 is configured to receive an encoding result of an HDR video, where the encoding result includes an SDR video corresponding to the HDR video, a dynamic element, and a reconstruction layer, where the reconstruction layer is configured to reconstruct each video frame from SDR luminance to HDR luminance, and the dynamic element is configured to establish luminance and chromaticity mapping between HDR and SDR of each video frame in the HDR video;
and a processor 1110, configured to extract the SDR video from the encoding result for playing, or decode to obtain a target HDR video according to the SDR video, the dynamic element and the reconstruction layer, and play the target HDR video.
Optionally, the processor 1110 is further configured to:
reconstructing luminance information of the HDR according to the SDR video and the reconstruction layer to obtain a reconstructed HDR video;
according to the dynamic element, mapping the reconstructed HDR video from a narrow color gamut to a wide color gamut, and mapping the brightness of the reconstructed HDR video to a brightness range supported by the second electronic device to obtain a target HDR video;
playing the target HDR video.
It should be appreciated that in embodiments of the present application, the input unit 1104 may include a graphics processor (Graphics Processing Unit, GPU) 11041 and a microphone 11042, the graphics processor 11041 processing image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 1106 may include a display panel 11061, and the display panel 11061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 1107 includes at least one of a touch panel 11071 and other input devices 11072. The touch panel 11071 is also referred to as a touch screen. The touch panel 11071 may include two parts, a touch detection device and a touch controller. Other input devices 11072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein.
The memory 1109 may be used to store software programs as well as various data. The memory 1109 may mainly include a first storage area storing programs or instructions and a second storage area storing data; the first storage area may store an operating system, and application programs or instructions required for at least one function (e.g., a sound playing function or an image playing function). Further, the memory 1109 may include volatile memory or nonvolatile memory, or both. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), a static RAM (SRAM), a dynamic RAM (DRAM), a synchronous DRAM (SDRAM), a double data rate SDRAM (DDR SDRAM), an enhanced SDRAM (ESDRAM), a SyncLink DRAM (SLDRAM), or a direct Rambus RAM (DRRAM). The memory 1109 in embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
Processor 1110 may include one or more processing units; optionally, processor 1110 integrates an application processor that primarily processes operations involving an operating system, user interface, application programs, and the like, and a modem processor that primarily processes wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 1110.
An embodiment of the present application further provides a readable storage medium storing a program or instructions that, when executed by a processor, implement the processes of the video encoding method embodiment or of the video decoding method embodiment, and can achieve the same technical effects; to avoid repetition, details are not described here again.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a ROM, a RAM, a magnetic disk, or an optical disc.
An embodiment of the present application further provides a chip including a processor and a communication interface coupled to the processor. The processor is configured to run a program or instructions to implement the processes of the video encoding method embodiment or of the video decoding method embodiment, and can achieve the same technical effects; to avoid repetition, details are not described here again.
It should be understood that the chip referred to in the embodiments of the present application may also be called a system-level chip, a chip system, or a system-on-a-chip, etc.
An embodiment of the present application further provides a computer program product stored in a storage medium. The program product is executed by at least one processor to implement the processes of the video encoding method embodiment or of the video decoding method embodiment, and can achieve the same technical effects; details are not described here again.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that includes a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a ..." does not preclude the presence of additional identical elements in the process, method, article, or apparatus that includes the element. Furthermore, the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; the functions may also be performed substantially simultaneously or in the reverse order, depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to some examples may be combined in other examples.
From the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or by hardware alone, though the former is in many cases preferable. Based on this understanding, the technical solution of the present application, or the part thereof contributing to the prior art, may be embodied in the form of a computer software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, or optical disc), including instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, etc.) to perform the methods of the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative rather than restrictive. Enlightened by the present application, those of ordinary skill in the art may devise many other forms without departing from the spirit of the present application and the scope of the claims, all of which fall within the protection of the present application.

Claims (15)

1. A video encoding method for use with a first electronic device, the method comprising:
performing luminance and chrominance channel separation on each video frame in the recorded high dynamic range HDR video to obtain luminance channel data and chrominance channel data of each video frame;
according to the luminance channel data and the chrominance channel data of each video frame, determining luminance information and chrominance information of each video frame after the video frame is mapped from HDR to a standard dynamic range SDR, and determining a dynamic element and a reconstruction layer, wherein the reconstruction layer is used for reconstructing each video frame from the SDR luminance to the HDR luminance, and the dynamic element is used for establishing luminance and chrominance mapping of each video frame between the HDR and the SDR;
encoding each video frame according to the luminance information and the chrominance information of each video frame to generate an SDR video;
and outputting an encoding result of the HDR video, wherein the encoding result of the HDR video comprises the SDR video, the dynamic element and the reconstruction layer.
2. The method of claim 1, wherein determining the dynamic element and the reconstruction layer based on luminance channel data and chrominance channel data, respectively, for each video frame comprises:
determining, according to the luminance channel data of each video frame, luminance mapping information and HDR luminance reconstruction information of each video frame between HDR and SDR, storing the luminance mapping information into the dynamic element, and storing the mapped SDR luminance information and the HDR luminance reconstruction information of each video frame into the reconstruction layer;
and determining chrominance mapping information of each video frame between HDR and SDR according to the chrominance channel data of each video frame, and storing the chrominance mapping information into the dynamic element.
3. The method of claim 2, wherein determining luminance mapping information of each video frame between HDR and SDR according to the luminance channel data of each video frame comprises:
counting object information in each video frame according to the luminance channel data of each video frame;
determining luminance mapping parameters of key pixel points in each video frame between HDR and SDR according to the object information in each video frame;
and mapping the HDR luminance of the key pixel points in each video frame to SDR luminance according to the luminance mapping parameters of the key pixel points in each video frame between HDR and SDR, to obtain luminance mapping information of the key pixel points in each video frame between HDR and SDR.
4. The method of claim 2, wherein determining chrominance mapping information of each video frame between HDR and SDR according to the chrominance channel data of each video frame comprises:
and mapping, according to the chrominance channel data of each video frame, each video frame from a wide color gamut to a narrow color gamut based on chrominance mapping parameters, to obtain chrominance mapping information of each video frame between HDR and SDR.
5. The method according to any one of claims 1 to 4, further comprising:
and sending the encoding result of the HDR video to a second electronic device, so that the second electronic device can select to play the HDR video or the SDR video.
6. A video decoding method, applied to a second electronic device, the method comprising:
receiving an encoding result of an HDR video, wherein the encoding result comprises an SDR video corresponding to the HDR video, a dynamic element, and a reconstruction layer, the reconstruction layer is used for reconstructing each video frame from SDR luminance to HDR luminance, and the dynamic element is used for establishing luminance and chrominance mapping between HDR and SDR for each video frame in the HDR video;
and extracting the SDR video from the encoding result for playing, or decoding to obtain a target HDR video according to the SDR video, the dynamic element, and the reconstruction layer for playing.
7. The method of claim 6, wherein the decoding to obtain the target HDR video according to the SDR video, the dynamic element, and the reconstruction layer for playing comprises:
reconstructing HDR luminance information according to the SDR video and the reconstruction layer to obtain a reconstructed HDR video;
mapping, according to the dynamic element, the reconstructed HDR video from a narrow color gamut to a wide color gamut, and mapping the luminance of the reconstructed HDR video to a luminance range supported by the second electronic device, to obtain the target HDR video;
playing the target HDR video.
8. A video encoding apparatus provided in a first electronic device, the video encoding apparatus comprising:
a channel separation module, configured to perform luminance and chrominance channel separation on each video frame in the recorded HDR video to obtain luminance channel data and chrominance channel data of each video frame;
a first determining module, configured to determine luminance information and chrominance information after mapping each video frame from HDR to SDR according to luminance channel data and chrominance channel data of each video frame, and determine a dynamic element and a reconstruction layer, where the reconstruction layer is configured to reconstruct each video frame from SDR luminance to HDR luminance, and the dynamic element is configured to establish luminance and chrominance mapping between HDR and SDR for each video frame;
an encoding module, configured to encode each video frame according to the luminance information and the chrominance information of each video frame to generate an SDR video;
and an output module, configured to output an encoding result of the HDR video, where the encoding result of the HDR video includes the SDR video, the dynamic element, and the reconstruction layer.
9. The video encoding device of claim 8, wherein the video encoding device further comprises:
a second determining module, configured to determine, according to the luminance channel data of each video frame, luminance mapping information and HDR luminance reconstruction information of each video frame between HDR and SDR, store the luminance mapping information into the dynamic element, and store the mapped SDR luminance information and the HDR luminance reconstruction information of each video frame into the reconstruction layer;
and a third determining module, configured to determine chrominance mapping information of each video frame between HDR and SDR according to the chrominance channel data of each video frame, and store the chrominance mapping information into the dynamic element.
10. The video encoding device of claim 9, wherein the second determining module comprises:
a statistics unit, configured to count object information in each video frame according to the luminance channel data of each video frame;
a determining unit, configured to determine luminance mapping parameters of each key pixel point in each video frame between HDR and SDR according to the object information in each video frame;
and a first mapping unit, configured to map, according to the luminance mapping parameter of each key pixel point in each video frame between HDR and SDR, the HDR luminance of each key pixel point in each video frame to SDR luminance, to obtain luminance mapping information of each key pixel point in each video frame between HDR and SDR.
11. The video encoding device of claim 9, wherein the third determining module is configured to map, according to the chrominance channel data of each video frame, each video frame from a wide color gamut to a narrow color gamut based on chrominance mapping parameters, to obtain chrominance mapping information of each video frame between HDR and SDR.
12. A video decoding apparatus provided in a second electronic device, the video decoding apparatus comprising:
a receiving module, configured to receive an encoding result of an HDR video, where the encoding result includes an SDR video corresponding to the HDR video, a dynamic element, and a reconstruction layer, the reconstruction layer being configured to reconstruct each video frame from SDR luminance to HDR luminance, and the dynamic element being configured to establish luminance and chrominance mapping between HDR and SDR for each video frame in the HDR video;
and a decoding module, configured to extract the SDR video from the encoding result for playing, or decode to obtain a target HDR video according to the SDR video, the dynamic element, and the reconstruction layer for playing.
13. The video decoding device of claim 12, wherein the decoding module comprises:
a reconstruction unit, configured to reconstruct HDR luminance information according to the SDR video and the reconstruction layer, to obtain a reconstructed HDR video;
a second mapping unit, configured to map, according to the dynamic element, the reconstructed HDR video from a narrow color gamut to a wide color gamut, and map the luminance of the reconstructed HDR video to a luminance range supported by the second electronic device, to obtain the target HDR video;
and the playing unit is used for playing the target HDR video.
14. An electronic device, comprising a processor and a memory storing a program or instructions executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the video encoding method of any one of claims 1 to 5, or the steps of the video decoding method of claim 6 or 7.
15. A readable storage medium, characterized in that the readable storage medium stores a program or instructions which, when executed by a processor, implement the steps of the video encoding method of any one of claims 1 to 5, or the steps of the video decoding method of claim 6 or 7.
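Outside the formal claim language, the encoder-side flow of claim 1 (channel separation, HDR-to-SDR luminance mapping, and generation of the dynamic element and reconstruction layer) can be sketched as follows; the Reinhard-style tone curve, the dictionary container, and the per-pixel gain form of the reconstruction layer are illustrative assumptions, not the patent's actual scheme:

```python
import numpy as np

def encode_hdr_video(hdr_frames):
    """Illustrative sketch of the claimed encoding flow; the tone-mapping
    curve and container layout below are assumptions, not the patent's
    actual scheme. Each frame is (H, W, 3) in a Y'CbCr-like layout."""
    sdr_frames, dynamic_elements, reconstruction_layer = [], [], []
    for frame in hdr_frames:
        # 1. Separate the luminance and chrominance channels.
        luma, chroma = frame[..., 0], frame[..., 1:]

        # 2. Map HDR luminance to SDR with a simple Reinhard-style curve
        #    (a stand-in for the per-frame mapping recorded in the
        #    dynamic element).
        sdr_luma = luma / (1.0 + luma)

        # Reconstruction layer: per-pixel ratio that takes SDR luminance
        # back to HDR luminance at decode time.
        gain = np.where(sdr_luma > 0, luma / np.maximum(sdr_luma, 1e-6), 1.0)
        reconstruction_layer.append(gain.astype(np.float16))

        # Dynamic element: per-frame parameters of the luma/chroma mapping.
        dynamic_elements.append({"tone_curve": "reinhard",
                                 "gamut": "wide->narrow"})

        # 3. Assemble the SDR frame (chroma gamut compression elided here).
        sdr_frames.append(np.dstack([sdr_luma, chroma]))

    # 4. Encoding result: SDR video + dynamic elements + reconstruction layer.
    return {"sdr_video": sdr_frames,
            "dynamic_elements": dynamic_elements,
            "reconstruction_layer": reconstruction_layer}
```

By construction, multiplying each SDR frame's luminance by the corresponding reconstruction-layer gain recovers the original HDR luminance, which is the property the decoder side relies on.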
CN202310849838.7A 2023-07-12 2023-07-12 Video encoding and decoding methods, video encoding and decoding devices, electronic equipment and media Pending CN116684629A (en)

Priority Application (1)

CN202310849838.7A, filed 2023-07-12: Video encoding and decoding methods, video encoding and decoding devices, electronic equipment and media

Publication (1)

CN116684629A, published 2023-09-01

Family ID: 87791108 (CN)

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination