WO2020143114A1 - Image decoding method, decoder, and computer storage medium - Google Patents

Image decoding method, decoder, and computer storage medium

Info

Publication number
WO2020143114A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
decoding
bit stream
decoder
control identifier
Prior art date
Application number
PCT/CN2019/078195
Other languages
English (en)
French (fr)
Inventor
万帅
马彦卓
霍俊彦
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to CN202110352754.3A (CN113055671B)
Priority to JP2021530947A (JP7431827B2)
Application filed by Oppo广东移动通信有限公司
Priority to AU2019420838A
Priority to SG11202105839RA
Priority to MX2021006450A
Priority to CA3121922A (CA3121922C)
Priority to CN202210961409.4A (CN115941944A)
Priority to CN201980056388.8A (CN112640449A)
Priority to EP19908400.5A (EP3843388A4)
Priority to KR1020217016139A (KR20210110796A)
Publication of WO2020143114A1
Priority to US17/326,310 (US11272186B2)
Priority to IL283477A
Priority to ZA2021/03762A (ZA202103762B)
Priority to US17/646,673 (US11785225B2)
Priority to US18/457,705 (US20230403400A1)
Priority to JP2024014222A (JP2024045388A)
Priority to JP2024014195A (JP2024045387A)


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/157Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/12Selection from among a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform or selection between H.263 and H.264
    • H04N19/122Selection of transform size, e.g. 8x8 or 2x4x8 DCT; Selection of sub-band transforms of varying structure or type
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103Selection of coding mode or of prediction mode
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/124Quantisation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/146Data rate or code amount at the encoder output
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/157Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/159Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/186Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/436Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation using parallelised computational arrangements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/46Embedding additional information in the video signal during the compression process
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards

Definitions

  • the embodiments of the present application relate to the technical field of video image encoding and decoding, and in particular, to an image decoding method, a decoder, and a computer storage medium.
  • next-generation video coding standard H.266 or versatile video coding (Versatile Video Coding, VVC)
  • VVC: Versatile Video Coding
  • CCLM: Cross-component Linear Model Prediction
  • DM: Direct Mode
  • CCLM and other cross-component dependent encoding and decoding methods can improve coding efficiency; however, in scenarios that require fast processing or a high degree of parallelism, such methods cannot be used effectively for parallel encoding and decoding, and they suffer from higher complexity.
  • Embodiments of the present application provide an image decoding method, a decoder, and a computer storage medium, which can implement parallel encoding and decoding in scenarios that require fast processing or a high degree of parallel processing, and reduce the complexity of encoding and decoding.
  • An embodiment of the present application provides an image decoding method.
  • the method includes:
  • When the decoding mode corresponding to the control identifier is independent decoding of image components, the preset cross decoding function is turned off; the preset cross decoding function is used to perform decoding processing based on dependencies between image components.
  • Embodiments of the present application provide an image decoding method, a decoder, and a computer storage medium.
  • The decoder obtains the bit stream corresponding to the current video image; parses the bit stream to obtain the control identifier corresponding to the current video image; and, when the decoding mode corresponding to the control identifier is independent decoding of image components, turns off the preset cross decoding function, which is used to perform decoding processing based on the dependencies between the image components.
  • That is, the decoder can first parse the bitstream corresponding to the current video image to obtain the control flag that indicates whether dependencies between image components are allowed. If the decoding method corresponding to the control flag is independent decoding of image components, i.e., there is no dependency between image components, the decoder turns off the preset cross decoding function and does not decode the current video image on the basis of dependencies between image components. In this way, in scenarios that require fast processing or a high degree of parallel processing, parallel encoding and decoding can be implemented, reducing codec complexity.
  • Moreover, the CU-layer bits that signal cross-component decoding can be omitted when components are decoded independently, which improves coding efficiency in this scenario.
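The control-identifier logic described above can be sketched as follows. This is a minimal illustrative sketch: the class and method names, and the assumption that the flag sits in the first byte of a toy bitstream, are not the actual VVC syntax.

```python
# Hypothetical sketch of the control-identifier logic described above.
# Names (Decoder, parse_control_flag) and the flag position are illustrative.

INDEPENDENT = 0   # image components are decoded independently
CROSS = 1         # cross-component dependencies (e.g. CCLM, DM) are allowed

class Decoder:
    def __init__(self):
        self.cross_decoding_enabled = True  # the "preset cross decoding function"

    def parse_control_flag(self, bitstream: bytes) -> int:
        # Assume the flag is carried in the low bit of the first byte (toy layout).
        return bitstream[0] & 0x01

    def configure(self, bitstream: bytes) -> None:
        flag = self.parse_control_flag(bitstream)
        # Flag 0 -> independent decoding: turn the cross decoding function off,
        # so cross-component tools are never invoked and components can be
        # decoded in parallel.
        self.cross_decoding_enabled = (flag == CROSS)

dec = Decoder()
dec.configure(bytes([0x00]))
print(dec.cross_decoding_enabled)  # False: components decoded independently
```

With the flag set to 1, the same call leaves `cross_decoding_enabled` as `True` and cross-component tools remain available.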
  • FIG. 1 is a schematic diagram of the composition of a video encoding system.
  • FIG. 2 is a schematic diagram of the composition of a video decoding system.
  • FIG. 3 is a first schematic flowchart of an image decoding method according to an embodiment of the present application.
  • FIG. 4 is a second schematic flowchart of an image decoding method according to an embodiment of the present application.
  • FIG. 5 is a third schematic flowchart of an image decoding method according to an embodiment of the present application.
  • FIG. 6 is a fourth schematic flowchart of an image decoding method according to an embodiment of the present application.
  • FIG. 7 is a fifth schematic flowchart of an image decoding method according to an embodiment of the present application.
  • FIG. 8 is a first schematic structural diagram of a decoder according to an embodiment of the present application.
  • FIG. 9 is a second schematic structural diagram of a decoder according to an embodiment of the present application.
  • Encoding a video means encoding its images frame by frame; similarly, decoding a compressed video bitstream means decoding its images frame by frame.
  • CU: Coding Unit
  • Encoding a video image sequence means encoding each coding unit (CU) of each frame in turn; decoding the bitstream of a video image sequence likewise means decoding each CU of each frame in turn, and finally reconstructing the entire video image sequence.
  • the first image component, the second image component, and the third image component are generally used to characterize the coding block.
  • the first image component, the second image component and the third image component are respectively a luminance component, a blue chroma component and a red chroma component.
  • the luminance component is generally represented by the symbol Y
  • the blue chrominance component is generally represented by the symbol Cb
  • the red chrominance component is generally represented by the symbol Cr.
  • The first image component, the second image component, and the third image component may be the luminance component Y, the blue chrominance component Cb, and the red chrominance component Cr; alternatively, for example, the first image component may be the luminance component Y, the second image component may be the red chrominance component Cr, and the third image component may be the blue chrominance component Cb, which is not specifically limited in the embodiments of the present application.
  • CCLM implements prediction from the first image component to the second image component, from the first image component to the third image component, and between the second and third image components.
  • The CCLM prediction mode thus includes predicting a chrominance component from the luma component, i.e., predicting the second or third image component from the first image component, as well as prediction between the two chroma components, i.e., between the second image component and the third image component.
  • the prediction method between the second image component and the third image component may predict the Cr component from the Cb component or the Cb component from the Cr component.
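The cross-component prediction described above can be sketched with a simple linear model C = a·L + b, where the parameters a and b are derived from neighbouring reconstructed samples. The least-squares derivation below is illustrative only; the actual VVC CCLM derivation uses selected min/max luma samples and integer arithmetic.

```python
# Minimal sketch of CCLM-style prediction: a chroma block is predicted from
# co-located reconstructed luma samples with a linear model C = a*L + b.

def derive_linear_model(neigh_luma, neigh_chroma):
    # Fit a and b by least squares over the neighbouring reconstructed samples.
    n = len(neigh_luma)
    mean_l = sum(neigh_luma) / n
    mean_c = sum(neigh_chroma) / n
    cov = sum((l - mean_l) * (c - mean_c) for l, c in zip(neigh_luma, neigh_chroma))
    var = sum((l - mean_l) ** 2 for l in neigh_luma)
    a = cov / var if var else 0.0
    b = mean_c - a * mean_l
    return a, b

def cclm_predict(luma_block, a, b):
    # Apply the linear model sample by sample.
    return [[a * l + b for l in row] for row in luma_block]

# Toy example: chroma tracks luma as C = 0.5*L + 10 in the neighbourhood.
a, b = derive_linear_model([20, 40, 60, 80], [20, 30, 40, 50])
pred = cclm_predict([[100, 120], [140, 160]], a, b)
print(pred)  # [[60.0, 70.0], [80.0, 90.0]]
```

The same machinery predicts Cr from Cb (or Cb from Cr) by swapping in the other chroma component as the source.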
  • FIG. 1 is a schematic diagram of the composition of the video encoding system. As shown in FIG. 1, the video encoding system 200 includes a transform and quantization unit 201, an intra estimation unit 202, an intra prediction unit 203, a motion compensation unit 204, a motion estimation unit 205, an inverse transform and inverse quantization unit 206, a filter control analysis unit 207, a filtering unit 208, an entropy encoding unit 209, a current video image buffer unit 210, and so on. The filtering unit 208 can implement deblocking filtering and sample adaptive offset (SAO) filtering, and the entropy encoding unit 209 can implement header information coding and context-based adaptive binary arithmetic coding (CABAC).
  • For an input original video signal, coding tree units (CTUs) can be obtained through a preliminary division, and content-adaptive division of a CTU can then be continued to obtain CUs; a CU generally contains one or more coding blocks (CBs). The transform and quantization unit 201 transforms the residual pixel information obtained after intra or inter prediction, converting the residual information from the pixel domain to the transform domain, and quantizes the resulting transform coefficients to further reduce the bit rate. The intra estimation unit 202 and the intra prediction unit 203 perform intra prediction on the video coding block; specifically, they determine the intra prediction mode to be used to encode the block. The motion compensation unit 204 and the motion estimation unit 205 perform inter prediction of the received video coding block relative to one or more blocks in one or more reference frames to provide temporal prediction information.
  • The context model may be based on adjacent coding blocks and may be used to encode information indicating the determined intra prediction mode, so that the entropy encoding unit outputs the bitstream of the video signal. The current video image buffer unit 210 stores reconstructed video coding blocks for prediction reference; as encoding proceeds, new reconstructed blocks are continuously generated and stored in the current video image buffer unit 210.
  • The video decoding system 300 includes an entropy decoding unit 301, an inverse transform and inverse quantization unit 302, an intra prediction unit 303, a motion compensation unit 304, a filtering unit 305, a current video image buffer unit 306, and so on. The entropy decoding unit 301 can implement header information decoding and CABAC decoding, and the filtering unit 305 can implement deblocking filtering and SAO filtering.
  • After the bitstream of the video signal is output by the encoder, it is input to the video decoding system 300 and first passes through the entropy decoding unit 301 to obtain decoded transform coefficients. The transform coefficients are processed by the inverse transform and inverse quantization unit 302 to generate a residual block in the pixel domain. The intra prediction unit 303 can generate prediction data for the current video decoding block based on the determined intra prediction mode and data from previously decoded blocks of the current frame or picture. The motion compensation unit 304 determines prediction information for the video decoding block by parsing motion vectors and other associated syntax elements, and uses this prediction information to generate the predictive block of the block being decoded. A decoded video block is formed by summing the residual block from the inverse transform and inverse quantization unit 302 with the corresponding predictive block generated by the intra prediction unit 303 or the motion compensation unit 304. The decoded video signal then passes through the filtering unit 305 to remove blocking artifacts, improving video quality.
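The decoder dataflow just described (entropy decoding, inverse transform/quantization, prediction, reconstruction by summation, in-loop filtering) can be sketched as a toy pipeline. Every stage here is a stand-in placeholder, not a real codec implementation.

```python
# Toy sketch of the decoding pipeline: entropy decode -> inverse transform /
# inverse quantization -> prediction -> reconstruction -> in-loop filtering.
# All stages are illustrative stand-ins.

def entropy_decode(bitstream):
    # Stand-in for CABAC: yields "transform coefficients".
    return list(bitstream)

def inverse_transform_quantize(coeffs):
    # Stand-in for inverse quantization + inverse transform: pixel-domain residual.
    return [c * 2 for c in coeffs]

def predict(reference_block):
    # Stand-in for intra prediction or motion compensation.
    return reference_block

def loop_filter(block):
    # Stand-in for deblocking/SAO: here just clip to an 8-bit range.
    return [min(255, max(0, v)) for v in block]

def decode_block(bitstream, reference_block):
    residual = inverse_transform_quantize(entropy_decode(bitstream))
    prediction = predict(reference_block)
    # Reconstruction: residual + prediction, then in-loop filtering.
    return loop_filter([r + p for r, p in zip(residual, prediction)])

print(decode_block(bytes([1, 2, 3]), [100, 100, 100]))  # [102, 104, 106]
```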
  • FIG. 3 is a schematic flowchart 1 of an implementation of an image decoding method according to an embodiment of the present application.
  • the image decoding method of the above decoder may include the following steps:
  • Step 101 Obtain the bit stream corresponding to the current video image.
  • the decoder may first obtain the bit stream corresponding to the current video image.
  • A bit stream, i.e., a code stream, refers to the amount of data a video file uses per unit of time and is an important aspect of picture-quality control in video encoding.
  • After encoding the current video image, the encoder can generate corresponding stream data for storage or transmission. Accordingly, when the decoder decodes the current video image, it can first receive the bitstream corresponding to the current video image.
  • Step 102 Parse the bit stream to obtain the control identifier corresponding to the current video image.
  • the decoder may parse the bit stream to obtain the control identifier corresponding to the current video image.
  • The control identifier may be used to characterize the relationship between the different image components corresponding to the current video image: the image components may be mutually dependent, or they may be mutually independent.
  • The encoder may determine the control identifier according to the relationship between the different components in the current video image. For example, if the encoder disables the dependencies between different image components when encoding the current video image, i.e., the dependencies between the luma component and the chroma components and between the different chroma components, it sets the control flag in the bitstream to 0; if the encoder enables those dependencies, it sets the control flag in the bitstream to 1.
  • Correspondingly, when the decoder parses the bitstream corresponding to the current video image, if the parsed control flag is 1, the decoder turns on the dependencies between different image components when decoding the video image, i.e., the dependencies between the luma component and the chroma components and between the different chroma components; if the parsed control flag is 0, the decoder turns those dependencies off.
  • The different image components corresponding to the current video image may include the first image component, the second image component, and the third image component, i.e., the three components Y, Cb, and Cr. Therefore, when the decoder characterizes the relationship between image components through the control identifier, the control identifier may characterize the mutually dependent or mutually independent relationship among all three of the first, second, and third image components, or among at least two of them.
  • In practical applications, the parsed control identifier may be located in the sequence parameter set (SPS), the picture parameter set (PPS), supplemental enhancement information (SEI), a coding tree unit, a coding unit, or the like.
  • SPS: Sequence Parameter Set
  • PPS: Picture Parameter Set
  • SEI: Supplemental Enhancement Information
  • VCL: Video Coding Layer
  • In an H.264 bitstream, the first NAL unit is the SPS, the second NAL unit is the PPS, and the third NAL unit is an instantaneous decoding refresh (IDR) frame.
  • each frame of data corresponding to the video image is a NAL Unit.
  • the information in the SPS is crucial. If the data in the SPS is lost or an error occurs, the decoding process is likely to fail.
  • SPS is also commonly used as initialization information of decoder instances in video processing frameworks of certain platforms such as VideoToolBox for iOS.
  • the SPS stores a set of global parameters of the encoded video sequence.
  • the so-called coded video sequence is a sequence composed of the structure of the pixel data of each frame of the original video after being encoded.
  • the parameters that the encoded data of each frame depends on are stored in the PPS.
  • The SPS and PPS NAL units are usually located at the beginning of the entire bitstream, but in some special cases these two structures may also appear in the middle of the bitstream, either because the decoder needs to start decoding from the middle of the stream, or because the encoder changed bitstream parameters (such as the image resolution) during encoding.
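Locating the SPS and PPS NAL units described above can be sketched by scanning an H.264 Annex-B byte stream for start codes. In H.264, the NAL unit type is the low 5 bits of the byte following the start code (7 = SPS, 8 = PPS, 5 = IDR slice); the scanner below is a simplified sketch that only handles 3-byte start codes.

```python
# Sketch of locating SPS/PPS NAL units in an H.264 Annex-B byte stream.
# Simplified: only 3-byte start codes (0x000001) are handled.

NAL_NAMES = {7: "SPS", 8: "PPS", 5: "IDR"}

def nal_unit_types(stream: bytes):
    types = []
    i = 0
    while i < len(stream) - 3:
        if stream[i:i + 3] == b"\x00\x00\x01":
            # nal_unit_type is the low 5 bits of the NAL header byte.
            types.append(stream[i + 3] & 0x1F)
            i += 4
        else:
            i += 1
    return types

# Toy stream: SPS (0x67), then PPS (0x68), then an IDR slice (0x65) --
# the usual order at the start of a bitstream, as the text describes.
stream = b"\x00\x00\x01\x67" + b"\x00\x00\x01\x68" + b"\x00\x00\x01\x65"
print([NAL_NAMES.get(t, t) for t in nal_unit_types(stream)])  # ['SPS', 'PPS', 'IDR']
```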
  • Step 103 When the decoding mode corresponding to the control identifier is independent decoding of image components, the preset cross decoding function is turned off; wherein, the preset cross decoding function is used to perform decoding processing based on the dependencies between the image components.
  • the decoder may turn off the preset cross decoding function.
  • the preset cross decoding function is used to perform decoding processing based on the dependencies between image components. That is to say, when decoding the current video image, the preset cross decoding function characterizes the existence of cross-component dependencies, that is, the decoder can decode the current video image through CCLM or DM.
  • Specifically, the decoder may first determine the decoding method corresponding to the control identifier, which is either independent decoding of image components or cross decoding of image components.
  • When the decoding method corresponding to the control identifier is independent decoding of image components, the decoder cannot use the dependencies between different image components in the decoding process, i.e., it decodes the image components independently.
  • If the control flag obtained by the decoder from parsing the bitstream is 0, it can be considered that the encoder disabled the dependencies between the luma component and the chroma components, and between the different chroma components, when encoding the current video image; it can then be determined that the decoding method corresponding to the control identifier is independent decoding of image components. The decoder likewise disables the dependencies between the luma component and the chroma components and between the different chroma components, and then decodes the current video image.
  • Further, since the decoder characterizes the relationship between the different image components of the current video image through the control identifier, the control identifier may characterize the mutually dependent or mutually independent relationship among the first, second, and third image components, or among at least two of them. Therefore, the decoding method corresponding to the control identifier may be independent decoding or cross decoding among all three image components, or independent decoding or cross decoding between any two image components.
  • For example, when the control flag characterizes the relationship among the three image components Y, Cb, and Cr, if the control flag in the bitstream is 1, it can be considered that the encoder enabled the dependencies between the luma component and the chroma components and between the different chroma components when encoding the current video image, so the decoder can enable the dependencies among the three image components Y, Cb, and Cr. When the control flag characterizes only the two image components Cb and Cr, if the control flag in the bitstream is 0, it can be considered that the encoder disabled the dependency between the different chroma components, so the decoder disables the dependency between the Cb and Cr image components, while the dependencies between Y and Cb and between Y and Cr need not be disabled.
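The per-pair granularity just described can be sketched as a map from component pairs to flags. The flag layout is hypothetical; the point is that disabling one pair (Cb/Cr) need not disable the others (Y/Cb, Y/Cr).

```python
# Illustrative sketch: control flags at the granularity of component pairs.
# A flag of 1 means the cross-component dependency for that pair is allowed.

def allowed_dependencies(flags):
    # Return the set of component pairs whose dependency is enabled.
    return {pair for pair, f in flags.items() if f == 1}

# Cb/Cr flag is 0: close only the Cb-Cr dependency; Y-Cb and Y-Cr stay open,
# matching the example in the text.
flags = {("Y", "Cb"): 1, ("Y", "Cr"): 1, ("Cb", "Cr"): 0}
print(sorted(allowed_dependencies(flags)))  # [('Y', 'Cb'), ('Y', 'Cr')]
```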
  • FIG. 4 is a second schematic flowchart of an implementation of an image decoding method proposed in an embodiment of the present application. After the decoder parses the bit stream to obtain the control identifier corresponding to the current video image, the image decoding method of the decoder may further include the following steps:
  • Step 104 When the decoding mode corresponding to the control identifier is cross-decoding of image components, the preset cross-decoding function is enabled.
  • the decoder may enable the preset cross-decoding function.
  • Specifically, after the decoder parses the bitstream to obtain the control identifier, if the decoding method corresponding to the control identifier is cross decoding of image components, the decoder can use the dependencies between image components in the decoding process, i.e., the current video image can be decoded through CCLM or DM.
  • If the control flag obtained by the decoder from parsing the bitstream is 1, it can be considered that the encoder enabled the dependencies between the luma component and the chroma components and between the different chroma components when encoding the current video image; it can then be determined that the decoding mode corresponding to the control identifier is cross decoding of image components, and the decoder likewise enables those dependencies before decoding the current video image.
  • FIG. 5 is a schematic flowchart 3 of an implementation of an image decoding method proposed in an embodiment of the present application.
  • As shown in FIG. 5, after the decoder parses the bit stream to obtain the control identifier corresponding to the current video image, i.e. after step 102, the image decoding method of the decoder may further include the following steps:
  • Step 105 When the decoding mode corresponding to the control identifier is DM prohibited, the DM representation mode is turned off.
  • In the embodiments of the present application, if the decoding mode corresponding to the control identifier is DM prohibited, the decoder may turn off the DM representation mode.
  • The control identifier in the bit stream may also enable or disable the DM technology. Specifically, if the decoding mode corresponding to the control identifier is DM prohibited, the decoder needs to turn off the DM representation mode when decoding the current video image; if the decoding mode corresponding to the control identifier is DM allowed, the decoder needs to turn on the DM representation mode when decoding the current video image.
  • The control identifier in the bit stream can also turn on or off any technology or representation mode that requires dependencies between image components. That is to say, in the implementation of this application, the control identifier in the bit stream is not only an enabling tool for the DM technology but may also enable other technologies that require dependencies between image components, which is not specifically limited in this application.
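Step 105 can be written as a small predicate; the string values of the identifier used here are assumptions for illustration, since the patent does not fix a concrete encoding for the DM flag:

```python
def dm_mode_enabled(control_identifier):
    """Step 105 in miniature: turn the DM representation mode off when the
    identifier says DM is prohibited, and on when DM is allowed."""
    if control_identifier == "dm_prohibited":
        return False                      # close the DM representation mode
    if control_identifier == "dm_allowed":
        return True                       # keep the DM representation mode on
    raise ValueError("unknown control identifier")
```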
  • The decoder obtains the bit stream corresponding to the current video image; parses the bit stream to obtain the control identifier corresponding to the current video image; and when the decoding mode corresponding to the control identifier is independent decoding of image components, turns off the preset cross-decoding function, where the preset cross-decoding function is used to perform decoding based on dependencies between image components.
  • That is, the decoder can first parse the bit stream corresponding to the current video image to obtain the control identifier used to determine whether dependencies between image components are allowed. If the decoding mode corresponding to the control identifier is independent decoding of image components, that is, there is no dependency between image components, the decoder needs to turn off the preset cross-decoding function; in other words, the decoder does not decode the current video image on the basis of dependencies between image components. In scenarios that need fast processing, or that require a high degree of parallel processing, parallel encoding and decoding can thus be implemented, reducing codec complexity.
  • At the same time, the bits that would indicate at the CU layer that no dependent decoding between image components is performed are saved, improving coding efficiency in such scenarios.
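The overall flow of steps 101 through 104 can be sketched as below. The bit-stream field name is an assumption; the patent only says the identifier is carried somewhere in the bit stream:

```python
def decode_mode(bitstream):
    """Steps 101-104 in miniature: obtain the bit stream, parse the control
    identifier, and turn the preset cross-decoding function off (identifier 0,
    independent decoding) or on (identifier 1, cross decoding)."""
    flag = bitstream["control_identifier"]    # step 102: parse the identifier
    cross_decoding = (flag == 1)              # step 103 / step 104
    return "cross" if cross_decoding else "independent"
```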
  • The image components in steps 101 to 103 above may include a first image component, a second image component, and a third image component.
  • The first, second, and third image components may respectively be the luminance component Y, the blue chrominance component Cb, and the red chrominance component Cr; for example, the first image component may be the luminance component Y, the second image component may be the red chrominance component Cr, and the third image component may be the blue chrominance component Cb, which is not specifically limited in this embodiment of the present application.
  • FIG. 6 is a schematic flowchart 4 of an implementation of an image decoding method according to an embodiment of the present application.
  • The method for the decoder to parse the bit stream and obtain the control identifier corresponding to the current video image may include the following steps:
  • Step 201 After parsing the bit stream, obtain the control identifier in the SPS in the bit stream.
  • the decoder may perform parsing processing on the bitstream to obtain the control identifier corresponding to the current video image in the SPS in the bitstream.
  • The parsed control identifier may be located in the SPS. Specifically, since the SPS stores a set of global parameters of the coded video sequence, if the decoder obtains the control identifier from the SPS in the bit stream, the control identifier can be applied to all image frames of the current video image.
  • The decoder may decode all image frames of the current video image according to the control identifier. For example, if the decoding mode corresponding to the control identifier is cross decoding of image components, the decoder can enable the preset cross-decoding function and use the dependencies between different image components to decode all image frames; that is, all image frames can be decoded through CCLM or DM.
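The sequence-wide scope of an SPS-level flag can be sketched as a per-frame map; the dictionary shape is an illustrative assumption:

```python
def apply_sps_flag(sps_flag, num_frames):
    """A control identifier carried in the SPS holds global parameters of the
    coded video sequence, so it governs every image frame: map each frame
    index to whether cross decoding is enabled."""
    return {frame: (sps_flag == 1) for frame in range(num_frames)}
```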
  • Step 202 After parsing the bit stream, obtain a control identifier in the PPS in the bit stream.
  • the decoder may perform parsing processing on the bit stream to obtain the control identifier corresponding to the current video image in the PPS in the bit stream.
  • The parsed control identifier may be located in the PPS. Specifically, since the PPS stores the parameters on which the encoded data of one frame depends, if the decoder obtains the control identifier from the PPS in the bit stream, the control identifier can be applied to the frame of the current video image corresponding to the PPS.
  • The decoder may decode the frame of the current video image corresponding to the PPS according to the control identifier. For example, if the decoding mode corresponding to the control identifier is cross decoding of image components, the decoder can enable the preset cross-decoding function and use the dependencies between different image components to decode the frame of the current video image corresponding to the PPS; that is, that frame can be decoded through CCLM or DM.
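In contrast to the SPS case, a PPS-level flag only reaches the frame(s) that reference that PPS; a sketch, with the frame-set argument as an assumed stand-in for PPS referencing:

```python
def apply_pps_flag(pps_flag, pps_frames, num_frames):
    """A control identifier carried in a PPS governs only the frame(s) that
    reference that PPS; frames outside that set are left undecided (None)."""
    return {f: (pps_flag == 1) if f in pps_frames else None
            for f in range(num_frames)}
```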
  • Step 203 After parsing the bit stream, obtain a control identifier in the SEI in the bit stream.
  • the decoder may perform parsing processing on the bitstream to obtain the control identifier corresponding to the current video image in the SEI in the bitstream.
  • The parsed control identifier may be located in the SEI. Specifically, since the SEI plays an auxiliary role in the decoding process and is used to add additional video information to the bit stream, if the decoder obtains the control identifier from the SEI in the bit stream, the control identifier can be applied to the image information corresponding to the SEI in the current video image.
  • the parsed control identifier may be located in one or more of SPS, PPS, SEI, coding tree unit and coding unit in the bitstream.
  • The decoder can perform adaptive decoding processing on the corresponding video image information according to the specific position of the control identifier in the bit stream.
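One way to model position-dependent scope is a resolution function per frame. The narrower-scope-wins precedence shown here is an assumption made for illustration; the patent only states that the identifier may sit at one or more of these levels:

```python
def effective_flag(frame_idx, sps_flag, pps_flags):
    """Resolve which control identifier governs one frame: the SPS flag is
    the sequence-wide default, and a PPS flag, when present for the frame,
    takes effect for that frame (assumed precedence)."""
    return pps_flags.get(frame_idx, sps_flag)
```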
  • FIG. 7 is a schematic flowchart 5 of an implementation of an image decoding method proposed in an embodiment of the present application.
  • the image decoding method of the decoder may further include the following steps:
  • Step 106 Decode the current video image according to the DM.
  • the decoder may decode the current video image according to the DM after the preset cross-decoding function is turned on.
  • In the early H.266/VVC test models, the Joint Exploration Model (JEM) and the VVC Test Model (VTM), the cross-component representation of prediction modes is used.
  • When the decoding mode corresponding to the control identifier is cross decoding of image components, after the decoder enables the preset cross-decoding function, i.e. after step 104 above, the decoder can decode the current video image not only according to CCLM or DM; any technology that requires dependencies between image components can also be used to decode the current video image, which is not specifically limited in this application.
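When cross decoding is enabled, CCLM's core step is a linear mapping from reconstructed luma samples to predicted chroma samples, pred_C = alpha * rec_L + beta. The derivation of alpha and beta from neighbouring reconstructed samples, and the luma downsampling, are omitted in this sketch:

```python
def cclm_predict(rec_luma, alpha, beta):
    """Predict chroma samples from (downsampled) reconstructed luma samples
    with CCLM's linear model pred_C = alpha * rec_L + beta."""
    return [alpha * y + beta for y in rec_luma]
```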
  • FIG. 8 is a schematic diagram 1 of the composition structure of the decoder proposed by the embodiment of the present application.
  • As shown in FIG. 8, the decoder 100 proposed by the embodiment of the present application may include an acquiring section 101, a parsing section 102, a closing section 103, an opening section 104, and a decoding section 105.
  • the acquiring section 101 is configured to acquire a bit stream corresponding to the current video image.
  • the parsing part 102 is configured to parse the bit stream to obtain the control identifier corresponding to the current video image.
  • The closing section 103 is configured to turn off the preset cross-decoding function when the decoding mode corresponding to the control identifier is independent decoding of image components, where the preset cross-decoding function is used to perform decoding based on the dependencies between the image components.
  • The opening section 104 is configured to turn on the preset cross-decoding function when, after the bit stream is parsed to obtain the control identifier corresponding to the current video image, the decoding mode corresponding to the control identifier is cross decoding of image components.
  • the image component includes at least two image components among a first image component, a second image component, and a third image component.
  • The decoding section 105 is further configured to decode the current video image according to the DM after the preset cross-decoding function is turned on, when the decoding mode corresponding to the control identifier is cross decoding of image components.
  • The closing section 103 is further configured to turn off the DM representation mode when, after the bit stream is parsed to obtain the control identifier corresponding to the current video image, the decoding mode corresponding to the control identifier is DM prohibited.
  • the parsing section 102 is specifically configured to obtain the control identifier in the SPS in the bitstream after parsing the bitstream.
  • the parsing section 102 is further specifically configured to obtain the control identifier in the PPS in the bitstream after parsing the bitstream.
  • the parsing part 102 is further specifically configured to obtain the control identifier in the SEI in the bitstream after parsing the bitstream.
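The sectioned decoder of FIG. 8 can be sketched as a class with one method per section. The bodies are placeholders standing in for the real codec pipeline, and the bit-stream field name is an assumption:

```python
class Decoder:
    """Sketch of decoder 100 of FIG. 8: an acquiring section (101), a parsing
    section (102), and a closing section (103)."""

    def acquire(self, source):             # acquiring section 101
        return source                      # obtain the bit stream

    def parse(self, bitstream):            # parsing section 102
        return bitstream["control_identifier"]

    def close_cross_decoding(self, flag):  # closing section 103
        return flag != 0                   # False: cross decoding turned off
```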
  • FIG. 9 is a second schematic diagram of the composition structure of the decoder proposed by the embodiment of the present application.
  • As shown in FIG. 9, the decoder 100 proposed by the embodiment of the present application may further include a processor 106, a memory 107 storing instructions executable by the processor 106, a communication interface 108, and a bus 109 for connecting the processor 106, the memory 107, and the communication interface 108.
  • The processor 106 may be at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a Central Processing Unit (CPU), a controller, a microcontroller, and a microprocessor. Understandably, for different devices, other electronic components may be used to realize the above processor functions, which is not specifically limited in the embodiments of the present application.
  • The decoder may further include the memory 107, which may be connected to the processor 106, where the memory 107 is used to store executable program code, and the program code includes computer operation instructions.
  • The memory 107 may include a high-speed RAM memory, and may also include a non-volatile memory, for example, at least two disk memories.
  • The bus 109 is used to connect the communication interface 108, the processor 106, and the memory 107, and to provide mutual communication among these components.
  • the memory 107 is used to store instructions and data.
  • The processor 106 is configured to obtain a bit stream corresponding to the current video image; parse the bit stream to obtain a control identifier corresponding to the current video image; and when the decoding mode corresponding to the control identifier is independent decoding of image components, turn off the preset cross-decoding function, where the preset cross-decoding function is used to perform decoding based on the dependencies between the image components.
  • The above memory 107 may be a volatile memory, such as a Random-Access Memory (RAM); or a non-volatile memory, such as a Read-Only Memory (ROM), a flash memory, a Hard Disk Drive (HDD), or a Solid-State Drive (SSD); or a combination of the above types of memories, and provides instructions and data to the processor 106.
  • each functional module in this embodiment may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above integrated unit may be implemented in the form of hardware or software function module.
  • If the integrated unit is implemented as a software function module and is not sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • Based on this understanding, the technical solution of this embodiment, in essence, or the part contributing to the prior art, or all or part of the technical solution, can be embodied in the form of a software product.
  • The computer software product is stored in a storage medium and includes several instructions to make a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor execute all or part of the steps of the method of this embodiment.
  • The foregoing storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
  • In the decoder provided by an embodiment of the present application, the decoder obtains a bit stream corresponding to a current video image; parses the bit stream to obtain a control identifier corresponding to the current video image; and when the decoding mode corresponding to the control identifier is independent decoding of image components, turns off the preset cross-decoding function, where the preset cross-decoding function is used for decoding based on dependencies between image components.
  • An embodiment of the present application provides a first computer-readable storage medium on which a program is stored, and when the program is executed by a processor, the image decoding method described above is implemented.
  • The program instructions corresponding to an image decoding method in this embodiment may be stored on a storage medium such as an optical disk, a hard disk, or a USB flash drive.
  • When the program instructions are executed, the preset cross-decoding function is turned off when the decoding mode corresponding to the control identifier is independent decoding of image components, where the preset cross-decoding function is used to perform decoding based on dependencies between image components.
  • The embodiments of the present application may be provided as methods, systems, or computer program products. Therefore, the present application may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware. Moreover, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage and optical storage) containing computer-usable program code.
  • Each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions.
  • These computer program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus, and the instruction apparatus implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions can also be loaded onto a computer or other programmable data processing device, so that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • Embodiments of the present application provide an image decoding method, a decoder, and a computer storage medium.
  • The decoder obtains the bit stream corresponding to the current video image; parses the bit stream to obtain the control identifier corresponding to the current video image; and when the decoding mode corresponding to the control identifier is independent decoding of image components, turns off the preset cross-decoding function, where the preset cross-decoding function is used to perform decoding based on the dependencies between the image components.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Discrete Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Embodiments of the present application disclose an image decoding method, a decoder, and a computer storage medium. The image decoding method includes: obtaining a bit stream corresponding to a current video image; parsing the bit stream to obtain a control identifier corresponding to the current video image; and when the decoding mode corresponding to the control identifier is independent decoding of image components, turning off a preset cross-decoding function, where the preset cross-decoding function is used to perform decoding based on dependencies between image components.

Description

Image decoding method, decoder, and computer storage medium
Technical Field
The embodiments of the present application relate to the technical field of video image encoding and decoding, and in particular to an image decoding method, a decoder, and a computer storage medium.
Background
In the next-generation video coding standard H.266, or Versatile Video Coding (VVC), dependencies across components are allowed, so that prediction from luminance values to chrominance values, or between chrominance values, can be realized through Cross-component Linear Model Prediction (CCLM) and the cross-component mode representation method (Direct Mode, DM).
Although cross-component coding tools such as CCLM can improve coding efficiency, in scenarios that require fast processing, or that require a high degree of parallel processing, such cross-component dependent coding cannot be used effectively for parallel encoding and decoding, and suffers from the defect of high complexity.
Summary
Embodiments of the present application provide an image decoding method, a decoder, and a computer storage medium, which can realize parallel encoding and decoding, and reduce codec complexity, in scenarios that require fast processing or a high degree of parallel processing.
The technical solutions of the embodiments of the present application are implemented as follows:
An embodiment of the present application provides an image decoding method, the method including:
obtaining a bit stream corresponding to a current video image;
parsing the bit stream to obtain a control identifier corresponding to the current video image; and
when the decoding mode corresponding to the control identifier is independent decoding of image components, turning off a preset cross-decoding function, where the preset cross-decoding function is used to perform decoding based on dependencies between image components.
Embodiments of the present application provide an image decoding method, a decoder, and a computer storage medium. The decoder obtains a bit stream corresponding to a current video image; parses the bit stream to obtain a control identifier corresponding to the current video image; and when the decoding mode corresponding to the control identifier is independent decoding of image components, turns off a preset cross-decoding function, where the preset cross-decoding function is used to perform decoding based on dependencies between image components. That is, in the embodiments of the present application, the decoder may first parse the bit stream corresponding to the current video image to obtain the control identifier that determines whether dependencies between image components are allowed. If the decoding mode corresponding to the control identifier is independent decoding of image components, that is, dependencies between image components are not supported, the decoder needs to turn off the preset cross-decoding function; in other words, the decoder does not decode the current video image on the basis of dependencies between image components. Parallel encoding and decoding can thus be realized, and codec complexity reduced, in scenarios that require fast processing or a high degree of parallel processing. At the same time, in these scenarios the bits that would indicate at the CU layer that no dependent decoding between image components is performed are saved, improving coding efficiency in such scenarios.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of the composition structure of a video encoding system;
FIG. 2 is a schematic diagram of the composition structure of a video decoding system;
FIG. 3 is a schematic flowchart 1 of an implementation of an image decoding method proposed in an embodiment of the present application;
FIG. 4 is a schematic flowchart 2 of an implementation of an image decoding method proposed in an embodiment of the present application;
FIG. 5 is a schematic flowchart 3 of an implementation of an image decoding method proposed in an embodiment of the present application;
FIG. 6 is a schematic flowchart 4 of an implementation of an image decoding method proposed in an embodiment of the present application;
FIG. 7 is a schematic flowchart 5 of an implementation of an image decoding method proposed in an embodiment of the present application;
FIG. 8 is a schematic diagram 1 of the composition structure of the decoder proposed in an embodiment of the present application;
FIG. 9 is a schematic diagram 2 of the composition structure of the decoder proposed in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. It should be understood that the specific embodiments described here are only used to explain the related application and do not limit the application. It should also be noted that, for ease of description, only the parts related to the application are shown in the drawings.
Encoding a video means encoding it frame by frame; likewise, decoding a compressed video bit stream means decoding the bit stream frame by frame. In almost all international video image coding standards, when a frame is encoded it is divided into several sub-images of MxM pixels, called coding units (Coding Unit, CU), and the sub-images are encoded block by block with the CU as the basic coding unit. Common values of M are 4, 8, 16, 32, and 64. Therefore, encoding a video image sequence means encoding the coding units, i.e. the CUs, of each frame in turn; decoding the bit stream of a video image sequence likewise means decoding the CUs of each frame in turn, finally reconstructing the entire video image sequence.
In a video image, a coding block is generally characterized by a first image component, a second image component, and a third image component, which are respectively a luminance component, a blue chrominance component, and a red chrominance component. Specifically, the luminance component is usually denoted by the symbol Y, the blue chrominance component by the symbol Cb, and the red chrominance component by the symbol Cr.
It should be noted that, in the embodiments of the present application, the first, second, and third image components may respectively be the luminance component Y, the blue chrominance component Cb, and the red chrominance component Cr; for example, the first image component may be the luminance component Y, the second image component may be the red chrominance component Cr, and the third image component may be the blue chrominance component Cb, which is not specifically limited in the embodiments of the present application.
In H.266, to further improve coding performance and coding efficiency, Cross-component Prediction (CCP) has been extended and improved. In H.266, CCLM realizes prediction from the first image component to the second image component, from the first image component to the third image component, and between the second and third image components. That is, in addition to predicting a chrominance component from the luminance component, i.e. predicting the second or the third image component from the first image component, the CCLM prediction mode also includes prediction between the two chrominance components, i.e. a prediction method between the second and third image components. In the embodiments of the present application, the prediction between the second and third image components may predict the Cr component from the Cb component, or predict the Cb component from the Cr component.
Technologies such as CCLM and DM, which allow cross-component dependencies in video coding standards, may also be involved in future media coding such as 3D video and point clouds. In these technologies, since the luminance component can be used to predict information such as the chrominance components, coding modes, and residuals, and prediction between chrominance components is also possible, coding efficiency can be greatly improved; however, the cross-component dependency approach also brings challenges for parallel encoding and decoding. That is, in some scenarios it may be necessary to close the dependencies between the luminance component and the chrominance components, between different chrominance components, or between different color components, so as to reduce the complexity of the codec process.
The embodiments of the present application propose adding to the bit stream a control identifier indicating whether cross encoding/decoding between different image components is allowed, so that technologies such as CCLM and DM can be enable-controlled, and the codec can be used effectively for parallel encoding and decoding, overcoming the defect of high codec complexity. FIG. 1 is a schematic diagram of the composition structure of a video encoding system. As shown in FIG. 1, the video encoding system 200 includes a transform and quantization unit 201, an intra estimation unit 202, an intra prediction unit 203, a motion compensation unit 204, a motion estimation unit 205, an inverse transform and inverse quantization unit 206, a filter control analysis unit 207, a filtering unit 208, an entropy coding unit 209, a current video image buffer unit 210, and so on, where the filtering unit 208 can implement deblocking filtering and Sample Adaptive Offset (SAO) filtering, and the entropy coding unit 209 can implement header information coding and Context-based Adaptive Binary Arithmetic Coding (CABAC).
For the input original video signal, a Coding Tree Unit (CTU) can be obtained by preliminary division, and by continuing content-adaptive division of a CTU a CU can be obtained; a CU generally contains one or more Coding Blocks (CB). The residual pixel information obtained after intra or inter prediction is then transformed by the transform and quantization unit 201, including transforming the residual information from the pixel domain to the transform domain, and the resulting transform coefficients are quantized to further reduce the bit rate. The intra estimation unit 202 and the intra prediction unit 203 are used to intra-predict the video coding block; specifically, they are used to determine the intra prediction mode to be used to encode the video coding block. The motion compensation unit 204 and the motion estimation unit 205 are used to perform inter prediction coding of the received video coding block relative to one or more blocks in one or more reference frames to provide temporal prediction information; the motion estimation performed by the motion estimation unit 205 is the process of producing a motion vector that can estimate the motion of the video coding block, and motion compensation is then performed by the motion compensation unit 204 based on the motion vector determined by the motion estimation unit 205. After determining the intra prediction mode, the intra prediction unit 203 also provides the selected intra prediction data to the entropy coding unit 209, and the motion estimation unit 205 also sends the computed motion vector data to the entropy coding unit 209. In addition, the inverse transform and inverse quantization unit 206 is used for reconstruction of the video coding block: a residual block is reconstructed in the pixel domain, blocking artifacts of the reconstructed residual block are removed through the filter control analysis unit 207 and the filtering unit 208, and the reconstructed residual block is then added to a predictive block in a frame of the current video image buffer unit 210 to produce a reconstructed video coding block. The entropy coding unit 209 is used to encode the various coding parameters and the quantized transform coefficients; in a CABAC-based coding algorithm, the context may be based on neighboring coding blocks, may be used to encode information indicating the determined intra prediction mode, and outputs the bit stream of the video signal. The current video image buffer unit 210 is used to store the reconstructed video coding blocks for prediction reference; as video image encoding proceeds, new reconstructed video coding blocks are continually generated, and these reconstructed video coding blocks are all stored in the current video image buffer unit 210.
FIG. 2 is a schematic diagram of the composition structure of a video decoding system. As shown in FIG. 2, the video decoding system 300 includes an entropy decoding unit 301, an inverse transform and inverse quantization unit 302, an intra prediction unit 303, a motion compensation unit 304, a filtering unit 305, a current video image buffer unit 306, and so on, where the entropy decoding unit 301 can implement header information decoding and CABAC decoding, and the filtering unit 305 can implement deblocking filtering and SAO filtering. After the input video signal has been through the encoding process of FIG. 1, the bit stream of the video signal is output; the bit stream is input to the video decoding system 300, and first passes through the entropy decoding unit 301 to obtain the decoded transform coefficients. The transform coefficients are processed by the inverse transform and inverse quantization unit 302 so as to produce a residual block in the pixel domain. The intra prediction unit 303 may be used to produce prediction data for the current video decoding block based on the determined intra prediction mode and data from previously decoded blocks of the current frame or picture. The motion compensation unit 304 determines prediction information for the video decoding block by parsing motion vectors and other associated syntax elements, and uses the prediction information to produce a predictive block for the video decoding block being decoded. A decoded video block is formed by summing the residual block from the inverse transform and inverse quantization unit 302 with the corresponding predictive block produced by the intra prediction unit 303 or the motion compensation unit 304. The decoded video signal passes through the filtering unit 305 to remove blocking artifacts, which can improve video quality; the decoded video blocks are then stored in the current video image buffer unit 306, which stores the reference images used for subsequent intra prediction or motion compensation and is also used for the output of the video signal, i.e. the restored original video signal is obtained.
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application.
In an embodiment, an embodiment of the present application provides an image decoding method. FIG. 3 is a schematic flowchart 1 of an implementation of an image decoding method proposed in an embodiment of the present application. As shown in FIG. 3, in the embodiments of the present application, the method for the decoder to decode an image may include the following steps:
Step 101: Obtain a bit stream corresponding to a current video image.
In the embodiments of the present application, the decoder may first obtain the bit stream corresponding to the current video image.
Further, in the implementation of the present application, the bit stream (data rate), i.e. the code stream, refers to the data traffic a video file uses per unit time, and is an important part of picture quality control in video coding.
It should be noted that, in the implementation of the present application, after encoding the current video image, the encoder can generate the corresponding code stream data for storage or transmission; correspondingly, when decoding the current video image, the decoder may first receive the bit stream corresponding to the current video image.
Step 102: Parse the bit stream to obtain a control identifier corresponding to the current video image.
In the embodiments of the present application, after obtaining the bit stream corresponding to the current video image, the decoder may parse the bit stream to obtain the control identifier corresponding to the current video image.
It should be noted that, in the implementation of the present application, the control identifier may be used to characterize the relationship between the different image components corresponding to the current video image. Specifically, in the embodiments of the present application, the relationship between the different image components corresponding to the current video image may be mutual dependence, or mutual independence.
Further, when encoding the current video image, the encoder may determine the control identifier according to the relationship between the different components in the current video image. For example, if the encoder closed the dependencies between different image components during encoding of the current video image, i.e. closed the dependencies between the luminance component and the chrominance components and between the different chrominance components, the encoder sets the control identifier in the bit stream to 0; if the encoder opened the dependencies between different image components during encoding, i.e. opened the dependencies between the luminance component and the chrominance components and between the different chrominance components, the encoder sets the control identifier in the bit stream to 1.
Correspondingly, when the decoder parses the bit stream corresponding to the current video image, if the parsed control identifier in the bit stream is 1, the decoder may consider that the dependencies between different image components need to be opened when decoding the video image, i.e. the dependencies between the luminance component and the chrominance components and between different chrominance components are opened; if the parsed control identifier in the bit stream is 0, the decoder may consider that the dependencies between different image components need to be closed when decoding the video image, i.e. the dependencies between the luminance component and the chrominance components and between different chrominance components are closed.
It should be noted that, in the embodiments of the present application, since the different image components corresponding to the current video image may include the first, second, and third image components, i.e. the three image components Y, Cb, and Cr, when the decoder uses the control identifier to characterize the relationship between the different image components of the current video image, the control identifier may characterize either the mutual dependence or mutual independence among the first, second, and third image components, or the mutual dependence or mutual independence between at least two of the first, second, and third image components. Further, in the implementation of the present application, after the decoder parses the bit stream corresponding to the current video image, the parsed control identifier may be located in one or more of the Sequence Parameter Set (SPS), the Picture Parameter Set (PPS), the Supplemental Enhancement Information (SEI), the coding tree unit, and the coding unit in the bit stream.
In the H.264/AVC video coding standard, the overall system framework is divided into two layers: the Network Abstraction Layer (NAL) and the Video Coding Layer (VCL). The VCL is responsible for effectively representing the content of the video data, while the NAL is responsible for formatting the data and providing header information, so as to ensure that the data is suitable for transmission over various channels and storage media.
Further, the H.264 standard defines multiple different NAL unit types, and different NAL units carry different data. The first NAL unit in an H.264 bit stream is the SPS; the second NAL unit in an H.264 bit stream is the PPS; the third NAL unit in an H.264 bit stream is an Instantaneous Decoding Refresh (IDR).
It should be noted that, in the embodiments of the present application, apart from the SPS and PPS, each frame of data corresponding to the video image is a NAL unit.
Further, in the embodiments of the present application, the information in the SPS is crucial; if data in the SPS is lost or erroneous, the decoding process is likely to fail. Specifically, the SPS is also commonly used as initialization information for decoder instances in the video processing frameworks of certain platforms, such as VideoToolBox on iOS.
It should be noted that, in the embodiments of the present application, the SPS stores a set of global parameters of a coded video sequence. A coded video sequence is the sequence formed by the structures obtained by encoding the frame-by-frame pixel data of the original video. The parameters on which the encoded data of each frame depends are stored in the PPS.
Further, the NAL units of the SPS and PPS are generally located at the start of the whole bit stream. In some special cases, however, these two structures may also appear in the middle of the bit stream, because the decoder may need to start decoding in the middle of the bit stream, or because the encoder changed the parameters of the bit stream (such as the image resolution) during encoding.
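The SPS/PPS placement described above can be located in an H.264 byte stream by inspecting the NAL unit header: the low five bits of the header byte are nal_unit_type, with 7 for SPS, 8 for PPS, and 5 for an IDR slice. A minimal classifier:

```python
def nal_unit_kind(header_byte):
    """Classify an H.264 NAL unit from its one-byte header; the low five
    bits are nal_unit_type (7 = SPS, 8 = PPS, 5 = IDR slice)."""
    return {7: "SPS", 8: "PPS", 5: "IDR"}.get(header_byte & 0x1F, "other")
```

For example, the common start bytes 0x67, 0x68, and 0x65 classify as SPS, PPS, and IDR respectively.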
Step 103: When the decoding mode corresponding to the control identifier is independent decoding of image components, turn off the preset cross-decoding function, where the preset cross-decoding function is used to perform decoding based on dependencies between image components.
In the embodiments of the present application, after the decoder parses the bit stream and obtains the control identifier corresponding to the current video image, if the decoding mode corresponding to the control identifier is independent decoding of image components, the decoder may turn off the preset cross-decoding function.
It should be noted that, in the implementation of the present application, the preset cross-decoding function is used to perform decoding based on dependencies between image components. That is, when decoding the current video image, the preset cross-decoding function characterizes that cross-component dependencies are allowed, i.e. the decoder may decode the current video image through methods such as CCLM or DM.
Further, in the implementation of the present application, after parsing the control identifier from the bit stream, the decoder may first determine the decoding mode corresponding to the control identifier; specifically, the decoding mode corresponding to the control identifier is either independent decoding of image components or cross decoding of image components.
It should be noted that, in the implementation of the present application, when the decoding mode corresponding to the control identifier is independent decoding of image components, the decoder cannot use the dependencies between different image components for decoding, i.e. it needs to decode each image component independently. For example, if the control identifier parsed from the bit stream is 0, it can be considered that the encoder closed the dependencies between the luminance component and the chrominance components and between different chrominance components when encoding the current video image; it can then be determined that the decoding mode corresponding to the control identifier is independent decoding of image components, and the decoder also needs to close the dependencies between the luminance component and the chrominance components and between different chrominance components before decoding the current video image.
It should be noted that, in the embodiments of the present application, since the control identifier may characterize either the mutual dependence or mutual independence among the first, second, and third image components, or between at least two of them, the decoding mode corresponding to the control identifier may include independent decoding and cross decoding among all three image components, as well as independent decoding and cross decoding between any two image components. For example, when the control identifier characterizes the relationship among the three image components Y, Cb, and Cr, if the control identifier in the bit stream is 1, it can be considered that the encoder opened the dependencies between the luminance component and the chrominance components and between different chrominance components when encoding the current video image, so the decoder can open the dependencies among the three different image components Y, Cb, and Cr; when the control identifier characterizes the relationship between the two image components Cb and Cr, if the control identifier in the bit stream is 0, it can be considered that the encoder closed the dependencies between different chrominance components when encoding the current video image, so the decoder can close the dependency between the two image components Cb and Cr, while the dependencies between the Y and Cb components and between the Y and Cr components do not need to be closed.
在本申请的实施中,进一步地,图4为本申请实施例提出的一种图像解码方法的实现流程示意图二,如图4所示,解码器在解析比特流,获得当前视频图像对应的控制标识之后,即步骤102之后,解码器进行图像解码的方法还可以包括以下步骤:
步骤104、当控制标识对应的解码方式为图像分量交叉解码时,开启预设交叉解码功能。
在本申请的实施例中,解码器在解析比特流,获得当前视频图像对应的控制标识之后,如果控制标识对应的解码方式为图像分量交叉解码,那么解码器可以开启预设交叉解码功能。
需要说明的是,在本申请的实施中,解码器在对比特流进行解码获得控制标识之后,如果控制标识对应的解码方式为图像分量交叉解码,那么解码器便可以利用不同图像分量之间的依赖关系进行解码处理,即可以通过CCLM或者DM等方式对当前视频图像进行解码处理。例如,解码器解析处理获得的比特流中的控制标识为1,可以认为编码器在对当前视频图像编码时,开启了亮度分量和色度分量之间、不同色度分量之间的依赖关系,那么便可以确定控制标识对应的解码方式为图像分量交叉解码,解码器也可以开启亮度分量和色度分量之间、不同色度分量之间的依赖关系,然后对当前视频图像进行解码处理。
Further, in the embodiments of the present application, FIG. 5 is a third schematic flowchart of an implementation of a picture decoding method proposed by an embodiment of the present application. As shown in FIG. 5, after the decoder parses the bit stream and obtains the control identifier corresponding to the current video picture, i.e. after step 102, the picture decoding method performed by the decoder may further include the following step:
Step 105: when the decoding mode corresponding to the control identifier is DM-disabled, disable the DM representation mode.
In the embodiments of the present application, after the decoder parses the bit stream and obtains the control identifier corresponding to the current video picture, if the decoding mode corresponding to the control identifier is DM-disabled, the decoder may disable the DM representation mode.
It should be noted that the control identifier in the bit stream can also enable or disable the DM technique. Specifically, if the decoding mode corresponding to the control identifier is DM-disabled, the decoder needs to disable the DM representation mode when decoding the current video picture; if the decoding mode corresponding to the control identifier is DM-enabled, the decoder needs to enable the DM representation mode when decoding the current video picture.
Further, in the embodiments of the present application, the control identifier in the bit stream can also enable or disable any technique or representation mode that requires dependency between image components. That is, the control identifier in the bit stream is not merely an enabling tool for the DM technique; it may also serve as an enabling tool for other techniques that require dependency between image components, which is not specifically limited in the present application.
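The idea that a single control identifier can gate every dependency-based tool can be sketched as a small filter over a tool registry. CCLM and DM are taken from the text; the registry structure and the third tool name are illustrative assumptions:

```python
# Tools that rely on cross-component dependency; this set is illustrative and
# could hold any dependency-based technique, as the text notes.
CROSS_COMPONENT_TOOLS = {"CCLM", "DM"}

def allowed_tools(control_flag, requested):
    """Drop every dependency-based tool when the flag selects independent decoding."""
    if control_flag == 0:  # image-component-independent decoding
        return requested - CROSS_COMPONENT_TOOLS
    return requested  # image-component-cross decoding: nothing is filtered

# With the flag at 0, only tools that need no cross-component dependency
# (here, a hypothetical "deblocking" stage) remain available.
tools = allowed_tools(0, {"CCLM", "DM", "deblocking"})
```

Extending the gate to a new dependency-based tool is then just a matter of adding its name to the registry set.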
In the picture decoding method proposed by the present application, the decoder obtains the bit stream corresponding to the current video picture; parses the bit stream to obtain the control identifier corresponding to the current video picture; and, when the decoding mode corresponding to the control identifier is image-component-independent decoding, disables the preset cross-decoding function, wherein the preset cross-decoding function is used to perform decoding based on the dependency between image components. That is, in the embodiments of the present application, the decoder may first parse the bit stream corresponding to the current video picture to obtain the control identifier that determines whether dependency between image components is allowed. If the decoding mode corresponding to the control identifier is image-component-independent decoding, i.e. dependency between image components is not supported, the decoder needs to disable the preset cross-decoding function and decode the current video picture without relying on any dependency between image components. Parallel encoding and decoding can thus be achieved in scenarios that require fast processing or a high degree of parallelism, reducing codec complexity. At the same time, in these scenarios the bits that would otherwise signal, at the CU level, that no cross-component-dependent decoding is performed can be omitted, improving the coding efficiency in such scenarios.
Based on the above embodiments, in yet another embodiment of the present application, the image components in steps 101 to 103 above may include a first image component, a second image component and a third image component. The first, second and third image components may respectively be the luma component Y, the blue chroma component Cb and the red chroma component Cr; for example, the first image component may be the luma component Y, the second image component may be the red chroma component Cr, and the third image component may be the blue chroma component Cb, which is not specifically limited in the embodiments of the present application.
Further, in the embodiments of the present application, FIG. 6 is a fourth schematic flowchart of an implementation of a picture decoding method proposed by an embodiment of the present application. As shown in FIG. 6, for the methods of steps 101 to 105 above, the method by which the decoder parses the bit stream and obtains the control identifier corresponding to the current video picture may include the following steps:
Step 201: after parsing the bit stream, obtain the control identifier from the SPS in the bit stream.
In the embodiments of the present application, after obtaining the bit stream corresponding to the current video picture, the decoder may parse the bit stream and obtain the control identifier corresponding to the current video picture from the SPS in the bit stream.
It should be noted that the control identifier parsed from the bit stream corresponding to the current video picture may be located in the SPS. Specifically, since the SPS stores a set of global parameters of a coded video sequence, if the decoder obtains the control identifier from the SPS in the bit stream, the control identifier applies to all picture frames of the current video picture.
Further, if the decoder obtains the control identifier from the SPS, the decoder may decode all picture frames of the current video picture according to the control identifier. For example, if the decoding mode corresponding to the control identifier is image-component-cross decoding, the decoder may enable the preset cross-decoding function and decode all picture frames using the dependency between different image components, i.e. decode all picture frames using tools such as CCLM or DM.
Step 202: after parsing the bit stream, obtain the control identifier from the PPS in the bit stream.
In the embodiments of the present application, after obtaining the bit stream corresponding to the current video picture, the decoder may parse the bit stream and obtain the control identifier corresponding to the current video picture from the PPS in the bit stream.
It should be noted that the control identifier parsed from the bit stream corresponding to the current video picture may be located in the PPS. Specifically, since the PPS stores the parameters on which the coded data of one picture frame depends, if the decoder obtains the control identifier from the PPS in the bit stream, the control identifier applies to the picture frame of the current video picture corresponding to that PPS.
Further, if the decoder obtains the control identifier from the PPS, the decoder may decode the picture frame of the current video picture corresponding to that PPS according to the control identifier. For example, if the decoding mode corresponding to the control identifier is image-component-cross decoding, the decoder may enable the preset cross-decoding function and decode the picture frame corresponding to that PPS using the dependency between different image components, i.e. decode that picture frame using tools such as CCLM or DM.
Step 203: after parsing the bit stream, obtain the control identifier from the SEI in the bit stream.
In the embodiments of the present application, after obtaining the bit stream corresponding to the current video picture, the decoder may parse the bit stream and obtain the control identifier corresponding to the current video picture from the SEI in the bit stream.
It should be noted that the control identifier parsed from the bit stream corresponding to the current video picture may be located in the SEI. Specifically, since the SEI plays an auxiliary role in the decoding process and is used to add supplementary video information to the bit stream, if the decoder obtains the control identifier from the SEI in the bit stream, the control identifier applies to the picture information of the current video picture corresponding to that SEI.
Further, the parsed control identifier may be located in one or more of the SPS, PPS, SEI, coding tree unit and coding unit in the bit stream. Accordingly, when processing the current video picture, the decoder may perform the corresponding decoding on the relevant video picture information according to the specific location of the control identifier in the bit stream.
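One way the multi-level placement could be resolved is sketched below. It assumes the most specific flag wins (SEI over PPS over SPS), in line with the usual layering of video parameter sets; the text itself only states that the flag may appear at one or more of these levels, so this precedence order is an assumption:

```python
def effective_flag(sps_flag, pps_flag=None, sei_flag=None):
    """Pick the most specific control flag available for the current picture.

    Assumed precedence (an illustrative choice): SEI > PPS > SPS.
    A value of None means the flag is absent at that level.
    """
    for flag in (sei_flag, pps_flag, sps_flag):
        if flag is not None:
            return flag
    raise ValueError("no control flag present in the bitstream")

# The SPS enables cross decoding for the whole sequence, but this picture's
# PPS disables it, so this picture is decoded component-independently.
flag = effective_flag(sps_flag=1, pps_flag=0)
```

A sequence-level-only bitstream would fall through to the SPS value, matching the "applies to all picture frames" behaviour described for step 201.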
In the picture decoding method proposed by the present application, the decoder obtains the bit stream corresponding to the current video picture; parses the bit stream to obtain the control identifier corresponding to the current video picture; and, when the decoding mode corresponding to the control identifier is image-component-independent decoding, disables the preset cross-decoding function, wherein the preset cross-decoding function is used to perform decoding based on the dependency between image components. That is, in the embodiments of the present application, the decoder may first parse the bit stream corresponding to the current video picture to obtain the control identifier that determines whether dependency between image components is allowed. If the decoding mode corresponding to the control identifier is image-component-independent decoding, i.e. dependency between image components is not supported, the decoder needs to disable the preset cross-decoding function and decode the current video picture without relying on any dependency between image components. Parallel encoding and decoding can thus be achieved in scenarios that require fast processing or a high degree of parallelism, reducing codec complexity.
Based on the above embodiments, in a further embodiment of the present application, FIG. 7 is a fifth schematic flowchart of an implementation of a picture decoding method proposed by an embodiment of the present application. As shown in FIG. 7, when the decoding mode corresponding to the control identifier is image-component-cross decoding, after the decoder enables the preset cross-decoding function, i.e. after step 104 above, the picture decoding method performed by the decoder may further include the following step:
Step 106: decode the current video picture according to DM.
In the embodiments of the present application, if the decoding mode corresponding to the control identifier is image-component-cross decoding, the decoder may, after enabling the preset cross-decoding function, decode the current video picture according to DM.
Further, when the DM method implements prediction from the luma component to the chroma components, in order to reduce the redundancy between the luma and chroma components and between the different chroma components, a cross-component alternative representation of the prediction mode is used in the early H.266/VVC test model (Joint Exploration Model, JEM) or in the VVC test model (VVC Test Model, VTM).
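As an illustration of luma-to-chroma prediction in the spirit of the cross-component tools named above (CCLM), the sketch below fits an affine model from reconstructed luma to chroma and applies it. The plain least-squares fit is a stand-in chosen for clarity only; it is not the parameter derivation that any particular test model or standard specifies:

```python
def fit_linear_model(luma, chroma):
    """Fit chroma ≈ alpha * luma + beta over neighbouring reconstructed samples."""
    n = len(luma)
    mean_y = sum(luma) / n
    mean_c = sum(chroma) / n
    cov = sum((y - mean_y) * (c - mean_c) for y, c in zip(luma, chroma))
    var = sum((y - mean_y) ** 2 for y in luma)
    alpha = cov / var if var else 0.0
    beta = mean_c - alpha * mean_y
    return alpha, beta

def predict_chroma(luma, alpha, beta):
    """Predict chroma samples from co-located (downsampled) luma samples."""
    return [alpha * y + beta for y in luma]

# Neighbouring reconstructed samples obeying the exact relation c = 0.5*y + 10,
# so the fit recovers alpha = 0.5 and beta = 10.
alpha, beta = fit_linear_model([20, 40, 60, 80], [20, 30, 40, 50])
pred = predict_chroma([100], alpha, beta)
```

This is exactly the kind of cross-component dependency that the control identifier of step 103 would switch off: with independent decoding selected, no such luma-derived chroma prediction is performed.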
Further, when the decoding mode corresponding to the control identifier is image-component-cross decoding, after the decoder enables the preset cross-decoding function, i.e. after step 104 above, the decoder may decode the current video picture not only according to CCLM or DM but also using any technique that requires dependency between image components, which is not specifically limited in the present application.
In the picture decoding method proposed by the present application, the decoder obtains the bit stream corresponding to the current video picture; parses the bit stream to obtain the control identifier corresponding to the current video picture; and, when the decoding mode corresponding to the control identifier is image-component-independent decoding, disables the preset cross-decoding function, wherein the preset cross-decoding function is used to perform decoding based on the dependency between image components. That is, in the embodiments of the present application, the decoder may first parse the bit stream corresponding to the current video picture to obtain the control identifier that determines whether dependency between image components is allowed. If the decoding mode corresponding to the control identifier is image-component-independent decoding, i.e. dependency between image components is not supported, the decoder needs to disable the preset cross-decoding function and decode the current video picture without relying on any dependency between image components. Parallel encoding and decoding can thus be achieved in scenarios that require fast processing or a high degree of parallelism, reducing codec complexity.
Based on the above embodiments, in yet another embodiment of the present application, FIG. 8 is a first schematic structural diagram of a decoder proposed by an embodiment of the present application. As shown in FIG. 8, the decoder 100 proposed by the embodiments of the present application may include an obtaining part 101, a parsing part 102, a disabling part 103, an enabling part 104 and a decoding part 105.
The obtaining part 101 is configured to obtain a bit stream corresponding to a current video picture.
The parsing part 102 is configured to parse the bit stream to obtain a control identifier corresponding to the current video picture.
The disabling part 103 is configured to disable a preset cross-decoding function when the decoding mode corresponding to the control identifier is image-component-independent decoding, wherein the preset cross-decoding function is used to perform decoding processing based on the dependency between image components.
Further, in the embodiments of the present application, the enabling part 104 is configured to, after the bit stream is parsed and the control identifier corresponding to the current video picture is obtained, enable the preset cross-decoding function when the decoding mode corresponding to the control identifier is image-component-cross decoding.
Further, in the embodiments of the present application, the image components include at least two of a first image component, a second image component and a third image component.
Further, in the embodiments of the present application, the decoding part 105 is further configured to, after the preset cross-decoding function is enabled when the decoding mode corresponding to the control identifier is image-component-cross decoding, decode the current video picture according to DM.
Further, in the embodiments of the present application, the disabling part 103 is further configured to, after the bit stream is parsed and the control identifier corresponding to the current video picture is obtained, disable the DM representation mode when the decoding mode corresponding to the control identifier is DM-disabled.
Further, in the embodiments of the present application, the parsing part 102 is specifically configured to obtain the control identifier from the SPS in the bit stream after parsing the bit stream.
Further, in the embodiments of the present application, the parsing part 102 is further specifically configured to obtain the control identifier from the PPS in the bit stream after parsing the bit stream.
Further, in the embodiments of the present application, the parsing part 102 is further specifically configured to obtain the control identifier from the SEI in the bit stream after parsing the bit stream.
FIG. 9 is a second schematic structural diagram of the decoder proposed by an embodiment of the present application. As shown in FIG. 9, the decoder 100 proposed by the embodiments of the present application may further include a processor 106, a memory 107 storing instructions executable by the processor 106, a communication interface 108, and a bus 109 for connecting the processor 106, the memory 107 and the communication interface 108.
In the embodiments of the present application, the processor 106 may be at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a Central Processing Unit (CPU), a controller, a microcontroller, or a microprocessor. It can be understood that, for different devices, other electronic components may also be used to implement the above processor functions, which is not specifically limited in the embodiments of the present application. The apparatus may further include the memory 107, which may be connected to the processor 106 and is used to store executable program code including computer operation instructions. The memory 107 may include a high-speed RAM memory, and may also include a non-volatile memory, for example, at least two disk memories.
In the embodiments of the present application, the bus 109 is used to connect the communication interface 108, the processor 106 and the memory 107, and to enable mutual communication among these components.
In the embodiments of the present application, the memory 107 is used to store instructions and data.
Further, in the embodiments of the present application, the processor 106 is used to obtain a bit stream corresponding to a current video picture; parse the bit stream to obtain a control identifier corresponding to the current video picture; and disable a preset cross-decoding function when the decoding mode corresponding to the control identifier is image-component-independent decoding, wherein the preset cross-decoding function is used to perform decoding processing based on the dependency between image components.
In practical applications, the memory 107 may be a volatile memory, such as a Random-Access Memory (RAM); or a non-volatile memory, such as a Read-Only Memory (ROM), a flash memory, a Hard Disk Drive (HDD) or a Solid-State Drive (SSD); or a combination of the above kinds of memories, and it provides instructions and data to the processor 106.
In addition, the functional modules in this embodiment may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional module.
If the integrated unit is implemented in the form of a software functional module and is not sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of this embodiment in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or some of the steps of the method of this embodiment. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disc.
The embodiments of the present application propose a decoder. The decoder obtains the bit stream corresponding to the current video picture; parses the bit stream to obtain the control identifier corresponding to the current video picture; and, when the decoding mode corresponding to the control identifier is image-component-independent decoding, disables the preset cross-decoding function, wherein the preset cross-decoding function is used to perform decoding based on the dependency between image components. That is, in the embodiments of the present application, the decoder may first parse the bit stream corresponding to the current video picture to obtain the control identifier that determines whether dependency between image components is allowed. If the decoding mode corresponding to the control identifier is image-component-independent decoding, i.e. dependency between image components is not supported, the decoder needs to disable the preset cross-decoding function and decode the current video picture without relying on any dependency between image components. Parallel encoding and decoding can thus be achieved in scenarios that require fast processing or a high degree of parallelism, reducing codec complexity. At the same time, in these scenarios the bits that would otherwise signal, at the CU level, that no cross-component-dependent decoding is performed can be omitted, improving the coding efficiency in such scenarios.
An embodiment of the present application provides a computer-readable storage medium on which a program is stored; when the program is executed by a processor, the picture decoding method described above is implemented.
Specifically, the program instructions corresponding to the picture decoding method in this embodiment may be stored on a storage medium such as an optical disc, a hard disk or a USB flash drive. When the program instructions corresponding to the picture decoding method in the storage medium are read or executed by an electronic device, the following steps are included:
obtaining a bit stream corresponding to a current video picture;
parsing the bit stream to obtain a control identifier corresponding to the current video picture; and
when the decoding mode corresponding to the control identifier is image-component-independent decoding, disabling a preset cross-decoding function, wherein the preset cross-decoding function is used to perform decoding processing based on the dependency between image components.
Those skilled in the art should understand that the embodiments of the present application may be provided as a method, a system, or a computer program product. Therefore, the present application may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware. Moreover, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, magnetic disk storage and optical storage) containing computer-usable program code.
The present application is described with reference to schematic flowcharts and/or block diagrams of methods, devices (systems) and computer program products according to the embodiments of the present application. It should be understood that each flow and/or block in the schematic flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the schematic flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to operate in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the schematic flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operation steps are performed on the computer or other programmable device to produce computer-implemented processing; the instructions executed on the computer or other programmable device thus provide steps for implementing the functions specified in one or more flows of the schematic flowcharts and/or one or more blocks of the block diagrams.
The above are only preferred embodiments of the present application and are not intended to limit the protection scope of the present application.
Industrial Applicability
The embodiments of the present application provide a picture decoding method, a decoder and a computer storage medium. The decoder obtains the bit stream corresponding to the current video picture; parses the bit stream to obtain the control identifier corresponding to the current video picture; and, when the decoding mode corresponding to the control identifier is image-component-independent decoding, disables the preset cross-decoding function, wherein the preset cross-decoding function is used to perform decoding based on the dependency between image components. That is, in the embodiments of the present application, the decoder may first parse the bit stream corresponding to the current video picture to obtain the control identifier that determines whether dependency between image components is allowed. If the decoding mode corresponding to the control identifier is image-component-independent decoding, i.e. dependency between image components is not supported, the decoder needs to disable the preset cross-decoding function and decode the current video picture without relying on any dependency between image components. Parallel encoding and decoding can thus be achieved in scenarios that require fast processing or a high degree of parallelism, reducing codec complexity. At the same time, in these scenarios the bits that would otherwise signal, at the CU level, that no cross-component-dependent decoding is performed can be omitted, improving the coding efficiency in such scenarios.

Claims (18)

  1. A picture decoding method, the method comprising:
    obtaining a bit stream corresponding to a current video picture;
    parsing the bit stream to obtain a control identifier corresponding to the current video picture; and
    when a decoding mode corresponding to the control identifier is image-component-independent decoding, disabling a preset cross-decoding function, wherein the preset cross-decoding function is used to perform decoding processing based on a dependency between image components.
  2. The method according to claim 1, wherein after the parsing of the bit stream to obtain the control identifier corresponding to the current video picture, the method further comprises:
    when the decoding mode corresponding to the control identifier is image-component-cross decoding, enabling the preset cross-decoding function.
  3. The method according to claim 1, wherein the image components comprise at least two of a first image component, a second image component and a third image component.
  4. The method according to claim 2, wherein after the enabling of the preset cross-decoding function when the decoding mode corresponding to the control identifier is image-component-cross decoding, the method further comprises:
    decoding the current video picture according to the cross-component mode representation method DM.
  5. The method according to claim 1, wherein after the parsing of the bit stream to obtain the control identifier corresponding to the current video picture, the method further comprises:
    when the decoding mode corresponding to the control identifier is DM-disabled, disabling the DM representation mode.
  6. The method according to any one of claims 1 to 5, wherein the parsing of the bit stream to obtain the control identifier corresponding to the current video picture comprises:
    after parsing the bit stream, obtaining the control identifier from a sequence parameter set (SPS) in the bit stream.
  7. The method according to any one of claims 1 to 5, wherein the parsing of the bit stream to obtain the control identifier corresponding to the current video picture comprises:
    after parsing the bit stream, obtaining the control identifier from a picture parameter set (PPS) in the bit stream.
  8. The method according to any one of claims 1 to 5, wherein the parsing of the bit stream to obtain the control identifier corresponding to the current video picture comprises:
    after parsing the bit stream, obtaining the control identifier from supplemental enhancement information (SEI) in the bit stream.
  9. A decoder, comprising an obtaining part, a parsing part and a disabling part, wherein
    the obtaining part is configured to obtain a bit stream corresponding to a current video picture;
    the parsing part is configured to parse the bit stream to obtain a control identifier corresponding to the current video picture; and
    the disabling part is configured to disable a preset cross-decoding function when a decoding mode corresponding to the control identifier is image-component-independent decoding, wherein the preset cross-decoding function is used to perform decoding processing based on a dependency between image components.
  10. The decoder according to claim 9, further comprising an enabling part,
    wherein the enabling part is configured to, after the bit stream is parsed and the control identifier corresponding to the current video picture is obtained, enable the preset cross-decoding function when the decoding mode corresponding to the control identifier is image-component-cross decoding.
  11. The decoder according to claim 9, wherein the image components comprise at least two of a first image component, a second image component and a third image component.
  12. The decoder according to claim 10, further comprising a decoding part,
    wherein the decoding part is configured to, after the preset cross-decoding function is enabled when the decoding mode corresponding to the control identifier is image-component-cross decoding, decode the current video picture according to DM.
  13. The decoder according to claim 9, wherein
    the disabling part is further configured to, after the bit stream is parsed and the control identifier corresponding to the current video picture is obtained, disable the DM representation mode when the decoding mode corresponding to the control identifier is DM-disabled.
  14. The decoder according to any one of claims 9 to 13, wherein
    the parsing part is specifically configured to obtain the control identifier from the SPS in the bit stream after parsing the bit stream.
  15. The decoder according to any one of claims 9 to 13, wherein
    the parsing part is further specifically configured to obtain the control identifier from the PPS in the bit stream after parsing the bit stream.
  16. The decoder according to any one of claims 9 to 13, wherein
    the parsing part is further specifically configured to obtain the control identifier from the SEI in the bit stream after parsing the bit stream.
  17. A decoder, comprising a processor, a memory storing instructions executable by the processor, a communication interface, and a bus for connecting the processor, the memory and the communication interface, wherein the method according to any one of claims 1 to 8 is implemented when the instructions are executed by the processor.
  18. A computer-readable storage medium on which a program for use in a decoder is stored, wherein the method according to any one of claims 1 to 8 is implemented when the program is executed by a processor.
PCT/CN2019/078195 2019-01-10 2019-03-14 Image decoding method, decoder and computer storage medium WO2020143114A1 (zh)

Priority Applications (17)

Application Number Priority Date Filing Date Title
EP19908400.5A EP3843388A4 (en) 2019-01-10 2019-03-14 IMAGE DECODING METHOD, DECODER AND COMPUTER STORAGE MEDIUM
CN201980056388.8A CN112640449A (zh) 2019-01-10 2019-03-14 Image decoding method, decoder and computer storage medium
AU2019420838A AU2019420838A1 (en) 2019-01-10 2019-03-14 Image decoding method, decoder, and computer storage medium
SG11202105839RA SG11202105839RA (en) 2019-01-10 2019-03-14 Method for picture decoding, decoder, and computer storage medium
MX2021006450A MX2021006450A (es) 2019-01-10 2019-03-14 Image decoding method, decoder and computer storage medium.
CA3121922A CA3121922C (en) 2019-01-10 2019-03-14 Method for picture decoding, decoder, and computer storage medium
CN202210961409.4A CN115941944A (zh) 2019-01-10 2019-03-14 Image decoding method, decoder and computer storage medium
CN202110352754.3A CN113055671B (zh) 2019-01-10 2019-03-14 Image decoding method, decoder and computer storage medium
KR1020217016139A KR20210110796A (ko) 2019-01-10 2019-03-14 Picture decoding method, decoder and computer storage medium
JP2021530947A JP7431827B2 (ja) 2019-01-10 2019-03-14 Picture decoding method, decoder and computer storage medium
US17/326,310 US11272186B2 (en) 2019-01-10 2021-05-20 Method for picture decoding, method for picture encoding, decoder, and encoder
IL283477A IL283477A (en) 2019-01-10 2021-05-26 A method for decoding an image, a decoder and a computer storage medium
ZA2021/03762A ZA202103762B (en) 2019-01-10 2021-06-01 Method for picture decoding, decoder, and computer storage medium
US17/646,673 US11785225B2 (en) 2019-01-10 2021-12-30 Method for picture decoding, method for picture encoding, decoder, and encoder
US18/457,705 US20230403400A1 (en) 2019-01-10 2023-08-29 Method for picture decoding, method for picture encoding, decoder, and encoder
JP2024014222A JP2024045388A (ja) 2019-01-10 2024-02-01 Picture decoding method, decoder and computer storage medium
JP2024014195A JP2024045387A (ja) 2019-01-10 2024-02-01 Picture decoding method, decoder and computer storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962790795P 2019-01-10 2019-01-10
US62/790,795 2019-01-10

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/326,310 Continuation US11272186B2 (en) 2019-01-10 2021-05-20 Method for picture decoding, method for picture encoding, decoder, and encoder

Publications (1)

Publication Number Publication Date
WO2020143114A1 true WO2020143114A1 (zh) 2020-07-16

Family

ID=71521835

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/078195 WO2020143114A1 (zh) 2019-01-10 2019-03-14 Image decoding method, decoder and computer storage medium

Country Status (12)

Country Link
US (3) US11272186B2 (zh)
EP (1) EP3843388A4 (zh)
JP (3) JP7431827B2 (zh)
KR (1) KR20210110796A (zh)
CN (3) CN113055671B (zh)
AU (1) AU2019420838A1 (zh)
CA (1) CA3121922C (zh)
IL (1) IL283477A (zh)
MX (1) MX2021006450A (zh)
SG (1) SG11202105839RA (zh)
WO (1) WO2020143114A1 (zh)
ZA (1) ZA202103762B (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023097694A1 (zh) * 2021-12-03 2023-06-08 Oppo广东移动通信有限公司 Decoding method, encoding method, decoder and encoder

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101867820A (zh) * 2005-09-20 2010-10-20 三菱电机株式会社 Image decoding device and image decoding method
CN103096051A (zh) * 2011-11-04 2013-05-08 华为技术有限公司 Intra-frame decoding method and device for sample points of signal components of an image block
WO2015198954A1 (en) * 2014-06-27 2015-12-30 Mitsubishi Electric Corporation Method and decoder for predicting and filtering color components in pictures
CN106576176A (zh) * 2014-06-20 2017-04-19 索尼公司 Image encoding device and method, and image decoding device and method

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4410989B2 (ja) 2002-12-12 2010-02-10 キヤノン株式会社 Image processing apparatus and image decoding processing apparatus
KR100763196B1 (ko) 2005-10-19 2007-10-04 삼성전자주식회사 Method for coding a flag of a certain layer using inter-layer correlation, method for decoding the coded flag, and apparatus therefor
JP5026092B2 (ja) * 2007-01-12 2012-09-12 三菱電機株式会社 Moving picture decoding apparatus and moving picture decoding method
JP2009017472A (ja) * 2007-07-09 2009-01-22 Renesas Technology Corp Image decoding device and image decoding method
CN103220508B (zh) * 2012-01-20 2014-06-11 华为技术有限公司 Encoding and decoding method and device
GB2498982B (en) * 2012-02-01 2014-04-16 Canon Kk Method and device for encoding or decoding an image
CN113259684A (zh) * 2013-04-08 2021-08-13 Ge视频压缩有限责任公司 Inter-component prediction
JP2014209757A (ja) * 2014-06-12 2014-11-06 インテル コーポレイション Cross-channel residual prediction
US10623747B2 (en) * 2014-06-20 2020-04-14 Hfi Innovation Inc. Method of palette predictor signaling for video coding
US10536695B2 (en) * 2015-09-09 2020-01-14 Qualcomm Incorporated Colour remapping information supplemental enhancement information message processing
US10554974B2 (en) * 2017-01-13 2020-02-04 Mediatek Inc. Method and apparatus enabling adaptive multiple transform for chroma transport blocks using control flags

Also Published As

Publication number Publication date
KR20210110796A (ko) 2021-09-09
SG11202105839RA (en) 2021-07-29
AU2019420838A1 (en) 2021-06-17
US11272186B2 (en) 2022-03-08
MX2021006450A (es) 2021-07-02
EP3843388A1 (en) 2021-06-30
JP2024045388A (ja) 2024-04-02
US11785225B2 (en) 2023-10-10
JP7431827B2 (ja) 2024-02-15
CN113055671A (zh) 2021-06-29
CN112640449A (zh) 2021-04-09
JP2022516694A (ja) 2022-03-02
US20210274191A1 (en) 2021-09-02
JP2024045387A (ja) 2024-04-02
IL283477A (en) 2021-07-29
CA3121922C (en) 2023-10-03
EP3843388A4 (en) 2021-12-22
US20220124348A1 (en) 2022-04-21
US20230403400A1 (en) 2023-12-14
CN113055671B (zh) 2022-09-02
ZA202103762B (en) 2022-08-31
CN115941944A (zh) 2023-04-07
CA3121922A1 (en) 2020-07-16

Similar Documents

Publication Publication Date Title
CN113168718B (zh) Video decoding method, apparatus and storage medium
TWI634782B (zh) Adaptive color-space transform coding
AU2014203924B2 (en) Syntax and semantics for buffering information to simplify video splicing
US20130195350A1 (en) Image encoding device, image encoding method, image decoding device, image decoding method, and computer program product
WO2021004153A1 (zh) Picture prediction method, encoder, decoder and storage medium
WO2015053680A1 (en) Layer switching in video coding
JP2024045388A (ja) Picture decoding method, decoder and computer storage medium
JP2023508665A (ja) Decoding parameter sets in video coding
US11683514B2 (en) Method and apparatus for video coding for machine
TWI797560B (zh) Cross-layer reference constraints
JP2022549910A (ja) Method, apparatus, medium and computer program for video encoding
CN115336280A (zh) Method and apparatus for high-level syntax in video coding
WO2015052938A1 (en) Highest temporal sub-layer list
RU2784440C1 (ru) Picture decoding method, decoder and computer storage medium
WO2020255771A1 (ja) Image processing apparatus and method
CN117221604A (zh) Method and apparatus for high-level syntax in video coding
CN117041602A (zh) Method, computing device and storage medium for encoding a video signal
JP7372483B2 (ja) Signaling of filtering parameters in a video picture header
JP2023529854A (ja) Derivation of values for each layer representation of a video bitstream
WO2024039680A1 (en) Neural-network post-filter purposes with downsampling capabilities
CN115606180A (zh) General constraint information for video coding

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19908400

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2019908400

Country of ref document: EP

Effective date: 20210325

ENP Entry into the national phase

Ref document number: 2021530947

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 3121922

Country of ref document: CA

ENP Entry into the national phase

Ref document number: 2019420838

Country of ref document: AU

Date of ref document: 20190314

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE