WO2023051156A1 - Video image processing method and apparatus - Google Patents

Video image processing method and apparatus

Info

Publication number
WO2023051156A1
Authority
WO
WIPO (PCT)
Prior art keywords
encoding
image
reconstructed
encoded
code stream
Application number
PCT/CN2022/116596
Other languages
English (en)
French (fr)
Inventor
方华猛
邸佩云
刘欣
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2023051156A1

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/102 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N 19/103 - Selection of coding mode or of prediction mode
    • H04N 19/50 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N 19/593 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
    • H04N 19/85 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • H04N 19/86 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness

Definitions

  • the present application relates to the technical field of video encoding and decoding, and in particular to a video image processing method and device.
  • As an efficient way of transmitting information, video has been widely distributed over the Internet, TV broadcasting, and various emerging media applications.
  • With the rapid development of video codec technology, communication technology, and electronic devices, more and more application scenarios place higher requirements on video playback latency, for example video conferencing, interactive entertainment, live broadcasts of sports events, and other on-demand or live-streaming scenarios.
  • Multi-camera shooting can provide users with a multi-angle, three-dimensional stereoscopic visual experience. More broadly, in other latency-sensitive scenarios, such as cloud virtual reality (CloudVR) games and low-latency live broadcast applications, the content-switching decoding delay is a key indicator of user experience if users are to switch between different video content without perceptible lag.
  • The commonly used video content switching method is to divide the encoded video into segments at fixed time intervals, with each segment using an I frame as its starting frame.
  • After a switching instruction is received, decoding or playback of the current content continues up to the latest segment boundary of the new content.
  • Decoding and playback of the new content then start from the I frame of its segment.
  • This switching method has a long delay from receiving the switching instruction to completing the content switch, which cannot meet the low-latency requirements of some application scenarios.
  • The embodiments of the present application provide a video image processing method and device. The second encoding used to generate the second code stream is controlled according to the encoding information of the first encoding, and the second code stream is encoded entirely in intra-frame prediction mode.
  • The second code stream is used as a random access code stream, which can improve the decoding quality of the accessed video content, reduce blocking artifacts, and eliminate some ghosting artifacts while still meeting low-latency access to video content.
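  • Purely as an illustration, the Python sketch below shows this dual-encoding flow. The encoder objects and the fields of the encoding information (cu_partitioning, qp) are hypothetical names, not part of any real codec API, and the QP offset is likewise an assumption.

    # Minimal sketch of the dual-stream flow, assuming a hypothetical
    # encoder API; all names here are illustrative, not a real library.
    QP_OFFSET = 0  # example offset; the application leaves its value open

    def process_frame(frame, long_gop_encoder, intra_encoder):
        """Produce one access unit of each code stream for a single frame."""
        # First encoding: inter-frame prediction allowed (long-GOP stream).
        first_au, enc_info = long_gop_encoder.encode(frame)

        # Second encoding: full intra-frame prediction, controlled by the
        # encoding information of the first encoding so that the two
        # reconstructions stay comparable in quality.
        intra_encoder.configure(
            partitioning=enc_info.cu_partitioning,  # reuse the CU division
            qp=enc_info.qp + QP_OFFSET,             # or derive QP otherwise
        )
        second_au = intra_encoder.encode(frame)     # random access stream
        return first_au, second_au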
  • An embodiment of the present application provides a video image processing method. The method may include: acquiring an image to be encoded; performing first encoding on the image to be encoded to generate a first code stream; and performing, according to the encoding information of the first encoding, second encoding in full intra-frame prediction mode on the image to be encoded or the first reconstructed image, so as to generate a second code stream.
  • The first reconstructed image is a reconstructed image of the first code stream or a reconstructed image produced during the first encoding process.
  • The first encoding and the second encoding are two different encoding modes: the first encoding allows the inter-frame prediction mode, and the second encoding uses the full intra-frame prediction mode.
  • the first encoding is used to generate the first code stream
  • the second encoding is used to generate the second code stream.
  • The encoding information of the first encoding and that of the second encoding may be different.
  • For example, the division manner of the first encoding may differ from the division manner of the second encoding,
  • and the quantization parameter of the first encoding may differ from that of the second encoding.
  • Alternatively, the encoding information of the first encoding and that of the second encoding may be the same.
  • For example, the division manner of the first encoding and the division manner of the second encoding may be the same, and the quantization parameter of the first encoding may be the same as that of the second encoding.
  • The functions of the first code stream and the second code stream are different.
  • When the decoding end switches the displayed video content, it can first decode the second code stream at the time corresponding to the video content to be displayed, and then use the frame obtained by decoding the second code stream as the reference frame for subsequent frames of the first code stream.
  • Subsequent decoding of the first code stream can therefore proceed in time, without waiting for the next I frame of the first code stream, and the decoding delay is significantly reduced.
  • Moreover, the second encoding is performed according to the encoding information of the first encoding to obtain the second code stream, so that the quality of the second code stream is comparable to that of the first code stream.
  • The decoding result of the second code stream can then be smoothly connected as a reference frame of the first code stream, without obvious blocking artifacts or ghosting when switching content that would degrade the user experience.
  • In this way, performing the second encoding in full intra-frame prediction mode on the image to be encoded or the first reconstructed image makes the quality of the reconstructed image of the first code stream and that of the corresponding second code stream the same or comparable. On the basis of meeting low-latency access to video content, this helps reduce blocking artifacts and partial ghosting caused by codec inconsistency, and improves the decoding quality of the accessed video content.
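  • Under the same hypothetical-API assumption as above, the decoder-side switching behaviour could look like the following sketch: decode one all-intra frame of the second code stream, then continue decoding the first code stream with that frame as reference. decode_intra_frame and decode_inter_frame are illustrative placeholders.

    def switch_to_content(second_stream, first_stream, switch_time):
        # 1. Decode the all-intra frame of the second (random access)
        #    code stream at the time corresponding to the new content.
        reference = decode_intra_frame(second_stream.access_unit_at(switch_time))

        # 2. Use it as the reference frame for subsequent frames of the
        #    first (long-GOP) stream: no need to wait for its next I frame.
        decoded = [reference]
        for au in first_stream.access_units_after(switch_time):
            reference = decode_inter_frame(au, reference)
            decoded.append(reference)
        return decoded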
  • the encoding information of the first encoding includes one or more items of the division mode of the first encoding, the quantization parameter of the first encoding, and the encoding distortion information of the first encoding.
  • The division manner of the first encoding may include the TU division manner, the PU division manner, or the CU division manner of the first encoding.
  • the second encoding may adopt the same CU division manner as that of the first encoding, or the CUs of the second encoding do not cross the CU boundary of the first encoding.
  • the quantization parameter of the second encoding may be a quantization parameter offset superimposed on the basis of the quantization parameter of the first encoding.
  • Performing the second encoding in full intra-frame prediction mode on the image to be encoded or the first reconstructed image includes at least one of the following: performing the second encoding in full intra-frame prediction mode on the image to be encoded or the first reconstructed image using the same division method as the first encoding; or performing the second encoding in full intra-frame prediction mode on the image to be encoded or the first reconstructed image using the same quantization parameter as the first encoding; or determining the quantization parameter of the second encoding according to the quantization parameter and the quantization parameter offset of the first encoding, and performing the second encoding in full intra-frame prediction mode on the image to be encoded or the first reconstructed image according to the quantization parameter of the second encoding; or determining the quantization parameter of the second encoding according to the encoding distortion information of the first encoding, and performing the second encoding in full intra-frame prediction mode on the image to be encoded or the first reconstructed image according to the quantization parameter of the second encoding.
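  • A hedged sketch of the quantization-parameter choices listed above follows; the function name and the distortion heuristic are assumptions (the application does not mandate a specific mapping from distortion to QP).

    DISTORTION_TARGET = 100.0  # illustrative distortion target

    def second_encoding_qp(first_qp, qp_offset=None, distortion=None):
        if qp_offset is not None:
            # Superimpose an offset on the first encoding's QP.
            return first_qp + qp_offset
        if distortion is not None:
            # Derive the QP from the first encoding's distortion: lower the
            # QP slightly when the first encoding distorted more (heuristic).
            return first_qp - 1 if distortion > DISTORTION_TARGET else first_qp
        # Otherwise simply reuse the first encoding's QP.
        return first_qp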
  • Performing the second encoding in full intra-frame prediction mode on the image to be encoded or the first reconstructed image may also include: determining the quantization parameter of the second encoding according to the encoding information of the first encoding and the feature information of the image to be encoded, and then performing the second encoding in full intra-frame prediction mode on the image to be encoded or the first reconstructed image according to that quantization parameter.
  • The feature information of the image to be encoded includes one or more of: the content complexity, the color classification information, the contrast information, and the content segmentation information of the image to be encoded.
  • The feature information of the image to be encoded may be obtained by analyzing the features of the image to be encoded.
  • Performing the second encoding in full intra-frame prediction mode on the image to be encoded or the first reconstructed image may further include: determining, according to the encoding information of the first encoding and the first reconstructed image, at least one of the first division mode or the first encoding parameter used for the second encoding of the image to be encoded or the first reconstructed image, and then performing the second encoding in full intra-frame prediction mode on the image to be encoded or the first reconstructed image according to at least one of the first division mode or the first encoding parameter.
  • The interval between two adjacent full intra-frame prediction frames in the first code stream is greater than the interval between two adjacent full intra-frame prediction frames in the second code stream.
  • the first code stream may also be called a long GOP code stream, and the second code stream may also be called a random access code stream.
  • the second code stream is first decoded, and the decoding result of the second code stream is used as a reference frame to decode the first code stream, so as to realize fast access to video content.
  • According to the encoding information of the first encoding and the first reconstructed image, at least one of the first division method or the first encoding parameter used to encode the second code stream is determined, so that the quality of the reconstructed image of the first code stream is the same as or comparable to that of the corresponding second code stream. On the basis of satisfying low-latency access to video content, this helps reduce blocking artifacts and partial ghosting caused by codec inconsistency and improves the decoding quality of the accessed video content.
  • The difference between the first reconstructed image and the second reconstructed image is smaller than a difference threshold, or the similarity between the first reconstructed image and the second reconstructed image is higher than a similarity threshold, where the second reconstructed image is a reconstructed image of the second code stream or a reconstructed image produced during the second encoding process.
  • In this way, the quality of the reconstructed image of the first code stream and that of the corresponding second code stream are the same or comparable, so that on the basis of satisfying low-latency access to video content, blocking artifacts and partial ghosting caused by codec inconsistency are reduced and the decoding quality of the accessed video content is improved.
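  • One concrete way to implement the difference/similarity check just described is mean squared error plus PSNR, as in the sketch below; both thresholds are illustrative values, not taken from the application.

    import numpy as np

    def reconstructions_match(first_recon, second_recon,
                              mse_threshold=4.0, psnr_threshold=40.0):
        """Return True if the two reconstructed images are close enough."""
        a = first_recon.astype(np.float64)
        b = second_recon.astype(np.float64)
        mse = float(np.mean((a - b) ** 2))          # difference measure
        if mse == 0.0:
            return True                             # identical images
        psnr = 10.0 * np.log10(255.0 ** 2 / mse)    # similarity (8-bit depth)
        return mse < mse_threshold or psnr > psnr_threshold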
  • Determining at least one of the first division method or the first encoding parameter used for the second encoding of the image to be encoded or the first reconstructed image includes: determining a plurality of second division methods according to the encoding information of the first encoding and the first reconstructed image, and selecting one of the plurality of second division methods as the first division method; and/or determining a plurality of second encoding parameters according to the encoding information of the first encoding and the first reconstructed image, and selecting one of the plurality of second encoding parameters as the first encoding parameter.
  • The similarity between the first reconstructed image and the second reconstructed image is the highest among the similarities between the first reconstructed image and a plurality of third reconstructed images, where the plurality of third reconstructed images include the second reconstructed image.
  • The plurality of third reconstructed images are reconstructed images produced in the process of performing the second encoding multiple times on the image to be encoded or the first reconstructed image according to multiple second division methods and/or multiple second encoding parameters; alternatively, the plurality of third reconstructed images are reconstructed images of multiple third code streams, where the multiple third code streams are obtained by performing the second encoding multiple times on the image to be encoded or the first reconstructed image according to the multiple second division methods and/or multiple second encoding parameters.
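  • The selection just described can be sketched as a simple search over the candidate second encodings; encode_intra() and similarity() are hypothetical helpers (similarity() could, for example, be the PSNR check sketched earlier).

    def best_second_encoding(image, first_recon, candidates):
        """candidates: iterable of (division_method, encoding_params)."""
        best, best_sim = None, float("-inf")
        for division, params in candidates:
            stream, third_recon = encode_intra(image, division, params)
            sim = similarity(first_recon, third_recon)
            if sim > best_sim:
                best_sim, best = sim, (stream, division, params)
        # The winner's reconstruction is the "second reconstructed image".
        return best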
  • The above method may further include: acquiring the prediction mode of the first encoding.
  • When the prediction mode of the first encoding is inter-frame prediction, the encoding information of the first encoding is obtained, and the step of performing, according to the encoding information of the first encoding, the second encoding in full intra-frame prediction mode on the image to be encoded or the first reconstructed image so as to generate the second code stream is performed.
  • When the prediction mode of the first encoding is intra-frame prediction, the first code stream is used as the second code stream.
  • When the image to be encoded is a source video image, the quality of the first code stream and the second code stream is controlled to remain consistent at the frame level, which helps reduce blocking artifacts and partial ghosting caused by codec inconsistency and improves the decoding quality of the accessed video content.
  • When the image to be encoded is an image block obtained by dividing the source video image, the two code streams can be output synchronously at the image-block level, so that the random access frame of the second code stream used to access the video content can be obtained quickly, reducing the access delay.
  • The first encoding parameter may include a first quantization parameter or a first coding rate.
  • The second encoding parameter may include a second quantization parameter or a second coding rate.
  • An embodiment of the present application provides another video image processing method. The method may include: acquiring at least one first image to be encoded and a second image to be encoded, where the second image to be encoded is a video image preceding the at least one first image to be encoded; performing first encoding on the at least one first image to be encoded respectively, so as to generate a first code stream; determining, according to at least one first reconstructed image, at least one of the first division method or the first encoding parameter used for the second encoding of the second image to be encoded, where the at least one first reconstructed image is a reconstructed image of the first code stream or a reconstructed image produced during the first encoding process; and performing the second encoding in full intra-frame prediction mode on the second image to be encoded according to at least one of the first division mode or the first encoding parameter, so as to generate a second code stream.
  • the first code stream may also be called a long GOP code stream, and the second code stream may also be called a random access code stream.
  • the second code stream is first decoded, and the decoding result of the second code stream is used as a reference frame to decode the first code stream, so as to realize fast access to video content.
  • The encoding of the second code stream is adjusted so that the quality of the reconstructed image of the first code stream and that of the corresponding second code stream are the same or comparable. On the basis of satisfying low-latency access to video content, the decoding quality of the accessed content is improved, blocking artifacts are reduced, and some ghosting artifacts are eliminated.
  • The second image to be encoded may be the video image one frame before the at least one first image to be encoded, or separated from it by one or more frames.
  • In one case, the number of the at least one first image to be encoded is one and the number of the at least one first reconstructed image is one; the difference between the first reconstructed image and the second reconstructed image is less than a difference threshold, or the similarity between the first reconstructed image and the second reconstructed image is higher than a similarity threshold; the second reconstructed image is obtained by decoding the first code stream with the third reconstructed image as a reference image; and the third reconstructed image is a reconstructed image of the second code stream or a reconstructed image produced during the second encoding process.
  • In another case, the number of the at least one first image to be encoded is multiple and the number of the at least one first reconstructed image is multiple; the difference between the multiple first reconstructed images and the multiple second reconstructed images is smaller than the difference threshold, or the similarity between the multiple first reconstructed images and the multiple second reconstructed images is higher than the similarity threshold; the multiple second reconstructed images are obtained by decoding the first code stream using the third reconstructed image as a reference image; and the third reconstructed image is a reconstructed image of the second code stream or a reconstructed image produced during the second encoding process.
  • the difference between the multiple first reconstructed images and the multiple second reconstructed images may be a weighted sum of the differences between each of the multiple first reconstructed images and the corresponding second reconstructed images.
  • the similarity between the multiple first reconstructed images and the multiple second reconstructed images may be a weighted sum of the similarities between each of the multiple first reconstructed images and the corresponding second reconstructed images.
  • the second reconstructed image corresponding to one first reconstructed image among the plurality of first reconstructed images refers to a reconstructed image of the same video content.
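  • The weighted sums described above might be computed as in the sketch below; uniform weights are an assumption, since the application only requires some weighted sum, and similarity() is the hypothetical helper from the earlier sketches.

    def weighted_similarity(first_recons, second_recons, weights=None):
        n = len(first_recons)
        weights = weights or [1.0 / n] * n  # e.g. uniform weights
        return sum(w * similarity(a, b)
                   for w, a, b in zip(weights, first_recons, second_recons))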
  • When the number of the at least one first image to be encoded is one and the number of the at least one first reconstructed image is one, determining at least one of the first division method or the first encoding parameter used for the second encoding includes: selecting one of multiple second division methods as the first division method according to the first reconstructed image; and/or selecting one of multiple second encoding parameters as the first encoding parameter according to the first reconstructed image.
  • The similarity between the first reconstructed image and the second reconstructed image is the highest among the similarities between the first reconstructed image and a plurality of fourth reconstructed images, where the plurality of fourth reconstructed images include the second reconstructed image.
  • The plurality of fourth reconstructed images are obtained by decoding the first code stream using a plurality of fifth reconstructed images as reference images, respectively. The plurality of fifth reconstructed images are reconstructed images of a plurality of third code streams, where the plurality of third code streams are obtained by performing the second encoding multiple times on the second image to be encoded according to multiple second division methods and/or multiple second encoding parameters; alternatively, the plurality of fifth reconstructed images are reconstructed images produced in the process of performing the second encoding multiple times on the second image to be encoded according to the multiple second division methods and/or multiple second encoding parameters.
  • When the at least one first image to be encoded is a plurality of first images to be encoded, determining, according to the at least one first reconstructed image, at least one of the first division method or the first encoding parameter includes: selecting one of multiple second division methods as the first division method according to the multiple first reconstructed images; and/or selecting one of multiple second encoding parameters as the first encoding parameter according to the multiple first reconstructed images.
  • Specifically, the second image to be encoded may be second-encoded multiple times according to multiple second division manners and/or multiple second encoding parameters, so as to generate multiple third code streams.
  • The plurality of fifth reconstructed images are reconstructed images of the multiple third code streams, or reconstructed images produced during the multiple second encodings.
  • The plurality of fifth reconstructed images may then be used, respectively, as reference images to decode the first code stream, yielding multiple groups of fourth reconstructed images, where each group includes one fourth reconstructed image corresponding to each of the multiple first images to be encoded. By comparing the similarity between each group of fourth reconstructed images and the multiple first reconstructed images, the group with the highest similarity is selected as the second reconstructed images corresponding to the multiple first images to be encoded.
  • The third code stream corresponding to the group of fourth reconstructed images with the highest similarity is used as the second code stream.
  • Here, the third code stream corresponding to a group of fourth reconstructed images means that the group of fourth reconstructed images can be obtained by decoding the first code stream using, as a reference image, the reconstructed image of that third code stream or the reconstructed image produced in the encoding process used to generate that third code stream.
  • Before the first encoding is performed on the at least one first image to be encoded, the above method may further include: performing first encoding on the second image to be encoded to generate a fourth code stream, and acquiring the prediction mode of the first encoding.
  • When the prediction mode of the first encoding is inter-frame prediction, the step of determining, according to the at least one first reconstructed image, at least one of the first division method or the first encoding parameter used for the second encoding of the second image to be encoded is performed.
  • When the prediction mode of the first encoding is intra-frame prediction, the fourth code stream is used as the second code stream.
  • The at least one first image to be encoded is at least one first source video image, and the second image to be encoded is a second source video image.
  • the first coding parameter includes a first quantization parameter or a first coding rate.
  • the second coding parameter includes a second quantization parameter or a second coding rate.
  • identification information of code stream characteristics may be carried during the encoding process.
  • the identification information is used by the decoding end to distinguish the first code stream from the second code stream.
  • The identification information of the first code stream and the second code stream may be carried in any of: a parameter set, auxiliary enhancement information (supplemental enhancement information, SEI), the encapsulation layer, the file format, file description information, or a custom message.
  • When the first code stream and the second code stream use the same parameter set, the parameter set may carry first identification information.
  • The first identification information is used to indicate that the current code stream is: the first code stream; or the second code stream; or both the first and second code streams, with the first code stream of the same video content before the second code stream; or both the first and second code streams, with the first code stream of the same video content after the second code stream.
  • The first identification information may be a stream identifier (stream_id) in a video parameter set (VPS), a sequence parameter set (SPS), or a picture parameter set (PPS).
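  • Purely as an illustration, the four meanings of the first identification information could be mapped to stream_id values as below. The numeric values are assumptions: the figures suggest that values 2 and 3 correspond to the two combined arrangements, but the application does not fix the mapping here.

    STREAM_ID_MEANING = {
        0: "first code stream only",
        1: "second code stream only",
        2: "both streams; first code stream before the second (same content)",
        3: "both streams; first code stream after the second (same content)",
    }

    def describe_stream(stream_id):
        # stream_id would be parsed from the VPS, SPS or PPS of the stream.
        return STREAM_ID_MEANING.get(stream_id, "unknown")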
  • the first code stream and the second code stream can be encapsulated together.
  • the first code stream and the second code stream use different parameter sets, and the different parameter sets may carry the second identification information.
  • the second identification information is used to indicate that the current code stream is the first code stream or the second code stream.
  • the second identification information may be a stream identification (stream_id) in the VPS, SPS or PPS of different code streams.
  • the first code stream and the second code stream can be packaged together, or can be packaged independently.
  • the slice header information of the first code stream may carry the third identification information.
  • the third identification information is used to indicate that the current code stream is the first code stream.
  • The slice header information of the second code stream may carry fourth identification information.
  • the fourth identification information is used to indicate that the current code stream is the second code stream.
  • the third identification information or the fourth identification information may be a stream identification (stream_id) in the slice header information.
  • the first code stream and the second code stream can be packaged together, or can be packaged independently.
  • the auxiliary enhancement information of the first code stream may carry fifth identification information.
  • the fifth identification information is used to indicate that the current code stream is the first code stream.
  • the auxiliary enhancement information of the second code stream may carry sixth identification information.
  • the sixth identification information is used to indicate that the current code stream is the second code stream.
  • the fifth identification information or the sixth identification information may be a stream identifier (stream_id) in the SEI.
  • the first code stream and the second code stream can be packaged together, or can be packaged independently.
  • When the first code stream and the second code stream are encapsulated independently, the encapsulation layer of the first code stream carries seventh identification information, which is used to indicate that the current code stream is the first code stream.
  • The encapsulation layer of the second code stream carries eighth identification information, which is used to indicate that the current code stream is the second code stream.
  • For example, the first code stream is encapsulated in a first media track (track), the second code stream is encapsulated in a second media track (track), and the seventh identification information or the eighth identification information may be the media track class (track_class).
  • When the first code stream and the second code stream are packaged and transmitted independently, the file format of the first code stream carries ninth identification information, which is used to indicate that the current code stream is the first code stream.
  • The file format of the second code stream carries tenth identification information, which is used to indicate that the current code stream is the second code stream.
  • When the first code stream and the second code stream are encapsulated and transmitted independently, the file description information of the first code stream carries eleventh identification information, which is used to indicate that the current code stream is the first code stream.
  • The file description information of the second code stream carries twelfth identification information, which is used to indicate that the current code stream is the second code stream.
  • When the first code stream and the second code stream are transmitted in a custom message mode, the custom message carrying the first code stream carries thirteenth identification information, which is used to indicate that the current code stream is the first code stream.
  • The custom message carrying the second code stream carries fourteenth identification information, which is used to indicate that the current code stream is the second code stream.
  • For example, the custom message is a TLV (type-length-value) message, and the thirteenth identification information or the fourteenth identification information may be the type information in the TLV message.
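  • A minimal sketch of carrying the two code streams in TLV messages follows; the concrete type values standing in for the thirteenth and fourteenth identification information are assumptions.

    import struct

    TYPE_FIRST_STREAM = 0x01   # assumed value for the thirteenth identification
    TYPE_SECOND_STREAM = 0x02  # assumed value for the fourteenth identification

    def pack_tlv(msg_type, payload):
        # Type (1 byte) + Length (4 bytes, big-endian) + Value.
        return struct.pack(">BI", msg_type, len(payload)) + payload

    def unpack_tlv(buf):
        msg_type, length = struct.unpack_from(">BI", buf, 0)
        return msg_type, buf[5:5 + length]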
  • The present application provides a video image processing device, which may be an electronic device or a server, or a chip or system-on-a-chip in an electronic device or a server; for example, it may be a functional module in an electronic device or a server for implementing the method of the first aspect or any possible implementation of the first aspect.
  • the video image processing device includes: an acquisition module, configured to acquire an image to be encoded.
  • the first encoding module is configured to first encode the image to be encoded to generate a first code stream.
  • The second encoding module is configured to perform, according to the encoding information of the first encoding, the second encoding in full intra-frame prediction mode on the image to be encoded or the first reconstructed image, so as to generate a second code stream, where the first reconstructed image is a reconstructed image of the first code stream or a reconstructed image produced during the first encoding process.
  • the encoding information of the first encoding includes one or more items of the division mode of the first encoding, the quantization parameter of the first encoding, and the encoding distortion information of the first encoding.
  • The second encoding module is configured to perform at least one of the following: performing the second encoding in full intra-frame prediction mode on the image to be encoded or the first reconstructed image using the same division method as the first encoding; or performing the second encoding in full intra-frame prediction mode on the image to be encoded or the first reconstructed image using the same quantization parameter as the first encoding; or determining the quantization parameter of the second encoding according to the quantization parameter and the quantization parameter offset of the first encoding, and performing the second encoding in full intra-frame prediction mode on the image to be encoded or the first reconstructed image according to the quantization parameter of the second encoding; or determining the quantization parameter of the second encoding according to the encoding distortion information of the first encoding, and performing the second encoding in full intra-frame prediction mode on the image to be encoded or the first reconstructed image according to the quantization parameter of the second encoding.
  • The second encoding module is configured to: determine the quantization parameter of the second encoding according to the encoding information of the first encoding and the feature information of the image to be encoded, and perform the second encoding in full intra-frame prediction mode on the image to be encoded or the first reconstructed image according to the quantization parameter of the second encoding.
  • The feature information of the image to be encoded includes one or more of: the content complexity, the color classification information, the contrast information, and the content segmentation information of the image to be encoded.
  • The second encoding module is configured to determine, according to the encoding information of the first encoding and the first reconstructed image, at least one of the first division method or the first encoding parameter.
  • the second encoding module is further configured to perform second encoding in full intra-frame prediction mode on the image to be encoded or the first reconstructed image according to at least one of the first division method or the first encoding parameter.
  • The interval between two adjacent full intra-frame prediction frames in the first code stream is greater than the interval between two adjacent full intra-frame prediction frames in the second code stream.
  • The difference between the first reconstructed image and the second reconstructed image is smaller than the difference threshold, or the similarity between the first reconstructed image and the second reconstructed image is higher than the similarity threshold, where the second reconstructed image is a reconstructed image of the second code stream or a reconstructed image produced during the second encoding process.
  • The second encoding module is configured to: determine a plurality of second division methods according to the encoding information of the first encoding and the first reconstructed image, and select one of the plurality of second division methods as the first division method; and/or determine a plurality of second encoding parameters according to the encoding information of the first encoding and the first reconstructed image, and select one of the plurality of second encoding parameters as the first encoding parameter.
  • The similarity between the first reconstructed image and the second reconstructed image is the highest among the similarities between the first reconstructed image and a plurality of third reconstructed images, where the plurality of third reconstructed images include the second reconstructed image.
  • The plurality of third reconstructed images are reconstructed images produced in the process of performing the second encoding multiple times on the image to be encoded or the first reconstructed image according to multiple second division methods and/or multiple second encoding parameters; alternatively, the plurality of third reconstructed images are reconstructed images of multiple third code streams, where the multiple third code streams are obtained by performing the second encoding multiple times on the image to be encoded or the first reconstructed image according to the multiple second division methods and/or multiple second encoding parameters.
  • The second encoding module is further configured to: acquire the prediction mode of the first encoding.
  • When the prediction mode of the first encoding is inter-frame prediction, the encoding information of the first encoding is obtained, and the step of performing, according to the encoding information of the first encoding, the second encoding in full intra-frame prediction mode on the image to be encoded or the first reconstructed image so as to generate the second code stream is performed.
  • When the prediction mode of the first encoding is intra-frame prediction, the first code stream is used as the second code stream.
  • The image to be encoded is a source video image; or, the image to be encoded is an image block obtained by dividing the source video image.
  • the first coding parameter includes a first quantization parameter or a first coding rate.
  • the second coding parameter includes a second quantization parameter or a second coding rate.
  • The present application further provides a video image processing device, which may be an electronic device or a server, or a chip or system-on-a-chip in an electronic device or a server; for example, it may be a functional module in an electronic device or a server for implementing the method of the second aspect or any possible implementation of the second aspect.
  • The video image processing device includes: an acquisition module, configured to acquire at least one first image to be encoded and a second image to be encoded, where the second image to be encoded is a video image preceding the at least one first image to be encoded.
  • the first encoding module is configured to respectively perform first encoding on at least one first image to be encoded to generate a first code stream.
  • The second encoding module is configured to determine, according to at least one first reconstructed image, at least one of the first division method or the first encoding parameter used for the second encoding of the second image to be encoded, where the at least one first reconstructed image is a reconstructed image of the first code stream or a reconstructed image produced during the first encoding process.
  • the second encoding module is further configured to perform second encoding on the second image to be encoded according to at least one of the first division method or the first encoding parameter, so as to generate a second code stream.
  • In one case, the number of the at least one first image to be encoded is one and the number of the at least one first reconstructed image is one; the difference between the first reconstructed image and the second reconstructed image is less than a difference threshold, or the similarity between the first reconstructed image and the second reconstructed image is higher than a similarity threshold; the second reconstructed image is obtained by decoding the first code stream with the third reconstructed image as a reference image; and the third reconstructed image is a reconstructed image of the second code stream or a reconstructed image produced during the second encoding process.
  • When the number of the at least one first image to be encoded is one and the number of the at least one first reconstructed image is one, the second encoding module is configured to: select one of multiple second division methods as the first division method according to the first reconstructed image; and/or select one of multiple second encoding parameters as the first encoding parameter according to the first reconstructed image.
  • The similarity between the first reconstructed image and the second reconstructed image is the highest among the similarities between the first reconstructed image and a plurality of fourth reconstructed images, where the plurality of fourth reconstructed images include the second reconstructed image.
  • The plurality of fourth reconstructed images are obtained by decoding the first code stream using a plurality of fifth reconstructed images as reference images, respectively. The plurality of fifth reconstructed images are reconstructed images of a plurality of third code streams, where the plurality of third code streams are obtained by performing the second encoding multiple times on the second image to be encoded according to multiple second division methods and/or multiple second encoding parameters; alternatively, the plurality of fifth reconstructed images are reconstructed images produced in the process of performing the second encoding multiple times on the second image to be encoded according to the multiple second division methods and/or multiple second encoding parameters.
  • the first encoding module is further configured to: perform first encoding on the second image to be encoded to generate a fourth code stream before performing first encoding on at least one first image to be encoded.
  • The second encoding module is further configured to: acquire the prediction mode of the first encoding.
  • When the prediction mode of the first encoding is inter-frame prediction, the step of determining, according to the at least one first reconstructed image, at least one of the first division method or the first encoding parameter used for the second encoding of the second image to be encoded is performed.
  • When the prediction mode of the first encoding is intra-frame prediction, the fourth code stream is used as the second code stream.
  • The at least one first image to be encoded is at least one first source video image, and the second image to be encoded is a second source video image.
  • the first coding parameter includes a first quantization parameter or a first coding rate.
  • the second coding parameter includes a second quantization parameter or a second coding rate.
  • An embodiment of the present application provides an apparatus for processing video images, including: one or more processors, and a memory configured to store one or more programs.
  • When the one or more programs are executed by the one or more processors, the one or more processors implement the method described in the first aspect or any implementation of the first aspect, or implement the method described in the second aspect or any implementation of the second aspect.
  • An embodiment of the present application provides a computer-readable storage medium, including the first code stream and the second code stream obtained according to the method described in the first aspect or any implementation of the first aspect, or including the first code stream and the second code stream obtained according to the method described in the second aspect or any implementation of the second aspect.
  • An embodiment of the present application provides a computer program product. When the computer program product is run on a computer, it causes the computer to execute the method described in the first aspect or any implementation of the first aspect, or the method described in the second aspect or any implementation of the second aspect.
  • An embodiment of the present application provides a computer-readable storage medium, including computer instructions which, when run on a computer, cause the computer to execute the method described in the first aspect or any implementation of the first aspect, or the method described in the second aspect or any implementation of the second aspect.
  • FIG. 1A is a block diagram of an example of a video encoding and decoding system 10 for implementing an embodiment of the present application
  • FIG. 1B is a block diagram of an example of a video decoding system 40 for implementing an embodiment of the present application
  • FIG. 2 is a block diagram of an example structure of an encoder 20 for implementing an embodiment of the present application
  • FIG. 3 is a block diagram of an example structure of a decoder 30 for implementing an embodiment of the present application
  • FIG. 4 is a block diagram of an example of a video decoding device 400 for implementing an embodiment of the present application
  • FIG. 5 is a block diagram of another example of an encoding device or a decoding device for implementing an embodiment of the present application
  • FIG. 6 is a schematic diagram of an application scene of a multi-camera shooting a sports event according to an embodiment of the present application
  • FIG. 7 is a schematic diagram of a decoded frame trajectory for switching from the current video content to another video content provided by the embodiment of the present application;
  • FIG. 8 is a schematic diagram of a video image processing system provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a video image processing method provided by an embodiment of the present application.
  • FIG. 10 is a schematic flowchart of a video image processing method provided by an embodiment of the present application.
  • FIG. 11 is a schematic flowchart of a video image processing method provided by an embodiment of the present application.
  • FIG. 12 is a schematic flowchart of a method for processing video images provided by an embodiment of the present application.
  • FIG. 13 is a schematic flowchart of a video image processing method provided by an embodiment of the present application.
  • FIG. 14 is a schematic flow chart of a video image processing method provided by an embodiment of the present application.
  • FIG. 15 is a schematic flow chart of a video image processing method provided by an embodiment of the present application.
  • FIG. 16 is a schematic diagram of the first encoding and the second encoding using the same TU division method according to an embodiment of the present application;
  • FIG. 17 is a schematic diagram of the quantization parameters of the first encoding and the quantization parameters of the second encoding according to an embodiment of the present application;
  • FIG. 18 is a schematic diagram of quantization parameters passed from the first encoding to the second encoding according to an embodiment of the present application;
  • FIG. 19 is a schematic diagram of the arrangement of the first code stream and the second code stream when the stream identifier (stream_id) is 2 according to an embodiment of the present application;
  • FIG. 20 is a schematic diagram of the arrangement of the first code stream and the second code stream when the stream identifier (stream_id) is 3 according to an embodiment of the present application;
  • FIG. 21 is a schematic diagram of three ways of combining the first code stream and the second code stream into one code stream according to an embodiment of the present application;
  • FIG. 22 is a schematic structural diagram of a video image processing device provided by an embodiment of the present application.
  • For example, if one or more specific method steps are described, the corresponding device may include one or more units, such as functional units, to perform the described one or more method steps (for example, one unit performing the one or more steps, or multiple units each performing one or more of the multiple steps), even if such one or more units are not explicitly described or illustrated in the drawings.
  • Conversely, if a specific device is described based on one or more units such as functional units, the corresponding method may include one step to perform the functionality of the one or more units (for example, one step performing the functionality of the one or more units, or multiple steps each performing the functionality of one or more of the multiple units), even if such one or more steps are not explicitly described or illustrated in the drawings.
  • Video coding generally refers to the processing of sequences of pictures that form a video or video sequence.
  • In the field of video coding, the terms "picture", "frame", and "image" may be used as synonyms.
  • Video coding as used herein means video encoding or video decoding.
  • Video encoding is performed on the source side and typically involves processing (eg, by compressing) raw video pictures to reduce the amount of data needed to represent the video pictures for more efficient storage and/or transmission.
  • Video decoding is performed at the destination and typically involves inverse processing relative to the encoder to reconstruct the video picture.
  • the "encoding" of video pictures involved in the embodiments should be understood as involving “encoding” or “decoding” of video sequences.
  • the combination of an encoding part and a decoding part is also called a codec (encoding and decoding, CODEC).
  • a video sequence includes a series of pictures (pictures), the pictures are further divided into slices (slices), and the slices are further divided into blocks (blocks).
  • Video coding is coded in units of blocks, and in some new video coding standards, the concept of blocks is further expanded. For example, there is a macroblock (macroblock, MB) in the H.264 standard, and the macroblock can be further divided into multiple predictive blocks (partitions) that can be used for predictive coding.
  • In the high-efficiency video coding (HEVC) standard, the basic concepts of coding unit (CU), prediction unit (PU), and transform unit (TU) are adopted.
  • a variety of block units are divided and described using a new tree-based structure.
  • a CU can be divided into smaller CUs according to a quadtree, and the smaller CUs can be further divided to form a quadtree structure.
  • a CU is a basic unit for dividing and encoding a coded image.
  • PU can correspond to a prediction block and is the basic unit of predictive coding.
  • the CU is further divided into multiple PUs according to the division mode.
  • a TU can correspond to a transform block and is a basic unit for transforming a prediction residual.
  • Whether CU, PU, or TU, they all belong in essence to the concept of a block (or image block).
  • a CTU is split into multiple CUs by using a quadtree structure represented as a coding tree.
  • the decision whether to encode a region of a picture using inter-picture (temporal) or intra-picture (spatial) prediction is made at the CU level.
  • Each CU can be further split into one, two or four PUs according to the PU split type.
  • the same prediction process is applied within a PU and relevant information is transferred to the decoder on a PU basis.
  • the CU can be partitioned into transform units (TUs) according to other quadtree structures similar to the coding tree used for the CU.
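  • The recursive quadtree split of a CTU into CUs can be illustrated with the toy sketch below; the split predicate stands in for the encoder's mode decision.

    def split_quadtree(x, y, size, min_size, should_split):
        """Yield (x, y, size) leaf CUs of a CTU rooted at (x, y)."""
        if size > min_size and should_split(x, y, size):
            half = size // 2
            for dy in (0, half):
                for dx in (0, half):
                    yield from split_quadtree(x + dx, y + dy, half,
                                              min_size, should_split)
        else:
            yield (x, y, size)

    # Example: split a 64x64 CTU, splitting every block larger than 16x16.
    cus = list(split_quadtree(0, 0, 64, 8, lambda x, y, s: s > 16))
    # -> sixteen 16x16 CUs covering the CTU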
  • In newer video coding standards, a quad-tree plus binary-tree (QTBT) partitioning structure is used to partition the coding blocks.
  • a CU can be square or rectangular in shape.
  • the image block to be encoded in the currently encoded image may be referred to as the current block, for example, in encoding, it refers to the block currently being encoded; in decoding, it refers to the block currently being decoded.
  • the decoded image block used to predict the current block in the reference image is called a reference block, that is, the reference block is a block that provides a reference signal for the current block, wherein the reference signal represents a pixel value in the image block.
  • a block that provides a prediction signal for the current block in the reference image may be a prediction block, where the prediction signal represents a pixel value or a sample value or a sample signal in the prediction block. For example, after traversing multiple reference blocks, the best reference block is found, and this best reference block will provide prediction for the current block, and this block is called a prediction block.
  • the original video picture can be reconstructed, ie the reconstructed video picture has the same quality as the original video picture (assuming no transmission loss or other data loss during storage or transmission).
  • In the case of lossy video coding, further compression is performed, for example by quantization, to reduce the amount of data required to represent the video picture, and the video picture cannot be fully reconstructed at the decoder side; that is, the quality of the reconstructed video picture is lower or worse than that of the original video picture.
  • Video coding standards since H.261 belong to "lossy hybrid video codecs" (i.e., they combine spatial and temporal prediction in the sample domain with 2D transform coding for applying quantization in the transform domain).
  • Each picture of a video sequence is usually partitioned into a non-overlapping set of blocks, usually coded at the block level.
  • The encoder side usually processes, i.e., encodes, the video at the block (video block) level. For example, a prediction block is generated through spatial (intra-picture) prediction and temporal (inter-picture) prediction; the prediction block is subtracted from the current block (the block currently being processed or to be processed) to obtain a residual block; and the residual block is transformed in the transform domain and quantized to reduce the amount of data to be transmitted (compressed). The decoder side applies the inverse processing relative to the encoder to the encoded or compressed block to reconstruct the current block for representation. Additionally, the encoder replicates the decoder processing loop such that the encoder and decoder generate the same predictions (e.g., intra and inter predictions) and/or reconstructions for processing, i.e., encoding, subsequent blocks.
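  • The block-level hybrid loop just described can be condensed into the following toy sketch (prediction, residual, transform, quantization, and the encoder's replicated decoder loop). The flat quantizer and the bare DCT are simplifications of what real codecs do.

    import numpy as np
    from scipy.fft import dctn, idctn

    def encode_block(current, predicted, qstep):
        """current, predicted: 2-D arrays; qstep: quantization step size."""
        residual = current - predicted           # subtract the prediction
        coeffs = dctn(residual, norm="ortho")    # 2-D transform
        levels = np.round(coeffs / qstep)        # quantization (lossy step)

        # Replicated decoder loop: reconstruct exactly as the decoder will,
        # so encoder and decoder share identical references for later blocks.
        recon = predicted + idctn(levels * qstep, norm="ortho")
        return levels, recon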
  • FIG. 1A exemplarily shows a schematic block diagram of a video encoding and decoding system 10 applied in the embodiment of the present application.
  • A video encoding and decoding system 10 may include a source device 12 that generates encoded video data and a destination device 14; thus, the source device 12 may be referred to as a video encoding device.
  • Destination device 14 may decode encoded video data generated by source device 12, and thus, destination device 14 may be referred to as a video decoding device.
  • Various implementations of source device 12, destination device 14, or both may include one or more processors and memory coupled to the one or more processors.
  • Source device 12 and destination device 14 may include a variety of devices, including desktop computers, mobile computing devices, notebook (e.g., laptop) computers, tablet computers, set-top boxes, telephone handsets such as so-called "smart" phones, televisions, cameras, display devices, digital media players, video game consoles, in-vehicle computers, wireless communication devices, or the like.
  • Although FIG. 1A depicts source device 12 and destination device 14 as separate devices, device embodiments may include both source device 12 and destination device 14 or the functionality of both, i.e., source device 12 or corresponding functionality and destination device 14 or corresponding functionality.
  • In such embodiments, source device 12 or corresponding functionality and destination device 14 or corresponding functionality may be implemented using the same hardware and/or software, or using separate hardware and/or software, or any combination thereof.
  • a communicative connection may be made between source device 12 and destination device 14 via link 13 via which destination device 14 may receive encoded video data from source device 12 .
  • Link 13 may include one or more media or devices capable of moving encoded video data from source device 12 to destination device 14 .
  • link 13 may include one or more communication media that enable source device 12 to transmit encoded video data directly to destination device 14 in real time.
  • source device 12 may modulate the encoded video data according to a communication standard, such as a wireless communication protocol, and may transmit the modulated video data to destination device 14 .
  • the one or more communication media may include wireless and/or wired communication media, such as a radio frequency (RF) spectrum or one or more physical transmission lines.
  • the one or more communication media may form part of a packet-based network, such as a local area network, a wide area network, or a global network (eg, the Internet).
  • the one or more communication media may include routers, switches, base stations, or other devices that facilitate communication from source device 12 to destination device 14 .
  • the source device 12 includes an encoder 20 , and optionally, the source device 12 may also include a picture source 16 , a picture preprocessor 18 , and a communication interface 22 .
  • the encoder 20 , picture source 16 , picture preprocessor 18 , and communication interface 22 may be hardware components in the source device 12 or software programs in the source device 12 . They are described as follows:
• Picture source 16, which may include or be any type of picture capture device for capturing, for example, a real-world picture, and/or any type of device for generating a picture or comment (for screen content encoding, some text on the screen is also considered part of the picture or image to be encoded), for example a computer graphics processor for generating computer-animated pictures, or any type of device for acquiring and/or providing real-world pictures or computer-animated pictures (for example, screen content or virtual reality (VR) pictures), and/or any combination thereof (for example, augmented reality (AR) pictures).
• The picture source 16 may be a camera for capturing pictures or a memory for storing pictures, and the picture source 16 may also include any kind of (internal or external) interface for storing previously captured or generated pictures and/or for acquiring or receiving pictures.
• When the picture source 16 is a camera, the picture source 16 may be, for example, a local camera or a camera integrated in the source device; when the picture source 16 is a memory, the picture source 16 may be, for example, a local memory or a memory integrated in the source device.
• When the picture source 16 includes an interface, the interface may be, for example, an external interface for receiving pictures from an external video source; the external video source may be, for example, an external picture capture device such as a camera, an external memory, or an external picture generation device such as an external computer graphics processor, a computer, or a server.
• The interface may be any kind of interface according to any proprietary or standardized interface protocol, for example a wired or wireless interface or an optical interface.
  • the picture can be regarded as a two-dimensional array or matrix of pixel points (picture element).
  • the pixel points in the array can also be referred to as sampling points.
  • the number of samples in the array or picture in the horizontal and vertical directions (or axes) defines the size and/or resolution of the picture.
• For representation of color, three color components are usually employed, i.e., a picture may be represented as or contain three sample arrays.
• In RGB format or color space, a picture includes corresponding red, green, and blue sample arrays.
• However, in video coding, each pixel is usually represented in a luminance/chrominance format or color space, for example YUV, which includes a luminance component indicated by Y and two chrominance components indicated by U and V.
• The luminance (luma) component Y represents brightness or gray-level intensity (e.g., as in a grayscale picture), while the two chrominance (chroma) components U and V represent the chrominance or color information components.
  • a picture in YUV format includes an array of luma samples for luma sample values (Y), and two arrays of chroma samples for chroma values (U and V).
• A picture in RGB format may be converted or transformed into YUV format and vice versa; this process is also known as color transformation or conversion. If a picture is monochrome (black and white), the picture may include only a luma sample array.
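• As a concrete illustration of such a color conversion, the following is a minimal sketch using the BT.601 coefficients; the specific matrix is an assumption made for illustration only, since no particular color space is mandated here.

```python
import numpy as np

def rgb_to_yuv_bt601(rgb):
    """Convert an HxWx3 float RGB image (values in [0, 1]) to YUV (BT.601)."""
    m = np.array([[ 0.299,    0.587,    0.114],    # Y: luma
                  [-0.14713, -0.28886,  0.436],    # U: chroma
                  [ 0.615,   -0.51499, -0.10001]]) # V: chroma
    return rgb @ m.T

def yuv_to_rgb_bt601(yuv):
    """Inverse transform back to RGB (color conversion in the other direction)."""
    m = np.array([[1.0,  0.0,      1.13983],
                  [1.0, -0.39465, -0.58060],
                  [1.0,  2.03211,  0.0]])
    return yuv @ m.T
```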
• The pictures transmitted from the picture source 16 to the picture preprocessor 18 may also be referred to as original picture data 17.
  • the picture preprocessor 18 is configured to receive the original picture data 17 and perform preprocessing on the original picture data 17 to obtain a preprocessed picture 19 or preprocessed picture data 19 .
  • preprocessing performed by picture preprocessor 18 may include retouching, color format conversion (eg, from RGB format to YUV format), color grading, or denoising.
• The encoder 20 (or video encoder 20) is configured to receive the preprocessed picture data 19 and process the preprocessed picture data 19 using a relevant prediction mode (such as the prediction modes in the embodiments herein), so as to provide encoded picture data 21 (structural details of the encoder 20 will be further described below based on FIG. 2, FIG. 4, or FIG. 5).
  • the encoder 20 can be used to implement various embodiments described later, so as to realize the application of the chroma block prediction method described in this application on the encoding side.
• The communication interface 22 may be configured to receive the encoded picture data 21 and transmit the encoded picture data 21 via the link 13 to the destination device 14 or any other device (such as a memory) for storage or direct reconstruction, where the other device may be any device used for decoding or storage.
  • the communication interface 22 may be used, for example, to package the encoded picture data 21 into a suitable format, such as a data packet, for transmission over the link 13 .
  • the destination device 14 includes a decoder 30 , and optionally, the destination device 14 may also include a communication interface 28 , a picture post-processor 32 and a display device 34 . They are described as follows:
• The communication interface 28 may be used to receive the encoded picture data 21 from the source device 12 or any other source, such as a storage device, e.g., an encoded picture data storage device. The communication interface 28 may be used to transmit or receive the encoded picture data 21 via the link 13 between the source device 12 and the destination device 14, or via any type of network; the link 13 may be, for example, a direct wired or wireless connection, and the network may be, for example, a wired or wireless network or any combination thereof, or any type of private or public network or any combination thereof. The communication interface 28 may be used, for example, to decapsulate the data packets transmitted by the communication interface 22 to obtain the encoded picture data 21.
• Both the communication interface 28 and the communication interface 22 may be configured as one-way communication interfaces or two-way communication interfaces, and may be used, for example, to send and receive messages to establish a connection, and to acknowledge and exchange any other information related to the communication link and/or the data transmission, for example the transmission of encoded picture data.
• The decoder 30 (or referred to as the video decoder 30) is configured to receive the encoded picture data 21 and provide decoded picture data 31 or a decoded picture 31 (structural details of the decoder 30 will be further described below based on FIG. 3, FIG. 4, or FIG. 5).
  • the decoder 30 can be used to implement various embodiments described later, so as to realize the application of the chroma block prediction method described in this application on the decoding side.
  • the picture post-processor 32 is configured to perform post-processing on the decoded picture data 31 (also referred to as reconstructed picture data) to obtain post-processed picture data 33 .
• The post-processing performed by the picture post-processor 32 may include color format conversion (for example, from YUV format to RGB format), color correction, retouching, resampling, or any other processing, and the picture post-processor 32 may also be used to transmit the post-processed picture data 33 to the display device 34.
  • the display device 34 may be or include any kind of display for presenting the reconstructed picture, eg, an integrated or external display or monitor.
• The display may include a liquid crystal display (LCD), an organic light emitting diode (OLED) display, a plasma display, a projector, a micro LED display, a liquid crystal on silicon (LCoS) display, a digital light processor (DLP), or any other kind of display.
• Source device 12 and destination device 14 may comprise any of a variety of devices, including any type of handheld or stationary device, such as a notebook or laptop computer, mobile phone, smartphone, tablet or tablet computer, video camera, desktop computer, set-top box, television, camera, in-vehicle device, display device, digital media player, video game console, video streaming device (such as a content service server or a content distribution server), broadcast receiver device, or broadcast transmitter device, and may use no operating system or any type of operating system.
• Both encoder 20 and decoder 30 may be implemented as any of a variety of suitable circuits, for example, one or more microprocessors, digital signal processors (DSPs), application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), discrete logic, hardware, or any combination thereof.
• If the techniques are implemented partially in software, a device may store the instructions for the software in a suitable non-transitory computer-readable storage medium and may execute the instructions in hardware using one or more processors to perform the techniques of this disclosure. Any of the foregoing (including hardware, software, a combination of hardware and software, etc.) may be considered one or more processors.
• The video encoding and decoding system 10 shown in FIG. 1A is merely an example, and the techniques of this application may apply to video coding settings (e.g., video encoding or video decoding) that do not necessarily include any data communication between the encoding device and the decoding device; in other examples, data may be retrieved from local storage, streamed over a network, and so on.
  • a video encoding device may encode and store data to memory, and/or a video decoding device may retrieve and decode data from memory.
  • encoding and decoding are performed by devices that do not communicate with each other but merely encode data to memory and/or retrieve data from memory and decode data.
  • FIG. 1B is an illustrative diagram of an example of a video coding system 40 including encoder 20 of FIG. 2 and/or decoder 30 of FIG. 3 , according to an exemplary embodiment.
  • the video decoding system 40 may implement a combination of various technologies in the embodiments of the present application.
  • video coding system 40 may include imaging device 41, encoder 20, decoder 30 (and/or a video encoder/decoder implemented by logic circuit 47 of processing unit 46), antenna 42 , one or more processors 43, one or more memories 44 and/or a display device 45.
  • imaging device 41 , antenna 42 , processing unit 46 , logic circuit 47 , encoder 20 , decoder 30 , processor 43 , memory 44 and/or display device 45 can communicate with each other.
• Although video coding system 40 is illustrated with both the encoder 20 and the decoder 30, in different examples video coding system 40 may include only the encoder 20 or only the decoder 30.
  • antenna 42 may be used to transmit or receive an encoded bitstream of video data.
  • display device 45 may be used to present video data.
  • logic circuitry 47 may be implemented by processing unit 46 .
• The processing unit 46 may include application-specific integrated circuit (ASIC) logic, a graphics processor, a general-purpose processor, and the like.
• The video coding system 40 may also include an optional processor 43, and the optional processor 43 may similarly include application-specific integrated circuit (ASIC) logic, a graphics processor, a general-purpose processor, and the like.
  • the logic circuit 47 may be implemented by hardware, such as dedicated hardware for video encoding, etc., and the processor 43 may be implemented by general-purpose software, an operating system, and the like.
• The memory 44 may be any type of memory, for example volatile memory (e.g., static random access memory (SRAM) or dynamic random access memory (DRAM)) or non-volatile memory (e.g., flash memory), and so on.
  • memory 44 may be implemented by cache memory.
  • logic circuitry 47 may access memory 44 (eg, to implement an image buffer).
  • logic circuitry 47 and/or processing unit 46 may include memory (eg, cache, etc.) for implementing an image buffer or the like.
  • encoder 20 implemented by logic circuitry may include an image buffer (eg, implemented by processing unit 46 or memory 44 ) and a graphics processing unit (eg, implemented by processing unit 46 ).
  • a graphics processing unit may be communicatively coupled to the image buffer.
  • Graphics processing unit may contain encoder 20 implemented by logic circuitry 47 to implement the various modules discussed with reference to FIG. 2 and/or any other encoder system or subsystem described herein.
  • Logic circuits may be used to perform the various operations discussed herein.
  • decoder 30 may be implemented by logic circuitry 47 in a similar manner to implement the various modules discussed with reference to decoder 30 of FIG. 3 and/or any other decoder system or subsystem described herein.
• The decoder 30 implemented by logic circuitry may include an image buffer (e.g., implemented by the processing unit 46 or the memory 44) and a graphics processing unit (e.g., implemented by the processing unit 46).
  • a graphics processing unit may be communicatively coupled to the image buffer.
  • the graphics processing unit may contain decoder 30 implemented by logic circuitry 47 to implement the various modules discussed with reference to FIG. 3 and/or any other decoder system or subsystem described herein.
  • antenna 42 may be used to receive an encoded bitstream of video data.
• An encoded bitstream may contain data related to encoded video frames, indicators, index values, mode selection data, and the like, as discussed herein, such as data related to coding partitions (e.g., transform coefficients or quantized transform coefficients, optional indicators (as discussed), and/or data defining the coding partitions).
  • Video coding system 40 may also include decoder 30 coupled to antenna 42 and used to decode the encoded bitstream.
  • a display device 45 is used to present video frames.
  • the decoder 30 may be used to perform a reverse process.
  • the decoder 30 may be configured to receive and parse such syntax elements and decode the associated video data accordingly.
  • encoder 20 may entropy encode the syntax elements into an encoded video bitstream. In such instances, decoder 30 may parse such syntax elements and decode related video data accordingly.
• The encoder 20 and the decoder 30 in the embodiments of the present application may be codecs corresponding to video standard protocols such as H.263, H.264, HEVC, MPEG-2, MPEG-4, VP8, or VP9, or to next-generation video standard protocols (such as H.266).
  • Fig. 2 shows a schematic/conceptual block diagram of an example of an encoder 20 for implementing an embodiment of the present application.
  • the encoder 20 includes a residual calculation unit 204, a transform processing unit 206, a quantization unit 208, an inverse quantization unit 210, an inverse transform processing unit 212, a reconstruction unit 214, a buffer 216, a loop filter unit 220 , a decoded picture buffer (decoded picture buffer, DPB) 230 , a prediction processing unit 260 and an entropy encoding unit 270 .
  • Prediction processing unit 260 may include inter prediction unit 244 , intra prediction unit 254 , and mode selection unit 262 .
  • Inter prediction unit 244 may include a motion estimation unit and a motion compensation unit (not shown).
  • the encoder 20 shown in FIG. 2 may also be called a hybrid video encoder or a video encoder according to a hybrid video codec.
• The residual calculation unit 204, the transform processing unit 206, the quantization unit 208, the prediction processing unit 260, and the entropy encoding unit 270 form the forward signal path of the encoder 20, whereas, for example, the inverse quantization unit 210, the inverse transform processing unit 212, the reconstruction unit 214, the buffer 216, the loop filter 220, the decoded picture buffer (DPB) 230, and the prediction processing unit 260 form the backward signal path of the encoder, where the backward signal path of the encoder corresponds to the signal path of the decoder (see decoder 30 in FIG. 3).
  • the encoder 20 receives, eg via an input 202, a picture 201 or an image block 203 of a picture 201, eg a picture in a sequence of pictures forming a video or a video sequence.
• The image block 203 may also be called the current picture block or the picture block to be encoded, and the picture 201 may be called the current picture or the picture to be encoded (in particular when, in video coding, the current picture is distinguished from other pictures, the other pictures being, for example, previously encoded and/or decoded pictures of the same video sequence, i.e., the video sequence that also includes the current picture).
  • Embodiments of the encoder 20 may include a partitioning unit (not shown in FIG. 2 ) for partitioning the picture 201 into a plurality of blocks such as the image block 203 , usually into a plurality of non-overlapping blocks.
• The partitioning unit may be configured to use the same block size and a corresponding grid defining the block size for all pictures of the video sequence, or to change the block size between pictures or between subsets or groups of pictures, and to partition each picture into the corresponding blocks.
  • prediction processing unit 260 of encoder 20 may be configured to perform any combination of the partitioning techniques described above.
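• As an illustration of this kind of block partitioning, the following is a minimal sketch that splits a single sample array (e.g., a luma plane) into fixed-size non-overlapping blocks; the edge-replication padding policy is an assumption for illustration, since the partitioning actually used is determined by the codec configuration.

```python
import numpy as np

def partition_blocks(plane, block_size):
    """Yield (y, x, block) for non-overlapping block_size x block_size blocks
    of a single sample array (e.g., a luma plane). Edge blocks are padded by
    edge replication so that every yielded block has the full size."""
    h, w = plane.shape
    pad_h = (-h) % block_size
    pad_w = (-w) % block_size
    padded = np.pad(plane, ((0, pad_h), (0, pad_w)), mode="edge")
    for y in range(0, h, block_size):
        for x in range(0, w, block_size):
            yield y, x, padded[y:y + block_size, x:x + block_size]
```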
  • the image block 203 is also or can be regarded as a two-dimensional array or matrix of sampling points with sample values, although its size is smaller than that of the picture 201 .
• The image block 203 may include, for example, one sample array (e.g., a luma array in the case of a black-and-white picture 201), three sample arrays (e.g., one luma array and two chroma arrays in the case of a color picture), or any other number and/or type of arrays depending on the applied color format.
  • the number of sampling points in the horizontal and vertical directions (or axes) of the image block 203 defines the size of the image block 203 .
  • the encoder 20 shown in FIG. 2 is used to encode a picture 201 block by block, eg, perform encoding and prediction on each image block 203 .
• The residual calculation unit 204 is configured to calculate a residual block 205 based on the picture image block 203 and the prediction block 265 (further details of the prediction block 265 are provided below), for example by subtracting the sample values of the prediction block 265 from the sample values of the picture image block 203 sample by sample (pixel by pixel), to obtain the residual block 205 in the sample domain.
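• A minimal sketch of this sample-wise subtraction (together with the mirroring addition performed later by the reconstruction unit 214) might look as follows; the clipping to the sample range is an assumed implementation detail.

```python
import numpy as np

def residual_block(current, prediction):
    """Sample-by-sample (pixel-by-pixel) subtraction of the prediction block
    from the current block; int16 avoids uint8 wrap-around."""
    return current.astype(np.int16) - prediction.astype(np.int16)

def reconstruct_block(prediction, reconstructed_residual, bit_depth=8):
    """Mirror operation of the reconstruction unit: prediction plus the
    (possibly lossy) reconstructed residual, clipped to the sample range."""
    recon = prediction.astype(np.int16) + reconstructed_residual
    return np.clip(recon, 0, (1 << bit_depth) - 1).astype(np.uint8)
```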
  • the transform processing unit 206 is configured to apply a transform such as a discrete cosine transform (discrete cosine transform, DCT) or a discrete sine transform (discrete sine transform, DST) on the sample values of the residual block 205 to obtain transform coefficients 207 in the transform domain .
  • the transform coefficients 207 may also be referred to as transform residual coefficients and represent the residual block 205 in the transform domain.
  • the transform processing unit 206 may be configured to apply an integer approximation of DCT/DST, such as the transform specified for HEVC/H.265. Such integer approximations are usually scaled by some factor compared to the orthogonal DCT transform. In order to maintain the norm of the forward and inverse transformed residual blocks, an additional scaling factor is applied as part of the transformation process.
  • the scaling factor is usually chosen based on certain constraints, for example, the scaling factor is a power of 2 for the shift operation, the bit depth of the transform coefficients, the trade-off between accuracy and implementation cost, etc.
• Specific scaling factors are specified, for example, for the inverse transform at the decoder 30 side by, e.g., the inverse transform processing unit 312 (and for the corresponding inverse transform at the encoder 20 side by, e.g., the inverse transform processing unit 212), and, correspondingly, corresponding scaling factors may be specified for the forward transform at the encoder 20 side by the transform processing unit 206.
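• For illustration, the following sketch applies an orthonormal floating-point 2-D DCT to a residual block using SciPy; as noted above, standards such as HEVC/H.265 specify integer approximations with additional scaling factors, so this is only a conceptual sketch, not the normative transform.

```python
import numpy as np
from scipy.fft import dctn, idctn

def forward_transform(residual):
    """Orthonormal 2-D DCT-II of a residual block (float output)."""
    return dctn(residual.astype(np.float64), norm="ortho")

def inverse_transform(coeffs):
    """Inverse 2-D DCT, the counterpart applied on both encoder and
    decoder sides to return to the sample domain."""
    return idctn(coeffs, norm="ortho")
```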
  • a quantization unit 208 is configured to quantize the transform coefficients 207 , eg by applying scalar quantization or vector quantization, to obtain quantized transform coefficients 209 .
  • Quantized transform coefficients 209 may also be referred to as quantized residual coefficients 209 .
  • the quantization process may reduce the bit depth associated with some or all of the transform coefficients 207 .
  • n-bit transform coefficients may be rounded down to m-bit transform coefficients during quantization, where n is greater than m.
  • the degree of quantization can be modified by adjusting a quantization parameter (quantization parameter, QP). For example, with scalar quantization, different scales can be applied to achieve finer or coarser quantization.
  • a smaller quantization step size corresponds to finer quantization
  • a larger quantization step size corresponds to coarser quantization.
  • a suitable quantization step size can be indicated by a quantization parameter (QP).
  • a quantization parameter may be an index to a predefined set of suitable quantization step sizes.
  • a smaller quantization parameter may correspond to fine quantization (smaller quantization step size)
  • a larger quantization parameter may correspond to coarse quantization (larger quantization step size)
  • Quantization may involve division by a quantization step size and corresponding quantization or inverse quantization, eg performed by inverse quantization 210, or may involve multiplication by a quantization step size.
  • Embodiments according to some standards such as HEVC may use quantization parameters to determine the quantization step size.
  • the quantization step size can be calculated based on the quantization parameter using a fixed-point approximation of an equation involving division.
  • An additional scaling factor can be introduced for quantization and dequantization to recover the norm of the residual block that might have been modified by the scale used in the fixed-point approximation of the equations for the quantization step size and quantization parameter.
• The scaling of the inverse transform and of the dequantization may be combined.
  • a custom quantization table can be used and signaled from the encoder to the decoder in e.g. the bitstream. Quantization is a lossy operation, where the larger the quantization step size, the greater the loss.
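• As an illustration of the QP-to-step-size relationship described above, the following sketch assumes the HEVC-style mapping in which the quantization step size doubles for every increase of 6 in QP; real codecs implement this with integer scaling tables, so this floating-point version is a simplification.

```python
import numpy as np

def qstep(qp):
    # HEVC-style mapping: the step size doubles every 6 QP values.
    return 2.0 ** ((qp - 4) / 6.0)

def quantize(coeffs, qp):
    # Scalar quantization with rounding; a larger QP means coarser quantization.
    return np.round(coeffs / qstep(qp)).astype(np.int32)

def dequantize(levels, qp):
    # Inverse quantization: multiply the levels back by the step size.
    # dequantize(quantize(c, qp), qp) only approximates c: quantization is
    # lossy, and the loss grows with the step size.
    return levels.astype(np.float64) * qstep(qp)
```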
• The inverse quantization unit 210 is configured to apply the inverse quantization of the quantization unit 208 to the quantized coefficients to obtain dequantized coefficients 211, for example by applying, based on or using the same quantization step size as the quantization unit 208, the inverse of the quantization scheme applied by the quantization unit 208.
• The dequantized coefficients 211 may also be referred to as dequantized residual coefficients 211 and correspond to the transform coefficients 207, although they are generally not identical to the transform coefficients due to the loss caused by quantization.
• The inverse transform processing unit 212 is configured to apply the inverse transform of the transform applied by the transform processing unit 206, for example an inverse discrete cosine transform (DCT) or an inverse discrete sine transform (DST), to obtain an inverse transform block 213 in the sample domain.
  • the inverse transform block 213 may also be referred to as the inverse transform dequantized block 213 or the inverse transform residual block 213 .
• The reconstruction unit 214 (e.g., summer 214) is configured to add the inverse transform block 213 (i.e., the reconstructed residual block 213) to the prediction block 265 to obtain a reconstructed block 215 in the sample domain, for example by adding the sample values of the reconstructed residual block 213 to the sample values of the prediction block 265.
  • a buffer unit 216 (or simply "buffer” 216), such as a line buffer 216, is used to buffer or store the reconstructed block 215 and corresponding sample values, for eg intra prediction.
• The encoder may be configured to use the unfiltered reconstructed blocks and/or the corresponding sample values stored in the buffer unit 216 for any kind of estimation and/or prediction, such as intra prediction.
• For example, an embodiment of the encoder 20 may be configured such that the buffer unit 216 is used not only for storing the reconstructed blocks 215 for intra prediction 254 but also for the loop filter unit 220 (not shown in FIG. 2), and/or such that the buffer unit 216 and the decoded picture buffer unit 230 form one buffer.
  • Other embodiments may be used to use filtered blocks 221 and/or blocks or samples from decoded picture buffer 230 (neither shown in FIG. 2 ) as input or basis for intra prediction 254 .
• The loop filter unit 220 (or "loop filter" 220 for short) is configured to filter the reconstructed block 215 to obtain a filtered block 221, so as to smooth pixel transitions or otherwise improve video quality.
• The loop filter unit 220 is intended to represent one or more loop filters, such as a deblocking filter, a sample-adaptive offset (SAO) filter, or other filters, such as a bilateral filter, an adaptive loop filter (ALF), a sharpening or smoothing filter, or a collaborative filter.
• Although the loop filter unit 220 is shown in FIG. 2 as an in-loop filter, in other configurations the loop filter unit 220 may be implemented as a post-loop filter.
  • the filtered block 221 may also be referred to as a filtered reconstructed block 221 .
  • the decoded picture buffer 230 may store the reconstructed encoded block after the loop filter unit 220 performs a filtering operation on the reconstructed encoded block.
• Embodiments of the encoder 20 may be configured to output loop filter parameters (e.g., SAO information), for example directly, or entropy encoded by the entropy encoding unit 270 or any other entropy encoding unit before output, so that, for example, the decoder 30 can receive and apply the same loop filter parameters for decoding.
  • the decoded picture buffer (decoded picture buffer, DPB) 230 may be a reference picture memory storing reference picture data for the encoder 20 to encode video data.
• The DPB 230 may be formed by any one of various memory devices, such as dynamic random access memory (DRAM) (including synchronous DRAM (SDRAM), magnetoresistive RAM (MRAM), and resistive RAM (RRAM)) or other types of memory devices.
  • DPB 230 and buffer 216 may be provided by the same memory device or by separate memory devices.
  • a decoded picture buffer (DPB) 230 is used to store the filtered block 221 .
• The decoded picture buffer 230 may further be configured to store other previously filtered blocks, e.g., previously reconstructed and filtered blocks 221, of the same current picture or of a different picture, e.g., a previously reconstructed picture, and may provide complete previously reconstructed, i.e., decoded, pictures (and corresponding reference blocks and samples) and/or a partially reconstructed current picture (and corresponding reference blocks and samples), e.g., for inter prediction.
  • a decoded picture buffer (DPB) 230 is used to store the reconstructed block 215 if the reconstructed block 215 is reconstructed without in-loop filtering.
• The prediction processing unit 260, also referred to as the block prediction processing unit 260, is configured to receive or acquire the image block 203 (the current image block 203 of the current picture 201) and reconstructed picture data, e.g., reference samples of the same (current) picture from the buffer 216 and/or reference picture data 231 of one or more previously decoded pictures from the decoded picture buffer 230, and to process such data for prediction, i.e., to provide a prediction block 265, which may be an inter prediction block 245 or an intra prediction block 255.
  • the mode selection unit 262 may be used to select a prediction mode (such as an intra or inter prediction mode) and/or the corresponding prediction block 245 or 255 used as the prediction block 265 for computing the residual block 205 and reconstructing the reconstructed block 215.
  • Embodiments of the mode selection unit 262 may be used to select a prediction mode (e.g., from those supported by the prediction processing unit 260) that provides the best match or the smallest residual (minimum residual means better compression in transmission or storage), or provide minimal signaling overhead (minimum signaling overhead means better compression in transmission or storage), or consider or balance both of the above.
• The mode selection unit 262 may be configured to determine the prediction mode based on rate-distortion optimization (RDO), i.e., to select the prediction mode that provides the minimum rate-distortion cost, or to select a prediction mode whose associated rate-distortion at least satisfies the prediction mode selection criterion.
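• The rate-distortion criterion above is commonly expressed as the Lagrangian cost J = D + λ·R; the following sketch selects among candidate prediction modes on that basis, with the sum of squared differences assumed as the distortion measure. All function and parameter names here are illustrative, not part of this application.

```python
import numpy as np

def rd_cost(distortion, rate_bits, lam):
    # Classic Lagrangian cost: J = D + lambda * R.
    return distortion + lam * rate_bits

def select_mode(current, candidates, lam):
    """candidates: list of (mode_name, prediction_block, rate_bits).
    Returns the mode with the smallest rate-distortion cost."""
    best_name, best_cost = None, None
    for name, pred, rate in candidates:
        ssd = float(np.sum((current.astype(np.int64) - pred.astype(np.int64)) ** 2))
        cost = rd_cost(ssd, rate, lam)
        if best_cost is None or cost < best_cost:
            best_name, best_cost = name, cost
    return best_name
```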
  • the encoder 20 is used to determine or select the best or optimal prediction mode from a (predetermined) set of prediction modes.
  • the set of prediction modes may include, for example, intra prediction modes and/or inter prediction modes.
  • the set of intra prediction modes may include 35 different intra prediction modes, for example, non-directional modes such as DC (or mean) mode and planar mode, or directional modes as defined in H.265, or may include 67 different intra prediction modes, eg non-directional modes such as DC (or mean) mode and planar mode, or directional modes as defined in the developing H.266.
• The set of inter prediction modes depends on the available reference pictures (i.e., for example, at least part of the previously decoded pictures stored in the DPB 230, as described above) and other inter prediction parameters, for example, on whether the entire reference picture or only a part of the reference picture (e.g., a search window area around the area of the current block) is used to search for the best matching reference block, and/or, for example, on whether pixel interpolation such as half-pixel and/or quarter-pixel interpolation is applied.
  • the inter prediction mode set may include, for example, an Advanced Motion Vector Prediction (AMVP) mode and a merge mode.
  • the set of inter prediction modes may include the improved control point-based AMVP mode and the improved control point-based merge mode of the embodiment of the present application.
• The inter prediction unit 244 may be configured to perform any combination of the inter prediction techniques described below.
  • the embodiment of the present application may also apply a skip mode and/or a direct mode.
• The prediction processing unit 260 may further be configured to partition the image block 203 into smaller block partitions or sub-blocks, for example iteratively using quad-tree (QT) partitioning, binary-tree (BT) partitioning, or triple-tree (TT) partitioning, or any combination thereof, and to perform prediction, for example, for each of the block partitions or sub-blocks, where the mode selection includes selecting the tree structure of the partitioned image block 203 and selecting the prediction mode applied to each of the block partitions or sub-blocks.
  • the inter prediction unit 244 may include a motion estimation (motion estimation, ME) unit (not shown in FIG. 2 ) and a motion compensation (motion compensation, MC) unit (not shown in FIG. 2 ).
• The motion estimation unit is configured to receive or acquire the picture image block 203 (the current picture image block 203 of the current picture 201) and the decoded picture 231, or at least one or more previously reconstructed blocks, e.g., reconstructed blocks of one or more other/different previously decoded pictures 231, for motion estimation.
• For example, the video sequence may comprise the current picture and the previously decoded pictures 231; in other words, the current picture and the previously decoded pictures 231 may be part of, or form, the sequence of pictures forming the video sequence.
• For example, the encoder 20 may be configured to select a reference block from a plurality of reference blocks of the same or different pictures among a plurality of other pictures, and to provide the reference picture and/or the offset (spatial offset) between the position (X, Y coordinates) of the reference block and the position of the current block to the motion estimation unit (not shown in FIG. 2) as inter prediction parameters. This offset is also called a motion vector (MV).
  • the motion compensation unit is used to obtain inter prediction parameters, and perform inter prediction based on or using the inter prediction parameters to obtain inter prediction blocks 245 .
  • Motion compensation performed by a motion compensation unit may involve fetching or generating a predictive block based on motion/block vectors determined by motion estimation (possibly performing interpolation to sub-pixel precision). Interpolation filtering can generate additional pixel samples from known pixel samples, potentially increasing the number of candidate predictive blocks that can be used to encode a picture block.
• Upon receiving the motion vector, the motion compensation unit 246 may locate the prediction block to which the motion vector points in one of the reference picture lists. The motion compensation unit 246 may also generate syntax elements associated with the blocks and the video slice for use by the decoder 30 in decoding the picture blocks of the video slice.
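• As an illustration of the block matching that a motion estimation unit performs, the following is a minimal full-search sketch using the sum of absolute differences (SAD) as the matching cost; practical encoders use fast search patterns and sub-pixel interpolation, so this exhaustive integer-pixel search is a simplification.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two blocks."""
    return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def full_search(cur_block, ref_picture, top, left, search_range):
    """Full-search block matching: return the motion vector (dy, dx) of the
    reference block with the smallest SAD inside the search window around
    the current block position (top, left)."""
    bh, bw = cur_block.shape
    h, w = ref_picture.shape
    best_mv, best_cost = (0, 0), None
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = top + dy, left + dx
            if 0 <= y and y + bh <= h and 0 <= x and x + bw <= w:
                cost = sad(cur_block, ref_picture[y:y + bh, x:x + bw])
                if best_cost is None or cost < best_cost:
                    best_cost, best_mv = cost, (dy, dx)
    return best_mv
```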
• The above-mentioned inter prediction unit 244 may transmit syntax elements to the entropy encoding unit 270, the syntax elements including inter prediction parameters (for example, indication information of the inter prediction mode selected for the current block prediction after traversing multiple inter prediction modes).
• If there is only one inter prediction mode, the inter prediction parameters may not be carried in the syntax elements; in this case, the decoding end 30 may directly use the default prediction mode for decoding.
  • the inter prediction unit 244 can be configured to perform any combination of inter prediction techniques.
  • the intra prediction unit 254 is configured to obtain, eg receive, the picture block 203 (current picture block) of the same picture and one or more previously reconstructed blocks, eg reconstructed adjacent blocks, for intra estimation.
  • the encoder 20 may be configured to select an intra prediction mode from a plurality of (predetermined) intra prediction modes.
  • Embodiments of the encoder 20 may be used to select an intra prediction mode based on optimization criteria, such as based on the smallest residual (eg, the intra prediction mode that provides the most similar prediction block 255 to the current picture block 203 ) or the smallest rate-distortion.
• The intra prediction unit 254 is further configured to determine the intra prediction block 255 based on the intra prediction parameters of the selected intra prediction mode. In any case, after selecting the intra prediction mode for a block, the intra prediction unit 254 is also configured to provide the intra prediction parameters to the entropy encoding unit 270, i.e., to provide information indicating the selected intra prediction mode for the block. In one example, the intra prediction unit 254 may be configured to perform any combination of intra prediction techniques.
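• As a concrete illustration of one simple intra mode from the set described above, the following sketches DC-mode prediction from reconstructed neighboring samples; the availability handling and rounding here are simplifications rather than the normative H.265 procedure.

```python
import numpy as np

def dc_prediction(above, left, block_size, bit_depth=8):
    """DC intra mode: fill the block with the mean of the reconstructed
    neighboring samples (1-D arrays, or None if unavailable); fall back
    to mid-gray when no neighbors are available."""
    neighbors = [s for s in (above, left) if s is not None and len(s) > 0]
    if neighbors:
        dc = int(round(np.concatenate(neighbors).mean()))
    else:
        dc = 1 << (bit_depth - 1)  # e.g., 128 for 8-bit video
    return np.full((block_size, block_size), dc, dtype=np.uint8)
```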
• The above-mentioned intra prediction unit 254 may transmit syntax elements to the entropy encoding unit 270, the syntax elements including intra prediction parameters (for example, indication information of the intra prediction mode selected for the current block prediction after traversing multiple intra prediction modes).
• If there is only one intra prediction mode, the intra prediction parameters may not be carried in the syntax elements; in this case, the decoding end 30 may directly use the default prediction mode for decoding.
• The entropy encoding unit 270 is configured to apply an entropy encoding algorithm or scheme (for example, a variable length coding (VLC) scheme, a context adaptive VLC (CAVLC) scheme, an arithmetic coding scheme, context adaptive binary arithmetic coding (CABAC), syntax-based context-adaptive binary arithmetic coding (SBAC), probability interval partitioning entropy (PIPE) coding, or another entropy coding method or technique) to one or all of (or none of) the quantized residual coefficients 209, the inter prediction parameters, the intra prediction parameters, and/or the loop filter parameters, to obtain encoded picture data 21 that can be output, for example, in the form of an encoded bitstream 21.
• The encoded bitstream may be transmitted to the video decoder 30, or archived for later transmission or retrieval by the video decoder 30.
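• The entropy coding schemes listed above are specified in detail by the respective standards; purely as an illustration of one simple variable length code of this kind, the following sketches unsigned exponential-Golomb (ue(v)) coding, which H.264/HEVC use for many syntax elements. This is an illustrative sketch, not the CABAC engine itself.

```python
def ue_encode(value):
    """Unsigned exponential-Golomb codeword for a non-negative integer,
    returned as a bit string: prefix of N zeros, then (value + 1) in binary."""
    code = value + 1
    prefix_len = code.bit_length() - 1
    return "0" * prefix_len + format(code, "b")

def ue_decode(bits, pos=0):
    """Decode one ue(v) codeword starting at bit offset pos; returns
    (value, next_pos)."""
    zeros = 0
    while bits[pos + zeros] == "0":
        zeros += 1
    code = int(bits[pos + zeros:pos + 2 * zeros + 1], 2)
    return code - 1, pos + 2 * zeros + 1

# Example: ue_encode(0) == "1", ue_encode(3) == "00100"
```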
  • a non-transform based encoder 20 may directly quantize the residual signal without the transform processing unit 206 for certain blocks or frames.
  • encoder 20 may have quantization unit 208 and inverse quantization unit 210 combined into a single unit.
  • the encoder 20 may be used to implement the video image processing method described in the following embodiments.
  • video encoder 20 may be used to encode a video stream.
• For example, for some image blocks or image frames, the video encoder 20 may directly quantize the residual signal without processing by the transform processing unit 206 and, correspondingly, without processing by the inverse transform processing unit 212; or, for some image blocks or image frames, the video encoder 20 does not generate residual data and, correspondingly, does not need processing by the transform processing unit 206, the quantization unit 208, the inverse quantization unit 210, and the inverse transform processing unit 212; or, the video encoder 20 may directly store the reconstructed image block as a reference block without processing by the filter 220; or, the quantization unit 208 and the inverse quantization unit 210 in the video encoder 20 may be combined together.
  • the loop filter 220 is optional, and in the case of lossless compression coding, the transform processing unit 206, the quantization unit 208, the inverse quantization unit 210 and the inverse transform processing unit 212 are optional. It should be understood that, according to different application scenarios, the inter prediction unit 244 and the intra prediction unit 254 may be selectively enabled.
  • Fig. 3 shows a schematic/conceptual block diagram of an example of a decoder 30 for implementing an embodiment of the present application.
  • the video decoder 30 is configured to receive encoded picture data (eg, encoded bitstream) 21 , eg encoded by the encoder 20 , to obtain a decoded picture 231 .
  • video decoder 30 receives video data, such as an encoded video bitstream representing picture blocks of an encoded video slice, and associated syntax elements from video encoder 20 .
• The decoder 30 includes an entropy decoding unit 304, an inverse quantization unit 310, an inverse transform processing unit 312, a reconstruction unit 314 (such as a summer 314), a buffer 316, a loop filter 320, a decoded picture buffer 330, and a prediction processing unit 360.
  • Prediction processing unit 360 may include inter prediction unit 344 , intra prediction unit 354 , and mode selection unit 362 .
  • video decoder 30 may perform a decoding pass that is substantially the inverse of the encoding pass described with reference to video encoder 20 of FIG. 2 .
• The entropy decoding unit 304 is configured to perform entropy decoding on the encoded picture data 21 to obtain, for example, quantized coefficients 309 and/or decoded encoding parameters (not shown in FIG. 3), for example, any or all of the (decoded) inter prediction parameters, intra prediction parameters, loop filter parameters, and/or other syntax elements.
  • the entropy decoding unit 304 is further configured to forward the inter prediction parameters, intra prediction parameters and/or other syntax elements to the prediction processing unit 360 .
  • Video decoder 30 may receive syntax elements at the video slice level and/or the video block level.
• The inverse quantization unit 310 may be functionally the same as the inverse quantization unit 210;
• the inverse transform processing unit 312 may be functionally the same as the inverse transform processing unit 212;
• the reconstruction unit 314 may be functionally the same as the reconstruction unit 214;
• the buffer 316 may be functionally the same as the buffer 216;
  • loop filter 320 may be functionally the same as loop filter 220
  • decoded picture buffer 330 may be functionally the same as decoded picture buffer 230 .
  • the prediction processing unit 360 may include an inter prediction unit 344 and an intra prediction unit 354, wherein the inter prediction unit 344 may be functionally similar to the inter prediction unit 244, and the intra prediction unit 354 may be functionally similar to the intra prediction unit 254 .
• The prediction processing unit 360 is generally configured to perform block prediction and/or obtain a prediction block 365 from the encoded data 21, and to receive or obtain (explicitly or implicitly) prediction-related parameters and/or information about the selected prediction mode, for example from the entropy decoding unit 304.
• When a video slice is coded as an intra-coded (I) slice, the intra prediction unit 354 of the prediction processing unit 360 is configured to generate a prediction block 355 for the picture block of the current video slice based on the signaled intra prediction mode and data of previously decoded blocks of the current frame or picture.
• When a video frame is coded as an inter-coded (i.e., B or P) slice, the inter prediction unit 344 (e.g., motion compensation unit) of the prediction processing unit 360 is configured to generate a prediction block 365 for the video block of the current video slice based on the motion vectors and other syntax elements received from the entropy decoding unit 304.
• For inter prediction, the prediction block may be generated from one of the reference pictures in one of the reference picture lists.
  • Video decoder 30 may use default construction techniques to construct reference frame lists based on the reference pictures stored in DPB 330: list 0 and list 1.
  • the prediction processing unit 360 is configured to determine prediction information for a video block of the current video slice by parsing motion vectors and other syntax elements, and use the prediction information to generate a prediction block for the current video block being decoded.
• The prediction processing unit 360 uses some of the received syntax elements to determine the prediction mode (e.g., intra or inter prediction) used to encode the video blocks of the video slice, the inter prediction slice type (e.g., B slice, P slice, or GPB slice), construction information for one or more of the reference picture lists for the slice, the motion vector for each inter-coded video block of the slice, the inter prediction status for each inter-coded video block of the slice, and other information, so as to decode the video blocks of the current video slice.
• The syntax elements received by the video decoder 30 from the bitstream include syntax elements in one or more of an adaptive parameter set (APS), a sequence parameter set (SPS), a picture parameter set (PPS), or a slice header.
  • Inverse quantization unit 310 may be used to inverse quantize (ie, inverse quantize) the quantized transform coefficients provided in the bitstream and decoded by entropy decoding unit 304 .
  • the inverse quantization process may include using quantization parameters calculated by video encoder 20 for each video block in a video slice to determine the degree of quantization that should be applied and likewise determine the degree of inverse quantization that should be applied.
  • An inverse transform processing unit 312 is used to apply an inverse transform (eg, an inverse DCT, an inverse integer transform, or a conceptually similar inverse transform process) to the transform coefficients to generate a residual block in the pixel domain.
  • Reconstruction unit 314 (e.g. summer 314) is used to add inverse transform block 313 (i.e. reconstructed residual block 313) to prediction block 365 to obtain reconstructed block 315 in the sample domain, e.g. by adding The sample values of the reconstructed residual block 313 are added to the sample values of the prediction block 365 .
• The loop filter unit 320 is used (either in the decoding loop or after the decoding loop) to filter the reconstructed block 315 to obtain a filtered block 321, so as to smooth pixel transitions or otherwise improve video quality.
  • loop filter unit 320 may be configured to perform any combination of the filtering techniques described below.
• The loop filter unit 320 is intended to represent one or more loop filters, such as a deblocking filter, a sample-adaptive offset (SAO) filter, or other filters, such as a bilateral filter, an adaptive loop filter (ALF), a sharpening or smoothing filter, or a collaborative filter.
• Although the loop filter unit 320 is shown in FIG. 3 as an in-loop filter, in other configurations the loop filter unit 320 may be implemented as a post-loop filter.
  • the decoded video blocks 321 in a given frame or picture are then stored in a decoded picture buffer 330 that stores reference pictures for subsequent motion compensation.
  • the decoder 30 is configured to output the decoded picture 31 for presentation to or viewing by a user, eg, via an output 332 .
  • video decoder 30 may be used to decode the compressed bitstream.
  • decoder 30 may generate an output video stream without loop filter unit 320 .
  • the non-transform based decoder 30 may directly inverse quantize the residual signal without the inverse transform processing unit 312 for certain blocks or frames.
  • video decoder 30 may have inverse quantization unit 310 and inverse transform processing unit 312 combined into a single unit.
  • the decoder 30 is used to implement the video image processing method described in the following embodiments.
  • video decoder 30 may be used to decode the encoded video bitstream.
• For example, the video decoder 30 may generate an output video stream without processing by the filter 320; or, for some image blocks or image frames, the entropy decoding unit 304 of the video decoder 30 does not decode quantized coefficients and, correspondingly, there is no need for processing by the inverse quantization unit 310 and the inverse transform processing unit 312.
  • the loop filter 320 is optional; and in the case of lossless compression, the inverse quantization unit 310 and the inverse transform processing unit 312 are optional. It should be understood that, according to different application scenarios, the inter prediction unit and the intra prediction unit may be selectively enabled.
  • FIG. 4 is a schematic structural diagram of a video decoding device 400 (for example, a video encoding device 400 or a video decoding device 400 ) provided by an embodiment of the present application.
  • Video coding apparatus 400 is suitable for implementing the embodiments described herein.
  • the video decoding device 400 may be a video decoder (such as the decoder 30 of FIG. 1A ) or a video encoder (such as the encoder 20 of FIG. 1A ).
  • the video decoding device 400 may be one or more components in the decoder 30 of FIG. 1A or the encoder 20 of FIG. 1A described above.
• The video decoding device 400 includes: an ingress port 410 and a receiver unit (Rx) 420 for receiving data; a processor, logic unit, or central processing unit (CPU) 430 for processing data; a transmitter unit (Tx) 440 and an egress port 450 for transmitting data; and a memory 460 for storing data.
• The video decoding device 400 may also include optical-to-electrical conversion components and electrical-to-optical (EO) components coupled to the ingress port 410, the receiver unit 420, the transmitter unit 440, and the egress port 450 for the egress or ingress of optical or electrical signals.
• The processor 430 is implemented by hardware and software.
  • Processor 430 may be implemented as one or more CPU chips, cores (eg, multi-core processors), FPGAs, ASICs, and DSPs.
  • Processor 430 is in communication with inlet port 410 , receiver unit 420 , transmitter unit 440 , outlet port 450 and memory 460 .
  • the processor 430 includes a decoding module 470 (eg, an encoding module 470 or a decoding module 470).
  • the encoding/decoding module 470 implements the embodiments disclosed herein to implement the chrominance block prediction method provided by the embodiments of the present application. For example, the encoding/decoding module 470 implements, processes or provides various encoding operations.
  • the encoding/decoding module 470 is implemented in instructions stored in the memory 460 and executed by the processor 430 .
  • Memory 460 including one or more magnetic disks, tape drives, and solid-state drives, may be used as an overflow data storage device to store programs while those programs are selectively executed, and to store instructions and data that are read during program execution.
• The memory 460 may be volatile and/or non-volatile, and may be a read-only memory (ROM), a random access memory (RAM), a ternary content-addressable memory (TCAM), and/or a static random access memory (SRAM).
  • FIG. 5 is a simplified block diagram of an apparatus 500 usable as either or both of source device 12 and destination device 14 in FIG. 1A , according to an exemplary embodiment.
  • the device 500 can implement the technology of the present application.
  • FIG. 5 is a schematic block diagram of an implementation manner of an encoding device or a decoding device (referred to as a decoding device 500 for short) according to an embodiment of the present application.
  • the decoding device 500 may include a processor 510 , a memory 530 and a bus system 550 .
  • the processor and the memory are connected through a bus system, the memory is used for storing instructions, and the processor is used for executing the instructions stored in the memory.
  • the memory of the decoding device stores program codes, and the processor can call the program codes stored in the memory to execute various video encoding or decoding methods described in this application, especially various new random access stream quality control methods. To avoid repetition, no detailed description is given here.
• The processor 510 may be a central processing unit (CPU), or the processor 510 may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • a general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
  • the memory 530 may include a read only memory (ROM) device or a random access memory (RAM) device. Any other suitable type of storage device may also be used as memory 530 .
  • Memory 530 may include code and data 531 accessed by processor 510 using bus 550 .
  • the memory 530 may further include an operating system 533 and an application program 535, the application program 535 including at least one program allowing the processor 510 to execute the video encoding or decoding method described in this application (especially the video image processing method described in this application).
  • the application program 535 may include applications 1 to N, which further include a video encoding or decoding application (referred to as a video decoding application) for executing the video encoding or decoding method described in this application.
  • the bus system 550 may include not only a data bus, but also a power bus, a control bus, and a status signal bus. However, for clarity of illustration, the various buses are labeled as bus system 550 in the figure.
  • the decoding device 500 may also include one or more output devices, such as a display 570 .
• The display 570 may be a touch-sensitive display that combines a display with a touch-sensing unit operable to sense touch input.
  • Display 570 may be connected to processor 510 via bus 550 .
• I frame: in the field of video coding, a frame that can be decoded without relying on other frames is generally referred to as an "I frame".
• The encoding prediction mode of all blocks of an I frame is intra prediction.
• P frame: in the field of video coding, a frame that references a forward (preceding) frame and is marked as a P frame type in the code stream is generally referred to as a "P frame".
• B frame: in the field of video coding, a frame that can reference both a forward (preceding) frame and a backward (subsequent) frame is generally referred to as a "B frame".
• The embodiments of the present application are mainly aimed at low-latency scenarios; therefore, to reduce the latency, frames are generally encoded as "P frames".
  • Random access code stream for the convenience of description, in the embodiment of this application, the frame type of all frames in the video stream is I frame, or the video stream in which all frames are in intra-frame prediction mode is referred to as "random access code stream" . It should be noted that the random access code stream may also be called a second code stream, etc., and its name is not limited by this. The encoding process used to generate the random access code stream is called the second encoding.
  • Long group of pictures (group of pictures, GOP) stream: A video stream that contains at least one P frame between adjacent I frames, or a video stream that allows the inter-frame prediction mode, is referred to as a "long GOP stream". The long GOP stream may also be called a first code stream or an elementary stream, etc., and its name is not limited thereto. The encoding process used to generate the long GOP stream is called the first encoding.
  • Reconstruction frame: In the encoding process, the encoding result of the previous frame needs to be decoded and saved for reference by subsequent frames; the decoded frame is generally called a "reconstruction frame". This reconstruction frame is consistent with the frame decoded by the decoder, which is called codec consistency. For convenience of description, in the embodiments of the present application, frames decoded by the decoder are also collectively referred to as "reconstructed frames".
  • Strong interaction scenario: An application scenario that can receive user input and give feedback in real time, such as game scenes, interactive live-broadcast scenes, etc.
  • Camera position: A camera at a given location constitutes a camera position. When playback switches from the content of one camera position to the content of another, a camera position switch occurs.
  • Angle of view: The orientation of the camera, or the direction the user is looking, at a given moment. When the camera rotates or the user turns his head, the angle of view switches.
  • Multi-camera shooting: Using two or more cameras to shoot the same scene from multiple angles and directions at the same time.
  • the encoder can provide two code streams for the same video content: one is a long GOP code stream, and the other is a random access code stream.
  • the coded blocks of all frames of the random access code stream are in intra-frame prediction mode.
  • Accessing video content includes initially accessing video content, or switching from one video content to another.
  • the initial access to video content may be to start decoding the video content based on the detected operation of the user clicking to start playing a live content.
  • Switching from one video content to another can be switching from decoding one camera position video to another camera position video as shown in Figure 6 below.
  • the encoding end provides two code streams for the same video content to reduce the delay in accessing video content.
  • the application scenario of multi-camera shooting a sports event shown in FIG. 6 is taken as an example for illustration.
  • a plurality of cameras are arranged at different positions on the scene of a sports event, for example, seven cameras as shown in Figure 6, respectively camera A, camera B, camera C, camera D, camera E, camera F and camera G. Cameras in different positions can shoot the same scene from different angles to obtain a set of video signals.
  • the group of video signals may include camera A video, camera B video, camera C video, camera D video, camera E video, camera F video and camera G video.
  • the camera position A video, the camera position B video, the camera position C video, the camera position D video, the camera position E video, the camera position F video, and the camera position G video can each be used as a video content.
  • Multiple camera positions (camera position A, camera position B, camera position C, camera position D, camera position E, camera position F, and camera position G) video can provide users with multi-angle, three-dimensional stereoscopic visual experience.
  • the encoding end can provide the above two code streams for the video of each camera position (camera A, camera B, camera C, camera D, camera E, camera F, or camera G).
  • the user can switch from one angle of camera position video to another angle of camera position video in an appropriate interactive form to watch.
  • FIG. 7 is a schematic diagram of a decoded frame trajectory for switching from a current video content to another video content according to an embodiment of the present application.
  • the current video content is the video content of camera position A
  • the other video content is the video content of camera position B.
  • the decoding end decodes the frame numbered 0 (frame #0), the frame numbered 1 (frame #1), and the frame numbered 2 (frame #2) of the long GOP code stream of the current video content; when it is about to decode the frame numbered 3 (frame #3) of the current video content, video content switching occurs, that is, decoding switches from the current video content to another video content.
  • the decoding end can obtain and decode the frame at the switching time in the random access code stream of another video content. For example, as shown in FIG. 7 , the frame numbered 3 (frame #3) of the random access code stream of another video content.
  • the decoding end does not need to start decoding from the previous I frame (i.e., frame #0) at the switching time in the long GOP code stream of the other video content, nor does it need to wait until the next I frame (i.e., frame #9) to start decoding.
  • the reconstructed frame of the random access code stream of the video content is used as the reference frame of the long GOP code stream of the video content.
  • in encoding, the frame used for encoding prediction is the reconstructed frame of the long GOP code stream, whereas when accessing the long GOP code stream, the reconstructed frame of the random access code stream is used.
  • in other words, the reconstructed frame of the random access code stream may not match, in quality, the reconstructed frame of the long GOP code stream used in the encoding process.
  • the embodiments of this application propose a video image processing method as described below, which may also be called a quality-controlled random access method, aiming to improve the decoding quality of the accessed video content, reduce blocking effects, and eliminate part of the artifact effects on the basis of low-delay access to video content.
  • FIG. 8 is a schematic diagram of a video image processing system provided by an embodiment of the present application.
  • the video image processing system may include a server 801 and a terminal 802 .
  • the server 801 can communicate with the terminal 802.
  • the terminal 802 can communicate with the server 801 by means such as wireless fidelity (Wi-Fi) communication, Bluetooth communication, or 2nd/3rd/4th/5th generation (2G/3G/4G/5G) cellular communication.
  • FIG. 8 uses only one terminal 802 as a schematic illustration, and the system may include multiple terminals 802, which are not illustrated one by one in this embodiment of the present application.
  • the above-mentioned terminal 802 may be various types of devices equipped with display components.
  • the terminal 802 may be a terminal device such as a mobile phone, a tablet computer, a notebook computer, or a smart TV (in FIG. 8, a mobile phone is taken as an example of the terminal); the terminal can also be a device for virtual scene interaction, including VR glasses, AR devices, MR interactive devices, etc.
  • the terminal can also be a wearable electronic device such as a smart watch, a smart bracelet, etc.
  • the terminal can also be equipment carried in vehicles such as cars, unmanned vehicles, drones, and industrial robots. The embodiments of the present application do not specifically limit the specific form of the terminal.
  • the aforementioned terminal may also be referred to as user equipment (user equipment, UE), subscriber station, mobile unit, subscriber unit, wireless unit, remote unit, mobile device, wireless device, wireless communication device, remote device, mobile subscriber station, Terminal device, access terminal, mobile terminal, wireless terminal, smart terminal, remote terminal, handset, user agent, mobile client, client, or some other suitable term.
  • the above-mentioned server 801 may be one or more physical servers (a physical server is taken as an example in FIG. 8 ), may also be a computer cluster, or may be a virtual machine or a cloud server in a cloud computing scenario, and so on.
  • the terminal 802 may be installed with a client; for example, the client may be a video playback application, a live broadcast application (for example, e-commerce live broadcast, game live broadcast, etc.), a video conferencing application, or another application program (application, APP) involving video codec, such as a game application.
  • the terminal 802 can run the client based on the user's operation (such as clicking, touching, sliding, shaking, voice control, etc.), access the video content, and display the video content on the display component.
  • the server 801 can be used as the source device 12 described in the above-mentioned embodiment, through the video image processing method of the embodiment of the present application, to provide two code streams with the same or equivalent quality for the same video content, wherein one code stream is a long GOP code stream, and the other code stream is random access code stream.
  • the terminal 802 may serve as the destination device 14 described in the above embodiments, and realize fast access to video content by decoding the random access code stream and the long GOP code stream.
  • the server 801 may acquire a video image, where the video image may be a video image captured by a camera, or may also be a decoded video image.
  • the camera may be a camera at any position as shown in FIG. 6 .
  • the server 801 can use the video image processing method of the embodiment of the present application to provide two code streams with the same or equivalent quality for the same video content, wherein one code stream is a long GOP code stream, and the other code stream is a random access code stream stream.
  • the server 801 can provide the two code streams to the client.
  • the client can support the user in ordering video content, and the server 801 can store the two code streams; when receiving a video content request sent by the client, the server sends the two code streams to the client.
  • the client can first decode the random access code stream, and decode the long GOP code stream based on the decoding result of the random access code stream, so as to quickly access the video content.
  • the video content request is used to request the video content from time t0. If the video content corresponding to time t0 is an instantaneous decoding refresh (instantaneous decoding refresh, IDR) frame in the long GOP code stream, the server can send only the long GOP code stream to the client, and the client can directly decode the long GOP code stream to access the video content.
  • Otherwise, the server may send the random access code stream and the long GOP code stream including the video content at time t0 and after time t0 to the client.
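  • As a minimal sketch of the server-side decision just described (the Frame type and stream names below are illustrative assumptions, not the patent's data format):

```python
# Toy model of the stream-selection rule: if the requested time t0 falls on an
# IDR frame of the long GOP stream, only that stream is needed; otherwise the
# random access stream is sent as well so the client can start decoding at t0.
from dataclasses import dataclass

@dataclass
class Frame:
    time: int
    is_idr: bool  # instantaneous decoding refresh frame in the long GOP stream

def streams_to_send(long_gop, t0):
    access = next(f for f in long_gop if f.time >= t0)  # first frame at/after t0
    if access.is_idr:
        return ["long_gop"]                  # client decodes the long GOP stream directly
    return ["random_access", "long_gop"]     # client decodes a random access frame first

gop = [Frame(t, t % 9 == 0) for t in range(18)]  # assumed IDR period of 9, as in FIG. 7
print(streams_to_send(gop, 0))  # ['long_gop']
print(streams_to_send(gop, 3))  # ['random_access', 'long_gop']
```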
  • the client can first decode the random access code stream, and decode the long GOP code stream based on the decoding result of the random access code stream, so as to quickly access the video content.
  • the client can also render and display the decoded video content, so as to present the video content to the user.
  • the client may support the function of live video content, and the server 801 may deliver the two code streams to the client when receiving a request for accessing live video content.
  • the client can first decode the random access code stream, and decode the long GOP code stream based on the decoding result of the random access code stream, so as to quickly access the video content.
  • the client can also render and display the decoded video content, so as to present the video content to the user.
  • an intermediate node 803 may also be set between the server 801 and the terminal 802 .
  • the server 801 can be used as the source device 12 described in the above-mentioned embodiment, through the video image processing method of the embodiment of the present application, to provide two code streams with the same or equivalent quality for the same video content, wherein one code stream is a long GOP code stream, and the other code stream is random access code stream.
  • the intermediate node 803 may serve as the destination device 14 described in the above embodiment, by decoding the random access code stream and the long GOP code stream.
  • the intermediate node 803 provides the decoded video content to the terminal 802, so as to realize quick access to the video content.
  • the intermediate node 803 may be a node in a content delivery network (content delivery network, CDN).
  • the client can support the function of live video content, and the server 801 can deliver the two code streams to the intermediate node 803 when receiving a request for accessing live video content.
  • the intermediate node 803 may first decode the random access code stream, and decode the long GOP code stream based on the decoding result of the random access code stream.
  • the intermediate node 803 provides the decoded video content to the terminal 802, so as to realize quick access to the video content.
  • the client can also render and display the decoded video content, so as to present the video content to the user.
  • the client sends a request for accessing live video content to the server, and the request for accessing live video content is used to request access to live content from time t0 .
  • the server receives the request for accessing live video content sent by the client, it can send the random access code stream and the long GOP code stream containing the video content at time t 0 and after time t 0 to the intermediate node or the client.
  • the node or the client can decode the random access code stream first, and decode the long GOP code stream based on the decoding result of the random access code stream, so as to quickly access the video content.
  • any of the above-mentioned application programs of the terminal 802 may be a built-in application program of the terminal itself, or may be an application program provided by a third-party service provider and installed by the user, which is not specifically limited here.
  • the two code streams with the same or equivalent quality involved in the embodiments of the present application mean that the reconstructed frame of the long GOP code stream and the corresponding reconstructed frame of the random access code stream have the same or equivalent quality, so that using the reconstructed frame of the random access code stream as a reference frame to decode the long GOP code stream helps reduce blocking effects and eliminate part of the artifact effects.
  • the reconstructed frame of the long GOP code stream and the reconstructed frame of the corresponding random access code stream with the same or equivalent quality meet one or more of the following:
  • the difference between the reconstructed frame of the long GOP code stream and the reconstructed frame of the corresponding random access code stream is less than the difference threshold; or,
  • the similarity between the reconstructed frame of the long GOP code stream and the reconstructed frame of the corresponding random access code stream is higher than the similarity threshold; or,
  • the difference between the reconstructed frame of the long GOP code stream and the corresponding reconstructed frame of the random access code stream is smaller than the difference between the reconstructed frames of a long GOP code stream and a random access code stream obtained by encoding the same video content with other division methods or encoding parameters; or,
  • the similarity between the reconstructed frame of the long GOP code stream and the corresponding reconstructed frame of the random access code stream is higher than the similarity between the reconstructed frames of a long GOP code stream and a random access code stream obtained by encoding the same video content with other division methods or encoding parameters; or,
  • the difference between the pixel values at the same position in the reconstructed frame of the long GOP code stream and the corresponding reconstructed frame of the random access code stream is smaller than a pixel value threshold. For example, the pixel value threshold can be 128, and it can also be another value, which is not enumerated one by one in the embodiments of this application.
  • the reconstructed frame of the long GOP code stream and the corresponding reconstructed frame of the random access code stream specifically refer to the reconstructed frame of the long GOP code stream and the reconstructed frame of the random access code stream at the same position.
  • for example, the reconstructed frame of the long GOP code stream and the reconstructed frame of the random access code stream at the same position can include the reconstructed frame of the frame numbered 3 (frame #3) of the long GOP code stream and the reconstructed frame of the frame numbered 3 (frame #3) of the random access code stream.
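  • As an illustration, a minimal sketch of such a quality-equivalence check is given below; the MAD and per-pixel thresholds are illustrative assumptions drawn from the examples above, not normative values:

```python
# Checks whether two co-located reconstructed frames (e.g. frame #3 of the long
# GOP stream and frame #3 of the random access stream) are "the same or
# equivalent" in quality, using two of the criteria listed above.
import numpy as np

def equivalent_quality(rec_long_gop, rec_random_access,
                       mad_threshold=4.0, pixel_threshold=128):
    a = rec_long_gop.astype(np.int64)
    b = rec_random_access.astype(np.int64)
    mad = np.abs(a - b).mean()            # difference measured as MAD
    max_pixel_diff = np.abs(a - b).max()  # largest same-position pixel difference
    return bool(mad < mad_threshold and max_pixel_diff < pixel_threshold)

rec_a = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
rec_b = np.clip(rec_a + np.random.randint(-2, 3, (64, 64)), 0, 255).astype(np.uint8)
print(equivalent_quality(rec_a, rec_b))  # True for such a small perturbation
```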
  • the video image processing method in the embodiment of the present application may be based on the encoding result of the first code stream (for example, the reconstructed image of the first code stream or the reconstructed image in the first encoding process, and/or the encoding information of the first encoding) , adjusting the encoding of the second code stream, so as to realize that the quality of the reconstructed frame of the first code stream is the same or equivalent to that of the corresponding reconstructed frame of the second code stream.
  • the video image processing method of the embodiment of the present application can encode the input image to be encoded to generate a first code stream and a second code stream. That is, the same video content can be encoded to generate a first code stream and a second code stream with the same or equivalent quality.
  • the quality of the reconstructed frame of the first code stream is the same or equivalent to that of the corresponding reconstructed frame of the second code stream.
  • the first code stream here may be the long GOP code stream as described above.
  • the second code stream here may be the above-mentioned random access code stream. Adjusting the encoding of the second bit stream based on the encoding result of the first bit stream may have different specific implementation manners. For example, the following embodiments may be used to achieve the same or equivalent quality of the reconstructed frame of the first code stream and the corresponding reconstructed frame of the second code stream.
  • the interval between two adjacent frames of all intra-frame prediction modes in the first code stream is greater than the interval between two adjacent frames of all intra-frame prediction modes in the second code stream.
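  • For intuition, the frame-type patterns implied by this statement can be sketched as follows (assuming, as in FIG. 7, an intra period of 9 for the first code stream; the snippet is illustrative only):

```python
# Prints the frame-type pattern of each stream: the first (long GOP) stream has
# widely spaced all-intra frames, while the second (random access) stream is
# all-intra, so its I-frame interval is the minimum possible.
def frame_types(num_frames, intra_period):
    return "".join("I" if i % intra_period == 0 else "P" for i in range(num_frames))

print(frame_types(10, 9))  # first code stream:  IPPPPPPPPI
print(frame_types(10, 1))  # second code stream: IIIIIIIIII
```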
  • FIG. 9 is a schematic diagram of a video image processing method provided by an embodiment of the present application.
  • the embodiments of the present application may be executed by an encoding device.
  • the encoding apparatus may be applied to the source device 12 in the above embodiments, for example, the server 801 in the embodiment shown in FIG. 8 .
  • the encoding device may acquire an image to be encoded, and perform first encoding on the image to be encoded, so as to generate a first code stream.
  • the second encoding in the full intra-frame prediction mode is performed on the image to be encoded or the first reconstructed image, so as to generate a second code stream.
  • the first reconstructed image is a reconstructed image in the first code stream or in the first encoding process.
  • the first encoding is used to generate the first code stream
  • the second encoding is used to generate the second code stream
  • the first code stream and the second code stream are generated through different encoding processes.
  • the first code stream and the second code stream may be transmitted independently, or may be transmitted after the two are interleaved.
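  • The overall arrangement can be sketched as follows; the toy_encode function is a stand-in (plain uniform quantization) for a real prediction/transform/quantization/entropy-coding pipeline, so everything below is illustrative rather than the patent's actual encoder:

```python
# One input image feeds two encodings: the first encoding produces the first
# code stream, and the second (all-intra in the real scheme) encoding is applied
# to the image to be encoded or to the first reconstructed image.
import numpy as np

def toy_encode(image, qp):
    """Stand-in codec: returns (encoded_data, reconstructed_image)."""
    step = max(1, qp)
    quantized = image.astype(np.int64) // step
    reconstructed = np.clip(quantized * step + step // 2, 0, 255).astype(np.uint8)
    return quantized.tobytes(), reconstructed

def encode_two_streams(image_to_encode, qp=8):
    first_data, first_rec = toy_encode(image_to_encode, qp)   # first code stream
    second_data, second_rec = toy_encode(first_rec, qp)       # second code stream
    return first_data, second_data, first_rec, second_rec

frame = np.random.randint(0, 256, (32, 32), dtype=np.uint8)
_, _, rec_first, rec_second = encode_two_streams(frame)
# Quality gap between the two reconstructions (zero by construction here).
print(np.abs(rec_first.astype(np.int64) - rec_second.astype(np.int64)).mean())
```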
  • the embodiments shown in FIGS. 10 to 15 are used to explain several possible ways of realizing the above-mentioned embodiment shown in FIG. 9 in detail.
  • in a possible implementation, according to the first reconstructed image, the image to be encoded or the first reconstructed image is subjected to the second encoding in the full intra-frame prediction mode to generate the second code stream; or, according to the first reconstructed image and the encoding information of the first encoding, the second encoding in the full intra-frame prediction mode is performed on the image to be encoded or the first reconstructed image, so as to generate the second code stream.
  • FIG. 10 is a schematic flowchart of a video image processing method provided by an embodiment of the present application. Part of the method in the embodiment of the present application may be executed by an encoding device.
  • the encoding apparatus may be applied to the source device 12 in the above embodiments, for example, the server 801 in the embodiment shown in FIG. 8 .
  • a series of steps or operations involved in the embodiments of the present application may be executed in various orders and/or concurrently, and the execution order is not limited by the sequence numbers of the steps shown in FIG. 10 .
  • the method shown in Figure 10 may include the following implementation steps:
  • the image to be encoded in this embodiment of the present application may be a video image captured by a camera or other acquisition device, or a decoded video image.
  • the decoded video image may be an image obtained by decoding the compressed video image.
  • the image to be encoded may be a source video image.
  • the source video image is subjected to the first encoding and the second encoding, so as to generate the first code stream and the second code stream, and realize frame-level synchronous output.
  • the image to be encoded may be an image block obtained by dividing the source video image.
  • the divided image blocks are subjected to the first coding and the second coding to generate the first code stream and the second code stream to realize block-level synchronous output.
  • the first encoding may include one or more processing procedures such as prediction, transformation, quantization, and entropy encoding.
  • the image to be coded may be predicted, transformed, and quantized to generate first coded data, and then entropy coded on the first coded data to generate a first code stream including the first coded data.
  • the prediction mode of the first encoding may be inter-frame prediction.
  • First encoding is performed on the image to be encoded to generate a first code stream including P frames or P blocks.
  • the prediction mode of the first encoding may also be intra-frame prediction.
  • First encoding is performed on the image to be encoded to generate a first code stream including frames or I blocks in all intra-frame prediction modes.
  • for the long GOP code stream shown in FIG. 7, that is, the first code stream here, the code stream may include P frames and frames in the full intra-frame prediction mode.
  • According to the first reconstructed image, determine at least one of the first division method or the first encoding parameter used for the second encoding of the image to be encoded or the first reconstructed image, where the first reconstructed image is a reconstructed image of the first code stream or in the first encoding process.
  • the first code stream may be decoded to obtain the first reconstructed image.
  • the reconstructed image in the first encoding process may be acquired.
  • processes such as inverse quantization and inverse transformation are performed on the first encoded data to obtain the first reconstructed image.
  • the first division mode and/or the first encoding parameter used for the second encoding of the image to be encoded or the first reconstructed image is determined according to the first reconstructed image.
  • the first encoding parameter may include but not limited to a first quantization parameter (quantization parameter, QP) or a first code rate, and the like.
  • the image to be encoded may be a source video image; correspondingly, the first reconstructed image here is a first reconstructed frame, and the first reconstructed frame is obtained by decoding the first code stream corresponding to the source video image, or by performing inverse quantization, inverse transformation, and other processing on the first encoded data corresponding to the source video image in the first encoding process.
  • the image to be encoded can also be an image block obtained by dividing the source video image; correspondingly, the first reconstructed image here is a first reconstructed image block, and the first reconstructed image block is obtained by decoding the first code stream corresponding to the divided image block, or by performing inverse quantization, inverse transformation, and other processing on the encoded data corresponding to the divided image block in the first encoding process.
  • the encoding information of the first encoding may include one or more items of the division mode of the first encoding, the quantization parameter of the first encoding, and the encoding distortion information of the first encoding.
  • the second encoding in the full intra-frame prediction mode is performed on the image to be encoded or the first reconstructed image, so as to generate a second code stream.
  • the prediction mode of the second encoding may be intra prediction.
  • the second encoding is performed on the image to be encoded or the first reconstructed image to generate a second code stream including frames or I blocks in all intra-frame prediction modes.
  • for the random access code stream shown in FIG. 7, that is, the second code stream here, the code stream may include frames in the full intra-frame prediction mode.
  • the second encoding may include one or more processing procedures such as prediction, transformation, quantization, and entropy encoding.
  • the image to be encoded or the first reconstructed image may be predicted, transformed, and quantized to generate second encoded data, and then the second encoded data may be entropy encoded to generate a second code stream including the second encoded data.
  • the difference between the second encoding and the first encoding is that the prediction modes are different, that is, the second encoding uses the full intra-frame prediction mode.
  • in addition, other information, such as the encoding parameters of the second encoding and the first encoding, may also differ.
  • the difference between the above-mentioned first reconstructed image and the second reconstructed image is smaller than the difference threshold, or the similarity between the first reconstructed image and the second reconstructed image is higher than the similarity threshold, where the second reconstructed image is obtained by decoding the above-mentioned second code stream, or by performing inverse quantization, inverse transformation, and other processing on the above-mentioned second encoded data during the second encoding process.
  • the difference threshold or the similarity threshold can be reasonably set according to requirements.
  • the difference is used to represent the difference between the features of the first reconstructed image and the features of the second reconstructed image.
  • the difference can be measured by indicators such as mean absolute differences (MAD), sum of absolute differences (SAD), sum of squared differences (SSD), mean squared differences (MSD), or the sum of absolute transformed differences (SATD).
  • the similarity is used to represent the similarity between the features of the first reconstructed image and the features of the second reconstructed image.
  • the similarity can be measured by indicators such as MAD, SAD, SSD, MSD, or SATD.
  • the larger the MAD, SAD, SSD, MSD, or SATD, the lower the similarity, and the more the quality of the first reconstructed image differs from that of the second reconstructed image.
  • the smaller the MAD, SAD, SSD, MSD, or SATD, the higher the similarity, and the closer the quality of the first reconstructed image and the second reconstructed image is to being the same or equivalent.
  • the above difference threshold can be set to 0, or set according to requirements. For example, when MAD is selected as the measure of difference, the brightness signal threshold can be set to 4 and the chrominance signal threshold to 2; when SAD, SSD, or MSD is selected, the brightness signal thresholds can be set to 4xN, 16xN, and 16 respectively, and the chrominance signal thresholds to 2xN, 4xN, and 4 respectively, where N is the total number of pixels within the difference measurement object (which can be a coding block or an image); when SATD is selected as the measure of difference, the threshold can be set to 0. The similarity threshold can be set similarly.
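  • A minimal sketch of these measures and the example brightness thresholds is given below (8-bit samples assumed; the thresholds are the illustrative values above, not normative):

```python
# MAD, SAD, SSD and MSD between two reconstructed images, plus the example
# brightness-signal thresholds: MAD < 4, SAD < 4*N, SSD < 16*N, MSD < 16,
# where N is the number of pixels in the measured object.
import numpy as np

def diff(a, b):
    return a.astype(np.int64) - b.astype(np.int64)

def mad(a, b): return np.abs(diff(a, b)).mean()
def sad(a, b): return np.abs(diff(a, b)).sum()
def ssd(a, b): return (diff(a, b) ** 2).sum()
def msd(a, b): return (diff(a, b) ** 2).mean()

def within_brightness_thresholds(a, b):
    n = a.size
    return bool(mad(a, b) < 4 and sad(a, b) < 4 * n and
                ssd(a, b) < 16 * n and msd(a, b) < 16)

block_a = np.random.randint(0, 256, (16, 16), dtype=np.uint8)
block_b = np.clip(block_a + np.random.randint(-1, 2, (16, 16)), 0, 255).astype(np.uint8)
print(within_brightness_thresholds(block_a, block_b))  # True for a ±1 perturbation
```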
  • a specific implementable manner of the above S1003 may be: according to the first reconstructed image, determining a plurality of second division methods and selecting one of the plurality of second division methods as the first division method; and/or, according to the first reconstructed image, determining a plurality of second encoding parameters and selecting one of the plurality of second encoding parameters as the first encoding parameter.
  • the similarity between the first reconstructed image and the second reconstructed image is the highest among the similarities between the first reconstructed image and a plurality of third reconstructed images, where the plurality of third reconstructed images are reconstructed images obtained by performing the second encoding on the image to be encoded or the first reconstructed image multiple times according to the plurality of second division methods and/or the plurality of second encoding parameters, respectively.
  • the plurality of third reconstructed images may be reconstructed images in multiple second encoding processes, or may be reconstructed images of multiple code streams obtained by multiple second encoding processes.
  • the encoding device may perform multiple second encodings on the image to be encoded or the first reconstructed image according to multiple second division modes and/or multiple second encoding parameters respectively, so as to generate multiple third encoded data.
  • in the multiple second encoding processes, inverse quantization, inverse transformation, and other processing are performed on the multiple third encoded data to obtain the multiple third reconstructed images.
  • the similarity between the first reconstructed image and the second reconstructed image is the highest among the similarities between the first reconstructed image and the plurality of third reconstructed images.
  • the third coded data corresponding to the third reconstructed image with the highest similarity is used as the second coded data, so as to generate a code stream including the second coded data.
  • the code stream corresponding to the third reconstructed image with the highest similarity is used as the second code stream.
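  • A minimal sketch of this selection loop follows; candidate QPs stand in for the plurality of second division methods and/or second encoding parameters, and toy_encode is the same stand-in codec sketched earlier (all names are illustrative assumptions):

```python
# Try several candidate second encodings and keep the one whose third
# reconstructed image is most similar (lowest SAD) to the first reconstructed
# image; its encoded data becomes the second encoded data.
import numpy as np

def toy_encode(image, qp):
    step = max(1, qp)
    q = image.astype(np.int64) // step
    return q.tobytes(), np.clip(q * step + step // 2, 0, 255).astype(np.uint8)

def select_second_encoding(first_reconstructed, candidate_qps):
    best = None
    for qp in candidate_qps:
        data, third_reconstructed = toy_encode(first_reconstructed, qp)
        cost = np.abs(first_reconstructed.astype(np.int64)
                      - third_reconstructed.astype(np.int64)).sum()  # SAD
        if best is None or cost < best[0]:
            best = (cost, qp, data)  # lowest SAD == highest similarity
    return best

rec = np.random.randint(0, 256, (16, 16), dtype=np.uint8)
cost, qp, second_encoded_data = select_second_encoding(rec, range(1, 16, 2))
print(qp, cost)  # the candidate whose reconstruction best matches rec
```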
  • the video image processing method in the embodiment of the present application may further include: judging whether the prediction mode of the first encoding is intra prediction.
  • if the prediction mode of the first encoding is inter-frame prediction, S1003 is executed.
  • if the prediction mode of the first encoding is intra-frame prediction, the first code stream is used as the second code stream, or the first encoded data is used as the second encoded data to generate the second code stream.
  • if the prediction mode of the first encoding is inter-frame prediction, the second encoding can be performed on the image to be encoded or the first reconstructed image by executing S1003 and S1004, so as to generate the second code stream including frames or I blocks in the full intra-frame prediction mode.
  • exemplarily, the encoding device judges whether the prediction mode of the first encoding is intra-frame prediction; as shown in FIG. 7, the prediction mode of the first encoding at this time is inter-frame prediction. The encoding device may then perform the second encoding on the image to be encoded or on the reconstructed frame of the frame numbered 3 (frame #3) by executing S1003 and S1004, so as to generate the frame numbered 3 (frame #3) of the random access code stream.
  • the prediction mode of the frame numbered 3 (frame #3) in the random access code stream is intra prediction.
  • optionally, when the first encoding performed on the image to be encoded yields a first code stream including frames or I blocks in the full intra-frame prediction mode, those frames or I blocks may also not be used as the second encoded data; that is, the second code stream may not include the frames or I blocks in the full intra-frame prediction mode, which can be reasonably set according to video transmission requirements.
  • the foregoing second coded data may also be referred to as a random access frame.
  • the server may store the first code stream and the second code stream, and send the first code stream and the second code stream to the client.
  • An example is an on-demand application scenario, where the client requests to start playing a video content at time t0, and the server starts delivering the first code stream including the video content at time t0.
  • if the video content corresponding to time t0 is an access frame in the first code stream (for example, a frame in the above-mentioned full intra-frame prediction mode), the server only provides the first code stream to the client, and the client decodes and plays the first code stream.
  • for example, if the video content corresponding to time t0 is the frame numbered 0 (frame #0) in the long GOP code stream as shown in FIG. 7, the server only provides the long GOP code stream to the client.
  • otherwise, the server also needs to send to the client the random access frame in the second code stream at time t0 or closest to time t0.
  • the client can first decode the random access frame, and then use the reconstructed frame of the random access frame as a reference frame to decode and play the first code stream.
  • for example, if the video content corresponding to time t0 is the frame numbered 3 (frame #3) in the long GOP code stream as shown in FIG. 7, the server also needs to send to the client the frame numbered 3 (frame #3) in the random access code stream.
  • the client can first decode the frame numbered 3 (frame #3) in the random access code stream, use the reconstructed frame of that frame as a reference frame to decode the frame numbered 4 (frame #4) of the long GOP code stream, and then decode and play the subsequent frames of the long GOP code stream.
  • Another example is a live broadcast scenario, where the client requests to access the live broadcast from time t0, and the server delivers the first code stream and the second code stream to the client.
  • the client first decodes the random access frame at time t0 in the second code stream, or the random access frame closest to time t0, and then uses the reconstructed frame of the random access frame as a reference frame to decode the subsequent frames of the first code stream.
  • Subsequent frames refer to frames after the moment of the random access frame in the first code stream.
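  • This client-side flow can be sketched as follows; the decode functions are toy stand-ins (intra frames decode independently, P frames add a residual to a reference), so the snippet is illustrative only:

```python
# Access flow: decode the random access frame at (or nearest to) t0, then use
# its reconstruction as the reference for the subsequent long-GOP frames.
import numpy as np

def decode_intra(frame_data):
    return frame_data                   # stand-in: decodes without a reference

def decode_inter(frame_data, reference):
    return reference + frame_data       # stand-in: decodes against a reference

def access_from(random_access_frame, long_gop_residuals):
    reference = decode_intra(random_access_frame)
    decoded = []
    for residual in long_gop_residuals:  # frames after the random access moment
        reference = decode_inter(residual, reference)
        decoded.append(reference)
    return decoded

ra_frame = np.full((4, 4), 100.0)
residuals = [np.ones((4, 4)) for _ in range(3)]
print(access_from(ra_frame, residuals)[-1][0, 0])  # 103.0
```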
  • in the embodiment of the present application, the first encoding is performed on the image to be encoded to generate the first code stream; at least one of the first division method or the first encoding parameter used for the second encoding of the image to be encoded or the first reconstructed image is determined according to the first reconstructed image; and, according to at least one of the first division method or the first encoding parameter, the second encoding is performed on the image to be encoded or the first reconstructed image to generate the second code stream, where the first reconstructed image is a reconstructed image of the first code stream or in the first encoding process.
  • in this way, the encoding of the second code stream is adjusted so that the quality of the reconstructed image of the first code stream is the same as or equivalent to that of the corresponding reconstructed image of the second code stream; thus, on the basis of meeting low-latency access to video content, the decoding quality of the accessed video content is improved, blocking effects are reduced, and part of the artifact effects are eliminated.
  • the video image processing method according to the embodiment of the present application is explained below by taking, as the image to be encoded, an image block obtained by dividing the source video image.
  • FIG. 11 is a schematic flowchart of a video image processing method provided by an embodiment of the present application. Part of the method in the embodiment of the present application may be executed by an encoding device.
  • the encoding apparatus may be applied to the source device 12 in the above embodiments, for example, the server 801 in the embodiment shown in FIG. 8 .
  • in FIG. 11, the image to be encoded being the Nth image block of the Kth frame image is taken as an example.
  • the encoding device may encode the Nth image block of the Kth image frame to generate the first code stream and the second code stream.
  • the method shown in Figure 11 may include the following implementation steps:
  • the encoding device may receive the input image of the Kth frame, and perform block division on the image of the Kth frame to obtain multiple image blocks of the image of the Kth frame.
  • the encoding of the Nth image block of the Kth frame of image is taken as an example for illustration.
  • Other image blocks may be processed in the same or similar manner, and this embodiment of the present application does not explain them one by one.
  • S1102. Perform first encoding on the Nth image block to generate a first code stream.
  • the encoding device may use information such as the prediction mode P, the division method D, and the quantization parameter QP to perform first encoding on the Nth image block, so as to generate a first code stream.
  • the first code stream may include first encoded data of the Nth image block.
  • the reconstructed image block A of the Nth image block may also be generated.
  • information such as the division mode D and the quantization parameter QP involved in the above-mentioned first encoding may also be used as input information.
  • if the prediction mode P of the Nth image block is intra-frame prediction, the encoding result of the current block in the second code stream directly uses the encoding result of the Nth image block in the first code stream.
  • if the prediction mode P of the Nth image block is inter-frame prediction, the original image block to be encoded at the position corresponding to the reconstructed image block A in the Kth frame image, that is, the above Nth image block, is subjected to the second encoding in the intra-frame prediction mode through the following steps.
  • the Nth image block is divided in a division manner, and a second encoding process comprising a series of processing such as intra-frame prediction, transformation, quantization, inverse quantization, and inverse transformation is performed on all divided sub-blocks, so as to obtain a reconstructed image block B of the second code stream.
  • the Nth image block is divided in a division manner, including but not limited to performing 2Nx2N division, NxN division, or no division on the Nth image block.
  • dividing the Nth image block into 2Nx2N means dividing the Nth image block into 2Nx2N sub-blocks.
  • N is any positive integer greater than 1.
  • when this step is performed for the first time, the encoding information and/or encoding parameters used in the series of second encoding processes such as division, intra-frame prediction, transformation, and quantization, for example, the division method, QP, and code rate, need to be initialized.
  • for example, for the QP, the average QP of one or more I frames closest to the image of the Kth frame may be used.
  • the reconstructed image block B is an image block corresponding to the reconstructed image block A in the reconstructed image of the Kth frame image of the second code stream. That is, in the reconstructed image of the K-th frame image of the second code stream, the image block at the same position as the reconstructed image block A of the K-th frame image of the first code stream.
  • the reconstructed image block B may consist of one or more reconstructed blocks of coded sub-blocks.
  • the similarity cost function value f(division method, QP) of the reconstructed image block A and the reconstructed image block B can be calculated according to the following formula (1):
  • f(division method, QP) = Σ_{i=0}^{I−1} |B(division method, QP, i) − A(i)|^T    (1)
  • where i represents the index of a pixel in the reconstructed image block, I represents the total number of pixels in the image of the reconstructed image block, B(division method, QP, i) represents the reconstructed pixel value of the i-th pixel position of the reconstructed image block B under the division method with quantization parameter QP, A(i) represents the reconstructed pixel value of the i-th pixel position of the reconstructed image block A, and T can be 1 or 2.
  • in addition to formula (1), other methods can be used to evaluate the similarity of two reconstructed image blocks, including but not limited to MAD, SAD, SSD, MSD, SATD, etc.
  • the similarity of two reconstructed image blocks is evaluated by any one of the following formulas (2) to formula (5), or by calculating the sum of the absolute values of the difference images of the two reconstructed image blocks after Hadamard transformation.
  • when the SATD similarity cost function is used, the sum of the absolute values of the difference image of the two reconstructed image blocks after Hadamard transformation is calculated to evaluate the similarity of the two reconstructed image blocks.
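  • A minimal sketch of such an SATD-style cost is given below, assuming power-of-two block sizes (the Hadamard construction and the lack of normalization are implementation choices, not the patent's prescription):

```python
# SATD-style cost: Hadamard-transform the difference of two reconstructed
# blocks (rows and columns) and sum the absolute transformed values.
import numpy as np

def hadamard(n):
    h = np.array([[1]])
    while h.shape[0] < n:          # doubles the order: 1, 2, 4, 8, ...
        h = np.block([[h, h], [h, -h]])
    return h

def satd(block_a, block_b):
    d = block_a.astype(np.int64) - block_b.astype(np.int64)
    h = hadamard(d.shape[0])
    return int(np.abs(h @ d @ h.T).sum())  # 2-D Hadamard transform of the difference

a = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
b = np.clip(a + np.random.randint(-1, 2, (8, 8)), 0, 255).astype(np.uint8)
print(satd(a, a))  # 0: identical blocks give zero cost
print(satd(a, b))  # nonzero for a ±1 perturbation
```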
  • the similarity cost function value of the reconstructed image block A and the reconstructed image block B is calculated by using any of the above similarity cost functions.
  • if the similarity cost function value of the reconstructed image block A and the reconstructed image block B is smaller than the similarity cost function value threshold, S1109 is directly executed; if the finite iteration has reached a local optimum, S1109 is also directly executed. The finite iterative local optimum specifically means that all division methods and/or encoding parameters have been used to perform the second encoding on the Nth image block, and the similarity between the reconstructed image block B and the reconstructed image block A has been calculated for each division method and/or encoding parameter.
  • the similarity cost function value threshold can be flexibly set according to requirements. For example, it can be set to 0, or the similarity cost function value thresholds of MAD, SAD, SSD, and MSD for evaluating the similarity of brightness signals can be set to 4, 4xI, 16xI, and 16, respectively; in other words, when the average brightness difference between the reconstructed image block B and the reconstructed image block A is less than 4 or 16, S1109 is executed.
  • for chrominance signals, the similarity cost function value thresholds for evaluating the similarity using MAD, SAD, SSD, and MSD can be set to 2, 2xI, 4xI, and 4, respectively; in other words, when the average chrominance difference between the reconstructed image block B and the reconstructed image block A is less than 2 or 4, S1109 is executed.
  • the value threshold of the similarity cost function for evaluating the similarity using SATD may be set to 0.
  • the division method and the corresponding quantization result may be discarded.
  • S1101 may be repeatedly executed to start the first encoding of the N+1th image block of the Kth frame image until the first encoding and the second encoding of the entire frame are completed.
  • the embodiment of the present application may have multiple division methods and/or encoding parameters; one division method and/or encoding parameter is selected among them, and S1105 is repeatedly executed so as to traverse the multiple division methods and/or encoding parameters, performing the second encoding on the Nth image block and calculating the similarity cost function value of the reconstructed image block B and the reconstructed image block A for each division method and/or encoding parameter.
  • taking the encoding parameter QP as an example, within a certain range (such as 0 to 51), a QP is selected with a certain step size (such as 1 or 2), and S1105 is repeatedly executed until the finite set of QPs has been enumerated.
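  • A minimal sketch of this finite iteration, including the early exit on the threshold from the steps above, is given below (toy_encode is the stand-in codec used in the earlier sketches; the range, step, and threshold are the illustrative values above):

```python
# Enumerate QPs over a range with a step; second-encode the block at each QP,
# stop early once the similarity cost drops below the threshold, and otherwise
# keep the finite-iteration local optimum (lowest cost seen).
import numpy as np

def toy_encode(image, qp):
    step = max(1, qp)
    q = image.astype(np.int64) // step
    return q.tobytes(), np.clip(q * step + step // 2, 0, 255).astype(np.uint8)

def search_qp(reconstructed_a, qp_range=range(0, 52, 2), cost_threshold=0):
    best = None
    for qp in qp_range:
        data, reconstructed_b = toy_encode(reconstructed_a, qp)
        cost = np.abs(reconstructed_a.astype(np.int64)
                      - reconstructed_b.astype(np.int64)).sum()  # f(., QP) with T = 1
        if best is None or cost < best[0]:
            best = (cost, qp, data)
        if cost <= cost_threshold:
            break                      # early exit: already below the threshold
    return best                        # otherwise the finite-iteration local optimum

rec_a = np.random.randint(0, 256, (16, 16), dtype=np.uint8)
print(search_qp(rec_a)[:2])  # (cost, qp) of the selected second encoding
```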
  • the second code stream may include second coded data of the Nth image block.
  • the second encoded data here is the encoded data corresponding to the reconstructed image block B.
  • the encoded data corresponding to the reconstructed image block B refers to the encoded data obtained by using a certain division method and/or certain encoding parameters; the reconstructed image block B may be a reconstructed image of that encoded data.
  • the second coded data here is the coded data corresponding to the reconstructed image block B that is locally optimal in the limited iteration.
  • the finite iterative local optimum specifically means that all division methods and/or encoding parameters have been used to encode the Nth image block, the similarity between the reconstructed image block B and the reconstructed image block A is calculated for each division method and/or encoding parameter, the one with the highest similarity is selected as the finite-iteration local optimum, and the encoded data corresponding to the reconstructed image block B with the highest similarity is used as the second encoded data here.
  • other information may also be combined to determine whether to execute S1105. For example, it is determined whether to execute S1105 in combination with information such as the frame type used in the first encoding, the reference relationship, or whether temporal motion vector prediction is disabled. Exemplarily, when there is only a single frame reference in the first encoding and the temporal motion vector prediction is disabled, perform S1105, that is, encode the random access frame of the second code stream.
  • for another example, whether to execute S1105 is determined in combination with tool parameters used in the first encoding, such as intra-frame prediction strong filtering and SAO filtering; depending on these parameters, S1105 may not be executed, that is, the random access frame is not encoded.
  • in the embodiment of the present application, the encoding of the second code stream is adjusted based on the encoding result of the first code stream including the Nth image block, so that the quality of the reconstructed image block of the first code stream is the same as or equivalent to that of the corresponding reconstructed image block of the second code stream; in this way, on the basis of meeting low-latency access to video content, the decoding quality of the accessed video content is improved, blocking effects are reduced, and part of the artifact effects are eliminated.
  • FIG. 12 is a schematic flowchart of a video image processing method provided by an embodiment of the present application. Part of the method in the embodiment of the present application may be executed by an encoding device.
  • the encoding apparatus may be applied to the source device 12 in the above embodiments, for example, the server 801 in the embodiment shown in FIG. 8 .
  • the image to be encoded is an image of the Kth frame as an example.
  • the encoding device may encode the image of the Kth frame to generate a first code stream and a second code stream.
  • the method shown in Figure 12 may include the following implementation steps:
  • the encoding device may receive an input image of the Kth frame.
  • the encoding of the image of the Kth frame is taken as an example for illustration, and other frames may be processed in the same or similar manner, and this embodiment of the present application does not explain each one.
  • the encoding device may use information such as the prediction mode P, the division method D, and the quantization parameter QP to perform first encoding on the image of the K-th frame, so as to generate a first code stream.
  • the first code stream may include the first coded data of the Kth frame image.
  • the reconstructed frame A of the Kth frame image may also be generated.
  • information such as the division mode D and the quantization parameter QP involved in the above-mentioned first encoding may also be used as input information.
  • the image of the Kth frame is divided in a division manner, and a second encoding process comprising a series of processing such as intra-frame prediction, transformation, quantization, inverse quantization, and inverse transformation is performed on all divided sub-blocks, so as to obtain a reconstructed frame B of the second code stream.
  • when S1202 is performed for the first time on the Kth frame image, the encoding information and/or encoding parameters used in the series of second encoding processes such as division, intra-frame prediction, transformation, and quantization, for example, the division method, QP, and code rate, need to be initialized; for example, for the QP, the average QP of one or more I frames closest to the image of the Kth frame may be used.
  • the reconstructed frame B may consist of one or more reconstructed blocks of coded image blocks.
  • if the prediction mode P of the Kth frame image is full intra-frame prediction, the second encoded data of the current frame of the second code stream directly uses the first encoded data of the Kth frame image of the first code stream.
  • if the prediction mode P of the Kth frame image is not full intra-frame prediction (for example, the Kth frame image is a P frame, or there is a P block), S1202 is executed to perform the second encoding in the full intra-frame prediction mode on the Kth frame image.
  • the similarity cost function value f(division method, QP) of the reconstructed frame A and the reconstructed frame B can be calculated according to the following formula (6):
  • f(division method, QP) = Σ_{i=0}^{I−1} |B(division method, QP, i) − A(i)|^T    (6)
  • where i represents the index of a pixel in the reconstructed frame, I represents the total number of pixels in the image of the reconstructed frame, B(division method, QP, i) represents the reconstructed pixel value of the i-th pixel position of the reconstructed frame B under the division method with quantization parameter QP, A(i) represents the reconstructed pixel value of the i-th pixel position of the reconstructed frame A, and T can be 1 or 2.
  • the embodiment of the present application may have multiple division methods and/or encoding parameters; one division method and/or encoding parameter is selected among them, and S1202 is repeatedly executed so as to traverse the multiple division methods and/or encoding parameters, encoding the Kth frame image and calculating the similarity cost function value of the reconstructed frame B and the reconstructed frame A for each division method and/or encoding parameter.
  • taking the encoding parameter QP as an example, within a certain interval (such as 0 to 51), a QP is selected with a certain step size (such as 1 or 2), and S1202 is repeatedly executed until the finite set of QPs has been enumerated.
  • the second code stream may include second coded data of the Kth frame image.
  • the second encoded data here is the encoded data corresponding to the reconstructed frame B that is locally optimal in a finite iteration.
  • the finite iterative local optimum specifically means that all division methods and/or encoding parameters have been used to encode the Kth frame image, the similarity between the reconstructed frame B and the reconstructed frame A is calculated for each division method and/or encoding parameter, the one with the highest similarity is selected as the finite-iteration local optimum, and the encoded data corresponding to the reconstructed frame B with the highest similarity is used as the second encoded data here.
  • in another example, the second encoded data here is the encoded data corresponding to the reconstructed frame B; the encoded data corresponding to the reconstructed frame B refers to the encoded data obtained by using a certain division method and/or certain encoding parameters, and the reconstructed frame B may be a reconstructed image of that encoded data.
  • the division method and the corresponding quantization result may be discarded.
  • S1201 may be repeatedly performed to start encoding the image of the K+1th frame.
  • in the embodiment of the present application, the encoding of the second code stream is adjusted based on the encoding result of the first code stream including the Kth frame image, so that the quality of the reconstructed frame of the first code stream is the same as or equivalent to that of the corresponding reconstructed frame of the second code stream; in this way, low-latency access to video content is satisfied, the decoding quality of the accessed video content is improved, blocking effects are reduced, and part of the artifact effects are eliminated.
  • the quality of the first code stream and the second code stream is controlled to be consistent at the frame level.
  • the video image processing method of the above embodiments controls the encoding of the current image of the second code stream according to the reconstructed image of the current image (for example, the current frame or the current block) of the first code stream, so that the quality of the reconstructed image of the first code stream is the same as or equivalent to that of the corresponding reconstructed image of the second code stream.
  • the embodiment of the present application also provides the video image processing method of the following embodiment, which controls the encoding of a second image to be encoded in the second code stream according to at least one first image to be encoded in the first code stream, where the second image to be encoded is a video image preceding the at least one first image to be encoded, so that the quality of the reconstructed image of the first code stream is the same as or equivalent to that of the corresponding reconstructed image of the second code stream.
  • FIG. 13 is a schematic flowchart of a video image processing method provided by an embodiment of the present application. Part of the method in the embodiment of the present application may be executed by an encoding device.
  • the encoding apparatus may be applied to the source device 12 in the above embodiments, for example, the server 801 in the embodiment shown in FIG. 8 .
  • a series of steps or operations involved in the embodiments of the present application may be executed in various orders and/or simultaneously, and the execution order is not limited by the sequence numbers of the steps shown in FIG. 13 .
  • the method shown in Figure 13 may include the following implementation steps:
• S1301. Acquire at least one first image to be encoded and a second image to be encoded, where the second image to be encoded is a video image preceding the at least one first image to be encoded.
  • the second image to be encoded and the one or more first images to be encoded in this embodiment of the present application may be video images captured by a camera or other acquisition devices, or decoded video images.
  • the decoded video image may be an image obtained by decoding the compressed video image.
  • S1302. Perform first encoding on at least one first image to be encoded respectively, so as to generate a first code stream.
  • the first encoding may include one or more processing procedures such as prediction, transformation, quantization, and entropy encoding. For example, at least one first image to be encoded may be predicted, transformed, and quantized to generate one or more first encoded data, and then one or more first encoded data may be entropy encoded to generate one or more A first code stream of the first coded data.
  • the prediction mode of the first encoding may be inter-frame prediction.
  • the prediction mode of the first encoding may also be intra-frame prediction.
• S1303. Determine at least one of the first division method or the first encoding parameter used for the second encoding of the second image to be encoded according to at least one first reconstructed image, where the at least one first reconstructed image is a reconstructed image of the first code stream or a reconstructed image generated during the first encoding process.
  • the first code stream may be decoded to obtain one or more first reconstructed images.
  • one or more first encoded data may be subjected to inverse quantization, inverse transformation, and other processing to obtain one or more first reconstructed images.
  • the first division mode and/or the first encoding parameter used for performing the second encoding on the second image to be encoded may be determined according to the one or more first reconstructed images.
• The at least one first image to be encoded may be at least one first source video image; correspondingly, the at least one first reconstructed image here is at least one first reconstructed frame, obtained by decoding the first code stream corresponding to each of the at least one first source video image, or being the reconstructed frame of the first encoded data corresponding to each of the at least one first source video image in the first encoding process.
• S1304. Perform second encoding in the full intra-frame prediction mode on the second image to be encoded according to at least one of the first division method or the first encoding parameter, so as to generate a second code stream.
  • the second code stream may include second coded data.
  • the prediction mode of the second encoding may be intra prediction.
• The second encoding is performed on the second image to be encoded, so as to generate a second code stream including a frame in the full intra-frame prediction mode.
• The difference between the at least one first reconstructed image and at least one second reconstructed image is smaller than a difference threshold, or the similarity between the at least one first reconstructed image and the at least one second reconstructed image is higher than a similarity threshold, where the at least one second reconstructed image is obtained by decoding the above-mentioned first code stream using a third reconstructed image as a reference image.
  • the difference threshold or the similarity threshold can be reasonably set according to requirements.
  • the third reconstructed image is a reconstructed image in the second code stream or in the second encoding process.
  • the number of at least one first reconstructed image is the same as the number of at least one second reconstructed image.
  • first encoding is performed on an image to be encoded to generate a first code stream
  • second encoding is performed on a second image to be encoded to generate a second code stream.
  • the difference between the first reconstructed image and the second reconstructed image is smaller than the difference threshold or the similarity between the first reconstructed image and the second reconstructed image is higher than the similarity threshold.
  • the second reconstructed image is obtained by decoding the first code stream by using the third reconstructed image as a reference image.
  • first encoding is performed on multiple images to be encoded to generate a first code stream
  • second encoding is performed on a second image to be encoded according to the multiple first reconstructed images to generate a second code stream.
  • the difference between the plurality of first reconstructed images and the plurality of second reconstructed images is smaller than a difference threshold or the similarity between the plurality of first reconstructed images and the plurality of second reconstructed images is higher than a similarity threshold.
  • the multiple second reconstructed images are obtained by decoding the first code stream by using the third reconstructed image as a reference image.
  • the difference between the multiple first reconstructed images and the multiple second reconstructed images may be a weighted sum of differences between each of the multiple first reconstructed images and the corresponding second reconstructed images.
  • the similarity between the multiple first reconstructed images and the multiple second reconstructed images may be a weighted sum of the similarities between each of the multiple first reconstructed images and the corresponding second reconstructed images.
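• Stated compactly (with notation introduced here only for illustration): if the m-th first reconstructed image is $A_m$, the corresponding second reconstructed image is $C_m$, and $w_m$ is its weighting coefficient, the two preceding points amount to

$$\operatorname{sim}\big(\{A_m\},\{C_m\}\big)=\sum_{m=1}^{M} w_m \operatorname{sim}(A_m, C_m), \qquad \operatorname{diff}\big(\{A_m\},\{C_m\}\big)=\sum_{m=1}^{M} w_m \operatorname{diff}(A_m, C_m).$$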
  • the above-mentioned at least one first image to be encoded is a first image to be encoded, and correspondingly, the at least one first reconstructed image is a first reconstructed image.
• A specific implementable manner of the above-mentioned S1303 may be: according to the first reconstructed image, select a second division mode from multiple second division modes as the first division mode; and/or select a second encoding parameter from multiple second encoding parameters as the first encoding parameter.
  • the encoding device may perform multiple second encodings on the second image to be encoded according to multiple second division modes and/or multiple second encoding parameters respectively, so as to generate multiple third code streams.
  • Each of the plurality of third code streams may include a third coded data.
• The encoding device may use the plurality of fifth reconstructed images as reference images to decode the first code stream to obtain a plurality of fourth reconstructed images, compare the similarity between each of the plurality of fourth reconstructed images and the first reconstructed image, and select the one with the highest similarity as the second reconstructed image.
  • the similarity between the first reconstructed image and the second reconstructed image is the highest among the similarities between the first reconstructed image and the plurality of fourth reconstructed images.
• The third code stream corresponding to the fourth reconstructed image with the highest similarity is used as the second code stream, or the third coded data corresponding to the fourth reconstructed image with the highest similarity is used as the second coded data to generate the second code stream including the second coded data.
  • the plurality of fifth reconstructed images are reconstructed images of a plurality of third code streams, or reconstructed images in the above-mentioned multiple times of the second encoding process.
  • the above-mentioned at least one first image to be encoded is a plurality of first images to be encoded, and correspondingly, the at least one first reconstructed image is a plurality of first reconstructed images.
• A specific implementable manner of the above S1303 may be: according to the multiple first reconstructed images, select a second division mode from multiple second division modes as the first division mode; and/or select a second encoding parameter from multiple second encoding parameters as the first encoding parameter.
• The encoding device may perform x second encodings on the second image to be encoded according to multiple second division modes and/or multiple second encoding parameters respectively, so as to generate x third code streams.
  • Each of the x third code streams may include a piece of third coded data.
• The encoding device may respectively use the x fifth reconstructed images as reference images to decode the first code stream, so as to obtain x × m fourth reconstructed images. This can be understood as x groups of fourth reconstructed images, where each group includes m fourth reconstructed images.
• One fifth reconstructed image among the x fifth reconstructed images is used as a reference image to decode the first code stream, and m fourth reconstructed images, that is, one group of fourth reconstructed images, can be obtained.
  • a group with the highest similarity is selected as the second reconstructed images corresponding to the m first images to be encoded.
• The third code stream corresponding to the m fourth reconstructed images with the highest similarity is used as the second code stream, or the third coded data corresponding to the m fourth reconstructed images with the highest similarity is used as the second coded data, so as to generate the second code stream including the second coded data.
• The third code stream corresponding to the m fourth reconstructed images with the highest similarity refers to the third code stream whose fifth reconstructed image, used as a reference image to decode the first code stream, yields the m fourth reconstructed images with the highest similarity.
• The similarity between the second reconstructed images corresponding to the m first images to be encoded and the m first reconstructed images is the weighted sum of the similarities between the second reconstructed image corresponding to each of the m first images to be encoded and the corresponding first reconstructed image.
  • the multiple first images to be encoded include m first images to be encoded
  • the multiple first reconstructed images include m first reconstructed images
• the m first reconstructed images are respectively A1, A2, ..., Am
  • m is any positive integer greater than 1.
• The plurality of fourth reconstructed images corresponding to the i-th first image to be encoded among the m first images to be encoded are C1i, ..., Cxi, where x is any positive integer greater than 1, and i ranges from 1 to m.
• x may represent the x-th second encoding.
• the plurality of fifth reconstructed images are B1, ..., Bx.
• C11 is obtained by decoding the first encoded data corresponding to the first first image to be encoded using B1 as a reference image; C12 is obtained by decoding the first encoded data corresponding to the second first image to be encoded using C11 as a reference image; ...; C1m is obtained by decoding the first encoded data corresponding to the m-th first image to be encoded using C1(m-1) as a reference image.
• Cx1 is obtained by decoding the first encoded data corresponding to the first first image to be encoded using Bx as a reference image; Cx2 is obtained by decoding the first encoded data corresponding to the second first image to be encoded using Cx1 as a reference image; ...; Cxm is obtained by decoding the first encoded data corresponding to the m-th first image to be encoded using Cx(m-1) as a reference image.
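• The chained decoding that produces each group of fourth reconstructed images can be sketched as follows. This is a schematic outline, assuming a hypothetical helper decode_with_ref(encoded, reference) that decodes one inter-coded frame against a given reference reconstruction.

```python
def chain_decode_group(recon_b_x, first_encoded_frames, decode_with_ref):
    """Given one fifth reconstructed image Bx, decode the m first encoded
    frames of the first code stream in a chain: Cx1 uses Bx as reference,
    Cx2 uses Cx1, ..., Cxm uses Cx(m-1)."""
    group = []
    ref = recon_b_x
    for encoded in first_encoded_frames:
        recon = decode_with_ref(encoded, ref)  # next fourth reconstructed image
        group.append(recon)
        ref = recon                            # chain: becomes the next reference
    return group  # [Cx1, Cx2, ..., Cxm]
```

• Running this once per fifth reconstructed image B1, ..., Bx yields the x groups of fourth reconstructed images described above.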
• The video image processing method in the embodiment of the present application may further include: performing first encoding on the second image to be encoded before performing first encoding on the at least one first image to be encoded, so as to generate a fourth code stream, and judging whether the prediction mode of the first encoding is intra prediction.
• If the prediction mode of the first encoding is intra-frame prediction, the fourth code stream is used as the second code stream.
• If the prediction mode of the first encoding is inter-frame prediction, S1303 and S1304 can be performed to perform second encoding on the second image to be encoded, so as to generate the second code stream including a frame in the full intra-frame prediction mode.
• If the first encoding performed on the second image to be encoded already yields a code stream including a frame in the full intra-frame prediction mode, it is not necessary to perform S1303 and S1304; the frame in the full intra-frame prediction mode is directly used as the second encoded data, so that the coding efficiency can be improved.
• The encoding device judges whether the prediction mode of the first encoding is intra prediction. As shown in FIG. 7, the prediction mode of the first encoding at this time is inter-frame prediction. Afterwards, the encoding apparatus may perform first encoding on the first image to be encoded by executing S1301 to S1304, so as to generate a long GOP code stream including the frame numbered 4 (frame #4), and perform second encoding on the second image to be encoded according to the reconstructed frame of the frame numbered 4 (frame #4) of the long GOP code stream, so as to generate a random access code stream including the frame numbered 3 (frame #3). The prediction mode of the frame numbered 3 (frame #3) in the random access code stream is intra prediction.
• When performing first encoding on the at least one first image to be encoded yields a first code stream including a frame in the full intra-frame prediction mode, that frame may not be used as the second coded data; that is, the second code stream may not include the frame in the full intra-frame prediction mode, which can be reasonably set according to the video transmission requirement.
  • the foregoing second coded data may also be referred to as a random access frame.
  • the server may store the first code stream and the second code stream.
  • the first code stream and the second code stream are sent to the client.
  • first encoding is performed on at least one first image to be encoded respectively to generate a first code stream
• at least one of the first division method or the first encoding parameter used to perform second encoding on the second image to be encoded is determined according to at least one first reconstructed image
• The encoding of the second code stream is adjusted based on the encoding result of the first code stream, so as to achieve the same or equivalent quality of the reconstructed frame of the first code stream and the corresponding reconstructed frame of the second code stream; thus, on the basis of satisfying low-latency access to video content, the decoding quality of the accessed video content is improved, blocking effects are reduced, and part of the artifact effect is eliminated.
  • the method for processing a video image according to the embodiment of the present application will be explained below by using the at least one first image to be encoded as a source video image.
  • FIG. 14 is a schematic flowchart of a video image processing method provided by an embodiment of the present application. Part of the method in the embodiment of the present application may be executed by an encoding device.
  • the encoding device can be applied to the source device 12 in the above embodiment, for example, the server 801 in the embodiment shown in FIG. 8 .
• It is taken as an example that the at least one first image to be coded is the K+1th frame image and the second image to be coded is the Kth frame image.
  • the encoding device may adjust the second encoding on the Kth frame image according to the encoding result of the K+1th frame image, so as to generate the first code stream and the second code stream with consistent quality.
  • the method shown in Figure 14 may include the following implementation steps:
  • the encoding device may use information such as the prediction mode P, the division method D, and the quantization parameter QP to perform the first encoding on the image of the K+1th frame, so as to generate the first code stream.
  • the first code stream may include first encoded data of the K+1th frame image.
  • a reconstructed frame A of the K+1th frame image may also be generated.
  • information such as the division mode D and the quantization parameter QP involved in the above-mentioned first encoding may also be used as input information.
• The Kth frame image is divided in a division manner, and a second encoding process comprising a series of processing such as intra prediction, transformation, quantization, inverse quantization, and inverse transformation is performed on all divided image blocks, so as to obtain the reconstructed frame B of the second code stream.
• The coding information and/or coding parameters used in the series of second coding processes such as partitioning, intra-frame prediction, transformation, and quantization when S1402 is executed for the first time on the Kth frame image (for example, partitioning method, QP, code rate, etc.) may be randomly initialized coding information and/or coding parameters, or may be the coding information and/or coding parameters used by the I frame preceding the Kth frame image. For example, for the QP, the average QP of one or more I frames closest to the Kth frame image may be used.
  • the reconstructed frame B may consist of one or more reconstructed blocks of coded image blocks.
• If the prediction mode P of the Kth frame image of the first code stream is full intra-frame prediction, the second encoded data of the Kth frame image of the second code stream directly uses the first encoded data of the Kth frame image of the first code stream.
• If the prediction mode P of the Kth frame image of the first code stream is not full intra-frame prediction (for example, the Kth frame image of the first code stream is a P frame, or there is a P block), the Kth frame image is subjected to the second encoding in the full intra-frame prediction mode.
• The similarity cost function value f(division method, QP) of the reconstructed frame A and the reconstructed frame C can be calculated according to the following formula (7), reconstructed here from the symbol definitions below:

$$ f(\text{division method}, QP) = \sum_{i=1}^{I} \left| C(\text{division method}, QP, i) - A(i) \right|^{T} \tag{7} $$
  • i represents the index of the pixel in the reconstructed frame
  • I represents the total number of pixels in the image of the reconstructed frame
• C(division method, QP, i) represents the reconstructed pixel value at the i-th pixel position of the reconstructed frame C obtained using the quantization parameter QP under a division method; A(i) represents the reconstructed pixel value at the i-th pixel position of the reconstructed frame A.
  • T can be 1 or 2.
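• As an illustration, formula (7) can be computed as below, with frames modeled as flat lists of pixel values; this is a sketch of the cost itself, not of any particular encoder.

```python
def similarity_cost(recon_c, recon_a, t=2):
    """f(division method, QP): sum over all pixel positions i of
    |C(division method, QP, i) - A(i)|^T, with T = 1 or T = 2."""
    assert len(recon_c) == len(recon_a)
    return sum(abs(c - a) ** t for c, a in zip(recon_c, recon_a))
```

• With t=1 this reduces to a SAD-style measure and with t=2 to an SSD-style measure, matching the two values of T named above.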
• This embodiment of the present application may have multiple division modes and/or encoding parameters. One division mode and/or encoding parameter may be selected among them, and S1402 repeatedly executed to traverse the multiple division modes and/or encoding parameters, encode the Kth frame image with each, and calculate the similarity cost function value of the reconstructed frame C and the reconstructed frame A for each division mode and/or encoding parameter.
• Taking the encoding parameter QP as an example: within a certain interval (such as 0 to 51), select a QP with a certain step size (such as 1 or 2), and repeatedly execute S1402 until the finite set of QPs is enumerated.
  • the second code stream may include second coded data of the Kth frame image.
  • the second encoded data here is the encoded data corresponding to the reconstructed frame C that is locally optimal in a finite iteration.
• The finite-iteration local optimum specifically means that all the division modes and/or encoding parameters have been used to encode the Kth frame image, the similarity between the reconstructed frame C and the reconstructed frame A is calculated for each division mode and/or encoding parameter, the one with the highest similarity is selected as the finite-iteration local optimum, and the encoded data corresponding to the reconstructed frame C with the highest similarity is used as the second encoded data here.
  • the second encoded data here is the encoded data corresponding to the reconstructed frame C.
• The encoded data corresponding to reconstructed frame C refers to the encoded data obtained by performing second encoding on the Kth frame image with a certain division mode and/or encoding parameter. Decoding that encoded data yields the reconstructed frame B; the reconstructed frame B is then used as a reference frame to decode the first coded data, obtaining the reconstructed frame C.
  • the division method and the corresponding quantization result may be discarded.
  • S1401 may be repeatedly executed to start encoding the image of the K+2th frame.
• The second encoding of the Kth frame image is adjusted so that the quality of the reconstructed frame of the first code stream and that of the corresponding reconstructed frame of the second code stream are the same or equivalent; thus, on the basis of satisfying low-latency access to video content, the decoding quality of the accessed video content is improved, blocking effects are reduced, and part of the artifact effect is eliminated.
  • the encoding mode of the second code stream is adjusted, thereby reducing the inconsistency effect of encoding and decoding, which is beneficial to eliminating the blocking effect.
  • the video image processing method in the embodiment of the present application will be explained below by using the at least one first image to be encoded as a plurality of source video images.
  • FIG. 15 is a schematic flowchart of a video image processing method provided by an embodiment of the present application. Part of the method in the embodiment of the present application may be executed by an encoding device.
  • the encoding apparatus may be applied to the source device 12 in the above embodiments, for example, the server 801 in the embodiment shown in FIG. 8 .
• It is taken as an example that the at least one first image to be encoded includes the K+1th frame image, the K+2th frame image, ..., and the K+mth frame image, and the second image to be encoded is the Kth frame image.
• The coding device can adjust the second coding of the Kth frame image according to the coding results of the K+1th frame image, the K+2th frame image, ..., and the K+mth frame image, so as to generate the first code stream and the second code stream with consistent quality.
  • the method shown in Figure 15 may include the following implementation steps:
  • the encoding device may use information such as the prediction mode P, the division method D, and the quantization parameter QP to perform first encoding on the K+1th frame image, the K+2th frame image, ..., and the K+mth frame image to generate A first code stream including the first coded data of the K+1th frame image, the K+2th frame image, ..., and the K+mth frame image.
• The reconstructed frame A1 of the K+1th frame image, the reconstructed frame A2 of the K+2th frame image, ..., and the reconstructed frame Am of the K+mth frame image may also be generated.
  • m is a positive integer greater than or equal to 2.
• The K+1th frame image, the K+2th frame image, ..., and the K+mth frame image, together with the reconstructed frame A1 of the K+1th frame image, the reconstructed frame A2 of the K+2th frame image, ..., and the reconstructed frame Am of the K+mth frame image, are used as input information, and encoding is started to generate the second code stream.
  • information such as the division mode D and the quantization parameter QP involved in the above-mentioned first encoding may also be used as input information.
• The Kth frame image is divided in a division manner, and a second encoding process comprising a series of processing such as intra prediction, transformation, quantization, inverse quantization, and inverse transformation is performed on all divided sub-blocks, so as to obtain the reconstructed frame B of the second code stream.
• The coding information and/or coding parameters used in the series of second coding processes such as division, intra-frame prediction, transformation, and quantization when S1502 is executed for the first time on the Kth frame image (for example, division method, QP, code rate, etc.) may be randomly initialized coding information and/or coding parameters, or may be the coding information and/or coding parameters used by the I frame preceding the Kth frame image. For example, for the QP, the average QP of one or more I frames closest to the Kth frame image may be used.
  • the reconstructed frame B may consist of one or more reconstructed blocks of coded image blocks.
• If the prediction mode P of the Kth frame image of the first code stream is full intra-frame prediction, the second encoded data of the Kth frame image of the second code stream directly uses the first encoded data of the Kth frame image of the first code stream.
• If the prediction mode P of the Kth frame image of the first code stream is not full intra-frame prediction (for example, the Kth frame image of the first code stream is a P frame, or there is a P block), the Kth frame image is subjected to the second encoding in the full intra-frame prediction mode.
• The reconstructed frame B is used as a reference frame to decode the K+1th frame of the first code stream, obtaining another reconstructed frame C1 of the K+1th frame; the reconstructed frame C1 is used as a reference frame to decode the K+2th frame of the first code stream, obtaining another reconstructed frame C2 of the K+2th frame; and so on, until the reconstructed frame C(m-1) is used as a reference frame to decode the K+mth frame of the first code stream, obtaining another reconstructed frame Cm of the K+mth frame.
• The weighted similarity cost function value f(division method, QP) of the reconstructed frames A1, ..., Am and the reconstructed frames C1, ..., Cm can be calculated as follows, reconstructed here from the symbol definitions below:

$$ f(\text{division method}, QP) = \sum_{m} w_m \sum_{i=1}^{N} \left| C_m(\text{division method}, QP, i) - A_m(i) \right|^{T} $$

• i represents the index of the pixel in the reconstructed frame, and N represents the total number of pixels in the image of the reconstructed frame.
• Cm(division method, QP, i) represents the reconstructed pixel value at the i-th pixel position of the reconstructed frame Cm, obtained by decoding the first code stream using the reconstructed frame B (generated with the division method and quantization parameter QP) as the reference frame.
• Am(i) represents the reconstructed pixel value at the i-th pixel position of the m-th reconstructed frame generated by the first encoding.
• T can be 1 or 2, and wm represents the weighting coefficient of the similarity of the m-th reconstructed frame.
• The similarity is evaluated between the reconstructed frame A1 and the reconstructed frame C1, between the reconstructed frame A2 and the reconstructed frame C2, ..., and between the reconstructed frame Am and the reconstructed frame Cm.
  • other assessment methods can also be used, including but not limited to MAD, SAD, SSD, MSD, SATD, etc.
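• For illustration, the weighted multi-frame cost can be sketched as follows, with each reconstructed frame modeled as a flat pixel list; the exponent t plays the role of T, and swapping the inner sum for MAD, SATD, etc. gives the alternative assessment methods just mentioned.

```python
def weighted_similarity_cost(recon_c_frames, recon_a_frames, weights, t=2):
    """Weighted cost over m frame pairs: sum over m of
    w_m * sum_i |C_m(i) - A_m(i)|^T."""
    total = 0
    for w, c_frame, a_frame in zip(weights, recon_c_frames, recon_a_frames):
        total += w * sum(abs(c - a) ** t for c, a in zip(c_frame, a_frame))
    return total
```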
• This embodiment of the present application may have multiple division modes and/or encoding parameters. One division mode and/or encoding parameter may be selected among them, and S1502 repeatedly executed to traverse the multiple division modes and/or encoding parameters, perform second encoding on the Kth frame image with each, and calculate the similarity between the reconstructed frames C1, C2, ..., Cm and the reconstructed frames A1, A2, ..., Am for each division mode and/or encoding parameter.
• Taking the encoding parameter QP as an example: within a certain interval (such as 0 to 51), select a QP with a certain step size (such as 1 or 2), and repeatedly execute S1502 until the finite set of QPs is enumerated.
  • the second code stream may include second coded data of the Kth frame image.
• The second encoded data here is the encoded data corresponding to the reconstructed frames C1, C2, ..., Cm that are locally optimal in a finite iteration.
• The finite-iteration local optimum specifically means that all division modes and/or encoding parameters have been used to encode the Kth frame image, the similarity between the reconstructed frames C1, C2, ..., Cm and the reconstructed frames A1, A2, ..., Am is calculated for each division mode and/or encoding parameter, the one with the highest similarity is selected as the finite-iteration local optimum, and the encoded data corresponding to the reconstructed frames C1, ..., Cm with the highest similarity is used as the second encoded data here.
• Decoding the encoded data yields the reconstructed frame B. Taking the reconstructed frame B as the reference frame and decoding the K+1th frame of the first code stream, the reconstructed frame C1 can be obtained; taking the reconstructed frame C1 as the reference frame and decoding the K+2th frame of the first code stream, the reconstructed frame C2 can be obtained; and so on, to obtain the reconstructed frame Cm.
  • the division method and the corresponding quantization result may be discarded.
• The second encoding of the Kth frame image is adjusted so that the quality of the reconstructed frame of the Kth frame image of the first code stream and that of the reconstructed frame of the Kth frame image of the second code stream are the same or equivalent; thus, on the basis of satisfying low-latency access to video content, the decoding quality of the accessed video content is improved, blocking artifacts are reduced, and part of the artifact effect is eliminated.
  • the encoding mode of the second code stream is adjusted, thereby reducing the inconsistency effect of encoding and decoding, which is beneficial to eliminating the blocking effect.
  • the embodiment of the present application also provides the following embodiments to specifically explain several other possible implementation manners of the above-mentioned embodiment shown in FIG. 9 .
  • the image to be encoded or the first reconstructed image is subjected to the second encoding in the full intra-frame prediction mode to generate the second code stream.
  • the encoding information of the first encoding may include one or more items of the division mode of the first encoding, the quantization parameter of the first encoding, and the encoding distortion information of the first encoding.
• Performing the second encoding in the full intra-frame prediction mode on the image to be encoded or the first reconstructed image may include one or more of the following: performing the second encoding in the full intra-frame prediction mode on the image to be encoded or the first reconstructed image using the same division method as the first encoding; or performing the second encoding in the full intra-frame prediction mode on the image to be encoded or the first reconstructed image using the same quantization parameter as the first encoding; or determining the quantization parameter of the second encoding according to the encoding distortion information of the first encoding, and performing the second encoding in the full intra-frame prediction mode on the image to be encoded or the first reconstructed image according to the quantization parameter of the second encoding.
• The division manner of the first encoding may include the TU division manner, PU division manner, or CU division manner of the first encoding.
  • the second encoding selects the same TU division method as the first encoding.
• FIG. 16 is a schematic diagram of a TU division manner; the first encoding and the second encoding may both adopt the TU division manner shown in FIG. 16.
  • the boundary consistency of distortion can be effectively guaranteed, that is, distortion exists at the same boundary, which is beneficial to keep the quality of the first code stream and the second code stream equal or identical.
  • the first coded PU or CU partition method may also be transmitted.
  • the second encoding is controlled based on the PU or CU partitioning manner of the first encoding.
  • the PU division method of the first encoding is used to quickly determine the prediction direction of the PU corresponding to the second encoding. For example, if the PU of a coding block of the first coding is intra, the PU of the corresponding coding block of the second coding can be consistent with the first coding; or, the PU of the second coding does not cross the PU boundary of the corresponding position of the first coding.
  • the CU division method of the first encoding may serve as a reference for pre-dividing CUs in the second encoding.
  • the second coded CU is only divided within the range of the first coded CU to ensure that the second coded CU does not cross the boundary of the first coded CU.
• The image content input to the first encoding and the second encoding is similar or identical; for example, the input of the first encoding is the image to be encoded, the input of the second encoding is the image to be encoded and the first reconstructed image, and the first reconstructed image is the reconstructed image of the image to be encoded obtained by the first encoding, so the temporal-spatial complexity of the input video of the first encoding and the second encoding is similar or the same.
  • the second encoding may utilize quantization parameter distribution information of the first encoding.
  • the quantization parameter distribution information combines the differences in human eyes' sensitivity to different time and space complexities to design the quantization parameters of each coding block.
  • a quantization parameter offset may be added to the quantization parameter of each coding block, and qp_offset represents the quantization parameter offset.
• FIG. 17 is a schematic diagram of the quantization parameter of the first encoding and the quantization parameter of the second encoding provided by the embodiment of the present application. Taking the example shown in FIG. 17, the quantization parameter of the second encoding can be obtained based on the quantization parameter of the first encoding and qp_offset (−3 as shown in the figure).
• That is, subtracting 3 from the QP of each coding block in the first encoding yields the QP of the corresponding coding block in the second encoding.
  • the QP of the coded block in the first row and the first column of the first code is 32.
  • the QP of the coded block in the first row and the first column of the second code is 29.
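• A minimal sketch of this QP transfer, assuming the first encoding's QPs are available as a 2-D per-block map (the function name and clamping range are illustrative, not part of the embodiment):

```python
def second_encoding_qp_map(first_qp_map, qp_offset=-3, qp_min=0, qp_max=51):
    """Derive the per-block QP map of the second encoding by adding a
    (typically negative) qp_offset to each block QP of the first encoding,
    clamped to the valid QP range."""
    return [[min(qp_max, max(qp_min, qp + qp_offset)) for qp in row]
            for row in first_qp_map]

# Example matching FIG. 17: a first-encoding block QP of 32 with
# qp_offset = -3 yields a second-encoding block QP of 29.
assert second_encoding_qp_map([[32]])[0][0] == 29
```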
• The sizes of the quantization parameter transfer units can be equal. For example, the size of each small square (that is, a quantization parameter transfer unit) is 16x16 or 64x64 pixels; that is, all positions within each 16x16 or 64x64 pixel unit share the same QP value.
  • FIG. 18 is a schematic diagram of quantization parameters transmitted from the first code to the second code according to the embodiment of the present application. As shown in FIG. 18 , the sizes of different small squares (that is, quantization parameter transfer units) may be different.
• The quantization parameter actually used may be obtained from three values: the quantization parameter of the parameter set (such as the syntax element init_qp_minus26 in the PPS), the slice-level quantization parameter offset (such as slice_qp_delta in the slice header), and the quantization parameter offset of the current coding block (such as mb_delta_quant). Therefore, the quantization parameters passed from the first encoding to the second encoding include, but are not limited to, one or any combination of the quantization parameter in the parameter set, the slice-level quantization parameter offset, and the quantization parameter offset of the coding block, or the final quantization parameter used in the first encoding quantization pass.
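• As a sketch of how these three values combine (following the H.264/HEVC-style derivation that the syntax element names suggest; the helper names are illustrative):

```python
def slice_qp(init_qp_minus26, slice_qp_delta):
    # Slice-level QP, e.g. HEVC: SliceQpY = 26 + init_qp_minus26 + slice_qp_delta
    return 26 + init_qp_minus26 + slice_qp_delta

def block_qp(init_qp_minus26, slice_qp_delta, block_qp_delta):
    # Final QP of a coding block: slice QP plus the block-level offset
    # (called mb_delta_quant in the text above).
    return slice_qp(init_qp_minus26, slice_qp_delta) + block_qp_delta
```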
• The encoding distortion threshold of the second encoding may be determined according to the encoding distortion information of the first encoding. For example, for a certain coding block of the first code stream, the coding distortion information adopts the MAD index, with a value of 4. Then, when making an encoding decision in the second encoding, once the MAD index between the predicted frame and the frame to be encoded, or between the reconstructed frame and the frame to be encoded, is less than 4, the decision process exits early and the current encoding strategy is taken as the optimal encoding strategy.
  • the encoding distortion information may use one or more common indicators such as MAD, SAD, SSD, MSD, and SATD.
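• The early-exit decision described above can be sketched as follows; evaluate_distortion is a hypothetical callback returning, for a candidate encoding strategy, the chosen indicator (here MAD) against the frame to be encoded.

```python
def decide_with_early_exit(candidates, evaluate_distortion, threshold):
    """Try candidate encoding strategies in order; exit as soon as one
    falls below the distortion threshold taken from the first encoding,
    otherwise return the best candidate seen."""
    best, best_d = None, float("inf")
    for cand in candidates:
        d = evaluate_distortion(cand)
        if d < threshold:
            return cand            # early exit: good enough, stop searching
        if d < best_d:
            best, best_d = cand, d
    return best
```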
• Performing the second encoding in the full intra-frame prediction mode on the image to be encoded or the first reconstructed image according to the encoding information of the first encoding may include: determining the quantization parameter of the second encoding according to the encoding information of the first encoding and the feature information of the image to be encoded, and performing the second encoding in the full intra-frame prediction mode on the image to be encoded or the first reconstructed image according to the quantization parameter of the second encoding.
  • the feature information of the image to be encoded may include one or more of content complexity of the image to be encoded, color classification information of the image to be encoded, contrast information of the image to be encoded and content segmentation information of the image to be encoded.
• The image content input to the first encoding and the second encoding is similar or identical; for example, the input of the first encoding is the image to be encoded, the input of the second encoding is the image to be encoded and the first reconstructed image, and the first reconstructed image is the reconstructed image of the image to be encoded obtained by the first encoding, so the temporal-spatial complexity of the input video of the first encoding and the second encoding is similar or the same. The content complexity analyzed in the first encoding process can therefore be transferred to the second encoding: the second encoding does not need to repeat the calculation, and encoding parameters such as quantization coefficients are directly calculated according to the content complexity to guide the second encoding in generating the second code stream.
• The second encoding can set different quantization parameter offsets qp_offset for different regions according to the region information of the first encoding, for example, a qp_offset of −5 for complex regions and a qp_offset of −3 for simple regions. Other values are of course also possible; the embodiments of the present application do not enumerate them one by one.
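• A sketch of this region-adaptive offset, assuming a per-block region label map produced by the first encoding's analysis (the labels and offset values follow the example above; all names are illustrative):

```python
def region_adaptive_qp(first_qp_map, region_map, offsets=None):
    """Apply a region-dependent qp_offset: blocks labeled 'complex' get -5,
    blocks labeled 'simple' get -3 (the example values from the text)."""
    offsets = offsets or {"complex": -5, "simple": -3}
    return [[qp + offsets[label] for qp, label in zip(qp_row, label_row)]
            for qp_row, label_row in zip(first_qp_map, region_map)]
```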
  • the second encoding may be adjusted to set different encoding parameters for different regions according to information such as encoding distortion information, color classification, contrast information, and content segmentation of the first encoding.
  • the video image processing method of the present application can complete the encoding of the first code stream and the second code stream through any of the above-mentioned embodiments.
• The embodiment of the present application can also use the following embodiments to carry the identification information of the first code stream and the second code stream, so that the decoding device can distinguish the first code stream and the second code stream according to the identification information and decode the corresponding code stream to quickly access the video content.
  • the encoding device may encapsulate and send the first code stream.
  • the decoding device can receive, decode or display the first code stream. If random access is required at a moment, the encoding device may select the second code stream corresponding to the random access frame corresponding to the moment or the next moment for encapsulation and transmission. The encoding device may encapsulate and send the first code stream at a subsequent moment. The decoding device may first receive, decode or display the second code stream, then receive the first code stream, and decode or display the first code stream based on the reconstructed image of the second code stream.
• The code stream receiving device (for example, a decoding device) can judge according to the identification information whether the received code stream supports the single-frame random access function, and distinguish whether the received code stream is the long GOP stream or elementary stream (that is, the above-mentioned first code stream), or contains two code streams (the first code stream and the second code stream).
  • Table 1 shows that code stream identification information is added in the PPS parameter set.
• u(1) represents a 1-bit unsigned integer in the coding standard, and ue(v) represents unsigned exponential Golomb coding.
  • information is added in the PPS parameter set to identify whether the current code stream supports single-frame random access, and the current code stream type.
  • the syntax elements have the following meanings:
  • Single frame random access enabled flag (single_insert_enabled_flag): The value of 1 indicates that the code stream supports single frame random access, and the value of 0 indicates that the code stream does not support single frame random access.
  • Stream ID (stream_id): This value exists when single_insert_enabled_flag is 1.
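• For illustration, reading these two fields can be sketched as below. This assumes a bit reader already positioned at the fields inside the PPS (real PPS parsing must first decode all preceding syntax elements); only u(1) and ue(v) as defined above are used.

```python
class BitReader:
    def __init__(self, data: bytes):
        self.data, self.pos = data, 0  # pos counts bits

    def read_bit(self) -> int:
        bit = (self.data[self.pos // 8] >> (7 - self.pos % 8)) & 1
        self.pos += 1
        return bit

    def read_ue(self) -> int:
        """ue(v): unsigned exponential Golomb code."""
        zeros = 0
        while self.read_bit() == 0:
            zeros += 1
        value = 0
        for _ in range(zeros):
            value = (value << 1) | self.read_bit()
        return (1 << zeros) - 1 + value

def read_random_access_fields(r: BitReader):
    single_insert_enabled_flag = r.read_bit()                        # u(1)
    stream_id = r.read_ue() if single_insert_enabled_flag else None  # ue(v)
    return single_insert_enabled_flag, stream_id
```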
  • FIG. 19 is a schematic diagram of the arrangement of the first code stream and the second code stream when the stream identifier (stream_id) is 2 provided by the embodiment of the present application.
  • the first code stream may include long GOP frame 1 and long GOP frame 2
  • the second code stream may include random access frame 1 and random access frame 2
• long GOP frame 1 and random access frame 1 are the same video frame content, and long GOP frame 2 and random access frame 2 are the same video frame content
• when the stream identifier (stream_id) in the PPS is 2, long GOP frame 1 precedes random access frame 1, and long GOP frame 2 precedes random access frame 2.
  • FIG. 20 is a schematic diagram of the arrangement of the first code stream and the second code stream when the stream identifier (stream_id) is 3 provided by the embodiment of the present application.
  • the first code stream may include long GOP frame 1 and long GOP frame 2
  • the second code stream may include random access frame 1 and random access frame 2
• long GOP frame 1 and random access frame 1 are the same video frame content, and long GOP frame 2 and random access frame 2 are the same video frame content
• when the stream identifier (stream_id) in the PPS is 3, as shown in FIG. 20, long GOP frame 1 follows random access frame 1, and long GOP frame 2 follows random access frame 2.
  • information such as a single frame random access enabling flag (single_insert_enabled_flag) and a stream identifier (stream_id) may also be carried in the VPS or SPS.
• The single-frame random access enable flag (single_insert_enabled_flag) and the stream identifier (stream_id) can be combined into a single syntax element: when it is 0, it indicates a general code stream that does not support single-frame random access; when it is 1, a long GOP stream; when it is 2, a random access stream.
  • the code stream receiving device (for example, a decoding device) can judge whether the received code stream supports the single-frame random access function according to the header information of each slice, and distinguish whether the received code stream belongs to a long GOP stream or an elementary stream.
  • a carrying manner of the slice_segment_header is shown in Table 2, and Table 2 shows that code stream identification information is added in the slice_segment_header.
• slice_support_single_insert_enable: a value of 1 indicates that the code stream supports single-frame random access, and a value of 0 indicates that the code stream does not support single-frame random access.
• stream_id: this value exists when slice_support_single_insert_enable is 1.
• slice_support_single_insert_enable and stream_id can be combined into a single syntax element: when it is 0, it indicates a general code stream that does not support single-frame random access; when it is 1, a long GOP stream; when it is 2, a random access stream.
• FIG. 21 shows three ways of combining the first code stream and the second code stream into one code stream according to the embodiment of the present application. (a) in FIG. 21 shows the data arrangement of the two code streams in one code stream when the parameter sets of the long GOP stream and the random access stream are the same.
  • VPS1, SPS1, and PPS1 in (b) in Figure 21 indicate that the parameter set belongs to the long GOP code stream
  • VPS2, SPS2, and PPS2 indicate that the parameter set belongs to the random access stream
  • each type of stream data can be directly decoded by placing the parameter set in front of it.
  • the parameter sets of the two code streams are arranged together, which is convenient for sending the parameter sets in advance in some scenarios (such as DASH stream transmission).
• In the slice_segment_header data of the long GOP stream and the random access stream, it is necessary to set different values for slice_pic_parameter_set_id.
  • the corresponding PPS parameter set can be found according to the slice_pic_parameter_set_id.
  • the PPS parameter set points to the corresponding SPS through pps_seq_parameter_set_id, and the corresponding parameter set is found accordingly.
  • a code stream formed by combining the two code streams may only contain one or more parameter sets of VPS, SPS or PPS in the long GOP stream or the random access stream.
• The receiving end or the sending end performs encapsulation, sending, receiving, decoding, or display. If the scenario requiring random access does not occur, the code stream data of the long GOP frame is selected for encapsulation, transmission, reception, decoding, or display; if the scenario requiring random access occurs, the code stream data corresponding to the random access frame is selected for encapsulation, transmission, reception, decoding, or display.
• The long GOP stream data (including parameter sets) and the random access stream (including parameter sets) may not carry an identifier distinguishing the data type of the long GOP stream or the random access stream. If no scenario requiring random access occurs, the code stream data of the long GOP frame can be selected for encapsulation, transmission, reception, decoding, or display. If a scenario requiring random access occurs, it is first judged whether the long GOP frame is a full intra-frame prediction block: if so, the code stream data of the long GOP frame is selected for encapsulation, sending, receiving, decoding, or display; otherwise, the code stream data corresponding to the random access frame is selected for encapsulation, transmission, reception, decoding, or display.
  • Table 3 shows the general SEI information syntax.
  • Table 4 shows the SEI message syntax of splicing sub-code streams.
• single_insert_enabled_flag: a value of 1 indicates that the code stream supports single-frame random access, and a value of 0 indicates that the code stream does not support single-frame random access.
• stream_id: this value exists when single_insert_enabled_flag is 1.
  • each sub-stream can be independently encapsulated in a track, such as sub-picture track.
  • the grammatical description information of whether the sub-code stream can be spliced can be added to the sub-picture track, and the sample is as follows:
• track_class: when it is 0, it indicates a general stream that does not support single-frame random access; when it is 1, a long GOP stream; when it is 2, a random access stream.
• The code stream type description information is added to the file format specified in the ISO base media file format (ISOBMFF).
• For example, a sample entry type 'srand' is added to the video track.
• When the sample entry name is 'normal', it means that the current video track is a general code stream that does not support single-frame random access; when the sample entry name is 'base', it means a long GOP stream; when the sample entry name is 'insert', it means a random access stream.
  • the code stream identification information may be carried in the file description information, for example, the code stream identification information is carried in a media presentation description (media presentation description, MPD) file in the DASH protocol.
  • the new EssentialProperty attribute srand@value is specified.
  • the srand@value attribute is described in Table 5.
  • Table 5 shows the srand@value attribute description in "urn:mpeg:dash:srand:2014"
• file_class: when it is 0, it indicates a general code stream that does not support single-frame random access; when it is 1, a long GOP stream; when it is 2, a random access stream.
  • the code stream can be sent in a custom TLV (type, length, value) message mode.
  • the code stream identification information can be carried in the type.
  • a TLV message may include a type (type) field, a length (length) field, and a payload (payload) field.
• Type (8 bits): data type; Length (32 bits): payload length; Payload (variable length): stream data.
  • Table 6 shows that different types (type) have different loads (payload).
• Type | Semantics | Payload
0x00 | General code stream that does not support single-frame random access | General stream data
0x01 | Long GOP stream | Long GOP stream data
0x02 | Random access stream | Random access stream data
other | Reserved | Code stream or other data
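• A parsing sketch for this TLV framing, assuming the length field is a 32-bit big-endian byte count and the type values follow Table 6 (the byte order is an assumption; the message format above does not specify it):

```python
import struct

TYPE_SEMANTICS = {
    0x00: "general stream (no single-frame random access)",
    0x01: "long GOP stream",
    0x02: "random access stream",
}

def parse_tlv(buf: bytes):
    """Yield (type, semantics, payload) records from a TLV byte buffer:
    1-byte type, 4-byte length, then `length` bytes of payload."""
    offset = 0
    while offset + 5 <= len(buf):
        msg_type = buf[offset]
        (length,) = struct.unpack_from(">I", buf, offset + 1)
        payload = buf[offset + 5 : offset + 5 + length]
        yield msg_type, TYPE_SEMANTICS.get(msg_type, "reserved"), payload
        offset += 5 + length
```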
• The encoding end of the embodiment of the present application can carry the identification information of the first code stream and the second code stream in the code stream, in the encapsulation layer, or in the transmission protocol layer, etc., so that the decoding end distinguishes the first code stream from the second code stream based on the identification information and correctly decodes to obtain the video content.
  • the video image processing method of the embodiment of the present application has been described in detail above with reference to the accompanying drawings, and the video image processing apparatus of the embodiment of the present application will be introduced below with reference to FIG. 22 . It should be understood that the video image processing apparatus can execute the video image processing method of the embodiment of the present application. In order to avoid unnecessary repetition, repeated descriptions are appropriately omitted when introducing the video image processing apparatus according to the embodiment of the present application below.
  • FIG. 22 is a schematic structural diagram of a video image processing device provided by an embodiment of the present application.
  • the video image processing apparatus 2200 may include: an acquisition module 2201 , a first encoding module 2202 and a second encoding module 2203 .
  • An acquisition module 2201 configured to acquire an image to be encoded.
  • the first coding module 2202 is configured to perform first coding on the image to be coded to generate a first code stream.
  • the second encoding module 2203 is configured to perform second encoding in the full intra-frame prediction mode on the image to be encoded or the first reconstructed image according to the encoding information of the first encoding, so as to generate a second code stream, and the first reconstructed image is the first code stream stream or the reconstructed image during the first encoding process.
  • the encoding information of the first encoding includes one or more items of the division mode of the first encoding, the quantization parameter of the first encoding, and the encoding distortion information of the first encoding.
• The second encoding module 2203 is configured to perform at least one of the following: performing the second encoding in the full intra-frame prediction mode on the image to be encoded or the first reconstructed image using the same division method as the first encoding; or performing the second encoding in the full intra-frame prediction mode on the image to be encoded or the first reconstructed image using the same quantization parameter as the first encoding; or determining the quantization parameter of the second encoding according to the encoding distortion information of the first encoding, and performing the second encoding in the full intra-frame prediction mode on the image to be encoded or the first reconstructed image according to the quantization parameter of the second encoding.
• The second encoding module 2203 is configured to: determine the quantization parameter of the second encoding according to the encoding information of the first encoding and the feature information of the image to be encoded; and perform the second encoding in the full intra-frame prediction mode on the image to be encoded or the first reconstructed image according to the quantization parameter of the second encoding.
  • the feature information of the image to be encoded includes one or more of the content complexity of the image to be encoded, the color classification information of the image to be encoded, the contrast information of the image to be encoded and the content segmentation information of the image to be encoded .
• The second coding module 2203 is configured to determine, according to the coding information of the first coding and the first reconstructed image, at least one of the first division method or the first encoding parameter.
  • the second encoding module 2203 is further configured to perform second encoding in full intra-frame prediction mode on the image to be encoded or the first reconstructed image according to at least one of the first division method or the first encoding parameter.
  • the interval between two adjacent frames of all intra-frame prediction modes in the first code stream is greater than the interval between two adjacent frames of all intra-frame prediction modes in the second code stream.
  • the difference between the first reconstructed image and the second reconstructed image is less than the difference threshold or the similarity between the first reconstructed image and the second reconstructed image is higher than the similarity threshold, and the second reconstructed image is the second The code stream or the reconstructed image in the second encoding process.
  • the second encoding module 2203 is configured to: determine a plurality of second division modes according to the encoding information of the first encoding and the first reconstructed image, and select one of the plurality of second division modes as the first division mode; and/or determine a plurality of second encoding parameters according to the encoding information of the first encoding and the first reconstructed image, and select one of the plurality of second encoding parameters as the first encoding parameter.
  • the similarity between the first reconstructed image and the second reconstructed image is the highest among the similarities between the first reconstructed image and a plurality of third reconstructed images; the plurality of third reconstructed images include the second reconstructed image; the plurality of third reconstructed images are reconstructed images produced when the image to be encoded or the first reconstructed image is second-encoded multiple times according to the plurality of second division modes and/or the plurality of second encoding parameters, or the plurality of third reconstructed images are reconstructed images of a plurality of third code streams, the plurality of third code streams being obtained by performing the second encoding multiple times on the image to be encoded or the first reconstructed image according to the plurality of second division modes and/or the plurality of second encoding parameters.
  • the second encoding module 2203 is further configured to: acquire the prediction mode of the first encoding; when the prediction mode of the first encoding is inter-frame prediction, perform the steps of acquiring the encoding information of the first encoding and performing, according to the encoding information of the first encoding, the second encoding in the full intra-frame prediction mode on the image to be encoded or the first reconstructed image, so as to generate the second code stream; and when the prediction mode of the first encoding is intra-frame prediction, use the first code stream as the second code stream.
  • the image to be encoded is a source video image; or, the image to be encoded is an image block obtained by dividing a source video image.
  • the video image processing apparatus 2200 may execute the method of the encoding apparatus in any one of the embodiments shown in FIG. 9 to FIG. 12, or in any one of the embodiments shown in FIG. 16 to FIG. 21.
  • the embodiment of the present application also provides another video image processing device, which adopts the same structure as the processing device shown in FIG. 22 .
  • the acquiring module is configured to acquire at least one first image to be encoded and a second image to be encoded, and the second image to be encoded is a video image preceding the at least one first image to be encoded.
  • the first encoding module is configured to respectively perform first encoding on at least one first image to be encoded to generate a first code stream.
  • the second encoding module is configured to determine, according to at least one first reconstructed image, at least one of the first division mode or the first encoding parameter used for the second encoding of the second image to be encoded, the at least one first reconstructed image being a reconstructed image of the first code stream or a reconstructed image produced during the first encoding process.
  • the second encoding module is further configured to perform second encoding on the second image to be encoded according to at least one of the first division method or the first encoding parameter, so as to generate a second code stream.
  • the interval between two adjacent full-intra-frame-prediction frames in the first code stream is greater than the interval between two adjacent full-intra-frame-prediction frames in the second code stream.
  • the number of the at least one first image to be encoded is one, the number of the at least one first reconstructed image is one, and the difference between the first reconstructed image and the second reconstructed image is less than a difference threshold, or the similarity between the first reconstructed image and the second reconstructed image is higher than a similarity threshold; the second reconstructed image is obtained by decoding the first code stream using the third reconstructed image as a reference image, and the third reconstructed image is a reconstructed image of the second code stream or a reconstructed image produced during the second encoding process.
  • the number of the at least one first image to be encoded is one, the number of the at least one first reconstructed image is one, and the second encoding module is configured to: select, according to the first reconstructed image, one of a plurality of second division modes as the first division mode; and/or select, according to the first reconstructed image, one of a plurality of second encoding parameters as the first encoding parameter.
  • the similarity between the first reconstructed image and the second reconstructed image is the highest among the similarities between the first reconstructed image and a plurality of fourth reconstructed images; the plurality of fourth reconstructed images include the second reconstructed image; the plurality of fourth reconstructed images are obtained by decoding the first code stream using a plurality of fifth reconstructed images respectively as reference images; the plurality of fifth reconstructed images are reconstructed images of a plurality of third code streams, the plurality of third code streams being obtained by performing the second encoding multiple times on the second image to be encoded according to a plurality of second division modes and/or a plurality of second encoding parameters; or the plurality of fifth reconstructed images are reconstructed images produced when the second image to be encoded is second-encoded multiple times according to the plurality of second division modes and/or the plurality of second encoding parameters.
  • the first encoding module is further configured to: perform first encoding on the second image to be encoded to generate a fourth code stream before performing first encoding on at least one first image to be encoded.
  • the second encoding module is further configured to: acquire the prediction mode of the first encoding; when the prediction mode of the first encoding is inter-frame prediction, perform the step of determining, according to the at least one first reconstructed image, at least one of the first division mode or the first encoding parameter used for the second encoding of the second image to be encoded; and when the prediction mode of the first encoding is intra-frame prediction, use the fourth code stream as the second code stream.
  • the at least one first image to be encoded is at least one first source video image, and the second image to be encoded is a second source video image.
  • the apparatus for processing video images may execute the method of the encoding apparatus in any one of the embodiments shown in FIG. 13 to FIG. 15.
  • Computer-readable media may include computer-readable storage media, which correspond to tangible media such as data storage media, or communication media including any medium that facilitates transfer of a computer program from one place to another (e.g., according to a communication protocol).
  • a computer-readable medium may generally correspond to (1) a non-transitory tangible computer-readable storage medium, or (2) a communication medium, such as a signal or carrier wave.
  • Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementation of the techniques described in this application.
  • a computer program product may include a computer readable medium.
  • such computer-readable storage media may include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer.
  • Also, any connection is properly termed a computer-readable medium. For example, if instructions are transmitted from a website, server, or other remote source over coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium.
  • Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
  • The instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general-purpose microprocessors, application-specific integrated circuits (ASICs), field-programmable logic arrays (FPGAs), or other equivalent integrated or discrete logic circuitry.
  • The techniques of the present application may be implemented in a wide variety of devices or apparatuses, including wireless handsets, an integrated circuit (IC), or a set of ICs (e.g., a chipset).
  • Various components, modules, or units are described in this application to emphasize functional aspects of means for performing the disclosed techniques, but they do not necessarily require realization by different hardware units. Indeed, as described above, the various units may be combined in a codec hardware unit in conjunction with suitable software and/or firmware, or provided by a collection of interoperative hardware units (including one or more processors as described above).

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Embodiments of the present application disclose a video image processing method and apparatus. Second encoding, used to generate a second code stream, is controlled according to encoding information of a first encoding and/or a first reconstructed image. The second code stream is encoded in full intra-frame prediction mode and serves as a random-access stream. On the basis of low-latency access to video content, this improves the decoding quality of the accessed content, reduces blocking artifacts, and eliminates some ghosting artifacts. The application belongs to the technical field of video encoding and decoding.

Description

Video image processing method and apparatus

This application claims priority to Chinese Patent Application No. 202111164100.4, entitled "Video image processing method and apparatus", filed with the China National Intellectual Property Administration on September 30, 2021, which is incorporated herein by reference in its entirety.

Technical Field

The present application relates to the technical field of video encoding and decoding, and in particular to a video image processing method and apparatus.

Background

As an efficient means of information transmission, video is widely used on the Internet, in TV broadcasting, and in various emerging media applications. With the rapid development of video codec technology, communication technology, and electronic devices, more and more application scenarios place high requirements on video playback latency, for example video conferencing, interactive entertainment, live broadcasts of sports events, and other video-on-demand or live-streaming scenarios.

In scenarios such as video conferencing, interactive entertainment, or live sports broadcasts, multiple cameras are usually arranged at different positions at the venue to shoot the same scene from different angles and obtain a group of video signals, so as to provide users with a multi-angle viewing experience. A user can select, through an appropriate form of interaction, an angle from which to watch the recorded scene. Multi-camera shooting provides users with a multi-angle, three-dimensional visual experience. More broadly, in other latency-sensitive scenarios such as Cloud VR gaming and low-latency live streaming, the decoding and playback switching latency of video content is a key metric affecting user experience, since users expect to switch between different content streams without perceptible stalling.

A commonly used content-switching approach segments the encoded video at fixed time intervals, each segment starting with an I-frame. When the content needs to be switched, decoding or playback continues up to the segment of the new content closest in time, and decoding and playback start from that segment's I-frame. With this approach, the delay from receiving the switch instruction to completing the switch is long and cannot meet the low-latency requirements of some application scenarios.
Summary

Embodiments of the present application provide a video image processing method and apparatus. Second encoding, used to generate a second code stream, is controlled according to encoding information of a first encoding; the second code stream is encoded in full intra-frame prediction mode and serves as a random-access code stream. On the basis of low-latency access to video content, this improves the decoding quality of the accessed content, reduces blocking artifacts, and eliminates some ghosting artifacts.

In a first aspect, an embodiment of the present application provides a video image processing method, which may include: acquiring an image to be encoded; performing first encoding on the image to be encoded to generate a first code stream; and performing, according to encoding information of the first encoding, second encoding in full intra-frame prediction mode on the image to be encoded or on a first reconstructed image to generate a second code stream, where the first reconstructed image is a reconstructed image of the first code stream or a reconstructed image produced during the first encoding process.

The first encoding and the second encoding are two different encoding modes: the first encoding allows inter-frame prediction, while the second encoding uses full intra-frame prediction. When images (or reconstructed images) of the same video content are encoded with these two modes, the first encoding generates the first code stream and the second encoding generates the second code stream. In some cases the encoding information of the two encodings may differ; for example, the division modes may differ, or the quantization parameters may differ. In other cases the encoding information may be the same; for example, the same division mode or the same quantization parameter may be used.

On the decoder side, the first and second code streams play different roles. When the decoder switches the displayed video content, it can first decode the frame of the second code stream at the corresponding time of the content to be displayed, and then use the frame obtained by decoding as a reference frame for subsequent frames of the first code stream. The first code stream can thus be decoded immediately, without waiting for its next I-frame, so decoding latency drops markedly. In the solution of the first aspect, performing the second encoding according to the encoding information of the first encoding makes the quality of the second code stream comparable to that of the first code stream, so that the decoding result of the second code stream can serve as a reference frame for the first code stream with a smooth transition, avoiding visible blocking artifacts or obvious ghosting when content is switched.

Therefore, in the implementation of the first aspect, second encoding in full intra-frame prediction mode is performed on the image to be encoded or the first reconstructed image according to the encoding information of the first encoding, so that the reconstructed image of the first code stream and the corresponding reconstructed image of the second code stream are of the same or comparable quality. On the basis of low-latency access to video content, this helps reduce the blocking and ghosting artifacts caused by encoder–decoder mismatch and improves the decoding quality of the accessed content.
In one possible design, the encoding information of the first encoding includes one or more of the division mode of the first encoding, the quantization parameter of the first encoding, and the encoding distortion information of the first encoding.

The division mode of the first encoding may include its TU division mode, PU division mode, or CU division mode. Taking the CU division mode of the first encoding as an example, the second encoding may use the same CU division mode as the first encoding, or the CUs of the second encoding may be constrained not to cross the CU boundaries of the first encoding.

Taking the case where the encoding information of the first encoding includes the quantization parameter of the first encoding as an example, the quantization parameter of the second encoding may be obtained by adding a quantization parameter offset to the quantization parameter of the first encoding.

In one possible design, performing, according to the encoding information of the first encoding, second encoding in full intra-frame prediction mode on the image to be encoded or the first reconstructed image includes at least one of the following: performing the second encoding in full intra-frame prediction mode on the image to be encoded or the first reconstructed image using the same division mode as the first encoding; performing the second encoding in full intra-frame prediction mode on the image to be encoded or the first reconstructed image using the same quantization parameter as the first encoding; determining the quantization parameter of the second encoding according to the quantization parameter of the first encoding and a quantization parameter offset, and performing the second encoding in full intra-frame prediction mode on the image to be encoded or the first reconstructed image according to the quantization parameter of the second encoding; or determining the quantization parameter of the second encoding according to the encoding distortion information of the first encoding, and performing the second encoding in full intra-frame prediction mode on the image to be encoded or the first reconstructed image according to the quantization parameter of the second encoding.
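As a minimal sketch of this design, the Python snippet below derives a candidate QP for the second encoding from the first encoding's QP plus an offset, optionally adjusted by a distortion measure. The offset of +2 and the distortion thresholds are hypothetical placeholders, not values specified by this application.

```python
from typing import Optional

def second_encode_qp(first_qp: int, qp_offset: int = 2,
                     first_distortion: Optional[float] = None) -> int:
    """Sketch: pick the QP for the full-intra second encoding.

    Assumptions (not mandated by the application): the +2 offset and the
    distortion thresholds below are illustrative placeholders.
    """
    qp = first_qp + qp_offset          # second-encoding QP = first QP + offset
    if first_distortion is not None:
        # Hypothetical mapping: the more the first encoding distorted the
        # image, the coarser the second encoding is allowed to quantize.
        if first_distortion < 1.0:
            qp = min(qp, first_qp)     # near-lossless first pass: match its QP
        elif first_distortion > 10.0:
            qp = first_qp + 2 * qp_offset
    return max(0, min(51, qp))         # clamp to the H.26x QP range 0..51
```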
In one possible design, performing, according to the encoding information of the first encoding, second encoding in full intra-frame prediction mode on the image to be encoded or the first reconstructed image includes: determining the quantization parameter of the second encoding according to the encoding information of the first encoding and feature information of the image to be encoded; and performing, according to the quantization parameter of the second encoding, the second encoding in full intra-frame prediction mode on the image to be encoded or the first reconstructed image.

In one possible design, the feature information of the image to be encoded includes one or more of its content complexity, color classification information, contrast information, and content segmentation information.

The feature information of the image to be encoded may be obtained by performing feature analysis on the image to be encoded.

In one possible design, performing, according to the encoding information of the first encoding, second encoding in full intra-frame prediction mode on the image to be encoded or the first reconstructed image includes: determining, according to the encoding information of the first encoding and the first reconstructed image, at least one of a first division mode or a first encoding parameter used for the second encoding of the image to be encoded or the first reconstructed image; and performing, according to at least one of the first division mode or the first encoding parameter, the second encoding in full intra-frame prediction mode on the image to be encoded or the first reconstructed image.

The interval between two adjacent full-intra-frame-prediction frames in the first code stream is greater than the interval between two adjacent full-intra-frame-prediction frames in the second code stream.

The first code stream may also be called a long-GOP code stream, and the second code stream a random-access code stream. When video content needs to be accessed, the second code stream is decoded first, and its decoding result is used as a reference frame for decoding the first code stream, enabling fast access to the video content.

Thus, in the above possible design, at least one of the first division mode or the first encoding parameter used for encoding the second code stream is determined according to the encoding information of the first encoding and the first reconstructed image, so that the reconstructed image of the first code stream and the corresponding reconstructed image of the second code stream are of the same or comparable quality. On the basis of low-latency access to video content, this helps reduce the blocking and ghosting artifacts caused by encoder–decoder mismatch and improves the decoding quality of the accessed content.

In one possible design, the difference between the first reconstructed image and a second reconstructed image is less than a difference threshold, or the similarity between the first reconstructed image and the second reconstructed image is higher than a similarity threshold; the second reconstructed image is a reconstructed image of the second code stream or a reconstructed image produced during the second encoding process.

By constraining the difference between the first and second reconstructed images to be less than the difference threshold, or their similarity to be higher than the similarity threshold, the reconstructed images of the two code streams are kept at the same or comparable quality, which, on the basis of low-latency access, helps reduce blocking and ghosting artifacts caused by encoder–decoder mismatch and improves decoding quality.

In one possible design, determining, according to the encoding information of the first encoding and the first reconstructed image, at least one of the first division mode or the first encoding parameter used for the second encoding includes: determining a plurality of second division modes according to the encoding information of the first encoding and the first reconstructed image, and selecting one of the plurality of second division modes as the first division mode; and/or determining a plurality of second encoding parameters according to the encoding information of the first encoding and the first reconstructed image, and selecting one of the plurality of second encoding parameters as the first encoding parameter.

The similarity between the first reconstructed image and the second reconstructed image is the highest among the similarities between the first reconstructed image and a plurality of third reconstructed images; the plurality of third reconstructed images include the second reconstructed image, and are reconstructed images produced when the image to be encoded or the first reconstructed image is second-encoded multiple times according to the plurality of second division modes and/or the plurality of second encoding parameters, or they are reconstructed images of a plurality of third code streams obtained by performing those multiple second encodings.

Among the plurality of third code streams obtained by performing the second encoding multiple times on the image to be encoded or the first reconstructed image according to the plurality of second division modes and/or second encoding parameters, the similarity of each third reconstructed image to the first reconstructed image is compared, the one with the highest similarity is selected as the second reconstructed image, and its corresponding third code stream is taken as the second code stream. By maximizing the similarity between the first and second reconstructed images, the blocking and ghosting artifacts caused by encoder–decoder mismatch can be reduced.

It will of course be understood that minimizing the difference between the first and second reconstructed images can likewise reduce the blocking and ghosting artifacts caused by encoder–decoder mismatch.
In one possible design, the method may further include: acquiring the prediction mode of the first encoding. When the prediction mode of the first encoding is inter-frame prediction, the steps of acquiring the encoding information of the first encoding and performing, according to it, second encoding in full intra-frame prediction mode on the image to be encoded or the first reconstructed image to generate the second code stream are executed. When the prediction mode of the first encoding is intra-frame prediction, the first code stream is used as the second code stream.

By checking whether the prediction mode of the first encoding is intra-frame prediction and, if so, directly using the first code stream as the second code stream, the efficiency of generating the first and second code streams can be improved.

In one possible design, the image to be encoded is a source video image.

With frame-level encoding of the first and second code streams, their quality is kept consistent at the frame level, which helps reduce the blocking and ghosting artifacts caused by encoder–decoder mismatch and improves the decoding quality of the accessed content.

In one possible design, the image to be encoded is an image block obtained by dividing a source video image.

With block-level encoding of the first and second code streams, synchronized block-level output of both streams can be achieved, so that the random-access frames of the second code stream used for accessing video content are produced sooner, reducing access latency.

In one possible design, the first encoding parameter may include a first quantization parameter or a first bit rate, and the second encoding parameter may include a second quantization parameter or a second bit rate.
In a second aspect, an embodiment of the present application provides a video image processing method, which may include: acquiring at least one first image to be encoded and a second image to be encoded, the second image to be encoded being a video image preceding the at least one first image to be encoded; performing first encoding on the at least one first image to be encoded respectively to generate a first code stream; determining, according to at least one first reconstructed image, at least one of a first division mode or a first encoding parameter used for second encoding of the second image to be encoded, the at least one first reconstructed image being a reconstructed image of the first code stream or a reconstructed image produced during the first encoding process; and performing, according to at least one of the first division mode or the first encoding parameter, second encoding in full intra-frame prediction mode on the second image to be encoded to generate a second code stream.

The first code stream may also be called a long-GOP code stream, and the second code stream a random-access code stream. When video content needs to be accessed, the second code stream is decoded first, and its decoding result is used as a reference frame for decoding the first code stream, enabling fast access to the video content.

The encoding of the second code stream is adjusted based on the encoding result of the first code stream, so that the reconstructed image of the first code stream and the corresponding reconstructed image of the second code stream are of the same or comparable quality; on the basis of low-latency access, this improves the decoding quality of the accessed content, reduces blocking artifacts, and eliminates some ghosting.

By simulating the decoder-side behavior during encoding, the blocking and ghosting effects caused by encoder–decoder mismatch are reduced.

Optionally, the second image to be encoded may be the frame immediately preceding the at least one first image to be encoded, or a video image one or more frames earlier.

In one possible design, the number of the at least one first image to be encoded is one and the number of the at least one first reconstructed image is one; the difference between the first reconstructed image and a second reconstructed image is less than a difference threshold, or their similarity is higher than a similarity threshold; the second reconstructed image is obtained by decoding the first code stream using a third reconstructed image as a reference image, and the third reconstructed image is a reconstructed image of the second code stream or a reconstructed image produced during the second encoding process.

In one possible design, the number of the at least one first image to be encoded is more than one and the number of the at least one first reconstructed image is more than one; the difference between the plurality of first reconstructed images and a plurality of second reconstructed images is less than a difference threshold, or their similarity is higher than a similarity threshold; the plurality of second reconstructed images are obtained by decoding the first code stream using a third reconstructed image as a reference image, and the third reconstructed image is a reconstructed image of the second code stream or a reconstructed image produced during the second encoding process.

The difference between the plurality of first reconstructed images and the plurality of second reconstructed images may be a weighted sum of the differences between each first reconstructed image and its corresponding second reconstructed image; likewise, the similarity between them may be a weighted sum of the similarities between each first reconstructed image and its corresponding second reconstructed image.

The second reconstructed image corresponding to a given first reconstructed image among the plurality of first reconstructed images refers to the reconstructed image of the same video content.
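As a small sketch of the weighted-sum formulation above, the snippet below aggregates per-pair mean absolute differences over two lists of co-located reconstructed frames; the uniform default weighting is an assumption, since the application leaves the weights unspecified.

```python
import numpy as np

def weighted_multi_frame_difference(first_recons, second_recons, weights=None):
    """Sketch: weighted sum of per-pair mean absolute differences between
    corresponding first and second reconstructed images (same video content)."""
    if weights is None:
        weights = [1.0 / len(first_recons)] * len(first_recons)  # assumed uniform
    return sum(
        w * float(np.mean(np.abs(a.astype(np.int32) - b.astype(np.int32))))
        for w, a, b in zip(weights, first_recons, second_recons)
    )
```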
In one possible design, the number of the at least one first image to be encoded is one and the number of the at least one first reconstructed image is one, and determining, according to the at least one first reconstructed image, at least one of the first division mode or the first encoding parameter used for the second encoding of the second image to be encoded includes: selecting, according to the first reconstructed image, one of a plurality of second division modes as the first division mode; and/or selecting, according to the first reconstructed image, one of a plurality of second encoding parameters as the first encoding parameter.

The similarity between the first reconstructed image and the second reconstructed image is the highest among the similarities between the first reconstructed image and a plurality of fourth reconstructed images; the plurality of fourth reconstructed images include the second reconstructed image and are obtained by decoding the first code stream using a plurality of fifth reconstructed images respectively as reference images; the plurality of fifth reconstructed images are reconstructed images of a plurality of third code streams, the plurality of third code streams being obtained by performing the second encoding multiple times on the second image to be encoded according to a plurality of second division modes and/or a plurality of second encoding parameters; or the plurality of fifth reconstructed images are reconstructed images produced when the second image to be encoded is second-encoded multiple times according to the plurality of second division modes and/or the plurality of second encoding parameters.

In one possible design, the at least one first image to be encoded is a plurality of first images to be encoded, and determining, according to the at least one first reconstructed image, at least one of the first division mode or the first encoding parameter used for the second encoding of the second image to be encoded includes: selecting, according to the plurality of first reconstructed images, one of a plurality of second division modes as the first division mode; and/or selecting one of a plurality of second encoding parameters as the first encoding parameter.

Exemplarily, the second image to be encoded may be second-encoded multiple times according to the plurality of second division modes and/or the plurality of second encoding parameters to generate a plurality of third code streams. The plurality of fifth reconstructed images are reconstructed images of the plurality of third code streams or of the multiple second encoding processes. The first code stream may be decoded using the plurality of fifth reconstructed images respectively as reference images to obtain multiple groups of fourth reconstructed images, each group including a fourth reconstructed image for each of the plurality of first images to be encoded. By comparing the similarity of each group of fourth reconstructed images with the plurality of first reconstructed images, the group with the highest similarity is selected as the second reconstructed images corresponding to the plurality of first images to be encoded, and the third code stream corresponding to that group is taken as the second code stream. The third code stream corresponding to a group of fourth reconstructed images means the third code stream whose reconstructed image, or whose reconstructed image produced during the encoding process that generated it, when used as a reference image for decoding the first code stream, yields that group of fourth reconstructed images.

In one possible design, before performing first encoding on the at least one first image to be encoded respectively, the method further includes: performing first encoding on the second image to be encoded to generate a fourth code stream; and acquiring the prediction mode of the first encoding. When the prediction mode of the first encoding is inter-frame prediction, the step of determining, according to the at least one first reconstructed image, at least one of the first division mode or the first encoding parameter used for the second encoding of the second image to be encoded is executed. When the prediction mode of the first encoding is intra-frame prediction, the fourth code stream is used as the second code stream.

In one possible design, the at least one first image to be encoded is at least one first source video image, and the second image to be encoded is a second source video image.

In one possible design, the first encoding parameter includes a first quantization parameter or a first bit rate, and the second encoding parameter includes a second quantization parameter or a second bit rate.
On the basis of the first aspect or any possible design thereof, or the second aspect or any possible design thereof, identification information of code-stream characteristics may be carried during encoding. This identification information enables the decoder side to distinguish the first code stream from the second code stream, and may be carried in any one of a parameter set, supplemental enhancement information, the encapsulation layer, the file format, file description information, or a custom message.

In one possible design, the first and second code streams use the same parameter set, and the parameter set may carry first identification information indicating that the current code stream is the first code stream, or the second code stream, or both code streams with the first code stream of the same video content preceding the second code stream, or both code streams with the first code stream of the same video content following the second code stream. For example, the first identification information may be a stream identifier (stream_id) in a video parameter set (VPS), a sequence parameter set (SPS), or a picture parameter set (PPS).

In this way, the first and second code streams can be encapsulated together.

In one possible design, the first and second code streams use different parameter sets, and the different parameter sets may carry second identification information indicating whether the current code stream is the first code stream or the second code stream. For example, the second identification information may be a stream identifier (stream_id) in the VPS, SPS, or PPS of the respective code stream.

In this way, the first and second code streams can be encapsulated together or each encapsulated independently.

In one possible design, the slice header information of the first code stream may carry third identification information indicating that the current code stream is the first code stream, and the slice header information of the second code stream may carry fourth identification information indicating that the current code stream is the second code stream. For example, the third or fourth identification information may be a stream identifier (stream_id) in the slice header information.

In this way, the first and second code streams can be encapsulated together or each encapsulated independently.

In one possible design, the supplemental enhancement information of the first code stream may carry fifth identification information indicating that the current code stream is the first code stream, and the supplemental enhancement information of the second code stream may carry sixth identification information indicating that the current code stream is the second code stream. For example, the fifth or sixth identification information may be a stream identifier (stream_id) in the SEI.

In this way, the first and second code streams can be encapsulated together or each encapsulated independently.

In one possible design, the first and second code streams are each encapsulated independently; the encapsulation layer of the first code stream carries seventh identification information indicating that the current code stream is the first code stream, and the encapsulation layer of the second code stream carries eighth identification information indicating that the current code stream is the second code stream. For example, the first code stream is encapsulated in a first media track and the second code stream in a second media track, where the seventh or eighth identification information may be a media track class (track_class).

In one possible design, the first and second code streams are each encapsulated and transmitted independently; the file format of the first code stream carries ninth identification information indicating that the current code stream is the first code stream, and the file format of the second code stream carries tenth identification information indicating that the current code stream is the second code stream.

In one possible design, the first and second code streams are each encapsulated and transmitted independently; the file description information of the first code stream carries eleventh identification information indicating that the current code stream is the first code stream, and the file description information of the second code stream carries twelfth identification information indicating that the current code stream is the second code stream.

In one possible design, the first and second code streams are transmitted in a custom-message mode; the custom message carrying the first code stream carries thirteenth identification information indicating that the current code stream is the first code stream, and the custom message carrying the second code stream carries fourteenth identification information indicating that the current code stream is the second code stream. For example, the custom message is a TLV message, and the thirteenth or fourteenth identification information may be the type information in the TLV.
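As an illustration of the custom-message option just described, the sketch below packs a code-stream chunk as a TLV record whose type field distinguishes the long-GOP stream from the random-access stream. The type values 0x01/0x02 and the 4-byte big-endian length field are hypothetical choices, not values defined by this application.

```python
import struct

# Hypothetical TLV type values distinguishing the two streams.
TYPE_FIRST_STREAM = 0x01   # long-GOP (first) code stream
TYPE_SECOND_STREAM = 0x02  # random-access (second) code stream

def pack_tlv(stream_type: int, payload: bytes) -> bytes:
    """Pack one chunk as Type (1 byte) | Length (4 bytes, big-endian) | Value."""
    return struct.pack(">BI", stream_type, len(payload)) + payload

def unpack_tlv(buf: bytes, offset: int = 0):
    """Return (stream_type, payload, next_offset) for the record at offset."""
    stream_type, length = struct.unpack_from(">BI", buf, offset)
    start = offset + 5
    return stream_type, buf[start:start + length], start + length
```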
In a third aspect, the present application provides a video image processing apparatus, which may be an electronic device or a server, for example a chip or a system-on-chip in an electronic device or server, or a functional module in an electronic device or server for implementing the first aspect or any possible implementation thereof. For example, the video image processing apparatus includes: an acquiring module configured to acquire an image to be encoded; a first encoding module configured to perform first encoding on the image to be encoded to generate a first code stream; and a second encoding module configured to perform, according to encoding information of the first encoding, second encoding in full intra-frame prediction mode on the image to be encoded or a first reconstructed image to generate a second code stream, the first reconstructed image being a reconstructed image of the first code stream or a reconstructed image produced during the first encoding process.

In one possible design, the encoding information of the first encoding includes one or more of the division mode of the first encoding, the quantization parameter of the first encoding, and the encoding distortion information of the first encoding.

In one possible design, the second encoding module is configured to perform at least one of the following: performing the second encoding in full intra-frame prediction mode on the image to be encoded or the first reconstructed image using the same division mode as the first encoding; performing the second encoding in full intra-frame prediction mode on the image to be encoded or the first reconstructed image using the same quantization parameter as the first encoding; determining the quantization parameter of the second encoding according to the quantization parameter of the first encoding and a quantization parameter offset, and performing the second encoding in full intra-frame prediction mode on the image to be encoded or the first reconstructed image according to the quantization parameter of the second encoding; or determining the quantization parameter of the second encoding according to the encoding distortion information of the first encoding, and performing the second encoding in full intra-frame prediction mode on the image to be encoded or the first reconstructed image according to the quantization parameter of the second encoding.

In one possible design, the second encoding module is configured to: determine the quantization parameter of the second encoding according to the encoding information of the first encoding and feature information of the image to be encoded; and perform, according to the quantization parameter of the second encoding, the second encoding in full intra-frame prediction mode on the image to be encoded or the first reconstructed image.

In one possible design, the feature information of the image to be encoded includes one or more of its content complexity, color classification information, contrast information, and content segmentation information.

In one possible design, the second encoding module is configured to determine, according to the encoding information of the first encoding and the first reconstructed image, at least one of a first division mode or a first encoding parameter used for the second encoding of the image to be encoded or the first reconstructed image. The second encoding module is further configured to perform, according to at least one of the first division mode or the first encoding parameter, the second encoding in full intra-frame prediction mode on the image to be encoded or the first reconstructed image.

The interval between two adjacent full-intra-frame-prediction frames in the first code stream is greater than the interval between two adjacent full-intra-frame-prediction frames in the second code stream.

In one possible design, the difference between the first reconstructed image and a second reconstructed image is less than a difference threshold, or their similarity is higher than a similarity threshold; the second reconstructed image is a reconstructed image of the second code stream or a reconstructed image produced during the second encoding process.

In one possible design, the second encoding module is configured to: determine a plurality of second division modes according to the encoding information of the first encoding and the first reconstructed image, and select one of them as the first division mode; and/or determine a plurality of second encoding parameters according to the encoding information of the first encoding and the first reconstructed image, and select one of them as the first encoding parameter.

The similarity between the first and second reconstructed images is the highest among the similarities between the first reconstructed image and a plurality of third reconstructed images; the plurality of third reconstructed images include the second reconstructed image and are reconstructed images produced when the image to be encoded or the first reconstructed image is second-encoded multiple times according to the plurality of second division modes and/or second encoding parameters, or they are reconstructed images of a plurality of third code streams obtained by performing those multiple second encodings.

In one possible design, the second encoding module is further configured to: acquire the prediction mode of the first encoding; when the prediction mode of the first encoding is inter-frame prediction, perform the steps of acquiring the encoding information of the first encoding and performing, according to it, the second encoding in full intra-frame prediction mode on the image to be encoded or the first reconstructed image to generate the second code stream; and when the prediction mode of the first encoding is intra-frame prediction, use the first code stream as the second code stream.

In one possible design, the image to be encoded is a source video image; or, the image to be encoded is an image block obtained by dividing a source video image.

In one possible design, the first encoding parameter includes a first quantization parameter or a first bit rate, and the second encoding parameter includes a second quantization parameter or a second bit rate.
In a fourth aspect, the present application provides a video image processing apparatus, which may be an electronic device or a server, for example a chip or a system-on-chip therein, or a functional module therein for implementing the second aspect or any possible implementation thereof. For example, the video image processing apparatus includes: an acquiring module configured to acquire at least one first image to be encoded and a second image to be encoded, the second image to be encoded being a video image preceding the at least one first image to be encoded; a first encoding module configured to perform first encoding on the at least one first image to be encoded respectively to generate a first code stream; and a second encoding module configured to determine, according to at least one first reconstructed image, at least one of a first division mode or a first encoding parameter used for second encoding of the second image to be encoded, the at least one first reconstructed image being a reconstructed image of the first code stream or a reconstructed image produced during the first encoding process. The second encoding module is further configured to perform, according to at least one of the first division mode or the first encoding parameter, the second encoding on the second image to be encoded to generate a second code stream.

In one possible design, the number of the at least one first image to be encoded is one and the number of the at least one first reconstructed image is one; the difference between the first reconstructed image and a second reconstructed image is less than a difference threshold, or their similarity is higher than a similarity threshold; the second reconstructed image is obtained by decoding the first code stream using a third reconstructed image as a reference image, and the third reconstructed image is a reconstructed image of the second code stream or a reconstructed image produced during the second encoding process.

In one possible design, the number of the at least one first image to be encoded is one and the number of the at least one first reconstructed image is one, and the second encoding module is configured to: select, according to the first reconstructed image, one of a plurality of second division modes as the first division mode; and/or select, according to the first reconstructed image, one of a plurality of second encoding parameters as the first encoding parameter.

The similarity between the first and second reconstructed images is the highest among the similarities between the first reconstructed image and a plurality of fourth reconstructed images; the plurality of fourth reconstructed images include the second reconstructed image and are obtained by decoding the first code stream using a plurality of fifth reconstructed images respectively as reference images; the plurality of fifth reconstructed images are reconstructed images of a plurality of third code streams, obtained by second-encoding the second image to be encoded multiple times according to a plurality of second division modes and/or second encoding parameters, or they are reconstructed images produced during those multiple second encoding processes.

In one possible design, the first encoding module is further configured to: before performing first encoding on the at least one first image to be encoded respectively, perform first encoding on the second image to be encoded to generate a fourth code stream. The second encoding module is further configured to: acquire the prediction mode of the first encoding; when the prediction mode of the first encoding is inter-frame prediction, perform the step of determining, according to the at least one first reconstructed image, at least one of the first division mode or the first encoding parameter used for the second encoding of the second image to be encoded; and when the prediction mode of the first encoding is intra-frame prediction, use the fourth code stream as the second code stream.

In one possible design, the at least one first image to be encoded is at least one first source video image, and the second image to be encoded is a second source video image.

In one possible design, the first encoding parameter includes a first quantization parameter or a first bit rate, and the second encoding parameter includes a second quantization parameter or a second bit rate.
In a fifth aspect, an embodiment of the present application provides a video image processing apparatus, including one or more processors and a memory for storing one or more programs; when the one or more programs are executed by the one or more processors, the one or more processors implement the method according to the first aspect or any design thereof, or the method according to the second aspect or any design thereof.

In a sixth aspect, an embodiment of the present application provides a computer-readable storage medium including the first code stream and the second code stream obtained according to the method of the first aspect or any design thereof, or according to the method of the second aspect or any design thereof.

In a seventh aspect, an embodiment of the present application provides a computer program product which, when run on a computer, causes the computer to perform the method according to the first aspect or any design thereof, or the method according to the second aspect or any design thereof.

In an eighth aspect, an embodiment of the present application provides a computer-readable storage medium including computer instructions which, when run on a computer, cause the computer to perform the method according to the first aspect or any design thereof, or the method according to the second aspect or any design thereof.

It should be understood that the third to eighth aspects of the present application are consistent with the technical solutions of the first and second aspects; the beneficial effects achieved by the aspects and the corresponding feasible implementations are similar and are not repeated here.
Brief Description of the Drawings

FIG. 1A is a block diagram of an example of a video encoding and decoding system 10 for implementing embodiments of the present application;

FIG. 1B is a block diagram of an example of a video coding system 40 for implementing embodiments of the present application;

FIG. 2 is a block diagram of an example structure of an encoder 20 for implementing embodiments of the present application;

FIG. 3 is a block diagram of an example structure of a decoder 30 for implementing embodiments of the present application;

FIG. 4 is a block diagram of an example of a video coding device 400 for implementing embodiments of the present application;

FIG. 5 is a block diagram of another example of an encoding apparatus or decoding apparatus for implementing embodiments of the present application;

FIG. 6 is a schematic diagram of an application scenario of multi-camera shooting of a sports event according to an embodiment of the present application;

FIG. 7 is a schematic diagram of a decoded-frame trajectory when switching from current video content to another video content according to an embodiment of the present application;

FIG. 8 is a schematic diagram of a video image processing system according to an embodiment of the present application;

FIG. 9 is a schematic diagram of a video image processing method according to an embodiment of the present application;

FIG. 10 is a schematic flowchart of a video image processing method according to an embodiment of the present application;

FIG. 11 is a schematic flowchart of a video image processing method according to an embodiment of the present application;

FIG. 12 is a schematic flowchart of a video image processing method according to an embodiment of the present application;

FIG. 13 is a schematic flowchart of a video image processing method according to an embodiment of the present application;

FIG. 14 is a schematic flowchart of a video image processing method according to an embodiment of the present application;

FIG. 15 is a schematic flowchart of a video image processing method according to an embodiment of the present application;

FIG. 16 is a schematic diagram of the first encoding and the second encoding using the same TU division mode according to an embodiment of the present application;

FIG. 17 is a schematic diagram of the quantization parameter of the first encoding and the quantization parameter of the second encoding according to an embodiment of the present application;

FIG. 18 is a schematic diagram of the quantization parameter passed from the first encoding to the second encoding according to an embodiment of the present application;

FIG. 19 is a schematic diagram of the arrangement of the first and second code streams when the stream identifier (stream_id) is 2 according to an embodiment of the present application;

FIG. 20 is a schematic diagram of the arrangement of the first and second code streams when the stream identifier (stream_id) is 3 according to an embodiment of the present application;

FIG. 21 shows three ways of merging the first and second code streams into one code stream according to an embodiment of the present application;

FIG. 22 is a schematic structural diagram of a video image processing apparatus according to an embodiment of the present application.
Detailed Description

The embodiments of the present application are described below with reference to the accompanying drawings. In the following description, reference is made to the accompanying drawings, which form part of this disclosure and show, by way of illustration, specific aspects of embodiments of the present application or specific aspects in which embodiments of the present application may be used. It should be understood that the embodiments may be used in other aspects and may include structural or logical changes not depicted in the drawings; the following detailed description is therefore not to be taken in a limiting sense, and the scope of the present application is defined by the appended claims. For example, it should be understood that disclosure in connection with a described method may equally apply to a corresponding device or system for performing the method, and vice versa. For example, if one or more specific method steps are described, the corresponding device may include one or more units, such as functional units, to perform the described one or more method steps (e.g., one unit performing the one or more steps, or multiple units each performing one or more of the multiple steps), even if such one or more units are not explicitly described or illustrated in the drawings. Conversely, if a specific apparatus is described based on one or more units, such as functional units, the corresponding method may include one step to perform the functionality of the one or more units (e.g., one step performing the functionality of the one or more units, or multiple steps each performing the functionality of one or more of the multiple units), even if such one or more steps are not explicitly described or illustrated in the drawings. Further, it should be understood that, unless explicitly stated otherwise, the features of the various exemplary embodiments and/or aspects described herein may be combined with each other.

The technical solutions involved in the embodiments of the present application may be applied not only to existing video coding standards (such as H.264 and HEVC) but also to future video coding standards (such as H.266). The terms used in the implementation section of the present application are only intended to explain specific embodiments and are not intended to limit the application. Some concepts that may be involved in the embodiments are briefly introduced first.
Video coding generally refers to processing a sequence of pictures that form a video or video sequence. In the field of video coding, the terms "picture", "frame", and "image" may be used synonymously. Video coding as used herein refers to video encoding or video decoding. Video encoding is performed on the source side and typically includes processing (e.g., compressing) the original video picture to reduce the amount of data required to represent it, for more efficient storage and/or transmission. Video decoding is performed on the destination side and typically includes inverse processing relative to the encoder to reconstruct the video picture. "Coding" of video pictures in the embodiments should be understood as "encoding" or "decoding" of a video sequence. The combination of the encoding part and the decoding part is also called CODEC (encoding and decoding).

A video sequence includes a series of pictures, a picture is further divided into slices, and a slice is further divided into blocks. Video coding is performed block by block. In some new video coding standards, the concept of a block is further extended: in the H.264 standard there is the macroblock (MB), which may be further divided into multiple partitions usable for predictive coding; in the high efficiency video coding (HEVC) standard, basic concepts such as the coding unit (CU), prediction unit (PU), and transform unit (TU) are adopted, multiple kinds of block units are obtained through functional division, and a brand-new tree-structure-based description is used. For example, a CU may be divided into smaller CUs according to a quadtree, and a smaller CU may continue to be divided, thereby forming a quadtree structure; the CU is the basic unit for dividing and encoding a coded picture. There are similar tree structures for PUs and TUs: a PU may correspond to a prediction block and is the basic unit of predictive coding; a CU is further divided into multiple PUs according to a division mode. A TU may correspond to a transform block and is the basic unit for transforming a prediction residual. In essence, however, CUs, PUs, and TUs all belong to the concept of blocks (or picture blocks).

For example, in HEVC, a CTU is split into multiple CUs by using a quadtree structure denoted as a coding tree. A decision on whether to encode a picture region using inter-picture (temporal) or intra-picture (spatial) prediction is made at the CU level. Each CU may be further split into one, two, or four PUs according to the PU split type. The same prediction process is applied within one PU, and the related information is transmitted to the decoder on a PU basis. After the residual block is obtained by applying the prediction process based on the PU split type, the CU may be partitioned into transform units (TUs) according to another quadtree structure similar to the coding tree used for the CU. In the latest developments of video compression technology, quad-tree and binary tree (QTBT) partitioning is used to partition coding blocks; in the QTBT block structure, a CU may be square or rectangular.
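To make the quadtree splitting just described concrete, here is a minimal sketch of recursive CTU-to-CU partitioning. The `should_split` callback (e.g., a rate-distortion decision) and the size parameters are illustrative assumptions, not part of any standard API.

```python
def quadtree_split(x, y, size, min_size, should_split):
    """Sketch: recursively split a CTU at (x, y) into CUs, quadtree-style.

    should_split(x, y, size) is a hypothetical decision hook (in a real
    encoder this would be a rate-distortion test). Returns (x, y, size) leaves.
    """
    if size > min_size and should_split(x, y, size):
        half = size // 2
        cus = []
        for dy in (0, half):
            for dx in (0, half):
                cus += quadtree_split(x + dx, y + dy, half, min_size, should_split)
        return cus
    return [(x, y, size)]

# Example: split a 64x64 CTU down to 16x16 whenever the hook says so.
leaves = quadtree_split(0, 0, 64, 16, lambda x, y, s: s > 32)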
Herein, for ease of description and understanding, the picture block to be encoded in the current picture may be called the current block; in encoding, it refers to the block currently being encoded, and in decoding, to the block currently being decoded. A decoded picture block in a reference picture used to predict the current block is called a reference block, i.e., a block that provides a reference signal for the current block, where the reference signal represents pixel values within the picture block. A block in the reference picture that provides a prediction signal for the current block may be called a prediction block, where the prediction signal represents pixel values, sample values, or a sample signal within the prediction block. For example, after traversing multiple reference blocks, an optimal reference block is found that provides prediction for the current block; this block is called the prediction block.

In the case of lossless video coding, the original video picture can be reconstructed, i.e., the reconstructed picture has the same quality as the original video picture (assuming no transmission loss or other data loss during storage or transmission). In the case of lossy video coding, further compression is performed, e.g., by quantization, to reduce the amount of data required to represent the video picture, and the decoder side cannot fully reconstruct the picture, i.e., the quality of the reconstructed picture is lower or worse than that of the original.

Several video coding standards since H.261 belong to "lossy hybrid video codecs" (i.e., they combine spatial and temporal prediction in the sample domain with 2D transform coding for applying quantization in the transform domain). Each picture of a video sequence is usually partitioned into a set of non-overlapping blocks, and coding is typically performed at the block level. In other words, the encoder side usually processes, i.e., encodes, video at the block (video block) level: for example, a prediction block is generated by spatial (intra-picture) and temporal (inter-picture) prediction, the prediction block is subtracted from the current block (the block currently being processed or to be processed) to obtain a residual block, and the residual block is transformed in the transform domain and quantized to reduce the amount of data to be transmitted (compressed); the decoder side applies the inverse processing relative to the encoder to the encoded or compressed block to reconstruct the current block for representation. In addition, the encoder replicates the decoder processing loop, so that the encoder and decoder generate identical predictions (e.g., intra and inter predictions) and/or reconstructions for processing, i.e., encoding, subsequent blocks.
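The sketch below mimics one block's round trip through the hybrid loop just described: predict, form the residual, quantize, inverse-quantize, and reconstruct. A uniform scalar quantizer stands in for the real transform/quantization pipeline, and the plain numpy arrays are illustrative simplifications.

```python
import numpy as np

def encode_block(block: np.ndarray, prediction: np.ndarray, qstep: float):
    """One hybrid-coding round trip for a block (transform omitted for brevity)."""
    residual = block.astype(np.int32) - prediction.astype(np.int32)
    levels = np.round(residual / qstep)          # quantization (the lossy step)
    recon_residual = levels * qstep              # inverse quantization
    reconstructed = np.clip(prediction + recon_residual, 0, 255).astype(np.uint8)
    # The encoder keeps `reconstructed`, not `block`, as the reference for
    # later frames -- this is what keeps encoder and decoder in sync.
    return levels, reconstructed
```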
The system architecture to which the embodiments of the present application are applied is described below. Referring to FIG. 1A, FIG. 1A schematically shows a block diagram of a video encoding and decoding system 10 to which embodiments of the present application are applied. As shown in FIG. 1A, the system 10 may include a source device 12 and a destination device 14. The source device 12 generates encoded video data and may therefore be called a video encoding apparatus. The destination device 14 may decode the encoded video data generated by the source device 12 and may therefore be called a video decoding apparatus. Various implementations of the source device 12, the destination device 14, or both may include one or more processors and a memory coupled to the one or more processors. The memory may include, but is not limited to, RAM, ROM, EEPROM, flash memory, or any other medium usable to store the desired program code in the form of instructions or data structures accessible by a computer, as described herein. The source device 12 and the destination device 14 may include various apparatuses, including desktop computers, mobile computing apparatuses, notebook (e.g., laptop) computers, tablet computers, set-top boxes, telephone handsets such as so-called "smart" phones, televisions, cameras, display apparatuses, digital media players, video game consoles, in-vehicle computers, wireless communication devices, or the like.

Although FIG. 1A depicts the source device 12 and the destination device 14 as separate devices, a device embodiment may also include both the source device 12 and the destination device 14 or the functionality of both, i.e., the source device 12 or corresponding functionality and the destination device 14 or corresponding functionality. In such embodiments, the same hardware and/or software, separate hardware and/or software, or any combination thereof may be used to implement the source device 12 or corresponding functionality and the destination device 14 or corresponding functionality.

The source device 12 and the destination device 14 may be communicatively connected via a link 13, and the destination device 14 may receive the encoded video data from the source device 12 via the link 13. The link 13 may include one or more media or apparatuses capable of moving the encoded video data from the source device 12 to the destination device 14. In one example, the link 13 may include one or more communication media enabling the source device 12 to transmit the encoded video data directly to the destination device 14 in real time. In this example, the source device 12 may modulate the encoded video data according to a communication standard (e.g., a wireless communication protocol) and transmit the modulated video data to the destination device 14. The one or more communication media may include wireless and/or wired communication media, such as the radio frequency (RF) spectrum or one or more physical transmission lines. The one or more communication media may form part of a packet-based network, such as a local area network, a wide area network, or a global network (e.g., the Internet), and may include routers, switches, base stations, or other devices facilitating communication from the source device 12 to the destination device 14.

The source device 12 includes an encoder 20 and, optionally, may further include a picture source 16, a picture preprocessor 18, and a communication interface 22. In a specific implementation, the encoder 20, the picture source 16, the picture preprocessor 18, and the communication interface 22 may be hardware components or software programs in the source device 12. They are described as follows:

The picture source 16 may include or be any kind of picture capture device, for example for capturing real-world pictures, and/or any kind of device for generating pictures or comments (for screen content encoding, some text on the screen is also considered part of the picture or image to be encoded), for example a computer graphics processor for generating computer-animated pictures, or any kind of device for obtaining and/or providing real-world pictures or computer-animated pictures (e.g., screen content, virtual reality (VR) pictures), and/or any combination thereof (e.g., augmented reality (AR) pictures). The picture source 16 may be a camera for capturing pictures or a memory for storing pictures, and may further include any kind of (internal or external) interface for storing previously captured or generated pictures and/or obtaining or receiving pictures. When the picture source 16 is a camera, it may be, for example, a local camera or a camera integrated in the source device; when it is a memory, it may be a local memory or, for example, a memory integrated in the source device. When the picture source 16 includes an interface, the interface may be, for example, an external interface for receiving pictures from an external video source, such as an external picture capture device like a camera, an external memory, or an external picture generation device such as an external computer graphics processor, computer, or server. The interface may be any kind of interface according to any proprietary or standardized interface protocol, such as a wired or wireless interface or an optical interface.

A picture may be regarded as a two-dimensional array or matrix of picture elements (pixels); a pixel in the array may also be called a sample. The number of samples in the horizontal and vertical directions (or axes) of the array or picture defines the size and/or resolution of the picture. To represent color, three color components are usually employed, i.e., a picture may be represented as or contain three sample arrays. For example, in the RGB format or color space, a picture includes corresponding red, green, and blue sample arrays. In video coding, however, each pixel is usually represented in a luminance/chrominance format or color space; for example, a picture in YUV format includes a luminance component indicated by Y (sometimes also indicated by L) and two chrominance components indicated by U and V. The luminance (luma) component Y represents luminance or gray-level intensity (e.g., both are the same in a gray-scale picture), while the two chrominance (chroma) components U and V represent chrominance or color information components. Accordingly, a picture in YUV format includes a luma sample array of luma sample values (Y) and two chroma sample arrays of chroma values (U and V). A picture in RGB format may be converted or transformed into YUV format and vice versa; this process is also known as color transformation or conversion. If a picture is black and white, it may include only a luma sample array. In the embodiments of the present application, a picture transmitted from the picture source 16 to the picture processor may also be called original picture data 17.

The picture preprocessor 18 is configured to receive the original picture data 17 and perform preprocessing on it to obtain a preprocessed picture 19 or preprocessed picture data 19. For example, the preprocessing performed by the picture preprocessor 18 may include trimming, color format conversion (e.g., from RGB to YUV), color correction, or denoising.

The encoder 20 (or video encoder 20) is configured to receive the preprocessed picture data 19 and process it using a relevant prediction mode (such as the prediction modes in the embodiments herein) to provide encoded picture data 21 (structural details of the encoder 20 are further described below based on FIG. 2, FIG. 4, or FIG. 5). In some embodiments, the encoder 20 may be configured to execute the embodiments described later, to implement the encoder-side application of the methods described in the present application.

The communication interface 22 may be configured to receive the encoded picture data 21 and transmit it via the link 13 to the destination device 14 or any other device (such as a memory) for storage or direct reconstruction; the other device may be any device for decoding or storage. The communication interface 22 may, for example, be configured to encapsulate the encoded picture data 21 into a suitable format, such as data packets, for transmission over the link 13.

The destination device 14 includes a decoder 30 and, optionally, may further include a communication interface 28, a picture postprocessor 32, and a display device 34. They are described as follows:

The communication interface 28 may be configured to receive the encoded picture data 21 from the source device 12 or any other source, such as a storage device, e.g., an encoded-picture-data storage device. The communication interface 28 may be configured to transmit or receive the encoded picture data 21 via the link 13 between the source device 12 and the destination device 14, or via any kind of network; the link 13 is, for example, a direct wired or wireless connection, and the network may be, for example, a wired or wireless network or any combination thereof, or any kind of private and public network, or any combination thereof. The communication interface 28 may, for example, be configured to decapsulate the data packets transmitted by the communication interface 22 to obtain the encoded picture data 21.

Both the communication interface 28 and the communication interface 22 may be configured as unidirectional or bidirectional communication interfaces, and may be configured, for example, to send and receive messages to establish a connection, and to acknowledge and exchange any other information related to the communication link and/or to data transmission such as encoded picture data transmission.

The decoder 30 is configured to receive the encoded picture data 21 and provide decoded picture data 31 or a decoded picture 31 (structural details of the decoder 30 are further described below based on FIG. 3, FIG. 4, or FIG. 5). In some embodiments, the decoder 30 may be configured to execute the embodiments described later, to implement the decoder-side application of the methods described in the present application.

The picture postprocessor 32 is configured to perform postprocessing on the decoded picture data 31 (also called reconstructed picture data) to obtain postprocessed picture data 33. The postprocessing performed by the picture postprocessor 32 may include color format conversion (e.g., from YUV to RGB), color correction, trimming, or resampling, or any other processing, and may also include transmitting the postprocessed picture data 33 to the display device 34.

The display device 34 is configured to receive the postprocessed picture data 33 to display the picture to, for example, a user or viewer. The display device 34 may be or include any kind of display for presenting the reconstructed picture, such as an integrated or external display or monitor. For example, the display may include a liquid crystal display (LCD), an organic light emitting diode (OLED) display, a plasma display, a projector, a micro LED display, liquid crystal on silicon (LCoS), a digital light processor (DLP), or any other kind of display.
It will be apparent to those skilled in the art from the description that the existence and (exact) division of the functionality of the different units, or of the functionality of the source device 12 and/or the destination device 14 shown in FIG. 1A, may vary depending on the actual device and application. The source device 12 and the destination device 14 may include any of a variety of devices, including any kind of handheld or stationary device, such as a notebook or laptop computer, mobile phone, smartphone, tablet or tablet computer, video camera, desktop computer, set-top box, television, camera, in-vehicle device, display device, digital media player, video game console, video streaming device (such as a content service server or content distribution server), broadcast receiver device, or broadcast transmitter device, and may use no operating system or any kind of operating system.

Both the encoder 20 and the decoder 30 may be implemented as any of various suitable circuits, such as one or more microprocessors, digital signal processors (DSPs), application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), discrete logic, hardware, or any combination thereof. If the techniques are implemented partially in software, a device may store the software instructions in a suitable non-transitory computer-readable storage medium and may execute the instructions in hardware using one or more processors to perform the techniques of the present disclosure. Any of the foregoing (including hardware, software, a combination of hardware and software, etc.) may be regarded as one or more processors.

In some cases, the video encoding and decoding system 10 shown in FIG. 1A is merely an example, and the techniques of the present application may be applied to video coding settings (e.g., video encoding or video decoding) that do not necessarily include any data communication between the encoding and decoding devices. In other examples, data may be retrieved from local memory, streamed over a network, or the like. A video encoding device may encode data and store it in memory, and/or a video decoding device may retrieve data from memory and decode it. In some examples, encoding and decoding are performed by devices that do not communicate with each other but merely encode data to memory and/or retrieve data from memory and decode it.

Referring to FIG. 1B, FIG. 1B is an illustrative diagram of an example of a video coding system 40 including the encoder 20 of FIG. 2 and/or the decoder 30 of FIG. 3, according to an exemplary embodiment. The video coding system 40 may implement combinations of the various techniques of the embodiments of the present application. In the illustrated implementation, the video coding system 40 may include an imaging device 41, an encoder 20, a decoder 30 (and/or a video encoder/decoder implemented by a logic circuit 47 of a processing unit 46), an antenna 42, one or more processors 43, one or more memories 44, and/or a display device 45.

As shown in FIG. 1B, the imaging device 41, the antenna 42, the processing unit 46, the logic circuit 47, the encoder 20, the decoder 30, the processor 43, the memory 44, and/or the display device 45 can communicate with one another. As discussed, although the video coding system 40 is illustrated with both the encoder 20 and the decoder 30, in different examples it may include only the encoder 20 or only the decoder 30.

In some examples, the antenna 42 may be used to transmit or receive an encoded bitstream of video data. In addition, in some examples the display device 45 may be used to present the video data. In some examples, the logic circuit 47 may be implemented by the processing unit 46. The processing unit 46 may include application-specific integrated circuit (ASIC) logic, a graphics processor, a general-purpose processor, or the like. The video coding system 40 may also include an optional processor 43, which may similarly include ASIC logic, a graphics processor, a general-purpose processor, or the like. In some examples, the logic circuit 47 may be implemented by hardware, such as dedicated video coding hardware, and the processor 43 may be implemented by general-purpose software, an operating system, or the like. In addition, the memory 44 may be any type of memory, such as volatile memory (e.g., static random access memory (SRAM), dynamic random access memory (DRAM)) or non-volatile memory (e.g., flash memory). In a non-limiting example, the memory 44 may be implemented by cache memory. In some examples, the logic circuit 47 may access the memory 44 (e.g., for implementing a picture buffer). In other examples, the logic circuit 47 and/or the processing unit 46 may include memory (e.g., a cache) for implementing a picture buffer or the like.

In some examples, the encoder 20 implemented by the logic circuit may include a picture buffer (e.g., implemented by the processing unit 46 or the memory 44) and a graphics processing unit (e.g., implemented by the processing unit 46). The graphics processing unit may be communicatively coupled to the picture buffer and may include the encoder 20 implemented by the logic circuit 47 to implement the various modules discussed with reference to FIG. 2 and/or any other encoder system or subsystem described herein. The logic circuit may be used to perform the various operations discussed herein.

In some examples, the decoder 30 may be implemented by the logic circuit 47 in a similar manner to implement the various modules discussed with reference to the decoder 30 of FIG. 3 and/or any other decoder system or subsystem described herein. In some examples, the decoder 30 implemented by the logic circuit may include a picture buffer (implemented by the processing unit 46 or the memory 44) and a graphics processing unit (e.g., implemented by the processing unit 46), where the graphics processing unit may be communicatively coupled to the picture buffer and may include the decoder 30 implemented by the logic circuit 47 to implement the various modules discussed with reference to FIG. 3 and/or any other decoder system or subsystem described herein.

In some examples, the antenna 42 may be used to receive an encoded bitstream of video data. As discussed, the encoded bitstream may include data related to encoded video frames discussed herein, indicators, index values, mode selection data, etc., such as data related to coding partitions (e.g., transform coefficients or quantized transform coefficients, optional indicators (as discussed), and/or data defining coding partitions). The video coding system 40 may also include the decoder 30 coupled to the antenna 42 and used to decode the encoded bitstream. The display device 45 is used to present video frames.

It should be understood that, for the examples described with reference to the encoder 20 in the embodiments of the present application, the decoder 30 may be used to perform the reverse process. Regarding signaling syntax elements, the decoder 30 may be configured to receive and parse such syntax elements and decode the related video data accordingly. In some examples, the encoder 20 may entropy-encode the syntax elements into an encoded video bitstream; in such examples, the decoder 30 may parse such syntax elements and decode the related video data accordingly.

It should be noted that the encoder 20 and the decoder 30 in the embodiments of the present application may be encoders/decoders corresponding to video standard protocols such as H.263, H.264, HEVC, MPEG-2, MPEG-4, VP8, or VP9, or to next-generation video standard protocols (such as H.266).
Referring to FIG. 2, FIG. 2 shows a schematic/conceptual block diagram of an example of an encoder 20 for implementing embodiments of the present application. In the example of FIG. 2, the encoder 20 includes a residual calculation unit 204, a transform processing unit 206, a quantization unit 208, an inverse quantization unit 210, an inverse transform processing unit 212, a reconstruction unit 214, a buffer 216, a loop filter unit 220, a decoded picture buffer (DPB) 230, a prediction processing unit 260, and an entropy encoding unit 270. The prediction processing unit 260 may include an inter prediction unit 244, an intra prediction unit 254, and a mode selection unit 262. The inter prediction unit 244 may include a motion estimation unit and a motion compensation unit (not shown). The encoder 20 shown in FIG. 2 may also be called a hybrid video encoder or a video encoder according to a hybrid video codec.

For example, the residual calculation unit 204, the transform processing unit 206, the quantization unit 208, the prediction processing unit 260, and the entropy encoding unit 270 form the forward signal path of the encoder 20, while, for example, the inverse quantization unit 210, the inverse transform processing unit 212, the reconstruction unit 214, the buffer 216, the loop filter 220, the decoded picture buffer (DPB) 230, and the prediction processing unit 260 form the backward signal path of the encoder, which corresponds to the signal path of the decoder (see decoder 30 in FIG. 3).

The encoder 20 receives, for example via an input 202, a picture 201 or a picture block 203 of a picture 201, e.g., a picture in a sequence of pictures forming a video or video sequence. The picture block 203 may also be called the current picture block or the picture block to be encoded, and the picture 201 may be called the current picture or the picture to be encoded (particularly when distinguishing the current picture from other pictures in video coding, e.g., previously encoded and/or decoded pictures of the same video sequence, i.e., the video sequence that also includes the current picture).

An embodiment of the encoder 20 may include a partitioning unit (not shown in FIG. 2) configured to partition the picture 201 into multiple blocks such as the picture block 203, usually into multiple non-overlapping blocks. The partitioning unit may be configured to use the same block size and a corresponding grid defining the block size for all pictures of the video sequence, or to change the block size between pictures, subsets, or groups of pictures and partition each picture into corresponding blocks.

In one example, the prediction processing unit 260 of the encoder 20 may be configured to perform any combination of the partitioning techniques described above.

Like the picture 201, the picture block 203 is also or may be regarded as a two-dimensional array or matrix of samples with sample values, albeit of smaller size than the picture 201. In other words, the picture block 203 may include, for example, one sample array (e.g., a luma array in the case of a black-and-white picture 201), three sample arrays (e.g., one luma array and two chroma arrays in the case of a color picture), or any other number and/or kind of arrays depending on the applied color format. The number of samples in the horizontal and vertical directions (or axes) of the picture block 203 defines its size.

The encoder 20 shown in FIG. 2 is configured to encode the picture 201 block by block, e.g., performing encoding and prediction on each picture block 203.

The residual calculation unit 204 is configured to calculate a residual block 205 based on the picture block 203 and a prediction block 265 (further details of the prediction block 265 are provided below), e.g., by subtracting sample values of the prediction block 265 from sample values of the picture block 203 sample by sample (pixel by pixel) to obtain the residual block 205 in the sample domain.

The transform processing unit 206 is configured to apply a transform, such as a discrete cosine transform (DCT) or a discrete sine transform (DST), to the sample values of the residual block 205 to obtain transform coefficients 207 in the transform domain. The transform coefficients 207 may also be called transform residual coefficients and represent the residual block 205 in the transform domain.

The transform processing unit 206 may be configured to apply integer approximations of DCT/DST, such as the transforms specified for HEVC/H.265. Compared with an orthogonal DCT transform, such integer approximations are usually scaled by a certain factor. To maintain the norm of the residual block processed by forward and inverse transforms, an additional scaling factor is applied as part of the transform process. The scaling factor is usually selected based on certain constraints, e.g., being a power of 2 for shift operations, and as a trade-off among the bit depth of the transform coefficients, accuracy, and implementation cost. For example, a specific scaling factor is specified for the inverse transform on the decoder 30 side by, e.g., the inverse transform processing unit 212 (and for the corresponding inverse transform on the encoder 20 side by, e.g., the inverse transform processing unit 212), and correspondingly a corresponding scaling factor may be specified for the forward transform on the encoder 20 side by the transform processing unit 206.

The quantization unit 208 is configured to quantize the transform coefficients 207, e.g., by applying scalar quantization or vector quantization, to obtain quantized transform coefficients 209, which may also be called quantized residual coefficients 209. The quantization process may reduce the bit depth associated with some or all of the transform coefficients 207; for example, an n-bit transform coefficient may be rounded down to an m-bit transform coefficient during quantization, where n is greater than m. The degree of quantization may be modified by adjusting the quantization parameter (QP). For example, for scalar quantization, different scales may be applied to achieve finer or coarser quantization, where a smaller quantization step corresponds to finer quantization and a larger quantization step corresponds to coarser quantization. A suitable quantization step may be indicated by the quantization parameter (QP), which may, for example, be an index into a predefined set of suitable quantization steps: a smaller quantization parameter may correspond to fine quantization (a smaller step) and a larger quantization parameter to coarse quantization (a larger step), or vice versa. Quantization may include division by the quantization step, and the corresponding quantization or inverse quantization, performed, e.g., by the inverse quantization unit 210, may include multiplication by the quantization step. Embodiments according to some standards such as HEVC may use the quantization parameter to determine the quantization step; in general, the quantization step may be calculated based on the quantization parameter using a fixed-point approximation of an equation containing division. Additional scaling factors may be introduced for quantization and dequantization to restore the norm of the residual block, which may have been modified because of the scales used in the fixed-point approximation of the equations for the quantization step and the quantization parameter. In one example implementation, the scales of the inverse transform and dequantization may be combined. Alternatively, a customized quantization table may be used and signaled from the encoder to the decoder, e.g., in the bitstream. Quantization is a lossy operation, and the loss increases with the quantization step size.
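As a concrete illustration of the QP-to-step-size relationship just described, the sketch below uses the well-known H.264/HEVC convention in which the quantization step roughly doubles every 6 QP values; the exact per-standard base table is simplified here to the common approximation Qstep ≈ 2^((QP−4)/6).

```python
def quant_step(qp: int) -> float:
    """Approximate H.264/HEVC quantization step for a given QP (0..51).

    Simplified approximation: the step doubles every 6 QP values.
    """
    return 2.0 ** ((qp - 4) / 6.0)

# Example: QP 22 -> step ~8; QP 28 -> step ~16 (one doubling, 6 QP apart).
```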
The inverse quantization unit 210 is configured to apply the inverse quantization of the quantization unit 208 to the quantized coefficients to obtain dequantized coefficients 211, e.g., by applying, based on or using the same quantization step as the quantization unit 208, the inverse of the quantization scheme applied by the quantization unit 208. The dequantized coefficients 211 may also be called dequantized residual coefficients 211 and correspond to the transform coefficients 207, although they are usually not identical to the transform coefficients because of the loss caused by quantization.

The inverse transform processing unit 212 is configured to apply the inverse of the transform applied by the transform processing unit 206, e.g., an inverse discrete cosine transform (DCT) or an inverse discrete sine transform (DST), to obtain an inverse transform block 213 in the sample domain. The inverse transform block 213 may also be called an inverse-transformed dequantized block 213 or an inverse-transformed residual block 213.

The reconstruction unit 214 (e.g., summer 214) is configured to add the inverse transform block 213 (i.e., the reconstructed residual block 213) to the prediction block 265 to obtain a reconstructed block 215 in the sample domain, e.g., by adding sample values of the reconstructed residual block 213 to sample values of the prediction block 265.

Optionally, a buffer unit 216 ("buffer" 216 for short), e.g., a line buffer 216, is configured to buffer or store the reconstructed block 215 and the corresponding sample values, e.g., for intra prediction. In other embodiments, the encoder may be configured to use the unfiltered reconstructed blocks and/or the corresponding sample values stored in the buffer unit 216 for any kind of estimation and/or prediction, such as intra prediction.

For example, an embodiment of the encoder 20 may be configured such that the buffer unit 216 is used not only for storing the reconstructed block 215 for intra prediction 254 but also for the loop filter unit 220 (not shown in FIG. 2), and/or such that the buffer unit 216 and the decoded picture buffer unit 230 form one buffer. Other embodiments may use filtered blocks 221 and/or blocks or samples from the decoded picture buffer 230 (neither shown in FIG. 2) as the input or basis for intra prediction 254.

The loop filter unit 220 ("loop filter" 220 for short) is configured to filter the reconstructed block 215 to obtain a filtered block 221, so as to smooth pixel transitions or improve video quality. The loop filter unit 220 is intended to represent one or more loop filters, such as a deblocking filter, a sample-adaptive offset (SAO) filter, or other filters such as a bilateral filter, an adaptive loop filter (ALF), a sharpening or smoothing filter, or a collaborative filter. Although the loop filter unit 220 is shown as an in-loop filter in FIG. 2, in other configurations it may be implemented as a post-loop filter. The filtered block 221 may also be called a filtered reconstructed block 221. The decoded picture buffer 230 may store the reconstructed coding block after the loop filter unit 220 performs the filtering operations on it.

An embodiment of the encoder 20 (correspondingly, the loop filter unit 220) may be configured to output loop filter parameters (e.g., sample-adaptive offset information), e.g., directly or after entropy encoding by the entropy encoding unit 270 or any other entropy encoding unit, so that, for example, the decoder 30 can receive and apply the same loop filter parameters for decoding.

The decoded picture buffer (DPB) 230 may be a reference picture memory that stores reference picture data for use by the encoder 20 in encoding video data. The DPB 230 may be formed by any of a variety of memory devices, such as dynamic random access memory (DRAM) (including synchronous DRAM (SDRAM), magnetoresistive RAM (MRAM), and resistive RAM (RRAM)) or other types of memory devices. The DPB 230 and the buffer 216 may be provided by the same memory device or by separate memory devices. In one example, the decoded picture buffer (DPB) 230 is configured to store the filtered block 221. The decoded picture buffer 230 may further be configured to store other previously filtered blocks, such as previously reconstructed and filtered blocks 221, of the same current picture or of different pictures such as previously reconstructed pictures, and may provide complete previously reconstructed, i.e., decoded, pictures (and corresponding reference blocks and samples) and/or a partially reconstructed current picture (and corresponding reference blocks and samples), e.g., for inter prediction. In one example, if the reconstructed block 215 is reconstructed without in-loop filtering, the decoded picture buffer (DPB) 230 is configured to store the reconstructed block 215.

The prediction processing unit 260, also called the block prediction processing unit 260, is configured to receive or obtain the picture block 203 (the current picture block 203 of the current picture 201) and reconstructed picture data, e.g., reference samples of the same (current) picture from the buffer 216 and/or reference picture data 231 of one or more previously decoded pictures from the decoded picture buffer 230, and to process such data for prediction, i.e., to provide a prediction block 265, which may be an inter prediction block 245 or an intra prediction block 255.
The mode selection unit 262 may be configured to select a prediction mode (e.g., an intra or inter prediction mode) and/or the corresponding prediction block 245 or 255 to be used as the prediction block 265 for calculating the residual block 205 and reconstructing the reconstructed block 215.

An embodiment of the mode selection unit 262 may be configured to select a prediction mode (e.g., from those supported by the prediction processing unit 260) that provides the best match, i.e., the smallest residual (the smallest residual means better compression for transmission or storage), or the smallest signaling overhead (the smallest signaling overhead means better compression for transmission or storage), or that considers or balances both. The mode selection unit 262 may be configured to determine the prediction mode based on rate-distortion optimization (RDO), i.e., to select the prediction mode providing the smallest rate-distortion cost, or to select a prediction mode whose associated rate distortion at least meets a prediction-mode selection criterion.

The prediction processing performed by an example of the encoder 20 (e.g., by the prediction processing unit 260) and the mode selection (e.g., by the mode selection unit 262) are explained in detail below.

As described above, the encoder 20 is configured to determine or select the best or optimal prediction mode from a (predetermined) set of prediction modes. The set of prediction modes may include, for example, intra prediction modes and/or inter prediction modes.

The set of intra prediction modes may include 35 different intra prediction modes, e.g., non-directional modes such as DC (or mean) mode and planar mode, or directional modes as defined in H.265; or it may include 67 different intra prediction modes, e.g., non-directional modes such as DC (or mean) mode and planar mode, or directional modes as defined in the developing H.266.

In a possible implementation, the set of inter prediction modes depends on the available reference pictures (i.e., e.g., at least some of the decoded pictures stored in the DPB 230 as described above) and on other inter prediction parameters, e.g., on whether the entire reference picture or only a part of it, such as a search window region around the region of the current block, is used to search for the best matching reference block, and/or, e.g., on whether pixel interpolation such as half-pixel and/or quarter-pixel interpolation is applied. The set of inter prediction modes may include, for example, the advanced motion vector prediction (AMVP) mode and the merge mode. In specific implementations, the set of inter prediction modes may include the improved control-point-based AMVP mode and the improved control-point-based merge mode of the embodiments of the present application. In one example, the intra prediction unit 254 may be configured to perform any combination of the inter prediction techniques described below.

In addition to the above prediction modes, skip mode and/or direct mode may also be applied in the embodiments of the present application.

The prediction processing unit 260 may be further configured to partition the picture block 203 into smaller block partitions or sub-blocks, e.g., by iteratively using quad-tree (QT) partitioning, binary-tree (BT) partitioning, or triple-tree (TT) partitioning, or any combination thereof, and to perform prediction, e.g., for each of the block partitions or sub-blocks, where the mode selection includes selecting the tree structure of the partitioned picture block 203 and selecting the prediction mode applied to each of the block partitions or sub-blocks.

The inter prediction unit 244 may include a motion estimation (ME) unit (not shown in FIG. 2) and a motion compensation (MC) unit (not shown in FIG. 2). The motion estimation unit is configured to receive or obtain the picture block 203 (the current picture block 203 of the current picture 201) and the decoded picture 231, or at least one or more previously reconstructed blocks, e.g., reconstructed blocks of one or more other/different previously decoded pictures 231, for motion estimation. For example, the video sequence may include the current picture and previously decoded pictures 31; in other words, the current picture and the previously decoded pictures 31 may be part of, or form, the sequence of pictures forming the video sequence.

For example, the encoder 20 may be configured to select a reference block from multiple reference blocks of the same or different pictures among multiple other pictures, and to provide the reference picture and/or the offset (spatial offset) between the position (X, Y coordinates) of the reference block and the position of the current block to the motion estimation unit (not shown in FIG. 2) as inter prediction parameters. This offset is also called a motion vector (MV).

The motion compensation unit is configured to obtain the inter prediction parameters and to perform inter prediction based on or using them to obtain the inter prediction block 245. The motion compensation performed by the motion compensation unit (not shown in FIG. 2) may include fetching or generating a prediction block based on the motion/block vector determined by motion estimation (possibly performing interpolation to sub-pixel accuracy). Interpolation filtering may generate additional pixel samples from known pixel samples, thereby potentially increasing the number of candidate prediction blocks available for encoding a picture block. Upon receiving the motion vector for the PU of the current picture block, the motion compensation unit 246 may locate the prediction block to which the motion vector points in one of the reference picture lists. The motion compensation unit 246 may also generate syntax elements associated with the blocks and the video slice for use by the decoder 30 in decoding the picture blocks of the video slice.

Specifically, the inter prediction unit 244 may transmit syntax elements to the entropy encoding unit 270, the syntax elements including inter prediction parameters (such as indication information of the inter prediction mode selected for prediction of the current block after traversing multiple inter prediction modes). In a possible application scenario, if there is only one inter prediction mode, the inter prediction parameters may also be omitted from the syntax elements, in which case the decoder side 30 may directly use the default prediction mode for decoding. It will be understood that the inter prediction unit 244 may be configured to perform any combination of inter prediction techniques.

The intra prediction unit 254 is configured to obtain, e.g., receive, the picture block 203 (the current picture block) of the same picture and one or more previously reconstructed blocks, e.g., reconstructed neighboring blocks, for intra estimation. The encoder 20 may be configured, for example, to select an intra prediction mode from multiple (predetermined) intra prediction modes.

An embodiment of the encoder 20 may be configured to select the intra prediction mode based on an optimization criterion, e.g., based on the smallest residual (e.g., the intra prediction mode providing the prediction block 255 most similar to the current picture block 203) or the smallest rate distortion.

The intra prediction unit 254 is further configured to determine the intra prediction block 255 based on intra prediction parameters such as the selected intra prediction mode. In any case, after selecting the intra prediction mode for a block, the intra prediction unit 254 is also configured to provide the intra prediction parameters, i.e., information indicating the selected intra prediction mode for the block, to the entropy encoding unit 270. In one example, the intra prediction unit 254 may be configured to perform any combination of intra prediction techniques.

Specifically, the intra prediction unit 254 may transmit syntax elements including intra prediction parameters (such as indication information of the intra prediction mode selected for prediction of the current block after traversing multiple intra prediction modes) to the entropy encoding unit 270. In a possible application scenario, if there is only one intra prediction mode, the intra prediction parameters may be omitted from the syntax elements, in which case the decoder side 30 may directly use the default prediction mode for decoding.

The entropy encoding unit 270 is configured to apply an entropy encoding algorithm or scheme (e.g., a variable length coding (VLC) scheme, a context adaptive VLC (CAVLC) scheme, an arithmetic coding scheme, context adaptive binary arithmetic coding (CABAC), syntax-based context-adaptive binary arithmetic coding (SBAC), probability interval partitioning entropy (PIPE) coding, or another entropy coding method or technique) to one or all of the quantized residual coefficients 209, the inter prediction parameters, the intra prediction parameters, and/or the loop filter parameters (or not to apply it), to obtain encoded picture data 21 that can be output via an output 272, e.g., in the form of an encoded bitstream 21. The encoded bitstream may be transmitted to the video decoder 30 or archived for later transmission or retrieval by the video decoder 30. The entropy encoding unit 270 may also be used to entropy-encode other syntax elements of the current video slice being encoded.

Other structural variants of the video encoder 20 may be used to encode the video stream. For example, a non-transform-based encoder 20 may directly quantize the residual signal without the transform processing unit 206 for certain blocks or frames. In another implementation, the encoder 20 may have the quantization unit 208 and the inverse quantization unit 210 combined into a single unit.

Specifically, in the embodiments of the present application, the encoder 20 may be used to implement the video image processing methods described in the embodiments below.

It should be understood that other structural variations of the video encoder 20 may be used to encode the video stream. For example, for some picture blocks or picture frames, the video encoder 20 may directly quantize the residual signal without processing by the transform processing unit 206 and, accordingly, without processing by the inverse transform processing unit 212; or, for some picture blocks or picture frames, the video encoder 20 does not generate residual data and, accordingly, does not need processing by the transform processing unit 206, the quantization unit 208, the inverse quantization unit 210, or the inverse transform processing unit 212; or, the video encoder 20 may directly store the reconstructed picture block as a reference block without processing by the filter 220; or, the quantization unit 208 and the inverse quantization unit 210 in the video encoder 20 may be combined. The loop filter 220 is optional, and in the case of lossless compression coding, the transform processing unit 206, the quantization unit 208, the inverse quantization unit 210, and the inverse transform processing unit 212 are optional. It should be understood that, depending on the application scenario, the inter prediction unit 244 and the intra prediction unit 254 may be selectively enabled.
Referring to FIG. 3, FIG. 3 shows a schematic/conceptual block diagram of an example of a decoder 30 for implementing embodiments of the present application. The video decoder 30 is configured to receive encoded picture data (e.g., an encoded bitstream) 21, e.g., encoded by the encoder 20, to obtain a decoded picture 231. During the decoding process, the video decoder 30 receives video data from the video encoder 20, e.g., an encoded video bitstream representing picture blocks of an encoded video slice and associated syntax elements.

In the example of FIG. 3, the decoder 30 includes an entropy decoding unit 304, an inverse quantization unit 310, an inverse transform processing unit 312, a reconstruction unit 314 (e.g., summer 314), a buffer 316, a loop filter 320, a decoded picture buffer 330, and a prediction processing unit 360. The prediction processing unit 360 may include an inter prediction unit 344, an intra prediction unit 354, and a mode selection unit 362. In some examples, the video decoder 30 may perform a decoding pass that is substantially the inverse of the encoding pass described with reference to the video encoder 20 of FIG. 2.

The entropy decoding unit 304 is configured to perform entropy decoding on the encoded picture data 21 to obtain, e.g., quantized coefficients 309 and/or decoded encoding parameters (not shown in FIG. 3), e.g., any or all of (decoded) inter prediction parameters, intra prediction parameters, loop filter parameters, and/or other syntax elements. The entropy decoding unit 304 is further configured to forward the inter prediction parameters, the intra prediction parameters, and/or the other syntax elements to the prediction processing unit 360. The video decoder 30 may receive syntax elements at the video slice level and/or the video block level.

The inverse quantization unit 310 may be functionally identical to the inverse quantization unit 210, the inverse transform processing unit 312 to the inverse transform processing unit 212, the reconstruction unit 314 to the reconstruction unit 214, the buffer 316 to the buffer 216, the loop filter 320 to the loop filter 220, and the decoded picture buffer 330 to the decoded picture buffer 230.

The prediction processing unit 360 may include an inter prediction unit 344 and an intra prediction unit 354, where the inter prediction unit 344 may be functionally similar to the inter prediction unit 244 and the intra prediction unit 354 functionally similar to the intra prediction unit 254. The prediction processing unit 360 is typically configured to perform block prediction and/or obtain a prediction block 365 from the encoded data 21 and to receive or obtain (explicitly or implicitly) prediction-related parameters and/or information about the selected prediction mode, e.g., from the entropy decoding unit 304.

When a video slice is encoded as an intra-coded (I) slice, the intra prediction unit 354 of the prediction processing unit 360 is configured to generate a prediction block 355 for the picture block of the current video slice based on the signaled intra prediction mode and on data from previously decoded blocks of the current frame or picture. When a video frame is encoded as an inter-coded (i.e., B or P) slice, the inter prediction unit 344 (e.g., the motion compensation unit) of the prediction processing unit 360 is configured to generate a prediction block 345 for the video block of the current video slice based on the motion vectors and other syntax elements received from the entropy decoding unit 304. For inter prediction, the prediction block may be generated from one of the reference pictures in one of the reference picture lists. The video decoder 30 may construct the reference frame lists, List 0 and List 1, using default construction techniques based on the reference pictures stored in the DPB 330.

The prediction processing unit 360 is configured to determine the prediction information for the video block of the current video slice by parsing the motion vectors and other syntax elements, and to use the prediction information to generate the prediction block for the current video block being decoded. In one example of the present application, the prediction processing unit 360 uses some of the received syntax elements to determine the prediction mode (e.g., intra or inter prediction) used to encode the video blocks of the video slice, the inter prediction slice type (e.g., B slice, P slice, or GPB slice), the construction information of one or more of the reference picture lists for the slice, the motion vector of each inter-encoded video block of the slice, the inter prediction status of each inter-encoded video block of the slice, and other information, so as to decode the video blocks of the current video slice. In another example of the present disclosure, the syntax elements received by the video decoder 30 from the bitstream include syntax elements in one or more of an adaptive parameter set (APS), a sequence parameter set (SPS), a picture parameter set (PPS), or a slice header.

The inverse quantization unit 310 may be configured to inversely quantize (i.e., dequantize) the quantized transform coefficients provided in the bitstream and decoded by the entropy decoding unit 304. The inverse quantization process may include using the quantization parameter calculated by the video encoder 20 for each video block in the video slice to determine the degree of quantization that should be applied and, likewise, the degree of inverse quantization that should be applied.

The inverse transform processing unit 312 is configured to apply an inverse transform (e.g., an inverse DCT, an inverse integer transform, or a conceptually similar inverse transform process) to the transform coefficients so as to generate a residual block in the pixel domain.

The reconstruction unit 314 (e.g., summer 314) is configured to add the inverse transform block 313 (i.e., the reconstructed residual block 313) to the prediction block 365 to obtain a reconstructed block 315 in the sample domain, e.g., by adding sample values of the reconstructed residual block 313 to sample values of the prediction block 365.

The loop filter unit 320 (during or after the encoding loop) is configured to filter the reconstructed block 315 to obtain a filtered block 321, so as to smooth pixel transitions or improve video quality. In one example, the loop filter unit 320 may be configured to perform any combination of the filtering techniques described below. The loop filter unit 320 is intended to represent one or more loop filters, such as a deblocking filter, a sample-adaptive offset (SAO) filter, or other filters such as a bilateral filter, an adaptive loop filter (ALF), a sharpening or smoothing filter, or a collaborative filter. Although the loop filter unit 320 is shown as an in-loop filter in FIG. 3, in other configurations it may be implemented as a post-loop filter.

The decoded video blocks 321 in a given frame or picture are then stored in the decoded picture buffer 330, which stores the reference pictures used for subsequent motion compensation.

The decoder 30 is configured to output the decoded picture 31, e.g., via an output 332, for presentation to or viewing by a user.

Other variants of the video decoder 30 may be used to decode the compressed bitstream. For example, the decoder 30 may generate the output video stream without the loop filter unit 320; e.g., a non-transform-based decoder 30 may directly inverse-quantize the residual signal without the inverse transform processing unit 312 for certain blocks or frames. In another implementation, the video decoder 30 may have the inverse quantization unit 310 and the inverse transform processing unit 312 combined into a single unit.

Specifically, in the embodiments of the present application, the decoder 30 is used to implement the video image processing methods described in the embodiments below.

It should be understood that other structural variations of the video decoder 30 may be used to decode the encoded video bitstream. For example, the video decoder 30 may generate the output video stream without processing by the filter 320; or, for some picture blocks or picture frames, the entropy decoding unit 304 of the video decoder 30 does not decode quantized coefficients, and accordingly no processing by the inverse quantization unit 310 and the inverse transform processing unit 312 is needed. The loop filter 320 is optional, and in the case of lossless compression, the inverse quantization unit 310 and the inverse transform processing unit 312 are optional. It should be understood that, depending on the application scenario, the inter prediction unit and the intra prediction unit may be selectively enabled.
Referring to FIG. 4, FIG. 4 is a schematic structural diagram of a video coding device 400 (e.g., a video encoding device 400 or a video decoding device 400) provided by an embodiment of the present application. The video coding device 400 is suitable for implementing the embodiments described herein. In one embodiment, the video coding device 400 may be a video decoder (e.g., the decoder 30 of FIG. 1A) or a video encoder (e.g., the encoder 20 of FIG. 1A). In another embodiment, the video coding device 400 may be one or more components of the decoder 30 or the encoder 20 of FIG. 1A described above.

The video coding device 400 includes: an ingress port 410 and a receiving unit (Rx) 420 for receiving data; a processor, logic unit, or central processing unit (CPU) 430 for processing data; a transmitter unit (Tx) 440 and an egress port 450 for transmitting data; and a memory 460 for storing data. The video coding device 400 may also include optical-to-electrical conversion components and electrical-to-optical (EO) components coupled to the ingress port 410, the receiving unit 420, the transmitter unit 440, and the egress port 450 for the egress or ingress of optical or electrical signals.

The processor 430 is implemented by hardware and software. It may be implemented as one or more CPU chips, cores (e.g., multi-core processors), FPGAs, ASICs, and DSPs. The processor 430 communicates with the ingress port 410, the receiving unit 420, the transmitter unit 440, the egress port 450, and the memory 460. The processor 430 includes a coding module 470 (e.g., an encoding module 470 or a decoding module 470). The encoding/decoding module 470 implements the embodiments disclosed herein, so as to implement the methods provided by the embodiments of the present application; for example, the encoding/decoding module 470 implements, processes, or provides various encoding operations. Thus, the encoding/decoding module 470 provides a substantial improvement to the functionality of the video coding device 400 and affects the transition of the video coding device 400 to different states. Alternatively, the encoding/decoding module 470 is implemented as instructions stored in the memory 460 and executed by the processor 430.

The memory 460 includes one or more disks, tape drives, and solid-state drives, and may be used as an overflow data storage device for storing programs when such programs are selectively executed and for storing instructions and data read during program execution. The memory 460 may be volatile and/or non-volatile, and may be read-only memory (ROM), random access memory (RAM), ternary content-addressable memory (TCAM), and/or static random access memory (SRAM).
Referring to FIG. 5, FIG. 5 is a simplified block diagram of an apparatus 500 that may be used as either or both of the source device 12 and the destination device 14 of FIG. 1A, according to an exemplary embodiment. The apparatus 500 may implement the techniques of the present application. In other words, FIG. 5 is a schematic block diagram of one implementation of an encoding device or decoding device (coding device 500 for short) of an embodiment of the present application. The coding device 500 may include a processor 510, a memory 530, and a bus system 550. The processor and the memory are connected via the bus system; the memory is used to store instructions, and the processor is used to execute the instructions stored in the memory. The memory of the coding device stores program code, and the processor may call the program code stored in the memory to perform the various video encoding or decoding methods described in the present application, in particular the various methods of random-access stream quality control. To avoid repetition, they are not described in detail here.

In this embodiment of the present application, the processor 510 may be a central processing unit (CPU), or the processor 510 may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.

The memory 530 may include a read-only memory (ROM) device or a random access memory (RAM) device; any other suitable type of storage device may also be used as the memory 530. The memory 530 may include code and data 531 accessed by the processor 510 using the bus 550. The memory 530 may further include an operating system 533 and application programs 535, the application programs 535 including at least one program that allows the processor 510 to perform the video encoding or decoding methods described in the present application (in particular, the video image processing methods described herein). For example, the application programs 535 may include applications 1 to N, which further include a video encoding or decoding application (video coding application for short) that performs the video encoding or decoding methods described in the present application.

In addition to a data bus, the bus system 550 may include a power bus, a control bus, a status signal bus, and the like. For clarity, however, the various buses are all labeled as the bus system 550 in the figure.

Optionally, the coding device 500 may also include one or more output devices, such as a display 570. In one example, the display 570 may be a touch-sensitive display that combines a display with a touch-sensing unit operable to sense touch input. The display 570 may be connected to the processor 510 via the bus 550.
The technical solutions of the embodiments of the present application are elaborated below.

First, some technical terms or concepts applicable to the embodiments of the present application are introduced.

I-frame: in the field of video coding, a frame that can be decoded independently without relying on other frames, generally referred to as an "I-frame". The coding prediction mode of all blocks of an I-frame is intra prediction.

P-frame: in the field of video coding, a frame that references a forward frame and is marked as a P-frame type in the code stream, generally referred to as a "P-frame".

B-frame: in the field of video coding, a frame that can reference both forward and backward frames, generally referred to as a "B-frame". The embodiments of the present application mainly target low-latency scenarios; therefore, to reduce latency, frames are generally encoded as "P-frames".

Random-access code stream: for convenience of description, in the embodiments of the present application, a video stream in which the frame type of all frames is I-frame, or in which all frames are in intra prediction mode, is referred to for short as a "random-access code stream". It should be noted that the random-access code stream may also be called the second code stream, etc.; the name is not limiting. The encoding process used to generate the random-access code stream is called second encoding.

Long group-of-pictures (GOP) stream: for convenience of description, in the embodiments of the present application, a video stream containing at least one P-frame between adjacent I-frames, or a video stream in which inter prediction mode is allowed, is referred to for short as a "long GOP stream". It should be noted that the long GOP stream may also be called the first code stream or the base stream, etc.; the name is not limiting. The encoding process used to generate the long GOP stream is called first encoding.

Reconstructed frame: during encoding, the encoding result of the previous frame needs to be decoded and saved for reference by subsequent frames; the decoded frame is generally called a "reconstructed frame". This reconstructed frame is identical to the frame decoded by the decoder, which is called encoder–decoder consistency. For convenience of description, in the embodiments of the present application, frames decoded by the decoder are also collectively called "reconstructed frames".

Strongly interactive scenario: an application scenario that can receive user input and give feedback in real time, such as gaming or interactive live streaming.

Camera position: cameras at one location belong to one camera position; when playback switches from the content of one camera position to that of another, a camera-position switch occurs.

View angle: the orientation of the lens, or the direction the user is looking, at a given moment; when the lens rotates or the user turns their head, a view-angle switch occurs.

Multi-camera shooting: using two or more cameras to shoot the same scene simultaneously from multiple angles and directions.

To reduce the latency of accessing (including decoding or playing) video content and meet the low-latency requirements of some application scenarios, the encoding side may provide two code streams for the same video content: one is a long GOP code stream, and the other is a random-access code stream in which the coding blocks of all frames are in intra prediction mode. When the video content needs to be accessed, the frame of the random-access stream at the access time is decoded first, and the decoding result of this frame is then used as a reference frame for the long GOP code stream of the video content, enabling fast access to the content.

Accessing video content includes initially accessing video content, or switching from one video content to another. For example, initial access may be starting to decode video content based on a detected user operation of clicking to start playing a live stream; switching from one content to another may be switching from decoding the video of one camera position to that of another, as shown in FIG. 6 below.

The encoding side provides two code streams for the same video content to reduce access latency. Taking the multi-camera sports-event scenario shown in FIG. 6 as an example: multiple cameras are arranged at different positions around the venue, e.g., the seven cameras A, B, C, D, E, F, and G shown in FIG. 6. Cameras at different positions can shoot the same scene from different angles to obtain a group of video signals, which may include the videos of camera positions A through G, each of which can serve as one video content. The videos of the multiple camera positions (A, B, C, D, E, F, and G) provide users with a multi-angle, three-dimensional visual experience. The encoding side can provide the two code streams described above for the video of each camera position (A, B, C, D, E, F, or G). The user can switch, through an appropriate form of interaction, from watching the camera video of one angle to that of another; based on the user operation, the decoding side switches from decoding the code stream of one camera position's video to decoding that of another. FIG. 7 is a schematic diagram of a decoded-frame trajectory when switching from the current video content to another video content according to an embodiment of the present application. For example, the current video content is that of camera position A and the other video content is that of camera position B. As shown in FIG. 7, the decoding side decodes frames number 0 (#0), 1 (#1), and 2 (#2) of the long GOP code stream of the current content; when decoding frame number 3 (#3) of the current content, a content switch occurs, i.e., decoding switches from the current content to the other content. The decoding side can obtain and decode the frame at the switching time in the random-access code stream of the other content, e.g., frame number 3 (#3) of the other content's random-access code stream as shown in FIG. 7. After decoding frame number 3 (#3) of the other content's random-access code stream, its decoding result (i.e., the reconstructed frame) is placed in the reference-frame list buffer. Decoding then proceeds to frame number 4 (#4) of the other content's long GOP code stream, which references the decoding result of frame number 3 (#3) of the random-access code stream, thereby completing decoding and accessing the other content. In this way, the decoding side neither needs to start decoding from the I-frame preceding the switching time in the other content's long GOP code stream (i.e., frame #0), nor needs to wait until the next I-frame (i.e., frame #9) to start decoding.
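A minimal sketch of the decoder-side switching flow just described is given below. The stream containers and the `decode_*` helpers are hypothetical stand-ins for a real decoder API, not functions of any particular library.

```python
def switch_content(long_gop_stream, random_access_stream, switch_frame_idx,
                   decode_intra_frame, decode_inter_frame):
    """Sketch: fast switch into new content at frame switch_frame_idx.

    decode_intra_frame / decode_inter_frame are hypothetical decoder hooks:
    the former decodes a full-intra random-access frame, the latter decodes a
    long-GOP frame given a reference-frame list.
    """
    # 1. Decode the random-access frame at the switching time.
    ra_frame = decode_intra_frame(random_access_stream[switch_frame_idx])
    reference_list = [ra_frame]          # put it into the reference buffer

    # 2. Continue with the long-GOP stream from the next frame onward,
    #    using the random-access reconstruction as the reference.
    decoded = []
    for packet in long_gop_stream[switch_frame_idx + 1:]:
        frame = decode_inter_frame(packet, reference_list)
        reference_list = [frame]         # single-frame reference, low delay
        decoded.append(frame)
    return decoded
```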
In the process of accessing video content, the reconstructed frame of the video content's random-access code stream is used as a reference frame for the video content's long GOP code stream. Although this enables fast access, it introduces an encoder–decoder inconsistency. When the quality of the random-access code stream differs significantly from that of the long GOP code stream, subjectively visible blocking artifacts and noticeable ghosting (artifact) effects tend to appear. Analysis shows that the main cause of this problem is that, during encoding of the long GOP code stream, the frames used for coding prediction are the reconstructed frames of the long GOP code stream itself, whereas when the long GOP code stream is accessed, the reconstructed frame of the random-access code stream is used instead; the reconstructed frames of the random-access code stream lack quality matching with the reconstructed frames of the long GOP code stream used during encoding.

To address this lack of quality matching between reconstructed frames of different code streams, the embodiments of the present application propose a video image processing method as described below, which may also be called a random-access quality control method, aiming to improve the decoding quality of accessed video content, reduce blocking artifacts, and eliminate some ghosting effects while satisfying low-latency access.
Before describing the technical solutions of the embodiments of the present application, the video image processing system of the embodiments is first described with reference to the drawings. Referring to FIG. 8, FIG. 8 is a schematic diagram of a video image processing system according to an embodiment of the present application. The system may include a server 801 and a terminal 802. The server 801 can communicate with the terminal 802; for example, the terminal 802 may communicate with the server 801 via wireless fidelity (WiFi) communication, Bluetooth communication, or cellular 2G/3G/4G/5G communication, among others. It should be understood that other communication modes, including future communication modes, may also be used between the server 801 and the terminal 802, which is not specifically limited. It should be noted that FIG. 8 uses one terminal as a schematic illustration; the system may include multiple terminals 802, which are not exemplified one by one in the embodiments.

The terminal 802 may be any of various types of devices configured with a display component. For example, the terminal 802 may be a terminal device such as a mobile phone, a tablet computer, a laptop, or a smart TV (in FIG. 8, the terminal is a mobile phone as an example). The terminal may also be a device for virtual-scene interaction, including VR glasses, AR devices, MR interactive devices, etc.; a wearable electronic device such as a smart watch or smart band; or a device carried in vehicles such as cars, driverless vehicles, drones, or industrial robots. The specific form of the terminal is not specifically limited in the embodiments of the present application.

The terminal may also be called user equipment (UE), a subscriber station, a mobile unit, a subscriber unit, a wireless unit, a remote unit, a mobile device, a wireless device, a wireless communication device, a remote device, a mobile subscriber station, a terminal device, an access terminal, a mobile terminal, a wireless terminal, a smart terminal, a remote terminal, a handset, a user agent, a mobile client, a client, or some other suitable term.

The server 801 may be one or more physical servers (one physical server is taken as an example in FIG. 8), a computer cluster, a virtual machine or cloud server in a cloud computing scenario, or the like.

In one example, the terminal 802 may be installed with a client; for example, the client may be a video-playback application, a live-streaming application (e.g., e-commerce live streaming or game live streaming), a video-conferencing application, or a gaming application, i.e., an application (APP) involving video encoding and decoding. The terminal 802 can run the client based on user operations (e.g., click, touch, slide, shake, voice control), access video content, and display the video content on the display component.

The server 801 may serve as the source device 12 described in the above embodiments and, through the video image processing method of the embodiments of the present application, provide two code streams of identical or comparable quality for the same video content, one being a long GOP code stream and the other a random-access code stream. The terminal 802 may serve as the destination device 14 described in the above embodiments and achieve fast access to the video content by decoding the random-access code stream and the long GOP code stream.

Specifically, the server 801 may obtain a video image, which may be a video image captured by a camera (the camera may be that of any camera position shown in FIG. 6) or a decoded video image. The server 801 can, through the video image processing method of the embodiments, provide two code streams of identical or comparable quality for the same video content, one being a long GOP code stream and the other a random-access code stream, and may provide both streams to the client. In one implementation, for a video-on-demand scenario, the client may support an on-demand function and the server 801 may store both streams. On receiving a video content request sent by the client, the server sends both streams to the client; the client may first decode the random-access code stream and then decode the long GOP code stream based on the decoding result of the random-access code stream, so as to access the content quickly. Alternatively, on receiving a video content request that requests playback from time t0: if the video content corresponding to time t0 is an instantaneous decoding refresh (IDR) frame in the long GOP code stream, the server may send only the long GOP code stream to the client, which can decode it directly to access the content; if the content corresponding to time t0 is not an IDR frame in the long GOP code stream, the server may send both the random-access code stream and the long GOP code stream containing the video content at and after time t0 to the client, and the client first decodes the random-access code stream and then decodes the long GOP code stream based on its decoding result, so as to access the content quickly. The client may also render and display the decoded video content, thereby presenting it to the user. In another implementation, for a live-streaming scenario, the client may support a live-content function; on receiving a request to access live video content, the server 801 may deliver both streams to the client. The client may first decode the random-access code stream and then decode the long GOP code stream based on its decoding result, so as to access the content quickly, and may render and display the decoded content to present it to the user.

In another example, an intermediate node 803 may further be arranged between the server 801 and the terminal 802. The server 801 may serve as the source device 12 described above and, through the video image processing method of the embodiments, provide two code streams of identical or comparable quality for the same video content, one being a long GOP code stream and the other a random-access code stream. The intermediate node 803 may serve as the destination device 14 described above, decoding the random-access code stream and the long GOP code stream. The intermediate node 803 provides the decoded video content to the terminal 802 to achieve fast access to the content. For example, the intermediate node 803 may be a node in a content delivery network (CDN).

In one implementation, for a live-streaming scenario, the client may support the live-content function; on receiving a request to access live video content, the server 801 may deliver both streams to the intermediate node 803. The intermediate node 803 may first decode the random-access code stream and then decode the long GOP code stream based on its decoding result, and provides the decoded video content to the terminal 802 for fast access. The client may also render and display the decoded content to present it to the user.

Exemplarily, the client sends the server a live-content access request requesting access to the live content from time t0. On receiving the request, the server may deliver the random-access code stream and the long GOP code stream containing the video content at and after time t0 to the intermediate node or the client, and the intermediate node or client may first decode the random-access code stream and then decode the long GOP code stream based on its decoding result, so as to access the content quickly.

It should be noted that, in the embodiments of the present application, any of the above applications of the terminal may be an application built into the terminal itself, or an application provided by a third-party service provider and installed by the user, which is not specifically limited.
It should be noted that the two code streams of identical or comparable quality involved in the embodiments of the present application mean that the reconstructed frames of the long GOP code stream and the corresponding reconstructed frames of the random-access code stream are of identical or comparable quality, so that decoding the long GOP code stream using the reconstructed frames of the random-access code stream as reference frames helps reduce blocking artifacts and eliminate some ghosting effects.

Reconstructed frames of the long GOP code stream and corresponding reconstructed frames of the random-access code stream of identical or comparable quality satisfy one or more of the following:

the difference between the reconstructed frame of the long GOP code stream and the corresponding reconstructed frame of the random-access code stream is less than a difference threshold; or

the similarity between the reconstructed frame of the long GOP code stream and the corresponding reconstructed frame of the random-access code stream is higher than a similarity threshold; or

the difference between the reconstructed frame of the long GOP code stream and the corresponding reconstructed frame of the random-access code stream is smaller than the difference between reconstructed frames of a long GOP code stream and a random-access code stream obtained by encoding the same video content with other division modes or encoding parameters; or

the similarity between the reconstructed frame of the long GOP code stream and the corresponding reconstructed frame of the random-access code stream is higher than the similarity between reconstructed frames of a long GOP code stream and a random-access code stream obtained by encoding the same video content with other division modes or encoding parameters; or

the differences of the pixel values at identical positions of the reconstructed frame of the long GOP code stream and the corresponding reconstructed frame of the random-access code stream are all smaller than a pixel-value threshold; for example, the pixel-value threshold may be 128, or it may be another value, which is not exemplified one by one in the embodiments.

A reconstructed frame of the long GOP code stream and the corresponding reconstructed frame of the random-access code stream specifically refer to reconstructed frames of the two streams at the same position. Taking the long GOP and random-access code streams of the other video content shown in FIG. 7 as an example, reconstructed frames of the two streams at the same position may include the reconstructed frame of frame number 3 (#3) of the long GOP code stream and the reconstructed frame of frame number 3 (#3) of the random-access code stream.
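The sketch below turns the criteria above into a simple predicate: it checks a mean-absolute-difference bound and the per-pixel cap of 128 that the description gives as an example. The particular MAD threshold of 4 follows the example luma value given later in this description; both numbers are otherwise assumptions.

```python
import numpy as np

def quality_matched(rec_long_gop: np.ndarray, rec_random_access: np.ndarray,
                    mad_threshold: float = 4.0, pixel_cap: int = 128) -> bool:
    """Sketch: do two co-located reconstructed frames count as quality-matched?"""
    a = rec_long_gop.astype(np.int32)
    b = rec_random_access.astype(np.int32)
    diff = np.abs(a - b)
    if diff.max() >= pixel_cap:                  # an individual pixel differs too much
        return False
    return float(diff.mean()) < mad_threshold   # mean absolute difference check
```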
The video image processing method of the embodiments of the present application may adjust the encoding of the second code stream based on the encoding result of the first code stream (e.g., the reconstructed image of the first code stream or the reconstructed image in the first encoding process, and/or the encoding information of the first encoding), so that the reconstructed frames of the first code stream and the corresponding reconstructed frames of the second code stream are of identical or comparable quality. With the video image processing method of the embodiments, an input image to be encoded can be encoded to generate the first and second code streams; that is, for the same video content, a first code stream and a second code stream of identical or comparable quality can be generated, where the reconstructed frames of the first code stream and the corresponding reconstructed frames of the second code stream have identical or comparable quality. The first code stream here may be the long GOP code stream described above, and the second code stream here may be the random-access code stream described above. Adjusting the encoding of the second code stream based on the encoding result of the first code stream may have different specific implementations; for example, the embodiments described below may be used to achieve identical or comparable quality of the reconstructed frames of the first code stream and the corresponding reconstructed frames of the second code stream.

The interval between two adjacent full-intra-frame-prediction frames in the first code stream is greater than the interval between two adjacent full-intra-frame-prediction frames in the second code stream.

Referring to FIG. 9, FIG. 9 is a schematic diagram of a video image processing method according to an embodiment of the present application. This embodiment may be performed by an encoding apparatus, which may be applied in the source device 12 of the above embodiments, e.g., the server 801 in the embodiment shown in FIG. 8. As shown in FIG. 9, the encoding apparatus may acquire an image to be encoded and perform first encoding on it to generate a first code stream; then, according to the encoding information of the first encoding and/or a first reconstructed image, perform second encoding in full intra-frame prediction mode on the image to be encoded or the first reconstructed image to generate a second code stream, where the first reconstructed image is a reconstructed image of the first code stream or a reconstructed image in the first encoding process.

It should be noted that, as shown in FIG. 9, the first encoding is used to generate the first code stream and the second encoding is used to generate the second code stream; the two code streams are generated by different encoding processes. As for the transmission of the first and second code streams, not shown in FIG. 9, they may be transmitted independently of each other or interleaved before transmission.

Several implementations of the embodiment shown in FIG. 9 are explained in detail below with the embodiments shown in FIG. 10 to FIG. 15. For the second encoding process, in the embodiments shown in FIG. 10 to FIG. 15, second encoding in full intra-frame prediction mode is performed on the image to be encoded or the first reconstructed image according to the first reconstructed image to generate the second code stream, or second encoding in full intra-frame prediction mode is performed on the image to be encoded or the first reconstructed image according to the first reconstructed image and the encoding information of the first encoding to generate the second code stream.
Referring to FIG. 10, FIG. 10 is a schematic flowchart of a video image processing method according to an embodiment of the present application. The method part of this embodiment may be performed by an encoding apparatus, which may be applied in the source device 12 of the above embodiments, e.g., the server 801 in the embodiment shown in FIG. 8. It should be understood that the series of steps or operations involved in this embodiment may be performed in various orders and/or simultaneously; their execution order is not limited by the step numbering shown in FIG. 10. The method shown in FIG. 10 may include the following steps:

S1001. Acquire an image to be encoded.

The image to be encoded in this embodiment may be a video image captured by a camera or other capture device, or a decoded video image; a decoded video image may be an image obtained by decoding a compressed video image.

In one implementation, the image to be encoded may be a source video image; thus, through the following steps, first encoding and second encoding are performed on the source video image to generate the first and second code streams, realizing frame-level synchronized output. In another implementation, the image to be encoded may be an image block obtained by dividing a source video image; thus, through the following steps, first and second encoding are performed on the divided image block to generate the first and second code streams, realizing block-level synchronized output.

S1002. Perform first encoding on the image to be encoded to generate a first code stream.

The first encoding may include one or more processes such as prediction, transform, quantization, and entropy coding. For example, prediction, transform, and quantization may be performed on the image to be encoded to generate first encoded data, which is then entropy-encoded to generate the first code stream including the first encoded data.

Optionally, the prediction mode of the first encoding may be inter prediction, in which case the first encoding of the image to be encoded generates a first code stream including P-frames or P-blocks. Alternatively, the prediction mode of the first encoding may be intra prediction, in which case the first encoding generates a first code stream including full-intra-prediction frames or I-blocks. For example, as with the long GOP code stream shown in FIG. 7, the first code stream here may include P-frames and full-intra-prediction frames.

S1003. Determine, according to a first reconstructed image, at least one of a first division mode or a first encoding parameter used for performing second encoding on the image to be encoded or the first reconstructed image, the first reconstructed image being a reconstructed image of the first code stream or a reconstructed image in the first encoding process.

In this embodiment, the first code stream may be decoded to obtain the first reconstructed image; alternatively, the reconstructed image in the first encoding process may be obtained, e.g., by performing dequantization, inverse transform, and other processing on the above first encoded data during the first encoding. At least one of the first division mode and/or the first encoding parameter for the second encoding of the image to be encoded or the first reconstructed image is determined according to the first reconstructed image.

The first encoding parameter may include, but is not limited to, a first quantization parameter (QP) or a first bit rate.

In one implementation, the image to be encoded may be a source video image; correspondingly, the first reconstructed image here is a first reconstructed frame obtained by decoding the first code stream corresponding to the source video image, or obtained during the first encoding process by dequantizing and inverse-transforming the first encoded data corresponding to the source video image. In another implementation, the image to be encoded may be an image block obtained by dividing a source video image; correspondingly, the first reconstructed image here is a first reconstructed image block obtained by decoding the first code stream corresponding to the divided image block, or obtained during the first encoding process by dequantizing and inverse-transforming the encoded data corresponding to the divided image block.

Optionally, in another implementation of S1003, at least one of the first division mode or the first encoding parameter used for the second encoding of the image to be encoded or the first reconstructed image is determined according to the first reconstructed image and the encoding information of the first encoding, where the encoding information of the first encoding may include one or more of the division mode of the first encoding, the quantization parameter of the first encoding, and the encoding distortion information of the first encoding.

S1004. Perform, according to at least one of the first division mode or the first encoding parameter, second encoding in full intra-frame prediction mode on the image to be encoded or the first reconstructed image to generate a second code stream.

According to the first division mode and/or the first encoding parameter, second encoding in full intra-frame prediction mode is performed on the image to be encoded or the first reconstructed image to generate the second code stream.

In other words, the prediction mode of the second encoding may be intra prediction. Thus, the second encoding is performed on the image to be encoded or the first reconstructed image to generate a second code stream including full-intra-prediction frames or I-blocks, e.g., the random-access code stream shown in FIG. 7; that is, the second code stream here may include full-intra-prediction frames.

The second encoding may include one or more processes such as prediction, transform, quantization, and entropy coding. For example, prediction, transform, and quantization may be performed on the image to be encoded or the first reconstructed image to generate second encoded data, which is then entropy-encoded to generate the second code stream including the second encoded data.

The second encoding differs from the first encoding in prediction mode, i.e., the second encoding is in full intra-frame prediction mode. It will of course be understood that other information, such as the encoding parameters, may also differ between the second and first encodings.

In some embodiments, the difference between the first reconstructed image and a second reconstructed image is less than a difference threshold, or the similarity between the first and second reconstructed images is higher than a similarity threshold, where the second reconstructed image is obtained by decoding the second code stream or, during the second encoding process, by performing dequantization, inverse transform, and other processing on the above second encoded data. The difference threshold or similarity threshold can be reasonably set as required.

The difference represents the difference between the features of the first reconstructed image and the features of the second reconstructed image, and may be measured by metrics such as mean absolute differences (MAD), sum of absolute differences (SAD), sum of squared differences (SSD), mean squared differences (MSD), or sum of absolute transformed differences (SATD). The larger these metrics, the larger the difference and the more the qualities of the first and second reconstructed images differ; the smaller these metrics, the smaller the difference and the more identical or comparable the qualities. Similarity represents the similarity between the features of the first reconstructed image and those of the second reconstructed image, and may likewise be measured by MAD, SAD, SSD, MSD, or SATD: the larger the metric, the lower the similarity and the more the qualities differ; the smaller the metric, the higher the similarity and the more identical or comparable the qualities.

The above difference threshold may be set to 0 or set as required. For example, when MAD is selected as the difference metric, the threshold may be set to 4 for the luma signal and to 2 for the chroma signal; correspondingly, when SAD, SSD, or MSD is selected as the difference metric, the luma thresholds may be set to 4xN, 16xN, and 16, respectively, and the chroma thresholds to 2xN, 4xN, and 4, respectively, where N is the total number of pixels within the object (which may be a coding block or an image) over which the difference is measured; when SATD is selected as the difference metric, the threshold may be set to 0. The similarity threshold may be set similarly.
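For reference, the sketch below implements the five metrics named above for two reconstructed blocks of equal size. The SATD variant builds a Sylvester-type Hadamard matrix, so it assumes power-of-two block dimensions; this is an implementation convenience, not a requirement of the application.

```python
import numpy as np

def _hadamard(n: int) -> np.ndarray:
    """Sylvester construction of an n x n Hadamard matrix (n a power of two)."""
    h = np.array([[1]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

def mad(a, b): return float(np.mean(np.abs(a.astype(np.int64) - b.astype(np.int64))))
def sad(a, b): return float(np.sum(np.abs(a.astype(np.int64) - b.astype(np.int64))))
def ssd(a, b): return float(np.sum((a.astype(np.int64) - b.astype(np.int64)) ** 2))
def msd(a, b): return float(np.mean((a.astype(np.int64) - b.astype(np.int64)) ** 2))

def satd(a, b):
    """Sum of absolute Hadamard-transformed differences over the block."""
    d = a.astype(np.int64) - b.astype(np.int64)
    h_rows, h_cols = _hadamard(d.shape[0]), _hadamard(d.shape[1])
    return float(np.sum(np.abs(h_rows @ d @ h_cols.T)))
```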
Optionally, a specific implementation of S1003 may be: determining a plurality of second division modes according to the first reconstructed image and selecting one of them as the first division mode; and/or determining a plurality of second encoding parameters according to the first reconstructed image and selecting one of them as the first encoding parameter.

In this way, the similarity between the first and second reconstructed images is the highest among the similarities between the first reconstructed image and a plurality of third reconstructed images, where the plurality of third reconstructed images are reconstructed images of multiple second encodings performed on the image to be encoded or the first reconstructed image according to the plurality of second division modes and/or the plurality of second encoding parameters. The plurality of third reconstructed images may be reconstructed images produced during the multiple second encoding processes, or reconstructed images of the multiple code streams obtained by the multiple second encodings.

Exemplarily, the encoding apparatus may perform second encoding multiple times on the image to be encoded or the first reconstructed image according to the plurality of second division modes and/or the plurality of second encoding parameters to generate a plurality of pieces of third encoded data. Then, during the multiple second encoding processes, the plurality of pieces of third encoded data are dequantized, inverse-transformed, etc., to obtain a plurality of third reconstructed images. By comparing the similarity of each third reconstructed image with the first reconstructed image, the one with the highest similarity is selected as the second reconstructed image; in other words, the similarity between the first and second reconstructed images is the highest among the similarities between the first reconstructed image and the plurality of third reconstructed images. The third encoded data corresponding to the third reconstructed image with the highest similarity is used as the second encoded data to generate the code stream including the second encoded data; alternatively, the code stream corresponding to the third reconstructed image with the highest similarity is used as the above second code stream.

Optionally, before S1003, the video image processing method of this embodiment may further include: judging whether the prediction mode of the first encoding is intra prediction. When the prediction mode of the first encoding is inter prediction, S1003 is performed. When the prediction mode of the first encoding is intra prediction, the first code stream is used as the second code stream, or the first encoded data is used as the second encoded data to generate the second code stream. In this way, when first-encoding the image to be encoded yields a first code stream including P-frames or P-blocks, S1003 and S1004 can be performed to second-encode the image to be encoded or the first reconstructed image, generating a second code stream including full-intra-prediction frames or I-blocks; when first-encoding the image to be encoded yields a first code stream including full-intra-prediction frames or I-blocks, S1003 and S1004 need not be performed, and the full-intra-prediction frame or I-block is directly used as the second encoded data, which improves encoding efficiency.

For example, taking the long GOP and random-access code streams of the other video content shown in FIG. 7 as an example: the encoding apparatus performs first encoding on the image to be encoded to generate the long GOP code stream including frame number 3 (#3), and judges whether the prediction mode of the first encoding is intra prediction. As shown in FIG. 7, the prediction mode of the first encoding here is inter prediction; the encoding apparatus can then perform S1003 and S1004 to second-encode the image to be encoded or the reconstructed frame of frame number 3 (#3), generating the random-access code stream of frame number 3 (#3), in which frame number 3 (#3) has intra prediction as its prediction mode.

It should be noted that, in some embodiments, when first-encoding the image to be encoded yields a first code stream including a full-intra-prediction frame or I-block, the full-intra-prediction frame or I-block may also not be used as the second encoded data, i.e., the second code stream may also exclude the full-intra-prediction frame or I-block; this can be reasonably configured according to video transmission requirements.

It should also be noted that the above second encoded data may also be called a random-access frame.

Optionally, taking the encoding apparatus applied in a server as an example, the server may store the first and second code streams and send them to the client on receiving a video content request sent by the client.

In one example, a video-on-demand scenario: the client requests playback of a video content from time t0, and the server delivers starting from the first code stream that includes the content at time t0. If the video content corresponding to time t0 is an access frame (e.g., the full-intra-prediction frame described above) in the first code stream, the server provides only the first code stream to the client, which decodes and plays it. Exemplarily, if the content corresponding to time t0 is frame number 0 (#0) of the long GOP code stream as shown in FIG. 7, the server provides only the long GOP code stream to the client. If the content corresponding to time t0 is not an access frame in the first code stream, the server also needs to deliver to the client the random-access frame of the second code stream at time t0 or nearest to time t0. The client may first decode the random-access frame and then, using the reconstructed frame of the random-access frame as a reference frame, decode and play the first code stream. Exemplarily, if the content corresponding to time t0 is frame number 3 (#3) of the long GOP code stream as shown in FIG. 7, the server also needs to deliver frame number 3 (#3) of the random-access code stream to the client; the client may first decode frame number 3 (#3) of the random-access code stream, use its reconstructed frame as a reference frame to decode frame number 4 (#4) of the long GOP code stream, and then decode and play the subsequent frames of the long GOP code stream.

In another example, a live-streaming scenario: the client requests access to the live stream from time t0, and the server delivers the first and second code streams to the client. The client first decodes the random-access frame of the second code stream at time t0 or nearest to time t0, and then, using the reconstructed frame of the random-access frame as a reference frame, decodes the subsequent frames of the first code stream, i.e., the frames in the first code stream after the time of the random-access frame.

In this embodiment, first encoding is performed on the image to be encoded to generate a first code stream; at least one of a first division mode or a first encoding parameter used for the second encoding of the image to be encoded or the first reconstructed image is determined according to the first reconstructed image; and second encoding is performed accordingly to generate a second code stream, the first reconstructed image being a reconstructed image of the first code stream or of the first encoding process. In this way, the encoding of the second code stream is adjusted based on the first code stream or the reconstructed image in the first encoding process, so that the reconstructed image of the first code stream and the corresponding reconstructed image of the second code stream are of identical or comparable quality, which, on the basis of low-latency access to video content, improves the decoding quality of the accessed content, reduces blocking artifacts, and eliminates some ghosting effects.

The video image processing method of the embodiments of the present application is explained below taking the image to be encoded as an image block obtained by dividing a source video image.
Referring to FIG. 11, FIG. 11 is a schematic flowchart of a video image processing method according to an embodiment of the present application. The method part of this embodiment may be performed by an encoding apparatus, which may be applied in the source device 12 of the above embodiments, e.g., the server 801 in the embodiment shown in FIG. 8. This embodiment takes the image to be encoded as the N-th image block of the K-th frame image. The encoding apparatus may encode the N-th image block of the K-th frame image to generate a first code stream and a second code stream. The method shown in FIG. 11 may include the following steps:

S1101. Acquire the N-th image block of the K-th frame image.

The encoding apparatus may receive the input K-th frame image and perform block division on the K-th frame image to obtain multiple image blocks of the K-th frame image. This embodiment takes encoding the N-th image block of the K-th frame image as an example; other image blocks may be processed in the same or a similar way, and the embodiments do not explain them one by one.

S1102. Perform first encoding on the N-th image block to generate a first code stream.

The encoding apparatus may perform first encoding on the N-th image block using information such as the prediction mode P, the division mode D, and the quantization parameter QP to generate the first code stream, which may include the first encoded data of the N-th image block. The first encoding process may also generate a reconstructed image block A of the N-th image block.

S1103. Acquire the N-th image block, the prediction mode P, and the reconstructed image block A.

The N-th image block and the reconstructed image block A are taken as input information to start encoding for generating the second code stream.

Optionally, information such as the division mode D and the quantization parameter QP involved in the first encoding may also be taken as input information.

S1104. Judge whether the prediction mode P is intra prediction; if yes, use the encoding result of the N-th image block in the first code stream as the encoding result of the current block of the second code stream; if no, perform S1105.

When the prediction mode P of the N-th image block is intra prediction, the encoding result of the current block of the second code stream directly uses the encoding result of the N-th image block of the first code stream. When the prediction mode P of the N-th image block is inter prediction, the following steps perform second encoding in intra prediction mode on the original to-be-encoded image block at the position corresponding to the reconstructed image block A in the K-th frame image, i.e., the above N-th image block.

S1105. Divide the N-th image block with one division mode, and perform second encoding in full intra-frame prediction mode on all divided sub-blocks to obtain a reconstructed image block B of the second code stream.

Exemplarily, the N-th image block is divided with one division mode, and all divided sub-blocks undergo the second encoding process of a series of processing such as intra prediction, transform, quantization, dequantization, and inverse transform, to obtain the reconstructed image block B of the second code stream.

Dividing the N-th image block with one division mode includes, but is not limited to, dividing it into 2Nx2N, or into NxN, or not dividing it. For example, dividing the N-th image block in 2Nx2N means dividing the N-th image block into 2Nx2N sub-blocks. N is any positive integer greater than 1.

When this step is performed on the N-th image block for the first time, the encoding information and/or encoding parameters used in the series of second encoding processes of division, intra prediction, transform, quantization, etc. (e.g., the division mode, QP, bit rate) may be randomly initialized encoding information and/or parameters, or may be the encoding information and/or parameters used by I-frames preceding the K-th frame image. For example, for the QP, the average QP of the one or more most recent I-frames preceding the K-th frame image may be used.

The reconstructed image block B is the image block corresponding to the reconstructed image block A in the reconstructed image of the K-th frame of the second code stream, i.e., the image block at the same position as the reconstructed image block A of the K-th frame of the first code stream. The reconstructed image block B may consist of the reconstructed blocks of one or more coding sub-blocks.
S1106. Calculate the similarity cost function value of the reconstructed image block A and the reconstructed image block B.

Specifically, the similarity cost function value f(division mode, QP) of the reconstructed image blocks A and B may be calculated according to the following formula (1):

f(\text{division mode}, QP) = \sum_{i=1}^{I} \left| B(\text{division mode}, QP, i) - A(i) \right|^{T}    (1)

where i denotes the index of a pixel in the reconstructed image block, I denotes the total number of pixels of the reconstructed image block, B(division mode, QP, i) denotes the reconstructed pixel value at the i-th pixel position of the reconstructed image block B under one division mode with quantization parameter QP, and A(i) denotes the reconstructed pixel value at the i-th pixel position of the reconstructed image block A. T may be 1 or 2.
Optionally, besides formula (1), other ways may also be used to evaluate the similarity of two reconstructed image blocks, including but not limited to MAD, SAD, SSD, MSD, SATD, etc. For example, the similarity of the two reconstructed image blocks may be evaluated by any one of the following formulas (2) to (5), or by calculating the sum of absolute values after a Hadamard transform of the difference image of the two reconstructed image blocks.

Evaluating the similarity of the two reconstructed image blocks with the MAD similarity cost function:

\mathrm{MAD} = \frac{1}{I} \sum_{i=1}^{I} \left| B(\text{division mode}, QP, i) - A(i) \right|    (2)

Evaluating the similarity of the two reconstructed image blocks with the SAD similarity cost function:

\mathrm{SAD} = \sum_{i=1}^{I} \left| B(\text{division mode}, QP, i) - A(i) \right|    (3)

Evaluating the similarity of the two reconstructed image blocks with the SSD similarity cost function:

\mathrm{SSD} = \sum_{i=1}^{I} \left( B(\text{division mode}, QP, i) - A(i) \right)^{2}    (4)

Evaluating the similarity of the two reconstructed image blocks with the MSD similarity cost function:

\mathrm{MSD} = \frac{1}{I} \sum_{i=1}^{I} \left( B(\text{division mode}, QP, i) - A(i) \right)^{2}    (5)

Evaluating the similarity of the two reconstructed image blocks with the SATD similarity cost function: calculate the sum of absolute values after a Hadamard transform of the difference image of the two reconstructed image blocks.

Using any one of the above similarity cost functions, the similarity cost function value of the reconstructed image blocks A and B is calculated; the smaller the similarity cost function value, the higher the similarity between the reconstructed image blocks A and B.
S1107. Judge whether the similarity cost function value of the reconstructed image blocks A and B is smaller than the similarity cost function threshold, or whether the finite iteration has reached a local optimum; if yes, perform S1109; if no, perform S1108.

Optionally, if the similarity cost function value of the reconstructed image blocks A and B is smaller than the similarity cost function value threshold, S1109 is performed directly. If the finite iteration is locally optimal, S1109 is performed directly. Finite-iteration local optimum specifically means that all division modes and/or encoding parameters have been used to second-encode the N-th image block, and the similarity cost function values of the reconstructed image block B of each division mode and/or encoding parameter with the reconstructed image block A have been calculated; the one with the highest similarity (e.g., the smallest similarity cost function value) is selected as the finite-iteration local optimum, and S1109 is performed on the encoded data, the division mode, and/or the encoding parameters corresponding to the reconstructed image block B with the highest similarity.

The similarity cost function value threshold can be flexibly set as required, e.g., set to 0; or, for the luma signal, the similarity cost function value thresholds for evaluating similarity with MAD, SAD, SSD, and MSD may be set to 4, 4xI, 16xI, and 16, respectively. In other words, when the average luma difference between the reconstructed image blocks B and A is smaller than 4 or 16, S1109 is performed. As another example, for the chroma signal, the similarity cost function value thresholds for MAD, SAD, SSD, and MSD may be set to 2, 2xI, 4xI, and 4, respectively; in other words, when the average chroma difference between the reconstructed image blocks B and A is smaller than 2 or 4, S1109 is performed. As yet another example, the similarity cost function value threshold for evaluating similarity with SATD may be set to 0.

Optionally, in comparing the similarity of the two reconstructed image blocks, if individual pixels differ greatly, e.g., gray values differing by more than 128, the division mode and the corresponding quantization result may be discarded.

After S1109 is performed, S1101 may be repeated to start the first encoding of the (N+1)-th image block of the K-th frame image, until the first and second encoding of the whole frame are completed.
S1108、变换划分方式和/或编码参数,重复执行S1105。
本申请实施例可以有多个划分方式和/或编码参数，可以在多个划分方式和/或编码参数中选取一个划分方式和/或编码参数，重复执行S1105，以遍历该多个划分方式和/或编码参数，对第N个图像块进行第二编码，并计算得到各个划分方式和/或编码参数的重建图像块B和重建图像块A的相似度代价函数值。
以编码参数为QP为例,在一定区间内(比如0~51),以一定步长(比如1或2)选择QP,重复执行S1105,直到枚举完有限的QP。
以划分方式为例,对每一种划分方式,执行S1105,直到枚举完有限的划分方式。
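示例性的，S1105至S1108所述的有限迭代搜索过程可以用如下Python伪代码简化表达(其中encode_intra、reconstruct、cost_fn为调用方提供的编码、重建与相似度代价接口，均为本示例的假设，熵编码等步骤从略)：

```python
import numpy as np

def search_second_encoding(block, recon_a, partitions, qp_range,
                           encode_intra, reconstruct, cost_fn, threshold):
    """遍历划分方式与QP，寻找使重建图像块B逼近重建图像块A的第二编码结果。"""
    best = None  # (代价, 编码数据, 划分方式, QP)
    for part in partitions:
        for qp in qp_range:                        # 例如 range(0, 52, 2)
            coded = encode_intra(block, part, qp)  # 全帧内预测模式的第二编码
            recon_b = reconstruct(coded)
            # 个别像素差异过大(如灰度值相差超过128)时，丢弃该划分方式及量化结果
            diff = np.abs(recon_b.astype(np.int64) - recon_a.astype(np.int64))
            if diff.max() > 128:
                continue
            cost = cost_fn(recon_a, recon_b)
            if cost < threshold:                   # 对应S1107的阈值判断，提前结束
                return coded, part, qp
            if best is None or cost < best[0]:     # 记录有限迭代局部最优
                best = (cost, coded, part, qp)
    if best is None:
        raise RuntimeError("所有候选均被丢弃")
    return best[1], best[2], best[3]               # 枚举完毕，取局部最优
```

当相似度代价函数值小于阈值时提前返回；枚举完全部候选后，返回有限迭代局部最优对应的结果，交由后续的熵编码步骤处理。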
S1109、对第二编码数据,以及划分方式和/或编码参数,进行熵编码,以生成第二码流,完成第N个图像块的第二编码。
该第二码流可以包括第N个图像块的第二编码数据。
如果重建图像块A和重建图像块B的相似度代价函数值小于相似度代价函数值阈值，这里的第二编码数据即为重建图像块B对应的编码数据。重建图像块B对应的编码数据是指，采用一种划分方式和/或编码参数对第N个图像块进行第二编码得到的编码数据。重建图像块B可以是该编码数据的重建图像。
如果有限迭代局部最优，这里的第二编码数据即为有限迭代局部最优的重建图像块B对应的编码数据。有限迭代局部最优具体指，已使用所有的划分方式和/或编码参数对第N个图像块进行编码，并计算得到各个划分方式和/或编码参数的重建图像块B和重建图像块A的相似度，从中选取一个相似度最高的作为有限迭代局部最优，将相似度最高的重建图像块B对应的编码数据作为这里的第二编码数据。
可选地,还可以结合其他信息,判断是否执行S1105。例如,结合第一编码所采用的帧类型、参考关系、或时域运动矢量预测是否关闭等信息,判断是否执行S1105。示例性的,当在第一编码中只有单帧参考且时域运动矢量预测关闭时,执行S1105,即编码第二码流的随机接入帧。
可选地，还可以结合其他信息，判断是否执行S1105。例如，结合第一编码所采用的帧内预测强滤波、SAO滤波等工具参数，判断是否执行S1105。示例性的，在第一编码开启帧内预测强滤波或者SAO滤波时，不执行S1105，即不编码第二码流的随机接入帧。
本实施例，基于包括第N个图像块的第一码流的编码结果对第二码流的编码进行调整，以实现第一码流的重建图像块和对应的第二码流的重建图像块的质量相同或相当，从而在满足低时延接入视频内容基础上，提升接入视频内容的解码质量，降低块效应，并消除部分伪影效应。通过图像块级的第一码流和第二码流编码，可以实现图像块级的两种码流同步输出，以较快地得到用于接入视频内容的第二码流的随机接入帧，降低接入时延。
下面以上述待编码图像为源视频图像,对本申请实施例的视频图像的处理方法进行解释说明。
请参见图12,图12是本申请实施例提供的一种视频图像的处理方法的流程示意图。本申请实施例的方法部分可以由编码装置执行。编码装置可以应用于上述实施例中的源设备12中,例如,如图8所示实施例中的服务器801。本实施例以待编码图像为第K帧图像为例。编码装置可以对第K帧图像进行编码,以生成第一码流和第二码流。如图12所示的方法可以包括如下实施步骤:
S1201、对第K帧图像进行第一编码,以生成第一码流。
编码装置可以接收输入的第K帧图像。本申请实施例以对第K帧图像进行编码为例进行举例说明,其他帧可以采用相同或相似的处理方式,本申请实施例不一一解释说明。
编码装置可以使用预测模式P、划分方式D和量化参数QP等信息对第K帧图像进行第一编码,以生成第一码流。该第一码流可以包括第K帧图像的第一编码数据。第一编码的过程中,还可以生成第K帧图像的重建帧A。
S1202、对第K帧图像进行一种划分方式的划分,对划分后的所有图像块进行全帧内预测模式的第二编码,以得到第二码流的重建帧B。
将第K帧图像和重建帧A作为输入信息,开始编码生成第二码流。
可选的,还可以将上述第一编码所涉及的划分方式D和量化参数QP等信息也作为输入信息。
示例性的,对第K帧图像进行一种划分方式的划分,对划分后的所有子块进行帧内预测、变换、量化、反量化、逆变换等一系列处理的第二编码过程,以得到第二码流的重建帧B。
对第K帧图像第一次执行本S1202时,进行划分、帧内预测、变换、量化等一系列第二编码过程所使用的编码信息和/或编码参数,例如,划分方式、QP、码率等,可以是初始化随机生成的编码信息和/或编码参数,或者,也可以是使用第K帧图像前向的I帧所使用的编码信息和/或编码参数。举例而言,对于QP,可以使用第K帧图像前向最近的一个或多个I帧的平均QP。
重建帧B可以由一个或多个编码图像块的重建块组成。
可选的,当第K帧图像的预测模式P是全帧内预测时,则第二码流的当前帧的第二编码数据直接使用第一码流的第K帧图像的第一编码数据。当第K帧图像的预测模式P不是全帧内预测(例如,第K帧图像是P帧,或者,存在P块)时,则通过S1202,对第K帧图像,进行全帧内预测模式的第二编码。
S1203、计算重建帧A和重建帧B的相似度代价函数值。
具体地,可以根据下述公式(6)计算重建帧A和重建帧B的相似度代价函数值f(划分方式,QP)。
$f(\text{划分方式},QP)=\frac{1}{I}\sum_{i=1}^{I}\left|B(\text{划分方式},QP,i)-A(i)\right|^{T}$    (6)
其中,i表示重建帧中像素的索引,I表示重建帧的图像总像素个数,B(划分方式,QP,i)表示重建帧B的一种划分方式下采用量化参数QP的第i个像素位置的重建像素值,A(i)表示重建帧A的第i个像素位置的重建像素值。T可选1或者2。
可选地,对于两个重建帧的相似度除了可以采用公式(6)外,也可以采用其他方式评估两个重建帧的相似度,包括但不限于MAD、SAD、SSD、MSD、SATD等。
S1204、变换划分方式和/或编码参数,重复执行S1202。
本申请实施例可以有多个划分方式和/或编码参数，可以在多个划分方式和/或编码参数中选取一个划分方式和/或编码参数，重复执行S1202，以遍历该多个划分方式和/或编码参数，对第K帧图像进行编码，并计算得到各个划分方式和/或编码参数的重建帧B和重建帧A的相似度代价函数值。
以编码参数为QP为例,在一定区间内(比如0~51),以一定步长(比如1或2)选择QP,重复执行S1202,直到枚举完有限的QP。
以划分方式为例,对每一种划分方式,执行S1202,直到枚举完有限的划分方式。
S1205、对第二编码数据,以及划分方式和/或编码参数,进行熵编码,以生成第二码流,完成第K帧图像的编码。
该第二码流可以包括第K帧图像的第二编码数据。
这里的第二编码数据即为有限迭代局部最优的重建帧B对应的编码数据。有限迭代局部最优具体指,已使用所有的划分方式和/或编码参数对第K帧图像进行编码,并计算得到各个划分方式和/或编码参数的重建帧B和重建帧A的相似度,从中选取一个相似度最高的作为有限迭代局部最优,将相似度最高的重建帧B对应的编码数据作为这里的第二编码数据。
可选的，如果重建帧A和重建帧B的相似度代价函数值小于相似度代价函数值阈值，这里的第二编码数据即为重建帧B对应的编码数据。重建帧B对应的编码数据是指，采用一种划分方式和/或编码参数对第K帧图像进行第二编码得到的编码数据。重建帧B可以是该编码数据的重建图像。
可选的,在比较两种重建帧的相似度过程中,如果存在个别像素差异较大,比如灰度值相差超过128,则可以丢弃该种划分方式和对应的量化结果。
在执行S1205之后,可以重复执行S1201,以开始对第K+1帧图像编码。
本实施例,基于包括第K帧图像的第一码流的编码结果对第二码流的编码进行调整,以实现第一码流的重建帧和对应的第二码流的重建帧的质量相同或相当,从而在满足低时延接入视频内容基础上,提升接入视频内容的解码质量,降低块效应,并消除部分伪影效应。通过帧级的第一码流和第二码流编码,从帧级上控制第一码流和第二码流的质量保持一致。
上述实施例的视频图像的处理方法根据第一码流的当前图像(例如,当前帧或当前块)的重建图像,控制第二码流的当前图像的编码,以实现第一码流的重建图像和对应的第二码流的重建图像的质量相同或相当。本申请实施例还提供如下实施例的视频图像的处理方法,根据第一码流的至少一个第一待编码图像,控制第二码流的第二待编码图像的编码,第二待编码图像是至少一个第一待编码图像之前的视频图像,以实现第一码流的重建图像和对应的第二码流的重建图像的质量相同或相当。
请参见图13,图13是本申请实施例提供的一种视频图像的处理方法的流程示意图。本申请实施例的方法部分可以由编码装置执行。编码装置可以应用于上述实施例中的源设备12中,例如,如图8所示实施例中的服务器801。应当理解的是,本申请实施例所涉及的一系列的步骤或操作,可以以各种顺序执行和/或同时发生,其执行顺序不以图13所示的步骤序号的大小作为限制。如图13所示的方法可以包括如下实施步骤:
S1301、获取至少一个第一待编码图像和第二待编码图像。
第二待编码图像是至少一个第一待编码图像之前的视频图像。
本申请实施例的第二待编码图像和一个或多个第一待编码图像可以是摄像机或者其他采集设备捕获的视频图像,或者经过解码的视频图像。经过解码的视频图像可以是对压缩后的视频图像进行解码得到的图像。
S1302、分别对至少一个第一待编码图像进行第一编码,以生成第一码流。
第一编码可以包括预测、变换、量化、熵编码等一项或多项处理过程。例如,可以分别对至少一个第一待编码图像进行预测、变换和量化,以生成一个或多个第一编码数据,之后对一个或多个第一编码数据进行熵编码,以生成包括一个或多个第一编码数据的第一码流。
可选的,第一编码的预测模式可以是帧间预测。或者,第一编码的预测模式也可以是帧内预测。
S1303、根据至少一个第一重建图像,确定对第二待编码图像进行第二编码所采用的第一划分方式或第一编码参数中至少一项,至少一个第一重建图像是第一码流或第一编码过程中的重建图像。
本申请实施例可以解码第一码流，以得到一个或多个第一重建图像。或者，本申请实施例可以在第一编码过程中，对一个或多个第一编码数据，进行反量化、逆变换等处理，以得到一个或多个第一重建图像。之后，本申请实施例可以根据一个或多个第一重建图像确定对第二待编码图像进行第二编码所采用的第一划分方式和/或第一编码参数。
一种可实现方式,至少一个第一待编码图像可以为至少一个第一源视频图像,相应的,这里的至少一个第一重建图像为至少一个第一重建帧,至少一个第一重建帧是解码至少一个第一源视频图像各自对应的第一码流得到的,或者,是第一编码过程中的至少一个第一源视频图像各自对应的第一编码数据的重建帧。
S1304、根据第一划分方式或第一编码参数中至少一项,对第二待编码图像进行全帧内预测模式的第二编码,以生成第二码流。
根据第一划分方式和/或第一编码参数,对第二待编码图像进行全帧内预测模式的第二编码,以生成第二码流。该第二码流可以包括第二编码数据。
第二编码的预测模式可以是帧内预测。由此,对第二待编码图像进行第二编码,以生成包括全帧内预测模式的帧的第二码流。
在一些实施例中,上述至少一个第一重建图像与至少一个第二重建图像之间的差异小于差异阈值,或者至少一个第一重建图像与至少一个第二重建图像之间的相似度高于相似度阈值,至少一个第二重建图像是将第三重建图像作为参考图像解码上述第一码流得到的。其中,差异阈值或相似度阈值可以根据需求进行合理设置。第三重建图像是第二码流或第二编码过程中的重建图像。
其中，至少一个第一重建图像的个数与至少一个第二重建图像的个数相同。举例而言，对一个待编码图像进行第一编码，以生成第一码流，根据一个第一重建图像，对第二待编码图像进行第二编码，以生成第二码流。该第一重建图像与第二重建图像之间的差异小于差异阈值，或者，该第一重建图像与第二重建图像之间的相似度高于相似度阈值。该第二重建图像是将第三重建图像作为参考图像解码该第一码流得到的。
其中,差异和相似度的具体解释说明可以参见图10所示实施例的S1004的相关解释说明,此处不再赘述。
举例而言，对多个待编码图像进行第一编码，以生成第一码流，根据多个第一重建图像，对第二待编码图像进行第二编码，以生成第二码流。该多个第一重建图像与多个第二重建图像之间的差异小于差异阈值，或者，该多个第一重建图像与多个第二重建图像之间的相似度高于相似度阈值。该多个第二重建图像是将第三重建图像作为参考图像解码该第一码流得到的。其中，多个第一重建图像与多个第二重建图像之间的差异，可以是多个第一重建图像各自与对应的第二重建图像之间的差异的加权之和。多个第一重建图像与多个第二重建图像之间的相似度，可以是多个第一重建图像各自与对应的第二重建图像之间的相似度的加权之和。
可选的,上述至少一个第一待编码图像为一个第一待编码图像,相应的,至少一个第一重建图像为一个第一重建图像,上述S1303的一种具体的可实现方式可以为,根据第一重建图像,在多个第二划分方式中选取一个第二划分方式作为第一划分方式;和/或,在多个第二编码参数中选取一个第二编码参数作为第一编码参数。
示例性的，编码装置可以分别根据多个第二划分方式和/或多个第二编码参数对第二待编码图像进行多次第二编码，以生成多个第三码流。该多个第三码流各自可以包括一个第三编码数据。之后，编码装置可以将多个第五重建图像分别作为参考图像解码第一码流，以得到多个第四重建图像，通过比较多个第四重建图像各自分别与第一重建图像之间的相似度，从中选择一个相似度最高的作为第二重建图像。换言之，第一重建图像与第二重建图像之间的相似度是第一重建图像与多个第四重建图像之间的相似度中最高的。将相似度最高的第四重建图像对应的第三码流作为第二码流，或者将相似度最高的第四重建图像对应的第三编码数据作为第二编码数据，以生成包括第二编码数据的第二码流。
其中,多个第五重建图像是多个第三码流的重建图像,或者是上述多次第二编码过程中的重建图像。
可选的,上述至少一个第一待编码图像为多个第一待编码图像,相应的,至少一个第一重建图像为多个第一重建图像,上述S1303的一种具体的可实现方式可以为,根据多个第一重建图像,在多个第二划分方式中选取一个第二划分方式作为第一划分方式;和/或,在多个第二编码参数中选取一个第二编码参数作为第一编码参数。
示例性的，以多个第一待编码图像的个数为m个为例，编码装置可以分别根据多个第二划分方式和/或多个第二编码参数对第二待编码图像进行x次第二编码，以生成x个第三码流。该x个第三码流各自可以包括一个第三编码数据。之后，编码装置可以将x个第五重建图像分别作为参考图像解码第一码流，以得到x×m个第四重建图像。可以理解为x组第四重建图像，一组第四重建图像中包括m个第四重建图像。其中，x个第五重建图像中的一个第五重建图像作为参考图像解码第一码流，可以得到m个第四重建图像，即一组第四重建图像。通过比较x组第四重建图像各自与m个第一重建图像之间的相似度，从中选择一组相似度最高的作为m个第一待编码图像各自对应的第二重建图像。将相似度最高的m个第四重建图像对应的第三码流作为第二码流，或者将相似度最高的m个第四重建图像对应的第三编码数据作为第二编码数据，以生成包括第二编码数据的第二码流。相似度最高的m个第四重建图像对应的第三码流，是指相似度最高的m个第四重建图像，是使用第三码流的第五重建图像作为参考图像解码第一码流得到的。
m个第一待编码图像各自对应的第二重建图像与m个第一重建图像之间的相似度,是m个第一待编码图像各自对应的第二重建图像与对应的第一重建图像之间的相似度的加权之和。
举例而言，多个第一待编码图像包括m个第一待编码图像，多个第一重建图像包括m个第一重建图像，m个第一重建图像分别为A1、A2，……，Am，m为大于1的任意正整数。m个第一待编码图像中的第i个第一待编码图像对应的多个第四重建图像为C1i，……，Cxi，x为大于1的任意正整数，i取1至m。x可以表示第x次第二编码。
多个第五重建图像为B1，……，Bx。
C11为将B1作为参考图像解码第1个第一待编码图像对应的第一编码数据得到的，C12为将C11作为参考图像解码第2个第一待编码图像对应的第一编码数据得到的，……，C1m为将C1(m-1)作为参考图像解码第m个第一待编码图像对应的第一编码数据得到的。
Cx1为将Bx作为参考图像解码第1个第一待编码图像对应的第一编码数据得到的，Cx2为将Cx1作为参考图像解码第2个第一待编码图像对应的第一编码数据得到的，……，Cxm为将Cx(m-1)作为参考图像解码第m个第一待编码图像对应的第一编码数据得到的。
可选的,在S1303之前,本申请实施例的视频图像的处理方法还可以包括:在分别对至少一个第一待编码图像进行第一编码之前,对第二待编码图像进行第一编码,以生成第四码流,判断第一编码的预测模式是否为帧内预测。当第一编码的预测模式是帧间预测,则执行S1303。当第一编码的预测模式是帧内预测,将第四码流作为第二码流。这样,在对第二待编码图像进行第一编码以得到包括P帧或P块的第一码流时,可以通过执行S1303和S1304,对第二待编码图像进行第二编码,以生成包括全帧内预测模式的帧的第二码流。在对第二待编码图像进行第一编码以得到包括全帧内预测模式的帧的第一码流时,可以无需执行S1303和S1304,直接将该全帧内预测模式的帧作为第二编码数据,从而可以提升编码效率。
举例而言,以上述图7所示的另一个视频内容的长GOP码流和随机接入码流为例,编码装置对第二待编码图像进行第一编码,以生成包括编号为3的帧(#3帧)的长GOP码流。编码装置判断第一编码的预测模式是否为帧内预测。如图7所示,此时的第一编码的预测模式是帧间预测。之后,编码装置可以通过执行S1301至S1304,对第一待编码图像进行第一编码,以生成包括编号为4的帧(#4帧)的长GOP码流。根据长GOP码流的编号为4的帧(#4帧)的重建帧,对第二待编码图像进行第二编码,以生成编号为3的帧(#3帧)的随机接入码流。随机接入码流中的编号为3的帧(#3帧)的预测模式为帧内预测。
需要说明的是,在一些实施例中,在对至少一个第一待编码图像进行第一编码以得到包括全帧内预测模式的帧的第一码流时,也可以不将该全帧内预测模式的帧作为第二编码数据,即第二码流也可以不包括该全帧内预测模式的帧,其可以根据视频传输需求进行合理设置。
还需要说明的是,上述第二编码数据也可以称之为随机接入帧。
可选的,以编码装置应用于服务器为例,服务器可以存储第一码流和第二码流。在接收到客户端发送的视频内容请求时,向客户端发送该第一码流和第二码流。
本实施例,分别对至少一个第一待编码图像进行第一编码,以生成第一码流,根据至少一个第一重建图像,确定对第二待编码图像进行第二编码所采用的第一划分方式或第一编码参数中至少一项,根据第一划分方式或第一编码参数中至少一项,对第二待编码图像进行第二编码,以生成第二码流,至少一个第一重建图像是第一码流或第一编码过程中的重建图像。这样,基于第一码流的编码结果对第二码流的编码进行调整,以实现第一码流的重建帧和对应的第二码流的重建帧的质量相同或相当,从而在满足低时延接入视频内容基础上,提升接入视频内容的解码质量,降低块效应,并消除部分伪影效应。
下面以上述至少一个第一待编码图像为一个源视频图像,对本申请实施例的视频图像的处理方法进行解释说明。
请参见图14，图14是本申请实施例提供的一种视频图像的处理方法的流程示意图。本申请实施例的方法部分可以由编码装置执行。编码装置可以应用于上述实施例中的源设备12中，例如，如图8所示实施例中的服务器801。本实施例以至少一个第一待编码图像为第K+1帧图像，第二待编码图像为第K帧图像为例。编码装置可以根据第K+1帧图像的编码结果，调整对第K帧图像进行的第二编码，以生成质量一致的第一码流和第二码流。如图14所示的方法可以包括如下实施步骤：
S1401、对第K+1帧图像进行第一编码,以生成第一码流。
编码装置可以使用预测模式P、划分方式D和量化参数QP等信息对第K+1帧图像进行第一编码,以生成第一码流。该第一码流可以包括第K+1帧图像的第一编码数据。第一编码的过程中,还可以生成第K+1帧图像的重建帧A。
S1402、对第K帧图像进行一种划分方式的划分,对划分后的所有图像块进行全帧内预测模式的第二编码,以得到第二码流的重建帧B。
将第K帧图像和重建帧A作为输入信息,开始编码生成第二码流。
可选的,还可以将上述第一编码所涉及的划分方式D和量化参数QP等信息也作为输入信息。
示例性的,对第K帧图像进行一种划分方式的划分,对划分后的所有图像块进行帧内预测、变换、量化、反量化、逆变换等一系列处理的第二编码过程,以得到第二码流的重建帧B。
对第K帧图像第一次执行本S1402时,进行划分、帧内预测、变换、量化等一系列第二编码过程所使用的编码信息和/或编码参数,例如,划分方式、QP、码率等,可以是初始化随机生成的编码信息和/或编码参数,或者,也可以是使用第K帧图像前向的I帧所使用的编码信息和/或编码参数。举例而言,对于QP,可以使用第K帧图像前向最近的一个或多个I帧的平均QP。
重建帧B可以由一个或多个编码图像块的重建块组成。
可选的，当第一码流的第K帧图像的预测模式P是全帧内预测时，则第二码流的第K帧图像的第二编码数据直接使用第一码流的第K帧图像的第一编码数据。当第一码流的第K帧图像的预测模式P不是全帧内预测(例如，第一码流的第K帧图像是P帧，或者，存在P块)时，则通过S1402，对第K帧图像，进行全帧内预测模式的第二编码。
S1403、将重建帧B作为解码第一码流的第K+1帧的参考帧,解码第一码流,以得到第K+1帧的另一种重建帧C。
S1404、计算重建帧A和重建帧C的相似度代价函数值。
具体地,可以根据下述公式(7)计算重建帧A和重建帧C的相似度代价函数值f(划分方式,QP)。
$f(\text{划分方式},QP)=\frac{1}{I}\sum_{i=1}^{I}\left|C(\text{划分方式},QP,i)-A(i)\right|^{T}$    (7)
其中,i表示重建帧中像素的索引,I表示重建帧的图像总像素个数,C(划分方式,QP,i)表示重建帧C的一种划分方式下采用量化参数QP的第i个像素位置的重建像素值,A(i)表示重建帧A的第i个像素位置的重建像素值。T可选1或者2。
可选地,对于两个重建帧的相似度除了可以采用公式(7)外,也可以采用其他方式评估两个重建帧的相似度,包括但不限于MAD、SAD、SSD、MSD、SATD等。
S1405、变换划分方式和/或编码参数,重复执行S1402。
本申请实施例可以有多个划分方式和/或编码参数，可以在多个划分方式和/或编码参数中选取一个划分方式和/或编码参数，重复执行S1402，以遍历该多个划分方式和/或编码参数，对第K帧图像进行编码，并计算得到各个划分方式和/或编码参数的重建帧C和重建帧A的相似度代价函数值。
以编码参数为QP为例，在一定区间内(比如0~51)，以一定步长(比如1或2)选择QP，重复执行S1402，直到枚举完有限的QP。
以划分方式为例，对每一种划分方式，执行S1402，直到枚举完有限的划分方式。
S1406、对第二编码数据,以及划分方式和/或编码参数,进行熵编码,以生成第二码流,完成第K帧图像的编码。
该第二码流可以包括第K帧图像的第二编码数据。
这里的第二编码数据即为有限迭代局部最优的重建帧C对应的编码数据。有限迭代局部最优具体指,已使用所有的划分方式和/或编码参数对第K帧图像进行编码,并计算得到各个划分方式和/或编码参数的重建帧C和重建帧A的相似度,从中选取一个相似度最高的作为有限迭代局部最优,将相似度最高的重建帧C对应的编码数据作为这里的第二编码数据。
可选的,如果重建帧A和重建帧C的相似度代价函数值小于相似度代价函数值阈值,这里的第二编码数据即为重建帧C对应的编码数据。重建帧C对应的编码数据是指,采用一种划分方式和/或编码参数对第K帧图像进行第二编码,得到编码数据。解码该编码数据可以得到重建帧B。将重建帧B作为参考帧,解码第一编码数据,可以得到重建帧C。
可选的,在比较两种重建帧的相似度过程中,如果存在个别像素差异较大,比如灰度值相差超过128,则可以丢弃该种划分方式和对应的量化结果。
在执行S1406之后,可以重复执行S1401,以开始对第K+2帧图像编码。
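示例性的，图14所示的模拟解码端的迭代搜索可以用如下Python伪代码简化表达(encode_intra、decode_with_ref、cost_fn等接口均为本示例的假设，并非对本申请实施例的限定)：

```python
def second_encode_with_decoder_simulation(frame_k, recon_a_k1, stream1_k1,
                                          encode_intra, decode_with_ref,
                                          cost_fn, partitions, qps, threshold):
    """以第二编码的重建帧B为参考帧解码第一码流的第K+1帧得到重建帧C，
    再与第一编码的重建帧A比较，选取相似度最高的第二编码结果。"""
    best = None
    for part in partitions:
        for qp in qps:
            coded, recon_b = encode_intra(frame_k, part, qp)    # 对应S1402
            recon_c = decode_with_ref(stream1_k1, ref=recon_b)  # 对应S1403，模拟解码端
            cost = cost_fn(recon_a_k1, recon_c)                 # 对应S1404
            if cost < threshold:
                return coded                                    # 提前满足阈值
            if best is None or cost < best[0]:
                best = (cost, coded)                            # 有限迭代局部最优
    return best[1]
```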
本实施例,基于包括第K+1帧图像的第一码流的编码结果,对第K帧图像的第二编码进行调整,以实现第一码流的重建帧和对应的第二码流的重建帧的质量相同或相当,从而在满足低时延接入视频内容基础上,提升接入视频内容的解码质量,降低块效应,并消除部分伪影效应。通过模拟解码端解码方式,调整第二码流的编码方式,从而降低编解码不一致效应,有益于消除块效应。
下面以上述至少一个第一待编码图像为多个源视频图像,对本申请实施例的视频图像的处理方法进行解释说明。
请参见图15，图15是本申请实施例提供的一种视频图像的处理方法的流程示意图。本申请实施例的方法部分可以由编码装置执行。编码装置可以应用于上述实施例中的源设备12中，例如，如图8所示实施例中的服务器801。本实施例以至少一个第一待编码图像包括第K+1帧图像、第K+2帧图像，……，以及第K+m帧图像，第二待编码图像为第K帧图像为例。编码装置可以根据第K+1帧图像、第K+2帧图像，……，以及第K+m帧图像的编码结果，调整对第K帧图像进行的第二编码，以生成质量一致的第一码流和第二码流。如图15所示的方法可以包括如下实施步骤：
S1501、分别对第K+1帧图像、第K+2帧图像,……,以及第K+m帧图像进行第一编码,以生成第一码流。
编码装置可以使用预测模式P、划分方式D和量化参数QP等信息分别对第K+1帧图像、第K+2帧图像，……，以及第K+m帧图像进行第一编码，以生成包括第K+1帧图像、第K+2帧图像，……，以及第K+m帧图像的第一编码数据的第一码流。第一编码的过程中，还可以生成第K+1帧图像的重建帧A1、第K+2帧图像的重建帧A2，……，以及第K+m帧图像的重建帧Am。m为大于或等于2的正整数。
S1502、对第K帧图像进行一种划分方式的划分,对划分后的所有图像块进行全帧内预测模式的第二编码,以得到第二码流的重建帧B。
将第K帧图像、第K+1帧图像、第K+2帧图像，……，以及第K+m帧图像，以及第K+1帧图像的重建帧A1、第K+2帧图像的重建帧A2，……，以及第K+m帧图像的重建帧Am作为输入信息，开始编码生成第二码流。
可选的,还可以将上述第一编码所涉及的划分方式D和量化参数QP等信息也作为输入信息。
示例性的,对第K帧图像进行一种划分方式的划分,对划分后的所有子块进行帧内预测、变换、量化、反量化、逆变换等一系列处理的第二编码过程,以得到第二码流的重建帧B。
对第K帧图像第一次执行本S1502时,进行划分、帧内预测、变换、量化等一系列第二编码过程所使用的编码信息和/或编码参数,例如,划分方式、QP、码率等,可以是初始化随机生成的编码信息和/或编码参数,或者,也可以是使用第K帧图像前向的I帧所使用的编码信息和/或编码参数。举例而言,对于QP,可以使用第K帧图像前向最近的一个或多个I帧的平均QP。
重建帧B可以由一个或多个编码图像块的重建块组成。
可选的,当第一码流的第K帧图像的预测模式P是全帧内预测时,则第二码流的第K帧图像的第二编码数据直接使用第一码流的第K帧图像的第一编码数据。当第一码流的第K帧图像的预测模式P不是全帧内预测(例如,第一码流的第K帧图像是P帧,或者,存在P块)时,则通过S1502,对第K帧图像,进行全帧内预测模式的第二编码。
S1503、将重建帧B作为解码第一码流的第K+1帧的参考帧，解码第一码流的第K+1帧，以得到第K+1帧的另一种重建帧C1；将重建帧C1作为解码第一码流的第K+2帧的参考帧，解码第一码流的第K+2帧，以得到第K+2帧的另一种重建帧C2；以此类推，将重建帧Cm-1作为解码第一码流的第K+m帧的参考帧，解码第一码流的第K+m帧，以得到第K+m帧的另一种重建帧Cm。
S1504、计算重建帧A1和重建帧C1、重建帧A2和重建帧C2、……、重建帧Am和重建帧Cm的相似度代价函数值加权累加和。
具体地,可以根据下述公式(8)计算。
$f(\text{划分方式},QP)=\sum_{m}w_{m}\cdot\frac{1}{N}\sum_{i=1}^{N}\left|C_{m}(\text{划分方式},QP,i)-A_{m}(i)\right|^{T}$    (8)
其中，i表示重建帧中像素的索引，N表示重建帧的图像总像素个数，Cm(划分方式,QP,i)表示以采用该划分方式和量化参数QP得到的重建帧B作为参考帧解码第一码流所得的第K+m帧的重建帧在第i个像素位置的重建像素值，Am(i)表示第一编码生成的第K+m帧的重建帧在第i个像素位置的重建像素值。T可选1或者2，wm表示第m个重建帧相似度的加权系数。
可选的，可以根据各帧和第K帧图像的距离选择不同的权重系数，且权重系数满足：
$\sum_{m}w_{m}=1$
例如，对于m=2，可以选择w1=0.6，w2=0.4。
可选地，对于重建帧A1和重建帧C1、重建帧A2和重建帧C2、……、重建帧Am和重建帧Cm的相似度代价函数值加权累加和，除了可以采用公式(8)外，也可以采用其他方式评估，包括但不限于MAD、SAD、SSD、MSD、SATD等。
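示例性的，公式(8)所述的链式解码与加权累加过程可以用如下Python代码示意(decode_with_ref为本示例假设的解码接口，输入为numpy数组形式的重建帧)：

```python
import numpy as np

def weighted_multiframe_cost(recon_as, stream1_frames, recon_b,
                             decode_with_ref, weights, t=1):
    """以重建帧B为起点链式解码得到C1..Cm，对各帧相似度代价加权累加。
    recon_as为第一编码的重建帧A1..Am，weights为加权系数w1..wm。"""
    assert abs(sum(weights) - 1.0) < 1e-9        # 权重之和为1
    ref, total = recon_b, 0.0
    for a_m, frame_m, w_m in zip(recon_as, stream1_frames, weights):
        c_m = decode_with_ref(frame_m, ref=ref)  # 以前一重建帧为参考帧链式解码
        d = np.abs(c_m.astype(np.int64) - a_m.astype(np.int64)) ** t
        total += w_m * d.mean()                  # w_m·(1/N)·Σ|C_m(i)-A_m(i)|^T
        ref = c_m
    return total
```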
S1505、变换划分方式和/或编码参数,重复执行S1502。
本申请实施例可以有多个划分方式和/或编码参数，可以在多个划分方式和/或编码参数中选取一个划分方式和/或编码参数，重复执行S1502，以遍历该多个划分方式和/或编码参数，对第K帧图像进行第二编码，并计算得到各个划分方式和/或编码参数的重建帧C1、重建帧C2，……，以及重建帧Cm，与重建帧A1、重建帧A2，……，以及重建帧Am的相似度。
以编码参数为QP为例,在一定区间内(比如0~51),以一定步长(比如1或2)选择QP,重复执行S1502,直到枚举完有限的QP。
以划分方式为例,对每一种划分方式,执行S1502,直到枚举完有限的划分方式。
S1506、对第二编码数据,以及划分方式和/或编码参数,进行熵编码,以生成第二码流,完成第K帧图像的编码。
该第二码流可以包括第K帧图像的第二编码数据。
这里的第二编码数据即为有限迭代局部最优的重建帧C1、重建帧C2，……，以及重建帧Cm对应的编码数据。有限迭代局部最优具体指，已使用所有的划分方式和/或编码参数对第K帧图像进行编码，并计算得到各个划分方式和/或编码参数的重建帧C1、重建帧C2，……，以及重建帧Cm与重建帧A1、重建帧A2，……，以及重建帧Am的相似度，从中选取一个相似度最高的作为有限迭代局部最优，将相似度最高的重建帧C1、重建帧C2，……，以及重建帧Cm对应的编码数据作为这里的第二编码数据。重建帧C1、重建帧C2，……，以及重建帧Cm对应的编码数据是指，采用一种划分方式和/或编码参数对第K帧图像进行第二编码，得到编码数据。解码该编码数据可以得到重建帧B。将重建帧B作为参考帧，解码第一码流的第K+1帧，可以得到重建帧C1，将重建帧C1作为参考帧，解码第一码流的第K+2帧，可以得到重建帧C2，以此类推，可以得到重建帧Cm。
可选的,在比较两种重建帧的相似度过程中,如果存在个别像素差异较大,比如灰度值相差超过128,则可以丢弃该种划分方式和对应的量化结果。
本实施例，基于包括第K+1帧图像、第K+2帧图像，……，以及第K+m帧图像的第一码流的编码结果，对第K帧图像的第二编码进行调整，以实现第一码流的第K帧图像的重建帧和第二码流的第K帧图像的重建帧的质量相同或相当，从而在满足低时延接入视频内容基础上，提升接入视频内容的解码质量，降低块效应，并消除部分伪影效应。通过模拟解码端解码方式，调整第二码流的编码方式，从而降低编解码不一致效应，有益于消除块效应。
本申请实施例还提供如下实施例，对上述图9所示实施例的另外几种可实现方式进行具体解释说明。对于第二编码过程，下述实施例，根据第一编码的编码信息，对待编码图像或第一重建图像进行全帧内预测模式的第二编码，以生成第二码流。
其中,第一编码的编码信息可以包括第一编码的划分方式,第一编码的量化参数和第一编码的编码失真信息中的一项或多项。
一种可实现方式,本申请实施例在根据第一编码的编码信息,对待编码图像或第一重建图像进行全帧内预测模式的第二编码,可以包括以下一项或多项:采用与第一编码相同的划分方式对待编码图像或所述第一重建图像进行全帧内预测模式的第二编码;或者,采用与第一编码相同的量化参数对待编码图像或第一重建图像进行全帧内预测模式的第二编码;或者,根据第一编码的编码失真信息,确定第二编码的量化参数,根据第二编码的量化参数,对待编码图像或所述第一重建图像进行全帧内预测模式的第二编码。
(1)划分方式
第一编码的划分方式可以包括第一编码的TU划分方式、PU划分方式或CU划分方式。
示例性的，由于编码的失真主要来源于量化，因此第二编码选择采用和第一编码一致的TU划分方式。例如，图16为本申请实施例提供的第一编码和第二编码采用相同的TU划分方式的示意图，第一编码和第二编码可以均采用如图16所示的TU划分方式。这样，可以有效地保证失真的边界一致性，即相同的边界存在失真，有利于保持第一码流和第二码流的质量相当或相同。
可选地,除了第一编码的TU划分方式以外,还可以传递第一编码的PU或者CU划分方式。基于第一编码的PU或者CU划分方式,控制第二编码。其中第一编码的PU划分方式用于快速决策第二编码对应位置PU的预测方向。例如第一编码的某一个编码块的PU为intra,则第二编码对应编码块的PU可以保持和第一编码一致;或者,第二编码的PU不跨越第一编码对应位置的PU边界。第一编码的CU划分方式可以为第二编码预划分CU做参考。例如第二编码的CU只在第一编码的CU范围内进行划分判决,保证第二编码CU不跨越第一编码的CU边界。
(2)量化参数
第一编码和第二编码输入的图像内容相似或相同，例如，第一编码的输入为待编码图像，第二编码的输入为待编码图像和第一重建图像，第一重建图像是经第一编码的待编码图像的重建图像，因此第一编码和第二编码输入的视频时空复杂度相似或相同。为了降低运算复杂度，第二编码可以利用第一编码的量化参数分布信息。该量化参数分布信息结合了人眼对不同时空复杂度存在敏感度的差异性而设计了各个编码块的量化参数。为了提升第二编码的质量，在保证量化参数分布差异的前提下，可以对每个编码块的量化参数叠加一个量化参数偏移量，qp_offset表示该量化参数偏移量。图17为本申请实施例提供的第一编码的量化参数和第二编码的量化参数的示意图，以图17所示为例，基于第一编码的量化参数和qp_offset(如图中所示的-3)，可以得到第二编码的量化参数。具体的，可以用第一编码的每个编码块的QP减去3，得到第二编码的各个编码块的QP。例如，如图17所示的第一编码的第一行第一列的编码块的QP为32，基于此，可以得到第二编码的第一行第一列的编码块的QP为29。
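示例性的，在第一编码的块级QP分布上叠加qp_offset得到第二编码QP分布的过程可以用如下Python代码示意(其中的QP数值仅为示例)：

```python
import numpy as np

qp_first = np.array([[32, 33, 30, 31],
                     [31, 34, 32, 33]])           # 假设的第一编码块级QP分布
qp_offset = -3                                    # 量化参数偏移量qp_offset
qp_second = np.clip(qp_first + qp_offset, 0, 51)  # 第二编码的块级QP分布
print(qp_second[0, 0])                            # 32 + (-3) = 29，与上文示例一致
```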
可选地,量化参数传递单元大小可以相等,例如图17所示每个小方格(即量化参数传递单元)大小为16x16或者64x64个像素单位,即每16x16或64x64个像素单位位置为相同QP值。
可选地,量化参数传递单元大小可以不相等。图18为本申请实施例提供的第一编码传输给第二编码的量化参数的示意图,如图18所示,不同小方格(即量化参数传递单元)大小可以不相同。
可选地，由于每个编码块量化过程中使用的量化参数，由参数集的量化参数(如PPS中语法元素init_qp_minus26)、slice级量化参数偏移量(如slice head中的slice_qp_delta)和当前编码块的量化参数偏移量(如mb_delta_quant)三者运算得到，因此第一编码传递给第二编码的量化参数包括但不限于参数集中的量化参数、slice级量化参数偏移量、编码块的量化参数偏移量三者之一或者任意组合，或者传递第一编码量化过程中使用的最终量化参数。
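示例性的，由上述三部分运算得到编码块最终量化参数的过程可以用如下Python代码示意(语法元素名称为类H.264风格的说明性假设，三者以相加方式合成并限制到0~51也是本示例的假设)：

```python
def final_qp(init_qp_minus26, slice_qp_delta, block_qp_delta):
    """由参数集级、slice级与编码块级三部分合成最终QP的示意。"""
    qp = 26 + init_qp_minus26   # 参数集中的量化参数(如PPS的init_qp_minus26)
    qp += slice_qp_delta        # slice级量化参数偏移量(如slice_qp_delta)
    qp += block_qp_delta        # 当前编码块的量化参数偏移量
    return max(0, min(51, qp))  # 假设限制到0~51的合法取值范围
```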
(3)编码失真信息
在第二编码过程中，可以根据第一编码的编码失真信息决定第二编码的编码失真阈值。例如，对于第一码流的某一编码块，编码失真信息采用MAD指标，其值为4，则在第二编码做编码决策时，当预测帧和待编码帧，或者重建帧和待编码帧之间的MAD指标小于4时，则提前退出决策判断，以当前编码策略为最优编码策略。
可选地,编码失真信息可以采用MAD、SAD、SSD、MSD、SATD等常用指标中的一个或多个。
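示例性的，以第一编码的编码失真信息(此处以MAD为例)作为第二编码决策提前退出条件的判断可以用如下Python代码示意：

```python
import numpy as np

def early_exit(candidate, target, first_encode_mad):
    """candidate为第二编码当前策略下的预测帧或重建帧，target为待编码帧，
    first_encode_mad为第一编码对应编码块的MAD失真值。"""
    mad = np.mean(np.abs(candidate.astype(np.int64) - target.astype(np.int64)))
    return mad < first_encode_mad  # 小于第一编码的失真指标时，以当前策略为最优
```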
另一种可实现方式,本申请实施例在根据第一编码的编码信息,对待编码图像或第一重建图像进行全帧内预测模式的第二编码,可以包括:根据第一编码的编码信息和待编码图像的特征信息,确定第二编码的量化参数。根据第二编码的量化参数,对待编码图像或第一重建图像进行全帧内预测模式的第二编码。
其中,待编码图像的特征信息可以包括待编码图像的内容复杂度,待编码图像的颜色分类信息,待编码图像的对比度信息和待编码图像的内容分割信息中的一项或多项。
(1)内容复杂度
第一编码和第二编码输入的图像内容相似或相同,例如,第一编码的输入为待编码图像,第二编码的输入为待编码图像和第一重建图像,第一重建图像是经第一编码的待编码图像的重建图像,因此第一编码和第二编码输入的视频时空复杂度相似或相同,可以将第一编码过程中分析得到的内容复杂度传递到第二编码,第二编码不需要重复计算,直接根据内容复杂度计算量化系数等编码参数,以指导第二编码生成第二码流。
(2)区域信息
内容复杂度不同，人眼对失真的敏感度不同，因此第二编码可以根据第一编码的区域信息对不同区域设置不同的量化参数偏移量qp_offset。例如，对复杂区域采用qp_offset为-5，在简单区域采用qp_offset为-3。当然可以理解的，其还可以是其他取值，本申请实施例不一一举例说明。
可选地,也可以根据第一编码的编码失真信息、颜色分类、对比度信息和内容分割等信息,调整第二编码对不同区域设置不同的编码参数。
本申请的视频图像的处理方法可以通过上述任一实施例完成第一码流和第二码流的编码,为了便于传输和视频解码,本申请实施例还可以通过下述实施例以携带第一码流和第二码流的标识信息,以便解码装置可以根据标识信息区分第一码流和第二码流,解码相应码流,以快速接入视频内容。
如果未发生需要随机接入的场景,则编码装置可以对第一码流进行封装和发送。解码装置可以接收、解码或显示该第一码流。如果一个时刻需要随机接入,则编码装置可以选择该时刻或下一时刻对应的随机接入帧所对应的第二码流进行封装和发送。编码装置可以对后续时刻的第一码流进行封装和发送。解码装置可以先接收、解码或显示该第二码流,然后接收第一码流,基于第二码流的重建图像解码或显示该第一码流。
(1)在视频参数集(video parameter set,VPS)、序列参数集(sequence parameter set,SPS)或者图像参数集(picture parameter set,PPS)等参数集中增加码流标识信息
码流接收设备(例如,解码装置)可以根据标识信息判断接收到的码流是否支持单帧随机接入功能,区分接收到的码流是属于长GOP流或基本流(即上述第一码流),或者包含两种码流(第一码流和第二码流)。
以PPS为例,如表1所示,表1示出了在PPS参数集中增加码流标识信息。
表1
pic_parameter_set_rbsp( ) {                                描述符
  ……
  single_insert_enabled_flag                               u(1)
  if( single_insert_enabled_flag )
    stream_id                                              ue(v)
  ……
}
表1中,u(1)表示编码标准中1位无符号整数,ue(v)表示哥伦布码编码,在PPS参数集中增加信息,标识当前码流是否支持单帧随机接入,以及当前码流的类型。语法元素含义如下:
单帧随机接入使能标志位(single_insert_enabled_flag):该值为1表示该码流支持单帧随机接入,该值为0表示该码流不支持单帧随机接入。
流标识(stream_id):当single_insert_enabled_flag为1时,该值存在。
当该值为0时,表示当前码流为长GOP码流/基本流。
当该值为1时,表示当前码流为随机接入流。
当该值为2时，表示当前码流既包含长GOP码流数据，也包含随机接入码流数据，且同一视频帧内容(相同PTS)对应的长GOP码流数据在随机接入码流数据前面。图19为本申请实施例提供的流标识(stream_id)为2时第一码流和第二码流的排布形式的示意图。示例性的，如图19所示，第一码流可以包括长GOP帧1和长GOP帧2，第二码流可以包括随机接入帧1和随机接入帧2，长GOP帧1和随机接入帧1为同一视频帧内容，长GOP帧2和随机接入帧2为同一视频帧内容，在PPS中的流标识(stream_id)为2，如图19所示，长GOP帧1在随机接入帧1之前，长GOP帧2在随机接入帧2之前。
当该值为3时,表示当前码流既包含长GOP码流数据,也包含随机接入码流数据,且同一视频帧内容(相同PTS)对应的长GOP码流数据在随机接入码流数据后面。图20为本申请实施例提供的流标识(stream_id)为3时第一码流和第二码流的排布形式的示意图。示例性的,如图20所示,第一码流可以包括长GOP帧1和长GOP帧2,第二码流可以包括随机接入帧1和随机接入帧2,长GOP帧1和随机接入帧1为同一视频帧内容,长GOP帧2和随机接入帧2为同一视频帧内容,在PPS中的流标识(stream_id)为3,如图20所示,长GOP帧1在随机接入帧1之后,长GOP帧2在随机接入帧2之后。
可选地,单帧随机接入使能标志位(single_insert_enabled_flag)和流标识(stream_id)等信息也可以在VPS或SPS中携带。
可选地,单帧随机接入使能标志位(single_insert_enabled_flag)和流标识(stream_id)可以组合成单一语法元素,当其为0时表示不支持单帧随机接入的一般码流;当其为1时表示长GOP流;当其为2时,表示随机接入流。
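示例性的，按表1的语义读取u(1)和ue(v)(指数哥伦布码)语法元素的解析过程可以用如下Python代码示意(BitReader为本示例假设的简化比特读取器，并非任何真实解码器的接口)：

```python
class BitReader:
    def __init__(self, data):
        self.bits = ''.join(f'{b:08b}' for b in data)  # 按大端位序展开为比特串
        self.pos = 0

    def u(self, n):            # 读取n位无符号整数，对应描述符u(n)
        v = int(self.bits[self.pos:self.pos + n], 2)
        self.pos += n
        return v

    def ue(self):              # 读取ue(v)指数哥伦布码
        zeros = 0
        while self.u(1) == 0:  # 统计前缀0的个数
            zeros += 1
        return (1 << zeros) - 1 + (self.u(zeros) if zeros else 0)

def parse_single_insert_info(reader):
    info = {'single_insert_enabled_flag': reader.u(1)}
    if info['single_insert_enabled_flag']:
        # 0:长GOP码流/基本流 1:随机接入流 2/3:两种码流数据按不同顺序混合排布
        info['stream_id'] = reader.ue()
    return info
```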
(2)在slice_segment_header中增加码流标识信息
码流接收设备(例如,解码装置)可以根据每一个slice的头信息判断接收到的码流是否支持单帧随机接入功能,区分接收到的码流是属于长GOP流或基本流。
slice_segment_header的一种携带方式如表2所示,表2示出了在slice_segment_header中增加码流标识信息。
表2
slice_segment_header( ) {                                  描述符
  ……
  slice_support_single_insert_enable                       u(1)
  if( slice_support_single_insert_enable )
    stream_id                                              ue(v)
  ……
}
表2中,语法元素含义如下:
slice_support_single_insert_enable:该值为1表示该码流支持单帧随机接入,该值为0表示该码流不支持单帧随机接入。
stream_id:当slice_support_single_insert_enable为1时,该值存在。
当该值为0时,表示当前码流为长GOP码流;
当该值为1时,表示当前码流为随机接入流;
可选地，slice_support_single_insert_enable和stream_id可以组合成单一语法元素，当其为0时表示不支持单帧随机接入的一般码流；当其为1时表示长GOP流；当其为2时，表示随机接入流。
可选地，在存储或者传输过程中，可以将长GOP流和随机接入流二进制级联组合成一条码流，采用以上标识信息区分长GOP流或随机接入流。图21是本申请实施例提供的三种第一码流和第二码流合并成一条码流的组合方式。图21中的(a)表示了长GOP流和随机接入流的参数集相同时，一条码流中两种码流数据排布形式。图21中的(b)中VPS1、SPS1、PPS1表示参数集属于长GOP码流，VPS2、SPS2、PPS2表示参数集属于随机接入流，每种类型的流数据前面放置参数集可以直接解码。图21中的(c)，两种码流的参数集放一起排布，方便一些场景下提前发送参数集(如DASH流传输)，此时在长GOP流和随机接入流的slice_segment_header数据中需要为slice_pic_parameter_set_id设置不同的数值，按照标准协议规定，根据该slice_pic_parameter_set_id可以找到对应的PPS参数集，PPS参数集通过pps_seq_parameter_set_id指向对应的SPS，依此找到对应的参数集。
可选地,两种码流合并成的一条码流可以只包含长GOP流或随机接入流中的VPS、SPS或者PPS中的一种或多种参数集。
接收端或发送端根据具体情况(例如信道变化、用户请求等),对码流中具有相同POC值的长GOP帧和随机接入帧对应的码流数据进行二选一方式的封装、发送、接收、解码或显示。如果未发生需要随机接入的场景,则选择长GOP帧的码流数据封装、发送、接收、解码或显示;如果发生需要随机接入的场景,则选择随机接入帧对应的码流数据进行封装、发送、接收、解码或显示。
可选地,如果将两种码流合成一条码流,长GOP流数据(包括参数集)和随机接入流(包括参数集)可以不在上述VPS、SPS、PPS或slice_segment_header中携带长GOP流或随机接入流数据类型的区分标识,如果未发生需要随机接入的场景,则可以选择长GOP帧的码流数据封装、发送、接收、解码或显示;如果发生需要随机接入的场景,则先判断长GOP帧是否为全帧内预测块,如果是,则选择长GOP帧的码流数据封装、发送、接收、解码或显示,否则选择随机接入帧对应的码流数据进行封装、发送、接收、解码或显示。
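示例性的，上述对相同POC的长GOP帧和随机接入帧做二选一的选择逻辑可以用如下Python代码示意(帧对象及其is_all_intra属性为本示例的假设)：

```python
def select_frame(long_gop_frame, ra_frame, need_random_access):
    """对具有相同POC的长GOP帧与随机接入帧进行二选一。"""
    if not need_random_access:
        return long_gop_frame          # 未发生随机接入场景：选择长GOP帧
    if long_gop_frame.is_all_intra:    # 长GOP帧本身为全帧内预测帧，可直接接入
        return long_gop_frame
    return ra_frame                    # 否则选择随机接入帧对应的码流数据
```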
(3)在辅助增强信息(supplementary enhancement information,SEI)中携带码流标识信息。
其中,表3示出了通用SEI信息语法。
表3
(表3原文为图示的通用SEI信息语法表，内容从略)
其中,表4示出了子码流拼接SEI消息语法。
表4
sei_payload( payloadType, payloadSize ) {                  描述符
  ……
  if( payloadType == 182 )
    single_picture_info_insert( payloadSize )
  ……
}
single_picture_info_insert( payloadSize ) {                描述符
  single_insert_enabled_flag                               u(1)
  if( single_insert_enabled_flag )
    stream_id                                              ue(v)
}
表4中,针对SEI类型加入新类型182,用于表示当前码流的单帧接入信息,加入信息single_picture_info_insert(payloadSize)。包含的语法元素含义如下:
single_insert_enabled_flag:该值为1表示该码流支持单帧随机接入,该值为0表示该码流不支持单帧随机接入。
stream_id:当single_insert_enabled_flag为1时,该值存在。
当该值为0时,表示当前码流为长GOP码流/基本流;
当该值为1时,表示当前码流为随机接入流;
(4)在码流封装中携带码流标识信息
将每个子码流进行封装,每个子码流可以独立地封装在一个track中,比如sub-picture track。可以在sub-picture track中加入所述的子码流能否拼接的语法描述信息,样例如下:
在spco box中添加如下语法:
(原文此处为图示的spco box语法，其中新增了语法元素track_class)
语义如下:
track_class:当其为0时表示不支持单帧随机接入的一般码流;当其为1时表示长GOP流;当其为2时,表示随机接入流。
(5)在文件格式中增加描述码流标识信息
本实施例在ISO基本媒体文件格式(ISO base media file format,ISOBMFF)规定的文件格式中添加描述码流类型信息。在文件格式中,针对长GOP流和随机接入流,在视频track中添加Sample Entry Type:‘srand’。当sample entry name为‘normal’时,表示当前视频track中为不支持单帧随机接入的一般码流;当sample entry name为‘base’时,表示长GOP流;当sample entry name为‘insert’时,表示随机接入流。
(6)在文件描述信息中携带码流标识信息
码流标识信息可以携带在文件描述信息中，例如在DASH协议中的媒体呈现描述(media presentation description，MPD)文件中携带码流标识信息。本实施例给出了一种MPD中描述码流类型信息样例：
(原文此处为图示的MPD文件片段样例，其中通过EssentialProperty的schemeIdUri="urn:mpeg:dash:srand:2014"及srand@value属性携带码流标识信息)
本例中指定新的EssentialProperty属性srand@value。srand@value属性描述如表5，表5示出了在"urn:mpeg:dash:srand:2014"中srand@value的属性描述。
表5
srand@value参数          描述
file_class               指示当前码流的类型，取值语义见下文
语法元素语义如下:
file_class:当其为0时表示不支持单帧随机接入的一般码流;当其为1时表示长GOP流;当其为2时,表示随机接入流。
(7)在自定义消息中携带码流标识信息
码流可以采用自定义的TLV(type,length,value)消息模式发送,此时可以在type中携带码流标识信息。
例如,TLV消息可以包括类型(type)字段、长度(length)字段和负载(payload)字段。Type(8bits):数据类型,length(32bits):payload长度,payload(不定长):码流数据。
其中,表6示出了不同的类型(type)有不同的负载(payload)。
表6
Type 语义 Payload
0x00 不支持单帧随机接入的一般码流 一般码流数据
0x01 长GOP流 长GOP码流数据
0x02 随机接入流 随机接入流数据
其它 保留 码流或其它数据
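示例性的，按上述字段宽度打包和解析TLV消息的过程可以用如下Python代码示意(类型语义对应表6)：

```python
import struct

TYPES = {0x00: '不支持单帧随机接入的一般码流', 0x01: '长GOP流', 0x02: '随机接入流'}

def pack_tlv(msg_type, payload):
    # 8位type + 32位length(大端) + 不定长payload
    return struct.pack('>BI', msg_type, len(payload)) + payload

def parse_tlv(buf, offset=0):
    msg_type, length = struct.unpack_from('>BI', buf, offset)
    start = offset + 5                       # 头部共5字节
    payload = buf[start:start + length]
    return msg_type, TYPES.get(msg_type, '保留'), payload, start + length
```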
由此可见，本申请实施例的编码端可以将第一码流和第二码流的标识信息携带在码流中，或者，携带在封装层，或者，携带在传输协议层等，以便解码端基于标识信息，区分第一码流和第二码流，以正确解码得到视频内容。
上文结合附图对本申请实施例的视频图像的处理方法进行了详细的介绍,下面结合图22对本申请实施例的视频图像的处理装置进行介绍。应理解,视频图像的处理装置能够执行本申请实施例的视频图像的处理方法。为了避免不必要的重复,下面在介绍本申请实施例的视频图像的处理装置时适当省略重复的描述。
参见图22,图22为本申请实施例提供的一种视频图像的处理装置的结构示意图。如图22所示,该视频图像的处理装置2200可以包括:获取模块2201、第一编码模块2202和第二编码模块2203。
获取模块2201,用于获取待编码图像。第一编码模块2202,用于对待编码图像进行第一编码,以生成第一码流。第二编码模块2203,用于根据第一编码的编码信息,对待编码图像或第一重建图像进行全帧内预测模式的第二编码,以生成第二码流,第一重建图像是第一码流或第一编码过程中的重建图像。
在一些实施例中,第一编码的编码信息包括第一编码的划分方式,第一编码的量化参数和第一编码的编码失真信息中的一项或多项。
在一些实施例中,第二编码模块2203用于执行以下至少一项:采用与第一编码相同的划分方式对待编码图像或第一重建图像进行全帧内预测模式的第二编码;或者,采用与第一编码相同的量化参数对待编码图像或第一重建图像进行全帧内预测模式的第二编码;或者,根据第一编码的编码失真信息,确定第二编码的量化参数,根据第二编码的量化参数,对待编码图像或第一重建图像进行全帧内预测模式的第二编码。
在一些实施例中,第二编码模块2203用于:根据第一编码的编码信息和待编码图像的特征信息,确定第二编码的量化参数;根据第二编码的量化参数,对待编码图像或第一重建图像进行全帧内预测模式的第二编码。
在一些实施例中,待编码图像的特征信息包括待编码图像的内容复杂度,待编码图像的颜色分类信息,待编码图像的对比度信息和待编码图像的内容分割信息中的一项或多项。
在一些实施例中，第二编码模块2203，用于根据第一编码的编码信息和第一重建图像，确定对待编码图像或第一重建图像进行第二编码所采用的第一划分方式或第一编码参数中至少一项。第二编码模块2203，还用于根据第一划分方式或第一编码参数中至少一项对待编码图像或第一重建图像进行全帧内预测模式的第二编码。
其中,第一码流中相邻两个全帧内预测模式的帧之间的间隔大于第二码流的相邻两个全帧内预测模式的帧之间的间隔。
在一些实施例中,第一重建图像与第二重建图像之间的差异小于差异阈值或者第一重建图像与第二重建图像之间的相似度高于相似度阈值,第二重建图像是第二码流或第二编码过程中的重建图像。
在一些实施例中,第二编码模块2203用于:根据第一编码的编码信息和第一重建图像,确定多个第二划分方式,在多个第二划分方式中选取一个第二划分方式作为第一划分方式;和/或,根据第一编码的编码信息和第一重建图像,确定多个第二编码参数,在多个第二编码参数中选取一个第二编码参数作为第一编码参数。
其中,第一重建图像与第二重建图像之间的相似度是第一重建图像与多个第三重建图像之间的相似度中最高的,多个第三重建图像包括第二重建图像,多个第三重建图像是分别根据多个第二划分方式和/或多个第二编码参数对待编码图像或第一重建图像进行多次第二编码过程中的重建图像,或者,多个第三重建图像是多个第三码流的重建图像,多个第三码流为分别根据多个第二划分方式和/或多个第二编码参数对待编码图像或第一重建图像进行多次第二编码得到的。
在一些实施例中,第二编码模块2203还用于:获取第一编码的预测模式。当第一编码的预测模式是帧间预测,则执行获取第一编码的编码信息,根据第一编码的编码信息,对待编码图像或第一重建图像进行全帧内预测模式的第二编码,以生成第二码流的步骤。当第一编码的预测模式是帧内预测,将第一码流作为第二码流。
在一些实施例中,待编码图像为源视频图像;或者,待编码图像为对源视频图像进行划分后的图像块。
需要说明的是,视频图像的处理装置2200可以执行图9至图12任一,或者,图16至21任一所示实施例的编码装置的方法。具体的实现原理和技术效果可以参考上述方法实施例的具体解释说明,此处不再赘述。
本申请实施例还提供另一种视频图像的处理装置,采用与图22所示的处理装置相同的结构。其中,获取模块,用于获取至少一个第一待编码图像和第二待编码图像,第二待编码图像是至少一个第一待编码图像之前的视频图像。第一编码模块,用于分别对至少一个第一待编码图像进行第一编码,以生成第一码流。第二编码模块,用于根据至少一个第一重建图像,确定对第二待编码图像进行第二编码所采用的第一划分方式或第一编码参数中至少一项,至少一个第一重建图像是第一码流或第一编码过程中的重建图像。第二编码模块,还用于根据第一划分方式或第一编码参数中至少一项对第二待编码图像进行第二编码,以生成第二码流。
其中,第一码流中相邻两个全帧内预测模式的帧之间的间隔大于第二码流的相邻两个全帧内预测模式的帧之间的间隔。
在一些实施例中，至少一个第一待编码图像的个数为一个，至少一个第一重建图像的个数为一个，第一重建图像与第二重建图像之间的差异小于差异阈值，或者第一重建图像与第二重建图像之间的相似度高于相似度阈值，第二重建图像是将第三重建图像作为参考图像解码第一码流得到的，第三重建图像是第二码流或第二编码过程中的重建图像。
在一些实施例中,至少一个第一待编码图像的个数为一个,至少一个第一重建图像的个数为一个,第二编码模块用于:根据第一重建图像,在多个第二划分方式中选取一个第二划分方式作为第一划分方式;和/或,根据第一重建图像,在多个第二编码参数中选取一个第二编码参数作为第一编码参数。
其中,第一重建图像与第二重建图像之间的相似度是第一重建图像与多个第四重建图像之间的相似度中最高的,多个第四重建图像包括第二重建图像,多个第四重建图像为将多个第五重建图像分别作为参考图像解码第一码流得到的,多个第五重建图像是多个第三码流的重建图像,多个第三码流为分别根据多个第二划分方式和/或多个第二编码参数对第二待编码图像进行多次第二编码得到的,或者,多个第五重建图像是分别根据多个第二划分方式和/或多个第二编码参数对第二待编码图像进行多次第二编码过程中的重建图像。
在一些实施例中,第一编码模块还用于:在分别对至少一个第一待编码图像进行第一编码之前,对第二待编码图像进行第一编码,以生成第四码流。第二编码模块还用于:获取第一编码的预测模式。当第一编码的预测模式是帧间预测,则执行根据至少一个第一重建图像,确定对第二待编码图像进行第二编码所采用的第一划分方式或第一编码参数中至少一项的步骤。当第一编码的预测模式是帧内预测,将第四码流作为第二码流。
在一些实施例中,至少一个第一待编码图像为至少一个第一源视频图像,第二待编码图像为第二源视频图像。
需要说明的是,视频图像的处理装置可以执行图13至图15任一所示实施例的编码装置的方法。具体的实现原理和技术效果可以参考上述方法实施例的具体解释说明,此处不再赘述。
本领域技术人员能够领会,结合本文公开描述的各种说明性逻辑框、模块和算法步骤所描述的功能可以硬件、软件、固件或其任何组合来实施。如果以软件来实施,那么各种说明性逻辑框、模块、和步骤描述的功能可作为一或多个指令或代码在计算机可读媒体上存储或传输,且由基于硬件的处理单元执行。计算机可读媒体可包含计算机可读存储媒体,其对应于有形媒体,例如数据存储媒体,或包括任何促进将计算机程序从一处传送到另一处的媒体(例如,根据通信协议)的通信媒体。以此方式,计算机可读媒体大体上可对应于(1)非暂时性的有形计算机可读存储媒体,或(2)通信媒体,例如信号或载波。数据存储媒体可为可由一或多个计算机或一或多个处理器存取以检索用于实施本申请中描述的技术的指令、代码和/或数据结构的任何可用媒体。计算机程序产品可包含计算机可读媒体。
作为实例而非限制,此类计算机可读存储媒体可包括RAM、ROM、EEPROM、CD-ROM或其它光盘存储装置、磁盘存储装置或其它磁性存储装置、快闪存储器或可用来存储指令或数据结构的形式的所要程序代码并且可由计算机存取的任何其它媒体。并且,任何连接被恰当地称作计算机可读媒体。举例来说,如果使用同轴缆线、光纤缆线、 双绞线、数字订户线(DSL)或例如红外线、无线电和微波等无线技术从网站、服务器或其它远程源传输指令,那么同轴缆线、光纤缆线、双绞线、DSL或例如红外线、无线电和微波等无线技术包含在媒体的定义中。但是,应理解,所述计算机可读存储媒体和数据存储媒体并不包括连接、载波、信号或其它暂时媒体,而是实际上针对于非暂时性有形存储媒体。如本文中所使用,磁盘和光盘包含压缩光盘(CD)、激光光盘、光学光盘、数字多功能光盘(DVD)和蓝光光盘,其中磁盘通常以磁性方式再现数据,而光盘利用激光以光学方式再现数据。以上各项的组合也应包含在计算机可读媒体的范围内。
可通过例如一或多个数字信号处理器(DSP)、通用微处理器、专用集成电路(ASIC)、现场可编程逻辑阵列(FPGA)或其它等效集成或离散逻辑电路等一或多个处理器来执行指令。因此,如本文中所使用的术语“处理器”可指前述结构或适合于实施本文中所描述的技术的任一其它结构中的任一者。另外,在一些方面中,本文中所描述的各种说明性逻辑框、模块、和步骤所描述的功能可以提供于经配置以用于编码和解码的专用硬件和/或软件模块内,或者并入在组合编解码器中。而且,所述技术可完全实施于一或多个电路或逻辑元件中。
本申请的技术可在各种各样的装置或设备中实施,包含无线手持机、集成电路(IC)或一组IC(例如,芯片组)。本申请中描述各种组件、模块或单元是为了强调用于执行所揭示的技术的装置的功能方面,但未必需要由不同硬件单元实现。实际上,如上文所描述,各种单元可结合合适的软件和/或固件组合在编码解码器硬件单元中,或者通过互操作硬件单元(包含如上文所描述的一或多个处理器)来提供。
在上述实施例中,对各个实施例的描述各有侧重,某个实施例中没有详述的部分,可以参见其他实施例的相关描述。
以上所述,仅为本申请示例性的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到的变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应该以权利要求的保护范围为准。

Claims (38)

  1. 一种视频图像的处理方法,其特征在于,包括:
    获取待编码图像;
    对所述待编码图像进行第一编码,以生成第一码流;
    根据所述第一编码的编码信息,对所述待编码图像或第一重建图像进行全帧内预测模式的第二编码,以生成第二码流,所述第一重建图像是所述第一码流或所述第一编码过程中的重建图像。
  2. 根据权利要求1所述的方法,其特征在于,所述第一编码的编码信息包括所述第一编码的划分方式,所述第一编码的量化参数和所述第一编码的编码失真信息中的一项或多项。
  3. 根据权利要求2所述的方法,其特征在于,所述根据所述第一编码的编码信息,对所述待编码图像或第一重建图像进行全帧内预测模式的第二编码,包括以下至少一项:
    采用与所述第一编码相同的划分方式对所述待编码图像或所述第一重建图像进行全帧内预测模式的第二编码;或者,
    采用与所述第一编码相同的量化参数对所述待编码图像或所述第一重建图像进行全帧内预测模式的第二编码;或者,
    根据所述第一编码的量化参数和量化参数偏移,确定所述第二编码的量化参数,根据所述第二编码的量化参数,对所述待编码图像或所述第一重建图像进行全帧内预测模式的第二编码;或者,
    根据所述第一编码的编码失真信息,确定所述第二编码的量化参数,根据所述第二编码的量化参数,对所述待编码图像或所述第一重建图像进行全帧内预测模式的第二编码。
  4. 根据权利要求1或2所述的方法,其特征在于,所述根据所述第一编码的编码信息,对所述待编码图像或第一重建图像进行全帧内预测模式的第二编码,包括:
    根据所述第一编码的编码信息和所述待编码图像的特征信息,确定所述第二编码的量化参数;
    根据所述第二编码的量化参数,对所述待编码图像或所述第一重建图像进行全帧内预测模式的第二编码。
  5. 根据权利要求4所述的方法,所述待编码图像的特征信息包括所述待编码图像的内容复杂度,所述待编码图像的颜色分类信息,所述待编码图像的对比度信息和所述待编码图像的内容分割信息中的一项或多项。
  6. 根据权利要求1或2所述的方法,其特征在于,所述根据所述第一编码的编码信息,对所述待编码图像或第一重建图像进行全帧内预测模式的第二编码,包括:
    根据所述第一编码的编码信息和所述第一重建图像,确定对所述待编码图像或所述第一重建图像进行第二编码所采用的第一划分方式或第一编码参数中至少一项;
    根据所述第一划分方式或第一编码参数中至少一项,对所述待编码图像或所述第一重建图像进行全帧内预测模式的第二编码。
  7. 根据权利要求6所述的方法，其特征在于，所述第一重建图像与第二重建图像之间的差异小于差异阈值或者所述第一重建图像与第二重建图像之间的相似度高于相似度阈值，所述第二重建图像是所述第二码流或所述第二编码过程中的重建图像。
  8. 根据权利要求6所述的方法,其特征在于,所述根据所述第一编码的编码信息和所述第一重建图像,确定对所述待编码图像或所述第一重建图像进行第二编码所采用的第一划分方式或第一编码参数中至少一项,包括:
    根据所述第一编码的编码信息和所述第一重建图像,确定多个第二划分方式,在所述多个第二划分方式中选取一个第二划分方式作为所述第一划分方式;和/或,
    根据所述第一编码的编码信息和所述第一重建图像,确定多个第二编码参数,在所述多个第二编码参数中选取一个第二编码参数作为所述第一编码参数;
    其中,所述第一重建图像与第二重建图像之间的相似度是所述第一重建图像与多个第三重建图像之间的相似度中最高的,所述多个第三重建图像包括所述第二重建图像,所述多个第三重建图像是分别根据所述多个第二划分方式和/或所述多个第二编码参数对所述待编码图像或所述第一重建图像进行多次所述第二编码过程中的重建图像,或者,所述多个第三重建图像是多个第三码流的重建图像,所述多个第三码流为分别根据所述多个第二划分方式和/或所述多个第二编码参数对所述待编码图像或所述第一重建图像进行多次所述第二编码得到的。
  9. 根据权利要求6-8任一项所述的方法,其特征在于,所述第一编码参数包括量化参数或码率。
  10. 根据权利要求1-9任一项所述的方法,其特征在于,所述方法还包括:
    获取所述第一编码的预测模式;
    当所述第一编码的预测模式是帧间预测,则执行所述获取所述第一编码的编码信息,根据所述第一编码的编码信息,对所述待编码图像或第一重建图像进行全帧内预测模式的第二编码,以生成第二码流的步骤;
    当所述第一编码的预测模式是帧内预测,将所述第一码流作为所述第二码流。
  11. 根据权利要求1-10任一项所述的方法,其特征在于,所述待编码图像为源视频图像,或者,所述待编码图像为对源视频图像进行划分后的图像块。
  12. 一种视频图像的处理方法,其特征在于,包括:
    获取至少一个第一待编码图像和第二待编码图像,所述第二待编码图像是所述至少一个第一待编码图像之前的视频图像;
    分别对所述至少一个第一待编码图像进行第一编码,以生成第一码流;
    根据至少一个第一重建图像,确定对所述第二待编码图像进行第二编码所采用的第一划分方式或第一编码参数中至少一项,所述至少一个第一重建图像是所述第一码流或所述第一编码过程中的重建图像;
    根据所述第一划分方式或第一编码参数中至少一项对所述第二待编码图像进行全帧内预测模式的所述第二编码,以生成第二码流。
  13. 根据权利要求12所述的方法，其特征在于，所述至少一个第一待编码图像的个数为一个，所述至少一个第一重建图像的个数为一个，所述第一重建图像与第二重建图像之间的差异小于差异阈值，或者，所述第一重建图像与第二重建图像之间的相似度高于相似度阈值，所述第二重建图像是将第三重建图像作为参考图像解码所述第一码流得到的，所述第三重建图像是所述第二码流或所述第二编码过程中的重建图像。
  14. 根据权利要求12所述的方法,其特征在于,所述至少一个第一待编码图像的个数为一个,所述至少一个第一重建图像的个数为一个,所述根据至少一个第一重建图像,确定对所述第二待编码图像进行第二编码所采用的第一划分方式或第一编码参数中至少一项,包括:
    根据所述第一重建图像,在多个第二划分方式中选取一个第二划分方式作为所述第一划分方式;和/或,
    根据所述第一重建图像,在多个第二编码参数中选取一个第二编码参数作为所述第一编码参数;
    其中,所述第一重建图像与第二重建图像之间的相似度是所述第一重建图像与多个第四重建图像之间的相似度中最高的,所述多个第四重建图像包括所述第二重建图像,所述多个第四重建图像为将多个第五重建图像分别作为参考图像解码所述第一码流得到的,所述多个第五重建图像是多个第三码流的重建图像,所述多个第三码流为分别根据所述多个第二划分方式和/或所述多个第二编码参数对所述第二待编码图像进行多次所述第二编码得到的,或者,所述多个第五重建图像是分别根据所述多个第二划分方式和/或所述多个第二编码参数对所述第二待编码图像进行多次所述第二编码过程中的重建图像。
  15. 根据权利要求12-14任一项所述的方法,其特征在于,在分别对所述至少一个第一待编码图像进行第一编码之前,所述方法还包括:
    对所述第二待编码图像进行所述第一编码,以生成第四码流;
    获取所述第一编码的预测模式;
    当所述第一编码的预测模式是帧间预测,则执行所述根据至少一个第一重建图像,确定对所述第二待编码图像进行第二编码所采用的第一划分方式或第一编码参数中至少一项的步骤;
    当所述第一编码的预测模式是帧内预测,将所述第四码流作为所述第二码流。
  16. 根据权利要求12-15任一项所述的方法,其特征在于,所述至少一个第一待编码图像为至少一个第一源视频图像,所述第二待编码图像为第二源视频图像。
  17. 根据权利要求12-16任一项所述的方法,其特征在于,所述第一编码参数包括量化参数或码率。
  18. 一种视频图像的处理装置,其特征在于,包括:
    获取模块,用于获取待编码图像;
    第一编码模块,用于对所述待编码图像进行第一编码,以生成第一码流;
    第二编码模块,用于根据所述第一编码的编码信息,对所述待编码图像或第一重建图像进行全帧内预测模式的第二编码,以生成第二码流,所述第一重建图像是所述第一码流或所述第一编码过程中的重建图像。
  19. 根据权利要求18所述的装置,其特征在于,所述第一编码的编码信息包括所述第一编码的划分方式,所述第一编码的量化参数和所述第一编码的编码失真信息中的一项或多项。
  20. 根据权利要求19所述的装置,其特征在于,所述第二编码模块用于执行以下至少一项:
    采用与所述第一编码相同的划分方式对所述待编码图像或所述第一重建图像进行全帧内预测模式的第二编码;或者,
    采用与所述第一编码相同的量化参数对所述待编码图像或所述第一重建图像进行全帧内预测模式的第二编码;或者,
    根据所述第一编码的量化参数和量化参数偏移,确定所述第二编码的量化参数,根据所述第二编码的量化参数,对所述待编码图像或所述第一重建图像进行全帧内预测模式的第二编码;或者,
    根据所述第一编码的编码失真信息,确定所述第二编码的量化参数,根据所述第二编码的量化参数,对所述待编码图像或所述第一重建图像进行全帧内预测模式的第二编码。
  21. 根据权利要求18或19所述的装置,其特征在于,所述第二编码模块用于:
    根据所述第一编码的编码信息和所述待编码图像的特征信息,确定所述第二编码的量化参数;
    根据所述第二编码的量化参数,对所述待编码图像或所述第一重建图像进行全帧内预测模式的第二编码。
  22. 根据权利要求21所述的装置,所述待编码图像的特征信息包括所述待编码图像的内容复杂度,所述待编码图像的颜色分类信息,所述待编码图像的对比度信息和所述待编码图像的内容分割信息中的一项或多项。
  23. 根据权利要求18或19所述的装置,所述第二编码模块,用于根据所述第一编码的编码信息和所述第一重建图像,确定对所述待编码图像或所述第一重建图像进行第二编码所采用的第一划分方式或第一编码参数中至少一项;
    所述第二编码模块,还用于根据所述第一划分方式或第一编码参数中至少一项,对所述待编码图像或所述第一重建图像进行全帧内预测模式的第二编码。
  24. 根据权利要求23所述的装置,其特征在于,所述第一重建图像与第二重建图像之间的差异小于差异阈值或者所述第一重建图像与第二重建图像之间的相似度高于相似度阈值,所述第二重建图像是所述第二码流或所述第二编码过程中的重建图像。
  25. 根据权利要求23所述的装置,其特征在于,所述第二编码模块用于:
    根据所述第一编码的编码信息和所述第一重建图像,确定多个第二划分方式,在所述多个第二划分方式中选取一个第二划分方式作为所述第一划分方式;和/或,
    根据所述第一编码的编码信息和所述第一重建图像,确定多个第二编码参数,在所述多个第二编码参数中选取一个第二编码参数作为所述第一编码参数;
    其中，所述第一重建图像与第二重建图像之间的相似度是所述第一重建图像与多个第三重建图像之间的相似度中最高的，所述多个第三重建图像包括所述第二重建图像，所述多个第三重建图像是分别根据所述多个第二划分方式和/或所述多个第二编码参数对所述待编码图像或所述第一重建图像进行多次所述第二编码过程中的重建图像，或者，所述多个第三重建图像是多个第三码流的重建图像，所述多个第三码流为分别根据所述多个第二划分方式和/或所述多个第二编码参数对所述待编码图像或所述第一重建图像进行多次所述第二编码得到的。
  26. 根据权利要求23-25任一项所述的装置,其特征在于,所述第一编码参数包括量化参数或码率。
  27. 根据权利要求18-26任一项所述的装置,其特征在于,所述第二编码模块还用于:
    获取所述第一编码的预测模式;
    当所述第一编码的预测模式是帧间预测,则执行所述获取所述第一编码的编码信息,根据所述第一编码的编码信息,对所述待编码图像或第一重建图像进行全帧内预测模式的第二编码,以生成第二码流的步骤;
    当所述第一编码的预测模式是帧内预测,将所述第一码流作为所述第二码流。
  28. 根据权利要求18-27任一项所述的装置,其特征在于,所述待编码图像为源视频图像,或者,所述待编码图像为对源视频图像进行划分后的图像块。
  29. 一种视频图像的处理装置,其特征在于,包括:
    获取模块,用于获取至少一个第一待编码图像和第二待编码图像,所述第二待编码图像是所述至少一个第一待编码图像之前的视频图像;
    第一编码模块,用于分别对所述至少一个第一待编码图像进行第一编码,以生成第一码流;
    第二编码模块,用于根据至少一个第一重建图像,确定对所述第二待编码图像进行第二编码所采用的第一划分方式或第一编码参数中至少一项,所述至少一个第一重建图像是所述第一码流或所述第一编码过程中的重建图像;
    所述第二编码模块,还用于根据所述第一划分方式或第一编码参数中至少一项对所述第二待编码图像进行全帧内预测模式的所述第二编码,以生成第二码流。
  30. 根据权利要求29所述的装置,其特征在于,所述至少一个第一待编码图像的个数为一个,所述至少一个第一重建图像的个数为一个,所述第一重建图像与第二重建图像之间的差异小于差异阈值,或者,所述第一重建图像与第二重建图像之间的相似度高于相似度阈值,所述第二重建图像是将第三重建图像作为参考图像解码所述第一码流得到的,所述第三重建图像是所述第二码流或所述第二编码过程中的重建图像。
  31. 根据权利要求29所述的装置,其特征在于,所述至少一个第一待编码图像的个数为一个,所述至少一个第一重建图像的个数为一个,所述第二编码模块用于:
    根据所述第一重建图像,在多个第二划分方式中选取一个第二划分方式作为所述第一划分方式;和/或,
    根据所述第一重建图像,在多个第二编码参数中选取一个第二编码参数作为所述第一编码参数;
    其中,所述第一重建图像与第二重建图像之间的相似度是所述第一重建图像与多个第四重建图像之间的相似度中最高的,所述多个第四重建图像包括所述第二重建图像,所述多个第四重建图像为将多个第五重建图像分别作为参考图像解码所述第一码流得到的,所述多个第五重建图像是多个第三码流的重建图像,所述多个第三码流为分别根据所述多个第二划分方式和/或所述多个第二编码参数对所述第二待编码图像进行多次所述第二编码得到的,或者,所述多个第五重建图像是分别根据所述多个第二划分方式和/或所述多个第二编码参数对所述第二待编码图像进行多次所述第二编码过程中的重建图像。
  32. 根据权利要求29-31任一项所述的装置,其特征在于,所述第一编码模块还用于:在分别对所述至少一个第一待编码图像进行第一编码之前,对所述第二待编码图像进行所述第一编码,以生成第四码流;
    所述第二编码模块还用于:获取所述第一编码的预测模式;
    当所述第一编码的预测模式是帧间预测,则执行所述根据至少一个第一重建图像,确定对所述第二待编码图像进行第二编码所采用的第一划分方式或第一编码参数中至少一项的步骤;
    当所述第一编码的预测模式是帧内预测,将所述第四码流作为所述第二码流。
  33. 根据权利要求29-32任一项所述的装置,其特征在于,所述至少一个第一待编码图像为至少一个第一源视频图像,所述第二待编码图像为第二源视频图像。
  34. 根据权利要求29-33任一项所述的装置,其特征在于,所述第一编码参数包括量化参数或码率。
  35. 一种视频图像的处理装置,其特征在于,包括:
    一个或多个处理器;
    存储器,用于存储一个或多个程序;
    当所述一个或多个程序被所述一个或多个处理器执行,使得所述一个或多个处理器实现如权利要求1-17中任一项所述的方法。
  36. 一种计算机可读存储介质,其特征在于,包括根据权利要求1-17中任一项所述的方法获得的第一码流和第二码流。
  37. 一种计算机程序产品,其特征在于,当计算机程序产品在计算机上运行时,使得计算机执行如权利要求1-17中任一项所述的视频图像的处理方法。
  38. 一种计算机可读存储介质,其特征在于,包括计算机指令,当所述计算机指令在计算机上运行时,使得所述计算机执行如权利要求1-17中任一项所述的视频图像的处理方法。
PCT/CN2022/116596 2021-09-30 2022-09-01 视频图像的处理方法及装置 WO2023051156A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111164100.4 2021-09-30
CN202111164100.4A CN115914648A (zh) 2021-09-30 2021-09-30 视频图像的处理方法及装置

Publications (1)

Publication Number Publication Date
WO2023051156A1 true WO2023051156A1 (zh) 2023-04-06

Family

ID=85729579

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/116596 WO2023051156A1 (zh) 2021-09-30 2022-09-01 视频图像的处理方法及装置

Country Status (2)

Country Link
CN (1) CN115914648A (zh)
WO (1) WO2023051156A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117275196B (zh) * 2023-11-23 2024-09-20 深圳小米房产网络科技有限公司 基于物联网的房屋安全监测预警方法及系统

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090225869A1 (en) * 2008-03-10 2009-09-10 Samsung Electronics Co., Ltd. Video encoding apparatus, video decoding apparatus, and method
CN101860749A (zh) * 2010-04-20 2010-10-13 中兴通讯股份有限公司 一种视频图像编码和解码方法及装置
CN105812798A (zh) * 2014-12-31 2016-07-27 深圳中兴力维技术有限公司 图像编解码方法及其装置
CN112312133A (zh) * 2020-10-30 2021-02-02 北京奇艺世纪科技有限公司 一种视频编码方法、装置、电子设备及可读存储介质

Also Published As

Publication number Publication date
CN115914648A (zh) 2023-04-04


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22874547

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE