CN117729328A - Video image encoding method, video image decoding method and related equipment

Publication number: CN117729328A
Application number: CN202310948472.9A
Original language: Chinese (zh)
Inventor: 樊星星
Applicant/Assignee: Xiaohongshu Technology Co., Ltd.
Legal status: Pending

Abstract

The application provides a video image encoding method, a video image decoding method and related equipment, wherein the method comprises the following steps: acquiring a video image frame in YUV444 format; splitting the video image frame into a plurality of sub-data blocks in YUV420 format and a plurality of supplementary data blocks in UV420 format, wherein the supplementary data blocks in UV420 format are in one-to-one correspondence with the sub-data blocks in YUV420 format; encoding the plurality of sub-data blocks in YUV420 format according to a video compression coding standard to obtain a first path of video code stream; and encoding the plurality of supplementary data blocks in UV420 format according to the encoding mode of a video compression coding standard or a proprietary protocol to obtain a second path of video code stream. Embodiments of the application can ensure the quality of the YUV444-format video image after compression encoding while reducing the size of the compressed video code stream.

Description

Video image encoding method, video image decoding method and related equipment
Technical Field
The present disclosure relates to the field of video encoding and decoding technologies, and in particular, to a video image encoding method, a video image decoding method, and related devices.
Background
Currently, some software encoders and most hardware encoders that support standard video coding formats (such as H.264, H.265 and AV1) do not support direct input of YUV444-format video for compression encoding, or, even where YUV444 encoding is supported, achieve a relatively low compression rate. From the viewpoint of video storage and transmission cost, existing methods therefore generally convert YUV444-format video into YUV420 format: the encoding side downsamples the YUV444 video to YUV420, and the playback side converts YUV420 back to YUV444. This conversion causes a certain amount of image loss and reduces the quality of the encoded video.
Disclosure of Invention
The embodiment of the application provides a video image encoding method, a video image decoding method and related equipment, which can ensure the quality of YUV444-format video images after compression encoding while reducing the size of the compressed video code stream.
A first aspect of an embodiment of the present application provides a method for encoding a video image, including:
acquiring a video image frame in YUV444 format;
splitting a video image frame into a plurality of sub-data blocks in YUV420 format and a plurality of supplementary data blocks in UV420 format, wherein the supplementary data blocks in UV420 format are in one-to-one correspondence with the sub-data blocks in YUV420 format;
encoding the plurality of sub-data blocks in YUV420 format according to a video compression coding standard to obtain a first path of video code stream; and
encoding the plurality of supplementary data blocks in UV420 format according to the encoding mode of a video compression coding standard or a private protocol to obtain a second path of video code stream.
With reference to the first aspect, in one possible implementation manner, splitting a video image frame into a plurality of sub-data blocks in YUV420 format and a plurality of supplementary data blocks in UV420 format includes:
for each 2 x 2 pixel block in the video image frame, taking the 2 x 2 pixel block as a unit, combining the 4 Y components, 1 U component and 1 V component of the 2 x 2 pixel block into one sub-data block in YUV420 format, to obtain a plurality of sub-data blocks in YUV420 format;
for each 2 x 2 pixel block, combining the remaining 3 U components and 3 V components into one supplementary data block in UV420 format, to obtain a plurality of supplementary data blocks in UV420 format.
With reference to the first aspect, in one possible implementation manner, the combining, according to the remaining 3U components and 3V components, a supplementary data block in UV420 format includes:
according to the pixel values of the 3U components and the pixel values of the target U components, respectively obtaining 3U component complementary pixel values; the target U component is the U component in the corresponding YUV420 format sub data block;
according to the pixel values of the 3V components and the pixel values of the target V components, respectively obtaining 3V component complementary pixel values; the target V component is the V component in the corresponding YUV420 format sub data block;
the 3U-component supplemental pixel values and the 3V-component supplemental pixel values are combined into one block of supplemental data in UV420 format.
With reference to the first aspect, in one possible implementation manner, the combining, according to the remaining 3U components and 3V components, a supplementary data block in UV420 format includes:
according to the pixel values of the 3U components, the pixel values of the target U components and the adjustment coefficients, respectively obtaining 3U component complementary pixel values; the target U component is the U component in the corresponding YUV420 format sub data block;
According to the pixel values of the 3V components, the pixel values of the target V components and the adjustment coefficients, respectively obtaining 3V component complementary pixel values; the target V component is the V component in the corresponding YUV420 format sub data block;
the 3U-component supplemental pixel values and the 3V-component supplemental pixel values are combined into one block of supplemental data in UV420 format.
A second aspect of embodiments of the present application provides a method for decoding a video image, including:
decoding a first path of video code stream and a second path of video code stream of a video image frame in a YUV444 format to respectively obtain a plurality of sub-data blocks in a YUV420 format and a plurality of supplementary data blocks in a UV420 format, wherein the supplementary data blocks in the UV420 format are in one-to-one correspondence with the sub-data blocks in the YUV420 format;
according to the sub-data blocks in the YUV420 format and the supplementary data blocks corresponding to the sub-data blocks in the YUV420 format, obtaining the original data blocks corresponding to the supplementary data blocks in the UV420 format;
and combining YUV components in the plurality of YUV420 format sub-data blocks with UV components in the corresponding original data blocks to obtain a video image frame.
With reference to the second aspect, in one possible implementation manner, the sub data block and the corresponding supplemental data block are obtained by splitting a 2×2 pixel block in a video image frame, where the sub data block is obtained by combining 4Y components, 1U component and 1V component in the 2×2 pixel block, and the supplemental data block corresponding to the sub data block is obtained by combining 3U components and 3V components remaining in the 2×2 pixel block;
According to the sub-data blocks in the plurality of YUV420 formats and the supplementary data blocks respectively corresponding to the sub-data blocks in the plurality of YUV420 formats, obtaining the original data blocks corresponding to the supplementary data blocks in the plurality of UV420 formats, including:
for each supplementary data block, respectively obtaining original pixel values of 3U components according to the supplementary pixel values of 3U components in each supplementary data block and the pixel values of the target U components; the target U component is the U component in the corresponding YUV420 format sub data block;
respectively obtaining original pixel values of 3V components according to the 3V component complementary pixel values in each complementary data block and the pixel value of the target V component; the target V component is the V component in the corresponding YUV420 format sub data block;
and obtaining a corresponding original data block according to the original pixel values of the 3U components and the original pixel values of the 3V components.
With reference to the second aspect, in one possible implementation manner, the sub data block and the corresponding supplemental data block are obtained by splitting a 2×2 pixel block in a video image frame, where the sub data block is obtained by combining 4 Y components, 1 U component and 1 V component in the 2×2 pixel block, and the supplemental data block corresponding to the sub data block is obtained by combining the 3 U components and 3 V components remaining in the 2×2 pixel block;
According to the sub-data blocks in the plurality of YUV420 formats and the supplementary data blocks respectively corresponding to the sub-data blocks in the plurality of YUV420 formats, obtaining the original data blocks corresponding to the supplementary data blocks in the plurality of UV420 formats, including:
for each supplementary data block, respectively obtaining original pixel values of 3U components according to 3U component supplementary pixel values in each supplementary data block, the pixel value of the target U component and the adjustment coefficient; the target U component is the U component in the corresponding YUV420 format sub data block;
according to the 3V component supplementary pixel values in each supplementary data block, the pixel value of the target V component and the adjustment coefficient, respectively obtaining the original pixel values of the 3V components; the target V component is the V component in the corresponding YUV420 format sub data block;
and obtaining a corresponding original data block according to the original pixel values of the 3U components and the original pixel values of the 3V components.
A third aspect of the embodiments of the present application provides an encoding apparatus for video images, the apparatus including a first processing unit configured to:
acquiring a video image frame in YUV444 format;
splitting the video image frame into a plurality of sub-data blocks in YUV420 format and a plurality of supplementary data blocks in UV420 format, wherein the plurality of supplementary data blocks in UV420 format are in one-to-one correspondence with the plurality of sub-data blocks in YUV420 format;
encoding the plurality of YUV420-format sub-data blocks according to a video compression coding standard to obtain a first path of video code stream; and
encoding the plurality of UV420-format supplementary data blocks according to a video compression coding standard or an encoding mode of a private protocol to obtain a second path of video code stream.
With reference to the third aspect, in one possible implementation manner, in splitting a video image frame into a plurality of sub-data blocks in YUV420 format and a plurality of supplementary data blocks in UV420 format, the first processing unit is specifically configured to:
for each 2 x 2 pixel block in the video image frame, taking the 2 x 2 pixel block as a unit, combining the 4 Y components, 1 U component and 1 V component of the 2 x 2 pixel block into one sub-data block in YUV420 format, to obtain a plurality of sub-data blocks in YUV420 format;
for each 2 x 2 pixel block, combining the remaining 3 U components and 3 V components into one supplementary data block in UV420 format, to obtain a plurality of supplementary data blocks in UV420 format.
With reference to the third aspect, in one possible implementation manner, in combining the remaining 3U components and 3V components into one UV420 format supplementary data block, the first processing unit is specifically configured to:
According to the pixel values of the 3U components and the pixel values of the target U components, respectively obtaining 3U component complementary pixel values; the target U component is the U component in the corresponding YUV420 format sub data block;
according to the pixel values of the 3V components and the pixel values of the target V components, respectively obtaining 3V component complementary pixel values; the target V component is the V component in the corresponding YUV420 format sub data block;
the 3U-component supplemental pixel values and the 3V-component supplemental pixel values are combined into one block of supplemental data in UV420 format.
With reference to the third aspect, in one possible implementation manner, in combining the remaining 3U components and 3V components into one UV420 format supplementary data block, the first processing unit is specifically configured to:
according to the pixel values of the 3U components, the pixel values of the target U components and the adjustment coefficients, respectively obtaining 3U component complementary pixel values; the target U component is the U component in the corresponding YUV420 format sub data block;
according to the pixel values of the 3V components, the pixel values of the target V components and the adjustment coefficients, respectively obtaining 3V component complementary pixel values; the target V component is the V component in the corresponding YUV420 format sub data block;
the 3U-component supplemental pixel values and the 3V-component supplemental pixel values are combined into one block of supplemental data in UV420 format.
A fourth aspect of the present application provides a decoding device for a video image, where the device includes a second processing unit, where the second processing unit is configured to:
decoding a first path of video code stream and a second path of video code stream of a video image frame in a YUV444 format to respectively obtain a plurality of sub-data blocks in a YUV420 format and a plurality of supplementary data blocks in a UV420 format, wherein the supplementary data blocks in the UV420 format are in one-to-one correspondence with the sub-data blocks in the YUV420 format;
according to the sub-data blocks in the YUV420 format and the supplementary data blocks corresponding to the sub-data blocks in the YUV420 format, obtaining the original data blocks corresponding to the supplementary data blocks in the UV420 format;
and combining YUV components in the plurality of YUV420 format sub-data blocks with UV components in the corresponding original data blocks to obtain a video image frame.
With reference to the fourth aspect, in one possible implementation manner, the sub data block and the corresponding supplemental data block are obtained by splitting a 2×2 pixel block in a video image frame, where the sub data block is obtained by combining 4Y components, 1U component and 1V component in the 2×2 pixel block, and the supplemental data block corresponding to the sub data block is obtained by combining 3U components and 3V components remaining in the 2×2 pixel block;
In terms of obtaining the original data blocks corresponding to the plurality of supplemental data blocks in the UV420 format according to the plurality of sub data blocks in the YUV420 format and the plurality of supplemental data blocks corresponding to the sub data blocks in the YUV420 format, respectively, the second processing unit is specifically configured to:
for each supplementary data block, respectively obtaining original pixel values of 3U components according to the supplementary pixel values of 3U components in each supplementary data block and the pixel values of the target U components; the target U component is the U component in the corresponding YUV420 format sub data block;
respectively obtaining original pixel values of 3V components according to the 3V component complementary pixel values in each complementary data block and the pixel value of the target V component; the target V component is the V component in the corresponding YUV420 format sub data block;
and obtaining a corresponding original data block according to the original pixel values of the 3U components and the original pixel values of the 3V components.
With reference to the fourth aspect, in one possible implementation manner, the sub data block and the corresponding supplemental data block are obtained by splitting a 2×2 pixel block in a video image frame, where the sub data block is obtained by combining 4Y components, 1U component and 1V component in the 2×2 pixel block, and the supplemental data block corresponding to the sub data block is obtained by combining 3U components and 3V components remaining in the 2×2 pixel block;
In terms of obtaining the original data blocks corresponding to the plurality of supplemental data blocks in the UV420 format according to the plurality of sub data blocks in the YUV420 format and the plurality of supplemental data blocks corresponding to the sub data blocks in the YUV420 format, respectively, the second processing unit is specifically configured to:
for each supplementary data block, respectively obtaining original pixel values of 3U components according to the 3U component supplementary pixel values in each supplementary data block, the pixel value of the target U component and the adjustment coefficient; the target U component is a U component in a corresponding YUV420 format sub-data block;
respectively obtaining original pixel values of the 3V components according to the 3V component complementary pixel values in each complementary data block, the pixel value of the target V component and the adjustment coefficient; the target V component is a V component in a corresponding YUV420 format sub-data block;
and obtaining a corresponding original data block according to the original pixel values of the 3U components and the original pixel values of the 3V components.
It can be understood that, since the method embodiment and the apparatus embodiment are different presentation forms of the same technical concept, the content of the first aspect of the embodiment of the present application should be synchronously adapted to the third aspect of the embodiment of the present application, and the content of the second aspect of the embodiment of the present application should be synchronously adapted to the fourth aspect of the embodiment of the present application, and the same or similar beneficial effects can be achieved, which is not repeated herein.
A fifth aspect of the embodiments provides an encoding device comprising a first processor, a first memory, a first communication interface, and one or more programs stored in the first memory and configured to be executed by the first processor, the programs comprising instructions for performing the method of the first aspect.
A sixth aspect of the embodiments provides a decoding device comprising a second processor, a second memory, a second communication interface, and one or more programs stored in the second memory and configured to be executed by the second processor, the programs comprising instructions for performing the method of the second aspect.
A seventh aspect of the embodiments of the present application provides a computer storage medium storing one or more instructions adapted to be loaded by a processor and to perform the steps of the method of the first or second aspect.
An eighth aspect of the embodiments of the present application provides a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause an apparatus to perform the method of the first or second aspect.
The embodiment of the application can bring the following beneficial effects:
it can be seen that, in the embodiment of the application, by splitting the YUV444-format video image into YUV420-format video data and UV420 supplementary data, the video code rate is reduced compared with directly encoding and compressing the YUV444-format video image, so the size of the video code stream can be reduced; the encoded and compressed UV420 supplementary data is relatively small, and the overall code stream is smaller than that obtained by directly encoding and compressing the YUV444-format video image, which is beneficial to controlling storage and transmission costs. In addition, the UV420 supplementary data well compensates for the image loss caused by converting the video data into YUV420 format, which helps guarantee the quality of the video image displayed on the decoding side and achieves a balance between video code stream size and video image quality.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of an application environment provided in an embodiment of the present application;
fig. 2 is a flow chart of a video image encoding method according to an embodiment of the present application;
fig. 3 is a schematic diagram of a splitting manner of video image frames according to an embodiment of the present application;
fig. 4 is a schematic diagram of a sub-data block and a supplemental data block in YUV420 format according to an embodiment of the present application;
fig. 5 is a flowchart of a video image decoding method according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an encoding device for video images according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a decoding device for video images according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of an encoding apparatus according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a decoding device according to an embodiment of the present application.
Detailed Description
In order to enable those skilled in the art to better understand the solution of the present application, the technical solutions in the embodiments of the present application will be described in detail below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, rather than all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments herein without inventive effort shall fall within the scope of protection of the present application.
The terms "comprising" and "having", and any variations thereof, as used in the specification, claims and drawings, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article or apparatus that comprises a series of steps or elements is not limited to the listed steps or elements, but may include other steps or elements that are not expressly listed or that are inherent to such process, method, article or apparatus. Furthermore, the terms "first", "second", "third" and the like are used for distinguishing between different objects and not for describing a particular sequential order.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly understand that the embodiments described herein may be combined with other embodiments.
Referring to fig. 1, fig. 1 is a schematic diagram of an application environment provided in an embodiment of the present application, and as shown in fig. 1, the application environment includes an encoding device, i.e. an encoding side device in video transmission, and a decoding device, i.e. a decoding side device in video transmission. The encoding device comprises a video acquisition unit, a video data splitting unit, a video encoding unit, a supplementary data encoding unit, a video code stream transmission unit and a supplementary data code stream transmission unit; the decoding device comprises a video code stream decoding unit, a complementary data code stream decoding unit, a video data combining unit and a video display unit.
The video acquisition unit is used for acquiring video images, and the video images can be in YUV format or RGB format. In case the video image is not in YUV444 format, the video acquisition unit is further adapted to convert it to YUV444 format.
The video data splitting unit is used for splitting the video image frame in the YUV444 format into one path of video data in the YUV420 format and one path of UV420 supplementary data containing color information, by taking 2×2 pixel blocks as units.
The video coding unit is used for carrying out compression coding on the video data in the YUV420 format according to a video compression coding standard to obtain a video code stream in the YUV420 format. For example, the video compression coding standard may be H.264, H.265, AV1, H.266, etc.
And the supplementary data encoding unit is used for carrying out compression encoding on the UV420 supplementary data according to the encoding mode of the video compression encoding standard or the private protocol to obtain a supplementary data code stream.
And the video code stream transmission unit is used for transmitting the video code stream in the YUV420 format to the decoding equipment.
And the supplementary data code stream transmission unit is used for transmitting the supplementary data code stream to the decoding equipment.
And the video code stream decoding unit is used for decoding the video code stream according to a decoding mode corresponding to the video coding unit side to obtain video data in YUV420 format.
And the complementary data code stream decoding unit is used for decoding the complementary data code stream according to the decoding mode corresponding to the complementary data encoding unit side to obtain UV420 complementary data.
And the video data combination unit is used for combining the video data in the YUV420 format corresponding to the 2×2 pixel blocks and the UV420 supplementary data to obtain a video image in the YUV444 format.
And the video display unit is used for displaying the combined restored video images.
It can be seen that the encoding device splits the YUV444-format video image into YUV420-format video data and UV420 supplementary data. Compared with directly encoding and compressing the YUV444-format video image, the video code rate is reduced and the size of the video code stream can be reduced; the encoded and compressed UV420 supplementary data is relatively small, and the overall code stream is smaller than that obtained by directly encoding and compressing the YUV444-format video image, so the storage and transmission costs can be controlled. In addition, the UV420 supplementary data well compensates for the image loss caused by converting the video data into YUV420 format, which helps guarantee the quality of the video image displayed on the decoding side and achieves a balance between video code stream size and video image quality.
Referring to fig. 2, fig. 2 is a flowchart of a video image encoding method according to an embodiment of the present application. The method may be applied to an encoding device and, as shown in fig. 2, includes steps 201 to 204:
201: a video image frame in YUV444 format is acquired.
In the embodiment of the application, the encoding device may acquire an original video image frame and, when the original video image frame is not in YUV444 format, convert it into YUV444 format. When the original video image frame is in YUV format, in one possible implementation, obtaining the video image frame in YUV444 format may proceed as follows: the encoding device obtains resolution information of the original video image frame, and if the width and/or height of the original video image frame is odd, performs even edge expansion on the original video image frame to obtain the video image frame.
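Illustratively, the even edge expansion may be sketched in Python as follows; the (H, W, 3) NumPy array layout holding the Y, U and V samples of each pixel is an assumed representation used only for this sketch, and the padding replicates the last row/column so that both dimensions become even:

import numpy as np

def pad_to_even(frame_yuv444: np.ndarray) -> np.ndarray:
    # Assumed layout: frame_yuv444 has shape (H, W, 3) holding Y, U, V per pixel.
    h, w, _ = frame_yuv444.shape
    pad_h = h % 2  # one extra row if the height is odd
    pad_w = w % 2  # one extra column if the width is odd
    # Even edge expansion: replicate the last row/column.
    return np.pad(frame_yuv444, ((0, pad_h), (0, pad_w), (0, 0)), mode="edge")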
202: the video image frame is split into a plurality of sub-data blocks in YUV420 format and a plurality of supplemental data blocks in UV420 format.
Wherein, the plurality of UV420 format supplementary data blocks are in one-to-one correspondence with the plurality of YUV420 format sub data blocks.
In this embodiment, as shown in fig. 3, the video image frame is split by taking 2×2 pixel blocks as a unit, where the 2×2 pixel blocks may be split sequentially along the width or height direction, or multiple 2×2 pixel blocks may be split in parallel to increase the splitting speed. Taking the 2×2 pixel block shown in fig. 3 as an example, and referring to fig. 4, positions 0, 1, 2 and 3 each contain a Y component, a U component and a V component. According to the sampling requirement of the YUV420 format, 4 Y components, 1 U component and 1 V component are combined into one sub-data block in YUV420 format, in which the 4 Y components share the 1 U component and the 1 V component; at the same time, the 3 U components and 3 V components remaining in the 2×2 pixel block are combined into one supplementary data block in UV420 format. That is, one sub-data block in YUV420 format may consist of the pixel values of 4 Y components, 1 U component and 1 V component, and one supplementary data block in UV420 format contains the 3 U components and 3 V components corresponding to the 2×2 pixel block. It should be understood that the U component in the YUV420-format sub-data block may be the U component at any of position 0, position 1, position 2 and position 3, and the V component may be the V component at any of position 0, position 1, position 2 and position 3, which is not limited herein. For example, the U component may be the U component at position 0 and the V component may be the V component at position 0, or the U component may be the U component at position 0 and the V component may be the V component at position 1, and so on. Splitting the video image frame by taking 2×2 pixel blocks as a unit yields a plurality of sub-data blocks in YUV420 format and a plurality of supplementary data blocks in UV420 format in one-to-one correspondence with the plurality of sub-data blocks in YUV420 format.
It should be understood that a 2×2 pixel block in YUV444 format contains 4 pixels, each occupying 3 bytes of memory. After it is split into one sub-data block in YUV420 format and one supplementary data block in UV420 format, the YUV420-format sub-data block occupies 1.5 bytes of memory per pixel, i.e. 6 bytes in total, and the UV420-format supplementary data block likewise occupies 1.5 bytes per pixel, i.e. 6 bytes in total.
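Illustratively, the splitting of step 202 may be sketched in Python as follows, assuming an even-sized (H, W, 3) NumPy frame and assuming that position 0 (taken here as the top-left sample of each 2×2 block) supplies the shared U and V of the YUV420-format sub-data block; in this sketch the supplementary block carries the three remaining raw U and V samples, while the difference-based implementations A and B are sketched further below:

import numpy as np

def split_yuv444(frame: np.ndarray):
    # frame: even-sized YUV444 frame of shape (H, W, 3); this layout is an assumption.
    y = frame[:, :, 0]
    u = frame[:, :, 1]
    v = frame[:, :, 2]

    # YUV420-format sub-data: full-resolution Y plus the position-0 (top-left)
    # U and V of every 2x2 block -> 4 Y + 1 U + 1 V = 6 bytes per block.
    u420 = u[0::2, 0::2]
    v420 = v[0::2, 0::2]

    # UV420-format supplementary data: the remaining samples at positions 1, 2, 3
    # of every 2x2 block -> 3 U + 3 V = 6 bytes per block.
    u_rest = np.stack([u[0::2, 1::2], u[1::2, 0::2], u[1::2, 1::2]])
    v_rest = np.stack([v[0::2, 1::2], v[1::2, 0::2], v[1::2, 1::2]])

    return (y, u420, v420), (u_rest, v_rest)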
Illustratively, in one possible implementation A, combining the remaining 3 U components and 3 V components into one UV420-format supplemental data block includes:
according to the pixel values of the 3U components and the pixel values of the target U components, respectively obtaining 3U component complementary pixel values; the target U component is the U component in the corresponding YUV420 format sub data block;
according to the pixel values of the 3V components and the pixel values of the target V components, respectively obtaining 3V component complementary pixel values; the target V component is the V component in the corresponding YUV420 format sub data block;
the 3U-component supplemental pixel values and the 3V-component supplemental pixel values are combined into one block of supplemental data in UV420 format.
Specifically, the pixel value of the target U component is subtracted from each of the remaining 3 U-component pixel values to obtain the 3 U-component supplementary pixel values, and the pixel value of the target V component is subtracted from each of the remaining 3 V-component pixel values to obtain the 3 V-component supplementary pixel values. Assuming that the target U component is the U component at position 0 in fig. 3 and the target V component is the V component at position 0 in fig. 3, the U-component supplementary pixel values and the V-component supplementary pixel values at positions 1, 2 and 3 are obtained using the following formulas:
ΔU1 = U1 - U0, ΔV1 = V1 - V0;
ΔU2 = U2 - U0, ΔV2 = V2 - V0;
ΔU3 = U3 - U0, ΔV3 = V3 - V0;
wherein ΔU1 represents the U-component supplementary pixel value at position 1 and ΔV1 represents the V-component supplementary pixel value at position 1; ΔU2 represents the U-component supplementary pixel value at position 2 and ΔV2 represents the V-component supplementary pixel value at position 2; ΔU3 represents the U-component supplementary pixel value at position 3 and ΔV3 represents the V-component supplementary pixel value at position 3; U1, U2 and U3 represent the U-component pixel values at positions 1, 2 and 3, respectively; V1, V2 and V3 represent the V-component pixel values at positions 1, 2 and 3, respectively; and U0 and V0 represent the U-component pixel value and the V-component pixel value at position 0, respectively.
In this implementation, within a 2×2 pixel block the U-component and V-component pixel values at each position differ only slightly from those at the other positions. Differencing each of the remaining 3 U-component pixel values with the target U-component pixel value, and each of the remaining 3 V-component pixel values with the target V-component pixel value, therefore yields U-component and V-component supplementary pixel values with small magnitudes. Using these small values as supplementary data reduces the code rate/bandwidth occupied by the UV supplementary data and allows the supplementary data to be compressed at a higher compression ratio, which helps reduce the size and transmission bandwidth of the supplementary data.
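Illustratively, implementation A may be sketched as follows under the same assumed layout (u_rest and v_rest of shape (3, H/2, W/2), u420 and v420 of shape (H/2, W/2)); the signed 16-bit type is an illustrative choice for holding negative differences:

import numpy as np

def supplemental_mode_a(u_rest, v_rest, u420, v420):
    # Implementation A: dU_i = U_i - U_0 and dV_i = V_i - V_0 for i = 1, 2, 3.
    du = u_rest.astype(np.int16) - u420.astype(np.int16)
    dv = v_rest.astype(np.int16) - v420.astype(np.int16)
    return du, dv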
Illustratively, in one possible implementation B, combining the remaining 3 U components and 3 V components into one UV420-format supplemental data block includes:
according to the pixel values of the 3U components, the pixel values of the target U components and the adjustment coefficients, respectively obtaining 3U component complementary pixel values; the target U component is the U component in the corresponding YUV420 format sub data block;
according to the pixel values of the 3V components, the pixel values of the target V components and the adjustment coefficients, respectively obtaining 3V component complementary pixel values; the target V component is the V component in the corresponding YUV420 format sub data block;
the 3U-component supplemental pixel values and the 3V-component supplemental pixel values are combined into one block of supplemental data in UV420 format.
Specifically, the pixel value of the target U component is subtracted from each of the remaining 3 U-component pixel values and the adjustment coefficient is added to each difference, giving the 3 U-component supplementary pixel values; likewise, the pixel value of the target V component is subtracted from each of the remaining 3 V-component pixel values and the adjustment coefficient is added to each difference, giving the 3 V-component supplementary pixel values. The adjustment coefficient may be, for example, 128. Assuming that the target U component is the U component at position 0 in fig. 3 and the target V component is the V component at position 0 in fig. 3, the U-component supplementary pixel values and the V-component supplementary pixel values at positions 1, 2 and 3 are obtained using the following formulas:
ΔU1 = 128 + (U1 - U0), ΔV1 = 128 + (V1 - V0);
ΔU2 = 128 + (U2 - U0), ΔV2 = 128 + (V2 - V0);
ΔU3 = 128 + (U3 - U0), ΔV3 = 128 + (V3 - V0).
In this implementation, within a 2×2 pixel block the U-component and V-component pixel values at each position differ only slightly from those at the other positions, so when the remaining 3 U-component pixel values are differenced with the target U-component pixel value and the remaining 3 V-component pixel values are differenced with the target V-component pixel value, the differences may be negative. Storing negative values requires a signed data type, which occupies relatively more memory than an unsigned one; the adjustment coefficient is therefore used to keep the computed supplementary pixel values positive as far as possible, further reducing memory usage.
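Illustratively, implementation B may be sketched under the same assumptions as follows; the adjustment coefficient of 128 follows the example above, and the clip to the 8-bit range is an added assumption made so that the supplementary values fit unsigned byte samples, since the disclosure does not specify this:

import numpy as np

ADJUST = 128  # adjustment coefficient used in the example above

def supplemental_mode_b(u_rest, v_rest, u420, v420, adjust=ADJUST):
    # Implementation B: dU_i = adjust + (U_i - U_0), dV_i = adjust + (V_i - V_0).
    du = adjust + (u_rest.astype(np.int16) - u420.astype(np.int16))
    dv = adjust + (v_rest.astype(np.int16) - v420.astype(np.int16))
    # Assumed: clamp to [0, 255] so the values can be stored as 8-bit samples.
    return np.clip(du, 0, 255).astype(np.uint8), np.clip(dv, 0, 255).astype(np.uint8)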
For example, in one possible implementation, before splitting the video image frame into a plurality of sub-data blocks in YUV420 format and a plurality of supplementary data blocks in UV420 format, the encoding device may further detect color information of the video image frame. When the color information is not rich (for example, the colors are not complex and the corresponding pixel values are smaller than a certain threshold), the original pixel values of the 3 U components and 3 V components remaining in each 2×2 pixel block may be used directly as supplementary data; when the color information is rich (for example, the colors are complex and the corresponding pixel values are greater than or equal to a certain threshold), the splitting manner of step 202 is used and the 3 U-component and 3 V-component supplementary pixel values serve as supplementary data. That is, embodiments of the present application can adaptively split and encode according to the content of the video image.
203: and encoding the plurality of sub-data blocks in YUV420 format according to a video compression encoding standard to obtain a first path of video code stream.
In the embodiment of the application, a video compression coding standard such as H.264, H.265, AV1 or H.266 is adopted to encode and compress the plurality of sub-data blocks in YUV420 format, so as to obtain the first path of video code stream of the video image frame.
204: and encoding the plurality of UV420 format supplementary data blocks according to the encoding mode of the video compression encoding standard or the private protocol to obtain a second path of video code stream.
In this embodiment of the present application, the plurality of UV420-format supplementary data blocks may be encoded and compressed in the same manner as in step 203, or may be encoded and compressed using a proprietary-protocol encoding mode, which is not limited herein. That is, the embodiment of the application can achieve efficient compression of YUV444-format video image frames using only a video encoder that supports the YUV420 format; it is compatible with standard video coding frameworks and also supports customized proprietary-protocol video compression.
The encoding device transmits the first path of video code stream and the second path of video code stream to the decoding device; the decoding device decodes and combines the two code streams and finally displays the video image frame.
It can be seen that the encoding device splits the YUV444-format video image frame into YUV420-format video data and UV420 supplementary data. Compared with directly encoding and compressing the YUV444-format video image frame, the video code rate is reduced and the size of the video code stream can be reduced; the encoded and compressed UV420 supplementary data is relatively small, and the overall code stream is smaller than that obtained by directly encoding and compressing the YUV444-format video image frame, which is beneficial to controlling storage and transmission costs.
Referring to fig. 5, fig. 5 is a flowchart of a video image decoding method according to an embodiment of the present application. The method may be applied to a decoding device and, as shown in fig. 5, includes steps 501 to 503:
501: decoding the first path of video code stream and the second path of video code stream of the video image frame in the YUV444 format to respectively obtain a plurality of sub-data blocks in the YUV420 format and a plurality of supplementary data blocks in the UV420 format.
Wherein, the plurality of UV420 format supplementary data blocks are in one-to-one correspondence with the plurality of YUV420 format sub data blocks.
In the embodiment of the application, the decoding device decodes the first path of video code stream and the second path of video code stream using decoding modes corresponding to the encoding modes used on the encoding device side. For example, if the first path of video code stream was encoded according to a video compression coding standard, the decoding device decodes it according to that standard; if the second path of video code stream was encoded according to a video compression coding standard, the decoding device decodes it according to that standard, and if it was encoded according to a private protocol, the decoding device decodes it according to the private protocol.
Decoding the first path of video code stream yields the pixel values of the 4 Y components, 1 U component and 1 V component in each YUV420-format sub-data block, and decoding the second path of video code stream yields the 3 U-component supplementary pixel values and 3 V-component supplementary pixel values in each UV420-format supplementary data block.
502: and obtaining the original data blocks corresponding to the plurality of UV420-format supplementary data blocks according to the plurality of YUV420-format sub-data blocks and the supplementary data blocks respectively corresponding to the plurality of YUV420-format sub-data blocks.
In the embodiment of the present application, if the 3 U-component supplementary pixel values and 3 V-component supplementary pixel values were obtained by means of implementation A of the embodiment shown in fig. 2, then for each supplementary data block the original pixel values of the 3 U components are obtained from the 3 U-component supplementary pixel values in that supplementary data block and the pixel value of the target U component, where the target U component is the U component in the corresponding YUV420-format sub-data block; the original pixel values of the 3 V components are obtained from the 3 V-component supplementary pixel values in that supplementary data block and the pixel value of the target V component, where the target V component is the V component in the corresponding YUV420-format sub-data block; and the corresponding original data block is obtained from the original pixel values of the 3 U components and the 3 V components. Assuming that the target U component is the U component at position 0 in fig. 3 and the target V component is the V component at position 0 in fig. 3, the U-component pixel values and V-component pixel values at positions 1, 2 and 3 are obtained using the following formulas:
U1 = ΔU1 + U0, V1 = ΔV1 + V0;
U2 = ΔU2 + U0, V2 = ΔV2 + V0;
U3 = ΔU3 + U0, V3 = ΔV3 + V0.
That is, U1, U2, U3, V1, V2 and V3 constitute the original data block corresponding to the supplementary data block.
In the embodiment of the present application, if the 3 U-component supplementary pixel values and 3 V-component supplementary pixel values were obtained by means of implementation B of the embodiment shown in fig. 2, then for each supplementary data block the original pixel values of the 3 U components are obtained from the 3 U-component supplementary pixel values in that supplementary data block, the pixel value of the target U component and the adjustment coefficient; the original pixel values of the 3 V components are obtained from the 3 V-component supplementary pixel values in that supplementary data block, the pixel value of the target V component and the adjustment coefficient; and the corresponding original data block is obtained from the original pixel values of the 3 U components and the 3 V components. Assuming that the target U component is the U component at position 0 in fig. 3 and the target V component is the V component at position 0 in fig. 3, the U-component pixel values and V-component pixel values at positions 1, 2 and 3 are obtained using the following formulas:
U1 = ΔU1 - 128 + U0, V1 = ΔV1 - 128 + V0;
U2 = ΔU2 - 128 + U0, V2 = ΔV2 - 128 + V0;
U3 = ΔU3 - 128 + U0, V3 = ΔV3 - 128 + V0.
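Illustratively, the inverse computation of step 502 may be sketched as follows under the same assumed layout, where adjust = 0 corresponds to implementation A and adjust = 128 to implementation B; how the decoder learns which mode and coefficient were used (for example, by out-of-band signalling) is an assumption of this sketch:

import numpy as np

def restore_chroma(du, dv, u420, v420, adjust=0):
    # adjust = 0   -> implementation A: U_i = dU_i + U_0
    # adjust = 128 -> implementation B: U_i = dU_i - 128 + U_0
    u_rest = du.astype(np.int16) - adjust + u420.astype(np.int16)
    v_rest = dv.astype(np.int16) - adjust + v420.astype(np.int16)
    # Clamp to the 8-bit range before casting back (an assumption, since the
    # disclosure does not state how out-of-range values are handled).
    return (np.clip(u_rest, 0, 255).astype(np.uint8),
            np.clip(v_rest, 0, 255).astype(np.uint8))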
503: and combining YUV components in the plurality of YUV420 format sub-data blocks with UV components in the corresponding original data blocks to obtain a video image frame.
In this embodiment of the present application, for any 2×2 pixel block, after obtaining the pixel value of the Y component, the pixel value of the U component, and the pixel value of the V component at each position of the 2×2 pixel block, the pixel values are combined to obtain the original pixel value of the 2×2 pixel block, and after combining and restoring the YUV components of the video image frame, the video image frame can be displayed.
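Illustratively, the recombination of step 503 mirrors the split sketched earlier, with the assumed position ordering 0 = top-left, 1 = top-right, 2 = bottom-left, 3 = bottom-right:

import numpy as np

def combine_yuv444(y, u420, v420, u_rest, v_rest):
    # Rebuild an (H, W, 3) YUV444 frame from the YUV420-format sub-data
    # (y, u420, v420) and the restored chroma (u_rest, v_rest) of shape (3, H/2, W/2).
    h, w = y.shape
    frame = np.empty((h, w, 3), dtype=np.uint8)
    frame[:, :, 0] = y
    for channel, shared, rest in ((1, u420, u_rest), (2, v420, v_rest)):
        frame[0::2, 0::2, channel] = shared   # position 0 (top-left)
        frame[0::2, 1::2, channel] = rest[0]  # position 1 (top-right)
        frame[1::2, 0::2, channel] = rest[1]  # position 2 (bottom-left)
        frame[1::2, 1::2, channel] = rest[2]  # position 3 (bottom-right)
    return frame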
It can be seen that the decoding device decodes the two code streams of the YUV444-format video image frame according to the corresponding encoding modes, restores the decoded UV420-format supplementary data blocks to original data blocks, and combines the original data blocks with the corresponding YUV420-format sub-data blocks to obtain the YUV444-format video image frame. The UV420 supplementary data well compensates for the image loss caused by converting the video image frame into YUV420 format, which helps guarantee the quality of the video image displayed on the decoding side.
Referring to fig. 6, fig. 6 is a schematic structural diagram of a video image encoding apparatus according to an embodiment of the present application, and as shown in fig. 6, the apparatus includes a first transceiver unit 601 and a first processing unit 602; the first processing unit 602 is configured to:
acquiring a video image frame in YUV444 format;
splitting the video image frame into a plurality of sub-data blocks in YUV420 format and a plurality of supplementary data blocks in UV420 format, wherein the plurality of supplementary data blocks in UV420 format are in one-to-one correspondence with the plurality of sub-data blocks in YUV420 format;
encoding the plurality of YUV420-format sub-data blocks according to a video compression coding standard to obtain a first path of video code stream; and
encoding the plurality of UV420-format supplementary data blocks according to a video compression coding standard or an encoding mode of a private protocol to obtain a second path of video code stream.
The first transceiver unit 601 is configured to cooperate with the first processing unit 602 in receiving/transmitting data, for example, sending the first path of video code stream and the second path of video code stream to the decoding side.
In one possible implementation, in splitting a video image frame into a plurality of sub-data blocks in YUV420 format and a plurality of supplementary data blocks in UV420 format, the first processing unit 602 is specifically configured to:
for each 2 x 2 pixel block in the video image frame, taking the 2 x 2 pixel block as a unit, combining the 4 Y components, 1 U component and 1 V component of the 2 x 2 pixel block into one sub-data block in YUV420 format, to obtain a plurality of sub-data blocks in YUV420 format;
for each 2 x 2 pixel block, combining the remaining 3 U components and 3 V components into one supplementary data block in UV420 format, to obtain a plurality of supplementary data blocks in UV420 format.
In one possible implementation, in combining the remaining 3 U components and 3 V components into one supplementary data block in UV420 format, the first processing unit 602 is specifically configured to:
according to the pixel values of the 3U components and the pixel values of the target U components, respectively obtaining 3U component complementary pixel values; the target U component is the U component in the corresponding YUV420 format sub data block;
According to the pixel values of the 3V components and the pixel values of the target V components, respectively obtaining 3V component complementary pixel values; the target V component is the V component in the corresponding YUV420 format sub data block;
the 3U-component supplemental pixel values and the 3V-component supplemental pixel values are combined into one block of supplemental data in UV420 format.
In one possible implementation, in combining the remaining 3 U components and 3 V components into one supplementary data block in UV420 format, the first processing unit 602 is specifically configured to:
according to the pixel values of the 3U components, the pixel values of the target U components and the adjustment coefficients, respectively obtaining 3U component complementary pixel values; the target U component is the U component in the corresponding YUV420 format sub data block;
according to the pixel values of the 3V components, the pixel values of the target V components and the adjustment coefficients, respectively obtaining 3V component complementary pixel values; the target V component is the V component in the corresponding YUV420 format sub data block;
the 3U-component supplemental pixel values and the 3V-component supplemental pixel values are combined into one block of supplemental data in UV420 format.
Referring to fig. 7, fig. 7 is a schematic structural diagram of a video image decoding apparatus according to an embodiment of the present application, and as shown in fig. 7, the apparatus includes a second transceiver unit 701 and a second processing unit 702; the second processing unit 702 is configured to:
Decoding a first path of video code stream and a second path of video code stream of a video image frame in a YUV444 format to respectively obtain a plurality of sub-data blocks in a YUV420 format and a plurality of supplementary data blocks in a UV420 format, wherein the supplementary data blocks in the UV420 format are in one-to-one correspondence with the sub-data blocks in the YUV420 format;
according to the sub-data blocks in the YUV420 format and the supplementary data blocks corresponding to the sub-data blocks in the YUV420 format, obtaining the original data blocks corresponding to the supplementary data blocks in the UV420 format;
and combining YUV components in the plurality of YUV420 format sub-data blocks with UV components in the corresponding original data blocks to obtain a video image frame.
The second transceiver unit 701 is configured to cooperate with the second processing unit 702 in receiving/transmitting data.
In one possible implementation manner, the sub data block and the corresponding supplementary data block are obtained by splitting a 2×2 pixel block in a video image frame, the sub data block is obtained by combining 4Y components, 1U component and 1V component in the 2×2 pixel block, and the supplementary data block corresponding to the sub data block is obtained by combining 3U components and 3V components remaining in the 2×2 pixel block;
in terms of obtaining the original data blocks corresponding to the plurality of supplemental data blocks in the UV420 format according to the plurality of sub data blocks in the YUV420 format and the plurality of supplemental data blocks corresponding to the sub data blocks in the YUV420 format, the second processing unit 702 is specifically configured to:
For each supplementary data block, respectively obtaining original pixel values of 3U components according to the supplementary pixel values of 3U components in each supplementary data block and the pixel values of the target U components; the target U component is the U component in the corresponding YUV420 format sub data block;
respectively obtaining original pixel values of 3V components according to the 3V component complementary pixel values in each complementary data block and the pixel value of the target V component; the target V component is the V component in the corresponding YUV420 format sub data block;
and obtaining a corresponding original data block according to the original pixel values of the 3U components and the original pixel values of the 3V components.
In one possible implementation manner, the sub data block and the corresponding supplementary data block are obtained by splitting a 2×2 pixel block in a video image frame, the sub data block is obtained by combining 4Y components, 1U component and 1V component in the 2×2 pixel block, and the supplementary data block corresponding to the sub data block is obtained by combining 3U components and 3V components remaining in the 2×2 pixel block;
in terms of obtaining the original data blocks corresponding to the plurality of supplemental data blocks in the UV420 format according to the plurality of sub data blocks in the YUV420 format and the plurality of supplemental data blocks corresponding to the sub data blocks in the YUV420 format, the second processing unit 702 is specifically configured to:
For each supplementary data block, respectively obtaining original pixel values of 3U components according to the 3U component supplementary pixel values in each supplementary data block, the pixel value of the target U component and the adjustment coefficient; the target U component is a U component in a corresponding YUV420 format sub-data block;
respectively obtaining original pixel values of the 3V components according to the 3V component complementary pixel values in each complementary data block, the pixel value of the target V component and the adjustment coefficient; the target V component is a V component in a corresponding YUV420 format sub-data block;
and obtaining a corresponding original data block according to the original pixel values of the 3U components and the original pixel values of the 3V components.
According to one embodiment of the present application, each unit in the apparatus shown in fig. 6 and fig. 7 may be separately or completely combined into one or several additional units, or some unit(s) thereof may be further split into a plurality of units with smaller functions, which may achieve the same operation without affecting the implementation of the technical effects of the embodiments of the present application. The above units are divided based on logic functions, and in practical applications, the functions of one unit may be implemented by a plurality of units, or the functions of a plurality of units may be implemented by one unit. In other embodiments of the present application, the apparatus shown in fig. 6 and 7 may also include other units, and in practical applications, these functions may also be implemented with assistance by other units, and may be implemented by cooperation of a plurality of units.
According to another embodiment of the present application, the apparatus shown in fig. 6 or fig. 7 may be constructed by running a computer program (including program code) capable of executing the steps of the corresponding method shown in fig. 2 or fig. 5 on a general-purpose computing device, such as a computer, that includes a processing element such as a central processing unit (CPU) and storage elements such as a random access memory (RAM) and a read-only memory (ROM), thereby implementing the video image encoding method or decoding method of the embodiments of the present application. The computer program may be recorded on, for example, a computer-readable recording medium, and loaded into and executed by the above computing device via the computer-readable recording medium.
Based on the description of the method embodiment and the apparatus embodiment, the present application provides an encoding device 800, referring to fig. 8, where the encoding device 800 includes at least a first processor 801, a first memory 802, and a first communication interface 803, and the first processor 801, the first memory 802, and the first communication interface 803 are connected to each other through a bus.
The first memory 802 includes, but is not limited to, RAM, ROM, erasable programmable read-only memory (EPROM), or compact disc read-only memory (CD-ROM), and is used for storing related computer programs and data. The first communication interface 803 is used for receiving and transmitting data.
The first processor 801 may be one or more CPUs, and in the case where the first processor 801 is one CPU, the CPU may be a single-core CPU or a multi-core CPU.
The first processor 801 in the encoding apparatus 800 is configured to read the computer program code stored in the first memory 802, and perform the following operations:
acquiring a video image frame in YUV444 format;
splitting a video image frame into a plurality of sub-data blocks in YUV420 format and a plurality of supplementary data blocks in UV420 format, wherein the supplementary data blocks in UV420 format are in one-to-one correspondence with the sub-data blocks in YUV420 format;
coding the plurality of sub-data blocks in YUV420 format according to a video compression coding standard to obtain a first path of video code stream;
and encoding the plurality of UV420 format supplementary data blocks according to the encoding mode of the video compression encoding standard or the private protocol to obtain a second path of video code stream.
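A minimal top-level sketch of the two-stream flow performed by the first processor 801 follows. The splitter and the two encoder hooks are passed in as callables because the concrete splitting routine, the standard encoder (e.g. H.264/H.265/AV1) and the optional proprietary-protocol encoder are not fixed here; every name and signature below is an assumption, not part of this disclosure.

```python
from typing import Callable, Tuple

def encode_yuv444_frame(frame_yuv444,
                        splitter: Callable,
                        encode_standard: Callable,
                        encode_supplementary: Callable) -> Tuple[bytes, bytes]:
    """Hypothetical two-stream encoding flow for one YUV444 video image frame."""
    # Split the frame into YUV420 sub-data blocks and the one-to-one
    # corresponding UV420 supplementary data blocks.
    sub_blocks, supp_blocks = splitter(frame_yuv444)

    # First video code stream: standard-compliant encoding of the YUV420 part,
    # so existing YUV420 hardware/software encoders can be reused.
    stream_1 = encode_standard(sub_blocks)

    # Second video code stream: the supplementary chroma data, encoded either
    # with the same standard encoder or with a proprietary protocol.
    stream_2 = encode_supplementary(supp_blocks)
    return stream_1, stream_2
```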
It should be noted that, for the implementation of the above operations, reference may also be made to the corresponding description of the method embodiment shown in fig. 2. Since the first processor 801 of the encoding device 800 implements the steps of the video image encoding method of the embodiments of the present application when executing the computer program, the embodiments of the video image encoding method are applicable to the encoding device, and the same or similar advantageous effects can be achieved.
The embodiment of the present application further provides a decoding device 900, referring to fig. 9, where the decoding device 900 includes at least a second processor 901, a second memory 902, and a second communication interface 903, where the second processor 901, the second memory 902, and the second communication interface 903 are connected to each other by a bus.
The second memory 902 includes, but is not limited to, RAM, ROM, erasable programmable read-only memory (EPROM), or compact disc read-only memory (CD-ROM), and is used for storing related computer programs and data. The second communication interface 903 is used for receiving and transmitting data.
The second processor 901 may be one or more CPUs, and in the case where the second processor 901 is one CPU, the CPU may be a single core CPU or a multi-core CPU.
The second processor 901 in the decoding device 900 is configured to read the computer program code stored in the second memory 902, and perform the following operations:
decoding a first path of video code stream and a second path of video code stream of a video image frame in a YUV444 format to respectively obtain a plurality of sub-data blocks in a YUV420 format and a plurality of supplementary data blocks in a UV420 format, wherein the supplementary data blocks in the UV420 format are in one-to-one correspondence with the sub-data blocks in the YUV420 format;
According to the sub-data blocks in the YUV420 format and the supplementary data blocks corresponding to the sub-data blocks in the YUV420 format, obtaining the original data blocks corresponding to the supplementary data blocks in the UV420 format;
and combining YUV components in the plurality of YUV420 format sub-data blocks with UV components in the corresponding original data blocks to obtain a video image frame.
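A sketch of this final merging step is given below, assuming planar 8-bit frames with even dimensions and assuming that the chroma sample kept in each YUV420 sub-data block is the top-left sample of its 2×2 block; the plane layout, the position assignment and the function name are assumptions.

```python
import numpy as np

def merge_to_yuv444(y_420, u_420, v_420, supp_u, supp_v):
    """Rebuild full-resolution YUV444 planes from the decoded YUV420 part and
    the recovered original chroma samples (assumed layout).

    y_420          : (H, W) luma plane from the first code stream
    u_420, v_420   : (H/2, W/2) chroma planes kept in the YUV420 sub-data blocks
    supp_u, supp_v : (H/2, W/2, 3) recovered original U / V samples for the other
                     three positions of each 2x2 block
    """
    h, w = y_420.shape
    u_full = np.empty((h, w), dtype=np.uint8)
    v_full = np.empty((h, w), dtype=np.uint8)
    # Position kept in the sub-data block (top-left of each 2x2 block, by assumption).
    u_full[0::2, 0::2] = u_420
    v_full[0::2, 0::2] = v_420
    # The three positions carried by the corresponding supplementary data block.
    u_full[0::2, 1::2] = supp_u[..., 0]
    u_full[1::2, 0::2] = supp_u[..., 1]
    u_full[1::2, 1::2] = supp_u[..., 2]
    v_full[0::2, 1::2] = supp_v[..., 0]
    v_full[1::2, 0::2] = supp_v[..., 1]
    v_full[1::2, 1::2] = supp_v[..., 2]
    return y_420, u_full, v_full
```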
It should be noted that, for the implementation of the above operations, reference may also be made to the corresponding description of the method embodiment shown in fig. 5. Since the second processor 901 of the decoding device 900 implements the steps of the video image decoding method of the embodiments of the present application when executing the computer program, the embodiments of the video image decoding method are applicable to the decoding device, and the same or similar advantageous effects can be achieved.
It should be noted that, in some specific scenarios, the encoding device 800 and the decoding device 900 may be the same device, or may also be different parts in the same device.
The present embodiment further provides a computer storage medium, which is a memory device in the encoding device 800 or the decoding device 900 and is used for storing programs and data. It will be appreciated that the computer storage medium herein may include both a built-in storage medium in the terminal and an extended storage medium supported by the terminal. The computer storage medium provides a storage space that stores the operating system of the terminal. Also stored in the storage space are one or more instructions, which may be one or more computer programs (including program code), adapted to be loaded and executed by the processor. The computer storage medium herein may be a high-speed RAM memory or a non-volatile memory, such as at least one magnetic disk memory; optionally, it may also be at least one computer storage medium located remotely from the aforementioned processor. In one embodiment, one or more instructions stored in the computer storage medium may be loaded and executed by the first processor 801 to implement the corresponding steps of the above video image encoding method. In another embodiment, one or more instructions stored in the computer storage medium may be loaded and executed by the second processor 901 to implement the corresponding steps of the above video image decoding method.
The computer program of the computer storage medium may illustratively include computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a ROM, a RAM, an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth.
The embodiment of the application also provides a computer program product, wherein the computer program product comprises a computer program, and the computer program is operable to cause a computer to execute the steps in the encoding method and the decoding method of the video image. The computer program product may be a software installation package.
The embodiments of the present application have been described in detail above, and specific examples are used herein to illustrate the principles and implementations of the present application; the above description of the embodiments is only intended to help understand the method of the present application and its core ideas. Meanwhile, a person skilled in the art may make changes to the specific implementations and the application scope according to the ideas of the present application. In summary, the contents of this description should not be construed as limiting the present application.

Claims (12)

1. A method of encoding a video image, the method comprising:
acquiring a video image frame in YUV444 format;
splitting the video image frame into a plurality of sub-data blocks in YUV420 format and a plurality of supplementary data blocks in UV420 format, wherein the plurality of supplementary data blocks in UV420 format are in one-to-one correspondence with the plurality of sub-data blocks in YUV420 format;
coding the plurality of YUV420 format sub-data blocks according to a video compression coding standard to obtain a first path of video code stream;
and encoding the plurality of UV420 format supplementary data blocks according to a video compression encoding standard or an encoding mode of a private protocol to obtain a second path of video code stream.
2. The method of claim 1, wherein the splitting the video image frame into a plurality of sub-data blocks in YUV420 format and a plurality of supplemental data blocks in UV420 format comprises:
taking each 2×2 pixel block in the video image frame as a unit, combining 4 Y components, 1 U component and 1 V component of the 2×2 pixel block into one sub-data block in the YUV420 format, so as to obtain the plurality of sub-data blocks in the YUV420 format;
and, for each 2×2 pixel block, combining the remaining 3 U components and 3 V components into one supplementary data block in the UV420 format, so as to obtain the plurality of supplementary data blocks in the UV420 format.
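A minimal per-block sketch of the split described in claim 2, assuming 8-bit samples in row-major order and assuming the top-left chroma sample of each 2×2 block is the one kept in the sub-data block; which sample is kept, and the dictionary layout, are illustrative assumptions only.

```python
import numpy as np

def split_2x2_block(y4, u4, v4):
    """Split one 2x2 pixel block (4 Y, 4 U, 4 V samples) into a YUV420 sub-data
    block and its corresponding UV420 supplementary data block."""
    y4 = np.asarray(y4, dtype=np.uint8).reshape(4)
    u4 = np.asarray(u4, dtype=np.uint8).reshape(4)
    v4 = np.asarray(v4, dtype=np.uint8).reshape(4)
    sub_block = {"Y": y4, "U": u4[0], "V": v4[0]}   # 4 Y + 1 U + 1 V
    supp_block = {"U": u4[1:], "V": v4[1:]}         # remaining 3 U + 3 V
    return sub_block, supp_block
```

Applying this to every 2×2 pixel block of a frame yields the two pluralities of blocks recited above.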
3. The method of claim 2, wherein the combining the remaining 3 U components and 3 V components into one supplementary data block in the UV420 format comprises:
obtaining 3 U-component supplementary pixel values according to the pixel values of the 3 U components and the pixel value of the target U component, respectively; the target U component is the U component in the corresponding YUV420 format sub-data block;
obtaining 3 V-component supplementary pixel values according to the pixel values of the 3 V components and the pixel value of the target V component, respectively; the target V component is the V component in the corresponding YUV420 format sub-data block;
and combining the 3 U-component supplementary pixel values and the 3 V-component supplementary pixel values into one supplementary data block in the UV420 format.
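Claim 3 does not fix the functional form, so the sketch below assumes the supplementary pixel value is the offset difference between each remaining chroma sample and the target component (supplementary = original − target + 128), i.e. the forward counterpart of the reconstruction sketch given earlier; the formula and the 128 offset are assumptions.

```python
import numpy as np

def supplementary_pixel_values(orig_3, target, offset=128):
    """Compute the 3 supplementary pixel values for one chroma channel (U or V)
    from the 3 remaining samples and the target sample kept in the sub-data block.
    Assumption: supplementary = original - target + offset, clipped to [0, 255]."""
    orig_3 = np.asarray(orig_3, dtype=np.int16)
    return np.clip(orig_3 - int(target) + offset, 0, 255).astype(np.uint8)
```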
4. The method of claim 2, wherein the combining the remaining 3 U components and 3 V components into one supplementary data block in the UV420 format comprises:
obtaining 3 U-component supplementary pixel values according to the pixel values of the 3 U components, the pixel value of the target U component and an adjustment coefficient, respectively; the target U component is the U component in the corresponding YUV420 format sub-data block;
obtaining 3 V-component supplementary pixel values according to the pixel values of the 3 V components, the pixel value of the target V component and the adjustment coefficient, respectively; the target V component is the V component in the corresponding YUV420 format sub-data block;
and combining the 3 U-component supplementary pixel values and the 3 V-component supplementary pixel values into one supplementary data block in the UV420 format.
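For claim 4, the sketch below adds the adjustment coefficient as a scale applied to the difference before the offset (supplementary = (original − target) · k + 128); the formula and the example value of k are assumptions, chosen only to show where the coefficient enters.

```python
import numpy as np

def supplementary_pixel_values_scaled(orig_3, target, k=0.5, offset=128):
    """Coefficient-based variant for one chroma channel (U or V).
    Assumption: supplementary = (original - target) * k + offset."""
    diff = np.asarray(orig_3, dtype=np.float32) - float(target)
    return np.clip(np.rint(diff * k + offset), 0, 255).astype(np.uint8)
```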
5. A method of decoding a video image, the method comprising:
decoding a first path of video code stream and a second path of video code stream of a video image frame in a YUV444 format to respectively obtain a plurality of sub-data blocks in a YUV420 format and a plurality of supplementary data blocks in a UV420 format, wherein the plurality of supplementary data blocks in the UV420 format are in one-to-one correspondence with the plurality of sub-data blocks in the YUV420 format;
obtaining original data blocks corresponding to the plurality of UV420 format supplementary data blocks according to the plurality of YUV420 format sub-data blocks and the supplementary data blocks respectively corresponding to the plurality of YUV420 format sub-data blocks;
and combining YUV components in the plurality of YUV420 format sub-data blocks with UV components in the corresponding original data blocks to obtain the video image frame.
6. The method of claim 5, wherein the sub-data block and the corresponding supplementary data block are obtained by splitting a 2×2 pixel block in the video image frame, the sub-data block is obtained by combining 4 Y components, 1 U component and 1 V component in the 2×2 pixel block, and the supplementary data block corresponding to the sub-data block is obtained by combining the 3 U components and 3 V components remaining in the 2×2 pixel block;
the obtaining original data blocks corresponding to the plurality of UV420 format supplementary data blocks according to the plurality of YUV420 format sub-data blocks and the supplementary data blocks respectively corresponding to the plurality of YUV420 format sub-data blocks comprises:
for each supplementary data block, obtaining the original pixel values of the 3 U components according to the 3 U-component supplementary pixel values in that supplementary data block and the pixel value of the target U component, respectively; the target U component is the U component in the corresponding YUV420 format sub-data block;
obtaining the original pixel values of the 3 V components according to the 3 V-component supplementary pixel values in that supplementary data block and the pixel value of the target V component, respectively; the target V component is the V component in the corresponding YUV420 format sub-data block;
and obtaining the corresponding original data block according to the original pixel values of the 3 U components and the original pixel values of the 3 V components.
7. The method of claim 5, wherein the sub-data block and the corresponding supplementary data block are obtained by splitting a 2×2 pixel block in the video image frame, the sub-data block is obtained by combining 4 Y components, 1 U component and 1 V component in the 2×2 pixel block, and the supplementary data block corresponding to the sub-data block is obtained by combining the 3 U components and 3 V components remaining in the 2×2 pixel block;
the obtaining original data blocks corresponding to the plurality of UV420 format supplementary data blocks according to the plurality of YUV420 format sub-data blocks and the supplementary data blocks respectively corresponding to the plurality of YUV420 format sub-data blocks comprises:
for each supplementary data block, obtaining the original pixel values of the 3 U components according to the 3 U-component supplementary pixel values in that supplementary data block, the pixel value of the target U component and an adjustment coefficient, respectively; the target U component is the U component in the corresponding YUV420 format sub-data block;
obtaining the original pixel values of the 3 V components according to the 3 V-component supplementary pixel values in that supplementary data block, the pixel value of the target V component and the adjustment coefficient, respectively; the target V component is the V component in the corresponding YUV420 format sub-data block;
and obtaining the corresponding original data block according to the original pixel values of the 3 U components and the original pixel values of the 3 V components.
8. An apparatus for encoding video images, the apparatus comprising a first processing unit for:
acquiring a video image frame in YUV444 format;
splitting the video image frame into a plurality of sub-data blocks in YUV420 format and a plurality of supplementary data blocks in UV420 format, wherein the plurality of supplementary data blocks in UV420 format are in one-to-one correspondence with the plurality of sub-data blocks in YUV420 format;
coding the plurality of YUV420 format sub-data blocks according to a video compression coding standard to obtain a first path of video code stream;
and encoding the plurality of UV420 format supplementary data blocks according to a video compression encoding standard or an encoding mode of a private protocol to obtain a second path of video code stream.
9. A decoding device for video images, the device comprising a second processing unit for:
decoding a first path of video code stream and a second path of video code stream of a video image frame in a YUV444 format to respectively obtain a plurality of sub-data blocks in a YUV420 format and a plurality of supplementary data blocks in a UV420 format, wherein the plurality of supplementary data blocks in the UV420 format are in one-to-one correspondence with the plurality of sub-data blocks in the YUV420 format;
obtaining original data blocks corresponding to the plurality of UV420 format supplementary data blocks according to the plurality of YUV420 format sub-data blocks and the supplementary data blocks respectively corresponding to the plurality of YUV420 format sub-data blocks;
and combining YUV components in the plurality of YUV420 format sub-data blocks with UV components in the corresponding original data blocks to obtain the video image frame.
10. An encoding device comprising a first processor, a first memory, a first communication interface, and one or more programs stored in the first memory and configured to be executed by the first processor, the programs comprising instructions for performing the method of any of claims 1-4.
11. A decoding device comprising a second processor, a second memory, a second communication interface, and one or more programs stored in the second memory and configured to be executed by the second processor, the programs comprising instructions for performing the method of any of claims 5-7.
12. A computer storage medium storing one or more instructions adapted to be loaded by a processor and to perform the method of any one of claims 1-4 or 5-7.
CN202310948472.9A 2023-07-28 2023-07-28 Video image encoding method, video image decoding method and related equipment Pending CN117729328A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310948472.9A CN117729328A (en) 2023-07-28 2023-07-28 Video image encoding method, video image decoding method and related equipment

Publications (1)

Publication Number Publication Date
CN117729328A (en) 2024-03-19

Family

ID=90207534



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination