CN106954074B - Video data processing method and device


Info

Publication number
CN106954074B
Authority
CN
China
Prior art keywords
pixel
pixel points
group
priority
value
Prior art date
Legal status
Active
Application number
CN201610008432.6A
Other languages
Chinese (zh)
Other versions
CN106954074A (en)
Inventor
刘卫东
Current Assignee
Hisense Visual Technology Co Ltd
Original Assignee
Qingdao Hisense Electronics Co Ltd
Priority date
Filing date
Publication date
Application filed by Qingdao Hisense Electronics Co Ltd filed Critical Qingdao Hisense Electronics Co Ltd
Priority to CN201610008432.6A priority Critical patent/CN106954074B/en
Publication of CN106954074A publication Critical patent/CN106954074A/en
Application granted granted Critical
Publication of CN106954074B publication Critical patent/CN106954074B/en


Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The embodiment of the application provides a video data processing method, which is applied to a video post-processing system and comprises the following steps: acquiring video frame data to be processed provided by the video post-processing system; segmenting the video frame data to be processed into a plurality of independent blocks to be compressed; grouping pixel points in a single block to be compressed to obtain groups with different priorities; and compressing the pixel points of each group according to the priority of each group. In the compression processing, the pixel points within one block to be compressed are divided into groups with different priorities, and each group is compressed with reference only to groups of other priorities within the same block. Consequently, when the compressed code stream generated by the compression processing is decompressed, decompression depends only on pixel points within the same block to be compressed and not on any other block, so that each block to be compressed can be randomly accessed.

Description

Video data processing method and device
Technical Field
The present application relates to the field of video processing technologies, and in particular, to a video data processing method and a video data processing apparatus.
Background
In recent years, the subjective demand for high-quality visual enjoyment and the rapid progress of semiconductor technology have together driven the rapid development of the ultra high definition television industry. However, due to the bandwidth limitation of current transmission systems, ultra high definition television programs can only be transmitted at a lower frame rate. Meanwhile, the refresh rate of large-screen display devices has improved greatly; when the video frame rate is lower than the screen refresh rate, this mismatch directly causes smearing, judder, blurring and similar artifacts, and the display effect is poor. Video frame rate up-conversion, as an important video post-processing technique, can effectively raise the frame rate of the displayed video and improve the subjective quality of the image on a high-refresh-rate display screen.
The input of the ultra high definition video frame rate up-conversion system is a set of image sequences with a fixed frame rate; after a series of motion estimation, vector post-processing and interpolation operations, the output is a set of image sequences with a higher frame rate. This results in a significant increase in the data read-write throughput between the ultra high definition video frame rate up-conversion core and the off-chip cache. Specifically, motion estimation, vector post-processing and interpolation need to read a large amount of pixel data of the forward and backward reference frames from the off-chip cache; the interpolated image sequence generated by frame rate up-conversion needs to be written into the off-chip cache; and the display output port needs to read both the original image sequence and the image sequence generated by frame rate up-conversion from the off-chip cache for display on the screen.
However, at the current state of the art, the speed of CMOS integrated circuits is limited, and memory access speed has increasingly lagged behind that of logic circuits, so memory access bandwidth is a bottleneck that restricts system performance; meanwhile, large-scale data writing and reading also greatly increase the energy consumption of the system.
In order to relieve the bandwidth and energy consumption bottlenecks, compressing off-chip cache data before writing it is an effective and feasible method. In an ultra high definition video frame rate up-conversion system, the encoding and decoding processes must be completed at high speed in real time; random access to pixel blocks within a frame is required to reduce errors in motion estimation; and lossless or minimal compression loss is required, while a high compression ratio is not emphasized.
Disclosure of Invention
In view of the above problems, embodiments of the present application are proposed to provide a video data processing method and a corresponding video data processing apparatus that overcome or at least partially solve the above problems.
In order to solve the above problem, an embodiment of the present application discloses a video data processing method, which is applied to a video post-processing system, and the method includes:
acquiring video frame data to be processed provided by the video post-processing system;
segmenting the video frame data to be processed into a plurality of independent blocks to be compressed;
grouping pixel points in a single block to be compressed to obtain groups with different priorities;
and compressing the pixel points of each group according to the priority of each group.
Preferably, the step of grouping the pixel points in a single block to be compressed to obtain the groups with different priorities includes:
determining a pixel point region to be segmented in a block to be compressed;
uniformly dividing the pixel point region to be divided into two sub-regions;
performing iteration and uniform segmentation on each subarea until the interval between an undivided pixel point and a segmented pixel point is less than or equal to 1, and stopping segmentation;
generating the priority of each pixel point according to the sequentially segmented result; the priority of the non-divided pixel points is sequenced after the priority of the divided pixel points.
Preferably, the step of compressing the pixel points of each group according to the priority of each group includes:
for the pixel points of the group with the highest priority, taking a first preset number of high-order bits of the original pixel value of the pixel point as the residual value of the pixel point;
performing prediction processing on the pixel points of the groups other than the group with the highest priority according to the priority of the groups to obtain residual values of the pixel points; the prediction processing includes: predicting the pixel points of the current group by using reconstruction values generated from the residual values of the pixel points of higher-priority groups; the residual value of a pixel point is the difference between the original pixel value of the pixel point and the predicted value of the pixel point.
Preferably, the step of compressing the pixel points of each group according to the priority of each group further includes:
and quantizing the residual values of the pixel points of the groups except the group with the highest priority to obtain the quantized residual values of the pixel points of the groups except the group with the highest priority.
Preferably, the step of compressing the pixel points of each group according to the priority of each group further includes:
for the pixel points of the group with the highest priority, zero-padding the low-order bits of the residual values of the pixel points to obtain reconstruction values with the same number of bits as the original pixel values;
and performing inverse quantization and reconstruction processing on the quantized residual values of the pixel points of the groups other than the group with the highest priority to obtain the reconstruction values of the pixel points of the groups other than the group with the highest priority.
Preferably, the step of compressing the pixel points of each group according to the priority of each group further includes:
and entropy coding is carried out on the quantized residual error values of the pixel points of the groups other than the group with the highest priority to obtain the coded residual error values of the pixel points of the groups other than the group with the highest priority.
Preferably, the step of compressing the pixel points of each group according to the priority of each group further includes:
and carrying out code stream packing processing on the residual values after coding of the pixel points of the groups except the group with the highest priority and the residual values of the pixel points of the group with the highest priority according to the priority to obtain packed subcode streams of all the groups.
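Taken together, the steps above form a small per-block pipeline: truncate the highest-priority group to its high-order bits, then predict, quantize and reconstruct the remaining groups in priority order. The following Python sketch illustrates this under stated assumptions — the predictor (mean of the nearest reconstructed neighbours on each side) and the uniform shift quantizer are placeholders chosen for illustration, since the application only requires predicting from reconstruction values of higher-priority groups; entropy coding and code stream packing are omitted.

```python
KEEP_BITS = 5              # "first preset value": high-order bits kept for the first group
DROP_BITS = 8 - KEEP_BITS  # assuming 8-bit pixel values
QSHIFT = 1                 # illustrative uniform quantization step (right shift)

def predict(recon, i):
    """Mean of the nearest already-reconstructed neighbours (an assumption;
    the application only requires using higher-priority reconstruction values)."""
    left = next((recon[j] for j in range(i - 1, -1, -1) if recon[j] is not None), None)
    right = next((recon[j] for j in range(i + 1, len(recon)) if recon[j] is not None), None)
    cands = [v for v in (left, right) if v is not None]
    return sum(cands) // len(cands)

def compress_block(pixels, priorities):
    """pixels: 8-bit values of one block; priorities: same-length list, 1 = highest.
    Returns (residuals, reconstruction), processing groups from high to low priority."""
    n = len(pixels)
    recon = [None] * n
    residuals = [None] * n
    for level in sorted(set(priorities)):
        for i in (j for j in range(n) if priorities[j] == level):
            if level == 1:
                residuals[i] = pixels[i] >> DROP_BITS        # keep high-order bits
                recon[i] = residuals[i] << DROP_BITS         # zero-pad low-order bits
            else:
                pred = predict(recon, i)
                residuals[i] = (pixels[i] - pred) >> QSHIFT  # quantized residual
                recon[i] = pred + (residuals[i] << QSHIFT)   # inverse-quantize, reconstruct
    return residuals, recon
```

With a flat block, the residuals of all lower-priority groups collapse to zero, which is what makes the subsequent entropy coding effective.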
Correspondingly, the present application further discloses a video data processing apparatus, applied to a video post-processing system, the apparatus including:
the video frame data acquisition module is used for acquiring video frame data to be processed, which is provided by the video post-processing system;
a block to be compressed generation module, configured to segment the video frame data to be processed into multiple independent blocks to be compressed;
the grouping module is used for grouping the pixel points in a single block to be compressed to obtain groups with different priorities;
and the compression module is used for compressing the pixel points of each group according to the priority of each group.
Preferably, the grouping module further comprises:
the region determining submodule is used for determining pixel point regions to be segmented in the blocks to be compressed;
the first segmentation submodule is used for uniformly segmenting the pixel point region to be segmented into two subregions;
the second segmentation submodule is used for iterating and uniformly segmenting each subarea until the interval between an undivided pixel point and a segmented pixel point is less than or equal to 1, and stopping segmentation;
the priority generation submodule is used for generating the priority of each pixel point according to the sequential segmentation result; the priority of the non-divided pixel points is sequenced after the priority of the divided pixel points.
Preferably, the compression module further comprises:
the residual interception submodule is used for taking, for the pixel points of the group with the highest priority, a first preset number of high-order bits of the original pixel values of the pixel points as the residual values of the pixel points;
the prediction submodule is used for performing prediction processing on the pixel points of the groups other than the group with the highest priority according to the priority of the groups to obtain residual values of the pixel points; the prediction processing includes: predicting the pixel points of the current group by using reconstruction values generated from the residual values of the pixel points of higher-priority groups; the residual value of a pixel point is the difference between the original pixel value of the pixel point and the predicted value of the pixel point.
Preferably, the compression module further comprises:
and the quantization submodule is used for performing quantization processing on the residual error values of the pixel points of the groups other than the group with the highest priority to obtain the quantized residual error values of the pixel points of the groups other than the group with the highest priority.
Preferably, the compression module further comprises:
the first reconstruction value generation submodule is used for zero-padding the low-order bits of the residual values of the pixel points of the group with the highest priority to obtain reconstruction values with the same number of bits as the original pixel values;
and the second reconstruction value generation submodule is used for carrying out inverse quantization and reconstruction processing on the quantized residual error values of the pixel points of the groups except the group with the highest priority to obtain the reconstruction values of the pixel points of the groups except the group with the highest priority.
Preferably, the compression module further comprises:
and the entropy coding submodule is used for entropy coding the quantized residual error values of the pixel points of the groups except the group with the highest priority to obtain the coded residual error values of the pixel points of the groups except the group with the highest priority.
Preferably, the compression module further comprises:
and the code stream packing submodule is used for carrying out code stream packing processing on the residual error value after coding of the pixel points of the groups except the group with the highest priority and the residual error value of the pixel points of the group with the highest priority according to the priority to obtain the packed subcode streams of all the groups.
The embodiment of the application has the following advantages:
in the compression processing, the pixel points within one block to be compressed are divided into groups with different priorities, and each group is compressed with reference only to groups of other priorities within the same block. Consequently, when the compressed code stream generated by the compression processing is decompressed, decompression depends only on pixel points within the same block to be compressed and not on any other block, so that each block to be compressed can be randomly accessed.
Drawings
Fig. 1 is a schematic diagram of a conventional ultra high definition video frame rate up-conversion system;
fig. 2 is a flowchart of steps of an embodiment 1 of a video data processing method according to the present application;
fig. 3 is a schematic diagram of segmenting video frame data in the YCbCr 4:2:2 sampling format;
fig. 4 is a schematic diagram of segmenting video frame data in the YCbCr 4:4:4 sampling format;
fig. 5 is a schematic diagram of segmenting video frame data in the YCbCr 4:2:0 sampling format;
fig. 6 is a schematic diagram illustrating grouping of pixel points in a block to be compressed according to an embodiment of the present application;
fig. 7 is a flowchart illustrating steps of an embodiment 2 of a video data processing method according to the present application;
FIG. 8 is a diagram illustrating a special compression process performed on a block to be compressed in an embodiment of the present application;
FIG. 9 is a diagram illustrating compression of a block to be compressed in an embodiment of the present application;
FIG. 10 is a schematic diagram of decompressing a compressed code stream in an embodiment of the present application;
fig. 11 is a block diagram showing a configuration of a video data processing apparatus according to embodiment 1 of the present application;
fig. 12 is a block diagram showing a configuration of a video data processing apparatus according to embodiment 2 of the present application.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, the present application is described in further detail with reference to the accompanying drawings and the detailed description.
Referring to fig. 1, a schematic diagram of an existing ultra high definition video frame rate up-conversion system is shown, which specifically includes: a video input module 11, an off-chip cache 12, an ultra high definition video frame rate up-conversion processing core 13 and a video output module 14. The video input module 11 inputs the original video frames to the off-chip cache 12; the ultra high definition video frame rate up-conversion processing core 13 extracts original video frames from the off-chip cache 12, performs a series of motion estimation, vector post-processing and interpolation operations on them to generate interpolated video frames, and finally sends the interpolated video frames to the off-chip cache 12; the video output module 14 extracts the original video frame data and the interpolated video frame data from the off-chip cache 12 for output and display. However, due to the bandwidth limitation of the off-chip cache 12, the video frame data cannot be transmitted in time, so that image display is delayed.
To address the bandwidth bottleneck, compressing off-chip cache data is an efficient and feasible approach.
In the intra prediction process of China's AVS (Audio Video coding Standard), when the current macroblock adopts an intra prediction mode, the prediction process uses the rightmost 8 pixels of the left macroblock (if any) and the bottommost 8 pixels of the upper macroblock (if any) as reference pixels. Thus, at the decoding end, decoding the current macroblock requires the information of the left and upper macroblocks, that is, the left and upper macroblocks must be decoded first; decoding those macroblocks in turn requires the information of their own left and upper macroblocks, and so on. Therefore, random access to intra macroblocks cannot be realized in AVS.
Apart from AVS, the intra prediction processes of other common compression standards (such as H.264 and HEVC) are similar to that of AVS, and likewise cannot realize random access to intra-frame data blocks.
One of the core concepts of the embodiment of the application is that different packets are compressed by dividing blocks to be compressed into packets with different priorities; and when the compression processing result does not meet the preset compression ratio requirement, performing special compression processing on the block to be compressed to generate a compression code stream with fixed compression ratio meeting the preset compression ratio requirement.
Referring to fig. 2, a flowchart illustrating steps of embodiment 1 of a video data processing method according to the present application is shown, where the method is applied to a video post-processing system, and the method specifically includes the following steps:
step 101, acquiring video frame data to be processed provided by the video post-processing system;
in an embodiment of the present application, the video post-processing system may include an ultra high definition video frame rate up-conversion system; the method of the embodiment of the application aims at scenes in which random access of data blocks in frames is required to be realized in video data compression processing, for example, an ultra-high-definition video frame rate up-conversion system. However, the method of the embodiment of the present application is also applicable to a compression process that does not require random access to data blocks within a frame.
The ultra high definition video frame rate up-conversion system may include a video input module, an ultra high definition video frame rate up-conversion processing core, a video output module and a compressed code stream cache; the video frame data to be processed includes original video frame data input by the video input module and interpolated video frame data generated by the ultra high definition video frame rate up-conversion processing core.
step 102, segmenting the video frame data to be processed into a plurality of independent blocks to be compressed;
cutting original video frame data and interpolated video frame data into a plurality of independent blocks to be compressed;
step 103, grouping pixel points in a single block to be compressed to obtain groups with different priorities;
grouping the pixel points in the block to be compressed to obtain groups with different priorities;
and step 104, compressing the pixel points of each group according to the priority of each group.
The pixel points of each group can be compressed from high to low according to the priority order of each group.
The generated compressed code stream can be input into a compressed code stream cache of the ultra high definition video frame rate up-conversion system, and when the ultra high definition video frame rate up-conversion processing core or the video output module of the ultra high definition video frame rate up-conversion system needs to request video frame data, the compressed code stream is extracted from the compressed code stream cache for decompression processing.
In the embodiment of the application, each block to be compressed is compressed using only its internal groups, so during decompression, decompression needs to be performed only according to the internal groups and not according to other blocks to be compressed, thereby realizing random access to pixel blocks.
In a preferred example of the embodiment of the present application, the step 102 may specifically include:
and segmenting the video frame data to be processed into a plurality of independent blocks to be compressed with brightness and a plurality of independent blocks to be compressed with chrominance according to the sampling mode of the video frame data.
The size of the block to be compressed may be set according to the manner in which the ultra high definition video frame rate up-conversion processing core reads data. In a specific implementation, each time the core fetches data, it reads 64 pixels in one row of video frame data, so the blocks to be compressed may be set as a plurality of 64 × N matrices, where the number of rows N may be adjusted according to the actual compression effect.
According to the sampling mode of the video frame data, the video frame data can be cut into multiple independent luminance blocks to be compressed and chrominance blocks to be compressed. Common sampling formats of video frame data include YCbCr 4:2:2, YCbCr 4:4:4 and YCbCr 4:2:0. YCbCr was developed as part of the world digital video standard in the ITU-R BT.601 recommendation, where Y refers to the luminance component, Cb to the blue chrominance component and Cr to the red chrominance component. 4:2:0 means 4 luminance components and 2 chrominance components per 4 pixels (YYYYCbCr), with chrominance sampled only on alternate scan lines; it is the most common format for portable video devices (MPEG-4) and video conferencing (H.263). 4:2:2 means 4 luminance components and 4 chrominance components per 4 pixels (YYYYCbCrCbCr), the most common format for DVD, digital television, HDTV and other consumer video devices. 4:4:4 is the full pixel format (YYYYCbCrCbCrCbCrCbCr), used in high-quality video applications, studios and professional video production.
Referring to fig. 3, a schematic diagram of segmenting video frame data in the YCbCr 4:2:2 sampling format is shown: the luminance components of 64 pixels in two consecutive lines of the video frame data are taken as one block to be compressed, a 64 × 2 matrix, and the corresponding 32 × 2 Cb components and 32 × 2 Cr components are combined together as another 64 × 2 block to be compressed.
Referring to fig. 4, a schematic diagram of segmenting video frame data in the YCbCr 4:4:4 sampling format in an embodiment of the present application is shown: the luminance components of 64 pixels in two consecutive lines of the video frame data are taken as one 64 × 2 block to be compressed, the corresponding 64 × 2 Cb components as another 64 × 2 block to be compressed, and the corresponding 64 × 2 Cr components as a third 64 × 2 block to be compressed.
Referring to fig. 5, a schematic diagram of segmenting video frame data in the YCbCr 4:2:0 sampling format is shown: the luminance components of 64 pixels in four consecutive lines of the video frame data are taken as two 64 × 2 blocks to be compressed, and the corresponding 32 × 2 Cb components and 32 × 2 Cr components are combined together as one 64 × 2 block to be compressed.
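The 4:2:2 segmentation just described can be sketched as follows. This is a hedged illustration rather than the patented implementation: the frame is assumed to be stored as per-plane lists of rows, with the width divisible by 64 and the height by 2 (the application does not specify padding behaviour for other sizes).

```python
BLOCK_W, BLOCK_H = 64, 2

def split_422(y, cb, cr):
    """y: H x W luma samples; cb, cr: H x W/2 chroma samples (4:2:2).
    Returns (luma_blocks, chroma_blocks), each block a BLOCK_H-row matrix."""
    H, W = len(y), len(y[0])
    luma_blocks, chroma_blocks = [], []
    for r in range(0, H, BLOCK_H):
        for c in range(0, W, BLOCK_W):
            # 64 x 2 luma block, cut directly from the Y plane
            luma_blocks.append([row[c:c + BLOCK_W] for row in y[r:r + BLOCK_H]])
            # 32 x 2 Cb and 32 x 2 Cr combine into one 64-wide chroma block
            cc = c // 2
            chroma_blocks.append(
                [cbrow[cc:cc + BLOCK_W // 2] + crrow[cc:cc + BLOCK_W // 2]
                 for cbrow, crrow in zip(cb[r:r + BLOCK_H], cr[r:r + BLOCK_H])])
    return luma_blocks, chroma_blocks
```

Every output block is an independent 64 × 2 matrix, matching the block shape the embodiment compresses.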
As a preferred example of the embodiment of the present application, the step 103 may specifically include the following sub-steps:
a substep S11 of determining a pixel point region to be segmented in the block to be compressed;
firstly, dividing a block to be compressed into a plurality of pixel point regions to be divided; the selection of the pixel point region to be segmented can be selected according to the brightness component and the chrominance component in the pixel point region;
substep S12, uniformly dividing the pixel point region to be divided into two subregions;
each pixel point area to be segmented is evenly segmented into two subregions, and when the number of the pixel points in each row in the pixel point area to be segmented is an even number, the pixel points of the two subregions are equal; when the number of the pixel points in each row in the pixel point region to be segmented is an odd number, the number of the pixel points in each row of one subregion is one more or one less than that of the pixel points in each row of the other subregion;
substep S13, iterating and uniformly dividing each subarea until the interval between the undivided pixel point and the divided pixel point is less than or equal to 1, and stopping dividing;
iterating and uniformly dividing the sub-regions until the interval between the undivided pixel points and the divided pixel points is less than or equal to 1, and stopping dividing;
a substep S14 of generating the priority of each pixel point according to the sequential segmentation result; the priority of the non-divided pixel points is sequenced after the priority of the divided pixel points.
Dividing a block to be compressed into pixel points of pixel point regions to be segmented as a first group with the highest priority;
dividing a pixel point region to be divided, which is composed of the pixel points of the first group, into pixel points of two sub-regions, and taking the pixel points as a second group with a second priority;
dividing a subregion formed by the pixel points of the second grouping into pixel points of two subregions, and taking the pixel points as a third grouping with a third priority;
……
and after all segmented pixel points have been assigned their priorities, assigning priorities to the unsegmented pixel points.
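The iterative bisection of sub-steps S11 to S14 can be sketched for one region of pixel points. This is an illustrative sketch under assumptions: indices are 1-based within a single region, and all remaining unsegmented pixel points are collapsed into one lowest-priority group (the application also allows splitting them into preceding and following groups).

```python
def group_priorities(n):
    """Priority (1 = highest) for each 1-based pixel index in a region of
    n pixel points, by iterative uniform bisection."""
    prio = {1: 1, n: 1}          # the region's boundary pixels form the first group
    segments = [(1, n)]
    level = 2
    # bisect until every unsegmented pixel point is within distance 1
    # of a segmented one
    while any(hi - lo > 2 for lo, hi in segments):
        new_segments = []
        for lo, hi in segments:
            if hi - lo <= 2:      # nothing left to segment in this piece
                new_segments.append((lo, hi))
                continue
            mid = (lo + hi) // 2  # uniform split (off by one for odd sizes)
            prio[mid] = level
            new_segments += [(lo, mid), (mid, hi)]
        segments = new_segments
        level += 1
    # unsegmented pixel points are ordered after all segmented groups
    for p in range(1, n + 1):
        prio.setdefault(p, level)
    return prio
```

For a 16-pixel region, group_priorities(16) puts pixels 1 and 16 in the first group, 8 in the second, 4 and 12 in the third, 2, 6, 10 and 14 in the fourth, and the remaining odd pixels after those.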
Fig. 6 is a schematic diagram illustrating grouping of pixel points in a block to be compressed according to the embodiment of the present application. In the embodiment of the present application, a block to be compressed is composed of two rows of pixel points, and each row includes 64 pixel points.
For ease of description, the 128 pixels are numbered pixel_1_1, pixel_1_2, pixel_1_3, ……, pixel_1_64, pixel_2_1, pixel_2_2, pixel_2_3, ……, pixel_2_64. pixel_1_1 is the first pixel point in the first row of the block to be compressed; pixel_2_1 is the first pixel point in the second row; pixel_1_2 is the second pixel point in the first row; pixel_2_2 is the second pixel point in the second row; and so on, up to the sixty-fourth pixel point in each of the first and second rows.
The 64 pixels in a row are divided into 4 pixel point regions to be segmented: the first region is pixel_1_1 to pixel_1_16, the second is pixel_1_17 to pixel_1_32, the third is pixel_1_33 to pixel_1_48, and the fourth is pixel_1_49 to pixel_1_64. The advantage of such segmentation is that when the block to be compressed is composed of 32 × 2 Cb components and 32 × 2 Cr components, the Cb components and the Cr components can be divided into different regions for compression. The choice of pixel point regions to be segmented in the embodiment of the present application is not limited to this, and may actually be adjusted flexibly according to the differences between blocks to be compressed.
pixel_1_1, pixel_1_16, pixel_1_32, pixel_1_33, pixel_1_48 and pixel_1_64 are taken as the first group with the highest priority. Likewise, pixel_2_1, pixel_2_16, pixel_2_32, pixel_2_33, pixel_2_48 and pixel_2_64 in the second row could also be taken as part of the first group; however, to improve compression performance, these second-row pixel points may instead be assigned to other groups, and as shown in fig. 6, they are taken as points of the second group.
After the points of the first group are determined, the pixel points that divide each pixel point region to be segmented, bounded by pixel points of the first group, into two sub-regions are taken as the second group with the second priority. The pixel point that divides the first region into two parts is pixel_1_8; for the second region it is pixel_1_24; for the third region, pixel_1_40; and for the fourth region, pixel_1_56. pixel_1_8, pixel_1_24, pixel_1_40 and pixel_1_56 are therefore taken as the second group with the second priority, and similarly pixel_2_8, pixel_2_24, pixel_2_40 and pixel_2_56 in the second row are taken as the second group with the second priority.
Similarly, the pixel points that divide each sub-region formed by the pixel points of the second group into two sub-regions are taken as the third group with the third priority. The third group includes: pixel_1_4, pixel_1_12, pixel_1_20, pixel_1_28, pixel_1_36, pixel_1_44, pixel_1_52, pixel_1_60, pixel_2_4, pixel_2_12, pixel_2_20, pixel_2_28, pixel_2_36, pixel_2_44, pixel_2_52 and pixel_2_60.
Similarly, the pixel points that bisect the sub-regions formed by the pixel points of the third group are taken as the fourth group, with the fourth priority. The fourth group includes: pixel_1_2, pixel_1_6, pixel_1_10, pixel_1_14, pixel_1_18, pixel_1_22, pixel_1_26, pixel_1_30, pixel_1_34, pixel_1_38, pixel_1_42, pixel_1_46, pixel_1_50, pixel_1_54, pixel_1_58, pixel_1_62; pixel_2_2, pixel_2_6, pixel_2_10, pixel_2_14, pixel_2_18, pixel_2_22, pixel_2_26, pixel_2_30, pixel_2_34, pixel_2_38, pixel_2_42, pixel_2_46, pixel_2_50, pixel_2_54, pixel_2_58, and pixel_2_62.
After the fourth grouping is finished, the interval between each undivided pixel point and the divided pixel points is less than or equal to 1, so the division stops and priorities are assigned to the undivided points. Undivided points include, for example, pixel_1_3, pixel_1_5, and pixel_1_7, where pixel_1_3 comes after pixel_1_2 of the fourth group, pixel_1_5 comes before pixel_1_6 of the fourth group, and pixel_1_7 comes after pixel_1_6 of the fourth group. In the embodiment of the present application, a pixel point located before a pixel point of the last-divided group is placed in a preceding group, whose priority is lower than that of the last-divided group; for example, the preceding group is given the fifth priority. A pixel point located after a pixel point of the last-divided group is placed in a following group, whose priority is lower than that of the preceding group; for example, the following group is given the sixth priority. In practice, the priorities of the preceding and following groups may be exchanged, or set equal; this is not limited in the present application.
The number of groups in a block to be compressed is determined mainly by the size of the pixel point region to be segmented: the larger the region, the more bisection iterations are required; the smaller the region, the fewer. Considering system throughput and complexity, a block to be compressed is typically divided into 4-6 groups.
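The iterative bisection described above can be sketched in code. The function below is an illustrative reconstruction for a single 64-column row only; the region endpoints, the three bisection passes, and the group-5/group-6 assignment rule are taken from the worked example above, not from a normative definition.

```python
def group_row():
    """Assign a priority group (1 = highest .. 6 = lowest) to each of the
    64 columns of one row, following the worked example in the text."""
    group = {c: 1 for c in (1, 16, 32, 33, 48, 64)}   # region endpoints
    regions = [(1, 16), (16, 32), (33, 48), (48, 64)]
    for prio in (2, 3, 4):                 # three bisection passes
        nxt = []
        for lo, hi in regions:
            mid = (lo + hi) // 2           # column that bisects the region
            group[mid] = prio
            nxt += [(lo, mid), (mid, hi)]
        regions = nxt
    # Every remaining column now sits next to a divided one: columns just
    # before a priority-4 column form group 5, the rest form group 6.
    for c in range(1, 65):
        if c not in group:
            group[c] = 5 if group.get(c + 1) == 4 else 6
    return group
```

Running this reproduces the groups listed above, e.g. columns 8, 24, 40, 56 land in group 2 and column 3 in group 6.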
After grouping the blocks to be compressed, compression processing is performed according to the priority of the groups. As a preferred example of the embodiment of the present application, the step 104 may specifically include the following sub-steps:
Substep S21, for the pixel points of the group with the highest priority, truncating a first preset number of high-order bits of the original pixel value of each pixel point as the residual value of that pixel point;
For the pixel points of the first group, which has the highest priority, a first preset number of high-order bits of the original pixel value is truncated as the residual value of the pixel point. In the embodiment of the application, the original pixel value of a pixel point is 10 bits, and the high 8 bits are truncated as the residual value. The low two bits are removed, rather than other bits, because the lowest two bits carry the smallest bit weight and therefore yield the smallest error after recovery. For example, after losing the low two bits, 1023 (binary 11 1111 1111), 1022 (binary 11 1111 1110), 1021 (binary 11 1111 1101) and 1020 (binary 11 1111 1100) all become 255 (binary 1111 1111); at recovery time, 1020 (binary 11 1111 1100) is reconstructed, so the maximum possible error is only 3. By contrast, 0 (binary 00 0000 0000), 256 (binary 01 0000 0000), 512 (binary 10 0000 0000) and 768 (binary 11 0000 0000) would all become 0 (binary 0000 0000) after losing the high two bits and would still be 0 (binary 00 0000 0000) after recovery; discarding the high two bits can therefore produce a large error.
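The truncation-and-recovery argument above can be checked numerically. This small sketch (helper names are illustrative, not from the patent) confirms the worst-case error of 3 when the two LSBs of a 10-bit value are dropped:

```python
def truncate_high8(pixel10):
    """Keep the high 8 of 10 bits (drop the 2 LSBs) -- the residual value."""
    return pixel10 >> 2

def reconstruct(residual8):
    """Zero-fill the 2 lost low bits, i.e. multiply by 4."""
    return residual8 << 2

# 1020..1023 all truncate to 255 and recover as 1020: worst-case error 3.
# Dropping the 2 MSBs instead would collapse 0, 256, 512 and 768 to the
# same 8-bit value, an error of up to 768 -- hence the low bits are dropped.
```

Exhaustively checking all 1024 possible 10-bit values shows the recovery error never exceeds 3.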
Substep S22, predicting the pixel points of the groups other than the group with the highest priority, in priority order, to obtain residual values of the pixel points. The prediction process comprises: predicting the pixel points of the current group using the reconstruction values generated from the residual values of the higher-priority groups; the residual value of a pixel point is the difference between its original pixel value and its predicted value.
Prediction processing yields a predicted value for each pixel point; subtracting the predicted value from the original pixel value of the pixel point gives the residual value of the pixel point.
For example, when predicting the pixel points of the second grouping, the reconstruction values of the pixel points of the first grouping are used for prediction; when the pixel points of the third group are predicted, the reconstruction values of the pixel points of the first group and the reconstruction values of the pixel points of the second group are used for predicting; when the pixel points of the fourth group are predicted, the reconstruction values of the pixel points of the first group, the reconstruction values of the pixel points of the second group and the reconstruction values of the pixel points of the third group are used for prediction.
And a substep S23, performing quantization processing on residual values of the pixel points of the packets other than the packet with the highest priority to obtain quantized residual values of the pixel points of the packets other than the packet with the highest priority.
Since the pixel points of the first group are transmitted as their truncated high 8 bits, they have effectively already been quantized.
Substep S24, for the grouped pixel point with the highest priority, zero-filling the residual value of the pixel point at the lower position to obtain the reconstruction value with the same digit as the original pixel value;
In the embodiment of the present application, the original pixel value of a pixel point is 10 bits, and for the pixel points of the group with the highest priority, the high 8 bits of the original pixel value are truncated as the residual value. When calculating the reconstruction value, the lost low 2 bits must be filled in: 00 is appended after the 8-bit binary residual value as the low 2 bits, i.e., the reconstruction value of a pixel point of the highest-priority group equals its residual value multiplied by 4.
For example, if the decimal pixel value of a certain pixel point is 1020, its 10-bit binary form is (11 1111 1100); truncating the high 8 bits gives a residual value of (1111 1111) in binary, 255 in decimal. At recovery time, the lost low 2 bits are replaced with 00 to restore the 10-bit form, which is equivalent to multiplying the residual value by 4 to obtain the reconstruction value (255 × 4 = 1020).
In the embodiment of the present application, the processing process of the reconstruction value of the pixel point of the packet with the highest priority specifically includes:
delta_1_1=msb(pixel_1_1,8);
delta_1_16=msb(pixel_1_16,8);
delta_1_32=msb(pixel_1_32,8);
delta_1_33=msb(pixel_1_33,8);
delta_1_48=msb(pixel_1_48,8);
delta_1_64=msb(pixel_1_64,8);
restruct_1_1=delta_1_1*4;
restruct_1_16=delta_1_16*4;
restruct_1_32=delta_1_32*4;
restruct_1_33=delta_1_33*4;
restruct_1_48=delta_1_48*4;
restruct_1_64=delta_1_64*4;
wherein delta_x_y represents the residual value of the y-th pixel of the x-th row; restruct_x_y represents the reconstruction value of the y-th pixel of the x-th row; and msb(pixel_x_y, z) represents taking the high z bits of the y-th pixel of the x-th row.
And a substep S25, performing inverse quantization and reconstruction processing on the quantized residual error values of the pixel points of the packets other than the packet with the highest priority to obtain the reconstruction values of the pixel points of the packets other than the packet with the highest priority.
In the embodiment of the present application, the process of generating the reconstruction value of the pixel point of the packet other than the packet with the highest priority is as follows:
The predicted values of the 14 pixel points of the second group are obtained from the reconstruction values of the pixel points of the first group; the residual value of a pixel is the difference between its original pixel value and its predicted value; the reconstruction value of a pixel is the predicted value plus the residual value after quantization and inverse quantization. The processing is specifically:
pred_2_1=restruct_1_1;
pred_1_8=(restruct_1_1+restruct_1_16)/2;
pred_2_8=(restruct_1_1+restruct_1_16)/2;
pred_2_16=restruct_1_16;
pred_1_24=(restruct_1_16+restruct_1_32)/2;
pred_2_24=(restruct_1_16+restruct_1_32)/2;
pred_2_32=restruct_1_32;
pred_2_33=restruct_1_33;
pred_1_40=(restruct_1_33+restruct_1_48)/2;
pred_2_40=(restruct_1_33+restruct_1_48)/2;
pred_2_48=restruct_1_48;
pred_1_56=(restruct_1_48+restruct_1_64)/2;
pred_2_56=(restruct_1_48+restruct_1_64)/2;
pred_2_64=restruct_1_64;
delta_x_y = pixel_x_y - pred_x_y, where x and y range only over the pixels of the second group;
restruct_x_y = pred_x_y + qdelta_x_y, where x and y range only over the pixels of the second group.
wherein pred_x_y represents the predicted value of the y-th pixel of the x-th row; delta_x_y represents the residual value of the y-th pixel of the x-th row; restruct_x_y represents the reconstruction value of the y-th pixel of the x-th row; and qdelta_x_y represents the residual value of the y-th pixel of the x-th row after quantization and inverse quantization.
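As an illustration of the pred/delta/qdelta/restruct relations above, the following sketch traces one second-group pixel (pixel_1_8) through prediction, quantization, inverse quantization, and reconstruction. The quantization coefficient q and the helper names are assumptions; the patent leaves the quantizer per-group and adjustable.

```python
def quantize(delta, q):
    """Divide the residual by the quantization coefficient (truncating)."""
    return int(delta / q)

def dequantize(qdelta, q):
    """Inverse quantization: multiply back by the coefficient."""
    return qdelta * q

def second_group_roundtrip(pixel_1_1, pixel_1_16, pixel_1_8, q=2):
    """Trace pixel_1_8 through the relations above (names follow the text)."""
    # Group-1 reconstructions: high 8 bits kept, low bits zero-filled.
    restruct_1_1 = (pixel_1_1 >> 2) << 2
    restruct_1_16 = (pixel_1_16 >> 2) << 2
    # pred_1_8 = (restruct_1_1 + restruct_1_16) / 2
    pred_1_8 = (restruct_1_1 + restruct_1_16) // 2
    delta_1_8 = pixel_1_8 - pred_1_8                    # residual
    qdelta_1_8 = dequantize(quantize(delta_1_8, q), q)  # quantize + inverse
    restruct_1_8 = pred_1_8 + qdelta_1_8                # reconstruction
    return restruct_1_8
```

With pixel_1_1 = 1000, pixel_1_16 = 1020 and pixel_1_8 = 1012, the prediction is 1010 and the reconstruction recovers 1012 exactly; in general the reconstruction error is bounded by the quantization step.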
The predicted values of the 16 pixel points of the third group are obtained from the reconstruction values of the pixel points of the first and second groups; each third-group pixel generates its predicted value from the reconstruction values of its nearest first-group and second-group pixels. The residual values and reconstruction values of the pixels are processed in the same way as for the second group. Specifically:
pred_1_4=pred_2_4=(restruct_1_1+restruct_2_1+restruct_1_8+restruct_2_8)/4;
pred_1_12=pred_2_12=(restruct_1_8+restruct_2_8+restruct_1_16+restruct_2_16)/4;
pred_1_20=pred_2_20=(restruct_1_16+restruct_2_16+restruct_1_24+restruct_2_24)/4;
pred_1_28=pred_2_28=(restruct_1_24+restruct_2_24+restruct_1_32+restruct_2_32)/4;
pred_1_36=pred_2_36=(restruct_1_33+restruct_2_33+restruct_1_40+restruct_2_40)/4;
pred_1_44=pred_2_44=(restruct_1_40+restruct_2_40+restruct_1_48+restruct_2_48)/4;
pred_1_52=pred_2_52=(restruct_1_48+restruct_2_48+restruct_1_56+restruct_2_56)/4;
pred_1_60=pred_2_60=(restruct_1_56+restruct_2_56+restruct_1_64+restruct_2_64)/4;
delta_x_y = pixel_x_y - pred_x_y, where x and y range only over the pixels of the third group;
restruct_x_y = pred_x_y + qdelta_x_y, where x and y range only over the pixels of the third group.
The predicted values of the 32 pixel points of the fourth group are obtained from the reconstruction values of the pixel points of the first, second, and third groups; the residual values and reconstruction values of the pixels are processed in the same way as for the second group. For example:
pred_1_2 = pred_2_2 = (restruct_1_1 + restruct_2_1 + restruct_1_4 + restruct_2_4)/4; when the divided pixels nearest to a fourth-group pixel lie between pixels of the first group and pixels of the second group, the predicted value is generated from the reconstruction values of the nearest such pixels;
pred_1_6 = pred_2_6 = (restruct_1_4 + restruct_2_4 + restruct_1_8 + restruct_2_8)/4; when the divided pixels nearest to a fourth-group pixel lie between pixels of the second group and pixels of the third group, the predicted value is generated from the reconstruction values of the nearest second-group and third-group pixels;
The predicted values of the remaining fourth-group pixels are generated in the same manner and are not repeated here.
delta_x_y = pixel_x_y - pred_x_y, where x and y range only over the pixels of the fourth group;
restruct_x_y = pred_x_y + qdelta_x_y, where x and y range only over the pixels of the fourth group.
The predicted values of the 28 pixel points of the preceding group are obtained from the reconstruction values of the pixel points of the first, second, third, and fourth groups; the residual values and reconstruction values of the pixels are processed in the same way as for the second group. For example:
pred_1_5 = pred_2_5 = (restruct_1_4 + restruct_2_4 + restruct_1_6 + restruct_2_6)/4; when the divided pixels nearest to a preceding-group pixel lie between pixels of the third group and pixels of the fourth group, the predicted value is generated from the reconstruction values of the nearest third-group and fourth-group pixels;
pred_1_9 = pred_2_9 = (restruct_1_8 + restruct_2_8 + restruct_1_10 + restruct_2_10)/4; when the divided pixels nearest to a preceding-group pixel lie between pixels of the second group and pixels of the fourth group, the predicted value is generated from the reconstruction values of the nearest second-group and fourth-group pixels;
pred_1_17 = pred_2_17 = (restruct_1_16 + restruct_2_16 + restruct_1_18 + restruct_2_18)/4; when the divided pixels nearest to a preceding-group pixel lie between pixels of the first group and pixels of the fourth group, the predicted value is generated from the reconstruction values of the nearest first-group and fourth-group pixels;
The predicted values of the remaining preceding-group pixels are generated in the same manner and are not repeated here.
delta_x_y = pixel_x_y - pred_x_y, where x and y range only over the pixels of the preceding group;
restruct_x_y = pred_x_y + qdelta_x_y, where x and y range only over the pixels of the preceding group.
The predicted values of the 32 pixel points of the following group are obtained from the reconstruction values of the pixel points of the first, second, third, and fourth groups; the residual values and reconstruction values of the pixels are processed in the same way as for the second group. For example:
pred_1_3 = pred_2_3 = (restruct_1_2 + restruct_2_2 + restruct_1_4 + restruct_2_4)/4; when the divided pixels nearest to a following-group pixel lie between pixels of the third group and pixels of the fourth group, the predicted value is generated from the reconstruction values of the nearest third-group and fourth-group pixels;
pred_1_7 = pred_2_7 = (restruct_1_6 + restruct_2_6 + restruct_1_8 + restruct_2_8)/4; when the divided pixels nearest to a following-group pixel lie between pixels of the second group and pixels of the fourth group, the predicted value is generated from the reconstruction values of the nearest second-group and fourth-group pixels;
pred_1_15 = pred_2_15 = (restruct_1_14 + restruct_2_14 + restruct_1_16 + restruct_2_16)/4; when the divided pixels nearest to a following-group pixel lie between pixels of the first group and pixels of the fourth group, the predicted value is generated from the reconstruction values of the nearest first-group and fourth-group pixels;
The predicted values of the remaining following-group pixels are generated in the same manner and are not repeated here.
delta_x_y = pixel_x_y - pred_x_y, where x and y range only over the pixels of the following group;
restruct_x_y = pred_x_y + qdelta_x_y, where x and y range only over the pixels of the following group.
And a substep S26 of entropy-coding the quantized residual values of the pixel points of the packets other than the packet with the highest priority to obtain the coded residual values of the pixel points of the packets other than the packet with the highest priority.
In the embodiment of the application, the entropy coding may adopt Golomb coding. The Golomb coding order is assigned per group: all pixels within a group share the same coding order, and the orders of different groups may be fixed or adaptively adjusted. In the embodiment of the present application, the coding order of the second group may be set to 3, that of the third group to 2, and those of the fourth, preceding, and following groups to 0.
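The patent does not fix the exact Golomb variant. As one common realization, the sketch below encodes a value with order-k exponential-Golomb coding, together with a zigzag map for signed residuals; both choices are assumptions, not the patent's normative definition.

```python
def exp_golomb_encode(value, k):
    """Order-k exponential-Golomb code for a non-negative integer:
    a run of zero prefix bits followed by the binary form of value + 2**k."""
    v = value + (1 << k)
    n = v.bit_length()
    return "0" * (n - k - 1) + format(v, "b")

def zigzag(delta):
    """Map a signed residual to a non-negative integer before coding."""
    return 2 * delta if delta >= 0 else -2 * delta - 1
```

For order 0 this gives the familiar "1", "010", "011", ... sequence; a higher order such as 3 spends more bits on small values but grows more slowly, which suits groups with larger residuals.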
And a substep S27, performing code stream packing processing on the coded residual error values of the pixel points of the groups except the group with the highest priority and the residual error values of the pixel points of the group with the highest priority according to the priority to obtain packed subcode streams of the groups.
Substep S28, detecting the length of each packed subcode stream; if the packed subcode stream of a certain group is longer than transmitting the original pixel values of that group, the packed subcode stream is discarded and the high 8 bits of the group's original pixel values are used instead; otherwise, the packed subcode stream is used.
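Sub-step S28's fallback rule can be sketched as follows; the bit-string representation and helper names are illustrative.

```python
def pack_or_fallback(subcode_bits, raw_pixels):
    """If the packed sub-stream is longer than sending each pixel's high
    8 bits raw, fall back to the raw truncation (sketch of sub-step S28).

    subcode_bits: the packed sub-stream as a string of '0'/'1' characters.
    raw_pixels:   the group's original 10-bit pixel values.
    Returns (bits_to_emit, fell_back)."""
    raw_len = 8 * len(raw_pixels)
    if len(subcode_bits) > raw_len:
        raw = "".join(format(p >> 2, "08b") for p in raw_pixels)
        return raw, True
    return subcode_bits, False
```

This guarantees that no group's sub-stream ever costs more than the 8-bit-per-pixel raw path, which is what bounds the total code stream length.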
In the above embodiment, the compression processing that applies prediction, quantization, entropy coding, inverse quantization and pixel reconstruction, and code stream packing to a block to be compressed may be regarded as general compression processing. For an image with strong correlation, general compression processing can meet the compression ratio requirement while satisfying a given image distortion; for an image with weak correlation, however, the compression rate cannot meet the requirement when the same distortion constraint must be satisfied. Therefore, after a block to be compressed has undergone general compression processing, if the generated general compressed code stream does not meet the preset compression ratio requirement, the general compressed code stream is discarded, and the block to be compressed undergoes special grouping processing to obtain a special compressed code stream with a fixed compression ratio.
Referring to fig. 7, a flowchart illustrating steps of embodiment 2 of a video data processing method according to the present application is shown, where the method is applied to a video post-processing system, and the method specifically includes the following steps:
step 201, acquiring video frame data to be processed provided by the video post-processing system;
in an embodiment of the present application, the video post-processing system may include an ultra high definition video frame rate up-conversion system; the method of the embodiment of the application aims at scenes in which random access of data blocks in frames is required to be realized in video data compression processing, for example, an ultra-high-definition video frame rate up-conversion system. However, the method of the embodiment of the present application is also applicable to a compression process that does not require random access to data blocks within a frame.
Step 202, segmenting the video frame data to be processed into a plurality of independent blocks to be compressed;
step 203, performing general grouping processing on the pixel points in a single block to be compressed to obtain general groups with different priorities;
step 204, performing general compression processing on the pixel points of each group according to the priority of each general group;
step 205, when the result of the general compression processing does not meet the requirement of a preset compression ratio, performing special compression processing on the pixel points in the single block to be compressed; the special compression processing is compression processing for generating a compressed code stream with a fixed compression ratio.
After the block to be compressed is subjected to general compression processing, generating a general compressed code stream, and if the general compressed code stream does not meet the preset compression ratio requirement, discarding the general compressed code stream; and performing special grouping processing on the blocks to be compressed to obtain a special compressed code stream with a fixed compression ratio.
In the embodiment of the application, each block to be compressed contains 128 pixel points of 10 bits each, so a block to be compressed totals 1280 bits. The compressed code stream after compression may be set to 512 bits (a fixed code stream size is equivalent to a fixed compression ratio). The blocks to be compressed of one video frame may all share the same compression ratio requirement, or each may have its own. When different compression rate requirements are set, the requirement of each block to be compressed can be determined by index: for example, each block is numbered, and the compression rate requirement corresponding to the block is indexed by its number. During decompression, the compression ratio requirement of a compressed code stream is looked up by its number, and a matching decompression mode is selected accordingly.
As a preferred example of the embodiment of the present application, the step 205 may specifically include the following sub-steps:
a substep S31 of dividing the pixel points of the block to be compressed into a first special packet and a second special packet at intervals;
The pixel points in an independent block to be compressed are divided into two groups whose pixel points are interleaved with each other. For example, in the block to be compressed, the pixels at odd-row odd-column and even-row even-column positions form the first special group, and the pixels at odd-row even-column and even-row odd-column positions form the second special group.
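Assuming the two special groups interleave in a checkerboard pattern (an assumption consistent with the mutually spaced pixels shown in fig. 8), the split could be sketched as:

```python
def special_groups(rows=2, cols=64):
    """Checkerboard split sketch: pixels whose (row + col) parity is even
    form the first special group, the rest the second."""
    first, second = [], []
    for r in range(rows):
        for c in range(cols):
            (first if (r + c) % 2 == 0 else second).append((r, c))
    return first, second
```

For the 2 × 64 block of the embodiment this yields two groups of 64 pixels each, matching the bit counts used later.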
Substep S32, for the pixel of the first special grouping, intercepting the digit of a second preset numerical value at the high position in the original pixel values of the pixel as a first output pixel value of the pixel;
For the pixel points of the first special group, a preset number of high-order bits of the original pixel value is truncated directly as the first output pixel value of the pixel point, and the first output pixel value is output to the compressed code stream buffer. A first output pixel value generated by such truncation is a fixed-length code stream, and its length can be changed by adjusting the number of truncated bits.
In the substep S33, for the pixel of the second special grouping, a prediction mode is generated according to the original pixel value of the pixel and the original pixel value of the pixel of the first special grouping adjacent to the pixel, and the prediction mode is used as a second output pixel value of the pixel;
The step of generating the prediction mode according to the original pixel value of the pixel point and the original pixel values of the adjacent first-special-group pixel points may specifically be: selecting, among the adjacent first-special-group pixel points, one whose first output pixel value differs from the original pixel value of the second-group pixel point by an amount meeting a preset requirement, and generating the prediction mode accordingly.
The prediction modes may include: using the first output pixel value of the adjacent left first-group pixel point as the predicted value; using the first output pixel value of the adjacent right first-group pixel point as the predicted value; using the first output pixel value of the adjacent upper first-group pixel point as the predicted value; using the first output pixel value of the adjacent lower first-group pixel point as the predicted value; and using the average of the first output pixel values of several adjacent first-group pixel points as the predicted value. There may be a total of 2^4 = 16 prediction modes.
The prediction modes are represented by bit coding, and 16 prediction modes can be represented using 4-bit coding. However, in practical use, only a few prediction modes can be adopted as required to reduce the number of bits required for bit encoding.
For example, when the prediction modes include only: using the first output pixel value of the adjacent left, right, upper, or lower first-group pixel point as the predicted value, then 00 denotes using the first output pixel value of the adjacent left first-group pixel point as the predicted value; 01 denotes the adjacent right; 10 denotes the adjacent upper; and 11 denotes the adjacent lower first-group pixel point's first output pixel value as the predicted value.
And a substep S34, combining a first output pixel value of the pixel point of the first special grouping and a second output pixel value of the pixel point of the second special grouping as a compressed code stream.
Referring to fig. 8, the schematic diagram of performing special compression processing on a block to be compressed in this embodiment is shown, where black pixel points are a first special group, white pixel points are a second special group, and the pixel points of the first special group and the pixel points of the second special group are spaced from each other. For the pixel point of the first special grouping, intercepting the high 6 bits as an output pixel value; and for the pixel point of the second special group, generating a prediction mode according to the original pixel value of the pixel point and the original pixel value of the pixel point of the first special group adjacent to the pixel point, and taking the prediction mode as a second output pixel value of the pixel point.
There are 4 prediction modes, expressed with a 2-bit code: 00 means the original pixel value of the left pixel point is used as the predicted value; 01 means the right pixel point; 10 means the pixel point in the vertical direction (upper or lower side); and 11 means the average of the original pixel values of the 3 adjacent pixel points is used as the predicted value. During decoding, the predicted value is used as the pixel's reconstruction value.
The prediction mode is selected according to the difference between the original pixel value of a second-special-group pixel point and the original pixel values of its adjacent first-special-group pixel points. For example, a mode whose difference is smaller than a preset threshold may be chosen as the prediction mode.
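One plausible reading of this selection rule is to pick the 2-bit mode whose neighbour-based prediction is closest to the pixel's original value; the sketch below assumes the three-neighbour layout of fig. 8 (the dictionary keys and the tie-break toward the first minimal mode are illustrative).

```python
def choose_mode(orig, neighbors):
    """Pick the 2-bit mode whose prediction is closest to orig.

    neighbors: dict with keys 'left', 'right', 'vertical' holding the
    adjacent first-special-group pixel values (hypothetical layout)."""
    candidates = {
        "00": neighbors["left"],
        "01": neighbors["right"],
        "10": neighbors["vertical"],
        "11": sum(neighbors.values()) // len(neighbors),  # 3-neighbour mean
    }
    return min(candidates, key=lambda code: abs(orig - candidates[code]))
```

For example, a pixel of value 100 with neighbours 90 (left), 101 (right) and 120 (vertical) selects mode 01, since the right neighbour is the closest prediction.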
In the embodiment of the application, each block to be compressed contains 128 pixel points of 10 bits each, totalling 1280 bits. Under the special compression processing, truncating the high 6 bits of the 64 pixel points of the first special group requires 384 bits, and predicting the 64 pixel points of the second special group with a 2-bit prediction mode requires 128 bits. The block to be compressed therefore occupies 512 bits in total after compression, meeting the preset compression ratio requirement.
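The bit-budget arithmetic above can be verified directly:

```python
# Fixed-rate budget of the special compression (counts from the embodiment):
first_group_bits = 64 * 6     # 64 first-group pixels, high 6 bits each
second_group_bits = 64 * 2    # 64 second-group pixels, one 2-bit mode each
total = first_group_bits + second_group_bits
assert total == 512           # matches the 512-bit target
assert 1280 * 2 == total * 5  # 1280 : 512, i.e. a fixed 2.5:1 ratio
```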
Preferably, some pixel points in the block to be compressed may skip the 2-bit mode coding and instead directly use the output pixel value of a fixed adjacent pixel point as their predicted value, further reducing the compressed size of the block to be compressed.
The compressed code stream generated by general or special compression processing is written into the compressed code stream buffer of the ultra-high-definition video frame rate up-conversion system. When the frame rate up-conversion processing core or the video output module of the system requests video frame data, the compressed code stream is read from the buffer and decompressed into decompressed pixel blocks, which are output to the processing core or the video output module.
Fig. 9 is a schematic diagram of compression of a block to be compressed in the embodiment of the present application. First, video frame data undergoes compression block forming processing to obtain a plurality of independent blocks to be compressed; a block to be compressed then undergoes general compression processing, which includes: prediction processing, quantization processing, entropy coding processing, inverse quantization and pixel reconstruction processing, and code stream packing processing.
Prediction processing groups the pixels of each block to be compressed and then performs prediction and residual computation by group to obtain the residual of each pixel to be compressed. Quantization processing is applied to the residual obtained by prediction to yield the quantized residual; the specific operation of quantization is dividing the residual value by the quantization coefficient. For example, if the residual value is 1111111111 and the quantization coefficient is 2, quantization yields 111111111.1, which is equivalent to right-shifting the residual value by 1 bit and discarding the fractional part; if the quantization coefficient is 4, the quantized residual is 11111111.11, equivalent to right-shifting by 2 bits and discarding the fractional part. When quantizing the residuals of each group, different quantization coefficients may be used for different groups; the coefficient of each group may be set to a fixed value or adaptively adjusted.
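The divide-by-coefficient quantization can be sketched as a right shift when the coefficient is a power of two; the handling of negative residuals below is an assumption, since the text only shows a positive example.

```python
def quantize_shift(delta, q_shift):
    """Quantize by right shift: divide by 2**q_shift and drop the fraction.
    Negative residuals are shifted on their magnitude (an assumption)."""
    sign = -1 if delta < 0 else 1
    return sign * (abs(delta) >> q_shift)
```

A coefficient of 2 thus shifts by 1 bit and a coefficient of 4 by 2 bits, matching the worked example.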
Entropy coding is performed on the quantized pixel residuals to obtain coded pixel residuals. Specifically, the entropy coding may adopt Golomb coding; the Golomb coding order is assigned per group, all pixels in a group share the same coding order, and the coding orders of different groups may be fixed or adaptively adjusted.
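As a sketch of per-group Golomb coding, the Rice variant (the power-of-two special case of Golomb coding) with a zigzag mapping for signed residuals might look like this. The patent does not fix the exact Golomb variant or the signed-value mapping, so both are illustrative assumptions:

```python
def zigzag(v: int) -> int:
    """Map signed residuals to non-negative integers: 0,-1,1,-2,2 -> 0,1,2,3,4."""
    return (v << 1) if v >= 0 else -(v << 1) - 1

def rice_encode(value: int, k: int) -> str:
    """Golomb-Rice code of order k: unary quotient, then k-bit remainder.
    All pixels in a group share the same order k."""
    q, r = value >> k, value & ((1 << k) - 1)
    code = "1" * q + "0"              # unary part, terminated by a 0
    if k:
        code += format(r, f"0{k}b")   # fixed-width binary remainder
    return code

def rice_decode(bits: str, k: int) -> int:
    q = bits.index("0")               # length of the unary run
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0
    return (q << k) | r
```

For example, `rice_encode(9, 2)` gives `"11001"` (quotient 2 in unary, remainder 1 in two bits); a smaller order suits groups whose residuals cluster near zero, which is why the order may be adapted per group.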
In the code stream packing, the entropy-coded pixel residual values are packed in sequence to form a packed sub-code stream. In the inverse quantization and pixel reconstruction processing, the quantized residual of each pixel to be compressed is inverse-quantized and reconstructed to obtain a reconstruction value of the pixel for use in the prediction processing.
The packed sub-code streams then undergo compressed code stream output processing, in which the length of the output compressed code stream is controlled under the compression ratio requirement to form the compressed code stream to be output. The specific processing includes the following sub-steps:
sub-step S41, detecting the length of each packed sub-code stream;
sub-step S42, if the length of a group's packed sub-code stream exceeds the length of transmitting the group's original pixel values, discarding the packed sub-code stream when forming the compressed code stream and using the high 8 bits of the group's original pixel values instead; otherwise, using the packed sub-code stream;
sub-step S43, after sub-steps S41 and S42 are completed, obtaining the length of the candidate compressed code stream produced by prediction, quantization, and the subsequent steps. If this length does not meet the compression ratio requirement, the candidate code stream is abandoned and the result of the special compression processing is selected to form the compressed code stream.
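Sub-steps S41 and S42 amount to a per-group length check with a raw-pixel fallback. A minimal sketch, where the 10-bit source width and the function name are our assumptions:

```python
def choose_substream(packed_bits: str, original_pixels: list[int],
                     bits_per_pixel: int = 10) -> str:
    """If the packed sub-code stream for a group is longer than simply
    transmitting the group's original pixel values, discard it and send
    the high 8 bits of each original pixel instead (sub-steps S41/S42)."""
    raw_len = len(original_pixels) * bits_per_pixel
    if len(packed_bits) > raw_len:
        # fallback: high 8 bits of each original value
        return "".join(format(p >> (bits_per_pixel - 8), "08b")
                       for p in original_pixels)
    return packed_bits
```

This guarantees a group's contribution never exceeds a bounded worst case, which is what lets sub-step S43 compare the total against the compression ratio requirement.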
Fig. 10 is a schematic diagram illustrating decompression of a compressed code stream in the embodiment of the present application. Code stream parsing is performed first. If the compressed code stream was generated by the general compression processing, entropy decoding, inverse quantization, pixel formation, and pixel block restoration are performed on it; if it was generated by the special compression processing, special decoding and pixel block restoration are performed instead.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or a combination of acts, but those skilled in the art will recognize that the embodiments are not limited by the order of acts described, as some steps may occur in other orders or concurrently. Further, those skilled in the art will also appreciate that the embodiments described in the specification are preferred embodiments, and the acts involved are not necessarily required by the embodiments of the application.
Referring to fig. 11, a block diagram of a video data processing apparatus in embodiment 1 of the present application is shown, where the apparatus is applied to a video post-processing system, and the apparatus may specifically include the following modules:
a video frame data obtaining module 31, configured to obtain video frame data to be processed provided by the video post-processing system;
in an embodiment of the present application, the video post-processing system may include an ultra high definition video frame rate up-conversion system;
a block to be compressed generation module 32, configured to segment the video frame data to be processed into a plurality of independent blocks to be compressed;
the grouping module 33 is configured to perform grouping processing on the pixel points in a single block to be compressed to obtain groups with different priorities;
and the compression module 34 is configured to perform compression processing on the pixel points of each packet according to the priority of each packet.
As a preferred example of the embodiment of the present application, the grouping module may further include:
the region determining submodule is used for determining pixel point regions to be segmented in the blocks to be compressed;
the first segmentation submodule is used for uniformly segmenting the pixel point region to be segmented into two subregions;
the second segmentation submodule is used for iteratively and uniformly segmenting each sub-region, stopping the segmentation when the interval between any undivided pixel point and a segmented pixel point is less than or equal to 1;
the priority generation submodule is used for generating the priority of each pixel point according to the order of segmentation; the priority of the undivided pixel points is ranked after the priority of the divided pixel points.
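One plausible reading of the bisection-based grouping performed by the submodules above, shown for a one-dimensional pixel region; the patent does not spell out the exact split rule, so this is our interpretation, not a definitive implementation:

```python
def pixel_priorities(n: int) -> list[int]:
    """Assign a priority (0 = highest) to each of n pixel positions by
    iteratively bisecting the region. The midpoint chosen in each round
    gets that round's priority; subdivision stops once every remaining
    pixel lies within distance 1 of a segmented pixel, and pixels never
    chosen as split points form the lowest-priority group."""
    prio = [None] * n
    segments, level = [(0, n)], 0          # half-open intervals
    while segments:
        nxt = []
        for lo, hi in segments:
            mid = (lo + hi) // 2
            if prio[mid] is None:
                prio[mid] = level          # segmented in this round
            if mid - lo > 1:               # left part still too sparse
                nxt.append((lo, mid))
            if hi - mid > 1:               # right part still too sparse
                nxt.append((mid + 1, hi))
        segments, level = nxt, level + 1
    # undivided pixels rank after all divided ones
    return [level if p is None else p for p in prio]
```

For an 8-pixel row this yields priorities `[3, 2, 1, 2, 0, 3, 1, 2]`: pixel 4 is segmented first, pixels 2 and 6 next, and every undivided pixel (0 and 5) is adjacent to a segmented one, satisfying the stopping condition.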
As a preferred example of the embodiment of the present application, the block to be compressed generating module may further include:
and the sampling and splitting submodule is used for splitting the video frame data to be processed into a plurality of independent blocks to be compressed with brightness and a plurality of independent blocks to be compressed with chroma according to the sampling mode of the video frame data.
As a preferred example of the embodiment of the present application, the compression module may further include:
the residual intercepting submodule is used for, for the pixel points of the group with the highest priority, truncating a first preset number of high-order bits of the original pixel value of each pixel point as the residual value of the pixel point;
the prediction submodule is used for performing prediction processing on the pixel points of the groups other than the group with the highest priority, according to the priority of the groups, to obtain residual values of those pixel points; the prediction processing includes predicting the pixel points of the current group using reconstruction values generated from the residual values of the pixel points of a higher-priority group, the residual value of a pixel point being the difference between its original pixel value and its predicted value.
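A sketch of the two residual rules above. The 10-bit source width, the 4-bit preset value, and the single-neighbour prediction are illustrative assumptions, not values fixed by the patent:

```python
TOTAL_BITS, FIRST_BITS = 10, 4   # assumed source width and preset value

def highest_priority_residual(pixel: int) -> int:
    """Highest-priority group: the residual is simply the high
    FIRST_BITS bits of the original pixel value."""
    return pixel >> (TOTAL_BITS - FIRST_BITS)

def reconstruction(residual: int) -> int:
    """Reconstruction of a highest-priority pixel: zero-pad the residual
    back to the original bit width."""
    return residual << (TOTAL_BITS - FIRST_BITS)

def lower_priority_residual(pixel: int, neighbour_recon: int) -> int:
    """Other groups: residual = original value - predicted value, where
    the prediction is the reconstruction of a higher-priority neighbour."""
    return pixel - neighbour_recon
```

Because the decoder can rebuild the same reconstruction values from the bitstream alone, the lower-priority residuals tend to be small, which is what makes the subsequent quantization and entropy coding effective.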
As a preferred example of the embodiment of the present application, the compression module may further include:
and the quantization submodule is used for performing quantization processing on the residual error values of the pixel points of the groups other than the group with the highest priority to obtain the quantized residual error values of the pixel points of the groups other than the group with the highest priority.
As a preferred example of the embodiment of the present application, the compression module may further include:
the first reconstruction value generation submodule is used for, for the pixel points of the group with the highest priority, padding the low-order end of the residual value of each pixel point with zeros to obtain a reconstruction value with the same number of bits as the original pixel value;
and the second reconstruction value generation submodule is used for carrying out inverse quantization and reconstruction processing on the quantized residual error values of the pixel points of the groups except the group with the highest priority to obtain the reconstruction values of the pixel points of the groups except the group with the highest priority.
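The second reconstruction path might be sketched as follows; the multiplicative inverse quantization mirrors the divide-by-coefficient quantization described earlier, and the function name is ours:

```python
def reconstruct_lower_priority(quantized_residual: int, coeff: int,
                               prediction: int) -> int:
    """Inverse-quantize the residual (multiply the quantization
    coefficient back in; the bits discarded during quantization remain
    lost) and add the prediction to obtain the reconstruction value."""
    return quantized_residual * coeff + prediction
```

The encoder uses exactly this value, not the original pixel, when predicting still-lower-priority groups, so encoder and decoder stay in lockstep despite the quantization loss.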
As a preferred example of the embodiment of the present application, the compression module may further include:
and the entropy coding submodule is used for entropy coding the quantized residual error values of the pixel points of the groups except the group with the highest priority to obtain the coded residual error values of the pixel points of the groups except the group with the highest priority.
As a preferred example of the embodiment of the present application, the compression module may further include:
and the code stream packing submodule is used for carrying out code stream packing processing on the residual error value after coding of the pixel points of the groups except the group with the highest priority and the residual error value of the pixel points of the group with the highest priority according to the priority to obtain the packed subcode streams of all the groups.
Referring to fig. 12, a block diagram of an embodiment 2 of a video data processing apparatus according to the present application is shown, where the apparatus is applied to a video post-processing system, and the apparatus may specifically include the following modules:
a video frame data obtaining module 41, configured to obtain video frame data to be processed provided by the video post-processing system;
in an embodiment of the present application, the video post-processing system may include an ultra high definition video frame rate up-conversion system;
a block to be compressed generation module 42, configured to segment the video frame data to be processed into a plurality of independent blocks to be compressed;
a general grouping module 43, configured to perform general grouping processing on pixel points in a single block to be compressed to obtain general groups with different priorities;
a general compression module 44, configured to perform general compression processing on the pixel points of each general packet according to the priority of each general packet;
a special compression processing module 45, configured to perform special compression processing on the pixel points in the single block to be compressed when the result of the general compression processing does not meet a preset compression ratio requirement; the special compression processing is compression processing for generating a compressed code stream with a fixed compression ratio.
As a preferred example of the embodiment of the present application, the special compression module further includes:
the interval grouping submodule is used for dividing the pixel points of the block to be compressed into a first special group and a second special group at intervals;
the pixel value intercepting submodule is used for, for the pixel points of the first special group, truncating a second preset number of high-order bits of the original pixel value of each pixel point as the first output pixel value of the pixel point;
the special prediction submodule is used for generating, for the pixel points of the second special group, a prediction mode according to the original pixel value of the pixel point and the original pixel values of the adjacent pixel points of the first special group, and using the prediction mode as the second output pixel value of the pixel point;
and the compressed code stream merging submodule is used for merging the first output pixel values of the pixel points of the first special group and the second output pixel values of the pixel points of the second special group to form the compressed code stream.
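The fixed-ratio special path can be sketched as follows. The alternating split, the 10-bit source, the 5 kept bits, and especially the one-bit left/right prediction mode are all illustrative stand-ins: the patent leaves the mode set unspecified:

```python
def special_compress(pixels: list[int], total_bits: int = 10,
                     keep_bits: int = 5) -> str:
    """Fixed-compression-ratio fallback: even-indexed pixels (first
    special group) transmit their high keep_bits bits; odd-indexed
    pixels (second special group) transmit a prediction mode derived
    from the neighbouring first-group pixels. Here the 'mode' is one
    bit choosing the closer of the left/right neighbours."""
    out = []
    for i, p in enumerate(pixels):
        if i % 2 == 0:                      # first special group
            out.append(format(p >> (total_bits - keep_bits),
                              f"0{keep_bits}b"))
        else:                               # second special group
            left = pixels[i - 1]
            right = pixels[i + 1] if i + 1 < len(pixels) else left
            # mode bit: 0 = predict from left, 1 = predict from right
            out.append("0" if abs(p - left) <= abs(p - right) else "1")
    return "".join(out)
```

Every pixel costs either `keep_bits` bits or 1 bit regardless of content, so the output length, and therefore the compression ratio, is fixed, which is the property the general path cannot guarantee.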
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one of skill in the art, embodiments of the present application may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including the preferred embodiment and all such alterations and modifications as fall within the true scope of the embodiments of the application.
Finally, it should also be noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or terminal that comprises the element.
The foregoing detailed description is directed to a video data processing method and a video data processing apparatus, and the principles and embodiments of the present application are explained by applying specific examples, which are merely used to help understand the method and the core idea of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (12)

1. A video data processing method, applied to a video post-processing system, the method comprising:
acquiring video frame data to be processed provided by the video post-processing system;
segmenting the video frame data to be processed into a plurality of independent blocks to be compressed;
grouping pixel points in a single block to be compressed to obtain groups with different priorities;
compressing the pixel points of each group according to the priority of each group;
the step of grouping the pixel points in a single block to be compressed to obtain the groups with different priorities includes:
determining a pixel point region to be segmented in a block to be compressed;
uniformly dividing the pixel point region to be divided into two sub-regions;
performing iteration and uniform segmentation on each subarea until the interval between an undivided pixel point and a segmented pixel point is less than or equal to 1, and stopping segmentation;
generating the priority of each pixel point according to the order of segmentation; the priority of the undivided pixel points is ranked after the priority of the divided pixel points;
the step of compressing the pixel points of each group according to the priority of each group comprises the following steps:
and compressing the pixel points of each group according to the priority sequence of each group from high to low of the groups in a single block to be compressed.
2. The method according to claim 1, wherein the step of compressing the pixels of each group according to the priority of each group comprises:
for the pixel points of the group with the highest priority, truncating a first preset number of high-order bits of the original pixel value of each pixel point as the residual value of the pixel point;
performing prediction processing on the pixel points of the groups except the group with the highest priority according to the priority of the groups to obtain residual values of the pixel points; the prediction process includes: predicting the pixel points of the current grouping by adopting a reconstruction value generated by the residual error value of the pixel points of the high-priority grouping; and the residual value of the pixel point is the difference value between the original pixel value of the pixel point and the predicted value of the pixel point.
3. The method according to claim 2, wherein the step of compressing the pixels of each group according to the priority of each group further comprises:
and quantizing the residual values of the pixel points of the groups except the group with the highest priority to obtain the quantized residual values of the pixel points of the groups except the group with the highest priority.
4. The method according to claim 3, wherein the step of compressing the pixels of each group according to the priority of each group further comprises:
for the pixel points of the group with the highest priority, padding the low-order end of the residual value with zeros to obtain a reconstruction value with the same number of bits as the original pixel value;
and carrying out inverse quantization and reconstruction processing on the quantized residual values of the pixel points of the packets except the packet with the highest priority to obtain the reconstruction values of the pixel points of the packets except the packet with the highest priority.
5. The method according to claim 4, wherein the step of compressing the pixels of each group according to the priority of each group further comprises:
and entropy coding is carried out on the quantized residual error values of the pixel points of the groups other than the group with the highest priority to obtain the coded residual error values of the pixel points of the groups other than the group with the highest priority.
6. The method according to claim 5, wherein the step of compressing the pixels of each group according to the priority of each group further comprises:
and carrying out code stream packing processing on the residual values after coding of the pixel points of the groups except the group with the highest priority and the residual values of the pixel points of the group with the highest priority according to the priority to obtain packed subcode streams of all the groups.
7. A video data processing apparatus, for use in a video post-processing system, the apparatus comprising:
the video frame data acquisition module is used for acquiring video frame data to be processed, which is provided by the video post-processing system;
a block to be compressed generation module, configured to segment the video frame data to be processed into multiple independent blocks to be compressed;
the grouping module is used for grouping the pixel points in a single block to be compressed to obtain groups with different priorities;
the compression module is used for compressing the pixel points of each group according to the priority of each group;
wherein the grouping module comprises:
the region determining submodule is used for determining pixel point regions to be segmented in the blocks to be compressed;
the first segmentation submodule is used for uniformly segmenting the pixel point region to be segmented into two subregions;
the second segmentation submodule is used for iterating and uniformly segmenting each subarea until the interval between an undivided pixel point and a segmented pixel point is less than or equal to 1, and stopping segmentation;
the priority generation submodule is used for generating the priority of each pixel point according to the sequential segmentation result; the priority of the non-segmented pixel points is sequenced after the priority of the segmented pixel points;
the compression module is also used for compressing the grouping in the single block to be compressed according to the priority sequence of each grouping from high to low.
8. The apparatus of claim 7, wherein the compression module further comprises:
the residual intercepting submodule is used for, for the pixel points of the group with the highest priority, truncating a first preset number of high-order bits of the original pixel value of each pixel point as the residual value of the pixel point;
the prediction submodule is used for performing prediction processing on pixel points of the groups except the group with the highest priority according to the priority of the groups to obtain residual values of the pixel points; the prediction process includes: predicting the pixel points of the current grouping by adopting a reconstruction value generated by the residual error value of the pixel points of the high-priority grouping; and the residual value of the pixel point is the difference value between the original pixel value of the pixel point and the predicted value of the pixel point.
9. The apparatus of claim 8, wherein the compression module further comprises:
and the quantization submodule is used for performing quantization processing on the residual error values of the pixel points of the groups other than the group with the highest priority to obtain the quantized residual error values of the pixel points of the groups other than the group with the highest priority.
10. The apparatus of claim 9, wherein the compression module further comprises:
the first reconstruction value generation submodule is used for, for the pixel points of the group with the highest priority, padding the low-order end of the residual value of each pixel point with zeros to obtain a reconstruction value with the same number of bits as the original pixel value;
and the second reconstruction value generation submodule is used for carrying out inverse quantization and reconstruction processing on the quantized residual error values of the pixel points of the groups except the group with the highest priority to obtain the reconstruction values of the pixel points of the groups except the group with the highest priority.
11. The apparatus of claim 10, wherein the compression module further comprises:
and the entropy coding submodule is used for entropy coding the quantized residual error values of the pixel points of the groups except the group with the highest priority to obtain the coded residual error values of the pixel points of the groups except the group with the highest priority.
12. The apparatus of claim 11, wherein the compression module further comprises:
and the code stream packing submodule is used for carrying out code stream packing processing on the residual error value after coding of the pixel points of the groups except the group with the highest priority and the residual error value of the pixel points of the group with the highest priority according to the priority to obtain the packed subcode streams of all the groups.
CN201610008432.6A 2016-01-07 2016-01-07 Video data processing method and device Active CN106954074B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610008432.6A CN106954074B (en) 2016-01-07 2016-01-07 Video data processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610008432.6A CN106954074B (en) 2016-01-07 2016-01-07 Video data processing method and device

Publications (2)

Publication Number Publication Date
CN106954074A CN106954074A (en) 2017-07-14
CN106954074B true CN106954074B (en) 2019-12-20

Family

ID=59465706

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610008432.6A Active CN106954074B (en) 2016-01-07 2016-01-07 Video data processing method and device

Country Status (1)

Country Link
CN (1) CN106954074B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108900804B (en) * 2018-07-09 2020-11-03 南通世盾信息技术有限公司 Self-adaptive video stream processing method based on video entropy
CN109300444B (en) * 2018-12-03 2020-01-21 深圳市华星光电半导体显示技术有限公司 Compression method of compensation table

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101102495A (en) * 2007-07-26 2008-01-09 武汉大学 A video image decoding and encoding method and device based on area
CN101527849A (en) * 2009-03-30 2009-09-09 清华大学 Storing system of integrated video decoder
CN101583033A (en) * 2009-06-05 2009-11-18 中山大学 Method for protecting H.264 video data by using robust watermarks
CN103583044A (en) * 2011-01-31 2014-02-12 韩国电子通信研究院 Method and apparatus for encoding/decoding images using a motion vector
CN103634556A (en) * 2012-08-27 2014-03-12 联想(北京)有限公司 Information transmission method, information receiving method and electronic apparatus
EP2903276A1 (en) * 2014-02-04 2015-08-05 Thomson Licensing Method for encoding and decoding a picture comprising inpainting of the picture epitome and corresponding devices
CN105187831A (en) * 2015-09-18 2015-12-23 广州市百果园网络科技有限公司 Image compression method and apparatus

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8411734B2 (en) * 2007-02-06 2013-04-02 Microsoft Corporation Scalable multi-thread video decoding

Also Published As

Publication number Publication date
CN106954074A (en) 2017-07-14

Similar Documents

Publication Publication Date Title
AU2012285356B2 (en) Tiered signal decoding and signal reconstruction
JP6523324B2 (en) Image encoding / decoding method and apparatus
US10542265B2 (en) Self-adaptive prediction method for multi-layer codec
CN113766249B (en) Loop filtering method, device, equipment and storage medium in video coding and decoding
US10616498B2 (en) High dynamic range video capture control for video transmission
JP4541896B2 (en) Apparatus and method for multiple description encoding
US20230059060A1 (en) Intra block copy scratch frame buffer
JP2007514359A (en) Spatial scalable compression scheme with dead zone
US20230421786A1 (en) Chroma from luma prediction for video coding
US20210250575A1 (en) Image processing device
CN106954074B (en) Video data processing method and device
WO2015138311A1 (en) Phase control multi-tap downscale filter
CN116114246B (en) Intra-frame prediction smoothing filter system and method
US11463716B2 (en) Buffers for video coding in palette mode
CN106954073B (en) Video data input and output method, device and system
CN111212288B (en) Video data encoding and decoding method and device, computer equipment and storage medium
KR20230108286A (en) Video encoding using preprocessing
KR100798386B1 (en) Method of compressing and decompressing image and equipment thereof
CN105763826B (en) A kind of input of video data, output method and device
JP2022523461A (en) Information processing methods and devices, equipment, storage media
WO2023051223A1 (en) Filtering method and apparatus, encoding method and apparatus, decoding method and apparatus, computer-readable medium, and electronic device
JP6990172B2 (en) Determination of luminance samples to be co-located with color component samples for HDR coding / decoding
CN114071148A (en) Video coding method, device, equipment and product
KR20220088888A (en) Iterative training of neural networks for intra prediction
JP2024510433A (en) Temporal structure-based conditional convolutional neural network for video compression

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 266100 Zhuzhou Road, Laoshan District, Shandong, No. 151, No.

Patentee after: Hisense Video Technology Co.,Ltd.

Address before: 266100 Zhuzhou Road, Laoshan District, Shandong, No. 151, No.

Patentee before: HISENSE ELECTRIC Co.,Ltd.

CP01 Change in the name or title of a patent holder