CN117979059A - Video processing method, device, electronic equipment and storage medium

Info

Publication number
CN117979059A
Application number
CN202410117571.7A
Authority
CN (China)
Prior art keywords
data, sub, target sub, YUV, original
Legal status
Pending
Other languages
Chinese (zh)
Inventors
李拓, 邹晓峰, 满宏涛, 张贞雷
Assignee
Shandong Yunhai Guochuang Cloud Computing Equipment Industry Innovation Center Co Ltd
Application filed by Shandong Yunhai Guochuang Cloud Computing Equipment Industry Innovation Center Co Ltd
Priority to CN202410117571.7A

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present invention relates to the field of video design, and in particular to a video processing method, apparatus, electronic device, and storage medium. The method includes: acquiring original YUV data corresponding to the original image of a current frame in original video data; obtaining a target YUV data type corresponding to the original image of the current frame; performing data recombination on each original sub-YUV data according to the target YUV data type to generate target pixel data, where the target pixel data includes a plurality of target sub-data, the data size of each target sub-data matches the bus bandwidth, and the number of groups of target sub-data is smaller than the number of groups of original sub-YUV data; performing data compression on the target pixel data corresponding to N frames of original images in the original video data to generate a YUV comparison table corresponding to the N frames of original images; and transmitting the YUV comparison table to a target device. In this way, the amount of video data transmitted in YUV mode is reduced, the video frame loss rate is lowered, and the overall performance of the chip is optimized.

Description

Video processing method, device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of video design, and in particular, to a video processing method, apparatus, electronic device, and storage medium.
Background
With the development of technology, the requirements on video streaming keep increasing. The video processing flow in a traditional baseboard management control chip handles two video formats: video in YUV format, and video in a compressed format such as JPEG.
Specifically, the original video data at the host end is transmitted over PCIe to the VGA module in the baseboard management control chip. The VGA module generates original video data in RGB format, which is then converted into YUV format through color space conversion. At this point the YUV data takes two paths: one path is written directly into DDR; the other passes through video compression IP (H.264 format, JPEG format, and the like) to obtain video data in a compressed format, and the compressed data is written into external DDR.
According to the display requirement of the remote end, the data is then sent to the remote end for display through the EMAC network function of the baseboard management control chip, thereby realizing the function of remote management control.
However, YUV video data is uncompressed video data whose volume is very large, which places great pressure on DDR and bus bandwidth. Therefore, how to transmit video data in YUV mode is a problem to be solved.
Disclosure of Invention
In view of the above, the present invention provides a video processing method, apparatus, electronic device and storage medium, so as to solve the problem of how to transmit YUV mode video data.
In a first aspect, the present invention provides a video processing method, including:
acquiring original YUV data corresponding to an original image of a current frame in original video data; the original YUV data comprises original sub YUV data corresponding to each pixel point in the original image of the current frame;
obtaining a target YUV data type corresponding to an original image of a current frame; the target YUV data types include YUV444 type and YUV420 type;
according to the target YUV data type, carrying out data recombination processing on each original sub-YUV data to generate target pixel data; the target pixel data comprises a plurality of target sub-data; the data size of each target sub-data is matched with the bus bandwidth; and the number of groups corresponding to the target sub-data is smaller than the number of groups corresponding to the original sub-YUV data;
performing data compression on target pixel data corresponding to N frames of original images in the original video data to generate a YUV comparison table corresponding to the N frames of original images; and transmitting the YUV comparison table to the target device.
According to the video processing method provided by the embodiment of the application, the original YUV data corresponding to the original image of the current frame in the original video data is acquired, and the target YUV data type corresponding to the original image of the current frame is obtained, which ensures that the target YUV data type determined for the original YUV data is accurate. Data recombination is then performed on each original sub-YUV data according to the target YUV data type to generate target pixel data, ensuring the accuracy of the generated target pixel data. The target pixel data comprises a plurality of target sub-data; the data size of each target sub-data matches the bus bandwidth, which greatly improves bus bandwidth utilization; and the number of groups of target sub-data is smaller than the number of groups of original sub-YUV data, which reduces the time taken to transmit video data in YUV mode and improves transmission efficiency. Then, data compression is performed on the target pixel data corresponding to N frames of original images in the original video data to generate a YUV comparison table corresponding to the N frames of original images, and the YUV comparison table is transmitted to the target device. In this way, the amount of video data transmitted in YUV mode is reduced, the video frame loss rate is lowered, and the overall performance of the chip is optimized.
In an alternative embodiment, when the target YUV data type is YUV444 type, performing data reorganization processing on each original sub-YUV data according to the target YUV data type to generate target pixel data, including:
According to the sequence corresponding to each pixel point in the original image of the current frame, sequentially carrying out data recombination on original sub YUV data corresponding to each pixel point to generate each target sub data;
And generating target pixel data corresponding to the original image of the current frame based on each target sub-data.
According to the video processing method provided by the embodiment of the application, the original sub YUV data corresponding to each pixel point is subjected to data recombination in sequence according to the sequence corresponding to each pixel point in the original image of the current frame, so that each target sub data is generated, the accuracy of each generated target sub data is ensured, and the sequence of each target sub data is free from errors. And generating target pixel data corresponding to the original image of the current frame based on each target sub-data, so that the accuracy of the generated target pixel data is ensured, and further, the accuracy of a target video generated according to the target sub-data in the later period can be ensured.
In an alternative embodiment, according to the sequence corresponding to each pixel point in the original image of the current frame, sequentially performing data reorganization on original sub-YUV data corresponding to each pixel point to generate each target sub-data, including:
Sequentially obtaining original sub-YUV data corresponding to each pixel point according to the sequence corresponding to each pixel point in the original image of the current frame;
Starting from the 1 st pixel point, taking original sub YUV data corresponding to every adjacent four pixel points as a first cyclic data set;
splitting and recombining adjacent four original sub-YUV data aiming at a first cyclic data group corresponding to each original sub-YUV data to generate three target sub-data;
and repeating the cycle until all the first cyclic data groups are processed, thereby generating each target sub-data corresponding to the original image of the current frame.
According to the video processing method provided by the embodiment of the application, the original sub-YUV data corresponding to each pixel point is sequentially acquired according to the order of the pixel points in the original image of the current frame, and, starting from the 1st pixel point, the original sub-YUV data corresponding to every four adjacent pixel points is taken as a first cyclic data group, which ensures that each first cyclic data group is determined accurately and that the order among the first cyclic data groups is correct. For each first cyclic data group, the four adjacent original sub-YUV data are split and recombined, which ensures the accuracy of the three generated target sub-data. In this way, the data volume of a single target sub-data is larger than that of a single original sub-YUV data, which solves the problem of the original method that 25% of the original sub-YUV data transmitted over the bus is invalid, greatly wasting DDR memory space and bus bandwidth; the bus bandwidth utilization is thereby improved. In addition, the number of target sub-data is reduced relative to the original sub-YUV data, so the time taken to transmit video data in YUV mode is reduced and transmission efficiency is improved. The cycle is repeated until all the first cyclic data groups are processed and each target sub-data corresponding to the original image of the current frame is generated, ensuring the accuracy of the generated target sub-data.
In an alternative embodiment, the data types of the three target sub-data are the YUVY type, the UVYU type, and the VYUV type, respectively. For the first cyclic data group corresponding to each original sub-YUV data, splitting and recombining the adjacent four original sub-YUV data to generate three target sub-data includes:
for each first cyclic data group, combining the original sub-YUV data corresponding to the 1st pixel point in the first cyclic data group with the Y data in the original sub-YUV data corresponding to the 2nd pixel point in the first cyclic data group, to generate target sub-data of the YUVY type;
combining the UV data in the original sub-YUV data corresponding to the 2nd pixel point with the YU data in the original sub-YUV data corresponding to the 3rd pixel point, to generate target sub-data of the UVYU type;
and combining the V data in the original sub-YUV data corresponding to the 3rd pixel point with the original sub-YUV data corresponding to the 4th pixel point, to generate target sub-data of the VYUV type.
According to the video processing method provided by the embodiment of the application, for the first cyclic data group corresponding to each original sub-YUV data, the original sub-YUV data corresponding to the 1st pixel point in the first cyclic data group is combined with the Y data in the original sub-YUV data corresponding to the 2nd pixel point to generate target sub-data of the YUVY type; the UV data in the original sub-YUV data corresponding to the 2nd pixel point is combined with the YU data in the original sub-YUV data corresponding to the 3rd pixel point to generate target sub-data of the UVYU type; and the V data in the original sub-YUV data corresponding to the 3rd pixel point is combined with the original sub-YUV data corresponding to the 4th pixel point to generate target sub-data of the VYUV type. The accuracy of each type of generated target sub-data is guaranteed. This solves the problem of the original method that 25% of the original sub-YUV data transmitted over the bus is invalid, greatly wasting DDR memory space and bus bandwidth, and thereby improves bus bandwidth utilization. In addition, the number of target sub-data is reduced relative to the original sub-YUV data, so the time taken to transmit video data in YUV mode is reduced and transmission efficiency is improved.
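For illustration, the following is a minimal Python sketch of this YUV444 reorganization, assuming 8-bit Y/U/V components and a 32-bit bus word holding four components; the function name and data layout are illustrative assumptions, not taken from the patent:

    def repack_yuv444(pixels):
        """Repack YUV444 pixels into 32-bit bus words.

        pixels: list of (Y, U, V) tuples, length a multiple of 4.
        Every 4 pixels (12 components) become 3 words of 4 components,
        YUVY, UVYU, VYUV, so no pad byte is wasted on the bus.
        """
        words = []
        for i in range(0, len(pixels), 4):
            p0, p1, p2, p3 = pixels[i:i + 4]
            words.append((p0[0], p0[1], p0[2], p1[0]))  # YUVY
            words.append((p1[1], p1[2], p2[0], p2[1]))  # UVYU
            words.append((p2[2], p3[0], p3[1], p3[2]))  # VYUV
        return words

    # 4 pixels in -> 3 fully packed words out, instead of 4 words with
    # one invalid byte each in the naive YUV-plus-pad layout.
    example = [(1, 2, 3), (4, 5, 6), (7, 8, 9), (10, 11, 12)]
    assert repack_yuv444(example) == [
        (1, 2, 3, 4), (5, 6, 7, 8), (9, 10, 11, 12)]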
In an alternative embodiment, when the target YUV data type is YUV420 type, performing data reorganization processing on each original sub-YUV data according to the target YUV data type to generate target pixel data, including:
Performing rounding processing on original sub-YUV data corresponding to each pixel point in an original image of a current frame to generate sub-YUV data to be processed corresponding to each pixel point;
According to the sequence corresponding to each pixel point in the original image of the current frame, sequentially carrying out recombination processing on the sub YUV data to be processed corresponding to each pixel point to generate each target sub data;
And generating target pixel data corresponding to the original image of the current frame based on each target sub-data.
According to the video processing method provided by the embodiment of the application, the original sub-YUV data corresponding to each pixel point in the original image of the current frame is subjected to rounding processing to generate the sub-YUV data to be processed corresponding to each pixel point, which ensures that the total amount of the finally generated sub-YUV data to be processed is reduced, so that the time taken to transmit video data in YUV mode can be reduced and transmission efficiency improved. Then, according to the order of the pixel points in the original image of the current frame, the sub-YUV data to be processed corresponding to each pixel point is sequentially recombined to generate each target sub-data, which ensures the accuracy of each generated target sub-data and keeps their order free from errors. Target pixel data corresponding to the original image of the current frame is generated based on each target sub-data, ensuring the accuracy of the generated target pixel data and, in turn, the accuracy of the target video later generated from the target sub-data.
In an alternative embodiment, performing a rounding process on original sub-YUV data corresponding to each pixel in an original image of a current frame to generate sub-YUV data to be processed corresponding to each pixel, including:
For each pixel point of an even line and an even column in an original image of a current frame, original sub-YUV data corresponding to each pixel point of the even line and the even column is reserved, and sub-YUV data to be processed corresponding to each pixel point of the even line and the even column is generated;
for other pixel points except for each pixel point of even lines and even columns, only Y data in original sub-YUV data corresponding to each other pixel point is reserved, and sub-YUV data to be processed corresponding to each other pixel point is generated.
According to the video processing method provided by the embodiment of the application, original sub YUV data corresponding to each pixel point of an even row and an even column in an original image of a current frame are reserved, and sub YUV data to be processed corresponding to each pixel point of the even row and the even column is generated; for other pixel points except for each pixel point of even lines and even columns, only Y data in original sub-YUV data corresponding to each other pixel point is reserved, and sub-YUV data to be processed corresponding to each other pixel point is generated, so that the data quantity of the generated sub-YUV data to be processed is reduced, and the characteristics of the original sub-YUV data are guaranteed.
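For illustration, a Python sketch of this rounding step. Whether "even" counts from 0 or 1 is an implementation choice; the sketch assumes 0-indexed rows and columns, so that the retained chroma sites align with the pairing described in the following embodiment:

    def round_yuv420(frame):
        """Chroma-discarding ('rounding') step for the YUV420 type.

        frame: 2-D list of (Y, U, V) tuples.
        Pixels on even rows AND even columns (0-indexed here, matching
        the chroma sampling sites of YUV420) keep Y, U and V; every
        other pixel keeps only its Y component.
        """
        out = []
        for r, row in enumerate(frame):
            new_row = []
            for c, (y, u, v) in enumerate(row):
                if r % 2 == 0 and c % 2 == 0:
                    new_row.append((y, u, v))   # full sample kept
                else:
                    new_row.append((y,))        # luma only
            out.append(new_row)
        return out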
In an alternative embodiment, according to the sequence corresponding to each pixel point in the original image of the current frame, the sub-YUV data to be processed corresponding to each pixel point is sequentially recombined to generate each target sub-data, which includes:
For each pixel point of an even line in the original image of the current frame, starting from the 1 st pixel point, taking sub YUV data to be processed corresponding to every two adjacent pixel points as a second cyclic data group;
For each second cyclic data set, recombining two sub-YUV data to be processed in the second cyclic data set to generate target sub-data;
for each pixel point of an odd line in the original image of the current frame, starting from the 1 st pixel point, taking sub YUV data to be processed corresponding to every adjacent four pixel points as a third cyclic data group;
For each third cyclic data set, recombining four sub-YUV data to be processed in the third cyclic data set to generate target sub-data;
and repeating the cycle until all the second cyclic data groups and third cyclic data groups are processed, thereby generating each target sub-data.
According to the video processing method provided by the embodiment of the application, for the pixel points of each even line in the original image of the current frame, starting from the 1st pixel point, the sub-YUV data to be processed corresponding to every two adjacent pixel points is taken as a second cyclic data group, which ensures that each second cyclic data group is determined accurately and that the order among the second cyclic data groups is correct. For each second cyclic data group, the two sub-YUV data to be processed in the group are recombined to generate one target sub-data, ensuring the accuracy of the generated target sub-data and reducing the total data amount. For the pixel points of each odd line in the original image of the current frame, starting from the 1st pixel point, the sub-YUV data to be processed corresponding to every four adjacent pixel points is taken as a third cyclic data group, which ensures that each third cyclic data group is determined accurately and that the order among the third cyclic data groups is correct. For each third cyclic data group, the four sub-YUV data to be processed in the group are recombined to generate one target sub-data, again ensuring accuracy while reducing the total data amount. The cycle is repeated until all the second and third cyclic data groups are processed and each target sub-data is generated, ensuring the accuracy of the generated target sub-data.
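Continuing the sketch above under the same assumptions, the reorganization packs each chroma-bearing row into YUVY words and each luma-only row into YYYY words:

    def repack_yuv420(rounded):
        """Pack the output of round_yuv420 into 4-component bus words.

        Assumes each row length is a multiple of 4. Rows holding
        chroma (even rows here): each pixel pair ((Y,U,V), (Y,))
        becomes one YUVY word (the second cyclic data group).
        Luma-only rows: each run of four (Y,) pixels becomes one
        YYYY word (the third cyclic data group).
        """
        words = []
        for r, row in enumerate(rounded):
            if r % 2 == 0:                       # chroma-bearing row
                for c in range(0, len(row), 2):
                    (y0, u0, v0), (y1,) = row[c], row[c + 1]
                    words.append((y0, u0, v0, y1))            # YUVY
            else:                                # luma-only row
                for c in range(0, len(row), 4):
                    words.append(tuple(p[0] for p in row[c:c + 4]))  # YYYY
        return words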
In an alternative embodiment, the type of the target sub-data is the YUVY type, and for each second cyclic data group, recombining the two sub-YUV data to be processed in the second cyclic data group to generate one target sub-data includes:
for each second cyclic data group, combining the sub-YUV data to be processed corresponding to the 1st pixel point in the second cyclic data group with the Y data in the sub-YUV data to be processed corresponding to the 2nd pixel point, to generate target sub-data of the YUVY type.
According to the video processing method provided by the embodiment of the application, for each second cyclic data group, the sub-YUV data to be processed corresponding to the 1st pixel point in the second cyclic data group is combined with the Y data in the sub-YUV data to be processed corresponding to the 2nd pixel point to generate target sub-data of the YUVY type. The accuracy of the generated target sub-data is guaranteed, the bus utilization rate is greatly improved, the writing of invalid data into the memory is reduced, the frame loss rate is lowered, and the overall performance of the chip is improved.
In an alternative embodiment, the type of the target sub-data is the YYYY type, and for each third cyclic data group, recombining the four sub-YUV data to be processed in the third cyclic data group to generate one target sub-data includes:
for each third cyclic data group, recombining the Y data in the four sub-YUV data to be processed in the third cyclic data group, to generate target sub-data of the YYYY type.
According to the video processing method provided by the embodiment of the application, for each third cyclic data group, the Y data in the four sub-YUV data to be processed in the third cyclic data group are recombined to generate target sub-data of the YYYY type. The accuracy of the generated target sub-data is guaranteed, the bus utilization rate is greatly improved, the writing of invalid data into the memory is reduced, the frame loss rate is lowered, and the overall performance of the chip is improved.
In an alternative embodiment, data compression is performed on target pixel data corresponding to N frames of original images in the original video data, to generate a YUV comparison table corresponding to the N frames of original images, including:
generating a plurality of labeling data aiming at the relation between the target sub-data corresponding to the N frames of original images respectively, wherein the data volume of the labeling data is smaller than that of each target sub-data; each labeling data corresponds to at least one target sub-data;
and generating a YUV comparison table corresponding to the N frames of original images according to each labeling data.
According to the video processing method provided by the embodiment of the application, a plurality of labeling data are generated for the relations between the target sub-data corresponding to the N frames of original images, and a YUV comparison table corresponding to the N frames of original images is generated from the labeling data. This compresses the target sub-data, reducing the transmission data volume, saving DDR memory space, lowering the bus bandwidth consumed, and improving the overall performance of the chip.
In an alternative embodiment, generating a plurality of labeling data for the relation between the target sub-data corresponding to the N frames of original images respectively includes:
acquiring current target sub-data; the current target sub-data is any one of the target sub-data;
detecting whether there is unprocessed historical target sub-data which precedes the current target sub-data and for which no labeling data has been generated; wherein labeling data not being generated indicates that, before the unprocessed historical target sub-data, there exists first identical historical target sub-data that is the same as the unprocessed historical target sub-data; the number of unprocessed historical target sub-data is at least one;
When unprocessed historical target sub-data does not exist, comparing the current target sub-data with each historical target sub-data before the current target sub-data, and judging whether second identical historical target sub-data which is identical to the current target sub-data exists in each historical target sub-data or not;
And outputting the labeling data corresponding to the current target sub-data according to the judging result.
The video processing method provided by the embodiment of the application acquires the current target sub-data; detecting whether unprocessed historical target sub-data which is positioned before the current target sub-data and does not generate marking data exists or not; when unprocessed historical target sub-data does not exist, comparing the current target sub-data with each historical target sub-data before the current target sub-data, and judging whether second identical historical target sub-data which is identical to the current target sub-data exists in each historical target sub-data or not, so that the accuracy of a judging result is ensured. And outputting the labeling data corresponding to the current target sub-data according to the judging result, so that the accuracy of the labeling data corresponding to the output current target sub-data is ensured.
In an alternative embodiment, when there is no unprocessed historical target sub-data, comparing the current target sub-data with each historical target sub-data preceding the current target sub-data, and determining whether there is a second identical historical target sub-data in each historical target sub-data that is identical to the current target sub-data, includes:
when unprocessed historical target sub-data does not exist, absolute difference values between the current target sub-data and each historical target sub-data are calculated in sequence;
Comparing each absolute difference value with a first preset difference value;
When the absolute difference value is smaller than the first preset difference value, determining that second identical historical target sub-data which is identical to the current target sub-data exists in each historical target sub-data;
and when the absolute difference value is not smaller than the first preset difference value, determining that no second identical historical target sub-data that is the same as the current target sub-data exists in the historical target sub-data.
According to the video processing method provided by the embodiment of the application, when unprocessed historical target sub-data does not exist, absolute difference values between the current target sub-data and each historical target sub-data are calculated in sequence, so that the accuracy of the absolute difference values between the calculated current target sub-data and each historical target sub-data is ensured. Comparing each absolute difference value with a first preset difference value; when the absolute difference value is smaller than the first preset difference value, determining that second identical historical target sub-data which is identical to the current target sub-data exists in each historical target sub-data, and when the absolute difference value is not smaller than the first preset difference value, determining that second identical historical target sub-data which is identical to the current target sub-data does not exist in each historical target sub-data. The accuracy of the result of determining whether the second same historical target sub-data exists is ensured.
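For illustration, a Python sketch of this approximate-equality test, treating each target sub-data as an integer value; `threshold` stands for the first preset difference value, and the names are illustrative:

    def is_same(current, history, threshold):
        """Return True if some earlier sub-data matches `current`.

        Two sub-data are considered 'the same' when the absolute
        difference between them is smaller than the preset threshold,
        so near-identical video data is deduplicated as well.
        """
        return any(abs(current - h) < threshold for h in history)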
In an optional implementation manner, according to the judging result, outputting the labeling data corresponding to the current target sub-data includes:
when the judgment result is that no second identical historical target sub-data that is the same as the current target sub-data exists in the historical target sub-data, outputting the labeling data corresponding to the current target sub-data according to a preset mapping relation, and continuing to acquire the next target sub-data after the current target sub-data.
According to the video processing method provided by the embodiment of the application, when the judgment result is that no second identical historical target sub-data that is the same as the current target sub-data exists in the historical target sub-data, the labeling data corresponding to the current target sub-data is output according to the preset mapping relation, and the next target sub-data after the current target sub-data continues to be acquired. The accuracy of the output labeling data is ensured.
In an alternative embodiment, the method further comprises:
when the judgment result is that second identical historical target sub-data that is the same as the current target sub-data exists in the historical target sub-data, temporarily withholding the output of the labeling data corresponding to the current target sub-data, and continuing to acquire the next target sub-data after the current target sub-data.
According to the video processing method provided by the embodiment of the application, when the judgment result is that second identical historical target sub-data that is the same as the current target sub-data exists in the historical target sub-data, the output of the labeling data corresponding to the current target sub-data is temporarily withheld, and the next target sub-data after the current target sub-data continues to be acquired. The correct handling of the labeling data corresponding to the current target sub-data is thus guaranteed.
In an alternative embodiment, the method further comprises:
When unprocessed historical target sub-data exists, combining the current target sub-data with the unprocessed historical target sub-data to generate a current target sub-data combination;
searching historical target sub-data combinations with the same number as the data included in the current target sub-data combination in each historical target sub-data before the current target sub-data combination;
Comparing the current target sub-data combination with each historical target sub-data combination, and judging whether the same historical target sub-data combination which is the same as the current target sub-data combination exists in each historical target sub-data combination;
when an identical historical target sub-data combination that is the same as the current target sub-data combination exists among the historical target sub-data combinations, withholding the output of the labeling data corresponding to the current target sub-data combination, and continuing to acquire the next target sub-data after the current target sub-data.
According to the video processing method provided by the embodiment of the application, when unprocessed historical target sub-data exists, the current target sub-data and the unprocessed historical target sub-data are combined to generate a current target sub-data combination, which ensures the accuracy of the generated combination. Among the historical target sub-data before the current target sub-data combination, historical target sub-data combinations containing the same number of data as the current combination are searched for, ensuring the accuracy of the determined historical combinations. The current target sub-data combination is compared with each historical target sub-data combination to judge whether an identical historical combination exists, ensuring the accuracy of the judgment result. When an identical historical target sub-data combination exists, the output of the labeling data corresponding to the current combination is withheld, and the next target sub-data after the current target sub-data continues to be acquired. This reduces the amount of labeling data output and realizes data compression of the target pixel data corresponding to the N frames of original images in the original video data, thereby reducing the transmission data volume, saving DDR memory space, lowering the bus bandwidth consumed, and improving the overall performance of the chip.
In an alternative embodiment, the method further comprises:
when no identical historical target sub-data combination that is the same as the current target sub-data combination exists among the historical target sub-data combinations, outputting the labeling data corresponding to the current target sub-data combination according to the preset mapping relation, and continuing to acquire the next target sub-data after the current target sub-data.
According to the video processing method provided by the embodiment of the application, when no identical historical target sub-data combination that is the same as the current target sub-data combination exists among the historical target sub-data combinations, the labeling data corresponding to the current target sub-data combination is output according to the preset mapping relation, and the next target sub-data after the current target sub-data continues to be acquired. The accuracy of the output labeling data corresponding to the current target sub-data combination is guaranteed, so that the current target sub-data combination can be restored from its labeling data.
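For illustration, the following Python sketch condenses the labeling flow of the preceding embodiments into a single loop. It treats sub-data as integer values, uses a simple counter as the preset mapping relation, and omits the run-combination matching of consecutive repeats for brevity; all names are assumptions, not the patent's concrete encoding:

    def build_lookup_table(sub_data, threshold):
        """Build a YUV-comparison-table-like list of labeling entries.

        sub_data: iterable of integer-valued target sub-data.
        Returns a list of (label, value) pairs. Repeats of earlier
        values (within `threshold`) emit nothing, which is where the
        compression comes from.
        """
        table = []          # the 'YUV comparison table'
        history = []        # all target sub-data seen so far
        next_label = 0      # preset mapping: here simply a counter
        for value in sub_data:
            repeat = any(abs(value - h) < threshold for h in history)
            if not repeat:
                table.append((next_label, value))  # output labeling data
                next_label += 1
            # a repeat emits nothing: it is recovered from the earlier
            # identical entry when the frame is reconstructed
            history.append(value)
        return table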
In an optional implementation manner, after performing data compression on target pixel data corresponding to N frames of original images in the original video data to generate a YUV comparison table corresponding to the N frames of original images, the method further includes:
Calculating the data difference value between every two adjacent original images in the N frames of original images;
adding the data difference values, and calculating to obtain a total data difference value;
comparing the total data difference with the maximum data difference and the minimum data difference in a preset data difference range;
When the total data difference is greater than the maximum data difference, decreasing the value of N;
when the total data difference is less than the minimum data difference, the value of N is increased.
According to the video processing method provided by the embodiment of the application, the data difference value between every two adjacent frames of original images in the N frames of original images is calculated, ensuring the accuracy of the calculated difference between any two adjacent frames. The data difference values are added to obtain a total data difference value, ensuring the accuracy of the calculated total. The total data difference is compared with the maximum and minimum data differences of a preset data difference range. When the total data difference is greater than the maximum data difference, the data volume of the current YUV comparison table is judged to be too large; to guarantee data transmission safety and the recovery quality at the remote video end while improving compression efficiency and reducing the transmission data volume, the value of N is decreased. When the total data difference is smaller than the minimum data difference, the data volume of the current YUV comparison table is judged to be small; to reduce the data transmission frequency and improve transmission efficiency, the value of N can be increased. In both cases, more effective and accurate compression is achieved while the recovery quality at the remote video end is guaranteed.
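For illustration, a Python sketch of this adaptive choice of N. Here `diff_fn` computes the data difference between two adjacent frames (a concrete sketch of it follows the next embodiment), and the bounds are illustrative:

    def adjust_n(frames, n, diff_fn, min_total, max_total):
        """Tune the group size N from the total inter-frame difference.

        frames: the N original frames of the current group.
        A large total change shrinks the group (safer transmission,
        better recovery quality); a small total change grows it
        (fewer comparison-table transmissions).
        """
        total = sum(diff_fn(a, b) for a, b in zip(frames, frames[1:]))
        if total > max_total:
            return max(2, n - 1)   # frames differ a lot: decrease N
        if total < min_total:
            return n + 1           # frames barely change: increase N
        return n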
In an alternative embodiment, calculating the data difference between each two adjacent frames of original images in the N frames of original images includes:
Sequentially calculating a first sub-difference value between two target sub-data at corresponding positions in two adjacent frames of original images according to any two adjacent frames of original images in each frame of original images;
And adding the first sub-difference values to obtain a data difference value between two adjacent frames of original images.
According to the video processing method provided by the embodiment of the application, for any two adjacent frames of original images among the N frames, the first sub-difference values between the two target sub-data at corresponding positions in the two adjacent frames are calculated in sequence, ensuring the accuracy of each calculated first sub-difference value. The first sub-difference values are added to obtain the data difference value between the two adjacent frames of original images, ensuring the accuracy of the data difference value calculated for each pair of frames.
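For illustration, a Python sketch of this per-pair difference, usable as the `diff_fn` assumed above; target sub-data are treated as integer values:

    def frame_difference(frame_a, frame_b):
        """Sum of first sub-differences between two adjacent frames.

        frame_a, frame_b: equally long sequences of integer-valued
        target sub-data at corresponding positions.
        """
        return sum(abs(a - b) for a, b in zip(frame_a, frame_b))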
In an optional implementation manner, after performing data compression on target pixel data corresponding to N frames of original images in the original video data to generate a YUV comparison table corresponding to the N frames of original images, the method further includes:
Generating a target image corresponding to each frame of original image according to the labeling data included in the YUV comparison table;
respectively calculating an image difference value between each frame of target image and the original image;
adding the image difference values to obtain a total image difference value;
Comparing the total image difference value with a maximum image difference value and a minimum image difference value in a preset image difference value range;
When the total image difference value is larger than the maximum image difference value, reducing the value of the first preset difference value;
and when the total image difference value is smaller than the minimum image difference value, increasing the value of the first preset difference value.
According to the video processing method provided by the embodiment of the application, the target image corresponding to each frame of original image is generated according to the labeling data included in the YUV comparison table, so that the accuracy of the generated target image corresponding to each frame of original image is ensured. And respectively calculating the image difference value between each frame of target image and the original image, so that the accuracy of the calculated image difference value between each frame of target image and the original image is ensured. And adding the image difference values to obtain a total image difference value, so that the accuracy of the obtained total image difference value is ensured. Comparing the total image difference value with a maximum image difference value and a minimum image difference value in a preset image difference value range; when the total image difference value is greater than the maximum image difference value, the value of the first preset difference value is reduced, so that the image difference value between the target image and the original image can be reduced. When the total image difference value is smaller than the minimum image difference value, the value of the first preset difference value is increased, so that the image difference value between the target image and the original image can be increased. The method realizes that the compression efficiency of data compression of the target pixel data corresponding to the N frames of original images in the original video data is improved as much as possible under the condition of ensuring the recovery quality of the target images, reduces the transmission data volume and realizes more effective and accurate compression.
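For illustration, the same feedback pattern applied to the first preset difference value. Here `image_diff_fn` stands for any measure of how far a reconstructed target image is from its original; all names and bounds are illustrative assumptions:

    def adjust_threshold(originals, targets, threshold, image_diff_fn,
                         min_total, max_total, step=1):
        """Tune the dedup threshold from the reconstruction error.

        Too much total error lowers the threshold (fewer approximate
        'same data' matches, better fidelity); very little error
        raises it (more aggressive compression).
        """
        total = sum(image_diff_fn(o, t)
                    for o, t in zip(originals, targets))
        if total > max_total:
            return max(1, threshold - step)
        if total < min_total:
            return threshold + step
        return threshold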
In a second aspect, the present invention provides a video processing apparatus, the apparatus comprising:
the acquisition module is used for acquiring original YUV data corresponding to an original image of a current frame in the original video data; the original YUV data comprises original sub YUV data corresponding to each pixel point in the original image of the current frame;
The identification module is used for acquiring a target YUV data type corresponding to the original image of the current frame; the target YUV data types include YUV444 type and YUV420 type;
the recombination module is used for carrying out data recombination processing on each original sub-YUV data according to the target YUV data type to generate target pixel data; the target pixel data comprises a plurality of target sub-data; the data size of each target sub-data is matched with the bus bandwidth; and the number of groups corresponding to the target sub-data is smaller than the number of groups corresponding to the original sub-YUV data;
The compression module is used for carrying out data compression on target pixel data corresponding to N frames of original images in the original video data to generate a YUV comparison table corresponding to the N frames of original images; and the YUV comparison table is transmitted to the target device.
The video processing device provided by the embodiment of the application acquires the original YUV data corresponding to the original image of the current frame in the original video data and obtains the target YUV data type corresponding to the original image of the current frame, which ensures that the target YUV data type determined for the original YUV data is accurate. Data recombination is then performed on each original sub-YUV data according to the target YUV data type to generate target pixel data, ensuring the accuracy of the generated target pixel data. The target pixel data comprises a plurality of target sub-data; the data size of each target sub-data matches the bus bandwidth, which greatly improves bus bandwidth utilization; and the number of groups of target sub-data is smaller than the number of groups of original sub-YUV data, which reduces the time taken to transmit video data in YUV mode and improves transmission efficiency. Then, data compression is performed on the target pixel data corresponding to N frames of original images in the original video data to generate a YUV comparison table corresponding to the N frames of original images, and the YUV comparison table is transmitted to the target device. In this way, the amount of video data transmitted in YUV mode is reduced, the video frame loss rate is lowered, and the overall performance of the chip is optimized.
In a third aspect, the present invention provides an electronic device, comprising: the video processing system comprises a memory and a processor, wherein the memory and the processor are in communication connection, the memory stores computer instructions, and the processor executes the computer instructions, so that the video processing method of the first aspect or any implementation mode corresponding to the first aspect is executed.
In a fourth aspect, the present invention provides a computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the video processing method of the first aspect or any of its corresponding embodiments.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the present invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
Fig. 1 is a video function implementation in a conventional scheme according to an embodiment of the present invention;
FIG. 2 is a flow chart of another video processing method according to an embodiment of the invention;
FIG. 3 is a flow chart of yet another video processing method according to an embodiment of the present invention;
Fig. 4 is a schematic flow chart of data reorganization of original sub-YUV data in yet another video processing method according to an embodiment of the present invention;
FIG. 5 is a flow chart of another video processing method according to an embodiment of the invention;
fig. 6 is a schematic diagram of sub-YUV data to be processed generated in another video processing method according to an embodiment of the present invention;
FIG. 7 is a flow chart of another video processing method according to an embodiment of the invention;
Fig. 8 is a block diagram of a video processing apparatus according to an embodiment of the present invention;
fig. 9 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
With the development of technology, the requirements on video streaming keep increasing. The video processing flow in a traditional baseboard management control chip handles two video formats: video in YUV format, and video in a compressed format such as JPEG. YUV color coding uses luminance and chrominance to specify the color of a pixel, where Y represents luminance (Luma) and U and V represent chrominance (Chroma).
Specifically, the original video data at the host end is transmitted over PCIe to the VGA module in the baseboard management control chip. The VGA module generates original video data in RGB format, which is then converted into YUV format through color space conversion. At this point the YUV data takes two paths: one path is written directly into DDR; the other passes through video compression IP (H.264 format, JPEG format, and the like) to obtain video data in a compressed format, and the compressed data is written into external DDR. According to the display requirement of the remote end, the data is then sent to the remote end for display through the EMAC network function of the baseboard management control chip, thereby realizing the function of remote management control. Fig. 1 shows the implementation of the video function in a conventional baseboard management control system.
The processing procedure of the video compression mode in the traditional scheme is as follows:
S1: Video data at the HOST end is transmitted over PCIe to the VGA module of the baseboard management control chip. After being processed inside the VGA module, data in the original RGB format is generated, and the RGB data is then processed by the color space conversion module, namely the RGB2YUV module.
S2: the color space conversion module converts RGB format into YUV format, and the conversion process is completed by matrix conversion formula.
Y = 0.257×R + 0.504×G + 0.098×B + 16
U = -0.148×R - 0.291×G + 0.439×B + 128
V = 0.439×R - 0.368×G - 0.071×B + 128
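These are the standard BT.601 limited-range conversion coefficients. For illustration, a direct Python transcription of the formulas above, with clamping to the valid 8-bit range added as an assumption:

    def rgb_to_yuv(r, g, b):
        """BT.601 limited-range RGB -> YUV, per the formulas above."""
        y = 0.257 * r + 0.504 * g + 0.098 * b + 16
        u = -0.148 * r - 0.291 * g + 0.439 * b + 128
        v = 0.439 * r - 0.368 * g - 0.071 * b + 128
        clamp = lambda x: max(0, min(255, int(round(x))))
        return clamp(y), clamp(u), clamp(v)

    # White input maps to maximum luma and neutral chroma:
    assert rgb_to_yuv(255, 255, 255) == (235, 128, 128)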
S3: according to the mode configuration of the user, the video data in YUV format has two paths.
S3.1: when a user configures to output original video data in a YUV format, the YUV data obtained through the color space conversion module passes through a YUV format video output control module in the color space conversion module and is written into the DDR.
S3.2: when the user configures to output the compressed format, the YUV data obtained by the color space conversion module is compressed by the video compression sub-module in fig. 1 (the compressed format is not fixed, such as JPEG, h.264, etc.), and then the compressed video data is written into the DDR by the video output control module in fig. 1 (note that in the conventional scheme, the module outputs not only the compressed data in JPEG but also the original video data in YUV format).
However, the YUV mode in the conventional scheme has the following drawbacks:
1: YUV video data is uncompressed video data and the data volume is very large; for example, at a 1920×1200 resolution and 60 frames per second, the bandwidth is 1920×1200×3×60 ≈ 396 MB/s, which is a great challenge for DDR as well as for bus bandwidth.
2: The frame loss rate of video in YUV mode is very high. Other programs are also running in the baseboard management control chip, so the DDR bus bandwidth is not occupied by the video function all the time, and the YUV data cannot hold the DDR bus continuously. When the wready and awready signals that the DDR bus presents to the AXI bus of the video function are not pulled high for a long time, the buffer space inside the video function fills up, and the remaining data of the current frame not yet written into DDR is discarded, causing frame loss. Because the YUV data volume in the conventional scheme is very large, transmitting one frame of YUV data takes a long time, and when wready and awready are not pulled high for a long time, the frame loss rate becomes very high.
3: In the conventional scheme, a large amount of repeated transmission exists in the YUV data of adjacent frames, because the video data of adjacent frames changes very little or not at all (especially when the local server is in a monitoring state and neither the remote end nor the local end is operating, the video picture of the HOST end changes little or not at all). Transmitting a large amount of repeated data not only results in low transmission efficiency but also reduces the overall performance of the chip.
In accordance with an embodiment of the present invention, there is provided an embodiment of a video processing method, it being noted that the steps shown in the flowchart of the figures may be performed in a computer system such as a set of computer executable instructions, and, although a logical order is shown in the flowchart, in some cases, the steps shown or described may be performed in an order other than that shown or described herein.
It should be noted that, the execution body of the video processing method provided by the embodiment of the present application may be a video processing device, where the video processing device may be implemented as part or all of an electronic device by software, hardware, or a combination of software and hardware, where the electronic device may be a server or a terminal, where the server in the embodiment of the present application may be a server or a server cluster formed by multiple servers, and the terminal in the embodiment of the present application may be a smart phone, a personal computer, a tablet computer, a wearable device, and other intelligent hardware devices such as an intelligent robot. In the following method embodiments, the execution subject is an electronic device.
In this embodiment, a video processing method is provided, which may be used in the above electronic device, and fig. 2 is a flowchart of a video processing method according to an embodiment of the present invention, as shown in fig. 2, where the flowchart includes the following steps:
Step S101, obtaining original YUV data corresponding to an original image of a current frame in the original video data.
The original YUV data comprises original sub YUV data corresponding to each pixel point in the original image of the current frame.
Specifically, the electronic device may receive the original video data input by the user, or may receive the original video data sent by other devices, and the electronic device may find the original video data in the storage space.
After the electronic device acquires the original video data, the electronic device may acquire original YUV data corresponding to original images of each frame in the original video data. For the original YUV data corresponding to the original image being processed currently, the electronic device may acquire the original YUV data corresponding to the original image of the current frame in the original video data.
It should be noted that, the data type of the original YUV data is YUV444 type. Wherein, in YUV data, Y is called a gray component, UV is a chrominance component, U is called a blue projection, and V is called a red projection.
Step S102, obtaining a target YUV data type corresponding to an original image of a current frame.
Wherein the target YUV data types include YUV444 type and YUV420 type.
Specifically, the electronic device may receive a target YUV data type corresponding to the original image of the current frame input by the user, or may receive a target YUV data type corresponding to the original image of the current frame sent by other devices, and the electronic device may determine the target YUV data type corresponding to the original image of the current frame according to configuration information of the original image of the current frame.
The mode of the electronic device for acquiring the target YUV data type corresponding to the original image of the current frame is not particularly limited.
Here, the YUV444 type means that every Y sample corresponds to 1 U sample and 1 V sample, while the YUV420 type means that every 4 Y samples correspond to 1 U sample and 1 V sample.
Step S103, according to the target YUV data type, carrying out data recombination processing on each original sub YUV data to generate target pixel data.
Wherein the target pixel data comprises a plurality of target sub-data; the data size of each target sub-data is matched with the bus bandwidth; and the number of groups corresponding to the target sub-data is smaller than the number of groups corresponding to the original sub-YUV data.
Specifically, when the target YUV data type corresponding to the original image of the current frame is the YUV444 type, the electronic device may unpack each original sub-YUV data into Y data, U data, and V data according to its data structure, and then reorganize 8 consecutive components at a time to generate the target pixel data. For example, the first target sub-data generated is YUVYUVYU, the second is VYUVYUVY, and the third is UVYUVYUV; this is repeated until the target pixel data corresponding to the original image of each frame is generated.
When the target YUV data type corresponding to the original image of the current frame is the YUV420 type, the electronic device may process each original sub-YUV data according to the format of YUV420 data to generate sub-YUV data to be processed of the YUV420 type. Then, according to the data structure of each sub-YUV data to be processed, it may be unpacked into Y data, U data, and V data, and 8 consecutive components are recombined at a time to generate the target pixel data, as sketched below.
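For illustration, a Python sketch of this 8-components-per-word packing for the YUV444 case, assuming 8-bit components and a 64-bit bus word; every 8 pixels (24 components) fill exactly 3 words with the patterns named above:

    def repack_yuv444_64bit(pixels):
        """Pack YUV444 pixels into 8-component (64-bit) bus words.

        pixels: list of (Y, U, V) tuples, length a multiple of 8.
        The flattened component stream is cut into words of 8, giving
        the repeating patterns YUVYUVYU, VYUVYUVY, UVYUVYUV.
        """
        stream = [c for px in pixels for c in px]  # Y0 U0 V0 Y1 U1 V1 ...
        return [tuple(stream[i:i + 8])
                for i in range(0, len(stream), 8)]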
This step will be described in detail below.
Step S104, carrying out data compression on target pixel data corresponding to N frames of original images in the original video data to generate a YUV comparison table corresponding to the N frames of original images; and transmitting the YUV comparison table to the target device.
Specifically, the electronic device may perform data compression on target pixel data corresponding to N frames of original images in the original video data by using a preset compression method, so as to generate a YUV comparison table corresponding to the N frames of original images; and transmitting the YUV comparison table to the target device.
The preset compression method may include, but is not limited to, at least one of a huffman coding method, a field coding method, a predictive coding method, a transform coding method, and the like, and the embodiment of the present application does not describe the preset compression method in detail.
According to the video processing method provided by the embodiment of the application, the original YUV data corresponding to the original image of the current frame in the original video data is obtained, the target YUV data type corresponding to the original image of the current frame is obtained, and the accuracy of the determined target YUV data type corresponding to the original YUV data is ensured. And according to the target YUV data type, carrying out data recombination processing on each original sub-YUV data to generate target pixel data, and ensuring the accuracy of the generated target pixel data. Wherein the target pixel data comprises a plurality of target sub-data; the data size of each target sub-data is matched with the bus bandwidth, so that the bus bandwidth utilization rate is greatly improved; and the number of groups corresponding to the target sub-data is smaller than the number of groups corresponding to the original sub-YUV data. The time for transmitting the video data in the YUV mode is reduced, and the efficiency for transmitting the video data in the YUV mode is improved. Then, carrying out data compression on target pixel data corresponding to N frames of original images in the original video data to generate a YUV comparison table corresponding to the N frames of original images; and transmitting the YUV comparison table to the target device. Therefore, the video data volume transmitted in YUV mode is reduced, the video frame loss rate is reduced, and the overall performance of the chip is optimized.
In this embodiment, a video processing method is provided, which may be used in the above electronic device, and fig. 3 is a flowchart of a video processing method according to an embodiment of the present invention, as shown in fig. 3, where the flowchart includes the following steps:
Step S201, obtaining original YUV data corresponding to an original image of a current frame in the original video data.
The original YUV data comprises original sub YUV data corresponding to each pixel point in the original image of the current frame.
For details, refer to step S101 in the embodiment shown in fig. 2; the description is not repeated here.
Step S202, obtaining a target YUV data type corresponding to an original image of a current frame.
Wherein the target YUV data types include YUV444 type and YUV420 type.
For details, refer to step S102 in the embodiment shown in fig. 2; the description is not repeated here.
Step S203, according to the target YUV data type, carrying out data recombination processing on each original sub YUV data to generate target pixel data.
Wherein the target pixel data comprises a plurality of target sub-data; the data size of each target sub-data is matched with the bus bandwidth; and the number of groups corresponding to the target sub-data is smaller than the number of groups corresponding to the original sub-YUV data.
In an alternative embodiment of the present application, when the target YUV data type is YUV444 type, the step S203 may include the following steps:
step S2031, sequentially performing data reorganization on the original sub-YUV data corresponding to each pixel according to the sequence corresponding to each pixel in the original image of the current frame, so as to generate each target sub-data.
Specifically, the step S2031 may include the following steps:
Step a1, sequentially acquiring original sub YUV data corresponding to each pixel point according to the sequence corresponding to each pixel point in the original image of the current frame.
Specifically, the electronic device may sequentially obtain the original sub-YUV data corresponding to each pixel point according to the sequence corresponding to each pixel point in the original image of the current frame.
Step a2, starting from the 1 st pixel, taking the original sub-YUV data corresponding to every adjacent four pixels as a first cyclic data set.
Specifically, the electronic device may divide the original sub-YUV data corresponding to each adjacent four pixels into a first cyclic data set starting from the 1 st pixel.
Step a3, splitting and recombining adjacent four original sub-YUV data according to a first cyclic data group corresponding to each original sub-YUV data, and generating three target sub-data.
Specifically, the data types corresponding to the three target sub-data are YUVY type, UVYU type, VYUV type, respectively, and the step a3 may include the following steps:
Step a31, for the first cyclic data set corresponding to each original sub-YUV data, combining the original sub-YUV data corresponding to the 1 st pixel point in the first cyclic data set with the Y data in the original sub-YUV data corresponding to the 2 nd pixel point, and generating YUVY types of target sub-data.
Specifically, for a first cyclic data set corresponding to each original sub-YUV data, the electronic device combines the original sub-YUV data corresponding to the 1 st pixel point and Y data in the original sub-YUV data corresponding to the 2 nd pixel point in the first cyclic data set to generate YUVY types of target sub-data.
Step a32, combining the UV data in the original sub-YUV data corresponding to the 2 nd pixel point with YU data in the original sub-YUV data corresponding to the 3 rd pixel point to generate UVYU types of target sub-data.
Specifically, the electronic device combines UV data in the original sub-YUV data corresponding to the 2 nd pixel point with YU data in the original sub-YUV data corresponding to the 3 rd pixel point to generate UVYU types of target sub-data.
Step a33, combining the V data in the original sub-YUV data corresponding to the 3 rd pixel point with the original sub-YUV data corresponding to the 4 th pixel point to generate VYUV types of target sub-data.
Specifically, the electronic device combines V data in the original sub-YUV data corresponding to the 3 rd pixel point with the original sub-YUV data corresponding to the 4 th pixel point to generate VYUV types of target sub-data.
Step a4, loop in this way until all the first cyclic data sets are processed, generating each target sub-data corresponding to the original image of the current frame.
Specifically, the loop continues until the electronic device has processed all the first cyclic data sets in the original image of the current frame, at which point all the target sub-data corresponding to the original image of the current frame have been generated.
Step S2032, generating target pixel data corresponding to the original image of the current frame based on each target sub-data.
Specifically, the electronic device combines the target sub-data according to the generated sequence to generate target pixel data corresponding to the original image of the current frame.
Illustratively, in the left diagram shown in fig. 4, when the target YUV data type is YUV444 type, the processing method is as follows:
S1, after the system is reset, the state machine is in the IDLE state, arrow 1 on the left side of fig. 4.
S2, when the 0th/4th/8th … pixel point of a new frame arrives, the state machine jumps to YUVY_444 (arrow 2 in fig. 4) and waits for the next pixel point; when the 1st/5th/9th … pixel point arrives, the YUV of the 0th pixel point and the Y of the 1st pixel point are combined into 32-bit data {YUVY}, which is output, and the UV components of the 1st pixel point are buffered.
S3, when the 2nd/6th/10th … pixel point arrives, the state machine jumps to UVYU_444 (arrow 3 in fig. 4); the buffered UV components of the 1st pixel point and the YU of the 2nd pixel point are spliced into 32-bit data {UVYU}, which is output, and the V component of the 2nd pixel point is buffered.
S4, when the 3rd/7th/11th … pixel point arrives, the state machine jumps to VYUV_444 (arrow 4 in fig. 4), and the buffered V component of the 2nd pixel point and the YUV components of the 3rd pixel point form 32-bit data {VYUV}.
S5, if the input of the current frame is not finished at this point, the state machine jumps back to YUVY_444 (arrow 5 in fig. 4), and the cycle continues according to the rule of S2-S5.
S6, if the input of the current frame is finished at this point, the state jumps from VYUV_444 to the IDLE state, and a new frame starts.
Through the above processing flow, the conversion from a 24-bit input per pixel point to a combined 32-bit output is completed.
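The following is a minimal Python sketch of this 24-bit-in/32-bit-out repacking, assuming one byte per Y/U/V component; the function name and the byte ordering within each word are illustrative assumptions, not the application's register-transfer logic:

```python
# Sketch of the YUV444 repacking above: every 4 input pixels
# (4 x 3 bytes) become three 32-bit words {YUVY}{UVYU}{VYUV}.
def pack_yuv444(pixels):
    """pixels: list of (Y, U, V) byte tuples, length a multiple of 4."""
    assert len(pixels) % 4 == 0
    words = []
    for i in range(0, len(pixels), 4):
        p0, p1, p2, p3 = pixels[i:i + 4]
        # {Y0 U0 V0 Y1}: pixel 0 plus the Y of pixel 1 (UV of pixel 1 buffered)
        words.append(p0[0] << 24 | p0[1] << 16 | p0[2] << 8 | p1[0])
        # {U1 V1 Y2 U2}: the buffered UV of pixel 1 plus the YU of pixel 2
        words.append(p1[1] << 24 | p1[2] << 16 | p2[0] << 8 | p2[1])
        # {V2 Y3 U3 V3}: the buffered V of pixel 2 plus all of pixel 3
        words.append(p2[2] << 24 | p3[0] << 16 | p3[1] << 8 | p3[2])
    return words

# 4 pixels in, 3 fully used words out; no padding bytes are transmitted.
print([hex(w) for w in pack_yuv444([(1, 2, 3), (4, 5, 6), (7, 8, 9), (10, 11, 12)])])
# ['0x1020304', '0x5060708', '0x90a0b0c']
```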
Step S204, data compression is carried out on target pixel data corresponding to N frames of original images in the original video data, and a YUV comparison table corresponding to the N frames of original images is generated; and transmitting the YUV comparison table to the target device.
Please refer to step S104 in the embodiment shown in fig. 2, which is not described herein.
According to the video processing method provided by the embodiment of the application, the original sub-YUV data corresponding to each pixel point is sequentially acquired according to the sequence corresponding to each pixel point in the original image of the current frame, and the original sub-YUV data corresponding to every adjacent four pixel points is used as a first cyclic data group from the 1 st pixel point, so that the accuracy of each determined first cyclic data group is ensured, and the sequence accuracy among the first cyclic data groups is ensured.
For each first cyclic data set corresponding to the original sub-YUV data, the original sub-YUV data corresponding to the 1st pixel point in the first cyclic data set and the Y data in the original sub-YUV data corresponding to the 2nd pixel point are combined to generate YUVY-type target sub-data; the UV data in the original sub-YUV data corresponding to the 2nd pixel point and the YU data in the original sub-YUV data corresponding to the 3rd pixel point are combined to generate UVYU-type target sub-data; and the V data in the original sub-YUV data corresponding to the 3rd pixel point and the original sub-YUV data corresponding to the 4th pixel point are combined to generate VYUV-type target sub-data. The accuracy of each type of generated target sub-data is thus guaranteed. This solves the problem of the original method that, when the original sub-YUV data is transmitted over the bus, 25% of the data is invalid, greatly wasting DDR memory space and bus bandwidth; the bus bandwidth utilization is thereby improved. In addition, the number of target sub-data is reduced relative to the original sub-YUV data, so the time for transmitting the video data in YUV mode is reduced and the transmission efficiency is improved.
And the method is circulated until all the first circulation data groups are completed, and each target sub-data corresponding to the original image of the current frame is generated, so that the accuracy of the generated target sub-data is ensured. And generating target pixel data corresponding to the original image of the current frame based on each target sub-data, so that the accuracy of the generated target pixel data is ensured, and further, the accuracy of a target video generated according to the target sub-data in the later period can be ensured.
In this embodiment, a video processing method is provided, which may be used in the above electronic device, and fig. 5 is a flowchart of the video processing method according to an embodiment of the present invention, as shown in fig. 5, where the flowchart includes the following steps:
step S301, obtaining original YUV data corresponding to an original image of a current frame in the original video data.
The original YUV data comprises original sub YUV data corresponding to each pixel point in the original image of the current frame.
For details, refer to step S201 in the embodiment shown in fig. 3; the description is not repeated here.
Step S302, obtaining a target YUV data type corresponding to an original image of a current frame.
Wherein the target YUV data types include YUV444 type and YUV420 type.
For details, refer to step S202 in the embodiment shown in fig. 3; the description is not repeated here.
Step S303, according to the target YUV data type, carrying out data recombination processing on each original sub YUV data to generate target pixel data.
Wherein the target pixel data comprises a plurality of target sub-data; the data size of each target sub-data is matched with the bus bandwidth; and the number of groups corresponding to the target sub-data is smaller than the number of groups corresponding to the original sub-YUV data.
In an alternative embodiment of the present application, when the target YUV data type is YUV420 type, the step S303 may include the following steps:
Step S3031, performing retain-or-discard (decimation) processing on the original sub-YUV data corresponding to each pixel point in the original image of the current frame to generate the sub-YUV data to be processed corresponding to each pixel point.
Specifically, the step S3031 may include the following steps:
Step b1, reserving original sub-YUV data corresponding to each pixel point of an even row and an even column in the original image of the current frame, and generating sub-YUV data to be processed corresponding to each pixel point of the even row and the even column.
Specifically, the electronic device reserves original sub-YUV data corresponding to each pixel point of an even row and an even column in an original image of a current frame, and generates sub-YUV data to be processed corresponding to each pixel point of the even row and the even column.
And b2, for other pixel points except for each pixel point of the even number row and the even number column, only preserving Y data in original sub-YUV data corresponding to each other pixel point, and generating sub-YUV data to be processed corresponding to each other pixel point.
Specifically, the electronic device only retains Y data in original sub-YUV data corresponding to each other pixel point for other pixel points except for each pixel point in even rows and even columns, and generates sub-YUV data to be processed corresponding to each other pixel point.
For example, as shown in fig. 6, for the pixel points of row 0/2/4/6 … and column 0/2/4/6 …, the electronic device retains its original sub-YUV data, so that the sub-YUV data to be processed of the pixel points of row 0/2/4/6 … and column 0/2/4/6 … is YUV data, and the sub-YUV data to be processed corresponding to other pixel points is Y data. Therefore, the sub YUV data to be processed corresponding to the even-numbered rows of pixel points are YUV, Y, YUV and Y … … respectively; the sub YUV data to be processed corresponding to the pixel points of the odd lines are Y, Y, Y and Y … … respectively.
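A minimal sketch of this retain-or-discard step, assuming the frame is given as a row-major grid of (Y, U, V) byte tuples; all names are illustrative:

```python
# Pixels at even rows AND even columns keep their full YUV;
# every other pixel keeps only its Y component.
def decimate_to_420(frame):
    out = []
    for r, row in enumerate(frame):
        out_row = []
        for c, (y, u, v) in enumerate(row):
            if r % 2 == 0 and c % 2 == 0:
                out_row.append((y, u, v))  # even row, even column: full YUV
            else:
                out_row.append((y,))       # otherwise: Y only
        out.append(out_row)
    return out
```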
Step S3032, according to the sequence corresponding to each pixel point in the original image of the current frame, the sub-YUV data to be processed corresponding to each pixel point is sequentially recombined to generate each target sub-data.
Specifically, the step S3032 may include the following steps:
Step c1, for the pixel points of each even row in the original image of the current frame, starting from the 1st pixel point, taking the sub-YUV data to be processed corresponding to every two adjacent pixel points as a second cyclic data set.
Specifically, for each pixel point of an even line in the original image of the current frame, starting from the 1 st pixel point, the electronic device uses sub-YUV data to be processed corresponding to every two adjacent pixel points as a second cyclic data set.
Step c2, for each second cyclic data set, recombining two sub-YUV data to be processed in the second cyclic data set to generate a target sub-data.
Specifically, the type of the target sub-data is YUVY types, and the step c2 may include:
and combining the sub-YUV data to be processed corresponding to the 1 st pixel point in the second cyclic data group with Y data in the sub-YUV data to be processed corresponding to the 2 nd pixel point for each second cyclic data group to generate YUVY types of target sub-data.
Specifically, for each second cyclic data set, the electronic device combines the sub-YUV data to be processed corresponding to the 1 st pixel in the second cyclic data set with Y data in the sub-YUV data to be processed corresponding to the 2 nd pixel, so as to generate YUVY types of target sub-data.
Step c3, for the pixel points of each odd row in the original image of the current frame, starting from the 1st pixel point, taking the sub-YUV data to be processed corresponding to every four adjacent pixel points as a third cyclic data set.
Specifically, for each pixel point of the odd-numbered lines in the original image of the current frame, the electronic device starts from the 1 st pixel point, and the sub-YUV data to be processed corresponding to every adjacent four pixel points is taken as a third cyclic data set.
And c4, recombining four sub-YUV data to be processed in the third cyclic data group according to each third cyclic data group to generate target sub-data.
Specifically, the type of the target sub data is YYYY type, and step c4 may include:
And recombining Y data in the four sub-YUV data to be processed in the third cyclic data group aiming at each third cyclic data group to generate target sub-data of YYYY type.
Specifically, for each third cyclic data set, the electronic device reorganizes the Y data in the four sub-YUV data to be processed in the third cyclic data set to generate target sub-data of the YYYY type.
And c5, circulating until all the second circulating data set and the third circulating data set are completed, and generating each target sub-data.
Specifically, the electronic device loops in this way until all the second loop data set and the third loop data set are completed, and generates each target sub-data.
Step S3033, based on each target sub-data, target pixel data corresponding to the original image of the current frame is generated.
Specifically, the electronic device arranges the target sub-data according to the order of the corresponding pixel points in the original image of the current frame to generate the target pixel data corresponding to the original image of the current frame.
Illustratively, as shown in the right diagram of fig. 4, when the target YUV data type is the YUV420 type, the processing method is as follows:
S7, after the system is reset, the state machine is in the IDLE state, arrow 7 on the right side of fig. 4.
S8, when the 0th pixel point of an even row (row 0/2/4 …) of a new frame arrives, the state machine jumps to YUVY_420 (arrow 8 in fig. 4) and waits for the 1st pixel point; when the 1st pixel point arrives, the YUV of the 0th pixel point and the Y of the 1st pixel point are combined into 32-bit data {YUVY}, which is output, and the UV components of the 1st pixel point are discarded.
S9, when the 2nd pixel point arrives, the state machine remains in YUVY_420 (arrow 9 in fig. 4) and waits for the 3rd pixel point; the YUV of the 2nd pixel point and the Y of the 3rd pixel point form 32-bit data {YUVY}, which is output, and the UV components of the 3rd pixel point are discarded.
Steps S8 and S9 both take place in the YUVY_420 state.
S10, this continues until the jump condition of arrow 10 occurs, that is, a new (odd) row begins; for example, after step S8 completes the data processing of row 0, row 1/3/5 … is processed.
S11, in the odd-row processing, in the YYYY_420 state, every 4 pixel points are combined into one piece of 32-bit data.
S12, after the odd-row processing is completed, the jump condition of arrow 12 occurs, that is, a new (even) row begins.
Steps S8-S11 are then repeated.
S13, if the input of the frame data is completed after the odd rows are processed, the state machine jumps to the IDLE state, arrow 13 in fig. 4.
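Continuing the decimation sketch above, the row packing produced by this state machine can be pictured as follows; it assumes even rows have even length and odd rows a length divisible by four, and it illustrates the output format rather than the hardware implementation:

```python
# Even rows: each (YUV, Y) pixel pair packs into one {YUVY} word.
# Odd rows: every four Y samples pack into one {YYYY} word.
def pack_420_row(row, even_row):
    words = []
    if even_row:
        for i in range(0, len(row), 2):
            y0, u0, v0 = row[i]      # even column: full YUV retained
            y1 = row[i + 1][0]       # odd column: Y only (UV was discarded)
            words.append(y0 << 24 | u0 << 16 | v0 << 8 | y1)
    else:
        for i in range(0, len(row), 4):
            y = [px[0] for px in row[i:i + 4]]
            words.append(y[0] << 24 | y[1] << 16 | y[2] << 8 | y[3])
    return words

row0 = [(1, 2, 3), (4,), (5, 6, 7), (8,)]   # a decimated even row
row1 = [(9,), (10,), (11,), (12,)]          # a decimated odd row
print([hex(w) for w in pack_420_row(row0, True)])   # ['0x1020304', '0x5060708']
print([hex(w) for w in pack_420_row(row1, False)])  # ['0x90a0b0c']
```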
It should be noted that, under the traditional scheme:
For YUV444-type video data, the 24-bit YUV components of a single pixel point are zero-padded with 8 bits in the most significant byte to form {8'h0, Y, U, V} and then written to DDR over the AXI bus (whose data width, 128 bits and the like, is always an integral multiple of 32 bits). The disadvantage is that 25% of the transferred data is invalid, which greatly wastes DDR memory space and bus bandwidth and reduces the performance of the chip.
For YUV420-type video data, Y, U and V are stored separately and then read back and restored by software. The drawback is that the design of the subsequent output control module becomes complex: because Y/U/V are stored apart rather than together, situations arise where Y is retained but U/V are lost, or Y is lost while U/V remain (due to discontinuous bus signals); such scenes are difficult to control, so the packet loss rate is high and the probability of invalid writes is high.
Therefore, in the design of the embodiment of the application, data are uniformly packed into 32-bit words: under the YUV444 type, three kinds of 32-bit data {YUVY} {UVYU} {VYUV} are formed; under the YUV420 type, two kinds of 32-bit data {YUVY} {YYYY} are formed. This data combination design can greatly improve the bus utilization, reduce invalid data written into the memory, reduce the frame loss rate and improve the overall performance of the chip.
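As a worked comparison (illustrative numbers, assuming one 1920×1080 YUV444 frame):

```python
# Traditional scheme: each 24-bit pixel is zero-padded to its own
# 32-bit word, so a quarter of every word is padding.
pixels = 1920 * 1080
padded_bytes = pixels * 4   # {8'h0, Y, U, V}: one word per pixel
packed_bytes = pixels * 3   # {YUVY}{UVYU}{VYUV}: 3 words per 4 pixels
print(1 - packed_bytes / padded_bytes)  # 0.25 -> 25% of DDR/bus traffic saved
```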
Step S304, carrying out data compression on target pixel data corresponding to N frames of original images in the original video data to generate a YUV comparison table corresponding to the N frames of original images; and transmitting the YUV comparison table to the target device.
For details, refer to step S204 in the embodiment shown in fig. 3; the description is not repeated here.
When the target YUV data type is YUV420 type, reserving original sub YUV data corresponding to each pixel point of an even row and an even column in an original image of a current frame, and generating sub YUV data to be processed corresponding to each pixel point of the even row and the even column; for other pixel points except for each pixel point of even lines and even columns, only Y data in original sub-YUV data corresponding to each other pixel point is reserved, and sub-YUV data to be processed corresponding to each other pixel point is generated, so that the data quantity of the generated sub-YUV data to be processed is reduced, and the characteristics of the original sub-YUV data are guaranteed.
For each pixel point of even lines in the original image of the current frame, starting from the 1 st pixel point, sub YUV data to be processed corresponding to every two adjacent pixel points is taken as a second cyclic data group, so that the accuracy of each determined second cyclic data group is ensured, and the accuracy of the sequence among the second cyclic data groups is ensured. And combining the sub-YUV data to be processed corresponding to the 1 st pixel point in the second cyclic data group with Y data in the sub-YUV data to be processed corresponding to the 2nd pixel point for each second cyclic data group to generate YUVY types of target sub-data. The accuracy of the generated target sub data is guaranteed, the bus utilization rate is greatly improved, invalid data writing into the memory is reduced, the frame loss rate is reduced, and the overall performance of the chip is improved. For each pixel point of an odd line in the original image of the current frame, starting from the 1 st pixel point, the sub YUV data to be processed corresponding to every adjacent four pixel points is used as a third cyclic data group, so that the accuracy of each determined third cyclic data group is ensured, and the accuracy of the sequence among the third cyclic data groups is ensured. And recombining Y data in the four sub-YUV data to be processed in the third cyclic data group aiming at each third cyclic data group to generate target sub-data of YYYY type. The accuracy of the generated target sub data is guaranteed, the bus utilization rate is greatly improved, invalid data writing into the memory is reduced, the frame loss rate is reduced, and the overall performance of the chip is improved. And (3) circulating until all the second circulating data set and the third circulating data set are completed, and generating each target sub-data. The accuracy of the generated target sub-data is guaranteed.
In this embodiment, a video processing method is provided, which may be used in the above electronic device, and fig. 7 is a flowchart of a video processing method according to an embodiment of the present invention, as shown in fig. 7, where the flowchart includes the following steps:
Step S401, original YUV data corresponding to an original image of a current frame in the original video data is obtained.
The original YUV data comprises original sub YUV data corresponding to each pixel point in the original image of the current frame.
For details, refer to step S301 in the embodiment shown in fig. 5; the description is not repeated here.
Step S402, obtaining a target YUV data type corresponding to an original image of a current frame.
Wherein the target YUV data types include YUV444 type and YUV420 type.
For details, refer to step S302 in the embodiment shown in fig. 5; the description is not repeated here.
Step S403, according to the target YUV data type, carrying out data recombination processing on each original sub YUV data to generate target pixel data.
Wherein the target pixel data comprises a plurality of target sub-data; the data size of each target sub-data is matched with the bus bandwidth; and the number of groups corresponding to the target sub-data is smaller than the number of groups corresponding to the original sub-YUV data.
For details, refer to step S203 of the embodiment shown in fig. 3 and step S303 of the embodiment shown in fig. 5; the description is not repeated here.
Step S404, data compression is carried out on target pixel data corresponding to N frames of original images in the original video data, and a YUV comparison table corresponding to the N frames of original images is generated; and transmitting the YUV comparison table to the target device.
Specifically, the step S404 of performing data compression on the target pixel data corresponding to the N frames of original images in the original video data to generate the YUV comparison table corresponding to the N frames of original images may include the following steps:
Step S4041, generating a plurality of labeling data according to the relationships among the target sub-data corresponding to the N frames of original images.
The data volume of the labeling data is smaller than that of each target sub-data; each annotation data corresponds to at least one target sub-data.
In an optional implementation manner, the electronic device may generate, by using a preset compression method, annotation data corresponding to the target sub-data corresponding to each of the N frames of original images. That is, one annotation data is generated for one target sub-data.
In another alternative embodiment, the step S4041 may include the following steps:
and d1, acquiring current target sub-data.
Wherein the current target sub-data is any one of the target sub-data.
Specifically, for any one of the target sub-data generated in the above embodiment, the electronic device may consider it as the current target sub-data.
And d2, detecting whether unprocessed historical target sub-data which is positioned before the current target sub-data and does not generate the labeling data exists.
Here, "not generating labeling data" means that, before the unprocessed historical target sub-data, there exists first identical historical target sub-data that is the same as the unprocessed historical target sub-data; the number of unprocessed historical target sub-data is at least one.
Specifically, the electronic device may detect whether there is unprocessed historical target sub-data located before the current target sub-data and not generating the labeling data, according to the output result corresponding to the historical target sub-data located before the current target sub-data.
When the historical target sub-data before the current target sub-data outputs the marking data, determining that unprocessed historical target sub-data does not exist; when the historical target sub-data before the current target sub-data does not output the annotation data, determining that unprocessed historical target sub-data exists.
And d3, when unprocessed historical target sub-data does not exist, comparing the current target sub-data with each historical target sub-data before the current target sub-data, and judging whether second identical historical target sub-data which is identical to the current target sub-data exists in each historical target sub-data.
Specifically, the step d3 includes:
and d31, when unprocessed historical target sub-data does not exist, sequentially calculating absolute differences between the current target sub-data and each historical target sub-data.
Specifically, when there is no unprocessed historical target sub-data, the electronic device may treat the current target sub-data as a pending value. The electronic device sequentially calculates absolute differences between the current target sub-data and each of the historical target sub-data.
For example, the electronic device may treat the current target sub-data as a value to be processed. Each YUV component is represented with one byte (8 bits), giving a value range of 0-255; however, to leave headroom against signal variation, the Y values are limited to 16-235 and the UV values to 16-240. The electronic device can therefore combine the values of the components of the current target sub-data to generate a value to be processed, for example the value 32456718, and then sequentially calculate the absolute differences between the current target sub-data and each historical target sub-data.
And d32, comparing each absolute difference value with a first preset difference value.
Specifically, the electronic device may obtain a first preset difference value input by the user, or may also receive the first preset difference value sent by other devices, and the electronic device may preset a first preset difference value according to an actual situation.
After the first preset difference value is obtained, the electronic device can compare the absolute difference value between the calculated current target sub-data and each historical target sub-data.
And d33, when the absolute difference value is smaller than the first preset difference value, determining that second identical historical target sub-data which is identical to the current target sub-data exists in each historical target sub-data.
Specifically, when the absolute difference is smaller than the first preset difference, the electronic device determines that second identical historical target sub-data which is identical to the current target sub-data exists in each historical target sub-data.
Step d34, when the absolute difference value is not smaller than the first preset difference value, determining that second identical historical target sub-data which is identical to the current target sub-data does not exist in each historical target sub-data.
Specifically, when the absolute difference is not smaller than the first preset difference, the electronic device determines that second identical historical target sub-data which is identical to the current target sub-data does not exist in each historical target sub-data.
And d4, outputting the labeling data corresponding to the current target sub-data according to the judging result.
Specifically, the step d4 includes:
Step d41, outputting labeling data corresponding to the current target sub-data according to a preset mapping relation when the judgment result is that the second same historical target sub-data which is the same as the current target sub-data does not exist in the historical target sub-data; and continuing to acquire the next target sub-data corresponding to the current target sub-data.
Specifically, when the judgment result is that the second same historical target sub-data which is the same as the current target sub-data does not exist in each historical target sub-data, outputting marking data corresponding to the current target sub-data according to a preset mapping relation; and continuing to acquire the next target sub-data corresponding to the current target sub-data.
The preset mapping relation may be input to the electronic device by a user, may be sent to the electronic device by other devices, or may be determined by the electronic device according to a preset compression method. The preset compression method may include, but is not limited to, at least one of a huffman coding method, a field coding method, a predictive coding method, a transform coding method, and the like, and the embodiment of the present application does not describe the preset compression method in detail. The present application also does not specifically limit the preset mapping relation.
Step d42, when the judgment result is that the second same historical target sub-data which is the same as the current target sub-data exists in the historical target sub-data, temporarily prohibiting the output of the marking data corresponding to the current target sub-data, and continuously acquiring the next target sub-data corresponding to the current target sub-data.
Specifically, when the judgment result is that the second same historical target sub-data which is the same as the current target sub-data exists in the historical target sub-data, in order to reduce repeated compression of similar data, the electronic device may temporarily prohibit output of the labeling data corresponding to the current target sub-data and continue to acquire the next target sub-data corresponding to the current target sub-data.
And d5, when unprocessed historical target sub-data exists, combining the current target sub-data with the unprocessed historical target sub-data to generate the current target sub-data combination.
Specifically, when unprocessed historical target sub-data exists, the current target sub-data is combined with the unprocessed historical target sub-data to generate a current target sub-data combination.
And d6, searching historical target sub-data combinations with the same number as the data included in the current target sub-data combination in each historical target sub-data before the current target sub-data combination.
Specifically, the electronic device searches for the historical target sub-data combinations with the same number of data as the current target sub-data combination in each historical target sub-data before the current target sub-data combination.
Illustratively, assuming that the number of unprocessed historical target sub-data is 2, the current target sub-data is combined with the unprocessed historical target sub-data to generate the current target sub-data combination. The current target sub-data combination therefore includes 3 target sub-data, and the electronic device searches, among the historical target sub-data preceding it, for historical target sub-data combinations that likewise include 3 target sub-data.
And d7, comparing the current target sub-data combination with each historical target sub-data combination, and judging whether the same historical target sub-data combination which is the same as the current target sub-data combination exists in each historical target sub-data combination.
Specifically, the electronic device may treat the current target sub-data combination as a single value to be processed and treat each historical target sub-data combination as a processed value. The electronic device calculates in turn the absolute difference between the value to be processed and each processed value. When the absolute difference between the value to be processed and some processed value is smaller than the first preset difference, the electronic device determines that a historical target sub-data combination identical to the current target sub-data combination exists; when none of the absolute differences is smaller than the first preset difference, the electronic device determines that no historical target sub-data combination identical to the current target sub-data combination exists.
And d8, when the same historical target sub-data combination which is the same as the current target sub-data combination exists in each historical target sub-data combination, prohibiting outputting the marking data corresponding to the current target sub-data combination, and continuously acquiring the next target sub-data corresponding to the current target sub-data.
Specifically, when the same historical target sub-data combination which is the same as the current target sub-data combination exists in each historical target sub-data combination, in order to reduce compression of the same data, the electronic device prohibits outputting of the marking data corresponding to the current target sub-data combination, and continues to acquire the next target sub-data corresponding to the current target sub-data.
Step d9, when the same historical target sub-data combination which is the same as the current target sub-data combination does not exist in each historical target sub-data combination, outputting the marking data corresponding to the current target sub-data combination according to a preset mapping relation, and continuously acquiring the next target sub-data corresponding to the current target sub-data.
Specifically, when the same historical target sub-data combination which is the same as the current target sub-data combination does not exist in each historical target sub-data combination, outputting the marking data corresponding to the current target sub-data combination according to a preset mapping relation, and continuously acquiring the next target sub-data corresponding to the current target sub-data.
The preset mapping relation may be input to the electronic device by a user, may be sent to the electronic device by other devices, or may be determined by the electronic device according to a preset compression method. The preset compression method may include, but is not limited to, at least one of a huffman coding method, a field coding method, a predictive coding method, a transform coding method, and the like, and the embodiment of the present application does not describe the preset compression method in detail. The present application also does not specifically limit the preset mapping relation.
By way of example, each of the target sub-data entered is represented by D0, D1, D2 … …, which may be YUVY type, UVYU type, VYUV type, and YYYY type.
In step e1, the electronic device acquires D0; because D0 is the first target sub-data input, no historical target sub-data exists, and therefore the labeling data T0 corresponding to D0 is generated according to the preset mapping relationship. T0 may be defined as 8 bits wide.
Step e2, the electronic device acquires D1, and first detects whether unprocessed historical target sub-data which is located before D1 and does not generate labeling data exists. Since the annotation data T0 corresponding to D0 already exists, the electronic device determines that there is no unprocessed history target sub-data.
And e3, calculating an absolute difference value between D1 and D0 when the unprocessed history target sub-data does not exist. When the absolute difference between D1 and D0 is smaller than a first preset difference, determining that D1 is identical to D0; when the absolute difference between D1 and D0 is larger than or equal to the first preset difference, determining that D1 and D0 are different.
And e4, when the D1 is the same as the D0, the electronic equipment temporarily prohibits outputting the marking data corresponding to the current target sub-data, and continues to acquire the D2.
And e5, when the D1 and the D0 are different, the electronic equipment generates marking data T1 corresponding to the D1 according to a preset mapping relation, and continues to acquire the D2.
In step e6, the electronic device acquires D2. When D1 is the same as D0, unprocessed historical target sub-data (D1) exists before D2, so D2 and D1 are combined into the current target sub-data combination {D1, D2}.
Step e7, the electronic device searches for the historical target sub-data combination with the same number of data as the current target sub-data combination in each historical target sub-data before { D1, D2}, namely searches for the historical target sub-data combination comprising two target sub-data.
And e8, comparing the current target sub-data combination with each historical target sub-data combination, and judging whether the same historical target sub-data combination which is the same as the current target sub-data combination exists in each historical target sub-data combination.
And e9, outputting labeling data corresponding to { D1, D2} according to a preset mapping relation when the same historical target sub-data combination which is the same as { D1, D2} does not exist in the historical target sub-data combinations, and continuing to acquire D3.
Step e10, when a historical target sub-data combination identical to {D1, D2} exists among the historical target sub-data combinations, the output of the labeling data corresponding to {D1, D2} is prohibited, and D3 continues to be acquired.
Step e11, after D3 is acquired, since a combination identical to {D1, D2} existed, {D1, D2, D3} is composed and compared in the same way.
The steps cycle in this way until all the target sub-data have been processed.
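The following greatly simplified Python sketch imitates the D0/D1/D2 flow just described, treating each target sub-data as an integer; the fixed threshold, the window search over the history, and the tuple-per-run output are illustrative stand-ins for the patent's preset mapping relation, not its actual encoding:

```python
THRESH = 4  # stands in for the first preset difference

def build_lookup_table(values):
    history, pending, table = [], [], []
    for d in values:
        cand = pending + [d]               # current (combination of) sub-data
        n = len(cand)
        # compare against every same-length window of processed history
        windows = [history[i:i + n] for i in range(len(history) - n + 1)]
        same = any(all(abs(a - b) < THRESH for a, b in zip(w, cand))
                   for w in windows)
        if same:
            pending = cand                 # suppress output, extend the run
        else:
            table.append(tuple(cand))      # emit one annotation for the run
            pending = []
        history.append(d)
    if pending:                            # flush any trailing suppressed run
        table.append(tuple(pending))
    return table

print(build_lookup_table([10, 11, 12, 90, 10, 11, 12, 95]))
# [(10,), (11, 12, 90), (10, 11, 12, 95)]
```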
Step S4042, generating a YUV comparison table corresponding to the N frames of original images according to each labeling data.
Specifically, the electronic device generates a YUV comparison table corresponding to the N frames of original images according to the generating sequence of each labeling data.
Step S405, calculating the data difference between every two adjacent frames of original images in the N frames of original images;
Specifically, the step S405 may include the following steps:
Step S4051, sequentially calculating, for any two adjacent frames of original images in the original images, a first sub-difference value between two target sub-data at corresponding positions in the two adjacent frames of original images.
Specifically, for any two adjacent frames of original images in each frame of original image, the electronic device sequentially calculates a first sub-difference value between two target sub-data at corresponding positions in the two adjacent frames of original images.
The electronic device calculates a first sub-difference value between the first target sub-data corresponding to each of the two adjacent frames of original images, then calculates a first sub-difference value between the second target sub-data corresponding to each of the two adjacent frames of original images, and so on to obtain a first sub-difference value between the two target sub-data corresponding to each of the two adjacent frames of original images.
In step S4052, the first sub-difference values are added to obtain the data difference value between the two adjacent frames of original images.
Specifically, the electronic device adds the first sub-difference values to obtain a data difference value between two adjacent frames of original images.
In step S406, the data differences are added to calculate a total data difference.
Specifically, the electronic device adds the data differences, and calculates a total data difference.
In step S407, the total data difference is compared with the maximum data difference and the minimum data difference in the preset data difference range.
Specifically, the electronic device may receive a preset data difference range input by a user, or may receive a preset data difference range sent by other devices, and the electronic device may set the preset data difference range according to an actual situation. The mode of acquiring the preset data difference range by the electronic equipment is not particularly limited.
After the preset data difference range is obtained, the electronic device compares the total data difference with the maximum data difference and the minimum data difference in the preset data difference range.
In step S408, when the total data difference is greater than the maximum data difference, the value of N is decreased.
Specifically, when the total data difference is greater than the maximum data difference, the electronic device decreases the value of N, thereby decreasing the data amount of the YUV comparison table.
In step S409, when the total data difference is smaller than the minimum data difference, the value of N is increased.
Specifically, when the total data difference is smaller than the minimum data difference, the electronic device increases the value of N, thereby increasing the data amount of the YUV comparison table.
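A compact sketch of this N-adaptation loop (steps S405-S409), with an assumed per-word absolute-difference metric and illustrative names:

```python
def adjust_n(frames_words, n, min_diff, max_diff):
    """frames_words: per-frame lists of packed 32-bit words."""
    total = 0
    for prev, cur in zip(frames_words, frames_words[1:]):
        # S4051/S4052: sum the first sub-differences at corresponding positions
        total += sum(abs(a - b) for a, b in zip(prev, cur))
    if total > max_diff:
        n -= 1          # frames differ a lot: shrink the table's frame span
    elif total < min_diff:
        n += 1          # frames barely differ: widen the table's frame span
    return n
```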
Step S410, generating a target image corresponding to each frame of original image according to the labeling data included in the YUV comparison table.
Specifically, the electronic device may decompress the annotation data according to a preset mapping relationship corresponding to the generated annotation data, generate target sub-data corresponding to each annotation data, and generate a target image corresponding to each frame of original image according to the target sub-data.
Step S411, an image difference between each frame target image and the original image is calculated.
Specifically, the electronic device may perform subtraction using each frame of the target image and the original image, and calculate an image difference between each frame of the target image and the original image.
In step S412, the image differences are added to obtain a total image difference.
Specifically, the electronic device adds the image difference values between each frame of target image and the corresponding original image to obtain the total image difference value.
In step S413, the total image difference is compared with the maximum image difference and the minimum image difference in the preset image difference range.
Specifically, the electronic device may receive a preset image difference range input by a user, or may receive a preset image difference range sent by other devices, and the electronic device may set the preset image difference range according to an actual situation. The mode of acquiring the preset image difference range by the electronic equipment is not particularly limited.
After the preset image difference range is obtained, the electronic device compares the total image difference with the maximum image difference and the minimum image difference in the preset image difference range.
In step S414, when the total image difference value is greater than the maximum image difference value, the value of the first preset difference value is reduced.
Specifically, when the total image difference value is greater than the maximum image difference value, it is indicated that the image difference value between the target image and the original image is too large, and therefore the electronic device reduces the value of the first preset difference value.
In step S415, when the total image difference value is smaller than the minimum image difference value, the value of the first preset difference value is increased.
Specifically, when the total image difference value is smaller than the minimum image difference value, it is indicated that the image difference value between the target image and the original image is too small, and therefore the electronic device increases the value of the first preset difference value.
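A matching sketch of this threshold feedback (steps S410-S415), again with assumed metrics and illustrative names:

```python
def adjust_threshold(originals, rebuilt, thresh, min_img_diff, max_img_diff):
    """originals/rebuilt: per-frame flat sequences of sample values."""
    # S411/S412: per-frame image differences, then their sum
    total = sum(sum(abs(a - b) for a, b in zip(o, r))
                for o, r in zip(originals, rebuilt))
    if total > max_img_diff:
        thresh -= 1     # rebuilt frames drift too far: match more strictly
    elif total < min_img_diff:
        thresh += 1     # quality headroom remains: compress more aggressively
    return thresh
```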
The video processing method provided by the embodiment of the application acquires the current target sub-data; detecting whether unprocessed historical target sub-data which is positioned before the current target sub-data and does not generate marking data exists or not; when unprocessed historical target sub-data does not exist, absolute difference values between the current target sub-data and each historical target sub-data are calculated in sequence, and accuracy of the absolute difference values between the current target sub-data and each historical target sub-data obtained through calculation is guaranteed. Comparing each absolute difference value with a first preset difference value; when the absolute difference value is smaller than the first preset difference value, determining that second identical historical target sub-data which is identical to the current target sub-data exists in each historical target sub-data, and when the absolute difference value is not smaller than the first preset difference value, determining that second identical historical target sub-data which is identical to the current target sub-data does not exist in each historical target sub-data. The accuracy of the result of determining whether the second same historical target sub-data exists is ensured.
When the judgment result is that the second same historical target sub-data which is the same as the current target sub-data does not exist in the historical target sub-data, outputting marking data corresponding to the current target sub-data according to a preset mapping relation; and continuing to acquire the next target sub-data corresponding to the current target sub-data. The accuracy of the output labeling data is ensured.
When the judgment result is that the second same historical target sub-data which is the same as the current target sub-data exists in the historical target sub-data, temporarily prohibiting the output of the marking data corresponding to the current target sub-data, and continuously acquiring the next target sub-data corresponding to the current target sub-data. The accuracy of processing the annotation data corresponding to the current target sub-data is guaranteed.
When unprocessed historical target sub-data exists, the current target sub-data and the unprocessed historical target sub-data are combined to generate the current target sub-data combination, so that the accuracy of the generated current target sub-data combination is ensured. And searching the historical target sub-data combinations with the same number as the data included in the current target sub-data combination in each historical target sub-data before the current target sub-data combination, so that the accuracy of the determined historical target sub-data combination is ensured. Comparing the current target sub-data combination with each historical target sub-data combination, judging whether the same historical target sub-data combination which is the same as the current target sub-data combination exists in each historical target sub-data combination, and ensuring the accuracy of the obtained judging result. When the same historical target sub-data combination which is the same as the current target sub-data combination exists in each historical target sub-data combination, the output of the marking data corresponding to the current target sub-data combination is forbidden, and the next target sub-data corresponding to the current target sub-data is continuously acquired. Therefore, the output of the labeling data is reduced, and the data compression of the target pixel data corresponding to the N frames of original images in the original video data is realized. Therefore, the transmission data volume is reduced, the DDR memory space is saved, the bus bandwidth is reduced, and the overall performance of the chip is improved.
When the same historical target sub-data combination which is the same as the current target sub-data combination does not exist in each historical target sub-data combination, outputting marking data corresponding to the current target sub-data combination according to a preset mapping relation, and continuously acquiring next target sub-data corresponding to the current target sub-data. The accuracy of the marking data corresponding to the output current target sub-data combination is guaranteed, and the current target sub-data combination is restored according to the marking data corresponding to the current target sub-data combination.
And generating a YUV comparison table corresponding to the N frames of original images according to each labeling data. The compression of each target sub data is realized, so that the transmission data quantity is reduced, the DDR memory space is saved, the bus bandwidth is reduced, and the overall performance of the chip is improved.
In addition, for any two adjacent frames of original images, the first sub-difference values between the two target sub-data at corresponding positions in the two adjacent frames are calculated in sequence, ensuring the accuracy of each calculated first sub-difference value. The first sub-difference values are added to obtain the data difference value between the two adjacent frames of original images, ensuring the accuracy of the data difference value calculated for each frame. The data difference values are added to calculate the total data difference value, ensuring the accuracy of the calculated total data difference value. The total data difference value is then compared with the maximum data difference value and the minimum data difference value in the preset data difference range. When the total data difference value is greater than the maximum data difference value, the data volume of the current YUV comparison table is judged to be too large; to improve compression efficiency as much as possible while ensuring data transmission safety and the recovery quality at the remote video end, the value of N is decreased, reducing the amount of transmitted data. When the total data difference value is smaller than the minimum data difference value, the data volume of the current YUV comparison table is judged to be small; to reduce the data transmission frequency and improve transmission efficiency, the value of N can be increased, achieving more effective and accurate compression while still guaranteeing the recovery quality at the remote video end.
And generating target images corresponding to the original images of each frame according to the labeling data included in the YUV comparison table, so that the accuracy of the generated target images corresponding to the original images of each frame is ensured. And respectively calculating the image difference value between each frame of target image and the original image, so that the accuracy of the calculated image difference value between each frame of target image and the original image is ensured. And adding the image difference values to obtain a total image difference value, so that the accuracy of the obtained total image difference value is ensured. Comparing the total image difference value with a maximum image difference value and a minimum image difference value in a preset image difference value range; when the total image difference value is greater than the maximum image difference value, the value of the first preset difference value is reduced, so that the image difference value between the target image and the original image can be reduced. When the total image difference value is smaller than the minimum image difference value, the value of the first preset difference value is increased, so that the image difference value between the target image and the original image can be increased. The method realizes that the compression efficiency of data compression of the target pixel data corresponding to the N frames of original images in the original video data is improved as much as possible under the condition of ensuring the recovery quality of the target images, reduces the transmission data volume and realizes more effective and accurate compression.
The embodiment also provides a video processing device, which is used for implementing the above embodiment and the preferred implementation, and is not described in detail. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. While the means described in the following embodiments are preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
The present embodiment provides a video processing apparatus, as shown in fig. 8, including:
An obtaining module 501, configured to obtain original YUV data corresponding to an original image of a current frame in original video data; the original YUV data comprises original sub-YUV data corresponding to each pixel point in the original image of the current frame;
An identification module 502, configured to obtain a target YUV data type corresponding to the original image of the current frame; the target YUV data types include a YUV444 type and a YUV420 type;
A reorganizing module 503, configured to perform data reorganization processing on each original sub-YUV data according to the target YUV data type, so as to generate target pixel data; the target pixel data comprises a plurality of target sub-data; the data size of each target sub-data is matched with the bus bandwidth; and the group number corresponding to the target sub-data is smaller than the group number corresponding to the original sub-YUV data;
A compression module 504, configured to perform data compression on the target pixel data corresponding to N frames of original images in the original video data, so as to generate a YUV comparison table corresponding to the N frames of original images, and to transmit the YUV comparison table to the target device.
The video processing apparatus in this embodiment is presented in the form of functional units, where a unit refers to an ASIC circuit, a processor and memory executing one or more software or firmware programs, and/or another device that can provide the above functionality.
Further functional descriptions of the above respective modules and units are the same as those of the above corresponding embodiments, and are not repeated here.
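As a rough software analogue of the four modules, the sketch below strings them into a per-frame pipeline. VideoProcessor, repack and build_comparison_table are hypothetical names introduced for this example, a frame object with pixels and yuv_type attributes is likewise assumed, and the two helpers are reduced to stand-in stubs (fuller sketches of the reorganization and table-building rules follow the claims); the actual apparatus is realized in hardware and/or firmware rather than Python.

def repack(sub_yuv, yuv_type):
    # Stand-in stub for the reorganizing module 503: a real implementation
    # would emit bus-width-aligned target sub-data per the claimed rules.
    return sub_yuv

def build_comparison_table(group):
    # Stand-in stub for the annotation-based compression of module 504.
    return group

class VideoProcessor:
    def __init__(self, bus, n_frames):
        self.bus = bus            # assumed transport toward the target device
        self.n_frames = n_frames  # N, the number of frames per comparison table
        self.group = []

    def process_frame(self, frame):
        sub_yuv = frame.pixels                 # obtaining module 501
        yuv_type = frame.yuv_type              # identification module 502
        target = repack(sub_yuv, yuv_type)     # reorganizing module 503
        self.group.append(target)
        if len(self.group) == self.n_frames:   # compression module 504
            self.bus.send(build_comparison_table(self.group))
            self.group = []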
The embodiment of the invention also provides an electronic device equipped with the video processing apparatus shown in fig. 8.
Referring to fig. 9, fig. 9 is a schematic structural diagram of an electronic device according to an alternative embodiment of the present invention. As shown in fig. 9, the electronic device includes: one or more processors 10, a memory 20, and interfaces for connecting the components, including high-speed interfaces and low-speed interfaces. The components are communicatively coupled to each other using different buses and may be mounted on a common motherboard or in other manners as required. The processor may process instructions executed within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output device, such as a display device coupled to an interface. In some alternative embodiments, multiple processors and/or multiple buses may be used together with multiple memories, if desired. Likewise, multiple electronic devices may be connected, each providing a part of the necessary operations (for example, as a server array, a group of blade servers, or a multiprocessor system). A single processor 10 is taken as an example in fig. 9.
The processor 10 may be a central processing unit, a network processor, or a combination of the two. The processor 10 may further include a hardware chip. The hardware chip may be an application-specific integrated circuit, a programmable logic device, or a combination thereof. The programmable logic device may be a complex programmable logic device, a field-programmable gate array, generic array logic, or any combination thereof.
The memory 20 stores instructions executable by the at least one processor 10, so that the at least one processor 10 performs the method shown in the above embodiments.
The memory 20 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required for at least one function, and the data storage area may store data created during use of the electronic device, and the like. In addition, the memory 20 may include a high-speed random access memory, and may also include a non-transitory memory, such as at least one magnetic disk storage device, a flash memory device, or another non-transitory solid-state storage device. In some alternative embodiments, the memory 20 may optionally include memory located remotely from the processor 10, and such remote memory may be connected to the electronic device via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Memory 20 may include volatile memory, such as random access memory; the memory may also include non-volatile memory, such as flash memory, hard disk, or solid state disk; the memory 20 may also comprise a combination of the above types of memories.
The electronic device further includes an input device 30 and an output device 40. The processor 10, the memory 20, the input device 30, and the output device 40 may be connected by a bus or in other ways; connection by a bus is taken as an example in fig. 9.
The input device 30 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device, and may be, for example, a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a trackball, or a joystick. The output device 40 may include a display device, an auxiliary lighting device (e.g., an LED), a tactile feedback device (e.g., a vibration motor), and the like. Such display devices include, but are not limited to, liquid crystal displays, light-emitting diode displays, and plasma displays. In some alternative implementations, the display device may be a touch screen.
The embodiments of the present invention also provide a computer-readable storage medium. The method according to the above embodiments of the present invention may be implemented in hardware or firmware, or realized as computer code that can be recorded on a storage medium, or as computer code originally stored on a remote storage medium or a non-transitory machine-readable storage medium and downloaded through a network to be stored on a local storage medium, so that the method described herein can be processed by software stored on a storage medium using a general-purpose computer, a special-purpose processor, or programmable or dedicated hardware. The storage medium may be a magnetic disk, an optical disc, a read-only memory, a random access memory, a flash memory, a hard disk, a solid-state disk, or the like; further, the storage medium may also comprise a combination of the above kinds of memories. It will be appreciated that a computer, processor, microprocessor controller or programmable hardware includes a storage element that can store or receive software or computer code which, when accessed and executed by the computer, processor or hardware, implements the methods illustrated in the above embodiments.
Although embodiments of the present invention have been described in connection with the accompanying drawings, various modifications and variations may be made by those skilled in the art without departing from the spirit and scope of the invention, and such modifications and variations fall within the scope of the invention as defined by the appended claims.

Claims (22)

1. A method of video processing, the method comprising:
Acquiring original YUV data corresponding to an original image of a current frame in original video data; the original YUV data comprises original sub-YUV data corresponding to each pixel point in the original image of the current frame;
obtaining a target YUV data type corresponding to the original image of the current frame; the target YUV data types include YUV444 type and YUV420 type;
According to the target YUV data type, carrying out data recombination processing on each original sub YUV data to generate target pixel data; the target pixel data comprises a plurality of target sub-data; the data size of each target sub-data is matched with the bus bandwidth; the group number corresponding to the target sub-data is smaller than the group number corresponding to the original sub-YUV data;
performing data compression on the target pixel data corresponding to the N frames of original images in the original video data to generate a YUV comparison table corresponding to the N frames of original images; and transmitting the YUV comparison table to a target device.
2. The method according to claim 1, wherein when the target YUV data type is the YUV444 type, the performing data reorganization processing on each of the original sub-YUV data according to the target YUV data type to generate target pixel data includes:
According to the sequence corresponding to each pixel point in the original image of the current frame, sequentially carrying out data recombination on the original sub-YUV data corresponding to each pixel point to generate each target sub-data;
and generating the target pixel data corresponding to the original image of the current frame based on each target sub-data.
3. The method according to claim 2, wherein the sequentially performing data reorganization on the original sub-YUV data corresponding to each pixel point in the sequence corresponding to each pixel point in the current frame original image to generate each target sub-data includes:
sequentially obtaining the original sub-YUV data corresponding to each pixel point according to the sequence corresponding to each pixel point in the original image of the current frame;
starting from the 1st pixel point, taking the original sub-YUV data corresponding to every four adjacent pixel points as a first cyclic data group;
for the first cyclic data group corresponding to each original sub-YUV data, splitting and recombining the four adjacent original sub-YUV data to generate three target sub-data;
and cycling in this way until all the first cyclic data groups are processed, generating each target sub-data corresponding to the original image of the current frame.
4. The method according to claim 3, wherein the data types corresponding to the three target sub-data are a YUVY type, a UVYU type and a VYUV type, respectively, and the splitting and recombining the four adjacent original sub-YUV data for the first cyclic data group corresponding to each original sub-YUV data to generate three target sub-data includes:
for the first cyclic data group corresponding to each original sub-YUV data, combining the original sub-YUV data corresponding to the 1st pixel point in the first cyclic data group with the Y data in the original sub-YUV data corresponding to the 2nd pixel point in the first cyclic data group, to generate the target sub-data of the YUVY type;
combining the UV data in the original sub-YUV data corresponding to the 2nd pixel point with the YU data in the original sub-YUV data corresponding to the 3rd pixel point, to generate the target sub-data of the UVYU type;
and combining the V data in the original sub-YUV data corresponding to the 3rd pixel point with the original sub-YUV data corresponding to the 4th pixel point, to generate the target sub-data of the VYUV type.
5. The method according to claim 1, wherein when the target YUV data type is the YUV420 type, the performing data reorganization processing on each of the original sub-YUV data according to the target YUV data type to generate target pixel data includes:
Performing rounding processing on the original sub-YUV data corresponding to each pixel point in the original image of the current frame to generate sub-YUV data to be processed corresponding to each pixel point;
According to the sequence corresponding to each pixel point in the original image of the current frame, sequentially carrying out recombination processing on the sub-YUV data to be processed corresponding to each pixel point to generate each target sub-data;
and generating the target pixel data corresponding to the original image of the current frame based on each target sub-data.
6. The method according to claim 5, wherein the performing rounding processing on the original sub-YUV data corresponding to each pixel point in the original image of the current frame to generate sub-YUV data to be processed corresponding to each pixel point includes:
for each pixel point in an even row and an even column in the original image of the current frame, preserving the original sub-YUV data corresponding to each such pixel point, and generating the sub-YUV data to be processed corresponding to each pixel point in an even row and an even column;
and for the other pixel points except the pixel points in even rows and even columns, preserving only the Y data in the original sub-YUV data corresponding to the other pixel points, and generating the sub-YUV data to be processed corresponding to the other pixel points.
7. The method according to claim 5, wherein the sequentially recombining the sub-YUV data to be processed corresponding to each pixel point in the sequence corresponding to each pixel point in the original image of the current frame to generate each target sub-data includes:
for each pixel point of an even row in the original image of the current frame, starting from the 1st pixel point, taking the sub-YUV data to be processed corresponding to every two adjacent pixel points as a second cyclic data group;
for each second cyclic data group, recombining the two sub-YUV data to be processed in the second cyclic data group to generate one target sub-data;
for each pixel point of an odd row in the original image of the current frame, starting from the 1st pixel point, taking the sub-YUV data to be processed corresponding to every four adjacent pixel points as a third cyclic data group;
for each third cyclic data group, recombining the four sub-YUV data to be processed in the third cyclic data group to generate one target sub-data;
and cycling in this way until all the second cyclic data groups and third cyclic data groups are processed, generating each target sub-data.
8. The method according to claim 7, wherein the type of the target sub-data is the YUVY type, and the recombining the two sub-YUV data to be processed in the second cyclic data group for each second cyclic data group to generate one target sub-data includes:
combining the sub-YUV data to be processed corresponding to the 1st pixel point in the second cyclic data group with the Y data in the sub-YUV data to be processed corresponding to the 2nd pixel point in the second cyclic data group, to generate the target sub-data of the YUVY type.
9. The method according to claim 7, wherein the type of the target sub-data is the YYYY type, and the recombining the four sub-YUV data to be processed in the third cyclic data group to generate one target sub-data includes:
recombining, for each third cyclic data group, the Y data in the four sub-YUV data to be processed in the third cyclic data group, to generate the target sub-data of the YYYY type.
10. The method according to claim 1, wherein the performing data compression on the target pixel data corresponding to N frames of original images in the original video data to generate a YUV comparison table corresponding to the N frames of original images includes:
generating a plurality of annotation data for the relationships between the target sub-data respectively corresponding to the N frames of original images, wherein the data volume of each annotation data is smaller than that of each target sub-data, and each annotation data corresponds to at least one target sub-data;
and generating the YUV comparison table corresponding to the N frames of original images according to each annotation data.
11. The method according to claim 10, wherein the generating a plurality of annotation data for the relationships between the target sub-data corresponding to the N frames of the original image respectively includes:
acquiring current target sub-data; the current target sub-data is any one of the target sub-data;
Detecting whether there exists unprocessed historical target sub-data that is located before the current target sub-data and for which no annotation data has been generated; wherein having no annotation data generated characterizes that first identical historical target sub-data identical to the unprocessed historical target sub-data exists before the unprocessed historical target sub-data; the number of unprocessed historical target sub-data is at least one;
When the unprocessed historical target sub-data does not exist, comparing the current target sub-data with each historical target sub-data before the current target sub-data, and judging whether second identical historical target sub-data which is identical to the current target sub-data exists in each historical target sub-data;
and outputting the annotation data corresponding to the current target sub-data according to the judgment result.
12. The method of claim 11, wherein, when the unprocessed historical target sub-data does not exist, the comparing the current target sub-data with each historical target sub-data before the current target sub-data, and determining whether second identical historical target sub-data that is identical to the current target sub-data exists in each historical target sub-data, includes:
when the unprocessed historical target sub-data does not exist, sequentially calculating absolute difference values between the current target sub-data and each historical target sub-data;
comparing each absolute difference value with a first preset difference value;
when an absolute difference value is smaller than the first preset difference value, determining that the second identical historical target sub-data that is identical to the current target sub-data exists in each historical target sub-data;
and when no absolute difference value is smaller than the first preset difference value, determining that the second identical historical target sub-data that is identical to the current target sub-data does not exist in each historical target sub-data.
13. The method of claim 11, wherein the outputting the annotation data corresponding to the current target sub-data according to the judgment result comprises:
when the judgment result is that the second identical historical target sub-data that is identical to the current target sub-data does not exist in each historical target sub-data, outputting the annotation data corresponding to the current target sub-data according to a preset mapping relation, and continuing to acquire the next target sub-data corresponding to the current target sub-data.
14. The method of claim 13, wherein the method further comprises:
And when the judgment result is that the second identical historical target sub-data that is identical to the current target sub-data exists in each historical target sub-data, temporarily prohibiting the output of the annotation data corresponding to the current target sub-data, and continuing to acquire the next target sub-data corresponding to the current target sub-data.
15. The method of claim 11, wherein the method further comprises:
when the unprocessed historical target sub-data exists, combining the current target sub-data with the unprocessed historical target sub-data to generate a current target sub-data combination;
searching, among each historical target sub-data before the current target sub-data combination, for historical target sub-data combinations that include the same number of data as the current target sub-data combination;
Comparing the current target sub-data combination with each historical target sub-data combination, and judging whether the same historical target sub-data combination which is the same as the current target sub-data combination exists in each historical target sub-data combination;
When the same historical target sub-data combination that is identical to the current target sub-data combination exists in each historical target sub-data combination, prohibiting the output of the annotation data corresponding to the current target sub-data combination, and continuing to acquire the next target sub-data corresponding to the current target sub-data.
16. The method of claim 15, wherein the method further comprises:
when the same historical target sub-data combination that is identical to the current target sub-data combination does not exist in each historical target sub-data combination, outputting the annotation data corresponding to the current target sub-data combination according to a preset mapping relation, and continuing to acquire the next target sub-data corresponding to the current target sub-data.
17. The method according to claim 1, wherein after performing data compression on the target pixel data corresponding to N frames of original images in the original video data to generate a YUV comparison table corresponding to N frames of original images, the method further includes:
Calculating the data difference value between every two adjacent frames of the original images in the N frames of the original images;
adding the data difference values, and calculating to obtain a total data difference value;
Comparing the total data difference value with a maximum data difference value and a minimum data difference value in a preset data difference value range;
Decreasing the value of N when the total data difference is greater than the maximum data difference;
and increasing the value of N when the total data difference is less than the minimum data difference.
18. The method of claim 17, wherein the calculating a data difference value between every two adjacent frames of the original images in the N frames of original images comprises:
for any two adjacent frames of the original images, sequentially calculating first sub-difference values between the two target sub-data at corresponding positions in the two adjacent frames;
and adding the first sub-difference values to obtain the data difference value between the two adjacent frames of original images.
19. The method according to claim 12, wherein after performing data compression on the target pixel data corresponding to N frames of original images in the original video data to generate a YUV comparison table corresponding to N frames of original images, the method further comprises:
Generating a target image corresponding to each frame of original image according to the annotation data included in the YUV comparison table;
Respectively calculating an image difference value between the target image and the original image of each frame;
Adding the image difference values to obtain a total image difference value;
comparing the total image difference value with a maximum image difference value and a minimum image difference value in a preset image difference value range;
when the total image difference value is larger than the maximum image difference value, reducing the value of the first preset difference value;
and when the total image difference value is smaller than the minimum image difference value, increasing the value of the first preset difference value.
20. A video processing apparatus, the apparatus comprising:
the acquisition module is used for acquiring original YUV data corresponding to an original image of a current frame in the original video data; the original YUV data comprises original sub-YUV data corresponding to each pixel point in the original image of the current frame;
The identification module is used for acquiring a target YUV data type corresponding to the original image of the current frame; the target YUV data types include YUV444 type and YUV420 type;
A reorganization module, configured to perform data reorganization processing on each original sub-YUV data according to the target YUV data type, so as to generate target pixel data; the target pixel data comprises a plurality of target sub-data; the data size of each target sub-data is matched with the bus bandwidth; the group number corresponding to the target sub-data is smaller than the group number corresponding to the original sub-YUV data;
The compression module is used for carrying out data compression on the target pixel data corresponding to the N frames of original images in the original video data to generate a YUV comparison table corresponding to the N frames of original images; and the YUV comparison table is transmitted to the target device.
21. An electronic device, comprising:
a memory and a processor in communication with each other, the memory having stored therein computer instructions, the processor executing the computer instructions to perform the video processing method of any of claims 1 to 19.
22. A computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the video processing method of any one of claims 1 to 19.
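For claims 2 to 4, the following minimal Python sketch illustrates the YUV444 split-and-recombine rule, assuming 8-bit samples and a 32-bit bus word: every four 3-byte pixels are repacked into three 4-byte words of the YUVY, UVYU and VYUV types, so the group count falls from four to three.

def repack_yuv444(pixels):
    # pixels: a list of (Y, U, V) tuples whose length is a multiple of 4
    # (pad the row so only complete 4-pixel groups remain before calling).
    words = []
    for i in range(0, len(pixels), 4):
        (y1, u1, v1), (y2, u2, v2), (y3, u3, v3), (y4, u4, v4) = pixels[i:i + 4]
        words.append((y1, u1, v1, y2))  # YUVY word
        words.append((u2, v2, y3, u3))  # UVYU word
        words.append((v3, y4, u4, v4))  # VYUV word
    return words

# Four 3-byte pixels (four partially filled bus groups) become three packed words:
# repack_yuv444([(1, 2, 3), (4, 5, 6), (7, 8, 9), (10, 11, 12)])
# -> [(1, 2, 3, 4), (5, 6, 7, 8), (9, 10, 11, 12)]

On the assumed 32-bit bus, twelve bytes that would otherwise occupy four narrow transfers fill exactly three bus beats, which is the reduction in group number recited in claim 1.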
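For claims 5 to 9, a sketch of the YUV420 path under the assumption of 0-based row and column indices (the claims count pixel points from 1) and row lengths that are multiples of four: even rows pack one full-YUV pixel plus its neighbour's Y into a YUVY word, while odd rows, whose chroma was discarded by the rounding step, pack four Y samples into a YYYY word.

def repack_yuv420(rows):
    # rows: list of rows of (Y, U, V) tuples; after the rounding step, full
    # YUV survives only at even-row/even-column positions (0-based here).
    words = []
    for r, row in enumerate(rows):
        if r % 2 == 0:
            for c in range(0, len(row), 2):
                (y1, u1, v1), (y2, _, _) = row[c], row[c + 1]
                words.append((y1, u1, v1, y2))  # YUVY word (second cyclic data group)
        else:
            for c in range(0, len(row), 4):
                # Only the Y components of odd rows are retained.
                words.append(tuple(row[c + k][0] for k in range(4)))  # YYYY word
    return words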
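For claims 10 to 16, a simplified sketch of how the annotation stream can be built: a target sub-data word whose match in history lies within the first preset difference is withheld while the matched run keeps extending, and an annotation is emitted only when the run breaks. The ('lit', value) and ('ref', start, length) token format is an assumption standing in for the preset mapping relation of the embodiment.

def find_run(haystack, needle, threshold):
    # Index of the first earlier run matching needle elementwise within threshold.
    n = len(needle)
    for i in range(len(haystack) - n + 1):
        if all(abs(a - b) <= threshold for a, b in zip(haystack[i:i + n], needle)):
            return i
    return None

def build_comparison_table(words, threshold=0):
    # words: the target sub-data stream; returns the annotation token list.
    table, history, run = [], [], []
    for w in words:
        prior = history[:len(history) - len(run)]  # history strictly before the pending run
        if find_run(prior, run + [w], threshold) is not None:
            run.append(w)                          # match keeps extending: withhold output
        else:
            if run:                                # run broke: emit one back-reference token
                table.append(("ref", find_run(prior, run, threshold), len(run)))
                run = []
            if find_run(history, [w], threshold) is not None:
                run = [w]                          # the word alone matches: start a new run
            else:
                table.append(("lit", w))           # no match: literal annotation token
        history.append(w)
    if run:                                        # input ended while a run was pending
        table.append(("ref", find_run(history[:len(history) - len(run)], run, threshold), len(run)))
    return table

# Example: build_comparison_table([5, 7, 5, 7, 5])
# -> [('lit', 5), ('lit', 7), ('ref', 0, 2), ('ref', 0, 1)]

Decompression at the target device would then walk the tokens in order, copying literals and expanding back-references against the already reconstructed stream.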