CN115209145A - Video compression method, system, device and readable storage medium - Google Patents

Video compression method, system, device and readable storage medium

Info

Publication number
CN115209145A
Authority
CN
China
Prior art keywords
video data
target
video
block
component
Prior art date
Legal status
Pending
Application number
CN202211118396.0A
Other languages
Chinese (zh)
Inventor
张贞雷
邹晓峰
李拓
满宏涛
刘同强
周玉龙
王贤坤
Current Assignee
Suzhou Inspur Intelligent Technology Co Ltd
Original Assignee
Suzhou Inspur Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Inspur Intelligent Technology Co Ltd filed Critical Suzhou Inspur Intelligent Technology Co Ltd
Priority to CN202211118396.0A
Publication of CN115209145A
Legal status: Pending

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/12Selection from among a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform or selection between H.263 and H.264
    • H04N19/122Selection of transform size, e.g. 8x8 or 2x4x8 DCT; Selection of sub-band transforms of varying structure or type
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/156Availability of hardware or computational resources, e.g. encoding based on power-saving criteria
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/423Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation characterised by memory arrangements
    • H04N19/426Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation characterised by memory arrangements using memory downsizing methods
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Discrete Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The application discloses a video compression method, system, device and readable storage medium, and relates to the field of data processing. When video data similar to the target video data at a first target position in the current-frame video data exists in the area to be compared corresponding to the first target position in the previous-frame video data, the second target position of the similar video data is acquired and repeated mark information is generated, so that the target video data is neither format-converted nor video-compressed according to the second target position and the repeated mark information, and the video display device can directly call the video data at the second target position as the target video data for display. When the video picture at the server host side changes little or does not change, the amount of data subjected to format conversion, video compression and transmission can be greatly reduced, thereby reducing the processing load and power consumption of the baseboard management controller (BMC) chip.

Description

Video compression method, system, device and readable storage medium
Technical Field
The present application relates to the field of data processing, and in particular, to a video compression method, system, device, and readable storage medium.
Background
Referring to fig. 1, fig. 1 is a block diagram of a video compression architecture in the prior art. The conventional video processing flow in a baseboard management controller (BMC) chip is as follows: video data at the server host side is transmitted to the Video Graphics Array (VGA) module of the BMC chip for processing, and original video data in RGB format is generated; the original RGB video data is converted into YUV-format video data by a color space conversion module (RGB2YUV) and stored into first-in-first-out (FIFO) queues; according to the compression-format requirement of the subsequent compression module, the YUV-format video data is fed to the compression module in BLOCK order for compression; after compression is completed, the data is written into a storage module, and an Ethernet controller (EMAC) reads the compressed data from the storage module and transmits the video to a video display device, thereby realizing remote display.
However, in the application scenario of the BMC chip, it is very likely that the video picture at the server host side changes very little or does not change at all (especially in monitoring scenarios). In the prior art, the unchanged or barely changed video data is nevertheless still subjected to color space conversion and video compression, which amounts to processing a large amount of repeated data and results in a large processing load and high power consumption of the BMC chip.
Disclosure of Invention
The application aims to provide a video compression method, system, device and readable storage medium, which can greatly reduce the amount of data subjected to format conversion, compression and transmission when the video picture at the server host side changes very little or does not change, thereby reducing the processing load and power consumption of the baseboard management controller chip.
In order to solve the above technical problem, the present application provides a video compression method, including:
receiving block video data of a current frame;
determining a region to be compared corresponding to a first target position in the block video data of the previous frame corresponding to the block video data of the current frame according to the first target position of the target video data in the block video data of the current frame;
determining whether video data similar to the target video data exists in the video data of the area to be compared;
if such video data exists, acquiring a second target position of the video data similar to the target video data in the video data of the previous frame, and generating repeated mark information, so as not to convert and compress the target video data according to the second target position and the repeated mark information;
and sending the second target position and the repeated mark information to a video display device so that the video display device calls the video data of the second target position as target video data and displays the target video data.
Preferably, determining, according to a first target position of target video data in the block video data of the current frame, a region to be compared corresponding to the first target position in the block video data of the previous frame corresponding to the block video data of the current frame includes:
determining the block video data of a previous frame corresponding to the block video data of a current frame;
and dividing an area in a preset range taking the first target position as the center in the block video data of the previous frame into areas to be compared.
Preferably, the target video data are target pixel points;
dividing an area within a preset range with the first target position as a center in the block video data of the previous frame into areas to be compared, including:
dividing a region formed by a 米-shaped structure (i.e., an eight-direction star pattern) or an N×N grid structure taking the first target position as the center in the block video data of the previous frame into regions to be compared;
wherein the number of pixel points passed through between the first target position and any edge of the region to be compared does not exceed a preset number, and N is an integer not less than 3.
Preferably, after dividing a region formed by a 米-shaped structure or an N×N grid structure centered on the first target position in the block of video data of the previous frame into regions to be compared, the method further includes:
setting one-to-one corresponding coordinates for each pixel point in the area to be compared;
acquiring a second target position of the video data similar to the target video data in the video data of the previous frame, including:
and acquiring coordinates of pixels similar to the target pixels in the video data of the previous frame.
Preferably, the coordinates include an abscissa and an ordinate.
Preferably, after determining whether video data similar to the target video data exists in the video data of the area to be compared, the method further includes:
if not, performing color space conversion on the block video data of the current frame to convert the block video data of the current frame from an RGB format to a YUV format;
compressing the block video data of the current frame in the YUV format to obtain compressed video data of the current frame;
and sending the compressed video data of the current frame to the video display device so as to display a corresponding video picture.
Preferably, the block video data of the current frame is 16 × 16 block video data.
Preferably, compressing the block video data of the current frame in the YUV format to obtain the compressed video data of the current frame includes:
in a YUV420 mode, receiving and compressing the 16×16 Y component, and buffering the 8×8 U component and the 8×8 V component;
after the compression of the 16×16 Y component is completed, sequentially receiving and sequentially compressing the 8×8 U component and the 8×8 V component.
Preferably, compressing the block video data of the current frame in the YUV format to obtain the compressed video data of the current frame includes:
in a YUV422 mode, receiving and compressing the 16×16 Y component, and buffering the 16×8 U component and the 16×8 V component;
after the compression of the 16×16 Y component is completed, sequentially receiving and sequentially compressing the 16×8 U component and the 16×8 V component.
Preferably, compressing the block video data of the current frame in the YUV format to obtain the compressed video data of the current frame includes:
in a YUV444 mode, receiving and compressing the 8×8 Y component of the first group, and buffering the 8×8 Y component of the second group, the 16×16 U component and the 16×16 V component;
after the compression of the 8×8 Y component of the first group is completed, sequentially receiving and sequentially compressing the 8×8 U component of the first group, the 8×8 V component of the first group, the 8×8 Y component of the second group, the 8×8 U component of the second group and the 8×8 V component of the second group.
Preferably, the determining whether video data similar to the target video data exists in the video data of the area to be compared includes:
calculating the similarity between each video data in the area to be compared and the target video data;
determining whether a similarity greater than a similarity threshold exists among the calculated similarities;
if so, judging that the video data with the similarity degree larger than the similarity threshold is similar to the target video data, and acquiring a second target position of the video data similar to the target video data in the video data of the previous frame;
if not, judging that the video data similar to the target video data does not exist in the area to be compared.
Preferably, the target video data is a target pixel point; calculating the similarity between each video data in the area to be compared and the target video data, including:
and calculating the similarity between each pixel point in the area to be compared and the target pixel point.
In order to solve the above technical problem, the present application further provides a video compression system, including:
a video data receiving unit for receiving block video data of a current frame;
the area determining unit is used for determining an area to be compared, corresponding to a first target position, in the block video data of the previous frame corresponding to the block video data of the current frame according to the first target position of the target video data in the block video data of the current frame;
a similar data determining unit, configured to determine whether video data similar to the target video data exists in the video data of the region to be compared;
a position determining unit, configured to, when video data similar to the target video data exists in the video data of the region to be compared, obtain a second target position in the video data of a previous frame of the video data similar to the target video data, and generate repeat flag information, so as not to perform conversion and video compression on the target video data according to the second target position and the repeat flag information;
and the data sending unit is used for sending the second target position and the repeated mark information to a video display device so that the video display device calls the video data of the second target position as target video data and displays the target video data.
In order to solve the above technical problem, the present application further provides a video compression apparatus, including:
a memory for storing a computer program;
a processor for implementing the steps of the video compression method as described above when executing the computer program.
To solve the above technical problem, the present application further provides a computer-readable storage medium, having a computer program stored thereon, where the computer program, when executed by a processor, implements the steps of the video compression method as described above.
The application provides a video compression method, and relates to the field of data processing. In this scheme, when video data similar to the target video data at a first target position in the current-frame video data exists in the area to be compared corresponding to the first target position in the previous-frame video data, the second target position of the similar video data is acquired and repeated mark information is generated, so that the target video data is neither format-converted nor video-compressed according to the second target position and the repeated mark information, and the video display device can directly call the video data at the second target position as the target video data for display. When the video picture at the server host side changes little or does not change, the amount of data subjected to format conversion, video compression and transmission can be greatly reduced, thereby reducing the processing load and power consumption of the baseboard management controller chip.
The application also provides a video compression system, a video compression device and a readable storage medium, which have the same beneficial effects as the video compression method.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings required for describing the prior art and the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art based on these drawings without creative effort.
FIG. 1 is a block diagram of a video compression architecture in the prior art;
FIG. 2 is an exploded view of a block of video data according to the prior art;
fig. 3 is a schematic flowchart of a video compression method provided in the present application;
FIG. 4 is a block diagram of a video compression system according to the present application;
FIG. 5 is a schematic diagram of a region to be compared provided herein;
fig. 6 is an exploded view of RGB video data provided in the present application;
fig. 7 is a block diagram of a video compression system according to the present application;
fig. 8 is a block diagram of a video compression apparatus according to the present application.
Detailed Description
The core of the application is to provide a video compression method, system, device and readable storage medium which, when the video picture at the server host side changes little or does not change, can greatly reduce the amount of data subjected to format conversion, compression and transmission, thereby reducing the processing load and power consumption of the baseboard management controller chip.
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
First, the specific manner of transmitting video data in BLOCK mode in the prior art is described. Referring to fig. 2, fig. 2 is an exploded view of block video data in the prior art. Each small square on the right side of fig. 2 represents an 8×8 block of the pixels on the left side of fig. 2, the large square on the right side represents 16×16 pixels, and the rectangular box represents 8×16 pixels. Cb is the U component and Cr is the V component.
Taking YUV420 as an example, the Y blocks represent the Y components of four 8×8 pixel blocks, the Cb block represents the U component of one 8×8 block, and the Cr block represents the V component of one 8×8 block.
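For illustration only, the Python sketch below tallies the sample counts of one such YUV420 macroblock, assuming the 16×16 luma / two 8×8 chroma layout of FIG. 2 described above (the helper name is hypothetical; the actual design is hardware logic, not software).

```python
# Hypothetical helper: sample counts of one 16x16 macroblock in YUV420,
# following the decomposition of FIG. 2 (four 8x8 Y blocks, one 8x8 Cb, one 8x8 Cr).
def yuv420_macroblock_samples(block=16, sub=8):
    y_samples = block * block                 # four 8x8 luma blocks = 256 samples
    cb_samples = sub * sub                    # one 8x8 U (Cb) block = 64 samples
    cr_samples = sub * sub                    # one 8x8 V (Cr) block = 64 samples
    return y_samples, cb_samples, cr_samples  # 384 samples, half of the 768 raw RGB samples

print(yuv420_macroblock_samples())  # (256, 64, 64)
```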
The detailed video data processing flow includes the following steps. The VGA writes the original video data into an address space of a DDR (double data rate synchronous dynamic random access memory); for example, the space from address 0x1000_0000 to 0x2000_0000 is set, and after the tail of the space has been written, the write pointer starts writing again from 0x1000_0000. The subsequent stage needs to read the written original video data in the DDR in time, so as to avoid the situation that original video data in the DDR address space is overwritten before being read. Then, color space conversion is carried out by the color space conversion module, converting the original video data from RGB format to YUV format. Next, the Y, U and V data are cached in FIFOs using on-chip storage resources. According to the requirement of BLOCK format conversion, a FIFO array composed of 16 Y_FIFOs, 16 U_FIFOs and 16 V_FIFOs is needed. In the prior art, BLOCK_CONVERT (the block conversion module) receives the read-data control information sent by the video compression module; it should be noted that the block conversion module does not care about the read address sent by the video compression module, but generates the read-write control logic by itself.
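A minimal sketch of the circular write behavior just described, using the example address window 0x1000_0000 to 0x2000_0000 from the text; the class name and the 64-byte burst granularity are illustrative assumptions.

```python
# Illustrative sketch of the DDR write-pointer wrap-around described above.
# The address window comes from the text; the 64-byte burst size is an assumption.
class DdrRingWriter:
    BASE = 0x1000_0000
    END = 0x2000_0000

    def __init__(self, burst_bytes=64):
        self.wr_ptr = self.BASE
        self.burst = burst_bytes

    def write_burst(self):
        addr = self.wr_ptr
        self.wr_ptr += self.burst
        if self.wr_ptr >= self.END:   # tail of the address space reached:
            self.wr_ptr = self.BASE   # start writing again from 0x1000_0000
        return addr                   # the reader must consume this address before it is overwritten
```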
Specifically, the YUV write logic is as follows. In YUV420 mode (all Y data is retained; only the U/V data of even rows and even columns is retained):
writing the Y data of line 0/16/32/48 … into Y_FIFO_0;
writing the Y data of line 1/17/33/49 … into Y_FIFO_1;
writing the Y data of line 2/18/34/50 … into Y_FIFO_2;
……
writing the Y data of line 15/31/47/63 … into Y_FIFO_15.
Writing the even-column U data of line 0/16/32/48 … into U_FIFO_0;
writing the even-column U data of line 2/18/34/50 … into U_FIFO_1;
……
writing the even-column U data of line 14/30/46/62 … into U_FIFO_7.
Writing the even-column V data of line 0/16/32/48 … into V_FIFO_0;
writing the even-column V data of line 2/18/34/50 … into V_FIFO_1;
……
writing the even-column V data of line 14/30/46/62 … into V_FIFO_7.
In YUV422 mode (all Y data is retained; only the even-column U/V data is retained):
writing the Y data of line 0/16/32/48 … into Y_FIFO_0;
writing the Y data of line 1/17/33/49 … into Y_FIFO_1;
writing the Y data of line 2/18/34/50 … into Y_FIFO_2;
……
writing the Y data of line 15/31/47/63 … into Y_FIFO_15.
Writing the even-column U data of line 0/16/32/48 … into U_FIFO_0;
writing the even-column U data of line 1/17/33/49 … into U_FIFO_1;
writing the even-column U data of line 2/18/34/50 … into U_FIFO_2;
……
writing the even-column U data of line 15/31/47/63 … into U_FIFO_15.
Writing the even-column V data of line 0/16/32/48 … into V_FIFO_0;
writing the even-column V data of line 1/17/33/49 … into V_FIFO_1;
writing the even-column V data of line 2/18/34/50 … into V_FIFO_2;
……
writing the even-column V data of line 15/31/47/63 … into V_FIFO_15.
In YUV444 mode (the Y/U/V data of all rows and columns is retained):
writing the Y data of line 0/8/16/24 … into Y_FIFO_0;
writing the Y data of line 1/9/17/25 … into Y_FIFO_1;
writing the Y data of line 2/10/18/26 … into Y_FIFO_2;
……
writing the Y data of line 7/15/23/31 … into Y_FIFO_7.
Writing the U data of line 0/8/16/24 … into U_FIFO_0;
writing the U data of line 1/9/17/25 … into U_FIFO_1;
writing the U data of line 2/10/18/26 … into U_FIFO_2;
……
writing the U data of line 7/15/23/31 … into U_FIFO_7.
Writing the V data of line 0/8/16/24 … into V_FIFO_0;
writing the V data of line 1/9/17/25 … into V_FIFO_1;
writing the V data of line 2/10/18/26 … into V_FIFO_2;
……
writing the V data of line 7/15/23/31 … into V_FIFO_7.
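The row-to-FIFO distribution above can be summarized by a small helper. This is only a sketch of the write-side rule (with the V channel of YUV420 following the same even-row/even-column rule as U); the function name is hypothetical and the real logic is implemented in hardware.

```python
# Sketch of the BLOCK_CONVERT write-side distribution described above.
# Returns the FIFO index a sample of the given component lands in, or None if it is dropped.
def fifo_index(mode, component, row, col):
    if component == "Y":                        # luma is always retained
        return row % (8 if mode == "YUV444" else 16)
    if mode == "YUV420":                        # keep even rows and even columns of U/V
        return (row % 16) // 2 if row % 2 == 0 and col % 2 == 0 else None
    if mode == "YUV422":                        # keep even columns of U/V, all rows
        return row % 16 if col % 2 == 0 else None
    if mode == "YUV444":                        # keep everything
        return row % 8
    raise ValueError(mode)

assert fifo_index("YUV420", "Y", 17, 3) == 1    # line 17 -> Y_FIFO_1
assert fifo_index("YUV420", "U", 2, 4) == 1     # even row/column -> U_FIFO_1
assert fifo_index("YUV422", "V", 1, 5) is None  # odd column is dropped
```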
Specifically, the YUV read logic is as follows. In YUV420 mode, the FIFO array does not care about the read address sent by the video compression module but only about its read enable, and sequentially reads Y_FIFO_0 16 times, Y_FIFO_1 16 times, …, Y_FIFO_15 16 times, then U_FIFO_0 8 times, U_FIFO_1 8 times, …, U_FIFO_7 8 times, then V_FIFO_0 8 times, V_FIFO_1 8 times, …, V_FIFO_7 8 times, and then the cycle repeats.
In YUV422 mode, the FIFO array likewise follows only the read enable sent by the video compression module, and sequentially reads Y_FIFO_0 16 times, Y_FIFO_1 16 times, …, Y_FIFO_15 16 times, then U_FIFO_0 8 times, U_FIFO_1 8 times, …, U_FIFO_15 8 times, then V_FIFO_0 8 times, V_FIFO_1 8 times, …, V_FIFO_15 8 times.
In YUV444 mode, the FIFO array likewise follows only the read enable sent by the video compression module, and sequentially reads Y_FIFO_0 8 times, Y_FIFO_1 8 times, …, Y_FIFO_7 8 times, then U_FIFO_0 8 times, U_FIFO_1 8 times, …, U_FIFO_7 8 times, then V_FIFO_0 8 times, V_FIFO_1 8 times, …, V_FIFO_7 8 times.
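Stated as data, the read schedules above look as follows; this sketch assumes the per-FIFO read counts reconstructed from the text (16 reads per Y_FIFO in the 420/422 modes, 8 reads everywhere else), and the function is purely illustrative.

```python
# Sketch of the read-side schedule described above: (FIFO name, number of reads) in order.
def read_schedule(mode):
    if mode == "YUV420":
        return ([(f"Y_FIFO_{i}", 16) for i in range(16)] +
                [(f"U_FIFO_{i}", 8) for i in range(8)] +
                [(f"V_FIFO_{i}", 8) for i in range(8)])
    if mode == "YUV422":
        return ([(f"Y_FIFO_{i}", 16) for i in range(16)] +
                [(f"U_FIFO_{i}", 8) for i in range(16)] +
                [(f"V_FIFO_{i}", 8) for i in range(16)])
    if mode == "YUV444":
        return [(f"{c}_FIFO_{i}", 8) for c in ("Y", "U", "V") for i in range(8)]
    raise ValueError(mode)

print(read_schedule("YUV420")[:2])  # [('Y_FIFO_0', 16), ('Y_FIFO_1', 16)]
```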
As can be seen, it is very likely that the video picture at the server host side changes very little or does not change (especially in monitoring scenarios), but in the prior art all of the unchanged or barely changed video data is still subjected to color space conversion and video compression, which amounts to processing a large amount of repeated data and results in a large processing load and high power consumption of the baseboard management controller chip.
Referring to fig. 3 and fig. 4, fig. 3 is a schematic flowchart of a video compression method provided in the present application, and fig. 4 is a block diagram of a system for video compression provided in the present application, where the method includes:
S31: receiving block video data of a current frame;
S32: determining a region to be compared corresponding to a first target position in the block video data of the previous frame corresponding to the block video data of the current frame according to the first target position of the target video data in the block video data of the current frame;
whether the pictures corresponding to the block video data of the current frame and the block video data of the previous frame are completely the same or whether the change of the pictures is small needs to be judged. Therefore, in the present application, when receiving the block video data of the current frame, a first target position corresponding to the target video data in the block video data of the current frame to be compared needs to be determined first, and then a region to be compared corresponding to the first target position in the block video data of the previous frame is determined according to the first target position, so as to compare the block video data of the current frame with the block video data of the previous frame.
It should be noted that, in the present application, instead of directly comparing the video data in the first target position of the block video data of the current frame with the video data in the first target position of the block video data of the previous frame, the video data in the block video data of the current frame is compared with the video data in the area to be compared, corresponding to the first target position, of the block video data of the previous frame, and the purpose of the comparison is as follows: the method not only judges whether the video picture of the current frame is the same as the video picture of the previous frame, but also judges whether the video pictures are similar, thereby further reducing the data volume for converting and compressing the video data and reducing the processing capacity of a chip.
As a preferred embodiment, determining a region to be compared corresponding to a first target position in block video data of a previous frame corresponding to block video data of a current frame according to the first target position of target video data in the block video data of the current frame includes:
determining block video data of a previous frame corresponding to the block video data of the current frame;
and dividing an area in a preset range taking the first target position as the center in the block video data of the previous frame into areas to be compared.
This embodiment defines a specific implementation of determining the area to be compared. Specifically, considering that when the change of a video picture is small, the change usually occurs only within a certain range (for example, a monitoring picture changes slightly due to camera shake), the video data similar or identical to the target video data is usually located within a certain area centered on the first target position.
Therefore, in the present application, an area within a preset range centered on the first target position in the block video data of the previous frame is divided into areas to be compared.
The area to be compared may be a range centered on the first target position with a radius of a preset length, or a region centered on the first target position that contains a preset number of pixels, and the like, which is not limited herein.
As a preferred embodiment, the target video data is a target pixel point;
dividing an area in a preset range with a first target position as a center in block video data of a previous frame into areas to be compared, wherein the area comprises:
dividing a region formed by a 米-shaped structure (i.e., an eight-direction star pattern) or an N×N grid structure taking the first target position as the center in the block video data of the previous frame into regions to be compared;
wherein the number of pixel points passed through between the first target position and any edge of the region to be compared does not exceed a preset number, and N is an integer not less than 3.
This embodiment defines a specific implementation of the region to be compared. Specifically, when the target video data is a target pixel point, the region to be compared in the present application may be a region formed by a 米-shaped structure centered on the first target position, or a region formed by an N×N grid structure centered on the first target position. In one embodiment, the region to be compared includes 9 pixels with the target pixel as the center. Specifically, referring to fig. 5, fig. 5 is a schematic diagram of a region to be compared provided in the present application.
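A minimal sketch of the two candidate-region shapes follows; the 米-shaped pattern is taken here to mean the eight compass directions around the center plus the center itself, and the `reach` parameter and both function names are illustrative assumptions rather than terms used by the application.

```python
# Sketch of the two region-to-be-compared shapes described above, as coordinate sets
# around the first target position (cx, cy) in the previous-frame block.
def star_region(cx, cy, reach=1):
    """米-shaped region: the center plus the 8 compass directions, `reach` pixels out."""
    dirs = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    coords = {(cx, cy)}
    for dx, dy in dirs:
        for r in range(1, reach + 1):
            coords.add((cx + dx * r, cy + dy * r))
    return coords

def grid_region(cx, cy, n=3):
    """N x N grid region centered on (cx, cy); N is an odd integer >= 3."""
    half = n // 2
    return {(cx + dx, cy + dy) for dx in range(-half, half + 1)
                               for dy in range(-half, half + 1)}

assert grid_region(1, 1, 3) == star_region(1, 1, 1)  # the 9-pixel region of FIG. 5
```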
S33: determining whether video data similar to the target video data exists in the video data of the area to be compared;
S34: if such video data exists, acquiring a second target position of the video data similar to the target video data in the video data of the previous frame, and generating repeated mark information, so as not to convert and compress the target video data according to the second target position and the repeated mark information;
specifically, after the area to be compared is determined, it is determined whether video data similar to the target video data exists in the video data in the area to be compared, and if so, it indicates that the video data of the current frame is the same as the picture corresponding to the video data of the previous frame or the picture change is small. Specifically, when the video data of the first target position of the current frame is the same as the video data of the first target position of the previous frame, it indicates that the video pictures of the two frames are the same, that is, the video pictures are not changed. When the video data of the first target position of the current frame is similar to the video data of the other positions except the first target position in the area to be compared of the previous frame, the video pictures of the two frames are similar, that is, the video pictures have smaller change.
Specifically, when video data similar to the video data at the first target position exists in the video data in the area to be compared, a second target position of the similar video data is acquired, so that the video data of the current frame is subsequently processed based on the second target position.
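Putting steps S31 to S34 together, a hedged sketch of the per-pixel decision might look as follows; `is_similar`, `convert_and_compress` and the dictionary-based block representation are hypothetical placeholders, not names used by the application.

```python
# Illustrative sketch of the S31-S34 decision flow for one target pixel of the current block.
# `cur_block` and `prev_block` map (x, y) coordinates to pixel values;
# `is_similar` and `convert_and_compress` are hypothetical callables.
def process_target_pixel(cur_block, prev_block, first_pos, is_similar, convert_and_compress):
    cx, cy = first_pos
    candidates = [(cx + dx, cy + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
    for second_pos in candidates:  # the 3x3 region to be compared of FIG. 5
        if second_pos in prev_block and is_similar(cur_block[first_pos], prev_block[second_pos]):
            # similar data found: skip color-space conversion and compression and
            # send only the repeated mark information plus the second target position
            return {"repeat": True, "second_pos": second_pos}
    # no similar data: fall back to the normal RGB -> YUV conversion and compression path
    return {"repeat": False, "compressed": convert_and_compress(cur_block[first_pos])}
```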
As a preferred embodiment, after dividing a region formed by a 米-shaped structure or an N×N grid structure centered on the first target position in the block video data of the previous frame into regions to be compared, the method further includes:
setting one-to-one corresponding coordinates for each pixel point in the area to be compared;
acquiring a second target position of video data similar to the target video data in the video data of the previous frame, including:
and acquiring coordinates of pixel points similar to the target pixel points in the video data of the previous frame.
Specifically, when the target video data is a target pixel point and the region to be compared is a region formed by a 米-shaped structure or an N×N grid structure centered on the first target position, the region to be compared includes a plurality of pixel points with the first target position as the center. In this case, one-to-one corresponding coordinates are set for each pixel point in the region to be compared. Specifically, referring to fig. 5, the coordinate of the target pixel point in fig. 5 is (1,1), and the adjacent pixel points are taken as the region to be compared, represented as (0,0) to (2,2) in fig. 5.
As a preferred embodiment, the coordinates include an abscissa and an ordinate. At this time, a specific way of acquiring the second target position in the video data of the previous frame of the video data similar to the target video data is as follows: coordinates (which may be specifically an abscissa and an ordinate) of video data similar to the target video data in the area to be compared are acquired.
As a preferred embodiment, the determining whether video data similar to the target video data exists in the video data of the area to be compared includes:
calculating the similarity of each video data in the area to be compared and the target video data;
determining whether a similarity greater than a similarity threshold exists among the calculated similarities;
if so, judging that the video data with the similarity degree larger than the similarity threshold is similar to the target video data, and performing a step of acquiring a second target position of the video data similar to the target video data in the video data of the previous frame;
if not, judging that the video data similar to the target video data does not exist in the area to be compared.
Specifically, this embodiment defines a specific implementation of determining whether video data similar to the target video data exists in the video data of the area to be compared. First, the similarity between each video data in the area to be compared and the target video data is calculated; when a similarity greater than the similarity threshold exists among the calculated similarities, the video data corresponding to that similarity is judged to be similar to the target video data, and the second target position of that video data in the video data of the previous frame is acquired, so that the video data can subsequently be processed based on the second target position.
As a preferred embodiment, the target video data is a target pixel point; calculating the similarity between each video data in the area to be compared and the target video data, wherein the similarity comprises the following steps:
and calculating the similarity between each pixel point in the region to be compared and the target pixel point.
Specifically, the similarity is calculated in the present application by comparing the pixel value of each pixel point in the area to be compared with the pixel value of the target pixel point. The similarity threshold may be, but is not limited to, 90%.
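The text does not fix a particular similarity measure, so the sketch below assumes a simple per-channel closeness of RGB pixel values checked against the 90% threshold mentioned above; both the metric and the helper names are illustrative.

```python
# Sketch of one possible pixel similarity check; the metric itself is an assumption,
# only the 90% threshold comes from the text.
def pixel_similarity(p, q, max_val=255):
    """p, q: (R, G, B) tuples. Returns a value in [0, 1], where 1.0 means identical."""
    diff = sum(abs(a - b) for a, b in zip(p, q)) / (3 * max_val)
    return 1.0 - diff

def is_similar(p, q, threshold=0.90):
    return pixel_similarity(p, q) > threshold

assert is_similar((120, 64, 200), (122, 60, 205))   # small difference -> similar
assert not is_similar((0, 0, 0), (255, 255, 255))   # opposite extremes -> not similar
```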
S35: sending the second target position and the repeated mark information to the video display device, so that the video display device calls the video data of the second target position as the target video data and displays the target video data.
After the second target position corresponding to the similar video data is obtained, repeated mark information is further generated, and the second target position and the repeated mark information are sent to the video display device. The repeated mark information and the second target position may be formatted as 32 bits, specifically {16'hFFFF, 8'hx, 8'hy}, where 16'hFFFF is the repeated mark information, 8'hx is the abscissa of the second target position, and 8'hy is the ordinate of the second target position; the second target position is, for example, (0,0) or (2,1) in fig. 5. In this case, the video data marked as similar data does not go through the color space conversion and video compression steps; instead, the corresponding compressed data is obtained directly from the compressed-result cache of the previous frame according to the position information of the second target position. The video display device may directly retrieve the video data corresponding to the second target position from the display picture of the previous frame and display it directly.
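A sketch of packing and unpacking the 32-bit word just described; it assumes a 16-bit all-ones marker (0xFFFF) followed by 8-bit x and y coordinates, following the {repeat mark, x, y} layout given in the text, and the function names are illustrative.

```python
# Sketch of the 32-bit {repeated mark, x, y} word described above.
# Assumes a 16-bit all-ones marker followed by 8-bit x and 8-bit y coordinates.
REPEAT_MARK = 0xFFFF

def pack_repeat_word(x, y):
    assert 0 <= x < 256 and 0 <= y < 256
    return (REPEAT_MARK << 16) | (x << 8) | y

def unpack_repeat_word(word):
    if (word >> 16) != REPEAT_MARK:
        return None                     # not a repeat marker: treat as ordinary data
    return (word >> 8) & 0xFF, word & 0xFF

assert unpack_repeat_word(pack_repeat_word(2, 1)) == (2, 1)  # second target position (2, 1)
print(hex(pack_repeat_word(2, 1)))                           # 0xffff0201
```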
In the manner described in the present application, when the video pictures are the same or change only slightly, the workload of color space conversion and video compression on the video data can be reduced; in addition, the data transmitted between the modules changes from the original video data to the repeated mark information and the second target position, so that the amount of transmitted data is reduced.
On the basis of the above-described embodiment:
as a preferred embodiment, after determining whether video data similar to the target video data exists in the video data of the area to be compared, the method further includes:
if the current frame does not exist, color space conversion is carried out on the block video data of the current frame so as to convert the block video data of the current frame from an RGB format to a YUV format;
compressing the block video data of the current frame in the YUV format to obtain the compressed video data of the current frame;
and sending the compressed video data of the current frame to a video display device to display a corresponding video picture.
Specifically, when it is determined that no video data similar to the target video data exists in the area to be compared, the block video data of the current frame is judged to be neither the same as nor similar to the block video data of the previous frame, that is, the corresponding video pictures are completely different or differ greatly. In this case, the normal processing flow is performed on the block video data of the current frame, which includes: performing color space conversion on the block video data of the current frame to convert it from RGB format to YUV format, then compressing the YUV-format block video data of the current frame to obtain the compressed video data of the current frame, and storing the compressed video data of the current frame, specifically into the storage module in fig. 4; when the video display device needs to display it, the storage module is read and the read data is transmitted to the video display device through the EMAC to display the corresponding video picture.
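The application does not spell out the conversion coefficients, so the sketch below uses the common BT.601 full-range formulas purely as an assumption for the RGB-to-YUV step performed when no similar data is found.

```python
# Sketch of the RGB -> YUV color-space conversion step; the BT.601 full-range
# coefficients below are an assumption, the application does not specify them.
def rgb_to_yuv(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.169 * r - 0.331 * g + 0.500 * b + 128
    v = 0.500 * r - 0.419 * g - 0.081 * b + 128
    clip = lambda x: max(0, min(255, int(round(x))))
    return clip(y), clip(u), clip(v)

print(rgb_to_yuv(255, 0, 0))  # a pure red pixel, roughly (76, 85, 255)
```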
In addition, it should be noted that the above-described block video data of the current frame and the block video data of the previous frame are both RGB video data. In the video data transmission process, the block video data of the current frame is stored in the RGB FIFO queue in fig. 4, the block video data of the previous frame is stored in the RGB buffer in fig. 4, the comparison module in fig. 4 reads the RGB FIFO queue and the RGB buffer, respectively, to obtain the block video data of the current frame and the block video data of the previous frame, and then compares the two frames of block video data.
As a preferred embodiment, the block video data of the current frame is 16 × 16 block video data.
Specifically, the block video data of the current frame in the present application is 16×16 block video data. That is, the raw RGB block data is read in units of 16×16, and since video compression has different requirements on the YUV-format block video data for different compression formats (YUV444/YUV422/YUV420), the video data of the Y/U/V components needs to be buffered first.
Referring to fig. 6, fig. 6 is an exploded view of RGB video data provided in the present application. The 16×16 block video data is decomposed into Y, U and V components, and each of the Y, U and V components is divided into 4 parts, each with a size of 8×8.
As a preferred embodiment, compressing block video data of a current frame in YUV format to obtain compressed video data of the current frame includes:
in a YUV420 mode, receiving and compressing the 16×16 Y component, and buffering the 8×8 U component and the 8×8 V component;
after the compression of the 16×16 Y component is completed, sequentially receiving and sequentially compressing the 8×8 U component and the 8×8 V component.
Specifically, described with each module in fig. 4 as the execution subject: in the YUV420 mode of the video compression module, the 16×16 Y component is first sent to the video compression module, the 8×8 U component is stored in the U component buffer module, and the 8×8 V component is stored in the V component buffer module; after the transmission of the 16×16 Y component is completed, the 8×8 U component and the 8×8 V component are transmitted to the video compression module in turn.
Specifically, under the YUV420 compression format: after the block video data of the current frame is subjected to color space conversion, the 16×16 Y component is first transmitted to the video compression module of the subsequent stage. Only the U/V components of even rows and even columns are retained, buffered by U_BUFFER (the U component buffer module) and V_BUFFER (the V component buffer module) respectively. After the Y component is transmitted, the U component (8×8) is transmitted to the video compression module of the subsequent stage, and then the V component (8×8) is transmitted to the video compression module of the subsequent stage.
As a preferred embodiment, compressing block video data of a current frame in YUV format to obtain compressed video data of the current frame includes:
in a YUV422 mode, receiving and compressing the 16×16 Y component, and buffering the 16×8 U component and the 16×8 V component;
after the compression of the 16×16 Y component is completed, sequentially receiving and sequentially compressing the 16×8 U component and the 16×8 V component.
Specifically, described with each module in fig. 4 as the execution subject: in the YUV422 mode, the 16×16 Y component is first sent to the video compression module, the 16×8 U component is stored in the U component buffer module, and the 16×8 V component is stored in the V component buffer module; after the transmission of the 16×16 Y component is completed, the 16×8 U component and the 16×8 V component are sequentially transmitted to the video compression module.
Specifically, under the YUV422 compression format: after the block video data of the current frame is subjected to color space conversion, the 16×16 Y component is first transmitted to the video compression module of the subsequent stage. Only the even-column U/V components are retained, buffered by U_BUFFER (the U component buffer module) and V_BUFFER (the V component buffer module) respectively. After the transmission of the Y component is completed, the U component (16×8) is transmitted to the video compression module of the subsequent stage, and then the V component (16×8) is transmitted to the video compression module of the subsequent stage.
As a preferred embodiment, compressing the block video data of the current frame in YUV format to obtain the compressed video data of the current frame includes:
in the YUV444 mode, receiving and compressing the 8×8 Y component of the first group, and buffering the 8×8 Y component of the second group, the 16×16 U component and the 16×16 V component;
after the compression of the 8×8 Y component of the first group is completed, sequentially receiving and sequentially compressing the 8×8 U component of the first group, the 8×8 V component of the first group, the 8×8 Y component of the second group, the 8×8 U component of the second group and the 8×8 V component of the second group.
Specifically, described with each module in fig. 4 as the execution subject: in the YUV444 mode, the 8×8 Y component of the first group is first sent to the video compression module, while the 8×8 Y component of the second group is stored in the Y component buffer module, the 16×16 U component is stored in the U component buffer module, and the 16×16 V component is stored in the V component buffer module;
after the 8×8 Y component of the first group has been sent, the 8×8 U component of the first group, the 8×8 V component of the first group, the 8×8 Y component of the second group, the 8×8 U component of the second group and the 8×8 V component of the second group are sent to the video compression module in sequence.
Specifically, under the YUV444 compression format: after the block video data (16×16) of the current frame is subjected to color space conversion, the Y component of the first 8×8 group is first transmitted to the video compression module of the subsequent stage. The Y/U/V components of all rows and columns are retained and buffered by Y_BUFFER, U_BUFFER and V_BUFFER respectively. After the transmission of the first group of Y components is completed, the first group of U components (8×8) is transmitted to the video compression module of the subsequent stage, and then the first group of V components (8×8). The Y/U/V components of the 2nd/3rd/4th groups are then transmitted in turn.
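For reference, the component ordering described for the three modes above can be written out as a schedule; block sizes follow the text, and the function is only an illustrative summary of the order in which pieces reach the compression module.

```python
# Sketch summarizing the order in which the Y/U/V pieces of one 16x16 block are handed
# to the video compression module, per the three modes described above.
def compression_schedule(mode):
    if mode == "YUV420":
        return [("Y", 16, 16), ("U", 8, 8), ("V", 8, 8)]
    if mode == "YUV422":
        return [("Y", 16, 16), ("U", 16, 8), ("V", 16, 8)]
    if mode == "YUV444":  # four groups of 8x8 Y/U/V pieces, transmitted group by group
        return [(comp, 8, 8) for group in range(4) for comp in ("Y", "U", "V")]
    raise ValueError(mode)

print(compression_schedule("YUV444")[:6])
# [('Y', 8, 8), ('U', 8, 8), ('V', 8, 8), ('Y', 8, 8), ('U', 8, 8), ('V', 8, 8)]
```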
In summary, with the video compression method in the present application, when the video picture at the server host side changes very little or does not change, the amount of data subjected to format conversion, compression and transmission can be greatly reduced, thereby reducing the processing load and power consumption of the baseboard management controller chip.
To solve the above technical problem, the present application further provides a video compression system, please refer to fig. 7, fig. 7 is a block diagram of a structure of the video compression system provided in the present application, and the system includes:
a video data receiving unit 71 for receiving block video data of a current frame;
a region determining unit 72, configured to determine, according to a first target position of target video data in block video data of a current frame, a region to be compared, corresponding to the first target position, in block video data of a previous frame corresponding to the block video data of the current frame;
a similar data determining unit 73 for determining whether video data similar to the target video data exists in the video data of the area to be compared;
a position determining unit 74, configured to, when video data similar to the target video data exists in the video data of the area to be compared, obtain a second target position in the video data of the previous frame of the video data similar to the target video data, and generate repeat flag information, so as not to perform conversion and video compression on the target video data according to the second target position and the repeat flag information;
a data sending unit 75, configured to send the second target position and the repeated mark information to the video display apparatus, so that the video display apparatus calls the video data at the second target position as target video data and displays the target video data.
For the introduction of the video compression system, please refer to the above embodiments, which are not described herein again.
To solve the above technical problem, the present application further provides a video compression apparatus, please refer to fig. 8, where fig. 8 is a block diagram of a structure of the video compression apparatus provided in the present application, and the apparatus includes:
a memory 81 for storing a computer program;
a processor 82 for implementing the steps of the video compression method as described above when executing the computer program.
For the introduction of the video compression apparatus, please refer to the above embodiments, which are not described herein again.
To solve the above technical problem, the present application further provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the video compression method as described above. For the introduction of the computer-readable storage medium, reference is made to the above embodiments, which are not repeated herein.
It is further noted that, in the present specification, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of another identical element in a process, method, article, or apparatus that comprises the element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (15)

1. A method of video compression, comprising:
receiving block video data of a current frame;
determining a region to be compared corresponding to a first target position in block video data of a previous frame corresponding to the block video data of the current frame according to the first target position of the target video data in the block video data of the current frame;
determining whether video data similar to the target video data exists in the video data of the area to be compared;
if such video data exists, acquiring a second target position of the video data similar to the target video data in the video data of the previous frame, and generating repeated mark information, so as not to convert and compress the target video data according to the second target position and the repeated mark information;
and sending the second target position and the repeated mark information to a video display device so that the video display device calls the video data of the second target position as target video data and displays the target video data.
2. The video compression method of claim 1, wherein determining the area to be compared corresponding to the first target position in the block video data of the previous frame corresponding to the block video data of the current frame according to the first target position of the target video data in the block video data of the current frame comprises:
determining the block video data of a previous frame corresponding to the block video data of a current frame;
and dividing an area in a preset range taking the first target position as the center in the block video data of the previous frame into areas to be compared.
3. The video compression method of claim 2, wherein the target video data is a target pixel point;
dividing an area within a preset range with the first target position as a center in the block video data of the previous frame into areas to be compared, including:
dividing a region formed by a 米-shaped structure or an N×N grid structure taking the first target position as the center in the block of video data of the previous frame into regions to be compared;
wherein the number of pixel points passed through between the first target position and any edge of the area to be compared does not exceed a preset number, and N is an integer not less than 3.
4. The video compression method of claim 3, wherein after dividing a region, which is formed by a 米-shaped structure or an N×N grid structure centered on the first target position, in the block of video data of a previous frame into regions to be compared, further comprising:
setting one-to-one corresponding coordinates for each pixel point in the area to be compared;
acquiring a second target position of video data similar to the target video data in the video data of the previous frame, including:
and acquiring coordinates of pixels similar to the target pixel in the video data of the previous frame.
5. The video compression method of claim 4, wherein the coordinates comprise an abscissa and an ordinate.
6. The video compression method of claim 1, wherein after determining whether video data similar to the target video data exists in the video data of the area to be compared, further comprising:
if not, performing color space conversion on the block video data of the current frame to convert the block video data of the current frame from an RGB format to a YUV format;
compressing the block video data of the current frame in the YUV format to obtain compressed video data of the current frame;
and sending the compressed video data of the current frame to the video display device, so that the video display device displays a corresponding video picture.
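Claim 6 states only that an RGB-to-YUV conversion precedes compression; the coefficients below are the common BT.601 full-range matrix, assumed purely for illustration.

```python
def rgb_to_yuv(r, g, b):
    """Convert one 8-bit RGB pixel to YUV (BT.601 full range, an assumption)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.169 * r - 0.331 * g + 0.500 * b + 128.0
    v = 0.500 * r - 0.419 * g - 0.081 * b + 128.0
    return y, u, v

# A mid-grey pixel maps to Y = 128 with neutral chroma.
print(rgb_to_yuv(128, 128, 128))   # approximately (128.0, 128.0, 128.0)
```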
7. The video compression method of claim 6, wherein the block video data of the current frame is 16×16 block video data.
8. The video compression method of claim 7, wherein compressing the block video data of the current frame in YUV format to obtain the compressed video data of the current frame comprises:
in a YUV420 mode, receiving and compressing a 16×16 Y component, and caching an 8×8 U component and an 8×8 V component;
after compression of the 16×16 Y component is completed, sequentially receiving and sequentially compressing the 8×8 U component and the 8×8 V component.
9. The video compression method of claim 7, wherein compressing the block video data of the current frame in YUV format to obtain the compressed video data of the current frame comprises:
in a YUV422 mode, receiving and compressing a 16×16 Y component, and caching a 16×8 U component and a 16×8 V component;
after compression of the 16×16 Y component is completed, sequentially receiving and sequentially compressing the 16×8 U component and the 16×8 V component.
10. The video compression method of claim 7, wherein compressing the block video data of the current frame in YUV format to obtain the compressed video data of the current frame comprises:
in a YUV444 mode, receiving and compressing an 8×8 Y component of a first group, and caching an 8×8 Y component of a second group, a 16×16 U component and a 16×16 V component;
after compression of the 8×8 Y component of the first group is completed, sequentially receiving and sequentially compressing the 8×8 U component of the first group, the 8×8 V component of the first group, the 8×8 Y component of the second group, the 8×8 U component of the second group and the 8×8 V component of the second group.
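The component ordering of claims 8-10 can be summarized as a per-mode schedule. The table below copies the component sizes stated in the claims (the YUV444 entries follow the sequential order spelled out in claim 10), and the printing helper is only a stand-in for the actual receive/cache/compress pipeline, whose hardware details the claims do not fix.

```python
# (component name, width, height) in compression order; the first entry is
# compressed while the remaining ones are cached, then compressed in sequence.
SCHEDULE = {
    "YUV420": [("Y", 16, 16), ("U", 8, 8), ("V", 8, 8)],
    "YUV422": [("Y", 16, 16), ("U", 16, 8), ("V", 16, 8)],
    "YUV444": [("Y group 1", 8, 8), ("U group 1", 8, 8), ("V group 1", 8, 8),
               ("Y group 2", 8, 8), ("U group 2", 8, 8), ("V group 2", 8, 8)],
}

def describe(mode):
    """Print the receive/compress order for one 16x16 block in the given mode."""
    first, *rest = SCHEDULE[mode]
    print(f"{mode}: receive and compress {first[0]} ({first[1]}x{first[2]}), caching the rest")
    for name, w, h in rest:
        print(f"  then receive and compress {name} ({w}x{h})")

describe("YUV420")
```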
11. The video compression method according to any one of claims 1 to 10, wherein determining whether video data similar to the target video data exists in the video data of the area to be compared comprises:
calculating the similarity between each video data in the area to be compared and the target video data;
determining whether a similarity greater than a similarity threshold exists among the calculated similarities;
if so, determining that the video data whose similarity is greater than the similarity threshold is similar to the target video data, and acquiring the second target position, in the video data of the previous frame, of the video data similar to the target video data;
and if not, determining that no video data similar to the target video data exists in the area to be compared.
12. The video compression method of claim 11, wherein the target video data is a target pixel point, and calculating the similarity between each video data in the area to be compared and the target video data comprises:
and calculating the similarity between each pixel point in the area to be compared and the target pixel point.
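Claims 11-12 leave the similarity metric open; the sketch below assumes a per-channel absolute-difference score normalized to [0, 1] for RGB pixel points, with any score above the threshold counting as similar. Any monotone metric compared against a threshold would fit the same claim language.

```python
def similarity(pixel_a, pixel_b):
    """pixel_a, pixel_b: (r, g, b) tuples of 8-bit channel values."""
    diff = sum(abs(a - b) for a, b in zip(pixel_a, pixel_b))
    return 1.0 - diff / (3 * 255)      # 1.0 means the two pixel points are identical

def find_similar(area_pixels, target_pixel, threshold=0.98):
    """area_pixels: dict of (x, y) -> pixel; returns matching coordinates or None."""
    for coords, pixel in area_pixels.items():
        if similarity(pixel, target_pixel) > threshold:
            return coords              # second target position (claim 11)
    return None                        # no similar video data in the area (claim 11)
```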
13. A video compression system, comprising:
a video data receiving unit, configured to receive block video data of a current frame;
an area determining unit, configured to determine, according to a first target position of target video data in the block video data of the current frame, an area to be compared corresponding to the first target position in block video data of a previous frame corresponding to the block video data of the current frame;
a similar data determining unit, configured to determine whether video data similar to the target video data exists in the video data of the area to be compared;
a position determining unit, configured to, when video data similar to the target video data exists in the video data of the area to be compared, acquire a second target position, in the video data of the previous frame, of the video data similar to the target video data, and generate repeat flag information, so that, according to the second target position and the repeat flag information, color space conversion and compression are not performed on the target video data;
and a data sending unit, configured to send the second target position and the repeat flag information to a video display device, so that the video display device retrieves the video data at the second target position as the target video data and displays it.
14. A video compression apparatus, comprising:
a memory for storing a computer program;
and a processor for implementing the steps of the video compression method according to any one of claims 1 to 12 when executing the computer program.
15. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and the computer program, when executed by a processor, implements the steps of the video compression method according to any one of claims 1 to 12.
CN202211118396.0A 2022-09-15 2022-09-15 Video compression method, system, device and readable storage medium Pending CN115209145A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211118396.0A CN115209145A (en) 2022-09-15 2022-09-15 Video compression method, system, device and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211118396.0A CN115209145A (en) 2022-09-15 2022-09-15 Video compression method, system, device and readable storage medium

Publications (1)

Publication Number Publication Date
CN115209145A true CN115209145A (en) 2022-10-18

Family

ID=83573539

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211118396.0A Pending CN115209145A (en) 2022-09-15 2022-09-15 Video compression method, system, device and readable storage medium

Country Status (1)

Country Link
CN (1) CN115209145A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111093079A (en) * 2019-12-30 2020-05-01 西安万像电子科技有限公司 Image processing method and device
CN114125448A (en) * 2020-08-31 2022-03-01 华为技术有限公司 Video encoding method, decoding method and related devices
CN112804532A (en) * 2021-01-07 2021-05-14 苏州浪潮智能科技有限公司 Image data acquisition method, system and related device
CN113079379A (en) * 2021-03-26 2021-07-06 山东英信计算机技术有限公司 Video compression method, device, equipment and computer readable storage medium
CN113709490A (en) * 2021-07-30 2021-11-26 山东云海国创云计算装备产业创新中心有限公司 Video compression method, device, system and medium
CN113873255A (en) * 2021-12-06 2021-12-31 苏州浪潮智能科技有限公司 Video data transmission method, video data decoding method and related devices

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115460414A (en) * 2022-11-11 2022-12-09 苏州浪潮智能科技有限公司 Video compression method and system of baseboard management control chip and related components
CN115460414B (en) * 2022-11-11 2023-03-07 苏州浪潮智能科技有限公司 Video compression method and system of baseboard management control chip and related components
WO2024098715A1 (en) * 2022-11-11 2024-05-16 苏州元脑智能科技有限公司 Video compression method and system for baseboard management control chip, and related components
CN116828200A (en) * 2023-08-29 2023-09-29 苏州浪潮智能科技有限公司 Image processing method, processing device, equipment and medium
CN116828200B (en) * 2023-08-29 2024-01-23 苏州浪潮智能科技有限公司 Image processing method, processing device, equipment and medium
CN118338002A (en) * 2024-06-12 2024-07-12 山东云海国创云计算装备产业创新中心有限公司 BMC video compression method, device and system and baseboard management controller
CN118338002B (en) * 2024-06-12 2024-09-24 山东云海国创云计算装备产业创新中心有限公司 BMC video compression method, device and system and baseboard management controller

Similar Documents

Publication Publication Date Title
CN115209145A (en) Video compression method, system, device and readable storage medium
CN112965678A (en) Display, device, storage medium and method based on electronic ink screen
CN115460414B (en) Video compression method and system of baseboard management control chip and related components
WO2023134128A1 (en) Video compression processing method, device, and medium
JP2001101396A (en) Processor and method for correcting image distortion and medium with program performing image distortion correction processing stored therein
CN107886466B (en) Image processing unit system of graphic processor
CN112804532A (en) Image data acquisition method, system and related device
WO2023024421A1 (en) Method and system for splicing multiple channels of images, and readable storage medium and unmanned vehicle
CN114554126B (en) Baseboard management control chip, video data transmission method and server
CN113573072B (en) Image processing method and device and related components
US20100295862A1 (en) Method and system for accessing image data adaptively
WO2007057053A1 (en) Conditional updating of image data in a memory buffer
CN109089120B (en) Analysis-aided encoding
CN107506119B (en) Picture display method, device, equipment and storage medium
CN117692593A (en) Video frame processing method, device, equipment and medium based on pixel row stitching
US7382376B2 (en) System and method for effectively utilizing a memory device in a compressed domain
KR100353894B1 (en) Memory architecture for buffering jpeg input data and addressing method thereof
CN118214820B (en) Image data processing method, product, equipment and medium
CN113126869B (en) Method and system for realizing KVM image high-speed redirection based on domestic BMC chip
US20100254618A1 (en) Method for Accessing Image Data and Related Apparatus
CN116647686B (en) Image compression method, device, server and image compression system
CN115913939B (en) Real-time image data modification method and device in cloud desktop image transmission process
CN112073726B (en) Compression method and device, computer readable storage medium and electronic device
US11483493B2 (en) Camera image conversion method capable of reducing processing time
CN116320247A (en) Real-time video scaling method and device based on ZYNQ and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination