CN115499667A - Video processing method, device and equipment and readable storage medium - Google Patents


Info

Publication number
CN115499667A
CN115499667A (application CN202211437518.2A)
Authority
CN
China
Prior art keywords
data block
memory area
component
mode
data
Prior art date
Legal status
Granted
Application number
CN202211437518.2A
Other languages
Chinese (zh)
Other versions
CN115499667B (en)
Inventor
张贞雷
李拓
邹晓峰
满宏涛
周玉龙
魏红杨
Current Assignee
Shandong Yunhai Guochuang Cloud Computing Equipment Industry Innovation Center Co Ltd
Original Assignee
Shandong Yunhai Guochuang Cloud Computing Equipment Industry Innovation Center Co Ltd
Priority date
Filing date
Publication date
Application filed by Shandong Yunhai Guochuang Cloud Computing Equipment Industry Innovation Center Co Ltd
Priority to CN202211437518.2A
Publication of CN115499667A
Application granted
Publication of CN115499667B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/42: … characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N 19/423: … characterised by memory arrangements
    • H04N 19/426: … using memory downsizing methods
    • H04N 19/10: … using adaptive coding
    • H04N 19/102: … characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N 19/12: Selection from among a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform or selection between H.263 and H.264
    • H04N 19/122: Selection of transform size, e.g. 8x8 or 2x4x8 DCT; Selection of sub-band transforms of varying structure or type
    • H04N 19/132: Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Discrete Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The application discloses a video processing method, a video processing device, video processing equipment and a readable storage medium, applied to the field of computer technology. After a video frame to be compressed is obtained, all columns in the video frame are divided into a plurality of groups according to a preset sampling mode, and the groups are alternately stored into a first memory area and a second memory area; starting from the first address of the first memory area, the data stored in a target address field in the first memory area is constructed into a first data block according to the preset sampling mode; meanwhile, starting from the first address of the second memory area, the data stored in an object address field in the second memory area is constructed into a second data block according to the preset sampling mode; and a compression operation is performed on the first data block and the second data block simultaneously. The method can improve video compression efficiency, save cache space and avoid frame loss during compression. The video processing device, the video processing equipment and the readable storage medium have the same technical effects.

Description

Video processing method, device and equipment and readable storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a video processing method, apparatus, device, and readable storage medium.
Background
Currently, when video in a memory is compressed, the video data needs to be converted into data blocks, and the conversion process needs a buffer to temporarily store the components that constitute each data block.
However, the buffer space is limited, and in the existing scheme the components constituting a data block must be read from the buffer sequentially during conversion, so buffer space resources are released relatively slowly, and the sequential reading of component data reduces video compression efficiency. With limited buffer space that is released slowly, the buffer easily fills up. Once the buffer space is insufficient, the component data of subsequent data blocks cannot be written into the buffer; data that cannot be written is discarded, which causes frame loss.
Therefore, how to improve the video compression efficiency, save the buffer space, and avoid frame loss in the compression process is a problem to be solved by those skilled in the art.
Disclosure of Invention
In view of this, an object of the present application is to provide a video processing method, apparatus, device and readable storage medium, so as to improve video compression efficiency, save buffer space and avoid frame loss in the compression process. The specific scheme is as follows:
in a first aspect, the present application provides a video processing method, including:
acquiring a video frame to be compressed;
dividing all the columns in the video frame into a plurality of groups according to a preset sampling mode, and alternately storing the groups into a first memory area and a second memory area;
constructing data stored in a target address field in the first memory area into a first data block according to the preset sampling mode from the first address of the first memory area; meanwhile, starting from the first address of the second memory area, constructing data stored in the object address field in the second memory area into a second data block according to the preset sampling mode;
and simultaneously performing compression operation on the first data block and the second data block.
Optionally, the preset sampling mode is: YUV422 mode or YUV420 mode;
correspondingly, the dividing all columns in the video frame into a plurality of groups according to a preset sampling mode includes:
and dividing all columns in the video frame into a plurality of groups with 16 columns according to the YUV422 mode or the YUV420 mode.
Optionally, the preset sampling mode is: a YUV444 mode;
correspondingly, the dividing all columns in the video frame into a plurality of groups according to a preset sampling mode includes:
and dividing all columns in the video frame into a plurality of groups with 8 columns according to the YUV444 mode.
Optionally, the alternately storing the groups into the first memory area and the second memory area includes:
arranging the groups in ascending order of their column indices in the video frame to obtain a group sequence;
storing the groups with the odd arrangement positions in the group sequence into the first memory area, and storing the groups with the even arrangement positions in the group sequence into the second memory area; or storing the groups with even arrangement positions in the group sequence into the first memory area, and storing the groups with odd arrangement positions in the group sequence into the second memory area.
Optionally, the preset sampling mode is: YUV422 mode or YUV420 mode;
correspondingly, the constructing the data stored in the target address field in the first memory area as the first data block according to the preset sampling mode includes:
respectively reading the Y component, the U component and the V component of each pixel point stored in the target address field to corresponding cache queues according to the YUV422 mode or the YUV420 mode, and constructing a first data block based on each cache queue;
correspondingly, the constructing the data stored in the object address field in the second memory area into a second data block according to the preset sampling mode includes:
and respectively reading the Y component, the U component and the V component of each pixel point stored in the object address field to corresponding buffer queues according to the YUV422 mode or the YUV420 mode, and constructing a second data block based on each buffer queue.
Optionally, the preset sampling mode is: a YUV444 mode;
correspondingly, the constructing the data stored in the target address field in the first memory area as the first data block according to the preset sampling mode includes:
according to the YUV444 mode, respectively reading the Y component, the U component and the V component of each pixel point stored in the target address field to corresponding cache queues, and constructing a first data block based on each cache queue;
correspondingly, the constructing the data stored in the object address field in the second memory area into a second data block according to the preset sampling mode includes:
and reading the Y component, the U component and the V component of each pixel point stored in the object address field to corresponding buffer queues according to the YUV444 mode, and constructing a second data block based on each buffer queue.
Optionally, after the compressing the first data block and the second data block simultaneously, the method further includes:
and storing the Y component, the U component and the V component of a subsequent new data block in the address space that the Y component, the U component and the V component of the first data block and the second data block occupied in the corresponding buffer queues.
Optionally, before the data stored in the target address field in the first memory area is constructed into a first data block according to the preset sampling mode starting from the first address of the first memory area, and the data stored in the object address field in the second memory area is simultaneously constructed into a second data block according to the preset sampling mode starting from the first address of the second memory area, the method further includes:
and if the video frame is in the RGB format, converting the RGB format data stored in the first memory area and the second memory area into the YUV format.
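A hedged sketch of this conversion step, using one common set of BT.601-style full-range coefficients; the patent does not specify which conversion matrix is used, so the coefficients and the per-pixel data layout are assumptions:

```python
def rgb_to_yuv(r, g, b):
    """Convert one 8-bit RGB pixel to (Y, U, V) using approximate
    BT.601 full-range coefficients (illustrative choice)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.169 * r - 0.331 * g + 0.500 * b + 128
    v = 0.500 * r - 0.419 * g - 0.081 * b + 128
    clamp = lambda x: max(0, min(255, int(round(x))))
    return clamp(y), clamp(u), clamp(v)

def convert_area(area_rgb):
    """Convert every (R, G, B) pixel stored in a memory area to YUV in
    one pass, so later block construction needs no per-read conversion."""
    return [rgb_to_yuv(*px) for px in area_rgb]
```

Converting both memory areas up front, as the text recommends, means each subsequent segment read yields YUV components directly.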
Optionally, the performing compression operations on the first data block and the second data block simultaneously includes:
and performing DCT transformation in a compression operation on the first data block and the second data block simultaneously.
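As an illustration of this step, the following is a plain-Python orthonormal 2-D DCT-II reference, with a small driver that applies it to both data blocks. This is not the patented hardware datapath; in hardware the two transforms run on parallel pipelines, while here they are simply mapped over both blocks:

```python
import math

def dct2(block):
    """Orthonormal 2-D DCT-II of an N x N block (reference implementation)."""
    n = len(block)
    def c(k):
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            out[u][v] = c(u) * c(v) * s
    return out

def dct_both(first_block, second_block):
    """Transform both blocks; stands in for the two parallel datapaths."""
    return dct2(first_block), dct2(second_block)
```

For a constant 8 x 8 block all energy lands in the DC coefficient, which is a quick sanity check on the transform.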
Optionally, the method further comprises:
when the first data block and the second data block start to perform DCT transformation, constructing data stored in a next address field in the first memory area as a new first data block according to the preset sampling mode, and constructing data stored in a next address field in the second memory area as a new second data block according to the preset sampling mode, so as to perform compression operation on the new first data block and the new second data block at the same time.
Optionally, the method further comprises:
after the first data block and the second data block are compressed to obtain compressed data, adding a frame identifier for the compressed data, and writing the compressed data added with the frame identifier into a preset memory area.
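A minimal sketch of tagging compressed data with a frame identifier before writing it out; the header layout (magic bytes, field widths) is invented for illustration, since the patent does not specify a format:

```python
import struct

# Hypothetical header: 2-byte magic, 16-bit frame id, 32-bit payload length.
_HEADER = ">2sHI"

def tag_compressed_frame(frame_id, payload):
    """Prepend a frame identifier header to compressed frame data."""
    return struct.pack(_HEADER, b"FR", frame_id, len(payload)) + payload

def parse_frame(blob):
    """Recover the frame id and payload from a tagged record."""
    size = struct.calcsize(_HEADER)
    magic, frame_id, length = struct.unpack(_HEADER, blob[:size])
    assert magic == b"FR"
    return frame_id, blob[size:size + length]
```

A reader scanning the preset memory area can then locate frame boundaries by these headers.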
In a second aspect, the present application provides a video processing apparatus comprising:
the acquisition module is used for acquiring a video frame to be compressed;
the storage module is used for dividing all the columns in the video frame into a plurality of groups according to a preset sampling mode and alternately storing the groups into a first memory area and a second memory area;
a data block constructing module, configured to construct, from a first address of the first memory area, a first data block from data stored in a target address segment in the first memory area according to the preset sampling mode; meanwhile, starting from the first address of the second memory area, constructing data stored in the object address field in the second memory area into a second data block according to the preset sampling mode;
and the compression module is used for simultaneously performing compression operation on the first data block and the second data block.
Optionally, the compression module comprises:
a first compression unit for performing DCT transform in a compression operation on the first data block and the second data block simultaneously;
and a second compression unit, configured to, when DCT transformation is started on the first data block and the second data block, construct data stored in a next address segment in the first memory area as a new first data block according to the preset sampling mode, and at the same time construct data stored in a next address segment in the second memory area as a new second data block according to the preset sampling mode, so as to perform compression operation on the new first data block and the new second data block at the same time.
Optionally, the storage module is specifically configured to: when the preset sampling mode is the YUV422 mode or the YUV420 mode, divide all the columns in the video frame into a plurality of groups of 16 columns each according to the YUV422 mode or the YUV420 mode.
Optionally, the storage module is specifically configured to: when the preset sampling mode is the YUV444 mode, divide all the columns in the video frame into a plurality of groups of 8 columns each according to the YUV444 mode.
Optionally, the storage module is specifically configured to:
arranging the groups in ascending order of their column indices in the video frame to obtain a group sequence; storing the groups at odd arrangement positions in the group sequence into the first memory area, and storing the groups at even arrangement positions into the second memory area; or storing the groups at even arrangement positions into the first memory area, and storing the groups at odd arrangement positions into the second memory area.
Optionally, the data block constructing module is specifically configured to: when the preset sampling mode is the YUV422 mode or the YUV420 mode, read the Y component, the U component and the V component of each pixel point stored in the target address field into the corresponding cache queues according to the YUV422 mode or the YUV420 mode, and construct a first data block based on each cache queue;
Optionally, the data block constructing module is specifically configured to: when the preset sampling mode is the YUV422 mode or the YUV420 mode, read the Y component, the U component and the V component of each pixel point stored in the object address field into the corresponding buffer queues according to the YUV422 mode or the YUV420 mode, and construct a second data block based on each buffer queue.
Optionally, the data block constructing module is specifically configured to: when the preset sampling mode is the YUV444 mode, read the Y component, the U component and the V component of each pixel point stored in the target address field into the corresponding cache queues according to the YUV444 mode, and construct a first data block based on each cache queue;
Optionally, the data block constructing module is specifically configured to: when the preset sampling mode is the YUV444 mode, read the Y component, the U component and the V component of each pixel point stored in the object address field into the corresponding buffer queues according to the YUV444 mode, and construct a second data block based on each buffer queue.
Optionally, the method further comprises:
and the buffer multiplexing module is used for storing, after the first data block and the second data block are simultaneously compressed, the Y component, the U component and the V component of a subsequent new data block in the address space that the Y component, the U component and the V component of the first data block and the second data block occupied in the corresponding buffer queues.
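The buffer-multiplexing behaviour can be sketched as follows; the queue capacity, helper names and drop-reporting are assumptions for illustration, not the patented hardware design:

```python
from collections import deque

CAPACITY = 32          # slots in one component queue (illustrative)

def build_block(queue, block_size):
    """Consume block_size components to build a block, freeing their slots."""
    return [queue.popleft() for _ in range(block_size)]

def write_components(queue, components):
    """Write a new block's components into whatever space is free; any
    components that do not fit are returned (they would mean frame loss)."""
    free = CAPACITY - len(queue)
    accepted, dropped = components[:free], components[free:]
    queue.extend(accepted)
    return dropped

y_queue = deque(range(32))          # queue currently full
block = build_block(y_queue, 16)    # consuming one block frees 16 slots
dropped = write_components(y_queue, list(range(100, 116)))
```

Because a consumed block's slots are reused immediately, the next block's components fit and `dropped` stays empty, which is the frame-loss avoidance the text describes.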
Optionally, the method further comprises:
a format conversion module, configured to convert the RGB format data stored in the first memory area and the second memory area into the YUV format if the video frame is in the RGB format, before the data stored in the target address field in the first memory area is constructed into a first data block according to the preset sampling mode starting from the first address of the first memory area, and the data stored in the object address field in the second memory area is simultaneously constructed into a second data block according to the preset sampling mode starting from the first address of the second memory area.
Optionally, the method further comprises:
and the compressed data storage module is used for adding a frame identifier for the compressed data after the first data block and the second data block are compressed to obtain the compressed data, and writing the compressed data added with the frame identifier into a preset memory area.
In a third aspect, the present application provides an electronic device, comprising:
a memory for storing a computer program;
a processor for executing the computer program to implement the video processing method disclosed in the foregoing.
In a fourth aspect, the present application provides a readable storage medium for storing a computer program, wherein the computer program, when executed by a processor, implements the video processing method disclosed in the foregoing.
According to the above scheme, the present application provides a video processing method, including: acquiring a video frame to be compressed; dividing all the columns in the video frame into a plurality of groups according to a preset sampling mode, and alternately storing the groups into a first memory area and a second memory area; constructing data stored in a target address field in the first memory area into a first data block according to the preset sampling mode from the first address of the first memory area; meanwhile, starting from the first address of the second memory area, constructing data stored in the object address field in the second memory area into a second data block according to the preset sampling mode; and simultaneously performing compression operation on the first data block and the second data block.
Therefore, when the video frame is stored in the memory, two memory areas are used. Specifically, all the columns in the video frame are divided into a plurality of groups according to a preset sampling mode, and the groups are alternately stored in the first memory area and the second memory area, so that all the data in one video frame are separated into two memory areas. The storage process is completed according to the preset sampling mode, so that the data stored in the two memory areas can provide a prerequisite condition for parallel execution of subsequent compression operation. When converting a data block, constructing data stored in a target address field in a first memory area into the first data block according to a preset sampling mode from a first address of the first memory area; namely: reading a segment of data from the first address of the first memory area to construct a first data block. When a first data block is constructed, constructing data stored in an object address field in a second memory area into a second data block according to a preset sampling mode from a first address of the second memory area; namely: and reading a piece of data from the first address of the second memory area to construct a second data block. The application can realize that: the construction of the two data blocks is completed at the same time, so that the cache resources occupied by the two data blocks can be released as soon as possible, thereby saving the cache space and avoiding the frame loss phenomenon. Because the construction of the two data blocks can be completed at the same time, the two data blocks can be compressed simultaneously subsequently, and the compression efficiency is improved. Therefore, the video compression efficiency can be improved, the cache space is saved, and frame loss in the compression process is avoided.
Accordingly, the video processing device, the video processing apparatus and the readable storage medium provided by the application also have the technical effects.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of a video processing method disclosed herein;
FIG. 2 is a schematic diagram of data structure of a first data block and a second data block disclosed in the present application;
FIG. 3 is a schematic view of a compression frame of the present disclosure;
FIG. 4 is a schematic diagram of a video processing apparatus according to the present disclosure;
fig. 5 is a schematic diagram of an electronic device disclosed in the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
At present, when a data block is converted in the existing scheme, components forming the data block need to be read from a cache in sequence, so that the release speed of cache space resources is relatively slow, and the video compression efficiency is reduced by reading component data in sequence. And in the case of limited buffer space and slow release of buffer space, the buffer space is easily filled. When the buffer space is not enough, the component data of the subsequent data block cannot be written into the buffer, the data which cannot be written into the buffer is discarded, and the frame loss phenomenon can be caused. Therefore, the video processing scheme is provided, the video compression efficiency can be improved, the cache space is saved, and frame loss in the compression process is avoided.
Referring to fig. 1, an embodiment of the present application discloses a video processing method, including:
s101, obtaining a video frame to be compressed.
S102, dividing all the columns in the video frame into a plurality of groups according to a preset sampling mode, and alternately storing the groups into a first memory area and a second memory area.
In this embodiment, two memory areas are used when storing video frames in the memory. Specifically, all the columns in the video frame are divided into a plurality of groups according to a preset sampling mode, and the groups are alternately stored in the first memory area and the second memory area, so that all the data in one video frame are separated into two memory areas. And the storage process is finished according to the preset sampling mode, so that the data stored in the two memory areas can provide a prerequisite condition for parallel execution of subsequent compression operation.
The preset sampling mode can be a YUV422 mode, a YUV420 mode or a YUV444 mode, and different modes specify different sampling sizes. The YUV422 mode and the YUV420 mode specify a sampling size of 16, and the YUV444 mode specifies a sampling size of 8. Therefore, when the YUV422 mode or the YUV420 mode is adopted, every 16 columns form a group; in the YUV444 mode, every 8 columns form a group. In one embodiment, the preset sampling mode is the YUV422 mode or the YUV420 mode; accordingly, dividing all columns in the video frame into a plurality of groups according to the preset sampling mode includes: dividing all columns in the video frame into a plurality of groups of 16 columns each according to the YUV422 mode or the YUV420 mode. In another embodiment, the preset sampling mode is the YUV444 mode; accordingly, dividing all columns in the video frame into a plurality of groups according to the preset sampling mode includes: dividing all columns in the video frame into a plurality of groups of 8 columns each according to the YUV444 mode.
Assuming that a video frame has 96 columns (columns 0 to 95), then according to the YUV422 mode or the YUV420 mode, columns 0-15 form a group, columns 16-31 form a group, columns 32-47 form a group, and so on, so 6 groups are obtained. These 6 groups are alternately stored in the first memory area and the second memory area, that is: group 1 (columns 0-15) is stored in the first memory area, group 2 (columns 16-31) in the second memory area, group 3 (columns 32-47) in the first memory area, group 4 (columns 48-63) in the second memory area, group 5 (columns 64-79) in the first memory area, and group 6 (columns 80-95) in the second memory area. It can be seen that groups 1, 3 and 5 are stored in the first memory area and groups 2, 4 and 6 in the second memory area, which implements the alternate storage of the groups in the two memory areas. Thus, in one embodiment, alternately storing the groups into the first memory area and the second memory area includes: arranging the groups in ascending order of their column indices in the video frame to obtain a group sequence; storing the groups at odd arrangement positions in the group sequence into the first memory area and the groups at even arrangement positions into the second memory area; or storing the groups at even arrangement positions into the first memory area and the groups at odd arrangement positions into the second memory area.
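The worked example above can be checked with a short sketch (illustrative plain Python; in the patent the "areas" are memory regions, here they are simply lists of column-index groups):

```python
def group_columns(num_cols, group_size):
    """Split column indices into consecutive groups; group_size is 16 for
    the YUV422/YUV420 modes and 8 for the YUV444 mode."""
    return [list(range(s, s + group_size)) for s in range(0, num_cols, group_size)]

def alternate(groups):
    """Groups at odd arrangement positions (1st, 3rd, ...) go to area 1,
    groups at even positions to area 2; the reverse assignment also works."""
    return groups[0::2], groups[1::2]

groups = group_columns(96, 16)      # 6 groups of 16 columns each
area1, area2 = alternate(groups)    # area1: groups 1, 3, 5; area2: groups 2, 4, 6
```

With 96 columns this reproduces the 6 groups of the example; with `group_size=8` it yields the 12 YUV444 groups mentioned next.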
If the 96 columns of video frames are grouped according to the YUV444 pattern, columns 0-7 form a group, columns 8-15 form a group … …, and so on, 12 groups are available. The 12 sets are alternately stored in the first memory area and the second memory area, which can be implemented by referring to the above example and will not be described herein again.
S103, constructing data stored in a target address field in the first memory area into a first data block according to a preset sampling mode from a first address of the first memory area; and meanwhile, constructing the data stored in the object address field in the second memory area into a second data block according to a preset sampling mode from the first address of the second memory area.
Note that different modes define different data block sizes. The YUV422 mode and the YUV420 mode specify a data block size of 16 × 16, and the YUV444 mode specifies a data block size of 8 × 8. Therefore, when the YUV422 mode or the YUV420 mode is adopted, the first data block and the second data block use the 16 × 16 specification; when the YUV444 mode is adopted, they use the 8 × 8 specification.
In this embodiment, when constructing a data block based on data in the first memory area, a segment of data is read starting from the first address of the first memory area, and from this data the Y component, the U component and the V component of each pixel point used to construct the first data block can be determined; the component data can be written into the cache queues, and the first data block is then constructed from the data in the cache queues. While the first data block is being constructed, a segment of data is simultaneously read starting from the first address of the second memory area, and from it the Y component, the U component and the V component of each pixel point used to construct the second data block can be determined; the component data can be written into the cache queues, and the second data block is then constructed from the data in the cache queues. This makes it possible to complete the construction of the two data blocks at the same time, so the cache resources they occupy can be released as soon as possible, saving cache space and avoiding frame loss.
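A minimal sketch of this queue-based construction, assuming each pixel in a segment is stored as a (Y, U, V) tuple; the layout and queue names are assumptions, and the real design uses hardware FIFOs rather than Python deques:

```python
from collections import deque

def fill_queues(segment):
    """Split a segment of (Y, U, V) pixels into three component queues."""
    y_q, u_q, v_q = deque(), deque(), deque()
    for y, u, v in segment:
        y_q.append(y)
        u_q.append(u)
        v_q.append(v)
    return y_q, u_q, v_q

def build_data_block(y_q, u_q, v_q, n):
    """Assemble an n-pixel data block from the three component queues,
    freeing the queue slots as they are consumed."""
    return [(y_q.popleft(), u_q.popleft(), v_q.popleft()) for _ in range(n)]

segment = [(i, 128, 128) for i in range(16)]
y_q, u_q, v_q = fill_queues(segment)
block = build_data_block(y_q, u_q, v_q, 16)
```

Running the same two steps on a segment from each memory area yields the first and second data blocks in parallel.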
The first memory area and the second memory area can store video pixels in YUV format or in RGB format. If they store video pixels in YUV format, the Y component, the U component and the V component of each pixel point used to construct the first data block and the second data block can be read directly from the two memory areas. If they store video pixels in RGB format, the data read from the two memory areas must first be converted into YUV format before the Y component, the U component and the V component of each pixel point used to construct the first data block and the second data block can be determined. However, that approach requires a format conversion every time a segment of data is read from the memory. Therefore, when the first memory area and the second memory area store video pixels in RGB format, the stored RGB pixels are uniformly converted into YUV format first, and the data blocks are constructed afterwards. Thus, in one embodiment, before the data stored in the target address field in the first memory area is constructed into a first data block according to the preset sampling mode starting from the first address of the first memory area, and the data stored in the object address field in the second memory area is simultaneously constructed into a second data block according to the preset sampling mode starting from the first address of the second memory area, the method further includes: if the video frame is in the RGB format, converting the RGB format data stored in the first memory area and the second memory area into YUV format.
In one embodiment, the preset sampling mode is the YUV422 mode or the YUV420 mode. Correspondingly, constructing the data stored in the target address field of the first memory area into the first data block according to the preset sampling mode includes: reading the Y component, U component, and V component of each pixel point stored in the target address field into the corresponding cache queues according to the YUV422 mode or the YUV420 mode, and constructing the first data block based on those cache queues. Correspondingly, constructing the data stored in the object address field of the second memory area into the second data block according to the preset sampling mode includes: reading the Y component, U component, and V component of each pixel point stored in the object address field into the corresponding cache queues according to the YUV422 mode or the YUV420 mode, and constructing the second data block based on those cache queues. It follows that if the first memory area stores video pixels in YUV format, the data stored in the target address field is the Y, U, and V components of each pixel point used to construct the first data block; correspondingly, if the second memory area stores video pixels in YUV format, the data stored in the object address field is the Y, U, and V components of each pixel point used to construct the second data block. In the YUV420 mode, the Y component input to the downstream compression module is 16×16 and the U/V components are 8×8; the discarding of U/V samples is performed when the YUV data is written into the respective FIFOs. For example, in YUV420 mode only the U/V components of even rows and even columns are retained, so the U/V components are 8×8 when input to the downstream compression module.
Similarly, in the YUV422 mode, the Y, U, and V components are discarded or retained according to the rules established for that mode.
In one embodiment, the preset sampling mode is the YUV444 mode. Correspondingly, constructing the data stored in the target address field of the first memory area into the first data block according to the preset sampling mode includes: reading the Y component, U component, and V component of each pixel point stored in the target address field into the corresponding cache queues according to the YUV444 mode, and constructing the first data block based on those cache queues. Correspondingly, constructing the data stored in the object address field of the second memory area into the second data block according to the preset sampling mode includes: reading the Y component, U component, and V component of each pixel point stored in the object address field into the corresponding cache queues according to the YUV444 mode, and constructing the second data block based on those cache queues. It follows that if the first memory area stores video pixels in YUV format, the data stored in the target address field is the Y, U, and V components of each pixel point used to construct the first data block; correspondingly, if the second memory area stores video pixels in YUV format, the data stored in the object address field is the Y, U, and V components of each pixel point used to construct the second data block. By comparison, in the YUV422 mode only the U/V components of even columns are retained, so the U/V components input to the downstream compression module are 16×8. It can be seen that when data blocks are formed in different modes, the Y, U, and V components must be discarded or retained according to the rules established for the corresponding mode.
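The retain/discard rules for the three sampling modes described above can be expressed as a small predicate. This is an illustrative sketch only; the helper name `keep_chroma` is not from the source:

```python
def keep_chroma(row, col, mode):
    """Return True if the U/V sample at (row, col) is retained
    under the given sampling mode, per the rules described above."""
    if mode == "YUV444":
        return True                                # all chroma kept
    if mode == "YUV422":
        return col % 2 == 0                        # even columns only
    if mode == "YUV420":
        return row % 2 == 0 and col % 2 == 0       # even rows and even columns
    raise ValueError("unknown sampling mode: " + mode)
```

Over a 16×16 tile this yields 8×8 retained chroma samples in YUV420 and 16×8 in YUV422, matching the component sizes stated above.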
Of course, when video pixels in RGB format are stored in the first memory area and the second memory area, the RGB pixels in the two areas may be converted into YUV format in one pass before the data blocks are constructed; alternatively, after a segment of pixels used to construct a data block has been read, the currently read pixels may be converted into YUV format, which likewise yields the Y, U, and V components of each pixel point used to construct the data block.
In one example, the data organization of the first data block and the second data block may refer to fig. 2. As shown in fig. 2, in the YUV422 mode or the YUV420 mode, BLOCK0 (the first of the first data blocks) corresponds to rows 0-15 × columns 0-15 of the video frame, BLOCK1 (the first of the second data blocks) corresponds to rows 0-15 × columns 16-31, BLOCK2 (the second of the first data blocks) corresponds to rows 0-15 × columns 32-47, and so on. In the YUV444 mode, BLOCK0 (the first of the first data blocks) corresponds to rows 0-7 × columns 0-7 of the video frame, BLOCK1 (the first of the second data blocks) corresponds to rows 0-7 × columns 8-15, and so on. For the first data blocks BLOCK0, BLOCK2, BLOCK4, …, the component data that constitute them are stored in the first memory area; for the second data blocks BLOCK1, BLOCK3, BLOCK5, …, the component data that constitute them are stored in the second memory area.
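The column-to-block mapping shown in fig. 2 can be sketched as a small helper, under the assumption that blocks are 16 columns wide in YUV422/YUV420 and 8 columns wide in YUV444, with even-indexed blocks drawn from the first memory area (SPACE_LOW) and odd-indexed ones from the second (SPACE_HIGH). The function name is hypothetical:

```python
def block_of_column(col, mode):
    """Map a video-frame column to (block_index, memory_area)
    within one row band, per the fig. 2 layout."""
    w = 8 if mode == "YUV444" else 16      # block width per sampling mode
    n = col // w                           # block index along the row
    area = "SPACE_LOW" if n % 2 == 0 else "SPACE_HIGH"
    return n, area
```

For example, column 17 in YUV420 falls in BLOCK1, whose components come from the second memory area.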
This embodiment completes the construction of the two data blocks at the same time, so the cache resources they occupy can be released as soon as possible. Accordingly, in an embodiment, after performing the compression operation on the first data block and the second data block simultaneously, the method further includes: in the corresponding cache queues, storing the Y, U, and V components of a subsequent new data block in the address space previously occupied by the Y, U, and V components of the first data block and the second data block. The cache queues can thus be reused continuously to construct new data blocks, realizing multiplexing of the cache resources occupied by data blocks that have already entered the compression module, which saves cache space and avoids frame loss.
And S104, simultaneously performing compression operation on the first data block and the second data block.
Because the construction of the two data blocks is completed at the same time, the two data blocks can subsequently be compressed simultaneously, which improves compression efficiency. In one embodiment, performing the compression operation on the first data block and the second data block simultaneously includes: performing the DCT transform of the compression operation on the first data block and the second data block simultaneously. The compression operation thus includes a DCT transform.
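The DCT transform performed by the DCT units is the standard 2-D DCT-II used in JPEG-style compression. A naive, unoptimized reference sketch for one 8×8 block is shown below; the hardware units in this embodiment would use a fast implementation, so this is only a behavioral illustration:

```python
import math

def dct_2d_8x8(block):
    """Naive 2-D DCT-II of one 8x8 block (behavioral reference only;
    real hardware uses a fast factorized DCT)."""
    N = 8
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            cu = math.sqrt(1 / N) if u == 0 else math.sqrt(2 / N)
            cv = math.sqrt(1 / N) if v == 0 else math.sqrt(2 / N)
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[u][v] = cu * cv * s
    return out
```

A constant block produces a single DC coefficient and zero AC energy, which is the property the later quantization and entropy-coding stages exploit.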
Since the DCT transform takes considerable time, waiting for the DCT transform of the first data block and the second data block to finish before compressing subsequent data blocks would lengthen the overall compression time. For this reason, in this embodiment, when the first data block and the second data block start their DCT transform, the data stored in the next address field of the first memory area is constructed into a new first data block according to the preset sampling mode, and the data stored in the next address field of the second memory area is constructed into a new second data block according to the preset sampling mode, so that the compression operation can be performed on the new first data block and the new second data block simultaneously. That is, once the first data block and the second data block have been read by the DCT transform module, the construction and reading of subsequent data blocks begins; another DCT transform module can be provided so that the subsequent data blocks are DCT-transformed by that module. The DCT transform of subsequent data blocks therefore does not have to wait for the DCT transform of the first and second data blocks to finish, but can proceed while it is in progress, shortening the compression time and improving compression efficiency.
Referring to fig. 2, in one example, two DCT transform modules may be provided for all first data blocks, DCT0 and DCT2, and two DCT transform modules for all second data blocks, DCT1 and DCT3. Then: when DCT0 finishes reading BLOCK0 and is about to enter the DCT transform flow, DCT2 starts reading BLOCK2 in order to DCT-transform it, so the DCT transforms of BLOCK0 and BLOCK2 overlap in time, which saves time. While DCT0 reads BLOCK0, DCT1 also reads BLOCK1; when DCT1 finishes reading BLOCK1 and is about to enter the DCT transform flow, DCT3 starts reading BLOCK3 in order to DCT-transform it, so the DCT transforms of BLOCK1 and BLOCK3 likewise overlap in time. Since BLOCK0 and BLOCK1 start their DCT transforms at the same time, the DCT transforms of BLOCK0, BLOCK1, BLOCK2, and BLOCK3 can all overlap in time. This embodiment can therefore DCT-transform multiple data blocks in parallel, improving compression efficiency; providing more DCT transform modules would improve it further.
In an embodiment, after the first data block and the second data block are compressed to obtain compressed data, a frame identifier is added to the compressed data, and the compressed data added with the frame identifier is written into a preset memory area.
It can be seen that this embodiment uses two memory areas when storing a video frame in memory. Specifically, all the columns of the video frame are divided into a plurality of groups according to the preset sampling mode, and the groups are stored alternately in the first memory area and the second memory area, so that the data of one video frame are separated into two memory areas. Because the storage process follows the preset sampling mode, the data stored in the two memory areas provide the precondition for the subsequent compression operations to execute in parallel. When constructing data blocks, a segment of data is read from the first address of the first memory area to construct the first data block, and at the same time a segment of data is read from the first address of the second memory area to construct the second data block. The construction of the two data blocks therefore completes at the same time, so the cache resources they occupy can be released as soon as possible, saving cache space and avoiding frame loss; and because the two data blocks are constructed at the same time, they can subsequently be compressed simultaneously, improving compression efficiency. This embodiment thus improves video compression efficiency, saves cache space, and avoids frame loss during compression.
It should be noted that when the baseboard management control system in a server compresses video, the host side transmits the video data through PCIe to a VGA (Video Graphics Array) unit in the baseboard management control system, and the VGA writes the video data into DDR. The read control module (RD_CTRL) then reads the data from DDR, the color space conversion module (RGB2YUV) converts the original RGB video data into YUV data, and the Y, U, and V components are FIFO-buffered using the system's storage resources according to the BLOCK format conversion requirements, completing the YUV2BLOCK format conversion. The compression module then reads each BLOCK in order and compresses it; after compression, the compressed data is written into DDR and sent to the remote end through the MAC, where the video data can be displayed.
The FIFO buffers must be set to an appropriate depth and width according to the video resolution. For example, at a resolution of 1920 × 1200, a FIFO depth of 16384 and a width of 8 bits ensure that the FIFO will not become full. Here the width is the size of the data word written into the FIFO buffer each time, and the depth is the total number of such 8-bit words the FIFO buffer can hold.
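The depth figure above (16384 words of 8 bits for 1920 × 1200) suggests rounding the worst-case backlog up to a power of two, which is typical for hardware FIFOs. The following sketch illustrates only that rounding rule; it is an assumption on our part, not a sizing formula given by the source, and `fifo_depth` is a hypothetical name:

```python
def fifo_depth(samples_pending):
    """Smallest power-of-two depth that can hold the given
    worst-case backlog of samples (assumed sizing rule)."""
    d = 1
    while d < samples_pending:
        d *= 2
    return d
```

Any backlog between 8193 and 16384 samples would round up to the depth quoted in the text.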
In this embodiment, when writing the raw RGB data into DDR, the VGA writes into two memory areas, SPACE_LOW and SPACE_HIGH, and the compression module supports the YUV444, YUV422, and YUV420 modes. SPACE_LOW is the first memory area described in the above embodiments, and SPACE_HIGH is the second memory area. Of course, the reverse is also possible: SPACE_LOW may be regarded as the second memory area and SPACE_HIGH as the first memory area.
If the compression module adopts the YUV422 or YUV420 mode, the RGB data of columns 0-15, 32-47, 64-79, … of all rows of a video frame are written into the SPACE_LOW area, while the RGB data of columns 16-31, 48-63, 80-95, … of all rows are written into the SPACE_HIGH area.
If the compression module adopts the YUV444 mode, the RGB data of columns 0-7, 16-23, 32-39, … of all rows of a video frame are written into the SPACE_LOW area, while the RGB data of columns 8-15, 24-31, 40-47, … of all rows are written into the SPACE_HIGH area.
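The alternating column grouping described in the two paragraphs above can be sketched as follows; `split_columns` is a hypothetical name, and the group width per mode (16 for YUV422/YUV420, 8 for YUV444) follows the groupings stated above:

```python
def split_columns(width, mode):
    """Partition frame columns into the two memory areas,
    alternating by groups of the mode's block width."""
    w = 8 if mode == "YUV444" else 16
    low, high = [], []
    for col in range(width):
        # even-indexed column groups go to SPACE_LOW, odd to SPACE_HIGH
        (low if (col // w) % 2 == 0 else high).append(col)
    return low, high
```

Because whole groups alternate, each memory area always holds the exact columns of every other block, which is what lets the two YUV-to-BLOCK modules later run in lockstep.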
Taking the YUV420 mode as an example, when RGB data is read from DDR, the data in the SPACE_LOW area is read first, 16 RGB pixels at a time, converted by the RGB2YUV color space conversion, and input to YUV2BLOCK_NEW_0. The data in the SPACE_HIGH area is read second, again 16 RGB pixels at a time, converted by RGB2YUV, and input to YUV2BLOCK_NEW_1. By repeating these steps, the raw RGB data is delivered to YUV2BLOCK_NEW_0 and YUV2BLOCK_NEW_1 respectively. YUV2BLOCK_NEW_0 and YUV2BLOCK_NEW_1 are two YUV-to-BLOCK conversion modules corresponding to the two memory areas SPACE_LOW and SPACE_HIGH. The two conversion modules can therefore perform BLOCK conversion synchronously, providing the precondition for the subsequent synchronous compression of BLOCKs.
In the YUV422 mode, 16 RGB pixels are read at a time; in the YUV444 mode, 8 RGB pixels are read at a time.
In the prior art, 16 Y_FIFOs, 16 U_FIFOs, and 16 V_FIFOs are required to complete BLOCK conversion. This embodiment provides 32 Y-component buffer queues (Y_FIFO_0_A to Y_FIFO_15_A and Y_FIFO_0_B to Y_FIFO_15_B), 32 U-component buffer queues (U_FIFO_0_A to U_FIFO_15_A and U_FIFO_0_B to U_FIFO_15_B), and 32 V-component buffer queues (V_FIFO_0_A to V_FIFO_15_A and V_FIFO_0_B to V_FIFO_15_B). Different sampling modes use different numbers of these buffer queues.
In one example, Y_FIFO_0_A to Y_FIFO_15_A are used to buffer the Y components constituting BLOCK0, and Y_FIFO_0_B to Y_FIFO_15_B are used to buffer the Y components constituting BLOCK2. Likewise, U_FIFO_0_A to U_FIFO_15_A buffer the U components of BLOCK0, U_FIFO_0_B to U_FIFO_15_B buffer the U components of BLOCK2, V_FIFO_0_A to V_FIFO_15_A buffer the V components of BLOCK0, and V_FIFO_0_B to V_FIFO_15_B buffer the V components of BLOCK2. This embodiment can therefore construct BLOCK0 and BLOCK2 synchronously, providing the basis for their synchronous compression. Accordingly, this embodiment can also simultaneously construct BLOCK4 and BLOCK6, BLOCK8 and BLOCK10, BLOCK1 and BLOCK3, BLOCK5 and BLOCK7, and so on.
In the YUV420 mode, all Y components must be buffered, and only the U/V components of even rows and even columns. When buffering the Y components of all rows and columns, for the Y components of rows 0/16/32/… of the video frame: write the Y components of columns 0-15 into Y_FIFO_0_A of YUV2BLOCK_NEW_0; columns 16-31 into Y_FIFO_0_A of YUV2BLOCK_NEW_1; columns 32-47 into Y_FIFO_0_B of YUV2BLOCK_NEW_0; columns 48-63 into Y_FIFO_0_B of YUV2BLOCK_NEW_1; columns 64-79 into Y_FIFO_0_A of YUV2BLOCK_NEW_0; columns 80-95 into Y_FIFO_0_A of YUV2BLOCK_NEW_1; columns 96-111 into Y_FIFO_0_B of YUV2BLOCK_NEW_0; columns 112-127 into Y_FIFO_0_B of YUV2BLOCK_NEW_1; and so on for the remaining columns until all Y components of rows 0/16/32/… are buffered. All Y components of rows 1/17/33/… of the video frame are buffered by the same rule, with the queues changed to Y_FIFO_1_A and Y_FIFO_1_B. Correspondingly, all Y components of rows 15/31/47/… are buffered by the same rule into Y_FIFO_15_A and Y_FIFO_15_B.
In the YUV420 mode, when buffering the U components of even rows and even columns, for the U components of rows 0/16/32/… of the video frame: write the U components of columns 0/2/4/…/14 into U_FIFO_0_A of YUV2BLOCK_NEW_0; columns 16/18/20/…/30 into U_FIFO_0_A of YUV2BLOCK_NEW_1; columns 32/34/36/…/46 into U_FIFO_0_B of YUV2BLOCK_NEW_0; columns 48/50/52/…/62 into U_FIFO_0_B of YUV2BLOCK_NEW_1; columns 64/66/68/…/78 into U_FIFO_0_A of YUV2BLOCK_NEW_0; columns 80/82/84/…/94 into U_FIFO_0_A of YUV2BLOCK_NEW_1; columns 96/98/100/…/110 into U_FIFO_0_B of YUV2BLOCK_NEW_0; columns 112/114/116/…/126 into U_FIFO_0_B of YUV2BLOCK_NEW_1; and so on for the remaining columns until all U components of rows 0/16/32/… are buffered. The U components of rows 2/18/34/… are buffered by the same rule, with the queues changed to U_FIFO_1_A and U_FIFO_1_B; the U components of rows 14/30/46/… are buffered by the same rule into U_FIFO_7_A and U_FIFO_7_B.
In the YUV420 mode, when buffering the V components of even rows and even columns, for the V components of rows 0/16/32/…: write the V components of columns 0/2/4/…/14 into V_FIFO_0_A of YUV2BLOCK_NEW_0; columns 16/18/20/…/30 into V_FIFO_0_A of YUV2BLOCK_NEW_1; columns 32/34/36/…/46 into V_FIFO_0_B of YUV2BLOCK_NEW_0; columns 48/50/52/…/62 into V_FIFO_0_B of YUV2BLOCK_NEW_1; columns 64/66/68/…/78 into V_FIFO_0_A of YUV2BLOCK_NEW_0; columns 80/82/84/…/94 into V_FIFO_0_A of YUV2BLOCK_NEW_1; columns 96/98/100/…/110 into V_FIFO_0_B of YUV2BLOCK_NEW_0; columns 112/114/116/…/126 into V_FIFO_0_B of YUV2BLOCK_NEW_1; and so on until all V components of rows 0/16/32/… are buffered. The V components of rows 2/18/34/… are buffered by the same rule into V_FIFO_1_A and V_FIFO_1_B, and the V components of rows 14/30/46/… into V_FIFO_7_A and V_FIFO_7_B.
In the YUV422 mode, all Y components must be buffered, and only the U/V components of even columns. When buffering the Y components of all rows and columns, for the Y components of rows 0/16/32/…: write the Y components of columns 0-15 into Y_FIFO_0_A of YUV2BLOCK_NEW_0; columns 16-31 into Y_FIFO_0_A of YUV2BLOCK_NEW_1; columns 32-47 into Y_FIFO_0_B of YUV2BLOCK_NEW_0; columns 48-63 into Y_FIFO_0_B of YUV2BLOCK_NEW_1; columns 64-79 into Y_FIFO_0_A of YUV2BLOCK_NEW_0; columns 80-95 into Y_FIFO_0_A of YUV2BLOCK_NEW_1; columns 96-111 into Y_FIFO_0_B of YUV2BLOCK_NEW_0; columns 112-127 into Y_FIFO_0_B of YUV2BLOCK_NEW_1; and so on for the remaining columns until all Y components of rows 0/16/32/… are buffered. All Y components of rows 1/17/33/… are buffered by the same rule into Y_FIFO_1_A and Y_FIFO_1_B, and all Y components of rows 15/31/47/… into Y_FIFO_15_A and Y_FIFO_15_B.
In the YUV422 mode, the U components of even columns are buffered. For the U components of rows 0/16/32/…: write the U components of columns 0/2/4/…/14 into U_FIFO_0_A of YUV2BLOCK_NEW_0; columns 16/18/20/…/30 into U_FIFO_0_A of YUV2BLOCK_NEW_1; columns 32/34/36/…/46 into U_FIFO_0_B of YUV2BLOCK_NEW_0; columns 48/50/52/…/62 into U_FIFO_0_B of YUV2BLOCK_NEW_1; columns 64/66/68/…/78 into U_FIFO_0_A of YUV2BLOCK_NEW_0; columns 80/82/84/…/94 into U_FIFO_0_A of YUV2BLOCK_NEW_1; columns 96/98/100/…/110 into U_FIFO_0_B of YUV2BLOCK_NEW_0; columns 112/114/116/…/126 into U_FIFO_0_B of YUV2BLOCK_NEW_1; and so on for the remaining columns until all U components of rows 0/16/32/… are buffered. All U components of rows 1/17/33/… are buffered by the same rule into U_FIFO_1_A and U_FIFO_1_B, and all U components of rows 15/31/47/… into U_FIFO_15_A and U_FIFO_15_B.
In the YUV422 mode, the V components of even columns are buffered. For the V components of rows 0/16/32/…: write the V components of columns 0/2/4/…/14 into V_FIFO_0_A of YUV2BLOCK_NEW_0; columns 16/18/20/…/30 into V_FIFO_0_A of YUV2BLOCK_NEW_1; columns 32/34/36/…/46 into V_FIFO_0_B of YUV2BLOCK_NEW_0; columns 48/50/52/…/62 into V_FIFO_0_B of YUV2BLOCK_NEW_1; columns 64/66/68/…/78 into V_FIFO_0_A of YUV2BLOCK_NEW_0; columns 80/82/84/…/94 into V_FIFO_0_A of YUV2BLOCK_NEW_1; columns 96/98/100/…/110 into V_FIFO_0_B of YUV2BLOCK_NEW_0; columns 112/114/116/…/126 into V_FIFO_0_B of YUV2BLOCK_NEW_1; and so on for the remaining columns until all V components of rows 0/16/32/… are buffered. All V components of rows 1/17/33/… are buffered by the same rule into V_FIFO_1_A and V_FIFO_1_B, and all V components of rows 15/31/47/… into V_FIFO_15_A and V_FIFO_15_B.
In the YUV444 mode, the Y/U/V data of all rows and columns are buffered. When buffering the Y components of all rows and columns, for the Y components of rows 0/8/16/…: write the Y components of columns 0-7 into Y_FIFO_0_A of YUV2BLOCK_NEW_0; columns 8-15 into Y_FIFO_0_A of YUV2BLOCK_NEW_1; columns 16-23 into Y_FIFO_0_B of YUV2BLOCK_NEW_0; columns 24-31 into Y_FIFO_0_B of YUV2BLOCK_NEW_1; columns 32-39 into Y_FIFO_0_A of YUV2BLOCK_NEW_0; columns 40-47 into Y_FIFO_0_A of YUV2BLOCK_NEW_1; columns 48-55 into Y_FIFO_0_B of YUV2BLOCK_NEW_0; columns 56-63 into Y_FIFO_0_B of YUV2BLOCK_NEW_1; and so on for the remaining columns until all Y components of rows 0/8/16/… are buffered. All Y components of rows 1/9/17/… are buffered by the same rule into Y_FIFO_1_A and Y_FIFO_1_B, and all Y components of rows 7/15/23/… into Y_FIFO_7_A and Y_FIFO_7_B.
In the YUV444 mode, when buffering the U components of all rows and columns, for the U components of rows 0/8/16/…: write the U components of columns 0-7 into U_FIFO_0_A of YUV2BLOCK_NEW_0; columns 8-15 into U_FIFO_0_A of YUV2BLOCK_NEW_1; columns 16-23 into U_FIFO_0_B of YUV2BLOCK_NEW_0; columns 24-31 into U_FIFO_0_B of YUV2BLOCK_NEW_1; columns 32-39 into U_FIFO_0_A of YUV2BLOCK_NEW_0; columns 40-47 into U_FIFO_0_A of YUV2BLOCK_NEW_1; columns 48-55 into U_FIFO_0_B of YUV2BLOCK_NEW_0; columns 56-63 into U_FIFO_0_B of YUV2BLOCK_NEW_1; and so on for the remaining columns until all U components of rows 0/8/16/… are buffered. All U components of rows 1/9/17/… are buffered by the same rule into U_FIFO_1_A and U_FIFO_1_B, and all U components of rows 7/15/23/… into U_FIFO_7_A and U_FIFO_7_B.
In the YUV444 mode, when buffering the V components of all rows and columns, for the V components of rows 0/8/16/…: write the V components of columns 0-7 into V_FIFO_0_A of YUV2BLOCK_NEW_0; columns 8-15 into V_FIFO_0_A of YUV2BLOCK_NEW_1; columns 16-23 into V_FIFO_0_B of YUV2BLOCK_NEW_0; columns 24-31 into V_FIFO_0_B of YUV2BLOCK_NEW_1; columns 32-39 into V_FIFO_0_A of YUV2BLOCK_NEW_0; columns 40-47 into V_FIFO_0_A of YUV2BLOCK_NEW_1; columns 48-55 into V_FIFO_0_B of YUV2BLOCK_NEW_0; columns 56-63 into V_FIFO_0_B of YUV2BLOCK_NEW_1; and so on for the remaining columns until all V components of rows 0/8/16/… are buffered. All V components of rows 1/9/17/… are buffered by the same rule into V_FIFO_1_A and V_FIFO_1_B, and all V components of rows 7/15/23/… into V_FIFO_7_A and V_FIFO_7_B.
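The routing rules enumerated above for Y components follow a regular pattern: the column selects the conversion module and the A/B bank, and the row selects the FIFO index. A sketch of that pattern is given below (the function name is hypothetical; chroma routing, which additionally skips discarded rows and columns, is omitted for brevity):

```python
def route_y(row, col, mode):
    """Route a Y sample to (module, fifo_index, bank): module 0/1 selects
    YUV2BLOCK_NEW_0 or _1, fifo_index selects Y_FIFO_n, bank is A or B."""
    w = 8 if mode == "YUV444" else 16
    module = (col // w) % 2                        # alternate modules per column group
    bank = "A" if (col // (2 * w)) % 2 == 0 else "B"  # alternate banks every two groups
    return module, row % w, bank
```

For example, in YUV420 mode columns 0-15 of row 0 land in Y_FIFO_0_A of YUV2BLOCK_NEW_0 and columns 32-47 in Y_FIFO_0_B of the same module, matching the enumeration above.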
It can be seen that, under the different sampling modes, the 32 Y-component, 32 U-component, and 32 V-component buffer queues allow one conversion module (YUV2BLOCK_NEW_0 or YUV2BLOCK_NEW_1) to construct two BLOCKs synchronously. This embodiment can therefore compress data synchronously: YUV2BLOCK_NEW_0 and YUV2BLOCK_NEW_1 simultaneously construct BLOCK0 and BLOCK1, so BLOCK0 and BLOCK1 are compressed synchronously. Meanwhile, YUV2BLOCK_NEW_0 constructs BLOCK0 and BLOCK2 at the same time, and their compression overlaps in time even though it is not fully synchronous; similarly, YUV2BLOCK_NEW_1 constructs BLOCK1 and BLOCK3 at the same time, and their compression also overlaps in time without being fully synchronous. Compression efficiency is thereby improved.
Reading from the buffers in the YUV420 mode proceeds as follows. Y_FIFO_0_A in YUV2BLOCK_NEW_0 is read 16 times while Y_FIFO_0_A in YUV2BLOCK_NEW_1 is synchronously read 16 times, and so on through Y_FIFO_15_A in YUV2BLOCK_NEW_0 (16 reads) and Y_FIFO_15_A in YUV2BLOCK_NEW_1 (16 synchronous reads), thereby reading a 16×16 set of Y components to form 16×16 BLOCK data. U_FIFO_0_A in YUV2BLOCK_NEW_0 is read 8 times while U_FIFO_0_A in YUV2BLOCK_NEW_1 is synchronously read 8 times, and so on through U_FIFO_7_A in each module (8 reads each), thereby reading an 8×8 set of U components to form 8×8 BLOCK data. V_FIFO_0_A in YUV2BLOCK_NEW_0 is read 8 times while V_FIFO_0_A in YUV2BLOCK_NEW_1 is synchronously read 8 times, and so on through V_FIFO_7_A in each module (8 reads each), thereby reading an 8×8 set of V components to form 8×8 BLOCK data. BLOCK data can thus be constructed from the read data and sent into the compression module.
Referring to fig. 3, the compression framework provided in this embodiment includes two BLOCK conversion modules, YUV2BLOCK_NEW_0 (conversion block 0) and YUV2BLOCK_NEW_1 (conversion block 1), and two compression modules, compression block 0 and compression block 1. Compression module 0 contains two DCT units, DCT0 and DCT2, and compression module 1 contains two DCT units, DCT1 and DCT3.
Specifically, YUV2BLOCK_NEW_0 generates the first BLOCK0, which is sent to the DCT0 unit in compression block 0 for DCT transform processing. Since the DCT transform takes a long time, while DCT0 in compression block 0 is processing BLOCK0, the arbitration block generates another read sequence for YUV2BLOCK_NEW_0; Y_FIFO_0_B, Y_FIFO_1_B, … are then read to construct BLOCK2, which is input to DCT2 in compression block 0. The compression of BLOCK0 and BLOCK2 thus overlaps in time, so the two can be regarded as compressed in parallel.
Since YUV2BLOCK_NEW_0 generates BLOCK0, YUV2BLOCK_NEW_1 synchronously generates BLOCK1, so BLOCK0 and BLOCK1 can be compressed synchronously. Likewise, the compression of BLOCK1 and BLOCK3 overlaps in time, so BLOCK0, BLOCK1, BLOCK2, and BLOCK3 are all compressed with overlap in time. Each unit in the YUV2BLOCK modules and the compression modules is thus fully utilized, greatly increasing the data processing speed. The other units in a compression module include a quantization unit, an entropy coding unit, a framing unit, and so on. The quantization unit achieves a higher compression ratio while preserving image quality by using smaller quantization intervals for low-frequency components and representing high-frequency components with fewer bits. The entropy coding unit further compresses the video image, coding symbols with different code lengths according to their probability distribution. The framing unit determines the picture start flag, picture end flag, frame header, and so on. Since the two compression modules output compressed data synchronously in this embodiment, the frame header and frame tail must be added to the compressed data, which is then framed and output to DDR in order.
Therefore, in this embodiment, the data in SPACE_LOW and SPACE_HIGH in the DDR can be compressed synchronously, and adjacent blocks constructed by one YUV2BLOCK module (such as BLOCK0 and BLOCK2) overlap in compression time. This greatly increases the JPEG video compression speed in the baseboard management control chip, reduces the frame loss rate, shortens the buffering time of data in the chip, reduces the on-chip resource space occupied by the video compression function, and improves the overall performance of the chip.
In the following, a video processing apparatus provided by an embodiment of the present application is introduced. The video processing apparatus described below and the video processing method described above may be referred to in conjunction with each other.
Referring to fig. 4, an embodiment of the present application discloses a video processing apparatus, including:
an obtaining module 401, configured to obtain a video frame to be compressed;
a storage module 402, configured to divide all columns in a video frame into a plurality of groups according to a preset sampling mode, and store each group in a first memory area and a second memory area alternately;
a data block constructing module 403, configured to construct, starting from the first address of the first memory area, the data stored in the target address segment of the first memory area into a first data block according to the preset sampling mode, and meanwhile, starting from the first address of the second memory area, construct the data stored in the target address segment of the second memory area into a second data block according to the preset sampling mode;
a compressing module 404, configured to perform a compressing operation on the first data block and the second data block simultaneously.
In one embodiment, the compression module comprises:
a first compression unit for performing DCT transform in a compression operation on the first data block and the second data block simultaneously;
and a second compression unit, configured to, when the DCT transformation of the first data block and the second data block starts, construct the data stored in the next address segment of the first memory area into a new first data block according to the preset sampling mode, and meanwhile construct the data stored in the next address segment of the second memory area into a new second data block according to the preset sampling mode, so as to perform the compression operation on the new first data block and the new second data block simultaneously.
In one embodiment, the storage module is specifically configured to: when the preset sampling mode is the YUV422 mode or the YUV420 mode, divide all the columns in the video frame into a plurality of groups of 16 columns each according to the YUV422 mode or the YUV420 mode.
In one embodiment, the storage module is specifically configured to: when the preset sampling mode is the YUV444 mode, divide all the columns in the video frame into a plurality of groups of 8 columns each according to the YUV444 mode.
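The grouping rule in these two embodiments — 16-column groups for YUV422/YUV420 and 8-column groups for YUV444 — can be sketched as follows (the function and mode names are illustrative, not from the patent):

```python
def split_columns(frame_width, sampling_mode):
    """Divide the columns of a video frame into fixed-width groups.

    YUV422/YUV420 use 16-column groups, while YUV444 uses 8-column
    groups, matching the embodiments above. Assumes the frame width
    divides evenly by the group width.
    """
    group_width = 16 if sampling_mode in ("YUV422", "YUV420") else 8
    return [(start, start + group_width)
            for start in range(0, frame_width, group_width)]

groups_422 = split_columns(1920, "YUV422")   # e.g. a 1080p frame
groups_444 = split_columns(1920, "YUV444")
```

For a 1920-column frame this yields 120 groups in YUV422/YUV420 mode and 240 groups in YUV444 mode.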
In one embodiment, the storage module is specifically configured to:
arrange the groups according to the column sequence numbers in the video frame to obtain a group sequence; store the groups at odd positions in the group sequence into the first memory area and the groups at even positions into the second memory area; or store the groups at even positions in the group sequence into the first memory area and the groups at odd positions into the second memory area.
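Given such a group sequence, the alternate storage into the two memory areas reduces to taking every other group (an illustrative sketch; either parity may go to the first area, as the embodiment notes):

```python
def interleave_groups(group_sequence, odd_first=True):
    """Split an ordered group sequence between two memory areas.

    With odd_first=True, groups at odd positions (1st, 3rd, ...) go to
    the first memory area and groups at even positions to the second;
    with odd_first=False the assignment is swapped.
    """
    first_area = group_sequence[0::2] if odd_first else group_sequence[1::2]
    second_area = group_sequence[1::2] if odd_first else group_sequence[0::2]
    return first_area, second_area

space_low, space_high = interleave_groups(["g1", "g2", "g3", "g4", "g5"])
```

Interleaving the column groups this way is what lets the two conversion modules later read the two memory areas independently and build data blocks in parallel.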
In one embodiment, the data block constructing module is specifically configured to: when the preset sampling mode is the YUV422 mode or the YUV420 mode, read the Y component, the U component, and the V component of each pixel stored in the target address segment of the first memory area into the corresponding buffer queues according to the YUV422 mode or the YUV420 mode, and construct the first data block based on the buffer queues; and likewise read the Y component, the U component, and the V component of each pixel stored in the target address segment of the second memory area into the corresponding buffer queues and construct the second data block based on those buffer queues.
In one embodiment, the data block constructing module is specifically configured to: when the preset sampling mode is the YUV444 mode, read the Y component, the U component, and the V component of each pixel stored in the target address segment of the first memory area into the corresponding buffer queues according to the YUV444 mode, and construct the first data block based on the buffer queues; and likewise read the Y component, the U component, and the V component of each pixel stored in the target address segment of the second memory area into the corresponding buffer queues and construct the second data block based on those buffer queues.
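The per-component buffering in these embodiments can be sketched for the YUV444 case, where one data block carries an 8x8 matrix per component (the queue layout here is an assumption for illustration; the embodiment's FIFOs such as Y_FIFO_0_B are hardware queues):

```python
from collections import deque

def build_block_444(pixels):
    """Build one YUV444 data block from 64 (Y, U, V) pixel tuples.

    Each component is first pushed into its own buffer queue, then
    popped out again to form an 8x8 matrix per component.
    """
    y_q, u_q, v_q = deque(), deque(), deque()
    for y, u, v in pixels:
        y_q.append(y)
        u_q.append(u)
        v_q.append(v)
    to_matrix = lambda q: [[q.popleft() for _ in range(8)] for _ in range(8)]
    return {"Y": to_matrix(y_q), "U": to_matrix(u_q), "V": to_matrix(v_q)}

block = build_block_444([(i, 128, 128) for i in range(64)])
```

The resulting per-component matrices are exactly the 8x8 blocks that the DCT units consume downstream.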
In one embodiment, the method further comprises:
and a buffer multiplexing module, configured to, after the compression operation is performed on the first data block and the second data block simultaneously, store the Y component, the U component, and the V component of a subsequent new data block in the address space of the corresponding buffer queue where the Y component, the U component, and the V component of the first data block and the second data block were located.
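The buffer multiplexing idea — letting a new data block's components overwrite the queue space just vacated by a compressed block — can be sketched with a fixed pool of slots (a simplified software analogy for the hardware buffer reuse; the class and method names are hypothetical):

```python
class ReusableQueueSpace:
    """Fixed-capacity component buffer whose slots are recycled.

    Once a data block has been compressed, its slot is released and the
    next block's Y/U/V components are written into the same address
    space, so the buffer never grows beyond `capacity` blocks.
    """

    def __init__(self, capacity):
        self.slots = [None] * capacity
        self.free = list(range(capacity))

    def store(self, components):
        slot = self.free.pop(0)      # reuse the oldest released slot
        self.slots[slot] = components
        return slot

    def release(self, slot):         # called after compression finishes
        self.slots[slot] = None
        self.free.append(slot)

buf = ReusableQueueSpace(capacity=2)
a = buf.store({"Y": "block0-Y"})
b = buf.store({"Y": "block1-Y"})
buf.release(a)                       # block0 finished compressing
c = buf.store({"Y": "block2-Y"})     # lands in block0's old space
```

Reusing the same address space is what lets the embodiment save buffer space rather than allocating fresh queues for every block.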
In one embodiment, further comprising:
a format conversion module, configured to, if the video frame is in the RGB format, convert the RGB-format data stored in the first memory area and the second memory area into the YUV format before the data stored in the target address segment of the first memory area is constructed into the first data block and the data stored in the target address segment of the second memory area is constructed into the second data block according to the preset sampling mode.
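The RGB-to-YUV conversion performed by the format conversion module can be sketched with the common BT.601 full-range coefficients (an assumption for illustration — the patent does not specify which conversion matrix is used):

```python
def rgb_to_yuv(r, g, b):
    """Convert one RGB pixel to YUV using BT.601 full-range coefficients."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.169 * r - 0.331 * g + 0.5 * b + 128   # chroma offset by 128
    v = 0.5 * r - 0.419 * g - 0.081 * b + 128
    clamp = lambda x: max(0, min(255, round(x)))
    return clamp(y), clamp(u), clamp(v)

white = rgb_to_yuv(255, 255, 255)   # pure white: full luma, neutral chroma
black = rgb_to_yuv(0, 0, 0)         # pure black: zero luma, neutral chroma
```

Achromatic pixels map to neutral chroma (U = V = 128), which is the sanity check usually applied to such a converter.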
In one embodiment, the method further comprises:
and a compressed data storage module, configured to, after the first data block and the second data block are compressed to obtain compressed data, add a frame identifier to the compressed data and write the compressed data with the added frame identifier into a preset memory area.
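Adding a frame identifier before writing to the preset memory area can be sketched as wrapping each compressed payload with start and end markers (the JPEG SOI/EOI markers 0xFFD8/0xFFD9 are used here as an illustrative choice of identifier; the patent leaves the identifier format open):

```python
SOI = b"\xff\xd8"  # JPEG start-of-image marker, used as the frame head
EOI = b"\xff\xd9"  # JPEG end-of-image marker, used as the frame tail

def frame_compressed_data(payload):
    """Wrap one frame's compressed payload with frame identifiers."""
    return SOI + payload + EOI

def write_to_memory(memory, payload):
    """Append an identified frame to the preset memory area (a bytearray)."""
    memory.extend(frame_compressed_data(payload))

preset_area = bytearray()
write_to_memory(preset_area, b"\x01\x02\x03")  # a stand-in compressed payload
```

The markers let a downstream reader find frame boundaries in the preset memory area without any side-channel length information.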
For more specific working processes of each module and unit in this embodiment, reference may be made to corresponding contents disclosed in the foregoing embodiments, and details are not described here again.
Therefore, this embodiment provides a video processing apparatus that can improve video compression efficiency, save buffer space, and avoid frame loss during compression.
In the following, an electronic device provided by an embodiment of the present application is introduced. The electronic device described below and the video processing method and apparatus described above may be referred to in conjunction with each other.
Referring to fig. 5, an embodiment of the present application discloses an electronic device, including:
a memory 501 for storing a computer program;
a processor 502 for executing the computer program to implement the method disclosed in any of the embodiments above.
Further, an embodiment of the present application further provides a server as the electronic device. The server may specifically include: at least one processor, at least one memory, a power supply, a communication interface, an input output interface, and a communication bus. Wherein, the memory is used for storing a computer program, and the computer program is loaded and executed by the processor to implement the relevant steps in the video processing method disclosed in any of the foregoing embodiments.
In this embodiment, the power supply is configured to provide a working voltage for each hardware device on the server; the communication interface can create a data transmission channel between the server and external equipment, and the communication protocol followed by the communication interface is any communication protocol applicable to the technical scheme of the application, and the communication protocol is not specifically limited herein; the input/output interface is used for acquiring external input data or outputting data to the outside, and the specific interface type can be selected according to specific application requirements without specific limitation.
In addition, the memory is used as a carrier for resource storage, and may be a read-only memory, a random access memory, a magnetic disk, an optical disk, or the like, where the stored resources include an operating system, a computer program, data, and the like, and the storage manner may be a transient storage manner or a permanent storage manner.
The operating system is used for managing and controlling the hardware devices and computer programs on the server so that the processor can operate on and process the data in the memory, and may be Windows Server, Netware, Unix, Linux, or the like. In addition to the computer program that performs the video processing method disclosed in any of the foregoing embodiments, the stored computer programs may include programs for other specific tasks. The data may include, in addition to data such as virtual machines, data such as developer information of the virtual machines.
Further, the embodiment of the application also provides a terminal as the electronic device. The terminal may specifically include, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, or the like.
Generally, the terminal in this embodiment includes: a processor and a memory.
The processor may include one or more processing cores, such as a 4-core or 8-core processor. The processor may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). The processor may also include a main processor and a coprocessor: the main processor, also called a CPU (Central Processing Unit), processes data in the awake state, while the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be shown on the display screen. In some embodiments, the processor may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory may include one or more computer-readable storage media, which may be non-transitory. The memory may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In this embodiment, the memory is at least used for storing a computer program, wherein after being loaded and executed by the processor, the computer program can realize relevant steps in the video processing method executed by the terminal side disclosed in any one of the foregoing embodiments. In addition, the resources stored by the memory may also include an operating system, data and the like, and the storage mode may be a transient storage mode or a permanent storage mode. The operating system may include Windows, unix, linux, and the like. The data may include, but is not limited to, update information for the application.
In some embodiments, the terminal may further include a display, an input/output interface, a communication interface, a sensor, a power source, and a communication bus.
A readable storage medium provided in the embodiments of the present application is introduced below. The readable storage medium described below and the video processing method, apparatus, and device described above may be referred to in conjunction with each other.
A readable storage medium for storing a computer program, wherein the computer program, when executed by a processor, implements the video processing method disclosed in the foregoing embodiments.
The principles and embodiments of the present application are explained herein using specific examples; the above description of the embodiments is only intended to help understand the method and core idea of the present application. Meanwhile, for those skilled in the art, there may be variations in the specific embodiments and application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (15)

1. A video processing method, comprising:
acquiring a video frame to be compressed;
dividing all the columns in the video frame into a plurality of groups according to a preset sampling mode, and alternately storing the groups into a first memory area and a second memory area;
constructing data stored in a target address field in the first memory area into a first data block according to the preset sampling mode from the first address of the first memory area; meanwhile, starting from the first address of the second memory area, constructing data stored in the object address field in the second memory area into a second data block according to the preset sampling mode;
and simultaneously performing compression operation on the first data block and the second data block.
2. The method of claim 1,
the preset sampling mode is as follows: YUV422 mode or YUV420 mode;
correspondingly, the dividing all columns in the video frame into a plurality of groups according to a preset sampling mode includes:
and dividing all columns in the video frame into a plurality of groups with 16 columns according to the YUV422 mode or the YUV420 mode.
3. The method of claim 1,
the preset sampling mode is as follows: a YUV444 mode;
correspondingly, the dividing all columns in the video frame into a plurality of groups according to a preset sampling mode includes:
and dividing all columns in the video frame into a plurality of groups with 8 columns according to the YUV444 mode.
4. The method of claim 1, wherein alternately storing the groups into the first memory region and the second memory region comprises:
arranging each group according to the size of the column serial numbers of all columns in the video frame to obtain a group sequence;
and storing the groups with the odd arrangement positions in the group sequence into the first memory area, and storing the groups with the even arrangement positions in the group sequence into the second memory area.
5. The method of claim 1,
the preset sampling mode is as follows: YUV422 mode or YUV420 mode;
correspondingly, the constructing the data stored in the target address field in the first memory area as the first data block according to the preset sampling mode includes:
respectively reading the Y component, the U component and the V component of each pixel point stored in the target address field to corresponding cache queues according to the YUV422 mode or the YUV420 mode, and constructing the first data block based on each cache queue;
correspondingly, the constructing the data stored in the object address field in the second memory area into a second data block according to the preset sampling mode includes:
and respectively reading the Y component, the U component and the V component of each pixel point stored in the object address field to corresponding buffer queues according to the YUV422 mode or the YUV420 mode, and constructing the second data block based on each buffer queue.
6. The method of claim 1,
the preset sampling mode is as follows: a YUV444 mode;
correspondingly, the constructing the data stored in the target address field in the first memory area as the first data block according to the preset sampling mode includes:
according to the YUV444 mode, respectively reading the Y component, the U component and the V component of each pixel point stored in the target address field to corresponding cache queues, and constructing the first data block based on each cache queue;
correspondingly, the constructing the data stored in the object address field in the second memory area into a second data block according to the preset sampling mode includes:
and reading the Y component, the U component and the V component of each pixel point stored in the object address field to corresponding buffer queues according to the YUV444 mode, and constructing the second data block based on each buffer queue.
7. The method of claim 5 or 6, wherein after the performing the compression operation on the first data block and the second data block simultaneously, further comprising:
and storing the Y component, the U component and the V component of a subsequent new data block in the address space where the Y component, the U component and the V component of the first data block and the second data block are located in the corresponding buffer queue.
8. The method according to any one of claims 1 to 6, wherein the data stored in the target address field in the first memory area is constructed into a first data block according to the preset sampling mode from the first address of the first memory area; before constructing the data stored in the object address field in the second memory area into the second data block according to the preset sampling mode, starting from the first address of the second memory area, the method further includes:
and if the video frame is in the RGB format, converting the RGB format data stored in the first memory area and the second memory area into YUV format.
9. The method of any of claims 1 to 6, wherein the performing the compression operation on the first data block and the second data block simultaneously comprises:
and performing DCT transformation in a compression operation on the first data block and the second data block simultaneously.
10. The method of claim 9, further comprising:
when the first data block and the second data block start to perform DCT transformation, constructing data stored in a next address field in the first memory area as a new first data block according to the preset sampling mode, and constructing data stored in a next address field in the second memory area as a new second data block according to the preset sampling mode, so as to perform compression operation on the new first data block and the new second data block at the same time.
11. The method of any of claims 1 to 6, further comprising:
after the first data block and the second data block are compressed to obtain compressed data, adding a frame identifier for the compressed data, and writing the compressed data added with the frame identifier into a preset memory area.
12. A video processing apparatus, comprising:
the acquisition module is used for acquiring a video frame to be compressed;
the storage module is used for dividing all the columns in the video frame into a plurality of groups according to a preset sampling mode and alternately storing the groups into a first memory area and a second memory area;
a data block constructing module, configured to construct, from a first address of the first memory area, a first data block from data stored in a target address segment in the first memory area according to the preset sampling mode; meanwhile, starting from the first address of the second memory area, constructing data stored in the object address field in the second memory area into a second data block according to the preset sampling mode;
and the compression module is used for simultaneously performing compression operation on the first data block and the second data block.
13. The apparatus of claim 12, wherein the compression module comprises:
a first compression unit for performing DCT transform in a compression operation on the first data block and the second data block simultaneously;
and a second compression unit, configured to, when DCT transformation is started on the first data block and the second data block, construct data stored in a next address segment in the first memory area as a new first data block according to the preset sampling mode, and at the same time construct data stored in a next address segment in the second memory area as a new second data block according to the preset sampling mode, so as to perform compression operation on the new first data block and the new second data block at the same time.
14. An electronic device, comprising:
a memory for storing a computer program;
a processor for executing the computer program to implement the method of any one of claims 1 to 11.
15. A readable storage medium for storing a computer program, wherein the computer program when executed by a processor implements the method of any one of claims 1 to 11.
CN202211437518.2A 2022-11-17 2022-11-17 Video processing method, device, equipment and readable storage medium Active CN115499667B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211437518.2A CN115499667B (en) 2022-11-17 2022-11-17 Video processing method, device, equipment and readable storage medium


Publications (2)

Publication Number Publication Date
CN115499667A true CN115499667A (en) 2022-12-20
CN115499667B CN115499667B (en) 2023-07-14

Family

ID=85115943

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211437518.2A Active CN115499667B (en) 2022-11-17 2022-11-17 Video processing method, device, equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN115499667B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1126411A (en) * 1995-01-06 1996-07-10 大宇电子株式会社 Apparatus for parallel encoding/decoding of digital video signals
CN101188761A (en) * 2007-11-30 2008-05-28 上海广电(集团)有限公司中央研究院 Method for optimizing DCT quick algorithm based on parallel processing in AVS
US20130182774A1 (en) * 2012-01-18 2013-07-18 Qualcomm Incorporated Indication of use of wavefront parallel processing in video coding
CN104349168A (en) * 2014-08-11 2015-02-11 大连戴姆科技有限公司 Ultra-high-speed image real-time compression method
CN113709489A (en) * 2021-07-26 2021-11-26 山东云海国创云计算装备产业创新中心有限公司 Video compression method, device, equipment and readable storage medium
CN114501024A (en) * 2022-04-02 2022-05-13 苏州浪潮智能科技有限公司 Video compression system, method, computer readable storage medium and server
CN115086668A (en) * 2022-07-21 2022-09-20 苏州浪潮智能科技有限公司 Video compression method, system, equipment and computer readable storage medium
CN115243047A (en) * 2022-07-22 2022-10-25 山东云海国创云计算装备产业创新中心有限公司 Video compression method, device, equipment and medium



Similar Documents

Publication Publication Date Title
CN113709489B (en) Video compression method, device, equipment and readable storage medium
CN105120293A (en) Image cooperative decoding method and apparatus based on CPU and GPU
US11915058B2 (en) Video processing method and device, electronic equipment and storage medium
CN112235579B (en) Video processing method, computer-readable storage medium and electronic device
CN112188280B (en) Image processing method, device and system and computer readable medium
CN115460414B (en) Video compression method and system of baseboard management control chip and related components
WO2024074012A1 (en) Video transmission control method, apparatus and device, and nonvolatile readable storage medium
CN104952088A (en) Method for compressing and decompressing display data
CN113573072B (en) Image processing method and device and related components
CN115209145A (en) Video compression method, system, device and readable storage medium
CN106227506A (en) A kind of multi-channel parallel Compress softwares system and method in memory compression system
CN113286174B (en) Video frame extraction method and device, electronic equipment and computer readable storage medium
US20130235272A1 (en) Image processing apparatus and image processing method
CN115499667B (en) Video processing method, device, equipment and readable storage medium
US20190324909A1 (en) Information processing apparatus and information processing method
US7082612B2 (en) Transmission apparatus of video information, transmission system of video information and transmission method of video information
CN101499245B (en) Asynchronous first-in first-out memory, liquid crystal display controller and its control method
WO2022042053A1 (en) Data processing method and system, and electronic device
US7928987B2 (en) Method and apparatus for decoding video data
US11189006B2 (en) Managing data for transportation
CN107241601B (en) Image data transmission method, device and terminal
CN116795442B (en) Register configuration method, DMA controller and graphics processing system
CN114428595B (en) Image processing method, device, computer equipment and storage medium
CN111813484A (en) Full-screen multiple anti-aliasing method and device for 2D desktop and graphics processor
CN113065998A (en) Ultrahigh-speed real-time image storage method and system and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant