WO2021007742A1 - Compression method for obtaining video file, decompression method, system, and storage medium - Google Patents


Info

Publication number
WO2021007742A1
Authority
WO
WIPO (PCT)
Prior art keywords
image data
color
pixel
image
video file
Prior art date
Application number
PCT/CN2019/095965
Other languages
French (fr)
Chinese (zh)
Inventor
周新生
李翔
阮俊瑾
张灵
潘永靖
Original Assignee
上海极清慧视科技有限公司
Priority date
Filing date
Publication date
Application filed by 上海极清慧视科技有限公司
Priority to PCT/CN2019/095965 (WO2021007742A1)
Priority to CN201980005157.4A (CN111406404B)
Publication of WO2021007742A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/186: using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being a colour or a chrominance component
    • H04N 19/182: using adaptive coding characterised by the coding unit, the unit being a pixel
    • H04N 19/42: characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N 19/44: Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder

Definitions

  • This application relates to the field of image processing technology, and in particular to a compression method, decompression method, system and storage medium for obtaining video files.
  • As people's expectations for the playback quality of multimedia data grow, the amount of multimedia playback data is also increasing. For multimedia playback data with a large data volume, even after compression and encoding in the traditional way, the data volume still cannot be kept within a range that can be transmitted stably. To enable stable transmission, the data usually can only be compressed further, but this causes serious loss of color information and cannot meet the demand for high image quality. A better compression method is therefore desired, one that achieves high-fidelity effects for multimedia playback data with a large data volume while ensuring transmission stability.
  • The purpose of this application is to provide a compression method, decompression method, system and storage medium for obtaining video files, so as to solve the difficulty of transmitting ultra-high-definition video data in the prior art.
  • The first aspect of this application provides a compression method for obtaining a video file, which includes the following steps: obtaining multiple pieces of image data to be compressed in chronological order, where the image data is used to display video images of UHD 4K and above pixels; based on the color attribute of each pixel in the image data, mapping the color value of each pixel in each piece of image data to a pixel position in one of multiple image blocks; and compressing the image blocks that correspond to the multiple pieces of image data and have the same color attribute, to obtain a video file.
  • In some embodiments, the step of mapping the color value of each pixel to a pixel position in one of the multiple image blocks includes: traversing the image data according to a color format set based on the Bayer format of the image data; during the traversal, based on the color attribute of each pixel in the color format, the color value of each pixel is extracted from the image data and mapped to a pixel position in the corresponding image block.
  • In some embodiments, each pixel in the image data represents an RGB color attribute.
  • In such embodiments, the step of mapping the color value of each pixel to a pixel position in one of the multiple image blocks includes: traversing the image data according to a color format set based on the pixel row format of the image data; during the traversal, based on the color attribute of each pixel in the color format, the color principal component or color fitting component of each pixel is extracted from the image data and mapped to a pixel position in the corresponding image block.
  • In some embodiments, the step of compressing the image blocks that correspond to the multiple pieces of image data and have the same color attribute includes: according to the color attributes in the color format, sequentially inputting the multiple image blocks corresponding to the multiple pieces of image data to a first encoder for compression processing.
  • In other embodiments, the step of compressing the image blocks that correspond to the multiple pieces of image data and have the same color attribute includes: under synchronous control, using multiple second encoders to respectively compress the image blocks with the same color attribute.
  • In some embodiments, the video file obtained by compression with the multiple second encoders includes synchronization information set for decompressing the video file to restore the multiple pieces of image data.
  • The second aspect of the present application provides a method for decompressing a video file, including: obtaining a video file; decompressing the video file according to the compression mode used for the video file to obtain multiple image blocks, where, according to the color attribute of each image block, the obtained image blocks correspond to each of the multiple pieces of image data to be generated; according to the color attribute, mapping the color value of each pixel position in the corresponding image blocks to the pixels of the image data; and, based on the color value of each pixel in the image data, generating a video image for displaying UHD 4K and above pixels.
  • In some embodiments, the step of decompressing the video file according to the compression mode to obtain multiple image blocks includes: using multiple second decoders, under synchronous control, to decompress the video file according to the respective color attributes; each second decoder outputs multiple image blocks with the same color attribute, and each image block corresponds to a piece of image data to be generated.
  • In some embodiments, each second decoder determines the correspondence between the image blocks to be decompressed and the piece of image data to be generated according to the synchronization information in the video file.
  • In other embodiments, the step of decompressing the video file according to the compression mode to obtain multiple image blocks includes: using a first decoder to decompress the received video file to obtain multiple groups of image blocks divided according to the different color attributes in the color format; each image block in each group corresponds to a piece of image data to be generated.
  • In some embodiments, the step of mapping the color value of each pixel position in the corresponding image blocks to the pixels of the image data includes: according to the color format, traversing the pixel positions in the image block of each color attribute, and during the traversal, mapping the color value of the corresponding pixel position in each image block to the pixel position in the corresponding image data to generate the image data; the color value of each pixel position in the generated image data represents a single color attribute.
  • In some embodiments, the step of generating a video image for displaying UHD 4K and above pixels based on the mapped color value of each pixel further includes: according to the color format, interpolating each pixel position in the obtained image data to obtain a video image in which every pixel contains the RGB color attributes (see the sketch below).
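  • The following is a minimal illustrative sketch, not taken from the patent text, of the decompression-side remapping, assuming the Bayer layout used in the embodiments described later (odd rows Gr, R, ...; even rows B, Gb, ...): the four decoded single-color blocks are written back to their pixel positions to rebuild one frame's mosaic, after which a Debayer/interpolation step yields the full-RGB video image.

```python
# Illustrative sketch (assumption: GRBG-style layout as in the examples below):
# reassemble one frame's Bayer mosaic from four decoded single-color blocks.
import numpy as np

def reassemble_bayer(gr, r, b, gb):
    """Each input is an (H/2, W/2) plane; returns the (H, W) Bayer mosaic."""
    h, w = gr.shape
    mosaic = np.zeros((2 * h, 2 * w), dtype=gr.dtype)
    mosaic[0::2, 0::2] = gr   # first row of each pair, first column: Gr samples
    mosaic[0::2, 1::2] = r    # first row of each pair, second column: R samples
    mosaic[1::2, 0::2] = b    # second row of each pair, first column: B samples
    mosaic[1::2, 1::2] = gb   # second row of each pair, second column: Gb samples
    return mosaic

# A demosaicing (Debayer) step would then interpolate the two missing color
# components at every pixel position to produce the full-RGB video image.
```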
  • The third aspect of the present application provides a compression device, including: a communication interface for communicating with an external decompression device; a memory for storing at least one program and the image data to be compressed; and a processor for coordinating the communication interface and the memory to execute the program, and during execution compressing the image data according to the compression method for obtaining a video file described in the first aspect of this application, so as to obtain a video file.
  • The fourth aspect of the present application provides a decompression device, including: a communication interface for communicating with an external compression device; a memory for storing at least one program and a video file to be decompressed; and a processor for coordinating the communication interface and the memory to execute the program, and during execution decompressing the video file according to the decompression method described in the second aspect of this application, so as to play the video file.
  • The fifth aspect of the present application provides a video transmission system, including the compression device described in the third aspect of this application and the decompression device described in the fourth aspect of this application.
  • The sixth aspect of the present application provides a computer-readable storage medium storing at least one program; when called, the at least one program executes the compression method for obtaining a video file described in any embodiment of the first aspect of this application, or executes the method for decompressing a video file described in the second aspect of this application.
  • As described above, the compression method, decompression method, system and storage medium for obtaining video files of this application have the following beneficial effects:
  • The compression method, decompression method, system and storage medium provided by this application can effectively reduce the code stream while ensuring high-fidelity picture quality.
  • The combined amount of data in the multiple image blocks is much lower than the amount of data processed when compression is performed using traditional methods.
  • Compared with the YUV422 format, it has only half the data volume; compared with the YUV444 format, only one third, yet the amount of information carried by the compression method of this application is equivalent to that of the YUV444 format.
  • For an 8K source, the image block of each color attribute is equivalent to 4K video in YUV400 format carrying only brightness information, and the combined data volume is only half that of the YUV422 format. Since the compression method in this application can effectively reduce the amount of data, 8K video can be encoded by a 4K encoder from the prior art. In the same way, 4K video can be encoded by a 2K video encoder, and 16K video can be processed by an 8K video encoder. Moreover, with the compression method of the present application, RGB video images or Bayer-format images can be compressed directly, without conversion to the YUV format.
  • The bit stream rate generated by the compression method of this application can be controlled at about half of that of YUV422, that is, 24 to 80 Mbps.
  • Since the current stable uplink peak of 5G is about 90 Mbps, 5G real-time transmission of 8K video can be realized with high-fidelity picture quality.
  • FIG. 1 shows a flowchart of an embodiment of the compression method in this application
  • FIG. 2 shows a schematic diagram of image data in an embodiment of this application
  • FIG. 3 shows a schematic diagram of an embodiment of a mapping method for mapping the color value of each pixel in each image data to each pixel position in multiple image blocks in this application;
  • FIG. 4 shows a schematic diagram of an embodiment in which each pixel in the image data in this application represents RGB color attributes
  • FIG. 5 shows a schematic diagram of another embodiment of the mapping method of mapping the color value of each pixel in each image data to each pixel position in multiple image blocks in this application;
  • FIG. 6 shows a schematic diagram of an embodiment of using a first encoder to perform compression processing in this application
  • FIG. 7 shows a schematic diagram of another embodiment of using a first encoder to perform compression processing in this application
  • FIG. 8 shows a schematic diagram of another embodiment in which each pixel in the image data in this application represents RGB color attributes
  • FIG. 9 shows a schematic diagram of another embodiment when the compression device in this application does not know the principal component in each pixel in advance
  • FIG. 10 shows a schematic diagram of an embodiment of the mapping method for mapping the color value of each pixel in each image data to each pixel position in multiple image blocks when the compression device in this application does not know the principal component in each pixel in advance;
  • FIG. 11 shows a schematic diagram of another embodiment when the compression device in this application does not know the principal component in each pixel in advance;
  • FIG. 12 shows a schematic diagram of another embodiment of the mapping method for mapping the color value of each pixel in each image data to each pixel position in multiple image blocks when the compression device in this application does not know the principal component in each pixel in advance;
  • FIG. 13 shows a schematic diagram of another embodiment when the compression device in this application does not know the principal component in each pixel in advance;
  • FIG. 14 shows a schematic diagram of another embodiment of the mapping method for mapping the color value of each pixel in each image data to each pixel position in multiple image blocks when the compression device in this application does not know the principal component in each pixel in advance.
  • FIG. 15 shows a flowchart of the decompression method in an embodiment
  • FIG. 16 shows a schematic diagram of an embodiment of using a first decoder to perform decompression processing in this application
  • FIG. 17 is a schematic diagram of another embodiment of using a first decoder to perform decompression processing in this application.
  • FIG. 18 shows a schematic diagram of an embodiment in which the decompression device in this application maps the color value of each pixel position in the corresponding image block to the pixel of the image data;
  • Figure 19 shows a schematic diagram of an embodiment of the compression device in this application.
  • FIG. 20 shows a schematic diagram of an embodiment of the decompression device in this application.
  • FIG. 21 shows a schematic structural diagram of the video transmission system in this application in an embodiment.
  • Although the terms first, second, etc. are used herein to describe various elements in some instances, these elements should not be limited by these terms; the terms are only used to distinguish one element from another.
  • For example, the first decoder may be referred to as the second decoder, and similarly the second decoder may be referred to as the first decoder, without departing from the scope of the various described embodiments. The first decoder and the second decoder are both decoders, but unless the context clearly indicates otherwise, they are not the same decoder.
  • As used herein, "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A, B and C".
  • An exception to this definition will only occur when the combination of elements, functions, steps or operations is inherently mutually exclusive in some way.
  • Taking ultra-high-definition (e.g., 8K) video as an example, even if the YUV422 format is used and half of the color components are discarded, the data volume still reaches about 2 GByte/s. At this data volume, compression encoding with existing technology yields a code stream of 48 to 160 Mbps. Even with the latest 5G technology, the average uplink speed of 5G CPE is only 80 to 90 Mbps, which still cannot meet the requirement of stable transmission. On the other hand, most compression of multimedia playback data adopts the YUV422 or YUV420 format, in which color information is seriously lost and the demand for high image quality cannot be met. A better compression method is therefore desired, one that can achieve high-fidelity effects for multimedia playback data with a large data volume even at low bit rates.
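  • As a quick sanity check of these figures, the sketch below computes the raw data rates under assumptions of my own (7680 x 4320 resolution, 8 bits per sample, 30 frames per second); none of these parameters are stated explicitly in the text.

```python
# Back-of-the-envelope check of the data-volume figures above.
# Assumed parameters (not stated in the patent text): 7680x4320, 8-bit, 30 fps.
width, height, fps, bytes_per_sample = 7680, 4320, 30, 1

pixels = width * height
rates = {
    "YUV444 (3 samples/pixel)": pixels * 3 * bytes_per_sample * fps,
    "YUV422 (2 samples/pixel)": pixels * 2 * bytes_per_sample * fps,
    "Four single-color planes (1 sample/pixel)": pixels * 1 * bytes_per_sample * fps,
}
for name, rate in rates.items():
    print(f"{name}: {rate / 1e9:.2f} GByte/s raw")

# YUV422 comes out near 2 GByte/s, and the four single-color planes together
# carry half of that, consistent with the roughly halved code stream
# (24-80 Mbps versus 48-160 Mbps) reported after encoding.
```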
  • In view of this, this application provides a compression method for obtaining video files, to solve the above problems and make the playback of high-definition video smoother and of higher fidelity.
  • the compression method is mainly implemented by an image compression device, where the compression device may be a terminal device or a server.
  • the terminal equipment includes, but is not limited to, camera equipment, personal electronic terminal equipment, and the like.
  • the camera equipment includes a camera device, a storage device, a processing device, and may also include an interface device.
  • the camera device is used to acquire image data, wherein the image data is composed of multiple image data set based on colors.
  • the imaging device includes at least a lens composed of a lens group, a light sensing device, etc., where the light sensing device includes, for example, a CCD device, a CMOS device, and the like.
  • the storage device may include a high-speed random access memory, and may also include a non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.
  • the storage device also includes a memory controller, which can control access to the memory by other components of the device, such as a CPU and a peripheral interface.
  • the storage device is used to store at least one program and image data to be encoded.
  • the program stored in the storage device includes an operating system, a communication module (or instruction set), a graphics module (or instruction set), a text input module (or instruction set), and an application (or instruction set).
  • the program in the storage device also includes an instruction set for performing an encoding operation on the image data in time sequence based on the technical solution provided by the compression method.
  • The processing device includes, but is not limited to: a CPU, GPU, FPGA (Field-Programmable Gate Array), ISP (Image Signal Processing chip), or another processing chip dedicated to executing at least the program stored in the storage device (such as an AI-dedicated chip), etc.
  • the processing device calls and executes at least one program stored in the storage device to perform compression processing on the stored image data according to the compression method.
  • the interface device includes, but is not limited to: a data line interface and a network interface; among them, examples of the data line interface include at least one of the following: serial interfaces such as USB, and parallel interfaces such as bus interfaces.
  • The network interfaces include at least one of the following: short-range wireless network interfaces such as Bluetooth-based and WiFi network interfaces; wireless network interfaces of mobile networks based on 3G, 4G, or 5G protocols; and wired network interfaces such as those using network cards.
  • the camera device is set on a pan-tilt above the road to monitor vehicle violations, such as speeding, red light running, etc.
  • the camera device is configured on a minimally invasive medical device, and the camera device is set at the front end of the hose through an optical fiber or other dedicated data cable.
  • the camera device is configured on a high-speed moving track of a stadium to capture high-definition pictures of competitive games.
  • the electronic terminal equipment for personal use includes desktop computers, notebook computers, tablet computers, and editing equipment dedicated to the production of TV programs, movies, TV series, and the like.
  • the electronic terminal equipment includes a storage device and a processing device.
  • the storage device and the processing device may be the same or similar to the corresponding devices in the aforementioned camera equipment, and will not be described in detail here.
  • the electronic terminal equipment may also include a camera device for capturing image data.
  • The hardware and software modules of the camera device may be the same as or similar to the corresponding device in the aforementioned camera equipment, and will not be repeated here.
  • the electronic terminal device may further include an image acquisition interface for acquiring image data.
  • the image acquisition interface may be a network interface, a data line interface, or a program interface.
  • the network interface and the data line interface can be the same or similar to the corresponding devices in the aforementioned camera equipment, and will not be described in detail here.
  • the processing device of the electronic terminal equipment downloads image data from the Internet.
  • the processing device of the electronic terminal device obtains the image data displayed by the drawing software on the display screen.
  • The drawing software is, for example, PS (Photoshop) software or screenshot software.
  • the processing device of the electronic terminal device obtains one frame of image data in the unedited high-definition video from the storage device.
  • the server includes but is not limited to a single server, a server cluster, a distributed server, a server based on cloud technology, and the like.
  • the server includes a storage device, a processing device, an image acquisition interface, and the like.
  • the storage device and the processing device may be configured in the same physical server device, or be configured in multiple physical server devices according to the division of labor of each physical server device.
  • the image acquisition interface may be a network interface or a data line interface.
  • The storage device, processing device, image acquisition interface, etc. included in the server may be the same as the corresponding devices mentioned for the aforementioned terminal equipment, or may be devices specifically configured for the server based on its throughput, processing capacity, and storage requirements.
  • the storage device may also include a solid state drive or the like.
  • the processing device may also include a CPU dedicated to a server or the like.
  • the image acquisition interface in the server acquires image data and encoding instructions from the Internet, and the processing device executes the compression method described in this application on the acquired image data based on the encoding instructions.
  • The resulting video file can be stored in a storage medium, or transmitted to a decompression device using a communication transmission mode of 60 Mbps or above.
  • the transmission method includes, but is not limited to: a wireless transmission method based on the 5G communication protocol, or optical fiber transmission.
  • FIG. 1 shows a flowchart of the compression method in an embodiment.
  • In step S110, multiple pieces of image data to be compressed are acquired in chronological order; the image data is used to display video images of UHD 4K and above pixels.
  • the image data includes, but is not limited to: ultra-high-definition images (such as 4K images or 8K images), and images that have been compressed and decompressed.
  • the image data is a high-definition image from an original video captured by a high-definition camera.
  • the image data is a high-definition image transmitted through a dedicated data channel.
  • the image data is an image that comes from the Internet and needs to be re-encoded.
  • the format of the image data can be Bayer format, or RGB image generated after Debayer, or format such as YUV.
  • the image data is in Bayer format directly generated by the sensor of a high-definition camera.
  • In other embodiments, the Bayer-format data generated by the sensor of a high-definition camera undergoes Debayer, that is, the other two color components are fitted onto the single color component of each pixel in the Bayer format to generate an RGB image, and the RGB image is used as the image data to be processed.
  • Debayer, i.e., demosaicing, is a digital image processing algorithm whose purpose is to reconstruct a full-color image from the incomplete color samples output by a photosensitive element covered with a color filter array (CFA). This method is also called color filter array interpolation (CFA interpolation) or color reconstruction.
  • A video file is composed of several frames of image data; therefore, the compression device acquires the frames of image data to be processed in chronological order, so as to process them sequentially.
  • UHD stands for Ultra High Definition. UHD 4K and above refers to video images with a resolution of 4K pixels or above, such as 8K or 16K. This embodiment takes 8K as an example, but the same principle can also be used to compress 4K, 16K, or even higher-definition video images.
  • the image data used to display a video image of UHD 4K and above pixels may be the aforementioned Bayer format image data or the RGB format image data.
  • the image data in the RGB format includes image data in the RGB format itself and image data in other formats (such as YUV format, etc.) that can be converted into the RGB format.
  • In step S120, the compression device maps the color value of each pixel in each piece of image data to a pixel position in one of multiple image blocks, based on the color attribute of each pixel in the image data.
  • the pixel is the basic unit of image display.
  • Each pixel has different color attributes according to the format of the image data in which it is located.
  • For example, for image data in the Bayer format, the color attribute of a pixel is a single color component; for image data in the RGB format, the color attribute of a pixel includes the three color components red (R), green (G), and blue (B). Since human eyes are more sensitive to green than to other colors, the number of G samples is usually twice that of the other color components; therefore, in some embodiments the G component is represented as a Gr component or a Gb component.
  • each pixel has a color value corresponding to its color attribute.
  • When the image data is in the Bayer format, please refer to FIG. 2.
  • In FIG. 2, each square represents a pixel, and each pixel has only a single color component of R, G, or B, where the G component is represented as a Gr component or a Gb component.
  • For Bayer-format image data, the color value of each pixel is the brightness value of its single color component; for RGB-format image data, the color value of each pixel includes the brightness values of all the color components in that pixel.
  • In the following, a single piece of image data is used as an example. It should be understood that multiple pieces of image data are processed separately in the same way, and the processed results are provided to step S130 respectively.
  • the compression device divides the image data into a plurality of image blocks based on color attributes, in order to ensure the correlation between each pixel in the image data and each pixel in the image block, so that it can be decoded.
  • the compression device maps the color value of each pixel in each image data to each pixel position in a plurality of image blocks.
  • In some embodiments, step S120 includes: traversing the image data according to a color format set based on the Bayer format of the image data; during the traversal, based on the color attribute of each pixel in the color format, the color value of each pixel is extracted from the image data and mapped to a pixel position in the corresponding image block.
  • Each pixel in the image data represents a single color attribute.
  • Four (2 × 2) pixels are determined as one color format 101, because when Bayer data is scanned, the odd rows usually output G, R, G, R... and the even rows output B, G, B, G..., so one color format 101 contains four pixels with different color attributes.
  • the image data is traversed to extract each pixel data and form multiple image blocks.
  • FIG. 3 shows a schematic diagram of an embodiment of a mapping method for mapping the color value of each pixel in each image data to each pixel position in a plurality of image blocks in this application.
  • As shown in FIG. 3, the odd-numbered rows in the image data are Gr, R, Gr, R... and the even-numbered rows are B, Gb, B, Gb...; here, the Gr and R of an odd row together with the B and Gb of the adjacent even row are determined as one color format 101.
  • The color format 101 containing the pixel in the first row and first column is defined as the origin (0, 0); that is, this color format 101 includes Gr(0,0), R(0,0), B(0,0) and Gb(0,0). As shown in FIG. 3, the color format shifted 1 unit to the right of color format 101 is numbered (0, 1), the one shifted 2 units to the right is (0, 2), the one shifted 3 units to the right is (0, 3), and so on; in the same way, the color format shifted 1 unit downward from color format 101 is numbered (1, 0), the one shifted 2 units downward is (2, 0), the one shifted 3 units downward is (3, 0), and so on.
  • After determining the position information of each pixel, the compression device extracts the color value of each pixel from the image data based on the color attribute of that pixel.
  • the compression device divides all pixel data in the image data into multiple image blocks based on color attributes, and each image block contains only one color attribute.
  • Since the color attributes in this embodiment include the four attributes R, Gr, Gb, and B, all the pixel data in the image data is divided into four image blocks: R, Gr, Gb, and B.
  • Specifically, Gr(0,0) is placed into the Gr image block; Gr(0,1) is placed at the position 1 unit to the right of Gr(0,0); Gr(0,2) at 2 units to the right; Gr(0,3) at 3 units to the right; Gr(1,0) at 1 unit below Gr(0,0); Gr(2,0) at 2 units below; Gr(3,0) at 3 units below; and so on. The pixels of the R, Gb, and B color attributes are handled in the same way, as illustrated in the code sketch below.
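  • The following is an illustrative sketch only, not taken from the patent text, of splitting a Bayer mosaic into four single-color image blocks, assuming the layout described above (odd rows Gr, R, ...; even rows B, Gb, ...); the frame size and data type are assumptions for the example.

```python
# Illustrative sketch: split a Bayer mosaic into four single-color image blocks,
# assuming a GRBG-style layout (the patent's odd rows carry Gr/R, even rows B/Gb).
import numpy as np

def split_bayer(image):
    """image: (H, W) Bayer mosaic; returns four (H/2, W/2) single-color blocks."""
    gr = image[0::2, 0::2]   # Gr samples: rows 1,3,5,... cols 1,3,5,... (1-indexed)
    r  = image[0::2, 1::2]   # R  samples: rows 1,3,5,... cols 2,4,6,...
    b  = image[1::2, 0::2]   # B  samples: rows 2,4,6,... cols 1,3,5,...
    gb = image[1::2, 1::2]   # Gb samples: rows 2,4,6,... cols 2,4,6,...
    return gr, r, b, gb

# Example: a hypothetical 8K frame yields four 3840x2160 (4K-sized) blocks,
# which is why a 4K-capable encoder can handle each block of an 8K source.
frame = np.zeros((4320, 7680), dtype=np.uint16)
gr, r, b, gb = split_bayer(frame)
print(gr.shape)  # (2160, 3840)
```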
  • FIG. 4 shows a schematic diagram of an embodiment in which each pixel in the image data in this application represents RGB color attributes.
  • the image data acquired by the compression device is an RGB image.
  • In this embodiment, the image data is obtained by applying Debayer to the Bayer format, that is, the other two color components are fitted onto the single color component of each pixel shown in FIG. 2, generating an RGB image.
  • In such embodiments, step S120 includes: traversing the image data according to a color format set based on the pixel row format of the image data; during the traversal, based on the color attribute of each pixel in the color format, the color principal component or color fitting component of each pixel is extracted from the image data and mapped to a pixel position in the corresponding image block.
  • In the Bayer format, each pixel has only a single color component; an encoder cannot directly encode the Bayer format, and a display device cannot directly display Bayer-format images. Therefore, the Bayer format usually needs to be Debayered to form the RGB format.
  • However, the amount of data after Debayer is very large, three times that of the Bayer format, which makes the code stream too large and affects transmission efficiency. Therefore, in some approaches, the RGB format is further converted to the YUV format, that is, a luminance-chrominance representation, and the chrominance is then subsampled to reduce the amount of data.
  • YUV422 reduces the data volume by 1/3 and YUV420 by 1/2 relative to RGB, but the data volume is still 2 times and 1.5 times that of the Bayer format, respectively. Moreover, converting to YUV422 or YUV420 causes serious loss of color information and cannot meet the demand for high image quality.
  • As shown in FIG. 4, each pixel in the image data has three color components, where the bolded part of each pixel represents the principal component of that pixel, and the non-bolded parts represent the other two components fitted based on the principal component.
  • Four (2 × 2) pixels are determined as one color format 101. Since each pixel in the color format 101 has three color components, in order to reduce the code stream after compression, only one color component is extracted from each pixel.
  • In some embodiments, the compression device knows the principal component of each pixel in advance and directly determines it as the color component to be extracted. In other embodiments, the compression device cannot know the principal component in advance, and determines a component of each pixel as the color component to be extracted according to a preset rule.
  • The preset rules include, but are not limited to: extracting G and R from odd rows and B and G from even rows; extracting G and B from odd rows and R and G from even rows; extracting R and G from odd rows and G and B from even rows; or extracting B and G from odd rows and G and R from even rows.
  • When the G component is distinguished as Gr and Gb, the preset rule may also be, for example but not limited to: extracting Gr and R from odd rows and B and Gb from even rows; extracting Gb and B from odd rows and R and Gr from even rows; extracting R and Gr from odd rows and Gb and B from even rows; or extracting B and Gb from odd rows and Gr and R from even rows. It should be understood that, owing to the limited color discrimination of the human eye, any of these extraction methods has a negligible impact on the final imaging effect.
  • In this embodiment, the compression device knows the principal component of each pixel in advance; it determines the principal component of each pixel as the color component to be extracted, and the color format 101 is determined as Gr and R for odd rows and B and Gb for even rows.
  • FIG. 5 shows a schematic diagram of another embodiment of the mapping method for mapping the color value of each pixel in each image data to each pixel position in multiple image blocks in this application.
  • The color format 101 containing the pixel in the first row and first column of the image data is defined as the origin (0, 0); that is, this color format 101 includes the pixels whose principal components are Gr(0,0), R(0,0), B(0,0) and Gb(0,0).
  • The color format shifted 1 unit to the right of color format 101 is numbered (0, 1), the one shifted 2 units to the right is (0, 2), the one shifted 3 units to the right is (0, 3), and so on; similarly, the color format shifted 1 unit downward is (1, 0), 2 units downward is (2, 0), 3 units downward is (3, 0), and so on.
  • After determining the position information of each pixel, the compression device extracts the color value of each pixel from the image data based on its color attribute. Since the compression device knows the principal component of each pixel in advance, it only needs to extract that principal component. The compression device divides all the principal components in the image data into multiple image blocks based on color attributes, each image block containing only one color attribute. Please continue to refer to FIG. 5: since the color attributes in this embodiment include the four attributes R, Gr, Gb, and B, all the extracted pixel data is divided into four image blocks: R, Gr, Gb, and B.
  • Specifically, Gr(0,0) is placed into the Gr image block; Gr(0,1) is placed at the position 1 unit to the right of Gr(0,0); Gr(0,2) at 2 units to the right; Gr(0,3) at 3 units to the right; Gr(1,0) at 1 unit below Gr(0,0); Gr(2,0) at 2 units below; Gr(3,0) at 3 units below; and so on. Each pixel of the R, Gb, and B color attributes is handled in the same way, as in the sketch below.
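  • The following is an illustrative sketch only, under my own assumptions rather than the patent's exact procedure: one color component is extracted per pixel from an RGB frame according to the rule "odd rows: Gr then R; even rows: B then Gb" (1-indexed rows), and the extracted values are collected into four single-color image blocks.

```python
# Illustrative sketch (assumed rule: odd rows Gr,R; even rows B,Gb; 1-indexed).
import numpy as np

def extract_principal_planes(rgb):
    """rgb: (H, W, 3) array in R, G, B channel order; returns gr, r, b, gb planes."""
    gr = rgb[0::2, 0::2, 1]   # rows 1,3,5,... cols 1,3,5,... -> take G (as Gr)
    r  = rgb[0::2, 1::2, 0]   # rows 1,3,5,... cols 2,4,6,... -> take R
    b  = rgb[1::2, 0::2, 2]   # rows 2,4,6,... cols 1,3,5,... -> take B
    gb = rgb[1::2, 1::2, 1]   # rows 2,4,6,... cols 2,4,6,... -> take G (as Gb)
    return gr, r, b, gb

frame = np.zeros((4320, 7680, 3), dtype=np.uint8)   # hypothetical 8K RGB frame
gr, r, b, gb = extract_principal_planes(frame)        # four 2160x3840 blocks
```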
  • In other embodiments, the compression device does not know the principal component of each pixel in advance. The following takes as examples the rules of extracting Gr and R from odd rows and B and Gb from even rows; extracting Gb and B from odd rows and R and Gr from even rows; extracting R and Gr from odd rows and Gb and B from even rows; and extracting B and Gb from odd rows and Gr and R from even rows.
  • FIG. 8 shows a schematic diagram of another embodiment in which each pixel in the image data in this application represents RGB color attributes.
  • In the embodiment of FIG. 8, the compression device does not know the principal component of each pixel in advance; the color components to be extracted are determined by the rule of extracting Gr and R from odd rows and B and Gb from even rows, so the color format is determined as Gr and R for odd rows and B and Gb for even rows. Since this color format is the same as that of the embodiment shown in FIG. 5, the compression process is not repeated here.
  • FIG. 9 shows a schematic diagram of another embodiment when the compression device in this application does not know the principal component in each pixel in advance.
  • In this case, the color components to be extracted are determined according to the rule of extracting Gb and B from odd rows and R and Gr from even rows, so the color format 101 is determined as Gb and B for odd rows and R and Gr for even rows.
  • FIG. 10 shows a schematic diagram of the mapping of the color value of each pixel in each image data to each pixel position in multiple image blocks when the compression device does not know the principal component of each pixel in advance.
  • The color format 101 containing the pixel in the first row and first column of the image data is defined as the origin (0, 0); that is, this color format 101 includes the pixels from which Gb(0,0), B(0,0), R(0,0) and Gr(0,0) are extracted.
  • The color format shifted 1 unit to the right of color format 101 is numbered (0, 1), the one shifted 2 units to the right is (0, 2), the one shifted 3 units to the right is (0, 3), and so on; similarly, the color format shifted 1 unit downward is (1, 0), 2 units downward is (2, 0), 3 units downward is (3, 0), and so on.
  • After determining the position information of each pixel, the compression device extracts the color value of each pixel from the image data based on its color attribute and divides the values into multiple image blocks, each containing only one color attribute. Since the color attributes in this embodiment include the four attributes R, Gr, Gb, and B, all the extracted pixel data is divided into four image blocks: R, Gr, Gb, and B.
  • Specifically, Gb(0,0) is placed into the Gb image block; Gb(0,1) is placed at the position 1 unit to the right of Gb(0,0); Gb(0,2) at 2 units to the right; Gb(0,3) at 3 units to the right; Gb(1,0) at 1 unit below Gb(0,0); Gb(2,0) at 2 units below; Gb(3,0) at 3 units below; and so on.
  • FIG. 11 shows a schematic diagram of another embodiment when the compression device in this application does not know the principal component in each pixel in advance.
  • In this case, the color components to be extracted are determined according to the rule of extracting R and Gr from odd rows and Gb and B from even rows, so the color format 101 is determined as R and Gr for odd rows and Gb and B for even rows.
  • FIG. 12 shows a schematic diagram of the mapping of the color value of each pixel in each image data to each pixel position in multiple image blocks when the compression device does not know the principal component of each pixel in advance.
  • The color format 101 containing the pixel in the first row and first column of the image data is defined as the origin (0, 0); that is, this color format 101 includes the pixels from which R(0,0), Gr(0,0), Gb(0,0) and B(0,0) are extracted.
  • The color format shifted 1 unit to the right of color format 101 is numbered (0, 1), the one shifted 2 units to the right is (0, 2), the one shifted 3 units to the right is (0, 3), and so on; similarly, the color format shifted 1 unit downward is (1, 0), 2 units downward is (2, 0), 3 units downward is (3, 0), and so on.
  • After determining the position information of each pixel, the compression device extracts the color value of each pixel from the image data based on its color attribute and divides the values into multiple image blocks, each containing only one color attribute. Since the color attributes in this embodiment include the four attributes R, Gr, Gb, and B, all the extracted pixel data is divided into four image blocks: R, Gr, Gb, and B.
  • Specifically, R(0,0) is placed into the R image block; R(0,1) is placed at the position 1 unit to the right of R(0,0); R(0,2) at 2 units to the right; R(0,3) at 3 units to the right; R(1,0) at 1 unit below R(0,0); R(2,0) at 2 units below; R(3,0) at 3 units below; and so on.
  • FIG. 13 shows a schematic diagram of another embodiment when the compression device in this application does not know the principal components in each pixel in advance.
  • In this case, the color components to be extracted are determined according to the rule of extracting B and Gb from odd rows and Gr and R from even rows, so the color format 101 is determined as B and Gb for odd rows and Gr and R for even rows.
  • FIG. 14 shows a schematic diagram of the mapping of the color value of each pixel in each image data to each pixel position in multiple image blocks when the compression device does not know the principal component of each pixel in advance. The color format 101 containing the pixel in the first row and first column of the image data is defined as the origin (0, 0); that is, this color format 101 includes the pixels from which B(0,0), Gb(0,0), Gr(0,0) and R(0,0) are extracted.
  • The color format shifted 1 unit to the right of color format 101 is numbered (0, 1), the one shifted 2 units to the right is (0, 2), the one shifted 3 units to the right is (0, 3), and so on; similarly, the color format shifted 1 unit downward is (1, 0), 2 units downward is (2, 0), 3 units downward is (3, 0), and so on.
  • After determining the position information of each pixel, the compression device extracts the color value of each pixel from the image data based on its color attribute and divides the values into multiple image blocks, each containing only one color attribute. Since the color attributes in this embodiment include the four attributes R, Gr, Gb, and B, all the extracted pixel data is divided into four image blocks: R, Gr, Gb, and B.
  • Specifically, B(0,0) is placed into the B image block; B(0,1) is placed at the position 1 unit to the right of B(0,0); B(0,2) at 2 units to the right; B(0,3) at 3 units to the right; B(1,0) at 1 unit below B(0,0); B(2,0) at 2 units below; B(3,0) at 3 units below; and so on.
  • In the above embodiments, the mapping relationship is expressed in the form of coordinates, and the color value of each pixel is extracted from the image data based on its color attribute in the color format. However, the way of relating the color value of a pixel to a pixel position in the corresponding image block is not limited to coordinates; it may use any information that the compression device can identify to determine the mapping relationship, such as a serial number.
  • In step S130, the compression device compresses the image blocks that correspond to the multiple pieces of image data and have the same color attribute, to obtain a video file.
  • the multiple image blocks obtained in step S120 are input to the encoder for encoding.
  • The coding standards may include, but are not limited to, H.265 or AVS2 (the second-generation digital audio and video coding standard).
  • the encoder may be integrated in the compression device.
  • the processing device of the compression device coordinates the encoder to perform step S130 after performing steps S110 and S120.
  • the encoder may also be an independent terminal device or a server.
  • the encoder includes a processing module that can perform logic control and digital operations, and a storage module for storing intermediate data generated during the operation of the processing module.
  • the processing module includes, for example, any one or a combination of the following: FPGA, MCU, CPU, etc.
  • the storage module includes, for example, any one or a combination of the following: volatile memories such as registers, stacks, and caches.
  • a video encoder is a program or device capable of compressing digital video.
  • Data in the Bayer format cannot enter a conventional encoder directly, because each pixel has only a single color component and lacks the other two color components. Conventionally, Bayer-format data needs to be Debayered into the RGB format, or further converted to the YUV format, before entering the encoder for encoding.
  • In this application, since the pixels in each image block include only a single color attribute, the missing components can be temporarily filled with 0 when entering the encoder, so that the image block is compatible with encoders in the existing technology; this simplifies the encoder's computation and ensures its processing efficiency. A minimal sketch of one possible layout follows.
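  • The sketch below shows one plausible reading of the zero-filling step; it is an assumption of mine, not the patent's mandated layout: each single-color block is wrapped as the luma plane of a YUV 4:2:0 frame whose chroma planes are zero, so that an off-the-shelf H.265/AVS2 encoder can ingest it unchanged.

```python
# Minimal sketch, assuming (not stated in the patent) that each single-color
# block is fed to the encoder as a luma-only YUV 4:2:0 frame with zero chroma.
import numpy as np

def wrap_block_as_yuv420(block):
    """block: (H, W) single-color plane; returns (y, u, v) planes for the encoder."""
    h, w = block.shape
    y = block                                          # the real payload
    u = np.zeros((h // 2, w // 2), dtype=block.dtype)  # zero-filled chroma
    v = np.zeros((h // 2, w // 2), dtype=block.dtype)  # zero-filled chroma
    return y, u, v
```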
  • In some embodiments, the multiple image blocks may all be processed by one encoder. In these embodiments, step S130 includes: according to the color attributes in the color format, sequentially inputting the multiple image blocks corresponding to the multiple pieces of image data to a first encoder for compression processing.
  • FIG. 6 shows a schematic diagram of an embodiment of using a first encoder for compression processing in this application.
  • The figure shows multiple image blocks generated from the image data of four different frames numbered 1, 2, 3, and 4; each image block under each serial number includes only a single color attribute.
  • a plurality of image blocks are input into a first encoder in a preset order for compression encoding.
  • The preset order includes, but is not limited to: an order based on the time at which the image data corresponding to each image block was acquired, or an order based on the color attribute of each image block.
  • In one embodiment, the preset order is determined based on the time at which the image data corresponding to each image block was acquired.
  • FIG. 7 shows a schematic diagram of another embodiment in which a first encoder is used for compression processing in this application.
  • In this case, the four image blocks with sequence number 1 are first input to the first encoder 102 for compression processing, then the four image blocks with sequence number 2 are input to the first encoder 102 for compression processing, then the four image blocks with sequence number 3, and finally the four image blocks with sequence number 4.
  • In another embodiment, the preset order is determined based on the color attributes of the image blocks. Referring again to FIG. 6, the image blocks with the color attribute Gr are first input to the first encoder 102 for compression processing, then the image blocks with the color attribute R, then the image blocks with the color attribute B, and finally the image blocks with the color attribute Gb.
  • Since the color difference between adjacent frames is small, the method in this embodiment can greatly reduce the amount of calculation and improve the compression efficiency. The two orders are sketched below.
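The two preset orders amount to a simple reordering of the block stream before it reaches the first encoder. The sketch below is illustrative only; the data layout and the stand-in encode call are assumptions, not the encoder interface of this application.

```python
def order_blocks(blocks_per_frame, by="time"):
    """Yield image blocks in one of the two preset orders described above.

    blocks_per_frame: dict mapping frame number -> {color attribute -> block}.
    by="time":  all four blocks of frame 1, then all four of frame 2, ...
    by="color": all Gr blocks in frame order, then all R, then B, then Gb.
    """
    colors = ("Gr", "R", "B", "Gb")
    frames = sorted(blocks_per_frame)
    if by == "time":
        for f in frames:
            for c in colors:
                yield f, c, blocks_per_frame[f][c]
    else:
        for c in colors:
            for f in frames:
                yield f, c, blocks_per_frame[f][c]

# The chosen sequence is then fed, block by block, into the single first encoder:
# for frame_no, color, block in order_blocks(blocks_per_frame, by="color"):
#     first_encoder.encode(block)   # stand-in call; the encoder API is not specified here
```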
  • multiple image blocks may be processed respectively by multiple encoders.
  • In this case, step S130 includes: under synchronous control, using multiple second encoders to separately compress the multiple image blocks having the same color attribute.
  • Step S130 further provides that the video file obtained by compression processing with the multiple second encoders contains synchronization information set for decompressing the video file to restore the multiple pieces of image data.
  • the second encoder generates synchronization information for each image block.
  • The synchronization information includes, but is not limited to, a time stamp, a sequence number, and the like. For example, image blocks of the same frame have the same time stamp and image blocks of different frames have different time stamps, so that image blocks with the same time stamp can be restored into one piece of image data at the decoding end.
  • Similarly, image blocks of the same frame have the same serial number and image blocks of different frames have different serial numbers, so that image blocks with the same serial number can be restored into one piece of image data at the decoding end.
  • A synchronization server can be used to synchronize the time of the multiple second encoders, so that their time mechanisms stay consistent or the error between them is kept under control.
  • the server includes, but is not limited to, an NTP (Network Time Protocol) server, etc.
  • Alternatively, one of the multiple second encoders can be used to synchronize the other second encoders, so that they share the same time mechanism or keep the error within an acceptable range.
  • the synchronization protocol includes but is not limited to the 1588 protocol and the like.
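One way to picture the synchronization information is as a (time stamp, sequence number, color attribute) tag attached to each image block handled by a second encoder. The sketch below rests on that assumption; the concrete container format is not specified in this application, and the names are illustrative.

```python
from dataclasses import dataclass
import time

@dataclass
class TaggedBlock:
    color: str          # "Gr", "R", "B", or "Gb"
    sequence: int       # identical for the four blocks of one frame
    timestamp: float    # identical for the four blocks of one frame
    payload: bytes      # the block data handed to / produced by a second encoder

def tag_blocks_of_frame(blocks, sequence, clock=time.time):
    """Attach identical synchronization information to the four blocks of one
    frame, so that the decoding end can regroup blocks carrying the same tag
    into one piece of image data."""
    ts = clock()
    return [TaggedBlock(color=c, sequence=sequence, timestamp=ts, payload=b)
            for c, b in blocks.items()]
```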
  • When the image data is in a format other than the RGB or Bayer format, such as the YUV format, it is necessary to first convert the image data into the RGB or Bayer format and then compress it according to the above-mentioned compression processing method, which will not be repeated here.
  • The image data compressed according to the technical idea of the above-mentioned compression method can be stored on a storage medium, or transmitted between devices or within a device using a communication transmission method of 60 Mbps and above.
  • the hardware constituting the compression device compresses the captured image data into corresponding compressed image data under the instruction of the software, and saves it in the storage device.
  • The hardware constituting the decompression device decompresses the compressed image data under the instruction scheduling of the software, and plays (or displays) it.
  • In another example, a camera device capable of performing the compression method compresses the captured image data into corresponding compressed image data (such as a compressed file or code stream), and transmits the compressed image data to a server using a wireless transmission method based on the 5G communication protocol or using optical fiber transmission; the decompression device provided in the server then decompresses the compressed image data and plays (or displays) it.
  • the compression method of this application can ensure the clarity of ultra-high-definition video while ensuring the stability of transmission.
  • Because the compression method of this application reduces the amount of data to be compressed and encoded, the transmission of 8K images can be realized with a current 4K encoder.
  • the problem of difficulty in transmitting ultra-high-definition video in the prior art is solved.
  • a method for decompressing a video file is also provided.
  • The decompression method is mainly performed by an image decompression device.
  • The decompression device may be a terminal device, a server, or the like.
  • the terminal equipment includes, but is not limited to, playback equipment, personal electronic terminal equipment, and the like.
  • the playback device includes a storage device, a processing device, and may also include an interface device.
  • the storage device may include a high-speed random access memory, and may also include a non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.
  • the storage device also includes a memory controller, which can control access to the memory by other components of the device, such as a CPU and a peripheral interface.
  • the storage device is used to store at least one program and image data to be decompressed.
  • the program stored in the storage device includes an operating system, a communication module (or instruction set), a graphics module (or instruction set), a text input module (or instruction set), and an application (or instruction set).
  • the program in the storage device further includes an instruction set for performing a decompression operation on the image data in time sequence based on the technical solution provided by the decompression method.
  • The processing device includes, but is not limited to: a CPU, a GPU, an FPGA (Field-Programmable Gate Array), an ISP (Image Signal Processing chip), or another processing chip dedicated to executing at least the programs stored in the storage device (such as an AI-dedicated chip), etc.
  • the processing device calls and executes at least one program stored in the storage device to perform decompression processing on the stored image data according to the decompression method.
  • The interface device includes, but is not limited to: a data line interface and a network interface. Examples of the data line interface include display interfaces such as the VGA interface and the HDMI interface, serial interfaces such as USB, and parallel interfaces such as a data bus. Examples of the network interface include at least one of the following: short-range wireless network interfaces such as Bluetooth-based network interfaces and WiFi network interfaces, wireless network interfaces for mobile networks based on the 3G, 4G, or 5G protocols, and wired network interfaces that include network cards, etc.
  • the playback device also includes a display device for displaying the image data obtained by decompression.
  • the display device at least includes a display screen, a display screen controller, etc., where the display screen includes, for example, a liquid crystal display screen, a curved display screen, a touch screen, and the like.
  • the display screen controller includes, for example, a processor dedicated to the display device, a processor integrated with the processor in the processing device, and the like.
  • In one example, the playback device is set up in a traffic command center for decompressing and displaying the compressed image data transmitted from the camera device.
  • In another example, the playback device is configured on a computer device that is communicatively connected to a minimally invasive medical device through an optical fiber or other dedicated data line, and it decompresses and plays the compressed image data captured by the current minimally invasive medical device.
  • In yet another example, the playback device is configured in the computer room of a TV forwarding center and is used to decompress and play, for video editing, the compressed image data transmitted by the camera devices installed at the stadium.
  • the playback device is a set-top box, which is used to decompress the code stream in the corresponding TV channel in the TV signal and output it to the TV for display.
  • the electronic terminal equipment for personal use includes desktop computers, notebook computers, tablet computers, and editing equipment dedicated to the production of TV programs, movies, TV series, and the like.
  • The electronic terminal equipment includes a storage device and a processing device, which may be the same as or similar to the corresponding devices in the aforementioned camera equipment and will not be described in detail here.
  • the electronic terminal equipment may also include a display device for displaying image data obtained by decompression.
  • the hardware and software modules of the electronic terminal may be the same as or similar to the corresponding devices in the aforementioned playback device, and will not be repeated here.
  • the electronic terminal device may further include an image acquisition interface for acquiring compressed image data derived from compression.
  • the image acquisition interface may be a network interface, a data line interface, or a program interface.
  • the network interface and the data line interface can be the same or similar to the corresponding devices in the aforementioned playback device, and will not be described in detail here.
  • the processing device of the electronic terminal device downloads compressed image data from the Internet.
  • the processing device of the electronic terminal device obtains the edited file from the storage device.
  • the server includes but is not limited to a single server, a server cluster, a distributed server, a server based on cloud technology, and the like.
  • the server includes a storage device, a processing device, an image acquisition interface, and the like.
  • the storage device and the processing device may be configured in the same physical server device, or be configured in multiple physical server devices according to the division of labor of each physical server device.
  • the image acquisition interface may be a network interface or a data line interface.
  • The storage device, processing device, image acquisition interface, etc. included in the server may be the same as the corresponding devices mentioned in the aforementioned terminal equipment, or they may be devices specifically configured for the server based on the server's throughput, processing capacity, and storage requirements.
  • the storage device may also include a solid state drive or the like.
  • the processing device may also include a CPU dedicated to a server or the like.
  • the image acquisition interface in the server acquires compressed image data and playback instructions from the Internet, and the processing device executes the decompression method described in this application on the acquired compressed image data based on the playback instructions.
  • Based on the demand for decompressing image data generated in any of the above scenarios, this application provides a decompression method for a video file. Please refer to FIG. 15, which shows a flowchart of the decompression method in an embodiment.
  • In step S210, a video file is obtained.
  • The video file is obtained by compressing image data according to the compression method in this application.
  • the video file may come from a storage medium, or the video file may be transmitted to the decompression device using a communication transmission mode of 60 Mbps and above.
  • the transmission method includes, but is not limited to: a wireless transmission method based on the 5G communication protocol, or optical fiber transmission.
  • In step S220, the video file is decompressed according to the compression method used for the video file to obtain multiple image blocks; wherein, according to the color attribute of each image block, the obtained multiple image blocks correspond to each of the multiple pieces of image data to be generated.
  • the video file obtained in step S210 is input to the decoder for decoding.
  • the decoding standard may include, but is not limited to: H.265 or AVS2, which is the second-generation digital audio and video coding and decoding technology standard.
  • the decoder may be integrated in the decompression device.
  • the processing device of the decompression device coordinates the decoder to perform step S220 after performing step S210.
  • the decoder may also be an independent terminal device or a server.
  • the decoder includes a processing module capable of performing logic control and digital operations, and a storage module for storing intermediate data generated during the operation of the processing module.
  • the processing module includes, for example, any one or a combination of the following: FPGA, MCU, CPU, etc.
  • the storage module includes, for example, any one or a combination of the following: volatile memories such as registers, stacks, and caches.
  • multiple image blocks may be processed separately by one decoder.
  • In this case, step S220 includes: decompressing the received video file with a first decoder to obtain groups of image blocks divided according to the different color attributes in the color format, wherein each image block in each group of image blocks corresponds to a piece of image data to be generated.
  • The first decoder decompresses the received video file according to the way the encoder compression-encoded the image blocks. After obtaining the multiple groups of image blocks, the first decoder determines the correspondence between the multiple image blocks according to the order in which the image blocks are obtained and the compression coding rules, so that image data can be generated from the multiple image blocks.
  • FIG. 16 shows a schematic diagram of an embodiment in which a first decoder is used for decompression processing in this application.
  • In this embodiment, the encoder compression-encodes the multiple image blocks according to the time at which the image data corresponding to each image block was acquired. Therefore, as shown in FIG. 16, the first decoder sequentially obtains the image block with the Gr color attribute numbered 1, the image block with the R color attribute numbered 1, the image block with the B color attribute numbered 1, the image block with the Gb color attribute numbered 1, the image block with the Gr color attribute numbered 2, the image block with the R color attribute numbered 2, and so on.
  • The correspondence between the multiple image blocks is then determined according to the rules the encoder used during compression encoding. For example, the Gr, R, B, and Gb color attribute image blocks numbered 1 all come from the same piece of image data; the Gr, R, B, and Gb color attribute image blocks numbered 2 all come from another piece of image data whose order follows the one numbered 1; and so on.
  • FIG. 17 shows a schematic diagram of another embodiment in which a first decoder is used for decompression processing in this application.
  • In this embodiment, the encoder compression-encodes the multiple image blocks according to the color attributes of the image blocks. Therefore, as shown in FIG. 17, the first decoder sequentially obtains the image block with the Gr color attribute numbered 1, the image block with the Gr color attribute numbered 2, the image block with the Gr color attribute numbered 3, the image block with the Gr color attribute numbered 4, the image block with the R color attribute numbered 1, the image block with the R color attribute numbered 2, and so on.
  • The correspondence between the multiple image blocks is likewise determined according to the rules the encoder used during compression encoding. For example, the Gr, R, B, and Gb color attribute image blocks numbered 1 all come from the same piece of image data; the Gr, R, B, and Gb color attribute image blocks numbered 2 all come from another piece of image data whose order follows the one numbered 1; and so on. A regrouping sketch follows.
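Whichever order the encoder used, the first decoder can rebuild the per-frame grouping from the number and color attribute of each decoded block. A minimal sketch follows, assuming the decoded blocks are available as (number, color, block) tuples, which is an illustrative assumption rather than part of the specification.

```python
from collections import defaultdict

COLORS = ("Gr", "R", "B", "Gb")

def regroup_by_frame(decoded_blocks):
    """Group the blocks emitted by the first decoder by their number.

    decoded_blocks: iterable of (number, color, block) tuples, in whichever
    order the first decoder produces them (per frame or per color attribute).
    Blocks that share a number come from the same piece of image data, so each
    complete group of four can be restored into one frame.
    """
    frames = defaultdict(dict)
    for number, color, block in decoded_blocks:
        frames[number][color] = block
    # keep only the groups that already contain all four color attributes
    return {n: g for n, g in sorted(frames.items())
            if all(c in g for c in COLORS)}
```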
  • multiple decoders may be used to process multiple image blocks respectively.
  • In this case, step S220 includes: under synchronous control, using multiple second decoders to decompress the video file, each according to a color attribute; wherein each second decoder outputs a plurality of image blocks with the same color attribute, and each image block corresponds to a piece of image data to be generated.
  • Each second decoder determines, according to the synchronization information in the video file, the correspondence between the multiple image blocks it decompresses and the piece of image data to be generated.
  • Synchronization information is generated for each image block (by the second encoders during compression, as described above).
  • The synchronization information includes, but is not limited to, a time stamp, a sequence number, and the like. For example, image blocks of the same frame have the same time stamp and image blocks of different frames have different time stamps, so that image blocks with the same time stamp can be restored into one piece of image data at the decoding end.
  • Similarly, image blocks of the same frame have the same serial number and image blocks of different frames have different serial numbers, so that image blocks with the same serial number can be restored into one piece of image data at the decoding end.
  • A synchronization server can be used to synchronize the time of the multiple second decoders, so that their time mechanisms stay consistent or the error between them is kept under control.
  • the server includes, but is not limited to, an NTP (Network Time Protocol) server, etc.
  • Alternatively, one of the multiple second decoders can be used to synchronize the other second decoders, so that they share the same time mechanism or keep the error within an acceptable range.
  • the synchronization protocol includes but is not limited to the 1588 protocol and the like.
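With multiple second decoders, the same regrouping can be driven by the synchronization information instead of the emission order. The sketch below assumes each second decoder yields (timestamp, color, block) tuples; this interface is illustrative and not part of the specification.

```python
from collections import defaultdict

def collect_by_timestamp(decoder_outputs):
    """Merge the outputs of several second decoders into per-frame groups.

    decoder_outputs: one iterable per second decoder, each yielding
    (timestamp, color, block) tuples. Blocks sharing a timestamp belong to
    the same frame and are later restored into one piece of image data.
    """
    frames = defaultdict(dict)
    for output in decoder_outputs:
        for timestamp, color, block in output:
            frames[timestamp][color] = block
    return dict(sorted(frames.items()))
```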
  • the decompression device provides the obtained multiple image blocks to step S230.
  • In step S230, the color value of each pixel position in the corresponding image blocks is mapped to the pixels of the image data according to the color attribute.
  • the pixel is the basic unit of image display.
  • Each pixel has different color attributes according to the format of the image data in which it is located.
  • When the image data is in the Bayer format, the color attribute of a pixel is a single color component; when the image data is in the RGB format, the color attribute of a pixel includes the three color components red (R), green (G), and blue (B). Since human eyes are more sensitive to green than to other colors, the number of G components is usually twice the number of the other color components; therefore, in some embodiments, the G component is represented by a Gr component or a Gb component. Each pixel has a color value corresponding to its color attribute.
  • When the image data is in the Bayer format, each pixel has only a single color component of R, G, or B (where the G component is represented by the Gr component and the Gb component), and the color value of each pixel in the image data is the brightness value of that single color component; when the image data is in the RGB format, the color value of each pixel in the image data includes the brightness value of each color component in the pixel.
  • FIG. 18 shows a schematic diagram of an embodiment in which the decompression device in this application maps the color value of each pixel position in the corresponding image block to the pixel of the image data.
  • After the decompression device obtains the multiple image blocks, it maps the color value of each pixel in each image block to the corresponding pixel position in the image data according to the mapping relationship between the pixel positions in the image block and the pixel positions in the image data corresponding to that image block, thereby restoring the multiple image blocks into image data.
  • Each pixel in the multiple image blocks is mapped to the image data according to its position information, and pixels with the same position information but different color attributes are arranged according to the color format used during compression.
  • Gr(0,0), R(0,0), B(0,0), Gb(0,0) are all mapped to the (0,0) position in the image data.
  • For example, when the color format used during compression extracts Gr and R from the odd-numbered lines and B and Gb from the even-numbered lines, Gr(0,0), R(0,0), B(0,0), and Gb(0,0) are arranged according to that color format.
  • other pixels in the image block are also mapped to the image data according to the above method.
  • Step S230 also includes: traversing the pixel positions in the image block of each color attribute according to the color format, and, during the traversal, mapping the color value of the corresponding pixel position in each image block to the corresponding pixel position in the image data so as to generate the image data; wherein the color value of each pixel position in the image data represents a single color attribute.
  • the decompression device processes a plurality of image blocks separately according to the above method, and sends the generated plurality of image data to step S240 in sequence.
  • the color format in this embodiment is determined according to the color format during compression, and the method for determining the color format has been explained in the implementation of the first aspect of the present application, so it will not be repeated here.
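The mapping in step S230 is essentially the inverse of the split performed during compression. A minimal sketch, again assuming the color format in which odd rows carry Gr and R and even rows carry B and Gb, with numpy arrays and illustrative names:

```python
import numpy as np

def merge_color_blocks(blocks):
    """Map the color value of each pixel position in the four image blocks
    back to its pixel position in the image data (Bayer layout).

    Assumes the compression-side format: odd rows carry Gr, R and even rows
    carry B, Gb. Each pixel of the result holds a single color attribute.
    """
    h, w = blocks["Gr"].shape
    image = np.empty((2 * h, 2 * w), dtype=blocks["Gr"].dtype)
    image[0::2, 0::2] = blocks["Gr"]
    image[0::2, 1::2] = blocks["R"]
    image[1::2, 0::2] = blocks["B"]
    image[1::2, 1::2] = blocks["Gb"]
    return image
```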
  • In step S240, based on the color value of each pixel in the image data, a video image for displaying UHD 4K and above pixels is generated.
  • UHD (Ultra High Definition) 4K and above refers to video images with a resolution of 4K pixels and above, such as 8K pixels, 16K pixels, etc.
  • this embodiment takes 8K pixels as an example for description, but the principle of the solution can also be used to compress 4k pixels, 16k pixels or even higher-definition video images.
  • the image data provided in step S230 is equivalent to image data in the Bayer format.
  • The decompression device applies Debayer processing and the like to the image data provided in step S230 to generate an RGB image for display. Therefore, step S240 further includes: performing interpolation processing on each pixel position in the obtained image data according to the color format to obtain a video image in which each pixel contains RGB color attributes.
  • the RGB image includes image data in the RGB format itself and image data in other formats (such as YUV format, etc.) that can be converted into the RGB format.
  • Debayer, i.e., demosaicing, is a digital image processing algorithm whose purpose is to reconstruct a full-color image from the incomplete color samples output by a photosensitive element covered with a color filter array (CFA).
  • This method is also called color filter array interpolation (CFA interpolation) or color reconstruction (Color reconstruction).
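As an illustration of the interpolation in step S240, the following sketch fills in the missing color components of every pixel by averaging the nearest available samples of each color. It is a deliberately simple bilinear-style demosaic written for clarity, not the specific Debayer algorithm of this application, and it again assumes the Gr/R/B/Gb layout used above.

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(bayer):
    """Simple bilinear demosaic for a Bayer image laid out with
    rows 0, 2, ... carrying Gr, R and rows 1, 3, ... carrying B, Gb.

    Measured samples are kept as-is; each missing color component is
    estimated as the average of the known samples of that color in the
    3x3 neighborhood. Returns an H x W x 3 float array ordered (R, G, B).
    """
    h, w = bayer.shape
    bayer = bayer.astype(np.float64)

    # Masks marking where each color component was actually sampled.
    r_mask = np.zeros((h, w)); r_mask[0::2, 1::2] = 1
    b_mask = np.zeros((h, w)); b_mask[1::2, 0::2] = 1
    g_mask = np.zeros((h, w)); g_mask[0::2, 0::2] = 1; g_mask[1::2, 1::2] = 1

    kernel = np.ones((3, 3))

    def interpolate(mask):
        values = convolve(bayer * mask, kernel, mode="mirror")
        counts = convolve(mask, kernel, mode="mirror")
        estimate = values / np.maximum(counts, 1)
        return np.where(mask == 1, bayer, estimate)  # keep measured samples

    return np.dstack([interpolate(r_mask), interpolate(g_mask), interpolate(b_mask)])
```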
  • The compression device includes: a communication interface for communicating with an external decompression device; a memory for storing at least one program and the image data to be compressed; and a processor for coordinating the communication interface and the memory to execute the program, during which the image data is compressed according to the method for obtaining a video file described in any one of the embodiments of the first aspect of the present application, so as to obtain a video file.
  • the memory includes non-volatile memory, storage server, etc.
  • the non-volatile memory is, for example, a solid state hard disk or a U disk.
  • the storage server is used to store various information related to power consumption and power supply.
  • the communication interface includes network interface, data line interface and so on.
  • the network interface includes, but is not limited to: an Ethernet network interface device, a mobile network (3G, 4G, 5G, etc.)-based network interface device, a short-range communication (WiFi, Bluetooth, etc.)-based network interface device, etc.
  • the data line interface includes but is not limited to: USB interface, RS232, etc.
  • The communication interface is connected to various data sources, such as sensor devices, third-party systems, and the Internet.
  • the processor is connected to the communication interface and the memory, and it includes at least one of a CPU or a chip integrated with the CPU, a programmable logic device (FPGA), and a multi-core processor.
  • the processor also includes memory, registers, and other memories used to temporarily store data.
  • the communication interface is used to communicate with an external decompression device.
  • The communication interface includes, for example, a network card, which communicates with the decompression device via the Internet or a purpose-built dedicated network.
  • the communication interface sends the video file compressed and processed by the compression device to the decompression device.
  • the memory is used to store at least one program and image data to be compressed.
  • the memory includes, for example, a memory card provided in a compression device.
  • the processor is configured to call the at least one program to coordinate the communication interface and the memory to execute the compression method mentioned in any of the foregoing examples.
  • The decompression device includes: a communication interface for communicating with an external compression device; a memory for storing at least one program and the video file to be decompressed; and a processor for coordinating the communication interface and the memory to execute the program, during which the video file is decompressed according to the video file decompression method described in any of the embodiments of the second aspect of the present application, so that the video file can be played.
  • the memory includes non-volatile memory, storage server, etc.
  • the non-volatile memory is, for example, a solid state hard disk or a U disk.
  • the storage server is used to store various information related to power consumption and power supply.
  • the communication interface includes network interface, data line interface and so on.
  • the network interface includes, but is not limited to: an Ethernet network interface device, a mobile network (3G, 4G, 5G, etc.)-based network interface device, a short-range communication (WiFi, Bluetooth, etc.)-based network interface device, etc.
  • the data line interface includes but is not limited to: USB interface, RS232, etc.
  • The communication interface is connected to various data sources, such as sensor devices, third-party systems, and the Internet.
  • the processor is connected to the communication interface and the memory, and includes: at least one of a CPU or a chip integrated with the CPU, a programmable logic device (FPGA), and a multi-core processor.
  • the processor also includes memory, registers, and other memories used to temporarily store data.
  • the communication interface is used to communicate with an external compression device.
  • The communication interface includes, for example, a network card, which communicates with the compression device via the Internet or a purpose-built dedicated network.
  • the communication interface receives the video file compressed and processed by the compression device, and provides the video file to the processor.
  • the memory is used to store at least one program and a video file to be decompressed.
  • the memory includes, for example, a memory card provided in a decompression device.
  • the processor is configured to call the at least one program to coordinate the communication interface and the memory to execute the decompression method mentioned in any of the foregoing examples, so as to perform decompression processing on the video file to play the video file.
  • this application also provides a video transmission system. Please refer to FIG. 21, which shows a schematic structural diagram of the video transmission system in an embodiment of this application.
  • the video transmission system includes any one of the aforementioned compression equipment and decompression equipment.
  • the video transmission system includes a communication interface, a memory, and a processor.
  • the communication interface may include a network interface, a data line interface, or a program interface.
  • The processing device executes the compression operation by calling the program stored in the memory, compressing and encoding the acquired image data into a video file, which is then stored in the storage device.
  • the processing device performs a decompression operation by calling a program in the storage device, and displays the image data obtained after decompression on the display screen.
  • the compression and decompression operations in the video transmission system can be performed based on the corresponding methods provided in this application, and will not be repeated here.
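To show how the pieces of the video transmission system fit together, the following self-contained sketch round-trips one small Bayer frame: split into four color blocks, compress each block with zlib as a deliberately crude stand-in for the H.265/AVS2 encoder, hold the result in memory in place of transmission, then decompress and merge back into image data. Everything here is illustrative; the real system would use the encoders, interfaces, and devices described above.

```python
import numpy as np
import zlib

def compress_frame(bayer):
    """Split one Bayer frame into four single-color blocks and compress each.
    zlib stands in for the real video encoder purely for illustration."""
    blocks = {
        "Gr": bayer[0::2, 0::2], "R": bayer[0::2, 1::2],
        "B":  bayer[1::2, 0::2], "Gb": bayer[1::2, 1::2],
    }
    shape = blocks["Gr"].shape
    payload = {c: zlib.compress(b.tobytes()) for c, b in blocks.items()}
    return payload, shape, bayer.dtype

def decompress_frame(payload, shape, dtype):
    """Decompress the four blocks and map them back into Bayer image data."""
    blocks = {c: np.frombuffer(zlib.decompress(d), dtype=dtype).reshape(shape)
              for c, d in payload.items()}
    h, w = shape
    image = np.empty((2 * h, 2 * w), dtype=dtype)
    image[0::2, 0::2] = blocks["Gr"]; image[0::2, 1::2] = blocks["R"]
    image[1::2, 0::2] = blocks["B"];  image[1::2, 1::2] = blocks["Gb"]
    return image

frame = np.random.default_rng(0).integers(0, 1024, size=(64, 64), dtype=np.uint16)
payload, shape, dtype = compress_frame(frame)
assert np.array_equal(decompress_frame(payload, shape, dtype), frame)
```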
  • The technical solution of the present application, in essence, or the part that contributes to the prior art, can be embodied in the form of a software product.
  • The computer software product can include a machine-readable medium on which one or more machine-executable instructions are stored.
  • When these instructions are executed by one or more machines, such as a computer, a computer network, or other electronic devices, they can cause the one or more machines to perform operations according to the embodiments of the present application, for example, the steps in the compression method or the decompression method.
  • Machine-readable media may include, but are not limited to, floppy disks, optical disks, CD-ROM (compact disc read-only memory), magneto-optical disks, ROM (read-only memory), RAM (random access memory), EPROM (erasable programmable read-only memory), EEPROM (electrically erasable programmable read-only memory), magnetic or optical cards, flash memory, or other types of media/machine-readable media suitable for storing machine-executable instructions.
  • any connection is properly termed a computer-readable medium.
  • If the instructions are sent from a website, server, or other remote source using coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of the medium.
  • computer readable and writable storage media and data storage media do not include connections, carrier waves, signals, or other transient media, but are intended for non-transitory, tangible storage media.
  • the magnetic disks and optical disks used in the application include compact disks (CD), laser disks, optical disks, digital versatile disks (DVD), floppy disks, and Blu-ray disks.
  • Magnetic disks usually reproduce data magnetically, while optical discs reproduce data optically with lasers.
  • The size of the sequence numbers of the above-mentioned processes does not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
  • the disclosed system, device, and method may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • The division of the units is only a logical functional division, and there may be other ways of division in actual implementation; for example, multiple units or components can be combined or integrated into another system, or some features can be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the video file compression method provided in this application can effectively reduce the code stream while ensuring high-fidelity image quality.
  • The total amount of data of the multiple image blocks combined is far lower than the amount of data obtained after compression processing using traditional methods.
  • Compared with the YUV422 format the data volume is only half, and compared with the YUV444 format only one third, yet the amount of information carried by the compression method of this application is equivalent to that of the YUV444 format.
  • Taking 8K video as an example, the image block of each color attribute is equivalent to a 4K video in the YUV400 format containing only brightness information, and compared with the YUV422 format the data volume is only half.
  • Because the compression method in this application effectively reduces the amount of data, 8K video can be encoded by a 4K encoder in the existing technology.
  • In the same way, 4K video can be encoded by a 2K video encoder, and 16K video can be processed by an 8K video encoder.
  • the bit stream rate generated by the compression method of this application can be controlled at about half of YUV422, that is, 24 to 80 Mbps.
  • The current stable uplink peak of 5G is 90 Mbps, so real-time 5G transmission of 8K video can be realized with high-fidelity picture quality. The arithmetic behind these figures is sketched below.
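The data-volume claims can be checked with back-of-the-envelope arithmetic using the figures quoted in this application (8K resolution, 30 frames per second, 8 bits per sample for simplicity). This is a sanity check, not a measurement.

```python
W, H, FPS = 7680, 4320, 30          # 8K at 30 frames per second
BYTES_PER_SAMPLE = 1                # 8 bits per sample, for simplicity

pixels_per_frame = W * H
yuv444 = pixels_per_frame * 3 * BYTES_PER_SAMPLE * FPS   # 3 samples per pixel
yuv422 = pixels_per_frame * 2 * BYTES_PER_SAMPLE * FPS   # 2 samples per pixel
color_blocks = pixels_per_frame * 1 * BYTES_PER_SAMPLE * FPS  # 1 sample per pixel,
# i.e. four 4K-sized single-color blocks per 8K frame

print(f"YUV444:          {yuv444 / 1e9:.2f} GB/s")        # ~2.99 GB/s
print(f"YUV422:          {yuv422 / 1e9:.2f} GB/s")        # ~1.99 GB/s
print(f"4 color blocks:  {color_blocks / 1e9:.2f} GB/s")  # ~1.00 GB/s, half of YUV422
```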

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present application relates to the technical field of image processing, and particularly provides a compression method for obtaining a video file, a decompression method, a system, and a storage medium. The compression method comprises: obtaining, in a temporal order, multiple pieces of image data to be compressed; separately mapping a color value of each pixel in each piece of image data to each pixel position in multiple image blocks on the basis of a color attribute of each pixel in the image data; and compressing image blocks corresponding to each piece of image data in the multiple pieces of image data and having the same color attribute to obtain a video file. The solution provided in the present application can effectively reduce the data rate while ensuring high-fidelity image quality. Because the compression method in the present application can effectively reduce the data volume, an 8K video can be encoded by means of a 4K encoder in the prior art. At present, the stable uplink peak value of 5G is 90 Mbps, so the 8K video can be transmitted in real time over 5G with high-fidelity image quality.

Description

获得视频文件的压缩方法、解压缩方法、系统及存储介质Compression method, decompression method, system and storage medium for obtaining video files 技术领域Technical field
本申请涉及图像处理技术领域,特别是涉及一种获得视频文件的压缩方法、解压缩方法、系统及存储介质。This application relates to the field of image processing technology, and in particular to a compression method, decompression method, system and storage medium for obtaining video files.
背景技术Background technique
随着人们对多媒体播放数据的播放质量越来越高,多媒体播放数据的数据量也越来越大。对于大数据量的多媒体播放数据,采用传统方式对其进行压缩编码后,数据量依然无法控制在可被稳定传输的范围内。为使多媒体播放数据可被稳定传输,通常只能将多媒体播放数据进一步压缩,但这种方法会导致颜色信息丢失严重,无法满足高画质的需求。因此,人们期望能够有一种更好的压缩方法,以实现对于数据量大的多媒体播放数据,可在保证传输稳定性的情况下满足高保真的效果。As people's playback quality of multimedia playback data becomes higher and higher, the amount of multimedia playback data is also increasing. For multimedia playback data with a large amount of data, after compressing and encoding it in a traditional way, the amount of data still cannot be controlled within a range that can be stably transmitted. In order to enable the stable transmission of multimedia playback data, usually the multimedia playback data can only be further compressed, but this method will cause serious loss of color information and cannot meet the demand for high image quality. Therefore, people expect a better compression method to achieve high-fidelity effects for multimedia playback data with a large amount of data while ensuring transmission stability.
发明内容Summary of the invention
鉴于以上所述现有技术的缺点,本申请的目的在于提供一种获得视频文件的压缩方法、解压缩方法、系统及存储介质,用于解决现有技术中超高清视频数据传输困难的问题。In view of the above-mentioned shortcomings of the prior art, the purpose of this application is to provide a compression method, decompression method, system and storage medium for obtaining video files to solve the problem of difficulty in transmitting ultra-high-definition video data in the prior art.
为实现上述目的及其他相关目的,本申请的第一方面提供一种获得视频文件的压缩方法,包括以下步骤:按照时间顺序获取多幅待压缩处理的图像数据;所述图像数据用于显示UHD4K及以上像素的视频图像;基于所述图像数据中各像素的颜色属性,将每一图像数据中各像素的颜色值分别映射到多个图像块中的各像素位置;将多幅图像数据中的每一个图像数据所对应的且具有同一颜色属性的图像块进行压缩,以得到视频文件。In order to achieve the above and other related purposes, the first aspect of this application provides a compression method for obtaining a video file, which includes the following steps: obtaining a plurality of image data to be compressed in a chronological order; the image data is used to display UHD4K Video images of and above pixels; based on the color attribute of each pixel in the image data, the color value of each pixel in each image data is mapped to each pixel position in multiple image blocks; Each image block corresponding to the image data and having the same color attribute is compressed to obtain a video file.
在本申请的第一方面的某些实施方式中,所述图像数据中的每个像素表示单一颜色属性的情况下,所述基于图像数据中各像素的颜色属性,将每一图像数据中各像素的颜色值分别映射到多个图像块中的各像素位置的步骤包括:按照基于所述图像数据中的Bayer格式而设置的颜色格式,遍历所述图像数据;其中,在遍历期间,基于所述颜色格式中各像素的颜色属性,从所述图像数据中提取各像素的颜色值,并映射到相应图像块中的像素位置。In some embodiments of the first aspect of the present application, in the case where each pixel in the image data represents a single color attribute, the color attribute of each pixel in the image data is The step of mapping the color value of the pixel to each pixel position in the plurality of image blocks includes: traversing the image data according to the color format set based on the Bayer format in the image data; wherein, during the traversal, based on the According to the color attribute of each pixel in the color format, the color value of each pixel is extracted from the image data and mapped to the pixel position in the corresponding image block.
在本申请的第一方面的某些实施方式中,所述图像数据中的每个像素表示RGB颜色属性的情况下;所述基于图像数据中各像素的颜色属性,将每一图像数据中各像素的颜色值分别映射到多个图像块中的各像素位置的步骤包括:按照基于所述图像数据中的像素行格式而设置的颜色格式,遍历所述图像数据;其中,在遍历期间,基于所述颜色格式中各像素的颜色 属性,从所述图像数据中提取各像素的颜色主分量或颜色拟合分量,并映射到相应图像块中的像素位置。In some implementations of the first aspect of the present application, when each pixel in the image data represents an RGB color attribute; in the image data, each pixel in each image data is The step of mapping the color value of the pixel to each pixel position in a plurality of image blocks includes: traversing the image data according to a color format set based on the pixel row format in the image data; wherein, during the traversal, based on For the color attribute of each pixel in the color format, the color principal component or color fitting component of each pixel is extracted from the image data, and mapped to the pixel position in the corresponding image block.
在本申请的第一方面的某些实施方式中,所述将多幅图像数据中的每一个图像数据所对应的且具有同一颜色属性的图像块进行压缩的步骤包括:按照所述颜色格式中的颜色属性,将多幅图像数据所对应的多个图像块依序输入一第一编码器进行压缩处理。In some implementation manners of the first aspect of the present application, the step of compressing image blocks corresponding to each of the multiple image data and having the same color attribute includes: according to the color format The multiple image blocks corresponding to multiple image data are sequentially input to a first encoder for compression processing.
在本申请的第一方面的某些实施方式中,所述将多幅图像数据中的每一个图像数据所对应的且具有同一颜色属性的图像块进行压缩的步骤包括:在同步控制下,利用多个第二编码器分别将同一颜色属性的多个图像块进行压缩处理。In some implementations of the first aspect of the present application, the step of compressing image blocks corresponding to each of the multiple image data and having the same color attribute includes: under synchronous control, using The multiple second encoders respectively perform compression processing on multiple image blocks with the same color attribute.
在本申请的第一方面的某些实施方式中,利用多个第二编码器进行压缩处理所得到的视频文件中包含用于解压缩视频文件以恢复多幅图像数据而设置的同步信息。In some implementations of the first aspect of the present application, the video file obtained by the compression processing using multiple second encoders includes synchronization information set for decompressing the video file to restore multiple image data.
本申请的第二方面还提供一种视频文件的解压缩方法,包括:获取一视频文件;按照对应所述视频文件所使用的压缩方式对所述视频文件进行解压缩处理,得到多个图像块;其中,根据各图像块的颜色属性,所得到的多个图像块与待生成的多幅图像数据中的每一幅图像数据相对应;根据所述颜色属性,将相应的各图像块中各像素位置的颜色值映射到图像数据的像素中;基于所述图像数据中各像素的颜色值,生成用于显示UHD 4K及以上像素的视频图像。The second aspect of the present application also provides a method for decompressing a video file, including: obtaining a video file; decompressing the video file according to the compression method used for the video file to obtain multiple image blocks Wherein, according to the color attribute of each image block, the obtained multiple image blocks correspond to each of the multiple image data to be generated; according to the color attribute, each of the corresponding image blocks The color value of the pixel position is mapped to the pixel of the image data; based on the color value of each pixel in the image data, a video image for displaying UHD 4K and above pixels is generated.
在本申请的第二方面的某些实施方式中,所述按照压缩方式对所述视频文件进行解压缩处理,得到多个图像块的步骤包括:在同步控制下,利用多个第二解码器分别依据颜色属性对所述视频文件进行解压缩处理;其中,每个第二解码器输出具有同一颜色属性的多个图像块;其中,每个图像块与待生成的一幅图像数据相对应。In some implementation manners of the second aspect of the present application, the step of decompressing the video file according to a compression mode to obtain multiple image blocks includes: using multiple second decoders under synchronous control The video file is decompressed according to the color attributes respectively; wherein, each second decoder outputs a plurality of image blocks with the same color attribute; wherein, each image block corresponds to a piece of image data to be generated.
在本申请的第二方面的某些实施方式中,每个第二解码器依据所述视频文件中的同步信息确定所解压缩的多个图像块与待生成的一幅图像数据之间的对应关系。In some implementations of the second aspect of the present application, each second decoder determines the correspondence between a plurality of image blocks to be decompressed and a piece of image data to be generated according to the synchronization information in the video file relationship.
在本申请的第二方面的某些实施方式中,所述按照压缩方式对所述视频文件进行解压缩处理,得到多个图像块的步骤包括:利用第一解码器对所接收的视频文件进行解压缩处理,得到依据颜色格式中的不同颜色属性而划分的多组图像块;其中,每组图像块中的每个图像块与待生成的一幅图像数据相对应。In some implementation manners of the second aspect of the present application, the step of decompressing the video file in a compression mode to obtain a plurality of image blocks includes: using a first decoder to perform processing on the received video file The decompression process obtains multiple groups of image blocks divided according to different color attributes in the color format; wherein, each image block in each group of image blocks corresponds to a piece of image data to be generated.
在本申请的第二方面的某些实施方式中,所述根据颜色属性,将相应的各图像块中各像素位置的颜色值映射到图像数据的像素中的步骤包括:按照所述颜色格式,遍历各颜色属性的图像块中的像素位置,在遍历期间,将各图像块中相应像素位置的颜色值映射到所对应的图像数据中的像素位置,以生成图像数据;其中,所述图像数据中各像素位置的颜色值表示单一颜色属性。In some implementation manners of the second aspect of the present application, the step of mapping the color value of each pixel position in the corresponding image block to the pixel of the image data according to the color attribute includes: according to the color format, Traverse the pixel position in the image block of each color attribute, and during the traversal, map the color value of the corresponding pixel position in each image block to the pixel position in the corresponding image data to generate image data; wherein, the image data The color value of each pixel position in represents a single color attribute.
在本申请的第二方面的某些实施方式中,所述基于图像数据中经映射得到的各像素的颜色值,生成用于显示UHD 4K及以上像素的视频图像的步骤还包括:根据所述颜色格式,将所得到的图像数据中的各像素位置进行插值处理,得到各像素中包含RGB颜色属性的视频图像。In some implementation manners of the second aspect of the present application, the step of generating a video image for displaying UHD 4K and above pixels based on the color value of each pixel obtained by mapping in the image data further includes: In the color format, each pixel position in the obtained image data is interpolated to obtain a video image containing RGB color attributes in each pixel.
本申请的第三方面还提供一种压缩设备,包括:通信接口,用于与外部的解压缩设备通信连接;存储器,用于存储至少一个程序和待压缩的图像数据;处理器,用于协调通信接口和存储器以执行所述程序,在执行期间按照本申请第一方面中任一所述的获得视频文件的压缩方法将所述图像数据进行压缩处理,以得到视频文件。The third aspect of the present application also provides a compression device, including: a communication interface for communicating with an external decompression device; a memory for storing at least one program and image data to be compressed; a processor for coordination The communication interface and the memory are used to execute the program, and during execution, the image data is compressed according to the compression method for obtaining a video file described in the first aspect of the present application to obtain a video file.
本申请的第四方面还提供一种解压缩设备,包括:通信接口,用于与外部的压缩设备通信连接;存储器,用于存储至少一个程序和待解压缩的视频文件;处理器,用于协调通信接口和存储器以执行所述程序,在执行期间按照如本申请第二方面中任一所述的视频文件的解压缩方法将所述视频文件进行解压缩处理,以便播放所述视频文件。The fourth aspect of the present application also provides a decompression device, including: a communication interface for communicating with an external compression device; a memory for storing at least one program and a video file to be decompressed; and a processor for The communication interface and the memory are coordinated to execute the program, and during execution, the video file is decompressed according to the video file decompression method as described in the second aspect of the present application, so as to play the video file.
本申请的第五方面还提供一种视频传输系统,包括:如本申请的第三方面所述的压缩设备;以及如本申请的第四方面所述的解压缩设备。The fifth aspect of the present application also provides a video transmission system, including: the compression device as described in the third aspect of the application; and the decompression device as described in the fourth aspect of the application.
本申请的第六方面还提供一种计算机可读存储介质,包括:存储有至少一程序;所述至少一程序在被调用时执行如本申请第一方面中任一所述的获得视频文件的压缩方法;或者,所述至少一程序在被调用时执行如本申请第二方面中任一所述的视频文件的解压缩方法。The sixth aspect of the present application also provides a computer-readable storage medium, including: storing at least one program; when called, the at least one program executes the video file acquisition program described in any one of the first aspects of the present application Compression method; or, when the at least one program is called, the method for decompressing a video file as described in the second aspect of the present application is executed.
如上所述,本申请的获得视频文件的压缩方法、解压缩方法、系统及存储介质,具有以下有益效果:本申请所提供的获得视频文件的压缩方法、解压缩方法、系统及存储介质可有效降低码流,并同时保证高保真画质。本申请中,多个图像块相加的数据量远远低于采用传统方法进行压缩处理后的数据量。其中,与YUV222格式相比,只有其一半的数据量;与YUV444格式相比,只有其1/3的数据量,但采用本申请的压缩方法所携带的信息量却相当于YUV444格式的信息量。以8K视频为例,每个颜色属性的图像块相当于只有亮度信息的4K视频YUV400格式,且与YUV422格式相比,数据量只有其一半。由于本申请中的压缩方法可有效降低数据量,因此可通过现有技术中的4K编码器进行8k视频的编码。同理,通过本申请的压缩方法也可通过2K视频的编码器对4K视频进行编码处理,或者通过8K视频的编码器对16k视频进行处理等。并且,藉由本申请的压缩方法可直接利用RGB视频图像或Bayer格式图像进行压缩,而无需转换成YUV格式。另外,通过本申请的压缩方法所产生的码流率可以控制在YUV422的一半左右,即24~80Mbps,介于目前5G的上行稳定峰值是90Mbps,因此可实现5G实时传输8K视频,并同时具有高保真画质。As mentioned above, the compression method, decompression method, system and storage medium for obtaining video files of this application have the following beneficial effects: The compression method, decompression method, system and storage medium for obtaining video files provided by this application can be effective Reduce the code stream while ensuring high-fidelity picture quality. In this application, the amount of data added by multiple image blocks is much lower than the amount of data after compression processing is performed using traditional methods. Among them, compared with the YUV222 format, it has only half the data volume; compared with the YUV444 format, it has only 1/3 of the data volume, but the amount of information carried by the compression method of this application is equivalent to that of the YUV444 format . Taking 8K video as an example, the image block of each color attribute is equivalent to 4K video YUV400 format with only brightness information, and compared with YUV422 format, the data volume is only half. Since the compression method in this application can effectively reduce the amount of data, 8k video can be encoded by a 4K encoder in the prior art. In the same way, with the compression method of the present application, 4K video can also be encoded by a 2K video encoder, or 16k video can be processed by an 8K video encoder. Moreover, with the compression method of the present application, RGB video images or Bayer format images can be directly used for compression without conversion to YUV format. In addition, the bit stream rate generated by the compression method of this application can be controlled at about half of YUV422, that is, 24 to 80 Mbps. The current stable uplink peak of 5G is 90 Mbps, so 5G real-time transmission of 8K video can be realized, and it has High fidelity picture quality.
附图说明Description of the drawings
图1显示为本申请中的压缩方法在一实施方式中的流程图;FIG. 1 shows a flowchart of an embodiment of the compression method in this application;
图2显示为本申请中的图像数据在一实施例中的示意图;FIG. 2 shows a schematic diagram of image data in an embodiment of this application;
图3显示为本申请中将每一图像数据中各像素的颜色值分别映射到多个图像块中的各像素位置的映射方法实施例示意图;FIG. 3 shows a schematic diagram of an embodiment of a mapping method for mapping the color value of each pixel in each image data to each pixel position in multiple image blocks in this application;
图4显示为本申请中图像数据中的每个像素表示RGB颜色属性的实施例示意图;FIG. 4 shows a schematic diagram of an embodiment in which each pixel in the image data in this application represents RGB color attributes;
图5显示为本申请中将每一图像数据中各像素的颜色值分别映射到多个图像块中的各像素位置的映射方法的另一实施例示意图;FIG. 5 shows a schematic diagram of another embodiment of the mapping method of mapping the color value of each pixel in each image data to each pixel position in multiple image blocks in this application;
图6显示为本申请中利用一第一编码器进行压缩处理的实施例示意图;FIG. 6 shows a schematic diagram of an embodiment of using a first encoder to perform compression processing in this application;
图7显示为本申请中利用一第一编码器进行压缩处理的另一实施例示意图;FIG. 7 shows a schematic diagram of another embodiment of using a first encoder to perform compression processing in this application;
图8显示为本申请中图像数据中的每个像素表示RGB颜色属性的另一实施例示意图;FIG. 8 shows a schematic diagram of another embodiment in which each pixel in the image data in this application represents RGB color attributes;
图9显示为本申请中压缩设备未预先获知每个像素中的主分量时的又一实施例示意图;FIG. 9 shows a schematic diagram of another embodiment when the compression device in this application does not know the principal component in each pixel in advance;
图10显示为本申请中压缩设备未预先获知每个像素中的主分量时,将每一图像数据中各像素的颜色值分别映射到多个图像块中的各像素位置的映射方法的一实施例示意图;Figure 10 shows an implementation of a mapping method for mapping the color value of each pixel in each image data to each pixel position in multiple image blocks when the compression device in this application does not know the principal component in each pixel in advance Example diagram;
图11显示为本申请中压缩设备未预先获知每个像素中的主分量时的再一实施例示意图;FIG. 11 shows a schematic diagram of another embodiment when the compression device in this application does not know the principal component in each pixel in advance;
图12显示为本申请中压缩设备未预先获知每个像素中的主分量时,将每一图像数据中各像素的颜色值分别映射到多个图像块中的各像素位置的映射方法的另一实施例示意图;Figure 12 shows another mapping method for mapping the color value of each pixel in each image data to each pixel position in multiple image blocks when the compression device in this application does not know the principal component in each pixel in advance. Schematic diagram of the embodiment;
图13显示为本申请中压缩设备未预先获知每个像素中的主分量时的又一实施例示意图;FIG. 13 shows a schematic diagram of another embodiment when the compression device in this application does not know the principal component in each pixel in advance;
图14显示为本申请中压缩设备未预先获知每个像素中的主分量时,将每一图像数据中各像素的颜色值分别映射到多个图像块中的各像素位置的映射方法的又一实施例示意图。Figure 14 shows another mapping method for mapping the color value of each pixel in each image data to each pixel position in multiple image blocks when the compression device in this application does not know the principal component in each pixel in advance Schematic diagram of the embodiment.
图15显示为所述解压缩方法在一实施方式中的流程图;FIG. 15 shows a flowchart of the decompression method in an embodiment;
图16显示为本申请中利用一第一解码器进行解压缩处理的实施例示意图;FIG. 16 shows a schematic diagram of an embodiment of using a first decoder to perform decompression processing in this application;
图17显示为本申请中利用一第一解码器进行解压缩处理的另一实施例示意图;FIG. 17 is a schematic diagram of another embodiment of using a first decoder to perform decompression processing in this application;
图18显示为本申请中解压缩设备将相应的各图像块中各像素位置的颜色值映射到图像数据的像素中的实施例示意图;FIG. 18 shows a schematic diagram of an embodiment in which the decompression device in this application maps the color value of each pixel position in the corresponding image block to the pixel of the image data;
图19显示为本申请中压缩设备的实施例示意图;Figure 19 shows a schematic diagram of an embodiment of the compression device in this application;
图20显示为本申请中解压缩设备的实施例示意图;FIG. 20 shows a schematic diagram of an embodiment of the decompression device in this application;
图21显示为本申请中的视频传输系统在一实施方式中的结构示意图。FIG. 21 shows a schematic structural diagram of the video transmission system in this application in an embodiment.
具体实施方式Detailed ways
以下由特定的具体实施例说明本申请的实施方式,熟悉此技术的人士可由本说明书所揭 露的内容轻易地了解本申请的其他优点及功效。The following specific examples illustrate the implementation of the present application. Those familiar with the technology can easily understand the other advantages and effects of the present application from the content disclosed in this specification.
虽然在一些实例中术语第一、第二等在本文中用来描述各种元件,但是这些元件不应当被这些术语限制。这些术语仅用来将一个元件与另一个元件进行区分。例如,第一解码器可以被称作第二解码器,并且类似地,第二解码器可以被称作第一解码器,而不脱离各种所描述的实施例的范围。第一解码器和解码器均是在描述一个阈值,但是除非上下文以其他方式明确指出,否则它们不是同一个解码器。Although the terms first, second, etc. are used herein to describe various elements in some instances, these elements should not be limited by these terms. These terms are only used to distinguish one element from another element. For example, the first decoder may be referred to as the second decoder, and similarly, the second decoder may be referred to as the first decoder without departing from the scope of the various described embodiments. Both the first decoder and the decoder are describing a threshold, but unless the context clearly indicates otherwise, they are not the same decoder.
Furthermore, as used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It should be further understood that the terms "comprising" and "including" indicate the presence of the stated features, steps, operations, elements, components, items, categories, and/or groups, but do not exclude the presence or addition of one or more other features, steps, operations, elements, components, items, categories, and/or groups. The terms "or" and "and/or" used herein are to be interpreted as inclusive, meaning any one or any combination. Therefore, "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A, B and C". An exception to this definition occurs only when a combination of elements, functions, steps, or operations is inherently mutually exclusive in some way.
As users' requirements for the playback quality of multimedia data keep rising, for example in scenarios such as live program broadcasting, video conferencing, or security monitoring, technicians need to transmit multimedia data with better stability and higher fidelity. However, as the playback quality of multimedia data increases, so does its data volume. For multimedia data with a large data volume, the bitstream obtained after conventional compression encoding is still large, and it is usually necessary to convert RGB images into the YUV format before compression encoding. Taking 8K video at 30 frames per second as an example, in the YUV444 format the data rate reaches 7680×4320×24 bit×30 fps ≈ 3 GByte/s; in the YUV422 format, although half of the color components are discarded, the data rate still reaches 2 GByte/s. At such data rates, compression encoding in the existing manner produces a bitstream of 48 to 160 Mbps; even with the latest 5G technology, whose 5G CPE average peak uplink speed is 80 to 90 Mbps, stable transmission still cannot be guaranteed. On the other hand, in some implementations the compression of multimedia data mostly adopts the YUV422 or YUV420 format, which loses a large amount of color information and cannot meet the demand for high image quality. A better compression method is therefore desired, one that can achieve high fidelity at a low bit rate for multimedia data with a large data volume.
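For illustration only, the following short calculation reproduces the uncompressed data rates quoted above; it assumes 8 bits per sample (24 bit/pixel for YUV444 and 16 bit/pixel for YUV422) and is not part of the claimed method.

```python
# Rough check of the uncompressed data rates quoted above,
# assuming 8 bits per sample.
width, height, fps = 7680, 4320, 30

yuv444_byte_rate = width * height * 24 // 8 * fps  # 2,985,984,000 B/s, about 3 GByte/s
yuv422_byte_rate = width * height * 16 // 8 * fps  # 1,990,656,000 B/s, about 2 GByte/s

print(f"YUV444: {yuv444_byte_rate / 1e9:.2f} GByte/s")
print(f"YUV422: {yuv422_byte_rate / 1e9:.2f} GByte/s")
```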
To this end, this application provides a compression method for obtaining a video file, so as to solve the above problems and make the playback of high-definition video smoother and with higher fidelity. The compression method is mainly performed by an image compression device, where the compression device may be a terminal device or a server. Here, the terminal device includes, but is not limited to, camera equipment, an electronic terminal device for personal use, and the like.
It should be understood that the camera equipment includes a camera device, a storage device, and a processing device, and may further include an interface device. The camera device is used to acquire image data, where the image data is composed of multiple channels of image data set based on color. The camera device includes at least a lens composed of a lens group, a photosensitive device, and the like, where the photosensitive device includes, for example, a CCD device or a CMOS device. The storage device may include a high-speed random access memory, and may further include a non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. The storage device further includes a memory controller, which can control access to the memory by other components of the device, such as the CPU and the peripheral interface. The storage device is used to store at least one program and the image data to be encoded. The programs stored in the storage device include an operating system, a communication module (or instruction set), a graphics module (or instruction set), a text input module (or instruction set), and applications (or instruction sets). The programs in the storage device further include an instruction set for performing encoding operations on the image data in time sequence based on the technical solution provided by the compression method. The processing device includes, but is not limited to, a CPU, a GPU, an FPGA (Field-Programmable Gate Array), an ISP (Image Signal Processing chip), or another processing chip dedicated to executing the at least one program stored in the storage device (such as an AI-specific chip). The processing device calls and executes the at least one program stored in the storage device, so as to compress the stored image data according to the compression method. The interface device includes, but is not limited to, a data line interface and a network interface; the data line interface includes, for example, at least one of the following: a serial interface such as USB, a parallel interface such as a bus interface, and the like. The network interface includes, for example, at least one of the following: a short-range wireless network interface such as a Bluetooth-based network interface or a WiFi network interface, a wireless network interface of a mobile network based on the 3G, 4G, or 5G protocol, a wired network interface including a network card, and the like. In some scenarios, the camera equipment is mounted on a pan-tilt above a road to monitor traffic violations such as speeding or running a red light. In other scenarios, the camera device is configured on a minimally invasive medical device, the camera device being arranged at the front end of a flexible tube through an optical fiber or another dedicated data line. In still other scenarios, the camera device is configured on a high-speed moving track in a stadium to capture high-definition pictures of competitive games.
It should be understood that the electronic terminal device for personal use includes a desktop computer, a notebook computer, a tablet computer, and editing equipment dedicated to the production of TV programs, movies, TV series, and the like. The electronic terminal device includes a storage device and a processing device, where the storage device and the processing device may be the same as or similar to the corresponding devices in the aforementioned camera equipment and are not described in detail here. The electronic terminal device may further include a camera device for capturing image data. In some examples, the hardware and software modules of this camera device may be the same as or similar to the corresponding device in the aforementioned camera equipment and are not repeated here. In still other examples, the electronic terminal device may further include an image acquisition interface for acquiring image data. The image acquisition interface may be a network interface, a data line interface, or a program interface, where the network interface and the data line interface may be the same as or similar to the corresponding devices in the aforementioned camera equipment and are not described in detail here. For example, through the network interface, the processing device of the electronic terminal device downloads image data from the Internet. As another example, through the program interface, the processing device of the electronic terminal device acquires image data displayed on the screen by drawing software, where the drawing software is, for example, PS software or screenshot software. As yet another example, through the data line interface, the processing device of the electronic terminal device acquires one frame of image data of an unedited high-definition video from the storage device.
The server includes, but is not limited to, a single server, a server cluster, a distributed server, a cloud-based server, and the like. The server includes a storage device, a processing device, an image acquisition interface, and the like. The storage device and the processing device may be configured in the same physical server device, or may be distributed among multiple physical server devices according to the division of labor of each physical server device. The image acquisition interface may be a network interface or a data line interface. The storage device, the processing device, and the image acquisition interface included in the server may be the same as the corresponding devices mentioned for the aforementioned terminal device, or may be devices specifically configured for the server based on its throughput, processing capacity, and storage requirements. For example, the storage device may further include a solid-state drive or the like, and the processing device may further include a CPU dedicated to servers or the like. The image acquisition interface of the server acquires image data and an encoding instruction from the Internet, and the processing device executes the compression method described in this application on the acquired image data based on the encoding instruction.
The video file may be stored in a storage medium, or may be transmitted to the compression device using a communication transmission method of 60 Mbps or above, where the transmission method includes, but is not limited to, a wireless transmission method based on the 5G communication protocol or optical fiber transmission.
Based on the need to compress and encode image data arising in any of the above scenarios, this application provides a compression method for obtaining a video file. Please refer to FIG. 1, which is a flowchart of the compression method in an embodiment.
In step S110, multiple pieces of image data to be compressed are acquired in chronological order; the image data is used to display video images of UHD 4K pixels and above.
The image data includes, but is not limited to, ultra-high-definition images (such as 4K images or 8K images), images that have been compressed and then decompressed, and the like. For example, the image data is a high-definition image from the original video captured by a high-definition camera. As another example, the image data is a high-definition image transmitted through a dedicated data channel. As yet another example, the image data is an image that originates from the Internet and needs to be re-encoded. The format of the image data may be the Bayer format, an RGB image generated after Debayering, or a format such as YUV. For example, the image data is in the Bayer format directly generated by the sensor of a high-definition camera. As another example, the Bayer format generated by the sensor of a high-definition camera is Debayered, that is, the other two color components are fitted onto the color component of each pixel of the Bayer data, thereby generating an RGB image, and the RGB image is used as the image data to be processed.
It should be understood that Debayering, i.e. demosaicing, is a digital image processing algorithm whose purpose is to reconstruct a full-color image from the incomplete color samples output by a photosensitive element covered with a color filter array (CFA). This method is also called color filter array interpolation (CFA interpolation) or color reconstruction.
It should be understood that a video file is composed of several frames of image data. Therefore, the compression device acquires several frames of image data to be processed in chronological order, so that these frames can be processed sequentially in chronological order.
It should be understood that UHD stands for Ultra High Definition. UHD 4K and above refers to video images with a resolution of 4K pixels and above, such as 8K pixels or 16K pixels. For ease of understanding, 8K pixels are taken as an example in this embodiment, but the principle of this solution can also be used to compress video images of 4K pixels, 16K pixels, or even higher resolution.
Here, the image data used to display a video image of UHD 4K pixels and above may be the aforementioned image data in the Bayer format or image data in the RGB format. The image data in the RGB format includes image data in the RGB format itself and image data in other formats (such as the YUV format) that can be converted into the RGB format.
Please continue to refer to FIG. 1. In step S120, the compression device maps the color value of each pixel in each piece of image data to pixel positions in multiple image blocks based on the color attribute of each pixel in the image data.
It should be understood that a pixel is the basic unit of image display. Each pixel has different color attributes depending on the format of the image data to which it belongs. For example, for image data in the Bayer format, the color attribute of a pixel is a single color component; for image data in a format such as RGB, the color attributes of a pixel include the three color components red (R), green (G), and blue (B). Since human eyes are more sensitive to green than to other colors, the number of G components is usually twice the number of each of the other color components; therefore, in some embodiments, the G component is represented by a Gr component or a Gb component. Each pixel has a color value corresponding to its color attribute. For example, when the image data is in the Bayer format, please refer to FIG. 2, which is a schematic diagram of image data in an embodiment of this application. As shown in the figure, each square represents a pixel, and each pixel has only a single R, G, or B color component, where the G component is represented by a Gr component and a Gb component; the color value of each pixel of the image data is the brightness value of that single color component. When the image data is in the RGB format, the color value of each pixel of the image data includes the brightness values of all the color components of that pixel.
For ease of understanding, a single piece of image data is used here as an example. It should be understood that multiple pieces of image data are handled by processing each piece separately according to the method for a single piece of image data, and providing the processed results to step S130 respectively.
In this embodiment, the compression device divides the image data into multiple image blocks based on color attributes. To preserve the correspondence between the pixels of the image data and the pixels of the image blocks, so that the image data can be restored during decoding, the compression device maps the color value of each pixel in each piece of image data to pixel positions in the multiple image blocks.
In an exemplary embodiment, in the case that each pixel of the image data represents a single color attribute, step S120 includes: traversing the image data according to a color format set based on the Bayer format of the image data; during the traversal, based on the color attribute of each pixel in the color format, the color value of each pixel is extracted from the image data and mapped to a pixel position in the corresponding image block.
Please continue to refer to FIG. 2, in which each pixel of the image data represents a single color attribute. In this embodiment, 4 (2×2) pixels are determined as a color format 101. Since, when Bayer data is scanned, the odd rows usually output G, R, G, R, ... and the even rows output B, G, B, G, ..., one color format 101 contains pixel data of four different color attributes. Using the relative positions of the pixel data of the different color attributes within this pixel unit as the pixel row format, the image data is traversed, so that the individual pixel data are extracted and multiple image blocks are formed.
In some embodiments, please refer to FIG. 3, which is a schematic diagram of an embodiment of the mapping method of mapping the color value of each pixel in each piece of image data to pixel positions in multiple image blocks in this application. As shown in the figure, in this embodiment the odd rows of the image data are Gr, R, Gr, R, ... and the even rows are B, Gb, B, Gb, ...; here, the Gr and R of an odd row together with the B and Gb of the adjacent even row are determined as one color format 101. For ease of understanding, the coordinates of all pixels in the color format 101 containing the pixel of the first row and first column are defined as the origin (0, 0); that is, this color format 101 includes Gr(0,0), R(0,0), B(0,0), and Gb(0,0). As shown in FIG. 3, taking this color format 101 as the reference, the pixels of the color formats shifted 1, 2, or 3 units to the right in the horizontal direction are given the coordinates (0,1), (0,2), and (0,3), respectively, and so on; similarly, the pixels of the color formats shifted 1, 2, or 3 units downward in the vertical direction are given the coordinates (1,0), (2,0), and (3,0), respectively, and so on. By traversing the entire image data according to this rule, every pixel of the image data obtains its own position information, so that this position information can be used to restore the image data during decoding.
After the position information of each pixel is determined, the compression device extracts the color value of each pixel from the image data based on the color attribute of that pixel. Here, the compression device divides all the pixel data of the image data into multiple image blocks based on the color attributes, and each image block contains only one color attribute. Please continue to refer to FIG. 3. Since the color attributes in this embodiment include the four attributes R, Gr, Gb, and B, all the pixel data of the image data is divided into four image blocks: R, Gr, Gb, and B. For example, Gr(0,0) is assigned to the image block of the Gr color attribute; Gr(0,1), Gr(0,2), and Gr(0,3) are assigned to the same Gr image block at positions shifted 1, 2, and 3 units to the right of Gr(0,0), respectively; Gr(1,0), Gr(2,0), and Gr(3,0) are assigned at positions shifted 1, 2, and 3 units below Gr(0,0), respectively; and so on. Similarly, the pixels of the R, Gb, and B color attributes are each assigned to the corresponding image blocks, and every pixel position in every image block corresponds to its pixel position in the image data; this is not enumerated further here.
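As an illustrative sketch of the splitting described above (not a definitive implementation), the following Python snippet separates a Bayer frame laid out as in FIG. 3 (odd rows Gr, R, ...; even rows B, Gb, ...) into the four single-color image blocks; the function and variable names are assumptions made for the example.

```python
import numpy as np

def split_bayer_into_blocks(bayer: np.ndarray) -> dict:
    """Split a Bayer frame (H x W, one component per pixel) into four
    single-color image blocks, one per color attribute.

    Assumes the FIG. 3 layout: 0-based row 0 is Gr, R, Gr, R, ... and
    row 1 is B, Gb, B, Gb, ...  Each 2x2 color-format cell at offset
    (i, j) contributes the sample at position (i, j) of each block.
    """
    return {
        "Gr": bayer[0::2, 0::2],  # top-left sample of every 2x2 cell
        "R":  bayer[0::2, 1::2],  # top-right sample
        "B":  bayer[1::2, 0::2],  # bottom-left sample
        "Gb": bayer[1::2, 1::2],  # bottom-right sample
    }

# Example: an 8K Bayer frame yields four 3840x2160 single-color blocks.
frame = np.random.randint(0, 256, size=(4320, 7680), dtype=np.uint8)
blocks = split_bayer_into_blocks(frame)
assert blocks["Gr"].shape == (2160, 3840)
```

Because each block simply collects the samples at a fixed offset within every 2×2 color-format cell, the positional correspondence needed to restore the image during decoding is preserved automatically.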
In another exemplary embodiment, please refer to FIG. 4, which is a schematic diagram of an embodiment in which each pixel of the image data in this application represents RGB color attributes. As shown in the figure, the image data acquired by the compression device is an RGB image; for example, the image data is obtained by Debayering on the basis of the Bayer format, that is, by fitting the other two color components onto the color component of each pixel shown in FIG. 2, thereby generating an RGB image. Here, in the case that each pixel of the image data represents RGB color attributes, step S120 includes: traversing the image data according to a color format set based on the pixel row format of the image data; during the traversal, based on the color attribute of each pixel in the color format, the color principal component or color fitting component of each pixel is extracted from the image data and mapped to a pixel position in the corresponding image block.
It should be understood that, since the data output by the sensor of an image capturing device is usually in the Bayer format, each pixel has only a single component. An encoder cannot encode the Bayer format directly, nor can a display device directly display an image in the Bayer format. Therefore, the Bayer format usually needs to be Debayered to form the RGB format. However, the amount of data after Debayering is very large, three times that of the Bayer format, which makes the bitstream too large and affects transmission efficiency. Therefore, in some implementations, the RGB format is further converted into the YUV format, that is, into a luminance and chrominance format, and the chrominance is then reduced to cut down the amount of data. YUV422 reduces the amount of data by 1/3 and YUV420 by 1/2, but the amount of data is still 2 times and 1.5 times that of the Bayer format, respectively. At the same time, converting to YUV422 or YUV420 causes serious loss of color information, which cannot meet the demand for high image quality.
Please continue to refer to FIG. 4. In this embodiment, each pixel of the image data has three color components, where the bold part of each pixel represents the principal component of that pixel and the non-bold parts represent the other two components fitted based on that principal component. Four (2×2) pixels are determined as one color format 101. Since each pixel in a color format 101 has three color components, only one color component is extracted from each pixel in order to reduce the bitstream after compression encoding.
In some embodiments, the compression device knows the principal component of each pixel in advance and directly determines the principal component of each pixel as the color component to be extracted. In other embodiments, the compression device cannot know the principal component of each pixel in advance, and may determine one component of each pixel as the color component to be extracted according to a preset rule. Here, the preset rule includes, for example, but is not limited to: extracting G and R from odd rows and B and G from even rows; or extracting G and B from odd rows and R and G from even rows; or extracting R and G from odd rows and G and B from even rows; or extracting B and G from odd rows and G and R from even rows. When the G component is represented as the Gr component and the Gb component, the preset rule may further include, for example, but is not limited to: extracting Gr and R from odd rows and B and Gb from even rows; or extracting Gb and B from odd rows and R and Gr from even rows; or extracting R and Gr from odd rows and Gb and B from even rows; or extracting B and Gb from odd rows and Gr and R from even rows. It should be understood that, since the human eye's ability to resolve image detail is limited, the effect of any of the above extraction methods on the final imaging result is negligible.
Please continue to refer to FIG. 4. In this embodiment, the compression device knows the principal component of each pixel in advance. Here, the compression device determines the principal component of each pixel as the color component to be extracted, so the color format 101 is determined as Gr, R for odd rows and B, Gb for even rows.
Please refer to FIG. 5, which is a schematic diagram of another embodiment of the mapping method of mapping the color value of each pixel in each piece of image data to pixel positions in multiple image blocks in this application. As shown in the figure, for ease of understanding, the coordinates of all pixels in the color format 101 containing the pixel of the first row and first column of the image data are defined as the origin (0, 0); that is, the principal components of this color format 101 are Gr(0,0), R(0,0), B(0,0), and Gb(0,0). As shown in FIG. 5, taking this color format 101 as the reference, the pixels of the color formats shifted 1, 2, or 3 units to the right in the horizontal direction are given the coordinates (0,1), (0,2), and (0,3), respectively, and so on; similarly, the pixels of the color formats shifted 1, 2, or 3 units downward in the vertical direction are given the coordinates (1,0), (2,0), and (3,0), respectively, and so on. By traversing the entire image data according to this rule, every pixel of the image data obtains its own position information, so that this position information can be used to restore the image data during decoding.
After the position information of each pixel is determined, the compression device extracts the color value of each pixel from the image data based on the color attribute of that pixel. Since the compression device knows the principal component of each pixel in advance, it only needs to extract that principal component. Here, the compression device divides all the principal components of the image data into multiple image blocks based on the color attributes, and each image block contains only one color attribute. Please continue to refer to FIG. 5. Since the color attributes in this embodiment include the four attributes R, Gr, Gb, and B, all the extracted pixel data of the image data is divided into four image blocks: R, Gr, Gb, and B. For example, Gr(0,0) is assigned to the image block of the Gr color attribute; Gr(0,1), Gr(0,2), and Gr(0,3) are assigned to the same Gr image block at positions shifted 1, 2, and 3 units to the right of Gr(0,0), respectively; Gr(1,0), Gr(2,0), and Gr(3,0) are assigned at positions shifted 1, 2, and 3 units below Gr(0,0), respectively; and so on. Similarly, the pixels of the R, Gb, and B color attributes are each assigned to the corresponding image blocks, and every pixel position in every image block corresponds to its pixel position in the image data; this is not enumerated further here.
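A corresponding sketch for the Debayered case is given below, assuming the principal-component pattern of FIG. 4 and FIG. 5 is known in advance and that the RGB frame is stored with channel order (R, G, B); the names are illustrative only and not part of the claimed method.

```python
import numpy as np

def extract_principal_blocks(rgb: np.ndarray) -> dict:
    """Extract one component per pixel from a Debayered H x W x 3 RGB frame
    and regroup the values into four single-color image blocks.

    Assumes the FIG. 4/FIG. 5 layout: 0-based row 0 carries Gr, R, Gr, R, ...
    and row 1 carries B, Gb, B, Gb, ... as principal components, with the
    third array axis ordered (R, G, B).
    """
    R_CH, G_CH, B_CH = 0, 1, 2
    return {
        "Gr": rgb[0::2, 0::2, G_CH],  # green principal component on Gr positions
        "R":  rgb[0::2, 1::2, R_CH],
        "B":  rgb[1::2, 0::2, B_CH],
        "Gb": rgb[1::2, 1::2, G_CH],  # green principal component on Gb positions
    }
```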
In other embodiments, the compression device does not know the principal component of each pixel in advance. Here, the following rules are taken as examples: extracting Gr and R from odd rows and B and Gb from even rows; extracting Gb and B from odd rows and R and Gr from even rows; extracting R and Gr from odd rows and Gb and B from even rows; and extracting B and Gb from odd rows and Gr and R from even rows.
Please refer to FIG. 8, which is a schematic diagram of another embodiment in which each pixel of the image data in this application represents RGB color attributes. In this embodiment, the compression device does not know the principal component of each pixel in advance. Here, the color components to be extracted are determined by the rule of extracting Gr and R from odd rows and B and Gb from even rows, so the color format is determined as Gr, R for odd rows and B, Gb for even rows. Since the compression method for this color format is the same as that of the embodiment shown in FIG. 5, it is not repeated here.
Please refer to FIG. 9, which is a schematic diagram of yet another embodiment in which the compression device of this application does not know the principal component of each pixel in advance. In this embodiment, the color components to be extracted are determined by the rule of extracting Gb and B from odd rows and R and Gr from even rows, so the color format 101 is determined as Gb, B for odd rows and R, Gr for even rows.
Please refer to FIG. 10, which is a schematic diagram of an embodiment of the mapping method of mapping the color value of each pixel in each piece of image data to pixel positions in multiple image blocks when the compression device of this application does not know the principal component of each pixel in advance. As shown in the figure, for ease of understanding, the coordinates of all pixels in the color format 101 containing the pixel of the first row and first column of the image data are defined as the origin (0, 0); that is, the extracted components of this color format 101 are Gb(0,0), B(0,0), R(0,0), and Gr(0,0). As shown in FIG. 10, taking this color format 101 as the reference, the pixels of the color formats shifted 1, 2, or 3 units to the right in the horizontal direction are given the coordinates (0,1), (0,2), and (0,3), respectively, and so on; similarly, the pixels of the color formats shifted 1, 2, or 3 units downward in the vertical direction are given the coordinates (1,0), (2,0), and (3,0), respectively, and so on. By traversing the entire image data according to this rule, every pixel of the image data obtains its own position information, so that this position information can be used to restore the image data during decoding.
After the position information of each pixel is determined, the compression device extracts the color value of each pixel from the image data based on the color attribute of that pixel and divides the extracted values into multiple image blocks, each of which contains only one color attribute. Please continue to refer to FIG. 10. Since the color attributes in this embodiment include the four attributes R, Gr, Gb, and B, all the extracted pixel data of the image data is divided into four image blocks: R, Gr, Gb, and B. For example, Gb(0,0) is assigned to the image block of the Gb color attribute; Gb(0,1), Gb(0,2), and Gb(0,3) are assigned to the same Gb image block at positions shifted 1, 2, and 3 units to the right of Gb(0,0), respectively; Gb(1,0), Gb(2,0), and Gb(3,0) are assigned at positions shifted 1, 2, and 3 units below Gb(0,0), respectively; and so on. Similarly, the pixels of the B, R, and Gr color attributes are each assigned to the corresponding image blocks, and every pixel position in every image block corresponds to its pixel position in the image data; this is not enumerated further here.
Please refer to FIG. 11, which is a schematic diagram of a further embodiment in which the compression device of this application does not know the principal component of each pixel in advance. In this embodiment, the color components to be extracted are determined by the rule of extracting R and Gr from odd rows and Gb and B from even rows, so the color format 101 is determined as R, Gr for odd rows and Gb, B for even rows.
Please refer to FIG. 12, which is a schematic diagram of another embodiment of the mapping method of mapping the color value of each pixel in each piece of image data to pixel positions in multiple image blocks when the compression device of this application does not know the principal component of each pixel in advance. As shown in the figure, for ease of understanding, the coordinates of all pixels in the color format 101 containing the pixel of the first row and first column of the image data are defined as the origin (0, 0); that is, the extracted components of this color format 101 are R(0,0), Gr(0,0), Gb(0,0), and B(0,0). As shown in FIG. 12, taking this color format 101 as the reference, the pixels of the color formats shifted 1, 2, or 3 units to the right in the horizontal direction are given the coordinates (0,1), (0,2), and (0,3), respectively, and so on; similarly, the pixels of the color formats shifted 1, 2, or 3 units downward in the vertical direction are given the coordinates (1,0), (2,0), and (3,0), respectively, and so on. By traversing the entire image data according to this rule, every pixel of the image data obtains its own position information, so that this position information can be used to restore the image data during decoding.
After the position information of each pixel is determined, the compression device extracts the color value of each pixel from the image data based on the color attribute of that pixel and divides the extracted values into multiple image blocks, each of which contains only one color attribute. Please continue to refer to FIG. 12. Since the color attributes in this embodiment include the four attributes R, Gr, Gb, and B, all the extracted pixel data of the image data is divided into four image blocks: R, Gr, Gb, and B. For example, R(0,0) is assigned to the image block of the R color attribute; R(0,1), R(0,2), and R(0,3) are assigned to the same R image block at positions shifted 1, 2, and 3 units to the right of R(0,0), respectively; R(1,0), R(2,0), and R(3,0) are assigned at positions shifted 1, 2, and 3 units below R(0,0), respectively; and so on. Similarly, the pixels of the Gr, Gb, and B color attributes are each assigned to the corresponding image blocks, and every pixel position in every image block corresponds to its pixel position in the image data; this is not enumerated further here.
Please refer to FIG. 13, which is a schematic diagram of yet another embodiment in which the compression device of this application does not know the principal component of each pixel in advance. In this embodiment, the color components to be extracted are determined by the rule of extracting B and Gb from odd rows and Gr and R from even rows, so the color format 101 is determined as B, Gb for odd rows and Gr, R for even rows.
Please refer to FIG. 14, which is a schematic diagram of yet another embodiment of the mapping method of mapping the color value of each pixel in each piece of image data to pixel positions in multiple image blocks when the compression device of this application does not know the principal component of each pixel in advance. As shown in the figure, for ease of understanding, the coordinates of all pixels in the color format 101 containing the pixel of the first row and first column of the image data are defined as the origin (0, 0); that is, the extracted components of this color format 101 are B(0,0), Gb(0,0), Gr(0,0), and R(0,0). As shown in FIG. 14, taking this color format 101 as the reference, the pixels of the color formats shifted 1, 2, or 3 units to the right in the horizontal direction are given the coordinates (0,1), (0,2), and (0,3), respectively, and so on; similarly, the pixels of the color formats shifted 1, 2, or 3 units downward in the vertical direction are given the coordinates (1,0), (2,0), and (3,0), respectively, and so on. By traversing the entire image data according to this rule, every pixel of the image data obtains its own position information, so that this position information can be used to restore the image data during decoding.
After the position information of each pixel is determined, the compression device extracts the color value of each pixel from the image data based on the color attribute of that pixel and divides the extracted values into multiple image blocks, each of which contains only one color attribute. Please continue to refer to FIG. 14. Since the color attributes in this embodiment include the four attributes R, Gr, Gb, and B, all the extracted pixel data of the image data is divided into four image blocks: R, Gr, Gb, and B. For example, B(0,0) is assigned to the image block of the B color attribute; B(0,1), B(0,2), and B(0,3) are assigned to the same B image block at positions shifted 1, 2, and 3 units to the right of B(0,0), respectively; B(1,0), B(2,0), and B(3,0) are assigned at positions shifted 1, 2, and 3 units below B(0,0), respectively; and so on. Similarly, the pixels of the Gb, Gr, and R color attributes are each assigned to the corresponding image blocks, and every pixel position in every image block corresponds to its pixel position in the image data; this is not enumerated further here.
It should be understood that, in the above embodiments, the mapping relationship is expressed in the form of coordinates for ease of understanding. In some implementations, however, the method of extracting the color value of each pixel from the image data based on the color attribute of each pixel in the color format and mapping it to a pixel position in the corresponding image block is not limited to determining the mapping relationship by coordinates; it may also use serial numbers or any other information that can be recognized by the compression device for determining the mapping relationship.
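As one possible illustration of the serial-number alternative mentioned above, a pixel position inside a block could be numbered in row-major order; this is merely a sketch of one convention and is not a scheme prescribed by this application.

```python
def coord_to_serial(i: int, j: int, block_width: int) -> int:
    """Row-major serial number for the pixel at block coordinate (i, j)."""
    return i * block_width + j

def serial_to_coord(serial: int, block_width: int) -> tuple:
    """Inverse mapping, used when the mapping relationship is stored as serial numbers."""
    return divmod(serial, block_width)  # returns (i, j)
```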
In step S130, the compression device compresses the image blocks that correspond to each of the multiple pieces of image data and have the same color attribute, so as to obtain a video file. Here, the multiple image blocks obtained in step S120 are input to an encoder for encoding. The encoding standard may include, but is not limited to, H.265 or AVS2 (the second-generation digital audio and video coding and decoding technology standard).
In some embodiments, the encoder may be integrated in the compression device; for example, the processing device of the compression device coordinates the encoder to perform step S130 after performing steps S110 and S120. Alternatively, the encoder may be an independent terminal device or a server. The encoder includes a processing module capable of logic control and numerical operations, and a storage module for storing intermediate data generated while the processing module is running. The processing module includes, for example, any one or a combination of the following: an FPGA, an MCU, a CPU, and the like. The storage module includes, for example, any one or a combination of the following: volatile memories such as registers, stacks, and caches.
It should be understood that a video encoder is a program or device capable of compressing digital video. Generally, data in the Bayer format cannot be encoded directly by an encoder, because each pixel has only a single color component and lacks the other two color components; the Bayer data must be Debayered into the RGB format, or converted into the YUV format, before it can be encoded. In this embodiment, since the pixels of each image block include only a single color attribute, the missing bits can be temporarily filled with 0 when the block enters the encoder. On the one hand, this makes the image blocks compatible with existing encoders; on the other hand, it facilitates the encoder's computation and ensures its processing efficiency.
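The zero-filling described above might look like the following sketch; the actual layout expected by a given encoder (planar or interleaved storage, channel order) is an assumption made here for illustration.

```python
import numpy as np

def pad_block_for_encoder(block: np.ndarray) -> np.ndarray:
    """Wrap a single-component image block (H x W) into a 3-channel frame by
    filling the two missing channels with 0, so that it can be fed to an
    ordinary encoder expecting three components per pixel.

    Sketch only: which channel carries the real component, and whether the
    encoder wants planar or interleaved data, depends on the encoder used.
    """
    h, w = block.shape
    padded = np.zeros((h, w, 3), dtype=block.dtype)
    padded[:, :, 0] = block  # the single color component; the other channels stay 0
    return padded
```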
In an exemplary embodiment, a single encoder may process the multiple image blocks one after another. In this case, step S130 includes: inputting the multiple image blocks corresponding to the multiple frames of image data into a first encoder in sequence, according to the color attributes in the color format, for compression processing.
Please refer to FIG. 6, which is a schematic diagram of an embodiment of compression processing using one first encoder in the present application. The figure shows multiple image blocks generated from the image data of four different frames ①, ②, ③, and ④, where each image block of each frame number contains only a single color attribute. Here, the multiple image blocks are input into one first encoder in a preset order for compression encoding. The preset order includes, but is not limited to, an order based on the time at which the image data corresponding to the image blocks was acquired, or an order based on the color attributes of the image blocks.
In some embodiments, the preset order is determined based on the time at which the image data corresponding to the image blocks was acquired. Please refer to FIG. 7, which is a schematic diagram of another embodiment of compression processing using one first encoder in the present application. As shown in the figure, in this embodiment the four image blocks numbered ① are first input to the first encoder 102 for compression, then the four image blocks numbered ②, then the four image blocks numbered ③, and finally the four image blocks numbered ④.
In other embodiments, the preset order is determined based on the color attributes of the image blocks. Referring again to FIG. 6, the image blocks with the Gr color attribute are first input to the first encoder 102 for compression, then the image blocks with the R color attribute, then the image blocks with the B color attribute, and finally the image blocks with the Gb color attribute. Since the color difference between adjacent frames is small, this ordering can greatly reduce the amount of computation and improve compression efficiency.
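To make the two orderings concrete, here is a small sketch assuming the blocks are kept in a dictionary keyed by (frame number, color attribute); the function names and the color order Gr, R, B, Gb simply follow the example of FIG. 6 and are not prescribed by the method itself.

```python
def frame_major_order(blocks):
    """blocks: dict keyed by (frame_index, color) -> image block.
    Feed all four blocks of frame 1, then all four of frame 2, ..."""
    frames = sorted({f for f, _ in blocks})
    for f in frames:
        for color in ("Gr", "R", "B", "Gb"):
            yield blocks[(f, color)]

def color_major_order(blocks):
    """Feed all Gr blocks (frames 1..N), then all R, then B, then Gb;
    consecutive inputs differ little, which favors inter prediction."""
    frames = sorted({f for f, _ in blocks})
    for color in ("Gr", "R", "B", "Gb"):
        for f in frames:
            yield blocks[(f, color)]
```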
In an exemplary embodiment, to improve the efficiency of compression processing, multiple encoders may process the multiple image blocks separately. In this case, step S130 includes: under synchronous control, using multiple second encoders to separately compress the multiple image blocks of the same color attribute.
In some embodiments, in order to distinguish the image blocks generated from image data of different frames, step S130 further includes: the video file obtained by compression with the multiple second encoders contains synchronization information set for decompressing the video file and restoring the multiple frames of image data. Here, the second encoder generates synchronization information for each image block. The synchronization information includes, but is not limited to, a timestamp, a sequence number, and the like. For example, images of the same frame have the same timestamp and images of different frames have different timestamps, so that image blocks with the same timestamp can be restored into one frame of image data at the decoding end. As another example, images of the same frame have the same sequence number and images of different frames have different sequence numbers, so that image blocks with the same sequence number can be restored into one frame of image data at the decoding end. Because the clocks of different second encoders drift relative to each other, a synchronization server may be used to synchronize the time of the multiple second encoders, so that their clocks remain consistent or the error is kept within an acceptable range. The server includes, but is not limited to, an NTP (Network Time Protocol) server. In still other embodiments, one of the multiple second encoders may, based on a synchronization protocol, exert synchronization control over the other second encoders, so as to keep the other second encoders on the same clock or keep the error within an acceptable range. The synchronization protocol includes, but is not limited to, the IEEE 1588 protocol.
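The following sketch shows one way such synchronization information could be attached to each compressed block and later used to regroup the blocks belonging to one frame. The field names (frame_seq, timestamp_us, and so on) are hypothetical and only illustrate the idea of shared timestamps and sequence numbers.

```python
from dataclasses import dataclass

@dataclass
class EncodedBlock:
    color: str         # "R", "Gr", "Gb" or "B"
    frame_seq: int     # same value for all blocks of one source frame
    timestamp_us: int  # shared capture timestamp (e.g. NTP/PTP-disciplined)
    payload: bytes     # compressed bitstream for this block

def group_by_frame(blocks):
    """Regroup blocks emitted by independent encoders into frames,
    using the shared synchronization information."""
    frames = {}
    for b in blocks:
        frames.setdefault(b.frame_seq, {})[b.color] = b
    return frames
```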
In an exemplary embodiment, the image data is in a format other than the Bayer or RGB format, such as the YUV format. In that case the image data is first converted into the RGB or Bayer format and then compressed according to the compression processing described above; the compression process itself is not repeated here.
Image data compressed according to the technical idea of the above compression method may be stored on a storage medium, or transmitted between devices, or within a device, over a communication link of 60 Mbps or above. For example, in a camcorder with integrated recording and playback, the hardware constituting the compression device compresses the captured image data into corresponding compressed image data under software instruction scheduling and saves it in the storage device; when the user operates the camcorder to play the compressed image data, the hardware constituting the decompression device decompresses it under software instruction scheduling and plays (i.e., displays) it. As another example, a camera device capable of executing the compression method compresses the captured image data into corresponding compressed image data (such as a compressed file or a code stream) and transmits it to a server over a wireless link based on the 5G communication protocol, an optical-fiber link, or another transmission mode; a decompression device provided in the server then decompresses the compressed image data and plays (i.e., displays) it.
The compression approach of the present application preserves ultra-high-definition clarity while ensuring stable transmission. With the amount of data produced after compression and encoding by this approach, 8K video can be transmitted using current 4K encoders, which solves the difficulty of transmitting ultra-high-definition video in the prior art.
An embodiment of the second aspect of the present application further provides a method for decompressing a video file. The decompression method is mainly performed by an image decompression device, which may be a terminal device or a server.
The terminal device includes, but is not limited to, a playback device, a personal electronic terminal device, and the like. The playback device includes a storage device and a processing device, and may further include an interface device. The storage device may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. The storage device further includes a memory controller, which can control access to the memory by other components of the device, such as the CPU and the peripheral interfaces. The storage device is used to store at least one program and the image data to be decompressed. The programs stored in the storage device include an operating system, a communication module (or instruction set), a graphics module (or instruction set), a text input module (or instruction set), and applications (or instruction sets). The programs in the storage device further include an instruction set for performing, in time sequence, decompression operations on the image data based on the technical solution provided by the decompression method. The processing device includes, but is not limited to, a CPU, a GPU, an FPGA (Field-Programmable Gate Array), an ISP (Image Signal Processing chip), or another processing chip dedicated to executing at least one program stored in the storage device (such as an AI-specific chip). The processing device calls and executes at least one program stored in the storage device to decompress the stored image data according to the decompression method. A processing device that can process matrix data in parallel, such as an FPGA, is better suited to efficient, real-time decompression of the acquired image data. The interface device includes, but is not limited to, a data-line interface and a network interface; examples of the data-line interface include display interfaces such as VGA and HDMI, serial interfaces such as USB, and parallel interfaces such as a data bus.
Examples of the network interface include at least one of the following: short-range wireless network interfaces such as Bluetooth-based and WiFi network interfaces, wireless network interfaces of mobile networks based on 3G, 4G, or 5G protocols, and wired network interfaces containing a network card. The playback device further includes a display device for displaying the decompressed image data. The display device at least includes a display screen and a display-screen controller, where the display screen includes, for example, a liquid-crystal display, a curved display, or a touch screen, and the display-screen controller includes, for example, a processor dedicated to the display device or a processor integrated with the processor of the processing device. In some scenarios, the playback device is deployed in a traffic command center and is used to decompress and display compressed image data transmitted from camera devices. In other scenarios, the playback device is configured on a computer device communicatively connected to a minimally invasive medical device through an optical fiber or another dedicated data line, and decompresses and plays the compressed image data provided by that medical device. In other scenarios, the playback device is configured in the machine room of a television relay center and is used to decompress and play compressed image data transmitted from camera devices installed at a sports venue, for video editing. In still other scenarios, the playback device is a set-top box, which decompresses the code stream of the corresponding television channel in the television signal and outputs it to a television set for display.
所述个人使用的电子终端设备包括台式电脑、笔记本电脑、平板电脑、和专用于制作电视节目、电影、电视剧等的剪接设备等。所述电子终端设备包含存储装置、处理装置。其中,存储装置和处理装置可与前述摄像设备中的对应装置相同或相似,在此不再详述。所述电子终端设备还可以包含显示装置,用于显示经解压缩得到的图像数据。在此,在一些示例中, 所述电子终端的硬件及软件模块可与前述播放设备中的对应装置相同或相似,在此也不再重述。在又一些示例中,所述电子终端设备还可以包括图像获取接口,用于获取源自于经压缩的压缩后图像数据。所述图像获取接口可以为网络接口、数据线接口、或程序接口。其中,所述网络接口和数据线接口可与前述播放设备中的对应装置相同或相似,在此不再详述。例如,藉由所述网络接口,所述电子终端设备的处理装置从互联网中下载的压缩后图像数据。再如,藉由所述数据线接口,所述电子终端设备的处理装置从存储装置中获取编辑文件。The electronic terminal equipment for personal use includes desktop computers, notebook computers, tablet computers, and editing equipment dedicated to the production of TV programs, movies, TV series, and the like. The electronic terminal equipment includes a storage device and a processing device. Wherein, the storage device and the processing device may be the same or similar to the corresponding devices in the aforementioned camera equipment, and will not be described in detail here. The electronic terminal equipment may also include a display device for displaying image data obtained by decompression. Here, in some examples, the hardware and software modules of the electronic terminal may be the same as or similar to the corresponding devices in the aforementioned playback device, and will not be repeated here. In still other examples, the electronic terminal device may further include an image acquisition interface for acquiring compressed image data derived from compression. The image acquisition interface may be a network interface, a data line interface, or a program interface. Wherein, the network interface and the data line interface can be the same or similar to the corresponding devices in the aforementioned playback device, and will not be described in detail here. For example, through the network interface, the processing device of the electronic terminal device downloads compressed image data from the Internet. For another example, through the data line interface, the processing device of the electronic terminal device obtains the edited file from the storage device.
所述服务器包括但不限于单台服务器、服务器集群、分布式服务器、基于云技术的服务端等。其中,所述服务器包括存储装置、处理装置和图像获取接口等。其中所述存储装置和处理装置可配置于同一台实体服务器设备中,或根据各实体服务器设备的分工而配置在多台实体服务器设备中。所述图像获取接口可以为网络接口、或数据线接口。所述服务器中所包含的存储装置、处理装置和图像获取接口等可与前述终端设备中所提及的对应装置相同;或基于服务器的吞吐量、处理能力、存储要求而专门设置的用于服务器的各对应装置。例如,所述存储装置还可包含固态硬盘等。例如,所述处理装置还可包含专用于服务器的CPU等。所述服务器中的图像获取接口获取来自互联网中的压缩后图像数据、和播放指令,处理装置基于所述播放指令对所获取的压缩后图像数据执行本申请所述的解压缩方法。The server includes but is not limited to a single server, a server cluster, a distributed server, a server based on cloud technology, and the like. Wherein, the server includes a storage device, a processing device, an image acquisition interface, and the like. The storage device and the processing device may be configured in the same physical server device, or be configured in multiple physical server devices according to the division of labor of each physical server device. The image acquisition interface may be a network interface or a data line interface. The storage device, processing device, image acquisition interface, etc. included in the server may be the same as the corresponding devices mentioned in the aforementioned terminal equipment; or specifically set for the server based on the server's throughput, processing capacity, and storage requirements The corresponding devices. For example, the storage device may also include a solid state drive or the like. For example, the processing device may also include a CPU dedicated to a server or the like. The image acquisition interface in the server acquires compressed image data and playback instructions from the Internet, and the processing device executes the decompression method described in this application on the acquired compressed image data based on the playback instructions.
Based on the need to decompress image data that arises in any of the above scenarios, the present application provides a method for decompressing a video file. Please refer to FIG. 15, which is a flowchart of the decompression method in one embodiment.
In step S210, a video file is obtained. The video file is obtained by compressing image data according to the compression method of the present application.
在某些实施方式中,所述视频文件可以来自于存储介质中,也可以利用60Mbps及以上的通信传输方式将所述视频文件传输至解压缩设备。其中,所述传输方式包括但不限于:基于5G通信协议的无线传输方式、或光纤传输。In some embodiments, the video file may come from a storage medium, or the video file may be transmitted to the decompression device using a communication transmission mode of 60 Mbps and above. Wherein, the transmission method includes, but is not limited to: a wireless transmission method based on the 5G communication protocol, or optical fiber transmission.
In step S220, the video file is decompressed according to the compression scheme used to produce it, to obtain multiple image blocks; according to the color attribute of each image block, the obtained image blocks correspond to each of the multiple frames of image data to be generated.
在此,将步骤S210中得到的视频文件输入至解码器进行解码。其中,所述解码的标准可采用包括但不限于:H.265或AVS2即第二代数字音视频编解码技术标准等。Here, the video file obtained in step S210 is input to the decoder for decoding. Wherein, the decoding standard may include, but is not limited to: H.265 or AVS2, which is the second-generation digital audio and video coding and decoding technology standard.
在某些实施例中,所述解码器可以集成在所述解压缩设备中,例如,所述解压缩设备的处理装置在执行步骤S210后协调所述解码器执行步骤S220。或者,所述解码器也可以为独立的终端设备、或者服务器。所述解码器包含可进行逻辑控制和数字运算的处理模块,和用 于存储所述处理模块运行期间所产生的中间数据的存储模块。其中,所述处理模块举例包括以下任一种或多种的组合:FPGA、MCU及CPU等。所述存储模块举例包括以下任一种或多种的组合:寄存器、堆栈及缓存等易失性存储器。In some embodiments, the decoder may be integrated in the decompression device. For example, the processing device of the decompression device coordinates the decoder to perform step S220 after performing step S210. Alternatively, the decoder may also be an independent terminal device or a server. The decoder includes a processing module capable of performing logic control and digital operations, and a storage module for storing intermediate data generated during the operation of the processing module. Wherein, the processing module includes, for example, any one or a combination of the following: FPGA, MCU, CPU, etc. The storage module includes, for example, any one or a combination of the following: volatile memories such as registers, stacks, and caches.
In an exemplary embodiment, one decoder may process the multiple image blocks. In this case, step S220 includes: decompressing the received video file with a first decoder to obtain multiple groups of image blocks divided according to the different color attributes in the color format, where each image block in each group corresponds to one frame of image data to be generated.
Here, the first decoder decompresses the received video file according to the compression-encoding scheme that the encoder applied to the image blocks. After multiple groups of image blocks are obtained through the first decoder, the first decoder determines the correspondence among the image blocks according to the order in which they were obtained and the compression-encoding rules, so that the image blocks can be assembled into image data.
在某些实施例中,请参阅图16,其显示为本申请中利用一第一解码器进行解压缩处理的实施例示意图。在本实施例中,在压缩编码期间,编码器是基于图像块对应的图像数据所获取的时间的规则来对多个图像块进行压缩编码的。因此,诚如图16所示,所述第一解码器依次获取了编号为①的Gr颜色属性的图像块、编号为①的R颜色属性的图像块、编号为①的B颜色属性的图像块、编号为①的Gb颜色属性的图像块、编号为②的Gr颜色属性的图像块、编号为②的R颜色属性的图像块……。在此,根据编码器进行压缩编码时的规则,确定多个图像块之间的对应关系,如编号为①的Gr颜色属性的图像块、编号为①的R颜色属性的图像块、编号为①的B颜色属性的图像块、编号为①的Gb颜色属性的图像块均来自于同一图像数据,编号为②的Gr颜色属性的图像块、编号为②的R颜色属性的图像块、编号为②的B颜色属性的图像块、编号为②的Gb颜色属性的图像块均来自于同一图像数据且次序位于①之后等。In some embodiments, please refer to FIG. 16, which shows a schematic diagram of an embodiment in which a first decoder is used for decompression processing in this application. In this embodiment, during the compression encoding, the encoder compresses and encodes multiple image blocks based on the time rule when the image data corresponding to the image block is acquired. Therefore, as shown in Figure 16, the first decoder sequentially obtains the image block with the Gr color attribute numbered ①, the image block with the R color attribute numbered ①, and the image block with the B color attribute numbered ①. , The image block with the Gb color attribute numbered ①, the image block with the Gr color attribute numbered ②, the image block with the R color attribute numbered ②... Here, the corresponding relationship between multiple image blocks is determined according to the rules of the encoder when performing compression encoding, such as the image block with the Gr color attribute number ①, the image block with the R color attribute number ①, and the number ① The B color attribute image block and the Gb color attribute image block numbered ① are all from the same image data, the Gr color attribute image block numbered ②, the R color attribute image block numbered ②, and the number is ② The image block of the B color attribute of, and the image block of the Gb color attribute of number ② are all from the same image data and the order is after ①, etc.
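Under the frame-major rule illustrated above, the correspondence can be recovered simply from the decode order. The following sketch assumes the decoder returns the blocks as a list in decode order and that the per-frame color order Gr, R, B, Gb of FIG. 16 is used; both assumptions are illustrative.

```python
def assign_blocks_to_frames(decoded_blocks, colors=("Gr", "R", "B", "Gb")):
    """Frame-major case: the decoder emits blocks in the same order the
    encoder consumed them (all four color blocks of frame 1, then frame 2,
    ...), so each consecutive group of len(colors) blocks forms one frame."""
    frames = []
    for i in range(0, len(decoded_blocks), len(colors)):
        frames.append(dict(zip(colors, decoded_blocks[i:i + len(colors)])))
    return frames
```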
在另一些实施例中,请参阅图17,其显示为本申请中利用一第一解码器进行解压缩处理的另一实施例示意图。在本实施例中,在压缩编码期间,编码器是基于图像块的颜色属性的规则来对多个图像块进行压缩编码的。因此,诚如图17所示,所述第一解码器依次获取了编号为①的Gr颜色属性的图像块、编号为②的Gr颜色属性的图像块、编号为③的Gr颜色属性的图像块、编号为④的Gr颜色属性的图像块、编号为①的R颜色属性的图像块、编号为②的R颜色属性的图像块……。在此,根据编码器进行压缩编码时的规则,确定多个图像块之间的对应关系,如编号为①的Gr颜色属性的图像块、编号为①的R颜色属性的图像块、编号为①的B颜色属性的图像块、编号为①的Gb颜色属性的图像块均来自于同一图像数据, 编号为②的Gr颜色属性的图像块、编号为②的R颜色属性的图像块、编号为②的B颜色属性的图像块、编号为②的Gb颜色属性的图像块均来自于同一图像数据且次序位于①之后等。In other embodiments, please refer to FIG. 17, which shows a schematic diagram of another embodiment in which a first decoder is used for decompression processing in this application. In this embodiment, during compression encoding, the encoder compresses and encodes multiple image blocks based on the rules of the color attributes of the image blocks. Therefore, as shown in FIG. 17, the first decoder sequentially obtains the image block with the Gr color attribute number ①, the image block with the Gr color attribute number ②, and the image block with the Gr color attribute number ③. , The image block with the Gr color attribute numbered ④, the image block with the R color attribute numbered ①, the image block with the R color attribute numbered ②... Here, the corresponding relationship between multiple image blocks is determined according to the rules of the encoder when performing compression encoding, such as the image block with the Gr color attribute number ①, the image block with the R color attribute number ①, and the number ① The B color attribute image block and the Gb color attribute image block numbered ① are all from the same image data, the Gr color attribute image block numbered ②, the R color attribute image block numbered ②, and the number is ② The image block of the B color attribute of, and the image block of the Gb color attribute of number ② are all from the same image data and the order is after ①, etc.
In another exemplary embodiment, to improve the efficiency of decompression, multiple decoders may process the image blocks separately. In this case, step S220 includes: under synchronous control, using multiple second decoders to decompress the video file according to the color attributes, where each second decoder outputs multiple image blocks having the same color attribute, and each image block corresponds to one frame of image data to be generated. In some embodiments, in order to distinguish image blocks generated from image data of different frames, each second decoder determines, according to the synchronization information in the video file, the correspondence between the decompressed image blocks and a frame of image data to be generated.
在此,所述第二解码器对每一图像块均会生成一同步信息,所述同步信息包括但不限于时间戳、序号等,例如:同一帧的图像具有相同的时间戳,不同帧的图像具有不同的时间戳,以便在解码端将相同时间戳的图像块还原成一幅图像数据。又如,同一帧的图像具有相同的序号,不同帧的图像具有不同的序号,以便在解码端将相同序号的图像块还原成一幅图像数据。其中,由于不同第二解码器的时间机制存在误差,故在此可藉由一同步服务器对多个第二解码器进行时间同步以协调多个第二解码器的时间机制保持一致或将误差控制在可接受的范围内。其中,所述服务器包括但不限于NTP(Network Time Protocol)服务器等。在还有一些实施例中,还可以基于同步协议,藉由多个第二解码器中的其中一个对其他的第二解码器进行同步控制,以协调其他的第二解码器处于同一时间机制中或将误差控制在可接受的范围内。其中,所述同步协议包括但不限于1588协议等。Here, the second decoder generates a synchronization information for each image block. The synchronization information includes but is not limited to a time stamp, sequence number, etc., for example: images of the same frame have the same time stamp, and images of different frames The images have different time stamps, so that the image block with the same time stamp can be restored into one image data at the decoding end. For another example, the images of the same frame have the same serial number, and the images of different frames have different serial numbers, so that the image blocks of the same serial number can be restored into one piece of image data at the decoding end. Among them, due to the error in the time mechanism of different second decoders, a synchronization server can be used to synchronize the time of multiple second decoders to coordinate the time mechanism of multiple second decoders to keep the same or to control the error. Within the acceptable range. Wherein, the server includes, but is not limited to, an NTP (Network Time Protocol) server, etc. In some other embodiments, based on the synchronization protocol, one of the multiple second decoders can be used to synchronize other second decoders to coordinate the other second decoders to be in the same time mechanism. Or control the error within an acceptable range. Wherein, the synchronization protocol includes but is not limited to the 1588 protocol and the like.
请继续参阅图15,在通过上述解压缩方法得到多个图像块后,所述解压缩设备将所得到的多个图像块提供给步骤S230。Please continue to refer to FIG. 15, after obtaining multiple image blocks through the aforementioned decompression method, the decompression device provides the obtained multiple image blocks to step S230.
在步骤S230中,根据所述颜色属性,将相应的各图像块中各像素位置的颜色值映射到图像数据的像素中。In step S230, the color value of each pixel position in the corresponding image block is mapped to the pixel of the image data according to the color attribute.
It should be understood that a pixel is the basic unit of image display. Each pixel has different color attributes depending on the format of the image data it belongs to. For example, for image data in the Bayer format, the color attribute of a pixel is a single color component; for image data in RGB or similar formats, the color attributes of a pixel include the three color components red (R), green (G), and blue (B). Because the human eye is more sensitive to green than to other colors, the number of G components is usually twice the number of each of the other color components; therefore, in some embodiments, the G component is represented by a Gr component or a Gb component. Each pixel has a color value corresponding to its color attribute. For example, when the image data is in the Bayer format, each pixel has only a single color component, R, G, or B, where the G component is represented by Gr and Gb, and the color value of each pixel in the image data is the brightness value of that single color component. When the image data is in the RGB format, the color value of each pixel in the image data includes the brightness value of each color component of that pixel.
In an exemplary embodiment, please refer to FIG. 18, which is a schematic diagram of an embodiment in which the decompression device of the present application maps the color value of each pixel position in each image block to the pixels of the image data. Here, after obtaining the multiple image blocks, the decompression device maps the color value of each pixel in an image block to the corresponding pixel position in the image data, according to the mapping relationship between pixel positions in the image block and pixel positions in the image data corresponding to that block, thereby restoring the multiple image blocks into image data. As shown in FIG. 18, each pixel in the multiple image blocks is mapped into the image data according to its position information, and pixels of different color attributes that share the same position information are arranged according to the color format used during compression. For example, Gr(0,0), R(0,0), B(0,0), and Gb(0,0) are all mapped to position (0,0) of the image data; at the same time, following the format used during compression, in which Gr and R are extracted from odd rows and B and Gb from even rows, Gr(0,0), R(0,0), B(0,0), and Gb(0,0) are arranged according to that color format. Similarly, the other pixels in the image blocks are mapped into the image data in the same manner. Therefore, step S230 further includes: traversing, according to the color format, the pixel positions in the image blocks of each color attribute, and during the traversal, mapping the color value of the corresponding pixel position in each image block to the corresponding pixel position in the image data, so as to generate the image data, where the color value of each pixel position in the image data represents a single color attribute. Here, the decompression device processes the multiple image blocks according to the above method and sends the generated frames of image data to step S240 in sequence.
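As a counterpart to the splitting sketch given for the compression side, the following minimal sketch reassembles the four single-attribute blocks into one Bayer-mosaic frame; the 2×2 offsets again assume the Gr/R, B/Gb row arrangement used in the earlier sketch and would change for a different Bayer layout.

```python
import numpy as np

def merge_bayer_planes(planes: dict) -> np.ndarray:
    """Inverse of the split: write each quarter-resolution image block
    back to its mosaic positions."""
    h, w = planes["Gr"].shape
    bayer = np.empty((2 * h, 2 * w), dtype=planes["Gr"].dtype)
    bayer[0::2, 0::2] = planes["Gr"]
    bayer[0::2, 1::2] = planes["R"]
    bayer[1::2, 0::2] = planes["B"]
    bayer[1::2, 1::2] = planes["Gb"]
    return bayer
```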
应当理解,本实施例中的颜色格式是根据压缩时的颜色格式而确定的,确定颜色格式的方式在本申请第一方面的实施方式中已做说明,故在此不再一一重述。It should be understood that the color format in this embodiment is determined according to the color format during compression, and the method for determining the color format has been explained in the implementation of the first aspect of the present application, so it will not be repeated here.
在步骤S240中,基于所述图像数据中各像素的颜色值,生成用于显示UHD 4K及以上像素的视频图像。In step S240, based on the color value of each pixel in the image data, a video image for displaying UHD 4K and above pixels is generated.
It should be understood that UHD stands for Ultra High Definition. UHD 4K and above refers to video images with a resolution of 4K pixels or higher, such as 8K or 16K. For ease of understanding, this embodiment is described using 8K pixels as an example, but the principle of this solution can likewise be applied to 4K, 16K, or even higher-definition video images.
Here, the image data provided by step S230 is equivalent to image data in the Bayer format. In some implementations, for ease of display, the decompression device applies Debayer or similar processing to the image data provided by step S230 so as to generate an RGB image for display. Accordingly, step S240 further includes: performing interpolation on each pixel position in the obtained image data according to the color format, to obtain a video image in which each pixel contains RGB color attributes. Here, RGB images include image data in the RGB format itself as well as image data in other formats (such as the YUV format) that can be converted into RGB.
It should be understood that Debayer, i.e., demosaicing, is a digital image processing algorithm whose purpose is to reconstruct a full-color image from the incomplete color samples output by an image sensor overlaid with a color filter array (CFA). This method is also called color filter array interpolation (CFA interpolation) or color reconstruction.
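To make the interpolation step concrete, a very simple bilinear demosaic is sketched below; it assumes the same Bayer layout as the earlier sketches and uses SciPy only for the neighborhood averaging. Real pipelines typically use more sophisticated CFA interpolation, so this is an illustration rather than the method itself.

```python
import numpy as np
from scipy.signal import convolve2d

def demosaic_bilinear(bayer: np.ndarray) -> np.ndarray:
    """Naive bilinear Debayer for the assumed layout (Gr R / B Gb per
    2x2 cell): known samples are kept, missing samples are filled with
    the average of the available neighbors."""
    h, w = bayer.shape
    rgb = np.zeros((h, w, 3), dtype=np.float64)
    mask = np.zeros((h, w, 3), dtype=np.float64)
    rgb[0::2, 1::2, 0] = bayer[0::2, 1::2]; mask[0::2, 1::2, 0] = 1  # R
    rgb[0::2, 0::2, 1] = bayer[0::2, 0::2]; mask[0::2, 0::2, 1] = 1  # Gr
    rgb[1::2, 1::2, 1] = bayer[1::2, 1::2]; mask[1::2, 1::2, 1] = 1  # Gb
    rgb[1::2, 0::2, 2] = bayer[1::2, 0::2]; mask[1::2, 0::2, 2] = 1  # B
    kernel = np.ones((3, 3))
    for c in range(3):
        summed = convolve2d(rgb[:, :, c], kernel, mode="same")
        counts = convolve2d(mask[:, :, c], kernel, mode="same")
        filled = summed / np.maximum(counts, 1)
        rgb[:, :, c] = np.where(mask[:, :, c] > 0, rgb[:, :, c], filled)
    return rgb
```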
Please refer to FIG. 19, which is a schematic diagram of an embodiment of the compression device of the present application. As shown in the figure, the compression device includes: a communication interface for communicating with an external decompression device; a memory for storing at least one program and the image data to be compressed; and a processor for coordinating the communication interface and the memory to execute the program, and, during execution, compressing the image data according to the compression method for obtaining a video file described in any of the embodiments of the first aspect of the present application, to obtain a video file.
其中,存储器包含非易失性存储器、存储服务器等。其中,所述非易失性存储器举例为固态硬盘或U盘等。所述存储服务器用于存储所获取的各种用电相关信息和供电相关信息。通信接口包括网络接口、数据线接口等。其中所述网络接口包括但不限于:以太网的网络接口装置、基于移动网络(3G、4G、5G等)的网络接口装置、基于近距离通信(WiFi、蓝牙等)的网络接口装置等。所述数据线接口包括但不限于:USB接口、RS232等。所述通信接口与各传感装置、第三方系统、互联网等数据连接。处理器连接通信接口和存储器,其包含:CPU或集成有CPU的芯片、可编程逻辑器件(FPGA)和多核处理器中的至少一种。处理器还包括内存、寄存器等用于临时存储数据的存储器。Among them, the memory includes non-volatile memory, storage server, etc. Wherein, the non-volatile memory is, for example, a solid state hard disk or a U disk. The storage server is used to store various information related to power consumption and power supply. The communication interface includes network interface, data line interface and so on. The network interface includes, but is not limited to: an Ethernet network interface device, a mobile network (3G, 4G, 5G, etc.)-based network interface device, a short-range communication (WiFi, Bluetooth, etc.)-based network interface device, etc. The data line interface includes but is not limited to: USB interface, RS232, etc. The communication interface is connected to various sensor devices, third-party systems, the Internet and other data. The processor is connected to the communication interface and the memory, and it includes at least one of a CPU or a chip integrated with the CPU, a programmable logic device (FPGA), and a multi-core processor. The processor also includes memory, registers, and other memories used to temporarily store data.
所述通信接口用于与外部的解压缩设备通信连接。在此,所述通信接口举例包括网卡,其通过互联网或搭建的专用网络与解压缩设备通信连接。例如,所述通信接口将压缩设备所压缩处理完成的视频文件发送给解压缩设备。The communication interface is used to communicate with an external decompression device. Here, the communication interface includes, for example, a network card, which communicates with the decompression device via the Internet or a dedicated network built. For example, the communication interface sends the video file compressed and processed by the compression device to the decompression device.
所述存储器用于存储至少一个程序和待压缩的图像数据。在此,所述存储器举例包括设置在压缩设备中的存储卡。The memory is used to store at least one program and image data to be compressed. Here, the memory includes, for example, a memory card provided in a compression device.
所述处理器用于调用所述至少一个程序以协调所述通信接口和存储器执行前述任一示例所提及的压缩方法。The processor is configured to call the at least one program to coordinate the communication interface and the memory to execute the compression method mentioned in any of the foregoing examples.
Please refer to FIG. 20, which is a schematic diagram of an embodiment of the decompression device of the present application. As shown in the figure, the decompression device includes: a communication interface for communicating with an external compression device; a memory for storing at least one program and the video file to be decompressed; and a processor for coordinating the communication interface and the memory to execute the program, and, during execution, decompressing the video file according to the video-file decompression method described in any of the embodiments of the second aspect of the present application, so that the video file can be played.
其中,存储器包含非易失性存储器、存储服务器等。其中,所述非易失性存储器举例为固态硬盘或U盘等。所述存储服务器用于存储所获取的各种用电相关信息和供电相关信息。通信接口包括网络接口、数据线接口等。其中所述网络接口包括但不限于:以太网的网络接口装置、基于移动网络(3G、4G、5G等)的网络接口装置、基于近距离通信(WiFi、蓝牙 等)的网络接口装置等。所述数据线接口包括但不限于:USB接口、RS232等。所述通信接口与各传感装置、第三方系统、互联网等数据连接。处理器连接通信接口和存储器,其包含:CPU或集成有CPU的芯片、可编程逻辑器件(FPGA)和多核处理器中的至少一种。处理器还包括内存、寄存器等用于临时存储数据的存储器。Among them, the memory includes non-volatile memory, storage server, etc. Wherein, the non-volatile memory is, for example, a solid state hard disk or a U disk. The storage server is used to store various information related to power consumption and power supply. The communication interface includes network interface, data line interface and so on. The network interface includes, but is not limited to: an Ethernet network interface device, a mobile network (3G, 4G, 5G, etc.)-based network interface device, a short-range communication (WiFi, Bluetooth, etc.)-based network interface device, etc. The data line interface includes but is not limited to: USB interface, RS232, etc. The communication interface is connected to various sensor devices, third-party systems, the Internet and other data. The processor is connected to the communication interface and the memory, and includes: at least one of a CPU or a chip integrated with the CPU, a programmable logic device (FPGA), and a multi-core processor. The processor also includes memory, registers, and other memories used to temporarily store data.
所述通信接口用于与外部的压缩设备通信连接。在此,所述通信接口举例包括网卡,其通过互联网或搭建的专用网络与压缩设备通信连接。例如,所述通信接口接收压缩设备所压缩处理完成的视频文件,并将视频文件提供给所述处理器。The communication interface is used to communicate with an external compression device. Here, the communication interface includes, for example, a network card, which communicates with the compression device through the Internet or a dedicated network built. For example, the communication interface receives the video file compressed and processed by the compression device, and provides the video file to the processor.
所述存储器用于存储至少一个程序和待解压缩的视频文件。在此,所述存储器举例包括设置在解压缩设备中的存储卡。The memory is used to store at least one program and a video file to be decompressed. Here, the memory includes, for example, a memory card provided in a decompression device.
所述处理器用于调用所述至少一个程序以协调所述通信接口和存储器执行前述任一示例所提及的解压缩方法,从而将所述视频文件进行解压缩处理,以便播放所述视频文件。The processor is configured to call the at least one program to coordinate the communication interface and the memory to execute the decompression method mentioned in any of the foregoing examples, so as to perform decompression processing on the video file to play the video file.
基于上述任一提供的压缩和解压缩方式,本申请还提供一种视频传输系统,请参阅图21,其显示为本申请中的视频传输系统在一实施方式中的结构示意图。所述视频传输系统包括前述任一所述的压缩设备和解压缩设备。Based on any of the above-mentioned compression and decompression methods, this application also provides a video transmission system. Please refer to FIG. 21, which shows a schematic structural diagram of the video transmission system in an embodiment of this application. The video transmission system includes any one of the aforementioned compression equipment and decompression equipment.
在此,所述视频传输系统包含通信接口、存储器和处理器。其中,所述通信接口可以包含网络接口、数据线接口、或程序接口等。在压缩期间,通过所述压缩设备的通信接口获取摄像装置或互联网中的图像数据,处理装置通过调取存储器中所存储的程序来执行压缩操作,以将所获取的图像数据压缩编码成视频文件,并存储在存储装置中。当所述视频传输系统基于用户操显示该视频文件时,处理装置通过调用存储装置中的程序来执行解压缩操作,并将解压缩后所得到的图像数据显示在显示屏中。其中,该视频传输系统中的压缩和解压缩操作均可基于本申请所提供的相应方法来执行,在此不再重述。Here, the video transmission system includes a communication interface, a memory, and a processor. Wherein, the communication interface may include a network interface, a data line interface, or a program interface. During compression, the image data in the camera device or the Internet is acquired through the communication interface of the compression device, and the processing device executes the compression operation by calling the program stored in the memory to compress and encode the acquired image data into a video file , And stored in the storage device. When the video transmission system displays the video file based on a user operation, the processing device performs a decompression operation by calling a program in the storage device, and displays the image data obtained after decompression on the display screen. Wherein, the compression and decompression operations in the video transmission system can be performed based on the corresponding methods provided in this application, and will not be repeated here.
需要说明的是,通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到本申请的部分或全部可借助软件并结合必需的通用硬件平台来实现。所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,还可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请还提供一种计算机可读存储介质,所述存储介质存储有至少一个程序,所述程序在被执行时实现前述的任一所述的压缩方法或解压缩方法。It should be noted that through the description of the above implementation manners, those skilled in the art can clearly understand that part or all of this application can be implemented by means of software in combination with a necessary general hardware platform. If the function is implemented in the form of a software functional unit and sold or used as an independent product, it can also be stored in a computer readable storage medium. Based on this understanding, the present application also provides a computer-readable storage medium that stores at least one program that, when executed, implements any of the aforementioned compression methods or decompression methods.
基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品可包括其上存储有机器可执行指令的一个或多个机器可读介质,这些指令在由诸如计算机、计算机网络或其他电子设备等一个或多个机器执行时可使得该一个或多个机器根据本申请的实施例来执行操作。例如压缩方法或解压缩方法 中的各步骤等。机器可读介质可包括,但不限于,软盘、光盘、CD-ROM(紧致盘-只读存储器)、磁光盘、ROM(只读存储器)、RAM(随机存取存储器)、EPROM(可擦除可编程只读存储器)、EEPROM(电可擦除可编程只读存储器)、磁卡或光卡、闪存、或适于存储机器可执行指令的其他类型的介质/机器可读介质。Based on this understanding, the technical solution of the present application essentially or the part that contributes to the prior art can be embodied in the form of a software product. The computer software product can include one or more machine executable instructions stored thereon. A machine-readable medium, when these instructions are executed by one or more machines, such as a computer, a computer network, or other electronic devices, can cause the one or more machines to perform operations according to the embodiments of the present application. For example, the steps in the compression method or decompression method. Machine-readable media may include, but are not limited to, floppy disks, optical disks, CD-ROM (compact disk-read only memory), magneto-optical disks, ROM (read only memory), RAM (random access memory), EPROM (erasable Except programmable read-only memory), EEPROM (electrically erasable programmable read-only memory), magnetic or optical cards, flash memory, or other types of media/machine-readable media suitable for storing machine-executable instructions.
另外,任何连接都可以适当地称为计算机可读介质。例如,如果指令是使用同轴电缆、光纤光缆、双绞线、数字订户线(DSL)或者诸如红外线、无线电和微波之类的无线技术,从网站、服务器或其它远程源发送的,则所述同轴电缆、光纤光缆、双绞线、DSL或者诸如红外线、无线电和微波之类的无线技术包括在所述介质的定义中。然而,应当理解的是,计算机可读写存储介质和数据存储介质不包括连接、载波、信号或者其它暂时性介质,而是旨在针对于非暂时性、有形的存储介质。如申请中所使用的磁盘和光盘包括压缩光盘(CD)、激光光盘、光盘、数字多功能光盘(DVD)、软盘和蓝光光盘,其中,磁盘通常磁性地复制数据,而光盘则用激光来光学地复制数据。In addition, any connection is properly termed a computer-readable medium. For example, if the instruction is sent from a website, server or other remote source using coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, the Coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of the medium. However, it should be understood that computer readable and writable storage media and data storage media do not include connections, carrier waves, signals, or other transient media, but are intended for non-transitory, tangible storage media. For example, the magnetic disks and optical disks used in the application include compact disks (CD), laser disks, optical disks, digital versatile disks (DVD), floppy disks, and Blu-ray disks. Disks usually copy data magnetically, while optical disks use lasers for optical Copy data locally.
应理解,在本申请的各种实施例中,上述各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。It should be understood that, in the various embodiments of the present application, the size of the sequence number of the above-mentioned processes does not mean the order of execution, and the execution order of each process should be determined by its function and internal logic, rather than corresponding to the embodiments of the present application. The implementation process constitutes any limitation.
在本申请所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。In the several embodiments provided in this application, it should be understood that the disclosed system, device, and method may be implemented in other ways. For example, the device embodiments described above are only illustrative. For example, the division of the units is only a logical function division, and there may be other divisions in actual implementation, for example, multiple units or components can be combined or It can be integrated into another system, or some features can be ignored or not implemented. In addition, the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In summary, the video-file compression method provided by the present application can effectively reduce the bitstream while guaranteeing high-fidelity image quality. In the present application, the total data volume of the multiple image blocks is far lower than the data volume obtained after compression by conventional methods: it is only half that of the YUV422 format and one third that of the YUV444 format, yet the amount of information carried by the compression method of the present application is equivalent to that of the YUV444 format. Taking 8K video as an example, the image block of each color attribute is equivalent to a 4K video in the YUV400 format carrying only luminance information, and the data volume is only half that of the YUV422 format. Because the compression method of the present application effectively reduces the data volume, 8K video can be encoded with existing encoders. Likewise, with the compression method of the present application, 4K video can be encoded with a 2K encoder, and 16K video can be processed with an 8K encoder. In addition, the bitstream rate produced by the compression method of the present application can be controlled at about half that of YUV422, i.e., 24-80 Mbps; given that the current stable uplink peak of 5G is 90 Mbps, real-time 5G transmission of 8K video with high-fidelity image quality can therefore be realized.
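A quick back-of-the-envelope check of these ratios, counting samples per pixel only (bit depth and entropy-coding gains ignored):

```python
# Rough per-frame sample counts for an 8K (7680 x 4320) frame,
# illustrating the data-volume ratios cited above.
W, H = 7680, 4320
bayer_samples  = W * H          # 1 sample per pixel (single color attribute)
yuv422_samples = W * H * 2      # Y plus half-rate Cb/Cr
yuv444_samples = W * H * 3      # full-rate Y, Cb, Cr
print(bayer_samples / yuv422_samples)  # 0.5   -> half of YUV422
print(bayer_samples / yuv444_samples)  # 0.333 -> one third of YUV444
```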
上述实施例仅例示性说明本申请的原理及其功效,而非用于限制本申请。任何熟悉此技术的人士皆可在不违背本申请的精神及范畴下,对上述实施例进行修饰或改变。因此,举凡所属技术领域中具有通常知识者在未脱离本申请所揭示的精神与技术思想下所完成的一切等效修饰或改变,仍应由本申请的权利要求所涵盖。The foregoing embodiments only exemplarily illustrate the principles and effects of the present application, and are not used to limit the present application. Anyone familiar with this technology can modify or change the above-mentioned embodiments without departing from the spirit and scope of this application. Therefore, all equivalent modifications or changes completed by those with ordinary knowledge in the technical field without departing from the spirit and technical ideas disclosed in this application should still be covered by the claims of this application.

Claims (16)

  1. 一种获得视频文件的压缩方法,其特征在于,包括以下步骤:A compression method for obtaining video files, characterized in that it comprises the following steps:
    按照时间顺序获取多幅待压缩处理的图像数据;所述图像数据用于显示UHD 4K及以上像素的视频图像;Acquire multiple pieces of image data to be compressed in chronological order; the image data is used to display UHD 4K and above pixel video images;
    基于所述图像数据中各像素的颜色属性,将每一图像数据中各像素的颜色值分别映射到多个图像块中的各像素位置;Mapping the color value of each pixel in each image data to each pixel position in a plurality of image blocks based on the color attribute of each pixel in the image data;
    将多幅图像数据中的每一个图像数据所对应的且具有同一颜色属性的图像块进行压缩,以得到视频文件。Compress the image blocks corresponding to each image data in the multiple image data and have the same color attribute to obtain a video file.
  2. The compression method for obtaining a video file according to claim 1, wherein, in the case that each pixel in the image data represents a single color attribute, the step of mapping the color value of each pixel in each frame of image data to respective pixel positions in multiple image blocks based on the color attribute of each pixel in the image data comprises:
    按照基于所述图像数据中的Bayer格式而设置的颜色格式,遍历所述图像数据;Traverse the image data according to the color format set based on the Bayer format in the image data;
    其中,在遍历期间,基于所述颜色格式中各像素的颜色属性,从所述图像数据中提取各像素的颜色值,并映射到相应图像块中的像素位置。Wherein, during the traversal, based on the color attribute of each pixel in the color format, the color value of each pixel is extracted from the image data and mapped to the pixel position in the corresponding image block.
  3. The compression method for obtaining a video file according to claim 1, wherein, in the case that each pixel in the image data represents RGB color attributes, the step of mapping the color value of each pixel in each frame of image data to respective pixel positions in multiple image blocks based on the color attribute of each pixel in the image data comprises:
    按照基于所述图像数据中的像素行格式而设置的颜色格式,遍历所述图像数据;Traverse the image data according to the color format set based on the pixel row format in the image data;
    其中,在遍历期间,基于所述颜色格式中各像素的颜色属性,从所述图像数据中提取各像素的颜色主分量或颜色拟合分量,并映射到相应图像块中的像素位置。Wherein, during the traversal, based on the color attribute of each pixel in the color format, the color principal component or color fitting component of each pixel is extracted from the image data, and mapped to the pixel position in the corresponding image block.
  4. 根据权利要求2或3所述的获得视频文件的压缩方法,其特征在于,所述将多幅图像数据中的每一个图像数据所对应的且具有同一颜色属性的图像块进行压缩的步骤包括:The compression method for obtaining a video file according to claim 2 or 3, wherein the step of compressing image blocks corresponding to each image data in the plurality of image data and having the same color attribute comprises:
    按照所述颜色格式中的颜色属性,将多幅图像数据所对应的多个图像块依序输入一第一编码器进行压缩处理。According to the color attributes in the color format, multiple image blocks corresponding to multiple image data are sequentially input to a first encoder for compression processing.
  5. 根据权利要求1-3中任一所述的获得视频文件的压缩方法,其特征在于,所述将多幅图像数据中的每一个图像数据所对应的且具有同一颜色属性的图像块进行压缩的步骤包括:The compression method for obtaining a video file according to any one of claims 1-3, wherein the image block corresponding to each image data in the plurality of image data and having the same color attribute is compressed The steps include:
    在同步控制下,利用多个第二编码器分别将同一颜色属性的多个图像块进行压缩处理。Under synchronous control, multiple second encoders are used to compress multiple image blocks of the same color attribute respectively.
  6. The compression method for obtaining a video file according to claim 5, wherein the video file obtained by compression with the multiple second encoders contains synchronization information set for decompressing the video file and restoring the multiple frames of image data.
  7. 一种视频文件的解压缩方法,其特征在于,包括:A method for decompressing video files, which is characterized in that it comprises:
    获取一视频文件;Obtain a video file;
    decompressing the video file according to the compression scheme used for the video file, to obtain multiple image blocks; wherein, according to the color attribute of each image block, the obtained multiple image blocks correspond to each frame of image data among the multiple frames of image data to be generated;
    根据所述颜色属性,将相应的各图像块中各像素位置的颜色值映射到图像数据的像素中;Mapping the color value of each pixel position in each corresponding image block to the pixel of the image data according to the color attribute;
    基于所述图像数据中各像素的颜色值,生成用于显示UHD 4K及以上像素的视频图像。Based on the color value of each pixel in the image data, a video image for displaying UHD 4K and above pixels is generated.
  8. The method for decompressing a video file according to claim 7, wherein the step of decompressing the video file according to the compression scheme to obtain a plurality of image blocks comprises:
    decompressing, under synchronization control, the video file by using a plurality of second decoders according to the color attributes, wherein each second decoder outputs a plurality of image blocks having the same color attribute, and each image block corresponds to one piece of image data to be generated.
  9. The method for decompressing a video file according to claim 8, wherein each second decoder determines, according to the synchronization information in the video file, the correspondence between the plurality of decompressed image blocks and a piece of image data to be generated.
  10. The method for decompressing a video file according to claim 7, wherein the step of decompressing the video file according to the compression scheme to obtain a plurality of image blocks comprises:
    decompressing the received video file by using a first decoder to obtain a plurality of groups of image blocks divided according to the different color attributes in the color format, wherein each image block in each group corresponds to one piece of image data to be generated.
  11. The method for decompressing a video file according to claim 7, wherein the step of mapping, according to the color attribute, the color value at each pixel position in each corresponding image block to the pixels of the image data comprises:
    traversing, according to the color format, the pixel positions in the image blocks of each color attribute, and during the traversal, mapping the color value at the corresponding pixel position in each image block to the pixel position in the corresponding image data so as to generate the image data, wherein the color value at each pixel position in the image data represents a single color attribute.
  12. The method for decompressing a video file according to claim 11, wherein the step of generating, based on the mapped color values of the pixels in the image data, a video image for display at UHD 4K resolution or above further comprises:
    interpolating, according to the color format, each pixel position in the obtained image data to obtain a video image in which each pixel contains RGB color attributes (a sketch of one such interpolation follows the claims).
  13. A compression device, comprising:
    a communication interface, configured to communicate with an external decompression device;
    a memory, configured to store at least one program and image data to be compressed; and
    a processor, configured to coordinate the communication interface and the memory to execute the program, wherein during execution the image data is compressed according to the compression method for obtaining a video file of any one of claims 1 to 6 so as to obtain a video file.
  14. A decompression device, comprising:
    a communication interface, configured to communicate with an external compression device;
    a memory, configured to store at least one program and a video file to be decompressed; and
    a processor, configured to coordinate the communication interface and the memory to execute the program, wherein during execution the video file is decompressed according to the method for decompressing a video file of any one of claims 7 to 12 so that the video file can be played.
  15. A video transmission system, comprising:
    the compression device according to claim 13; and
    the decompression device according to claim 14.
  16. A computer-readable storage medium storing at least one program, wherein the at least one program, when invoked, performs the compression method for obtaining a video file according to any one of claims 1 to 6, or the at least one program, when invoked, performs the method for decompressing a video file according to any one of claims 7 to 12.
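
The following Python sketch is a minimal illustration of the mapping recited in claim 2 and of the grouping by colour attribute used in claims 4 and 5: an RGGB Bayer mosaic is split into four single-colour image blocks, and the blocks of several frames are then regrouped so that blocks sharing a colour attribute can be compressed together. The RGGB layout, the NumPy representation and all function names are assumptions made for this example only; the claims do not prescribe any particular data layout or API.

    import numpy as np

    def split_bayer_planes(raw):
        """Split a single-channel Bayer mosaic (RGGB assumed) into one
        quarter-resolution image block per colour attribute, mirroring the
        traversal described in claim 2."""
        assert raw.ndim == 2, "expects a 2-D mosaic"
        return {
            "R":  raw[0::2, 0::2],   # red sites: even rows, even columns
            "G1": raw[0::2, 1::2],   # first green site
            "G2": raw[1::2, 0::2],   # second green site
            "B":  raw[1::2, 1::2],   # blue sites: odd rows, odd columns
        }

    def group_blocks_by_colour(frames):
        """Regroup the per-frame blocks so that blocks of the same colour
        attribute form one sequence (claims 4 and 5), ordered by frame."""
        grouped = {"R": [], "G1": [], "G2": [], "B": []}
        for raw in frames:
            for colour, block in split_bayer_planes(raw).items():
                grouped[colour].append(block)
        return grouped

Each per-colour sequence could then be fed in turn to a single shared encoder (claim 4) or handed to several encoders running under synchronization control (claim 5); no specific codec is implied here.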
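Claim 3 addresses image data whose pixels already carry RGB colour attributes; the colour principal component of each pixel is kept according to the colour format and routed into the block of that colour attribute. The sketch below uses a Bayer-style 2x2 sampling purely for illustration, since the claim only requires a colour format derived from the pixel-row layout; the colour fitting component alternative is not shown.

    import numpy as np

    def sample_principal_components(rgb):
        """From an H x W x 3 RGB array, keep at each position only the
        component named by an assumed RGGB-style colour format and place it
        in the image block of that colour attribute (one reading of claim 3)."""
        assert rgb.ndim == 3 and rgb.shape[2] == 3, "expects an RGB image"
        return {
            "R":  rgb[0::2, 0::2, 0],   # red component at red sites
            "G1": rgb[0::2, 1::2, 1],   # green component at first green sites
            "G2": rgb[1::2, 0::2, 1],   # green component at second green sites
            "B":  rgb[1::2, 1::2, 2],   # blue component at blue sites
        }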
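On the decompression side, claims 11 and 12 map the decoded blocks back to pixel positions and then interpolate so that every pixel carries RGB colour attributes. The sketch below shows one possible reading: the blocks are written back into an RGGB mosaic, and a plain bilinear interpolation fills in the two missing colour attributes at each pixel position. The layout, the kernels and the SciPy-based implementation are illustrative assumptions, not the procedure fixed by the claims.

    import numpy as np
    from scipy.ndimage import convolve

    def merge_bayer_planes(planes):
        """Write each block's colour values back to the pixel positions of its
        colour attribute (claim 11); RGGB layout assumed."""
        h, w = planes["R"].shape
        mosaic = np.empty((2 * h, 2 * w), dtype=planes["R"].dtype)
        mosaic[0::2, 0::2] = planes["R"]
        mosaic[0::2, 1::2] = planes["G1"]
        mosaic[1::2, 0::2] = planes["G2"]
        mosaic[1::2, 1::2] = planes["B"]
        return mosaic

    def demosaic_bilinear(mosaic):
        """Bilinear interpolation so that every pixel position carries R, G and
        B values (one reading of the interpolation step in claim 12)."""
        h, w = mosaic.shape
        r_mask = np.zeros((h, w), dtype=bool)
        r_mask[0::2, 0::2] = True
        b_mask = np.zeros((h, w), dtype=bool)
        b_mask[1::2, 1::2] = True
        g_mask = ~(r_mask | b_mask)

        # Classic bilinear weights: 4-neighbour average for green,
        # 2- or 4-neighbour average for red and blue.
        k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float) / 4.0
        k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]], dtype=float) / 4.0

        rgb = np.empty((h, w, 3), dtype=float)
        for c, (mask, kernel) in enumerate([(r_mask, k_rb), (g_mask, k_g), (b_mask, k_rb)]):
            sparse = np.where(mask, mosaic.astype(float), 0.0)
            rgb[..., c] = convolve(sparse, kernel, mode="mirror")
        return rgb

Higher-quality demosaicing kernels would normally be used in practice; bilinear weights are chosen here only to keep the interpolation step explicit.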
PCT/CN2019/095965 2019-07-15 2019-07-15 Compression method for obtaining video file, decompression method, system, and storage medium WO2021007742A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2019/095965 WO2021007742A1 (en) 2019-07-15 2019-07-15 Compression method for obtaining video file, decompression method, system, and storage medium
CN201980005157.4A CN111406404B (en) 2019-07-15 2019-07-15 Compression method, decompression method, system and storage medium for obtaining video file

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/095965 WO2021007742A1 (en) 2019-07-15 2019-07-15 Compression method for obtaining video file, decompression method, system, and storage medium

Publications (1)

Publication Number Publication Date
WO2021007742A1

Family ID=71414906

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/095965 WO2021007742A1 (en) 2019-07-15 2019-07-15 Compression method for obtaining video file, decompression method, system, and storage medium

Country Status (2)

Country Link
CN (1) CN111406404B (en)
WO (1) WO2021007742A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114189689B (en) * 2021-11-25 2024-02-02 广州思德医疗科技有限公司 Image compression processing method, device, electronic equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107846589B (en) * 2016-09-19 2020-07-07 上海臻瞳电子科技有限公司 Image compression method based on local dynamic quantization

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1669054A (en) * 2002-07-12 2005-09-14 艾特维西坎股份公司 Method for compressing and decompressing video image data
CN1921627A (en) * 2006-09-14 2007-02-28 浙江大学 Video data compaction coding method
CN102457722A (en) * 2010-10-26 2012-05-16 珠海全志科技股份有限公司 Processing method and device for Bayer image
CN102075688A (en) * 2010-12-28 2011-05-25 青岛海信网络科技股份有限公司 Wide dynamic processing method for single-frame double-exposure image
CN104284167A (en) * 2013-07-08 2015-01-14 三星显示有限公司 Image capture device, image display device, system and method using same
WO2017089146A1 (en) * 2015-11-24 2017-06-01 Koninklijke Philips N.V. Handling multiple hdr image sources
CN109983772A (en) * 2016-11-30 2019-07-05 高通股份有限公司 For signaling to and constraining the system and method for high dynamic range (HDR) video system with dynamic metadata

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115883839A (en) * 2023-03-09 2023-03-31 湖北芯擎科技有限公司 Image verification method, device and equipment and computer readable storage medium
CN115883839B (en) * 2023-03-09 2023-06-06 湖北芯擎科技有限公司 Image verification method, device, equipment and computer readable storage medium

Also Published As

Publication number Publication date
CN111406404B (en) 2022-07-12
CN111406404A (en) 2020-07-10

Similar Documents

Publication Publication Date Title
JP7221957B2 (en) Game engine application for video encoder rendering
WO2017129090A1 (en) Data transmission method and device for mobile terminal
KR101859064B1 (en) Video synchronous playback method, apparatus, and system
US20190082185A1 (en) Efficient lossless compression of captured raw image information systems and methods
CN102724492B (en) Method and system for transmitting and playing video images
JP2013501476A (en) Transform video data according to human visual feedback metrics
WO2020135357A1 (en) Data compression method and apparatus, and data encoding/decoding method and apparatus
WO2015024362A1 (en) Image processing method and device
WO2022022019A1 (en) Screen projection data processing method and apparatus
US9030569B2 (en) Moving image processing program, moving image processing device, moving image processing method, and image-capturing device provided with moving image processing device
US7593580B2 (en) Video encoding using parallel processors
WO2021007742A1 (en) Compression method for obtaining video file, decompression method, system, and storage medium
CN101990125A (en) Method for dynamically capturing screen of digital television in real time
US9584755B2 (en) Endoscope with high definition video recorder/player
WO2022141515A1 (en) Video encoding method and device and video decoding method and device
US20200269133A1 (en) Game and screen media content streaming architecture
WO2021168827A1 (en) Image transmission method and apparatus
WO2020168501A1 (en) Image encoding method and decoding method, and device and system to which said methods are applicable
TW201138476A (en) Joint scalar embedded graphics coding for color images
CN105376585A (en) Method for improving video transmission speed by frame image combination
CN114640882B (en) Video processing method, video processing device, electronic equipment and computer readable storage medium
CN114513675A (en) Construction method of panoramic video live broadcast system
CN204929035U High-definition audio and video collector
CN201774633U (en) Video encoder
CN115278323A (en) Display device, intelligent device and data processing method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19937669

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19937669

Country of ref document: EP

Kind code of ref document: A1