WO2020168501A1 - Image encoding method and decoding method, and device and system to which said methods are applicable - Google Patents


Info

Publication number
WO2020168501A1
WO2020168501A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
bit
plane
image
serialization
Prior art date
Application number
PCT/CN2019/075642
Other languages
English (en)
Chinese (zh)
Inventor
钮旋
周新生
阮俊瑾
朱怀安
李翔
Original Assignee
上海极清慧视科技有限公司
Priority date
Filing date
Publication date
Application filed by 上海极清慧视科技有限公司 filed Critical 上海极清慧视科技有限公司
Priority to PCT/CN2019/075642
Priority to CN201980005137.7A (CN111316644B)
Publication of WO2020168501A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/172Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a picture, frame or field
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/44Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/625Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using discrete cosine transform [DCT]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/63Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using sub-band based transform, e.g. wavelets
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/90Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N19/91Entropy coding, e.g. variable length coding [VLC] or arithmetic coding

Definitions

  • This application relates to the field of image processing technology, and in particular to an image encoding method, decoding method, and applicable equipment and systems.
  • High-definition images have a wide range of applications, such as urban security, medical imaging, and event broadcasting.
  • Front-end equipment, such as infrared cameras and array cameras, is equipped with lenses and image chips that can capture 4K or even higher resolutions.
  • high-definition video files acquired by front-end equipment pose a challenge to data storage and network transmission due to the huge amount of data.
  • the front-end equipment usually uses an image encoding method to encode and compress the acquired high-definition images, in the hope of reducing the data volume of the original image data.
  • The purpose of this application is to provide an image encoding method, a decoding method, and applicable equipment and systems, which solve the prior-art problem that the large volume of high-definition image data is difficult to store and transmit.
  • The first aspect of this application provides an image encoding method, which includes: dividing the acquired image data into multiple bit-plane matrix data according to binary data bits; serializing the bit-plane matrix data of at least part of the bit planes based on a preset serialization cycle to obtain bit-plane sequence data, where the serialization cycle is a cycle set by a preset m*n matrix according to the serialization of adjacent data; and encoding each of the obtained bit-plane sequence data to generate the encoded image data of the image data.
  • The step of dividing the acquired image data into multiple bit-plane matrix data according to binary data bits includes: performing frequency domain conversion on the acquired image data, and dividing the converted frequency domain image data into multiple bit-plane matrix data according to preset binary data bits.
  • the encoding method further includes a step of dividing the acquired original image into multiple channels of image data according to colors; so as to perform frequency domain conversion processing on each channel of the image data.
  • The encoding method further includes a step of performing block processing on the obtained bit-plane matrix data. Correspondingly, the step of serializing the bit-plane matrix data of at least part of the bit planes based on the preset serialization cycle includes: serializing each matrix data block in the bit-plane matrix data of at least part of the bit planes based on the preset serialization cycle to obtain sequence data blocks; and connecting the sequence data blocks into bit-plane sequence data according to the position of each matrix data block in the bit-plane matrix data.
  • The step of serializing at least part of the obtained bit-plane matrix data based on a preset serialization cycle includes: serializing the bit-plane matrix data of the corresponding bit plane according to a serialization cycle set based on the preset bit plane.
  • Multiple serialization cycles are set based on preset bit planes, and the length of the sequence segment described by the serialization cycle set for a higher bit plane is greater than the length of the sequence segment described by the serialization cycle set for a lower bit plane.
  • The step of serializing at least part of the obtained bit-plane matrix data based on a preset serialization cycle includes: serializing the matrix data of the corresponding bit plane into multiple sequence segments according to the serialization cycle; and connecting the sequence segments of the corresponding bit plane according to the start data and end data of the sequence segment described by the serialization cycle, to obtain the corresponding bit-plane sequence data.
  • the step of encoding each bit plane sequence data includes: performing encoding processing on each bit plane sequence data according to a preset encoding method corresponding to each bit plane.
  • the step of encoding each bit-plane sequence data includes: encoding the corresponding bit-plane sequence data with a coding unit set based on the serialization period.
  • the step of encoding each bit plane sequence data includes: using an entropy coding method to encode each bit plane sequence data.
  • the frequency domain conversion method includes wavelet transform.
  • The serialization cycle is obtained based on the Hilbert curve (Hilbert polyline) algorithm.
  • the image data includes 4K and above image data.
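The bit-plane division named in the first aspect can be sketched in plain Python. This is a minimal illustration under an assumed 8-bit color depth, not the patent's implementation: plane k simply collects bit k of every pixel.

```python
def split_bit_planes(image, bits=8):
    """Decompose a 2-D matrix of pixel values into `bits` bit-plane
    matrices, ordered from the most significant bit down to the least."""
    return [[[(pix >> k) & 1 for pix in row] for row in image]
            for k in range(bits - 1, -1, -1)]

# 181 = 0b10110101: its b7 bit and b0 bit are both 1
planes = split_bit_planes([[181, 0], [255, 1]])
```

Each returned matrix has the same shape as the input and holds only 0s and 1s; the higher planes vary slowly between neighboring pixels, which is what the later serialization step exploits.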
  • A second aspect of the present application provides an image decoding method, including: decoding the acquired encoded image data to extract the bit-plane sequence data describing multiple bit planes of the image data; converting the bit-plane sequence data of the corresponding bit plane into bit-plane matrix data based on a preset serialization cycle, where the serialization cycle is a cycle set by a preset m*n matrix according to the serialization of adjacent data; and merging all the obtained bit-plane matrix data into the image data according to the binary data bits of each bit plane.
  • the decoding method further includes: combining the obtained multiple image data into an original image according to colors.
  • The step of decoding the acquired encoded image data includes: decoding the encoded bit-plane sequence data in the encoded image data with a coding unit set based on the serialization cycle.
  • the step of decoding the acquired encoded image data includes: using an entropy decoding method to decode the acquired encoded image data.
  • The step of converting the bit-plane sequence data of the corresponding bit plane into bit-plane matrix data based on a preset serialization cycle includes: performing block processing on the bit-plane sequence data of the corresponding bit plane to obtain multiple sequence data blocks; converting each sequence data block of the corresponding bit plane into a matrix data block based on the preset serialization cycle; and combining the matrix data blocks into bit-plane matrix data based on the position of each sequence data block in the corresponding bit-plane sequence data.
  • The step of converting the bit-plane sequence data of the corresponding bit plane into bit-plane matrix data based on a preset serialization cycle includes: converting the bit-plane sequence data of the corresponding bit plane into bit-plane matrix data according to a serialization cycle set based on the preset bit plane.
  • Multiple serialization cycles are set based on preset bit planes, and the length of the sequence segment described by the serialization cycle set for a higher bit plane is greater than the length of the sequence segment described by the serialization cycle set for a lower bit plane.
  • The step of converting the bit-plane sequence data of the corresponding bit plane into bit-plane matrix data based on a preset serialization cycle includes: dividing the sequence data of the corresponding bit plane into multiple sequence segments according to the serialization cycle; and, according to the start data and end data of the sequence segment described by the serialization cycle, converting each sequence segment of the corresponding bit plane into matrix form and merging the data in matrix form into bit-plane matrix data.
  • The step of decoding the acquired encoded image data includes: decoding the encoded bit-plane sequence data in the encoded image data according to a decoding method corresponding to a preset bit plane.
  • The step of merging all the obtained bit-plane matrix data into the image data according to the binary data bits of each bit plane includes: performing inverse frequency domain transformation on all the obtained bit-plane matrix data according to the binary data bits of each bit plane to obtain the image data.
  • The inverse frequency domain transformation method includes the inverse wavelet transform.
  • The serialization cycle is configured based on the Hilbert curve (Hilbert polyline) algorithm.
  • the image data obtained after decoding includes 8K image data.
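The merging step of the second aspect is the inverse of the bit-plane division. A plain-Python sketch (8-bit assumption; the helper names are illustrative, not from the patent) showing that weighting each plane by its binary data bit recovers the pixel values exactly:

```python
def split_bit_planes(image, bits=8):
    """Encoder-side division, included only to demonstrate the round trip."""
    return [[[(pix >> k) & 1 for pix in row] for row in image]
            for k in range(bits - 1, -1, -1)]

def merge_bit_planes(planes):
    """Recombine bit-plane matrices (most significant plane first) into
    pixel values, weighting plane b by 2**(bits - 1 - b)."""
    bits = len(planes)
    rows, cols = len(planes[0]), len(planes[0][0])
    return [[sum(planes[b][r][c] << (bits - 1 - b) for b in range(bits))
             for c in range(cols)] for r in range(rows)]

image = [[181, 0], [255, 42]]
restored = merge_bit_planes(split_bit_planes(image))  # equals image
```

Because the division is a pure re-arrangement of bits, the merge is lossless; any loss in the overall scheme can only come from the encoding of the individual bit-plane sequences.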
  • A third aspect of the present application provides an image encoding device, including: an image acquisition interface for acquiring the image data; a storage device for storing at least one program and the image data to be encoded; and a processing device for calling and executing the program to encode the image data according to the image encoding method described in any one of the first aspect.
  • A fourth aspect of the present application provides a camera device, including: a capturing device for acquiring an original image, wherein the original image is composed of multiple channels of image data set based on color; a storage device for storing at least one program and the image data to be encoded; and a processing device for calling and executing the program to encode the image data according to the image encoding method described in any one of the first aspect.
  • A fifth aspect of the present application provides an image decoding device, which includes: a storage device for storing at least one program and the encoded image data to be decoded; and a processing device for calling and executing the program to decode the encoded image data according to the image decoding method described in any one of the second aspect, so as to obtain displayable image data.
  • A sixth aspect of the present application provides an image playback device, including: a storage device for storing at least one program and the encoded image data to be decoded; a processing device for calling and executing the program to decode the encoded image data according to the image decoding method described in any one of the second aspect; and an interface device for transmitting the decoded image data to a connected display screen.
  • A seventh aspect of the present application provides an image transmission system, including: an image acquisition interface for acquiring the image data; a storage device for storing at least one program, image data to be encoded, and encoded image data to be decoded; and a processing device for calling and executing the program to encode the image data according to the image encoding method described in any one of the first aspect, and/or to decode the encoded image data according to the image decoding method described in any one of the second aspect.
  • An eighth aspect of the present application provides an image transmission system, including: the image encoding device described in the third aspect, or the camera device described in the fourth aspect; and the image decoding device described in the fifth aspect, or the image playback device described in the sixth aspect.
  • A ninth aspect of the present application provides a computer storage medium storing at least one program; when called, the at least one program executes the image encoding method according to any one of the first aspect, or the decoding method according to any one of the second aspect.
  • This application uses a serialization cycle to serialize bit-plane matrix data, which helps improve the cohesion of the original image data, especially the cohesion of high-definition images of 4K and above.
  • the use of different serialization cycles based on binary data bits effectively improves the image compression rate.
  • Figure 1 shows a flowchart of an embodiment of the coding method of this application.
  • FIG. 2 shows a schematic diagram of a bit plane of a channel of image data divided by color in this application.
  • Figure 3 shows a schematic diagram of the spectrum distribution of the image data of this application after three-level wavelet transformation.
  • FIG. 4 shows a schematic diagram of the serialization rule of the 4*4 matrix in this application as the serialization cycle.
  • FIG. 5 shows a schematic diagram of the serialization rule of the 8*8 matrix in this application as the serialization cycle.
  • FIG. 6 shows a schematic diagram of serializing bit-plane matrix data (8*16) according to the serialization cycle shown in FIG. 4.
  • FIG. 7 shows a flowchart of an embodiment of the decoding method of this application.
  • FIG. 8 shows a schematic structural diagram of an image transmission system of this application in an embodiment.
  • FIG. 9 shows a schematic diagram of the structure of the image transmission system of this application in another embodiment.
  • "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A, B and C".
  • An exception to this definition will only occur when the combination of elements, functions, steps or operations is inherently mutually exclusive in some way.
  • the present application provides an image encoding method, which aims to achieve the encoding purpose of not only retaining as much image information as possible, but also effectively reducing the amount of image data.
  • The encoding method mainly re-serializes the image data according to the binary data bits in the image data and encodes the serialized data; the more cohesive serialization order concentrates the image information and thereby achieves the above purpose.
  • the image data may come from the original image.
  • The original image includes, but is not limited to: high-definition images (such as 2K images), standard-definition images (such as 720*576 images), ultra-high-definition images (such as 4K or 8K images), and images that have been compressed and decompressed, etc.
  • the original image is a high-definition image from an original video captured by a high-definition camera.
  • the original image is a high-definition image transmitted through a dedicated data channel.
  • the original image is an image that comes from the Internet and needs to be re-encoded.
  • the image data may be an original image, or the image data is obtained by dividing the original image into multiple paths according to colors.
  • the original image is divided into three channels of image data according to RGB, and each channel of image data is encoded using the encoding method.
  • The original image is divided into three channels of image data according to YUV, and each channel of image data is encoded using the encoding method.
  • the encoding method is mainly executed by an image encoding device.
  • the encoding device may be a terminal device or a server.
  • the terminal equipment includes but is not limited to camera equipment, personal electronic terminal equipment, etc.
  • the camera equipment includes a camera device, a storage device, a processing device, and may also include an interface device.
  • the camera device is used to obtain an original image, wherein the original image is composed of multi-channel image data set based on colors.
  • the imaging device includes at least a lens composed of a lens group, a light sensing device, etc., where the light sensing device includes, for example, a CCD device, a CMOS device, and the like.
  • the storage device may include a high-speed random access memory, and may also include a non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.
  • the storage device also includes a memory controller, which can control access to the memory by other components of the device, such as a CPU and a peripheral interface.
  • the storage device is used to store at least one program and image data to be encoded.
  • the program stored in the storage device includes an operating system, a communication module (or instruction set), a graphics module (or instruction set), a text input module (or instruction set), and an application (or instruction set).
  • the program in the storage device further includes an instruction set for performing an encoding operation on the image data in a time sequence based on the technical solution provided by the encoding method.
  • The processing device 13 includes, but is not limited to: a CPU, a GPU, an FPGA (Field-Programmable Gate Array), an ISP (Image Signal Processing chip), or other processing chips dedicated to executing at least one program stored in memory (such as AI-dedicated chips).
  • the processing device calls and executes at least one program stored in the storage device to perform encoding processing on the stored original image or image data in the original image according to the encoding method.
  • A processing device such as an FPGA, which can process matrix data in parallel, is more suitable for efficient, real-time encoding of the acquired image data.
  • the interface device includes, but is not limited to: a data line interface and a network interface; among them, examples of the data line interface include at least one of the following: serial interfaces such as USB, and parallel interfaces such as bus interfaces.
  • Examples of network interfaces include at least one of the following: short-range wireless network interfaces, such as Bluetooth-based network interfaces and WiFi network interfaces; wireless network interfaces of mobile networks based on 3G, 4G, or 5G protocols; and wired network interfaces that include network cards, etc.
  • the camera device is set on a pan-tilt above the road to monitor vehicle violations, such as speeding, red light running, etc.
  • the camera device is configured on a minimally invasive medical device, and the camera device is set at the front end of the hose through an optical fiber or other dedicated data cable.
  • the camera device is configured on a high-speed moving track of a stadium to capture high-definition pictures of competitive games.
  • the electronic terminal equipment for personal use includes desktop computers, notebook computers, tablet computers, and editing equipment dedicated to the production of TV programs, movies, TV series, and the like.
  • the electronic terminal equipment includes a storage device and a processing device.
  • the storage device and the processing device may be the same or similar to the corresponding devices in the aforementioned camera equipment, and will not be described in detail here.
  • the electronic terminal equipment may also include a camera device for capturing original images.
  • the hardware and software modules of the camera device may be the same as or similar to the corresponding device in the aforementioned camera device, and will not be repeated here.
  • the electronic terminal device may further include an image acquisition interface for acquiring image data derived from the original image or the original image.
  • the image acquisition interface may be a network interface, a data line interface, or a program interface.
  • the network interface and the data line interface can be the same or similar to the corresponding devices in the aforementioned camera equipment, and will not be described in detail here.
  • the processing device of the electronic terminal equipment downloads the original image from the Internet.
  • the processing device of the electronic terminal device obtains the original image or image data displayed by the drawing software on the display screen.
  • the drawing software is, for example, PS software, or screenshot software, etc.
  • the processing device of the electronic terminal equipment obtains an original frame of the unedited high-definition video from the storage device.
  • the server includes but is not limited to a single server, a server cluster, a distributed server, a server based on cloud technology, and the like.
  • the server includes a storage device, a processing device, an image acquisition interface, and the like.
  • the storage device and the processing device may be configured in the same physical server device, or be configured in multiple physical server devices according to the division of labor of each physical server device.
  • the image acquisition interface may be a network interface or a data line interface.
  • The storage device, processing device, image acquisition interface, etc. included in the server may be the same as the corresponding devices mentioned in the aforementioned terminal equipment, or be devices specifically configured for the server based on its throughput, processing capacity, and storage requirements.
  • the storage device may also include a solid state drive or the like.
  • the processing device may also include a CPU dedicated to a server or the like.
  • the image acquisition interface in the server acquires image data and encoding instructions from the Internet, and the processing device executes the encoding method described in this application on the acquired image data based on the encoding instructions.
  • FIG. 1 shows a flowchart of the encoding method in an embodiment.
  • the processing device in any of the encoding device examples mentioned above executes at least one program and schedules the hardware in the encoding device to perform the following steps.
  • In step S110, the acquired image data is divided into multiple bit-plane matrix data according to binary data bits.
  • the image data is a data matrix arranged based on pixel positions.
  • The color depth of the data at each pixel position in the data matrix can be expressed by multi-bit binary data. For example, if the color depth of the image data is 256, the data at each pixel position in the data matrix is expressed by an 8-bit binary number. For each channel of image data divided by color, the bits of this multi-bit binary number, taken from the high bit to the low bit, define bit planes ordered from high to low binary data bits.
  • Figure 2 shows an example of a bit plane of image data divided by color, where the red component of each pixel is represented by an 8-bit binary number {b7, b6, ..., b0}. Between adjacent pixels, the b7 bit of the red component changes less frequently than the b0 bit. The bit-plane matrix data composed of the b7 bit of each pixel in the image data can therefore be regarded as the bit-plane matrix data of the higher binary data bit, and the bit-plane matrix data composed of the b0 bit of each pixel can be regarded as the bit-plane matrix data of the lower binary data bit.
  • the color value of each pixel in each channel of image data may also be represented by a value of more than 8 bits or a value of less than 8 bits.
  • The pixel value of each channel of image data is represented by a 10-bit value.
  • In this step, the image data can be divided into multiple bit-plane matrix data according to the position of each data bit in the binary data expressing the color in the image data.
  • the frequency domain conversion method can be used to obtain the bit-plane matrix data based on the frequency spectrum more accurately.
  • the step S110 includes: performing frequency domain conversion on the acquired image data, and dividing the converted frequency domain image data into a plurality of bit plane matrix data according to preset binary data bits.
  • The encoding device performs frequency domain conversion on the image data to obtain the distribution of each frequency spectrum of the image data in the frequency domain, divides the frequency domain image data into multiple pixel data blocks based on the frequency spectrum, and then divides each pixel data block into multiple bit-plane matrix data according to the binary data bits of its constituent pixel data.
  • the frequency domain conversion method includes Fourier transform, cosine transform, etc., for example.
  • the frequency domain transform may adopt Discrete Fourier Transform (DFT).
  • The frequency domain transform may adopt a Discrete Cosine Transform (DCT).
  • the frequency domain transform may also adopt wavelet transform (Wavelet Transform, WT).
  • FIG. 3 shows a schematic diagram of the spectrum distribution of image data after three-level wavelet transform.
  • LLi in the figure represents the low frequency subband
  • LHi represents the horizontal detail component, which belongs to the high frequency subband
  • HLi represents the vertical detail component, which belongs to the high frequency subband
  • HHi represents the diagonal detail component, which also belongs to the high frequency subband.
  • i = 1, 2, 3.
  • After image data is converted to the frequency domain through a three-level wavelet transform, it can be divided into pixel image blocks distributed across ten subbands, as shown in Figure 3: LL3, HL3, HL2, HL1, LH3, LH2, LH1, HH3, HH2, and HH1, where the pixel value in each pixel image block is represented by, for example, an 8-bit or 10-bit binary number.
  • The encoding device performs bit-plane division on the pixel data in each obtained pixel data block. For example, if the color value in the image data is represented by a 10-bit binary number, the encoding device divides the pixel data block located in HL3 into 10 bit-plane matrix data. By analogy, the encoding device can divide the pixel data blocks in at least the HL3, HL2, HL1, LH3, LH2, LH1, HH3, HH2, and HH1 subbands into 10 bit-plane matrices each. The low-frequency subband LL3 can either be divided into bit-plane matrix data in the same way, or be processed separately with encoding methods such as H.264 or JPEG.
  • This step can divide the original image into multiple channels of image data according to color and apply the frequency domain conversion processing mentioned in any of the above examples to each channel of image data, thereby obtaining the bit-plane matrix data of each channel of image data.
  • step S130 may be performed to serialize the bit-plane matrix data.
  • the encoding method further includes: step S120, which is to perform block processing on the obtained bit-plane matrix data.
  • the block processing aims to block the bit-plane matrix data according to the preset number of rows and columns to obtain a plurality of matrix data blocks, wherein each adjacent block has no overlapping data.
  • the block processing method adopts block processing of bit-plane matrix data according to a preset number of rows or columns.
  • the block processing method adopts the block processing of the bit-plane matrix data according to the maximum M*N matrix data amount.
  • the block processing performs block processing on the obtained bit-plane matrix data according to a preset serialization cycle for serialization processing.
  • the serialization cycle includes an m*n matrix used to describe serialization processing rules, m and n are both natural numbers greater than 1, and m and n may be the same or different.
  • m is an integer multiple of n.
  • the block processing method includes: performing block processing on the obtained bit-plane matrix data according to the number of rows/columns of the matrix described by the serialization period.
  • the block processing method uses a*m as the block unit to block the bit-plane matrix data, where a is a coefficient and a ≥ 1.
  • the block processing method uses b*n as the block unit to block the bit-plane matrix data, where b is a coefficient and b ≥ 1.
  • the block processing method uses the (a*m, b*n) matrix as the block unit to block the bit-plane matrix data, where a and b are coefficients, both a and b are integers greater than or equal to 1, and a and b can be equal or unequal.
  • the block processing method also includes grouping the remaining data of less than one block unit in the bit-plane matrix data into a separate matrix data block.
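A sketch of the block processing described above (function name assumed): the bit-plane matrix is cut into non-overlapping blocks of a preset number of rows and columns, and any remainder smaller than one block unit becomes its own smaller matrix data block.

```python
def block_partition(matrix, rows, cols):
    """Partition bit-plane matrix data into non-overlapping blocks of at
    most rows*cols entries; trailing data smaller than one block unit
    becomes its own (smaller) matrix data block."""
    blocks = []
    for r in range(0, len(matrix), rows):
        for c in range(0, len(matrix[0]), cols):
            blocks.append([row[c:c + cols] for row in matrix[r:r + rows]])
    return blocks

# A 5*6 bit-plane matrix split with a 4*4 block unit leaves remainder blocks.
m = [[1] * 6 for _ in range(5)]
blocks = block_partition(m, 4, 4)
assert len(blocks) == 4                       # a 4*4, a 4*2, a 1*4, a 1*2
assert [len(b) for b in blocks] == [4, 4, 1, 1]
assert [len(b[0]) for b in blocks] == [4, 2, 4, 2]
```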
  • step S130 may be performed to perform serialization processing on each matrix data block in each bit-plane matrix data to obtain a sequence data block, and according to each matrix data block The position in the bit-plane matrix data connects each sequence data block into bit-plane sequence data.
  • step S130 based on a preset serialization period, at least part of the bit-plane matrix data of the bit-plane is serialized to obtain bit-plane sequence data.
  • the serialization period is a period set by the preset m*n matrix according to the serialization of adjacent data.
  • the serialization period describes the rule of serialized data obtained by traversing adjacent data in a preset m*n matrix.
  • the rules include: the starting data and ending data determined for serialization in the matrix, and the sequence relationship of each position when the data in the matrix are arranged in order from the position of the starting data to the position of the ending data.
  • the adjacent position relationship means that data adjacent after serialization are also adjacent in the matrix; that is, after the data in the matrix are serialized into a sequence segment, adjacent data in the segment correspond to positions in the matrix that are adjacent in the same row or the same column.
  • the data in the matrix constructs a serialization cycle based on the area enclosed by adjacent data in the same row and the same column, forming a sequence segment with start data and end data.
  • the start data and end data of a sequence segment are located in the same row or column in the corresponding matrix data.
  • Figure 4 shows a schematic diagram of the serialization rule of the 4*4 matrix as the serialization cycle, where the area enclosed by every 2 adjacent data in the 4*4 matrix is serialized according to the arrow.
  • a11 and a12 are adjacent data
  • a12 and a22 are adjacent data
  • the serialization cycle starts with a11 and takes a14 as the end point to serialize all the data in the 4*4 matrix into a row of 16-bit sequence segments: a11, a12, a22, a21, a31, a41, a42, a32, a33, a43, a44, a34, a24, a23, a13, a14.
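The 4*4 serialization rule of Figure 4 can be written out directly as an index order (a sketch; the (row, col) encoding and names are assumptions):

```python
# The 4*4 serialization order of Figure 4, as (row, col) indices,
# with a11 at (0, 0) and a14 at (0, 3).
ORDER_4X4 = [(0, 0), (0, 1), (1, 1), (1, 0), (2, 0), (3, 0), (3, 1), (2, 1),
             (2, 2), (3, 2), (3, 3), (2, 3), (1, 3), (1, 2), (0, 2), (0, 3)]

def serialize_4x4(m):
    """Serialize a 4*4 matrix into one 16-element sequence segment."""
    return [m[r][c] for r, c in ORDER_4X4]

# Label each entry aRC as in Figure 4 and check the sequence listed above.
m = [[f"a{r + 1}{c + 1}" for c in range(4)] for r in range(4)]
assert serialize_4x4(m) == ["a11", "a12", "a22", "a21", "a31", "a41", "a42",
                            "a32", "a33", "a43", "a44", "a34", "a24", "a23",
                            "a13", "a14"]
# Consecutive entries are always adjacent in the same row or column.
assert all(abs(r1 - r2) + abs(c1 - c2) == 1
           for (r1, c1), (r2, c2) in zip(ORDER_4X4, ORDER_4X4[1:]))
```

The final assertion checks the adjacency property the text requires: every two consecutive data in the sequence segment occupy adjacent positions in the matrix.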
  • Figure 5 shows a schematic diagram of the serialization rule of an 8*8 matrix as the serialization period. The area enclosed by every 4 adjacent data in the 8*8 matrix is called Wi.
  • the data at d0 and d1, d1 and d2, and d2 and d3 in the area Wi are all adjacent data; the serialization cycle starts from the data at the d0 position of W0 and takes the d3 position in the area WF as the end point, serializing all data in the 8*8 matrix into a row of 64-bit sequence segments, that is, the sequence segment connected by arrows as shown in Figure 5.
  • the serialization period may be a predetermined fixed period based on the statistics of the frequency spectrum of the sample image.
  • the statistical method may determine the serialization period by pre-setting the cohesion condition of the image information; or may determine the serialization period by statistically calculating the data volume change ratio before and after encoding.
  • the serialization period may be obtained by using the Hilbert polyline algorithm.
  • the fourth-order Hilbert polyline algorithm is used to generate multiple candidate serialization periods, and the encoded image data obtained by serializing each sample image with each candidate serialization period is counted, so as to compare the corresponding encoded image data.
  • the serialization period is selected and configured as the serialization period used when the encoding method is executed.
  • the serialization period can also be obtained by other broken line algorithms set based on the aforementioned broken line principle, which will not be described in detail here.
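The classic Hilbert curve index-to-coordinate conversion is one way to generate such a polyline order; for a 4*4 matrix it reproduces the serialization order of Figure 4. This is a sketch under the assumption that the standard Hilbert construction is intended (the patent does not fix a specific algorithm):

```python
def hilbert_d2xy(n, d):
    """Classic Hilbert curve conversion: map distance d along the curve to
    an (x, y) cell of an n*n grid, where n is a power of two."""
    x = y = 0
    s = 1
    while s < n:
        rx = 1 & (d // 2)
        ry = 1 & (d ^ rx)
        if ry == 0:                      # rotate the current quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        d //= 4
        s *= 2
    return x, y

order = [hilbert_d2xy(4, d) for d in range(16)]
# Reading (x, y) as (column, row), the curve follows Figure 4:
# a11 (0,0) -> a12 (1,0) -> a22 (1,1) -> a21 (0,1) -> a31 (0,2) ...
assert order[:5] == [(0, 0), (1, 0), (1, 1), (0, 1), (0, 2)]
assert order[-1] == (3, 0)               # ends at a14, the top-right corner
```

Larger periods, such as the 8*8 matrix of Figure 5, follow from the same routine with `n=8`.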
  • the serialization period is set based on a bit plane.
  • a serialization period that can be universally applied to each bit plane can be preset.
  • each bit plane uses the same serialization cycle for serialization.
  • different serialization periods are set corresponding to different bit planes.
  • each bit plane corresponds to a serialization period.
  • the bit plane is divided into multiple groups in the order of binary bits, and each group of bit planes corresponds to a serialization period.
  • the bit planes belonging to each subband use the same one or more serialization cycles.
  • the serialization period is set according to the frequency spectrum segment where the subbands are located.
  • each bit plane in each subband corresponds to a serialization period separately.
  • each subband separately divides the bit plane into multiple groups in the order of binary bits, and each group of bit planes individually corresponds to a serialization period.
  • each sub-band is divided into multiple groups according to the spectrum section where the sub-bands are located.
  • Each bit plane in each sub-band can have a one-to-one correspondence with multiple serialization periods; or the bit planes of the sub-bands in each group continue to be grouped, and the bit-plane groups obtained after multiple groupings have a one-to-one correspondence with multiple serialization periods.
  • the step S130 includes: serializing the bit plane matrix data of the corresponding bit plane according to the serialization period set based on the preset bit plane.
  • an 8-bit binary number is used to represent a color value (b7, b6,..., b1, b0) of a pixel position in the image data as an example.
  • the bit-plane matrix data obtained by the b7-b6 bits can be divided into a group, the bit-plane matrix data obtained by the b5-b2 bits can be divided into a group, and the bit-plane matrix data obtained by the b1-b0 bits can be divided into a group.
  • the encoding device can at least perform: the step of serializing the bit-plane matrix data of the b7-b6 bits according to the preset serialization period corresponding to the b7-b6 bits, and the step of serializing the bit-plane matrix data of the b5-b2 bits according to the preset serialization period corresponding to the b5-b2 bits.
  • Take the bit-plane matrix data obtained based on each pixel data block as an example.
  • the pixel data blocks are distributed in the LL3, HL3, LH3, HH3, HL2, LH2, HH2, LH1, HL1, and HH1 spectrum subbands; among them, the binary data in each pixel data block is divided into 10 bit planes from high to low bits, and the bit-plane matrix data corresponding to the 10 bit planes are distributed in each pixel data block.
  • the encoding device can at least perform, on the bit-plane matrix data of each subband: the step of serializing the bit-plane matrix data of the b9-b6 bits according to the preset serialization period corresponding to the b9-b6 bits, and the step of serializing the bit-plane matrix data of the b5-b0 bits according to the preset serialization period corresponding to the b5-b0 bits.
  • the coding method can be used in combination with the existing serialization processing method.
  • some bit-plane matrix data divided into low frequencies can be serialized according to the existing serialization method, for example,
  • the bit plane matrix data of the pixel data block in the LL3 subband can be processed by Zigzag serialization.
  • the bit-plane matrix data corresponding to the bit-plane located in the lower bits can be processed in a Zigzag serialization manner.
  • the encoding method can also be used in combination with existing encoding methods.
  • bit-plane matrix data divided into low frequencies can be encoded according to the existing encoding methods. For example, for the pixel data block in the LL3 subband, step S130 may not be executed and step S140 may be executed directly. For another example, the bit-plane matrix data corresponding to the bit-plane located in the lower bits (such as bits b1-b0) may not perform step S130 but directly perform step S140.
  • the serialization period of the high-order interval in the binary data bits can be obtained by serializing a matrix with a larger amount of data, and the serialization period of the low-order interval can be obtained by serializing a matrix with a smaller amount of data. In other words, the length of the sequence segment described by the serialization period set for a higher bit position is greater than the length of the sequence segment described by the serialization period set for a lower bit position.
  • the encoding device executes step S140.
  • the pixel data in the pixel data block of LL3 can be serialized using the existing serialization processing method; or the serialization period T1 can be used for the serialization processing; or the encoding processing can be performed directly.
  • the length of the sequence segment described by the serialization period set for a higher bit plane is greater than the length of the sequence segment described by the serialization period set for a lower bit plane. Take the serialization periods shown in Figure 4 and Figure 5, and the bit planes determined by binary bits from low to high (bit plane 0, bit plane 1, ..., bit plane 9) as an example: the serialization period containing the 8*8 matrix is configured to serialize the bit-plane matrix data of bit planes 6-9, and the serialization period containing the 4*4 matrix is configured to serialize the bit-plane matrix data of at least bit planes 2-5.
  • the bit-plane matrix data of bit planes 0-1 in the above example can also be directly encoded, or serialized using the serialization period containing the 4*4 matrix.
  • existing serialization processing methods include but are not limited to Zigzag polyline processing methods.
  • the step S130 includes: directly serializing the obtained bit-plane matrix data according to a preset serialization period to obtain bit-plane sequence data .
  • this step may perform serialization processing on the bit plane matrix data in the higher bit plane of the acquired image data according to the preset serialization cycle. For example, according to a preset serialization cycle, the bit plane matrix data of each bit plane except bit planes 0 and 1 is serialized.
  • serialization processing is performed on each bit plane matrix data in all bit planes corresponding to the image data according to a preset serialization cycle. For example, according to a preset serialization cycle, the bit-plane matrix data of all bit-planes including 10 bit-planes are serialized.
  • the serialization processing method includes: serializing the corresponding bit-plane matrix data into a plurality of sequence segments according to the serialization period; according to the start data and end data of the sequence segment described in the serialization period , Connect the sequence segments of the corresponding bit plane to obtain the corresponding bit plane sequence data.
  • Figure 6 shows a schematic diagram of serializing bit-plane matrix data (8*16) according to the serialization period shown in Figure 4, where the number of rows and columns of the matrix described by the serialization period is used as the traversal window. The traversal window is moved across the bit-plane matrix data, and each time it moves, the matrix data within the traversal window is serialized according to the serialization rule described by the serialization period, obtaining the corresponding sequence segment, including the sequence segment described by the arrow (x i-1,0, x i-1,1, x i,1, x i,0, ...).
  • x j,k in Figure 6 denotes the data at the (j, k)th position in the bit-plane matrix data.
  • the direction in which the traversal window is moved is related to the row/column in which the start data and the end data of the serialization period are located. Therefore, the above-mentioned method of moving the traversal window along the row direction of the bit-plane matrix data is only an example, not a limitation of this application.
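The traversal-window serialization of Figure 6 can be sketched as follows (an illustration, not the definitive implementation: the window order is the 4*4 period of Figure 4, and the window is assumed to move with a step equal to its own size, row by row):

```python
# The 4*4 serialization order of Figure 4, as (row, col) indices.
ORDER_4X4 = [(0, 0), (0, 1), (1, 1), (1, 0), (2, 0), (3, 0), (3, 1), (2, 1),
             (2, 2), (3, 2), (3, 3), (2, 3), (1, 3), (1, 2), (0, 2), (0, 3)]

def serialize_plane(plane, order=ORDER_4X4, size=4):
    """Serialize bit-plane matrix data into bit-plane sequence data by
    sliding the traversal window with a step equal to its size and
    emitting one sequence segment per window position."""
    seq = []
    for r0 in range(0, len(plane), size):
        for c0 in range(0, len(plane[0]), size):
            seq.extend(plane[r0 + r][c0 + c] for r, c in order)
    return seq

# An 8*16 bit-plane matrix as in Figure 6 yields 8 segments of 16 bits.
plane = [[(r * 16 + c) % 2 for c in range(16)] for r in range(8)]
seq = serialize_plane(plane)
assert len(seq) == 8 * 16
assert sum(seq) == 64        # half the bits of the alternating pattern
```

The same routine, applied per matrix data block, also covers the blocked variant described next.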
  • when processing the bit-plane matrix data after the block processing of step S120, the step S130 includes: based on a preset serialization period, serializing each matrix data block of at least part of the bit planes' bit-plane matrix data to obtain sequence data blocks; and connecting the sequence data blocks into bit-plane sequence data according to the position of each matrix data block in the bit-plane matrix data.
  • the number of rows and columns of the matrix described by the serialization period is used as the traversal window, and the number of rows (or columns) of the matrix is used as the step size; each matrix data block is serialized, and the obtained serialized data is called a sequence data block, which will not be described in detail here. Then, according to the preset position order of each matrix data block in the bit-plane matrix data, and the start data and end data of each sequence data block, the sequence data blocks are connected end to end to obtain the bit-plane sequence data.
  • step S140 encoding processing is performed on each of the obtained bit-plane sequence data, and encoded image data of the image data is generated.
  • the encoding process aims to convert each of the bit-plane sequence data into a code stream described by encoding symbols at the cost of a minimum amount of information loss.
  • the encoding processing method is an example of a lossless encoding processing method.
  • the lossless encoding processing method is an entropy encoding method.
  • the step S140 includes using an entropy encoding method to encode each bit plane sequence data.
  • the entropy coding method includes but is not limited to: Shannon coding and Huffman coding.
  • the lossless encoding processing method is an improved encoding method based on entropy coding, for example, an entropy encoding method based on run length is adopted.
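A run-length stage of the kind mentioned above can be sketched as follows (a minimal, assumed scheme: the patent names run-length-based entropy coding without fixing a format). Serialized bit planes tend to contain long runs of identical bits, which run lengths expose for subsequent entropy coding.

```python
def rle_encode(bits):
    """Encode a bit sequence as (bit, run_length) pairs; bit-plane sequence
    data tends to contain long runs, which entropy coding then compresses."""
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1             # extend the current run
        else:
            runs.append([b, 1])          # start a new run
    return [tuple(r) for r in runs]

assert rle_encode([0, 0, 0, 0, 1, 1, 0]) == [(0, 4), (1, 2), (0, 1)]
```

In a full codec the run lengths themselves would then be entropy coded, e.g. with Huffman codes as listed among the entropy coding methods above.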
  • the encoding process is also based on the adjacent relationship between the divided bit planes, the color type expressed by the image data, the additional information of the original image, etc., to add header information to the code stream.
  • the encoded image data obtained according to the encoding process also includes a file of code streams and header information obtained by encoding multiple channels of image data separately.
  • the step S140 includes: encoding the bit-plane sequence data according to the coding method corresponding to the preset bit-plane.
  • the bit planes divided according to any method in step S130 are correspondingly set with different encoding methods, and the bit plane sequence data is encoded.
  • the bit plane sequence data corresponding to the b9-b2th bit plane is coded according to entropy coding, and other data to be coded is coded according to the byte coding method.
  • the step S140 further includes encoding the corresponding bit-plane sequence data with an encoding unit set based on the serialization period.
  • bit-plane matrix data divided according to the serialization cycle has better cohesion, so according to the code word or byte, the bit-plane sequence data in each serialization cycle is encoded.
  • the sequence segment corresponding to each serialization period in the bit-plane sequence data is encoded in groups of 4 bits of binary, so that 4-bit encoding symbols represent a sequence segment to describe the coding symbols of the bit-plane sequence data.
  • for example, using the byte coding method corresponding to the serialization period of the 8*8 matrix, the upper 4-bit bit-plane sequence data in the pixel data blocks HL3, LH3, HH3, HL2, LH2, HH2, and HH1 is encoded, so that coding symbols composed of 8 bits each represent a sequence segment to describe each bit-plane sequence data; and using the byte coding method corresponding to the serialization period of the 4*4 matrix, the lower 6-bit bit-plane sequence data in each pixel data block is encoded, so that coding symbols composed of 4 bits each represent a sequence segment to describe each bit-plane sequence data.
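One possible reading of the byte-coding units above (an assumption, since the text is terse on the symbol format): the bits of each sequence segment are grouped into fixed-width coding symbols whose width is set by the serialization period.

```python
def pack_symbols(segment, width=4):
    """Group the bits of one sequence segment into `width`-bit coding
    symbols (here, a 16-bit segment becomes four 4-bit symbols)."""
    symbols = []
    for i in range(0, len(segment), width):
        value = 0
        for b in segment[i:i + width]:
            value = (value << 1) | b     # most significant bit first
        symbols.append(value)
    return symbols

# A 16-bit sequence segment packed into four 4-bit coding symbols.
assert pack_symbols([1, 0, 1, 1] + [0] * 12) == [0b1011, 0, 0, 0]
```

With `width=8` the same routine yields the 8-bit symbols used for the 8*8 serialization period.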
  • Entropy-based coding can be used to encode each bit plane sequence data and pixel data; or, an existing encoding method can be used for encoding.
  • any of the above examples are not mutually exclusive, but can combine multiple examples based on coding requirements to obtain encoded image data.
  • this application uses a serialization period to serialize bit-plane matrix data, which is beneficial to improving the cohesion of the original image, especially the cohesion of 4K and above HD images.
  • different serialization cycles are adopted based on the frequency spectrum, which effectively improves the compression rate of the high frequency spectrum segment.
  • the encoded image data encoded by the technical ideas provided by the above encoding method can be transmitted between devices or within devices through transmission media such as data lines and the Internet.
  • the hardware constituting the encoding device encodes the captured original image into corresponding encoded image data under the instruction of the software, and saves it in the storage device.
  • each hardware constituting the decoding device decodes the encoded image data under the instruction scheduling of the software and plays it (or called display).
  • a camera device that can perform the encoding method encodes the captured original image into corresponding encoded image data (such as an encoded file or code stream), and transmits the encoded image data to a server using the Internet or a dedicated network cable,
  • the decoding device provided in the server decodes the encoded image data and plays it (or called display).
  • This application also provides an image decoding method for decoding encoded image data encoded based on the foregoing encoding method.
  • the decoding method is mainly executed by a decoding device.
  • the decoding device may be a terminal device or a server.
  • the terminal equipment includes, but is not limited to, playback equipment, personal electronic terminal equipment, and the like.
  • the playback device includes a storage device, a processing device, and may also include an interface device.
  • the storage device may include a high-speed random access memory, and may also include a non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.
  • the storage device also includes a memory controller, which can control access to the memory by other components of the device, such as a CPU and a peripheral interface.
  • the storage device is used to store at least one program and image data to be decoded.
  • the program stored in the storage device includes an operating system, a communication module (or instruction set), a graphics module (or instruction set), a text input module (or instruction set), and an application (or instruction set).
  • the program in the storage device also includes an instruction set for performing a decoding operation on the image data in time sequence based on the technical solution provided by the decoding method.
  • the processing device includes, but is not limited to: a CPU, a GPU, an FPGA (Field-Programmable Gate Array), an ISP (Image Signal Processing chip), or other processing chips dedicated to executing at least the programs stored in the storage device (such as AI-dedicated chips), etc.
  • the processing device calls and executes at least one program stored in the storage device to decode the stored original image or image data in the original image according to the decoding method.
  • the interface device includes, but is not limited to: a data line interface and a network interface; examples of the data line interface include: display interfaces such as VGA interface and HDMI interface, serial interfaces such as USB, and parallel interfaces such as data bus.
  • Examples of network interfaces include at least one of the following: short-range wireless network interfaces such as Bluetooth-based network interfaces and WiFi network interfaces, wireless network interfaces of mobile networks based on 3G, 4G, or 5G protocols, wired network interfaces including network cards, etc.
  • the playback device also includes a display device for displaying the decoded image data, wherein the image data is one of multiple channels of image data set based on the color of an original image.
  • the display device at least includes a display screen, a display screen controller, etc., where the display screen includes, for example, a liquid crystal display screen, a curved display screen, a touch screen, and the like.
  • the display screen controller includes, for example, a processor dedicated to the display device, a processor integrated with the processor in the processing device, and the like.
  • the playback device is set up in a traffic command center for decoding and displaying the encoded image data transmitted from the camera device.
  • the playback device is configured on a computer device that is communicatively connected with a minimally invasive medical device through an optical fiber or other dedicated data line, and decodes and plays the encoded image data produced by the current minimally invasive medical device.
  • the playback device is configured in the computer room of the TV forwarding center, and is used to decode and play the encoded image data transmitted by the camera set on the stadium for video editing.
  • the playback device is a set-top box, which is used to decode the code stream in the corresponding TV channel in the TV signal and output it to the TV for display.
  • the electronic terminal equipment for personal use includes desktop computers, notebook computers, tablet computers, and editing equipment dedicated to the production of TV programs, movies, TV series, and the like.
  • the electronic terminal equipment includes a storage device and a processing device. Wherein, the storage device and the processing device may be the same or similar to the corresponding devices in the aforementioned camera equipment, and will not be described in detail here.
  • the electronic terminal equipment may also include a display device for displaying the decoded image data.
  • the hardware and software modules of the electronic terminal may be the same as or similar to the corresponding devices in the aforementioned playback device, and will not be repeated here.
  • the electronic terminal device may further include an image acquisition interface for acquiring encoded image data derived from the encoding.
  • the image acquisition interface may be a network interface, a data line interface, or a program interface.
  • the network interface and the data line interface can be the same or similar to the corresponding devices in the aforementioned playback device, and will not be described in detail here.
  • the processing device of the electronic terminal device downloads encoded image data from the Internet.
  • the processing device of the electronic terminal device obtains the edited file from the storage device.
  • the server includes but is not limited to a single server, a server cluster, a distributed server, a server based on cloud technology, and the like.
  • the server includes a storage device, a processing device, an image acquisition interface, and the like.
  • the storage device and the processing device may be configured in the same physical server device, or be configured in multiple physical server devices according to the division of labor of each physical server device.
  • the image acquisition interface may be a network interface or a data line interface.
  • the storage device, processing device, image acquisition interface, etc. included in the server may be the same as the corresponding devices mentioned in the aforementioned terminal equipment; or specifically set for the server based on the server's throughput, processing capacity, and storage requirements The corresponding devices.
  • the storage device may also include a solid state drive or the like.
  • the processing device may also include a CPU dedicated to a server or the like.
  • the image acquisition interface in the server acquires coded image data and playback instructions from the Internet, and the processing device executes the decoding method described in this application on the acquired encoded image data based on the playback instructions.
  • step S210 the acquired encoded image data is decoded to extract bit-plane sequence data describing multiple bit-planes of the image data.
  • the encoded image data includes the aforementioned encoded file and code stream.
  • the acquired encoded image data is a file in a complete format that is downloaded and stored locally.
  • the acquired encoded image data is a video stream transmitted in real time using a streaming protocol.
  • the encoded image data may contain the following header information: used to describe the encoding method of the encoded image data, the starting data position of the image data obtained by dividing the original image based on color, and the bit plane sequence data in each image data The starting position and so on.
  • the method of performing decoding processing on the acquired encoded image data in this step is the inverse processing of the encoding processing method in the foregoing step S140.
  • the encoding processing manner is a lossless encoding processing manner.
  • the lossless encoding processing method is an entropy encoding method.
  • the step S210 includes adopting an entropy decoding method to decode the encoded image data.
  • the entropy decoding method includes, but is not limited to: Shannon decoding and Huffman decoding.
  • the lossless encoding processing method is an improved encoding method based on entropy encoding, for example, an entropy encoding method based on run length is adopted.
  • the step S210 includes adopting an entropy decoding method based on run length to The encoded image data is decoded.
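The run-length stage of the decoding side can be sketched as the inverse of a simple (bit, run_length) scheme (an assumption: the patent names run-length-based entropy coding without fixing a format):

```python
def rle_decode(runs):
    """Invert a simple run-length scheme: expand (bit, run_length) pairs
    back into the original bit sequence."""
    bits = []
    for b, n in runs:
        bits.extend([b] * n)             # replay each run
    return bits

assert rle_decode([(0, 4), (1, 2), (0, 1)]) == [0, 0, 0, 0, 1, 1, 0]
```

In a full codec this stage would run after the entropy decoding step recovers the run lengths from the code stream.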
  • the encoding processing method includes: encoding each bit plane sequence data according to the encoding method corresponding to the preset frequency spectrum segment.
  • different decoding modes are set correspondingly based on the bit plane, and the encoded image file is decoded.
  • the bit-plane sequence data corresponding to the b9-b2 bit planes is decoded according to the decoding scheme corresponding to entropy coding, and the other data to be decoded is decoded according to the decoding method corresponding to byte encoding.
  • the encoding processing method further includes encoding the corresponding bit-plane sequence data with an encoding unit set based on the serialization period.
  • the step S210 includes: setting a decoding unit based on the serialization period to decode the encoded bit plane sequence data in the encoded image data.
  • each encoded bit-plane sequence data is decoded according to words or bytes.
  • the sequence segment corresponding to each serialization period in the encoded bit-plane sequence data is represented by a group of 4-bit encoding symbols; the binary data of the 4-bit group corresponding to each encoding symbol is then decoded, to obtain the bit-plane sequence data represented by four groups of 4-bit binary data and described by multiple sequence segments.
  • different decoding units are set. For example, using the word decoding method corresponding to the serialization period T1 containing an 8*8 matrix, the encoded bit-plane sequence data of the upper 6 bit planes is decoded to obtain sequence segments represented by 4 groups of 8-bit binary data, and the above-mentioned bit-plane sequence data described by multiple sequence segments; and using the word decoding method corresponding to the serialization period T2 containing a 4*4 matrix, the encoded bit-plane sequence data of the lower 4 bit planes is decoded to obtain sequence segments represented by 4 groups of 4-bit binary data, and the above-mentioned bit-plane sequence data described by multiple sequence segments.
  • for the bit-plane sequence data corresponding to low bit planes such as the b1 and b0 bit planes, and for each pixel data of the pixel data blocks corresponding to low-frequency spectrum subbands such as the LL3 subband, the encoded bit-plane sequence data and pixel data can be decoded by a decoding scheme based on entropy coding; or, the existing decoding method corresponding to the encoding method can be used for the decoding processing.
  • any of the above examples are not mutually exclusive; rather, decoding can be performed based on a set of decoding rules corresponding to the set of encoding rules that uses a serialization period, to obtain the multiple bit-plane sequence data.
  • step S220 converts the bit-plane sequence data of the multiple bit planes into matrix form.
  • the corresponding bit plane serialized data is selected to execute the following step S220.
  • step S220: based on a preset serialization period, the bit-plane sequence data of the corresponding bit plane is converted into bit-plane matrix data; wherein the serialization period is a period set by serializing the preset m*n matrix according to adjacent data.
  • the serialization period is consistent with the serialization period mentioned in the foregoing encoding method, and will not be described in detail here.
  • the serialization period is set based on a bit plane.
  • each plane sequence data is converted into bit-plane matrix data according to the serialization cycle.
  • the serialization period containing the 8*8 matrix converts the bit-plane sequence data of bit planes 6-9 into the corresponding bit-plane matrix data; the serialization period containing the 4*4 matrix converts the bit-plane sequence data of bit planes 2-5 into the corresponding bit-plane matrix data.
  • bit-plane 0-1 bit-plane matrix data in the above example can also be converted directly according to the matrix information provided in the packet header to obtain the bit-plane matrix after decoding.
  • existing conversion processing methods include, but are not limited to, Zigzag scan processing.
  • the step S220 includes: directly performing the conversion processing on the obtained bit-plane sequence data according to the preset serialization period to obtain the bit-plane matrix data.
  • in this step, the acquired bit-plane sequence data of the higher bit planes can be converted according to the preset serialization cycle; alternatively, the bit-plane sequence data of all bit planes can be converted according to the preset serialization cycle.
  • the conversion processing method includes: dividing the corresponding bit-plane sequence data into a plurality of sequence segments according to the serialization period; converting each sequence segment of the corresponding bit plane into matrix form according to the start data and end data of the sequence segment described by the serialization period; and merging the data in each matrix form into bit-plane matrix data according to the position of each sequence segment in the plane sequence data.
  • the bit-plane sequence data is divided by the length of the sequence segment described in the serialization cycle, and each divided segment is converted according to the matrix form described in the serialization cycle to obtain data in 4*4 matrix form.
  • the data in each matrix form is then merged into bit-plane matrix data.
  • x_{j,k} in FIG. 4 is the data at the (j, k)-th position in the bit-plane matrix data.
  • the positions of adjacent sequence segments in the bit-plane matrix data are related to the row/column of the matrix in which the start data and end data of the serialization period are located.
  • the merging direction described here is only an example, not a limitation of the application.
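The segment-to-matrix conversion and position-based merging described above can be sketched as follows. This is a minimal illustration under stated assumptions: a 4*4 serialization period whose scan keeps adjacent data adjacent (a boustrophedon or "snake" scan), and segments tiled left-to-right then top-to-bottom — the actual scan order and tiling are fixed by the encoding rule set, not by this sketch.

```python
# Minimal sketch of converting bit-plane sequence data back into
# bit-plane matrix data (step S220). Assumptions: the serialization
# period is a 4*4 "snake" scan (even rows left-to-right, odd rows
# right-to-left, so consecutive sequence data stay adjacent in the
# matrix), and sequence segments tile the plane block-row by block-row.

def snake_segment_to_matrix(segment):
    """Reverse one 16-element sequence segment into a 4x4 matrix."""
    assert len(segment) == 16
    matrix = []
    for r in range(4):
        row = list(segment[r * 4:(r + 1) * 4])
        matrix.append(row if r % 2 == 0 else row[::-1])
    return matrix

def sequence_to_bitplane_matrix(seq, height, width):
    """Merge every 4x4 segment into a height-by-width bit-plane matrix
    according to its position in the plane sequence data."""
    assert height % 4 == 0 and width % 4 == 0 and len(seq) == height * width
    plane = [[0] * width for _ in range(height)]
    seg_idx = 0
    for by in range(0, height, 4):          # block rows, top to bottom
        for bx in range(0, width, 4):       # block columns, left to right
            block = snake_segment_to_matrix(seq[seg_idx * 16:(seg_idx + 1) * 16])
            for r in range(4):
                for c in range(4):
                    plane[by + r][bx + c] = block[r][c]
            seg_idx += 1
    return plane
```

Serializing with the matching snake scan and then running `sequence_to_bitplane_matrix` round-trips the data exactly, which is the property the serialization period must guarantee for lossless decoding.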
  • the decoded image data is obtained by performing step S230 on the obtained bit-plane matrix data.
  • the step S220 includes: dividing the bit-plane sequence data of the corresponding bit plane into blocks to obtain multiple sequence data blocks; converting each sequence data block of the corresponding bit plane into a matrix data block based on a preset serialization cycle; and merging the matrix data blocks into bit-plane matrix data based on the position of each sequence data block in the corresponding bit-plane sequence data.
  • the extracted bit-plane sequence data is composed of a plurality of sequence data blocks, and the decoding device converts each sequence data block into a matrix data block according to the conversion method provided in the above example; the conversion method is the same as or similar to that in any of the aforementioned examples and will not be described in detail here.
  • according to the sequence and position of the sequence data blocks in the same bit-plane sequence data, the corresponding matrix data blocks are merged into bit-plane matrix data, and step S230 is executed.
  • in step S230, all the obtained bit-plane matrix data are merged into the image data according to the binary data bits of each bit plane.
  • each bit-plane matrix data is filled into its corresponding binary data bit to obtain the image data.
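A minimal sketch of the merging in step S230 follows, assuming bit plane 0 holds the least significant binary data bit of each pixel; the bit-ordering convention is an assumption, since the source only states that each plane is filled into its corresponding bit position.

```python
# Minimal sketch of step S230: merging all bit-plane matrix data into
# image data. Assumption: planes[0] carries the least significant
# binary data bit (the source fixes only that each plane is filled
# into its corresponding bit position, not the ordering).

def merge_bitplanes(planes):
    """planes[b][r][c] is the bit of plane b at position (r, c); the
    bits of all planes are OR-ed into each pixel at bit index b."""
    height, width = len(planes[0]), len(planes[0][0])
    image = [[0] * width for _ in range(height)]
    for b, plane in enumerate(planes):
        for r in range(height):
            for c in range(width):
                image[r][c] |= (plane[r][c] & 1) << b
    return image
```

Decomposing pixel values into bit planes and then calling `merge_bitplanes` reconstructs the original values bit for bit, mirroring the encoder-side bit-plane split.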
  • the image data provided based on any of the above examples may be one of the multiple channels of image data into which the acquired original image is divided by color; the multiple channels of image data corresponding to the same original image are combined according to the decoding header information and the like to obtain the decoded original image, and the original image is output to the display screen as the displayed image content.
  • an image transmission system is composed of at least the aforementioned encoding device and decoding device.
  • FIG. 8 shows a schematic structural diagram of an image transmission system in an embodiment.
  • the encoding device can be configured in a computer device used to produce a TV program; the original image (such as the main frame image) in the original video of the TV program is encoded based on the foregoing encoding method, and the encoded image data is made into a video file through video coding technology.
  • the video file can be transmitted to the decoding device via the Internet, a dedicated channel for television signals, and the like.
  • the decoding device may be configured in a set-top box or a TV set for playing television programs; the decoding device decodes the received video file based on a decoding method set corresponding to the encoding method, and the decoded original video is displayed on the TV screen.
  • the television can be regarded as a specific example of a playback device.
  • the image transmission system shown in FIG. 8 may include an encoding device and a playback device.
  • the encoding device shown in FIG. 8 can be implemented by a camera device, and the decoding device can be implemented by a playback device including a display screen.
  • the encoding device can be configured in a camera device for capturing road monitoring images, and each captured original image is encoded according to the above-mentioned encoding method to obtain an image code stream; the image code stream can be transmitted to the playback device via the Internet, a specially constructed data line (such as optical fiber), and the like.
  • the playback device can be configured in a computer room for monitoring roads, and is equipped with at least one display screen.
  • the playback device obtains the image code stream provided by the designated camera device based on the user's operation, decodes the received image code stream according to the decoding method set corresponding to the encoding method, and displays the decoded original image on the display screen.
  • the unplayed image code stream is saved in the form of video files; when it needs to be read, the saved video files are decoded, and the decoded original images are displayed by the decoding device one by one.
  • the image transmission device shown in FIG. 8 may also include a camera device and a decoding device; examples are not given one by one here.
  • this application also provides an image transmission system. Please refer to FIG. 9 which shows a schematic structural diagram of the image transmission system in another embodiment.
  • the encoding equipment and decoding equipment included in the image transmission system may at least partially share hardware devices. Examples of the image transmission system are a recording and playback camera, an electronic terminal including a display screen, and the like.
  • the image transmission system includes an image acquisition interface, a storage device, and a processing device.
  • the image acquisition interface may include a network interface, a data line interface, or a program interface.
  • the processing device executes the encoding operation by calling the program stored in the storage device, so as to encode the acquired original image into encoded image data, which is then stored in the storage device.
  • the processing device executes the decoding operation by calling the program in the storage device, and displays the original image obtained after decoding on the display screen.
  • the encoding and decoding operations in the image transmission system can be performed based on the corresponding methods provided in this application, and will not be repeated here.
  • this application can be implemented by means of software in combination with a necessary general hardware platform. If the function is implemented in the form of a software functional unit and sold or used as an independent product, it can also be stored in a computer-readable storage medium. Based on this understanding, this application also provides a computer-readable storage medium that stores at least one program which, when executed, implements any of the foregoing encoding methods or decoding methods, such as the methods described above with reference to FIG. 1 or FIG. 7.
  • the technical solution of the present application, in essence or in the part that contributes to the prior art, can be embodied in the form of a software product.
  • the computer software product can include one or more machine-executable instructions stored on a machine-readable medium.
  • when these instructions are executed by one or more machines, such as a computer, a computer network, or other electronic devices, the one or more machines can be caused to perform operations according to the embodiments of the present application, for example, the steps in the encoding method or the decoding method.
  • machine-readable media may include, but are not limited to, floppy disks, optical discs, CD-ROM (compact disc read-only memory), magneto-optical disks, ROM (read-only memory), RAM (random access memory), EPROM (erasable programmable read-only memory), EEPROM (electrically erasable programmable read-only memory), magnetic or optical cards, flash memory, or other types of media/machine-readable media suitable for storing machine-executable instructions.
  • any connection is properly termed a computer-readable medium.
  • for example, if the instructions are sent from a website, server, or other remote source using coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of the medium.
  • computer readable and writable storage media and data storage media do not include connections, carrier waves, signals, or other transient media, but are instead directed to non-transitory, tangible storage media.
  • the magnetic disks and optical discs used in the application include compact discs (CD), laser discs, optical discs, digital versatile discs (DVD), floppy disks, and Blu-ray discs, where magnetic disks usually reproduce data magnetically, while optical discs reproduce data optically with lasers.
  • the size of the sequence numbers of the above-mentioned processes does not imply the order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
  • the disclosed system, device, and method may be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • the division of the units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components can be combined or integrated into another system, or some features can be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Discrete Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present invention relates to an image encoding method and decoding method, and to a device and system to which said methods are applicable. The encoding method comprises the steps of: dividing acquired image data into a plurality of bit-plane matrix data according to binary data bits; performing serialization processing on the bit-plane matrix data of at least some bit planes based on a preset serialization period to obtain bit-plane serialized data, the serialization period being a period set by serializing a preset m*n matrix according to adjacent data; and encoding the obtained bit-plane serialized data to generate encoded image data of the image data. The present invention uses a serialization period to perform serialization processing on bit-plane matrix data, and has the advantage of improving the cohesion of an original image, in particular the cohesion of high-definition images of 4K and above.
PCT/CN2019/075642 2019-02-21 2019-02-21 Procédé de codage et procédé de décodage d'image, et dispositif et système auxquels lesdits procédés sont applicables WO2020168501A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2019/075642 WO2020168501A1 (fr) 2019-02-21 2019-02-21 Procédé de codage et procédé de décodage d'image, et dispositif et système auxquels lesdits procédés sont applicables
CN201980005137.7A CN111316644B (zh) 2019-02-21 2019-02-21 图像的编码方法、解码方法及所适用的设备、系统

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/075642 WO2020168501A1 (fr) 2019-02-21 2019-02-21 Procédé de codage et procédé de décodage d'image, et dispositif et système auxquels lesdits procédés sont applicables

Publications (1)

Publication Number Publication Date
WO2020168501A1 true WO2020168501A1 (fr) 2020-08-27

Family

ID=71159511

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/075642 WO2020168501A1 (fr) 2019-02-21 2019-02-21 Procédé de codage et procédé de décodage d'image, et dispositif et système auxquels lesdits procédés sont applicables

Country Status (2)

Country Link
CN (1) CN111316644B (fr)
WO (1) WO2020168501A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116962299A (zh) * 2023-09-21 2023-10-27 广东云下汇金科技有限公司 一种数据中心算力调度方法、系统、设备及可读存储介质

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115134475B (zh) * 2022-08-31 2022-11-08 智联信通科技股份有限公司 一种衡器鉴重智能管理系统

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6477280B1 (en) * 1999-03-26 2002-11-05 Microsoft Corporation Lossless adaptive encoding of finite alphabet data
CN1954614A (zh) * 2004-05-13 2007-04-25 皇家飞利浦电子股份有限公司 编码数值块的方法和设备
CN107610037A (zh) * 2017-09-29 2018-01-19 重庆第二师范学院 一种融合多混沌映射和dna编码的图像加密方法及装置
CN108028928A (zh) * 2015-09-18 2018-05-11 皇家飞利浦有限公司 用于快速和高效的图像压缩和解压缩的方法和装置

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1255770C (zh) * 2003-06-30 2006-05-10 大唐微电子技术有限公司 基于数字信号处理器的层次树集合划分图像编解码方法
WO2007066709A1 (fr) * 2005-12-07 2007-06-14 Sony Corporation Dispositif de codage, procede de codage, programme de codage, dispositif de decodage, procede de decodage, et programme de decodage
US9307248B2 (en) * 2013-03-08 2016-04-05 Mediatek Inc. Image encoding method and apparatus for performing bit-plane scanning coding upon pixel data and related image decoding method and apparatus
WO2018054506A1 (fr) * 2016-09-23 2018-03-29 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Codage par transformation de blocs

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6477280B1 (en) * 1999-03-26 2002-11-05 Microsoft Corporation Lossless adaptive encoding of finite alphabet data
CN1954614A (zh) * 2004-05-13 2007-04-25 皇家飞利浦电子股份有限公司 编码数值块的方法和设备
CN108028928A (zh) * 2015-09-18 2018-05-11 皇家飞利浦有限公司 用于快速和高效的图像压缩和解压缩的方法和装置
CN107610037A (zh) * 2017-09-29 2018-01-19 重庆第二师范学院 一种融合多混沌映射和dna编码的图像加密方法及装置

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116962299A (zh) * 2023-09-21 2023-10-27 广东云下汇金科技有限公司 一种数据中心算力调度方法、系统、设备及可读存储介质
CN116962299B (zh) * 2023-09-21 2024-01-19 广东云下汇金科技有限公司 一种数据中心算力调度方法、系统、设备及可读存储介质

Also Published As

Publication number Publication date
CN111316644A (zh) 2020-06-19
CN111316644B (zh) 2022-06-28

Similar Documents

Publication Publication Date Title
US7701365B2 (en) Encoding device and method, composite device and method, and transmission system
KR101442273B1 (ko) 정보 처리 장치 및 방법
US11212539B2 (en) Efficient lossless compression of captured raw image information systems and methods
TWI733986B (zh) 用以編碼和解碼視頻資料之方法、設備及系統
KR101266667B1 (ko) 장치 내 제어기에서 프로그래밍되는 압축 방법 및 시스템
US20180309991A1 (en) Video encoding with adaptive rate distortion control by skipping blocks of a lower quality video into a higher quality video
CN103945223A (zh) 具有帧缓冲压缩的视频处理器及其使用方法
TW201415897A (zh) 解碼器及解碼方法
JP2004221836A (ja) 画像処理装置、プログラム、記憶媒体及び符号伸長方法
WO2016110031A1 (fr) Procédé et dispositif de décodage de flux de données
TWI626841B (zh) 具有減少色彩解析度的視訊流之自適應處理
US20180084279A1 (en) Video encoding by injecting lower-quality quantized transform matrix values into a higher-quality quantized transform matrix
WO2020168501A1 (fr) Procédé de codage et procédé de décodage d'image, et dispositif et système auxquels lesdits procédés sont applicables
RU2510589C2 (ru) Способ кодирования цифрового видеоизображения
JP2004172957A (ja) 画像処理装置、プログラム及び記憶媒体
EP2787738B1 (fr) Compression de pavés pour applications graphiques
JP2004166124A (ja) 画像処理装置、プログラム、記憶媒体及び画像処理方法
CN111406404B (zh) 获得视频文件的压缩方法、解压缩方法、系统及存储介质
CN105828082A (zh) 视频图像快速压缩系统及方法
WO2021168827A1 (fr) Procédé et appareil de transmission d'image
JP2007312399A (ja) 画像符号化装置および画像復号装置、ならびにそれらを利用可能な画像表示装置および方法
KR20110071204A (ko) 웨이블릿 변환 기반의 jpeg2000에서의 병렬 처리 방법
WO2020215193A1 (fr) Codeur, système de codage, et procédé de codage
US20210203952A1 (en) Encoder, encoding system and encoding method
CN108419085B (zh) 一种基于查表的视频传输系统及方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19916245

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 13.12.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19916245

Country of ref document: EP

Kind code of ref document: A1