CN113422960A - Image transmission method and device

Info

Publication number
CN113422960A
CN113422960A (application CN202110663402.XA)
Authority
CN
China
Prior art keywords
macro block
data
residual
layer
image
Prior art date
Legal status
Pending
Application number
CN202110663402.XA
Other languages
Chinese (zh)
Inventor
徐律
潘志恒
Current Assignee
Shanghai Chenyu Information Technology Co., Ltd.
Original Assignee
Shanghai Chenyu Information Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Shanghai Chenyu Information Technology Co., Ltd.
Priority to CN202110663402.XA
Publication of CN113422960A

Classifications

    • H04N19/176: the coding unit being an image region, the region being a block, e.g. a macroblock
    • H04N19/30: coding using hierarchical techniques, e.g. scalability
    • H04N19/42: implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/44: decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The application discloses an image transmission method and device, a non-volatile storage medium, and a processor. The method includes: acquiring an image to be sent, dividing it into a plurality of macroblocks, and determining the macroblock type of each macroblock according to its characteristics; encoding each macroblock with the encoding method corresponding to its macroblock type, and taking the encoded data of the macroblocks as base layer data; obtaining the residual macroblock corresponding to each macroblock from the original macroblock data and the base layer data obtained by encoding that macroblock; and encoding the residual macroblocks, taking the encoded residual data as extension layer data, and sending the combination of the base layer data and the extension layer data to an image decoding end. The method and device solve the technical problem that existing lossless image compression codecs place high demands on bandwidth.

Description

Image transmission method and device
Technical Field
The present application relates to the field of image encoding and decoding, and in particular, to an image transmission method and apparatus, a non-volatile storage medium, and a processor.
Background
Conventional progressive image encoding and decoding methods are basically designed for lossy compression, such as the progressive mode of JPEG, in which quantized coefficients are transmitted progressively. Lossless compression, by contrast, applies entropy coding directly on the assumption of sufficient bandwidth; it generally has no progressive scheme and therefore places high demands on bandwidth.
No effective solution has yet been proposed for the problem that existing lossless image compression codecs place high demands on bandwidth.
Disclosure of Invention
Embodiments of the application provide an image transmission method and device, a non-volatile storage medium, and a processor, so as to at least solve the technical problem that existing lossless image compression codecs place high demands on bandwidth.
According to one aspect of the embodiments of the application, an image transmission method is provided, including: acquiring an image to be sent, dividing it into a plurality of macroblocks, and determining the macroblock type of each macroblock according to its characteristics, wherein macroblocks of different types are encoded with different encoding methods; encoding each macroblock with the encoding method corresponding to its macroblock type, and taking the encoded data of the macroblocks as base layer data; obtaining the residual macroblock corresponding to each macroblock from the original macroblock data and the base layer data obtained by encoding that macroblock; and encoding the residual macroblocks, taking the encoded residual data as extension layer data, and sending the combination of the base layer data and the extension layer data to an image decoding end.
Optionally, obtaining the residual macroblock corresponding to each macroblock from the original macroblock data and the base layer data obtained by encoding that macroblock includes: decoding the base layer data obtained after each macroblock is encoded to obtain the reconstructed macroblock corresponding to each macroblock; and subtracting the reconstructed macroblock data from the original macroblock data to obtain the residual macroblock corresponding to each macroblock.
Optionally, subtracting the reconstructed macroblock data from the original macroblock data to obtain the residual macroblock includes: subtracting the pixel values of the reconstructed macroblock from the pixel values of the original macroblock to obtain the residual values of the residual macroblock, wherein the residual values of each residual macroblock form an n×m residual value matrix, and n and m are natural numbers greater than or equal to 1.
Optionally, encoding the residual macroblocks corresponding to the respective macroblocks includes: layering each residual macroblock, wherein each layer contains one or more rows of the residual value matrix of that residual macroblock; determining the available traffic for extension layer data as the difference between the currently available network bandwidth and the traffic occupied by the base layer data; determining the number of encodable layers of each residual macroblock according to the available traffic for extension layer data; and encoding each residual macroblock according to its number of encodable layers.
Optionally, determining the number of encodable layers of each residual macroblock according to the available traffic for extension layer data includes:
determining, from the transmitted layer number matrix, the minimum transmitted layer number a among the transmitted layer numbers of the residual macroblocks in the current extension layer data;
setting the initial value of the encodable layer number i to a, and setting the total predicted code stream B to 0;
determining the encodable layer number i through the following steps:
step 1: determining the total code stream b of the (i+1)-th layer residual data of all target residual macroblocks whose transmitted layer number is less than or equal to i, and re-determining the total predicted code stream as B = B + b;
step 2: if B is smaller than the available traffic for extension layer data and the current encodable layer number i is smaller than the maximum number of layers of each residual macroblock, updating the current encodable layer number to i = i + 1 and returning to step 1; if B is larger than the available traffic for extension layer data, or the current encodable layer number i equals the maximum number of layers, taking i as the number of encodable layers and ending the current procedure.
Optionally, the total code stream of the (i+1)-th layer residual data of all target residual macroblocks whose transmitted layer number is i is determined by: mapping the (i+1)-th layer residual values of each residual macroblock to symbol values; looking up, in a preset residual code table, the code length corresponding to each residual value of the (i+1)-th layer; and adding up the code lengths of all (i+1)-th layer residual values of the residual macroblocks to obtain the total code stream of the (i+1)-th layer residual data of the target residual macroblocks whose transmitted layer number is i.
Optionally, before determining the number of encodable layers of each residual macroblock according to the available traffic for extension layer data, the method further includes: determining a transmitted layer number matrix, which records the number of encoded layers of each residual macroblock in the previously encoded frame; comparing each macroblock of the current frame being encoded with the corresponding macroblock of the previous frame to determine the target macroblocks of the current frame that have changed relative to the previous frame; and setting, in the transmitted layer number matrix, the transmitted layer number at the position corresponding to each target macroblock to 0, thereby updating the transmitted layer number matrix.
Optionally, determining the minimum transmitted layer number in the updated transmitted layer number matrix includes: if at least one residual macroblock of the current frame has changed relative to the previous frame, determining the minimum transmitted layer number to be 0; if the current frame is unchanged relative to the previous frame, sorting all values in the transmitted layer number matrix and taking the smallest value as the minimum transmitted layer number.
Optionally, encoding each residual macroblock according to its number of encodable layers includes: for every residual macroblock in the current extension layer data whose transmitted layer number is smaller than the encodable layer number, encoding all untransmitted layers between the transmitted layer number and the encodable layer number.
According to another aspect of the embodiments of the application, another image transmission method is provided, including: acquiring an encoded image; decomposing the encoded image into base layer data and extension layer data, wherein the base layer data is the encoded data obtained by encoding the macroblocks into which the original image is divided, and the extension layer data is the encoded data obtained by encoding the residual macroblocks corresponding to those macroblocks; and decoding the base layer data and the extension layer data respectively, and combining the decoded base layer data with the decoded extension layer data to obtain the image before encoding.
Optionally, decoding the extension layer data includes: if a residual macroblock in the current frame to be decoded is the same as the corresponding residual macroblock in the next frame, recording the number N of residual value layers of that macroblock already decoded in the current frame; and when decoding that residual macroblock in the next frame, starting decoding from its (N+1)-th layer of residual values until all residual values of the residual macroblock in the next frame have been decoded.
According to another aspect of the embodiments of the application, an image transmission apparatus is also provided, including: a first acquisition module, configured to acquire an image to be sent, divide it into a plurality of macroblocks, and determine the macroblock type of each macroblock according to its characteristics, wherein macroblocks of different types are encoded with different encoding methods; a first encoding module, configured to encode each macroblock with the encoding method corresponding to its macroblock type and take the encoded data of the macroblocks as base layer data; a determining module, configured to obtain the residual macroblock corresponding to each macroblock from the original macroblock data and the base layer data obtained by encoding that macroblock; and a second encoding module, configured to encode the residual macroblocks, take the encoded residual data as extension layer data, and send the combination of the base layer data and the extension layer data to an image decoding end.
According to another aspect of the embodiments of the application, another image transmission apparatus is provided, including: a second acquisition module, configured to acquire the encoded image; a decomposition module, configured to decompose the encoded image into base layer data and extension layer data, wherein the base layer data is the encoded data obtained by encoding the macroblocks into which the original image is divided, and the extension layer data is the encoded data obtained by encoding the residual macroblocks corresponding to those macroblocks; and a decoding module, configured to decode the base layer data and the extension layer data respectively and combine the decoded base layer data with the decoded extension layer data to obtain the image before encoding.
According to still another aspect of the embodiments of the present application, there is also provided a non-volatile storage medium including a stored program, wherein the apparatus in which the non-volatile storage medium is located is controlled to execute the above image transmission method when the program runs.
According to still another aspect of the embodiments of the present application, there is also provided a processor for executing a program stored in a memory, wherein the program executes the above image transmission method.
In the embodiments of the application, an image to be sent is acquired, divided into a plurality of macroblocks, and the macroblock type of each macroblock is determined according to its characteristics, with macroblocks of different types encoded by different encoding methods; each macroblock is encoded with the encoding method corresponding to its type, and the encoded data of the macroblocks are taken as base layer data; the residual macroblock corresponding to each macroblock is obtained from the original macroblock data and the base layer data obtained by encoding that macroblock; and the residual macroblocks are encoded, the encoded residual data are taken as extension layer data, and the combination of the base layer data and the extension layer data is sent to the image decoding end. Lossless coding is thus achieved by performing progressive coding and residual transmission on top of a lossy base in combination with the actually available bandwidth, so that truly lossless encoding and decoding of the image is achieved under limited bandwidth, which solves the technical problem that existing lossless image compression codecs place high demands on bandwidth.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a flowchart of a method for transmitting an image according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a single residual macroblock in accordance with an embodiment of the present application;
fig. 3a is a schematic diagram of a transmitted layer number matrix according to an embodiment of the present application;
fig. 3b is a schematic diagram of the transmitted layer number matrix of fig. 3a after updating;
fig. 3c is a schematic diagram of a transmitted layer number matrix according to an embodiment of the present application;
fig. 3d is a schematic diagram of a transmitted layer number matrix according to an embodiment of the present application;
fig. 3e is a schematic diagram of a transmitted layer number matrix according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a residual huffman code table according to an embodiment of the present application;
FIG. 5 is a flow chart of another method of image transmission according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of an image encoding side and a decoding side according to an embodiment of the present application;
FIG. 7a is a flowchart of an image encoding end according to an embodiment of the present application;
FIG. 7b is a flowchart illustrating the operation of an image decoding end according to an embodiment of the present application;
fig. 8 is a block diagram of a transmission apparatus of an image according to an embodiment of the present application;
fig. 9 is a block diagram of another image transmission apparatus according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to an embodiment of the present application, an embodiment of an image transmission method is provided. It should be noted that the steps shown in the flowcharts of the drawings may be executed in a computer system, for example as a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be executed in an order different from the one given here.
The application provides a lossless coding method that takes the actually available bandwidth into account and achieves losslessness by performing progressive coding and residual transmission on top of a lossy base. With this method, the high bandwidth required by traditional lossless schemes is no longer a constraint; the progressive idea is applied so that a lossless scheme becomes possible under limited bandwidth. The main technical characteristics are as follows:
1. When bandwidth is limited, the base layer, i.e. the lossy data, is transmitted first;
2. When the base layer is encoded, a residual is generated for each macroblock. Once there is surplus bandwidth, the residual can be encoded and sent to the decoding end as an extension layer, and the decoding end superimposes the decoded residual data on the corresponding reconstructed base layer data to form truly lossless reconstructed pixels;
3. The scheme layers the residuals logically. When residual data are encoded, a prediction is made from the current actual bandwidth and the size of the residual data, and only as many residual layers as fit into the current bandwidth are encoded; if the original image is static, the remaining residual layers can continue to be transmitted gradually, so the image seen at the decoding end becomes progressively clearer until all residuals have been transmitted and a lossless frame is formed.
The above scheme is explained in detail below:
fig. 1 is a flowchart of an image transmission method according to an embodiment of the present application, and as shown in fig. 1, the method includes the following steps:
step S102, acquiring an image to be transmitted, dividing the image to be transmitted into a plurality of macro blocks, and determining the macro block type of each macro block according to the characteristics of each macro block, wherein the macro blocks of different types are coded by adopting different coding methods;
macroblock is a basic concept in video coding technology. By dividing the picture into blocks of different sizes, different compression strategies are implemented at different locations.
In this step, one frame of YUV image is acquired each time as a coded data source (i.e. the image to be transmitted). "YUV" refers to a pixel format in which a luminance parameter and a chrominance parameter are separately expressed. "Y" represents brightness, i.e., gray scale value, and "U" and "V" represent chroma, which is used to describe the color and saturation of the image for specifying the color of the pixel.
Step S104, coding each macro block according to the coding methods corresponding to different macro block types, and taking the coded data of each macro block as the base layer data;
According to an alternative embodiment of the application, when step S102 and step S104 are executed, the full-frame image is divided into macroblocks of 16×16 pixels, which are classified according to their characteristics. For example, macroblocks with smooth pixel variation are classified as picture macroblocks and encoded with JPEG to achieve a lower code stream, while macroblocks rich in detail are classified as text macroblocks and compressed with Huffman coding and palette quantization to display the details more clearly. Other extended encoding modules can also be supported. One advantage of this scheme is that, whatever the encoding module, residuals of the same format are generated, so progressive residual transmission can be applied to achieve losslessness.
Optionally, the text macroblocks of the full frame are encoded mainly with Huffman coding and palette quantization, and the picture macroblocks of the full frame are encoded mainly with JPEG.
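As an illustrative sketch only (the patent does not specify a concrete classification rule), the following Python snippet shows one plausible way to split the Y plane into 16×16 macroblocks and label each as a text or picture block based on how many distinct luminance values it contains; the criterion, the threshold, and all names are assumptions.
```python
import numpy as np

MB = 16  # macroblock size used in the embodiment

def classify_macroblocks(y_plane: np.ndarray, distinct_threshold: int = 32) -> dict:
    """Split the Y plane into 16x16 macroblocks and label each one.

    A block with few distinct luminance values is treated as a "text"
    block (flat colors, sharp edges), otherwise as a "picture" block.
    This criterion is an illustrative assumption, not the patent's rule.
    """
    h, w = y_plane.shape
    labels = {}
    for by in range(0, h, MB):
        for bx in range(0, w, MB):
            block = y_plane[by:by + MB, bx:bx + MB]
            n_distinct = len(np.unique(block))
            labels[(by // MB, bx // MB)] = "text" if n_distinct <= distinct_threshold else "picture"
    return labels
```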
The data resulting from encoding the macroblocks is referred to as base layer data. The base layer data alone can be decoded to produce a lossy frame, and when the bandwidth is too limited to transmit any residual, only the base layer data is sent to the decoding end. Base layer data here means a coded stream without any residual data: if the residual data are discarded entirely, a lossy code stream with quantization loss is obtained, and this code stream is called the base layer. The base layer alone can be decoded to display the picture with only a small loss. Correspondingly, the part of the code stream that encodes the residuals, mentioned below, is called the extension layer; extension layer data are optional, and the more of them that arrive, the better the result. Only when the extension layer data are transmitted completely is the reconstruction lossless.
Step S106: obtaining the residual macroblock corresponding to each macroblock from the original macroblock data and the base layer data obtained after encoding that macroblock;
When step S106 is executed, each encoded macroblock is decoded again to generate a reconstructed macroblock, and the reconstructed macroblock data are subtracted from the original macroblock data to obtain the residual macroblock corresponding to the original macroblock.
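A minimal sketch of this residual generation, assuming hypothetical encode_base and decode_base helpers that stand in for the type-specific base layer codec (JPEG for picture blocks, Huffman plus palette quantization for text blocks):
```python
import numpy as np

def residual_macroblock(original: np.ndarray, encode_base, decode_base) -> np.ndarray:
    """Encode a macroblock, decode it back, and return the residual.

    encode_base / decode_base are placeholders for the base layer codec
    of the block's type; they are not APIs defined by the patent.
    """
    base_layer_bits = encode_base(original)       # lossy base layer data
    reconstructed = decode_base(base_layer_bits)  # reconstructed macroblock
    # Residual = original pixel values minus reconstructed pixel values,
    # a 16x16 matrix in the embodiment, with values roughly in [-128, 127].
    return original.astype(np.int16) - reconstructed.astype(np.int16)
```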
Fig. 2 is a schematic diagram of a single residual macroblock, which is the data source for extension layer coding, according to an embodiment of the present application.
Step S108: encoding the residual macroblocks corresponding to the respective macroblocks, taking the encoded data of the residual macroblocks as extension layer data, and sending the data obtained by combining the base layer data and the extension layer data to the image decoding end.
The final code stream, generated by merging the base layer data and the extension layer data, is sent to the decoding end.
Through the above steps, lossless coding is achieved by performing progressive coding and residual transmission on top of a lossy base in combination with the actually available bandwidth, so that truly lossless encoding and decoding of the image is achieved under limited bandwidth.
According to an alternative embodiment of the application, step S106 is implemented as follows: decoding the base layer data obtained after each macroblock is encoded to obtain the reconstructed macroblock corresponding to each macroblock; and subtracting the reconstructed macroblock data from the original macroblock data to obtain the residual macroblock corresponding to each macroblock.
According to another optional embodiment of the application, subtracting the reconstructed macroblock data from the original macroblock data to obtain the residual macroblock includes the following step: subtracting the pixel values of the reconstructed macroblock from the pixel values of the original macroblock to obtain the residual values of the residual macroblock, wherein the residual values of each residual macroblock form an n×m residual value matrix, and n and m are natural numbers greater than or equal to 1.
Referring to fig. 2, each element of the 16×16 matrix shown in fig. 2 is the residual value generated for one pixel after encoding.
In some optional embodiments of the application, encoding the residual macroblocks in step S108 includes the following steps: layering each residual macroblock, wherein each layer contains one or more rows of the residual value matrix of that residual macroblock; determining the available traffic for extension layer data as the difference between the currently available network bandwidth and the traffic occupied by the base layer data; determining the number of encodable layers of each residual macroblock according to the available traffic for extension layer data; and encoding each residual macroblock according to its number of encodable layers.
There are many possible ways to layer the residual values. In the embodiments of the application the residual values are divided into 16 layers, i.e. the residual values generated by each row of pixels of a macroblock form one layer. Each element of the 16×16 matrix shown in fig. 2 is the residual of one pixel after encoding; each row constitutes one layer, giving 16 layers in total, and the 16 residuals inside the bold outline are layer 1 in the figure. All residual macroblocks of the full frame are layered in this way.
It should be noted that two or more rows of the matrix shown in fig. 2 may also be combined into a single layer.
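The layering itself can be sketched as follows, assuming one row per layer as in the embodiment; a rows_per_layer parameter covers the multi-row variant noted above.
```python
import numpy as np

def layer_residual(residual_mb: np.ndarray, rows_per_layer: int = 1) -> list:
    """Split a residual macroblock into layers of one or more rows each.

    With the 16x16 residual of the embodiment and rows_per_layer=1 this
    yields 16 layers; layer 1 is the first row of residual values.
    """
    n_rows = residual_mb.shape[0]
    return [residual_mb[r:r + rows_per_layer, :]
            for r in range(0, n_rows, rows_per_layer)]
```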
In some optional embodiments of the application, determining the number of encodable layers of each residual macroblock according to the available traffic for extension layer data includes:
determining, from the transmitted layer number matrix, the minimum transmitted layer number a among the transmitted layer numbers of the residual macroblocks in the current extension layer data;
setting the initial value of the encodable layer number i to a, and setting the total predicted code stream B to 0;
determining the encodable layer number i through the following steps:
step 1: determining the total code stream b of the (i+1)-th layer residual data of all target residual macroblocks whose transmitted layer number is less than or equal to i, and re-determining the total predicted code stream as B = B + b;
step 2: if B is smaller than the available traffic for extension layer data and the current encodable layer number i is smaller than the maximum number of layers of each residual macroblock, updating the current encodable layer number to i = i + 1 and returning to step 1; if B is larger than the available traffic for extension layer data, or the current encodable layer number i equals the maximum number of layers, taking i as the number of encodable layers and ending the current procedure.
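A sketch of this prediction loop, under the assumption that layer_cost(mb, layer_index) returns the predicted code stream size of one residual layer of one macroblock (for example via the Huffman code-length lookup described further below); the data structures and names are illustrative, not interfaces defined by the patent.
```python
def predict_encodable_layers(transmitted: dict, residual_mbs: dict,
                             available_traffic: int, layer_cost, max_layers: int) -> int:
    """Return the number of encodable layers i for the current frame.

    transmitted       : macroblock index -> transmitted layer count
    residual_mbs      : macroblock index -> residual macroblock
    available_traffic : traffic left for the extension layer
                        (current bandwidth minus base layer traffic)
    layer_cost        : function (mb, layer_index) -> predicted bits of that layer
    max_layers        : maximum number of layers per residual macroblock
    """
    i = min(transmitted.values())   # minimum transmitted layer number a
    B = 0                           # total predicted code stream
    while i < max_layers:
        # Step 1: cost b of the (i+1)-th layer of every target macroblock
        # whose transmitted layer number is <= i, accumulated into B.
        b = sum(layer_cost(residual_mbs[k], i)   # index i is the (i+1)-th layer
                for k, sent in transmitted.items() if sent <= i)
        B += b
        # Step 2: continue only while the predicted stream fits the budget.
        if B < available_traffic:
            i += 1
        else:
            break
    return i
```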
In other optional embodiments of the application, the total code stream of the (i+1)-th layer residual data of all target residual macroblocks whose transmitted layer number is i is determined by: mapping the (i+1)-th layer residual values of each residual macroblock to symbol values; looking up, in a preset residual code table, the code length corresponding to each residual value of the (i+1)-th layer; and adding up the code lengths of all (i+1)-th layer residual values of the residual macroblocks to obtain the total code stream of the (i+1)-th layer residual data of the target residual macroblocks whose transmitted layer number is i.
This provides a way to predict how many residual layers can be encoded. Specifically, the currently available bandwidth is read from a network bandwidth monitoring module outside the codec system, and the traffic occupied by the base layer data is subtracted from that bandwidth; the difference is the traffic available for the extension layer code stream.
According to an optional embodiment of the application, before the number of encodable layers of each residual macroblock is determined from the available traffic for extension layer data, a transmitted layer number matrix needs to be determined, which records the number of encoded layers of each residual macroblock in the previously encoded frame; each macroblock of the current frame being encoded is compared with the corresponding macroblock of the previous frame to determine the target macroblocks of the current frame that have changed relative to the previous frame; and, in the transmitted layer number matrix, the transmitted layer number at the position corresponding to each target macroblock is set to 0, thereby updating the transmitted layer number matrix.
The transmitted layer number matrix is continuously updated during encoding; the matrix acquired here is the one obtained after encoding the previous frame. It records the number of layers that each residual macroblock of the previous frame has already transmitted. For example, assuming a frame contains 16 residual macroblocks and each macroblock is divided into 16 layers, the transmitted layer number matrix can be as shown in fig. 3a: the transmitted layer number of the macroblock in the first row and first column is 10 (i.e. that macroblock has already transmitted data up to layer 10), the transmitted layer number of the macroblock in the first row and second column is 10, and so on.
The current frame is compared with the previous frame to determine the target macroblocks of the current frame that have changed relative to the previous frame, and the transmitted layer number at the position corresponding to each target macroblock is set to 0, thereby updating the transmitted layer number matrix.
Specifically, macroblocks are compared as follows: the pixels at corresponding positions in the two co-located macroblocks are compared; if all pixels are identical, the macroblock is considered unchanged, otherwise it is considered changed. In practical applications, only the Y component of each pixel may be compared to speed up the comparison.
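A sketch of this comparison, assuming the co-located Y-component blocks are available as NumPy arrays:
```python
import numpy as np

def macroblock_changed(current_y: np.ndarray, previous_y: np.ndarray) -> bool:
    """Return True if the co-located macroblock changed between frames.

    Only the Y (luminance) component is compared, as suggested above, to
    speed up the check; a block counts as unchanged only if every pixel matches.
    """
    return not np.array_equal(current_y, previous_y)
```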
If a macroblock of the current frame has changed relative to the corresponding macroblock of the previous frame, the value of the corresponding residual macroblock (i.e. the target macroblock) in the transmitted layer number matrix is set to 0; if a macroblock of the current frame is unchanged relative to the corresponding macroblock of the previous frame, its value in the transmitted layer number matrix is left unchanged.
Specifically, the transmitted layer number of every changed macroblock is set directly to 0, while the transmitted layer numbers of the unchanged macroblocks are not modified. The updated transmitted layer number matrix then replaces the original one.
Taking fig. 3a as an example, if four macroblocks have changed and the transmitted layer numbers at the corresponding positions in fig. 3a are set to 0, the matrix of fig. 3a becomes the transmitted layer number matrix of fig. 3b.
The transmitted layer number matrix marks the layer up to which the residual macroblock at each position has currently been transmitted. The transmitted layer number is set to 0 when a macroblock changes because, in that case, the macroblock must be transmitted again starting from layer 1 and transmission cannot continue from the layer number reached by the co-located macroblock of the previous frame; setting the transmitted layer number of the current macroblock to 0 therefore also indicates that its residual macroblock has not been transmitted yet.
According to another alternative embodiment of the application, determining the minimum transmitted layer number in the updated transmitted layer number matrix includes: if at least one residual macroblock of the current frame has changed relative to the previous frame, determining the minimum transmitted layer number to be 0; if the current frame is unchanged relative to the previous frame, sorting all values in the transmitted layer number matrix and taking the smallest value as the minimum transmitted layer number.
In this step, the minimum transmitted layer number in the current transmitted layer number matrix is determined. Specifically: if at least one macroblock of the current frame has changed relative to the previous frame, the minimum transmitted layer number is set directly to 0; if no macroblock has changed, all values in the current transmitted layer number matrix are sorted and the minimum value is taken as the minimum transmitted layer number.
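The matrix update and the minimum-transmitted-layer rule can be sketched together, assuming NumPy arrays indexed by macroblock position; the names are illustrative.
```python
import numpy as np

def update_transmitted_matrix(transmitted: np.ndarray, changed_mask: np.ndarray) -> np.ndarray:
    """Reset the transmitted layer count of every changed macroblock to 0."""
    updated = transmitted.copy()
    updated[changed_mask] = 0
    return updated

def minimum_transmitted_layers(transmitted: np.ndarray, changed_mask: np.ndarray) -> int:
    """Apply the rule above for the minimum transmitted layer number."""
    if changed_mask.any():
        return 0                    # at least one macroblock changed
    return int(transmitted.min())   # otherwise the smallest entry of the matrix
```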
The method for predicting the number of encodable layers of each residual macroblock is described in detail below with a specific example:
Assume that the total number of layers is 12, fig. 3a is the currently received transmitted layer number matrix, and fig. 3b is the current transmitted layer number matrix obtained by updating the matrix shown in fig. 3a.
The number of encodable layers of each residual macroblock is then determined from the updated transmitted layer number matrix as follows:
1) since the minimum transmitted layer number i is 0, the current layer to encode is set to i + 1 = 1; all macroblocks whose transmitted layer number is 0 are found, the total code stream B1 of their layer-1 residual data is determined, and the total predicted code stream B is re-determined as B = 0 + B1;
2) since B is smaller than the available traffic for extension layer data and the current layer 1 is smaller than the maximum number of layers of each residual macroblock (12 layers), the transmitted layer numbers of all macroblocks encoded so far are incremented by 1, and the transmitted layer number matrix is updated to fig. 3c;
3) since the minimum transmitted layer number i is 1, the current layer to encode is set to i + 1 = 2; all macroblocks whose transmitted layer number is 1 are found, the total code stream B2 of their layer-2 residual data is determined, and the total predicted code stream B is re-determined as B = B1 + B2;
4) since B is smaller than the available traffic for extension layer data and the current layer 2 is smaller than the maximum number of layers (12 layers), the transmitted layer numbers of all macroblocks encoded so far are again incremented by 1, and the transmitted layer number matrix is updated to fig. 3d;
5) assume the current layer to encode is 7: all macroblocks whose transmitted layer number is 6 are found, the total code stream B7 of their layer-7 residual data is determined, and the total predicted code stream is re-determined as B = B1 + B2 + … + B7; at this point B is larger than the available traffic for extension layer data, and the transmitted layer number matrix is updated to fig. 3e;
6) the next frame (image 2) is received, and the current transmitted layer number matrix, i.e. the matrix shown in fig. 3e, is obtained;
7) the macroblocks of the current frame (image 2) are compared with those of the previous frame (image 1); assuming image 2 is identical to image 1, the current transmitted layer number matrix needs no adjustment, so encoding starts directly from the current transmitted layer number matrix (fig. 3e);
8) the minimum transmitted layer number determined from the current matrix is 6, so the current layer to encode is 6 + 1 = 7, and the total code stream of the layer-7 residual data of the 9 macroblocks of the current frame whose transmitted layer number is 6 is computed first;
9) the number of encodable layers continues to be predicted in the same way up to layer 12; prediction stops when the current layer, 12, equals the maximum number of layers of each residual macroblock (12 layers).
According to an optional embodiment of the application, when the total code stream of each residual layer is determined, the residual values of that layer are mapped to symbol values (by adding 128 to each residual value), the symbol values are looked up in the residual Huffman code table to obtain the code length of each symbol, and the code lengths of all residual values of the layer are added up to obtain the size of the code stream of that layer's residual data.
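A sketch of this per-layer size estimation, assuming a code_length table of 256 entries indexed by symbol value (residual + 128) in the spirit of the Huffman code table of fig. 4; the table contents are not reproduced here.
```python
import numpy as np

def layer_stream_bits(layer_residuals: np.ndarray, code_length: np.ndarray) -> int:
    """Predicted size in bits of one residual layer.

    Each residual value is mapped to a symbol by adding 128 (range 0..255),
    the symbol's code length is looked up in the residual code table, and
    the code lengths of all residuals in the layer are summed.
    """
    symbols = layer_residuals.astype(np.int32) + 128
    return int(code_length[symbols].sum())
```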
Fig. 4 is a schematic diagram of a residual Huffman code table according to an embodiment of the application. It should be noted that the scheme is described for images with a bit depth of 8 bits, i.e. pixel values between 0 and 255. Theoretically, the difference between an original pixel value and a reconstructed pixel value lies between -255 and 255. In practice, however, the reconstructed pixel values are obtained by encoding the original pixel values with only a small loss, so the difference between them is generally small; the code table therefore does not need to cover all cases from -255 to 255, which saves space. In this scheme the residual values lie between -128 and 127. For convenience, 128 is uniformly added to the residual values before Huffman coding, yielding the range 0 to 255, which is the value range of the coding symbols. The symbols 0 to 255 are then encoded by looking them up in the Huffman code table. Correspondingly, the decoding end decodes the symbols and subtracts 128 to obtain the residual values. According to statistics, 0 occurs most frequently among the residual values, and the farther a residual value is from 0, the lower its frequency of occurrence.
Fig. 4 is only an example of a fixed Huffman code table; there are more methods for generating code tables, and more code tables and code-length distributions are possible. A common feature of these code tables, however, is that the shortest code length corresponds to the most frequent symbol: when the residual is 0, the corresponding symbol 128 has the shortest code length, 1. The longest code length corresponds to the least frequent symbols: when the residual value is -128 or 127, the code length is the longest, 17. Since the residual is the difference between the original data and the reconstructed data, it would ideally be 0, i.e. the reconstructed data would equal the original data. The adopted encoding algorithm, however, introduces some loss and therefore produces residuals, but the residual values generally fluctuate within a small range around 0, and a normal encoding algorithm does not produce large residuals; the symbols corresponding to residual values around 0, i.e. the symbols in a small range around symbol 128, therefore have the smallest code lengths, while the largest residual values have the longest code lengths, so as to reduce the code stream.
In an alternative embodiment of the application, encoding each residual macroblock according to its number of encodable layers includes the following step: for every residual macroblock in the current extension layer data whose transmitted layer number is smaller than the encodable layer number, encoding all untransmitted layers between the transmitted layer number and the encodable layer number.
Each residual macroblock is Huffman-encoded according to the predicted number of encodable residual layers, i.e. the layer up to which each macroblock may be encoded when the current frame is encoded into the extension layer; this module encodes each residual macroblock up to that layer.
The main idea of the above coding is: if a residual macroblock has currently been encoded up to layer N and the image is still unchanged in the next frame, i.e. the content of the residual macroblock has not changed, then the prediction of the number of residual layers starts from layer N + 1. Since the first N layers have already been encoded and transmitted, the next frame also continues encoding from the (N+1)-th residual layer; this is what encoding the residual progressively means. Conversely, if the image has changed when the next frame arrives, i.e. the content of the residual macroblock has changed and its residual values have changed as well, then the number of encoded layers of that macroblock in the next frame is no longer N and must be cleared to zero, and both the prediction of the residual layer number and the actual residual encoding must start again from layer 0.
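A sketch of the extension layer pass that combines the pieces above: for every macroblock whose transmitted layer number is below the predicted encodable layer number, the untransmitted layers in between are encoded and the transmitted layer count is advanced. encode_layer stands for the Huffman encoding of one residual layer and is an assumed helper, not an interface defined by the patent.
```python
def encode_extension_layer(transmitted: dict, residual_layers: dict,
                           encodable: int, encode_layer) -> bytes:
    """Encode the untransmitted residual layers of the current frame.

    transmitted     : macroblock index -> transmitted layer count (updated in place)
    residual_layers : macroblock index -> list of residual layers
    encodable       : number of encodable layers predicted for this frame
    encode_layer    : function (layer) -> encoded bytes (assumed Huffman coder)
    """
    stream = bytearray()
    for k, sent in transmitted.items():
        if sent < encodable:
            # Encode every untransmitted layer between the transmitted
            # layer number and the encodable layer number.
            for layer_idx in range(sent, encodable):
                stream += encode_layer(residual_layers[k][layer_idx])
            transmitted[k] = encodable  # progressive bookkeeping for the next frame
    return bytes(stream)
```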
Fig. 5 is a flowchart of another image transmission method according to an embodiment of the present application, and as shown in fig. 5, the method includes the following steps:
Step S502: acquiring an encoded image;
Step S504: decomposing the encoded image into base layer data and extension layer data, wherein the base layer data is the encoded data obtained by encoding the macroblocks into which the original image is divided, and the extension layer data is the encoded data obtained by encoding the residual macroblocks corresponding to those macroblocks;
Step S506: decoding the base layer data and the extension layer data respectively, and combining the decoded base layer data with the decoded extension layer data to obtain the image before encoding.
After the decoding end receives the encoded data, it splits them into base layer data and extension layer data and decodes them separately. The content decoded from the base layer data and the content decoded from the extension layer data are combined to obtain the original lossless image frame.
Most compressed code streams (such as H.264, vgtp, etc.) are organized as a sequence of coding basic units, with the SPS, PPS, frame header and in-frame data each placed in their own units. A coding basic unit usually also has a type field indicating what it contains; extension layer data and base layer data can be distinguished by this type field. A coding basic unit is typically preceded by the delimiter 000001. When the decoder encounters the delimiter, a new coding basic unit has arrived; it then parses the type field of the new unit and reads and decodes the following code stream according to that specific type.
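As a hedged illustration of the delimiter and type-field idea (the patent does not fix an exact syntax), the following sketch scans a byte stream for the 0x000001 start code and splits it into units, treating the first payload byte as an assumed type field that distinguishes base layer units from extension layer units.
```python
START_CODE = b"\x00\x00\x01"   # delimiter before each coding basic unit (3-byte form assumed)

# Assumed type values; the patent only states that a type field exists.
TYPE_BASE_LAYER = 0x01
TYPE_EXTENSION_LAYER = 0x02

def split_units(stream: bytes):
    """Yield (unit_type, payload) for each coding basic unit in the stream."""
    positions = []
    i = stream.find(START_CODE)
    while i != -1:
        positions.append(i)
        i = stream.find(START_CODE, i + 1)
    for idx, start in enumerate(positions):
        end = positions[idx + 1] if idx + 1 < len(positions) else len(stream)
        body = stream[start + len(START_CODE):end]
        if body:
            yield body[0], body[1:]   # first byte as the type field, rest as payload
```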
According to an alternative embodiment of the application, when step S506 is executed, decoding the extension layer data includes the following steps: if a residual macroblock in the current frame to be decoded is the same as the corresponding residual macroblock in the next frame, recording the number N of residual value layers of that macroblock already decoded in the current frame; and when decoding that residual macroblock in the next frame, decoding starting from its (N+1)-th layer of residual values until all residual values of the residual macroblock in the next frame have been decoded.
Decoding the extension layer data requires current_level from the code stream, i.e. the number of layers encoded so far. As at the encoding end, when the decoding end encounters a still macroblock, its residual has not changed, so the number of residual layers already decoded is recorded as old_layer, and the next frame continues from that layer number. After current_level has been parsed in the current frame, the Huffman-decoded values are placed into the residual macroblock between old_layer and current_level, so that each residual corresponds to the correct pixel position and the original pixel values can be restored correctly during superposition reconstruction. The residual data of the layers before old_layer are zero, because those residuals were transmitted earlier and the corresponding pixels have already been fully restored; the residual data of the layers after current_level are also zero, because they have not been transmitted yet.
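A sketch of this residual placement and the final superposition, assuming per-macroblock values old_layer (layers already decoded in earlier frames) and current_level parsed from the stream, and an assumed decode_layer helper that Huffman-decodes one layer (one row here, as in the embodiment).
```python
import numpy as np

MB = 16  # macroblock size and number of layers in the embodiment

def place_decoded_residuals(residual_mb: np.ndarray, old_layer: int,
                            current_level: int, decode_layer) -> np.ndarray:
    """Fill rows old_layer .. current_level-1 of the residual macroblock.

    Layers before old_layer stay zero (their residuals were applied in
    earlier frames); layers from current_level on stay zero (not yet transmitted).
    """
    for layer_idx in range(old_layer, current_level):
        residual_mb[layer_idx, :] = decode_layer(layer_idx)  # Huffman-decoded row of residuals
    return residual_mb

def reconstruct_pixels(base_reconstruction: np.ndarray, residual_mb: np.ndarray) -> np.ndarray:
    """Superimpose the decoded residuals on the base layer reconstruction."""
    return np.clip(base_reconstruction.astype(np.int16) + residual_mb, 0, 255).astype(np.uint8)
```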
Fig. 6 is a schematic structural diagram of an image encoding end and a decoding end according to an embodiment of the present application, as shown in fig. 6,
the encoding end comprises the following functional components:
the image acquisition module 101 acquires a frame of YUV image each time as a coded data source.
The macroblock classification module 102 divides the full-frame image into a plurality of macroblocks, which are mainly classified into picture macroblocks and text macroblocks, with other extended encoding types also possible.
The text macroblock encoding module 103 encodes the text macroblocks of the full frame, mainly using Huffman coding and palette quantization.
The picture macroblock encoding module 104 encodes the picture macroblocks of the full frame, mainly using JPEG encoding.
The other encoding modules 105 are listed here as a future extension, to show that whatever the encoding mode, residuals can be generated in the same way and the present scheme can be applied to achieve lossless compression.
The data generated by modules 103 to 105 are called base layer data.
The residual layer number prediction module 106 predicts the number of residual layers that can be encoded.
The reconstruction and residual generation module 107 actually decodes each macroblock encoded by modules 103 to 105 to generate reconstructed data, and subtracts the reconstructed macroblock from the original macroblock to obtain the residual macroblock of that macroblock.
The residual encoding module 108 Huffman-encodes the residual macroblocks. It requires module 106 to provide the number of residual layers that can be encoded, i.e. the layer up to which each macroblock may be encoded when the current frame is encoded into the extension layer, and it encodes each residual macroblock up to that layer.
The code stream generation and transmission module 109 merges the base layer code stream generated by modules 103 to 105 with the extension layer code stream generated by module 108 into the final code stream and sends it to the decoding end.
The decoding end comprises the following functional components:
The transmission receiving module 110 receives the encoded data at the decoding end and splits them into a base layer and an extension layer; the base layer code stream is handed, according to macroblock type, to modules 111 to 113 for decoding the macroblocks of the respective types, and the extension layer code stream is handed to module 114 for residual decoding.
Modules 111 to 113, the base layer decoding modules, correspond to modules 103 to 105 at the encoding end.
The residual decoding module 114 decodes the extension layer data.
The superposition reconstruction module 115 combines the contents decoded by modules 111 to 113 into the image frame formed by the base layer, i.e. the lossy reconstructed frame.
The final frame reconstruction module 116 adds the base layer pixel values output by module 115 and the residuals decoded by module 114 to obtain the reconstructed frame. If current_level has reached 16, i.e. all layers of the full-frame residual have been transmitted, the reconstructed frame is the original lossless image frame.
Fig. 7a is a flowchart of the operation of an image encoding end according to an embodiment of the present application; as shown in fig. 7a, the method includes the following steps:
step S701, acquiring one frame of original image data;
step S702, classifying the macroblocks of the full frame;
step S703, sorting the full-frame macroblocks into text, picture, and other macroblocks;
step S704, encoding the various macroblocks to form the base layer code stream, and generating the residual macroblocks;
step S705, predicting, according to the current bandwidth, the number of residual macroblock layers that can be transmitted;
step S706, Huffman-encoding the residual values of the specified number of layers to form the extension layer code stream, and updating the transmitted layer number information of each residual macroblock;
and step S707, merging the base layer and extension layer code streams to form the whole-frame code stream.
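For step S705, a minimal sketch of one way the transmittable layer count could be predicted from a bandwidth budget is given below, following the greedy accumulation described later in the claims; the per-macroblock layer cost table and all identifiers are assumptions made only for illustration.

```python
# Illustrative sketch of bandwidth-based prediction of the encodable layer count (step S705).
def predict_encodable_layers(sent_layers, layer_costs, budget, max_layers):
    """sent_layers[mb]   -- layers of macroblock mb already transmitted (0-based count).
    layer_costs[mb][k]   -- estimated bits needed for layer k of macroblock mb.
    budget               -- bits left for the extension layer after the base layer.
    Returns the number of layers that can be encoded for this frame."""
    i = min(sent_layers)      # start from the least-advanced macroblock
    total = 0
    while i < max_layers:
        # Bits needed to send layer i of every macroblock that has not sent it yet.
        step = sum(cost[i] for sent, cost in zip(sent_layers, layer_costs) if sent <= i)
        if total + step > budget:
            break
        total += step
        i += 1
    return i
```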
Fig. 7b is a flowchart of the operation of an image decoding end according to an embodiment of the present application; as shown in fig. 7b, the method includes the following steps:
step S801, receiving the code stream sent by the encoding end and decomposing it into base layer data and extension layer data;
step S802, decoding the various types of base layer data;
step S803, Huffman-decoding the extension layer, placing the decoded data at the corresponding position of each residual macroblock according to its current layer number, and updating the layer number information of the residual macroblock;
and step S804, superimposing the base layer pixel values and the extension layer data to form the final reconstructed frame.
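A small sketch of step S803 follows; the dictionary layout, the use of 2-D arrays for the residual macroblocks, and the function name are illustrative assumptions only.

```python
# Illustrative sketch of placing decoded extension layer data into residual macroblocks (step S803).
def apply_extension_layer(residuals, layer_counts, decoded_rows):
    """residuals[mb]    -- 2-D array (e.g. numpy) accumulating the residual of macroblock mb.
    layer_counts[mb]    -- number of residual layers already filled in for mb.
    decoded_rows        -- list of (mb, row_values) pairs produced by Huffman decoding."""
    for mb, row_values in decoded_rows:
        layer = layer_counts[mb]
        residuals[mb][layer, :] = row_values   # place the data at the current layer position
        layer_counts[mb] = layer + 1           # update the transmitted-layer information
    return residuals, layer_counts
```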
It should be noted that, reference may be made to the description related to the embodiment shown in fig. 1 for a preferred implementation of the embodiment shown in fig. 7a and 7b, and details are not repeated here.
The image transmission method provided by the present application can achieve the following technical effects: when the image is constantly moving, as many residuals as possible are transmitted within the actual bandwidth, so that the display at the decoding end is as clear as possible; when the image is still, the residual data of each macroblock does not change, and all residual layers of each macroblock are fully transmitted as the bandwidth allows. The scheme can therefore achieve truly lossless encoding and decoding of the image under limited bandwidth.
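The still-image behaviour described above can be pictured with the following bookkeeping sketch; comparing macroblocks by raw pixel equality and the chosen data layout are assumptions made only for illustration.

```python
# Illustrative sketch of maintaining the transmitted-layer count across frames.
import numpy as np

def update_sent_layer_matrix(sent_layers, prev_blocks, cur_blocks):
    """sent_layers[idx]        -- residual layers already transmitted for macroblock idx.
    prev_blocks / cur_blocks   -- per-macroblock pixel arrays of the previous and current frame."""
    for idx, (prev_mb, cur_mb) in enumerate(zip(prev_blocks, cur_blocks)):
        if not np.array_equal(prev_mb, cur_mb):
            sent_layers[idx] = 0   # changed macroblock: its residual layers start over
        # unchanged macroblocks keep their count, so the remaining layers are sent
        # over the following frames until the full (lossless) residual has arrived
    return sent_layers
```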
Fig. 8 is a block diagram of a structure of an image transmission apparatus according to an embodiment of the present application, and as shown in fig. 8, the apparatus includes:
a first obtaining module 80, configured to obtain an image to be sent, divide the image to be sent into a plurality of macroblocks, and determine a macroblock type of each macroblock according to characteristics of each macroblock, where different types of macroblocks are encoded by using different encoding methods;
a first encoding module 82, configured to encode each macroblock according to a coding method corresponding to different macroblock types, and use the encoded data of each macroblock as base layer data;
a determining module 84, configured to obtain a residual macroblock corresponding to each macroblock according to original macroblock data of each macroblock and base layer data obtained after encoding the corresponding macroblock;
and a second encoding module 86, configured to encode the residual macroblocks corresponding to the respective macroblocks, use the encoded data of the residual macroblocks as extension layer data, and send the merged data of the base layer data and the extension layer data to the image decoding end.
It should be noted that, reference may be made to the description related to the embodiment shown in fig. 1 for a preferred implementation of the embodiment shown in fig. 8, and details are not repeated here.
Fig. 9 is a block diagram of another image transmission apparatus according to an embodiment of the present application, and as shown in fig. 9, the apparatus includes:
a second obtaining module 90, configured to obtain the encoded image;
a decomposition module 92, configured to decompose the encoded image into base layer data and extension layer data, where the base layer data is encoded data obtained by encoding each macroblock obtained by dividing an original image, and the extension layer data is encoded data obtained by encoding a residual macroblock corresponding to each macroblock;
and a decoding module 94, configured to decode the base layer data and the extension layer data, respectively, and splice the data after decoding the base layer data and the data after decoding the extension layer data to obtain an image before encoding.
It should be noted that, reference may be made to the description related to the embodiment shown in fig. 5 for a preferred implementation of the embodiment shown in fig. 9, and details are not repeated here.
The embodiment of the application also provides a nonvolatile storage medium, which comprises a stored program, wherein when the program runs, the device where the nonvolatile storage medium is located is controlled to execute the image transmission method.
The nonvolatile storage medium stores a program for executing the following functions: acquiring an image to be sent, dividing the image to be sent into a plurality of macro blocks, and determining the macro block type of each macro block according to the characteristics of each macro block, wherein the macro blocks of different types are coded by adopting different coding methods; coding each macro block according to coding methods corresponding to different macro block types, and taking the coded data of each macro block as base layer data; obtaining residual error macro blocks corresponding to the macro blocks according to the original macro block data of the macro blocks and base layer data obtained after coding the corresponding macro blocks; and respectively coding residual error macro blocks corresponding to all macro blocks, taking the coded data of all residual error macro blocks as extension layer data, and sending the data obtained by combining the base layer data and the extension layer data to an image decoding end; or
acquiring the coded image; decomposing the coded image into base layer data and extension layer data, wherein the base layer data is coded data obtained by coding each macro block obtained by dividing the original image, and the extension layer data is coded data obtained by coding a residual error macro block corresponding to each macro block; and respectively decoding the data of the base layer and the data of the extension layer, and splicing the data after decoding the data of the base layer and the data after decoding the data of the extension layer to obtain the image before encoding.
The embodiment of the application also provides a processor, wherein the processor is used for running the program stored in the memory, and the program is run to execute the image transmission method.
The processor is used for running a program for executing the following functions: acquiring an image to be sent, dividing the image to be sent into a plurality of macro blocks, and determining the macro block type of each macro block according to the characteristics of each macro block, wherein the macro blocks of different types are coded by adopting different coding methods; coding each macro block according to coding methods corresponding to different macro block types, and taking the coded data of each macro block as base layer data; obtaining residual error macro blocks corresponding to the macro blocks according to the original macro block data of the macro blocks and base layer data obtained after coding the corresponding macro blocks; and respectively coding residual error macro blocks corresponding to all macro blocks, taking the coded data of all residual error macro blocks as extension layer data, and sending the data obtained by combining the base layer data and the extension layer data to an image decoding end; or
acquiring the coded image; decomposing the coded image into base layer data and extension layer data, wherein the base layer data is coded data obtained by coding each macro block obtained by dividing the original image, and the extension layer data is coded data obtained by coding a residual error macro block corresponding to each macro block; and respectively decoding the data of the base layer and the data of the extension layer, and splicing the data after decoding the data of the base layer and the data after decoding the data of the extension layer to obtain the image before encoding.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present application, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The foregoing is only a preferred embodiment of the present application and it should be noted that those skilled in the art can make several improvements and modifications without departing from the principle of the present application, and these improvements and modifications should also be considered as the protection scope of the present application.

Claims (13)

1. A method for transmitting an image, comprising:
acquiring an image to be sent, dividing the image to be sent into a plurality of macro blocks, and determining the macro block type of each macro block according to the characteristics of each macro block, wherein the macro blocks of different types are coded by adopting different coding methods;
coding each macro block according to coding methods corresponding to different macro block types, and taking the coded data of each macro block as base layer data;
obtaining residual error macro blocks corresponding to the macro blocks according to the original macro block data of the macro blocks and base layer data obtained after coding the corresponding macro blocks;
and respectively coding the residual error macro blocks corresponding to the macro blocks, taking the coded data of the residual error macro blocks as extension layer data, and sending the data obtained by combining the basic layer data and the extension layer data to an image decoding end.
2. The method of claim 1, wherein obtaining residual macroblocks corresponding to each macroblock according to original macroblock data of each macroblock and base layer data obtained by encoding the corresponding macroblock, comprises:
decoding the base layer data obtained after each macro block is coded to obtain a reconstructed macro block corresponding to each macro block;
and subtracting the data of the reconstructed macro block corresponding to each macro block from the original macro block data of each macro block to obtain a residual macro block corresponding to each macro block.
3. The method of claim 2, wherein subtracting the data of the reconstructed macroblock corresponding to each macroblock from the original macroblock data of each macroblock to obtain a residual macroblock corresponding to each macroblock, comprises:
and subtracting the pixel value of the reconstructed macro block corresponding to each macro block from the pixel value of each macro block to obtain a residual value of each residual macro block, wherein the residual value of each residual macro block is a residual value matrix of n rows × m columns, and n and m are natural numbers greater than or equal to 1.
4. The method of claim 1, wherein separately encoding the residual macroblocks corresponding to the macroblocks comprises:
layering each residual error macro block, wherein each layer comprises one row of residual error values or a plurality of rows of residual error values in a residual error value matrix of each residual error macro block;
determining the available flow of the extension layer data according to the difference value of the current network available bandwidth and the flow occupied by the base layer data;
determining the number of the encodable layers of each residual error macro block according to the available flow of the extension layer data;
and respectively coding each residual error macro block according to the number of the coding layers of each residual error macro block.
5. The method of claim 4, wherein determining the number of encodable layers of each residual macroblock according to the available flow of the extension layer data comprises:
determining the minimum transmitted layer number a in the transmitted layer number of each residual error macro block in the current extension layer data according to the transmitted layer number matrix;
setting an initial value of the number i of the layers which can be coded as a, and setting a total prediction code stream B as 0;
determining the number i of encodable layers by:
step 1: determining a code stream size b of the (i+1)-th layer residual data of all target residual macroblocks whose number of transmitted layers is less than or equal to i, and updating the total predicted code stream B as B = B + b;
step 2: if B is smaller than the available flow of the extension layer data and the current number of encodable layers i is smaller than the maximum number of layers of each residual macroblock, updating the current number of encodable layers i as i = i + 1 and returning to step 1; if B is larger than the available flow of the extension layer data or the current number of encodable layers i is equal to the maximum number of layers, determining the number of encodable layers as i and ending the current flow.
6. The method of claim 5, wherein the total code stream of the (i+1)-th layer residual data of all target residual macroblocks whose number of transmitted layers is i is determined by:
mapping the (i+1)-th layer residual values of each residual macroblock into symbol values;
looking up, in a preset residual code table, the code length corresponding to each residual value in the (i+1)-th layer;
and adding up the code lengths corresponding to the residual values in the (i+1)-th layer of each residual macroblock to obtain the total code stream of the (i+1)-th layer residual data of the target residual macroblocks whose number of transmitted layers is i.
7. The method of claim 5, wherein before determining the number of encodable layers of each residual macroblock according to the available flow of the extension layer data, the method further comprises:
determining the transmitted layer number matrix, wherein the transmitted layer number matrix is used for recording the coded layer number of each residual error macro block in the coded previous frame image;
comparing each macro block in a current frame image which is being coded with each corresponding macro block in a previous frame image to determine a target macro block which is changed relative to the previous frame image in the current frame image;
and in the transmitted layer number matrix, setting the transmitted layer number at the position corresponding to each target macro block to be 0, thereby updating the transmitted layer number matrix.
8. The method of claim 5, wherein determining the minimum number of transmitted layers in the matrix of number of transmitted layers comprises:
if at least one residual error macro block of the current frame image is changed relative to the previous frame image, determining the minimum transmitted layer number to be 0;
if the current frame image is unchanged relative to the previous frame image, sorting all values in the transmitted layer number matrix, and taking the minimum value in the transmitted layer number matrix as the minimum transmitted layer number.
9. The method according to any one of claims 4 to 8, wherein encoding each of the residual macroblocks separately according to the number of encodable layers of the residual macroblock comprises:
and coding all the non-transmitted layer data between the transmitted layer number and the encodable layer number aiming at all the residual error macro blocks with the transmitted layer number smaller than the encodable layer number in the current extension layer data.
10. A method for transmitting an image, comprising:
acquiring an image after coding;
decomposing the coded image into base layer data and extension layer data, wherein the base layer data is coded data obtained by coding each macro block obtained by dividing an original image, and the extension layer data is coded data obtained by coding a residual macro block corresponding to each macro block;
and respectively decoding the base layer data and the extension layer data, and splicing the data after decoding the base layer data and the data after decoding the extension layer data to obtain the image before encoding.
11. The method of claim 10, wherein decoding the extension layer data comprises:
if a residual macroblock in the current frame image to be decoded is the same as the corresponding residual macroblock in the next frame image, recording the number of layers N of residual values of that residual macroblock that have been decoded in the current frame image;
and when decoding the residual macroblock in the next frame image, decoding its residual values starting from the (N+1)-th layer until all residual values of that residual macroblock in the next frame have been decoded.
12. An apparatus for transmitting an image, comprising:
the first acquisition module is used for acquiring an image to be transmitted, dividing the image to be transmitted into a plurality of macro blocks and determining the macro block type of each macro block according to the characteristics of each macro block, wherein the macro blocks of different types are coded by adopting different coding methods;
the first coding module is used for coding each macro block according to coding methods corresponding to different macro block types respectively, and taking the coded data of each macro block as base layer data;
a determining module, configured to obtain residual macroblocks corresponding to the respective macroblocks according to original macroblock data of the respective macroblocks and base layer data obtained after encoding the respective macroblocks;
and the second coding module is used for coding the residual error macro blocks corresponding to the macro blocks respectively, taking the coded data of the residual error macro blocks as extension layer data, and sending the data obtained by combining the basic layer data and the extension layer data to an image decoding end.
13. An apparatus for transmitting an image, comprising:
the second acquisition module is used for acquiring the coded image;
a decomposition module, configured to decompose the encoded image into base layer data and extension layer data, where the base layer data is encoded data obtained by encoding each macroblock obtained by dividing an original image, and the extension layer data is encoded data obtained by encoding a residual macroblock corresponding to each macroblock;
and the decoding module is used for respectively decoding the base layer data and the extension layer data, and splicing the data after decoding the base layer data and the data after decoding the extension layer data to obtain the image before encoding.
CN202110663402.XA 2021-06-15 2021-06-15 Image transmission method and device Pending CN113422960A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110663402.XA CN113422960A (en) 2021-06-15 2021-06-15 Image transmission method and device

Publications (1)

Publication Number Publication Date
CN113422960A true CN113422960A (en) 2021-09-21

Family

ID=77788637

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110663402.XA Pending CN113422960A (en) 2021-06-15 2021-06-15 Image transmission method and device

Country Status (1)

Country Link
CN (1) CN113422960A (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070014349A1 (en) * 2005-06-03 2007-01-18 Nokia Corporation Residual prediction mode in scalable video coding
CN103096056A (en) * 2011-11-08 2013-05-08 华为技术有限公司 Matrix coding method and coding device and matrix decoding method and decoding device
US20140241427A1 (en) * 2011-11-08 2014-08-28 Huawei Technologies Co., Ltd. Method and apparatus for coding matrix and method and apparatus for decoding matrix
CN103108177A (en) * 2011-11-09 2013-05-15 华为技术有限公司 Image coding method and image coding device
CN106412579A (en) * 2015-07-30 2017-02-15 浙江大华技术股份有限公司 Image coding method and apparatus, and image decoding method and apparatus
CN111093081A (en) * 2019-12-20 2020-05-01 合肥埃科光电科技有限公司 Lossless image compression method and system
CN111131831A (en) * 2019-12-20 2020-05-08 西安万像电子科技有限公司 Data transmission method and device
CN111447452A (en) * 2020-03-30 2020-07-24 西安万像电子科技有限公司 Data coding method and system
CN111464811A (en) * 2020-04-09 2020-07-28 西安万像电子科技有限公司 Image processing method, device and system

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116506623A (en) * 2023-06-28 2023-07-28 鹏城实验室 Highly parallel intra-block prediction method and system
CN116506623B (en) * 2023-06-28 2023-09-12 鹏城实验室 Highly parallel intra-block prediction method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210921