CN109996075B - Image decoding method and decoder - Google Patents

Image decoding method and decoder

Info

Publication number
CN109996075B
CN109996075B (application number CN201711483149.XA)
Authority
CN
China
Prior art keywords
block
decoding
coding
sub
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711483149.XA
Other languages
Chinese (zh)
Other versions
CN109996075A (en
Inventor
张志明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201711483149.XA priority Critical patent/CN109996075B/en
Priority to PCT/CN2018/112328 priority patent/WO2019128443A1/en
Publication of CN109996075A publication Critical patent/CN109996075A/en
Application granted granted Critical
Publication of CN109996075B publication Critical patent/CN109996075B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/44Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The application discloses an image decoding method and a decoder, which optimize the decoding algorithm, reduce the hardware overhead of the decoder and at the same time improve the decoder's performance. The method includes the following steps: acquiring auxiliary information of the co-located image of a first encoded sub-block, where the co-located image is an image having the same coordinate information as the current image block in which the first encoded sub-block is located; if the auxiliary information of the co-located image indicates that the decoding information of the first co-located block and the second co-located block is the same, acquiring the decoding auxiliary value of the second encoded sub-block as the decoding auxiliary value of the first encoded sub-block, the decoding auxiliary value of the second encoded sub-block having been calculated from the decoding information of the second co-located block; and decoding the first encoded sub-block according to the decoding auxiliary value of the first encoded sub-block to obtain the decoding information of the first encoded sub-block.

Description

Image decoding method and decoder
Technical Field
The present application relates to the field of video and image encoding and decoding technologies, and in particular, to an image decoding method and a decoder.
Background
Video coding and decoding technology is widely applied in fields such as the internet, television broadcasting, storage media and communications. New-generation video codec protocols such as H.264, H.265 and AVS2.0 all provide motion prediction modes such as the DIRECT mode and the SKIP mode. Decoding a coding block in a motion prediction mode includes the following steps: first, the decoder acquires the decoding information of the co-located block, in the co-located picture, corresponding to the current coding sub-block and performs a calculation on it to obtain a corresponding calculation result; second, the decoder acquires the decoding information of the coding sub-blocks adjacent to the current coding sub-block; finally, the decoder decodes the current coding sub-block according to the calculation result and the decoding information of the adjacent coding sub-blocks to obtain the decoding information of the current coding sub-block. The decoding information includes the coding mode, the motion vector and the reference picture index of the coding block.
In the related decoding technology, a coding block in a motion prediction mode is divided into N (N ≥ 2) parts to obtain N coding sub-blocks, and the decoder runs the above decoding process on the N coding sub-blocks one by one to complete the decoding of the coding block. To improve decoding efficiency, M (2 ≤ M ≤ N) computing units are integrated into the decoder to decode the N coding sub-blocks in parallel.
Although the above parallel decoding scheme improves the decoding efficiency of the coding block in a motion prediction mode, the larger number of parallel computing units increases the hardware overhead of the decoder, and the M computing units must cooperate in their computations, which further increases the complexity of the decoder hardware.
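To make the baseline concrete, the following C++ sketch shows the serial per-sub-block loop described above. All type and function names (DecodingInfo, infoOfColocatedBlock and so on) are hypothetical placeholders, and the protocol-defined computations are reduced to stubs; this illustrates the three steps only and is not an implementation of any particular codec.

```cpp
#include <vector>

// Hypothetical types; the real syntax elements are defined by the codec protocol.
struct DecodingInfo {          // coding mode, motion vector and reference picture index
    int codingMode = 0;
    int mvX = 0, mvY = 0;
    int refIdx = -1;
};
struct CodedSubBlock { int x = 0, y = 0; };   // position of the sub-block inside the coding block

// Stubs standing in for the protocol-defined derivations.
DecodingInfo infoOfColocatedBlock(const CodedSubBlock&) { return {}; }
DecodingInfo infoOfDecodedNeighbour(const CodedSubBlock&) { return {}; }
DecodingInfo decodeFrom(const DecodingInfo& colocated, const DecodingInfo& neighbour) {
    return colocated.codingMode != 0 ? colocated : neighbour;   // placeholder rule only
}

// Serial baseline: the three steps are repeated for every sub-block.
std::vector<DecodingInfo> decodePredictionBlock(const std::vector<CodedSubBlock>& subBlocks) {
    std::vector<DecodingInfo> decoded;
    for (const CodedSubBlock& sb : subBlocks) {
        DecodingInfo colocated = infoOfColocatedBlock(sb);   // step 1: co-located block info
        DecodingInfo neighbour = infoOfDecodedNeighbour(sb); // step 2: adjacent decoded sub-block info
        decoded.push_back(decodeFrom(colocated, neighbour)); // step 3: decode the current sub-block
    }
    return decoded;
}
```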
Disclosure of Invention
The application provides an image decoding method and a decoder, which are used for optimizing a decoding algorithm, reducing the hardware overhead of the decoder and simultaneously improving the performance of the decoder.
A first aspect of the present application provides an image decoding method, including:
acquiring auxiliary information of a co-location image of a first coding sub-block, wherein the co-location image is an image which has the same coordinate information as a current image block where the first coding sub-block is located, the current image block comprises the first coding sub-block and a second coding sub-block, the second coding sub-block is a coded sub-block which is decoded in the current image block, and the co-location image comprises a first co-location block and a second co-location block, wherein the coordinate information of the first co-location block is the same as the coordinate information of the first coding sub-block, and the coordinate information of the second co-location block is the same as the coordinate information of the second coding sub-block;
if the auxiliary information of the co-located image indicates that the decoding information of the first co-located block and the second co-located block are the same, acquiring a decoding auxiliary value of the second encoded sub-block as a decoding auxiliary value of the first encoded sub-block, the decoding auxiliary value of the second encoded sub-block being calculated according to the decoding information of the second co-located block;
and decoding the first coding sub-block according to the decoding auxiliary value of the first coding sub-block to obtain the decoding information of the first coding sub-block, the target decoding information being the decoding information of the coding sub-blocks adjacent to the first coding sub-block.
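A minimal sketch of this decision, assuming hypothetical types and helper names (DecodingAuxValue, cachedAuxOfSecondSubBlock and so on) that are not part of any standard API: when the auxiliary information says the two co-located blocks carry identical decoding information, the cached value is reused; otherwise the costly derivation is performed.

```cpp
#include <optional>

// Hypothetical representations; none of these names come from a standard API.
struct DecodingAuxValue { long long value = 0; };

struct ColocatedSideInfo {
    bool firstAndSecondColocatedSame = false;  // auxiliary information of the co-located image
};

struct DecoderState {
    std::optional<DecodingAuxValue> cachedAuxOfSecondSubBlock;  // stored while decoding the 2nd sub-block
};

// Stand-in for the protocol-defined (and expensive) derivation from the first co-located block.
DecodingAuxValue computeAuxFromFirstColocatedBlock() { return DecodingAuxValue{42}; }

DecodingAuxValue auxValueForFirstSubBlock(const DecoderState& state,
                                          const ColocatedSideInfo& sideInfo) {
    if (sideInfo.firstAndSecondColocatedSame && state.cachedAuxOfSecondSubBlock) {
        // The decoding information of the two co-located blocks is identical:
        // reuse the value computed earlier instead of recomputing it.
        return *state.cachedAuxOfSecondSubBlock;
    }
    return computeAuxFromFirstColocatedBlock();
}
```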
According to the technical scheme, the method has the following advantages:
When the coding mode, motion vector and reference picture index of the first co-located block and the second co-located block are the same, the decoder does not need to calculate the decoding auxiliary value of the first encoded sub-block again, because that value was already calculated from the coding mode, motion vector and reference picture index of the second co-located block according to the video coding protocol while the second encoded sub-block was decoded, and the resulting decoding auxiliary value of the second encoded sub-block was stored. The decoder can therefore directly copy or read the previously stored decoding auxiliary value of the second encoded sub-block, which simplifies the decoding operation. More importantly, the calculation of the decoding auxiliary value is extremely complicated, takes a long time and accounts for a large share of the total decoding time, so the image decoding method in the embodiments of the present application can greatly shorten the decoding time. This saves the computing resources of the decoder, improves its decoding efficiency and ultimately improves its decoding performance.
With reference to the first aspect of the present application, in a first possible implementation manner of the first aspect, before the obtaining the auxiliary information of the co-located image of the first encoded sub-block, the method further includes:
when all the coding sub-blocks in the common position image are decoded, judging whether the decoding information of the first common position block and the second common position block in the common position image is the same or not to obtain a judgment result;
and storing the judgment result as auxiliary information of the common position image.
After the image decoding is finished, the corresponding auxiliary information is automatically generated and stored, so that the decoding efficiency of subsequent images can be improved, the computing resources are saved, and the decoding performance of a decoder is improved.
With reference to the first aspect or the first possible implementation manner of the present application, in a second possible implementation manner of the first aspect of the present application, after obtaining the auxiliary information of the co-located image of the first encoded sub-block, the method further includes:
if the auxiliary information of the co-location image indicates that the decoding information of the first co-location block and the second co-location block are not the same, obtaining a decoding auxiliary value of the first encoded sub-block calculated according to the decoding information of the first co-location block, so that the first encoded sub-block is decoded according to the decoding auxiliary value of the first encoded sub-block to obtain the decoding information of the first encoded sub-block.
With reference to the first aspect, the first possible implementation manner of the first aspect, or the second possible implementation manner of the first aspect, in a third possible implementation manner of the first aspect, the decoding the first encoded sub-block according to the decoding auxiliary value of the first encoded sub-block includes:
if the decoding auxiliary value of the first coding subblock is within a first preset range, determining the decoding information of the first common position block as the decoding information of the first coding subblock;
and if the decoding auxiliary value of the first coding subblock is within a second preset range, decoding the first coding subblock according to the decoding information of the coding subblock adjacent to the first coding subblock to obtain the decoding information of the first coding subblock.
With reference to the third possible implementation manner of the first aspect of the present application, in a fourth possible implementation manner of the first aspect of the present application, the decoding information includes at least one of a coding mode, a motion vector and a reference picture index.
In a second aspect, embodiments of the present application provide a decoder having functions that implement the behavior of the decoder in the above method embodiments. The functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the functions described above.
In a third aspect, an embodiment of the present application provides a computer-readable storage medium for storing computer operation instructions for the decoder, which when executed on a computer, enable the computer to perform the image decoding method according to any one of the first aspects.
In a fourth aspect, embodiments of the present application provide a computer program product containing instructions, which when run on a computer, enable the computer to perform the image decoding method according to any one of the first aspect.
In addition, for the technical effects brought by any design of the second aspect to the fourth aspect, refer to the technical effects brought by the corresponding designs of the first aspect; details are not described herein again.
Drawings
Fig. 1 is a schematic diagram illustrating a coding block divided into a plurality of coding sub-blocks according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an embodiment of an image decoding method in an embodiment of the present application;
FIG. 3 is a schematic diagram of one 16x16 code block being divided into 16 4x4 code sub-blocks in the embodiment of the present application;
FIG. 4 is a schematic diagram of an embodiment of a decoder in an embodiment of the present application;
FIG. 5 is a schematic diagram of another embodiment of a decoder in the embodiment of the present application;
fig. 6 is a hardware configuration diagram of a decoder in the embodiment of the present application.
Detailed Description
The application provides an image decoding method and a decoder, which are used for optimizing a decoding algorithm, reducing the hardware overhead of the decoder and simultaneously improving the performance of the decoder.
The technical solutions in the present application will be described clearly and completely with reference to the accompanying drawings in the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims of the present application and in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In a video bitstream, the images in a video are compressed and encoded according to a video codec protocol to obtain coding blocks; an image is compressed and encoded into a coding block such as a macroblock (MB). In a video codec system, after an MB is transmitted from the encoder to the decoder, the decoder decodes the MB according to the video codec protocol to obtain the corresponding decoding information and thereby the image carried in the MB. Because the amount of data carried in a compressed MB is large, the decoder first divides the MB into a plurality of coding sub-blocks when decoding it, then decodes the divided coding sub-blocks one by one, and the decoding is finished when all the coding sub-blocks in the MB have been decoded. Fig. 1 is a schematic diagram of dividing a 16 × 16 coding block (MB) into 16 coding sub-blocks (a0-a15) of 4 × 4; the positions of the coding sub-blocks may be arbitrary, and the figure shows only one possible arrangement.
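For illustration only, the following sketch enumerates the 16 4 × 4 sub-blocks of a 16 × 16 MB in raster order; as noted above, this is just one possible arrangement.

```cpp
// One possible (raster-order) enumeration of the 16 4x4 sub-blocks of a 16x16 MB.
struct SubBlockRect { int x, y, w, h; };

void enumerateSubBlocks(SubBlockRect out[16]) {
    for (int idx = 0; idx < 16; ++idx) {
        out[idx] = { (idx % 4) * 4,   // left offset inside the MB
                     (idx / 4) * 4,   // top offset inside the MB
                     4, 4 };
    }
}
```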
On the one hand, analysis of the mode distribution of video bitstreams shows the following: for a coding block encoded in a motion prediction mode and divided into coding sub-blocks, the coding modes, motion vectors and reference picture indexes of the co-located blocks corresponding to all of the coding sub-blocks are in many cases the same, or those of the co-located blocks corresponding to a subset of the coding sub-blocks are the same, where a co-located block is an already decoded coding block having the same coordinate information as the coding sub-block.
Taking Fig. 1 as an example, for the coding sub-blocks a0-a15 it may happen that the coding modes, motion vectors and reference picture indexes of the 16 co-located blocks corresponding to a0-a15 are all the same, or that those of the 8 co-located blocks corresponding to a0-a7 are the same while those of the other 8 co-located blocks corresponding to a8-a15 are the same.
On the other hand, analysis of the video codec protocols (such as H.264, H.265 and AVS2.0) shows that when a coding block in a motion prediction mode is decoded, the coding modes, motion vectors and reference picture indexes of the co-located blocks corresponding to different coding sub-blocks may be the same. To improve the decoding efficiency of the decoder, the complicated decoding calculations performed on the coding mode, motion vector and reference picture index of the co-located block, such as the scaling operation, can therefore be simplified, reducing the calculation steps, shortening the calculation time and saving the computing resources of the decoder.
Based on the analysis of the two aspects, the image decoding method in the embodiment of the application is obtained by optimizing the decoding method, so that the decoding time is reduced, the calculation resources of the decoder are saved, and the decoding efficiency of the decoder is improved.
For convenience of understanding, the image decoding method in the embodiment of the present application is described in detail below with reference to fig. 2, which specifically includes the following steps:
as shown in fig. 2, an embodiment of an image decoding method in the embodiment of the present application includes:
201. The decoder stores the auxiliary information of the co-located picture.
The co-located picture is a picture that has been decoded before decoding the current image block and has the same coordinate information as the current image block. After the co-located image is decoded, the decoder stores the decoding information of the co-located image, and simultaneously stores the auxiliary information of the co-located image, wherein the decoding information comprises at least one of a coding mode, a motion vector and a reference image index.
The auxiliary information of the co-located picture indicates which encoded sub-blocks in the co-located picture have the same decoding information and which do not. The decoder judges whether the decoding information of all the encoded sub-blocks in the co-located picture, and/or of at least two encoded sub-blocks at specific positions in the co-located picture, is the same.
If it is the same, the decoder sets the corresponding flag bit to 1; if it is not the same, the decoder sets the flag bit to 0, obtaining a marking result. Finally, the decoder stores the marking result as the auxiliary information of the co-located picture, so that it can later use this auxiliary information when decoding the encoded sub-blocks in the current image block.
For a specific implementation of acquiring the auxiliary information of the co-located picture, refer to the related description in application scenario one below; details are not repeated here.
202. The decoder acquires the auxiliary information of the co-located picture of the first encoded sub-block.
The first coding sub-block is the coding sub-block to be decoded in the current image block. The decoder divides the current image block into a plurality of coding sub-blocks, which include the first coding sub-block and a second coding sub-block; the second coding sub-block has already been decoded.
According to the coordinate information of the current image block, the decoder determines the co-located picture having the same coordinate information as the current image block and extracts its auxiliary information. Further, based on the coordinate information of the first encoded sub-block, the decoder determines the first co-located block in the co-located picture having the same coordinate information as the first encoded sub-block.
The co-located picture further includes a second co-located block having the same coordinate information as the second encoded sub-block. The decoder stores the coding mode, motion vector and reference picture index of the second co-located block, and also stores the decoding auxiliary value of the second encoded sub-block calculated from the decoding information of the second co-located block.
For example, if the coding mode of the second co-located block is the intra coding mode, the decoder sets the motion vector of the co-located block to 0 and the reference picture index to -1, and then calculates the motion vector and the reference picture index of the co-located block according to the calculation method specified by the video codec protocol (such as H.264, H.265 or AVS2.0) to obtain the decoding auxiliary value of the second encoded sub-block. If the coding mode of the second co-located block is an inter coding mode with forward prediction, so that the second co-located block has a forward motion vector and a forward reference picture index, the decoder sets the motion vector of the second co-located block to the forward motion vector and the reference picture index to the forward reference picture index, and then calculates the motion vector and the reference picture index of the second co-located block according to the calculation method specified by the video codec protocol to obtain the decoding auxiliary value of the second encoded sub-block. If the coding mode of the second co-located block is an inter coding mode with backward prediction, so that the co-located block has a backward motion vector and a backward reference picture index, the decoder sets the motion vector of the co-located block to the backward motion vector and the reference picture index to the backward reference picture index, and then calculates the motion vector and the reference picture index of the second co-located block according to the calculation method specified by the video codec protocol to obtain the decoding auxiliary value of the second encoded sub-block.
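The case analysis in this example can be sketched as follows. The names are hypothetical, and the protocol-specified derivation (for instance the motion-vector scaling in H.264/H.265/AVS2.0) is reduced to a protocolScale() stub rather than implemented.

```cpp
// Hypothetical sketch of the case analysis described above.
enum class CodingMode { Intra, InterForward, InterBackward };

struct MotionVector { int x = 0, y = 0; };

struct ColocatedBlockInfo {
    CodingMode mode = CodingMode::Intra;
    MotionVector fwdMv, bwdMv;
    int fwdRefIdx = -1, bwdRefIdx = -1;
};

struct DecodingAuxValue { MotionVector mv; int refIdx = -1; };

DecodingAuxValue protocolScale(const MotionVector& mv, int refIdx) {
    return { mv, refIdx };          // placeholder for the protocol-specified calculation
}

DecodingAuxValue auxValueFromColocated(const ColocatedBlockInfo& blk) {
    switch (blk.mode) {
    case CodingMode::Intra:
        return protocolScale(MotionVector{0, 0}, -1);      // intra: MV = 0, ref index = -1
    case CodingMode::InterForward:
        return protocolScale(blk.fwdMv, blk.fwdRefIdx);    // forward prediction available
    case CodingMode::InterBackward:
    default:
        return protocolScale(blk.bwdMv, blk.bwdRefIdx);    // otherwise use the backward data
    }
}
```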
It should be noted that calculating the decoding auxiliary value of the second encoded sub-block (which has the same coordinate information as the second co-located block) from the decoding information of the second co-located block is a complex process that consumes a large amount of the decoder's computing resources and time.
203. If the auxiliary information of the co-located picture indicates that the decoding information of the first co-located block and the second co-located block is the same, the decoder acquires the decoding auxiliary value of the second encoded sub-block as the decoding auxiliary value of the first encoded sub-block.
If the auxiliary information of the co-located picture indicates that the decoding information of the first co-located block and the second co-located block is the same, the decoder determines that the decoding auxiliary value of the first encoded sub-block is the same as that of the second encoded sub-block. It then reads the decoding auxiliary value of the second encoded sub-block from the storage space in which it was stored, and uses it as the decoding auxiliary value of the first encoded sub-block to decode the first encoded sub-block and obtain its decoding information.
204. If the auxiliary information of the co-located image indicates that the decoding information of the first co-located block and the second co-located block are not the same, the decoder performs calculation according to the decoding information of the first co-located block to obtain the decoding auxiliary value of the first encoded sub-block.
If the auxiliary information of the co-located image indicates that the decoding information of the first co-located block and the second co-located block are not the same, the decoder calculates according to the decoding information of the first co-located block to obtain a decoding auxiliary value of the first encoded sub-block, so that the decoder decodes the first encoded sub-block using the decoding auxiliary value of the first encoded sub-block to obtain the decoding information of the first encoded sub-block.
The method by which the decoder calculates the decoding auxiliary value of the first encoded sub-block is similar to the method for calculating the decoding auxiliary value of the second encoded sub-block described in step 202, and is not described here again.
205. The decoder decodes the first encoded sub-block according to the decoding auxiliary value of the first encoded sub-block to obtain the decoding information of the first encoded sub-block.
If the decoding auxiliary value of the first encoded sub-block is within a first preset range, the decoder determines that the decoding information of the first encoded sub-block is the same as the decoding information of the first co-located block, and accordingly takes the decoding information of the first co-located block as the decoding information of the first encoded sub-block.
If the decoding auxiliary value of the first coding sub-block is within a second preset range, the decoder acquires the decoding information of the coding sub-block which is adjacent to the first coding sub-block and is already decoded in the current image block. Further, the decoder decodes the first encoded sub-block according to the decoding information of the encoded sub-block which is adjacent to the first encoded sub-block and has been decoded, so as to obtain the decoding information of the first encoded sub-block.
For example, when the decoding auxiliary value of the first encoded sub-block is within the second preset range, the decoder decodes the first encoded sub-block using a spatial prediction algorithm to obtain its decoding information. In the spatial prediction algorithm, the decoder first judges, according to the coding mode, whether the coding sub-block adjacent to the first coding sub-block is valid. If it is valid, the decoder calculates the target decoding information according to a first calculation method specified by the video codec protocol, such as a median filtering algorithm, to obtain the decoding information of the first coding sub-block. If it is invalid, the decoder calculates the target decoding information according to a second calculation method specified by the video codec protocol to obtain the decoding information of the first coding sub-block.
Specifically, when the coding mode of the coding sub-block adjacent to the first coding sub-block is the inter coding mode, the decoder determines that the adjacent coding sub-block is invalid; if its coding mode is the intra coding mode, the decoder determines that the coding sub-block corresponding to the target decoding information is valid.
It should be noted that the first calculation method and the second calculation method are further determined by the relevant provisions of the different video codec protocols, such as H.264, H.265 and AVS2.0; for their specific definitions, refer to the relevant contents of the H.264, H.265 and AVS2.0 protocols, which are not described here again.
It should further be noted that the first preset range and the second preset range are calculated according to the relevant calculation method in the video codec protocol and are used to determine whether the decoding information of the first encoded sub-block and the first co-located block is the same. When the decoding auxiliary value of the first encoded sub-block is within the first preset range, the decoder directly uses the decoding information of the first co-located block as the decoding information of the first encoded sub-block to complete the decoding of the first encoded sub-block. When the decoding auxiliary value of the first encoded sub-block is within the second preset range, the decoder needs to perform the relevant calculation of the first calculation method or the second calculation method on the decoding information of the encoded sub-blocks adjacent to the first encoded sub-block in the current image block to obtain the decoding information of the first encoded sub-block.
As for the calculation of the first preset range and the second preset range according to the video codec protocols: the calculation methods differ between protocols, so the first preset range and the second preset range also differ between protocols. For details on how the first preset range and the second preset range are calculated, refer to the relevant parts of the respective video codec protocols; they are not repeated here.
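The decision in step 205 can be sketched as below. The two preset ranges and the first and second calculation methods are protocol specific, so they appear here only as placeholder predicates and functions (inFirstPresetRange, medianPredict and so on); the neighbour arguments (left, top, top-right) are likewise illustrative assumptions.

```cpp
#include <algorithm>

struct MotionVector { int x = 0, y = 0; };
struct DecodingInfo { int codingMode = 0; MotionVector mv; int refIdx = -1; };

bool inFirstPresetRange(long long aux)  { return aux == 0; }   // placeholder predicate
bool inSecondPresetRange(long long aux) { return aux != 0; }   // placeholder predicate

// Placeholder for the first calculation method (e.g. median filtering of neighbour MVs).
DecodingInfo medianPredict(const DecodingInfo& a, const DecodingInfo& b, const DecodingInfo& c) {
    DecodingInfo r;
    r.mv.x = std::max(std::min(a.mv.x, b.mv.x), std::min(std::max(a.mv.x, b.mv.x), c.mv.x));
    r.mv.y = std::max(std::min(a.mv.y, b.mv.y), std::min(std::max(a.mv.y, b.mv.y), c.mv.y));
    r.refIdx = a.refIdx;
    return r;
}
// Placeholder for the second calculation method.
DecodingInfo fallbackPredict(const DecodingInfo& a) { return a; }

DecodingInfo decodeFirstSubBlock(long long auxValue,
                                 const DecodingInfo& firstColocated,
                                 const DecodingInfo& left,
                                 const DecodingInfo& top,
                                 const DecodingInfo& topRight,
                                 bool neighboursValid) {
    if (inFirstPresetRange(auxValue))
        return firstColocated;                         // copy the co-located block's decoding info
    if (inSecondPresetRange(auxValue))
        return neighboursValid ? medianPredict(left, top, topRight)
                               : fallbackPredict(left);
    return firstColocated;                             // defensive default
}
```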
206. The decoder stores the auxiliary information of the current image block.
After all the encoded sub-blocks in the current image block are decoded, the decoder acquires and stores the auxiliary information of the current image block to assist the decoder in decoding the subsequent image.
Step 206 is similar to step 201 described above, and is not described in detail here.
In this embodiment, when the coding mode, motion vector and reference picture index of the first co-located block and the second co-located block are the same, the decoder does not need to calculate the decoding auxiliary value of the first encoded sub-block again, because the coding mode, motion vector and reference picture index of the second co-located block were already processed according to the video coding protocol while the second encoded sub-block was decoded, producing the decoding auxiliary value of the second encoded sub-block, which was stored. The decoder can directly copy or read the previously stored decoding auxiliary value of the second encoded sub-block, which simplifies the decoding operation. More importantly, the calculation of the decoding auxiliary value is very complicated, takes a long time and accounts for a large share of the total decoding time, so the image decoding method in the embodiments of the present application can greatly shorten the decoding time. This saves the computing resources of the decoder, improves its decoding efficiency and ultimately improves its decoding performance.
Furthermore, the image decoding method in this application substantially optimizes the decoding operation process and improves the decoding performance of the decoder, without requiring multiple computing units for parallel processing: the decoding calculation can be carried out by a single computing unit.
To facilitate understanding of steps 201 and 206 above, the following describes the specific process of storing the auxiliary information of the co-located picture in a concrete application scenario:
Application scenario one: For simplicity, the full-frame coding and full-spatial motion prediction modes under the H.264 protocol are taken as the example. As shown in Fig. 3, under this protocol and these coding modes a 16×16 MB is divided into 16 4×4 coding sub-blocks (b0-b15); the figure shows only one arrangement, other arrangements are possible, and this application does not limit them. According to the relevant provisions of the H.264 protocol, the decoding information of the current 16×16 MB is stored per 4×4 coding sub-block, i.e. the coding mode, motion vector and reference picture index of each of b0-b15 are stored. After the MB has been decoded, it is judged according to the H.264 protocol whether the coding modes, motion vectors and reference picture indexes of all the coding sub-blocks b0-b15, and of the coding sub-blocks at certain specific positions, are the same; the flag bits are assigned according to the judgment result, and the assigned flag bits are stored. For the arrangement shown in Fig. 3, the auxiliary information can be expressed with a 16-bit flag assigned as follows:
Bit 0: 1 if the coding modes, motion vectors and reference picture indexes of all 4×4 blocks within the MB are consistent; otherwise 0.
Bit 1: 1 if b0, b5, b10 and b15 are all in intra coding modes, or all have forward prediction with consistent forward motion vectors and reference picture indexes; otherwise 0.
Bit 2: 1 if b0, b5, b10 and b15 are all in inter coding modes, none has forward prediction, and their backward motion vectors and reference picture indexes are consistent; otherwise 0.
Bit 3: 1 if the coding modes of b0, b5, b10 and b15 are consistent and their forward and backward motion vectors and reference picture indexes are consistent; otherwise 0.
Bit 4: 1 if b0, b1, b2 and b3 are all in intra coding modes, or all have forward prediction with consistent forward motion vectors and reference picture indexes; otherwise 0.
Bit 5: 1 if b0, b1, b2 and b3 are all in inter coding modes, none has forward prediction, and their backward motion vectors and reference picture indexes are consistent; otherwise 0.
Bit 6: 1 if the coding modes of b0, b1, b2 and b3 are consistent and their forward and backward motion vectors and reference picture indexes are consistent; otherwise 0.
Bit 7: 1 if b4, b5, b6 and b7 are all in intra coding modes, or all have forward prediction with consistent forward motion vectors and reference picture indexes; otherwise 0.
Bit 8: 1 if b4, b5, b6 and b7 are all in inter coding modes, none has forward prediction, and their backward motion vectors and reference picture indexes are consistent; otherwise 0.
Bit 9: 1 if the coding modes of b4, b5, b6 and b7 are consistent and their forward and backward motion vectors and reference picture indexes are consistent; otherwise 0.
Bit 10: 1 if b8, b9, b10 and b11 are all in intra coding modes, or all have forward prediction with consistent forward motion vectors and reference picture indexes; otherwise 0.
Bit 11: 1 if b8, b9, b10 and b11 are all in inter coding modes, none has forward prediction, and their backward motion vectors and reference picture indexes are consistent; otherwise 0.
Bit 12: 1 if the coding modes of b8, b9, b10 and b11 are consistent and their forward and backward motion vectors and reference picture indexes are consistent; otherwise 0.
Bit 13: 1 if b12, b13, b14 and b15 are all in intra coding modes, or all have forward prediction with consistent forward motion vectors and reference picture indexes; otherwise 0.
Bit 14: 1 if b12, b13, b14 and b15 are all in inter coding modes, none has forward prediction, and their backward motion vectors and reference picture indexes are consistent; otherwise 0.
Bit 15: 1 if the coding modes of b12, b13, b14 and b15 are consistent and their forward and backward motion vectors and reference picture indexes are consistent; otherwise 0.
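Under the assumptions that the stored per-sub-block record looks like the hypothetical SubBlockInfo below and that "consistent" means bit-identical values, the 16-bit auxiliary flag of application scenario one could be built as in the following sketch; the grouping into b0/b5/b10/b15 and the four row groups follows the bit layout above.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical per-sub-block record holding what the text says is stored for b0..b15.
struct SubBlockInfo {
    bool intra = false;          // intra coding mode
    bool hasForward = false;     // forward prediction present
    int  codingMode = 0;
    int  fwdMvX = 0, fwdMvY = 0, fwdRef = -1;
    int  bwdMvX = 0, bwdMvY = 0, bwdRef = -1;
};

using Group = std::vector<int>;  // indices into the 16 sub-blocks of the MB

// "All intra, or all have forward prediction with identical forward MV and ref index."
static bool fwdConsistent(const SubBlockInfo b[16], const Group& g) {
    bool allIntra = true, allFwd = true;
    for (int i : g) { allIntra = allIntra && b[i].intra; allFwd = allFwd && b[i].hasForward; }
    if (allIntra) return true;
    if (!allFwd)  return false;
    const SubBlockInfo& r = b[g[0]];
    for (int i : g)
        if (b[i].fwdMvX != r.fwdMvX || b[i].fwdMvY != r.fwdMvY || b[i].fwdRef != r.fwdRef) return false;
    return true;
}

// "All inter, no forward prediction, identical backward MV and ref index."
static bool bwdConsistent(const SubBlockInfo b[16], const Group& g) {
    const SubBlockInfo& r = b[g[0]];
    for (int i : g) {
        if (b[i].intra || b[i].hasForward) return false;
        if (b[i].bwdMvX != r.bwdMvX || b[i].bwdMvY != r.bwdMvY || b[i].bwdRef != r.bwdRef) return false;
    }
    return true;
}

// "Coding modes identical and both forward and backward MV / ref index identical."
static bool allConsistent(const SubBlockInfo b[16], const Group& g) {
    const SubBlockInfo& r = b[g[0]];
    for (int i : g) {
        if (b[i].codingMode != r.codingMode) return false;
        if (b[i].fwdMvX != r.fwdMvX || b[i].fwdMvY != r.fwdMvY || b[i].fwdRef != r.fwdRef) return false;
        if (b[i].bwdMvX != r.bwdMvX || b[i].bwdMvY != r.bwdMvY || b[i].bwdRef != r.bwdRef) return false;
    }
    return true;
}

// Builds the 16-bit auxiliary flag for one decoded MB, following the bit layout above.
uint16_t buildAuxFlag(const SubBlockInfo b[16]) {
    const Group all    = {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15};
    const Group diag   = {0, 5, 10, 15};
    const Group rows[4] = { {0,1,2,3}, {4,5,6,7}, {8,9,10,11}, {12,13,14,15} };

    uint16_t flag = 0;
    flag |= static_cast<uint16_t>(allConsistent(b, all))  << 0;   // bit 0
    flag |= static_cast<uint16_t>(fwdConsistent(b, diag)) << 1;   // bit 1
    flag |= static_cast<uint16_t>(bwdConsistent(b, diag)) << 2;   // bit 2
    flag |= static_cast<uint16_t>(allConsistent(b, diag)) << 3;   // bit 3
    for (int r = 0; r < 4; ++r) {                                 // bits 4..15, three bits per row group
        flag |= static_cast<uint16_t>(fwdConsistent(b, rows[r])) << (4 + 3 * r);
        flag |= static_cast<uint16_t>(bwdConsistent(b, rows[r])) << (5 + 3 * r);
        flag |= static_cast<uint16_t>(allConsistent(b, rows[r])) << (6 + 3 * r);
    }
    return flag;
}
```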
Application scenario two: taking the coding block of application scenario one as an example, the image decoding method in the embodiment of the present application is described in detail as follows:
s100: the 16x16MB is divided into 16 4x4 coding blocks b0-b15 according to the H.264 protocol, and the following calculations from S101 to S110 are sequentially performed according to the decoding order of b0, b1, b2, b3 … … …, b14 and b 15.
If the currently decoded coded sub-block is b0, jumping to execute the following step S102; if the encoded subblock currently being decoded is any one of the encoded subblocks b1 through b15, the following step S101 is sequentially performed.
S101: find the co-located picture corresponding to the coding sub-block currently being decoded.
The co-located picture has already been decoded by the time the current picture is decoded, and the decoder has stored the coding mode, motion vector and reference picture index of each coded sub-block in the co-located picture, as well as the auxiliary information of the co-located picture.
S102: if the coding sub-block currently being decoded is b0, jump to step S104; otherwise, extract the auxiliary information G and judge, according to G, whether the co-located block of the coding sub-block currently being decoded has the same coding mode, motion vector and reference picture index as the co-located block of a coding sub-block already decoded in the current MB.
S103: if they are the same, copy the calculation result previously obtained according to the H.264 protocol, denote it R, and jump to step S106; if not, jump to step S104.
S104: find the coordinates of the corresponding co-located block in the co-located picture according to the H.264 protocol.
S105: according to the coordinates obtained in step S104, look up the coding mode, motion vector and reference picture index of the co-located block in the information stored for the co-located picture, compute the calculation result specified by the H.264 protocol, denote it R and store it; the specific calculation is as in step S103 and is not described here again.
S106: if the coding sub-block currently being decoded is b0, proceed to step S107; if it is any one of b1 through b15, jump to step S108.
S107: determine the coding mode, motion vector and reference picture index of the coding blocks adjacent to the current MB according to the H.264 protocol.
S108: calculate the motion vector and reference picture index of the current coding sub-block from the calculation result R and the coding mode, motion vector and reference picture index of the adjacent coding block obtained in step S107, using the relevant calculation steps specified by the H.264 protocol.
S109: determine, according to the H.264 protocol, whether the coding mode, motion vector and reference picture index of the current coding sub-block need to be stored. If so, store them for use as reference for subsequently decoded images; if not, do not store them and jump back to steps S101 to S108 to decode the next coding sub-block in the current image.
S110: if b0-b15 have all been decoded and the 16×16 MB needs to be stored to optimize the decoding of subsequent images, determine whether the coding modes, motion vectors and reference picture indexes of the 16 coding sub-blocks b0-b15 are all consistent and mark the result to generate the corresponding auxiliary information; likewise determine, according to the H.264 protocol, whether the coding modes, motion vectors and reference picture indexes of the coding sub-blocks at certain specific positions among the 16 coding sub-blocks are consistent and mark the result to generate the corresponding auxiliary information. The specific judgment process is as described in application scenario one and is not repeated here. If decoding is not yet complete, jump back to step S101 and continue decoding.
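A compact control-flow sketch of S100 to S110 is given below. Every protocol-specific computation (co-located coordinate derivation, the H.264 calculation that yields R, neighbour prediction, the storage decision) is reduced to a stub, and all names are hypothetical; the sketch only shows how the copied result R short-circuits the expensive derivation.

```cpp
#include <array>
#include <optional>

struct Info   { int mode = 0, mvX = 0, mvY = 0, refIdx = -1; };
struct Result { Info info; };                       // "R" in the text

// Stubs for the protocol-defined steps.
bool   auxSaysSameAsDecoded(int idx)  { (void)idx; return false; }  // S102: check against auxiliary info G
Result computeRFromColocated(int idx) { (void)idx; return {}; }     // S104/S105
Info   neighbourInfoOfMb()            { return {}; }                // S107
Info   combine(const Result& r, const Info& n) { (void)n; return r.info; }  // S108
bool   mustStore(int idx)             { (void)idx; return true; }   // S109

void decodeMacroblock() {
    std::optional<Result> lastR;                     // R copied/reused across sub-blocks
    Info mbNeighbour;                                // S107 result, fetched once for b0
    std::array<Info, 16> decoded{};                  // decoding information of b0..b15

    for (int idx = 0; idx < 16; ++idx) {             // S100: decode b0..b15 in order
        Result r;
        if (idx != 0 && auxSaysSameAsDecoded(idx) && lastR) {
            r = *lastR;                              // S103: copy the earlier result R
        } else {
            r = computeRFromColocated(idx);          // S104/S105: derive and store R
            lastR = r;
        }
        if (idx == 0) mbNeighbour = neighbourInfoOfMb();  // S106/S107: adjacent-block info, once
        decoded[idx] = combine(r, mbNeighbour);      // S108: MV and ref index of the sub-block
        if (mustStore(idx)) { /* S109: keep decoded[idx] as reference for later pictures */ }
    }
    // S110: after b15, build and store the auxiliary information of this MB
    // (see the buildAuxFlag() sketch in application scenario one).
}
```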
The following describes the decoder in the embodiment of the present application in detail with reference to the specific implementation manner, specifically as follows:
As shown in Fig. 4, which is a schematic diagram of an embodiment of the decoder in the embodiment of the present application, the decoder 40 includes: a first obtaining module 401, a second obtaining module 402 and a decoding module 403;
a first obtaining module 401, configured to perform the operation process described in the foregoing step 202;
a second obtaining module 402, which may perform the operation process described in the above step 203;
the decoding module 403 may perform the operation procedure described in step 205.
In one example, in addition to the modules shown in Fig. 4, the decoder 50 shown in Fig. 5 further includes: a judging module 504, a storage module 505 and a calculating module 506. The judging module 504 and the storage module 505 may be configured to execute the operations described in steps 201 and 206 above, including the operations of acquiring and storing the auxiliary information in application scenario one; the calculating module 506 is configured to execute the operation process described in step 204.
For the functions of the modules, reference may be made to the description in the embodiment, the first application scenario, and the second application scenario corresponding to fig. 2 for understanding, and details of the description are not repeated here.
The decoder may also be a decoding chip. In an implementation manner, the decoder may be implemented by hardware, or by hardware executing corresponding software, where the hardware and the software include modules corresponding to one or more of the above functions.
The following describes the hardware structure of the decoder in the embodiment of the present application in detail, specifically as follows:
as shown in fig. 6, which is a hardware structure of the decoder in the embodiment of the present application, the decoder 60 includes:
a processor 601 and a memory 602; the memory 602 may include read-only memory and random-access memory, among other things, and provides instructions and data to the processor 601. A portion of the memory 602 may also include non-volatile random access memory (NVRAM). The memory 602 stores the following elements, executable modules or data structures, or a subset thereof, or an expanded set thereof:
Operation instructions: including various operation instructions for implementing various operations. Operating system: including various system programs for implementing various basic services and for handling hardware-based tasks.
Processor 601 may also be referred to as a Central Processing Unit (CPU). The image decoding method disclosed in the embodiment of the present application can be applied to the processor 601 or implemented by the processor 601. The processor 601 may be an integrated circuit chip having decoding capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or instructions in the form of software in the processor 601.
The processor 601 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or perform the various methods, steps and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the image decoding method disclosed in the embodiments of the present application may be performed directly by a hardware decoding processor, or by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM or EPROM, or registers.
The processor 601 executes the decoding operation of the decoder described in the embodiment of the method corresponding to fig. 2 by calling the operation instruction stored in the memory 602.
The embodiment of the present application provides a computer-readable storage medium, which is used for storing computer operating instructions for the decoder, and when the computer operating instructions are executed on a computer, the computer is enabled to execute the image decoding method in the embodiment corresponding to fig. 2.
The present application provides a computer program product containing instructions, which when run on a computer, enables the computer to execute the image decoding method in the embodiment corresponding to fig. 2.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may also be implemented in the form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the technical solution scope of the embodiments of the present application.

Claims (12)

1. An image decoding method, comprising:
acquiring auxiliary information of a co-location image of a first coding sub-block, wherein the co-location image is an image which has the same coordinate information as a current image block where the first coding sub-block is located, the current image block comprises the first coding sub-block and a second coding sub-block, the second coding sub-block is a coded sub-block which is decoded in the current image block, and the co-location image comprises a first co-location block and a second co-location block, wherein the coordinate information of the first co-location block is the same as the coordinate information of the first coding sub-block, and the coordinate information of the second co-location block is the same as the coordinate information of the second coding sub-block;
if the auxiliary information of the co-located image indicates that the decoding information of the first co-located block and the second co-located block are the same, acquiring a decoding auxiliary value of the second encoded sub-block as a decoding auxiliary value of the first encoded sub-block, wherein the decoding auxiliary value of the second encoded sub-block is calculated according to the decoding information of the second co-located block;
and decoding the first coding sub-block according to the decoding auxiliary value of the first coding sub-block to obtain the decoding information of the first coding sub-block.
2. The method of claim 1, wherein before said obtaining the side information of the co-located image of the first encoded sub-block, the method further comprises:
when all the coding sub-blocks in the common position image are decoded, judging whether the decoding information of the first common position block and the second common position block in the common position image is the same or not to obtain a judgment result;
and storing the judgment result as auxiliary information of the common position image.
3. The method according to any one of claims 1 to 2, wherein after obtaining the auxiliary information of the co-located image of the first encoded sub-block, the method further comprises:
if the auxiliary information of the co-located image indicates that the decoding information of the first co-located block and the second co-located block are not the same, calculating a decoding auxiliary value of the first encoded sub-block according to the decoding information of the first co-located block, so that the first encoded sub-block is decoded according to the decoding auxiliary value of the first encoded sub-block to obtain the decoding information of the first encoded sub-block.
4. The method of claim 3, wherein decoding the first encoded sub-block according to the decoding auxiliary value of the first encoded sub-block comprises:
determining the decoding information of the first common position block as the decoding information of the first coding sub-block if the decoding auxiliary value of the first coding sub-block is within a first preset range;
and if the decoding auxiliary value of the first coding subblock is within a second preset range, decoding the first coding subblock according to the decoding information of the coding subblock adjacent to the first coding subblock to obtain the decoding information of the first coding subblock.
5. The method of claim 4, wherein the decoding information comprises at least one of a coding mode, a motion vector, and a co-located picture index.
6. A decoder, comprising:
a first obtaining module, configured to obtain auxiliary information of a co-located image of a first encoded sub-block, wherein the co-located image is an image having the same coordinate information as a current image block in which the first encoded sub-block is located, the current image block comprises the first encoded sub-block and a second encoded sub-block, the second encoded sub-block is a sub-block in the current image block that has already been decoded, the co-located image comprises a first co-located block and a second co-located block, the coordinate information of the first co-located block is the same as that of the first encoded sub-block, and the coordinate information of the second co-located block is the same as that of the second encoded sub-block;
a second obtaining module, configured to obtain a decoding auxiliary value of the second encoded sub-block as a decoding auxiliary value of the first encoded sub-block if the auxiliary information of the co-located image indicates that the decoding information of the first co-located block and the second co-located block is the same, wherein the decoding auxiliary value of the second encoded sub-block is calculated according to the decoding information of the second co-located block;
and a decoding module, configured to decode the first encoded sub-block according to the decoding auxiliary value of the first encoded sub-block, to obtain the decoding information of the first encoded sub-block.
7. The decoder of claim 6, wherein the decoder further comprises:
a determining module, configured to determine, when decoding of all the encoded sub-blocks in the co-located image is completed, whether the decoding information of the first co-located block and the second co-located block in the co-located image is the same, to obtain a determination result;
and a storage module, configured to store the determination result as the auxiliary information of the co-located image.
8. The decoder according to claim 6 or 7, wherein the decoder further comprises:
a calculating module, configured to calculate a decoding auxiliary value of the first encoded sub-block according to the decoding information of the first co-located block if the auxiliary information of the co-located image indicates that the decoding information of the first co-located block and the second co-located block is not the same, so that the first encoded sub-block is decoded according to the decoding auxiliary value of the first encoded sub-block to obtain the decoding information of the first encoded sub-block.
9. The decoder of claim 8, wherein the decoding module is specifically configured to:
if the decoding auxiliary value of the first encoded sub-block is within a first preset range, determine the decoding information of the first co-located block as the decoding information of the first encoded sub-block;
and if the decoding auxiliary value of the first encoded sub-block is within a second preset range, decode the first encoded sub-block according to the decoding information of an encoded sub-block adjacent to the first encoded sub-block, to obtain the decoding information of the first encoded sub-block.
10. The decoder of claim 9, wherein the decoding information comprises at least one of a coding mode, a motion vector, and a co-located image index.
11. A decoder, comprising:
a memory and a processor;
wherein the memory is configured to store operation instructions;
and the processor is configured to perform the image decoding method according to any one of claims 1 to 5 by invoking the operation instructions.
12. A computer-readable storage medium storing computer instructions which, when executed on a computer, cause the computer to perform the image decoding method according to any one of claims 1 to 5.
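
For readers implementing the behaviour recited in claims 1 and 2 (mirrored by claims 6 and 7), the following C++ sketch shows one possible way a decoder could build and store the auxiliary information of a co-located image once all of its encoded sub-blocks have been decoded, and then reuse the second encoded sub-block's decoding auxiliary value for the first encoded sub-block when that auxiliary information indicates identical decoding information. All type names, field names, and the equality test are illustrative assumptions; the claims do not prescribe any particular data layout.

    #include <optional>

    // Illustrative types; the field list follows claim 5, the names are assumed.
    struct MotionVector { int x; int y; };

    struct DecodingInfo {
        int codingMode;
        MotionVector mv;
        int coLocatedImageIndex;
    };

    struct CoLocatedImage {
        DecodingInfo firstBlockInfo;    // first co-located block
        DecodingInfo secondBlockInfo;   // second co-located block
        bool sameDecodingInfo = false;  // auxiliary information (claim 2)
    };

    // Assumed field-by-field comparison of the decoding information.
    bool SameDecodingInfo(const DecodingInfo& a, const DecodingInfo& b) {
        return a.codingMode == b.codingMode &&
               a.mv.x == b.mv.x && a.mv.y == b.mv.y &&
               a.coLocatedImageIndex == b.coLocatedImageIndex;
    }

    // Claim 2: once all encoded sub-blocks of the co-located image are decoded,
    // compare the two co-located blocks and store the result as the image's
    // auxiliary information.
    void StoreAuxiliaryInformation(CoLocatedImage& img) {
        img.sameDecodingInfo = SameDecodingInfo(img.firstBlockInfo, img.secondBlockInfo);
    }

    // Claim 1: if the auxiliary information says the two co-located blocks have
    // the same decoding information, reuse the decoding auxiliary value already
    // computed for the second encoded sub-block instead of recomputing it.
    std::optional<double> ReuseAuxValue(const CoLocatedImage& img,
                                        double secondSubBlockAuxValue) {
        if (img.sameDecodingInfo) {
            return secondSubBlockAuxValue;
        }
        return std::nullopt;  // otherwise fall back to the calculation of claim 3
    }

Caching the comparison result once per co-located image lets later sub-blocks of the current image block skip the per-block calculation of the decoding auxiliary value whenever the comparison holds.
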
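Claims 3 and 4 (mirrored by claims 8 and 9) cover the other branch: when the auxiliary information reports differing decoding information, a decoding auxiliary value is calculated from the first co-located block and then tested against two preset ranges. The claims specify neither the calculation nor the range boundaries, so in the sketch below the motion-vector magnitude and the threshold of 4 are placeholders chosen only for illustration.

    #include <cmath>

    struct MotionVector { int x; int y; };

    struct DecodingInfo {
        int codingMode;
        MotionVector mv;
        int coLocatedImageIndex;
    };

    // Placeholder for the calculation of claim 3; the claims leave the formula
    // open, so motion-vector magnitude stands in for it here.
    double ComputeDecodingAuxValue(const DecodingInfo& firstCoLocated) {
        return std::hypot(static_cast<double>(firstCoLocated.mv.x),
                          static_cast<double>(firstCoLocated.mv.y));
    }

    // Range-based decision of claim 4, with an assumed boundary between the
    // first and second preset ranges.
    DecodingInfo DecodeFirstSubBlock(double auxValue,
                                     const DecodingInfo& firstCoLocated,
                                     const DecodingInfo& adjacentDecoded) {
        constexpr double kFirstRangeUpperBound = 4.0;  // assumed, not from the claims
        if (auxValue < kFirstRangeUpperBound) {
            // First preset range: take over the first co-located block's
            // decoding information directly.
            return firstCoLocated;
        }
        // Second preset range: derive the decoding information from an
        // adjacent, already-decoded sub-block of the current image block.
        return adjacentDecoded;
    }

A real decoder would replace both placeholders with whatever calculation and preset ranges the encoder and decoder have agreed on.
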
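Claim 6 recites the same flow as three decoder modules. A minimal structural sketch, assuming C++ and freely chosen method names, might group them as follows; the class is only meant to show how the first obtaining module, second obtaining module, and decoding module divide the work.

    #include <optional>

    struct MotionVector { int x; int y; };
    struct DecodingInfo { int codingMode; MotionVector mv; int coLocatedImageIndex; };

    struct CoLocatedImageInfo {
        bool sameDecodingInfo;             // auxiliary information of the co-located image
        DecodingInfo firstCoLocatedBlock;
        DecodingInfo secondCoLocatedBlock;
    };

    class Decoder {
     public:
        // First obtaining module: look up the auxiliary information of the
        // co-located image for the first encoded sub-block (backing store assumed).
        const CoLocatedImageInfo& ObtainAuxInfo(int /*subBlockIndex*/) const {
            return auxInfo_;
        }

        // Second obtaining module: reuse the second encoded sub-block's decoding
        // auxiliary value when the auxiliary information indicates identical
        // decoding information of the two co-located blocks.
        std::optional<double> ObtainAuxValue(const CoLocatedImageInfo& aux,
                                             double secondSubBlockAuxValue) const {
            if (aux.sameDecodingInfo) {
                return secondSubBlockAuxValue;
            }
            return std::nullopt;  // handled by the calculating module of claim 8
        }

        // Decoding module: decode the first encoded sub-block from the decoding
        // auxiliary value (range decision as in claim 9; the boundary is assumed).
        DecodingInfo Decode(double auxValue, const CoLocatedImageInfo& aux,
                            const DecodingInfo& adjacentDecoded) const {
            constexpr double kFirstRangeUpperBound = 4.0;
            return auxValue < kFirstRangeUpperBound ? aux.firstCoLocatedBlock
                                                    : adjacentDecoded;
        }

     private:
        CoLocatedImageInfo auxInfo_{};  // placeholder backing store
    };
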
CN201711483149.XA 2017-12-29 2017-12-29 Image decoding method and decoder Active CN109996075B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201711483149.XA CN109996075B (en) 2017-12-29 2017-12-29 Image decoding method and decoder
PCT/CN2018/112328 WO2019128443A1 (en) 2017-12-29 2018-10-29 Image decoding method and decoder

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711483149.XA CN109996075B (en) 2017-12-29 2017-12-29 Image decoding method and decoder

Publications (2)

Publication Number Publication Date
CN109996075A (en) 2019-07-09
CN109996075B (en) 2022-07-12

Family

ID=67063018

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711483149.XA Active CN109996075B (en) 2017-12-29 2017-12-29 Image decoding method and decoder

Country Status (2)

Country Link
CN (1) CN109996075B (en)
WO (1) WO2019128443A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113706573A (en) * 2020-05-08 2021-11-26 Hangzhou Hikvision Digital Technology Co., Ltd. Method and device for detecting moving object and storage medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101115199A (en) * 2002-04-19 2008-01-30 Matsushita Electric Industrial Co., Ltd. Method for calculating motion vector
CN101605256A (en) * 2008-06-12 2009-12-16 Huawei Technologies Co., Ltd. Video coding and decoding method and device
CN101755458A (en) * 2006-07-11 2010-06-23 Nokia Corporation Scalable video coding
WO2011099241A1 (en) * 2010-02-10 2011-08-18 Mitsubishi Electric Corporation Image encoding device, image decoding device, image encoding method, and image decoding method
KR20130043054A (en) * 2011-10-19 2013-04-29 Electronics and Telecommunications Research Institute Method and device for image processing by image division
CN103716631A (en) * 2012-09-29 2014-04-09 Huawei Technologies Co., Ltd. Image processing method, device, coder and decoder
CN103716629A (en) * 2012-09-29 2014-04-09 Huawei Technologies Co., Ltd. Image processing method, device, coder and decoder
CN104244002A (en) * 2013-06-14 2014-12-24 Beijing Samsung Telecommunication Technology Research Co., Ltd. Method and device for obtaining motion information in video coding/decoding process
CN105637875A (en) * 2013-10-18 2016-06-01 LG Electronics Inc. Method and apparatus for decoding multi-view video
WO2017086738A1 (en) * 2015-11-19 2017-05-26 Electronics and Telecommunications Research Institute Method and apparatus for image encoding/decoding

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
IT1119091B (en) * 1979-06-05 1986-03-03 CSELT Centro Studi e Laboratori Telecomunicazioni Procedure and device for the numerical coding and decoding of the PAL composite television signal
CN102215396A (en) * 2010-04-09 2011-10-12 Huawei Technologies Co., Ltd. Video coding and decoding methods and systems
ES2904650T3 (en) * 2010-04-13 2022-04-05 GE Video Compression LLC Video encoding using multitree image subdivisions
JP2012175332A (en) * 2011-02-21 2012-09-10 Nippon Telegraph and Telephone Corp (NTT) Image coding device, image decoding device, image coding method, image decoding method, image coding program, and image decoding program
US8958642B2 (en) * 2011-10-19 2015-02-17 Electronics And Telecommunications Research Institute Method and device for image processing by image division
JP5722761B2 (en) * 2011-12-27 2015-05-27 Sony Computer Entertainment Inc. Video compression apparatus, image processing apparatus, video compression method, image processing method, and data structure of video compression file
RU2594985C2 (en) * 2012-01-18 2016-08-20 JVC Kenwood Corporation Moving image encoding device, moving image encoding method and moving image encoding program, as well as moving image decoding device, moving image decoding method and moving image decoding program
KR20130107861A (en) * 2012-03-23 2013-10-02 Electronics and Telecommunications Research Institute Method and apparatus for inter layer intra prediction

Also Published As

Publication number Publication date
WO2019128443A1 (en) 2019-07-04
CN109996075A (en) 2019-07-09

Similar Documents

Publication Publication Date Title
JP7237874B2 (en) Image prediction method and apparatus
CN107809642B (en) Method for encoding and decoding video image, encoding device and decoding device
US10812806B2 (en) Method and apparatus of localized luma prediction mode inheritance for chroma prediction in video coding
CN107046645B (en) Image coding and decoding method and device
US20160150242A1 (en) Method of Background Residual Prediction for Video Coding
US20130301734A1 (en) Video encoding and decoding with low complexity
JP2018524918A (en) Image prediction method and image prediction apparatus
WO2013041244A1 (en) Video encoding and decoding with improved error resilience
JP2018513627A (en) Image encoding / decoding method and related apparatus
GB2492778A (en) Motion compensated image coding by combining motion information predictors
US20060039476A1 (en) Methods for efficient implementation of skip/direct modes in digital video compression algorithms
CN109996075B (en) Image decoding method and decoder
CN112203091B (en) Motion vector prediction method, system and computer medium based on quadratic polynomial
CN111654696B (en) Intra-frame multi-reference-line prediction method and device, storage medium and terminal
WO2012168242A2 (en) Method and device for encoding a sequence of images and method and device for decoding a sequence of image
ES2909314T3 (en) Image coding method, image decoding method, image coding device, image decoding device, image coding program, and image decoding program
CN113365077B (en) Inter-frame prediction method, encoder, decoder, computer-readable storage medium
CN109672889A (en) The method and device of the sequence data head of constraint
RU2809673C2 (en) Method and device for image prediction
RU2808688C2 (en) Method and device for image prediction
CN112055201B (en) Video coding method and related device thereof
CN117640948A (en) Video image frame encoding method, decoding method and related devices
US20200120352A1 (en) Moving image processing device, moving image processing method, and recording medium having moving image processing program stored thereon
CN112738522A (en) Video coding method and device
CN115643409A (en) Code prediction method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant