WO2019072049A1 - Inter-frame prediction method, apparatus, and storage medium - Google Patents

Inter-frame prediction method, apparatus, and storage medium

Info

Publication number
WO2019072049A1
Authority
WO
WIPO (PCT)
Prior art keywords
reference frame
processed
prediction unit
amvp
candidate
Prior art date
Application number
PCT/CN2018/103637
Other languages
English (en)
French (fr)
Inventor
林四新
张宏顺
李雅卿
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司
Priority to EP18865441.2A (EP3697093A4)
Publication of WO2019072049A1
Priority to US16/597,606 (US11076168B2)

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/513Processing of motion vectors
    • H04N19/517Processing of motion vectors by encoding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103Selection of coding mode or of prediction mode
    • H04N19/105Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136Incoming video signal characteristics or properties
    • H04N19/137Motion inside a coding unit, e.g. average field, frame or block difference
    • H04N19/139Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/157Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/159Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/172Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a picture, frame or field
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/513Processing of motion vectors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/513Processing of motion vectors
    • H04N19/517Processing of motion vectors by encoding
    • H04N19/52Processing of motion vectors by encoding by predictive encoding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/57Motion estimation characterised by a search window with variable size or shape
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/573Motion compensation with multiple frame prediction using two or more reference frames in a given prediction direction

Definitions

  • the present disclosure relates to the field of video processing technologies, and in particular to inter prediction.
  • Inter-frame prediction is a means of determining a reference frame from neighboring coded blocks that have already been coded and reconstructed around the current coded block, and of predictively encoding the current coded block with that reference frame through motion estimation, so as to remove temporal redundancy from the video; inter-frame prediction is an important part of video coding, especially in hybrid coding frameworks such as H.264/AVC, H.265/HEVC, and AVS.
  • An important part of inter prediction for the current coding block is selecting the optimal target reference frame from all reference frames (generally taken to be the reference frame with the lowest coding cost among all reference frames), and then predictively coding the current coding block with that target reference frame through motion estimation.
  • the embodiments of the present disclosure provide an inter prediction method, apparatus, and storage medium, so as to reduce the processing complexity of target reference frame selection, reduce the complexity of video coding, and improve the efficiency of video coding.
  • an inter prediction method including:
  • comparing the candidate reference frame with a predetermined first reference frame; if the candidate reference frame is different from the first reference frame, performing motion estimation on the to-be-processed prediction unit with the candidate reference frame and the first reference frame respectively, and determining a target reference frame of the to-be-processed prediction unit from the candidate reference frame and the first reference frame according to the coding costs of the candidate reference frame and the first reference frame obtained by the motion estimation; wherein the first reference frame is the reference frame used most often in the sub-coded blocks of the to-be-processed coding block.
  • an embodiment of the present disclosure further provides an inter prediction apparatus, including:
  • a reference coded block determining module configured to determine at least one reference coded block spatially adjacent to a to-be-processed prediction unit of a coded block to be processed
  • an AMVP determining module configured to determine, for each preset reference frame, the corresponding AMVP of the to-be-processed prediction unit in that reference frame according to the reference information of each reference coded block;
  • a candidate reference frame determining module configured to determine a target AMVP from the corresponding AMVPs of the to-be-processed prediction unit in the reference frames, and to use the reference frame corresponding to the target AMVP as a candidate reference frame;
  • a first comparison selection module configured to compare the candidate reference frame with a predetermined first reference frame and, if the candidate reference frame is different from the first reference frame, to perform motion estimation on the to-be-processed prediction unit with the candidate reference frame and the first reference frame respectively, and to determine the target reference frame of the to-be-processed prediction unit from the candidate reference frame and the first reference frame according to the coding costs of the candidate reference frame and the first reference frame obtained by the motion estimation; wherein the first reference frame is the reference frame used most often in the sub-coded blocks of the to-be-processed coding block.
  • an embodiment of the present disclosure further provides a storage medium storing a program executable by a central processing unit or a graphics processor, the program being configured to:
  • compare the candidate reference frame with a predetermined first reference frame; if the candidate reference frame is different from the first reference frame, perform motion estimation on the to-be-processed prediction unit with the candidate reference frame and the first reference frame respectively, and determine a target reference frame of the to-be-processed prediction unit from the candidate reference frame and the first reference frame according to the coding costs of the candidate reference frame and the first reference frame obtained by the motion estimation; wherein the first reference frame is the reference frame used most often in the sub-coded blocks of the to-be-processed coding block.
  • an embodiment of the present disclosure further provides a video encoding device, where the video encoding device includes:
  • a processor, a communication interface, a memory, and a communication bus;
  • the processor, the communication interface, and the memory complete communication with each other through the communication bus;
  • the communication interface is an interface of a communication module;
  • the memory is configured to store program code and transmit the program code to the processor;
  • the processor is configured to invoke the program code in the memory to perform the inter prediction method of the first aspect.
  • embodiments of the present disclosure also provide a computer program product comprising instructions that, when executed on a computer, cause the computer to perform the inter prediction method of the first aspect.
  • It can be seen that the embodiment of the present disclosure may determine the corresponding AMVP of the to-be-processed prediction unit in each reference frame according to the reference information of the at least one reference coded block spatially adjacent to the to-be-processed prediction unit, select the target AMVP from these AMVPs, use the reference frame corresponding to the target AMVP as a candidate reference frame, and compare it with the predetermined first reference frame, so as to determine the target reference frame of the to-be-processed prediction unit according to the comparison result, thereby accomplishing the determination of the target reference frame of the to-be-processed prediction unit in the inter prediction process.
  • Since the embodiment of the present disclosure selects a candidate reference frame from all reference frames based on AMVP, motion estimation on the to-be-processed prediction unit with the candidate reference frame and the first reference frame is needed only when the candidate reference frame differs from the predetermined first reference frame, and the target reference frame of the to-be-processed prediction unit is then selected from the two according to the coding costs obtained by the motion estimation; the embodiment of the present disclosure can therefore greatly reduce the number of motion estimation searches in the target reference frame selection process for the to-be-processed prediction unit, reducing the processing complexity of target reference frame selection, which in turn reduces the complexity of video coding and improves its efficiency.
  • FIG. 1 is a flowchart of an inter prediction method according to an embodiment of the present disclosure
  • FIG. 2 is a schematic diagram showing the relationship between a coding block and a prediction unit
  • FIG. 3 is an exemplary diagram of a reference coded block
  • FIG. 4 is a flowchart of a method for determining a corresponding AMVP of a current prediction unit under a reference frame
  • FIG. 5 is a flowchart of a method for determining a corresponding AMVP of a current prediction unit under each reference frame and determining a target AMVP;
  • FIG. 6 is a flowchart of a method for comparing a candidate reference frame with a first reference frame to determine a target reference frame
  • FIG. 7 is another flowchart of an inter prediction method according to an embodiment of the present disclosure.
  • FIG. 8 is a flowchart of an example of a video encoding application according to an embodiment of the present disclosure.
  • FIG. 9 is a structural block diagram of an inter prediction apparatus according to an embodiment of the present disclosure.
  • FIG. 10 is a block diagram showing another structure of an inter prediction apparatus according to an embodiment of the present disclosure.
  • FIG. 11 is a block diagram showing still another structure of an inter prediction apparatus according to an embodiment of the present disclosure.
  • FIG. 12 is a block diagram showing the hardware structure of a video encoding device.
  • To reduce the processing complexity of target reference frame selection, the key is to reduce the number of motion estimation searches in the target reference frame selection process; since each motion estimation imposes a large processing burden, reasonably reducing the number of motion estimation searches used for target reference frame selection in the inter-frame prediction process is particularly critical. Based on this, the present disclosure proposes a novel inter-frame prediction method to reduce the number of motion estimation searches used for target reference frame selection in the inter-frame prediction process, so as to reduce the processing complexity of target reference frame selection.
  • FIG. 1 is a flowchart of an inter prediction method according to an embodiment of the present disclosure.
  • The inter prediction method may be applied to a video encoding device, where the video encoding device may be a CPU (central processing unit) or a GPU (graphics processing unit) with video encoding capability; the video encoding device can be implemented by a mobile phone, a notebook computer, or the like, or by a server.
  • an inter prediction method provided by an embodiment of the present disclosure may include:
  • Step S100 Determine at least one reference coded block that is spatially adjacent to the to-be-processed prediction unit of the to-be-processed coding block.
  • the coded block to be processed can be understood as a coded block to be processed for predictive coding.
  • The coded block to be processed can be divided into multiple prediction units. As shown in FIG. 2, one coding block (CU) can be divided into two prediction units (PUs). Obviously, FIG. 2 is only an example, and one coding block is not limited to being divided into two prediction units.
  • Each prediction unit in the coding block to be processed may be handled with the inter prediction method provided by the embodiment of the present disclosure, and the to-be-processed prediction unit may be regarded as the prediction unit of the to-be-processed coding block that is currently to undergo predictive coding.
  • A coded block can be regarded as a coding block that has already undergone predictive coding using the inter prediction method provided by the embodiment of the present disclosure.
  • A coded block has already selected the reference frame used for its inter prediction by using the inter prediction method provided by the embodiment of the present disclosure; therefore, the reference frame selected by the coded block, and the motion vector (MV, Motion Vector) determined based on that reference frame, are known. In the disclosed embodiment, the reference frame selected by a coded block can be considered the corresponding reference frame of that coded block, and the motion vector determined based on that reference frame can be regarded as the corresponding motion vector of that coded block.
  • The selected at least one reference coded block adjacent to the to-be-processed prediction unit may be understood as: at least one coded block that is spatially adjacent to the to-be-processed prediction unit and touches its boundary.
  • As shown in FIG. 3, the at least one reference coded block may include: the coded block (b0) adjoining the top-right corner of the to-be-processed prediction unit and located to its upper right, the coded block adjoining the top-right corner and located above the to-be-processed prediction unit, the coded block adjoining the top-left corner, the coded block adjoining the bottom-left corner and located to its lower left, and the coded block (a1) located to the left of the to-be-processed prediction unit.
  • The example shown in FIG. 3 is only an exemplary illustration; the selection of the at least one reference coded block is not limited to that shown in FIG. 3, and, for example, only b0, b2, and a0 may be selected instead. A coordinate sketch of these neighbouring positions is given below.
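  • For illustration only, a minimal sketch of locating the neighbouring positions described above; the pixel coordinates, the labels b0/b1/b2/a0/a1, and the mapping from a sample position to its covering coded block are assumptions, not definitions from the disclosure.

```python
def spatial_neighbour_positions(x, y, w, h):
    """Sample positions just outside a prediction unit with top-left corner
    (x, y) and size w x h, one per neighbouring coded block; an encoder would
    map each position to the already-coded block covering that sample."""
    return {
        "b0": (x + w,     y - 1),      # adjoins the top-right corner, above-right
        "b1": (x + w - 1, y - 1),      # adjoins the top-right corner, directly above
        "b2": (x - 1,     y - 1),      # adjoins the top-left corner
        "a0": (x - 1,     y + h),      # adjoins the bottom-left corner, below-left
        "a1": (x - 1,     y + h - 1),  # directly to the left
    }
```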
  • Step S110 For each preset reference frame, determine, according to the reference information of each reference coded block, the corresponding advanced motion vector prediction value (AMVP, Advanced Motion Vector Prediction) of the to-be-processed prediction unit in that reference frame.
  • Reference information, such as the reference frame and the motion vector corresponding to each reference coded block, is known; the embodiment of the present disclosure may therefore determine the corresponding AMVP of the to-be-processed prediction unit under each reference frame according to the reference information of each reference coded block.
  • For any reference frame, the embodiment of the present disclosure may determine, under that reference frame, the motion vector of the to-be-processed prediction unit relative to each reference coded block, obtaining the candidate motion vectors of the to-be-processed prediction unit under that reference frame (under a reference frame, the motion vector of the to-be-processed prediction unit relative to each reference coded block may be regarded as a candidate motion vector of the to-be-processed prediction unit under that reference frame); the motion vector with the smallest AMVP cost is then selected from these candidate motion vectors as the corresponding AMVP of the to-be-processed prediction unit under that reference frame (the corresponding AMVP of the to-be-processed prediction unit under a reference frame can be regarded as the candidate motion vector with the smallest AMVP cost under that reference frame).
  • The reference frames are processed frame by frame in the above manner to determine the corresponding AMVP of the to-be-processed prediction unit under each reference frame.
  • The AMVP cost of the to-be-processed prediction unit relative to a candidate motion vector may be determined by locating, in the reference frame, the reference block that is offset by the candidate motion vector and has the same shape as the to-be-processed prediction unit, and then calculating the error sum between the to-be-processed prediction unit and that reference block, which yields the AMVP cost of the to-be-processed prediction unit relative to that candidate motion vector; processing each candidate motion vector in this manner gives the AMVP cost of the to-be-processed prediction unit relative to each candidate motion vector under a reference frame.
  • The way to determine the AMVP cost is not limited to this, and other methods can be used; a simple sketch of the error-sum variant follows.
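  • For illustration only, a minimal sketch of the error-sum (sum-of-absolute-differences) AMVP cost described above, assuming integer-pixel motion vectors, frames stored as NumPy arrays, and no handling of offsets that fall outside the frame; real encoders typically also add a motion-vector bit cost.

```python
import numpy as np

def amvp_cost(cur_frame, ref_frame, pu_rect, cand_mv):
    """Sum of absolute differences between the to-be-processed prediction unit
    and the reference block of the same shape, offset by the candidate motion
    vector inside the reference frame."""
    x, y, w, h = pu_rect   # PU top-left corner and size in the current frame
    dx, dy = cand_mv       # candidate MV in integer pixels (assumption)
    pu = cur_frame[y:y + h, x:x + w].astype(np.int32)
    ref = ref_frame[y + dy:y + dy + h, x + dx:x + dx + w].astype(np.int32)
    return int(np.abs(pu - ref).sum())
```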
  • A reference frame may be associated with and identified by a unique reference frame index.
  • Step S120 Determine the target AMVP from the corresponding AMVPs of the to-be-processed prediction unit in the reference frames, and use the reference frame corresponding to the target AMVP as the candidate reference frame.
  • The embodiment of the present disclosure may take, among the corresponding AMVPs of the to-be-processed prediction unit in the reference frames, the AMVP with the smallest AMVP cost as the target AMVP; the reference frame corresponding to the target AMVP is then used as the candidate reference frame.
  • Step S130 Compare the candidate reference frame with the predetermined first reference frame; if the candidate reference frame is different from the first reference frame, perform motion estimation on the to-be-processed prediction unit with the candidate reference frame and the first reference frame respectively, and determine the target reference frame of the to-be-processed prediction unit from the candidate reference frame and the first reference frame according to the coding costs of the candidate reference frame and the first reference frame obtained by the motion estimation.
  • The predetermined first reference frame may be the reference frame used most often in the sub-coded blocks of the to-be-processed coding block, where the sub-coded blocks of the to-be-processed coding block are pre-coded.
  • A coding block may be coded in multiple coding block modes, each of which may correspond to a partitioning mode of prediction units, and a prediction unit has independent motion information within the coding block; generally one coding block can be divided into 2 prediction units. A sub-coded block, by contrast, is a sub-region obtained by further spatially dividing the coding block; for example, one coding block can be fixedly divided into 4 sub-coded blocks. A sketch of selecting the first reference frame from the sub-coded blocks follows.
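  • As a simple illustration, a sketch of taking the first reference frame to be the reference frame used most often by the pre-coded sub-coded blocks; the `ref_frame` attribute and the tie-breaking behaviour are assumptions for illustration.

```python
from collections import Counter

def most_used_reference_frame(sub_coded_blocks):
    """Return the reference frame used by the largest number of pre-coded
    sub-coded blocks of the coding block (ties are broken by Counter order
    here; the text does not specify a tie-breaking rule)."""
    counts = Counter(block.ref_frame for block in sub_coded_blocks)
    return counts.most_common(1)[0][0]
```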
  • The embodiment of the present disclosure may compare the candidate reference frame with the predetermined first reference frame; if the two are the same, the candidate reference frame may be selected as the target reference frame of the to-be-processed prediction unit; if the two are different, motion estimation may be performed on the to-be-processed prediction unit with the candidate reference frame and the first reference frame respectively, and, according to the coding costs of the candidate reference frame and the first reference frame obtained by the motion estimation, the reference frame with the smallest coding cost is selected from the candidate reference frame and the first reference frame as the target reference frame of the to-be-processed prediction unit.
  • An inter prediction method provided by an embodiment of the present disclosure includes: determining at least one reference coded block spatially adjacent to the to-be-processed prediction unit of the to-be-processed coding block; determining, according to the reference information of each reference coded block, the corresponding AMVP of the to-be-processed prediction unit in each reference frame; determining the target AMVP from the corresponding AMVPs of the to-be-processed prediction unit in the reference frames and using the reference frame corresponding to the target AMVP as the candidate reference frame; and comparing the candidate reference frame with the predetermined first reference frame and, if the candidate reference frame is different from the first reference frame, performing motion estimation on the to-be-processed prediction unit with the candidate reference frame and the first reference frame respectively and determining the target reference frame of the to-be-processed prediction unit from the candidate reference frame and the first reference frame according to the coding costs obtained by the motion estimation.
  • It can be seen that the embodiment of the present disclosure may determine the corresponding AMVP of the to-be-processed prediction unit in each reference frame according to the reference information of the at least one reference coded block adjacent to the to-be-processed prediction unit, and select the target AMVP from these AMVPs; the reference frame corresponding to the target AMVP is used as a candidate reference frame and compared with the predetermined first reference frame, so that the target reference frame of the to-be-processed prediction unit is determined according to the comparison result, thereby accomplishing the determination of the target reference frame of the to-be-processed prediction unit in the inter prediction process.
  • Since the embodiment of the present disclosure selects a candidate reference frame from all reference frames based on AMVP, motion estimation on the to-be-processed prediction unit with the candidate reference frame and the first reference frame is needed only when the candidate reference frame differs from the predetermined first reference frame, and the target reference frame of the to-be-processed prediction unit is then selected from the two according to the coding costs obtained by the motion estimation; the embodiment of the present disclosure can therefore greatly reduce the number of motion estimation searches in the target reference frame selection process for the to-be-processed prediction unit, reducing the processing complexity of target reference frame selection, which in turn reduces the complexity of video coding and improves its efficiency.
  • By contrast, if the motion vector prediction value (MVP, Motion Vector Prediction) of the to-be-processed prediction unit of the to-be-processed coding block were estimated in each reference frame, motion estimation were then performed on the to-be-processed prediction unit in each reference frame starting from the corresponding MVP, the motion vector (MV) and coding cost of the to-be-processed prediction unit under each reference frame were determined, and the reference frame with the smallest coding cost were selected as the target reference frame, the number of motion estimation searches would be very large, resulting in higher processing complexity for target reference frame selection.
  • The embodiment of the present disclosure instead performs motion estimation on the to-be-processed prediction unit based only on the candidate reference frame and the first reference frame to select the target reference frame of the to-be-processed prediction unit, which greatly reduces the number of motion estimation searches.
  • The reference information of a reference coded block may include: the reference frame and the motion vector corresponding to that reference coded block. Correspondingly, FIG. 4 illustrates a method, provided by the embodiment of the present disclosure, for determining the corresponding AMVP of the to-be-processed prediction unit under a reference frame; the method may include:
  • Step S200 Under the current reference frame, determine the motion vector of the to-be-processed prediction unit relative to each reference coded block according to the reference frame and motion vector corresponding to each reference coded block, obtaining the candidate motion vectors of the to-be-processed prediction unit under this reference frame.
  • Here, the reference frame refers to the reference frame currently traversed among all reference frames; that is, all preset reference frames are traversed frame by frame, and for each currently traversed reference frame the method shown in FIG. 4 is performed to determine the corresponding AMVP of the to-be-processed prediction unit under that reference frame.
  • For any reference coded block, the motion vector of the to-be-processed prediction unit relative to that reference coded block under the current reference frame may be determined from the current reference frame together with the reference frame and motion vector corresponding to that reference coded block.
  • For example, if reference coded block a0 has corresponding reference frame refa0 and corresponding motion vector MVa0, then under any currently traversed reference frame refIdx (refIdx may range over all reference frames), the motion vector of the to-be-processed prediction unit relative to a0 may be taken as the ratio of refIdx to refa0 multiplied by MVa0.
  • That is, for any reference coded block, the embodiment of the present disclosure may multiply the ratio of the current reference frame to the reference frame corresponding to the reference coded block by the motion vector corresponding to that reference coded block; doing this for each reference coded block determines the motion vector of the to-be-processed prediction unit relative to each reference coded block under the current reference frame, giving the candidate motion vectors of the to-be-processed prediction unit under this reference frame. A sketch of this scaling is given below.
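  • A minimal sketch of that scaling, assuming the "ratio of the reference frames" is the ratio of temporal distances (e.g., picture order count differences) between the current frame and each reference frame; the POC-based convention and the handling of a zero denominator are assumptions for illustration.

```python
def scale_mv(mv, cur_poc, block_ref_poc, target_ref_poc):
    """Scale a reference coded block's motion vector to a target reference
    frame by the ratio of temporal distances from the current frame."""
    num = cur_poc - target_ref_poc   # distance to the currently traversed reference frame
    den = cur_poc - block_ref_poc    # distance to the block's own reference frame
    if den == 0:
        return mv                    # degenerate case: reuse the MV unchanged
    s = num / den
    return (mv[0] * s, mv[1] * s)
```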
  • Step S210 Under this reference frame, determine an AMVP cost of the prediction unit to be processed with respect to each candidate motion vector.
  • For any candidate motion vector under the currently traversed reference frame, the embodiment of the present disclosure may determine the reference block that is offset by the candidate motion vector and has the same shape as the to-be-processed prediction unit (i.e., the reference block corresponding to the candidate motion vector), and calculate the error sum between the to-be-processed prediction unit and that reference block (optionally, the error sum can be understood as the sum of the absolute values of the pixel differences between the to-be-processed prediction unit and the reference block) to obtain the AMVP cost of that candidate motion vector.
  • Step S220 The candidate motion vector with the smallest AMVP cost is used as the corresponding AMVP of the to-be-processed prediction unit in this reference frame.
  • Each currently traversed reference frame among the preset reference frames is processed with the flow shown in FIG. 4, so that after all preset reference frames have been traversed, the corresponding AMVP of the to-be-processed prediction unit in each reference frame is obtained.
  • The embodiment of the present disclosure may then take, among the corresponding AMVPs of the to-be-processed prediction unit in the reference frames, the AMVP with the smallest AMVP cost as the target AMVP.
  • In one implementation, the embodiment of the present disclosure may set a reference AMVP cost whose initial value is a preset maximum value; each time the corresponding AMVP of the to-be-processed prediction unit under a reference frame is determined, the AMVP cost of that AMVP is compared with the reference AMVP cost, and if it is smaller, the reference AMVP cost is updated to the AMVP cost of that AMVP, until the corresponding AMVP of the to-be-processed prediction unit under every reference frame has been determined.
  • By comparing and updating in this way, the final reference AMVP cost is the minimum AMVP cost among the corresponding AMVPs of the to-be-processed prediction unit under the reference frames; thus the AMVP corresponding to the last updated reference AMVP cost may be taken as the target AMVP, which achieves the purpose of determining, from the corresponding AMVPs of the to-be-processed prediction unit in the reference frames, the target AMVP with the smallest AMVP cost.
  • FIG. 5 is a flowchart of a method for determining a corresponding AMVP and a target AMVP of a to-be-processed prediction unit in each reference frame according to an embodiment of the present disclosure.
  • the process may include:
  • Step S300 Set a reference AMVP cost, and make the initial value of the reference AMVP cost be a preset maximum value.
  • Step S310 Select a reference frame from all reference frames as the current reference frame, and determine an AMVP of the to-be-processed prediction unit under the reference frame.
  • For the implementation of step S310, reference may be made to FIG. 4.
  • Step S320 Determine whether the AMVP cost of the AMVP of the to-be-processed prediction unit under this reference frame is less than the reference AMVP cost; if not, go to step S330, and if yes, go to step S340.
  • Step S330 Keep the reference AMVP cost unchanged, and go to step S350.
  • That is, if the AMVP cost of the AMVP under this reference frame is not less than the reference AMVP cost, the reference AMVP cost is kept unchanged.
  • Step S340 Update the reference AMVP cost with the AMVP cost of the AMVP of the to-be-processed prediction unit under this reference frame, update the target AMVP to the AMVP of the to-be-processed prediction unit under this reference frame, update the candidate reference frame to this reference frame, and go to step S350.
  • That is, if the AMVP cost is less than the reference AMVP cost, the reference AMVP cost is updated with that AMVP cost, the target AMVP is updated with the AMVP of the to-be-processed prediction unit under this reference frame, and the candidate reference frame is updated with this reference frame.
  • Step S350 Determine whether all reference frames have been traversed; if yes, execute step S360, and if no, return to step S310.
  • Step S360 Take the last updated target AMVP as the determined target AMVP, and the last updated candidate reference frame as the determined candidate reference frame.
  • Looping over the reference frames with the method shown in FIG. 5 makes the last updated reference AMVP cost the minimum AMVP cost among the corresponding AMVPs of the to-be-processed prediction unit in the reference frames; the last updated target AMVP (the AMVP corresponding to the last updated reference AMVP cost) is therefore the target AMVP to be determined in the embodiment of the present disclosure, and the last updated candidate reference frame (the reference frame corresponding to the last updated target AMVP) is the candidate reference frame to be determined in the embodiment of the present disclosure. A compact sketch of this loop follows.
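  • For illustration only, a minimal sketch of the FIG. 5 loop; `amvp_for` stands in for the per-reference-frame AMVP derivation of FIG. 4 (for example, built from the scaling and cost helpers sketched earlier) and is an assumed helper, not an interface defined by the disclosure.

```python
def select_candidate_reference_frame(pred_unit, reference_frames, amvp_for):
    """Traverse all preset reference frames, keep the AMVP with the smallest
    AMVP cost (a running minimum initialised to a maximum value), and return
    that AMVP together with its reference frame as the candidate reference frame."""
    best_cost = float("inf")                       # reference AMVP cost, initial maximum
    target_amvp, candidate_ref = None, None
    for ref in reference_frames:
        amvp_mv, cost = amvp_for(pred_unit, ref)   # FIG. 4: AMVP under this reference frame
        if cost < best_cost:                       # step S340: update on improvement
            best_cost, target_amvp, candidate_ref = cost, amvp_mv, ref
    return target_amvp, candidate_ref
```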
  • After determining the target AMVP, the embodiment of the present disclosure may use the reference frame corresponding to the target AMVP as the candidate reference frame and compare it with the predetermined first reference frame. In the embodiment of the disclosure, the sub-coded blocks of the to-be-processed coding block need to be pre-coded first; after their encoding is completed, the reference frame used most often among the sub-coded blocks of the to-be-processed coding block may be used as the first reference frame. The manner of coding the sub-coded blocks of the to-be-processed coding block can follow a conventional approach, which the embodiment of the present disclosure does not limit.
  • FIG. 6 shows an optional method for comparing the candidate reference frame with the first reference frame to determine a target reference frame.
  • the process can include:
  • Step S400 Determine whether the candidate reference frame is the same as the first reference frame; if yes, execute step S410, and if not, execute step S420.
  • Step S410 Select a reference frame from the candidate reference frame and the first reference frame as the target reference frame of the prediction unit to be processed.
  • Step S420 Perform motion estimation on the prediction unit to be processed by the candidate reference frame, and obtain a coding cost corresponding to the candidate reference frame; and perform motion estimation on the prediction unit to be processed by using the first reference frame to obtain a coding cost corresponding to the first reference frame.
  • When performing motion estimation with the candidate reference frame, the embodiment of the present disclosure may use the AMVP of the candidate reference frame as the search starting point of the motion estimation, perform motion estimation on the to-be-processed prediction unit, and obtain the rate-distortion cost corresponding to the candidate reference frame (the rate-distortion cost is an optional form of coding cost); likewise, the embodiment of the present disclosure may use the AMVP of the first reference frame as the search starting point of the motion estimation, perform motion estimation on the to-be-processed prediction unit, and obtain the rate-distortion cost corresponding to the first reference frame. A sketch of this comparison follows step S430 below.
  • Step S430 Select a reference frame with the smallest coding cost from the candidate reference frame and the first reference frame as the target reference frame of the prediction unit to be processed.
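  • A minimal sketch of the FIG. 6 decision, assuming a `motion_estimate` helper that, seeded with a frame's AMVP, returns a motion vector and a coding cost (for example, a rate-distortion cost); the helper and the tie-handling are assumptions for illustration.

```python
def select_target_reference_frame(pred_unit, candidate_ref, first_ref, motion_estimate):
    """FIG. 6: if the candidate and first reference frames coincide, use that
    frame directly; otherwise run motion estimation with each and keep the
    frame with the smaller coding cost."""
    if candidate_ref == first_ref:
        return candidate_ref                                        # step S410: no extra search
    _, cand_cost = motion_estimate(pred_unit, candidate_ref)        # step S420
    _, first_cost = motion_estimate(pred_unit, first_ref)
    return candidate_ref if cand_cost <= first_cost else first_ref  # step S430
```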
  • After the target reference frame of the to-be-processed prediction unit is determined, the embodiment of the present disclosure may use the target reference frame to perform motion estimation on the to-be-processed prediction unit, determine the residual block corresponding to the to-be-processed prediction unit, transform and quantize the residual block, and then rearrange and entropy-code the transformed and quantized residual block to obtain the video coding result of the to-be-processed prediction unit; by combining the video coding results of the prediction units of the to-be-processed coding block, the inter-prediction coding result of the to-be-processed coding block is obtained.
  • FIG. 7 shows another process of the inter prediction method provided by the embodiment of the present disclosure.
  • the process may include:
  • Step S500 Determine the coded block adjoining the top-right corner of the to-be-processed prediction unit of the to-be-processed coding block and located to its upper right, the coded block adjoining the top-right corner and located above the to-be-processed prediction unit, the coded block adjoining the top-left corner of the to-be-processed prediction unit, the coded block adjoining the bottom-left corner and located to its lower left, and the coded block located to the left of the to-be-processed prediction unit, to obtain the at least one reference coded block.
  • Step S510 Traverse the reference frames one by one from all reference frames, using the reference frame currently traversed as the current reference frame.
  • Step S520 Under the current reference frame, for each reference coded block, multiply the ratio of the current reference frame to the reference frame corresponding to that reference coded block by the motion vector corresponding to that reference coded block, to obtain the motion vector of the to-be-processed prediction unit relative to that reference coded block under the current reference frame; doing so for each reference coded block gives the candidate motion vectors of the to-be-processed prediction unit under the current reference frame.
  • Step S530 For each candidate motion vector, determine the reference block in the current reference frame that is offset by the candidate motion vector and has the same shape as the to-be-processed prediction unit, and calculate the error sum between the to-be-processed prediction unit and that reference block to obtain the AMVP cost of the to-be-processed prediction unit relative to that candidate motion vector; this yields the AMVP cost of the to-be-processed prediction unit relative to each candidate motion vector under the current reference frame.
  • Step S540 The candidate motion vector with the smallest AMVP cost is used as the AMVP of the to-be-processed prediction unit under the reference frame.
  • Step S550 Determine whether all reference frames have been traversed; if not, return to step S510; if yes, the corresponding AMVP of the to-be-processed prediction unit in each reference frame has been obtained, and step S560 is performed.
  • Step S560 Determine, from the corresponding AMVPs of the to-be-processed prediction unit in the reference frames, the target AMVP with the smallest AMVP cost, and use the reference frame corresponding to the target AMVP as the candidate reference frame.
  • Step S570 Determine whether the candidate reference frame is the same as the first reference frame; if yes, execute step S580, and if no, execute step S590.
  • Step S580 Select a reference frame from the candidate reference frame and the first reference frame as the target reference frame of the prediction unit to be processed.
  • Step S590 Perform motion estimation on the prediction unit to be processed by the candidate reference frame, and obtain a coding cost corresponding to the candidate reference frame; and perform motion estimation on the prediction unit to be processed by using the first reference frame to obtain a coding cost corresponding to the first reference frame.
  • Step S600 Select the reference frame with the smallest coding cost from the candidate reference frame and the first reference frame as the target reference frame of the to-be-processed prediction unit. An end-to-end sketch of this flow is given below.
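  • For illustration only, an end-to-end sketch of the FIG. 7 flow that wires together the hypothetical helpers sketched above; none of these function names come from the disclosure.

```python
def choose_target_reference_frame(pred_unit, reference_frames, first_ref,
                                  amvp_for, motion_estimate):
    """Derive the candidate reference frame from the per-frame AMVPs, then
    decide between it and the first reference frame."""
    _, candidate_ref = select_candidate_reference_frame(
        pred_unit, reference_frames, amvp_for)                 # steps S510 to S560
    return select_target_reference_frame(
        pred_unit, candidate_ref, first_ref, motion_estimate)  # steps S570 to S600
```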
  • As an application example, the inter prediction method provided by the embodiment of the present disclosure may be applied to a video encoding device that performs video encoding on a video image and encodes any to-be-processed coding block of the video image with inter prediction coding; in this process, the target reference frame of the to-be-processed prediction unit of the to-be-processed coding block is determined using the inter prediction method provided by the embodiment of the present disclosure.
  • The video encoding process applying the inter prediction method provided by the embodiment of the present disclosure may be as shown in FIG. 8.
  • Step S700 The video encoding device acquires a video image to be encoded transmitted by the video collection device.
  • the captured video image may be transmitted to the video encoding device, and the video encoding device performs video encoding by using an interframe prediction encoding manner.
  • The video capture device may be a terminal device with an image capture apparatus such as a camera, for example a video capture terminal in a live-video scenario; obviously, the source from which the video encoding device obtains the video image to be encoded is not limited to a video capture device, and the video encoding device may, for example, also re-encode a saved video image.
  • The video encoding device may be implemented by a terminal such as a mobile phone or a notebook computer, or may be a server deployed on the network side.
  • Step S710 Determine, for a to-be-processed prediction unit of any to-be-processed coding block to be encoded in the video image, a target reference frame of the prediction unit to be processed.
  • The video image may be encoded in units of coding blocks; after the video encoding device acquires the video image, the video image may be divided into multiple coding blocks.
  • the determination of the target reference frame of the to-be-processed prediction unit may be implemented by the inter prediction method provided by the embodiment of the present disclosure.
  • the process of determining the target reference frame of the to-be-processed prediction unit may be as shown in the refinement of step S710 in FIG. 8, including:
  • Step S711 Determine at least one reference coded block that is adjacent to the prediction unit to be processed.
  • Step S712 For each reference frame that is preset, according to the reference information of each reference coded block, respectively determine the corresponding AMVP of the to-be-processed prediction unit under each reference frame.
  • Step S713 Determine, from the to-be-processed prediction unit, the target AMVP in the corresponding AMVP in each reference frame, and use the reference frame corresponding to the target AMVP as the candidate reference frame.
  • Step S714 Comparing the candidate reference frame with the predetermined first reference frame, if the candidate reference frame is different from the first reference frame, performing motion estimation on the prediction unit to be processed by the candidate reference frame and the first reference frame respectively, according to the motion. And estimating a coding cost corresponding to the obtained candidate reference frame and the first reference frame, and determining a target reference frame of the to-be-processed prediction unit from the candidate reference frame and the first reference frame.
  • For steps S711 to S714, reference may be made to the corresponding descriptions above.
  • Step S720 Perform motion estimation on the to-be-processed prediction unit using the target reference frame, determine the residual block corresponding to the to-be-processed prediction unit, transform and quantize the residual block, and then rearrange and entropy-code the transformed and quantized residual block to obtain the video coding result of the to-be-processed prediction unit.
  • Step S730 Combine the video coding results of the prediction units of the to-be-processed coding block to obtain an encoding result of the to-be-processed coding block.
  • In determining the target reference frame of the to-be-processed prediction unit, the inter prediction method provided by the embodiment of the present disclosure can reduce the number of motion estimation searches and the processing complexity of target reference frame selection, thereby reducing the complexity of video coding and improving its efficiency.
  • the inter-prediction apparatus provided by the embodiment of the present disclosure is described below.
  • The inter prediction apparatus described below may be regarded as the program modules that the video encoding device is provided with in order to implement the inter prediction method provided by the embodiment of the present disclosure; the content of the inter prediction apparatus described below and the content of the inter prediction method described above may be cross-referenced with each other.
  • FIG. 9 is a structural block diagram of an inter prediction apparatus according to an embodiment of the present disclosure.
  • the inter prediction apparatus is applicable to a video encoding apparatus.
  • the inter prediction apparatus may include:
  • a reference coded block determining module 100, configured to determine at least one reference coded block spatially adjacent to the to-be-processed prediction unit of the to-be-processed coding block;
  • an AMVP determining module 200, configured to determine, for each preset reference frame, the corresponding AMVP of the to-be-processed prediction unit in that reference frame according to the reference information of each reference coded block;
  • a candidate reference frame determining module 300, configured to determine the target AMVP from the corresponding AMVPs of the to-be-processed prediction unit in the reference frames, and to use the reference frame corresponding to the target AMVP as the candidate reference frame;
  • a first comparison selection module 400, configured to compare the candidate reference frame with a predetermined first reference frame and, if the candidate reference frame is different from the first reference frame, to perform motion estimation on the to-be-processed prediction unit with the candidate reference frame and the first reference frame respectively, and to determine the target reference frame of the to-be-processed prediction unit from the candidate reference frame and the first reference frame according to the coding costs of the candidate reference frame and the first reference frame obtained by the motion estimation;
  • wherein the first reference frame is the reference frame used most often in the sub-coded blocks of the to-be-processed coding block.
  • The AMVP determining module 200 being configured to determine, according to the reference information of each reference coded block, the corresponding AMVP of the to-be-processed prediction unit in each reference frame specifically includes: determining, under a reference frame, the motion vector of the to-be-processed prediction unit relative to each reference coded block according to that reference frame and the reference frame and motion vector corresponding to each reference coded block, which includes: for each reference coded block, multiplying the ratio of the reference frame to the reference frame corresponding to the reference coded block by the motion vector corresponding to that reference coded block, to obtain the motion vector of the to-be-processed prediction unit relative to that reference coded block, thereby determining the motion vector of the to-be-processed prediction unit relative to each reference coded block under the reference frame.
  • The AMVP determining module 200 being configured to determine, under the reference frame, the AMVP cost of the to-be-processed prediction unit with respect to each candidate motion vector includes: for each candidate motion vector, determining the reference block in the reference frame that is offset by the candidate motion vector and has the same shape as the to-be-processed prediction unit, and calculating the error sum between the to-be-processed prediction unit and that reference block to obtain the AMVP cost of that candidate motion vector.
  • The candidate reference frame determining module 300 being configured to determine the target AMVP from the corresponding AMVPs of the to-be-processed prediction unit in the reference frames specifically includes: selecting, from the corresponding AMVPs of the to-be-processed prediction unit in the reference frames, the AMVP with the smallest AMVP cost as the target AMVP.
  • The candidate reference frame determining module 300 being configured to select, from the corresponding AMVPs in the reference frames, the AMVP with the smallest AMVP cost as the target AMVP specifically includes: each time an AMVP is determined, comparing the AMVP cost of the determined AMVP with the reference AMVP cost and, if the AMVP cost of the determined AMVP is less than the reference AMVP cost, updating the reference AMVP cost to the AMVP cost of that AMVP, until the corresponding AMVP of the to-be-processed prediction unit has been determined in all reference frames; the AMVP corresponding to the last updated reference AMVP cost is taken as the target AMVP.
  • The first comparison selection module 400 being configured to perform motion estimation on the to-be-processed prediction unit with the candidate reference frame and the first reference frame respectively, and to determine the target reference frame of the to-be-processed prediction unit from the candidate reference frame and the first reference frame according to the coding costs obtained by the motion estimation, includes: selecting the reference frame with the smallest coding cost from the candidate reference frame and the first reference frame as the target reference frame of the to-be-processed prediction unit.
  • FIG. 10 is another structural block diagram of an inter prediction apparatus according to an embodiment of the present disclosure. As shown in FIG. 9 and FIG. 10, the apparatus may further include:
  • the second comparison selection module 500 is configured to select a reference frame from the candidate reference frame and the first reference frame as the target reference frame of the to-be-processed prediction unit if the candidate reference frame is the same as the first reference frame.
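Putting the first and second comparison selection modules together, the decision reduces to at most two motion-estimation runs; `motion_estimate` is a stand-in for whatever search the encoder uses and is assumed to return its coding cost:

```python
def choose_target_reference(candidate_ref, first_ref, motion_estimate):
    """Target reference frame selection: if the candidate and the first reference
    frame coincide, either one is taken directly; otherwise motion estimation is
    run once per frame and the cheaper frame wins."""
    if candidate_ref == first_ref:
        return candidate_ref
    cost_candidate = motion_estimate(candidate_ref)
    cost_first = motion_estimate(first_ref)
    return candidate_ref if cost_candidate <= cost_first else first_ref

# Example with a stubbed cost function.
target = choose_target_reference(1, 3, lambda ref: {1: 250.0, 3: 310.0}[ref])  # -> 1
```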
  • the reference coded block determining module 100 is configured to determine at least one reference coded block spatially adjacent to the to-be-processed prediction unit of the to-be-processed coded block, which specifically includes: determining at least one coded block that is spatially adjacent to the to-be-processed prediction unit and touches a corner point of the to-be-processed prediction unit.
  • the at least one reference coded block may include: the coded block touching the top-right corner point of the to-be-processed prediction unit; the coded block touching the top-right corner point of the to-be-processed prediction unit and located above the to-be-processed prediction unit; the coded block touching the top-left corner point of the to-be-processed prediction unit; the coded block touching the bottom-left corner point of the to-be-processed prediction unit; and the coded block touching the bottom-left corner point of the to-be-processed prediction unit and located to the left of the to-be-processed prediction unit.
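For illustration, the five neighbor positions listed above can be expressed in sample coordinates as follows; the b0/b1/b2/a0/a1 labels follow common HEVC-style naming and are an assumption, not wording from the patent:

```python
def spatial_neighbor_positions(x, y, w, h):
    """Sample positions of the five spatial neighbors of a prediction unit whose
    top-left sample is (x, y) and whose size is w x h."""
    return {
        "b0": (x + w,     y - 1),    # touches the top-right corner point
        "b1": (x + w - 1, y - 1),    # touches the top-right corner point, lies above the PU
        "b2": (x - 1,     y - 1),    # touches the top-left corner point
        "a0": (x - 1,     y + h),    # touches the bottom-left corner point
        "a1": (x - 1,     y + h - 1) # touches the bottom-left corner point, lies to the left of the PU
    }
```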
  • FIG. 11 is a block diagram showing another structure of the inter prediction apparatus according to the embodiment of the present disclosure. As shown in FIG. 9 and FIG. 11, the apparatus may further include:
  • the coding result determining module 600 is configured to perform motion estimation on the to-be-processed prediction unit using the target reference frame, determine the residual block corresponding to the to-be-processed prediction unit, transform and quantize the residual block, and then rearrange and entropy-code the transformed and quantized residual block to obtain the video coding result of the to-be-processed prediction unit.
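A deliberately simplified end-to-end sketch of the residual path just described. Real encoders use a block transform, rate-distortion-tuned quantization and CABAC-style entropy coding; here the transform is omitted, quantization is a uniform rounding division and a run-length pass stands in for entropy coding, purely to make the data flow concrete:

```python
def encode_prediction_unit(orig, pred, qstep=8):
    """Residual -> (stand-in) transform/quantization -> rearrangement -> (stand-in) entropy coding."""
    residual = [[o - p for o, p in zip(ro, rp)] for ro, rp in zip(orig, pred)]
    quantized = [[round(v / qstep) for v in row] for row in residual]
    flat = [v for row in quantized for v in row]        # scan-order rearrangement
    coded, run = [], 0
    for v in flat:                                      # run-length pass in place of entropy coding
        if v == 0:
            run += 1
        else:
            coded.append((run, v))
            run = 0
    if run:
        coded.append((run, 0))
    return coded
```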
  • the inter prediction apparatus provided by the embodiment of the present disclosure can reduce the processing complexity of the target reference frame selection, reduce the complexity of video coding, and improve the efficiency of video coding.
  • FIG. 12 shows a hardware structural block diagram of the video encoding apparatus.
  • the video encoding apparatus may include: at least one central processing unit 1, at least one communication interface 2, at least one memory 3, at least one communication bus 4, and at least one graphics processor 5.
  • the number of each of the processor 1, the communication interface 2, the memory 3, the communication bus 4, and the graphics processor 5 is at least one, and the processor 1, the communication interface 2, and the memory 3 communicate with one another through the communication bus 4.
  • the memory stores a program suitable for execution by the central processing unit or the graphics processor, the program being used to:
  • determine at least one reference coded block spatially adjacent to the to-be-processed prediction unit of the to-be-processed coded block;
  • for each preset reference frame, determine, according to the reference information of each reference coded block, the AMVP of the to-be-processed prediction unit under each reference frame;
  • determine a target AMVP from the AMVPs of the to-be-processed prediction unit under the reference frames, and take the reference frame corresponding to the target AMVP as a candidate reference frame;
  • compare the candidate reference frame with a predetermined first reference frame and, if the candidate reference frame is different from the first reference frame, perform motion estimation on the to-be-processed prediction unit with the candidate reference frame and the first reference frame respectively, and determine the target reference frame of the to-be-processed prediction unit from the candidate reference frame and the first reference frame according to the coding costs obtained by motion estimation; the first reference frame is the reference frame used most often among the sub-coded blocks of the to-be-processed coded block.
  • for refinements and extensions of the functions of the program, reference may be made to the corresponding descriptions above.
  • an embodiment of the present disclosure further provides a storage medium storing a program suitable for execution by a central processing unit or a graphics processor, the program being used to:
  • determine at least one reference coded block spatially adjacent to the to-be-processed prediction unit of the to-be-processed coded block;
  • for each preset reference frame, determine, according to the reference information of each reference coded block, the AMVP of the to-be-processed prediction unit under each reference frame;
  • determine a target AMVP from the AMVPs of the to-be-processed prediction unit under the reference frames, and take the reference frame corresponding to the target AMVP as a candidate reference frame;
  • compare the candidate reference frame with a predetermined first reference frame and, if the candidate reference frame is different from the first reference frame, perform motion estimation on the to-be-processed prediction unit with the candidate reference frame and the first reference frame respectively, and determine the target reference frame of the to-be-processed prediction unit from the candidate reference frame and the first reference frame according to the coding costs obtained by motion estimation; the first reference frame is the reference frame used most often among the sub-coded blocks of the to-be-processed coded block.
  • the embodiment of the present disclosure further provides a storage medium for storing program code, and the program code is used to execute the inter prediction method provided by the foregoing embodiment.
  • Embodiments of the present disclosure also provide a computer program product comprising instructions that, when run on a server, cause the server to perform the inter prediction method provided by the above embodiments.
  • for refinements and extensions of the functions of the program, reference may be made to the corresponding descriptions above.
  • the steps of a method or algorithm described in connection with the embodiments disclosed herein can be implemented directly in hardware, a software module executed by a processor, or a combination of both.
  • the software module can be placed in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the technical field.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Embodiments of the present disclosure provide an inter-frame prediction method, an apparatus, and a storage medium. The method includes: determining at least one reference coded block spatially adjacent to a to-be-processed prediction unit of a to-be-processed coded block; for each preset reference frame, determining, according to the reference information of each reference coded block, the AMVP of the to-be-processed prediction unit under each reference frame; determining a target AMVP from the AMVPs of the to-be-processed prediction unit under the reference frames, and taking the reference frame corresponding to the target AMVP as a candidate reference frame; comparing the candidate reference frame with a first reference frame and, if the candidate reference frame is different from the first reference frame, performing motion estimation on the to-be-processed prediction unit with the candidate reference frame and the first reference frame respectively, and determining the target reference frame from the candidate reference frame and the first reference frame according to their motion-estimation coding costs. The embodiments of the present disclosure can reduce the processing complexity of target reference frame selection and improve the efficiency of video coding.

Description

一种帧间预测方法、装置及存储介质
本申请要求于2017年10月13日提交中国专利局、申请号为201710955097.5、申请名称为“一种帧间预测方法、装置及存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本公开涉及视频处理技术领域,具体涉及帧间预测。
背景技术
帧间预测是利用当前编码块周边已编码重建的其他编码块确定参考帧,通过运动估计(motion estimation),利用参考帧对当前编码块进行预测编码,以消除视频的时间冗余信息的一种手段;帧间预测是视频编码的一个重要环节,尤其在H.264/AVC、H.265/HEVC、AVS等混合编码框架的视频编码中经常应用。
对当前编码块进行帧间预测的一个重要内容是:从众多的所有参考帧中选择出最优的目标参考帧(最优的目标参考帧一般认为是所有参考帧中,编码代价最小的参考帧),从而通过运动估计,利用目标参考帧对当前编码块进行预测编码。
由于帧间预测过程中目标参考帧选择的处理复杂度,对于视频编码的效率影响极大,因此如何降低目标参考帧选择的处理复杂度,使得视频编码的复杂度降低,提高视频编码的效率,一直是本领域技术人员研究的问题。
发明内容
有鉴于此,本公开实施例提供一种帧间预测方法、装置及存储介质,以降低目标参考帧选择的处理复杂度,使得视频编码的复杂度降低,提高视频编码的效率。
为实现上述目的,本公开实施例提供如下技术方案:
一方面,本公开实施例提供了一种帧间预测方法,包括:
确定与待处理编码块的待处理预测单元空间相邻的至少一个参考已编码块;
对于预设定的各参考帧,根据各参考已编码块的参考信息,分别确定所述待处理预测单元在各参考帧下相应的AMVP;
从待处理预测单元在各参考帧下相应的AMVP中,确定目标AMVP,将所 述目标AMVP对应的参考帧作为候选参考帧;
将所述候选参考帧与预确定的第一参考帧进行比对,若所述候选参考帧与第一参考帧不同,分别通过所述候选参考帧和第一参考帧对待处理预测单元进行运动估计,根据运动估计的候选参考帧和第一参考帧相应的编码代价,从候选参考帧和第一参考帧中确定待处理预测单元的目标参考帧;其中,第一参考帧为所述待处理编码块的子编码块中使用最多的参考帧。
另一方面,本公开实施例还提供一种帧间预测装置,包括:
参考已编码块确定模块,用于确定与待处理编码块的待处理预测单元空间相邻的至少一个参考已编码块;
AMVP确定模块,用于对于预设定的各参考帧,根据各参考已编码块的参考信息,分别确定所述待处理预测单元在各参考帧下相应的AMVP;
候选参考帧确定模块,用于从待处理预测单元在各参考帧下相应的AMVP中,确定目标AMVP,将所述目标AMVP对应的参考帧作为候选参考帧;
第一比对选择模块,用于将所述候选参考帧与预确定的第一参考帧进行比对,若所述候选参考帧与第一参考帧不同,分别通过所述候选参考帧和第一参考帧对待处理预测单元进行运动估计,根据运动估计的候选参考帧和第一参考帧相应的编码代价,从候选参考帧和第一参考帧中确定待处理预测单元的目标参考帧;其中,第一参考帧为所述待处理编码块的子编码块中使用最多的参考帧。
另一方面,本公开实施例还提供一种存储介质,所述存储介质存储有适于中央处理器或图形处理器执行的程序,所述程序用于:
确定与待处理编码块的待处理预测单元空间相邻的至少一个参考已编码块;
对于预设定的各参考帧,根据各参考已编码块的参考信息,分别确定所述待处理预测单元在各参考帧下相应的AMVP;
从待处理预测单元在各参考帧下相应的AMVP中,确定目标AMVP,将所述目标AMVP对应的参考帧作为候选参考帧;
将所述候选参考帧与预确定的第一参考帧进行比对,若所述候选参考帧与第一参考帧不同,分别通过所述候选参考帧和第一参考帧对待处理预测单元进 行运动估计,根据运动估计的候选参考帧和第一参考帧相应的编码代价,从候选参考帧和第一参考帧中确定待处理预测单元的目标参考帧;其中,第一参考帧为所述待处理编码块的子编码块中使用最多的参考帧。
又一方面,本公开实施例还提供了一种视频编码设备,所述视频编码设备包括:
处理器、通信接口、存储器和通信总线;
其中,所述处理器、所述通信接口和所述存储器通过所述通信总线完成相互间的通信;所述通信接口为通信模块的接口;
所述存储器,用于存储程序代码,并将所述程序代码传输给所述处理器;
所述处理器,用于调用存储器中程序代码的指令执行第一方面所述的帧间预测方法。
再一方面,本公开实施例还提供了一种包括指令的计算机程序产品,当其在计算机上运行时,使得所述计算机执行第一方面所述的帧间预测方法。
基于上述技术方案,本公开实施例可根据待处理预测单元空间相邻的至少一个参考已编码块的参考信息,确定出待处理预测单元在各参考帧下相应的AMVP,并从中选取出目标AMVP;将目标AMVP对应的参考帧作为候选参考帧,与预确定的第一参考帧进行比对,从而根据比对结果确定待处理预测单元的目标参考帧,实现帧间预测过程中,待处理预测单元的目标参考帧的确定。
由于本公开实施例从所有参考帧中基于AMVP选取候选参考帧后,仅需在候选参考帧和预确定的第一参考帧不同时,才分别通过候选参考帧和第一参考帧对待处理预测单元进行运动估计,从而根据运动估计的编码代价,从候选参考帧和第一参考帧中选取出待处理预测单元的目标参考帧,因此本公开实施例可极大的减少待处理预测单元的目标参考帧选取过程中运动估计搜索的次数,降低目标参考帧选择的处理复杂度,使得视频编码的复杂度降低,提高视频编码的效率。
附图说明
为了更清楚地说明本公开实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述 中的附图仅仅是本公开的实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据提供的附图获得其他的附图。
图1为本公开实施例提供的帧间预测方法的流程图;
图2为编码块与预测单元的关系示意图;
图3为参考已编码块的示例图;
图4为确定当前预测单元在一参考帧下相应的AMVP的方法流程图;
图5为确定当前预测单元在各参考帧下相应的AMVP及确定目标AMVP的方法流程图;
图6为将候选参考帧与第一参考帧进行比对,确定目标参考帧的方法流程图;
图7为本公开实施例提供的帧间预测方法的另一流程图;
图8为本公开实施例提供的视频编码应用示例流程图;
图9为本公开实施例提供的帧间预测装置的结构框图;
图10为本公开实施例提供的帧间预测装置的另一结构框图;
图11为本公开实施例提供的帧间预测装置的再一结构框图;
图12为视频编码设备的硬件结构框图。
具体实施方式
在帧间预测过程中,降低目标参考帧选择的处理复杂度,关键在于降低目标参考帧选择过程中,运动估计的搜索次数;由于一次运动估计将带来较大的处理负担,因此如何合理的在帧间预测过程中,降低目标参考帧选择时所使用的运动估计的搜索次数,尤为关键。基于此,本公开提出一种新型的帧间预测方法,以在帧间预测过程中,降低目标参考帧选择时所使用的运动估计的搜索次数,达到降低目标参考帧选择的处理复杂度的目的。
下面将结合本公开实施例中的附图,对本公开实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本公开一部分实施例,而不是全部的实施例。基于本公开中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本公开保护的范围。
图1为本公开实施例提供的帧间预测方法的流程图,该帧间预测方法可应用于视频编码设备,视频编码设备可以是具有视频编码能力的CPU(中央处理 器)或GPU(图形处理器);可选的,视频编码设备可选用手机、笔记本电脑等终端实现,也可选用服务器实现。
参照图1,本公开实施例提供的帧间预测方法可以包括:
步骤S100、确定与待处理编码块的待处理预测单元空间相邻的至少一个参考已编码块。
待处理编码块可以理解为是待处理需进行预测编码的编码块,待处理编码块可划分出多个预测单元,如图2示例,一个编码块(CU)可划分出2个预测单元(PU),显然图2仅是示例,一个编码块并不限于划分出2个预测单元的情况。
本公开实施例可对待处理编码块中的每一预测单元均使用本公开实施例提供的帧间预测方法进行处理,待处理预测单元可以认为是待处理编码块中待处理需进行预测编码的预测单元。
已编码块可以认为是利用本公开实施例提供的帧间预测方法已进行了预测编码的编码块,已编码块已利用本公开实施例提供的帧间预测方法选择了帧间预测所用的参考帧,因此已编码块所选择的参考帧,和基于已选择的参考帧所确定的运动矢量(MV,MotionVector)是已知的;在本公开实施例中,已编码块所选择的参考帧可以认为是已编码块相应的参考帧,基于已编码块相应的参考帧所确定的运动矢量可以认为是,已编码块相应的运动矢量。
可选的,所选择的与待处理预测单元空间相邻的至少一个参考已编码块可以认为是:与待处理预测单元空间相邻,且与待处理预测单元的边点相接的至少一个已编码块。
如图3所示示例,至少一个参考已编码块可以包括:与待处理预测单元右上边点相接的已编码块(b0),与待处理预测单元右上边点相接,且位于待处理预测单元上边的已编码块(b1),与待处理预测单元左上边点相接的已编码块(b2),与待处理预测单元左下边点相接的已编码块(a0),与待处理预测单元左下边点相接,且位于待处理预测单元左边的已编码块(a1)。显然,图3所示示例仅是一种举例示意说明,具体的至少一个参考已编码块的选取可不限于图3所示,如也可能仅选取其中的b0、b2、a0等。
步骤S110、对于预设定的各参考帧,根据各参考已编码块的参考信息,分 别确定待处理预测单元在各参考帧下相应的先进运动矢量预测值(AMVP,Advanced Motion Vector Prediction)。
可选的,由于参考已编码块已利用本公开实施例提供的帧间预测方法进行了预测编码,因此参考已编码块相应的参考帧和运动矢量等参考信息均是已知的;对于预设定的所有参考帧中的各参考帧而言,本公开实施例可根据各参考已编码块的参考信息,分别确定出待处理预测单元在每一参考帧下相应的AMVP。
可选的,在确定待处理预测单元在一参考帧下相应的AMVP时,本公开实施例可在该参考帧下,确定待处理预测单元相对于各参考已编码块的运动矢量,得到待处理预测单元在该参考帧下相应的候选运动矢量(在一参考帧下,待处理预测单元相对于各参考已编码块的运动矢量,可视为是待处理预测单元在该参考帧下相应的候选运动矢量),从而从待处理预测单元在该参考帧下相应的候选运动矢量中,选取AMVP代价最小的运动矢量,作为待处理预测单元在该参考帧下相应的AMVP(待处理预测单元在一参考帧下相应的AMVP可视为是,待处理预测单元在该参考帧下相应的候选运动矢量中,AMVP代价最小的运动矢量)。
从而对于待处理预测单元,逐参考帧的通过上述方式进行处理,以确定出待处理预测单元在各参考帧下相应的AMVP。
可选的,在一参考帧下,待处理预测单元相对于一候选运动矢量的AMVP代价可通过如下方式确定:确定相对于该参考帧偏离了该候选运动矢量距离,且与待处理预测单元相同形状的参考块,计算待处理预测单元与该参考块的间的误差和,得到待处理预测单元相对于该候选运动矢量的AMVP代价;从而在一参考帧下,逐候选运动矢量的通过上述方式进行处理,以确定出待处理预测单元在一参考帧下,相对于各候选运动矢量的AMVP代价。
显然,AMVP代价的确定方式并不限于此,还可选用其他方式实现。
可选的,一个参考帧可通过唯一的参考帧索引进行关联标识。
步骤S120、从待处理预测单元在各参考帧下相应的AMVP中,确定目标AMVP,将目标AMVP对应的参考帧作为候选参考帧。
可选的,本公开实施例在确定待处理预测单元在各参考帧下相应的AMVP 后,可将待处理预测单元在各参考帧下相应的AMVP中,AMVP代价最小的AMVP作为目标AMVP,从而将目标AMVP对应的参考帧作为候选参考帧使用。
步骤S130、将候选参考帧与预确定的第一参考帧进行比对,若候选参考帧与第一参考帧不同,分别通过候选参考帧和第一参考帧对待处理预测单元进行运动估计,根据运动估计得到的候选参考帧和第一参考帧相应的编码代价,从候选参考帧和第一参考帧中确定待处理预测单元的目标参考帧。
可选的,预确定的第一参考帧可以是待处理编码块的子编码块中使用最多的参考帧,其中,待处理编码块的子编码块预先编码完成。需要说明的是,一编码块可以具有多种编码块模式编码,每种编码块模式编码模式可对应有一种预测单元的划分方式,且预测单元在编码块中具有独立的运动信息,一般一个编码块可分成2个预测单元;而子编码块则是对编码块进一步在空间上进行划分,所得到的编码块的子区域,一般而言,一个编码块可固定分成4个子编码块。
在确定候选参考帧和预确定的第一参考帧的基础上,本公开实施例可将候选参考帧与预确定的第一参考帧进行比对,如果两者相同,则可择一作为待处理预测单元的目标参考帧;如果两者不同,则可通过候选参考帧和第一参考帧分别对待处理预测单元进行运动估计,根据运动估计的候选参考帧和第一参考帧的编码代价,从候选参考帧和第一参考帧中选取编码代价最小的参考帧,作为待处理预测单元的目标参考帧。
本公开实施例提供的帧间预测方法包括:确定与待处理编码块的待处理预测单元空间相邻的至少一个参考已编码块;根据各参考已编码块的参考信息,分别确定待处理预测单元在各参考帧下相应的AMVP;从待处理预测单元在各参考帧下相应的AMVP中,确定目标AMVP,将目标AMVP对应的参考帧作为候选参考帧;将候选参考帧与预确定的第一参考帧进行比对,若候选参考帧与第一参考帧不同,分别通过候选参考帧和第一参考帧对待处理预测单元进行运动估计,根据运动估计的候选参考帧和第一参考帧相应的编码代价,从候选参考帧和第一参考帧中确定待处理预测单元的目标参考帧。
可以看出,本公开实施例可根据待处理预测单元空间相邻的至少一个参考 已编码块的参考信息,确定出待处理预测单元在各参考帧下相应的AMVP,并从中选取出目标AMVP;将目标AMVP对应的参考帧作为候选参考帧,与预确定的第一参考帧进行比对,从而根据比对结果确定待处理预测单元的目标参考帧,实现帧间预测过程中,待处理预测单元的目标参考帧的确定。
由于本公开实施例从所有参考帧中基于AMVP选取候选参考帧后,仅需在候选参考帧和预确定的第一参考帧不同时,才分别通过候选参考帧和第一参考帧对待处理预测单元进行运动估计,从而根据运动估计的编码代价,从候选参考帧和第一参考帧中选取出待处理预测单元的目标参考帧,因此本公开实施例可极大的减少待处理预测单元的目标参考帧选取过程中运动估计搜索的次数,降低目标参考帧选择的处理复杂度,使得视频编码的复杂度降低,提高视频编码的效率。
需要说明的是,如果是是先遍历所有的参考帧,估算出待处理编码块的待处理预测单元在各参考帧下的运动矢量预测值(MVP,Motion Vector Prediction),再分别在各参考帧下,根据相应的MVP对待处理预测单元进行运动估计,确定出待处理预测单元在各参考帧下的运动矢量(MV)和编码代价,从中选取出编码代价最小的目标参考帧,则运动估计的搜索次数非常多,导致目标参考帧选择的处理复杂度较高。而本公开实施例可由基于候选参考帧和第一参考帧对待处理预测单元的运动估计,实现待处理预测单元的目标参考帧的选择,极大的减少了运动估计搜索的次数。
可选的,一参考已编码块的参考信息可以包括:该参考已编码块相应的参考帧和运动矢量;相应的,图4示出了本公开实施例提供的确定待处理预测单元在一参考帧下相应的AMVP的方法流程,参照图4,该方法可以包括:
步骤S200、在这一参考帧下,分别根据当前参考帧,各参考已编码块相应的参考帧和运动矢量,确定待处理预测单元在这一参考帧下相对于各参考已编码块的运动矢量,得到待处理预测单元在这一参考帧下相应的候选运动矢量。
可选的,这一参考帧可以是所有参考帧中当前遍历到的参考帧,即可对所有参考帧逐帧的进行遍历,这一参考帧则是当前遍历到的参考帧;本公开实施例可逐一遍历预设定的各参考帧,在遍历到当前参考帧时,执行图4所示方法, 确定出待处理预测单元在当前参考帧下相应的AMVP。
在这一参考帧下,对于所述至少一个参考已编码块中的任一个参考已编码块,本公开实施例可根据这一参考帧,该参考已编码块相应的参考帧和运动矢量,确定待处理预测单元在这一参考帧下,相对于该参考已编码块的MV。
作为一种可选示例,以参考已编码块a0为例,若设参考已编码块a0相应的参考帧为refa0,运动矢量为MVa0,则在参考帧refIdx(该参考帧refIdx可以是所有参考帧中的任一当前遍历到的参考帧)下,待处理预测单元在该参考帧refIdx下,相对于参考已编码块a0的运动矢量可以是:refIdx和refa0的比值,乘以MVa0。
即作为一种可选示例,在当前遍历到的一参考帧下,对于一参考已编码块,本公开实施例可将该参考帧与该参考已编码块相应的参考帧的比值,乘以该参考已编码块相应的运动矢量;从而对于各参考已编码块均进行此处理,可确定出待处理预测单元在该参考帧下相对于各参考已编码块的运动矢量,得到待处理预测单元在该参考帧下相应的候选运动矢量。
步骤S210、在这一参考帧下,确定待处理预测单元相对于各候选运动矢量的AMVP代价。
可选的,对于任一候选运动矢量,本公开实施例可确定相对于当前遍历到的任一参考帧偏离了该候选运动矢量距离,且与待处理预测单元相同形状的参考块(即一个候选运动矢量会对应出一个参考块),计算待处理预测单元与该参考块的间的误差和(可选的,误差和可理解为是待处理预测单元和参考块的像素差的绝对值的和),得到待处理预测单元相对于该候选运动矢量的AMVP代价;对于各候选运动矢量均作此处理,则可在当前遍历到的任一参考帧下,确定出待处理预测单元相对于各候选运动矢量的AMVP代价。
步骤S220、将AMVP代价最小的候选运动矢量,作为待处理预测单元在这一参考帧下相应的AMVP。
可选的,以图4所示流程对预设定的各参考帧中,当前遍历到的参考帧进行处理,从而可在遍历完预设定的各参考帧后,得到待处理预测单元在各参考帧下相应的AMVP。
在确定待处理预测单元在各参考帧下相应的AMVP后,本公开实施例可从待处理预测单元在各参考帧下相应的AMVP中,确定AMVP代价最小的AMVP作为目标AMVP。
可选的,作为一种可选示例,本公开实施例可设置参考AMVP代价,并设置参考AMVP代价的初始值为预设最大值,从而在每确定出待处理预测单元在一参考帧下相应的AMVP后,将所确定AMVP的AMVP代价与参考AMVP代价进行比对,如果该AMVP的AMVP代价小于参考AMVP代价,则将参考AMVP代价更新为该AMVP的AMVP代价,直至待处理预测单元在所有参考帧下相应的AMVP被确定出。
以此,在每确定出待处理预测单元在一参考帧下相应的AMVP后,进行所确定AMVP的AMVP代价与参考AMVP代价的比对,实现参考AMVP代价的更新,最终在确定出待处理预测单元在最后一参考帧下相应的AMVP后,则可使得参考AMVP代价为,待处理预测单元在各参考帧下的AMVP中相应的最小AMVP代价;从而可将最后更新的参考AMVP代价相应的AMVP,作为目标AMVP,实现从待处理预测单元在各参考帧下相应的AMVP中,确定AMVP代价最小的目标AMVP的目的。
相应的,图5示出了本公开实施例提供的确定待处理预测单元在各参考帧下相应的AMVP及确定目标AMVP的方法流程图,参照图5,该流程可以包括:
步骤S300、设置参考AMVP代价,并使参考AMVP代价的初始值为预设最大值。
步骤S310、从所有参考帧中选取一参考帧作为当前参考帧,确定待处理预测单元在该参考帧下的AMVP。
可选的,步骤S310的可选实现可参照图4所示。
步骤S320、判断待处理预测单元在该参考帧下AMVP的AMVP代价,是否小于参考AMVP代价,若否,执行步骤S330,若是,执行步骤S340。
步骤S330、维持参考AMVP代价不变,执行步骤S350。
可选的,如果待处理预测单元在该参考帧下AMVP的AMVP代价,不小于参考AMVP代价,则可维持参考AMVP代价不变。
步骤S340、以待处理预测单元在该参考帧下AMVP的AMVP代价,更新参 考AMVP代价,并将目标AMVP更新为待处理预测单元在该参考帧下的AMVP,将候选参考帧更新为该参考帧,执行步骤S350。
可选的,如果待处理预测单元在该参考帧下AMVP的AMVP代价,小于参考AMVP代价,则可通过该AMVP代价更新参考AMVP代价,并以待处理预测单元在该参考帧下的AMVP,更新目标AMVP,以该参考帧更新候选参考帧。
步骤S350、判断是否所有的参考帧均遍历完毕,若是,执行步骤S360,若否,返回步骤S310。
步骤S360、确定最后更新的目标AMVP,为所确定的目标AMVP,最后更新的候选参考帧,为所确定的候选参考帧。
可以看出,通过图5所示方法逐参考的循环处理,则可使得最后更新的参考AMVP代价为,待处理预测单元在各参考帧下的AMVP中相应的最小的AMVP代价;从而可将最后更新的目标AMVP(最后更新的参考AMVP代价相应的AMVP),作为本公开实施例需确定的目标AMVP,可将最后更新的候选参考帧(最后更新的目标AMVP相应的参考帧),作为本公开实施例需确定的候选参考帧。
在确定目标AMVP后,本公开实施例可将目标AMVP对应的参考帧作为候选参考帧,并与预确定的第一参考帧进行比对;在本公开实施例中,待处理编码块的子编码块需预先编码完成,在待处理编码块的子编码块编码完成后,本公开实施例可将待处理编码块的子编码块中使用最多的参考帧,作为第一参考帧;可选的,对待处理编码块的子编码块进行编码的方式可参照传统现有方式,本公开实施例并不限制。
可选的,在确定候选参考帧和第一参考帧后,图6示出了一种可选的将候选参考帧与第一参考帧进行比对,确定目标参考帧的方法流程,参照图6,该流程可以包括:
步骤S400、判断候选参考帧与第一参考帧是否相同,若是,执行步骤S410,若否,执行步骤S420。
步骤S410、从候选参考帧和第一参考帧中选择一参考帧,作为待处理预测单元的目标参考帧。
步骤S420、通过候选参考帧对待处理预测单元进行运动估计,得出候选参考帧相应的编码代价;及,通过第一参考帧对待处理预测单元进行运动估计,得出第一参考帧相应的编码代价。
可选的,在通过候选参考帧对待处理预测单元进行运动估计时,本公开实施例可以候选参考帧的AMVP作为运动估计的搜索起始点,对待处理预测单元进行运动估计,得出候选参考帧相应的码率失真代价(编码代价的一种可选形式);在通过第一参考帧对待处理预测单元进行运动估计时,本公开实施例可以第一参考帧的AMVP作为运动估计的搜索起始点,对待处理预测单元进行运动估计,得出第一参考帧相应的码率失真代价。
步骤S430、从候选参考帧和第一参考帧中选择编码代价最小的参考帧,作为待处理预测单元的目标参考帧。
可选的,进一步,在确定出待处理预测单元的目标参考帧后,本公开实施例可使用目标参考帧对待处理预测单元进行运动估计,确定待处理预测单元相应的残差块,对残差块进行变换和量化,再对变换和量化后的残差块进行重排和熵编码,得到待处理预测单元的视频编码结果,将待处理编码块的各预测单元的视频编码结果相结合,则可通过帧间预测编码方式得到待处理编码块的编码结果。
可选的,图7示出了本公开实施例提供的帧间预测方法的另一流程,参照图7,该流程可以包括:
步骤S500、确定与待处理编码块的待处理预测单元空间右上边点相接的已编码块,与待处理预测单元右上边点相接,且位于待处理预测单元上边的已编码块,与待处理预测单元左上边点相接的已编码块,与待处理预测单元左下边点相接的已编码块,与待处理预测单元左下边点相接,且位于待处理预测单元左边的已编码块,得到至少一个参考已编码块。
步骤S510、从所有参考帧逐一遍历参考帧,将当前遍历到的参考帧作为当前参考帧。
步骤S520、在该参考帧下,对于任一参考已编码块,将该参考帧与该参考已编码块相应的参考帧的比值,乘以该参考已编码块相应的运动矢量,得到待 处理预测单元在该参考帧下相对于该参考已编码块的运动矢量,以确定出待处理预测单元在该参考帧下相对于各参考已编码块的运动矢量,得到待处理预测单元在该参考帧下相应的候选运动矢量。
步骤S530、对于任一候选运动矢量,确定相对于该参考帧偏离了该候选运动矢量距离,且与待处理预测单元相同形状的参考块,计算待处理预测单元与该参考块的间的误差和,得到待处理预测单元相对于该候选运动矢量的AMVP代价,以在该参考帧下,得到待处理预测单元相对于各候选运动矢量的AMVP代价。
步骤S540、将AMVP代价最小的候选运动矢量,作为待处理预测单元在该参考帧下的AMVP。
步骤S550、判断是否所有参考帧均已遍历,若否,返回步骤S510,若是,得到待处理预测单元在各参考帧下相应的AMVP,执行步骤S560。
步骤S560、从待处理预测单元在各参考帧下相应的AMVP中,确定AMVP代价最小的目标AMVP,将目标AMVP对应的参考帧作为候选参考帧。
步骤S570、判断候选参考帧与第一参考帧是否相同,若是,执行步骤S580,若否,执行步骤S590。
步骤S580、从候选参考帧和第一参考帧中选择一参考帧,作为待处理预测单元的目标参考帧。
步骤S590、通过候选参考帧对待处理预测单元进行运动估计,得出候选参考帧相应的编码代价;及,通过第一参考帧对待处理预测单元进行运动估计,得出第一参考帧相应的编码代价。
步骤S600、从候选参考帧和第一参考帧中选择编码代价最小的参考帧,作为待处理预测单元的目标参考帧。
本公开实施例提供的帧间预测方法的一个应用示例可以是,应用于视频编码设备,在视频编码设备对视频图像进行视频编码时,利用帧间预测编码方式对视频图像的任一待处理编码块进行编码,并且这个过程中,使用本公开实施例提供的帧间预测方法,实现确定待处理编码块的待处理预测单元的目标参考帧。
可选的,作为一个应用示例,应用本公开实施例提供的帧间预测方法的视频编码过程可以如图8所示,包括:
步骤S700、视频编码设备获取视频采集设备传输的待编码的视频图像。
可选的,视频采集设备采集视频图像后,可将所采集的视频图像传输至视频编码设备,由视频编码设备以帧间预测编码方式进行视频编码。
可选的,视频采集设备可以是具有摄像头等图像采集装置的终端设备,如可以是视频直播场景中的视频采集终端等;显然,视频编码设备获取待编码的视频图像的来源并不限于视频采集设备,如视频编码设备也可能是对保存的视频图像进行重新编码等。
可选的,视频编码设备可以如手机、笔记本电脑等终端实现,也可如网络侧设置的服务器等。
步骤S710、对于所述视频图像中待编码的任一待处理编码块的待处理预测单元,确定待处理预测单元的目标参考帧。
视频编码设备获取待编码的视频图像后,可以编码块为单位进行视频图像的编码,如视频编码设备获取视频图像后,可将视频图像进行图像分块,划分出多个编码块。
对于视频图像中待编码的任一待处理编码块的待处理预测单元,可以上文描述的本公开实施例提供的帧间预测方法,实现待处理预测单元的目标参考帧的确定。
可选的,确定待处理预测单元的目标参考帧的过程可以如8中对步骤S710的细化所示,包括:
步骤S711、确定待处理预测单元空间相邻的至少一个参考已编码块。
步骤S712、对于预设定的各参考帧,根据各参考已编码块的参考信息,分别确定待处理预测单元在各参考帧下相应的AMVP。
步骤S713、从待处理预测单元在各参考帧下相应的AMVP中,确定目标AMVP,将目标AMVP对应的参考帧作为候选参考帧。
步骤S714、将候选参考帧与预确定的第一参考帧进行比对,若候选参考帧与第一参考帧不同,分别通过候选参考帧和第一参考帧对待处理预测单元进行运动估计,根据运动估计得出的候选参考帧和第一参考帧相应的编码代价,从 候选参考帧和第一参考帧中确定待处理预测单元的目标参考帧。
可选的,步骤S711至步骤S714的细化说明可参照上文相应部分描述。
步骤S720、使用目标参考帧对待处理预测单元进行运动估计,确定待处理预测单元相应的残差块,对残差块进行变换和量化,再对变换和量化后的残差块进行重排和熵编码,得到待处理预测单元的视频编码结果。
步骤S730、将待处理编码块的各预测单元的视频编码结果相结合,得到待处理编码块的编码结果。
本公开实施例提供的帧间预测方法,可在确定待处理预测单元的目标参考帧的过程中,减少了运动估计搜索的次数,降低目标参考帧选择的处理复杂度,使得视频编码的复杂度降低,提高视频编码的效率。
下面对本公开实施例提供的帧间预测装置进行介绍,下文描述的帧间预测装置可以认为是视频编码设备,为实现本公开实施例提供的帧间预测方法所设置的程序模块。下文描述的帧间预测装置的内容,可与上文描述的帧间预测方法的内容相互对应参照。
图9为本公开实施例提供的帧间预测装置的结构框图,该帧间预测装置可应用于视频编码设备,参照图9,该帧间预测装置可以包括:
参考已编码块确定模块100,用于确定与待处理编码块的待处理预测单元空间相邻的至少一个参考已编码块;
AMVP确定模块200,用于对于预设定的各参考帧,根据各参考已编码块的参考信息,分别确定所述待处理预测单元在各参考帧下相应的AMVP;
候选参考帧确定模块300,用于从待处理预测单元在各参考帧下相应的AMVP中,确定目标AMVP,将所述目标AMVP对应的参考帧作为候选参考帧;
第一比对选择模块400,用于将所述候选参考帧与预确定的第一参考帧进行比对,若所述候选参考帧与第一参考帧不同,分别通过所述候选参考帧和第一参考帧对待处理预测单元进行运动估计,根据运动估计的候选参考帧和第一参考帧相应的编码代价,从候选参考帧和第一参考帧中确定待处理预测单元的目标参考帧;其中,第一参考帧为所述待处理编码块的子编码块中使用最多的参考帧。
可选的,AMVP确定模块200,用于根据各参考已编码块的参考信息,分别确定所述待处理预测单元在各参考帧下相应的AMVP,具体包括:
逐一遍历预设定的各参考帧,在遍历到当前参考帧时,在该参考帧下,分别根据该参考帧,各参考已编码块相应的参考帧和运动矢量,确定待处理预测单元在该参考帧下相对于各参考已编码块的运动矢量,得到待处理预测单元在该参考帧下相应的候选运动矢量;
在当前参考帧下,确定待处理预测单元相对于各候选运动矢量的AMVP代价,将AMVP代价最小的候选运动矢量,作为待处理预测单元在该参考帧下相应的AMVP,以在遍历完预设定的各参考帧后,得到待处理预测单元在各参考帧下相应的AMVP。
可选的,AMVP确定模块200,用于在该参考帧下,分别根据该参考帧,各参考已编码块相应的参考帧和运动矢量,确定待处理预测单元在该参考帧下相对于各参考已编码块的运动矢量,具体包括:
在当前参考帧下,对于任一参考已编码块,将该参考帧与该参考已编码块相应的参考帧的比值,乘以该参考已编码块相应的运动矢量,得到待处理预测单元在该参考帧下相对于该参考已编码块的运动矢量,以确定出待处理预测单元在该参考帧下相对于各参考已编码块的运动矢量。
可选的,AMVP确定模块200,用于在该参考帧下,确定待处理预测单元相对于各候选运动矢量的AMVP代价,具体包括:
在当前参考帧下,对于任一候选运动矢量,确定相对于该参考帧偏离了该候选运动矢量距离,且与待处理预测单元相同形状的参考块,计算待处理预测单元与该参考块的间的误差和,得到待处理预测单元相对于该候选运动矢量的AMVP代价,以确定出待处理预测单元相对于各候选运动矢量的AMVP代价。
可选的,候选参考帧确定模块300,用于从待处理预测单元在各参考帧下相应的AMVP中,确定目标AMVP,具体包括:
从待处理预测单元在各参考帧下相应的AMVP中,选取AMVP代价最小的AMVP作为目标AMVP。
可选的,候选参考帧确定模块300,用于从待处理预测单元在各参考帧下相应的AMVP中,选取AMVP代价最小的AMVP作为目标AMVP,具体包括:
每确定出待处理预测单元在一参考帧下相应的AMVP,则将所确定AMVP的AMVP代价与参考AMVP代价进行比对,若所确定AMVP的AMVP代价小于参考AMVP代价,将参考AMVP代价更新为所确定AMVP的AMVP代价,直至待处理预测单元在所有参考帧下相应的AMVP被确定出;
将最后更新的参考AMVP代价相应的AMVP,作为目标AMVP。
可选的,第一比对选择模块400,用于分别通过所述候选参考帧和第一参考帧对待处理预测单元进行运动估计,根据运动估计得到的候选参考帧和第一参考帧相应的编码代价,从候选参考帧和第一参考帧中确定待处理预测单元的目标参考帧,具体包括:
通过候选参考帧对待处理预测单元进行运动估计,得出候选参考帧相应的编码代价;及,通过第一参考帧对待处理预测单元进行运动估计,得出第一参考帧相应的编码代价;
从候选参考帧和第一参考帧中选择编码代价最小的参考帧,作为待处理预测单元的目标参考帧。
可选的,图10示出了本公开实施例提供的帧间预测装置的另一结构框图,结合图9和图10所示,该装置还可以包括:
第二比对选择模块500,用于若所述候选参考帧与第一参考帧相同,从候选参考帧和第一参考帧中选择一参考帧,作为待处理预测单元的目标参考帧。
可选的,参考已编码块确定模块100,用于确定与待处理编码块的待处理预测单元空间相邻的至少一个参考已编码块,具体包括:
确定与待处理预测单元空间相邻,且与待处理预测单元的边点相接的至少一个已编码块。
可选的,作为一种可选示例,所述至少一个参考已编码块可以包括:与待处理预测单元右上边点相接的已编码块,与待处理预测单元右上边点相接,且位于待处理预测单元上边的已编码块,与待处理预测单元左上边点相接的已编码块,与待处理预测单元左下边点相接的已编码块,与待处理预测单元左下边点相接,且位于待处理预测单元左边的已编码块。
可选的,图11示出了本公开实施例提供的帧间预测装置的再一结构框图,结合图9和图11所示,该装置还可以包括:
编码结果确定模块600,用于使用目标参考帧对待处理预测单元进行运动估计,确定待处理预测单元相应的残差块,对残差块进行变换和量化,再对变换和量化后的残差块进行重排和熵编码,得到待处理预测单元的视频编码结果。
本公开实施例提供的帧间预测装置可降低目标参考帧选择的处理复杂度,使得视频编码的复杂度降低,提高视频编码的效率。
上文描述的帧间预测装置可以程序模块的形式装载于视频编码设备中,可选的,图12示出了视频编码设备的硬件结构框图,参照图12,该视频编码设备可以包括:至少一个中央处理器1,至少一个通信接口2,至少一个存储器3,至少一个通信总线4和至少一个图形处理器5。
在本公开实施例中,处理器1、通信接口2、存储器3、通信总线4、图形处理器5的数量为至少一个,且处理器1、通信接口2、存储器3通过通信总线4完成相互间的通信。
其中,存储器存储有可适于中央处理器或图形处理器执行的程序,所述程序用于:
确定与待处理编码块的待处理预测单元空间相邻的至少一个参考已编码块;
对于预设定的各参考帧,根据各参考已编码块的参考信息,分别确定所述待处理预测单元在各参考帧下相应的AMVP;
从待处理预测单元在各参考帧下相应的AMVP中,确定目标AMVP,将所述目标AMVP对应的参考帧作为候选参考帧;
将所述候选参考帧与预确定的第一参考帧进行比对,若所述候选参考帧与第一参考帧不同,分别通过所述候选参考帧和第一参考帧对待处理预测单元进行运动估计,根据运动估计的候选参考帧和第一参考帧相应的编码代价,从候选参考帧和第一参考帧中确定待处理预测单元的目标参考帧;其中,第一参考帧为所述待处理编码块的子编码块中使用最多的参考帧。
可选的,所述程序的细化功能和扩展功能可参照上文相应部分描述。
进一步,本公开实施例还提供一种存储介质,该存储介质存储有适于中央处理器或图形处理器执行的程序,所述程序用于:
确定与待处理编码块的待处理预测单元空间相邻的至少一个参考已编码块;
对于预设定的各参考帧,根据各参考已编码块的参考信息,分别确定所述待处理预测单元在各参考帧下相应的AMVP;
从待处理预测单元在各参考帧下相应的AMVP中,确定目标AMVP,将所述目标AMVP对应的参考帧作为候选参考帧;
将所述候选参考帧与预确定的第一参考帧进行比对,若所述候选参考帧与第一参考帧不同,分别通过所述候选参考帧和第一参考帧对待处理预测单元进行运动估计,根据运动估计的候选参考帧和第一参考帧相应的编码代价,从候选参考帧和第一参考帧中确定待处理预测单元的目标参考帧;其中,第一参考帧为所述待处理编码块的子编码块中使用最多的参考帧。
另外,本公开实施例还提供了一种存储介质,存储介质用于存储程序代码,程序代码用于执行上述实施例提供的帧间预测方法。
本公开实施例还提供了一种包括指令的计算机程序产品,当其在服务器上运行时,使得服务器执行上述实施例提供的帧间预测方法。
可选的,所述程序的细化功能和扩展功能可参照上文相应部分描述。
本说明书中各个实施例采用递进的方式描述,每个实施例重点说明的都是与其他实施例的不同之处,各个实施例之间相同相似部分互相参见即可。对于实施例公开的装置而言,由于其与实施例公开的方法相对应,所以描述的比较简单,相关之处参见方法部分说明即可。
专业人员还可以进一步意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、计算机软件或者二者的结合来实现,为了清楚地说明硬件和软件的可互换性,在上述说明中已经按照功能一般性地描述了各示例的组成及步骤。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本公开的范围。
结合本文中所公开的实施例描述的方法或算法的步骤可以直接用硬件、处理器执行的软件模块,或者二者的结合来实施。软件模块可以置于随机存储器 (RAM)、内存、只读存储器(ROM)、电可编程ROM、电可擦除可编程ROM、寄存器、硬盘、可移动磁盘、CD-ROM、或技术领域内所公知的任意其它形式的存储介质中。
对所公开的实施例的上述说明,使本领域专业技术人员能够实现或使用本公开。对这些实施例的多种修改对本领域的专业技术人员来说将是显而易见的,本文中所定义的一般原理可以在不脱离本公开的核心思想或范围的情况下,在其它实施例中实现。因此,本公开将不会被限制于本文所示的这些实施例,而是要符合与本文所公开的原理和新颖特点相一致的最宽的范围。

Claims (17)

  1. 一种帧间预测方法,其特征在于,包括:
    确定与待处理编码块的待处理预测单元空间相邻的至少一个参考已编码块;
    对于预设定的各参考帧,根据各参考已编码块的参考信息,分别确定所述待处理预测单元在各参考帧下相应的先进运动矢量预测值AMVP;
    从待处理预测单元在各参考帧下相应的AMVP中,确定目标AMVP,将所述目标AMVP对应的参考帧作为候选参考帧;
    将所述候选参考帧与预确定的第一参考帧进行比对,若所述候选参考帧与第一参考帧不同,分别通过所述候选参考帧和第一参考帧对待处理预测单元进行运动估计,根据运动估计得到的候选参考帧和第一参考帧相应的编码代价,从候选参考帧和第一参考帧中确定待处理预测单元的目标参考帧;其中,第一参考帧为所述待处理编码块的子编码块中使用最多的参考帧。
  2. 根据权利要求1所述的帧间预测方法,其特征在于,所述根据各参考已编码块的参考信息,分别确定所述待处理预测单元在各参考帧下相应的AMVP包括:
    逐一遍历预设定的各参考帧,在遍历到任一参考帧时,在该参考帧下,分别根据该参考帧,各参考已编码块相应的参考帧和运动矢量,确定待处理预测单元在该参考帧下相对于各参考已编码块的运动矢量,得到待处理预测单元在该参考帧下相应的候选运动矢量;
    在该参考帧下,确定待处理预测单元相对于各候选运动矢量的AMVP代价,将AMVP代价最小的候选运动矢量,作为待处理预测单元在该参考帧下相应的AMVP,以在遍历完预设定的各参考帧后,得到待处理预测单元在各参考帧下相应的AMVP。
  3. 根据权利要求2所述的帧间预测方法,其特征在于,所述在该参考帧下,分别根据该参考帧,各参考已编码块相应的参考帧和运动矢量,确定待处理预测单元在该参考帧下相对于各参考已编码块的运动矢量包括:
    在该参考帧下,对于任一参考已编码块,将该参考帧与该参考已编码块相应的参考帧的比值,乘以该参考已编码块相应的运动矢量,得到待处理预测单 元在该参考帧下相对于该参考已编码块的运动矢量,以确定出待处理预测单元在该参考帧下相对于各参考已编码块的运动矢量。
  4. 根据权利要求2所述的帧间预测方法,其特征在于,所述在该参考帧下,确定待处理预测单元相对于各候选运动矢量的AMVP代价包括:
    在该参考帧下,对于任一候选运动矢量,确定相对于该参考帧偏离了该候选运动矢量距离,且与待处理预测单元相同形状的参考块,计算待处理预测单元与该参考块的间的误差和,得到待处理预测单元相对于该候选运动矢量的AMVP代价,以确定出待处理预测单元相对于各候选运动矢量的AMVP代价。
  5. 根据权利要求1-4任一项所述的帧间预测方法,其特征在于,所述从待处理预测单元在各参考帧下相应的AMVP中,确定目标AMVP包括:
    从待处理预测单元在各参考帧下相应的AMVP中,选取AMVP代价最小的AMVP作为目标AMVP。
  6. 根据权利要求5所述的帧间预测方法,其特征在于,所述从待处理预测单元在各参考帧下相应的AMVP中,选取AMVP代价最小的AMVP作为目标AMVP包括:
    每确定出待处理预测单元在一参考帧下相应的AMVP,则将所确定AMVP的AMVP代价与参考AMVP代价进行比对,若所确定AMVP的AMVP代价小于参考AMVP代价,将参考AMVP代价更新为所确定AMVP的AMVP代价,直至待处理预测单元在所有参考帧下相应的AMVP被确定出;
    将最后更新的参考AMVP代价相应的AMVP,作为目标AMVP。
  7. 根据权利要求1所述的帧间预测方法,其特征在于,所述分别通过所述候选参考帧和第一参考帧对待处理预测单元进行运动估计,根据运动估计得到的候选参考帧和第一参考帧相应的编码代价,从候选参考帧和第一参考帧中确定待处理预测单元的目标参考帧包括:
    通过候选参考帧对待处理预测单元进行运动估计,得出候选参考帧相应的编码代价;及,通过第一参考帧对待处理预测单元进行运动估计,得出第一参考帧相应的编码代价;
    从候选参考帧和第一参考帧中选择编码代价最小的参考帧,作为待处理预测单元的目标参考帧。
  8. 根据权利要求1或7所述的帧间预测方法,其特征在于,还包括:
    若所述候选参考帧与第一参考帧相同,从候选参考帧和第一参考帧中选择一参考帧,作为待处理预测单元的目标参考帧。
  9. 根据权利要求1所述的帧间预测方法,其特征在于,所述确定与待处理编码块的待处理预测单元空间相邻的至少一个参考已编码块包括:
    确定与待处理预测单元空间相邻,且与待处理预测单元的边点相接的至少一个已编码块。
  10. 根据权利要求1所述的帧间预测方法,其特征在于,还包括:
    使用目标参考帧对待处理预测单元进行运动估计,确定待处理预测单元相应的残差块,对残差块进行变换和量化,再对变换和量化后的残差块进行重排和熵编码,得到待处理预测单元的视频编码结果。
  11. 一种帧间预测装置,其特征在于,包括:
    参考已编码块确定模块,用于确定与待处理编码块的待处理预测单元空间相邻的至少一个参考已编码块;
    AMVP确定模块,用于对于预设定的各参考帧,根据各参考已编码块的参考信息,分别确定所述待处理预测单元在各参考帧下相应的先进运动矢量预测值AMVP;
    候选参考帧确定模块,用于从待处理预测单元在各参考帧下相应的AMVP中,确定目标AMVP,将所述目标AMVP对应的参考帧作为候选参考帧;
    第一比对选择模块,用于将所述候选参考帧与预确定的第一参考帧进行比对,若所述候选参考帧与第一参考帧不同,分别通过所述候选参考帧和第一参考帧对待处理预测单元进行运动估计,根据运动估计得到的候选参考帧和第一参考帧相应的编码代价,从候选参考帧和第一参考帧中确定待处理预测单元的目标参考帧;其中,第一参考帧为所述待处理编码块的子编码块中使用最多的参考帧。
  12. 根据权利要求11所述的帧间预测装置,其特征在于,所述AMVP确定模块,用于根据各参考已编码块的参考信息,分别确定所述待处理预测单元在各参考帧下相应的AMVP,具体包括:
    逐一遍历预设定的各参考帧,在遍历到任一参考帧时,在该参考帧下,分 别根据该参考帧,各参考已编码块相应的参考帧和运动矢量,确定待处理预测单元在该参考帧下相对于各参考已编码块的运动矢量,得到待处理预测单元在该参考帧下相应的候选运动矢量;
    在该参考帧下,确定待处理预测单元相对于各候选运动矢量的AMVP代价,将AMVP代价最小的候选运动矢量,作为待处理预测单元在该参考帧下相应的AMVP,以在遍历完预设定的各参考帧后,得到待处理预测单元在各参考帧下相应的AMVP。
  13. 根据权利要求12所述的帧间预测装置,其特征在于,所述AMVP确定模块,用于在该参考帧下,分别根据该参考帧,各参考已编码块相应的参考帧和运动矢量,确定待处理预测单元在该参考帧下相对于各参考已编码块的运动矢量,具体包括:
    在该参考帧下,对于任一参考已编码块,将该参考帧与该参考已编码块相应的参考帧的比值,乘以该参考已编码块相应的运动矢量,得到待处理预测单元在该参考帧下相对于该参考已编码块的运动矢量,以确定出待处理预测单元在该参考帧下相对于各参考已编码块的运动矢量;
    所述AMVP确定模块,用于在该参考帧下,确定待处理预测单元相对于各候选运动矢量的AMVP代价,具体包括:
    在该参考帧下,对于任一候选运动矢量,确定相对于该参考帧偏离了该候选运动矢量距离,且与待处理预测单元相同形状的参考块,计算待处理预测单元与该参考块的间的误差和,得到待处理预测单元相对于该候选运动矢量的AMVP代价,以确定出待处理预测单元相对于各候选运动矢量的AMVP代价。
  14. 根据权利要求11-13任一项所述的帧间预测装置,其特征在于,还包括:
    编码结果确定模块,用于使用目标参考帧对待处理预测单元进行运动估计,确定待处理预测单元相应的残差块,对残差块进行变换和量化,再对变换和量化后的残差块进行重排和熵编码,得到待处理预测单元的视频编码结果。
  15. 一种存储介质,其特征在于,所述存储介质存储有适于中央处理器或图形处理器执行的程序,所述程序用于:
    确定与待处理编码块的待处理预测单元空间相邻的至少一个参考已编码 块;
    对于预设定的各参考帧,根据各参考已编码块的参考信息,分别确定所述待处理预测单元在各参考帧下相应的先进运动矢量预测值AMVP;
    从待处理预测单元在各参考帧下相应的AMVP中,确定目标AMVP,将所述目标AMVP对应的参考帧作为候选参考帧;
    将所述候选参考帧与预确定的第一参考帧进行比对,若所述候选参考帧与第一参考帧不同,分别通过所述候选参考帧和第一参考帧对待处理预测单元进行运动估计,根据运动估计的候选参考帧和第一参考帧相应的编码代价,从候选参考帧和第一参考帧中确定待处理预测单元的目标参考帧;其中,第一参考帧为所述待处理编码块的子编码块中使用最多的参考帧。
  16. 一种视频编码设备,所述视频编码设备包括:
    处理器、通信接口、存储器和通信总线;
    其中,所述处理器、所述通信接口和所述存储器通过所述通信总线完成相互间的通信;所述通信接口为通信模块的接口;
    所述存储器,用于存储程序代码,并将所述程序代码传输给所述处理器;
    所述处理器,用于调用存储器中程序代码的指令执行权利要求1-10任意一项所述的帧间预测方法。
  17. 一种包括指令的计算机程序产品,当其在计算机上运行时,使得所述计算机执行权利要求1-10任意一项所述的帧间预测方法。
PCT/CN2018/103637 2017-10-13 2018-08-31 一种帧间预测方法、装置及存储介质 WO2019072049A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP18865441.2A EP3697093A4 (en) 2017-10-13 2018-08-31 INTERMEDIATE PREDICTION METHOD, DEVICE, AND STORAGE MEDIUM
US16/597,606 US11076168B2 (en) 2017-10-13 2019-10-09 Inter-prediction method and apparatus, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710955097.5 2017-10-13
CN201710955097.5A CN109672894B (zh) 2017-10-13 2017-10-13 一种帧间预测方法、装置及存储介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/597,606 Continuation US11076168B2 (en) 2017-10-13 2019-10-09 Inter-prediction method and apparatus, and storage medium

Publications (1)

Publication Number Publication Date
WO2019072049A1 true WO2019072049A1 (zh) 2019-04-18

Family

ID=66101239

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/103637 WO2019072049A1 (zh) 2017-10-13 2018-08-31 一种帧间预测方法、装置及存储介质

Country Status (4)

Country Link
US (1) US11076168B2 (zh)
EP (1) EP3697093A4 (zh)
CN (1) CN109672894B (zh)
WO (1) WO2019072049A1 (zh)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102417212B1 (ko) * 2019-06-17 2022-07-05 베이징 다지아 인터넷 인포메이션 테크놀로지 컴퍼니 리미티드 비디오 코딩에서의 디코더 측 모션 벡터 개선을 위한 방법들 및 장치들
CN110545425B (zh) * 2019-08-21 2021-11-16 浙江大华技术股份有限公司 一种帧间预测方法、终端设备以及计算机存储介质
CN112565789B (zh) * 2019-11-13 2021-09-17 腾讯科技(深圳)有限公司 视频解码及编码方法、装置、计算机可读介质及电子设备
CN111263151B (zh) * 2020-04-26 2020-08-25 腾讯科技(深圳)有限公司 视频编码方法、装置、电子设备和计算机可读存储介质
CN111770345B (zh) * 2020-07-22 2022-02-22 腾讯科技(深圳)有限公司 编码单元的运动估计方法、装置、设备及存储介质
CN111818342B (zh) * 2020-08-28 2020-12-11 浙江大华技术股份有限公司 帧间预测方法及预测装置
WO2022061573A1 (zh) * 2020-09-23 2022-03-31 深圳市大疆创新科技有限公司 运动搜索方法、视频编码装置及计算机可读存储介质
CN113079372B (zh) * 2021-06-07 2021-08-06 腾讯科技(深圳)有限公司 帧间预测的编码方法、装置、设备及可读存储介质
CN116684610A (zh) * 2023-05-17 2023-09-01 北京百度网讯科技有限公司 确定长期参考帧的参考状态的方法、装置及电子设备

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101222640A (zh) * 2007-01-09 2008-07-16 华为技术有限公司 确定参考帧的方法及其装置
CN103338372A (zh) * 2013-06-15 2013-10-02 浙江大学 一种视频处理方法及装置
US20130272409A1 (en) * 2012-04-12 2013-10-17 Qualcomm Incorporated Bandwidth reduction in video coding through applying the same reference index
CN103813166A (zh) * 2014-01-28 2014-05-21 浙江大学 一种低复杂度的hevc编码多参考帧的选择方法

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8755437B2 (en) * 2011-03-17 2014-06-17 Mediatek Inc. Method and apparatus for derivation of spatial motion vector candidate and motion vector prediction candidate
TWI580264B (zh) * 2011-11-10 2017-04-21 Sony Corp Image processing apparatus and method
WO2013107028A1 (en) * 2012-01-19 2013-07-25 Mediatek Singapore Pte. Ltd. Methods and apparatuses of amvp simplification
US20130294513A1 (en) * 2012-05-07 2013-11-07 Qualcomm Incorporated Inter layer merge list construction for video coding
JP2017011458A (ja) * 2015-06-19 2017-01-12 富士通株式会社 符号化データ生成プログラム、符号化データ生成方法および符号化データ生成装置
US10368083B2 (en) * 2016-02-15 2019-07-30 Qualcomm Incorporated Picture order count based motion vector pruning
CN116567212A (zh) * 2016-08-11 2023-08-08 Lx 半导体科技有限公司 编码/解码设备以及发送图像数据的设备

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101222640A (zh) * 2007-01-09 2008-07-16 华为技术有限公司 确定参考帧的方法及其装置
US20130272409A1 (en) * 2012-04-12 2013-10-17 Qualcomm Incorporated Bandwidth reduction in video coding through applying the same reference index
CN103338372A (zh) * 2013-06-15 2013-10-02 浙江大学 一种视频处理方法及装置
CN103813166A (zh) * 2014-01-28 2014-05-21 浙江大学 一种低复杂度的hevc编码多参考帧的选择方法

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3697093A4

Also Published As

Publication number Publication date
EP3697093A4 (en) 2020-10-28
US20200045330A1 (en) 2020-02-06
US11076168B2 (en) 2021-07-27
CN109672894A (zh) 2019-04-23
EP3697093A1 (en) 2020-08-19
CN109672894B (zh) 2022-03-08

Similar Documents

Publication Publication Date Title
WO2019072049A1 (zh) 一种帧间预测方法、装置及存储介质
JP7358436B2 (ja) マルチリファレンス予測のための動きベクトルの精密化
WO2019191890A1 (zh) 用于图像处理的方法和图像处理装置
CN107197306B (zh) 编码装置和方法、解码装置和方法以及存储介质
WO2013042888A2 (ko) 머지 후보 블록 유도 방법 및 이러한 방법을 사용하는 장치
JP6943482B2 (ja) フレーム内予測方法、装置、ビデオ符号化装置、及び記憶媒体
JP2010016454A (ja) 画像符号化装置および方法、画像復号装置および方法、並びにプログラム
WO2020177665A1 (en) Methods and apparatuses of video processing for bi-directional prediction with motion refinement in video coding systems
JP2021507589A (ja) 映像コーディングシステムにおけるインター予測による映像デコーディング方法及び装置
JP7330243B2 (ja) 低減されたメモリアクセスを用いてfrucモードでビデオデータを符号化又は復号する方法及び装置
JP2022513492A (ja) 構築されたアフィンマージ候補を導出する方法
WO2021188707A1 (en) Methods and apparatuses for simplification of bidirectional optical flow and decoder side motion vector refinement
CN116437101A (zh) 一种hevc编码的运动估计方法、装置及设备
NZ760521B2 (en) Motion vector refinement for multi-reference prediction

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18865441

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2018865441

Country of ref document: EP

Effective date: 20200513