CN110958443B - Fast encoding method for 360-degree video interframes - Google Patents


Info

Publication number
CN110958443B
CN110958443B (application CN201911293110.0A)
Authority
CN
China
Prior art keywords
coding unit
current
size
same
coding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911293110.0A
Other languages
Chinese (zh)
Other versions
CN110958443A (en)
Inventor
郁梅
吴志强
陈华
宋洋
徐海勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Blue Diamond Culture Media Co.,Ltd.
Original Assignee
Ningbo University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo University filed Critical Ningbo University
Priority to CN201911293110.0A
Publication of CN110958443A
Application granted
Publication of CN110958443B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10: Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106: Processing image signals
    • H04N13/161: Encoding, multiplexing or demultiplexing different image signal components
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103: Selection of coding mode or of prediction mode
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17: the unit being an image region, e.g. an object
    • H04N19/176: the region being a block, e.g. a macroblock
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention discloses a fast inter-frame coding method for 360-degree video. According to the content characteristics of 360-degree video, different regions such as moving blocks, static blocks, edge blocks, and non-edge blocks are guided to use different division depths and predictive-coding-mode search strategies. Whether the search of coding units at certain depths can be skipped is judged from information such as the motion attributes and texture characteristics of the coding units and the optimal predictive coding modes of processed coding units adjacent to the current coding unit, and the number of predictive coding modes that must be traversed when searching for the optimal predictive coding mode of each coding unit is reduced, thereby effectively reducing the computational complexity of 360-degree video coding and saving coding time.

Description

Fast encoding method for 360-degree video interframes
Technical Field
The invention relates to a video signal processing method, and in particular to a fast inter-frame coding method for 360-degree video based on motion and texture characteristics in High Efficiency Video Coding (HEVC).
Background
360-degree video is generally obtained by capturing scene information covering 360 degrees horizontally and 180 degrees vertically around a camera array composed of multiple cameras, and synthesizing the captured multi-view videos with stitching and fusion techniques. Because existing video coding standards such as HEVC and AVC (Advanced Video Coding) can only compress two-dimensional planar video, 360-degree video must first be projected onto a two-dimensional projection plane before it can be compressed by these standards. Currently, JVET has proposed a variety of two-dimensional projection formats, such as equirectangular projection, cube-map projection, octahedral projection, and segmented sphere projection.
360-degree video has a huge data volume and high coding complexity, so effective fast coding is needed to reduce the computational complexity of encoding and meet practical application requirements. Owing to the special capture conditions of 360-degree video and the transformation from a spherical surface to a two-dimensional plane, the picture characteristics of 360-degree video differ markedly from those of conventional video. Fast coding methods designed for conventional video do not consider the content characteristics of 360-degree video, so their coding efficiency can be improved further. Therefore, there is a need for a fast encoding method tailored to 360-degree video that reduces the computational complexity of 360-degree video encoding and increases its encoding speed.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a fast inter-frame coding method for 360-degree video that effectively reduces the computational complexity of 360-degree video coding and increases its coding speed, on the premise that picture quality does not degrade significantly after compression.
The technical scheme adopted by the invention to solve the above technical problem is as follows: a 360-degree video inter-frame fast coding method, which is based on the HEVC video coding standard and is used only for fast coding of inter-coded frames in 360-degree video, comprising the following steps:
step 1: let class0, class1, and class2 denote three prediction coding mode sets, respectively, and a prediction coding mode set class0 includes Skip mode and 2N × 2N merge mode among prediction coding modes; the prediction encoding mode set class1 includes 2N × 2N, 2N × N, N × 2N, N × N, 2N × nU, 2N × nD, nL × 2N, nR × 2N modes among the prediction encoding modes; the prediction coding mode set class2 contains Intra 2N × 2N, Intra PCM modes in the prediction coding mode; in the HEVC video coding standard, the size of the largest coding unit in an inter-coded frame is 64 × 64, and the division depth of the largest coding unit is an integer between 0 and 3; defining an interframe coding frame to be coded currently in a 360-degree video as a current frame;
step 2: dividing a current frame into a plurality of maximum coding units with the size of 64 multiplied by 64;
Step 3: define the maximum coding unit of size 64×64 currently to be processed in the current frame as the current maximum coding unit;
Step 4: judge whether the current maximum coding unit is located at the leftmost or uppermost side of the current frame. If so, recursively divide it into a 4-level quadtree composed of 1 coding unit with division depth 0 and size 64×64, 4 coding units with division depth 1 and size 32×32, 16 coding units with division depth 2 and size 16×16, and 64 coding units with division depth 3 and size 8×8; traverse each coding unit top-down from division depth 0 to division depth 3, searching for the optimal prediction coding mode of each coding unit in class0 ∪ class1 ∪ class2; then perform step 16. Otherwise, perform step 5. Here the symbol "∪" denotes set union;
Step 5: when the current maximum coding unit is located at the rightmost side of the current frame, determine whether the current maximum coding unit, its left maximum coding unit, its upper maximum coding unit, and its upper-left maximum coding unit are each a moving block or a static block; obtain the division depth used when coding the maximum coding unit co-located with the current maximum coding unit in the previous frame, denoted D_Col; obtain the division depth used when coding the two 32×32 coding units on the right side of the left maximum coding unit, denoted D_L; obtain the division depth used when coding the two 32×32 coding units on the lower side of the upper maximum coding unit, denoted D_T; obtain the division depth used when coding the 32×32 coding unit on the lower-right side of the upper-left maximum coding unit, denoted D_LT.
When the current maximum coding unit is not located at the rightmost side of the current frame, additionally determine whether the upper-right maximum coding unit of the current maximum coding unit is a moving block or a static block; and obtain the division depth used when coding the 32×32 coding unit on the lower-left side of that upper-right maximum coding unit, denoted D_RT.
Here, D_Col, D_L, D_T, D_LT, and D_RT are each an integer between 0 and 3;
step 6: let dminRepresents the lower limit of the search for the partition depth when the current maximum coding unit is coded, and let dmaxRepresenting the upper limit of the search of the division depth when the current maximum coding unit is coded;
if the current maximum coding unit is a motion block, let d min0 and dmaxStep 7 is then performed;
If the current maximum coding unit is a static block located at the rightmost side of the current frame, let d_min = min(D_L, D_T, D_LT) and let
[Equation image BDA0002319654510000031 defining d_max]
then perform step 7. When the current maximum coding unit is not located at the rightmost side of the current frame, let d_min = min(D_L, D_T, D_LT, D_RT) and let
[Equation image BDA0002319654510000032 defining d_max]
then perform step 7;
Here, M_L denotes the left maximum coding unit of the current maximum coding unit, M_T its upper maximum coding unit, M_LT its upper-left maximum coding unit, and M_RT its upper-right maximum coding unit; min() is the minimum function and max() is the maximum function;
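The search-range rule of step 6 can be sketched using only what the text states: a moving maximum coding unit searches every depth (d_min = 0, d_max = 3), while a static one lower-bounds its search at the minimum division depth of its coded neighbours. The d_max formula for static blocks appears only as an equation image in the source, so this sketch takes it as a parameter (default 3) rather than reconstructing it:

```python
# Hedged sketch of step 6. `static_dmax` stands in for the d_max formula that
# the source gives only as an equation image; it is NOT reconstructed here.
def depth_search_range(is_moving, neighbour_depths, static_dmax=3):
    if is_moving:
        return 0, 3  # moving blocks search all division depths
    # Static block: skip depths shallower than the best coded neighbour.
    d_min = min(neighbour_depths)  # D_L, D_T, D_LT (plus D_RT when available)
    return d_min, static_dmax
```

For example, a static block whose neighbour depths are (2, 1, 3) would start its search at depth 1, skipping the 64×64 level entirely.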
Step 7: if d_min = 0, the current maximum coding unit initially consists of 1 coding unit of size 64×64, then perform step 8; if d_min = 1, initially divide the current maximum coding unit into 4 coding units of size 32×32, then perform step 8; if d_min = 2, initially divide it into 16 coding units of size 16×16, then perform step 8; if d_min = 3, initially divide it into 64 coding units of size 8×8, then perform step 8;
Step 8: define the coding unit currently to be processed in the current maximum coding unit as the current coding unit;
Step 9: when the current coding unit is located at the rightmost side of the current frame, determine whether the current coding unit and its same-size left, upper, and upper-left coding units are each a moving block or a static block. When the current coding unit is not located at the rightmost side of the current frame, determine whether the current coding unit and its same-size left, upper, upper-left, and upper-right coding units are each a moving block or a static block. Also determine whether the current coding unit is an edge block or a non-edge block;
Step 10: if the current coding unit is a moving block and is located at the rightmost side of the current frame, two cases are distinguished. First case: when the same-size left, upper, and upper-left coding units of the current coding unit are all static blocks, search for the optimal predictive coding mode of the current coding unit only in class1 ∪ class2, then perform step 11. Second case: when at least one of the same-size left, upper, and upper-left coding units of the current coding unit is a moving block, then if the current coding unit is an edge block, search for its optimal predictive coding mode only in class2 and perform step 11; if it is a non-edge block, search only in class1 ∪ class2 and perform step 11;
If the current coding unit is a moving block and is not located at the rightmost side of the current frame, two cases are distinguished. First case: when at least three of the same-size left, upper, upper-left, and upper-right coding units of the current coding unit are static blocks, search for the optimal predictive coding mode of the current coding unit only in class1 ∪ class2, then perform step 11. Second case: when at least two of the same-size left, upper, upper-left, and upper-right coding units of the current coding unit are moving blocks, then if the current coding unit is an edge block, search only in class2 and perform step 11; if it is a non-edge block, search only in class1 ∪ class2 and perform step 11;
If the current coding unit is a static block and is located at the rightmost side of the current frame, two cases are distinguished. First case: when the same-size left, upper, and upper-left coding units of the current coding unit are all static blocks, then if the optimal predictive coding modes of all three are the Skip mode, or the current coding unit is a non-edge block, search for the optimal predictive coding mode of the current coding unit only in class0 and perform step 11; otherwise, search only in class0 ∪ class1 and perform step 11. Second case: when at least one of the same-size left, upper, and upper-left coding units of the current coding unit is a moving block, search only in class0 ∪ class1, then perform step 11;
If the current coding unit is a static block and is not located at the rightmost side of the current frame, two cases are distinguished. First case: when at least three of the same-size left, upper, upper-left, and upper-right coding units of the current coding unit are static blocks, then if the optimal predictive coding mode of at least three of them is the Skip mode, or the current coding unit is a non-edge block, search for the optimal predictive coding mode of the current coding unit only in class0 and perform step 11; otherwise, search only in class0 ∪ class1 and perform step 11. Second case: when at least two of the same-size left, upper, upper-left, and upper-right coding units of the current coding unit are moving blocks, search only in class0 ∪ class1, then perform step 11;
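The mode-set selection of steps 9 and 10 can be sketched for the case of a coding unit that is not on the rightmost edge of the frame (so four same-size coded neighbours are available). Set names mirror the patent; the mode strings are illustrative placeholders:

```python
# Hedged sketch of the mode-set selection rules of steps 9-10 (non-rightmost
# case). Mode strings are illustrative, not HEVC reference-software values.
CLASS0 = {"SKIP", "MERGE_2Nx2N"}
CLASS1 = {"2Nx2N", "2NxN", "Nx2N", "NxN", "2NxnU", "2NxnD", "nLx2N", "nRx2N"}
CLASS2 = {"INTRA_2Nx2N", "INTRA_PCM"}

def candidate_modes(cu_is_moving, cu_is_edge, neighbours):
    """neighbours: list of (is_moving, best_mode) for the same-size left,
    upper, upper-left, and upper-right coded coding units."""
    n_static = sum(1 for moving, _ in neighbours if not moving)
    if cu_is_moving:
        if n_static >= 3:                    # first case for moving blocks
            return CLASS1 | CLASS2
        # at least two moving neighbours: edge blocks go intra-only
        return CLASS2 if cu_is_edge else CLASS1 | CLASS2
    # static current coding unit
    if n_static >= 3:                        # first case for static blocks
        n_skip = sum(1 for moving, best in neighbours
                     if not moving and best == "SKIP")
        if n_skip >= 3 or not cu_is_edge:
            return CLASS0                    # Skip/Merge only
        return CLASS0 | CLASS1
    return CLASS0 | CLASS1                   # at least two moving neighbours
```

The effect is that a static non-edge block surrounded by Skip-coded static neighbours tests only two modes instead of twelve, which is where most of the mode-search saving comes from.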
Step 11: when the size of the current coding unit is 64×64, if one of the following three conditions is satisfied, stop recursively dividing the current coding unit and perform step 16; otherwise, divide the current coding unit into 4 coding units of size 32×32, then perform step 12. Condition 1: d_max = 0; condition 2: the current coding unit is a static block and a non-edge block, and its current division depth is greater than or equal to the division depth used when coding the co-located coding unit in the previous frame; condition 3: the optimal predictive coding mode of the current coding unit is the Skip mode;
When the size of the current coding unit is 32×32, if one of the following three conditions is satisfied, stop recursively dividing the current coding unit and perform step 13; otherwise, divide the current coding unit into 4 coding units of size 16×16, then perform step 12. Condition 1: d_max = 1; condition 2: the current coding unit is a static block and a non-edge block, and its current division depth is greater than or equal to the division depth used when coding the co-located coding unit in the previous frame; condition 3: the optimal predictive coding mode of the current coding unit is the Skip mode;
When the size of the current coding unit is 16×16, if one of the following three conditions is satisfied, stop recursively dividing the current coding unit and perform step 13; otherwise, divide the current coding unit into 4 coding units of size 8×8, then perform step 12. Condition 1: d_max = 2; condition 2: the current coding unit is a static block and a non-edge block, and its current division depth is greater than or equal to the division depth used when coding the co-located coding unit in the previous frame; condition 3: the optimal predictive coding mode of the current coding unit is the Skip mode;
When the size of the current coding unit is 8×8, perform step 13;
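The three split-termination conditions of step 11 are the same at every size, with only the d_max bound changing, so they can be sketched as one predicate:

```python
# Minimal sketch of the split-termination test of step 11: recursion into the
# current coding unit stops as soon as any of the three conditions holds.
def stop_splitting(depth, d_max, is_static, is_edge, colocated_depth, best_mode):
    cond1 = depth >= d_max                      # search upper limit reached
    cond2 = (is_static and not is_edge
             and depth >= colocated_depth)      # static non-edge, depth caught
                                                # up with previous frame
    cond3 = best_mode == "SKIP"                 # best mode is Skip
    return cond1 or cond2 or cond3
```

For a 64×64 coding unit, `depth` is 0 and condition 1 fires when d_max = 0; for 32×32 it fires at d_max = 1, and so on, matching the per-size wording above.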
Step 12: take, as the current coding unit, the first to-be-processed coding unit in the next layer obtained by splitting the current coding unit (each with half the side length), then return to step 9 and continue;
Step 13: take, as the current coding unit, the next to-be-processed coding unit in the same layer that shares the same parent node with the current coding unit, then return to step 9 and continue; once all 4 coding units sharing that parent node in the same layer have been processed, perform step 14;
Step 14: take, as the current coding unit, the next to-be-processed coding unit in the layer above the current coding unit that is a sibling node of the current coding unit's parent node, then return to step 9 and continue; once all 4 coding units that are sibling nodes of the parent node in that layer have been processed, perform step 15;
Step 15: judge whether all division depths from d_min to d_max have been processed; if so, perform step 16; otherwise, perform step 14;
Step 16: code the current maximum coding unit, then take the next to-be-processed maximum coding unit of size 64×64 in the current frame as the current maximum coding unit and return to step 4; once all maximum coding units in the current frame have been coded, perform step 17;
Step 17: take the next inter-coded frame to be coded in the 360-degree video as the current frame, then return to step 2, until all inter-coded frames in the 360-degree video have been processed.
Compared with the prior art, the invention has the advantages that:
the method considers the difference of the content characteristics of the 360-degree video and the traditional HEVC video, and guides different areas such as a moving block, a static block, an edge block, a non-edge block and the like to use different depth division and predictive coding mode search strategies according to the content characteristics of the 360-degree video, so that whether the search of coding units under certain depths is skipped or not can be judged according to the motion attributes and the texture characteristics of the coding units, the optimal predictive coding modes of processed coding units adjacent to the current coding unit and other information, the number of the predictive coding modes required to be traversed is reduced for the search of the optimal predictive coding mode of each coding unit, and the purposes of effectively reducing the coding calculation complexity of the 360-degree video and saving the coding time are achieved.
Drawings
FIG. 1 is a flow chart of encoding a frame of inter-coded frames using the method of the present invention;
FIG. 2 is a diagram of recursive partitioning of a largest coding unit of size 64 × 64 and the corresponding partitioning result;
FIG. 3 is a schematic diagram of the position relationship between the current LCU and the left LCU, the upper left LCU, and the upper right LCU;
FIG. 4 is a flowchart illustrating the searching of the optimal predictive coding mode for the current coding unit according to the method of the present invention.
Detailed Description
The invention is described in further detail below with reference to the accompanying drawings and embodiments.
The invention provides a 360-degree video inter-frame rapid coding method which is based on the HEVC video coding standard and is only used for rapidly coding inter-frame coding frames in a 360-degree video, and the method comprises the following steps:
step 1: let class0, class1, and class2 denote three prediction coding mode sets, respectively, and a prediction coding mode set class0 includes Skip mode and 2N × 2N merge mode among prediction coding modes; the prediction encoding mode set class1 includes 2N × 2N, 2N × N, N × 2N, N × N, 2N × nU, 2N × nD, nL × 2N, nR × 2N modes among the prediction encoding modes; the prediction coding mode set class2 contains Intra 2N × 2N, Intra PCM modes in the prediction coding mode; in the HEVC video coding standard, the size of the largest coding unit in an inter-coded frame is 64 × 64, and the division depth of the largest coding unit is an integer between 0 and 3; and defining the current frame to be coded in the 360-degree video as the current frame.
Step 2: dividing a current frame into a plurality of maximum coding units with the size of 64 multiplied by 64; FIG. 1 is a flow chart illustrating the encoding of a frame of inter-coded frames.
Step 3: define the maximum coding unit of size 64×64 currently to be processed in the current frame as the current maximum coding unit.
Step 4: judge whether the current maximum coding unit is located at the leftmost or uppermost side of the current frame. If so, recursively divide it into 1 coding unit with division depth 0 and size 64×64, 4 coding units with division depth 1 and size 32×32, 16 coding units with division depth 2 and size 16×16, and 64 coding units with division depth 3 and size 8×8; traverse each coding unit top-down from division depth 0 to division depth 3, searching for the optimal prediction coding mode of each coding unit in class0 ∪ class1 ∪ class2; then perform step 16. Otherwise, perform step 5. Here the symbol "∪" denotes set union.
Step 5: when the current maximum coding unit is located at the rightmost side of the current frame, determine whether the current maximum coding unit, its left maximum coding unit, its upper maximum coding unit, and its upper-left maximum coding unit are each a moving block or a static block; obtain the division depth used when coding the maximum coding unit co-located with the current maximum coding unit in the previous frame, denoted D_Col; obtain the division depth used when coding the two 32×32 coding units on the right side of the left maximum coding unit (the two gray blocks labeled 1 and 2 in FIG. 3), denoted D_L; obtain the division depth used when coding the two 32×32 coding units on the lower side of the upper maximum coding unit (the two gray blocks labeled 4 and 5 in FIG. 3), denoted D_T; obtain the division depth used when coding the 32×32 coding unit on the lower-right side of the upper-left maximum coding unit (the gray block labeled 3 in FIG. 3), denoted D_LT.
When the current maximum coding unit is not located at the rightmost side of the current frame, additionally determine whether its upper-right maximum coding unit is a moving block or a static block; and obtain the division depth used when coding the 32×32 coding unit on the lower-left side of that upper-right maximum coding unit (the gray block labeled 6 in FIG. 3), denoted D_RT.
Here, D_Col, D_L, D_T, D_LT, and D_RT are each an integer between 0 and 3.
Here, existing techniques are used to determine whether each maximum coding unit is a moving block or a static block, for example by using the motion information between the current frame and its previous frame. The process of judging whether the current maximum coding unit is a moving block or a static block may proceed as follows: compute the absolute difference between the pixel value of each pixel in the current maximum coding unit and the pixel value of the corresponding pixel in the co-located maximum coding unit of the previous frame; then judge whether the mean of all these absolute differences exceeds a set threshold. If it does, the current maximum coding unit is judged to be a moving block; otherwise, it is judged to be a static block. Other methods of discriminating between moving and static blocks, such as optical flow, may also be used in implementing the invention.
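The mean-absolute-difference test just described can be sketched as follows; the threshold value is illustrative, since the source does not fix one:

```python
import numpy as np

# Sketch of the moving/static test described above: mean absolute difference
# between the current LCU and its co-located LCU in the previous frame,
# compared against a threshold. The threshold of 2.0 is an assumption for
# illustration only; the patent leaves it as a tunable parameter.
def is_moving_block(cur_lcu, prev_lcu, threshold=2.0):
    diff = np.abs(cur_lcu.astype(np.int32) - prev_lcu.astype(np.int32))
    return float(diff.mean()) > threshold
```

Casting to a signed type before subtracting avoids the wrap-around that unsigned 8-bit pixel values would otherwise produce.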
Here, the left maximum coding unit of the current maximum coding unit is the maximum coding unit located to its left; likewise, its upper, upper-left, and upper-right maximum coding units are the maximum coding units located above it, to its upper left, and to its upper right, respectively.
Step 6: let d_min denote the lower limit of the division-depth search when the current maximum coding unit is coded, and let d_max denote the upper limit of that search.
If the current maximum coding unit is a motion block, let d_min = 0 and d_max = 3, then perform step 7.
If the current maximum coding unit is a static block: when it is located at the rightmost side of the current frame, let d_min = min(D_L, D_T, D_LT) and let d_max = max(D_Col, D_L, D_T, D_LT) if M_L, M_T and M_LT are all static blocks, and d_max = 3 otherwise; then perform step 7. When it is not located at the rightmost side of the current frame, let d_min = min(D_L, D_T, D_LT, D_RT) and let d_max = max(D_Col, D_L, D_T, D_LT, D_RT) if M_L, M_T, M_LT and M_RT are all static blocks, and d_max = 3 otherwise; then perform step 7.
Above, M_L denotes the left maximum coding unit of the current maximum coding unit, M_T its upper maximum coding unit, M_LT its upper-left maximum coding unit, and M_RT its upper-right maximum coding unit; min() is the minimum function and max() is the maximum function.
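The search bounds of step 6 for a static largest coding unit can be sketched as a small function. The dictionary keys are illustrative names, and the d_max rule used here (search to the deepest neighbouring depth unless some neighbour is a moving block, in which case search to depth 3) is an assumed reading for illustration, not a formula quoted from the patent:

```python
def depth_search_bounds(depths, moving):
    """Compute (d_min, d_max) for a static largest coding unit (step 6).

    depths: division depths of the co-located LCU ('Col') and of the
    spatial neighbours 'L', 'T', 'LT' (plus 'RT' when the LCU is not at
    the right edge of the frame).
    moving: moving/static flag of each spatial neighbour.
    The d_max rule below is an assumption made for this sketch.
    """
    spatial = [k for k in ('L', 'T', 'LT', 'RT') if k in depths]
    d_min = min(depths[k] for k in spatial)       # d_min = min of neighbour depths
    if any(moving[k] for k in spatial):
        d_max = 3                                 # assumed fallback: a neighbour moves
    else:
        d_max = max(depths[k] for k in ['Col'] + spatial)
    return d_min, d_max

# Right-edge LCU with calm, shallowly split neighbours: search depths 1..2 only.
depths = {'Col': 2, 'L': 1, 'T': 2, 'LT': 1}
moving = {'L': False, 'T': False, 'LT': False}
print(depth_search_bounds(depths, moving))  # (1, 2)
```

For a moving current LCU the bounds are simply (0, 3), as step 6 states.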
For a largest coding unit of size 64 × 64, recursive division into smaller coding units is a top-down search of the 4-level quadtree shown in fig. 2, where the division depth is an integer between 0 and 3. Fig. 2 is a schematic diagram of the recursive division of one largest coding unit of size 64 × 64; the dashed lines in fig. 2 identify coding units that do not need to be processed, and the corresponding division result is also shown in fig. 2. d_min determines the initial division depth to be searched, i.e. the level of the search tree in fig. 2 from which the downward search starts; for example, if d_min = 1, the 64 × 64 coding unit is skipped and the search starts directly from the 32 × 32 coding units. Similarly, d_max determines the deepest division depth to be searched; for example, if d_max = 2, the smallest coding units searched in the maximum coding unit have size 16 × 16, and the 64 coding units of size 8 × 8 need not be searched. Because fewer coding units need to be searched, the computational complexity of encoding is reduced.
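The complexity saving from restricting the search to depths d_min..d_max can be quantified by counting the coding units visited, since depth d of the quadtree holds 4^d coding units:

```python
def coding_units_searched(d_min, d_max):
    """Number of coding units visited in one 64x64 largest coding unit
    when the quadtree search is restricted to depths d_min..d_max
    (each depth d of the 4-level tree contains 4**d coding units)."""
    return sum(4 ** d for d in range(d_min, d_max + 1))

# A full search over depths 0..3 visits 1 + 4 + 16 + 64 = 85 coding units;
# restricting to depths 1..2 visits only 4 + 16 = 20.
print(coding_units_searched(0, 3))  # 85
print(coding_units_searched(1, 2))  # 20
```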
Step 7: if d_min = 0, the current maximum coding unit initially consists of 1 coding unit of size 64 × 64, then perform step 8; if d_min = 1, the current maximum coding unit is initially divided into 4 coding units of size 32 × 32, then perform step 8; if d_min = 2, it is initially divided into 16 coding units of size 16 × 16, then perform step 8; if d_min = 3, it is initially divided into 64 coding units of size 8 × 8, then perform step 8.
Step 8: define the coding unit currently to be processed in the current maximum coding unit as the current coding unit; a flowchart of the search for the optimal prediction coding mode of the current coding unit is shown in fig. 4.
Step 9: when the current coding unit is located at the rightmost side of the current frame, respectively determine whether the current coding unit, its same-size left coding unit, its same-size upper coding unit and its same-size upper-left coding unit are moving blocks or static blocks; when the current coding unit is not located at the rightmost side of the current frame, respectively determine whether the current coding unit, its same-size left coding unit, its same-size upper coding unit, its same-size upper-left coding unit and its same-size upper-right coding unit are moving blocks or static blocks; and determine whether the current coding unit is an edge block or a non-edge block.
Here, whether each coding unit is a moving block or a static block is determined in the same way as for the current largest coding unit in step 5. Whether the current coding unit is an edge block or a non-edge block may be determined with an edge detection algorithm that measures the texture of the current coding unit; for example, the Sobel operator computes the edge strength at each pixel, and if the average edge strength over all pixels in the current coding unit is greater than a set threshold, the current coding unit is an edge block, otherwise a non-edge block. Other edge detection methods, such as the Prewitt operator or the Canny operator, may also be used here.
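A minimal Sobel-based edge-block test along the lines described above might look as follows; the threshold of 30 is an illustrative assumption, and border pixels are skipped for simplicity:

```python
def is_edge_block(block, threshold=30.0):
    """Classify a coding unit as edge block (True) or non-edge block (False).

    block is a 2-D list of luma samples.  The mean Sobel gradient magnitude
    over the interior pixels is compared against a threshold; both the
    threshold value and the border handling are assumptions of this sketch.
    """
    h, w = len(block), len(block[0])
    total, n = 0.0, 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Horizontal and vertical Sobel responses at (y, x).
            gx = (block[y-1][x+1] + 2*block[y][x+1] + block[y+1][x+1]
                  - block[y-1][x-1] - 2*block[y][x-1] - block[y+1][x-1])
            gy = (block[y+1][x-1] + 2*block[y+1][x] + block[y+1][x+1]
                  - block[y-1][x-1] - 2*block[y-1][x] - block[y-1][x+1])
            total += (gx * gx + gy * gy) ** 0.5
            n += 1
    return total / n > threshold

flat = [[100] * 8 for _ in range(8)]            # uniform block: no edges
step = [[0] * 4 + [255] * 4 for _ in range(8)]  # vertical step edge
print(is_edge_block(flat))  # False
print(is_edge_block(step))  # True
```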
Here, the same-size left coding unit is the coding unit located to the left of the current coding unit and of the same size; the same-size upper coding unit is the one located above it and of the same size; the same-size upper-left coding unit is the one located to its upper left and of the same size; and the same-size upper-right coding unit is the one located to its upper right and of the same size.
Step 10: if the current coding unit is a motion block and is located at the rightmost side of the current frame, two cases are distinguished. First case: when the same-size left, upper and upper-left coding units of the current coding unit are all static blocks, search the optimal prediction coding mode of the current coding unit only in class1 ∪ class2, then execute step 11. Second case: when at least one of the same-size left, upper and upper-left coding units of the current coding unit is a motion block, then if the current coding unit is an edge block, search the optimal prediction coding mode only in class2 and then execute step 11; if the current coding unit is a non-edge block, search the optimal prediction coding mode only in class1 ∪ class2 and then execute step 11.
If the current coding unit is a motion block and is not located at the rightmost side of the current frame, two cases are distinguished. First case: when at least three of the same-size left, upper, upper-left and upper-right coding units of the current coding unit are static blocks, search the optimal prediction coding mode of the current coding unit only in class1 ∪ class2, then execute step 11. Second case: when at least two of the same-size left, upper, upper-left and upper-right coding units of the current coding unit are motion blocks, then if the current coding unit is an edge block, search the optimal prediction coding mode only in class2 and then execute step 11; if the current coding unit is a non-edge block, search the optimal prediction coding mode only in class1 ∪ class2 and then execute step 11.
If the current coding unit is a static block and is located at the rightmost side of the current frame, two cases are distinguished. First case: when the same-size left, upper and upper-left coding units of the current coding unit are all static blocks, then if the optimal prediction coding modes of these three coding units are all the Skip mode, or the current coding unit is a non-edge block, search the optimal prediction coding mode of the current coding unit only in class0 and then execute step 11; otherwise, search it only in class0 ∪ class1 and then execute step 11. Second case: when at least one of the same-size left, upper and upper-left coding units of the current coding unit is a motion block, search the optimal prediction coding mode only in class0 ∪ class1, then execute step 11.
If the current coding unit is a static block and is not located at the rightmost side of the current frame, two cases are distinguished. First case: when at least three of the same-size left, upper, upper-left and upper-right coding units of the current coding unit are static blocks, then if the optimal prediction coding modes of at least three of these coding units are the Skip mode, or the current coding unit is a non-edge block, search the optimal prediction coding mode of the current coding unit only in class0 and then execute step 11; otherwise, search it only in class0 ∪ class1 and then execute step 11. Second case: when at least two of the same-size left, upper, upper-left and upper-right coding units of the current coding unit are motion blocks, search the optimal prediction coding mode only in class0 ∪ class1, then execute step 11.
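The case analysis of step 10 can be condensed into a small decision function. This is a sketch: the class-name strings are labels for the three mode sets, and the static-block branch is simplified (the extra all-Skip / non-edge test that narrows the search to class0 alone is omitted here):

```python
def candidate_mode_classes(is_moving, is_edge, neighbors_moving):
    """Return the set of prediction-mode classes searched for one CU.

    neighbors_moving lists the moving/static flags of the same-size
    left / upper / upper-left neighbours (3 entries at the right edge
    of the frame; 4 entries, including the upper-right neighbour,
    elsewhere).  The static branch is a simplification of the text.
    """
    n_static = sum(1 for m in neighbors_moving if not m)
    if len(neighbors_moving) == 3:          # right edge of the frame
        mostly_static = n_static == 3       # all three neighbours static
    else:                                   # elsewhere: four neighbours
        mostly_static = n_static >= 3       # at least three static
    if is_moving:
        if mostly_static:
            return {'class1', 'class2'}     # inter + intra, no Skip/merge
        return {'class2'} if is_edge else {'class1', 'class2'}
    return {'class0', 'class1'}             # static CU (simplified branch)
```

A moving edge block surrounded by other moving blocks, for example, only searches the intra set class2, which is the largest single saving in step 10.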
Step 11: when the size of the current coding unit is 64 × 64, if one of the following three conditions is satisfied, the current coding unit is not recursively divided further, and step 16 is executed; otherwise, the current coding unit is divided into 4 coding units of size 32 × 32, and then step 12 is executed. Condition 1: d_max = 0. Condition 2: the current coding unit is a static block and a non-edge block, and its current division depth is greater than or equal to the division depth used when coding the co-located coding unit in the previous frame of the current frame. Condition 3: the optimal prediction coding mode of the current coding unit is the Skip mode.
When the size of the current coding unit is 32 × 32, if one of the following three conditions is satisfied, the current coding unit is not recursively divided further, and step 13 is executed; otherwise, the current coding unit is divided into 4 coding units of size 16 × 16, and then step 12 is executed. Condition 1: d_max = 1. Condition 2: the current coding unit is a static block and a non-edge block, and its current division depth is greater than or equal to the division depth used when coding the co-located coding unit in the previous frame of the current frame. Condition 3: the optimal prediction coding mode of the current coding unit is the Skip mode.
When the size of the current coding unit is 16 × 16, if one of the following three conditions is satisfied, the current coding unit is not recursively divided further, and step 13 is executed; otherwise, the current coding unit is divided into 4 coding units of size 8 × 8, and then step 12 is executed. Condition 1: d_max = 2. Condition 2: the current coding unit is a static block and a non-edge block, and its current division depth is greater than or equal to the division depth used when coding the co-located coding unit in the previous frame of the current frame. Condition 3: the optimal prediction coding mode of the current coding unit is the Skip mode.
When the size of the current coding unit is 8 × 8, step 13 is performed.
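The early-termination tests of step 11 can be written as one predicate. This is a sketch; the mode string 'Skip' and the parameter names are illustrative:

```python
def stop_splitting(cu_size, d_max, cur_depth, colocated_depth,
                   is_static, is_edge, best_mode):
    """Early-termination test of step 11: True means the current coding
    unit is NOT split further (the caller then proceeds to step 13, or to
    step 16 for a 64x64 unit).  Parameter names are illustrative."""
    if cu_size == 8:
        return True                      # smallest CU, cannot split anyway
    depth_limit = {64: 0, 32: 1, 16: 2}[cu_size]
    if d_max == depth_limit:             # condition 1: search upper limit reached
        return True
    if is_static and not is_edge and cur_depth >= colocated_depth:
        return True                      # condition 2: static, smooth, deep enough
    return best_mode == 'Skip'           # condition 3: Skip already optimal

# d_max already reached at 64x64: stop immediately.
print(stop_splitting(64, 0, 0, 0, False, True, '2Nx2N'))   # True
# 32x32 moving edge block with room left to split: keep splitting.
print(stop_splitting(32, 3, 1, 0, False, True, '2Nx2N'))   # False
```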
Step 12: take the first to-be-processed coding unit in the next layer, obtained by splitting the current coding unit into coding units of half the size, as the new current coding unit, and then return to step 9 to continue.
Step 13: take the next to-be-processed coding unit that shares the same parent node with the current coding unit in the same layer as the current coding unit, and then return to step 9 to continue, until all 4 coding units sharing that parent node in the same layer have been processed; then execute step 14.
Step 14: take the next to-be-processed coding unit belonging to the next sibling node of the current coding unit's parent node in the previous layer as the current coding unit, and then return to step 9 to continue, until all 4 coding units under that sibling node in the previous layer have been processed; then execute step 15.
Step 15: judge whether all division depths from d_min to d_max have been processed; if so, execute step 16; otherwise, execute step 14.
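Steps 7 and 12 to 15 together amount to a depth-first traversal of the coding-unit quadtree restricted to depths d_min..d_max. A sketch, without the early-termination tests of step 11:

```python
def traverse_lcu(d_min, d_max, visit, depth=0, size=64):
    """Depth-first traversal of the coding-unit quadtree of one 64x64
    largest coding unit, restricted to division depths d_min..d_max.
    visit(depth, size) is called once per coding unit actually searched."""
    if depth >= d_min:
        visit(depth, size)               # search this coding unit
    if depth < d_max:
        for _ in range(4):               # step 12: split into 4 children
            traverse_lcu(d_min, d_max, visit, depth + 1, size // 2)

visited = []
traverse_lcu(1, 2, lambda d, s: visited.append((d, s)))
# Depths 0 and 3 are skipped: 4 CUs of 32x32 and 16 CUs of 16x16 remain.
print(len(visited))  # 20
```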
Step 16: according to the rate-distortion function, select the coding-unit combination with the minimum rate-distortion cost and the corresponding optimal prediction coding modes, and code the current maximum coding unit with them; then take the next to-be-processed maximum coding unit of size 64 × 64 in the current frame as the current maximum coding unit and return to step 4, until all maximum coding units in the current frame have been coded; then execute step 17.
Step 17: take the next inter-frame to be coded in the 360-degree video as the current frame, and then return to step 2, until all inter-coded frames in the 360-degree video have been processed.
To verify the effectiveness of the method of the invention, it was implemented on the HEVC reference software HM16.15 to test its rate-distortion performance and coding time.
The experimental platform was an Intel i7-7400 CPU with 8 GB of memory, running the Windows 7 64-bit operating system. The main coding configuration was the low-delay coding mode, with Quantization Parameters (QP) of 22, 27, 32 and 37.
Table 1 shows the coding time savings and rate-distortion performance of each test sequence encoded using the method of the present invention. BD-W in Table 1 indicates the percentage of code rate change under the same image quality conditions when WS-PSNR is used as the image quality evaluation index, BD-S indicates the percentage of code rate change under the same image quality conditions when S-PSNR is used as the image quality evaluation index, and Δ TS indicates the percentage of coding time saving under the same image quality conditions.
TABLE 1 coding time savings and Rate-distortion Performance scenarios for each test sequence encoded using the method of the present invention
As the experimental results in Table 1 show, the method of the invention effectively reduces the encoding complexity and increases the encoding speed for the different test sequences. This is because, when encoding with the original HM platform, every maximum coding unit must be searched over all coding units at division depths 0, 1, 2 and 3, and every coding unit must search all prediction coding modes in the three sets class0, class1 and class2. The method of the invention can instead decide, from the motion attributes, the texture characteristics, and the optimal prediction coding modes of already-processed coding units adjacent to the current coding unit, whether to skip the search at certain depths; and for the search of the optimal prediction coding mode of each coding unit, the number of prediction coding modes that must be traversed is reduced, thereby lowering the computational complexity of encoding and saving coding time.

Claims (1)

1. A 360-degree video inter-frame fast coding method, based on the HEVC video coding standard and used only for fast coding of inter-coded frames in 360-degree video, comprising the following steps:
step 1: letting class0, class1 and class2 denote three prediction coding mode sets, wherein the prediction coding mode set class0 contains the Skip mode and the 2N × 2N merge mode among the prediction coding modes; the prediction coding mode set class1 contains the 2N × 2N, 2N × N, N × 2N, N × N, 2N × nU, 2N × nD, nL × 2N and nR × 2N modes among the prediction coding modes; the prediction coding mode set class2 contains the Intra 2N × 2N and Intra PCM modes among the prediction coding modes; in the HEVC video coding standard, the size of the largest coding unit in an inter-coded frame is 64 × 64, and the division depth of the largest coding unit is an integer between 0 and 3; defining the inter-coded frame currently to be coded in the 360-degree video as the current frame;
step 2: dividing the current frame into a plurality of maximum coding units of size 64 × 64;
step 3: defining the maximum coding unit of size 64 × 64 currently to be processed in the current frame as the current maximum coding unit;
step 4: judging whether the current maximum coding unit is located at the leftmost side or the uppermost side of the current frame; if so, recursively dividing the current maximum coding unit to obtain a 4-layer quadtree composed of 1 coding unit with division depth 0 and size 64 × 64, 4 coding units with division depth 1 and size 32 × 32, 16 coding units with division depth 2 and size 16 × 16 and 64 coding units with division depth 3 and size 8 × 8, traversing each coding unit from top to bottom from division depth 0 to division depth 3, searching the optimal prediction coding mode of each coding unit in class0 ∪ class1 ∪ class2, and then executing step 16; otherwise, executing step 5; wherein the symbol "∪" denotes set union;
step 5: when the current maximum coding unit is located at the rightmost side of the current frame, respectively determining whether the current maximum coding unit, its left maximum coding unit, its upper maximum coding unit and its upper-left maximum coding unit are moving blocks or static blocks; obtaining the division depth used when coding the maximum coding unit co-located with the current maximum coding unit in the previous frame of the current frame, recorded as D_Col; obtaining the division depth used when coding the two coding units of size 32 × 32 on the right side of the left maximum coding unit of the current maximum coding unit, recorded as D_L; obtaining the division depth used when coding the two coding units of size 32 × 32 on the lower side of the upper maximum coding unit of the current maximum coding unit, recorded as D_T; obtaining the division depth used when coding the coding unit of size 32 × 32 on the lower-right side of the upper-left maximum coding unit of the current maximum coding unit, recorded as D_LT;
when the current maximum coding unit is not located at the rightmost side of the current frame, respectively determining whether the current maximum coding unit, its left maximum coding unit, its upper maximum coding unit and its upper-left maximum coding unit are moving blocks or static blocks; obtaining the division depth used when coding the maximum coding unit co-located with the current maximum coding unit in the previous frame of the current frame, recorded as D_Col; obtaining the division depth used when coding the two coding units of size 32 × 32 on the right side of the left maximum coding unit of the current maximum coding unit, recorded as D_L; obtaining the division depth used when coding the two coding units of size 32 × 32 on the lower side of the upper maximum coding unit of the current maximum coding unit, recorded as D_T; obtaining the division depth used when coding the coding unit of size 32 × 32 on the lower-right side of the upper-left maximum coding unit of the current maximum coding unit, recorded as D_LT; then determining whether the upper-right maximum coding unit of the current maximum coding unit is a moving block or a static block; and obtaining the division depth used when coding the coding unit of size 32 × 32 on the lower-left side of the upper-right maximum coding unit of the current maximum coding unit, recorded as D_RT;
above, D_Col, D_L, D_T, D_LT and D_RT are integers between 0 and 3;
step 6: letting d_min denote the lower limit of the division-depth search when the current maximum coding unit is coded, and letting d_max denote the upper limit of that search;
if the current maximum coding unit is a motion block, letting d_min = 0 and d_max = 3, and then performing step 7;
if the current maximum coding unit is a static block: when it is located at the rightmost side of the current frame, letting d_min = min(D_L, D_T, D_LT) and letting d_max = max(D_Col, D_L, D_T, D_LT) if M_L, M_T and M_LT are all static blocks, and d_max = 3 otherwise, and then performing step 7; when it is not located at the rightmost side of the current frame, letting d_min = min(D_L, D_T, D_LT, D_RT) and letting d_max = max(D_Col, D_L, D_T, D_LT, D_RT) if M_L, M_T, M_LT and M_RT are all static blocks, and d_max = 3 otherwise, and then performing step 7;
above, M_L denotes the left maximum coding unit of the current maximum coding unit, M_T its upper maximum coding unit, M_LT its upper-left maximum coding unit, and M_RT its upper-right maximum coding unit; min() is the minimum function and max() is the maximum function;
step 7: if d_min = 0, the current maximum coding unit initially consists of 1 coding unit of size 64 × 64, and then step 8 is performed; if d_min = 1, the current maximum coding unit is initially divided into 4 coding units of size 32 × 32, and then step 8 is performed; if d_min = 2, the current maximum coding unit is initially divided into 16 coding units of size 16 × 16, and then step 8 is performed; if d_min = 3, the current maximum coding unit is initially divided into 64 coding units of size 8 × 8, and then step 8 is performed;
step 8: defining the coding unit currently to be processed in the current maximum coding unit as the current coding unit;
step 9: when the current coding unit is located at the rightmost side of the current frame, respectively determining whether the current coding unit, the same-size left coding unit of the current coding unit, the same-size upper coding unit of the current coding unit and the same-size upper-left coding unit of the current coding unit are moving blocks or static blocks; when the current coding unit is not located at the rightmost side of the current frame, respectively determining whether the current coding unit, the same-size left coding unit, the same-size upper coding unit, the same-size upper-left coding unit and the same-size upper-right coding unit of the current coding unit are moving blocks or static blocks; and determining whether the current coding unit is an edge block or a non-edge block;
step 10: if the current coding unit is a motion block and the current coding unit is located at the rightmost side of the current frame, two cases are distinguished; the first case: when the same-size left coding unit, the same-size upper coding unit and the same-size upper-left coding unit of the current coding unit are all static blocks, searching the optimal prediction coding mode of the current coding unit only in class1 ∪ class2, and then executing step 11; the second case: when at least one of the same-size left coding unit, the same-size upper coding unit and the same-size upper-left coding unit of the current coding unit is a motion block, if the current coding unit is an edge block, searching the optimal prediction coding mode of the current coding unit only in class2, and then executing step 11; if the current coding unit is a non-edge block, searching the optimal prediction coding mode of the current coding unit only in class1 ∪ class2, and then executing step 11;
if the current coding unit is a motion block and the current coding unit is not located at the rightmost side of the current frame, two cases are distinguished; the first case: when at least three of the same-size left coding unit, the same-size upper coding unit, the same-size upper-left coding unit and the same-size upper-right coding unit of the current coding unit are static blocks, searching the optimal prediction coding mode of the current coding unit only in class1 ∪ class2, and then executing step 11; the second case: when at least two of the same-size left coding unit, the same-size upper coding unit, the same-size upper-left coding unit and the same-size upper-right coding unit of the current coding unit are motion blocks, if the current coding unit is an edge block, searching the optimal prediction coding mode of the current coding unit only in class2, and then executing step 11; if the current coding unit is a non-edge block, searching the optimal prediction coding mode of the current coding unit only in class1 ∪ class2, and then executing step 11;
if the current coding unit is a static block and the current coding unit is located at the rightmost side of the current frame, two cases are distinguished; the first case: when the same-size left coding unit, the same-size upper coding unit and the same-size upper-left coding unit of the current coding unit are all static blocks, if the optimal prediction coding modes of these three coding units are all the Skip mode or the current coding unit is a non-edge block, searching the optimal prediction coding mode of the current coding unit only in class0, and then executing step 11; otherwise, searching the optimal prediction coding mode of the current coding unit only in class0 ∪ class1, and then executing step 11; the second case: when at least one of the same-size left coding unit, the same-size upper coding unit and the same-size upper-left coding unit of the current coding unit is a motion block, searching the optimal prediction coding mode of the current coding unit only in class0 ∪ class1, and then executing step 11;
if the current coding unit is a static block and the current coding unit is not located at the rightmost side of the current frame, two cases are distinguished; the first case: when at least three of the same-size left coding unit, the same-size upper coding unit, the same-size upper-left coding unit and the same-size upper-right coding unit of the current coding unit are static blocks, if the optimal prediction coding modes of at least three of these coding units are the Skip mode or the current coding unit is a non-edge block, searching the optimal prediction coding mode of the current coding unit only in class0, and then executing step 11; otherwise, searching the optimal prediction coding mode of the current coding unit only in class0 ∪ class1, and then executing step 11; the second case: when at least two of the same-size left coding unit, the same-size upper coding unit, the same-size upper-left coding unit and the same-size upper-right coding unit of the current coding unit are motion blocks, searching the optimal prediction coding mode of the current coding unit only in class0 ∪ class1, and then executing step 11;
step 11: when the size of the current coding unit is 64 × 64, if one of the following three conditions is satisfied, the current coding unit is not recursively divided further, and step 16 is executed; if none of the following three conditions is satisfied, dividing the current coding unit into 4 coding units of size 32 × 32, and then executing step 12; wherein, condition 1: d_max = 0; condition 2: the current coding unit is a static block and a non-edge block, and its current division depth is greater than or equal to the division depth used when coding the co-located coding unit in the previous frame of the current frame; condition 3: the optimal prediction coding mode of the current coding unit is the Skip mode;
when the size of the current coding unit is 32 × 32, if one of the following three conditions is satisfied, the current coding unit is not recursively divided further, and step 13 is executed; if none of the following three conditions is satisfied, dividing the current coding unit into 4 coding units of size 16 × 16, and then executing step 12; wherein, condition 1: d_max = 1; condition 2: the current coding unit is a static block and a non-edge block, and its current division depth is greater than or equal to the division depth used when coding the co-located coding unit in the previous frame of the current frame; condition 3: the optimal prediction coding mode of the current coding unit is the Skip mode;
when the size of the current coding unit is 16 × 16, if one of the following three conditions is satisfied, the current coding unit is not further recursively divided, and step 13 is executed; if none of the following three conditions is satisfied, the current coding unit is divided into 4 coding units with the size of 8 × 8, and then step 12 is executed; wherein, condition 1: d_max = 2; condition 2: the current coding unit is a static block and a non-edge block, and the current division depth of the current coding unit is greater than or equal to the division depth used when coding the coding unit at the same position in the previous frame of the current frame; condition 3: the optimal predictive coding mode of the current coding unit is the Skip mode;
when the size of the current coding unit is 8 × 8, performing step 13;
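The size-dependent early-termination test of step 11 can be condensed into one predicate. This is a sketch under assumptions: d_max (the maximum division depth predicted earlier in the method) and the previous-frame co-located depth are treated as plain integer inputs, and the function name `stop_splitting` is hypothetical.

```python
# Division depth implied by each coding-unit size in the 64x64 quadtree.
DEPTH_OF_SIZE = {64: 0, 32: 1, 16: 2, 8: 3}

def stop_splitting(size, d_max, is_static, is_edge,
                   cur_depth, depth_prev, best_mode):
    """Return True if the current coding unit is not split any further
    (a hedged rendering of the three conditions in step 11)."""
    if size == 8:
        return True                         # 8x8 is the smallest unit
    depth = DEPTH_OF_SIZE[size]
    cond1 = (d_max == depth)                # predicted maximum depth reached
    cond2 = is_static and not is_edge and cur_depth >= depth_prev
    cond3 = (best_mode == "Skip")           # optimal mode is Skip
    return cond1 or cond2 or cond3
```

Note how condition 1 tightens with size: d_max = 0 stops a 64 × 64 unit, d_max = 1 a 32 × 32 unit, d_max = 2 a 16 × 16 unit.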
step 12: dividing the current coding unit into the coding units with the current size to be processed being halved in the next layer which is split as the current coding unit, and then returning to the step 9 to continue executing;
step 13: taking the next coding unit to be processed, which has the same father node as the current coding unit in the same layer of the current coding unit, as the current coding unit, and then returning to the step 9 to continue executing the process until all 4 coding units having the same father node as the current coding unit in the same layer are processed, and then executing the step 14;
step 14: taking the next to-be-processed coding unit which is the next sibling node to the parent node of the current coding unit in the previous layer of the current coding unit as the current coding unit, then returning to the step 9 to continue executing until all 4 coding units which are the sibling nodes to the parent node of the current coding unit in the previous layer are processed, and then executing the step 15;
step 15: judging whether the processed division depth is dminTo dmaxIf yes, executing step 16; otherwise, go to step 14;
step 16: coding the current maximum coding unit, then taking the next maximum coding unit with the size of 64 multiplied by 64 to be processed in the current frame as the current maximum coding unit, returning to execute the step 4 until all the maximum coding units in the current frame are coded, and then executing the step 17;
step 17: taking the next inter-frame to be coded in the 360-degree video as the current frame, and then returning to step 2, until all inter-frames to be coded in the 360-degree video have been processed.
CN201911293110.0A 2019-12-16 2019-12-16 Fast encoding method for 360-degree video interframes Active CN110958443B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911293110.0A CN110958443B (en) 2019-12-16 2019-12-16 Fast encoding method for 360-degree video interframes


Publications (2)

Publication Number Publication Date
CN110958443A CN110958443A (en) 2020-04-03
CN110958443B true CN110958443B (en) 2021-06-29

Family

ID=69981987

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911293110.0A Active CN110958443B (en) 2019-12-16 2019-12-16 Fast encoding method for 360-degree video interframes

Country Status (1)

Country Link
CN (1) CN110958443B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101350921A (en) * 2007-07-17 2009-01-21 北京华辰广正科技发展有限公司 Method for searching motion facing to panorama
CN103384325A (en) * 2013-02-22 2013-11-06 张新安 Quick inter-frame prediction mode selection method for AVS-M video coding
CN105554506A (en) * 2016-01-19 2016-05-04 北京大学深圳研究生院 Panorama video coding, decoding method and device based on multimode boundary filling
CN105721865A (en) * 2016-02-01 2016-06-29 同济大学 Fast decision algorithm for dividing HEVC inter-frame coding unit
CN107426491A (en) * 2017-05-17 2017-12-01 西安邮电大学 A kind of implementation method of 360 degree of panoramic videos
CN107483950A (en) * 2016-06-07 2017-12-15 北京大学 Picture parallel encoding method and system
WO2018066986A1 (en) * 2016-10-06 2018-04-12 김기백 Image data encoding/decoding method and apparatus
WO2019216712A1 (en) * 2018-05-10 2019-11-14 삼성전자 주식회사 Video encoding method and apparatus, and video decoding method and apparatus


Also Published As

Publication number Publication date
CN110958443A (en) 2020-04-03

Similar Documents

Publication Publication Date Title
US11736701B2 (en) Hash-based encoder decisions for video coding
CN110087087B (en) VVC inter-frame coding unit prediction mode early decision and block division early termination method
US11671632B2 (en) Machine-learning-based adaptation of coding parameters for video encoding using motion and object detection
EP3598758B1 (en) Encoder decisions based on results of hash-based block matching
TWI634777B (en) Method of searching reference patches
CN109379594B (en) Video coding compression method, device, equipment and medium
JPWO2009037828A1 (en) Image coding apparatus and image decoding apparatus
CN111837389A (en) Block detection method and device suitable for multi-sign bit hiding
JP2021525485A (en) Multi-type tree depth extension for picture border processing
KR20220160038A (en) Methods for Signaling Video Coding Data
US20240040143A1 (en) Method and apparatus for decoding image using interpicture prediction
CN106878754B (en) A kind of 3D video depth image method for choosing frame inner forecast mode
CN110958443B (en) Fast encoding method for 360-degree video interframes
Ma et al. A fast background model based surveillance video coding in HEVC
CN112822498B (en) Image processing apparatus and method of performing efficient deblocking
KR102140271B1 (en) Fast intra coding method and apparatus using coding unit split based on threshold value
CN109889842B (en) Virtual reality video CU partitioning algorithm based on KNN classifier
JP2002084544A (en) Dynamic image encoding device and dynamic image encoding method
CN112534818B (en) Machine learning based adaptation of coding parameters for video coding using motion and object detection
KR102586198B1 (en) Image decoding method and apparatus using inter picture prediction
CN106878753B (en) 3D video residual coding mode selection method using texture smoothing information
KR20240051807A (en) Image transmisstion method based on accumulation of region of interest
Lei et al. Fast Mode Decision Algorithm for Coding Depth Maps in 3D High-Efficiency Video Coding.
CN116647676A (en) CU partitioning quick selection based on screen content region characteristics
KR20200093206A (en) Device and method for deciding hevc intra prediction mode

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220817

Address after: Room 2202, 22 / F, Wantong building, No. 3002, Sungang East Road, Sungang street, Luohu District, Shenzhen City, Guangdong Province

Patentee after: Shenzhen dragon totem technology achievement transformation Co.,Ltd.

Address before: 315211, Fenghua Road, Jiangbei District, Zhejiang, Ningbo 818

Patentee before: Ningbo University

TR01 Transfer of patent right

Effective date of registration: 20221111

Address after: 100020 Room 201, Unit 5, Building 8, Yard 1, Shuangliu North Street, Chaoyang District, Beijing

Patentee after: Beijing Blue Diamond Culture Media Co.,Ltd.

Address before: Room 2202, 22 / F, Wantong building, No. 3002, Sungang East Road, Sungang street, Luohu District, Shenzhen City, Guangdong Province

Patentee before: Shenzhen dragon totem technology achievement transformation Co.,Ltd.
