WO2019214373A1 - Coding unit division method and apparatus for video frame, storage medium, and electronic apparatus - Google Patents
- Publication number
- WO2019214373A1 (PCT/CN2019/081211)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- coding unit
- frame
- target
- type
- frame type
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/105—Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
- H04N19/119—Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/146—Data rate or code amount at the encoder output
- H04N19/147—Data rate or code amount at the encoder output according to rate distortion criteria
- H04N19/157—Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
- H04N19/159—Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object
- H04N19/172—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, the region being a picture, frame or field
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
Definitions
- the present application relates to the field of computers, and in particular, to a coding unit division method, apparatus, storage medium, and electronic device for a video frame.
- the future development trend of video is high definition, high frame rate and high compression ratio.
- in high-compression-rate video coding, the current coding unit (CU) is recursively divided into four sub-blocks from top to bottom until the CU size reaches 8, and the optimal CU partitioning mode is then selected layer by layer from the bottom up, which results in a low encoding speed.
- the embodiment of the present application provides a coding unit division method, device, storage medium, and electronic device for a video frame, so as to at least solve the technical problem that a coding speed of a frame in the related art is low.
- a coding unit division method for a video frame including:
- determining, according to a frame type and a coding unit type that have a corresponding relationship, a target coding unit type corresponding to the target frame type to which the target frame belongs, where the target coding unit type is used to indicate the division depth when the target frame is divided.
- a coding unit dividing apparatus for a video frame including:
- a first determining module configured to determine, according to a frame type and a coding unit type that have a corresponding relationship, a target coding unit type corresponding to a target frame type to which the target frame belongs, where the target coding unit type is used to indicate the division depth when the target frame is divided;
- a second determining module configured to determine, when dividing a target coding unit that belongs to the target coding unit type in the target frame, whether the target coding unit satisfies a target condition according to the coding unit information of the target coding unit, to obtain a target result;
- a processing module configured to perform, for the target coding unit, a division operation corresponding to the target result.
- a storage medium having stored therein a computer program, wherein the computer program is configured to execute the method described in any of the above.
- an electronic device comprising a memory and a processor, wherein the memory stores a computer program, and the processor is configured to execute, by means of the computer program, the method described in any of the above.
- a computer program product comprising instructions which, when run on a computer, cause the computer to perform a coding unit partitioning method of a video frame as described herein.
- the error accumulation along the CU depth mainly comes from two aspects.
- first, the frame type of the current frame: the weights of different types of frames when used as reference frames are different, and the degree to which accumulated errors spread differs between frame types.
- second, the size of the coding unit: the range affected by coding units of different sizes is different. Therefore, processing is performed separately for different frame types and different sizes. Specifically, according to the frame type and coding unit type having the corresponding relationship, a target coding unit type corresponding to the target frame type to which the target frame belongs is determined, the target coding unit type being used to indicate the division depth when the target frame is divided. When a target coding unit belonging to the target coding unit type in the target frame is divided, whether the target coding unit satisfies the target condition is determined according to the coding unit information of the target coding unit to obtain a target result, and a division operation corresponding to the target result is then performed on the target coding unit, so that different division manners can be adopted for different frame types.
- this avoids the slow operation of fully dividing every video frame, thereby achieving the technical effect of increasing the encoding speed of frames and solving the technical problem in the related art that the encoding speed of frames is low. Further, the different situations of frames of different frame types when dividing coding units are fully considered, and the division depth is determined separately for frames of different frame types to avoid the accumulation of errors, thereby reducing the generation of coding errors.
- FIG. 1 is a schematic diagram of an optional coding unit division method of a video frame according to an embodiment of the present application.
- FIG. 2 is a schematic diagram of an application environment of an optional coding unit division method of a video frame according to an embodiment of the present application.
- FIG. 3 is a schematic diagram of an optional coding unit division method of a video frame according to an alternative embodiment of the present application.
- FIG. 4 is a schematic diagram of another optional coding unit division method of a video frame according to an alternative embodiment of the present application.
- FIG. 5 is a schematic diagram of an optional coding unit dividing apparatus for a video frame according to an embodiment of the present application.
- FIG. 6 is a schematic diagram of an application scenario of an optional coding unit division method of a video frame according to an embodiment of the present application.
- FIG. 7 is a schematic diagram of an alternative electronic device according to an embodiment of the present application.
- a coding unit division method for a video frame is provided. As shown in FIG. 1, the method includes:
- S102 Determine, according to a frame type and a coding unit type that have a corresponding relationship, a target coding unit type corresponding to a target frame type to which the target frame belongs, where the target coding unit type is used to indicate a division depth when the target frame is divided.
- the coding unit division method of the video frame may be applied to a hardware environment formed by the target device 202 as shown in FIG. 2 .
- the target device 202 is configured to: determine, according to a frame type and a coding unit type having a corresponding relationship, a target coding unit type corresponding to the target frame type to which the target frame to be divided belongs, where the target coding unit type is used to indicate the division depth when the target frame is divided; when dividing a target coding unit belonging to the target coding unit type in the target frame, determine whether the target coding unit satisfies the target condition according to the coding unit information of the target coding unit, to obtain a target result; and perform, on the target coding unit, a division operation corresponding to the target result.
- the coding unit division method of the video frame may be, but is not limited to, applied to a scenario in which frames are divided into coding units.
- the coding unit division method of the foregoing video frame may be, but is not limited to, applied to various types of clients capable of video coding.
- the clients may include, but are not limited to, an online education application, an instant messaging client, a community space client, a game client, a shopping client, a browser client, a financial client, a multimedia client, a live video streaming client, and the like.
- the method may be, but is not limited to, applied to a scenario in which frames in a video resource are divided into coding units when video coding is performed in the foregoing multimedia client, or to video coding in the foregoing game client, so that the coding speed of frames is increased. The above is only an example, which is not limited in this embodiment.
- the coding unit division method of the video frame may be applied to the client side, or the server side, or performed by interaction between the client side and the server side.
- the frame type may be, but is not limited to, divided according to the reference relationship between frames in the encoding process.
- the frame types include: the I frame, the P frame, the B frame, and the b frame. The I frame is an intra-coded frame, which can be understood as a complete retention of the frame picture; decoding can be completed with the data of this frame alone.
- the P frame is a forward-predictive coded frame, which records the difference between this frame and a previous key frame; it carries no complete picture data, only the data that differs from the picture of the previous key frame.
- the B frame is a bidirectionally predicted interpolated coded frame, which records the difference between this frame and the frames before and after it, and has a high compression ratio; decoding requires not only the buffered picture before this frame but also the picture after it, and the final picture is obtained by superimposing the preceding and following pictures with the data of the current frame.
- the P frame refers to the I frame; the B frame refers to the P frame or the I frame; the b frame refers to the I frame, the P frame, or the B frame; and the b frame is not used as a reference frame.
- the frame indicated by the arrow in FIG. 3 is a reference frame.
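The reference relationships described above can be summarized in a small lookup table. A minimal sketch (the table form and names are illustrative, not from the patent):

```python
# Which frame types each frame type may reference during encoding,
# per the description above (the b frame is never used as a reference).
REFERENCE_FRAME_TYPES = {
    "I": [],               # intra coded, references no other frame
    "P": ["I"],            # forward prediction from a key frame
    "B": ["I", "P"],       # bidirectional prediction
    "b": ["I", "P", "B"],  # references I/P/B, never referenced itself
}
```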
- the division depth may be determined according to the size of the coding unit. For example, when performing intra-frame and inter-frame prediction in the video encoding process, as shown in FIG. 4, starting from the largest coding unit (LCU, Largest Coding Unit), each layer is divided according to a quadtree and calculated recursively. For each Coding Tree Unit (CTU), there are three levels of recursive processes: CU64x64 to CU32x32, CU32x32 to CU16x16, and CU16x16 to CU8x8.
- the size of the coding unit is represented by 2Nx2N.
- the above 64x64, 32x32, 16x16, and 8x8 indicate the size of the coding unit
- the coding unit of the size 64x64 has a division depth of 0
- the CU division depth of the coding unit of size 32x32 is 1.
- the coding unit of size 16x16 has a CU division depth of 2
- a coding unit of size 8x8 has a CU division depth of 3.
- the division of the coding unit may also adopt an expression form of the division layer number, for example, the coding unit of 64x64 to the coding unit of 32x32 is the 0th layer, the coding unit of 32x32 to the coding unit of 16x16 is the first layer, and the coding unit of 16x16 is to The 8x8 coding unit is layer 2.
- the level of a division depth may be determined based on its value: the smaller the value of the division depth, the higher the level. For example, the division depth value of a coding unit of size 64x64 is lower than that of a coding unit of size 32x32; therefore, the division depth level of the 64x64 coding unit is higher than that of the 32x32 coding unit. Similarly, the division depth level of a 32x32 coding unit is higher than that of a 16x16 coding unit, and the division depth level of a 16x16 coding unit is higher than that of an 8x8 coding unit.
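The size-to-depth mapping above can be sketched as a small helper; a minimal example assuming a 64x64 LCU and 8x8 smallest CU, as in the text:

```python
def division_depth(cu_size: int, lcu_size: int = 64) -> int:
    """Return the division depth of a square CU: 64 -> 0, 32 -> 1,
    16 -> 2, 8 -> 3, halving the size once per depth level."""
    depth = 0
    size = lcu_size
    while size > cu_size:
        size //= 2
        depth += 1
    return depth
```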
- a corresponding coding unit type may be configured for each frame type.
- error accumulation is spread in one I frame period
- P frame refers to I frame
- B frame refers to P frame or I frame
- b frame refers to I frame or P frame or B frame
- b frame is not used as reference frame.
- the weights sorted from large to small are: I frame > P frame > B frame > b frame. In addition, the CU is divided from the largest CU64 downward, and the influence weights of the CU sizes sorted from large to small are CU64 > CU32 > CU16.
- therefore, the division depth indicated by the coding unit type corresponding to the P frame is set to 2; for the B frame, the influence range is only within one P frame period, so the division depths indicated by the coding unit type corresponding to the B frame are set to 1 and 2; for the b frame, its errors do not affect other frames, so the division depths indicated by the coding unit type corresponding to the b frame are set to 0, 1, and 2.
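The per-frame-type depth configuration just described can be written as a table; a sketch in which the dictionary name and frame-type labels are illustrative, not from the patent:

```python
# Division depths at which the fast depth decision is applied, per the
# weight ordering discussed above; I frames get no fast decision at all.
FAST_DECISION_DEPTHS = {
    "I": set(),        # full division, no early stop
    "P": {2},          # only 16x16 CUs (depth 2) may stop early
    "B": {1, 2},       # 32x32 and 16x16 CUs may stop early
    "b": {0, 1, 2},    # 64x64, 32x32, and 16x16 CUs may stop early
}
```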
- for the coding unit division process of the I frame, the division manner in the prior art may be used.
- that is, the division of the coding units of the I frame adopts the prior-art division manner: the three levels of division from CU64x64 to CU32x32, CU32x32 to CU16x16, and CU16x16 to CU8x8 are all performed.
- for the coding unit division process of the P frame, the coding units with a depth of 2 undergo the fast depth decision, and the coding units with depths of 0 and 1 adopt the prior-art division manner. That is, the divisions from CU64x64 to CU32x32 and from CU32x32 to CU16x16 are performed in the prior-art manner; it is then judged whether CU16x16 satisfies the fast-division condition. If it is satisfied, its division is stopped; if it is not satisfied, CU16x16 continues to be divided into CU8x8.
- for the coding unit division process of the B frame, the coding units with depths of 1 and 2 undergo the fast depth decision. The division from CU64x64 to CU32x32 adopts the prior-art manner; it is then judged whether CU32x32 satisfies the fast-division condition. If it is satisfied, its division is stopped; if it is not satisfied, CU32x32 continues to be divided into CU16x16, and it is then judged whether CU16x16 satisfies the fast-division condition. If it is satisfied, its division is stopped; if it is not satisfied, CU16x16 continues to be divided into CU8x8. For the coding unit division process of the b frame, the coding units with depths of 0, 1, and 2 all undergo the fast depth decision.
- the coding unit information of the target coding unit may include, but is not limited to, an optimal mode of the target coding unit, a rate distortion cost of the optimal mode of the target coding unit, and the like.
- the target condition may include, but is not limited to, including: the optimal mode of the target coding unit is a skip mode, and the rate distortion cost of the optimal mode of the target coding unit falls within a target threshold range.
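As a minimal sketch (not the patent's implementation), the two-part target condition above can be expressed as a single predicate; representing the target threshold range as a (low, high) pair and the mode by the string "skip" are assumptions of this example:

```python
def satisfies_target_condition(optimal_mode: str, rd_cost: float,
                               threshold_range: tuple) -> bool:
    """True only when the CU's optimal mode is the skip mode AND its
    rate distortion cost falls within the target threshold range."""
    low, high = threshold_range
    return optimal_mode == "skip" and low <= rd_cost <= high
```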
- a coding unit of the coding unit type determines, at the time of division, whether to stop dividing according to its coding unit information, and performs the subsequent division operation according to the obtained target result, so that different division manners can be adopted for different frame types at different division depths. This avoids the slow operation of fully dividing every video frame, thereby achieving the technical effect of improving the encoding speed of frames and solving the problem in the related art that the encoding speed of frames is low.
- further, the different situations of frames of different frame types when dividing coding units are fully considered, and whether to continue dividing is determined separately for frames of different frame types, thereby reducing the generation of coding errors.
- performing the dividing operation corresponding to the target result to the target coding unit includes:
- the target result is used to indicate that the target coding unit satisfies the target condition, stop dividing the target coding unit; and/or,
- continuing to divide the target coding unit if the target result is used to indicate that the target coding unit does not satisfy the target condition.
- the division operation corresponding to the target result may include, but is not limited to, including stopping the division of the target coding unit.
- the division operation corresponding to the target result may be, but is not limited to, including continuing to divide the target coding unit.
- the corresponding relationship between frame types and coding unit types may be established in the following manner, which specifically includes the following steps:
- S1 acquiring a frame type, where the frame type is divided according to a reference relationship between frames in a frame encoding process
- S2 acquiring a coding unit type corresponding to the frame type, where the coding unit type is used to indicate a first depth value used when a frame belonging to the frame type is divided, or to indicate a depth relationship between a second depth value and a target depth value when a frame belonging to the frame type is divided.
- the coding unit type may be used to indicate one or more specific depth values (such as 3, 2, 1, 0, etc.), or may be used to indicate a depth relationship (for example: below the highest depth value; between the highest depth value and the lowest depth value; only immediately below the highest depth value; etc.).
- the depth values include four kinds, which are respectively 3, 2, 1, and 0.
- the coding unit type is used to indicate that, when a frame belonging to the frame type is divided, the division depth is the first depth value.
- the frame type and coding unit type having the corresponding relationship can be as shown in Table 1.
- when a target coding unit belonging to the target coding unit type in the target frame is divided, whether the target coding unit satisfies the target condition is determined according to the coding unit information of the target coding unit. If the target coding unit satisfies the target condition, the division of the target coding unit is stopped; if the target coding unit does not satisfy the target condition, its division continues. This process of dividing the target coding unit is referred to as the coding unit depth fast decision process.
- obtaining frame types includes:
- S1 Determine a frame type, where the frame types include: a first frame type, a second frame type, a third frame type, and a fourth frame type. The first frame type is a frame type that does not refer to other frames in the process of encoding;
- the second frame type is a frame type that refers to frames belonging to the first frame type in the process of encoding;
- the third frame type is a frame type that refers to frames belonging to the first frame type and frames belonging to the second frame type in the process of encoding;
- the fourth frame type is a frame type that refers to frames belonging to the first frame type and frames belonging to the third frame type, or to frames belonging to the second frame type and frames belonging to the third frame type, in the process of encoding.
- the frames of the first frame type, the second frame type, the third frame type, and the fourth frame type may have, but are not limited to, the following reference relationships:
- a frame of the second frame type refers to a frame of the first frame type;
- a frame of the third frame type refers to a frame of a first frame type and a frame of a second frame type
- a frame of a fourth frame type refers to a frame of a first frame type and a frame of a third frame type
- the frame of the fourth frame type refers to the frame of the second frame type and the frame of the third frame type.
- the frame of the first frame type may be an I frame
- the frame of the second frame type may be a P frame
- the frame of the third frame type may be a B frame
- the frame of the fourth frame type may be a b frame.
- obtaining the coding unit type corresponding to the frame type includes:
- the first frame type may be an I frame, and the fast coding unit depth decision may not be performed for the I frame; that is, the coding units of each layer of the I frame are divided directly, down to the smallest coding unit.
- the second frame type may be a P frame, and for the P frame, the fast decision may be performed only on the 16x16 coding units. That is, for the P frame, the 64x64 coding unit is divided into 32x32 coding units, which are then divided into 16x16 coding units; whether the target condition is met is then determined according to the coding unit information of each 16x16 coding unit. If it is met, the division is stopped; if it is not met, the 16x16 coding unit is further divided into 8x8 coding units.
- the third frame type may be a B frame, and for the B frame, the fast decision may be performed on the 32x32 and 16x16 coding units. That is, for the B frame, the 64x64 coding unit is divided into 32x32 coding units, and whether the target condition is met is determined according to the coding unit information of each 32x32 coding unit; if so, the division is stopped, and if not, the 32x32 coding unit is further divided into 16x16 coding units.
- for each coding unit divided into 16x16, whether the target condition is met is determined according to its coding unit information; if so, the division is stopped, and if not, the 16x16 coding unit is further divided into 8x8 coding units.
- the fourth frame type may be a b frame, and for the b frame, the fast decision may be performed on the 64x64, 32x32, and 16x16 coding units. That is, for the b frame, whether the target condition is met is determined according to the coding unit information of the 64x64 coding unit; if so, the division is stopped, and if not, the 64x64 coding unit is divided into 32x32 coding units.
- for each coding unit divided into 32x32, whether the target condition is met is determined according to its coding unit information; if so, the division is stopped, and if not, the 32x32 coding unit is further divided into 16x16 coding units.
- for each coding unit divided into 16x16, whether the target condition is met is determined according to its coding unit information; if so, the division is stopped, and if not, the 16x16 coding unit is further divided into 8x8 coding units.
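The four walk-throughs above follow one top-down pattern that differs only in the depths at which the fast decision is applied. A sketch under that assumption; `check` stands in for the target-condition test on a CU's coding information and is hypothetical:

```python
def divide_branch(fast_depths, check, depth=0, max_depth=3):
    """Divide one CU branch top-down; return the list of depths reached.
    fast_depths: depths where the fast decision may stop the division;
    check(depth): True when the CU at that depth meets the target condition."""
    reached = [depth]
    if depth == max_depth:
        return reached                     # smallest CU (8x8): stop
    if depth in fast_depths and check(depth):
        return reached                     # fast decision: stop dividing here
    return reached + divide_branch(fast_depths, check, depth + 1, max_depth)
```

For example, a b frame (fast depths {0, 1, 2}) whose 64x64 CU already meets the target condition stops at depth 0, while an I frame (no fast depths) is always divided down to depth 3.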
- the coding unit type may also be used to indicate a first depth value used when a frame belonging to the frame type is divided.
- acquiring the coding unit type corresponding to the frame type includes:
- the coding unit type corresponding to the second frame type is a second coding unit type, where the first depth value indicated by the second coding unit type is a second target value, and the second target value is the depth value immediately below the highest depth value, the highest depth value being the depth value of the smallest coding unit into which the frame can be divided;
- the coding unit type corresponding to the third frame type is a third coding unit type, where the first depth value indicated by the third coding unit type is a third target value, and the third target value is a depth value lower than the highest depth value and higher than the lowest depth value, where the highest depth value is the depth value of the smallest coding unit into which the frame can be divided, and the lowest depth value is the depth value of the largest coding unit into which the frame can be divided;
- the coding unit type corresponding to the fourth frame type is a fourth coding unit type, where the first depth value indicated by the fourth coding unit type is a fourth target value, and the fourth target value is a depth value lower than the highest depth value, the highest depth value being the depth value of the smallest coding unit into which the frame can be divided.
- for the first frame type, the corresponding coding unit type may not be set, or its corresponding coding unit type may be set to indicate a division depth not included in the preset division depths, so that the coding unit division of this type of frame does not perform the fast coding unit depth decision process but instead divides directly from the lowest depth to the highest depth.
- a frame of the second frame type refers only to frames of the first frame type, while frames of the third frame type and the fourth frame type may use frames of the second frame type as reference frames.
- the importance of frames of the second frame type is therefore second only to that of frames of the first frame type. For frames of the second frame type, the coding unit depth fast decision process may be performed only at the layer immediately below the highest division depth; the depth value immediately below the highest depth value may be set as the depth value indicated by the second coding unit type, and this is a single depth value.
- a frame of the third frame type refers to frames of the first frame type and frames of the second frame type, and is itself used as a reference frame by frames of the fourth frame type.
- for frames of the third frame type, the coding unit depth fast decision process may therefore be performed at each layer between the highest division depth and the lowest division depth.
- the depth values below the highest depth value and above the lowest depth value may be set as the depth values indicated by the third coding unit type; there may be multiple such depth values.
- a frame of the fourth frame type uses frames of the first frame type and frames of the third frame type, or frames of the second frame type and frames of the third frame type, as reference frames, and a frame of the fourth frame type does not serve as a reference frame for frames of other frame types. For frames of the fourth frame type, the coding unit depth fast decision process may therefore be performed layer by layer starting from the highest division depth level (that is, from the largest coding unit).
- the depth values lower than the highest depth value may be set as the depth values indicated by the fourth coding unit type; there may be multiple such depth values.
- determining whether the target coding unit meets the target condition according to the coding unit information of the target coding unit includes:
- when the division process is performed, the target coding unit belonging to the target coding unit type in the target frame may be identified according to the determined target coding unit type corresponding to the target frame;
- the coding unit depth fast determination process is then performed on the target coding unit, so that whether to continue dividing the target coding unit is decided according to the obtained coding unit information.
- the obtained coding unit information of the target coding unit may include, but is not limited to, an optimal mode of the target coding unit and a rate distortion cost of the optimal mode.
- the target condition may include, but is not limited to: the optimal mode of the target coding unit being the skip mode, and the rate distortion cost of the optimal mode falling within a target threshold range.
- a target coding unit that meets both of the foregoing conditions may be determined to satisfy the target condition; if either one is not satisfied, that is, the optimal mode is not the skip mode, or the rate distortion cost does not fall within the target threshold range, it is determined that the target coding unit does not satisfy the target condition.
- the foregoing target threshold range may be preset or may be determined according to information acquired during the encoding process.
- for example, count the number of available CTUs at the positions neighboring the current CTU (left, upper, upper-left, and upper-right), and, among those adjacent CTUs, the number of CUs in skip mode with the same CU depth, together with the average of their rate distortion costs; these are recorded as ctu_validnum_skip_adjacent, samecudepth_num_skip_adjacent, and samecudepth_avgcost_skip_adjacent, respectively.
- count the number of CUs in the current CTU that are in skip mode and have the same CU depth, together with the average of their rate distortion costs, recorded as samecudepth_num_skip_curr and samecudepth_avgcost_skip_curr.
- the rate distortion cost of the current CU's optimal mode is recorded as bestcost, and a check flag is initialized to false. If the flag is true after the following comparison, it is determined that the rate distortion cost of the current CU's optimal mode satisfies the target condition:
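The comparison above can be sketched as follows. The excerpt does not give the exact combination rule or threshold, so the `coeff` weighting of the averaged skip-mode costs is a hypothetical placeholder, not the patent's actual formula:

```python
def check_rd_cost(bestcost,
                  ctu_validnum_skip_adjacent,
                  samecudepth_num_skip_adjacent,
                  samecudepth_avgcost_skip_adjacent,
                  samecudepth_num_skip_curr,
                  samecudepth_avgcost_skip_curr,
                  coeff=2.0):
    """Return True when bestcost falls within the target threshold range.

    The rule below (comparing bestcost against coeff times the average
    RD cost of same-depth skip CUs) is a hypothetical illustration; the
    text only states that a check flag starts as False and is set by
    comparing bestcost against the collected statistics.
    """
    check = False
    # Prefer statistics from the current CTU when any same-depth skip CUs exist.
    if samecudepth_num_skip_curr > 0:
        if bestcost <= coeff * samecudepth_avgcost_skip_curr:
            check = True
    # Otherwise fall back to the neighboring CTUs, if any are available.
    elif ctu_validnum_skip_adjacent > 0 and samecudepth_num_skip_adjacent > 0:
        if bestcost <= coeff * samecudepth_avgcost_skip_adjacent:
            check = True
    return check
```

A CU whose optimal mode is skip and whose `bestcost` passes this check would then stop further partitioning.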
- the method according to the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation.
- the part of the technical solution of the present application that is essential, or that contributes to the prior art, may be embodied in the form of a software product stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc),
- which includes a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the various embodiments of the present application.
- a coding unit dividing apparatus for a video frame for implementing the coding unit division method of the video frame is further provided.
- the apparatus includes:
- a first determining module 52 configured to determine, according to frame types and coding unit types having a corresponding relationship, a target coding unit type corresponding to the target frame type to which the target frame belongs, where the target coding unit type is used to indicate the division depth when the target frame is divided;
- the second determining module 54 is configured to determine, according to the coding unit information of the target coding unit, whether the target coding unit satisfies the target condition, and obtain a target result, when the target coding unit belonging to the target coding unit type is divided in the target frame;
- the processing module 56 is configured to perform, on the target coding unit, a division operation corresponding to the target result.
- the processing module includes:
- a first processing unit configured to stop dividing the target coding unit if the target result is used to indicate that the target coding unit satisfies the target condition;
- a second processing unit configured to divide the target coding unit if the target result is used to indicate that the target coding unit does not satisfy the target condition.
- the division operation corresponding to the target result may include, but is not limited to, stopping the division of the target coding unit;
- the division operation corresponding to the target result may also include, but is not limited to, continuing to divide the target coding unit.
- the above device further includes:
- a first obtaining module configured to acquire a frame type, where the frame type is divided according to a reference relationship between frames in a frame encoding process
- a second obtaining module configured to acquire a coding unit type corresponding to the frame type
- where the coding unit type is used to indicate a first depth value used when a frame belonging to the frame type is divided, or to indicate a depth relationship between a second depth value and a target depth value when a frame belonging to the frame type is divided;
- a storage module for storing a frame type and a coding unit type having a corresponding relationship.
- the coding unit type may be used to indicate one or more specific depth values (such as 3, 2, 1, 0, etc.), or may be used to indicate a depth relationship (for example: below the highest depth value; between the highest depth value and the lowest depth value; only immediately below the highest depth value; etc.).
- the depth value includes four types, which are 3, 2, 1, and 0.
- for example, the coding unit type is used to indicate the first depth value used when a frame belonging to the frame type is divided.
- the frame type and coding unit type having the corresponding relationship can be as shown in Table 1.
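Table 1's correspondence (I frame: none; P frame: depth 2; B frame: depths 1 and 2; b frame: depths 0, 1, and 2) can be expressed as a simple lookup:

```python
# Frame type -> set of CU depths at which the fast determination is applied,
# per Table 1. The mapping of depth values to split levels (depth 2 = CU16->CU8,
# depth 1 = CU32->CU16, depth 0 = CU64->CU32) follows the document's description
# of P frames being fast-determined only at CU16->CU8.
CU_TYPE_BY_FRAME = {
    "I": set(),          # I frames: no fast determination at any depth
    "P": {2},            # P frames: only CU16 -> CU8
    "B": {1, 2},         # B frames: CU32 -> CU16 and CU16 -> CU8
    "b": {0, 1, 2},      # b frames: all layers
}

def fast_determination_enabled(frame_type: str, depth: int) -> bool:
    """True when the CU depth fast determination runs for this frame type/depth."""
    return depth in CU_TYPE_BY_FRAME.get(frame_type, set())
```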
- the first acquisition module is used to:
- the frame type includes: a first frame type, a second frame type, a third frame type, and a fourth frame type, where a frame of the first frame type does not refer to other frames in the frame encoding process;
- a frame of the second frame type refers to frames belonging to the first frame type in the frame encoding process;
- a frame of the third frame type refers to frames belonging to the first frame type and frames belonging to the second frame type in the frame encoding process;
- a frame of the fourth frame type refers to frames belonging to the first frame type and frames belonging to the third frame type in the frame encoding process, or refers to frames belonging to the second frame type and frames belonging to the third frame type.
- the frames of the first frame type, the second frame type, the third frame type, and the fourth frame type may be, but are not limited to, the following reference relationship
- a frame of the second frame type refers to a frame of the first frame type;
- a frame of the third frame type refers to a frame of a first frame type and a frame of a second frame type
- a frame of a fourth frame type refers to a frame of a first frame type and a frame of a third frame type
- the frame of the fourth frame type refers to the frame of the second frame type and the frame of the third frame type.
- the frame of the first frame type may be an I frame
- the frame of the second frame type may be a P frame
- the frame of the third frame type may be a B frame
- the frame of the fourth frame type may be a b frame.
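The reference relationships above (P refers to I; B refers to I and P; b refers to I and B, or P and B, and is never referenced) can be summarized in a small table, here sketched as a lookup:

```python
# Which frame types each frame type may reference, per the relationships above.
# Each inner list is one allowed combination of reference frame types.
REFERENCES = {
    "I": [],                       # first frame type: references nothing
    "P": [["I"]],                  # second frame type: references I frames
    "B": [["I", "P"]],             # third frame type: references I and P frames
    "b": [["I", "B"], ["P", "B"]], # fourth frame type: either combination
}

def is_reference_frame(frame_type: str) -> bool:
    """A frame type serves as a reference if some other type may reference it."""
    return any(frame_type in combo
               for combos in REFERENCES.values()
               for combo in combos)
```

This makes explicit why b frames can be fast-determined at every layer: `is_reference_frame("b")` is false, so an error in a b frame never propagates.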
- the second obtaining module includes:
- a first determining unit configured to determine that the first frame type does not have the coding unit type
- a second determining unit configured to determine that the coding unit type corresponding to the second frame type is a first coding unit type, where the coding unit of the first coding unit type includes: a 16 ⁇ 16 coding unit;
- a third determining unit configured to determine that the coding unit type corresponding to the third frame type is a second coding unit type, where the coding unit of the second coding unit type comprises: a 16 ⁇ 16 coding unit and 32 ⁇ 32 coding unit;
- a fourth determining unit configured to determine that the coding unit type corresponding to the fourth frame type is a third coding unit type, where the coding unit of the third coding unit type includes: a 16 ⁇ 16 coding unit, 32 ⁇ 32 coding unit and 64 ⁇ 64 coding unit.
- the coding unit type may also be used to indicate the first depth value used when a frame belonging to the frame type is divided.
- the second obtaining module includes:
- a fifth determining unit configured to determine that the coding unit type corresponding to the first frame type is a first coding unit type, where the first depth value indicated by the first coding unit type is a first target value, the first target value being a depth value lower than the lowest depth value or higher than the highest depth value, the highest depth value being the depth value of the minimum coding unit into which a frame can be divided, and the lowest depth value being the depth value of the frame;
- a sixth determining unit configured to determine that the coding unit type corresponding to the second frame type is a second coding unit type, where the first depth value indicated by the second coding unit type is a second target value, the second target value being the depth value immediately below the highest depth value, the highest depth value being the depth value of the minimum coding unit into which a frame can be divided;
- a seventh determining unit configured to determine that the coding unit type corresponding to the third frame type is a third coding unit type, where the first depth value indicated by the third coding unit type is a third target value, the third target value being a depth value lower than the highest depth value and higher than the lowest depth value, the highest depth value being the depth value of the minimum coding unit into which a frame can be divided, and the lowest depth value being the depth value of the frame;
- an eighth determining unit configured to determine that the coding unit type corresponding to the fourth frame type is a fourth coding unit type, where the first depth value indicated by the fourth coding unit type is a fourth target value, the fourth target value being a depth value below the highest depth value, the highest depth value being the depth value of the minimum coding unit into which a frame can be divided.
- the second determining module includes:
- a ninth determining unit configured to determine, as the target coding unit, a coding unit that matches a depth of division indicated by the target coding unit type in the target frame;
- an obtaining unit configured to acquire coding unit information of the target coding unit when dividing the target coding unit
- the tenth determining unit is configured to determine, according to the coding unit information, whether the target coding unit satisfies the target condition.
- the acquiring unit is configured to: obtain an optimal mode of the target coding unit and a rate distortion cost of the optimal mode;
- the tenth determining unit is configured to: determine that the target coding unit satisfies the target condition when the optimal mode is the skip mode and the rate distortion cost falls within the target threshold range; and/or determine that the target coding unit does not satisfy the target condition when the optimal mode is not the skip mode, or the rate distortion cost does not fall within the target threshold range.
- the application environment of this embodiment of the present application may be, but is not limited to, the application environment described in the foregoing embodiments.
- An embodiment of the present application provides an optional specific application example for implementing the foregoing method.
- the coding unit division method of the video frame may be, but is not limited to, applied to the frame-encoding scenario shown in FIG. 6. The prediction process in video coding cannot be made completely accurate; once a prediction error occurs, it leads to a compression ratio loss that gradually grows as the error accumulates.
- the error accumulation of CU depth is mainly related to two aspects. One is the frame type of the current frame: error accumulation spreads within one I-frame period; a P frame refers to an I frame, a B frame refers to a P frame or an I frame, and a b frame refers to an I frame, a P frame, or a B frame, while a b frame is not used as a reference frame.
- the weights of the four frame types are I frame>P frame>B frame>b frame.
- the other aspect is the CU size. In the partition from the maximum CU64 down to CU32, once an error occurs, the corresponding CU will not perform the subsequent 4 CU32-to-CU16 partitions and 16 CU16-to-CU8 partitions; likewise, in the CU32-to-CU16 partition, once an error occurs, the corresponding CU will not perform the next 4 CU16-to-CU8 partitions; and in the CU16-to-CU8 partition, an error affects only itself. Therefore, the influence weight of the CU size, sorted from large to small, is CU64 > CU32 > CU16. In this scenario, the coding unit partitioning process of a video frame combines these two characteristics, handling different CU sizes and the frame type of the current frame separately.
- I frame: since an error in an I frame will affect all subsequent frames, the CU depth fast determination is not performed for any CU block of an I frame.
- P frame: the weight of a P frame is second only to that of an I frame, and its influence is also large. Therefore, the fast depth determination is performed for P frames only in the CU16-to-CU8 partition.
- B frame: the range of influence is only within one P-frame period, so the fast determination can be performed at the two levels CU32-to-CU16 and CU16-to-CU8.
- b frame: an error does not affect other frames, so the fast determination is performed at all layers.
- the depth fast determination process decides whether to continue partitioning based on whether the current CU is in skip mode (or whether it has a zero residual), the number of CUs at the same depth in the neighboring CTUs that are in skip mode (or have a zero residual), the average rate distortion cost of those CUs, and the rate distortion cost of the current CU's optimal mode.
- a frame is sent to the encoder. After intra or inter prediction, a predicted value is obtained; the predicted value is subtracted from the input data to obtain a residual, which then undergoes Discrete Cosine Transform (DCT) and quantization to obtain the residual coefficients. The residual coefficients are sent to the entropy coding module, which outputs the code stream. At the same time, the residual coefficients are inversely quantized and inversely transformed to obtain the residual of the reconstructed image, which is added to the intra or inter predicted value to obtain the reconstructed image. The reconstructed image undergoes in-loop filtering and then enters the reference frame queue as a reference image for the next frame, so that frames are encoded one after another.
- DCT Discrete Cosine Transform
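The encoding loop described above can be sketched at a high level. The prediction, transform, and quantization steps below are trivial arithmetic stand-ins (not HEVC tools) so that the data flow stays visible: residual, coefficients, and the reconstruction path feeding the reference frame queue:

```python
def encode_frame(frame, reference_queue):
    """Sketch of the per-frame encoding loop described above.

    Stand-ins: prediction copies the last reference frame (or zeros),
    and "DCT + quantization" is reduced to division by a fixed step.
    A real encoder replaces each step with intra/inter prediction,
    DCT, quantization, entropy coding, and in-loop filtering.
    """
    # Intra/inter prediction (stand-in: predict from the last reference frame).
    predicted = reference_queue[-1] if reference_queue else [0] * len(frame)

    # Residual = input - prediction; then "DCT + quantization" (stand-in).
    qstep = 4
    residual = [x - p for x, p in zip(frame, predicted)]
    coeffs = [r // qstep for r in residual]  # sent to entropy coding

    # Reconstruction path: inverse quantization, add the prediction back
    # (loop filtering omitted), then enter the reference frame queue.
    recon = [p + c * qstep for p, c in zip(predicted, coeffs)]
    reference_queue.append(recon)
    return coeffs, recon
```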
- each CTU is divided layer by layer according to a quadtree and computed recursively. For each CTU, there are three levels of recursion in total: CU64x64 to CU32x32, CU32x32 to CU16x16, and CU16x16 to CU8x8; the results are then compared layer by layer to select the optimal CU partitioning.
- LCU Largest Coding Unit
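The three-level recursion with layer-by-layer comparison of rate distortion costs can be sketched as follows. The `rd_cost` callback is a hypothetical hook standing in for the encoder's mode decision at each CU:

```python
def best_partition(x, y, size, rd_cost, min_size=8):
    """Recursively choose between keeping a CU whole and splitting it into
    four sub-CUs, returning (cost, partition_tree).

    `rd_cost(x, y, size)` is a caller-supplied stand-in for the encoder's
    RD evaluation of a CU at that position and size.
    """
    whole_cost = rd_cost(x, y, size)
    if size <= min_size:
        return whole_cost, (x, y, size)

    # Quadtree split: four sub-CUs of half the size, evaluated recursively.
    half = size // 2
    split_cost, children = 0.0, []
    for dx, dy in ((0, 0), (half, 0), (0, half), (half, half)):
        c, tree = best_partition(x + dx, y + dy, half, rd_cost, min_size)
        split_cost += c
        children.append(tree)

    # Layer-by-layer comparison: keep whichever alternative costs less.
    if split_cost < whole_cost:
        return split_cost, children
    return whole_cost, (x, y, size)
```

Starting from a 64x64 CTU, this explores exactly the CU64, CU32, CU16, CU8 levels described above; the fast-determination strategy prunes this recursion early.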
- the strategy is to predict the division of the current coding unit, achieving fast selection of the coding unit depth, thereby reducing coding complexity and improving coding speed.
- when encoding a frame, the recursion into the sub-blocks (SubCU) is performed after the Prediction Unit (PU) 2Nx2N computation.
- the coding unit depth fast determination process is performed in the step of determining whether the current CU performs 4 sub-block divisions. That is, when judging whether the current CU depth continues to be divided, PU merge and skip calculations, and PU 2Nx2N prediction have been made.
- the optimal mode referred to below is the optimal mode after PU 2Nx2N inter prediction and optimization.
- the judging process includes the following steps:
- Step 1: count the number of available CTUs at the positions neighboring the current CTU (left, upper, upper-left, and upper-right), and, among those adjacent CTUs, the number of CUs in skip mode with the same CU depth, together with the average of their rate distortion costs, recorded as ctu_validnum_skip_adjacent, samecudepth_num_skip_adjacent, and samecudepth_avgcost_skip_adjacent, respectively;
- Step 2: count the number of CUs in the current CTU that are in skip mode and have the same CU depth, together with the average of their rate distortion costs, recorded as samecudepth_num_skip_curr and samecudepth_avgcost_skip_curr, respectively;
- Step 3: if all of the following conditions are met, the current CU does not perform the 4 sub-block partition; otherwise, it performs the 4 sub-block partition.
- Condition 1: the optimal mode of the current CU is the skip mode.
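Only Condition 1 is reproduced in this excerpt; the remaining conditions on the counted statistics are elided. A sketch of the overall Step 1 to Step 3 decision, with those remaining conditions replaced by a hypothetical threshold on the counts and average costs, might look like:

```python
def should_split(cu, adjacent_stats, current_stats, coeff=2.0):
    """Decide whether the current CU performs the 4 sub-block partition.

    `adjacent_stats`/`current_stats` carry the counts and average RD costs
    gathered in Steps 1 and 2. Condition 1 (the optimal mode is skip) comes
    from the text; the numeric conditions below are hypothetical, since the
    excerpt elides the remaining conditions and the coefficient.
    """
    # Condition 1: the current CU's optimal mode must be the skip mode.
    if cu["best_mode"] != "skip":
        return True  # condition not met -> keep partitioning

    # Hypothetical remaining conditions: same-depth skip CUs were observed,
    # and bestcost is no more than coeff * their pooled average RD cost.
    n = adjacent_stats["samecudepth_num_skip"] + current_stats["samecudepth_num_skip"]
    if n == 0:
        return True
    total = (adjacent_stats["samecudepth_num_skip"] * adjacent_stats["samecudepth_avgcost_skip"]
             + current_stats["samecudepth_num_skip"] * current_stats["samecudepth_avgcost_skip"])
    avg = total / n
    return cu["bestcost"] > coeff * avg  # split only when the cost is too high
```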
- the impact range is within one P-frame period, and the judgment can be made when partitioning from CU32 to CU16 and from CU16 to CU8.
- the judgment method is the same as that for P frames, except that the coefficient is 3.5; details are not repeated here.
- the method is similar to the judgment made when partitioning from CU32 to CU16.
- the difference is that the statistics are gathered over CUs whose quantized residual coefficients are 0.
- the judgment process includes the following steps:
- Step 1: count the number of available CTUs at the positions neighboring the current CTU (left, upper, upper-left, and upper-right), and, among those adjacent CTUs, the number of CUs with quantized residual coefficients of 0 and the same CU depth, together with the average of their rate distortion costs, recorded as ctu_validnum_cbf_adjacent, samecudepth_num_cbf_adjacent, and samecudepth_avgcost_cbf_adjacent, respectively;
- Step 2: count the number of CUs in the current CTU whose quantized residual coefficients are 0 and have the same CU depth, together with the average of their rate distortion costs, recorded as samecudepth_num_cbf_curr and samecudepth_avgcost_cbf_curr, respectively;
- Step 3: if all of the following conditions are met, the current CU does not perform the 4 sub-block partition; otherwise, it performs the 4 sub-block partition.
- Condition 1: the optimal mode of the current CU is the skip mode.
- a CU in skip mode necessarily has quantized residual coefficients of zero, but a CU with zero quantized residual coefficients is not necessarily in skip mode.
- b frames are not used as reference frames, and their loss does not affect other frames. Therefore, the coding unit depth fast determination process is performed at all layers.
- the judgment process includes the following steps:
- Step 1: count the number of available CTUs at the positions neighboring the current CTU (left, upper, upper-left, and upper-right), and, among those adjacent CTUs, the number of CUs in skip mode with the same CU depth, together with the average of their rate distortion costs, recorded as ctu_validnum_skip_adjacent, samecudepth_num_skip_adjacent, and samecudepth_avgcost_skip_adjacent, respectively;
- Step 2: if all of the following conditions are met, the current CU does not perform the 4 sub-block partition; otherwise, it performs the 4 sub-block partition.
- Condition 1: the optimal mode of the current CU is the skip mode.
- the judgment process includes the following steps:
- Step 1: count the number of available CTUs at the positions neighboring the current CTU (left, upper, upper-left, and upper-right), and, among those adjacent CTUs, the number of CUs in skip mode with the same CU depth, together with the average of their rate distortion costs, recorded as ctu_validnum_skip_adjacent, samecudepth_num_skip_adjacent, and samecudepth_avgcost_skip_adjacent, respectively;
- Step 2: count the number of CUs in the current CTU that are in skip mode and have the same CU depth, together with the average of their rate distortion costs, recorded as samecudepth_num_skip_curr and samecudepth_avgcost_skip_curr, respectively;
- Step 3: if all of the following conditions are met, the current CU does not perform the 4 sub-block partition; otherwise, it performs the 4 sub-block partition.
- Condition 1: the optimal mode of the current CU is the skip mode.
- the judgment process includes the following steps:
- Step 1: count the number of available CTUs at the positions neighboring the current CTU (left, upper, upper-left, and upper-right), and, among those adjacent CTUs, the number of CUs with quantized residual coefficients of 0 and the same CU depth, together with the average of their rate distortion costs, recorded as ctu_validnum_cbf_adjacent, samecudepth_num_cbf_adjacent, and samecudepth_avgcost_cbf_adjacent, respectively;
- Step 2: count the number of CUs in the current CTU whose quantized residual coefficients are 0 and have the same CU depth, together with the average of their rate distortion costs, recorded as samecudepth_num_cbf_curr and samecudepth_avgcost_cbf_curr, respectively;
- Step 3: if all of the following conditions are met, the current CU does not perform the 4 sub-block partition; otherwise, it performs the 4 sub-block partition.
- Condition 1: the optimal mode of the current CU is the skip mode.
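The residual-coefficient variant mirrors the skip-mode check, except that the Step 1 and Step 2 statistics are gathered over CUs whose quantized residual coefficients are all zero. As noted earlier, skip mode implies a zero residual but not the converse; a small helper (with a hypothetical CU record layout) makes that one-way implication explicit:

```python
def cbf_is_zero(cu):
    """True when all quantized residual coefficients of the CU are zero.

    A skip-mode CU carries no residual, so it qualifies automatically;
    a non-skip CU qualifies only if its coefficients all quantized to 0.
    The dict layout ("mode", "quantized_coeffs") is a hypothetical sketch.
    """
    if cu["mode"] == "skip":
        return True
    return all(c == 0 for c in cu["quantized_coeffs"])
```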
- the coding speed is greatly improved. If a CU of size 32 does not perform the 4 sub-block partition, its 4 sub-CUs (of size 16) and the 4 sub-CUs contained in each CU16 (of size 8) do not need to perform the various PU computations, which greatly reduces the amount of calculation.
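The saving above is easy to quantify: withholding one CU32 split removes its 4 CU16 sub-CUs and their 16 CU8 sub-CUs from the PU computations, i.e. 20 CU evaluations per CU32:

```python
def cus_saved(size, min_size=8):
    """Number of descendant CU evaluations skipped when a CU of the given
    size does not perform the 4 sub-block partition (quadtree recursion
    down to min_size). For CU32 this is 4 + 16 = 20; for a whole CU64,
    4 + 16 + 64 = 84.
    """
    saved, count = 0, 1
    while size > min_size:
        size //= 2
        count *= 4       # each level has 4x as many sub-CUs
        saved += count
    return saved
```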
- an electronic device for implementing the coding unit division method of the video frame is further provided.
- the electronic device includes: one or more processors 702 (only one is shown in the figure), a memory 704, a sensor 706, an encoder 708, and a transmission device 710. A computer program is stored in the memory, and the processor is arranged to perform the steps in any of the above method embodiments by means of the computer program.
- the foregoing electronic device may be located in at least one network device of the plurality of network devices of the computer network.
- the foregoing processor may be configured to perform the following steps by using a computer program:
- the structure shown in FIG. 7 is merely illustrative, and the electronic device may also be a terminal device such as a smart phone (such as an Android phone or an iOS phone), a tablet computer, a palmtop computer, a Mobile Internet Device (MID), or a PAD.
- FIG. 7 does not limit the structure of the above electronic device.
- the electronic device may also include more or fewer components (such as a network interface, display device, etc.) than shown in FIG. 7, or have a different configuration than that shown in FIG.
- the memory 704 can be used to store software programs and modules, such as the program instructions/modules corresponding to the coding unit division method and apparatus for a video frame in the embodiments of the present application; the processor 702 runs the software programs and modules stored in the memory 704, thereby performing various functional applications and data processing, that is, implementing the above-described coding unit division method.
- the memory 704 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory.
- the memory 704 may further include memory remotely located relative to the processor 702, which may be connected to the terminal over a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
- the transmission device 710 described above is for receiving or transmitting data via a network.
- Specific examples of the above network may include a wired network and a wireless network.
- the transmission device 710 includes a Network Interface Controller (NIC) that can be connected to other network devices and routers via a network cable to communicate with the Internet or a local area network.
- the transmission device 710 is a Radio Frequency (RF) module for communicating with the Internet wirelessly.
- NIC Network Interface Controller
- RF Radio Frequency
- the memory 704 is used to store an application.
- Embodiments of the present application also provide a storage medium having stored therein a computer program, wherein the computer program is configured to execute the steps of any one of the method embodiments described above.
- the above storage medium may be configured to store a computer program for performing the following steps:
- the storage medium is further configured to store a computer program for performing the steps included in the method in the above embodiments, which will not be described in detail in this embodiment.
- Embodiments of the present application also provide a computer program product comprising instructions that, when executed on a computer, cause the computer to perform a coding unit partitioning method for a video frame as described herein.
- the storage medium may include a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk, and the like.
- the integrated unit in the above embodiments, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in the above-described computer-readable storage medium.
- the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium.
- a number of instructions are included to cause one or more computer devices (which may be a personal computer, server or network device, etc.) to perform all or part of the steps of the methods described in various embodiments of the present application.
- the disclosed client may be implemented in other manners.
- the device embodiments described above are merely illustrative.
- the division of units is only a logical functional division; multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
- the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, unit or module, and may be electrical or otherwise.
- the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
- each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
- the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
Abstract
Description
Frame type | Coding unit type |
I frame | none |
P frame | 2 |
B frame | 1, 2 |
b frame | 0, 1, 2 |
Claims (16)
- A coding unit division method for a video frame, applied to an electronic apparatus, comprising: determining, according to frame types and coding unit types having a corresponding relationship, a target coding unit type corresponding to a target frame type to which a target frame belongs, where the target coding unit type is used to indicate a division depth when the target frame is divided; when dividing a target coding unit belonging to the target coding unit type in the target frame, determining, according to coding unit information of the target coding unit, whether the target coding unit satisfies a target condition, to obtain a target result; and performing, on the target coding unit, a division operation corresponding to the target result.
- The method according to claim 1, wherein performing the division operation corresponding to the target result on the target coding unit comprises: stopping dividing the target coding unit when the target result indicates that the target coding unit satisfies the target condition; and/or dividing the target coding unit when the target result indicates that the target coding unit does not satisfy the target condition.
- The method according to claim 1, wherein the frame types and coding unit types having the corresponding relationship are established in the following manner: acquiring frame types, where the frame types are obtained by division according to reference relationships between frames in a frame encoding process; acquiring coding unit types corresponding to the frame types; and storing the frame types and coding unit types having the corresponding relationship.
- The method according to claim 3, wherein acquiring the frame types comprises: determining the frame types, the frame types including: a first frame type, a second frame type, a third frame type, and a fourth frame type; where a frame of the first frame type does not refer to other frames in the frame encoding process; a frame of the second frame type refers to frames of the first frame type in the encoding process; a frame of the third frame type refers to frames of the first frame type and frames of the second frame type in the encoding process; and a frame of the fourth frame type refers to frames of the first frame type and frames of the third frame type, or refers to frames of the second frame type and frames of the third frame type, in the encoding process.
- The method according to claim 4, wherein acquiring the coding unit types corresponding to the frame types comprises: determining that the coding unit type corresponding to the second frame type is a first coding unit type, coding units belonging to the first coding unit type including: 16×16 coding units; determining that the coding unit type corresponding to the third frame type is a second coding unit type, coding units belonging to the second coding unit type including: 16×16 coding units and 32×32 coding units; and determining that the coding unit type corresponding to the fourth frame type is a third coding unit type, coding units belonging to the third coding unit type including: 16×16 coding units, 32×32 coding units, and 64×64 coding units.
- The method according to any one of claims 1 to 5, wherein determining, according to the coding unit information of the target coding unit, whether the target coding unit satisfies the target condition comprises: determining, as the target coding unit, a coding unit in the target frame that matches the division depth indicated by the target coding unit type; acquiring the coding unit information of the target coding unit when dividing the target coding unit; and determining, according to the coding unit information, whether the target coding unit satisfies the target condition.
- The method according to claim 6, wherein acquiring the coding unit information of the target coding unit comprises: acquiring an optimal mode of the target coding unit and a rate distortion cost of the optimal mode; and determining, according to the coding unit information, whether the target coding unit satisfies the target condition comprises: determining that the target coding unit satisfies the target condition when the optimal mode is the skip mode and the rate distortion cost falls within a target threshold range; and/or determining that the target coding unit does not satisfy the target condition when the optimal mode is not the skip mode, or the rate distortion cost does not fall within the target threshold range.
- A coding unit division apparatus for a video frame, comprising: a first determining module configured to determine, according to frame types and coding unit types having a corresponding relationship, a target coding unit type corresponding to a target frame type to which a target frame belongs, where the target coding unit type is used to indicate a division depth when the target frame is divided; a second determining module configured to determine, when dividing a target coding unit belonging to the target coding unit type in the target frame, whether the target coding unit satisfies a target condition according to coding unit information of the target coding unit, to obtain a target result; and a processing module configured to perform, on the target coding unit, a division operation corresponding to the target result.
- The apparatus according to claim 8, wherein the processing module comprises: a first processing unit configured to stop dividing the target coding unit when the target result indicates that the target coding unit satisfies the target condition; and/or a second processing unit configured to divide the target coding unit when the target result indicates that the target coding unit does not satisfy the target condition.
- The apparatus according to claim 8, further comprising: a first acquiring module configured to acquire frame types, where the frame types are obtained by division according to reference relationships between frames in a frame encoding process; a second acquiring module configured to acquire coding unit types corresponding to the frame types; and a storage module configured to store the frame types and coding unit types having the corresponding relationship.
- The apparatus according to claim 10, wherein the first acquiring module is configured to: determine the frame types, the frame types including: a first frame type, a second frame type, a third frame type, and a fourth frame type; where a frame of the first frame type does not refer to other frames in the encoding process; a frame of the second frame type refers to frames of the first frame type in the encoding process; a frame of the third frame type refers to frames of the first frame type and frames of the second frame type in the encoding process; and a frame of the fourth frame type refers to frames of the first frame type and frames of the third frame type, or refers to frames of the second frame type and frames of the third frame type, in the encoding process.
- The apparatus according to claim 11, wherein the second acquiring module comprises: a second determining unit configured to determine that the coding unit type corresponding to the second frame type is a first coding unit type, coding units belonging to the first coding unit type including: 16×16 coding units; a third determining unit configured to determine that the coding unit type corresponding to the third frame type is a second coding unit type, coding units belonging to the second coding unit type including: 16×16 coding units and 32×32 coding units; and a fourth determining unit configured to determine that the coding unit type corresponding to the fourth frame type is a third coding unit type, coding units belonging to the third coding unit type including: 16×16 coding units, 32×32 coding units, and 64×64 coding units.
- The apparatus according to any one of claims 8 to 12, wherein the second determining module comprises: a ninth determining unit configured to determine, as the target coding unit, a coding unit in the target frame that matches the division depth indicated by the target coding unit type; an acquiring unit configured to acquire the coding unit information of the target coding unit when dividing the target coding unit; and a tenth determining unit configured to determine, according to the coding unit information, whether the target coding unit satisfies the target condition.
- A storage medium storing a computer program, wherein the computer program is configured to perform, when run, the method according to any one of claims 1 to 7.
- An electronic apparatus comprising a memory and a processor, the memory storing a computer program, and the processor being configured to perform the method according to any one of claims 1 to 7 by means of the computer program.
- A computer program product comprising instructions which, when run on a computer, cause the computer to perform the coding unit division method for a video frame according to any one of claims 1 to 7.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP19799390.0A EP3793196B1 (en) | 2018-05-10 | 2019-04-03 | Video frame encoding unit division method and apparatus, and storage medium and electronic apparatus |
JP2020543747A JP7171748B2 (ja) | 2018-05-10 | 2019-04-03 | ビデオフレームの符号化単位の分割方法、装置、記憶媒体及び電子装置 |
US16/933,455 US11317089B2 (en) | 2018-05-10 | 2020-07-20 | Method, apparatus, and storage medium for dividing coding unit of video frame |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810444914.5A CN110198443B (zh) | 2018-05-10 | 2018-05-10 | 视频帧的编码单元划分方法、装置、存储介质及电子装置 |
CN201810444914.5 | 2018-05-10 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/933,455 Continuation US11317089B2 (en) | 2018-05-10 | 2020-07-20 | Method, apparatus, and storage medium for dividing coding unit of video frame |
Publications (1)
Publication Number | Publication Date |
---|---|
- WO2019214373A1 (zh) | 2019-11-14 |
Family
ID=67751027
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/081211 WO2019214373A1 (zh) | 2018-05-10 | 2019-04-03 | 视频帧的编码单元划分方法、装置、存储介质及电子装置 |
Country Status (5)
Country | Link |
---|---|
US (1) | US11317089B2 (zh) |
EP (1) | EP3793196B1 (zh) |
JP (1) | JP7171748B2 (zh) |
CN (1) | CN110198443B (zh) |
WO (1) | WO2019214373A1 (zh) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110572679B (zh) * | 2019-09-27 | 2022-04-26 | 腾讯科技(深圳)有限公司 | Intra prediction encoding method, apparatus, device, and readable storage medium |
CN113242429B (zh) * | 2021-05-11 | 2023-12-05 | 杭州网易智企科技有限公司 | Video coding mode decision method, apparatus, device, and storage medium |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105103554A (zh) * | 2013-03-28 | 2015-11-25 | 华为技术有限公司 | Method for protecting a video frame sequence against packet loss |
CN103491369B (zh) * | 2013-09-18 | 2016-09-28 | 华为技术有限公司 | Inter-frame prediction encoding method and encoder |
US9924183B2 (en) * | 2014-03-20 | 2018-03-20 | Nanjing Yuyan Information Technology Ltd. | Fast HEVC transcoding |
CN104602017B (zh) * | 2014-06-10 | 2017-12-26 | 腾讯科技(北京)有限公司 | Video encoder, method and apparatus, and inter-frame mode selection method and apparatus therefor |
CN107396121B (zh) * | 2017-08-22 | 2019-11-01 | 中南大学 | Coding unit depth prediction method and apparatus based on hierarchical B-frame structure |
2018
- 2018-05-10 CN CN201810444914.5A patent/CN110198443B/zh active Active

2019
- 2019-04-03 WO PCT/CN2019/081211 patent/WO2019214373A1/zh active Application Filing
- 2019-04-03 EP EP19799390.0A patent/EP3793196B1/en active Active
- 2019-04-03 JP JP2020543747A patent/JP7171748B2/ja active Active

2020
- 2020-07-20 US US16/933,455 patent/US11317089B2/en active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120236944A1 (en) * | 2009-12-08 | 2012-09-20 | Samsung Electronics Co., Ltd. | Method and apparatus for encoding video by motion prediction using arbitrary partition, and method and apparatus for decoding video by motion prediction using arbitrary partition |
CN104023234A (zh) * | 2014-06-24 | 2014-09-03 | 华侨大学 | A fast inter-frame prediction method for HEVC |
CN104243997A (zh) * | 2014-09-05 | 2014-12-24 | 南京邮电大学 | A quality-scalable HEVC video coding method |
CN107295336A (zh) * | 2017-06-21 | 2017-10-24 | 鄂尔多斯应用技术学院 | Adaptive fast coding unit division method and apparatus based on image correlation |
Non-Patent Citations (1)
Title |
---|
See also references of EP3793196A4 * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111901591A (zh) * | 2020-07-28 | 2020-11-06 | 有半岛(北京)信息科技有限公司 | Method, apparatus, server, and storage medium for determining a coding mode |
CN111901591B (zh) * | 2020-07-28 | 2023-07-18 | 有半岛(北京)信息科技有限公司 | Method, apparatus, server, and storage medium for determining a coding mode |
Also Published As
Publication number | Publication date |
---|---|
US11317089B2 (en) | 2022-04-26 |
US20200351498A1 (en) | 2020-11-05 |
EP3793196A4 (en) | 2021-06-09 |
CN110198443B (zh) | 2022-09-13 |
CN110198443A (zh) | 2019-09-03 |
EP3793196B1 (en) | 2024-01-10 |
JP7171748B2 (ja) | 2022-11-15 |
EP3793196A1 (en) | 2021-03-17 |
JP2021514158A (ja) | 2021-06-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR102429449B1 | Method and device for bit-width control for bidirectional optical flow | |
WO2021114846A1 | Video noise reduction processing method, apparatus, and storage medium | |
JP7012747B2 | Video frame encoding method, terminal, and storage medium | |
WO2019214373A1 | Video frame encoding unit division method and apparatus, and storage medium and electronic apparatus | |
KR101808327B1 | Image encoding/decoding method and apparatus using padding in an image codec | |
KR102214937B1 | Deblocking filter method and apparatus | |
CN111316642B | Method and apparatus for signaling image encoding and decoding partition information | |
CN109688407B | Reference block selection method and apparatus for a coding unit, electronic device, and storage medium | |
KR20150036161A | Restricted intra deblocking filtering for video coding | |
EP4024872A1 | Video coding method and apparatus, video decoding method and apparatus, electronic device, and storage medium | |
CN109963151B | Coding unit division determination method and apparatus, terminal device, and readable storage medium | |
JP2015103970A | Video encoding apparatus, video encoding method, video encoding program, and video capturing apparatus | |
CN108777794B | Image encoding method and apparatus, storage medium, and electronic apparatus | |
CN110381312B | HEVC-based method and apparatus for predicting depth division range | |
WO2020248715A1 | Coding management method and apparatus based on High Efficiency Video Coding | |
CN109218722B | Video encoding method, apparatus, and device | |
CN115118976A | Image encoding method, readable medium, and electronic device | |
KR102094247B1 | Deblocking filtering method and deblocking filter | |
CN110213595B | Intra-prediction-based encoding method, image processing device, and storage apparatus | |
EP2890124A1 | Coding method and device applied to hevc-based 3dvc | |
CN109618152B | Depth division encoding method, apparatus, and electronic device | |
CN105357494A | Video encoding/decoding method, apparatus, and computer program product | |
CN110740323B | Method, apparatus, server, and storage medium for determining LCU division mode | |
EP4044600A1 | Video encoding method and apparatus, video decoding method and apparatus, electronic device, and storage medium | |
CN112383774B | Encoding method, encoder, and server |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19799390; Country of ref document: EP; Kind code of ref document: A1 |
ENP | Entry into the national phase | Ref document number: 2020543747; Country of ref document: JP; Kind code of ref document: A |
NENP | Non-entry into the national phase | Ref country code: DE |
WWE | Wipo information: entry into national phase | Ref document number: 2019799390; Country of ref document: EP |
ENP | Entry into the national phase | Ref document number: 2019799390; Country of ref document: EP; Effective date: 20201210 |