WO2019184934A1 - Intra prediction method and apparatus for chrominance - Google Patents
- Publication number: WO2019184934A1 (application PCT/CN2019/079808)
- Authority: WIPO (PCT)
- Prior art keywords: block, luma, motion vector, prediction, luminance
- Prior art date
Classifications
- H—ELECTRICITY; H04—ELECTRIC COMMUNICATION TECHNIQUE; H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION; H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/105—Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
- H04N19/117—Filters, e.g. for pre-processing or post-processing
- H04N19/159—Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
- H04N19/186—Adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component
- H04N19/51—Motion estimation or motion compensation
- H04N19/593—Predictive coding involving spatial prediction techniques
- H04N19/96—Tree coding, e.g. quad-tree coding
Definitions
- the present application relates to the field of video coding and decoding, and in particular, to a chrominance intra prediction method and apparatus.
- intra prediction is a technique that exploits the correlation between pixels within an image.
- in the High Efficiency Video Coding (HEVC) Screen Content Coding (SCC) extension standard, there is an intra prediction technique called intra block copy, also referred to as intra-frame motion compensation. For a block to be processed (such as a coding block or a decoding block), a search is performed in the already-encoded region of the image frame to which the block belongs to find the best matching block, which serves as its prediction block. Compared with conventional intra prediction, the intra motion compensation technique requires the encoder to determine and encode a corresponding motion vector for each coding block; in this way, the decoding end can locate the prediction block of the corresponding decoding block from the motion vector and complete the decoding process.
- pixel values include a luminance component Y, a chrominance component U, and a chrominance component V.
- traditionally, the division of the luminance component and the chrominance component is consistent.
- the proposed Joint Exploration Model (JEM, the reference software model of H.266) delivers a significant performance improvement over HEVC.
- in JEM, the luminance component and the chrominance component are divided and coded separately, so the division of the luminance component and that of the chrominance component are no longer consistent.
- it must therefore be decided separately for the luminance component and the chrominance component whether to adopt the intra-frame motion compensation technique.
- if the technique is adopted, the corresponding motion vectors need to be determined separately, and the corresponding prediction blocks are then generated based on the determined motion vectors to carry out the respective video encoding and decoding processes.
- as a result, the processing flow of the intra motion compensation technique is complicated, and the computational cost of determining the motion vectors is high.
- the embodiments of the present application provide a chrominance intra prediction method and apparatus, which solve the problem that the current intra-frame motion compensation technique has a complicated processing flow and a high computational cost for determining motion vectors.
- the technical solution is as follows:
- in a first aspect, a chrominance intra prediction method is provided, comprising:
- when the target intra prediction mode is the intra motion compensation mode, determining a motion vector of the current chroma block in the image frame to be processed based on the motion vector of a reference luma block, where the target intra prediction mode is the mode used to predict the prediction block of the current chroma block; in the intra motion compensation mode, the motion vector of the current chroma block is generated based on a motion vector of a luma block, and the reference luma block is one of the n luma blocks corresponding to the current chroma block position, n ≥ 1;
- predicting the prediction block of the current chroma block based on the motion vector of the current chroma block.
- in a second aspect, a chrominance intra prediction apparatus is provided, comprising:
- a first determining module configured to, when the target intra prediction mode is the intra motion compensation mode, determine a motion vector of the current chroma block in the image frame to be processed based on the motion vector of a reference luma block, where the target intra prediction mode is the mode used to predict the prediction block of the current chroma block; in the intra motion compensation mode, the motion vector of the current chroma block is generated based on a motion vector of a luma block, and the reference luma block is one of the n luma blocks corresponding to the current chroma block position, n ≥ 1;
- a prediction module configured to predict a prediction block of the current chroma block based on a motion vector of the current chroma block.
- in a third aspect, a chrominance intra prediction apparatus is provided, including:
- at least one processor; and
- at least one memory;
- the at least one memory stores at least one program, and the at least one processor is configured to execute the at least one program to perform the chrominance intra prediction method of any implementation of the first aspect.
- in a fourth aspect, a storage medium is provided, in which instructions or code are stored;
- when the instructions or code are executed by a processor, the processor is enabled to perform the chrominance intra prediction method described in any implementation of the first aspect.
- in the chrominance intra prediction method and apparatus provided above, since the motion vector of the chroma block is determined based on the motion vector of the luma block, the correlation between the motion vector of the luminance component and the motion vector of the chrominance component is fully utilized. It is therefore unnecessary to calculate the motion vector of the chroma component separately, which simplifies the processing flow of the intra motion compensation technique, reduces the computational cost of the chroma motion vector, and correspondingly reduces the computational cost of motion vectors overall.
- moreover, since the motion vector of the chroma block is generated based on the motion vector of the luma block, the motion vector of the chroma block need not be encoded separately during encoding, which reduces the coding cost of the chroma motion vector and helps improve video coding efficiency.
- FIG. 1 is a flowchart of a chrominance intra prediction method according to an exemplary embodiment
- FIG. 2 is a flowchart of another chrominance intra prediction method according to an exemplary embodiment
- FIG. 3 is a schematic diagram showing a division manner of a chroma maximum coding unit according to an exemplary embodiment
- FIG. 4 is a schematic diagram showing the numbering of the blocks obtained in FIG. 3;
- FIG. 5 is a schematic diagram of a division manner of another chroma maximum coding unit according to an exemplary embodiment
- FIG. 6 is a flowchart of a method for adding an intra motion compensation mode to a prediction mode candidate queue according to an exemplary embodiment
- FIG. 8 and FIG. 9 are schematic diagrams showing the correspondence relationship between a chroma block and a luminance image region in a scene with an encoding format of 4:2:0;
- FIG. 10 is a schematic diagram of a to-be-processed image frame during processing according to an exemplary embodiment
- FIG. 11 and FIG. 12 are schematic structural diagrams showing two coding division manners according to an exemplary embodiment
- FIG. 13 is a flowchart of still another chrominance intra prediction method according to an exemplary embodiment
- FIG. 14 is a flowchart of a chrominance intra prediction method according to another exemplary embodiment
- FIG. 15 is a schematic structural diagram of a chrominance intra prediction apparatus according to an exemplary embodiment;
- FIG. 16 is a schematic structural diagram of another chrominance intra prediction apparatus according to an exemplary embodiment;
- FIG. 17 is a schematic structural diagram of a construction module according to an exemplary embodiment
- FIG. 18 is a schematic structural diagram of a third determining module according to an exemplary embodiment
- FIG. 19 is a schematic structural diagram of another third determining module according to an exemplary embodiment.
- FIG. 20 is a schematic structural diagram of still another chrominance intra prediction apparatus according to an exemplary embodiment
- FIG. 21 is a schematic structural diagram of still another chrominance intra prediction apparatus according to an exemplary embodiment
- FIG. 22 is a schematic structural diagram of a chrominance intra prediction apparatus according to another exemplary embodiment;
- FIG. 23 is a schematic structural diagram of another chrominance intra prediction apparatus according to another exemplary embodiment;
- FIG. 24 is a schematic structural diagram of still another chrominance intra prediction apparatus according to another exemplary embodiment;
- FIG. 25 is a schematic structural diagram of still another chrominance intra prediction apparatus according to another exemplary embodiment.
- An embodiment of the present application provides a chrominance intra prediction method, which is applied to the field of video coding and decoding.
- the chrominance intra prediction method is applicable to codecs for video in the YUV format (also referred to as the YUV color space).
- the basic encoding principle may be as follows: an image is captured by an image acquisition device such as a three-tube color camera or a charge-coupled device (CCD) camera; the obtained color image signal is color-separated and separately amplified to obtain an RGB signal; the RGB signal is then passed through a matrix conversion circuit to obtain the signal of the luminance component Y and two color-difference signals, B−Y (i.e., the signal of the chrominance component U) and R−Y (i.e., the signal of the chrominance component V).
- finally, the signal of the luminance component Y, the signal of the chrominance component U, and the signal of the chrominance component V, represented in the YUV color space, are separated.
- the above-mentioned YUV format can also be obtained by other means, which is not limited by the embodiment of the present application.
- an image in the YUV format (hereinafter referred to as the target image) is usually obtained by sampling with an image capture device such as a camera and subjecting the captured initial image to a series of processing steps (for example, format conversion). The sampling rates of the luminance component Y, the chrominance component U, and the chrominance component V may differ, while the distribution density of each color component in the initial image is the same, that is, the distribution density ratio of the color components in the initial image is 1:1:1.
- because the sampling rates differ, the distribution densities of the different color components of the target image differ, and the distribution density ratio of the color components equals the sampling rate ratio.
- the distribution density of a color component refers to the number of pieces of information of that color component contained in a unit size. For example, the distribution density of the luminance component refers to the number of luminance pixel values (also referred to as luminance values) contained in a unit size.
- the current YUV format is divided into multiple encoding formats based on different sampling rate ratios.
- the encoding format can be expressed as a sampling rate ratio. This representation is called A:B:C notation, and the current encoding formats include 4:4:4, 4:2:2, 4:2:0, and 4:1:1.
- an encoding format of 4:4:4 indicates that the luminance component Y, the chrominance component U, and the chrominance component V in the target image have the same sampling rate; no downsampling is performed on the initial image, and the distribution density ratio of the color components of the target image is 1:1:1. An encoding format of 4:2:2 indicates that every two luminance components Y in the target image share one set of chrominance components U and V, and the distribution density ratio of the color components of the target image is 2:1:1; that is, with the pixel as the sampling unit, the luminance component of the initial image is not downsampled, while the chrominance components are downsampled 2:1 in the horizontal direction and not downsampled in the vertical direction to obtain the target image.
- an encoding format of 4:2:0 indicates that, for each of the chrominance components U and V in the target image, the sampling rate in both the horizontal and vertical directions is 2:1 relative to the luminance component; the distribution density ratio of the luminance component Y to the chrominance component U of the target image is 2:1, and the distribution density ratio of the luminance component Y to the chrominance component V is likewise 2:1. That is, with the pixel as the sampling unit, the luminance component of the initial image is not downsampled, while the chrominance components are downsampled 2:1 in the horizontal direction and 2:1 in the vertical direction to obtain the target image.
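The A:B:C formats above can be summarized by the luma:chroma distribution-density ratios they imply. A minimal Python sketch (the helper names are illustrative, not taken from the patent):

```python
# Distribution-density ratios (horizontal, vertical) implied by the
# A:B:C sampling notation described above. Helper names are illustrative.

def density_ratio(fmt):
    """Return the (horizontal, vertical) luma:chroma density ratio."""
    ratios = {
        "4:4:4": (1, 1),  # chroma not downsampled
        "4:2:2": (2, 1),  # chroma halved horizontally only
        "4:2:0": (2, 2),  # chroma halved in both directions
        "4:1:1": (4, 1),  # chroma quartered horizontally
    }
    return ratios[fmt]

def chroma_plane_size(luma_w, luma_h, fmt):
    """Size of one chroma plane for a luma plane of the given size."""
    k1, k2 = density_ratio(fmt)
    return luma_w // k1, luma_h // k2
```

For instance, for a 1920 × 1080 luma plane in 4:2:0, each chroma plane is 960 × 540.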
- an embodiment of the present application provides a chrominance intra prediction method applied to the encoding and decoding of I frames. An I frame, also called an intra picture, is usually the first frame of a group of pictures (GOP) and is also known as an intra-prediction-coded frame or a key frame.
- the chrominance intra prediction method includes:
- Step 101 When the target intra prediction mode is the intra motion compensation mode, determine a motion vector of the current chroma block in the image frame to be processed based on the motion vector of the reference luma block.
- the target intra prediction mode is a mode for predicting a prediction block of a current chroma block
- in the intra motion compensation mode, the prediction block of the current chroma block is generated using the intra motion compensation technique, and the generation process is: acquiring the motion vector of the current chroma block, and generating the prediction block based on that motion vector.
- the motion vector of the current chroma block is generated based on the motion vector of the reference luma block.
- the reference luma block is one of the n luma blocks corresponding to the current chroma block position, n ≥ 1.
- Step 102 Predict a prediction block of a current chroma block based on a motion vector of a current chroma block.
- in the encoding process, the current chroma block refers to the chroma block currently to be encoded; in the decoding process, it refers to the chroma block currently to be decoded. The current chroma block may be an image block of the chrominance component U or an image block of the chrominance component V.
- the luminance component is correlated with the chrominance component, and accordingly their motion vectors are also correlated.
- in the chrominance intra prediction method provided by the embodiment of the present application, since the motion vector of the chroma block is determined based on the motion vector of the luma block, the correlation between the motion vector of the luminance component and the motion vector of the chrominance component is fully utilized; it is unnecessary to calculate the motion vector of the chroma component separately, which simplifies the processing flow of the intra motion compensation technique, reduces the computational cost of the chroma motion vector, and correspondingly reduces the computational cost of motion vectors overall.
- moreover, since the motion vector of the chroma block is generated based on the motion vector of the luma block, the motion vector of the chroma block need not be encoded separately during encoding, which reduces the coding cost of the chroma motion vector and helps improve video coding efficiency.
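The derivation of the chroma motion vector from a reference luma block's motion vector can be sketched as follows. Scaling by the density ratios K1 and K2 is an assumption for illustration; this passage does not spell out the patent's exact derivation rule:

```python
def chroma_mv_from_luma(luma_mv, k1=2, k2=2):
    """Derive the current chroma block's motion vector from the motion
    vector of the reference luma block by scaling with the horizontal and
    vertical luma:chroma density ratios (both 2 for 4:2:0).
    Illustrative sketch only."""
    mv_x, mv_y = luma_mv
    return mv_x // k1, mv_y // k2
```

In 4:2:0, for example, a luma motion vector of (−8, 4) would map to a chroma motion vector of (−4, 2) under this rule.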
- the intra prediction method may be applied to both the encoding end and the decoding end.
- in the embodiments of the present application, the application of the intra prediction method at the encoding end and at the decoding end is explained separately in the following two aspects:
- in a first aspect, the chrominance intra prediction method is performed by the encoding end and is used for the encoding of I frames; the method includes:
- Step 201 Divide the image frame to be processed into chroma blocks.
- the image frame to be processed includes a luminance image and a chrominance image located in the same region, and the encoding process of the luminance component is actually encoding the luminance image, and the encoding process of the chrominance component is actually encoding the chrominance image.
- during encoding, the image frame to be processed is usually first divided into maximum coding units (coding tree units, CTUs) of equal size; a maximum coding unit comprises a luma maximum coding unit and a chroma maximum coding unit.
- the maximum coding unit is usually a square coding block, which may be 8 × 8 pixels, 16 × 16 pixels, 32 × 32 pixels, 64 × 64 pixels, 128 × 128 pixels, or 256 × 256 pixels.
- when the image frame to be processed is encoded, the luminance component is encoded first. The encoding process of the luminance component may include: dividing the luma maximum coding unit, where the resulting luma blocks may be square or rectangular.
- the encoding type of each luma block and the prediction information corresponding to that encoding type are then determined.
- the encoding type of a luma block generally includes the intra prediction type or the inter prediction type (also referred to as the inter coding type).
- the prediction information corresponding to the intra prediction type includes a prediction mode, and the prediction information corresponding to the inter prediction type includes a prediction vector and a reference frame index.
- for each luma block, after the prediction information is determined based on the determined encoding type, the luma block is predicted according to the prediction information to obtain its prediction block. The original pixel values of the luma block are then compared with the pixel values of the prediction block to obtain the residuals of the luma block (all residuals of the luma block constitute a residual block), and the residuals are transformed, quantized, and entropy-encoded to obtain the code stream of the luma block.
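The residual computation in the luma encoding flow above (original pixel values minus prediction, prior to transform, quantization, and entropy coding) can be sketched as:

```python
def residual_block(orig, pred):
    """Per-pixel residuals between an original luma block and its prediction
    block (given as 2-D lists of pixel values); all residuals together form
    the residual block described above. Illustrative sketch."""
    return [[o - p for o, p in zip(row_o, row_p)]
            for row_o, row_p in zip(orig, pred)]
```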
- the intra motion compensation technique is an intra prediction technique, but when marking the coding type, it is usually marked as an inter prediction type.
- after the luma blocks in the image frame to be processed are encoded, the chroma maximum coding unit can be divided.
- the division of the chroma maximum coding unit adopts a quadtree plus binary tree division method, and the division manner may be pre-agreed with the decoding end or encoded in the code stream after division.
- the process includes:
- Fig. 3 is a schematic diagram of the division of the chrominance maximum coding unit.
- first, by quadtree division, the chroma maximum coding unit P is divided into four blocks P1, P2, P3, and P4; then the first block P1 and the fourth block P4 are each further divided by quadtree into four blocks, P11, P12, P13, and P14 and P41, P42, P43, and P44, respectively.
- each block finally obtained by quadtree partitioning is called a quadtree leaf node.
- a quadtree leaf node is represented by Qi, where i is the number of the leaf node and also indicates the coding order, as shown in FIG. 4.
- FIG. 4 is a schematic diagram of the numbering of the blocks obtained by the division in FIG. 3.
- the coding order of the quadtree leaf nodes is: for blocks sharing one parent node, encoding proceeds in scanning order from left to right and from top to bottom.
- each quadtree leaf node can be further divided by a binary tree into two equal-sized blocks.
- a block obtained by binary tree division can itself be further divided by a binary tree, as shown in FIG. 5, where the broken lines indicate the final block division obtained after binary tree division.
- each block finally obtained by binary tree partitioning is called a binary tree leaf node.
- the two blocks obtained by the binary tree division are also encoded in order from left to right and top to bottom.
- the blocks obtained by quadtree and binary tree division are the final chroma blocks, denoted Bi, where i is the number of the chroma block and also indicates the coding order of the chroma blocks.
- it should be noted that the quadtree-plus-binary-tree division of the chroma maximum coding unit described above is only illustrative; in an actual implementation, the embodiment of the present application may also adopt only quadtree division or only binary tree division, and the division of the chroma maximum coding unit is not limited in the embodiments of the present application.
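The quadtree-plus-binary-tree division and the left-to-right, top-to-bottom coding order described above can be sketched as a recursive enumeration. The `split` callback deciding how each block divides is a hypothetical stand-in for the encoder's actual partitioning decision:

```python
def qtbt_leaves(x, y, w, h, split):
    """Enumerate the final blocks (leaf nodes) of a quadtree-plus-binary-tree
    division in coding order. `split(x, y, w, h)` returns 'quad', 'vert',
    'horz', or None (leaf). Illustrative sketch."""
    s = split(x, y, w, h)
    if s is None:
        return [(x, y, w, h)]
    if s == "quad":
        hw, hh = w // 2, h // 2
        leaves = []
        # children in scanning order: left to right, top to bottom
        for cx, cy in [(x, y), (x + hw, y), (x, y + hh), (x + hw, y + hh)]:
            leaves += qtbt_leaves(cx, cy, hw, hh, split)
        return leaves
    if s == "vert":  # binary split into left/right halves
        hw = w // 2
        return qtbt_leaves(x, y, hw, h, split) + qtbt_leaves(x + hw, y, hw, h, split)
    hh = h // 2      # 'horz': binary split into top/bottom halves
    return qtbt_leaves(x, y, w, hh, split) + qtbt_leaves(x, y + hh, w, hh, split)
```

For a 16 × 16 unit split once by quadtree, this yields the four 8 × 8 blocks in the coding order P1, P2, P3, P4 described above.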
- Step 202 Construct, for the current chroma block, a prediction mode candidate queue, where the prediction mode candidate queue includes at least one prediction mode, and each prediction mode is a mode for predicting a prediction block of the current chroma block.
- the prediction mode candidate queue may have multiple types of prediction modes.
- the multiple types of prediction modes include: an intra prediction mode, a cross component prediction mode, and an intra motion compensation mode.
- the definitions of the intra prediction mode and the cross-component prediction mode can be found in JEM.
- the intra motion compensation mode is a new mode added to the prediction mode candidate queue, proposed in the embodiment of the present application; in this mode, the motion vector of the current chroma block is generated based on the motion vector of a luma block.
- Each prediction mode in the prediction mode candidate queue has its corresponding mode number.
- for example, 67 intra prediction modes are represented by the mode numbers 0 to 66,
- and 6 cross-component prediction modes are represented by the mode numbers 67 to 72.
- the intra motion compensation mode added to the prediction mode candidate queue is represented by a mode number different from those of the traditional prediction modes; for example, the mode number may be greater than or equal to 73.
- as an example, mode number 73 indicates the intra motion compensation mode.
- the process of constructing the prediction mode candidate queue may include: a process of adding cross-component prediction modes to the queue; a process of adding the intra motion compensation mode to the queue; and a process of adding intra prediction modes to the queue.
- the sequence of execution of the three processes is not limited in the embodiment of the present application. Generally, the three processes may be sequentially performed in the order of adding the cross-component prediction mode, the intra motion compensation mode, and the intra prediction mode.
- the length of the prediction mode candidate queue (that is, the number of prediction modes allowed in the queue) is preset; in other words, a length threshold exists. As an example, the length threshold is 11, so the execution of the three processes is also constrained by the length threshold.
- the process of constructing a prediction mode candidate queue includes:
- Step A1 Add a cross-component prediction mode to the prediction mode candidate queue.
- step A1 includes adding at least one cross-component prediction mode to the prediction mode candidate queue.
- for example, the six cross-component prediction modes with mode numbers 67 to 72 may be added to the prediction mode candidate queue in sequence.
- Step A2 Add the intra motion compensation mode to the prediction mode candidate queue.
- Step A3 If the prediction mode candidate queue is not full, the intra prediction modes of the n luma blocks corresponding to the current chroma block are added to the prediction mode candidate queue in the first order.
- Step A4 If the prediction mode candidate queue is not full, the intra prediction mode of the chroma block adjacent to the current chroma block is added to the prediction mode candidate queue in the second order.
- Step A5 If the prediction mode candidate queue is not full, add the planar (plane) mode and the DC mode.
- Step A6 If the prediction mode candidate queue is not full, add the directional modes adjacent to the directional modes already in the queue.
- a directional mode is one kind of prediction mode; if the prediction mode candidate queue is not full, the directional modes whose mode numbers are adjacent to those of the directional modes already in the queue are added.
- Step A7 If the prediction mode candidate queue is not full, add the vertical mode, the horizontal mode, and intra prediction mode No. 2 (i.e., the intra prediction mode with mode number 2).
- in step A7, the number of free entries remaining in the prediction mode candidate queue may be fewer than three; therefore, the vertical mode, the horizontal mode, and intra prediction mode No. 2 are added in sequence until the queue is full, so that ultimately only one or more of them may actually be added.
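Steps A1-A7 above can be sketched as follows. The concrete mode numbers for planar, DC, vertical, and horizontal, the use of 73 for the intra motion compensation mode, and the omission of step A6 (adjacent directional modes) are simplifying assumptions for illustration:

```python
MAX_LEN = 11  # length threshold of the candidate queue stated above

def build_candidate_queue(luma_modes, neighbor_modes):
    """Construct a prediction mode candidate queue following steps A1-A7
    (step A6 omitted for brevity). Illustrative sketch; mode numbers for
    planar/DC/vertical/horizontal are assumptions."""
    queue = []
    def push(modes):
        # add each mode if not already present, until the queue is full
        for m in modes:
            if m not in queue and len(queue) < MAX_LEN:
                queue.append(m)
    push(range(67, 73))   # A1: six cross-component modes 67-72
    push([73])            # A2: intra motion compensation mode (number assumed)
    push(luma_modes)      # A3: intra modes of the co-located luma blocks
    push(neighbor_modes)  # A4: intra modes of neighbouring chroma blocks
    push([0, 1])          # A5: planar and DC (numbers assumed)
    push([50, 18, 2])     # A7: vertical, horizontal, mode 2 (numbers assumed)
    return queue
```

Note that later steps only run while the queue still has room, matching the "if the prediction mode candidate queue is not full" condition in steps A3-A7.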
- the process of adding the intra motion compensation mode to the prediction mode candidate queue in step A2 may include:
- Step 2021 Determine the n luma blocks corresponding to the current chroma block position in the image frame to be processed, n ≥ 1.
- the process of determining n luma blocks corresponding to the current chroma block position in the image frame to be processed may include:
- Step B1 Determine a luminance image region corresponding to a current chroma block position in the image frame to be processed.
- each chroma block may be associated with one or more luma blocks.
- multiple chrominance blocks may also correspond to one luminance block position.
- the position correspondence relationship between the chroma block and the luma block is related to the encoding format of the image frame to be processed.
- In addition, the distribution densities of the luminance component and the chrominance component may be the same or different. Therefore, in the embodiment of the present application, the luminance image region corresponding to the current chroma block position in the image frame to be processed needs to be determined first, and then the n luma blocks corresponding to the current chroma block position are further determined.
- Here, the size of the current chroma block is CW×CH, and K is the distribution density ratio of the luminance component to the chrominance component.
- Specifically, the ratio of the distribution density of the luminance component Y to that of the chrominance component U of the image frame to be processed includes a distribution density ratio K1 in the horizontal direction (this direction can be regarded as the width direction of the luminance and chrominance components) and a distribution density ratio K2 in the vertical direction (which can be regarded as the height direction of the luminance and chrominance components); that is, the distribution density ratios of the luminance component to the chrominance component in the width direction and the height direction are K1 and K2, respectively.
- If the current chroma block has a width of CW and a height of CH, and the coordinates of its upper-left pixel in the image frame to be processed are (Cx, Cy), then the luminance image region is the rectangular region whose upper-left pixel coordinates are (K1×Cx, K2×Cy) and whose width and height are K1×CW and K2×CH, respectively.
- For example, when the encoding format is 4:2:0, the distribution density ratio of the luminance component Y to the chrominance component U of the image frame to be processed is 2:1, and the distribution density ratio of the luminance component Y to the chrominance component V is likewise 2:1; that is, the width and height of the luminance component of the image frame to be processed are respectively twice the width and height of the chrominance component, and the distribution density ratios of the luminance component to the chrominance component in the width direction and the height direction are both 2:1 (K1 = K2 = 2).
- In this case, the luminance image region is the rectangular region whose upper-left pixel coordinates are (2×Cx, 2×Cy) and whose width and height are 2×CW and 2×CH, respectively.
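The mapping from a chroma block to its luminance image region can be sketched as below; the function name is hypothetical, and the defaults follow the 4:2:0 example (K1 = K2 = 2).

```python
def luma_region(cx, cy, cw, ch, k1=2, k2=2):
    """Return (x, y, width, height) of the luminance image region
    corresponding to a chroma block with upper-left pixel (cx, cy) and
    size cw x ch, given width/height density ratios k1 and k2."""
    return (k1 * cx, k2 * cy, k1 * cw, k2 * ch)
```

For a 4:4:4 frame (k1 = k2 = 1) the region coincides with the chroma block's own coordinates, which matches the statement above that the densities may also be the same.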
- FIG. 7 to FIG. 9 are schematic diagrams showing the correspondence relationship between a chroma block and a luminance image region in a scene with an encoding format of 4:2:0.
- As shown in FIG. 7, in the image frame to be processed, the luminance image region M1 corresponding to the position of the chroma block B1 contains 6 luminance blocks, M11 to M16. As shown in FIG. 8, the luminance image region M2 corresponding to the position of the chroma block B2 contains one luminance block, M17. As shown in FIG. 9, the luminance image region M3 corresponding to the position of the chroma block B3 contains half a luminance block; that is, the chroma block B3 corresponds to 1/2 of a luminance block, and in this case two chroma blocks correspond to the position of one luminance block M18.
- Step B2 Determine n luminance blocks among all target luminance blocks.
- the target luminance block is a luminance block that is partially or wholly in the luminance image region. That is, if part or all of a luminance block is located in the luminance image region in the image frame to be processed, the luminance block is determined as the target luminance block.
- In the first determining manner, all the target luma blocks are used as the n luma blocks; in the second determining manner, a certain screening is performed among all the target luma blocks to reduce the subsequent operation cost.
- In the second determining manner, the luminance blocks at specified positions are screened out as the n luminance blocks. A luminance block at a specified position refers to a luminance block covering a specified pixel point in the luminance image region, where covering the specified pixel point means that the specified pixel point is among the pixels included in the luminance block.
- The specified pixel points include the central pixel point CR of the luminance image region, the upper-left corner pixel LT of the luminance image region, the upper-right corner pixel TR of the luminance image region, the lower-left corner pixel BL of the luminance image region, and the lower-right corner pixel BR of the luminance image region.
- Accordingly, the screened target luminance blocks include: the luminance block covering the central pixel CR of the luminance image region, the luminance block covering the upper-left corner pixel LT, the luminance block covering the upper-right corner pixel TR, the luminance block covering the lower-left corner pixel BL, and the luminance block covering the lower-right corner pixel BR of the luminance image region.
- the origin of the image coordinate system of the image frame to be processed is its upper left corner
- the luminance image region corresponding to the current chroma block in the image frame to be processed is the rectangular region whose upper-left corner pixel LT has coordinates (2×Cx, 2×Cy) and whose width and height are 2×CW and 2×CH, respectively.
- The coordinates of the central pixel CR, the upper-right corner pixel TR, the lower-left corner pixel BL, and the lower-right corner pixel BR of the luminance image region corresponding to the current chroma block are (2×Cx+CW, 2×Cy+CH), (2×Cx+2×CW, 2×Cy), (2×Cx, 2×Cy+2×CH), and (2×Cx+2×CW, 2×Cy+2×CH), respectively.
- a luminance block covering a specified pixel point in all target luminance blocks is a luminance block covering LT, CR, TR, BL, and BR.
- Depending on the block division, the luminance blocks covering LT, CR, TR, BL, and BR may correspond to different situations.
- the luminance block covering the 5 pixel points may be 5 different luminance blocks, or may be 1 luminance block.
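The coordinate formulas for the five specified points can be collected in a small helper. This is a sketch for the 4:2:0 case only (K1 = K2 = 2), with a hypothetical function name; the formulas are taken directly from the text.

```python
def specified_points(cx, cy, cw, ch):
    """Return the five specified pixel points LT, CR, TR, BL, BR of the
    luminance image region for a 4:2:0 frame, given the current chroma
    block's upper-left pixel (cx, cy) and size cw x ch."""
    return {
        "LT": (2 * cx,          2 * cy),
        "CR": (2 * cx + cw,     2 * cy + ch),
        "TR": (2 * cx + 2 * cw, 2 * cy),
        "BL": (2 * cx,          2 * cy + 2 * ch),
        "BR": (2 * cx + 2 * cw, 2 * cy + 2 * ch),
    }
```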
- In the example of FIG. 7, all the target luminance blocks are the 6 luminance blocks M11 to M16.
- The luminance block covering the central pixel CR of the luminance image region is the luminance block M15;
- the luminance block covering the upper-left corner pixel LT of the luminance image region is the luminance block M11;
- the luminance block covering the upper-right corner pixel TR of the luminance image region is the luminance block M13;
- the luminance block covering the lower-left corner pixel BL of the luminance image region is the luminance block M14;
- the luminance block covering the lower-right corner pixel BR of the luminance image region is the luminance block M16.
- Therefore, in the first determining manner, the determined n luma blocks are the 6 luma blocks M11 to M16; in the second determining manner, the determined n luma blocks are the 5 luma blocks M11, M13, M14, M15, and M16.
- In the example of FIG. 8, all the target luminance blocks are the single luminance block M17.
- The luminance blocks covering the central pixel CR, the upper-left corner pixel LT, the upper-right corner pixel TR, the lower-left corner pixel BL, and the lower-right corner pixel BR of the luminance image region are all the same luminance block M17. Then, whether the first determining manner or the second determining manner is adopted, the determined n luminance blocks are the luminance block M17.
- In the example of FIG. 9, the luminance block M18 is a target luminance block, and therefore all target luminance blocks are the luminance block M18. Then, regardless of whether the first determining manner or the second determining manner is adopted, the determined n luminance blocks are the luminance block M18.
- the specified pixel points may also be set to other positions according to a specific scene.
- For example, the specified pixel point may also be at least one of the central pixel of the upper edge pixel row of the luminance image region, the central pixel of the lower edge pixel row, the central pixel of the left edge pixel column, the central pixel of the right edge pixel column, and so on; the above is merely illustrative of the embodiments of the present application.
- Step 2022 Detect whether there is a luminance block that can be referenced by the motion vector among the n luminance blocks.
- the process of detecting whether there is a luminance block that can be referenced by a motion vector in the n luminance blocks may include:
- the motion vectors of the luminance blocks in the n luminance blocks are sequentially detected according to the target order, until the detection stop condition is reached.
- the detection stop condition is that the total number of motion vectors that can be referenced is equal to a preset number threshold k, or that n luminance blocks are traversed.
- k can be 1 or 5.
- Perform the detection process which includes:
- Step C1 Detect whether the motion vector of the i-th luma block among the n luma blocks can be referenced.
- Step C2 When the motion vector of the ith luma block can be referred to, it is detected whether the detection stop condition is reached.
- step C4 when the detection stop condition is reached, the detection process is stopped.
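The detection loop of steps C1 to C4 can be sketched as follows. `can_reference` is a hypothetical predicate standing in for the per-block checks described below (steps C11 to C18), and the stop condition mirrors reaching the threshold k or traversing all n blocks.

```python
def detect_referencable(luma_blocks, can_reference, k):
    """Detect, in target order, which luma blocks' motion vectors can
    be referenced, stopping when k referencable vectors have been found
    or all n blocks have been traversed (the detection stop condition)."""
    found = []
    for blk in luma_blocks:          # step C1: test the i-th luma block
        if can_reference(blk):
            found.append(blk)
            if len(found) == k:      # steps C2/C4: stop condition reached
                break
    return found
```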
- In step C1, the process of detecting whether the motion vector of the i-th luma block among the n luma blocks can be referenced may include:
- Step C11 Detect a prediction type of an i-th luma block among the n luma blocks.
- Step C12 When the prediction type of the i-th luma block is an intra prediction type, determine that the motion vector of the i-th luma block cannot be referenced.
- Since the prediction information corresponding to the intra prediction type does not include a motion vector, while the intra prediction method provided by the embodiment of the present application requires prediction information that includes a motion vector, when the prediction type of the i-th luma block is the intra prediction type, its prediction information has no motion vector and cannot be referenced by the current chroma block.
- Step C13 When the prediction type of the i-th luma block is an inter prediction type, generate an alternative motion vector based on the motion vector of the i-th luma block.
- the process of generating an alternative motion vector based on the motion vector of the ith luma block may include:
- Step C131 Determine a vector scaling ratio of the current chroma block and the i-th luma block according to an encoding format of the image frame to be processed.
- The vector scaling ratio of the current chroma block and the i-th luma block is equal to the ratio of the distribution density of the chroma block to that of the luma block in the image frame to be processed, and this distribution density ratio is determined by the encoding format of the image frame to be processed. For example, when the encoding format is 4:2:0, the ratio of the distribution density of the chroma block to the luma block is 1:2 in the horizontal direction (also called the x direction) and 1:2 in the vertical direction (also called the y direction); therefore, the vector scaling ratio of the current chroma block to the i-th luma block is 1:2 in the horizontal direction and 1:2 in the vertical direction.
- Step C132 Scale the motion vector of the i-th luma block based on the vector scaling ratio to obtain an alternative motion vector of the i-th luma block.
- the motion vector of the i-th luma block is scaled proportionally to obtain an alternative motion vector of the i-th luma block. For example, if the motion vector of the i-th luma block is (-11, -3) and the encoding format is 4:2:0, the candidate motion vector of the current chroma block is (-5.5, -1.5).
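The scaling of step C132 can be sketched as below; the default ratios follow the 4:2:0 example in the text, and the function name is a hypothetical one chosen for illustration.

```python
def scale_motion_vector(mv_x, mv_y, ratio_x=2, ratio_y=2):
    """Scale a luma motion vector down to a chroma candidate motion
    vector; for 4:2:0 the luma:chroma density ratio is 2:1 in both
    directions, so each component is divided by 2."""
    return (mv_x / ratio_x, mv_y / ratio_y)
```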
- Step C14 Detect whether the candidate motion vector corresponding to the i-th luma block is the same as a candidate motion vector corresponding to any luma block whose motion vector has already been determined to be referencable.
- Step C14 is actually a process of finding repeated motion vectors, which is referred to as a duplicate-check process.
- When the candidate motion vector corresponding to the i-th luma block differs from all candidate motion vectors corresponding to luma blocks whose motion vectors have already been determined to be referencable, step C15 is performed; when it is the same as one of them, step C18 is performed.
- Step C15 When the candidate motion vector corresponding to the i-th luma block differs from the candidate motion vectors corresponding to the luma blocks whose motion vectors have already been determined to be referencable, determine, based on the candidate motion vector corresponding to the i-th luma block, a candidate prediction block for predicting the current chroma block.
- The candidate prediction block is the image block in the chrominance image of the image frame to be processed whose upper-left pixel coordinates are (Cx + MVx, Cy + MVy) and whose size is the same as that of the current chroma block.
- Step C16 When the candidate prediction block is valid, it is determined that the motion vector of the i-th luma block can be referenced.
- Step C17 When the candidate prediction block is invalid, it is determined that the motion vector of the i-th luma block cannot be referenced.
- Step C18 When the candidate motion vector corresponding to the i-th luma block is the same as a candidate motion vector corresponding to a luma block whose motion vector has already been determined to be referencable, it is determined that the motion vector of the i-th luma block cannot be referenced.
- After step C15, it may be determined whether the candidate prediction block is valid, so as to perform step C16 or step C17.
- the determining process may include the following two implementation manners:
- In the first implementation manner, the chroma-coded region in the chroma image of the image frame to be processed includes the already encoded CTUs, and the quadtree nodes and binary-tree leaf nodes that have already been encoded in the CTU to which the current chroma block belongs, such as the shaded area shown in FIG.
- Assume the candidate motion vector (i.e., the motion vector obtained by scaling the motion vector of the i-th luma block) is (MVx, MVy).
- If (MVx, MVy) is an integer-pixel motion vector: when the candidate prediction block lies entirely within the chroma-coded region, the candidate prediction block is considered valid; when the candidate prediction block does not lie entirely within the chroma-coded region, the candidate prediction block is considered invalid. If (MVx, MVy) is a sub-pixel motion vector, the candidate prediction block needs to be obtained by interpolation; in this case, the reference chroma block corresponding to the candidate prediction block may be obtained first, where the chroma pixel values of the candidate prediction block are obtained by interpolating the chroma pixel values of the reference chroma block. It is then detected whether the reference chroma block is entirely located within the chroma-coded region of the image frame to be processed: when the reference chroma block is entirely located within the chroma-coded region of the image frame to be processed, it is determined that the candidate prediction block is entirely located within the chroma-coded region, and the candidate prediction block is considered valid; when the reference chroma block is not entirely located within the chroma-coded region, the candidate prediction block is considered invalid.
- Optionally, it may be detected whether the coordinates of the upper-left corner pixel and the lower-right corner pixel of the reference chroma block are within the coordinate range of the chroma-coded region. When the coordinates of both the upper-left corner pixel and the lower-right corner pixel of the reference chroma block are within the coordinate range of the chroma-coded region, it is determined that the reference chroma block is entirely located within the chroma-coded region of the image frame to be processed; when at least one of the coordinates of the upper-left corner pixel and the lower-right corner pixel of the reference chroma block is not within the coordinate range of the chroma-coded region, it is determined that the reference chroma block is not entirely located within the chroma-coded region of the image frame to be processed.
- Optionally, when the candidate motion vector determined based on the motion vector of the i-th luma block is a sub-pixel motion vector, interpolation processing by an interpolation filter is required to obtain the candidate prediction block.
- The interpolation filter used in the interpolation process is an N-tap filter, where N is a positive integer; interpolating the chroma pixel value at a sub-pixel position of the candidate prediction block requires the chroma pixel values of N1 pixels to the left of the position, N2 pixels to the right, N3 pixels above, and N4 pixels below.
- N1 + N2 = N, and N3 + N4 = N.
- (Cx, Cy) represents the coordinates of the upper-left corner pixel of the current chroma block, and CW and CH respectively represent the width and height of the current chroma block.
- MV1x represents the largest integer smaller than MVx
- MV1y represents the largest integer smaller than MVy.
- (MVx, MVy) is (-5.5, -1.5)
- MV1x is -6
- MV1y is -2.
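Putting the floor operation and the filter-tap margins together, the bounding corners of the reference samples needed for interpolation can be sketched as below. The lower-right formula follows the text; the upper-left corner (subtracting N1 and N3) is a symmetric assumption that the text does not spell out explicitly.

```python
import math

def reference_region(cx, cy, cw, ch, mv_x, mv_y, n1, n2, n3, n4):
    """For a sub-pixel chroma motion vector (mv_x, mv_y), return the
    upper-left and lower-right corners of the reference samples needed
    by an N-tap filter using n1/n2/n3/n4 pixels to the
    left/right/top/bottom of each interpolated position."""
    mv1x, mv1y = math.floor(mv_x), math.floor(mv_y)  # MV1x, MV1y
    top_left = (cx + mv1x - n1, cy + mv1y - n3)      # assumed symmetric bound
    bottom_right = (cx + mv1x + (cw - 1) + n2,       # formula from the text
                    cy + mv1y + (ch - 1) + n4)
    return top_left, bottom_right
```

With (MVx, MVy) = (-5.5, -1.5), `math.floor` yields MV1x = -6 and MV1y = -2, matching the worked example above.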
- In the second implementation manner, it is detected whether the candidate prediction block is entirely located within the chroma-coded region of the image frame to be processed, and whether the candidate prediction block is located in a specified orientation of the current chroma block (the order of these two detections is not limited). When the candidate prediction block is entirely located within the chroma-coded region of the image frame to be processed and is located in the specified orientation of the current chroma block, the candidate prediction block is determined to be valid; when the candidate prediction block is not entirely located within the chroma-coded region of the image frame to be processed, or is not located in the specified orientation of the current chroma block, the candidate prediction block is determined to be invalid. As an example, the specified orientation of the current chroma block is any one of the left side, the upper side, and the upper-left side of the current chroma block.
- Optionally, detecting whether the candidate prediction block is located in the specified orientation of the current chroma block may include: detecting whether the coordinates of the lower-right corner pixel of the candidate prediction block are located in the specified orientation of the current chroma block; when they are, it is determined that the candidate prediction block is located in the specified orientation of the current chroma block; when they are not, it is determined that the candidate prediction block is not located in the specified orientation of the current chroma block.
- Detecting whether the candidate prediction block is located in the specified orientation of the current chroma block may also be performed in other manners, for example, by detecting the relative position of a first pixel point of the candidate prediction block and a second pixel point of the current chroma block, where the first pixel and the second pixel may each be any one of the upper-left pixel, the upper-right pixel, the middle pixel, the lower-left pixel, and the lower-right pixel. This embodiment of the present application does not limit this.
- The method for detecting whether the candidate prediction block is entirely located within the chroma-coded region of the image frame to be processed may refer to the foregoing first implementation manner, which is not repeated in this embodiment of the present application.
- Optionally, assume the candidate motion vector (i.e., the motion vector obtained by scaling the motion vector of the i-th luma block) is (MVx, MVy). If (MVx, MVy) is an integer-pixel motion vector, then when (Cx+MVx+(CW-1), Cy+MVy+(CH-1)) is within the quadtree leaf node block to which the current chroma block belongs, it also needs to be satisfied that (Cx+MVx+(CW-1), Cy+MVy+(CH-1)) is at any of the left side, the upper side, and the upper-left side of the current chroma block in order to determine that the candidate prediction block is valid. When the candidate prediction block is not entirely within the chroma-coded region, or is at any position other than the left side, the upper side, and the upper-left side of the current chroma block, the candidate prediction block is considered invalid. Likewise, if (MVx, MVy) is a sub-pixel motion vector, the candidate prediction block needs to be obtained by interpolation.
- In that case, the reference chroma block corresponding to the candidate prediction block may be obtained first, where the chroma pixel values of the candidate prediction block are obtained by interpolation based on the pixel values of the reference chroma block. It is then detected whether the reference chroma block is entirely located within the chroma-coded region of the image frame to be processed and located in the specified orientation of the current chroma block (because the reference chroma block and the candidate prediction block are located in the same orientation relative to the current chroma block, the orientation of the candidate prediction block can be determined by detecting the orientation of the reference chroma block). When the reference chroma block is entirely located within the chroma-coded region of the image frame to be processed and is located in the specified orientation of the current chroma block, the candidate prediction block is considered valid; when the reference chroma block is not entirely located within the chroma-coded region of the image frame to be processed, or is not located in the specified orientation of the current chroma block, the candidate prediction block is considered invalid.
- Optionally, when the candidate motion vector determined based on the motion vector of the i-th luma block is a sub-pixel motion vector, interpolation processing by an interpolation filter is required to obtain the candidate prediction block.
- The interpolation filter used in the interpolation process is an N-tap filter, where N is a positive integer; interpolating the chroma pixel value at a sub-pixel position of the candidate prediction block requires the chroma pixel values of N1 pixels to the left of the position, N2 pixels to the right, N3 pixels above, and N4 pixels below.
- N1 + N2 = N, and N3 + N4 = N.
- When (Cx+MV1x+(CW-1)+N2, Cy+MV1y+(CH-1)+N4) is within the quadtree leaf node block to which the current chroma block belongs, it also needs to be satisfied that (Cx+MV1x+(CW-1)+N2, Cy+MV1y+(CH-1)+N4) is at any of the left side, the upper side, and the upper-left side of the current chroma block in order to determine that the candidate prediction block is valid.
- FIG. 11 and FIG. 12 are schematic diagrams of two coding division manners. Referring to FIG. 11 and FIG. 12, when the quadtree leaf node block to which the current chroma block belongs is coded according to the division manner shown in FIG. 11, the chroma block in the lower-left corner has not yet been encoded; and when the quadtree leaf node block to which the current chroma block belongs is coded according to the division manner shown in FIG. 12, the chroma block in the upper-right corner has not yet been encoded.
- Therefore, the second implementation manner can avoid determining whether the chroma block in the upper-right corner and the chroma block in the lower-left corner have been encoded, thereby effectively simplifying the process of determining whether the candidate prediction block is valid and reducing the operation cost.
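Under the assumptions above (integer-pixel MV, specified orientation = left/upper/upper-left), the validity check of the second implementation can be sketched as follows. The coded-region model here is deliberately simplified: `coded_rows` stands in for the real CTU/quadtree bookkeeping, and the strict "left of or above the current block's upper-left corner" comparison is an assumed boundary convention, since the text does not pin down the exact inequality.

```python
def candidate_block_valid(cx, cy, cw, ch, mv_x, mv_y, frame_w, coded_rows):
    """Sketch of the second validity check for an integer-pixel MV.

    The lower-right corner (brx, bry) of the candidate prediction block
    must lie in the chroma-coded region, and must be to the left of,
    above, or to the upper-left of the current chroma block.
    `coded_rows` is a simplified stand-in for the coded-region test:
    rows with y < coded_rows are assumed fully encoded."""
    brx = cx + mv_x + (cw - 1)
    bry = cy + mv_y + (ch - 1)
    in_frame = 0 <= brx < frame_w and bry >= 0
    in_coded = in_frame and bry < coded_rows
    in_orientation = brx < cx or bry < cy   # left / upper / upper-left
    return in_coded and in_orientation
```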
- Step 2023 When there are luma blocks whose motion vectors can be referenced among the n luma blocks, the intra motion compensation mode is added to the prediction mode candidate queue.
- In one manner, only one intra motion compensation mode may be added to the prediction mode candidate queue, to indicate that the motion vector of the current chroma block may be generated based on the motion vector of a luma block.
- the intra-frame motion compensation mode can be added once to achieve the corresponding prediction mode indication effect, and the process is relatively simple.
- Alternatively, the intra motion compensation mode may also be added based on the number of luma blocks whose motion vectors were detected as referencable under the limitation of the detection stop condition, in which case at least one intra motion compensation mode is added.
- the manner of adding the intra motion compensation mode in the prediction mode candidate queue may include at least the following two types:
- In the first adding manner, an intra motion compensation mode is added to the prediction mode candidate queue each time a luma block whose motion vector can be referenced is detected in the target order.
- That is, each time a luma block whose motion vector can be referenced is detected, an intra motion compensation mode is added to the prediction mode candidate queue, and the mode numbers of the m intra motion compensation modes added to the prediction mode candidate queue may be the same or different.
- For example, the mode numbers of the m intra motion compensation modes may be assigned sequentially starting from the first intra motion compensation mode, and the mode number of an intra motion compensation mode can be represented by a number greater than or equal to 73. For example, the mode number of the first intra motion compensation mode is 73, the mode number of the second intra motion compensation mode is 74, and the mode number of the third intra motion compensation mode is 75.
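The sequential numbering described above reduces to a one-line helper; the base number 73 follows the example in the text, and the function name is chosen for illustration.

```python
def intra_mc_mode_number(i):
    """Mode number of the i-th (1-based) intra motion compensation
    mode, numbered sequentially from 73 as in the text's example."""
    return 72 + i
```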
- In the second adding manner, after the detection stop condition is reached, if there are m luma blocks whose motion vectors can be referenced, m intra motion compensation modes are added to the prediction mode candidate queue according to the order in which the m luma blocks were detected in the target order, where m ≥ 1.
- The mode numbers of the m intra motion compensation modes added to the prediction mode candidate queue may be the same, or may be assigned sequentially in the order of addition, starting from the first intra motion compensation mode.
- The m intra motion compensation modes obtained by the above two adding manners are in one-to-one correspondence with the m luma blocks whose motion vectors can be referenced.
- the target order may be set according to a specific situation.
- The n luma blocks include at least: the luma block covering the central pixel of the luminance image region, the luma block covering the upper-left corner pixel of the luminance image region, the luma block covering the upper-right corner pixel of the luminance image region, the luma block covering the lower-left corner pixel of the luminance image region, and the luma block covering the lower-right corner pixel of the luminance image region.
- A target luminance block is a luminance block partially or wholly located in the luminance image region, where the luminance image region is the luminance region corresponding to the current chroma block position in the image frame to be processed.
- The above target order may be: the luma block covering the central pixel of the luminance image region, the luma block covering the upper-left corner pixel of the luminance image region, the luma block covering the upper-right corner pixel of the luminance image region, the luma block covering the lower-left corner pixel of the luminance image region, and the luma block covering the lower-right corner pixel of the luminance image region; that is, the order of covered pixel points is CR > LT > TR > BL > BR.
- the above target order may also be a randomly determined order.
- the method for determining the n luma blocks may refer to the foregoing step 2021, which is not repeatedly described in this embodiment of the present application.
- For example, the first order may be: the luma block covering the central pixel of the luminance image region, the luma block covering the upper-left corner pixel of the luminance image region, the luma block covering the upper-right corner pixel of the luminance image region, the luma block covering the lower-left corner pixel of the luminance image region, and the luma block covering the lower-right corner pixel of the luminance image region, that is, the order of covered pixel points CR > LT > TR > BL > BR; or a randomly determined order.
- In addition, a process of checking whether the prediction mode to be added is repeated (also referred to as a duplicate-check process) is performed: for the intra prediction mode of each of the n luma blocks corresponding to the current chroma block, it is detected whether that intra prediction mode is the same as a prediction mode already added to the prediction mode candidate queue; when it is the same, the next intra prediction mode is detected; when it is different, the intra prediction mode is added to the prediction mode candidate queue.
- For example, the second order may be: the left adjacent chroma block of the current chroma block, the upper adjacent chroma block of the current chroma block, and the lower-left adjacent chroma block of the current chroma block.
- Likewise, the duplicate-check process needs to be performed: for the intra prediction mode of each adjacent chroma block of the current chroma block, it is detected whether that intra prediction mode is the same as a prediction mode already added to the prediction mode candidate queue; when it is the same, the intra prediction mode of the next adjacent chroma block is detected; when it is different, the intra prediction mode is added to the prediction mode candidate queue.
- The target order and the first order may be the same or different; the embodiment of the present application does not limit this.
- Step 203 Determine a target intra prediction mode in the constructed prediction mode candidate queue.
- the target intra prediction mode is a mode for predicting a prediction block of a current chroma block.
- In step 203, the process of determining the target intra prediction mode in the constructed prediction mode candidate queue includes:
- the intra prediction mode that meets the second target condition in the constructed prediction mode candidate queue is determined as the target intra prediction mode. This process can be implemented by traversing all prediction modes in the prediction mode queue.
- The second target condition is that the sum of the absolute values of the residual values of the residual block corresponding to the prediction block determined based on the intra prediction mode is the smallest, or that the sum of the absolute values of the transformed residual values of the residual block corresponding to the prediction block determined based on the intra prediction mode is the smallest, or that the coding cost of encoding with the intra prediction mode is the smallest.
- all the prediction modes in the prediction mode queue may be traversed first, and the current luminance block is calculated based on a prediction residual corresponding to the prediction block determined by each mode, and then selecting a residual block corresponding to the corresponding prediction block based on the prediction residual corresponding to the prediction block determined by each mode according to the current luma block (ie, the luma block) The intra prediction mode with the smallest sum of the absolute values of the residual values of the residual block) as the target intra prediction mode;
- When the second target condition is that the sum of the absolute values of the transformed residual values of the residual block corresponding to the prediction block determined by the intra prediction mode is the smallest, all prediction modes in the prediction mode candidate queue may be traversed first, the prediction residual of the current block corresponding to the prediction block determined by each mode is calculated, a residual transform is performed on each prediction residual to obtain the transformed residual values, and the intra prediction mode whose transformed residual values have the smallest sum of absolute values is then selected as the target intra prediction mode.
- The residual transform multiplies the prediction residual corresponding to the prediction block determined by each mode by a transform matrix to obtain the transformed residual values; the residual transform removes correlation within the residual, so that the energy of each resulting transformed residual is more concentrated;
- When the second target condition is that the coding cost of encoding with the intra prediction mode is the smallest, all prediction modes in the prediction mode candidate queue may be traversed first, the current block is encoded with each mode, the coding cost of each encoding is calculated, and the intra prediction mode with the smallest coding cost is selected as the target intra prediction mode. The coding cost can be calculated with a preset cost function.
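The mode-selection loop described above can be sketched as follows for the simplest variant, the smallest sum of absolute residual values (SAD). This is an illustrative sketch, not the patent's implementation; `predict` is a hypothetical callable standing in for the per-mode prediction process.

```python
# Illustrative sketch: select the target intra prediction mode from the
# candidate queue by minimizing the sum of absolute residual values.

def select_target_mode(candidate_queue, current_block, predict):
    """Return the mode whose prediction yields the smallest SAD residual."""
    best_mode, best_sad = None, float("inf")
    for mode in candidate_queue:                 # traverse every mode
        pred = predict(mode, current_block)      # prediction block for the mode
        # residual block = original minus prediction, element-wise
        sad = sum(abs(o - p) for row_o, row_p in zip(current_block, pred)
                  for o, p in zip(row_o, row_p))
        if sad < best_sad:
            best_mode, best_sad = mode, sad
    return best_mode

# toy usage: two "modes" that each predict a constant block of value 10 or 12
block = [[11, 11], [12, 11]]
predict = lambda m, b: [[m] * len(b[0]) for _ in b]
print(select_target_mode([10, 12], block, predict))  # 12 (SAD 3 vs 5)
```

The same loop structure applies to the other two variants of the second target condition; only the per-mode score (transformed-residual SAD, or coding cost from a cost function) changes.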
- Step 204 Determine a reference luminance block when the target intra prediction mode is the intra motion compensation mode.
- The reference luma block is one of the n luma blocks corresponding to the position of the current chroma block.
- When the target intra prediction mode is the intra motion compensation mode, the motion vector of the current chroma block needs to be obtained based on the motion vector of the reference luma block, so the reference luma block must first be determined. This determination can be implemented in various ways; the embodiment of the present application provides the following three implementation manners:
- In the first implementation manner, the reference luma block is determined based on the ordering of the target intra prediction mode in the prediction mode candidate queue. The process of determining the reference luma block may include:
- Step D1 Determine that the target intra prediction mode is the rth of the m intra motion compensation modes in the prediction mode candidate queue, where 1 ≤ r ≤ m.
- Since the determination of the reference luma block is related to the ordering of the target intra prediction mode in the prediction mode candidate queue, the position of the target intra prediction mode among the intra motion compensation modes in the queue needs to be determined, that is, which intra motion compensation mode it is. Step D1 determines that it is the rth intra motion compensation mode.
- Step D2 Sequentially detect, in the target order, whether the motion vector of each of the n luma blocks can be referenced, until the detection stop condition is reached. The detection stop condition is that the total number of referenceable motion vectors equals the preset number threshold x (x ≤ m), or that the total number of referenceable motion vectors equals r, or that all n luma blocks have been traversed.
- For example, m may be 1 or 5, and x may be equal to m.
- Step D3 After the detection stop condition is reached, determine the luma block referenced by the rth referenceable motion vector as the reference luma block.
- In this implementation manner, the ordering of the target intra prediction mode in the prediction mode candidate queue is associated with the detection order of the luma blocks whose motion vectors can be referenced. Corresponding to step 2023 above, when only one intra motion compensation mode is added to the prediction mode candidate queue, the luma block whose motion vector is the first found to be referenceable is determined as the reference luma block; otherwise, under the restriction of the detection stop condition, the reference luma block is determined from the referenceable motion vectors that have been detected.
- The luma blocks referenced by the motion vectors determined in steps D1 to D3 correspond one-to-one with the added intra motion compensation modes, because the detection order of the luma blocks whose motion vectors can be referenced is consistent with the order in which the intra motion compensation modes were added; the rth intra motion compensation mode, as the target intra prediction mode, therefore corresponds to the luma block referenced by the rth referenceable motion vector, which is the reference luma block.
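Steps D1 to D3 can be sketched as a single scan over the luma blocks in the target order. This is an illustrative sketch under assumed data shapes: each block is a dict whose "mv" entry is a motion vector, or None when the motion vector cannot be referenced.

```python
# Illustrative sketch of steps D1-D3: find the luma block supplying the
# r-th referenceable motion vector, scanning in the target order.

def find_reference_luma_block(luma_blocks, r, x):
    """Stop when x referenceable motion vectors are found, when r are
    found, or when all blocks are traversed; return the r-th one's block."""
    referenceable = []
    for block in luma_blocks:
        if block["mv"] is not None:          # motion vector can be referenced
            referenceable.append(block)
        if len(referenceable) in (x, r):     # detection stop condition
            break
    # the r-th referenceable motion vector (1-based) names the reference block
    return referenceable[r - 1] if len(referenceable) >= r else None

blocks = [{"id": "M11", "mv": None}, {"id": "M14", "mv": (-4, 2)},
          {"id": "M15", "mv": (6, 0)}]
print(find_reference_luma_block(blocks, r=2, x=5)["id"])  # M15
```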
- In the second implementation manner, the reference luma block is determined by filtering the reference prediction blocks corresponding to the n luma blocks. The process of determining the reference luma block may include:
- Step E1 Sequentially detect, in the target order, whether the motion vector of each of the n luma blocks can be referenced, until the detection stop condition is reached; the detection stop condition is that the total number of referenceable motion vectors equals the preset number threshold x (x ≤ m), or that all n luma blocks have been traversed.
- Step E2 After the detection stop condition is reached, a reference prediction block of the current chroma block is generated based on each referenceable motion vector.
- Step E3 Determine, among the generated reference prediction blocks, the reference prediction block that meets the first target condition, where the first target condition is that the sum of the absolute values of the residual values of the residual block corresponding to the reference prediction block is the smallest, or that the sum of the absolute values of the transformed residual values of that residual block is the smallest, or that the coding cost corresponding to the reference prediction block is the smallest.
- Step E4 Determine a luminance block corresponding to the reference prediction block that meets the first target condition as a reference luminance block.
- In this implementation manner, the reference luma block is not tied to the ordering of the intra motion compensation modes. Referring to step 2023 above, regardless of the position of the target intra prediction mode in the prediction mode candidate queue, the reference prediction block of the reference luma block only needs to meet the first target condition; compared with the first implementation manner, the accuracy is therefore higher.
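Steps E1 to E4 amount to generating one reference prediction block per referenceable motion vector and keeping the best one. The sketch below assumes the first variant of the first target condition (smallest sum of absolute residual values) and a hypothetical `gen_prediction` callable; neither is quoted from the patent text.

```python
# Illustrative sketch of steps E2-E4: one reference prediction block is
# generated per referenceable motion vector; the one with the smallest SAD
# against the current chroma block identifies the reference luma block.

def pick_reference_by_prediction(mvs, chroma_block, gen_prediction):
    best_mv, best_sad = None, float("inf")
    for mv in mvs:                                    # one block per vector
        pred = gen_prediction(mv)
        sad = sum(abs(o - p) for ro, rp in zip(chroma_block, pred)
                  for o, p in zip(ro, rp))
        if sad < best_sad:                            # first target condition
            best_mv, best_sad = mv, sad
    # the luma block that supplied best_mv is the reference luma block
    return best_mv

preds = {(1, 0): [[3, 3]], (0, 1): [[5, 6]]}          # toy prediction table
chroma = [[5, 6]]
print(pick_reference_by_prediction([(1, 0), (0, 1)], chroma, preds.get))
```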
- In the third implementation manner, the reference luma block is determined based on a pre-established correspondence table between identifiers of intra motion compensation modes and identifiers of luma blocks.
- The prediction mode candidate queue includes m intra motion compensation modes, m ≥ 1.
- When the prediction mode candidate queue is constructed, a correspondence table for the intra motion compensation modes may also be established.
- The correspondence table records the identifier of each intra motion compensation mode added to the prediction mode candidate queue and the identifier of the luma block whose motion vector the mode references.
- Each luma block in the table is a luma block whose motion vector can be referenced. The identifier of each intra motion compensation mode in the correspondence table uniquely identifies an intra motion compensation mode in the prediction mode candidate queue, and the identifier of each luma block likewise uniquely identifies a luma block.
- The identifier of a luma block may be the number assigned to the luma block when the luma image corresponding to the image frame to be processed is divided, or another type of identifier, as in step 209 below; such identifiers are not described in detail in the embodiments of the present application.
- The mode numbers of the m intra motion compensation modes added to the prediction mode candidate queue may be the same or may be different.
- When the mode numbers are the same, the identifier of an intra motion compensation mode may be composed of the mode number and its index in the prediction mode candidate queue, so that the intra motion compensation mode is uniquely identified in the prediction mode candidate queue; when the mode numbers of the m intra motion compensation modes added to the prediction mode candidate queue are different, the identifier of an intra motion compensation mode may simply be its mode number.
- the process of determining the reference luma block may include:
- Step F1 Query the correspondence table according to the identifier of the target intra prediction mode, and obtain the identifier of the reference luma block.
- Step F2 Determine the reference luma block based on the identifier of the reference luma block.
- In this implementation manner, the identifier of the reference luma block can be directly determined by querying the correspondence table. Compared with the first and second implementation manners, the process is simpler and the computational cost is lower.
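The third implementation manner reduces to a table lookup. The sketch below uses assumed identifier formats (mode number plus queue index, as suggested above) purely for illustration.

```python
# Illustrative sketch of steps F1-F2: the correspondence table maps the
# identifier of each intra motion compensation mode added to the queue to
# the identifier of the luma block whose motion vector it references.
# The key/value formats here are assumptions, not from the patent text.

correspondence = {
    "imc_75_idx0": "M14",   # mode identifier -> luma block identifier
    "imc_75_idx3": "M15",
}

def reference_block_id(target_mode_id):
    # step F1: query the table with the target mode's identifier
    return correspondence[target_mode_id]

print(reference_block_id("imc_75_idx0"))  # M14
```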
- The process of sequentially detecting, in the target order, whether the motion vector of each of the n luma blocks can be referenced, until the detection stop condition is reached, may include the following detection process:
- Step G1 Detect whether the motion vector of the i-th luma block of the n luma blocks can be referenced.
- Step G2 When the motion vector of the i-th luma block can be referenced, detect whether the detection stop condition is reached.
- Step G4 When the detection stop condition is reached, stop executing the detection process.
- For steps G1 to G4, refer to steps C1 to C4 in step 2022 above; details are not described herein again.
- The process of detecting whether the motion vector of the i-th luma block of the n luma blocks can be referenced may include: detecting the prediction type of the i-th luma block; when the prediction type of the i-th luma block is the intra prediction type, determining that the motion vector of the i-th luma block cannot be referenced; when the prediction type of the i-th luma block is the inter prediction type, generating a candidate prediction block based on the motion vector of the i-th luma block and determining whether the candidate prediction block is valid.
- the determining process may include the following two implementation manners:
- In the first manner, when the candidate prediction block is entirely located within the chroma-coded region of the image frame to be processed, the candidate prediction block is determined to be valid; when the candidate prediction block is not entirely located within the chroma-coded region of the image frame to be processed, the candidate prediction block is determined to be invalid.
- In the second manner, when the candidate prediction block is entirely located within the chroma-coded region of the image frame to be processed and is located in a specified orientation relative to the current chroma block, the candidate prediction block is determined to be valid; when the candidate prediction block is not entirely located within the chroma-coded region of the image frame to be processed, or is not in the specified orientation relative to the current chroma block, the candidate prediction block is determined to be invalid. As an example, the specified orientation of the current chroma block is any orientation to the left of, above, or to the upper left of the current chroma block.
- The process of detecting whether the detection stop condition is reached may include: detecting whether the total number of referenceable motion vectors equals the preset number threshold x (x ≤ m), and detecting whether i equals n; when the total number of referenceable motion vectors is not equal to x and i is not equal to n, determining that the detection stop condition is not reached; when the total number of referenceable motion vectors equals x, or i equals n, determining that the detection stop condition is reached.
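The validity check (first manner) and the stop-condition check can be sketched as follows, assuming rectangles given as (x, y, w, h) and a single rectangular chroma-coded region; both assumptions are illustrative, not from the patent text.

```python
# Illustrative sketch: candidate-prediction-block validity (first manner)
# and the detection stop condition.

def candidate_is_valid(candidate, coded_region):
    cx, cy, cw, ch = candidate
    rx, ry, rw, rh = coded_region
    # valid only when the candidate lies entirely inside the coded region
    return (cx >= rx and cy >= ry and
            cx + cw <= rx + rw and cy + ch <= ry + rh)

def stop_condition_reached(num_referenceable, i, x, n):
    # reached when the referenceable count equals the preset threshold x,
    # or all n luma blocks have been traversed (i == n)
    return num_referenceable == x or i == n

print(candidate_is_valid((0, 0, 8, 8), (0, 0, 64, 32)))   # True
print(candidate_is_valid((60, 0, 8, 8), (0, 0, 64, 32)))  # False
```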
- Step 205 Determine a motion vector of a current chroma block in the image frame to be processed based on the motion vector of the reference luma block.
- In step 205, the process of determining the motion vector of the current chroma block in the image frame to be processed based on the motion vector of the reference luma block may include:
- Step H1 Determine a vector scaling ratio of the current chroma block and the reference luma block according to an encoding format of the image frame to be processed.
- the vector scaling ratio of the current chroma block and the reference luma block is equal to the ratio of the distribution density of the chroma block and the luma block in the image frame to be processed, and the distribution density ratio is determined by the encoding format of the image frame to be processed.
- When the encoding format is 4:4:4, the ratio of the distribution density of chroma blocks to luma blocks is 1:1 in the horizontal direction (also called the x direction) and 1:1 in the vertical direction (also called the y direction); accordingly, the vector scaling ratio of the current chroma block to the reference luma block is 1:1 in the horizontal direction and 1:1 in the vertical direction.
- This step can refer to the above step C131.
- Step H2 Based on the vector scaling ratio, the motion vector of the reference luma block is scaled to obtain a motion vector of the current chroma block.
- The motion vector of the reference luma block is scaled according to the vector scaling ratio to obtain the motion vector of the current chroma block. For example, if the motion vector of the reference luma block is (-11, -3) and the encoding format is 4:4:4, the motion vector of the current chroma block is (-11, -3). This step can refer to step C132 above.
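Steps H1 and H2 can be sketched as a lookup of the density ratio followed by per-component scaling. The 4:2:2 and 4:2:0 ratios below are the usual chroma-subsampling ratios, assumed here for illustration; only the 4:4:4 case is quoted from the text.

```python
# Illustrative sketch of steps H1-H2: scale the reference luma block's
# motion vector to the chroma grid according to the encoding format.

SCALE = {            # (horizontal, vertical) luma:chroma density ratio
    "4:4:4": (1, 1),
    "4:2:2": (2, 1),   # chroma sampled at half density horizontally
    "4:2:0": (2, 2),   # half density in both directions
}

def chroma_motion_vector(luma_mv, coding_format):
    sx, sy = SCALE[coding_format]
    # floor division rounds toward -inf for negatives, like an arithmetic shift
    return (luma_mv[0] // sx, luma_mv[1] // sy)

print(chroma_motion_vector((-11, -3), "4:4:4"))  # (-11, -3), as in the text
```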
- Step 206 Predict a prediction block of the current chroma block based on the motion vector of the current chroma block.
- Based on the motion vector of the current chroma block, the prediction block of the current chroma block can be found in the chroma-coded region of the chroma image in the image frame to be processed.
- Step 207 Add the target intra prediction mode to the code stream of the current chroma block after encoding the index in the prediction mode candidate queue.
- the prediction mode candidate queue includes at least one prediction mode, and each prediction mode is a mode for predicting a prediction block of the current chroma block.
- The index in the prediction mode candidate queue is used to indicate the ordering of the prediction mode in the prediction mode candidate queue; for example, if the index of the target intra prediction mode is 3, the target intra prediction mode is the third prediction mode in the prediction mode candidate queue.
- the index may be entropy encoded by an entropy encoding module and then written to the code stream.
- Step 208 Transmit a code stream to the decoding end, where the code stream includes an index of the encoded target intra prediction mode and a coded residual block.
- During encoding, the pixel values of the prediction block of the current chroma block are subtracted from the original pixel values of the current chroma block to obtain the residual block of the current chroma block; the residual block of the current chroma block is then transformed and quantized, and the quantized residual block is entropy encoded to obtain the encoded code stream.
- This process can refer to JEM or HEVC.
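The residual path described above can be sketched numerically. The transform is omitted and the quantizer is a simple integer division, for illustration only; real codecs such as HEVC or JEM use DCT-like transforms and rate-controlled quantization.

```python
# Illustrative sketch: residual = original minus prediction, then a
# placeholder quantization step (integer division by a quantization step).

def encode_residual(original, prediction, qstep=2):
    residual = [[o - p for o, p in zip(ro, rp)]
                for ro, rp in zip(original, prediction)]
    quantized = [[r // qstep for r in row] for row in residual]
    return residual, quantized

orig = [[10, 12], [14, 16]]
pred = [[9, 12], [15, 13]]
res, q = encode_residual(orig, pred)
print(res)  # [[1, 0], [-1, 3]]
```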
- The finally obtained division manner may also be encoded and added to the code stream, so that the decoding end divides the image frame to be processed based on that division manner and performs corresponding processing on the divided chroma blocks.
- the index encoded in the code stream is used by the decoding end to determine a corresponding reference luma block.
- On one hand, the decoding end may determine the reference luma block based on a pre-agreed manner and the index of the target intra prediction mode, so that no related indication information needs to be encoded in the code stream; this reduces the coding cost of the indication information and helps improve video coding efficiency.
- On the other hand, since the encoding end has already determined the identifier of the reference luma block, it may also encode the identifier of the reference luma block into the code stream for reference by the decoding end, so that the decoding end can directly determine the reference luma block based on the identifier of the reference luma block without excessive computation, thereby reducing the computational cost of the decoding end.
- The chroma intra prediction method provided by the embodiment of the present application may further include:
- Step 209 Obtain an identifier of the reference luma block.
- In the first acquisition manner, the process of obtaining the identifier of the reference luma block includes:
- Step I1 Assign identifiers to all of the n luma blocks in an order agreed with the decoding end.
- Step I2 Obtain an identifier assigned to the reference luma block.
- In the second acquisition manner, the process of obtaining the identifier of the reference luma block includes:
- Step J1 Assign identifiers, in an order agreed with the decoding end, to the luma blocks among the n luma blocks whose motion vectors can be referenced.
- The identifiers assigned to the luma blocks whose motion vectors can be referenced form an arithmetic sequence starting at 0 or 1 with a common difference u, where u is a positive integer, usually 1.
- The process of assigning identifiers may be implemented in multiple manners. The first manner may be performed synchronously with step 2022: first, all n luma blocks are assigned different identifiers; then, whenever a luma block whose motion vector cannot be referenced is found in the target order, the identifier of that luma block is deleted and the identifiers of all subsequent luma blocks are updated, until the detection stop condition of step 2022 is reached. For example, the n luma blocks are first assigned the identifiers 0, 1, ..., n-1, an arithmetic sequence starting at 0 with a common difference of 1. When the motion vector of the first luma block is detected to be unreferenceable, its identifier is deleted and the identifiers of the subsequent luma blocks are updated; the update subtracts the common difference, that is, 1, from each original identifier, so the updated identifiers are 0, 1, ..., n-2. The process is repeated until the detection stop condition of step 2022 is reached.
- The second manner is that, after the detection stop condition is reached, all luma blocks whose motion vectors were detected to be referenceable are assigned different identifiers, for example 0, 1, 2, and so on.
- Step J2 Obtain an identifier assigned to the reference luma block.
- The agreed order may be the coding order of the luma blocks, or the target order used in step 2022 to detect whether the motion vectors of the n luma blocks can be referenced.
- the assigned identifier can be a digital identifier.
- For example, the identifiers assigned to the luma blocks whose motion vectors can be referenced form an incrementing sequence starting at 0 or 1 with a common difference of 1, such as 0, 1, 2, ..., n.
- The identifiers assigned to the luma blocks whose motion vectors can be referenced may also form a sequence of another form, which is not limited in this embodiment of the present application. For example, please refer to FIG.
- Suppose the n luma blocks are luma block M11, luma block M13, luma block M14, luma block M15, and luma block M16; the luma blocks whose motion vectors can be referenced are M14 and M15; the reference luma block is M14; and the agreed order is the coding order of the luma blocks. With the first acquisition manner, the identifiers of the n luma blocks are: luma block M11 is 0, luma block M13 is 1, luma block M14 is 2, luma block M15 is 3, and luma block M16 is 4, so the identifier of the reference luma block is 2. With the second acquisition manner, the identifiers are: luma block M14 is 0 and luma block M15 is 1, so the identifier of the reference luma block is 0.
- The identifiers of these luma blocks can be expressed as binary numbers.
- It can be seen that the second acquisition manner assigns identifiers to fewer luma blocks, and the identifier value of the finally determined reference luma block is smaller, so that when the identifier of the reference luma block is transmitted in the code stream, fewer data bits are occupied, which effectively saves code stream resources.
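The two identifier-assignment manners can be compared directly on the worked example above. This is an illustrative sketch; the dict-based block representation and the mv=None convention are assumptions.

```python
# Illustrative sketch: the two identifier-assignment manners, applied to the
# M11/M13/M14/M15/M16 example (mv=None means the motion vector cannot be
# referenced).

def assign_all(blocks):                 # first manner: every luma block
    return {b["id"]: i for i, b in enumerate(blocks)}

def assign_referenceable(blocks):       # second manner: referenceable only
    refs = [b for b in blocks if b["mv"] is not None]
    return {b["id"]: i for i, b in enumerate(refs)}

blocks = [{"id": "M11", "mv": None}, {"id": "M13", "mv": None},
          {"id": "M14", "mv": (1, 0)}, {"id": "M15", "mv": (0, 1)},
          {"id": "M16", "mv": None}]
print(assign_all(blocks)["M14"])            # 2
print(assign_referenceable(blocks)["M14"])  # 0 -- fewer bits in the stream
```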
- Step 210 Encode the identifier of the reference luma block and add it to the code stream of the current chroma block.
- the identification of the reference luma block may be entropy encoded by the entropy encoding module and then written to the code stream.
- It should be noted that the sequence of the steps of the chroma intra prediction method provided by the embodiments of the present application may be appropriately adjusted, and steps may be added or removed as the situation requires. For example, steps 209 and 210 may not be performed. Any variation readily conceivable by a person skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application, and is therefore not described again.
- In summary, in the chroma intra prediction method provided by the embodiments of the present application, since the motion vector of the chroma block is determined based on the motion vector of the luma block, the correlation between the motion vector of the luma component and the motion vector of the chroma component is fully utilized; the motion vector of the chroma component does not need to be calculated separately, which simplifies the intra motion compensation process, reduces the computational cost of the chroma-component motion vector, and correspondingly reduces the computational cost of the overall motion vector.
- The following describes a chroma intra prediction method performed by a decoding end, which is used for decoding of an I frame. The method includes:
- Step 301 Decode a code stream of a current chroma block, where the code stream includes an index of the encoded target intra prediction mode and a coded residual block.
- the current chroma block refers to the chroma block to be decoded.
- After receiving the code stream transmitted by the encoding end, the decoding end decodes the code stream, usually with an entropy decoding module. The decoded code stream may include the index of the target intra prediction mode and the decoded residual block. The decoding end may then perform inverse quantization and inverse transform on the decoded residual block to obtain the residual block of the current chroma block.
- the code stream further includes the coded division mode.
- The decoding end may extract the division manner from the decoded code stream, divide the image frame to be processed according to the division manner, and perform corresponding processing on the divided chroma blocks.
- Step 302 Extract an index of the target intra prediction mode in the prediction mode candidate queue from the decoded code stream.
- the index is used to indicate the order of the prediction mode in the prediction mode candidate queue.
- For example, if the index of the target intra prediction mode is 3, the target intra prediction mode is the third prediction mode in the prediction mode candidate queue.
- Step 303 Construct, for the current chroma block, a prediction mode candidate queue, where the prediction mode candidate queue includes at least one prediction mode, and each prediction mode is a mode for predicting a prediction block of the current chroma block.
- the process of constructing the prediction mode candidate queue may refer to steps A1 to A7 in the foregoing step 202.
- The construction process is agreed with the encoding end and is consistent with it; that is, step 303 is consistent with step 202 above. Therefore, details are not described again in the embodiments of the present application.
- Step 304 Determine a target intra prediction mode in the constructed prediction mode candidate queue.
- The target intra prediction mode may be obtained by querying the prediction mode candidate queue based on the index of the target intra prediction mode in the prediction mode candidate queue.
- For example, suppose the prediction mode candidate queue is {11, 75, ..., 68} and includes 11 prediction modes. If the index of the target intra prediction mode is 2, the prediction mode candidate queue is queried and the second prediction mode is taken as the target intra prediction mode; this target intra prediction mode is an intra motion compensation mode with mode number 75.
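Step 304 is a direct lookup into the candidate queue by the decoded index. A minimal sketch, assuming 1-based indices as in the example above:

```python
# Illustrative sketch of step 304: resolve the decoded index against the
# (decoder-side) prediction mode candidate queue.

candidate_queue = [11, 75, 68]          # mode numbers in queue order

def mode_from_index(queue, index):
    return queue[index - 1]             # index 2 -> second prediction mode

print(mode_from_index(candidate_queue, 2))  # 75, an intra motion compensation mode
```

Because the decoder builds the queue by the same agreed procedure as the encoder, the index alone identifies the same mode on both ends.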
- Step 305 Determine a reference luma block when the target intra prediction mode is the intra motion compensation mode.
- This process corresponds to that of the encoding end.
- On one hand, the decoding end can determine the reference luma block based on a pre-agreed manner and the index of the target intra prediction mode, so that no related indication information needs to be encoded in the code stream; this reduces the coding cost of the indication information and helps improve video coding efficiency. On the other hand, since the encoding end has determined the identifier of the reference luma block, it may also encode the identifier of the reference luma block into the code stream for reference by the decoding end, so that the decoding end can directly determine the reference luma block based on the identifier of the reference luma block without excessive computation, thereby reducing the computational cost of the decoding end.
- the manner in which the decoding end determines the reference luma block can be various.
- the following two determination manners are provided in the embodiment of the present application.
- the decoding end determines the reference luma block based on a pre-agreed manner and an index of the target intra prediction mode.
- This process is the same as step 204 above and likewise has three implementation manners. For the specific process, refer to step 204 above; details are not described in this embodiment of the present application.
- the decoding end determines the reference luma block based on the identifier of the reference luma block in the code stream.
- the process can include:
- Step K1 Extract an identifier of the reference luma block from the decoded code stream.
- the decoding end may extract the identifier of the reference luma block after entropy decoding the code stream.
- Step K2 Determine a reference luma block among the n luma blocks based on the identifier of the reference luma block.
- Since the identifier of the reference luma block can be obtained in multiple ways, the decoding end needs to use an acquisition manner consistent with that of the encoding end to ensure that the acquired identifier indicates the luma block at the same position. Therefore, corresponding to the encoding end, the embodiment of the present application describes the following two acquisition manners as examples:
- the first acquisition manner of the decoding end includes:
- Step L1 Assign identifiers to all of the n luma blocks in the order agreed with the encoding end.
- Step L2 Determine the luma block whose identifier is consistent with the identifier of the reference luma block as the reference luma block.
- the second obtaining manner of the decoding end includes:
- Step M1 Assign identifiers, in the order agreed with the encoding end, to the luma blocks among the n luma blocks whose motion vectors can be referenced.
- The identifiers assigned to the luma blocks whose motion vectors can be referenced form an arithmetic sequence starting at 0 or 1 with a common difference u, where u is a positive integer, usually 1.
- The process of assigning identifiers may be implemented in multiple manners. The first manner may be performed synchronously with step 303: first, all n luma blocks are assigned different identifiers; then, whenever a luma block whose motion vector cannot be referenced is found in the target order, the identifier of that luma block is deleted and the identifiers of all subsequent luma blocks are updated, until the detection stop condition of step 303 is reached. For example, the n luma blocks are first assigned the identifiers 0, 1, ..., n-1, an arithmetic sequence starting at 0 with a common difference of 1. When the motion vector of the first luma block is detected to be unreferenceable, its identifier is deleted and the identifiers of the subsequent luma blocks are updated; the update subtracts the common difference, that is, 1, from each original identifier, so the updated identifiers are 0, 1, ..., n-2. The process is repeated until the detection stop condition of step 303 is reached.
- The second manner is that, after the detection stop condition is reached, all luma blocks whose motion vectors were detected to be referenceable are assigned different identifiers. For example, if four such luma blocks are detected, the assigned identifiers are 0, 1, 2, and 3.
- Step M2 Determine the luma block whose motion vector can be referenced and whose identifier is consistent with the identifier of the reference luma block as the reference luma block.
- The agreed order may be the coding order of the luma blocks, or the target order used in steps 2022 and 303 above to detect whether the motion vectors of the n luma blocks can be referenced. The identifier may be a digital identifier; for example, the identifiers assigned to the luma blocks whose motion vectors can be referenced form an incrementing sequence starting at 0 or 1 with a common difference of 1.
- For example, suppose the n luma blocks are luma block M11, luma block M13, luma block M14, luma block M15, and luma block M16; the luma blocks whose motion vectors can be referenced are M14 and M15; the reference luma block is M14; and the agreed order is the coding order of the luma blocks. With the first acquisition manner, the identifiers of the n luma blocks are: luma block M11 is 0, luma block M13 is 1, luma block M14 is 2, luma block M15 is 3, and luma block M16 is 4, so the identifier of the reference luma block transmitted in the code stream is 2, and the decoding end determines the reference luma block as M14 based on this identifier. With the second acquisition manner, the identifiers are: luma block M14 is 0 and luma block M15 is 1, so the identifier of the reference luma block transmitted in the code stream is 0, and the decoding end determines the reference luma block as M14 based on this identifier.
- The identifiers of these luma blocks can be expressed as binary numbers.
- In the second acquisition mode, identifiers are allocated to fewer luma blocks, so the finally determined reference luma block has a smaller identifier. When the identifier of the reference luma block is transmitted in the code stream, it therefore occupies fewer data bits, which effectively saves code stream resources.
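The second acquisition mode can be sketched as follows (a minimal illustration; the block names and the `mv_referenceable` predicate are hypothetical stand-ins for codec internals):

```python
def assign_identifiers(luma_blocks, mv_referenceable):
    """Second acquisition mode sketch: only luma blocks whose motion
    vectors can be referenced receive identifiers, forming an arithmetic
    sequence starting at 0 with a common difference of 1."""
    ids = {}
    next_id = 0
    for block in luma_blocks:  # traverse in the agreed order, e.g. coding order
        if mv_referenceable(block):
            ids[block] = next_id
            next_id += 1
    return ids

ids = assign_identifiers(
    ["M11", "M13", "M14", "M15", "M16"],
    lambda b: b in {"M14", "M15"},
)
print(ids)  # {'M14': 0, 'M15': 1} -- reference block M14 is signaled as 0
```

Because only the referenceable blocks receive identifiers, the largest signaled value is bounded by the count of referenceable blocks rather than by n, which is what saves bits.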
- Step 306 Determine a motion vector of a current chroma block in the image frame to be processed based on the motion vector of the reference luma block.
- This process is the same as that at the encoding end, described in step 205 above, and is not repeated here.
- Step 307 Predict a prediction block of the current chroma block based on the motion vector of the current chroma block.
- Step 308 Determine a reconstructed pixel value of the current chroma block based on the prediction block of the current chroma block and the residual block of the current chroma block.
- Optionally, the reconstructed pixel value of the current chroma block may be determined by adding the pixel values of the prediction block of the current chroma block to the pixel values of the residual block of the current chroma block.
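The reconstruction step can be sketched as follows (assuming 8-bit samples and clipping to the valid range, which the text does not spell out):

```python
import numpy as np

def reconstruct_chroma(prediction_block: np.ndarray,
                       residual_block: np.ndarray,
                       bit_depth: int = 8) -> np.ndarray:
    """Sketch: add prediction and residual sample-wise, then clip to the
    valid sample range (clipping is an assumption, as in most codecs)."""
    recon = prediction_block.astype(np.int32) + residual_block.astype(np.int32)
    return np.clip(recon, 0, (1 << bit_depth) - 1).astype(np.uint8)

pred = np.array([[120, 130], [140, 150]])
resid = np.array([[5, -10], [200, -200]])
print(reconstruct_chroma(pred, resid))
# [[125 120]
#  [255   0]]
```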
- It should be noted that the sequence of the steps of the chroma intra prediction method provided by this embodiment of the present application may be adjusted appropriately, and steps may be added or removed as required; for example, the order of steps 301 and 303 may be reversed. Any variation readily conceivable by those skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application, and is therefore not described further.
- In summary, in the chroma intra prediction method provided by this embodiment, since the motion vector of the chroma block is determined based on the motion vector of the luma block, the correlation between the motion vector of the luma component and the motion vector of the chroma component is fully utilized. The motion vector of the chroma component does not need to be calculated separately, which simplifies the intra motion compensation process, reduces the computational cost of deriving the chroma motion vector, and correspondingly reduces the overall motion vector computation cost.
- An embodiment of the present application provides a chrominance intra prediction apparatus 40 for encoding or decoding an I frame.
- the apparatus 40 includes:
- a first determining module 401, configured to determine a motion vector of a current chroma block in an image frame to be processed based on a motion vector of a reference luma block when the target intra prediction mode is an intra motion compensation mode, where
- the target intra prediction mode is a mode for predicting a prediction block of the current chroma block; in the intra motion compensation mode, the motion vector of the current chroma block is generated based on a motion vector of a luma block; and
- the reference luma block is one of n luma blocks corresponding to the position of the current chroma block, n ≥ 1;
- the prediction module 402 is configured to predict a prediction block of the current chroma block based on a motion vector of the current chroma block.
- In summary, in the chroma intra prediction apparatus provided by this embodiment, since the motion vector of the chroma block is determined based on the motion vector of the luma block, the correlation between the motion vector of the luma component and the motion vector of the chroma component is fully utilized. The motion vector of the chroma component does not need to be calculated separately, which simplifies the intra motion compensation process, reduces the computational cost of deriving the chroma motion vector, and correspondingly reduces the overall motion vector computation cost.
- the apparatus 40 further includes:
- a constructing module 403, configured to construct a prediction mode candidate queue before the motion vector of the current chroma block in the image frame to be processed is determined based on the motion vector of the reference luma block, where the prediction mode candidate queue includes at least one prediction mode, and each prediction mode is a mode for predicting a prediction block of the current chroma block;
- a second determining module 404, configured to determine the target intra prediction mode in the constructed prediction mode candidate queue;
- the third determining module 405 is configured to determine a reference luma block when the target intra prediction mode is the intra motion compensation mode.
- the constructing module 403 includes:
- a first determining submodule 4031 configured to determine n luma blocks corresponding to the current chroma block position in the image frame to be processed
- a detecting submodule 4032, configured to detect whether a luma block whose motion vector can be referenced exists among the n luma blocks;
- an adding submodule 4033, configured to add an intra motion compensation mode to the prediction mode candidate queue when a luma block whose motion vector can be referenced exists among the n luma blocks.
- Optionally, the detecting submodule 4032 is configured to: sequentially detect, in a target order, whether the motion vectors of the luma blocks among the n luma blocks can be referenced until a detection stop condition is reached, where the detection stop condition is that the total number of referenceable motion vectors equals a preset number threshold k, or that the n luma blocks have been traversed.
- the adding submodule 4033 is used to:
- The third determining module 405 has multiple possible implementations, including for example:
- In a first implementation, the prediction mode candidate queue includes m intra motion compensation modes, m ≥ 1.
- the third determining module 405 includes:
- a second determining submodule 4051, configured to determine that, in the prediction mode candidate queue, the target intra prediction mode is the rth intra motion compensation mode among the m intra motion compensation modes, 1 ≤ r ≤ m;
- a detecting sub-module 4052, configured to sequentially detect, in the target order, whether the motion vectors of the luma blocks among the n luma blocks can be referenced until a detection stop condition is reached, where the detection stop condition is that the total number of referenceable motion vectors equals a preset number threshold x, x ≥ m, or equals r, or that the n luma blocks have been traversed;
- a third determining submodule 4053, configured to determine, after the detection stop condition is reached, the luma block whose motion vector is the rth referenceable motion vector as the reference luma block.
- In a second implementation, the prediction mode candidate queue includes m intra motion compensation modes, m ≥ 1.
- the third determining module 405 includes:
- a detecting sub-module 4052, configured to sequentially detect, in the target order, whether the motion vectors of the luma blocks among the n luma blocks can be referenced until a detection stop condition is reached, where the detection stop condition is that the total number of referenceable motion vectors equals a preset number threshold x, x ≥ m, or that the n luma blocks have been traversed;
- a generating submodule 4054, configured to generate, after the detection stop condition is reached, a reference prediction block of the current chroma block based on each referenceable motion vector;
- a fourth determining sub-module 4055, configured to determine, among the generated reference prediction blocks, a reference prediction block that meets a first target condition, where the first target condition is that the sum of the absolute values of the residual values of the residual block corresponding to the reference prediction block is the smallest, or the sum of the absolute values of the transformed residual values of the residual block corresponding to the reference prediction block is the smallest, or the coding cost corresponding to the reference prediction block is the smallest;
- a fifth determining sub-module 4056, configured to determine the luma block corresponding to the reference prediction block that meets the first target condition as the reference luma block.
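The simplest variant of the first target condition (smallest sum of absolute residual values) can be sketched as follows; the transform-domain and rate-distortion variants are not shown:

```python
import numpy as np

def best_reference_index(current_block, reference_blocks):
    """Sketch: return the index of the reference prediction block whose
    residual against the current block has the smallest sum of absolute
    values (SAD)."""
    sads = [int(np.abs(current_block.astype(np.int64) -
                       ref.astype(np.int64)).sum())
            for ref in reference_blocks]
    return int(np.argmin(sads))

cur = np.array([[10, 20], [30, 40]])
refs = [np.array([[0, 0], [0, 0]]),      # SAD = 100
        np.array([[11, 19], [29, 41]]),  # SAD = 4
        np.array([[10, 25], [30, 40]])]  # SAD = 5
print(best_reference_index(cur, refs))  # 1
```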
- In a third implementation, the prediction mode candidate queue includes m intra motion compensation modes, m ≥ 1.
- the apparatus 40 further includes:
- an establishing module 406, configured to establish, in the process of constructing the prediction mode candidate queue, a correspondence table between identifiers of intra motion compensation modes and identifiers of luma blocks, where the correspondence table records the identifier of each intra motion compensation mode added to the prediction mode candidate queue and the identifier of the luma block whose motion vector can be referenced correspondingly, and the identifier of each intra motion compensation mode in the correspondence table uniquely identifies one intra motion compensation mode in the prediction mode candidate queue;
- the third determining module 405 is configured to:
- the detecting submodule 4032 or the detecting submodule 4052 may include:
- An execution unit configured to perform a detection process, where the detection process includes:
- the execution unit is configured to:
- when the prediction type of the ith luma block is an inter prediction type, generate a candidate motion vector based on the motion vector of the ith luma block;
- when the candidate motion vector corresponding to the ith luma block is the same as the candidate motion vector corresponding to a luma block whose motion vector has already been determined to be referenceable, determine that the motion vector of the ith luma block cannot be referenced.
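Putting the pieces of the detection process together, a sketch (the helper callables `is_intra`, `candidate_mv` and `block_valid` are hypothetical stand-ins for codec internals):

```python
def detect_referenceable(luma_blocks, k, is_intra, candidate_mv, block_valid):
    """Sketch of the detection process: walk the luma blocks in the target
    order, skip intra-predicted blocks, skip duplicate candidate motion
    vectors and invalid candidate prediction blocks, and stop once k
    referenceable motion vectors are found or all blocks are traversed."""
    referenceable = []   # luma blocks whose motion vectors can be referenced
    seen_mvs = []        # candidate MVs of blocks already accepted
    for block in luma_blocks:               # target order
        if is_intra(block):                 # intra-predicted: no MV to reference
            continue
        mv = candidate_mv(block)
        if mv in seen_mvs:                  # duplicate candidate MV: skip
            continue
        if not block_valid(block, mv):      # candidate prediction block invalid
            continue
        referenceable.append(block)
        seen_mvs.append(mv)
        if len(referenceable) == k:         # stop condition reached
            break
    return referenceable

blocks = detect_referenceable(
    ["A", "B", "C", "D"], k=2,
    is_intra=lambda b: b == "A",
    candidate_mv=lambda b: {"B": (1, 0), "C": (1, 0), "D": (2, 0)}[b],
    block_valid=lambda b, mv: True,
)
print(blocks)  # ['B', 'D']: A is intra-predicted, C duplicates B's candidate MV
```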
- the apparatus 40 further includes:
- a first detecting module 407, configured to detect, after the candidate prediction block of the current chroma block is predicted based on the candidate motion vector corresponding to the ith luma block, whether the candidate prediction block is located entirely within a chroma-coded region of the image frame to be processed;
- a fourth determining module 408, configured to determine that the candidate prediction block is valid when it is located entirely within the chroma-coded region of the image frame to be processed;
- a fifth determining module 409, configured to determine that the candidate prediction block is invalid when it is not located entirely within the chroma-coded region of the image frame to be processed.
- the apparatus 40 further includes:
- a first detecting module 407, configured to detect, after the candidate prediction block of the current chroma block is predicted based on the candidate motion vector corresponding to the ith luma block, whether the candidate prediction block is located entirely within a chroma-coded region of the image frame to be processed;
- a second detecting module 410, configured to detect whether the candidate prediction block is located in a specified orientation relative to the current chroma block;
- a sixth determining module 411, configured to determine that the candidate prediction block is valid when it is located entirely within the chroma-coded region of the image frame to be processed and in the specified orientation relative to the current chroma block;
- a seventh determining module 412, configured to determine that the candidate prediction block is invalid when it is not located entirely within the chroma-coded region of the image frame to be processed, or is not located in the specified orientation relative to the current chroma block;
- where the specified orientation of the current chroma block is any one of the left side, the upper side, and the upper-left side of the current chroma block.
- Optionally, the first detecting module 407 is configured to: when the candidate motion vector corresponding to the ith luma block is a sub-pixel motion vector, acquire the reference chroma block corresponding to the candidate prediction block, where the chroma pixel values of the candidate prediction block are obtained by interpolation based on the pixel values of the reference chroma block; detect whether the reference chroma block is located entirely within a chroma-coded region of the image frame to be processed; when the reference chroma block is located entirely within the chroma-coded region, determine that the candidate prediction block is located entirely within the chroma-coded region; and when it is not, determine that the candidate prediction block is not located entirely within the chroma-coded region of the image frame to be processed.
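Whether a block lies entirely within the already-coded region can be sketched as follows, assuming a raster-scan coding order (an assumption; the document does not fix the scan order):

```python
def block_fully_coded(x, y, w, h, cur_x, cur_y, cur_h):
    """Sketch: with raster-scan coding (assumed), the coded region is all
    full rows above the current block's row band plus the part of the
    current band to the left of the current block. A w*h block at (x, y)
    is fully coded iff it fits in those areas."""
    if x < 0 or y < 0:
        return False
    above = y + h <= cur_y                             # entirely in rows above
    left = x + w <= cur_x and y + h <= cur_y + cur_h   # left of current block
    return above or left

print(block_fully_coded(0, 0, 8, 8, cur_x=16, cur_y=16, cur_h=8))    # True
print(block_fully_coded(16, 16, 8, 8, cur_x=16, cur_y=16, cur_h=8))  # False
```

For a sub-pixel motion vector, the same check would be applied to the (slightly larger) reference chroma block read by the interpolation filter rather than to the candidate prediction block itself.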
- the foregoing first determining submodule 4031 includes:
- a determining unit, configured to determine a luma image region corresponding to the position of the current chroma block in the image frame to be processed;
- a processing unit, configured to use all target luma blocks as the n luma blocks, or to select, from all target luma blocks, luma blocks at specified positions as the n luma blocks, where the luma blocks at specified positions include: the luma block covering the central pixel of the luma image region, the luma block covering the upper-left corner pixel of the luma image region, the luma block covering the upper-right corner pixel of the luma image region, the luma block covering the lower-left corner pixel of the luma image region, and the luma block covering the lower-right corner pixel of the luma image region;
- where a target luma block is a luma block that lies partially or wholly within the luma image region.
- Optionally, the n luma blocks include at least: the luma block covering the central pixel of the luma image region, the luma block covering the upper-left corner pixel, the luma block covering the upper-right corner pixel, the luma block covering the lower-left corner pixel, and the luma block covering the lower-right corner pixel of the luma image region, where the luma image region is the luma region in the image frame to be processed corresponding to the position of the current chroma block;
- the target order is: the order of the luma block covering the central pixel of the luma image region, the luma block covering the upper-left corner pixel, the luma block covering the upper-right corner pixel, the luma block covering the lower-left corner pixel, and the luma block covering the lower-right corner pixel of the luma image region; or, the target order is a randomly determined order.
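The five specified pixel positions can be computed as follows (the center convention `w // 2, h // 2` is an assumption):

```python
def specified_positions(x0, y0, w, h):
    """Sketch: the five pixel positions (center and four corners) of a
    luma image region with top-left corner (x0, y0) and size w*h."""
    return [
        (x0 + w // 2, y0 + h // 2),   # center
        (x0, y0),                     # upper-left corner
        (x0 + w - 1, y0),             # upper-right corner
        (x0, y0 + h - 1),             # lower-left corner
        (x0 + w - 1, y0 + h - 1),     # lower-right corner
    ]

print(specified_positions(0, 0, 16, 16))
# [(8, 8), (0, 0), (15, 0), (0, 15), (15, 15)]
```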
- Optionally, the apparatus is applied to a decoding end, and the apparatus further includes a third determining module configured to determine the reference luma block.
- the third determining module 405 can be the third determining module 405 shown in FIG. 16, and the third determining module 405 includes:
- An extraction submodule 4057 configured to extract an identifier of the reference luma block from the decoded code stream
- the sixth determining sub-module 4058 is configured to determine the reference luma block among the n luma blocks based on the identifier of the reference luma block.
- the sixth determining submodule 4058 is configured to:
- assign identifiers to the luma blocks whose motion vectors can be referenced among the n luma blocks in an order agreed with the encoding end, and determine, as the reference luma block, the luma block whose motion vector can be referenced and whose identifier matches the identifier of the reference luma block.
- the device is applied to an encoding end.
- the device 40 further includes:
- a first encoding module 413, configured to encode, after the reference luma block is determined, an index of the target intra prediction mode in a prediction mode candidate queue and add it to the code stream of the current chroma block, where
- the prediction mode candidate queue includes at least one prediction mode, each of which is a mode for predicting a prediction block of the current chroma block.
- the apparatus 40 further includes:
- an obtaining module 414, configured to acquire an identifier of the reference luma block after the reference luma block is determined;
- the second encoding module 415 is configured to add the identifier of the reference luma block to the code stream of the current chroma block.
- the obtaining module 414 is configured to:
- the identifiers assigned to the luma blocks whose motion vectors can be referenced among the n luma blocks form an arithmetic sequence starting at 0 or 1 with a common difference of 1.
- the first determining module 401 is configured to:
- the second determining module 404 is configured to:
- the second target condition is that the sum of the absolute values of the residual values of the residual block corresponding to the prediction block determined based on the intra prediction mode is the smallest, or the sum of the absolute values of the transformed residual values of that residual block is the smallest, or the coding cost of encoding with the intra prediction mode is the smallest.
- Optionally, the determining unit is configured to: determine, based on the size of the current chroma block and the distribution density ratio of the luma component to the chroma component, the luma image region corresponding to the position of the current chroma block, where the size of the luma image region equals the product of the size of the current chroma block and the distribution density ratio.
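This can be sketched for a 4:2:0-style format (an assumption: a luma-to-chroma density ratio of 2 in each direction, giving a vector scaling ratio of 1/2):

```python
def luma_region_and_mv(chroma_x, chroma_y, chroma_w, chroma_h,
                       luma_mv, density_ratio=2):
    """Sketch: the luma image region is the chroma block's position and
    size scaled up by the density ratio, and the chroma motion vector is
    the reference luma block's motion vector scaled down by the same
    factor (the 4:2:0 ratio of 2 is an assumption)."""
    region = (chroma_x * density_ratio, chroma_y * density_ratio,
              chroma_w * density_ratio, chroma_h * density_ratio)
    chroma_mv = (luma_mv[0] // density_ratio, luma_mv[1] // density_ratio)
    return region, chroma_mv

region, mv = luma_region_and_mv(4, 4, 8, 8, luma_mv=(-16, -8))
print(region, mv)  # (8, 8, 16, 16) (-8, -4)
```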
- In summary, in the chroma intra prediction apparatus provided by this embodiment, since the motion vector of the chroma block is determined based on the motion vector of the luma block, the correlation between the motion vector of the luma component and the motion vector of the chroma component is fully utilized. The motion vector of the chroma component does not need to be calculated separately, which simplifies the intra motion compensation process, reduces the computational cost of deriving the chroma motion vector, and correspondingly reduces the overall motion vector computation cost.
- An embodiment of the present application provides a chrominance intra prediction apparatus, including:
- at least one processor; and
- at least one memory;
- where the at least one memory stores at least one program, and the at least one processor is capable of executing the at least one program to perform the chroma intra prediction method according to any one of the embodiments of the present application.
- An embodiment of the present application provides a storage medium, which is a non-transitory computer-readable storage medium storing instructions or code. When the instructions or code are executed by a processor, the processor is enabled to perform the chroma intra prediction method according to any one of the embodiments of the present application.
Claims (26)
- 1. A chroma intra prediction method, wherein the method comprises: when a target intra prediction mode is an intra motion compensation mode, determining a motion vector of a current chroma block in an image frame to be processed based on a motion vector of a reference luma block, wherein the target intra prediction mode is a mode for predicting a prediction block of the current chroma block; in the intra motion compensation mode, the motion vector of the current chroma block is generated based on a motion vector of a luma block; and the reference luma block is one of n luma blocks corresponding to the position of the current chroma block, n ≥ 1; and predicting the prediction block of the current chroma block based on the motion vector of the current chroma block.
- 2. The method according to claim 1, wherein before the determining a motion vector of a current chroma block in an image frame to be processed based on a motion vector of a reference luma block, the method further comprises: constructing a prediction mode candidate queue, wherein the prediction mode candidate queue comprises at least one prediction mode, and each prediction mode is a mode for predicting a prediction block of the current chroma block; determining the target intra prediction mode in the constructed prediction mode candidate queue; and when the target intra prediction mode is the intra motion compensation mode, determining the reference luma block.
- 3. The method according to claim 2, wherein the constructing a prediction mode candidate queue comprises: determining n luma blocks corresponding to the position of the current chroma block in the image frame to be processed; detecting whether a luma block whose motion vector can be referenced exists among the n luma blocks; and when a luma block whose motion vector can be referenced exists among the n luma blocks, adding an intra motion compensation mode to the prediction mode candidate queue.
- 4. The method according to claim 3, wherein the detecting whether a luma block whose motion vector can be referenced exists among the n luma blocks comprises: sequentially detecting, in a target order, whether the motion vectors of the luma blocks among the n luma blocks can be referenced until a detection stop condition is reached, wherein the detection stop condition is that the total number of referenceable motion vectors equals a preset number threshold k, or that the n luma blocks have been traversed.
- 5. The method according to claim 4, wherein the adding an intra motion compensation mode to the prediction mode candidate queue comprises: each time a luma block whose motion vector can be referenced is detected in the target order, adding one intra motion compensation mode to the prediction mode candidate queue; or, after the detection stop condition is reached, if there are m luma blocks whose motion vectors can be referenced, adding m intra motion compensation modes to the prediction mode candidate queue in the order in which the m luma blocks were detected in the target order, m ≥ 1.
- 6. The method according to claim 2, wherein the prediction mode candidate queue comprises m intra motion compensation modes, m ≥ 1, and the determining the reference luma block comprises: determining that, in the prediction mode candidate queue, the target intra prediction mode is the rth intra motion compensation mode among the m intra motion compensation modes, 1 ≤ r ≤ m; sequentially detecting, in a target order, whether the motion vectors of the luma blocks among the n luma blocks can be referenced until a detection stop condition is reached, wherein the detection stop condition is that the total number of referenceable motion vectors equals a preset number threshold x, x ≥ m, or equals r, or that the n luma blocks have been traversed; and after the detection stop condition is reached, determining the luma block whose motion vector is the rth referenceable motion vector as the reference luma block.
- 7. The method according to claim 2, wherein the prediction mode candidate queue comprises m intra motion compensation modes, m ≥ 1, and the determining the reference luma block comprises: sequentially detecting, in a target order, whether the motion vectors of the luma blocks among the n luma blocks can be referenced until a detection stop condition is reached, wherein the detection stop condition is that the total number of referenceable motion vectors equals a preset number threshold x, x ≥ m, or that the n luma blocks have been traversed; after the detection stop condition is reached, generating a reference prediction block of the current chroma block based on each referenceable motion vector; determining, among the generated reference prediction blocks, a reference prediction block that meets a first target condition, wherein the first target condition is that the sum of the absolute values of the residual values of the residual block corresponding to the reference prediction block is the smallest, or the sum of the absolute values of the transformed residual values of the residual block corresponding to the reference prediction block is the smallest, or the coding cost corresponding to the reference prediction block is the smallest; and determining the luma block corresponding to the reference prediction block that meets the first target condition as the reference luma block.
- 8. The method according to claim 2, wherein the prediction mode candidate queue comprises m intra motion compensation modes, m ≥ 1, and the method further comprises: in the process of constructing the prediction mode candidate queue, establishing a correspondence table between identifiers of intra motion compensation modes and identifiers of luma blocks, wherein the correspondence table records the identifier of each intra motion compensation mode added to the prediction mode candidate queue and the identifier of the luma block whose motion vector can be referenced correspondingly, and the identifier of each intra motion compensation mode in the correspondence table uniquely identifies one intra motion compensation mode in the prediction mode candidate queue; and the determining the reference luma block comprises: querying the correspondence table based on the identifier of the target intra prediction mode to obtain the identifier of the reference luma block; and determining the reference luma block based on the identifier of the reference luma block.
- 9. The method according to any one of claims 4 to 8, wherein the sequentially detecting, in the target order, whether the motion vectors of the luma blocks among the n luma blocks can be referenced until a detection stop condition is reached comprises: setting i = 1; and performing a detection process, the detection process comprising: detecting whether the motion vector of the ith luma block among the n luma blocks can be referenced; when the motion vector of the ith luma block can be referenced, detecting whether the detection stop condition is reached; when the detection stop condition is not reached, updating i such that the updated i = i + 1, and performing the detection process again; and when the detection stop condition is reached, stopping performing the detection process.
- 10. The method according to claim 9, wherein the detecting whether the motion vector of the ith luma block among the n luma blocks can be referenced comprises: detecting the prediction type of the ith luma block among the n luma blocks; when the prediction type of the ith luma block is an intra prediction type, determining that the motion vector of the ith luma block cannot be referenced; when the prediction type of the ith luma block is an inter prediction type, generating a candidate motion vector based on the motion vector of the ith luma block; detecting whether the candidate motion vector corresponding to the ith luma block is the same as a candidate motion vector corresponding to a luma block whose motion vector has already been determined to be referenceable; when they are different, predicting a candidate prediction block of the current chroma block based on the candidate motion vector corresponding to the ith luma block; when the candidate prediction block is valid, determining that the motion vector of the ith luma block can be referenced; when the candidate prediction block is invalid, determining that the motion vector of the ith luma block cannot be referenced; and when the candidate motion vector corresponding to the ith luma block is the same as a candidate motion vector corresponding to a luma block whose motion vector has already been determined to be referenceable, determining that the motion vector of the ith luma block cannot be referenced.
- 11. The method according to claim 10, wherein after the predicting a candidate prediction block of the current chroma block based on the candidate motion vector corresponding to the ith luma block, the method further comprises: detecting whether the candidate prediction block is located entirely within a chroma-coded region of the image frame to be processed; when the candidate prediction block is located entirely within the chroma-coded region of the image frame to be processed, determining that the candidate prediction block is valid; and when the candidate prediction block is not located entirely within the chroma-coded region of the image frame to be processed, determining that the candidate prediction block is invalid.
- 12. The method according to claim 10, wherein after the predicting a candidate prediction block of the current chroma block based on the candidate motion vector corresponding to the ith luma block, the method further comprises: detecting whether the candidate prediction block is located entirely within a chroma-coded region of the image frame to be processed; detecting whether the candidate prediction block is located in a specified orientation relative to the current chroma block; when the candidate prediction block is located entirely within the chroma-coded region of the image frame to be processed and in the specified orientation relative to the current chroma block, determining that the candidate prediction block is valid; and when the candidate prediction block is not located entirely within the chroma-coded region of the image frame to be processed, or is not located in the specified orientation relative to the current chroma block, determining that the candidate prediction block is invalid, wherein the specified orientation of the current chroma block is any one of the left side, the upper side, and the upper-left side of the current chroma block.
- 13. The method according to claim 11 or 12, wherein the detecting whether the candidate prediction block is located entirely within a chroma-coded region of the image frame to be processed comprises: when the candidate motion vector corresponding to the ith luma block is a sub-pixel motion vector, acquiring a reference chroma block corresponding to the candidate prediction block, wherein the chroma pixel values of the candidate prediction block are obtained by interpolation based on the pixel values of the reference chroma block; detecting whether the reference chroma block is located entirely within the chroma-coded region of the image frame to be processed; when the reference chroma block is located entirely within the chroma-coded region of the image frame to be processed, determining that the candidate prediction block is located entirely within the chroma-coded region; and when the reference chroma block is not located entirely within the chroma-coded region of the image frame to be processed, determining that the candidate prediction block is not located entirely within the chroma-coded region.
- 14. The method according to any one of claims 3 to 13, wherein the determining n luma blocks corresponding to the position of the current chroma block in the image frame to be processed comprises: determining a luma image region corresponding to the position of the current chroma block in the image frame to be processed; and using all target luma blocks as the n luma blocks, or selecting, from all target luma blocks, luma blocks at specified positions as the n luma blocks, wherein the luma blocks at specified positions include: the luma block covering the central pixel of the luma image region, the luma block covering the upper-left corner pixel of the luma image region, the luma block covering the upper-right corner pixel of the luma image region, the luma block covering the lower-left corner pixel of the luma image region, and the luma block covering the lower-right corner pixel of the luma image region, wherein a target luma block is a luma block that lies partially or wholly within the luma image region.
- 15. The method according to any one of claims 4 to 8, wherein the n luma blocks include at least: the luma block covering the central pixel of a luma image region, the luma block covering the upper-left corner pixel of the luma image region, the luma block covering the upper-right corner pixel of the luma image region, the luma block covering the lower-left corner pixel of the luma image region, and the luma block covering the lower-right corner pixel of the luma image region, wherein the luma image region is the luma region in the image frame to be processed corresponding to the position of the current chroma block, and the target order is: the order of the luma block covering the central pixel of the luma image region, the luma block covering the upper-left corner pixel, the luma block covering the upper-right corner pixel, the luma block covering the lower-left corner pixel, and the luma block covering the lower-right corner pixel of the luma image region; or the target order is a randomly determined order.
- 16. The method according to claim 1, wherein the method is applied to a decoding end, and the process of determining the reference luma block comprises: extracting an identifier of the reference luma block from a decoded code stream; and determining the reference luma block among the n luma blocks based on the identifier of the reference luma block.
- 17. The method according to claim 16, wherein the determining the reference luma block among the n luma blocks based on the identifier of the reference luma block comprises: assigning identifiers to the luma blocks whose motion vectors can be referenced among the n luma blocks in an order agreed with the encoding end; and determining, as the reference luma block, the luma block whose motion vector can be referenced and whose identifier matches the identifier of the reference luma block.
- 18. The method according to claim 1, wherein the method is applied to an encoding end, and after the predicting the prediction block of the current chroma block, the method further comprises: encoding an index of the target intra prediction mode in a prediction mode candidate queue and adding it to the code stream of the current chroma block, wherein the prediction mode candidate queue comprises at least one prediction mode, and each prediction mode is a mode for predicting a prediction block of the current chroma block.
- 19. The method according to claim 18, wherein the method further comprises: acquiring an identifier of the reference luma block; and encoding the identifier of the reference luma block and adding it to the code stream of the current chroma block.
- 20. The method according to claim 19, wherein the acquiring an identifier of the reference luma block comprises: assigning identifiers to the luma blocks whose motion vectors can be referenced among the n luma blocks in an order agreed with the decoding end; and acquiring the identifier assigned to the reference luma block.
- 21. The method according to claim 1, wherein the determining a motion vector of a current chroma block in an image frame to be processed based on a motion vector of a reference luma block comprises: determining a vector scaling ratio between the current chroma block and the reference luma block according to the coding format of the image frame to be processed; and scaling the motion vector of the reference luma block based on the vector scaling ratio to obtain the motion vector of the current chroma block.
- 22. The method according to claim 2, wherein the determining the target intra prediction mode in the constructed prediction mode candidate queue comprises: determining, as the target intra prediction mode, an intra prediction mode in the constructed prediction mode candidate queue whose corresponding prediction block meets a second target condition, wherein the second target condition is that the sum of the absolute values of the residual values of the residual block corresponding to the prediction block determined based on the intra prediction mode is the smallest, or the sum of the absolute values of the transformed residual values of the residual block corresponding to the prediction block determined based on the intra prediction mode is the smallest, or the coding cost of encoding with the intra prediction mode is the smallest.
- 23. The method according to claim 14, wherein the determining a luma image region corresponding to the position of the current chroma block in the image frame to be processed comprises: determining the luma image region corresponding to the position of the current chroma block based on the size of the current chroma block and the distribution density ratio of the luma component to the chroma component, wherein the size of the luma image region equals the product of the size of the current chroma block and the distribution density ratio.
- 24. A chroma intra prediction apparatus, wherein the apparatus comprises: a first determining module, configured to determine a motion vector of a current chroma block in an image frame to be processed based on a motion vector of a reference luma block when a target intra prediction mode is an intra motion compensation mode, wherein the target intra prediction mode is a mode for predicting a prediction block of the current chroma block; in the intra motion compensation mode, the motion vector of the current chroma block is generated based on a motion vector of a luma block; and the reference luma block is one of n luma blocks corresponding to the position of the current chroma block, n ≥ 1; and a prediction module, configured to predict the prediction block of the current chroma block based on the motion vector of the current chroma block.
- 25. A chroma intra prediction apparatus, comprising: at least one processor; and at least one memory, wherein the at least one memory stores at least one program, and the at least one processor is capable of executing the at least one program to perform the chroma intra prediction method according to any one of claims 1 to 23.
- 26. A storage medium, wherein the storage medium stores instructions or code, and when the instructions or code are executed by a processor, the processor is enabled to perform the chroma intra prediction method according to any one of claims 1 to 23.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810276799.5 | 2018-03-30 | ||
CN201810276799.5A CN110324627B (zh) | 2018-03-30 | 2018-03-30 | 色度的帧内预测方法及装置 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019184934A1 true WO2019184934A1 (zh) | 2019-10-03 |
Family
ID=68062566
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/079808 WO2019184934A1 (zh) | 2018-03-30 | 2019-03-27 | 色度的帧内预测方法及装置 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110324627B (zh) |
WO (1) | WO2019184934A1 (zh) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112203086A (zh) * | 2020-09-30 | 2021-01-08 | 字节跳动(香港)有限公司 | 图像处理方法、装置、终端和存储介质 |
CN114189688A (zh) * | 2020-09-14 | 2022-03-15 | 四川大学 | 基于亮度模板匹配的色度分量预测方法 |
CN115190312A (zh) * | 2021-04-02 | 2022-10-14 | 西安电子科技大学 | 一种基于神经网络的跨分量色度预测方法及装置 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105393536A (zh) * | 2013-06-21 | 2016-03-09 | 高通股份有限公司 | 使用位移向量从预测性块的帧内预测 |
WO2016199574A1 (ja) * | 2015-06-08 | 2016-12-15 | ソニー株式会社 | 画像処理装置および画像処理方法 |
CN106464921A (zh) * | 2014-06-19 | 2017-02-22 | Vid拓展公司 | 用于块内复制搜索增强的方法和系统 |
WO2017171370A1 (ko) * | 2016-03-28 | 2017-10-05 | 주식회사 케이티 | 비디오 신호 처리 방법 및 장치 |
WO2017206803A1 (en) * | 2016-05-28 | 2017-12-07 | Mediatek Inc. | Method and apparatus of current picture referencing for video coding |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6259741B1 (en) * | 1999-02-18 | 2001-07-10 | General Instrument Corporation | Method of architecture for converting MPEG-2 4:2:2-profile bitstreams into main-profile bitstreams |
US7116831B2 (en) * | 2002-04-10 | 2006-10-03 | Microsoft Corporation | Chrominance motion vector rounding |
CN1232126C (zh) * | 2002-09-30 | 2005-12-14 | 三星电子株式会社 | 图像编码方法和装置以及图像解码方法和装置 |
US7724827B2 (en) * | 2003-09-07 | 2010-05-25 | Microsoft Corporation | Multi-layer run level encoding and decoding |
CN100461867C (zh) * | 2004-12-02 | 2009-02-11 | 中国科学院计算技术研究所 | 一种帧内图像预测编码方法 |
-
2018
- 2018-03-30 CN CN201810276799.5A patent/CN110324627B/zh active Active
-
2019
- 2019-03-27 WO PCT/CN2019/079808 patent/WO2019184934A1/zh active Application Filing
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105393536A (zh) * | 2013-06-21 | 2016-03-09 | 高通股份有限公司 | 使用位移向量从预测性块的帧内预测 |
CN106464921A (zh) * | 2014-06-19 | 2017-02-22 | Vid拓展公司 | 用于块内复制搜索增强的方法和系统 |
WO2016199574A1 (ja) * | 2015-06-08 | 2016-12-15 | ソニー株式会社 | 画像処理装置および画像処理方法 |
WO2017171370A1 (ko) * | 2016-03-28 | 2017-10-05 | 주식회사 케이티 | 비디오 신호 처리 방법 및 장치 |
WO2017206803A1 (en) * | 2016-05-28 | 2017-12-07 | Mediatek Inc. | Method and apparatus of current picture referencing for video coding |
Non-Patent Citations (2)
Title |
---|
JIN HEO: "Chroma Intra Prediction", JOINT VIDEO EXPLORATION TEAM (JVET) OF ITU-T SG 16 WP 3, 5 October 2016 (2016-10-05), Chengdu * |
MADHUKAR BUDAGAVI: "AHG8: Video Coding Using Intra Motion Compensation", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG 16 WP 3, 9 April 2013 (2013-04-09), Incheon * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114189688A (zh) * | 2020-09-14 | 2022-03-15 | 四川大学 | 基于亮度模板匹配的色度分量预测方法 |
CN112203086A (zh) * | 2020-09-30 | 2021-01-08 | 字节跳动(香港)有限公司 | 图像处理方法、装置、终端和存储介质 |
CN112203086B (zh) * | 2020-09-30 | 2023-10-17 | 字节跳动(香港)有限公司 | 图像处理方法、装置、终端和存储介质 |
CN115190312A (zh) * | 2021-04-02 | 2022-10-14 | 西安电子科技大学 | 一种基于神经网络的跨分量色度预测方法及装置 |
Also Published As
Publication number | Publication date |
---|---|
CN110324627B (zh) | 2022-04-05 |
CN110324627A (zh) | 2019-10-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11399179B2 (en) | Method and apparatus for encoding/decoding image | |
CN114900690B (zh) | 视频解码方法、视频编码方法、装置、设备及存储介质 | |
TWI741589B (zh) | 視頻編解碼之亮度mpm列表導出的方法及裝置 | |
WO2019184934A1 (zh) | 色度的帧内预测方法及装置 | |
CN113273213A (zh) | 图像编码/解码方法和设备以及存储比特流的记录介质 | |
EP3198867A1 (en) | Method of improved directional intra prediction for video coding | |
CN111131822B (zh) | 具有从邻域导出的运动信息的重叠块运动补偿 | |
CN109379594B (zh) | 视频编码压缩方法、装置、设备和介质 | |
US11729410B2 (en) | Image decoding method/apparatus, image encoding method/apparatus, and recording medium storing bitstream | |
TWI729477B (zh) | 視訊編解碼中的子塊去塊及裝置 | |
CN114830651A (zh) | 帧内预测方法、编码器、解码器以及计算机存储介质 | |
CN115174931A (zh) | 视频图像解码、编码方法及装置 | |
US20220353509A1 (en) | Method and apparatus for image encoding and decoding using temporal motion information | |
CN110719467B (zh) | 色度块的预测方法、编码器及存储介质 | |
US20230283795A1 (en) | Video coding method and device using motion compensation of decoder side | |
CN111770334B (zh) | 数据编码方法及装置、数据解码方法及装置 | |
CN116918331A (zh) | 编码方法和编码装置 | |
CN113875237A (zh) | 用于在帧内预测中用信号传送预测模式相关信号的方法和装置 | |
JP6875802B2 (ja) | 画像符号化装置及びその制御方法及び撮像装置及びプログラム | |
WO2022116119A1 (zh) | 一种帧间预测方法、编码器、解码器及存储介质 | |
WO2022061563A1 (zh) | 视频编码方法、装置及计算机可读存储介质 | |
RU2819286C2 (ru) | Способ и устройство кодирования/декодирования сигналов изображений | |
RU2819393C2 (ru) | Способ и устройство кодирования/декодирования сигналов изображений | |
RU2806878C2 (ru) | Способ и устройство кодирования/декодирования изображения и носитель записи, хранящий битовый поток | |
RU2819080C2 (ru) | Способ и устройство кодирования/декодирования сигналов изображений |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19775981 Country of ref document: EP Kind code of ref document: A1 |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19775981 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 14/04/2021) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19775981 Country of ref document: EP Kind code of ref document: A1 |