WO2019184934A1 - Chrominance intra prediction method and apparatus - Google Patents

Chrominance intra prediction method and apparatus

Info

Publication number
WO2019184934A1
WO2019184934A1 (application PCT/CN2019/079808)
Authority
WO
WIPO (PCT)
Prior art keywords
block
luma
motion vector
prediction
luminance
Prior art date
Application number
PCT/CN2019/079808
Other languages
English (en)
French (fr)
Inventor
左旭光 (Zuo Xuguang)
Original Assignee
杭州海康威视数字技术股份有限公司 (Hangzhou Hikvision Digital Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co., Ltd. (杭州海康威视数字技术股份有限公司)
Publication of WO2019184934A1 publication Critical patent/WO2019184934A1/zh

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103Selection of coding mode or of prediction mode
    • H04N19/105Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/117Filters, e.g. for pre-processing or post-processing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/157Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/159Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/186Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/593Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/90Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N19/96Tree coding, e.g. quad-tree coding

Definitions

  • the present application relates to the field of video coding and decoding, and in particular, to a chrominance intra prediction method and apparatus.
  • intra prediction technology is a technique for extracting correlation between pixels within an image.
  • In the High Efficiency Video Coding (HEVC) Screen Content Coding (SCC) extension standard, there is an intra prediction technique called intra block copy, which is also referred to as the intra-frame motion compensation technique. In this technique, for a block to be processed (such as a coding block or a decoding block), a search is performed in the already-encoded region of the image frame to which the block belongs to find the best matching block, which is used as its prediction block for encoding. Compared with the conventional intra prediction technique, in the encoding process the intra motion compensation technique needs to determine and encode a corresponding motion vector for the coding block. In this way, the decoding end can find the prediction block of the corresponding decoding block according to the motion vector and complete the decoding process.
  • pixel values include a luminance component Y, a chrominance component U, and a chrominance component V.
  • In HEVC, the division of the luminance component and that of the chrominance component is consistent.
  • The proposed Joint Exploration Model (JEM, the reference software model of H.266) has a significant performance improvement over HEVC.
  • the luminance component and the chrominance component are separately divided and coded, so the division of the luminance component and the division of the chrominance components are no longer consistent.
  • In this separated encoding process, it must be decided separately for the luminance component and for the chrominance component whether to adopt the intra-frame motion compensation technique.
  • If the technique is adopted, the corresponding motion vectors need to be determined respectively, and the corresponding prediction blocks are then predicted based on the determined motion vectors to carry out the respective video encoding and decoding processes.
  • As a result, the processing flow of the intra motion compensation technique is complicated, and the computational cost of determining the motion vectors is high.
  • The embodiment of the present application provides a chrominance intra prediction method and apparatus, which addresses the complicated processing flow and high motion-vector computation cost of the current intra-frame motion compensation technique.
  • the technical solution is as follows:
  • a chrominance intra prediction method comprising:
  • when the target intra prediction mode is the intra motion compensation mode, the motion vector of the current chroma block in the image frame to be processed is determined based on the motion vector of a reference luma block; the target intra prediction mode is a mode used for predicting the prediction block of the current chroma block, and in the intra motion compensation mode the motion vector of the current chroma block is generated based on the motion vector of a luma block, the reference luma block being one of the n luma blocks corresponding to the current chroma block position, n ≥ 1;
  • a chrominance intra prediction apparatus comprising:
  • a first determining module, configured to determine a motion vector of the current chroma block in the image frame to be processed based on the motion vector of a reference luma block when the target intra prediction mode is an intra motion compensation mode, where the target intra prediction mode is a mode for predicting the prediction block of the current chroma block; in the intra motion compensation mode, the motion vector of the current chroma block is generated based on the motion vector of a luma block, and the reference luma block is one of the n luma blocks corresponding to the current chroma block position, n ≥ 1;
  • a prediction module configured to predict a prediction block of the current chroma block based on a motion vector of the current chroma block.
  • a chrominance intra prediction apparatus including:
  • at least one processor;
  • at least one memory;
  • the at least one memory stores at least one program, and the at least one processor is configured to execute the at least one program to perform the chrominance intra prediction method of any of the first aspects.
  • a storage medium in which instructions or code are stored;
  • the instructions or code, when executed by a processor, enable the processor to perform the chrominance intra prediction method described in any of the first aspects.
  • With the chrominance intra prediction method and apparatus, since the motion vector of the chrominance block is determined based on the motion vector of the luminance block, the correlation between the motion vector of the luminance component and that of the chrominance component is fully utilized. Therefore, it is not necessary to separately calculate the motion vector of the chrominance component, which simplifies the processing flow of the intra motion compensation technique, reduces the computational cost of the motion vector of the chrominance component, and correspondingly reduces the computational cost of the overall motion vectors.
  • In addition, since the motion vector of the chrominance block is generated based on the motion vector of the luma block, it is not necessary to separately encode the motion vector of the chrominance block during encoding, which reduces the coding cost of the motion vector of the chrominance component and helps improve video coding efficiency.
  • FIG. 1 is a flowchart of a chrominance intra prediction method according to an exemplary embodiment
  • FIG. 2 is a flowchart of another chrominance intra prediction method according to an exemplary embodiment
  • FIG. 3 is a schematic diagram showing a division manner of a chroma maximum coding unit according to an exemplary embodiment
  • FIG. 4 is a schematic diagram showing the numbering of the blocks obtained in FIG. 3;
  • FIG. 5 is a schematic diagram of a division manner of another chroma maximum coding unit according to an exemplary embodiment
  • FIG. 6 is a flowchart of a method for adding an intra motion compensation mode to a prediction mode candidate queue according to an exemplary embodiment
  • FIG. 8 and FIG. 9 are schematic diagrams showing the correspondence relationship between a chroma block and a luminance image region in a scene with an encoding format of 4:2:0;
  • FIG. 10 is a schematic diagram of a to-be-processed image frame during processing according to an exemplary embodiment
  • FIG. 11 and FIG. 12 are schematic structural diagrams showing two coding division manners according to an exemplary embodiment
  • FIG. 13 is a flowchart of still another chrominance intra prediction method according to an exemplary embodiment
  • FIG. 14 is a flowchart of a chrominance intra prediction method according to another exemplary embodiment
  • FIG. 15 is a schematic structural diagram of a chrominance intra prediction apparatus according to an exemplary embodiment
  • FIG. 16 is a schematic structural diagram of another chrominance intra prediction apparatus according to an exemplary embodiment
  • FIG. 17 is a schematic structural diagram of a construction module according to an exemplary embodiment
  • FIG. 18 is a schematic structural diagram of a third determining module according to an exemplary embodiment
  • FIG. 19 is a schematic structural diagram of another third determining module according to an exemplary embodiment.
  • FIG. 20 is a schematic structural diagram of still another chrominance intra prediction apparatus according to an exemplary embodiment
  • FIG. 21 is a schematic structural diagram of still another chrominance intra prediction apparatus according to an exemplary embodiment
  • FIG. 22 is a schematic structural diagram of a chrominance intra prediction apparatus according to another exemplary embodiment.
  • FIG. 23 is a schematic structural diagram of another chrominance intra prediction apparatus according to another exemplary embodiment.
  • FIG. 24 is a schematic structural diagram of still another chrominance intra prediction apparatus according to another exemplary embodiment.
  • FIG. 25 is a schematic structural diagram of still another chrominance intra prediction apparatus according to another exemplary embodiment.
  • An embodiment of the present application provides a chrominance intra prediction method, which is applied to the field of video coding and decoding.
  • The chrominance intra prediction method is applicable to the encoding and decoding of video in the YUV format.
  • The basic encoding principle may be: an image is captured by an image acquisition device such as a three-tube color camera or a charge-coupled device (CCD) camera; the obtained color image signal is subjected to color separation and separate amplification to obtain an RGB signal; the RGB signal is then passed through a matrix conversion circuit to obtain the signal of the luminance component Y and two color-difference signals, B-Y (i.e., the signal of the chrominance component U) and R-Y (i.e., the signal of the chrominance component V).
  • In this way, the signal of the luminance component Y, the signal of the chrominance component U, and the signal of the chrominance component V represented in the YUV color space are obtained separately.
  • the above-mentioned YUV format can also be obtained by other means, which is not limited by the embodiment of the present application.
  • The image in the YUV format (hereinafter referred to as the target image) is usually obtained by capturing an initial image with an image capturing device such as a camera and subjecting the captured image to a series of processing (for example, format conversion). The sampling rates (also called sampling ratios) of the luminance component Y, the chrominance component U, and the chrominance component V may be different. In the initial image, the distribution density of each color component is the same, that is, the distribution density ratio of the color components is 1:1:1; because the sampling rates of the color components differ, the distribution densities of the different color components of the target image differ.
  • the distribution density ratio of each color component is equal to the sampling rate ratio.
  • The distribution density of a color component refers to the number of pieces of information of that color component contained in a unit size.
  • the distribution density of the luminance component refers to the number of luminance pixel values (also referred to as luminance values) included in the unit size.
  • the current YUV format is divided into multiple encoding formats based on different sampling rate ratios.
  • The encoding format can be expressed as a sampling rate ratio. This representation is called A:B:C notation, and the current encoding formats include 4:4:4, 4:2:2, 4:2:0, and 4:1:1.
  • An encoding format of 4:4:4 indicates that the luminance component Y and the chrominance components U and V in the target image have the same sampling rate, no downsampling is performed on the original image, and the distribution density ratio of the color components of the target image is 1:1:1. An encoding format of 4:2:2 indicates that every two luminance components Y in the target image share one set of chrominance components U and V, and the distribution density ratio of the color components of the target image is 2:1:1; that is, taking the pixel as the sampling unit, the luminance component of the original image is not downsampled, and the chrominance components of the original image are downsampled 2:1 in the horizontal direction and not downsampled in the vertical direction to obtain the target image.
  • An encoding format of 4:2:0 indicates that, for each of the chrominance components U and V in the target image, the downsampling ratio is 2:1 in both the horizontal and vertical directions; the ratio of the distribution density of the luminance component Y to that of the chrominance component U of the target image is 2:1, and the ratio of the distribution density of the luminance component Y to that of the chrominance component V of the target image is likewise 2:1. That is, taking the pixel as the sampling unit, the luminance component of the original image is not downsampled, and the chrominance components of the original image are downsampled 2:1 in the horizontal direction and 2:1 in the vertical direction to obtain the target image.
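  • The subsampling described above fixes the chroma plane dimensions relative to the luma plane. As an illustrative sketch (the format table and helper below are not part of the patent text):

```python
# Per-direction chroma downsampling factors (horizontal, vertical) for the
# encoding formats described above. Illustrative only; not from the patent.
SUBSAMPLING = {
    "4:4:4": (1, 1),
    "4:2:2": (2, 1),
    "4:2:0": (2, 2),
    "4:1:1": (4, 1),
}

def chroma_plane_size(luma_width, luma_height, fmt):
    """Return (width, height) of each chroma plane for the given format."""
    sx, sy = SUBSAMPLING[fmt]
    return luma_width // sx, luma_height // sy

print(chroma_plane_size(1920, 1080, "4:2:0"))  # (960, 540)
```

For 4:2:0 each chroma plane carries one quarter of the luma plane's samples, which is why the luma and chroma block partitions can diverge when coded separately.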
  • Correspondingly, an embodiment of the present application provides a chrominance intra prediction method applied to the encoding and decoding of I frames. An I frame, also called an intra picture, is usually the first frame of a group of pictures (GOP) and is also called an intra-prediction-encoded frame or a key frame.
  • the intra-prediction method of the chroma includes:
  • Step 101 When the target intra prediction mode is the intra motion compensation mode, determine a motion vector of the current chroma block in the image frame to be processed based on the motion vector of the reference luma block.
  • the target intra prediction mode is a mode for predicting a prediction block of a current chroma block
  • In the intra motion compensation mode, the prediction block of the current chroma block is generated by using the intra motion compensation technique. The generation process is: acquiring the motion vector of the current chroma block, and generating the prediction block based on that motion vector.
  • the motion vector of the current chroma block is generated based on the motion vector of the reference luma block.
  • The reference luma block is one of the n luma blocks corresponding to the current chroma block position, n ≥ 1.
  • Step 102 Predict a prediction block of a current chroma block based on a motion vector of a current chroma block.
  • the current chroma block refers to the chroma block to be currently encoded
  • In the decoding process, the current chroma block refers to the chroma block to be decoded currently. The current chroma block may be an image block of the chrominance component U or an image block of the chrominance component V.
  • the luminance component has a correlation with the chrominance component, and accordingly, the motion vector also has a correlation.
  • With the chrominance intra prediction method provided by the embodiment of the present application, since the motion vector of the chrominance block is determined based on the motion vector of the luminance block, the correlation between the motion vector of the luminance component and that of the chrominance component is fully utilized. It is not necessary to separately calculate the motion vector of the chrominance component, which simplifies the processing flow of the intra motion compensation technique, reduces the computational cost of the motion vector of the chrominance component, and correspondingly reduces the computational cost of the overall motion vectors.
  • In addition, since the motion vector of the chrominance block is generated based on the motion vector of the luma block, it is not necessary to separately encode the motion vector of the chrominance block during encoding, which reduces the coding cost of the motion vector of the chrominance component and helps improve video coding efficiency.
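  • As a minimal sketch of steps 101-102 (illustrative only; the patent states only that the chroma motion vector is "generated based on" the reference luma block's motion vector, and this particular scaling rule is an assumption): for a 4:2:0 image the chroma plane is subsampled 2:1 in each direction, so a luma motion vector can be rescaled to chroma resolution by the distribution density ratio:

```python
def chroma_mv_from_luma(luma_mv, density_ratio=(2, 2)):
    """Derive a chroma-block motion vector from a reference luma block's
    motion vector by dividing each component by the luma:chroma
    distribution-density ratio in that direction (2:1 horizontally and
    vertically for 4:2:0). Hypothetical helper, not the patent's
    normative procedure."""
    kx, ky = density_ratio
    mvx, mvy = luma_mv
    return mvx // kx, mvy // ky

print(chroma_mv_from_luma((-8, 4)))  # (-4, 2)
```

The prediction block is then fetched from the already-reconstructed chroma region at the position offset by this vector, mirroring the intra block copy search result of the luma plane.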
  • the intra prediction method may be applied to both the encoding end and the decoding end.
  • The following describes the application of the intra prediction method at the encoding end and at the decoding end respectively:
  • the chrominance intra prediction method is performed by an encoding end, which is used for encoding of an I frame, and the method includes:
  • Step 201 Perform chroma block division on the image frame to be processed.
  • the image frame to be processed includes a luminance image and a chrominance image located in the same region, and the encoding process of the luminance component is actually encoding the luminance image, and the encoding process of the chrominance component is actually encoding the chrominance image.
  • In the encoding process, the image frame to be processed is usually first divided into maximum coding units (coding tree units, CTUs) of equal size; the maximum coding unit includes a luma maximum coding unit and a chroma maximum coding unit.
  • The maximum coding unit is usually a square coding block, which may be 8 × 8 pixels, 16 × 16 pixels, 32 × 32 pixels, 64 × 64 pixels, 128 × 128 pixels, or 256 × 256 pixels.
  • the luminance component is first encoded, and the encoding process of the luminance component may include: dividing the maximum luminance coding unit, and the divided luminance block may be a square or a rectangle.
  • the encoding type of each luma block and the prediction information corresponding to the encoding type are then determined.
  • the coding type of the luma block generally includes an intra prediction type or an inter prediction type (also referred to as an inter coding type).
  • The prediction information corresponding to the intra prediction type includes a prediction mode; the prediction information corresponding to the inter prediction type includes a prediction vector and a reference frame index.
  • For each luma block, after the prediction information is determined based on the determined encoding type, the luma block is predicted according to the prediction information to obtain its prediction block. The original pixel values of the luma block are then compared with the pixel values of the prediction block to obtain the residuals of the luma block (all residuals of the luma block constitute a residual block), and the residuals are transformed, quantized, and entropy encoded to obtain the code stream of the corresponding luma block.
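  • The predict-subtract-encode flow above can be sketched at the residual step (a toy example; real codecs then transform, quantize, and entropy-code the residual block):

```python
def residual_block(original, prediction):
    """Residuals of a luma block: original pixel values minus the
    co-located prediction-block pixel values, element by element."""
    return [[o - p for o, p in zip(orow, prow)]
            for orow, prow in zip(original, prediction)]

# Hypothetical 2x2 block for illustration.
orig = [[120, 121], [119, 118]]
pred = [[118, 120], [119, 120]]
print(residual_block(orig, pred))  # [[2, 1], [0, -2]]
```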
  • the intra motion compensation technique is an intra prediction technique, but when marking the coding type, it is usually marked as an inter prediction type.
  • After encoding the luma blocks in the image frame to be processed, the chroma maximum coding unit can be divided.
  • the division of the chroma maximum coding unit adopts a quadtree plus binary tree division method, and the division manner may be pre-agreed with the decoding end or encoded in the code stream after division.
  • the process includes:
  • FIG. 3 is a schematic diagram of the division of the chroma maximum coding unit.
  • Using a quadtree, the chroma maximum coding unit P is divided into four blocks P1, P2, P3, and P4; then the first block P1 and the fourth block P4 are each further divided by a quadtree into four blocks, namely P11, P12, P13, P14 and P41, P42, P43, P44 respectively.
  • Each block finally obtained by quadtree partitioning is called a quadtree leaf node.
  • A quadtree leaf node is represented by Qi.
  • Here i denotes the number of the quadtree leaf node and also indicates the coding order, as shown in FIG. 4.
  • FIG. 4 is a schematic diagram of the numbering of the blocks obtained by the division in FIG. 3.
  • The coding order of the quadtree leaf nodes is: for blocks sharing one parent node, encoding proceeds in scanning order from left to right and from top to bottom.
  • In addition, each quadtree leaf node can be further divided by a binary tree to obtain two equal-sized blocks.
  • A block obtained by binary tree division can be further divided by a binary tree; as shown in FIG. 5, the broken lines indicate the block division result finally obtained after binary tree division.
  • Each block finally obtained by binary tree partitioning is called a binary-tree leaf node.
  • The two blocks obtained by binary tree division are also encoded in order from left to right and from top to bottom.
  • The blocks obtained after quadtree and binary tree division are the final chroma blocks, denoted by Bi, where i denotes the number of the chroma block and also indicates the chroma block coding order.
  • The above method of quadtree plus binary tree division of the chroma maximum coding unit is only a schematic description. In actual implementation, the embodiment of the present application may also adopt only quadtree partitioning or only binary tree partitioning; the division of the chroma maximum coding unit is not limited in this embodiment of the present application.
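  • The quadtree split and the left-to-right, top-to-bottom scan order described above can be sketched as follows (an illustrative toy reproducing the example of FIG. 3; the 64 × 64 unit size and the hard-coded split decisions are assumptions, not values from the text):

```python
def quad_split(x, y, w, h):
    """Split a block at (x, y) of size w x h into four equal sub-blocks,
    listed in scan order: left-to-right, then top-to-bottom."""
    hw, hh = w // 2, h // 2
    return [(x, y, hw, hh), (x + hw, y, hw, hh),
            (x, y + hh, hw, hh), (x + hw, y + hh, hw, hh)]

# Reproduce FIG. 3: the unit P is quad-split into P1..P4, and P1 and P4
# are each quad-split again, giving ten quadtree leaf nodes.
ctu = (0, 0, 64, 64)
p1, p2, p3, p4 = quad_split(*ctu)
leaves = quad_split(*p1) + [p2, p3] + quad_split(*p4)
print(len(leaves))  # 10
```

Listing the leaves in this order directly yields the coding order Q1, Q2, ... described for blocks sharing one parent node.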
  • Step 202 Construct, for the current chroma block, a prediction mode candidate queue, where the prediction mode candidate queue includes at least one prediction mode, and each prediction mode is a mode for predicting a prediction block of the current chroma block.
  • the prediction mode candidate queue may have multiple types of prediction modes.
  • the multiple types of prediction modes include: an intra prediction mode, a cross component prediction mode, and an intra motion compensation mode.
  • the definition of the intra prediction mode and the cross component prediction mode can be referred to JEM.
  • the intra motion compensation mode is a new mode added in the prediction mode candidate queue proposed in the embodiment of the present application.
  • In the intra motion compensation mode, the motion vector of the current chroma block is generated based on the motion vector of a luma block.
  • Each prediction mode in the prediction mode candidate queue has its corresponding mode number.
  • 67 intra prediction modes are represented by 0 to 66
  • 6 cross component prediction modes are represented by 67 to 72.
  • Therefore, the intra motion compensation mode added to the prediction mode candidate queue is represented by a mode number different from those of the traditional prediction modes, for example a mode number greater than or equal to 73; such a mode number then indicates the intra motion compensation mode.
  • The process of constructing the prediction mode candidate queue may include: a process of adding the cross-component prediction modes to the prediction mode candidate queue; a process of adding the intra motion compensation mode to the prediction mode candidate queue; and a process of adding the intra prediction modes to the prediction mode candidate queue.
  • the sequence of execution of the three processes is not limited in the embodiment of the present application. Generally, the three processes may be sequentially performed in the order of adding the cross-component prediction mode, the intra motion compensation mode, and the intra prediction mode.
  • The length of the prediction mode candidate queue (that is, the number of prediction modes allowed to be added to the queue) is preset, i.e., there exists a length threshold; as an example, the length threshold is 11. Therefore, the execution of the three processes is also limited by the length threshold.
  • the process of constructing a prediction mode candidate queue includes:
  • Step A1 Add a cross-component prediction mode to the prediction mode candidate queue.
  • Specifically, step A1 includes adding at least one cross-component prediction mode to the prediction mode candidate queue; for example, the six cross-component prediction modes with mode numbers 67 to 72 may be sequentially added to the prediction mode candidate queue.
  • Step A2 Add the intra motion compensation mode to the prediction mode candidate queue.
  • Step A3 If the prediction mode candidate queue is not full, the intra prediction modes of the n luma blocks corresponding to the current chroma block are added to the prediction mode candidate queue in the first order.
  • Step A4 If the prediction mode candidate queue is not full, the intra prediction mode of the chroma block adjacent to the current chroma block is added to the prediction mode candidate queue in the second order.
  • Step A5 If the prediction mode candidate queue is not full, add the planar mode (mode number 3) and the DC mode.
  • Step A6 If the prediction mode candidate queue is not full, add the directional modes adjacent to the directional modes already in the prediction mode candidate queue.
  • The directional mode is one type of prediction mode; if the prediction mode candidate queue is not full, the directional modes adjacent to those already in the queue need to be added.
  • Step A7 If the prediction mode candidate queue is not full, add the vertical mode, the horizontal mode, and the No. 2 intra prediction mode (i.e., the intra prediction mode with mode number 2).
  • In step A7, since the prediction mode candidate queue may be nearly full, the number of modes that can still be added may be less than or equal to three. Therefore, the vertical mode, the horizontal mode, and the No. 2 intra prediction mode are added in sequence until the prediction mode candidate queue is full; eventually only one or two of the vertical mode, the horizontal mode, and the No. 2 intra prediction mode may be added.
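  • Steps A1 to A7 can be sketched as a bounded-queue fill (illustrative only; the mode numbers for DC, vertical, and horizontal are assumed JEM-style values rather than values from the text, the luma/neighbor candidate lists are hypothetical inputs, and step A6 is omitted for brevity):

```python
QUEUE_LIMIT = 11                  # length threshold from the text
IBC_MODE = 73                     # intra motion compensation mode (>= 73 per text)
CCP_MODES = list(range(67, 73))   # six cross-component prediction modes, 67-72

def build_candidate_queue(luma_modes, neighbor_modes):
    """Fill the prediction-mode candidate queue in the order A1..A7,
    skipping duplicates and stopping at the length threshold."""
    queue = []
    def add(modes):
        for m in modes:
            if len(queue) >= QUEUE_LIMIT:
                return
            if m not in queue:
                queue.append(m)
    add(CCP_MODES)        # A1: cross-component prediction modes
    add([IBC_MODE])       # A2: intra motion compensation mode
    add(luma_modes)       # A3: intra modes of co-located luma blocks
    add(neighbor_modes)   # A4: intra modes of neighboring chroma blocks
    add([3, 1])           # A5: planar (3 per text) and DC (1, assumed)
    add([50, 18, 2])      # A7: vertical, horizontal, mode 2 (assumed numbers)
    return queue

print(len(build_candidate_queue([50, 34], [18])))  # 11
```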
  • the process of adding the intra motion compensation mode to the prediction mode candidate queue in step A2 may include:
  • Step 2021: Determine n luma blocks corresponding to the current chroma block position in the image frame to be processed, where n ≥ 1.
  • the process of determining n luma blocks corresponding to the current chroma block position in the image frame to be processed may include:
  • Step B1: Determine a luminance image region corresponding to the current chroma block position in the image frame to be processed.
  • each chroma block may be associated with one or more luma blocks.
  • multiple chrominance blocks may also correspond to one luminance block position.
  • the position correspondence relationship between the chroma block and the luma block is related to the encoding format of the image frame to be processed.
  • The distribution densities of the luminance component and the chrominance component may be the same or different. Therefore, in the embodiment of the present application, the luminance image region corresponding to the current chroma block position in the image frame to be processed needs to be determined first, and then the n luma blocks corresponding to the current chroma block position are further determined.
  • w2 is the size of the current chroma block
  • K is the distribution density ratio of the luminance component and the chroma component.
  • The ratio of the distribution densities of the luminance component Y and the chrominance component U of the image frame to be processed is K, where the distribution density ratio of the luminance component to the chrominance component in the horizontal direction (which can be regarded as the width direction of the luminance and chrominance components) is K1, and the distribution density ratio in the vertical direction (which can be regarded as the height direction of the luminance and chrominance components) is K2. That is, the distribution density ratios of the luminance component to the chrominance component in the width direction and the height direction are K1 and K2, respectively.
  • the current chroma block has a width of CW and a height of CH, and the coordinates of its upper left pixel in the image frame to be processed are (Cx, Cy)
  • the luminance image area is the rectangular area whose upper-left pixel coordinates are (K1×Cx, K2×Cy) and whose width and height are K1×CW and K2×CH, respectively.
  • the distribution density ratio of the luminance component Y to the chrominance component U of the image frame to be processed is 2:1, and the distribution density ratio of the luminance component Y to the chrominance component V is also 2:1; that is, the width and height of the luminance component of the image frame to be processed are twice the width and height of the chrominance component, respectively, and the distribution density ratios of the luminance component to the chrominance component in the width direction and the height direction are both 2:1
  • the luminance image area is then the rectangular area whose upper-left pixel coordinates are (2×Cx, 2×Cy) and whose width and height are 2×CW and 2×CH, respectively.
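The mapping just described can be sketched as a small helper; this is an illustration only, with the function name and argument order as assumptions. k1 and k2 are the width- and height-direction distribution density ratios, both 2 for the 4:2:0 format.

```python
# Hedged sketch of step B1: compute the luminance image region corresponding
# to a chroma block with top-left (cx, cy), width cw, and height ch.

def luma_region(cx, cy, cw, ch, k1=2, k2=2):
    # returns (x, y, width, height) of the luminance image region
    return (k1 * cx, k2 * cy, k1 * cw, k2 * ch)
```

For example, a chroma block at (8, 4) with size 8×8 under 4:2:0 maps to the luma region (16, 8, 16, 16).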
  • FIG. 7 to FIG. 9 are schematic diagrams showing the correspondence relationship between a chroma block and a luminance image region in a scene with an encoding format of 4:2:0.
  • In the image frame to be processed, the luminance image region M1 corresponding to the position of the chroma block B1 contains 6 luminance blocks, M11 to M16; as shown in FIG. 8, in the image frame to be processed, the luminance image region M2 corresponding to the position of the chroma block B2 contains one luminance block, M17; as shown in FIG. 9, in the image frame to be processed, the luminance image region M3 corresponding to the position of the chroma block B3 contains half a luminance block, that is, the chroma block B3 corresponds to 1/2 of a luminance block, and in this case two chroma blocks correspond to the position of one luminance block M18.
  • Step B2: Determine n luminance blocks from all target luminance blocks.
  • The target luminance block is a luminance block that is partially or wholly located in the luminance image region; that is, if part or all of a luminance block is located in the luminance image region in the image frame to be processed, the luminance block is determined to be a target luminance block.
  • In the first determination manner, all the target luma blocks can be used as the n luma blocks; the second determination manner performs screening among all the target luma blocks to reduce subsequent computation cost.
  • In the second determination manner, the luminance blocks at specified positions are selected as the n luminance blocks. A luminance block at a specified position refers to a luminance block covering a specified pixel point in the luminance image region; covering a specified pixel point means that the specified pixel point is among the pixels contained in the luminance block.
  • The specified pixel points include the central pixel point CR in the luminance image area, the upper-left corner pixel LT, the upper-right corner pixel TR, the lower-left corner pixel BL, and the lower-right corner pixel BR in the luminance image area.
  • Accordingly, the n luminance blocks selected from the target luminance blocks include: the luminance block covering the central pixel point in the luminance image area, the luminance block covering the upper-left corner pixel, the luminance block covering the upper-right corner pixel, the luminance block covering the lower-left corner pixel, and the luminance block covering the lower-right corner pixel in the luminance image area.
  • the origin of the image coordinate system of the image frame to be processed is its upper left corner
  • the corresponding luminance image area of the current chroma block in the image frame to be processed is:
  • a rectangular area whose upper-left corner pixel LT has coordinates (2×Cx, 2×Cy) and whose width and height are 2×CW and 2×CH, respectively.
  • the coordinates of the central pixel point CR, the upper-right corner pixel TR, the lower-left corner pixel BL, and the lower-right corner pixel BR in the luminance image region corresponding to the current chroma block are (2×Cx+CW, 2×Cy+CH), (2×Cx+2×CW, 2×Cy), (2×Cx, 2×Cy+2×CH), and (2×Cx+2×CW, 2×Cy+2×CH), respectively.
  • a luminance block covering a specified pixel point in all target luminance blocks is a luminance block covering LT, CR, TR, BL, and BR.
  • the luminance blocks covering LT, CR, TR, BL, and BR may differ from case to case; for example, the luminance blocks covering these 5 pixel points may be 5 different luminance blocks, or may all be 1 luminance block.
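The five specified pixel points and the de-duplication of the luma blocks covering them can be sketched as below. This is a hedged illustration for the 4:2:0 case using the coordinates given above; `block_at`, a lookup from a pixel position to the luma block covering it, is an assumed helper, not part of the embodiment.

```python
# Illustrative sketch: the specified points LT, CR, TR, BL, BR of a 4:2:0
# chroma block, and the de-duplicated list of luma blocks covering them.

def specified_points(cx, cy, cw, ch):
    lt = (2 * cx, 2 * cy)
    cr = (2 * cx + cw, 2 * cy + ch)
    tr = (2 * cx + 2 * cw, 2 * cy)
    bl = (2 * cx, 2 * cy + 2 * ch)
    br = (2 * cx + 2 * cw, 2 * cy + 2 * ch)
    return [cr, lt, tr, bl, br]   # in the order CR, LT, TR, BL, BR

def covering_luma_blocks(points, block_at):
    blocks = []
    for p in points:
        b = block_at(p)
        if b is not None and b not in blocks:  # one block may cover several points
            blocks.append(b)
    return blocks
```

In the FIG. 8 scene, all five points fall inside the single luma block M17, so the de-duplicated result is just that one block.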
  • In the scene shown in FIG. 7, all the target luminance blocks are the 6 luminance blocks M11 to M16.
  • The luminance block covering the central pixel point CR in the luminance image area is the luminance block M15,
  • the luminance block covering the upper-left corner pixel LT in the luminance image region is the luminance block M11,
  • the luminance block covering the upper-right corner pixel TR in the luminance image region is the luminance block M13,
  • the luminance block covering the lower-left corner pixel BL in the luminance image region is the luminance block M14,
  • and the luminance block covering the lower-right corner pixel BR in the luminance image region is the luminance block M16.
  • Thus, in the first determination manner, the determined n luma blocks are the 6 luma blocks M11 to M16; in the second determination manner, the determined n luma blocks are the 5 luma blocks M11, M13, M14, M15, and M16.
  • In the scene shown in FIG. 8, all the target luminance blocks are the luminance block M17.
  • The luminance blocks covering the specified pixel points (including the lower-right corner pixel BR) in the luminance image area are all the same luminance block M17; then, whether the first determination manner or the second determination manner is adopted, the determined n luminance blocks are the luminance block M17.
  • In the scene shown in FIG. 9, the luminance block M18 is a target luminance block, and therefore all the target luminance blocks are the luminance block M18; then, regardless of whether the first determination manner or the second determination manner is adopted, the determined n luminance blocks are the luminance block M18.
  • the specified pixel points may also be set to other positions according to a specific scene.
  • the specified pixel point may also be at least one of the central pixel of the upper-edge pixel row in the luminance image area, the central pixel of the lower-edge pixel row, the central pixel of the left-edge pixel column, and the central pixel of the right-edge pixel column, etc.; the above is merely illustrative of the embodiments of the present application.
  • Step 2022: Detect whether, among the n luminance blocks, there is a luminance block whose motion vector can be referenced.
  • The process of detecting whether, among the n luminance blocks, there is a luminance block whose motion vector can be referenced may include:
  • the motion vectors of the luminance blocks in the n luminance blocks are sequentially detected according to the target order, until the detection stop condition is reached.
  • the detection stop condition is that the total number of motion vectors that can be referenced is equal to a preset number threshold k, or that n luminance blocks are traversed.
  • k can be 1 or 5.
  • The detection process includes:
  • Step C1: Detect whether the motion vector of the i-th luma block among the n luma blocks can be referenced.
  • Step C2: When the motion vector of the i-th luma block can be referenced, detect whether the detection stop condition is reached.
  • Step C4: When the detection stop condition is reached, stop the detection process.
  • In step C1, the process of detecting whether the motion vector of the i-th luma block among the n luma blocks can be referenced may include:
  • Step C11: Detect the prediction type of the i-th luma block among the n luma blocks.
  • Step C12: When the prediction type of the i-th luma block is the intra prediction type, determine that the motion vector of the i-th luma block cannot be referenced.
  • The prediction information corresponding to the intra prediction type does not include a motion vector,
  • while the intra motion compensation mode provided by the embodiment of the present application is similar to inter prediction applied within an I frame, whose prediction information includes a motion vector. Therefore, when the prediction type of the i-th luma block is the intra prediction type, its prediction information has no motion vector and cannot be referenced by the current chroma block.
  • Step C13: When the prediction type of the i-th luma block is the inter prediction type, generate a candidate motion vector based on the motion vector of the i-th luma block.
  • The process of generating a candidate motion vector based on the motion vector of the i-th luma block may include:
  • Step C131: Determine a vector scaling ratio between the current chroma block and the i-th luma block according to the encoding format of the image frame to be processed.
  • The vector scaling ratio between the current chroma block and the i-th luma block is equal to the ratio of the distribution densities of the chroma blocks and the luma blocks in the image frame to be processed, and this distribution density ratio is determined by the encoding format of the image frame to be processed. For example, when the encoding format is 4:2:0, the ratio of the distribution density of chroma blocks to luma blocks is 1:2 in the horizontal direction (also called the x direction) and 1:2 in the vertical direction (also called the y direction); thus the vector scaling ratio of the current chroma block to the i-th luma block is 1:2 in the horizontal direction and 1:2 in the vertical direction.
  • Step C132: Scale the motion vector of the i-th luma block based on the vector scaling ratio to obtain the candidate motion vector of the i-th luma block.
  • That is, the motion vector of the i-th luma block is scaled proportionally to obtain the candidate motion vector of the i-th luma block. For example, if the motion vector of the i-th luma block is (-11, -3) and the encoding format is 4:2:0, the candidate motion vector of the current chroma block is (-5.5, -1.5).
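Steps C131-C132 amount to a componentwise scaling; the following is a minimal sketch assuming the 4:2:0 halving described above, with the function name as an illustration.

```python
# Hedged sketch of steps C131-C132: scale a luma motion vector by the
# chroma:luma density ratio to obtain the chroma candidate motion vector.
# For 4:2:0 the ratio is 1:2 in both directions, i.e. the vector is halved.

def scale_luma_mv(mv, ratio_x=0.5, ratio_y=0.5):
    mvx, mvy = mv
    return (mvx * ratio_x, mvy * ratio_y)
```

With the example from the text, a luma motion vector of (-11, -3) under 4:2:0 yields the chroma candidate vector (-5.5, -1.5).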
  • Step C14: Detect whether the candidate motion vector corresponding to the i-th luma block is the same as a candidate motion vector corresponding to any luma block already detected as having a referenceable motion vector.
  • Step C14 is actually a process of finding repeated motion vectors, which is referred to as a duplicate-checking process.
  • When the candidate motion vector corresponding to the i-th luma block is different from the candidate motion vectors corresponding to the luma blocks already detected as having referenceable motion vectors, step C15 is performed; when it is the same as one of them, step C18 is performed.
  • Step C15: When the candidate motion vector corresponding to the i-th luma block is different from the candidate motion vectors corresponding to the luma blocks already detected as having referenceable motion vectors, determine, based on the candidate motion vector corresponding to the i-th luma block, a candidate prediction block for predicting the current chroma block.
  • Denoting the candidate motion vector as (MVx, MVy), the candidate prediction block is the image block in the chroma image of the image frame to be processed whose upper-left pixel coordinates are (Cx+MVx, Cy+MVy) and whose size is the same as that of the current chroma block.
  • Step C16: When the candidate prediction block is valid, determine that the motion vector of the i-th luminance block can be referenced.
  • Step C17: When the candidate prediction block is invalid, determine that the motion vector of the i-th luminance block cannot be referenced.
  • Step C18: When the candidate motion vector corresponding to the i-th luma block is the same as a candidate motion vector corresponding to a luma block already detected as having a referenceable motion vector, determine that the motion vector of the i-th luma block cannot be referenced.
  • After step C15, it may be determined whether the candidate prediction block is valid, so as to perform step C16 or C17.
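The flow of steps C14-C18 can be sketched as follows. This is an illustration only; `is_valid_block` stands in for the validity determination of steps C16/C17 and is an assumed callable, not part of the embodiment.

```python
# Hedged sketch of steps C14-C18: a scaled candidate motion vector is kept
# only if it differs from the candidate vectors of luma blocks already found
# referenceable, and its candidate prediction block is valid.

def check_luma_mv(candidate_mv, referenced_mvs, is_valid_block):
    if candidate_mv in referenced_mvs:    # C14/C18: duplicate, cannot be referenced
        return False
    if not is_valid_block(candidate_mv):  # C17: invalid candidate prediction block
        return False
    referenced_mvs.append(candidate_mv)   # C16: referenceable, remember the vector
    return True
```

Detecting the same vector twice therefore adds it only once, mirroring the duplicate-checking described above.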
  • the determining process may include the following two implementation manners:
  • In a first implementation manner, the chroma-encoded region in the chroma image of the image frame to be processed includes the already-encoded CTUs, and the already-encoded quadtree leaf nodes and binary tree leaf nodes in the CTU to which the current chroma block belongs, such as the shaded area shown in FIG. 10.
  • Denote the candidate motion vector (i.e., the motion vector obtained by scaling the motion vector of the i-th luma block) as (MVx, MVy).
  • If (MVx, MVy) is an integer-pixel motion vector, the candidate prediction block is considered valid when it is located entirely within the chroma-encoded region, and invalid when it is not.
  • If (MVx, MVy) is a sub-pixel motion vector, the candidate prediction block needs to be obtained by interpolation: the reference chroma block corresponding to the candidate prediction block may be obtained first, the chroma pixel values of the candidate prediction block being obtained by interpolating the chroma pixel values of the reference chroma block; it is then detected whether the reference chroma block is located entirely within the chroma-encoded region of the image frame to be processed. When the reference chroma block is located entirely within the chroma-encoded region of the image frame to be processed, it is determined that the candidate prediction block is located entirely within the chroma-encoded region, and the candidate prediction block is considered valid; when the reference chroma block is not located entirely within the chroma-encoded region, the candidate prediction block is considered invalid.
  • Optionally, when detecting whether the reference chroma block is located entirely within the chroma-encoded region, it may be detected whether the coordinates of the upper-left pixel and of the lower-right pixel of the reference chroma block are both within the coordinate range of the chroma-encoded region. When both coordinates are within the coordinate range of the chroma-encoded region, it is determined that the reference chroma block is located entirely within the chroma-encoded region of the image frame to be processed; when at least one of the coordinates of the upper-left pixel and the lower-right pixel is not within that coordinate range, it is determined that the reference chroma block is not located entirely within the chroma-encoded region of the image frame to be processed.
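For the integer-pixel case, the corner-coordinate check just described might look like the following. This is a hedged sketch; `in_encoded_region` is an assumed membership test for the chroma-encoded region, which the text models as a coordinate-range check.

```python
# Illustrative validity check for an integer-pixel candidate motion vector:
# the candidate prediction block is valid only if both its top-left and
# bottom-right pixels fall inside the chroma-encoded region.

def candidate_block_valid(cx, cy, cw, ch, mvx, mvy, in_encoded_region):
    top_left = (cx + mvx, cy + mvy)
    bottom_right = (cx + mvx + cw - 1, cy + mvy + ch - 1)
    return in_encoded_region(top_left) and in_encoded_region(bottom_right)
```

Checking only the two corners matches the coordinate-range formulation above; it presumes the encoded region contains every pixel between those corners.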
  • When the candidate motion vector determined based on the motion vector of the i-th luma block is a sub-pixel motion vector, interpolation processing by an interpolation filter is required to obtain the candidate prediction block.
  • The interpolation filter used in the interpolation process is an N-tap filter, where N is a positive integer; interpolating the chroma pixel value at a sub-pixel position of the candidate prediction block requires the chroma pixel values of N1 pixels to the left of that position, N2 pixels to the right, N3 pixels above, and N4 pixels below, where N1 + N2 = N and N3 + N4 = N.
  • Here, (Cx, Cy) represents the coordinates of the upper-left pixel of the current chroma block, and CW and CH respectively represent the width and height of the current chroma block.
  • MV1x represents the largest integer smaller than MVx
  • MV1y represents the largest integer smaller than MVy.
  • (MVx, MVy) is (-5.5, -1.5)
  • MV1x is -6
  • MV1y is -2.
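As the example shows, MV1x and MV1y are the sub-pixel components rounded toward minus infinity, which `math.floor` provides; this small sketch (function name assumed) reproduces the computation.

```python
# Hedged sketch: MV1x/MV1y for a sub-pixel motion vector are the integer
# parts taken toward minus infinity ("the largest integer smaller than MVx"
# in the text for non-integer components).

import math

def integer_base(mvx, mvy):
    return math.floor(mvx), math.floor(mvy)
```

With (MVx, MVy) = (-5.5, -1.5) this gives MV1x = -6 and MV1y = -2, matching the text.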
  • In a second implementation manner, it is detected whether the candidate prediction block is located entirely within the chroma-encoded region of the image frame to be processed, and whether the candidate prediction block is located in a specified orientation relative to the current chroma block (the order of these two detections is not limited). When the candidate prediction block is located entirely within the chroma-encoded region of the image frame to be processed and in the specified orientation of the current chroma block, the candidate prediction block is determined to be valid; when the candidate prediction block is not located entirely within the chroma-encoded region of the image frame to be processed, or is not located in the specified orientation of the current chroma block, the candidate prediction block is determined to be invalid. As an example, the specified orientation of the current chroma block is any of the left side, the upper side, and the upper-left side of the current chroma block.
  • Optionally, detecting whether the candidate prediction block is located in the specified orientation of the current chroma block may include: detecting whether the coordinates of the lower-right pixel of the candidate prediction block are located in the specified orientation of the current chroma block; when the coordinates of the lower-right pixel of the candidate prediction block are located in the specified orientation of the current chroma block, it is determined that the candidate prediction block is located in the specified orientation of the current chroma block; when they are not, it is determined that the candidate prediction block is not located in the specified orientation of the current chroma block.
  • Detecting whether the candidate prediction block is located in the specified orientation of the current chroma block may also be done in other manners, for example, by detecting the relative position of a first pixel of the candidate prediction block and a second pixel of the current chroma block,
  • where the first pixel and the second pixel may each be any of the upper-left pixel, the upper-right pixel, the middle pixel, the lower-left pixel, and the lower-right pixel. The embodiment of the present application does not limit this.
  • The method for detecting whether the candidate prediction block is located entirely within the chroma-encoded region of the image frame to be processed may refer to the foregoing first implementation manner, and is not repeated in this embodiment of the present application.
  • For example, denote the candidate motion vector (i.e., the motion vector obtained by scaling the motion vector of the i-th luma block) as (MVx, MVy). If (MVx, MVy) is an integer-pixel motion vector, besides requiring (Cx+MVx+(CW-1), Cy+MVy+(CH-1)) to be within the chroma-encoded region,
  • it is also required that (Cx+MVx+(CW-1), Cy+MVy+(CH-1)) be located at any of the left side, the upper side, and the upper-left side of the quadtree leaf node block to which the current chroma block belongs, in order to determine that the candidate prediction block is valid. When the candidate prediction block is not located entirely within the chroma-encoded region, or is located at any position other than the left side, the upper side, and the upper-left side, the candidate prediction block is considered invalid. Likewise, if (MVx, MVy) is a sub-pixel motion vector, the candidate prediction block needs to be obtained by interpolation.
  • In this case, the reference chroma block corresponding to the candidate prediction block may be obtained first, where the chroma pixel values of the candidate prediction block are obtained by interpolation based on the pixel values of the reference chroma block; it is then detected whether the reference chroma block is located entirely within the chroma-encoded region of the image frame to be processed and in the specified orientation of the current chroma block (since the reference chroma block and the candidate prediction block are located in the same orientation relative to the current chroma block, the orientation of the candidate prediction block can be determined by detecting the orientation of the reference chroma block). When the reference chroma block is located entirely within the chroma-encoded region of the image frame to be processed and in the specified orientation of the current chroma block, the candidate prediction block is considered valid; when the reference chroma block is not located entirely within the chroma-encoded region of the image frame to be processed, or is not located in the specified orientation of the current chroma block, the candidate prediction block is considered invalid.
  • When the candidate motion vector determined based on the motion vector of the i-th luma block is a sub-pixel motion vector, interpolation processing by an interpolation filter is required to obtain the candidate prediction block.
  • The interpolation filter used for interpolating the chroma pixel values at sub-pixel positions is an N-tap filter, where N is a positive integer; interpolating the chroma pixel value at a sub-pixel position of the candidate prediction block requires the chroma pixel values of N1 pixels to the left of that position, N2 pixels to the right, N3 pixels above, and N4 pixels below, where N1 + N2 = N and N3 + N4 = N.
  • Besides requiring (Cx+MV1x+(CW-1)+N2, Cy+MV1y+(CH-1)+N4) to be within the chroma-encoded region, it is also necessary that (Cx+MV1x+(CW-1)+N2, Cy+MV1y+(CH-1)+N4) be located at any of the left side, the upper side, and the upper-left side of the quadtree leaf node block to which the current chroma block belongs, in order to determine that the candidate prediction block is valid.
  • FIG. 11 and FIG. 12 are schematic structural diagrams of two coding division manners. Referring to FIG. 11 and FIG. 12, when the quadtree leaf node block to which the current chroma block belongs is coded according to the division manner shown in FIG. 11,
  • the chroma block in the lower-left corner has not yet been encoded; and when the quadtree leaf node block to which the current chroma block belongs is coded according to the division manner shown in FIG. 12, the chroma block in the upper-right corner has not yet been encoded.
  • The second implementation manner avoids determining whether the chroma block in the upper-right corner and the chroma block in the lower-left corner have been encoded, thereby effectively simplifying the process of determining whether the candidate prediction block is valid and reducing computation cost.
  • Step 2023: When, among the n luma blocks, there are luma blocks whose motion vectors can be referenced, add the intra motion compensation mode to the prediction mode candidate queue.
  • In a first addition manner, an intra motion compensation mode may be added only once to the prediction mode candidate queue, to indicate that the motion vector of the current chroma block may be generated based on the motion vector of a luma block.
  • In this manner, adding the intra-frame motion compensation mode once achieves the corresponding prediction-mode indication effect, and the process is relatively simple.
  • In a second addition manner, the intra motion compensation mode may also be added based on the number of luma blocks whose motion vectors are detected as referenceable under the limitation of the detection stop condition; in this case at least one intra motion compensation mode is added.
  • the manner of adding the intra motion compensation mode in the prediction mode candidate queue may include at least the following two types:
  • In the first adding mode, an intra motion compensation mode is added to the prediction mode candidate queue every time a luminance block whose motion vector can be referenced is detected in the target order.
  • That is, each time a luminance block whose motion vector can be referenced is detected, an intra motion compensation mode is added to the prediction mode candidate queue; the mode numbers of the m intra motion compensation modes added to the prediction mode candidate queue may be the same or different.
  • When the mode numbers are different, they may be assigned sequentially starting from the first intra motion compensation mode, and the mode number of an intra motion compensation mode can be represented by a number greater than or equal to 73. For example,
  • the mode number of the first intra motion compensation mode is 73,
  • the mode number of the second intra motion compensation mode is 74,
  • and the mode number of the third intra motion compensation mode is 75.
  • In the second adding mode, after the detection stop condition is reached, if there are m luminance blocks whose motion vectors can be referenced, m intra motion compensation modes are added to the prediction mode candidate queue according to the detection order of the m luminance blocks in the target sequence,
  • where m ≥ 1.
  • The mode numbers of the m intra motion compensation modes added to the prediction mode candidate queue may be the same,
  • or may be assigned sequentially, in the order of addition, starting from the first intra motion compensation mode.
  • The m intra motion compensation modes obtained by the above two adding modes are in one-to-one correspondence with the m luminance blocks whose motion vectors can be referenced.
  • the target order may be set according to a specific situation.
  • The n luma blocks include at least: the luma block covering the central pixel point in the luma image area, the luma block covering the upper-left corner pixel, the luma block covering the upper-right corner pixel, the luma block covering the lower-left corner pixel, and the luma block covering the lower-right corner pixel in the luma image area.
  • The target luminance block is a luminance block partially or wholly located in the luminance image region, the luminance image region being the luminance region corresponding to the current chroma block position in the image frame to be processed.
  • the above target order may be:
  • the luminance block covering the central pixel point in the luminance image area, the luminance block covering the upper-left corner pixel, the luminance block covering the upper-right corner pixel, the luminance block covering the lower-left corner pixel, and the luminance block covering the lower-right corner pixel in the luminance image area, i.e., in the order of the covered pixel points, CR > LT > TR > BL > BR.
  • the above target order may also be a randomly determined order.
  • the method for determining the n luma blocks may refer to the foregoing step 2021, which is not repeatedly described in this embodiment of the present application.
  • The first order may be: the luminance block covering the central pixel point in the luminance image region, the luminance block covering the upper-left corner pixel, the luminance block covering the upper-right corner pixel, the luminance block covering the lower-left corner pixel, and the luminance block covering the lower-right corner pixel in the luminance image area, i.e., the order of the covered pixel points CR > LT > TR > BL > BR; or a randomly determined order.
  • In the adding process, a check of whether the prediction mode to be added is duplicated is also performed, also referred to as a duplicate-checking process: for the intra prediction mode of each of the n luma blocks corresponding to the current chroma block, it is detected whether that intra prediction mode is the same as a prediction mode already added to the prediction mode candidate queue; when it is the same, the next intra prediction mode is detected; when it is different, the intra prediction mode is added to the prediction mode candidate queue.
  • the second sequence may be a left adjacent chroma block of the current chroma block, an upper adjacent chroma block of the current chroma block, and a lower left adjacent chroma block of the current chroma block.
  • Likewise, a duplicate-checking process is also performed: for the intra prediction mode of each chroma block adjacent to the current chroma block, it is detected whether that intra prediction mode is the same as a prediction mode already added to the prediction mode candidate queue; when it is the same, the intra prediction mode of the next adjacent chroma block is detected; when it is different, the intra prediction mode is added to the prediction mode candidate queue.
  • the target sequence and the first sequence may be the same or different, and the embodiment of the present application does not limit this.
  • Step 203: Determine a target intra prediction mode in the constructed prediction mode candidate queue.
  • The target intra prediction mode is the mode used to determine the prediction block of the current chroma block.
  • In step 203, the process of determining the target intra prediction mode in the constructed prediction mode candidate queue includes:
  • determining the intra prediction mode that meets a second target condition in the constructed prediction mode candidate queue as the target intra prediction mode. This process can be implemented by traversing all the prediction modes in the prediction mode candidate queue.
• The second target condition is that the sum of the absolute values of the residual values of the residual block corresponding to the prediction block determined based on the intra prediction mode is the smallest; or that the sum of the absolute values of the transformed residual values of that residual block is the smallest; or that the coding cost of encoding with the intra prediction mode is the smallest.
• When the second target condition is that the sum of the absolute values of the residual values is the smallest, all prediction modes in the prediction mode queue may be traversed first, the prediction residual of the current chroma block corresponding to the prediction block determined by each mode may be calculated, and then the intra prediction mode whose corresponding residual block has the smallest sum of absolute residual values may be selected as the target intra prediction mode;
• When the second target condition is that the sum of the absolute values of the transformed residual values of the residual block corresponding to the prediction block determined by the intra prediction mode is the smallest, all prediction modes in the prediction mode queue may be traversed first, the prediction residual of the current chroma block corresponding to the prediction block determined by each mode may be calculated, residual transformation may then be performed on each prediction residual to obtain the residual transform quantities, and the intra prediction mode whose residual block has the smallest sum of absolute transformed residual values may be selected as the target intra prediction mode.
• Performing the residual transform refers to multiplying the prediction residual corresponding to the prediction block determined by each mode by a transform matrix to obtain the residual transform quantity. The residual transform can remove correlation in the residual, so that the energy of the resulting residual transform quantities is more concentrated;
• When the second target condition is that the coding cost corresponding to encoding with the intra prediction mode is the smallest, all prediction modes in the prediction mode queue may be traversed first, the current chroma block may be encoded with each mode, the coding cost of each encoding may be calculated, and the intra prediction mode with the smallest coding cost may be selected as the target intra prediction mode. The coding cost can be calculated by using a preset cost function.
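• As a minimal sketch of the first form of the second target condition, the following traverses every mode in the candidate queue, forms the prediction residual for each, and keeps the mode with the smallest sum of absolute residual values (SAD); the `predict` callback and the per-mode prediction samples are illustrative assumptions:

```python
# Sketch of choosing the target intra prediction mode under the first form of
# the second target condition: traverse every mode in the candidate queue,
# form the prediction residual for each, and keep the mode whose residual has
# the smallest sum of absolute values (SAD). The `predict` callback and the
# per-mode prediction samples are illustrative assumptions.

def select_target_mode(candidate_queue, original, predict):
    """Return the mode whose prediction block minimizes the residual SAD."""
    best_mode, best_sad = None, None
    for mode in candidate_queue:
        prediction = predict(mode)
        sad = sum(abs(o - p) for o, p in zip(original, prediction))
        if best_sad is None or sad < best_sad:
            best_mode, best_sad = mode, sad
    return best_mode

original = [10, 12, 14, 16]                       # original chroma samples
fake_predictions = {11: [9, 12, 15, 16], 75: [10, 12, 14, 16]}
target = select_target_mode([11, 75], original, fake_predictions.__getitem__)
```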
  • Step 204 Determine a reference luminance block when the target intra prediction mode is the intra motion compensation mode.
  • the reference luma block is a luma block among n luma blocks corresponding to the current chroma block position.
• When the target intra prediction mode is the intra motion compensation mode, the motion vector of the current chroma block needs to be obtained based on the motion vector of the reference luma block, so the reference luma block must be determined. The process of determining the reference luma block may be implemented in various ways; the embodiment of the present application provides the following three implementation manners:
  • the reference luma block is determined based on the ordering of the target intra prediction modes in the prediction mode candidate queue.
  • the process of determining the reference luminance block may include:
• Step D1 Determine that, in the prediction mode candidate queue, the target intra prediction mode is the r-th intra motion compensation mode among the m intra motion compensation modes, 1 ≤ r ≤ m.
• The determination of the reference luma block is related to the ordering of the target intra prediction mode in the prediction mode candidate queue, so the position of the target intra prediction mode among the intra motion compensation modes in the queue needs to be determined, that is, which intra motion compensation mode it is. Step D1 assumes that it is determined to be the r-th intra motion compensation mode.
• Step D2 Sequentially detect, in the target order, whether the motion vector of each luma block in the n luma blocks can be referenced, until the detection stop condition is reached. The detection stop condition is that the total number of motion vectors that can be referenced is equal to a preset number threshold x (x ≥ m), or that the total number of motion vectors that can be referenced is equal to r, or that the n luma blocks have been traversed.
• For example, m may be 1 or 5, and x ≥ m.
• Step D3 After the detection stop condition is reached, the luma block referenced by the r-th referenceable motion vector is determined as the reference luma block.
• In this implementation manner, the ordering of the target intra prediction mode in the prediction mode candidate queue is associated with the detection order of the luma blocks whose motion vectors can be referenced. Corresponding to step 2023 above, when only one intra motion compensation mode is added to the prediction mode candidate queue, the luma block providing the first referenceable motion vector is determined as the reference luma block. Under the restriction of the detection stop condition, the luma blocks whose motion vectors are detected to be referenceable in steps D1 to D3 correspond one-to-one with the added intra motion compensation modes: the detection order of the referenceable luma blocks is consistent with the order in which the intra motion compensation modes were added, so the r-th intra motion compensation mode serving as the target intra prediction mode corresponds to the luma block providing the r-th referenceable motion vector, which serves as the reference luma block.
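• Steps D1 to D3 can be sketched as follows, assuming `None` marks a luma block whose motion vector cannot be referenced (for example, an intra-predicted luma block); the motion vector values are illustrative:

```python
# Sketch of steps D1 to D3: scan the n luma blocks in the target order, count
# the blocks whose motion vector can be referenced, and stop at the r-th one,
# where r is the rank of the target mode among the intra motion compensation
# modes in the candidate queue. `None` marks a block whose motion vector
# cannot be referenced (for example, an intra-predicted luma block).

def find_reference_luma_block(luma_motion_vectors, r):
    """Return the index of the luma block giving the r-th referenceable MV."""
    count = 0
    for index, mv in enumerate(luma_motion_vectors):
        if mv is None:            # this motion vector cannot be referenced
            continue
        count += 1
        if count == r:            # found the r-th referenceable motion vector
            return index
    return None                   # fewer than r referenceable motion vectors

mvs = [None, (-4, 2), None, (-11, -3), (6, 0)]
ref_index = find_reference_luma_block(mvs, 2)   # second referenceable MV
```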
• In the second implementation manner, the reference luma block is selected based on the reference prediction blocks generated from the n luma blocks.
  • the process of determining the reference luma block may include:
• Step E1 Sequentially detect, in the target order, whether the motion vector of each luma block in the n luma blocks can be referenced, until the detection stop condition is reached. The detection stop condition is that the total number of motion vectors that can be referenced is equal to the preset number threshold x (x ≥ m), or that the n luma blocks have been traversed.
  • Step E2 After the detection stop condition is reached, a reference prediction block of the current chroma block is generated based on each referenceable motion vector.
• Step E3 Determine, among the generated reference prediction blocks, a reference prediction block that meets the first target condition, where the first target condition is that the sum of the absolute values of the residual values of the residual block corresponding to the reference prediction block is the smallest, or that the sum of the absolute values of the transformed residual values of that residual block is the smallest, or that the coding cost corresponding to the reference prediction block is the smallest.
  • Step E4 Determine a luminance block corresponding to the reference prediction block that meets the first target condition as a reference luminance block.
• In this implementation manner, the reference luma block is not tied to the ordering of the intra motion compensation modes. Referring to step 2023 above, regardless of the position of the target intra prediction mode in the prediction mode candidate queue, the reference prediction block of the reference luma block only needs to meet the first target condition, so that, compared with the first implementation manner, the accuracy is higher.
• In the third implementation manner, the reference luma block is determined based on a pre-established correspondence table between identifiers of intra motion compensation modes and identifiers of luma blocks.
• When the prediction mode candidate queue includes m intra motion compensation modes, m ≥ 1, a correspondence table for the intra motion compensation modes may also be established.
• The correspondence table records an identifier of each intra motion compensation mode added to the prediction mode candidate queue and an identifier of the luma block whose motion vector the mode references. Each such luma block is a luma block whose motion vector can be referenced; the identifier of each intra motion compensation mode uniquely identifies that mode within the prediction mode candidate queue, and the identifier of each luma block likewise uniquely identifies that luma block.
• The identifier of a luma block may be the number of the luma block obtained when dividing the luma image corresponding to the image frame to be processed, or another type of identifier, as in step 209 below; such identifiers are not described in detail in the embodiments of the present application.
• Since the mode numbers of the m intra motion compensation modes added to the prediction mode candidate queue may be the same or different, when they are the same, the identifier of an intra motion compensation mode may be composed of the mode number and the index in the prediction mode candidate queue, so that the intra motion compensation mode is uniquely identified in the prediction mode candidate queue; when the mode numbers of the m intra motion compensation modes added to the prediction mode candidate queue are different, the identifier of an intra motion compensation mode may simply be its mode number.
  • the process of determining the reference luma block may include:
  • Step F1 Query the correspondence table according to the identifier of the target intra prediction mode, and obtain the identifier of the reference luma block.
  • Step F2 determining a reference luminance block based on the identifier of the reference luminance block.
• In this implementation manner, the identifier of the reference luma block can be determined directly by querying the correspondence table. Compared with the first and second implementation manners, the process is relatively simple and the computational cost is small.
• In the above implementation manners, the process of sequentially detecting, in the target order, whether the motion vector of each luma block in the n luma blocks can be referenced until the detection stop condition is reached may include performing a detection process that includes:
• Step G1 It is detected whether the motion vector of the i-th luma block in the n luma blocks can be referenced.
• Step G2 When the motion vector of the i-th luma block can be referenced, it is detected whether the detection stop condition is reached.
• Step G4 When the detection stop condition is reached, execution of the detection process is stopped.
• Steps G1 to G4 may refer to steps C1 to C4 in step 2022 above, and details are not described herein again.
• The process of detecting whether the motion vector of the i-th luma block in the n luma blocks can be referenced may include: detecting the prediction type of the i-th luma block in the n luma blocks; when the prediction type of the i-th luma block is the intra prediction type, determining that the motion vector of the i-th luma block cannot be referenced; and when the prediction type of the i-th luma block is the inter prediction type, generating a candidate prediction block based on the motion vector of the i-th luma block and determining whether the candidate prediction block is valid.
  • the determining process may include the following two implementation manners:
• In the first determination manner, when the candidate prediction block is located entirely within the chroma-coded region of the image frame to be processed, it is determined that the candidate prediction block is valid; when the candidate prediction block is not located entirely within the chroma-coded region of the image frame to be processed, it is determined that the candidate prediction block is invalid.
• In the second determination manner, when the candidate prediction block is located entirely within the chroma-coded region of the image frame to be processed and is located in a specified orientation relative to the current chroma block, it is determined that the candidate prediction block is valid; when the candidate prediction block is not located entirely within the chroma-coded region of the image frame to be processed, or is not located in the specified orientation relative to the current chroma block, it is determined that the candidate prediction block is invalid. As an example, the specified orientation of the current chroma block is any orientation on the left, upper, or upper-left side of the current chroma block.
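• The first validity test can be sketched as follows; modelling the chroma-coded region as a single rectangle is an assumption made only for illustration, since the actual coded region follows the coding order:

```python
# Sketch of the first validity test: a candidate prediction block is valid
# only when it lies entirely inside the chroma-coded region of the frame.
# Rectangles are (x, y, width, height); modelling the coded region as a single
# rectangle is an assumption made purely for illustration.

def block_is_valid(candidate, coded_region):
    """True when the candidate block is wholly inside the coded region."""
    cx, cy, cw, ch = candidate
    rx, ry, rw, rh = coded_region
    return rx <= cx and ry <= cy and cx + cw <= rx + rw and cy + ch <= ry + rh

coded = (0, 0, 64, 32)                           # chroma-coded area so far
valid = block_is_valid((8, 8, 8, 8), coded)      # fully inside: valid
invalid = block_is_valid((60, 28, 8, 8), coded)  # spills outside: invalid
```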
• The process of detecting whether the detection stop condition is reached may include: detecting whether the total number of motion vectors that can be referenced is equal to the preset number threshold x, where x ≥ m; and detecting whether i is equal to n. When the number of referenceable motion vectors is not equal to the preset number threshold x and i is not equal to n, it is determined that the detection stop condition is not reached; when the number of referenceable motion vectors is equal to the preset number threshold x, or i is equal to n, it is determined that the detection stop condition is reached.
  • Step 205 Determine a motion vector of a current chroma block in the image frame to be processed based on the motion vector of the reference luma block.
  • step 205 the process of determining the motion vector of the current chroma block in the image frame to be processed based on the motion vector of the reference luma block may include:
  • Step H1 Determine a vector scaling ratio of the current chroma block and the reference luma block according to an encoding format of the image frame to be processed.
  • the vector scaling ratio of the current chroma block and the reference luma block is equal to the ratio of the distribution density of the chroma block and the luma block in the image frame to be processed, and the distribution density ratio is determined by the encoding format of the image frame to be processed.
• For example, when the encoding format is 4:4:4, the ratio of the distribution density of chroma blocks to luma blocks is 1:1 in the horizontal direction (also called the x direction) and 1:1 in the vertical direction (also called the y direction), so the scaling ratio of the motion vector of the current chroma block to that of the reference luma block is equal to 1:1 in the horizontal direction and 1:1 in the vertical direction.
  • This step can refer to the above step C131.
  • Step H2 Based on the vector scaling ratio, the motion vector of the reference luma block is scaled to obtain a motion vector of the current chroma block.
• The motion vector of the reference luma block is scaled proportionally to obtain the motion vector of the current chroma block. For example, if the motion vector of the reference luma block is (-11, -3) and the encoding format is 4:4:4, the motion vector of the current chroma block is (-11, -3). This step can refer to step C132 above.
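• Steps H1 and H2 can be sketched as follows. Only the 4:4:4 case is spelled out in the text above; the 4:2:2 and 4:2:0 divisors below follow the usual chroma subsampling definitions and are an assumption:

```python
# Sketch of steps H1 and H2: scale the reference luma motion vector by the
# chroma-to-luma distribution density ratio of the encoding format. The text
# spells out only the 4:4:4 case; the 4:2:2 and 4:2:0 divisors below follow
# the usual chroma subsampling definitions and are an assumption here.

# (horizontal divisor, vertical divisor) per encoding format
SCALE = {"4:4:4": (1, 1), "4:2:2": (2, 1), "4:2:0": (2, 2)}

def scale_motion_vector(luma_mv, encoding_format):
    """Scale a reference luma motion vector to a chroma motion vector."""
    sx, sy = SCALE[encoding_format]
    return (luma_mv[0] // sx, luma_mv[1] // sy)   # floor division, like a shift

chroma_mv = scale_motion_vector((-11, -3), "4:4:4")   # unchanged: (-11, -3)
```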
  • Step 206 Predict a prediction block of the current chroma block based on the motion vector of the current chroma block.
• In a possible implementation, the motion vector of the current chroma block may be used to find the prediction block of the current chroma block in the chroma-coded region of the chroma image in the image frame to be processed.
  • Step 207 Add the target intra prediction mode to the code stream of the current chroma block after encoding the index in the prediction mode candidate queue.
  • the prediction mode candidate queue includes at least one prediction mode, and each prediction mode is a mode for predicting a prediction block of the current chroma block.
• The index in the prediction mode candidate queue is used to indicate the ordering of the prediction mode in the prediction mode candidate queue. For example, if the index of the target intra prediction mode is 3, the target intra prediction mode is the third prediction mode in the prediction mode candidate queue.
  • the index may be entropy encoded by an entropy encoding module and then written to the code stream.
  • Step 208 Transmit a code stream to the decoding end, where the code stream includes an index of the encoded target intra prediction mode and a coded residual block.
• The pixel values of the prediction block are subtracted from the original pixel values of the current chroma block to obtain the residual block of the current chroma block; the residual block of the current chroma block is then transformed and quantized, and the quantized residual block is entropy encoded to obtain the encoded code stream.
  • This process can refer to JEM or HEVC.
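• The residual path of step 208 can be sketched as follows; a flat quantization step stands in for the transform-and-quantize stages, whereas the real process (as in HEVC or JEM) applies a transform before quantization:

```python
# Sketch of the residual path in step 208: subtract the prediction block from
# the original chroma block, then quantize the residual. A flat quantization
# step stands in for the transform-and-quantize stages; the real process (as
# in HEVC or JEM) applies a transform before quantization.

def make_residual(original, prediction):
    """Residual block: original samples minus predicted samples."""
    return [o - p for o, p in zip(original, prediction)]

def quantize(residual, step):
    """Uniform quantization toward zero with the given step size."""
    return [r // step if r >= 0 else -((-r) // step) for r in residual]

residual = make_residual([10, 12, 14, 16], [9, 12, 15, 16])   # [1, 0, -1, 0]
levels = quantize(residual, 1)
```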
• The finally obtained division manner may also be encoded and added to the code stream, so that the decoding end divides the image frame to be processed based on the division manner and performs corresponding processing on the divided chroma blocks.
  • the index encoded in the code stream is used by the decoding end to determine a corresponding reference luma block.
• In one aspect, the decoding end may determine the reference luma block based on a pre-agreed manner and the index of the target intra prediction mode, so that no related indication information needs to be encoded in the code stream, thereby reducing the coding cost of the indication information and helping to improve the efficiency of video coding.
• In another aspect, since the encoding end has already determined the identifier of the reference luma block, it may also encode the identifier of the reference luma block into the code stream for reference by the decoding end, so that the decoding end can determine the reference luma block directly based on the identifier without performing excessive operations, thereby reducing the computational cost of the decoding end.
  • the intra-frame prediction method of the chrominance provided by the embodiment of the present application may further include:
  • Step 209 Obtain an identifier of the reference luma block.
  • the first method for obtaining the identifier of the reference luma block includes:
  • step I1 all the luma blocks in the n luma blocks are assigned an identifier in the order agreed with the decoding end.
  • Step I2 Obtain an identifier assigned to the reference luma block.
  • the second acquisition method, the process of obtaining the identifier of the reference luma block includes:
• Step J1 The luma blocks whose motion vectors can be referenced among the n luma blocks are assigned identifiers in the order agreed with the decoding end.
• The identifiers assigned to the luma blocks whose motion vectors can be referenced start from 0 or 1 and form an arithmetic progression with a common difference of u, where u is a positive integer, usually 1.
• The process of assigning identifiers may be implemented in multiple manners. The first manner may be performed synchronously with step 2022: first, all of the n luma blocks are assigned different identifiers; then, whenever a luma block whose motion vector cannot be referenced is encountered in the target order, the identifier of that luma block is deleted and the identifiers of all subsequent luma blocks are updated, until the detection stop condition in step 2022 is reached. For example, the n luma blocks are first assigned the identifiers 0, 1, ..., n-1, an arithmetic progression starting at 0 with a common difference of 1. When it is detected that the motion vector of the first luma block cannot be referenced, its identifier is deleted and each remaining identifier is updated by subtracting the common difference, that is, by subtracting 1, so that the updated identifiers are 0, 1, ..., n-2. The process is repeated until the detection stop condition in step 2022 is reached.
• The second manner is that, after the detection stop condition is reached, all the luma blocks whose motion vectors were detected to be referenceable are assigned different identifiers.
• For example, if three such luma blocks are detected, the assigned identifiers are: 0, 1, 2.
  • Step J2 Obtain an identifier assigned to the reference luma block.
• The agreed order may be the coding order of the luma blocks, or may be the target order in which step 2022 detects whether there are luma blocks whose motion vectors can be referenced among the n luma blocks.
• The assigned identifier may be a digital identifier.
• The identifiers assigned to the luma blocks whose motion vectors can be referenced start from 0 or 1 and increase with a common difference of 1, such as 0, 1, 2, ..., n.
• The identifiers assigned to the luma blocks whose motion vectors can be referenced may also form a sequence of another form, which is not limited in this embodiment of the present application. For example, please refer to FIG.
• Assume that the n luma blocks are: luma block M11, luma block M13, luma block M14, luma block M15, and luma block M16; the luma blocks whose motion vectors can be referenced are M14 and M15; the reference luma block is M14; and the agreed order is the coding order of the luma blocks.
• If the first acquisition manner is adopted, the identifiers of the n luma blocks are: luma block M11 is 0, luma block M13 is 1, luma block M14 is 2, luma block M15 is 3, and luma block M16 is 4, so the identifier of the reference luma block is 2. If the second acquisition manner is adopted, the identifiers are: luma block M14 is 0 and luma block M15 is 1, so the identifier of the reference luma block is 0.
  • the identification of these luma blocks can be expressed in binary numbers.
• It can be seen that fewer luma blocks are assigned identifiers in the second acquisition manner, and the identifier of the finally determined reference luma block is smaller, so that when the identifier of the reference luma block is transmitted in the code stream, fewer data bits are occupied, which can effectively save code stream resources.
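• The two acquisition manners of step 209 can be sketched as follows, using the M11 to M16 example above:

```python
# Sketch of the two acquisition manners of step 209, using the M11..M16
# example above. In the first manner every one of the n luma blocks gets an
# identifier; in the second, only the luma blocks whose motion vector can be
# referenced do, which yields a smaller identifier for the reference block.

def assign_ids_all(blocks):
    """First manner: identifiers 0..n-1 over all n luma blocks."""
    return {name: i for i, name in enumerate(blocks)}

def assign_ids_referencable(blocks, referencable):
    """Second manner: identifiers only over the referenceable luma blocks."""
    return {name: i
            for i, name in enumerate(b for b in blocks if b in referencable)}

blocks = ["M11", "M13", "M14", "M15", "M16"]
ids_all = assign_ids_all(blocks)                           # M14 -> 2
ids_ref = assign_ids_referencable(blocks, {"M14", "M15"})  # M14 -> 0
```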
  • Step 210 Encode the identifier of the reference luma block and add it to the code stream of the current chroma block.
  • the identification of the reference luma block may be entropy encoded by the entropy encoding module and then written to the code stream.
• The sequence of steps of the chroma intra prediction method provided by the embodiment of the present application may be appropriately adjusted, and steps may be correspondingly added or removed according to the situation; for example, steps 209 and 210 may be omitted. Any variation readily conceivable by a person skilled in the art within the technical scope disclosed in the present application shall be covered by the protection scope of the present application, and is therefore not described again.
• In summary, in the chroma intra prediction method provided by the embodiment of the present application, since the motion vector of the chroma block is determined based on the motion vector of the luma block, the correlation between the motion vector of the luma component and the motion vector of the chroma component is fully utilized, and the motion vector of the chroma component does not need to be calculated separately. This simplifies the process of the intra-frame motion compensation technique, reduces the computational cost of the motion vector of the chroma component, and correspondingly reduces the overall motion vector computation cost.
• The embodiment of the present application further provides a chroma intra prediction method performed by a decoding end, which is used for decoding of an I frame. The method includes:
  • Step 301 Decode a code stream of a current chroma block, where the code stream includes an index of the encoded target intra prediction mode and a coded residual block.
  • the current chroma block refers to the chroma block to be decoded.
• After receiving the code stream transmitted by the encoding end, the decoding end decodes the code stream, usually by an entropy decoding module. The decoded code stream may include the index of the target intra prediction mode and the decoded residual block. The decoding end may then perform inverse quantization and inverse transform on the decoded residual block to obtain the residual block of the current chroma block.
  • the code stream further includes the coded division mode.
• The decoding end may extract the division manner from the decoded code stream, divide the image frame to be processed according to the division manner, and perform corresponding processing on the divided chroma blocks.
  • Step 302 Extract an index of the target intra prediction mode in the prediction mode candidate queue from the decoded code stream.
  • the index is used to indicate the order of the prediction mode in the prediction mode candidate queue.
• For example, if the index of the target intra prediction mode is 3, the target intra prediction mode is the third prediction mode in the prediction mode candidate queue.
  • Step 303 Construct, for the current chroma block, a prediction mode candidate queue, where the prediction mode candidate queue includes at least one prediction mode, and each prediction mode is a mode for predicting a prediction block of the current chroma block.
• The process of constructing the prediction mode candidate queue may refer to steps A1 to A7 in step 202 above. The construction process is agreed upon with the encoding end and is consistent with it, that is, step 303 is consistent with step 202 above; therefore, it is not described again in the embodiments of the present application.
  • Step 304 Determine a target intra prediction mode in the constructed prediction mode candidate queue.
• The target intra prediction mode may be obtained by querying the prediction mode candidate queue based on the index of the target intra prediction mode in the prediction mode candidate queue.
• For example, the prediction mode candidate queue is {11, 75, ..., 68} and includes 11 prediction modes. If the index of the target intra prediction mode is 2, the prediction mode candidate queue is queried and the second prediction mode is selected; that is, the target intra prediction mode is an intra motion compensation mode with a mode number of 75.
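• Step 304 can be sketched as follows; whether the index is 1-based (as in the example above) or 0-based is a convention the encoding and decoding ends must agree on, and 1-based is assumed here to match the text:

```python
# Sketch of step 304: the decoder rebuilds the same prediction mode candidate
# queue as the encoder and selects the target mode by the index parsed from
# the code stream. A 1-based index is assumed here to match the example in
# the text; the indexing convention must be agreed between the two ends.

def mode_from_index(candidate_queue, index):
    """Return the prediction mode at a 1-based index in the candidate queue."""
    return candidate_queue[index - 1]

queue = [11, 75, 68]                      # same queue as built by the encoder
target_mode = mode_from_index(queue, 2)   # index 2 -> mode number 75
```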
  • Step 305 Determine a reference luma block when the target intra prediction mode is the intra motion compensation mode.
• This process corresponds to the encoding end. In one aspect, the decoding end can determine the reference luma block based on a pre-agreed manner and the index of the target intra prediction mode, so that related indication information does not need to be encoded in the code stream, which reduces the coding cost of the indication information and helps improve the efficiency of video coding. In another aspect, since the encoding end has determined the identifier of the reference luma block, it may also encode that identifier into the code stream for reference by the decoding end, so that the decoding end can determine the reference luma block directly based on the identifier without performing excessive operations, thereby reducing the computational cost of the decoding end.
• Correspondingly, the manner in which the decoding end determines the reference luma block can also be various; the following two determination manners are provided in the embodiment of the present application.
  • the decoding end determines the reference luma block based on a pre-agreed manner and an index of the target intra prediction mode.
• This process is the same as step 204 above and likewise has three implementation manners; for the specific process, refer to step 204 above, which is not described in detail in this embodiment of the present application.
  • the decoding end determines the reference luma block based on the identifier of the reference luma block in the code stream.
  • the process can include:
  • Step K1 Extract an identifier of the reference luma block from the decoded code stream.
  • the decoding end may extract the identifier of the reference luma block after entropy decoding the code stream.
  • Step K2 Determine a reference luma block among the n luma blocks based on the identifier of the reference luma block.
• Since the identifier of the reference luma block can be obtained in multiple ways, and the decoding end needs to be consistent with the acquisition manner of the encoding end to ensure that the acquired identifier indicates the luma block at the same position, the embodiment of the present application is described by taking the following two corresponding acquisition manners as an example:
  • the first acquisition manner of the decoding end includes:
• Step L1 All the luma blocks in the n luma blocks are assigned identifiers in the order agreed with the encoding end.
• Step L2 The luma block whose identifier is consistent with the identifier of the reference luma block is determined as the reference luma block.
  • the second obtaining manner of the decoding end includes:
• Step M1 The luma blocks whose motion vectors can be referenced among the n luma blocks are assigned identifiers in the order agreed with the encoding end.
• The identifiers assigned to the luma blocks whose motion vectors can be referenced start from 0 or 1 and form an arithmetic progression with a common difference of u, where u is a positive integer, usually 1.
• The process of assigning identifiers may be implemented in multiple manners. The first manner may be performed synchronously with step 303: first, all of the n luma blocks are assigned different identifiers; then, whenever a luma block whose motion vector cannot be referenced is encountered in the target order, the identifier of that luma block is deleted and the identifiers of all subsequent luma blocks are updated, until the detection stop condition in step 303 is reached. For example, the n luma blocks are first assigned the identifiers 0, 1, ..., n-1, an arithmetic progression starting at 0 with a common difference of 1. When it is detected that the motion vector of the first luma block cannot be referenced, its identifier is deleted and each remaining identifier is updated by subtracting the common difference, that is, by subtracting 1, so that the updated identifiers are 0, 1, ..., n-2. The process is repeated until the detection stop condition in step 303 is reached.
• The second manner is that, after the detection stop condition is reached, all the luma blocks whose motion vectors were detected to be referenceable are assigned different identifiers.
• For example, if four such luma blocks are detected, the assigned identifiers are: 0, 1, 2, and 3.
• Step M2 The luma block whose motion vector can be referenced and whose identifier is consistent with the identifier of the reference luma block is determined as the reference luma block.
• The agreed order may be the coding order of the luma blocks, or may be the target order in which steps 2022 and 303 detect whether there are luma blocks whose motion vectors can be referenced among the n luma blocks. The identifier may be a digital identifier; the identifiers assigned to the luma blocks whose motion vectors can be referenced start from 0 or 1 and form an increasing arithmetic progression with a common difference of 1.
  • For example, suppose the n luma blocks are luma blocks M11, M13, M14, M15 and M16, and the luma blocks whose motion vectors can be referenced are M14 and M15.
  • Suppose the reference luma block is M14 and the agreed order is the coding order of the luma blocks.
  • With the first acquisition manner, the identifiers of the n luma blocks are: 0 for luma block M11, 1 for luma block M13, 2 for luma block M14, 3 for luma block M15, and 4 for luma block M16; the identifier of the reference luma block to be transmitted in the code stream is then 2, and the decoding end determines the reference luma block as M14 based on this identifier.
  • With the second acquisition manner, the identifiers of the luma blocks are: 0 for luma block M14 and 1 for luma block M15; the identifier of the reference luma block to be transmitted in the code stream is then 0, and the decoding end determines the reference luma block as M14 based on this identifier.
  • The identifiers of these luma blocks may be expressed as binary numbers.
  • The second acquisition manner assigns identifiers to fewer luma blocks, and the finally determined reference luma block therefore has a smaller identifier, so that transmitting the identifier of the reference luma block in the code stream occupies fewer data bits, which can effectively save code stream resources.
  • Step 306: determine the motion vector of the current chroma block in the image frame to be processed based on the motion vector of the reference luma block.
  • This process is the same as that performed at the encoding end in the foregoing step 205 and is not described again here.
  • Step 307: predict the prediction block of the current chroma block based on the motion vector of the current chroma block.
  • Step 308: determine the reconstructed pixel values of the current chroma block based on the prediction block of the current chroma block and the residual block of the current chroma block.
  • After the residual block of the current chroma block is obtained, the reconstructed pixel values of the current chroma block may be determined.
  • A reconstructed pixel value is obtained by adding a pixel value of the prediction block of the current chroma block to the corresponding pixel value of the residual block of the current chroma block.
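The reconstruction in step 308 is a per-pixel addition of the prediction block and the residual block. A minimal sketch, assuming 8-bit samples clipped to the valid range (the function name and the list-of-lists block representation are hypothetical):

```python
def reconstruct(pred_block, residual_block, bit_depth=8):
    """Reconstructed pixel = prediction pixel + residual pixel,
    clipped to the valid sample range for the given bit depth."""
    max_val = (1 << bit_depth) - 1
    return [[max(0, min(max_val, p + r))
             for p, r in zip(pred_row, res_row)]
            for pred_row, res_row in zip(pred_block, residual_block)]
```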
  • The sequence of the steps of the chroma intra prediction method provided by the embodiments of the present application may be adjusted appropriately, and steps may also be added or removed as the situation requires.
  • For example, the order of steps 301 and 303 may be reversed. Any variation readily conceivable by those skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application and is therefore not described again.
  • In the chroma intra prediction method, since the motion vector of the chroma block is determined based on the motion vector of the luma block, the correlation between the motion vector of the luma component and the motion vector of the chroma component is fully exploited.
  • The motion vector of the chroma component does not need to be computed separately, which simplifies the intra motion compensation procedure, reduces the computational cost of the chroma-component motion vector, and correspondingly reduces the overall motion-vector computation cost.
  • An embodiment of the present application provides a chrominance intra prediction apparatus 40 for encoding or decoding an I frame.
  • the apparatus 40 includes:
  • a first determining module 401, configured to determine a motion vector of a current chroma block in an image frame to be processed based on a motion vector of a reference luma block when a target intra prediction mode is an intra motion compensation mode, where, as an example,
  • the target intra prediction mode is a mode for predicting a prediction block of the current chroma block, and in the intra motion compensation mode, the motion vector of the current chroma block is generated based on a motion vector of a luma block;
  • the reference luma block is one of n luma blocks corresponding to the position of the current chroma block, n≥1;
  • the prediction module 402 is configured to predict a prediction block of the current chroma block based on a motion vector of the current chroma block.
  • In the chroma intra prediction apparatus, since the motion vector of the chroma block is determined based on the motion vector of the luma block, the correlation between the motion vector of the luma component and the motion vector of the chroma component is fully exploited.
  • The motion vector of the chroma component does not need to be computed separately, which simplifies the intra motion compensation procedure, reduces the computational cost of the chroma-component motion vector, and correspondingly reduces the overall motion-vector computation cost.
  • the apparatus 40 further includes:
  • the constructing module 403 is configured to construct a prediction mode candidate queue before the motion vector of the current chroma block in the image frame to be processed is determined based on the motion vector of the reference luma block, where the prediction mode candidate queue includes at least one prediction mode,
  • each of the prediction modes being a mode for predicting the prediction block of the current chroma block;
  • a second determining module 404, configured to determine the target intra prediction mode in the constructed prediction mode candidate queue;
  • the third determining module 405 is configured to determine a reference luma block when the target intra prediction mode is the intra motion compensation mode.
  • the constructing module 403 includes:
  • a first determining submodule 4031 configured to determine n luma blocks corresponding to the current chroma block position in the image frame to be processed
  • a detecting submodule 4032 configured to detect whether there is a luma block that can be referenced by a motion vector in the n luma blocks;
  • the adding submodule 4033 is configured to add an intra motion compensation mode to the prediction mode candidate queue when there is a luma block that the motion vector can refer to in the n luma blocks.
  • the detecting submodule 4032 is configured to:
  • The detection stop condition is that the total number of referenceable motion vectors equals a preset number threshold k, or that the n luma blocks have been traversed.
  • the adding submodule 4033 is used to:
  • the third determining module 405 has multiple implementable manners, and examples include:
  • In a first implementation manner, the prediction mode candidate queue includes m intra motion compensation modes, m≥1.
  • the third determining module 405 includes:
  • a second determining submodule 4051, configured to determine, in the prediction mode candidate queue, that the target intra prediction mode is the r-th intra motion compensation mode among the m intra motion compensation modes, 1≤r≤m;
  • the detecting sub-module 4052 is configured to detect, in a target order, whether the motion vector of each luma block among the n luma blocks can be referenced, until a detection stop condition is reached, where the detection stop condition is that the total number of referenceable motion vectors equals
  • a preset number threshold x, x≥m, or that the total number of referenceable motion vectors equals r, or that the n luma blocks have been traversed;
  • the third determining submodule 4053 is configured to determine, after the detection stop condition is reached, the luma block whose motion vector is the r-th referenceable motion vector as the reference luma block.
  • In a second implementation manner, the prediction mode candidate queue includes m intra motion compensation modes, m≥1.
  • the third determining module 405 includes:
  • the detecting sub-module 4052 is configured to detect, in a target order, whether the motion vector of each luma block among the n luma blocks can be referenced, until a detection stop condition is reached, where the detection stop condition is that the total number of referenceable motion vectors equals a preset number threshold;
  • a generating submodule 4054, configured to generate, after the detection stop condition is reached, a reference prediction block of the current chroma block based on each referenceable motion vector;
  • a fourth determining sub-module 4055, configured to determine, among the generated multiple reference prediction blocks, a reference prediction block meeting a first target condition, where the first target condition is that the sum of the absolute values of the residual values of the residual block corresponding to the reference prediction block
  • is the smallest, or that the sum of the absolute values of the transformed residual values of the residual block corresponding to the reference prediction block is the smallest, or that the coding cost corresponding to the reference prediction block is the smallest;
  • the fifth determining sub-module 4056 is configured to determine the luma block corresponding to the reference prediction block meeting the first target condition as the reference luma block.
  • the prediction mode candidate queue includes m intra motion compensation modes, and m ⁇ 1.
  • the apparatus 40 further includes:
  • the establishing module 406 is configured to establish, in the process of constructing the prediction mode candidate queue, a correspondence table between identifiers of intra motion compensation modes and identifiers of luma blocks, where the correspondence table records the identifier of each intra motion compensation mode added to the prediction mode candidate queue and the identifier of the luma block whose motion vector the mode references, and where the identifier of each intra motion compensation mode in the correspondence table is used to uniquely identify an intra motion compensation mode in the prediction mode candidate queue;
  • the third determining module 405 is configured to:
  • the detecting submodule 4032 or the detecting submodule 4052 may include:
  • An execution unit configured to perform a detection process, where the detection process includes:
  • the execution unit is configured to:
  • when the prediction type of the i-th luma block is an inter prediction type, generate a candidate motion vector based on the motion vector of the i-th luma block;
  • when the candidate motion vector corresponding to the i-th luma block is the same as a candidate motion vector corresponding to a luma block already detected as having a referenceable motion vector, determine that the motion vector of the i-th luma block cannot be referenced.
  • the apparatus 40 further includes:
  • a first detecting module 407, configured to detect, after the candidate prediction block of the current chroma block is predicted based on the candidate motion vector corresponding to the i-th luma block, whether the candidate prediction block is entirely located within the chroma-encoded region of the image frame to be processed;
  • a fourth determining module 408, configured to determine that the candidate prediction block is valid when the candidate prediction block is entirely located within the chroma-encoded region of the image frame to be processed;
  • the fifth determining module 409 is configured to determine that the candidate prediction block is invalid when the candidate prediction block is not entirely located within the chroma-encoded region of the image frame to be processed.
  • the apparatus 40 further includes:
  • a first detecting module 407, configured to detect, after the candidate prediction block of the current chroma block is predicted based on the candidate motion vector corresponding to the i-th luma block, whether the candidate prediction block is entirely located within the chroma-encoded region of the image frame to be processed;
  • a second detecting module 410, configured to detect whether the candidate prediction block is located in a specified orientation relative to the current chroma block;
  • a sixth determining module 411, configured to determine that the candidate prediction block is valid when the candidate prediction block is entirely located within the chroma-encoded region of the image frame to be processed and in the specified orientation relative to the current chroma block;
  • a seventh determining module 412, configured to determine that the candidate prediction block is invalid when the candidate prediction block is not entirely located within the chroma-encoded region of the image frame to be processed, or is not located in the specified orientation relative to the current chroma block;
  • the specified orientation relative to the current chroma block is any one of the left side, the upper side, and the upper-left side of the current chroma block.
  • the first detecting module 407 is configured to:
  • when the candidate motion vector corresponding to the i-th luma block is a sub-pixel motion vector,
  • acquire a reference chroma block corresponding to the candidate prediction block, where the chroma pixel values of the candidate prediction block
  • are interpolated from the pixel values of the reference chroma block; and
  • when the reference chroma block is not entirely located within the chroma-encoded region, determine that the candidate prediction block is not entirely located within the chroma-encoded region of the image frame to be processed.
  • the foregoing first determining submodule 4031 includes:
  • a determining unit, configured to determine a luma image region corresponding to the position of the current chroma block in the image frame to be processed;
  • a processing unit, configured to use all target luma blocks as the n luma blocks, or to select, among all target luma blocks, the luma blocks at specified positions as the n luma blocks, where the luma blocks at the specified positions include: a luma block covering the center pixel of the luma image region, a luma block covering the top-left pixel of the luma image region, a luma block covering the top-right pixel of the luma image region, a luma block covering the bottom-left pixel of the luma image region, and a luma block covering the bottom-right pixel of the luma image region;
  • a target luma block is a luma block partially or wholly within the luma image region.
  • As an example, the n luma blocks include at least: a luma block covering the center pixel of the luma image region, a luma block covering the top-left pixel of the luma image region, a luma block covering the top-right pixel of the luma image region, a luma block covering the bottom-left pixel of the luma image region, and a luma block covering the bottom-right pixel of the luma image region, where the luma image region is the luma region in the image frame to be processed corresponding to the position of the current chroma block;
  • The target order is:
  • the luma block covering the center pixel of the luma image region, the luma block covering the top-left pixel of the luma image region, the luma block covering the top-right pixel of the luma image region, the luma block covering the bottom-left pixel of the luma image region, and the luma block covering the bottom-right pixel of the luma image region; or,
  • the target order is a randomly determined order.
  • the apparatus is applied to a decoding end, and the apparatus further includes: a third determining module for determining the reference luma block.
  • the third determining module 405 can be the third determining module 405 shown in FIG. 16, and the third determining module 405 includes:
  • an extraction submodule 4057, configured to extract the identifier of the reference luma block from the code stream to be decoded;
  • the sixth determining sub-module 4058 is configured to determine the reference luma block among the n luma blocks based on the identifier of the reference luma block.
  • As an example, the sixth determining submodule 4058 is configured to:
  • determine the luma block whose referenceable motion vector's identifier is consistent with the identifier of the reference luma block as the reference luma block.
  • the device is applied to an encoding end.
  • the device 40 further includes:
  • a first encoding module 413, configured to add, after the reference luma block is determined, an index of the target intra prediction mode in the prediction mode candidate queue to the code stream of the current chroma block,
  • where the prediction mode candidate queue includes at least one prediction mode, each of which is a mode for predicting the prediction block of the current chroma block.
  • the apparatus 40 further includes:
  • an obtaining module 414, configured to acquire the identifier of the reference luma block after the reference luma block is determined;
  • the second encoding module 415 is configured to add the identifier of the reference luma block to the code stream of the current chroma block.
  • the obtaining module 414 is configured to:
  • the identifiers assigned to the luma blocks whose motion vectors can be referenced among the n luma blocks form an arithmetic progression starting from 0 or 1 with a common difference of 1.
  • the first determining module 401 is configured to:
  • the second determining module 404 is configured to:
  • the second target condition is that the sum of the absolute values of the residual values of the residual block corresponding to the prediction block determined based on the intra prediction mode is the smallest, or that the sum of the absolute values of the transformed residual values of the residual block corresponding to the prediction block determined based on the intra prediction mode is the smallest, or that the coding cost corresponding to encoding with the intra prediction mode is the smallest.
  • the determining unit is configured to: determine, according to a size of the current chroma block, and a distribution density ratio of the luma component and the chroma component, a luma image region corresponding to the current chroma block position, where The size of the luminance image area is equal to the product of the size of the current chroma block and the distribution density ratio.
  • the chroma intra prediction device since the motion vector of the chroma block is determined based on the motion vector of the luma block, fully utilizes the motion vector and the motion of the chroma component.
  • the correlation of the vector does not need to separately calculate the motion vector of the chroma component, which simplifies the process of the intra-frame motion compensation technique, reduces the computational cost of the motion vector of the chroma component, and correspondingly reduces the computational cost of the overall motion vector.
  • An embodiment of the present application provides a chroma intra prediction apparatus, including:
  • at least one processor; and
  • at least one memory;
  • where the at least one memory stores at least one program, and the at least one processor is capable of executing the at least one program to perform the chroma intra prediction method according to any one provided by the embodiments of the present application.
  • An embodiment of the present application provides a storage medium, which is a non-transitory computer-readable storage medium, where instructions or code are stored in the storage medium.
  • When the instructions or code are executed by a processor, the processor is enabled to perform the chroma intra prediction method according to any one of the embodiments of the present application.


Abstract

The present application relates to a chroma intra prediction method and apparatus, belonging to the field of video encoding and decoding. The method includes: when a target intra prediction mode is an intra motion compensation mode, determining a motion vector of a current chroma block in an image frame to be processed based on a motion vector of a reference luma block, where the target intra prediction mode is a mode for predicting a prediction block of the current chroma block; in the intra motion compensation mode, the motion vector of the current chroma block is generated based on a motion vector of a luma block, and the reference luma block is one of n luma blocks corresponding to the position of the current chroma block, n≥1; and predicting the prediction block of the current chroma block based on the motion vector of the current chroma block. The present application solves the problem that, when the luma component and the chroma component are partitioned and coded independently, the intra motion compensation procedure is relatively complex and the computational cost of determining motion vectors is high. The present application is used for video encoding and decoding.

Description

Chroma intra prediction method and apparatus
This application claims priority to Chinese Patent Application No. 201810276799.5, filed on March 30, 2018 and entitled "Chroma intra prediction method and apparatus", the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
The present application relates to the field of video encoding and decoding, and in particular to a chroma intra prediction method and apparatus.
BACKGROUND
In the field of video coding, intra prediction is a technique that exploits the correlation between pixels within an image.
In the Screen Content Coding (SCC) extension of High Efficiency Video Coding (HEVC), there is an intra prediction technique called intra block copy, also known as intra motion compensation. In intra motion compensation, a block to be processed (e.g., a coding block or a decoding block) performs a motion search within the already-coded region of the image frame to which it belongs, to find the best-matching block as its prediction block for coding. Compared with ordinary intra prediction, intra motion compensation requires determining and encoding a motion vector for the coding block. The decoding end can then use this motion vector to find the prediction block of the corresponding decoding block and complete the decoding process.
In YUV coding, a pixel value includes a luma component Y, a chroma component U, and a chroma component V. In the HEVC SCC standard, the partitioning of the luma component and the chroma component is identical. The currently proposed Joint Exploration Model (JEM, the reference software model of H.266) already provides substantial performance gains over HEVC. In JEM, for the coding of I frames, the luma component and the chroma component are partitioned and coded independently, so the luma partitioning and the chroma partitioning are no longer identical. Under a coding framework with independent luma and chroma partitioning, if intra motion compensation is used, the luma component and the chroma component each need to decide whether to adopt intra motion compensation; and when intra motion compensation is adopted, each needs to determine its own motion vector and then predict its own prediction block based on the determined motion vector, to carry out its respective encoding or decoding process.
However, when the luma component and the chroma component are partitioned and coded independently, the intra motion compensation procedure is relatively complex and the computational cost of determining motion vectors is high.
SUMMARY
Embodiments of the present application provide a chroma intra prediction method and apparatus, solving the problem that the current intra motion compensation procedure is relatively complex and the computational cost of determining motion vectors is high. The technical solutions are as follows:
According to a first aspect of the embodiments of the present application, a chroma intra prediction method is provided, the method including:
when a target intra prediction mode is an intra motion compensation mode, determining a motion vector of a current chroma block in an image frame to be processed based on a motion vector of a reference luma block, where, as an example, the target intra prediction mode is a mode for predicting a prediction block of the current chroma block; in the intra motion compensation mode, the motion vector of the current chroma block is generated based on a motion vector of a luma block, and the reference luma block is one of n luma blocks corresponding to the position of the current chroma block, n≥1; and
predicting the prediction block of the current chroma block based on the motion vector of the current chroma block.
According to a second aspect of the embodiments of the present application, a chroma intra prediction apparatus is provided, the apparatus including:
a first determining module, configured to determine a motion vector of a current chroma block in an image frame to be processed based on a motion vector of a reference luma block when a target intra prediction mode is an intra motion compensation mode, where, as an example, the target intra prediction mode is a mode for predicting a prediction block of the current chroma block; in the intra motion compensation mode, the motion vector of the current chroma block is generated based on a motion vector of a luma block, and the reference luma block is one of n luma blocks corresponding to the position of the current chroma block, n≥1; and
a prediction module, configured to predict the prediction block of the current chroma block based on the motion vector of the current chroma block.
According to a third aspect of the embodiments of the present application, a chroma intra prediction apparatus is provided, including:
at least one processor; and
at least one memory;
where the at least one memory stores at least one program, and the at least one processor is capable of executing the at least one program to perform the chroma intra prediction method according to any one of the first aspect.
According to a fourth aspect of the embodiments of the present application, a storage medium is provided, the storage medium storing instructions or code,
where the instructions or code, when executed by a processor, enable the processor to perform the chroma intra prediction method according to any one of the first aspect.
The technical solutions provided by the embodiments of the present application may include the following beneficial effects:
In the chroma intra prediction method and apparatus provided by the embodiments of the present application, since the motion vector of the chroma block is determined based on the motion vector of a luma block, the correlation between the motion vector of the luma component and the motion vector of the chroma component is fully exploited, and the motion vector of the chroma component does not need to be computed separately. This simplifies the intra motion compensation procedure, reduces the computational cost of the chroma-component motion vector, and correspondingly reduces the overall motion-vector computation cost. Moreover, since the motion vector of the chroma block is generated based on the motion vector of a luma block, the motion vector of the chroma block does not need to be encoded separately during encoding, which reduces the coding cost of the chroma-component motion vector and helps improve video coding efficiency.
It should be understood that the above general description and the following detailed description are merely exemplary and do not limit the present application.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a flowchart of a chroma intra prediction method according to an exemplary embodiment;
FIG. 2 is a flowchart of another chroma intra prediction method according to an exemplary embodiment;
FIG. 3 is a schematic diagram of a partitioning manner of a chroma largest coding unit according to an exemplary embodiment;
FIG. 4 is a schematic diagram of the numbering of the blocks obtained by the partitioning in FIG. 3;
FIG. 5 is a schematic diagram of another partitioning manner of a chroma largest coding unit according to an exemplary embodiment;
FIG. 6 is a flowchart of a method for adding an intra motion compensation mode to a prediction mode candidate queue according to an exemplary embodiment;
FIG. 7, FIG. 8 and FIG. 9 are schematic diagrams of the positional correspondence between a chroma block and a luma image region in the 4:2:0 coding format;
FIG. 10 is a schematic diagram of an image frame to be processed during processing according to an exemplary embodiment;
FIG. 11 and FIG. 12 are schematic structural diagrams of two coding partitioning manners according to an exemplary embodiment;
FIG. 13 is a flowchart of yet another chroma intra prediction method according to an exemplary embodiment;
FIG. 14 is a flowchart of a chroma intra prediction method according to another exemplary embodiment;
FIG. 15 is a schematic structural diagram of a chroma intra prediction apparatus according to an exemplary embodiment;
FIG. 16 is a schematic structural diagram of another chroma intra prediction apparatus according to an exemplary embodiment;
FIG. 17 is a schematic structural diagram of a constructing module according to an exemplary embodiment;
FIG. 18 is a schematic structural diagram of a third determining module according to an exemplary embodiment;
FIG. 19 is a schematic structural diagram of another third determining module according to an exemplary embodiment;
FIG. 20 is a schematic structural diagram of yet another chroma intra prediction apparatus according to an exemplary embodiment;
FIG. 21 is a schematic structural diagram of still another chroma intra prediction apparatus according to an exemplary embodiment;
FIG. 22 is a schematic structural diagram of a chroma intra prediction apparatus according to another exemplary embodiment;
FIG. 23 is a schematic structural diagram of another chroma intra prediction apparatus according to another exemplary embodiment;
FIG. 24 is a schematic structural diagram of yet another chroma intra prediction apparatus according to another exemplary embodiment;
FIG. 25 is a schematic structural diagram of still another chroma intra prediction apparatus according to another exemplary embodiment.
The accompanying drawings herein are incorporated into and constitute a part of this specification, illustrate embodiments consistent with the present application, and together with the specification serve to explain the principles of the present application.
DETAILED DESCRIPTION
To make the objectives, technical solutions, and advantages of the present application clearer, the embodiments of the present application are described in further detail below with reference to the accompanying drawings.
Embodiments of the present application provide a chroma intra prediction method applied to the field of video encoding and decoding. The method is applicable to encoding and decoding video whose coding format (also called video format) is the YUV format. To facilitate understanding, the coding principle of the YUV format is first briefly introduced below:
When the coding format is the YUV format, the basic coding principle may be as follows: an image acquisition device such as a three-tube color camera or a charge-coupled device (CCD) camera captures an image; the obtained color image signal is color-separated and separately amplified and corrected to obtain an RGB signal; the RGB signal is then passed through a matrix transform circuit to obtain a luma component Y signal and two color-difference signals B-Y (i.e., the chroma component U signal) and R-Y (i.e., the chroma component V signal); finally, the luma component Y signal, the chroma component U signal, and the chroma component V signal are encoded separately and transmitted over the same channel. This color representation is the so-called YUV color space representation. In the YUV color space, the luma component Y signal, the chroma component U signal, and the chroma component V signal are separate. The YUV format may also be obtained in other ways, which is not limited in the embodiments of the present application.
In practical applications, since a YUV-format image (hereinafter referred to as the target image) is usually obtained by sampling an initial image captured by an image acquisition device, such as a camera, after a series of processing (e.g., format conversion), the sampling rates of the luma component Y, the chroma component U, and the chroma component V may differ. In the initial image, all color components have the same distribution density, i.e., the distribution density ratio of the color components is 1:1:1; because the components are sampled at different rates, the distribution densities of the different color components in the resulting target image differ. Usually, in the target image, the distribution density ratio of the color components equals the sampling rate ratio. It should be noted that the distribution density of a color component refers to the number of pieces of information of that color component contained per unit size. For example, the distribution density of the luma component refers to the number of luma pixel values (also called luma values) per unit size.
The current YUV format is divided into multiple coding formats based on different sampling rate ratios. A coding format can be expressed in terms of the sampling rate ratio, known as A:B:C notation. Current coding formats include 4:4:4, 4:2:2, 4:2:0, 4:1:1, and so on. For example, the 4:4:4 coding format indicates that the luma component Y, the chroma component U, and the chroma component V of the target image have the same sampling rate, no downsampling is performed on the original image, and the distribution density ratio of the color components of the target image is 1:1:1. The 4:2:2 coding format indicates that every two luma components Y in the target image share one pair of chroma components U and V, and the distribution density ratio of the color components of the target image is 2:1:1; that is, with the pixel as the sampling unit, the luma component of the original image is not downsampled, while the chroma components are downsampled 2:1 horizontally and not downsampled vertically to obtain the target image. The 4:2:0 coding format indicates that, for each of the chroma components U and V of the target image, the sampling rate in both the horizontal and vertical directions is 2:1; the distribution density ratio of the luma component Y to the chroma component U is 2:1, as is that of Y to V; that is, with the pixel as the sampling unit, the luma component of the original image is not downsampled, while the chroma components are downsampled 2:1 both horizontally and vertically to obtain the target image.
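The relation between the A:B:C coding formats above and the size of each chroma plane can be illustrated with a short sketch (the helper name and lookup table are assumptions; the subsampling factors follow the 4:4:4, 4:2:2 and 4:2:0 definitions just described):

```python
# Horizontal and vertical chroma subsampling factors per coding format.
SUBSAMPLING = {
    "4:4:4": (1, 1),  # no chroma downsampling
    "4:2:2": (2, 1),  # 2:1 horizontal, none vertical
    "4:2:0": (2, 2),  # 2:1 horizontal and 2:1 vertical
}

def chroma_plane_size(luma_w, luma_h, fmt):
    """Size of each chroma plane (U or V) for a given luma plane size."""
    sx, sy = SUBSAMPLING[fmt]
    return luma_w // sx, luma_h // sy
```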
As shown in FIG. 1, an embodiment of the present application provides a chroma intra prediction method applied to the encoding and decoding of I frames. An I frame, also called an intra picture, is usually the first frame of each Group of Pictures (GOP) and is also known as an intra-coded frame or key frame. The chroma intra prediction method includes:
Step 101: when a target intra prediction mode is an intra motion compensation mode, determine a motion vector of a current chroma block in an image frame to be processed based on a motion vector of a reference luma block.
As an example, the target intra prediction mode is a mode for predicting a prediction block of the current chroma block. In the intra motion compensation mode, the prediction block of the current chroma block is generated based on intra motion compensation: a motion vector of the current chroma block is obtained, and the prediction block is generated based on that motion vector. In the embodiments of the present application, the motion vector of the current chroma block is generated based on the motion vector of a reference luma block, where the reference luma block is one of n luma blocks corresponding to the position of the current chroma block, n≥1.
Step 102: predict the prediction block of the current chroma block based on the motion vector of the current chroma block.
As an example, for the encoding end, the current chroma block refers to the chroma block currently to be encoded; for the decoding end, it refers to the chroma block currently to be decoded. The current chroma block may be an image block of the chroma component U or an image block of the chroma component V.
As can be seen from the YUV coding principle above, the luma component and the chroma component are correlated, and so are their motion vectors. In the chroma intra prediction method provided by the embodiments of the present application, since the motion vector of the chroma block is determined based on the motion vector of a luma block, the correlation between the motion vectors of the luma and chroma components is fully exploited, and the chroma-component motion vector does not need to be computed separately. This simplifies the intra motion compensation procedure, reduces the computational cost of the chroma-component motion vector, and correspondingly reduces the overall motion-vector computation cost. Moreover, since the chroma block's motion vector is generated from the luma block's motion vector, it does not need to be encoded separately during encoding, which reduces the coding cost of the chroma-component motion vector and helps improve video coding efficiency.
In the embodiments of the present application, the intra prediction method can be applied to both the encoding end and the decoding end. The embodiments of the present application take these two cases as examples and describe them in the following two aspects:
In a first aspect, when the intra prediction method is applied to the encoding end, as shown in FIG. 2, the chroma intra prediction method is performed by the encoding end and is used for encoding I frames. The method includes:
Step 201: partition the image frame to be processed into chroma blocks.
The image frame to be processed includes a luma image and a chroma image located in the same region; encoding the luma component actually encodes the luma image, and encoding the chroma component actually encodes the chroma image. When video-encoding the image frame to be processed, the frame is usually first split into equally sized largest coding units (Coding Tree Units, CTUs), each of which contains both a luma CTU and a chroma CTU. A CTU is usually a square coding block whose size may be 8×8, 16×16, 32×32, 64×64, 128×128, or 256×256 pixels, or larger. For each CTU, the luma component is encoded first. Encoding the luma component may include: partitioning the luma CTU, where the resulting luma blocks may be square or rectangular; and then determining the coding type of each luma block and the prediction information corresponding to that coding type. As an example, the coding type of a luma block is usually an intra prediction type or an inter prediction type (also called inter coding type); as an example, the prediction information corresponding to the intra prediction type includes a prediction mode, and the prediction information corresponding to the inter prediction type includes a prediction vector and a reference frame index. For each luma block, after the prediction information is determined based on the determined coding type, the luma block is predicted according to the prediction information to obtain its prediction block; the difference between the original pixel values of the luma block and the pixel values of the prediction block yields the residuals of the luma block (all residuals of the luma block form its residual block); the residuals are transformed, quantized, and entropy-encoded to obtain the code stream of the luma block. As an example, although intra motion compensation is an intra prediction technique, when the coding type is marked, it is usually marked as an inter prediction type.
After the luma blocks in the image frame to be processed are encoded, the chroma CTU may be partitioned. Usually, the chroma CTU is partitioned with a quadtree-plus-binary-tree method; the partitioning manner may be agreed in advance with the decoding end, or encoded in the code stream after partitioning. The process includes:
First, the chroma CTU is recursively partitioned by a quadtree. There are multiple quadtree partitioning manners. As shown in FIG. 3, which is a schematic diagram of a chroma CTU partitioning manner, the chroma CTU P is first partitioned by the quadtree into four blocks P1, P2, P3, and P4; then, the first block P1 and the fourth block P4 are each further quadtree-partitioned into four blocks, namely P11, P12, P13, P14 and P41, P42, P43, P44. Each block finally obtained by the quadtree partitioning is called a quadtree leaf node, denoted Qi, where i is the number of the quadtree leaf node and also indicates the coding order, as shown in FIG. 4, which is a schematic diagram of the numbering of the blocks partitioned in FIG. 3. As can be seen in FIG. 4, the coding order of the quadtree leaf nodes is: blocks sharing one parent node are encoded in left-to-right, top-to-bottom scanning order.
After the quadtree partitioning is completed, each quadtree leaf node may further be partitioned by a binary tree into two equally sized blocks; there are two binary-tree partitioning manners: vertical or horizontal. A block obtained by binary-tree partitioning may itself be further binary-tree-partitioned. As shown in FIG. 5, the dashed lines indicate the final block partitioning result after binary-tree partitioning. Likewise, each block finally obtained by binary-tree partitioning is called a binary-tree leaf node, and the two blocks obtained by one binary-tree split are also encoded in left-to-right, top-to-bottom order. The blocks obtained after quadtree and binary-tree partitioning are the final chroma blocks, denoted Bi, where i is the number of the chroma block and also indicates the coding order of the chroma blocks.
It should be noted that the quadtree-plus-binary-tree partitioning of the chroma CTU described above is merely illustrative. In actual implementations of the embodiments of the present application, the chroma CTU may also be partitioned using only the quadtree method or only the binary-tree method, which is not limited in the embodiments of the present application.
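The quadtree-plus-binary-tree partitioning and its left-to-right, top-to-bottom coding order can be sketched as a recursive enumeration. This is a simplified illustration only: the split decisions are supplied as an input callback, whereas a real encoder derives them itself (e.g., by rate-distortion search):

```python
def qtbt_leaves(x, y, w, h, split):
    """Yield leaf blocks (x, y, w, h) in coding order. `split(x, y, w, h)`
    returns 'quad', 'hor', 'ver', or None (meaning the block is a leaf)."""
    s = split(x, y, w, h)
    if s == "quad":                        # four equal sub-blocks, scanned
        hw, hh = w // 2, h // 2            # left-to-right, top-to-bottom
        for sy, sx in ((y, x), (y, x + hw), (y + hh, x), (y + hh, x + hw)):
            yield from qtbt_leaves(sx, sy, hw, hh, split)
    elif s == "hor":                       # horizontal binary split
        yield from qtbt_leaves(x, y, w, h // 2, split)
        yield from qtbt_leaves(x, y + h // 2, w, h // 2, split)
    elif s == "ver":                       # vertical binary split
        yield from qtbt_leaves(x, y, w // 2, h, split)
        yield from qtbt_leaves(x + w // 2, y, w // 2, h, split)
    else:                                  # leaf: a final chroma block
        yield (x, y, w, h)
```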
Step 202: for the current chroma block, construct a prediction mode candidate queue, where the queue includes at least one prediction mode and each prediction mode is a mode for predicting the prediction block of the current chroma block.
In the embodiments of the present application, the prediction mode candidate queue may contain multiple types of prediction modes. For example, the multiple types include: intra prediction modes, cross-component prediction modes, and the intra motion compensation mode. As an example, the definitions of the intra prediction modes and the cross-component prediction modes may refer to JEM. The intra motion compensation mode is a new mode, proposed by the embodiments of the present application, to be added to the prediction mode candidate queue; in this mode, the motion vector of the current chroma block is generated based on the motion vector of a luma block.
Each prediction mode in the prediction mode candidate queue has a corresponding mode number. For example, in JEM, 0 to 66 denote 67 intra prediction modes, and 67 to 72 denote 6 cross-component prediction modes. To distinguish it from the traditional prediction modes, in the embodiments of the present application, the intra motion compensation mode added to the prediction mode candidate queue is denoted by a mode number different from those of the traditional prediction modes; for example, a mode number greater than or equal to 73 may be used to denote the intra motion compensation mode.
In the embodiments of the present application, the process of constructing the prediction mode candidate queue may include: a process of adding the cross-component prediction modes to the queue; a process of adding the intra motion compensation mode to the queue; and a process of adding the intra prediction modes to the queue. The embodiments of the present application do not limit the execution order of these three processes; usually, they may be executed in the order of cross-component prediction modes, intra motion compensation mode, and intra prediction modes. Moreover, since the length of the prediction mode candidate queue (i.e., the number of prediction modes allowed in the queue) is preset, i.e., there is a length threshold (as an example, 11), the execution of the three processes is also limited by the length threshold. As an example, when the total number of prediction modes added to the queue has not reached the length threshold, other prediction modes may still be added. For example, the process of constructing the prediction mode candidate queue includes:
Step A1: add the cross-component prediction modes to the prediction mode candidate queue.
Currently, there are 6 cross-component prediction modes, with mode numbers 67 to 72. Step A1 includes adding at least one cross-component prediction mode to the prediction mode candidate queue. As an example, the 6 prediction modes numbered 67 to 72 may be added to the queue in sequence.
Step A2: add the intra motion compensation mode to the prediction mode candidate queue.
Step A3: if the prediction mode candidate queue is not yet full, add the intra prediction modes of the n luma blocks corresponding to the current chroma block to the queue in a first order.
Step A4: if the prediction mode candidate queue is not yet full, add the intra prediction modes of the chroma blocks adjacent to the current chroma block to the queue in a second order.
Step A5: if the prediction mode candidate queue is not yet full, add the planar mode (also called plane mode, with mode number 3) and the DC mode.
Step A6: if the prediction mode candidate queue is not yet full, add the directional modes adjacent to the directional modes already in the queue.
It should be noted that a directional mode is a kind of prediction mode. If the prediction mode candidate queue is not yet full, what needs to be added are the directional modes adjacent to the directional modes already in the queue.
Step A7: if the prediction mode candidate queue is not yet full, add the vertical mode, the horizontal mode, and intra prediction mode No. 2 (i.e., the intra prediction mode with mode number 2).
In step A7, since the number of modes that can still be added when the queue is not yet full may be less than or equal to 3, the vertical mode, the horizontal mode, and intra prediction mode No. 2 may be added in sequence until the queue is full; ultimately, one or more of the vertical mode, the horizontal mode, and intra prediction mode No. 2 may have been added.
It should be noted that the order of steps A1 to A7 above may be adjusted appropriately, and steps may be added or removed as the situation requires. Any variation readily conceivable by those skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application and is therefore not described again.
As an example, as shown in FIG. 6, the process of adding the intra motion compensation mode to the prediction mode candidate queue in step A2 may include:
Step 2021: determine the n luma blocks corresponding to the position of the current chroma block in the image frame to be processed, n≥1.
As an example, the process of determining the n luma blocks corresponding to the position of the current chroma block in the image frame to be processed may include:
Step B1: determine the luma image region corresponding to the position of the current chroma block in the image frame to be processed.
Since the chroma partitioning and the luma partitioning are independent, in the image frame to be processed there are multiple positional correspondences between chroma blocks and luma blocks: each chroma block may correspond in position to one or more luma blocks, and multiple chroma blocks may correspond in position to one luma block. Moreover, the positional correspondence between chroma blocks and luma blocks is related to the coding format of the image frame to be processed. As described above for the YUV coding principle, depending on the coding format, the distribution densities of the luma and chroma components in the same image frame may be the same or different. Therefore, in the embodiments of the present application, the luma image region corresponding to the position of the current chroma block is determined first, and then the n luma blocks corresponding to the position of the current chroma block are determined.
As an example, the process of determining the luma image region corresponding to the position of the current chroma block in the image frame to be processed includes: determining, according to the coding format of the image frame to be processed, the distribution density ratio of the luma component to the chroma component; and determining, based on the size of the current chroma block and the distribution density ratio, the luma image region corresponding to the position of the current chroma block, where the size of the luma image region equals the product of the size of the current chroma block and the distribution density ratio. That is, the size of the luma image region is w1 = w2 × K, where, as an example, w2 is the size of the current chroma block and K is the distribution density ratio of the luma component to the chroma component.
For example, suppose the distribution density ratio of the luma component Y to the chroma component U of the image frame to be processed is K, comprising a horizontal distribution density ratio K1 (the horizontal direction may be regarded as the width direction of the luma and chroma components) and a vertical distribution density ratio K2 (the vertical direction may be regarded as the height direction). Then the distribution density ratios of the luma component to the chroma component in the width and height directions are K1 and K2 respectively. Suppose the width of the current chroma block is CW, its height is CH, and the coordinates of its top-left pixel in the image frame to be processed are (Cx, Cy); then the luma image region is the rectangular region whose top-left pixel coordinates are (K1×Cx, K2×Cy) and whose width and height are K1×CW and K2×CH respectively.
For example, when the coding format of the image frame to be processed is 4:2:0, the distribution density ratio of the luma component Y to the chroma component U is 2:1, as is that of Y to V; that is, the width and height of the luma component of the frame are each twice those of the chroma component, so the distribution density ratios K of luma to chroma in the width and height directions are 2:1 and 2:1. Suppose the coordinates of the top-left pixel of the current chroma block in the image frame are (Cx, Cy); then the luma image region is the rectangular region whose top-left pixel coordinates are (2×Cx, 2×Cy) and whose width and height are 2×CW and 2×CH respectively.
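The computation of the luma image region described above can be written directly as follows (the function name is hypothetical); K1 and K2 are the horizontal and vertical luma-to-chroma distribution density ratios, both 2 for the 4:2:0 coding format:

```python
def luma_region(cx, cy, cw, ch, k1=2, k2=2):
    """Luma image region corresponding to a chroma block whose top-left
    pixel is (cx, cy) with width cw and height ch. k1 and k2 are the
    horizontal and vertical luma:chroma distribution density ratios
    (both 2 for the 4:2:0 coding format)."""
    return (k1 * cx, k2 * cy, k1 * cw, k2 * ch)  # (x, y, width, height)
```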
FIG. 7 to FIG. 9 are schematic diagrams of the positional correspondence between a chroma block and a luma image region in the 4:2:0 coding format.
As shown in FIG. 7, in the image frame to be processed, the luma image region M1 corresponding in position to the chroma block B1 contains 6 luma blocks, M11 to M16. As shown in FIG. 8, in the image frame to be processed, the luma image region M2 corresponding in position to the chroma block B2 contains 1 luma block, M17. As shown in FIG. 9, in the image frame to be processed, the luma image region M3 corresponding in position to the chroma block B3 contains half a luma block, i.e., the chroma block B3 corresponds to 1/2 of a luma block; in this case two chroma blocks correspond in position to one luma block M18.
Step B2: determine the n luma blocks among all target luma blocks.
As an example, a target luma block is a luma block partially or wholly within the luma image region. That is, if part or all of a luma block in the image frame to be processed is located within the luma image region, the luma block is determined to be a target luma block.
In the embodiments of the present application, in a first determination manner, all target luma blocks may be taken as the n luma blocks; in a second determination manner, a certain filtering may be performed among all target luma blocks to reduce subsequent computational cost, for example, selecting the luma blocks at specified positions among all target luma blocks as the n luma blocks. A luma block at a specified position refers to a luma block covering a specified pixel of the luma image region; a luma block covering a specified pixel means that the pixels included in the luma block include the specified pixel. For example, the specified pixel includes any one of the center pixel CR, the top-left pixel LT, the top-right pixel TR, the bottom-left pixel BL, and the bottom-right pixel BR of the luma image region. For example, the luma blocks at the specified positions include: the luma block covering the center pixel of the luma image region, the luma block covering the top-left pixel, the luma block covering the top-right pixel, the luma block covering the bottom-left pixel, and the luma block covering the bottom-right pixel of the luma image region.
For example, suppose the origin of the image coordinate system of the image frame to be processed is its top-left corner, and the luma image region corresponding to the current chroma block is the rectangular region whose top-left pixel LT has coordinates (2×Cx, 2×Cy) and whose width and height are 2×CW and 2×CH. Then the coordinates of the center pixel CR, the top-right pixel TR, the bottom-left pixel BL, and the bottom-right pixel BR of the luma image region corresponding to the current chroma block are (2×Cx+CW, 2×Cy+CH), (2×Cx+2×CW, 2×Cy), (2×Cx, 2×Cy+2×CH), and (2×Cx+2×CW, 2×Cy+2×CH) respectively. The luma blocks covering the specified pixels among all target luma blocks are the luma blocks covering LT, CR, TR, BL, and BR. It should be noted that when the luma image is partitioned differently, the luma blocks covering LT, CR, TR, BL, and BR also differ; for example, the luma blocks covering these 5 pixels may be 5 different luma blocks, or may be 1 luma block.
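Under the 4:2:0 assumptions of this example, the five specified pixels LT, CR, TR, BL and BR can be computed with a small sketch (hypothetical helper; the coordinate expressions match those given above):

```python
def specified_pixels(cx, cy, cw, ch):
    """Coordinates of the LT, CR, TR, BL and BR pixels of the luma image
    region corresponding to a chroma block (4:2:0, density ratio 2)."""
    return {
        "LT": (2 * cx, 2 * cy),
        "CR": (2 * cx + cw, 2 * cy + ch),
        "TR": (2 * cx + 2 * cw, 2 * cy),
        "BL": (2 * cx, 2 * cy + 2 * ch),
        "BR": (2 * cx + 2 * cw, 2 * cy + 2 * ch),
    }
```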
For example, when the correspondence between the chroma block B1 and the luma image region is as shown in FIG. 7, all target luma blocks are the 6 luma blocks M11 to M16. The luma block covering the center pixel CR of the luma image region is M15, the luma block covering the top-left pixel LT is M11, the luma block covering the top-right pixel TR is M13, the luma block covering the bottom-left pixel BL is M14, and the luma block covering the bottom-right pixel BR is M16. With the first determination manner, the n determined luma blocks are the 6 luma blocks M11 to M16; with the second determination manner, the n determined luma blocks are the 5 luma blocks M11, M13, M14, M15, and M16.
For example, when the correspondence between the chroma block B2 and the luma image region is as shown in FIG. 8, all target luma blocks are the luma block M17. The luma blocks covering the center pixel CR, the top-left pixel LT, the top-right pixel TR, the bottom-left pixel BL, and the bottom-right pixel BR of the luma image region are all the same luma block M17. Therefore, with either the first or the second determination manner, the n determined luma blocks are the luma block M17.
For example, when the correspondence between the chroma block B3 and the luma image region is as shown in FIG. 9, since the luma image region is half of the luma block M18, i.e., part of the luma block M18 lies in the luma image region, the luma block M18 is one target luma block, and all target luma blocks are the luma block M18. Therefore, with either the first or the second determination manner, the n determined luma blocks are the luma block M18.
The specified pixel above may also be set to other positions according to specific scenarios. For example, the specified pixel may also be at least one of the center pixel of the top edge pixel row, the center pixel of the bottom edge pixel row, the center pixel of the left edge pixel column, and the center pixel of the right edge pixel column of the luma image region; the embodiments of the present application are merely illustrative.
Step 2022: detect whether there is a luma block whose motion vector can be referenced among the n luma blocks.
As an example, the process of detecting whether there is a luma block whose motion vector can be referenced among the n luma blocks may include:
detecting, in a target order, whether the motion vector of each of the n luma blocks can be referenced, until a detection stop condition is reached. As an example, the detection stop condition is that the total number of referenceable motion vectors equals a preset number threshold k, or that the n luma blocks have been traversed. As an example, k may be 1 or 5.
For example, the process of detecting, in the target order, whether the motion vector of each of the n luma blocks can be referenced until the detection stop condition is reached may include:
Set i = 1.
Perform a detection process, the detection process including:
Step C1: detect whether the motion vector of the i-th luma block among the n luma blocks can be referenced.
Step C2: when the motion vector of the i-th luma block can be referenced, detect whether the detection stop condition is reached.
Step C3: when the detection stop condition is not reached, update i such that the updated i = i + 1, and perform the detection process again.
Step C4: when the detection stop condition is reached, stop performing the detection process.
As an example, in step C1, the process of detecting whether the motion vector of the i-th luma block among the n luma blocks can be referenced may include:
Step C11: detect the prediction type of the i-th luma block among the n luma blocks.
Step C12: when the prediction type of the i-th luma block is the intra prediction type, determine that the motion vector of the i-th luma block cannot be referenced.
As stated above, the prediction information corresponding to the intra prediction type does not include a motion vector, whereas the intra prediction method provided by the embodiments of the present application applies to inter-style prediction within I frames, whose prediction information includes a motion vector. Therefore, when the prediction type of the i-th luma block is the intra prediction type, its prediction information has no motion vector, and there is nothing for the current chroma block to reference.
步骤C13、当第i个亮度块的预测类型为帧间预测类型时,基于第i个亮度块的运动矢量生成备选运动矢量。
该基于第i个亮度块的运动矢量生成备选运动矢量的过程可以包括:
步骤C131、按照待处理图像帧的编码格式,确定当前色度块与第i个亮度块的矢量缩放比。
在本申请实施例中,当前色度块与第i个亮度块的矢量缩放比等于待处理图像帧中,色度块与亮度块的分布密度比例,该分布密度比例由待处理图像帧的编码格式决定。示例的,编码格式为4:2:0时,在水平方向(也称x方向)上,色度块与亮度块的分布密度比例为1:2,在竖直方向(也称y方向)上,色度块与亮度块的分布密度比例为1:2,则当前色度块与第i个亮度块的矢量在水平方向上的缩放比等于1:2,在竖直方向上的缩放比等于1:2。
步骤C132、基于矢量缩放比,对第i个亮度块的运动矢量进行缩放得到第i个亮度块的备选运动矢量。
基于该矢量缩放比,对第i个亮度块的运动矢量进行等比例的缩放即可得到第i个亮度块的备选运动矢量。例如,第i个亮度块的运动矢量为(-11,-3),编码格式为4:2:0,则当前色度块的备选运动矢量为(-5.5,-1.5)。
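步骤C131至C132的缩放运算可示意如下(Python,缩放比以小数表示,4:2:0格式下水平、竖直方向均为0.5,4:4:4格式下均为1.0;命名为本文假设):

```python
def scale_mv(luma_mv, ratio_x, ratio_y):
    """基于矢量缩放比,对亮度块的运动矢量进行等比例缩放,
    得到当前色度块的备选运动矢量。"""
    mvx, mvy = luma_mv
    return (mvx * ratio_x, mvy * ratio_y)
```

例如4:2:0格式下,亮度块运动矢量(-11,-3)缩放后即为(-5.5,-1.5),与上文示例一致。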
步骤C14、检测第i个亮度块对应的备选运动矢量与当前检测得到的运动矢量可参考的亮度块所对应的备选运动矢量是否相同。
步骤C14实际上是一个查找重复运动矢量的过程,简称查重过程,在本申请实施例中,当第i个亮度块对应的备选运动矢量与当前检测得到的运动矢量可参考的亮度块所对应的备选运动矢量不同时,执行步骤C15;当第i个亮度块对应的备选运动矢量与当前检测得到的运动矢量可参考的亮度块所对应的备选运动矢量相同时,执行步骤C18,以确定第i个亮度块的运动矢量不可参考。这样避免后续过程对相同的备选运动矢量反复进行处理,减少运算代价。
步骤C15、当第i个亮度块对应的备选运动矢量与当前检测得到的运动矢量可参考的亮度块所对应的备选运动矢量不同时,基于第i个亮度块对应的备选运动矢量,预测当前色度块的备选预测块。
例如,假设当前色度块的左上角像素点在色度图像中的坐标为(Cx,Cy),备选运动矢量为(MVx,MVy),则备选预测块为在待处理图像帧的已编码色度图像中左上角像素点坐标为(Cx+MVx,Cy+MVy),尺寸和当前色度块相同的图像块。
步骤C16、当备选预测块有效,确定第i个亮度块的运动矢量可参考。
步骤C17、当备选预测块无效,确定第i个亮度块的运动矢量不可参考。
步骤C18、当第i个亮度块对应的备选运动矢量与当前检测得到的运动矢量可参考的亮度块对应的备选运动矢量相同时,确定第i个亮度块的运动矢量不可参考。
需要说明的是,在上述步骤C15之后,可以判断备选预测块是否有效以执行步骤C16或C17。示例的,该判断过程可以包括以下两种实现方式:
第一种实现方式:
检测备选预测块是否全部位于待处理图像帧中的色度已编码区域;当备选预测块全部位于待处理图像帧中的色度已编码区域内,确定备选预测块有效;当备选预测块不是全部位于待处理图像帧中的色度已编码区域内,确定备选预测块无效。
在当前色度块编码前,待处理图像帧的色度图像中色度已编码区域包括已经编码的CTU,和当前色度块所属的CTU中已经编码的四叉树叶节点和二叉树叶节点,如图10中所示的点状花纹区域。对于当前色度块,假设备选运动矢量(即由第i个亮度块的运动矢量缩放后得到的运动矢量)为(MVx,MVy),如果(MVx,MVy)为整像素运动矢量,则备选预测块可以直接基于该整像素运动矢量确定,当备选预测块全部在色度已编码区域内,则认为备选预测块有效,当备选预测块不是全部在色度已编码区域内,则认为备选预测块无效;如果(MVx,MVy)为分像素运动矢量,备选预测块需要通过插值得到,则可以先获取备选预测块对应的参考色度块,该备选预测块的色度像素值是基于参考色度块的色度像素值插值得到的;检测参考色度块是否全部位于待处理图像帧中的色度已编码区域;当参考色度块全部位于待处理图像帧中的色度已编码区域内,确定备选预测块全部位于待处理图像帧中的色度已编码区域,此时,认为备选预测块有效;当参考色度块不是全部位于待处理图像帧中的色度已编码区域内,确定备选预测块不是全部位于待处理图像帧中的色度已编码区域内,此时认为备选预测块无效。
作为一种示例,检测参考色度块是否全部位于待处理图像帧中的色度已编码区域内时,可以通过检测该参考色度块的左上角像素点的坐标和右下角像素点的坐标是否在色度已编码区域的坐标范围内,当该参考色度块的左上角像素点的坐标和右下角像素点的坐标均在色度已编码区域的坐标范围内,确定参考色度块全部位于待处理图像帧中的色度已编码区域;当参考色度块的左上角像素点的坐标和右下角像素点的坐标的至少一个不位于色度已编码区域的坐标范围内,确定参考色度块不是全部位于待处理图像帧中的色度已编码区域内。
以图10中的黑色块表示的当前色度块为例,假设基于第i个亮度块的运动矢量确定的备选运动矢量为分像素运动矢量,此时需要通过插值滤波器进行插值处理以得到备选预测块。
假设插值处理所使用的插值滤波器为N抽头滤波器,N为正整数,插值出备选预测块的分像素位置的色度像素值,所需要的是该位置左边N1个像素点的色度像素值、右边N2个像素点的色度像素值、上边N3个像素点的色度像素值和下边N4个像素点的色度像素值,作为一种示例,N1+N2=N,N3+N4=N。则当(Cx+MV1x+1-N1,Cy+MV1y+1-N3)和(Cx+MV1x+(CW-1)+N2,Cy+MV1y+(CH-1)+N4)这两个坐标位置均不落入色度未编码区域内时,可以确定备选预测块有效。这两个坐标位置分别表示参考色度块的左上角像素点的坐标和右下角像素点的坐标,作为一种示例,(Cx,Cy)表示当前色度块的左上角像素点的坐标,CW和CH分别表示当前色度块的宽和高。MV1x表示比MVx小的最大整数,MV1y表示比MVy小的最大整数,例如,(MVx,MVy)为(-5.5,-1.5),则MV1x为-6,MV1y为-2。
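上述参考色度块左上角与右下角坐标的计算可示意为如下Python片段(math.floor对应分像素运动矢量下"比MVx小的最大整数"的取值;函数名为本文假设):

```python
import math

def reference_block_corners(cx, cy, cw, ch, mv, n1, n2, n3, n4):
    """分像素运动矢量(MVx,MVy)下,N抽头插值所需参考色度块的
    左上角坐标(Cx+MV1x+1-N1, Cy+MV1y+1-N3)和
    右下角坐标(Cx+MV1x+(CW-1)+N2, Cy+MV1y+(CH-1)+N4)。"""
    mvx, mvy = mv
    mv1x, mv1y = math.floor(mvx), math.floor(mvy)   # 例:-5.5 -> -6
    top_left = (cx + mv1x + 1 - n1, cy + mv1y + 1 - n3)
    bottom_right = (cx + mv1x + (cw - 1) + n2, cy + mv1y + (ch - 1) + n4)
    return top_left, bottom_right
```

判断备选预测块是否有效时,只需检测这两个角点是否均落在色度已编码区域内。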
第二种实现方式:
检测备选预测块是否全部位于待处理图像帧中的色度已编码区域内,以及检测备选预测块是否位于当前色度块的指定方位;(需要说明的是,本申请实施例对前述检测备选预测块是否全部位于待处理图像帧中的色度已编码区域内,以及检测备选预测块是否位于当前色度块的指定方位的先后顺序不做限定)当备选预测块全部位于待处理图像帧中的色度已编码区域内,且位于当前色度块的指定方位,确定备选预测块有效;当备选预测块不是全部位于待处理图像帧中的色度已编码区域内,或者,不位于当前色度块的指定方位,确定备选预测块无效;作为一种示例,当前色度块的指定方位为当前色度块的左侧、上侧和左上侧的任一方位。
示例的,检测备选预测块是否位于当前色度块的指定方位可以包括:检测备选预测块的右下角像素点的坐标是否位于当前色度块的指定方位,当备选预测块的右下角像素点的坐标位于当前色度块的指定方位,确定备选预测块位于当前色度块的指定方位;当备选预测块的右下角像素点的坐标不位于当前色度块的指定方位,确定备选预测块不位于当前色度块的指定方位。或者,检测备选预测块是否位于当前色度块的指定方位还可以有其他方式,例如,检测备选预测块的第一像素点与当前色度块的第二像素点的相对位置,例如该第一像素点和第二像素点均可以为左上角像素点、右上角像素点、中间像素点、左下角像素点和右下角像素点的任一像素点。本申请实施例对此不作限定。
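以"检测右下角像素点是否位于当前色度块的左侧、上侧或左上侧"为例,该方位判定可示意如下(假设当前色度块左上角为(Cx,Cy),判定条件为本文的一种示意性理解,并非规范定义):

```python
def in_specified_orientation(br_x, br_y, cx, cy):
    """当备选预测块右下角像素点在当前色度块左上角的左侧(br_x < Cx)
    或上侧(br_y < Cy)时,认为备选预测块位于指定方位。"""
    return br_x < cx or br_y < cy
```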
需要说明的是,第二种实现方式中,检测备选预测块是否全部位于待处理图像帧中的色度已编码区域的方法可以参考上述第一种实现方式,本申请实施例对此不再赘述。
对于当前色度块,假设备选运动矢量(即由第i个亮度块的运动矢量缩放后得到的运动矢量)为(MVx,MVy),如果(MVx,MVy)为整像素运动矢量,若(Cx+MVx+(CW-1),Cy+MVy+(CH-1))在当前色度块所属的四叉树叶节点块内,则其还需要满足(Cx+MVx+(CW-1),Cy+MVy+(CH-1))在当前色度块的左侧、上侧和左上侧的任一方位,才可以确定该备选预测块有效;当备选预测块不是全部在色度已编码区域内,或者不位于当前色度块的左侧、上侧和左上侧的任一方位,则认为备选预测块无效。同样,如果(MVx,MVy)为分像素运动矢量,备选预测块需要通过插值得到,则可以先获取备选预测块对应的参考色度块,该备选预测块的色度像素值是基于参考色度块的像素值插值得到的;检测参考色度块是否全部位于待处理图像帧中的色度已编码区域内以及是否位于当前色度块的指定方位(由于参考色度块与备选预测块位于当前色度块的同一方位,所以可以通过检测参考色度块的方位来确定备选预测块的方位);当参考色度块全部位于待处理图像帧中的色度已编码区域内,且位于当前色度块的指定方位,认为备选预测块有效;当参考色度块不是全部位于待处理图像帧中的色度已编码区域内,或者不位于当前色度块的指定方位,此时认为备选预测块无效。
以图10中的黑色块表示的当前色度块为例,假设基于第i个亮度块的运动矢量确定的备选运动矢量为分像素运动矢量,此时需要通过插值滤波器进行插值处理以得到备选预测块。
假设插值分像素位置的色度像素值所使用的插值滤波器为N抽头滤波器,N为正整数,插值出备选预测块的分像素位置的色度像素值,所需要的是该位置左边N1个像素点的色度像素值、右边N2个像素点的色度像素值、上边N3个像素点的色度像素值和下边N4个像素点的色度像素值,作为一种示例,N1+N2=N,N3+N4=N。若(Cx+MV1x+(CW-1)+N2,Cy+MV1y+(CH-1)+N4)在当前色度块所属的四叉树叶节点块内,则其还需要满足(Cx+MV1x+(CW-1)+N2,Cy+MV1y+(CH-1)+N4)在当前色度块的左侧、上侧和左上侧的任一方位,才可以确定该备选预测块有效。(Cx+MV1x+1-N1,Cy+MV1y+1-N3)和(Cx+MV1x+(CW-1)+N2,Cy+MV1y+(CH-1)+N4)这两个坐标位置分别表示参考色度块的左上角像素点的坐标和右下角像素点的坐标,作为一种示例,(Cx,Cy)表示当前色度块的左上角像素点的坐标,CW和CH分别表示当前色度块的宽和高。MV1x表示比MVx小的最大整数,MV1y表示比MVy小的最大整数,例如,(MVx,MVy)为(-6.5,-2.5),则MV1x为-7,MV1y为-3。
对于上述第一种实现方式,实际在编码实现的过程中,需要判断在和当前色度块属于同一个四叉树叶节点的区域内,当前色度块的左下角的色度块和右上角的色度块是否已编码,而由于二叉树的划分使得每个四叉树叶节点内部的划分和编码顺序有很多选择,导致该判断过程比较复杂。示例的,图11和图12是两种编码划分方式的结构示意图,请参考图11和图12,当当前色度块所属的四叉树叶节点块按照图11所示的划分方式进行编码时,其左下角的色度块还未进行编码;而当当前色度块所属的四叉树叶节点块按照图12所示的划分方式进行编码时,其右上角的色度块还未进行编码。而第二种实现方式可以避免判断该右上角的色度块以及左下角的色度块是否已编码,有效简化判断备选预测块是否有效的过程,降低运算代价。
作为一种示例,上述步骤C2中,检测是否达到检测停止条件可以包括:检测可参考的运动矢量的总数是否等于预设的个数阈值k,以及检测i是否等于n;当可参考的运动矢量的总数不等于预设的个数阈值k,且i不等于n,确定未达到检测停止条件;当可参考的运动矢量的总数等于预设的个数阈值k或i=n,确定达到检测停止条件。
步骤2023、当n个亮度块中存在运动矢量可参考的亮度块时,在预测模式候选队列中添加帧内运动补偿模式。
在本申请实施例中,当n个亮度块中存在运动矢量可参考的亮度块时,可以只在预测模式候选队列中添加一个帧内运动补偿模式,以指示当前色度块的运动矢量可以基于亮度块的运动矢量生成,这样,如果存在运动矢量可参考的亮度块,只需添加一次帧内运动补偿模式即可达到相应的预测模式的指示效果,过程较为简单。
也可以在检测停止条件的限制下,基于检测到的运动矢量可参考的亮度块的数量,来添加帧内运动补偿模式,此时该帧内运动补偿模式为至少一个。这样,预测模式候选队列中帧内运动补偿模式的添加方式可以至少包括以下两种:
第一种添加方式,在按照目标顺序每检测到一个运动矢量可参考的亮度块时,在预测模式候选队列中添加一个帧内运动补偿模式。
例如,在执行上述步骤2022的过程中,每检测到一个运动矢量可参考的亮度块时,即在预测模式候选队列中添加一个帧内运动补偿模式,在预测模式候选队列中添加的m个帧内运动补偿模式的模式编号可以相同,也可以不同,当m个帧内运动补偿模式的模式编号不同时,该m个帧内运动补偿模式的模式编号可以以第一个帧内运动补偿模式为基准,按照添加顺序依次递加,例如依次加1,假设m=3,即在执行上述步骤2022的过程中,出现了3个运动矢量可参考的亮度块,则在第一次出现运动矢量可参考的亮度块时,添加第一个帧内运动补偿模式,该第一个帧内运动补偿模式的模式编号可以用大于或等于73的编号来表示,例如,第一个帧内运动补偿模式的模式编号为73,则第二个帧内运动补偿模式的模式编号为74,第三个帧内运动补偿模式的模式编号为75。
第二种添加方式,在达到检测停止条件后,若存在运动矢量可参考的m个亮度块,按照m个亮度块在目标顺序中的检测排布顺序,在预测模式候选队列中添加m个帧内运动补偿模式,m≥1。
例如,在上述步骤2022执行完毕后,若存在运动矢量可参考的m个亮度块,在预测模式候选队列中添加m个帧内运动补偿模式,该m个帧内运动补偿模式的模式编号可以相同,也可以不同,当m个帧内运动补偿模式的模式编号不同时,该m个帧内运动补偿模式的模式编号可以以第一个帧内运动补偿模式为基准,按照添加顺序依次递加,例如依次加2,该第一个帧内运动补偿模式的模式编号可以用大于或等于73的编号来表示,该添加顺序即m个亮度块在目标顺序中的检测排布顺序,假设,m=3,即在执行上述步骤2022的过程后,出现了3个运动矢量可参考的亮度块,则假设第一个帧内运动补偿模式的模式编号为73,则第二个帧内运动补偿模式的模式编号为75,第三个帧内运动补偿模式的模式编号为77。
上述两种添加方式得到的m个帧内运动补偿模式与m个运动矢量可参考的亮度块一一对应。
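上述两种添加方式中模式编号的生成可示意如下(以第一个帧内运动补偿模式编号73为基准,第一种方式依次加1、第二种方式的示例中依次加2;函数名为本文假设):

```python
def mode_numbers(m, base=73, step=1):
    """为按顺序添加的m个帧内运动补偿模式生成模式编号:
    以base为基准,按添加顺序依次递加step。"""
    return [base + i * step for i in range(m)]
```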
目标顺序可以根据具体场景设置。作为一种示例,n个亮度块至少包括:覆盖亮度图像区域中中心像素点的亮度块、覆盖亮度图像区域中左上角像素点的亮度块、覆盖亮度图像区域中右上角像素点的亮度块、覆盖亮度图像区域中左下角像素点的亮度块和覆盖亮度图像区域中右下角像素点的亮度块,其中,目标亮度块为部分或全部在亮度图像区域中的亮度块,亮度图像区域为待处理图像帧中与当前色度块位置对应的亮度区域。此时,如图7至9所示,上述目标顺序可以为:
覆盖亮度图像区域中中心像素点的亮度块、覆盖亮度图像区域中左上角像素点的亮度块、覆盖亮度图像区域中右上角像素点的亮度块、覆盖亮度图像区域中左下角像素点的亮度块和覆盖亮度图像区域中右下角像素点的亮度块的顺序。也即是覆盖像素点CR>LT>TR>BL>BR的顺序。
作为一种示例,上述目标顺序也可以为随机确定的顺序。
作为一种示例,上述步骤A3中,n个亮度块的确定方法可以参考上述步骤2021,本申请实施例对此不再赘述。第一顺序可以为覆盖亮度图像区域中中心像素点的亮度块、覆盖亮度图像区域中左上角像素点的亮度块、覆盖亮度图像区域中右上角像素点的亮度块、覆盖亮度图像区域中左下角像素点的亮度块和覆盖亮度图像区域中右下角像素点的亮度块的顺序,也即是覆盖像素点CR>LT>TR>BL>BR的顺序;或者,随机确定的顺序。在该步骤A3中,需要执行查询添加的预测模式是否重复的过程,也简称查重过程,也即是,对于当前色度块对应的n个亮度块的每个帧内预测模式,检测该帧内预测模式与预测模式候选队列中添加的预测模式是否相同;当该帧内预测模式与预测模式候选队列中添加的预测模式相同时,检测下一个帧内预测模式;当帧内预测模式与预测模式候选队列中添加的预测模式不同时,将该帧内预测模式添加至预测模式候选队列中。
作为一种示例,上述步骤A4中,第二顺序可以为当前色度块的左边相邻色度块、当前色度块的上方相邻色度块、当前色度块的左下方相邻色度块、当前色度块的右下方相邻色度块和当前色度块的左上方相邻色度块的顺序;或者,在当前色度块所有相邻的色度块中随机确定的顺序。在该步骤A4中,也需要执行查询添加的预测模式是否重复的过程,也即是,对于当前色度块每个相邻的色度块的帧内预测模式,检测该帧内预测模式与预测模式候选队列中添加的预测模式是否相同;当该帧内预测模式与预测模式候选队列中添加的预测模式相同时,检测下一个相邻的色度块的帧内预测模式;当帧内预测模式与预测模式候选队列中添加的预测模式不同时,将该帧内预测模式添加至预测模式候选队列中。
需要说明的是,上述步骤中,目标顺序和第一顺序可以相同也可以不同,本申请实施例对此不作限定。
步骤203、在构造完成的预测模式候选队列中,确定目标帧内预测模式。
该目标帧内预测模式为用于预测当前色度块的预测块的模式。
步骤203中,在构造完成的预测模式候选队列中,确定目标帧内预测模式的过程包括:
将构造完成的预测模式候选队列中符合第二目标条件的帧内预测模式,确定为目标帧内预测模式。该过程可以通过遍历预测模式队列中的所有预测模式来实现。
作为一种示例,第二目标条件为基于帧内预测模式确定的预测块所对应的残差块的残差值的绝对值之和最小,或,基于帧内预测模式确定的预测块所对应的残差块的残差值变换量的绝对值之和最小,或,采用帧内预测模式编码的编码代价最小。
当第二目标条件为基于帧内预测模式确定的预测块所对应的残差块的残差值的绝对值之和最小时,可以先遍历预测模式队列中的所有预测模式,计算当前色度块基于每种模式确定的预测块所对应的预测残差,然后基于该预测残差,选择对应的预测块所对应的残差块(即当前色度块的残差块)的残差值的绝对值之和最小的帧内预测模式作为目标帧内预测模式;
当第二目标条件为基于帧内预测模式确定的预测块所对应的残差块的残差值变换量的绝对值之和最小时,可以先遍历预测模式队列中的所有预测模式,计算当前色度块采用每种模式确定的预测块所对应的预测残差,然后对该预测残差进行残差变换,得到残差值变换量,选择对应的预测块所对应的残差块的残差值变换量的绝对值之和最小的帧内预测模式作为目标帧内预测模式。该残差变换过程指的是对预测残差乘以变换矩阵,得到残差值变换量,残差变换可以实现残差的去相关,使得最终得到的每个残差值变换量能量更集中;
当第二目标条件为采用帧内预测模式编码对应的编码代价最小时,可以先遍历预测模式队列中的所有预测模式,采用每种模式对当前色度块进行编码,计算每个编码后的当前色度块的编码代价,选择对应的编码代价最小的帧内预测模式作为目标帧内预测模式,该编码代价可以采用预设的代价函数来计算。
步骤204、当目标帧内预测模式为帧内运动补偿模式时,确定参考亮度块。
该参考亮度块为与当前色度块位置对应的n个亮度块中的亮度块。
在本申请实施例中,当目标帧内预测模式为帧内运动补偿模式时,由于当前亮度块的运动矢量需要基于参考亮度块的运动矢量获得,因此需要确定参考亮度块,该确定参考亮度块的过程的实现方式可以有多种,示例的,本申请实施例提供以下三种实现方式:
第一种实现方式,基于目标帧内预测模式在预测模式候选队列中的排序,确定参考亮度块。
由上述步骤2023可知,预测模式候选队列包括m个帧内运动补偿模式,m≥1,则确定参考亮度块的过程可以包括:
步骤D1、在预测模式候选队列中确定目标帧内预测模式在m个帧内运动补偿模式中为第r个帧内运动补偿模式,1≤r≤m。
由于预测模式候选队列包括至少一个帧内运动补偿模式,在第一种实现方式中,亮度块的确定与目标帧内预测模式在预测模式候选队列中的排序相关,因此需要确定该目标帧内预测模式在预测模式候选序列中的顺序,也即是其为第几个帧内运动补偿模式。步骤D1假设确定的是第r个帧内运动补偿模式。
步骤D2、按照目标顺序依次检测n个亮度块中亮度块的运动矢量是否可参考,直至达到检测停止条件,该检测停止条件为可参考的运动矢量的总数等于预设的个数阈值x,x≥m,或者,可参考的运动矢量的总数等于r,或者,遍历n个亮度块。作为一种示例,m可以为1或5,x=m。
步骤D3、在达到检测停止条件后,将第r个运动矢量可参考的亮度块确定为参考亮度块。
在上述第一种实现方式中,实际上是将目标帧内预测模式在预测模式候选队列中的排序,与运动矢量可参考的亮度块的检测顺序对应起来,与上述步骤2023对应的,当只在预测模式候选队列中添加一个帧内运动补偿模式时,将第1个运动矢量可参考的亮度块确定为参考亮度块;当在检测停止条件的限制下,基于检测到的运动矢量可参考的亮度块的数量,来添加帧内运动补偿模式时,步骤D1至D3所确定的运动矢量可参考的亮度块与添加的帧内运动补偿模式的个数一一对应,步骤D1至D3所确定的运动矢量可参考的亮度块的检测顺序与添加的帧内运动补偿模式的添加顺序一致,则作为目标帧内预测模式的第r个帧内运动补偿模式与作为参考亮度块的第r个运动矢量可参考的亮度块对应。
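步骤D1至D3中"将第r个运动矢量可参考的亮度块确定为参考亮度块"的过程可示意如下(is_referable为假设的判定接口):

```python
def rth_referable_block(luma_blocks, is_referable, r):
    """按目标顺序依次检测,当可参考的运动矢量总数达到r时停止,
    返回第r个运动矢量可参考的亮度块作为参考亮度块。"""
    count = 0
    for blk in luma_blocks:
        if is_referable(blk):
            count += 1
            if count == r:
                return blk
    return None   # 遍历完仍不足r个
```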
第二种实现方式,基于n个亮度块对应的参考预测块,筛选参考亮度块。
由上述步骤2023可知,预测模式候选队列包括m个帧内运动补偿模式,m≥1,则确定参考亮度块的过程可以包括:
步骤E1、按照目标顺序依次检测n个亮度块中亮度块的运动矢量是否可参考,直至达到检测停止条件,该检测停止条件为可参考的运动矢量的总数等于预设的个数阈值x,x≥m,或者,遍历n个亮度块。
步骤E2、在达到检测停止条件后,基于每个可参考的运动矢量,生成当前色度块的参考预测块。
步骤E3、在生成的多个参考预测块中,确定符合第一目标条件的参考预测块,第一目标条件为参考预测块对应的残差块的残差值的绝对值之和最小,或,参考预测块对应的残差块的残差值变换量的绝对值之和最小,或,参考预测块对应编码代价最小。
步骤E4、将符合第一目标条件的参考预测块所对应的亮度块确定为参考亮度块。
在上述第二种实现方式中,相对于第一种实现方式,实际上是将参考亮度块与帧内运动补偿模式的顺序去相关,请参考上述步骤2023,无论预测模式候选队列中有几个帧内运动补偿模式,也无论目标帧内预测模式为第几个帧内运动补偿模式,参考亮度块的参考预测块只需要符合第一目标条件即可,这样相较于第一种实现方式,准确性较高。
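以第一目标条件取"残差值的绝对值之和最小"为例,步骤E3至E4的筛选可示意如下(residual_of为假设的接口,返回某亮度块对应参考预测块的残差值序列):

```python
def select_reference_block(candidate_blocks, residual_of):
    """在多个候选亮度块中,选择其参考预测块对应残差块的
    残差值绝对值之和(SAD)最小者作为参考亮度块。"""
    def sad(blk):
        return sum(abs(v) for v in residual_of(blk))
    return min(candidate_blocks, key=sad)
```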
第三种实现方式,基于预先建立的帧内运动补偿模式的标识与亮度块的标识的对应关系表,确定参考亮度块。
由上述步骤2023可知,预测模式候选队列包括m个帧内运动补偿模式,m≥1,作为一种示例,在上述步骤202构造预测模式候选队列的过程中,还可以建立帧内运动补偿模式的标识与亮度块的标识的对应关系表,该对应关系表记录有添加至预测模式候选队列中的每个帧内运动补偿模式的标识,以及对应的运动矢量可参考的亮度块的标识。例如,采用步骤2023中的两种添加方式得到的m个帧内运动补偿模式与m个运动矢量可参考的亮度块存在一一对应关系,该对应关系表即可记录该关系。该亮度块为运动矢量可参考的亮度块,对应关系表中每个帧内运动补偿模式的标识用于唯一标识一个预测模式候选队列中的帧内运动补偿模式,每个亮度块的标识也用于唯一标识一个亮度块,作为一种示例,该亮度块的标识可以为在划分待处理图像帧对应的亮度图像时,亮度块的编号;也可以是其他类型的标识,如后续步骤209中所确定的标识,本申请实施例对此不做赘述。
由于在预测模式候选队列中添加的m个帧内运动补偿模式的模式编号可以相同,也可以不同。示例的,当预测模式候选队列中添加的m个帧内运动补偿模式的模式编号相同时,该帧内运动补偿模式的标识可以由模式编号与预测模式候选队列中的索引组成,这样可以在预测模式候选队列中唯一标识一个帧内运动补偿模式;当预测模式候选队列中添加的m个帧内运动补偿模式的模式编号不同时,该帧内运动补偿模式的标识可以与模式编号相同。
则确定参考亮度块的过程可以包括:
步骤F1、基于目标帧内预测模式的标识,查询对应关系表,得到参考亮度块的标识。
步骤F2、基于参考亮度块的标识确定参考亮度块。
在第三种实现方式中,通过查询对应关系表可以直接确定参考亮度块的标识,相较于第一种实现方式和第二种实现方式,过程较为简单,计算代价较小。
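第三种实现方式中的对应关系表查询可示意如下(此处用Python字典表示该表,键为帧内运动补偿模式的标识,值为运动矢量可参考的亮度块的标识;数据均为假设):

```python
def lookup_reference_block(mode_id, mode_to_luma):
    """基于目标帧内预测模式的标识查询对应关系表,
    得到参考亮度块的标识。"""
    return mode_to_luma[mode_id]
```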
作为一种示例,在上述第一种实现方式和第二种实现方式中,按照目标顺序依次检测n个亮度块中亮度块的运动矢量是否可参考,直至达到检测停止条件的过程可以包括:
设置i=1;
执行检测过程,检测过程包括:
步骤G1、检测n个亮度块中第i个亮度块的运动矢量是否可参考。
步骤G2、当第i个亮度块的运动矢量可参考时,检测是否达到检测停止条件。
步骤G3、当未达到检测停止条件,更新i,使得更新后的i=i+1,再次执行检测过程。
步骤G4、当达到检测停止条件,停止执行检测过程。
作为一种示例,上述步骤G1至步骤G4的具体过程可以参考上述步骤2022中的步骤C1至步骤C4,本申请实施例对此不再赘述。
示例的,上述步骤G1中,检测n个亮度块中第i个亮度块的运动矢量是否可参考的过程可以包括:检测n个亮度块中第i个亮度块的预测类型;当第i个亮度块的预测类型为帧内预测类型时,确定第i个亮度块的运动矢量不可参考;当第i个亮度块的预测类型为帧间预测类型时,基于第i个亮度块的运动矢量生成备选运动矢量;检测第i个亮度块对应的备选运动矢量与当前检测得到的运动矢量可参考的亮度块所对应的备选运动矢量是否相同;当第i个亮度块对应的备选运动矢量与当前检测得到的运动矢量可参考的亮度块所对应的备选运动矢量不同时,基于第i个亮度块对应的备选运动矢量,预测当前色度块的备选预测块;当备选预测块有效,确定第i个亮度块的运动矢量可参考;当备选预测块无效,确定第i个亮度块的运动矢量不可参考;当第i个亮度块对应的备选运动矢量与当前检测得到的运动矢量可参考的亮度块对应的备选运动矢量相同时,确定第i个亮度块的运动矢量不可参考。该过程参考上述步骤C1的解释。
需要说明的是,在上述预测当前色度块的备选预测块之后,可以判断备选预测块是否有 效。示例的,该判断过程可以包括以下两种实现方式:
第一种方式,当备选预测块全部位于待处理图像帧中的色度已编码区域内,确定备选预测块有效;当备选预测块不是全部位于待处理图像帧中的色度已编码区域内,确定备选预测块无效。
第二种方式,当备选预测块全部位于待处理图像帧中的色度已编码区域内,且位于当前色度块的指定方位,确定备选预测块有效;当备选预测块不是全部位于待处理图像帧中的色度已编码区域内,或者,不位于当前色度块的指定方位,确定备选预测块无效;作为一种示例,当前色度块的指定方位为当前色度块的左侧、上侧和左上侧的任一方位。
上述判断备选预测块是否有效的两种实现方式可以参考上述步骤2022所提供的两种实现方式,本申请实施例在此不再赘述。
需要说明的是,不同于上述步骤C2的是,上述步骤G2中,检测是否达到检测停止条件的过程可以包括:检测可参考的运动矢量的总数是否等于预设的个数阈值x,该x≥m;以及检测i是否等于n;当可参考的运动矢量的总数不等于预设的个数阈值x,且i不等于n,确定未达到检测停止条件;当可参考的运动矢量的总数等于预设的个数阈值x或i=n,确定达到检测停止条件。
步骤205、基于参考亮度块的运动矢量确定待处理图像帧中当前色度块的运动矢量。
在步骤205中,基于参考亮度块的运动矢量确定待处理图像帧中当前色度块的运动矢量的过程可以包括:
步骤H1、按照待处理图像帧的编码格式,确定当前色度块与参考亮度块的矢量缩放比。
在本申请实施例中,当前色度块与参考亮度块的矢量缩放比等于待处理图像帧中,色度块与亮度块的分布密度比例,该分布密度比例由待处理图像帧的编码格式决定。示例的,编码格式为4:4:4时,在水平方向(也称x方向)上,色度块与亮度块的分布密度比例为1:1,在竖直方向(也称y方向)上,色度块与亮度块的分布密度比例为1:1,则当前色度块与参考亮度块的矢量在水平方向上的缩放比等于1:1,在竖直方向上的缩放比等于1:1。该步骤可以参考上述步骤C131。
步骤H2、基于矢量缩放比,对参考亮度块的运动矢量进行缩放得到当前色度块的运动矢量。
基于该矢量缩放比,对参考亮度块的运动矢量进行等比例的缩放即可得到当前色度块的运动矢量。例如,参考亮度块的运动矢量为(-11,-3),编码格式为4:4:4,则当前色度块的运动矢量为(-11,-3)。该步骤可以参考上述步骤C132。
步骤206、基于当前色度块的运动矢量,预测当前色度块的预测块。
当目标帧内预测模式为帧内运动补偿模式时,可以基于该当前色度块的运动矢量,从待处理图像帧的色度图像中的色度已编码区域中查找得到当前色度块的预测块。
步骤207、将目标帧内预测模式在预测模式候选队列中的索引编码后添加至当前色度块的码流中。
在本申请实施例中,预测模式候选队列包括至少一个预测模式,每个预测模式均为用于预测所述当前色度块的预测块的模式。预测模式候选队列中的索引用于指示预测模式在预测模式候选队列中的排序,例如,目标帧内预测模式的索引为3,表示该目标帧内预测模式为预测模式候选队列中的第3个预测模式。
作为一种示例,可以通过熵编码模块对该索引进行熵编码,然后再写入码流。
步骤208、向解码端传输码流,该码流包括编码后的目标帧内预测模式的索引和编码后的残差块。
需要说明的是,在步骤206确定了当前色度块的预测块后,可以采用该当前色度块的原始像素值减去该预测块的像素值得到当前色度块的残差块,然后对当前色度块的残差块进行变换和量化,将量化后的残差块进行熵编码得到编码后的码流。该过程可以参考JEM或HEVC。
还需要说明的是,在步骤201对待处理图像帧进行色度块的划分后,最终得到的划分方式也可以在编码后添加进码流,以使解码端基于该划分方式对其处理的待处理图像帧进行划分,并对划分后的色度块进行相应的处理。
在本申请实施例中,编码在码流中的索引用于供解码端确定相应的参考亮度块,一方面,解码端可以基于预先约定的方式以及该目标帧内预测模式的索引来确定参考亮度块,这样无需在码流中编码相关的指示信息,因此降低了指示信息的编码代价,有利于提高视频编码的效率,另一方面,由于编码端已经确定了参考亮度块的标识,其也可以将该参考亮度块的标识编码进码流,以供解码端参考,这样解码端即可无需进行过多的运算,基于该参考亮度块的标识直接确定参考亮度块,从而减少解码端的运算代价。则进一步作为一种示例,如图13所示,在步骤207之后,步骤208之前,本申请实施例提供的色度的帧内预测方法还可以包括:
步骤209、获取参考亮度块的标识。
作为一种示例,参考亮度块的标识可以有多种获取方式,本申请实施例以以下两种获取方式为例进行说明:
第一种获取方式,获取参考亮度块的标识的过程包括:
步骤I1、为n个亮度块中所有亮度块按照与解码端约定的顺序分配标识。
步骤I2、获取为参考亮度块分配的标识。
第二种获取方式,获取参考亮度块的标识的过程包括:
步骤J1、为n个亮度块中运动矢量可参考的亮度块按照与解码端约定的顺序分配标识。
需要说明的是,为n个亮度块中运动矢量可参考的亮度块分配的标识为以0或1为起始标识,公差为u的等差递增数列,u为正整数,通常为1。
该分配标识的过程可以有多种实现方式,第一种是可以与步骤2022同步执行,先为n个亮度块全部分配互不相同的标识,在按照目标顺序每检测到一个运动矢量不可参考的亮度块时,将该亮度块的标识删除,并更新其后的所有亮度块的标识,直至步骤2022中达到检测停止条件。示例的,先为n个亮度块全部分配标识:0、1…n-1,分配的标识可以为以0开始,公差为1的等差递增数列,当检测到标识为0的亮度块的运动矢量不可参考时,删除该亮度块的标识0,并更新其后的所有亮度块的标识,更新方式是在每个原标识基础上减去公差,也即是减1,更新后得到的标识为:0、1…n-2;重复该过程,直至步骤2022中达到检测停止条件。
第二种是,在达到检测停止条件后,对检测到运动矢量可参考的所有亮度块分配互不相同的标识。
示例的,假设检测到运动矢量可参考的亮度块共3个,则为其分配的标识为:0、1、2。
步骤J2、获取为参考亮度块分配的标识。
在上述两种获取方式中,示例的,该约定的顺序可以为亮度块的编码先后顺序,也可以是上述步骤2022检测n个亮度块中是否存在运动矢量可参考的亮度块的目标顺序,上述分配的标识可以为数字标识。作为一种示例,为n个亮度块中运动矢量可参考的亮度块分配的标识为以0或1为起始标识,公差为1的等差递增数列,如0、1、2…,为n个亮度块中运动矢量可参考的亮度块分配的标识也可以为其他形式的数列,本申请实施例对此不做限定。示例的,请参考图6,假设n个亮度块分别为:亮度块M11、亮度块M13、亮度块M14、亮度块M15和亮度块M16,运动矢量可参考的亮度块为M14和M15,参考亮度块为M14,约定的顺序为亮度块的编码先后顺序,则采用第一种获取方式,为该n个亮度块分配的标识分别为:亮度块M11为0、亮度块M13为1、亮度块M14为2、亮度块M15为3和亮度块M16为4,则参考亮度块的标识为2;采用第二种获取方式,为运动矢量可参考的亮度块分配的标识分别为:亮度块M14为0、亮度块M15为1,则参考亮度块的标识为0。这些亮度块的标识可以以二进制数表示。
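第二种获取方式("删除并更新标识"过程的等效结果)可示意如下:只为检测顺序中运动矢量可参考的亮度块保留以0起始、公差为1的递增标识(is_referable为假设的判定接口):

```python
def assign_referable_ids(luma_blocks, is_referable):
    """为n个亮度块中运动矢量可参考的亮度块按约定顺序分配标识,
    等效于先分配0..n-1、再删除不可参考者的标识并依次前移。"""
    ids, next_id = {}, 0
    for blk in luma_blocks:
        if is_referable(blk):
            ids[blk] = next_id
            next_id += 1
    return ids
```

与上文示例一致,参考亮度块M14在第二种获取方式下的标识为0。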
由上述例子可以看出,相较于第一种获取方式,第二种获取方式所分配的亮度块的标识数值更少,最终确定的参考亮度块的标识数值更小,这样在该参考亮度块的标识通过码流传输时,所占用的数据位更少,可以有效节约码流资源。
步骤210、将参考亮度块的标识编码后添加至当前色度块的码流中。
作为一种示例,可以通过熵编码模块对参考亮度块的标识进行熵编码,然后再写入码流。
需要说明的是,本申请实施例提供的色度的帧内预测方法步骤的先后顺序可以进行适当调整,步骤也可以根据情况进行相应增减,例如步骤209和210的步骤可以不执行,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化的方法,都应涵盖在本申请的保护范围之内,因此不再赘述。
综上所述,本申请实施例提供的色度的帧内预测方法,由于色度块的运动矢量是基于亮度块的运动矢量确定的,充分利用了亮度分量的运动矢量和色度分量的运动矢量的相关性,无需单独计算色度分量的运动矢量,从而简化了帧内运动补偿技术的过程,降低了色度分量的运动矢量的运算代价,相应地降低了整体运动矢量的运算代价。
第二方面,当该帧内预测方法应用于解码端时,如图14所示,该色度的帧内预测方法由解码端执行,其用于I帧的解码,该方法包括:
步骤301、解码当前色度块的码流,该码流包括编码后的目标帧内预测模式的索引和编码后的残差块。
该当前色度块指的是当前待解码的色度块,解码端在接收到编码端传输的码流后,对该码流进行解码,通常是通过熵解码模块进行解码。该码流解码后,可以包括目标帧内预测模式的索引和解码后的残差块。解码端可以继续对该解码后的残差块进行反量化和反变换,以得到当前色度块的残差块。
还需要说明的是,在步骤201对待处理图像帧进行色度块的划分后,最终得到的划分方式也可以在编码后添加进码流,因此,该码流中还包括编码后的划分方式,解码端可以从解码后的码流中提取该划分方式,基于该划分方式对其处理的待处理图像帧进行划分,并对划分后的色度块进行相应的处理。
步骤302、从解码后的码流中提取目标帧内预测模式在预测模式候选队列中的索引。
在本申请实施例中,索引用于指示预测模式在预测模式候选队列中的排序,例如,目标帧内预测模式的索引为3,表示该目标帧内预测模式为预测模式候选队列中的第3个预测模式。
步骤303、对于当前色度块,构造预测模式候选队列,预测模式候选队列包括至少一个 预测模式,每个预测模式均为用于预测当前色度块的预测块的模式。
示例的,该构造预测模式候选队列的过程可以参考上述步骤202中的步骤A1至A7,该构造过程是与编码端约定的,过程与编码端一致,也即是该步骤303与上述步骤202一致。因此,本申请实施例对此不再赘述。
步骤304、在构造完成的预测模式候选队列中,确定目标帧内预测模式。
作为一种示例,可以基于目标帧内预测模式在预测模式候选队列中的索引查询预测模式候选队列,得到目标帧内预测模式。
由于步骤302中,解码端获取了目标帧内预测模式的索引,因此,可以基于该索引在预测模式候选队列中查询得到目标帧内预测模式。
例如,预测模式候选队列为:{11,75,…,68},共包含11个预测模式,该目标帧内预测模式的索引为2,则查询该预测模式候选队列,选择第2个预测模式即为该目标帧内预测模式,由上述预测模式候选队列可知,该目标帧内预测模式为模式编号为75的帧内运动补偿模式。
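解码端基于索引在预测模式候选队列中查询目标帧内预测模式的过程可示意如下(索引按上文约定从1开始计数):

```python
def mode_from_index(candidate_queue, index):
    """基于码流中解码出的索引,在预测模式候选队列中
    查询得到目标帧内预测模式(索引从1开始)。"""
    return candidate_queue[index - 1]
```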
步骤305、当目标帧内预测模式为帧内运动补偿模式时,确定参考亮度块。
该过程与编码端是相应的,一方面,解码端可以基于预先约定的方式以及该目标帧内预测模式的索引来确定参考亮度块,这样无需在码流中编码相关的指示信息,因此降低了指示信息的编码代价,有利于提高视频编码的效率;另一方面,由于编码端已经确定了参考亮度块的标识,其也可以将该参考亮度块的标识编码进码流,以供解码端参考,这样解码端即可无需进行过多的运算,基于该参考亮度块的标识直接确定参考亮度块,从而减少解码端的运算代价。
因此,解码端确定参考亮度块的方式可以有多种,本申请实施例提供以下两种确定方式。
第一种确定方式,解码端基于预先约定的方式以及该目标帧内预测模式的索引来确定参考亮度块。该过程与上述步骤204相同,分别有三种实现方式,具体过程参考上述步骤204,本申请实施例对此不作赘述。
第二种确定方式,解码端基于码流中的参考亮度块的标识确定参考亮度块。该过程可以包括:
步骤K1、从解码后的码流中提取参考亮度块的标识。
请参考上述步骤210,由于编码端将参考亮度块的标识进行熵编码后写入了码流,解码端可以在对该码流进行熵解码后,提取参考亮度块的标识。
步骤K2、基于参考亮度块的标识,在n个亮度块中确定参考亮度块。
请参考上述步骤209,由于在编码端,参考亮度块的标识可以有多种获取方式,而解码端需要与编码端的获取方式一致,才能保证获取的标识指示同一位置的亮度块,因此,对应的,本申请实施例以以下两种获取方式为例进行说明:
与步骤209中的第一种获取方式对应的,解码端的第一种获取方式,包括:
步骤L1、为n个亮度块中所有亮度块按照与编码端约定的顺序分配标识。
步骤L2、将与参考亮度块的标识一致的亮度块确定为参考亮度块。
与步骤209中的第二种获取方式对应的,解码端的第二种获取方式,包括:
步骤M1、为n个亮度块中运动矢量可参考的亮度块按照与编码端约定的顺序分配标识。
需要说明的是,为n个亮度块中运动矢量可参考的亮度块分配的标识为以0或1为起始标识,公差为u的等差递增数列,u为正整数,通常为1。
该分配标识的过程可以有多种实现方式,第一种是可以与步骤303同步执行,先为n个亮度块全部分配互不相同的标识,在按照目标顺序每检测到一个运动矢量不可参考的亮度块时,将该亮度块的标识删除,并更新其后的所有亮度块的标识,直至步骤303中达到检测停止条件。示例的,先为n个亮度块全部分配标识:0、1…n-1,分配的标识可以为以0开始,公差为1的等差递增数列,当检测到标识为0的亮度块的运动矢量不可参考时,删除该亮度块的标识0,并更新其后的所有亮度块的标识,更新方式是在原标识基础上减去公差,也即是减1,更新后的标识为:0、1…n-2;重复该过程,直至步骤303中达到检测停止条件。
第二种是,在达到检测停止条件后,对检测到运动矢量可参考的所有亮度块分配互不相同的标识。
示例的,假设检测到运动矢量可参考的亮度块共4个,则为其分配的标识为:0、1、2和3。
步骤M2、将与参考亮度块的标识一致的运动矢量可参考的亮度块确定为参考亮度块。
在上述两种获取方式中,示例的,该约定的顺序可以为亮度块的编码先后顺序,也可以是上述步骤2022和步骤303中检测n个亮度块中是否存在运动矢量可参考的亮度块的目标顺序,该标识可以为数字标识,为n个亮度块中运动矢量可参考的亮度块分配的标识为以0或1为起始标识,公差为1的等差递增数列。示例的,请参考图6所示,假设n个亮度块分别为:亮度块M11、亮度块M13、亮度块M14、亮度块M15和亮度块M16,运动矢量可参考的亮度块为M14和M15,参考亮度块为M14,约定的顺序为亮度块的编码先后顺序,则采用第一种获取方式,为该n个亮度块分配的标识分别为:亮度块M11为0、亮度块M13为1、亮度块M14为2、亮度块M15为3和亮度块M16为4,则需要在码流中传输的参考亮度块的标识为2,解码端基于该标识确定参考亮度块为M14;采用第二种获取方式,为运动矢量可参考的亮度块分配的标识分别为:亮度块M14为0、亮度块M15为1,则需要在码流中传输的参考亮度块的标识为0,解码端基于该标识确定参考亮度块为M14。这些亮度块的标识可以以二进制数表示。
由上述例子可以看出,相较于第一种获取方式,第二种获取方式所分配的亮度块的标识数值更少,最终确定的参考亮度块的标识数值更小,这样在该参考亮度块的标识通过码流传输时,所占用的数据位更少,可以有效节约码流资源。
步骤306、基于参考亮度块的运动矢量确定待处理图像帧中当前色度块的运动矢量。
该过程与编码端一致,可以参考上述步骤205,本申请实施例对此不作赘述。
步骤307、基于当前色度块的运动矢量,预测当前色度块的预测块。
该过程与编码端一致,可以参考上述步骤206,本申请实施例对此不作赘述。
步骤308、基于当前色度块的预测块和当前色度块的残差块,确定该当前色度块的重建像素值。
基于步骤307获取的当前色度块的预测块,以及步骤301获取的当前色度块的残差块,可以确定当前色度块的重建像素值。例如,将当前色度块的预测块的像素值与当前色度块的残差块的像素值相加即可得到该重建像素值。
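步骤308的逐像素重建可示意如下(以一维像素序列为例):

```python
def reconstruct(pred_pixels, resid_pixels):
    """重建像素值 = 预测块像素值 + 残差块像素值(逐像素相加)。"""
    return [p + r for p, r in zip(pred_pixels, resid_pixels)]
```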
需要说明的是,本申请实施例提供的色度的帧内预测方法步骤的先后顺序可以进行适当调整,步骤也可以根据情况进行相应增减,例如步骤301和303的步骤可以颠倒,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化的方法,都应涵盖在本申请的保护范围之内,因此不再赘述。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的色度的帧内预测方法的解码端的具体步骤,可以参考前述编码端实施例中的对应过程,在此不再赘述。
综上所述,本申请实施例提供的色度的帧内预测方法,由于色度块的运动矢量是基于亮度块的运动矢量确定的,充分利用了亮度分量的运动矢量和色度分量的运动矢量的相关性,无需单独计算色度分量的运动矢量,从而简化了帧内运动补偿技术的过程,降低了色度分量的运动矢量的运算代价,相应地降低了整体运动矢量的运算代价。
本申请实施例提供一种色度的帧内预测装置40,用于I帧的编码或解码,如图15所示,所述装置40包括:
第一确定模块401,用于当目标帧内预测模式为帧内运动补偿模式时,基于参考亮度块的运动矢量确定待处理图像帧中当前色度块的运动矢量,作为一种示例,所述目标帧内预测模式为用于预测所述当前色度块的预测块的模式,在所述帧内运动补偿模式下,所述当前色度块的运动矢量基于亮度块的运动矢量生成,所述参考亮度块为与所述当前色度块位置对应的n个亮度块中的亮度块,n≥1;
预测模块402,用于基于所述当前色度块的运动矢量,预测所述当前色度块的预测块。
综上所述,本申请实施例提供的色度的帧内预测装置,由于色度块的运动矢量是基于亮度块的运动矢量确定的,充分利用了亮度分量的运动矢量和色度分量的运动矢量的相关性,无需单独计算色度分量的运动矢量,从而简化了帧内运动补偿技术的过程,降低了色度分量的运动矢量的运算代价,相应地降低了整体运动矢量的运算代价。
作为一种示例,如图16所示,所述装置40还包括:
构造模块403,用于在所述基于参考亮度块的运动矢量确定待处理图像帧中当前色度块的运动矢量之前,构造预测模式候选队列,所述预测模式候选队列包括至少一个预测模式,每个所述预测模式均为用于预测所述当前色度块的预测块的模式;
第二确定模块404,用于在构造完成的所述预测模式候选队列中,确定所述目标帧内预测模式;
第三确定模块405,用于当目标帧内预测模式为帧内运动补偿模式时,确定参考亮度块。
作为一种示例,如图17所示,所述构造模块403,包括:
第一确定子模块4031,用于确定待处理图像帧中与所述当前色度块位置对应的n个亮度块;
检测子模块4032,用于检测所述n个亮度块中是否存在运动矢量可参考的亮度块;
添加子模块4033,用于当所述n个亮度块中存在运动矢量可参考的亮度块时,在所述预测模式候选队列中添加帧内运动补偿模式。
作为一种示例,所述检测子模块4032,用于:
按照目标顺序依次检测所述n个亮度块中亮度块的运动矢量是否可参考,直至达到检测停止条件,所述检测停止条件为可参考的运动矢量的总数等于预设的个数阈值k,或者,遍历所述n个亮度块。
作为一种示例,所述添加子模块4033,用于:
在按照所述目标顺序每检测到一个运动矢量可参考的亮度块时,在所述预测模式候选队列中添加一个帧内运动补偿模式;
或者,在达到检测停止条件后,若存在运动矢量可参考的m个亮度块,按照所述m个亮度块在所述目标顺序中的检测排布顺序,在所述预测模式候选队列中添加m个帧内运动补偿模式,m≥1。
作为一种示例,第三确定模块405有多种可实现方式,示例的,包括:
第一种可实现方式:所述预测模式候选队列包括m个帧内运动补偿模式,m≥1,如图18所示,所述第三确定模块405,包括:
第二确定子模块4051,用于在所述预测模式候选队列中,确定所述目标帧内预测模式在所述m个帧内运动补偿模式中为第r个帧内运动补偿模式,1≤r≤m;
检测子模块4052,用于按照目标顺序依次检测所述n个亮度块中亮度块的运动矢量是否可参考,直至达到检测停止条件,所述检测停止条件为可参考的运动矢量的总数等于预设的个数阈值x,x≥m,或者,可参考的运动矢量的总数等于所述r,或者,遍历所述n个亮度块;
第三确定子模块4053,用于在达到检测停止条件后,将第r个运动矢量可参考的亮度块确定为所述参考亮度块。
第二种可实现方式:所述预测模式候选队列包括m个帧内运动补偿模式,m≥1,如图19所示,所述第三确定模块405,包括:
检测子模块4052,用于按照目标顺序依次检测所述n个亮度块中亮度块的运动矢量是否可参考,直至达到检测停止条件,所述检测停止条件为可参考的运动矢量的总数等于预设的个数阈值x,x≥m,或者,遍历所述n个亮度块;
生成子模块4054,用于在达到检测停止条件后,基于每个可参考的运动矢量,生成所述当前色度块的参考预测块;
第四确定子模块4055,用于在生成的多个参考预测块中,确定符合第一目标条件的参考预测块,所述第一目标条件为参考预测块对应的残差块的残差值的绝对值之和最小,或,参考预测块对应的残差块的残差值变换量的绝对值之和最小,或,参考预测块对应的编码代价最小;
第五确定子模块4056,用于将符合所述第一目标条件的参考预测块所对应的亮度块确定为所述参考亮度块。
第三种可实现方式:所述预测模式候选队列包括m个帧内运动补偿模式,m≥1,如图20所示,所述装置40还包括:
建立模块406,用于在构造所述预测模式候选队列的过程中,建立帧内运动补偿模式的标识与亮度块的标识的对应关系表,所述对应关系表记录有添加至所述预测模式候选队列中的每个帧内运动补偿模式的标识,以及对应的运动矢量可参考的亮度块的标识,所述对应关系表中每个所述帧内运动补偿模式的标识用于唯一标识一个预测模式候选队列中的帧内运动补偿模式;
此时,所述第三确定模块405,用于:
基于所述目标帧内预测模式的标识,查询所述对应关系表,得到所述参考亮度块的标识;基于所述参考亮度块的标识确定所述参考亮度块。
作为一种示例,上述所述检测子模块4032或者所述检测子模块4052,可以包括:
设置单元,用于设置i=1;
执行单元,用于执行检测过程,所述检测过程包括:
检测所述n个亮度块中第i个亮度块的运动矢量是否可参考;
当所述第i个亮度块的运动矢量可参考时,检测是否达到所述检测停止条件;
当未达到所述检测停止条件,更新所述i,使得更新后的i=i+1,再次执行所述检测过程;
当达到所述检测停止条件,停止执行所述检测过程。
作为一种示例,所述执行单元,用于:
检测所述n个亮度块中第i个亮度块的预测类型;
当所述第i个亮度块的预测类型为帧内预测类型时,确定所述第i个亮度块的运动矢量不可参考;
当所述第i个亮度块的预测类型为帧间预测类型时,基于所述第i个亮度块的运动矢量生成备选运动矢量;
检测所述第i个亮度块对应的备选运动矢量与当前检测得到的运动矢量可参考的亮度块所对应的备选运动矢量是否相同;
当所述第i个亮度块对应的备选运动矢量与当前检测得到的运动矢量可参考的亮度块所对应的备选运动矢量不同时,基于所述第i个亮度块对应的备选运动矢量,预测所述当前色度块的备选预测块;
当所述备选预测块有效,确定所述第i个亮度块的运动矢量可参考;
当所述备选预测块无效,确定所述第i个亮度块的运动矢量不可参考;
当所述第i个亮度块对应的备选运动矢量与当前检测得到的运动矢量可参考的亮度块对应的备选运动矢量相同时,确定所述第i个亮度块的运动矢量不可参考。
作为一种示例,如图21所示,所述装置40还包括:
第一检测模块407,用于在所述基于所述第i个亮度块对应的备选运动矢量,预测所述当前色度块的备选预测块之后,检测所述备选预测块是否全部位于所述待处理图像帧中的色度已编码区域;
第四确定模块408,用于当所述备选预测块全部位于所述待处理图像帧中的色度已编码区域内,确定所述备选预测块有效;
第五确定模块409,用于当所述备选预测块不是全部位于所述待处理图像帧中的色度已编码区域内,确定所述备选预测块无效。
作为一种示例,如图22所示,所述装置40还包括:
第一检测模块407,用于在所述基于所述第i个亮度块对应的备选运动矢量,预测所述当前色度块的备选预测块之后,检测所述备选预测块是否全部位于所述待处理图像帧中的色度已编码区域;
第二检测模块410,用于检测所述备选预测块是否位于所述当前色度块的指定方位;
第六确定模块411,用于当所述备选预测块全部位于所述待处理图像帧中的色度已编码区域内,且位于所述当前色度块的指定方位,确定所述备选预测块有效;
第七确定模块412,用于当所述备选预测块不是全部位于所述待处理图像帧中的色度已编码区域内,或者,不位于所述当前色度块的指定方位,确定所述备选预测块无效;
作为一种示例,所述当前色度块的指定方位为所述当前色度块的左侧、上侧和左上侧的任一方位。
作为一种示例,所述第一检测模块407,用于:
当所述第i个亮度块对应的备选运动矢量为分像素运动矢量,获取所述备选预测块对应的参考色度块,所述备选预测块的色度像素值是基于所述参考色度块的像素值插值得到的;
检测所述参考色度块是否全部位于所述待处理图像帧中的色度已编码区域;
当所述参考色度块全部位于所述待处理图像帧中的色度已编码区域内,确定所述备选预测块全部位于所述待处理图像帧中的色度已编码区域;
当所述参考色度块不是全部位于所述待处理图像帧中的色度已编码区域内,确定所述备选预测块不是全部位于所述待处理图像帧中的色度已编码区域。
作为一种示例,上述第一确定子模块4031,包括:
确定单元,用于确定待处理图像帧中与所述当前色度块位置对应的亮度图像区域;
处理单元,用于将所有目标亮度块作为所述n个亮度块,或者,在所有目标亮度块中筛选指定位置的亮度块作为所述n个亮度块,所述指定位置的亮度块包括:覆盖所述亮度图像区域中中心像素点的亮度块、覆盖所述亮度图像区域中左上角像素点的亮度块、覆盖所述亮度图像区域中右上角像素点的亮度块、覆盖所述亮度图像区域中左下角像素点的亮度块和覆盖所述亮度图像区域中右下角像素点的亮度块;
作为一种示例,所述目标亮度块为部分或全部在所述亮度图像区域中的亮度块。
作为一种示例,所述n个亮度块至少包括:覆盖亮度图像区域中中心像素点的亮度块、覆盖所述亮度图像区域中左上角像素点的亮度块、覆盖所述亮度图像区域中右上角像素点的亮度块、覆盖所述亮度图像区域中左下角像素点的亮度块和覆盖所述亮度图像区域中右下角像素点的亮度块,作为一种示例,所述亮度图像区域为待处理图像帧中与所述当前色度块位置对应的亮度区域;
所述目标顺序为:
覆盖所述亮度图像区域中中心像素点的亮度块、覆盖所述亮度图像区域中左上角像素点的亮度块、覆盖所述亮度图像区域中右上角像素点的亮度块、覆盖所述亮度图像区域中左下角像素点的亮度块和覆盖亮度图像区域中右下角像素点的亮度块的顺序。
作为一种示例,所述目标顺序为随机确定的顺序。
作为一种示例,所述装置应用于解码端,该装置还包括:用于确定所述参考亮度块的第三确定模块。该第三确定模块可以为图16所示的第三确定模块405,如图23所示,所述第三确定模块405,包括:
提取子模块4057,用于从解码后的码流中提取所述参考亮度块的标识;
第六确定子模块4058,用于基于所述参考亮度块的标识,在所述n个亮度块中确定所述参考亮度块。
作为一种示例,所述第六确定子模块4058,用于:
为所述n个亮度块中运动矢量可参考的亮度块按照与编码端约定的顺序分配标识;
将与所述参考亮度块的标识一致的运动矢量可参考的亮度块确定为所述参考亮度块。
作为一种示例,所述装置应用于编码端,如图24所示,所述装置40还包括:
第一编码模块413,用于在所述确定所述参考亮度块之后,将所述目标帧内预测模式在预测模式候选队列中的索引编码后添加至所述当前色度块的码流中,所述预测模式候选队列包括至少一个预测模式,每个所述预测模式均为用于预测所述当前色度块的预测块的模式。
作为一种示例,如图25所示,所述装置40还包括:
获取模块414,用于在所述确定所述参考亮度块之后,获取所述参考亮度块的标识;
第二编码模块415,用于将所述参考亮度块的标识编码后添加至所述当前色度块的码流中。
作为一种示例,所述获取模块414,用于:
为所述n个亮度块中运动矢量可参考的亮度块按照与解码端约定的顺序分配标识;获取为所述参考亮度块分配的标识。
作为一种示例,为所述n个亮度块中运动矢量可参考的亮度块分配的标识为以0或1为起始标识,公差为1的等差递增数列。
作为一种示例,所述第一确定模块401,用于:
按照所述待处理图像帧的编码格式,确定所述当前色度块与所述参考亮度块的矢量缩放比;基于所述矢量缩放比,对所述参考亮度块的运动矢量进行缩放得到所述当前色度块的运动矢量。
作为一种示例,所述第二确定模块404,用于:
将构造完成的所述预测模式候选队列中对应的预测块符合第二目标条件的帧内预测模式,确定为所述目标帧内预测模式;
作为一种示例,所述第二目标条件为:基于帧内预测模式确定的预测块所对应的残差块的残差值的绝对值之和最小,或,基于帧内预测模式确定的预测块所对应的残差块的残差值变换量的绝对值之和最小,或,采用帧内预测模式编码对应的编码代价最小。
作为一种示例,所述确定单元,用于:基于所述当前色度块的尺寸,以及亮度分量与色度分量的分布密度比例,确定所述当前色度块位置对应的亮度图像区域,该亮度图像区域的尺寸等于所述当前色度块的尺寸与所述分布密度比例的乘积。
综上所述,本申请实施例提供的色度的帧内预测装置,由于色度块的运动矢量是基于亮度块的运动矢量确定的,充分利用了亮度分量的运动矢量和色度分量的运动矢量的相关性,无需单独计算色度分量的运动矢量,从而简化了帧内运动补偿技术的过程,降低了色度分量的运动矢量的运算代价,相应地降低了整体运动矢量的运算代价。
本申请实施例提供一种色度的帧内预测装置,包括:
至少一个处理器;和
至少一个存储器;
所述至少一个存储器存储有至少一个程序,所述至少一个处理器能够执行所述至少一个程序,以执行本申请实施例提供的任一所述的色度的帧内预测方法。
本申请实施例提供一种存储介质,该存储介质为非易失性计算机可读存储介质,所述存储介质中存储有指令或代码,
所述指令或代码被处理器执行时,使得所述处理器能够执行本申请实施例任一所述的色度的帧内预测方法。
关于上述实施例中的装置,其中各个模块执行操作的具体方式已经在有关该方法的实施例中进行了详细描述,此处将不做详细阐述说明。
本领域技术人员在考虑说明书及实践这里公开的发明后,将容易想到本申请的其它实施方案。本申请旨在涵盖本申请的任何变型、用途或者适应性变化,这些变型、用途或者适应性变化遵循本申请的一般性原理并包括本申请未公开的本技术领域中的公知常识或惯用技术手段。说明书和实施例仅被视为示例性的,本申请的真正范围和精神由权利要求指出。
应当理解的是,本申请并不局限于上面已经描述并在附图中示出的精确结构,并且可以在不脱离其范围进行各种修改和改变。本申请的范围仅由所附的权利要求来限制。

Claims (26)

  1. 一种色度的帧内预测方法,其特征在于,所述方法包括:
    当目标帧内预测模式为帧内运动补偿模式时,基于参考亮度块的运动矢量确定待处理图像帧中当前色度块的运动矢量,其中,所述目标帧内预测模式为用于预测所述当前色度块的预测块的模式,在所述帧内运动补偿模式下,所述当前色度块的运动矢量基于亮度块的运动矢量生成,所述参考亮度块为与所述当前色度块位置对应的n个亮度块中的亮度块,n≥1;
    基于所述当前色度块的运动矢量,预测所述当前色度块的预测块。
  2. 根据权利要求1所述的方法,其特征在于,在所述基于参考亮度块的运动矢量确定待处理图像帧中当前色度块的运动矢量之前,所述方法还包括:
    构造预测模式候选队列,所述预测模式候选队列包括至少一个预测模式,每个所述预测模式均为用于预测所述当前色度块的预测块的模式;
    在构造完成的所述预测模式候选队列中,确定所述目标帧内预测模式;
    当目标帧内预测模式为帧内运动补偿模式时,确定所述参考亮度块。
  3. 根据权利要求2所述的方法,其特征在于,
    所述构造预测模式候选队列,包括:
    确定待处理图像帧中与所述当前色度块位置对应的n个亮度块;
    检测所述n个亮度块中是否存在运动矢量可参考的亮度块;
    当所述n个亮度块中存在运动矢量可参考的亮度块时,在所述预测模式候选队列中添加帧内运动补偿模式。
  4. 根据权利要求3所述的方法,其特征在于,所述检测所述n个亮度块中是否存在运动矢量可参考的亮度块,包括:
    按照目标顺序依次检测所述n个亮度块中亮度块的运动矢量是否可参考,直至达到检测停止条件,所述检测停止条件为可参考的运动矢量的总数等于预设的个数阈值k,或者,遍历所述n个亮度块。
  5. 根据权利要求4所述的方法,其特征在于,所述在所述预测模式候选队列中添加帧内运动补偿模式,包括:
    在按照所述目标顺序每检测到一个运动矢量可参考的亮度块时,在所述预测模式候选队列中添加一个帧内运动补偿模式;
    或者,在达到检测停止条件后,若存在运动矢量可参考的m个亮度块,按照所述m个亮度块在所述目标顺序中的检测排布顺序,在所述预测模式候选队列中添加m个帧内运动补偿模式,m≥1。
  6. 根据权利要求2所述的方法,其特征在于,所述预测模式候选队列包括m个帧内运动补偿模式,m≥1,所述确定所述参考亮度块,包括:
    在所述预测模式候选队列中,确定所述目标帧内预测模式在所述m个帧内运动补偿模式中为第r个帧内运动补偿模式,1≤r≤m;
    按照目标顺序依次检测所述n个亮度块中亮度块的运动矢量是否可参考,直至达到检测停止条件,所述检测停止条件为可参考的运动矢量的总数等于预设的个数阈值x,x≥m,或者,可参考的运动矢量的总数等于所述r,或者,遍历所述n个亮度块;
    在达到检测停止条件后,将第r个运动矢量可参考的亮度块确定为所述参考亮度块。
  7. 根据权利要求2所述的方法,其特征在于,所述预测模式候选队列包括m个帧内运动补偿模式,m≥1,
    所述确定所述参考亮度块,包括:
    按照目标顺序依次检测所述n个亮度块中亮度块的运动矢量是否可参考,直至达到检测停止条件,所述检测停止条件为可参考的运动矢量的总数等于预设的个数阈值x,x≥m,或者,遍历所述n个亮度块;
    在达到检测停止条件后,基于每个可参考的运动矢量,生成所述当前色度块的参考预测块;
    在生成的多个参考预测块中,确定符合第一目标条件的参考预测块,所述第一目标条件为参考预测块对应的残差块的残差值的绝对值之和最小,或,参考预测块对应的残差块的残差值变换量的绝对值之和最小,或,参考预测块对应的编码代价最小;
    将符合所述第一目标条件的参考预测块所对应的亮度块确定为所述参考亮度块。
  8. 根据权利要求2所述的方法,其特征在于,所述预测模式候选队列包括m个帧内运动补偿模式,m≥1,所述方法还包括:在构造所述预测模式候选队列的过程中,建立帧内运动补偿模式的标识与亮度块的标识的对应关系表,所述对应关系表记录有添加至所述预测模式候选队列中的每个帧内运动补偿模式的标识,以及对应的运动矢量可参考的亮度块的标识,所述对应关系表中每个所述帧内运动补偿模式的标识用于唯一标识一个预测模式候选队列中的帧内运动补偿模式;
    所述确定所述参考亮度块,包括:
    基于所述目标帧内预测模式的标识,查询所述对应关系表,得到所述参考亮度块的标识;
    基于所述参考亮度块的标识确定所述参考亮度块。
  9. 根据权利要求4至8任一所述的方法,其特征在于,所述按照目标顺序依次检测所述n个亮度块中亮度块的运动矢量是否可参考,直至达到检测停止条件,包括:
    设置i=1;
    执行检测过程,所述检测过程包括:
    检测所述n个亮度块中第i个亮度块的运动矢量是否可参考;
    当所述第i个亮度块的运动矢量可参考时,检测是否达到所述检测停止条件;
    当未达到所述检测停止条件,更新所述i,使得更新后的i=i+1,再次执行所述检测过程;
    当达到所述检测停止条件,停止执行所述检测过程。
  10. 根据权利要求9所述的方法,其特征在于,所述检测所述n个亮度块中第i个亮度块的运动矢量是否可参考,包括:
    检测所述n个亮度块中第i个亮度块的预测类型;
    当所述第i个亮度块的预测类型为帧内预测类型时,确定所述第i个亮度块的运动矢量不可参考;
    当所述第i个亮度块的预测类型为帧间预测类型时,基于所述第i个亮度块的运动矢量生成备选运动矢量;
    检测所述第i个亮度块对应的备选运动矢量与当前检测得到的运动矢量可参考的亮度块所对应的备选运动矢量是否相同;
    当所述第i个亮度块对应的备选运动矢量与当前检测得到的运动矢量可参考的亮度块所对应的备选运动矢量不同时,基于所述第i个亮度块对应的备选运动矢量,预测所述当前色度块的备选预测块;
    当所述备选预测块有效,确定所述第i个亮度块的运动矢量可参考;
    当所述备选预测块无效,确定所述第i个亮度块的运动矢量不可参考;
    当所述第i个亮度块对应的备选运动矢量与当前检测得到的运动矢量可参考的亮度块对应的备选运动矢量相同时,确定所述第i个亮度块的运动矢量不可参考。
  11. 根据权利要求10所述的方法,其特征在于,在所述基于所述第i个亮度块对应的备选运动矢量,预测所述当前色度块的备选预测块之后,所述方法还包括:
    检测所述备选预测块是否全部位于所述待处理图像帧中的色度已编码区域;
    当所述备选预测块全部位于所述待处理图像帧中的色度已编码区域内,确定所述备选预测块有效;
    当所述备选预测块不是全部位于所述待处理图像帧中的色度已编码区域内,确定所述备选预测块无效。
  12. 根据权利要求10所述的方法,其特征在于,在所述基于所述第i个亮度块对应的备选运动矢量,预测所述当前色度块的备选预测块之后,所述方法还包括:
    检测所述备选预测块是否全部位于所述待处理图像帧中的色度已编码区域;
    检测所述备选预测块是否位于所述当前色度块的指定方位;
    当所述备选预测块全部位于所述待处理图像帧中的色度已编码区域内,且位于所述当前色度块的指定方位,确定所述备选预测块有效;
    当所述备选预测块不是全部位于所述待处理图像帧中的色度已编码区域内,或者,不位于所述当前色度块的指定方位,确定所述备选预测块无效;
    其中,所述当前色度块的指定方位为所述当前色度块的左侧、上侧和左上侧的任一方位。
  13. 根据权利要求11或12所述的方法,其特征在于,所述检测所述备选预测块是否全部位于所述待处理图像帧中的色度已编码区域,包括:
    当所述第i个亮度块对应的备选运动矢量为分像素运动矢量,获取所述备选预测块对应的参考色度块,所述备选预测块的色度像素值是基于所述参考色度块的像素值插值得到的;
    检测所述参考色度块是否全部位于所述待处理图像帧中的色度已编码区域;
    当所述参考色度块全部位于所述待处理图像帧中的色度已编码区域内,确定所述备选预测块全部位于所述待处理图像帧中的色度已编码区域;
    当所述参考色度块不是全部位于所述待处理图像帧中的色度已编码区域内,确定所述备选预测块不是全部位于所述待处理图像帧中的色度已编码区域。
  14. 根据权利要求3至13任一所述的方法,其特征在于,所述确定待处理图像帧中与所述当前色度块位置对应的n个亮度块,包括:
    确定待处理图像帧中与所述当前色度块位置对应的亮度图像区域;
    将所有目标亮度块作为所述n个亮度块,或者,在所有目标亮度块中筛选指定位置的亮度块作为所述n个亮度块,所述指定位置的亮度块包括:覆盖所述亮度图像区域中中心像素点的亮度块、覆盖所述亮度图像区域中左上角像素点的亮度块、覆盖所述亮度图像区域中右上角像素点的亮度块、覆盖所述亮度图像区域中左下角像素点的亮度块和覆盖所述亮度图像区域中右下角像素点的亮度块;
    其中,所述目标亮度块为部分或全部在所述亮度图像区域中的亮度块。
  15. 根据权利要求4至8任一所述的方法,其特征在于,所述n个亮度块至少包括:覆盖亮度图像区域中中心像素点的亮度块、覆盖所述亮度图像区域中左上角像素点的亮度块、覆盖所述亮度图像区域中右上角像素点的亮度块、覆盖所述亮度图像区域中左下角像素点的亮度块和覆盖所述亮度图像区域中右下角像素点的亮度块,其中,所述亮度图像区域为待处理图像帧中与所述当前色度块位置对应的亮度区域,所述目标顺序为:
    覆盖所述亮度图像区域中中心像素点的亮度块、覆盖所述亮度图像区域中左上角像素点的亮度块、覆盖所述亮度图像区域中右上角像素点的亮度块、覆盖所述亮度图像区域中左下角像素点的亮度块和覆盖亮度图像区域中右下角像素点的亮度块的顺序;
    或者,所述目标顺序为:随机确定的顺序。
  16. 根据权利要求1所述的方法,其特征在于,所述方法应用于解码端,确定所述参考亮度块的过程,包括:
    从解码后的码流中提取所述参考亮度块的标识;
    基于所述参考亮度块的标识,在所述n个亮度块中确定所述参考亮度块。
  17. 根据权利要求16所述的方法,其特征在于,所述基于所述参考亮度块的标识,在所述n个亮度块中确定所述参考亮度块,包括:
    为所述n个亮度块中运动矢量可参考的亮度块按照与编码端约定的顺序分配标识;
    将与所述参考亮度块的标识一致的运动矢量可参考的亮度块确定为所述参考亮度块。
  18. 根据权利要求1所述的方法,其特征在于,所述方法应用于编码端,
    在所述预测所述当前色度块的预测块之后,所述方法还包括:
    将所述目标帧内预测模式在预测模式候选队列中的索引编码后添加至所述当前色度块的码流中,所述预测模式候选队列包括至少一个预测模式,每个所述预测模式均为用于预测所述当前色度块的预测块的模式。
  19. 根据权利要求18所述的方法,其特征在于,所述方法还包括:
    获取所述参考亮度块的标识;
    将所述参考亮度块的标识编码后添加至所述当前色度块的码流中。
  20. 根据权利要求19所述的方法,其特征在于,所述获取所述参考亮度块的标识,包括:
    为所述n个亮度块中运动矢量可参考的亮度块按照与解码端约定的顺序分配标识;
    获取为所述参考亮度块分配的标识。
  21. 根据权利要求1所述的方法,其特征在于,所述基于参考亮度块的运动矢量确定待处理图像帧中当前色度块的运动矢量,包括:
    按照所述待处理图像帧的编码格式,确定所述当前色度块与所述参考亮度块的矢量缩放比;
    基于所述矢量缩放比,对所述参考亮度块的运动矢量进行缩放得到所述当前色度块的运动矢量。
  22. 根据权利要求2所述的方法,其特征在于,所述在构造完成的所述预测模式候选队列中,确定所述目标帧内预测模式,包括:
    将构造完成的所述预测模式候选队列中对应的预测块符合第二目标条件的帧内预测模式,确定为所述目标帧内预测模式;
    其中,所述第二目标条件为:基于帧内预测模式确定的预测块所对应的残差块的残差值的绝对值之和最小,或,基于帧内预测模式确定的预测块所对应的残差块的残差值变换量的绝对值之和最小,或,采用帧内预测模式编码对应的编码代价最小。
  23. 根据权利要求14所述的方法,其特征在于,所述确定待处理图像帧中与所述当前色度块位置对应的亮度图像区域,包括:
    基于所述当前色度块的尺寸,以及亮度分量与色度分量的分布密度比例,确定所述当前色度块位置对应的亮度图像区域,该亮度图像区域的尺寸等于所述当前色度块的尺寸与所述分布密度比例的乘积。
  24. 一种色度的帧内预测装置,其特征在于,所述装置包括:
    第一确定模块,用于当目标帧内预测模式为帧内运动补偿模式时,基于参考亮度块的运动矢量确定待处理图像帧中当前色度块的运动矢量,其中,所述目标帧内预测模式为用于预测所述当前色度块的预测块的模式,在所述帧内运动补偿模式下,所述当前色度块的运动矢量基于亮度块的运动矢量生成,所述参考亮度块为与所述当前色度块位置对应的n个亮度块中的亮度块,n≥1;
    预测模块,用于基于所述当前色度块的运动矢量,预测所述当前色度块的预测块。
  25. 一种色度的帧内预测装置,其特征在于,包括:
    至少一个处理器;和
    至少一个存储器;
    所述至少一个存储器存储有至少一个程序,所述至少一个处理器能够执行所述至少一个程序,以执行权利要求1至23任一所述的色度的帧内预测方法。
  26. 一种存储介质,其特征在于,所述存储介质中存储有指令或代码,
    所述指令或代码被处理器执行时,使得所述处理器能够执行权利要求1至23任一所述的色度的帧内预测方法。
PCT/CN2019/079808 2018-03-30 2019-03-27 色度的帧内预测方法及装置 WO2019184934A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810276799.5 2018-03-30
CN201810276799.5A CN110324627B (zh) 2018-03-30 2018-03-30 色度的帧内预测方法及装置

Publications (1)

Publication Number Publication Date
WO2019184934A1 true WO2019184934A1 (zh) 2019-10-03

Family

ID=68062566

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/079808 WO2019184934A1 (zh) 2018-03-30 2019-03-27 色度的帧内预测方法及装置

Country Status (2)

Country Link
CN (1) CN110324627B (zh)
WO (1) WO2019184934A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112203086A (zh) * 2020-09-30 2021-01-08 字节跳动(香港)有限公司 图像处理方法、装置、终端和存储介质
CN114189688A (zh) * 2020-09-14 2022-03-15 四川大学 基于亮度模板匹配的色度分量预测方法
CN115190312A (zh) * 2021-04-02 2022-10-14 西安电子科技大学 一种基于神经网络的跨分量色度预测方法及装置

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105393536A (zh) * 2013-06-21 2016-03-09 高通股份有限公司 使用位移向量从预测性块的帧内预测
WO2016199574A1 (ja) * 2015-06-08 2016-12-15 ソニー株式会社 画像処理装置および画像処理方法
CN106464921A (zh) * 2014-06-19 2017-02-22 Vid拓展公司 用于块内复制搜索增强的方法和系统
WO2017171370A1 (ko) * 2016-03-28 2017-10-05 주식회사 케이티 비디오 신호 처리 방법 및 장치
WO2017206803A1 (en) * 2016-05-28 2017-12-07 Mediatek Inc. Method and apparatus of current picture referencing for video coding

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6259741B1 (en) * 1999-02-18 2001-07-10 General Instrument Corporation Method of architecture for converting MPEG-2 4:2:2-profile bitstreams into main-profile bitstreams
US7116831B2 (en) * 2002-04-10 2006-10-03 Microsoft Corporation Chrominance motion vector rounding
CN1232126C (zh) * 2002-09-30 2005-12-14 三星电子株式会社 图像编码方法和装置以及图像解码方法和装置
US7724827B2 (en) * 2003-09-07 2010-05-25 Microsoft Corporation Multi-layer run level encoding and decoding
CN100461867C (zh) * 2004-12-02 2009-02-11 中国科学院计算技术研究所 一种帧内图像预测编码方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105393536A (zh) * 2013-06-21 2016-03-09 高通股份有限公司 使用位移向量从预测性块的帧内预测
CN106464921A (zh) * 2014-06-19 2017-02-22 Vid拓展公司 用于块内复制搜索增强的方法和系统
WO2016199574A1 (ja) * 2015-06-08 2016-12-15 ソニー株式会社 画像処理装置および画像処理方法
WO2017171370A1 (ko) * 2016-03-28 2017-10-05 주식회사 케이티 비디오 신호 처리 방법 및 장치
WO2017206803A1 (en) * 2016-05-28 2017-12-07 Mediatek Inc. Method and apparatus of current picture referencing for video coding

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JIN HEO: "Chroma Intra Prediction", JOINT VIDEO EXPLORATION TEAM (JVET) OF ITU-T SG 16 WP 3, 5 October 2016 (2016-10-05), Chengdu *
MADHUKAR BUDAGAVI: "AHG8: Video Coding Using Intra Motion Compensation", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG 16 WP 3, 9 April 2013 (2013-04-09), Incheon *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114189688A (zh) * 2020-09-14 2022-03-15 Sichuan University — Chroma component prediction method based on luma template matching
CN112203086A (zh) * 2020-09-30 2021-01-08 ByteDance (Hong Kong) Ltd. — Image processing method, apparatus, terminal, and storage medium
CN112203086B (zh) * 2020-09-30 2023-10-17 ByteDance (Hong Kong) Ltd. — Image processing method, apparatus, terminal, and storage medium
CN115190312A (zh) * 2021-04-02 2022-10-14 Xidian University — Neural network-based cross-component chroma prediction method and apparatus

Also Published As

Publication number Publication date
CN110324627B (zh) 2022-04-05
CN110324627A (zh) 2019-10-11

Similar Documents

Publication Publication Date Title
US11399179B2 (en) 2022-07-26 Method and apparatus for encoding/decoding image
CN114900690B (zh) 2023-06-27 Video decoding method, video encoding method, apparatus, device, and storage medium
TWI741589B (zh) 2021-10-01 Method and apparatus of luma MPM list derivation for video coding
WO2019184934A1 (zh) 2019-10-03 Chroma intra prediction method and apparatus
CN113273213A (zh) 2021-08-17 Image encoding/decoding method and apparatus, and recording medium storing a bitstream
EP3198867A1 (en) 2017-08-02 Method of improved directional intra prediction for video coding
CN111131822B (zh) 2023-08-01 Overlapped block motion compensation with motion information derived from a neighborhood
CN109379594B (zh) 2021-03-23 Video coding compression method, apparatus, device, and medium
US11729410B2 (en) 2023-08-15 Image decoding method/apparatus, image encoding method/apparatus, and recording medium storing bitstream
TWI729477B (zh) 2021-06-01 Method and apparatus of sub-block deblocking in video coding
CN114830651A (zh) 2022-07-29 Intra prediction method, encoder, decoder, and computer storage medium
CN115174931A (zh) 2022-10-11 Video image decoding and encoding method and apparatus
US20220353509A1 (en) 2022-11-03 Method and apparatus for image encoding and decoding using temporal motion information
CN110719467B (zh) 2022-04-01 Chroma block prediction method, encoder, and storage medium
US20230283795A1 (en) 2023-09-07 Video coding method and device using motion compensation of decoder side
CN111770334B (zh) 2022-06-24 Data encoding method and apparatus, and data decoding method and apparatus
CN116918331A (zh) 2023-10-20 Encoding method and encoding apparatus
CN113875237A (zh) 2021-12-31 Method and apparatus for signaling a prediction-mode-related signal in intra prediction
JP6875802B2 (ja) 2021-05-26 Image encoding device and control method thereof, imaging device, and program
WO2022116119A1 (zh) 2022-06-09 Inter prediction method, encoder, decoder, and storage medium
WO2022061563A1 (zh) 2022-03-31 Video encoding method and apparatus, and computer-readable storage medium
RU2819286C2 (ru) 2024-05-17 Method and device for encoding/decoding image signals
RU2819393C2 (ru) 2024-05-20 Method and device for encoding/decoding image signals
RU2806878C2 (ru) 2023-11-08 Image encoding/decoding method and device, and recording medium storing a bitstream
RU2819080C2 (ru) 2024-05-13 Method and device for encoding/decoding image signals

Legal Events

121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19775981; Country of ref document: EP; Kind code of ref document: A1)

122 Ep: pct application non-entry in european phase (Ref document number: 19775981; Country of ref document: EP; Kind code of ref document: A1)

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established (Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 14/04/2021))
