WO2018196864A1 - Image prediction method and related products (图像预测方法和相关产品) - Google Patents


Info

Publication number
WO2018196864A1
WO2018196864A1 · PCT/CN2018/084955 · CN2018084955W
Authority
WO
WIPO (PCT)
Prior art keywords
current block
pixel
weighting coefficient
block
pixel value
Prior art date
Application number
PCT/CN2018/084955
Other languages
English (en)
French (fr)
Inventor
鲁晓牧
高山
杨海涛
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2018196864A1
Priority to US16/662,627 (US11039145B2)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/14: Coding unit complexity, e.g. amount of activity or edge presence estimation
    • H04N19/117: Filters, e.g. for pre-processing or post-processing
    • H04N19/136: Incoming video signal characteristics or properties
    • H04N19/176: Adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N19/182: Adaptive coding characterised by the coding unit, the unit being a pixel
    • H04N19/42: Implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/593: Predictive coding involving spatial prediction techniques

Definitions

  • the present application relates to the field of video image processing, and more particularly to image prediction methods and related products.
  • HEVC (high efficiency video coding)
  • the basic principle of video coding compression is to use the correlation among the spatial domain, the time domain and codewords to remove redundancy as much as possible.
  • the current popular practice is to use a block-based hybrid video coding framework to implement video coding compression through prediction (including intra prediction and inter prediction), transform, quantization, and entropy coding. This coding framework has proven remarkably durable, and HEVC still uses it.
  • an image is generally divided into a plurality of square coding units (English: coding unit, abbreviation: CU) for encoding.
  • when the texture characteristics of the horizontal direction and the vertical direction of a CU are substantially the same, the conventional image prediction method can obtain relatively good prediction accuracy.
  • testing found, however, that in some cases where the texture characteristics of the horizontal and vertical directions of the CU differ considerably, it is sometimes difficult to obtain good prediction accuracy with the conventional image prediction method.
  • Embodiments of the present application provide an image prediction method and related products.
  • an embodiment of the present application provides an image prediction method, including: performing intra prediction on a current block by using a reference block to obtain initial predicted pixel values of the pixels in the current block, and performing weighted filtering on those initial predicted pixel values to obtain predicted pixel values of the pixels in the current block.
  • the weighting coefficients used in the weighting filtering process include horizontal weighting coefficients and vertical weighting coefficients, and the attenuation speed factor acting on the horizontal weighting coefficients is different from the attenuation speed factor acting on the vertical weighting coefficients.
  • in some possible implementations, the attenuation speed factor acting on the horizontal weighting coefficient is different from the attenuation speed factor acting on the vertical weighting coefficient; in other possible implementations, the attenuation speed factor acting on the horizontal weighting coefficient is the same as the attenuation speed factor acting on the vertical weighting coefficient.
  • specifically, the difference between the horizontal-direction texture characteristic and the vertical-direction texture characteristic of the current block may be determined first, and the attenuation speed factor acting on the horizontal weighting coefficient and the attenuation speed factor acting on the vertical weighting coefficient are then determined based on that difference.
  • alternatively, the attenuation speed factor acting on the horizontal weighting coefficient may be preset; in that case the attenuation speed factor acting on the vertical weighting coefficient is determined correspondingly, so there is no need to first determine the difference between the horizontal-direction and vertical-direction texture characteristics of the current block and then derive the two attenuation speed factors from that difference.
  • the intra prediction mentioned in the embodiments of the present application is, for example, directional intra prediction, DC intra prediction, interpolated intra prediction, or another form of intra prediction.
  • the image prediction method in each embodiment of the present application may be applied to, for example, a video encoding process or to a video decoding process.
  • the reference block of the current block may include, for example, an upper adjacent reference block of the current block, a left adjacent reference block, and an upper left adjacent reference block, and the like.
  • the attenuation speed factor can be, for example, equal to 1, 1.5, 1.65, 2, 3 or other values.
  • the texture characteristics may include, for example, side length, variance, and/or edge sharpness. The horizontal-direction texture characteristics may therefore include the horizontal side length, horizontal-direction variance, and/or horizontal-direction edge sharpness; the vertical-direction texture characteristics may include the vertical side length, vertical-direction variance, and/or vertical-direction edge sharpness.
  • the value types of the difference thresholds may be various.
  • the attenuation speed factor reflects the attenuation rate of the applied weighting coefficient to some extent. If the attenuation speed factors are different, then the attenuation speed of the applied weighting coefficients can be made different. Whether the attenuation speed factor acting on the horizontal weighting coefficient is the same as the attenuation speed factor acting on the vertical weighting coefficient depends mainly on the difference between the horizontal direction texture characteristic and the vertical direction texture characteristic of the current block.
  • the weighting coefficients used for weighted filtering of the initial predicted pixel values of the pixels in the current block include horizontal weighting coefficients and vertical weighting coefficients, and different attenuation speed factors can be set for them; that is, the attenuation speed factor acting on the horizontal weighting coefficient may be different from the attenuation speed factor acting on the vertical weighting coefficient. This makes it possible to control the vertical and horizontal weighting coefficients to decay at different speeds, which helps meet the needs of scenes in which the two should be attenuated at different rates.
  • enhancing the flexibility of controlling the attenuation speeds of the vertical and horizontal weighting coefficients in turn helps improve image prediction accuracy. For example, when the difference between the horizontal-direction and vertical-direction texture characteristics of the current block exceeds a difference threshold, matching the difference in weighting-coefficient attenuation speeds to the texture difference helps improve image prediction accuracy.
  • performing intra prediction on the current block by using the reference block to obtain the initial predicted pixel values of the pixels in the current block may include: filtering the reconstructed pixel values of the reference pixels in the reference block to obtain filtered pixel values of those reference pixels; and performing intra prediction on the current block by using the filtered pixel values of the reference pixels in the reference block to obtain the initial predicted pixel values of the pixels in the current block.
  • alternatively, performing intra prediction on the current block by using the reference block to obtain the initial predicted pixel values of the pixels in the current block may include: performing intra prediction on the current block by using the reconstructed pixel values of the reference pixels in the reference block to obtain the initial predicted pixel values of the pixels in the current block.
  • the weighting filtering formula used in the weighting filtering process may be various.
  • performing weighted filtering on the initial predicted pixel values of the pixel points in the current block to obtain the predicted pixel values of the pixel points in the current block may include: initial prediction of the pixel points in the current block based on the following weighting filtering formula Pixel values are weighted to obtain predicted pixel values of pixels in the current block,
  • c_top is a horizontal weighting coefficient; c_left and c_topleft are vertical weighting coefficients.
  • c_top represents the weighting coefficient corresponding to the reconstructed pixel values of the upper adjacent reference block of the current block, c_left represents the weighting coefficient corresponding to the reconstructed pixel values of the left adjacent reference block of the current block, and c_topleft represents the weighting coefficient corresponding to the reconstructed pixel value of the upper-left adjacent reference block of the current block.
  • x represents the abscissa, and y the ordinate, of a pixel in the current block relative to the upper-left vertex of the current block.
  • d_2 is the attenuation speed factor acting on the horizontal weighting coefficient, d_1 is the attenuation speed factor acting on the vertical weighting coefficients, and d_1 and d_2 are real numbers.
  • p″[x, y] represents the predicted pixel value of the pixel with coordinates [x, y] in the current block, and p′[x, y] represents the initial predicted pixel value of that pixel.
  • r[x, -1] represents the reconstructed pixel value of the pixel with coordinates [x, -1] in the upper adjacent reference block of the current block, r[-1, -1] represents the reconstructed pixel value of the pixel with coordinates [-1, -1] in the upper-left adjacent reference block of the current block, and r[-1, y] represents the reconstructed pixel value of the pixel with coordinates [-1, y] in the left adjacent reference block of the current block.
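The weighting filter formula itself appears only as an image in the source, so the sketch below merely illustrates one plausible PDPC-style form consistent with the definitions above. The initial weight of 32, the "+ 32) >> 6" normalisation, the relation c_topleft = c_left >> 2, and integer-valued d_1 and d_2 are all assumptions for illustration, not taken from the patent.

```python
def weighted_filter(p_init, r_top, r_left, r_topleft, d1, d2):
    """Hypothetical PDPC-style weighted filtering of initial predictions.

    p_init[y][x] -- initial predicted pixel values p'[x, y] of the current block
    r_top[x]     -- reconstructed pixels r[x, -1] of the upper reference row
    r_left[y]    -- reconstructed pixels r[-1, y] of the left reference column
    r_topleft    -- reconstructed pixel r[-1, -1]
    d1, d2       -- attenuation speed factors (assumed integers here)
    """
    h, w = len(p_init), len(p_init[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            c_top = 32 >> ((y << 1) >> d2)   # horizontal coefficient, decays with y at rate d2
            c_left = 32 >> ((x << 1) >> d1)  # vertical coefficient, decays with x at rate d1
            c_topleft = c_left >> 2          # assumed relation, for illustration only
            out[y][x] = (c_top * r_top[x]
                         - c_topleft * r_topleft
                         + c_left * r_left[y]
                         + (64 - c_top - c_left + c_topleft) * p_init[y][x]
                         + 32) >> 6
    return out
```

Because the four weights sum to 64, a uniform block passes through unchanged; with different d_1 and d_2, the influence of the top reference row and of the left reference column fades at different rates, matching the asymmetric-texture motivation above.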
  • the attenuation speed factors d_1 and d_2 can be determined in various manners.
  • thresh1 is equal to 2.5, 4, 6, 8, 16, 32 or other values.
  • thresh4 is a real number greater than or equal to 64, for example, thresh4 is equal to 64, 65, 80, 96, 128 or other values.
  • the thresh2 is equal to, for example, 16, 17, 18, 32, 64, 128 or other values.
  • the thresh3 is a real number greater than or equal to 16.
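The text lists candidate threshold values (thresh1 through thresh4) but not the exact selection rule, so the following is only a hypothetical sketch of how the attenuation speed factors might be chosen from a side-length comparison; the function name, the rule, and the returned values are illustrative assumptions.

```python
def choose_decay_factors(width, height, thresh1=4):
    # Hypothetical rule: when one side of the current block is much longer
    # than the other (ratio >= thresh1), decay the horizontal and vertical
    # weighting coefficients at different rates; otherwise use the same rate.
    if width >= thresh1 * height:     # much wider than tall
        return 1, 2                   # (d1, d2), example values only
    if height >= thresh1 * width:     # much taller than wide
        return 2, 1
    return 2, 2                       # roughly square block
```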
  • an embodiment of the present application provides an image prediction apparatus, including a plurality of functional units for implementing any one of the methods of the first aspect.
  • the image prediction apparatus may include: a prediction unit and a filtering unit.
  • a prediction unit configured to perform intra prediction on the current block by using the reference block to obtain an initial predicted pixel value of the pixel in the current block.
  • a filtering unit configured to perform weighted filtering on the initial predicted pixel values of the pixels in the current block to obtain the predicted pixel values of the pixels in the current block, where the weighting coefficients used in the weighting filtering process include horizontal weighting coefficients and vertical weighting coefficients, and the attenuation speed factor acting on the horizontal weighting coefficient is different from the attenuation speed factor acting on the vertical weighting coefficient.
  • in some possible implementations, the attenuation speed factor acting on the horizontal weighting coefficient is different from the attenuation speed factor acting on the vertical weighting coefficient; in other possible implementations, the two attenuation speed factors are the same.
  • the image prediction device is applied, for example, to a video encoding device or a video decoding device.
  • an image prediction apparatus including:
  • a memory and processor coupled to each other; the processor for performing some or all of the steps of any of the methods of the first aspect.
  • an embodiment of the present application provides a computer readable storage medium that stores program code, where the program code includes instructions for performing some or all of the steps of any one of the methods of the first aspect.
  • an embodiment of the present application provides a computer program product, when the computer program product is run on a computer, causing the computer to perform some or all of the steps of any one of the first aspects.
  • FIG. 1A to FIG. 1B are schematic diagrams showing the division of several image blocks according to an embodiment of the present application.
  • FIG. 1-C is a schematic diagram of the locations of several possible adjacent reference blocks of a current block according to an embodiment of the present application
  • FIG. 1-D to FIG. 1-G are schematic diagrams showing values of the attenuation speed factors for image blocks of several sizes according to an embodiment of the present application.
  • FIG. 2 is a schematic flowchart of an image prediction method according to an embodiment of the present application.
  • FIG. 3 is a schematic flowchart of an image encoding method according to an embodiment of the present disclosure
  • FIG. 4 is a schematic flowchart diagram of an image decoding method according to an embodiment of the present disclosure.
  • FIG. 5-A to FIG. 5-C are schematic diagrams showing several attenuation processes of c top according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of an image prediction apparatus according to an embodiment of the present application.
  • FIG. 7 is a schematic diagram of another image prediction apparatus according to an embodiment of the present disclosure.
  • FIG. 8 is a schematic diagram of a video encoder according to an embodiment of the present disclosure.
  • FIG. 9 is a schematic diagram of a video decoder according to an embodiment of the present disclosure.
  • FIG. 10 is a schematic block diagram of an electronic device according to an embodiment of the present application.
  • FIG. 11 is another schematic block diagram of an electronic device according to an embodiment of the present application.
  • FIG. 12 is a schematic block diagram of a television application to which an embodiment of the present invention is applicable.
  • FIG. 13 is a schematic block diagram of a mobile phone application to which an embodiment of the present invention is applicable.
  • the image prediction method and the video encoding and decoding method provided by the embodiments of the present application are described below.
  • the execution subject of the image prediction method provided by the embodiments of the present application is a video encoding device or a video decoding device, which may be any device that needs to output or store video, such as a notebook computer, tablet computer, personal computer, mobile phone, video server, digital television, digital live broadcast system, wireless broadcast system, personal digital assistant (PDA), laptop or desktop computer, e-book reader, digital camera, digital recording device, digital media player, video game device, video game console, cellular or satellite radio phone, video conferencing device, video streaming device, and the like.
  • Video devices implement video compression techniques such as MPEG-2, MPEG-4, ITU-T H.263, ITU-T H.264/MPEG-4 Part 10 Advanced Video Coding (AVC), the techniques described in the ITU-T H.265 High Efficiency Video Coding (HEVC) standard, and the techniques described in the extensions of that standard, enabling more efficient transmission and reception of digital video information.
  • Video devices may transmit, receive, encode, decode, and/or store digital video information more efficiently by implementing these video codec techniques.
  • the concept of a frame refers to a complete image; played frame by frame in a certain order and at a certain frame rate, the images form a video.
  • once the frame rate reaches a certain speed, the interval between two frames falls below the resolution limit of the human eye and persistence of vision occurs, so the images appear to move continuously on the screen.
  • the basis for the compression of video files is the compression encoding of single-frame digital images.
  • within a frame of image, the spatial structure is largely the same or similar; for example, there is a close correlation and similarity between the colors of sampling points in the same object or background.
  • likewise, one frame generally has a large correlation with its previous or subsequent frame, and the differences between their pixel values are small; all of this can be compressed.
  • there is not only spatial redundancy information in the video file but also a large amount of time redundant information, which is caused by the composition of the video.
  • the frame rate of video sampling is generally 25 to 30 frames per second, and in special cases can reach 60 frames per second. That is, the sampling interval between two adjacent frames is only 1/30 to 1/25 of a second. In such a short period of time, the sampled images contain a large amount of similar information and are highly correlated with one another.
  • video compression coding is to use various technical methods to remove redundant information in the video sequence to achieve the effect of reducing storage space and saving transmission bandwidth.
  • video compression processing technologies mainly include intra prediction, inter prediction, transform quantization, entropy coding, and deblocking filtering.
  • Chroma sampling: this method makes full use of the visual and psychological characteristics of the human eye and, starting from the underlying data representation, tries to minimize the amount of data used to describe a single element.
  • Most of the television systems use luminance-chrominance-chrominance (YUV) color coding, which is widely adopted by European television systems.
  • the YUV color space includes a luminance signal Y and two color difference signals U and V, and the three components are independent of each other.
  • the YUV color mode is more flexible in representation, and the transmission takes up less bandwidth, which is superior to the traditional red, green and blue (RGB) color model.
  • the YUV 4:2:0 form indicates that the two chrominance components U and V are sampled at half the rate of the luminance component Y in both the horizontal and the vertical direction; that is, among four sampled pixels there are four luminance components Y but only one U component and one V component.
  • the amount of data is thereby further reduced, to only about 33% of the original.
  • the use of human eye physiological visual characteristics to achieve video compression through this color sampling method is one of the widely used video data compression methods.
  • Predictive coding: the data of previously encoded frames is used to predict the frame currently to be encoded.
  • prediction yields a predicted value that is not exactly equal to the actual value, leaving a certain residual. The more suitable the prediction, the closer the predicted value is to the actual value and the smaller the residual, so encoding the residual greatly reduces the amount of data; at the decoding end, the residual is added to the predicted value to restore and reconstruct the initial image.
  • this is the basic idea of predictive coding. In mainstream coding standards, predictive coding is divided into two basic types: intra prediction and inter prediction.
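A minimal sketch of this idea (the sample arrays and values are made up for illustration): the encoder transmits only the residual, and the decoder adds it back to the prediction to restore the original samples.

```python
def encode_residual(actual, predicted):
    # Encoder side: only the (typically small) differences are coded.
    return [a - p for a, p in zip(actual, predicted)]

def decode(residual, predicted):
    # Decoder side: residual plus prediction restores the original samples.
    return [r + p for r, p in zip(residual, predicted)]

actual = [100, 102, 101, 99]
predicted = [100, 100, 100, 100]               # e.g. taken from a reference frame
residual = encode_residual(actual, predicted)  # small values, cheap to encode
restored = decode(residual, predicted)
```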
  • Transform coding: instead of directly encoding the original spatial-domain information, the information sample values are converted from the current domain into another, artificially defined domain (usually called the transform domain) according to some transformation function, and compression coding is then performed according to the distribution characteristics of the information in the transform domain.
  • the reasons for transform coding include: video image data tend to have large correlations in the spatial domain, so a large amount of redundant information exists and direct encoding requires many bits.
  • after conversion to the transform domain, the data correlation is greatly reduced, so less redundant information needs to be encoded and the amount of data required for encoding drops substantially, yielding a higher compression ratio and a better compression effect.
  • Typical transform codes include the Karhunen-Loève (K-L) transform, the Fourier transform, and the like.
  • the integer discrete cosine transform (DCT) is a transform coding method commonly used in many international standards.
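To illustrate why the DCT helps (a floating-point toy sketch, not any standard's integer DCT): for a smooth, highly correlated row of samples, almost all of the energy ends up in the first (DC) coefficient, so the remaining coefficients quantize to near zero.

```python
import math

def dct2(x):
    """Orthonormal 1-D DCT-II (floating point, for illustration only)."""
    n = len(x)
    out = []
    for k in range(n):
        s = sum(x[i] * math.cos(math.pi * (i + 0.5) * k / n) for i in range(n))
        out.append((math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)) * s)
    return out

row = [100, 101, 103, 104, 106, 107, 109, 110]  # smooth spatial-domain data
coeffs = dct2(row)
dc_energy = coeffs[0] ** 2
total_energy = sum(c * c for c in coeffs)       # equals sum(v*v for v in row)
```

Because the transform is orthonormal, the total energy is preserved (Parseval), but it is concentrated into far fewer coefficients than in the spatial domain.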
  • Quantization coding: the transform coding mentioned above does not itself compress data.
  • the quantization process is a powerful means of compressing data, and it is also the main reason for the loss of data in lossy compression.
  • quantization is the process of forcibly mapping inputs with a large dynamic range onto a smaller set of output values. Because the range of input values is large, more bits are needed to represent them, while the reduced range of output values after this forced mapping can be represented with only a small number of bits.
  • Each quantized input is normalized to a quantized output, that is, quantized into an order of magnitude, often referred to as a quantization level (usually specified by the encoder).
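The mapping described above can be sketched as a simple uniform scalar quantizer (the step size of 10 and the sample values are arbitrary examples): many distinct inputs collapse onto the same level, which is where both the compression gain and the loss come from.

```python
def quantize(value, qstep):
    # Forcibly map a wide-range input onto a small set of integer levels.
    return round(value / qstep)

def dequantize(level, qstep):
    # Reconstruction recovers only an approximation of the input.
    return level * qstep

inputs = [3, 7, 12, 48, 52]
levels = [quantize(v, 10) for v in inputs]   # few distinct small integers
recon = [dequantize(l, 10) for l in levels]  # lossy approximation of inputs
```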
  • the encoder control module selects the coding mode adopted by the image block according to the local characteristics of different image blocks in the video frame.
  • the intra-predictive coded block is subjected to frequency domain or spatial domain prediction
  • the inter-predictive coded block is subjected to motion compensation prediction
  • the prediction residual is further transformed and quantized to form residual coefficients, and the final code stream is generated by the entropy encoder.
  • the intra or inter prediction reference signals are obtained by the decoding module at the encoding end.
  • the transformed and quantized residual coefficients are reconstructed by inverse quantization and inverse transform, and then added to the predicted reference signal to obtain a reconstructed image.
  • the video sequence consists of a series of pictures (English: picture), the picture is further divided into slices (English: slice), and the slice is further divided into blocks (English: block).
  • the video coding is performed in units of blocks, and can be encoded from left to right and from top to bottom line from the upper left corner position of the picture.
  • Some new video coding standards have further expanded the concept of blocks.
  • in the H.264 standard there is the macroblock (English: macroblock, abbreviation: MB), and an MB can be further divided into a plurality of prediction blocks (English: partition) that can be used for predictive coding.
  • HEVC defines basic concepts such as the coding unit (English: coding unit, abbreviation: CU), the prediction unit (English: prediction unit, abbreviation: PU) and the transform unit (English: transform unit, abbreviation: TU); these units are divided functionally, and a new tree-based structure is used to describe them.
  • a CU can be divided into smaller CUs according to a quadtree, and smaller CUs can continue to be divided to form a quadtree structure.
  • the size of a coding unit may be one of several levels, namely 64×64, 32×32, 16×16 and 8×8, and a coding unit at each level can be divided into prediction units of different sizes according to intra prediction and inter prediction. For example, as shown in FIG. 1-A and FIG. 1-B, FIG. 1-A illustrates a prediction unit division manner corresponding to intra prediction, and FIG. 1-B illustrates several prediction unit division manners corresponding to inter prediction.
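The quadtree division described above can be sketched as follows; should_split stands in for the encoder's per-block mode decision, which is not specified here.

```python
def quadtree_split(x, y, size, min_size, should_split):
    """Recursively divide a CU into four equal sub-CUs, quadtree style.

    Returns the leaf CUs as (x, y, size) tuples.
    """
    if size <= min_size or not should_split(x, y, size):
        return [(x, y, size)]
    half = size // 2
    leaves = []
    for dy in (0, half):
        for dx in (0, half):
            leaves += quadtree_split(x + dx, y + dy, half, min_size, should_split)
    return leaves

# Example decision: split any CU larger than 32x32, keep the rest whole.
leaves = quadtree_split(0, 0, 64, 8, lambda x, y, s: s > 32)
```

Here the 64×64 unit splits once into four 32×32 leaves; a real encoder would let should_split vary per region, producing a mix of 32×32, 16×16 and 8×8 leaves down to the minimum size.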
  • the solution of the present application is mainly for an intra prediction scenario.
  • Several possible adjacent reference blocks of the current block are shown in FIG. 1-C.
  • the reference block A is the left adjacent reference block of the current block, the reference block B is the upper-left adjacent reference block of the current block, and the reference block C is the upper adjacent reference block of the current block.
  • the image prediction method may include: performing intra prediction on a current block by using a reference block to obtain an initial predicted pixel value of a pixel point in the current block; and using a weighting filter formula to calculate a pixel point in the current block The initial predicted pixel value is subjected to a weighting filtering process to obtain a predicted pixel value of the pixel point in the current block.
• Formula 1 is a possible weighting filter formula, where ">>" denotes the right-shift operator.
• Here, c top represents the weighting coefficient corresponding to the reconstructed pixel values of the upper adjacent reference block of the current block, c left represents the weighting coefficient corresponding to the reconstructed pixel values of the left adjacent reference block of the current block, and c topleft represents the weighting coefficient corresponding to the reconstructed pixel value of the upper-left adjacent reference block of the current block.
• x represents the abscissa, and y the ordinate, of a pixel point in the current block relative to the upper-left vertex of the current block; the coordinates of the upper-left vertex of the current block are, for example, [0, 0].
• p′′[x, y] represents the predicted pixel value of the pixel with coordinates [x, y] in the current block, and p′[x, y] represents the initial predicted pixel value of that pixel. r[x, -1] represents the reconstructed pixel value of the pixel with coordinates [x, -1] in the upper adjacent reference block of the current block, r[-1, -1] represents the reconstructed pixel value of the pixel with coordinates [-1, -1] in the upper-left adjacent reference block of the current block, and r[-1, y] represents the reconstructed pixel value of the pixel with coordinates [-1, y] in the left adjacent reference block of the current block.
• The weighting coefficients used in the weighting filtering process include horizontal weighting coefficients and vertical weighting coefficients: the horizontal weighting coefficients include c top, and the vertical weighting coefficients include c left and c topleft.
• d denotes the attenuation speed factor; in Formulas 1 and 2, the attenuation speed factor acting on the horizontal weighting coefficient is equal to the attenuation speed factor acting on the vertical weighting coefficients.
• The weighting filter formula needs to reflect the correlation between adjacent pixels. The correlation is related to the distance between the two points: as the distance increases, the correlation decays in an approximately exponential manner.
• The terms ">>[y/d]" and ">>[x/d]" in Formula 1 reflect this attenuation characteristic.
• For example, the pixel at coordinates (0, 0) is adjacent to the pixel at coordinates (0, -1); according to Formula 1, c top then undergoes no reduction, so that r(0, -1) has a large influence on the prediction result of the pixel at coordinates (0, 0).
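The attenuation behaviour just described can be illustrated with a small sketch. The exact Formula 1 is not reproduced in this text, so the normalization below (the initial prediction absorbs the weight not given to the reference pixels, with a right shift of 6), the exponent used for c topleft, and the coefficient values c_top = 26, c_left = 26, c_topleft = 6 are assumptions for illustration, not the application's normative values:

```python
def weighted_filter(p_init, r_top, r_left, r_topleft,
                    c_top=26, c_left=26, c_topleft=6, d=1, shift=6):
    # p_init[y][x]: initial intra-predicted values of the current block
    # r_top[x]    : reconstructed pixels at coordinates [x, -1]
    # r_left[y]   : reconstructed pixels at coordinates [-1, y]
    # r_topleft   : reconstructed pixel at coordinates [-1, -1]
    h, w = len(p_init), len(p_init[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ct = c_top >> (y // d)    # ">>[y/d]" of Formula 1
            cl = c_left >> (x // d)   # ">>[x/d]" of Formula 1
            ctl = c_topleft >> ((x // d) + (y // d))  # assumed exponent
            norm = 1 << shift
            acc = (ct * r_top[x] + cl * r_left[y] - ctl * r_topleft
                   + (norm - ct - cl + ctl) * p_init[y][x] + (norm >> 1))
            out[y][x] = acc >> shift
    return out
```

At (0, 0) the coefficients undergo no reduction, so r[0, -1] and r[-1, 0] weigh heavily; several rows or columns away, the shifted coefficients reach 0 and the filtered value falls back to the initial prediction.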
• A quadtree plus binary tree (English: quadtree plus binary tree, abbreviation: QTBT) partitioning method has been proposed.
• Partitions under the QTBT method include the possibility of non-square CUs with unequal length and width; the CU side length ranges from 4 to 128.
• In this case, PDPC (position dependent intra prediction combination) further adjusts the value of the attenuation speed factor d.
• However, the values of d corresponding to the length and to the width are the same; that is, d is only weakly correlated with the length and width of the CU. For CUs of 8x32 and 32x8 (aspect ratio 1:4 or 4:1), the value of d is 1, so the degree of coefficient attenuation is taken to be the same in both the length and width directions; for CUs of 4x64 and 64x4 (aspect ratio 1:8 or 8:1), the value of d is 2, so the coefficient attenuation speed is likewise taken to be the same in both directions.
• Since the attenuation speed factor acting on the horizontal weighting coefficient is equal to the attenuation speed factor acting on the vertical weighting coefficient, in some cases the attenuation speed represented by the weighting filter formula deviates greatly from the actual attenuation speed, which reduces the accuracy of image prediction and may affect system codec performance.
  • FIG. 2 is a schematic flowchart of an image prediction method according to an embodiment of the present disclosure.
• The image prediction method may be applied to a video encoding process or a video decoding process, and may include, but is not limited to, the following steps:
  • the intra prediction is, for example, directional intra prediction, DC coefficient intra prediction, interpolated intra prediction, or other intra prediction methods.
• In some embodiments, performing intra prediction on the current block by using the reference block to obtain the initial predicted pixel values of the pixels in the current block includes: performing filtering processing on the reconstructed pixel values of the reference pixels in the reference block to obtain filtered pixel values of the reference pixels in the reference block; and performing intra prediction on the current block using the filtered pixel values of the reference pixels in the reference block to obtain the initial predicted pixel values of the pixels in the current block.
• In other embodiments, performing intra prediction on the current block by using the reference block to obtain the initial predicted pixel values of the pixels in the current block includes: performing intra prediction on the current block by using the reconstructed pixel values of the reference pixels in the reference block to obtain the initial predicted pixel values of the pixels in the current block.
  • the weighting coefficient used in the weighting filtering process includes a horizontal weighting coefficient and a vertical weighting coefficient, and an attenuation speed factor acting on the horizontal weighting coefficient is different from an attenuation speed factor acting on the vertical weighting coefficient.
• In a case where the difference between the horizontal direction texture characteristic and the vertical direction texture characteristic of the current block exceeds a difference threshold, the attenuation speed factor acting on the horizontal weighting coefficient is different from the attenuation speed factor acting on the vertical weighting coefficient; in a case where the difference does not exceed the difference threshold, the attenuation speed factor acting on the horizontal weighting coefficient is the same as the attenuation speed factor acting on the vertical weighting coefficient.
• Specifically, the difference between the horizontal direction texture characteristic and the vertical direction texture characteristic of the current block may be determined first, and then the attenuation speed factor acting on the horizontal weighting coefficient and the attenuation speed factor acting on the vertical weighting coefficient may be determined based on that difference.
• Alternatively, the attenuation speed factor acting on the horizontal weighting coefficient may be regarded as preset, in which case the attenuation speed factor acting on the vertical weighting coefficient is determined correspondingly, so there is no need to perform the step of determining the difference between the horizontal direction texture characteristic and the vertical direction texture characteristic of the current block and then determining, based on that difference, the attenuation speed factor acting on the horizontal weighting coefficient and the attenuation speed factor acting on the vertical weighting coefficient.
• If the difference between the horizontal direction texture characteristic and the vertical direction texture characteristic of the current block exceeds the difference threshold, the difference between the two texture characteristics is large and should be taken into account; if the difference does not exceed the difference threshold, the difference between the horizontal direction texture characteristic and the vertical direction texture characteristic of the current block is small and may be allowed to be ignored.
  • an attenuation speed factor acting on the horizontal weighting coefficient and an attenuation speed factor acting on the vertical weighting coefficient may be determined based on a difference between the horizontal direction texture characteristic and the vertical direction texture characteristic of the current block.
• The texture characteristics may include, for example, side length, self-variance, and/or edge sharpness. Accordingly, the horizontal direction texture characteristics may include horizontal side length (length), horizontal direction self-variance, and/or horizontal direction edge sharpness; the vertical direction texture characteristics may include vertical side length (width), vertical direction self-variance, and/or vertical direction edge sharpness.
• Taking the case where the texture characteristic is a side length as an example: if the difference between the length and the width of the current block exceeds the difference threshold and the length of the current block is greater than the width, the attenuation speed factor acting on the horizontal weighting coefficient is greater than the attenuation speed factor acting on the vertical weighting coefficient; if the difference between the length and the width of the current block exceeds the difference threshold and the length of the current block is less than the width, the attenuation speed factor acting on the horizontal weighting coefficient is smaller than the attenuation speed factor acting on the vertical weighting coefficient.
• In a case where the difference between the horizontal direction self-variance and the vertical direction self-variance of the current block exceeds the difference threshold and the horizontal direction self-variance of the current block is greater than the vertical direction self-variance, the attenuation speed factor acting on the horizontal weighting coefficient is greater than the attenuation speed factor acting on the vertical weighting coefficient; in a case where the difference exceeds the difference threshold and the horizontal direction self-variance of the current block is smaller than the vertical direction self-variance, the attenuation speed factor acting on the horizontal weighting coefficient is smaller than the attenuation speed factor acting on the vertical weighting coefficient.
• Taking the case where the texture characteristic is edge sharpness as an example: in a case where the difference between the horizontal direction edge sharpness and the vertical direction edge sharpness of the current block exceeds the difference threshold and the horizontal direction edge sharpness of the current block is greater than the vertical direction edge sharpness, the attenuation speed factor acting on the horizontal weighting coefficient is greater than the attenuation speed factor acting on the vertical weighting coefficient; in a case where the difference exceeds the difference threshold and the horizontal direction edge sharpness of the current block is smaller than the vertical direction edge sharpness, the attenuation speed factor acting on the horizontal weighting coefficient is smaller than the attenuation speed factor acting on the vertical weighting coefficient.
  • the attenuation speed factor reflects the attenuation rate of the applied weighting coefficient to some extent. If the attenuation speed factors are different, then the attenuation speed of the applied weighting coefficients can be made different. Whether the attenuation speed factor acting on the horizontal weighting coefficient is the same as the attenuation speed factor acting on the vertical weighting coefficient depends mainly on the difference between the horizontal direction texture characteristic and the vertical direction texture characteristic of the current block.
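Taking the side-length case above, the following is a minimal sketch of how d 1 (acting on the horizontal weighting coefficient) and d 2 (acting on the vertical weighting coefficients) might be derived; the aspect-ratio threshold of 4 is an illustrative assumption, not a normative value of this application:

```python
def decay_factors(width, height, ratio_thresh=4):
    # Wide blocks: let the horizontal coefficient decay more slowly (larger d1);
    # tall blocks: let the vertical coefficients decay more slowly (larger d2).
    if width >= ratio_thresh * height:
        return 2, 1
    if height >= ratio_thresh * width:
        return 1, 2
    return 1, 1  # near-square: equal decay speed in both directions
```

This reproduces the later example: a 32x8 block gets d 1 = 2 and d 2 = 1, an 8x32 block the reverse, and a 32x16 block equal factors.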
• In some embodiments, performing weighting filtering processing on the initial predicted pixel values of the pixel points in the current block to obtain the predicted pixel values of the pixel points in the current block may include: performing, based on a weighting filter formula, weighting filtering processing on the initial predicted pixel values of the pixel points in the current block to obtain the predicted pixel values of the pixel points in the current block.
• Formula 3 is one such weighting filter formula, where ">>" denotes the right-shift operator.
• Here, c top belongs to the horizontal weighting coefficients, and c left and c topleft belong to the vertical weighting coefficients; c top represents the weighting coefficient corresponding to the reconstructed pixel values of the upper adjacent reference block of the current block, c left represents the weighting coefficient corresponding to the reconstructed pixel values of the left adjacent reference block of the current block, and c topleft represents the weighting coefficient corresponding to the reconstructed pixel value of the upper-left adjacent reference block of the current block.
  • x represents the abscissa of the pixel point in the current block relative to the upper left vertex of the current block
  • y represents the ordinate of the pixel point in the current block relative to the upper left vertex of the current block.
• d 1 is the attenuation speed factor acting on the horizontal weighting coefficient, d 2 is the attenuation speed factor acting on the vertical weighting coefficients, and d 1 and d 2 are real numbers.
• p′′[x, y] represents the predicted pixel value of the pixel with coordinates [x, y] in the current block, and p′[x, y] represents the initial predicted pixel value of that pixel. r[x, -1] represents the reconstructed pixel value of the pixel with coordinates [x, -1] in the upper adjacent reference block of the current block, r[-1, -1] represents the reconstructed pixel value of the pixel with coordinates [-1, -1] in the upper-left adjacent reference block of the current block, and r[-1, y] represents the reconstructed pixel value of the pixel with coordinates [-1, y] in the left adjacent reference block of the current block.
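A Formula-3-style sketch follows, differing from the single-factor sketch only in applying d 1 to c top and d 2 to c left and c topleft. As before, the normalization, the default coefficient values, and the exponent used for c topleft are illustrative assumptions, since the full formula is not reproduced in this text:

```python
def weighted_filter_d1_d2(p_init, r_top, r_left, r_topleft,
                          c_top=26, c_left=26, c_topleft=6,
                          d1=1, d2=1, shift=6):
    h, w = len(p_init), len(p_init[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ct = c_top >> (y // d1)   # horizontal coefficient, decay factor d1
            cl = c_left >> (x // d2)  # vertical coefficient, decay factor d2
            ctl = c_topleft >> ((y // d1) + (x // d2))  # assumed exponent
            norm = 1 << shift
            acc = (ct * r_top[x] + cl * r_left[y] - ctl * r_topleft
                   + (norm - ct - cl + ctl) * p_init[y][x] + (norm >> 1))
            out[y][x] = acc >> shift
    return out
```

With d1 = 2 and d2 = 1, c top halves only every second row while c left halves every column, so the top reference row stays influential across the whole short vertical extent of a wide block.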
• The attenuation speed factor acting on the horizontal weighting coefficient and the attenuation speed factor acting on the vertical weighting coefficient mainly depend on the difference between the horizontal direction texture characteristic and the vertical direction texture characteristic of the current block, and the specific correspondence between them can vary. In a case where the difference between the horizontal direction texture characteristic and the vertical direction texture characteristic of the current block exceeds a difference threshold, d 1 is not equal to d 2; in a case where the difference does not exceed the difference threshold, d 1 is equal to d 2.
• The thresh2 is a real number greater than or equal to 16; for example, the thresh2 is equal to 16, 17, 32, 64, 128 or other values.
• The thresh3 is a real number greater than or equal to 16; for example, the thresh3 is equal to 16, 18, 32, 64, 128 or other values.
• For example, for a current block of size 32x8 or 64x4, the degree of attenuation of the vertical weighting coefficients (c left and c topleft) is twice that of the horizontal weighting coefficient (c top), while for a current block of size 8x32 or 4x64 the relationship is reversed.
  • thresh1 is equal to 2.5, 4, 6, 8, 16, 32 or other values.
  • thresh4 is a real number greater than or equal to 64, for example, thresh4 is equal to 64, 65, 80, 96, 128 or other values.
• Taking thresh4 equal to 64 as an example, in the case where the current block size is 32x8 or 64x4, d 1 is 2 and d 2 is 1.
• Thus, for a current block of size 32x8 or 64x4, the degree of attenuation of the vertical weighting coefficients (c left and c topleft) is twice that of the horizontal weighting coefficient (c top), while for a current block of size 32x16 or 64x32, c top, c left and c topleft have the same degree of attenuation. It can be seen that in this example scenario, the attenuation speed of the coefficients can change adaptively with the aspect ratio, shape, and the like of the image block.
  • FIG. 1-F and FIG. 1-G are just some possible example implementations, and may of course not be limited to such example implementations in practical applications.
• In this embodiment, the weighting coefficients used for weighting filtering of the initial predicted pixel values of the pixel points in the current block include horizontal weighting coefficients and vertical weighting coefficients, and differentiated attenuation speed factors are set for the vertical weighting coefficients and the horizontal weighting coefficients; that is, the attenuation speed factor acting on the horizontal weighting coefficient may be different from the attenuation speed factor acting on the vertical weighting coefficient. This facilitates differentiated control of the attenuation speeds of the vertical weighting coefficients and the horizontal weighting coefficients, which in turn helps to meet the needs of scenes in which the vertical weighting coefficients and the horizontal weighting coefficients need to be attenuated at different speeds.
• The flexibility of controlling the attenuation speeds of the vertical weighting coefficients and the horizontal weighting coefficients is thus enhanced, which is beneficial to improving image prediction accuracy. For example, in the case where the difference between the horizontal direction texture characteristic and the vertical direction texture characteristic of the current block exceeds the difference threshold, this makes the difference in weighting-coefficient attenuation speed better match the difference in texture characteristics, thereby helping to improve image prediction accuracy.
  • FIG. 3 is a schematic flowchart of a video encoding method according to another embodiment of the present disclosure.
  • the video encoding method may include, but is not limited to, the following steps:
  • the video encoding apparatus performs a filtering process on the reconstructed pixel value of the reference pixel in the reference block to obtain a filtered pixel value of the reference pixel in the reference block.
  • the video encoding apparatus performs intra prediction on the current block by using the filtered pixel value of the reference pixel in the reference block to obtain an initial predicted pixel value of the pixel in the current block.
  • the intra prediction is, for example, directional intra prediction, DC coefficient intra prediction, interpolated intra prediction, or other intra prediction methods.
  • the video encoding apparatus determines an attenuation speed factor acting on the horizontal weighting coefficient and an attenuation speed factor acting on the vertical weighting coefficient based on a difference between the horizontal direction texture characteristic and the vertical direction texture characteristic of the current block.
• Specifically, the difference between the horizontal direction texture characteristic and the vertical direction texture characteristic of the current block may be determined first, and then the attenuation speed factor acting on the horizontal weighting coefficient and the attenuation speed factor acting on the vertical weighting coefficient may be determined based on that difference.
• Alternatively, the attenuation speed factor acting on the horizontal weighting coefficient may be regarded as preset, in which case the attenuation speed factor acting on the vertical weighting coefficient is determined correspondingly, so there is no need to perform the step of determining the difference between the horizontal direction texture characteristic and the vertical direction texture characteristic of the current block and then determining, based on that difference, the attenuation speed factor acting on the horizontal weighting coefficient and the attenuation speed factor acting on the vertical weighting coefficient.
• In a case where the difference between the horizontal direction texture characteristic and the vertical direction texture characteristic of the current block exceeds the difference threshold, the attenuation speed factor acting on the horizontal weighting coefficient is different from the attenuation speed factor acting on the vertical weighting coefficient; in a case where the difference does not exceed the difference threshold, the attenuation speed factor acting on the horizontal weighting coefficient is the same as the attenuation speed factor acting on the vertical weighting coefficient.
• The texture characteristics may include, for example, side length, self-variance, and/or edge sharpness. Accordingly, the horizontal direction texture characteristics may include horizontal side length (length), horizontal direction self-variance, and/or horizontal direction edge sharpness; the vertical direction texture characteristics may include vertical side length (width), vertical direction self-variance, and/or vertical direction edge sharpness.
• There is no fixed order between steps 301-302 and step 303; the execution of step 303 may precede, follow, or be synchronized with the execution of steps 301-302.
  • the video encoding apparatus performs weighting filtering processing on the initial predicted pixel value of the pixel point in the current block to obtain a predicted pixel value of the pixel point in the current block.
  • the weighting coefficient used in the weighting filtering process includes the horizontal weighting coefficient and the vertical weighting coefficient.
  • the weighting filtering formula used in performing weighting filtering processing on the initial predicted pixel values of the pixel points in the current block in step 304 may be Equation 3.
• For the specific manner in which the video encoding device determines the attenuation speed factor acting on the horizontal weighting coefficient and the attenuation speed factor acting on the vertical weighting coefficient, refer to the related description of the embodiment corresponding to FIG. 2; details are not repeated herein.
  • the video encoding apparatus obtains a prediction residual of the current block based on the predicted pixel value of the pixel point in the current block.
  • the video encoding device can write the prediction residual of the current block to the video code stream.
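The encoder-side flow of steps 301-305 can be tied together in one self-contained sketch. The [1 2 1]/4 reference smoothing, the DC intra prediction, the weighting-filter normalization and the coefficient values are all simplified stand-ins for illustration, not this application's normative procedures:

```python
def encode_block_sketch(orig, r_top, r_left, r_topleft, d1, d2):
    h, w = len(orig), len(orig[0])

    def smooth(seq):  # step 301: [1 2 1]/4 filtering of the reference pixels
        return [(seq[max(i - 1, 0)] + 2 * seq[i]
                 + seq[min(i + 1, len(seq) - 1)] + 2) >> 2
                for i in range(len(seq))]

    ft, fl = smooth(r_top), smooth(r_left)

    # step 302: DC intra prediction from the filtered reference pixels
    dc = (sum(ft) + sum(fl) + (w + h) // 2) // (w + h)
    p_init = [[dc] * w for _ in range(h)]

    # steps 303-304: weighting filtering with separate decay factors d1, d2
    c_top, c_left, c_topleft, shift = 26, 26, 6, 6
    pred = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ct = c_top >> (y // d1)
            cl = c_left >> (x // d2)
            ctl = c_topleft >> ((y // d1) + (x // d2))
            acc = (ct * ft[x] + cl * fl[y] - ctl * r_topleft
                   + ((1 << shift) - ct - cl + ctl) * p_init[y][x] + 32)
            pred[y][x] = acc >> shift

    # step 305: the prediction residual, which the encoder writes to the stream
    resid = [[orig[y][x] - pred[y][x] for x in range(w)] for y in range(h)]
    return pred, resid
```

For a flat block whose reference pixels match the block, the residual is zero, as expected of any consistent predictor.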
• In this embodiment, the weighting coefficients used for weighting filtering of the initial predicted pixel values of the pixel points in the current block include horizontal weighting coefficients and vertical weighting coefficients, and differentiated attenuation speed factors are set for the vertical weighting coefficients and the horizontal weighting coefficients; that is, the attenuation speed factor acting on the horizontal weighting coefficient may be different from the attenuation speed factor acting on the vertical weighting coefficient. This facilitates differentiated control of the attenuation speeds of the vertical weighting coefficients and the horizontal weighting coefficients, which in turn helps to meet the needs of scenes in which the vertical weighting coefficients and the horizontal weighting coefficients need to be attenuated at different speeds, and the flexibility of controlling the attenuation speeds of the vertical weighting coefficients and the horizontal weighting coefficients is enhanced.
  • FIG. 4 is a schematic flowchart of a video decoding method according to another embodiment of the present disclosure.
  • the video decoding method may include, but is not limited to, the following steps:
  • the video decoding apparatus performs a filtering process on the reconstructed pixel value of the reference pixel in the reference block to obtain a filtered pixel value of the reference pixel in the reference block.
  • the video decoding apparatus performs intra prediction on the current block by using the filtered pixel value of the reference pixel in the reference block to obtain an initial predicted pixel value of the pixel in the current block.
  • the intra prediction is, for example, directional intra prediction, DC coefficient intra prediction, interpolated intra prediction, or other intra prediction methods.
  • the video decoding device determines an attenuation speed factor acting on the horizontal weighting coefficient and an attenuation speed factor acting on the vertical weighting coefficient based on a difference between the horizontal direction texture characteristic and the vertical direction texture characteristic of the current block.
• In a case where the difference between the horizontal direction texture characteristic and the vertical direction texture characteristic of the current block exceeds the difference threshold, the attenuation speed factor acting on the horizontal weighting coefficient is different from the attenuation speed factor acting on the vertical weighting coefficient; in a case where the difference does not exceed the difference threshold, the attenuation speed factor acting on the horizontal weighting coefficient is the same as the attenuation speed factor acting on the vertical weighting coefficient.
• The texture characteristics may include, for example, side length, self-variance, and/or edge sharpness. Accordingly, the horizontal direction texture characteristics may include horizontal side length (length), horizontal direction self-variance, and/or horizontal direction edge sharpness; the vertical direction texture characteristics may include vertical side length (width), vertical direction self-variance, and/or vertical direction edge sharpness.
• There is no fixed order between steps 401-402 and step 403; the execution of step 403 may precede, follow, or be synchronized with the execution of steps 401-402.
  • the video decoding apparatus performs weighting filtering processing on initial predicted pixel values of the pixel points in the current block to obtain predicted pixel values of the pixel points in the current block.
  • the weighting coefficient used in the weighting filtering process includes the horizontal weighting coefficient and the vertical weighting coefficient.
  • the weighting filtering formula used for performing the weighting filtering process on the initial predicted pixel values of the pixel points in the current block in step 404 may be Equation 3.
• For the specific manner in which the video decoding device determines the attenuation speed factor acting on the horizontal weighting coefficient and the attenuation speed factor acting on the vertical weighting coefficient, refer to the related description of the foregoing embodiments; details are not repeated herein.
  • the video decoding apparatus reconstructs the current block based on a predicted pixel value of a pixel point in the current block and a prediction residual.
  • the video decoding device can obtain the prediction residual of the pixel points in the current block from the video code stream.
• In this embodiment, the weighting coefficients used for weighting filtering of the initial predicted pixel values of the pixel points in the current block include horizontal weighting coefficients and vertical weighting coefficients, and differentiated attenuation speed factors are set for the vertical weighting coefficients and the horizontal weighting coefficients; that is, the attenuation speed factor acting on the horizontal weighting coefficient may be different from the attenuation speed factor acting on the vertical weighting coefficient. This facilitates differentiated control of the attenuation speeds of the vertical weighting coefficients and the horizontal weighting coefficients, which in turn helps to meet the needs of scenes in which the vertical weighting coefficients and the horizontal weighting coefficients need to be attenuated at different speeds, and the flexibility of controlling the attenuation speeds of the vertical weighting coefficients and the horizontal weighting coefficients is enhanced.
• FIG. 5-A exemplifies the decay process of c top for an image block of size 8x32 when Formula 1 is used; c top has attenuated to 0 by the pixels of the seventh row.
• FIG. 5-B exemplifies the decay process of c top for an image block of size 8x16 when Formula 1 is used; c top is likewise attenuated to 0 by the pixels of the seventh row, so the c top decay rate is basically the same as for the image block of size 8x32.
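This row-by-row decay can be checked numerically; the initial value of 33 for c top and d = 1 are illustrative assumptions:

```python
c_top = 33  # illustrative initial coefficient value
decay = [c_top >> y for y in range(8)]  # ">>[y/d]" with d = 1: halved per row
# decay == [33, 16, 8, 4, 2, 1, 0, 0]: the coefficient reaches 0 at the
# seventh row (y == 6) regardless of the block width, as in FIG. 5-A/5-B.
```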
• Based on the prediction direction, an array corresponding to the prediction direction may be selected from a set of 35 arrays, and one of the 6 numbers included in the selected array is taken as the value of c top; likewise, an array corresponding to the prediction direction may be selected from another set of 35 arrays, and one of the 6 numbers included in that array is taken as the value of c left.
• A separate set of 35 arrays is provided for each width level of the image block: for example, if the width of the image block is 16, 32 or 64, the value of c top and the value of c left are each selected, in the same manner, from the 35 arrays corresponding to that width.
• c top and c left can also take other empirical values.
• c topleft can be obtained based on c top and c left, or can take another empirical value.
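The table lookup described above can be sketched as follows. The application's 35 arrays per width level are not reproduced in this text, so COEFF_TABLES below holds placeholder values, and the rule for picking one of an array's 6 numbers is represented by an externally supplied index, which is an assumption:

```python
# Placeholder tables: one list of 35 arrays (one per intra prediction
# direction) per block-width level; each array contains 6 numbers.
COEFF_TABLES = {
    16: [[26, 20, 14, 8, 4, 2]] * 35,
    32: [[33, 26, 20, 14, 8, 4]] * 35,
}

def select_coefficient(width, pred_direction, entry_index):
    # Select the array for this prediction direction from the 35 arrays
    # associated with the block width, then one of its 6 numbers.
    return COEFF_TABLES[width][pred_direction][entry_index]
```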
  • an image prediction apparatus 600 including:
  • the prediction unit 610 is configured to perform intra prediction on the current block by using the reference block to obtain an initial predicted pixel value of the pixel in the current block.
  • the intra prediction is, for example, directional intra prediction, DC coefficient intra prediction, or interpolated intra prediction.
  • the filtering unit 620 is configured to perform weighting filtering processing on the initial predicted pixel values of the pixel points in the current block to obtain predicted pixel values of the pixel points in the current block, where the weighting coefficients used in the weighting filtering process include The horizontal weighting coefficient and the vertical weighting coefficient, the attenuation speed factor acting on the horizontal weighting coefficient is different from the attenuation speed factor acting on the vertical weighting coefficient.
• In a case where the difference between the horizontal direction texture characteristic and the vertical direction texture characteristic of the current block exceeds a difference threshold, the attenuation speed factor acting on the horizontal weighting coefficient is different from the attenuation speed factor acting on the vertical weighting coefficient; in a case where the difference does not exceed the difference threshold, the attenuation speed factor acting on the horizontal weighting coefficient is the same as the attenuation speed factor acting on the vertical weighting coefficient.
  • the texture characteristics may include, for example, side length, variance, and/or edge sharpness. Accordingly, the horizontal direction texture characteristics may include the horizontal side length (length), horizontal direction variance, and/or horizontal direction edge sharpness; the vertical direction texture characteristics may include the vertical side length (width), vertical direction variance, and/or vertical direction edge sharpness.
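A minimal sketch of comparing the horizontal and vertical texture characteristics of a block, here using per-direction variance. The threshold name thresh1 follows the text; the particular decision rule shown (absolute difference of variances against the threshold) is an illustrative assumption, not the patent's exact criterion.

```python
def directional_variances(block):
    """block: 2-D list of pixel values, indexed [row][col].
    Returns (horizontal, vertical) variance: variance along rows
    and along columns, averaged over the block."""
    h, w = len(block), len(block[0])

    def var(vals):
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals) / len(vals)

    horiz = sum(var(row) for row in block) / h
    vert = sum(var([block[r][c] for r in range(h)]) for c in range(w)) / w
    return horiz, vert

def decay_factors_differ(block, thresh1):
    """Use different attenuation speed factors for the horizontal and
    vertical weighting coefficients when the directional texture
    characteristics differ by more than the threshold (assumed rule)."""
    horiz, vert = directional_variances(block)
    return abs(horiz - vert) > thresh1
```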
  • the prediction unit 610 is specifically configured to: perform filtering processing on the reconstructed pixel values of the reference pixel points in the reference block to obtain filtered pixel values of the reference pixel points in the reference block, and perform intra prediction on the current block by using the filtered pixel values of the reference pixel points in the reference block to obtain the initial predicted pixel values of the pixel points in the current block; or perform intra prediction on the current block by using the reconstructed pixel values of the reference pixel points in the reference block to obtain the initial predicted pixel values of the pixel points in the current block.
  • the filtering unit is configured to perform weighted filtering on an initial predicted pixel value of a pixel point in the current block to obtain a predicted pixel value of a pixel point in the current block, according to a weighting filtering formula.
  • c_top is a horizontal weighting coefficient;
  • c_left and c_topleft are vertical weighting coefficients;
  • c_top represents the weighting coefficient corresponding to the reconstructed pixel values of the upper adjacent reference block of the current block;
  • c_left represents the weighting coefficient corresponding to the reconstructed pixel values of the left adjacent reference block of the current block;
  • c_topleft represents the weighting coefficient corresponding to the reconstructed pixel values of the upper-left adjacent reference block of the current block;
  • x represents the abscissa of a pixel point in the current block relative to the upper-left vertex of the current block;
  • y represents the ordinate of a pixel point in the current block relative to the upper-left vertex of the current block;
  • d_2 is the attenuation speed factor acting on the horizontal weighting coefficient;
  • d_1 is the attenuation speed factor acting on the vertical weighting coefficient;
  • p″[x,y] represents the predicted pixel value of the pixel with coordinates [x,y] in the current block;
  • p′[x,y] represents the initial predicted pixel value of the pixel with coordinates [x,y] in the current block;
  • r[x,-1] represents the reconstructed pixel value of the pixel with coordinates [x,-1] in the upper adjacent reference block of the current block;
  • r[-1,-1] represents the reconstructed pixel value of the pixel with coordinates [-1,-1] in the upper-left adjacent reference block of the current block;
  • r[-1,y] represents the reconstructed pixel value of the pixel with coordinates [-1,y] in the left adjacent reference block of the current block.
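The weighting filtering step using the quantities defined above can be sketched as follows. The patent's exact weighting filtering formula is given as an equation image and is not reproduced in this text, so the combination below (weights that decay with distance from the reference samples via right shifts controlled by d1 and d2, normalized so all weights sum to a power of two) is an illustrative assumption in the spirit of position-dependent prediction combination, not the patent's definitive formula.

```python
def weighted_filter(pred, top, left, topleft, c_top, c_left, c_topleft,
                    d1, d2, shift=6):
    """pred: 2-D list of initial predicted values p'[x,y], indexed [y][x].
    top[x] = r[x,-1], left[y] = r[-1,y], topleft = r[-1,-1].
    d2 acts on the horizontal weighting coefficient c_top; d1 acts on
    the vertical weighting coefficients c_left and c_topleft.
    The decay form and normalization are assumptions."""
    norm = 1 << shift
    h, w = len(pred), len(pred[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # weights decay as the pixel moves away from the references
            w_top = c_top >> min((y << 1) // d2, shift)
            w_left = c_left >> min((x << 1) // d1, shift)
            w_tl = c_topleft >> min(((x + y) << 1) // (d1 + d2), shift)
            # remaining weight goes to the initial prediction, so the
            # four weights always total `norm`
            w_cur = norm - w_top - w_left + w_tl
            out[y][x] = (w_top * top[x] + w_left * left[y]
                         - w_tl * topleft + w_cur * pred[y][x]
                         + (norm >> 1)) >> shift
    return out
```

When the initial prediction and all reference samples share one value, the filter leaves that value unchanged, which is a basic sanity check for any such weighting scheme.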
  • the texture characteristic includes a side length
  • the difference threshold includes a threshold thresh1.
  • thresh4 is a real number greater than or equal to 64.
  • the image prediction device 600 can be applied to a video encoding device or a video decoding device, which can be any device that needs to output or store video, such as a laptop computer, a tablet computer, a personal computer, a mobile phone, a video server, or other devices.
  • the functions of the image prediction apparatus 600 in this embodiment may be specifically implemented based on the solutions of the foregoing method embodiments; for parts not described here, refer to the foregoing embodiments.
  • an embodiment of the present application provides an image prediction apparatus 700, including: a processor 710 and a memory 720 coupled to each other.
  • the memory 720 is used to store instructions and data, and the processor 710 is configured to execute the instructions.
  • the processor 710 is used, for example, to perform some or all of the steps of the methods in the above method embodiments.
  • the processor 710 is also referred to as a central processing unit (Central Processing Unit, CPU).
  • the components of the image prediction device in a particular application are coupled together, for example, via a bus system.
  • the bus system may include a power bus, a control bus, a status signal bus, and the like in addition to the data bus.
  • various buses are labeled as bus system 730 in the figure.
  • the method disclosed in the foregoing embodiment of the present application may be applied to the processor 710 or implemented by the processor 710.
  • the processor 710 may be an integrated circuit chip with signal processing capability. In some implementations, the steps of the above methods may be performed by an integrated logic circuit of hardware in the processor 710 or by instructions in the form of software.
  • the processor 710 can be a general purpose processor, a digital signal processor, an application specific integrated circuit, an off-the-shelf programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the processor 710 can implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application.
  • the general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
  • the steps of the method disclosed in the embodiments of the present application may be directly implemented as a hardware decoding processor, or may be performed by a combination of hardware and software modules in the decoding processor.
  • the software modules may be located in a storage medium well established in the art, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or registers.
  • the storage medium is located in memory 720.
  • processor 710 can read the information in memory 720 and, in conjunction with its hardware, perform the steps of the above method.
  • the processor 710 may be configured to: perform intra prediction on a current block by using a reference block to obtain initial predicted pixel values of the pixel points in the current block; and perform weighting filtering on the initial predicted pixel values of the pixel points in the current block to obtain predicted pixel values of the pixel points in the current block, where the weighting coefficients used in the weighting filtering process include a horizontal weighting coefficient and a vertical weighting coefficient, and the attenuation speed factor acting on the horizontal weighting coefficient may be different from the attenuation speed factor acting on the vertical weighting coefficient.
  • in some embodiments, the attenuation speed factor acting on the horizontal weighting coefficient is different from the attenuation speed factor acting on the vertical weighting coefficient.
  • in other embodiments, the attenuation speed factor acting on the horizontal weighting coefficient is the same as the attenuation speed factor acting on the vertical weighting coefficient.
  • the image prediction device 700 can be applied to a video encoding device or a video decoding device, which can be any device that needs to output or store video, such as a laptop computer, a tablet computer, a personal computer, a mobile phone, a video server, or other devices.
  • the functions of the image prediction apparatus 700 in this embodiment may be specifically implemented based on the solutions of the foregoing method embodiments; for parts not described here, refer to the foregoing embodiments.
  • FIG. 8 is a schematic block diagram of a video encoder 20 that can be used in an embodiment of the present application, including an encoding end prediction module 201, a transform quantization module 202, an entropy encoding module 203, a code reconstruction module 204, and an encoding end filtering module 205.
  • FIG. 9 is a schematic block diagram of a video decoder 30 that can be utilized in the embodiments of the present application, including a decoding end prediction module 206, an inverse transform inverse quantization module 207, an entropy decoding module 208, a decoding reconstruction module 209, and a decoding filtering module 210.
  • Video encoder 20 may be used to implement an image prediction method or a video encoding method of an embodiment of the present application.
  • Video decoder 30 may be used to implement the image prediction method or video decoding method of embodiments of the present application. Specifically:
  • Video encoder 20 may partition a CU into one or more prediction units (PUs) that are not further partitioned. Each PU of a CU may be associated with a different block of pixels within the pixel block of the CU. Video encoder 20 may generate a predictive pixel block for each PU of the CU, using intra prediction or inter prediction. If video encoder 20 uses intra prediction to generate the predictive pixel block of a PU, video encoder 20 may generate the predictive pixel block of the PU based on decoded pixels of the picture associated with the PU.
  • if video encoder 20 uses inter prediction to generate the predictive pixel block of a PU, video encoder 20 may generate the predictive pixel block of the PU based on decoded pixels of one or more pictures different from the picture associated with the PU. Video encoder 20 may generate residual pixel blocks of the CU based on the predictive pixel blocks of the PUs of the CU. The residual pixel block of the CU may indicate the difference between the sample values in the predictive pixel blocks of the PUs of the CU and the corresponding sample values in the initial pixel block of the CU.
  • Transform quantization module 202 is operative to process the predicted residual data.
  • Video encoder 20 may perform recursive quadtree partitioning on the residual pixel blocks of the CU to partition the residual pixel blocks of the CU into one or more smaller residual pixel blocks associated with the transform units (TUs) of the CU. Because the pixels in the pixel block associated with the TU each correspond to one luma sample and two chroma samples, each TU can be associated with one luma residual sample block and two chroma residual sample blocks.
  • Video encoder 20 may apply one or more transforms to the residual sample block associated with the TU to generate a coefficient block (ie, a block of coefficients).
  • the transform can be a DCT transform or a variant thereof.
  • the two-dimensional transform that yields the coefficient block is computed by applying a one-dimensional transform in each of the horizontal and vertical directions.
  • Video encoder 20 may perform a quantization procedure for each of the coefficients in the coefficient block. Quantization generally refers to the process by which the coefficients are quantized to reduce the amount of data used to represent the coefficients, thereby providing further compression.
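The separable transform and the quantization procedure described above can be sketched as follows: a one-dimensional DCT applied first along the rows (horizontal pass) and then along the columns (vertical pass), followed by uniform scalar quantization. The 4x4 size and the quantization step are illustrative choices, not values taken from the patent.

```python
import math

def dct_1d(v):
    """Orthonormal DCT-II of a 1-D sequence."""
    n = len(v)
    out = []
    for k in range(n):
        s = sum(v[i] * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                for i in range(n))
        scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        out.append(scale * s)
    return out

def dct_2d(block):
    """Two-dimensional transform via two separable 1-D passes."""
    rows = [dct_1d(row) for row in block]             # horizontal pass
    cols = [dct_1d(list(col)) for col in zip(*rows)]  # vertical pass
    return [list(row) for row in zip(*cols)]          # transpose back

def quantize(coeffs, qstep):
    # uniform quantization: fewer bits to represent the coefficients
    return [[round(c / qstep) for c in row] for row in coeffs]
```

For a constant 4x4 block, all energy lands in the DC coefficient, which quantization then represents with a single small level.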
  • the inverse transform inverse quantization module 207 performs the inverse of the transform quantization module 202.
  • Video encoder 20 may generate a set of syntax elements that represent coefficients in the quantized coefficient block.
  • Video encoder 20 may apply an entropy encoding operation (eg, a context adaptive binary arithmetic coding (CABAC) operation) to some or all of the above syntax elements by entropy encoding module 203.
  • video encoder 20 may binarize a syntax element to form a binary sequence that includes one or more bits (referred to as "bins").
  • Video encoder 20 may encode some of the bins using regular (context-coded) encoding, and may use bypass encoding to encode the other bins.
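The binarization step above can be sketched with a simple unary-plus-fixed-length scheme: the value is mapped to a prefix of bins (which a CABAC engine would typically code in regular mode) and an optional suffix (typically coded in bypass mode). The particular scheme and split point below are assumptions for illustration, not the codec's actual binarization.

```python
def binarize(value, prefix_max=3, suffix_bits=2):
    """Map a non-negative syntax element value to
    (regular-coded bins, bypass-coded bins)."""
    prefix = min(value, prefix_max)
    bins = [1] * prefix
    if prefix < prefix_max:
        bins.append(0)              # terminating zero of the unary code
        return bins, []             # small value: no suffix needed
    rem = value - prefix_max
    suffix = [(rem >> b) & 1 for b in reversed(range(suffix_bits))]
    return bins, suffix
```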
  • video encoder 20 may apply inverse quantization and inverse transform to the transformed coefficient block by code reconstruction module 204 to reconstruct the residual sample block from the transformed coefficient block.
  • Video encoder 20 may add the reconstructed residual sample block to a corresponding sample block of one or more predictive sample blocks to produce a reconstructed sample block.
  • video encoder 20 may reconstruct the block of pixels associated with the TU. The pixel block of each TU of the CU is reconstructed in this way until the entire pixel block reconstruction of the CU is completed.
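The reconstruction path above (inverse quantization, then adding the reconstructed residual samples to the predictive samples) can be sketched as follows. The inverse transform is omitted here for brevity, and the 8-bit clipping range is an illustrative assumption.

```python
def dequantize(levels, qstep):
    """Inverse of uniform quantization: scale levels back up."""
    return [[lv * qstep for lv in row] for row in levels]

def reconstruct(pred, residual, bitdepth=8):
    """Add residual samples to predictive samples and clip to
    the valid sample range for the given bit depth."""
    lo, hi = 0, (1 << bitdepth) - 1
    return [[max(lo, min(hi, p + r))
             for p, r in zip(prow, rrow)]
            for prow, rrow in zip(pred, residual)]
```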
  • After video encoder 20 reconstructs the block of pixels of the CU, video encoder 20 performs a deblocking filtering operation through encoding end filtering module 205 to reduce the blockiness of the block of pixels associated with the CU. After performing the deblocking filtering operation, video encoder 20 may use sample adaptive offset (SAO) to modify the reconstructed blocks of pixels of the CTBs of the picture. After performing these operations, video encoder 20 may store the reconstructed block of pixels of the CU in a decoded picture buffer for use in generating predictive pixel blocks for other CUs.
  • Video decoder 30 can receive the code stream.
  • the code stream contains encoded information of video data encoded by video encoder 20 in the form of a bitstream.
  • Video decoder 30 parses the code stream by entropy decoding module 208 to extract syntax elements from the code stream.
  • video decoder 30 may perform regular decoding on some bins and bypass decoding on the other bins; the bins in the code stream have a mapping relationship with the syntax elements, so the syntax elements are obtained by parsing the bins.
  • the process of reconstructing video data based on syntax elements is generally reciprocal to the process performed by video encoder 20 to generate syntax elements.
  • video decoder 30 may generate a predictive pixel block of a PU of a CU based on syntax elements associated with the CU.
  • video decoder 30 may inverse quantize the coefficient blocks associated with the TUs of the CU.
  • Video decoder 30 may perform an inverse transform on the inverse quantized coefficient block to reconstruct a residual pixel block associated with the TU of the CU.
  • Video decoder 30 may reconstruct a block of pixels of the CU based on the predictive pixel block and the residual pixel block.
  • After video decoder 30 reconstructs the block of pixels of the CU, video decoder 30 performs a deblocking filtering operation through decoding filtering module 210 to reduce the blockiness of the block of pixels associated with the CU. Additionally, video decoder 30 may perform the same SAO operations as video encoder 20 based on one or more SAO syntax elements. After performing these operations, video decoder 30 may store the block of pixels of the CU in a decoded picture buffer. The decoded picture buffer can provide reference pictures for subsequent motion compensation, intra prediction, and presentation by the display device.
  • the embodiment of the present application further provides a computer readable storage medium, where the computer readable storage medium stores program code, and the program code includes instructions for performing some or all of the steps of the methods in the foregoing method embodiments.
  • the embodiment of the present application further provides a computer program product, wherein when the computer program product is run on a computer, the computer is caused to perform some or all of the steps of the method in the above method embodiments.
  • An embodiment of the present application further provides an application publishing platform, where the application publishing platform is configured to release a computer program product, and when the computer program product runs on a computer, the computer is caused to perform some or all of the steps of the methods in the foregoing method embodiments.
  • FIGS. 10 and 11 are two schematic block diagrams of an electronic device 50 that can incorporate a codec that can be utilized in embodiments of the present application.
  • FIG. 11 is a schematic diagram of an apparatus for video encoding in accordance with an embodiment of the present application. The units in FIGS. 10 and 11 are explained below.
  • the electronic device 50 can be, for example, a mobile terminal or user equipment of a wireless communication system. It should be understood that embodiments of the present application may be implemented in any electronic device or device that may require encoding and decoding, or encoding, or decoding of a video image.
  • Device 50 can include a housing 30 for incorporating and protecting the device.
  • Device 50 may also include display 32 in the form of a liquid crystal display.
  • the display may be any suitable display technology suitable for displaying images or video.
  • Device 50 may also include a keypad 34.
  • any suitable data or user interface mechanism can be utilized.
  • the user interface can be implemented as a virtual keyboard or data entry system as part of a touch sensitive display.
  • the device may include a microphone 36 or any suitable audio input, which may be a digital or analog signal input.
  • the device 50 may also include an audio output device, which in the embodiment of the present application may be any of the following: an earphone 38, a speaker, or an analog audio or digital audio output connection.
  • Device 50 may also include battery 40, and in other embodiments of the present application, the device may be powered by any suitable mobile energy device, such as a solar cell, fuel cell, or clock mechanism generator.
  • the device may also include an infrared port 42 for short-range line of sight communication with other devices.
  • device 50 may also include any suitable short range communication solution, such as a Bluetooth wireless connection or a USB/Firewire wired connection.
  • Device 50 may include a controller 56 or processor for controlling device 50.
  • the controller 56 can be coupled to a memory 58, which in embodiments of the present application can store image data and audio data, and/or can also store instructions for execution on the controller 56.
  • Controller 56 may also be coupled to codec circuitry 54 suitable for implementing encoding and decoding of audio and/or video data or assisted encoding and decoding by controller 56.
  • the apparatus 50 may also include a card reader 48 and a smart card 46, such as a UICC and a UICC reader, for providing user information and for providing authentication information for authenticating and authorizing users on the network.
  • Apparatus 50 may also include a radio interface circuit 52 coupled to the controller and adapted to generate, for example, a wireless communication signal for communicating with a cellular communication network, a wireless communication system, or a wireless local area network. Apparatus 50 may also include an antenna 44 coupled to radio interface circuitry 52 for transmitting radio frequency signals generated at radio interface circuitry 52 to other apparatus(s) and for receiving radio frequency signals from other apparatus(s).
  • device 50 includes a camera capable of recording or detecting a single frame, and codec 54 or controller receives these single frames and processes them.
  • the device may receive video image data to be processed from another device prior to transmission and/or storage.
  • device 50 may receive images for encoding/decoding via a wireless or wired connection.
  • the solution of the embodiment of the present application can be applied to various electronic devices.
  • an example of the application of the embodiment of the present application to a television device and a mobile phone device is given below.
  • FIG. 12 is a schematic structural diagram of an embodiment of the present application applied to a television application.
  • the television device 900 includes an antenna 901, a tuner 902, a demultiplexer 903, a decoder 904, a video signal processor 905, a display unit 906, an audio signal processor 907, a speaker 908, an external interface 909, a controller 910, a user interface 911, a bus 912, and the like.
  • the tuner 902 extracts a signal of a desired channel from a broadcast signal received via the antenna 901, and demodulates the extracted signal.
  • the tuner 902 then outputs the encoded bit stream obtained by the demodulation to the demultiplexer 903. That is to say, the tuner 902 functions as a transmitting device in the television device 900 that receives the encoded stream of the encoded image.
  • the demultiplexer 903 separates the video stream and the audio stream of the program to be viewed from the encoded bit stream, and outputs the separated stream to the decoder 904.
  • the demultiplexer 903 also extracts auxiliary data, such as an electronic program guide, from the encoded bit stream, and provides the extracted data to the controller 910. If the encoded bit stream is scrambled, the demultiplexer 903 can descramble the encoded bit stream.
  • the decoder 904 decodes the video stream and the audio stream input from the demultiplexer 903. The decoder 904 then outputs the video data generated by the decoding to the video signal processor 905. The decoder 904 also outputs the audio data generated by the decoding to the audio signal processor 907.
  • the video signal processor 905 reproduces the video data input from the decoder 904, and displays the video data on the display unit 906.
  • the video signal processor 905 can also display an application screen provided via the network on the display unit 906. Additionally, video signal processor 905 can perform additional processing, such as noise removal, on the video data in accordance with the settings.
  • Video signal processor 905 can also generate an image of a GUI (Graphical User Interface) and overlay the resulting image on the output image.
  • the display unit 906 is driven by a drive signal supplied from the video signal processor 905, and displays a video or image on a video screen of a display device such as a liquid crystal display, a plasma display, or an OELD (organic electroluminescence display).
  • the audio signal processor 907 performs reproduction processing, for example, digital-to-analog conversion and amplification, and the like on the audio data input from the decoder 904, and outputs the audio through the speaker 908.
  • the audio signal processor 907 can perform additional processing on the audio data, such as noise removal and the like.
  • the external interface 909 is an interface for connecting the television device 900 with an external device or network.
  • a video stream or audio stream received via external interface 909 may be decoded by decoder 904. That is, the external interface 909 is also used as a transmitting device in the television device 900 that receives the encoded stream of the encoded image.
  • Controller 910 includes a processor and a memory.
  • the memory stores programs to be executed by the processor, program data, auxiliary data, data acquired via the network, and the like. For example, when the television device 900 is booted, the program stored in the memory is read and executed by the processor.
  • the processor controls the operation of the television device 900 based on control signals input from the user interface 911.
  • the user interface 911 is connected to the controller 910.
  • the user interface 911 includes buttons and switches for the user to operate the television device 900 and a receiving unit for receiving remote control signals.
  • the user interface 911 detects an operation performed by the user via these components, generates a control signal, and outputs the generated control signal to the controller 910.
  • the bus 912 connects the tuner 902, the demultiplexer 903, the decoder 904, the video signal processor 905, the audio signal processor 907, the external interface 909, and the controller 910 to each other.
  • the decoder 904 can have the functions of the video decoding device or the image predicting device according to the above embodiment.
  • the decoder 904 can be configured to perform intra prediction on the current block by using a reference block to obtain an initial predicted pixel value of the pixel in the current block, and perform weighting filtering on the initial predicted pixel value of the pixel in the current block to Obtaining a predicted pixel value of the pixel point in the current block, the weighting coefficient used in the weighting filtering process includes a horizontal weighting coefficient and a vertical weighting coefficient, and an attenuation speed factor acting on the horizontal weighting coefficient is different from acting on the vertical The decay rate factor of the weighting factor.
  • FIG. 13 is a schematic structural diagram of an embodiment of the present application applied to a mobile phone application.
  • the mobile telephone device 920 may include an antenna 921, a communication unit 922, an audio codec 923, a speaker 924, a microphone 925, a camera unit 926, an image processor 927, a demultiplexer 928, a recording/reproducing unit 929, a display unit 930, a controller 931, an operation unit 932, a bus 933, and the like.
  • the antenna 921 is connected to the communication unit 922.
  • the speaker 924 and the microphone 925 are connected to the audio codec 923.
  • the operation unit 932 is connected to the controller 931.
  • the bus 933 connects the communication unit 922, the audio codec 923, the camera unit 926, the image processor 927, the demultiplexer 928, the recording/reproducing unit 929, the display unit 930, and the controller 931 to each other.
  • the mobile telephone device 920 performs operations in various operational modes, such as transmission/reception of audio signals, transmission/reception of electronic mail and image data, photographing of images, and recording of data; the operational modes include a voice call mode, a data communication mode, an imaging mode, and a video telephony mode.
  • In the voice call mode, the analog audio signal generated by the microphone 925 is supplied to the audio codec 923.
  • the audio codec 923 converts the analog audio signal into audio data, performs analog-to-digital conversion on the converted audio data, and compresses the audio data.
  • the audio codec 923 then outputs the audio data obtained as a result of the compression to the communication unit 922.
  • the communication unit 922 encodes and modulates the audio data to generate a signal to be transmitted.
  • the communication unit 922 then transmits the generated signal to be transmitted to the base station via the antenna 921.
  • the communication unit 922 also amplifies the radio signal received via the antenna 921 and performs frequency conversion on the radio signal received via the antenna 921 to obtain a received signal.
  • the communication unit 922 then demodulates and decodes the received signal to generate audio data, and outputs the generated audio data to the audio codec 923.
  • the audio codec 923 decompresses the audio data and performs digital to analog conversion on the audio data to generate an analog audio signal.
  • the audio codec 923 then supplies the generated audio signal to the speaker 924 to output audio from the speaker 924.
  • In the data communication mode, for example, the controller 931 generates text data to be included in an email in accordance with an operation by the user via the operation unit 932.
  • the controller 931 also displays text on the display unit 930.
  • the controller 931 also generates email data in response to an instruction from the user for transmission via the operation unit 932, and outputs the generated email data to the communication unit 922.
  • the communication unit 922 encodes and modulates the email data to generate a signal to be transmitted.
  • the communication unit 922 then transmits the generated signal to be transmitted to the base station via the antenna 921.
  • the communication unit 922 also amplifies the radio signal received via the antenna 921 and performs frequency conversion on the radio signal received via the antenna 921 to obtain a received signal.
  • the communication unit 922 then demodulates and decodes the received signal to recover the email data, and outputs the restored email data to the controller 931.
  • the controller 931 displays the content of the email on the display unit 930, and stores the email data in the storage medium of the recording/reproducing unit 929.
  • the recording/reproducing unit 929 includes a readable/writable storage medium.
  • the storage medium may be an internal storage medium, or may be an externally mounted storage medium such as a hard disk, a magnetic disk, a magneto-optical disk, a USB (Universal Serial Bus) memory, or a memory card.
  • In the imaging mode, the camera unit 926 images an object to generate image data, and outputs the generated image data to the image processor 927.
  • the image processor 927 encodes the image data input from the camera unit 926, and stores the encoded stream in the storage medium of the recording/reproducing unit 929.
  • In the video telephony mode, the demultiplexer 928 multiplexes the video stream encoded by the image processor 927 and the audio stream input from the audio codec 923, and outputs the multiplexed stream to the communication unit 922.
  • Communication unit 922 encodes and modulates the multiplexed stream to produce a signal to be transmitted.
  • the communication unit 922 then transmits the generated signal to be transmitted to the base station via the antenna 921.
  • the communication unit 922 also amplifies the radio signal received via the antenna 921 and performs frequency conversion on the radio signal received via the antenna 921 to obtain a received signal.
  • the signal to be transmitted and the received signal may comprise an encoded bit stream.
  • the communication unit 922 then demodulates and decodes the received signal to recover the stream, and outputs the recovered stream to the demultiplexer 928.
  • the demultiplexer 928 separates the video stream and the audio stream from the input stream, outputs the video stream to the image processor 927, and outputs the audio stream to the audio codec 923.
  • Image processor 927 decodes the video stream to produce video data.
  • the video data is supplied to the display unit 930, and a series of images are displayed by the display unit 930.
  • the audio codec 923 decompresses the audio stream and performs digital to analog conversion on the audio stream to produce an analog audio signal.
  • the audio codec 923 then supplies the generated audio signal to the speaker 924 to output audio from the speaker 924.
  • the image processor 927 has the functions of the video encoding device (video encoder, image predicting device) and/or the video decoding device (video decoder) according to the above embodiments.
  • the image processor 927 can be configured to perform intra prediction on a current block by using a reference block to obtain initial predicted pixel values of pixels in the current block, and perform weighted filtering on the initial predicted pixel values of the pixels in the current block to obtain predicted pixel values of the pixels in the current block, where the weighting coefficients used in the weighted filtering include a horizontal weighting coefficient and a vertical weighting coefficient, and the attenuation speed factor acting on the horizontal weighting coefficient is different from the attenuation speed factor acting on the vertical weighting coefficient.
  • "system" and "network" are used interchangeably herein. It should be understood that the term "and/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate three cases: A alone, both A and B, and B alone. In addition, the character "/" herein generally indicates an "or" relationship between the objects before and after it.
  • "B corresponding to A" means that B is associated with A, and B can be determined from A.
  • however, determining B from A does not mean that B is determined based only on A; B can also be determined based on A and/or other information.
  • the computer program product includes one or more computer instructions.
  • the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable device.
  • the computer instructions can be stored in a computer-readable storage medium or transferred from one computer-readable storage medium to another; for example, the computer instructions can be transferred from a website, computer, server, or data center to another website, computer, server, or data center in a wired manner (for example, coaxial cable, optical fiber, or digital subscriber line) or a wireless manner (for example, infrared, radio, or microwave).
  • the computer readable storage medium can be any available media that can be accessed by a computer or a data storage device such as a server, data center, or the like that includes one or more available media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, an optical disc), a semiconductor medium (for example, a solid-state drive), or the like.
  • the descriptions of the various embodiments have different emphases; for details not described in a certain embodiment, refer to the related descriptions of other embodiments.
  • the disclosed device may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division of the above units is only a logical function division; in actual implementation, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
  • the mutual coupling, direct coupling, or communication connection shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connection between devices or units may be in electrical or other forms.
  • the units described above as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • the above integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-accessible memory.
  • based on this understanding, the technical solution of this application in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory.
  • the software product includes a number of instructions to cause a computer device (which may be a personal computer, a server, a network device, or the like, and in particular a processor in a computer device) to perform all or part of the steps of the above methods of the various embodiments of this application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

An image prediction method and related products. An image prediction method includes: performing intra prediction on a current block by using a reference block to obtain initial predicted pixel values of pixels in the current block; and performing weighted filtering on the initial predicted pixel values of the pixels in the current block to obtain predicted pixel values of the pixels in the current block. The weighting coefficients used in the weighted filtering include a horizontal weighting coefficient and a vertical weighting coefficient, and the attenuation speed factor acting on the horizontal weighting coefficient is different from the attenuation speed factor acting on the vertical weighting coefficient. The solutions of the embodiments of this application help improve image prediction accuracy.

Description

Image Prediction Method and Related Products
This application claims priority to Chinese Patent Application No. 201710300302.4, filed with the Chinese Patent Office on April 28, 2017 and entitled "Image Prediction Method and Related Products", which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the field of video image processing, and in particular to image prediction methods and related products.
Background
With the development of photoelectric acquisition technologies and the growing demand for high-definition digital video, the amount of video data keeps increasing. Limited, heterogeneous transmission bandwidth and diversified video applications constantly impose higher requirements on video coding efficiency, which prompted the launch of the High Efficiency Video Coding (HEVC) standard. The basic principle of video compression coding is to exploit correlations in the spatial domain, the temporal domain, and among codewords to remove redundancy as much as possible. A currently popular approach is the block-based hybrid video coding framework, which achieves video compression through steps such as prediction (including intra prediction and inter prediction), transform, quantization, and entropy coding. This coding framework has shown strong vitality, and HEVC still adopts it.
In the HEVC standard, an image is generally divided into multiple square coding units (CUs) for coding. In most cases the texture characteristics of a CU in the horizontal direction and in the vertical direction are roughly the same, and conventional image prediction methods can then achieve relatively good prediction accuracy. Tests show, however, that in some cases where the horizontal and vertical texture characteristics of a CU differ considerably, conventional image prediction methods sometimes struggle to achieve good prediction accuracy.
Summary
Embodiments of this application provide an image prediction method and related products.
According to a first aspect, an embodiment of this application provides an image prediction method, including: performing intra prediction on a current block by using a reference block to obtain initial predicted pixel values of pixels in the current block; and performing weighted filtering on the initial predicted pixel values of the pixels in the current block to obtain predicted pixel values of the pixels in the current block, where the weighting coefficients used in the weighted filtering include a horizontal weighting coefficient and a vertical weighting coefficient, and the attenuation speed factor acting on the horizontal weighting coefficient is different from the attenuation speed factor acting on the vertical weighting coefficient.
Specifically, for example, when the difference between the horizontal-direction texture characteristic and the vertical-direction texture characteristic of the current block exceeds a difference threshold, the attenuation speed factor acting on the horizontal weighting coefficient is different from the attenuation speed factor acting on the vertical weighting coefficient.
For another example, when the difference between the horizontal-direction texture characteristic and the vertical-direction texture characteristic of the current block does not exceed the difference threshold, the attenuation speed factor acting on the horizontal weighting coefficient is the same as the attenuation speed factor acting on the vertical weighting coefficient.
It can be understood that, if the difference between the horizontal-direction and vertical-direction texture characteristics of the current block cannot be known in advance, that difference may first be determined, and the attenuation speed factor acting on the horizontal weighting coefficient and the attenuation speed factor acting on the vertical weighting coefficient are then determined based on the determined difference.
Of course, if the difference between the horizontal-direction and vertical-direction texture characteristics of the current block is known in advance (for example, suppose the texture characteristic is side length: if the current block is obtained by a specific partition mode, its shape and size are known, determined parameters, and therefore the difference between its length and width is a known, determined parameter), the attenuation speed factors acting on the horizontal and vertical weighting coefficients can also be regarded as correspondingly determined, and the step of "first determining the difference between the horizontal-direction and vertical-direction texture characteristics of the current block, and then determining, based on the determined difference, the attenuation speed factor acting on the horizontal weighting coefficient and the attenuation speed factor acting on the vertical weighting coefficient" does not need to be performed.
That is, when it is already known and determined that "the difference between the horizontal-direction and vertical-direction texture characteristics of the current block exceeds the difference threshold", or that "the difference does not exceed the difference threshold", the conditional judgment "whether the difference between the horizontal-direction and vertical-direction texture characteristics of the current block exceeds the difference threshold" does not need to be performed; instead, the attenuation speed factors acting on the horizontal and vertical weighting coefficients that correspond to the known, determined case can be selected directly.
The intra prediction mentioned in the embodiments of this application is, for example, directional intra prediction, DC intra prediction, interpolation intra prediction, or other intra prediction.
The image prediction methods in the embodiments of this application can be applied to, for example, a video encoding process or a video decoding process.
The reference blocks of the current block may include, for example, the above-adjacent reference block, the left-adjacent reference block, and the above-left-adjacent reference block of the current block, and so on.
The attenuation speed factor may be equal to, for example, 1, 1.5, 1.65, 2, 3, or another value.
The texture characteristic may include, for example, side length, auto-variance, and/or edge sharpness. Accordingly, the horizontal-direction texture characteristic may include the horizontal side length (length), horizontal auto-variance, and/or horizontal edge sharpness; the vertical-direction texture characteristic may include the vertical side length (width), vertical auto-variance, and/or vertical edge sharpness.
It can be understood that, since the parameter types of texture characteristics (for example, side length, auto-variance, and/or edge sharpness) are diverse, the value types of the difference threshold may also be diverse.
It can be understood that the attenuation speed factor reflects, to some extent, the attenuation speed of the weighting coefficient it acts on; different attenuation speed factors make the attenuation speeds of the affected weighting coefficients differ. Whether the attenuation speed factor acting on the horizontal weighting coefficient is the same as that acting on the vertical weighting coefficient mainly depends on the difference between the horizontal-direction and vertical-direction texture characteristics of the current block.
It can be seen that, in the above technical solution, the weighting coefficients used in the weighted filtering of the initial predicted pixel values of the pixels in the current block include a horizontal weighting coefficient and a vertical weighting coefficient. Because different attenuation speed factors can be set for the vertical weighting coefficient and the horizontal weighting coefficient, that is, the attenuation speed factor acting on the horizontal weighting coefficient may be different from the attenuation speed factor acting on the vertical weighting coefficient, the attenuation speeds of the vertical and horizontal weighting coefficients can be controlled separately. This helps meet scenarios in which the vertical and horizontal weighting coefficients need to attenuate at different speeds; the enhanced flexibility in controlling their attenuation speeds helps improve image prediction accuracy. For example, when the difference between the horizontal-direction and vertical-direction texture characteristics of the current block exceeds the difference threshold, this helps make the difference in weighting-coefficient attenuation speeds better match the difference in texture characteristics, thereby helping improve image prediction accuracy.
For example, in some possible implementations, performing intra prediction on the current block by using the reference block to obtain the initial predicted pixel values of the pixels in the current block may include: filtering the reconstructed pixel values of reference pixels in the reference block to obtain filtered pixel values of the reference pixels in the reference block; and performing intra prediction on the current block by using the filtered pixel values of the reference pixels in the reference block to obtain the initial predicted pixel values of the pixels in the current block.
For another example, in some other possible implementations, performing intra prediction on the current block by using the reference block to obtain the initial predicted pixel values of the pixels in the current block may include: performing intra prediction on the current block by using the reconstructed pixel values of the reference pixels in the reference block to obtain the initial predicted pixel values of the pixels in the current block.
The weighted filtering formula used in the weighted filtering may take various forms.
For example, performing weighted filtering on the initial predicted pixel values of the pixels in the current block to obtain the predicted pixel values of the pixels in the current block may include: performing weighted filtering on the initial predicted pixel values of the pixels in the current block based on the following weighted filtering formula to obtain the predicted pixel values of the pixels in the current block:
p″[x,y] = ((c_top >> [y/d2]) · r[x,−1] − (c_topleft >> [x/d1]) · r[−1,−1] + (c_left >> [x/d1]) · r[−1,y] + c_cur · p′[x,y] + 32) >> 6
where c_cur = 64 − (c_top >> [y/d2]) − (c_left >> [x/d1]) + (c_topleft >> [x/d1]),
where c_top is a horizontal weighting coefficient, and c_left and c_topleft are vertical weighting coefficients.
Here, c_top denotes the weighting coefficient corresponding to the reconstructed pixel values of the above-adjacent reference block of the current block, c_left denotes the weighting coefficient corresponding to the reconstructed pixel values of the left-adjacent reference block of the current block, and c_topleft denotes the weighting coefficient corresponding to the reconstructed pixel values of the above-left-adjacent reference block of the current block. x denotes the horizontal coordinate of a pixel in the current block relative to the top-left vertex of the current block, and y denotes its vertical coordinate. d2 is the attenuation speed factor acting on the horizontal weighting coefficient, d1 is the attenuation speed factor acting on the vertical weighting coefficient, and d1 and d2 are real numbers. p″[x,y] denotes the predicted pixel value of the pixel with coordinates [x,y] in the current block, p′[x,y] denotes the initial predicted pixel value of that pixel, r[x,−1] denotes the reconstructed pixel value of the pixel with coordinates [x,−1] in the above-adjacent reference block of the current block, r[−1,−1] denotes the reconstructed pixel value of the pixel with coordinates [−1,−1] in the above-left-adjacent reference block, and r[−1,y] denotes the reconstructed pixel value of the pixel with coordinates [−1,y] in the left-adjacent reference block.
The values of the attenuation speed factors d1 and d2 may be determined in various ways.
For example, when the texture characteristic includes side length and the difference threshold includes a threshold thresh1: when the length-to-width ratio of the current block is greater than the threshold thresh1, for example, d1 = 1 and d2 = 2; when the width-to-length ratio of the current block is greater than the threshold thresh1, for example, d1 = 2 and d2 = 1, where thresh1 is a real number greater than 2, for example 2.5, 4, 6, 8, 16, 32, or another value.
Further, when the length-to-width ratio of the current block is less than the threshold thresh1, the length of the current block is greater than or equal to its width, and the sum of the length and width of the current block is greater than a threshold thresh4, d1 = d2 = 2. And/or, when the length-to-width ratio of the current block is less than the threshold thresh1, the length of the current block is greater than or equal to its width, and the sum of the length and width of the current block is less than or equal to the threshold thresh4, d1 = d2 = 1.
Here, thresh4 is a real number greater than or equal to 64, for example 64, 65, 80, 96, 128, or another value.
For another example, when the length of the current block is greater than a threshold thresh2, d1 = 2; when the length of the current block is less than or equal to the threshold thresh2, d1 = 1, where thresh2 is a real number greater than or equal to 16, for example 16, 17, 32, 64, 128, or another value.
For another example, when the width of the current block is greater than a threshold thresh3, d2 = 2; when the width of the current block is less than or equal to the threshold thresh3, d2 = 1, where thresh3 is a real number greater than or equal to 16, for example 16, 18, 32, 64, 128, or another value.
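The independent thresh2/thresh3 rules above can be sketched in a few lines. The function name and the choice of thresh2 = thresh3 = 16 are illustrative assumptions, not part of the claimed method:

```python
def select_decay_factors(length, width, thresh2=16, thresh3=16):
    """Pick attenuation speed factors from the block's side lengths.

    d1 follows the block length (horizontal side) per the thresh2 rule;
    d2 follows the block width (vertical side) per the thresh3 rule.
    """
    d1 = 2 if length > thresh2 else 1
    d2 = 2 if width > thresh3 else 1
    return d1, d2

# Block sizes (length x width) from the examples in this document:
for length, width in [(32, 8), (64, 4), (16, 16), (8, 32), (4, 64)]:
    print((length, width), select_decay_factors(length, width))
```

With these thresholds, elongated blocks such as 32x8 and 8x32 get unequal factors (d1 != d2), while a square 16x16 block gets d1 = d2, matching the adaptive behavior described above.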
According to a second aspect, an embodiment of this application provides an image prediction apparatus, including several functional units configured to implement any method of the first aspect. For example, the image prediction apparatus may include a prediction unit and a filtering unit.
The prediction unit is configured to perform intra prediction on a current block by using a reference block to obtain initial predicted pixel values of pixels in the current block.
The filtering unit is configured to perform weighted filtering on the initial predicted pixel values of the pixels in the current block to obtain predicted pixel values of the pixels in the current block, where the weighting coefficients used in the weighted filtering include a horizontal weighting coefficient and a vertical weighting coefficient, and the attenuation speed factor acting on the horizontal weighting coefficient is different from the attenuation speed factor acting on the vertical weighting coefficient.
Specifically, for example, when the difference between the horizontal-direction texture characteristic and the vertical-direction texture characteristic of the current block exceeds a difference threshold, the attenuation speed factor acting on the horizontal weighting coefficient is different from the attenuation speed factor acting on the vertical weighting coefficient.
For another example, when the difference between the horizontal-direction texture characteristic and the vertical-direction texture characteristic of the current block does not exceed the difference threshold, the attenuation speed factor acting on the horizontal weighting coefficient is the same as the attenuation speed factor acting on the vertical weighting coefficient.
The image prediction apparatus is applied to, for example, a video encoding apparatus or a video decoding apparatus.
According to a third aspect, an embodiment of this application provides an image prediction apparatus, including:
a memory and a processor coupled to each other, where the processor is configured to perform some or all of the steps of any method of the first aspect.
According to a fourth aspect, an embodiment of this application provides a computer-readable storage medium storing program code, where the program code includes instructions for performing some or all of the steps of any method of the first aspect.
According to a fifth aspect, an embodiment of this application provides a computer program product that, when run on a computer, causes the computer to perform some or all of the steps of any method of the first aspect.
Brief Description of the Drawings
The following describes the accompanying drawings used in the embodiments of this application or the background.
FIG. 1-A and FIG. 1-B are schematic diagrams of several image block partition modes according to embodiments of this application;
FIG. 1-C is a schematic diagram of the positions of several possible adjacent reference blocks of a current block according to an embodiment of this application;
FIG. 1-D to FIG. 1-G are schematic diagrams of attenuation speed factor values for image blocks of several sizes according to embodiments of this application;
FIG. 2 is a schematic flowchart of an image prediction method according to an embodiment of this application;
FIG. 3 is a schematic flowchart of an image encoding method according to an embodiment of this application;
FIG. 4 is a schematic flowchart of an image decoding method according to an embodiment of this application;
FIG. 5-A to FIG. 5-C are schematic diagrams of several attenuation processes of c_top according to embodiments of this application;
FIG. 6 is a schematic diagram of an image prediction apparatus according to an embodiment of this application;
FIG. 7 is a schematic diagram of another image prediction apparatus according to an embodiment of this application;
FIG. 8 is a schematic diagram of a video encoder according to an embodiment of this application;
FIG. 9 is a schematic diagram of a video decoder according to an embodiment of this application;
FIG. 10 is a schematic block diagram of an electronic apparatus according to an embodiment of this application;
FIG. 11 is another schematic block diagram of an electronic apparatus according to an embodiment of this application;
FIG. 12 is a schematic structural diagram of a television application to which an embodiment of the present invention is applicable;
FIG. 13 is a schematic structural diagram of a mobile phone application to which an embodiment of the present invention is applicable.
Detailed Description
The foregoing descriptions are merely specific implementations of the present invention, but the protection scope of the present invention is not limited thereto. Any variation or replacement readily conceived by a person skilled in the art within the technical scope disclosed in the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
The following describes the embodiments of this application with reference to the accompanying drawings in the embodiments of this application.
The following introduces the image prediction methods and the video encoding and decoding methods provided in the embodiments of this application. The execution body of the image prediction method provided in the embodiments of this application is a video encoding apparatus or a video decoding apparatus, which may be any apparatus that needs to output or store video, such as a notebook computer, a tablet computer, a personal computer, a mobile phone, a video server, a digital television, a digital live broadcast system, a wireless broadcast system, a personal digital assistant (PDA), a laptop or desktop computer, an e-book reader, a digital camera, a digital recording device, a digital media player, a video game device, a video game console, a cellular or satellite radio telephone, a video conferencing device, or a video streaming device. These digital video apparatuses implement video compression technologies, such as those described in the standards defined by MPEG-2, MPEG-4, ITU-T H.263, ITU-T H.264/MPEG-4 Part 10 Advanced Video Coding (AVC), and ITU-T H.265 High Efficiency Video Coding (HEVC) and in the extensions of those standards, so as to transmit and receive digital video information more efficiently. By implementing these video coding and decoding technologies, video apparatuses can transmit, receive, encode, decode, and/or store digital video information more efficiently.
In the field of video coding and decoding, a frame refers to one complete image; frame-by-frame images, organized into a video format at a certain order and frame rate, can then be played. Once the frame rate reaches a certain speed, the interval between two frames falls below the resolution limit of the human eye, producing brief visual persistence so that the images appear to move on the screen. The basis on which a video file can be compressed is the compression coding of single-frame digital images. A digitized image contains much repeatedly represented information, called redundant information. A frame often contains many places with identical or similar spatial structure; for example, the colors of sampling points within the same object or background are mostly closely correlated and similar. In a group of frames, a frame and its preceding or following frame are generally highly correlated, with only small differences in the pixel values of the description information; all of this can be compressed. By the same reasoning, a video file contains not only spatial redundancy but also a large amount of temporal redundancy, which results from the composition of video. For example, the frame rate of video sampling is generally 25 to 30 frames per second, and 60 frames per second may occur in special cases; that is, the sampling interval between two adjacent frames is at most 1/25 to 1/30 of a second. Within such a short time, the sampled images contain a large amount of similar information and are highly correlated. However, in the original digital video recording system, each frame is recorded independently, without considering or using these continuous similarities, resulting in a considerably large amount of repetitive, redundant data. In addition, research has shown that, from the perspective of the psychological characteristic of human visual sensitivity, there is also a part of video information that can be compressed, namely visual redundancy. Visual redundancy refers to appropriately compressing the video bitstream by exploiting the physiological characteristic that the human eye is relatively sensitive to changes in luminance but relatively insensitive to changes in chrominance. In high-brightness regions, the sensitivity of human vision to luminance changes decreases; the eye is more sensitive to the edges of objects and relatively insensitive to interior regions, and is more sensitive to overall structure than to transformations of interior detail. Because the ultimate consumers of video image information are humans, these characteristics of the human eye can be fully exploited to compress the original video image information and achieve a better compression effect. Besides the spatial, temporal, and visual redundancy mentioned above, video image information also contains a series of other redundancies, such as information-entropy redundancy, structural redundancy, knowledge redundancy, and importance redundancy. The purpose of video compression coding is to remove the redundant information from a video sequence by various technical methods, so as to reduce storage space and save transmission bandwidth.
As far as the current state of the art is concerned, video compression technologies mainly include intra prediction, inter prediction, transform and quantization, entropy coding, and deblocking filtering. Within the international scope, the mainstream compression coding approaches in existing video compression coding standards are mainly four: chroma subsampling, predictive coding, transform coding, and quantization coding.
Chroma subsampling: this approach makes full use of the visual psychological characteristics of the human eye and seeks, starting from the underlying data representation, to minimize the amount of data describing individual elements. Most television systems adopt luminance-chrominance-chrominance (YUV) color coding, which is the standard widely adopted by European television systems. The YUV color space includes a luminance signal Y and two color-difference signals U and V, and the three components are independent of one another. The separated YUV representation is more flexible, occupies less transmission bandwidth, and has advantages over the traditional red-green-blue (RGB) color model. For example, the YUV 4:2:0 format indicates that the two chrominance components U and V each have only half as many samples as the luminance component Y in both the horizontal and vertical directions; that is, among 4 sampled pixels there are 4 luminance components Y but only one U and one V chrominance component. Represented this way, the amount of data is further reduced, to only about 33% of the original. Achieving video compression through such chroma subsampling, exploiting the physiological visual characteristics of the human eye, is one of the widely used video data compression approaches today.
Predictive coding: data information of previously coded frames is used to predict the frame currently to be coded. A predicted value is obtained through prediction; it is not exactly equal to the actual value, and a certain residual value exists between them. The more suitable the prediction, the closer the predicted value is to the actual value and the smaller the residual value, so coding the residual value can greatly reduce the amount of data; when decoding at the decoder side, the residual value plus the predicted value is used to restore and reconstruct the initial image. This is the basic idea of predictive coding. In mainstream coding standards, predictive coding is divided into two basic types: intra prediction and inter prediction.
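The predict-then-code-the-residual idea described above can be shown in a minimal round trip; all sample values here are hypothetical:

```python
# Encoder side: only the residual (actual - predicted) needs to be coded.
actual = [104, 102, 101, 99]
predicted = [100, 100, 100, 100]
residual = [a - p for a, p in zip(actual, predicted)]

# Decoder side: predicted value + residual restores the original samples.
reconstructed = [p + r for p, r in zip(predicted, residual)]
assert reconstructed == actual
print(residual)  # small numbers cost fewer bits than the raw samples
```

The better the prediction, the closer the residual values cluster around zero, and the cheaper they are to entropy-code.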
Transform coding: instead of directly coding the original spatial-domain information, the information sample values are converted, according to some form of transform function, from the current domain into another artificially defined domain (usually called the transform domain), and compression coding is then performed according to the distribution characteristics of the information in the transform domain. The reason for transform coding is that video image data usually has strong data correlation in the spatial domain, leading to a large amount of redundant information; direct coding would require a very large number of bits. In the transform domain, the data correlation is greatly reduced, so the redundant information to be coded decreases and the amount of data required for coding drops accordingly, yielding a high compression ratio and a good compression effect. Typical transform coding includes the Karhunen-Loeve (K-L) transform and the Fourier transform. The integer discrete cosine transform (DCT) is the transform coding approach commonly adopted in many international standards.
Quantization coding: the transform coding mentioned above does not itself compress data; the quantization process is the powerful means of data compression and the main reason for the "loss" of data in lossy compression. Quantization is the process of forcibly mapping input values with a large dynamic range to a smaller set of output values. Because the quantization input has a large range, it requires many bits to represent, whereas the "forcibly mapped" output range is smaller and can thus be represented with only a few bits. Each quantization input is normalized to one quantization output, that is, quantized into some order of magnitude, commonly called a quantization level (usually specified by the encoder).
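The forced many-to-few mapping described above can be sketched as uniform scalar quantization; the step size 10 and the sample value 137 are arbitrary illustrative choices:

```python
def quantize(value, step):
    # Map a wide-range input to a quantization level (lossy step).
    return round(value / step)

def dequantize(level, step):
    # Restore an approximation; the rounding error is the "loss".
    return level * step

step = 10
x = 137
level = quantize(x, step)          # the level needs far fewer bits than x
approx = dequantize(level, step)   # 140, not 137: information was lost
print(level, approx)
```

A larger step size means fewer levels and fewer bits, but a larger reconstruction error, which is exactly the rate-distortion trade-off an encoder controls.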
In coding algorithms based on the hybrid coding architecture, the above compression coding approaches are used in combination. The encoder control module selects, according to the local characteristics of different image blocks in a video frame, the coding mode used for each image block. Frequency-domain or spatial-domain prediction is performed on intra-predictive-coded blocks, and motion-compensated prediction is performed on inter-predictive-coded blocks; the prediction residual is then transformed and quantized to form residual coefficients, and finally the final bitstream is generated by the entropy coder. To avoid accumulation of prediction errors, the reference signal for intra or inter prediction is obtained through the decoding module at the encoder side. The transformed and quantized residual coefficients are inverse-quantized and inverse-transformed to reconstruct the residual signal, which is then added to the prediction reference signal to obtain the reconstructed image.
In most coding frameworks, a video sequence includes a series of pictures; a picture is further divided into slices, and a slice is further divided into blocks. Video coding is performed in units of blocks, and coding can proceed row by row, from left to right and top to bottom, starting from the top-left position of a picture. In some new video coding standards, the concept of a block is further extended. The H.264 standard has the macroblock (MB), and an MB can be further divided into multiple prediction blocks (partitions) that can be used for predictive coding.
The High Efficiency Video Coding (HEVC) standard adopts basic concepts such as the coding unit (CU), the prediction unit (PU), and the transform unit (TU), functionally divides multiple kinds of units, and uses a brand-new tree-based structure for description. For example, a CU can be partitioned into smaller CUs according to a quadtree, and the smaller CUs can continue to be partitioned, forming a quadtree structure; PUs and TUs have similar tree structures. CU, PU, and TU all essentially belong to the concept of a block. A CU, similar to a macroblock MB or a coding block, is the basic unit for partitioning and coding an image. A PU, which can correspond to a prediction block, is the basic unit of predictive coding; a CU is further partitioned into multiple PUs according to a partition mode. A TU, which can correspond to a transform block, is the basic unit for transforming the prediction residual. In the HEVC standard, they can be collectively referred to as coding tree blocks (CTBs), and so on.
In the HEVC standard, the size of a coding unit can include several levels such as 64×64, 32×32, 16×16, and 8×8, and a coding unit at each level can be divided into prediction units of different sizes according to intra prediction and inter prediction. For example, as shown in FIG. 1-A and FIG. 1-B, FIG. 1-A illustrates a prediction unit partition mode corresponding to intra prediction, and FIG. 1-B illustrates several prediction unit partition modes corresponding to inter prediction.
The solution of this application is mainly directed at intra prediction scenarios. FIG. 1-C shows several possible adjacent reference blocks of a current block: reference block A is the left-adjacent reference block of the current block, reference block B is the above-left-adjacent reference block of the current block, and reference block C is the above-adjacent reference block of the current block.
In some technical solutions, an image prediction method may include: performing intra prediction on a current block by using a reference block to obtain initial predicted pixel values of pixels in the current block; and performing weighted filtering on the initial predicted pixel values of the pixels in the current block by using the following weighted filtering formula to obtain predicted pixel values of the pixels in the current block.
p″[x,y] = ((c_top >> [y/d]) · r[x,−1] − (c_topleft >> [x/d]) · r[−1,−1] + (c_left >> [x/d]) · r[−1,y] + c_cur · p′[x,y] + 32) >> 6   (Formula 1)
where c_cur = 64 − (c_top >> [y/d]) − (c_left >> [x/d]) + (c_topleft >> [x/d])   (Formula 2)
Formula 1 is one possible weighted filtering formula; ">>" denotes the shift operator.
In Formula 1, c_top denotes the weighting coefficient corresponding to the reconstructed pixel values of the above-adjacent reference block of the current block, c_left denotes the weighting coefficient corresponding to the reconstructed pixel values of the left-adjacent reference block of the current block, and c_topleft denotes the weighting coefficient corresponding to the reconstructed pixel values of the above-left-adjacent reference block of the current block. x denotes the horizontal coordinate of a pixel in the current block relative to the top-left vertex of the current block, and y denotes its vertical coordinate; the coordinates of the top-left vertex of the current block are, for example, [0,0]. p″[x,y] denotes the predicted pixel value of the pixel with coordinates [x,y] in the current block, p′[x,y] denotes the initial predicted pixel value of that pixel, r[x,−1] denotes the reconstructed pixel value of the pixel with coordinates [x,−1] in the above-adjacent reference block of the current block, r[−1,−1] denotes the reconstructed pixel value of the pixel with coordinates [−1,−1] in the above-left-adjacent reference block, and r[−1,y] denotes the reconstructed pixel value of the pixel with coordinates [−1,y] in the left-adjacent reference block.
The weighting coefficients used in the weighted filtering include horizontal weighting coefficients and vertical weighting coefficients: the horizontal weighting coefficient includes c_top, and the vertical weighting coefficients include c_left and c_topleft. Here d denotes the attenuation speed factor; because d acts both on the horizontal weighting coefficient and on the vertical weighting coefficients, in Formula 1 and Formula 2 the attenuation speed factor acting on the horizontal weighting coefficient is equal to the attenuation speed factor acting on the vertical weighting coefficients.
According to the algorithm principle of position dependent intra prediction combination (PDPC), the weighted filtering formula needs to reflect the correlation between adjacent pixels; this correlation decays in an approximately exponential manner as the distance between two points increases. For example, ">>[y/d]" and ">>[x/d]" in Formula 1 reflect this attenuation characteristic. Specifically, for example, the pixels at coordinates (0,0) and (0,−1) are adjacent, so according to Formula 1 c_top is not reduced, and r(0,−1) therefore has a large influence on the prediction result of the pixel at (0,0).
The value of the attenuation speed factor d reflects the attenuation speed. As shown in FIG. 1-D, when the size of the current block is 32×32, d = 2, and the coefficients c_top and c_left of the predicted pixel a at coordinates (4,4) are attenuated by a factor of 4 relative to the top-left vertex; when the size of the current block is 8×8, d = 1, and the coefficients c_top and c_left of the predicted pixel a at coordinates (4,4) are attenuated by a factor of 16 relative to the top-left vertex. Pixel a is near the top-left in a 32×32 CU but lies at the center of an 8×8 CU, where its adjacency to the point (0,0) is relatively lower. From this analysis, the coefficients of CUs with longer sides (for example, side length >= 32) attenuate more slowly than those of CUs with shorter sides.
A recent JVET meeting proposed the quadtree plus binary tree (QTBT) partitioning method. Features of the QTBT method include: non-square CUs with unequal length and width are allowed; and the length of a CU side ranges from 4 to 128.
To adapt to the QTBT method, PDPC further adjusts the value of the attenuation speed factor d.
In one possible implementation, when a CU is partitioned with the QTBT method, the value of the attenuation speed factor d depends on the sum of the CU's length and width: for example, d = 1 when the sum of length and width is less than 64, and d = 2 when the sum is greater than or equal to 64. Research finds that this scheme, in which d depends on the sum of the CU's length and width, also has some problems. For example, CUs with equal length-width sums but different shapes get the same value of d, so d is weakly correlated with CU shape: as shown in FIG. 1-E, the value of d is 1 for CUs of all three sizes and shapes 32×8, 8×32, and 16×16. For another example, for CUs with a large aspect ratio, the values of d corresponding to the length and the width are the same, that is, d is weakly correlated with the CU's length and width. As shown in FIG. 1-E, although the aspect ratios of the 8×32 and 32×8 CUs reach 1:4 or 4:1, because d takes the value 1, the coefficient attenuation in the length and width directions is treated as identical; although the aspect ratios of the 4×64 and 64×4 CUs reach 1:8 or 8:1, because d takes the value 2, the coefficient attenuation speed in the length and width directions is treated as identical.
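The sum-based rule discussed above, and the shape-insensitivity it produces, can be reproduced in a few lines (a sketch under the stated threshold of 64; the function name is an illustrative choice):

```python
def decay_factor_qtbt(length, width):
    # d depends only on the length+width sum: d=1 below 64, else d=2.
    return 1 if length + width < 64 else 2

# Differently shaped CUs can receive the same d:
for size in [(32, 8), (8, 32), (16, 16), (4, 64), (64, 4)]:
    print(size, decay_factor_qtbt(*size))
```

Note that 32x8, 8x32, and 16x16 all map to d = 1 even though their aspect ratios differ greatly, which is precisely the weak correlation between d and CU shape criticized here.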
In the schemes exemplified above, the attenuation speed factor acting on the horizontal weighting coefficient is equal to the attenuation speed factor acting on the vertical weighting coefficients in all cases. As a result, in some cases a large deviation appears between the attenuation speed reflected by the weighted filtering formula and the actual attenuation speed, which reduces the accuracy of image prediction and may in turn considerably affect the coding and decoding performance of the system.
Other technical solutions are explored below.
Referring to FIG. 2, FIG. 2 is a schematic flowchart of an image prediction method according to an embodiment of this application. This image prediction method can be applied to a video encoding process or a video decoding process, and may specifically include but is not limited to the following steps:
210. Perform intra prediction on a current block by using a reference block of the current block to obtain initial predicted pixel values of pixels in the current block.
The intra prediction is, for example, directional intra prediction, DC intra prediction, interpolation intra prediction, or another intra prediction method.
It can be understood that there may be various ways of performing intra prediction on the current block by using the reference block.
Specifically, for example, performing intra prediction on the current block by using the reference block to obtain the initial predicted pixel values of the pixels in the current block includes: filtering the reconstructed pixel values of reference pixels in the reference block to obtain filtered pixel values of the reference pixels in the reference block; and performing intra prediction on the current block by using the filtered pixel values of the reference pixels in the reference block to obtain the initial predicted pixel values of the pixels in the current block.
For another example, performing intra prediction on the current block by using the reference block to obtain the initial predicted pixel values of the pixels in the current block includes: performing intra prediction on the current block by using the reconstructed pixel values of the reference pixels in the reference block to obtain the initial predicted pixel values of the pixels in the current block.
220. Perform weighted filtering on the initial predicted pixel values of the pixels in the current block to obtain predicted pixel values of the pixels in the current block, where the weighting coefficients used in the weighted filtering include a horizontal weighting coefficient and a vertical weighting coefficient, and the attenuation speed factor acting on the horizontal weighting coefficient is different from the attenuation speed factor acting on the vertical weighting coefficient.
Specifically, for example, when the difference between the horizontal-direction texture characteristic and the vertical-direction texture characteristic of the current block exceeds a difference threshold, the attenuation speed factor acting on the horizontal weighting coefficient is different from the attenuation speed factor acting on the vertical weighting coefficient.
For another example, when the difference between the horizontal-direction texture characteristic and the vertical-direction texture characteristic of the current block does not exceed the difference threshold, the attenuation speed factor acting on the horizontal weighting coefficient is the same as the attenuation speed factor acting on the vertical weighting coefficient.
It can be understood that, if the difference between the horizontal-direction and vertical-direction texture characteristics of the current block cannot be known in advance, that difference may first be determined, and the attenuation speed factor acting on the horizontal weighting coefficient and the attenuation speed factor acting on the vertical weighting coefficient are then determined based on the determined difference.
Of course, if the difference between the horizontal-direction and vertical-direction texture characteristics of the current block is known in advance (for example, suppose the texture characteristic is side length: if the current block is obtained by a specific partition mode, its shape and size are known, determined parameters, and therefore the difference between its length and width is a known, determined parameter), the attenuation speed factors acting on the horizontal and vertical weighting coefficients can also be regarded as correspondingly determined, and the step of "first determining the difference between the horizontal-direction and vertical-direction texture characteristics of the current block, and then determining, based on the determined difference, the attenuation speed factor acting on the horizontal weighting coefficient and the attenuation speed factor acting on the vertical weighting coefficient" does not need to be performed.
That is, when it is already known and determined that "the difference between the horizontal-direction and vertical-direction texture characteristics of the current block exceeds the difference threshold", or that "the difference does not exceed the difference threshold", the conditional judgment "whether the difference between the horizontal-direction and vertical-direction texture characteristics of the current block exceeds the difference threshold" does not need to be performed; instead, the attenuation speed factors acting on the horizontal and vertical weighting coefficients that correspond to the known, determined case can be selected directly.
It can be understood that if the difference between the horizontal-direction and vertical-direction texture characteristics of the current block exceeds the difference threshold, the difference between them is relatively large and should be taken into account; if the difference does not exceed the difference threshold, the difference is relatively small and may be allowed to be ignored.
For example, the attenuation speed factor acting on the horizontal weighting coefficient and the attenuation speed factor acting on the vertical weighting coefficient may be determined based on the difference between the horizontal-direction texture characteristic and the vertical-direction texture characteristic of the current block.
The texture characteristic may include, for example, side length, auto-variance, and/or edge sharpness. Accordingly, the horizontal-direction texture characteristic may include the horizontal side length (length), horizontal auto-variance, and/or horizontal edge sharpness; the vertical-direction texture characteristic may include the vertical side length (width), vertical auto-variance, and/or vertical edge sharpness.
For example, when the texture characteristic is side length: when the difference between the length and the width of the current block exceeds the difference threshold and the length of the current block is greater than its width, the attenuation speed factor acting on the horizontal weighting coefficient is greater than the attenuation speed factor acting on the vertical weighting coefficient; when the difference between the length and the width of the current block exceeds the difference threshold and the length of the current block is less than its width, the attenuation speed factor acting on the horizontal weighting coefficient is less than the attenuation speed factor acting on the vertical weighting coefficient.
For another example, when the texture characteristic is auto-variance: when the difference between the horizontal-direction auto-variance and the vertical-direction auto-variance of the current block exceeds the difference threshold and the horizontal-direction auto-variance is greater than the vertical-direction auto-variance, the attenuation speed factor acting on the horizontal weighting coefficient is greater than that acting on the vertical weighting coefficient; when the difference exceeds the difference threshold and the horizontal-direction auto-variance is less than the vertical-direction auto-variance, the attenuation speed factor acting on the horizontal weighting coefficient is less than that acting on the vertical weighting coefficient.
For another example, when the texture characteristic is edge sharpness: when the difference between the horizontal-direction edge sharpness and the vertical-direction edge sharpness of the current block exceeds the difference threshold and the horizontal-direction edge sharpness is greater than the vertical-direction edge sharpness, the attenuation speed factor acting on the horizontal weighting coefficient is greater than that acting on the vertical weighting coefficient; when the difference exceeds the difference threshold and the horizontal-direction edge sharpness is less than the vertical-direction edge sharpness, the attenuation speed factor acting on the horizontal weighting coefficient is less than that acting on the vertical weighting coefficient.
It can be understood that the attenuation speed factor reflects, to some extent, the attenuation speed of the weighting coefficient it acts on; different attenuation speed factors make the attenuation speeds of the affected weighting coefficients differ. Whether the attenuation speed factor acting on the horizontal weighting coefficient is the same as that acting on the vertical weighting coefficient mainly depends on the difference between the horizontal-direction and vertical-direction texture characteristics of the current block.
For example, performing weighted filtering on the initial predicted pixel values of the pixels in the current block to obtain the predicted pixel values of the pixels in the current block may include: performing weighted filtering on the initial predicted pixel values of the pixels in the current block based on the following weighted filtering formula to obtain the predicted pixel values of the pixels in the current block:
p″[x,y] = ((c_top >> [y/d2]) · r[x,−1] − (c_topleft >> [x/d1]) · r[−1,−1] + (c_left >> [x/d1]) · r[−1,y] + c_cur · p′[x,y] + 32) >> 6   (Formula 3)
where c_cur = 64 − (c_top >> [y/d2]) − (c_left >> [x/d1]) + (c_topleft >> [x/d1])   (Formula 4)
Formula 3 is a weighted filtering formula; ">>" denotes the shift operator.
Here, c_top is a horizontal weighting coefficient, and c_left and c_topleft are vertical weighting coefficients.
c_top denotes the weighting coefficient corresponding to the reconstructed pixel values of the above-adjacent reference block of the current block, c_left denotes the weighting coefficient corresponding to the reconstructed pixel values of the left-adjacent reference block of the current block, and c_topleft denotes the weighting coefficient corresponding to the reconstructed pixel values of the above-left-adjacent reference block of the current block. x denotes the horizontal coordinate of a pixel in the current block relative to the top-left vertex of the current block, and y denotes its vertical coordinate.
d1 is the attenuation speed factor acting on the horizontal weighting coefficient, d2 is the attenuation speed factor acting on the vertical weighting coefficient, and d1 and d2 are real numbers. p″[x,y] denotes the predicted pixel value of the pixel with coordinates [x,y] in the current block, p′[x,y] denotes the initial predicted pixel value of that pixel, r[x,−1] denotes the reconstructed pixel value of the pixel with coordinates [x,−1] in the above-adjacent reference block of the current block, r[−1,−1] denotes the reconstructed pixel value of the pixel with coordinates [−1,−1] in the above-left-adjacent reference block, and r[−1,y] denotes the reconstructed pixel value of the pixel with coordinates [−1,y] in the left-adjacent reference block.
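The shift-based coefficient decay of Formula 4 can be sketched as follows. The combining formula itself is reproduced as an image in the published document, so the normalization assumed here (weights summing to 64, a rounding offset of 32, and a final right shift by 6) is an inference from the c_cur definition, and the function name and sample inputs are illustrative:

```python
def weighted_filter(p_init, r_top, r_left, r_topleft,
                    c_top, c_left, c_topleft, d1, d2):
    """Weighted filtering of initial intra predictions (Formula 3/4 sketch).

    p_init[y][x]: initial predicted pixel values of the current block.
    r_top[x]:     reconstructed row above the block, r[x, -1].
    r_left[y]:    reconstructed column left of the block, r[-1, y].
    r_topleft:    reconstructed corner pixel, r[-1, -1].
    d1, d2:       attenuation speed factors (d1 on the x-dependent shifts,
                  d2 on the y-dependent shift, as in Formula 4).
    """
    h, w = len(p_init), len(p_init[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            wt = c_top >> (y // d2)       # decays with vertical distance
            wl = c_left >> (x // d1)      # decays with horizontal distance
            wtl = c_topleft >> (x // d1)
            c_cur = 64 - wt - wl + wtl    # Formula 4: weights sum to 64
            out[y][x] = (wt * r_top[x] - wtl * r_topleft
                         + wl * r_left[y] + c_cur * p_init[y][x] + 32) >> 6
    return out
```

With d1 != d2, the weight of the above reference row and the weight of the left reference column decay at different speeds, which is the behavior step 220 requires when the horizontal and vertical texture characteristics of the block differ.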
It can be understood that the values of the attenuation speed factor acting on the horizontal weighting coefficient and the attenuation speed factor acting on the vertical weighting coefficient mainly depend on the difference between the horizontal-direction texture characteristic and the vertical-direction texture characteristic of the current block, and the specific correspondence between them can take many forms. When the difference between the horizontal-direction and vertical-direction texture characteristics of the current block exceeds the difference threshold, d1 is not equal to d2; when the difference does not exceed the difference threshold, d1 is equal to d2.
The following takes the case where the texture characteristic includes side length as an example.
For example, when the length of the current block is greater than a threshold thresh2, d1 = 2; when the length of the current block is less than or equal to the threshold thresh2, d1 = 1, where thresh2 is a real number greater than or equal to 16, for example 16, 17, 32, 64, 128, or another value.
For another example, when the width of the current block is greater than a threshold thresh3, d2 = 2; when the width of the current block is less than or equal to the threshold thresh3, d2 = 1, where thresh3 is a real number greater than or equal to 16, for example 16, 18, 32, 64, 128, or another value.
Specifically, for example, as shown in FIG. 1-F, taking thresh2 and thresh3 equal to 16 as an example: when the size of the current block is 32×8 or 64×4, d1 = 2 and d2 = 1; when the size of the current block is 16×16, d1 = 1 and d2 = 1; when the size of the current block is 8×32 or 4×64, d1 = 1 and d2 = 2. With this setting, for current blocks of size 32×8 or 64×4, the weighting coefficient c_top attenuates twice as fast as the weighting coefficients c_left and c_topleft; for current blocks of size 8×32 or 4×64, c_top attenuates half as fast as c_left and c_topleft; and for a current block of size 16×16, d1 = d2 = 1, so c_top, c_left, and c_topleft attenuate at the same rate. It can be seen that in this example scenario, the attenuation speed of the coefficients can adapt to the aspect ratio, shape, and so on of the image block.
For another example, when the texture characteristic includes side length and the difference threshold includes the threshold thresh1: when the length-to-width ratio of the current block is greater than the threshold thresh1, for example, d1 = 1 and d2 = 2; when the width-to-length ratio of the current block is greater than the threshold thresh1, for example, d1 = 2 and d2 = 1, where thresh1 is a real number greater than 2, for example 2.5, 4, 6, 8, 16, 32, or another value.
Further, when the length-to-width ratio of the current block is less than the threshold thresh1, the length of the current block is greater than or equal to its width, and the sum of the length and width of the current block is greater than a threshold thresh4, d1 = d2 = 2. And/or, when the length-to-width ratio of the current block is less than the threshold thresh1, the length of the current block is greater than or equal to its width, and the sum of the length and width of the current block is less than or equal to the threshold thresh4, d1 = d2 = 1.
Here, thresh4 is a real number greater than or equal to 64, for example 64, 65, 80, 96, 128, or another value.
Specifically, for example, as shown in FIG. 1-G, taking thresh1 equal to 4 and thresh4 equal to 64 as an example: when the current block size is 32×8 or 64×4, d1 = 2 and d2 = 1; when the current block size is 32×16, d1 = d2 = 1; when the current block size is 64×32, d1 = d2 = 2. With this setting, for current blocks of size 32×8 or 64×4, the weighting coefficient c_top attenuates twice as fast as the weighting coefficients c_left and c_topleft; for current blocks of size 32×16 or 64×32, c_top, c_left, and c_topleft attenuate at the same rate. It can be seen that in this example scenario, the attenuation speed of the coefficients can adapt to the aspect ratio, shape, and so on of the image block.
It can be understood that FIG. 1-F and FIG. 1-G are only some possible example implementations; practical applications are certainly not limited to such examples.
It can be seen that, in the above technical solution, the weighting coefficients used in the weighted filtering of the initial predicted pixel values of the pixels in the current block include a horizontal weighting coefficient and a vertical weighting coefficient. Because different attenuation speed factors can be set for the vertical weighting coefficient and the horizontal weighting coefficient, that is, the attenuation speed factor acting on the horizontal weighting coefficient may be different from the attenuation speed factor acting on the vertical weighting coefficient, the attenuation speeds of the vertical and horizontal weighting coefficients can be controlled separately. This helps meet scenarios in which the vertical and horizontal weighting coefficients need to attenuate at different speeds; the enhanced flexibility in controlling their attenuation speeds helps improve image prediction accuracy. For example, when the difference between the horizontal-direction and vertical-direction texture characteristics of the current block exceeds the difference threshold, this helps make the difference in weighting-coefficient attenuation speeds better match the difference in texture characteristics, thereby helping improve image prediction accuracy.
Referring to FIG. 3, FIG. 3 is a schematic flowchart of a video encoding method according to another embodiment of this application. The video encoding method may specifically include but is not limited to the following steps:
301. The video encoding apparatus filters the reconstructed pixel values of reference pixels in a reference block to obtain filtered pixel values of the reference pixels in the reference block.
302. The video encoding apparatus performs intra prediction on a current block by using the filtered pixel values of the reference pixels in the reference block to obtain initial predicted pixel values of pixels in the current block.
The intra prediction is, for example, directional intra prediction, DC intra prediction, interpolation intra prediction, or another intra prediction method.
303. The video encoding apparatus determines, based on the difference between the horizontal-direction texture characteristic and the vertical-direction texture characteristic of the current block, the attenuation speed factor acting on the horizontal weighting coefficient and the attenuation speed factor acting on the vertical weighting coefficient.
It can be understood that, if the difference between the horizontal-direction and vertical-direction texture characteristics of the current block cannot be known in advance, that difference may first be determined, and the attenuation speed factors acting on the horizontal and vertical weighting coefficients are then determined based on the determined difference.
Of course, if the difference between the horizontal-direction and vertical-direction texture characteristics of the current block is known in advance (for example, suppose the texture characteristic is side length: if the current block is obtained by a specific partition mode, its shape and size are known, determined parameters, and therefore the difference between its length and width is a known, determined parameter), the attenuation speed factors acting on the horizontal and vertical weighting coefficients can also be regarded as correspondingly determined, and the step of "first determining the difference between the horizontal-direction and vertical-direction texture characteristics of the current block, and then determining, based on the determined difference, the attenuation speed factors acting on the horizontal and vertical weighting coefficients" does not need to be performed.
That is, when it is already known and determined that "the difference between the horizontal-direction and vertical-direction texture characteristics of the current block exceeds the difference threshold", or that "the difference does not exceed the difference threshold", the conditional judgment "whether the difference exceeds the difference threshold" does not need to be performed; instead, the corresponding attenuation speed factors acting on the horizontal and vertical weighting coefficients can be selected directly according to the known, determined case.
When the difference between the horizontal-direction and vertical-direction texture characteristics of the current block exceeds the difference threshold, the attenuation speed factor acting on the horizontal weighting coefficient is different from the attenuation speed factor acting on the vertical weighting coefficient; when the difference does not exceed the difference threshold, the attenuation speed factor acting on the horizontal weighting coefficient is the same as the attenuation speed factor acting on the vertical weighting coefficient.
The texture characteristic may include, for example, side length, auto-variance, and/or edge sharpness. Accordingly, the horizontal-direction texture characteristic may include the horizontal side length (length), horizontal auto-variance, and/or horizontal edge sharpness; the vertical-direction texture characteristic may include the vertical side length (width), vertical auto-variance, and/or vertical edge sharpness.
It can be understood that there is no necessary order between steps 301 to 302 and step 303; for example, step 303 may be performed before, after, or simultaneously with steps 301 to 302.
304. The video encoding apparatus performs weighted filtering on the initial predicted pixel values of the pixels in the current block to obtain predicted pixel values of the pixels in the current block, where the weighting coefficients used in the weighted filtering include the horizontal weighting coefficient and the vertical weighting coefficient.
The weighted filtering formula used in step 304 for the weighted filtering of the initial predicted pixel values of the pixels in the current block may be Formula 3.
For the specific way in which the video encoding apparatus determines the attenuation speed factor acting on the horizontal weighting coefficient and the attenuation speed factor acting on the vertical weighting coefficient, refer to the related description of the embodiment corresponding to FIG. 2; details are not repeated here.
305. The video encoding apparatus obtains the prediction residual of the current block based on the predicted pixel values of the pixels in the current block, and may write the prediction residual of the current block into the video bitstream.
It can be seen that, in the above example video encoding solution, the weighting coefficients used in the weighted filtering of the initial predicted pixel values of the pixels in the current block include a horizontal weighting coefficient and a vertical weighting coefficient. Because different attenuation speed factors can be set for the vertical and horizontal weighting coefficients, that is, the attenuation speed factor acting on the horizontal weighting coefficient may be different from that acting on the vertical weighting coefficient, the attenuation speeds of the vertical and horizontal weighting coefficients can be controlled separately. This helps meet scenarios in which the vertical and horizontal weighting coefficients need to attenuate at different speeds; the enhanced flexibility in controlling their attenuation speeds helps improve image prediction accuracy. For example, when the difference between the horizontal-direction and vertical-direction texture characteristics of the current block exceeds the difference threshold, this helps make the difference in weighting-coefficient attenuation speeds better match the difference in texture characteristics, thereby helping improve image prediction accuracy and, in turn, video decoding quality.
Referring to FIG. 4, FIG. 4 is a schematic flowchart of a video decoding method according to another embodiment of this application. The video decoding method may specifically include but is not limited to the following steps:
401. The video decoding apparatus filters the reconstructed pixel values of reference pixels in a reference block to obtain filtered pixel values of the reference pixels in the reference block.
402. The video decoding apparatus performs intra prediction on a current block by using the filtered pixel values of the reference pixels in the reference block to obtain initial predicted pixel values of pixels in the current block.
The intra prediction is, for example, directional intra prediction, DC intra prediction, interpolation intra prediction, or another intra prediction method.
403. The video decoding apparatus determines, based on the difference between the horizontal-direction texture characteristic and the vertical-direction texture characteristic of the current block, the attenuation speed factor acting on the horizontal weighting coefficient and the attenuation speed factor acting on the vertical weighting coefficient.
When the difference between the horizontal-direction and vertical-direction texture characteristics of the current block exceeds the difference threshold, the attenuation speed factor acting on the horizontal weighting coefficient is different from the attenuation speed factor acting on the vertical weighting coefficient; when the difference does not exceed the difference threshold, the attenuation speed factor acting on the horizontal weighting coefficient is the same as the attenuation speed factor acting on the vertical weighting coefficient.
The texture characteristic may include, for example, side length, auto-variance, and/or edge sharpness. Accordingly, the horizontal-direction texture characteristic may include the horizontal side length (length), horizontal auto-variance, and/or horizontal edge sharpness; the vertical-direction texture characteristic may include the vertical side length (width), vertical auto-variance, and/or vertical edge sharpness.
It can be understood that there is no necessary order between steps 401 to 402 and step 403; for example, step 403 may be performed before, after, or simultaneously with steps 401 to 402.
404. The video decoding apparatus performs weighted filtering on the initial predicted pixel values of the pixels in the current block to obtain predicted pixel values of the pixels in the current block, where the weighting coefficients used in the weighted filtering include the horizontal weighting coefficient and the vertical weighting coefficient.
The weighted filtering formula used in step 404 for the weighted filtering of the initial predicted pixel values of the pixels in the current block may be Formula 3.
For the specific way in which the video decoding apparatus determines the attenuation speed factor acting on the horizontal weighting coefficient and the attenuation speed factor acting on the vertical weighting coefficient, refer to the related description of the embodiment corresponding to FIG. 2; details are not repeated here.
405. The video decoding apparatus reconstructs the current block based on the predicted pixel values of the pixels in the current block and the prediction residual.
The video decoding apparatus may obtain the prediction residual of the pixels in the current block from the video bitstream.
It can be seen that, in the above example video decoding solution, the weighting coefficients used in the weighted filtering of the initial predicted pixel values of the pixels in the current block include a horizontal weighting coefficient and a vertical weighting coefficient. Because different attenuation speed factors can be set for the vertical and horizontal weighting coefficients, that is, the attenuation speed factor acting on the horizontal weighting coefficient may be different from that acting on the vertical weighting coefficient, the attenuation speeds of the vertical and horizontal weighting coefficients can be controlled separately. This helps meet scenarios in which the vertical and horizontal weighting coefficients need to attenuate at different speeds; the enhanced flexibility in controlling their attenuation speeds helps improve image prediction accuracy. For example, when the difference between the horizontal-direction and vertical-direction texture characteristics of the current block exceeds the difference threshold, this helps make the difference in weighting-coefficient attenuation speeds better match the difference in texture characteristics, thereby helping improve image prediction accuracy and, in turn, video decoding quality.
Testing finds that, compared with related solutions using Formula 1, applying the solutions illustrated in FIG. 2 to FIG. 4 can sometimes bring a performance improvement of about 0.2% in the coding and decoding of film and television sequences.
The following takes the attenuation process of c_top for image blocks of sizes 8×32 and 8×16 as an example to illustrate the difference between using Formula 1 and using Formula 3. Referring to FIG. 5-A to FIG. 5-C, FIG. 5-A illustrates the attenuation process of c_top for an image block of size 8×32 when Formula 1 is used: by the seventh row of pixels, c_top has attenuated to 0. FIG. 5-B illustrates the attenuation process of c_top for an image block of size 8×16 when Formula 1 is used: c_top also attenuates to 0 by the seventh row of pixels, so its attenuation speed is essentially the same as that for the 8×32 image block.
FIG. 5-C illustrates the attenuation process of c_top for an image block of size 8×32 when Formula 3 is used (taking d1 = 1 and d2 = 2 as an example): c_top does not attenuate to 0 until the eleventh row of pixels. It can be seen that using Formula 3 filters the predicted pixel values relatively more deeply, which helps improve coding and decoding performance.
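The contrast between FIG. 5-A and FIG. 5-C can be imitated numerically. The starting value c_top = 32 is a hypothetical choice (the actual values come from the per-direction tables below), so the exact row indices here illustrate the trend rather than reproduce the figures:

```python
def rows_until_zero(c_top, d, max_rows=32):
    # First row index y at which the shifted coefficient reaches 0.
    for y in range(max_rows):
        if c_top >> (y // d) == 0:
            return y
    return max_rows

c_top = 32  # hypothetical initial weighting coefficient
print(rows_until_zero(c_top, d=1))  # the coefficient dies out sooner
print(rows_until_zero(c_top, d=2))  # d=2 halves the decay speed
```

Doubling d doubles the number of rows over which the above-row reference still influences the prediction, which is the "deeper filtering" effect attributed to Formula 3 above.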
The following gives examples of the values of c_top and c_left.
For example, if the length of the image block is 4, an array corresponding to the prediction direction can be selected, based on the prediction direction, from the following 35 arrays, and one of the 6 numbers included in that array can be selected as the value of c_top. Similarly, if the width of the image block is 4, an array corresponding to the prediction direction can be selected from the following 35 arrays, and one of the 6 numbers included in that array can be selected as the value of c_left.
Figure PCTCN2018084955-appb-000004
Figure PCTCN2018084955-appb-000005
For another example, if the length of the image block is 8, an array corresponding to the prediction direction can be selected, based on the prediction direction, from the following 35 arrays, and one of the 6 numbers included in that array can be selected as the value of c_top. Similarly, if the width of the image block is 8, an array corresponding to the prediction direction can be selected from the following 35 arrays, and one of the 6 numbers included in that array can be selected as the value of c_left.
Figure PCTCN2018084955-appb-000006
Figure PCTCN2018084955-appb-000007
For another example, if the length of the image block is 16, an array corresponding to the prediction direction can be selected, based on the prediction direction, from the following 35 arrays, and one of the 6 numbers included in that array can be selected as the value of c_top. Similarly, if the width of the image block is 16, an array corresponding to the prediction direction can be selected from the following 35 arrays, and one of the 6 numbers included in that array can be selected as the value of c_left.
Figure PCTCN2018084955-appb-000008
Figure PCTCN2018084955-appb-000009
For another example, if the length of the image block is 32, an array corresponding to the prediction direction can be selected, based on the prediction direction, from the following 35 arrays, and one of the 6 numbers included in that array can be selected as the value of c_top. Similarly, if the width of the image block is 32, an array corresponding to the prediction direction can be selected from the following 35 arrays, and one of the 6 numbers included in that array can be selected as the value of c_left.
Figure PCTCN2018084955-appb-000010
Figure PCTCN2018084955-appb-000011
For another example, if the length of the image block is 64, an array corresponding to the prediction direction can be selected, based on the prediction direction, from the following 35 arrays, and one of the 6 numbers included in that array can be selected as the value of c_top. Similarly, if the width of the image block is 64, an array corresponding to the prediction direction can be selected from the following 35 arrays, and one of the 6 numbers included in that array can be selected as the value of c_left.
Figure PCTCN2018084955-appb-000012
Figure PCTCN2018084955-appb-000013
Of course, the values of c_top and c_left may also be other empirical values; c_topleft may be obtained based on c_top and c_left, or may also be another empirical value.
Related apparatuses for implementing the above solutions are provided below.
Referring to FIG. 6, an embodiment of this application provides an image prediction apparatus 600, including:
a prediction unit 610, configured to perform intra prediction on a current block by using a reference block to obtain initial predicted pixel values of pixels in the current block.
The intra prediction is, for example, directional intra prediction, DC intra prediction, or interpolation intra prediction.
a filtering unit 620, configured to perform weighted filtering on the initial predicted pixel values of the pixels in the current block to obtain predicted pixel values of the pixels in the current block, where the weighting coefficients used in the weighted filtering include a horizontal weighting coefficient and a vertical weighting coefficient, and the attenuation speed factor acting on the horizontal weighting coefficient is different from the attenuation speed factor acting on the vertical weighting coefficient.
Specifically, for example, when the difference between the horizontal-direction texture characteristic and the vertical-direction texture characteristic of the current block exceeds a difference threshold, the attenuation speed factor acting on the horizontal weighting coefficient is different from the attenuation speed factor acting on the vertical weighting coefficient.
For another example, when the difference between the horizontal-direction texture characteristic and the vertical-direction texture characteristic of the current block does not exceed the difference threshold, the attenuation speed factor acting on the horizontal weighting coefficient is the same as the attenuation speed factor acting on the vertical weighting coefficient.
The texture characteristic may include, for example, side length, auto-variance, and/or edge sharpness. Accordingly, the horizontal-direction texture characteristic may include the horizontal side length (length), horizontal auto-variance, and/or horizontal edge sharpness; the vertical-direction texture characteristic may include the vertical side length (width), vertical auto-variance, and/or vertical edge sharpness.
For example, the prediction unit 610 is specifically configured to: filter the reconstructed pixel values of reference pixels in the reference block to obtain filtered pixel values of the reference pixels in the reference block, and perform intra prediction on the current block by using the filtered pixel values of the reference pixels in the reference block to obtain the initial predicted pixel values of the pixels in the current block; or perform intra prediction on the current block by using the reconstructed pixel values of the reference pixels in the reference block to obtain the initial predicted pixel values of the pixels in the current block.
For example, the filtering unit is configured to perform weighted filtering on the initial predicted pixel values of the pixels in the current block based on the following weighted filtering formula to obtain the predicted pixel values of the pixels in the current block:
p″[x,y] = ((c_top >> [y/d2]) · r[x,−1] − (c_topleft >> [x/d1]) · r[−1,−1] + (c_left >> [x/d1]) · r[−1,y] + c_cur · p′[x,y] + 32) >> 6
where c_cur = 64 − (c_top >> [y/d2]) − (c_left >> [x/d1]) + (c_topleft >> [x/d1]),
where c_top is a horizontal weighting coefficient, and c_left and c_topleft are vertical weighting coefficients.
Here, c_top denotes the weighting coefficient corresponding to the reconstructed pixel values of the above-adjacent reference block of the current block, c_left denotes the weighting coefficient corresponding to the reconstructed pixel values of the left-adjacent reference block of the current block, and c_topleft denotes the weighting coefficient corresponding to the reconstructed pixel values of the above-left-adjacent reference block of the current block. x denotes the horizontal coordinate of a pixel in the current block relative to the top-left vertex of the current block, and y denotes its vertical coordinate. d2 is the attenuation speed factor acting on the horizontal weighting coefficient, d1 is the attenuation speed factor acting on the vertical weighting coefficient, and d1 and d2 are real numbers. p″[x,y] denotes the predicted pixel value of the pixel with coordinates [x,y] in the current block, p′[x,y] denotes the initial predicted pixel value of that pixel, r[x,−1] denotes the reconstructed pixel value of the pixel with coordinates [x,−1] in the above-adjacent reference block of the current block, r[−1,−1] denotes the reconstructed pixel value of the pixel with coordinates [−1,−1] in the above-left-adjacent reference block, and r[−1,y] denotes the reconstructed pixel value of the pixel with coordinates [−1,y] in the left-adjacent reference block.
For example, the texture characteristic includes side length, and the difference threshold includes the threshold thresh1.
When the length-to-width ratio of the current block is greater than the threshold thresh1, d1 = 1 and d2 = 2; when the width-to-length ratio of the current block is greater than the threshold thresh1, d1 = 2 and d2 = 1, where thresh1 is a real number greater than 2.
For another example, when the length-to-width ratio of the current block is less than the threshold thresh1, the length of the current block is greater than or equal to its width, and the sum of the length and width of the current block is greater than a threshold thresh4, d1 = d2 = 2; and/or, when the length-to-width ratio of the current block is less than the threshold thresh1, the length of the current block is greater than or equal to its width, and the sum of the length and width of the current block is less than or equal to the threshold thresh4, d1 = d2 = 1.
Here, thresh4 is a real number greater than or equal to 64.
For another example, the texture characteristic includes side length, where, when the length of the current block is greater than a threshold thresh2, d1 = 2, and when the length of the current block is less than or equal to the threshold thresh2, d1 = 1, thresh2 being a real number greater than or equal to 16; and/or, when the width of the current block is greater than a threshold thresh3, d2 = 2, and when the width of the current block is less than or equal to the threshold thresh3, d2 = 1, where thresh3 is a real number greater than or equal to 16.
It can be understood that the image prediction apparatus 600 can be applied to a video encoding apparatus or a video decoding apparatus, which may be any apparatus that needs to output or store video, such as a notebook computer, a tablet computer, a personal computer, a mobile phone, or a video server. The functions of the image prediction apparatus 600 in this embodiment can be specifically implemented based on the solutions of the above method embodiments; for parts not described, refer to the above embodiments.
Referring to FIG. 7, an embodiment of this application provides an image prediction apparatus 700, including a memory 720 and a processor 710 coupled to each other. The memory 720 is configured to store instructions and data, and the processor 710 is configured to execute the instructions, for example to perform some or all of the steps of the methods in the above method embodiments.
The processor 710 is also called a central processing unit (CPU). In a specific application, the components of the image prediction apparatus are coupled together, for example, through a bus system. Besides a data bus, the bus system may include a power bus, a control bus, a status signal bus, and the like; for clarity of description, the various buses are all labeled as the bus system 730 in the figure. The methods disclosed in the above embodiments of this application can be applied to, or implemented by, the processor 710. The processor 710 may be an integrated circuit chip with signal processing capability. In some implementations, the steps of the above methods can be completed by integrated logic circuits of hardware in the processor 710 or by instructions in the form of software. The processor 710 may be a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The processor 710 can implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of this application. The general-purpose processor 710 may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the methods disclosed in the embodiments of this application may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 720; for example, the processor 710 reads the information in the memory 720 and completes the steps of the above methods in combination with its hardware.
For example, the processor 710 may be configured to: perform intra prediction on a current block by using a reference block to obtain initial predicted pixel values of pixels in the current block; and perform weighted filtering on the initial predicted pixel values of the pixels in the current block to obtain predicted pixel values of the pixels in the current block, where the weighting coefficients used in the weighted filtering include a horizontal weighting coefficient and a vertical weighting coefficient, and the attenuation speed factor acting on the horizontal weighting coefficient may be different from the attenuation speed factor acting on the vertical weighting coefficient.
Specifically, for example, when the difference between the horizontal-direction texture characteristic and the vertical-direction texture characteristic of the current block exceeds a difference threshold, the attenuation speed factor acting on the horizontal weighting coefficient is different from the attenuation speed factor acting on the vertical weighting coefficient.
For another example, when the difference between the horizontal-direction texture characteristic and the vertical-direction texture characteristic of the current block does not exceed the difference threshold, the attenuation speed factor acting on the horizontal weighting coefficient is the same as the attenuation speed factor acting on the vertical weighting coefficient.
It can be understood that the image prediction apparatus 700 can be applied to a video encoding apparatus or a video decoding apparatus, which may be any apparatus that needs to output or store video, such as a notebook computer, a tablet computer, a personal computer, a mobile phone, or a video server. The functions of the image prediction apparatus 700 in this embodiment can be specifically implemented based on the solutions of the above method embodiments; for parts not described, refer to the above embodiments.
Referring to FIG. 8, FIG. 8 is a schematic block diagram of a video encoder 20 to which an embodiment of this application can be applied, including an encoder-side prediction module 201, a transform and quantization module 202, an entropy coding module 203, an encoding reconstruction module 204, and an encoder-side filtering module 205. FIG. 9 is a schematic block diagram of a video decoder 30 to which an embodiment of this application can be applied, including a decoder-side prediction module 206, an inverse transform and inverse quantization module 207, an entropy decoding module 208, a decoding reconstruction module 209, and a decoding filtering module 210.
The video encoder 20 can be used to implement the image prediction method or the video encoding method of the embodiments of this application. The video decoder 30 can be used to implement the image prediction method or the video decoding method of the embodiments of this application. Specifically:
The encoder-side prediction module 201 and the decoder-side prediction module 206 are used to generate prediction data. The video encoder 20 can generate one or more prediction units (PUs) for each CU that is not to be split further. Each PU of a CU can be associated with a different pixel block within the pixel block of the CU. The video encoder 20 can generate a predictive pixel block for each PU of the CU, and can use intra prediction or inter prediction to generate the predictive pixel block of a PU. If the video encoder 20 uses intra prediction to generate the predictive pixel block of a PU, it can generate the predictive pixel block based on decoded pixels of the picture associated with the PU. If it uses inter prediction, it can generate the predictive pixel block based on decoded pixels of one or more pictures different from the picture associated with the PU. The video encoder 20 can generate a residual pixel block of the CU based on the predictive pixel blocks of the PUs of the CU; the residual pixel block of the CU can indicate the differences between sample values in the predictive pixel blocks of the PUs of the CU and the corresponding sample values in the initial pixel block of the CU.
The transform and quantization module 202 is used to process the predicted residual data. The video encoder 20 can perform recursive quadtree partitioning on the residual pixel block of the CU to partition it into one or more smaller residual pixel blocks associated with the transform units (TUs) of the CU. Because each pixel in a pixel block associated with a TU corresponds to one luminance sample and two chrominance samples, each TU can be associated with one residual sample block of luminance and two residual sample blocks of chrominance. The video encoder 20 can apply one or more transforms to the residual sample blocks associated with a TU to generate coefficient blocks (that is, blocks of coefficients). The transform may be a DCT transform or a variant of it; the coefficient block is obtained by using a DCT transform matrix and computing a two-dimensional transform through applying one-dimensional transforms in the horizontal and vertical directions. The video encoder 20 can perform a quantization procedure on each coefficient in a coefficient block; quantization generally refers to the process by which coefficients are quantized to reduce the amount of data used to represent them, providing further compression. The inverse transform and inverse quantization module 207 performs the inverse process of the transform and quantization module 202.
The video encoder 20 can generate a set of syntax elements representing the coefficients in the quantized coefficient block. Through the entropy coding module 203, the video encoder 20 can apply entropy coding operations (for example, context-adaptive binary arithmetic coding (CABAC) operations) to some or all of these syntax elements. To apply CABAC coding to the syntax elements, the video encoder 20 can binarize the syntax elements to form binary sequences including one or more bits (called "bins"). The video encoder 20 can code some of the bins using regular coding and code the other bins using bypass coding.
Besides entropy-coding the syntax elements of the coefficient block, the video encoder 20 can, through the encoding reconstruction module 204, apply inverse quantization and inverse transform to the transformed coefficient block to reconstruct the residual sample block from it. The video encoder 20 can add the reconstructed residual sample block to the corresponding sample blocks of one or more predictive sample blocks to generate a reconstructed sample block. By reconstructing the sample block of each color component, the video encoder 20 can reconstruct the pixel block associated with a TU. The pixel block of each TU of the CU is reconstructed in this way until the entire pixel block of the CU has been reconstructed.
After the video encoder 20 reconstructs the pixel block of the CU, it performs, through the encoder-side filtering module 205, a deblocking filtering operation to reduce blocking artifacts of the pixel block associated with the CU. After performing the deblocking filtering operation, the video encoder 20 can use sample adaptive offset (SAO) to modify the reconstructed pixel blocks of the CTBs of the picture. After performing these operations, the video encoder 20 can store the reconstructed pixel block of the CU in a decoded picture buffer for use in generating predictive pixel blocks of other CUs.
The video decoder 30 can receive a bitstream. The bitstream contains, in the form of a bit stream, the coding information of the video data coded by the video encoder 20. Through the entropy decoding module 208, the video decoder 30 parses the bitstream to extract syntax elements from it. When the video decoder 30 performs CABAC decoding, it can perform regular decoding on some bins and bypass decoding on the other bins; the bins in the bitstream have a mapping relationship with the syntax elements, and the syntax elements are obtained by parsing the bins.
Through the decoding reconstruction module 209, the video decoder 30 can reconstruct the pictures of the video data based on the syntax elements extracted from the bitstream. The process of reconstructing the video data based on the syntax elements is generally the inverse of the process performed by the video encoder 20 to generate the syntax elements. For example, the video decoder 30 can generate the predictive pixel blocks of the PUs of a CU based on the syntax elements associated with the CU. In addition, the video decoder 30 can inverse-quantize the coefficient blocks associated with the TUs of the CU, and can perform inverse transform on the inverse-quantized coefficient blocks to reconstruct the residual pixel blocks associated with the TUs of the CU. The video decoder 30 can reconstruct the pixel block of the CU based on the predictive pixel blocks and the residual pixel blocks.
After the video decoder 30 reconstructs the pixel block of the CU, it performs, through the decoding filtering module 210, a deblocking filtering operation to reduce blocking artifacts of the pixel block associated with the CU. In addition, based on one or more SAO syntax elements, the video decoder 30 can perform the same SAO operation as the video encoder 20. After the video decoder 30 performs these operations, it can store the pixel block of the CU in a decoded picture buffer, which can provide reference pictures for subsequent motion compensation, intra prediction, and presentation on a display apparatus.
本申请实施例还提供一种计算机可读存储介质,其中,所述计算机可读存储介质存储了程序代码,其中,所述程序代码包括用于执行以上各方法实施例中的方法的部分或全部步骤的指令。
本申请实施例还提供一种计算机程序产品，其中，当所述计算机程序产品在计算机上运行时，使得所述计算机执行如以上各方法实施例中的方法的部分或全部步骤。
本申请实施例还提供一种应用发布平台,其中,所述应用发布平台用于发布计算机程序产品,其中,当所述计算机程序产品在计算机上运行时,使得所述计算机执行如以上各方法实施例中的方法的部分或全部步骤。
图10和图11是电子装置50的两种示意性框图,电子装置50可并入本申请实施例可运用的编码解码器。图11是根据本申请实施例的用于视频编码的示意性装置图。下面将说明图10和图11中的单元。
电子装置50可以例如是无线通信系统的移动终端或者用户设备。应理解,可以在可能需要对视频图像进行编码和解码,或者编码,或者解码的任何电子设备或者装置内实施本申请的实施例。
装置50可以包括用于并入和保护设备的壳30。装置50还可以包括形式为液晶显示器的显示器32。在本申请的其它实施例中,显示器可以是适合于显示图像或者视频的任何适当的显示器技术。装置50还可以包括小键盘34。在本申请的其它实施例中,可以运用任何适当的数据或者用户接口机制。例如,可以实施用户接口为虚拟键盘或者数据录入系统作为触敏显示器的一部分。装置可以包括麦克风36或者任何适当的音频输入,该音频输入可以是数字或者模拟信号输入。装置50还可以包括如下音频输出设备,该音频输出设备在本申请的实施例中可以是以下各项中的任何一项:耳机38、扬声器或者模拟音频或者数字音频输出连接。装置50也可以包括电池40,在本申请的其它实施例中,设备可以由任何适当的移动能量设备,比如太阳能电池、燃料电池或者时钟机构生成器供电。装置还可以包括用于与其它设备的近程视线通信的红外线端口42。在其它实施例中,装置50还可以包括任何适当的近程通信解决方案,比如蓝牙无线连接或者USB/火线有线连接。
装置50可以包括用于控制装置50的控制器56或者处理器。控制器56可以连接到存储器58,该存储器在本申请的实施例中可以存储形式为图像的数据和音频的数据,和/或也可以存储用于在控制器56上实施的指令。控制器56还可以连接到适合于实现音频和/或视频数据的编码和解码或者由控制器56实现的辅助编码和解码的编码解码器电路54。
装置50还可以包括用于提供用户信息并且适合于提供用于在网络认证和授权用户的认证信息的读卡器48和智能卡46,例如UICC和UICC读取器。
装置50还可以包括无线电接口电路52,该无线电接口电路连接到控制器并且适合于生成例如用于与蜂窝通信网络、无线通信系统或者无线局域网通信的无线通信信号。装置50还可以包括天线44,该天线连接到无线电接口电路52用于向其它(多个)装置发送在无线电接口电路52生成的射频信号并且用于从其它(多个)装置接收射频信号。
在本申请的一些实施例中,装置50包括能够记录或者检测单帧的相机,编码解码器54或者控制器接收到这些单帧并对它们进行处理。在本申请一些实施例中,装置可在传输和/或存储之前从另一设备接收待处理的视频图像数据。在本申请的一些实施例中,装置50可以通过无线或者有线连接接收图像用于编码/解码。
本申请实施例方案可应用于各种电子装置中，示例性的，下面给出本申请实施例应用于电视设备和移动电话设备的例子。
图12是本申请实施例适用于电视机应用的示意性结构图。
电视设备900包括天线901、调谐器902、多路解复用器903、解码器904、视频信号处理器905、显示单元906、音频信号处理器907、扬声器908、外部接口909、控制器910、用户接口911和总线912等。
其中,调谐器902从经天线901接收到的广播信号提取期望频道的信号,并且解调提取的信号。调谐器902随后将通过解调获得的编码比特流输出到多路解复用器903。也就是说调谐器902在接收编码图像的编码流的电视设备900中用作发送装置。
多路解复用器903从编码比特流分离将要观看的节目的视频流和音频流,且将分离的流输出到解码器904。多路解复用器903还从编码比特流提取辅助数据,例如电子节目指南,并且将提取的数据提供给控制器910。如果编码比特流被加扰,则多路解复用器903可对编码比特流进行解扰。
解码器904对从多路解复用器903输入的视频流和音频流进行解码。解码器904随后将通过解码产生的视频数据输出到视频信号处理器905。解码器904还将通过解码产生的音频数据输出到音频信号处理器907。
视频信号处理器905再现从解码器904输入的视频数据,并且,在显示单元906上显示视频数据。视频信号处理器905还可在显示单元906上显示经网络提供的应用画面。另外,视频信号处理器905可根据设置对视频数据执行额外的处理,例如,噪声去除。视频信号处理器905还可产生GUI(图形用户界面)的图像并且将产生的图像叠加在输出图像上。
显示单元906由从视频信号处理器905提供的驱动信号驱动,并且在显示装置,例如液晶显示器、等离子体显示器或OELD(有机场致发光显示器)的视频屏幕上显示视频或图像。
音频信号处理器907对从解码器904输入的音频数据执行再现处理,例如,数模转换和放大等,并且通过扬声器908输出音频。另外,音频信号处理器907可以对音频数据执行额外的处理,例如噪声去除等。
外部接口909是用于连接电视设备900与外部装置或网络的接口。例如,经外部接口909接收的视频流或音频流可由解码器904解码。也就是说,外部接口909也在接收编码图像的编码流的电视设备900中用作发送装置。
控制器910包括处理器和存储器。存储器存储将要由处理器执行的程序、节目数据、辅助数据、经网络获取的数据等。例如,当电视设备900启动时,存储在存储器中的程序由处理器读取并且执行。处理器根据从用户接口911输入的控制信号控制电视设备900的操作。
其中,用户接口911连接到控制器910。例如,用户接口911包括用于使用户操作电视设备900的按钮和开关以及用于接收遥控信号的接收单元。用户接口911检测由用户经这些部件执行的操作,产生控制信号,并且将产生的控制信号输出到控制器910。
总线912将调谐器902、多路解复用器903、解码器904、视频信号处理器905、音频信号处理器907、外部接口909和控制器910彼此连接。
在具有这种结构的电视设备900中，解码器904可具有根据上述实施例的视频解码装置或图像预测装置的功能。例如解码器904可用于，利用参考块对当前块进行帧内预测以得到所述当前块中像素点的初始预测像素值；对所述当前块中像素点的初始预测像素值进行加权滤波处理以得到所述当前块中像素点的预测像素值，所述加权滤波处理所使用的加权系数包括水平加权系数和垂直加权系数，作用于所述水平加权系数的衰减速度因子不同于作用于所述垂直加权系数的衰减速度因子。
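解码器904对单个像素执行的加权滤波可示意如下。其中 c_cur 的计算方式取自本文公式，而 p″ 的具体组合公式在原文中以公式图给出，下面采用的"权重和为 64 的归一化加权平均（PDPC 风格，最后右移 6 位）"仅是与该约束一致的一种假设性实现；c_top、c_left、c_topleft 的初始取值亦为本文假设：

```python
def filtered_pixel(p_init, x, y, r_top, r_left, r_topleft,
                   c_top=8, c_left=8, c_topleft=4, d1=1, d2=2):
    """对初始预测像素 p'[x,y] 做加权滤波得到 p''[x,y]。
    组合方式为假设的 PDPC 风格归一化（权重总和为 64）。"""
    w_top = c_top >> (y // d2)          # 水平加权系数随 y 按衰减速度因子 d2 衰减
    w_left = c_left >> (x // d1)        # 垂直加权系数随 x 按衰减速度因子 d1 衰减
    w_topleft = c_topleft >> (x // d1)
    c_cur = 64 - w_top - w_left + w_topleft     # 原文给出的当前像素权重
    return (w_top * r_top + w_left * r_left - w_topleft * r_topleft
            + c_cur * p_init + 32) >> 6         # +32 为右移 6 位前的四舍五入偏置
```

由于权重总和恒为 64，当参考像素与初始预测像素取值相同时滤波结果保持不变；d1 与 d2 不同则使两个方向上参考像素影响的衰减速度不同。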
图13是本申请实施例适用于移动电话应用的示意性结构图。移动电话装置920可以包括天线921、通信单元922、音频编解码器923、扬声器924、麦克风925、相机单元926、图像处理器927、多路解复用器928、记录/再现单元929、显示单元930、控制器931、操作单元932和总线933等。
天线921连接到通信单元922。扬声器924和麦克风925连接到音频编码解码器923。操作单元932连接到控制器931。总线933将通信单元922、音频编解码器923、相机单元926、图像处理器927、多路解复用器928、记录/再现单元929、显示单元930和控制器931彼此连接。
移动电话装置920在各种操作模式下执行操作,例如,音频信号的发送/接收、电子邮件和图像数据的发送/接收、图像的拍摄、数据的记录等,所述各种操作模式包括语音呼叫模式、数据通信模式、成像模式和视频电话模式。
在语音呼叫模式下，由麦克风925产生的模拟音频信号被提供给音频编解码器923。音频编解码器923对模拟音频信号执行模数转换以将其转换成音频数据，并且压缩音频数据。音频编解码器923随后将作为压缩结果得到的音频数据输出到通信单元922。通信单元922对音频数据进行编码和调制以产生待发送的信号。通信单元922随后经天线921将产生的待发送的信号发送给基站。通信单元922还放大经天线921接收到的无线电信号并且对经天线921接收到的无线电信号执行频率转换以获得接收到的信号。通信单元922随后对接收到的信号进行解调和解码以产生音频数据，并且将产生的音频数据输出到音频编解码器923。音频编解码器923解压缩音频数据并且对音频数据执行数模转换以产生模拟音频信号。音频编解码器923随后将产生的音频信号提供给扬声器924以从扬声器924输出音频。
在数据通信模式下,例如,控制器931根据由用户经操作单元932的操作产生将要被包括在电子邮件中的文本数据。控制器931还在显示单元930上显示文本。控制器931还响应于经操作单元932来自用户的用于发送的指令产生电子邮件数据,并且将产生的电子邮件数据输出到通信单元922。通信单元922对电子邮件数据进行编码和调制以产生待发送的信号。通信单元922随后经天线921将产生的待发送的信号发送给基站。通信单元922还放大经天线921接收到的无线电信号并且对经天线921接收到的无线电信号执行频率转换以获得接收到的信号。通信单元922随后对接收到的信号进行解调和解码以恢复电子邮件数据,并且将恢复的电子邮件数据输出到控制器931。控制器931在显示单元930上显示电子邮件的内容,并且将电子邮件数据存储在记录/再现单元929的存储介质中。
记录/再现单元929包括可读/可写存储介质。例如,存储介质可以是内部存储介质,或者可以是在外部安装的存储介质,例如,硬盘、磁盘、磁光盘、USB(通用串行总线)存储器或存储卡。
在成像模式下，相机单元926对对象成像以产生图像数据，并且将产生的图像数据输出到图像处理器927。图像处理器927对从相机单元926输入的图像数据进行编码，并且将编码流存储在记录/再现单元929的存储介质中。
在视频电话模式下,多路解复用器928多路复用由图像处理器927编码的视频流和从音频编码解码器923输入的音频流,并且将多路复用流输出到通信单元922。通信单元922对多路复用流进行编码和调制以产生待发送的信号。通信单元922随后经天线921将产生的待发送的信号发送给基站。通信单元922还放大经天线921接收到的无线电信号并且对经天线921接收到的无线电信号执行频率转换以获得接收到的信号。待发送的信号和接收到的信号可包括编码比特流。通信单元922随后对接收到的信号进行解调和解码以恢复流,并且将恢复的流输出到多路解复用器928。多路解复用器928从输入流分离视频流和音频流,将视频流输出到图像处理器927并且将音频流输出到音频编解码器923。图像处理器927对视频流进行解码以产生视频数据。视频数据被提供给显示单元930,并且一系列图像由显示单元930显示。音频编解码器923解压缩音频流并且对音频流执行数模转换以产生模拟音频信号。音频编解码器923随后将产生的音频信号提供给扬声器924以从扬声器924输出音频。
在具有这种结构的移动电话装置920中,图像处理器927具有根据上述实施例的视频编码装置(视频编码器、图像预测装置)和/或视频解码装置(视频解码器)的功能。
例如图像处理器927可用于利用参考块对当前块进行帧内预测以得到所述当前块中像素点的初始预测像素值;对所述当前块中像素点的初始预测像素值进行加权滤波处理以得到所述当前块中像素点的预测像素值,所述加权滤波处理所使用的加权系数包括水平加权系数和垂直加权系数,作用于所述水平加权系数的衰减速度因子不同于作用于所述垂直加权系数的衰减速度因子。
应理解,说明书通篇中提到的“一个实施例”或“一实施例”意味着与实施例有关的特定特征、结构或特性包括在本申请的至少一个实施例中。因此,在整个说明书各处出现的“在一个实施例中”或“在一实施例中”未必一定指相同的实施例。此外,这些特定特征、结构或特性可以任意适合的方式结合在一个或多个实施例中。本领域技术人员也应该知悉,说明书中所描述的实施例均属于可选实施例,所涉及的动作和模块并不一定是本申请所必须的。
在本申请的各种实施例中,应理解,上述各过程的序号的大小并不意味着执行顺序的必然先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。
另外,本文中术语“系统”和“网络”在本文中常可互换使用。应理解,本文中术语“和/或”,仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,本文中字符“/”,一般表示前后关联对象是一种“或”的关系。
在本申请所提供的实施例中,应理解,“与A相应的B”表示B与A相关联,根据A可以确定B。但还应理解,根据A确定B并不意味着仅仅根据A确定B,还可以根据A和/或其它信息确定B。
在上述实施例中，可全部或部分地通过软件、硬件、固件或其任意组合来实现。当使用软件实现时，可全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时，全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可为通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中，或者从一个计算机可读存储介质向另一个计算机可读存储介质传输，例如，所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线（例如同轴电缆、光纤、数字用户线）或无线（例如红外、无线、微波等）方式向另一个网站站点、计算机、服务器或数据中心传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质（磁性介质例如可以是软盘、硬盘、磁带）、光介质（例如光盘）、或半导体介质（例如固态硬盘）等。在上述实施例中，对各个实施例的描述都各有侧重，某个实施例中没有详述的部分，可以参见其他实施例的相关描述。
在本申请所提供的几个实施例中,应该理解到,所揭露的装置,可以通过其它的方式来实现。例如,以上所描述的装置实施例仅仅是示意性的,例如上述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或者讨论的相互之间的耦合或直接耦合或通信连接可以通过一些接口,装置或单元的间接耦合或通信连接,可以是电性或其它的形式。
上述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可位于一个地方,或者也可以分布到多个网络单元上。可根据实际的需要选择其中的部分或全部单元来实现本实施例方案的目的。
另外,在本申请各实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
上述集成的单元若以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可获取的存储器中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或者部分,可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储器中,包括若干请求用以使得一台计算机设备(可以为个人计算机、服务器或者网络设备等,具体可以是计算机设备中的处理器)执行本申请的各个实施例上述方法的全部或部分步骤。
以上所述,以上实施例仅用以说明本申请的技术方案而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,然而本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的精神和范围。

Claims (23)

  1. 一种图像预测方法，其特征在于，包括：
    利用参考块对当前块进行帧内预测以得到所述当前块中像素点的初始预测像素值；
    对所述当前块中像素点的初始预测像素值进行加权滤波处理以得到所述当前块中像素点的预测像素值,
    所述加权滤波处理所使用的加权系数包括水平加权系数和垂直加权系数,作用于所述水平加权系数的衰减速度因子不同于作用于所述垂直加权系数的衰减速度因子。
  2. 根据权利要求1所述的方法,其特征在于,
    在所述当前块的水平方向纹理特性与垂直方向纹理特性之间的差异性超出差异性阈值的情况下,作用于所述水平加权系数的衰减速度因子不同于作用于所述垂直加权系数的衰减速度因子;
    在所述当前块的水平方向纹理特性与垂直方向纹理特性之间的差异性未超出所述差异性阈值的情况下,作用于所述水平加权系数的衰减速度因子同于作用于所述垂直加权系数的衰减速度因子。
  3. 根据权利要求2所述的方法,其特征在于,所述纹理特性包括边长、自方差和/或边缘锐度。
  4. 根据权利要求2至3任一项所述的方法,其特征在于,所述对所述当前块中像素点的初始预测像素值进行加权滤波以得到所述当前块中像素点的预测像素值包括:基于如下加权滤波公式,对所述当前块中像素点的初始预测像素值进行加权滤波以得到所述当前块中像素点的预测像素值,
    （加权滤波公式在原文中以公式图给出，图像标识：PCTCN2018084955-appb-100001）
    其中，c_cur=64-(c_top>>[y/d2])-(c_left>>[x/d1])+(c_topleft>>[x/d1])，
    其中，所述c_top属于水平加权系数，所述c_left和所述c_topleft属于垂直加权系数；
    其中，所述c_top表示所述当前块的上相邻参考块重建像素值对应的加权系数，所述c_left表示所述当前块的左相邻参考块重建像素值对应的加权系数，所述c_topleft表示所述当前块的左上相邻参考块重建像素值对应的加权系数；x表示当前块中像素点相对于所述当前块的左上顶点的横坐标，y表示当前块中像素点相对于所述当前块的左上顶点的纵坐标，所述d1为作用于垂直加权系数的衰减速度因子，所述d2为作用于水平加权系数的衰减速度因子，所述d1和所述d2为实数，p″[x,y]表示当前块中坐标为[x,y]的像素点的预测像素值，p'[x,y]表示当前块中坐标为[x,y]的像素点的初始预测像素值，r[x,-1]表示所述当前块的上相邻参考块中坐标为[x,-1]的像素点的重建像素值，r[-1,-1]表示所述当前块的左上相邻参考块中坐标为[-1,-1]的像素点的重建像素值，r[-1,y]表示所述当前块的左相邻参考块中坐标为[-1,y]的像素点的重建像素值。
  5. 根据权利要求4所述的方法,其特征在于,所述纹理特性包括边长,所述差异性阈值包括阈值thresh1,
    在所述当前块的长宽比大于所述阈值thresh1的情况下，所述d1=1且所述d2=2；在所述当前块的宽长比大于所述阈值thresh1的情况下，所述d1=2且所述d2=1，所述thresh1为大于2的实数。
  6. 根据权利要求5所述的方法,其特征在于,
    在所述当前块的长宽比小于所述阈值thresh1，且所述当前块的长大于或等于宽，且所述当前块的长宽和大于阈值thresh4的情况下，所述d1=d2=2；和/或，在所述当前块的长宽比小于所述阈值thresh1，且所述当前块的长大于或等于宽，且所述当前块的长宽和小于或等于所述阈值thresh4的情况下，所述d1=d2=1；
    其中,所述thresh4为大于或等于64的实数。
  7. 根据权利要求4所述的方法,其特征在于,
    在所述当前块的长大于阈值thresh2的情况下所述d1=2，在所述当前块的长小于或等于阈值thresh2的情况下所述d1=1，所述thresh2为大于或等于16的实数；和/或，在所述当前块的宽大于阈值thresh3的情况下所述d2=2，在所述当前块的宽小于或等于阈值thresh3的情况下所述d2=1，所述thresh3为大于或等于16的实数。
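权利要求5至7描述了按当前块形状选择衰减速度因子 d1、d2 的规则。下面按权利要求5与6的边长规则给出一种示意性实现（假设"长"对应块高、"宽"对应块宽；thresh1=4、thresh4=64 仅为满足"thresh1 大于 2、thresh4 大于或等于 64"约束的假设取值；长小于宽且两个比值均未超过 thresh1 的情形原文未明确规定，此处假设同样按长宽和规则处理）：

```python
def decay_factors(width, height, thresh1=4, thresh4=64):
    """按权利要求5-6的边长规则选择衰减速度因子 (d1, d2)。"""
    if height > thresh1 * width:        # 长宽比大于 thresh1：竖长块
        return 1, 2
    if width > thresh1 * height:        # 宽长比大于 thresh1：横长块
        return 2, 1
    # 两个比值均未超过阈值：按长宽和与 thresh4 的关系选择
    return (2, 2) if width + height > thresh4 else (1, 1)
```

例如 4×32 的竖长块得到 (1, 2)，32×4 的横长块得到 (2, 1)，而接近正方形的块按其尺寸之和在两个方向上取相同的衰减速度因子。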
  8. 根据权利要求1至7任一项所述方法,其特征在于,所述利用参考块对当前块进行帧内预测以得到所述当前块中像素点的初始预测像素值包括:
    对参考块中参考像素点的重建像素值进行滤波处理以得到所述参考块中参考像素点的滤波像素值;利用所述参考块中参考像素点的滤波像素值对当前块进行帧内预测以得到所述当前块中像素点的初始预测像素值;
    或者,利用参考块中参考像素点的重建像素值对当前块进行帧内预测以得到所述当前块中像素点的初始预测像素值。
  9. 根据权利要求1至8任一项所述的方法,其特征在于,
    所述帧内预测为方向性帧内预测、直流系数帧内预测或插值帧内预测。
  10. 根据权利要求1至9任一项所述的方法,其特征在于,所述图像预测方法应用于视频编码过程或视频解码过程。
  11. 一种图像预测装置,其特征在于,包括:
    预测单元,用于利用参考块对当前块进行帧内预测以得到所述当前块中像素点的初始预测像素值;
    滤波单元,用于对所述当前块中像素点的初始预测像素值进行加权滤波处理以得到所述当前块中像素点的预测像素值,其中,所述加权滤波处理所使用的加权系数包括水平加权系数和垂直加权系数,作用于所述水平加权系数的衰减速度因子不同于作用于所述垂直加权系数的衰减速度因子。
  12. 根据权利要求11所述的装置,其特征在于,
    在所述当前块的水平方向纹理特性与垂直方向纹理特性之间的差异性超出差异性阈值的情况下,作用于所述水平加权系数的衰减速度因子不同于作用于所述垂直加权系数的衰减速度因子;
    在所述当前块的水平方向纹理特性与垂直方向纹理特性之间的差异性未超出所述差异性阈值的情况下,作用于所述水平加权系数的衰减速度因子同于作用于所述垂直加权系数的衰减速度因子。
  13. 根据权利要求12所述的装置,其特征在于,所述纹理特性包括边长、自方差和/或边缘锐度。
  14. 根据权利要求12至13任一项所述的装置,其特征在于,所述滤波单元用于基于如下加权滤波公式,对所述当前块中像素点的初始预测像素值进行加权滤波以得到所述当前块中像素点的预测像素值,
    （加权滤波公式在原文中以公式图给出，图像标识：PCTCN2018084955-appb-100002）
    其中，c_cur=64-(c_top>>[y/d2])-(c_left>>[x/d1])+(c_topleft>>[x/d1])，
    其中，所述c_top属于水平加权系数，所述c_left和所述c_topleft属于垂直加权系数；
    其中，所述c_top表示所述当前块的上相邻参考块重建像素值对应的加权系数，所述c_left表示所述当前块的左相邻参考块重建像素值对应的加权系数，所述c_topleft表示所述当前块的左上相邻参考块重建像素值对应的加权系数；x表示当前块中像素点相对于所述当前块的左上顶点的横坐标，y表示当前块中像素点相对于所述当前块的左上顶点的纵坐标，所述d2为作用于水平加权系数的衰减速度因子，所述d1为作用于垂直加权系数的衰减速度因子，所述d1和所述d2为实数，p″[x,y]表示当前块中坐标为[x,y]的像素点的预测像素值，p'[x,y]表示当前块中坐标为[x,y]的像素点的初始预测像素值，r[x,-1]表示所述当前块的上相邻参考块中坐标为[x,-1]的像素点的重建像素值，r[-1,-1]表示所述当前块的左上相邻参考块中坐标为[-1,-1]的像素点的重建像素值，r[-1,y]表示所述当前块的左相邻参考块中坐标为[-1,y]的像素点的重建像素值。
  15. 根据权利要求14所述的装置,其特征在于,所述纹理特性包括边长,所述差异性阈值包括阈值thresh1,
    在所述当前块的长宽比大于所述阈值thresh1的情况下，所述d1=1且所述d2=2；在所述当前块的宽长比大于所述阈值thresh1的情况下，所述d1=2且所述d2=1，所述thresh1为大于2的实数。
  16. 根据权利要求15所述的装置,其特征在于,
    在所述当前块的长宽比小于所述阈值thresh1，且所述当前块的长大于或等于宽，且所述当前块的长宽和大于阈值thresh4的情况下，所述d1=d2=2；和/或，在所述当前块的长宽比小于所述阈值thresh1，且所述当前块的长大于或等于宽，且所述当前块的长宽和小于或等于所述阈值thresh4的情况下，所述d1=d2=1；
    其中,所述thresh4为大于或等于64的实数。
  17. 根据权利要求14所述的装置,其特征在于,所述纹理特性包括边长,
    在所述当前块的长大于阈值thresh2的情况下所述d1=2，在所述当前块的长小于或等于阈值thresh2的情况下所述d1=1，所述thresh2为大于或等于16的实数；和/或，在所述当前块的宽大于阈值thresh3的情况下所述d2=2，在所述当前块的宽小于或等于阈值thresh3的情况下所述d2=1，所述thresh3为大于或等于16的实数。
  18. 根据权利要求11至17任一项所述装置,其特征在于,
    所述预测单元具体用于,对参考块中参考像素点的重建像素值进行滤波处理以得到所述参考块中参考像素点的滤波像素值;利用所述参考块中参考像素点的滤波像素值对当前块进行帧内预测以得到所述当前块中像素点的初始预测像素值;或者,利用参考块中参考像素点的重建像素值对当前块进行帧内预测以得到所述当前块中像素点的初始预测像素值。
  19. 根据权利要求11至18任一项所述的装置,其特征在于,所述帧内预测为方向性帧内预测、直流系数帧内预测或插值帧内预测。
  20. 根据权利要求11至19任一项所述的装置,其特征在于,所述图像预测装置应用于视频编码装置或视频解码装置。
  21. 一种图像预测装置,其特征在于,包括:相互耦合的存储器和处理器;所述处理器用于执行如权利要求1至10任一项所述方法。
  22. 一种计算机可读存储介质,其特征在于,
    所述计算机可读存储介质存储了程序代码,所述程序代码包括用于执行如权利要求1至10任一项所述方法的指令。
  23. 一种计算机程序产品,其特征在于,当所述计算机程序产品在计算机上运行时,使得所述计算机执行如权利要求1至10任一项所述方法。
PCT/CN2018/084955 2017-04-28 2018-04-27 图像预测方法和相关产品 WO2018196864A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/662,627 US11039145B2 (en) 2017-04-28 2019-10-24 Image prediction method and apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710300302.4A CN108810552B (zh) 2017-04-28 2017-04-28 图像预测方法和相关产品
CN201710300302.4 2017-04-28

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/662,627 Continuation US11039145B2 (en) 2017-04-28 2019-10-24 Image prediction method and apparatus

Publications (1)

Publication Number Publication Date
WO2018196864A1 true WO2018196864A1 (zh) 2018-11-01

Family

ID=63920253

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/084955 WO2018196864A1 (zh) 2017-04-28 2018-04-27 图像预测方法和相关产品

Country Status (3)

Country Link
US (1) US11039145B2 (zh)
CN (1) CN108810552B (zh)
WO (1) WO2018196864A1 (zh)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10965941B2 (en) 2017-10-09 2021-03-30 Qualcomm Incorporated Position-dependent prediction combinations in video coding
US11310515B2 (en) * 2018-11-14 2022-04-19 Tencent America LLC Methods and apparatus for improvement for intra-inter prediction mode
WO2020130514A1 (ko) * 2018-12-17 2020-06-25 엘지전자 주식회사 고주파 제로잉을 기반으로 변환 계수 스캔 순서를 결정하는 방법 및 장치
US11470354B2 (en) * 2019-03-11 2022-10-11 Tencent America LLC Inter PDPC mode
WO2021127923A1 (zh) * 2019-12-23 2021-07-01 Oppo广东移动通信有限公司 图像预测方法、编码器、解码器以及存储介质
CN113891075B (zh) * 2020-07-03 2023-02-28 杭州海康威视数字技术股份有限公司 滤波处理方法及装置
WO2023217235A1 (en) * 2022-05-12 2023-11-16 Mediatek Inc. Prediction refinement with convolution model

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1723799A1 (en) * 2004-03-09 2006-11-22 Nokia Corporation Method, coding device and software product for motion estimation in scalable video editing
CN101919253A (zh) * 2008-01-08 2010-12-15 高通股份有限公司 基于水平和垂直对称的滤波系数的视频译码
CN102196265A (zh) * 2010-03-16 2011-09-21 索尼公司 图像编码装置和方法、图像解码装置和方法、以及程序

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020196854A1 (en) * 2001-06-15 2002-12-26 Jongil Kim Fast video encoder using adaptive hierarchical video processing in a down-sampled domain
KR20090110323A (ko) * 2007-01-04 2009-10-21 브리티쉬 텔리커뮤니케이션즈 파블릭 리미티드 캄퍼니 비디오 신호를 인코딩하는 방법 및 시스템
CN101668207B (zh) * 2009-09-25 2011-12-14 天津大学 Mpeg到avs视频编码转换方法
US8593483B2 (en) 2009-10-20 2013-11-26 Apple Inc. Temporal filtering techniques for image signal processing
US8503528B2 (en) * 2010-09-15 2013-08-06 Google Inc. System and method for encoding video using temporal filter
US9258573B2 (en) * 2010-12-07 2016-02-09 Panasonic Intellectual Property Corporation Of America Pixel adaptive intra smoothing
WO2018067051A1 (en) * 2016-10-05 2018-04-12 Telefonaktiebolaget Lm Ericsson (Publ) Deringing filter for video coding
EP3560204A4 (en) * 2016-12-23 2019-12-04 Telefonaktiebolaget LM Ericsson (publ) DERINGING FILTER FOR VIDEO CODING
US20200236361A1 (en) * 2017-07-18 2020-07-23 Lg Electronics Inc. Intra prediction mode based image processing method, and apparatus therefor


Also Published As

Publication number Publication date
US11039145B2 (en) 2021-06-15
CN108810552B (zh) 2021-11-09
US20200059650A1 (en) 2020-02-20
CN108810552A (zh) 2018-11-13


Legal Events

121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 18791112; Country of ref document: EP; Kind code of ref document: A1)
NENP: Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 18791112; Country of ref document: EP; Kind code of ref document: A1)