WO2017097441A1 - Devices and methods for video coding using intra prediction - Google Patents

Devices and methods for video coding using intra prediction

Info

Publication number
WO2017097441A1
Authority
WO
WIPO (PCT)
Prior art keywords
contour
block
pixels
current block
reconstructed
Prior art date
Application number
PCT/EP2016/066988
Other languages
English (en)
Inventor
Thorsten LAUDE
Stella GRASSHOF
Marco Munderloh
Joern Ostermann
Original Assignee
Huawei Technologies Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2017097441A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: Using adaptive coding
    • H04N19/102: Characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103: Selection of coding mode or of prediction mode
    • H04N19/11: Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
    • H04N19/134: Characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136: Incoming video signal characteristics or properties
    • H04N19/14: Coding unit complexity, e.g. amount of activity or edge presence estimation
    • H04N19/169: Characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17: The coding unit being an image region, e.g. an object
    • H04N19/176: The image region being a block, e.g. a macroblock
    • H04N19/50: Using predictive coding
    • H04N19/593: Predictive coding involving spatial prediction techniques

Definitions

  • the present invention relates to the field of video coding. More specifically, the present invention relates to an apparatus for encoding and an apparatus for decoding a video signal using intra prediction as well as corresponding methods.
  • the Joint Collaborative Team on Video Coding (JCT-VC) of ISO/IEC MPEG and ITU-T VCEG finished the technical work for the latest video coding standard, called High Efficiency Video Coding (HEVC). It achieves the same visual quality at half the bit rate compared to the predecessor standard AVC.
  • inter prediction employs the redundancy between different pictures to further increase the coding efficiency. Therefore, in general intra prediction requires higher bitrates than inter prediction to achieve the same visual quality for typical video signals.
  • intra coding is an essential part of all video coding systems: it is required to start a video transmission, for random access into ongoing transmissions and for error concealment.
  • the intra mode in HEVC uses only one adjacent row/column as prediction basis for the current block (which in case of HEVC is referred to as coding unit or CU).
  • with angular prediction, only one direction can be applied per CU. Due to these limitations, high bit rates are required for the residuals of intra coded CUs.
  • Edge-based error concealment algorithms are disclosed, for instance, in H. Asheri et al, "Multi-directional Spatial Error Concealment using Adaptive Edge Thresholding,” IEEE Trans. Consum. Electron., vol. 58, no. 3, pp. 880-885, Aug. 2012 and O. C. Au and S.-H. G. Chan, "Edge-Directed Error Concealment,” IEEE Trans. Circuits Syst. Video Technol., vol. 20, no. 3, pp. 382-395, Mar. 2010. These edge-based error concealment algorithms do not influence the actual coding process, but try to utilize edge information for recovering transmission errors as a post processing step.
  • the invention proposes a new coding mode which we refer to as contour-based multidirectional intra coding.
  • contours are detected, parameterized and extrapolated while smooth areas are filled with other algorithms.
  • the entire information for the prediction process is extracted from reconstructed parts of the current picture.
  • no signaling overhead is needed except for the coding mode usage itself.
  • the invention relates to an apparatus for encoding a video signal, wherein the video signal comprises a plurality of frames, each frame is dividable into a plurality of blocks, each block comprises a plurality of pixels, and each pixel is associated with at least one pixel value (also referred to as sample value).
  • the encoding apparatus comprises a prediction module for intra prediction configured to generate a prediction block for a current, i.e. currently processed, block on the basis of a reconstructed area comprising at least one already generated reconstructed block adjacent to the current block.
  • the prediction module is configured to detect at least one contour in the reconstructed area, wherein the contour is associated with a subset of the plurality of pixels of the at least one already generated reconstructed block, and to generate the prediction block for the current block on the basis of the at least one contour.
  • reconstructed block is a block reconstructed from a predicted block and an error block (also referred to as residual block).
  • the prediction module is configured to detect a first and a second contour in the reconstructed area.
  • the prediction module is further configured to extrapolate said first and second contours into the current block to generate a first and a second extrapolated contour, wherein the first extrapolated contour is connected to the second extrapolated contour in the current block.
  • the encoding apparatus allows the entire information for the prediction process to be extracted from the reconstructed area of a currently processed frame. Thus, no signaling overhead is needed except for the coding mode usage itself.
  • the prediction module is configured to approximate the shape of the at least one contour in the reconstructed area using a parametric model. Approximating the shape of a contour in the reconstructed area using a parametric model provides a representation of a contour, which allows an accurate extrapolation of the contour. Furthermore, this representation can be handled in a computationally efficient way.
  • the parametric model is based on a polynomial and wherein the polynomial has a predefined degree or wherein the prediction module is configured to choose the degree of the polynomial depending on the number of pixels of the reconstructed area, which are associated with the at least one contour.
  • a parametric model based on a polynomial provides a particularly efficient representation of a contour in the reconstructed area.
  • the prediction module is configured to determine the parametric model on the basis of a regression analysis.
  • Determining the parametric model on the basis of a regression analysis allows optimizing the approximation of the contour by the parametric model in a computationally efficient way. Furthermore, the error of the approximation is minimized.
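As an illustration of determining the parametric model by regression analysis, the following Python sketch fits a least-squares polynomial to the pixel coordinates of a detected contour. The function names and the use of NumPy are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def fit_contour_polynomial(xs, ys, degree):
    """Least-squares fit of y = p(x) to the contour pixels.

    Returns polynomial coefficients (highest degree first) that
    minimize the squared approximation error, i.e. a regression
    analysis of the contour shape.
    """
    return np.polyfit(xs, ys, degree)

def approximation_error(coeffs, xs, ys):
    """Mean squared error between the contour and its parametric model."""
    residuals = np.polyval(coeffs, xs) - np.asarray(ys, dtype=float)
    return float(np.mean(residuals ** 2))

# Contour pixels lying exactly on a parabola are recovered with
# (numerically) zero error by a degree-2 fit.
xs = [0, 1, 2, 3, 4]
ys = [0, 1, 4, 9, 16]
coeffs = fit_contour_polynomial(xs, ys, degree=2)
err = approximation_error(coeffs, xs, ys)
```

Because the fit minimizes the squared residuals, the approximation error is minimized for the chosen degree, matching the motivation given above.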
  • the prediction module is configured to select a subset of the plurality of pixels of the current block by extrapolating the at least one contour into the current block on the basis of the parametric model of the at least one contour in the reconstructed area and to predict the pixel values of the pixels of the selected subset on the basis of the parametric model of the at least one contour.
  • the prediction module is configured to predict the pixel values of the pixels of the selected subset on the basis of the parametric model of the at least one contour by determining a boundary pixel value of a pixel of the at least one already generated reconstructed block located at the boundary with the current block and by assigning respective modified boundary pixel values to the selected subset of the plurality of pixels of the current block.
  • the prediction module is configured to predict the pixel values of the pixels of the current block, which have not been selected by extrapolating the at least one contour into the current block, by horizontally and/or vertically continuing pixel values of pixels located at the boundary between the current block and the reconstructed area or by assigning a mean pixel value of the pixels of the reconstructed area to a subset of the pixels of the current block.
  • This implementation form allows the encoding apparatus to predict pixel values of pixels of the current block not coinciding with a contour in a computationally efficient way.
  • this implementation form takes into account that sharp increases or decreases of sample values are unlikely.
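A minimal sketch of how the pixels of the current block that lie on no extrapolated contour could be filled. The horizontal/vertical continuation and mean-fill variants follow the text above; the function signature and names are assumptions:

```python
import numpy as np

def fill_smooth_area(reconstructed_top, reconstructed_left, block_size, mode="mean"):
    """Predict pixels of the current block not covered by a contour.

    reconstructed_top  : row of boundary pixel values above the block
    reconstructed_left : column of boundary pixel values left of the block
    mode "horizontal"  : continue each left-boundary pixel across its row
    mode "vertical"    : continue each top-boundary pixel down its column
    mode "mean"        : assign the mean of all boundary pixels
    """
    top = np.asarray(reconstructed_top, dtype=float)
    left = np.asarray(reconstructed_left, dtype=float)
    if mode == "horizontal":
        return np.tile(left[:, None], (1, block_size))
    if mode == "vertical":
        return np.tile(top[None, :], (block_size, 1))
    mean = float(np.concatenate([top, left]).mean())
    return np.full((block_size, block_size), mean)
```

Continuing boundary values avoids the sharp increases or decreases of sample values that, as noted above, are unlikely in natural video content.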
  • the apparatus is configured to provide the encoded video signal in the form of a bit stream, wherein the bit stream comprises information about the parametric model for approximating the shape of the at least one contour in the reconstructed area.
  • the invention relates to a corresponding apparatus for decoding an encoded bit stream based on a video signal, wherein the video signal comprises a plurality of frames, each frame is dividable into a plurality of blocks, each block comprises a plurality of pixels, and each pixel is associated with at least one pixel value.
  • the decoding apparatus comprises a prediction module for intra prediction configured to generate a prediction block for a current, i.e. currently processed, block on the basis of a reconstructed area comprising at least one already generated reconstructed block adjacent to the current block.
  • the prediction module is configured to detect at least one contour in the reconstructed area, wherein the contour is associated with a subset of the plurality of pixels of the at least one already generated reconstructed block, and to generate the prediction block for the current block on the basis of the at least one contour.
  • the prediction module is configured to approximate the shape of the at least one contour in the reconstructed area using a parametric model.
  • the parametric model is based on a polynomial and wherein the polynomial has a predefined degree or wherein the prediction module is configured to choose the degree of the polynomial depending on the number of pixels of the reconstructed area, which are associated with the at least one contour.
  • the prediction module is configured to determine the parametric model on the basis of a regression analysis.
  • the prediction module is configured to select a subset of the plurality of pixels of the current block by extrapolating the at least one contour into the current block on the basis of the parametric model of the at least one contour in the reconstructed area and to predict the pixel values of the pixels of the selected subset on the basis of the parametric model of the at least one contour.
  • the prediction module is configured to predict the pixel values of the pixels of the selected subset on the basis of the parametric model of the at least one contour by determining a boundary pixel value of a pixel of the at least one already generated reconstructed block located at the boundary with the current block and by assigning respective modified boundary pixel values to the selected subset of the plurality of pixels of the current block.
  • the prediction module is configured to predict the pixel values of the pixels of the current block, which have not been selected by extrapolating the at least one contour into the current block, by horizontally and/or vertically continuing pixel values of pixels located at the boundary between the current block and the reconstructed area or by assigning a mean pixel value of the pixels of the reconstructed area to a subset of the pixels of the current block.
  • the invention relates to a method for encoding a video signal, wherein the video signal comprises a plurality of frames, each frame is dividable into a plurality of blocks, each block comprises a plurality of pixels, and each pixel is associated with at least one pixel value.
  • the encoding method comprises the step of generating a prediction block for a current, i.e. currently processed, block on the basis of a reconstructed area comprising at least one already generated reconstructed block adjacent to the current block.
  • the encoding method according to the third aspect of the invention can be performed by the encoding apparatus according to the first aspect of the invention. Further features and implementation forms of the encoding method according to the third aspect of the invention result directly from the functionality of the encoding apparatus according to the first aspect of the invention and its different implementation forms.
  • the encoding method comprises detecting a first and a second contour in the reconstructed area and extrapolating said first and second contours into the current block to generate a first and a second extrapolated contour, wherein the first extrapolated contour is connected to the second extrapolated contour in the current block.
  • the method comprises the further step of approximating the shape of the at least one contour in the reconstructed area using a parametric model.
  • the parametric model is based on a polynomial, wherein the polynomial has a predefined degree or wherein the prediction module is configured to choose the degree of the polynomial depending on the number of pixels of the reconstructed area, which are associated with the at least one contour.
  • the parametric model is determined on the basis of a regression analysis.
  • the method comprises the further steps of selecting a subset of the plurality of pixels of the current block by extrapolating the at least one contour into the current block on the basis of the parametric model of the at least one contour in the reconstructed area and predicting the pixel values of the pixels of the selected subset on the basis of the parametric model of the at least one contour.
  • the step of predicting the pixel values of the pixels of the selected subset on the basis of the parametric model of the at least one contour comprises the steps of determining a boundary pixel value of a pixel of the at least one already generated reconstructed block located at the boundary with the current block and assigning respective modified boundary pixel values to the selected subset of the plurality of pixels of the current block.
  • the method comprises the further step of predicting the pixel values of the pixels of the current block, which have not been selected by extrapolating the at least one contour into the current block, by horizontally and/or vertically continuing pixel values of pixels located at the boundary between the current block and the reconstructed area or by assigning a mean pixel value of the pixels of the reconstructed area to a subset of the pixels of the current block.
  • the method comprises the further step of providing the encoded video signal in the form of a bit stream, wherein the bit stream comprises information about the parametric model for approximating the shape of the at least one contour in the reconstructed area.
  • the invention relates to a method for decoding an encoded bit stream based on a video signal, wherein the video signal comprises a plurality of frames, each frame is dividable into a plurality of blocks, each block comprises a plurality of pixels, and each pixel is associated with at least one pixel value.
  • the decoding method comprises a step of generating a prediction block for a current, i.e. currently processed, block on the basis of a reconstructed area comprising at least one already generated reconstructed block adjacent to the current block.
  • the decoding method according to the fourth aspect of the invention can be performed by the decoding apparatus according to the second aspect of the invention. Further features and implementation forms of the decoding method according to the fourth aspect of the invention result directly from the functionality of the decoding apparatus according to the second aspect of the invention and its different implementation forms.
  • the method comprises the further step of approximating the shape of the at least one contour in the reconstructed area using a parametric model.
  • the parametric model is based on a polynomial, wherein the polynomial has a predefined degree or wherein the prediction module is configured to choose the degree of the polynomial depending on the number of pixels of the reconstructed area, which are associated with the at least one contour.
  • the parametric model is determined on the basis of a regression analysis.
  • the method comprises the further steps of selecting a subset of the plurality of pixels of the current block by extrapolating the at least one contour into the current block on the basis of the parametric model of the at least one contour in the reconstructed area and predicting the pixel values of the pixels of the selected subset on the basis of the parametric model of the at least one contour.
  • the step of predicting the pixel values of the pixels of the selected subset on the basis of the parametric model of the at least one contour comprises the steps of determining a boundary pixel value of a pixel of the at least one already generated reconstructed block located at the boundary with the current block and assigning respective modified boundary pixel values to the selected subset of the plurality of pixels of the current block.
  • the method comprises the further step of predicting the pixel values of the pixels of the current block, which have not been selected by extrapolating the at least one contour into the current block, by horizontally and/or vertically continuing pixel values of pixels located at the boundary between the current block and the reconstructed area or by assigning a mean pixel value of the pixels of the reconstructed area to a subset of the pixels of the current block.
  • the invention relates to a computer program comprising program code for performing the method according to the third aspect or the method according to the fourth aspect when executed on a computer.
  • the invention can be implemented in hardware and/or software.
  • FIG. 1 shows an exemplary illustration of a contour extrapolation process implemented in different embodiments of the invention
  • Fig. 2 shows an exemplary illustration of another contour extrapolation process
  • Fig. 3 shows an exemplary illustration of a smooth area filling process implemented in different embodiments of the invention
  • Fig. 4 shows an exemplary illustration of a contour parametrization implemented in different embodiments of the invention
  • Fig. 5 shows a schematic diagram of a network element configured to implement an encoding apparatus and/or a decoding apparatus according to an embodiment
  • Fig. 6 shows a schematic diagram of an encoding apparatus according to an embodiment
  • Fig. 7 shows a schematic diagram of a decoding apparatus according to an embodiment
  • Fig. 8 shows an exemplary illustration of a contour extrapolation process based on Hermite splines, which is implemented in different embodiments of the invention
  • Fig. 9 shows a schematic diagram illustrating a method for encoding a video signal according to an embodiment
  • Fig. 10 shows a schematic diagram illustrating a method for decoding a video signal according to an embodiment.
  • Slice - a spatially distinct region of a picture that is independently encoded/decoded.
  • Slice header - a data structure configured to signal information associated with a particular slice.
  • Block - an MxN (M-column by N-row) array of samples, or an MxN array of transform coefficients.
  • Coding Tree Unit (CTU) grid - a grid structure employed to partition blocks of pixels into macro-blocks for video encoding.
  • CU - Coding Unit.
  • Picture Parameter Set (PPS) - a syntax structure containing syntax elements that apply to zero or more entire coded pictures as determined by a syntax element found in each slice segment header.
  • Sequence Parameter Set (SPS) - a syntax structure containing syntax elements that apply to zero or more entire coded video sequences as determined by the content of a syntax element found in the PPS referred to by a syntax element found in each slice segment header.
  • Video Parameter Set (VPS) - a syntax structure containing syntax elements that apply to zero or more entire coded video sequences.
  • PU - Prediction Unit.
  • TU - Transform Unit: a transform block of luma samples, two corresponding transform blocks of chroma samples of a picture that has three sample arrays, or a transform block of samples of a monochrome picture or a picture that is coded using three separate color planes, and syntax used to predict the transform block samples.
  • Supplemental Enhancement Information (SEI) - extra information that may be inserted into a video bit-stream to enhance the use of the video.
  • Luma - information indicating the brightness of an image sample.
  • Chroma - information indicating the color of an image sample, which may be described in terms of red difference chroma component (Cr) and blue difference chroma component (Cb).
  • the invention relates to an apparatus 605 for encoding a video signal (also referred to as encoding apparatus or encoder 605) as well as an apparatus 700 for decoding a correspondingly encoded video signal or bit stream (also referred to as decoding apparatus or decoder 700).
  • the video signal comprises a plurality of frames (also referred to as images), wherein each frame can be divided into a plurality of blocks.
  • Each block comprises a plurality of pixels, wherein each pixel is associated with at least one pixel value or sample value, for instance, a luma intensity value.
  • the main component of the encoding apparatus 605 and the decoding apparatus 700 is a prediction module for intra prediction, which in case of the encoding apparatus 605 is referred to as the prediction module 620 (see figure 6) and in the case of the decoding apparatus 700 is referred to as the prediction module 740.
  • the prediction module 620, 740 is configured to generate a prediction block for a currently processed block on the basis of a reconstructed area comprising at least one already generated reconstructed block adjacent to the current block, wherein the prediction module 620, 740 is configured to detect at least one contour in the reconstructed area, wherein the contour is associated with a subset of the plurality of pixels of the at least one already generated reconstructed block, and to generate the prediction block for the current block on the basis of the at least one contour.
  • the area of operation may be limited to the current block, for instance the CU, and adjacent reconstructed areas of the same picture.
  • the trade-off for the decision of how many reconstructed areas should be involved in the prediction process may be that larger areas allow the extraction of more information while they also increase the computational complexity.
  • areas of equal size to the current CU on the top, top-left and left side may be used.
  • Figure 1 illustrates one possible example for the selection of the reconstructed area with respect to the current CU.
  • Embodiments of the invention may solely rely on reconstructed samples which are available at the encoder 620 and at the decoder 700 or may rely on additional information, e.g. information which is signaled as part of the bit stream (e.g. as part of the SPS, VPS, PPS, SEI messages, slice header, CU, PU or TU syntax).
  • some or all of the described operations may be applied in the same or in a similar way at the encoder 620 and at the decoder 700. Furthermore, in an embodiment, some or all of the described operations may be based on reconstructed samples which are available at the encoder 620 and at the decoder 700. Hence, these operations may be applied at the encoder 620 and at the decoder 700 without the necessity of signaling side information as part of the bit stream.
  • the prediction module 620, 740 is configured to detect one or more contours (also referred to as edges) in the reconstructed area as follows.
  • the contour detection may be based on the Canny edge detection algorithm, disclosed in J. Canny, "A Computational Approach to Edge Detection,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 8, no. 6, pp. 679-698, Nov. 1986, which is herein fully incorporated by reference.
  • this algorithm may be implemented in the prediction module 620, 740 for edge detection.
  • Other algorithms may also be used for edge detection.
  • Video signals may have very noisy content which might be falsely detected as edges by the Canny algorithm.
  • the reconstructed area may be preprocessed to remove such noise.
  • the reconstructed samples may be filtered prior to the actual edge detection, e.g. by a low pass filter.
  • the Canny edge detection algorithm may be controlled by setting two thresholds for the suppression of weak edges. Taking into account that the characteristics of video signals may be very different, embodiments allow to define thresholds which result in good detection results for all video signals. These thresholds may be defined in different ways. For instance, the thresholds may be fixed, calculated based on the video signal or signaled as part of the bit stream (e.g. as part of the SPS, VPS, PPS, SEI messages, slice header, CU, PU or TU syntax).
  • thresholding may be applied to generate an edge image.
  • a connected components analysis may be applied to detect contours.
  • alternatively, segmentation methods (e.g. clustering, region-growing, level-sets, graph partitioning methods like Markov random fields, watershed, etc.) may be used for contour detection.
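To make the thresholding and connected-components steps concrete, here is a simplified Python sketch. It substitutes a plain gradient-magnitude threshold for the Canny detector named above and uses an 8-connected breadth-first search, so it illustrates the pipeline rather than reproducing the patented implementation; all names are assumptions:

```python
from collections import deque

import numpy as np

def edge_image(recon, threshold):
    """Binary edge map from gradient-magnitude thresholding.

    Pixels whose horizontal/vertical intensity differences exceed
    `threshold` are marked as edge pixels (a stand-in for Canny).
    """
    recon = np.asarray(recon, dtype=float)
    gx = np.zeros_like(recon)
    gy = np.zeros_like(recon)
    gx[:, 1:] = recon[:, 1:] - recon[:, :-1]
    gy[1:, :] = recon[1:, :] - recon[:-1, :]
    return np.hypot(gx, gy) > threshold

def connected_components(edges):
    """Group edge pixels into separate contours via 8-connectivity BFS."""
    edges = np.asarray(edges, dtype=bool)
    labels = np.zeros(edges.shape, dtype=int)
    current = 0
    for sy, sx in zip(*np.nonzero(edges)):
        if labels[sy, sx]:
            continue
        current += 1
        labels[sy, sx] = current
        queue = deque([(sy, sx)])
        while queue:
            y, x = queue.popleft()
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < edges.shape[0] and 0 <= nx < edges.shape[1]
                            and edges[ny, nx] and not labels[ny, nx]):
                        labels[ny, nx] = current
                        queue.append((ny, nx))
    return labels, current
```

Each connected component then corresponds to one candidate contour that can be parameterized separately.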
  • the prediction module 620, 740 is configured to approximate the shape of a contour detected in the reconstructed area using a parametric model.
  • a binary image may be available that describes all edge pixels in the reconstructed area, i.e. all pixels of the reconstructed area that belong to an edge.
  • edges may be further split into separate contours, for instance by following the approach disclosed in S. Suzuki and K. Be, "Topological Structural Analysis of Digitized Binary Images by Border Following," Comput. Vision, Graph. Image Process., vol. 30, no. 1 , pp. 32-46, Apr. 1985, which is fully incorporated herein by reference.
  • these contours are parameterized. For instance, each contour may be described by a polynomial parameterization. The degree of the polynomial should be chosen carefully: on the one hand, a higher degree may allow the approximation of irregularly shaped or curved contours; on the other hand, a higher degree increases the risk of overfitting, which can make the extrapolation unstable.
  • the prediction module 620, 740 is configured to use polynomials with a fixed degree or polynomials whose degree is chosen based on the number of edge pixels for approximating the shape of a contour in the reconstructed area. In an embodiment, in case there are n edge pixels, the prediction module 620, 740 is configured to use a polynomial with a degree smaller than or equal to n-1. In an embodiment, the prediction module 620, 740 is configured to select the smallest degree of the polynomial which results in an error between the contour and its polynomial approximation that is smaller than a predefined error threshold. In general, the error increases with decreasing degree.
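The degree-selection rule described above (smallest degree whose approximation error stays below a threshold, capped at n-1 for n edge pixels) could be sketched as follows; the names and the mean-squared-error criterion are assumptions:

```python
import numpy as np

def choose_polynomial_degree(xs, ys, error_threshold, max_degree=None):
    """Pick the smallest polynomial degree meeting the error threshold.

    With n edge pixels the degree is capped at n - 1, as in the text.
    Returns (degree, coefficients); falls back to the cap if no smaller
    degree meets the threshold.
    """
    n = len(xs)
    cap = n - 1 if max_degree is None else min(max_degree, n - 1)
    for degree in range(0, cap + 1):
        coeffs = np.polyfit(xs, ys, degree)
        residuals = np.polyval(coeffs, xs) - np.asarray(ys, dtype=float)
        if float(np.mean(residuals ** 2)) <= error_threshold:
            return degree, coeffs
    return cap, np.polyfit(xs, ys, cap)
```

For contour pixels on a straight line, a constant (degree 0) fit fails the threshold while a degree-1 fit is exact, so degree 1 is selected.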
  • the degree of the polynomial may be signaled as part of the bit stream (e.g. as part of the SPS, VPS, PPS, SEI messages, slice header, CU, PU or TU syntax) or be derived based on other coded syntax elements.
  • the prediction module 620, 740 is configured to test multiple polynomial types for each contour and select one of them.
  • the coefficient of determination or another criterion may be used as basis for the selection.
  • the prediction module 620, 740 is configured to determine the parametric model on the basis of a regression analysis.
  • splines may be used to parameterize contours.
  • Hermite splines may be used with the starting point defined at the boundary of the reconstructed area and a slope according to the slope of the contour in the reconstructed area.
  • An exemplary illustration for the interpolation of given data points with Hermite splines is shown in Figure 4.
  • an exemplary illustration for extrapolated Hermite splines based on the slope of the contour in the reconstructed area is given in Figure 8.
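For reference, a cubic Hermite segment can be evaluated with the standard basis polynomials. The function below is a generic textbook formulation, not code from the patent; in the scheme above, p0 would sit on the boundary of the reconstructed area and m0 would be the slope of the contour there:

```python
def hermite_point(p0, p1, m0, m1, t):
    """Cubic Hermite interpolation between p0 and p1 with tangents m0, m1.

    Standard Hermite basis functions; t runs from 0 (start point) to 1
    (end point). At t = 0 the curve passes through p0 with slope m0,
    at t = 1 through p1 with slope m1.
    """
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * p0 + h10 * m0 + h01 * p1 + h11 * m1
```

Evaluating with an endpoint placed inside the current block yields the extrapolated contour sketched in Figures 4 and 8.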
  • the prediction module 620, 740 is configured to use polygons for the contour parameterization.
  • contours may be parameterized as geometric structures (e.g. circles, ellipses, etc.).
  • time series analysis methods may be applied to derive contour parameterizations.
  • (non-)linear models or statistical linear models (linear in parameters, but any transform of the data is allowed) may be used to derive contour parameterizations. Further information about linear models can be found in C. R. Rao, H. Toutenburg, Shalabh, C. Heumann, and M. Schomaker, "Linear Models and Generalizations: Least Squares and Alternatives". Springer, 2007, which is incorporated herein by reference.
  • the data points which are used for the contour parameterization derivation may be preprocessed. For instance, outlier detection, RANSAC, etc. may be applied to these data points.
  • the prediction module 620, 740 is further configured to extrapolate a parametrized contour into the currently processed block, for instance a currently processed CU. By extrapolating the parametrized contour into the currently processed block, the prediction module 620, 740 selects a subset of the plurality of pixels of the current block, namely those pixels of the current block that are part of the extrapolated contour. Moreover, in an embodiment, the prediction module 620, 740 is configured to predict the pixel values of the pixels of the current block that are part of the extrapolated contour on the basis of the parametric model of the contour.
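Selecting the subset of current-block pixels covered by the extrapolated contour could look like this sketch, which rasterizes a fitted polynomial y = p(x) over the columns of the block; the names and the column-wise rasterization are assumptions:

```python
import numpy as np

def extrapolate_contour(coeffs, block_x0, block_y0, block_size):
    """Select the pixels of the current block covered by the contour.

    coeffs describe the polynomial fitted in the reconstructed area
    (highest degree first). The contour is evaluated at every column of
    the current block and rounded to the nearest pixel row; pixels whose
    extrapolated row falls outside the block are skipped.
    """
    selected = []
    for x in range(block_x0, block_x0 + block_size):
        y = int(round(float(np.polyval(coeffs, x))))
        if block_y0 <= y < block_y0 + block_size:
            selected.append((x, y))
    return selected
```

For a diagonal contour y = x and a 4x4 block at (4, 4), the selected subset is the block's main diagonal.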
  • the prediction module 620, 740 is configured to predict the pixel values of the pixels of the selected subset on the basis of the parametric model of the at least one contour by determining a boundary pixel value of a pixel of the at least one already generated reconstructed block located at the boundary with the current block and by assigning respective modified boundary pixel values to the selected subset of the plurality of pixels of the current block.
  • the extrapolation sample value may be modified along the extrapolated contour. The motivation for this altering of the sample value may be based on the observation that the accuracy of the extrapolated contours with respect to the original input signal decreases with increasing distance to the reconstructed pixels, i.e. the pixels of the reconstructed area.
  • the sample value s_a of the adjacent pixel, i.e. the pixel of the contour at the boundary between the reconstructed area and the current block, may be diminished towards a specific sample value.
  • this sample value might be signaled as part of the bit stream (e.g. as part of the SPS, VPS, PPS, SEI messages, slice header, CU, PU or TU syntax).
  • the sample value s_a may be derived based on a mean sample value s_m of a region of the current block, which may be used for the mean fill process that will be described in detail further below.
  • the resulting sample value for the extrapolation for a given pixel on the extrapolated contour may be denoted as s_e.
  • let d_max be the distance after which the extrapolated sample value may be completely diminished towards s_m, and d the distance between the adjacent pixel and the current pixel on the extrapolated contour.
  • the diminishing may be described mathematically as noted in the following equation:

    s_e = (s_m * d + s_a * (d_max - d)) / d_max
  • the value for d_max may be determined in various ways. For instance, the value may be fixed, determined based on other parameters of the coded video (e.g. CU size, PU size, etc.), signaled as part of the bit stream (e.g. as part of the SPS, VPS, PPS, SEI messages, slice header, CU, PU or TU syntax) or based on some other characteristic of the video.
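Read as a normalized blend, the diminishing of the boundary sample value towards the mean value can be sketched as follows; the division by d_max, which makes the value reach s_m exactly at d = d_max, is an assumption about the intended normalization:

```python
def diminished_sample(s_a, s_m, d, d_max):
    """Blend the boundary sample value s_a towards the mean value s_m as
    the distance d along the extrapolated contour grows:
        s_e = (s_m * d + s_a * (d_max - d)) / d_max
    At d = 0 the result is s_a; at d = d_max it is exactly s_m."""
    d = min(d, d_max)  # beyond d_max the value stays at s_m
    return (s_m * d + s_a * (d_max - d)) / d_max
```

This matches the observation in the text: the further the extrapolated pixel is from the reconstructed area, the less the boundary value is trusted and the more the prediction falls back to the mean.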
  • An example for the diminishing process implemented in the prediction module 620, 740 according to an embodiment is given in Figure 1.
  • the width of the extrapolated contour itself may be only one pixel.
  • the contours in the coded video signals can have a larger width.
  • the width of the extrapolated contour may be extended as illustrated by the thin dotted lines in Figure 1.
  • depending on a threshold t, the contour may be extended to an adjacent pixel.
  • the value of t may be derived in various ways. For instance, the value may be fixed, may be derived based on some signal characteristics, may be signaled as part of the bit stream (e.g. as part of the SPS, VPS, PPS, SEI messages, slice header, CU, PU or TU syntax) or be derived based on other syntax elements.
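One way to realize this widening is a threshold test on the 4-neighbours of each contour pixel. The specific criterion below (absolute sample difference to the contour pixel smaller than t) is an assumption, since the text leaves the exact rule open:

```python
import numpy as np

def widen_contour(contour_mask, frame, t):
    """Extend a one-pixel-wide extrapolated contour (contour_mask == True)
    to 4-neighbours whose sample value in `frame` differs from the adjacent
    contour pixel by less than the threshold t."""
    widened = contour_mask.copy()
    h, w = contour_mask.shape
    for r in range(h):
        for c in range(w):
            if not contour_mask[r, c]:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < h and 0 <= cc < w and \
                        abs(frame[rr, cc] - frame[r, c]) < t:
                    widened[rr, cc] = True
    return widened
```

Neighbours similar to the contour sample are absorbed into it, so edges wider than one pixel in the source material are covered; dissimilar neighbours are left for the smooth-area prediction.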
  • the signal may be extrapolated into the current block in such a way that two contours in the reconstructed area (e.g. two parts of the same edge in the content of the signal) are connected.
  • An exemplary illustration for this process, which can be implemented in the prediction module 620, 740 according to an embodiment is shown in Figure 2.
  • two contours in the reconstructed area (solid lines) are connected through the current block using the dotted line.
  • the prediction module may be configured to detect a first and a second contour in the reconstructed area.
  • the prediction module is further configured to extrapolate said first and second contours into the current block to generate a first and a second extrapolated contour, wherein the first extrapolated contour is connected to the second extrapolated contour in the current block.
  • the prediction module 620, 740 is configured to handle overlapping contours. In an embodiment, the prediction module 620, 740 is configured to handle overlapping contours by averaging the contours. In further embodiments of the extrapolation of contours implemented in the prediction module 620, 740 color gradients or luminance gradients may be involved in the extrapolation process. In an embodiment, the color gradient along the contour in the reconstructed area may be analyzed and extrapolated into the current CU. In an embodiment, diffusion equations may be applied to describe the extrapolation of sample values into the current CU. In an embodiment, (partial) differential equations may be applied to achieve the contour extrapolation process.
  • a smooth area algorithm is implemented in the prediction module 620, 740. More specifically, in an embodiment the prediction module 620, 740 is configured to predict the pixel values of the pixels of the current block, which have not been selected by extrapolating the at least one contour into the current block, by horizontally and vertically continuing pixel values of pixels located at the boundary between the current block and the reconstructed area or by assigning a mean pixel value of the pixels of the reconstructed area.
  • the prediction module 620, 740 is configured to use different algorithms for the prediction of contours and of smooth areas for the current block.
  • smooth areas are predicted subsequently to the contour extrapolation.
  • two different algorithms are used for the smooth area prediction, namely an algorithm for the horizontal and vertical continuation of adjacent sample values and an algorithm for the filling of unpredictable areas by the mean sample value of the reconstructed area. Both algorithms are visualized in Figure 3.
  • the horizontal and vertical continuation may be based on the repetition of the adjacent sample value from the reconstructed area into the current block, e.g. coding unit. Thereby, it can be ensured that there are no discontinuities at the border of the coding unit.
  • the sample value continuation is applied until the first contour is hit, as illustrated in Figure 3. There may be other criteria for deciding when to stop the sample value continuation.
  • the sample value continuation may be stopped after a fixed distance, a distance which is derived based on the video signal, based on other syntax elements or signaled as part of the bit stream (e.g. as part of the SPS, VPS, PPS, SEI messages, slice header, CU, PU or TU syntax).
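A minimal sketch of both continuations over a w x w block, stopping at the first extrapolated-contour pixel. Treating contour pixels as already predicted, averaging pixels reached from both directions, and marking unreached pixels with NaN for a later mean fill are all assumptions about details the text leaves open:

```python
import numpy as np

def continue_into_block(top_row, left_col, contour_mask):
    """Continue adjacent reconstructed samples into a w x w block: each
    column is filled downward from the sample above it (top_row) and each
    row is filled rightward from the sample to its left (left_col),
    stopping at the first contour pixel (contour_mask == True).  Pixels
    reached by both continuations get the average; unreached pixels stay
    NaN and contour pixels are left to the contour extrapolation."""
    w = contour_mask.shape[0]
    pred = np.full((w, w), np.nan)
    for c in range(w):                 # vertical continuation
        for r in range(w):
            if contour_mask[r, c]:
                break
            pred[r, c] = top_row[c]
    for r in range(w):                 # horizontal continuation
        for c in range(w):
            if contour_mask[r, c]:
                break
            v = left_col[r]
            pred[r, c] = v if np.isnan(pred[r, c]) else (pred[r, c] + v) / 2
    return pred
```

Because the boundary samples themselves are repeated, there are no discontinuities at the block border, as the text notes; pixels shadowed by a contour in both directions remain NaN and fall through to the mean fill.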
  • unpredictable parts of the current block, e.g. coding unit, i.e. parts which are neither predicted by extrapolated contours nor by horizontal or vertical continuation, may be filled on the basis of the mean sample value of the pixels of the reconstructed area.
  • the following equation formulates one possible embodiment for the derivation of the mean sample value s_m based on the sample values s(p_i) for all pixels p_i in the reconstructed area A_r:

    s_m = (1 / |A_r|) * Σ_{p_i ∈ A_r} s(p_i)

    wherein |A_r| denotes the number of pixels in A_r and w denotes the linear size of an edge of the current block, as indicated in Figure 3.
  • the difference to the HEVC intra DC mode is that the mean sample value of the entire reconstructed area as defined above may be used instead of just the mean value of the adjacent sample row/column. In this way, embodiments of the invention can cope better with local fluctuations of the signal.
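A sketch of this mean fill over the reconstructed area. The exact shape of A_r assumed here (w rows above the block spanning 2w columns, plus the w x w region to its left) is an illustrative choice consistent with w being the edge length of the current block, not a definition from the application:

```python
import numpy as np

def reconstructed_area_mean(frame, x0, y0, w):
    """Mean sample value s_m over an assumed L-shaped reconstructed area
    around the current w x w block at (x0, y0), as opposed to HEVC intra
    DC, which averages only the directly adjacent sample row/column."""
    above = frame[y0 - w:y0, x0 - w:x0 + w]  # w rows above, 2w columns wide
    left = frame[y0:y0 + w, x0 - w:x0]       # w x w region left of the block
    samples = np.concatenate([above.ravel(), left.ravel()])
    return samples.mean()
```

Averaging over the whole area (3w^2 samples in this sketch) damps local fluctuations in the boundary row/column, which is the advantage over the DC mode claimed in the text.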
  • smooth areas may be filled with the conventional intra prediction of HEVC.
  • differential equations or partial differential equations may be applied to fill smooth areas.
  • the areas may be predicted by some value which among other possibilities may be fixed, may be derived based on the coded video signal or based on other syntax elements or may be signaled as part of the bit stream (e.g. as part of the SPS, VPS, PPS, SEI messages, slice header, CU, PU or TU syntax).
  • diffusion equations may be applied to fill smooth areas.
  • FIG. 5 is a schematic diagram of a network element 500 (e.g., a computer, server, smartphone, tablet computer, etc.) configured to implement the embodiments disclosed above, i.e. configured to comprise an encoding apparatus 605 with a prediction module 620 and/or a decoding apparatus 700 with a prediction module 740.
  • the network element 500 comprises ports 510, transceiver units (Tx/Rx) 520, a processor 530, and a memory 540 comprising a coding module 550 (e.g., a contour-based multidirectional intra prediction module).
  • Ports 510 are coupled to Tx/Rx 520, which may be transmitters, receivers, or combinations thereof.
  • the Tx/Rx 520 may transmit and receive data via the ports 510.
  • Processor 530 is configured to process data.
  • Memory 540 is configured to store data and instructions for implementing embodiments described herein.
  • the network element 500 may also comprise electrical-to-optical (EO) components and optical-to-electrical (OE) components coupled to the ports 510 and Tx/Rx 520 for receiving and transmitting electrical signals and optical signals.
  • the processor 530 may be implemented by hardware and software.
  • the processor 530 may be implemented as one or more central processing unit (CPU) chips, logic units, cores (e.g., as a multi-core processor), field-programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), and digital signal processors (DSPs).
  • the processor 530 is in communication with the ports 510, Tx/Rx 520, and memory 540.
  • the memory 540 can comprise one or more of disks, tape drives, solid-state drives, or other kinds of memory and may be used as an overflow data storage device, to store programs when such programs are selected for execution, and to store instructions and data that are read during program execution.
  • the memory 540 may be volatile and nonvolatile and may be read-only memory (ROM), random-access memory (RAM), ternary content-addressable memory (TCAM), and static random-access memory (SRAM).
  • Coding module 550 is implemented by processor 530 to execute the instructions for implementing various embodiments previously discussed.
  • FIG. 6 illustrates an embodiment of an encoding apparatus or video encoder 605.
  • the video encoder 605 may comprise a rate-distortion optimization (RDO) module 610, the prediction module 620, a transform module 630, a quantization module 640, an entropy encoder 650, a de-quantization module 660, an inverse transform module 670, and a reconstruction module 680 arranged as shown in Figure 6.
  • the video encoder 605 may receive an input video signal comprising a sequence of video pictures (or slices or frames).
  • a picture may refer to any of a predicted picture (P-picture), an intra-coded picture (I-picture), or a bi-predictive picture (B-picture).
  • a slice may refer to any of a P-slice, an I-slice, or a B-slice.
  • the RDO module 610 may be configured to coordinate or make logic decisions for one or more of other modules. For example, based on one or more previously encoded pictures, the RDO module 610 may determine how a current picture (or slice) being encoded is partitioned into a plurality of blocks, e.g. CUs, and how a CU is partitioned into one or more PUs and TUs. As noted above, CU, PU, and TU are various types of blocks used in HEVC. In addition, the RDO module 610 may determine how the current picture is to be predicted.
  • the current picture may be predicted via inter and/or intra prediction.
  • for intra prediction, there are a plurality of available prediction modes or directions in HEVC (e.g., 34 modes for the Y component and six modes (including the LM mode) for the U or V component), and an optimal mode may be determined by the RDO module 610.
  • the RDO module 610 may calculate a sum of absolute error (SAE) for each prediction mode, and select a prediction mode that results in the smallest SAE.
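The SAE-based mode decision can be sketched as follows; the function name is illustrative, and a full RDO would weigh the prediction error against the signaling cost rather than use raw SAE alone:

```python
import numpy as np

def best_mode_by_sae(original_block, prediction_blocks):
    """Pick the index of the prediction block that minimizes the sum of
    absolute error (SAE) against the original block."""
    saes = [int(np.abs(original_block - p).sum()) for p in prediction_blocks]
    return int(np.argmin(saes)), saes
```

Each candidate intra mode contributes one prediction block; the mode with the smallest SAE is selected for coding.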
  • the prediction module 620 is configured to implement at least some of the inventive concepts disclosed above to generate a prediction block for a current block from the input video signal.
  • the prediction block comprises a plurality of predicted pixel samples, each of which may be generated based on a plurality of reconstructed luma samples located in a corresponding reconstructed luma block, and a plurality of reconstructed chroma samples located in a corresponding reconstructed chroma block.
  • the prediction block may be subtracted from the current block, or vice versa, to generate a residual block.
  • the residual block may be fed into the transform module 630, which may convert residual samples into a matrix of transform coefficients via a two-dimensional orthogonal transform, such as a discrete cosine transform (DCT). Then, the matrix of transform coefficients may be quantized by the quantization module 640 before being fed into the entropy encoder 650.
  • the quantization module 640 may alter the scale of the transform coefficients and round them to integers, which may reduce the number of non-zero transform coefficients. As a result, a compression ratio may be increased.
  • Quantized transform coefficients may be scanned and encoded by the entropy encoder 650 into an encoded bit stream.
  • the quantized transform coefficients may also be fed into the de-quantization module 660 to recover the original scale of the transform coefficients. Then, the inverse transform module 670 may perform the inverse of the transform module 630 and generate a noisy version of the original residual block. Then, the lossy residual block may be fed into the reconstruction module 680, which may generate reconstructed samples for the prediction of future blocks. If desired, filtering may be performed on the reconstructed samples before they are used for the prediction.
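The scale-and-round step and its inverse described above can be sketched with a uniform scalar quantizer. HEVC's actual quantizer additionally uses QP-dependent scaling and rounding offsets; the plain step size here is a simplification:

```python
import numpy as np

def quantize(coeffs, step):
    """Uniform scalar quantization: scale the transform coefficients and
    round to integers, driving small coefficients to zero (a larger step
    yields more zeros and a higher compression ratio)."""
    return np.round(coeffs / step).astype(int)

def dequantize(levels, step):
    """Recover the approximate original scale of the coefficients; the
    rounding error introduced by quantize() is the lossy part."""
    return levels * step
```

The de-quantized coefficients feed the inverse transform, yielding the noisy residual that the reconstruction module adds back to the prediction, exactly the loop the text describes.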
  • Figure 6 may be a simplified illustration of a video encoder, thus it may include only some of the modules present in the video encoder. Other modules (e.g., filter, scanner, and transmitter), although not shown in Figure 6, may also be included to facilitate video encoding as understood by one of skill in the art.
  • some of the modules in the video encoder 605 may be omitted. For example, in lossless encoding of certain video content, no information loss may be allowed, thus the quantization module 640 and the de-quantization module 660 may be omitted. For another example, if the residual block is encoded directly without being converted to transform coefficients, the transform module 630 and the inverse transform module 670 may be omitted.
  • the encoded bitstream may be configured to include other information, such as video resolution, picture rate, block partitioning information (sizes, coordinates), prediction modes, etc., so that the encoded sequence of video pictures may be properly decoded by the video decoder 700.
  • Figure 7 illustrates an embodiment of the decoding apparatus or video decoder 700.
  • the video decoder 700 may correspond to the video encoder 605 of Figure 6 and may comprise an entropy decoder 710, a de-quantization module 720, an inverse transform module 730, the prediction module 740, and a reconstruction module 750 arranged as shown in Figure 7.
  • an encoded bit stream containing information of a sequence of video pictures may be received by the entropy decoder 710, which may decode the bit stream to an uncompressed format.
  • a matrix of quantized transform coefficients may be generated, which may then be fed into the de-quantization module 720, which may be the same or similar to the de-quantization module 660 in Figure 6.
  • output of the de-quantization module 720 may be fed into the inverse transform module 730, which may convert transform coefficients to residual values of a residual block.
  • information containing a prediction mode of the current block may also be decoded by the entropy decoder 710.
  • the prediction module 740 is configured to generate a prediction block for the current block based on at least some of the inventive concepts disclosed above.
  • the apparatus 605 is configured to provide the encoded video signal in the form of a bit stream, wherein the bit stream comprises information about the parametric model for approximating the shape of the at least one contour in the reconstructed area.
  • the encoding apparatus 605 is configured to indicate whether the coding mode according to embodiments of the invention or whether a different coding mode is used for providing the encoded bit stream. In an embodiment, the encoding apparatus 605 is configured to select different coding modes for different blocks, i.e. coding units. In an embodiment, the encoding apparatus 605 is configured to set for each coding unit a flag for indicating that for the respective coding unit the coding mode according to embodiments of the invention is used.
  • the encoding apparatus 605 is configured to provide for each coding unit array indices x0, y0, which specify the location (x0, y0) of the top-left luma sample of the considered coding block relative to the top-left luma sample of the picture.
  • syntax elements could be signaled accordingly in the VPS, slice header, as SEI message, etc.
  • syntax elements may be specified with semantics as elaborated in the following:
  • the following table defines an exemplary syntax for the signaling of the mode usage of the contour-based multidirectional intra coding mode according to an embodiment and based on the signaling syntax defined for instance in ITU-T Recommendation H.265 / ISO/IEC 23008-2:2015 MPEG-H Part 2: High Efficiency Video Coding (HEVC). The newly introduced fields are described below.
  • Table 3 Exemplary syntax related to the proposed technologies as part of the SPS
  • multi_intra_flag[x0][y0] equal to 1 specifies that the proposed coding mode is used to code the current coding unit.
  • multi_intra_flag[x0][y0] equal to 0 specifies that the coding unit is not coded with the proposed coding mode.
  • the array indices x0, y0 specify the location (x0, y0) of the top-left luma sample of the considered coding block relative to the top-left luma sample of the picture.
  • multi_intra_enabled_flag 1 specifies that the proposed coding mode is available for slices referring to the corresponding PPS/SPS and that associated syntax elements are present in the bitstream.
  • multi_intra_enabled_flag 0 specifies that the proposed coding mode is not available for slices referring to the corresponding PPS/SPS and that associated syntax elements are not present in the bitstream.
  • multi_intra_parameter_t specifies the value for the threshold t as described above.
  • multi_intra_parameter_dmax specifies the value of the parameter dmax as described above.
  • multi_intra_parameter_polynomial_type specifies the type of the polynomial parameterization as described above.
  • multi_intra_parameter_diminishing_value specifies the target value for the diminishing as described above.
  • multi_intra_parameter_hor_ver_continuation_stop specifies the criterion based on which the horizontal and vertical sample value continuation is stopped as described above.
  • Figure 9 shows a schematic diagram illustrating a method 900 for encoding a video signal, wherein the video signal comprises a plurality of frames, each frame is dividable into a plurality of blocks, each block comprises a plurality of pixels, and each pixel is associated with at least one pixel value.
  • the encoding method 900 comprises a step 901 of generating a prediction block for a current, i.e. currently processed, block on the basis of a reconstructed area comprising at least one already generated reconstructed block adjacent to the current block, by detecting at least one contour in the reconstructed area, wherein the contour is associated with a subset of the plurality of pixels of the at least one already generated reconstructed block, and generating the prediction block for the current block on the basis of the at least one contour.
  • Figure 10 shows a schematic diagram illustrating a corresponding method 1000 for decoding an encoded bit stream based on a video signal, wherein the video signal comprises a plurality of frames, each frame is dividable into a plurality of blocks, each block comprises a plurality of pixels, and each pixel is associated with at least one pixel value.
  • the decoding method 1000 comprises a step 1001 of generating a prediction block for a current, i.e. currently processed, block on the basis of a reconstructed area comprising at least one already generated reconstructed block adjacent to the current block, by detecting at least one contour in the reconstructed area, wherein the contour is associated with a subset of the plurality of pixels of the at least one already generated reconstructed block, and generating the prediction block for the current block on the basis of the at least one contour.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention relates to an apparatus (605) for encoding a video signal. The video signal comprises a plurality of frames, each frame is dividable into a plurality of blocks, each block comprises a plurality of pixels, and each pixel is associated with at least one pixel value. The apparatus (605) comprises a prediction module (620) for intra prediction, configured to generate a prediction block for a current block on the basis of a reconstructed area comprising at least one already generated reconstructed block adjacent to the current block. The prediction module (620) is configured to detect at least one contour in the reconstructed area, the contour being associated with a subset of the plurality of pixels of said already generated reconstructed block, and to generate the prediction block for the current block on the basis of the at least one contour. The invention further relates to a corresponding decoding apparatus and corresponding methods.
PCT/EP2016/066988 2015-12-07 2016-07-15 Devices and methods for video coding using intra prediction WO2017097441A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201562263759P 2015-12-07 2015-12-07
US62/263,759 2015-12-07

Publications (1)

Publication Number Publication Date
WO2017097441A1 true WO2017097441A1 (fr) 2017-06-15

Family

ID=56413684

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2016/066988 WO2017097441A1 (fr) Devices and methods for video coding using intra prediction

Country Status (1)

Country Link
WO (1) WO2017097441A1 (fr)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110274166A1 (en) * 2009-01-29 2011-11-10 Lg Electronics Inc. Method And Apparatus For Processing Video Signals Using Boundary Intra Coding
US20140247871A1 (en) * 2011-11-11 2014-09-04 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Adaptive partition coding

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110274166A1 (en) * 2009-01-29 2011-11-10 Lg Electronics Inc. Method And Apparatus For Processing Video Signals Using Boundary Intra Coding
US20140247871A1 (en) * 2011-11-11 2014-09-04 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Adaptive partition coding

Non-Patent Citations (12)

* Cited by examiner, † Cited by third party
Title
C. A. ROTHWELL; J. L. MUNDY; W. HOFFMAN; V.-D. NGUYEN: "Driving Vision by Topology", PROCEEDINGS OF INTERNATIONAL SYMPOSIUM ON COMPUTER VISION (ISCV, 1995, pages 395 - 400, XP010151111, DOI: doi:10.1109/ISCV.1995.477034
C. R. RAO; H. TOUTENBURG; SHALABH, C. HEUMANN; M. SCHOMAKER: "Linear Models and Generalizations: Least Squares and Alternatives", 2007, SPRINGER
D. LIU; X. SUN; F. WU; Y.-Q. ZHANG: "Edge-oriented Uniform Intra Prediction", IEEE TRANS. IMAGE PROCESS, vol. 17, no. 10, October 2008 (2008-10-01), pages 1827 - 36, XP011234202, DOI: doi:10.1109/TIP.2008.2002835
DONG LIU ET AL: "Edge-Oriented Uniform Intra Prediction", IEEE TRANSACTIONS ON IMAGE PROCESSING, IEEE SERVICE CENTER, PISCATAWAY, NJ, US, vol. 17, no. 10, 1 October 2008 (2008-10-01), pages 1827 - 1836, XP011248122, ISSN: 1057-7149 *
DONG LIU; XIAOYAN SUN; FENG WU; SHIPENG LI; YA-QIN ZHANG: "Image Compression With Edge-Based Inpainting", IEEE TRANS. CIRCUITS SYST. VIDEO TECHNOL., vol. 17, no. 10, October 2007 (2007-10-01), pages 1273 - 1287, XP011193147, DOI: doi:10.1109/TCSVT.2007.903663
H. ASHERI ET AL.: "Multi-directional Spatial Error Concealment using Adaptive Edge Thresholding", IEEE TRANS. CONSUM. ELECTRON., vol. 58, no. 3, August 2012 (2012-08-01), pages 880 - 885, XP011465104, DOI: doi:10.1109/TCE.2012.6311331
J. CANNY: "A Computational Approach to Edge Detection", IEEE TRANS. PATTERN ANAL. MACH. INTELL., vol. 8, no. 6, November 1986 (1986-11-01), pages 679 - 698, XP000604891, DOI: doi:10.1109/TPAMI.1986.4767851
M. FANG; G. X. YUE; Q. C. YU: "The Study on an Application of Otsu Method in Canny Operator", PROCEEDINGS OF THE 2009 INTERNATIONAL SYMPOSIUM ON INFORMATION PROCESSING (ISIP, 2009
N. OTSU: "A Threshold Selection Method from Gray-Level Histograms", IEEE TRANS. SYST. MAN CYBERN., vol. 9, no. 1, 1979, pages 62 - 66
O. C. AU; S.-H. G. CHAN: "Edge-Directed Error Concealment", IEEE TRANS. CIRCUITS SYST. VIDEO TECHNOL., vol. 20, no. 3, March 2010 (2010-03-01), pages 382 - 395, XP011284298, DOI: doi:10.1109/TCSVT.2009.2035839
S. SUZUKI; K. ABE: "Topological Structural Analysis of Digitized Binary Images by Border Following", COMPUT. VISION, GRAPH. IMAGE PROCESS., vol. 30, no. 1, April 1985 (1985-04-01), pages 32 - 46, XP001376400
Y. YUAN; X. SUN: "Edge Information Based Effective Intra Mode Decision Algorithm", 2012 IEEE INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING, COMMUNICATION AND COMPUTING (ICSPCC, 2012, pages 628 - 633, XP032256608, DOI: doi:10.1109/ICSPCC.2012.6335602

Similar Documents

Publication Publication Date Title
CN112823518B (zh) Apparatus and method for inter prediction of geometric partitioning of a coding block
CN112534818B (zh) Machine-learning-based adaptation of coding parameters for video coding using motion and object detection
CN111819852B (zh) Method and apparatus for residual sign prediction in the transform domain
CN111107356B (zh) Picture prediction method and apparatus
CN113545040B (zh) Weighted prediction method and apparatus for multi-hypothesis coding
US20190387222A1 Method and apparatus for intra chroma coding in image and video coding
CN112040229B (zh) Video decoding method, video decoder, and computer-readable storage medium
WO2020232845A1 (fr) Inter-frame prediction device and method
EP3893510B1 (fr) Video picture encoding and decoding method and apparatus
KR20210088697A (ko) Encoder, decoder, and corresponding deblocking filter adaptation methods
KR20220160038A (ko) Methods for signaling video coding data
CN111355959B (zh) Picture block partitioning method and apparatus
US20230388490A1 Encoding method, decoding method, and device
CN115349257A (zh) Use of DCT-based interpolation filters
CN110868590B (zh) Picture partitioning method and apparatus
US20230090025A1 Methods and systems for performing combined inter and intra prediction
CN110944184A (zh) Video decoding method and video decoder
WO2017097441A1 (fr) Devices and methods for video coding using intra prediction
CN110958452A (zh) Video decoding method and video decoder
US20220060734A1 Intra prediction methods in video coding
CN111770337B (зh) Video encoding method, video decoding method, and related devices
CN116708787A (зh) Encoding and decoding method and apparatus
CN116647683A (зh) Quantization processing method and apparatus
Hwang Enhanced Coding Tools and Algorithms for Screen Content Video: A Review
CN116134817A (зh) Motion compensation using sparse optical flow representation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16739168

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16739168

Country of ref document: EP

Kind code of ref document: A1