ZA200205507B - A method for filtering digital images, and a filtering device.

Publication number
ZA200205507B
Authority
ZA
South Africa
Prior art keywords
filtering
block
boundary
current block
blocks
Application number
ZA200205507A
Inventor
Jari Lainema
Bogdan-Paul Dobrin
Marta Karczewicz
Original Assignee
Nokia Corp
Application filed by Nokia Corp
Publication of ZA200205507B (en)

Description

A method for filtering digital images, and a filtering device

The present invention relates to a method for reducing visual artefacts in a digital image, which is encoded and decoded by blocks, in which filtering is performed to reduce visual artefacts due to a boundary between a current block and an adjacent block.
The present invention also relates to a device for reducing visual artefacts in a digital image, which is encoded and decoded by blocks, the device comprising means for performing filtering to reduce visual artefacts due to a boundary between a current block and an adjacent block.
The present invention also relates to an encoder comprising means for coding and means for locally decoding a digital image by blocks, which encoder comprises means for performing filtering to reduce visual artefacts due to a boundary between a current block and an adjacent block.
The present invention also relates to a decoder comprising means for decoding a digital image by blocks, which decoder comprises means for performing filtering to reduce visual artefacts due to a boundary between a current block and an adjacent block.
The present invention also relates to a terminal comprising an encoder, which comprises means for coding and means for locally decoding a digital image by blocks, means for performing filtering to reduce visual artefacts due to a boundary between a current block and an adjacent block.
The present invention further relates to a terminal comprising means for decoding a digital image by blocks, means for performing filtering to reduce visual artefacts due to a boundary between a current block and an adjacent block.
The present invention further relates to a storage medium for storing a software program comprising machine executable steps for coding and locally decoding a digital video signal by blocks, and for performing filtering to reduce visual artefacts due to a boundary between a current block and an adjacent block.
The present invention further relates to a storage medium for storing a software program comprising machine executable steps for decoding a digital video signal by blocks, and for performing filtering to reduce visual artefacts due to a boundary between a current block and an adjacent block.
An arrangement like that shown in Figure 1 is generally used for transferring a digital video sequence in compressed form. The digital video sequence is formed of sequential images, often referred to as frames. In some prior art digital video transmission systems, for example ITU-T H.261/H.263 recommendations, at least three frame types are defined: an I-frame (intra), a P-frame (predicted or inter), and a B-frame (bi-directional). The I-frame is generated solely on the basis of information contained in the image itself, wherein at the receiving end, this I-frame can be used to form the entire image. P-frames are formed on the basis of a preceding I-frame or P-frame, wherein at the receiving stage a preceding I-frame or P-frame is correspondingly used together with the received P-frame in order to reconstruct the image. In the composition of P-frames, for instance motion compensation is used to compress the quantity of information. B-frames are formed on the basis of one or more preceding P-frames or I-frames and/or one or more following P- or I-frames.
The frames are further divided into blocks. One frame can comprise different types of blocks. A predicted frame (e.g. inter frame) may also contain blocks that are not predicted. In other words, some blocks of a
P-frame may in fact be intra coded. Furthermore, some video coders may use the concept of independent segment decoding in which case several blocks are grouped together to form segments that are then coded independently from each other. All the blocks within a certain segment are of the same type. For example, if a P-frame is composed mainly of predicted blocks and some intra-coded blocks, the frame can be considered to comprise at least one segment of intra blocks and at least one segment of predicted blocks.

As is well known, a digital image comprises an array of image pixels. In the case of a monochrome image, each pixel has a pixel value within a certain range (e.g. 0 - 255), which denotes the pixel’s luminance. In a colour image, pixel values may be represented in a number of different ways. In a commonly used representation, referred to as the RGB colour model, each pixel is described by three values, one corresponding to the value of a Red colour component, another
corresponding to the value of a Green colour component and the third corresponding to the value of a Blue colour component. Numerous other colour models exist, in which alternative representations are used. In one such alternative, known as the YUV colour model, image pixels are represented by a luminance component (Y) and two chrominance or colour difference components (U, V), each of which has an associated pixel value.
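For illustration, a conversion from RGB to YUV-style components could be sketched as follows. The BT.601 luma weights and the offset of 128 for the chrominance planes are assumptions made for this example; the text above does not fix any particular coefficients.

```python
import numpy as np

def rgb_to_yuv(rgb: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 array of 8-bit RGB pixels to Y, U, V planes.

    Illustrative sketch only: assumes the ITU-R BT.601 luma weights and
    an offset of 128 for the colour difference (chrominance) components.
    """
    r = rgb[..., 0].astype(np.float64)
    g = rgb[..., 1].astype(np.float64)
    b = rgb[..., 2].astype(np.float64)
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance
    u = 0.564 * (b - y) + 128.0             # blue colour difference
    v = 0.713 * (r - y) + 128.0             # red colour difference
    return np.stack([y, u, v], axis=-1)
```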
Generally, colour models that employ luminance and chrominance components provide a more efficient representation of a colour image than the RGB model. It is also known that the luminance component of such colour models generally provides the most information about the perceived structure of an image. Among other things, this allows the chrominance components of an image to be spatially sub-sampled without a significant loss in perceived image quality. For these reasons colour models that employ a luminance / chrominance representation are favoured in many applications, particularly those in which data storage space, processing power or transmission bandwidth is limited.
As stated above, in the YUV colour model, an image is represented by a luminance component and two chrominance components. Typically, the luminance information in the image is transformed with full spatial resolution. Both chrominance signals are spatially subsampled, for example a field of 16 x 16 pixels is subsampled into a field of 8 x 8 pixels. The differences in the block sizes are primarily due to the fact that the eye does not discern changes in chrominance as well as changes in luminance, wherein a field of 2 x 2 pixels is encoded with the same chrominance value.

Typically, image blocks are grouped together to form macroblocks. The macroblock usually contains 16 pixels by 16 rows of luminance samples, mode information, and possible motion vectors. The macroblock is divided into four 8 x 8 luminance blocks and into two 8 x 8 chrominance blocks. Scanning (and encoding/decoding) proceeds macroblock by macroblock, conventionally from the top-left to the bottom-right corner of the frame. Inside one macroblock the scanning
(and encoding/decoding) order is from the top-left to the bottom-right corner of the macroblock.
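The 16 x 16 to 8 x 8 chrominance subsampling mentioned above can be sketched as below. The use of simple 2 x 2 averaging is an assumption made for the example, since the text does not prescribe how the single chrominance value for each 2 x 2 field is obtained.

```python
import numpy as np

def subsample_chroma_2x2(chroma: np.ndarray) -> np.ndarray:
    """Subsample a chrominance field by averaging each 2 x 2 field of pixels.

    Each 2 x 2 group of pixels ends up represented by a single chrominance
    value, mirroring the description above.  The choice of averaging
    (rather than, say, dropping samples) is an illustrative assumption.
    """
    h, w = chroma.shape
    return chroma.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# A 16 x 16 macroblock-sized chrominance field becomes an 8 x 8 block.
mb_chroma = np.arange(16 * 16, dtype=float).reshape(16, 16)
print(subsample_chroma_2x2(mb_chroma).shape)   # (8, 8)
```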
Referring to Figure 1, which illustrates a typical encoding and decoding system (codec) used, for example, in the transmission of digital video, a current video frame to be coded comes to the transmission system 10 as input data In(x,y). The input data In(x,y) typically takes the form of pixel value information. In the differential summer 11 it is transformed into a prediction error frame En(x,y) by subtracting from it a prediction frame Pn(x,y) formed on the basis of a previous image. The prediction error frame is coded in block 12 in the manner described hereinafter, and the coded prediction error frame is directed to a multiplexer 13. To form a new reconstructed frame, the coded prediction error frame is also directed to a decoder 14, which produces a decoded prediction error frame Ẽn(x,y) which is summed in a summer 15 with the prediction frame Pn(x,y), resulting in a reconstructed frame Ĩn(x,y). The reconstructed frame is saved in a frame memory 16. To code the next frame, the reconstructed frame saved in the frame memory is read as a reference frame Rn(x,y) and is transformed into a new prediction frame Pn(x,y) in a motion compensation and prediction block 17 according to the formula:

Pn(x,y) = Rn[x + Dx(x,y), y + Dy(x,y)]    (1)

The pair of numbers [Dx(x,y), Dy(x,y)] is called the motion vector of the pixel at location (x,y) and the numbers Dx(x,y) and Dy(x,y) are the horizontal and vertical shifts of the pixel. They are calculated in a motion estimation block 18. The set of motion vectors [Dx(·), Dy(·)] consisting of all motion vectors related to the pixels of the frame to be compressed is also coded using a motion model comprising basis functions and coefficients. The basis functions are known to both the encoder and the decoder. The coefficient values are coded and directed to the multiplexer 13, which multiplexes them into the same data stream with the coded prediction error frame for sending to a receiver. In this way the amount of information to be transmitted is dramatically reduced.
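As a rough illustration of formula (1), a prediction frame can be built from a reference frame and per-pixel motion vectors as sketched below. Integer-pixel displacements and clamping at the frame border are simplifying assumptions made here; practical codecs use sub-pixel interpolation and block-wise vectors.

```python
import numpy as np

def predict_frame(ref: np.ndarray, dx: np.ndarray, dy: np.ndarray) -> np.ndarray:
    """Form Pn(x,y) = Rn[x + Dx(x,y), y + Dy(x,y)] per formula (1).

    ref    : reference frame Rn, shape (H, W)
    dx, dy : per-pixel horizontal / vertical displacements, shape (H, W)
    Integer displacements and clamping at the frame border are assumed
    for simplicity.
    """
    h, w = ref.shape
    ys, xs = np.mgrid[0:h, 0:w]
    sx = np.clip(xs + dx.astype(int), 0, w - 1)   # x + Dx(x,y)
    sy = np.clip(ys + dy.astype(int), 0, h - 1)   # y + Dy(x,y)
    return ref[sy, sx]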
Some frames can be partly, or entirely, so difficult to predict using only the reference frame Rn(x,y) that it is not practical to use motion compensated prediction when coding them. These frames or parts of frames are coded using intra-coding without any prediction from the reference frame Rn(x,y), and accordingly motion vector information relating to them is not sent to the receiver. In prior art another kind of prediction may be employed for I-frames or those parts of P-frames which are intra-coded, namely intra prediction. In this case, the reference is formed by the previously decoded and reconstructed blocks which are part of the same frame (or slice if independent segment decoding is used).
In the receiver 20, a demultiplexer 21 separates the coded prediction error frames and the motion information transmitted by the motion vectors and directs the coded prediction error frames to a decoder 22, which produces a decoded prediction error frame Ẽn(x,y), which is summed in a summer 23 with the prediction frame Pn(x,y) formed on the basis of a previous frame, resulting in a decoded and reconstructed frame Ĩn(x,y). The decoded frame is directed to an output 24 of the decoder and at the same time saved in a frame memory 25. When decoding the next frame, the frame saved in the frame memory is read as a reference frame Rn(x,y) and transformed into a new prediction frame in the motion compensation and prediction block 26, according to formula (1) presented above.
The coding method used in the coding of prediction error frames and in the intra-coding of a frame or part of a P-frame to be sent without using motion prediction, is generally based on a transformation, the most common of which is the Discrete Cosine Transform (DCT). The frame is divided into adjacent blocks having a size of e.g. 8 x 8 pixels.
The transformation is calculated for the block to be coded, resulting in a series of terms. The coefficients of these terms are quantized on a discrete scale in order that they can be processed digitally. Quantization causes rounding errors, which can become visible in an image reconstructed from blocks, so that there is a discontinuity of pixel values at the boundary between adjacent blocks. Because a certain decoded frame is used to calculate the prediction frame for subsequent predicted (P) frames, these errors can be propagated in sequential frames, thus causing visible edges in the image reproduced by the receiver. Image errors of this type are called blocking artefacts.
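The mechanism can be demonstrated in a few lines: two neighbouring 8 x 8 blocks of a smooth ramp are transformed, quantized and reconstructed independently of each other, and a visible step appears at their shared boundary. The uniform quantizer and the test signal are illustrative assumptions, not taken from the text.

```python
import numpy as np
from scipy.fft import dctn, idctn

def code_block(block: np.ndarray, step: float) -> np.ndarray:
    """DCT -> uniform quantization -> inverse DCT of one 8 x 8 block."""
    coeffs = dctn(block, norm="ortho")
    coeffs = np.round(coeffs / step) * step      # quantization rounding error
    return idctn(coeffs, norm="ortho")

# Smooth horizontal ramp split into two adjacent 8 x 8 blocks.
ramp = np.tile(np.arange(16, dtype=float), (8, 1))
left = code_block(ramp[:, :8], step=16.0)
right = code_block(ramp[:, 8:], step=16.0)

jump_inside = np.mean(np.abs(left[:, 4] - left[:, 3]))      # step inside a block
jump_boundary = np.mean(np.abs(right[:, 0] - left[:, -1]))  # step across the boundary
# With these settings the step across the block boundary comes out
# noticeably larger than the steps inside the blocks, even though the
# original signal changes by the same amount everywhere.
print(jump_inside, jump_boundary)
```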
Furthermore, if intra-prediction is used, blocking artefacts may also propagate from block to block within a given frame. In this case blocking artefacts typically lead to visual effects which are specific to the type of intra prediction used. It should therefore be appreciated that there exists a significant technical problem relating to the spatial and temporal propagation of blocking artefacts in digital images that are coded for transmission and subsequently decoded.
The principles presented above are also applicable to a situation where segmented frames are used. In that case the coding and decoding is performed in segments of the frame, according to the type of blocks in each segment.
It should also be noted that although the preceding discussion and much of the following description concentrates on application of the invention to image sequences, such as digital video, the method according to the invention may also be applied to individual digital images (i.e. still images). Essentially, the method according to the invention may be applied to any digital image that is encoded and/or decoded on a block-by-block basis using any encoding/decoding method.
Furthermore, the method according to the invention may be applied to any luminance or colour component of a digital image. Taking the example of an image represented using a YUV colour model, as introduced above, the method according to the invention may be applied to the luminance (Y) component, to either chrominance component (U or V), to both chrominance components (U and V), or to all three components (Y, U and V). In this case, where it is known that the luminance component provides more perceptually important information relating to image structure and content, it may be sufficient to apply the method according to the invention only to the luminance component, but there is no limitation on the number or combination of luminance / colour / colour difference components to which the method according to the invention may be applied.
Some prior art methods are known for removing blocking artefacts.
These methods are characterized by the following features:
- determining which pixels require value correction in order to remove a blocking artefact,
- determining a suitable low-pass filtering for each pixel to be corrected, based on the values of other pixels contained by a filtering window placed around the pixel,
- calculating a new value for the pixel to be corrected, and
- rounding the new value to the closest digitized pixel value.
Factors that influence the selection of a filter and the decision whether to use filtering can be, for example, the difference between the values of pixels across the block boundary, the size of the quantization step of the coefficients received as the transformation result, and the difference between the pixel values on different sides of the pixel being processed.
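A deliberately simplified sketch along these lines, for a vertical block boundary, is given below. The one-pixel-per-side correction, the specific update rule and the threshold tied to the quantization step are illustrative assumptions, not the specific prior art filters referred to above.

```python
import numpy as np

def filter_vertical_boundary(frame: np.ndarray, col: int, qp: int) -> None:
    """Filter the pixel columns on either side of a vertical block boundary
    lying between columns col-1 and col (illustrative sketch only).

    A pixel pair is corrected only when the difference across the boundary
    is non-zero but smaller than a threshold derived from the quantization
    step, so that strong edges belonging to the image itself are preserved.
    """
    threshold = 2 * qp                       # assumed relation to the quantization step
    for row in range(frame.shape[0]):
        a = int(frame[row, col - 1])         # last pixel of the left-hand block
        b = int(frame[row, col])             # first pixel of the right-hand block
        diff = b - a
        if diff != 0 and abs(diff) < threshold:
            # Move both pixels a quarter of the way towards their mean and
            # round the new values to the closest valid pixel value.
            frame[row, col - 1] = np.clip(int(round(a + diff / 4.0)), 0, 255)
            frame[row, col] = np.clip(int(round(b - diff / 4.0)), 0, 255)
```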
In prior art methods, the filtering of blocking and other types of visual artefacts is performed frame by frame, i.e. the whole frame is first decoded and then filtered. As a result, the effects of blocking artefacts easily propagate within a frame or from one frame to the next. This is especially true when predictive intra-coding is used. It has been found that prior art methods also tend to remove lines that belong to real features of the image. On the other hand, prior art methods are not always capable of removing all blocking or blocking-related artefacts.
A primary objective of the method according to the invention is to limit the propagation of blocking artefacts within frames and from one frame to another. Another objective of the present invention is to present a new kind of filtering arrangement for removing blocking and other blocking-related artefacts which are especially visible when predictive intra-coding is used. The invention also has the objective that the method and associated device operate more reliably and efficiently than prior art solutions.
The objectives of the invention are achieved by performing block boundary filtering substantially immediately after an image block is decoded and there is at least one block boundary available to be filtered. Among other things, this provides the advantage that the spatial and temporal propagation of blocking and other visual artefacts is limited to a greater degree than in prior art methods. Furthermore, the results of previous block boundary filtering operations can be utilised in the encoding, decoding and filtering of subsequent blocks. In other words, pixel values modified/corrected in connection with the filtering of one boundary are available for use when encoding and decoding other blocks and when filtering other blocks / block boundaries.
According to a first aspect of the invention, there is provided a method for reducing blocking and other visual artefacts (which are mainly caused by blocking artefacts) from a frame that has been coded by blocks, characterized in that the filtering is performed after the current block is decoded and there is a boundary available for filtering between the current block and a previously decoded block.
According to a second aspect of the invention, there is provided a device for implementing the method according to the invention. The device according to the invention is characterized in that the filtering is arranged to be performed after the current block is decoded and there is a boundary available for filtering between the current block and a previously decoded block.
According to a third aspect of the invention, there is provided an encoder for encoding digital images comprising a decoder that implements the method of the invention. The encoder according to the invention is characterized in that the filtering is arranged to be performed after the current block is locally decoded and there is a boundary available for filtering between the current block and a previously locally decoded block.
According to a fourth aspect of the invention, there is provided a decoder for decoding digital images that implements the method according to the invention. The decoder according to the invention is characterized in that the filtering is arranged to be performed after the current block is decoded and there is a boundary available for filtering between the current block and a previously decoded block.
According to a fifth aspect of the invention, there is provided a terminal having digital image transmission capability and implementing the method according to the invention. The terminal according to the invention is characterized in that the filtering is arranged to be performed after the current block is locally decoded and there is a boundary available for filtering between the current block and a previously locally decoded block.
According to a sixth aspect of the invention, there is provided a terminal having digital image decoding capability and implementing the method according to the invention. The terminal according to the invention is characterized in that the filtering is arranged to be performed after the current block is decoded and there is a boundary available for filtering between the current block and a previously decoded block.
According to a seventh aspect of the invention, there is provided a storage medium storing a software program comprising instructions implementing the method according to the invention. The storage medium according to the invention is characterized in that the software program further comprises machine executable steps for performing the filtering after the current block is locally decoded and there is a boundary available for filtering between the current block and a previously locally decoded block.
According to an eighth aspect of the invention, there is provided a storage medium storing a software program comprising machine executable steps for decoding a digital video signal by blocks and with instructions implementing the method according to the invention. The storage medium according to the invention is characterized in that the software program further comprises machine executable steps for performing the filtering after the current block is decoded and there is a boundary available for filtering between the current block and a previously decoded block.
Because blocking artefacts occur at block boundaries, it is advantageous to filter only pixels at block boundaries and the vicinity thereof. Edges that are part of the image itself can reside anywhere in the image area. In order that only pixels containing blocking artefacts are selected for corrective filtering and that the quality of edges that are part of the image is not affected during filtering, the following assumptions are made:
- changes in pixel value associated with edges that are part of the image are generally larger than those associated with blocking artefacts, and
- those edges within the image, where the pixel value change is small, do not suffer considerably from the rounding of the pixel value differences caused by filtering.

It should be noted that the method according to the invention can be applied regardless of the method chosen to perform the actual filtering operation and the number of pixels in the vicinity of the boundary chosen for filtering. The invention relates, among other things, to the stage at which filtering is performed in the encoding / decoding process and the manner in which filtering is applied, rather than to the precise details of any particular filter implementation that may be used.
Because an image to be coded is generally divided into blocks both vertically and horizontally, the image contains both vertical and horizontal block boundaries. With regard to vertical block boundaries, there are pixels to the right and left of the boundary, and with regard to horizontal block boundaries, there are pixels above and below the boundary. In general, the location of the pixels can be described as being on a first or a second side of the block boundary.
The method and associated device according to the invention significantly limit propagation of visual anomalies due to blocking artefacts from previously reconstructed blocks to subsequently reconstructed blocks within the same frame, or within the same segment, if independent segment decoding is used. Propagation of blocking artefacts from one frame to the next is also reduced. By using the method and device according to the invention a larger number of blocking and blocking-related artefacts can be removed without weakening the real edges within the image unreasonably.
In the following, the invention will be described in more detail with reference to the preferred embodiments and the accompanying drawings, in which
Figure 1 represents a digital video encoder and decoder according to prior art,
Figure 2 represents an advantageous block scanning order in the method according to a preferred embodiment of the invention,

Figure 3 represents an advantageous block boundary filtering order according to a preferred embodiment of the invention,
Figure 4 represents an advantageous filtering order in the preferred method according to the invention,
Figure 5 represents a digital image block transfer system for implementing the method according to the invention,
Figure 6 represents a flow diagram of the method according to the invention, and
Figure 7 is a schematic representation of a portable video telecommunications device implementing a method according to the invention.
In the following description of the invention and its preferred embodiments, reference will be made mostly to Figures 2 to 6.
In the following, the operation of a digital image decoder is described. In a digital image transfer system according to the invention, such as that illustrated in Figure 5, the block and block boundary scanning order used in the encoder and decoder are the same, i.e. known to both encoder and decoder. However, the particular scanning order chosen is not essential for implementation of the method according to the invention. Figure 2 shows an advantageous scanning order for a frame comprising groups of blocks, e.g. macroblocks. First, the top-left block
B1 is decoded, i.e. pixel values representing e.g. the luminance information of the block are reconstructed and saved into a frame buffer. Then, the top-right block B2 is decoded and saved into the frame buffer. Now, the first vertical boundary R12 between the top-left block B1 and top-right block B2 has a decoded block at both sides, so the boundary R12 can be filtered. Advantageously, only those pixel values which are changed by the filtering are updated in the frame buffer. There are many known methods for performing filtering on the block boundary. One prior art filtering method which can be implemented with the invention is disclosed in international patent publication WO 98/41025, which is to be considered as a reference here.
Next, the bottom-left block B3 of the group of blocks in question is decoded and saved into the frame buffer. Now, the first horizontal
boundary R13 between the top-left block B1 and bottom-left block B3 has a decoded block on both sides, wherein the first horizontal boundary R13 can also be filtered. In the filtering of the first vertical boundary R12 some pixel values near the first horizontal boundary R13 may have changed. Advantageously, these modified values are used in the filtering of the first horizontal boundary R13. This helps to further restrict propagation of visual artefacts from previously reconstructed blocks to subsequently reconstructed blocks within the same frame, or same segment, if independent segment encoding/decoding is used.
Now the fourth block B4 inside the macroblock is decoded and saved into the frame buffer. When decoding of the fourth block B4 is complete there exist two additional boundaries having a decoded block on either side: the second vertical boundary R34 and the second horizontal boundary R24. Therefore, both of said boundaries R34, R24 can now be filtered. In this advantageous embodiment of the method according to the invention, the filtering is performed such that the second vertical boundary R34 is filtered first, the filtering result is saved in the frame buffer, and the second horizontal boundary R24 is filtered subsequently. In a general case, if two boundaries (e.g. to the left of and above a current block) are filtered then the changed pixel values resulting from the first filtered boundary are used when filtering the other boundary. In an advantageous embodiment of the invention, for a certain block, filtering can be performed across one, two, three, four, or none of the boundaries of the block, depending on which scanning order is adopted for coding the blocks inside a frame, or segment. In the preferred mode of implementation, the order in which blocks are reconstructed is illustrated in Figure 2. As depicted, four blocks are grouped together to form macroblocks of 2x2 blocks. Scanning then proceeds macroblock by macroblock from the top-left to the bottom-right corner of the frame.
Inside one macroblock the scanning order is from the top-left to the bottom-right corner of the macroblock. Due to this particular scanning order, in the preferred mode of implementation a maximum of two boundaries (to the left of and/or above a block) become available for filtering when a block is reconstructed. The order in which block boundaries are advantageously examined for filtering is illustrated in Figure 3 (left boundary first and then the upper one). The remaining boundaries, i.e. to the right and below, are filtered only when reconstruction of an adjacent block to the right and, respectively, below is completed. In the event that the block is located at the frame border or at a segment border, the corresponding block boundary/boundaries is/are not filtered since there is no adjacent block to filter across the common boundary. In the preferred mode of implementation, the order in which the block boundaries are filtered inside one frame is illustrated in Figure 4 for a small frame size of 6x4 blocks. The numbers on the block boundaries represent the filtering order according to an advantageous embodiment of the present invention. In practical applications, the frame typically comprises more than 6x4 blocks, but it is clear from the description above how the preferred filtering order can be extended to frames and segments which comprise more (or less) than 6x4 blocks.
In general, the system described in this invention can be applied to block-based still image coding as well as to all kinds of coding in block-based video coders: I, P, B, coded and not-coded. The filtering process described in this invention works for any frame that is divided into NxM blocks for coding.
In an advantageous embodiment of the method according to the invention, filtering is applied only across those block boundaries that are adjacent to other previously reconstructed blocks, substantially as soon as a block is reconstructed (decoded). However, it is obvious that other steps can be performed between reconstruction of the block and the filtering of the block boundary/boundaries.

In the method according to the invention, filtering is applied only across block boundaries that are adjacent to previously reconstructed (decoded) blocks. In an advantageous embodiment of the invention, filtering is performed substantially as soon as a block is reconstructed and a boundary with a previously decoded block becomes available.
However, it is obvious that other steps can be performed between reconstruction of the block and filtering of the block boundary/boundaries. However, once a boundary becomes available, preferably that boundary is filtered before another block is decoded.
It is also possible that the filtering is not performed substantially immediately after a block is reconstructed and a boundary to be filtered exists. For example, in another embodiment of the present invention the filtering is performed when most of the blocks of the image are reconstructed. It is also possible that the filtering and reconstruction are performed sequentially. For example, a certain number of blocks are reconstructed and then the filtering is performed on such boundaries which are available for filtering.

If independent segment decoding is used, filtering is only applied across those block boundaries that are adjacent to previously reconstructed blocks belonging to the same segment as the current block.
A particularly advantageous feature of the method according to the invention is the fact that the result of filtering is made available to the digital image coding system before reconstructing a subsequent block in the frame. This is especially advantageous for predictive intra-coding where prediction of a block is performed with reference to previously reconstructed blocks within the same frame (or segment, if independent segment decoding is utilised). Thus, the present invention is advantageous when used in connection with any block coding scheme in which blocks are predicted from previously reconstructed blocks within the same frame or the same segment. It significantly reduces or prevents the propagation of visual artefacts from previously reconstructed blocks to subsequently reconstructed blocks within the same frame, or the same segment, if independent segment encoding/decoding is used. Reconstruction of a certain block is therefore dependent on filtered data derived from previously reconstructed blocks. Thus, in order to avoid encoder-decoder mismatch, the decoder should not only execute the same filtering
scheme, but should also perform filtering operations in the same order as the encoder. The method also reduces or prevents propagation of blocking artefacts from one frame to the next, because blocking artefacts in a frame used e.g. in inter frame prediction are reduced by the method according to the invention.
In the following, the transmission and reception of video frames in a video transmission system is described with reference to the digital image transfer system presented in Figure 5 and the flow diagram in
Figure 6. Operation of the block boundary filtering method according to the invention will first be described in connection with the encoder of the transmission system, in the situation where a frame of a digital image sequence is encoded in intra (I-frame) format and using some form of intra block prediction, in which a block of the intra frame may be coded with reference to other previously coded intra blocks within the same frame. Filtering of block boundaries in the decoder of the transmission system will then be described for a corresponding intra-coded frame received for decoding at the receiver. Finally, application of the block boundary filtering method according to the invention to inter-coded frames (P-frames) will be described.
Assuming that a frame is to be encoded in intra format using some form of intra prediction, encoding of the frame proceeds as follows. The blocks of the frame to be coded are directed one by one to the encoder 50 of the video transfer system presented in Figure 5. The blocks of the frame are received from a digital image source, e.g. a camera or a video recorder (not shown) at an input 27 of the image transfer system.
In a manner known as such, the blocks received from the digital image source comprise image pixel values. The frame can be stored temporarily in a frame memory (not shown), or alternatively, the encoder receives the input data directly block by block.
The blocks are directed one by one to a prediction method selection block 35 that determines whether the pixel values of the current block to be encoded can be predicted on the basis of previously intra-coded blocks within the same frame or segment. In order to do this, the
prediction method selection block 35 receives input from a frame buffer 33 of the encoder, which contains a record of previously encoded and subsequently decoded and reconstructed intra blocks. In this way, the prediction method selection block can determine whether prediction of the current block can be performed on the basis of previously decoded and reconstructed blocks. Furthermore, if appropriate decoded blocks are available, the prediction method selection block 35 can select the most appropriate method for predicting the pixel values of the current block, if more than one such method may be chosen. It should be appreciated that in certain cases, prediction of the current block is not possible because appropriate blocks for use in prediction are not available in the frame buffer 33. In the situation where more than one prediction method is available, information about the chosen prediction method is supplied to multiplexer 13 for further transmission to the decoder. It should also be noted that in some prediction methods, certain parameters necessary to perform the prediction are transmitted to the decoder. This is, of course, dependent on the exact implementation adopted and in no way limits the application of the block boundary filter according to the invention.
Pixel values of the current block are predicted in the intra prediction block 34. The intra prediction block 34 receives input concerning the chosen prediction method from the prediction method selection block 35 and information concerning the blocks available for use in prediction from frame buffer 33. On the basis of this information, the intra prediction block 34 constructs a prediction for the current block. The predicted pixel values for the current block are sent to a differential summer 28 which produces a prediction error block by taking the difference between the pixel values of the predicted current block and the actual pixel values of the current block received from input 27. Next, the error information for the predicted block is encoded in the prediction error coding block in an efficient form for transmission, for example using a discrete cosine transform (DCT). The encoded prediction error block is sent to multiplexer 13 for further transmission to the decoder.
The encoder of the digital image transmission system also includes decoding functionality. The encoded prediction error of the current
block is decoded in prediction error decoding block 30 and is subsequently summed in summer 31 with the predicted pixel values for the current block. In this way, a decoded version of the current block is obtained. The decoded current block is then directed to a block boundary filter 32, implemented according to the method of the invention. Now referring to the flow chart of Figure 6, the block boundary filter 32 examines 602, 604, if the current (just decoded) block has boundaries that can be filtered. In order to determine if such boundaries exist, the block boundary filter examines the contents of the frame buffer 33. If more than one such boundary exists, the block boundary filter determines 603 the filtering order to be used in filtering the boundaries. If at least one boundary is found, the block boundary filter retrieves 605 those pixel values that belong to the adjacent block of the present boundary to be used in the filtering process. The block boundary filter performs the filtering 606 according to a preferred filtering method and updates at least the modified pixel values in the current block and the values of pixels filtered in previously decoded blocks stored in the frame buffer 33. The block boundary filter then examines 608 whether there are still boundaries to be filtered. If other boundaries are to be filtered the process returns to step 605. The block filter performs analogous operations on each block of the frame being coded until all blocks have been encoded and locally decoded. As each block is filtered it is made available for use e.g. in the prediction and/or filtering of subsequent blocks by storing it in the frame buffer 33.
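A compact sketch of this loop (steps 602 to 608 of Figure 6) is shown below. The helper functions and the frame buffer interface are assumed names introduced only for the example; they stand in for the corresponding operations of the block boundary filter and are not defined by the text.

```python
def filter_decoded_block(block, frame_buffer,
                         find_filterable_boundaries,   # inspects the frame buffer (602, 604)
                         order_boundaries,             # fixes the filtering order (603)
                         filter_one_boundary):         # filters and updates pixels (605, 606)
    """Filter the boundaries of a newly decoded block against previously
    decoded blocks held in the frame buffer, as in Figure 6 (sketch only)."""
    frame_buffer.store(block)                                      # decoded block enters the buffer
    boundaries = find_filterable_boundaries(block, frame_buffer)   # steps 602, 604
    for boundary in order_boundaries(boundaries):                  # step 603
        neighbour = frame_buffer.get_adjacent_block(boundary)      # step 605
        filter_one_boundary(block, neighbour, frame_buffer)        # step 606: update modified pixels
        # Step 608: the loop continues while boundaries remain to be filtered.
    # The filtered block is now available for the prediction and filtering
    # of subsequently decoded blocks.
```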
The operation of the block boundary filtering method according to the invention will now be described in connection with the receiver of a digital image transmission system. Here, use of the filter is described in connection with the decoding of a frame, assumed to have been encoded in intra format using some form of intra prediction method in which the pixel values of a current block are predicted on the basis of previously encoded image blocks within the same frame.
Here it is also assumed that the receiver receives the blocks that form a digital image frame one by one from a transmission channel. It should be appreciated that in other embodiments, the receiver may receive a complete frame to be decoded, or alternatively may retrieve the digital image to be decoded from a file present on some form of storage medium or device. In any case, operation of the block boundary filtering method according to the invention is performed on a block-by-block basis, as described below.
In the receiver 60, a demultiplexer receives and demultiplexes coded prediction error blocks and prediction information transmitted from the encoder 50. Depending on the prediction method in question, the prediction information may include parameters used in the prediction process. It should be appreciated that in the case that only one intra prediction method is used, information concerning the prediction method used to code the blocks is unnecessary, although it may still be necessary to transmit parameters used in the prediction process. In
Figure 5, dotted lines are used to represent the optional transmission and reception of prediction method information and/or prediction parameters. Assuming more than one intra prediction method may be used, information concerning the choice of prediction method for the current block being decoded is provided to intra prediction block 41.
Intra prediction block 41 examines the contents of frame buffer 39 to determine if there exist previously decoded blocks to be used in the prediction of the pixel values of the current block. If such image blocks exist, intra prediction block 41 predicts the contents of the current block using the prediction method indicated by the received prediction method information and possible prediction-related parameters received from the encoder. Prediction error information associated with the current block is received by prediction error decoding block 36 which decodes the prediction error block using an appropriate method. For example, if the prediction error information was encoded using a discrete cosine transform, the prediction error decoding block performs an inverse DCT to retrieve the error information. The prediction error information is then summed with the prediction for the current image block in summer 37 and the output of the summer is applied to block
boundary filter 38. Block boundary filter 38 applies boundary filtering to the newly decoded image block in a manner analogous to the block boundary filter of the encoder (block 32). Accordingly, block boundary filter 38 examines 602, 604, if the current (newly decoded) block has boundaries that can be filtered. In order to determine if such boundaries exist, the block boundary filter 38 examines the contents of frame buffer 39 which contains previously decoded and reconstructed image blocks. If more than one such boundary exists, the block boundary filter determines 603 the filtering order to be used in filtering the boundaries.
Advantageously, this order is identical to that used in the boundary filter 32 of the encoder. If at least one boundary is found, the block boundary filter retrieves 605 those pixel values that belong to the adjacent block of the present boundary for use in the filtering process. The block boundary filter performs the filtering 606 according to a preferred filtering method (advantageously identical to that used in the encoder) and updates at least the modified pixel values in the current block and the values of pixels filtered in previously decoded blocks stored in the frame buffer 39. The block boundary filter then examines 608 whether there are still boundaries to be filtered. If other boundaries are to be filtered the process returns to step 605.
The block filter performs analogous operations on each block of the frame, substantially immediately after each block is decoded, until all blocks have been decoded and their boundaries appropriately filtered.
As each block is filtered it is made available for use e.g. in the prediction and/or filtering of subsequent blocks by storing it in the frame buffer 39. Furthermore, as each block is decoded and its boundaries filtered by applying the method according to the invention, it is directed to the output of the decoder 40, for example to be displayed on some form of display means. Alternatively, the image frame may be displayed only after the whole frame has been decoded and accumulated in the frame buffer 39.
In the paragraphs above, the method according to the invention was described in connection with the filtering of block boundaries in a frame coded in intra format and further using intra prediction methods. It
should be appreciated that the method according to the invention may be applied in an exactly analogous manner for filtering block boundaries between intra coded blocks that form part of an otherwise inter-coded frame. Alternatively, the method may be applied to inter coded image blocks. It may also be applied for filtering block boundaries between different types of coded blocks regardless of the type of block in question.
In the case of inter coded image blocks, each inter coded block is predicted based on information concerning its motion between a current frame and a reference frame and the operations of the encoder are as follows: A prediction error block is formed based on the difference between the prediction for the block and the actual contents of the block. The prediction error block is coded in a way known as such, for example using a DCT and transmitted to the decoder together with information, for example motion coefficients, representing the motion of the block. In the encoder, the coded prediction error is further decoded and summed with the prediction for the current block to produce a decoded and reconstructed block that forms part of a prediction reference frame to be used in connection with motion compensated prediction coding of the next frame in the sequence. As was the case in the example given above concerning intra coding, a block boundary filter can be implemented for use in connection with inter coded blocks in such a way that it receives and filters inter coded blocks of the current frame substantially immediately after they have been decoded.
An equivalent arrangement may be used in connection with the decoding of inter coded blocks received at the receiver of a digital image transmission system.

The block carrying out the filtering method according to the invention is particularly advantageously implemented in a digital signal processor or a corresponding general purpose device suited to processing digital signals, which can be programmed to apply predetermined processing functions to signals received as input data. The measures according to
Figure 6 can be carried out in a separate signal processor or they can
be part of the operation of such a signal processor which also contains other arrangements for signal processing.
A storage medium can be used for storing a software program comprising machine executable steps for performing the method according to the invention. Then, in an advantageous embodiment of the invention, the software program can be read from the storage medium to a device comprising programmable means, e.g. a processor, for performing the method of the invention.
In the method and device according to the invention, the number of pixels selected for filtering can vary, and it is not necessarily the same on different sides of the block boundary. The number of pixels may be adapted according to the general features of the image information contained by the frame. Furthermore, many filtering methods can be applied with the present invention. In some intra-prediction methods it is not necessary to send the intra prediction information in addition to the coded differential blocks to the receiver 60. The definitions of filtering order above have also been intended only as examples.
A particularly advantageous use of the invention is in mobile teleconferencing applications, digital television receivers and other devices that at least receive and decode digital video images.
Figure 7 presents a simplified schematic diagram of a mobile terminal 41 intended for use as a portable video telecommunications device and applying the deblocking filter method according to the invention. The mobile terminal comprises advantageously at least display means 42 for displaying images, audio means 43 for capturing and reproducing audio information, keyboard 44 for inputting e.g. user commands, radio part 45 for communicating with a mobile telecommunications network, processing means 46 for controlling the operation of the device, memory means 47 for storing information, and preferably a camera 48 for taking images.
The present invention is not solely restricted to the above presented embodiments, but it can be modified within the scope of the appended claims.

Claims (39)

1. A method for reducing visual artefacts in a digital image, which is encoded and decoded by blocks (B1, B2, B3, B4), in which filtering is performed to reduce visual artefacts due to a boundary (R12, R13, R24, R34) between a current block and an adjacent block (B1, B2, B3, B4), characterized in that the filtering is performed after the current block (B1, B2, B3, B4) is decoded and there is a boundary available for filtering between the current block and a previously decoded block.
2. A method according to Claim 1, characterized in that the filtering is performed substantially immediately after the current block is decoded and there is a boundary available for filtering.
3. A method according to Claim 1, characterized in that the filtering is performed before all blocks of the digital image are decoded.
4. A method according to Claim 1, in which the blocks are decoded in a certain decoding order, characterized in that the filtering is performed before decoding a block later in the decoding order than the current block and adjacent to the current block.
5. A method according to Claim 1, characterized in that the decoding and the filtering of a block are performed sequentially.
6. A method according to Claim 1, characterized in that the filtering is arranged to cause modification of a first number of pixel values in the current block and a second number of pixel values in the previously decoded block.
7. A method according to any of Claims 1 to 6, characterized in that it is determined (603, 604) whether there exists more than one boundary available for filtering, wherein filtering is performed on said more than one boundary (R12, R13, R24, R34) available for filtering.
8. A method according to Claim 7, characterized in that filtering is performed in a certain order on said more than one boundary (R12, R13, R24, R34).
9. A method according to any of Claims 1 to 8, characterized in that filtering is performed to reduce visual artefacts due to boundaries (R12, R13, R24, R34) between blocks (B1, B2, B3, B4) in the digital image during encoding and decoding of the digital image, and that the order of filtering the boundaries in decoding is the same as in encoding.
10. A method according to any of Claims 1 to 9, characterized in that a pixel value is corrected by filtering, and that said corrected pixel value is used in filtering at least one other boundary (R12, R13, R24, R34).
11. A method according to any of Claims 1 to 10, characterized in that intra prediction of a subsequent block is performed after the current block (B1, B2, B3, B4) is decoded, that a pixel value is corrected by filtering, and that said corrected pixel value is used in the intra prediction of at least one subsequent block.
12. A method according to any of Claims 1 to 11, characterized in that the blocks of the image are grouped into macroblocks, wherein the image is scanned macroblock by macroblock.
13. A method according to any of Claims 1 to 12, characterized in that the image is scanned horizontally from top-left to bottom-right.
14. A method according to Claim 8, characterized in that the filtering order is selected such that a boundary (R34) to the left of said current block is filtered before a boundary (R24) to the top of said current block.
15. A method according to any of Claims 1 to 14, characterized in that the image comprises at least one segment of blocks (B1, B2, B3, B4), and that only boundaries between such adjacent blocks which belong to the same segment are filtered.
16. A method according to Claim 15, characterized in that all blocks (B1, B2, B3, B4) within one segment are of the same type.
17. A method according to any of Claims 1 to 16, characterized in that the image comprises luminance and chrominance components, and that the filtering is performed to reduce visual artefacts due to a boundary between at least one of the following:
— A current block and an adjacent block in the luminance component,
— A current block and an adjacent block in the chrominance component.
18. A method according to any of Claims 1 to 16, characterized in that the image comprises at least a first colour component and a second colour component, and that the filtering is performed to reduce visual artefacts due to a boundary between at least one of the following:
— A current block and an adjacent block in the first colour component,
— A current block and an adjacent block in the second colour component.
19. A device for reducing visual artefacts in a digital image, which is encoded and decoded by blocks (B1, B2, B3, B4), the device comprising means for performing filtering to reduce visual artefacts due to a boundary (R12, R13, R24, R34) between a current block and an adjacent block (B1, B2, B3, B4), characterized in that the filtering is arranged to be performed after the current block (B1, B2, B3, B4) is decoded and there is a boundary available for filtering between the current block and a previously decoded block.
20. A device according to Claim 19, characterized in that the filtering is arranged to be performed substantially immediately after the current block is decoded and there is a boundary available for filtering.
21. A device according to Claim 19, characterized in that the filtering is arranged to be performed before all blocks of the digital image are decoded.
22. A device according to Claim 19, characterized in that the filtering is arranged to be performed before decoding a block later in the decoding order than the current block and adjacent to the current block.
23. A device according to Claim 19, characterized in that the filtering and the decoding of a block are arranged to be performed sequentially.
24. A device according to any of Claims 19 to 23, characterized in that it comprises means for determining if there exists more than one boundary available for filtering.
25. A device according to any of Claims 19 to 24, characterized in that it comprises means for performing the filtering in a certain order on more than one boundary (R12, R13, R24, R34).
26. A device according to any of Claims 19 to 25, characterized in that filtering is arranged to be performed to reduce visual artefacts due to boundaries (R12, R13, R24, R34) between blocks (B1, B2, B3, B4) in the digital image during encoding and decoding of the digital image, and the order of filtering the boundaries in decoding is arranged to be the same as in encoding.
27. A device according to any of Claims 19 to 26, characterized in that it comprises a filter for correcting a pixel value, means for saving a pixel value corrected by filtering, and means for using the corrected pixel value in filtering at least one other boundary (R12, R13, R24, R34).
28. A device according to Claim 25, 26 or 27, characterized in that it comprises means for performing intra prediction of a subsequent block, a filter for correcting a pixel value, means for saving a pixel value corrected by filtering, and means for using the corrected pixel value in intra prediction of at least one subsequent block.
29. A device according to any of Claims 25 to 28, characterized in that the blocks of the image are grouped into macroblocks, and the device comprises means for scanning the image macroblock by macroblock.
30. A device according to Claim 29, characterized in that the means for scanning the image is arranged to scan the image horizontally from top-left to bottom-right.
31. A device according to Claim 25, characterized in that the filtering order is arranged such that the boundary (R34) on the left of said current block is filtered before the boundary (R24) at the top of said current block.
32. A device according to any of Claims 19 to 31, characterized in that the image comprises at least one segment of blocks (B1, B2, B3, B4), and that the device comprises means for determining which segment the blocks belong to, wherein only boundaries between adjacent blocks which belong to the same segment are filtered.
33. An encoder (50) comprising means for coding and means for locally decoding a digital image by blocks (B1, B2, B3, B4), which encoder comprises means for performing filtering to reduce visual artefacts due to a boundary (R12, R13, R24, R34) between a current block and an adjacent block (B1, B2, B3, B4), characterized in that the filtering is arranged to be performed after the current block (B1, B2, B3, B4) is locally decoded and there is a boundary available for filtering between the current block and a previously locally decoded block.
34. A decoder (60) comprising means for decoding a digital image by blocks (B1, B2, B3, B4), which decoder comprises means for performing filtering to reduce visual artefacts due to a boundary (R12, R13, R24, R34) between a current block and an adjacent block (B1, B2, B3, B4), characterized in that the filtering is arranged to be performed after the current block (B1, B2, B3, B4) is decoded and there is a boundary available for filtering between the current block and a previously decoded block.
35. A terminal comprising an encoder, which comprises means for coding and means for locally decoding a digital image by blocks (B1, B2, B3, B4), means for performing filtering to reduce visual artefacts due to a boundary (R12, R13, R24, R34) between a current block and an adjacent block (B1, B2, B3, B4), characterized in that the filtering is arranged to be performed after the current block (B1, B2, B3, B4) is locally decoded and there is a boundary available for filtering between the current block and a previously locally decoded block.
36. A terminal comprising means for decoding a digital image by blocks (B1, B2, B3, B4), means for performing filtering to reduce visual artefacts due to a boundary (R12, R13, R24, R34) between a current block and an adjacent block (B1, B2, B3, B4), characterized in that the filtering is arranged to be performed after the current block (B1, B2, B3, B4) is decoded and there is a boundary available for filtering between the current block and a previously decoded block.
37. A terminal according to Claim 35 or 36, characterized in that it is a mobile terminal.
38. A storage medium for storing a software program comprising machine executable steps for coding and locally decoding a digital video signal by blocks, and for performing filtering to reduce visual artefacts due to a boundary (R12, R13, R24, R34) between a current block and an adjacent block (B1, B2, B3, B4), characterized in that the software program further comprises machine executable steps for performing the filtering after the current block (B1, B2, B3, B4) is locally decoded and there is a boundary available for filtering between the current block and a previously locally decoded block.
39. A storage medium for storing a software program comprising machine executable steps for decoding a digital video signal by blocks, and for performing filtering to reduce visual artefacts due to a boundary (R12, R13, R24, R34) between a current block and an adjacent block (B1, B2, B3, B4), characterized in that the software program further comprises machine executable steps for performing the filtering after the current block (B1, B2, B3, B4) is decoded and there is a boundary available for filtering between the current block and a previously decoded block.
ZA200205507A 2000-01-21 2002-07-10 A method for filtering digital images, and a filtering device. ZA200205507B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
FI20000122A FI20000122A0 (en) 2000-01-21 2000-01-21 Method of filtering digital images and filtering device

Publications (1)

Publication Number Publication Date
ZA200205507B true ZA200205507B (en) 2002-12-10

Family

ID=8557151

Family Applications (1)

Application Number Title Priority Date Filing Date
ZA200205507A ZA200205507B (en) 2000-01-21 2002-07-10 A method for filtering digital images, and a filtering device.

Country Status (2)

Country Link
FI (1) FI20000122A0 (en)
ZA (1) ZA200205507B (en)

Also Published As

Publication number Publication date
FI20000122A0 (en) 2000-01-21
