WO2006054201A1 - Video data enhancement - Google Patents

Video data enhancement

Info

Publication number
WO2006054201A1
WO2006054201A1 (PCT/IB2005/053673)
Authority
WO
WIPO (PCT)
Prior art keywords
video data
motion
discontinuity
value
filter
Application number
PCT/IB2005/053673
Other languages
French (fr)
Inventor
Leo L. Velthoven
Michiel A. Klompenhouwer
Original Assignee
Koninklijke Philips Electronics N.V.
Application filed by Koninklijke Philips Electronics N.V. filed Critical Koninklijke Philips Electronics N.V.
Publication of WO2006054201A1 publication Critical patent/WO2006054201A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/73 Deblurring; Sharpening
    • G06T 5/75 Unsharp masking

Definitions

  • Fig. 1 a pre-compensation inverse filter implementation to reduce the motion blur effect of the display+eye system
  • Fig. 2A-C an amplitude response in the motion direction of the spatial filtering due to the temporal display aperture and eye tracking, as a function of frequency for several speeds, and the corresponding inverse filters;
  • Fig. 3 a block diagram of a motion compensated inverse filter with the high- pass filter oriented along the direction of the motion, and the gain controlled by the speed of the motion;
  • Figs. 4A-C a frequency response of a motion compensated inverse filter
  • Fig. 5A a block diagram of a motion compensated inverse filtering, with a speed dependent filter
  • Fig. 5B a tap filter arrangement for a speed adaptive interpolation where the tap distance varies with the size of the motion
  • Figs. 6A-C an amplitude response of speed adaptive MCIF
  • Fig. 7 a vector field of an image
  • Fig. 8 a 1D-spatial example of the processing steps in the 'vector edge detector' from vector input to suppression output.
  • Fig. 9 a first implementation of an inventive suppression filter
  • Fig. 10 a second implementation of an inventive suppression filter
  • Fig. 11 a third implementation of an inventive suppression filter
  • Fig. 12 a fourth implementation of an inventive suppression filter.
  • Fig. 1 depicts a block diagram of a pre-compensating filter for compensating the transfer function introduced by the display and eye system. Shown is an input video signal 2, which is fed to a filter block 4. The output of filter block 4 is applied to display and eye system 6, comprising the display transfer function 6a and an eye transfer function 6b. The resulting transfer function of system 6 is H(f_x, f_t). The transfer function of the filter block 4 is the inverse of transfer function H. The output is an image as perceived by a viewer. The shown filter applies pre-compensation of the low-pass filtering of the display+eye system 6 in the video domain. The transfer function of the inverse filter is 1/H(f_x, f_t).
  • A transfer function of such a filter is shown in Figs. 2A-C. Shown are transfer functions 10 of the display and eye system 6 and transfer functions 8 of the inverse filter 4. It can clearly be seen that the inverse filter transfer function 8 has singularities, which result in implementation restrictions and errors in computation.
  • the transfer function of the display and eye system is a function of v, the speed of the motion, preferably measured in pixels per frame.
  • the absolute value of the motion vector determines the speed.
  • Figs. 2A-C indicate the various different transfer functions 10 and inverse transfer functions 8 for different speeds of pixels.
  • Fig. 2A shows transfer functions for a speed of 2 pixels per frame
  • Fig. 2B shows transfer functions for a speed of 4 pixels per frame
  • Fig. 2C shows transfer functions for a speed of 8 pixels per frame.
  • Fig. 3 shows a basic implementation of a 'Motion Compensated Inverse Filter' (MCIF). Shown are input video data 2, a high pass filter (HPF) 12, a motion estimator 14, a multiplication means 16 and an addition means 18.
  • the high pass filter 12 filters input video data 2 accordingly.
  • the motion estimator 14 allows controlling the speed dependent behavior. Motion vectors can be determined within the motion estimator 14 using a '3D recursive search' motion estimation, as described in G. de Haan, 'IC for motion-compensated de-interlacing, noise reduction, and picture-rate conversion', IEEE Trans. on Consumer Electronics, vol. 45, pp. 617-624, 1999.
  • the motion obtained from the motion estimator 14 describes the true motion of objects in the image. Since the blurring acts only along the motion direction, the high-pass filter 12 can be rotated in the direction of the motion vector using the information from motion estimator 14. Furthermore, the high-pass filter 12 can be very simple, i.e. low-order or very few taps, when the gain is adjusted with the size of the motion within the multiplication means 16.
  • Figs. 4A-C show transfer functions 8 and 10 and a transfer function 20 of a filter as depicted in Fig. 3. This transfer function 20 has no singularities at the zeros of the transfer function of the display and eye system, but, as depicted in Fig. 4C, can result in a high gain at high speeds for all pixels within the respective areas.
  • a basic MCIF system will apply the highest gain to the highest spatial frequencies. Therefore, for higher speeds, compensation of the lowest affected frequencies needs to be prioritized. The highest frequencies can be left unchanged.
  • Such a filter is shown in Fig. 5A. It comprises the elements described with Fig. 3 and additionally a 2-D interpolator 24 and a 1-D high pass filter 26.
  • the motion estimator 14 provides direction information of the motion vector to the 2-D interpolator 24.
  • the 2-D interpolator thus can interpolate pixels of consecutive frames using the direction information.
  • the output of the 2-D interpolator 24 is provided to 1-D high pass filter 26. High pass filter 26 also receives speed information about the motion vector from motion estimator 14. The speed information allows varying the tap distance of the 1-D high pass filter 26 from the central tap accordingly.
  • the final MCIF result is a medium-frequency boosting filter, as shown in Fig. 5A.
  • the positions of the filter taps 30 in relation to the central tap within a video sampling grid 28 are depicted in Fig. 5B.
  • the position of the filter taps 30 depends on the direction of the motion vector and the speed of the motion vector.
  • the filter response needs to be speed adaptive. This extends the speed dependency of the MC-inverse filter from a simple varying gain and rotating but fixed filter response, to a varying filter response. To achieve this, the directional dependent interpolation of the filter taps is changed according to Fig. 5B. The positions of these 'interpolated' taps vary not only with the direction of the motion vector, but also lie at a larger distance from the central tap for higher speeds. This shifts the response of the static 1-D high-pass filter to lower frequencies, no longer requiring the gain of the filter to be increased with speed.
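As an illustration of this speed-adaptive tap placement, the sketch below computes tap positions along the motion vector, with the tap distance growing with the speed. The function name and the spacing law (step = spacing * speed / 4) are illustrative assumptions, not values taken from the patent.

```python
import math

def tap_positions(vx, vy, n_taps=5, spacing=1.0):
    """Place the taps of a 1-D high-pass filter along the motion vector,
    with the tap distance growing with the speed (cf. Fig. 5B).
    The spacing law step = spacing * speed / 4 is an illustrative
    assumption, not a value from the patent."""
    speed = math.hypot(vx, vy)               # |v| in pixels per frame
    if speed == 0.0:
        return [(0.0, 0.0)] * n_taps         # no motion: taps collapse on the centre
    ux, uy = vx / speed, vy / speed          # unit vector along the motion
    half = n_taps // 2
    step = spacing * speed / 4.0             # higher speed -> wider aperture
    return [(k * step * ux, k * step * uy) for k in range(-half, half + 1)]
```

Doubling the speed doubles the distance of every tap from the central tap, which shifts the high-pass response to lower frequencies without changing the tap weights.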
  • Motion compensated inverse filtering can increase the visibility of halo artifacts within the image and can even introduce new halos. For instance, big overshoots can result from combining pixels from fore- and background objects. This relates to the large aperture of the filter at high motion in the image. Also, a more pronounced halo artifact compared to the original can be introduced into an image using MCIF. This can relate to the peaking of the existing halo artifacts. Filters with a small aperture and filters with a large aperture can peak the existing halo.
  • a vector field of an image, as produced by a motion estimator, can be seen in Fig. 7.
  • the different segments indicate areas with different motion in the image.
  • the speed of the areas varies, and each area is assigned a particular speed.
  • the speed estimation, e.g. a motion estimation, is however imperfect, and at object boundaries a wrong speed estimation can occur. This can also be seen from Fig. 7, as at the object boundaries the speed is estimated with different values than within the respective objects.
  • An edge of the vector field between foreground and background can be peaked with a filter having a larger aperture due to the large motion vector at that spatial position. This can result in the combining of fore- and background pixels and leads to big overshoots.
  • halo artifacts can also be introduced. For instance, areas with large motion can lead to filters with a large aperture, such that the 'halo pixels' are mixed with the 'good' background pixels, which have only little motion. This can lead to a visible extension of the halo area.
  • a step in the vector field can lead to a step in the filter aperture that can lead to visible artifacts, i.e. a sudden change from a blurred to a sharp image part. From Fig. 7 it becomes apparent that discontinuities within the vector field can lead to image distortions introduced by the filtering itself.
  • Discontinuity in the vector field can be an indication of an occlusion area, either where existing halo artifacts can be present due to the imperfect up-conversion, or halo artifacts can be introduced by the motion compensated inverse filtering due to mixing of foreground and background pixels in the filter operation. Discontinuities in the vector field can be measured by a vector edge detector. The vector edge detector can create a suppression factor at certain spatial positions where the motion compensated inverse filtering should be suppressed.
  • Fig. 8 shows the input and output of a vector edge detector in the case of a one-dimensional spatial situation, but it can be extended to the 2D situation.
  • the depicted waveforms describe the value of a certain signal at its spatial position along the x-axis.
  • the waveform of Fig. 8 A describes horizontal values of the motion vector on a certain line in the image.
  • Fig. 8A is a waveform with a step in the vector-field. Such a step can occur when two objects have different speeds, e.g. different absolute values of the respective motion vectors.
  • the absolute difference can be, for instance, determined as h1.
  • Fig. 8B describes a high pass filtering or edge detection of the vector field, where the height h2 of the pulse is correlated to the step h1 in the vector field as shown in Fig. 8A.
  • the output of the high pass filtering can be convolved with a transfer function to create an output as depicted in Figs. 8C-G.
  • a low pass filtering results in a waveform as shown in Fig. 8C.
  • a simple operation on the waveform shown in Fig. 8B would be the convolution with a triangle shaped filter of which the aperture is correlated to the height h2, therefore leading to a correlation of w1 with h2 and h1. This is shown within Fig. 8C.
  • Other types of filtering may be applied leading for example to waveforms as shown in Figs. 8D-F.
  • Multiplying, for example, the waveform shown in Fig. 8C with a certain factor, applying coring, and clipping between a lower boundary, corresponding to no suppression, and an upper boundary, indicating full suppression, can result in the waveform shown in Fig. 8G.
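The processing chain of Figs. 8B-G (edge detection, spreading, scaling, coring, clipping) might be sketched in one spatial dimension as follows. All kernel values and thresholds here are illustrative assumptions, and the triangle spreading is approximated with a simple max-based combination rather than a true convolution.

```python
def suppression_profile(vec_line, gain=0.5, core=0.1, max_sup=1.0):
    """1-D sketch of the vector edge detector of Fig. 8: detect steps in a
    line of motion-vector values, spread them with a small triangle kernel,
    then scale, core and clip into a suppression factor in [0, max_sup].
    The gain, coring threshold and kernel are illustrative assumptions."""
    n = len(vec_line)
    # edge detection: absolute difference of neighbouring vectors (h2 ~ h1)
    edges = [0.0] + [abs(vec_line[i] - vec_line[i - 1]) for i in range(1, n)]
    # spread each edge with a triangle kernel (max-combined, for simplicity)
    tri = [0.25, 0.5, 1.0, 0.5, 0.25]
    spread = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(tri):
            k = i + j - 2
            if 0 <= k < n:
                acc = max(acc, w * edges[k])
        spread.append(acc)
    # scale, core (ignore small responses) and clip to [0, max_sup]
    out = []
    for s in spread:
        s *= gain
        s = 0.0 if s < core else s
        out.append(min(s, max_sup))
    return out
```

On a line with a single step in the vector field, the profile is zero in the flat regions, fully clipped at the step, and falls off smoothly next to it, matching the shape of Fig. 8G.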
  • the waveform shown in Fig. 8G indicates the amount of suppression of the motion compensated inverse filtering at given spatial positions. This suppression value can be applied to the output of the MCIF filter and thus reduces the effect of discontinuities within the vector field, resulting in reduced distortion due to filtering.
  • a transfer function 32 of such a suppressed, motion compensated, inverse filtering for various speeds (2, 4, 8ppf) is illustrated in Figs. 6A-C.
  • the suppression results in moderated inverse filtering even for areas with high speed and object boundaries, as can be seen by the moderate slope of transfer function 32.
  • the vector edge detector can be constructed, for example, such that two- dimensional discontinuities in the vector field can be measured, for instance by combining a horizontal 'pass' on the horizontal component of the motion vector and a vertical 'pass' on the vertical component of the vector field.
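A minimal sketch of such a two-dimensional measurement, combining a horizontal pass on the horizontal vector component with a vertical pass on the vertical component. Combining the two passes with max() is an illustrative choice, as is the use of plain neighbour differences for each pass.

```python
def edge_2d(vx_field, vy_field):
    """Sketch of a two-dimensional vector edge measure: a horizontal
    difference ('pass') on the horizontal vector component combined with
    a vertical difference on the vertical component."""
    h, w = len(vx_field), len(vx_field[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # horizontal pass on the horizontal component vx
            dh = abs(vx_field[y][x] - vx_field[y][x - 1]) if x > 0 else 0.0
            # vertical pass on the vertical component vy
            dv = abs(vy_field[y][x] - vy_field[y - 1][x]) if y > 0 else 0.0
            out[y][x] = max(dh, dv)
    return out
```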
  • the motion compensated inverse filtering should be suppressed.
  • This suppression can be in reducing the strength (gain) or in reducing the filter aperture (stretch), or in changing the filter coefficients in general.
  • the reduction can be stepwise or preferably smooth.
  • the inventive method provides changing the settings of motion compensated inverse filtering at spatial positions where a discontinuity in the vector-field is measured, such that a reduced effect of MCIF is obtained at said spatial positions, leading to less visible artifacts.
  • Fig. 9 shows a first implementation of a filter according to the invention. The filter comprises an image enhancement filter 40, which provides any appropriate image enhancement. Additionally, a motion estimator 14 and a vector edge detector 42 are provided.
  • the motion estimator 14 provides the vector edge detector 42 with a vector-field of the image. Using the vector-field of the image, the vector edge detector 42 can detect discontinuities within the vector-field. Using the information about the discontinuities within the vector-field, the vector edge detector 42 can calculate a suppression value to be applied to the output of the image enhancement filter 40 at the said spatial positions of the discontinuities.
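Applying the suppression value to the output of the image enhancement filter could be sketched as below; the function name, signature and the linear blend are our assumptions.

```python
def apply_suppression(video, enhancement, suppression):
    """Apply a per-pixel suppression value to the output of an image
    enhancement filter: out = in + (1 - s) * enhancement. At a vector
    field discontinuity (s -> 1) the pixel passes through unchanged;
    in smooth areas (s = 0) the full enhancement is applied."""
    return [p + (1.0 - s) * e
            for p, e, s in zip(video, enhancement, suppression)]
```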
  • the vector edge detector 42 can also calculate any change value to be applied to the image enhancement filter 40 for manipulating its output or its filter coefficients.
  • Fig. 10 shows another possible implementation of a filter according to the invention.
  • Comprised is a motion compensated inverse filter 44 as already described in Fig. 3.
  • the output of motion estimator 14 is provided to vector edge detector 42.
  • the results of vector edge detector 42 can be utilized to calculate a suppression value applied on the output of the MCIF 44.
  • Fig. 11 shows a speed adaptive motion compensated inverse filter 44, as already depicted in Fig. 5A.
  • the vector edge detector 42 calculates a suppression value based on the received vector field from motion estimator 14.
  • the suppression value is provided to the output of motion estimator 14 and of 1-dimensional high pass filter 26. Insofar, the aperture and the gain of the MCIF 44 are corrected with the suppression value.
  • from Fig. 12, another possible embodiment can be derived.
  • Fig. 12 only differs from the filter depicted in Fig. 11 in that the suppression values for suppressing the gain and the aperture differ. It is not necessary that the suppression for the different values is the same. This allows adapting the filter to display and input signal particularities.
  • the inventive filtering allows for adapting the filter output of an MCIF filter such that reduced artifacts occur at spatial positions of discontinuities within the vector field of the image.


Abstract

The invention relates in general to a method and a filter for providing motion dependent image processing of video data, in particular for liquid crystal displays, with receiving input video data, calculating an image enhancement value for the input video data, adding the enhancement value to the input video data, and outputting the enhanced video data on a display. To allow for changing settings of the filter where discontinuities within the vector field occur, the invention provides calculating a motion vector field for the input video data, measuring discontinuities within the vector field, and changing the enhancement value with a change value at a spatial position where a discontinuity in the vector field is measured, such that a reduced effect of image enhancement is obtained at least at the spatial position of the discontinuity.

Description

Video data enhancement
The invention relates in general to a method and a filter for providing motion dependent image processing of video data, in particular for liquid crystal displays, with receiving input video data, calculating an image enhancement value for the input video data, adding the enhancement value to the input video data, and outputting the enhanced video data on a display.
Nowadays, Liquid Crystal Displays (LCDs) are rapidly overtaking Cathode Ray Tube (CRT) displays, not only on the desktop but also as television displays. However, the CRT is still unbeaten in one major aspect for television: motion portrayal. In that area, LCDs perform much worse, since the LC-molecules react slowly to image changes. The result is a smearing (blurring) of moving objects, making slow LCDs less suited for video applications.
Therefore, a lot of effort has been put into speeding up the response of LC materials. This can be done by applying better materials, or by improved LC cell design. There is also a well known method for response time improvement based on video processing, called 'overdrive'. Overdrive can reduce the response time within a frame period.
However, speeding up the response of the LC pixels is not enough to avoid motion blur completely. This is caused by the active matrix principle itself, which exhibits a sample-and-hold characteristic. Combined with the continuous light emission from the backlight, this results in a light emission during the whole frame time. This is a big difference with the very short light flashes produced by the phosphors of a CRT. The fastest LCDs currently available have response times shorter than the frame period. This is achieved either by fast LC material, or by overdrive. The faster the LC becomes, the less overdrive is needed. However, even these displays will still have a light emission during the whole frame period due to the sample-and-hold behavior of the active matrix and the continuous illumination by the backlight.
In some known implementations, the transfer function of the display and the eye (display and eye system) of a viewer is inverted, and a pre-compensation of image distortion effects on the video data is proposed. Such a 'pre-compensation' is possible using motion compensated inverse filtering (MCIF), which is described in M. A. Klompenhouwer, L.J. Velthoven, 'Motion blur reduction for liquid crystal displays: Motion compensated inverse filtering', Proc. SPIE, VCIP 2004, pp. 690-699. This approach is based on a frequency domain analysis.
The described approach tries to reduce the motion blur effect of the display+eye system. It is known that the temporal aperture, A(x, t), of a very fast LCD will approach a 'box' function, with a width equal to the hold time T_h (equal to the frame period T). In the frequency domain, this aperture is described by a sinc function:
A(f_t) = sinc(π f_t T_h)
The temporal aperture acts as a purely temporal low-pass filter, i.e. high temporal frequencies are suppressed. One result of this is the reduction of image flicker, by suppressing the temporal sampling frequency.
However, things change when images move. The motion will create temporal frequency components even in an originally still image. The effect of eye tracking is to restore the original still image on the retina, but this will cause the temporal sinc filter to be 'warped' to the spatial domain:
I_p(f_x) = I(f_x) sinc(π v f_x T_h)
where I_p(f_x) is the perceived image (spectrum) as a function of spatial frequency, I(f_x) is the input image, and v is the speed of the moving image. This results in a motion dependent spatial filter. The motion v is measured in the units used for x and t. E.g. in a discrete signal, v is usually expressed in 'pixels per frame' (ppf).
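For illustration, the attenuation predicted by this formula can be evaluated numerically. The sketch below assumes T_h equal to one frame period, so that v T_h equals the speed in pixels per frame; the function name is our own.

```python
import math

def display_eye_gain(f_x, v_ppf):
    """Attenuation of the display+eye system at spatial frequency f_x
    (cycles per pixel) for motion speed v_ppf (pixels per frame),
    i.e. sinc(pi * v * f_x * T_h) with T_h equal to one frame period."""
    arg = math.pi * v_ppf * f_x
    if arg == 0.0:
        return 1.0          # DC is passed unattenuated
    return math.sin(arg) / arg
```

The first zero of the response lies at f_x = 1/v, and a given spatial frequency is attenuated more strongly the faster the motion, which is exactly the motion blur the inverse filter has to undo.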
The low-pass filtering of the display+eye system can be compensated for using an inverse filter in the frequency domain, such as:
H_inv(f_x) = 1 / sinc(π v f_x T_h)
This inverse filter, however, is not practical, since the amplification goes to infinity at the zeros of the sinc function. Therefore, an approximation is the best that can be done. Moreover, a practical solution should also limit the computational complexity.
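A minimal sketch of such an approximation simply clips the inverse gain near the zeros of the sinc; the clipping limit max_gain is an illustrative assumption, not a value from the patent.

```python
import math

def inverse_filter_gain(f_x, v_ppf, max_gain=4.0):
    """Approximate inverse of the display+eye sinc, 1/sinc(pi*v*f_x*T_h),
    with the gain clipped at max_gain to avoid the singularities at the
    zeros of the sinc (T_h taken as one frame period)."""
    arg = math.pi * v_ppf * f_x
    s = math.sin(arg) / arg if arg != 0.0 else 1.0
    if abs(s) < 1.0 / max_gain:
        return max_gain     # near a zero of the sinc: clip the gain
    return 1.0 / abs(s)
```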
Since the blurring acts only along the motion direction, the high-pass filter is rotated in the direction of the motion vector. Furthermore, the (high-pass) filter can be very simple, i.e. low-order or very few taps, when the gain is adjusted with the size of the motion. This implementation is more or less an extension of well-known sharpness enhancement (unsharp masking) filtering, by adding the motion dependency.
However, although the simple filter already makes the necessary approximation to the infinite gains of the ideal filter, the gain of the filter can still be very high. For a good blur reduction, the gain should increase proportionally to the speed. This results in a very high amplification factor already at moderate speeds, which leads to the problem of noise amplification.
The images that result from such a MCIF filter show strongly amplified noise when displayed on an actual LCD panel. One cause of this is that the motion estimator has a high chance of estimating the wrong vector in flat (undetailed) image parts, which can cause undesirable noise amplification at high gains.
Nevertheless, even when the motion has been estimated correctly, noise amplification is visible. It is therefore necessary to prevent excessive noise amplification in the MCIF-corrected images.
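As a 1-D sketch of the simple motion-dependent sharpening described above, and of why its gain grows with speed, consider the toy implementation below. The 3-tap high-pass kernel [-0.25, 0.5, -0.25] and the proportional gain law k * |v| are illustrative assumptions, not values from the patent.

```python
def mcif_1d(line, v_ppf, k=0.25):
    """1-D sketch of MCIF as motion-dependent unsharp masking:
    out = in + gain(|v|) * highpass(in), with the gain growing
    proportionally to the speed v_ppf (pixels per frame)."""
    gain = k * abs(v_ppf)                            # gain grows with speed
    out = []
    for i in range(len(line)):
        left = line[max(i - 1, 0)]                   # edge pixels are repeated
        right = line[min(i + 1, len(line) - 1)]
        hp = 0.5 * line[i] - 0.25 * (left + right)   # high-pass response
        out.append(line[i] + gain * hp)
    return out
```

Flat areas are passed through unchanged (the high-pass response is zero there), but any noise or edge is amplified by the speed-proportional gain, which illustrates both the sharpening effect and the noise amplification problem discussed above.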
Therefore, one object of the invention is to provide improved image processing, in particular for LCD displays. Another object of the invention is to decrease noise amplification for MCIF filters. A further object of the invention is to reduce "halo" artifacts due to motion in video data. Yet another object of the invention is to provide a speed adaptive, simple filtering.
These and other objects of the invention are solved by a method for providing motion dependent image processing of video data, in particular for liquid crystal displays, with receiving input video data, calculating an image enhancement value for the input video data, adding the enhancement value to the input video data, and outputting the enhanced video data on a display, characterized by calculating a motion vector field for the input video data, measuring discontinuities within the vector field, and changing the enhancement value with a change value at a spatial position where a discontinuity in the vector field is measured, such that a reduced effect of image enhancement is obtained at least at the spatial position of the discontinuity.
By determining a change value from the vector field, in particular from discontinuities in the vector field, the enhancement value can be changed in dependence of the motion within the image. The vector field can represent motion vectors within the image. A change in the value of a motion vector can represent boundaries between segments of the image with different motion. These boundaries are problematic for many image enhancement methods. The change value accounts for halo artifacts occurring at the boundaries, for instance due to overshoots in the enhanced pixel value caused by large filter apertures in high motion regions, or due to peaking of existing halo artifacts caused by wrong foreground/background occlusion estimation.
In general, it has been found that in areas where a discontinuity is present in the vector field, the motion compensated inverse filtering should be reduced. This reduction can consist in reducing the strength (gain), reducing the filter aperture (stretch), or changing the filter coefficients in general, as provided according to embodiments of claims 3 to 5. According to embodiments, the reduction is preferably smooth.
Even in the case of a 'perfect' vector field, a reduction and/or modification of the motion compensated inverse filtering in the area of the discontinuity of the vector field can be desirable, to prevent the filter from mixing pixels from fore- and background objects. The discontinuity in the vector field can be an indication of an occlusion area, where either existing halo can be present due to the imperfect up-conversion, or halo can be introduced by the motion compensated inverse filtering due to mixing of foreground and background pixels in the filter operation. A vector edge detector can measure discontinuities in the vector field and can create a suppression factor at certain spatial positions where the motion compensated inverse filtering should be suppressed.
Therefore, embodiments provide changing the settings of motion compensated inverse filtering at spatial positions where a discontinuity in the vector field is measured, such that a reduced effect of filtering is obtained at said spatial positions, leading to less visible artifacts. The change value can be determined according to any one of claims 6 to 8.
To avoid changes in the enhancement value within areas with little or no change in the vector field, and to reduce the maximum change in the enhancement value, a method of claim 10 is provided. The vector edge detector can be constructed such that two-dimensional discontinuities in the vector field are measured, for instance by combining a horizontal 'pass' on the horizontal component of the motion vector and a vertical 'pass' on the vertical component of the vector field. For example, a method of claim 11 is provided for such a 2-dimensional enhanced MCIF. In addition, embodiments provide introducing a temporal measurement.
Another aspect of the invention is a filter arranged for providing motion dependent image processing of video data, in particular for liquid crystal displays, with an input arranged for receiving input video data, an image enhancement filter arranged for calculating an image enhancement value for the input video data, an output arranged for adding the enhancement value to the input video data, and outputting the enhanced video data on a display, characterized by a motion estimator arranged for calculating a motion vector field for the input video data, a vector edge detector arranged for measuring discontinuities within the vector field, and outputting a change value at a spatial position where a discontinuity in the vector field is measured such that a reduced effect of image enhancement is obtainable at least at the spatial position of the discontinuity.
A further aspect of the invention is a computer program and a computer program product for providing motion dependent image processing of video data, with a computer program operable to cause a processor to receive input video data, calculate an image enhancement value for the input video data, add the enhancement value to the input video data, put out the enhanced video data on a display, calculate a motion vector field for the input video data, measure discontinuities within the vector field, and change the enhancement value with a change value at a spatial position where a discontinuity in the vector field is measured such that a reduced effect of image enhancement is obtained at least at the spatial position of the discontinuity.
Other advantages can be derived from the dependent claims. These and other aspects of the invention will become apparent from and elucidated with reference to the following Figures, in which:
Fig. 1 shows a pre-compensation inverse filter implementation to reduce the motion blur effect of the display+eye system;
Figs. 2A-C show the amplitude response in the motion direction of the spatial filtering due to the temporal display aperture and eye tracking, as a function of frequency for several speeds, and the corresponding inverse filters;
Fig. 3 shows a block diagram of a motion compensated inverse filter with the high-pass filter oriented along the direction of the motion, and the gain controlled by the speed of the motion;
Figs. 4A-C show the frequency response of a motion compensated inverse filter;
Fig. 5A shows a block diagram of motion compensated inverse filtering with a speed dependent filter;
Fig. 5B shows a tap filter arrangement for a speed adaptive interpolation where the tap distance varies with the size of the motion;
Figs. 6A-C show the amplitude response of speed adaptive MCIF;
Fig. 7 shows a vector field of an image;
Fig. 8 shows a 1-D spatial example of the processing steps in the 'vector edge detector', from vector input to suppression output;
Fig. 9 shows a first implementation of an inventive suppression filter;
Fig. 10 shows a second implementation of an inventive suppression filter;
Fig. 11 shows a third implementation of an inventive suppression filter;
Fig. 12 shows a fourth implementation of an inventive suppression filter.
The invention will be described in more detail within the following Figures. Throughout the Figures, same reference signs refer to similar elements, where appropriate. Fig. 1 depicts a block diagram of a pre-compensating filter for compensating the transfer function introduced by the display and eye system. Shown is an input video signal 2, which is fed to a filter block 4. The output of filter block 4 is applied to display and eye system 6, comprising the display transfer function 6a and an eye transfer function 6b. The resulting transfer function of system 6 is H(fx,ft). The transfer function of the filter block 4 is the inverse of transfer function H. The output is an image as being perceived by a user. The shown filter applies pre-compensation of the low-pass filtering of the display+eye system 6 in the video domain. The transfer function of the inverse filter is:
H^{-1}(f_x, f_t) = 1 / sinc(π · v · f_x)

This inverse filter is, however, not practical, since the amplification goes to infinity at the zeros of the sinc function.
Transfer functions of such a filter are shown in Figs. 2A-C. Shown are transfer functions 10 of the display and eye system 6 and transfer functions 8 of the inverse filter. It can clearly be seen that the inverse filter transfer function 8 has singularities, which result in implementation restrictions and errors in computation.
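This behavior can be illustrated numerically (a sketch, not part of the patent; note that numpy's `np.sinc(x)` computes sin(πx)/(πx), so the display-and-eye response sinc(π·v·fx) is `np.sinc(v*fx)`):

```python
import numpy as np

def display_eye_response(fx, v):
    # Temporal display aperture plus eye tracking acts as a low-pass
    # filter sinc(pi*v*fx) along the motion direction;
    # np.sinc(x) = sin(pi*x)/(pi*x).
    return np.sinc(v * fx)

v = 4.0                               # speed in pixels per frame
fx = np.linspace(0.01, 0.5, 50)       # spatial frequency, cycles per pixel
H = display_eye_response(fx, v)
inverse_gain = 1.0 / np.abs(H)        # ideal inverse filter amplitude

# The response is (nearly) zero at fx = k/v, so the ideal inverse
# gain diverges there -- the singularities visible in Figs. 2A-C.
```

With v = 4 pixels per frame the first zero lies at fx = 0.25 cycles per pixel, well inside the displayable band, which is why a practical filter must approximate the inversion rather than apply it exactly.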
It has been previously stated that the transfer function of the display and eye system is a function of v, the speed of the motion, preferably measured in pixels per frame. The absolute value of the motion vector determines the speed. Figs. 2A-C indicate the different transfer functions 10 and inverse transfer functions 8 for different speeds. Fig. 2A shows transfer functions for a speed of 2 pixels per frame, Fig. 2B shows transfer functions for a speed of 4 pixels per frame, and Fig. 2C shows transfer functions for a speed of 8 pixels per frame. Fig. 3 shows a basic implementation of a 'Motion Compensated Inverse Filter'
(MCIF). Shown are input video data 2, a high pass filter (HPF) 12, a motion estimator 14, a multiplication means 16 and an addition means 18.
The high pass filter 12 filters input video data 2 accordingly. The motion estimator 14 allows controlling the speed dependent behavior. Motion vectors can be determined within the motion estimator 14 using a '3D recursive search' motion estimation, as described in G. de Haan, 'IC for motion-compensated de-interlacing, noise reduction, and picture-rate conversion', IEEE Transactions on Consumer Electronics, vol. 45, pp. 617-624, 1999. The motion obtained from the motion estimator 14 describes the true motion of objects in the image. Since the blurring acts only along the motion direction, the high-pass filter 12 can be rotated in the direction of the motion vector using the information from motion estimator 14. Furthermore, the high-pass filter 12 can be very simple, i.e. low-order with very few taps, when the gain is adjusted with the size of the motion within the multiplication means 16.
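The structure of Fig. 3 can be sketched in one dimension as follows (the kernel and the linear gain law are illustrative assumptions; the patent only specifies that the gain is adjusted with the size of the motion):

```python
import numpy as np

def mcif_basic(video_line, speed, k=0.5):
    """Basic MCIF along the motion direction (1-D sketch).

    A few-tap high-pass filter, with its gain scaled by the motion
    speed |v| in pixels per frame; k is a hypothetical tuning constant.
    """
    hpf = np.array([-0.25, 0.5, -0.25])           # simple high-pass kernel
    detail = np.convolve(video_line, hpf, mode='same')
    gain = k * abs(speed)                         # gain grows with speed
    return video_line + gain * detail             # add enhancement to input

# A dark-bright-dark edge pattern moving at 4 pixels per frame:
line = np.array([0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0])
enhanced = mcif_basic(line, speed=4.0)
```

Flat image regions pass through unchanged (the high-pass output is zero there), while edges are boosted proportionally to the speed, which is exactly why the noise amplification problem discussed next arises at high speeds.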
However, although the simple filter shown in Fig. 3 already makes the necessary approximation to the infinite gains of the ideal filter, the gain of the filter can still be very high. For a good blur reduction, the gain should increase proportionally to the speed. This results in a very high amplification factor already at moderate speeds, as shown in Figs. 4A-C for various speeds, e.g. 2, 4 and 8 pixels per frame. This leads to the problem of noise amplification. Shown in Figs. 4A-C are transfer functions 8 and 10 and a transfer function 20 of a filter as depicted in Fig. 3. This transfer function 20 has no singularities at the zeros of the transfer function of the display and eye system, but, as depicted in Fig. 4C, can result in a high gain at high speeds for all pixels within the respective areas.
The images that result from the filter in Fig. 3 show strongly amplified noise when displayed on an actual LCD panel. One cause of this is that the motion estimator has a high chance of estimating the wrong vector in flat (undetailed) image parts, which can cause undesirable noise amplification at high gains.
A basic MCIF system applies the highest gain to the highest spatial frequencies. Therefore, for higher speeds, compensation of the lowest affected frequencies needs to be prioritized, while the highest frequencies can be left unchanged. These measures transform the high-pass filter shown in Fig. 3 into a band-pass filter.
Such a filter is shown in Fig. 5A. It comprises the elements described with Fig. 3 and additionally a 2-D interpolator 24 and a 1-D high pass filter 26. The motion estimator 14 provides direction information of the motion vector to the 2-D interpolator 24. The 2-D interpolator thus can interpolate pixels of consecutive frames using the direction information. The output of the 2-D interpolator 24 is provided to the 1-D high pass filter 26. High pass filter 26 also receives speed information about the motion vector from motion estimator 14. The speed information allows varying the tap distance of the 1-D high pass filter 26 from the central tap accordingly. After adding the filtered signal to the input at adder 18, the final MCIF result is a medium-frequency boosting filter, as shown in Fig. 5A. The positions of the filter taps 30 in relation to the central tap within a video sampling grid 28 are depicted in Fig. 5B. The position of the filter taps 30 depends on the direction and the speed of the motion vector.
In order to limit the amplification of the higher frequencies at high speeds, and only compensate the lowest frequencies, the filter response needs to be speed adaptive. This extends the speed dependency of the MC-inverse filter from a simple varying gain and rotating but fixed filter response, to a varying filter response. To achieve this, the directional dependent interpolation of the filter taps is changed according to Fig. 5B. The positions of these 'interpolated' taps vary not only with the direction of the motion vector, but also lie at a larger distance from the central tap for higher speeds. This shifts the response of the static ID high-pass filter to lower frequencies, no longer requiring the gain of the filter to be increased with speed.
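As a sketch, the tap placement of Fig. 5B might be computed as follows (the linear spacing law and the tap count are assumptions; the patent does not fix the exact relation between speed and tap distance):

```python
import numpy as np

def tap_positions(center, motion_vector, n_taps=5):
    """Speed-adaptive tap positions for the 1-D HPF (sketch).

    Taps lie on the line through `center` along the motion direction;
    their distance from the central tap grows with the speed |v|,
    shifting the filter response to lower frequencies at high speeds.
    """
    v = np.asarray(motion_vector, dtype=float)
    speed = np.hypot(*v)
    if speed == 0:
        # No motion: all taps collapse onto the central position.
        return np.tile(np.asarray(center, dtype=float), (n_taps, 1))
    direction = v / speed                       # unit vector along the motion
    spacing = max(1.0, speed / 2.0)             # assumed: spacing grows with speed
    offsets = np.arange(n_taps) - n_taps // 2   # ..., -1, 0, 1, ...
    return np.asarray(center, dtype=float) + np.outer(offsets * spacing, direction)

# Horizontal motion of 8 pixels per frame around pixel (10, 10):
taps = tap_positions(center=(10.0, 10.0), motion_vector=(8.0, 0.0))
```

Because the off-center taps land between grid positions for oblique motion vectors, their values are obtained from the 2-D interpolator 24 rather than read directly from the sampling grid.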
Motion compensated inverse filtering can, however, introduce additional artifacts. It can increase the visibility of halo artifacts within the image and can even introduce new halos. For instance, big overshoots can result from combining pixels from fore- and background objects. This relates to the large aperture of the filter at high motion in the image. Also, a halo artifact more pronounced than in the original can be introduced into an image using MCIF. This relates to the peaking of existing halo artifacts: both filters with a small aperture and filters with a large aperture can peak the existing halo.
Both undesirable situations can easily be explained by evaluating the vector output of a motion estimator, as can be seen in Fig. 7. The different segments indicate areas with different motion in the image. The speed of the areas varies, and each area is assigned a particular speed. The speed estimation, e.g. a motion estimation, is however imperfect, and at object boundaries a wrong speed estimation can occur. This can also be seen from Fig. 7, as at the object boundaries the speed is estimated with different values than within the respective objects.
An edge of the vector field between foreground and background can be peaked with a filter having a larger aperture due to the large motion vector at that spatial position. This can result in the combining of fore- and background pixels and leads to big overshoots.
Due to imperfect motion estimation and imperfect up-conversion, halo artifacts can also be introduced. For instance, areas with large motion can lead to filters with a large aperture, such that the 'halo pixels' are mixed with the 'good' background pixels which have only little motion. This can lead to a visible extension of the halo area. Moreover, a step in the vector field can lead to a step in the filter aperture, which can produce visible artifacts, i.e. a sudden change from a blurred to a sharp image part. From Fig. 7 it becomes apparent that discontinuities within the vector field can lead to image distortions introduced by the filtering itself.
Discontinuity in the vector field can be an indication of an occlusion area, either where existing halo artifacts can be present due to the imperfect up-conversion, or halo artifacts can be introduced by the motion compensated inverse filtering due to mixing of foreground and background pixels in the filter operation. Discontinuities in the vector field can be measured by a vector edge detector. The vector edge detector can create a suppression factor at certain spatial positions where the motion compensated inverse filtering should be suppressed.
Fig. 8 shows the input and output of a vector edge detector in the case of a one-dimensional spatial situation; this can be extended to the 2D situation. The depicted waveforms describe the value of a certain signal at its spatial position along the x-axis. In this example, the waveform of Fig. 8A describes horizontal values of the motion vector on a certain line in the image. Fig. 8A is a waveform with a step in the vector field. Such a step can occur when two objects have different speeds, e.g. different absolute values of the respective motion vectors. The absolute difference can, for instance, be determined as h1.
Fig. 8B describes a high pass filtering or edge detection of the vector field, where the height h2 of the pulse is correlated to the step h1 in the vector field as shown in Fig. 8A.
The output of the high pass filtering can be convolved with a transfer function to create an output as depicted in Figs. 8C-G. For instance, a low pass filtering results in a waveform as shown in Fig. 8C. A simple operation on the waveform shown in Fig. 8B would be the convolution with a triangle-shaped filter whose aperture is correlated to the height h2, therefore leading to a correlation of w1 with h2 and h1. This is shown in Fig. 8C. Other types of filtering may be applied, leading for example to the waveforms shown in Figs. 8D-F.
Multiplying, for example, the waveform shown in Fig. 8C with a certain factor, then coring and clipping between a lower boundary, indicating no suppression, and an upper boundary, indicating full suppression, can result in the waveform shown in Fig. 8G. The waveform shown in Fig. 8G indicates the amount of suppression of the motion compensated inverse filtering at given spatial positions. This suppression value can be applied to the output of the MCIF filter and thus reduces the effect of discontinuities within the vector field, resulting in reduced distortion due to filtering.
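The Fig. 8 pipeline can be sketched in one dimension as follows (kernel shape, gain and coring threshold are hypothetical tuning values, not specified by the patent):

```python
import numpy as np

def suppression_from_vectors(vx_line, gain=2.0, core=0.05):
    """1-D vector edge detector (sketch of the Fig. 8 pipeline).

    Edge-detect the motion vector line (Fig. 8B), widen the pulse with
    a triangle filter whose aperture tracks the edge height (Fig. 8C),
    then scale, core and clip to a suppression factor in [0, 1]
    (Fig. 8G; 1 = fully suppress the MCIF output).
    """
    v = np.asarray(vx_line, dtype=float)
    edge = np.abs(np.diff(v, append=v[-1]))       # pulse height h2 ~ step h1
    width = max(1, int(round(edge.max())))        # aperture correlated with h2
    tri = np.bartlett(2 * width + 1)              # triangle-shaped filter
    tri /= tri.sum()
    spread = np.convolve(edge, tri, mode='same')  # width w1 ~ h2 and h1
    s = gain * spread                             # multiply by a certain factor
    s[s < core] = 0.0                             # coring: no suppression
    return np.clip(s, 0.0, 1.0)                   # clipping: full suppression at 1

# A step in the vector field between two objects (2 vs. 8 pixels/frame):
vx = np.array([2.0] * 8 + [8.0] * 8)
s = suppression_from_vectors(vx)
```

The suppression factor peaks at the vector-field step and falls off smoothly on both sides, so the MCIF output is attenuated gradually rather than switched off abruptly.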
A transfer function 32 of such a suppressed motion compensated inverse filtering for various speeds (2, 4 and 8 pixels per frame) is illustrated in Figs. 6A-C. The suppression results in moderated inverse filtering even for areas with high speed and object boundaries, as can be seen from the moderate slope of transfer function 32. The vector edge detector can be constructed, for example, such that two-dimensional discontinuities in the vector field can be measured, for instance by combining a horizontal 'pass' on the horizontal component of the motion vector and a vertical 'pass' on the vertical component of the vector field.
In general, in areas where a discontinuity is present in the vector field, the motion compensated inverse filtering should be suppressed. This suppression can consist in reducing the strength (gain), reducing the filter aperture (stretch), or changing the filter coefficients in general. The reduction can be stepwise or, preferably, smooth.
Therefore, the inventive method provides changing the settings of the motion compensated inverse filtering at spatial positions where a discontinuity in the vector field is measured, such that a reduced effect of MCIF is obtained at said spatial positions, leading to less visible artifacts.
One possible implementation is depicted in Fig. 9. The filter comprises an image enhancement filter 40, which provides any appropriate image enhancement. Additionally, a motion estimator 14 and a vector edge detector 42 are provided. The motion estimator 14 provides the vector edge detector 42 with a vector field of the image. Using the vector field, the vector edge detector 42 can detect discontinuities within it. Using the information about these discontinuities, the vector edge detector 42 can calculate a suppression value to be applied to the output of the image enhancement filter 40 at the spatial positions of the discontinuities. The vector edge detector 42 can also calculate any change value to be applied to the image enhancement filter 40 for manipulating its output or its filter coefficients. This allows image enhancement that also relies on the vector field, and in particular on changes in the speed of motion of pixels.

Fig. 10 shows another possible implementation of a filter according to the invention. It comprises a motion compensated inverse filter 44 as already described with Fig. 3. The output of motion estimator 14 is provided to vector edge detector 42. The results of vector edge detector 42 can be utilized to calculate a suppression value applied to the output of the MCIF 44.

Fig. 11 shows a speed adaptive motion compensated inverse filter 44, as already depicted in Fig. 5A. The vector edge detector 42 calculates a suppression value based on the vector field received from motion estimator 14. The suppression value is applied to the output of motion estimator 14 and of the 1-dimensional high pass filter 26. Insofar, the aperture and the gain of the MCIF 44 are corrected with the suppression value.

Fig. 12 shows another possible embodiment. The filter of Fig. 12 differs from the filter depicted in Fig. 11 only in that the suppression values for suppressing the gain and the aperture differ: it is not necessary that the suppression for the different values is the same. This allows adapting the filter to display and input signal particularities. The inventive filtering allows adapting the filter output of an MCIF filter such that reduced artifacts occur at spatial positions of discontinuities within the vector field of the image.
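Applying such a suppression value to the enhancement signal, as in the Fig. 9 arrangement, can be sketched as follows (a minimal sketch; the signal names are illustrative):

```python
import numpy as np

def suppressed_enhancement(video_line, enhancement, suppression):
    """Attenuate the enhancement signal per pixel before adding it.

    suppression = 0 keeps the full enhancement; suppression = 1
    removes it entirely at the spatial positions of vector-field
    discontinuities.
    """
    s = np.clip(np.asarray(suppression, dtype=float), 0.0, 1.0)
    return (np.asarray(video_line, dtype=float)
            + (1.0 - s) * np.asarray(enhancement, dtype=float))

line = np.array([0.0, 1.0, 1.0, 0.0])   # input video data
enh = np.array([0.2, -0.2, 0.3, -0.1])  # output of the enhancement filter
s = np.array([0.0, 0.0, 1.0, 1.0])      # discontinuity in the right half
out = suppressed_enhancement(line, enh, s)
```

For the Fig. 12 variant, two independent suppression factors would be computed and applied separately to the gain and to the aperture, rather than the single per-pixel factor used here.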

Claims

CLAIMS:
1. Method for providing motion dependent image processing of video data, in particular for liquid crystal displays, with receiving an input video data, calculating an image enhancement value for the input video data, - adding the enhancement value to the input video data, and outputting the enhanced video data on a display, characterized by calculating a motion vector field for the input video data, measuring discontinuities within the vector field, and - changing the enhancement value with a change value at a spatial position where a discontinuity in the vector field is measured such that a reduced effect of image enhancement is obtained at least at the spatial position of the discontinuity.
2. The method of claim 1, with calculating the enhancement value using motion compensated inverse filtering (MCIF).
3. The method of claim 1 or 2, with changing a gain value of the motion compensated inverse filtering with the change value at least at the spatial position of the discontinuity.
4. The method of any one of claims 1 to 3, with changing a filter aperture value of the motion compensated inverse filtering with the change value at least at the spatial position of the discontinuity.
5. The method of any one of claims 1 to 4, with changing at least one filter coefficient of the motion compensated inverse filtering with the change value at least at the spatial position of the discontinuity.
6. The method of any one of claims 1 to 5, with determining the change value depending on the steepness of the discontinuity of the vector field.
7. The method of any one of claims 1 to 6, with determining the change value depending on the absolute value of the vector at the discontinuity of the vector field.
8. The method of any one of claims 1 to 7, with determining the change value depending on the absolute difference between vectors within the discontinuity of the vector field.
9. The method of any one of claims 1 to 8, with suppressing the enhancement value such that a reduced effect of image enhancement is obtained at least at the spatial position of the discontinuity.
10. The method of any one of claims 1 to 9, with clipping and/or coring the change value based on the vector field discontinuity.
11. The method of any one of claims 1 to 10, with providing filter taps within the motion compensated filter varying in the direction of the motion vector and in the distance from the central tap depending on the speed of the motion vector.
12. The method of any one of claims 1 to 11, with calculating the vector field using motion estimation.
13. Filter arranged for providing motion dependent image processing of video data, in particular for liquid crystal displays, with an input arranged for receiving an input video data, an image enhancement filter arranged for calculating an image enhancement value for the input video data, - an output arranged for adding the enhancement value to the input video data, and outputting the enhanced video data on a display, characterized by a motion estimator arranged for calculating a motion vector field for the input video data, a vector edge detector arranged for measuring discontinuities within the vector field, and outputting a change value at a spatial position where a discontinuity in the vector field is measured such that a reduced effect of image enhancement is obtainable at least at the spatial position of the discontinuity.
14. Computer program product for providing motion dependent image processing of video data, with a computer program operable to cause a processor to receive an input video data, calculate an image enhancement value for the input video data, - add the enhancement value to the input video data, put out the enhanced video data on a display, calculate a motion vector field for the input video data, measure discontinuities within the vector field, and change the enhancement value with a change value at a spatial position where a discontinuity in the vector field is measured such that a reduced effect of image enhancement is obtained at least at the spatial position of the discontinuity.
PCT/IB2005/053673 2004-11-16 2005-11-08 Video data enhancement WO2006054201A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP04105794.4 2004-11-16
EP04105794 2004-11-16

Publications (1)

Publication Number Publication Date
WO2006054201A1 true WO2006054201A1 (en) 2006-05-26

Family

ID=35703757

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2005/053673 WO2006054201A1 (en) 2004-11-16 2005-11-08 Video data enhancement

Country Status (1)

Country Link
WO (1) WO2006054201A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2280812A (en) * 1993-08-05 1995-02-08 Sony Uk Ltd Deblurring image data using motion vector dependent deconvolution
US5526044A (en) * 1990-04-29 1996-06-11 Canon Kabushiki Kaisha Movement detection device and focus detection apparatus using such device
WO2003100724A2 (en) * 2002-05-23 2003-12-04 Koninklijke Philips Electronics N.V. Edge dependent motion blur reduction

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
KLOMPENHOUWER, M.A., et al.: "LCD Motion Blur Reduction with Motion Compensated Inverse Filtering", SID International Symposium Digest of Technical Papers, Seattle, WA, 25-27 May 2004, vol. 35, part 2, pages 1340-1343, XP001222865 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2234778B1 (en) * 2007-12-21 2013-06-19 Robert Bosch GmbH Machine tool device and method with such a machine tool device
US8948903B2 (en) 2007-12-21 2015-02-03 Robert Bosch Gmbh Machine tool device having a computing unit adapted to distinguish at least two motions
US8966277B2 (en) 2013-03-15 2015-02-24 Mitsubishi Electric Research Laboratories, Inc. Method for authenticating an encryption of biometric data
CN103489190A (en) * 2013-09-26 2014-01-01 中国科学院深圳先进技术研究院 Method and system for extracting image feature curve
CN103489190B (en) * 2013-09-26 2016-05-11 中国科学院深圳先进技术研究院 Characteristics of image curve extracting method and system
US9978180B2 (en) 2016-01-25 2018-05-22 Microsoft Technology Licensing, Llc Frame projection for augmented reality environments

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KN KP KR KZ LC LK LR LS LT LU LV LY MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU LV MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 05805655

Country of ref document: EP

Kind code of ref document: A1