EP0944251A1 - Method for providing motion-compensated multi-field enhancement of still images from video - Google Patents


Info

Publication number
EP0944251A1
Authority
EP
European Patent Office
Prior art keywords
fields
field
auxiliary
reference field
motion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
EP98116451A
Other languages
German (de)
French (fr)
Other versions
EP0944251B1 (en)
Inventor
David S. Taubman
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
HP Inc
Original Assignee
Hewlett Packard Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hewlett Packard Co filed Critical Hewlett Packard Co
Priority to EP02028854A (EP1296515A3)
Publication of EP0944251A1
Application granted
Publication of EP0944251B1
Anticipated expiration
Legal status: Expired - Lifetime

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/44: Receiver circuitry for the reception of television signals according to analogue transmission standards
    • H04N5/4448: Receiver circuitry for the reception of television signals according to analogue transmission standards for frame-grabbing
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/14: Picture signal circuitry for video frequency region
    • H04N5/144: Movement detection
    • H04N5/145: Movement estimation

Definitions

  • This invention relates to a method and system for combining the information from multiple video fields into a single, high quality still image.
  • Video sources have the advantage that many pictures of the same scene are available, usually with relatively small displacements of the scene elements between consecutive fields. After suitable compensation for motion, these multiple pictures can be combined to produce a still image with less noise. Perhaps more importantly, the existence of motion effectively provides a denser sampling of the optical scene than is available from any single field. This opens up the possibility of aliasing removal as well as resolution enhancement.
  • This invention disclosure describes a system for combining the information from multiple video fields into a single high quality still image.
  • One of the fields is selected to be the reference and the remaining fields are identified as auxiliary fields.
  • The system reduces the noise, as well as the luminance and color aliasing artifacts associated with the reference field, while enhancing its resolution, by utilizing information from the auxiliary fields.
  • An orientation map is constructed for the reference field and is used to directionally interpolate this field up to four times the vertical field resolution.
  • Motion maps are constructed to model the local displacement between features in the reference field and corresponding features in each of the auxiliary fields. Motion is computed to quarter pixel accuracy in the vertical direction and half pixel accuracy in the horizontal direction, using the directionally interpolated reference field to accomplish the sub-pixel search.
  • The motion maps are used firstly to infer an orientation map for each of the auxiliary fields directly from the reference field's orientation map (orientation maps could be computed for each field separately, if the computational demand were not considered excessive) and later to guide incorporation of information from the auxiliary fields into the reference field.
  • The auxiliary fields are then directionally interpolated to the same resolution as the interpolated reference field, using their inferred orientation maps.
  • A merge mask is determined for each auxiliary field to mask off pixels which should not be used in the final enhanced still image; the masked-off pixels generally correspond to regions where the motion maps fail to correctly model the relationship between the reference and auxiliary fields; such regions might involve uncovered background, for example.
  • A weighted average is formed from the reference field pixels and the motion-compensated auxiliary field pixels which have not been masked off.
  • The weights associated with this weighted averaging operation are spatially varying and depend upon both the merge masks and the displacements recorded in the motion maps. Unlike conventional field averaging techniques, this approach does not destroy available picture resolution in the process of removing aliasing artifacts.
  • The final still image is obtained after horizontal interpolation by an additional factor of two (to obtain the correct aspect ratio after the fourfold vertical interpolation described above) and an optional post-processing operation which sharpens the image formed by the weighted averaging process described above.
  • The above processing steps are modified somewhat for the chrominance components to reflect the fact that these components carry much less spatial frequency content than the luminance component.
  • This image enhancement system can work with any number of video fields. If only one field is supplied, the system employs the sophisticated directional interpolation technique mentioned above. If additional fields are available, they are directionally interpolated and merged into the interpolated reference field so as to progressively enhance the spatial frequency content, while reducing noise and other artifacts. In the special case where two fields are available, the system may also be understood as a "de-interlacing" tool.
  • H_F and W_F denote the number of rows (height) and columns (width) of each digitized video field.
  • The chrominance components are treated differently.
  • Original chrominance fields each have H_F rows, but only W_F/4 columns.
  • The video digitization and decoding operations may produce chrominance components with these resolutions, or else the process may decimate the chrominance components of a collection of video fields which have already been decoded. In this way the process reduces the memory requirements and computational demand associated with the multi-field enhancement operation, without sacrificing actual information.
  • The chrominance components from the various fields are merged by simple averaging, after invalid pixels from regions which do not conform to the estimated inter-field motion model have been masked off. This temporal averaging is able to reduce noise and color aliasing artifacts but is not able to enhance the spatial frequency content of the image.
  • The luminance components from the various fields are merged using a spatially varying weighted average, whose weights are computed from the estimated inter-field motion so as to remove aliasing while enhancing the spatial frequency content of the image.
  • Horizontal doubling of the luminance resolution may be achieved by applying the interpolation filter kernel, (-1/8, 5/8, 5/8, -1/8).
  • This kernel has been selected to preserve the horizontal frequency response of the original video signal, while allowing for a multiplication-free implementation.
  • The same interpolation kernel is used to expand the horizontal chrominance resolution by a factor of two, after which the chrominance components are expanded by an additional factor of two in both directions using conventional bilinear interpolation.
  • Section 2 discloses a method to estimate local orientations within the reference field, along with an interpolation procedure used to directionally interpolate the reference and auxiliary fields according to the estimated orientation map.
  • Section 1 discloses a method to obtain the motion maps between the reference and auxiliary fields.
  • Section 3 discloses methods to build merge masks and merge weighting factors, along with the fast algorithm used to actually merge the reference and auxiliary fields into an enhanced still image.
  • The reference field is first segmented into non-overlapping blocks which have approximately 15 field rows and 23 field columns each in the current implementation. This segmentation is depicted in Figure 1a. Each of these segmentation blocks 10 is surrounded by a slightly larger motion block 12, as depicted in Figure 1b. Adjacent motion blocks overlap one another by two field rows 14 and four field columns 16 in the current implementation.
  • The motion estimation sub-system is responsible for computing a single motion vector for each motion block, for each auxiliary field.
  • The motion vector is intended to describe the displacement of scene objects within the motion block, between the reference and the relevant auxiliary field. These motion vectors are used to guide the process described in Section 3, whereby interpolated reference and auxiliary fields are merged into a single image.
  • This merging process is performed independently on each motion block, after which the merged motion blocks are stitched together to form the final image.
  • A smooth weighting function is used to average overlapping regions from adjacent motion blocks. The purpose of this section is to describe only the method used to estimate motion vectors for any given motion block.
  • For each of the auxiliary fields, the system processes the motion blocks in lexicographical fashion. Estimation of the motion vector for each of these blocks proceeds in three distinct phases. The first phase attempts to predict the motion vector based on the motion vectors obtained from previously processed, neighboring motion blocks. The prediction strategy is described in Section 1.1. This predicted motion vector is used to bias the efficient and robust coarse motion estimation technique, described in Section 1.2, which estimates motion to full pixel accuracy only. The final refinement to quarter pixel accuracy in the vertical direction and half pixel accuracy in the horizontal direction is performed using a conventional MAD block matching technique. Details are supplied below in Section 1.3.
  • Let v̂_{m,n} denote the coarse - i.e. pixel resolution - motion vector for the (m,n)'th motion block, i.e. the n'th motion block in the m'th row of motion blocks associated with the reference field. Since motion vectors are estimated in lexicographical order, the neighboring coarse motion vectors, v̂_{m,n-1}, v̂_{m-1,n-1}, v̂_{m-1,n} and v̂_{m-1,n+1}, have already been estimated and can be used to form an initial prediction for v̂_{m,n}.
  • The motion estimation sub-system sets the predicted vector, ṽ_{m,n}, to be the arithmetic mean of the three least disparate of these four neighboring motion vectors.
  • The disparity among any collection of three motion vectors, v_1, v_2 and v_3, is defined as D(v_1, v_2, v_3) = Σ_{i=1..3} ‖v_i - (v_1 + v_2 + v_3)/3‖_1, i.e. the sum of the L1 distances between each of the vectors and their arithmetic mean.
  • One reason for forming this prediction, ṽ_{m,n}, is not to reduce the full motion vector search time, but rather to encourage the development of smooth motion maps. In regions of the scene which contain little information from which to estimate motion, we prefer to adopt the predicted vectors whenever this is reasonable. Otherwise, the "random" motion vectors usually produced by block motion estimation algorithms in these regions can cause annoying visual artifacts when attempts are made to merge the reference and auxiliary fields into a single still image.
  • The process produces a coarse - i.e. full pixel accuracy - estimate of the motion, v̂_{m,n}, between the reference field and a given auxiliary field, for the (m,n)'th motion block.
  • The coarse motion estimation sub-system performs a full search which, in one embodiment of the system, involves a search range of |v^x_{m,n}| ≤ 10 field columns and |v^y_{m,n}| ≤ 5 field rows.
  • Let E_min denote the minimum value attained by the objective function, E(v), over the search range, and let V denote the set of all motion vectors, v, such that E(v) ≤ E_min + T_m, where T_m is a pre-defined threshold.
  • The final coarse motion estimate, v̂_{m,n}, is taken to be the vector, v ∈ V, which is closest to the predicted vector, ṽ_{m,n}.
  • The L1 distance metric is used, so that the distance between v and ṽ_{m,n} is |v^x - ṽ^x_{m,n}| + |v^y - ṽ^y_{m,n}|.
  • The actual objective function, E(v), used for coarse motion estimation, is disclosed as follows. Rather than using the computationally intensive Mean Absolute Difference (MAD) objective, the process constructs the objective function from a novel 2 bit per pixel representation of the reference and auxiliary fields. Specifically, the luminance components of the reference and auxiliary fields are first pre-conditioned with a spatial bandpass filter, as described in Section 1.2.1; a two bit representation of each bandpass filtered luminance sample is formed by a simple thresholding technique, described in Section 1.2.2; and then the objective function is evaluated by a combination of "exclusive or" (XOR) and counting operations which are applied to the two bit pixel representations, as described in Section 1.2.3.
  • A simple bandpass filter is constructed, in one embodiment, by taking the difference of two moving window lowpass filters. Specifically, let y[i,j] denote the luminance sample from any given field at row i and column j.
  • The bandpass filtered pixel, ȳ[i,j], is computed as the difference between the mean of y over a local-scale window of L_x columns by L_y rows and the mean of y over a wide-scale window of W_x columns by W_y rows, both centered at (i,j).
  • The scaling operations may be reduced to shift operations by ensuring that each of these four dimensions is a power of 2, in which case the entire bandpass filtering operation may be implemented with four additions, four subtractions and two shifts per pixel.
  • This bandpass filtering operation desensitizes the motion estimator to inter-field illumination variations, as well as to high frequency aliasing artifacts in the individual fields. At the same time it produces pixels which have zero mean - a key requirement for the generation of useful two bit per pixel representations according to the method described in the following section.
  • Each filtered sample, ȳ[i,j], is assigned a two bit representation in the preferred embodiment of the invention, where this representation is based on a parameter, T_b.
  • The first bit is set to 1 if ȳ[i,j] > T_b and 0 otherwise, while the second bit is set to 1 if ȳ[i,j] < -T_b and 0 otherwise.
  • This two bit representation entails some redundancy in that it quantizes ȳ[i,j] into only three different regimes.
  • The representation has the following important property, however.
  • If d(ȳ_1, ȳ_2) represents the total number of 1 bits in the two bit result obtained by taking the exclusive or of corresponding bits in the two bit representations associated with ȳ_1 and ȳ_2, then it is easy to verify that d(ȳ_1, ȳ_2) is 0 when the two samples lie in the same regime, 1 when exactly one of them lies in the central regime, -T_b ≤ ȳ ≤ T_b, and 2 when they lie in opposite extreme regimes.
  • Thus d(ȳ_1, ȳ_2) may be interpreted as a measure of the distance between ȳ_1 and ȳ_2.
  • Our objective function, E(v), is constructed by taking the sum of the two bit distance metric, d, over all pixels, (i,j), which lie within a coarse matching block; this coarse matching block is generally larger than the motion block itself.
  • Here ȳ_r[i,j] is the bandpass filtered sample at row i and column j of the reference field, while ȳ_a[i,j] is the bandpass filtered sample at row i and column j of the auxiliary field.
  • The coarse matching block consists of 20 field rows by 32 field columns, surrounding the motion block of interest.
  • A conventional MAD (Mean Absolute Difference) search is performed, with a search range of one field column and half a field row around the vector returned by the coarse motion estimation sub-system.
  • The auxiliary fields are pre-conditioned by applying a five tap vertical low-pass filter with kernel, (1/16, 4/16, 6/16, 4/16, 1/16), prior to performing the motion refinement search. This low-pass filtering reduces sensitivity to vertical aliasing.
  • The object of orientation estimation is to identify the direction of image edges in the neighborhood of any given pixel, so that the intra-field vertical interpolation operation described in Section 2.2 below can interpolate along, rather than across, an edge.
  • One key requirement for the estimator is that the resulting orientation map be as smooth as possible. It is preferred that the orientation map not fluctuate wildly in textured regions or in smooth regions which lie near to actual image edges, because such fluctuations might manifest themselves as visually disturbing artifacts after interpolation. It is therefore important to control the numerical complexity of the estimation technique.
  • The orientation estimation sub-system works with the luminance component of the reference field. For each "target" luminance pixel in this field, an orientation class is selected.
  • The orientation class associated with a given target row and target column is to be interpreted as the dominant feature orientation observed in a neighborhood whose centroid lies between the target and next field rows and between the target and next field columns. This centroid is marked with a cross pattern 20 in Figure 2.
  • The figure also illustrates the set of eight orientation classes which may be associated with each target luminance pixel; they are: N (no distinct orientation), V (90°, vertical), V- and V+ (63°, near vertical), D- and D+ (45°, diagonal), and O- and O+ (27°, oblique).
  • The orientation class associations for each luminance pixel in the reference field constitute the orientation map.
  • The estimation strategy consists of a number of elements, whose details are discussed separately below. Essentially, a numerical value, L_C, is assigned to each of the distinctly oriented classes, C ∈ {V, V-, V+, D-, D+, O-, O+}, which is to be interpreted as the likelihood that a local orientation feature exists with the corresponding orientation.
  • The estimated orientation class is tentatively set to the distinct orientation class, C, which has the maximum value, L_C.
  • The likelihood value, L_C, for the selected class is then compared with an orthogonal likelihood value, L⊥_C, which represents the likelihood associated with the orthogonal direction.
  • If L_C does not sufficiently exceed L⊥_C, the orientation class is set to N, i.e. no distinct orientation.
  • The orthogonal likelihood value is obtained from the table of Figure 3.
  • The orientation map obtained in the manner described above is subjected to a final morphological smoothing operation to minimize the number of disturbing artifacts produced during directional interpolation. This smoothing operation is described in Section 2.1.3.
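  • By way of illustration only, the class decision described above might be sketched as follows; the margin in the comparison against the orthogonal likelihood is our assumption, since only the existence of the comparison is disclosed here.

```python
MARGIN = 0.0  # assumed decision margin; the text only says the two values are compared

def classify(likelihood, orthogonal_likelihood):
    """Tentative orientation decision of Section 2.1.

    `likelihood` maps each directed class (V, V-, V+, D-, D+, O-, O+) to
    its likelihood value L_C; `orthogonal_likelihood` maps each class to
    the orthogonal likelihood value (the table of Figure 3). Falls back
    to 'N' when the selected class does not sufficiently dominate its
    orthogonal direction."""
    c = max(likelihood, key=likelihood.get)
    if likelihood[c] <= orthogonal_likelihood[c] + MARGIN:
        return "N"   # no distinct orientation
    return c
```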
  • To compute L_C for each directed orientation class, C, the luminance pixels are first processed using a directional low-pass filter which smooths in a direction approximately perpendicular to that of C.
  • L_C is then based on a Total Variation (TV) metric, computed along a trajectory which is parallel to the orientation of C; the larger the variation, the smaller the likelihood value.
  • The reference field's luminance component is first pre-conditioned by applying a vertical low-pass filter with the three tap kernel, (1/4, 1/2, 1/4).
  • This pre-conditioning filter reduces the influence of vertical aliasing artifacts which can adversely affect the estimation sub-system.
  • The likelihood values, L_C, are found by negating a set of corresponding "unlikelihood" values, U_C.
  • The unlikelihood values for each directed orientation class are computed by applying an appropriate "total variation" measure to the reference field's luminance component, after pre-conditioning and appropriate directional pre-filtering, as described in Section 2.1.1 above.
  • The vertical unlikelihood value, U_V, is calculated as the total variation along a vertical trajectory of four intermediate values, υ_1, υ_2, υ_3 and υ_4, which are linear combinations of pixels from the pre-conditioned and horizontally pre-filtered luminance field; these linear combinations are depicted in Figure 5.
  • The centroid of this calculation lies halfway between the target 50 and next field row 52 and halfway between the target 54 and next field column 56, which is in agreement with Figure 2.
  • The near vertical unlikelihood values, U_V- and U_V+, are calculated in the same way from the υ^±_i terms, which represent pixel values from the pre-conditioned and horizontally pre-filtered luminance field; the relevant pixels are depicted in Figures 6a and 6b. Again, the centroid of these calculations lies half a field row below and half a field column to the right of the target field row 60 and column 62, as required for consistency with the definition of the orientation classes.
  • The diagonal unlikelihood values, U_D- and U_D+, are calculated from the d^±_i terms, each of which represents a linear combination of two pixel values from the pre-conditioned and diagonally pre-filtered luminance field.
  • The d^-_i terms are formed after applying the diagonal pre-filter shown in Figure 4b, while the d^+_i terms are formed after applying the diagonal pre-filter shown in Figure 4c.
  • The pixels and weights used to form the d^-_i and d^+_i terms are illustrated in Figures 7a and 7b, respectively. Notice that the centroid of these calculations again lies half a field row below and half a field column to the right of the target field row 70 and column 72, as required for consistency with the definition of the orientation classes.
  • The oblique unlikelihood values, U_O- and U_O+, are calculated from the o^±_i terms, each of which represents a linear combination of two pixel values from the pre-conditioned and near vertically pre-filtered luminance field.
  • The o^-_i terms are formed after applying the near vertical pre-filter shown in Figure 4d, while the o^+_i terms are formed after applying the near vertical pre-filter shown in Figure 4e.
  • The pixels and weights used to form the o^-_i and o^+_i terms are illustrated in Figures 8a and 8b, respectively. Notice that the centroid of these calculations again lies half a field row below and half a field column to the right of the target field row 80 and column 82, as required for consistency with the definition of the orientation classes.
  • The morphological smoothing operator takes, as its input, the initial classification of each pixel in the reference field into one of the orientation classes, V, V-, V+, D-, D+, O- or O+, and produces a new classification which generally has fewer inter-class transitions.
  • Let C_{m,n} denote the initial orientation classification associated with target field row m 90 and target field column n 92.
  • The smoothing operator generates a potentially different classification, C̃_{m,n}, by considering C_{m,n} together with the 14 neighbors 94 depicted in Figure 9.
  • The smoothing policy is that C̃_{m,n} should be identical to C_{m,n}, unless either a majority of the 6 neighbors 96 lying to the left of the target pixel and a majority of the 6 neighbors 98 lying to the right of the target pixel all have the same classification, C, or a majority of the 5 neighbors 100 lying above the target pixel and a majority of the 5 neighbors 102 lying below the target pixel all have the same classification, C. In either of these two cases, C̃_{m,n} is set to C.
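  • The smoothing rule lends itself to a direct implementation, sketched below. The sketch assumes the 14 neighbors of Figure 9 form a 3-row by 5-column window around the target pixel (6 to the left, 6 to the right, 5 above, 5 below, minus the centre); this geometry is our reading of the figure, and border pixels are assumed to be handled separately.

```python
from collections import Counter

def majority(classes):
    """Return the class held by a strict majority of `classes`, or None."""
    cls, cnt = Counter(classes).most_common(1)[0]
    return cls if cnt > len(classes) // 2 else None

def smooth_class(cmap, m, n):
    """One application of the Section 2.1.3 smoothing rule at (m, n).
    `cmap` is the initial orientation class map (2-D, indexable)."""
    left  = [cmap[m + r][n + c] for r in (-1, 0, 1) for c in (-2, -1)]  # 6 neighbors
    right = [cmap[m + r][n + c] for r in (-1, 0, 1) for c in (1, 2)]    # 6 neighbors
    above = [cmap[m - 1][n + c] for c in range(-2, 3)]                  # 5 neighbors
    below = [cmap[m + 1][n + c] for c in range(-2, 3)]                  # 5 neighbors
    l, r = majority(left), majority(right)
    if l is not None and l == r:
        return l
    a, b = majority(above), majority(below)
    if a is not None and a == b:
        return a
    return cmap[m][n]   # otherwise the classification is left unchanged
```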
  • This section describes the directional interpolation strategy which is used to quadruple the number of luminance rows and double the number of chrominance rows.
  • Let Y_{4m,n}, as in Figure 10, denote the luminance pixel at field row m and field column n.
  • The purpose of luminance interpolation is to derive three new luminance rows, Y_{4m+1,n}, Y_{4m+2,n} and Y_{4m+3,n}, between every pair of original luminance rows, Y_{4m,n} 120 and Y_{4m+4,n} 122.
  • The purpose of chrominance interpolation is to derive one new chrominance row, C_{2m+1,k}, between every pair of original chrominance rows, C_{2m,k} and C_{2m+2,k}.
  • Only one of the chrominance components is explicitly referred to, with the understanding that both chrominance components should be processed identically.
  • We use the index k, rather than n, to denote chrominance columns, since the horizontal chrominance resolution is only one quarter of the horizontal luminance resolution.
  • C_{m,n} refers to the local orientation class associated with a region whose centroid lies between field rows m and m+1.
  • The missing luminance samples, Y_{4m+1,n}, Y_{4m+2,n} and Y_{4m+3,n}, are linearly interpolated based on a line 124 drawn through the missing sample location 126 with the orientation, C_{m,n}.
  • The interpolation filter of equation (1) (the (-1/8, 5/8, 5/8, -1/8) kernel) is used to minimize loss of spatial frequency content during this interpolation process.
  • The non-directed orientation class, N, defaults to the same vertical interpolation strategy as the V class.
  • Chrominance components are treated similarly. Specifically, the missing chrominance sample, C_{2m+1,k}, is linearly interpolated based on a line drawn through the missing sample location with the orientation, C_{m,2k}. Again, chrominance samples from the original field rows, C_{2m,k} and C_{2m+2,k}, must often be horizontally interpolated to find sample values on the end-points of the oriented interpolation lines.
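  • A simplified sketch of the oriented luminance interpolation is given below. The horizontal drift per field row implied by each class angle (0 for V, about 0.5 column for 63°, 1 for 45°, about 2 for 27°) and the sign conventions are our reading of the class definitions, and plain linear interpolation stands in for the equation (1) kernel at fractional column positions.

```python
import numpy as np

# Assumed horizontal drift, in field columns per field row
# (positive = feature runs top-left to bottom-right).
DRIFT = {"N": 0.0, "V": 0.0, "V-": 0.5, "V+": -0.5,
         "D-": 1.0, "D+": -1.0, "O-": 2.0, "O+": -2.0}

def sample(row, x):
    """Linearly interpolate a 1-D row at fractional column x (the patent
    would use the equation (1) kernel at half-column positions)."""
    x = min(max(x, 0.0), row.size - 1.0)
    j = int(np.floor(x))
    f = x - j
    return (1 - f) * row[j] + f * row[min(j + 1, row.size - 1)]

def interp_missing(field, m, n, k, cls):
    """Recover missing luminance sample Y_{4m+k,n} (k = 1, 2, 3) by
    interpolating along the line of orientation `cls` through it,
    between original field rows m and m+1."""
    dx = DRIFT[cls]
    top = sample(field[m], n - dx * k / 4.0)            # intersection with row 4m
    bot = sample(field[m + 1], n + dx * (1 - k / 4.0))  # intersection with row 4m+4
    return (1 - k / 4.0) * top + (k / 4.0) * bot
```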
  • The merging process is guided by a single sub-pixel accurate motion vector for each of the auxiliary fields.
  • The first step involves generation of a merge mask for each auxiliary field, to identify the regions in which this motion vector may be considered to describe scene motion between the reference field and the relevant auxiliary field. Merge mask generation is discussed in Section 3.1 below.
  • The next step is to assign weights to each spatially interpolated pixel in the reference and auxiliary fields, identifying the contribution that each will make to the merged image. This step is discussed in Section 3.2 below.
  • The weighted average of pixels from the various fields is formed using the fast technique described in Section 3.3 below.
  • The general objective of the merge mask generation sub-system is to determine a binary mask value for each pixel in the original reference field (not the interpolated reference field).
  • The mask value for a particular pixel identifies whether or not the motion vector associated with the given auxiliary field correctly describes scene motion between the reference and auxiliary fields in the vicinity of that pixel.
  • Our basic approach for generating these masks involves computing a directionally sensitive weighted average of neighboring pixels in the reference field and corresponding motion compensated pixels in the auxiliary field and comparing these averages.
  • One key to the success of this method is the directional sensitivity of the local pixel averages.
  • Let y_r[i,j] denote the luminance sample at row i and column j in the reference field.
  • Let y_a[i,j] denote the corresponding pixel in the auxiliary field, after compensating for the estimated motion vector. Note that motion compensation may involve sub-pixel interpolation, since our motion vectors are estimated to sub-pixel accuracy. If the motion vector correctly describes motion in the vicinity of pixel (i,j), it might be expected that neighborhood averages around y_r[i,j] and y_a[i,j] would yield similar results.
  • To form these averages, the orientation map discussed in Section 2.1 is used. Only in the special case when the orientation class for pixel (i,j) is N, i.e. no distinct direction, does the process use a non-directional weighted average, whose weights are formed from the tensor product of the seven tap horizontal kernel, (1/2, 1, 1, 1, 1, 1, 1/2), and the five tap vertical kernel, (1/2, 1, 1, 1, 1/2), in one particular embodiment of the invention.
  • If the averages disagree, the merge mask, m_{i,j}, is set to 0, indicating that the motion model should not be considered valid in this region.
  • In the directional case, the reference and auxiliary fields are first filtered with the three tap horizontal low-pass kernel, (1/4, 1/2, 1/4), and then four one-dimensional weighted averages, µ[i,j], µ_1[i,j], µ_2[i,j] and µ_3[i,j], are computed.
  • Each of these weighted averages is taken along a line oriented in the direction identified by the orientation class for pixel (i,j), using the weights, (1/2, 1, 1, 1, 1/2).
  • The oriented line used to form µ[i,j] is centered about pixel (i,j) in the reference field.
  • The line used to form µ_2[i,j] is centered about pixel (i,j) in the auxiliary field.
  • The lines for µ_1[i,j] and µ_3[i,j] have centers which fall on either side of pixel (i,j) in the auxiliary field, displaced by approximately half a field row or one field column, as appropriate, in the direction orthogonal to that identified by the orientation class.
  • If |µ[i,j] - µ_2[i,j]| exceeds a pre-determined threshold, it is concluded that the motion vector does not describe scene motion in the vicinity of pixel (i,j) and m_{i,j} is set to 0 accordingly. Otherwise, it is concluded that the motion vector is approximately accurate, but the process must still check whether it is sufficiently accurate for field merging to improve the quality of an edge feature. Reasoning that a small motion error would generally cause one, but not both, of µ_1[i,j] and µ_3[i,j] to be closer than µ_2[i,j] to µ[i,j], the process tests for this condition, setting m_{i,j} to 0 whenever it is found to be true.
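  • The per-pixel decision logic, given the four oriented averages, can be summarized as in the sketch below; the threshold value and the exact form of the second test are our reading of the description above.

```python
def merge_mask_value(mu, mu1, mu2, mu3, T=10.0):
    """Merge mask decision of Section 3.1 for one pixel.

    `mu` is the oriented average in the reference field; `mu2` is the
    average centred on the motion-compensated auxiliary pixel; `mu1` and
    `mu3` are centred half a field row (or one field column) to either
    side. `T` is the pre-determined threshold (value assumed here)."""
    if abs(mu - mu2) > T:
        return 0            # motion model plainly wrong in this region
    closer1 = abs(mu - mu1) < abs(mu - mu2)
    closer3 = abs(mu - mu3) < abs(mu - mu2)
    if closer1 != closer3:
        return 0            # exactly one side matches better: small motion error
    return 1                # motion vector accepted for merging
```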
  • The merging sub-system forms a weighted average between the directionally interpolated pixels from each field.
  • This section describes the methodology used to determine the relevant weights.
  • The chrominance and luminance components are treated in fundamentally different ways, since most of the spatial information is carried only in the luminance channel. All chrominance samples in the reference field are assigned a weight of 1, while chrominance samples from the auxiliary fields are assigned weights of either 1 or 0, depending only on the value of the merge mask for the relevant auxiliary field. In this way, the chrominance components are simply averaged across all fields, except in regions where the motion vectors do not reflect the underlying scene motion. This has the effect of substantially reducing chrominance noise.
  • While the directional spatial interpolation technique is able to enhance the spatial frequency content of oriented edge features, textured regions are entirely dependent upon the information from multiple fields for resolution enhancement.
  • Simple averaging of the interpolated fields has the effect of subjecting the original spatial frequencies in the scene to a low pass filter whose impulse response is identical to the spatial interpolation kernel. If an "ideal" sinc interpolator were used to vertically interpolate the missing rows in each field, the result would be an image which has no vertical frequency content whatsoever beyond the Nyquist limit associated with a single field.
  • In practice, linear interpolation is used to interpolate the missing field rows prior to merging, and the averaging process does tend to eliminate aliasing artifacts.
  • To preserve vertical resolution, a space varying weighting function can be adopted. Specifically, each luminance sample in any given auxiliary field is assigned a weight of 2 if it corresponds to an original pixel from that field, 1 if it is located within one interpolated row (i.e. one quarter of a field row) from an original pixel, and 0 otherwise. If the relevant merge mask is zero, then the process sets the weight to 0, regardless of the distance between the sample and an original field sample.
  • The reference field luminance samples are weighted in the same manner, except that all samples are assigned a weight of at least 1, in order to ensure that at least one non-zero weight is available for every sample in the merged image.
  • This weighting policy has the effect of subjecting vertical frequencies to a much less severe lowpass filter than simple averaging with uniform weights.
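  • The luminance weighting rule reduces to a small computation on the 4x interpolated row grid, as sketched below; representing the motion-compensated position of a field's original rows by a row offset modulo 4 is a simplification we assume for illustration.

```python
def luma_weight(r, offset, is_reference, mask=1):
    """Section 3.2 weight for the sample on interpolated grid row r of a
    field whose original rows land on grid rows congruent to `offset`
    (mod 4) after motion compensation.

    2 -> original pixel; 1 -> within one interpolated row (a quarter of
    a field row) of one; 0 -> otherwise or masked off. Reference-field
    weights are floored at 1 so every merged pixel has support."""
    d = min((r - offset) % 4, (offset - r) % 4)  # rows to nearest original row
    w = {0: 2, 1: 1}.get(d, 0)
    if is_reference:
        return max(w, 1)
    return 0 if mask == 0 else w
```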
  • This section describes an efficient method used to implement the weighted averaging of interpolated sample values from the reference and auxiliary fields.
  • The same technique can be used for both luminance and chrominance samples.
  • Let x_1, x_2, ..., x_F denote the sample values to be merged from each of F fields to form a single luminance or chrominance sample in the final image.
  • Let w_1, w_2, ..., w_F denote the corresponding weighting values, which take on values of 0, 1 or 2 according to the discussion in Section 3.2.
  • The process constructs a single 16 bit word for each sample.
  • The least significant 9 bits of this word hold the weighted sample value, x_f · w_f; the next 3 bits are set to zero; and the most significant 4 bits hold the weight, w_f. The 3 zero bits act as guard bits, so that summing the F words accumulates the total weighted sample value in the low bits and the total weight in the high bits simultaneously.
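  • A minimal sketch of the packing trick follows; sample values are assumed to be 8 bit and the field count small enough that neither sub-total overflows its allotment.

```python
import numpy as np

def merge(samples, weights):
    """Fast weighted average of Section 3.3.

    Packs each (sample, weight) pair into a 16 bit word: bits 0-8 hold
    w*x, bits 9-11 are zero guard bits, bits 12-15 hold w. Summing the
    packed words accumulates weighted values and weights in one pass."""
    samples = np.asarray(samples, dtype=np.uint32)
    weights = np.asarray(weights, dtype=np.uint32)
    packed = (weights << 12) | (weights * samples)
    total = int(packed.sum())
    wsum = total >> 12        # accumulated weight (reference weight >= 1)
    vsum = total & 0x0FFF     # accumulated weighted sample value
    return vsum // wsum

# e.g. merge([200, 198, 205, 201], [1, 2, 0, 2]) -> 199
```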
  • While the multi-field enhancement system disclosed in this document may appear to involve numerous operations, it should be noted that an efficient implementation need not make exorbitant demands on the computing or memory resources of a general purpose computer. This is because the numerous intermediate results required to implement the various sub-systems described earlier may be generated and discarded incrementally on a row-by-row basis. Moreover, intermediate results may often be shared among the different sub-systems. Many parameters such as filter coefficients and dimensions have been selected with a view to implementation efficiency. As an example, to process four full color video fields, each with 240 rows and 640 columns, the system requires a total storage capacity of only 1.1 MB, almost all of which (0.92 MB) is used to store the source fields themselves.

Abstract

A method and system for combining the information from one video field, or multiple video fields, into a single, high quality still image. A reference field and auxiliary fields are selected and an orientation map is constructed for the reference field. Motion maps are constructed to model displacement between the reference and auxiliary fields. The auxiliary fields are directionally interpolated using orientation maps. A merge mask is used to mask off certain pixels which should not be used in the final enhanced image. A weighted average is then formed from the reference field pixels and the motion-compensated auxiliary field pixels which have not been masked off. A final still image is obtained after additional horizontal interpolation. Post-processing might be used to further sharpen the image. The method and system are applicable to both the luminance and chrominance components of the video image. The method and system serve to reduce the noise, as well as the luminance and color aliasing artifacts associated with the reference field, while enhancing its resolution, by utilizing information from the auxiliary fields.

Description

    Field of the Invention
  • This invention relates to a method and system for combining the information from multiple video fields into a single, high quality still image.
  • Background of the Invention
  • Individual fields from video sources generally exhibit the following shortcomings:
  • sensor, tape and transmission noise;
  • luminance aliasing due to insufficiently dense spatial sampling of the optical scene;
  • chrominance aliasing due to insufficiently dense spatial sampling of particular color components in the optical scene (often occurs with single CCD video cameras which can only sense one color component at each pixel position);
  • relatively poor resolution.
  • However, video sources have the advantage that many pictures of the same scene are available, usually with relatively small displacements of the scene elements between consecutive fields. After suitable compensation for motion, these multiple pictures can be combined to produce a still image with less noise. Perhaps more importantly, the existence of motion effectively provides a denser sampling of the optical scene than is available from any single field. This opens up the possibility of aliasing removal as well as resolution enhancement.
  • While analog video is considered here, many of the following observations also apply to a variety of digital video sources. One observation is that the resolution of the chrominance components is significantly lower than that of the luminance component. Specifically, the horizontal chrominance resolution of an NTSC (National Television System Committee) broadcast video source is about 1/7 that of the luminance. Also, although the NTSC standard does not limit the vertical resolution of the chrominance components below that of the luminance components, most popular video cameras inherently halve the vertical chrominance resolution, due to their single CCD design. Since the chrominance components carry very little spatial information in comparison to the luminance component, a process might focus resolution enhancement efforts on the luminance channel alone. Moreover, the computational demand of the multi-field enhancement system can be reduced by working with a coarser set of chrominance samples than that used for the luminance component.
  • A second observation concerning analog video is that the luminance component is often heavily aliased in the vertical direction, but much less so in the horizontal direction. This is to be expected, since the optical bandwidth is roughly the same in both the horizontal and vertical directions, but the vertical sampling density is less than half the horizontal sampling density. Moreover, newer video cameras employ CCD sensors with an increasing number of sensors per row, whereas the number of sensor rows is set by the NTSC standard. Empirical experiments confirm the expectation that high horizontal frequencies experience negligible aliasing, whereas high vertical frequencies are subjected to considerable aliasing. Hence, it is unlikely to be possible to increase the horizontal resolution of the final still image through multi-field processing; however, it should be possible to "unwrap" aliasing components to enhance the vertical resolution and remove the annoying aliasing artifacts ("jaggies") around non-vertical edges.
  • Hence, what is needed is a method and system for combining the information from multiple video fields into a single, high quality still image.
  • Summary of the Invention
  • This invention disclosure describes a system for combining the information from multiple video fields into a single high quality still image. One of the fields is selected to be the reference and the remaining fields are identified as auxiliary fields. The system reduces the noise, as well as the luminance and color aliasing artifacts associated with the reference field, while enhancing its resolution, by utilizing information from the auxiliary fields.
  • An orientation map is constructed for the reference field and is used to directionally interpolate this field up to four times the vertical field resolution.
  • Motion maps are constructed to model the local displacement between features in the reference field and corresponding features in each of the auxiliary fields. Motion is computed to quarter pixel accuracy in the vertical direction and half pixel accuracy in the horizontal direction, using the directionally interpolated reference field to accomplish the sub-pixel search. The motion maps are used firstly to infer an orientation map for each of the auxiliary fields directly from the reference field's orientation map (note that orientation maps could be computed for each field separately, if the computational demand were not considered excessive) and later to guide incorporation of information from the auxiliary fields into the reference field.
  • The auxiliary fields are then directionally interpolated to the same resolution as the interpolated reference field, using their inferred orientation maps.
  • A merge mask is determined for each auxiliary field to mask off pixels which should not be used in the final enhanced still image; the masked off pixels generally correspond to regions where the motion maps fail to correctly model the relationship between the reference and auxiliary fields; such regions might involve uncovered background, for example.
  • A weighted average is formed from the reference field pixels and the motion-compensated auxiliary field pixels which have not been masked off. The weights associated with this weighted averaging operation are spatially varying and depend upon both the merge masks and the displacements recorded in the motion maps. Unlike conventional field averaging techniques, this approach does not destroy available picture resolution in the process of removing aliasing artifacts.
  • The final still image is obtained after horizontal interpolation by an additional factor of two (to obtain the correct aspect ratio after the fourfold vertical interpolation described above) and an optional post-processing operation which sharpens the image formed from the weighted averaging process described above. The above processing steps are modified somewhat for the chrominance components to reflect the fact that these components have much less spatial frequency content than the luminance component.
  • An important property of this image enhancement system is that it can work with any number of video fields at all. If only one field is supplied, the system employs the sophisticated directional interpolation technique mentioned above. If additional fields are available, they are directionally interpolated and merged into the interpolated reference field so as to progressively enhance the spatial frequency content, while reducing noise and other artifacts. In the special case where two fields are available, the system may also be understood as a "de-interlacing" tool.
  • Other advantages of this invention will become apparent from the following description taken in conjunction with the accompanying drawings which set forth, by way of illustration and example, certain embodiments of this invention. The drawings constitute a part of this specification and include exemplary embodiments, objects and features of the present invention.
  • Brief Description of the Drawings
  • Figure 1 shows the block structure used for motion estimation and field merging: a) non-overlapping segmentation of reference field; b) overlapping motion blocks surrounding each segmentation block.
  • Figure 2 shows the eight orientation classes and their relationship to the target luminance pixel with which they are associated.
  • Figure 3 shows a table of orthogonal likelihood values, L⊥_C, for each of the directed orientation classes, C.
  • Figure 4 shows directional low-pass filters applied to the pre-conditioned reference field's luminance component to prepare for calculation of the orientation class likelihood values: a) V, V- and V+; b) D-; c) D+; d) O-; and e) O+.
  • Figure 5 shows intermediate linear combinations, υ_1, υ_2, υ_3 and υ_4, of horizontally pre-filtered luminance pixels used to form the vertical unlikelihood value, U_V.
  • Figure 6 shows horizontally pre-filtered luminance pixels used to form the near vertical unlikelihood values: a) U_V- and b) U_V+.
  • Figure 7 shows intermediate linear combinations, d^±_i, of diagonally pre-filtered luminance pixels, used to form the diagonal unlikelihood values: a) U_D- and b) U_D+.
  • Figure 8 shows intermediate linear combinations, o^±_i, of near vertically pre-filtered luminance pixels, used to form the oblique unlikelihood values: a) U_O- and b) U_O+.
  • Figure 9 shows neighboring class values used to form the smoothed orientation class, C̃_{m,n}, associated with the target pixel at row m and column n.
  • Figure 10 shows an example of the linear directional interpolation strategy used to recover the three missing luminance samples, Y_{4m+1,n}, Y_{4m+2,n} and Y_{4m+3,n}, from neighboring original field rows. In this example, the orientation class is C_{m,n} = V+.
  • Detailed Description of the Preferred Embodiment
  • It should be understood that while certain forms of the invention are illustrated, the invention is not limited to the specific forms or arrangements of parts herein described and shown. It will be apparent to those skilled in the art that various changes may be made without departing from the scope of the invention, and the invention is not to be considered limited to what is shown in the drawings and described in the specification.
  • To facilitate the discussion which follows, let H_F and W_F denote the number of rows (height) and columns (width) of each digitized video field. Many video digitizers produce fields with W_F = 640 columns and H_F = 240 rows, but this need not be the case. The multi-field processing system directionally interpolates the reference field to a resolution of H_I = 4·H_F by W_I = W_F (i.e. vertical expansion by a factor of 4) and then enhances the vertical information content by adaptively merging the directionally interpolated, motion compensated and appropriately weighted auxiliary fields into this interpolated reference field. This adaptive merging process also serves to remove aliasing and reduce noise.
  • It should be noted that these dimensions describe only the luminance component of the video signal. The chrominance components are treated differently. Original chrominance fields each have H_F rows, but only W_F/4 columns. The video digitization and decoding operations may produce chrominance components with these resolutions, or else the process may decimate the chrominance components of a collection of video fields which have already been decoded. In this way the process reduces the memory requirements and computational demand associated with the multi-field enhancement operation, without sacrificing actual information. The multi-field processing system directionally interpolates the reference field's chrominance components to a resolution of H_I/2 = 2·H_F by W_I/4 = W_F/4 (i.e. vertical expansion by a factor of 2) and then adaptively merges the directionally interpolated and motion compensated auxiliary fields' chrominance components into the reference field to reduce chrominance noise and artifacts. Note that the chrominance components from the various fields are merged by simple averaging, after invalid pixels from regions which do not conform to the estimated inter-field motion model have been masked off. This temporal averaging is able to reduce noise and color aliasing artifacts but is not able to enhance the spatial frequency content of the image. The luminance components from the various fields, however, are merged using a spatially varying weighted average, whose weights are computed from the estimated inter-field motion so as to remove aliasing while enhancing the spatial frequency content of the image.
  • The final image produced by the system has H = H_I = 4·H_F rows by W = 2·W_I = 2·W_F columns. It is formed by doubling the horizontal resolution of the luminance component and quadrupling the horizontal resolution and doubling the vertical resolution of the chrominance components produced by the method described above. These operations are required to restore the luminance component to the correct aspect ratio and to obtain a full set of chrominance sample values at every pixel position. In the preferred embodiment of the invention, horizontal doubling of the luminance resolution may be achieved by applying the interpolation filter kernel, (-1/8, 5/8, 5/8, -1/8).   (1)
  • This kernel has been selected to preserve the horizontal frequency response of the original video signal, while allowing for a multiplication-free implementation. The same interpolation kernel is used to expand the horizontal chrominance resolution by a factor of two, after which the chrominance components are expanded by an additional factor of two in both directions using conventional bilinear interpolation.
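  • A minimal sketch of this multiplication-free doubling is given below (edge replication and the rounding offset are our assumptions; boundary handling is not specified).

```python
import numpy as np

def hdouble(row):
    """Double the horizontal resolution of one luminance row with the
    (-1/8, 5/8, 5/8, -1/8) kernel, using only shifts and adds:
    mid = (5*(x[n] + x[n+1]) - (x[n-1] + x[n+2])) / 8."""
    x = np.asarray(row, dtype=np.int32)
    xm1 = np.concatenate(([x[0]], x[:-1]))           # x[n-1], edge replicated
    xp1 = np.concatenate((x[1:], [x[-1]]))           # x[n+1]
    xp2 = np.concatenate((x[2:], [x[-1]], [x[-1]]))  # x[n+2]
    s = x + xp1
    t = xm1 + xp2
    mid = ((s << 2) + s - t + 4) >> 3                # 5*s via shift-and-add
    out = np.empty(2 * x.size, dtype=np.int32)
    out[0::2] = x                                    # original samples
    out[1::2] = np.clip(mid, 0, 255)                 # interpolated samples
    return out
```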
  • Section 2 below discloses a method to estimate local orientations within the reference field, along with an interpolation procedure used to directionally interpolate the reference and auxiliary fields according to the estimated orientation map. Section 1 discloses a method to obtain the motion maps between the reference and auxiliary fields. Finally, Section 3 discloses methods to build merge masks and merge weighting factors, along with the fast algorithm used to actually merge the reference and auxiliary fields into an enhanced still image.
  • 1 Motion Estimation between Reference and Auxiliary Fields
  • The reference field is first segmented into non-overlapping blocks which have approximately 15 field rows and 23 field columns each in the current implementation. This segmentation is depicted in Figure 1a. Each of these segmentation blocks 10 is surrounded by a slightly larger motion block 12, as depicted in Figure 1b. Adjacent motion blocks overlap one another by two field rows 14 and four field columns 16 in the current implementation. The motion estimation sub-system is responsible for computing a single motion vector for each motion block, for each auxiliary field. The motion vector is intended to describe the displacement of scene objects within the motion block, between the reference and the relevant auxiliary field. These motion vectors are used to guide the process described in Section 3, whereby interpolated reference and auxiliary fields are merged into a single image. This merging process is performed independently on each motion block, after which the merged motion blocks are stitched together to form the final image. A smooth weighting function is used to average overlapping regions from adjacent motion blocks. The purpose of this section is to describe only the method used to estimate motion vectors for any given motion block.
  • For each of the auxiliary fields, the system processes the motion blocks in lexicographical fashion. Estimation of the motion vector for each of these blocks proceeds in three distinct phases. The first phase attempts to predict the motion vector based on the motion vectors obtained from previously processed, neighboring motion blocks. The prediction strategy is described in Section 1.1. This predicted motion vector is used to bias the efficient and robust coarse motion estimation technique, described in Section 1.2, which estimates motion to full pixel accuracy only. The final refinement to quarter pixel accuracy in the vertical direction and half pixel accuracy in the horizontal direction is performed using a conventional MAD block matching technique. Details are supplied below in Section 1.3.
  • 1.1 Motion Prediction
  • To facilitate the discussion, let v̂_{m,n} denote the coarse - i.e. pixel resolution - motion vector for the (m,n)'th motion block, i.e. the n'th motion block in the m'th row of motion blocks associated with the reference field. Since motion vectors are estimated in lexicographical order, the neighboring coarse motion vectors, v̂_{m,n-1}, v̂_{m-1,n-1}, v̂_{m-1,n} and v̂_{m-1,n+1}, have already been estimated and can be used to form an initial prediction for v̂_{m,n}. In particular, the motion estimation sub-system sets the predicted vector, ṽ_{m,n}, to be the arithmetic mean of the three least disparate of these four neighboring motion vectors. For the purpose of this computation, the disparity among any collection of three motion vectors, v_1, v_2 and v_3, is defined as
    D(v_1, v_2, v_3) = Σ_{i=1..3} ‖v_i - (v_1 + v_2 + v_3)/3‖_1,
    i.e. the sum of the L1 distances between each of the vectors and their arithmetic mean.
  • One reason for forming this prediction, ṽ_{m,n}, is not to reduce the full motion vector search time, but rather to encourage the development of smooth motion maps. In regions of the scene which contain little information from which to estimate motion, we prefer to adopt the predicted vectors whenever this is reasonable. Otherwise, the "random" motion vectors usually produced by block motion estimation algorithms in these regions can cause annoying visual artifacts when attempts are made to merge the reference and auxiliary fields into a single still image.
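  • A sketch of this prediction step appears below; how blocks on the first row or column (which lack some neighbors) are handled is not stated, so the sketch assumes all four neighbors exist.

```python
import itertools
import numpy as np

def predict_vector(neighbors):
    """Predicted vector of Section 1.1: the arithmetic mean of the three
    least disparate of the four causal neighboring motion vectors, where
    the disparity of three vectors is the sum of their L1 distances to
    their own mean."""
    best_mean, best_disp = None, None
    for triple in itertools.combinations(neighbors, 3):
        v = np.asarray(triple, dtype=np.float64)
        mean = v.mean(axis=0)
        disp = np.abs(v - mean).sum()
        if best_disp is None or disp < best_disp:
            best_mean, best_disp = mean, disp
    return best_mean  # rounding to whole pixels is an implementation choice
```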
  • 1.2 Coarse Motion Estimation
  • In general, the process produces a coarse - i.e. full pixel accuracy - estimate of the motion, v̂_{m,n}, between the reference field and a given auxiliary field, for the (m,n)'th motion block. Moreover, we would like to bias the estimate towards the predicted vector, ṽ_{m,n}, whenever this is consistent with the features found in the two fields. The coarse motion estimation sub-system performs a full search which, in one embodiment of the system, involves a search range of |v^x_{m,n}| ≤ 10 field columns and |v^y_{m,n}| ≤ 5 field rows. For each vector, v, in this range, the process computes an objective function, E(v), which will be discussed below. To facilitate the discussion, let E_min denote the minimum value attained by E(v) over the search range and let V denote the set of all motion vectors, v, such that
    E(v) ≤ E_min + T_m,
    where T_m is a pre-defined threshold. The final coarse motion estimate, v̂_{m,n}, is taken to be the vector, v ∈ V, which is closest to the predicted vector, ṽ_{m,n}. Here, the L1 distance metric is used, so that the distance between v and ṽ_{m,n} is |v^x - ṽ^x_{m,n}| + |v^y - ṽ^y_{m,n}|.
  • The actual objective function, E(v), used for coarse motion estimation, is disclosed as follows. Rather than using the computationally intensive Mean Absolute Difference (MAD) objective, the process constructs the objective function from a novel 2 bit per pixel representation of the reference and auxiliary fields. Specifically, the luminance components of the reference and auxiliary fields are first pre-conditioned with a spatial bandpass filter, as described in Section 1.2.1; a two bit representation of each bandpass filtered luminance sample is formed by a simple thresholding technique, described in Section 1.2.2; and then the objective function is evaluated by a combination of "exclusive or" (XOR) and counting operations which are applied to the two bit pixel representations, as described in Section 1.2.3.
  • 1.2.1 Bandpass Filtering for Coarse Motion Estimation
  • A simple bandpass filter is constructed, in one embodiment, by taking the difference of two moving window lowpass filters. Specifically, let y[i,j] denote the luminance sample from any given field at row i and column j. The bandpass filtered pixel, ȳ[i,j], is computed according to
    ȳ[i,j] = (1/(L_x·L_y)) Σ_{(k,l) ∈ N_L} y[i+k, j+l] - (1/(W_x·W_y)) Σ_{(k,l) ∈ N_W} y[i+k, j+l],
    where N_L and N_W denote L_x-column by L_y-row and W_x-column by W_y-row windows centered (as nearly as possible) on the origin. Here, L_x and L_y are the width and height of the "local-scale" moving average window, while W_x and W_y are the width and height of the "wide-scale" moving average window. The scaling operations may be reduced to shift operations by ensuring that each of these four dimensions is a power of 2, in which case the entire bandpass filtering operation may be implemented with four additions, four subtractions and two shifts per pixel. In our particular implementation, the dimensions, L_x = L_y = 4, W_x = 32 and W_y = 16, were found to optimize the robustness of the overall motion estimation scheme. It is worth noting that this bandpass filtering operation desensitizes the motion estimator to inter-field illumination variations, as well as to high frequency aliasing artifacts in the individual fields. At the same time it produces pixels which have zero mean - a key requirement for the generation of useful two bit per pixel representations according to the method described in the following section.
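  • The two moving averages can be computed with summed-area tables, as in the sketch below (a row-by-row implementation would use the moving-window recursion instead; the exact window alignment is our assumption).

```python
import numpy as np

def box_mean(y, h, w):
    """Mean of y over an h x w moving window, via a summed-area table.
    h*w is a power of 2, so the division reduces to a shift."""
    pad = np.pad(y, ((h // 2, h - h // 2), (w // 2, w - w // 2)), mode="edge")
    s = pad.cumsum(axis=0).cumsum(axis=1)
    s = np.pad(s, ((1, 0), (1, 0)))                  # zero row/column in front
    H, W = y.shape
    tot = (s[h:h + H, w:w + W] - s[:H, w:w + W]
           - s[h:h + H, :W] + s[:H, :W])
    return tot >> int(np.log2(h * w))

def bandpass(y, Lx=4, Ly=4, Wx=32, Wy=16):
    """Section 1.2.1 filter: local-scale mean minus wide-scale mean,
    with the window dimensions given in the text."""
    y = y.astype(np.int64)
    return box_mean(y, Ly, Lx) - box_mean(y, Wy, Wx)
```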
  • 1.2.2 Two Bit Pixel Representation for Coarse Motion Estimation
  • After bandpass filtering, each filtered sample, ỹ[i,j], is assigned a two bit representation in the preferred embodiment of the invention, where this representation is based on a parameter, Tb . The first bit is set to 1 if ỹ[i,j] > Tb and 0 otherwise, while the second bit is set to 1 if ỹ[i,j] < −Tb and 0 otherwise. This two bit representation entails some redundancy in that it quantizes ỹ[i,j] into only three different regimes. The representation has the following important property, however. If d(ỹ1, ỹ2) represents the total number of 1 bits in the two bit result obtained by taking the exclusive or of corresponding bits in the two bit representations associated with ỹ1 and ỹ2, then it is easy to verify that d(ỹ1, ỹ2) satisfies the following relationship:
    d(ỹ1, ỹ2) = 0 when ỹ1 and ỹ2 lie in the same regime; d(ỹ1, ỹ2) = 1 when exactly one of the two lies in the central regime, −Tb ≤ ỹ ≤ Tb; and d(ỹ1, ỹ2) = 2 when they lie in opposite outer regimes.
  • Thus, d(ỹ1, ỹ2) may be interpreted as a measure of the distance between ỹ1 and ỹ2.
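To make the three-regime property concrete, here is a minimal sketch (hypothetical helper names; Tb is the threshold parameter above):

```python
def two_bit(y_tilde, Tb):
    """2-bit code: low bit set if sample > Tb, high bit set if sample < -Tb."""
    return (1 if y_tilde > Tb else 0) | (2 if y_tilde < -Tb else 0)

def d(c1, c2):
    """Count of 1 bits in the XOR of two 2-bit codes: always 0, 1 or 2."""
    return bin(c1 ^ c2).count("1")

# Same regime -> 0; exactly one sample in the central regime -> 1;
# samples in opposite outer regimes -> 2.
assert d(two_bit(50, 10), two_bit(90, 10)) == 0
assert d(two_bit(50, 10), two_bit(0, 10)) == 1
assert d(two_bit(50, 10), two_bit(-50, 10)) == 2
```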
  • 1.2.3 The Coarse Motion Estimation Objective Function
  • Our objective function, D(υ), is constructed by taking the sum of the two bit distance metric,
    D(υ) = Σ(i,j) d(ỹr[i,j], ỹa[i + υy, j + υx]),
    over all pixels, (i,j), which lie within a coarse matching block; this coarse matching block is generally larger than the motion block itself. Here ỹr[i,j] is the bandpass filtered sample at row i and column j of the reference field, while ỹa[i,j] is the bandpass filtered sample at row i and column j of the auxiliary field. In our implementation the coarse matching block consists of 20 field rows by 32 field columns, surrounding the motion block of interest.
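A vectorized sketch of this objective (an illustration, not the disclosed fixed-point implementation; ytr and yta are the bandpass filtered reference and auxiliary fields, (vy, vx) is a candidate vector in whole field rows and columns, and (i0, j0) is the top-left corner of the coarse matching block):

```python
import numpy as np

def coarse_objective(ytr, yta, vy, vx, i0, j0, Tb, rows=20, cols=32):
    """D(v): sum of 2-bit XOR distances over a rows x cols matching block."""
    r = ytr[i0:i0 + rows, j0:j0 + cols]
    a = yta[i0 + vy:i0 + vy + rows, j0 + vx:j0 + vx + cols]
    rb = (r > Tb).astype(np.uint8) | ((r < -Tb).astype(np.uint8) << 1)
    ab = (a > Tb).astype(np.uint8) | ((a < -Tb).astype(np.uint8) << 1)
    x = rb ^ ab
    return int(((x & 1) + (x >> 1)).sum())  # popcount of each 2-bit word, summed
```

In a full search, this function would be evaluated for every candidate vector in the search range, and the set ν formed from those vectors whose objective value lies within Tm of the minimum.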
  • 1.3 Refinement to Sub-Pixel Accuracy
  • In one embodiment of the invention, a conventional MAD (Mean Absolute Difference) search is performed, with a search range of one field column and half a field row around the vector returned by the coarse motion estimation sub-system, searching in increments of half the field column separation and a quarter of the field row separation. Only the reference field need be interpolated to the higher resolution (four times vertical and twice horizontal resolution) in order to achieve this sub-pixel accuracy search. The auxiliary fields are pre-conditioned by applying a five tap vertical low-pass filter with kernel, (1/16, 4/16, 6/16, 4/16, 1/16), prior to performing the motion refinement search. This low-pass filtering reduces sensitivity to vertical aliasing.
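A sketch of this refinement (assumptions not stated in the text: ref_hi is the reference field already interpolated to four times vertical and twice horizontal resolution, aux has been pre-filtered with the five tap kernel above, the coarse vector (vy, vx) is in whole field rows and columns, and the block lies away from the field borders):

```python
import numpy as np
from scipy.ndimage import convolve1d

def prefilter_vertical(aux):
    """Five tap vertical low-pass: kernel (1/16, 4/16, 6/16, 4/16, 1/16)."""
    k = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    return convolve1d(aux.astype(np.float64), k, axis=0)

def refine(ref_hi, aux, vy, vx, i0, j0, rows, cols):
    """MAD search: +/- half a row in quarter-row steps, +/- one column in
    half-column steps.  On the hi-res grid a field row spans 4 samples and
    a field column spans 2 samples."""
    block = aux[i0 + vy:i0 + vy + rows, j0 + vx:j0 + vx + cols]
    best = None
    for dy in range(-2, 3):      # quarter-row steps
        for dx in range(-2, 3):  # half-column steps
            r = ref_hi[4 * i0 + dy:4 * (i0 + rows) + dy:4,
                       2 * j0 + dx:2 * (j0 + cols) + dx:2]
            mad = np.abs(r - block).mean()
            if best is None or mad < best[0]:
                best = (mad, vy + dy / 4.0, vx + dx / 2.0)
    return best[1], best[2]
```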
  • 2 Directional Interpolation of each Field
  • 2.1 Orientation Estimation
  • The object of orientation estimation is to identify the direction of image edges in the neighborhood of any given pixel, so that the intra-field vertical interpolation operation described in Section 2.2 below can interpolate along, rather than across, an edge. In addition to correct identification of edge orientation, one key requirement for the estimator is that the resulting orientation map be as smooth as possible. It is preferred that the orientation map not fluctuate wildly in textured regions or in smooth regions which lie near to actual image edges, because such fluctuations might manifest themselves as visually disturbing artifacts after interpolation. It is also important to control the numerical complexity of the estimation technique.
  • The orientation estimation sub-system works with the luminance component of the reference field. For each "target" luminance pixel in this field, an orientation class is selected. The orientation class associated with a given target row and target column is to be interpreted as the dominant feature orientation observed in a neighborhood whose centroid lies between the target and next field rows and between the target and next field columns. This centroid is marked with a cross pattern 20 in Figure 2. The figure also illustrates the set of eight orientation classes which may be associated with each target luminance pixel; they are:
  • 22 N: No distinct orientation.
  • 24 V: Distinct orientational feature at 90° (vertical).
  • 26 V -: Distinct orientational feature at 63° (near vertical) from top-left to bottom-right.
  • 28 V +: Distinct orientational feature at 63° (near vertical) from top-right to bottom-left.
  • 30 D -: Distinct orientational feature at 45° (diagonal) from top-left to bottom-right.
  • 32 D +: Distinct orientational feature at 45° (diagonal) from top-right to bottom-left.
  • 34 O -: Distinct orientational feature at 27° (oblique) from top-left to bottom-right.
  • 36 O +: Distinct orientational feature at 27° (oblique) from top-right to bottom-left.
  • The orientation class associations for each luminance pixel in the reference field constitute the orientation map.
  • The estimation strategy consists of a number of elements, whose details are discussed separately below. Essentially, a numerical value, LC , is assigned to each of the distinctly oriented classes, C ∈ {V, V−, V+, D−, D+, O−, O+}, which is to be interpreted as the likelihood that a local orientation feature exists with the corresponding orientation. The estimated orientation class is tentatively set to the distinct orientation class, C, which has the maximum value, LC. The likelihood value, LC , for the selected class is then compared with an orthogonal likelihood value, L⊥C, which represents the likelihood associated with the orthogonal direction. If the difference between LC and L⊥C is less than a predetermined threshold, the orientation class is set to N, i.e. no distinct orientation. The orthogonal likelihood value is obtained from the table of Figure 3. The orientation map obtained in the manner described above is subjected to a final morphological smoothing operation to minimize the number of disturbing artifacts produced during directional interpolation. This smoothing operation is described in Section 2.1.3.
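In outline, the per-pixel decision might be sketched as follows (hypothetical interface: L is a mapping from the seven directed classes to their likelihoods, ortho stands in for the table of Figure 3, and T_n is the no-distinct-orientation threshold):

```python
def classify(L, ortho, T_n):
    """Select the orientation class for one target pixel.

    L: dict of likelihoods for the classes 'V', 'V-', 'V+', 'D-', 'D+', 'O-', 'O+'.
    ortho: callable returning the orthogonal likelihood for a class (Figure 3).
    Returns one of the seven directed classes, or 'N' if none is distinct."""
    C = max(L, key=L.get)        # tentative class: maximum likelihood
    if L[C] - ortho(C, L) < T_n:
        return "N"               # not sufficiently better than the orthogonal direction
    return C
```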
  • To compute the likelihood values, LC , for each directed orientation class, C, the luminance pixels are processed first, using a directional low-pass filter which smooths in a direction which is approximately perpendicular to that of C. LC is then based on a Total Variation (TV) metric, computed along a trajectory which is parallel to the orientation of C; the larger the variation, the smaller the likelihood value. The seven directional filtering operations are described in Section 2.1.1, while the TV metric is described in Section 2.1.2 below.
  • 2.1.1 Oriented Pre-Filtering of the Luminance Field
  • The reference field's luminance component is first pre-conditioned by applying a vertical low-pass filter with the three tap kernel, (1/4, 1/2, 1/4).
  • One purpose of this pre-conditioning filter is to reduce the influence of vertical aliasing artifacts which can adversely affect the estimation sub-system.
  • To prepare the pre-conditioned luminance field for calculation of vertical likelihood values, LV , and near vertical likelihood values, LV - and LV +, we apply the horizontal low-pass filter whose five taps are illustrated in Figure 4a.
  • To prepare the pre-conditioned luminance field for calculation of the diagonal likelihood values, LD -, we apply the diagonal low-pass filter whose three taps 42 are illustrated in Figure 4b. The complementary filter, whose three taps 44 are illustrated in Figure 4c, is used to prepare for calculation of the complementary diagonal likelihood values, LD +.
  • Finally, to prepare the pre-conditioned luminance field for calculation of the oblique likelihood values, LO− and LO+, we apply the near vertical low-pass filters 46, 48 illustrated in Figures 4d and 4e, respectively.
  • 2.1.2 The Directional TV Metric
  • For each directed orientation class, C, the likelihood values, LC , are found by negating a set of corresponding "unlikelihood" values, UC . The unlikelihood values for each directed orientation class are computed by applying an appropriate "total variation" measure to the reference field's luminance component, after pre-conditioning and appropriate directional pre-filtering, as described in Section 2.1.1 above.
  • The vertical unlikelihood value is calculated from
    UV = |υ1 − υ2| + |υ2 − υ3| + |υ3 − υ4|,
    where υ1, υ2, υ3 and υ4 are linear combinations of pixels from the pre-conditioned and horizontally pre-filtered luminance field; these linear combinations are depicted in Figure 5. The centroid of this calculation lies halfway between the target 50 and next field row 52 and halfway between the target 54 and next field column 56, which is in agreement with Figure 2.
  • The near vertical unlikelihood values are calculated from
    UV± = Σi |υ±i+1 − υ±i|,
    where the υ±i terms represent pixel values from the pre-conditioned and horizontally pre-filtered luminance field; the relevant pixels are depicted in Figures 6a and 6b. Again, the centroid of these calculations lies half a field row below and half a field column to the right of the target field row 60 and column 62, as required for consistency with the definition of the orientation classes.
  • The diagonal unlikelihood values are calculated from
    UD± = Σi |d±i+1 − d±i|,
    where the d±i terms each represents a linear combination of two pixel values from the pre-conditioned and diagonally pre-filtered luminance field. The d−i terms are formed after applying the diagonal pre-filter shown in Figure 4b, while the d+i terms are formed after applying the diagonal pre-filter shown in Figure 4c. The pixels and weights used to form the d−i and d+i terms are illustrated in Figures 7a and 7b, respectively. Notice that the centroid of these calculations again lies half a field row below and half a field column to the right of the target field row 70 and column 72, as required for consistency with the definition of the orientation classes.
  • Finally, the oblique unlikelihood values are calculated from
    UO± = Σi |o±i+1 − o±i|,
    where the o±i terms each represents a linear combination of two pixel values from the pre-conditioned and near vertically pre-filtered luminance field. The o−i terms are formed after applying the near vertical pre-filter shown in Figure 4d, while the o+i terms are formed after applying the near vertical pre-filter shown in Figure 4e. The pixels and weights used to form the o−i and o+i terms are illustrated in Figures 8a and 8b, respectively. Notice that the centroid of these calculations again lies half a field row below and half a field column to the right of the target field row 80 and column 82, as required for consistency with the definition of the orientation classes.
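All four calculations share one pattern; a sketch follows (under the reconstruction used above, in which each unlikelihood is the total variation, i.e. the sum of absolute differences of consecutive samples, along the oriented trajectory):

```python
def unlikelihood(samples):
    """Total variation along one oriented trajectory of pre-filtered samples."""
    return sum(abs(b - a) for a, b in zip(samples, samples[1:]))

def likelihood(samples):
    """Likelihoods are negated unlikelihoods (Section 2.1.2)."""
    return -unlikelihood(samples)

# Example for the vertical class, with illustrative values standing in for
# the four linear combinations of Figure 5:
v1, v2, v3, v4 = 10.0, 12.0, 11.0, 10.5
U_V = unlikelihood([v1, v2, v3, v4])  # |v1-v2| + |v2-v3| + |v3-v4| = 3.5
```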
  • 2.1.3 Morphological Smoothing of the Orientation Map
  • The morphological smoothing operator takes, as its input, the initial classification of each pixel in the reference field into one of the orientation classes, V, V−, V+, D−, D+, O− or O+, and produces a new classification which generally has fewer inter-class transitions. To facilitate the discussion, let Cm,n denote the initial orientation classification associated with target field row m 90 and target field column n 92. The smoothing operator generates a potentially different classification, C̄m,n , by considering Cm,n together with the 14 neighbors 94 depicted in Figure 9. The smoothing policy is that the value of C̄m,n should be identical to Cm,n , unless either a majority of the 6 neighbors 96 lying to the left of the target pixel and a majority of the 6 neighbors 98 lying to the right of the target pixel all have the same classification, C, or a majority of the 5 neighbors 100 lying above the target pixel and a majority of the 5 neighbors 102 lying below the target pixel all have the same classification, C. In either of these two cases, the value of C̄m,n is set to C.
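A sketch of this policy for one map entry (assumed calling convention: the caller gathers the neighbor classifications of Figure 9 into the four groups named below):

```python
from collections import Counter

def majority(classes):
    """Return the class held by a strict majority of `classes`, else None."""
    cls, count = Counter(classes).most_common(1)[0]
    return cls if count > len(classes) // 2 else None

def smooth(c, left6, right6, up5, down5):
    """Smoothed classification for one target pixel (Section 2.1.3)."""
    ml, mr = majority(left6), majority(right6)
    if ml is not None and ml == mr:
        return ml                       # both horizontal majorities agree
    mu, md = majority(up5), majority(down5)
    if mu is not None and mu == md:
        return mu                       # both vertical majorities agree
    return c                            # otherwise keep the initial class
```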
  • 2.2 Interpolation
  • This section describes the directional interpolation strategy which is used to quadruple the number of luminance rows and double the number of chrominance rows. To facilitate the ensuing discussion, let Y 4m,n in Figure 10 denote the luminance pixel at field row m and field column n. The purpose of luminance interpolation is to derive three new luminance rows, Y 4m+1,n , Y 4m+2,n and Y 4m+3,n , between every pair of original luminance rows, Y 4m,n 120 and Y 4m+4,n 122. Similarly, the purpose of chrominance interpolation is to derive one new chrominance row, C 2m+1,k , between every pair of original chrominance rows, C 2m,k and C 2m+2,k . Only one of the chrominance components is explicitly referred to, with the understanding that both chrominance components should be processed identically. Also, we use the index, k rather than n, to denote chrominance columns, since the horizontal chrominance resolution is only one quarter of the horizontal luminance resolution.
  • As described above, Cm ,n refers to the local orientation class associated with a region whose centroid lies between field rows m and m+1. The missing luminance samples, Y 4m+1,n , Y 4m+2,n and Y 4m+3,n , are linearly interpolated based on a line 124 drawn through the missing sample location 126 with the orientation, Cm ,n . Figure 10 illustrates this process for a near-vertical orientation class of Cm ,n = V + . Note that the original field rows, Y 4m,n 120 and Y 4m+4,n 122, must often be horizontally interpolated to find sample values on the end-points 120 of the oriented interpolation lines. In one embodiment, the interpolation filter of equation (1) is used to minimize loss of spatial frequency content during this interpolation process. The non-directed orientation class, N, defaults to the same vertical interpolation strategy as the V class.
  • Chrominance components are treated similarly. Specifically, the missing chrominance sample, C 2m+1,k , is linearly interpolated based on a line drawn through the missing sample location with the orientation, Cm ,2k . Again, chrominance samples from the original field rows, C 2m,k and C 2m+2,k , must often be horizontally interpolated to find sample values on the end-points of the oriented interpolation lines.
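A minimal sketch of the oriented interpolation of one missing luminance sample (assumptions: the slope table converts each class to a horizontal offset, in field columns per field row, consistent with the 90°/63°/45°/27° class definitions above, and sample_row performs the horizontal interpolation of an original field row at a fractional column, e.g. with the filter of equation (1)):

```python
# Horizontal offset (field columns) per field row of descent; magnitudes and
# signs follow the class definitions (assumed mapping, for illustration only).
SLOPE = {"N": 0.0, "V": 0.0, "V-": 0.5, "V+": -0.5,
         "D-": 1.0, "D+": -1.0, "O-": 2.0, "O+": -2.0}

def interp_missing(sample_row, m, n, r, cls):
    """Interpolate Y[4m+r, n] (r = 1, 2, 3) along the line oriented as `cls`.

    sample_row(m, x) returns the horizontally interpolated value of original
    field row m at fractional column x."""
    t = r / 4.0                                # fraction of the way down to row m+1
    s = SLOPE[cls]
    top = sample_row(m, n - t * s)             # end-point on field row m
    bot = sample_row(m + 1, n + (1 - t) * s)   # end-point on field row m+1
    return (1 - t) * top + t * bot
```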
  • 3 Adaptive, Non-Stationary Merging of Interpolated Fields
  • This section describes the method used to merge spatially interpolated pixels from the auxiliary fields into the reference field to produce a high quality still image. As mentioned in Section 1, the merging operation is performed independently on each of the overlapping motion blocks illustrated in Figure 1. The image is stitched together from these overlapping blocks using a smooth transition function within the overlapping regions. The discussion which follows considers the operations used to merge a single motion block; for simplicity, these operations will be described as though this motion block occupied the entire reference field.
  • The merging process is guided by a single sub-pixel accurate motion vector for each of the auxiliary fields. The first step involves generation of a merge mask for each auxiliary field, to identify the regions in which this motion vector may be considered to describe scene motion between the reference field and the relevant auxiliary field. Merge mask generation is discussed in Section 3.1 below. The next step is to assign weights to each spatially interpolated pixel in the reference and auxiliary fields, identifying the contribution that each will make to the merged image. This step is discussed in Section 3.2 below. Finally, the weighted average of pixels from the various fields is formed using the fast technique described in Section 3.3 below.
  • 3.1 Generation of Merge Masks
  • For any given auxiliary field, the general objective of the merge mask generation sub-system is to determine a binary mask value for each pixel in the original reference field, not the interpolated reference field. The mask value for a particular pixel identifies whether or not the motion vector associated with the given auxiliary field correctly describes scene motion between the reference and auxiliary fields in the vicinity of that pixel. Our basic approach for generating these masks involves computing a directionally sensitive weighted average of neighboring pixels in the reference field and corresponding motion compensated pixels in the auxiliary field, and comparing these averages. One key to the success of this method is the directional sensitivity of the local pixel averages.
  • To facilitate the ensuing discussion, let yr [i,j] denote the luminance sample at row i and column j in the reference field. For convenience, let ya [i,j] denote the corresponding pixel in the auxiliary field, after compensating for the estimated motion vector. Note that motion compensation may involve sub-pixel interpolation, since our motion vectors are estimated to sub-pixel accuracy. If the motion vector correctly describes motion in the vicinity of pixel (i,j), it might be expected that neighborhood averages around yr [i,j] and ya [i,j] would yield similar results. One concern is with image edges, where the success of subsequent field merging depends critically on motion vector accuracy in the direction perpendicular to the edge orientation. To address this concern, the orientation map discussed in Section 2.1 is used. Only in the special case when the orientation class for pixel (i,j) is N, i.e. no distinct direction, does the process use a non-directional weighted average, whose weights are formed from the tensor product of the seven tap horizontal kernel, (1/2, 1, 1, 1, 1, 1, 1/2), and the five tap vertical kernel, (1/2, 1, 1, 1, 1/2), in one particular embodiment of the invention. In this case, if the weighted averages formed around yr [i,j] and ya [i,j] differ by more than a prescribed threshold, the merge mask, mi,j , is set to 0, indicating that the motion model should not be considered valid in this region.
  • For all other orientation classes, the reference and auxiliary fields are first filtered with the three tap horizontal low-pass kernel, (1/4, 1/2, 1/4), and then four one-dimensional weighted averages, ρ[i,j], α1[i,j], α2[i,j] and α3[i,j], are computed. Each of these weighted averages is taken along a line oriented in the direction identified by the orientation class for pixel (i,j), using the weights, (1/2, 1, 1, 1, 1/2).
  • The oriented line used to form ρ[i,j] is centered about pixel (i,j) in the reference field. Similarly, the line used to form α2[i,j] is centered about pixel (i,j) in the auxiliary field. The lines for α1[i,j] and α3[i,j] have centers which fall on either side of pixel (i,j) in the auxiliary field, displaced by approximately half a field row or one field column, as appropriate, in the orthogonal direction to that identified by the orientation class. Thus, in a region whose orientation class is uniformly vertical, the result would be α1[i,j] = α2[i,j−1] = α3[i,j−2]. On the other hand, in a region whose orientation class is uniformly oblique, O− or O+, the horizontal average, α1[i,j], should approximately equal the arithmetic mean of α2[i,j] and α2[i,j−1]. From these directional averages, three absolute differences are formed,
    δ1[i,j] = |ρ[i,j] − α1[i,j]|,  δ2[i,j] = |ρ[i,j] − α2[i,j]|,  δ3[i,j] = |ρ[i,j] − α3[i,j]|.
  • If δ2[i,j] exceeds a pre-determined threshold, it is concluded that the motion vector does not describe scene motion in the vicinity of pixel (i,j) and mi,j is set to 0 accordingly. Otherwise, it is concluded that the motion vector is approximately accurate, but the process must still check to see if it is sufficiently accurate for field merging to improve the quality of an edge feature. Reasoning that a small motion error would generally cause one, but not both, of δ1[i,j] and δ3[i,j] to be smaller than δ2[i,j], the process tests for this condition, setting mi,j to 0 whenever it is found to be true.
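Collecting the steps above, the per-pixel mask decision can be sketched as follows (hypothetical names; rho, a1, a2, a3 are the directional averages ρ[i,j], α1[i,j], α2[i,j], α3[i,j], and T is the motion-consistency threshold):

```python
def merge_mask_value(rho, a1, a2, a3, T):
    """Return 1 if the motion vector is trusted at this pixel, else 0 (Section 3.1)."""
    d1, d2, d3 = abs(rho - a1), abs(rho - a2), abs(rho - a3)
    if d2 > T:
        return 0                   # vector does not describe local scene motion
    if (d1 < d2) != (d3 < d2):     # exactly one displaced line matches better:
        return 0                   # small motion error near an edge; do not merge
    return 1
```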
  • 3.2 Generation of Spatial Weighting Factors
  • The merging sub-system forms a weighted average between the directionally interpolated pixels from each field. This section describes the methodology used to determine the relevant weights. The chrominance and luminance components are treated in a fundamentally different way, since most of the spatial information is carried only in the luminance channel. All chrominance samples in the reference field are assigned a weight of 1, while chrominance samples from the auxiliary fields are assigned weights of either 1 or 0, depending only on the value of the merge mask for the relevant auxiliary field. In this way, the chrominance components are simply averaged across all fields, except in regions where the motion vectors do not reflect the underlying scene motion. This has the effect of substantially reducing chrominance noise. Moreover, the fact that most scenes contain at least some inter-field motion means that field averaging of the chrominance components tends to cancel color aliasing artifacts, which arise from the harmonic beating of scene features with the color mosaic used in single CCD video cameras.
  • The same approach could be adopted for merging the luminance components as well, but limitations might exist with respect to enhancing spatial frequency content. Although the directional spatial interpolation technique is able to enhance the spatial frequency content of oriented edge features, textured regions are entirely dependent upon the information from multiple fields for resolution enhancement. In the limit as the number of available fields becomes very large, simple averaging of the interpolated fields has the effect of subjecting the original spatial frequencies in the scene to a low pass filter whose impulse response is identical to the spatial interpolation kernel. If an "ideal" sinc interpolator is used to vertically interpolate the missing rows in each field, the result is an image which has no vertical frequency content whatsoever beyond the Nyquist limit associated with a single field. In the example embodiment of the invention, linear interpolation is used to interpolate the missing field rows prior to merging, and the averaging process does tend to eliminate aliasing artifacts. In order to preserve high spatial frequencies while still removing aliasing and reducing noise in the luminance components, a space varying weighting function can be adopted. Specifically, each luminance sample in any given auxiliary field is assigned a weight of 2 if it corresponds to an original pixel from that field, 1 if it is located within one interpolated row (i.e. one quarter of a field row) from an original pixel, and 0 otherwise. If the relevant merge mask is zero, then the process sets the weight to 0 regardless of the distance between the sample and an original field sample. The reference field luminance samples are weighted in the same manner, except that all samples are assigned a weight of at least 1, in order to ensure that at least one non-zero weight is available for every sample in the merged image. This weighting policy has the effect of subjecting vertical frequencies to a much less severe lowpass filter than simple averaging with uniform weights.
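Stated compactly, the luminance weighting policy might look like this (a sketch; row_phase is the row index on the interpolated grid modulo 4, so phase 0 marks an original field row, and mask is the merge mask value for the auxiliary field):

```python
def luminance_weight(row_phase, mask, is_reference):
    """Weights per Section 3.2: 2 on original rows, 1 within one interpolated
    row (a quarter field row) of an original row, 0 elsewhere."""
    w = {0: 2, 1: 1, 2: 0, 3: 1}[row_phase]
    if is_reference:
        return max(w, 1)           # reference samples always contribute
    return w if mask else 0        # masked-off auxiliary samples are dropped
```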
  • 3.3 Fast Method for Implementing the Weighted Averages
  • This section describes an efficient method used to implement the weighted averaging of interpolated sample values from the reference and auxiliary fields. The same technique can be used for both luminance and chrominance samples. To facilitate the discussion, let υ1, υ2, ..., υF denote the sample values to be merged from each of F fields to form a single luminance or chrominance sample in the final image. Also, let ω1, ω2, ..., ωF denote the corresponding weighting values, which take on values of 0, 1 or 2 according to the discussion in Section 3.2. The desired weighted average may be calculated as
    ῡ = (Σf=1..F υf·ωf) / (Σf=1..F ωf).
  • This expression involves a costly division operation. To resolve this difficulty, in one embodiment of the invention, the process constructs a single 16 bit word, υ̂f , for each sample. The least significant 9 bits of υ̂f hold the weighted sample value, υf·ωf ; the next 3 bits are set to zero; and the most significant 4 bits of υ̂f hold the weight, ωf . The weighted average is then implemented by forming the sum, υ̂ = Σf=1..F υ̂f , and using υ̂ as the index to a lookup table with 2^16 entries. This technique will be effective so long as the number of fields, F, does not exceed 8. Subject to this condition, the least significant 12 bits of υ̂ hold the sum of the weighted sample values and the most significant 4 bits hold the sum of the weights, so that a table lookup operation is sufficient to recover the weighted average.
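A sketch of the packing trick (assumed 8 bit samples; the three zero guard bits absorb the carries that arise when up to eight 9 bit products are summed):

```python
def pack(v, w):
    """Pack weighted sample v*w (low 9 bits) and weight w (top 4 bits)."""
    return (v * w) | (w << 12)

# The lookup table is built once: index = 16-bit sum, value = weighted average.
LUT = [0] * (1 << 16)
for idx in range(1 << 16):
    wsum = idx >> 12               # sum of the weights (top 4 bits)
    vsum = idx & 0x0FFF            # sum of the weighted samples (low 12 bits)
    LUT[idx] = vsum // wsum if wsum else 0

values = [200, 198, 205]           # samples to merge from F = 3 fields
weights = [2, 1, 1]
acc = sum(pack(v, w) for v, w in zip(values, weights))
merged = LUT[acc]                  # (2*200 + 198 + 205) // 4 = 200
```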
  • 4 Performance
  • Although the multi-field enhancement system disclosed in this document may appear to involve numerous operations, it should be noted that an efficient implementation need not make exorbitant demands on the computing or memory resources of a general purpose computer. This is because the numerous intermediate results required to implement the various sub-systems described earlier may be generated and discarded incrementally on a row-by-row basis. Moreover, intermediate results may often be shared among the different sub-systems. Many parameters such as filter coefficients and dimensions have been selected with a view to implementation efficiency. As an example, to process four full color video fields, each with 240 rows and 640 columns, the system requires a total storage capacity of only 1.1 MB, almost all of which (0.92 MB) is used to store the source fields themselves. The processing of these four fields requires about 8 seconds of CPU time on, for instance, an HP Series 735 workstation operating at 99 MHz. On a PC with a 200 MHz Pentium Pro processor, the same operation takes less than 4 seconds of CPU time. Empirical observations indicate that this multi-field processing system achieves significantly higher still image quality than conventional single field enhancement or de-interlacing techniques. Moreover, the system appears to be robust to a wide range of inter-field motion, from simple camera jitter to more complex motion of scene objects.

Claims (16)

  1. A method for combining the information from multiple video fields having pixels into an enhanced still image, the method comprising the steps of:
    (a) selecting at least one field to serve as a reference field;
    (b) using remaining fields to serve as auxiliary fields;
    (c) constructing an orientation map for the reference field which is used to directionally interpolate the reference field;
    (d) constructing a motion map to model the local displacement between features in the reference field and corresponding features in the auxiliary fields;
    (e) using the motion map to infer an orientation map for each of the auxiliary fields directly from the orientation map of the reference field;
    (f) using the orientation maps to directionally interpolate the auxiliary fields to the same resolution as the interpolated reference field;
    (g) determining a merge mask for each auxiliary field to mask off certain pixels;
    (h) forming a weighted average image from the reference field pixels, and the auxiliary field pixels which have not been masked off; and
    (i) horizontally interpolating the weighted average image to form the enhanced still image.
  2. The method of Claim 1, wherein the steps are applied to the luminance components of the video fields.
  3. The method of Claim 1, wherein the steps are applied to the chrominance components of the video fields.
  4. The method of Claim 3, wherein the steps are performed on chrominance components with relatively less spatial frequency content than corresponding luminance components.
  5. The method of Claim 1, wherein the motion maps in step (d) are computed, using the directionally interpolated reference field, to sub-pixel accuracy: quarter pixel accuracy in the vertical direction and half pixel accuracy in the horizontal direction.
  6. The method of Claim 1, wherein step (e) is replaced with: computing an orientation map for each field separately.
  7. The method of Claim 1, wherein the certain pixels in step (g) correspond to regions where the motion maps fail to correctly model the relationship between the reference and auxiliary fields.
  8. The method of Claim 1, including an additional post-processing operation which sharpens the image formed from the weighted averaging step (h).
  9. The method of Claim 1, wherein the horizontal interpolation of step (i) includes a horizontal interpolation factor of two.
  10. A system for combining the information from multiple video fields having pixels into an enhanced still image, the system comprising:
    (a) a selection means for selecting at least one field to serve as a reference field, and using the remaining fields to serve as auxiliary fields;
    (b) a mapping means for
    (i) constructing an orientation map for the reference field which is used to directionally interpolate the reference field;
    (ii) constructing a motion map to model the local displacement between features in the reference field and corresponding features in the auxiliary fields;
    (iii) using the motion map to infer an orientation map for each of the auxiliary fields directly from the orientation map of the reference field;
    (c) an interpolation means which uses the orientation maps to directionally interpolate the auxiliary fields to the same resolution as the interpolated reference field;
    (d) a masking means which determines a merge mask for each auxiliary field to mask off certain pixels;
    (e) an averaging means which forms a weighted average image from the reference field pixels, and the auxiliary field pixels which have not been masked off; and
    (f) an interpolation means which horizontally interpolates the weighted average image to form the enhanced still image.
  11. A system for efficiently estimating motion between two video fields, the system comprising:
    (a) a means for bandpass filtering each of the fields;
    (b) a means for obtaining a two-bit representation of each sample in the bandpass filtered fields;
    (c) a means for comparing the two-bit representations which are associated with relatively displaced regions from the two fields, in order to determine an initial coarse motion estimate with full-pixel accuracy; and
    (d) a means for refining the initial coarse motion estimate to fractional pixel accuracy.
  12. The system of Claim 11, wherein the fields include frames.
  13. The system of Claim 11, wherein the bandpass filtering step (a) is accomplished by taking the difference between two moving averages.
  14. The system of Claim 13, wherein the moving averages have different window sizes.
  15. The system of Claim 11, wherein the two-bit representation step (b) includes three states, according to whether the bandpass filtered sample exceeds a positive threshold, falls below a negative threshold, or falls between the positive and negative thresholds.
  16. The system of Claim 11, wherein the two-bit representations for two different bandpass filtered samples are two-bit words which are compared in step (c) by applying a logical exclusive OR operator to the pair of two-bit words and counting the number of ones in the resulting two-bit value.
EP98116451A 1998-01-22 1998-08-31 Method for providing motion-compensated multi-field enhancement of still images from video Expired - Lifetime EP0944251B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP02028854A EP1296515A3 (en) 1998-01-22 1998-08-31 Method for providing motion-compensated multi-field enhancement of still images from video

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US10803 1998-01-22
US09/010,803 US6122017A (en) 1998-01-22 1998-01-22 Method for providing motion-compensated multi-field enhancement of still images from video

Related Child Applications (1)

Application Number Title Priority Date Filing Date
EP02028854A Division EP1296515A3 (en) 1998-01-22 1998-08-31 Method for providing motion-compensated multi-field enhancement of still images from video

Publications (2)

Publication Number Publication Date
EP0944251A1 true EP0944251A1 (en) 1999-09-22
EP0944251B1 EP0944251B1 (en) 2003-04-02

Family

ID=21747507

Family Applications (2)

Application Number Title Priority Date Filing Date
EP98116451A Expired - Lifetime EP0944251B1 (en) 1998-01-22 1998-08-31 Method for providing motion-compensated multi-field enhancement of still images from video
EP02028854A Withdrawn EP1296515A3 (en) 1998-01-22 1998-08-31 Method for providing motion-compensated multi-field enhancement of still images from video

Family Applications After (1)

Application Number Title Priority Date Filing Date
EP02028854A Withdrawn EP1296515A3 (en) 1998-01-22 1998-08-31 Method for providing motion-compensated multi-field enhancement of still images from video

Country Status (4)

Country Link
US (2) US6122017A (en)
EP (2) EP0944251B1 (en)
JP (1) JPH11284834A (en)
DE (1) DE69812882T2 (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1202220A2 (en) * 2000-10-16 2002-05-02 Eastman Kodak Company Removing color aliasing artifacts from color digital images
US7379501B2 (en) 2002-01-14 2008-05-27 Nokia Corporation Differential coding of interpolation filters
WO2008151802A1 (en) * 2007-06-14 2008-12-18 Fotonation Ireland Limited Fast motion estimation method
US7639888B2 (en) 2004-11-10 2009-12-29 Fotonation Ireland Ltd. Method and apparatus for initiating subsequent exposures based on determination of motion blurring artifacts
US8169486B2 (en) 2006-06-05 2012-05-01 DigitalOptics Corporation Europe Limited Image acquisition method and apparatus
US8199222B2 (en) 2007-03-05 2012-06-12 DigitalOptics Corporation Europe Limited Low-light video frame enhancement
US8212882B2 (en) 2007-03-25 2012-07-03 DigitalOptics Corporation Europe Limited Handheld article with movement discrimination
US8264576B2 (en) 2007-03-05 2012-09-11 DigitalOptics Corporation Europe Limited RGBW sensor array
US8494300B2 (en) 2004-11-10 2013-07-23 DigitalOptics Corporation Europe Limited Method of notifying users regarding motion artifacts based on image analysis
US8698924B2 (en) 2007-03-05 2014-04-15 DigitalOptics Corporation Europe Limited Tone mapping for low-light video frame enhancement
US8989516B2 (en) 2007-09-18 2015-03-24 Fotonation Limited Image processing method and apparatus
DE10225227B4 (en) * 2001-07-06 2016-07-28 Hewlett-Packard Development Company, L.P. A method of providing digital video images and still images, an imaging system and computer readable medium

Families Citing this family (116)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE19730305A1 (en) * 1997-07-15 1999-01-21 Bosch Gmbh Robert Method for generating an improved image signal in the motion estimation of image sequences, in particular a prediction signal for moving images with motion-compensating prediction
DE19746214A1 (en) * 1997-10-21 1999-04-22 Bosch Gmbh Robert Movement compensated prediction method for moving image sequences
US6122017A (en) * 1998-01-22 2000-09-19 Hewlett-Packard Company Method for providing motion-compensated multi-field enhancement of still images from video
JP3734362B2 (en) * 1998-03-09 2006-01-11 パイオニア株式会社 Interpolation method
US7067473B1 (en) * 1998-07-14 2006-06-27 Janssen Pharmaceutica N.V. Neurotrophic growth factor
US6778758B1 (en) * 1998-09-11 2004-08-17 Canon Kabushiki Kaisha Image processing apparatus
US6731790B1 (en) * 1999-10-19 2004-05-04 Agfa-Gevaert Method of enhancing color images
ES2219392T3 (en) * 2000-01-28 2004-12-01 Fujitsu General Limited CONVERSION CIRCLE OF EXPLORATION.
WO2001076232A1 (en) * 2000-03-30 2001-10-11 Sony Corporation Information processor
US6414719B1 (en) * 2000-05-26 2002-07-02 Sarnoff Corporation Motion adaptive median filter for interlace to progressive scan conversion
US7095445B2 (en) * 2000-12-20 2006-08-22 Samsung Electronics Co., Ltd. Method of detecting motion in an interlaced video sequence based on logical operation on linearly scaled motion information and motion detection apparatus
US20020094026A1 (en) * 2001-01-12 2002-07-18 Edelson Steven D. Video super-frame display system
US6950469B2 (en) * 2001-09-17 2005-09-27 Nokia Corporation Method for sub-pixel value interpolation
US7630566B2 (en) * 2001-09-25 2009-12-08 Broadcom Corporation Method and apparatus for improved estimation and compensation in digital video compression and decompression
US20030059089A1 (en) * 2001-09-25 2003-03-27 Quinlan James E. Block matching at the fractional pixel level for motion estimation
US7711044B1 (en) 2001-10-29 2010-05-04 Trident Microsystems (Far East) Ltd. Noise reduction systems and methods
CN101448162B (en) * 2001-12-17 2013-01-02 微软公司 Method for processing video image
US7003035B2 (en) * 2002-01-25 2006-02-21 Microsoft Corporation Video coding methods and apparatuses
US7305034B2 (en) * 2002-04-10 2007-12-04 Microsoft Corporation Rounding control for multi-stage interpolation
US7116831B2 (en) * 2002-04-10 2006-10-03 Microsoft Corporation Chrominance motion vector rounding
US7620109B2 (en) * 2002-04-10 2009-11-17 Microsoft Corporation Sub-pixel interpolation in motion estimation and compensation
US7110459B2 (en) * 2002-04-10 2006-09-19 Microsoft Corporation Approximate bicubic filter
US20040001546A1 (en) * 2002-06-03 2004-01-01 Alexandros Tourapis Spatiotemporal prediction for bidirectionally predictive (B) pictures and motion vector prediction for multi-picture reference motion compensation
US7280700B2 (en) * 2002-07-05 2007-10-09 Microsoft Corporation Optimization techniques for data compression
US7154952B2 (en) 2002-07-19 2006-12-26 Microsoft Corporation Timestamp-independent motion vector prediction for predictive (P) and bidirectionally predictive (B) pictures
US8384790B2 (en) * 2002-08-20 2013-02-26 Hewlett-Packard Development Company, L.P. Video image enhancement method and apparatus using reference and auxiliary frames
US7729563B2 (en) * 2002-08-28 2010-06-01 Fujifilm Corporation Method and device for video image processing, calculating the similarity between video frames, and acquiring a synthesized frame by synthesizing a plurality of contiguous sampled frames
US7379175B1 (en) 2002-10-15 2008-05-27 Kla-Tencor Technologies Corp. Methods and systems for reticle inspection and defect review using aerial imaging
US7027143B1 (en) 2002-10-15 2006-04-11 Kla-Tencor Technologies Corp. Methods and systems for inspecting reticles using aerial imaging at off-stepper wavelengths
US7123356B1 (en) 2002-10-15 2006-10-17 Kla-Tencor Technologies Corp. Methods and systems for inspecting reticles using aerial imaging and die-to-database detection
US7120195B2 (en) * 2002-10-28 2006-10-10 Hewlett-Packard Development Company, L.P. System and method for estimating motion between images
JP2004336717A (en) * 2003-04-15 2004-11-25 Seiko Epson Corp Image synthesis producing high quality image from a plurality of low quality images
US7499081B2 (en) * 2003-04-30 2009-03-03 Hewlett-Packard Development Company, L.P. Digital video imaging devices and methods of processing image data of different moments in time
US8417055B2 (en) 2007-03-05 2013-04-09 DigitalOptics Corporation Europe Limited Image processing method and apparatus
US7636486B2 (en) 2004-11-10 2009-12-22 Fotonation Ireland Ltd. Method of determining PSF using multiple instances of a nominally similar scene
US7672528B2 (en) * 2003-06-26 2010-03-02 Eastman Kodak Company Method of processing an image to form an image pyramid
US10554985B2 (en) 2003-07-18 2020-02-04 Microsoft Technology Licensing, Llc DC coefficient signaling at small quantization step sizes
US7738554B2 (en) * 2003-07-18 2010-06-15 Microsoft Corporation DC coefficient signaling at small quantization step sizes
US7609763B2 (en) * 2003-07-18 2009-10-27 Microsoft Corporation Advanced bi-directional predictive coding of video frames
US7499495B2 (en) * 2003-07-18 2009-03-03 Microsoft Corporation Extended range motion vectors
US7426308B2 (en) * 2003-07-18 2008-09-16 Microsoft Corporation Intraframe and interframe interlace coding and decoding
US20050013498A1 (en) * 2003-07-18 2005-01-20 Microsoft Corporation Coding of motion vector information
US7432979B2 (en) * 2003-09-03 2008-10-07 Sony Corporation Interlaced to progressive scan image conversion
US7567617B2 (en) * 2003-09-07 2009-07-28 Microsoft Corporation Predicting motion vectors for fields of forward-predicted interlaced video frames
US7724827B2 (en) * 2003-09-07 2010-05-25 Microsoft Corporation Multi-layer run level encoding and decoding
US7599438B2 (en) * 2003-09-07 2009-10-06 Microsoft Corporation Motion vector block pattern coding and decoding
US7317839B2 (en) * 2003-09-07 2008-01-08 Microsoft Corporation Chroma motion vector derivation for interlaced forward-predicted fields
US8064520B2 (en) * 2003-09-07 2011-11-22 Microsoft Corporation Advanced bi-directional predictive coding of interlaced video
US20050157949A1 (en) * 2003-09-30 2005-07-21 Seiji Aiso Generation of still image
JP4461937B2 (en) * 2003-09-30 2010-05-12 セイコーエプソン株式会社 Generation of high-resolution images based on multiple low-resolution images
US20050117639A1 (en) * 2003-10-24 2005-06-02 Turaga Deepak S. Optimal spatio-temporal transformations for reduction of quantization noise propagation effects
US7526025B2 (en) * 2003-10-24 2009-04-28 Sony Corporation Lifting-based implementations of orthonormal spatio-temporal transformations
JP4286124B2 (en) * 2003-12-22 2009-06-24 三洋電機株式会社 Image signal processing device
KR20050062709A (en) 2003-12-22 2005-06-27 삼성전자주식회사 Apparatus for digital image processing and method thereof
US7362376B2 (en) 2003-12-23 2008-04-22 Lsi Logic Corporation Method and apparatus for video deinterlacing and format conversion
KR101056142B1 (en) 2004-01-29 2011-08-10 케이엘에이-텐코 코포레이션 Computerized method for detecting defects in reticle design data
US20050168589A1 (en) * 2004-01-30 2005-08-04 D. Amnon Silverstein Method and system for processing an image with an image-capturing device
US7529423B2 (en) * 2004-03-26 2009-05-05 Intel Corporation SIMD four-pixel average instruction for imaging and video applications
DE102004026782A1 (en) * 2004-06-02 2005-12-29 Infineon Technologies Ag Method and apparatus for computer-aided motion estimation in at least two temporally successive digital images, computer-readable storage medium and computer program element
US7653260B2 (en) * 2004-06-17 2010-01-26 Carl Zeis MicroImaging GmbH System and method of registering field of view
EP1631068A3 (en) * 2004-08-26 2008-09-03 Samsung Electronics Co., Ltd. Apparatus and method for converting interlaced image into progressive image
JP4904034B2 (en) 2004-09-14 2012-03-28 ケーエルエー−テンカー コーポレイション Method, system and carrier medium for evaluating reticle layout data
US20060291751A1 (en) * 2004-12-16 2006-12-28 Peyman Milanfar Robust reconstruction of high resolution grayscale images from a sequence of low-resolution frames (robust gray super-resolution)
US7542095B2 (en) * 2005-01-20 2009-06-02 Samsung Electronics Co., Ltd. Method and system of noise-adaptive motion detection in an interlaced video sequence
JP4736456B2 (en) * 2005-02-15 2011-07-27 株式会社日立製作所 Scanning line interpolation device, video display device, video signal processing device
US7321400B1 (en) * 2005-02-22 2008-01-22 Kolorific, Inc. Method and apparatus for adaptive image data interpolation
US7634152B2 (en) 2005-03-07 2009-12-15 Hewlett-Packard Development Company, L.P. System and method for correcting image vignetting
US7522220B2 (en) * 2005-03-30 2009-04-21 Samsung Electronics Co., Ltd. Dual-channel adaptive 2D noise reduction for video signals
US20060285597A1 (en) * 2005-06-20 2006-12-21 Flextronics International Usa, Inc. Reusing interpolated values in advanced video encoders
CN2804798Y (en) * 2005-06-21 2006-08-09 吴东明 Horizon measuring means with magnetic device
US7769225B2 (en) 2005-08-02 2010-08-03 Kla-Tencor Technologies Corp. Methods and systems for detecting defects in a reticle design pattern
US7538824B1 (en) * 2005-08-18 2009-05-26 Magnum Semiconductor, Inc. Systems and methods for reducing noise during video deinterlacing
US8041103B2 (en) 2005-11-18 2011-10-18 Kla-Tencor Technologies Corp. Methods and systems for determining a position of inspection data in design data space
US7676077B2 (en) 2005-11-18 2010-03-09 Kla-Tencor Technologies Corp. Methods and systems for utilizing design data in combination with inspection data
US7570796B2 (en) 2005-11-18 2009-08-04 Kla-Tencor Technologies Corp. Methods and systems for utilizing design data in combination with inspection data
US8120658B2 (en) * 2006-01-19 2012-02-21 Qualcomm Incorporated Hand jitter reduction system for cameras
US8019179B2 (en) * 2006-01-19 2011-09-13 Qualcomm Incorporated Hand jitter reduction for compensating for linear displacement
US7970239B2 (en) * 2006-01-19 2011-06-28 Qualcomm Incorporated Hand jitter reduction compensating for rotational motion
GB2439119B (en) * 2006-06-12 2011-04-20 Tandberg Television Asa Motion estimator
FR2906093A1 (en) * 2006-09-18 2008-03-21 Canon Kk METHODS AND DEVICES FOR ENCODING AND DECODING, TELECOMMUNICATION SYSTEM AND COMPUTER PROGRAM USING THE SAME
WO2008077100A2 (en) 2006-12-19 2008-06-26 Kla-Tencor Corporation Systems and methods for creating inspection recipes
US8194968B2 (en) 2007-01-05 2012-06-05 Kla-Tencor Corp. Methods and systems for using electrical information for a device being fabricated on a wafer to perform one or more defect-related functions
US8189061B1 (en) * 2007-03-21 2012-05-29 Ambarella, Inc. Digital still camera with multiple frames combined into a single frame for digital anti-shake/anti-blur
US7962863B2 (en) 2007-05-07 2011-06-14 Kla-Tencor Corp. Computer-implemented methods, systems, and computer-readable media for determining a model for predicting printability of reticle features on a wafer
US7738093B2 (en) 2007-05-07 2010-06-15 Kla-Tencor Corp. Methods for detecting and classifying defects on a reticle
US8213704B2 (en) 2007-05-09 2012-07-03 Kla-Tencor Corp. Methods and systems for detecting defects in a reticle design pattern
US20080309770A1 (en) * 2007-06-18 2008-12-18 Fotonation Vision Limited Method and apparatus for simulating a camera panning effect
US8254455B2 (en) * 2007-06-30 2012-08-28 Microsoft Corporation Computing collocated macroblock information for direct mode macroblocks
US7796804B2 (en) * 2007-07-20 2010-09-14 Kla-Tencor Corp. Methods for generating a standard reference die for use in a die to standard reference die inspection and methods for inspecting a wafer
US7711514B2 (en) 2007-08-10 2010-05-04 Kla-Tencor Technologies Corp. Computer-implemented methods, carrier media, and systems for generating a metrology sampling plan
US7975245B2 (en) 2007-08-20 2011-07-05 Kla-Tencor Corp. Computer-implemented methods for determining if actual defects are potentially systematic defects or potentially random defects
FR2925200A1 (en) * 2007-12-13 2009-06-19 Thomson Licensing Sas VIDEO IMAGE FORMAT CONVERSION METHOD AND CORRESPONDING DEVICE
US8139844B2 (en) 2008-04-14 2012-03-20 Kla-Tencor Corp. Methods and systems for determining a defect criticality index for defects on wafers
KR101841897B1 (en) 2008-07-28 2018-03-23 케이엘에이-텐코어 코오포레이션 Computer-implemented methods, computer-readable media, and systems for classifying defects detected in a memory device area on a wafer
US8189666B2 (en) 2009-02-02 2012-05-29 Microsoft Corporation Local picture identifier and computation of co-located information
US8775101B2 (en) 2009-02-13 2014-07-08 Kla-Tencor Corp. Detecting defects on a wafer
US8204297B1 (en) 2009-02-27 2012-06-19 Kla-Tencor Corp. Methods and systems for classifying defects detected on a reticle
US8112241B2 (en) 2009-03-13 2012-02-07 Kla-Tencor Corp. Methods and systems for generating an inspection process for a wafer
US8571355B2 (en) * 2009-08-13 2013-10-29 Samsung Electronics Co., Ltd. Method and apparatus for reconstructing a high-resolution image by using multi-layer low-resolution images
US8781781B2 (en) 2010-07-30 2014-07-15 Kla-Tencor Corp. Dynamic care areas
TWI471010B (en) * 2010-12-30 2015-01-21 Mstar Semiconductor Inc A motion compensation deinterlacing image processing apparatus and method thereof
US9170211B2 (en) 2011-03-25 2015-10-27 Kla-Tencor Corp. Design-based inspection using repeating structures
US9087367B2 (en) 2011-09-13 2015-07-21 Kla-Tencor Corp. Determining design coordinates for wafer defects
US8831334B2 (en) 2012-01-20 2014-09-09 Kla-Tencor Corp. Segmentation for wafer inspection
US8826200B2 (en) 2012-05-25 2014-09-02 Kla-Tencor Corp. Alteration for wafer inspection
US9189844B2 (en) 2012-10-15 2015-11-17 Kla-Tencor Corp. Detecting defects on a wafer using defect-specific information
US9053527B2 (en) 2013-01-02 2015-06-09 Kla-Tencor Corp. Detecting defects on a wafer
US9134254B2 (en) 2013-01-07 2015-09-15 Kla-Tencor Corp. Determining a position of inspection system output in design data space
US9311698B2 (en) 2013-01-09 2016-04-12 Kla-Tencor Corp. Detecting defects on a wafer using template image matching
WO2014149197A1 (en) 2013-02-01 2014-09-25 Kla-Tencor Corporation Detecting defects on a wafer using defect-specific and multi-channel information
US9865512B2 (en) 2013-04-08 2018-01-09 Kla-Tencor Corp. Dynamic design attributes for wafer inspection
US9310320B2 (en) 2013-04-15 2016-04-12 Kla-Tencor Corp. Based sampling and binning for yield critical defects
US9942560B2 (en) 2014-01-08 2018-04-10 Microsoft Technology Licensing, Llc Encoding screen capture data
US9749642B2 (en) 2014-01-08 2017-08-29 Microsoft Technology Licensing, Llc Selection of motion vector precision
US9774881B2 (en) 2014-01-08 2017-09-26 Microsoft Technology Licensing, Llc Representing motion vectors in an encoded bitstream
CN105701772B (en) * 2014-11-28 2019-07-23 展讯通信(上海)有限公司 A kind of post processing of image method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0454442A2 (en) * 1990-04-27 1991-10-30 Canon Kabushiki Kaisha Image processing device
US5341174A (en) * 1992-08-17 1994-08-23 Wright State University Motion compensated resolution conversion system
WO1995025404A1 (en) * 1994-03-16 1995-09-21 France Telecom Method and device for estimating motion between television frames in a frame sequence
EP0785683A2 (en) * 1996-01-17 1997-07-23 Sharp Kabushiki Kaisha Image data interpolating apparatus

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0294960B1 (en) * 1987-06-09 1994-09-28 Sony Corporation Motion vector processing in television images
JPH01138858A (en) * 1987-11-26 1989-05-31 Matsushita Electric Ind Co Ltd Picture processing method
JPH01138859A (en) * 1987-11-26 1989-05-31 Matsushita Electric Ind Co Ltd Picture processing method
EP0474285B1 (en) * 1990-09-03 1996-05-08 Koninklijke Philips Electronics N.V. Edge direction detector for image processing video systems
US5191413A (en) * 1990-11-01 1993-03-02 International Business Machines System and method for eliminating interlace motion artifacts in captured digital video data
JP2611591B2 (en) * 1991-10-31 1997-05-21 日本ビクター株式会社 Motion compensator
US5185664A (en) * 1991-10-31 1993-02-09 North American Philips Corporation Method and apparatus for combining field and frame recursive noise reduction for video signals
US5657402A (en) * 1991-11-01 1997-08-12 Massachusetts Institute Of Technology Method of creating a high resolution still image using a plurality of images and apparatus for practice of the method
US5309243A (en) * 1992-06-10 1994-05-03 Eastman Kodak Company Method and apparatus for extending the dynamic range of an electronic imaging system
EP0648046B1 (en) * 1993-10-11 1999-12-22 THOMSON multimedia Method and apparatus for motion compensated interpolation of intermediate fields or frames
US5642170A (en) * 1993-10-11 1997-06-24 Thomson Consumer Electronics, S.A. Method and apparatus for motion compensated interpolation of intermediate fields or frames
DE4344924A1 (en) * 1993-12-30 1995-08-10 Thomson Brandt Gmbh Method and device for motion estimation
GB2297450B (en) * 1995-01-18 1999-03-10 Sony Uk Ltd Video processing method and apparatus
US5579054A (en) * 1995-04-21 1996-11-26 Eastman Kodak Company System and method for creating high-quality stills from interlaced video
FR2742900B1 (en) * 1995-12-22 1998-02-13 Thomson Multimedia Sa METHOD FOR INTERPOLATING PROGRESSIVE FRAMES
US5784115A (en) * 1996-12-31 1998-07-21 Xerox Corporation System and method for motion compensated de-interlacing of video frames
US6122017A (en) * 1998-01-22 2000-09-19 Hewlett-Packard Company Method for providing motion-compensated multi-field enhancement of still images from video

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0454442A2 (en) * 1990-04-27 1991-10-30 Canon Kabushiki Kaisha Image processing device
US5341174A (en) * 1992-08-17 1994-08-23 Wright State University Motion compensated resolution conversion system
WO1995025404A1 (en) * 1994-03-16 1995-09-21 France Telecom Method and device for estimating motion between television frames in a frame sequence
EP0785683A2 (en) * 1996-01-17 1997-07-23 Sharp Kabushiki Kaisha Image data interpolating apparatus

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
LEE S ET AL: "TWO-STEP MOTION ESTIMATION ALGORITHM USING LOW-RESOLUTION QUANTIZATION", PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (IC, LAUSANNE, SEPT. 16 - 19, 1996, VOL. VOL. 3, PAGE(S) 795 - 798, INSTITUTE OF ELECTRICAL AND ELECTRONICS ENGINEERS, ISBN: 0-7803-3259-8, XP000704130 *
NATARAJAN B ET AL: "LOW-COMPLEXITY BLOCK-BASED MOTION ESTIMATION VIA ONE-BIT TRANSFORMS", IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, vol. 7, no. 4, 1 August 1997 (1997-08-01), pages 702 - 706, XP000694623, ISSN: 1051-8215 *
OGURA E ET AL: "A COST EFFECTIVE MOTION ESTIMATION PROCESSOR LSI USING A SIMPLE ANDEFFICIENT ALGORITHM", IEEE TRANSACTIONS ON CONSUMER ELECTRONICS, vol. 41, no. 3, 1 August 1995 (1995-08-01), pages 690 - 696, XP000539525, ISSN: 0098-3063 *
SIU-LEONG IU: "COMPARISION OF MOTION COMPENSATION USING DIFFERENT DEGREES OF SUB- PIXEL ACCURACY FOR INTERFIELD/INTERFRAME HYBRID CODING OF HDTV IMAGE SEQUENCES", MULTIDIMENSIONAL SIGNAL PROCESSING, SAN FRANCISCO, MAR. 23 - 26, 1992, VOL. VOL. 3, NR. CONF. 17, PAGE(S) 465 - 468, INSTITUTE OF ELECTRICAL AND ELECTRONICS ENGINEERS, ISBN: 0-7803-0532-9, XP000378969 *

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1202220A3 (en) * 2000-10-16 2004-10-20 Eastman Kodak Company Removing color aliasing artifacts from color digital images
US7092570B2 (en) 2000-10-16 2006-08-15 Eastman Kodak Company Removing color aliasing artifacts from color digital images
EP1202220A2 (en) * 2000-10-16 2002-05-02 Eastman Kodak Company Removing color aliasing artifacts from color digital images
DE10225227B4 (en) * 2001-07-06 2016-07-28 Hewlett-Packard Development Company, L.P. A method of providing digital video images and still images, an imaging system and computer readable medium
US7379501B2 (en) 2002-01-14 2008-05-27 Nokia Corporation Differential coding of interpolation filters
US8494300B2 (en) 2004-11-10 2013-07-23 DigitalOptics Corporation Europe Limited Method of notifying users regarding motion artifacts based on image analysis
US7639888B2 (en) 2004-11-10 2009-12-29 Fotonation Ireland Ltd. Method and apparatus for initiating subsequent exposures based on determination of motion blurring artifacts
US7676108B2 (en) 2004-11-10 2010-03-09 Fotonation Vision Limited Method and apparatus for initiating subsequent exposures based on determination of motion blurring artifacts
US8169486B2 (en) 2006-06-05 2012-05-01 DigitalOptics Corporation Europe Limited Image acquisition method and apparatus
US8520082B2 (en) 2006-06-05 2013-08-27 DigitalOptics Corporation Europe Limited Image acquisition method and apparatus
US8199222B2 (en) 2007-03-05 2012-06-12 DigitalOptics Corporation Europe Limited Low-light video frame enhancement
US8264576B2 (en) 2007-03-05 2012-09-11 DigitalOptics Corporation Europe Limited RGBW sensor array
US8698924B2 (en) 2007-03-05 2014-04-15 DigitalOptics Corporation Europe Limited Tone mapping for low-light video frame enhancement
US8878967B2 (en) 2007-03-05 2014-11-04 DigitalOptics Corporation Europe Limited RGBW sensor array
US8212882B2 (en) 2007-03-25 2012-07-03 DigitalOptics Corporation Europe Limited Handheld article with movement discrimination
WO2008151802A1 (en) * 2007-06-14 2008-12-18 Fotonation Ireland Limited Fast motion estimation method
US8989516B2 (en) 2007-09-18 2015-03-24 Fotonation Limited Image processing method and apparatus

Also Published As

Publication number Publication date
EP1296515A2 (en) 2003-03-26
EP0944251B1 (en) 2003-04-02
JPH11284834A (en) 1999-10-15
US6122017A (en) 2000-09-19
DE69812882D1 (en) 2003-05-08
DE69812882T2 (en) 2004-02-05
US6381279B1 (en) 2002-04-30
EP1296515A3 (en) 2003-04-02

Similar Documents

Publication Publication Date Title
EP0944251B1 (en) Method for providing motion-compensated multi-field enhancement of still images from video
US7406208B2 (en) Edge enhancement process and system
US7729563B2 (en) Method and device for video image processing, calculating the similarity between video frames, and acquiring a synthesized frame by synthesizing a plurality of contiguous sampled frames
JP4162621B2 (en) Frame interpolation method and apparatus for frame rate conversion
JP5594968B2 (en) Method and apparatus for determining motion between video images
US7570309B2 (en) Methods for adaptive noise reduction based on global motion estimation
JP2011508516A (en) Image interpolation to reduce halo
WO2020253103A1 (en) Video image processing method, device, apparatus, and storage medium
US20110206127A1 (en) Method and Apparatus of Frame Interpolation
US10021413B2 (en) Apparatus and method for video data processing
JP3591859B2 (en) Digital video data interpolation method and circuit
CN101496063A (en) Method and system for creating an interpolated image
US6307569B1 (en) POCS-based method for digital image interpolation
US7286721B2 (en) Fast edge-oriented image interpolation algorithm
Lee et al. A motion-adaptive de-interlacing method using an efficient spatial and temporal interpolation
CN110418081B (en) High dynamic range image full-resolution reconstruction method and device and electronic equipment
Zhang et al. Nonlocal edge-directed interpolation
Guo et al. Frame rate up-conversion using linear quadratic motion estimation and trilateral filtering motion smoothing
KR101829742B1 (en) Deinterlacing apparatus and method based on bilinear filter and fuzzy-based weighted average filter
KR101046347B1 (en) Image deinterlacing method and apparatus
KR100471186B1 (en) Deinterlacing apparatus and method for interpolating of image data by region based
JP5752167B2 (en) Image data processing method and image data processing apparatus
RU2308817C1 (en) Method and device for scaling a dynamic video-image
Venkatesan et al. Video deinterlacing with control grid interpolation
Zhang et al. Fusion-based edge-sensitive interpolation method for deinterlacing

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): DE FR GB

AX Request for extension of the european patent

Free format text: AL;LT;LV;MK;RO;SI

17P Request for examination filed

Effective date: 20000124

AKX Designation fees paid

Free format text: DE FR GB

17Q First examination report despatched

Effective date: 20001031

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: HEWLETT-PACKARD COMPANY, A DELAWARE CORPORATION

GRAH Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOS IGRA

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Designated state(s): DE FR GB

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REF Corresponds to:

Ref document number: 69812882

Country of ref document: DE

Date of ref document: 20030508

Kind code of ref document: P

ET Fr: translation filed
PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed

Effective date: 20040105

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20060825

Year of fee payment: 9

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20060831

Year of fee payment: 9

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20061002

Year of fee payment: 9

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20070831

REG Reference to a national code

Ref country code: FR

Ref legal event code: ST

Effective date: 20080430

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20080301

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20070831

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20070831