GB2502047A - Video sequence processing by dissimilarity value processing using a filter aperture decomposed into two or more regions - Google Patents


Info

Publication number
GB2502047A
GB2502047A (Application GB1206065.3A)
Authority
GB
United Kingdom
Prior art keywords
regions
pixel
partial
filter
filter aperture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB1206065.3A
Other versions
GB2502047B (en)
GB201206065D0 (en)
Inventor
Michael James Knee
Martin Weston
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Snell Advanced Media Ltd
Original Assignee
Snell Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Snell Ltd
Priority to GB1206065.3A (GB2502047B)
Priority to GB1905665.4A (GB2572497B)
Publication of GB201206065D0
Priority to US13/832,764 (US9532053B2)
Publication of GB2502047A
Priority to US15/367,284 (US20170085912A1)
Application granted
Publication of GB2502047B
Legal status: Active
Anticipated expiration


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/117 Filters, e.g. for pre-processing or post-processing
    • H04N19/521 Processing of motion vectors for estimating the reliability of the determined motion vectors or motion vector field, e.g. for smoothing the motion vector field or for correcting motion vectors
    • H04N19/137 Motion inside a coding unit, e.g. average field, frame or block difference
    • H04N19/182 Adaptive coding characterised by the coding unit, the unit being a pixel
    • H04N19/51 Motion estimation or motion compensation
    • H04N19/537 Motion estimation other than block-based
    • H04N19/80 Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
    • H04N19/86 Pre-processing or post-processing involving reduction of coding artifacts, e.g. of blockiness

Abstract

In video sequence processing, motion compensated (MC) pixel-to-pixel dissimilarity (e.g. displaced field difference, DFD) values are filtered with a filter aperture decomposed into two or more regions or sectors, with partial filters applied to each region. Outputs of the partial filters are combined by a non-linear operation, for example taking the minimum from diametrically opposed regions. The dissimilarity value may be a rectified displaced field difference, and the regions may be non-overlapping. Partial filters may operate on a number (e.g. eight) of radial line segments. Partial filters can operate on sectors (e.g. octants, Figure 6) of the filter aperture. The partial filtering operation may be a rank-order or averaging operation, either of which may process the minimum values from region pairs. The filtering may be applied to a video sequence processing method comprising candidate motion vector derivation based on a motion vector error value, the motion vector projecting between first and second images for a current pixel and plural neighbouring pixels; in this case, the current and neighbouring pixels define the filter aperture.

Description

VIDEO SEQUENCE PROCESSING
FIELD OF INVENTION
This invention relates to video sequence processing particularly in connection with motion estimation of video signals.
BACKGROUND OF THE INVENTION
In the estimation of motion vectors between video frames, motion vectors are assigned to pixels, or blocks of pixels, in each frame and describe the estimated displacement of each pixel or block in a next frame or a previous frame in the sequence of frames. In the following description, the motion estimation is considered to be "dense" meaning that a motion vector is calculated for every pixel. The definition of "dense" may be widened to cover the calculation of a motion vector for each small block in the picture, for each pixel in a subsampled version of the picture, or for each small region of arbitrary shape within which the motion is expected to be uniform. The invention can be applied with trivial modification to these wider cases.
Motion estimation has application in many image and video processing tasks, including video compression, motion-compensated temporal interpolation for standards conversion or slow-motion synthesis, motion-compensated noise reduction, object tracking, image segmentation, and, in the form of displacement estimation, stereoscopic 3D analysis and view synthesis from multiple cameras.
Most applications of motion estimation involve the "projection" (also described as "shifting") of picture information forward or backward in time according to the motion vector that has been estimated. This is known as "motion-compensated" projection. The projection may be to the time instant of an existing frame or field, for example in compression, where a motion-compensated projection of a past or future frame to the current frame instant serves as a prediction of the current frame. Alternatively, the projection may be to a time instant not in the input sequence, for example in motion-compensated standards conversion, where information from a current frame is projected to an output time instant, where it will be used to build a motion-compensated interpolated output frame.
Some of the terminology used in describing motion estimation systems will now be described. Figure 1 shows one-dimensional sections through two successive frames in a sequence of video frames. The horizontal axis of Figure 1 represents time, and the vertical axis represents position. Of course, the skilled person will recognise that Figure 1 is a simplification and that motion vectors used in image processing are generally two dimensional. The illustrated frames are: a previous or reference frame (101); and, the current frame (102). A motion vector (104) is shown assigned to a pixel (103) in the current frame. The motion vector indicates a point (105) in the reference frame which is the estimated source, in the reference frame, of the current frame pixel (103). This example shows a backward vector. Forward vectors may also be measured, in which case the reference frame is the next frame in the sequence rather than the previous frame.
The following descriptions assume that these frames are consecutive in the sequence, but the described processes are equally applicable in cases where there are intervening frames, for example in some compression algorithms.
Temporal samples of an image will henceforth be referred to as fields, as would be the case when processing interlaced images. However, as the skilled person will appreciate, in non-interlaced image formats a temporal sample is represented by a frame; and, fields may be 'de-interlaced' to form frames within an image process. The spatial sampling of the image is not relevant to the discussion which follows.
An example of an algorithm that calculates motion vectors is disclosed in GB2188510. This algorithm is summarised in Figure 2 and assigns a single vector to every pixel of a current field in a sequence of fields. The process of Figure 2 is assumed to operate sequentially on the pixels of the current field; the pixel whose vector assignment is currently being determined will be referred to as the current pixel. The current field (202) and the previous field (201) are applied to a phase correlation unit (203) which calculates a "menu" (204) for every pixel of the current field consisting of a number (three in this example) of candidate motion vectors. Each candidate vector controls a respective member of a set of shift units (205) which, for every pixel in the current field, displaces the previous field (201) by the respective candidate vector to produce a shifted pixel corresponding to the current pixel of the current field in the respective member of the set of displaced fields (206).
A set of error calculation units (207) produces a set of error values (208), one error value for every menu vector for every pixel of the current field. Each of the error calculation units (207) subtracts the respective one of the displaced fields (206) from the current field (202) and rectifies the result to produce a field of difference magnitudes, which are known as displaced field differences or "DFDs".
Each of the error calculation units (207) spatially filters its respective field of DFDs in a filter centred on the current pixel to give an error value for that pixel and menu vector. This spatially filtered DFD is the error value for the respective current pixel and vector. The set of three error values (208) for the current pixel are compared in a comparison unit (209), which finds the minimum error value. The comparison unit (209) outputs a candidate index (210), which identifies the vector that gave rise to the minimum error value. The candidate index (210) is then applied to a vector selection unit (211) to select the identified candidate from the menu of vectors (204) as the respective output assigned vector (212) for the current pixel.
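The prior-art candidate-selection loop described above can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the patented apparatus: the array shapes, the wrap-around displacement via np.roll, the edge-padded box filter and the function names are all assumptions made for the sketch.

```python
import numpy as np

def box_filter(a, size=5):
    """Edge-padded running-average (box) filter, centred on each pixel."""
    pad = size // 2
    p = np.pad(a, pad, mode='edge')
    out = np.zeros(a.shape, dtype=float)
    for dy in range(size):
        for dx in range(size):
            out += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out / (size * size)

def assign_vectors(current, previous, candidates, filter_size=5):
    """For every pixel, return the index of the candidate vector whose
    spatially filtered, rectified DFD is smallest (cf. units 207-211)."""
    errors = []
    for dy, dx in candidates:
        displaced = np.roll(previous, shift=(dy, dx), axis=(0, 1))
        dfd = np.abs(current - displaced)        # rectified DFD field
        errors.append(box_filter(dfd, filter_size))
    return np.argmin(np.stack(errors), axis=0)   # candidate index (210)
```

A field that really has moved by one of the candidate vectors yields a zero filtered error for that candidate, so that candidate wins the comparison at every pixel.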
An important property of DFDs will now be described. If a candidate motion vector for a pixel describes the true motion of that pixel, then we would expect the DFD to be small, and only non-zero because of noise in the video sequence. If the candidate motion vector is incorrect, then the DFD may well be large, but it might be coincidentally small. For example, a rising waveform in one field may match a falling waveform in the displaced field at the point where they cross.
Alternatively, a pixel may be in a plain area or in a one-dimensional edge, in which case several motion vectors would give rise to a small or even a zero DFD value. This inconvenient property of DFDs is sometimes referred to as the "aperture problem" and leads to the necessity of spatially filtering the DFDs in order to take information from nearby pixels into account in determining the error value for a pixel.
In the example of Figure 2, each error calculation block (207) filters the DFDs with a two-dimensional filter, a typical example of which is a 5 x 5 running-average filter. It is this rectified and filtered error that is used for comparison of candidate motion vectors. Figure 3 illustrates the positions of the 25 samples involved in the running-average filter. The 5 x 5 arrangement of 25 samples comprises the samples within the rectangular filter window (302) and is centred on the current pixel position (301).
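A 5 x 5 running-average filter of this kind can be implemented separably with cumulative sums, so its cost is independent of the window size. The sketch below is one possible realisation; the edge-replication padding is an assumption, since the text does not specify how window positions near the field boundary are treated.

```python
import numpy as np

def running_average(dfd, size=5):
    """Separable box filter: a horizontal then a vertical moving sum,
    each computed from cumulative sums, divided by the window area.
    Edges are handled by replicating the boundary samples."""
    pad = size // 2
    padded = np.pad(dfd, pad, mode='edge')
    # Horizontal moving sum of width `size`.
    c = np.cumsum(padded, axis=1)
    h = c[:, size - 1:] - np.concatenate(
        [np.zeros((c.shape[0], 1)), c[:, :-size]], axis=1)
    # Vertical moving sum of height `size`.
    c = np.cumsum(h, axis=0)
    v = c[size - 1:, :] - np.concatenate(
        [np.zeros((1, c.shape[1])), c[:-size, :]], axis=0)
    return v / (size * size)
```

With edge padding, a constant input passes through unchanged, which is a convenient sanity check on the index arithmetic.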
Choosing the size of the two-dimensional DFD filter involves a trade-off between reliability and spatial accuracy of the resulting assigned motion vector field. If, on the one hand, the filter is large, then the effect of noise on the filtered error value is reduced and the filter is more likely to take into account nearby detail in the picture which might help to distinguish reliably between candidate motion vectors.
However, a large filter is also more likely to take in pixel data from one or more objects whose motion is properly described by different motion vectors, in which case it will fail to give a low error value for any candidate motion vector, even for one that is correct for the pixel in question.
If, on the other hand, the filter is small, it is more likely to involve pixels from only one object and so is more likely to return a low error value for the correct motion vector. However, it will be less likely to reject wrong motion vectors and will be more susceptible to noise.
The inventors have observed that, for critical picture material, there is no choice of filter size which yields satisfactory performance in all aspects of reliability, noise immunity, spatial accuracy and sensitivity. However, the inventors have recognized that it is possible to design an improved displaced field difference filter which combines the reliability and noise immunity of a large conventional filter with the sensitivity and spatial accuracy of a small filter, while avoiding the disadvantages of each.
SUMMARY OF THE INVENTION
The invention consists of a method and apparatus for filtering displaced field differences arising from candidate motion vectors, characterised in that the filter window is decomposed into regions that are filtered separately and whose outputs are combined by a non-linear operation.
BRIEF DESCRIPTION OF THE DRAWINGS
An example of the invention will now be described with reference to the drawings in which: Figure 1 is a diagram showing current and previous frames in an image sequence and a backward motion vector extending from a pixel in the current frame; Figure 2 is a block diagram of apparatus for assigning backward motion vectors to pixels according to the prior art; Figure 3 is a diagram of a filter window according to the prior art; Figure 4 is a diagram of a set of filter windows according to a first embodiment of the invention; Figure 5 is a block diagram of an improved filter according to a first embodiment of the invention; Figure 6 is a diagram of a set of filter windows according to a second embodiment of the invention; and Figure 7 is a block diagram of an improved filter according to a second embodiment of the invention.
DETAILED DESCRIPTION OF THE INVENTION
As explained in the introduction, a displaced field difference filter operates on a set of DFDs representing difference values between current field pixels and respective displaced field pixels for a particular motion vector. Typically the difference values are rectified prior to filtering so that the magnitudes of the errors are represented by the DFDs. The filter takes contributions from the DFDs for a number of pixels within a filter window surrounding a current pixel; the DFD for the current pixel may also be used. Contributions from these DFDs are used to form an error value for the current pixel.
The input DFD values being filtered arise from a candidate motion vector, or from a smoothly varying motion vector field, calculated by known methods. In the description that follows, the term "motion vector" refers either to a constant vector over a region or to a smoothly varying vector field.
Displaced field difference filters according to examples of the invention will now be described. In each case the filter output is an error value for a particular motion vector at a particular pixel position within a current field; this pixel position will be referred to as the current pixel. The filter input DFD values will be referred to as samples, and the DFD corresponding to the current pixel will be described as the current sample. The positions of samples correspond with the positions of the respective current field pixels used to calculate the respective DFDs.
The filter window of a first exemplary embodiment of the invention is illustrated in Figure 4, to which reference is now directed. The filter is given access to a number of contributing samples surrounding the current sample (401). Only samples that are used by the filter are shown in Figure 4; other samples in the vicinity of the current sample are not shown; typically there will be intermediate, unused samples forming part of an orthogonal spatial sampling structure for the current field. The contributing samples are grouped into eight line segments (402 to 409) in a star pattern centred on the current sample (401). The choice of this pattern is a compromise between economy and ease of access to samples in a hardware implementation, and the need to cover a reasonably wide area surrounding the current sample. In this particular example, each line segment contains seven samples, though other sizes are possible without departing from the scope of the invention.
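The geometry of the star pattern can be made concrete with a short sketch. The exact directions chosen here (horizontal, vertical and both diagonals, in both senses) are an assumption about Figure 4; the description only requires radial line segments grouped into diametrically opposite pairs.

```python
def star_pattern(segment_length=7):
    """Offsets (dy, dx) of the eight radial line segments of the star
    pattern, listed so that segments 2k and 2k+1 are diametrically
    opposite (cf. pairs 402/403, 404/405, 406/407, 408/409)."""
    directions = [(0, 1), (0, -1), (1, 0), (-1, 0),
                  (1, 1), (-1, -1), (1, -1), (-1, 1)]
    return [[(r * dy, r * dx) for r in range((1), segment_length + 1)]
            for dy, dx in directions]
```

Eight segments of seven offsets each, plus the current sample itself, account for the 57 samples referred to in the description of Figure 5.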
The object of the filter is to give a high output if the motion vector that gave rise to the contributing samples is the wrong motion vector for the position of the current sample (401), and to give a low output if the motion vector is correct. If we begin with the assumption that the validity or invalidity of a motion vector extends across the area covered by the star pattern, then a high sample value somewhere in the pattern constitutes evidence that the motion vector is incorrect, and a suitable non-linear filtering operation would be to take the maximum of the sample values across the pattern. However, it is quite possible that a boundary between two differently moving objects, for example the line shown (410), will cross the area. In this case, if the motion vector that gave rise to the sample is the one describing the motion of the right-hand object, we would expect the samples to the right of the line to have low values and those to the left to have at least some high values. We observe that, if the eight line segments in the star pattern are grouped into pairs of diametrically opposite segments (402 with 403; 404 with 405; 406 with 407; and, 408 with 409) then one segment of each pair will be expected to contain low sample values. The operation of the first inventive filter is therefore to take maximum values in each line segment, and then to take the minimum of the two maxima within each pair. This operation produces four values, all of which we expect to be low if the motion vector is correct. A further operation of the filter is therefore to take the maximum of the four minima.
Finally, it is important for spatial accuracy to take account of the current sample. This is done by combining its value with the output of the filter so far defined, for example by taking the mean square value.
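The max-within-segment, min-across-pair, max-of-minima sequence, followed by a root-mean-square combination with the current sample, can be written down directly. This is a sketch: the pairing convention (segments 2k and 2k+1 opposite) and the function name are illustrative, and the RMS form of the final combination follows the fuller description given with Figure 5.

```python
import numpy as np

def star_filter(current_sample, segment_samples):
    """First-embodiment DFD filter: maximum within each line segment,
    minimum over each diametrically opposite pair, maximum of the four
    minima, then RMS combination with the current (centre) sample.
    `segment_samples` is a list of eight arrays of rectified DFDs,
    ordered so that segments 2k and 2k+1 are diametrically opposite."""
    maxima = [np.max(s) for s in segment_samples]
    minima = [min(maxima[2 * k], maxima[2 * k + 1]) for k in range(4)]
    combined = max(minima)
    # Combine with the current sample by taking the root-mean-square.
    return np.sqrt((combined ** 2 + current_sample ** 2) / 2)
```

Note how a motion boundary is tolerated: a pair whose segments straddle the boundary contributes only the quieter segment's maximum, so one high-valued segment per pair cannot raise the error on its own.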
An alternative description of the first exemplary inventive filter will now be given with reference to the block diagram in Figure 5. The filter receives an input stream of samples (500) corresponding to the DFDs for a current field and a particular motion vector. The samples are ordered according to a scanning raster so that when they are passed through a chain of delay elements (510) suitable choices for the delay values give access to the 57 (in this example) samples at the locations shown in the star pattern of Figure 4. The output of the delay chain (510) takes the form of eight sets (502 to 509) of seven samples each, where output (502) corresponds to line segment (402), output (503) to line segment (403), and so on, together with the central sample (501), corresponding to current sample (401). The maximum value of each of the eight sets is found in respective maximum-value calculation units (512) to (519). The resulting maximum values (522) to (529) are applied in pairs to minimum-value calculation units (532), (534), (536) and (538) so as to find the respective minimum values from diametrically-opposite filter window segments. The resulting minimum values (542), (544), (546) and (548) are applied to a maximum-value calculation unit (550) whose output (551) is combined (553) with the current sample (501) by taking the root-mean-square value, which forms the filtered DFD output (554).
Possible variations of this filter will now be described. In a first variation, the eight maximum-value calculation units (512) to (519) are replaced by eight averaging units. This variation can improve the noise immunity of the filter. In a second variation, the subsequent maximum-value unit (550) is likewise replaced by an averaging unit.
It will be apparent to the skilled person that other choices of processing elements may also be used. For example, units (512) to (519) may calculate: a mean square value; a combination of the mean and the maximum; or, other rank-order values such as the second or third highest value. Similarly, unit (550) may also take: a mean square value; a combination of the mean and the maximum; or, the second highest value. Such decisions are a trade-off between robustness to noise and sensitivity to data, and between reliability and the capability of handling motion vector boundaries that are more complex in shape.
A displaced field difference filter according to a second exemplary embodiment of the invention will now be described. The second filter is more reliable than those previously described, at the cost of an increase in complexity. Figure 6 shows the samples involved in the second filter, based on an example window size of 15 x 15. In place of the eight 7-sample line segments shown in Figure 4, this filter has eight octants (602) to (609) each containing 28 samples. (In Figure 6 the sample positions in alternate octants are indicated by open circles so as to indicate more clearly the allocation of samples to octants.) The average value of the samples within each octant is taken, and subsequent processing may be the same as that of the first filter.
Preferably, however, the final combining step (553) of Figure 5 may be replaced by a linear combination of the output of the four-value mean (550 in Figure 5) with the output of a conventional 5 x 5 running-average filter whose window (610) is also shown in Figure 6.
The architecture of the second filter may be based on Figure 5, with the output of delay chain (510) now consisting of eight sets of 28 samples. However, a more efficient implementation is as shown in Figure 7, where the chain of delay elements and the mean-value calculations at its output are replaced by octant-shaped running-average filters which may be constructed, for example, as described in UK patent application 1113569.6, with additional simplifications that exploit the fact that the octants have shared boundaries.
Referring to Figure 7, the input stream of samples (700) is applied to eight octant-shaped running-average filters (712) to (719) whose outputs (722) to (729) are applied in pairs to minimum-value calculation units (732), (734), (736) and (738) so as to find the respective minimum values from diametrically-opposite filter window segments. The resulting minimum values (742), (744), (746) and (748) are applied to an averaging unit (750) whose output (751) is linearly combined (753) with the output (752) of a 5 x 5 running-average filter (702) applied to a suitably delayed version (701) of the input (700), to produce a final filtered DFD output (754). A typical linear combination in block (753) is to add 75% of the output (751) of the averaging unit (750) to 25% of the output (752) of the 5 x 5 running-average filter (702).
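The final stages of the second filter reduce to a few arithmetic steps once the eight per-octant averages and the 5 x 5 box-filter output are available. The sketch below assumes, as before, that diametrically opposite octants are listed adjacently; the 75%/25% weighting is the typical value given in the description.

```python
def second_filter(octant_means, box_value, weight=0.75):
    """Second-embodiment combination: minimum over each diametrically
    opposite octant pair (units 732-738), mean of the four minima
    (unit 750), then a linear blend (753) with the conventional 5 x 5
    running-average output. `octant_means` holds the eight per-octant
    DFD averages, with opposite pairs at indices 2k and 2k+1."""
    minima = [min(octant_means[2 * k], octant_means[2 * k + 1])
              for k in range(4)]
    averaged = sum(minima) / 4.0
    return weight * averaged + (1.0 - weight) * box_value
```

The blend restores some of the spatial accuracy of the small conventional filter: the box-filter term responds to the current pixel's own neighbourhood while the octant term supplies the wide-area reliability.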
The invention so far described involves filter windows of particular sizes and shapes. It will be apparent to the skilled person that other sizes and shapes may be chosen without departing from the scope of the invention. For example, the line segments of the star pattern in Figure 4 may contain fewer or more than the seven samples shown. The pattern may also have fewer or more than the eight line segments shown. Likewise, the square window shown in Figure 6 may be smaller or larger than the 15 x 15 window shown, and the eight octants may be replaced by suitable numbers of other shapes, for example four quadrants or sixteen sedecants. The window need not be square: for example, windows that are polygonal with other than four sides, or that are approximately circular, may also be used. It is also possible to combine error value samples from overlapping segments of the filter window without departing from the scope of the invention.
The above description is based on displaced field differences. Other measures of pixel-to-pixel dissimilarity may also be used, including but not limited to: non-linear functions of displaced field difference; displaced field differences between noise-reduced fields; Euclidean or other distances between multidimensional signals, for example RGB signals; and differences between feature point descriptors.
The implementations of the filters have been described in terms of serial processing of streams of values, typically ordered according to a scanning raster.
Of course the skilled person will appreciate that many other implementations of the inventive filters are possible, including, for example, the use of random-access field or frame stores or programmable apparatus. And, as explained in the introduction, filtering according to the invention may be applied to measures of dissimilarity between subsamples or regions of an image.
Although motion-compensated processing is typically applied to a time sequence of images, the same processes may be used with spatial image sequences, where the sequence comprises different views of a common scene, or different views captured in a time sequence. The current invention is equally applicable to the processing of these other types of image sequence.

Claims (27)

  1. In video sequence processing, a method of filtering motion-compensated pixel-to-pixel dissimilarity values in which the filter aperture is decomposed into two or more regions and the outputs of partial filters applied to each region are combined by a non-linear operation.
  2. A method according to claim 1 in which the dissimilarity value is a rectified displaced field difference.
  3. A method according to claim 1 in which the regions are non-overlapping.
  4. A method according to claim 1 in which the non-linear combination process includes taking minimum values of partial-filter outputs from pairs of regions that are diametrically opposite each other in the filter aperture.
  5. A method according to claim 4 in which the partial filters operate on radial line segments.
  6. A method according to claim 5 in which the number of radial line segments is eight.
  7. A method according to claim 4 in which the partial filters operate on sectors of the filter aperture.
  8. A method according to claim 7 in which the sectors are octants and the number of sectors is eight.
  9. A method according to claim 1 in which the partial filtering operation is a rank-order operation.
  10. A method according to claim 1 in which the partial filtering operation is an averaging operation.
  11. A method according to claim 4 in which the minimum values from pairs of regions are processed by a rank-order operation.
  12. A method according to claim 4 in which the minimum values from pairs of regions are processed by an averaging operation.
  13. In video sequence processing, apparatus for filtering motion-compensated pixel-to-pixel dissimilarity values in which the filter aperture is decomposed into two or more regions and the outputs of partial filters applied to each region are combined by a non-linear operation.
  14. Apparatus according to claim 13 in which the dissimilarity value is a rectified displaced field difference.
  15. Apparatus according to claim 13 in which the regions are non-overlapping.
  16. Apparatus according to claim 13 in which the non-linear combination process includes taking minimum values of partial-filter outputs from pairs of regions that are diametrically opposite each other in the filter aperture.
  17. Apparatus according to claim 16 in which the partial filters operate on radial line segments.
  18. Apparatus according to claim 17 in which the number of radial line segments is eight.
  19. Apparatus according to claim 16 in which the partial filters operate on sectors of the filter aperture.
  20. Apparatus according to claim 19 in which the sectors are octants and the number of sectors is eight.
  21. Apparatus according to claim 13 in which the partial filtering operation is a rank-order operation.
  22. Apparatus according to claim 13 in which the partial filtering operation is an averaging operation.
  23. Apparatus according to claim 16 in which the minimum values from pairs of regions are processed by a rank-order operation.
  24. Apparatus according to claim 16 in which the minimum values from pairs of regions are processed by an averaging operation.
  25. A method of video sequence processing, comprising the steps of: deriving a candidate motion vector representing the displacement of an object between first and second images of the video sequence, each image being formed of pixels; using the motion vector to project from the first image to the second image a current pixel and a plurality of pixels neighbouring the current pixel, the current pixel and the neighbouring pixels defining a filter aperture; providing for each pixel in the filter aperture a dissimilarity value indicative of the dissimilarity between the pixel in the second image and the pixel projected from the first image; and deriving from the plurality of dissimilarity values in the filter aperture an error value for the candidate motion vector; characterised in that the filter aperture is decomposed into two or more regions, the dissimilarity values in each region are spatially filtered, and the outputs of the spatial filters applied to each region are combined by a non-linear operation to provide the error value for the candidate motion vector.
  26. 26. Programmable apparatus programmed to implement a method according to any one of Claims Ito 12 or 25.
  27. 27. A computer program product adapted to cause programmable apparatus to implement a method according to any one of Claims Ito 12 or 25.
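The scheme of claims 1, 4, 5, 10 and 12 can be sketched in a few lines of code: decompose a square filter aperture of dissimilarity values into eight radial line segments, average along each segment (the partial filter of claim 10), take the minimum over each diametrically opposite pair of segments (claim 4), and average the four minima to obtain the error value (claim 12). This is an illustrative reconstruction for the reader, not the patented implementation; the function name, the choice of a square odd-sized aperture, and the inclusion of the centre pixel in every segment are assumptions.

```python
import numpy as np

def aperture_error(diss):
    """Combine partial filters over eight radial line segments of a square,
    odd-sized aperture of dissimilarity values into a single error value.

    diss: 2-D numpy array of non-negative pixel-to-pixel dissimilarities,
          centred on the current pixel.
    """
    h, w = diss.shape
    cy, cx = h // 2, w // 2
    # Eight radial directions; index i and i + 4 are diametrically opposite.
    directions = [(0, 1), (1, 1), (1, 0), (1, -1),
                  (0, -1), (-1, -1), (-1, 0), (-1, 1)]
    partial = []
    for dy, dx in directions:
        # Walk outward from the centre pixel along this radial line segment.
        samples, y, x = [], cy, cx
        while 0 <= y < h and 0 <= x < w:
            samples.append(diss[y, x])
            y += dy
            x += dx
        partial.append(np.mean(samples))  # averaging partial filter (claim 10)
    # Minimum over diametrically opposite segment pairs (claim 4).
    minima = [min(partial[i], partial[i + 4]) for i in range(4)]
    # Combine the four minima by averaging (claim 12).
    return float(np.mean(minima))
```

The min-of-opposite-pairs step gives the combination its non-linear, edge-tolerant character: a moving-object boundary crossing the aperture inflates the segments on one side only, and the minimum discards them, so the error value reflects the better-matching half of the aperture. The rank-order variants of claims 9 and 11 would replace `np.mean` with an order statistic such as `np.median`.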
GB1206065.3A 2012-04-04 2012-04-04 Video sequence processing Active GB2502047B (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
GB1206065.3A GB2502047B (en) 2012-04-04 2012-04-04 Video sequence processing
GB1905665.4A GB2572497B (en) 2012-04-04 2012-04-04 Video sequence processing
US13/832,764 US9532053B2 (en) 2012-04-04 2013-03-15 Method and apparatus for analysing an array of pixel-to-pixel dissimilarity values by combining outputs of partial filters in a non-linear operation
US15/367,284 US20170085912A1 (en) 2012-04-04 2016-12-02 Video sequence processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1206065.3A GB2502047B (en) 2012-04-04 2012-04-04 Video sequence processing

Publications (3)

Publication Number Publication Date
GB201206065D0 GB201206065D0 (en) 2012-05-16
GB2502047A true GB2502047A (en) 2013-11-20
GB2502047B GB2502047B (en) 2019-06-05

Family

ID=46160349

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1206065.3A Active GB2502047B (en) 2012-04-04 2012-04-04 Video sequence processing

Country Status (2)

Country Link
US (2) US9532053B2 (en)
GB (1) GB2502047B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8860880B2 (en) 2012-07-24 2014-10-14 Snell Limited Offset interpolation of a sequence of images to obtain a new image
US9877022B2 (en) 2013-04-08 2018-01-23 Snell Limited Video sequence processing of pixel-to-pixel dissimilarity values

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10453208B2 (en) 2017-05-19 2019-10-22 Waymo Llc Camera systems using filters and exposure times to detect flickering illuminated objects
US11455705B2 (en) * 2018-09-27 2022-09-27 Qualcomm Incorporated Asynchronous space warp for remotely rendered VR

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011127961A1 (en) * 2010-04-13 2011-10-20 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e. V. Adaptive image filtering method and apparatus

Family Cites Families (44)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0261137B1 (en) 1986-03-19 1991-10-16 British Broadcasting Corporation Tv picture motion measurement
US5329317A (en) * 1992-09-30 1994-07-12 Matsushita Electric Corporation Of America Adaptive field/frame filter for interlaced video signals
US5479926A (en) * 1995-03-10 1996-01-02 Acuson Corporation Imaging system display processor
TW368640B (en) * 1996-12-25 1999-09-01 Hitachi Ltd Image processor, image processing device and image processing method
US6519005B2 (en) * 1999-04-30 2003-02-11 Koninklijke Philips Electronics N.V. Method of concurrent multiple-mode motion estimation for digital video
US6331874B1 (en) * 1999-06-29 2001-12-18 Lsi Logic Corporation Motion compensated de-interlacing
US6484191B1 (en) * 1999-07-02 2002-11-19 Aloka Co., Ltd. Apparatus and method for the real-time calculation of local variance in images
US6563550B1 (en) * 2000-03-06 2003-05-13 Teranex, Inc. Detection of progressive frames in a video field sequence
US7423691B2 (en) * 2001-11-19 2008-09-09 Matsushita Electric Industrial Co., Ltd. Method of low latency interlace to progressive video format conversion
US7119837B2 (en) * 2002-06-28 2006-10-10 Microsoft Corporation Video processing system and method for automatic enhancement of digital video
US7016415B2 (en) * 2002-07-16 2006-03-21 Broadcom Corporation Modifying motion control signals based on input video characteristics
US7057664B2 (en) * 2002-10-18 2006-06-06 Broadcom Corporation Method and system for converting interlaced formatted video to progressive scan video using a color edge detection scheme
US7113221B2 (en) * 2002-11-06 2006-09-26 Broadcom Corporation Method and system for converting interlaced formatted video to progressive scan video
JPWO2004068411A1 (en) * 2003-01-31 2006-05-25 Fujitsu Ltd Average value filter device and filtering method
US7724827B2 (en) * 2003-09-07 2010-05-25 Microsoft Corporation Multi-layer run level encoding and decoding
US7496141B2 (en) * 2004-04-29 2009-02-24 Mediatek Incorporation Adaptive de-blocking filtering apparatus and method for MPEG video decoder
US7388987B2 (en) * 2004-05-28 2008-06-17 Hewlett-Packard Development Company, L.P. Computing dissimilarity measures
US8442108B2 (en) * 2004-07-12 2013-05-14 Microsoft Corporation Adaptive updates in motion-compensated temporal filtering
US8687008B2 (en) * 2004-11-15 2014-04-01 Nvidia Corporation Latency tolerant system for executing video processing operations
JP4586627B2 (en) * 2005-05-18 2010-11-24 Sony Corp Data access device, data access method, program, and recording medium
US7747075B2 (en) * 2005-06-20 2010-06-29 International Business Machines Corporation Salient motion detection system, method and program product therefor
US7787048B1 (en) * 2005-09-08 2010-08-31 Nvidia Corporation Motion-adaptive video de-interlacer
US8218655B2 (en) * 2005-09-19 2012-07-10 Maxim Integrated Products, Inc. Method, system and device for improving video quality through in-loop temporal pre-filtering
US7990471B1 (en) * 2005-10-17 2011-08-02 Texas Instruments Incorporated Interlaced-to-progressive video
US8023041B2 (en) * 2006-01-30 2011-09-20 Lsi Corporation Detection of moving interlaced text for film mode decision
US20090306505A1 (en) * 2006-02-22 2009-12-10 Hideki Yoshikawa Ultrasonic diagnostic apparatus
US8582032B2 (en) * 2006-09-07 2013-11-12 Texas Instruments Incorporated Motion detection for interlaced video
US20090060373A1 (en) * 2007-08-24 2009-03-05 General Electric Company Methods and computer readable medium for displaying a restored image
US8953673B2 (en) * 2008-02-29 2015-02-10 Microsoft Corporation Scalable video coding and decoding with sample bit depth and chroma high-pass residual layers
CN101926175A (en) * 2008-03-07 2010-12-22 Toshiba Corp Dynamic image encoding/decoding method and device
US8711948B2 (en) * 2008-03-21 2014-04-29 Microsoft Corporation Motion-compensated prediction of inter-layer residuals
JP4748191B2 (en) * 2008-07-30 2011-08-17 ソニー株式会社 Motion vector detection apparatus, motion vector detection method, and program
DE102009012441B4 (en) * 2009-03-12 2010-12-09 Deutsches Zentrum für Luft- und Raumfahrt e.V. Method for reducing the memory requirement when determining disparity values for at least two stereoscopically recorded images
US8289447B1 (en) * 2009-07-14 2012-10-16 Altera Corporation Cadence detection for film mode de-interlacing
WO2011086836A1 (en) * 2010-01-12 2011-07-21 Sharp Corp Encoder apparatus, decoder apparatus, and data structure
WO2011126284A2 (en) * 2010-04-05 2011-10-13 Samsung Electronics Co., Ltd. Method and apparatus for encoding video by using adaptive prediction filtering, method and apparatus for decoding video by using adaptive prediction filtering
EP2559240B1 (en) * 2010-04-13 2019-07-10 GE Video Compression, LLC Inter-plane prediction
US9094658B2 (en) * 2010-05-10 2015-07-28 Mediatek Inc. Method and apparatus of adaptive loop filtering
US8861617B2 (en) * 2010-10-05 2014-10-14 Mediatek Inc Method and apparatus of region-based adaptive loop filtering
US20130222539A1 (en) * 2010-10-08 2013-08-29 Dolby Laboratories Licensing Corporation Scalable frame compatible multiview encoding and decoding methods
GB2493396B (en) 2011-08-05 2018-08-08 Snell Advanced Media Ltd Multidimensional sampled-signal filter
US8867825B2 (en) * 2011-08-30 2014-10-21 Thompson Licensing Method and apparatus for determining a similarity or dissimilarity measure
US20130070049A1 (en) * 2011-09-15 2013-03-21 Broadcom Corporation System and method for converting two dimensional to three dimensional video
US10015515B2 (en) * 2013-06-21 2018-07-03 Qualcomm Incorporated Intra prediction from a predictive block


Also Published As

Publication number Publication date
US20170085912A1 (en) 2017-03-23
GB2502047B (en) 2019-06-05
US9532053B2 (en) 2016-12-27
GB201206065D0 (en) 2012-05-16
US20130265499A1 (en) 2013-10-10

Similar Documents

Publication Publication Date Title
EP0540762B1 (en) Method for detecting moving vector and apparatus therefor, and system for processing image signal using the apparatus
Bestagini et al. Detection of temporal interpolation in video sequences
US20170085912A1 (en) Video sequence processing
US20080100750A1 (en) Method for detecting interlaced material and field order
WO2001074072A1 (en) Processing sequential video images to detect image motion among interlaced video fields or progressive video images
EP2064671B1 (en) Method and apparatus for interpolating an image
JP2000261828A (en) Stereoscopic video image generating method
KR20020064440A (en) Apparatus and method for compensating video motions
US8923400B1 (en) Method and/or apparatus for multiple pass digital image stabilization
US8306123B2 (en) Method and apparatus to improve the convergence speed of a recursive motion estimator
JP4213035B2 (en) Occlusion detector and method for detecting an occlusion region
EP1958451B1 (en) Motion vector field correction
US8565309B2 (en) System and method for motion vector collection for motion compensated interpolation of digital video
US20080144716A1 (en) Method For Motion Vector Determination
EP1897376A2 (en) Motion estimation
JPH06326975A (en) Method and equipment for movement compressing type processing of video signal
WO2006003545A1 (en) Motion estimation with video mode detection
FI97663C (en) A method for detecting motion in a video signal
WO2004082294A1 (en) Method for motion vector determination
US8462170B2 (en) Picture attribute allocation
GB2572497A (en) Video sequence processing
US9648339B2 (en) Image processing with segmentation using directionally-accumulated difference-image pixel values
GB2513112A (en) Video sequence processing
WO2001074082A1 (en) Temporal interpolation of interlaced or progressive video images
KR100949137B1 (en) Apparatus for video interpolation, method thereof and computer recordable medium storing the method