GB2333413A - Moving image restoration - Google Patents
Moving image restoration
- Publication number
- GB2333413A (application GB9801186A)
- Authority
- GB
- United Kingdom
- Prior art keywords
- motion
- motion vectors
- video
- video signal
- block
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/14—Picture signal circuitry for video frequency region
- H04N5/21—Circuitry for suppressing or minimising disturbance, e.g. moiré or halo
Abstract
Archived or other defective video signals are restored in two processing rounds. First, global or block motion vectors are identified and used to remove unsteadiness and brightness flicker. Then, motion vectors are identified in the steadied and flicker-free signal and assigned to pixels. These pixel motion vectors are then used in motion compensated noise reduction, scratch removal and dirt concealment.
Description
MOVING IMAGE RESTORATION
This invention relates to processes and apparatus for the restoration of moving images and in the most important example to the restoration of video archive material.
There exists a very considerable amount of material in video archives and there are commercial imperatives in making as much as possible of this material available for broadcast or other distribution. Archive material tends, however, to suffer from a range of picture quality defects which are visually unacceptable when judged against current display standards. There exists therefore a real need for video archive restoration.
Manual techniques exist for the correction of archive defects and for certain defects - which are generally rare - manual correction is the optimum solution. It is neither practicable nor economic, however, for entire archives to be restored manually. A degree of automation is essential. The volume of material requiring restoration demands that the bulk of defects are corrected automatically and at rates which are real-time or close to real-time. It is then possible for sufficient time to be devoted to the manual repair of the most heavily damaged sections.
The categories of defects that affect video archives vary considerably in nature and in the strategies required for their detection and correction. The defects include:
Dirt
Sparkle
Video drop outs
Noise
Film grain
Film and video scratches
Unsteadiness
Brightness flicker
Accurate restoration will in the case of many of these defects demand temporal processing with motion estimation. This will enable advantage to be taken of the temporal redundancy in successive images and will benefit the correction of both impulsive events (such as dirt, sparkle and dropouts) and continuous distortions (such as noise). The video processing burden of motion estimation is high, however, and generally too high for reliance to be placed on general purpose data or video processors. Dedicated hardware is necessary for the more intensive processing functions. Practical and cost restraints demand, however, that the hardware is as simple as possible and is employed as efficiently as possible, consistent always with maintaining the highest levels of performance.
It is an object of certain aspects of the present invention to provide improved processes and apparatus for use in the restoration of video archive or other moving image material.
Accordingly, the present invention consists in one aspect in a process for the restoration of a defective video signal, comprising the steps of: identifying large area motion vectors; utilising said large area motion vectors in a first processing round to remove unsteadiness from the video signal and thereby generate a steadied video signal; identifying and assigning pixel motion vectors in the steadied video signal; and, in a second processing round, conducting motion compensated noise reduction.
Advantageously, the first processing round further comprises removal of brightness flicker, and the second processing round further comprises one or both of scratch removal and dirt concealment.
In a preferred example, the first block will perform flicker and unsteadiness correction. Considerable hardware economies can be made through the realisation that both flicker and unsteadiness can be corrected with global (or at least block-based) motion vectors. The assignment of motion vectors to individual pixels is therefore not required. This allows a very significant reduction in the motion estimation hardware. Video corrected for flicker and unsteadiness is then passed to a second processing block which performs scratch removal, dirt concealment and noise reduction. The corrections are applied in parallel, in that detection of both scratches and dirt is conducted on the signal entering the processing block. Spatial and temporal processing is conveniently separated with a spatially restored video flow being presented to a temporal processing unit.
The arrangements according to the preferred embodiments of this invention provide a number of important advantages.
Providing unsteadiness correction in a first processing round, with a dedicated motion estimator, allows the independent selection of modes (such as field and frame mode) in motion estimation for unsteadiness correction, with no regard to the motion estimation requirements for scratch, dirt and noise removal.
The saving in hardware through the avoidance of vector assignment in the first processing round has already been noted. Moreover, since motion measurement in the second processing block is conducted on a video signal which has been steadied, there is no need for correction of those motion vectors to compensate for unsteadiness offsets.
Conducting flicker removal in the first processing round means that the assignment of vectors in the second motion estimation process is more reliable.
The matching process will not (or is much less likely to) be confused by the unnatural variation in brightness levels that is flicker.
The invention will now be described by way of example with reference to the accompanying drawings, in which:
Figure 1 is a block diagram of apparatus according to this invention for video archive restoration;
Figure 2 is a block diagram showing in more detail the first processing block of the apparatus of Figure 1;
Figure 3 is a block diagram showing in more detail the part of the apparatus of Figure 2 designated there by a dotted outline;
Figure 4 is a diagram illustrating the function of the flicker removal unit of the apparatus shown in Figure 2;
Figure 5 is a block diagram showing in more detail the second processing block of the apparatus of Figure 1;
Figure 6 is a block diagram showing in more detail the motion estimator of the apparatus of Figure 5; and
Figure 7 is a block diagram showing in more detail the spatial filter and motion compensation of the apparatus of Figure 5.
Referring initially to Figure 1, an incoming video signal undergoes motion estimation in block 102 and the video signal, together with a motion vector signal, is presented to an unsteadiness and flicker removal block 104, the operation of which will be discussed in more detail below. What should be stressed here is that no attempt is made to assign the motion vectors in block 104 to pixels; vectors are taken as global in character or, at most, restricted to block level. The steadied video signal then undergoes a second motion estimation process in block 106, with motion vectors and the steadied video signal passing to a scratch removal, dirt concealment and noise reduction block 108. This block includes a motion compensated process by which motion compensated images are made available for temporal processing.
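By way of illustration only, the two-round structure of Figure 1 can be sketched in Python/NumPy. The whole-frame phase correlation, the global mean-and-variance equalisation and the plain recursive average used below are simplified stand-ins for blocks 102 to 108, and all function names and coefficients are assumptions made for this sketch rather than features of the apparatus.

```python
import numpy as np

def whole_frame_shift(prev, cur):
    # Coarse stand-in for the first motion estimator (block 102): phase
    # correlation of the whole frame yields a single global vector.
    cross = np.fft.fft2(prev) * np.conj(np.fft.fft2(cur))
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-9)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    dy -= corr.shape[0] if dy > corr.shape[0] // 2 else 0   # wrap to signed range
    dx -= corr.shape[1] if dx > corr.shape[1] // 2 else 0
    return int(dy), int(dx)                                  # shift that re-aligns cur onto prev

def restore(frames):
    """Two-round sketch operating on a list of greyscale frames (2-D arrays)."""
    # First round: steady and de-flicker using global measurements only.
    steadied = [frames[0].astype(float)]
    for cur in frames[1:]:
        ref = steadied[-1]
        dy, dx = whole_frame_shift(ref, cur.astype(float))
        cur = np.roll(cur.astype(float), (dy, dx), axis=(0, 1))                   # re-position
        cur = (cur - cur.mean()) * (ref.std() / (cur.std() + 1e-9)) + ref.mean()  # de-flicker
        steadied.append(cur)
    # Second round: the apparatus assigns per-pixel vectors here; a plain
    # recursive temporal average stands in for the motion compensated cleaning.
    out, acc = [], steadied[0]
    for cur in steadied:
        acc = 0.5 * acc + 0.5 * cur
        out.append(acc.copy())
    return out
```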
Before discussing in more detail the function of processing block 104, it may be helpful to review briefly the nature of the defects to be corrected.
Two forms of unsteadiness are corrected for in apparatus according to this embodiment of the invention: hop/weave unsteadiness and twin lens unsteadiness. Hop/weave unsteadiness arises from a variety of sources including camera shake, sprocket hole damage, printing mis-registration and mechanical misalignments in telecine equipment used for the conversion of cinematographic film to video. A further problem arises with so-called twin-lens telecine equipment in which separate optical paths are provided for each of the two interlaced video fields to be derived from a single film frame. Misalignment between these two optical paths can lead to severe horizontal and vertical vibrations at the television frame rate, sometimes referred to as twitter. Although twin-lens telecine equipment is no longer in use, a considerable amount of film-originating video archive is believed to have been converted using this technology.
Hop/weave unsteadiness and twitter can be very unsettling to a viewer, particularly in cases where film-originating video is displayed alongside "true" video or electronically generated graphics or captions.
Image flicker is a common artefact in old film sequences. It can be defined as unnatural temporal fluctuations in perceived image intensity whether globally or over regions of the picture. Not only is image flicker disturbing to the viewer, it may also hamper motion estimation or - more particularly - the pixel assignment of motion vectors. Image flicker can have a number of sources including ageing of film stock, dust, and variations in chemical or optical processing.
Reference is now directed to Figure 2. The input video signal is received in pre-processor 202, which performs a number of operations, including interfacing, aperture correction, and synchronisation. It is advantageous for the pre-processor also to have a video analysis function, in which it detects scene changes, monitors whether the video is film originated and, if so, detects the phase of any 2:2 or 3:2 pulldown sequence. The pre-processor may also take measurements to set a noise floor for later processing and may itself conduct some initial processing such as echo concealment and programmable horizontal filtering.
The video format and control block 204 serves to pre-filter and format the video signal supplied to the motion estimation block 206 which, in the preferred example, employs phase correlation. Thus, luminance video is vertically and horizontally filtered, sub-sampled and formatted into 128 by 64 overlapping blocks. The video format and control block 204 also provides a video output to the unsteadiness removal block 208, this video signal being delayed so as to be co-timed with the vector output from the motion estimation block 206. Additionally, the video format and control block 204 sends and receives control data to and from the other blocks in the diagram and interfaces - where appropriate - with external equipment.
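The block formatting step might be sketched as follows. This is a minimal sketch assuming a greyscale luminance array; the 128 by 64 block size follows the description, while the 2:1 overlap factor and the edge padding are assumptions made for illustration.

```python
import numpy as np

def format_blocks(luma, block_h=64, block_w=128, overlap=2):
    """Tile a filtered, sub-sampled luminance field into overlapping blocks.

    Returns an array of shape (rows, cols, block_h, block_w)."""
    step_y, step_x = block_h // overlap, block_w // overlap
    pad_y = (-(luma.shape[0] - block_h)) % step_y   # pad so the last block fits
    pad_x = (-(luma.shape[1] - block_w)) % step_x
    luma = np.pad(luma, ((0, pad_y), (0, pad_x)), mode="edge")
    ys = range(0, luma.shape[0] - block_h + 1, step_y)
    xs = range(0, luma.shape[1] - block_w + 1, step_x)
    return np.array([[luma[y:y + block_h, x:x + block_w] for x in xs] for y in ys])
```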
The preferred motion estimation block 206 performs a two dimensional FFT on the block-formatted video and correlates against the 2D FFT data generated from the previous field. The correlation data is subject to an inverse 2D FFT prior to peak hunting and interpolation to derive one or more motion vectors for each block, to a sub-pixel accuracy. The height of the associated peak is also measured as an indicator of confidence in the motion vector. The block based motion vectors are further processed to extract global motion vectors and to derive control parameters for the unsteadiness correction (covering both film unsteadiness and twin-lens twitter).
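The per-block phase correlation step might be sketched as below. The Hann window and the three-point parabolic peak interpolation are common choices assumed here for illustration, not details taken from the apparatus.

```python
import numpy as np

def phase_correlate_block(block_prev, block_cur):
    """Motion measurement for one block pair: sub-pixel shift plus peak height.

    Returns the shift that re-aligns the current block onto the previous one,
    together with the correlation peak height as a confidence indicator."""
    win = np.hanning(block_prev.shape[0])[:, None] * np.hanning(block_prev.shape[1])[None, :]
    cross = np.fft.fft2(block_prev * win) * np.conj(np.fft.fft2(block_cur * win))
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-9)).real
    py, px = np.unravel_index(np.argmax(corr), corr.shape)
    height = corr[py, px]                      # confidence indicator

    def parabolic(c, i):                       # sub-pixel refinement along one axis
        n = len(c)
        a, b, d = c[(i - 1) % n], c[i], c[(i + 1) % n]
        denom = a - 2.0 * b + d
        return i + (0.5 * (a - d) / denom if denom != 0 else 0.0)

    dy = parabolic(corr[:, px], py)
    dx = parabolic(corr[py, :], px)
    dy = dy - corr.shape[0] if dy > corr.shape[0] / 2 else dy   # wrap to signed range
    dx = dx - corr.shape[1] if dx > corr.shape[1] / 2 else dx
    return dy, dx, height
```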
As can be seen more clearly in Figure 3, global vectors from a phase correlation unit 304 are passed to a weave analysis unit 308. It should be explained that the phase correlation unit 304 switches between a frame mode in which it compares frames or corresponding fields from two frames, and a field mode in which it compares the two fields from a single frame. The global vectors are frame based. Weave analysis unit 308 serves to avoid the correction of intentional camera pan and tilt, to prevent motion vectors from exceeding a set measurement range and to ensure that the accumulated control signal converges to zero in the absence of motion. Global motion due to unsteadiness is distinguished from "real" motion through consideration of temporal frequency.
"Real" global motion will generally be smoothly varying with time, whilst unsteadiness will generally have high temporal frequencies.
Block based vectors, in the field mode of the motion estimation, are provided to a reliability control unit 306 and then to a model fitting unit 310.
Since the fields under comparison have been derived from the same film frame, any motion vector can be attributed to a distortion. The model fitting unit 310 attempts to fit a linear transformation to each field in such a way as to remove the assumed distortion and make equal the two fields of the frame. In an interpolate parameters block 312, the results of the weave analysis and the coefficients of the linear transformations are used to derive a re-positioning map which is applicable to an entire frame. This map effectively comprises one vector per pixel. Using this map, the frame is then re-positioned in block 314.
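A least-squares affine fit is one plausible realisation of the model fitting and map generation, sketched below; the affine form of the "linear transformation" and the (x, y) conventions are assumptions of this sketch.

```python
import numpy as np

def fit_field_transform(block_centres, block_vectors):
    """Least-squares fit of an affine model v(x, y) = A @ [x, y] + t to the
    block-based vectors measured between the two fields of one frame.
    block_centres and block_vectors are (N, 2) arrays of (x, y) pairs."""
    X = np.hstack([np.asarray(block_centres, float), np.ones((len(block_centres), 1))])
    params, *_ = np.linalg.lstsq(X, np.asarray(block_vectors, float), rcond=None)
    return params                                   # shape (3, 2)

def repositioning_map(params, height, width):
    """Per-pixel displacement map (one vector per pixel) from the fitted model."""
    ys, xs = np.mgrid[0:height, 0:width]
    coords = np.stack([xs, ys, np.ones_like(xs)], axis=-1).astype(float)  # (H, W, 3)
    return coords @ params                          # (H, W, 2) giving (dx, dy)
```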
The steadied video signal passes to the flicker removal block 210 shown in Figure 2. The operation of this block will now be described with reference to the more detailed block diagram of Figure 4.
The amount and distribution of flicker is estimated using the guidelines that flicker is generally of higher frequency than actual luminance and chrominance variation and that flicker is limited in range. The approach taken is to equalise, locally, the mean and the variance in a temporal sense.
Operating on a low pass filtered signal, mean and variance values are computed for overlapping blocks and compared with values from the previous field or frame. From this comparison, an intensity flicker gain parameter and an intensity flicker offset parameter are derived. The parameters will not be valid in regions in which there is local motion. Accordingly, for those regions in which motion is detected, the parameter values from surrounding stationary regions are employed. The resulting arrays of parameters are smoothed in a filter and then up-sampled to full frame resolution using bi-linear interpolation.
Each individual pixel of the frame is then corrected using the gain and offset parameters to provide the flicker removed output. In the undelayed path to the low pass filter and the compute-variance-and-mean block, a pan compensation unit is provided. This utilises global motion vectors from the phase correlation unit 304.
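The gain-and-offset correction can be sketched as follows. The block size is illustrative, and the motion masking, parameter smoothing and bi-linear up-sampling described above are omitted for brevity: each pixel here simply takes the parameters of its own block.

```python
import numpy as np

def flicker_params(cur_block, ref_block, eps=1e-6):
    """Gain and offset mapping the current block's local statistics onto those
    of the previous field or frame: corrected = gain * pixel + offset."""
    gain = ref_block.std() / (cur_block.std() + eps)
    offset = ref_block.mean() - gain * cur_block.mean()
    return gain, offset

def correct_flicker(cur, ref, block=32):
    """Blockwise temporal equalisation of mean and variance (simplified)."""
    out = cur.astype(float).copy()
    h, w = cur.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            cb = cur[y:y + block, x:x + block].astype(float)
            rb = ref[y:y + block, x:x + block].astype(float)
            g, o = flicker_params(cb, rb)
            out[y:y + block, x:x + block] = g * cb + o
    return out
```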
A description will now be given of the second processing block in Figure 1, which serves to remove scratches, conceal dirt and reduce noise. Reference is directed initially to Figure 5.
The steadied and flicker reduced video signal is received in 4:2:2 format in pre-processor 502. This serves broadly the same function as pre-processor 202 shown in Figure 2. The pre-processed signal then passes to scratch detection block 504. Film scratches are detected by looking for horizontal discontinuities with a large vertical and temporal correlation. Helical video scratches are detected by looking for vertical discontinuities with a medium horizontal correlation and large temporal correlation. Quadruplex scratches are detected by looking for vertical discontinuities with a small horizontal correlation, medium temporal correlation and a characteristic periodicity.
Primary scratch detection is conducted prior to the motion estimator 506 so that the results of scratch detection can be used for motion vector repair (or more robust motion estimation). The scratch key signal derived in block 504 is carried forward, through the motion estimator 506, so that the scratch can be repaired in the motion compensation unit 510. Wherever possible, scratches will be repaired by replacement from the motion compensated previous field or frame. Wherever the required part of the previous frame is invalid, due to revealed background or a shot change, spatial interpolation will be used.
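For the film-scratch case described above, a detector of this kind might be sketched as follows. The second-difference measure, the thresholds and the whole-column key are assumptions made for illustration, not the apparatus's actual detector.

```python
import numpy as np

def film_scratch_key(cur, prev, amp_thresh=20.0, frac=0.6):
    """Binary key marking thin vertical film scratches: columns whose
    horizontal second difference is large over most of their height (vertical
    correlation) in both the current and previous frames (temporal
    correlation)."""
    def column_score(frame):
        frame = frame.astype(float)
        d2 = np.abs(2.0 * frame[:, 1:-1] - frame[:, :-2] - frame[:, 2:])  # horizontal discontinuity
        return (d2 > amp_thresh).mean(axis=0)          # fraction of rows flagged, per column
    columns = (column_score(cur) > frac) & (column_score(prev) > frac)
    key = np.zeros(cur.shape, dtype=bool)
    key[:, 1:-1] = columns[None, :]                    # mark the full height of each column
    return key
```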
The function of motion estimator 506 is more easily described with reference to Figure 6 which illustrates the content of block 506 in more detail.
Video format and control unit 602 serves to provide filtered and formatted video data to the phase correlation unit 604. In particular, unit 602 provides a first signal which is vertically and horizontally filtered and sub-sampled luminance formatted into 128 by 64 overlapping blocks and a second signal which is horizontally and vertically low pass filtered video. Additionally, the video format and control unit 602 provides a delayed video output for connection with the spatial filter 508, this delayed video signal being co-timed with the vector bus output from the vector output processor 610. The video format and control block 602 also sends and receives control data to and from the other blocks and interfaces - where appropriate - with external equipment.
Phase correlation unit 604 performs a two dimensional FFT on the block-formatted video and correlates against the 2D FFT data generated from the previous field. The correlation data is subject to an inverse 2D FFT prior to peak hunting and interpolation to derive one or more motion vectors for each block, to a sub-pixel accuracy. The height of the associated peak is also measured as an indicator of confidence in the motion vector. A vector bus is provided in tandem to a forward assignment unit 606 and a backward assignment unit 608. The phase correlation unit 604 also passes on to both the assignment units 606, 608 and the vector output processor 610 the delayed, sub-sampled video signal generated in video format and control block 602.
The forward assignment unit 606 and the backward assignment unit 608 operate to assign vectors using the candidate vectors supplied from the phase correlation unit and serve also to generate error signals. The forward and backward vectors, together with associated error signals, are passed in the form of video streams to the vector output processor 610.
It is the function of the vector output processor 610 to generate assigned forward and backward vector and error signals from the vector and error signals supplied by the forward and backward assignment units 606 and 608. The error signals are used to determine which vectors are used prior to three dimensional median filtering. Global vectors may be substituted for the forward and backward vectors if the error signals are high. Unreliable vectors can also be replaced by projection from preceding or succeeding frames or by spatial interpolation across small areas. It is also possible to apply a constraint of local motion smoothness which must be satisfied by vectors, in addition to the requirement for low match error.
The vector output processor 610 receives as an input the scratch key from scratch detection unit 504. This is used to identify vectors in need of repair.
Analysis of error signals also enables the vector output processor to generate for subsequent use a "large dirt" key.
The outputs of the vector output processor 610 are a combined key signal, a confidence signal and a vector bus containing processed vectors ready for use in picture building.
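A plausible, simplified selection rule for the vector processing described above is sketched below: take whichever assignment has the lower match error, substitute the global vector where both errors are high, and median-filter the result spatially. The error threshold, the 3x3 filter size and the purely spatial median (the three dimensional median described above also spans time) are assumptions of the sketch.

```python
import numpy as np
from scipy.ndimage import median_filter

def select_vectors(fwd, fwd_err, bwd, bwd_err, global_vec, err_thresh=8.0):
    """Per-pixel choice between forward and backward assigned vectors.

    fwd, bwd are (H, W, 2) vector fields with (H, W) match errors fwd_err and
    bwd_err; global_vec is a length-2 fallback vector."""
    use_bwd = bwd_err < fwd_err
    chosen = np.where(use_bwd[..., None], bwd, fwd).astype(float)
    chosen_err = np.where(use_bwd, bwd_err, fwd_err)
    # Substitute the global vector wherever neither assignment matched well.
    chosen[chosen_err > err_thresh] = np.asarray(global_vec, dtype=float)
    # Median filter each vector component to suppress isolated bad vectors.
    for c in range(2):
        chosen[..., c] = median_filter(chosen[..., c], size=3)
    return chosen
```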
Reference is now directed to Figure 7 which shows in more detail the spatial filter 508 and motion compensation 510 of Figure 5.
The approach adopted in this portion of the apparatus is to generate a spatially filtered signal and to provide this, along with a number of further signals from earlier and later in time, to an arbiter which will select between or blend its inputs in accordance with information from various sources.
In more detail, the delayed video signal from the video format and control unit 602 is received by a spatial filter 702, the output of which is provided through field/frame delay 706 to arbiter 714. It should be explained that processing can operate alternatively in field or frame mode and the delays are switched accordingly. For convenience, fields are referred to in the following description. The arbiter 714 receives the current field through delay 704 (which matches the delay of the spatial filter 702) and field delay 708. The arbiter 714 also receives two motion compensated fields. One of these motion compensated fields is termed the next field and, through the lack of a field delay matching delays 706 and 708, is a field in advance of the current field. This field is motion compensated using forward pointing vectors in the "future/back" projection unit 710. The second motion compensated field is a recursively filtered field. For the recursive loop, a modified output field is created which is the actual output of the arbiter less the "next field" input. This avoids a possibly confusing and unnecessary combination of both forwardly and backwardly motion compensated fields.
The arbiter provides recursive noise reduction. In contrast to conventional motion adaptive recursive noise reduction, where recursion is simply turned off in the presence of motion, use is made here of a motion compensated recursive store. It should therefore be necessary to turn off the recursion only in the case of shot changes and revealed background. The spatially filtered signal will be included in the temporal averaging, either continuously or as a fallback when the temporal noise reduction is turned off. To ensure the availability of a spatially filtered signal to contribute to the recursive signal, cross-fader 716 enables a contribution from the spatially filtered signal to be added to the current field.
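The arbiter's recursive mixing might be sketched as follows. The recursion coefficient, the cross-fade weight and the per-pixel mismatch test standing in for shot-change and revealed-background detection are all illustrative assumptions rather than details of the apparatus.

```python
import numpy as np

def recursive_noise_reduce(cur, cur_spatial, mc_store, k=0.5,
                           fail_thresh=30.0, spatial_mix=0.3):
    """One field of motion compensated recursive noise reduction.

    cur         : current noisy field
    cur_spatial : spatially filtered version of the current field
    mc_store    : recursive store, already motion compensated onto this field"""
    cur, cur_spatial, mc = (a.astype(float) for a in (cur, cur_spatial, mc_store))
    # Cross-fade a spatial contribution into the current field (cross-fader 716).
    cur_mixed = (1.0 - spatial_mix) * cur + spatial_mix * cur_spatial
    temporal = k * cur_mixed + (1.0 - k) * mc          # motion compensated recursion
    # Where the store does not match, fall back to the spatially filtered field.
    mismatch = np.abs(cur - mc) > fail_thresh
    out = np.where(mismatch, cur_spatial, temporal)
    return out                                         # also feeds the new recursive store
```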
Claims (7)
- 1. A process for the restoration of a defective video signal, comprising the steps of: identifying large area motion vectors; utilising said large area motion vectors in a first processing round to remove unsteadiness from the video signal and thereby generate a steadied video signal; identifying and assigning pixel motion vectors in the steadied video signal; and, in a second processing round, conducting motion compensated noise reduction.
- 2. A process according to Claim 1, wherein the first processing round further comprises removal of brightness flicker.
- 3. A process according to Claim 2, wherein the removal of brightness flicker utilises said large area motion vectors.
- 4. A process according to any one of Claims 1 to 3, wherein the second processing round further comprises one or both of scratch removal and dirt concealment.
- 5. A process for removing noise from a video signal comprising the steps of generating a current picture signal, a spatially noise reduced current picture signal and a motion compensated recursively filtered signal and selecting between or mixing said signals.
- 6. A process according to Claim 5, wherein said selecting between or mixing is conducted in accordance with information relating to at least one of: reliability, motion vectors, shot changes and revealed background.
- 7. A process according to Claim 5 or Claim 6, where there is additionally made available a motion compensated next picture signal.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB9801186A GB2333413B (en) | 1998-01-20 | 1998-01-20 | Moving image restoration |
PCT/GB1999/000180 WO1999037087A1 (en) | 1998-01-20 | 1999-01-20 | Moving image restoration |
EP99901750A EP1051842A1 (en) | 1998-01-20 | 1999-01-20 | Moving image restoration |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB9801186A GB2333413B (en) | 1998-01-20 | 1998-01-20 | Moving image restoration |
Publications (3)
Publication Number | Publication Date |
---|---|
GB9801186D0 GB9801186D0 (en) | 1998-03-18 |
GB2333413A true GB2333413A (en) | 1999-07-21 |
GB2333413B GB2333413B (en) | 2002-05-15 |
Family
ID=10825585
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB9801186A Expired - Fee Related GB2333413B (en) | 1998-01-20 | 1998-01-20 | Moving image restoration |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP1051842A1 (en) |
GB (1) | GB2333413B (en) |
WO (1) | WO1999037087A1 (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE10009585B4 (en) * | 2000-02-29 | 2008-03-13 | Bts Holding International B.V. | Video-technical device |
US8078002B2 (en) | 2008-05-21 | 2011-12-13 | Microsoft Corporation | Matte-based video restoration |
JP5111315B2 (en) * | 2008-09-24 | 2013-01-09 | キヤノン株式会社 | Image processing apparatus, image processing method, and program |
US8488007B2 (en) | 2010-01-19 | 2013-07-16 | Sony Corporation | Method to estimate segmented motion |
US8285079B2 (en) | 2010-03-19 | 2012-10-09 | Sony Corporation | Method for highly accurate estimation of motion using phase correlation |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
SE469412B (en) * | 1992-04-13 | 1993-06-28 | Dv Sweden Ab | MAKE ADAPTIVE ESTIMATES UNUSUAL GLOBAL IMAGE INSTABILITIES IN IMAGE SEQUENCES IN DIGITAL VIDEO SIGNALS |
EP0735746B1 (en) * | 1995-03-31 | 1999-09-08 | THOMSON multimedia | Method and apparatus for motion compensated frame rate upconversion |
- 1998-01-20: GB application GB9801186A filed; patent granted as GB2333413B, now expired (fee related)
- 1999-01-20: PCT application PCT/GB1999/000180 filed (published as WO1999037087A1; application discontinued)
- 1999-01-20: EP application EP99901750A filed (published as EP1051842A1; withdrawn)
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2251353A (en) * | 1990-10-08 | 1992-07-01 | Broadcast Television Syst | Noise reduction |
GB2264414A (en) * | 1992-02-12 | 1993-08-25 | Sony Broadcast & Communication | Motion compensated noise reduction |
GB2305803A (en) * | 1995-09-30 | 1997-04-16 | Philips Electronics Nv | Correcting picture steadiness errors in telecine scanning |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2356514A (en) * | 1999-09-09 | 2001-05-23 | Pandora Int Ltd | Film defect correction |
GB2356514B (en) * | 1999-09-09 | 2004-04-07 | Pandora Int Ltd | Film restoration system |
US7012642B1 (en) | 1999-09-09 | 2006-03-14 | Pandora International Limited | Method for adjusting digital images to compensate for defects on film material |
GB2366113A (en) * | 2000-06-28 | 2002-02-27 | Samsung Electronics Co Ltd | Correcting a digital image for camera shake |
GB2366113B (en) * | 2000-06-28 | 2002-12-04 | Samsung Electronics Co Ltd | Decoder having digital image stabilization function and digital image stabiliz ation method |
US7010045B2 (en) | 2000-06-28 | 2006-03-07 | Samsung Electronics Co., Ltd. | Decoder having digital image stabilization function and digital image stabilization method |
WO2002032118A1 (en) * | 2000-10-06 | 2002-04-18 | Bts Holding International Bv | Device for correcting still image errors in a video signal |
US7116720B2 (en) * | 2000-10-06 | 2006-10-03 | Thomson Licensing | Device for correcting still image errors in a video signal |
EP3007426A1 (en) * | 2014-10-08 | 2016-04-13 | Thomson Licensing | Method and apparatus for detecting defects in digitized image sequences |
EP3007425A1 (en) * | 2014-10-08 | 2016-04-13 | Thomson Licensing | Method and apparatus for detecting defects in digitized image sequences |
Also Published As
Publication number | Publication date |
---|---|
EP1051842A1 (en) | 2000-11-15 |
GB2333413B (en) | 2002-05-15 |
WO1999037087A1 (en) | 1999-07-22 |
GB9801186D0 (en) | 1998-03-18 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PCNP | Patent ceased through non-payment of renewal fee | Effective date: 20020815 |