GB2370932A - Reduction in defect visibility in image signals


Info

Publication number: GB2370932A
Authority: GB (United Kingdom)
Prior art keywords: local, data, defect, replacement, frame
Legal status: Withdrawn
Application number: GB0100518A
Other versions: GB0100518D0 (en)
Inventor: Sarah Witt
Current Assignee: Sony Europe Ltd
Original Assignee: Sony United Kingdom Ltd
Application filed by Sony United Kingdom Ltd
Priority to GB0100518A
Publication of GB0100518D0
Publication of GB2370932A

Classifications

    • H04N5/147 Scene change detection
    • H04N5/21 Circuitry for suppressing or minimising disturbance, e.g. moiré or halo
    • H04N5/253 Picture signal generating by scanning motion picture films or slide opaques, e.g. for telecine
    • H04N5/145 Movement estimation
    • H04N5/20 Circuitry for controlling amplitude response
    • H04N9/64 Circuits for processing colour signals

Abstract

The apparatus is for reducing the visibility of defects in a video image represented by frames of original image data. The apparatus includes a motion compensator for producing motion compensated frames of image data. A differencing arrangement produces a) a forwards local difference signal representing the local differences between local image portions of a current frame and a motion compensated succeeding frame, and b) a backwards local difference signal representing the local differences between local image portions of a current frame and a motion compensated preceding frame. A comparison arrangement compares the forwards and backwards local difference signals with thresholds to produce local comparison signals. A processor processes the said current and motion compensated succeeding and preceding frames to produce replacement data, the processor being responsive to the local comparison signals to not base processing on local portions of the preceding and succeeding frames for which the local difference signal exceeds the said threshold. A defect detector produces a defect indicating signal and a defect replacer, having an input for receiving the replacement data, replaces a detected defect with replacement data. A checking processor checks replacement data against at least one reference criterion to produce a replacement signal indicating whether or not the replacement data is suitable for use in place of the original image data. An expander expands the defect indicating signal, and the defect replacer replaces the original image data indicated by the expanded defect indicating signal with the replacement data.

Description

BLOTCH REMOVAL
The present invention relates to noise reduction in telecine systems.
Telecine systems are apparatus designed to produce film-to-video transfers.
The book"Motion Picture Restoration"by A. C Kokaram ISBN 3-54076040-7 Springer Verlag London Limited 1998, discloses various techniques for reducing noise in image sequences, especially in Chapter 10.
Figure 1 illustrates the use of a telecine machine in a video making process. A film camera 100 records images on acetate-based film that has three layers of silver halide emulsion sensitised to red, green and blue respectively. Once the film has been exposed it is processed to produce a film negative 200. The film negative 200 is inserted into a telecine machine 300 which transfers the recorded material on the negative to a videotape 400. The telecine machine includes a noise reduction unit 900 that consists of hardware and software dedicated to reducing the amount of noise present in the film negative that is transferred to the video copy 400. The video 400 can be played on a videotape recorder 500.
The video image may be edited and then transferred to film. For example the video may be edited "off-line" on a computer-based digital non-linear editing apparatus 600. The non-linear editing system has the flexibility of allowing footage to be edited starting at any point in the recorded sequence. The images used for digital editing are often a reduced resolution copy of the original source material. Digital editing of the images from the videotape 400 is performed and an edit decision list (EDL) is produced. The EDL is a file that identifies edit points by their timecode addresses and thus contains all the instructions for editing the programme. The EDL is then used to transfer the edit decisions to an on-line editing system where the film negative is cut according to the EDL to produce a high-resolution broadcast quality copy 700 of the edited video footage. Then release prints are made from the final negative 700 and supplied to a film distribution network 800.
Apart from its use in the film production process, the telecine has wider applicability in terms of transferring the final negative 700 to videotape for general distribution. A telecine such as Sony's Vialta (FVS-1000) is capable of
producing digital video copies of a film original at various resolutions ranging from 525/625, which is suitable for use on standard definition television (SDTV), through to resolutions appropriate for high definition television (HDTV). The input film media that is converted to video in the telecine can typically be selected from all types of 16mm and 35mm film stock, both negative and positive.
In order to take full advantage of the high compression and limited bandwidth of digital television transmission, high picture quality is necessary. Image quality obtained by producing video directly from a film original can be degraded by graininess resulting from undeveloped photosensitive chemical remaining on the film, blotches due to deposits of dust or dirt and scratches due to wear and tear on the film.
By performing noise reduction on the images read from the film prior to recording them in video format, the visual quality of the final video images can be improved.
Furthermore, reduced levels of noise mean that higher image compression ratios can be achieved.
The present invention is directed to the removal of blotches. Blotches are artefacts appearing on a video frame that obscure pixels of the original recorded image and thus degrade the image quality. The blotches can be caused by foreign bodies such as dust, dirt or hair that adhere to the film. This often happens as the film passes through the transport mechanism of the telecine machine during a film to video transfer process.
Summary of the Invention.
It is known to replace a defect in an image by replacement data. However such replacement may reduce the image quality by as much as, if not more than, the defect. It is desirable to reduce the visibility of image defects whilst not reducing the overall image quality.
According to a first aspect of the present invention, there is provided apparatus for reducing the visibility of defects in a video image represented by frames of original image data, the apparatus comprising: a motion compensator for producing motion compensated frames of image data;
a differencing arrangement for producing a) a forwards local difference signal representing the local differences between local image portions of a current frame and a motion compensated succeeding frame, and b) a backwards local difference signal representing the local differences between local image portions of a current frame and a motion compensated preceding frame; a comparison arrangement for comparing the forwards and backwards local difference signals with thresholds to produce local comparison signals; a processor for processing the said current and motion compensated succeeding and preceding frames to produce replacement data, the processor being responsive to the local comparison signals to not base processing on local portions of the preceding and succeeding frames for which the local difference signal exceeds the said threshold; a defect detector; and a defect replacer, having an input for receiving the replacement data, for replacing a detected defect with replacement data.
The production of replacement data involves processing the original image data. The processing in this aspect involves processing image data from the current, preceding and succeeding frames to produce replacement data for replacing a defect in the current frame. However that assumes the image data in the current frame and in the preceding and/or succeeding frames is correlated.
Scene changes or other image changes typically occur within video sequences and result in a lack of correlation between adjacent frames. In this aspect of the invention, such changes are detected by comparing the forwards and backwards difference signals with the said thresholds. The processing for producing replacement data is responsive to the detected image changes so that it does not use uncorrelated data.
Moreover, this aspect uses local changes. Even where a scene change occurs, parts of the image may remain correlated from frame to frame. For example, adjacent frames containing image data from different scenes may both include sky or other local image areas which are correlated. In this aspect of the invention, replacement data is based on such local areas allowing replacement data most
appropriate to that area to be used. Thus, for example, replacement data produced by temporal processing of uncorrelated image data is not used.
This aspect of the invention also provides a method of reducing the visibility of defects in a video image represented by frames of original image data, the method comprising the steps of: producing motion compensated frames of image data; producing a) a forwards local difference signal representing the local differences between local image portions of a current frame and a motion compensated succeeding frame, and b) a backwards local difference signal representing the local differences between local image portions of a current frame and a motion compensated preceding frame; comparing the forwards and backwards local difference signals with thresholds to produce local comparison signals; processing the said current and motion compensated succeeding and preceding frames to produce replacement data, the processing being responsive to the local comparison signals to not base processing on local portions of the preceding and succeeding frames for which the local difference signal exceeds the said threshold; detecting a defect; and receiving the replacement data, and replacing a detected defect with replacement data.
This first aspect of the invention also provides a computer program product comprising instructions which, when run on a data processing system, implement the method of this aspect.
According to a second aspect of the invention there is provided an apparatus for reducing the visibility of defects in a video image represented by frames of original image data, the apparatus comprising: a defect detector for producing a signal indicating a defect; a signal processor for producing, from the image data, replacement data for replacing a defect; a checking processor for checking the replacement data against at least one reference criterion to produce a replacement signal indicating whether or not the replacement data is suitable for use in place of the original image data; and
a defect replacer for replacing original image data indicated by the defect indicating signal provided such replacement is indicated by the replacement signal.
In some circumstances simply replacing a defect even with data generated from the image may reduce the subjective image quality by as much as, or more than, the defect. By checking the replacement data against one or more reference criteria, the potential effect of replacement can be judged before replacement is effected.
This second aspect of the invention also provides a method of reducing the visibility of defects in a video image represented by frames of original image data, the method comprising the steps of : producing a signal indicating a defect; producing, from the image data, replacement data for replacing a defect; checking the replacement data against at least one reference criterion to produce a replacement signal indicating whether or not the replacement data is suitable for use in place of the original image data; and replacing original image data indicated by the defect indicating signal provided such replacement is indicated by the replacement signal.
This second aspect of the invention also provides a computer program product comprising instructions which, when run on a data processing system, implement the method of this aspect.
According to a third aspect of the invention there is provided apparatus for reducing the visibility of a defect in a video image represented by frames of original image data, the apparatus comprising: a defect detector for producing a signal indicating a defect; an expander for expanding the defect indicating signal; a signal processor for producing, from the image data, replacement data for replacing a defect; and a defect replacer for replacing original image data indicated by the expanded defect indicating signal with the replacement data.
By expanding the defect indicating signal defects such as hairs are more reliably detected and replaced.
This third aspect of the invention also provides a method of reducing the visibility of a defect in a video image represented by frames of original image data, the method comprising: producing a signal indicating a defect; expanding the defect indicating signal; producing, from the image data, replacement data for replacing a defect; and replacing original image data indicated by the expanded defect indicating signal with the replacement data.
This third aspect also provides a computer program product arranged to carry out the method of said third aspect.
The invention also provides a telecine system comprising apparatus according to any one or more of the preceding apparatus aspects.
The invention also provides an apparatus for reducing the visibility of defects in a video image represented by frames of original image data, the apparatus including: a motion compensator for producing motion compensated frames of image data; a differencing arrangement for producing a) a forwards local difference signal representing the local differences between local image portions of a current frame and a motion compensated succeeding frame, and b) a backwards local difference signal representing the local differences between local image portions of a current frame and a motion compensated preceding frame; a comparison arrangement for comparing the forwards and backwards local difference signals with thresholds to produce local comparison signals; a processor for processing the said current and motion compensated succeeding and preceding frames to produce replacement data, the processor being responsive to the local comparison signals to not base processing on local portions of the preceding and succeeding frames for which the local difference signal exceeds the said threshold; a checking processor for checking the replacement data against at least one reference criterion to produce a replacement signal indicating whether or not the replacement data is suitable for use in place of the original image data; a defect detector for producing a defect indicating signal; an expander for
expanding the defect indicating signal; and a defect replacer, having an input for receiving the replacement data, for replacing the original image data indicated by the expanded defect indicating signal with the replacement data.
For a better understanding of the present invention, and to show how the same may be carried into effect, reference will now be made by way of example to the accompanying drawings in which:
Figure 1 is a schematic block diagram of a system for transferring film to video and, optionally, also for transferring edited video to film;
Figure 2 is a schematic block diagram of an illustrative noise reduction system embodying the present invention;
Figure 3 is a schematic illustration of the block matching process used in the blotch detection scheme in embodiments of the present invention;
Figure 4 is a flow diagram illustrating the global vector extraction process used in the blotch detection scheme in embodiments of the present invention;
Figure 5 is a schematic block diagram that illustrates the blotch removal process embodying the present invention;
Figure 6 illustrates the internal structure of the brightness module of Figure 5;
Figure 7 illustrates the internal structure of the forward brightness compensation module of Figure 5A;
Figure 8 illustrates the internal structure of the blotch and scratch combining module of Figure 5A;
Figure 9 illustrates the internal structure of the scene change detection module of Figure 5A;
Figure 10 illustrates the internal structure of the edge detection module of Figure 5A;
Figure 11 illustrates the internal structure of the temporal difference threshold detection module of Figure 5A;
Figure 12 illustrates the internal structure of the dirt flag combination module of Figure 5A;
Figure 13A illustrates the internal structure of the first H and V expansion unit of Figure 5A;
Figure 13B illustrates a block of dirt flags a to i centred on a flag e;
Figure 14A illustrates the internal structure of the second H and V expansion unit of Figure 5A;
Figure 14B illustrates schematically the blotch flag expansion process performed by the second H and V expansion module of Figure 14A;
Figure 15 is a schematic block diagram of an example illustrating the internal structure of the median filter blotch replacement module of Figure 5A;
Figure 16 illustrates the internal structure of the median input selection module of Figure 15;
Figure 17 illustrates the internal structure of the 3D median filter module of Figure 15;
Figure 18 illustrates the internal structure of the blotch replacement checking module of Figure 15;
Figure 19 illustrates the internal structure of the blotch removal output selection module of Figure 15.
The noise reduction system of the present embodiment is implemented in real-time hardware in a digital telecine system such as the Sony Vialta (FVS-1000) Telecine.
The telecine noise reduction system is illustrated schematically in Figure 2. The noise reduction is performed by four modules: a block match vector generation and blotch detection module 1000; a scratch detection and removal module 2000; a blotch removal module 3000 ; and a grain reduction module 4000.
Video data in RGB format 901r, g, b is provided as input to the block motion vector generation and blotch detection module 1000. In this module a process known as motion estimation is performed whereby temporal information is obtained about the contents of each image frame with respect to adjacent frames in a time sequence. The temporal information is used for the detection of blotches and grain noise but not for scratch detection. Motion estimation is performed using a technique known as "block matching" whereby a block of data of fixed size is defined around a central reference position and this block is compared with respective data blocks corresponding to the same portion of the image in the previous frame and in the next frame of a chronological sequence. A block size of plus or minus 40 lines or pixels is typically used. Forward and backward motion vectors 1005 are output from the motion
estimation module 1000 and these are provided as input to the grain reduction module 4000. An information signal 905, indicating where blotches have been detected in frames, is produced by the motion estimation module 1000 and supplied to the blotch removal module 3000. The scratch removal module 2000 detects scratches on a frame-by-frame basis in dependence upon characteristics such as width, depth and orientation of the scratch. Thresholds such as the minimum scratch width and depth for removal can be pre-selected by the user. Account is taken of the fact that scratches will be whiter than the surrounding area on a negative film but darker for a positive film. Once detected, scratches are concealed by interpolation of the adjacent undamaged image areas. The blotch removal module 3000 receives blotch detection flags 905, RGB chrominance data 901r, g, b and forward and backward motion vectors 1005 as input. Blotch removal is performed if the area flagged as a blotch in the current frame differs from the corresponding area in motion compensated previous and next frames by more than a predetermined threshold. Blotches are removed by means of a 3-dimensional median filter, followed by a smoothing low-pass filter. Candidate image data for blotch replacement is cross-checked against adjacent frames and local brightness levels. The blotch removal module 3000 uses inter-frame differences in local brightness levels to detect scene changes and passes-on scene change signals 909 to the grain reduction module 4000. In addition to the scene change signals 909, the grain reduction module 4000 receives RGB video data 901r, g, b and forward and backward motion vectors 1005 as input. In the grain reduction module 4000, large grains and large spatial areas of graininess are removed by a combination of temporal and spatial filtering. The Walsh-Hadamard transform technique is subsequently used by the grain reduction module 4000, to remove the remaining smaller grains and grains in more detailed image areas.
A grain simulation module 49 is provided. The grain simulation module 49 adds random noise to image areas from which scratches and blotches have been removed. The module in this example is at the input to the grain noise reduction module 4000. It receives from the blotch and scratch removal modules 3000 and 2000 flags indicating image areas where blotches and scratches have been removed and inserts into those areas random noise simulating grain noise. That is done for two purposes. Firstly, grain is preferably reduced but not removed in embodiments of the
invention. Areas from which scratches and blotches are removed also then have no grain noise and are too visible. Adding grain noise reduces the visibility of the areas.
Secondly, the grain noise reduction module operates better if there is grain noise; areas with no grain noise would reduce the effectiveness of reduction.
Blotch detection
As a prerequisite to blotch removal it is necessary to employ a scheme of blotch detection. The blotch detection process serves to generate blotch flags that identify those pixels of a video frame that are likely to be a blotch. The blotch flags are supplied as input to a blotch removal scheme that is directed to reconstructing the pixels obscured by blotches. The blotch detection scheme of the current invention uses "motion vectors" to generate blotch flags, as shall be explained below.
The motion vectors are obtained via a process known as motion compensation that comprises the following stages: block matching; calculation of a correlation surface; vector estimation; global vector extraction; vector reduction and vector selection. With the exception of the vector selection stage, all of these stages use luminance data in this embodiment. Alternatively, a single colour channel, e.g. green, could be used. The block matching process is illustrated schematically in Figure 3. A video frame, Frame n, being tested is divided into blocks of NxN pixels.
Frame n shall be referred to as the reference frame. A best-match for the image region defined by block 10 of Frame n is searched for in a predetermined area 20B of a Frame n-1 that represents the previous frame in a temporal sequence and a corresponding area 20F of a Frame n+1 that represents a next frame in the temporal sequence. A best-match block 30 is identified in Frame n-1. The relative positions of the current block 10 and the matched block 30 define a backward motion vector. A best-match block is also found for Frame n+1 and this gives a forward motion vector. Each block in Frame n is considered in turn and a forward and backward motion vector is assigned to each block.
The best-match block 30 in Frame n-1 is determined by constructing a set of correlation surfaces comprising a single correlation surface for each NxN block of the reference frame. Two sets of correlation surfaces are calculated: one for Frame n-1 and one for Frame n+1. A correlation surface is calculated by subtracting each pixel in the NxN block of the reference frame from the corresponding pixel in a
respective NxN block of the Frame n-1 and taking the absolute value of the difference. Each NxN block in the predetermined search area of the Frame n-1 must be considered. The absolute difference values for each pixel location in the NxN block are summed together. The whole process is repeated for Frame n+1. The correlation surface is
defined by the following equation:

Correlation_s(x, y) = Σ (v = 1 to N) Σ (h = 1 to N) | Frame_n(h, v) - Frame_s(h + x, v + y) |
where s represents either the Frame n-1 or the Frame n+1; (h, v) are the coordinates of the current pixel within the block; and x and y represent the horizontal and vertical displacements that map the current block of the reference frame to a corresponding block in the search area of the Frame n-1 or of the Frame n+1. The size of the correlation surface increases with the size of the predefined search area. The correlation surface with the minimum value typically defines the best motion vector [x, y]. Although the above formula is the preferred way of calculating the correlation surface, an alternative way of calculating the correlation surface performs a sum of squared differences rather than a sum of absolute differences.
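By way of illustration, the correlation surface calculation could be sketched in software as follows. This Python fragment is an illustrative reconstruction rather than the hardware of the embodiment; the function and variable names, the NumPy dependency and the treatment of displacements that fall outside the frame are assumptions made for the example.

    import numpy as np

    def correlation_surface(ref_block, search_frame, top, left, search_range):
        """Sum-of-absolute-differences surface for one NxN block of the reference frame.

        ref_block    : NxN luminance block taken from Frame n.
        search_frame : full luminance image of Frame n-1 or Frame n+1.
        (top, left)  : position of the block within the reference frame.
        search_range : largest displacement tested in each direction.
        """
        n = ref_block.shape[0]
        size = 2 * search_range + 1
        surface = np.full((size, size), np.inf)
        for dy in range(-search_range, search_range + 1):
            for dx in range(-search_range, search_range + 1):
                y, x = top + dy, left + dx
                if (0 <= y and y + n <= search_frame.shape[0]
                        and 0 <= x and x + n <= search_frame.shape[1]):
                    candidate = search_frame[y:y + n, x:x + n]
                    # Sum of absolute differences; a sum of squared differences
                    # could be substituted, as noted above.
                    surface[dy + search_range, dx + search_range] = np.abs(
                        ref_block.astype(np.int32) - candidate.astype(np.int32)).sum()
        return surface

    def best_vector(surface, search_range):
        """Displacement [x, y] at the minimum of a correlation surface."""
        dy, dx = np.unravel_index(np.argmin(surface), surface.shape)
        return dx - search_range, dy - search_range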
Once a correlation surface has been calculated for each NxN block of the reference frame, a weighting factor is applied to each of the correlation surfaces.
The weighting factor serves to reduce the likelihood of aliased minima in the correlation surfaces by assigning larger weights to larger motion vectors, thus biasing the choice of motion vector towards the vector closest to the zero vector [0, 0].
The process of vector estimation involves determining a single best vector [x, y] for each correlation surface. The vector is defined as the distance from the centre of the correlation surface, which is defined as the zero vector, to the lowest minimum on the surface. An alias flag and a valid flag are produced to indicate the reliability of each motion vector thus selected. At the end of the vector estimation process there will be two motion vectors for each NxN block of the reference frame: one relative to Frame n-1 and one relative to Frame n+1.
The global vector extraction process involves determining the four most frequently occurring motion vectors for Frame n-1 and the four most frequently occurring motion vectors for the Frame n+1. This process is illustrated by the flow chart of Figure 4. At stage S1 of the flow chart the most frequent motion vector (MV)
is found and added to a global motion vector list. The next most frequent MV is determined at stage S2. At stage S3 a test is performed to see if the most recently generated MV lies within a predetermined exclusion zone around the previous most frequent MV. The exclusion zone prevents motion vectors that are too similar being chosen as the global motion vectors. This is important because different parts of the reference frame may be moving at different velocities and it is important that the global motion vectors are representative of these differences. If the most recently generated MV does lie within the exclusion zone then the process returns to stage S2, otherwise at stage S4 the most recently generated MV is added to the global motion vector list and the process proceeds to stage S5. At stage S5 a check is performed to see if all four global motion vectors have been found. If all four vectors have been found the process terminates at stage S6, otherwise the process returns to stage S2.
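The loop of Figure 4 can be summarised by the following Python sketch, assuming the block motion vectors are available as a list of (x, y) tuples. The box-shaped exclusion zone and its radius are illustrative assumptions; the embodiment does not specify the shape or size of the zone.

    from collections import Counter

    def extract_global_vectors(block_vectors, exclusion_radius=2, wanted=4):
        """Pick the most frequent, mutually dissimilar block motion vectors."""
        counts = Counter(block_vectors)
        global_vectors = []
        for vector, _ in counts.most_common():
            # Stage S3: skip any vector inside the exclusion zone of a vector already
            # chosen, so the globals stay representative of different motions.
            too_close = any(abs(vector[0] - g[0]) <= exclusion_radius and
                            abs(vector[1] - g[1]) <= exclusion_radius
                            for g in global_vectors)
            if not too_close:
                global_vectors.append(vector)       # stages S1 and S4
            if len(global_vectors) == wanted:       # stage S5
                break
        return global_vectors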
The stage of vector reduction involves augmenting the motion vectors determined in the global vector extraction process to reduce the likelihood of a "bad" motion vector being selected in the vector selection stage described below. Recall that each motion vector has a flag to indicate whether it is valid and a flag to indicate whether it is aliased. These two flags indicate how "good" a motion vector is. The vector reduction process involves identifying any valid motion vectors associated with the 8 blocks adjacent to a given NxN block and using these valid vectors to replace any non-valid vectors in the global motion vector list associated with the given block. At the end of the vector reduction process each NxN block should have 5 motion vectors including the zero vector associated with it. Thus there are 5 motion vectors for each P-frame and for each N-frame.
The Vector Selection stage involves determining a motion vector for each pixel in the reference frame. The motion vectors calculated in the above stages apply to NxN blocks of pixels. Now motion vectors are assigned to each individual pixel in the NxN block. In the above stages only a single colour channel has been used. At the vector selection stage, it is necessary to check the vectors against all three colours, as vectors that seem correct for one colour could be very wrong for others.
The Vector Selection process takes each of the five motion vectors determined from the Vector Reduction process and performs a 3 pixel by 3 pixel block-match centred on each pixel under test in the reference frame. A sum of absolute differences
is calculated for every pixel in the 3x3 block. The results for the three colour channels R, G and B are then added together. The block-match giving the lowest result is chosen as the motion vector for that particular pixel. The block-match is done
separately for Frame n-1 and for Frame n+1. The block-match is defined by the following formula:

BlockMatch_s(x, y) = Σ (over the 3x3 block centred on the pixel under test) | Frame_n(h, v) - Frame_s(h + x, v + y) |
where s represents either Frame n-1 or Frame n+1; (h, v) are the coordinates of the pixel in the reference frame and (x, y) are the displacements corresponding to the potential motion vector for the pixel at (h, v). A block match surface is produced for each of the N^2 pixels in each block.
The minimum of the block match surface is chosen as the motion vector for the pixel (h, v). Each pixel of the reference frame is assigned a forward motion vector calculated with respect to Frame n+1 and a backwards motion vector calculated with respect to Frame n-1. The motion vectors are used in a process called "motion compensation" to determine the likelihood of blotches being present in a video frame.
The motion compensated Frame n-1 is known as the previous frame (P-frame) and the motion compensated Frame n+1 is known as the next frame (N-frame).
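A minimal Python sketch of the per-pixel vector selection is given below, assuming the five candidate vectors left after vector reduction are supplied as (x, y) tuples and that the pixel under test is far enough from the frame border for the 3x3 neighbourhood to exist; border handling is omitted for brevity.

    import numpy as np

    def select_pixel_vector(cur, other, h, v, candidate_vectors):
        """Choose a motion vector for the pixel at (h, v) of the reference frame.

        cur, other : current and adjacent frames as (height, width, 3) RGB arrays.
        """
        best, best_score = (0, 0), np.inf
        for (x, y) in candidate_vectors:
            score = 0
            # 3x3 block match centred on the pixel, summed over R, G and B.
            for dv in (-1, 0, 1):
                for dh in (-1, 0, 1):
                    a = cur[v + dv, h + dh].astype(np.int32)
                    b = other[v + y + dv, h + x + dh].astype(np.int32)
                    score += np.abs(a - b).sum()
            if score < best_score:
                best, best_score = (x, y), score
        return best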
The blotch detector looks for temporal discontinuities between pixels of the reference frame and the pixels given by the corresponding forward and backward motion vectors. The technique relies on the fact that a Displaced Pixel Difference (DPD) is typically small when motion is correctly estimated. The DPD is defined by the following equation: D(x) = I_n(x) - I_(n-1)(x + d_(n,n-1)(x)), where I_n(x) is the pixel value in the reference frame and I_(n-1)(x + d_(n,n-1)(x)) is the pixel value in frame n-1 given by the backward motion vector.
When a pixel signal value has been corrupted by a blotch, the DPD value will typically have a large magnitude with respect to the values for uncorrupted pixels.
However the DPD also typically has a large magnitude when motion discontinuities
occur. This potential ambiguity can be overcome by recognising that in cases of motion discontinuity, the temporal discontinuity typically occurs in a single direction along the motion trajectory. Thus two DPD values, Eb and Ef, are calculated corresponding to the backward and forward directions respectively.
Eb = I_n(x) - I_(n-1)(x + d_(n,n-1)(x))
Ef = I_n(x) - I_(n+1)(x + d_(n,n+1)(x))
A "Spike Detection Index" (SDI) with motion compensation is used for blotch detection. The SDI exploits the facts that the intensity of blotches deviates substantially from the intensity of the surrounding region and that the majority of the values for Eb and Ef associated with pixels of a blotch have the same sign. A user
defined threshold Et is introduced to determine which DPD values constitute true blotches. The following equation defines the SDI algorithm
SDI(x) = 1 if (|Eb| > Et) AND (|Ef| > Et) AND sign(Ef) = sign(Eb); 0 otherwise
where x corresponds to a pixel of the reference frame.
Thus pixels associated with a blotch will have SDI(x) = 1. An alternative to the SDI blotch detection method is "Rank Order Detection" which uses both temporal and spatial differences to identify pixels associated with a blotch.
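The SDI test amounts to the short Python sketch below, assuming the previous and next frames have already been motion compensated so that the displaced pixels line up with the current pixel. The function name and the treatment of a zero difference as positive are assumptions made for the example.

    def sdi_flag(cur, prev_mc, next_mc, v, h, threshold):
        """Spike Detection Index for the pixel at (h, v) of the reference frame."""
        eb = int(cur[v, h]) - int(prev_mc[v, h])   # backward displaced pixel difference
        ef = int(cur[v, h]) - int(next_mc[v, h])   # forward displaced pixel difference
        same_sign = (eb >= 0) == (ef >= 0)
        return 1 if abs(eb) > threshold and abs(ef) > threshold and same_sign else 0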
The final result of the SDI blotch detection algorithm is a binary blotch map in which a blotch is represented by a one and the absence of a blotch is represented by a zero. These values are used as blotch flags. A blotch flag is generated for every pixel in the reference frame. The flag is supplied as input to the Blotch Removal Scheme. The SDI algorithm is implemented in the temporal difference threshold detection module 3600 described below.
Blotch Removal
Figure 5A schematically illustrates the blotch removal process. A frame store 5000 holds write address information in section 5000A; information for the motion compensated next frame in section 5000B; scratch pointer information supplied by the scratch detection and removal module 2000 in section 5000C; information for the current frame in section 5000D; and information for the motion
compensated previous frame in section 5000E. A read/write control module 5002 controls the input and output of data in the frame store. As the next frame data is input to the frame store a pixel and line counter 5004 supplies a write address for the next frame. The write addresses are also supplied to an adder 5008 and the corresponding forward motion vectors are added to the write addresses to produce motion compensated forward addresses. Likewise, the write addresses and the corresponding backward motion vectors are added in an adder 5006 to produce motion compensated backwards addresses. The motion compensated forward addresses in section 5000B and the motion compensated backward addresses in section 5000E are used as read addresses both for the video data and for the scratch pointer data. The address of the current frame is associated with a scratch pointer for the current frame.
The inputs to the frame store 5000 are RGB video data, forward and backward motion vector data and the output from the pixel and line counter 5004. The scratch pointer information from section 5000C and blotch flags from the block matching process are supplied as inputs to a blotch and scratch combining module 3300. In the blotch and scratch combining module 3300 the blotch flag information is combined with the scratch flag information for current, previous and next frames to produce a dirt pointer flag 3312. The dirt pointer 3312 is supplied as input, to a brightness module 3100 for current frame data and to a dirt flag combination logic 3700. The brightness module 3100 together with a brightness compensation module 3200F for forward frame data and a brightness compensation module 3200B for backwards frame data serve to reduce the effects of global or local fluctuations in brightness between frames. Since blotch replacement pixel values are calculated using adjacent frames in the temporal sequence, any brightness fluctuations could lead to ill-matching replacement pixel values. The brightness modules 3100,3200B and 3200F take a localised area average of the brightness levels of the current frame, the P-frame and the N-frame respectively and the P-frame and N-frame values are compensated by the difference measure between their brightness and the current frame brightness. The dirt pointer 3312 information is used by the brightness module 3100 to exclude pixels flagged as blotches from the localised area average for the current frame. A set of brightness pointers 3150 is output from the brightness module 3100. The brightness pointers 3150, a forward brightness difference signal 3232F and a backward brightness
difference signal 3232B are supplied as inputs to a scene change detection module 3400. The scene change detection module 3400 compares the inter-frame brightness differences with a predetermined threshold and if the difference exceeds the threshold a scene change is flagged. The scene change information is used to disregard any temporal processing of blotches where there is a scene change. A scene change pointer 3440 is output from the scene change detection module 3400 and supplied as input to a median filter blotch replacement module 3900.
A local detail calculation module 5014 determines the difference between adjacent pixel values horizontally and vertically in the surrounding image area. The output of this module comprises a horizontal detail key H and a vertical detail key V that are defined by the following formulae:
H = HLPF(abs(Y(x, y) - Y(x-1, y)))
V = VLPF(abs(Y(x, y) - Y(x, y-1)))
where: x and y are horizontal and vertical pixel coordinates respectively; Y represents the amplitude of an R, G or B signal; HLPF denotes horizontal low pass filtering; and VLPF denotes vertical low pass filtering. The signals H and V are supplied as inputs to an edge detection module 3500.
The edge detection module 3500 also receives as inputs, the outputs from the brightness compensation modules 3200F and 3200B, including the forward brightness difference 3232F, the backward brightness difference 3232B, the brightness compensated next frame data 3230N and the brightness compensated previous frame data 3230P. The edge detection module 3500 is used to identify the edges of blotches.
Blotch edges are typically sharper than most edges in most images. If a relatively sharp edge is detected for a given pixel in combination with an inter-frame difference of comparable magnitude then that pixel is likely to be associated with a blotch edge.
The edge detection module 3500 flags the blotch edges and generates an output signal 3564 that is supplied as input to a dirt flag combination module 3700.
A temporal difference threshold detection module 3600 receives as inputs the current frame data from frame store unit 5000D, the N-frame brightness compensated data 3230N and the P-frame brightness compensated data 3230P. The module 3600 applies a predetermined temporal difference threshold to the difference between the current pixel and the corresponding motion compensated equivalent pixel on the
previous and next frames. Pixel difference values that lie above the threshold for both forward and backward differences are more likely to be associated with blotches and are flagged as such. The temporal difference threshold detection module 3600 outputs a 1-bit temporal threshold flag 3624 that is supplied as input to the dirt flag combination module 3700.
The dirt flag combination module 3700 combines the information given by the blotch flag 3312, the edge detect flag 3564 and the temporal threshold flag 3624 to produce dirt flags 3712RGB as output. The dirt flag is supplied as input to a spatial low pass filter 5012 and to a first horizontal and vertical (H and V) expansion unit 3800.
The first H and V expansion unit 3800 expands the blotch flags so that the edges of the blotches are not left unprocessed. This unit flags a pixel as a blotch if either the pixel itself or the pixel above, below, to the left or to the right of it is currently flagged as a blotch by the dirt flags 3712RGB. The flags are generated separately for each chrominance channel. The flags for all three RGB chrominance channels are output from the first H and V expansion unit 3800 and supplied as input to an AND logic gate where they are combined. A pixel is flagged as a blotch only if it is flagged separately on all three chrominance channels. The AND gate outputs an expand(1) dirt flag 3814 signal to a second H and V expansion unit 3850.
The second H and V expansion unit 3850 expands the combined dirt flags so that lines are joined up where necessary. Thus if two opposite pixels, vertically, horizontally or diagonally, are flagged as a blotch then the pixel in the middle is also flagged. This is illustrated in Figure 14B. The second H and V expansion unit 3850 outputs an expand(2) dirt flag signal 3880 which is fed as input to a median filter blotch replacement module 3900.
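The two expansion stages can be sketched in Python as follows, with the dirt flags assumed to be boolean NumPy arrays, one per colour channel. The array slicing here is an illustrative software substitute for the line and pixel delays of the hardware units 3800 and 3850.

    import numpy as np

    def expand_stage1(flags_r, flags_g, flags_b):
        """First expansion: flag a pixel if it, or any of its four nearest
        neighbours, is flagged; the results are ANDed across R, G and B."""
        def grow(f):
            out = f.copy()
            out[1:, :] |= f[:-1, :]    # pixel above
            out[:-1, :] |= f[1:, :]    # pixel below
            out[:, 1:] |= f[:, :-1]    # pixel to the left
            out[:, :-1] |= f[:, 1:]    # pixel to the right
            return out
        return grow(flags_r) & grow(flags_g) & grow(flags_b)

    def expand_stage2(flags):
        """Second expansion: flag the centre pixel when two opposite neighbours
        (vertical, horizontal or diagonal) are both flagged, joining up lines."""
        f = np.pad(flags, 1)
        centre = (f[:-2, 1:-1] & f[2:, 1:-1]) | (f[1:-1, :-2] & f[1:-1, 2:]) \
               | (f[:-2, :-2] & f[2:, 2:]) | (f[:-2, 2:] & f[2:, :-2])
        return flags | centre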
The median filter blotch replacement module 3900 receives as input the scene change detection pointer 3440; an expand(2) dirt flag 3880 from the second H and V expansion unit 3850; P-frame data 3230P from the brightness module 3200B; the N-frame data 3230N from the brightness module 3200F; and current frame data from the frame store unit 5000D. The median filter blotch replacement module 3900 implements a median filter horizontally, vertically and temporally in order to calculate replacement values for pixels flagged as blotches. The pixel being replaced in the
current frame and five pixels each from the backwards and forwards motion compensated frames contribute to the median filter calculation. Scratches that are not flagged as blotches are removed by simple temporal filtering in this module. The median filter blotch replacement module 3900 outputs a signal 3934RGB comprising pointers to filtered pixel values for the R, G and B chrominance channels of the image reference frame. These pointers 3934 are supplied as input to a spatial low pass filter 5012.
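A simplified Python sketch of the replacement value calculation is shown below. Eleven samples contribute, as described above, but the embodiment does not state the exact positions of the five pixels taken from each motion compensated frame, so the cross-shaped pattern used here is an assumption.

    import numpy as np

    def median_replacement(cur, prev_mc, next_mc, v, h):
        """3D (horizontal, vertical and temporal) median for a flagged pixel."""
        cross = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]   # assumed five-pixel pattern
        samples = [int(cur[v, h])]
        for frame in (prev_mc, next_mc):
            samples.extend(int(frame[v + dv, h + dh]) for dv, dh in cross)
        return int(np.median(samples))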
The spatial low pass filter 5012 also receives as input, the dirt flags 3712RGB that flag the blotches prior to flag expansion. The spatial low pass filter 5012 low-pass filters all blotches and any pixels on individual RGB chrominance channels that were flagged as potential blotches before flag expansion. This improves the integration of the blotch replacement pixels with the surrounding image area.
Brightness Module 3100, Figure 6
Figure 6 shows the internal structure of the brightness module 3100 that operates on the current frame data. This module calculates a localised area average of the brightness level in the vicinity of the pixel being processed in the reference frame.
The pixels flagged as blotches in the current frame are excluded from the localised area average. A selection module 3124 selects the current frame video data input if the blotch flag is LOW, but if the blotch flag is HIGH a zero input 3126 is provided to the circuit.
The following description of module 3100 refers to a value D. D represents the size of the area over which the brightness level is calculated. For D=10 the area is 21x21 pixels.
A group of circuit elements comprising an adder 3134 an adder 3136, a FIFO delay module 3132 that provides a (2D+1) line delay and a FIFO delay module 3138
that provides a 1 line delay serve to add current frame pixel values vertically over 2D+1 lines. An adder 3142, an adder 3144, a FIFO delay module 3140 that provides a (2D+1) clock cycle delay and a delay module 3146 that provides a 1 clock cycle delay serve to add groups of 2D+1 horizontally adjacent current frame pixels. A group of circuit elements comprising an adder 3104, an adder 3106, a FIFO delay module 3100 that provides D lines delay, a FIFO delay module 3102 that provides D+1 lines delay and a FIFO delay module 3108 that provides a 1 line delay keep track of the number
of TRUE blotch flags in a vertical sequence of 2D+1 pixels. A group of circuit elements comprising an adder 3112, an adder 3114, a FIFO delay module 3110 that provides a (2D+1) clock cycle delay and a delay module 3116 that provides a single clock cycle delay keep track of the number of TRUE blotch flags in a sequence of (2D+1) horizontally adjacent pixels. A reciprocal unit 3118 ensures that in performing the average calculation, the vertical sum of current frame pixel elements is divided by the number of FALSE blotch flags corresponding to the number of pixels included in the vertical sum. Similarly, a reciprocal unit 3120 ensures that the horizontal sum of current frame pixel elements is divided by the number of FALSE blotch flags corresponding to the number of pixels included in the horizontal sum. The output of this module is a local brightness signal 3150 that is a brightness average over a 2D+1 by 2D+1 block of pixels centred on the pixel currently being processed and excluding any pixels in the block that have a TRUE blotch flag.
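In software the same localised average could be sketched as follows; the use of SciPy's uniform_filter in place of the FIFO delay and adder chains, and the parameter names, are simplifying assumptions.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def local_brightness(frame, blotch_flags, d=10):
        """(2D+1) x (2D+1) local brightness average (cf. signal 3150), excluding
        pixels whose blotch flag is TRUE."""
        k = 2 * d + 1
        valid = (~blotch_flags).astype(np.float64)
        masked = frame.astype(np.float64) * valid
        # uniform_filter returns local means, so multiplying by k*k gives local sums.
        local_sum = uniform_filter(masked, size=k, mode="nearest") * k * k
        local_count = uniform_filter(valid, size=k, mode="nearest") * k * k
        return local_sum / np.maximum(local_count, 1.0)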
Brightness Compensation Module 3200, Figure 7
The following description of module 3200 refers to a value D which is the same as the value D described above in relation to module 3100. Figure 7 shows the internal structure of the brightness compensation module 3200B.
The video input data 3201B corresponds to motion compensated P-frame data.
The P-frame data 3201B is supplied to a first group of circuit elements comprising an adder 3204, a FIFO delay module 3202 that provides a delay of (2D+1) lines, an adder 3206 and a FIFO delay module 3208 that provides a delay of 1 line. This first group of circuit elements serves to add brightness values for groups of (2D+1) vertically adjacent pixels. A multiplier 3210 multiplies the sum of brightness for (2D+1) pixels by 1/(2D+1) thus calculating the average brightness value for each vertical group. The output of the multiplier 3210 is supplied as input to a second group of circuit elements comprising a FIFO delay module 3220 that provides a delay of (2D+1) clock cycles, an adder 3222, an adder 3224 and a delay module 3226 that provides a delay of 1 clock cycle. This second group of circuit elements serves to sum brightness values for
groups of (2D+1) horizontally adjacent pixels. A multiplier 3214 multiplies the sum of brightness for (2D+1) horizontally adjacent pixels by 1/ (2D+1) thus calculating the average brightness value for each horizontal group. The output of the multiplier 3214 is the average local brightness level corresponding to a (2D+1) by (2D+1) block of
pixels of the P-frame. The location of the centre of this block in the P-frame is given by the backwards motion vector of the pixel currently being processed in the reference frame. An adder 3216 receives an input 3150 corresponding to the local (average) brightness of the current frame and the output of the multiplier 3214 is subtracted from this value to give an output brightness difference signal 3232B corresponding to the local brightness difference between the current frame and the P-frame. An adder 3218 adds the brightness difference signal 3232B to the video input data 3201B to produce an output signal 3230P comprising P-frame data that is compensated by the difference between the average local brightness of the input P-frame video data 3201B and the average local brightness of the current frame.
The brightness module 3200F has an identical internal structure to the module 3200B described above. The difference is that the input to module 3200F is the motion compensated N-frame data rather than the motion compensated P-frame data.
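The brightness compensation applied to a motion compensated P-frame or N-frame can be sketched in the same style; the box filter again stands in for the hardware's separable averaging, and D = 10 simply mirrors the example value given for module 3100.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def brightness_compensate(mc_frame, current_local_brightness, d=10):
        """Return the compensated frame (cf. 3230P/3230N) and the local brightness
        difference between the current frame and the motion compensated frame
        (cf. 3232B/3232F)."""
        k = 2 * d + 1
        local = uniform_filter(mc_frame.astype(np.float64), size=k, mode="nearest")
        difference = current_local_brightness - local
        return mc_frame + difference, difference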
Blotch and Scratch Combining Module 3300, Figure 8
Figure 8 shows the internal structure of the blotch and scratch combining module 3300. An AND logic gate 3310 has an input of a scratch pointer 3304 for the current frame, an input of a scratch pointer for the next motion compensated frame
3306 which is inverted on input to the AND gate and an input of a scratch pointer for the previous motion compensated frame 3308 which is also inverted. Thus the output from AND gate 3310 will be HIGH only if the scratch pointer 3304 for the current frame is HIGH and the two motion compensated scratch pointers 3306 and 3308 are LOW.
The AND gate 3310 output is supplied as input to an OR gate 3302. The OR gate 3302 receives a second input of the blotch flag 3301 from the block matching process.
The OR gate generates a dirt pointer 3312 as output that combines the information from the blotch flag 3301 and the scratch pointers 3304, 3306 and 3308.
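For a single pixel the combining logic reduces to the following Python sketch; the argument names are illustrative.

    def dirt_pointer(blotch_flag, scratch_cur, scratch_next_mc, scratch_prev_mc):
        """Dirt pointer 3312: a blotch flag, or a scratch that is present in the
        current frame but absent from both motion compensated neighbour frames."""
        scratch_only_here = scratch_cur and not scratch_next_mc and not scratch_prev_mc
        return blotch_flag or scratch_only_here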
Scene Change Detection Module 3400, Figure 9
Figure 9 shows the internal structure of the scene change detection module 3400. The scene change detection module calculates the inter-frame difference between brightness levels on each colour channel. If the difference is greater than a predetermined threshold, then a scene change in that temporal direction is flagged. There are four possible outputs of the scene change detection:
No Scene Change: in which case the brightness levels do not vary by more than the threshold in either direction.
Forward Scene Change: in which case the brightness levels on at least one colour differ by more than the threshold between the current and next frames.
Backward Scene Change: in which case the brightness levels on at least one colour differ by more than the threshold between the current and previous frames.
Scene Change in Both Directions: in which case the brightness levels on at least one colour differ by more than the threshold both between the current and previous frames, and between the current and next frames. This must be confirmed by the difference between the previous and next frame brightness being greater than the threshold as well.
Depending on the type of scene change detected, any temporal processing in the direction of a scene change must be ignored, as it is no longer based on a valid temporal variation. Therefore, the temporal difference threshold detection carried out in the module 3600, and the 3D median filtering carried out in the module 3900, must both be modified to look only in the direction where there is no scene change. For the 3D median filter 3900, a frame in the direction in which there is a scene change is replaced with a duplicate of the frame in the other direction. In cases where there is a scene change in both directions, the 3D median filter cannot be used at all so the blotch must be left uncorrected. Cases where a scene change is detected in both directions are rare, and typically only happen where a blotch would be hard to spot anyway.
Refer now to the circuit elements of Figure 9. The current frame local average brightness signal 3150 for a single chrominance channel is input to a value-checking module 3442 that gives a HIGH output if the input signal is non-zero. An AND gate 3444 is supplied with the output of the value-checking module 3442 and with the two corresponding outputs 3408 for the other chrominance channels. The AND gate 3444 has an output that is fed as a first input to an AND gate 3430.
An adder 3416 subtracts the forwards brightness difference 3232F from the backwards brightness difference 3232B for a single chrominance channel in this case. The value of the difference between these two signals 3232F and 3232B is subtracted from a scene change difference threshold 3402, which may be user defined or may be
fixed, at an adder 3418 and the sign of the difference is supplied as a second input to the AND gate 3430. The output of the AND gate 3430 is supplied to an output select module 3438. If the output of the AND gate 3430 is HIGH then the output select module 3438 flags a scene change in both forward and backward directions.
The backwards brightness difference signal 3232B for a single chrominance channel is supplied as input to an absolute value module 3412, the output of which is fed to an adder 3422. The absolute value of the backwards brightness difference signal 3232B is subtracted from the predetermined scene change difference threshold 3402 at the adder 3422. The output of the adder 3422 is fed to an OR gate 3424. The OR gate 3424 receives two equivalent input signals 3410 corresponding to the other two chrominance channels. The output of the OR gate 3424 is fed to an AND gate 3436 and to an AND gate 3430. The output of the OR gate 3424 will be HIGH if at least one of the three chrominance channels has a backwards brightness difference that exceeds the threshold 3402. The AND gate 3436 receives a second input from the output of the AND gate 3430 that is supplied via a NOT gate 3432.
This second input ensures that a backwards scene change will not be flagged by the output select module 3438 when the scene change is in fact in both directions.
The forward brightness difference signal 3232F for a single chrominance channel is supplied as input to an absolute value module 3414, the output of which is fed to an adder 3420. The absolute value of the forward brightness difference signal 3232F is subtracted from the predetermined scene change difference threshold 3402 at the adder 3420. The output of the adder 3420 is fed to an OR gate 3426. The OR gate 3426 receives two equivalent input signals 3428 corresponding to the other two chrominance channels. The output of the OR gate 3426 is fed to an AND gate 3434 and to the AND gate 3430. The output of the OR gate 3426 will be HIGH if at least one of the three chrominance channels has a forward brightness difference that exceeds the threshold 3402. The AND gate 3434 receives a second input from the output of the AND gate 3430 that is supplied via a NOT gate 3432. This second input ensures that a forward scene change will not be flagged by the output select module 3438 when the scene change is in fact in both directions.
The output select module 3438 determines the value of a scene change pointer 3440 that is output from the scene change detection module 3400. If the output
of the AND gate 3436 is HIGH then a backwards scene change is flagged; if the output of the AND gate 3434 is HIGH then a forward scene change is flagged; if the output of the AND gate 3430 is HIGH then a scene change in both directions is flagged; otherwise no scene change is flagged.
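The decision rules described above can be summarised by the following Python sketch, which works directly on the per-channel local brightness differences rather than on the gate-level signals; the ordering of the tests follows the output select description, and the return values are illustrative labels.

    def classify_scene_change(fwd_diffs, bwd_diffs, threshold):
        """Classify the local scene change from (R, G, B) brightness differences
        between the current frame and the next (fwd_diffs) and previous
        (bwd_diffs) frames, cf. signals 3232F and 3232B."""
        forward = any(abs(d) > threshold for d in fwd_diffs)
        backward = any(abs(d) > threshold for d in bwd_diffs)
        # A change in both directions must also be confirmed by the previous and
        # next frames differing from each other by more than the threshold
        # (3232B minus 3232F is the previous-to-next brightness difference).
        both = forward and backward and any(
            abs(b - f) > threshold for f, b in zip(fwd_diffs, bwd_diffs))
        if both:
            return "scene change in both directions"
        if backward:
            return "backward scene change"
        if forward:
            return "forward scene change"
        return "no scene change"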
The scene changes detected by the scene change detection module 3400 are actually local differences rather than global differences. Global differences would indicate that the whole frame is different to the one before or after whereas local differences correspond to portions of the frame. Detecting scene changes locally, rather than globally has certain advantages. Even on a genuine scene change, some areas of the picture may be similar to the same area before the scene change: for instance, a sky or dark background. Using local scene change detection, these similar areas can still be processed in both temporal directions despite the fact that there is a global scene change. Furthermore in areas of fast motion, where the motion vectors are unable to track objects successfully, a local scene change will be detected. This will prevent errors in motion vectors from producing noticeable errors on the output picture.
Edge Detection Module 3500. Figure 10 Figure 10 shows the internal structure of the edge detection module 3500.
The horizontal detail level H is input to a horizontal edge threshold (Hthreshold) calculation module 3548 that calculates Hthreshold = √(Thedge * H), where Thedge is a predetermined constant. The Hthreshold is fed to an adder 3560.
Similarly, the vertical detail level V is input to a vertical edge threshold (Vthreshold) calculation module 3548 that calculates Vthreshold = √(Thedge * V). The Vthreshold is fed to an adder 3524. The Hthreshold and Vthreshold therefore vary with the square root of H and V respectively. This typically gives better results than applying thresholds that are linearly related to H and V.
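A minimal sketch of this threshold calculation, assuming (as the preceding sentence states) that the thresholds vary with the square root of the detail levels; the function and variable names are illustrative only:

    import math

    def edge_thresholds(h_detail, v_detail, th_edge):
        # The thresholds grow with the square root of the local detail level,
        # rather than linearly with it.
        h_threshold = math.sqrt(th_edge * h_detail)
        v_threshold = math.sqrt(th_edge * v_detail)
        return h_threshold, v_threshold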
An adder 3520 calculates Vdiff = Y (x, y) - Y (x, y-1), where Y represents the amplitude of R, G or B pixels of the current frame. The Vthreshold 3550 is subtracted
from ABS (Vdiff) at an adder 3524. The output of the adder 3524 is fed to a logic unit 3562.
An adder 3536 calculates Hdiff = Y (x, y) - Y (x-1, y). The Hthreshold 3552 is subtracted from ABS (Hdiff) at an adder 3530. The output of the adder 3530 is fed to the logic unit 3562.
An absolute value module 3516 receives the backwards brightness difference 3232B and the forwards brightness difference 3232F as inputs. The module 3516 determines the maximum absolute value of the forward difference and of the backwards difference and outputs these values to an adder 3526 and to an adder 3528.
An adder 3538 takes the signals 3230N and 3506 as inputs and calculates a forward difference Fdiff = Y (x, y) - Next (x, y). An adder 3542 takes the signals 3230P and 3506 as inputs and calculates a backward difference Bdiff = Y (x, y) - Prev (x, y).
An adder 3554 determines whether ABS (Fdiff) > Vthreshold; and an adder 3556 determines whether ABS (Fdiff) > Hthreshold. An adder 3558 determines whether ABS (Bdiff) > Vthreshold; and an adder 3560 determines whether ABS (Bdiff) > Hthreshold.
An absolute value module 3522 generates a sign bit corresponding to the sign of Vdiff; an absolute value module 3532 generates a sign bit corresponding to the sign of Hdiff; an absolute value module 3540 generates a sign bit corresponding to the sign of Fdiff; and an absolute value module 3544 generates a sign bit corresponding to the sign of Bdiff.
The logic unit 3562 generates horizontal and vertical edge detect flags HEdge and VEdge respectively on the basis of the output signals of the adders 3524, 3526, 3528, 3530, 3554, 3556, 3558 and 3560 and on the values of the sign bits generated by the absolute value modules 3522, 3532, 3540 and 3544. The following pseudocode gives the edge flag allocation rules:
If ((ABS (Hdiff) > Hthreshold) AND (ABS (Fdiff) > Hthreshold) AND (ABS (Bdiff) > Hthreshold) AND (Sign (Hdiff) = Sign (Fdiff) = Sign (Bdiff)))
    HEdge = 1
Else
    HEdge = 0

If ((ABS (Vdiff) > Vthreshold) AND (ABS (Fdiff) > Vthreshold) AND (ABS (Bdiff) > Vthreshold) AND (Sign (Vdiff) = Sign (Fdiff) = Sign (Bdiff)))
    VEdge = 1
Else
    VEdge = 0

EdgeDetect Flag = HEdge OR VEdge

The logic unit 3562 generates the edge detect flag 3564 as output.
Temporal Difference Threshold Detection Module 3600. Figure 11 Figure 11 shows the internal detail of the temporal difference threshold detection module 3600. This module 3600 calculates the difference between the current pixel and its corresponding motion and brightness compensated pixel on both the previous and next frames and compares this difference with a predetermined temporal difference threshold 3618. A blotch will typically exceed the threshold 3618 for both the forward and backward differences. A blotch should also differ with the same polarity relative to the preceding and following frame. The following pseudocode algorithm summarises the logic of the temporal difference threshold detection module 3600:
Fwd_diff = Current - Next
Back_diff = Current - Previous
If ((ABS (Fwd_diff) > Threshold) AND (ABS (Back_diff) > Threshold) AND (Sign (Fwd_diff) = Sign (Back_diff)))
    Temporal Difference Threshold Detect = 1;
Else
    Temporal Difference Threshold Detect = 0;

Refer now to the circuit elements of Figure 11. The inputs to the temporal difference threshold detection module 3600 are the brightness compensated next frame data 3230N, the brightness compensated previous frame data 3230P, the current frame data 3506 retrieved from the frame store unit 5000D and the predetermined temporal difference threshold 3618.
The current frame data 3506 is subtracted from the next frame data 3230N at an adder 3608 and a forward difference signal that is the output of the adder 3608 is fed to an absolute value unit 3612. The absolute value of the forward difference signal is supplied as input to an adder 3616. A sign bit from the output of the absolute value unit 3612 is fed directly to a logic module 3622. This sign bit indicates the polarity of the forward difference. The temporal difference threshold 3618 is subtracted from the forward difference at the adder 3616 and a sign bit for the output of the adder 3616 is fed to the logic module 3622.
The current frame data 3506 is subtracted from the previous frame data 3230P at an adder 3610 and a backward difference signal that is an output of the adder 3610 is fed to an absolute value unit 3614. The absolute value of the backward difference signal is supplied as input to an adder 3620. A sign bit from the output of the absolute value unit 3614 is fed directly to the logic module 3622. This sign bit indicates the polarity of the backwards difference. The temporal difference threshold 3618 is subtracted from the backward difference at the adder 3620 and a sign bit corresponding to the output of the adder 3620 is fed to the logic module 3622.
The logic module 3622 outputs a 1-bit temporal threshold detect flag 3624. The flag 3624 is HIGH if the forward and backward differences both lie above the threshold provided that the polarity of the forward and backward differences matches.
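As a software sketch (Python, per-pixel scalar values; the function and parameter names are assumptions for illustration), the same decision could be written:

    def temporal_threshold_detect(current, previous, nxt, threshold):
        # Both the forward and backward differences must exceed the threshold
        # and must have the same polarity for a blotch to be indicated.
        fwd_diff = current - nxt
        back_diff = current - previous
        return (abs(fwd_diff) > threshold and abs(back_diff) > threshold
                and (fwd_diff > 0) == (back_diff > 0))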
Dirt Flag Combination Module 3700. Figure 12 Figure 12 shows the internal structure of the dirt flag combination module 3700. The blotch flag 3310 and the edge detection flag 3564 are fed as inputs to an OR gate 3708. The output of the OR gate is fed to an AND gate 3710. The 1-bit temporal threshold detect flag 3624 is also supplied as an input to the AND gate 3710.
The output of the AND gate 3710 will be HIGH if the temporal threshold detect flag 3624 is HIGH and at least one of the blotch detect flag and the edge detect flag is HIGH. The output signal from the AND gate 3710 is the output signal 3712RGB from the dirt flag combination module 3700. The three chrominance channels are processed separately by this module.
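The combination is logically simple; a sketch (Python, illustrative names) of what the gates 3708 and 3710 compute per pixel and per channel:

    def dirt_flag(blotch_flag, edge_flag, temporal_flag):
        # The temporal threshold detect must be set, together with at least
        # one of the blotch detect flag and the edge detect flag.
        return temporal_flag and (blotch_flag or edge_flag)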
First H and V Expansion Unit 3800. Figures 13A and 13B
Figure 13A shows the internal structure of the first H and V expansion unit 3800 for one colour channel. The dirt flags for a block of pixels a to i centred on a pixel e are shown in Figure 13B. The dirt flag data 3802 is input to this module and is fed through a sequence of delay modules comprising a FIFO delay unit 3804 that provides a delay of 1 line minus one clock cycle, a one clock cycle delay unit 3806, a one clock cycle delay unit 3808 and a FIFO delay unit 3810 that provides a delay of 1 line minus one clock cycle. An OR gate 3812 receives the undelayed dirt flag data 3802 and delayed dirt flags output from each of the four delay modules 3804, 3806, 3808 and 3810. The output of the OR gate 3812 is an expand (1) dirt flag 3814 that flags the pixel above, below, to the left and to the right of any pixel flagged as a blotch by the input dirt flag data 3802. The input dirt flag is also retained. The function of this module is to expand the blotch flags 3802 to reduce the chances of the edges of any detected blotches remaining unprocessed. Provided at least one of the flags b, d, e, f and h is logic 1, the OR gate 3812 will output logic 1, thus expanding the dirt flag. The output of the first H and V expansion module 3800 is fed to the AND gate 5010, also shown in Figure 5, that combines expand (1) dirt flags from the R, G and B channels to produce a combined expanded dirt flag 3852.
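In software the same cross-shaped expansion could be sketched as an array operation (Python/NumPy on a boolean flag map; this formulation, and the handling of picture borders by simple truncation, are assumptions of the sketch rather than part of the hardware description):

    import numpy as np

    def expand_dirt_flags_cross(flags):
        # 'flags' is a 2-D boolean array of dirt flags for one colour channel.
        # Every flagged pixel also flags the pixels above, below, left and
        # right of it; the original flags are retained (cf. OR gate 3812).
        out = flags.copy()
        out[1:, :] |= flags[:-1, :]   # pixel below a flagged pixel
        out[:-1, :] |= flags[1:, :]   # pixel above
        out[:, 1:] |= flags[:, :-1]   # pixel to the right
        out[:, :-1] |= flags[:, 1:]   # pixel to the left
        return out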
Second H and V Expansion Unit 3850. Figure 14 Figure 14A shows the internal structure of the second H and V expansion unit 3850.
The function of this module is to further expand the combined flags. If, in a 3x3 block of pixels as shown in Figure 14B, two opposite pixels vertically, horizontally or diagonally are flagged as a blotch, the pixel in the centre of the block is also flagged as a blotch. This is illustrated schematically in Figure 14B. This second flag expansion is particularly effective in the removal of image artefacts due to contamination by hair, because some parts of the hair tend to be picked up by the unexpanded blotch flags while other parts can remain undetected. The module comprises a sequence of a one clock delay 3854, a one clock delay 3856, a FIFO delay 3858 of 1 line minus two clocks, a one clock delay 3860, a one clock delay 3868, a FIFO delay 3866 of 1 line minus two clocks, a one clock delay 3864, and a one clock delay 3862. The sequence of delays creates the block of flags a to i shown by way of example in Figure 14B.
For that block, AND gate 3870 receives flags a and i; AND gate 3872 receives flags h and b; AND gate 3874 receives flags g and c; and AND gate 3876 receives flags f and d. Flag e is output by delay 3860. An OR gate 3878 has inputs connected to the AND gates. It will be seen from Figures 14A and 14B that if flags a and i input to AND gate 3870 are both logic 1, then logic 1 is output to OR gate 3878. If flag e is, for example, logic 0, it is replaced by the logic 1 output of gate 3870, thus expanding the flag.
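A corresponding sketch of the second expansion stage (Python/NumPy; the array-based formulation and border handling are assumptions of the sketch):

    import numpy as np

    def expand_dirt_flags_opposite(flags):
        # Flag the centre of every 3x3 block in which two opposite pixels
        # (vertically, horizontally or diagonally) are already flagged,
        # mirroring AND gates 3870-3876 feeding OR gate 3878.
        out = flags.copy()
        centre = out[1:-1, 1:-1]
        centre |= flags[:-2, :-2] & flags[2:, 2:]     # opposite diagonal pair a/i
        centre |= flags[:-2, 1:-1] & flags[2:, 1:-1]  # vertical pair b/h
        centre |= flags[:-2, 2:] & flags[2:, :-2]     # opposite diagonal pair c/g
        centre |= flags[1:-1, :-2] & flags[1:-1, 2:]  # horizontal pair d/f
        return out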
Median Filter Blotch Replacement Module 3900. Figure 15 Figure 15 illustrates schematically the components of the median filter blotch replacement module 3900. Component sub-modules of the median filter blotch replacement module are described in detail below with reference to Figures 16, 17, 18 and 19. The inputs to the module 3900 are the expand (2) dirt flag, the brightness compensated previous and next frame data 3230P and 3230N respectively, the current frame data 3506 and the scene change pointer 3440. The P-frame data 3230P, the N-frame data 3230N and the scene change pointer 3440 are supplied as inputs to a median input selection module 3908. The median input selection module 3908 determines which frames should be used in the subsequent median filter calculation on the basis of the scene change information provided by the scene change pointer 3440.
As explained above in the description of the scene change detection module, the presence of a scene change means that the median filter must disregard particular frames. Thus the median input selection module 3908 replaces any frames in the direction in which there is a scene change by a duplicate of the frame in the other direction. The median input selection module 3908 generates a previous frame signal 3930 and a next frame signal 3932 as outputs and these signals are supplied as inputs to a 3D median filter 3910.
The 3D median filter 3910 performs a three dimensional median filtering operation on those pixels flagged as blotches by the expand (2) dirt flag input.
The three dimensions concerned are horizontal, vertical and temporal. The median filtered values for the blotch pixels are intended for use in reconstructing the image portions obscured by blotches. The 3D median filter 3910 generates an output signal 3920RGB comprising median filtered values for all three chrominance channels. The median filtered output signal 3920RGB is supplied as input to a replacement checking module 3912.
The replacement checking module 3912 checks the median filtered values 3920RGB against the local brightness level of the current frame that is supplied as an input signal 3150 to the module. The median filtered values 3920RGB are also
checked against the P-frame data 3230P, the N-frame data 3230N and the current frame data 3506 which corresponds to the pixel values of the blotch. The replacement values 3920RGB should be closer to the P-frame data 3230P, the N-frame data 3230N and the local brightness level 3150 than the original current frame data 3506 is. The scene change pointer 3440 is used to exclude comparisons of data in the direction of a scene change. Furthermore the original data 3506 should differ from the replacement with the right polarity for the type of film: blotches should be dark on positive but light on negative film. The replacement checking module 3912 generates as output a median filtered values signal 3922RGB and a dirt pointer 3918. The dirt pointer 3918 cancels blotch replacements where the median filtered values are ill-matched.
A temporal filter 3926 receives as input the previous frame signal 3930 and the next frame signal 3932 from the median input selection module 3908 and also the current frame data 3506. The temporal filter 3926 applies a simple temporal filter to the input data defined by the following equation:
Temporal filter = 0.25 * (Previous + 2 * Current + Next). The output of the temporal filter 3926 is supplied as input to a blotch removal output selection module 3924. This module additionally receives as input the median filtered values signal 3922RGB, the dirt pointer 3918, the scratch pointer 3304 and the current frame data 3506. If the scratch pointer 3304 indicates that there was a scratch at the current location, but it is not flagged as a blotch by the dirt pointer 3918, then the output 3927 of the temporal filter 3926 is used to smooth out the scratch temporally. The median filtered values signal 3922RGB is selected as output provided that the dirt pointer 3918 flags the pixel as a blotch. The blotch removal output selection module 3924 generates a median filtered output signal 3934RGB.
Median Input Select Module 3908, Figure 16 Figure 16 shows the internal structure of the median input selection module 3908. This module determines the frames to be treated as the P-frame and the N-frame for the median filter calculation. The standard motion compensated P-frame and N-frame are used except in cases where there is a scene change. The frame in the direction of any scene change is replaced with a duplicate of the frame in the opposite direction, for example if there is a forward scene change then the N-frame will be replaced by a copy of the P-frame in the median filter calculation. A scene change in
both directions means that blotches cannot be repaired and in this case two copies of the current frame data are input to the median filter calculation.
The scene change pointer 3440 is fed into a decode module 3940 that decodes the scene change information. The decode module outputs a signal 3950 to a select module 3942 indicating a backward scene change; a signal 3954 to a select module 3944 indicating a forward scene change; and a signal 3952 indicating a scene change in both directions that is supplied as an input to a select module 3946 and to a select module 3948.
The select modules 3942 and 3946 determine the P-frame input to the median filter. The select module 3942 receives the brightness compensated P-frame data 3230P, the brightness compensated N-frame data 3230N and the backward scene change flag 3950. If the backward scene change flag 3950 is HIGH then the select module 3942 will output the N-frame data, otherwise it will output the P-frame data.
The output of the select module 3942 is supplied as input to the select module 3946.
The select module 3946 also receives as inputs the current frame data 3506 and the both-directions scene change flag 3952. If the both-directions scene change flag 3952 is HIGH then the "P-frame output" of the select module 3946 is a signal 3956 comprising the current frame data, otherwise the signal 3956 comprises the output of the select module 3942.
The select modules 3944 and 3948 determine the N-frame input to the median filter. The select module 3944 receives the brightness compensated P-frame data 3230P, the brightness compensated N-frame data 3230N and the forward scene change flag 3954. If the forward scene change flag 3954 is HIGH then the select module 3944 will output the P-frame data, otherwise it will output the N-frame data. The output of the select module 3944 is supplied as input to the select module 3948. The select module 3948 also receives as input the current frame data 3506 and the both-directions scene change flag 3952. If the both-directions scene change flag 3952 is HIGH then the "N-frame output" of the select module 3948 is a signal 3958 comprising the current frame data, otherwise the signal 3958 comprises the output of the select module 3944.
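A sketch of this selection logic (Python; the string-valued scene change argument and the function name are assumptions for illustration):

    def select_median_inputs(prev_frame, cur_frame, next_frame, scene_change):
        # Returns the (P-frame, N-frame) data to feed to the median filter.
        if scene_change == "both":
            # Blotches cannot be repaired temporally: use the current frame twice.
            return cur_frame, cur_frame
        if scene_change == "backward":
            # Replace the P-frame with a copy of the N-frame.
            return next_frame, next_frame
        if scene_change == "forward":
            # Replace the N-frame with a copy of the P-frame.
            return prev_frame, prev_frame
        return prev_frame, next_frame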
3D Median Filter Module 3910. Figure 17 Figure 17 shows the internal structure of the 3D median filter module 3910. The median filter 3910 replaces the central pixel value in a specified group of pixels by the median value calculated from the pixel values of the central pixel and its neighbours within the group. In an alternative implementation of a median filter, all pixels in the group deviating by more than a predetermined threshold from the median value are replaced by the median value. Five pixels each from the P-frame and the N-frame and a single pixel from the current frame make up the specified group of pixels that contribute to the median filtering operation. In the current frame, only the pixel value currently being replaced contributes to the median filter, because the surrounding pixels in the same frame as a blotch are likely to be blotches too, so they are not used for estimating the correct data with which to replace the blotch.
The next frame data 3932 produced by the median input selection module 3908 is fed through a series comprising FIFO delay module 3960 that provides a delay of 1 line minus one clock-cycle, a one clock-cycle delay unit 3968, a one clock-cycle delay unit 3970 and a FIFO delay module 3972 that provides a delay of 1 line minus one clock-cycle. These four delay modules effectively select a pixel to the right, to the left, above and below the pixel of the next frame corresponding, that is related by a motion vector, to the pixel of the current frame being replaced. The outputs of each of the four delay modules in the series are supplied as inputs to a median filter unit 3966.
The previous frame data 3930 produced by the median input selection module 3908 is fed through a series comprising FIFO delay module 3964 that provides a delay of 1 line minus one clock-cycle, a one clock-cycle delay unit 3974, a one clock-cycle delay unit 3976 and a FIFO delay module 3978 that provides a delay of 1 line minus one clock-cycle. These four delay modules effectively select a pixel to the right, to the left, above and below the pixel of the previous frame corresponding, that is related by a motion vector, to the pixel of the current frame being replaced. The outputs of each of the four delay modules in the series are supplied as inputs to the median filter unit 3966.
The current frame data 3506 is fed into the median filter unit 3966 via a FIFO delay module 3962 that provides a delay of 1 line. The current frame data is also
routed to a switch 3984 via a FIFO compensation delay module 3980. The FIFO compensation delay module 3980 compensates for the delays through the FIFOs 3960, 3968, 3970, 3972 (or 3964, 3974, 3976, 3978) and the filter 3966. The median filter unit 3966 produces a median filtered output signal 3982 that is supplied as an input to the switch 3984. The switch 3984 operates in dependence upon the value of the expand (2) dirt flag that is supplied as a further input to the switch. If the expand (2) dirt flag is HIGH then the median filtered output 3982 is selected as an output signal 3986 of this 3D median filter module, otherwise the current frame data supplied via the FIFO compensation delay module 3980 is selected as the output signal 3986.
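A software sketch of the eleven-sample median replacement (Python/NumPy; the per-pixel loop, the skipping of picture borders and the function name are assumptions of the sketch, not part of the hardware description):

    import numpy as np

    def median_replace(cur, prev, nxt, dirt_flags):
        # For each flagged pixel, take the median of the current pixel plus the
        # co-sited pixel and its four neighbours in both the (motion and
        # brightness compensated) previous and next frames.
        out = cur.copy()
        h, w = cur.shape
        for y, x in zip(*np.nonzero(dirt_flags)):
            if 0 < y < h - 1 and 0 < x < w - 1:
                samples = [cur[y, x]]
                for frame in (prev, nxt):
                    samples += [frame[y, x], frame[y - 1, x], frame[y + 1, x],
                                frame[y, x - 1], frame[y, x + 1]]
                out[y, x] = np.median(samples)
        return out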
Blotch Replacement Checking Module 3912. Figure 18 Figure 18 shows the internal structure of the blotch replacement checking module 3912. This module performs a series of checks on the median filtered blotch replacement values to determine if these replacement values match the previous and next frames and the local brightness levels sufficiently well. If the match is determined to be poor then the corresponding blotch pixel will be restored to its original value and the median filtered replacement values will not be used.
A first check ensures that the replacement chrominance signal value minus the corresponding original current frame value is less than zero for negative film but greater than zero for positive film. This check is performed by a subtractor 3988 and an exclusive OR (EOR) gate 3994. The subtractor 3988 subtracts the original current frame data 3506 from the median filtered replacement data 3920RGB and the sign of this difference is supplied to the EOR gate 3994. The EOR gate 3994 receives a neg/pos flag as input that specifies whether the video source material was negative
or positive film. The EOR gate 3994 generates a HIGH output only if:
* Replacement - Original < 0, for negative film
* Replacement - Original > 0, for positive film.
The output of the EOR gate 3994 is fed to an AND gate 3998 where the result of this test is combined with the results of the remaining tests.
The scene change pointer 3928 is supplied as input to a decoder 3993 that generates a flag to indicate whether a backwards scene change has occurred and a flag to indicate whether a forward scene change has occurred. This scene change information is used to override consistency checks on replacement pixel values with
corresponding motion-compensated P-frame and N-frame pixel values in circumstances where a scene change has occurred. If the forward scene change flag has logic value one then the output of the OR gate 3996 will also have logic value one, thus overriding the sign bit produced by a subtractor 3990B. Similarly, if the backward scene change flag has logic value one then the output of the OR gate 3995 will also have logic value one, thus overriding the sign bit produced by a subtractor 3989B.
A second check ensures that the absolute value of the difference between the replacement chrominance value and the corresponding chrominance value from the motion compensated P-frame is less than the absolute value of the difference between the chrominance value for the original current frame and the motion
compensated P-frame. Thus it is a requirement that ABS (Replacement-Previous) < ABS (Original-Previous). This check is performed by a triplet of subtractors 3989A, B, C and an OR gate 3995. The subtractor 3989A subtracts the previous frame signal 3230P from the median filtered replacement signal 3920 and feeds the absolute value of this difference to the subtractor 3989B. The subtractor 3989C subtracts the previous frame signal 3230P from the original current frame signal 3506 and feeds the absolute value of this difference to the subtractor 3989B. The subtractor 3989B determines the sign of the difference between ABS(Replacement - Previous) and
ABS (Original-Previous) and supplies this sign information to the OR gate 3995. The OR gate 3995 also receives as input the backwards scene change flag from the decoder 3993. The output of the OR gate 3995 will be high provided that either ABS (Replacement-Previous) < ABS (Original-Previous) or there has been a backward scene change. The output of the OR gate will also be HIGH if both of these conditions are met.
A third check ensures that the absolute value of the difference between the replacement chrominance value and the corresponding chrominance value from the motion compensated N-frame is less than the absolute value of the difference between the chrominance value for the original current frame and the motion compensated Nframe. Thus it is a requirement that ABS (Replacement-Next) < ABS (Original- Next). This check is performed by a triplet of subtractors 3990A, B, C and an OR gate 3996. The subtractor 3990A subtracts the next frame signal 3230N from the median
filtered replacement pixel-value signal 3920 and feeds the absolute value of this difference to the subtractor 3990B. The subtractor 3990C subtracts the next frame signal 3230N from the original current frame signal 3506 and feeds the absolute value of this difference to the subtractor 3990B. The subtractor 3990B determines the sign of the difference between ABS (Replacement-Next) and ABS (Original-Next) and supplies this sign information to the OR gate 3996. The OR gate 3996 also receives as input the forwards scene change flag from the decoder 3993. The output of the OR gate 3996 will be high provided that either ABS (Replacement-Next) < ABS (Original - Next) or there has been a forwards scene change. The output of the OR gate will also be HIGH if both of these conditions are met.
A fourth check ensures that the absolute value of the difference between the replacement pixel value (Replacement) and the local brightness level (Brightness) is less than the absolute value of the difference between pixel value (Original) for the original current frame and the local brightness level. Local brightness is signal 3150 of Figure 6 representing the brightness of an area defined by the parameter D. Thus it is a requirement that ABS (Replacement-Brightness) < ABS (Original-Brightness).
This check is performed by a triplet of subtractors 3991A, B, C. The subtractor 3991A subtracts the local brightness signal 3150 from the median filtered replacement pixel-value signal 3920 and feeds the absolute value of this difference to the subtractor 3991B. The subtractor 3991C subtracts the local brightness signal 3150 from the original current frame signal 3506 and feeds the absolute value of this difference to the subtractor 3991B. The subtractor 3991B determines the sign of the difference between ABS (Replacement-Brightness) and ABS (Original-Brightness) and supplies this sign information to the AND gate 3998. If ABS (Replacement-Brightness) < ABS (Original-Brightness) then the input to the AND gate 3998 will be HIGH indicating that the fourth check is TRUE.
The AND gate 3998 receives four inputs corresponding to TRUE/FALSE results for the four checks described above. A fifth input to the AND gate 3998 is the expand (2) dirt flag 3880. If the results of the first four checks were all TRUE then the AND gate 3998 generates a HIGH value for the dirt flag output signal 3918, otherwise the dirt flag 3918 output signal is assigned a LOW value.
A fifth and final check on the blotch replacements performed by the blotch replacement checking module 3912 involves a comparison of brightness differences with the predetermined temporal difference threshold 3618 (see also Figure 11). This check is enabled by a subtractor 3992 that receives a first input from the subtractor 3991A corresponding to (Replacement-Brightness) and a second input of the predetermined temporal difference threshold 3618. The subtractor 3992 determines whether the value (Replacement-Brightness) lies above or below the temporal difference threshold. The check requires that for negative film (Replacement - Brightness) < temporal difference threshold, and for positive film that (Brightness - Replacement) < temporal difference threshold. Thus an Exclusive OR gate (EOR) 3999 receives the output of subtractor 3992 and the neg-pos signal. The output of the EOR gate 3999 is supplied as input to a select module 3997. This select module also receives the local brightness level 3150 and the median filtered signal values 3920RGB as input. The selector determines the result of the fifth check, taking account of whether the film original was negative or positive film. If the fifth check gives a FALSE result then the selector 3997 replaces the median filtered values for the current pixel with the current local brightness. The selector generates an output signal 3922.
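The first four acceptance checks can be summarised in software as follows (Python, per-pixel scalar values; the fifth check, which clips the replacement back to the local brightness when the temporal difference threshold is exceeded, is omitted from this sketch, and all names are illustrative):

    def replacement_ok(repl, orig, prev, nxt, brightness, negative_film, scene_change):
        # 1. The replacement must differ from the original with the right
        #    polarity for the film type.
        polarity_ok = (repl - orig < 0) if negative_film else (repl - orig > 0)
        # 2./3. The replacement must be closer than the original to the previous
        #       and next frame values, unless a scene change disables a direction.
        prev_ok = (scene_change in ("backward", "both")
                   or abs(repl - prev) < abs(orig - prev))
        next_ok = (scene_change in ("forward", "both")
                   or abs(repl - nxt) < abs(orig - nxt))
        # 4. The replacement must be closer than the original to the local brightness.
        brightness_ok = abs(repl - brightness) < abs(orig - brightness)
        return polarity_ok and prev_ok and next_ok and brightness_ok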
Output Selection Module 3924, Figure 19 Figure 19 shows the internal structure of the blotch removal output selection module 3924 that is a component of the median filter blotch replacement module 3900. A select module 3925 receives as inputs the scratch pointer 3916, the original current frame data 3506 and the temporally filtered data 3927. If the scratch pointer flag 3916 is HIGH then the select module 3925 generates an output signal 3926 comprising the temporally filtered data, otherwise the output signal 3926 comprises the current frame data. The output signal 3926 is fed to a select module 3929.
The select module 3929 also receives the dirt flag 3918 produced by the replacement checking module 3912 and the median filtered values 3922RGB from the check replacement module 3912 as input signals. If the dirt pointer flag 3918 is HIGH then the select module 3929 outputs a signal 3934RGB comprising the median filtered
values 3922, otherwise the output signal 3934RGB comprises the output of the select module 3925.
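The final selection can be summarised as follows (Python, per-pixel values; the function and parameter names are illustrative only):

    def select_output(original, temporal, median, scratch_flag, dirt_flag):
        # Median-filtered value for confirmed blotches; temporally filtered
        # value for scratches that are not flagged as blotches; otherwise the
        # original current frame value.
        if dirt_flag:
            return median
        if scratch_flag:
            return temporal
        return original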
In the embodiment of the invention described above, the blotch removal scheme processes input data in RGB format. Alternative embodiments could process data in other input formats such as YPbPr or YCrCb. The advantage of processing data in the RGB input format is that all three chrominance channels can be used to determine whether or not each blotch flag is valid. In the case where the input format is YPbPr or YCrCb, the Pb and Pr or the Cr and Cb signals, respectively, typically have noise levels that are too high for these channels to be useful for blotch validation. Therefore only the luminance data Y can be used. Although the Red and Blue channels in RGB suffer from higher levels of noise than the Green channel, all three channels have noise levels low enough that they can be used for blotch flag validation.
Modifications Whilst the invention has been described with reference to a system comprising primarily hardware for real-time processing of the video signal, the invention may be implemented by software in a data processing system. Such a software implementation, with current commonly available processors, would not operate in real time.
Attention is invited to cofiled patent applications referenced as follows which relate to other aspects of the telecine system of Figure 2 and the whole contents of which are incorporated herein by this reference: P/9893, 1-00-103, Application Number 01 ; P/9894, 1-00-109 Application Number 01 ; and P/9895, 1-00-113 Application Number 01

Claims (39)

1. Apparatus for reducing the visibility of defects in a video image represented by frames of original image data, the apparatus comprising: a motion compensator for producing motion compensated frames of image data; a differencing arrangement for producing a) a forwards local difference signal representing the local differences between local image portions of a current frame and a motion compensated succeeding frame, and b) a backwards local difference signal representing the local differences between local image portions of a current frame and a motion compensated preceding frame; a comparison arrangement for comparing the forwards and backwards local difference signals with thresholds to produce local comparison signals; a processor for processing the said current and motion compensated succeeding and preceding frames to produce replacement data, the processor being responsive to the local comparison signals to not base processing on local portions of the preceding and succeeding frames for which the local difference signal exceeds the said threshold; a defect detector; and a defect replacer, having an input for receiving the replacement data, for replacing a detected defect with replacement data.
2. Apparatus according to claim 1, wherein the comparison arrangement compares the forwards and backwards local difference signals with substantially identical thresholds.
3. Apparatus according to claim 1 or 2, wherein the said thresholds are predetermined thresholds.
4. Apparatus according to claim 1, 2 or 3, wherein the said processor includes: a filter for producing the replacement data; and a selector, responsive to the local comparison signal, for selectively providing the preceding and succeeding frames to the filter.
5. Apparatus according to claim 4, wherein the filter comprises a temporal filter for producing temporal replacement data.
6. Apparatus according to claim 4 or 5, wherein the filter comprises a median filter for producing median filtered replacement data.
7. Apparatus according to any preceding claim wherein a said local difference is between a function of the values of pixels in a group of pixels in the current frame and the function of the values of pixels in a group of pixels in a preceding or succeeding motion compensated frame, which group corresponds in position to the group of the current frame.
8. Apparatus according to claim 7, wherein the said group in the current frame excludes pixels indicated by the defect detector to be defects.
9. Apparatus according to claim 7 or 8 wherein the said function is the average of the said values.
10. Apparatus according to any preceding claim, wherein the said forwards and backwards differences represent brightness differences.
11. A method of reducing the visibility of defects in a video image represented by frames of original image data, the method comprising the steps of: producing motion compensated frames of image data; producing a) a forwards local difference signal representing the local differences between local image portions of a current frame and a
motion compensated succeeding frame, and b) a backwards local difference signal representing the local differences between local image portions of a current frame and a motion compensated preceding frame; comparing the forwards and backwards local difference signals with thresholds to produce local comparison signals; processing the said current and motion compensated succeeding and preceding frames to produce replacement data, the processing being responsive to the local comparison signals to not base processing on local portions of the preceding and succeeding frames for which the local difference signal exceeds the said threshold; detecting a defect; and receiving the replacement data, and replacing a detected defect with replacement data.
12. A computer program product comprising instructions which when run on a data processing system implement the method of claim 11.
13. Apparatus for reducing the visibility of defects in a video image represented by frames of original image data, the apparatus comprising: a defect detector for producing a signal indicating a defect; a signal processor for producing, from the image data, replacement data for replacing a defect; a checking processor for checking the replacement data against at least one reference criterion to produce a replacement signal indicating whether or not the replacement data is suitable for use in place of the original image data; and a defect replacer for replacing original image data indicated by the defect indicating signal provided such replacement is indicated by the replacement signal.
14. Apparatus according to claim 13, wherein a said reference criterion is whether the image data represents a positive or negative image as
indicated by a pos-neg signal, and the replacement data is consistent with the type of image indicated by the pos-neg signal.
15. Apparatus according to claim 14, wherein the indicating signal is dependent on a comparison of the sign of the difference between the replacement and original data with the sign of the pos-neg signal.
16. Apparatus according to claim 13, 14 or 15, wherein a said reference criterion is whether replacement data for a current frame differs from corresponding data in an adjacent frame less than the original data for a current frame differs from corresponding data in the adjacent frame.
17. Apparatus according to claim 16, comprising a calculator for: calculating the absolute values of a value A equal to said replacement data minus the said corresponding data, and a value B equal to said original data minus the said corresponding data; and for detecting whether the absolute value of A is less than or greater than the absolute value of B.
18. Apparatus according to claim 16 or 17 wherein the said adjacent frame is a frame preceding the current frame.
19. Apparatus according to claim 16 or 17 wherein the said adjacent frame is a frame succeeding the current frame.
20. Apparatus according to any one of claims 13 to 19, wherein a said reference criterion is whether replacement data for a current frame differs from a reference brightness less than the original data differs from the reference brightness.
21. Apparatus according to claim 20, wherein the reference brightness is a function of the values of pixels in a group of pixels of original image data, which group corresponds in position in a frame to the position of the image data subject to comparison with the reference brightness.
22. Apparatus according to claim 21 wherein the said function is the average.
23. Apparatus according to any one of claims 20 to 22, wherein a said reference criterion is whether the difference between the said replacement data for a current frame and the said reference brightness is less than a threshold level.
24. Apparatus according to claim 23, wherein if the said difference is greater than the threshold level, then the replacement data is replaced by the reference brightness.
25. Apparatus according to claim 23 or 24 wherein the said reference criterion for a negative image is whether replacement data minus the reference brightness is less than the threshold level.
26. Apparatus according to claim 23 or 24 wherein the said reference criterion for a positive image is whether the reference brightness minus replacement data is less than the threshold level.
27. A method of reducing the visibility of defects in a video image represented by frames of original image data, the method comprising the steps of: producing a signal indicating a defect; producing, from the image data, replacement data for replacing a defect;
checking the replacement data against at least one reference criterion to produce a replacement signal indicating whether or not the replacement data is suitable for use in place of the original image data; and replacing original image data indicated by the defect indicating signal provided such replacement is indicated by the replacement signal.
28. A computer program product comprising instructions which when run on a data processing system implement the method of claim 27.
29. Apparatus for reducing the visibility of a defect in a video image represented by frames of original image data, the apparatus comprising: a defect detector for producing a signal indicating a defect; an expander for expanding the defect indicating signal; a signal processor for producing, from the image data, replacement data for replacing a defect; and a defect replacer for replacing original image data indicated by the expanded defect indicating signal with the replacement data.
30. Apparatus according to claim 29, wherein the expander comprises first and second expansion stages.
31. Apparatus according to claim 30, wherein each expansion stage includes a sequence of delays, to which defect indicating signals are input, for defining a block of defect indicating signal positions, and at least one logic gate defining a predetermined logical function and having inputs coupled to the said positions and arranged to produce a defect indicating signal for any position in the block not having a defect indicating signal and related by the said predetermined logical function to other positions in the block.
32. Apparatus according to claim 31, wherein the first stage comprises an OR gate having inputs connected to the said positions.
33. Apparatus according to claim 31 or 32, wherein the second stage comprises AND gates each having inputs connected to predetermined positions in the block and an OR gate having inputs connected to the AND gates.
34. A method of reducing the visibility of a defect in a video image represented by frames of original image data, the method comprising: producing a signal indicating a defect; expanding the defect indicating signal; producing, from the image data, replacement data for replacing a defect; and replacing original image data indicated by the expanded defect indicating signal with the replacement data.
35. A computer program product arranged to carry out the method of claim 34.
36. Apparatus for reducing the visibility of defects in a video image represented by frames of original image data, the apparatus including: a motion compensator for producing motion compensated frames of image data; a differencing arrangement for producing a) a forwards local difference signal representing the local differences between local image portions of a current frame and a motion compensated succeeding frame, and b) a backwards local difference signal representing the local differences between local image portions of a current frame and a motion compensated preceding frame; a comparison arrangement for comparing the forwards and backwards local difference signals with thresholds to produce local comparison signals; a processor for processing the said current and motion compensated succeeding and preceding frames to produce replacement data, the processor being responsive to the local comparison signals to not base processing on local portions of the preceding and succeeding frames for which the local difference signal exceeds the said threshold; a checking processor for checking the replacement data against at least one reference criterion to produce a
replacement signal indicating whether or not the replacement data is suitable for use in place of the original image data; a defect detector for producing a defect indicating signal; an expander for expanding the defect indicating signal; and a defect replacer, having an input for receiving the replacement data, for replacing the original image data indicated by the expanded defect indicating signal with the replacement data.
37. A method of reducing the visibility of defects in a video image represented by frames of original image data, the method including: producing motion
compensated frames of image data; producing a) a forwards local difference signal representing the local differences between local image portions of a current frame and a motion compensated succeeding frame, and b) a backwards local difference signal representing the local differences between local image portions of a current frame and a motion compensated preceding frame; comparing the forwards and backwards local difference signals with thresholds to produce local comparison signals; processing the said current and motion compensated succeeding and preceding frames to produce replacement data, the processing being responsive to the local comparison signals to not base processing on local portions of the preceding and succeeding frames for which the local difference signal exceeds the said threshold; checking the replacement data against at least one reference criterion to produce a replacement signal indicating whether or not the replacement data is suitable for use in place of the original image data; producing a defect indicating signal; expanding the defect indicating signal; and receiving the replacement data, and replacing the original image data in accordance with the expanded defect indicating signal.
38. A telecine system comprising apparatus according to any one of the preceding apparatus claims.
39. A telecine system substantially as hereinbefore described with reference to the accompanying drawings.
GB0100518A 2001-01-09 2001-01-09 Reduction in defect visibilty in image signals Withdrawn GB2370932A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB0100518A GB2370932A (en) 2001-01-09 2001-01-09 Reduction in defect visibilty in image signals

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB0100518A GB2370932A (en) 2001-01-09 2001-01-09 Reduction in defect visibilty in image signals

Publications (2)

Publication Number Publication Date
GB0100518D0 GB0100518D0 (en) 2001-02-21
GB2370932A true GB2370932A (en) 2002-07-10

Family

ID=9906490

Family Applications (1)

Application Number Title Priority Date Filing Date
GB0100518A Withdrawn GB2370932A (en) 2001-01-09 2001-01-09 Reduction in defect visibilty in image signals

Country Status (1)

Country Link
GB (1) GB2370932A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007063912A1 (en) 2005-11-29 2007-06-07 Matsushita Electric Industrial Co., Ltd. Reproduction device
EP1924097A1 (en) * 2006-11-14 2008-05-21 Sony Deutschland Gmbh Motion and scene change detection using color components
EP2302582A1 (en) * 2009-08-21 2011-03-30 Snell Limited Correcting defects in an image
WO2012065759A1 (en) * 2010-11-16 2012-05-24 Thomson Licensing Method and apparatus for automatic film restoration
WO2013130478A1 (en) * 2012-02-29 2013-09-06 Dolby Laboratories Licensing Corporation Image metadata creation for improved image processing and content delivery
US9027329B2 (en) 2011-05-25 2015-05-12 GM Global Technology Operations LLC Method for determining load of a particulate filter

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2163619A (en) * 1984-08-21 1986-02-26 Sony Corp Error concealment in digital television signals
WO1993021728A1 (en) * 1992-04-13 1993-10-28 Dv Sweden Ab A method for detecting and removing errors exceeding a specific contrast in digital video signals
GB2343321A (en) * 1998-11-02 2000-05-03 Nokia Mobile Phones Ltd Error concealment in a video signal

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2163619A (en) * 1984-08-21 1986-02-26 Sony Corp Error concealment in digital television signals
WO1993021728A1 (en) * 1992-04-13 1993-10-28 Dv Sweden Ab A method for detecting and removing errors exceeding a specific contrast in digital video signals
GB2343321A (en) * 1998-11-02 2000-05-03 Nokia Mobile Phones Ltd Error concealment in a video signal

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8351756B2 (en) 2005-11-29 2013-01-08 Panasonic Corporation Reproduction device
EP1956830A1 (en) * 2005-11-29 2008-08-13 Matsushita Electric Industrial Co., Ltd. Reproduction device
EP1956830A4 (en) * 2005-11-29 2010-09-29 Panasonic Corp Reproduction device
WO2007063912A1 (en) 2005-11-29 2007-06-07 Matsushita Electric Industrial Co., Ltd. Reproduction device
EP1924097A1 (en) * 2006-11-14 2008-05-21 Sony Deutschland Gmbh Motion and scene change detection using color components
EP2302582A1 (en) * 2009-08-21 2011-03-30 Snell Limited Correcting defects in an image
US8515204B2 (en) 2009-08-21 2013-08-20 Snell Limited Correcting defects in an image
WO2012065759A1 (en) * 2010-11-16 2012-05-24 Thomson Licensing Method and apparatus for automatic film restoration
CN103210636A (en) * 2010-11-16 2013-07-17 汤姆逊许可公司 Method and apparatus for automatic film restoration
US9167219B2 (en) 2010-11-16 2015-10-20 Thomson Licensing Method and apparatus for automatic film restoration
US9027329B2 (en) 2011-05-25 2015-05-12 GM Global Technology Operations LLC Method for determining load of a particulate filter
WO2013130478A1 (en) * 2012-02-29 2013-09-06 Dolby Laboratories Licensing Corporation Image metadata creation for improved image processing and content delivery
US9819974B2 (en) 2012-02-29 2017-11-14 Dolby Laboratories Licensing Corporation Image metadata creation for improved image processing and content delivery

Also Published As

Publication number Publication date
GB0100518D0 (en) 2001-02-21

Similar Documents

Publication Publication Date Title
Kokaram On missing data treatment for degraded video and film archives: a survey and a new Bayesian approach
JP4644669B2 (en) Multi-view image generation
EP1483909B1 (en) Systems and methods for digitally re-mastering or otherwise modifying motion pictures or other image sequences data
Kokaram et al. Interpolation of missing data in image sequences
US6868190B1 (en) Methods for automatically and semi-automatically transforming digital image data to provide a desired image look
JP4074062B2 (en) Semantic object tracking in vector image sequences
Schallauer et al. Automatic restoration algorithms for 35mm film
Van Roosmalen et al. Correction of intensity flicker in old film sequences
CA2702163C (en) Image generation method and apparatus, program therefor, and storage medium which stores the program
KR20110042089A (en) Use of inpainting techniques for image correction
US20080170801A1 (en) Automatic digital film and video restoration
WO2006060509A1 (en) Artifact reduction in a digital video
CN111127376B (en) Digital video file repairing method and device
US5598226A (en) Reparing corrupted data in a frame of an image sequence
GB2370932A (en) Reduction in defect visibilty in image signals
JP4880807B2 (en) Method for detecting relative depth of object in one image from a pair of images
Wei et al. DA-DRN: A degradation-aware deep Retinex network for low-light image enhancement
Gangal et al. An improved motion-compensated restoration method for damaged color motion picture films
GB2356514A (en) Film defect correction
JP2002223374A (en) Device and method for removing noise
Maddalena Efficient methods for scratch removal in image sequences
GB2370933A (en) Detecting and reducing visibility of scratches in images.
GB2370934A (en) Noise reduction in video signals
Lee et al. Multi-image high dynamic range algorithm using a hybrid camera
Manikandan et al. A nonlinear decision-based algorithm for removal of strip lines, drop lines, blotches, band missing and impulses in images and videos

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)