GB2370934A - Noise reduction in video signals - Google Patents


Info

Publication number
GB2370934A
Authority
GB
United Kingdom
Prior art keywords
filter
pixel
image
threshold
video signal
Prior art date
Legal status
Withdrawn
Application number
GB0100520A
Other versions
GB0100520D0 (en)
Inventor
Ian Mclean
Sarah Witt
Current Assignee
Sony Europe Ltd
Original Assignee
Sony United Kingdom Ltd
Application filed by Sony United Kingdom Ltd filed Critical Sony United Kingdom Ltd
Priority to GB0100520A
Publication of GB0100520D0
Publication of GB2370934A
Status: Withdrawn

Classifications

    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02 Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031 Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • G11B27/034 Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof
    • H04N23/81 Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/14 Picture signal circuitry for video frequency region
    • H04N5/21 Circuitry for suppressing or minimising disturbance, e.g. moiré or halo
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/253 Picture signal generating by scanning motion picture films or slide opaques, e.g. for telecine
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/76 Television signal recording
    • H04N5/84 Television signal recording using optical recording
    • H04N5/843 Television signal recording using optical recording on film
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02 Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/022 Electronic editing of analogue information signals, e.g. audio or video signals

Abstract

A video signal is transformed (4830A-C) using a Hadamard/Walsh transform to produce frequency components representative of the image signal. In order to remove random noise, frequency components of the current frame are compared with values of the preceding and subsequent frames, and frequency components which are temporally uncorrelated are reduced (4900). The resulting frequency components are then inversely transformed (4840-4860) in order to produce a noise reduced video signal. Also described are various filters used in preprocessing of the video signal prior to transformation. Such filters may be temporal and/or spatial filters, and features such as weighted averaging of pixel values, filtering dependent on detail in portions of the image, combining of signals and grouping of pixels are described.

Description

Noise Reduction

The present invention relates to noise reduction. Illustrative embodiments of the invention relate to the reduction of grain noise in video signals produced from film.
Preferred embodiments of the invention relate to grain noise reduction apparatus and to such apparatus installed in telecine systems.
The book "Motion Picture Restoration" by A. C. Kokaram (ISBN 3-540-76040-7, Springer-Verlag London Limited, 1998) discloses various techniques for reducing noise in image sequences, especially in Chapter 10.
Telecine systems are apparatus designed to produce film-to-video transfers.
Figure 1 illustrates the use of a telecine machine in a video making process. A film camera 100 records images on acetate-based film that has three layers of silver halide emulsion sensitised to red, green and blue respectively. Once the film has been exposed it is processed to produce a film negative 200. The film negative 200 is inserted into a telecine machine 300 which transfers the recorded material on the negative to a videotape 400. The telecine machine includes a noise reduction unit 900 that consists of hardware and software dedicated to reducing the amount of noise present in the film negative that is transferred to the video copy 400. The video 400 can be played on a videotape recorder 500.
The video image may be edited and then transferred to film. For example the video may be edited "off-line" on a computer-based digital non-linear editing apparatus 600. The non-linear editing system has the flexibility of allowing footage to be edited starting at any point in the recorded sequence. The images used for digital editing are often a reduced resolution copy of the original source material. Digital editing of the images from the videotape 400 is performed and an edit decision list (EDL) is produced. The EDL is a file that identifies edit points by their timecode addresses and thus contains all the instructions for editing the programme. The EDL is then used to transfer the edit decisions to an on-line editing system where the film negative is cut according to the EDL to produce a high-resolution broadcast quality copy 700 of the edited video footage. Release prints are then made from the final negative 700 and supplied to a film distribution network 800.
Apart from its use in the film production process, the telecine has wider applicability in terms of transferring the final negative 700 to videotape for general distribution. A telecine such as Sony's Vialta (FVS-1000) is capable of producing digital video copies of a film original at various resolutions, ranging from 525/625, which is suitable for standard definition television (SDTV), through to resolutions appropriate for high definition television (HDTV). The input film media that is converted to video in the telecine can typically be selected from all types of 16mm and 35mm film stock, both negative and positive.
In order to take full advantage of the high compression and limited bandwidth of digital television transmission, high picture quality is necessary. Image quality obtained by producing video directly from a film original can be degraded by graininess resulting from undeveloped photosensitive chemical remaining on the film, blotches due to deposits of dust or dirt and scratches due to wear and tear on the film.
By performing noise reduction on the images read from the film prior to recording them in video format the visual quality of the final video images can be improved.
Furthermore reduced levels of noise mean that higher image compression ratios can be achieved.
Film records information by means of its sensitivity to light. Film consists of a base layer; at least one emulsion layer made up of silver nitrate or silver chloride plus additives suspended in a gelatin; and an antihalation backing layer which absorbs any light that penetrates the emulsion layer and would otherwise cause a halo effect in the captured image. The silver salts in the emulsion layer are photosensitive grains which react to light by changing state, and this is the means by which an image is recorded. As the intensity of light incident on an area of film increases, the number of grains induced to change state in that area will increase. The larger the grain, the less light it will take to form an image; however, the image will be less sharp. Colour film consists of three emulsion layers each of which is independently sensitive to a different primary colour of white light. The graininess effect observable in film images is caused by undeveloped silver salt grains. The effect is more noticeable in colour film because the emulsion layers of the colour channels react to light independently. Undeveloped grains are generally neither temporally correlated nor spatially correlated, and this lack of correlation makes the grain stand out. Temporally, the grain manifests itself as small oscillations as the film is played. Spatially, the graininess results in distinctive patterns in a still image resulting from slight differences between the colour channels. Grain is most noticeable in flat areas such as sky; static or slow moving areas such as mountains in a background; large moving areas; and mid to high-level luminance areas. Film to video transfer allows films of differing format to be used to produce shots in the video. This may result in different grain sizes in different shots in the video. Whilst the grain size of 16mm film is approximately the same as the grain size of 35mm film, once transferred to video the 16mm grain may be enlarged.
It is desirable to reduce noise in video signals to improve image quality and also to improve compression ratios when video is compressed for storage and/or transmission. Embodiments of the invention seek to reduce grain noise in video signals derived from film.
According to a first aspect of the present invention, there is provided apparatus for reducing random noise in a video signal comprising: a Hadamard/Walsh transformer for applying a Hadamard/Walsh transform to the video signal to produce sequency components; a sequency reducer for reducing sequency components which are temporally uncorrelated; and an inverse Hadamard/Walsh transformer for applying an inverse Hadamard/Walsh transform to the output of the sequency reducer.
Temporal lack of correlation is regarded as an indication of random noise: thus a sequency component which is temporally uncorrelated is reduced. It is possible that noise is correlated temporally by chance; such noise would not be reduced but it would not occur often.
An embodiment of the apparatus comprises means for comparing sequency components with a sequency dependent threshold and means for determining temporal correlation in dependence on components which have a predetermined relation to the threshold. Preferably the threshold with which a component is compared is proportional to the sequency of the component. That allows the preservation of low level high frequency detail in the signal.
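The first aspect can be illustrated with a small sketch. This is not the patent's implementation: the block size (8 samples), the base threshold, the 0.5 shrink factor and the "differs from both neighbours" test for temporal correlation are all illustrative assumptions; only the overall shape (transform, sequency-proportional threshold, reduce uncorrelated components, inverse transform) follows the text.

```python
import numpy as np

def hadamard_matrix(n):
    # Sylvester construction: H(2m) = [[H, H], [H, -H]], n a power of two
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def sequency_reduce(prev_blk, cur_blk, next_blk, base=4.0):
    """Transform three co-sited 8-sample blocks with an orthonormal
    Hadamard/Walsh transform, shrink components of the current block that
    are temporally uncorrelated with both neighbours, then inverse-transform.
    `base` and the 0.5 shrink factor are illustrative, not patent values."""
    H = hadamard_matrix(8) / np.sqrt(8)         # orthonormal and symmetric
    tp, tc, tn = H @ prev_blk, H @ cur_blk, H @ next_blk
    # sequency of each Sylvester-ordered row = number of sign changes in it
    seq = np.array([(np.diff(np.sign(r)) != 0).sum() for r in hadamard_matrix(8)])
    thresh = base * (1 + seq)                   # sequency-dependent threshold
    # "temporally uncorrelated": the component differs from BOTH neighbouring
    # frames' components by more than the threshold for its sequency
    uncorr = (np.abs(tc - tp) > thresh) & (np.abs(tc - tn) > thresh)
    ref = (tp + tn) / 2.0
    tc = np.where(uncorr, ref + 0.5 * (tc - ref), tc)   # reduce the deviation
    return H @ tc                               # H is its own inverse here
```

Because the threshold grows with sequency, low-level high-sequency detail survives, matching the stated preference for a threshold proportional to the sequency of the component.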
According to a second aspect of the invention, there is provided a temporal filter for reducing random noise in a video signal, the filter being arranged to:
calculate in respect of each pixel C of a current frame a weighted average of values P', C and N', where P' and N' are weighted values of pixels, corresponding to pixel C, in motion compensated preceding and succeeding frames, wherein the weighted values P' and N' tend towards C in those portions of the image represented by the video signal in which temporal change is greater than a lower threshold.
This reduces image distortion in areas of the image where there is rapid change. The filter is non-recursive thus avoiding motion artefacts which would otherwise be produced with moving images. Using both the preceding and succeeding frames has been found to give better results than using only one of those frames.
According to a third aspect of the invention there is provided a filter for reducing random noise in a video signal, comprising means for determining the amount of detail in portions of the image represented by the video signal and means for filtering the portions of the image where the amount of detail is less than a predetermined amount.
This preserves image detail.
According to a fourth aspect of the invention there is provided a filter for reducing random noise in a video signal representing an image, the filter being arranged to calculate a combination of a first video signal and a second, spatially filtered, video signal, wherein the combination comprises the first video signal in portions of the image where detail is high, the second video signal in portions of the image where detail is low, and a combination of the first and second video signals in other portions of the image.
This tends to preserve more image detail in areas of high detail than if only spatial filtering were applied to the image, allows maximal filtering where detail is low, and provides an intermediate level of filtering where the amount of detail lies between the high and low extremes.
In a preferred embodiment, the first signal is a signal which is temporally filtered to reduce random noise.
A preferred embodiment comprises means for determining the spatial detail gradients in the image to detect the level of detail in the image. Most preferably, the said first and second video signals are combined in dependence on a key signal k, where k is dependent on spatial detail gradients in the image. Thus the proportions of spatial and temporal filtering depend on the level of detail, preserving detail in the image.
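As a sketch of this keyed combination: the key k is mapped from a detail gradient onto [0, 1] and used as a cross-fade weight. The `lo`/`hi` detail thresholds and the linear ramp are assumptions for illustration; the patent only requires k to depend on the spatial detail gradients.

```python
import numpy as np

def detail_key(gradient, lo, hi):
    """Map a spatial detail gradient to a key k in [0, 1]: k = 0 in
    low-detail areas, k = 1 in high-detail areas, linear ramp between
    (the lo/hi thresholds and the ramp shape are illustrative)."""
    return np.clip((gradient - lo) / float(hi - lo), 0.0, 1.0)

def detail_dependent_mix(first, second, k):
    # first:  lightly filtered signal, kept where detail is high (k -> 1)
    # second: spatially filtered signal, used where detail is low (k -> 0)
    return k * first + (1.0 - k) * second
```

With k = 1 the output is the first (detail-preserving) signal, with k = 0 it is the fully filtered second signal, and intermediate gradients give the intermediate level of filtering described above.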
According to a fifth aspect of the invention, there is provided a filter for reducing random noise in a video signal, the filter being arranged to: select, in a video frame, one pixel Cx,y in respect of which a spatial mean value is to be calculated; determine a threshold value, which threshold is dependent on the differences in value between the said one pixel Cx,y and pixels Px,y and/or Nx,y, corresponding to the said one pixel Cx,y, in a motion compensated preceding frame P and/or a motion compensated succeeding frame N; and calculate the said spatial mean value in dependence on the values of a group of selected pixels in the said frame, which selected pixels are adjacent the said one pixel, each pixel in the said group being selected on the basis that it has a value such that the difference between the value of that pixel and the said one pixel Cx,y is less than the said threshold value.
Thus pixels to be used in spatial filtering are selected on the basis of a temporal threshold. This is based on an assumption that a temporal disturbance due to noise is similar in magnitude to a spatial disturbance. Producing the threshold value in this way omits large detail edges from the spatial mean calculation, which edges would otherwise be blurred.
Also, it has been found that relating temporal noise to spatial noise in this way allows filtering to vary automatically through a frame without the need for user intervention.

According to a sixth aspect of the invention, there is provided a filter system for reducing grain noise in a video signal comprising: a temporal filter as specified in said second aspect of the invention; a first spatial filter as specified in said fifth aspect of the invention which spatially filters the temporally filtered signal; a second spatial filter as specified in said fourth aspect of the invention in which the said first signal is the output of the temporal filter, and the second signal is the output of the first spatial filter; and
an apparatus according to said first aspect of the invention to which the output of the second spatial filter is applied.
For a better understanding of the present invention, reference will now be made, by way of example, to the accompanying drawings in which:
Figure 1 is a schematic block diagram of a system for transferring film to video and, optionally, also for transferring edited video to film;
Figure 2 is a schematic block diagram of an illustrative noise reduction system embodying the present invention;
Figure 3 is a schematic block diagram of an example, in accordance with the present invention, of the grain noise reduction system of Figure 2;
Figure 4 is a schematic block diagram of an example, in accordance with the present invention, of the pre-processor of Figure 3;
Figure 5A illustrates a group of frames and a parameter "grainsize";
Figure 5B illustrates a parameter "meansize";
Figure 6A is a flow diagram showing temporal processing;
Figure 6B illustrates the overall operation of the flow diagram of Figure 6A;
Figure 7A is a flow diagram showing spatial processing;
Figure 8A is a diagram of the Hadamard transformer of Figure 3;
Figure 8B illustrates reordering of transform coefficients;
Figure 9 is a block diagram of a sequency adjustment module of the transformer of Figure 8A; and
Figures 10A and 10B illustrate hardware implementations of a Hadamard transform.
System Overview, Figure 2

The illustrative noise reduction system shown in Figure 2 may be used in a digital telecine system such as the Sony Vialta (FVS-1000) Telecine.
The noise reduction is performed by four modules: a block match vector generation and blotch detection module 1000; a scratch detection and removal module 2000; a blotch removal module 3000; and a grain reduction module 4000.
Video data in RGB format 901r, g, b is provided as input to the block motion vector generation and blotch detection module 1000. In this module a process known as motion estimation is performed, whereby temporal information is obtained about the contents of each image frame with respect to adjacent frames in a time sequence. The temporal information is used for the detection of blotches and grain noise but not for scratch detection. Motion estimation is performed using a technique known as "block matching", whereby a block of data of fixed size is defined around a central reference position and this block is compared with respective data blocks corresponding to the same portion of the image in the previous frame and in the next frame of a chronological sequence. A search range of plus or minus 40 lines or pixels is typically used. Forward and backward motion vectors 1005 are output from the motion estimation module 1000 and these are provided as input to the grain reduction module 4000. Blotch detection flags 905, indicating where blotches have been detected in frames, are produced by the motion estimation module 1000 and supplied to the blotch removal module 3000.
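Block matching of the kind described can be sketched as an exhaustive search minimising the sum of absolute differences (SAD). This is a minimal illustration, not the patent's implementation: the SAD criterion and the tiny search range are assumptions (the text quotes a range of plus or minus 40 lines or pixels).

```python
import numpy as np

def block_match(ref_block, frame, top, left, search=4):
    """Find the offset (dy, dx) within +/-search of (top, left) that
    minimises the SAD between ref_block and the candidate block in
    `frame`; the winning offset is the motion vector for the block."""
    h, w = ref_block.shape
    best, best_sad = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > frame.shape[0] or x + w > frame.shape[1]:
                continue                        # candidate falls off the frame
            sad = np.abs(frame[y:y + h, x:x + w] - ref_block).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best
```

Running the search against both the previous and the next frame yields the forward and backward motion vectors referred to above.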
The scratch removal module 2000 detects scratches on a frame-by-frame basis in dependence upon characteristics such as width, depth and orientation of the scratch.
Thresholds such as the minimum scratch width and depth for removal can be preselected by the user. Account is taken of the fact that scratches will be whiter than the surrounding area on a negative film but darker for a positive film. Once detected, scratches are concealed by interpolation of the adjacent undamaged image areas.
The blotch removal module 3000 receives blotch detection flags 905, RGB video data 901r, g, b and forward and backward motion vectors 1005 as input.
Blotch removal is performed if the area flagged as a blotch in the current frame differs from the corresponding area in motion compensated previous and next frames by more than a predetermined threshold. Blotches are removed by means of a 3-dimensional median filter, followed by a smoothing low-pass filter. Candidate image data for blotch replacement is cross-checked against adjacent frames and local brightness levels. The blotch removal module 3000 uses inter-frame differences in local brightness levels to detect scene changes and passes on scene change signals 909 to the grain reduction module 4000.
In addition to the scene change signals 909, the grain reduction module 4000 receives RGB video data 901r, g, b and forward and backward motion vectors 1005 as input. In the grain reduction module 4000, large grains and large spatial areas of
graininess are removed by a combination of temporal and spatial filtering. The Walsh-Hadamard transform technique is subsequently used by the grain reduction module 4000, to remove the remaining smaller grains and grains in more detailed image areas.
A grain simulation module 49 is provided. The grain simulation module 49 adds random noise to image areas from which scratches and blotches have been removed. The module in this example is at the input to the grain noise reduction module 4000. It receives from the blotch and scratch removal modules 3000 and 2000 flags indicating image areas where blotches and scratches have been removed and inserts into those areas random noise simulating grain noise. That is done for two purposes. Firstly, grain is preferably reduced but not removed in embodiments of the invention. Areas from which scratches and blotches have been removed would otherwise have no grain noise and would be too visible; adding grain noise reduces the visibility of those areas.
Secondly, the grain noise reduction module operates better if there is grain noise; areas with no grain noise would reduce the effectiveness of the reduction.
Grain Reduction, Figure 3.
Figure 3 schematically illustrates the two-stage grain noise reduction process in which three techniques are used to remove grain noise. The first grain noise reduction stage is performed by a pre-processing unit 5000 that removes the largest grains. The second grain noise reduction stage is performed by a Hadamard transform unit 4800 that serves to remove the smaller remaining grain. The grain noise reduction is performed in image areas having low image detail. The first pre-processing stage is performed by a temporal filter unit 5010 which comprises a temporal deviation calculation module 4200 and a temporal mean calculation module 4300.
The second pre-processing stage is performed by a spatial filter unit 5020 that comprises a horizontal spatial filter module 4500 and a vertical spatial filter module 4400. The outputs of the spatial filter modules 4400 and 4500 are mixed with the temporally filtered image according to a key created from a gradient map of the image. Temporal grain oscillations are removed by temporal filtering whilst spatial grain patterns are removed by spatial filtering. The temporal filtering is performed on the current frame provided that the motion compensation on previous and next frames has been accurately performed. Spatial filtering is performed for a given image area in dependence upon the level of detail present in that area. Grain is most noticeable in flat areas such as sky; static or slow moving areas such as mountains in a background; large moving areas; and mid to high-level luminance areas. Image areas determined to have above a maximum level of detail are left unfiltered because grains will not be noticed in such areas. Spatial filtering of detailed image areas is undesirable because it reduces the sharpness of the image. Accordingly, only the image areas that are found to have a level of spatial detail below a minimum threshold are replaced by a filtered image. Image areas with a level of detail between these two extremes are replaced by a weighted sum of the filtered and the unfiltered image areas.
Pre-processing, Figures 4 and 5.
The pre-processing stage of the grain noise reduction system is shown in greater detail in Figure 4. The multi-stage process of Figure 4 is independently performed on each of the red, green and blue (RGB) colour channels. Referring to Figure 5, the inputs to the pre-processing stage are a centre frame (C-frame) 1005C, a motion compensated previous frame (P-frame) 1005P and a motion compensated next frame (N-frame) 1005N. Temporal filtering is performed prior to spatial filtering.
Temporal filtering is performed on pixels. A temporal deviation detection module 4200 calculates a first key KeyP representing the difference between the C-frame pixel and the corresponding P-frame pixel and a second key KeyN representing the difference between the C-frame pixel and the corresponding N-frame pixel. A temporal mean calculation module 4300 calculates a temporal mean value Ctf for each pixel of the C-frame. When a large temporal deviation is detected, the temporal mean is weighted towards the C-frame pixel. The P-frame, the N-frame and the temporally filtered C-frame are provided as inputs to a vertical spatial mean module 4400. The temporally filtered C frame input to the vertical spatial mean module may be replaced by the original unfiltered C frame but this is not currently preferred because it would require a further video feed from the input 1005C to the module 4400.
The vertical spatial filter module 4400 and the horizontal spatial filter module 4500 both use mean value filters. The calculation scheme of these mean value filters will now be explained. A parameter "meansize" determines the maximum number of candidate pixels which can be considered for inclusion in the mean. Meansize is specified by the user. Referring to Figure 5B, "meansize" defines a window centred on the pixel C currently being processed and extending in a vertical direction for vertical filtering or in a horizontal direction for horizontal filtering. Figure 5B shows only a horizontal window. If the absolute value of the difference in magnitude between the central pixel and any given candidate pixel lies above a predetermined "grainsize" threshold then the candidate pixel is rejected. Referring to Figure 5A, the grainsize threshold is preferably calculated pixel by pixel (as described in more detail with reference to Figure 7) from temporal difference data, C-P, P-N and N-C. This procedure prevents spurious values or edges from being included in the mean. It also allows filtering to occur pixel by pixel. This mean filter calculation scheme presupposes that the temporal grain noise is comparable in magnitude to the spatial grain noise. It has been found that relating temporal grain noise to spatial grain noise in this way allows filtering to vary automatically through a frame without the need for user intervention. In an alternative scheme, "grainsize" is specified by the user.
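A minimal sketch of this thresholded mean filter, for the horizontal case. The exact combination used to derive the per-pixel grainsize from the C-P, P-N and N-C differences is an assumption (a simple average of absolute differences here); the window logic follows the description.

```python
import numpy as np

def grainsize_at(c, p, n, scale=1.0):
    """Per-pixel grainsize estimate from the temporal differences C-P, P-N
    and N-C. The averaging and `scale` factor are illustrative; the text
    only says the threshold is derived from this difference data."""
    return scale * (abs(c - p) + abs(p - n) + abs(n - c)) / 3.0

def thresholded_mean(row, x, meansize, grainsize):
    """Horizontal mean filter around row[x]: a candidate pixel inside the
    `meansize` window is included only if it differs from the centre pixel
    by less than `grainsize`, so strong edges stay out of the mean."""
    half = meansize // 2
    window = row[max(0, x - half):min(len(row), x + half + 1)]
    keep = np.abs(window - row[x]) < grainsize   # the centre always passes
    return window[keep].mean()
```

Vertical filtering is the same calculation applied down a column instead of along a row.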
The vertically filtered C-frame 4405 and the temporally filtered C-frame 4305 are provided as inputs to a first detail dependent mixer 4600, where they are combined in dependence upon a vertical detail key KeyV supplied by the spatial detail detection module 4100. A horizontal spatial mean calculation is performed by a horizontal spatial mean module 4500, which takes as its inputs the original N-frame 1005N, the original P-frame 1005P and a detail dependent vertically filtered C-frame 4605 produced by the first detail dependent mixer 4600. A horizontally filtered C-frame 4505 and the detail-dependent vertically filtered C-frame 4605 are supplied as inputs to a second detail dependent mixer 4700 and combined in dependence upon a horizontal detail key KeyH supplied by the spatial detail detection module 4100. The output signal 4705 of the second detail dependent mixer is the grain noise reduced C-frame.
Temporal Filtering, Figures 5 and 6.
The temporal filtering carried out by temporal filter unit 5010 involves temporal deviation detection followed by temporal mean calculation. Referring to Figure 5A, the temporal deviation detection module 4200 uses groups of frames comprising a C-frame 1005C, a P-frame 1005P and an N-frame 1005N to calculate
two temporal detail keys for each pixel of the C-frame. Figure 5A shows for each frame the pixels u, d, f and b in the up, down, forward and backward directions respectively. For each pixel C of the C-frame 1005C, a sum, Sc, is formed of the pixel value C and the adjacent pixel values Cu, Cd, Cf and Cb. Analogous sums, Sp and Sn, are formed for the corresponding pixel of the P-frame 1005P and of the N-frame 1005N.
In particular, the sums are given by the following formulae:
Sp = Pu + Pd + P + Pb + Pf
Sc = Cu + Cd + C + Cb + Cf
Sn = Nu + Nd + N + Nb + Nf
A previous frame temporal detail key, KeyP, is given by the absolute value of the difference between Sp and Sc, while a next frame temporal detail key, KeyN, is given by the absolute value of the difference between Sn and Sc:
KeyP = |Sc - Sp|
KeyN = |Sc - Sn|
These keys are used in the calculation of the temporal mean for each pixel.
The details of the temporal mean calculations are illustrated by the various stages of the flow chart of Figure 6. P represents a motion compensated previous frame's pixel, N represents a motion compensated next frame's pixel and C represents a centre frame pixel. A standard temporal mean would be given by (P + C + N)/3. However, embodiments of the present invention use a non-standard "deviation-dependent temporal mean" that is given by (P' + C + N')/3, as shown by stage 4370 of Figure 6.
Referring to the flow chart of Figure 6A and to Figure 6B, the user defines an upper threshold "hi" and a lower threshold "lo" for temporal deviations. Calculation of P' is performed as shown in the stages contained within unit 5030 of the flowchart.
At stage 4310 the temporal deviation key KeyP is compared with the lower threshold "lo". If KeyP is less than the lower threshold then P' is set equal to P at stage 4315. If however KeyP is greater than or equal to "lo", KeyP is subsequently compared to the upper threshold "hi" at stage 4320. If KeyP is greater than "hi", then P' is set equal to C at stage 4325. Otherwise KeyP must lie between the two thresholds "hi" and "lo", in which case P' is set equal to a linear combination of C and P at stage 4330.
Calculation of N' is performed as shown in the stages contained within unit 5040 in the flowchart of Figure 6. At stage 4340 the temporal deviation key KeyN is compared with the lower threshold "lo". If KeyN is less than "lo" then N' is set equal to N at stage 4345. If however KeyN is greater than or equal to "lo" then KeyN is compared with the upper threshold "hi" at stage 4350. If KeyN is greater than "hi" then N' is set equal to C at stage 4355. Otherwise KeyN must lie between thresholds "hi" and "lo", in which case, at stage 4360, N' is set equal to a linear combination of C and N. Finally, at stage 4370 the deviation-dependent temporal mean Ctf is determined by calculating the average of P', N' and C. The deviation-dependent temporal mean is equivalent to the full temporal mean only when the temporal deviations are small, that is, less than the "lo" threshold, whereas the mean is most heavily weighted towards the C-frame pixel value when the temporal deviations are large, that is, greater than the "hi" threshold. This reduces the distortion, which can arise with standard temporal filtering, of the image in image areas in which motion compensation has not been successful.
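The deviation-dependent temporal mean above can be sketched for a single pixel as follows. The threshold values and the choice of a linear blend between "lo" and "hi" are illustrative assumptions; the key and mean formulae follow the text.

```python
import numpy as np

def s(frame, y, x):
    # Five-pixel sum of the pixel and its up/down/back/forward neighbours
    # (the Sc, Sp and Sn sums of the text)
    return (frame[y, x] + frame[y - 1, x] + frame[y + 1, x]
            + frame[y, x - 1] + frame[y, x + 1])

def temporal_mean(P, C, N, y, x, lo=5.0, hi=20.0):
    """Deviation-dependent temporal mean (P' + C + N')/3 at pixel (y, x).
    P' (and likewise N') equals P below `lo`, equals C above `hi`, and is
    a linear blend in between. The lo/hi values are arbitrary examples."""
    key_p = abs(s(C, y, x) - s(P, y, x))   # KeyP = |Sc - Sp|
    key_n = abs(s(C, y, x) - s(N, y, x))   # KeyN = |Sc - Sn|

    def weighted(v, key):
        if key < lo:
            return v                        # small deviation: keep P (or N)
        if key > hi:
            return C[y, x]                  # large deviation: fall back to C
        t = (key - lo) / (hi - lo)          # linear combination in between
        return (1.0 - t) * v + t * C[y, x]

    return (weighted(P[y, x], key_p) + C[y, x] + weighted(N[y, x], key_n)) / 3.0
```

When motion compensation fails (large keys) the mean collapses to the current pixel, which is the distortion-avoiding behaviour described above.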
Spatial Filtering, Figure 7

The spatial filtering carried out by the vertical spatial mean module 4400 and the horizontal spatial mean module 4500 will now be explained in more detail. Spatial detail keys KeyH and KeyV are calculated for the horizontal and vertical directions respectively. Spatial detail is quantified using a standard edge detection method known as "Sobel edge detection". The two matrices below, known as Sobel convolution kernels, are applied to the image to provide a two-dimensional spatial gradient measurement:

-1  0  1        -1 -2 -1
-2  0  2         0  0  0
-1  0  1         1  2  1

Vertical Sobel matrix    Horizontal Sobel matrix
The first, vertical, kernel responds maximally to edges running vertically with respect to the picture grid while the second, horizontal, kernel responds maximally to edges running horizontally with respect to the picture grid. For each pixel C[x][y] of the C-frame, the spatial detail gradients are obtained by applying the appropriate kernel to a 3x3 grid of pixels formed by C[x][y] and the eight pixels adjacent to it as illustrated below.
A horizontal-detail gradient at pixel [x][y] is obtained by pre-multiplying the matrix corresponding to the 3x3 grid of C-frame pixels shown above by the 3x3 horizontal kernel and taking the absolute value of the sum of the elements of the 3x3 result matrix. Similarly, a vertical-detail gradient is obtained by pre-multiplying the matrix corresponding to the 3x3 grid of C-frame pixels by the 3x3 vertical kernel.
Thus:

Horizontal detail gradient [x][y]
  = | {C[x-1][y-1] + 2*C[x][y-1] + C[x+1][y-1]} - {C[x-1][y+1] + 2*C[x][y+1] + C[x+1][y+1]} |
  = KeyH

Vertical detail gradient [x][y]
  = | {C[x-1][y-1] + 2*C[x-1][y] + C[x-1][y+1]} - {C[x+1][y-1] + 2*C[x+1][y] + C[x+1][y+1]} |
  = KeyV

These horizontal and vertical detail gradients are subsequently used as spatial detail keys in the spatial and temporal mean calculations. The spatial filtering scheme described above obtains spatial detail keys KeyH and KeyV using only vertical and horizontal detail gradients for the C-frame.
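The KeyH and KeyV calculations can be modelled directly from the gradient equations (a sketch, not the hardware implementation; c is assumed to be a row-major 2-D array indexed c[y][x], and (x, y) an interior pixel):

```python
# Sketch of the spatial detail keys. KeyH differences the weighted row above
# against the weighted row below; KeyV differences the weighted column to the
# left against the weighted column to the right (1-2-1 Sobel weights).

def key_h(c, x, y):
    top = c[y - 1][x - 1] + 2 * c[y - 1][x] + c[y - 1][x + 1]
    bot = c[y + 1][x - 1] + 2 * c[y + 1][x] + c[y + 1][x + 1]
    return abs(top - bot)   # large for horizontal edges

def key_v(c, x, y):
    left  = c[y - 1][x - 1] + 2 * c[y][x - 1] + c[y + 1][x - 1]
    right = c[y - 1][x + 1] + 2 * c[y][x + 1] + c[y + 1][x + 1]
    return abs(left - right)  # large for vertical edges
```

A uniform region yields zero for both keys; a horizontal edge through the 3x3 neighbourhood raises KeyH while leaving KeyV at zero.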
In an alternative scheme, a more detailed calculation is implemented where there is a large amount of temporal variation in the image. In this scheme the user judges whether there is a large amount of temporal variation in the image and manually switches to this scheme. In this detailed calculation the vertical and horizontal detail gradients are calculated for the motion compensated P-frame and N-frame in addition to the C-frame, using the equations for KeyH and KeyV above but substituting P for C in P-frames and N for C in N-frames. The spatial detail key KeyH for each pixel is given by the largest of the three (P, N and C) horizontal gradients and the spatial detail key KeyV for each pixel is given by the largest of the three (P, N and C) vertical gradients. The use of motion compensated P-frames and N-frames in addition to C-frames gives better preservation of detail for images, such as waves, which have considerable temporal variance. Although the detail is preserved, it is done so at the expense of more grain noise remaining in the video image.
Spatial Mean Calculation, Figure 7
The flowchart of Figure 7 illustrates the stages of the horizontal spatial mean calculation. At stage 4510 a "grainsize" is determined for each pixel.
This grainsize is a measure of the magnitude of the difference in the representative value, in this case the R, G or B chrominance value, between two pixels. In a preferred embodiment represented by Figure 7, and referring to Figure 5A, the grainsize is taken to be the value corresponding to the maximum one of the temporal differences (C-P), (N-C) and (P-N) for the given pixel. This definition of grainsize is based on the assumption that the temporal effect of the grain is similar to the spatial effect. Alternatively, the grainsize is user-defined. At stage 4520 a counter n, a sum and a variable "size" are all initialised to zero. Size is a counter that keeps track of the number of values contributing to the mean. The loop for the horizontal spatial mean calculation for the pixel currently being processed involves stages 4530 through to 4570. In this loop a user-defined "meansize" specifies the maximum number of pixels in a horizontal window, centred on C[x][y], to be included in the calculation of the mean. The pixels in this horizontal window are called candidate pixels. The user-defined meansize value can be set to take account of the type of film being processed by the telecine. In particular, 16mm film when transferred to video may produce a larger video grain size than 35mm film, therefore a larger meansize may be specified for the 16mm film than for the 35mm film. Furthermore the meansize could be set larger for the B chrominance signal than for the R signal or the G signal because blue grain is generally larger than red or green grain. If meansize is set too large then detail could potentially be lost. The value of the variable "halfsize" used at stage 4530 is calculated directly from meansize according to the formula
halfsize = (meansize - 1) / 2.
The value of meansize must be an odd integer. At stage 4530, the difference between the value of each candidate pixel, C[x-halfsize+n][y], in the horizontal window specified by meansize and the pixel C[x][y] is compared to grainsize. If this difference is greater than or equal to grainsize then the calculation jumps to stage 4550, hence the candidate pixel is excluded from the current sum. This prevents inclusion of spurious values or edges in the spatial mean for pixel C[x][y]. If however the difference is less than grainsize then sum is incremented by the value of the candidate pixel at stage 4540 and "size" is incremented by one. The final value of the mean for the pixel currently being processed, C[x][y], is obtained at stage 4570 once every candidate pixel in the window specified by meansize has been compared with grainsize. The next pixel for which the horizontal spatial mean is to be calculated is identified at stage 4580 of the flowchart and the whole process is repeated starting from stage 4510. The mean calculation is performed for every pixel of the C-frame.
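The loop of stages 4530 to 4570 can be sketched as follows (illustrative only, not the patented hardware; row holds one scan line and x is assumed far enough from the frame edge that the window fits):

```python
# Sketch of the grain-limited horizontal spatial mean for one pixel. Candidate
# pixels whose difference from the centre pixel is >= grainsize are excluded,
# so edges and spurious values do not pollute the mean.

def horizontal_mean(row, x, meansize, grainsize):
    halfsize = (meansize - 1) // 2      # meansize must be an odd integer
    total, size = 0, 0
    for n in range(meansize):
        candidate = row[x - halfsize + n]
        if abs(candidate - row[x]) < grainsize:
            total += candidate          # stage 4540: include the candidate
            size += 1
    # with grainsize > 0 the centre pixel always contributes, so size >= 1
    return total / size
```

The vertical spatial mean is obtained by the same loop stepping the y index down a column instead of the x index along a row.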
The vertical spatial mean calculation is performed similarly to the horizontal spatial mean calculation of Figure 7. The main difference is that the y index corresponding to the vertical direction is incremented at stage 4530 rather than the horizontal index x.
Mixing in dependence on spatial detail keys
After the vertical spatial key KeyV, the horizontal spatial key KeyH, and the vertical and horizontal spatial means have been calculated for a given image region, the filtered image Cfilt and the original image Ctf are mixed on a pixel by pixel basis by forming a linear combination of the images in dependence upon a user-defined lower spatial detail threshold "loS" and an upper spatial detail threshold "hiS". In a preferred embodiment the linear combination is given by the following algorithm expressed in pseudocode:

if (KeyV > hiS) C = Ctf
else if (KeyV < loS) C = Cfilt
else C = Cfilt * (hiS - KeyV) / (hiS - loS) + Ctf * (KeyV - loS) / (hiS - loS)

where Ctf means the output of the temporal deviation dependent temporal mean module 4300 of Figure 4 and Cfilt means the output of the vertical spatial mean module 4400 of Figure 4. The horizontal detail dependent mixer 4700 operates according to an equivalent algorithm but KeyV is replaced by KeyH, Ctf is replaced by the output of the vertical mixer 4600 and Cfilt is replaced by the output of the horizontal spatial mean module 4500 of Figure 4.
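The vertical mixing algorithm can be written as a small function (a sketch under our own naming; the same function fed with KeyH and the appropriate inputs serves the horizontal mixer):

```python
# Sketch of the detail-dependent mix: high spatial detail keeps the unsmoothed
# value ctf, low detail uses the spatially filtered value cfilt, and keys
# between the thresholds blend the two linearly.

def detail_mix(ctf, cfilt, key, lo_s, hi_s):
    if key > hi_s:
        return ctf      # high detail: preserve the image
    if key < lo_s:
        return cfilt    # low detail: apply the spatial filter fully
    # linear combination between the two thresholds
    return (cfilt * (hi_s - key) + ctf * (key - lo_s)) / (hi_s - lo_s)
```

At key = loS the output equals Cfilt and at key = hiS it equals Ctf, so the blend is continuous across both thresholds.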
Hadamard/Walsh transform, Figures 8 to 10
The detail dependent mixing of filtered images described above considerably reduces undesirable loss of detail by preserving the original image in areas of high detail. Accordingly, the linear combination given by the above formula includes a proportionately higher contribution from the filtered image as the level of detail diminishes.
The pre-processing stage of Figure 3 serves to remove noise due to large area and large magnitude grains; the remaining grain noise is reduced by making use of the Hadamard Transform technique. This is the function of the Hadamard Transform unit 4800 of Figure 3.
The Hadamard Transform is the digital equivalent of the Fourier Transform.
The Fourier Transform converts a signal value in the time domain to an equivalent representation of that same signal in the frequency domain, whereas the Hadamard Transform converts a time sampled signal into a digital frequency or"sequency" signal.
The elements of the basis vectors of the Hadamard Transform take only the binary values ±1, therefore the transform can be computed using only additions and subtractions. This makes the transform fast to compute and easy to implement. The matrix H representing the Hadamard Transform is real and symmetric, hence it is unitary, i.e. the inverse matrix is equal to the complex conjugate of the transpose of the matrix. Unitary transforms are useful in performing image transforms because they conserve signal energy, decorrelate highly correlated input data and provide a representation of the image data in which a large fraction of the average energy of the image can be concentrated in a relatively small number of transform coefficients. The fact that the Hadamard Transform has good energy compaction properties for highly correlated images makes it particularly useful for image compression. In general, HDTV images are more highly correlated than their equivalent standard definition images.
For the purposes of performing the transform, the image is divided into blocks of size NxN pixels. A larger block size offers more efficient energy compaction of the transform but spatial variation of the image favours choice of a smaller block size.
A smaller block size is also favourable in terms of hardware implementation. If f(x), 0 <= x <= (N-1), are the N x 1 image pixel samples, N = 2^n, and b_i(x) is the value of the ith bit in the binary representation of the index x, then the one-dimensional Hadamard transform F is given by

    F[u] = (1/N) * sum_{x=0}^{N-1} f(x) * (-1)^( sum_{i=0}^{n-1} b_i(x) b_i(u) )

and, with the exception of the 1/N term, the transform is self-inverse:

    f(x) = sum_{u=0}^{N-1} F[u] * (-1)^( sum_{i=0}^{n-1} b_i(u) b_i(x) )
A two-dimensional Hadamard transform can be performed by applying two successive one-dimensional transforms. After the Hadamard transform is performed, it is necessary to reorder the transformed sequences F[u] so that they are arranged in terms of increasing digital frequency, i.e. sequency.
The Walsh transform is very similar to the Hadamard transform, the only difference being in the ordering of the transform as illustrated by the following examples:
For a 1D transform of length 4, Hadamard ordered:

4*F[0] = f(0) + f(1) + f(2) + f(3)
4*F[1] = f(0) - f(1) + f(2) - f(3)
4*F[2] = f(0) + f(1) - f(2) - f(3)
4*F[3] = f(0) - f(1) - f(2) + f(3)

For a 1D transform of length 4, Walsh (sequency) ordered:

4*F[0] = f(0) + f(1) + f(2) + f(3)
4*F[1] = f(0) + f(1) - f(2) - f(3)
4*F[2] = f(0) - f(1) - f(2) + f(3)
4*F[3] = f(0) - f(1) + f(2) - f(3)
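The Hadamard-ordered equations above can be checked against the recursive (Sylvester) construction of the Hadamard matrix. This is an illustrative sketch using plain Python lists, not part of the patented apparatus:

```python
# Sketch: build the order-4 Hadamard matrix from the order-2 matrix by a
# Kronecker product, then apply it to a length-4 signal. The rows of H4
# reproduce the Hadamard-ordered sign patterns listed in the text.

H2 = [[1, 1], [1, -1]]

def kron(a, b):
    """Kronecker product of two matrices held as nested lists."""
    return [[x * y for x in ra for y in rb] for ra in a for rb in b]

H4 = kron(H2, H2)

def hadamard4(f):
    """Return the unnormalised values 4*F[u] for a length-4 signal f."""
    return [sum(H4[u][x] * f[x] for x in range(4)) for u in range(4)]
```

For example, hadamard4 applied to (1, 2, 3, 4) yields the coefficients (10, -2, -4, 0), matching the four Hadamard-ordered equations evaluated by hand.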
Both Hadamard and Walsh transforms can be arranged in terms of sequency.
Given a Hadamard matrix HM representing the transform of the sampled intensities of an image of size MxM, the standard Hadamard transform takes of order M^3 operations to complete, but by making use of a recursion relation which relates a Hadamard transform of length M to the sum of two transforms of length M/2 it is possible to increase the speed of the computation by reducing the number of operations to order M log2 M. This higher speed computation is known as a Fast Hadamard Transform. The Hadamard transform unit 4800 applies Hadamard transforms to the image data and performs a temporal comparison of the sequency content thus obtained.
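The recursion that reduces the operation count can be sketched as the usual in-place butterfly (illustrative only; unnormalised and Hadamard ordered, for lengths that are powers of two):

```python
# Sketch of a fast Walsh-Hadamard transform: each pass combines pairs of
# half-length transforms with one addition and one subtraction per pair,
# giving O(M log2 M) operations for a length-M signal.

def fwht(f):
    f = list(f)                         # work on a copy
    h = 1
    while h < len(f):
        for i in range(0, len(f), 2 * h):
            for j in range(i, i + h):
                a, b = f[j], f[j + h]
                f[j], f[j + h] = a + b, a - b   # length-2 butterfly
        h *= 2
    return f
```

Because the unnormalised transform is self-inverse up to the factor 1/M noted earlier, applying fwht twice returns the input scaled by its length.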
The Hadamard Transform is performed separately on R, G and B chrominance channels. The completely random nature of the grain noise results in the existence of temporally uncorrelated sequency components in the Hadamard transformed image.
The invention recognises that grain noise can be reduced by comparing the sequency data of successive image frames and actively reducing the contribution from sequencies which are unmatched between frames.
The grain removal process is illustrated in Figure 8A. In a single cycle the C-frame, P-frame and N-frame are processed. At stage 4810 the one-dimensional Walsh-Hadamard transform is applied to horizontally ordered pixel data of a block, i.e. data read out along rows. The transform is applied separately to the P-frame at stage 4810A, the C-frame at stage 4810B and the N-frame at stage 4810C. Each of the three data blocks is re-ordered at stage 4820 by performing a horizontal to vertical transform as illustrated in Figure 8B. The horizontal to vertical transform means that the data is effectively read out down the columns of the NxN block rather than along the rows. At stage 4830, a second one-dimensional Walsh-Hadamard transform is applied to the vertically ordered data. In a sequency adjustment module 4900 the transformed data C*, P*, N*, corresponding to the C-frame, P-frame and N-frame input data respectively, are subjected to a sequency matching and adjustment process. In the sequency adjustment module 4900, the transformed data are ordered according to increasing (or decreasing) sequency and a sequency-dependent threshold is defined. The sequency adjustment process will be described in detail below with reference to Figure 9.
In Figure 8A, following reduction of unmatched sequencies in the sequency adjustment module 4900, at stage 4840 the Walsh-Hadamard transformed and partially reduced data for the C-frame is subjected to a first one-dimensional inverse Walsh-Hadamard transform. At stage 4850 the data block is then transformed from vertical to horizontal ordering and then at stage 4860, a second inverse Walsh-Hadamard transform is applied in the horizontal direction. The output of stage 4860 is C-frame image data with a reduced level of grain noise. The process of Figure 8A is performed separately for R, G and B chrominance signals and is performed separately on each NxN block of the image.
The sequency matching process performed in the sequency adjustment module 4900 is described in detail in Figure 9. Stage 4918 illustrates that the sequency-dependent threshold is a maximum for the lowest sequency and decreases linearly to zero as the sequency increases. The threshold can be further increased for higher sequencies in order to preserve any low level, high frequency detail that may be present in the image. The image transform C* of the current frame is compared with the threshold at stage 4910; the image transform P* of the previous frame is compared with the threshold at stage 4911; and the image transform N* of the next frame is compared with the threshold at stage 4912. Because the coefficients of the image transforms C*, P* and N* present at the inputs to the stages 4910, 4911 and 4912 at one time have the same sequency, they are compared in the stages 4910, 4911 and 4912 with the same threshold. A logical OR gate 4914 takes inputs from stages 4911 and 4912 and a logical AND gate 4913 takes a first input from stage 4910 and a second input from the output of the OR gate 4914. The output of the AND gate 4913 is input to a selector 4917. If the output of the AND gate 4913 is high then the selector 4917 outputs unchanged the transform C* of the current frame 4915. If however the output of the AND gate 4913 is low, the C* signal "reduced" by a reduction factor rf (in this example 0.3) at the multiplier 4916 is output by the selector 4917.
Thus if C* and at least one of P* or N* have values above the threshold for the current sequency, the transformed data will pass from stage 4900 to stage 4840 of Figure 8A unchanged. If however C* is above the threshold while the corresponding P* and N* are both below the threshold, the sequency is considered to be unmatched. As previously explained, unmatched sequencies are much more likely than matched sequencies to be associated with grain noise. For unmatched sequencies and for sequencies where C* is below the relevant threshold, the transformed data for C* is "reduced" by multiplying by a fractional scale factor. C* is also reduced if all of C*, P* and N* are below the threshold. This sequency reduction is performed by the multiplier 4916, which multiplies the C* input signal 4915 by a factor, and by the selector 4917, which determines whether or not the reduced C* signal is output. In this embodiment of the invention the factor used by the multiplier 4916 is chosen to be 0.3 but this value can be varied as required by the user. Uncorrelated sequencies can be completely removed by setting the scale factor to zero but this has the disadvantage of producing some image distortion. The "sequency reduction" described above serves to remove most of the grain noise. The reduction factor rf may be in the range 0 <= rf <= 1. If rf = 1 no change to the image would occur. If rf = 0 some image distortion might occur, which might nevertheless be acceptable; however, rf = 0 is not preferred.
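The AND/OR selection logic of Figure 9 reduces, per coefficient, to the following sketch (the use of absolute coefficient values in the threshold comparison is our assumption; the patent does not state how signed coefficients are compared):

```python
# Sketch of the per-coefficient sequency adjustment: keep C* when it and at
# least one of P*, N* exceed the sequency-dependent threshold (the AND of C*
# with the OR of P* and N*), otherwise multiply C* by the reduction factor rf.

def adjust(c_star, p_star, n_star, threshold, rf=0.3):
    matched = abs(c_star) > threshold and (abs(p_star) > threshold or
                                           abs(n_star) > threshold)
    return c_star if matched else c_star * rf
```

Setting rf = 0 removes unmatched sequencies completely, at the cost of the image distortion noted above; rf = 0.3 is the value used in the described embodiment.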
As explained above, the image is divided into NxN blocks for the purpose of performing the Hadamard transform. The transform block size can be set to a static value, which in preferred embodiments is at least N=16. In the case where the preprocessing stage is omitted, the block size can be arranged to be proportional to the grain size. The transform block size can be set to different values for R, G and B input signals but preferably all channels have the same values to simplify the hardware.
Although a large block size transform will effectively remove large grain, the grain is more likely to be confused with image detail if large blocks are used, and edge effects are amplified in blocks where a large amount of grain reduction has been achieved. If non-overlapping blocks are used a "blocking effect" may occur. This manifests itself in visible block borders in the output image, which results in some image distortion.
To overcome the blocking effect, overlapping blocks are used and only the pixels of the central block area are output after completing the process of Figure 8A. The use of overlapping blocks also reduces image distortion that can arise due to the phase dependency of the Hadamard transform. An alternative method of reducing the blocking effect would be to apply low pass filtering to the block border areas.
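The overlapping-block output step can be sketched as follows (illustrative only; the margin width and the list-of-lists block layout are assumptions, since the patent does not fix them):

```python
# Sketch of assembling the output from overlapping blocks: each processed
# N x N block contributes only its central region, so the block borders,
# where edge effects concentrate, never reach the output image.

def central_region(block, margin):
    n = len(block)
    return [row[margin:n - margin] for row in block[margin:n - margin]]
```

Tiling the frame with blocks whose centres abut, and discarding each block's margin, yields a seam-free output without low pass filtering the borders.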
A hardware implementation of the Hadamard transform is illustrated in Figure 10. Figure 10A illustrates a calculation for blocksize N=2. The circuit comprises two clock-triggered registers 4920 and 4921, a half-rate controlled multiplexer 4922 and a half-rate controlled alternating adder/subtractor 4923. Figure 10B is the functional diagram for the N=4 Hadamard transform which takes the N=2 transform data as input and comprises four clock-triggered registers 4930, 4931, 4932 and 4933, a quarter-rate controlled multiplexer 4934 and a quarter-rate controlled alternating adder/subtractor 4935.
Modifications
Whilst embodiments of the invention have been described in relation to colour signals, the invention may operate on luminance signals alone. Also, other forms of colour signals may be used instead of R, G and B signals. For example Y, U, V signals may be used. Whilst the invention has been described with reference to a system comprising primarily hardware for real-time processing of the video signal, the invention may be implemented by software in a data processing system. Such a software implementation, with current commonly available processors, would not operate in real time.
Attention is invited to cofiled patent applications referenced as follows which relate to other aspects of the telecine system of Figure 2 and the whole contents of which are incorporated herein by this reference: P/9893, 1-00-103, Application Number 01 ;
P/9894, 1-00-109 Application Number 01 ; and P/9895, 1-00-113 Application Number 01

Claims (29)

1. Apparatus for reducing random noise in a video signal comprising: a Hadamard/Walsh transformer for applying a Hadamard/Walsh transform to the video signal to produce sequency components; a sequency reducer for reducing sequency components which are temporally uncorrelated; and an inverse Hadamard/Walsh transformer for applying an inverse Hadamard/Walsh transform to the output of the sequency reducer.
2. Apparatus according to claim 1, comprising means for comparing sequency components with a sequency dependent threshold and means for determining temporal correlation in dependence on components which have a predetermined relation to the threshold.
3. Apparatus according to claim 2, wherein the threshold with which a component is compared is proportional to the sequency of the component.
4. Apparatus according to claim 2 or 3, wherein the sequency reducer comprises: comparison means which compares corresponding sequency components of motion compensated current, preceding and succeeding frames with respective sequency-dependent thresholds, the thresholds being greater for components of lower sequency than for components of higher sequency; and means for reducing the component of the current frame if the component of the current frame is greater than the threshold and the corresponding component of at least one of the preceding and succeeding frames is less than the threshold with which it is compared.
5. A temporal filter for reducing random noise in a video signal, the filter being arranged to: calculate in respect of each pixel C of a current frame a weighted average of values P', C and N', where P' and N' are weighted values of pixels in motion compensated preceding and succeeding frames corresponding to pixel C, wherein the weighted values P' and N' tend towards C in those portions of the image represented by the video signal in which temporal change is greater than a lower threshold.
6. A filter according to claim 5, wherein the amount of temporal change between the current and preceding frames is compared with a lower threshold and, if the amount of change is less than the lower threshold, the value of P' is the value of the said corresponding pixel P of the said preceding frame.
7. A filter according to claim 6, wherein the amount of temporal change between the current and preceding frames is compared with an upper threshold and, if the amount of change is greater than the upper threshold, the value of P' is set to C.
8. A filter according to claim 7, wherein if the amount of temporal change is greater than the lower threshold and less than the upper threshold, P' is set to a linear combination of C and P.
9. A filter according to claim 5, 6, 7 or 8, wherein the amount of temporal change between the current and succeeding frames is compared with a lower threshold and, if the amount of change is less than the lower threshold, the value of N' is the value of the said corresponding pixel N of the said succeeding frame.
10. A filter according to claim 9, wherein the amount of temporal change between the current and succeeding frames is compared with an upper threshold and, if the amount of change is greater than the upper threshold, the value of N' is set to C.
11. A filter according to claim 10, wherein if the amount of temporal change is greater than the lower threshold and less than the upper threshold, N' is set to a linear combination of C and N.
12. A filter according to claim 5, 6, 7, 8, 9, 10 or 11, wherein the amount of temporal change is calculated as the difference between (a) the sum of the pixel C of the current frame and a group of adjacent pixels in the frame and (b) the sum of the corresponding pixel and a group of adjacent pixels in the said preceding frame.
13. A telecine apparatus including a filter according to any one of claims 5 to 12 for reducing grain noise.
14. A filter for reducing random noise in a video signal, comprising means for determining the amount of detail in portions of the image represented by the video signal and means for filtering the portions of the image where the amount of detail is less than a predetermined amount.
15. A filter for reducing random noise in a video signal representing an image, the filter being arranged to calculate a combination of a first video signal and a second spatially filtered video signal, wherein the combination comprises the first video signal in portions of the image where detail is high, the second video signal in portions of the image where detail is low, and a combination of the first and second video signals in other portions of the image.
16. A filter according to claim 15, wherein the said combination includes a proportion of the second video signal which is inversely dependent on the amount of detail in the portion of the image to which the combination applies.
17. A filter according to claim 15 or 16, further comprising means for determining the spatial detail gradients in the image to detect the level of detail in the image.
18. A filter according to claim 17, wherein the said first and second video signals are combined in dependence on a key signal k where k is dependent on spatial detail gradients in the image.
19. A filter according to claim 15, 16, 17 or 18, further comprising a spatial filter as claimed in any one of claims 19 to 23 for producing the said second signal.
20. A filter according to claim 15, 16, 17, 18 or 19, further comprising a temporal filter as claimed in any one of claims 5 to 10 for producing the said first signal.
21. A filter for reducing random noise in a video signal and arranged to calculate a function of a spatial combination of pixel values, which combination is selected in dependence on a temporal metric.
22. A filter for reducing random noise in a video signal, the filter being arranged to: select, in a video frame, one pixel Cx,y in respect of which a spatial mean value is to be calculated; determine a threshold value, which threshold is dependent on the differences in value between the said one pixel Cx,y and pixels Px,y and/or Nx,y, corresponding to the said one pixel Cx,y, in a motion compensated preceding frame P and/or a motion compensated succeeding frame N; and calculate the said spatial mean value in dependence on the values of a group of selected pixels in the said frame, which selected pixels are adjacent the said one pixel, each pixel in the said group being selected on the basis that it is a pixel having a value such that the difference between the value of that pixel and the said one pixel Cx,y is less than the said threshold value.
23. A filter according to claim 22, wherein the pixels of the said group are horizontally adjacent the said one pixel Cx,y.
24. A filter according to claim 23, wherein the pixels of the said group are vertically adjacent the said one pixel Cx,y.
25. A filter according to claim 22, 23 or 24, wherein the said threshold value is dependent on the largest of:
|Px,y - Cx,y|, |Px,y - Nx,y| and |Nx,y - Cx,y|.
26. A filter according to claim 22, 23, 24 or 25, wherein the said mean value is the sum of the values of the pixels in the said group divided by the number of pixels in the group.
27. A filter system for reducing grain noise in a video signal comprising: a temporal filter as claimed in any one of claims 5 to 12; a first spatial filter as claimed in any one of claims 22 to 26 which spatially filters the temporally filtered signal; a second spatial filter as claimed in any one of claims 15 to 21 in which the said first signal is the output of the temporal filter, and the second signal is the output of the first spatial filter; and an apparatus according to claim 1, 2, 3 or 4, to which the output of the second spatial filter is applied.
28. A telecine system including apparatus or a filter or a system according to any preceding claim.
29. A filter system substantially as hereinbefore described with reference to the accompanying drawings.
GB0100520A 2001-01-09 2001-01-09 Noise reduction in video signals Withdrawn GB2370934A (en)
