GB2280812A - Deblurring image data using motion vector dependent deconvolution - Google Patents
- Publication number
- GB2280812A (application GB9316307A; granted publication GB2280812B)
- Authority
- GB
- United Kingdom
- Prior art keywords
- image data
- array
- motion vector
- motion vectors
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/73—Deblurring; Sharpening
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/14—Picture signal circuitry for video frequency region
- H04N5/144—Movement detection
- H04N5/145—Movement estimation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/14—Picture signal circuitry for video frequency region
- H04N5/21—Circuitry for suppressing or minimising disturbance, e.g. moiré or halo
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20172—Image enhancement details
- G06T2207/20201—Motion blur correction
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
- Picture Signal Circuits (AREA)
- Studio Circuits (AREA)
- Image Analysis (AREA)
Abstract
The deblurring deconvolution is dependent upon the capture time T of the image and the motion vector detected for that part of the image. A multiplexer 14 selects between deblurring deconvolutions performed in parallel in dependence upon a plurality of different candidate motion vectors V1, V2, V3 and zero motion. A region processor 10 identifies boundaries between areas within the image corresponding to different objects and controls a mixer 16 to selectively output deblurred values or unprocessed raw video values so as to reduce edge effects due to the inappropriate application of deblurring deconvolution at the edges between objects. Provision of parallel deconvolutions allows the system to operate in a continuous pipelined manner.
Description
IMAGE ENHANCEMENT
This invention relates to the field of image enhancement. More particularly, this invention relates to the enhancement of arrays of image data captured during a capture time such that moving images are blurred.
The blurring of moving images often occurs due to the non-zero capture times of image capture devices, such as stills cameras and television cameras. Blurring can be reduced by decreasing the capture time, but this brings with it its own set of problems, such as the requirement for higher light sensitivity. The problems of image blurring are made worse when the scene contains rapidly moving images (e.g. motor racing) or the capture device itself is moving relative to the image (e.g. satellite photography, where long capture times are also required).
It is known that blurring due to image motion can be reduced using deconvolution techniques. The image data value at a given point within the blurred image may be considered as a convolution over the capture time of the actual instantaneous image at that point and the motion. This convolution can be reversed given values of the variables representing the capture time and the motion of the image assuming the motion can be approximated to uniform linear motion.
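As a rough illustration of this model (not part of the patent text), uniform linear motion over the capture time reduces, in the discrete one-dimensional case, to a box average along the motion direction; the blur length, single-pixel test signal and wrap-around indexing below are illustrative assumptions only:

```python
def motion_blur_1d(row, blur_len):
    """Approximate uniform linear motion during the capture time as a
    box convolution of length `blur_len` along the motion direction
    (circular indexing keeps the example self-contained)."""
    n = len(row)
    out = []
    for x in range(n):
        # average the samples the sensor integrates while the image moves
        acc = sum(row[(x - k) % n] for k in range(blur_len))
        out.append(acc / blur_len)
    return out

row = [0.0] * 8
row[3] = 1.0                        # a single bright pixel
blurred = motion_blur_1d(row, 3)    # image moves 3 pixels during capture
# the point spreads evenly over positions 3, 4 and 5, each holding 1/3
```

Reversing this averaging, given the capture time and motion, is exactly the deconvolution discussed above.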
A mathematical treatment of this technique may be found in "Digital Image Processing" by Gonzalez & Wintz, published by Addison-Wesley, 1987, pages 224-232. The convolution in the spatial domain can be considered as multiplication by a transfer function in the spatial frequency domain, the transfer function being capable of solution in terms of the capture time and speed of movement of the image.
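A minimal numerical sketch of that statement (illustrative only, using a toy 8-sample signal and a pure-Python DFT rather than any production FFT): blurring is multiplication by the transfer function in the frequency domain, so deblurring is division by it wherever it is non-zero:

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * f * t / n) for t in range(n))
            for f in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[f] * cmath.exp(2j * cmath.pi * f * t / n) for f in range(n)) / n
            for t in range(n)]

n, blur_len = 8, 3
kernel = [1.0 / blur_len] * blur_len + [0.0] * (n - blur_len)
signal = [0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0]

H = dft(kernel)                                   # blurring transfer function
B = [s * h for s, h in zip(dft(signal), H)]       # blurring = multiplication by H
R = [b / h if abs(h) > 1e-9 else 0j
     for b, h in zip(B, H)]                       # inverse filter where H is non-zero
restored = [v.real for v in idft(R)]              # back to the spatial domain
```

For this particular kernel length and signal size H has no zeros, so the bright pixel is recovered exactly; the zero-crossing difficulty this glosses over is addressed in the windowing discussion later in the description.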
Such techniques have been used to enhance images, such as still images and satellite images, where the degree of motion present is either known from the outset (e.g. a fixed satellite orbit) or may be determined by trial and error attempts at deblurring. Whilst the deblurring technique can be effective, the problems of solving the transfer function have meant its application has been limited to specialist fields and it may be considered as something of a textbook curiosity.
Viewed from one aspect this invention provides apparatus for processing an array of image data captured during a capture time such that moving images are blurred, said apparatus comprising:
means for detecting an array of motion vectors associated with corresponding portions of an array of image data, each of said motion vectors representing image motion of a corresponding one of said portions between temporally spaced arrays of image data; and
means for deconvolving a blur function from each of said portions in dependence upon said capture time and a corresponding motion vector for that portion detected by said means for detecting to yield an array of reduced blur image data.
The invention both recognises and exploits the fact that the systems that have been developed for motion compensated interpolation of image data to effect standards conversion can be utilised to detect motion vectors for moving images as part of an image deblurring system of general applicability. Furthermore, differing motion vectors can be identified for differing parts of a complex moving scene and the appropriate deblurring deconvolution applied to the corresponding parts of the scene.
This considerably enhances the effectiveness of the technique.
In order to ease the burden of identifying the motion vectors, it is preferred that said portions comprise arrays of pixel values.
Such an approximation of associating motion vectors with a block of pixel values provides an advantageous compromise between system processing requirements and the achieved accuracy of results.
Having identified motion vectors to be associated with each of the arrays of pixel values, it is important to provide at least some flexibility as to which motion vectors are actually utilised.
Accordingly, it is preferred that said means for detecting an array of motion vectors detects a plurality of candidate motion vectors for each of said arrays of pixel values and a means for selecting selects a motion vector to be associated with each pixel value from among said candidate motion vectors.
The provision of a plurality of candidate motion vectors for each of the portions (i.e. arrays of pixel values), from among which the particular motion vector to be used is selected on a pixel by pixel basis, accommodates differences in what the pixel values within a given portion in fact represent without imposing unduly disadvantageous processing requirements.
The deconvolution process required to produce a particular output pixel value requires contributions from a number of surrounding pixel values. In order to improve the efficiency of the system, it is desirable that the image data should be input to the system as a continuous stream that can be passed through the system without undue requirements for buffering and the like. This presents a problem in the case of deconvolution that may be based upon one of a number of candidate motion vectors. In such a circumstance, the particular deconvolution that is required may not be identified until after at least some of the pixel values upon which the result depends have passed through at least part of the apparatus. In order to deal with this, it is preferred that said means for deconvolving performs a deconvolution upon each pixel value for each of said candidate motion vectors to yield a plurality of reduced blur pixel values and a multiplexer controlled by said means for selecting switches one of said candidate reduced blur image pixel values corresponding to said selected motion vector to form part of said array of reduced blur image data.
Performing a plurality of deconvolutions when only one will ultimately be used may seem at first sight to increase disadvantageously the amount of hardware needed to implement the system. However, when it is considered that the processing may then proceed in a continuous pipelined fashion without the need to repeatedly access the same pixel data values, an overall advantage results from this approach.
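The parallel-then-select structure can be sketched schematically (the per-vector functions below are hypothetical stand-ins for the deconvolution means, not real deconvolutions):

```python
# Hypothetical stand-ins for the three deconvolution channels plus the raw
# zero-motion channel; each is applied to every pixel in the stream.
candidates = {
    "zero": lambda p: p,        # unprocessed video for the zero motion vector
    "V1":   lambda p: p + 1,    # placeholder for deconvolution under V1
    "V2":   lambda p: p + 2,    # placeholder for deconvolution under V2
    "V3":   lambda p: p + 3,    # placeholder for deconvolution under V3
}

def deblur_pipeline(pixels, per_pixel_choice):
    """All candidate deconvolutions run on every pixel; the per-pixel
    vector choice then drives the multiplexer, so no pixel data need be
    re-read once the correct vector is known."""
    out = []
    for p, choice in zip(pixels, per_pixel_choice):
        results = {name: f(p) for name, f in candidates.items()}  # parallel paths
        out.append(results[choice])                               # multiplexer
    return out

selected = deblur_pipeline([10, 10, 10], ["zero", "V2", "V3"])
```

The design choice mirrors the hardware argument above: redundant computation is traded for a strictly feed-forward data flow.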
As mentioned previously, the ability of the system to apply different motion vectors to different parts of the image allows a scene containing multiple moving images to be dealt with. Such scenes do however create problems at the boundaries between objects within the scene. As a first stage in dealing with such problems it is desirable that the system should include means for detecting boundaries between areas within said array of image data sharing a substantially common motion vector, sharing of a substantially common motion vector within such an area being indicative of said area representing a single object (the object may comprise a plurality of different parts, but they all move together).
Having identified the boundaries between objects, the problems that could be introduced by an inappropriate application of a deblurring deconvolution at a boundary between different objects can be reduced by providing a mixer responsive to said means for detecting boundaries for mixing together deblurred image data from said means for deconvolving and unprocessed image data such that at boundaries between detected areas substantially wholly unprocessed image data is output from said mixer with an increasing proportion of deblurred image data being output from said mixer upon moving away from said boundaries.
It is preferable that the mixer should select unprocessed blurred image data at boundaries rather than risk the introduction of the potentially more disturbing artifacts due to an inappropriate deblurring deconvolution being applied.
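The mixing law can be sketched as a simple linear ramp (the three-pixel ramp width is an illustrative assumption; the text does not fix a value):

```python
def mix(raw, deblurred, boundary_distance, ramp=3):
    """Output wholly unprocessed video at a boundary (distance 0) and an
    increasing proportion of deblurred video further away, reaching
    wholly deblurred output `ramp` pixels from the boundary."""
    k = min(boundary_distance, ramp) / ramp   # 0 at the boundary, 1 far away
    return (1 - k) * raw + k * deblurred

on_boundary = mix(100.0, 40.0, 0)   # raw value only
far_away    = mix(100.0, 40.0, 5)   # deblurred value only
```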
A particularly efficient embodiment of the means for deconvolving comprises a two dimensional finite impulse response filter having a set of filter coefficients selected in dependence upon motion vector components in orthogonal directions for that portion of image data being processed.
The realisation that the blurring transfer function operates in the spatial frequency domain of the image makes the use of a finite impulse response filter for transformation back into the spatial domain particularly suitable. Furthermore, such a filter is easy to operate in a pipelined fashion whereby the data is sequentially clocked along the filter to yield a filtered output result upon each clock cycle.
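The pipelined clocking can be outlined in one dimension (a transversal sketch only; the real filter is two dimensional, with line delays between rows of taps):

```python
from collections import deque

def fir_pipeline(samples, taps):
    """Clock samples through a FIR delay line one per cycle, emitting one
    filtered output per clock as data advances along the line."""
    line = deque([0.0] * len(taps), maxlen=len(taps))
    out = []
    for s in samples:
        line.appendleft(s)                        # shift register advances
        out.append(sum(c * x for c, x in zip(taps, line)))
    return out

# with identity taps the input emerges delayed by one clock cycle
echoed = fir_pipeline([1.0, 2.0, 3.0, 4.0], [0.0, 1.0, 0.0])
```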
It will be appreciated that the filter coefficients for the finite impulse response filter are dependent upon the orthogonal motion vector components and the capture time of the image. In order to deal efficiently with such variations, it is preferred to provide a filter coefficient store for said filter storing sets of filter coefficients for a range of possible motion vectors and capture times that are triggered into use in dependence upon the detected motion vector components and the particular capture time in question.
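A toy version of such a coefficient store might look as follows (the supported component range, the capture times, and the placeholder coefficient formula are all invented for illustration, not taken from the patent):

```python
def placeholder_taps(vx, vy, T):
    """Stand-in for offline coefficient derivation; a real store would
    hold deblur filter taps computed from the transfer function."""
    return [round(vx * T, 4), round(vy * T, 4)]

# precompute one coefficient set per supported (vx, vy, T) combination
store = {
    (vx, vy, T): placeholder_taps(vx, vy, T)
    for vx in range(-2, 3)           # supported horizontal components
    for vy in range(-2, 3)           # supported vertical components
    for T in (0.01, 0.02)            # supported capture times, seconds
}

def coefficients_for(vx, vy, T):
    """Look up the precomputed set for the detected vector components,
    clamping out-of-range components into the supported range."""
    vx = max(-2, min(2, vx))
    vy = max(-2, min(2, vy))
    return store[(vx, vy, T)]
```

The point of the structure is that no coefficient arithmetic happens per pixel at run time, only a table lookup triggered by the detected components and the capture time.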
Viewed from another aspect the invention provides a method of processing an array of image data captured during a capture time such that moving images are blurred, said method comprising the steps of:
detecting an array of motion vectors associated with corresponding portions of an array of image data, each of said motion vectors representing image motion of a corresponding one of said portions between temporally spaced arrays of image data; and
deconvolving a blur function from each of said portions in dependence upon said capture time and a corresponding detected motion vector for that portion to yield an array of reduced blur image data.
An embodiment of the invention will now be described, by way of example only, with reference to the accompanying drawings in which:
Figure 1 illustrates an apparatus for deblurring image data; and
Figure 2 illustrates a two dimensional finite impulse response filter for effecting image deblurring.
Figure 1 shows an apparatus for image deblurring. A video data stream is input to a vector selector 2 and first, second and third deconvolution means 4, 6 and 8 respectively. Four candidate motion vectors (V1, V2, V3 and the zero motion vector) for a block/portion of the image are input to the vector selector 2. It will be appreciated that the candidate motion vectors are detected using apparatus and techniques such as those described in British Published Patent Application GB-A-2 231 749 (Sony Corporation) for the purpose of motion compensated standards conversion.
One of the candidate motion vectors is the zero motion vector.
The other candidate motion vectors V1, V2 and V3 are fed to the respective deconvolution means 4, 6 and 8.
The vector selector 2 selects from among the candidate vectors for the block (Vectors/b) on a pixel by pixel basis to yield that vector (Vectors/p) which is most suitable for a particular pixel. In accordance with the known technique, this selection may be achieved by projection from one area of image pixel values to another using each of the vectors and testing to determine how well the pixel values in the target area match those in the source area.
The particular vectors selected for each pixel (Vectors/p) are passed to a region processor 10 where an array of vectors selected for all of the pixel values is assembled and analysed as will be described later. The vector selector 2 also outputs a pixel vector choice value to a delay unit 12 through which it passes to a multiplexer 14.
The multiplexer 14 receives the outputs from the first, second and third deconvolution means 4, 6 and 8 respectively. The multiplexer 14 also receives the unprocessed video data in one channel corresponding to the zero motion candidate motion vector. The multiplexer 14 selects one of its inputs for output to a mixer 16 in dependence upon the output of the delay 12. The delay 12 is chosen to match the delays in the vector selector 2 and first, second and third deconvolution means 4, 6 and 8. The first, second and third deconvolution means 4, 6 and 8 each serve to generate deblurred pixel values as will be described below.
The region processor 10 serves to identify boundaries between objects within a scene of moving objects. This is achieved by searching through the motion vector selected for the various pixel positions within the image and identifying areas showing substantially common motion vectors within a predetermined threshold of one another.
When these areas of common motion vectors have been identified, their boundaries may be traced to represent the boundaries of differing moving objects within the scene. In synchronism with the output of the deblurred pixel value from the multiplexer 14, the region processor 10 outputs a boundary flag via a delay unit 18 to the mixer 16. This boundary flag is a multi-bit word indicating the proximity of the pixel now being output to a boundary between objects within the scene. If the boundary flag indicates that the particular pixel is at the boundary between two objects, then the mixer 16 selects the unprocessed video for output. As the boundary flag progressively indicates that a pixel is further away from a boundary between moving objects, the mixer 16 correspondingly selects a higher proportion of the output from the multiplexer 14, representing the deblurred pixels rather than the raw unprocessed pixels. The action of the mixer 16 and the region processor 10 serves to suppress the generation of disturbing artifacts around the borders of moving objects, at which the deblurring deconvolutions would not be appropriately applied.
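A one-dimensional sketch of the boundary flag computation (illustrative only; the real region processor works on a two-dimensional array of selected vectors and emits a multi-bit proximity word):

```python
def boundary_distance(selected_vectors, x):
    """Distance from pixel x to the nearest change in the selected motion
    vector; adjacent differing vectors mark an object boundary."""
    d = len(selected_vectors)                      # "no boundary" sentinel
    for i in range(len(selected_vectors) - 1):
        if selected_vectors[i] != selected_vectors[i + 1]:
            d = min(d, abs(x - i), abs(x - (i + 1)))
    return d

# two regions moving differently: the boundary falls between indices 3 and 4
vectors = [(2, 0)] * 4 + [(0, 0)] * 4
flags = [boundary_distance(vectors, x) for x in range(len(vectors))]
```

The flag is zero at the boundary itself and grows with distance, which is exactly the quantity the mixer 16 needs to ramp between raw and deblurred output.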
Figure 2 illustrates one of the deconvolution means 4, 6 and 8 in more detail. Each of the deconvolution means 4, 6 and 8 is implemented using a two dimensional finite impulse response filter 20. This filter 20 has conventional pixel and line delays between taps. The input video data is pipelined through the filter 20 in sequence to generate the output.
The filter coefficients applied by the filter 20 are stored within a coefficient store 24. The coefficient store 24 is supplied with the capture time of the video data and values of the horizontal vector component and vertical vector component for the pixel values currently at the centre position within the filter 20.
The coefficient store 24 outputs n horizontal coefficient values and m vertical coefficient values to the filter 20 in dependence upon the horizontal vector component, the vertical vector component and the capture time. The values of the coefficients are determined in advance for particular combinations of vector component values and capture time using the mathematical treatment referenced in the introduction.
The blurring transfer function may be represented by:

HB(u,v) = T * sin(π(ua + vb)) / (π(ua + vb)) * exp(−jπ(ua + vb))

where: u = horizontal spatial frequency; v = vertical spatial frequency; T = capture time; a = (horizontal motion vector) * T; and b = (vertical motion vector) * T.
Accordingly, the deblurring transfer function may be represented by:

HD(u,v) = 1 / HB(u,v)

By substituting into this equation the spatial frequencies corresponding to the differing horizontal pixel spacings and the interline spacings in the vertical direction, a set of filter coefficient values for representative values of the capture time T and the motion vector may be derived. Unfortunately, the value of 1/HB(u,v) becomes infinite at the zero crossings that occur within HB(u,v); the first zero crossing occurs at |ua + vb| = 1.
One way of dealing with this, providing approximate solutions for the filter coefficients without the zero crossing problem, is to consider frequencies only up to the first zero crossing of HB(u,v) and then apply a window function. An example of a window function W(u,v) that may be used is a raised cosine:

W(u,v) = 1/2 (1 + cos(π(ua + vb))) ... for |ua + vb| < 1
W(u,v) = 0 ... elsewhere
Thus, the applied non-separable deblur function becomes:

HD(u,v) = W(u,v) / HB(u,v)
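Sampled numerically, the windowed inverse can be sketched as follows (the motion vector components and capture time used in the example are arbitrary values, not figures from the patent):

```python
import math
import cmath

def H_blur(u, v, a, b, T):
    """Uniform-linear-motion blur transfer function; the limit as
    s = ua + vb goes to 0 is simply T."""
    s = u * a + v * b
    if s == 0:
        return complex(T)
    return T * math.sin(math.pi * s) / (math.pi * s) * cmath.exp(-1j * math.pi * s)

def H_deblur(u, v, a, b, T):
    """Raised-cosine-windowed inverse: zero at and beyond the first zero
    crossing |ua + vb| = 1, so the infinity of 1/H_blur is never evaluated."""
    s = u * a + v * b
    w = 0.5 * (1 + math.cos(math.pi * s)) if abs(s) < 1 else 0.0
    return w / H_blur(u, v, a, b, T) if w else 0j

a, b, T = 0.5, 0.0, 0.02            # horizontal motion only, arbitrary numbers
dc_gain = H_deblur(0, 0, a, b, T)   # at DC the windowed inverse is exactly 1/T
```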
An alternative is to use a least-mean-square (Wiener) filter as described in the mathematical treatment referenced in the introduction.
The Wiener filter given by equation 5.5-10 in this reference (where K is a constant) is:

HW(u,v) = HB*(u,v) / (|HB(u,v)|^2 + K)
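For comparison, a sketch of the Wiener gain under the same assumed blur model (the value of K and the example operating point are arbitrary choices for illustration):

```python
import math
import cmath

def wiener_gain(u, v, a, b, T, K=0.01):
    """Least-mean-square alternative: H*(u,v) / (|H(u,v)|^2 + K) remains
    finite even at the zero crossings where the plain inverse blows up."""
    s = u * a + v * b
    if s == 0:
        h = complex(T)
    else:
        h = T * math.sin(math.pi * s) / (math.pi * s) * cmath.exp(-1j * math.pi * s)
    return h.conjugate() / (abs(h) ** 2 + K)

# finite at the first zero crossing |ua + vb| = 1, and damped below 1/T at DC
at_zero_crossing = wiener_gain(2, 0, 0.5, 0.0, 0.02)
at_dc = wiener_gain(0, 0, 0.5, 0.0, 0.02)
```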
Claims (12)
- CLAIMS 1. Apparatus for processing an array of image data captured during a capture time such that moving images are blurred, said apparatus comprising: means for detecting an array of motion vectors associated with corresponding portions of an array of image data, each of said motion vectors representing image motion of a corresponding one of said portions between temporally spaced arrays of image data; and means for deconvolving a blur function from each of said portions in dependence upon said capture time and a corresponding motion vector for that portion detected by said means for detecting to yield an array of reduced blur image data.
- 2. Apparatus as claimed in claim 1, wherein said portions comprise arrays of pixel values.
- 3. Apparatus as claimed in claim 2, wherein said means for detecting an array of motion vectors detects a plurality of candidate motion vectors for each of said arrays of pixel values and a means for selecting selects a motion vector to be associated with each pixel value from among said candidate motion vectors.
- 4. Apparatus as claimed in claim 3, wherein said means for deconvolving performs a deconvolution upon each pixel value for each of said candidate motion vectors to yield a plurality of reduced blur pixel values and a multiplexer controlled by said means for selecting switches one of said candidate reduced blur image pixel values corresponding to said selected motion vector to form part of said array of reduced blur image data.
- 5. Apparatus as claimed in any one of claims 2, 3 and 4, comprising means for detecting boundaries between areas within said array of image data sharing a substantially common motion vector, sharing of a substantially common motion vector within such an area being indicative of said area representing a single object.
- 6. Apparatus as claimed in claim 5, comprising a mixer responsive to said means for detecting boundaries for mixing together deblurred image data from said means for deconvolving and unprocessed image data such that at boundaries between detected areas substantially wholly unprocessed image data is output from said mixer with an increasing proportion of deblurred image data being output from said mixer upon moving away from said boundaries.
- 7. Apparatus as claimed in any one of the preceding claims, wherein said means for deconvolving comprises a two dimensional finite impulse response filter having a set of filter coefficients selected in dependence upon motion vector components in orthogonal directions for that portion of image data being processed.
- 8. Apparatus as claimed in claim 7, comprising a filter coefficient store for said finite impulse response filter storing sets of filter coefficients for a range of possible motion vector components, a detected motion vector component triggering selection of a corresponding set of filter coefficients.
- 9. Apparatus as claimed in claim 8, wherein said filter coefficient store stores a plurality of sets of filter coefficients for each possible motion vector, each of said plurality of sets corresponding to a different capture time.
- 10. A method of processing an array of image data captured during a capture time such that moving images are blurred, said method comprising the steps of: detecting an array of motion vectors associated with corresponding portions of an array of image data, each of said motion vectors representing image motion of a corresponding one of said portions between temporally spaced arrays of image data; and deconvolving a blur function from each of said portions in dependence upon said capture time and a corresponding detected motion vector for that portion to yield an array of reduced blur image data.
- 11. Apparatus for processing arrays of image data substantially as hereinbefore described with reference to the accompanying drawings.
- 12. A method of processing arrays of image data substantially as hereinbefore described with reference to the accompanying drawings.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB9316307A GB2280812B (en) | 1993-08-05 | 1993-08-05 | Image enhancement |
JP16199694A JP3251127B2 (en) | 1993-08-05 | 1994-07-14 | Video data processing method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB9316307A GB2280812B (en) | 1993-08-05 | 1993-08-05 | Image enhancement |
Publications (3)
Publication Number | Publication Date |
---|---|
GB9316307D0 GB9316307D0 (en) | 1993-09-22 |
GB2280812A true GB2280812A (en) | 1995-02-08 |
GB2280812B GB2280812B (en) | 1997-07-30 |
Family
ID=10740058
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB9316307A Expired - Fee Related GB2280812B (en) | 1993-08-05 | 1993-08-05 | Image enhancement |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP3251127B2 (en) |
GB (1) | GB2280812B (en) |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2001024531A1 (en) * | 1999-09-24 | 2001-04-05 | Sony Electronics, Inc. | Classified adaptive error recovery method and apparatus |
WO2002051124A2 (en) * | 2000-12-19 | 2002-06-27 | Pulsent Corporation | Deblurring and re-blurring image segments |
AU767162B2 (en) * | 1999-12-14 | 2003-10-30 | Canon Kabushiki Kaisha | Method and apparatus for uniform lineal motion blur estimation using multiple exposures |
WO2003100724A2 (en) * | 2002-05-23 | 2003-12-04 | Koninklijke Philips Electronics N.V. | Edge dependent motion blur reduction |
US6754371B1 (en) | 1999-12-07 | 2004-06-22 | Sony Corporation | Method and apparatus for past and future motion classification |
AU2003244630B2 (en) * | 1999-12-14 | 2005-05-26 | Canon Kabushiki Kaisha | Method and Apparatus for Uniform Lineal Motion Blur Estimation Using Multiple Exposures |
WO2006046182A1 (en) * | 2004-10-26 | 2006-05-04 | Koninklijke Philips Electronics N.V. | Enhancement of blurred image portions |
WO2006054201A1 (en) * | 2004-11-16 | 2006-05-26 | Koninklijke Philips Electronics N.V. | Video data enhancement |
WO2006135459A1 (en) * | 2005-04-11 | 2006-12-21 | Ircon, Inc. | Method and apparatus for capturing and analyzing thermo-graphic images of a moving object |
US8134613B2 (en) | 2001-04-10 | 2012-03-13 | Sony Corporation | Image processing apparatus and method, and image pickup apparatus |
US8319840B2 (en) | 2005-04-27 | 2012-11-27 | Mitsubishi Electric Corporation | Image processing device, image processing method, and information terminal apparatus |
US8345163B2 (en) | 2010-08-03 | 2013-01-01 | Mitsubishi Electric Corporation | Image processing device and method and image display device |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2000011863A1 (en) * | 1998-08-21 | 2000-03-02 | Koninklijke Philips Electronics N.V. | Problem area location in an image signal |
US6621936B1 (en) | 1999-02-12 | 2003-09-16 | Sony Corporation | Method and apparatus for spatial class reduction |
US6591398B1 (en) | 1999-02-12 | 2003-07-08 | Sony Corporation | Multiple processing system |
US6519369B1 (en) | 1999-02-12 | 2003-02-11 | Sony Corporation | Method and apparatus for filter tap expansion |
US6522785B1 (en) | 1999-09-24 | 2003-02-18 | Sony Corporation | Classified adaptive error recovery method and apparatus |
JP4674408B2 (en) * | 2001-04-10 | 2011-04-20 | ソニー株式会社 | Image processing apparatus and method, recording medium, and program |
JP4596205B2 (en) * | 2001-04-10 | 2010-12-08 | ソニー株式会社 | Image processing apparatus and method, and program |
ATE441283T1 (en) * | 2003-12-01 | 2009-09-15 | Koninkl Philips Electronics Nv | MOTION COMPENSATED INVERSE FILTERING WITH BANDPASS FILTERS FOR MOTION SMURRY REDUCTION |
JP4910839B2 (en) * | 2007-03-30 | 2012-04-04 | ソニー株式会社 | Image processing apparatus and method, and program |
WO2009107487A1 (en) | 2008-02-25 | 2009-09-03 | 三菱電機株式会社 | Motion blur detecting device and method, and image processor, and image display |
CN102073993B (en) * | 2010-12-29 | 2012-08-22 | 清华大学 | Camera self-calibration-based jittering video deblurring method and device |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4612575A (en) * | 1984-07-24 | 1986-09-16 | E-Systems, Inc. | T.V. video image correction |
GB2264414A (en) * | 1992-02-12 | 1993-08-25 | Sony Broadcast & Communication | Motion compensated noise reduction |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS62185466A (en) * | 1986-02-10 | 1987-08-13 | Canon Inc | Picture processor |
JP2611211B2 (en) * | 1987-02-28 | 1997-05-21 | ソニー株式会社 | Two-dimensional filter device |
JPH02202287A (en) * | 1989-01-31 | 1990-08-10 | Victor Co Of Japan Ltd | Movement adaptive filter |
JPH04188283A (en) * | 1990-11-22 | 1992-07-06 | Canon Inc | Method and device for processing picture |
JP3179475B2 (en) * | 1990-11-30 | 2001-06-25 | ソニー株式会社 | Signal processing device and signal processing method |
- 1993-08-05: GB GB9316307A patent GB2280812B (not_active: Expired - Fee Related)
- 1994-07-14: JP JP16199694A patent JP3251127B2 (not_active: Expired - Fee Related)
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4612575A (en) * | 1984-07-24 | 1986-09-16 | E-Systems, Inc. | T.V. video image correction |
GB2264414A (en) * | 1992-02-12 | 1993-08-25 | Sony Broadcast & Communication | Motion compensated noise reduction |
Non-Patent Citations (1)
Title |
---|
"Digital Image Processing", R C Gonzalez & P Wintz, Addison-Wesley, 1977. See pages 224-232. *
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2371938A (en) * | 1999-09-24 | 2002-08-07 | Sony Electronics Inc | Classified adaptive error recovery method and apparatus |
WO2001024531A1 (en) * | 1999-09-24 | 2001-04-05 | Sony Electronics, Inc. | Classified adaptive error recovery method and apparatus |
GB2371938B (en) * | 1999-09-24 | 2004-04-21 | Sony Electronics Inc | Classified adaptive error recovery method and apparatus |
US6754371B1 (en) | 1999-12-07 | 2004-06-22 | Sony Corporation | Method and apparatus for past and future motion classification |
AU767162B2 (en) * | 1999-12-14 | 2003-10-30 | Canon Kabushiki Kaisha | Method and apparatus for uniform lineal motion blur estimation using multiple exposures |
AU2003244630B2 (en) * | 1999-12-14 | 2005-05-26 | Canon Kabushiki Kaisha | Method and Apparatus for Uniform Lineal Motion Blur Estimation Using Multiple Exposures |
US6959117B2 (en) | 2000-12-19 | 2005-10-25 | Pts Corporation | Method and apparatus for deblurring and re-blurring image segments |
WO2002051124A2 (en) * | 2000-12-19 | 2002-06-27 | Pulsent Corporation | Deblurring and re-blurring image segments |
WO2002051124A3 (en) * | 2000-12-19 | 2003-03-27 | Pulsent Corp | Deblurring and re-blurring image segments |
US8134613B2 (en) | 2001-04-10 | 2012-03-13 | Sony Corporation | Image processing apparatus and method, and image pickup apparatus |
WO2003100724A2 (en) * | 2002-05-23 | 2003-12-04 | Koninklijke Philips Electronics N.V. | Edge dependent motion blur reduction |
WO2003100724A3 (en) * | 2002-05-23 | 2004-10-14 | Koninkl Philips Electronics Nv | Edge dependent motion blur reduction |
WO2006046182A1 (en) * | 2004-10-26 | 2006-05-04 | Koninklijke Philips Electronics N.V. | Enhancement of blurred image portions |
WO2006054201A1 (en) * | 2004-11-16 | 2006-05-26 | Koninklijke Philips Electronics N.V. | Video data enhancement |
WO2006135459A1 (en) * | 2005-04-11 | 2006-12-21 | Ircon, Inc. | Method and apparatus for capturing and analyzing thermo-graphic images of a moving object |
GB2440696A (en) * | 2005-04-11 | 2008-02-06 | Ircon Inc | Method and apparatus for capturing and analyzing thermo-graphic images of moving object |
US7622715B2 (en) | 2005-04-11 | 2009-11-24 | Fluke Corporation | Method and apparatus for capturing and analyzing thermo-graphic images of a moving object |
GB2440696B (en) * | 2005-04-11 | 2011-02-23 | Ircon Inc | Method and apparatus for capturing and analyzing thermo-graphic images of moving object |
CN101180652B (en) * | 2005-04-11 | 2012-08-15 | 福禄克公司 | Method and apparatus for capturing and analyzing thermo-graphic images of moving object |
US8319840B2 (en) | 2005-04-27 | 2012-11-27 | Mitsubishi Electric Corporation | Image processing device, image processing method, and information terminal apparatus |
US8542283B2 (en) | 2005-04-27 | 2013-09-24 | Mitsubishi Electric Corporation | Image processing device, image processing method, and information terminal apparatus |
US8345163B2 (en) | 2010-08-03 | 2013-01-01 | Mitsubishi Electric Corporation | Image processing device and method and image display device |
Also Published As
Publication number | Publication date |
---|---|
GB9316307D0 (en) | 1993-09-22 |
GB2280812B (en) | 1997-07-30 |
JP3251127B2 (en) | 2002-01-28 |
JPH0765163A (en) | 1995-03-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
GB2280812A (en) | Deblurring image data using motion vector dependent deconvolution |
EP0476959B1 (en) | System and method for fusing video imagery from multiple sources in real time | |
US5241372A (en) | Video image processing apparatus including convolution filter means to process pixels of a video image by a set of parameter coefficients | |
US5535291A (en) | Superresolution image enhancement for a SIMD array processor | |
Kim et al. | Spatio-temporal adaptive 3-D Kalman filter for video | |
US6614474B1 (en) | Electronic pan tilt zoom video camera with adaptive edge sharpening filter | |
EP0134244B1 (en) | Resolution enhancement and zoom by degradation estimates | |
US5198896A (en) | Movement detection apparatus for detecting movement vectors from an image signal | |
US5666160A (en) | Digital zooming system of high resolution and method therefor | |
JPH08336046A (en) | Method and device for processing signal | |
Lorenz et al. | Adaptive filtering in astronomical image processing-part one-basic considerations and examples | |
JPS62127976A (en) | Image recording processor | |
JP2795148B2 (en) | Moving image processing method and apparatus | |
US6304679B1 (en) | Method and apparatus for implementing two-dimensional digital filters | |
JPS5885673A (en) | Defocusing effect device | |
Pan et al. | Fractional directional derivative and identification of blur parameters of motion-blurred image | |
KR100717031B1 (en) | 1-D image restoration using a sliding window method | |
Gleed et al. | High-speed super-resolution techniques for passive millimeter-wave imaging systems | |
EP0466252A2 (en) | A method and apparatus for restoring convolution degraded images and signals | |
Araneda et al. | A compact hardware architecture for digital image stabilization using integral projections | |
Deepthi Jordhana et al. | Various filter algorithms using impulse noise removal in digital images with image fusion technique | |
EP0425288B1 (en) | Movement detection apparatus | |
Wen et al. | Fast splitting algorithm for multiframe total variation blind video deconvolution | |
KR101775273B1 (en) | Method and system for acquiring correlation coefficient between images | |
Ponomarev et al. | Adaptive Wiener filter implementation for image processing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PCNP | Patent ceased through non-payment of renewal fee | Effective date: 20120805 |