US20090244300A1 - Method and apparatus for motion invariant imaging - Google Patents

Method and apparatus for motion invariant imaging

Info

Publication number
US20090244300A1
US20090244300A1 US12/058,105 US5810508A US2009244300A1
Authority
US
United States
Prior art keywords
camera
moving
moving object
motion
entire scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US12/058,105
Other versions
US8451338B2
Inventor
Anat Levin
Peter Sand
Taeg Sang Cho
Fredo Durand
William T. Freeman
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Massachusetts Institute of Technology
Original Assignee
Massachusetts Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Massachusetts Institute of Technology filed Critical Massachusetts Institute of Technology
Priority to US12/058,105 (granted as US8451338B2)
Assigned to MASSACHUSETTS INSTITUTE OF TECHNOLOGY reassignment MASSACHUSETTS INSTITUTE OF TECHNOLOGY ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: FREEMAN, WILLIAM T., SAND, PETER, DURAND, FREDO, CHO, TAEG SANG, LEVIN, ANAT
Publication of US20090244300A1
Assigned to NATIONAL SCIENCE FOUNDATION reassignment NATIONAL SCIENCE FOUNDATION CONFIRMATORY LICENSE (SEE DOCUMENT FOR DETAILS). Assignors: MASSACHUSETTS INSTITUTE OF TECHNOLOGY
Application granted granted Critical
Publication of US8451338B2
Active legal-status Critical Current
Adjusted expiration legal-status Critical

Links

Images

Classifications

    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03B APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B17/00 Details of cameras or camera bodies; Accessories therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/73 Deblurring; Sharpening
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/68 Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/68 Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/682 Vibration or motion blur correction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20172 Image enhancement details
    • G06T2207/20201 Motion blur correction

Definitions

  • Motion blur often limits the quality of photographs and can be caused by either the shaking of the camera or the movement of photographed objects (e.g., subject, passerby, props, etc.) in the scene.
  • Modern cameras address the former case with image stabilization, where motion sensors control mechanical actuators that shift the sensor or camera lens element in real time during the exposure to compensate for the motion (shaking) of the camera, e.g. Canon. 2003. EF Lens Work III, The Eyes of EOS. Canon Inc. Lens Product Group.
  • image stabilization enables sharp hand-held photographs of still subjects at much longer shutter speeds, thereby reducing image noise.
  • image stabilization only addresses camera motion and cannot help with moving objects in the subject scene or field of view.
  • the typical motion-blur kernel is a line segment in the direction of motion, which corresponds to a box filter. This kernel severely attenuates high spatial frequencies and deconvolution quickly becomes ill-conditioned.
  • the length and direction of the blur kernel both depend on the motion and are therefore unknown and must be estimated.
  • motion blur usually varies over the image since different objects or regions can have different motion, and segmentation must be used to separate image regions with different motion.
  • the present invention addresses the foregoing problems in the art.
  • applicants show that if the motion is restricted to a 1D set of velocities, such as horizontal motion (as is the case with many real world objects like cars or walking people), one can address all three challenges of the prior art mentioned above.
  • Using camera hardware similar to that used for image stabilization, applicants and the present invention make the point-spread function (PSF) invariant to motion and easy to invert.
  • applicants/the invention system introduce a specific camera movement during exposure. This movement is designed so that the compound motion of the camera and any object velocity (within a speed range and along the selected orientation) and at any depth in the camera field of view results in the same easy-to-invert PSF.
  • a method and apparatus deblurs images of a moving object.
  • the invention system blurs an entire scene of the moving object. This blurring is in a manner which is invariant to velocity of the moving object.
  • the invention system deconvolutes the blurred entire scene and generates a reconstructed image. The reconstructed image displays the moving object in a deblurred state.
  • the invention step of blurring the entire scene is preferably implemented by moving any one or combination of the camera, a lens element of the camera and camera sensor.
  • This moving of the camera or part of the camera system includes linear movement in a range of direction and speed sufficient to include direction and speed of the moving object. Examples of linear movement here are a sinusoidal pattern, a parabolic pattern or any other simple harmonic motion and the like.
  • the linear movement follows a parabolic path by moving laterally initially at a maximum speed of the range and slowing to a stop and then moving in an opposite direction laterally, increasing in speed to a maximum speed of the range in the opposite direction and stopping.
  • the present invention may be applied to a variety of moving objects and environments.
  • the moving object may be (a) a moving vehicle on a roadway or other terrain, (b) a body part of a patient (human or animal), (c) in aerial photography, or other moving objects in a subject scene.
  • FIG. 1 a is a block diagram of an embodiment of the present invention.
  • FIG. 1 b is a flow diagram of an embodiment of the present invention.
  • FIGS. 2 a - 2 d are schematic graphs illustrating xt-slice and integration curves resulting from different camera motions.
  • FIGS. 2 e - 2 h are graphs illustrating corresponding integration curves sheared to account for object slope for FIGS. 2 a - 2 d.
  • FIGS. 2 i - 2 l are graphs of projected point spread functions corresponding to different object velocities of FIGS. 2 a - 2 d.
  • FIGS. 3 a - 3 d are a set of simulated photographs of five dots moving over a range of speeds and directions.
  • FIGS. 4 a - 4 d are illustrations of integration curve traces in space time and corresponding log spectra of a static camera, a parabolic motion camera of the present invention, a flutter shutter camera and the upper bound.
  • FIGS. 5 a - 5 c are synthetic visualizations (imagery) of point spread function information loss between a blurred input and deblurring solution result.
  • FIG. 6 is a schematic view of one embodiment of the present invention.
  • FIGS. 7 a - 7 c are photographic illustrations of deblurring results of the present invention.
  • FIGS. 8 a - 8 e are comparisons of deblurring photographic images using the present invention and using static camera deblurring of the art.
  • FIGS. 9 a - 9 b are scene views illustrating the present invention PSF clipping and a working velocities range.
  • the present invention PSF preserves more high frequencies for moving objects than a normal exposure. This however comes at the cost of slightly degraded performance for static objects.
  • applicants show that even if object motions could be estimated perfectly, the type of PSFs resulting from the parabolic camera motion is near-optimal for a stable deconvolution over a range of possible object speeds. This optimality is in the sense of minimizing the degradation of the reconstructed image for a range of potential object velocities.
  • Applicants' (the present invention's) design distributes this budget more uniformly, improving the reconstruction of all motion velocities, at the price of a slightly worse reconstruction of the static parts.
  • There are three basic options for implementing this camera movement: a translation of the full camera, a rotation of the camera, or a translation of the sensor or lens element.
  • the latter may be the best solution, and could be achieved with the existing hardware used for stabilization.
  • a camera rotation is a good approximation to sensor translation and is easier to implement as a prototype.
  • Applicants demonstrate a prototype using camera rotation and show 1D speed-invariant deconvolution results for a range of 1D and even for some 2D motions.
  • the present invention characterizes motion blur as an integration in space-time. Applicants show that the effect of object motion can be characterized by a shear. Applicants also show that a camera parabolic movement is invariant to shear in space-time and permits the removal of motion blur.
  • FIGS. 2 a - 2 d demonstrate an xt-slice—the green (central area) object 15 is static, which means that it is invariant to time and results in vertical lines in space time.
  • the blue (right hand side) and red (left hand side) objects 17 , 19 are moving in opposite directions, resulting in oblique lines.
  • the slope of these lines corresponds to image-space object velocity.
  • the space-time function of an object moving at constant velocity s is related to that of a static object by a shear, since kinematics gives x(t) = x(0) + st (Eq. 1).
  • the image recorded at time instance t is a shifted version of row t in the xt-plane (the xt-plane represents the scene relative to a static camera).
  • the recorded intensities are the average of all shifted images seen during the exposure length, at all infinitesimal time instances. That is, the sensor elements integrate light over curves in the xt-plane.
  • the simplest case is a static camera, which integrates light over straight vertical lines 13 ( FIG. 2 a ).
  • a sensor translating with constant velocity leads to slanted straight integration lines 21 (in FIG. 2 b the camera tracks the red object 19 motion).
  • the integration curve of a uniformly-translating sensor is a sheared version of that of the static sensor, following the same principle as the object-motion shear (but in the opposite direction). More complex sensor motion leads to more general curves.
  • FIG. 2 c presents a parabola 23, obtained with a translating sensor undergoing a parabolic displacement. If a shutter is fluttered during exposure as in Raskar et al. 2006 (cited above), the integration curve 25 is discontinuous as shown in FIG. 2 d.
  • the integration curves that applicants consider are spatially shift invariant. Applicants denote by L(x,t) the intensity of light rays in the xt-slice, I(x) the intensity of the captured image, f(t) the integration curve, and [−T,T] an integration interval of length 2T.
  • the captured image can be modeled as:
  • I(x) = ∫_{−T}^{T} L(f(t)+x, t) dt   (2)
  • Sheared curves are illustrated in FIGS. 2 e - 2 h.
  • the PSF φs for a moving object is the vertical projection of the sheared integration curve fs. Equivalently, it is the oblique projection of f along the direction of motion.
  • FIGS. 2 i - 2 l present the PSF 26 , 27 , 28 , 29 of the three different objects 15 , 17 , 19 , for each of the curves 13 , 21 , 23 , 25 , in FIGS. 2 a - 2 d.
  • if the integration curve is a straight line 13, 21, the PSF 26, 27 is a delta function for objects whose slope matches the integration slope, and a box filter for other slopes.
  • the box width is a function of the deviation between the object slope and integration slope.
  • Shear invariant curves: Applicants and the present invention derive a camera motion rule that leads to a velocity invariant PSF. One can achieve such an effect by devoting an equal share of the exposure time to tracing each possible velocity, so that all velocities are covered equally.
  • the derivative of an integration curve representing such motion should be linear and therefore, a candidate curve is a parabola.
  • PSFs corresponding to different velocities are obtained from sheared versions of the sensor integration curve.
  • a parabola curve 23 of the form f(t) = a0t² ( FIG. 2 c ).
  • the resulting PSF 28 behaves like 1/√(a0x) ( FIG. 2 k ).
  • a sheared parabola is also a parabola with the same scale; only its center shifts.
  • the projections φs are also identical up to a spatial shift.
  • the important practical application of this property is that if the camera is moved during integration along a parabola curve, one can deconvolve all captured images I with the same PSF, without segmenting the moving objects in the image, and without estimating their velocity or depth.
  • the small spatial shift of the PSF leads to a small spatial shift of the deconvolved image, but such a shift is uncritical as it does not translate to visual artifacts. This simply means that the position of moving objects corresponds to different time instants within the exposure.
  • the time shift for a given velocity corresponds to the time where the sensor is perfectly tracking this velocity.
  • φs(x) =
    1/√(a0(x + s²/(4a0)))   for 0 ≤ x + s²/(4a0) ≤ a0(T − s/(2a0))²
    1/(2√(a0(x + s²/(4a0))))   for a0(T − s/(2a0))² ≤ x + s²/(4a0) ≤ a0(T + s/(2a0))²
    0   otherwise   (6)
  • the tails of the PSF do depend on the slope s. This change in the tail clipping can also be observed in the projected PSFs 28 in FIG. 2 k. For a sufficiently bounded range of slopes the tail clipping happens far enough from the center and its effect can be neglected. However, Equation 6 also highlights the tradeoffs in the exact parabola scaling a0. Smaller a0 values lead to a sharper PSF. On the other hand, for a given integration interval [−T, T], the tail clipping starts at a0(T − s/(2a0))², so reducing a0 also reduces the range of s values for which the clipping is negligible.
  • FIG. 3 a shows five dots at the initial time, with their motion vectors
  • FIG. 3 b shows their final configuration at the end of the camera integration period
  • FIG. 3 c is the photograph obtained with a static camera, revealing the different impulse responses for each of the five different dot speeds.
  • FIG. 3 d shows the image that would be recorded from the camera undergoing a parabolic displacement. Note that now each of the dots creates virtually the same impulse response, regardless of its speed of translation (there is a small speed-dependent spatial offset to each kernel). This allows an unblurred image to be recovered from spatially invariant deconvolution.
  • the parabola is a shear invariant curve
  • parabolic displacement is one camera movement that yields a PSF invariant to motion.
  • this curve approaches optimality even if we drop the motion-invariant requirement. That is, suppose that we could perfectly segment the image into different motions, and accurately know the PSF of each segment. For good image restoration, we want the PSFs corresponding to different velocities or slopes to be as easy to invert as possible. In the Fourier domain, this means that low Fourier coefficients must be avoided.
  • for a given range of velocities, the ability to maximize the Fourier spectrum is bounded, and the parabolic integration approaches the optimal spectrum bound.
  • image-space object velocity corresponds to the slope of a shear in space time.
  • a given slope corresponds to a line orthogonal to the primal slope.
  • the shear in the primal corresponds to a shear in the opposite direction in the Fourier domain.
  • the frequency content of an object at velocity s is on the line of slope s going through the origin.
  • a range of velocities −S ≤ s ≤ S corresponds to a double wedge in the Fourier domain.
  • the convolution step is key to analyzing the loss of frequency content.
  • the convolution by k is a multiplication by its Fourier transform k̂.
  • a static camera has a kernel k that is a box in time, times a Dirac in space. Its Fourier transform is a sinc in time, times a constant in space ( FIG. 4 a ).
  • the convolution by k results in a reduction of the high temporal frequencies according to the sinc. Since faster motion corresponds to larger slopes, their frequency content is more severely affected, while a static object is perfectly imaged.
  • when one wants to cover a broader range of velocities, the budget must be split over a larger area of the Fourier domain, and the overall signal-to-noise ratio is reduced according to a square-root law.
  • the flutter-shutter approach adds a broad-band amplitude pattern to a static camera.
  • the integration kernel k is a vertical 1D function over [−T,T] and the amount of recorded light is halved. Because of the loss of light, the vertical budget is reduced from 2T to T for each ωx. Furthermore, since k is vertical, its Fourier transform is constant along ωx. This means that the optimal flutter code must have constant spectrum magnitude over the full domain of interest ( FIG. 4 c ). This is why the spatial resolution of the camera must be taken into account.
  • the intersection of the spatial bandwidth Ωmax of the camera and the range of velocities defines a finite double-wedge in the Fourier domain. The minimum magnitude of the slice at Ωx is bounded by T/(2SΩx).
  • the parabolic integration curve attempts to distribute the bandwidth budget equally for each ωx slice ( FIG. 4 b ), resulting in almost the same performance for each motion slope in the range, but a falloff proportional to 1/√ωx along the spatial dimension.
  • the infinite parabola has the same falloff as the upper bound.
  • the upper bound is valid for a finite exposure, and we can relate the infinite parabola to our finite kernel by a multiplication by a box in the spatial domain, which is a convolution by a sinc in the Fourier domain.
  • the parabola curve of finite exposures approaches the upper bound, in the limit of long exposure times.
  • the intuitive reason why the parabolic camera can better adapt to the wedged shape of the Fourier region of interest is that its kernel is not purely vertical, that is, the sensor is moving.
  • a parabola in 2D contains edge pieces of different slopes, which correspond to Fourier components of orthogonal orientation.
  • in FIGS. 5 a - 5 c applicants synthetically rendered a moving car. Applicants simulate a static camera ( FIG. 5 a ), a parabolic displacement ( FIG. 5 b ), and a static camera with a flutter-shutter ( FIG. 5 c ), all with an identical exposure length and an equal noise level.
  • the box-deblurred car in FIG. 5 a lost high frequencies.
  • the flutter-shutter reconstruction ( FIG. 5 c ) is much better, but the best results are obtained by the parabolic blur ( FIG. 5 b ) of the present invention.
  • a static camera, the flutter shutter camera, and a parabolic motion camera each offer different performance tradeoffs.
  • a static camera is optimal for photographing static objects, but suffers significantly in its ability to reconstruct spatial details of moving objects.
  • a flutter shutter camera is also excellent for photographing static objects (although it records a factor of two less light than a full exposure). It provides good spatial frequency bandwidth for recording moving objects and can handle 2D motion.
  • Motion-invariant photography of the present invention (e.g., parabolic motion camera) requires no speed estimation or object segmentation, but relative to the static camera and the flutter shutter camera it gives degraded reconstruction of static objects. While the invention method is primarily designed for 1D motions, applicants found it gave reasonable reconstructions of some 2D motions as well.
  • a camera system 103 includes an automated control assembly 101 (such as that in FIG. 6 or similar) and a camera 100 .
  • the camera 100 has (i) a lens and/or lens elements for focusing and defining the field of view 125 and (ii) an optional motion sensor.
  • the automated control assembly 101 has a motor and/or electro-mechanical mechanism for moving the camera 100 , lens, lens element(s) or sensor as prescribed by the present invention.
  • the automated control assembly 101 operates the camera 100 to produce a subject image having both moving objects 10 and static objects 20 .
  • the control assembly 101 operates the camera shutter together with moving the camera 100 (as a whole, or just the lens or sensor) preferably in a lateral parabolic motion. Other linear movements, such as a sinusoidal pattern or other simple harmonic motion, are also suitable.
  • the result is an entire scene blurred 110 with a single point spread function (PSF) throughout.
  • the blurred entire scene (i.e., working or intermediate image) 110 is invariant to object motion of the moving objects 10 .
  • a deconvolution processor 105 reconstructs the subject image with one PSF and without requiring a velocity or depth of the moving object 10 .
  • Step 205 is illustrative.
  • the deconvolution processor 105 may be within the camera 100 processing during image exposure (near real time) or external to the camera 100 processing subsequent to image exposure.
  • Deconvolution processor (or similar engine) 105 employs deconvolution algorithms and techniques known in the art. Example deconvolution techniques are as given in Levin, A., Fergus, R., Durand, F., and Freeman, W., “Image and depth from a conventional camera with a coded aperture,” SIGGRAPH, 2007; and Lucy, L., “Bayesian-based iterative method of image restoration,” Astronomical Journal, 1974, both herein incorporated by reference.
  • the reconstructed image 112 showing moving objects 10 and static objects 20 in an unblurred state is produced (generated as output).
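As a concrete illustration of the deconvolution stage performed by processor 105 (Step 205 above), the following Python/NumPy sketch applies frequency-domain Wiener deconvolution to a 1D scanline with a single known, spatially uniform PSF. This is a minimal sketch, not the patent's implementation: the function name, the snr constant, and the use of Wiener filtering in place of the cited Richardson-Lucy style iteration are assumptions made for the example.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, snr=100.0):
    """Deconvolve a 1D scanline with one known, spatially uniform PSF.

    Wiener filtering stands in here for any standard deconvolution
    method (e.g. the Lucy 1974 iteration cited above).
    """
    n = blurred.shape[0]
    kernel = np.zeros(n)
    kernel[:psf.size] = psf / psf.sum()          # unit-energy blur kernel
    kernel = np.roll(kernel, -(psf.size // 2))   # center the kernel at x = 0
    H = np.fft.fft(kernel)                       # PSF frequency response
    B = np.fft.fft(blurred)
    # Attenuate frequencies where the PSF response is weak relative to noise.
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft(W * B))
```

Because the parabolic exposure makes the PSF essentially identical everywhere in the frame, this one kernel serves the entire image; no motion segmentation precedes the deconvolution.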
  • the hardware shown in FIG. 6 rotates the camera 61 in a controlled fashion to evaluate the potential of motion-invariant photography.
  • the camera for instance a Canon EOS 1D Mark II with an 85 mm lens
  • a rotating cam 67 moves a lever 65 that is rigidly attached to the camera 61 to generate the desired acceleration.
  • the cam 67 edge is designed such that the polar coordinate radius is a parabola.
  • FIGS. 7 a - c present deblurring results on images captured by the invention camera 100 , 61 .
  • FIGS. 7 a, 7 b, 7 c present the scene (top row) captured by a synchronized static camera and the deconvolution results (bottom row) obtained from the invention moving camera 100 , 61 .
  • the moving objects were mounted on a linear rail to create multiple velocities from the multiple depth layers.
  • the other pairs of images ( FIGS. 7 b, 7 c ) involved natural human motions.
  • the invention approach (bottom row, FIGS. 7 a - 7 c ) deblurs the images reasonably well despite the fact that the human motions were neither perfectly linear nor horizontal.
  • the middle pair of images ( FIG. 7 b ) shows multiple independent motions in opposite directions, all deblurred well in the present invention (bottom FIG. 7 b ).
  • the far-right image pair ( FIG. 7 c ) had many non-horizontal motions, resulting from the man (moving subject) walking toward the camera. While the face contains some residual blur, the deconvolved image has few objectionable artifacts.
  • FIGS. 8 a - 8 e demonstrate these challenges. For example, we can try to deconvolve the image with a range of box widths, and manually pick the one producing the most visually plausible results ( FIG. 8 b ). In the first row of images ( FIG. 8 a ), static camera images are presented. Here deconvolving with the correct blur can sharpen the moving layer, but creates significant artifacts in the static parts, and an additional segmentation stage is required. In the second case, images of FIG. 8 b show a box filter fitted manually to the moving layer and applied to deblur the entire image.
  • in FIGS. 8 c - 8 d, results of a recent automatic algorithm (Levin 2006, cited above) are shown.
  • the images of FIG. 8 c employed layers segmentation by Levin 2006, and the images of FIG. 8 d show deblurring results of Levin 2006. While this algorithm did a reasonable job for the left-hand side image (which was captured using a linear rail) it did a much worse job on the human motion in the right-hand side image.
  • the present invention results obtained spatially uniform deconvolution of images ( FIG. 8 e ) from parabolic integration.
  • in FIGS. 9 a - 9 b applicants illustrate the effect of the PSF tail clipping (discussed above in FIG. 2 k and Eq. 6) on the valid velocity range.
  • Applicants used the invention parabolic camera to capture a binary pattern. The pattern was static in the first shot (top row FIGS. 9 a - 9 b ) and for the other two (middle and bottom rows FIGS. 9 a - 9 b ) linearly moving during the exposure.
  • the static pattern was easily deblurred, and the PSF approximation is reasonable for the slow motion case as well.
  • for faster motion, deconvolution artifacts start to be observed, as the effect of the clipping of the PSF tails becomes important and the static-object PSF is no longer an accurate approximation of the moving object's smear.
  • the present invention suggests a solution that handles motion blur along a 1D direction.
  • by moving the camera 100 ( FIGS. 1 a, 1 b ) in the prescribed parabolic displacement during exposure, the blur resulting from this special camera motion is shown to be invariant to object (moving and/or static) depth and velocity.
  • blur can be removed by deconvolving the entire image with an identical, known PSF.
  • This solution eliminates the major traditional challenges involved with motion deblurring: the need to segment motion layers and estimate a precise PSF in each of them.
  • the present invention analyzes the amount of information that can be maintained by different camera paths, and shows that the parabola path approaches the optimal PSF whose inversion is stable at all velocities.
  • This motion may be manually produced by the user (photographer) in some embodiments and mechanically automated, such as by a controller assembly 101 , in other embodiments.
  • controller 101 and deconvolution processor 105 may be executed by one CPU (digital processing unit) or chip in camera 100 or may be separate computers/digital processing systems, either stand alone or networked. Common computer network communications, configurations, protocols, busses, interfaces, couplings (wireless, etc.) and the like are used.
  • the network may be a wide area network, a local area network, a global computer network (e.g., Internet) and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

Object motion during camera exposure often leads to noticeable blurring artifacts. Proper elimination of this blur is challenging because the blur kernel is unknown, varies over the image as a function of object velocity, and destroys high frequencies. In the case of motions along a 1D direction (e.g. horizontal), applicants show that these challenges can be addressed using a camera that moves during the exposure. Through the analysis of motion blur as space-time integration, applicants show that a parabolic integration (corresponding to constant sensor acceleration) leads to motion blur that is not only invariant to object velocity, but preserves image frequency content nearly optimally. That is, static objects are degraded relative to their image from a static camera, but all moving objects within a given range of motions reconstruct well. A single deconvolution kernel can be used to remove blur and create sharp images of scenes with objects moving at different speeds, without requiring any segmentation and without knowledge of the object speeds.

Description

    GOVERNMENT SUPPORT
  • The invention was supported, in whole or in part, by the following grants:
      • HM 1582-05-C-0011 from the National Geospatial Intelligence Agency,
      • IIS-0413232 from the National Science Foundation, and
      • CAREER 0447561 from the National Science Foundation.
  • The government has certain rights in the invention.
  • BACKGROUND OF THE INVENTION
  • Motion blur often limits the quality of photographs and can be caused by either the shaking of the camera or the movement of photographed objects (e.g., subject, passerby, props, etc.) in the scene. Modern cameras address the former case with image stabilization, where motion sensors control mechanical actuators that shift the sensor or camera lens element in real time during the exposure to compensate for the motion (shaking) of the camera, e.g. Canon. 2003. EF Lens Work III, The Eyes of EOS. Canon Inc. Lens Product Group. The use of image stabilization enables sharp hand-held photographs of still subjects at much longer shutter speeds, thereby reducing image noise. Unfortunately, image stabilization only addresses camera motion and cannot help with moving objects in the subject scene or field of view.
  • One option is to remove the blur after the shot was taken using deconvolution. However, this raises several challenges. First, the typical motion-blur kernel is a line segment in the direction of motion, which corresponds to a box filter. This kernel severely attenuates high spatial frequencies and deconvolution quickly becomes ill-conditioned. Second, the length and direction of the blur kernel both depend on the motion and are therefore unknown and must be estimated. Finally, motion blur usually varies over the image since different objects or regions can have different motion, and segmentation must be used to separate image regions with different motion. These two latter challenges lead most existing motion deblurring strategies to rely on multiple input images (see Bascle, B., Blake, A., and Zisserman, A., “Motion de-blurring and superresolution from an image sequence,” ECCV, 1996; Rav-Acha, A., and Peleg, S., “Two motion-blurred images are better than one,” Pattern Recognition Letters, 2005; Zheng, M. S. J., “A slit scanning depth of route panorama from stationary blur,” Proc. IEEE Conf. Comput. Vision Pattern Recog., 2005; Bar, L., Berkels, B., Sapiro, G., and Rumpf, M., “A variational framework for simultaneous motion estimation and restoration of motion-blurred video,” ICCV, 2007; Ben-Ezra, M., and Nayar, S. K., “Motion-based motion deblurring,” PAMI, 2004; Yuan, L., Sun, J., Quan, L., and Shum, H., “Image deblurring with blurred/noisy image pairs,” SIGGRAPH, 2007.)
  • More recent methods attempt to remove blur from a single input image using natural image statistics (see Fergus, R., Singh, B., Hertzmann, A., Roweis, S., and Freeman, W., “Removing camera shake from a single photograph,” SIGGRAPH, 2006; Levin, A., “Blind motion deblurring using image statistics,” Advances in Neural Information Processing Systems (NIPS), 2006). While these techniques demonstrated impressive abilities, their performance is still far from perfect. Raskar et al. proposed a hardware approach that addresses the first challenge (Raskar, R., Agrawal, A., and Tumblin, J., “Coded exposure photography: Motion deblurring using fluttered shutter,” ACM Transactions on Graphics, SIGGRAPH 2006 Conference Proceedings, Boston, Mass. vol. 25, pgs. 795-804). A fluttered shutter modifies the line segment kernel to achieve a more broad-band frequency response, which allows for dramatically improved deconvolution results. While the Raskar approach blocks half of the light, the improved kernel is well worth the tradeoff. However, this approach still requires the precise knowledge of motion segmentation boundaries and object velocities, an unsolved problem.
  • SUMMARY OF THE INVENTION
  • The present invention addresses the foregoing problems in the art. In the present invention, applicants show that if the motion is restricted to a 1D set of velocities, such as horizontal motion (as is the case with many real world objects like cars or walking people), one can address all three challenges of the prior art mentioned above. Using camera hardware similar to that used for image stabilization, applicants and the present invention make the point-spread function (PSF) invariant to motion and easy to invert. For this, applicants/the invention system introduce a specific camera movement during exposure. This movement is designed so that the compound motion of the camera and any object velocity (within a speed range and along the selected orientation) and at any depth in the camera field of view results in the same easy-to-invert PSF. Since the entire scene is blurred with an identical PSF (up to tail truncation), including static objects and moving objects, the blur can be removed via deconvolution, without segmenting moving objects and without estimating their velocity. In practice, applicants find that motions even somewhat away from the selected 1D orientation are deblurred as well.
  • In a preferred embodiment, a method and apparatus deblurs images of a moving object. During imaging of the moving object, the invention system blurs an entire scene of the moving object. This blurring is in a manner which is invariant to velocity of the moving object. Next the invention system deconvolutes the blurred entire scene and generates a reconstructed image. The reconstructed image displays the moving object in a deblurred state.
  • The invention step of blurring the entire scene is preferably implemented by moving any one or combination of the camera, a lens element of the camera and camera sensor. This moving of the camera or part of the camera system includes linear movement in a range of direction and speed sufficient to include direction and speed of the moving object. Examples of linear movement here are a sinusoidal pattern, a parabolic pattern or any other simple harmonic motion and the like.
  • In one embodiment, the linear movement follows a parabolic path by moving laterally initially at a maximum speed of the range and slowing to a stop and then moving in an opposite direction laterally, increasing in speed to a maximum speed of the range in the opposite direction and stopping.
  • The present invention may be applied to a variety of moving objects and environments. For non-limiting examples, the moving object may be (a) a moving vehicle on a roadway or other terrain, (b) a body part of a patient (human or animal), (c) in aerial photography, or other moving objects in a subject scene.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The foregoing will be apparent from the following more particular description of example embodiments of the invention, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating embodiments of the present invention.
  • FIG. 1 a is a block diagram of an embodiment of the present invention.
  • FIG. 1 b is a flow diagram of an embodiment of the present invention.
  • FIGS. 2 a-2 d are schematic graphs illustrating xt-slice and integration curves resulting from different camera motions.
  • FIGS. 2 e-2 h are graphs illustrating corresponding integration curves sheared to account for object slope for FIGS. 2 a-2 d.
  • FIGS. 2 i-2 l are graphs of projected point spread functions corresponding to different object velocities of FIGS. 2 a-2 d.
  • FIGS. 3 a-3 d are a set of simulated photographs of five dots moving over a range of speeds and directions.
  • FIGS. 4 a-4 d are illustrations of integration curve traces in space time and corresponding log spectra of a static camera, a parabolic motion camera of the present invention, a flutter shutter camera and the upper bound.
  • FIGS. 5 a-5 c are synthetic visualizations (imagery) of point spread function information loss between a blurred input and deblurring solution result.
  • FIG. 6 is a schematic view of one embodiment of the present invention.
  • FIGS. 7 a-7 c are photographic illustrations of deblurring results of the present invention.
  • FIGS. 8 a-8 e are comparisons of deblurring photographic images using the present invention and using static camera deblurring of the art.
  • FIGS. 9 a-9 b are scene views illustrating the present invention PSF clipping and a working velocities range.
  • DETAILED DESCRIPTION OF THE INVENTION
  • A description of example embodiments of the invention follows.
  • Applicants' approach is inspired by wavefront coding (Cathey, W., and Dowski, R., “A new paradigm for imaging systems,” Applied Optics, No. 41, pgs. 1859-1866, (1995)), where depth of field is improved by modifying a lens to make the defocus blur invariant to depth and easy to invert. While the cited work deals with wave optics and depth of field, applicants and the present invention consider geometric ray optics and remove 1D motion blur.
  • By analyzing motion blur as integration in a space time volume over curves resulting from camera and object motion, applicants prove that one integration curve that results in a motion-invariant PSF (point-spread function) is a parabola. This corresponds to constant 1D acceleration of the camera, first going fast in one direction, progressively slowing down to a stop and then picking up speed in the other (opposite) direction. As a result for any object velocity within a range, there is always one moment during exposure where the camera is perfectly tracking (in speed and direction) the object. While the camera motion is along a straight or lateral line, applicants call it “parabolic motion” because of the parabolic relationship between position and time.
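To make the prescribed motion concrete, the sketch below traces the position and velocity of a sensor under this constant acceleration. The exposure half-length T and parabola scale a0 are illustrative assumptions, not values from the patent.

```python
import numpy as np

T, a0 = 0.5, 2.0                  # assumed exposure half-length and scale
t = np.linspace(-T, T, 1001)      # exposure interval [-T, T]
position = a0 * t**2              # parabolic displacement f(t) = a0 * t^2
velocity = 2.0 * a0 * t           # constant acceleration 2 * a0

# The sensor velocity sweeps linearly from -2*a0*T to +2*a0*T, so every
# object velocity in that range is perfectly tracked at exactly one instant.
S = 2.0 * a0 * T
assert np.isclose(velocity.min(), -S) and np.isclose(velocity.max(), S)
```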
  • In addition to its invariance to object speed, the present invention PSF preserves more high frequencies for moving objects than a normal exposure. This however comes at the cost of slightly degraded performance for static objects. In fact, applicants show that even if object motions could be estimated perfectly, the type of PSFs resulting from the parabolic camera motion is near-optimal for a stable deconvolution over a range of possible object speeds. This optimality is in the sense of minimizing the degradation of the reconstructed image for a range of potential object velocities. In a nutshell, applicants show that there is a fixed bandwidth budget for imaging objects at different velocities. For example, a static camera spends most of this budget to achieve high-quality images of static objects, at the cost of severe blur for moving objects. Applicants' (the present invention's) design distributes this budget more uniformly, improving the reconstruction of all motion velocities, at the price of a slightly worse reconstruction of the static parts.
  • There are three basic options for implementing this camera movement: a translation of the full camera, a rotation of the camera, or a translation of the sensor or lens element. For a commercial product, the latter may be the best solution, and could be achieved with the existing hardware used for stabilization. However, if the focal length is not too wide-angle, a camera rotation is a good approximation to sensor translation and is easier to implement as a prototype. Applicants demonstrate a prototype using camera rotation and show 1D speed-invariant deconvolution results for a range of 1D and even for some 2D motions.
  • Motion Invariant Integration
  • In order to derive a motion-invariant photography scheme, the present invention characterizes motion blur as an integration in space-time. Applicants show that the effect of object motion can be characterized by a shear. Applicants also show that a camera parabolic movement is invariant to shear in space-time and permits the removal of motion blur.
  • Space-time analysis of motion blur: The set of 2D images falling on a detector over time forms a 3D space-time volume of image intensities. Consider a 2D xt-slice through that 3D space-time volume. Each row in this slice represents a horizontal 1D image, as captured by a static pinhole camera with an infinitesimal exposure time.
  • For sufficiently small exposure, a first order approximation to the object motion is sufficient, and the motion path is assumed to be linear. In this case, scene points trace straight lines in the xt-slice and the slopes of these lines are a function of the object velocity and depth. FIGS. 2 a-2 d demonstrate an xt-slice: the green (central area) object 15 is static, which means that it is invariant to time and results in vertical lines in space time. The blue (right hand side) and red (left hand side) objects 17, 19 are moving in opposite directions, resulting in oblique lines. The slope of these lines corresponds to image-space object velocity.
  • Formally, the space-time function of an object moving at constant velocity s is related to that of a static object by a shear, since kinematics gives:

  • x(t)=x(0)+st   (1)
  • Camera motion and integration: If the sensor is translating, the image recorded at time instance t is a shifted version of row t in the xt-plane (the xt-plane represents the scene relative to a static camera). Thus, when the scene is captured by a translating sensor over a finite exposure time, the recorded intensities (the blurred image) are the average of all shifted images seen during the exposure length, at all infinitesimal time instances. That is, the sensor elements integrate light over curves in the xt-plane. The simplest case is a static camera, which integrates light over straight vertical lines 13 (FIG. 2 a). A sensor translating with constant velocity leads to slanted straight integration lines 21 (in FIG. 2 b the camera tracks the red object 19 motion). The integration curve of a uniformly-translating sensor is a sheared version of that of the static sensor, following the same principle as the object-motion shear (but in the opposite direction). More complex sensor motion leads to more general curves. For example FIG. 2 c presents a parabola 23, obtained with a translating sensor undergoing a parabolic displacement. If a shutter is fluttered during exposure as in Raskar et al. 2006 (cited above), the integration curve 25 is discontinuous as shown in FIG. 2 d.
  • Since the applicants only translate the sensor, the integration curves that applicants consider are spatially shift invariant. Applicants denote by L(x,t) the intensity of light rays in the xt-slice, I(x) the intensity of the captured image, f(t) the integration curve, and [−T,T] an integration interval of length 2T. The captured image can be modeled as:
  • I(x) = ∫_{−T}^{T} L(f(t)+x, t) dt   (2)
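Equation 2 can be discretized directly by averaging shifted scene rows over the exposure. The sketch below is an illustration under stated assumptions: the names capture and scene_row are invented, shifts are rounded to whole pixels, and np.roll wraps at the image borders.

```python
import numpy as np

def capture(scene_row, f, T, steps=500):
    """Discretize Eq. 2: I(x) = integral over [-T, T] of L(f(t) + x, t) dt.

    scene_row: function t -> 1D scanline L(., t)
    f:         integration curve t -> sensor displacement, in pixels
    """
    ts = np.linspace(-T, T, steps)
    rows = [np.roll(scene_row(t), -int(round(f(t)))) for t in ts]
    return np.mean(rows, axis=0)

# Example: one bright point moving at speed s, imaged with a parabolic curve.
base = np.zeros(256)
base[128] = 1.0
scene_row = lambda t, s=60.0: np.roll(base, int(round(s * t)))
blurred = capture(scene_row, lambda t: 100.0 * t**2, T=0.5)
```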
  • The Point Spread Function: Denote by I0(x) an ideal instantaneous pinhole image I0(x)=L(x, 0). The movement of the camera creates motion blur. For a static object, applicants can model it as a convolution of I0 with a Point Spread Function (PSF) φ0: I = φ0 ⊗ I0. In this case, φ0 is simply the projection of f along the time direction onto the spatial line:

  • φ0(x) = ∫ δ(f(t) − x) dt   (3)
  • where δ is a Dirac.
  • Now consider objects moving at speed s and seek to derive an equivalent Point Spread Function φs. We can reduce this to the static case by applying a change of frame that “stabilizes” this motion, that is, that makes the space-time volume of this object vertical. We apply the inverse of the shear in Eq. 1, which is the shear in the opposite direction, and the sheared curve can be expressed as:

  • fs(t)=f(t)−st   (4)
  • Sheared curves are illustrated in FIGS. 2 e-2 h. As a result, the PSF φs for a moving object is the vertical projection of the sheared integration curve fs. Equivalently, it is the oblique projection of f along the direction of motion.
  • FIGS. 2 i-2 l present the PSF 26, 27, 28, 29 of the three different objects 15, 17, 19, for each of the curves 13, 21, 23, 25, in FIGS. 2 a-2 d. For example, if the integration curve is a straight line 13, 21 the PSF 26, 27 is a delta function for objects whose slope matches the integration slope, and a box filter for other slopes. The box width is a function of the deviation between the object slope and integration slope.
  • The analytic way to derive this projection is to note that the vertical projection is the “amount of time” the curve f spent at the spatial point x, i.e., the slope of the inverse curve. That is, if gs = fs⁻¹ is the inverse curve, the PSF (the vertical projection) satisfies φs(x) = gs′(x).
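Numerically, this projection is just a normalized histogram of the sheared curve's positions, i.e. the fraction of exposure time the curve spends in each spatial bin. A sketch with assumed parameter values follows; comparing the histograms for different slopes shows the velocity invariance derived next.

```python
import numpy as np

def psf_of_curve(f, s, T, samples=200_000, bins=256, span=120.0):
    """PSF for slope s: vertical projection of the sheared curve f(t) - s*t."""
    t = np.linspace(-T, T, samples)
    x = f(t) - s * t                              # sheared integration curve
    hist, edges = np.histogram(x, bins=bins, range=(-span / 2, span / 2))
    return hist / hist.sum(), edges               # unit-energy PSF

a0, T = 100.0, 0.5
parabola = lambda t: a0 * t**2
# Up to a small speed-dependent spatial shift, the projected PSF comes out
# the same for every slope in the covered range:
psfs = {s: psf_of_curve(parabola, s, T)[0] for s in (-50.0, 0.0, 50.0)}
```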
  • Shear invariant curves: Applicants and the present invention derive a camera motion rule that leads to a velocity invariant PSF. One can achieve such an effect by devoting an equal share of the exposure time to tracing each possible velocity, so that all velocities are covered equally. The derivative of an integration curve representing such motion should be linear and therefore, a candidate curve is a parabola.
  • Applicants have shown that PSFs corresponding to different velocities are obtained from sheared versions of the sensor integration curve. Consider a parabola curve 23 of the form f(t) = a0t² (FIG. 2 c). The resulting PSF 28 behaves like 1/√(a0x) (FIG. 2 k). Applicants note that a sheared parabola is also a parabola with the same scale; only the center shifts:
  • fs(t) = f(t) − st = a0(t − s/(2a0))² − s²/(4a0)   (5)
  • Thus, the projections φs are also identical up to a spatial shift. The important practical application of this property is that if the camera is moved during integration along a parabola curve, one can deconvolve all captured images I with the same PSF, without segmenting the moving objects in the image, and without estimating their velocity or depth. The small spatial shift of the PSF leads to a small spatial shift of the deconvolved image, but such a shift is uncritical as it does not translate to visual artifacts. This simply means that the position of moving objects corresponds to different time instants within the exposure. The time shift for a given velocity corresponds to the time where the sensor is perfectly tracking this velocity.
  • It is noted that the above invariance involves two approximations. The first approximation has to do with the fact that the invariant convolution model is wrong at the motion layer boundaries. However, this has not been a major practical issue in applicants' experiments and is visible only when both foreground and background have high-contrast textures. The second approximation results from the fact that a parabola is perfectly shear invariant only if an infinite integration time is used. For any finite time interval, the accurate projection is equal to:
  • φs(x) =
    1/√(a0(x + s²/(4a0)))   for 0 ≤ x + s²/(4a0) ≤ a0(T − s/(2a0))²
    1/(2√(a0(x + s²/(4a0))))   for a0(T − s/(2a0))² ≤ x + s²/(4a0) ≤ a0(T + s/(2a0))²
    0   otherwise   (6)
  • Thus, for a finite integration interval, the tails of the PSF do depend on the slope s. This change in the tail clipping can also be observed in the projected PSFs 28 in FIG. 2 k. For a sufficiently bounded range of slopes the tail clipping happens far enough from the center and its effect could be neglected. However, Equation 6 also highlights the tradeoffs in the exact parabola scaling a0. Smaller a0 values lead to a sharper PSF. On the other hand, for a given integration interval [−T, T], the tail clipping starts at
  • a0(T − s/(2a0))²;
  • thus reducing a0 also reduces the range of s values for which the tail clipping is actually negligible.
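Equation 6 can be evaluated directly to see how the tail clipping grows with the slope s. The sketch below assumes 0 ≤ s ≤ 2a0T (the projection is unchanged by time reversal, so the PSF is even in s and negative slopes need no separate case); all parameter values are illustrative.

```python
import numpy as np

def psf_eq6(x, s, a0, T):
    """Closed-form finite-exposure PSF of Eq. 6 for f(t) = a0*t^2, slope s.

    Assumes 0 <= s <= 2*a0*T; for negative s pass abs(s).
    """
    u = x + s**2 / (4.0 * a0)              # distance from the parabola apex
    lo = a0 * (T - s / (2.0 * a0)) ** 2    # where the shorter arm exits [-T, T]
    hi = a0 * (T + s / (2.0 * a0)) ** 2    # where the longer arm exits [-T, T]
    out = np.zeros_like(u)
    both = (u > 0) & (u <= lo)             # both parabola arms contribute
    one = (u > lo) & (u <= hi)             # only the longer arm remains
    out[both] = 1.0 / np.sqrt(a0 * u[both])
    out[one] = 0.5 / np.sqrt(a0 * u[one])
    return out

x = np.linspace(-10.0, 60.0, 2048)
slow = psf_eq6(x, 5.0, a0=100.0, T=0.5)    # clipping far from the center
fast = psf_eq6(x, 80.0, a0=100.0, T=0.5)   # clipping reaches closer to the center
```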
  • Simulation: To simulate the blur from a camera moving in a parabolic displacement in space-time (constant 1D acceleration), applicants projected synthetic scenes and summed displaced images over the camera integration time. FIG. 3 a shows five dots at the initial time, with their motion vectors, and FIG. 3 b shows their final configuration at the end of the camera integration period. FIG. 3 c is the photograph obtained with a static camera, revealing the different impulse responses for each of the five different dot speeds. FIG. 3 d shows the image that would be recorded from the camera undergoing a parabolic displacement. Note that now each of the dots creates virtually the same impulse response, regardless of its speed of translation (there is a small speed-dependent spatial offset to each kernel). This allows an unblurred image to be recovered from spatially invariant deconvolution.
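The simulation is straightforward to reproduce: render each dot's motion, sum the camera-displaced frames over the exposure, and compare the recorded kernels. A sketch follows; the sizes, speeds, and wrap-around border handling are assumptions.

```python
import numpy as np

def blur_of_dot(speed, a0=100.0, T=0.5, n=512, steps=400, parabolic=True):
    """Blur of one moving dot: the sum of displaced frames over the exposure."""
    row = np.zeros(n)
    for t in np.linspace(-T, T, steps):
        cam = a0 * t**2 if parabolic else 0.0      # camera displacement
        x = int(round(n / 2 + speed * t - cam)) % n
        row[x] += 1.0 / steps
    return row

speeds = (-50.0, -25.0, 0.0, 25.0, 50.0)
static_cam = np.stack([blur_of_dot(s, parabolic=False) for s in speeds])
parabolic_cam = np.stack([blur_of_dot(s) for s in speeds])
# static_cam rows are boxes whose widths vary with speed (cf. FIG. 3 c);
# parabolic_cam rows are nearly identical up to a small shift (cf. FIG. 3 d).
```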
  • Optimality
  • Applicants derive optimality criteria as follows.
  • Upper Bound
  • We have seen that the parabola is a shear invariant curve, and that parabolic displacement is one camera movement that yields a PSF invariant to motion. Here applicants show that, in the case of 1D motions, this curve approaches optimality even if we drop the motion-invariant requirement. That is, suppose that we could perfectly segment the image into different motions, and accurately know the PSF of each segment. For good image restoration, we want the PSFs corresponding to different velocities or slopes to be as easy to invert as possible. In the Fourier domain, this means that low Fourier coefficients must be avoided. We show that, for a given range of velocities, the ability to maximize the Fourier spectrum is bounded and that our parabolic integration approaches the optimal spectrum bound.
  • At a high level, our proof is a bandwidth budget argument. We show that, for a given spatial frequency ωx, we have a fixed budget which must be shared by all motion slopes. A static camera spends most of this budget on static objects and therefore does poorly for other object speeds. In contrast, our approach attempts to distribute this budget uniformly across the range of velocities and makes sure that no coefficient is low.
  • Space time integration in the frequency domain: We consider the Fourier domain ωx, ωt of a scanline of space time. Fourier transforms will be denoted with a hat, and Fourier pairs will be denoted k ↔ k̂.
  • First consider the space-time function of a static object. It is constant over time, which means that its Fourier transform is non-zero only on the pure spatial frequency line ωt=0. This line is the 1D Fourier transform of the ideal instantaneous image I0.
  • We have seen that image-space object velocity corresponds to the slope of a shear in space time. In the frequency domain, a given slope corresponds to a line orthogonal to the primal slope. Or equivalently, the shear in the primal corresponds to a shear in the opposite direction in the Fourier domain. The frequency content of an object at velocity s is on the line of slope s going through the origin. A range of velocities −S ≤ s ≤ S corresponds to a double wedge in the Fourier domain. This is similar to the link between depth and light field spectra (Chai, J., Tong, X., Chan, S., and Shum, H., “Plenoptic sampling,” SIGGRAPH, 2000; Isaksen, A., McMillan, L., and Gortler, S. J., “Dynamically reparameterized light fields,” SIGGRAPH, 2000). This double wedge is the frequency content that we strive to record. Areas of the Fourier domain outside it correspond to faster motion, and can be sacrificed.
  • Consider a given light integration curve f and its 2D trace k(x,t) in space time, where k(x,t) is non zero only at x=f(t) (FIGS. 4 a-4 d). The 1D image scanline can be seen as the combination of a 2D convolution in space time by k, and a 2D (two-dimensional) to 1D (one-dimensional) slicing. That is, we look up the result of the convolution only at time t=0. The convolution step is key to analyzing the loss of frequency content. In the Fourier domain, the convolution by k is a multiplication by its Fourier transform k̂.
  • For example, a static camera has a kernel k that is a box in time, times a Dirac in space. Its Fourier transform is a sinc in time, times a constant in space (FIG. 4 a). The convolution by k results in a reduction of the high temporal frequencies according to the sinc. Since faster motion corresponds to larger slopes, their frequency content is more severely affected, while a static object is perfectly imaged.
  • In summary, we have reduced the problem to designing an integration curve whose spectrum k̂ has the highest possible Fourier coefficients in the double-wedge defined by a desired velocity range.
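This reduction can be explored numerically by rasterizing the curve's 2D trace k(x,t) and inspecting the magnitude of its 2D FFT. The grid sizes, spatial range, and function name below are assumptions of the sketch.

```python
import numpy as np

def kernel_trace(f, T=0.5, nx=256, nt=256, x_span=60.0):
    """Rasterize k(x,t): one unit of light per time row, located at x = f(t)."""
    k = np.zeros((nt, nx))
    ts = np.linspace(-T, T, nt)
    xs = np.linspace(-x_span / 2, x_span / 2, nx)
    for i, t in enumerate(ts):
        k[i, np.argmin(np.abs(xs - f(t)))] = 1.0   # nearest spatial bin
    return k

a0, T = 100.0, 0.5
k_static = kernel_trace(lambda t: 0.0 * t)                    # static camera
k_parab = kernel_trace(lambda t: a0 * t**2 - a0 * T**2 / 2)   # centered parabola
K_static = np.abs(np.fft.fftshift(np.fft.fft2(k_static)))
K_parab = np.abs(np.fft.fftshift(np.fft.fft2(k_parab)))
# K_static is constant along w_x but falls off as a sinc in w_t, starving
# fast (high-slope) objects; K_parab spreads magnitude across the wedge.
```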
  • Slicing: We now show that for each vertical slice of the Fourier double wedge, we have a fixed bandwidth budget because of conservation of energy in the spatial domain. That is, the sum of the squared Fourier coefficients for a given spatial frequency ωx is bounded from above.
  • When studying slices in the Fourier domain, we can use the slicing theorem. First consider the vertical Fourier slice k̂0 going through (0,0). In the primal space time, this Fourier slice corresponds to the projection along the horizontal x direction.

  • k̂0(ωt) ↔ kp(t) = ∫ k(x,t) dx   (7)
  • And using the shifting property, we obtain an arbitrary slice for a given ωx using

  • k̂ωx(ωt) ↔ ∫ k(x,t) e^{−2πiωx x} dx
  • which only introduces phase shifts in the integral.
  • Conservation of energy: We have related slices in the Fourier domain to space-only integrals of our camera's light integration curve in space-time. In particular, the central slice is the Fourier transform of kp(t), the total amount of light recorded by the sensor at a given moment during the exposure. Conservation of energy imposes

  • kp(t) ≤ 1   (8)
  • Since k is non-zero only during the 2T exposure time, we get a bound on the square integral

  • ∫ kp(t)² dt ≤ 2T   (9)
  • This bound is not affected by the phase shift used to extract slices at different ωx.
  • Furthermore, by Parseval's theorem, the square integral is the same in the dual and the primal domains. This means that for each slice at a spatial frequency ωx,
  • ∫ |k̂ωx(ωt)|² dωt = ∫ |kp(t) e^{−2πiωx f(t)}|² dt   (10)
    ≤ 2T   (11)
  • The squared integral for a slice is bounded by a fixed budget of 2T. In order to maximize the minimal frequency response, one should use a constant magnitude. Given the wedged shape of our velocity range in the Fourier domain, we get
  • min_{ωt} |k̂ωx(ωt)|² ≤ T/(S ωx)   (12)
  • where S is the absolute maximal slope (speed). This upper bound is visualized in FIG. 4 d. In other words, if we wish to maximize the spectrum of the PSFs over a finite slope range −S≦s≦S, Eq 12 provides an upper bound on how much we can hope to achieve.
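The budget argument of Eqs. 8-11 can be sanity-checked on a rasterized kernel, reusing kernel_trace() from the earlier sketch; the tolerances are arbitrary and the discretization itself is an assumption.

```python
import numpy as np

nt, T = 256, 0.5
dt = 2.0 * T / nt                                      # width of one time step
k = kernel_trace(lambda t: 100.0 * t**2, T=T, nt=nt)   # any curve works here
kp = k.sum(axis=1)                           # k_p(t): light recorded at time t
assert np.all(kp <= 1.0 + 1e-12)             # Eq. 8: conservation of energy
assert (kp**2).sum() * dt <= 2.0 * T + 1e-9  # Eq. 9: the fixed 2T budget
# By Parseval (Eqs. 10-11), the same 2T caps the energy of every Fourier
# slice, which Eq. 12 then spreads across the slope wedge.
```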
  • When one wants to cover a broader range of velocities, the budget must be split over a larger area of the Fourier domain, and the overall signal-to-noise ratio is reduced according to a square-root law.
  • Discussion of Different Cameras
  • Applicants have shown that in a traditional static camera, the light integration curve in space-time k(x,t) is a vertical box function. Performances are perfect for the ωt=0 line corresponding to static objects, but degrade according to a sinc for lines of increasing slope, corresponding to higher velocities.
  • The flutter-shutter approach adds a broad-band amplitude pattern to a static camera. The integration kernel k is a vertical 1D function over [−T,T] and the amount of recorded light is halved. Because of the loss of light, the vertical budget is reduced from 2T to T for each ωx. Furthermore, since k is vertical, its Fourier transform is constant along ωx. This means that the optimal flutter code must have constant spectrum magnitude over the full domain of interest (FIG. 4 c). This is why the spatial resolution of the camera must be taken into account. The intersection of the spatial bandwidth Ωmax of the camera and the range of velocities defines a finite double-wedge in the Fourier domain. The minimum magnitude of the slice at Ωx is bounded by
  • $\frac{T}{2S\,\Omega_x}$.
  • Since k̂ is constant along ωx, this bound applies to all ωx. As a result, for all band frequencies |ωx| < Ωmax, k̂ spends energy outside the slope wedge and thus does not make full use of the vertical budget for k̂ωx.
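  • Numerically (a sketch reusing the discretization idea above; the binary code is an arbitrary stand-in for an optimized flutter sequence), a fluttered shutter is a vertical kernel whose temporal code lives at a single x location, and its 2D spectrum magnitude is indeed constant along ωx:

    import numpy as np

    nx, nt = 64, 64
    rng = np.random.default_rng(1)
    code = rng.integers(0, 2, nt)            # an illustrative on/off shutter code
    k = np.zeros((nx, nt))
    k[nx // 2] = code                        # vertical kernel: Dirac in x, code in t

    K = np.abs(np.fft.fft2(k))
    # Every row of |K| (fixed omega_t) is constant across the omega_x bins:
    print(np.ptp(K, axis=0).max())           # ~0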
  • The parabolic integration curve attempts to distribute the bandwidth budget equally among the ωx slices (FIG. 4 b), resulting in almost the same performance for each motion slope in the range, but a falloff proportional to 1/√ωx along the spatial dimension. To see why, applicants note that with some calculus manipulation the Fourier transform of an infinite parabola can be computed explicitly. If the integration kernel k is defined via the (infinite) parabola curve f(t) = a0t², then
  • $\hat{k}(\omega_x,\omega_t) = \frac{1}{\sqrt{2 a_0 \omega_x}}\; e^{\,2\pi i\,\omega_t^2 / (4 a_0 \omega_x)} \quad (13)$
  • On the other hand, achieving a good PSF for all −S ≤ s ≤ S implies that
  • $a_0 \ge \frac{S}{2T}$
  • (otherwise, from Eq 6 the PSF won't include an infinite spike). Using Eq 13, applicants can conclude that if the exposure were infinitely long
  • $\bigl|\hat{k}(\omega_x,\omega_t)\bigr|^2 = \frac{T}{S\,\omega_x}$
  • and the infinite parabola has the same falloff as the upper bound. Of course, the upper bound is valid for a finite exposure, and we can relate the infinite parabola to our finite kernel by a multiplication by a box in the time domain, which is a convolution by a sinc in the Fourier domain. Thus, the parabola curve of finite exposure approaches the upper bound in the limit of long exposure times. The intuitive reason why the parabolic camera can better adapt to the wedged shape of the Fourier region of interest is that its kernel is not purely vertical; that is, the sensor is moving. A parabola in 2D contains edge pieces of different slopes, which correspond to Fourier components of orthogonal orientations.
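  • As a numerical check of Eq 13 (a sketch; the constants below are arbitrary assumptions), the Fourier transform of a Dirac kernel along x = a0t² reduces to a 1D oscillatory integral whose magnitude indeed approaches 1/√(2a0ωx) as the exposure grows:

    import numpy as np

    a0, T, n = 1.0, 40.0, 2 ** 14        # a long exposure approximates the infinite parabola
    t = np.linspace(-T, T, n)
    dt = t[1] - t[0]

    w_x, w_t = 0.5, 0.3                  # a sample frequency pair
    # k is a Dirac along x = a0*t**2, so its 2D transform collapses to a 1D integral:
    k_hat = (np.exp(-2j * np.pi * (w_x * a0 * t ** 2 + w_t * t)) * dt).sum()

    print(abs(k_hat), 1 / np.sqrt(2 * a0 * w_x))   # close for large T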
  • In summary, in the special case of 1D motions, if one seeks the ability to reconstruct a given range of image-space velocities, with minimal degradation to the reconstructed image for any velocity, the parabolic light integration curve is near optimal. On the other hand, a fluttered shutter can handle all motion directions, albeit at the cost of motion identification and image segmentation.
  • Simulation: To visualize these tradeoffs, in FIGS. 5 a-5 c, applicants synthetically rendered a moving car. Applicants simulate a static camera (FIG. 5 a), a parabolic displacement (FIG. 5 b), and a static camera with a flutter-shutter (FIG. 5 c), all with an identical exposure length and an equal noise level. The box-deblurred car in FIG. 5 a lost high frequencies. The flutter-shutter reconstruction (FIG. 5 c) is much better, but the best results are obtained by the parabolic blur (FIG. 5 b) of the present invention.
  • A static camera, the flutter shutter camera, and a parabolic motion camera (the present invention) each offer different performance tradeoffs. A static camera is optimal for photographing static objects, but suffers significantly in its ability to reconstruct spatial details of moving objects. A flutter shutter camera is also excellent for photographing static objects (although it records a factor of two less light than a full exposure). It provides good spatial frequency bandwidth for recording moving objects and can handle 2D motion. However, to reconstruct, one needs to identify the image velocities and segment regions of uniform motion. Motion-invariant photography of the present invention (e.g., the parabolic motion camera) requires no speed estimation or object segmentation and provides nearly optimal reconstruction for the worst-case speed within a given range. However, relative to the static camera and the flutter shutter camera, it gives degraded reconstruction of static objects. While the invention method is primarily designed for 1D motions, we found it gave reasonable reconstructions of some 2D motions as well.
  • Embodiments of the present invention 11 are thus as illustrated in FIGS. 1 a and 1 b. With reference to FIG. 1 a, a camera system 103 includes an automated control assembly 101 (such as that in FIG. 7 or similar) and a camera 100. The camera 100 has (i) a lens and/or lens elements for focusing and defining the field of view 125 and (ii) an optional motion sensor. The automated control assembly 101 has a motor and/or electro-mechanical mechanism for moving the camera 100, lens, lens element(s) or sensor as prescribed by the present invention.
  • In particular, the automated control assembly 101 operates the camera 100 to produce a subject image having both moving objects 10 and static objects 20. First, as shown at step 201 in FIG. 1 b, the control assembly 101 operates the camera shutter together with moving the camera 100 (as a whole, or just the lens or sensor), preferably in a lateral parabolic motion. Other linear movements, such as a sinusoidal pattern, simple harmonic motion, or the like, are suitable. The result is an entire scene blurred 110 with a single point spread function (PSF) throughout. As obtained at step 203, the blurred entire scene (i.e., working or intermediate image) 110 is invariant to object motion of the moving objects 10. Thus, a deconvolution processor 105 reconstructs the subject image with one PSF, without requiring the velocity or depth of the moving object 10. Step 205 is illustrative.
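  • A short simulation (illustrative only; the speeds and constants are arbitrary assumptions) shows why the parabolic displacement yields a nearly velocity-invariant blur. The smear of an object of speed s is the distribution of its position relative to the camera, s·t − a0t², over the exposure; for any |s| ≤ 2a0T this distribution matches the s = 0 PSF up to a spatial shift, apart from low-amplitude clipped tails (see the FIG. 9 discussion below):

    import numpy as np

    a0, T, n = 1.0, 1.0, 200000
    t = np.linspace(-T, T, n)
    bins = np.linspace(-3.5, 3.5, 400)

    def psf(s):
        """Histogram of the object's position relative to the camera over the
        exposure; this is the smear the sensor integrates."""
        rel = s * t - a0 * t ** 2
        h, _ = np.histogram(rel, bins=bins)
        return h / h.sum()

    # Compare PSFs for two speeds after aligning their dominant spikes:
    p0, p1 = psf(0.0), psf(1.5)
    shift = np.argmax(p1) - np.argmax(p0)
    # The residual is small relative to the spike and lives in the clipped tails:
    print(p0.max(), np.abs(np.roll(p1, -shift) - p0).max())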
  • The deconvolution processor 105 may be within the camera 100, processing during image exposure (near real time), or external to the camera 100, processing subsequent to image exposure. The deconvolution processor (or similar engine) 105 employs deconvolution algorithms and techniques known in the art. Example deconvolution techniques are as given in Levin, A., Fergus, R., Durand, F., and Freeman, W., "Image and depth from a conventional camera with a coded aperture," SIGGRAPH, 2007; and Lucy, L., "Bayesian-based iterative method of image restoration," Astronomical Journal, 1974, both herein incorporated by reference.
  • The reconstructed image 112 showing moving objects 10 and static objects 20 in an unblurred state is produced (generated as output).
  • Experiments
  • While camera stabilization hardware should be capable of moving a detector with the desired constant acceleration (parabolic displacement) inside a hand-held camera, applicants chose to use larger scale structures for an initial prototype, and approximate sensor translation using a rotation of the entire camera. The hardware shown in FIG. 6 rotates the camera 61 in a controlled fashion to evaluate the potential of motion-invariant photography. The camera (for instance a Canon EOS 1D Mark II with an 85 mm lens) sits on a platform 63 that rotates about a vertical axis through the camera's optical center. Applicants use a cam approach to precisely control the rotation angle over time. A rotating cam 67 moves a lever 65 that is rigidly attached to the camera 61 to generate the desired acceleration. For θ∈[−π,π] the cam 67 edge is designed such that the polar coordinate radius is a parabola:

  • $x(\theta) = \cos(\theta)\,(c - b\theta^2)$
  • $y(\theta) = \sin(\theta)\,(c - b\theta^2) \quad (14)$
  • In applicants' experiments, c=8 cm and b=0.33 cm. The cam 67 rotates at a constant velocity, pushing the lever arm 65 to rotate the camera 61 with approximately constant angular acceleration, yielding horizontal motion with the desired parabolic integration path in space-time. For a fixed cam size, one can increase the magnitude of the parabola by moving the cam 67 closer to the camera 61. Applicants place a static camera next to the rotating camera 61 to obtain a reference image for each moving-camera image. A microcontroller synchronizes the cam 67 rotation and the camera shutters. In order to reduce mechanical noise, the exposure length of the system was set to 1 second. This relatively long exposure time limits the linear motion approximation for some real-world motions. To calibrate the exact PSF produced by the rotating camera 61, applicants captured a blurred image I of a calibration pattern. Applicants also captured a static image I0 of the same pattern and solved for the PSF φ minimizing the squared convolution error: $\phi = \arg\min_{\phi} \|I - \phi * I_0\|^2$.
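  • The calibration step admits a compact frequency-domain solution (a sketch of one standard least-squares approach, not necessarily the applicants' solver; estimate_psf and its regularization weight eps are names assumed here for illustration):

    import numpy as np

    def estimate_psf(I, I0, eps=1e-3):
        """Recover the kernel phi minimizing ||I - phi * I0||^2 by
        Tikhonov-regularized division in the Fourier domain."""
        F_I, F_I0 = np.fft.fft2(I), np.fft.fft2(I0)
        F_phi = (np.conj(F_I0) * F_I) / (np.abs(F_I0) ** 2 + eps)
        return np.fft.fftshift(np.real(np.fft.ifft2(F_phi)))  # centered for inspection

    # Synthetic check: blur a random 'calibration pattern' with a known box PSF.
    rng = np.random.default_rng(0)
    I0 = rng.random((128, 128))
    phi_true = np.zeros((128, 128))
    phi_true[0, :9] = 1 / 9.0                  # horizontal 9-pixel box blur
    I = np.real(np.fft.ifft2(np.fft.fft2(I0) * np.fft.fft2(phi_true)))
    print(np.abs(estimate_psf(I, I0) - np.fft.fftshift(phi_true)).max())  # ~0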
  • Results
  • The deconvolution results presented herein were achieved with the sparse deconvolution algorithm of Levin, A., Fergus, R., Durand, F., and Freeman, W., “Image and depth from a conventional camera with a coded aperture,” SIGGRAPH, 2007 (incorporated herein by reference). Comparable, but slightly worse, results can be obtained using the Richardson-Lucy deconvolution algorithm (see Lucy, L., 1974, cited above).
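  • For reference, the Richardson-Lucy scheme is only a few lines (a sketch of the classical multiplicative update using scipy convolutions; the sparse deconvolution of Levin et al. 2007 is more involved). Given a PSF calibrated as above, richardson_lucy(blurred, psf) reconstructs the scene, with the comparable-but-slightly-worse quality noted above:

    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(blurred, psf, n_iter=30):
        """Classical Richardson-Lucy deconvolution; assumes psf sums to 1."""
        estimate = np.full_like(blurred, 0.5)
        psf_flipped = psf[::-1, ::-1]
        for _ in range(n_iter):
            denom = fftconvolve(estimate, psf, mode='same')
            ratio = blurred / np.maximum(denom, 1e-12)
            estimate *= fftconvolve(ratio, psf_flipped, mode='same')
        return estimate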
  • FIGS. 7 a-c present deblurring results on images captured by the invention camera 100, 61. For each example, FIGS. 7 a, 7 b, 7 c present the scene (top row) captured by a synchronized static camera and the deconvolution results (bottom row) obtained from the invention moving camera 100, 61. In the first pair of images (FIG. 7 a), the moving objects were mounted on a linear rail to create multiple velocities from the multiple depth layers. The other pairs of images (FIGS. 7 b, 7 c) involved natural human motions. The invention approach (bottom row FIGS. 7 a-7 c) deblurs the images reasonably well despite the fact that the human motions were neither perfectly linear nor horizontal. The middle pair of images (FIG. 7 b) shows multiple independent motions in opposite directions, all deblurred well in the present invention (bottom FIG. 7 b). The far-right image pair (FIG. 7 c) had many non-horizontal motions, resulting from the man (moving subject) walking toward the camera. While the face contains some residual blur, the deconvolved image has few objectionable artifacts.
  • Applicants are encouraged by the deconvolution results on some images even with substantial non-horizontal motions. One possible explanation for the results is the aperture effect, the ambiguity of the 2D motion of locally 1-dimensional image structures, such as edges and contours. The velocity component normal to the edge or contour is determined from the image data, but the parallel component is ambiguous. Local image motion that could be explained by horizontal motions within the range of the camera motions should deconvolve correctly, even though the object motions were not horizontal.
  • Note that the static-object resolution in applicants' results decreases with respect to the static camera input, as applicants uniformly distribute the bandwidth budget over the velocity range.
  • Motion deblurring from a stationary camera is very challenging due to the need to segment the image by motion and estimate accurate PSFs within each segment. FIGS. 8 a-8 e demonstrate these challenges. For example, one can try to deconvolve the image with a range of box widths and manually pick the one producing the most visually plausible results (FIG. 8 b). In the first row of images (FIG. 8 a), static camera images are presented. Here, deconvolving with the correct blur can sharpen the moving layer, but it creates significant artifacts in the static parts, and an additional segmentation stage is required. In the second case, the images of FIG. 8 b show a box filter fitted manually to the moving layer and applied to deblur the entire image. Moreover, most blurred edges are occlusion boundaries, so even manually identifying the correct blur kernel is challenging. In FIGS. 8 c-8 d, results of a recent automatic algorithm (Levin 2006, cited above) are shown. The images of FIG. 8 c employ layer segmentation by Levin 2006, and the images of FIG. 8 d show deblurring results of Levin 2006. While this algorithm did a reasonable job for the left-hand side image (which was captured using a linear rail), it did a much worse job on the human motion in the right-hand side image.
  • The present invention obtained spatially uniform deconvolution of the images (FIG. 8 e) from the parabolic integration.
  • In FIG. 9, applicants illustrate the effect of the PSF tail clipping (discussed above with FIG. 2 k and Eq 6) on the valid velocity range. Applicants used the invention parabolic camera to capture a binary pattern. The pattern was static in the first shot (top row, FIGS. 9 a-9 b) and linearly moving during the exposure for the other two (middle and bottom rows, FIGS. 9 a-9 b). As shown in the top image of FIG. 9 b, the static pattern was easily deblurred, and the PSF approximation is reasonable for the slow motion case as well. For the faster motion case (middle and bottom images, FIG. 9 b), deconvolution artifacts start to be observed, as the effect of the clipping of the PSF tails becomes important and the static-motion PSF is no longer an accurate approximation of the moving object's smear.
  • Accordingly, the present invention suggests a solution that handles motion blur along a 1D direction. In the invention system 11, the camera 100 (FIGS. 1 a, 1 b) translates during its exposure following a parabolic displacement rule. The blur resulting from this special camera motion is shown to be invariant to object (moving and/or static) depth and velocity. Hence, blur can be removed by deconvolving the entire image with an identical, known PSF. This solution eliminates the major traditional challenges involved with motion deblurring: the need to segment motion layers and estimate a precise PSF in each of them. Furthermore, the present invention analyzes the amount of information that can be maintained by different camera paths, and shows that the parabola path approaches the optimal PSF whose inversion is stable at all velocities.
  • The teachings of all patents, published applications and references cited herein are incorporated by reference in their entirety.
  • While this invention has been particularly shown and described with references to example embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the invention encompassed by the appended claims.
  • For example, the above description mentions parabolic or other motion of the camera. This motion may be manually produced by the user (photographer) in some embodiments and mechanically automated, such as by a controller assembly 101, in other embodiments.
  • Further, the controller 101 and deconvolution processor 105 may be executed by one CPU (digital processing unit) or chip in camera 100 or may be separate computers/digital processing systems, either stand alone or networked. Common computer network communications, configurations, protocols, busses, interfaces, couplings (wireless, etc.) and the like are used. The network may be a wide area network, a local area network, a global computer network (e.g., Internet) and the like.
  • Fields of application may then include:
  • (a) medical imaging at hospitals, clinics, care professionals' offices (e.g., dentist, pediatrician, etc.), mobile units, etc.;
  • (b) aerial imaging of vehicles or vessels in various terrain maneuvers;
  • (c) monitoring systems at airports, secured areas, security locations, or public/pedestrian throughways; and
  • (d) toll booth imagery of traveling vehicle license plates or similar check points where a moving subject is photographed for identity verification or similar purposes.
  • Other applications and uses are within the purview of one skilled in the art given this disclosure.

Claims (21)

1. A method for deblurring images, comprising:
during imaging of a moving object, blurring an entire scene that includes the moving object, said blurring being invariant to velocity of the moving object;
deconvoluting the blurred entire scene and generating a reconstructed image, the reconstructed image bearing the moving object in a deblurred state.
2. A method of claim 1 wherein the step of blurring the entire scene is implemented by moving any one or combination of a camera, a lens element of the camera and sensor of the camera.
3. A method of claim 2 wherein the step of moving includes movement in a range of direction and speed sufficient to include direction and speed of the moving object.
4. A method as claimed in claim 3 wherein movement is any of:
a sinusoidal pattern,
a parabolic pattern,
a simple harmonic motion, and
a sweep motion starting in one direction and progressively decelerating then accelerating in an opposite direction.
5. The method of claim 4 wherein the movement follows a parabolic path by moving laterally initially at a maximum speed of the range and slowing to a stop and then moving in an opposite direction laterally, increasing in speed to a maximum speed of the range in the opposite direction and stopping.
6. A method as claimed in claim 1 wherein the moving object is a moving vehicle on a roadway.
7. A method as claimed in claim 1 wherein the moving object is a body part of a patient (human or animal).
8. A method as claimed in claim 1 wherein the deconvolution is computed near real-time of image exposure (i.e. during imaging of the moving object).
9. A method as claimed in claim 1 wherein the deconvolution is by an application separate from and subsequent to image exposure by a camera.
10. A method as claimed in claim 1 wherein the step of blurring blurs the entire scene, including static objects and the moving object, using a single point spread function which is invariant to object motion such that the deconvoluting removes blur from the blurred entire scene using the point spread function, without segmenting the moving object and without estimating velocity of the moving object.
11. A method as claimed in claim 1 wherein the reconstructed image bears multiple moving objects, each in a deblurred state.
12. Apparatus for deblurring images, comprising:
a camera controller, during imaging of a moving object the controller effectively moving a camera to blur an entire scene that includes the moving object and said blurring being in a manner that is invariant to velocity of the moving object; and
a deconvolution processor coupled to receive the blurred entire scene and deconvoluting the blurred entire scene, resulting in a reconstructed image bearing the moving object in a deblurred state.
13. Apparatus as claimed in claim 12 where the controller effectively moving the camera moves any one or combination of the camera, a lens element of the camera and a sensor of the camera.
14. Apparatus as claimed in claim 12 wherein the controller effectively moving the camera includes movement in a range of direction and speed sufficient to include direction and speed of the moving object.
15. Apparatus as claimed in claim 14 wherein movement is any of:
a sinusoidal pattern,
a parabolic pattern,
a simple harmonic motion, and
a sweep motion starting in one direction and progressively decelerating then accelerating in an opposite direction.
16. Apparatus as claimed in claim 15 wherein the movement follows a parabolic path by moving laterally initially at a maximum speed of the range and slowing to a stop and then moving in an opposite direction laterally, increasing in speed to a maximum speed of the range in the opposite direction and stopping.
17. Apparatus as claimed in claim 12 wherein the moving object is any of a moving vehicle on a roadway, a body part of a patient, and a face of a subject.
18. Apparatus as claimed in claim 12 wherein the deconvolution processor deconvolutes the blurred entire scene near real-time of image exposure.
19. Apparatus as claimed in claim 12 wherein the deconvolution processor executes as an application separate from and subsequent to image exposure by the camera.
20. Apparatus as claimed in claim 12 wherein the reconstructed image bears multiple moving objects, each in a deblurred state.
21. A motion invariant imaging system comprising:
controller means for effectively controlling a camera during imaging of a moving object, and blurring an entire scene that includes the moving object; and
deconvolution means for deconvoluting the blurred entire scene and generating a reconstructed image bearing the moving object in a deblurred state;
wherein the controller means through the camera blurring the entire scene includes blurring static objects and the moving object using a single point spread function which is invariant to object motion, such that the deconvoluting removes blur from the blurred entire scene using the point spread function, without segmenting the moving object and without estimating velocity of the moving object.
US12/058,105 2008-03-28 2008-03-28 Method and apparatus for motion invariant imaging Active 2030-12-28 US8451338B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/058,105 US8451338B2 (en) 2008-03-28 2008-03-28 Method and apparatus for motion invariant imaging

Publications (2)

Publication Number Publication Date
US20090244300A1 true US20090244300A1 (en) 2009-10-01
US8451338B2 US8451338B2 (en) 2013-05-28

Family

ID=41116547

Family Applications (1)

Application Number Title Priority Date Filing Date
US12/058,105 Active 2030-12-28 US8451338B2 (en) 2008-03-28 2008-03-28 Method and apparatus for motion invariant imaging

Country Status (1)

Country Link
US (1) US8451338B2 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI524107B (en) * 2012-06-05 2016-03-01 鴻海精密工業股份有限公司 Method of auto-focus
EP3143583B1 (en) * 2014-06-12 2018-10-31 Duke University System and method for improved computational imaging

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5237365A (en) * 1990-10-15 1993-08-17 Olympus Optical Co., Ltd. Exposure control apparatus for camera with shake countermeasure
US6377404B1 (en) * 2000-01-20 2002-04-23 Eastman Kodak Company Reverse telephoto zoom lens
US6687386B1 (en) * 1999-06-15 2004-02-03 Hitachi Denshi Kabushiki Kaisha Object tracking method and object tracking apparatus
US20050047672A1 (en) * 2003-06-17 2005-03-03 Moshe Ben-Ezra Method for de-blurring images of moving objects
US20080266655A1 (en) * 2005-10-07 2008-10-30 Levoy Marc S Microscopy Arrangements and Approaches

Cited By (65)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090122150A1 (en) * 2006-09-14 2009-05-14 Gal Shabtay Imaging system with improved image quality and associated methods
US20100034429A1 (en) * 2008-05-23 2010-02-11 Drouin Marc-Antoine Deconvolution-based structured light system with geometrically plausible regularization
US8411995B2 (en) * 2008-05-23 2013-04-02 National Research Council Of Canada Deconvolution-based structured light system with geometrically plausible regularization
US8698905B2 (en) * 2009-03-11 2014-04-15 Csr Technology Inc. Estimation of point spread functions from motion-blurred images
US20100231732A1 (en) * 2009-03-11 2010-09-16 Zoran Corporation Estimation of point spread functions from motion-blurred images
US20100259627A1 (en) * 2009-04-13 2010-10-14 Showscan Digital Llc Method and apparatus for photographing and projecting moving images
US8363117B2 (en) * 2009-04-13 2013-01-29 Showscan Digital Llc Method and apparatus for photographing and projecting moving images
WO2010131142A1 (en) * 2009-05-12 2010-11-18 Koninklijke Philips Electronics N.V. Camera, system comprising a camera, method of operating a camera and method for deconvoluting a recorded image
US8605202B2 (en) 2009-05-12 2013-12-10 Koninklijke Philips N.V. Motion of image sensor, lens and/or focal length to reduce motion blur
US8212914B2 (en) * 2009-07-24 2012-07-03 Hon Hai Precision Industry Co., Ltd. Computational imaging system
US20110019068A1 (en) * 2009-07-24 2011-01-27 Hon Hai Precision Industry Co., Ltd. Computational imaging system
US8957894B2 (en) * 2009-08-17 2015-02-17 Mistretta Medical, Llc System and method for four dimensional angiography and fluoroscopy
US20140148691A1 (en) * 2009-08-17 2014-05-29 Cms Medical, Llc System and method for four dimensional angiography and fluoroscopy
US8634711B2 (en) * 2009-08-20 2014-01-21 Canon Kabushiki Kaisha Image capture apparatus and method
US20110044677A1 (en) * 2009-08-20 2011-02-24 Canon Kabushiki Kaisha Image capture apparatus and method
US8315514B2 (en) * 2009-08-20 2012-11-20 Canon Kabushiki Kaisha Image capture apparatus and method
US20130038777A1 (en) * 2009-08-20 2013-02-14 Canon Kabushiki Kaisha Image capture apparatus and method
US20110075020A1 (en) * 2009-09-30 2011-03-31 Veeraraghavan Ashok N Increasing Temporal Resolution of Signals
US8223259B2 (en) * 2009-09-30 2012-07-17 Mitsubishi Electric Research Laboratories, Inc. Increasing temporal resolution of signals
US20110081142A1 (en) * 2009-10-06 2011-04-07 Apple Inc. Pulsed control of camera flash
US7962031B2 (en) * 2009-10-06 2011-06-14 Apple Inc. Pulsed control of camera flash
US9414799B2 (en) 2010-01-24 2016-08-16 Mistretta Medical, Llc System and method for implementation of 4D time-energy subtraction computed tomography
US20110181690A1 (en) * 2010-01-26 2011-07-28 Sony Corporation Imaging control apparatus, imaging apparatus, imaging control method, and program
US10931855B2 (en) * 2010-01-26 2021-02-23 Sony Corporation Imaging control based on change of control settings
US8537238B2 (en) * 2010-04-14 2013-09-17 Sony Corporation Digital camera and method for capturing and deblurring images
US20110254983A1 (en) * 2010-04-14 2011-10-20 Sony Corporation Digital camera and method for capturing and deblurring images
US8699867B2 (en) * 2010-09-30 2014-04-15 Trimble Germany Gmbh Aerial digital camera and method of controlling the same
US10803275B2 (en) * 2010-10-12 2020-10-13 International Business Machines Corporation Deconvolution of digital images
US20170024594A1 (en) * 2010-10-12 2017-01-26 International Business Machines Corporation Deconvolution of digital images
US9508116B2 (en) * 2010-10-12 2016-11-29 International Business Machines Corporation Deconvolution of digital images
US10140495B2 (en) * 2010-10-12 2018-11-27 International Business Machines Corporation Deconvolution of digital images
US20190042817A1 (en) * 2010-10-12 2019-02-07 International Business Machines Corporation Deconvolution of digital images
US20140226878A1 (en) * 2010-10-12 2014-08-14 International Business Machines Corporation Deconvolution of digital images
US9535537B2 (en) 2010-11-18 2017-01-03 Microsoft Technology Licensing, Llc Hover detection in an interactive display device
US20120148108A1 (en) * 2010-12-13 2012-06-14 Canon Kabushiki Kaisha Image processing apparatus and method therefor
US8605938B2 (en) 2011-01-25 2013-12-10 Honeywell International Inc. Motion-based image watermarking
CN102693046A (en) * 2011-02-23 2012-09-26 微软公司 Hover detection in an interactive display device
JP2014515206A (en) * 2011-03-22 2014-06-26 コーニンクレッカ フィリップス エヌ ヴェ Camera system having camera, camera, method of operating camera, and method of analyzing recorded image
US8963919B2 (en) 2011-06-15 2015-02-24 Mistretta Medical, Llc System and method for four dimensional angiography and fluoroscopy
US8702003B2 (en) 2011-11-18 2014-04-22 Honeywell International Inc. Bar code readers and methods of reading bar codes
EP2831670A4 (en) * 2012-03-26 2015-12-23 Nokia Technologies Oy Method, apparatus and computer program product for image stabilization
US20150036008A1 (en) * 2012-03-26 2015-02-05 Nokia Corporation Method, Apparatus and Computer Program Product for Image Stabilization
WO2013144427A1 (en) * 2012-03-26 2013-10-03 Nokia Corporation Method, apparatus and computer program product for image stabilization
US9191578B2 (en) * 2012-06-29 2015-11-17 Broadcom Corporation Enhanced image processing with lens motion
US20140002606A1 (en) * 2012-06-29 2014-01-02 Broadcom Corporation Enhanced image processing with lens motion
US9298006B2 (en) * 2013-01-18 2016-03-29 Intel Corporation Layered light field reconstruction for defocus blur
US20140204111A1 (en) * 2013-01-18 2014-07-24 Karthik Vaidyanathan Layered light field reconstruction for defocus blur
US20150042829A1 (en) * 2013-04-09 2015-02-12 Honeywell International Inc. Motion deblurring
US9552630B2 (en) * 2013-04-09 2017-01-24 Honeywell International Inc. Motion deblurring
CN103310486A (en) * 2013-06-04 2013-09-18 西北工业大学 Reconstruction method of atmospheric turbulence degraded images
US9196020B2 (en) * 2013-09-19 2015-11-24 Raytheon Canada Limited Systems and methods for digital correction of aberrations produced by tilted plane-parallel plates or optical wedges
US20150077583A1 (en) * 2013-09-19 2015-03-19 Raytheon Canada Limited Systems and methods for digital correction of aberrations produced by tilted plane-parallel plates or optical wedges
US9483869B2 (en) * 2014-01-17 2016-11-01 Intel Corporation Layered reconstruction for defocus and motion blur
US20150206340A1 (en) * 2014-01-17 2015-07-23 Carl J. Munkberg Layered Reconstruction for Defocus and Motion Blur
US9672657B2 (en) * 2014-01-17 2017-06-06 Intel Corporation Layered reconstruction for defocus and motion blur
US9779484B2 (en) 2014-08-04 2017-10-03 Adobe Systems Incorporated Dynamic motion path blur techniques
US9955065B2 (en) 2014-08-27 2018-04-24 Adobe Systems Incorporated Dynamic motion path blur user interface
US20160063670A1 (en) * 2014-08-27 2016-03-03 Adobe Systems Incorporated Dynamic Motion Path Blur Kernel
US9723204B2 (en) * 2014-08-27 2017-08-01 Adobe Systems Incorporated Dynamic motion path blur kernel
US10070031B2 (en) * 2016-01-27 2018-09-04 Diehl Defence Gmbh & Co. Kg Method and device for identifying an object in a search image
US20170214833A1 (en) * 2016-01-27 2017-07-27 Diehl Defence Gmbh & Co. Kg Method and device for identifying an object in a search image
CN109274895A (en) * 2018-09-25 2019-01-25 杭州电子科技大学 Encode the quick relative movement scene capture device and restored method of exposure image
CN110097509A (en) * 2019-03-26 2019-08-06 杭州电子科技大学 A kind of restored method of local motion blur image
US20230061085A1 (en) * 2021-08-27 2023-03-02 Raytheon Company Arbitrary motion smear modeling and removal
US11704777B2 (en) * 2021-08-27 2023-07-18 Raytheon Company Arbitrary motion smear modeling and removal

Also Published As

Publication number Publication date
US8451338B2 (en) 2013-05-28

Similar Documents

Publication Publication Date Title
US8451338B2 (en) Method and apparatus for motion invariant imaging
Levin et al. Motion-invariant photography
Delbracio et al. Burst deblurring: Removing camera shake through fourier burst accumulation
US9998666B2 (en) Systems and methods for burst image deblurring
Tai et al. Image/video deblurring using a hybrid camera
Whyte et al. Non-uniform deblurring for shaken images
Zhong et al. Handling noise in single image deblurring using directional filters
Agrawal et al. Invertible motion blur in video
Raskar et al. Coded exposure photography: motion deblurring using fluttered shutter
JP4679662B2 (en) Method for reducing blur in scene image and apparatus for removing blur in scene image
JP4672060B2 (en) Method for reducing blur in an image of a scene and method for removing blur in an image of a scene
US9349164B2 (en) De-noising image content using directional filters for image deblurring
Cho et al. Motion blur removal with orthogonal parabolic exposures
Delbracio et al. Removing camera shake via weighted fourier burst accumulation
US20080240607A1 (en) Image Deblurring with Blurred/Noisy Image Pairs
US20130343669A1 (en) Blur-kernel estimation from spectral irregularities
Schuon et al. Comparison of motion de-blur algorithms and real world deployment
Bae et al. Patch mosaic for fast motion deblurring
Lee et al. Recent advances in image deblurring
Wang et al. High-quality image deblurring with panchromatic pixels.
Verma et al. An efficient deblurring algorithm on foggy images using curvelet transforms
Bhagat et al. Novel Approach to Estimate Motion Blur Kernel Parameters and Comparative Study of Restoration Techniques
Zaharescu et al. An Investigation of Image Deblurring Techniques
Zaharescu et al. Image deblurring: challenges and solutions
Bonchev et al. Improving super-resolution image reconstruction by in-plane camera rotation

Legal Events

Date Code Title Description
AS Assignment

Owner name: MASSACHUSETTS INSTITUTE OF TECHNOLOGY, MASSACHUSETTS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SAND, PETER;LEVIN, ANAT;CHO, TAEG SANG;AND OTHERS;REEL/FRAME:021027/0932;SIGNING DATES FROM 20080430 TO 20080522

AS Assignment

Owner name: NATIONAL SCIENCE FOUNDATION, VIRGINIA

Free format text: CONFIRMATORY LICENSE;ASSIGNOR:MASSACHUSETTS INSTITUTE OF TECHNOLOGY;REEL/FRAME:024384/0869

Effective date: 20100322


FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCF Information on status: patent grant

Free format text: PATENTED CASE

CC Certificate of correction
FPAY Fee payment

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8