WO2018116322A1 - System and method for generating pan shots from videos - Google Patents

System and method for generating pan shots from videos

Info

Publication number
WO2018116322A1
Authority
WO
WIPO (PCT)
Prior art keywords
frames, foreground, background, frame, displacement
Application number
PCT/IN2017/050605
Other languages
French (fr)
Inventor
Rajagopalan AMBASAMUDRAM NARAYANAN
Nimisha THEKKE MADAM
Original Assignee
Indian Institute Of Technology Madras
Application filed by Indian Institute Of Technology Madras filed Critical Indian Institute Of Technology Madras
Publication of WO2018116322A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/68 Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N 23/681 Motion detection
    • H04N 23/6811 Motion detection based on the image signal
    • H04N 23/682 Vibration or motion blur correction
    • H04N 23/683 Vibration or motion blur correction performed by a processor, e.g. controlling the readout of an image memory
    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03B APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B 15/00 Special procedures for taking photographs; Apparatus therefor


Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Circuits (AREA)
  • Studio Devices (AREA)

Abstract

A system and method for automatic generation of a pan shot from a video of a dynamic object is disclosed. The method includes warping a plurality of frames of the video to compensate for background motion in the video based on homographies (H) of consecutive frames. The background compensated frames are used for segmenting the foreground and background to create a trimap. The foreground of each frame is correlated with a preceding and succeeding frame to obtain an inter-frame object displacement with respect to the preceding and succeeding frames. Based on the inter-frame object displacement, a net displacement and a relative depth of the object are determined. The plurality of clear frames are rewarped using the net displacement to create rewarped clear frames with dynamic background and static foreground. The rewarped clear frames are averaged to generate the pan shot, which has a blurred background and a sharp foreground.

Description

SYSTEM AND METHOD FOR GENERATING PAN SHOTS FROM VIDEOS
RELATED APPLICATION
[0001] This application claims benefit and priority to Indian provisional patent application No. 201641043468, titled "Pan Shots From Videos", filed on December 20, 2016, and the complete specification thereof, filed on October 6, 2017. The disclosures of these Indian applications are incorporated herein by reference for all purposes.
BACKGROUND
[0002] Pan photography or panning is an imaging technique that involves swiveling an image capturing device, such as a video camera, horizontally at a fixed position. The swiveling motion of the camera is used for capturing images from one part of a scene to another. Generally, the swiveling motion of the camera is used for capturing images of a moving object, such that the camera motion and object motion are in sync. This imaging technique can be used to produce a "pan shot", which gives an artistic visual effect to the objects in motion.
[0003] Pan shots have a blurred background and have a high focus on the moving object in the foreground. The moving object appears sharp and frozen against a blurred background. Thus, pan shots give an aesthetic appeal to an image by accentuating the object from other elements in the frame and relegating object motion to the background.
[0004] However, taking pan shots is not straightforward, as the techniques involve intricate steps. A camera operator should have an approximate idea of the object velocity a priori so as to pan the camera in sync with the moving object and to avoid undesirable effects. For instance, a high relative velocity between object and camera may result in blurring of the object.
[0005] Additionally, the techniques involve a great amount of manual effort from the camera operator. For example, setting the correct shutter speed, ensuring autofocus mode, adjusting the exposure, and tracking the object should all happen in perfect harmony. This process is difficult, and it is highly likely that the event could be over by the time these settings are adjusted, especially in fast-paced scenarios, such as running, car races, etc. Therefore, creating pan shots requires a substantial amount of manual skill, and there is a dearth of automated techniques in the existing state-of-the-art technologies.
SUMMARY
[0006] Described herein are systems and methods for automatically generating a pan shot from a video.
[0007] According to one embodiment, the present subject matter relates to a method for automatically generating a pan shot from a video of a dynamic object. The method includes warping a plurality of frames of the video to compensate for background motion in the video based on homographies (H) of consecutive frames. The warping may include aligning dynamic backgrounds to create a static background, which is consistent, in each of the plurality of frames. The background compensated frames are used for segmenting the foreground and background to create a trimap. The foreground of each frame is correlated with a preceding and succeeding frame to obtain an inter-frame object displacement with respect to the preceding and succeeding frames. Based on the inter-frame object displacement, a net displacement and a relative depth of the object are determined. Further, a deblurring operation is performed in the foreground to obtain a plurality of clear frames if the foreground is blurred. The plurality of clear frames are rewarped using the net displacement to create rewarped clear frames, which have a dynamic background and static foreground. The rewarped clear frames are averaged to generate the pan shot, which has a blurred background and a sharp foreground.
[0008] According to another embodiment, the present subject matter relates to a system for automatically generating a pan shot from a video of a dynamic object. The system includes an image capturing unit, a processing unit, and a memory unit coupled to the processing unit. The image capturing unit captures the video of the dynamic object. The memory unit includes a warping module, a segmentation module, a correlation module, a displacement computation module, a deblurring module, a rewarping module, and an averaging module. The warping module is configured to warp a plurality of frames of the video to compensate for background motion in the video based on homographies of consecutive frames. The warping includes aligning dynamic backgrounds to create a static background, which is consistent, in the plurality of frames. The segmentation module is configured to segment a foreground, which comprises the dynamic object, from the background compensated frames to create a trimap. The correlation module is configured to correlate the foreground of each frame with a preceding and a succeeding frame to obtain an inter-frame object displacement with respect to the preceding and the succeeding frames. The displacement computation module is configured to determine a net displacement and a relative depth of the object based on the inter-frame object displacement and camera motion. The deblurring module is configured to deblur the foreground, if a blur is present in the foreground, to obtain a plurality of clear frames. The rewarping module is configured to rewarp the plurality of clear frames using the net displacement of the object to create rewarped clear frames, which have a dynamic background and static foreground. The averaging module is configured to perform an averaging operation on the rewarped clear frames to generate the pan shot, which has a blurred background and a sharp foreground.
[0009] According to yet another embodiment, the present subject matter relates to a computer program product having non-volatile memory carrying computer executable instructions stored therein for automatically generating a pan shot from a video of a dynamic object. The instructions include warping a plurality of frames of the video to compensate for background motion in the video based on homographies (H) of consecutive frames. The warping may include aligning dynamic backgrounds to create a static background, which is consistent, in each of the plurality of frames. The instructions further comprise segmenting the foreground and background to create a trimap in the background compensated frames. Further, the foreground of each frame is correlated with a preceding and succeeding frame to obtain an inter-frame object displacement with respect to the preceding and succeeding frames. Based on the inter-frame object displacement, a net displacement and a relative depth of the object are determined. Further, a deblurring operation is performed in the foreground to obtain a plurality of clear frames if the foreground is blurred. The plurality of clear frames are rewarped using the net displacement to create rewarped clear frames, which have a dynamic background and static foreground. The rewarped clear frames are averaged to generate the pan shot, which has a blurred background and a sharp foreground.
[0010] In one embodiment, performing the deblurring operation includes determining an object velocity from the net displacement and frame rate. The method further includes calculating an alpha-matte using the trimap to separate background, foreground and an ambiguous region. The blur weights of the foreground are estimated by uniformly sampling the net object displacement. Further, the separated background is filled using the pixels of the static background in the plurality of frames. The separated foreground in the plurality of frames is deblurred based on object velocity, net displacement, and a kernel. The deblurred foreground and the filled background are mixed to obtain the plurality of clear frames, which can be rewarped and averaged to generate the pan shot.
[0011] In some embodiments, the homography (H) is determined between a reference frame and the plurality of frames using random sampling consensus.
[0012] In some embodiments, the segmenting is performed using a graph-cut algorithm.
[0013] In some embodiments, the graph-cut algorithm is based on a data cost function and a smoothness cost function for distinguishing foreground and background in each frame.
[0014] In some embodiments, the system further comprises a user interface configured to enable a user to interact with the system.
[0015] In some embodiments, the image capturing unit comprises at least a lens, a shutter, and an image sensor for capturing the videos and photographs.
[0016] In some embodiments, the system is a video recording device.
BRIEF DESCRIPTION OF THE DRAWINGS
[0017] The invention has other advantages and features which will be more readily apparent from the following detailed description of the invention and the appended claims, when taken in conjunction with the accompanying drawings, in which:
[0018] FIG. 1 illustrates a flow diagram of a method for automatic generation of pan shot from a video, according to one embodiment of the present subject matter.
[0019] FIG. 2 illustrates a system for automatic generation of pan shot from a video, according to one embodiment of the present subject matter.
[0020] FIG. 3A illustrates three consecutive input frames of a video, according to an example of the present subject matter.
[0021] FIG. 3B illustrates background compensated frames, according to an example of the present subject matter.
[0022] FIG. 3C illustrates foreground of the frames on segmentation, according to an example of the present subject matter.
[0023] FIG. 3D illustrates clear frames of the consecutive frames, according to an example of the present subject matter.
[0024] FIG. 3E illustrates pan shot generated using the video, according to an example of the present subject matter.
[0025] FIG. 4A illustrates an input frame of a video of a gazelle, according to another example of the present subject matter.
[0026] FIG. 4B illustrates a pan shot generated from the video of the gazelle, according to another example of the present subject matter.
[0027] FIG. 5A illustrates an input frame with blurred foreground of a video of a car, according to an example of the present subject matter.
[0028] FIG. 5B illustrates a pan shot generated from the video of a car, according to an example of the present subject matter.
DETAILED DESCRIPTION
[0029] While the invention has been disclosed with reference to certain embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the scope of the invention. In addition, many modifications may be made to adapt to a particular situation or material to the teachings of the invention without departing from its scope.
[0030] Throughout the specification and claims, the following terms take the meanings explicitly associated herein unless the context clearly dictates otherwise. The meaning of "a", "an", and "the" include plural references. The meaning of "in" includes "in" and "on." Referring to the drawings, like numbers indicate like parts throughout the views. Additionally, a reference to the singular includes a reference to the plural unless otherwise stated or inconsistent with the disclosure herein.
[0031] The present subject matter is further described with reference to figures 1 - 5B. It should be noted that the description and figures merely illustrate principles of the present subject matter. It is thus understood that various arrangements may be devised that, although not explicitly described or shown herein, encompass the principles of the present subject matter. Moreover, all statements herein reciting principles, aspects, examples, and embodiments of the present subject matter, as well as specific examples thereof, are intended to encompass equivalents thereof.
[0032] A flow diagram of a method 100 for automatically generating a pan shot from a video of a dynamic object is illustrated in FIG. 1, according to an embodiment of the present subject matter. The dynamic object may be in foreground of each frame of the video. The method includes receiving a plurality of frames of the video, at block 102, and warping the plurality of frames of the video to compensate for background motion in the video based on homographies (H) of consecutive frames, at block 104. The warping may include aligning dynamic backgrounds to create a static background, which is consistent, in each of the plurality of frames.
[0033] In one embodiment, scale-invariant feature transform (SIFT) based feature correspondences between two frames may be estimated. The correspondences between two frames mostly occur in the background, as the foreground may be assumed to be small. Further, on applying the random sampling consensus (RANSAC) method, the homographies between the frames are determined. The frames are aligned using the homographies that relate the background in the consecutive frames to compensate for the background motion. In one embodiment, the homographies (H) may be determined between a reference frame and the plurality of frames.
[0034] The background compensated frames are subjected to segmentation of the foreground and background to create a trimap, at block 106. In one embodiment, the segmentation may be performed by a graph-cut approach. The segmentation of the foreground may be formulated as a bilabel assignment problem, which can be incorporated in a Markov Random Field (MRF) model and effectively solved using graph cuts. The segmentation step will be discussed further in subsequent paragraphs.
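To make the warping step of paragraph [0033] concrete, the sketch below estimates a background homography from SIFT correspondences with RANSAC and warps each frame onto a reference frame using OpenCV. This is a minimal illustration; the ratio-test and reprojection thresholds and the function names are assumptions, not the patent's reference implementation.

```python
import cv2
import numpy as np

def estimate_homography(frame, ref_frame):
    """Homography H mapping `frame` onto `ref_frame`, estimated from SIFT
    correspondences that are assumed to lie mostly on the background."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), None)
    kp2, des2 = sift.detectAndCompute(cv2.cvtColor(ref_frame, cv2.COLOR_BGR2GRAY), None)
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]  # Lowe's ratio test
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=3.0)
    return H

def compensate_background(frames, ref_idx=0):
    """Warp every frame onto the reference frame so the background is static."""
    ref = frames[ref_idx]
    h, w = ref.shape[:2]
    return [cv2.warpPerspective(f, estimate_homography(f, ref), (w, h)) for f in frames]
```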
[0035] After segmentation, the foreground of each frame is correlated with a preceding and succeeding frame to obtain an inter-frame object displacement with respect to the preceding and succeeding frames, at block 108. A net displacement and a relative depth of the object are determined based on the inter-frame object displacement and camera motion, at block 110. The net displacement refers to the total displacement of the object in the video, and the relative depth refers to the relative distance of the object with respect to the background, which is given as γ = D_background / D_foreground, where D refers to the distance from the camera center.
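A minimal sketch of the correlation step of [0035]: phase correlation between the segmented foregrounds of neighbouring background-compensated frames yields the inter-frame object displacement. Using cv2.phaseCorrelate here is an assumption about one workable choice of correlation, not the patent's prescribed method.

```python
import cv2
import numpy as np

def interframe_displacement(fg_prev, fg_curr):
    """Displacement of the segmented foreground between two
    background-compensated frames, via phase correlation."""
    a = np.float32(cv2.cvtColor(fg_prev, cv2.COLOR_BGR2GRAY))
    b = np.float32(cv2.cvtColor(fg_curr, cv2.COLOR_BGR2GRAY))
    (dx, dy), _response = cv2.phaseCorrelate(a, b)
    return dx, dy  # predominantly horizontal for a panning shot
```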
[0036] It may be worth understanding that the background motion is dependent on the camera motion, and any blur in the foreground is due to the object motion as well as the camera motion. Therefore, when the camera motion and object motion are not in sync, the object (foreground) may be blurred if the camera has a slow shutter speed. Further, if the shutter speed is not slow, the object appears in different positions in the plurality of frames of the video.
[0037] Thus, the method further includes determining whether the foreground is blurred or not, at block 112. The blurring may be determined based on various techniques as known in the art. For example, a gradient distribution method to analyze the blur in a region of the frame may be used for determining whether the foreground is blurred. A log magnitude response of the gradient from the foreground may be correlated with a reference gradient to indicate the presence of blur. If the foreground is not blurred, then the frames are clear and ready for rewarping. However, if a blur is present in the foreground, then a deblurring operation is performed to obtain a plurality of clear frames, at block 114.
[0038] The deblurring operation may include constructing an alpha-matte using the trimap to separate background, foreground, and ambiguous region, which may be either a background or foreground. The separated background includes regions with and without pixels. The regions without pixels may be filled using the pixels of the static background of the plurality of frames to obtain a filled background. Further, blur weights of the foreground are estimated by uniformly sampling the net object displacement. A non-blind deblurring is performed using the object velocity, net object displacement, blur weights, and a kernel, to deblur the foreground. The deblurred foreground and the filled background in each frame are mixed to obtain the plurality of clear frames (not blurred).
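One plausible realization of the gradient-based blur test of [0037], under stated assumptions: compare the log-magnitude distribution of the foreground's gradients against that of a known-sharp reference and flag blur when the correlation is low. The histogram range, bin count, and threshold below are illustrative, not the patent's values.

```python
import cv2
import numpy as np

def log_gradient_hist(gray, bins=64):
    """Histogram of log gradient magnitudes over a grayscale region."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    logmag = np.log1p(np.sqrt(gx * gx + gy * gy))
    hist, _ = np.histogram(logmag, bins=bins, range=(0.0, 6.0), density=True)
    return hist.astype(np.float32)

def foreground_is_blurred(fg_gray, sharp_ref_gray, thresh=0.9):
    """Correlate the foreground's log-gradient response with a sharp
    reference; low correlation is taken to indicate blur."""
    h1 = log_gradient_hist(fg_gray)
    h2 = log_gradient_hist(sharp_ref_gray)
    return float(np.corrcoef(h1, h2)[0, 1]) < thresh
```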
[0039] The plurality of clear frames are rewarped using the net displacement to create rewarped clear frames, at block 116. The rewarped clear frames have a dynamic background similar to the background of the input frames of the video. The deblurred foreground is static and the object is at the same position in each frame. The rewarped clear frames are averaged to generate the pan shot, which has a sharp foreground and blurred background, at block 118.
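For a predominantly horizontal pan, the rewarping and averaging of blocks 116 and 118 reduce to shifting each clear frame so that the foreground stays in place and then taking the pixel-wise mean. The translation-only rewarp below is a simplifying assumption for illustration; the patent's rewarping uses the net displacement more generally.

```python
import cv2
import numpy as np

def generate_pan_shot(clear_frames, net_dx):
    """Rewarp each clear frame so the foreground is static (block 116),
    then average so the moving background blurs (block 118)."""
    h, w = clear_frames[0].shape[:2]
    rewarped = []
    for i, frame in enumerate(clear_frames):
        # Cancel the object's accumulated displacement of net_dx per frame.
        M = np.float32([[1, 0, -net_dx * i], [0, 1, 0]])
        rewarped.append(cv2.warpAffine(frame, M, (w, h)))
    return np.mean(np.stack(rewarped), axis=0).astype(np.uint8)
```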
[0040] A system 200 for automatically generating a pan shot from a video is illustrated in FIG. 2, according to an embodiment of the present subject matter. The system 200 includes an image capturing unit 202, a processing unit 204, and a memory unit 206 coupled to the processing unit 204. The image capturing unit 202 may provide a photo or video capturing capability and may generally include at least a lens, a shutter, an image sensor, and the like. The image capturing unit 202 captures the video of the dynamic object and stores it in a storage unit 208, which may be removable or non-removable.
[0041] The processing unit 204 may include one or more computing components including, but not limited to, central processing unit (CPU), graphics processing unit (GPU), digital signal processor (DSP), or other specialized microprocessors. The one or more computing components may be in communication with the memory unit 206 for executing specific functions.
[0042] The memory unit 206 includes a warping module 210, a segmentation module 212, a correlation module 214, a displacement computation module 216, a deblurring module 218, a rewarping module 220, and an averaging module 222. In one embodiment, the modules may be implemented as software code to be executed by the processing unit 204 using any suitable computer language. These software codes may be stored as a series of instructions or commands in the memory unit 206. In various embodiments, the modules may be implemented as one or more software modules, hardware modules, firmware modules, or some combination of these.
[0043] The warping module 210 receives the video as an input to remove the background motion in each frame of the video. The video may be captured using the image capturing unit 202 and stored in the storage unit 208 or may be received from another device or network. The warping module 210 is configured to warp the plurality of frames of the video to compensate for background motion in the video based on homographies of consecutive frames. The warping aligns these dynamic or changing backgrounds to create the static background, which is consistent in each of the plurality of frames.
[0044] Further, the segmentation module 212 is configured to segment the foreground from the background compensated frames to create the trimap. The foreground may include the moving object, which is required to be aligned. The segmentation module 212 may perform graph cut segmentation to create the trimap.
[0045] The correlation module 214 is configured to correlate the foreground of each frame with a preceding and a succeeding frame to obtain inter-frame object displacement with respect to the preceding and the succeeding frames. Further, the displacement computation module 216 is configured to determine a net displacement and a relative depth of the object based on the inter-frame object displacement and camera motion t_{(i,j)}.
[0046] Further, the deblurring module 218 is configured to deblur the foreground, if a blur is present in the foreground, to obtain the plurality of clear frames. The deblurring module 218 may construct an alpha-matte using the trimap to separate background, foreground, and ambiguous region. The deblurring module 218 may fill the separated backgrounds using the pixels of the static backgrounds of the plurality of frames. Blur weights of the foreground are estimated by uniformly sampling the net object displacement. Further, the foreground is deblurred using the object velocity, net displacement of the object, blur weights, and a kernel. The deblurred foreground and the filled background in each frame are mixed to obtain the plurality of clear frames.
[0047] The rewarping module 220 is configured to rewarp the plurality of clear frames using the net displacement of the object to create rewarped clear frames, which have a dynamic background and static foreground. The dynamic backgrounds are similar to the backgrounds of the plurality of frames of the video. The averaging module 222 is configured to perform an averaging operation on the rewarped clear frames to generate the pan shot. The pan shot has a blurred background and a sharp foreground that gives an artistic visual effect.
[0048] In one embodiment, the invention also includes a computer-readable non-volatile memory or storage (not shown in figure) embodying instructions for implementing the method as illustrated and described in FIG. 1. The computer-readable memory may be media and/or devices that enable non-transitory storage of data to be executed on a system as illustrated in FIG. 2. The computer-readable memory may include removable and nonremovable media and/or storage devices implemented in a method or technology suitable for storage of information such as computer-readable instructions, data structures, program modules, logic elements/circuits, and other data.
[0049] Further, the system 200 may also include a user interface 224 configured to enable a user to interact with the system 200. For instance, the user may select a mode of operation, such as pan shot mode, and initiate video capturing via the user interface 224. In one embodiment, the user interface 224 may include a display unit and a plurality of control buttons. In other embodiments, the user interface 224 may include a touch display.
[0050] In various embodiments, the user interface 224 may provide a plurality of controls for adjusting settings of the system 200. For example, the plurality of controls may include menu button, info button, shutter button, ISO button, mode dial, display illumination button, power button, flash button, erase button, and various control options as known in the art.
[0051] In some embodiments, the system 200 may include a communication unit (not shown in figure) for sending or receiving files, such as images and videos, to and from other devices. For example, the system may be configured with a Wi-Fi feature to connect to or disconnect from Wi-Fi networks. The system 200 may also include one or more of an NFC antenna, Bluetooth chip, Infrared, USB port, HDMI port, and the like.
EXAMPLES
EXAMPLE 1: PAN SHOT FROM A VIDEO WITH CLEAR FOREGROUND
[0052] An example implementation of the method for generating pan shots from a video with a clear foreground is illustrated in FIG. 3A - FIG. 3E. The video depicts a cheetah, in the foreground, sprinting against a green background. The video was downloaded from: https://en.wikipedia.org/wiki/File:Cheetahs_on_the_Edge_(Director%27s_Cut).ogv.
[0053] Three consecutive frames of the video are depicted in FIG. 3A. As shown in FIG. 3A, the three consecutive frames (i), (ii), and (iii) of the video depict the cheetah as an object in foreground, and each frame has a distinct background. The object and the foreground were clearly captured, i.e., without blur, as the video was captured by a video capturing device with a high shutter speed.
[0054] Each frame of the video was warped to compensate for the motion in the background based on homographies (H) of consecutive frames. The homographies (H) were determined between a reference frame and the plurality of frames. As there may be large view changes between the reference frame and, say, the i-th frame, the warps between consecutive frames were determined and composed to build the homography relating the i-th frame and the reference frame (frame k) as H_g^{(k,i)} = ∏_{m=k}^{i−1} H_g^{(m,m+1)}. The background motion due to the camera was removed on aligning the frames, and the background in each frame (i), (ii), (iii) of the video was made consistent, as shown in FIG. 3B.
[0055] After background compensation, the foreground from each of the background compensated frames was segmented, as shown in FIG. 3C. The problem of segmenting the moving object can be formulated as a bilabel assignment problem, which can be incorporated in a Markov Random Field (MRF) framework and effectively solved using graph cuts. The cost of assigning a label r to the pixel position X^{(i)} = (x^{(i)}, y^{(i)}) in the i-th frame is given as
E(rx(i)) = Edata(x(i)) + Esmooth(x(i)) (1) where r ^ {— 1, 1 } and -1 and +1 may be the two labels corresponding to foreground and background, respectively. Here, the data cost at a pixel x® is defined by combining the global motion between the frames and optical flow, while the smoothness cost depends on the color information between the neighboring pixels.
[0056] Further, an optical flow technique was used to give the displacement field between two images, which can be used to decipher point correspondences. This method uses additional constraints on gradient and flow smoothness to make the flow vectors reliable under small illumination changes.
[0057] For any given two motion compensated frames (L^{(i)}, L^{(j)}) and the optical flow vectors (u_x, u_y) between the two frames, each pixel position X^{(i)} in L^{(i)} was warped to X̂^{(j)} in L^{(j)} using the optical flow as X̂^{(j)} = [x̂^{(j)}, ŷ^{(j)}] = [x^{(i)} + u_x, y^{(i)} + u_y]. When the warped point X̂^{(j)} and X^{(j)} coincide, i.e., when the flow is zero or within a limit δ, it was assumed that the point is from the background, since the background motion has already been compensated; otherwise it was considered to be from the foreground.
[0058] Further, the data cost in Eq. (1) is a measure of how well the label r fits the pixel position X^{(i)} and is defined as
E_data(X^{(i)}) = exp[ r_{X^{(i)}} ( ‖X̂^{(j)} − X^{(j)}‖² − δ ) ]
[0059] If the pixel distance ‖X̂^{(j)} − X^{(j)}‖ ≤ δ (i.e., the pixels correspond to the background), then assigning the value r_{X^{(i)}} = 1 will minimize the data cost, and vice versa. Further, the smoothness cost takes the color information in the neighborhood of a pixel into consideration while assigning the label to make the cut smooth. Hence, it is assigned as
E_smooth(X^{(i)}) = Σ_{Y^{(i)} ∈ N} φ[ r_{Y^{(i)}} ≠ r_{X^{(i)}} ] · e^{−β‖C‖²}

where C is the color difference between the pixels, N is the pixel neighborhood of X^{(i)}, and φ[·] is a function that returns 1 when the argument inside the braces is true and 0 if false. Minimizing the cost function in Eq. (1) gives the dynamic object segmented out from the background.
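A compact sketch of the segmentation of paragraphs [0055]-[0059]: residual optical flow after background compensation drives the data cost of Eq. (1), a color-dependent term approximates the smoothness cost, and the cut is solved with the PyMaxflow library. Farneback flow stands in for the unspecified flow method, and δ, β, λ, and the node-wise smoothness approximation are assumptions.

```python
import cv2
import maxflow  # PyMaxflow
import numpy as np

def segment_foreground(warped_prev, warped_curr, delta=1.0, beta=0.05, lam=2.0):
    """Bilabel MRF segmentation of Eq. (1), solved with a graph cut.
    Residual optical flow between background-compensated frames drives
    the data cost; color similarity modulates the smoothness cost."""
    g0 = cv2.cvtColor(warped_prev, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(warped_curr, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(g0, g1, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    dist2 = flow[..., 0] ** 2 + flow[..., 1] ** 2  # ||X_hat - X||^2, residual motion
    # Data costs E_data = exp[r (dist2 - delta)] for labels r = +1 (bg), -1 (fg).
    cost_bg = np.exp(np.clip(dist2 - delta, -20, 20))
    cost_fg = np.exp(np.clip(delta - dist2, -20, 20))
    # Node-wise stand-in for the pairwise smoothness term e^{-beta ||C||^2}.
    color = warped_curr.astype(np.float32)
    gx = np.linalg.norm(np.gradient(color, axis=1), axis=-1)
    gy = np.linalg.norm(np.gradient(color, axis=0), axis=-1)
    w = lam * np.exp(-beta * (gx ** 2 + gy ** 2))
    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes(dist2.shape)
    g.add_grid_edges(nodes, weights=w, symmetric=True)
    # A node ending in the sink segment pays the source capacity (cost_fg),
    # so get_grid_segments(...) == True marks foreground pixels.
    g.add_grid_tedges(nodes, cost_fg, cost_bg)
    g.maxflow()
    return g.get_grid_segments(nodes)
```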
[0060] The segmented foregrounds from each of the plurality of frames were correlated to obtain the inter-frame object displacement. For example, consider the three consecutive frames {L^{(i)}}_{i=1}^{3} with inter-frame object displacements d_a and d_b. Let the true object displacement be O_d, the relative depth be γ, and the camera motion between the frames i and j be t_{(i,j)}, which can be found by negating the homography (H) estimated from the background. If the frame rate is f_p, then the object velocity v_x is O_d f_p. The object undergoes a displacement of O_d − γ t_{(i,j)} in between the input frames. When the background motion is compensated for, equivalently the object net displacement becomes O_d + (1 − γ) t_{(i,j)} in the motion compensated frames. The relative depth and the net displacement of the object may be determined by
$d_a = O_d + (1 - \gamma)\,t(1,2), \qquad d_b = O_d + (1 - \gamma)\,t(2,3)$

so that

$\gamma = 1 - \frac{d_a - d_b}{t(1,2) - t(2,3)}, \qquad O_d = d_a - (1 - \gamma)\,t(1,2)$
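A worked numeric sketch of this two-equation system (the function and the numbers are hypothetical):

```python
def object_motion(d_a, d_b, t12, t23):
    """Solve d_a = O_d + (1 - gamma) * t12 and
    d_b = O_d + (1 - gamma) * t23 for gamma and O_d."""
    gamma = 1.0 - (d_a - d_b) / (t12 - t23)
    O_d = d_a - (1.0 - gamma) * t12
    return gamma, O_d

# Hypothetical numbers: observed displacements of 12 and 9 pixels with
# camera translations of 5 and 2 pixels give gamma = 0.0 and O_d = 7.0;
# check: 7 + (1 - 0) * 2 = 9 = d_b.
print(object_motion(12.0, 9.0, 5.0, 2.0))
```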
[0061] The frames were rewarped using the net displacement to create rewarped clear frames. The rewarped frames have a dynamic background and a static foreground, as shown in FIG. 3D. The rewarped clear frames were averaged to generate the pan shot. As shown in FIG. 3E, the pan shot has a sharp foreground and a blurred background.
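A simplified sketch of this rewarp-and-average step, assuming purely horizontal object motion of O_d pixels per frame on background-aligned frames:

```python
import numpy as np

def pan_shot(aligned_frames, O_d, ref=0):
    """Shift each background-aligned frame so the object stays at its
    position in the reference frame, then average; the static object
    stays sharp while the shifted backgrounds smear into motion blur."""
    shifted = []
    for i, frame in enumerate(aligned_frames):
        shift = int(round(-O_d * (i - ref)))   # undo the object's drift
        # np.roll wraps pixels around the border; a real implementation
        # would crop or pad instead of wrapping.
        shifted.append(np.roll(frame, shift, axis=1))
    return np.mean(shifted, axis=0).astype(np.uint8)
```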
EXAMPLE 2: PAN SHOT FROM A VIDEO WITH CLEAR FOREGROUND
[0062] Another example implementation of the method is illustrated in FIG. 4A and FIG. 4B, which depict a gazelle. FIG. 4A illustrates one of the plurality of frames of the video of the sprinting gazelle on which the method was performed. As shown, FIG. 4A depicts a scenario where the foreground and the background are devoid of any motion blur artifacts. Even though the background appears to have defocus blur here, it does not reflect the object motion. The same procedure as in the no-blur case was performed to obtain the pan shot, as shown in FIG. 4B.

EXAMPLE 3: PAN SHOT FROM A VIDEO WITH BLURRED FOREGROUND
[0063] Yet another example implementation of the method generates a pan shot from a video captured with a slow shutter speed. The video, as illustrated in FIG. 5A and FIG. 5B, depicts a moving car. FIG. 5A illustrates one of the plurality of frames of the video of the moving car on which the method was performed. As shown, the frame has a static background and a blurred foreground due to the slow shutter speed of the video recording device.
[0064] A global homography of the plurality of frames was determined, and the frames were warped to align the background in each of the plurality of frames. The foreground in each of the frames was then segmented and correlated to determine the net displacement and relative depth of the object. Since the foreground is blurred, a deblurring operation was performed on each of the plurality of frames.
[0065] The deblurring operation included constructing an alpha-matte using the trimap to separate the background, the foreground, and an ambiguous region, which may belong to either the background or the foreground. The separated background includes regions with valid pixels and regions with missing pixels, i.e., holes left behind by the removed foreground. The regions with missing pixels were filled using the pixels of the static backgrounds of the plurality of frames to obtain a filled background.
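One way the hole-filling step could be sketched: because the backgrounds are aligned, pixels occluded by the foreground in one frame can be taken from the other frames, e.g., via a per-pixel median over the stack (the median strategy is an assumption; the patent only states that pixels from the static backgrounds are used):

```python
import numpy as np

def fill_background(aligned_frames, fg_masks):
    """Fill foreground holes in each frame's background with the median
    of the corresponding background pixels across all aligned frames."""
    stack = np.stack([f.astype(float) for f in aligned_frames])
    masks = np.stack(fg_masks)              # True where foreground
    stack[masks] = np.nan                   # hide foreground pixels
    # Per-pixel consensus background; a pixel covered by the foreground
    # in every frame would remain NaN and need separate inpainting.
    median_bg = np.nanmedian(stack, axis=0)
    filled = []
    for frame, mask in zip(aligned_frames, fg_masks):
        out = frame.astype(float)
        out[mask] = median_bg[mask]         # patch only the holes
        filled.append(out.astype(np.uint8))
    return filled
```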
[0066] The deblurring of the foreground was based on blur weights corresponding to the camera motion $t(i,j)$ in each frame. The blur weights estimated for the i-th background frame may be denoted by $\omega^{(i)} = \{\omega_k^{(i)}\}_{k=1}^{|K|}$. Each entry in $\omega^{(i)}$ represents the fraction of the total exposure time the camera spent in a particular pose $k \in K$. Since the camera trajectory is dominated by the panning motion of the camera and is along the horizontal direction, the weights estimated for each frame can be ordered in ascending order of $t_x$ so as to obtain the camera motion trajectory $t(i,j)$ in each frame. From the camera trajectory and the weights $\omega^{(i)}$ for the i-th background frame, it may be assumed that for a fraction $\tau_k$ of the exposure time the camera was static in pose k and only the dynamic foreground object moved with its own velocity $v_x$. The displacement experienced by the moving object in that fraction of time would be $v_x \tau_k$ from its position with respect to the previous camera pose k − 1. The position of the object with respect to the k-th pose was calculated using
[Equation not recoverable from the source image: the object position with respect to the k-th pose, computed from the γ-scaled homography of pose k and the object displacement $v_x \tau_k$ relative to its position at pose k − 1.]
where $H_k^{(\gamma)}$ is the homography corresponding to the k-th pose, scaled by γ. Hence, the foreground weights $\{\omega_k^{(f)}\}$ for that particular camera pose k and background weight were found by sampling the displacement into equal intervals and distributing the k-th weight
[Equation not recoverable from the source image: the k-th background weight $\omega_k^{(i)}$ is distributed uniformly over the sampled displacement intervals to obtain the corresponding foreground weights.]
On repeating this for every k = 1, 2, ..., |K|, the foreground weights for the i-th frame were derived from the estimated background weights. The foreground weights (ω) will not be uniform in general, as they also depend on the camera motion $t(i,j)$.
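The redistribution described above could be sketched as follows; this is one interpretation, and the variable names, the uniform splitting, and the omission of the γ-scaled homography term are all simplifying assumptions:

```python
def foreground_weights(bg_weights, v_x, exposure):
    """Spread each background (camera-pose) weight over the displacement
    the object itself accrues while the camera occupies that pose,
    yielding a generally non-uniform foreground blur kernel."""
    fg = {}
    position = 0.0
    for w_k in bg_weights:                  # poses ordered by t_x
        tau_k = w_k * exposure              # time spent in pose k
        d_k = v_x * tau_k                   # object displacement in pose k
        n_bins = max(1, int(round(abs(d_k))))
        for b in range(n_bins):             # uniform split of the k-th weight
            pos = int(round(position + b * d_k / n_bins))
            fg[pos] = fg.get(pos, 0.0) + w_k / n_bins
        position += d_k
    return fg   # maps horizontal displacement (pixels) -> weight
```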
[0067] After the weight distribution, the foreground alone was subjected to non-blind deblurring using the non-uniformly distributed blur weights. In one embodiment, the foreground deblurring may be based on the Richardson-Lucy algorithm, with the update rule modified to incorporate the non-uniform blur weights. The modified update rule may be given as
[Equation not recoverable from the source image: the modified Richardson-Lucy update rule, incorporating the non-uniform blur weights and a total-variation regularization term $R_{TV}$.]

where $\lambda_1 = 0.002$ for an image scaled to the 0-1 range, $E^l$ is the residual error between the real blurred image and the predicted blurred image, and $R_{TV} = -\nabla \cdot \frac{\nabla L}{|\nabla L|}$.

[0068] Further, the deblurred foreground and the filled background in each frame were mixed to obtain the plurality of clear frames (not blurred). The plurality of clear frames were rewarped using the net displacement to create rewarped clear frames. The rewarped frames have a dynamic background similar to the background of the input frames of the video. The deblurred foreground is static and is at the same position in each frame. The rewarped clear frames were then averaged to generate the pan shot, which has a sharp foreground and a blurred background.
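For illustration, a plain weighted Richardson-Lucy iteration for a horizontal blur is sketched below on a grayscale image scaled to 0-1; the TV-regularization term of the patent's modified update is omitted here, and building the PSF directly from the foreground weights is an assumption:

```python
import numpy as np
from scipy.signal import fftconvolve  # assumed dependency

def rl_deblur(blurred, fg_weights, iters=30, eps=1e-8):
    """Non-blind Richardson-Lucy deconvolution of the foreground with a
    1-D horizontal PSF built from the non-uniform blur weights."""
    psf = np.asarray(fg_weights, dtype=float)
    psf = (psf / psf.sum()).reshape(1, -1)      # normalized horizontal PSF
    psf_flip = psf[:, ::-1]                     # adjoint of the blur
    latent = np.full_like(blurred, 0.5, dtype=float)
    for _ in range(iters):
        predicted = fftconvolve(latent, psf, mode='same')
        ratio = blurred / (predicted + eps)     # residual error term
        latent = latent * fftconvolve(ratio, psf_flip, mode='same')
    return np.clip(latent, 0.0, 1.0)
```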
[0069] The advantages of the present subject matter and its embodiments include automatically and effortlessly generating pan shots with minimal human intervention. The method and system also account for any blurring of the object or foreground due to camera motion, object motion, shutter speed, etc., and provide an artistic visual effect to the photograph. Therefore, the subject matter may be used for augmenting the hardware capability of consumer cameras or mobile phones.
[0070] Although the detailed description contains many specifics, these should not be construed as limiting the scope of the invention but merely as illustrating different examples and aspects of the invention. It should be appreciated that the scope of the invention includes other embodiments not discussed herein. Various other modifications, changes and variations which will be apparent to those skilled in the art may be made in the arrangement, operation and details of the system and method of the present invention disclosed herein without departing from the spirit and scope of the invention as described here.
[0071] While the invention has been disclosed with reference to certain embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from its scope.

Claims

We claim:
1. A method for automatically generating a pan shot from a video of a dynamic object, the method comprising:
warping a plurality of frames of the video to compensate for background motion in the video based on homographies (H) of consecutive frames, wherein the warping comprises aligning dynamic backgrounds to create a static background in the plurality of frames;
segmenting foreground from the background compensated frames to create a trimap, wherein the foreground comprises the dynamic object;
correlating the foreground of each frame with a preceding and a succeeding frame to obtain inter-frame object displacement (d_i) with respect to the preceding and succeeding frames;
determining a net displacement (O_d) and a relative depth (γ) of the object based on the inter-frame object displacement (d_i) and a camera motion t(i,j);
performing a deblurring operation in the foreground if a blur is present in the foreground to obtain a plurality of clear frames;
rewarping the plurality of clear frames using the net displacement (O_d) of the object to create rewarped clear frames, wherein the rewarped clear frames comprise dynamic background and static foreground; and
averaging the rewarped clear frames to generate the pan shot, wherein the pan shot comprises a blurred background and a sharp foreground.
2. The method of claim 1, wherein performing the deblurring operation comprises:

determining an object velocity from the net displacement (O_d) and a frame rate;
constructing an alpha-matte using the trimap to separate background, foreground, and an ambiguous region, wherein the ambiguous region comprises a combination of pixels from background and foreground;
estimating blur weights by uniformly sampling object displacement;
filling the background using the pixels of the static backgrounds in the plurality of frames;
deblurring the foreground in the plurality of frames based on object velocity, net displacement (O_d), and a kernel; and
mixing the deblurred foreground and filled background in each frame to obtain the plurality of clear frames.
3. The method of claim 1, wherein the trimap is a pre-segmented image comprising a background, a foreground, and a plurality of ambiguous regions.
4. The method of claim 1, wherein the homography (H) is determined between a reference frame and the plurality of frames using random sample consensus (RANSAC).
5. The method of claim 1, wherein the determining of the net displacement and the relative depth is based on:

$d_a = O_d + (1 - \gamma)\,t(1,2), \qquad d_b = O_d + (1 - \gamma)\,t(2,3)$

$\gamma = 1 - \frac{d_a - d_b}{t(1,2) - t(2,3)}, \qquad O_d = d_a - (1 - \gamma)\,t(1,2)$
6. The method of claim 1, wherein the segmenting is performed using a graph-cut algorithm.
7. The method of claim 6, wherein the graph-cut algorithm is based on a data cost function and a smoothness cost function for distinguishing foreground and background in each frame.
8. The method of claim 1, wherein the camera motion t(i,j) is determined using the homographies (H).
9. The method of claim 1, further comprising displaying the generated pan shot, via an interface, to a user.
10. A system for automatically generating a pan shot from a video of a dynamic object, the system comprising:
an image capturing unit for capturing the video;
a processing unit;
a memory unit coupled to the processing unit, the memory unit comprising:
a warping module configured to warp a plurality of frames of the video to compensate for background motion in the video based on homographies (H) of consecutive frames, wherein the warping comprises aligning dynamic backgrounds to create a static background in the plurality of frames;

a segmentation module configured to segment foreground from the background compensated frames to create a trimap, wherein the foreground comprises the dynamic object;
a correlation module configured to correlate the foreground of each frame with a preceding and a succeeding frame to obtain inter-frame object displacement (d_i) with respect to the preceding and succeeding frames;
a displacement computation module configured to determine a net displacement (O_d) and a relative depth (γ) of the object based on the inter-frame object displacement (d_i) and the camera motion t(i,j);
a deblurring module configured to deblur the foreground if a blur is present in the foreground to obtain a plurality of clear frames;
a rewarping module configured to rewarp the plurality of clear frames using the net displacement (O_d) of the object to create rewarped clear frames, wherein the rewarped clear frames comprise dynamic background and static foreground; and

an averaging module configured to perform an averaging operation on the rewarped clear frames to generate the pan shot, wherein the pan shot comprises a blurred background and a sharp foreground.
11. The system of claim 10, wherein the image capturing unit comprises at least:
a lens for allowing light from the object and surroundings of the object to be captured;
a shutter to control the amount of light to be captured; and

an image sensor for converting the light to an electrical signal, wherein the electrical signal is stored in the memory unit.
12. The system of claim 10, wherein the deblurring module is further configured to:
determine an object velocity from the net displacement (O_d) and a frame rate (f_p);
construct an alpha-matte using the trimap to separate a clear background, a clear foreground, and an ambiguous region, wherein the ambiguous region comprises a combination of pixels from background and foreground;

fill the background using the pixels of the static backgrounds in the plurality of frames;

estimate blur weights by uniformly sampling object displacement;

deblur the foreground in the plurality of frames based on object velocity, net displacement, blur weights, and a kernel; and

mix the deblurred foreground and filled background in each frame to obtain a plurality of clear frames.
13. The system of claim 10, further comprising a user interface configured to enable a user to interact with the system, wherein the user interface comprises at least:
a display unit for displaying at least the generated pan shot; and
a plurality of controls configured to enable at least:
a selection of a mode of operation comprising pan shot mode, and an adjustment of settings of the system.
14. The system of claim 13, wherein the display unit is configured to display the plurality of controls.
15. The system of claim 10 incorporated in a video recording device.
16. The system of claim 10, further comprising a communication unit for sending data to or receiving data from other devices.
17. A computer program product comprising a non-volatile memory having computer-executable instructions stored therein for automatically generating a pan shot from a video of a dynamic object, the instructions performing the steps of:
warping a plurality of frames of the video to compensate for background motion in the video based on homographies (H) of consecutive frames, wherein the warping comprises aligning dynamic backgrounds to create a static background in the plurality of frames;
segmenting foreground from the background compensated frames to create a trimap, wherein the foreground comprises the dynamic object;
correlating the foreground of each frame with a preceding and a succeeding frame to obtain inter-frame object displacement (d_i) with respect to the preceding and succeeding frames;
determining a net displacement (O_d) and a relative depth (γ) of the object based on the inter-frame object displacement (d_i) and a camera motion t(i,j);

performing a deblurring operation in the foreground if a blur is present in the foreground to obtain a plurality of clear frames;
rewarping the plurality of clear frames using the net displacement (O_d) of the object to create rewarped clear frames, wherein the rewarped clear frames comprise dynamic background and static foreground; and
averaging the rewarped clear frames to generate the pan shot, wherein the pan shot comprises a blurred background and a sharp foreground.
18. The computer program product of claim 17 wherein the instructions to perform the deblurring operation comprise:
determining an object velocity from the net displacement (O_d) and a frame rate;
constructing an alpha-matte using the trimap to separate background, foreground, and an ambiguous region, wherein the ambiguous region comprises a combination of pixels from background and foreground;
estimating blur weights by uniformly sampling object displacement;
filling the background using the pixels of the static backgrounds in the plurality of frames;
deblurring the foreground in the plurality of frames based on object velocity, net displacement (O_d), and a kernel;
mixing the deblurred foreground and filled background in each frame to obtain the plurality of clear frames.
19. The computer program product of claim 17, further comprising instructions for displaying at least the generated pan shot via a user interface.
20. The computer program product of claim 17, further comprising instructions for providing a plurality of controls via a user interface for:
selection of a mode of operation, wherein the mode of operation comprises a pan shot mode; and
an adjustment of settings of the system.
21. The computer program product of claim 17, further comprising instructions for communicating with other devices.
PCT/IN2017/050605 2016-12-20 2017-12-19 System and method for generating pan shots from videos WO2018116322A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN201641043468 2016-12-20
IN201641043468 2017-10-06

Publications (1)

Publication Number Publication Date
WO2018116322A1 (en)

Family

ID=62639176

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IN2017/050605 WO2018116322A1 (en) 2016-12-20 2017-12-19 System and method for generating pan shots from videos

Country Status (1)

Country Link
WO (1) WO2018116322A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2020080518A (en) * 2018-11-14 2020-05-28 キヤノン株式会社 Image processing apparatus
CN113315903A (en) * 2020-02-26 2021-08-27 北京小米移动软件有限公司 Image acquisition method and device, electronic equipment and storage medium
WO2022062554A1 (en) * 2020-09-27 2022-03-31 华为技术有限公司 Multi-lens video recording method and related device

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160301868A1 (en) * 2015-04-10 2016-10-13 Qualcomm Incorporated Automated generation of panning shots

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160301868A1 (en) * 2015-04-10 2016-10-13 Qualcomm Incorporated Automated generation of panning shots

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
M. IRANI ET AL.: "Efficient representations of video sequences and their applications", SIGNAL PROCESSING: IMAGE COMMUNICATION, vol. 8, no. 4, May 1996 (1996-05-01), pages 327 - 351, XP004069965 *
M. IRANI ET AL.: "Video indexing based on mosaic representations", PROCEEDINGS OF THE IEEE, vol. 86, no. 5, May 1998 (1998-05-01), pages 905 - 921, XP011044016 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2020080518A (en) * 2018-11-14 2020-05-28 キヤノン株式会社 Image processing apparatus
JP7143191B2 (en) 2018-11-14 2022-09-28 キヤノン株式会社 Image processing device
CN113315903A (en) * 2020-02-26 2021-08-27 北京小米移动软件有限公司 Image acquisition method and device, electronic equipment and storage medium
CN113315903B (en) * 2020-02-26 2023-08-22 北京小米移动软件有限公司 Image acquisition method and device, electronic equipment and storage medium
WO2022062554A1 (en) * 2020-09-27 2022-03-31 华为技术有限公司 Multi-lens video recording method and related device


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17884901

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17884901

Country of ref document: EP

Kind code of ref document: A1