WO2017064700A1 - A method and system for stabilizing video frames - Google Patents

A method and system for stabilizing video frames

Info

Publication number
WO2017064700A1
Authority
WO
WIPO (PCT)
Prior art keywords
salient feature
frames
feature points
dropping
cluster
Prior art date
Application number
PCT/IL2016/051094
Other languages
French (fr)
Inventor
Markus Schlattmann
Rohit MANDE
Original Assignee
Agt International Gmbh
Reinhold Cohn And Partners
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Agt International Gmbh, Reinhold Cohn And Partners filed Critical Agt International Gmbh
Priority to EP16855056.4A priority Critical patent/EP3362988A4/en
Publication of WO2017064700A1 publication Critical patent/WO2017064700A1/en
Priority to IL257090A priority patent/IL257090A/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757Matching configurations of points or features
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/681Motion detection
    • H04N23/6811Motion detection based on the image signal
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/682Vibration or motion blur correction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/682Vibration or motion blur correction
    • H04N23/683Vibration or motion blur correction performed by a processor, e.g. controlling the readout of an image memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30244Camera pose

Definitions

  • the present disclosure relates to stabilizing frames captured by video cameras in general, and to a method and system for stabilizing frames captured by fixed location video cameras, in particular.
  • US2011017601 discloses a method of processing a digital video sequence that includes estimating compensated motion parameters and compensated distortion parameters (compensated M/D parameters) of a compensated motion/distortion (M/D) affine transformation for a block of pixels in the digital video sequence, and applying the compensated M/D affine transformation to the block of pixels using the estimated compensated M/D parameters to generate an output block of pixels, wherein translational and rotational jitter in the block of pixels is stabilized in the output block of pixels and distortion due to skew, horizontal scaling, vertical scaling, and wobble in the block of pixels is reduced in the output block of pixels.
  • the method first extracts robust feature trajectories from the input video. Optimization is then performed to find a set of transformations to smooth out these trajectories and stabilize the video. In addition, the optimization also considers quality of the stabilized video and selects a video with not only smooth camera motion but also less unfilled area after stabilization.
  • motion inpainting is proposed to enforce spatial and temporal consistency of the completion in both static and dynamic image areas.
  • image quality in the stabilized video is enhanced with a new practical deblurring algorithm. Instead of estimating point spread functions, the method transfers and interpolates sharper image pixels of neighbouring frames to increase the sharpness of the frame.
  • "Image Stabilization - improving camera usability", a white paper by Axis Communications published in 2014, relates to a combination of gyroscopes and efficient algorithms for modeling camera motion.
  • US8054881 provides real-time image stabilization using computationally efficient corner detection and correspondence.
  • the real-time image stabilization performs a scene learning process on a first frame of an input video to obtain reference features and a detection threshold value.
  • the presence of jitter is determined in a current frame of the input video by comparing features of the current frame against the reference features using the detection threshold value. If the current frame is found to be unstable, corner points are obtained from the current frame. The obtained corner points are matched against reference corner points of the reference features. If the number of matched corner points is not less than a match point threshold value, the current frame is modeled using random sample consensus. The current frame is corrected to compensate for the jitter based on the results of the modeling.
  • US8385732 discloses image stabilization techniques used to reduce jitter associated with the motion of a camera.
  • Image stabilization can compensate for pan and tilt (angular movement, equivalent to yaw and pitch) of a camera or other imaging device.
  • Image stabilization can be used in still and video cameras, including those found in mobile devices such as cell phones and personal digital assistants (PDAs).
  • One aspect of the disclosed subject matter relates to a computer-implemented method for stabilizing a frame, comprising: receiving a frame sequence comprising three or more frames, including a current frame; determining salient feature points within the frames; matching the salient feature points between the frames; dropping salient feature points associated with advancing objects; dropping salient feature points associated with objects moving in shaking movements; computing a transformation between pairs of consecutive frames from amongst the frames, based upon non-dropped salient feature points, thereby obtaining a multiplicity of transformations; determining a center position for the frames based upon the multiplicity of transformations; determining a stabilizing transformation from a current frame to the center position; and applying the stabilizing transformation to the current frame to obtain a stabilized frame.
  • the method may further comprise converting one or more of the frames into a black and white frame. In some exemplary embodiments of the disclosed subject matter, the method may further comprise reducing resolution of one or more of the frames. In some exemplary embodiments of the disclosed subject matter, the method may further comprise adding one or more points to the salient feature points. In some exemplary embodiments of the disclosed subject matter, within the method, dropping the salient feature points associated with objects moving in shaking movements or the salient feature points not associated with advancing objects is optionally performed only for salient feature points appearing in at least a minimal number of frames within the frames.
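As a rough illustration of these optional preprocessing steps, a frame may be converted to grayscale and downscaled before feature detection. The sketch below assumes an OpenCV pipeline; the scale factor is an illustrative choice, not a value given by the method.

```python
# Minimal preprocessing sketch (assumed OpenCV pipeline): grayscale
# conversion and resolution reduction before salient point detection.
# The 0.5 scale factor is illustrative, not specified by the method.
import cv2

def preprocess(frame_bgr, scale=0.5):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)  # black-and-white frame
    return cv2.resize(gray, None, fx=scale, fy=scale)   # reduced resolution
```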
  • dropping the salient feature points associated with advancing objects is optionally performed by: determining total flow for a salient feature point over the at least three frames; determining representative flow for a multiplicity of frames of the frames, and an average representative flow by averaging the flow determined for the multiplicity of frames; and dropping salient feature points for which the total flow meets a criterion related to the average representative flow.
  • the method may further comprise providing the total flow for one or more salient feature points.
  • dropping the salient feature points associated with objects moving in shaking movements is optionally performed only for salient feature points not associated with advancing objects.
  • dropping the salient feature points associated with objects moving in shaking movements is optionally performed by: determining an amplitude for each salient feature point over the frames; clustering the salient feature points into a first cluster and a second cluster based upon the amplitude, wherein the first cluster has a higher center value than the second cluster; and subject to at most a predetermined percentage of the salient feature points being clustered to the first cluster, dropping salient feature points associated with the first cluster, otherwise dropping salient feature points associated with the second cluster.
  • dropping the salient feature points associated with advancing objects is optionally performed only for salient feature points not associated with objects moving in shaking movements.
  • the method may further comprise providing the amplitude for one or more salient feature points.
  • the predetermined percentage is optionally between about 15% and about 40%.
  • the method may further comprise determining proximity between the first cluster and the second cluster, and re-considering a salient feature point associated with a dropped cluster, if close to the center value of a non-dropped cluster.
  • each frame is optionally stabilized when it is the current frame. In some exemplary embodiments of the disclosed subject matter, within the method, a frame is stabilized only if it is displayed.
  • computing the transformation between pairs of consecutive frames is optionally based on considering a representative point for each area of the current frame, the representative point determined upon non-dropped salient feature points in the area.
  • the method may further comprise dropping incorrectly tracked background points.
  • Another aspect of the disclosed subject matter relates to a computerized system for determining transition parameters between objects appearing in a first image captured by a first capture device and objects appearing in a second image captured by a second capture device, the system comprising a processor configured to: receiving a frame sequence comprising three or more frames, including a current frame; determining salient feature points within the frames; matching the salient feature points between the frames; dropping salient feature points associated with advancing objects; dropping salient feature points associated with objects moving in shaking movements; computing a transformation between pairs of consecutive frames from amongst the frames, based upon a multiplicity of non-dropped salient feature points, thereby obtaining a multiplicity of transformations; determining a center position for the frames based upon the multiplicity of transformations; determining a stabilizing transformation from a current frame to the center position; and applying the stabilizing transformation to the current frame to obtain a stabilized frame.
  • dropping the salient feature points associated with advancing objects is optionally performed by: determining total flow for a salient feature point over the frames; determining representative flow for a multiplicity of frames of the frames, and an average representative flow by averaging the flow determined for the multiplicity of frames; and dropping salient feature points for which the total flow meets a criterion related to the average representative flow.
  • dropping the salient feature points associated with objects moving in shaking movements is optionally performed by: determining an amplitude for each salient feature point over the frames; clustering the salient feature points into a first cluster and a second cluster based upon the amplitude, wherein the first cluster has a higher center value than the second cluster; and subject to at most a predetermined percentage of the salient feature points being clustered to the first cluster, dropping salient feature points associated with the first cluster, otherwise dropping salient feature points associated with the second cluster.
  • Yet another aspect of the disclosed subject matter relates to a computer program product comprising a computer readable storage medium retaining program instructions, which program instructions when read by a processor, cause the processor to perform a method comprising: receiving a frame sequence comprising three or more frames, including a current frame; determining salient feature points within the frames; matching the salient feature points between the frames; dropping salient feature points associated with advancing objects; dropping salient feature points associated with objects moving in shaking movements; computing a transformation between pairs of consecutive frames from amongst the frames, based upon a multiplicity of non-dropped salient feature points, thereby obtaining a multiplicity of transformations; determining a center position for the frames based upon the multiplicity of transformations; determining a stabilizing transformation from a current frame to the center position; and applying the stabilizing transformation to the current frame to obtain a stabilized frame.
  • FIG. 1 shows an exemplary illustration of an environment in which the disclosed subject matter may be used;
  • FIG. 2 shows a flowchart of steps in a method for stabilizing a current frame, in accordance with some exemplary embodiments of the disclosed subject matter;
  • FIG. 3 shows a flowchart of steps in a method for dropping salient feature points associated with advancing objects, in accordance with some exemplary embodiments of the disclosed subject matter;
  • FIG. 4 shows a flowchart of steps in a method for dropping salient feature points associated with objects moving in shaking movements, in accordance with some exemplary embodiments of the disclosed subject matter.
  • FIG. 5 shows a block diagram of a system for stabilizing a sequence of frames, in accordance with some exemplary embodiments of the disclosed subject matter.
  • One technical problem relates to video stabilization for a camera in a fixed location. Cameras deployed outdoors, or in very large spaces, often experience shaking, due to wind, vibrations caused by passing vehicles, or the like. Shaking may lead to dizziness of operators watching the video, as well as malfunctioning or reduced effectiveness of video analytics procedures applied on the video.
  • One technical solution relates to a system and method for stabilizing video frames captured by a fixed camera, and in particular video frames comprising moving or shaking objects. It will be appreciated that the solution is also applicable to a temporary fixed camera, such as a camera mounted on a non-moving vehicle. The solution is also applicable to any pan-tilt-zoom (PTZ) camera located at a temporary or permanent location.
  • the system and method receive a sequence of images, and start by identifying or selecting a group of salient feature points (also referred to as “salient points”, “feature points” or “points”) within the image sequence.
  • the system and method then identify points associated with advancing objects and drop them from the group of salient feature points.
  • Transformation is then determined between frames, based on the changes in location of non-dropped points, and a center position is determined for the sequence of frames. Based on these changes, a transformation may then be determined for each frame, and in particular a current frame, relative to the center position. The transformation may then be applied to the frames to obtain stabilized frames.
  • One technical effect of the disclosed subject matter relates to receiving a sequence of frames and stabilizing each frame in the sequence or at least selected frames.
  • the frames may be stabilized only when they are required for display.
  • the frames may be stabilized in real time, right after being captured, or offline before being stored.
  • Referring now to FIG. 1, showing an exemplary frame that may have to be stabilized.
  • the frame is taken by a video camera (not shown).
  • the frame shows a road 100, and a first car 104 and a second car 108 going along the road.
  • the frame also comprises a first tree 116 and a second tree 120 having branches and leaves that shake in captured frames.
  • Referring now to FIG. 2, showing a flowchart of steps in a method for stabilizing a sequence of video frames.
  • A frame sequence comprising at least three digital frames, including a current frame, may be received.
  • the frames may be received directly from a capture device such as a camera or video camera, from a storage device, from a scanner, or the like.
  • the method may be applied in an ongoing manner, such that for a current frame, the frame itself and preceding frames may be treated as a sequence.
  • the current frame may be treated as part of the frames preceding the newly received one.
  • On step 204, salient feature points are obtained for the frame.
  • the term salient feature point refers to an outstanding or noticeable point in the frame, such as a corner of an object, an edge, or the like.
  • the points may be identified, in a non-limiting example, by applying the Shi-Tomasi corner detector algorithm, or by any other corner detection image processing algorithm.
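By way of illustration only, the sketch below shows how such detection might be implemented with OpenCV's Shi-Tomasi detector (cv2.goodFeaturesToTrack); the parameter values are assumptions, not values taken from the disclosure.

```python
# Hedged sketch of salient point detection (step 204) using the
# Shi-Tomasi corner detector via OpenCV. maxCorners, qualityLevel and
# minDistance are illustrative values, not specified by the method.
import cv2
import numpy as np

def detect_salient_points(gray, max_points=500):
    corners = cv2.goodFeaturesToTrack(
        gray, maxCorners=max_points, qualityLevel=0.01, minDistance=8)
    # goodFeaturesToTrack returns shape (N, 1, 2) or None; normalize
    # to a plain (N, 2) array of (x, y) locations.
    if corners is None:
        return np.empty((0, 2), dtype=np.float32)
    return corners.reshape(-1, 2)
```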
  • In offline mode, the salient feature points may be determined earlier or by another entity and received on step 204, while in online mode, the salient feature points may be determined as part of step 204.
  • the salient feature points may include the hollow points, such as point 110 and the other points on corners of car 108, point 114 and the other points on corners of car 112, point 118 and other points on the tips of leaves of tree 116, and point 122 and other points on the tips of leaves of tree 120.
  • Additional points may be determined, for example if the number of detected points is below a predetermined threshold.
  • the predetermined threshold may relate to an absolute number of points per frame, or to a number of points relative to the number of pixels in the frame.
  • points may be added in areas of the frame in which relatively few salient feature points have been detected, for example points that are at least a predetermined distance, relative to the frame size, away from the detected points.
  • the additional points are added for providing better coverage of the frame.
  • the added points include points 124 and any one or more of the other black points. It will be appreciated that some added points may be at or near features not identified in the initial phase of salient feature point detection, such as point 126.
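One possible way to add such coverage points is sketched below, under the assumption that the frame is scanned on a fixed grid and every empty cell receives its center as an artificial point; the cell size is hypothetical.

```python
# Hedged sketch: add an artificial point at the center of every grid
# cell that contains no detected salient point, for better coverage.
import numpy as np

def add_coverage_points(points, frame_shape, cell=64):
    h, w = frame_shape[:2]
    added = []
    for y0 in range(0, h, cell):
        for x0 in range(0, w, cell):
            in_cell = ((points[:, 0] >= x0) & (points[:, 0] < x0 + cell) &
                       (points[:, 1] >= y0) & (points[:, 1] < y0 + cell))
            if not in_cell.any():
                added.append((x0 + cell / 2.0, y0 + cell / 2.0))
    if not added:
        return points
    return np.vstack([points, np.asarray(added, dtype=points.dtype)])
```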
  • On step 208, the salient feature points may be matched between the frames of the sequence. For example, point 110 will be matched with the point representing the front left corner of car 108 in further frames. On step 212, salient feature points associated with advancing objects are dropped.
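The disclosure does not mandate a particular matching technique for step 208; pyramidal Lucas-Kanade tracking is one common, assumed choice:

```python
# Hedged sketch of point matching between consecutive frames (step 208)
# using pyramidal Lucas-Kanade optical flow. Points that fail to track
# simply do not increase their presence count.
import cv2
import numpy as np

def match_points(prev_gray, curr_gray, prev_pts):
    pts = prev_pts.astype(np.float32).reshape(-1, 1, 2)
    next_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, pts, None)
    ok = status.ravel() == 1  # keep successfully matched points only
    return pts.reshape(-1, 2)[ok], next_pts.reshape(-1, 2)[ok]
```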
  • Referring now to FIG. 3, showing an exemplary method for identifying and dropping salient feature points associated with advancing objects.
  • On step 300, the following values may be determined:
  • Presence count for a salient feature point: the number or percentage of frames in the frame sequence in which the salient feature point was detected and matched with corresponding points in other frames within the sequence;
  • Total flow for a salient feature point: the magnitude of the sum of all displacement vectors of the feature point over time, e.g., the distance between the location of the point in the first frame in which the point appears and its location in the last frame in which the point appears.
  • the above values may be determined for all salient feature points determined on step 204 of Fig. 2, including the additional salient points. However, it may also be possible to determine the values only for a subset of the points. It will be appreciated that in some situations, the more salient feature points the values are determined for, the more accurate are the obtained results.
  • a representative flow value may be determined for a current frame which is to be stabilized.
  • the representative flow value may be a mean, a median, or the like.
  • a representative flow being a mean flow value may be obtained by calculating the average magnitude of the displacement vectors of all salient feature points matched from a previous frame to a current frame.
  • the average representative flow for a sequence of frames may be obtained, for example by averaging the representative flow values for a predetermined number of frames, for example the last 20 frames within the sequence.
  • On step 308, salient feature points for which the total flow meets one or more criteria related to the average representative flow over a number of frames are dropped. Dropping points may relate to excluding the points from further computations.
  • the average representative flow over a number of frames provides an estimate of the average movement of the scene between consecutive frames, over a sequence of a predetermined number of frames. Points whose total flow as determined above meets the criteria are assumed to be associated with advancing objects such as vehicles, fast advancing humans or animals, or the like, and are dropped, since they may interfere with estimating the movement of the camera which is to be stabilized.
  • the criteria may be, for a non-limiting example, that the total flow of the point is significantly more, for example more than three times, the average representative flow over the predetermined number of frames.
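Putting the steps of Fig. 3 together, a minimal sketch might read as follows. It assumes tracks covering only the last frames of interest (e.g., the last 20), and uses the three-times criterion above as the non-limiting example it is; the `tracks` structure is a hypothetical helper, not part of the disclosure.

```python
# Hedged sketch of the advancing-object test (Fig. 3). `tracks` maps a
# point id to a (T, 2) array of its matched locations over consecutive
# frames, assumed restricted to the window of interest.
import numpy as np

def keep_non_advancing(tracks, ratio=3.0):
    # Total flow: distance between a point's first and last locations.
    total_flow = {i: np.linalg.norm(t[-1] - t[0]) for i, t in tracks.items()}
    # Per-step displacement magnitudes of all points; their mean stands
    # in for the average representative flow (equal to the average of
    # per-frame means when all points are tracked across the window).
    steps = [np.linalg.norm(np.diff(t, axis=0), axis=1)
             for t in tracks.values()]
    flat = np.concatenate(steps) if steps else np.array([])
    avg_representative_flow = flat.mean() if flat.size else 0.0
    # Keep only points whose total flow stays below `ratio` times the
    # average representative flow; the rest ride on advancing objects.
    return [i for i, f in total_flow.items()
            if f <= ratio * avg_representative_flow]
```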
  • step 212 of Fig. 2 will drop points 110, 114, and other points associated with car 108 and car 112.
  • On step 216, salient feature points associated with objects moving in shaking movements are dropped.
  • Referring now to FIG. 4, showing an exemplary method for identifying and dropping salient feature points associated with objects moving in shaking movements.
  • an amplitude is determined for a salient feature point, e.g., the maximal distance between two locations of the salient point within the frame sequence.
  • On step 404, the salient feature points not dropped on step 212 above are clustered into two groups, based on the amplitudes.
  • Outlier points to be removed are determined after clustering, based on the amplitude of feature points and comparing the number of points in two clusters.
  • outlier points are the points on foreground moving objects, e.g., cars or trees, as well as points on the background which are incorrectly tracked.
  • Each cluster is then associated with a center value, for example the average amplitude of the points associated with the cluster.
  • If at most a predetermined percentage of the points is associated with the cluster having the higher center value, the points associated with this cluster are dropped on step 408, since this cluster is assumed to be of lower confidence than the other cluster.
  • the predetermined percentage may be, in a non-limiting example, between about 15% and about 40% of the points associated with the two clusters, such as about 25%.
  • This situation may be, in some exemplary situations, associated with frames having shaking objects in the foreground, such as trees with shaking branches and leaves. Since these objects are closer to the camera than other objects, their shaking is usually more significant, therefore these points should be eliminated when stabilizing the frame, so that they do not influence the stabilized frame as a whole.
  • If the cluster having the higher center value of the two clusters is associated with at least the predetermined number or percentage of points, this cluster is assumed to be of higher confidence, and on step 412 the points of the cluster that has the lower center value are dropped.
  • This action provides for removing outlier points which may have passed the advancing point removal step 212 due to wrong feature matching or incorrect tracking. For example, if corner matching as performed for example on step 208 is incorrect, then some salient feature points may be considered to have moved less than other corners, and may thus be associated with low amplitude. If the percentage of the less moving points is lower than the threshold, then the low amplitude cluster may be considered as having low confidence, and may therefore be removed on step 412.
  • the proximity between the centers of the two clusters may be determined. If the clusters are close to each other, then in some situations a smaller number of points should have been removed, since all points are associated with similar amplitude. In order to compensate for the unneeded removal of step 408 or 412, points in the cluster that has been removed, which are associated with an amplitude close to the center value of the other cluster, may be added back to salient feature points, thus keeping points which are not outliers, such as background points. For example, points whose distance from center value of the other cluster is smaller than the distance between the center values of the two clusters may be un-dropped and reconsidered in further computations.
  • the cluster proximity may be determined, for example, by determining whether the ratio between the center values of the two clusters complies with a criterion, for example if the ratio between the lower center value and the higher center value is above a threshold, for example above 0.8.
  • this process can be generalized to more than two clusters, based on the amplitudes. In such case, points associated with one or more low-center clusters may be kept, provided they have together at least a predetermined percentage of the total number of points, while points associated with one or more other clusters may be dropped.
  • a distance metric may be employed to merge clusters having close center values.
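A compact sketch of the two-cluster variant of Fig. 4 follows, using a plain one-dimensional two-means split; the 25% fraction and the 0.8 proximity ratio are the example values given above, not required ones.

```python
# Hedged sketch of the shaking-object test (Fig. 4): split the point
# amplitudes into a low and a high cluster, drop the suspect cluster,
# then reinstate dropped points when the two centers are close.
import numpy as np

def keep_non_shaking(amplitudes, max_fraction=0.25, proximity_ratio=0.8):
    a = np.asarray(amplitudes, dtype=float)
    lo_c, hi_c = float(a.min()), float(a.max())
    high = a > (lo_c + hi_c) / 2.0
    for _ in range(20):  # plain 1-D two-means
        if high.all() or not high.any():
            break
        lo_c, hi_c = a[~high].mean(), a[high].mean()
        high = (a - lo_c) > (hi_c - a)  # nearer to the high center
    if high.mean() <= max_fraction:
        keep, kept_center = ~high, lo_c  # drop the high-amplitude cluster
    else:
        keep, kept_center = high, hi_c   # drop the low-confidence cluster
    if hi_c > 0 and lo_c / hi_c > proximity_ratio:
        # Centers are close: un-drop points whose distance to the kept
        # center is below the distance between the two centers.
        keep = keep | (np.abs(a - kept_center) < (hi_c - lo_c))
    return keep  # boolean mask over the input amplitudes
```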
  • Step 216 of Fig. 2 will thus drop points 118, 122 and 126 and other points associated with tree 116 and tree 120 of Fig. 1, if indeed the trees or branches move in shaking movements.
  • steps 212 and 216 may be performed concurrently or in any order. If performed one after the other, then the later step may operate only on the points not dropped by the first step, so as to save unnecessary computations.
  • On step 218, the frame may be divided into sub-areas of equal size, for example 64×64 pixels.
  • the transformation between consecutive frames may then be determined by combining all non-dropped salient feature points within each sub-area into a representative point, for example by averaging the salient feature points within the sub-area, and determining the transformation based on the difference between the locations of the representative points in corresponding sub-areas of the frames.
  • Using representative points provides for assigning the same effect or weight to all sub-areas of the frame.
  • the initial determination of the points, including corner points and additional points, may ensure that each sub-area of the frame contains at least one point. Alternatively, if a sub-area does not contain any points, it may be considered irrelevant for stabilization and thus no harm may be caused if it is not considered.
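A sketch of the representative-point computation of step 218 is given below, assuming 64×64-pixel sub-areas and simple averaging of the matched point pairs, grouped by the cell of their previous location.

```python
# Hedged sketch of step 218: collapse the non-dropped matched point
# pairs of each sub-area into one representative pair, so that every
# covered sub-area carries equal weight in the transformation estimate.
import numpy as np

def representative_points(prev_pts, curr_pts, frame_shape, cell=64):
    h, w = frame_shape[:2]
    reps_prev, reps_curr = [], []
    for y0 in range(0, h, cell):
        for x0 in range(0, w, cell):
            m = ((prev_pts[:, 0] >= x0) & (prev_pts[:, 0] < x0 + cell) &
                 (prev_pts[:, 1] >= y0) & (prev_pts[:, 1] < y0 + cell))
            if m.any():  # empty sub-areas are simply skipped
                reps_prev.append(prev_pts[m].mean(axis=0))
                reps_curr.append(curr_pts[m].mean(axis=0))
    return np.asarray(reps_prev), np.asarray(reps_curr)
```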
  • On step 220, a transformation between pairs of consecutive frames may be determined based on the non-dropped points, wherein the transformation may be expressed as a transformation matrix.
  • the transformation between consecutive frames may be determined based on the representative points determined for the frame disclosed above in association with step 218.
  • the transformation between the current frame and the previous one may be determined every time a new current frame is received.
  • the transformation matrix may be determined as the optimal affine transformation between two sets of points.
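The disclosure does not name a solver for this optimal affine transformation; OpenCV's estimateAffinePartial2D is one assumed, robust choice:

```python
# Hedged sketch of step 220: fit the frame-to-frame transformation from
# the (representative) point correspondences and lift the 2x3 result to
# a 3x3 matrix so that transformations compose by multiplication.
import cv2
import numpy as np

def frame_to_frame_transform(prev_pts, curr_pts):
    M, _inliers = cv2.estimateAffinePartial2D(
        prev_pts.astype(np.float32), curr_pts.astype(np.float32))
    if M is None:  # estimation can fail on degenerate input
        return np.eye(3)
    return np.vstack([M, [0.0, 0.0, 1.0]])
```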
  • On step 224, a center position within the sequence is determined based on the frame-to-frame transformations determined for each pair of frames on step 220.
  • On step 228, a stabilizing transformation is determined from the current frame to the center position determined on step 224, and on step 232 the stabilizing transformation may be applied to the current frame to obtain a stabilized frame.
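The computation of the center position is left open by the disclosure; the sketch below takes one plausible reading, averaging the cumulative frame-to-frame transforms and warping the current frame toward that average, and should be read as an assumption rather than the patented method.

```python
# Hedged sketch of steps 224-232 under an assumed reading: the "center
# position" is the element-wise mean of the cumulative transforms, and
# the stabilizing transformation moves the current frame to it.
import cv2
import numpy as np

def stabilize_current(frames, transforms):
    """frames: list of images; transforms[i]: 3x3 transform mapping
    frame i coordinates to frame i+1 coordinates (see
    frame_to_frame_transform above)."""
    cumulative = [np.eye(3)]
    for T in transforms:
        cumulative.append(T @ cumulative[-1])
    center = np.mean(cumulative, axis=0)      # assumed center position
    # Undo the current frame's pose, then move to the center pose.
    S = center @ np.linalg.inv(cumulative[-1])
    h, w = frames[-1].shape[:2]
    return cv2.warpAffine(frames[-1], S[:2], (w, h))
```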
  • the stabilized frames may then be displayed to a user.
  • other measures such as the amplitude or flow per pixel or per salient feature point may be used to quantify the camera shaking in terms of shaking extent and frequency, and may also be displayed to a user or used for triggering one or more actions.
  • Steps 212 and 216 may eliminate many of the points associated with advancing objects and shaking objects, and leave mainly the points associated with fixed objects, upon which the camera movements may be determined and stabilized. Stabilization is achieved by applying a transformation to the frames, bringing the frames closer to "an average" of a sequence of frames and thus eliminating sharp changes.
  • each frame when received may be the current frame and may be stabilized. However, in some embodiments, stabilization may be performed for selected frames only. When the captured frames are not displayed but are stored, then stabilization may be performed only upon need, and prior to displaying.
  • whether a frame sequence is stabilized or not may be determined by a viewer, and upon changing needs. For example, for ongoing human traffic monitoring, stabilizing may be performed in order to reduce dizziness of a viewer. However, when investigating sequences containing critical frames, some frames may remain unstabilized so as not to lose any information.
  • Referring now to FIG. 5, showing a block diagram of a system for stabilizing a sequence of frames.
  • the system may be implemented as a computing platform 500, such as a server, a desktop computer, a laptop computer, a processor embedded within a video capture device, or the like.
  • computing platform 500 may comprise a storage device 504.
  • Storage device 504 may comprise one or more of the following: a hard disk drive, a Flash disk, a Random Access Memory (RAM), a memory chip, or the like.
  • storage device 504 may retain program code operative to cause processor 512 detailed below to perform acts associated with any of the components executed by computing platform 500.
  • computing platform 500 may comprise an Input/Output (I/O) device 508 such as a display, a pointing device, a keyboard, a touch screen, or the like.
  • I/O device 508 may be utilized to provide output to or receive input from a user.
  • Computing platform 500 may comprise a processor 512.
  • Processor 512 may comprise one or more processing units, such as but not limited to a Central Processing Unit (CPU), a microprocessor, an electronic circuit, an Integrated Circuit (IC), a Central Processor (CP), or the like.
  • processor 512 may be a graphics processing unit.
  • processor 512 may be a processing unit embedded on a video capture device.
  • Processor 512 may be utilized to perform computations required by the system or any of its subcomponents.
  • Processor 512 may comprise one or more processing units in direct or indirect communication.
  • Processor 512 may be configured to execute several functional modules in accordance with computer-readable instructions implemented on a non-transitory computer usable medium. Such functional modules are referred to hereinafter as comprised in the processor.
  • the modules, also referred to as components, as detailed below, may be implemented as one or more sets of interrelated computer instructions, loaded to and executed by, for example, processor 512 or by another processor.
  • the components may be arranged as one or more executable files, dynamic libraries, static libraries, methods, functions, services, or the like, programmed in any programming language and under any computing environment.
  • Processor 512 may comprise communication with image source component 516 for communicating with an image source, such as a storage device storing images, a capture device, or the like.
  • the frames may be stored on storage device 504.
  • Processor 512 may comprise user interface 520 for receiving information from a user, such as thresholds or other parameters, and for showing results to a user, such as displaying a sequence of stabilized frames, or the like, using for example any of I/O devices 508.
  • Processor 512 may comprise data and control flow component 524 for controlling the activation of the various components, providing the required input and receiving the required output from each component.
  • Processor 512 may comprise salient feature point determination and matching component 528 for detecting salient feature points by one or more algorithms, adding points in addition to the detected salient feature points, and matching corresponding salient feature points appearing in two or more frames, as described in association with steps 204 and 208 of Fig. 2.
  • Processor 512 may comprise salient feature point dropping component 532 for dropping salient feature points associated with advancing objects, as described on step 212 of Fig. 2 and Fig. 3, or dropping salient feature points associated with objects moving in shaking movements, as described on step 216 of Fig. 2 and Fig. 4.
  • Processor 512 may comprise stabilization determination and application component 536 for determining and applying the stabilization transformation between frames based on the non-dropped salient feature points, as disclosed in association with steps 220, 224, 228 and 232 of Fig. 2.
  • the method and system may be used as a standalone system, or as a component for implementing a feature in a system such as a video camera, or in a device intended for specific purpose such as camera state monitoring, video anomaly detection, or the like.
  • the present invention may be a system, a method, and/or a computer program product.
  • the computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.
  • the computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device.
  • the computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • a non- exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing.
  • a computer readable storage medium is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
  • Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network.
  • the network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers.
  • a network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
  • Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages.
  • the computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.
  • These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
  • the computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the block may occur out of the order noted in the figures.
  • two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. It will also be noted that each block of the block diagrams and/or flowchart illustration may be performed by a multiplicity of interconnected components, or two or more blocks may be performed as a single block or step.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

A method, system, and computer program product for stabilizing frames, the method comprising: receiving a frame sequence comprising three or more frames, including a current frame; determining salient feature points within the frames; matching the salient feature points between the frames; dropping salient feature points associated with advancing objects; dropping salient feature points associated with objects moving in shaking movements; computing a transformation between pairs of consecutive frames from amongst the at least three frames, based upon non-dropped salient feature points, thereby obtaining a multiplicity of transformations; determining a center position for the frames based upon the multiplicity of transformations; determining a stabilizing transformation from a current frame to the center position; and applying the stabilizing transformation to the current frame to obtain a stabilized frame.

Description

A METHOD AND SYSTEM FOR STABILIZING VIDEO FRAMES
TECHNICAL FIELD
[0001] The present disclosure relates to stabilizing frames captured by video cameras in general, and to a method and system for stabilizing frames captured by fixed location video cameras, in particular.
BACKGROUND
[0002] Many locations are constantly or intermittently captured by video cameras. However, due to movements of the camera or the captured objects, the images are not clear enough, and stabilization may be required.
[0003] Abdullah, Tahir, and Samad in "Video stabilization based on point feature matching technique" published in Control and System Graduate Research Colloquium (ICSGRC), 2012 IEEE, pp. 303-307, 16-17 July 2012, disclose an algorithm to stabilize jittery videos directly without the need to estimate camera motion. A stable output video is attained without the jitter caused by shaking the handheld camera during video recording. Firstly, salient feature points from each frame of the input video are identified and processed, followed by optimizing and stabilizing the video. Optimization includes the quality of the video stabilization and less unfilled area after the process of stabilization.
[0004] Wei, Wei, and Batur in "Video stabilization and rolling shutter distortion reduction" published in IEEE International Conference on Image Processing (ICIP), 2010 17th, pp. 3501-3504, 26-29 Sept. 2010, present an algorithm that stabilizes video and reduces rolling shutter distortions using a six-parameter affine model that explicitly contains parameters for translation, rotation, scaling, and skew to describe transformations between frames. Rolling shutter distortions, including wobble, skew and vertical scaling distortions, together with both translational and rotational jitter are corrected by estimating the parameters of the model and performing compensating transformations based on those estimates. The results show the benefits of the proposed algorithm quantified by the Interframe Transformation Fidelity (ITF) metric.
[0005] US2011017601 discloses a method of processing a digital video sequence that includes estimating compensated motion parameters and compensated distortion parameters (compensated M/D parameters) of a compensated motion/distortion (M/D) affine transformation for a block of pixels in the digital video sequence, and applying the compensated M/D affine transformation to the block of pixels using the estimated compensated M/D parameters to generate an output block of pixels, wherein translational and rotational jitter in the block of pixels is stabilized in the output block of pixels and distortion due to skew, horizontal scaling, vertical scaling, and wobble in the block of pixels is reduced in the output block of pixels.
[0006] Battiato, Gallo, Puglisi and Scellato in "SIFT Features Tracking for Video Stabilization" published in the 14th International Conference on Image Analysis and Processing, 2007, pp. 825-830, 10-14 Sept. 2007, disclose a video stabilization algorithm based on the extraction and tracking of scale invariant feature transform features through video frames. Implementation of the SIFT operator is analyzed and adapted to be used in a feature-based motion estimation algorithm. SIFT features are extracted from video frames and then their trajectory is evaluated to estimate interframe motion. A modified version of the iterative least squares method is adopted to avoid estimation errors and features are tracked as they appear in nearby frames to improve video stability. Intentional camera motion is eventually filtered with adaptive motion vector integration. Results confirm the effectiveness of the method.
[0007] Ken-Yi, Yung-Yu, Bing-Yu and Ming Ouhyoung in "Video stabilization using robust feature trajectories" published in Computer Vision, 2009 IEEE 12th International Conference on, pp. 1397-1404, Sept. 29 2009-Oct. 2 2009, disclose a method to directly stabilize a video without explicitly estimating camera motion, thus assuming neither motion models nor dominant motion. The method first extracts robust feature trajectories from the input video. Optimization is then performed to find a set of transformations to smooth out these trajectories and stabilize the video. In addition, the optimization also considers quality of the stabilized video and selects a video with not only smooth camera motion but also less unfilled area after stabilization.
[0008] Yasuyuki, Eyal, Xiaoou, and Heung-Yeung in "Full-Frame Video Stabilization" published in the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), Volume 1, pp. 50-57, disclose that video stabilization is an important video enhancement technology which aims at removing annoying shaky motion from videos. Proposed is a practical and robust approach of video stabilization that produces full-frame stabilized videos with good visual quality. The completion method can produce full-frame videos by naturally filling in missing image parts by locally aligning image data of neighboring frames. To achieve this, motion inpainting is proposed to enforce spatial and temporal consistency of the completion in both static and dynamic image areas. In addition, image quality in the stabilized video is enhanced with a new practical deblurring algorithm. Instead of estimating point spread functions, the method transfers and interpolates sharper image pixels of neighbouring frames to increase the sharpness of the frame.
[0009] Veon, Mahoor, and Voyles in "Video stabilization using SIFT-ME features and fuzzy clustering" published in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2011, pp. 2377-2382, 25-30 Sept. 2011, propose a digital video stabilization process using information that the scale-invariant feature transform (SIFT) provides for each frame. The process uses a fuzzy clustering scheme to separate the SIFT features representing global motion from those representing local motion. The process then calculates the global orientation change and translation between the current frame and the previous frame. Each frame's translation and orientation is added to an accumulated total, and a Kalman filter is applied to estimate the desired motion.
[0010] "Image Stabilization improving camera usability", a white paper by Axis communications published on 2014 relates to a combination of gyroscopes and efficient algorithms for modeling camera motion.
[0011] US8054881 provides real-time image stabilization using computationally efficient corner detection and correspondence. The real-time image stabilization performs a scene learning process on a first frame of an input video to obtain reference features and a detection threshold value. The presence of jitter is determined in a current frame of the input video by comparing features of the current frame against the reference features using the detection threshold value. If the current frame is found to be unstable, corner points are obtained from the current frame. The obtained corner points are matched against reference corner points of the reference features. If the number of matched corner points is not less than a match point threshold value, the current frame is modeled using random sample consensus. The current frame is corrected to compensate for the jitter based on the results of the modeling.
[0012] US8385732 discloses image stabilization techniques used to reduce jitter associated with the motion of a camera. Image stabilization can compensate for pan and tilt (angular movement, equivalent to yaw and pitch) of a camera or other imaging device. Image stabilization can be used in still and video cameras, including those found in mobile devices such as cell phones and personal digital assistants (PDAs).
BRIEF SUMMARY
[0013] One aspect of the disclosed subject matter relates to a computer-implemented method for stabilizing a frame, comprising: receiving a frame sequence comprising three or more frames, including a current frame; determining salient feature points within the frames; matching the salient feature points between the frames; dropping salient feature points associated with advancing objects; dropping salient feature points associated with objects moving in shaking movements; computing a transformation between pairs of consecutive frames from amongst the frames, based upon non-dropped salient feature points, thereby obtaining a multiplicity of transformations; determining a center position for the frames based upon the multiplicity of transformations; determining a stabilizing transformation from a current frame to the center position; and applying the stabilizing transformation to the current frame to obtain a stabilized frame. In some exemplary embodiments of the disclosed subject matter, the method may further comprise converting one or more of the frames into a black and white frame. In some exemplary embodiments of the disclosed subject matter, the method may further comprise reducing resolution of one or more of the frames. In some exemplary embodiments of the disclosed subject matter, the method may further comprise adding one or more points to the salient feature points. In some exemplary embodiments of the disclosed subject matter, within the method, dropping the salient feature points associated with objects moving in shaking movements or the salient feature points not associated with advancing objects is optionally performed only for salient feature points appearing in at least a minimal number of frames within the frames. In some exemplary embodiments of the disclosed subject matter, within the method, dropping the salient feature points associated with advancing objects is optionally performed by: determining total flow for a salient feature point over the at least three frames; determining representative flow for a multiplicity of frames of the frames, and an average representative flow by averaging the flow determined for the multiplicity of frames; and dropping salient feature points for which the total flow meets a criterion related to the average representative flow. In some exemplary embodiments of the disclosed subject matter, the method may further comprise providing the total flow for one or more salient feature points. In some exemplary embodiments of the disclosed subject matter, within the method, dropping the salient feature points associated with objects moving in shaking movements is optionally performed only for salient feature points not associated with advancing objects. In some exemplary embodiments of the disclosed subject matter, within the method, dropping the salient feature points associated with objects moving in shaking movements is optionally performed by: determining an amplitude for each salient feature point over the frames; clustering the salient feature points into a first cluster and a second cluster based upon the amplitude, wherein the first cluster has a higher center value than the second cluster; and subject to at most a predetermined percentage of the salient feature points being clustered to the first cluster, dropping salient feature points associated with the first cluster, otherwise dropping salient feature points associated with the second cluster.
In some exemplary embodiments of the disclosed subject matter, within the method, dropping the salient feature points associated with advancing objects is optionally performed only for salient feature points not associated with objects moving in shaking movements. In some exemplary embodiments the method may further comprise providing the amplitude for one or more salient feature points. In some exemplary embodiments of the disclosed subject matter, within the method, the predetermined percentage is optionally between about 15% and about 40%. In some exemplary embodiments the method may further comprise determining proximity between the first cluster and the second cluster, and re-considering a salient feature point associated with a dropped cluster, if close to the center value of a non-dropped cluster. In some exemplary embodiments of the disclosed subject matter, within the method, each frame is optionally stabilized when it is the current frame. In some exemplary embodiments of the disclosed subject matter, within the method, a frame is stabilized only if it is displayed. In some exemplary embodiments of the disclosed subject matter, within the method, computing the transformation between pairs of consecutive frames is optionally based on considering a representative point for each sub-area of the current frame, the representative point determined upon non-dropped salient feature points in the sub-area. In some exemplary embodiments, the method may further comprise dropping incorrectly tracked background points.
[0014] Another aspect of the disclosed subject matter relates to a computerized system for stabilizing a frame, the system comprising a processor configured to: receive a frame sequence comprising three or more frames, including a current frame; determine salient feature points within the frames; match the salient feature points between the frames; drop salient feature points associated with advancing objects; drop salient feature points associated with objects moving in shaking movements; compute a transformation between pairs of consecutive frames from amongst the frames, based upon a multiplicity of non-dropped salient feature points, thereby obtaining a multiplicity of transformations; determine a center position for the frames based upon the multiplicity of transformations; determine a stabilizing transformation from a current frame to the center position; and apply the stabilizing transformation to the current frame to obtain a stabilized frame. In some exemplary embodiments of the disclosed subject matter, within the system, dropping the salient feature points associated with advancing objects is optionally performed by: determining total flow for a salient feature point over the frames; determining representative flow for a multiplicity of frames of the frames, and an average representative flow by averaging the flow determined for the multiplicity of frames; and dropping salient feature points for which the total flow meets a criterion related to the average representative flow. In some exemplary embodiments of the disclosed subject matter, within the system, dropping the salient feature points associated with objects moving in shaking movements is optionally performed by: determining an amplitude for each salient feature point over the frames; clustering the salient feature points into a first cluster and a second cluster based upon the amplitude, wherein the first cluster has a higher center value than the second cluster; and subject to at most a predetermined percentage of the salient feature points being clustered to the first cluster, dropping salient feature points associated with the first cluster, otherwise dropping salient feature points associated with the second cluster.
[0015] Yet another aspect of the disclosed subject matter relates to a computer program product comprising a computer readable storage medium retaining program instructions, which program instructions, when read by a processor, cause the processor to perform a method comprising: receiving a frame sequence comprising three or more frames, including a current frame; determining salient feature points within the frames; matching the salient feature points between the frames; dropping salient feature points associated with advancing objects; dropping salient feature points associated with objects moving in shaking movements; computing a transformation between pairs of consecutive frames from amongst the frames, based upon a multiplicity of non-dropped salient feature points, thereby obtaining a multiplicity of transformations; determining a center position for the frames based upon the multiplicity of transformations; determining a stabilizing transformation from a current frame to the center position; and applying the stabilizing transformation to the current frame to obtain a stabilized frame.
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
[0016] The present disclosed subject matter will be understood and appreciated more fully from the following detailed description taken in conjunction with the drawings in which corresponding or like numerals or characters indicate corresponding or like components. Unless indicated otherwise, the drawings provide exemplary embodiments or aspects of the disclosure and do not limit the scope of the disclosure. In the drawings:
[0017] Fig. 1 shows an exemplary illustration of an environment in which the disclosed subject matter may be used;
[0018] Fig. 2 shows a flowchart of steps in a method for stabilizing a current frame, in accordance with some exemplary embodiments of the disclosed subject matter;
[0019] Fig. 3 shows a flowchart of steps in a method for dropping salient feature points associated with advancing objects, in accordance with some exemplary embodiments of the disclosed subject matter;
[0020] Fig. 4 shows a flowchart of steps in a method for dropping salient feature points associated with objects moving in shaking movements, in accordance with some exemplary embodiments of the disclosed subject matter; and
[0021] Fig. 5 shows a block diagram of a system for stabilizing a sequence of frames, in accordance with some exemplary embodiments of the disclosed subject matter.
DETAILED DESCRIPTION
[0022] One technical problem relates to video stabilization for a camera in a fixed location. Cameras deployed outdoors, or in very large spaces, often experience shaking, due to wind, vibrations caused by passing vehicles, or the like. Shaking may lead to dizziness of operators watching the video, as well as malfunctioning or reduced effectiveness of video analytics procedures applied on the video.
[0023] When attempting to stabilize frames captured by a shaking video camera, further complexity is introduced by objects moving independently of the camera, including advancing objects such as vehicles, people or animals, as well as objects moving in shaking movements, such as leaves, flags, or other objects which may shake even with the slightest wind or vibration. Such moving objects may interfere with the operation of stabilizing algorithms and may result in strong and undesired artifacts, such as shaking which is even stronger than in the original frames.
[0024] One technical solution relates to a system and method for stabilizing video frames captured by a fixed camera, and in particular video frames comprising moving or shaking objects. It will be appreciated that the solution is also applicable to a temporarily fixed camera, such as a camera mounted on a non-moving vehicle. The solution is also applicable to any pan-tilt-zoom (PTZ) camera located at a temporary or permanent location.
[0025] The system and method receive a sequence of images, and start by identifying or selecting a group of salient feature points (also referred to as "salient points", "feature points" or "points") within the image sequence.
[0026] The system and method then identify points associated with advancing objects and drop them from the group of salient feature points.
[0027] The system and method then continue to identify points associated with shaking objects and drop them, too, from the group of salient feature points.
[0028] A transformation is then determined between frames, based on the changes in location of the non-dropped points, and a center position is determined for the sequence of frames. Based on these changes, a transformation may then be determined for each frame, and in particular for a current frame, relative to the center position. The transformation may then be applied to the frames to obtain stabilized frames.
[0029] One technical effect of the disclosed subject matter relates to receiving a sequence of frames and stabilizing each frame in the sequence, or at least selected frames. Thus, when it is required to watch the sequence, it is more convenient and less tiring for the eyes to watch stabilized frames. Alternatively, the frames may be stabilized only when it is required to display them. Thus, if an amount of footage is captured, only the frames that are actually watched are stabilized, rather than all the footage, thus reducing the required processing. The frames may be stabilized in real time, right after being captured, or offline before being stored.
[0030] Referring now to Fig. 1, showing an exemplary frame that may have to be stabilized.
[0031] The frame, generally referenced 100, is taken by a video camera (not shown). The frame shows a road 104, and a first car 108 and a second car 112 going along the road. The frame also comprises a first tree 116 and a second tree 120 having branches and leaves that shake in captured frames.
[0032] Referring now to Fig. 2, showing a flowchart of steps in a method for stabilizing a sequence of video frames.
[0033] On step 200, a frame sequence, comprising at least three digital frames including a current frame, may be received. It will be appreciated that the frames may be received directly from a capture device such as a camera or video camera, from a storage device, from a scanner, or the like. It will also be appreciated that the method may be applied in an ongoing manner, such that for a current frame, the frame itself and preceding frames may be treated as a sequence. When a new frame is later received, the current frame may be treated as part of the frames preceding the newly received one. Some of the calculations detailed below are per frame or per pair of frames, and may therefore be performed just once and their results stored, such that the calculations relating to that frame or frame pair need not be repeated when processing further frames.
[0034] It will be appreciated that there is no requirement to store the full images of the sequence throughout the computations as detailed below. Rather, it is possible to store only some of the computation results from previous frames, such as the salient feature points, and apply the stabilization to the current frame.
[0035] On step 204, salient feature points are obtained for the frame. The term salient feature point refers to an outstanding or noticeable point in the frame, such as a corner of an object, an edge, or the like. The points may be identified, in a non-limiting example, by applying the Shi-Tomasi corner detector algorithm, or by any other corner detection image processing algorithm. In some embodiments, for example when the method is performed offline, the salient feature points may be determined earlier or by another entity and received on step 204, while in online mode the salient feature points may be determined as part of step 204.
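By way of non-limiting illustration, step 204 could be sketched with the Shi-Tomasi detector as exposed by OpenCV; the parameter values below are illustrative assumptions, not values prescribed by the disclosure:

```python
import cv2
import numpy as np

def detect_salient_points(frame, max_corners=500, quality=0.01, min_dist=8):
    """Detect salient feature points (corners) in a single frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Shi-Tomasi corner detection; any other corner detector could be used.
    corners = cv2.goodFeaturesToTrack(
        gray, maxCorners=max_corners, qualityLevel=quality, minDistance=min_dist)
    if corners is None:
        return np.empty((0, 2), dtype=np.float32)
    return corners.reshape(-1, 2)  # (N, 2) array of (x, y) locations
```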
[0036] In the exemplary frame of Fig. 1, the salient feature points may include the hollow points, such as point 110 and the other points on corners of car 108, point 114 and the other points on corners of car 112, point 118 and other points on the tips of leaves of tree 116, and point 122 and other points on the tips of leaves of tree 120.
[0037] In addition to the salient feature points detected by any algorithm, further points may be determined, for example if the number of detected points is below a predetermined threshold. The predetermined threshold may relate to an absolute number of points per frame, or to a number of points relative to the number of pixels in the frame. In this case, points may be added in areas of the frame in which relatively few salient feature points have been detected, for example points that are far from the detected points by at least a predetermined distance relative to the frame size. Thus, the additional points are added for providing better coverage of the frame. In the example of Fig. 1, the added points include point 124 and any one or more of the other black points. It will be appreciated that some added points may be at or near features not identified in the initial phase of salient feature point detection, such as point 126. One possible scheme for adding such coverage points is sketched below.
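Continuing the sketch above, one possible coverage scheme is to overlay a grid and place a point at the center of every cell that received no detected point; the 64-pixel cell size is an assumed value, not one fixed by the disclosure:

```python
def add_coverage_points(points, frame_shape, cell=64):
    """Add a point at the center of every grid cell that contains no detected
    point, so that all areas of the frame are represented."""
    h, w = frame_shape[:2]
    added = []
    for y in range(0, h, cell):
        for x in range(0, w, cell):
            in_cell = ((points[:, 0] >= x) & (points[:, 0] < x + cell) &
                       (points[:, 1] >= y) & (points[:, 1] < y + cell))
            if not in_cell.any():
                added.append((x + cell / 2.0, y + cell / 2.0))
    if not added:
        return points
    return np.vstack([points, np.float32(added)])
```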
[0038] On step 208, the salient feature points may be matched between the frames of the sequence; a matching sketch is given below. For example, point 110 will be matched with the point representing the front left corner of car 108 in further frames.
[0039] On step 212, salient feature points associated with advancing objects are dropped.
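As referenced above, the matching of step 208 could be realized, for example, with pyramidal Lucas-Kanade optical flow; this is an illustrative choice, as the disclosure does not mandate a particular matching or tracking algorithm:

```python
def match_points(prev_gray, gray, prev_pts):
    """Track salient feature points from the previous frame into the current one."""
    pts = prev_pts.astype(np.float32).reshape(-1, 1, 2)
    next_pts, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    ok = status.reshape(-1) == 1        # keep only successfully tracked points
    return prev_pts[ok], next_pts.reshape(-1, 2)[ok]
```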
[0040] Referring now to Fig. 3, showing an exemplary method for identifying and dropping salient feature points associated with advancing objects.
[0041] On step 300 the following values may be determined:
[0042] Presence count for a salient feature point: the number or percentage of frames in the frame sequence in which the salient feature point was detected and matched with corresponding points in other frames within the sequence; and
[0043] Total flow for a salient feature point: the magnitude of the sum of all displacement vectors of the feature point over time, e.g., the distance between the location of the point in the first frame in which the point appears and its location in the last frame in which the point appears.
[0044] The above values may be determined for all salient feature points determined on step 204 of Fig. 2, including the additional salient points. However, it may also be possible to determine the values only for a subset of the points. It will be appreciated that in some situations, the more salient feature points the values are determined for, the more accurate are the obtained results.
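For concreteness, the two values might be computed as follows, assuming each point's trajectory is stored as a list with one entry per frame, holding the (x, y) location or None where the point was not matched (the storage layout is an assumption of this sketch, not part of the disclosure):

```python
def presence_count(trajectory):
    """Number of frames in which the point was detected and matched."""
    return sum(loc is not None for loc in trajectory)

def total_flow(trajectory):
    """Magnitude of the summed displacement vectors: the distance between the
    first and last locations at which the point appears."""
    locs = [loc for loc in trajectory if loc is not None]
    if len(locs) < 2:
        return 0.0
    return float(np.linalg.norm(np.subtract(locs[-1], locs[0])))
```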
[0045] On step 304, a representative flow value may be determined for the current frame which is to be stabilized. The representative flow value may be a mean, a median, or the like. For example, a representative flow being a mean flow value may be obtained by calculating the average magnitude of the displacement vectors of all salient feature points matched from a previous frame to the current frame.
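For instance, a mean-based representative flow could be computed from the matched point arrays of the earlier matching sketch:

```python
def representative_flow(prev_pts, cur_pts):
    """Mean magnitude of the displacement vectors of all points matched
    from the previous frame into the current one."""
    return float(np.linalg.norm(cur_pts - prev_pts, axis=1).mean())
```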
[0046] On step 306, the average representative flow for a sequence of frames may be obtained, for example by averaging the representative flow values for a predetermined number of frames, such as the last 20 frames within the sequence.
[0047] On step 308, salient feature points are dropped, for which the total flow meets one or more criteria related to the average representative flow over a number of frames. Dropping points may relate to excluding the points from further computations.
[0048] The average representative flow over a number of frames provides an estimate of the average movement of the scene between consecutive frames, over a sequence of a predetermined number of frames. Points whose total flow, as determined above, meets the criteria are assumed to be associated with advancing objects such as vehicles, fast-advancing humans or animals, or the like, and are dropped since they may interfere with estimating the movement of the camera which is to be stabilized.
[0049] The criteria may be, as a non-limiting example, that the total flow of the point is significantly greater than the average representative flow over the predetermined number of frames, for example more than three times that flow.
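Combining steps 300-308, a minimal sketch of the advancing-object filter follows; the 20-frame window and the factor of three are the illustrative values mentioned above, and the trajectory layout is that of the earlier sketch:

```python
def drop_advancing_points(trajectories, per_frame_flows, window=20, factor=3.0):
    """Drop points whose total flow greatly exceeds the average representative flow.

    trajectories: dict mapping a point id to its per-frame trajectory.
    per_frame_flows: one representative flow magnitude per processed frame.
    """
    avg_repr_flow = float(np.mean(per_frame_flows[-window:]))
    return {pid: traj for pid, traj in trajectories.items()
            if total_flow(traj) <= factor * avg_repr_flow}
```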
[0050] In the example of Fig. 1, step 212 of Fig. 2 will drop points 110, 114, and other points associated with car 108 and car 112.
[0051] Referring now back to Fig. 2, on step 216, salient feature points associated with objects moving in shaking movements are dropped.
[0052] Referring now to Fig. 4, showing an exemplary method for identifying and dropping salient feature points associated with objects moving in shaking movements.
[0053] On step 400, an amplitude is determined for a salient feature point: e.g., the maximal distance between two locations of the salient point within the frame sequence.
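Under the same trajectory layout as in the earlier sketches, the amplitude of step 400 might be computed as the maximal pairwise distance:

```python
def amplitude(trajectory):
    """Maximal distance between any two locations of the point in the sequence."""
    locs = np.asarray([loc for loc in trajectory if loc is not None], dtype=float)
    if len(locs) < 2:
        return 0.0
    # Pairwise distances via broadcasting; O(n^2), but trajectories are short.
    diffs = locs[:, None, :] - locs[None, :, :]
    return float(np.sqrt((diffs ** 2).sum(axis=-1)).max())
```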
[0054] On step 404, the salient feature points not dropped on step 212 above are clustered into two groups, based on the amplitudes.
[0055] Outlier points to be removed are determined after clustering, based on the amplitudes of the feature points and on comparing the numbers of points in the two clusters. For the video stabilization case, outlier points are points on foreground moving objects, such as cars or trees, as well as points on the background which are incorrectly tracked.
[0056] Each cluster is then associated with a center value, for example the average amplitude of the points associated with the cluster.
[0057] Then, if the cluster having the higher center value of the two clusters is associated with no more than a predetermined number or percentage of the salient feature points, on step 408 the points associated with this cluster are dropped, since this cluster is assumed to be of lower confidence than the other cluster. The predetermined percentage may be, in a non-limiting example, between about 15% and about 40% of the points associated with the two clusters, such as about 25%. This situation may be, in some exemplary situations, associated with frames having shaking objects in the foreground, such as trees with shaking branches and leaves. Since these objects are closer to the camera than other objects, their shaking is usually more significant; therefore these points should be eliminated when stabilizing the frame, so that they do not influence the stabilized frame as a whole.
[0058] If, however, the cluster having the higher center value of the two clusters is associated with at least the predetermined number or percentage of points, this cluster is assumed to be of higher confidence, and on step 412 the points of the cluster that has the lower center value are dropped. This action provides for removing outlier points which may have passed advancing point removal step 212 due to wrong feature matching or incorrect tracking. For example, if corner matching as performed on step 208 is incorrect, then some salient feature points may be considered to have moved less than other corners, and may thus be associated with a low amplitude. If the percentage of the less-moving points is lower than the threshold, then the low-amplitude cluster may be considered as having low confidence, and may therefore be removed on step 412.
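A minimal sketch of steps 404-412, clustering the amplitudes with k-means (k = 2) and applying the drop rule; k-means is one possible clustering choice, and the 25% threshold is the illustrative value mentioned above:

```python
def drop_shaking_points(trajectories, max_high_fraction=0.25):
    """Cluster points by amplitude and drop the lower-confidence cluster."""
    pids = list(trajectories)
    amps = np.float32([[amplitude(trajectories[p])] for p in pids])
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 0.5)
    _, labels, centers = cv2.kmeans(amps, 2, None, criteria, 5,
                                    cv2.KMEANS_RANDOM_CENTERS)
    labels = labels.reshape(-1)
    high = int(np.argmax(centers))        # cluster with the higher center value
    if np.mean(labels == high) <= max_high_fraction:
        dropped = high      # few high-amplitude points: assumed shaking objects
    else:
        dropped = 1 - high  # otherwise the low-amplitude cluster holds outliers
    return {p: trajectories[p] for p, l in zip(pids, labels) if l != dropped}
```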
[0059] On step 416, the proximity between the centers of the two clusters may be determined. If the clusters are close to each other, then in some situations a smaller number of points should have been removed, since all points are associated with similar amplitudes. In order to compensate for the unneeded removal of step 408 or 412, points in the cluster that has been removed which are associated with an amplitude close to the center value of the other cluster may be added back to the salient feature points, thus keeping points which are not outliers, such as background points. For example, points whose distance from the center value of the other cluster is smaller than the distance between the center values of the two clusters may be un-dropped and reconsidered in further computations.
[0060] The cluster proximity may be determined, for example, by determining whether the ratio between the center values of the two clusters complies with a criterion, for example whether the ratio between the lower center value and the higher center value is above a threshold, such as 0.8.
[0061] It will be appreciated that this process can be generalized to more than two clusters, based on the amplitudes. In such a case, points associated with one or more low-center clusters may be kept, provided that together they comprise at least a predetermined percentage of the total number of points, while points associated with one or more other clusters may be dropped. A distance metric may be employed to merge clusters having close center values.
[0062] Step 216 of Fig. 2 will thus drop points 118, 122 and 126 and other points associated with tree 116 and tree 120 of Fig. 1, if indeed the trees or branches move in shaking movements.
[0063] It will be appreciated that the evaluations of points described above, in association with steps 212 and 216 implemented as Fig. 3 and Fig. 4, may be performed only for salient points having a minimal presence count, i.e., appearing and tracked in at least a predetermined number or percentage of the frames in the sequence.
[0064] It will be appreciated that steps 212 and 216 may be performed concurrently or in any order. If performed one after the other, then the later step may operate only on the points not dropped by the first step, so as to avoid unnecessary computations.
[0065] In order to change the influence of individual correspondences, steps taking into account the spatial distribution of the points may be applied. In one embodiment, on step 218, the frame may be divided into sub-areas of equal size, for example 64x64 pixels. The transformation between consecutive frames may then be determined by combining all non-dropped salient feature points within each sub-area into a representative point, for example by averaging the salient feature points within the sub-area, and determining the transformation based on the difference between the locations of the representative points in corresponding sub-areas of the frames. Using representative points provides for assigning the same effect or weight to all sub-areas of the frame. The initial determination of the points, including corner points and additional points, may ensure that each sub-area of the frame contains at least one point. Alternatively, if a sub-area does not contain any points, it may be considered irrelevant for stabilization and thus no harm may be caused if it is not considered.
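Step 218 might be sketched as follows, averaging the matched point pairs within each sub-area; the 64-pixel sub-area size is the example value above:

```python
def representative_points(prev_pts, cur_pts, frame_shape, cell=64):
    """Average the matched point pairs within each sub-area into one pair,
    so that every sub-area carries the same weight."""
    h, w = frame_shape[:2]
    cells = (prev_pts // cell).astype(int)   # sub-area index of each point
    reps_prev, reps_cur = [], []
    for cy in range(h // cell + 1):
        for cx in range(w // cell + 1):
            mask = (cells[:, 0] == cx) & (cells[:, 1] == cy)
            if mask.any():
                reps_prev.append(prev_pts[mask].mean(axis=0))
                reps_cur.append(cur_pts[mask].mean(axis=0))
    return np.array(reps_prev), np.array(reps_cur)
```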
[0066] Referring now back to Fig. 2, on step 220, a transformation between pairs of consecutive frames may be determined based on the non-dropped points, wherein the transformation may be expressed as a transformation matrix. In some embodiments, the transformation between consecutive frames may be determined based on the representative points determined for the frame, as disclosed above in association with step 218. The transformation between the current frame and the previous one may be determined every time a new current frame is received. The transformation matrix may be determined as the optimal affine transformation between the two sets of points.
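The per-pair affine transformation of step 220 could be estimated, for example, with OpenCV's robust partial-affine solver; this is one possible estimator, as the disclosure does not prescribe a specific one:

```python
def frame_to_frame_transform(reps_prev, reps_cur):
    """Estimate a 2x3 affine matrix mapping the previous frame onto the current."""
    M, _inliers = cv2.estimateAffinePartial2D(
        reps_prev.astype(np.float32), reps_cur.astype(np.float32))
    return M  # may be None if the estimation fails
```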
[0067] On step 224, a center position within the sequence is determined based on the frame-to-frame transformations determined for each pair of frames on step 220.
[0068] On step 228, a stabilizing transformation is determined from the current frame to the center position determined on step 224, and on step 232 the stabilizing transformation may be applied to the current frame to obtain a stabilized frame.
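Steps 224-232 might be sketched by accumulating the frame-to-frame matrices as homogeneous 3x3 transforms, taking their mean as the center position, and warping the current frame toward it; element-wise averaging of pose matrices is a simplification assumed for this sketch, not the only way to define the center position:

```python
def stabilize_current(frame, frame_transforms):
    """Warp the current frame toward the average (center) position of the window.

    frame_transforms: list of 2x3 frame-to-frame affine matrices, oldest first.
    """
    h, w = frame.shape[:2]
    pose = np.eye(3)
    poses = [pose]
    for M in frame_transforms:                 # accumulate frame-to-frame motion
        pose = np.vstack([M, [0.0, 0.0, 1.0]]) @ pose
        poses.append(pose)
    center = np.mean(poses, axis=0)            # center position of the sequence
    stabilizing = center @ np.linalg.inv(poses[-1])  # current frame -> center
    return cv2.warpAffine(frame, stabilizing[:2], (w, h))
```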
[0069] The stabilized frames may then be displayed to a user. In addition, other measures such as the amplitude or flow per pixel or per salient feature point may be used to quantify the camera shaking in terms of shaking extent and frequency, and may also be displayed to a user or used for triggering one or more actions.
[0070] Steps 212 and 216 may eliminate many of the points associated with advancing objects and shaking objects, and leave mainly the points associated with fixed objects, upon which the camera movements may be determined and compensated. Stabilization is achieved by applying a transformation to the frames, bringing the frames closer to "an average" of a sequence of frames, thereby eliminating sharp changes.
[0071] It will be appreciated that the method is repeated for each current frame that is to be stabilized.
[0072] When displaying the video as captured, each frame when received may be the current frame and may be stabilized. However, in some embodiments, stabilization may be performed for selected frames only. When the captured frames are not displayed but are stored, then stabilization may be performed only upon need, and prior to displaying.
[0073] In further embodiments, whether a frame sequence is stabilized or not may be determined by a viewer, and upon changing needs. For example, for ongoing human traffic monitoring, stabilizing may be performed in order to reduce dizziness of a viewer. However, when investigating sequences containing critical frames, some frames may remain unstabilized so as not to lose any information.
[0074] It will be appreciated that if a change in resolution occurs, such that after receiving one or more frames with a particular resolution, another one or more frames within the sequence are received with different resolution, then all salient feature points and other calculations performed may be ignored, and the calculations may be restarted to avoid inaccuracies due to the different resolutions.
[0075] Referring now to Fig. 5, showing a block diagram of a system for stabilizing a sequence of frames.
[0076] The system may be implemented as a computing platform 500, such as a server, a desktop computer, a laptop computer, a processor embedded within a video capture device, or the like.
[0077] In some exemplary embodiments, computing platform 500 may comprise a storage device 504. Storage device 504 may comprise one or more of the following: a hard disk drive, a Flash disk, a Random Access Memory (RAM), a memory chip, or the like. In some exemplary embodiments, storage device 504 may retain program code operative to cause processor 512 detailed below to perform acts associated with any of the components executed by computing platform 500.
[0078] In some exemplary embodiments of the disclosed subject matter, computing platform 500 may comprise an Input/Output (I/O) device 508 such as a display, a pointing device, a keyboard, a touch screen, or the like. I/O device 508 may be utilized to provide output to or receive input from a user.
[0079] Computing platform 500 may comprise a processor 512. Processor 512 may comprise any one or more processing units, such as but not limited to: a Central Processing Unit (CPU), a microprocessor, an electronic circuit, an Integrated Circuit (IC), a Central Processor (CP), or the like. In other embodiments, processor 512 may be a graphic processing unit. In further embodiments, processor 512 may be a processing unit embedded on a video capture device. Processor 512 may be utilized to perform computations required by the system or any of its subcomponents. Processor 512 may comprise one or more processing units in direct or indirect communication. Processor 512 may be configured to execute several functional modules in accordance with computer-readable instructions implemented on a non-transitory computer usable medium. Such functional modules are referred to hereinafter as comprised in the processor.
[0080] The modules, also referred to as components as detailed below, may be implemented as one or more sets of interrelated computer instructions, loaded to and executed by, for example, processor 512 or another processor. The components may be arranged as one or more executable files, dynamic libraries, static libraries, methods, functions, services, or the like, programmed in any programming language and under any computing environment.
[0081] Processor 512 may comprise communication with image source component 516 for communicating with an image source, such as a storage device storing images, a capture device, or the like. In some embodiments, the frames may be stored on storage device 504.
[0082] Processor 512 may comprise user interface 520 for receiving information from a user, such as thresholds or other parameters, and for showing results to a user, such as displaying a sequence of stabilized frames, or the like, using for example any of I/O devices 508.
[0083] Processor 512 may comprise data and control flow component 524 for controlling the activation of the various components, providing the required input and receiving the required output from each component.
[0084] Processor 512 may comprise salient feature point determination and matching component 528 for detecting salient feature points by one or more algorithms, adding points in addition to the salient feature points detected by the algorithms, and matching corresponding salient feature points appearing in two or more frames, as described in association with steps 204 and 208 of Fig. 2.
[0085] Processor 512 may comprise salient feature point dropping component 532 for dropping salient feature points associated with advancing objects as described on step 212 of Fig. 2 and Fig. 3, or dropping salient feature points associated with objects moving in shaking movements, as described on step 216 of Fig. 2 and Fig. 4.
[0086] Processor 512 may comprise stabilization determination and application component 536 for determining and applying the stabilization transformation between frames based on the non-dropped salient feature points, as disclosed in association with steps 220, 224, 228 and 232 of Fig. 2.
[0087] The method and system may be used as a standalone system, or as a component for implementing a feature in a system such as a video camera, or in a device intended for specific purpose such as camera state monitoring, video anomaly detection, or the like.
[0088] The present invention may be a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.
[0089] The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
[0090] Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
[0091] Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.
[0092] Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.
[0093] These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
[0094] The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
[0095] The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. It will also be noted that each block of the block diagrams and/or flowchart illustration may be performed by a multiplicity of interconnected components, or two or more blocks may be performed as a single block or step.
[0096] The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
[0097] The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present invention has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the invention. The embodiment was chosen and described in order to best explain the principles of the invention and the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.

Claims

What is claimed is:
1. A computer-implemented method for stabilizing a frame, comprising:
receiving a frame sequence comprising at least three frames, including a current frame;
determining salient feature points within the at least three frames; matching the salient feature points between the at least three frames;
dropping salient feature points associated with advancing objects; dropping salient feature points associated with objects moving in shaking movements;
computing a transformation between pairs of consecutive frames from amongst the at least three frames, based upon non-dropped salient feature points, thereby obtaining a multiplicity of transformations;
determining a center position for the at least three frames based upon the multiplicity of transformations;
determining a stabilizing transformation from a current frame to the center position; and
applying the stabilizing transformation to the current frame to obtain a stabilized frame.
2. The method of Claim 1, further comprising converting at least one of the at least three frames into a black and white frame.
3. The method of Claim 1, further comprising reducing resolution of at least one of the at least three frames.
4. The method of Claim 1, further comprising adding at least one point to the salient feature points.
5. The method of Claim 1, wherein dropping the salient feature points associated with objects moving in shaking movements or the salient feature points not associated with advancing objects is performed only for salient feature points appearing in at least a minimal number of frames within the at least three frames.
6. The method of Claim 1, wherein dropping the salient feature points associated with advancing objects is performed by:
determining total flow for a salient feature point over the at least three frames;
determining representative flow for a multiplicity of frames of the at least three frames, and an average representative flow by averaging the flow determined for the multiplicity of frames; and
dropping salient feature points for which the total flow meets a criterion related to the average representative flow.
7. The method of Claim 6, further comprising providing the total flow for at least one salient feature point.
8. The method of Claim 6, wherein dropping the salient feature points associated with objects moving in shaking movements is performed only for salient feature points not associated with advancing objects.
9. The method of Claim 1, wherein dropping the salient feature points associated with objects moving in shaking movements is performed by:
determining an amplitude for each salient feature point over the at least three frames;
clustering the salient feature point into a first cluster and a second cluster based upon the amplitude, wherein the first cluster has a higher center value than the second cluster; and
subject to at most a predetermined percentage of the salient feature points being clustered to the first cluster, dropping salient feature points associated with the first cluster, otherwise dropping salient feature points associated with the second cluster.
10. The method of Claim 9, wherein dropping the salient feature points associated with advancing objects is performed only for salient feature points not associated with objects moving in shaking movements.
11. The method of Claim 9, further comprising providing the amplitude for at least one salient feature point.
12. The method of Claim 9, wherein the predetermined percentage is between about 15% and about 40%.
13. The method of Claim 9, further comprising determining proximity between the first cluster and the second cluster, and re-considering a salient feature point associated with a dropped cluster, if close to the center value of a non-dropped cluster.
14. The method of Claim 1, wherein each frame is stabilized when it is the current frame.
15. The method of Claim 1, wherein a frame is stabilized only if it is displayed.
16. The method of Claim 1, wherein computing the transformation between pairs of consecutive frames is based on considering a representative point for each sub-area of the current frame, the representative point determined upon non-dropped salient feature points in the sub-area.
17. The method of Claim 1, further comprising dropping incorrectly tracked background points.
18. A computerized system for stabilizing a frame, the system comprising a processor configured to:
receiving a frame sequence comprising at least three frames, including a current frame;
determining salient feature points within the at least three frames;
matching the salient feature points between the at least three frames;
dropping salient feature points associated with advancing objects;
dropping salient feature points associated with objects moving in shaking movements;
computing a transformation between pairs of consecutive frames from amongst the at least three frames, based upon a multiplicity of non-dropped salient feature points, thereby obtaining a multiplicity of transformations; determining a center position for the at least three frames based upon the multiplicity of transformations;
determining a stabilizing transformation from a current frame to the center position; and
applying the stabilizing transformation to the current frame to obtain a stabilized frame.
19. The system of Claim 18, wherein dropping the salient feature points associated with advancing objects is performed by:
determining total flow for a salient feature point over the at least three frames;
determining representative flow for a multiplicity of frames of the at least three frames, and an average representative flow by averaging the flow determined for the multiplicity of frames; and
dropping salient feature points for which the total flow meets a criterion related to the average representative flow.
20. The system of Claim 18, wherein dropping the salient feature points associated with objects moving in shaking movements is performed by:
determining an amplitude for each salient feature point over the at least three frames;
clustering the salient feature point into a first cluster and a second cluster based upon the amplitude, wherein the first cluster has a higher center value than the second cluster; and
subject to at most a predetermined percentage of the salient feature points being clustered to the first cluster, dropping salient feature points associated with the first cluster, otherwise dropping salient feature points associated with the second cluster.
21. A computer program product comprising a computer readable storage medium retaining program instructions, which program instructions when read by a processor, cause the processor to perform a method comprising:
receiving a frame sequence comprising at least three frames, including a current frame;
determining salient feature points within the at least three frames; matching the salient feature points between the at least three frames; dropping salient feature points associated with advancing objects; dropping salient feature points associated with objects moving in shaking movements;
computing a transformation between pairs of consecutive frames from amongst the at least three frames, based upon a multiplicity of non-dropped salient feature points, thereby obtaining a multiplicity of transformations; determining a center position for the at least three frames based upon the multiplicity of transformations;
determining a stabilizing transformation from a current frame to the center position; and
applying the stabilizing transformation to the current frame to obtain a stabilized frame.
PCT/IL2016/051094 2015-10-15 2016-10-09 A method and system for stabilizing video frames WO2017064700A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP16855056.4A EP3362988A4 (en) 2015-10-15 2016-10-09 A method and system for stabilizing video frames
IL257090A IL257090A (en) 2015-10-15 2018-01-23 A method and system for stabilizing video frames

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US14/883,743 US9838604B2 (en) 2015-10-15 2015-10-15 Method and system for stabilizing video frames
US14/883,743 2015-10-15

Publications (1)

Publication Number Publication Date
WO2017064700A1 true WO2017064700A1 (en) 2017-04-20

Family

ID=58517133

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2016/051094 WO2017064700A1 (en) 2015-10-15 2016-10-09 A method and system for stabilizing video frames

Country Status (4)

Country Link
US (2) US9838604B2 (en)
EP (1) EP3362988A4 (en)
IL (1) IL257090A (en)
WO (1) WO2017064700A1 (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6739200B2 (en) * 2016-03-24 2020-08-12 キヤノン株式会社 Video processing device, video processing system and control method
CN110493488B (en) * 2018-05-15 2021-11-26 株式会社理光 Video image stabilization method, video image stabilization device and computer readable storage medium
CN110557522A (en) * 2018-05-31 2019-12-10 阿里巴巴集团控股有限公司 Method and device for removing video jitter
JP7105370B2 (en) 2019-03-28 2022-07-22 オリンパス株式会社 Tracking device, learned model, endoscope system and tracking method
WO2020194663A1 (en) * 2019-03-28 2020-10-01 オリンパス株式会社 Tracking device, pretained model, endoscope system, and tracking method
CN110428390B (en) * 2019-07-18 2022-08-26 北京达佳互联信息技术有限公司 Material display method and device, electronic equipment and storage medium
CN110602393B (en) * 2019-09-04 2020-06-05 南京博润智能科技有限公司 Video anti-shake method based on image content understanding
CN110856014B (en) * 2019-11-05 2023-03-07 北京奇艺世纪科技有限公司 Moving image generation method, moving image generation device, electronic device, and storage medium
CN114612837A (en) * 2022-03-15 2022-06-10 北京达佳互联信息技术有限公司 Video processing method and device and video stabilizing method
CN117575966B (en) * 2023-11-28 2024-06-21 同济大学 Video image stabilizing method for unmanned aerial vehicle high-altitude hovering shooting scene

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120105654A1 (en) * 2010-10-28 2012-05-03 Google Inc. Methods and Systems for Processing a Video for Stabilization and Retargeting

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7558405B2 (en) * 2005-06-30 2009-07-07 Nokia Corporation Motion filtering for video stabilization
US20100092093A1 (en) * 2007-02-13 2010-04-15 Olympus Corporation Feature matching method
EP2053844B1 (en) * 2007-06-28 2011-05-18 Panasonic Corporation Image processing device, image processing method, and program
EP2211302A1 (en) * 2007-11-08 2010-07-28 Nec Corporation Feature point arrangement checking device, image checking device, method therefor, and program
US8395671B2 (en) * 2008-06-09 2013-03-12 Panasonic Corporation Imaging device and imaging method for correcting effects of motion on a captured image
US8054881B2 (en) 2008-12-22 2011-11-08 Honeywell International Inc. Video stabilization in real-time using computationally efficient corner detection and correspondence
KR101622110B1 (en) * 2009-08-11 2016-05-18 삼성전자 주식회사 method and apparatus of feature extraction and image based localization method using the same
CN102656876A (en) * 2009-10-14 2012-09-05 Csr技术公司 Method and apparatus for image stabilization
US8179446B2 (en) 2010-01-18 2012-05-15 Texas Instruments Incorporated Video stabilization and reduction of rolling shutter distortion
US8385732B2 (en) 2011-07-29 2013-02-26 Hewlett-Packard Development Company, L.P. Image stabilization
US8760513B2 (en) * 2011-09-30 2014-06-24 Siemens Industry, Inc. Methods and system for stabilizing live video in the presence of long-term image drift
US8594488B1 (en) * 2012-03-13 2013-11-26 Google Inc. Methods and systems for video retargeting using motion saliency
US8989376B2 (en) * 2012-03-29 2015-03-24 Alcatel Lucent Method and apparatus for authenticating video content
MY188908A (en) * 2012-12-10 2022-01-13 Mimos Berhad Method for camera motion estimation with presence of moving object
US9319586B2 (en) * 2013-01-24 2016-04-19 Stmicroelectronics S.R.L. Method and device for stabilizing video sequences, related video-capture apparatus and computer-program product
US9277129B2 (en) * 2013-06-07 2016-03-01 Apple Inc. Robust image feature based video stabilization and smoothing

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120105654A1 (en) * 2010-10-28 2012-05-03 Google Inc. Methods and Systems for Processing a Video for Stabilization and Retargeting

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
PIERRE DRAGICEVIC ET AL.: "Video browsing by direct manipulation", CHI '08 PROCEEDINGS OF THE SIGCHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS, 5 April 2008 (2008-04-05), Florence, Italy, pages 237 - 246, XP055153877 *
S. RAJESWARARAO ET AL.: "Stabilization of jittery videos using matching techniques", INTERNATIONAL JOURNAL OF ADVANCED RESEARCH IN ELECTRONICS AND COMMUNICATION ENGINEERING (IJARECE), vol. 3, no. 12, December 2014 (2014-12-01), pages 1895 - 1899, XP055376394, [retrieved on 20141201] *
See also references of EP3362988A4 *
TAE HWAN LEE ET AL.: "Fast 3D video stabilization using ROI-based warping", JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, vol. 25, no. 5, 1 July 2014 (2014-07-01), pages 943 - 950, XP055376397 *

Also Published As

Publication number Publication date
US20170111585A1 (en) 2017-04-20
EP3362988A4 (en) 2018-09-12
IL257090A (en) 2018-03-29
EP3362988A1 (en) 2018-08-22
US9838604B2 (en) 2017-12-05
US20180070013A1 (en) 2018-03-08

Similar Documents

Publication Publication Date Title
US9838604B2 (en) Method and system for stabilizing video frames
US10404917B2 (en) One-pass video stabilization
US8428390B2 (en) Generating sharp images, panoramas, and videos from motion-blurred videos
KR101071352B1 (en) Apparatus and method for tracking object based on PTZ camera using coordinate map
US9716832B2 (en) Apparatus and method for stabilizing image
US7221776B2 (en) Video stabilizer
US9202263B2 (en) System and method for spatio video image enhancement
US20150022677A1 (en) System and method for efficient post-processing video stabilization with camera path linearization
US8891625B2 (en) Stabilization method for vibrating video frames
Ryu et al. Robust online digital image stabilization based on point-feature trajectory without accumulative global motion estimation
KR101524548B1 (en) Apparatus and method for alignment of images
WO2015172235A1 (en) Time-space methods and systems for the reduction of video noise
KR20160113887A (en) Method and Device for dewobbling scene
KR102069269B1 (en) Apparatus and method for stabilizing image
US20120002842A1 (en) Device and method for detecting movement of object
KR102003460B1 (en) Device and Method for dewobbling
CN111712857A (en) Image processing method, device, holder and storage medium
JP6282133B2 (en) Imaging device, control method thereof, and control program
KR101460317B1 (en) Detection apparatus of moving object in unstable camera environment and method thereof
JP2015079329A (en) Image processor, image processing method and program
Yousaf et al. Real time video stabilization methods in IR domain for UAVs—A review
US10861166B2 (en) Image restoration method
KR20140127049A (en) Image Stabilization Method and Image Processing Apparatus usign the smae
Sreegeethi et al. Online Video Stabilization using Mesh Flow with Minimum Latency
Lee et al. A fast spatiotemporal denoising scheme for multi-shot images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16855056

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 257090

Country of ref document: IL

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2016855056

Country of ref document: EP