CN106372598A - Image stabilizing method based on image characteristic detection for eliminating video rotation and jittering - Google Patents

Image stabilizing method based on image characteristic detection for eliminating video rotation and jittering

Info

Publication number
CN106372598A
CN106372598A CN201610786962.3A CN201610786962A
Authority
CN
China
Prior art keywords
video
point
characteristic point
image
rotation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610786962.3A
Other languages
Chinese (zh)
Inventor
黄超
李青海
简宋全
邹立斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Jing Dian Computing Machine Science And Technology Ltd
Original Assignee
Guangzhou Jing Dian Computing Machine Science And Technology Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Jing Dian Computing Machine Science And Technology Ltd filed Critical Guangzhou Jing Dian Computing Machine Science And Technology Ltd
Priority to CN201610786962.3A priority Critical patent/CN106372598A/en
Publication of CN106372598A publication Critical patent/CN106372598A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/30Noise filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides an image stabilization method based on image feature detection for eliminating video rotation and jitter. The method comprises the following steps: step S1, acquiring adjacent video frames; step S2, extracting feature points and generating feature description vectors; step S3, matching the feature points; step S4, rejecting mismatched feature point pairs; step S5, calculating the rotation angle; step S6, superimposing the rotation angle in reverse on the second video frame to eliminate video rotation; step S7, generating feature point trajectories for the video frames after rotation elimination; step S8, eliminating the Gaussian noise of the feature point trajectories; step S9, obtaining the accurate video jitter direction and amplitude; and step S10, performing the jitter-elimination operation on the video frames. With this method, both image jitter and rotation can be eliminated; the algorithm is optimized in extremum detection, feature principal-direction generation and description-vector construction, greatly improving computation speed; and mismatches between feature points of adjacent frames are effectively avoided by using the nearest-neighbor to next-nearest-neighbor ratio method and the RANSAC algorithm.

Description

Image stabilization method based on image feature detection for eliminating video rotation and jitter
Technical field
The present invention relates to the technical field of image stabilization, and in particular to an image stabilization method based on image feature detection for eliminating video rotation and jitter.
Background art
In recent years, with the popularization of photographic equipment, video has appeared more and more in people's work and life, for example in security monitoring, vehicle monitoring and digital cameras. When photographic equipment operates on a moving carrier, such as a vehicle-mounted or hand-held device, the equipment moves correspondingly, causing simultaneous jitter, rotation and scaling of the video. For a viewer, video jitter and rotation degrade the viewing experience; for an automatic recognition and tracking system, they lead to detection errors or tracking failure. Existing image stabilization methods do not eliminate video rotation and jitter well.
In view of the above drawbacks, the inventors arrived at the present invention through long research and practice.
Summary of the invention
To solve the above technical deficiencies, the present invention provides an image stabilization method based on image feature detection for eliminating video rotation and jitter, the method comprising the following steps:
Step s1: acquire adjacent video frames;
Step s2: extract feature points of the adjacent video frames using the SURF algorithm and generate feature description vectors;
Step s3: traverse the distances between feature points of the current video frame and feature points of the adjacent frame, and match feature points using the nearest-neighbor to next-nearest-neighbor ratio method;
Step s4: reject mismatched feature point pairs using the RANSAC algorithm;
Step s5: calculate the rotation angle between the adjacent video frames;
Step s6: superimpose the rotation angle in reverse on the second video frame to eliminate video rotation;
Step s7: for the video frames after rotation elimination, generate feature point trajectories using the KLT feature point tracking algorithm;
Step s8: apply Kalman filtering to the feature point trajectories to eliminate their Gaussian noise;
Step s9: fit a B-spline curve to the filtered feature point trajectories to obtain the accurate video jitter direction and amplitude;
Step s10: perform the jitter-elimination operation on the video frames according to the jitter direction and amplitude.
Preferably, step s2 specifically includes the following steps:
Step s21: convolve the image with box filters to build a multi-scale space and generate a scale-space pyramid;
Step s22: find local extrema at each scale, perform non-maximum suppression within the 3×3×3 neighborhood, keep qualifying points as candidate extrema, and record their positions and scales;
Step s23: interpolate around the extremum points in scale space and image space, and filter out low-contrast extrema;
Step s24: construct a principal direction for each feature point so that the SURF features remain invariant to image rotation;
Step s25: generate a feature description vector for each feature point and normalize it so that the feature points remain invariant to brightness changes and viewpoint transformations.
Preferably, in step s24, constructing a principal direction for each feature point comprises the following steps:
Step s241: mark a circular region whose center is the sample feature point position and whose radius is a set value;
Step s242: for all points in the circular region, compute the Haar wavelet response vectors in the x and y directions;
Step s243: assign a different Gaussian weight to each direction vector, giving larger weights to points near the center, whose directions contribute more;
Step s244: divide the circle into six sectors, compute the sum of the weighted Haar wavelet response vectors in each sector, and take the direction of the largest summed vector as the principal direction of the feature point.
Preferably, generating the feature description vector for a feature point in step s25 specifically includes the following steps:
Step s251: first rotate the coordinate axes to the principal direction of the feature point, and determine a square region centered on the feature point with side length 20σ;
Step s252: divide the region along the principal direction into 4 × 4 sub-blocks, sample each sub-region at 5 × 5 points, and apply Haar wavelet filtering to each sub-block, where dx denotes the horizontal Haar wavelet response component and dy denotes the vertical Haar wavelet response component;
Step s253: weight the Haar wavelet responses with a Gaussian function to strengthen robustness to geometric transformations, the Gaussian being centered on the feature point with σ0 = 3.3σ;
Step s254: for the 25 sampled points of each of the 16 sub-regions, compute the horizontal and vertical wavelet responses and the sums of their absolute values, obtaining a new vector (Σdx, Σdy, Σ|dx|, Σ|dy|);
Step s255: normalize the resulting 16 × 4 = 64-dimensional vector of each feature point to remove the influence of brightness changes, thereby obtaining the description vector of the feature point.
Preferably, matching feature points with the nearest-neighbor to next-nearest-neighbor ratio method in step s3 specifically includes the following steps:
Step s31: for each sample feature point in the image, find its nearest and next-nearest candidate matches; let the nearest distance be d1 and the next-nearest distance be d2, where the distance is the Euclidean distance:
d = √( Σᵢ₌₁ⁿ (x₁ᵢ − x₂ᵢ)² )
Step s32: compute the ratio of the nearest distance to the next-nearest distance and compare it with a set threshold:
when the ratio is less than the threshold, the feature points are considered matched; otherwise they are considered unmatched and the feature point is discarded.
Preferably, step s4 specifically includes the following steps:
Step s41: randomly draw a sample set P from the data set S;
Step s42: solve the model parameters from the sample set P to obtain a system model H;
Step s43: for each element l of the remaining data set S∖P, compute the distance between l and the system model H; if the distance is less than a set threshold, l is an inlier, otherwise an outlier;
Step s44: repeat steps s41, s42 and s43, take the system model H with the largest number of inliers in any iteration as the final model, and filter out the outliers under this model.
Preferably, step s7 specifically includes the following steps:
Step s71: select the video, take the first frame as the reference frame, extract and save its feature points, and record the number of feature points m;
Step s72: in the next frame, track and extract the feature points with the KLT tracking algorithm, save them, and record the number of feature points n;
Step s73: judge whether the ratio of n to m is less than a set threshold; if it is less than the threshold, generate one feature point trajectory segment, take the current frame as the new reference frame, and return to step s71;
Step s74: judge whether the video has ended; if not, return to step s72.
Compared with the prior art, the beneficial effects of the present invention are as follows. The present invention provides an image stabilization method based on image feature detection for eliminating video rotation and jitter; the SURF algorithm, based on scale-invariant features, eliminates both image jitter and rotation. The SURF algorithm draws on the design ideas of the SIFT algorithm and retains its good detection performance, while optimizing the algorithm in extremum detection, feature principal-direction generation and description-vector construction, greatly increasing computation speed. Using the nearest-neighbor to next-nearest-neighbor ratio method together with the RANSAC algorithm effectively avoids mismatches between feature points of adjacent frames.
Brief description of the drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings required for the description of the embodiments are briefly introduced below.
Fig. 1 is a flow chart of the image stabilization method based on image feature detection for eliminating video rotation and jitter according to the present invention;
Fig. 2 is a flow chart of step s2;
Fig. 3 is a flow chart of constructing a principal direction for each feature point in step s24;
Fig. 4 is a flow chart of generating the feature description vector for a feature point in step s25.
Specific embodiments
The above and other technical features and advantages of the present invention are described in more detail below with reference to the accompanying drawings.
As shown in Fig. 1, which is a flow chart of the image stabilization method based on image feature detection for eliminating video rotation and jitter of the present invention, the method comprises the following steps:
Step s1: acquire adjacent video frames.
Step s2: extract feature points of the adjacent video frames using the SURF algorithm and generate feature description vectors.
Step s3: traverse the distances between feature points of the current video frame and feature points of the adjacent frame, and match feature points using the nearest-neighbor to next-nearest-neighbor ratio method.
Step s4: reject mismatched feature point pairs using the RANSAC algorithm.
Step s5: calculate the rotation angle between the adjacent video frames.
Step s6: superimpose the rotation angle in reverse on the second video frame to eliminate video rotation.
Step s7: for the video frames after rotation elimination, generate feature point trajectories using the KLT feature point tracking algorithm.
Step s8: apply Kalman filtering to the feature point trajectories to eliminate their Gaussian noise.
Step s9: fit a B-spline curve to the filtered feature point trajectories to obtain the accurate video jitter direction and amplitude.
Step s10: perform the jitter-elimination operation on the video frames according to the jitter direction and amplitude (a minimal sketch of steps s9 and s10 follows this list).
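Steps s9 and s10 are not expanded in a dedicated sub-section, so the following minimal Python sketch (using NumPy, SciPy and OpenCV) illustrates one plausible reading of them: a smoothing B-spline is fitted to a filtered feature-point trajectory, the residual between the raw trajectory and the smooth fit is taken as the per-frame jitter vector, and each frame is compensated by the opposite translation. The function names, the smoothing factor and the choice of SciPy's splprep/splev are illustrative assumptions, not parameters stated in the patent.

```python
import numpy as np
import cv2
from scipy.interpolate import splprep, splev

def estimate_jitter(trajectory, smooth=5.0):
    """Fit a B-spline to an (N, 2) trajectory and return per-frame jitter (step s9).

    The smooth spline is treated as the intended camera path; the residual
    between the observed trajectory and the spline is the jitter.
    """
    t = np.arange(len(trajectory))
    # splprep fits a parametric B-spline through x(t), y(t)
    tck, _ = splprep([trajectory[:, 0], trajectory[:, 1]], u=t, s=smooth)
    xs, ys = splev(t, tck)
    smooth_path = np.stack([xs, ys], axis=1)
    return trajectory - smooth_path          # jitter direction and amplitude

def remove_jitter(frame, jitter_xy):
    """Translate a frame by the negative jitter vector (step s10)."""
    dx, dy = jitter_xy
    m = np.float32([[1, 0, -dx], [0, 1, -dy]])
    h, w = frame.shape[:2]
    return cv2.warpAffine(frame, m, (w, h))
```

In this sketch the spline's smoothing factor plays the role of separating intentional camera motion from jitter; a larger value yields a smoother assumed camera path and therefore larger compensating translations.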
As shown in Fig. 2, the flow chart of step s2, it specifically includes the following steps:
Step s21: convolve the image with box filters to build a multi-scale space and generate a scale-space pyramid.
Step s22: find local extrema at each scale, perform non-maximum suppression within the 3×3×3 neighborhood, keep qualifying points as candidate extrema, and record their positions and scales.
Step s23: interpolate around the extremum points in scale space and image space, and filter out low-contrast extrema.
Step s24: construct a principal direction for each feature point so that the SURF features remain invariant to image rotation.
Step s25: generate a feature description vector for each feature point and normalize it so that the feature points remain invariant to brightness changes and viewpoint transformations.
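As a concrete illustration of step s2, the sketch below uses the SURF implementation in OpenCV's contrib module; it assumes an opencv-contrib-python build with the non-free xfeatures2d module enabled, and the Hessian threshold value of 400 is an arbitrary assumption. OpenCV's SURF internally carries out the box-filter scale space, 3×3×3 non-maximum suppression, interpolation, orientation assignment and 64-dimensional descriptor generation described in steps s21–s25.

```python
import cv2

def extract_surf_features(gray_frame, hessian_threshold=400):
    """Detect SURF keypoints and compute their descriptors (step s2).

    Returns (keypoints, descriptors); descriptors is an (N, 64) array
    when extended=False, matching the 64-dimensional vectors of step s25.
    """
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian_threshold,
                                       extended=False,  # 64-d descriptors
                                       upright=False)   # keep orientation (step s24)
    keypoints, descriptors = surf.detectAndCompute(gray_frame, None)
    return keypoints, descriptors
```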
As shown in Fig. 3, the flow chart of constructing a principal direction for each feature point in step s24, it specifically includes the following steps:
Step s241: mark a circular region whose center is the sample feature point position and whose radius is a set value.
Step s242: for all points in the circular region, compute the Haar wavelet response vectors in the x and y directions.
Step s243: assign a different Gaussian weight to each direction vector, giving larger weights to points near the center, whose directions contribute more.
Step s244: divide the circle into six sectors, compute the sum of the weighted Haar wavelet response vectors in each sector, and take the direction of the largest summed vector as the principal direction of the feature point.
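The sector scan of steps s241–s244 can be expressed compactly in NumPy, as in the hypothetical sketch below. It assumes the Haar wavelet responses (dx, dy) of the sampled points inside the circular region and their Gaussian weights have already been computed, and only shows how the six 60° sectors are scanned to pick the dominant orientation.

```python
import numpy as np

def principal_direction(dx, dy, weights):
    """Pick a keypoint's principal direction from weighted Haar responses.

    dx, dy  : arrays of Haar responses of the points in the circular region
    weights : Gaussian weights, larger near the keypoint (step s243)
    The circle is split into six 60-degree sectors (step s244); the sector
    whose summed response vector is longest defines the principal direction.
    """
    wx, wy = dx * weights, dy * weights
    angles = np.arctan2(wy, wx)                  # direction of each response
    best_len, best_dir = -1.0, 0.0
    for k in range(6):
        lo = -np.pi + k * np.pi / 3
        hi = lo + np.pi / 3
        mask = (angles >= lo) & (angles < hi)
        sx, sy = wx[mask].sum(), wy[mask].sum()  # summed vector of this sector
        length = np.hypot(sx, sy)
        if length > best_len:
            best_len, best_dir = length, np.arctan2(sy, sx)
    return best_dir
```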
As shown in Fig. 4, the flow chart of generating the feature description vector for a feature point in step s25, it specifically includes the following steps:
Step s251: first rotate the coordinate axes to the principal direction of the feature point, and determine a square region centered on the feature point with side length 20σ (where σ is the scale corresponding to the feature point).
Step s252: divide the region along the principal direction into 4 × 4 sub-blocks, sample each sub-region at 5 × 5 points, and apply Haar wavelet filtering to each sub-block, where dx denotes the horizontal Haar wavelet response component and dy denotes the vertical Haar wavelet response component.
Step s253: weight the Haar wavelet responses with a Gaussian function to strengthen robustness to geometric transformations, the Gaussian being centered on the feature point with σ0 = 3.3σ.
Step s254: for the 25 sampled points of each of the 16 sub-regions, compute the horizontal and vertical wavelet responses and the sums of their absolute values, obtaining a new vector (Σdx, Σdy, Σ|dx|, Σ|dy|).
Step s255: normalize the resulting 16 × 4 = 64-dimensional vector of each feature point to remove the influence of brightness changes, thereby obtaining the description vector of the feature point.
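Steps s251–s255 amount to summing the rotated, Gaussian-weighted Haar responses inside 16 sub-blocks and normalizing the result. The sketch below is a schematic reconstruction under that reading: it assumes the 20σ × 20σ square has already been rotated to the principal direction and sampled into a 20 × 20 grid of (dx, dy) responses, and only shows the 4 × 4 aggregation into the 64-dimensional vector.

```python
import numpy as np

def surf_descriptor(dx, dy):
    """Build the 64-d descriptor from a 20x20 grid of Haar responses.

    dx, dy : (20, 20) arrays of horizontal/vertical Haar wavelet responses,
             already aligned to the principal direction and Gaussian weighted.
    Each 5x5 sub-block contributes (sum dx, sum dy, sum |dx|, sum |dy|),
    giving 4 x 4 x 4 = 64 values, which are then length-normalized (step s255).
    """
    desc = []
    for i in range(4):
        for j in range(4):
            bx = dx[5 * i:5 * (i + 1), 5 * j:5 * (j + 1)]
            by = dy[5 * i:5 * (i + 1), 5 * j:5 * (j + 1)]
            desc.extend([bx.sum(), by.sum(),
                         np.abs(bx).sum(), np.abs(by).sum()])
    desc = np.asarray(desc)
    norm = np.linalg.norm(desc)
    return desc / norm if norm > 0 else desc  # normalization removes brightness effects
```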
Matching feature points with the nearest-neighbor to next-nearest-neighbor ratio method in step s3 specifically includes the following steps:
Step s31: for each sample feature point in the image, find its nearest and next-nearest candidate matches; let the nearest distance be d1 and the next-nearest distance be d2, where the distance is the Euclidean distance:
d = √( Σᵢ₌₁ⁿ (x₁ᵢ − x₂ᵢ)² )
Step s32: compute the ratio of the nearest distance to the next-nearest distance and compare it with a set threshold:
when the ratio is less than the threshold, the feature points are considered matched; otherwise they are considered unmatched and the feature point is discarded.
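A minimal OpenCV sketch of the nearest-neighbor to next-nearest-neighbor test of steps s31–s32 is shown below; the 0.7 ratio threshold is an illustrative assumption, not a value stated in the patent.

```python
import cv2

def ratio_match(desc_prev, desc_next, ratio_threshold=0.7):
    """Match SURF descriptors with the nearest/next-nearest distance ratio test.

    For each descriptor the two closest candidates (d1 <= d2) are found with
    Euclidean (L2) distance; the match is kept only when d1/d2 < threshold.
    """
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(desc_prev, desc_next, k=2)
    good = []
    for pair in knn:
        if len(pair) < 2:          # not enough candidates for the ratio test
            continue
        d1, d2 = pair
        if d2.distance > 0 and d1.distance / d2.distance < ratio_threshold:
            good.append(d1)
    return good
```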
Step s4 enables the rotation angle between adjacent frames to be obtained more accurately. It specifically includes the following steps:
Step s41: randomly draw a sample set P from the data set S.
Step s42: solve the model parameters from the sample set P to obtain a system model H.
Step s43: for each element l of the remaining data set S∖P, compute the distance between l and the system model H; if the distance is less than a set threshold, l is an inlier, otherwise an outlier.
Step s44: repeat steps s41, s42 and s43, take the system model H with the largest number of inliers in any iteration as the final model, and filter out the outliers under this model.
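Steps s4–s6 (RANSAC-based outlier rejection, rotation-angle estimation and reverse rotation) can be prototyped with OpenCV's built-in robust estimator, as sketched below. Using estimateAffinePartial2D with the RANSAC flag is a convenient stand-in for the hand-written sampling loop of steps s41–s44, and inverting the whole estimated similarity transform (rotation plus the small translation) is a simplification of step s6; the reprojection threshold and the choice of a similarity model are assumptions of this sketch.

```python
import numpy as np
import cv2

def estimate_rotation(pts_prev, pts_next, ransac_thresh=3.0):
    """Estimate the inter-frame similarity transform from matched points (steps s4-s5).

    pts_prev, pts_next : (N, 2) float32 arrays of matched feature point coordinates.
    RANSAC rejects mismatched pairs while fitting; the rotation angle is read
    off the 2x3 matrix [[a, -b, tx], [b, a, ty]].
    """
    m, inlier_mask = cv2.estimateAffinePartial2D(
        pts_prev.reshape(-1, 1, 2), pts_next.reshape(-1, 1, 2),
        method=cv2.RANSAC, ransacReprojThreshold=ransac_thresh)
    angle_deg = np.degrees(np.arctan2(m[1, 0], m[0, 0]))
    return m, angle_deg, inlier_mask

def remove_rotation(frame, m):
    """Warp the second frame with the inverse of the estimated transform,
    i.e. superimpose the estimated rotation in reverse (step s6)."""
    inv = cv2.invertAffineTransform(m)
    h, w = frame.shape[:2]
    return cv2.warpAffine(frame, inv, (w, h))
```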
Step s7 improves the accuracy of the feature point trajectories. It specifically includes the following steps:
Step s71: select the video, take the first frame as the reference frame, extract and save its feature points, and record the number of feature points m.
Step s72: in the next frame, track and extract the feature points with the KLT tracking algorithm, save them, and record the number of feature points n.
Step s73: judge whether the ratio of n to m is less than a set threshold; if it is less than the threshold, generate one feature point trajectory segment, take the current frame as the new reference frame, and return to step s71.
Step s74: judge whether the video has ended; if not, return to step s72.
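The trajectory-generation loop of steps s71–s74 maps naturally onto OpenCV's pyramidal Lucas-Kanade tracker, as in the hypothetical sketch below. The re-initialization ratio of 0.8, the use of goodFeaturesToTrack for the reference frame and its parameters are assumptions, since the patent only specifies "a certain threshold".

```python
import cv2

def track_trajectories(frames, ratio_threshold=0.8):
    """Generate feature point trajectory segments with KLT tracking (step s7).

    frames : sequence of grayscale frames (rotation already removed).
    A new reference frame is chosen whenever the number of surviving points n
    drops below ratio_threshold * m, where m is the count at the reference
    frame (step s73). Yields one trajectory segment (list of point arrays) at a time.
    """
    pts = cv2.goodFeaturesToTrack(frames[0], maxCorners=500,
                                  qualityLevel=0.01, minDistance=7)
    m = len(pts)
    segment = [pts.reshape(-1, 2)]
    prev = frames[0]
    for frame in frames[1:]:
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev, frame, pts, None)
        keep = status.ravel() == 1               # keep only successfully tracked points
        pts = nxt[keep].reshape(-1, 1, 2)
        segment.append(pts.reshape(-1, 2))
        if len(pts) < ratio_threshold * m:       # step s73: start a new segment
            yield segment
            pts = cv2.goodFeaturesToTrack(frame, maxCorners=500,
                                          qualityLevel=0.01, minDistance=7)
            m = len(pts)
            segment = [pts.reshape(-1, 2)]
        prev = frame
    yield segment                                # step s74: video finished
```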
Step s8 specifically includes the following. In the Kalman filtering model, the true state x_k at time k is derived from the true state at time k−1 as:
x_k = F_k·x_{k−1} + B_k·u_k + w_k
where F_k is the state transition matrix acting on x_{k−1}, B_k is the control-input matrix, and w_k is the process noise, which follows a normal distribution. In this application, the feature point position x_k at time k equals the feature point position x_{k−1} at time k−1 plus the feature point displacement d(x, y), where the displacement can be expressed as the product of the image's moving speed and the inter-frame time; considering that the inter-frame time is short, the image's moving speed approximates the speed at the previous moment.
At time k, the true state x_k and the observed state z_k satisfy:
z_k = H_k·x_k + v_k
where H_k is the observation matrix and v_k is the observation noise, which follows a normal distribution. In this application the measurement is disturbed only by noise, so the observation matrix H_k is the identity matrix.
In Kalman filtering, the Kalman gain is continually revised according to the minimum mean-square error, and the prediction obtained in the prediction stage is optimized using the observation of the current state, yielding a more accurate estimate.
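A minimal Kalman filter over one feature point's coordinates, in the spirit of step s8, might look as follows. It is written directly in NumPy, filters each axis independently, uses the measured inter-frame displacement as the control input (so F = B = H = I, matching the description above), and the process and measurement noise variances q and r are illustrative assumptions.

```python
import numpy as np

def kalman_smooth(trajectory, q=1e-3, r=1e-1):
    """Smooth an (N, 2) feature point trajectory with a simple Kalman filter (step s8).

    State model: x_k = x_{k-1} + d_k + w_k, where d_k is the measured
    inter-frame displacement used as the control input u_k.
    """
    smoothed = np.zeros_like(trajectory, dtype=float)
    x = trajectory[0].astype(float)              # initial state estimate
    p = np.ones(2)                               # initial estimate variance per axis
    smoothed[0] = x
    for k in range(1, len(trajectory)):
        d = trajectory[k] - trajectory[k - 1]    # control input u_k = displacement
        # prediction stage
        x_pred = x + d
        p_pred = p + q
        # update stage with observation z_k = trajectory[k] (H = identity)
        gain = p_pred / (p_pred + r)             # Kalman gain from mean-square error
        x = x_pred + gain * (trajectory[k] - x_pred)
        p = (1.0 - gain) * p_pred
        smoothed[k] = x
    return smoothed
```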
The image stabilization method based on image feature detection for eliminating video rotation and jitter provided by the present invention first finds the feature points of the video frames with an image feature detection algorithm and rejects incorrectly identified feature points; it then matches the feature points of adjacent video frames, calculates the rotation angle between adjacent frames, and eliminates video rotation by angle compensation; finally, for the rotation-free video segment, it generates the feature point trajectories of the frames with a feature point tracking algorithm, obtains the motion model of each frame with filtering and curve fitting, and eliminates video jitter by translation compensation. The advantages are as follows:
1. Commonly used electronic image stabilization methods, such as block matching and gray projection, can only handle translational motion models, i.e., they can only eliminate image jitter, whereas the SURF algorithm based on scale-invariant features eliminates both image jitter and rotation.
2. The traditional SIFT feature detection algorithm is invariant to rotation and scaling and robust to noise and illumination differences, but its computation speed is slow. The SURF algorithm draws on the design ideas of SIFT and retains its good detection performance, while optimizing the algorithm in extremum detection, feature principal-direction generation and description-vector construction, greatly increasing computation speed.
3. Using the nearest-neighbor to next-nearest-neighbor ratio method together with the RANSAC algorithm effectively avoids mismatches between feature points of adjacent frames. The nearest-neighbor to next-nearest-neighbor ratio method handles the cases where adjacent frames have no similar feature point or several similar feature points, and the RANSAC algorithm rejects mismatched points according to their consistency with the model.
The above are only preferred embodiments of the present invention and are intended to be illustrative rather than limiting. Those skilled in the art will understand that many changes, modifications and even equivalent substitutions may be made within the spirit and scope defined by the claims of the present invention, all of which fall within the protection scope of the present invention.

Claims (7)

1. An image stabilization method based on image feature detection for eliminating video rotation and jitter, characterized in that the method comprises the following steps:
Step s1: acquiring adjacent video frames;
Step s2: extracting feature points of the adjacent video frames using the SURF algorithm and generating feature description vectors;
Step s3: traversing the distances between feature points of the current video frame and feature points of the adjacent frame, and matching feature points using the nearest-neighbor to next-nearest-neighbor ratio method;
Step s4: rejecting mismatched feature point pairs using the RANSAC algorithm;
Step s5: calculating the rotation angle between the adjacent video frames;
Step s6: superimposing the rotation angle in reverse on the second video frame to eliminate video rotation;
Step s7: for the video frames after rotation elimination, generating feature point trajectories using the KLT feature point tracking algorithm;
Step s8: applying Kalman filtering to the feature point trajectories to eliminate their Gaussian noise;
Step s9: fitting a B-spline curve to the filtered feature point trajectories to obtain the accurate video jitter direction and amplitude;
Step s10: performing the jitter-elimination operation on the video frames according to the jitter direction and amplitude.
2. The image stabilization method based on image feature detection for eliminating video rotation and jitter according to claim 1, characterized in that step s2 specifically includes the following steps:
Step s21: convolving the image with box filters to build a multi-scale space and generate a scale-space pyramid;
Step s22: finding local extrema at each scale, performing non-maximum suppression within the 3×3×3 neighborhood, keeping qualifying points as candidate extrema, and recording their positions and scales;
Step s23: interpolating around the extremum points in scale space and image space, and filtering out low-contrast extrema;
Step s24: constructing a principal direction for each feature point so that the SURF features remain invariant to image rotation;
Step s25: generating a feature description vector for each feature point and normalizing it so that the feature points remain invariant to brightness changes and viewpoint transformations.
3. The image stabilization method based on image feature detection for eliminating video rotation and jitter according to claim 2, characterized in that constructing a principal direction for each feature point in step s24 comprises the following steps:
Step s241: marking a circular region whose center is the sample feature point position and whose radius is a set value;
Step s242: for all points in the circular region, computing the Haar wavelet response vectors in the x and y directions;
Step s243: assigning a different Gaussian weight to each direction vector, giving larger weights to points near the center, whose directions contribute more;
Step s244: dividing the circle into six sectors, computing the sum of the weighted Haar wavelet response vectors in each sector, and taking the direction of the largest summed vector as the principal direction of the feature point.
4. The image stabilization method based on image feature detection for eliminating video rotation and jitter according to claim 3, characterized in that generating the feature description vector for a feature point in step s25 specifically includes the following steps:
Step s251: first rotating the coordinate axes to the principal direction of the feature point, and determining a square region centered on the feature point with side length 20σ;
Step s252: dividing the region along the principal direction into 4 × 4 sub-blocks, sampling each sub-region at 5 × 5 points, and applying Haar wavelet filtering to each sub-block, where dx denotes the horizontal Haar wavelet response component and dy denotes the vertical Haar wavelet response component;
Step s253: weighting the Haar wavelet responses with a Gaussian function to strengthen robustness to geometric transformations, the Gaussian being centered on the feature point with σ0 = 3.3σ;
Step s254: for the 25 sampled points of each of the 16 sub-regions, computing the horizontal and vertical wavelet responses and the sums of their absolute values, obtaining a new vector (Σdx, Σdy, Σ|dx|, Σ|dy|);
Step s255: normalizing the resulting 16 × 4 = 64-dimensional vector of each feature point to remove the influence of brightness changes, thereby obtaining the description vector of the feature point.
5. The image stabilization method based on image feature detection for eliminating video rotation and jitter according to claim 4, characterized in that matching feature points with the nearest-neighbor to next-nearest-neighbor ratio method in step s3 specifically includes the following steps:
Step s31: for each sample feature point in the image, finding its nearest and next-nearest candidate matches, letting the nearest distance be d1 and the next-nearest distance be d2, where the distance is the Euclidean distance:
d = √( Σᵢ₌₁ⁿ (x₁ᵢ − x₂ᵢ)² )
Step s32: computing the ratio of the nearest distance to the next-nearest distance and comparing it with a set threshold:
when the ratio is less than the threshold, the feature points are considered matched; otherwise they are considered unmatched and the feature point is discarded.
6. The image stabilization method based on image feature detection for eliminating video rotation and jitter according to claim 5, characterized in that step s4 specifically includes the following steps:
Step s41: randomly drawing a sample set P from the data set S;
Step s42: solving the model parameters from the sample set P to obtain a system model H;
Step s43: for each element l of the remaining data set S∖P, computing the distance between l and the system model H; if the distance is less than a set threshold, l is an inlier, otherwise an outlier;
Step s44: repeating steps s41, s42 and s43, taking the system model H with the largest number of inliers in any iteration as the final model, and filtering out the outliers under this model.
7. The image stabilization method based on image feature detection for eliminating video rotation and jitter according to claim 6, characterized in that step s7 specifically includes the following steps:
Step s71: selecting the video, taking the first frame as the reference frame, extracting and saving its feature points, and recording the number of feature points m;
Step s72: in the next frame, tracking and extracting the feature points with the KLT tracking algorithm, saving them, and recording the number of feature points n;
Step s73: judging whether the ratio of n to m is less than a set threshold; if it is less than the threshold, generating one feature point trajectory segment, taking the current frame as the new reference frame, and returning to step s71;
Step s74: judging whether the video has ended; if not, returning to step s72.
CN201610786962.3A 2016-08-31 2016-08-31 Image stabilizing method based on image characteristic detection for eliminating video rotation and jittering Pending CN106372598A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610786962.3A CN106372598A (en) 2016-08-31 2016-08-31 Image stabilizing method based on image characteristic detection for eliminating video rotation and jittering

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610786962.3A CN106372598A (en) 2016-08-31 2016-08-31 Image stabilizing method based on image characteristic detection for eliminating video rotation and jittering

Publications (1)

Publication Number Publication Date
CN106372598A true CN106372598A (en) 2017-02-01

Family

ID=57898717

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610786962.3A Pending CN106372598A (en) 2016-08-31 2016-08-31 Image stabilizing method based on image characteristic detection for eliminating video rotation and jittering

Country Status (1)

Country Link
CN (1) CN106372598A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107680127A (en) * 2017-10-11 2018-02-09 华中科技大学 A kind of fast image stabilization method based on centralizing mapping
CN107944455A (en) * 2017-11-15 2018-04-20 天津大学 A kind of image matching method based on SURF
CN107968916A (en) * 2017-12-04 2018-04-27 国网山东省电力公司电力科学研究院 A kind of fast video digital image stabilization method suitable for on-fixed scene
CN108805908A (en) * 2018-06-08 2018-11-13 浙江大学 A kind of real time video image stabilization based on the superposition of sequential grid stream
CN109688358A (en) * 2018-12-29 2019-04-26 盐城工业职业技术学院 Fabricate class course resources visual development and the information transmission system and method
CN109698906A (en) * 2018-11-24 2019-04-30 四川鸿景润科技有限公司 Dithering process method and device, video monitoring system based on image
CN109977775A (en) * 2019-02-25 2019-07-05 腾讯科技(深圳)有限公司 Critical point detection method, apparatus, equipment and readable storage medium storing program for executing
CN113096154A (en) * 2020-01-08 2021-07-09 浙江光珀智能科技有限公司 Target detection and tracking method and system based on inclined depth camera
CN114143459A (en) * 2021-11-26 2022-03-04 中国电子科技集团公司第五十四研究所 Video jitter elimination method suitable for large zoom camera

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102006425A (en) * 2010-12-13 2011-04-06 交通运输部公路科学研究所 Method for splicing video in real time based on multiple cameras
CN102427505A (en) * 2011-09-29 2012-04-25 深圳市万兴软件有限公司 Video image stabilization method and system on the basis of Harris Corner
CN103841296A (en) * 2013-12-24 2014-06-04 哈尔滨工业大学 Real-time electronic image stabilizing method with wide-range rotation and horizontal movement estimating function
KR20140106310A (en) * 2013-02-26 2014-09-03 인테그레이티드에너지 주식회사 Uneven pattern image acquisition apparatus and method
CN105245841A (en) * 2015-10-08 2016-01-13 北京工业大学 CUDA (Compute Unified Device Architecture)-based panoramic video monitoring system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102006425A (en) * 2010-12-13 2011-04-06 交通运输部公路科学研究所 Method for splicing video in real time based on multiple cameras
CN102427505A (en) * 2011-09-29 2012-04-25 深圳市万兴软件有限公司 Video image stabilization method and system on the basis of Harris Corner
KR20140106310A (en) * 2013-02-26 2014-09-03 인테그레이티드에너지 주식회사 Uneven pattern image acquisition apparatus and method
CN103841296A (en) * 2013-12-24 2014-06-04 哈尔滨工业大学 Real-time electronic image stabilizing method with wide-range rotation and horizontal movement estimating function
CN105245841A (en) * 2015-10-08 2016-01-13 北京工业大学 CUDA (Compute Unified Device Architecture)-based panoramic video monitoring system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Shuai Ping et al.: "X-ray Pulsar Navigation Systems and Methods", 31 July 2009, China Astronautic Publishing House *
Niu Jiejie: "Research on Electronic Image Stabilization Algorithms for Rotating Video", China Master's Theses Full-text Database, Engineering Science and Technology II *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107680127B (en) * 2017-10-11 2019-11-12 华中科技大学 A kind of fast image stabilization method based on centralizing mapping
CN107680127A (en) * 2017-10-11 2018-02-09 华中科技大学 A kind of fast image stabilization method based on centralizing mapping
CN107944455A (en) * 2017-11-15 2018-04-20 天津大学 A kind of image matching method based on SURF
CN107944455B (en) * 2017-11-15 2020-06-02 天津大学 Image matching method based on SURF
CN107968916A (en) * 2017-12-04 2018-04-27 国网山东省电力公司电力科学研究院 A kind of fast video digital image stabilization method suitable for on-fixed scene
CN108805908B (en) * 2018-06-08 2020-11-03 浙江大学 Real-time video image stabilization method based on time sequence grid stream superposition
CN108805908A (en) * 2018-06-08 2018-11-13 浙江大学 A kind of real time video image stabilization based on the superposition of sequential grid stream
CN109698906A (en) * 2018-11-24 2019-04-30 四川鸿景润科技有限公司 Dithering process method and device, video monitoring system based on image
CN109698906B (en) * 2018-11-24 2021-01-26 四川鸿景润科技有限公司 Image-based jitter processing method and device and video monitoring system
CN109688358A (en) * 2018-12-29 2019-04-26 盐城工业职业技术学院 Fabricate class course resources visual development and the information transmission system and method
CN109977775A (en) * 2019-02-25 2019-07-05 腾讯科技(深圳)有限公司 Critical point detection method, apparatus, equipment and readable storage medium storing program for executing
CN109977775B (en) * 2019-02-25 2023-07-28 腾讯科技(深圳)有限公司 Key point detection method, device, equipment and readable storage medium
CN113096154A (en) * 2020-01-08 2021-07-09 浙江光珀智能科技有限公司 Target detection and tracking method and system based on inclined depth camera
CN114143459A (en) * 2021-11-26 2022-03-04 中国电子科技集团公司第五十四研究所 Video jitter elimination method suitable for large zoom camera

Similar Documents

Publication Publication Date Title
CN106372598A (en) Image stabilizing method based on image characteristic detection for eliminating video rotation and jittering
CN103426182B (en) The electronic image stabilization method of view-based access control model attention mechanism
Li et al. SPM-BP: Sped-up PatchMatch belief propagation for continuous MRFs
US9536147B2 (en) Optical flow tracking method and apparatus
CN111260738A (en) Multi-scale target tracking method based on relevant filtering and self-adaptive feature fusion
CN108470354A (en) Video target tracking method, device and realization device
CN105046710A (en) Depth image partitioning and agent geometry based virtual and real collision interaction method and apparatus
KR101787542B1 (en) Estimation system and method of slope stability using 3d model and soil classification
CN107784663A (en) Correlation filtering tracking and device based on depth information
CN110309808B (en) Self-adaptive smoke root node detection method in large-scale space
CN104680554B (en) Compression tracking and system based on SURF
CN113312973B (en) Gesture recognition key point feature extraction method and system
CN104794737A (en) Depth-information-aided particle filter tracking method
CN106780560A (en) A kind of feature based merges the bionic machine fish visual tracking method of particle filter
CN104881029A (en) Mobile robot navigation method based on one point RANSAC and FAST algorithm
CN109063549A (en) High-resolution based on deep neural network is taken photo by plane video moving object detection method
CN110910421A (en) Weak and small moving object detection method based on block characterization and variable neighborhood clustering
CN113065397B (en) Pedestrian detection method and device
Tian et al. Research on multi-sensor fusion SLAM algorithm based on improved gmapping
Zhao et al. Fractal dimension estimation of RGB color images using maximum color distance
CN109493370B (en) Target tracking method based on space offset learning
Du et al. Spatio-temporal self-organizing map deep network for dynamic object detection from videos
CN105303544A (en) Video splicing method based on minimum boundary distance
CN110111358B (en) Target tracking method based on multilayer time sequence filtering
CN116777956A (en) Moving target screening method based on multi-scale track management

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170201