CN113034383A - Method for obtaining video image based on improved grid motion statistics - Google Patents

Method for obtaining video image based on improved grid motion statistics

Info

Publication number
CN113034383A
Authority
CN
China
Prior art keywords
image
video
frame
video image
matching
Prior art date
Legal status
Pending
Application number
CN202110209457.3A
Other languages
Chinese (zh)
Inventor
柳晓鸣
蔡兵
左杰格
Current Assignee
Dalian Maritime University
Original Assignee
Dalian Maritime University
Priority date
Filing date
Publication date
Application filed by Dalian Maritime University
Priority to CN202110209457.3A
Publication of CN113034383A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/70: Denoising; Smoothing
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33: Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10016: Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for obtaining video images based on improved grid motion statistics, belonging to the field of electronic image stabilization. The method comprises the following steps: extracting the feature points of each frame of the video image with a FAST method improved by a suppression radius; calculating a descriptor for each feature point with the BRIEF descriptor method; matching feature points of adjacent frames with an improved ORB method according to the descriptors to obtain the global motion between consecutive frames of the video image; and performing Gaussian smoothing filtering and image compensation on the inter-frame global motion to finally obtain a stable video image. The disclosed radius-suppressed ORB feature extraction reduces the local clustering of extracted feature points; grid motion statistics eliminates mismatches, so the global motion is estimated accurately and the video can be compensated, reducing jitter in the compensated video and yielding a more stable video that is easier to observe.

Description

Method for obtaining video image based on improved grid motion statistics
Technical Field
The invention relates to the field of electronic image stabilization, in particular to a method for acquiring video images based on improved grid motion statistics.
Background
With the rapid development of modern computer technology, the demand for high-quality video images keeps growing. Stable video images are an important way for people to obtain information, so the field of electronic image stabilization has received wide attention over the years. Improving the stability and accuracy of video images helps observers watch more comfortably with less eye strain, improves control over uncertain factors in a scene, and reduces potential safety hazards. Motion estimation is an important technique in electronic image stabilization for obtaining stable, high-quality surveillance video. It mainly involves two aspects: ship feature extraction, and motion estimation based on the extracted features.
Traditional motion estimation methods mainly include the block matching method, the gray projection method, the optical flow method, and the feature matching method. The block matching method uses a global search that must traverse all pixels to find the optimal sub-block; its huge amount of computation makes real-time operation difficult. The gray projection method projects the image onto the vertical or horizontal axis according to gray-level differences and matches two adjacent frames by the change of the projection curves; it is fast and accurate enough for real-time use, but in some videos the gray-level changes are not obvious, so the true motion cannot be recovered, and because it only projects in the horizontal and vertical directions it is ineffective for rotation and scaling. The optical flow method observes the instantaneous velocity of the pixels of a moving object on the imaging plane, uses the temporal change of pixels in the image sequence and the coherence between two frames to find correspondences, and computes the motion between the two frames; however, the optical flow equations are complex and computationally expensive, making strict real-time requirements hard to meet, and changing illumination can be wrongly identified as optical flow, so the method is sensitive to lighting changes, which reduces the accuracy of motion estimation.
Shortcomings of the existing methods that are improved herein:
(1) when the traditional ORB feature matching method is used for motion estimation, the rate of mismatched feature points is high, which affects the accuracy of motion estimation;
(2) traditional ORB feature points exhibit local clustering, so features may be extracted repeatedly from one region, which hampers the extraction of global motion;
(3) traditional video stabilization methods suffer from insufficiently accurate motion estimation, which reduces the accuracy of the recovered camera motion trajectory and degrades the quality of the stabilized video.
Disclosure of Invention
In view of the problems existing in the prior art, the invention discloses a method for acquiring a video image based on improved grid motion statistics, which comprises the following steps:
S1, extracting the feature points of each frame of the video image with a FAST method improved by a suppression radius;
S2, calculating a descriptor for each feature point with the BRIEF descriptor method;
S3, matching feature points of adjacent frames with an improved ORB method according to the feature point descriptors to obtain the global motion between consecutive frames of the video image;
and S4, performing Gaussian smoothing filtering and image compensation on the global motion between consecutive frames of the video image to finally obtain a stable video image.
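For concreteness, the following is a minimal end-to-end sketch of steps S1 to S4 in Python with OpenCV (an assumed toolchain, not part of the patent); the helpers orb_features, match_with_gms, estimate_affine, smooth_trajectory and warp_frame are hypothetical names whose bodies are sketched step by step in the detailed description below.

```python
# Hypothetical end-to-end skeleton of S1-S4; assumes OpenCV + NumPy and the
# helper functions sketched later in the detailed description.
import cv2
import numpy as np

def stabilize(in_path, out_path, fps=30.0):          # fps value is an assumption
    cap = cv2.VideoCapture(in_path)
    ok, prev = cap.read()
    frames, motions = [prev], []
    while True:
        ok, curr = cap.read()
        if not ok:
            break
        kp1, des1 = orb_features(prev)               # S1 + S2
        kp2, des2 = orb_features(curr)
        good = match_with_gms(kp1, des1, kp2, des2,
                              prev.shape[1::-1], curr.shape[1::-1])  # S3-1..S3-3
        A = estimate_affine(kp1, kp2, good)          # S3-4 (assumed to succeed)
        motions.append([A[0, 2], A[1, 2], np.arctan2(A[1, 0], A[0, 0])])
        frames.append(curr)
        prev = curr
    comp = smooth_trajectory(np.array(motions))      # S4-1
    h, w = frames[0].shape[:2]
    out = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))
    out.write(frames[0])
    for f, (dx, dy, da) in zip(frames[1:], comp):    # S4-2: compensate each frame
        out.write(warp_frame(f, dx, dy, da))
    out.release()
```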
Further, the process of extracting the feature points of each frame of the video image using the FAST method improved by a suppression radius is as follows:
S1-1: read in the video and perform different levels of downsampling on the original frame image and the reference frame image according to the Gaussian image pyramid principle, obtaining a pyramid for each of the two images, which increases the scale invariance of the FAST features;
S1-2: preliminarily screen the feature points in the video image according to a threshold value;
S1-3: compute a response for the preliminarily screened features using a Harris-style response formula, sort the points by response value, and take the Euclidean distances between feature points corresponding to adjacent response values in the sorted order to obtain a set of suppression radii; then screen the feature points again according to the size of these suppression radii;
S1-4: re-screen the feature points using the gray-scale centroid method, adding rotation invariance, and complete the extraction of the feature points of each frame of the video image.
Further, the process of matching feature points of adjacent frames with the improved ORB method according to the feature point descriptors, obtaining the motion between consecutive frames of the video image, is as follows:
S3-1, match the two groups of ORB feature points in the video images by Hamming distance using brute-force (BF) matching, and sort them by Hamming distance to obtain a preliminary feature matching set F;
S3-2, re-screen the preliminary feature matching set F according to motion smoothness: first divide the reference frame and the current frame of the video into m×m grid cells respectively and divide each grid cell into 3×3 sub-grids, define the number of feature matches between corresponding regions of the reference frame and the adjacent frame as the score of that region, and compute the sum of the 3×3 region scores as the total score S_ij of the current grid cell;
S3-3, sort the scores of the grid cells from largest to smallest and distinguish true matches from false matches with a threshold τ: when S_ij > τ, the matches are true; when S_ij ≤ τ, they are false;
from the true matches, select the feature point matching pairs located in high-score regions as the required accurate feature points, and form the feature matching set F' from these accurate feature points, completing the accurate screening process;
and S3-4, fit an affine motion model to the obtained accurate feature point matching pairs using the RANSAC method, finally obtaining the global motion of the video.
Further, the process of performing Gaussian smoothing filtering and image compensation on the global motion between consecutive frames of the video image to finally obtain a stable video image is as follows:
S4-1, apply Gaussian filtering with a delay of k frames to the global motion between consecutive frames of the video image to obtain the smoothed global motion;
and S4-2, superimpose the smoothed global motion on the original video and compensate the result to obtain the final stable video.
By adopting the above technical scheme, the method for acquiring video images based on improved grid motion statistics has higher accuracy than the original motion estimation methods, and it maintains high estimation accuracy under rotation, brightness change, and other conditions.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments described in the present application, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a diagram illustrating the matching effect of the method of the present invention under rotational interference;
FIG. 3 is a diagram illustrating the matching effect of the method of the present invention under light and dark interference;
FIG. 4(a) is a diagram of the feature point extraction effect of the original feature extraction method;
FIG. 4(b) is a diagram of the feature point extraction effect with radius suppression according to the present invention;
FIG. 5(a) is the reference frame grid division diagram of the present invention;
FIG. 5(b) is the current frame grid division diagram of the present invention.
Detailed Description
In order to make the technical solutions and advantages of the present invention clearer, the following describes the technical solutions in the embodiments of the present invention clearly and completely with reference to the drawings in the embodiments of the present invention:
a method for acquiring video images based on improved grid motion statistics, comprising the steps of:
S1, extracting the feature points of each frame of the video image with a FAST method improved by a suppression radius;
S2, calculating a descriptor for each feature point with the BRIEF descriptor method;
S3, matching feature points of adjacent frames with an improved ORB method according to the feature point descriptors to obtain the global motion between consecutive frames of the video image;
and S4, performing Gaussian smoothing filtering and image compensation on the global motion between consecutive frames of the video image to finally obtain a stable video image.
Further, the process of extracting the feature points of each frame of the video image using the FAST method improved by a suppression radius is as follows:
S1-1: read in the video and perform different levels of downsampling on the original frame image and the reference frame image according to the Gaussian image pyramid principle, obtaining a pyramid for each of the two images, which increases the scale invariance of the FAST features;
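A minimal sketch of the pyramid construction in S1-1, assuming Python with OpenCV (the number of levels is illustrative):

```python
import cv2

def gaussian_pyramid(img, levels=8):
    """Build a Gaussian image pyramid by repeated blur-and-downsample (S1-1)."""
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(cv2.pyrDown(pyr[-1]))   # Gaussian blur + 2x down-sampling
    return pyr
```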
S1-2: preliminarily screen the feature points in the video image according to a threshold value; namely, take a pixel point P as the reference point and its gray value I_p as the reference value, draw a circle of radius r = 3 around P, on which lie 16 pixels P1, P2, P3, P4, ..., P16, and set a threshold t; first compare the four points above, below, left and right of P using the criterion I(x) ≥ I_p + t or I(x) ≤ I_p − t; if at least 3 of these 4 points satisfy the criterion, P remains a corner candidate and all 16 points on the circle are compared; if more than 3/4 of them satisfy the criterion, P is a FAST feature point, otherwise the point P is discarded;
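A minimal sketch of the 4-point pre-test described above, assuming a grayscale NumPy image and a candidate pixel away from the border; in practice the full 16-point test is provided by OpenCV's FastFeatureDetector:

```python
import numpy as np

def fast_pretest(img, x, y, t):
    """Pre-test at the four cardinal points of the radius-3 circle around P:
    keep the candidate only if at least 3 of them satisfy
    I(x) >= I_p + t or I(x) <= I_p - t."""
    Ip = int(img[y, x])
    cardinal = [int(img[y - 3, x]), int(img[y + 3, x]),
                int(img[y, x - 3]), int(img[y, x + 3])]
    hits = sum(1 for v in cardinal if v >= Ip + t or v <= Ip - t)
    return hits >= 3
```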
S1-3: the preliminarily extracted FAST feature points exhibit corner clustering, yielding many similar, adjacent points; therefore compute a response for the preliminarily screened features using a Harris-style response formula, sort the points by response value, and take the Euclidean distances between feature points corresponding to adjacent response values in the sorted order to obtain a set of suppression radii R; then screen the feature points again according to the size of these suppression radii;
the Harris response formula is as follows:
Figure BDA0002950873650000041
wherein: n is the Harris response value, adIs a threshold value, I (p) is the gray value of the selected pixel point, and I (x) is the gray value of the neighborhood pixel point;
S1-4: re-screen the feature points using the gray-scale centroid method, adding rotation invariance, and complete the extraction of the feature points of each frame of the video image; the process is as follows:
S1-4-1, compute the moments of the image:

m_{pq} = \sum_{x,y} x^p y^q I(x,y)

wherein: m_{pq} is the moment at the (x, y) pixel, p and q are the orders for the horizontal and vertical directions of the image, and p + q is the order of the moment m_{pq};
S1-4-2, compute the centroid of the image through the moments:

C = \left( \frac{m_{10}}{m_{00}}, \frac{m_{01}}{m_{00}} \right)

wherein: C is the centroid of the image, m_{10} and m_{01} are the first-order moments, and m_{00} is the zero-order moment;
S1-4-3, connect the geometric center O of the image block with the centroid C to obtain the direction vector \vec{OC}; this direction is the oFAST feature direction:

\theta = \operatorname{atan2}(m_{01}, m_{10});
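A minimal sketch of the gray-scale centroid orientation of S1-4, computed over a square patch around a keypoint (the patch half-size is an assumption; keypoints near the border are ignored for brevity):

```python
import numpy as np

def orientation(img, x, y, half=15):
    """theta = atan2(m01, m10) over a (2*half+1)^2 patch centered at (x, y)."""
    patch = img[y - half:y + half + 1, x - half:x + half + 1].astype(np.float64)
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]  # patch coordinates
    m10 = (xs * patch).sum()                           # first-order moment in x
    m01 = (ys * patch).sum()                           # first-order moment in y
    return np.arctan2(m01, m10)
```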
Further, the method for calculating the descriptor of the feature point by adopting the BRIEF descriptor method comprises the following steps:
S2-1: take an oFAST feature point as the center and an m×m neighborhood around it as the window, randomly select a pair of points within the window, compare their pixel values, and perform a binary assignment;
\tau(p; x, y) = \begin{cases} 1, & p(x) < p(y) \\ 0, & p(x) \ge p(y) \end{cases}

wherein: \tau(p; x, y) is a one-bit binary descriptor, and p(x) and p(y) are the pixel values at the random points x and y;
S2-2: select N pairs of random points and repeat step S2-1 to form an N-bit binary code, where N is generally 256, obtaining the BRIEF descriptor:

f_n(p) = \sum_{1 \le i \le n} 2^{i-1}\,\tau(p; x_i, y_i)

wherein: f_n(p) is the n-bit binary encoding of the feature point p;
S2-3: the BRIEF descriptor is matched using the Hamming distance; the n pairs of feature points are represented by a 2×n matrix:

S = \begin{pmatrix} x_1 & x_2 & \cdots & x_n \\ y_1 & y_2 & \cdots & y_n \end{pmatrix}

wherein: S is the matrix representation of the n feature point pairs;
S2-4: using the previously computed oFAST feature direction \theta = \operatorname{atan2}(m_{01}, m_{10}) and the rotation matrix

R_\theta = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}

compute:

S_\theta = R_\theta S    (7)

wherein: R_\theta is the rotation matrix and S_\theta is the n-point feature matrix with the rotation applied;
S2-5: compute the feature descriptor with rotation invariance:

g_n(p, \theta) = f_n(p) \mid (x_i, y_i) \in S_\theta    (8)

wherein: g_n(p, \theta) is the n-bit binary descriptor with rotation invariance added;
S2-6: the steered BRIEF descriptor obtained in this way has increased correlation between tests, which is unfavorable for feature description; therefore a greedy search is performed to find the 256 rBRIEF point pairs with the lowest correlation, selecting tests whose means are close to 0.5 and whose variance over the sample points is maximal.
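In practice, steps S1 and S2 (oFAST detection with pyramid and orientation, plus the rBRIEF descriptor) are implemented together in OpenCV's ORB; a minimal sketch with illustrative parameter values:

```python
import cv2

def orb_features(frame):
    """oFAST keypoints + 256-bit rBRIEF descriptors via OpenCV's ORB."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=2000, scaleFactor=1.2, nlevels=8)  # pyramid per S1-1
    return orb.detectAndCompute(gray, None)  # descriptors: (N, 32) uint8
```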
Further, the process of matching feature points of adjacent frames with the improved ORB method according to the feature point descriptors, obtaining the motion between consecutive frames of the video image, is as follows:
S3-1, match the two groups of ORB feature points in the video images by Hamming distance using brute-force (BF) matching, and sort them by Hamming distance to obtain a preliminary feature matching set F;
S3-2, re-screen the preliminary feature matching set F according to motion smoothness: divide the reference frame and the current frame of the video into 20×20 grid cells respectively and divide each grid cell into 3×3 sub-grids, define the number of feature matches between corresponding regions of the reference frame and the adjacent frame as the score of that region, and compute the sum of the 3×3 region scores as the total score S_{ij} of the current grid cell:

S_{ij} = \sum_{k=1}^{9} \lvert X_{i_k j_k} \rvert

wherein: \lvert X_{i_k j_k} \rvert is the number of feature matches between the k-th pair of corresponding sub-grids;
S3-3, sort the scores of the grid cells from largest to smallest and distinguish true matches from false matches with a threshold \tau: when S_{ij} > \tau, the matches are true; when S_{ij} \le \tau, they are false;

\tau = m_f + \alpha s_f

wherein: \tau is the threshold distinguishing true from false matches, m_f is the mean function of the matching event space, s_f is the standard deviation function of the matching event space, and \alpha is a suitable deviation factor.
Selecting a high-subarea feature point matching pair from the true matching feature points according to the required feature points to obtain accurate feature points, and forming a feature matching set F' by the accurate feature points to finish an accurate screening process;
and S3-4, performing affine motion model fitting on the obtained accurate characteristic point matching pair by using a Randac method to finally obtain the global motion quantity of the video.
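A sketch of S3 under the assumption that OpenCV with the contrib modules is available: brute-force Hamming matching, the grid motion statistics filter cv2.xfeatures2d.matchGMS (whose internal grid granularity is fixed by OpenCV, so the 20×20 division above is conceptual here), and a RANSAC affine fit:

```python
import cv2
import numpy as np

def match_with_gms(kp1, des1, kp2, des2, size1, size2):
    """S3-1..S3-3: Hamming BF matching followed by grid motion statistics.
    size1/size2 are (width, height); requires opencv-contrib-python."""
    bf = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = bf.match(des1, des2)                 # preliminary matching set F
    return cv2.xfeatures2d.matchGMS(size1, size2, kp1, kp2, matches,
                                    withRotation=True, withScale=True)  # refined F'

def estimate_affine(kp1, kp2, matches):
    """S3-4: RANSAC affine fit on the refined matches -> global inter-frame motion."""
    src = np.float32([kp1[m.queryIdx].pt for m in matches])
    dst = np.float32([kp2[m.trainIdx].pt for m in matches])
    A, _inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    return A   # 2x3 matrix: dx = A[0, 2], dy = A[1, 2], da = atan2(A[1, 0], A[0, 0])
```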
Further, the process of performing Gaussian smoothing filtering and image compensation on the global motion between consecutive frames of the video image to finally obtain a stable video image is as follows:
S4-1, apply Gaussian filtering with a delay of k frames to the global motion between consecutive frames of the video image to obtain the smoothed global motion;
and S4-2, superimpose the smoothed global motion on the original video and compensate the result to obtain the final stable video.
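A sketch of S4 under the assumption that the global motion is parameterized per frame as (dx, dy, da): the accumulated trajectory is smoothed by a Gaussian kernel spanning k frames on each side (the k frames of delay mentioned in S4-1), and each frame is compensated by the difference between the smoothed and raw trajectories:

```python
import cv2
import numpy as np

def smooth_trajectory(motions, k=15, sigma=5.0):
    """motions: (N, 3) per-frame (dx, dy, da). Returns the compensated
    per-frame motion after Gaussian smoothing of the cumulative trajectory."""
    traj = np.cumsum(motions, axis=0)                # raw camera trajectory
    x = np.arange(-k, k + 1)
    kernel = np.exp(-x**2 / (2.0 * sigma**2))
    kernel /= kernel.sum()                           # normalized Gaussian window
    smoothed = np.column_stack([np.convolve(traj[:, i], kernel, mode='same')
                                for i in range(3)])
    return motions + (smoothed - traj)               # motion yielding the smooth path

def warp_frame(frame, dx, dy, da):
    """Apply the compensating rigid transform to one frame (S4-2)."""
    h, w = frame.shape[:2]
    M = np.array([[np.cos(da), -np.sin(da), dx],
                  [np.sin(da),  np.cos(da), dy]], dtype=np.float32)
    return cv2.warpAffine(frame, M, (w, h))
```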
FIG. 2 illustrates the matching effect of the method of the present invention under rotational interference, and FIG. 3 illustrates the matching effect under light and dark interference. As can be seen from FIGS. 2 and 3, the method still maintains high matching accuracy under rotation and light and dark interference, and can accurately extract the global motion.
FIG. 4(a) shows the feature point extraction effect of the original feature extraction method, and FIG. 4(b) shows the extraction effect with radius suppression according to the present invention. As can be seen from the figures, the method of the present invention reduces the local clustering of feature points.
FIG. 5(a) is the reference frame grid division diagram of the present invention, and FIG. 5(b) is the current frame grid division diagram. The improved grid motion statistics are highly robust to rotation, scale change and brightness change, which improves the accuracy of the global motion and the quality of the processed video.
The above description is only a preferred embodiment of the present invention, but the scope of protection of the present invention is not limited thereto; any equivalent replacement or modification of the technical solution and its inventive concept made by a person skilled in the art within the technical scope disclosed by the present invention shall fall within the scope of protection of the present invention.

Claims (4)

1. A method for obtaining video images based on improved grid motion statistics, characterized in that the method comprises the following steps:
S1, extracting the feature points of each frame of the video image with a FAST method improved by a suppression radius;
S2, calculating a descriptor for each feature point with the BRIEF descriptor method;
S3, matching feature points of adjacent frames with an improved ORB method according to the feature point descriptors to obtain the global motion between consecutive frames of the video image;
and S4, performing Gaussian smoothing filtering and image compensation on the global motion between consecutive frames of the video image to finally obtain a stable video image.
2. The method for obtaining video images based on improved grid motion statistics according to claim 1, characterized in that the process of extracting the feature points of each frame of the video image using the FAST method improved by a suppression radius is as follows:
S1-1: read in the video and perform different levels of downsampling on the original frame image and the reference frame image according to the Gaussian image pyramid principle, obtaining a pyramid for each of the two images, which increases the scale invariance of the FAST features;
S1-2: preliminarily screen the feature points in the video image according to a threshold value;
S1-3: compute a response for the preliminarily screened features using a Harris-style response formula, sort the points by response value, and take the Euclidean distances between feature points corresponding to adjacent response values in the sorted order to obtain a set of suppression radii; then screen the feature points again according to the size of these suppression radii;
S1-4: re-screen the feature points using the gray-scale centroid method, adding rotation invariance, and complete the extraction of the feature points of each frame of the video image.
3. The method for obtaining video images based on improved grid motion statistics according to claim 1, characterized in that the process of matching feature points of adjacent frames with the improved ORB method according to the feature point descriptors, obtaining the motion between consecutive frames of the video image, is as follows:
S3-1, match the two groups of ORB feature points in the video images by Hamming distance using brute-force (BF) matching, and sort them by Hamming distance to obtain a preliminary feature matching set F;
S3-2, re-screen the preliminary feature matching set F according to motion smoothness: first divide the reference frame and the current frame of the video into m×m grid cells respectively and divide each grid cell into 3×3 sub-grids, define the number of feature matches between corresponding regions of the reference frame and the adjacent frame as the score of that region, and compute the sum of the 3×3 region scores as the total score S_ij of the current grid cell;
S3-3, sort the scores of the grid cells from largest to smallest and distinguish true matches from false matches with a threshold τ: when S_ij > τ, the matches are true; when S_ij ≤ τ, they are false;
from the true matches, select the feature point matching pairs located in high-score regions as the required accurate feature points, and form the feature matching set F' from these accurate feature points, completing the accurate screening process;
and S3-4, fit an affine motion model to the obtained accurate feature point matching pairs using the RANSAC method, finally obtaining the global motion of the video.
4. The method for obtaining video images based on improved grid motion statistics according to claim 1, characterized in that the process of performing Gaussian smoothing filtering and image compensation on the global motion between consecutive frames of the video image to finally obtain a stable video image is as follows:
S4-1, apply Gaussian filtering with a delay of k frames to the global motion between consecutive frames of the video image to obtain the smoothed global motion;
and S4-2, superimpose the smoothed global motion on the original video and compensate the result to obtain the final stable video.
CN202110209457.3A 2021-02-24 2021-02-24 Method for obtaining video image based on improved grid motion statistics Pending CN113034383A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110209457.3A CN113034383A (en) 2021-02-24 2021-02-24 Method for obtaining video image based on improved grid motion statistics

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110209457.3A CN113034383A (en) 2021-02-24 2021-02-24 Method for obtaining video image based on improved grid motion statistics

Publications (1)

Publication Number Publication Date
CN113034383A 2021-06-25

Family

ID=76461188

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110209457.3A Pending CN113034383A (en) 2021-02-24 2021-02-24 Method for obtaining video image based on improved grid motion statistics

Country Status (1)

Country Link
CN (1) CN113034383A (en)


Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040057520A1 (en) * 2002-03-08 2004-03-25 Shijun Sun System and method for predictive motion estimation using a global motion predictor
US20100157070A1 (en) * 2008-12-22 2010-06-24 Honeywell International Inc. Video stabilization in real-time using computationally efficient corner detection and correspondence
CN101854465A (en) * 2010-02-01 2010-10-06 杭州海康威视软件有限公司 Image processing method and device based on optical flow algorithm
CN107968916A (en) * 2017-12-04 2018-04-27 国网山东省电力公司电力科学研究院 A kind of fast video digital image stabilization method suitable for on-fixed scene
CN108109163A (en) * 2017-12-18 2018-06-01 中国科学院长春光学精密机械与物理研究所 A kind of moving target detecting method for video of taking photo by plane
CN108805908A (en) * 2018-06-08 2018-11-13 浙江大学 A kind of real time video image stabilization based on the superposition of sequential grid stream
US20200160560A1 (en) * 2018-11-19 2020-05-21 Canon Kabushiki Kaisha Method, system and apparatus for stabilising frames of a captured video sequence
CN110084830A (en) * 2019-04-07 2019-08-02 西安电子科技大学 A kind of detection of video frequency motion target and tracking
CN111667506A (en) * 2020-05-14 2020-09-15 电子科技大学 Motion estimation method based on ORB feature points

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
付宏博: "Research on real-time detection and tracking methods for moving vehicles in spaceborne video", China Excellent Master's Theses Full-text Database (Engineering Science and Technology II), no. 06, page 20 *
朱娟娟; 郭宝龙: "A panoramic image stabilization system based on iterative motion estimation", Journal of Optoelectronics · Laser, no. 03 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113627306A (en) * 2021-08-03 2021-11-09 展讯通信(上海)有限公司 Key point processing method and device, readable storage medium and terminal
CN116389793A (en) * 2023-02-21 2023-07-04 三亚学院 Method and device for realizing video frame rate improvement
CN116389793B (en) * 2023-02-21 2024-01-26 三亚学院 Method and device for realizing video frame rate improvement

Similar Documents

Publication Publication Date Title
CN108304798B (en) Street level order event video detection method based on deep learning and motion consistency
CN105869178B (en) A kind of complex target dynamic scene non-formaldehyde finishing method based on the convex optimization of Multiscale combination feature
CN110490158B (en) Robust face alignment method based on multistage model
Xu et al. Automatic building rooftop extraction from aerial images via hierarchical RGB-D priors
CN110033514B (en) Reconstruction method based on point-line characteristic rapid fusion
CN111160291B (en) Human eye detection method based on depth information and CNN
CN105160649A (en) Multi-target tracking method and system based on kernel function unsupervised clustering
CN113034383A (en) Method for obtaining video image based on improved grid motion statistics
Tu et al. MSR-CNN: Applying motion salient region based descriptors for action recognition
CN113763269A (en) Stereo matching method for binocular images
CN111709893B (en) ORB-SLAM2 improved algorithm based on information entropy and sharpening adjustment
CN115239882A (en) Crop three-dimensional reconstruction method based on low-light image enhancement
US11256949B2 (en) Guided sparse feature matching via coarsely defined dense matches
CN111652243A (en) Infrared and visible light image fusion method based on significance fusion
Miao et al. Ds-depth: Dynamic and static depth estimation via a fusion cost volume
CN110910497B (en) Method and system for realizing augmented reality map
Zhang et al. The farther the better: Balanced stereo matching via depth-based sampling and adaptive feature refinement
CN108564020B (en) Micro-gesture recognition method based on panoramic 3D image
CN108573217B (en) Compression tracking method combined with local structured information
Ma et al. MSMA-Net: An Infrared Small Target Detection Network by Multi-scale Super-resolution Enhancement and Multi-level Attention Fusion
CN110503061B (en) Multi-feature-fused multi-factor video occlusion area detection method and system
CN113554036A (en) Characteristic point extraction and matching method for improving ORB algorithm
Park et al. Independent Object Tracking from Video using the Contour Information in HSV Color Space
CN112560651A (en) Target tracking method and device based on combination of depth network and target segmentation
CN110750680A (en) Video scene classification method based on multiple features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination