CN106228578A - Motion-capture algorithm based on an improved optical flow method - Google Patents
Motion-capture algorithm based on an improved optical flow method
- Publication number: CN106228578A
- Application number: CN201610667078.8A
- Authority: CN (China)
- Prior art keywords: optical flow; motion capture; optical flow method
- Prior art date: 2016-08-12
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T7/00—Image analysis › G06T7/20—Analysis of motion
- G06T2207/00—Indexing scheme for image analysis or image enhancement › G06T2207/10—Image acquisition modality › G06T2207/10016—Video; Image sequence
- G06T2207/00—Indexing scheme for image analysis or image enhancement › G06T2207/20—Special algorithmic details › G06T2207/20016—Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a motion-capture algorithm based on an improved optical flow method, relating to the technical field of motion capture. The algorithm first creates an L-layer image pyramid and detects feature points in the target region; it judges whether the number of feature points is greater than 8 and, if so, proceeds to the next step; u, v and H are solved from the optical flow equation; it judges whether the difference of u, v between two adjacent iterations is greater than 0.0003 and, if so, proceeds to the next step; the iteration count is updated, p = p + 1, and if p < P the next step follows; the pyramid layer index is updated, l = l + 1, and if l < L the iteration terminates. By providing this motion-capture algorithm based on an improved optical flow method, the invention achieves a large improvement in both speed and accuracy; compared with the original HS optical flow method it is better in time complexity and stability, and it well satisfies the real-time requirement of motion-capture systems.
Description
Technical field
The present invention relates to the technical field of motion capture, and in particular to a motion-capture algorithm based on an improved optical flow method.
Background technology
When the eyes of people are with observed object generation relative motion, the image of object forms one in retinal plane and is
Row continually varying image, the image information of these a series of changes constantly " flowing through " retina, seems " stream " of a kind of light, institute
To be referred to as light stream.
Light stream defines based on pixel, and the collection of all light streams is collectively referred to as optical flow field.By optical flow field is analyzed,
The object sports ground relative to observer can be obtained.The algorithm analyzed during this is referred to as optical flow method.
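For orientation only (this is not the method claimed below), the idea of an optical flow field can be made concrete with a few lines of Python: a dense flow field assigns one displacement vector (u, v) to every pixel between two consecutive frames. The file names and the Farnebäck parameters in this sketch are assumptions for illustration.

```python
import cv2
import numpy as np

# Two consecutive grayscale frames (file names are placeholders).
prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Dense optical flow field: one (u, v) displacement per pixel.
flow = cv2.calcOpticalFlowFarneback(prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0)
u, v = flow[..., 0], flow[..., 1]

# The motion field of the scene relative to the observer can then be analysed,
# for example via the per-pixel motion magnitude.
magnitude = np.sqrt(u ** 2 + v ** 2)
print("mean motion magnitude:", float(magnitude.mean()))
```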
The most commonly used optical flow method at present is the HS optical flow method. Extensive practical application has shown that the HS method works well when tracking richly textured images, but its effect in motion capture is unsatisfactory, with problems such as unstable capture. The HS optical flow method currently still has the following deficiencies:
1. it is difficult to choose a window of suitable size that adapts to videos of different resolutions and to different feature points;
2. unstable capture results occur easily: at some captured feature points the matrix G is not invertible, which makes the solution of the optical flow equation unreliable and leads to capture drift (a sketch of this reliability test follows the list); when the motion of the object is large, an image pyramid must be added in order to iterate towards an accurate optical flow field, but adding the pyramid greatly increases both the computational load and the time consumption of the optical flow estimation;
3. the optical flow is solved from the information in the neighbourhood of each point, every point is solved independently from the optical flow equation, and the feature points impose no constraints on one another, so when some points of the tracked point set are tracked inaccurately the overall tracking result is easily affected and the tracking becomes unstable.
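Deficiency 2 refers to the local gradient matrix G becoming singular at some feature points. The text gives no code for this test; the snippet below is a minimal sketch under the usual assumption that G is the 2x2 structure tensor accumulated over a window around the feature point, so a vanishing smallest eigenvalue signals that the local optical flow equations cannot be solved reliably.

```python
import cv2
import numpy as np

def g_matrix_is_reliable(gray, x, y, win=7, min_eig=1e-3):
    """Return True if the 2x2 gradient matrix G around (x, y) is well conditioned.

    G = sum over the window of [[Ix^2, Ix*Iy], [Ix*Iy, Iy^2]].
    If its smallest eigenvalue is close to zero, G is (nearly) singular and the
    optical flow solution at this feature point is unreliable (capture drift).
    """
    ix = cv2.Sobel(gray.astype(np.float64), cv2.CV_64F, 1, 0, ksize=3)
    iy = cv2.Sobel(gray.astype(np.float64), cv2.CV_64F, 0, 1, ksize=3)
    half = win // 2
    wx = ix[y - half:y + half + 1, x - half:x + half + 1]
    wy = iy[y - half:y + half + 1, x - half:x + half + 1]
    G = np.array([[np.sum(wx * wx), np.sum(wx * wy)],
                  [np.sum(wx * wy), np.sum(wy * wy)]])
    return float(np.linalg.eigvalsh(G).min()) > min_eig
```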
Summary of the invention
The present invention proposes a motion-capture algorithm based on an improved optical flow method with which the speed and precision of the algorithm are considerably improved.
The technical solution of the present invention is achieved as follows:
A motion-capture algorithm based on an improved optical flow method, comprising the following steps (a control-flow sketch of these steps is given below):
1) create an L-layer image pyramid, detect feature points in the target region, and allow P iterations per layer;
2) judge whether the number of feature points is greater than 8; if not, return to step 1); if yes, proceed to the next step;
3) solve u, v and H from the optical flow equation;
4) judge whether the difference of u, v between two adjacent iterations is greater than 0.0003; if not, jump to step 6); if yes, proceed to the next step;
5) update the iteration count p = p + 1; if p < P, proceed to the next step; if p >= P, return to step 3);
6) update the pyramid layer index l = l + 1; if l < L, the iteration terminates; if l >= L, return to step 3).
Preferably, in step 3), u and v are the feature point sets of adjacent frames and H is a Hessian matrix.
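The control-flow sketch of steps 1) to 6) announced above follows. It is only an illustrative reading of the steps, not the patented implementation: the helper functions (build_pyramid, detect_feature_points, solve_optical_flow_equation) are assumed, the layer order and the flow propagation between layers are omitted, and the branch conditions are written in their natural convergent sense.

```python
import numpy as np

def improved_optical_flow_capture(frame_i, frame_j, L=3, P=20, eps=3e-4):
    """Sketch of steps 1)-6); helper functions are assumed, not defined here."""
    pyr_i = build_pyramid(frame_i, L)                  # step 1: L-layer image pyramid
    pyr_j = build_pyramid(frame_j, L)
    points = detect_feature_points(pyr_i[0])           # step 1: feature points of the target region
    if len(points) <= 8:                               # step 2: require more than 8 feature points
        return None                                    # (in practice: re-detect, i.e. go back to step 1)

    uv = None
    for l in range(L):                                 # step 6: advance through the pyramid layers
        prev_uv = None
        for p in range(P):                             # step 5: at most P iterations per layer
            uv, H = solve_optical_flow_equation(pyr_i[l], pyr_j[l], points, uv)   # step 3
            if prev_uv is not None and np.max(np.abs(uv - prev_uv)) <= eps:       # step 4: 0.0003 test
                break
            prev_uv = uv
    return uv
```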
By providing this motion-capture algorithm based on an improved optical flow method, the present invention achieves the following beneficial effects: both speed and accuracy are greatly improved, the time complexity and stability are markedly better than those of the original HS optical flow method, and the real-time requirement of motion-capture systems is well satisfied.
Brief description of the drawings
In order to explain the embodiments of the present invention or the technical solutions of the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a framework diagram of the holographic imaging system of the present invention.
Detailed description of the invention
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings of the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art from the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
As shown in Fig. 1, the present embodiment provides a motion-capture algorithm based on an improved optical flow method, comprising the following steps:
1) create an L-layer image pyramid, detect feature points in the target region, and allow P iterations per layer;
2) judge whether the number of feature points is greater than 8; if not, return to step 1); if yes, proceed to the next step (steps 1) and 2) are sketched after this list);
3) solve u, v and H from the optical flow equation, where u and v are the feature point sets of adjacent frames and H is a Hessian matrix;
4) judge whether the difference of u, v between two adjacent iterations is greater than 0.0003; if not, jump to step 6); if yes, proceed to the next step;
5) update the iteration count p = p + 1; if p < P, proceed to the next step; if p >= P, return to step 3);
6) update the pyramid layer index l = l + 1; if l < L, the iteration terminates; if l >= L, return to step 3).
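Steps 1) and 2) can be illustrated with standard OpenCV primitives. The routine below is a sketch under assumed choices (the Shi-Tomasi corner detector, its parameters, and a rectangular region of interest), not the detector specified by the patent.

```python
import cv2

def build_pyramid(gray, levels):
    """Step 1: build an L-layer image pyramid; layer 0 is the full-resolution frame."""
    pyramid = [gray]
    for _ in range(levels - 1):
        pyramid.append(cv2.pyrDown(pyramid[-1]))
    return pyramid

def detect_feature_points(gray, roi):
    """Steps 1)-2): detect feature points inside the target region and enforce the >8 rule."""
    x, y, w, h = roi
    corners = cv2.goodFeaturesToTrack(gray[y:y + h, x:x + w],
                                      maxCorners=100, qualityLevel=0.01, minDistance=5)
    if corners is None or len(corners) <= 8:
        return None                                  # step 2 fails: detection must be repeated
    corners = corners.reshape(-1, 2)
    corners[:, 0] += x                               # shift back to full-image coordinates
    corners[:, 1] += y
    return corners
```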
In use, the algorithm first constructs image pyramids for the consecutive frames I and J of the image sequence. u is the motion-capture feature point set in image I; the point set v corresponding to u in image J has to be solved and is initialised to u. Assume the pyramid has L layers in total, l = 0, ..., L_m; u^l = u / 2^l denotes the coordinates of u at pyramid layer l, and the optical flow field is initialised to [0, 0]. In the formulas, k is the iteration index within each layer. The algorithm uses the Newton steepest-descent method: if the initial value is close to the true value, at most about 5 iterations suffice to converge to an accurate value, but if the initial value differs greatly from the true value the iteration diverges. The update rule is v_k = v_{k-1} + η, where v_k is the initial value of the k-th iteration, v_{k-1} is the initial value of the (k-1)-th iteration, and η is the result of the (k-1)-th iteration.
The spatial gradients of I^L at the point u^L and the gradient in the time direction are computed:
I_t(x, y) = J^L'(x', y') − I^L'(x, y)
where I^L' denotes the image pixel values after Gaussian convolution and (x', y') is the point in image J^L corresponding to the point (x, y) of image I^L. I_x(x, y) and I_y(x, y) need to be computed only once per pyramid layer, whereas I_t(x, y) has to be recomputed at every iteration. Substituting the above into the optical flow equation yields H, and from H and u^L the corresponding point set v^L in image J^L is obtained: v^L = H u^L.
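A sketch of the gradient bookkeeping described above (Gaussian smoothing, spatial gradients computed once per layer, temporal gradient recomputed at every iteration) is shown below; the kernel size and Gaussian sigma are assumptions.

```python
import cv2
import numpy as np

def layer_gradients(I_l, J_l, sigma=1.0):
    """Per-layer pre-computation: smoothed images and spatial gradients Ix, Iy (computed once)."""
    I_s = cv2.GaussianBlur(I_l.astype(np.float64), (0, 0), sigma)
    J_s = cv2.GaussianBlur(J_l.astype(np.float64), (0, 0), sigma)
    Ix = cv2.Sobel(I_s, cv2.CV_64F, 1, 0, ksize=3)
    Iy = cv2.Sobel(I_s, cv2.CV_64F, 0, 1, ksize=3)
    return I_s, J_s, Ix, Iy

def temporal_gradient(I_s, J_s, x, y, dx, dy):
    """I_t(x, y) = J'(x', y') - I'(x, y), where (x', y') = (x + dx, y + dy) is the
    current estimate of the corresponding point; recomputed at every iteration."""
    xs, ys = int(round(x + dx)), int(round(y + dy))
    return J_s[ys, xs] - I_s[int(round(y)), int(round(x))]
```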
Outlier rejection is then performed on the corresponding point set v^L obtained in image J^L. The strategy currently adopted is as follows: first check whether the captured point lies outside the image; if so, it is directly marked as an outlier; if it lies inside the image, it is judged by the following formula:
After the outliers are removed, the optical flow field η is solved from the corresponding points. Δη = η_k − η_{k-1} is computed, where k is the iteration index. If the difference between the optical flow fields η of successive iterations is less than 0.0003 (an empirical value that can be adjusted according to circumstances), the iteration at the current layer is considered finished and processing can move on to the next layer. When entering the next layer, the optical flow field is processed as v^{l-1} = v^l × 2. After the global iteration is complete, H is solved again by RANSAC and a further part of the outliers can be removed.
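The layer-to-layer propagation v^{l-1} = 2 · v^l and the final RANSAC-based clean-up might be sketched as follows. The text does not state which model is fitted by RANSAC; a homography fit via cv2.findHomography is used here purely as an assumption.

```python
import cv2
import numpy as np

def propagate_to_next_layer(v_l):
    """Entering the next (finer) pyramid layer: v^(l-1) = 2 * v^l."""
    return 2.0 * np.asarray(v_l, dtype=np.float64)

def ransac_cleanup(points_u, points_v, reproj_thresh=3.0):
    """After the global iteration: re-estimate H with RANSAC and drop remaining outliers."""
    src = np.float32(points_u).reshape(-1, 1, 2)
    dst = np.float32(points_v).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, reproj_thresh)
    inliers = mask.ravel().astype(bool)
    return H, src[inliers].reshape(-1, 2), dst[inliers].reshape(-1, 2)
```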
The above description covers only the preferred embodiments of the present invention and is not intended to limit the present invention. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.
Claims (2)
1. A motion-capture algorithm based on an improved optical flow method, characterised in that the algorithm comprises the following steps:
1) create an L-layer image pyramid, detect feature points in the target region, and allow P iterations per layer;
2) judge whether the number of feature points is greater than 8; if not, return to step 1); if yes, proceed to the next step;
3) solve u, v and H from the optical flow equation;
4) judge whether the difference of u, v between two adjacent iterations is greater than 0.0003; if not, jump to step 6); if yes, proceed to the next step;
5) update the iteration count p = p + 1; if p < P, proceed to the next step; if p >= P, return to step 3);
6) update the pyramid layer index l = l + 1; if l < L, the iteration terminates; if l >= L, return to step 3).
2. The motion-capture algorithm based on an improved optical flow method according to claim 1, characterised in that in step 3), u and v are the feature point sets of adjacent frames and H is a Hessian matrix.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610667078.8A CN106228578A (en) | 2016-08-12 | 2016-08-12 | A kind of motion capture algorithm improving optical flow method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106228578A (en) | 2016-12-14
Family
ID=57548091
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610667078.8A (CN106228578A, pending) | A kind of motion capture algorithm improving optical flow method | 2016-08-12 | 2016-08-12
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106228578A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10424069B2 (en) | 2017-04-07 | 2019-09-24 | Nvidia Corporation | System and method for optical flow estimation |
US10467763B1 (en) | 2017-04-07 | 2019-11-05 | Nvidia Corporation | System and method for optical flow estimation |
CN110322477A (en) * | 2019-06-10 | 2019-10-11 | 广州视源电子科技股份有限公司 | Feature point observation window setting method, tracking method, device, equipment and medium |
CN110322477B (en) * | 2019-06-10 | 2022-01-04 | 广州视源电子科技股份有限公司 | Feature point observation window setting method, tracking method, device, equipment and medium |
Legal Events
Date | Code | Title | Description
---|---|---|---
| C06 | Publication |
| PB01 | Publication |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20161214