CN102222348B - Method for calculating three-dimensional object motion vector - Google Patents
- Publication number
- CN102222348B (application CN201110176736.0A)
- Authority
- CN
- China
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- Image Analysis (AREA)
Abstract
The invention discloses a method for calculating a three-dimensional object motion vector. The method comprises the following steps: corner detection and matching: identifying the target in a video stream, marking the corners of the target in a base frame and a key frame, and matching the corners with each other; calculating the three-dimensional motion vector of the key frame relative to the base frame from the matched corners in the base frame and the key frame; calculating a three-dimensional target model of the key frame from a base three-dimensional model and the three-dimensional motion vector, back-projecting the three-dimensional target model onto the two-dimensional plane, rejecting erroneous points, and calculating the final three-dimensional object motion vector.
Description
Technical field
The present invention relates to the field of multi-angle dynamic imaging, and in particular to a method for calculating a three-dimensional object motion vector in the dynamic display of a three-dimensional object in free-viewing-angle stereoscopic video.
Background technology
With the development of technology, display terminals showing stereoscopic effects have appeared in the research field. Among them, free-viewing-angle stereoscopic display is an active, truly dynamic form of display that does not rely on parallax to form the stereoscopic effect; the observer can independently choose the viewing angle and distance. Calculating the three-dimensional motion vector of the target is a difficult point in free-viewing-angle stereoscopic display, and it is also the key to moving stereoscopic vision from static to dynamic. By calculating the three-dimensional motion vector of the target, the motion of a three-dimensional object can be tracked, simulated, and displayed in video. It therefore has important functions and significance for the field of video signal processing, and is an indispensable key step in stereoscopic video display.
Summary of the invention
Objective of the invention: the technical problem to be solved by the invention is to provide, for a stereoscopic video display system, a linear method for calculating the three-dimensional motion vector of a target.
Technical scheme: the invention discloses the calculation of the three-dimensional motion vector in stereoscopic video display, comprising the following steps:
Step (1), corner detection and matching: identify the target in the video stream, mark the corners of the target in the base frame and the key frame, and match them;
Step (2), from the corners matched in the base frame and the key frame, preliminarily calculate the three-dimensional motion of the key frame relative to the base frame;
Step (3), calculate the three-dimensional target model of the key frame from the motion and the base three-dimensional model, back-project it onto the two-dimensional plane to reject erroneous points, and calculate the final three-dimensional motion vector.
In the present invention, step (1) comprises the following steps:
Step (11), identify the target in the key frame and the base frame;
Step (12), mark the corners of the target in the base frame and the key frame;
Step (13), match the key frame to the base frame: find the corresponding corners of the key frame and the base frame and their two-dimensional coordinates.
In the present invention, step (2) comprises the following steps:
Based on camera 1 and camera 2:
Use the approximate method to derive a system of linear equations:
where φ1, φ2, φ3 are respectively the simplified three-dimensional motion vector parameters, and t1, t2, t3 are respectively the translation matrix parameters; form the three-dimensional motion vector M:
Or use the non-approximate method to derive a system of linear equations:
where r11, r21, r31, r12, r22, r32, r13, r23, r33 are the rotation matrix parameters; form the three-dimensional motion vector M:
where (Xi, Yi, Zi) denotes the three-dimensional coordinates of any point in the base frame of camera 1 or camera 2; (Xi′, Yi′, Zi′) denotes the three-dimensional coordinates of that point in the key frame of camera 1 or camera 2; (xi, yi) denotes the two-dimensional coordinates of the point projected onto the two-dimensional plane of camera 1 or camera 2 in the base frame; (xi′, yi′) denotes the two-dimensional coordinates of the point projected onto the two-dimensional plane of camera 1 or camera 2 in the key frame;
Solve the least-mean-square-error solution of the system of linear equations to obtain the motion matrix parameters, i.e., the motion vector.
In the present invention, step (3) comprises the following steps:
For the two cameras, apply said motion matrix parameters to the base three-dimensional model to obtain the three-dimensional models of the key frames of the two cameras' videos;
For the two cameras, back-project the three-dimensional model of each camera's key frame onto the two-dimensional plane corresponding to that camera's matrix, compare it with the two-dimensional corner coordinates of the key frame of the corresponding camera, reject points whose error exceeds a threshold, and use step (2) to recompute the motion matrix parameters, obtaining the final three-dimensional motion vector.
Beneficial effects: the present invention quickly calculates the relative motion of the key frame and obtains the key-frame three-dimensional model from the static three-dimensional model of the base frame, in order to obtain motion video of the same scene from a free viewing angle. The present invention does not require camera calibration, has good real-time performance, benefits the smooth display of dynamic video, and the algorithm has a certain robustness. In a free-viewing-angle display system, a time-consuming three-dimensional reconstruction can be carried out at long intervals, while the method of the invention calculates the motion in real time and adjusts the position of the three-dimensional object, so that the display is more real-time without sacrificing precision.
Description of drawings
The present invention is further described below in conjunction with the drawings and specific embodiments; the above and/or other advantages of the present invention will become apparent.
Fig. 1 illustrates the base frame and key frame of a video sequence of the present invention.
Fig. 2 is a projection view of the coordinate systems involved in the present invention.
Fig. 3 shows the comparison between the experiment of the present invention and the control from existing research.
Embodiment:
On the basis of static three-dimensional reconstruction, the method of the invention uses two cameras (extensible to multiple cameras) to calculate the target three-dimensional motion vector from two synchronously captured motion video views of the same scene, in order to realize dynamic stereoscopic video display.
When implementing the invention, the conditions to be satisfied are:
Synchronized video streams: capture of the video streams is started so as to synchronize the two-dimensional scenes shot by cameras at different viewing angles; in this embodiment, image frames captured by the two cameras at the same moment are used, as in Fig. 1.
When implementing the invention, the known information is:
Base frame and base three-dimensional model: the base three-dimensional model is built around the base frame; the base frame comprises the base frames of the two cameras.
Projection matrices of the two cameras: the camera projection matrices are obtained in the process of reconstructing the base three-dimensional model.
In the present invention, on the basis of completed static three-dimensional reconstruction, the three-dimensional motion vector of the target is calculated from the two-dimensional video pictures provided by the two cameras. As shown in Fig. 1, key frames may be chosen as equally spaced frames of the image sequence; the present embodiment chooses one frame and calculates its motion vector relative to the base frame, and the method is identical for the other frames.
Implementation process is as follows:
Step one, corner detection and matching: identify the respective targets in the video streams of the two cameras, and mark the corners of the target in the base frame and the key frame of each of the two cameras;
See the applicant's invention patent, application number 200910234584.8 filed on November 23, 2009, entitled "A method for displaying stereoscopic video with free visual angles". The brief steps are:
1.1 Use the Harris algorithm to make a first estimate of the corners in the base frame;
1.2 Further screen the estimated corners with the SUSAN algorithm to obtain the final corners;
1.3 Match the key frame to the base frame: with a template-window matching method, find the corresponding corners of the key frame and the base frame, with two-dimensional coordinates (x1, y1) in the base frame and (x1′, y1′) in the key frame.
Using the above method, the key frame is matched to the base frame for each of the two cameras.
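The Harris response used in step 1.1 can be sketched numerically. The following is a minimal, hedged numpy illustration of the Harris corner response only (structure tensor of the image gradients, R = det − k·tr²), not the patent's actual implementation; the synthetic test image, window size and k value are illustrative assumptions:

```python
import numpy as np

def harris_response(img, k=0.04, win=2):
    """Harris corner response R = det(M) - k*trace(M)^2 per pixel,
    where M is the gradient structure tensor summed over a window."""
    Iy, Ix = np.gradient(img)               # image gradients (rows = y)
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box_sum(a, r):                      # sum over a (2r+1)x(2r+1) window
        out = np.zeros_like(a)
        for di in range(-r, r + 1):
            for dj in range(-r, r + 1):
                out += np.roll(np.roll(a, di, axis=0), dj, axis=1)
        return out

    Sxx, Syy, Sxy = (box_sum(m, win) for m in (Ixx, Iyy, Ixy))
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace ** 2

# synthetic test image: a bright square whose top-left corner is at (8, 8)
img = np.zeros((20, 20))
img[8:, 8:] = 1.0
resp = harris_response(img)
# corner -> strongly positive; edge -> negative; flat region -> ~0
corner_r, edge_r, flat_r = resp[8, 8], resp[8, 15], resp[2, 2]
```

In practice the response map would be thresholded and non-maximum suppressed to obtain candidate corners, which the SUSAN screening of step 1.2 then refines.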
Step two, calculate the three-dimensional motion of the key frame relative to the base frame.
The present invention adopts the affine camera model in homogeneous coordinates, as shown in Fig. 2; it is applicable to a realistic camera when the depth variation of the target is negligible relative to its depth.
Consider camera 1 first. For any point, let its three-dimensional coordinates in the base frame and the key frame be (X1, Y1, Z1) and (X1′, Y1′, Z1′) respectively, and its two-dimensional coordinates be (x1, y1) and (x1′, y1′) respectively.
The mapping from the three-dimensional coordinates to the two-dimensional coordinates is written as:
where
is the camera projection matrix.
Then the motion of the key frame relative to the base frame can be expressed as:
where the motion matrix M is:
where R is the rotation matrix, T is the translation matrix, and all elements of the 1×3 matrix O are zero. Although the rotation matrix R has 9 elements, it is an orthogonal matrix and satisfies 6 independent constraints:
so there are really only 3 independent parameters. The rotation matrix can be represented by the Roll-Pitch-Yaw method (rotations about the x, y and z axes of a right-handed Cartesian coordinate system), where θ is the rotation angle:
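The Roll-Pitch-Yaw representation and its constraints can be checked numerically. A hedged sketch follows; the composition order Rz·Ry·Rx is an assumption (the patent's exact convention is in an omitted formula), and the angle values are illustrative. It verifies the 6 orthogonality constraints R·Rᵀ = I, the determinant +1, and the small-angle approximation R ≈ I + [φ]× used later:

```python
import numpy as np

def rot_x(a): c, s = np.cos(a), np.sin(a); return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
def rot_y(a): c, s = np.cos(a), np.sin(a); return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
def rot_z(a): c, s = np.cos(a), np.sin(a); return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def rpy(roll, pitch, yaw):
    """Rotation about x (roll), y (pitch), z (yaw), composed as Rz @ Ry @ Rx."""
    return rot_z(yaw) @ rot_y(pitch) @ rot_x(roll)

R = rpy(0.3, -0.2, 0.5)
orthogonality_err = np.abs(R @ R.T - np.eye(3)).max()  # the 6 constraints
det_R = np.linalg.det(R)                               # proper rotation: +1

# small-angle approximation: R ~ I + [phi]x, error is O(theta^2)
phi = np.array([0.01, -0.02, 0.015])
skew = np.array([[0, -phi[2], phi[1]],
                 [phi[2], 0, -phi[0]],
                 [-phi[1], phi[0], 0]])
approx_err = np.abs(rpy(*phi) - (np.eye(3) + skew)).max()
```

The small second-order error is what justifies the linearized system of equations used in the approximate method below.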
The solution of the motion is divided into 2 steps:
2.1 Step 1: reduce the solution of the motion to solving the system of equations b = Ax.
From the three formulas above we get:
Each point provides two equations, as follows:
1. " if make small angle approximation (key frame is separated by when nearer, and it is less to move, and can make small angle approximation), and the Roll of (6) formula of employing, Pitch and Yaw method for expressing, rotation matrix is reduced to, φ
1, φ
2, φ
3Be respectively the three-dimensional motion vector reduced parameter:
(n wherein
1θ)
2+ (n
2θ)
2+ (n
3θ)
2=θ
2So, φ
1 2+ φ
2 2+ φ
3 2=θ
2(8)
System of equations (6) is rewritten as:
If only multiple points of a single camera are used, the following system of equations can be listed:
Denote the first matrix on the right-hand side of the above equation by A; its rank is at most 5. This is because a single camera cannot provide depth information along its optical axis direction. The proof is given below:
i. Observe that in matrix A, alternating rows of the right three columns are identical. Perform row transformations on this matrix: use the first row to cancel the right three columns of rows 2i−1, and the second row to cancel the right three columns of rows 2i (i = 2, 3, …, n):
ii. In the resulting matrix, the rank of the right three columns is at most 2, since only two rows are nonzero; the rank of the left three columns is at most 3. So the rank of this matrix is at most 5. Elementary row transformations do not change the rank of a matrix, so the rank of A is at most 5.
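The rank argument can be confirmed numerically. The sketch below builds the coefficient matrix of the linearized equations x′ − x = Pₘ([φ]× X + T), where Pₘ is the left 2×3 block of an affine camera's projection matrix; the random matrix values and points are assumptions, but the coefficient layout follows the equations above. One camera yields rank 5; adding a second camera generically restores full rank 6:

```python
import numpy as np

rng = np.random.default_rng(0)

def rows_for_point(Pm, X):
    """Two rows of A (unknowns phi1..3, t1..3) for one 3D point X,
    from the linearization x' - x = Pm @ (phi x X + T)."""
    x, y, z = X
    out = []
    for p1, p2, p3 in Pm:                    # one row per image coordinate
        out.append([p3 * y - p2 * z,          # coeff of phi1
                    p1 * z - p3 * x,          # coeff of phi2
                    p2 * x - p1 * y,          # coeff of phi3
                    p1, p2, p3])              # coeffs of t1, t2, t3
    return out

Pm1 = rng.normal(size=(2, 3))                 # left 2x3 block of camera 1
points = rng.normal(size=(10, 3))             # 10 generic 3D points

A_one_cam = np.array([r for X in points for r in rows_for_point(Pm1, X)])
rank_one = np.linalg.matrix_rank(A_one_cam)   # at most 5: depth along the
                                              # optical axis is unobservable

Pm2 = rng.normal(size=(2, 3))                 # a second, different camera
A_two_cam = np.vstack([A_one_cam,
                       np.array([r for X in points for r in rows_for_point(Pm2, X)])])
rank_two = np.linalg.matrix_rank(A_two_cam)   # generically full rank 6
```

The translation columns repeat only two distinct rows per camera, which is exactly the rank-2 bound used in part ii of the proof.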
From the above proof, solving the rigid-body three-dimensional motion requires at least two cameras. The projection matrix of the second camera is:
So to solve the motion, the system of equations to be set up for multiple points of the two cameras is:
Denote the matrix on the left of the equation by b, and the matrices on the right, in order, by A and x. Now the rank of A is at most 6, and solving the motion turns into solving the system of equations b = Ax.
The n points comprise m points of camera 1 and k points of camera 2, n = m + k (m, k ≥ 1). In theory, when n = 3, i.e., three point pairs from the two cameras are used, the system of equations can be solved exactly; when n > 3, the overdetermined system is solved to obtain the result.
Here the equations can also be extended to multiple cameras: it is only necessary to append rows at the bottom of the matrix, adding the equations of the additional cameras; the form of the equations is identical.
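The two-camera small-angle solve can be sketched end to end. This is a hedged illustration, not the patent's implementation: the cameras, points and motion are synthetic, and the observations are generated with the linearized model itself (x′ − x = Pₘ(φ×X + T)), so the least-squares solution recovers the parameters exactly:

```python
import numpy as np

rng = np.random.default_rng(1)

def skew(v):
    return np.array([[0., -v[2], v[1]],
                     [v[2], 0., -v[0]],
                     [-v[1], v[0], 0.]])

def point_block(Pm, X):
    """2x6 block of A for one point of one affine camera, from
    x' - x = Pm @ (phi x X + T) = Pm @ (-[X]x phi + T)."""
    return np.hstack([-Pm @ skew(X), Pm])

# assumed setup: two affine cameras and a small rigid motion
Pm1, Pm2 = rng.normal(size=(2, 3)), rng.normal(size=(2, 3))
phi_true = np.array([0.01, -0.02, 0.015])   # small rotation (rad)
t_true = np.array([0.5, -0.3, 0.2])

pts1 = rng.normal(size=(6, 3))              # m = 6 points seen by camera 1
pts2 = rng.normal(size=(5, 3))              # k = 5 points seen by camera 2

A, b = [], []
for Pm, pts in ((Pm1, pts1), (Pm2, pts2)):
    for X in pts:
        disp = Pm @ (np.cross(phi_true, X) + t_true)  # x' - x, linear model
        A.append(point_block(Pm, X))
        b.append(disp)
A = np.vstack(A)
b = np.hstack(b)

x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
phi_hat, t_hat = x_hat[:3], x_hat[3:]
```

With real image measurements the fit is only approximate, which is why the patent follows this step with back-projection and outlier rejection.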
2. If the small-angle approximation is not made, two cameras are likewise needed, and the system of equations is:
For the non-approximate algorithm, to guarantee that matrix A has full rank, m ≥ 4 and k ≥ 4 must be satisfied, i.e., at least 4 + 4 = 8 feature points are needed in total.
Likewise, the equations can be extended to multiple cameras.
2.2 Step 2: solve the least-mean-square-error solution of the system of linear equations b = Ax.
Adopt the QR (orthonormal matrix – upper triangular matrix) decomposition method; the solution is as follows:
Owing to the existence of error, Ax = b + ε, and the problem is to find the x that minimizes the norm ‖ε‖₂².
A matrix Q can be found such that
where Q is an orthogonal matrix and R is a nonsingular upper triangular matrix.
Notice that ‖b2‖₂² is constant, so the original problem min ‖Ax − b‖₂² is converted into min ‖Rx − b1‖₂², that is: Rx − b1 = 0.
So:
x = R⁻¹ b1   (14)
At this point the vector x is solved.
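The QR-based least-squares solve above can be reproduced with a few lines of numpy; this is an illustrative sketch on assumed random data, not the patent's data, cross-checked against numpy's own least-squares solver:

```python
import numpy as np

rng = np.random.default_rng(2)

# overdetermined system b = A x + noise (more equations than unknowns)
A = rng.normal(size=(12, 6))
x_true = rng.normal(size=6)
b = A @ x_true + 0.01 * rng.normal(size=12)

# QR decomposition: A = Q R with Q orthonormal, R upper triangular.
# min ||Ax - b||^2  ==  min ||R x - Q^T b||^2, solved by back-substitution.
Q, R = np.linalg.qr(A)          # "reduced" QR: Q is 12x6, R is 6x6
x_qr = np.linalg.solve(R, Q.T @ b)

# cross-check against numpy's least-squares solver
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
```

The QR route avoids forming AᵀA explicitly, which keeps the solve numerically better conditioned than the normal equations.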
For the approximate algorithm of system of equations (12), x = (φ1, φ2, φ3, t1, t2, t3)ᵀ;
For the non-approximate algorithm of system of equations (13), x = (r11, r21, r31, r12, r22, r32, r13, r23, r33, t1, t2, t3)ᵀ.
Step three, calculate the target three-dimensional model of the key frame from the three-dimensional motion vector calculated in step two; for each of the two cameras, back-project it onto the two-dimensional plane corresponding to that camera's matrix, compare with the two-dimensional corner coordinates of that camera's key frame, reject points whose error exceeds a threshold, and then recompute the three-dimensional motion vector by step two.
Mismatches cause very large back-projection errors, and these errors directly affect the calculation result, so the present invention proposes to reject erroneous points. Since the back-projection error caused by a mismatch is generally more than 10 pixels, even 20, while normal points are mostly within 2 pixels, the threshold is set to 10.
If the error exceeds the threshold in either the x or the y direction, the point is rejected as erroneous. Then, for the l remaining points (l ≤ n), step two is repeated, and the three-dimensional motion vector is recomputed by formula (15) or formula (16).
The result is taken as the final three-dimensional motion vector.
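The rejection rule above can be sketched as follows; a hedged illustration with simulated back-projection errors (the point counts, noise level and mismatch offsets are assumptions, the 10-pixel threshold is the patent's):

```python
import numpy as np

rng = np.random.default_rng(3)
THRESHOLD = 10.0   # pixels, as in the text

# assumed setup: back-projected key-frame corners vs. detected corners.
# inliers deviate by < 2 px; a few mismatches deviate by > 10 px.
n = 30
detected = rng.uniform(0, 640, size=(n, 2))
reproj = detected + rng.normal(0, 0.8, size=(n, 2))      # normal points
bad = np.array([3, 11, 25])                              # simulated mismatches
reproj[bad] += rng.choice([-1, 1], size=(3, 2)) * rng.uniform(12, 25, size=(3, 2))

# reject a point if the error exceeds the threshold in x OR y
err = np.abs(reproj - detected)
keep = (err[:, 0] < THRESHOLD) & (err[:, 1] < THRESHOLD)

l = keep.sum()   # the l remaining points are fed back into step two
```

The surviving l points would then be passed back through the linear solve of step two to produce the final motion vector.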
A control experiment was carried out comparing the present invention with the Han-Kanade method, which is relatively mature in existing research. The Han-Kanade method is based on three-dimensional reconstruction from uncalibrated images and can also recover motion. The experimental embodiment is: a real object is rotated 5 degrees about the x direction, the control input points are identical, and the calculated rotation angles are compared, as in Fig. 3. It can be seen that the precision is at the same level. In terms of time, however, a MATLAB implementation of the computation for three frames takes 5 to 10 minutes, and reconstructing an object over a full 360 degrees takes even more time: 5 cameras required more than half an hour. Moreover, the fewer the cameras, the higher the requirement on wide-angle matching, so the number of cameras cannot simply be reduced. Therefore, in a free-viewing-angle display system, time-consuming three-dimensional reconstruction can be carried out at long intervals, while the motion is calculated in real time with the method of the invention and the position of the three-dimensional object adjusted, making the display more real-time without sacrificing precision.
The invention provides the idea and method of a three-dimensional motion vector calculation method in stereoscopic video display; there are many specific ways and approaches to implement this technical scheme. The above is only a preferred embodiment of the invention; it should be pointed out that, for those of ordinary skill in the art, several improvements and modifications can also be made without departing from the principle of the invention, and these improvements and modifications should also be regarded as falling within the protection scope of the invention. Each component not made explicit in the present embodiment can be realized with the prior art.
Claims (1)
1. A method for calculating a three-dimensional object motion vector, characterized in that it comprises the following steps:
Step (1), corner detection and matching: identify the respective targets in the video streams of two cameras, mark the corners of the target in the base frame and the key frame of each of the two cameras, and match the key frame to the base frame for each of the two cameras;
Step (2), from the corners matched in the base frame and the key frame, calculate the three-dimensional motion matrix of the key frame relative to the base frame;
Step (3), calculate the target three-dimensional model of the key frame from the base three-dimensional model and said three-dimensional motion matrix, back-project the target three-dimensional model onto the two-dimensional plane, reject erroneous points, and calculate the final target motion vector;
Step (1) comprises the following steps:
identify the respective targets in the key frame and the base frame in the video streams of the two cameras;
mark the corners of the target in the base frame and the key frame of each of the two cameras;
match the corners of the base frame and the key frame: find the corresponding corners of the key frame and the base frame of each of the two cameras, and obtain the two-dimensional coordinates of the corners;
Step (2) comprises the following steps:
based on camera 1 and camera 2: for any point, let its three-dimensional coordinates in the base frame and the key frame be (X1, Y1, Z1) and (X1′, Y1′, Z1′) respectively, and its two-dimensional coordinates be (x1, y1) and (x1′, y1′) respectively;
the mapping from the three-dimensional coordinates to the two-dimensional coordinates is written as:
where
is the projection matrix of camera 1;
then the motion of the key frame relative to the base frame is expressed as:
where the three-dimensional motion matrix M is:
(3) where R is the rotation matrix, with r11, r21, r31, r12, r22, r32, r13, r23, r33 the rotation matrix parameters; T is the translation matrix, with t1, t2, t3 respectively the translation matrix parameters; and all elements of the 1×3 matrix O are zero;
the rotation matrix is represented by rotations about the x, y and z axes of a right-handed Cartesian coordinate system, with θ the rotation angle:
the solution of the motion is divided into 2 steps:
reduce the solution of the motion to solving the system of equations b = Ax:
from the three formulas above:
each point provides two equations, as follows:
if the small-angle approximation is made and the representation of formula (6) is adopted, the rotation matrix simplifies to:
where (n1θ)² + (n2θ)² + (n3θ)² = θ², so φ1² + φ2² + φ3² = θ²   (8), where φ1, φ2, φ3 are respectively the simplified three-dimensional motion matrix parameters;
system of equations (6) is rewritten as:
let
be the projection matrix of camera 2;
the system of equations set up with the approximate method for multiple points of the two cameras is:
or, with the non-approximate method, the system of linear equations is:
where (Xi, Yi, Zi) denotes the three-dimensional coordinates of any point in the base frame of camera 1 or camera 2, and (xi′, yi′) denotes the two-dimensional coordinates of the point projected onto the two-dimensional plane of camera 1 or camera 2 in the key frame;
solve the least-mean-square-error solution of the system of linear equations to obtain the three-dimensional motion matrix parameters, and form the three-dimensional motion matrix;
Step (3) comprises the following steps:
for the two cameras, apply said three-dimensional motion matrix parameters to the base three-dimensional model to obtain the three-dimensional models of the key frames of the two cameras' videos;
back-project the three-dimensional model of each camera's key frame onto the two-dimensional plane corresponding to that camera's projection matrix, compare with the two-dimensional corner coordinates of the key frame of the corresponding camera, reject points whose error exceeds a threshold, and use step (2) to recompute the three-dimensional motion matrix parameters, obtaining the final target motion vector.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201110176736.0A CN102222348B (en) | 2011-06-28 | 2011-06-28 | Method for calculating three-dimensional object motion vector |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102222348A CN102222348A (en) | 2011-10-19 |
CN102222348B true CN102222348B (en) | 2013-04-24 |
Family
ID=44778892
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201110176736.0A Expired - Fee Related CN102222348B (en) | 2011-06-28 | 2011-06-28 | Method for calculating three-dimensional object motion vector |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102222348B (en) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104123716B (en) * | 2013-04-28 | 2016-08-10 | 腾讯科技(深圳)有限公司 | The detection method of picture steadiness, device and terminal |
US9317770B2 (en) | 2013-04-28 | 2016-04-19 | Tencent Technology (Shenzhen) Co., Ltd. | Method, apparatus and terminal for detecting image stability |
CN107230220B (en) * | 2017-05-26 | 2020-02-21 | 深圳大学 | Novel space-time Harris corner detection method and device |
CN107808388B (en) * | 2017-10-19 | 2021-10-12 | 中科创达软件股份有限公司 | Image processing method and device containing moving object and electronic equipment |
CN107707899B (en) * | 2017-10-19 | 2019-05-10 | 中科创达软件股份有限公司 | Multi-view image processing method, device and electronic equipment comprising moving target |
DE102018105063A1 (en) | 2018-03-06 | 2019-09-12 | Ebm-Papst Mulfingen Gmbh & Co. Kg | Apparatus and method for air volume detection |
CN109063567B (en) * | 2018-07-03 | 2021-04-13 | 百度在线网络技术(北京)有限公司 | Human body recognition method, human body recognition device and storage medium |
CN109146932B (en) * | 2018-07-17 | 2021-08-24 | 北京旷视科技有限公司 | Method, device and system for determining world coordinates of target point in image |
CN111179271B (en) * | 2019-11-22 | 2021-05-11 | 浙江众合科技股份有限公司 | Object angle information labeling method based on retrieval matching and electronic equipment |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7164800B2 (en) * | 2003-02-19 | 2007-01-16 | Eastman Kodak Company | Method and system for constraint-consistent motion estimation |
CN101336856B (en) * | 2008-08-08 | 2010-06-02 | 西安电子科技大学 | Information acquisition and transfer method of auxiliary vision system |
CN101729920B (en) * | 2009-11-23 | 2011-10-19 | 南京大学 | Method for displaying stereoscopic video with free visual angles |
- 2011
- 2011-06-28 CN CN201110176736.0A patent/CN102222348B/en not_active Expired - Fee Related
Also Published As
Publication number | Publication date |
---|---|
CN102222348A (en) | 2011-10-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102222348B (en) | Method for calculating three-dimensional object motion vector | |
CN109166077B (en) | Image alignment method and device, readable storage medium and computer equipment | |
EP3067861B1 (en) | Determination of a coordinate conversion parameter | |
CN102697508B (en) | Method for performing gait recognition by adopting three-dimensional reconstruction of monocular vision | |
CN108399643A (en) | A kind of outer ginseng calibration system between laser radar and camera and method | |
CN102842117B (en) | Method for correcting kinematic errors in microscopic vision system | |
CN104376552A (en) | Virtual-real registering algorithm of 3D model and two-dimensional image | |
US20130136302A1 (en) | Apparatus and method for calculating three dimensional (3d) positions of feature points | |
CN101729920B (en) | Method for displaying stereoscopic video with free visual angles | |
CN104677330A (en) | Small binocular stereoscopic vision ranging system | |
CN111415375B (en) | SLAM method based on multi-fisheye camera and double-pinhole projection model | |
CN102831601A (en) | Three-dimensional matching method based on union similarity measure and self-adaptive support weighting | |
CN110065075A (en) | A kind of spatial cell robot external status cognitive method of view-based access control model | |
CN111429571A (en) | Rapid stereo matching method based on spatio-temporal image information joint correlation | |
CN113487726B (en) | Motion capture system and method | |
CN103426170A (en) | Hidden target imaging method based on non-structural light field synthesis aperture imaging | |
Huang et al. | MC-VEO: A visual-event odometry with accurate 6-DoF motion compensation | |
CN103006332A (en) | Scalpel tracking method and device and digital stereoscopic microscope system | |
CN112329723A (en) | Binocular camera-based multi-person human body 3D skeleton key point positioning method | |
Gaschler et al. | Epipolar-based stereo tracking without explicit 3d reconstruction | |
CN116630423A (en) | ORB (object oriented analysis) feature-based multi-target binocular positioning method and system for micro robot | |
Grundmann et al. | A gaussian measurement model for local interest point based 6 dof pose estimation | |
CN109712195A (en) | The method for carrying out homography estimation using the public self-polar triangle of ball picture | |
Chen et al. | End-to-end multi-view structure-from-motion with hypercorrelation volume | |
Liu et al. | Improved template matching based stereo vision sparse 3D reconstruction algorithm |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20130424; Termination date: 20160628 |
CF01 | Termination of patent right due to non-payment of annual fee |