CN1851618A - Single-eye vision semi-matter simulating system and method - Google Patents

Single-eye vision semi-matter simulating system and method

Info

Publication number
CN1851618A
CN1851618A (application CN200610083639A)
Authority
CN
China
Prior art keywords: virtual, video camera, unit, plane, projection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 200610083639
Other languages
Chinese (zh)
Other versions
CN100416466C (en)
Inventor
Zhang Guangjun
Liu Zhen
Sun Junhua
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Beijing University of Aeronautics and Astronautics
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CNB2006100836396A priority Critical patent/CN100416466C/en
Publication of CN1851618A publication Critical patent/CN1851618A/en
Priority to US11/561,696 priority patent/US7768527B2/en
Application granted granted Critical
Publication of CN100416466C publication Critical patent/CN100416466C/en
Legal status: Expired - Fee Related

Landscapes

  • Image Processing (AREA)

Abstract

The system of the present invention comprises a virtual imaging unit, an image projection unit, and an image acquisition unit. In the semi-physical (hardware-in-the-loop) simulation, the method comprises: in the virtual imaging unit, constructing a virtual scene and setting its position and attitude with software such as OpenGL, and projecting the virtual scene onto the computer screen through a virtual camera; the image projection unit then projects the image onto a projection plane; the image acquisition unit captures the projected image on the projection plane; finally, computer software calculates the intrinsic parameters of the virtual camera and the position and attitude of the virtual scene, and the accuracy error of the simulation result is obtained by comparison with the set values. The invention has the advantages of simple equipment, easy implementation, and a wide range of application, and is especially suitable for simulating calibration and vision-guided navigation and positioning.

Description

Single-eye vision semi-matter simulating system and method
Technical field
The present invention relates to a monocular-vision semi-physical (hardware-in-the-loop) simulation system and method.
Background technology
With the development of computer vision theory and computer technology, computer vision has been widely applied to target recognition, visual guidance, industrial measurement, industrial control, and related fields. Semi-physical simulation, also called hardware-in-the-loop simulation, inserts real hardware into the simulation system wherever conditions permit, replacing the mathematical model of the corresponding part; the simulation is therefore closer to actual conditions and yields more reliable information. Complex situations are often encountered in computer vision research. For example, algorithm research frequently requires a large number of scene images, but field conditions are usually complex, and it is difficult to conduct full-scale field experiments and acquire a large number of scene images at the early stage of research; doing so would both waste funds and slow the research. It is therefore highly desirable to carry out simulation experiments at the early stage of research. Although purely digital simulation saves funds and shortens the research period, modeling a complex system is difficult, so the system has to be simplified and many states assumed ideal; even with added noise, it is hard to obtain simulation results close to actual conditions. Hardware-in-the-loop simulation therefore shows its advantage at the early stage of computer vision research. However, the existing literature does not describe a semi-physical simulation system that solves the difficulty of simulating the complex situations encountered in computer vision research, so designing a monocular-vision semi-physical simulation system to resolve these difficulties is both necessary and urgent.
Summary of the invention
The objective of the present invention is to overcome the defects of the prior art and to provide a monocular-vision semi-physical simulation system and method for carrying out the simulation work of computer vision research under complex environments. The invention is especially suitable for simulating calibration and vision-guided navigation and positioning, and has the advantages of simple equipment, easy implementation, and a wide range of application.
The monocular-vision semi-physical simulation system of the present invention comprises a virtual imaging unit, an image projection unit, and an image acquisition unit. The three units are connected in sequence, and their relative positions must not change. The virtual imaging unit projects the virtual scene onto the computer screen; the image projection unit projects the virtual scene shown on the computer screen onto a projection plane; the image acquisition unit captures the image on the projection plane with a monocular camera.
The monocular hardware-in-the-loop simulation method of the present invention comprises the following steps. First, in the virtual imaging unit, the virtual scene is constructed with software such as OpenGL, its position and attitude are set, and the virtual scene is projected onto the computer screen through a virtual camera. Then, the image projection unit projects the image onto the projection plane, and the image acquisition unit captures the projected image on the projection plane. Finally, computer software calculates the intrinsic parameters of the virtual camera and the position and attitude of the virtual scene; the accuracy error of the simulation result is obtained by comparison with the set values.
The main advantages of the present invention are: (1) it creatively combines computer vision with virtual reality technology and effectively solves the difficulty of simulating complex situations; (2) the equipment is simple, easy to implement, and widely applicable; (3) the simulation environment avoids the high cost and long setup time of full-scale field experiments, while yielding simulation results closer to reality than purely digital simulation.
Description of drawings
The monocular-vision semi-physical simulation system and method of the present invention are described in further detail below with reference to the accompanying drawings and an embodiment.
Fig. 1 is a schematic diagram of the system architecture of the present invention;
Fig. 2 is a schematic diagram of the structure of the system of the present invention;
Fig. 3 is a schematic flow diagram of the method of the present invention;
Fig. 4 is a schematic diagram of the relationship between the field angle and the scale factors of the virtual camera of the present invention;
Fig. 5 is a schematic diagram of the planar target of the embodiment of the present invention;
Fig. 6 is a schematic diagram of the projected image of the virtual scene of the embodiment of Fig. 5;
Fig. 7 is a schematic diagram of the feature points extracted from the image of the embodiment of Fig. 6.
Embodiment
The monocular-vision semi-physical simulation system shown in Fig. 1 comprises a virtual imaging unit, an image projection unit, and an image acquisition unit. The three units are connected in sequence, and their relative positions must not change. The virtual imaging unit projects the virtual scene onto the computer screen; the image projection unit projects the virtual scene shown on the computer screen onto a projection plane; the image acquisition unit captures the image on the projection plane with a monocular camera.
As shown in Fig. 2, the virtual imaging unit of the present invention comprises a computer 1 equipped with virtual-viewpoint software, and the image projection unit comprises a projector 2 and a projection screen 3. The image acquisition unit comprises a camera 4, a camera pan-tilt 5, and a computer 6 equipped with simulation software. Computer 6 controls the pan-tilt 5 so that camera 4 obtains the best field of view. Projector 2, projection screen 3, camera 4, and pan-tilt 5 are all fixed on an indoor wall or on supports, so that their relative positions cannot change. The parameters of projector 2 are fixed; it is connected to computer 1 and projects onto screen 3 the virtual three-dimensional visual image generated by the virtual-viewpoint software in computer 1. Camera 4 is connected to computer 6; the image data acquired by camera 4 are first sent to computer 6 through an image capture card, and the acquired image is processed by the simulation software in computer 6 to calculate the intrinsic parameters of the virtual camera and the position and attitude of the virtual scene. The position and attitude of the virtual scene are then changed, and the acquisition and calculation are repeated.
As shown in Fig. 3, the monocular hardware-in-the-loop simulation method of the present invention comprises the following steps. First, in the virtual imaging unit, the virtual scene is constructed with software such as OpenGL, its position and attitude are set, and the virtual scene is projected onto the computer screen through a virtual camera. Then, the image projection unit projects the image onto the projection plane, and the image acquisition unit captures the projected image on the projection plane. Finally, computer software calculates the intrinsic parameters of the virtual camera and the position and attitude of the virtual scene; the accuracy error of the simulation result is obtained by comparison with the set values.
Each step of the method is described in detail below with reference to the accompanying drawings:
Step 1: establish the imaging model from the virtual scene to the computer screen
The linear camera model is given by formula (1):

    s_1 [a, b, 1]^T = [ α_x  0  u_0  0 ;  0  α_y  v_0  0 ;  0  0  1  0 ] · [ R  T ; 0^T  1 ] · [x_w, y_w, z_w, 1]^T = M_x1 · M_x2 · [x_w, y_w, z_w, 1]^T    (1)

where [x_w, y_w, z_w, 1]^T are the homogeneous coordinates of a point in the virtual scene and [a, b, 1]^T are the homogeneous coordinates of its image point; α_x and α_y are the scale factors of the virtual camera along the u and v axes of the image plane; [u_0, v_0] is the image center of the virtual camera on the image plane; s_1 is a scale factor; R and T are the rotation matrix and translation vector. M_x1, determined by α_x, α_y, u_0, and v_0, is called the intrinsic parameter matrix of the virtual camera; M_x2, determined by R and T, is called the extrinsic parameter matrix of the virtual camera.
Step 1: determine the intrinsic parameters of the virtual camera
The image resolution of the virtual camera is set to 1024 × 768 and the vertical field of view to 30 degrees; the concrete geometric relationship is shown in Fig. 4. Accordingly, α_x = α_y = 384.0 / tan(15.0 × π / 180) = 1433.1075357064, u_0 = 512, and v_0 = 384.
Step 2: determine the extrinsic parameters of the virtual camera
In the virtual-viewpoint software, determining the extrinsic parameters of the virtual camera requires the spatial position of the viewpoint, i.e. the three-dimensional coordinates of the viewpoint in the virtual world coordinate system, T_w = [X_w, Y_w, Z_w], together with the viewing direction: the yaw angle θ (rotation about the Y axis), the pitch angle ψ (rotation about the X axis), and the roll angle φ (rotation about the Z axis).
Here the initial position of the virtual camera coordinate system is set to coincide with the virtual world coordinate system. Starting from this initial state, the virtual camera is rotated in the order roll first, then pitch, and finally yaw, which determines its final orientation; the corresponding rotation matrix is R. The viewpoint of the virtual camera is then translated to the set position, which completes the transformation from the virtual world coordinate system to the virtual camera coordinate system.
The rotation matrix R can be obtained with the Euler-angle representation. According to the virtual-camera rotations defined above, the rotation order in the Euler-angle representation is: about the Z axis → about the Y axis → about the X axis, with the sign of each rotation angle determined by the right-hand rule. With rotation angles ψ, θ, and φ about the X, Y, and Z axes respectively, the rotation matrix R can be derived.
If the coordinates of the virtual camera viewpoint in the virtual world coordinate system are T_W, the translation vector is:

    T = -(R · T_W)

The extrinsic parameter matrix of the virtual camera is finally obtained as:

    M_x2 = [ R  T ; 0^T  1 ]
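The construction of R and T above can be sketched as follows. The axis assignments (yaw about Y, pitch about X, roll about Z) and the roll-then-pitch-then-yaw order follow the description in Step 2, but the exact conventions of the patent's software are an assumption:

```python
import numpy as np

def rotation_from_euler(yaw, pitch, roll):
    """Rotation matrix for the roll -> pitch -> yaw order of Step 2
    (angles in radians; axis conventions are an assumption)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])   # yaw about Y
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])   # pitch about X
    Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])   # roll about Z
    return Ry @ Rx @ Rz  # roll applied first, then pitch, then yaw

def extrinsic_matrix(R, T_w):
    """M_x2 = [R, T; 0^T, 1] with T = -(R @ T_w), as in the text."""
    T = -(R @ T_w)
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = T
    return M
```

Any rotation matrix produced this way is orthonormal with determinant 1, as required of R.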
Step 2: establish the imaging model from the computer screen to the camera image plane
Because it is difficult to calibrate the projector separately during calibration, the latter two projection processes are merged and treated as a whole.
Establishing the imaging model from the computer screen to the camera image plane involves not only building the concrete model but also calibrating its parameters, in the following steps:
Step 1: establish the imaging model
To understand the whole imaging model more precisely, the model from the computer screen to the projection plane and the model from the projection plane to the camera image plane are first established separately, and the two models are then merged to form the imaging model from the computer screen to the camera image plane. Specifically:
1) Establish the imaging model from the computer screen to the projection plane
Since the projection from the computer screen to the projection plane is a plane-to-plane projection, the following model is obtained:

    s_2 [x_t, y_t, 1]^T = H_t · [a, b, 1]^T    (2)

where s_2 is a scale factor, H_t is a 3 × 3 nonsingular matrix, and [x_t, y_t, 1]^T are the homogeneous coordinates of the point on the projection plane.
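Formula (2) is a projective (homography) mapping; a minimal numpy sketch of applying it, assuming H_t is known:

```python
import numpy as np

def apply_homography(H, point):
    """Map a screen point (a, b) through x = H [a, b, 1]^T and divide
    out the scale factor, as in formula (2)."""
    x = H @ np.array([point[0], point[1], 1.0])
    return x[:2] / x[2]
```

Because the result is normalized by the last component, H and any nonzero multiple of H map points identically, which is why s_2 appears in the model.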
2) Establish the imaging model from the projection plane to the camera
Since the projection plane is a plane, the imaging model from the projection plane to the camera is:

    s_3 [u, v, 1]^T = [ α_cx  0  u_c0 ;  0  α_cy  v_c0 ;  0  0  1 ] · [ r_c1  r_c2  T_c ] · [x_t, y_t, 1]^T    (3)

where [u, v, 1]^T are the homogeneous coordinates of the point on the camera image plane; s_3 is a scale factor; α_cx and α_cy are the scale factors of the camera along the u and v axes of the image plane; [u_c0, v_c0] is the image center on the camera image plane; and r_c1 and r_c2 are the first and second columns of R_c, with T_c the translation vector.
3) Establish the imaging model from the computer screen to the camera image plane
Merging formula (2) and formula (3) gives the imaging model from the computer screen to the camera image plane:

    s_4 [u, v, 1]^T = [ α_cx  0  u_c0 ;  0  α_cy  v_c0 ;  0  0  1 ] · [ r_c1  r_c2  T_c ] · H_t · [a, b, 1]^T = H_b · [a, b, 1]^T    (4)

where H_b is a 3 × 3 matrix and s_4 is a scale factor.
Step 2: calibration of the system
The system is calibrated using the model of formula (4). The detailed process is as follows:
(1) The image center on the camera image plane is obtained with the varifocal method proposed by Lenz and Tsai (see Lenz, R.K., Tsai, R.Y., "Techniques for Calibration of the Scale Factor and Image Center for High Accuracy 3-D Machine Vision Metrology", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 10, No. 5, September 1988, pp. 713-720).
(2) The radial distortion coefficients of the camera are calibrated with the cross-ratio-invariance-based distortion calibration method proposed by Zhang Guangjun et al. (see Zhang Guangjun, "Machine Vision", Beijing: Science Press, 2005).
(3) With the image center and the radial distortion coefficients of the camera obtained, and after distortion correction of the camera image, linear equations can be set up from formula (4) using multiple corner points on the target; the matrix H_b is then obtained by least squares, which completes the calibration of the parameters.
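The least-squares estimation of H_b from corner correspondences can be sketched as a standard direct linear transform (DLT). This is an illustrative reconstruction of step (3), not the patent's exact implementation, and it omits the distortion-correction step:

```python
import numpy as np

def estimate_homography(screen_pts, image_pts):
    """Least-squares estimate of H_b from point pairs (a, b) -> (u, v)
    via the linear equations of formula (4); standard DLT sketch."""
    A = []
    for (a, b), (u, v) in zip(screen_pts, image_pts):
        # u = (h1 . p) / (h3 . p)  and  v = (h2 . p) / (h3 . p)
        # give two homogeneous linear equations per correspondence.
        A.append([a, b, 1, 0, 0, 0, -u * a, -u * b, -u])
        A.append([0, 0, 0, a, b, 1, -v * a, -v * b, -v])
    # The null vector of A (smallest singular value) is H up to scale.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]
```

Four non-degenerate correspondences suffice in principle; using many corner points, as the patent does, makes the least-squares estimate robust to extraction noise.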
Step 3: establish the overall model of the computer-vision hardware-in-the-loop simulation environment
Merging the mathematical models of the preceding processes gives the overall model of the hardware-in-the-loop simulation environment:

    s_5 [u, v, 1]^T = H_b · [ α_x  0  u_0  0 ;  0  α_y  v_0  0 ;  0  0  1  0 ] · [ R  T ; 0^T  1 ] · [x_w, y_w, z_w, 1]^T    (5)

where s_5 is a scale factor, H_b is obtained by calibration, and R and T are set manually.
According to the above model, when the homogeneous coordinates [u, v, 1]^T of a point on the camera image plane and the homogeneous coordinates [x_w, y_w, z_w, 1]^T of the corresponding point in the virtual scene are known, the intrinsic parameters and the extrinsic parameters (R, T) of the virtual camera in formula (5) can be calculated.
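For a planar target (z_w = 0), once H_b has been undone and the scene-to-screen homography G ~ K [r1 r2 T] has been estimated, (R, T) can be recovered by the standard planar-homography decomposition against the intrinsic matrix K. This is a sketch under those assumptions, not necessarily the patent's exact solution method:

```python
import numpy as np

def pose_from_planar_homography(K, G):
    """Decompose G ~ K [r1 r2 T] (planar target, z_w = 0) into R and T.
    G maps target coordinates (x_w, y_w) to virtual-screen points (a, b)."""
    M = np.linalg.inv(K) @ G
    lam = 1.0 / np.linalg.norm(M[:, 0])  # rotation columns have unit norm
    r1 = lam * M[:, 0]
    r2 = lam * M[:, 1]
    r3 = np.cross(r1, r2)                # complete the right-handed frame
    T = lam * M[:, 2]
    R = np.column_stack([r1, r2, r3])
    # Re-orthonormalize R via SVD to absorb estimation noise.
    U, _, Vt = np.linalg.svd(R)
    return U @ Vt, T
```

Because G is only defined up to scale, the decomposition fixes the scale through the unit-norm constraint on the rotation columns.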
The steps of the monocular semi-physical vision simulation of the present invention are illustrated below with a specific embodiment. Unmanned aerial vehicles have achieved remarkable military success on land battlefields, and their outstanding performance has demonstrated to the world the bright prospects of applying UAVs to naval warfare, attracting the attention of the navies of various countries. One of the key technical problems of a carrier-borne UAV is its recovery after use; here the present invention is used to carry out a UAV carrier-landing simulation experiment.
Step 1:
First, a planar target is designed with the OpenGL software of the virtual imaging unit, as shown in Fig. 5. The four corner points of each black square in the target are taken as feature points, and the coordinates of each feature point are set in advance; the large and small black squares are arranged so that the origin and the X axis of the target coordinate system can be identified conveniently.
Step 2:
The image projected onto the projection plane by the image projection unit is captured by the image acquisition unit. For brevity, only one position and attitude is used here as an example: the pitch angle is 45 degrees and the viewpoint distance is 150 cm, as shown in Fig. 6.
Step 3:
The image coordinates of the feature points are extracted with the image-processing functions of the computer simulation software, as shown in Fig. 7.
Step 4:
With H_b and the intrinsic parameters α_x, α_y, u_0, and v_0 in formula (5) known, the position and attitude (R, T) of the UAV can be solved from formula (5). The concrete data are as follows:

R =
[  0.702725  -0.001678  -0.714938 ]
[ -0.000565  -0.999956   0.001809 ]
[ -0.711514  -0.000596  -0.699186 ]

T =
[  -0.006923 ]
[  -0.004578 ]
[ 150.141495 ]
The set values in OpenGL and the measured values are listed in the following table:

              θ (deg)   ψ (deg)   φ (deg)   T_x (cm)    T_y (cm)    T_z (cm)
Set            0         45        0        106.06601    0          106.066018
Calculated    -0.129     45.356   -0.137    106.82711    0.102277   105.504913
Deviation     -0.129     0.356    -0.137    0.761095     0.102277   -0.561105

In the table, T = [T_x, T_y, T_z]^T are the coordinates of the camera optical center in the world coordinate system.
The above is only a preferred embodiment of the present invention. It should be pointed out that those of ordinary skill in the art can make further modifications and improvements without departing from the principle of the present invention, and these should also be regarded as falling within the protection scope of the present invention.

Claims (6)

1. A monocular-vision semi-physical simulation system, characterized by comprising: a virtual imaging unit, an image projection unit, and an image acquisition unit, the three units being connected in sequence with their relative positions unchangeable.
2. The monocular-vision semi-physical simulation system as claimed in claim 1, characterized in that the virtual imaging unit comprises a computer (1) equipped with virtual-viewpoint software; the image projection unit comprises a projector (2) and a projection screen (3); the image acquisition unit comprises a camera (4), a camera pan-tilt (5), and a computer (6) equipped with simulation software; the computer (6) controls the camera pan-tilt (5) so that the camera (4) obtains the best field of view; the projector (2), the projection screen (3), the camera (4), and the camera pan-tilt (5) are fixed on an indoor wall or on supports so that their relative positions cannot change; the parameters of the projector (2) are fixed, and it is connected to the computer (1) and projects onto the projection screen (3) the virtual three-dimensional visual image generated by the virtual-viewpoint software in the computer (1); and the camera (4) is connected to the computer (6).
3. A method of carrying out hardware-in-the-loop simulation with the monocular-vision semi-physical simulation system as claimed in claim 1, characterized by comprising the steps of:
in the first step, in the virtual imaging unit, constructing the virtual scene with software such as OpenGL, setting the position and attitude of the virtual scene, and projecting the virtual scene onto the computer screen through a virtual camera;
in the second step, projecting the image onto the projection plane with the image projection unit, and capturing the projected image on the projection plane with the image acquisition unit;
in the third step, calculating the intrinsic parameters of the virtual camera and the position and attitude of the virtual scene with computer software, and analyzing the accuracy error of the simulation result by comparison with the set values.
4. The monocular-vision semi-physical simulation method as claimed in claim 3, characterized in that the first step comprises setting the intrinsic and extrinsic parameters of the virtual camera.
5. The monocular-vision semi-physical simulation method as claimed in claim 3, characterized in that the second step comprises: establishing the imaging model from the computer screen to the projection plane; establishing the imaging model from the projection plane to the camera; and establishing the imaging model from the computer screen to the camera image plane.
6. The monocular-vision semi-physical simulation method as claimed in claim 3, characterized in that the third step comprises establishing the overall model of the hardware-in-the-loop simulation:

    s_5 [u, v, 1]^T = H_b · [ α_x  0  u_0  0 ;  0  α_y  v_0  0 ;  0  0  1  0 ] · [ R  T ; 0^T  1 ] · [x_w, y_w, z_w, 1]^T

where H_b is a 3 × 3 matrix; α_x and α_y are the scale factors of the virtual camera along the u and v axes of the image plane; [u_0, v_0] is the image center of the virtual camera on the image plane; s_5 is a scale factor; [u, v, 1]^T are the homogeneous coordinates of the point on the camera image plane; and [x_w, y_w, z_w, 1]^T are the homogeneous coordinates of the point in the virtual scene.
CNB2006100836396A 2006-05-31 2006-05-31 Single-eye vision semi-matter simulating system and method Expired - Fee Related CN100416466C (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CNB2006100836396A CN100416466C (en) 2006-05-31 2006-05-31 Single-eye vision semi-matter simulating system and method
US11/561,696 US7768527B2 (en) 2006-05-31 2006-11-20 Hardware-in-the-loop simulation system and method for computer vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2006100836396A CN100416466C (en) 2006-05-31 2006-05-31 Single-eye vision semi-matter simulating system and method

Publications (2)

Publication Number Publication Date
CN1851618A true CN1851618A (en) 2006-10-25
CN100416466C CN100416466C (en) 2008-09-03

Family

ID=37133097

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2006100836396A Expired - Fee Related CN100416466C (en) 2006-05-31 2006-05-31 Single-eye vision semi-matter simulating system and method

Country Status (1)

Country Link
CN (1) CN100416466C (en)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6618076B1 (en) * 1999-12-23 2003-09-09 Justsystem Corporation Method and apparatus for calibrating projector-camera system
CN1149388C (en) * 2001-02-23 2004-05-12 清华大学 Method for reconstructing 3D contour of digital projection based on phase-shifting method
US6634552B2 (en) * 2001-09-26 2003-10-21 Nec Laboratories America, Inc. Three dimensional vision device and method, and structured light bar-code patterns for use in the same

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104715486A (en) * 2015-03-25 2015-06-17 北京经纬恒润科技有限公司 Simulated rack camera calibration method and real-time machine
CN104715486B (en) * 2015-03-25 2017-12-19 北京经纬恒润科技有限公司 One kind emulation stand camera marking method and real-time machine
CN106558080A (en) * 2016-11-14 2017-04-05 天津津航技术物理研究所 Join on-line proving system and method outside a kind of monocular camera
CN106558080B (en) * 2016-11-14 2020-04-24 天津津航技术物理研究所 Monocular camera external parameter online calibration method
CN107255458A (en) * 2017-06-19 2017-10-17 昆明理工大学 A kind of upright projection grating measuring analogue system and its implementation
CN107255458B (en) * 2017-06-19 2020-02-07 昆明理工大学 Resolving method of vertical projection grating measurement simulation system
CN107918293A (en) * 2017-12-15 2018-04-17 四川汉科计算机信息技术有限公司 Universal type simulation system
CN109242752A (en) * 2018-08-21 2019-01-18 吉林大学 A kind of analog acquisition obtains the method and application of mobile image
CN112987593A (en) * 2021-02-19 2021-06-18 中国第一汽车股份有限公司 Visual positioning hardware-in-the-loop simulation platform and simulation method
CN112987593B (en) * 2021-02-19 2022-10-28 中国第一汽车股份有限公司 Visual positioning hardware-in-the-loop simulation platform and simulation method

Also Published As

Publication number Publication date
CN100416466C (en) 2008-09-03

Similar Documents

Publication Publication Date Title
CN107730503B (en) Image object component level semantic segmentation method and device embedded with three-dimensional features
CN104330074B (en) Intelligent surveying and mapping platform and realizing method thereof
CN1897715A (en) Three-dimensional vision semi-matter simulating system and method
CN105716542B (en) A kind of three-dimensional data joining method based on flexible characteristic point
CN1259542C (en) Vision measuring method for spaced round geometrical parameters
CN1851618A (en) Single-eye vision semi-matter simulating system and method
CN106197265B (en) A kind of space free flight simulator precision visual localization method
CN1801953A (en) Video camera reference method only using plane reference object image
CN107679537A (en) A kind of texture-free spatial target posture algorithm for estimating based on profile point ORB characteristic matchings
CN102184563B (en) Three-dimensional scanning method, three-dimensional scanning system and three-dimensional scanning device used for plant organ form
CN107705252A (en) Splice the method and system of expansion correction suitable for binocular fish eye images
CN101038678A (en) Smooth symmetrical surface rebuilding method based on single image
CN103544344B (en) A kind of car load Electromagnetic Simulation reverse modeling method
CN1694126A (en) System and method for building three-dimentional scene dynamic model and real-time simulation
CN106157246A (en) A kind of full automatic quick cylinder panoramic image joining method
CN105551020A (en) Method and device for detecting dimensions of target object
CN116740288B (en) Three-dimensional reconstruction method integrating laser radar and oblique photography
CN106971408A (en) A kind of camera marking method based on space-time conversion thought
CN101038153A (en) Three-point scaling measuring method
CN116822100B (en) Digital twin modeling method and simulation test system thereof
CN1878319A (en) Video camera marking method based on plane homographic matrix characteristic line
CN111189415A (en) Multifunctional three-dimensional measurement reconstruction system and method based on line structured light
CN113658262A (en) Camera external parameter calibration method, device, system and storage medium
CN113313659A (en) High-precision image splicing method under multi-machine cooperative constraint
CN116433843A (en) Three-dimensional model reconstruction method and device based on binocular vision reconstruction route

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20080903

Termination date: 20200531
