CN105806318A - Visual measurement method for space three-dimensional information based on motion time quantity - Google Patents
- Publication number
- CN105806318A CN105806318A CN201610134789.9A CN201610134789A CN105806318A CN 105806318 A CN105806318 A CN 105806318A CN 201610134789 A CN201610134789 A CN 201610134789A CN 105806318 A CN105806318 A CN 105806318A
- Authority
- CN
- China
- Prior art keywords
- camera
- coordinate
- measurement
- space
- point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C11/00—Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
Abstract
The invention relates to a visual measurement method for spatial three-dimensional information based on the quantity of motion time, and belongs to the technical field of computer vision measurement. The method comprises the following steps: first, fixing a camera on a high-precision linear guide rail, the guide rail ensuring high-precision uniform linear motion of the camera; then solving spatial three-dimensional coordinates from sequence-image features acquired during the motion, thereby completing the measurement of the spatial three-dimensional information of the object to be measured. The first step calibrates the camera, the second step extracts marker-point features, and the third step solves the three-dimensional information. By exploiting the high-precision uniform linear motion of an electrically controlled platform, the method converts the geometric measurement of the measured space into a measurement of speed and time, reconstructs the three-dimensional information of image features, lowers measurement cost, raises measurement efficiency and precision, and achieves fast full-field measurement with a monocular camera.
Description
Technical field
The invention belongs to the technical field of computer vision measurement and relates to a visual measurement method for spatial three-dimensional information based on the quantity of motion time.
Background technology
Vision measurement technology is widely applied in fields such as aviation, aerospace, the defense industry, and marine navigation. Vision measurement is non-contact and highly adaptable to its environment: the three-dimensional information of spatial features can be measured quickly from images captured by a camera. However, most existing vision measurement methods rely only on the image information of the camera or on geometric information of the scene to measure spatial three-dimensional information, which limits measurement precision and efficiency.
Invention patent CN105184857A, "Scale factor determination method in monocular vision reconstruction based on dot-structured-light ranging," filed by Li Xiuzhi et al. of Beijing University of Technology, proposes a structured-light-assisted monocular Euclidean three-dimensional reconstruction method, including spot-centroid localization, RANSAC-based outlier rejection in spatial-line fitting, and solving of spatial three-dimensional point coordinates. However, the method requires the chessboard target to be repositioned repeatedly during measurement, so calibration is cumbersome and must be repeated; it also places high demands on the relative position of the laser and the camera, and its stability is hard to guarantee.

Invention patent CN104864851A, "Monocular vision pose measurement method based on weighting and constraining of perimeter and area of rectangle," filed by Zhao Rujin et al. of the Institute of Optics and Electronics, Chinese Academy of Sciences, constructs a rectangle from four feature points on the surface of the measured object, establishes a weighted error-constraint relation from the geometric property that the perimeter and area of the rectangular target remain unchanged, and then uses the rectangle-area and perimeter error constraints for target measurement. This method imposes requirements on the shape and size of the measured surface, so its measurement range is limited; moreover, the geometric accuracy of the constructed rectangle is hard to guarantee, and shape and scale errors of the rectangle strongly affect the result, making measurement unstable.
Summary of the invention
The present invention provides a visual measurement method for spatial three-dimensional information based on the quantity of motion time. Unlike traditional vision measurement methods based only on image information or scene geometry, it refers to known motion parameters and converts the geometric measurement in the vision measurement process into a time measurement. This simplifies the measurement procedure, improves measurement speed and precision, raises measurement efficiency, reduces cost, and achieves fast full-field measurement with a monocular camera.
The technical solution adopted by the present invention is a visual measurement method for spatial three-dimensional information based on the quantity of motion time. The camera is first fixed on a high-precision linear guide rail, and the guide rail ensures high-precision uniform linear motion of the camera. Sequence-image features acquired during the motion are then used to solve the spatial three-dimensional coordinates, completing the measurement of the spatial three-dimensional information of the measured object. The specific steps of the measuring method are as follows:
The first step: camera calibration
The measuring method uses a single, relatively fixed camera, calibrated with a precision-machined target plate. Based on the camera calibration method for 2D planar targets proposed by Zhang Zhengyou et al., and addressing the high-precision requirement on relative distances in the measurement process, high-precision calibration of the camera is achieved. The camera imaging model is as follows:

Z_c · [u, v, 1]^T = M · [R T] · [X_w, Y_w, Z_w, 1]^T    (1)

where Z_c is a scale factor, (u, v) is the actual pixel coordinate of a control point in the camera image plane (corrected with radial distortion coefficients k_1, k_2), M is the camera intrinsic matrix, R and T are the rotation and translation matrices of the camera coordinate system relative to the world coordinate system, and (X_w, Y_w, Z_w) is the control-point coordinate in the world coordinate system.
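The imaging model of formula (1) can be sketched numerically. The following minimal illustration projects a world point through the intrinsic matrix M with radial distortion applied in normalized coordinates; M, R, T, and the test point are hypothetical example values, not the patent's calibration results.

```python
import numpy as np

def project(Pw, M, R, T, k1=0.0, k2=0.0):
    """Project a world point Pw (3,) to pixel coordinates (u, v)."""
    Pc = R @ Pw + T                          # world -> camera coordinates
    x, y = Pc[0] / Pc[2], Pc[1] / Pc[2]      # normalized image coordinates
    r2 = x * x + y * y
    d = 1.0 + k1 * r2 + k2 * r2 * r2         # radial distortion factor (k1, k2)
    u = M[0, 0] * x * d + M[0, 2]
    v = M[1, 1] * y * d + M[1, 2]
    return u, v

# Hypothetical intrinsics (focal length and principal point in pixels).
M = np.array([[1294.2, 0.0, 624.0],
              [0.0, 1294.2, 512.0],
              [0.0, 0.0, 1.0]])
u, v = project(np.array([0.0, 0.0, 1000.0]), M, np.eye(3), np.zeros(3))
# A point on the optical axis projects to the principal point: (624.0, 512.0).
```

This is only a forward-projection sketch; calibration inverts this relation from many observed control points.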
The second step: extraction of marker-point features
First, at the initial position of the electrically controlled platform, the camera is triggered through a graphics workstation to capture the first frame A1; the platform is then advanced along the y-axis at a constant speed for 1 s and the second frame A2 is captured; finally the platform is advanced along the y-axis at the same speed for another 1 s and the third frame A3 is captured. The gradient centroid method is used to extract the marker features on images A1, A2, and A3. The images are convolved with a Gaussian first-order differential operator to obtain the image gradient at every point, and the gradient centroid of the sub-region containing the pattern is then obtained as follows:

C = Σ_{i=1}^{h} Σ_{j=1}^{w} |G(i, j)| · P(i, j) / Σ_{i=1}^{h} Σ_{j=1}^{w} |G(i, j)|    (2)

where C is the center pixel coordinate of the marker extracted by the gradient centroid method, |G(i, j)| is the gradient magnitude at point (i, j), w and h are the width and height of the target image, and P(i, j) is the image coordinate of point (i, j).
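The gradient centroid of formula (2) can be sketched as follows. For brevity the Gaussian first-order derivative convolution is replaced by simple finite differences (`np.gradient`); this is an illustrative approximation, not the patent's exact operator.

```python
import numpy as np

def gradient_centroid(img):
    """Gradient-weighted centroid of a marker sub-image (formula (2) sketch)."""
    gy, gx = np.gradient(img.astype(float))   # finite-difference gradients
    g = np.hypot(gx, gy)                      # |G(i, j)|
    jj, ii = np.meshgrid(np.arange(img.shape[1]), np.arange(img.shape[0]))
    s = g.sum()
    # Weight each pixel coordinate P(i, j) by its gradient magnitude.
    return (g * jj).sum() / s, (g * ii).sum() / s   # (column, row) centroid

# A bright square centered in an 11x11 patch: centroid is exactly (5.0, 5.0).
patch = np.zeros((11, 11))
patch[3:8, 3:8] = 255.0
cx, cy = gradient_centroid(patch)
```

Weighting by gradient magnitude concentrates the centroid on the marker's edges, which is what makes the estimate sub-pixel accurate and robust to uniform illumination changes.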
The third step: solving the three-dimensional information
The motion speed of the guide rail is a known v, and the time interval between captured images is Δt. Image processing is performed at times t = 0, t = Δt, and t = 2Δt, yielding the image coordinates of the feature point in the three frames. First, according to the pinhole imaging model, the relation between the actual coordinate of a spatial point and its image-plane coordinate is obtained:

X = f · x / z,  Y = f · y / z    (3)

where (x, y, z) is the actual coordinate of the spatial point and (X, Y) is the physical coordinate of the feature point on the image. From formula (3) and the spatial position relation, the following equations are obtained for the three frames:

X_i = f · x_i / z_i,  Y_i = f · y_i / z_i,  i = 1, 2, 3,  with x_{i+1} = x_i, z_{i+1} = z_i, y_{i+1} = y_i − Δl    (4)-(6)

where v is the motion speed of the guide rail, Δt is the motion time interval, Δl = v · Δt is the distance moved by the camera during Δt, (X_i, Y_i) is the image-plane coordinate of the spatial point in the i-th frame, and (x_i, y_i, z_i) is the actual coordinate of the spatial point at time (i − 1) · Δt. Solving formulas (4), (5), and (6) simultaneously yields (x_1, y_1, z_1), the real spatial coordinate of the object point.
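Under the assumptions of an ideal pinhole model (no distortion, coordinates referred to the principal point) and camera motion parallel to one image axis, the three-frame solution of formulas (4)-(6) reduces to a disparity computation with a known baseline Δl = v·Δt. The focal length and point below are synthetic illustration values, not the patent's data.

```python
def solve_point(f, dl, X, Y):
    """X, Y: marker coordinates in frames 1..3 (principal-point-relative
    pixels). Returns (x1, y1, z1) in the first camera frame.
    f: focal length in pixels; dl: camera translation per frame."""
    # Equal translation between frames gives two disparity estimates;
    # averaging them uses all three frames (formulas (4)-(6)).
    disparity = ((X[0] - X[1]) + (X[1] - X[2])) / 2.0
    z = f * dl / disparity          # depth from the known baseline
    x = X[0] * z / f
    y = Y[0] * z / f
    return x, y, z

f, dl = 1000.0, 20.0
# Synthetic point at (50, -30, 800): its projection shifts f*dl/z = 25 px
# per frame as the camera translates.
X = [62.5, 37.5, 12.5]
Y = [-37.5, -37.5, -37.5]
x, y, z = solve_point(f, dl, X, Y)
# Recovers (50.0, -30.0, 800.0).
```

The key design point is that depth falls directly out of the ratio of the known physical baseline to the observed pixel shift, which is why the method trades geometric measurement for a time (and speed) measurement.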
The beneficial effect of the invention is that the method requires only a single camera together with a high-precision linear electrically controlled platform: no auxiliary laser or projector is needed, and no prior knowledge of the image is required. By exploiting the high-precision uniform linear motion of the platform, the geometric measurement of the measured space is converted into a measurement of speed and time, the three-dimensional information of image features can be reconstructed, measurement cost is reduced, measurement efficiency and precision are increased, and fast full-field measurement with a monocular camera is achieved.
Brief description of the drawings
Fig. 1 shows the installation model of the visual measurement method for spatial three-dimensional information based on the quantity of motion time. 1: high-precision electrically controlled platform; 2: camera; 3: measured object.
Fig. 2 is the flow chart of the visual measurement method for spatial three-dimensional information based on the quantity of motion time.
Detailed description of the invention
A specific embodiment of the present invention is described in detail below in conjunction with the technical solution and the accompanying drawings.
Fig. 1 shows the installation model of the measurement method. A high-precision electrically controlled platform drives the camera in uniform linear motion along a straight-line trajectory; three photographs are taken at identical time intervals, from which the three-dimensional coordinates of the spatial object are measured accurately.
The device is installed as follows: the camera is mounted on the high-precision electrically controlled linear platform. The camera is a FASTCAM UX50 with a resolution of 1248 × 1024 and a frame rate of up to 2000 fps at full 1248 × 1024 resolution. The lens is an AF-S 17-35mm f/2.8D IF-ED with the following parameters: focal length f = 17-35 mm, APS-equivalent focal length 25.5-52.5 mm, maximum aperture F2.8, minimum aperture F22, lens weight 745 g, lens dimensions 82.5 × 106 mm. The shooting conditions are: image size 1248 × 1024 pixels, field of view approximately 600 mm × 600 mm, shooting speed 100 fps.
Fig. 2 is the flow chart of the visual measurement method for spatial three-dimensional information based on the quantity of motion time; following the flow chart, the specific implementation steps of the embodiment are as follows:
The first step: camera calibration
The invention uses a single, relatively fixed camera, calibrated with a precision-machined target plate. The method is based on the 2D planar-target camera calibration method proposed by Zhang Zhengyou et al. and, addressing the high-precision requirement on relative distances in the measurement process, takes the reconstruction precision of spatial points in the monocular system as the objective function. The normalized eight-point algorithm proposed by Longuet-Higgins is used to compute the fundamental matrix; from formula (1), the intrinsic parameters K, the extrinsic parameters [R T], and the distortion coefficients k of the high-speed camera are obtained. The calibrated focal length is f = 1294.21658.
The Zhang calibration method accurately determines the intrinsic and extrinsic parameters of the camera; taking the reconstruction precision of spatial points in the monocular system as the objective function and globally optimizing the intrinsic and extrinsic parameters further improves the reliability of the calibration parameters over a large field of view.
The second step: extraction of marker-point features
First, at the initial position of the electrically controlled platform, the camera is triggered through a graphics workstation to capture the first frame A1; the platform is then advanced along the y-axis at a speed of v = 20.209 mm/s for 1 s and the second frame A2 is captured; finally the platform is advanced along the y-axis at the same v = 20.209 mm/s for another 1 s and the third frame A3 is captured.
The gradient centroid method is used to extract the pixel coordinates of the marker point in images A1, A2, and A3 captured at the three positions: [X1, Y1] = [469.58219, 291.43307], [X2, Y2] = [440.40808, 292.30085], [X3, Y3] = [410.74325, 292.68879].
The third step: solving the three-dimensional information
The motion speed of the guide rail is known to be v = 20.209 mm/s, and the time interval between captured images is Δt = 1 s; image processing at times t = 0, t = Δt, and t = 2Δt yields the image coordinates of the feature point in the three frames. According to the pinhole imaging model, the relation between the actual coordinate of the spatial point and its image-plane coordinate is obtained, and the motion speed and time of the platform give: Δl = v × Δt = 20.209 mm/s × 1 s = 20.209 mm.
Substituting into formulas (3)-(6) and solving simultaneously yields three groups of spatial coordinates:
[x1, y1, z1] = [-112.056153, -162.095009, 1150.600906]
[x2, y2, z2] = [-130.899183, -160.703910, 1143.197584]
[x3, y3, z3] = [-149.838016, -159.316637, 1136.007787]
[-112.056153, -162.095009, 1150.600906] is the actual coordinate of the spatial point.
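As a rough plausibility check of the worked example, the simplified relation z ≈ f·Δl/ΔX can be applied to the extracted pixel coordinates with the reported focal length. Because it ignores the principal point and lens distortion, this sketch recovers only the scale of the reported depth, not the patent's exact value.

```python
# Extracted marker X-coordinates from the three frames (from the patent).
f = 1294.21658            # calibrated focal length (assumed to be in pixels)
dl = 20.209               # camera translation per frame, mm
X = [469.58219, 440.40808, 410.74325]

disparity = (X[0] - X[2]) / 2.0   # mean pixel shift per frame over two intervals
z = f * dl / disparity
# z comes out near 889 mm -- the same scale as the reported z1 = 1150.6 mm;
# the gap reflects the principal-point and distortion terms this sketch drops.
```

The discrepancy illustrates why the patent's full model of formulas (3)-(6), with calibrated intrinsics, is needed for precise results.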
The method uses the high-precision uniform linear motion of the electrically controlled platform to convert the geometric measurement of the measured space into a measurement of speed and time; the three-dimensional information of image features can be reconstructed, measurement cost is reduced, measurement efficiency and precision are increased, and fast full-field measurement with a monocular camera is achieved.
Claims (1)
1. A visual measurement method for spatial three-dimensional information based on the quantity of motion time, characterized in that the camera is first fixed on a high-precision linear guide rail, the guide rail ensuring high-precision uniform linear motion of the camera; sequence-image features acquired during the motion are then used to solve the spatial three-dimensional coordinates, completing the measurement of the spatial three-dimensional information of the measured object; the specific steps of the measuring method are as follows:
The first step: camera calibration
The measuring method uses a single, relatively fixed camera, calibrated with a precision-machined target plate; based on the camera calibration method for 2D planar targets proposed by Zhang Zhengyou et al., and addressing the high-precision requirement on relative distances in the measurement process, high-precision calibration of the camera is achieved; the camera imaging model is as follows:

Z_c · [u, v, 1]^T = M · [R T] · [X_w, Y_w, Z_w, 1]^T    (1)

where Z_c is a scale factor, (u, v) is the actual pixel coordinate of a control point in the camera image plane (corrected with radial distortion coefficients k_1, k_2), M is the camera intrinsic matrix, R and T are the rotation and translation matrices of the camera coordinate system relative to the world coordinate system, and (X_w, Y_w, Z_w) is the control-point coordinate in the world coordinate system;
The second step: extraction of marker-point features
First, at the initial position of the electrically controlled platform, the camera is triggered through a graphics workstation to capture the first frame A1; the platform is then advanced along the y-axis at a constant speed for 1 s and the second frame A2 is captured; finally the platform is advanced along the y-axis at the same speed for another 1 s and the third frame A3 is captured; the gradient centroid method is used to extract the marker features on images A1, A2, and A3; the images are convolved with a Gaussian first-order differential operator to obtain the image gradient at every point, and the gradient centroid of the sub-region containing the pattern is then obtained as follows:

C = Σ_{i=1}^{h} Σ_{j=1}^{w} |G(i, j)| · P(i, j) / Σ_{i=1}^{h} Σ_{j=1}^{w} |G(i, j)|    (2)

where C is the center pixel coordinate of the marker extracted by the gradient centroid method, |G(i, j)| is the gradient magnitude at point (i, j), w and h are the width and height of the target image, and P(i, j) is the image coordinate of point (i, j);
The third step: solving the three-dimensional information
The motion speed of the guide rail is a known v, and the time interval between captured images is Δt; image processing is performed at times t = 0, t = Δt, and t = 2Δt, yielding the image coordinates of the feature point in the three frames; first, according to the pinhole imaging model, the relation between the actual coordinate of a spatial point and its image-plane coordinate is obtained:

X = f · x / z,  Y = f · y / z    (3)

where (x, y, z) is the actual coordinate of the spatial point and (X, Y) is the physical coordinate of the feature point on the image; from formula (3) and the spatial position relation, the following equations are obtained for the three frames:

X_i = f · x_i / z_i,  Y_i = f · y_i / z_i,  i = 1, 2, 3,  with x_{i+1} = x_i, z_{i+1} = z_i, y_{i+1} = y_i − Δl    (4)-(6)

where v is the motion speed of the guide rail, Δt is the motion time interval, Δl = v · Δt is the distance moved by the camera during Δt, (X_i, Y_i) is the image-plane coordinate of the spatial point in the i-th frame, and (x_i, y_i, z_i) is the actual coordinate of the spatial point at time (i − 1) · Δt; solving formulas (4), (5), and (6) simultaneously yields (x_1, y_1, z_1), the real spatial coordinate of the object point.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610134789.9A CN105806318A (en) | 2016-03-09 | 2016-03-09 | Visual measurement method for space three-dimensional information based on motion time quantity |
Publications (1)
Publication Number | Publication Date |
---|---|
CN105806318A true CN105806318A (en) | 2016-07-27 |
Family
ID=56467982
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107449403A (en) * | 2017-08-09 | 2017-12-08 | 天津理工大学 | A kind of space-time four-dimension joint imaging model and application |
CN108242072A (en) * | 2016-12-23 | 2018-07-03 | 捷西迪光学(开曼)股份有限公司 | Establish the method for space map and the patch graphical set applied to this method |
CN109030868A (en) * | 2018-07-06 | 2018-12-18 | 江西洪都航空工业集团有限责任公司 | Plane motion object angular acceleration measurement method in drop-test |
CN109186455A (en) * | 2018-09-06 | 2019-01-11 | 安徽师范大学 | A kind of device of view-based access control model measurement dynamic object three-dimensional coordinate |
CN109520477A (en) * | 2018-10-30 | 2019-03-26 | 成都飞机工业(集团)有限责任公司 | A method of the high-acruracy survey scale factor based on two dimension photography |
CN109712139A (en) * | 2019-01-02 | 2019-05-03 | 苏州天准科技股份有限公司 | The dimension measurement method of monocular vision based on linear movement mould group |
CN109764858A (en) * | 2018-12-24 | 2019-05-17 | 中公高科养护科技股份有限公司 | A kind of photogrammetric survey method and system based on monocular camera |
CN111890255A (en) * | 2020-07-29 | 2020-11-06 | 苏州华兴源创科技股份有限公司 | Crimping method and crimping system |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102221331A (en) * | 2011-04-11 | 2011-10-19 | 浙江大学 | Measuring method based on asymmetric binocular stereovision technology |
CN102374860A (en) * | 2011-09-23 | 2012-03-14 | 奇瑞汽车股份有限公司 | Three-dimensional visual positioning method and system |
CN103529838A (en) * | 2013-09-23 | 2014-01-22 | 南京航空航天大学 | Multi-objective visual guide driving device of automated guided vehicle and collaborative calibration method of multi-objective visual guide driving device |
CN103542981A (en) * | 2013-09-28 | 2014-01-29 | 大连理工大学 | Method for measuring rotary inertia through binocular vision |
CN104864851A (en) * | 2015-06-01 | 2015-08-26 | 中国科学院光电技术研究所 | Monocular vision pose measurement method based on weighting and constraining of perimeter and area of rectangle |
CN105180904A (en) * | 2015-09-21 | 2015-12-23 | 大连理工大学 | High-speed moving target position and posture measurement method based on coding structured light |
CN105184857A (en) * | 2015-09-13 | 2015-12-23 | 北京工业大学 | Scale factor determination method in monocular vision reconstruction based on dot structured optical ranging |
Non-Patent Citations (4)
Title |
---|
ZHENGYOU ZHANG, ET AL.: "Camera calibration with one-dimensional", IEEE Transactions on Pattern Analysis and Machine Intelligence * |
LIU Wei et al.: "Method for measuring rotary inertia based on binocular vision", Chinese Journal of Scientific Instrument * |
HONG Rong: "Research on monocular vision measurement of the pose of ground targets from a UAV", China Master's Theses Full-text Database, Engineering Science and Technology II * |
MA Guizhen: "Research on three-dimensional scene reconstruction and tracking based on stereo vision", China Master's Theses Full-text Database, Information Science and Technology * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20160727 |