CN106441151A - Three-dimensional object Euclidean space reconstruction measurement system based on vision and active optics fusion - Google Patents
- Publication number
- CN106441151A (application CN201610872190.5A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/24—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/02—Systems using the reflection of electromagnetic waves other than radio waves
- G01S17/06—Systems determining position data of a target
- G01S17/08—Systems determining position data of a target for measuring distance only
Abstract
The invention discloses a three-dimensional object Euclidean space reconstruction measurement system based on the fusion of vision and active optics. The system comprises a monocular vision module, a laser ranging module, a preprocessing module, an initial three-dimensional reconstruction module, a SLAM pose solving module, an accurate 3D reconstruction module and associated auxiliary mechanisms. A new method establishes the mutual relation between the image sequence acquired by the monocular vision module and the single-point distance information acquired by the laser ranging module, computes the spatial three-dimensional coordinates of each feature point of the observed object, and, through iterative optimization, finally obtains accurate three-dimensional measurement information of the object together with its position and attitude information. Because a high-speed monocular vision system is combined with a laser ranging system, detection at high frame rates greatly relieves the data-processing burden, making the system especially suitable for observing moving objects in space or under water. The system offers fast imaging, high reconstruction accuracy, a simple structure, and small size, weight and power consumption, and is suitable for space and underwater operation robots.
Description
Technical field
The invention belongs to the field of non-cooperative target three-dimensional reconstruction, and in particular relates to a measuring system for three-dimensional target Euclidean space reconstruction based on the fusion of vision and active ranging.
Background technology
In space and underwater measurement tasks involving non-cooperative targets, close-range reconstruction and measurement of the target are key technologies. A non-cooperative target provides no effective cooperation information and carries no communication acknowledgement mechanism or other active sensing devices, so other robots cannot identify or locate such a space target by electronic interrogation, signal transmission or similar means. With the target's motion state and spatial structure unknown, the target must be detected and reconstructed at 3D scale by means such as vision, and its pose accurately measured, to provide the conditions for further operations.
Traditional measuring methods based on scanning laser radar can obtain dense three-dimensional point cloud information of the target surface and thereby realize three-dimensional reconstruction of the target, but their measurement accuracy is inversely proportional to the square of the distance, so they are only applicable to close-range, slowly moving targets. Pose measurement of non-cooperative targets generally adopts the ICP (Iterative Closest Point) three-dimensional point cloud matching algorithm proposed by Besl et al. Many researchers are currently attempting to acquire the pose of non-cooperative targets from scanning laser radar imagery. The LCS (Laser Camera System) developed by the Canadian company Neptec uses 3D LASSO (a three-dimensional laser camera system algorithm based on ICP for on-orbit spacecraft servicing) software to solve the pose of non-cooperative spacecraft; its measurement accuracy is likewise inversely proportional to the square of the distance, suiting it only to close-range, slowly moving targets. Moreover, current processing methods inevitably suffer data distortion problems in measurement work.
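The ICP matching referred to above can be sketched as follows. This is a minimal illustration only (brute-force nearest-neighbour matching plus SVD-based rigid alignment), not the LCS/3D LASSO implementation; all function names are ours.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst,
    via SVD of the cross-covariance matrix (Kabsch-style closed form)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp(src, dst, iters=20):
    """Toy ICP: alternate nearest-neighbour correspondence (brute force)
    with closed-form rigid alignment until the clouds agree."""
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]       # closest dst point for each src point
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur
```

Production systems replace the brute-force search with a k-d tree and add outlier rejection; the alternating structure is the same.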
Binocular vision imitates the imaging principle of the human eye: by measuring the parallax between two images, the three-dimensional information of the target can be obtained. P. Jasiobedzki proposed first constraining the target motion, establishing the target's three-dimensional structure with a binocular camera system and determining its relative position, then determining the target's attitude by a geometry-probing method. During close-range tracking, the target attitude parameters are updated by the Iterative Closest Point algorithm on the three-dimensional data, while the target's relative position is obtained by triangulation from the binocular camera system. Segal et al. of the Technion (Israel Institute of Technology) likewise established a binocular-vision-based state measurement system for non-cooperative spacecraft: an observation model of the target feature points is first established, after which pose measurement of the non-cooperative spacecraft is achieved with an extended Kalman filter. However, the accuracy of binocular vision measurement depends heavily on the relative position and angle of the two cameras, and the imaging region exists only in the overlap of the two cameras' fields of view. It is therefore difficult to meet the requirements on detector size, detection range and accuracy imposed by non-cooperative target detection with space and underwater robots.
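The parallax-to-depth relation underlying binocular measurement is simple for a rectified stereo pair: depth is focal length times baseline over disparity. A minimal sketch, assuming the focal length is given in pixels and the baseline in metres (both hypothetical values here):

```python
def stereo_depth(f_px, baseline_m, x_left_px, x_right_px):
    """Depth of a point from a rectified stereo pair: Z = f * B / d,
    where d is the horizontal disparity between the two image columns."""
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("point must have positive disparity")
    return f_px * baseline_m / disparity
```

The formula also shows the two weaknesses noted in the text: accuracy degrades as disparity shrinks with distance, and the point must be visible in both cameras.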
Monocular vision is the most common and simplest optical sensor, and is standard equipment on most spacecraft. Foreign researchers have extended robot "simultaneous localization and mapping (SLAM)" methods based on a monocular camera and successfully applied them to target measurement. Augenstein of the Stanford University space robotics laboratory estimated attitude parameters using only Bayesian estimation, and estimated position parameters with optimization theory from multiple positioning measurements such as gyroscopes and GPS; the approach can be used to inspect and maintain damaged satellites or underwater scientific instruments, and also for autonomous docking with a tumbling satellite, and was successfully field-tested in Monterey Bay. However, monocular vision cannot directly obtain target depth information; for non-cooperative target pose measurement it must be used together with several other sensors, and it is therefore often limited in autonomous system applications.
Focal-plane-array (flash) laser radar is a recently emerging and important means of obtaining three-dimensional information. It calculates the depth of the target from the time of flight of a light beam from the sensor to the target point, allows many depth values to be acquired in parallel at once, and can realize real-time acquisition of a three-dimensional image of the target. MIT Lincoln Laboratory holds the leading position and has completed a third-generation three-dimensional imaging laser radar system (Gen-III), which uses a Geiger-mode 32x32-pixel APD array as the detector; it has single-photon detection sensitivity and the advantages of high frame rate, high range resolution and miniaturization. But because the technology is still immature, high-pixel-count, high-accuracy three-dimensional target measurement cannot yet be achieved, the lateral resolution remains unsatisfactory, and high equipment cost and demanding hardware requirements make such systems difficult to popularize at present.
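The time-of-flight principle behind both the single-point range finder used later in this patent and the flash lidar array can be stated in a few lines; this is a generic illustration, not any particular instrument's firmware:

```python
import numpy as np

# speed of light in vacuum, m/s
C = 299_792_458.0

def tof_depth(round_trip_s):
    """Range from one pixel's photon time of flight: d = c * t / 2
    (the pulse travels to the target and back, hence the factor of two)."""
    return C * round_trip_s / 2.0

def tof_depth_map(round_trip_s_array):
    """A flash lidar's APD array records a whole grid of flight times
    in parallel, yielding a depth map in a single shot."""
    return C * np.asarray(round_trip_s_array) / 2.0
```

A 200 ns round trip, for instance, corresponds to a target roughly 30 m away.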
Some work has combined monocular vision with laser radar, for example that of Jose Padial et al. in the Department of Aeronautics and Astronautics at Stanford University, thereby improving measurement accuracy; but because the whole system contains both a camera and a rotating-mirror assembly, its volume, power consumption and weight are large, making it hard to adapt to complex-environment applications.
Summary of the invention
The present invention proposes a measuring system for target Euclidean space reconstruction based on the fusion of vision and active optics. The measuring system fuses the sequence of images of the target acquired by the monocular vision module with the depth of a single point in the image obtained by the single-point laser ranging module, realizes three-dimensional reconstruction of the target in Euclidean coordinates, and can further solve for the target's position, attitude and related motion information. It is small, lightweight and low-power; simple in structure, long in detection range, low in cost and widely applicable; and, through specific camera design, highly adaptable. It satisfies the miniaturized, refined, autonomous and intelligent characteristics needed in future space and underwater detection.
The technical solution adopted by the present invention is: a measuring system for target Euclidean space reconstruction based on the fusion of vision and active optics, composed of a monocular vision module, a laser ranging module, a preprocessing module, an initial three-dimensional reconstruction module, a SLAM pose solving module, an accurate 3D reconstruction module, and mechanisms such as a fixed support, a turntable and a calibration board, wherein:

The monocular vision module faces the target and collects the target's image sequence;

The laser ranging module is placed substantially in the same direction as the monocular vision module, and obtains the true distance of an arbitrary point on the target;

The preprocessing module is integrated inside the system; it preprocesses the data obtained by the monocular vision module and the laser ranging module, providing the basis for the subsequent reconstruction process;

The initial three-dimensional reconstruction module is integrated inside the system; it processes the above image information and point distance information to obtain the initial value of the target's three-dimensional measurement information;

The SLAM pose solving module is integrated inside the system; using an improved pose solving algorithm, it can solve the real-time motion state and pose information of the target object;

The accurate 3D reconstruction module is integrated inside the system; it can update the three-dimensional model information in real time, guaranteeing the accuracy of the three-dimensional reconstruction.
Further, the preprocessing module is responsible for receiving and processing the data obtained by the monocular vision system and the laser ranging system: it applies distortion correction, denoising and foreground extraction to the images obtained by the monocular vision system, and solves for the distance of the laser ranging point as expressed in the camera coordinate system.

Further, the initial three-dimensional reconstruction module is responsible for initializing the three-dimensional reconstruction of the target, and can achieve high-accuracy registration between the target's two-dimensional image-sequence feature points and the single-point laser range information.

Further, the SLAM pose solving module and the accurate 3D reconstruction module can perform real-time pose measurement of the target using only the target feature information provided by this single system; at the same time, the continuously updated target pose information can be fed back to refine the accuracy of the subsequent three-dimensional model reconstruction, updating the target's three-dimensional model in real time and thereby achieving accurate 3D reconstruction.
Compared with the prior art, the present invention has the following advantages:

(1) The invention is small, lightweight and low-power; the single monocular camera and laser range finder are mature products, and no large mechanical devices such as scanners are needed.

(2) The invention adopts a high-resolution monocular vision system, so the fine features of the target can be restored well.

(3) The invention adopts a phase-shift laser range finder, whose high accuracy compensates for the monocular camera's lack of absolute depth information.

(4) The invention adopts a self-developed data processing module whose algorithms fuse the image sequence obtained by the monocular vision system with the single-point range information obtained by the laser ranging system, obtaining the three-dimensional spatial coordinates of all feature points of the target object and realizing the final three-dimensional reconstruction, with fast overall computation and high accuracy.

(5) The invention has low hardware requirements and a fast imaging speed; from the high-accuracy range of a single point measured by the laser range finder, it can accurately restore the depth of every pixel of the target body.
Brief description of the drawings

Fig. 1 is an overall structural diagram of the measuring system for target Euclidean space reconstruction based on the fusion of vision and active optics according to the present invention.

Fig. 2 is a schematic diagram of the principle of the measuring system.

Fig. 3 is a schematic diagram of the measurement process and results of the measuring system.
The reference numerals in the figures denote: 100, monocular vision module; 200, laser ranging module; 300, preprocessing module; 310, vision and laser-ranging off-line joint calibration module; 320, image preprocessing module; 321, image distortion correction module; 330, laser ranging information preprocessing module; 400, initial three-dimensional reconstruction module; 410, feature extraction module; 411, feature matching module; 412, similarity reconstruction module; 420, laser and vision data fusion processing module; 430, Euclidean initial 3D reconstruction module with scale factor; 500, pose solving module; 510, SFM motion estimation module; 520, SLAM pose solving module; 521, positioning-state prediction module; 522, positioning-state filtering module; 523, linked-feature update module; 524, weight computation and resampling module; 530, real-time pose estimation module; 600, accurate 3D reconstruction module; 610, initial reconstruction and feature-library matching module; 620, feature-library update module.
Specific embodiments

To make the objects, technical solutions and advantages of the present invention clearer, the present invention is described in more detail below in conjunction with specific embodiments and with reference to the accompanying drawings.
A measuring system for target Euclidean space reconstruction based on the fusion of vision and active optics according to the present invention comprises a monocular vision module, a laser ranging module, a preprocessing module, an initial three-dimensional reconstruction module, a SLAM pose solving module, an accurate 3D reconstruction module, a fixed support, a tracking turntable system and a calibration board, wherein:

The mounting positions of the modules are shown in the structural diagram of Fig. 1:

The monocular vision module captures the target's images and texture sequence information;

The laser ranging module obtains the range of an arbitrary point on the target;

The preprocessing module applies distortion correction, denoising, foreground extraction and similar processing to the images obtained by the monocular vision module, and simultaneously extracts the expression of the laser ranging spot in the image coordinate system, providing reconstruction images that meet the requirements of the subsequent three-dimensional reconstruction process; it also preprocesses the single-point range information obtained by the laser ranging system, thereby solving for the distance of the laser ranging point as expressed in the camera coordinate system;
The initial three-dimensional reconstruction module performs feature point extraction and matching on the image information provided by the preprocessing module, re-projection of the laser-ranging imaging point, SFM spatial information solving, and fusion of the range information to solve the true 3D coordinates of all feature points, providing initial data support for the subsequent pose solving and accurate three-dimensional reconstruction.

The SLAM pose solving module uses the rough 3D information of target points obtained during initialization together with the images measured in real time by the monocular vision system to solve the target's real-time position and attitude, and optimizes the 3D model information as the solved data accumulates.

The accurate 3D reconstruction module fuses the data obtained by the initial three-dimensional reconstruction module with the data obtained by the SLAM pose solving module; as the pose data accumulates, it continuously refines the accuracy of the 3D model reconstruction, and at the same time supplies the accurate 3D model back to the SLAM pose solving module, laying the foundation for accurate pose solving.
The fixed support fixes the positions of the monocular vision module, the laser ranging module, the preprocessing module, the initial three-dimensional reconstruction module, the SLAM pose solving module and the accurate 3D reconstruction module, keeping the modules relatively stable. The whole system demands high rigidity; in particular, the relative position between the front-end monocular vision module and the laser ranging module must be fixed.

The tracking turntable system tracks the target, providing the conditions for all the above subsystems to measure the target in real time.

The calibration board is used during system initialization for joint calibration of the monocular vision module's coordinate system and the laser ranging module's coordinate system; it carries feature points with accurately known coordinates, and the laser emitted by the laser ranging module forms a clear scattering spot on the board.
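The role of the calibration board can be illustrated numerically: the laser spot seen in the image is back-projected onto the known board plane to obtain its 3-D position in the camera frame, and spots collected at several board poses constrain the laser beam's line. The patent does not specify this computation; the sketch below is a hypothetical numpy implementation under a pinhole model with intrinsic matrix K.

```python
import numpy as np

def backproject_to_plane(pixel, K, plane_n, plane_d):
    """Intersect the camera ray through `pixel` with the calibration-board
    plane n.X + d = 0 (all in camera coordinates) to get the 3-D laser spot."""
    u, v = pixel
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray direction, camera frame
    s = -plane_d / (plane_n @ ray)                  # scale so the point lies on the plane
    return s * ray

def fit_laser_line(points):
    """Fit the laser beam's 3-D line (point + unit direction) to the spot
    positions observed at several board poses, via PCA on the point set."""
    pts = np.asarray(points)
    centroid = pts.mean(axis=0)
    _, _, Vt = np.linalg.svd(pts - centroid)
    return centroid, Vt[0]   # first principal axis = beam direction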
The measurement flow of this embodiment is shown in Fig. 2. The preprocessing module first performs off-line joint calibration of the monocular vision module and the laser ranging module, obtaining fixed system information such as the intrinsic parameters of the monocular vision system and the mathematical expression of the laser ranging module in the monocular vision module's coordinate system.

In the first part, the preprocessing part, the monocular vision module acquires the target's image sequence at a high frame rate during measurement and reconstruction, while the laser ranging module acquires the range of an arbitrary point on the target at the same frame rate. In the preprocessing module the image sequence undergoes image processing steps such as denoising and foreground/background separation, distortion correction is applied to the images using the calibration parameters obtained off line, each image frame is associated with the range information obtained by the laser ranging system, and the results are passed to the subsequent parts.
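The distortion-correction step in the flow above can be sketched with a simple two-term radial model; the coefficients k1, k2 stand in for whatever the off-line calibration produces, and the fixed-point inversion is one common choice, not necessarily the patent's.

```python
import numpy as np

def undistort_points(pts_px, K, k1, k2):
    """Remove radial lens distortion from pixel coordinates using the model
    x_d = x_u * (1 + k1 r^2 + k2 r^4), inverted by fixed-point iteration."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    out = []
    for u, v in pts_px:
        xd = (u - cx) / fx           # normalized distorted coordinates
        yd = (v - cy) / fy
        xu, yu = xd, yd              # initial guess: no distortion
        for _ in range(20):          # fixed-point iteration toward x_u
            r2 = xu * xu + yu * yu
            factor = 1.0 + k1 * r2 + k2 * r2 * r2
            xu, yu = xd / factor, yd / factor
        out.append((fx * xu + cx, fy * yu + cy))
    return out
```

In practice a library routine (e.g. an OpenCV-style undistort) would be used; the point of the sketch is the direction of the mapping: measured pixels back to ideal pinhole pixels before any geometry is computed.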
In the second part, the initial three-dimensional reconstruction part, this module performs feature extraction and matching on the image information passed by the preprocessing module, obtaining the preliminary relative pose relations of the target feature points (with the scale factor still undetermined; the relative-pose computation is similar to the structure-from-motion (SFM) method and is not repeated in this embodiment). It then establishes the mathematical relationship between each image frame described in the preprocessing module and the high-accuracy real-world range information obtained by the laser ranging system, associates all the data, and solves for the scale factor in the initial three-dimensional reconstruction module. The estimated true spatial coordinates of each feature point of the target object are thus restored, yielding the preliminary Euclidean three-dimensional reconstruction of the target's feature points.

In the initial three-dimensional reconstruction part, a new reconstruction approach is proposed: selectively dense reconstruction, which establishes the correspondence in spatial coordinates between the ranging spot of the laser ranging system and the target's visual image. Compared with existing methods, this approach is fast and accurate and greatly reduces the system-wide computational load; while guaranteeing accuracy, it has strong engineering practicability.
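The scale-factor step described above rests on a simple observation: monocular SFM recovers structure only up to an unknown global scale, and one true range to a single reconstructed point (the laser spot) fixes that scale for every point. A minimal sketch of that idea, with all names ours:

```python
import numpy as np

def apply_metric_scale(points_up_to_scale, laser_point_idx, laser_range_m,
                       cam_center=np.zeros(3)):
    """Fix the unknown SFM scale from one laser range:
    s = measured_range / reconstructed_range of the laser-spot point,
    then rescale every reconstructed point about the camera center."""
    pts = np.asarray(points_up_to_scale, dtype=float)
    rel = pts - cam_center
    recon_range = np.linalg.norm(rel[laser_point_idx])
    s = laser_range_m / recon_range
    return cam_center + s * rel, s
```

With several frames and ranges available, the per-frame scale estimates can be combined in the iterative optimization the patent describes, rather than taken from a single measurement.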
In the third part, the pose and motion measurement part, from the initial reconstruction of the target obtained by Euclidean-space feature matching and the image sequences and range information continuously acquired by the monocular vision system and the laser ranging system, the SLAM pose solving module estimates the motion state of the target. Then, by the fast simultaneous localization and mapping (FastSLAM) method, on the basis of positioning-state prediction, positioning-state filtering, linked-feature updating, and weight computation with resampling, the target's motion state is further judged and optimized, giving a more accurate motion model of the target object and a real-time pose estimate.

In the SLAM pose and motion measurement module, the constraint of former SLAM methods, which require the cooperation of multiple sensors such as laser radar, GPS and gyroscopes, is overcome: the present invention can perform real-time three-dimensional reconstruction using only the target feature information provided by this single system.
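The weight-computation and resampling steps named above are standard particle-filter machinery; the following is a generic illustration of those two steps (a Gaussian measurement likelihood as the importance weight, and systematic resampling), not the patent's specific FastSLAM variant.

```python
import numpy as np

def importance_weight(innovation, cov):
    """Gaussian measurement likelihood of a particle's innovation,
    used as its (unnormalized) importance weight."""
    innovation = np.atleast_1d(np.asarray(innovation, dtype=float))
    cov = np.atleast_2d(np.asarray(cov, dtype=float))
    e = innovation @ np.linalg.solve(cov, innovation)
    norm = np.sqrt((2 * np.pi) ** len(innovation) * np.linalg.det(cov))
    return float(np.exp(-0.5 * e) / norm)

def systematic_resample(weights, rng=None):
    """Systematic resampling: N evenly spaced pointers with one shared random
    offset select particle indices with probability ~ normalized weight."""
    if rng is None:
        rng = np.random.default_rng(0)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    n = len(w)
    positions = (rng.random() + np.arange(n)) / n
    return np.searchsorted(np.cumsum(w), positions)
```

Prediction and filtering fill in the other two steps of the loop: each particle's pose is propagated through a motion model, its per-feature estimates are updated with a small Kalman filter, and the cycle closes with the weighting and resampling shown here.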
In the fourth part, the accurate target three-dimensional reconstruction part, from the initial feature-point three-dimensional reconstruction of the target object and the accurate target position and attitude obtained in the preceding parts, the data processing module continuously revises and optimizes the three-dimensional coordinates of the target's existing feature points, thereby obtaining a more accurate three-dimensional reconstruction of the target.
Claims (4)
1. A measuring system for target Euclidean space reconstruction based on the fusion of vision and active optics, characterized in that the system is composed of a monocular vision module, a laser ranging module, a preprocessing module, an initial three-dimensional reconstruction module, a SLAM pose solving module, an accurate 3D reconstruction module, and mechanisms such as a fixed support, a turntable and a calibration board, wherein:

the monocular vision module faces the target and collects the target's image sequence;

the laser ranging module is placed substantially in the same direction as the monocular vision module and obtains the true distance of an arbitrary point on the target;

the preprocessing module is integrated inside the system and preprocesses the data obtained by the monocular vision system and the laser ranging system, providing the basis for the subsequent reconstruction process;

the initial three-dimensional reconstruction module is integrated inside the system and processes the above image information and point distance information to obtain the initial value of the target's three-dimensional measurement information;

the SLAM pose solving module is integrated inside the system and, using an improved pose solving algorithm, can solve the real-time motion state and pose information of the target object;

the accurate 3D reconstruction module is integrated inside the system and can update the three-dimensional model information in real time, guaranteeing the accuracy of the three-dimensional reconstruction.
2. The system according to claim 1, characterized in that the preprocessing module is responsible for receiving and processing the data obtained by the monocular vision module and the laser ranging module: it applies distortion correction, denoising and foreground extraction to the images obtained by the monocular vision module, and calculates the distance of the laser ranging point as expressed in the camera coordinate system.
3. The system according to claim 1, characterized in that the initial three-dimensional reconstruction module is responsible for initializing the three-dimensional reconstruction of the target and can achieve high-accuracy registration between the target's two-dimensional image-sequence feature points and the single-point laser range information.
4. The system according to claim 1, characterized in that the SLAM pose solving module and the accurate 3D reconstruction module can perform real-time pose measurement of the target using only the target feature information provided by this single system, and at the same time can feed the continuously updated target pose information back to refine the accuracy of the subsequent three-dimensional model reconstruction, updating the target's three-dimensional model in real time and thereby achieving accurate 3D reconstruction.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610872190.5A CN106441151A (en) | 2016-09-30 | 2016-09-30 | Three-dimensional object Euclidean space reconstruction measurement system based on vision and active optics fusion |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106441151A (en) | 2017-02-22 |
Family
ID=58173097
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610872190.5A Pending CN106441151A (en) | 2016-09-30 | 2016-09-30 | Three-dimensional object Euclidean space reconstruction measurement system based on vision and active optics fusion |
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107274368A (en) * | 2017-06-16 | 2017-10-20 | 大连交通大学 | Compatible vision processing system and method |
CN107764183A (en) * | 2017-11-07 | 2018-03-06 | 浙江大学 | Local laser image co-registration measuring system and its measuring method for underwater object dimensional measurement |
CN107958466A (en) * | 2017-12-01 | 2018-04-24 | 大唐国信滨海海上风力发电有限公司 | A kind of tracking of the Slam algorithm optimizations based on model |
CN108089196A (en) * | 2017-12-14 | 2018-05-29 | 中国科学院光电技术研究所 | The noncooperative target pose measuring apparatus that a kind of optics master is passively merged |
CN108109208A (en) * | 2017-12-01 | 2018-06-01 | 同济大学 | A kind of marine wind electric field augmented reality method |
CN108120544A (en) * | 2018-02-13 | 2018-06-05 | 深圳精智机器有限公司 | A kind of triaxial residual stresses of view-based access control model sensor |
CN108897029A (en) * | 2018-03-30 | 2018-11-27 | 北京空间飞行器总体设计部 | Noncooperative target short distance Relative Navigation vision measurement system index evaluating method |
CN108981672A (en) * | 2018-07-19 | 2018-12-11 | 华南师范大学 | Hatch door real-time location method based on monocular robot in conjunction with distance measuring sensor |
CN109254579A (en) * | 2017-07-14 | 2019-01-22 | 上海汽车集团股份有限公司 | A kind of binocular vision camera hardware system, 3 D scene rebuilding system and method |
CN109444912A (en) * | 2018-10-31 | 2019-03-08 | 电子科技大学 | A kind of driving environment sensory perceptual system and method based on Collaborative Control and deep learning |
CN110596660A (en) * | 2019-10-09 | 2019-12-20 | 富临精工先进传感器科技(成都)有限责任公司 | Method and system for improving accuracy of radar measurement object size |
CN110657803A (en) * | 2018-06-28 | 2020-01-07 | 深圳市优必选科技有限公司 | Robot positioning method, device and storage device |
CN110703268A (en) * | 2019-11-06 | 2020-01-17 | 广东电网有限责任公司 | Air route planning method and device for autonomous positioning navigation |
WO2020038155A1 (en) * | 2018-08-22 | 2020-02-27 | 科沃斯机器人股份有限公司 | Autonomous movement device, control method and storage medium |
CN111611913A (en) * | 2020-05-20 | 2020-09-01 | 北京海月水母科技有限公司 | Human-shaped positioning technology of monocular face recognition probe |
CN111699410A (en) * | 2019-05-29 | 2020-09-22 | 深圳市大疆创新科技有限公司 | Point cloud processing method, device and computer readable storage medium |
CN111735439A (en) * | 2019-03-22 | 2020-10-02 | 北京京东尚科信息技术有限公司 | Map construction method, map construction device and computer-readable storage medium |
CN112017237A (en) * | 2020-08-31 | 2020-12-01 | 北京轩宇智能科技有限公司 | Operation auxiliary device and method based on field splicing and three-dimensional reconstruction |
CN113175929A (en) * | 2021-03-12 | 2021-07-27 | 南京航空航天大学 | UPF-based spatial non-cooperative target relative pose estimation method |
CN113324538A (en) * | 2021-05-08 | 2021-08-31 | 中国科学院光电技术研究所 | Cooperative target remote high-precision six-degree-of-freedom pose measurement method |
CN113587904A (en) * | 2021-07-29 | 2021-11-02 | 中国科学院西安光学精密机械研究所 | Target attitude and position measurement method integrating machine vision and laser reference point information |
CN115984512A (en) * | 2023-03-22 | 2023-04-18 | 成都量芯集成科技有限公司 | Three-dimensional reconstruction device and method for plane scene |
- 2016-09-30: Application CN201610872190.5A filed in China; published as CN106441151A/en; legal status: active, Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103278170A (en) * | 2013-05-16 | 2013-09-04 | 东南大学 | Mobile robot cascading map building method based on remarkable scenic spot detection |
CN103901895A (en) * | 2014-04-18 | 2014-07-02 | 江苏久祥汽车电器集团有限公司 | Target positioning method based on unscented FastSLAM algorithm and matching optimization and robot |
CN105184857A (en) * | 2015-09-13 | 2015-12-23 | 北京工业大学 | Scale factor determination method in monocular vision reconstruction based on dot structured optical ranging |
Non-Patent Citations (2)
Title |
---|
张勤 (Zhang Qin): "Research on 3D Environment Modeling Technology for Mobile Robots Based on Information Fusion", China Doctoral Dissertations Full-text Database (Electronic Journal) * |
梁潇 (Liang Xiao): "Research on Indoor Robot Localization and Mapping Based on Fusion of Laser and Monocular Vision", China Masters' Theses Full-text Database * |
Cited By (34)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107274368A (en) * | 2017-06-16 | 2017-10-20 | 大连交通大学 | Compatible vision processing system and method |
CN107274368B (en) * | 2017-06-16 | 2019-11-22 | 大连交通大学 | Compatible vision processing system and method |
CN109254579A (en) * | 2017-07-14 | 2019-01-22 | 上海汽车集团股份有限公司 | Binocular vision camera hardware system, three-dimensional scene reconstruction system and method |
CN107764183A (en) * | 2017-11-07 | 2018-03-06 | 浙江大学 | Local laser image co-registration measuring system and its measuring method for underwater object dimensional measurement |
CN107958466A (en) * | 2017-12-01 | 2018-04-24 | 大唐国信滨海海上风力发电有限公司 | Model-based tracking method with SLAM algorithm optimization |
CN107958466B (en) * | 2017-12-01 | 2022-03-29 | 大唐国信滨海海上风力发电有限公司 | Model-based tracking method with SLAM algorithm optimization |
CN108109208A (en) * | 2017-12-01 | 2018-06-01 | 同济大学 | Augmented reality method for offshore wind farm |
CN108109208B (en) * | 2017-12-01 | 2022-02-08 | 同济大学 | Augmented reality method for offshore wind farm |
CN108089196B (en) * | 2017-12-14 | 2021-11-19 | 中国科学院光电技术研究所 | Non-cooperative target pose measurement device based on active-passive optical fusion |
CN108089196A (en) * | 2017-12-14 | 2018-05-29 | 中国科学院光电技术研究所 | Non-cooperative target pose measurement device based on active-passive optical fusion |
CN108120544A (en) * | 2018-02-13 | 2018-06-05 | 深圳精智机器有限公司 | Three-dimensional reconstruction based on vision sensor |
CN108897029B (en) * | 2018-03-30 | 2021-06-11 | 北京空间飞行器总体设计部 | Non-cooperative target short-distance relative navigation vision measurement system index evaluation method |
CN108897029A (en) * | 2018-03-30 | 2018-11-27 | 北京空间飞行器总体设计部 | Non-cooperative target short-distance relative navigation vision measurement system index evaluation method |
CN110657803A (en) * | 2018-06-28 | 2020-01-07 | 深圳市优必选科技有限公司 | Robot positioning method, device and storage device |
CN110657803B (en) * | 2018-06-28 | 2021-10-29 | 深圳市优必选科技有限公司 | Robot positioning method, device and storage device |
CN108981672A (en) * | 2018-07-19 | 2018-12-11 | 华南师范大学 | Real-time hatch door positioning method based on a monocular robot combined with a ranging sensor |
WO2020038155A1 (en) * | 2018-08-22 | 2020-02-27 | 科沃斯机器人股份有限公司 | Autonomous movement device, control method and storage medium |
CN109444912A (en) * | 2018-10-31 | 2019-03-08 | 电子科技大学 | Driving environment perception system and method based on collaborative control and deep learning |
CN111735439B (en) * | 2019-03-22 | 2022-09-30 | 北京京东乾石科技有限公司 | Map construction method, map construction device and computer-readable storage medium |
CN111735439A (en) * | 2019-03-22 | 2020-10-02 | 北京京东尚科信息技术有限公司 | Map construction method, map construction device and computer-readable storage medium |
CN111699410A (en) * | 2019-05-29 | 2020-09-22 | 深圳市大疆创新科技有限公司 | Point cloud processing method, device and computer readable storage medium |
CN110596660A (en) * | 2019-10-09 | 2019-12-20 | 富临精工先进传感器科技(成都)有限责任公司 | Method and system for improving accuracy of radar measurement object size |
CN110703268B (en) * | 2019-11-06 | 2022-02-15 | 广东电网有限责任公司 | Air route planning method and device for autonomous positioning navigation |
CN110703268A (en) * | 2019-11-06 | 2020-01-17 | 广东电网有限责任公司 | Air route planning method and device for autonomous positioning navigation |
CN111611913A (en) * | 2020-05-20 | 2020-09-01 | 北京海月水母科技有限公司 | Human-figure positioning method for a monocular face recognition probe |
CN112017237A (en) * | 2020-08-31 | 2020-12-01 | 北京轩宇智能科技有限公司 | Operation auxiliary device and method based on field splicing and three-dimensional reconstruction |
CN112017237B (en) * | 2020-08-31 | 2024-02-06 | 北京轩宇智能科技有限公司 | Operation auxiliary device and method based on view field splicing and three-dimensional reconstruction |
CN113175929B (en) * | 2021-03-12 | 2021-12-21 | 南京航空航天大学 | UPF-based spatial non-cooperative target relative pose estimation method |
CN113175929A (en) * | 2021-03-12 | 2021-07-27 | 南京航空航天大学 | UPF-based spatial non-cooperative target relative pose estimation method |
CN113324538A (en) * | 2021-05-08 | 2021-08-31 | 中国科学院光电技术研究所 | Cooperative target remote high-precision six-degree-of-freedom pose measurement method |
CN113324538B (en) * | 2021-05-08 | 2022-10-21 | 中国科学院光电技术研究所 | Cooperative target remote high-precision six-degree-of-freedom pose measurement method |
CN113587904A (en) * | 2021-07-29 | 2021-11-02 | 中国科学院西安光学精密机械研究所 | Target attitude and position measurement method integrating machine vision and laser reference point information |
CN113587904B (en) * | 2021-07-29 | 2022-05-20 | 中国科学院西安光学精密机械研究所 | Target attitude and position measurement method integrating machine vision and laser reference point information |
CN115984512A (en) * | 2023-03-22 | 2023-04-18 | 成都量芯集成科技有限公司 | Three-dimensional reconstruction device and method for plane scene |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106441151A (en) | Three-dimensional object Euclidean-space reconstruction measurement system based on vision and active optics fusion | |
CN108089196B (en) | Non-cooperative target pose measurement device based on active-passive optical fusion | |
CN105976353B (en) | Spatial non-cooperative target pose estimation method based on model and point cloud global matching | |
CN107945220B (en) | Binocular vision-based reconstruction method | |
Peng et al. | Pose measurement and motion estimation of space non-cooperative targets based on laser radar and stereo-vision fusion | |
CN105469405B (en) | Simultaneous localization and mapping method based on visual ranging | |
Peng et al. | An efficient pose measurement method of a space non-cooperative target based on stereo vision | |
CN111123911B (en) | Sensing system of a legged intelligent planetary-surface exploration robot and working method thereof | |
CN107590827A (en) | Indoor mobile robot visual SLAM method based on Kinect | |
CN105913410A (en) | Long-distance moving object height measurement apparatus and method based on machine vision | |
CN112987065B (en) | Multi-sensor-integrated handheld SLAM device and control method thereof | |
Lagisetty et al. | Object detection and obstacle avoidance for mobile robot using stereo camera | |
CN109579825A (en) | Robot positioning system and method based on binocular vision and convolutional neural networks | |
Tao et al. | A multi-sensor fusion positioning strategy for intelligent vehicles using global pose graph optimization | |
CN109724586B (en) | Spacecraft relative pose measurement method integrating depth map and point cloud | |
CN110412868A (en) | Non-cooperative spacecraft orbit determination method using inter-satellite optical imaging | |
CN114608561A (en) | Positioning and mapping method and system based on multi-sensor fusion | |
CN110030979B (en) | Spatial non-cooperative target relative pose measurement method based on sequence images | |
Yin et al. | Study on underwater simultaneous localization and mapping based on different sensors | |
Yingying et al. | Fast-swirl space non-cooperative target spin state measurements based on a monocular camera | |
Huntsberger et al. | Sensory fusion for planetary surface robotic navigation, rendezvous, and manipulation operations | |
JPH08261719A (en) | Device and method for calculating amount of relative movement | |
Liu et al. | 6-DOF motion estimation using optical flow based on dual cameras | |
Liu et al. | Stereo-image matching using a speeded up robust feature algorithm in an integrated vision navigation system | |
Bao et al. | Cost-effective and robust visual based localization with consumer-level cameras at construction sites |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | | Application publication date: 20170222 |