CN106652026A - Three-dimensional space automatic calibration method based on multi-sensor fusion - Google Patents
- Publication number
- CN106652026A CN106652026A CN201611206307.2A CN201611206307A CN106652026A CN 106652026 A CN106652026 A CN 106652026A CN 201611206307 A CN201611206307 A CN 201611206307A CN 106652026 A CN106652026 A CN 106652026A
- Authority
- CN
- China
- Prior art keywords
- dimensions
- sensor fusion
- kinect
- automatic calibration
- dimensional space
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Length Measuring Devices By Optical Means (AREA)
Abstract
The invention discloses a three-dimensional space automatic calibration method based on multi-sensor fusion. The method comprises the following steps: 1, a support is mounted on a mobile platform, and a Kinect device and multiple network cameras are mounted on the support; 2, the three-dimensional space is calibrated using the depth image sequence of the Kinect device; 3, the depth images and visible-light images of the Kinect device are matched, and the calibration result of step 2 is used to achieve three-dimensional space calibration based on the Kinect visible-light images; 4, the network cameras are calibrated using the Kinect visible-light calibration result and an image matching algorithm; 5, the calibrated network cameras are used for three-dimensional space calibration, namely three-dimensional modeling and reconstruction. The method overcomes the inaccurate calibration and cumbersome operation of prior-art three-dimensional space calibration.
Description
Technical field
The present invention relates to the field of three-dimensional space calibration, and in particular to a method of automatic three-dimensional space calibration based on multi-sensor fusion.
Background art
In robot SLAM technology and its applications, a robot must reconstruct the three-dimensional environment in which it operates, a task that relies mainly on the robot's vision system, i.e. several cameras mounted on the robot. Most three-dimensional reconstruction methods require the cameras to be calibrated in advance, that is, the camera parameter matrix must be computed under a particular pan-tilt-zoom (PTZ) state. This generally requires manual intervention and a known calibration target such as a checkerboard pattern. In the prior art, calibration of three-dimensional space is therefore inaccurate and cumbersome.
Accordingly, a method of automatic three-dimensional space calibration based on multi-sensor fusion that can automatically calibrate the entire three-dimensional space under arbitrary PTZ states, with accurate calibration results and simple operation, is a problem the present invention urgently seeks to solve.
Summary of the invention
In view of the above technical problem, the object of the present invention is to overcome the inaccurate and cumbersome calibration of the prior art when calibrating three-dimensional space, by providing a method of automatic three-dimensional space calibration based on multi-sensor fusion that calibrates the three-dimensional space automatically, yields accurate calibration results, and is simple to operate.
To achieve this object, the invention provides a method of automatic three-dimensional space calibration based on multi-sensor fusion, the method comprising: step 1, mounting a support on a mobile platform, and mounting a Kinect device and a plurality of network cameras on the support; step 2, calibrating the three-dimensional space using the depth image sequence of the Kinect device; step 3, matching the depth images and visible-light images of the Kinect device, and using the three-dimensional space calibration result of step 2 to achieve three-dimensional space calibration based on the Kinect visible-light images; step 4, calibrating the network cameras using the Kinect visible-light image calibration result and an image matching algorithm; step 5, calibrating the three-dimensional space, i.e. performing three-dimensional modeling and reconstruction, with the calibrated network cameras.
Preferably, the matching in step 3 is computed with a transformation matrix and refined by error minimization.
Preferably, the image matching algorithm of step 4 is a feature-based matching algorithm.
Preferably, the matching primitives of the image matching algorithm include points, lines, and regions.
Preferably, the image matching algorithm is the SIFT feature matching algorithm.
Preferably, two network cameras are mounted on the support.
According to the above technical solution, the present invention calibrates the three-dimensional space from the depth image sequence of the Kinect device, then matches the depth images of the Kinect device against its visible-light images to achieve three-dimensional space calibration based on the Kinect visible-light images, then calibrates the network cameras with an image matching algorithm, and finally calibrates the three-dimensional space with the calibrated network cameras. The method of automatic three-dimensional space calibration based on multi-sensor fusion provided by the present invention thus overcomes the inaccurate and cumbersome calibration of the prior art when calibrating three-dimensional space.
Further features and advantages of the present invention are described in detail in the detailed description that follows.
Brief description of the drawings
The accompanying drawings are provided for a further understanding of the present invention and form part of the specification; together with the detailed description below they serve to explain the invention, but they do not limit the invention. In the drawings:
Fig. 1 is a flow chart of a method of automatic three-dimensional space calibration based on multi-sensor fusion according to a preferred embodiment of the present invention.
Detailed description of the embodiments
Specific embodiments of the present invention are described in detail below with reference to the accompanying drawings. It should be understood that the embodiments described here merely illustrate and explain the present invention and do not limit it.
As shown in Fig. 1, the invention provides a method of automatic three-dimensional space calibration based on multi-sensor fusion, characterized in that the method comprises: step 1, mounting a support on a mobile platform, and mounting a Kinect device and a plurality of network cameras on the support; step 2, calibrating the close-range, small-scale three-dimensional space using the depth image sequence of the Kinect device; step 3, matching the depth images and visible-light images of the Kinect device, and using the three-dimensional space calibration result of step 2 to achieve wide-range three-dimensional space calibration based on the Kinect visible-light images; step 4, using the Kinect visible-light image calibration result and an image matching algorithm to calibrate the total space covered by the network cameras under different PTZ parameters, while also calibrating the cameras against one another to form the characteristics of a binocular camera; step 5, performing spatial calibration of the current field of view, i.e. three-dimensional modeling and reconstruction, with the calibrated network cameras.
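The patent gives no formulas for the depth-based calibration of step 2, but turning a Kinect depth image into metric three-dimensional structure is conventionally done by back-projecting each pixel through the pinhole camera model. A minimal sketch of that back-projection, assuming hypothetical Kinect-like intrinsics (`fx`, `fy`, `cx`, `cy` are illustrative values, not taken from the patent):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in meters) to an Nx3 point cloud
    with the pinhole model: X = (u-cx)*Z/fx, Y = (v-cy)*Z/fy."""
    v, u = np.indices(depth.shape)       # per-pixel row (v) and column (u)
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]            # drop invalid (zero-depth) pixels

# Illustrative use: a flat wall 2 m away seen by a tiny 4x4 depth image
depth = np.full((4, 4), 2.0)
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=1.5, cy=1.5)
print(cloud.shape)  # (16, 3)
```

Feeding a sequence of such clouds into a registration step is one plausible reading of how the close-range space would be calibrated.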
According to the above technical solution, the present invention calibrates the close-range, small-scale three-dimensional space from the depth image sequence of the Kinect device, then matches the depth images of the Kinect device against its visible-light images to achieve wide-range three-dimensional space calibration based on the Kinect visible-light images, then performs total-space calibration of the network cameras under different PTZ parameters with an image matching algorithm while calibrating the cameras against one another to form the characteristics of a binocular camera, and finally performs spatial calibration of the current field of view with the calibrated network cameras. The method of automatic three-dimensional space calibration based on multi-sensor fusion provided by the present invention thus overcomes the inaccurate and cumbersome calibration of the prior art when calibrating three-dimensional space.
As for the matching in step 3 of the present invention, because the relative positions of the sensors on the Kinect are fixed, the matching is computed with a transformation matrix and refined by error minimization. Specifically, it comprises image segmentation, object matching, and coordinate transformation, with the above process iterated continuously until the error is minimized.
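The specification describes the step 3 matching only as transformation-matrix computation refined by error minimization. One standard way to realize the coordinate-transform part, once object matching has produced point correspondences between the depth data and the visible-light data, is the closed-form least-squares rigid transform (the Kabsch/SVD solution). The sketch below is an illustrative instance of that idea, not the patent's exact procedure:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t minimizing
    sum ||R @ src_i + t - dst_i||^2 (Kabsch algorithm via SVD)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Illustrative check: recover a known 90-degree rotation about z
rng = np.random.default_rng(0)
src = rng.normal(size=(20, 3))
R_true = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
dst = src @ R_true.T + np.array([0.1, -0.2, 0.3])
R, t = best_rigid_transform(src, dst)
print(np.allclose(R, R_true))  # True
```

Re-running this solve after each re-matching pass gives the kind of continuous iteration toward minimum error that the paragraph above describes.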
The image matching algorithm in step 4 of the present invention may be a correlation-based matching algorithm; however, so that the total space covered by the network cameras under different PTZ parameters can be calibrated, with more accurate results and more convenient operation, in a preferred embodiment of the present invention the image matching algorithm is a feature-based matching algorithm.
In a preferred embodiment of the present invention, the matching primitives of the image matching algorithm include points, lines, and regions: the point, line, and region features of the images are compared and the images matched against one another, and the matching result is then combined with the Kinect visible-light image calibration result to calibrate the network cameras. The fields of view under different PTZ parameters are then extended and matched to complete the calibration of the total space. Finally, the network cameras mounted on the robot are calibrated against one another to form the characteristics of a multi-view camera, for use in subsequent work.
The present invention does not specifically limit the number of network cameras mounted on the support; however, so that the network cameras can calibrate the three-dimensional space effectively, in a preferred embodiment of the present invention two network cameras are mounted on the support.
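Once two network cameras are calibrated as a binocular pair (step 4), the modeling and reconstruction of step 5 reduces, for each pair of matched image points, to triangulation from the two camera projection matrices. A hypothetical linear (DLT) triangulation sketch, with made-up intrinsics and baseline rather than values from the patent:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: stack the equations x = P X from both
    cameras into A X = 0 and take the SVD null vector as the 3D point."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                    # dehomogenize

# Illustrative binocular pair: shared intrinsics K, 0.2 m baseline along x
K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.], [0.]])])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_true = np.array([0.3, -0.1, 2.0])
print(triangulate(P1, P2, project(P1, X_true), project(P2, X_true)))
```

Applied densely over matched features, this per-point triangulation is one standard route from a calibrated binocular pair to the three-dimensional model the claim describes.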
Preferred embodiments of the present invention have been described in detail above with reference to the accompanying drawings; however, the present invention is not limited to the specific details of those embodiments. Within the scope of the technical concept of the present invention, various simple variants of the technical solution may be made, and these simple variants all fall within the scope of protection of the present invention.
It should further be noted that the specific technical features described in the above embodiments may, where not contradictory, be combined in any suitable manner; to avoid unnecessary repetition, the various possible combinations are not described separately.
Moreover, the various embodiments of the present invention may also be combined with one another, and as long as such combinations do not depart from the idea of the present invention, they should likewise be regarded as disclosed herein.
Claims (6)
1. A method of automatic three-dimensional space calibration based on multi-sensor fusion, characterized in that the method comprises:
Step 1, mounting a support on a mobile platform, and mounting a Kinect device and a plurality of network cameras on the support;
Step 2, calibrating the three-dimensional space using the depth image sequence of the Kinect device;
Step 3, matching the depth images and visible-light images of the Kinect device, and using the three-dimensional space calibration result of step 2 to achieve three-dimensional space calibration based on the Kinect visible-light images;
Step 4, calibrating the network cameras using the Kinect visible-light image calibration result and an image matching algorithm;
Step 5, performing spatial calibration of the current field of view, i.e. three-dimensional modeling and reconstruction, with the calibrated network cameras.
2. The method of automatic three-dimensional space calibration based on multi-sensor fusion according to claim 1, characterized in that the matching in step 3 is computed with a transformation matrix and refined by error minimization.
3. The method of automatic three-dimensional space calibration based on multi-sensor fusion according to claim 1, characterized in that the image matching algorithm in step 4 is a feature-based matching algorithm.
4. The method of automatic three-dimensional space calibration based on multi-sensor fusion according to claim 3, characterized in that the matching primitives of the image matching algorithm include points, lines, and regions.
5. The method of automatic three-dimensional space calibration based on multi-sensor fusion according to claim 4, characterized in that the image matching algorithm is the SIFT feature matching algorithm.
6. The method of automatic three-dimensional space calibration based on multi-sensor fusion according to claim 1, characterized in that two network cameras are mounted on the support.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611206307.2A CN106652026A (en) | 2016-12-23 | 2016-12-23 | Three-dimensional space automatic calibration method based on multi-sensor fusion |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106652026A (en) | 2017-05-10 |
Family
ID=58827225
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611206307.2A Pending CN106652026A (en) | 2016-12-23 | 2016-12-23 | Three-dimensional space automatic calibration method based on multi-sensor fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106652026A (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101794448A (en) * | 2010-04-07 | 2010-08-04 | 上海交通大学 | Full automatic calibration method of master-slave camera chain |
CN102638653A (en) * | 2012-03-01 | 2012-08-15 | 北京航空航天大学 | Automatic face tracing method on basis of Kinect |
WO2013127418A1 (en) * | 2012-02-27 | 2013-09-06 | Eth Zurich | Method and system for image processing in video conferencing for gaze correction |
CN103646394A (en) * | 2013-11-26 | 2014-03-19 | 福州大学 | Mixed visual system calibration method based on Kinect camera |
CN103824278A (en) * | 2013-12-10 | 2014-05-28 | 清华大学 | Monitoring camera calibration method and system |
CN104134188A (en) * | 2014-07-29 | 2014-11-05 | 湖南大学 | Three-dimensional visual information acquisition method based on two-dimensional and three-dimensional video camera fusion |
CN104126989A (en) * | 2014-07-30 | 2014-11-05 | 福州大学 | Foot surface three-dimensional information obtaining method based on multiple RGB-D cameras |
CN105678734A (en) * | 2014-11-21 | 2016-06-15 | 中国科学院沈阳自动化研究所 | Different-source test image calibration method of image matching system |
CN105913489A (en) * | 2016-04-19 | 2016-08-31 | 东北大学 | Indoor three-dimensional scene reconstruction method employing plane characteristics |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109176605A (en) * | 2018-07-25 | 2019-01-11 | 安徽信息工程学院 | Robot data collection layer structure |
CN109164800A (en) * | 2018-07-25 | 2019-01-08 | 安徽信息工程学院 | Highly machine personal data acquisition layer |
CN109108932A (en) * | 2018-07-25 | 2019-01-01 | 安徽信息工程学院 | Wooden robot |
CN109015588A (en) * | 2018-07-25 | 2018-12-18 | 安徽信息工程学院 | The wooden robot of damping |
CN108965812A (en) * | 2018-07-25 | 2018-12-07 | 安徽信息工程学院 | Robot panoramic view data acquisition layer structure |
CN109079855A (en) * | 2018-07-25 | 2018-12-25 | 安徽信息工程学院 | Robot data collection layer |
CN108838998A (en) * | 2018-07-25 | 2018-11-20 | 安徽信息工程学院 | Novel robot data collection layer structure |
CN109079737A (en) * | 2018-07-25 | 2018-12-25 | 安徽信息工程学院 | robot |
CN109129391A (en) * | 2018-07-25 | 2019-01-04 | 安徽信息工程学院 | The wooden robot of liftable |
CN109176539A (en) * | 2018-08-24 | 2019-01-11 | 安徽信息工程学院 | Autonomous positioning robot based on Kinect |
CN108983790A (en) * | 2018-08-24 | 2018-12-11 | 安徽信息工程学院 | The autonomous positioning robot of view-based access control model |
CN109079815A (en) * | 2018-08-24 | 2018-12-25 | 安徽信息工程学院 | The intelligent robot of view-based access control model |
CN109015755A (en) * | 2018-08-24 | 2018-12-18 | 安徽信息工程学院 | wooden robot based on Kinect |
CN109129396A (en) * | 2018-08-24 | 2019-01-04 | 安徽信息工程学院 | The wooden robot of view-based access control model |
CN109764824A (en) * | 2018-12-19 | 2019-05-17 | 武汉西山艺创文化有限公司 | A kind of Portable three-dimensional model scanning method and apparatus based on detachable external member |
CN109839827B (en) * | 2018-12-26 | 2021-11-30 | 哈尔滨拓博科技有限公司 | Gesture recognition intelligent household control system based on full-space position information |
CN109839827A (en) * | 2018-12-26 | 2019-06-04 | 哈尔滨拓博科技有限公司 | A kind of gesture identification intelligent home control system based on total space location information |
CN110458897A (en) * | 2019-08-13 | 2019-11-15 | 北京积加科技有限公司 | Multi-cam automatic calibration method and system, monitoring method and system |
CN110458897B (en) * | 2019-08-13 | 2020-12-01 | 北京积加科技有限公司 | Multi-camera automatic calibration method and system and monitoring method and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106652026A (en) | Three-dimensional space automatic calibration method based on multi-sensor fusion | |
CN104331896B (en) | A kind of system calibrating method based on depth information | |
CN105627992B (en) | A kind of method that ancient building is surveyed and drawn in quick high accuracy noncontact | |
CN103837869B (en) | Based on single line laser radar and the CCD camera scaling method of vector relations | |
CN104748683B (en) | A kind of on-line automatic measurement apparatus of Digit Control Machine Tool workpiece and measuring method | |
KR101054736B1 (en) | Method for 3d object recognition and pose estimation | |
JP7343624B2 (en) | Methods, mobile terminal equipment and systems for evaluating laser cut edges | |
CN109658456A (en) | Tank body inside fillet laser visual vision positioning method | |
US20140081459A1 (en) | Depth mapping vision system with 2d optical pattern for robotic applications | |
CN105758426A (en) | Combined calibration method for multiple sensors of mobile robot | |
CN103578109A (en) | Method and device for monitoring camera distance measurement | |
CN105894511B (en) | Demarcate target setting method, device and parking assistance system | |
CN106908064B (en) | Indoor night vision navigation method based on Kinect2 sensor | |
CN207522229U (en) | CNC vision positioning systems | |
JP2005201861A (en) | Three-dimensional visual sensor | |
CN111345029A (en) | Target tracking method and device, movable platform and storage medium | |
CN104299231B (en) | Method and system for registering images of multiple sensors in real time | |
Jun | 3D modelling of small object based on the projector‐camera system | |
CN112096454A (en) | Tunnel lining crack repairing device | |
CN104123726B (en) | Heavy forging measuring system scaling method based on vanishing point | |
CN109712197B (en) | Airport runway gridding calibration method and system | |
CN116934871B (en) | Multi-objective system calibration method, system and storage medium based on calibration object | |
KR102064149B1 (en) | Apparatus for weld bead detecting and method for calibration of the same | |
CN104537627A (en) | Depth image post-processing method | |
CN111932517B (en) | Contour mapping method and device for residual plate, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20170510 |