CN108389157A - A fast stitching method for three-dimensional panoramic images - Google Patents
A fast stitching method for three-dimensional panoramic images Download PDF Info
- Publication number
- CN108389157A CN108389157A CN201810026058.1A CN201810026058A CN108389157A CN 108389157 A CN108389157 A CN 108389157A CN 201810026058 A CN201810026058 A CN 201810026058A CN 108389157 A CN108389157 A CN 108389157A
- Authority
- CN
- China
- Prior art keywords
- coordinate system
- depth camera
- camera
- binocular depth
- scaling board
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000000034 method Methods 0.000 title claims abstract description 38
- 230000009466 transformation Effects 0.000 claims abstract description 20
- 239000011159 matrix material Substances 0.000 claims description 29
- 238000013519 translation Methods 0.000 claims description 15
- 238000001514 detection method Methods 0.000 claims description 5
- 230000000007 visual effect Effects 0.000 claims description 5
- 238000000354 decomposition reaction Methods 0.000 claims description 4
- 230000005540 biological transmission Effects 0.000 claims description 3
- 238000013507 mapping Methods 0.000 claims description 3
- 230000002159 abnormal effect Effects 0.000 claims 1
- 238000005516 engineering process Methods 0.000 description 5
- 238000005457 optimization Methods 0.000 description 4
- 238000010586 diagram Methods 0.000 description 2
- 238000003384 imaging method Methods 0.000 description 2
- 230000009286 beneficial effect Effects 0.000 description 1
- 238000004364 calculation method Methods 0.000 description 1
- 238000004891 communication Methods 0.000 description 1
- 230000007812 deficiency Effects 0.000 description 1
- 238000013461 design Methods 0.000 description 1
- 239000003814 drug Substances 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 238000000605 extraction Methods 0.000 description 1
- 230000004927 fusion Effects 0.000 description 1
- 238000006467 substitution reaction Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T7/85—Stereo camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
- G06T2207/10012—Stereo images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30204—Marker
- G06T2207/30208—Marker matrix
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a fast stitching method for three-dimensional panoramic images. First, the intrinsic parameters and distortion parameters of each camera are obtained with a conventional calibration method. The calibration parameters are then used to obtain the position transformation relationship between the RGB camera coordinate system and the depth camera coordinate system, and a three-dimensional image is generated. Next, machine vision is used to identify the three-dimensional position of a calibration board in each camera coordinate system, so that the three-dimensional images from different cameras can be transformed into the coordinate system of the same calibration object, completing the coarse registration of the three-dimensional stitching. The ICP iterative algorithm is then used to optimize the stitching precision of the three-dimensional images, after which the position transformation relationship of each camera coordinate system is saved, generating a coordinate-transformation lookup table. The lookup table only needs to be generated once: as long as the position of each camera remains unchanged, the table maps the three-dimensional images from the coordinate systems of the different cameras into the same coordinate system, completing the fast stitching of the three-dimensional panoramic image.
Description
Technical field
The present invention relates to the fields of three-dimensional detection and three-dimensional reconstruction, and in particular to a method for fast stitching of three-dimensional panoramic images.
Background technology
Three-dimensional panoramic image stitching is a technique for obtaining the spatial coordinate information of the surrounding environment and objects within a large field of view. A three-dimensional panoramic image can provide a 360-degree three-dimensional model of an object and reflect its true shape well, so fast three-dimensional image stitching technology has enormous application prospects in fields such as the military, cultural relics, medicine, education, and industrial inspection.
Image stitching technology has a long history: it combines a set of overlapping images into one large seamless image, with advantages such as removing redundant information and reducing storage requirements. A common registration approach is feature-point-based stitching, which extracts the feature points of the overlapping images, computes the image transformation relating the corresponding points, and then fuses the overlapping regions through a geometric transformation to complete the stitching. Three-dimensional point-cloud stitching methods can likewise be divided into two stages, coarse stitching and fine stitching: coarse stitching registers the point clouds from different coordinate systems into a unified coordinate system at relatively rough precision; then, to minimize the stitching error, an iterative algorithm is applied to neighbouring point clouds to obtain the optimal stitching result.
The prior art does not yet offer a panoramic stitching scheme that combines machine vision and iterative optimization algorithms with network communication to complete the stitching of three-dimensional images simply and efficiently; the present invention solves this problem.
Invention content
To remedy the deficiencies of the prior art, the purpose of the present invention is to provide a method for fast stitching of three-dimensional panoramic images. Using machine vision and iterative optimization algorithms in a server/client architecture, the present invention maps the three-dimensional images from different cameras into the same coordinate system and completes the stitching of the three-dimensional images simply and efficiently.
To achieve the above objective, the present invention adopts the following technical scheme:
A fast stitching method for three-dimensional panoramic images includes the following steps:
Step 1: Using a chessboard calibration board, calibrate the RGB camera coordinate system and the infrared depth camera coordinate system of each binocular depth camera simultaneously, and use a calibration algorithm to obtain the intrinsic matrix, radial distortion parameters, and tangential distortion parameters of the image acquisition device, as well as its extrinsic parameters relative to the plane to be measured.
Step 2: From the extrinsic calibration parameters, obtain the position transformation relationship between the RGB camera coordinate system and the infrared depth camera coordinate system, and complete the three-dimensional reconstruction of the three-dimensional image in a unified coordinate system.
Step 3: Identify the coding of the calibration board and detect the corner points of the identified board.
Step 4: Place the calibration board midway between two adjacent binocular depth cameras, so that it appears completely in the field of view of both cameras at the same time; then use the RGB cameras of the two binocular depth cameras to detect and identify the board in their fields of view, obtaining the position transformation relationship between each adjacent binocular depth camera coordinate system and the same calibration-board coordinate system.
Step 5: With multiple binocular depth cameras fixed in place, obtain the position transformation relationship between each adjacent binocular depth camera coordinate system and the same calibration-board coordinate system by the method of Step 4. Then, taking the world coordinate system of one binocular depth camera as the center, obtain the transformation from every other binocular depth camera to the final world coordinate system, and unify the three-dimensional images in each binocular depth camera coordinate system into one coordinate system by rotation and translation, completing the coarse registration. Next, use the ICP algorithm to optimize the registration precision of the entire panoramic rig, and finally save the position transformation relationship from each binocular depth camera coordinate system to the world coordinate system, generating a position-transformation lookup table.
Step 6: Read the image information of the binocular depth cameras with a client program and complete the three-dimensional image reconstruction.
In the aforesaid fast stitching method for three-dimensional panoramic images,
Step 1: Using a chessboard calibration board, calibrate the RGB camera coordinate system and the infrared depth camera coordinate system of the binocular depth camera simultaneously, and use a calibration algorithm to obtain the intrinsic matrix of the image acquisition device, the radial distortion parameters k1, k2, k3, the tangential distortion parameters p1, p2, and the extrinsic parameters of the image acquisition device relative to the plane to be measured. The extrinsic parameters include a rotation vector and a translation vector.
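The radial parameters k1, k2, k3 and tangential parameters p1, p2 of Step 1 correspond to the widely used Brown-Conrady distortion model. A minimal NumPy sketch of applying that model to normalized image coordinates follows (the function name `distort` is ours, not the patent's; this illustrates the parameter meanings only):

```python
import numpy as np

def distort(xy, k1, k2, k3, p1, p2):
    """Apply the radial (k1, k2, k3) and tangential (p1, p2) distortion
    model to a normalized image coordinate (x, y)."""
    x, y = xy
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return np.array([x_d, y_d])
```

Correcting a captured frame (as in Step a below) amounts to inverting this mapping, which calibration libraries do numerically.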
In the fast stitching method for three-dimensional panoramic images according to claim 1, the specific method in Step 3 for identifying the coding of the calibration board and detecting the corner points of the identified board is:
Step a: read a video frame, detect the calibration chessboard, and correct the distortion of the image;
Step b: generate a grayscale image, then perform thresholding, edge extraction, contour search, and contour-area checking;
Step c: apply an affine transformation;
Step d: read the encoded value and determine the position of the calibration-board center in the infrared depth camera coordinate system and the RGB camera coordinate system; then, from the mapping between the infrared depth camera coordinate system and the RGB camera coordinate system obtained in Step 2, obtain the three-dimensional position of the calibration board in the depth camera world coordinate system; finally, use a singular value decomposition (SVD) algorithm to obtain the position transformation relationship between the calibration-board coordinate system and the binocular depth camera coordinate system, x_m = R_s(x_s - t_s), where x_s is any point in the binocular depth camera coordinate system, R_s is the rotation matrix from the binocular depth camera coordinate system to the calibration-board coordinate system, t_s is the translation matrix from the binocular depth camera coordinate system to the calibration-board coordinate system, and x_m is the corresponding point in the calibration-board coordinate system.
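The SVD step can be sketched as the standard Kabsch alignment of matched board points. The following NumPy sketch is illustrative (the patent does not give its exact procedure; the function name is ours). It recovers R_s and t_s such that x_m = R_s(x_s - t_s) from matched 3-D points:

```python
import numpy as np

def fit_rigid_transform(xs, xm):
    """Estimate R, t with x_m = R (x_s - t) from matched N x 3 point sets
    xs (camera frame) and xm (board frame), via SVD (Kabsch algorithm)."""
    cs, cm = xs.mean(axis=0), xm.mean(axis=0)
    H = (xs - cs).T @ (xm - cm)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cs - R.T @ cm                    # from x_m = R (x_s - t)
    return R, t
```

With exact correspondences (e.g. detected board corners against the known board layout), this recovers the pose in closed form.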
In the aforesaid fast stitching method for three-dimensional panoramic images, the number of binocular depth cameras in Step 5 is eight.
In the aforesaid fast stitching method for three-dimensional panoramic images, Step 6 reads the image information of the binocular depth cameras with client programs and completes the three-dimensional image reconstruction. Each binocular depth camera is connected to one client program; the three-dimensional image collected by each client program is mapped into the unified coordinate system using the position-transformation lookup table, and the three-dimensional images from the different binocular depth camera coordinate systems are then transferred to the server over the network, completing the stitching of the three-dimensional panoramic image.
In the aforesaid fast stitching method for three-dimensional panoramic images, each binocular depth camera is composed of an RGB camera and an infrared depth camera.
In the aforesaid fast stitching method for three-dimensional panoramic images, the angle between adjacent binocular depth cameras is 45 degrees.
The beneficial effects of the invention are as follows. The present invention provides a method for fast stitching of three-dimensional panoramic images. First, the intrinsic parameters and distortion parameters of each camera are obtained with a conventional calibration method; the calibration parameters are then used to obtain the position transformation relationship between the RGB camera coordinate system and the depth camera coordinate system, and a three-dimensional image is generated. Next, machine vision identifies the three-dimensional position of the calibration board in each camera coordinate system, so that the three-dimensional images from different cameras can be transformed into the coordinate system of the same calibration object, completing the coarse registration of the three-dimensional stitching. An iterative algorithm then optimizes the stitching precision of the three-dimensional images, and the position transformation relationship of each camera coordinate system is saved, generating a coordinate-transformation lookup table. The lookup table only needs to be generated once: as long as the position of each camera remains unchanged, it maps the three-dimensional images from the coordinate systems of the different cameras into the same coordinate system, completing the fast stitching of the three-dimensional panoramic image.
Description of the drawings
Fig. 1 is a structural schematic diagram of an embodiment of the hardware of the present invention;
Fig. 2 is a structural schematic diagram of an embodiment of the placement positions of the binocular depth cameras of the present invention.
Specific implementation mode
The present invention is introduced in detail below with reference to the drawings and specific embodiments.
Three-dimensional images are acquired by binocular depth cameras. In a preferred embodiment, each binocular depth camera is composed of an RGB camera and an infrared depth camera. The RGB image and the depth information are used by the client to complete the three-dimensional reconstruction; the client and the server communicate over the network, and the three-dimensional images of multiple fields of view are transformed into the same coordinate system at the server in real time.
The angle between adjacent cameras is related to the imaging field of view of a single camera. To make the stitching more precise, the angle between adjacent cameras should be as small as possible; but to complete a 360° panoramic stitching scheme, a smaller angle between adjacent cameras means more cameras are needed. Considering this trade-off, and given the imaging field of view of a typical depth camera, the angle between adjacent binocular depth cameras is set to 45°; a complete 360° three-dimensional stitching system therefore requires 8 depth cameras of the same type.
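The camera-count arithmetic (360° divided by a 45° spacing gives 8 cameras) and the corresponding ring of orientations can be sketched as follows. This is illustrative only; the yaw-about-z axis convention is our assumption, not stated in the patent:

```python
import numpy as np

def ring_of_cameras(step_deg=45):
    """Yaw rotation matrices for depth cameras spaced step_deg apart
    around a full 360-degree ring (8 cameras for a 45-degree spacing)."""
    n = 360 // step_deg
    mats = []
    for i in range(n):
        a = np.deg2rad(i * step_deg)
        mats.append(np.array([[np.cos(a), -np.sin(a), 0],
                              [np.sin(a),  np.cos(a), 0],
                              [0,          0,         1]]))
    return mats

cams = ring_of_cameras(45)
```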
A fast stitching method for three-dimensional panoramic images includes the following steps:
Step 1: Using a chessboard calibration board, calibrate the RGB camera coordinate system and the infrared depth camera coordinate system of the binocular depth camera simultaneously, and use a calibration algorithm to obtain the intrinsic matrix of the image acquisition device, the radial distortion parameters k1, k2, k3, the tangential distortion parameters p1, p2, and the extrinsic parameters of the image acquisition device relative to the plane to be measured; the extrinsic parameters include a rotation vector and a translation vector.
Step 2: From the extrinsic calibration parameters, obtain the position transformation relationship between the RGB camera coordinate system and the infrared depth camera coordinate system, and complete the three-dimensional reconstruction of the three-dimensional image in a unified coordinate system.
Step 3: Identify the coding of the calibration board and detect the corner points of the identified board. The specific method is:
Step a: read a video frame, detect the calibration chessboard, and correct the distortion of the image;
Step b: generate a grayscale image, then perform thresholding, edge extraction, contour search, and contour-area checking;
Step c: apply an affine transformation;
Step d: read the encoded value and determine the position of the calibration-board center in the infrared depth camera coordinate system and the RGB camera coordinate system; then, from the mapping between the infrared depth camera coordinate system and the RGB camera coordinate system obtained in Step 2, obtain the three-dimensional position of the calibration board in the depth camera world coordinate system; finally, use a singular value decomposition (SVD) algorithm to obtain the position transformation relationship between the calibration-board coordinate system and the binocular depth camera coordinate system, x_m = R_s(x_s - t_s), where x_s is any point in the binocular depth camera coordinate system, R_s is the rotation matrix from the binocular depth camera coordinate system to the calibration-board coordinate system, t_s is the translation matrix from the binocular depth camera coordinate system to the calibration-board coordinate system, and x_m is the corresponding point in the calibration-board coordinate system.
Step 4: Place the calibration board midway between two adjacent binocular depth cameras, so that it appears completely in the field of view of both cameras at the same time; then use the RGB cameras of the two binocular depth cameras to detect and identify the board in their fields of view, obtaining the position transformation relationship between each adjacent binocular depth camera coordinate system and the same calibration-board coordinate system.
Step 5: With the 8 binocular depth cameras fixed in place, obtain the position transformation relationship between each adjacent binocular depth camera coordinate system and the same calibration-board coordinate system by the method of Step 4. Then, taking the world coordinate system of one binocular depth camera as the center, obtain the transformation from every other binocular depth camera to the final world coordinate system, and unify the three-dimensional images in each binocular depth camera coordinate system into one coordinate system by rotation and translation, completing the coarse registration. Next, use the ICP algorithm to optimize the registration precision of the entire panoramic rig, and finally save the position transformation relationship from each binocular depth camera coordinate system to the world coordinate system, generating the position-transformation lookup table.
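The ICP refinement in Step 5 can be illustrated with a minimal point-to-point variant. This is a sketch under our own assumptions (brute-force nearest neighbours, no outlier rejection, fixed iteration count); the patent does not specify its ICP implementation:

```python
import numpy as np

def icp(src, dst, iters=20):
    """Minimal point-to-point ICP: refine R, t so that R @ x + t maps
    the source cloud src onto the destination cloud dst (both N x 3)."""
    R, t = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest neighbour in dst for every point of cur
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[d.argmin(axis=1)]
        # best rigid transform cur -> matched (Kabsch / SVD)
        cc, cm = cur.mean(axis=0), matched.mean(axis=0)
        H = (cur - cc).T @ (matched - cm)
        U, _, Vt = np.linalg.svd(H)
        Ri = Vt.T @ U.T
        if np.linalg.det(Ri) < 0:
            Vt[-1] *= -1
            Ri = Vt.T @ U.T
        ti = cm - Ri @ cc
        cur = cur @ Ri.T + ti
        R, t = Ri @ R, Ri @ t + ti   # compose with the running estimate
    return R, t
```

Because the coarse registration of Step 5 already brings the clouds close together, the nearest-neighbour matching starts out largely correct, which is what lets this refinement converge.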
Step 6: Read the image information of the binocular depth cameras with client programs and complete the three-dimensional image reconstruction. Each binocular depth camera is connected to one client program; the three-dimensional image collected by each client program is mapped into the unified coordinate system using the position-transformation lookup table, and the three-dimensional images from the different binocular depth camera coordinate systems are then transferred to the server over the network, completing the stitching of the three-dimensional panoramic image.
To further illustrate the present invention, a concrete calculation is worked through in the following example. When two adjacent depth cameras, camera 1 and camera 2, simultaneously identify and detect the active calibration board, we obtain:
x_1m = R_1c(x_1c - T_1c)
x_2m = R_2c(x_2c - T_2c)
where x_1c is any point in the coordinate system of camera 1, R_1c and T_1c are the rotation matrix and translation matrix from the first depth camera coordinate system to the calibration-board coordinate system, and x_1m is the coordinate value of x_1c in the calibration-board coordinate system; likewise, x_2c is any point in the coordinate system of camera 2, R_2c and T_2c are the rotation matrix and translation matrix from the second depth camera coordinate system to the calibration-board coordinate system, and x_2m is the coordinate value of x_2c in the calibration-board coordinate system. Since both cameras are calibrated against the same calibration board, the two board coordinate systems coincide, i.e. x_1m = x_2m, so the two formulas can be merged to obtain the position of any point x_2c of the second depth camera coordinate system in the first depth camera coordinate system:
R_1c(x_1c - T_1c) = R_2c(x_2c - T_2c)
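Solving the merged equation R_1c(x_1c - T_1c) = R_2c(x_2c - T_2c) for x_1c gives x_1c = R_1c^T R_2c (x_2c - T_2c) + T_1c, i.e. the camera-2-to-camera-1 mapping. A minimal NumPy sketch (the function name is ours):

```python
import numpy as np

def cam2_to_cam1(x2, R1, T1, R2, T2):
    """Map a point from camera-2 coordinates to camera-1 coordinates using
    each camera's pose w.r.t. the shared calibration board:
    R1 (x1 - T1) = R2 (x2 - T2)  =>  x1 = R1.T @ R2 @ (x2 - T2) + T1."""
    return R1.T @ (R2 @ (x2 - T2)) + T1
```

(R1.T is used for the inverse rotation because rotation matrices are orthogonal.)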
Similarly, using the calibration board to calibrate camera i and camera i+1 simultaneously, we also obtain the transformation relationship between their coordinate systems:
R_ic(x_ic - T_ic) = R_(i+1)c(x_(i+1)c - T_(i+1)c)
where R_ic and T_ic are the rotation matrix and translation matrix from the camera coordinate system of depth camera i to the calibration-board coordinate system, and R_(i+1)c and T_(i+1)c are the rotation matrix and translation matrix from the camera coordinate system of depth camera i+1 to the calibration-board coordinate system, all of which can be obtained by the SVD method; x_ic is any point in the i-th camera coordinate system, and x_(i+1)c is its value in the (i+1)-th camera coordinate system.
From the above formula it can be seen that, by calibrating the cameras two by two, we can obtain the position transformation relationship of every pair of adjacent depth camera coordinate systems. As described in the steps above, for 8 cameras in fixed positions whose fields of view overlap sufficiently in pairs, at least 8 calibrations suffice to link the coordinate systems of the 8 fixed cameras and to unify the point cloud data of all 8 camera coordinate systems into the same coordinate system, producing the position-transformation lookup table. For example, if the 8 depth camera coordinate systems are all unified into the first camera coordinate system, the final position-transformation lookup table consists of eight rotation-and-translation matrix pairs, as obtained after iteration of the ICP optimization. A sample lookup table is as follows:

Position-transformation lookup table (sample)

Depth camera i | Rotation matrix | Translation matrix
---|---|---
1 | R_11 | T_11
2 | R_21 | T_21
... | ... | ...
8 | R_81 | T_81
In the table, R_11 and T_11 are the rotation and translation matrices that transform depth camera 1 into its own coordinate system, so the rotation matrix is the identity matrix and every entry of the translation matrix is 0. R_21 and T_21 are the rotation and translation matrices that transform the second depth camera coordinate system into the first; with these matrices, any point x_2 in the second depth camera coordinate system can be transformed into the first depth camera coordinate system using the formula x_1 = R_21(x_2 + T_21), where x_1 is the coordinate in the first depth camera coordinate system.
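Applying the table entry x_1 = R_21(x_2 + T_21) to a whole point cloud might look like the following NumPy sketch (the function name and the `lookup` dict are illustrative, not from the patent):

```python
import numpy as np

def to_world(points, R_i1, T_i1):
    """Map an N x 3 point cloud from camera i into the camera-1 (world)
    frame using the table entry (R_i1, T_i1): x1 = R_i1 (x_i + T_i1)."""
    return (points + T_i1) @ R_i1.T

# camera 1 maps to itself: identity rotation, zero translation
lookup = {1: (np.eye(3), np.zeros(3))}
```

Because the lookup table is computed once, each client only performs this single matrix multiply per frame before sending its cloud to the server.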
Thus, after the coarse registration is completed with the self-designed calibration board proposed in this patent, the ICP algorithm is used for precise calibration and iteration to obtain the position-transformation lookup table described above. Any point x_i in the coordinate system of depth camera i (i = 1, 2, 3, ..., 8) can then be unified into the same coordinate system with the rotation and translation matrices R_i1, T_i1 (i = 1, 2, 3, ..., 8) in the table, which denote the rotation matrix and translation matrix from the i-th depth camera to the first depth camera, using the formula x_1 = R_i1(x_i + T_i1). The world coordinate system in the sample lookup table is the first depth camera coordinate system; in this patent, the world coordinate system may be placed at any position.
The present invention provides a method for fast stitching of three-dimensional panoramic images. First, the intrinsic parameters and distortion parameters of each camera are obtained with a conventional calibration method; the calibration parameters are then used to obtain the position transformation relationship between the RGB camera coordinate system and the depth camera coordinate system, and a three-dimensional image is generated. Next, machine vision identifies the three-dimensional position of the calibration board in each camera coordinate system, so that the three-dimensional images from different cameras can be transformed into the coordinate system of the same calibration object, completing the coarse registration of the three-dimensional stitching. An iterative algorithm then optimizes the stitching precision of the three-dimensional images, the position transformation relationship of each camera coordinate system is saved, and a coordinate-transformation lookup table is generated. The lookup table only needs to be generated once: as long as the position of each camera remains unchanged, it maps the three-dimensional images from the coordinate systems of the different cameras into the same coordinate system, completing the fast stitching of the three-dimensional panoramic image.
The basic principles, main features, and advantages of the invention have been shown and described above. Those skilled in the art should understand that the invention is not limited in any way by the above embodiment, and that all technical solutions obtained by means of equivalent substitution or equivalent transformation fall within the protection scope of the present invention.
Claims (7)
1. A fast stitching method for three-dimensional panoramic images, characterized by comprising the following steps:
Step 1: Using a chessboard calibration board, calibrate the RGB camera coordinate system and the infrared depth camera coordinate system of the binocular depth camera simultaneously, and use a calibration algorithm to obtain the intrinsic matrix, radial distortion parameters, and tangential distortion parameters of the image acquisition device, as well as its extrinsic parameters relative to the plane to be measured;
Step 2: From the extrinsic calibration parameters, obtain the position transformation relationship between the RGB camera coordinate system and the infrared depth camera coordinate system, and complete the three-dimensional reconstruction of the three-dimensional image in a unified coordinate system;
Step 3: Identify the coding of the calibration board and detect the corner points of the identified board;
Step 4: Place the calibration board midway between two adjacent binocular depth cameras, so that it appears completely in the field of view of both cameras at the same time; then use the RGB cameras of the two binocular depth cameras to detect and identify the board in their fields of view, obtaining the position transformation relationship between each adjacent binocular depth camera coordinate system and the same calibration-board coordinate system;
Step 5: With multiple binocular depth cameras fixed in place, obtain the position transformation relationship between each adjacent binocular depth camera coordinate system and the same calibration-board coordinate system by the method of Step 4; then, taking the world coordinate system of one binocular depth camera as the center, obtain the transformation from every other binocular depth camera to the final world coordinate system, and unify the three-dimensional images in each binocular depth camera coordinate system into one coordinate system by rotation and translation, completing the coarse registration; next, use the ICP algorithm to optimize the registration precision of the entire panoramic rig, and finally save the position transformation relationship from each binocular depth camera coordinate system to the world coordinate system, generating a position-transformation lookup table;
Step 6: Read the image information of the binocular depth cameras with a client program and complete the three-dimensional image reconstruction.
2. The fast stitching method for three-dimensional panoramic images according to claim 1, characterized in that
Step 1: Using a chessboard calibration board, calibrate the RGB camera coordinate system and the infrared depth camera coordinate system of the binocular depth camera simultaneously, and use a calibration algorithm to obtain the intrinsic matrix of the image acquisition device, the radial distortion parameters k1, k2, k3, the tangential distortion parameters p1, p2, and the extrinsic parameters of the image acquisition device relative to the plane to be measured; the extrinsic parameters include a rotation vector and a translation vector.
3. The fast stitching method for three-dimensional panoramic images according to claim 1, characterized in that in Step 3 the coding of the calibration board is identified and the corner points of the identified board are detected by the following method:
Step a: read a video frame, detect the calibration chessboard, and correct the distortion of the image;
Step b: generate a grayscale image, then perform thresholding, edge extraction, contour search, and contour-area checking;
Step c: apply an affine transformation;
Step d: read the encoded value and determine the position of the calibration-board center in the infrared depth camera coordinate system and the RGB camera coordinate system; then, from the mapping between the infrared depth camera coordinate system and the RGB camera coordinate system obtained in Step 2, obtain the three-dimensional position of the calibration board in the depth camera world coordinate system; finally, use a singular value decomposition algorithm to obtain the position transformation relationship between the calibration-board coordinate system and the binocular depth camera coordinate system, x_m = R_s(x_s - t_s), where x_s is any point in the binocular depth camera coordinate system, R_s is the rotation matrix from the binocular depth camera coordinate system to the calibration-board coordinate system, t_s is the translation matrix from the binocular depth camera coordinate system to the calibration-board coordinate system, and x_m is the corresponding point in the calibration-board coordinate system.
4. The fast stitching method for three-dimensional panoramic images according to claim 1, characterized in that the number of binocular depth cameras in Step 5 is 8.
5. The fast stitching method for three-dimensional panoramic images according to claim 1, characterized in that Step 6 reads the image information of the binocular depth cameras with client programs and completes the three-dimensional image reconstruction; each binocular depth camera is connected to one client program, the three-dimensional image collected by each client program is mapped into the unified coordinate system using the position-transformation lookup table, and the three-dimensional images from the different binocular depth camera coordinate systems are then transferred to the server over the network, completing the stitching of the three-dimensional panoramic image.
6. The fast stitching method for three-dimensional panoramic images according to claim 1, wherein each binocular depth camera consists of an RGB camera and an infrared depth camera.
7. The fast stitching method for three-dimensional panoramic images according to claim 1, wherein the angle between adjacent binocular depth cameras is 45 degrees.
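With eight cameras (claim 4) spaced 45 degrees apart (claim 7), the ring closes exactly: 8 × 45° = 360°, so the views tile a full panorama. A small sketch of the implied placement (the yaw convention is illustrative; the patent does not specify a mounting origin):

```python
import math

NUM_CAMERAS = 8      # claim 4
SPACING_DEG = 45.0   # claim 7

# Yaw of each camera around the ring (illustrative convention:
# camera 0 faces yaw 0, angles increase counter-clockwise).
yaws = [i * SPACING_DEG for i in range(NUM_CAMERAS)]

# The gap from the last camera back to camera 0 is also 45 degrees,
# so the eight views close the 360-degree ring.
closure = 360.0 - yaws[-1]
assert math.isclose(closure, SPACING_DEG)
```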
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810026058.1A CN108389157A (en) | 2018-01-11 | 2018-01-11 | A kind of quick joining method of three-dimensional panoramic image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810026058.1A CN108389157A (en) | 2018-01-11 | 2018-01-11 | A kind of quick joining method of three-dimensional panoramic image |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108389157A true CN108389157A (en) | 2018-08-10 |
Family
ID=63076116
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810026058.1A Pending CN108389157A (en) | 2018-01-11 | 2018-01-11 | A kind of quick joining method of three-dimensional panoramic image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108389157A (en) |
- 2018-01-11: application CN201810026058.1A filed in China; published as CN108389157A/en; status: active, Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106657809A (en) * | 2016-12-13 | 2017-05-10 | 深圳先进技术研究院 | Panoramic 3D video stitching system and method |
CN106600654A (en) * | 2017-01-24 | 2017-04-26 | 浙江四点灵机器人股份有限公司 | Large viewing angle depth camera splicing device and splicing method |
CN107016704A (en) * | 2017-03-09 | 2017-08-04 | 杭州电子科技大学 | A kind of virtual reality implementation method based on augmented reality |
Non-Patent Citations (3)
Title |
---|
Z. Zhang et al.: "A flexible new technique for camera calibration", IEEE Transactions on Pattern Analysis and Machine Intelligence *
Chen Xiaoming: "Research on Real-Time 3D Reconstruction and Filtering Algorithms Based on Kinect Depth Information", China Master's Theses Full-text Database, Information Science and Technology Series *
Gao Junqiang: "Large-Scale 3D Scene Reconstruction Based on Camera Arrays", China Master's Theses Full-text Database, Information Science and Technology Series *
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110930312A (en) * | 2018-09-19 | 2020-03-27 | 驭势(上海)汽车科技有限公司 | Method and device for generating fisheye camera image |
CN109766001A (en) * | 2018-12-29 | 2019-05-17 | 北京诺亦腾科技有限公司 | A kind of unified approach, system and the storage medium of difference MR device coordinate system |
CN110490967A (en) * | 2019-04-12 | 2019-11-22 | 北京城市网邻信息技术有限公司 | Image processing and object modeling method and device, image processing apparatus and medium |
CN110490967B (en) * | 2019-04-12 | 2020-07-17 | 北京城市网邻信息技术有限公司 | Image processing method, image processing apparatus, object modeling method, object modeling apparatus, image processing apparatus, object modeling apparatus, and medium |
CN110363838A (en) * | 2019-06-06 | 2019-10-22 | 浙江大学 | Large field-of-view image three-dimensional reconstruction optimization method based on multiple spherical camera models |
CN110599546A (en) * | 2019-08-28 | 2019-12-20 | 贝壳技术有限公司 | Method, system, device and storage medium for acquiring three-dimensional space data |
CN111559314B (en) * | 2020-04-27 | 2021-08-24 | 长沙立中汽车设计开发股份有限公司 | Depth and image information fused 3D enhanced panoramic looking-around system and implementation method |
CN111559314A (en) * | 2020-04-27 | 2020-08-21 | 长沙立中汽车设计开发股份有限公司 | Depth and image information fused 3D enhanced panoramic looking-around system and implementation method |
CN111626930A (en) * | 2020-04-30 | 2020-09-04 | 兰州大学 | Omnibearing three-dimensional photographing method |
CN111986086A (en) * | 2020-08-27 | 2020-11-24 | 贝壳技术有限公司 | Three-dimensional image optimization generation method and system |
CN112422848A (en) * | 2020-11-17 | 2021-02-26 | 深圳市歌华智能科技有限公司 | Video splicing method based on depth map and color map |
CN112422848B (en) * | 2020-11-17 | 2024-03-29 | 深圳市歌华智能科技有限公司 | Video stitching method based on depth map and color map |
CN112614191A (en) * | 2020-12-16 | 2021-04-06 | 江苏智库智能科技有限公司 | Loading and unloading position detection method, device and system based on binocular depth camera |
CN112614191B (en) * | 2020-12-16 | 2024-05-24 | 江苏智库智能科技有限公司 | Loading and unloading position detection method, device and system based on binocular depth camera |
CN112734921A (en) * | 2021-01-11 | 2021-04-30 | 燕山大学 | Underwater three-dimensional map construction method based on sonar and visual image splicing |
CN112734921B (en) * | 2021-01-11 | 2022-07-19 | 燕山大学 | Underwater three-dimensional map construction method based on sonar and visual image splicing |
CN114007014A (en) * | 2021-10-29 | 2022-02-01 | 北京环境特性研究所 | Method and device for generating panoramic image, electronic equipment and storage medium |
CN114007014B (en) * | 2021-10-29 | 2023-06-16 | 北京环境特性研究所 | Method and device for generating panoramic image, electronic equipment and storage medium |
WO2023174068A1 (en) * | 2022-03-18 | 2023-09-21 | 上海寒武纪信息科技有限公司 | Data acquisition method and apparatus, device, and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108389157A (en) | A kind of quick joining method of three-dimensional panoramic image | |
CN112894832B (en) | Three-dimensional modeling method, three-dimensional modeling device, electronic equipment and storage medium | |
CN103106688B (en) | Indoor three-dimensional scene reconstruction method based on two-layer registration | |
CN104867160B (en) | A directional calibration target for calibrating camera intrinsic and extrinsic parameters | |
CN107833181B (en) | Three-dimensional panoramic image generation method based on zoom stereo vision | |
CN107154014B (en) | Real-time color and depth panoramic image splicing method | |
CN108648240B (en) | Non-overlapping field-of-view camera pose calibration method based on point cloud feature map registration | |
CN107705252B (en) | Method and system suitable for splicing, unfolding and correcting binocular fisheye image | |
CN110211043B (en) | A grid-optimization-based registration method for panoramic image stitching | |
Mistry et al. | Image stitching using Harris feature detection | |
CN107063228A (en) | Targeted attitude calculation method based on binocular vision | |
Zou et al. | A method of stereo vision matching based on OpenCV | |
CN111553939A (en) | Image registration algorithm of multi-view camera | |
CN117036641A (en) | Road scene three-dimensional reconstruction and defect detection method based on binocular vision | |
CN101354796B (en) | Omnidirectional stereo vision three-dimensional rebuilding method based on Taylor series model | |
Khoshelham et al. | Generation and weighting of 3D point correspondences for improved registration of RGB-D data | |
CA3233222A1 (en) | Method, apparatus and device for photogrammetry, and storage medium | |
CN109292099A (en) | A kind of UAV Landing judgment method, device, equipment and storage medium | |
CN114255197A (en) | Infrared and visible light image self-adaptive fusion alignment method and system | |
CN116957987A (en) | Multi-eye polar line correction method, device, computer equipment and storage medium | |
CN112348890A (en) | Space positioning method and device and computer readable storage medium | |
CN113793266A (en) | Multi-view machine vision image splicing method, system and storage medium | |
Barazzetti et al. | Automated and accurate orientation of complex image sequences | |
CN111815511A (en) | Panoramic image splicing method | |
CN113112532A (en) | Real-time registration method for multi-ToF camera system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 2018-08-10 |