CN107578450A - Method and system for calibrating the assembly error of a panorama camera - Google Patents
Abstract
A method and system for calibrating the assembly error of a panorama camera. The method includes: establishing a monocular calibration model; establishing a binocular stereo calibration model; and calculating the assembly error of the panorama camera from the monocular calibration model and the binocular stereo calibration model. On the basis of a standard camera calibration model, the present invention calculates the angles between the coordinate axes of adjacent cameras and the rotation and translation errors of all coordinate systems; these two parameter indices describe the assembly error of a panorama camera accurately and concisely. At the same time, the present invention gives a practicable method for solving the assembly error, which can be applied in panorama camera production to compensate for the stitching accuracy problems caused by camera assembly error and thereby improve the stitching result.
Description
Technical field
The invention belongs to the field of computer vision, and in particular relates to a method and system for calibrating the assembly error of a panorama camera.
Background technology
A panorama camera uses multiple cameras to shoot a spatial scene simultaneously and outputs 360-degree panoramic video in real time. A panorama camera has at least two cameras, and its most crucial technology is the real-time stitching of the multiple video streams.
Because a panorama camera uses multiple cameras to shoot a 360-degree spatial scene, the partially overlapping regions collected by adjacent cameras are used for image fusion, and the overlapping images collected by the cameras are finally stitched into one large seamless high-definition image. Panorama camera image stitching is highly dependent on the size of the overlapping region between cameras; that size depends on the field of view of the lenses on the one hand, and is limited by the assembly error of the panorama camera on the other.
At present, there is no specific calibration method for quickly, simply, and accurately calibrating the assembly error of a panorama camera. A quantitative calibration method for panorama camera assembly error is therefore urgently needed, together with a way to determine, from the stitching algorithm, the error range a panorama camera can tolerate, so as to guide industrial production.
Summary of the invention
In the prior art, there is no specific calibration method for quickly, simply, and accurately calibrating panorama camera assembly error. To solve this problem, the present invention provides a method and system for calibrating panorama camera assembly error. The specific scheme is as follows:
A method for calibrating panorama camera assembly error, comprising the following steps:
S1, establishing a monocular calibration model;
S2, establishing a binocular stereo calibration model;
S3, calculating the assembly error of the panorama camera from the monocular calibration model and the binocular stereo calibration model.
Wherein, in the above method, establishing the monocular calibration model comprises the following steps:
Let the coordinates of a point P in the world coordinate system be Pw = (Xw, Yw, Zw)^T and its coordinates in the camera coordinate system be Pc = (Xc, Yc, Zc)^T; they satisfy the following relation:
Pc = R × Pw + T (1)
Projecting the point in the camera coordinate system into the image coordinate system, the coordinates Pc = (Xc, Yc, Zc)^T of point P in the camera coordinate system and its coordinates P = (x, y)^T in the image coordinate system satisfy the following relation:

s·(x, y, 1)^T = M·(Xc, Yc, Zc)^T, M = [fx, 0, cx; 0, fy, cy; 0, 0, 1] (2)

In formula (2), s is an arbitrary scale factor and M is the intrinsic matrix of the camera; fx denotes the equivalent focal length of the camera in the x direction of the imaging plane, fy denotes the equivalent focal length in the y direction, and (cx, cy) denotes the pixel coordinates of the principal point in the x and y directions of the imaging plane;
Wherein, the homogeneous-coordinate form of the camera calibration model is expressed as:

s·(x, y, 1)^T = M·W·(Xw, Yw, Zw, 1)^T (3)

In formula (3), (x, y, 1)^T denotes the feature point coordinates in the image plane coordinate system, and (Xw, Yw, Zw, 1)^T denotes the space feature point in the calibration board coordinate system; W denotes the physical transformation locating the object plane being observed relative to the image plane, consisting of the associated rotation R and translation T, i.e. W = [R | T].
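The monocular model of formulas (1)-(3) can be sketched numerically. All values below (the intrinsics fx, fy, cx, cy, the pose R, T, and the test point) are illustrative assumptions, not values from the patent, and NumPy stands in for an actual calibration toolchain:

```python
import numpy as np

# Assumed intrinsic matrix M of formula (2): illustrative values only.
fx, fy, cx, cy = 800.0, 800.0, 320.0, 240.0
M = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

# Assumed extrinsics W = [R | T]: identity rotation, unit translation along Z.
R = np.eye(3)
T = np.array([0.0, 0.0, 1.0])

Pw = np.array([0.1, -0.05, 4.0])          # point P in world coordinates
Pc = R @ Pw + T                           # formula (1): world -> camera
uvw = M @ Pc                              # formula (2): homogeneous image point
x, y = uvw[0] / uvw[2], uvw[1] / uvw[2]   # divide out the scale factor s (= Zc)
```

The same pixel coordinates result from applying formula (3) directly with the homogeneous point (Xw, Yw, Zw, 1)^T, since W stacks the same R and T.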
Wherein, in the above step S1, a complete calibration board image is collected with each camera of the panorama camera, and from the calibration board images collected at different views, the in-plane feature point coordinates of all calibration boards are extracted.
Wherein, in the above method, 10 to 15 calibration images are collected per camera, and the position and angle of the calibration board differ between collections.
Wherein, in the above method, establishing the binocular stereo calibration model comprises the following steps:
A binocular stereo calibration board composed of left and right sub-boards 1 and 2 is established; a first world coordinate system O1-XY and a second world coordinate system O2-XY are established for the two sub-boards 1 and 2 respectively, and the spatial relationship of the two world coordinate systems is represented by a translation vector T0;
Binocular stereo calibration board images are collected with two adjacent cameras, namely a left camera and a right camera. If the coordinates of an arbitrary point P in space in the world coordinate systems of the two sub-boards 1 and 2 are X1 and X2 respectively, then X1 and X2 satisfy the following relation:
X2 = X1 - T0 (4)
The coordinates of the spatial point P in the left camera coordinate system are Xl and in the right camera coordinate system Xr; they satisfy the following conversion relation:
Xl = Rl × X1 + Tl, Xr = Rr × X2 + Tr (5)
In formula (5), Rl and Tl denote the extrinsic parameters of the left camera, and Rr and Tr denote the extrinsic parameters of the right camera;
Eliminating X1 and X2 from formulas (4) and (5) gives:
Xr = Rr × Rl^-1 × Xl - Rr × T0 + Tr - Rr × Rl^-1 × Tl (6)
From this it can be derived that:
R = Rr × Rl^-1, T = -Rr × T0 + Tr - Rr × Rl^-1 × Tl (7)
In formula (7), R and T are the rotation matrix and translation vector whose transformation takes the left camera coordinate system to the right camera coordinate system; R and T are the result of the binocular stereo calibration between adjacent cameras.
Wherein, in the above step S2, the binocular stereo calibration board images of the different views of the panorama camera are collected, the in-plane feature point coordinates of all binocular stereo calibration boards are extracted, and, using the binocular stereo calibration model, the rotation matrix R and translation vector T between adjacent cameras of the panorama camera are calculated.
Wherein, in the above method, the two sub-boards 1 and 2 of the binocular stereo calibration board use a checkerboard pattern, and all checker squares are of the same size.
Wherein, in the above method, when collecting binocular stereo calibration board images, the adjacent left and right cameras shoot the binocular stereo calibration board simultaneously; when an image is collected, each of the sub-boards 1 and 2 is present in the field of view of only one camera, and 8 to 12 images of the binocular stereo calibration board are collected.
Wherein, in the above method, calculating the assembly error of the panorama camera comprises the following steps:
Step S2 has yielded the 3 × 3 rotation matrix R between adjacent camera coordinate systems of the panorama camera. The first column of R represents the rotation vector of the X axis between adjacent camera coordinate systems and is denoted Vx; the second column represents the rotation vector of the Y axis and is denoted Vy; the third column represents the rotation vector of the Z axis and is denoted Vz;
Let a standard vector be S = (0, 0, 1)^T, and denote the angles between the X axes, the Y axes, and the Z axes of adjacent camera coordinate systems by Anglex, Angley, and Anglez respectively. The calculation formulas for Anglex, Angley, and Anglez are then:

Anglex = acos(dot(Vx, S)/(norm(Vx)·norm(S)))
Angley = acos(dot(Vy, S)/(norm(Vy)·norm(S)))
Anglez = acos(dot(Vz, S)/(norm(Vz)·norm(S)))

where dot() denotes the dot product of two vectors and norm() denotes the norm of a vector. The angles between the coordinate axes are the assembly error of the panorama camera.
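A minimal sketch of the angle computation above, assuming the reconstructed dot/norm formula (the original formula image is missing from this text); the function name axis_angles is hypothetical:

```python
import numpy as np

def axis_angles(R):
    """Angle, in degrees, between each column of the 3x3 rotation matrix R
    (the rotation vectors Vx, Vy, Vz) and the standard vector S = (0, 0, 1)^T."""
    S = np.array([0.0, 0.0, 1.0])
    angles = []
    for i in range(3):
        V = R[:, i]
        cos_a = np.dot(V, S) / (np.linalg.norm(V) * np.linalg.norm(S))
        # Clip guards against rounding just outside [-1, 1] before acos.
        angles.append(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))
    return angles  # [Anglex, Angley, Anglez]
```

For the identity rotation (no assembly error between the axes), the X and Y columns are perpendicular to S and the Z column is parallel to it, giving angles of 90, 90, and 0 degrees.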
In the method of the present invention for calibrating panorama camera assembly error, first a monocular calibration model is established; secondly, a binocular stereo calibration model is established; then the assembly error of the panorama camera is calculated from the monocular calibration model and the binocular stereo calibration model. On the basis of a standard camera calibration model, the present invention calculates the angles between the coordinate axes of adjacent cameras and the rotation and translation errors of all coordinate systems; these two parameter indices describe the assembly error of a panorama camera accurately and concisely. At the same time, the present invention gives a practicable method for solving the assembly error, which can be applied in panorama camera production to compensate for the stitching accuracy problems caused by camera assembly error and thereby improve the stitching result.
According to another aspect of the present invention, there is also provided a system for calibrating panorama camera assembly error, including:
a monocular calibration model establishing module, for establishing a monocular calibration model;
a binocular stereo calibration model establishing module, for establishing a binocular stereo calibration model;
an assembly error calculating module, for calculating the assembly error of the panorama camera from the monocular calibration model and the binocular stereo calibration model.
In the system of the present invention for calibrating panorama camera assembly error, first a monocular calibration model is established; secondly, a binocular stereo calibration model is established; then the assembly error of the panorama camera is calculated from the monocular calibration model and the binocular stereo calibration model. On the basis of a standard camera calibration model, the present invention calculates the angles between the coordinate axes of adjacent cameras and the rotation and translation errors of all coordinate systems; these two parameter indices describe the assembly error of a panorama camera accurately and concisely. At the same time, the present invention gives a practicable method for solving the assembly error, which can be applied in panorama camera production to compensate for the stitching accuracy problems caused by camera assembly error and thereby improve the stitching result.
Brief description of the drawings
Fig. 1 is a flow diagram of an implementation of the method provided by the present invention for calibrating panorama camera assembly error;
Fig. 2 is a schematic diagram of the monocular calibration board of the present invention;
Fig. 3 is a schematic diagram of the binocular stereo calibration board of the present invention;
Fig. 4 is a schematic diagram of the image collecting device of the present invention;
Fig. 5 is a structural block diagram of an implementation of the system provided by the present invention for calibrating panorama camera assembly error.
Detailed description of the embodiments
To make the objects, technical solutions, and advantages of the present invention clearer, the present invention is described in more detail below in combination with specific embodiments and with reference to the accompanying drawings. It should be understood that these descriptions are merely illustrative and are not intended to limit the scope of the present invention. In addition, descriptions of well-known structures and techniques are omitted below to avoid unnecessarily obscuring the concepts of the present invention.
The method provided by the present invention for calibrating panorama camera assembly error can be applied in panorama camera production to compensate for the stitching accuracy problems caused by camera assembly error and thereby improve the stitching result. As shown in Fig. 1, it specifically comprises the following steps:
Step S1, establishing a monocular calibration model;
In step S1, a complete calibration board image is collected with each camera of the panorama camera, and from the calibration board images collected at different views, the in-plane feature point coordinates of all calibration boards are extracted;
Step S2, establishing a binocular stereo calibration model;
In step S2, the binocular stereo calibration board images of the different views of the panorama camera are collected, the in-plane feature point coordinates of all binocular stereo calibration boards are extracted, and, using the binocular stereo calibration model, the rotation matrix R and translation vector T between adjacent cameras of the panorama camera are calculated;
Step S3, calculating the assembly error of the panorama camera from the monocular calibration model and the binocular stereo calibration model.
It should be noted here that, in the monocular calibration model and the binocular stereo calibration model of the present invention, the coordinate mapping between the world coordinate system W-XYZ and the camera coordinate system C-XYZ is represented by a rotation matrix R and a translation vector T, where R and T are the extrinsic parameters of the camera: the camera coordinate system C-XYZ yields the world coordinate system W-XYZ after transformation by the translation vector T and the rotation matrix R.
Specifically, in the above method, establishing the monocular calibration model comprises the following steps:
Let the coordinates of a point P in the world coordinate system be Pw = (Xw, Yw, Zw)^T and its coordinates in the camera coordinate system be Pc = (Xc, Yc, Zc)^T; they satisfy the following relation:
Pc = R × Pw + T (1)
Because the camera coordinate system C-XYZ is three-dimensional while the image coordinate system I-XY is two-dimensional, the point in the camera coordinate system is projected into the image coordinate system; the coordinates Pc = (Xc, Yc, Zc)^T of point P in the camera coordinate system and its coordinates P = (x, y)^T in the image coordinate system satisfy the following relation:

s·(x, y, 1)^T = M·(Xc, Yc, Zc)^T, M = [fx, 0, cx; 0, fy, cy; 0, 0, 1] (2)

In formula (2), s is an arbitrary scale factor and M is the intrinsic matrix of the camera; fx denotes the equivalent focal length of the camera in the x direction of the imaging plane, fy denotes the equivalent focal length in the y direction, and (cx, cy) denotes the pixel coordinates of the principal point in the x and y directions of the imaging plane.
As described above, the homogeneous-coordinate form of the camera calibration model is expressed as:

s·(x, y, 1)^T = M·W·(Xw, Yw, Zw, 1)^T (3)

In formula (3), (x, y, 1)^T denotes the feature point coordinates in the image plane coordinate system, and (Xw, Yw, Zw, 1)^T denotes the space feature point in the calibration board coordinate system; W denotes the physical transformation locating the object plane being observed relative to the image plane, consisting of the associated rotation R and translation T, i.e. W = [R | T].
Fig. 2 is a schematic diagram of the monocular calibration board of the present invention. As shown in Fig. 2, in the above step S1, 10 to 15 calibration images are collected per camera, and the position and angle of the calibration board differ between collections; in this way the accuracy of the calibration result is high and the amount of computation is moderate. From the calibration board images collected at different views, the in-plane feature point coordinates of all calibration boards are extracted, and, using the camera calibration model, the intrinsic and extrinsic parameters of each camera in the panorama camera are calculated.
Specifically, in the above method, establishing the binocular stereo calibration model comprises the following steps:
Fig. 3 is a schematic diagram of the binocular stereo calibration board of the present invention. As shown in Fig. 3, a binocular stereo calibration board composed of left and right sub-boards 1 and 2 is established; a first world coordinate system O1-XY and a second world coordinate system O2-XY are established for the two sub-boards 1 and 2 respectively, and the spatial relationship of the two world coordinate systems is represented by a translation vector T0;
Binocular stereo calibration board images are collected with two adjacent cameras, namely a left camera and a right camera. If the coordinates of an arbitrary point P in space in the world coordinate systems of the two sub-boards 1 and 2 are X1 and X2 respectively, then X1 and X2 satisfy the following relation:
X2 = X1 - T0 (4)
The coordinates of the spatial point P in the left camera coordinate system are Xl and in the right camera coordinate system Xr; they satisfy the following conversion relation:
Xl = Rl × X1 + Tl, Xr = Rr × X2 + Tr (5)
In formula (5), Rl and Tl denote the extrinsic parameters of the left camera, and Rr and Tr denote the extrinsic parameters of the right camera.
Eliminating X1 and X2 from formulas (4) and (5) gives:
Xr = Rr × Rl^-1 × Xl - Rr × T0 + Tr - Rr × Rl^-1 × Tl (6)
From this it can be derived that:
R = Rr × Rl^-1, T = -Rr × T0 + Tr - Rr × Rl^-1 × Tl (7)
In formula (7), R and T are the rotation matrix and translation vector whose transformation takes the left camera coordinate system to the right camera coordinate system; R and T are the result of the binocular stereo calibration between adjacent cameras.
Wherein, in the above method, the two sub-boards 1 and 2 of the binocular stereo calibration board use a checkerboard pattern, and all checker squares are of the same size.
Wherein, in the above method, when collecting binocular stereo calibration board images, the adjacent left and right cameras shoot the binocular stereo calibration board simultaneously; when an image is collected, each of the sub-boards 1 and 2 is present in the field of view of only one camera, and 8 to 12 images of the binocular stereo calibration board are collected.
Preferably, the calibration board picture of sub-board 1 is collected with the left camera, and the calibration board picture of sub-board 2 is collected with the right camera.
Fig. 4 is a schematic diagram of the image collecting device of the present invention. In step S3, the panorama camera model used is as shown in Fig. 4, where O1-xyz and O2-xyz denote the camera coordinate systems of the first camera O1 and the second camera O2 respectively.
In the ideal case, i.e. under the condition of no assembly error, the angle between the X axes of the coordinate systems of the first camera O1 and the second camera O2 is zero degrees, and the angles between the Y axes and between the Z axes are 90 degrees. In an actual assembly process, it is difficult to ensure that the angles between adjacent camera coordinate systems of the panorama camera are in this ideal state.
To describe this assembly error, the angles between the coordinate axes of adjacent camera coordinate systems of the panorama camera are calculated on the basis of the binocular stereo calibration model of the panorama camera.
Specifically, in the above step S3, calculating the assembly error of the panorama camera comprises the following steps:
Step S2 has yielded the 3 × 3 rotation matrix R between adjacent camera coordinate systems of the panorama camera. The first column of R represents the rotation vector of the X axis between adjacent camera coordinate systems and is denoted Vx; the second column represents the rotation vector of the Y axis and is denoted Vy; the third column represents the rotation vector of the Z axis and is denoted Vz.
Let a standard vector be S = (0, 0, 1)^T, and denote the angles between the X axes, the Y axes, and the Z axes of adjacent camera coordinate systems by Anglex, Angley, and Anglez respectively. The calculation formulas for Anglex, Angley, and Anglez are then:

Anglex = acos(dot(Vx, S)/(norm(Vx)·norm(S)))
Angley = acos(dot(Vy, S)/(norm(Vy)·norm(S)))
Anglez = acos(dot(Vz, S)/(norm(Vz)·norm(S)))

where dot() denotes the dot product of two vectors and norm() denotes the norm of a vector. The angles between the coordinate axes are the assembly error of the panorama camera.
Preferably, the panorama camera model shown in Fig. 4 has 4 cameras in total. Assume that the rotation matrix between the first camera O1 and the second camera O2 is R12 and the translation vector is T12; the rotation matrix between the second camera O2 and the third camera (not shown) is R23 and the translation vector is T23; the rotation matrix between the third camera and the fourth camera (not shown) is R34 and the translation vector is T34; and the rotation matrix between the fourth camera and the first camera O1 is R41 and the translation vector is T41.
For a point P1 in the O1 coordinate system, its coordinates in the O2 coordinate system are P2, its coordinates in the O3 coordinate system (not shown) are P3, and its coordinates in the O4 coordinate system (not shown) are P4; the following calculation formulas then hold:
P2=R12×P1+T12 (8)
P3=R23×P2+T23 (9)
P4=R34×P3+T34 (10)
From formulas (8), (9), and (10) it can be seen that:

P4 = R34 × R23 × R12 × P1 + R34 × R23 × T12 + R34 × T23 + T34 (11)
In the embodiment of the present invention, let the rotation error of the panorama camera be Re and the translation error be Te; then:
Re=R41×R34×R23×R12
Te=R41×R34×R23×T12+R41×R34×T23+R41×T34+T41
Ideally, Re is the 3 × 3 identity matrix and the translation error Te is 0. Because of the assembly error of each camera of the panorama camera, in practice the rotation error Re and the translation error Te are not equal to these ideal values, and how far the calculated Re and Te deviate from the ideal values can be used to describe the assembly error of the panorama camera.
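The loop-closure check on Re and Te can be illustrated with four assumed camera poses arranged in an ideal ring (the 90-degree increments and small offsets below are illustrative, not values from the patent). With consistent poses, i.e. no assembly error, the composed rotation error is the identity and the translation error is zero:

```python
import numpy as np

def rot_z(a):
    """Rotation matrix about the Z axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Assumed absolute poses of 4 cameras: 90-degree ring with small offsets.
poses = [(rot_z(np.pi / 2 * k), np.array([0.1 * k, 0.0, 0.0])) for k in range(4)]

def relative(a, b):
    """Rij, Tij taking coordinates in camera frame a to frame b: Pb = R Pa + T."""
    (Ra, ta), (Rb, tb) = poses[a], poses[b]
    R = Rb @ Ra.T
    return R, tb - R @ ta

(R12, T12), (R23, T23), (R34, T34), (R41, T41) = (
    relative(0, 1), relative(1, 2), relative(2, 3), relative(3, 0))

# Rotation and translation error of the closed loop, as in the text above.
Re = R41 @ R34 @ R23 @ R12
Te = R41 @ R34 @ R23 @ T12 + R41 @ R34 @ T23 + R41 @ T34 + T41
assert np.allclose(Re, np.eye(3)) and np.allclose(Te, 0.0)
```

Perturbing any one of the pairwise transforms (simulating an assembly error) makes Re deviate from the identity and Te from zero, and the size of that deviation is the error measure described above.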
In the method of the present invention for calibrating panorama camera assembly error, first the monocular calibration model is established: a complete calibration board image is collected with each camera of the panorama camera, and from the calibration board images collected at different views, the in-plane feature point coordinates of all calibration boards are extracted. Secondly, the binocular stereo calibration model is established: the binocular stereo calibration board images of the different views of the panorama camera are collected, the in-plane feature point coordinates of all binocular stereo calibration boards are extracted, and, using the binocular stereo calibration model, the rotation matrix R and translation vector T between adjacent cameras of the panorama camera are calculated. Then the assembly error of the panorama camera is calculated from the monocular calibration model and the binocular stereo calibration model. On the basis of a standard camera calibration model, the present invention calculates the angles between the coordinate axes of adjacent cameras and the rotation and translation errors of all coordinate systems; these two parameter indices describe the assembly error of a panorama camera accurately and concisely. At the same time, the present invention gives a practicable method for solving the assembly error, which can be applied in panorama camera production to compensate for the stitching accuracy problems caused by camera assembly error and thereby improve the stitching result.
As another aspect of the present invention, there is also provided a system for calibrating panorama camera assembly error; as shown in Fig. 5, the system includes:
a monocular calibration model establishing module 51, for establishing a monocular calibration model;
In the monocular calibration model establishing module 51, a complete calibration board image is collected with each camera of the panorama camera, and from the calibration board images collected at different views, the in-plane feature point coordinates of all calibration boards are extracted.
a binocular stereo calibration model establishing module 52, for establishing a binocular stereo calibration model;
In the binocular stereo calibration model establishing module 52, the binocular stereo calibration board images of the different views of the panorama camera are collected, the in-plane feature point coordinates of all binocular stereo calibration boards are extracted, and, using the binocular stereo calibration model, the rotation matrix R and translation vector T between adjacent cameras of the panorama camera are calculated.
an assembly error calculating module 53, for calculating the assembly error of the panorama camera from the monocular calibration model and the binocular stereo calibration model.
In the above system for calibrating panorama camera assembly error, first the monocular calibration model is established: a complete calibration board image is collected with each camera of the panorama camera, and from the calibration board images collected at different views, the in-plane feature point coordinates of all calibration boards are extracted. Secondly, the binocular stereo calibration model is established: the binocular stereo calibration board images of the different views of the panorama camera are collected, the in-plane feature point coordinates of all binocular stereo calibration boards are extracted, and, using the binocular stereo calibration model, the rotation matrix R and translation vector T between adjacent cameras of the panorama camera are calculated. Then the assembly error of the panorama camera is calculated from the monocular calibration model and the binocular stereo calibration model. On the basis of a standard camera calibration model, the present invention calculates the angles between the coordinate axes of adjacent cameras and the rotation and translation errors of all coordinate systems; these two parameter indices describe the assembly error of a panorama camera accurately and concisely. At the same time, the present invention gives a practicable method for solving the assembly error, which can be applied in panorama camera production to compensate for the stitching accuracy problems caused by camera assembly error and thereby improve the stitching result.
It should be noted here that, in the monocular calibration model and the binocular stereo calibration model of the present invention, the coordinate mapping between the world coordinate system W-XYZ and the camera coordinate system C-XYZ is represented by a rotation matrix R and a translation vector T, where R and T are the extrinsic parameters of the camera: the camera coordinate system C-XYZ yields the world coordinate system W-XYZ after transformation by the translation vector T and the rotation matrix R.
Specifically, establishing the monocular calibration model comprises the following steps:
Let the coordinates of a point P in the world coordinate system be Pw = (Xw, Yw, Zw)^T and its coordinates in the camera coordinate system be Pc = (Xc, Yc, Zc)^T; they satisfy the following relation:
Pc = R × Pw + T (1)
Because the camera coordinate system C-XYZ is three-dimensional while the image coordinate system I-XY is two-dimensional, the point in the camera coordinate system is projected into the image coordinate system; the coordinates Pc = (Xc, Yc, Zc)^T of point P in the camera coordinate system and its coordinates P = (x, y)^T in the image coordinate system satisfy the following relation:

s·(x, y, 1)^T = M·(Xc, Yc, Zc)^T, M = [fx, 0, cx; 0, fy, cy; 0, 0, 1] (2)

In formula (2), s is an arbitrary scale factor and M is the intrinsic matrix of the camera; fx denotes the equivalent focal length of the camera in the x direction of the imaging plane, fy denotes the equivalent focal length in the y direction, and (cx, cy) denotes the pixel coordinates of the principal point in the x and y directions of the imaging plane.
As described above, the homogeneous-coordinate form of the camera calibration model is expressed as:

s·(x, y, 1)^T = M·W·(Xw, Yw, Zw, 1)^T (3)

In formula (3), (x, y, 1)^T denotes the feature point coordinates in the image plane coordinate system, and (Xw, Yw, Zw, 1)^T denotes the space feature point in the calibration board coordinate system; W denotes the physical transformation locating the object plane being observed relative to the image plane, consisting of the associated rotation R and translation T, i.e. W = [R | T].
Fig. 2 is a schematic diagram of the monocular calibration board of the present invention. As shown in Fig. 2, in the above monocular calibration model establishing module 51, 10 to 15 calibration images are collected per camera, and the position and angle of the calibration board differ between collections; in this way the accuracy of the calibration result is high and the amount of computation is moderate. From the calibration board images collected at different views, the in-plane feature point coordinates of all calibration boards are extracted, and, using the camera calibration model, the intrinsic and extrinsic parameters of each camera in the panorama camera are calculated.
Specifically, establishing the binocular stereo calibration model comprises the following steps:
Fig. 3 is a schematic diagram of the binocular stereo calibration board of the present invention. As shown in Fig. 3, a binocular stereo calibration board composed of left and right sub-boards 1 and 2 is established; a first world coordinate system O1-XY and a second world coordinate system O2-XY are established for the two sub-boards 1 and 2 respectively, and the spatial relationship of the two world coordinate systems is represented by a translation vector T0;
Binocular stereo calibration board images are collected with two adjacent cameras, namely a left camera and a right camera. If the coordinates of an arbitrary point P in space in the world coordinate systems of the two sub-boards 1 and 2 are X1 and X2 respectively, then X1 and X2 satisfy the following relation:
X2 = X1 - T0 (4)
The coordinates of the spatial point P in the left camera coordinate system are Xl and in the right camera coordinate system Xr; they satisfy the following conversion relation:
Xl = Rl × X1 + Tl, Xr = Rr × X2 + Tr (5)
In formula (5), Rl and Tl denote the extrinsic parameters of the left camera, and Rr and Tr denote the extrinsic parameters of the right camera.
Eliminating X1 and X2 from formulas (4) and (5) gives:

Xr = Rr × Rl⁻¹ × Xl - Rr × T0 + Tr - Rr × Rl⁻¹ × Tl (6)

from which it can be derived that:

R = Rr × Rl⁻¹, T = -Rr × T0 + Tr - Rr × Rl⁻¹ × Tl (7)

In formula (7), R and T are the rotation matrix and translation vector that transform the left camera coordinate system into the right camera coordinate system; R and T constitute the result of the binocular stereo calibration between adjacent cameras.
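The derivation of formulas (6) and (7) can be checked numerically; the sketch below (illustrative only, with hypothetical extrinsic values) recovers R and T from the per-camera extrinsics and confirms that Xr = R × Xl + T holds for an arbitrary point:

```python
import numpy as np

# Formulas (6)-(7): given each camera's extrinsics (Rl, Tl), (Rr, Tr) and the
# sub-board offset T0, recover the left->right transform R, T.

def stereo_from_extrinsics(Rl, Tl, Rr, Tr, T0):
    R = Rr @ np.linalg.inv(Rl)                       # formula (7), rotation
    T = -Rr @ T0 + Tr - Rr @ np.linalg.inv(Rl) @ Tl  # formula (7), translation
    return R, T

# Hypothetical extrinsics: left camera at identity, right camera rotated
# 90 degrees about the vertical axis, plus arbitrary translations.
a = np.pi / 2
Rl, Tl = np.eye(3), np.array([0.1, 0.0, 0.2])
Rr = np.array([[np.cos(a), 0, np.sin(a)],
               [0, 1, 0],
               [-np.sin(a), 0, np.cos(a)]])
Tr = np.array([0.0, 0.3, 0.1])
T0 = np.array([1.0, 0.0, 0.0])

R, T = stereo_from_extrinsics(Rl, Tl, Rr, Tr, T0)

# Consistency check of formula (6): both routes to the right camera frame agree.
X1 = np.array([0.5, -0.2, 3.0])
Xl = Rl @ X1 + Tl                      # formula (5), left camera
Xr = Rr @ (X1 - T0) + Tr               # formulas (4) + (5), right camera
print(np.allclose(R @ Xl + T, Xr))     # True: Xr = R * Xl + T
```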
The two sub-boards 1 and 2 of the binocular stereo calibration board use a checkerboard pattern, and all checkerboard squares are of the same size.
When collecting binocular stereo calibration board images, the adjacent left and right cameras shoot the board simultaneously; during capture, each of the sub-boards 1, 2 appears in the field of view of only one camera, and 8 to 12 binocular stereo calibration board images are collected.
Preferably, the left camera collects the image of sub-board 1 and the right camera collects the image of sub-board 2.
Fig. 4 is a schematic diagram of the image collection device of the present invention. In the above assembly error calculating module 53, the panoramic camera model used is shown in Fig. 4, where O1-xyz and O2-xyz denote the camera coordinate systems of the first camera O1 and the second camera O2 respectively;
In the ideal case, i.e., with no assembly error, the angle between the X axes of the coordinate systems of the first camera O1 and the second camera O2 is zero degrees, and the angles between their Y axes and between their Z axes are 90 degrees. In an actual assembly process, however, it is difficult to guarantee that the angles between the coordinate systems of adjacent cameras of the panoramic camera remain in this ideal state.
To describe this assembly error, the angles between the coordinate axes of adjacent camera coordinate systems of the panoramic camera are calculated on the basis of the binocular stereo calibration model of the panoramic camera.
Specifically, the steps for calculating the assembly error of the panoramic camera are as follows:
The rotation matrix R between adjacent camera coordinate systems of the panoramic camera, obtained by the binocular stereo calibration model establishing module 52, is a 3 × 3 matrix. The first column of R represents the rotation vector of the X axis between adjacent camera coordinate systems and is denoted Vx; the second column represents the rotation vector of the Y axis and is denoted Vy; the third column represents the rotation vector of the Z axis and is denoted Vz.

Let a standard vector be S = (0, 0, 1)T, and let the angles of the X, Y, and Z axes of the adjacent camera coordinate systems be denoted Anglex, Angley, and Anglez respectively; then Anglex, Angley, and Anglez are calculated as follows:

Anglex = cos⁻¹(dot(Vx, S) / (norm(Vx) × norm(S))) × 180/π
Angley = cos⁻¹(dot(Vy, S) / (norm(Vy) × norm(S))) × 180/π
Anglez = cos⁻¹(dot(Vz, S) / (norm(Vz) × norm(S))) × 180/π

Here dot() denotes the dot product of two vectors and norm() denotes the norm of a vector. The angles between the coordinate axes constitute the assembly error of the panoramic camera.
Preferably, in the panoramic camera model shown in Fig. 4 there are 4 cameras in total. Suppose the rotation matrix between the first camera O1 and the second camera O2 is R12 and the translation vector is T12; the rotation matrix between the second camera O2 and the third camera (not shown) is R23 and the translation vector is T23; the rotation matrix between the third camera and the fourth camera (not shown) is R34 and the translation vector is T34; and the rotation matrix between the fourth camera and the first camera O1 is R41 and the translation vector is T41.

For a point P1 in the O1 coordinate system, let its coordinates be P2 in the O2 coordinate system, P3 in the O3 coordinate system (not shown), and P4 in the O4 coordinate system (not shown); then:

P2 = R12 × P1 + T12 (8)
P3 = R23 × P2 + T23 (9)
P4 = R34 × P3 + T34 (10)
Substituting formula (8) into formula (9) and then into formula (10) gives:

P4 = R34 × R23 × R12 × P1 + R34 × R23 × T12 + R34 × T23 + T34

In the embodiment of the present invention, let the rotation error of the panoramic camera be Re and the translation error be Te; applying the transform R41, T41 from the fourth camera back to the first camera then gives:

Re = R41 × R34 × R23 × R12
Te = R41 × R34 × R23 × T12 + R41 × R34 × T23 + R41 × T34 + T41

Ideally, Re is the 3 × 3 identity matrix and the translation error Te is 0. Because of the assembly error of each camera of the panoramic camera, the rotation error Re and the translation error Te in practice do not equal these ideal values, and the extent to which the calculated Re and Te deviate from the ideal values can be used to describe the assembly error of the panoramic camera.
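The composition of Re and Te around the camera ring can be checked with a small numerical sketch (illustrative only; the error-free ring of four 90-degree rotations is a hypothetical example, with translations taken as zero for simplicity):

```python
import numpy as np

# Loop-closure check of the assembly error Re, Te: chaining the four pairwise
# transforms (R12, T12) ... (R41, T41) around the camera ring must return every
# point to the first camera's frame, so ideally Re = I and Te = 0.

def rot_y(deg):
    """Rotation matrix about the vertical (y) axis by `deg` degrees."""
    a = np.radians(deg)
    return np.array([[np.cos(a), 0, np.sin(a)],
                     [0, 1, 0],
                     [-np.sin(a), 0, np.cos(a)]])

def loop_error(transforms):
    """transforms: list of (R, T) pairs in order 1->2, 2->3, 3->4, 4->1."""
    Re, Te = np.eye(3), np.zeros(3)
    for R, T in transforms:            # compose P_next = R @ P + T step by step
        Re, Te = R @ Re, R @ Te + T
    return Re, Te                      # Re = R41 R34 R23 R12, Te as in the text

# Hypothetical error-free ring: four 90-degree rotations about the vertical
# axis close the loop exactly.
ring = [(rot_y(90), np.zeros(3))] * 4
Re, Te = loop_error(ring)
print(np.allclose(Re, np.eye(3)), np.allclose(Te, 0))   # True True
```

Any deviation of Re from the identity or of Te from zero in this loop quantifies the assembly error described above.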
It should be understood that the above embodiments of the present invention are only intended to exemplarily illustrate or explain the principle of the present invention, and are not to be construed as limiting the present invention. Therefore, any modification, equivalent substitution, improvement, or the like made without departing from the spirit and scope of the present invention shall be included in the scope of protection of the present invention. In addition, the appended claims are intended to cover all changes and modifications that fall within the scope and boundary of the claims, or the equivalents of such scope and boundary.
Claims (10)
- 1. A method for calibrating the assembly error of a panoramic camera, characterized by comprising the following steps: S1, establishing a monocular calibration model; S2, establishing a binocular stereo calibration model; S3, calculating the assembly error of the panoramic camera according to the monocular calibration model and the binocular stereo calibration model.
- 2. The method according to claim 1, characterized in that the specific steps of establishing the monocular calibration model are as follows: let the coordinates of a point P be Pw = (Xw, Yw, Zw)T in the world coordinate system and Pc = (Xc, Yc, Zc)T in the camera coordinate system; they satisfy the relation: Pc = R × Pw + T (1). The point in the camera coordinate system is projected into the image coordinate system, and the coordinates Pc = (Xc, Yc, Zc)T of the point P in the camera coordinate system and its coordinates P = (x, y)T in the image coordinate system satisfy:

(x, y, 1)T → s M (Xc, Yc, Zc)T, where M = [fx 0 cx; 0 fy cy; 0 0 1] (2)

In formula (2), s is an arbitrary scale factor, M is the intrinsic matrix of the camera, fx and fy are the equivalent focal lengths of the camera in the x and y directions of the imaging plane, and (cx, cy) are the pixel coordinates of the principal point in the x and y directions of the imaging plane. The homogeneous-coordinate form of the camera calibration model is expressed as:

q̃ = s M W Q̃ (3)

In formula (3), q̃ represents a feature point in the image plane coordinate system and Q̃ represents a spatial feature point in the calibration board coordinate system; W represents the physical transformation locating the observed object plane, combining the rotation R and the translation T relating the image plane to that plane, W = [R | T].
- 3. The method according to claim 2, characterized in that in the above step S1, complete calibration board images are collected with each camera of the panoramic camera, and the feature point coordinates on all calibration board planes are extracted from the calibration board images collected over the different fields of view.
- 4. The method according to claim 3, characterized in that 10 to 15 calibration images are collected per camera, and the position and angle of the calibration board differ between the collected images.
- 5. The method according to claim 1, characterized in that the specific steps of establishing the binocular stereo calibration model are as follows: a binocular stereo calibration board composed of two sub-boards 1 and 2 (left and right) is established, and a first world coordinate system O1-XY and a second world coordinate system O2-XY are established on the two sub-boards 1, 2 respectively, where the spatial relationship between the two world coordinate systems is expressed by a translation vector T0; binocular stereo calibration board images are collected with two adjacent cameras, referred to respectively as the left camera and the right camera; for any point P in space, its coordinates X1 and X2 in the world coordinate systems of the two sub-boards 1 and 2 satisfy the relation: X2 = X1 - T0 (4); the coordinates of the spatial point P are Xl in the left camera coordinate system and Xr in the right camera coordinate system, with the transformations: Xl = Rl × X1 + Tl, Xr = Rr × X2 + Tr (5), where Rl and Tl denote the extrinsic parameters of the left camera and Rr and Tr denote the extrinsic parameters of the right camera; eliminating X1 and X2 from formulas (4) and (5) gives: Xr = Rr × Rl⁻¹ × Xl - Rr × T0 + Tr - Rr × Rl⁻¹ × Tl (6), from which it can be derived that: R = Rr × Rl⁻¹, T = -Rr × T0 + Tr - Rr × Rl⁻¹ × Tl (7); in formula (7), R and T are the rotation matrix and translation vector that transform the left camera coordinate system into the right camera coordinate system, and R and T constitute the result of the binocular stereo calibration between adjacent cameras.
- 6. The method according to claim 5, characterized in that in the above step S2, binocular stereo calibration board images of the different fields of view of the panoramic camera are collected, the feature point coordinates on all binocular stereo calibration board planes are extracted, and the rotation matrix R and translation vector T between adjacent cameras of the panoramic camera are calculated with the binocular stereo calibration model.
- 7. The method according to claim 6, characterized in that the two sub-boards 1 and 2 of the binocular stereo calibration board use a checkerboard pattern, and all checkerboard squares are of the same size.
- 8. The method according to claim 7, characterized in that when collecting binocular stereo calibration board images, the adjacent left and right cameras shoot the board simultaneously; during capture, each of the sub-boards 1, 2 appears in the field of view of only one camera; and 8 to 12 binocular stereo calibration board images are collected.
- 9. The method according to claim 8, characterized in that the specific steps of calculating the assembly error of the panoramic camera are as follows: the rotation matrix R between adjacent camera coordinate systems of the panoramic camera, obtained in step S2, is a 3 × 3 matrix; the first column of R represents the rotation vector of the X axis between adjacent camera coordinate systems and is denoted Vx; the second column represents the rotation vector of the Y axis and is denoted Vy; the third column represents the rotation vector of the Z axis and is denoted Vz; let a standard vector be S = (0, 0, 1)T, and let the angles of the X, Y, and Z axes of the adjacent camera coordinate systems be denoted Anglex, Angley, and Anglez respectively; then Anglex, Angley, and Anglez are calculated as follows:

Anglex = cos⁻¹(dot(Vx, S) / (norm(Vx) × norm(S))) × 180/π
Angley = cos⁻¹(dot(Vy, S) / (norm(Vy) × norm(S))) × 180/π
Anglez = cos⁻¹(dot(Vz, S) / (norm(Vz) × norm(S))) × 180/π

where dot() denotes the dot product of two vectors and norm() denotes the norm of a vector; the angles between the coordinate axes constitute the assembly error of the panoramic camera.
- 10. A system for calibrating the assembly error of a panoramic camera, characterized by comprising: a monocular calibration model establishing module, for establishing a monocular calibration model; a binocular stereo calibration model establishing module, for establishing a binocular stereo calibration model; and an assembly error calculating module, for calculating the assembly error of the panoramic camera according to the monocular calibration model and the binocular stereo calibration model.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710828874.XA CN107578450B (en) | 2017-09-14 | 2017-09-14 | Method and system for calibrating assembly error of panoramic camera |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107578450A true CN107578450A (en) | 2018-01-12 |
CN107578450B CN107578450B (en) | 2020-04-14 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||