CN105608667A - Method and device for panoramic stitching - Google Patents

Method and device for panoramic stitching

Info

Publication number
CN105608667A
CN105608667A (application CN201410667610.7A)
Authority
CN
China
Prior art keywords
image
module
width
block
plans
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410667610.7A
Other languages
Chinese (zh)
Inventor
魏园波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Infinova Ltd
Original Assignee
Shenzhen Infinova Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Infinova Ltd filed Critical Shenzhen Infinova Ltd
Priority to CN201410667610.7A
Publication of CN105608667A
Legal status: Pending

Landscapes

  • Image Processing (AREA)

Abstract

The invention relates to the technical field of image processing, and provides a method and a device for panoramic stitching. By sequentially performing image preprocessing, feature extraction, feature matching, sorting, perspective transformation matrix calculation, initial depth-of-field and rotation matrix calculation, bundle adjustment, image straightening, exposure compensation and image fusion on two or more sub-images, the two or more sub-images can be stitched into a panorama. The panorama covers 360° with no viewing-angle blind zone, has high definition, expresses details well, and enhances the sense of reality and presence. The panorama has a small data volume, stitching the sub-images into a panorama places low requirements on the hardware, and this helps cope with the huge data volume generated by a surveillance camera during long-term operation.

Description

Method and device for panoramic stitching
Technical field
The present invention relates to the technical field of image processing, and in particular to a method and device for panoramic stitching.
Background technology
With the rapid development of surveillance systems, system scale keeps growing and monitoring points multiply, so that staff can no longer attend to the ever-increasing number of monitoring images. As the monitoring range continues to expand, targets such as bridges, airports, very long stretches of highway or skyscrapers need to be monitored, and the field of view of most cameras cannot cover such a large range. The demand for wide-field-of-view video surveillance has therefore grown steadily into a high-end requirement, and video panorama stitching, as one solution, has received increasing attention.
Summary of the invention
The technical problem to be solved by the invention is: to provide a method and device for panoramic stitching that stitches two or more sub-images into a panorama.
To solve the above technical problem, the technical solution adopted by the invention is:
A method for panoramic stitching, comprising:
S101, performing image preprocessing on two or more collected sub-images;
S102, performing feature extraction on the two or more preprocessed sub-images;
S103, performing feature matching on the two or more feature-extracted sub-images;
S104, sorting the two or more feature-matched sub-images;
S105, calculating a perspective transformation matrix for the two or more sorted sub-images;
S106, calculating an initial depth of field and a rotation matrix for the two or more sub-images for which the perspective transformation matrix has been calculated;
S107, performing bundle adjustment on the two or more sub-images for which the initial depth of field and rotation matrix have been calculated;
S108, performing image straightening on the two or more bundle-adjusted sub-images;
S109, performing exposure compensation on the two or more straightened sub-images;
S110, performing image fusion on the two or more exposure-compensated sub-images.
Another technical solution provided by the invention is:
A device for panoramic stitching, comprising, connected in sequence, an image preprocessing module, a feature extraction module, a feature matching module, a sorting module, a perspective-transformation-matrix calculation module, an initial depth-of-field and rotation-matrix calculation module, a bundle adjustment module, an image straightening module, an exposure compensation module and an image fusion module;
the image preprocessing module is configured to perform image preprocessing on two or more collected sub-images;
the feature extraction module is configured to perform feature extraction on the two or more sub-images output by the image preprocessing module;
the feature matching module is configured to perform feature matching on the two or more sub-images output by the feature extraction module;
the sorting module is configured to sort the two or more sub-images output by the feature matching module;
the perspective-transformation-matrix calculation module is configured to calculate a perspective transformation matrix for the two or more sub-images output by the sorting module;
the initial depth-of-field and rotation-matrix calculation module is configured to calculate an initial depth of field and a rotation matrix for the two or more sub-images output by the perspective-transformation-matrix calculation module;
the bundle adjustment module is configured to perform bundle adjustment on the two or more sub-images output by the initial depth-of-field and rotation-matrix calculation module;
the image straightening module is configured to perform image straightening on the two or more sub-images output by the bundle adjustment module;
the exposure compensation module is configured to perform exposure compensation on the two or more sub-images output by the image straightening module;
the image fusion module is configured to perform image fusion on the sub-images output by the exposure compensation module.
The beneficial effects of the invention are: by sequentially performing image preprocessing, feature extraction, feature matching, sorting, perspective transformation matrix calculation, initial depth-of-field and rotation matrix calculation, bundle adjustment, image straightening, exposure compensation and image fusion on two or more sub-images, the two or more sub-images are stitched into a panorama. A 360° panorama with no viewing-angle blind zone and high definition can be formed, details are expressed more completely, and the sense of reality and presence is enhanced. The data volume of the panorama is small, stitching sub-images into a panorama places low requirements on the hardware, and this helps cope with the huge data volume generated by surveillance cameras during long-term operation.
Brief description of the drawings
Fig. 1 is a flow chart of the steps of a method for panoramic stitching in an embodiment of the invention;
Fig. 2 is a schematic diagram of scale-space extremum detection in an embodiment of the invention;
Fig. 3 is a schematic diagram of extracting sift feature points and generating local feature descriptors in an embodiment of the invention;
Fig. 4 is a schematic diagram of gray-region integration in an embodiment of the invention;
Fig. 5 is a schematic diagram of extracting surf feature points and generating local feature descriptors in an embodiment of the invention;
Fig. 6 is a schematic diagram of extracted harris corner points in an embodiment of the invention;
Fig. 7 is a schematic diagram of "exposure compensation" method one in step S109 of an embodiment of the invention;
Fig. 8 is a schematic diagram of the plane projection model in step S105 of an embodiment of the invention;
Fig. 9 is a schematic diagram of the cylindrical projection model in step S105 of an embodiment of the invention;
Fig. 10 is a schematic diagram of the spherical projection model in step S105 of an embodiment of the invention;
Fig. 11 is a schematic diagram of multi-resolution fusion in step S110 of an embodiment of the invention;
Fig. 12 is a schematic diagram of the averaging method in step S110 of an embodiment of the invention;
Fig. 13 is a schematic structural diagram of a device for panoramic stitching in an embodiment of the invention;
Description of reference numerals:
10, image preprocessing module; 11, feature extraction module; 12, feature matching module; 13, sorting module; 14, perspective-transformation-matrix calculation module; 15, initial depth-of-field and rotation-matrix calculation module; 16, bundle adjustment module; 17, image straightening module; 18, exposure compensation module; 19, image fusion module.
Detailed description of the invention
To describe the technical content, objects and effects of the invention in detail, it is explained below in conjunction with embodiments and the accompanying drawings.
The most critical design of the invention is: sequentially performing image preprocessing, feature extraction, feature matching, sorting, perspective transformation matrix calculation, initial depth-of-field and rotation matrix calculation, bundle adjustment, image straightening, exposure compensation and image fusion on two or more sub-images, thereby stitching the two or more sub-images into a panorama.
Please refer to Fig. 1, which is a flow chart of the steps of a method for panoramic stitching in an embodiment of the invention, specifically as follows:
A method for panoramic stitching, comprising:
S101, performing image preprocessing on two or more collected sub-images;
S102, performing feature extraction on the two or more preprocessed sub-images;
S103, performing feature matching on the two or more feature-extracted sub-images;
S104, sorting the two or more feature-matched sub-images;
S105, calculating a perspective transformation matrix for the two or more sorted sub-images;
S106, calculating an initial depth of field and a rotation matrix for the two or more sub-images for which the perspective transformation matrix has been calculated;
S107, performing bundle adjustment on the two or more sub-images for which the initial depth of field and rotation matrix have been calculated;
S108, performing image straightening on the two or more bundle-adjusted sub-images;
S109, performing exposure compensation on the two or more straightened sub-images;
S110, performing image fusion on the two or more exposure-compensated sub-images.
From the foregoing description, the beneficial effects of the invention are: by sequentially performing image preprocessing, feature extraction, feature matching, sorting, perspective transformation matrix calculation, initial depth-of-field and rotation matrix calculation, bundle adjustment, image straightening, exposure compensation and image fusion on two or more sub-images, the two or more sub-images are stitched into a panorama. A 360° panorama with no viewing-angle blind zone and high definition can be formed, details are expressed more completely, and the sense of reality and presence is enhanced. The data volume of the panorama is small, stitching sub-images into a panorama places low requirements on the hardware, and this helps cope with the huge data volume generated by surveillance cameras during long-term operation.
Further, in the panoramic stitching method of the embodiment of the invention, the image preprocessing in step S101 comprises image histogram equalization and image denoising.
The image histogram equalization specifically is: widening the gray levels that contain many pixels in the image, and compressing the gray levels that contain few pixels.
Concrete steps of the image histogram equalization:
Let f and g be the original image and the processed image respectively.
1) Obtain the gray-level histogram of the original image f, denoted h; h is a 256-dimensional vector. Obtain the total number of pixels Nf of image f: Nf = m*n (m and n are the height and width of the image respectively). Calculate the percentage of pixels of each gray level in the whole image:
hs(i) = h(i)/Nf, i = 0, 1, …, 255;
2) Compute the cumulative distribution hp of each gray level of the image:
hp(i) = Σ_{k=0}^{i} hs(k), i = 0, 1, …, 255;
3) Obtain the gray values of the new image g:
g = 255 · hp(i), i = 0, 1, …, 255;
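The equalization steps above can be sketched in pure Python; `equalize` and the flat list of 8-bit gray values are illustrative choices, not from the patent:

```python
def equalize(pixels):
    """Histogram-equalize a list of 8-bit gray values.

    Follows the steps above: histogram h, percentages hs,
    cumulative distribution hp, then g = 255 * hp(i).
    """
    nf = len(pixels)                      # total pixel count Nf = m*n
    h = [0] * 256                         # gray-level histogram
    for p in pixels:
        h[p] += 1
    hs = [h[i] / nf for i in range(256)]  # per-level percentage
    hp, acc = [], 0.0                     # cumulative distribution
    for i in range(256):
        acc += hs[i]
        hp.append(acc)
    lut = [round(255 * hp[i]) for i in range(256)]
    return [lut[p] for p in pixels]
```

A low-contrast image whose gray values sit in a narrow band is spread across the full 0-255 range, which is the widening of well-populated gray levels described above.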
The image denoising adopts mean filtering, median filtering or k-nearest-neighbor mean filtering to remove noise;
mean filtering: take a weighted average of the neighborhood pixels;
median filtering: sort the pixel gray values in the neighborhood and take the median as the pixel value;
k-nearest-neighbor mean filtering: in the neighborhood, choose the K pixels whose gray values differ least from the current pixel and apply median or mean filtering to them; this is an edge-preserving smoothing filter.
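As a minimal sketch of the median-filter variant above (`median_filter` is an illustrative helper on a list-of-lists gray image, with borders clamped; the patent does not specify border handling):

```python
def median_filter(img, radius=1):
    """Median-filter a 2-D grid of gray values (list of lists).

    For each pixel, sort the gray values in its (2r+1)x(2r+1)
    neighborhood and take the median, as described above.
    """
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            vals = []
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy = min(max(y + dy, 0), h - 1)  # clamp at border
                    xx = min(max(x + dx, 0), w - 1)
                    vals.append(img[yy][xx])
            vals.sort()
            out[y][x] = vals[len(vals) // 2]
    return out
```

A single salt-noise spike is replaced by the neighborhood median while flat regions are left untouched, which is the edge-protecting behavior the text attributes to median filtering.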
" feature extraction " in described step S102 is for extracting sift, surf or harris characteristic point.
Described extraction sift characteristic point step is:
1) metric space extreme value detects, and tentatively determines key point position and place yardstick. Pass through gaussian kernel functionRealize the metric space of two dimensional image with the convolution of image.
G ( x , y , σ ) = 1 2 π σ 2 e - ( x 2 + y 2 ) / 2 σ 2 ;
In the time detecting yardstick spatial extrema, the pixel that is labeled as cross in Fig. 2 needs the attendant of a stage actor to draw together same yardstick9 × 2 pixels of surrounding's neighborhood, 26 pixels altogether of 8 pixels of neighborhood and adjacent yardstick correspondence position aroundCompare, to guarantee local extremum all to be detected at metric space and two dimensional image space;
2) the accurate position of location feature point, by the three-dimensional quadratic function of matching with accurately determine key pointPut and yardstick, remove key point and the unsettled skirt response point of low contrast simultaneously;
Obtain by Taylor expansion at key point place:
D ( X ) = D + ∂ D T ∂ X X + 1 2 X T ∂ 2 D ∂ X 2 X , In formula, X=(x, y, σ)TFor the side-play amount of key point,D is the value at D (x, y, σ) key point place;
3) determine the principal direction of characteristic point;
m ( x , y ) = ( L ( x + 1 , y ) - L ( x - 1 , y ) ) 2 + ( L ( x , y + 1 ) - L ( x , y - 1 ) ) 2
θ(x,y)=atan2((L(x,y+1)-L(x,y-1))/(L(x+1,y)-L(x-1,y)));
Each key point has three information: position, yardstick of living in, direction;
4) generate local feature descriptor;
First reference axis is rotated to be to the direction of key point, centered by key point, get the window of 4*4, as Fig. 3Shown in. The position that in Fig. 3, the stain of left figure is current key point, each little lattice represent key point neighborhood placeA pixel of metric space, the direction of arrow represents the gradient direction of this pixel, arrow length represents gradientSize, circle represents the scope (the pixel gradient directional information contribution the closer to key point is larger) of Gauss's weighting.Next on the fritter of each 4*4, calculate the gradient orientation histogram of 8 directions, draw each gradient sideTo accumulated value, can form a Seed Points, key point by 4*4 totally 16 Seed Points form, everyIndividual Seed Points has 8 direction vector information, finally forms the sift characteristic vector of 128 dimensions.
The steps of extracting surf feature points are:
1) Integral images. The integral image mainly serves to compute the sum of pixels over regions of the image; the integral image at position x = (x, y) is the sum of all pixels of the input image above and to the left of x, II(x, y) = Σ_{i ≤ x} Σ_{j ≤ y} I(i, j), so that any rectangular sum takes four look-ups. For example, the integral of the gray region in Fig. 4 is: A − B − C + D;
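The integral image and the four-corner region sum can be sketched as follows (`integral_image` and `region_sum` are illustrative names):

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img over rows<=y, cols<=x."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y][x] = row_sum + (ii[y - 1][x] if y > 0 else 0)
    return ii

def region_sum(ii, y0, x0, y1, x1):
    """Sum over the rectangle [y0..y1] x [x0..x1] with four look-ups,
    the A - B - C + D combination described above."""
    s = ii[y1][x1]
    if y0 > 0:
        s -= ii[y0 - 1][x1]
    if x0 > 0:
        s -= ii[y1][x0 - 1]
    if y0 > 0 and x0 > 0:
        s += ii[y0 - 1][x0 - 1]
    return s
```

Once the table is built, every box-filter response used by surf costs the same four look-ups regardless of box size, which is why the scale-space step below can grow the filter instead of shrinking the image.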
2) Approximating the Hessian matrix. Given a point X = (x, y) in image I, its Hessian matrix is built from the second-order Gaussian derivatives L_xx, L_xy and L_yy at X, which surf approximates with box filters evaluated on the integral image;
3) Scale-space description: in surf it is the size of the box filter that varies, not the image that is rescaled;
4) Feature point localization. After an extremum of the scale image at (x, y) is obtained according to the Hessian matrix, non-maximum suppression is first performed in the 3 × 3 × 3 three-dimensional neighborhood of the extremum point; interpolation is then carried out in scale space and image space using a quadratic fitting function, whose derivative is set to zero to obtain the interpolated extremum value at the extremum point. When the extremum value ≥ 0.03, the point is taken as a feature point;
5) Feature descriptor. As in Fig. 5, a square of side 20 is built along the main orientation centered on the feature point and divided into 4*4 sub-regions; each sub-region is further divided into 5*5 sample points, and the responses of the Haar wavelet in the horizontal and vertical directions relative to the main orientation are computed.
The steps of extracting harris feature points are:
1) For each pixel compute the correlation matrix m:
m = | Ix²   IxIy |
    | IxIy  Iy²  |
where Ix² = Ix * Ix and Iy² = Iy * Iy;
2) Apply Gaussian smoothing to the four elements of m to obtain a new m; the Gaussian function is:
Gauss = exp( −(x² + y²) / (2σ²) );
3) Use m to compute the corner response cim of each pixel:
cim = (Ix² · Iy² − (IxIy)²) / (Ix² + Iy²);
4) A pixel whose cim is greater than a threshold and is a local maximum in some neighborhood is a corner point.
Fig. 6 shows the harris corner points extracted from one image.
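The harris steps above can be sketched as follows; this is a minimal pure-Python version in which a 3x3 box sum stands in for the Gaussian smoothing of the matrix elements, and central differences stand in for Ix, Iy (both substitutions are simplifications, not the patent's exact choices):

```python
def harris_response(img):
    """Corner response cim = (Ix2*Iy2 - (IxIy)^2) / (Ix2 + Iy2)
    per pixel; a small epsilon guards the flat-region division."""
    h, w = len(img), len(img[0])
    ix = [[0.0] * w for _ in range(h)]
    iy = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            ix[y][x] = (img[y][x + 1] - img[y][x - 1]) / 2.0
            iy[y][x] = (img[y + 1][x] - img[y - 1][x]) / 2.0
    cim = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            a = b = c = 0.0        # a = sum Ix^2, b = sum Iy^2, c = sum IxIy
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    gx, gy = ix[y + dy][x + dx], iy[y + dy][x + dx]
                    a += gx * gx
                    b += gy * gy
                    c += gx * gy
            cim[y][x] = (a * b - c * c) / (a + b + 1e-9)
    return cim
```

On a flat patch the response is zero and along a straight edge one of the summed gradient energies dominates, so only true two-direction corners produce a large cim, matching step 4).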
As can be seen from the above description, the image histogram equalization sharpens the image and thus helps extract image features; mean-filter denoising effectively suppresses noise; median-filter denoising protects edges, and the larger the window, the stronger the effect; k-nearest-neighbor mean filtering smooths while keeping edges. In sift feature extraction, "accurately determining the position and scale of key points by fitting a three-dimensional quadratic function, while removing low-contrast key points and unstable edge response points" enhances matching stability and improves noise resistance; in surf feature extraction, "interpolating in scale space and image space" achieves sub-pixel localization of the candidate feature points.
Further, in the panoramic stitching method of the embodiment of the invention, the "feature matching" in step S103 adopts Euclidean distance, two-way matching, median filtering or the RANSAC algorithm to obtain matching pairs; the "calculating the initial depth of field and rotation matrix" in step S106 adopts the least squares method.
The concrete steps of "feature matching" in step S103 using Euclidean distance are:
1) Suppose the images to be matched are img1(1, m) and img2(2, n): image img1 has m feature points and image img2 has n feature points, and for each feature point i = 1, 2, …, m of img1 a matching pair is to be found in img2;
2) For feature point i = 1, i.e. the first feature point of img1, find in img2 the closest point d1 and the second-closest point d2. As mentioned above, each feature point carries a feature descriptor (sift: 128 dimensions); the Euclidean distance is computed between descriptors, and if the formula below is satisfied, feature i = 1 is considered matched with its Euclidean-distance closest point in img2:
d1 / d2 < ε;
3) Repeat step 2) for i = 2, 3, 4, …, m to find matching pairs in img2 for the remaining feature points of img1.
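The ratio test above can be sketched as follows (`match_ratio` is an illustrative name; descriptors are plain tuples rather than 128-dimensional sift vectors):

```python
import math

def match_ratio(desc1, desc2, eps=0.8):
    """For each descriptor in desc1 find the nearest (d1) and
    second-nearest (d2) descriptor in desc2 by Euclidean distance;
    accept the match only when d1 / d2 < eps."""
    pairs = []
    for i, a in enumerate(desc1):
        dists = sorted((math.dist(a, b), j) for j, b in enumerate(desc2))
        d1, j1 = dists[0]
        d2 = dists[1][0]
        if d2 > 0 and d1 / d2 < eps:
            pairs.append((i, j1))
    return pairs
```

The threshold ε (0.8 here is an assumed value; the patent leaves it unspecified) rejects ambiguous features whose two best candidates are almost equally distant.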
" characteristic matching " in described step S103 adopts two coupling concrete steps to be:
1) suppose image to be matched be img1 (1, m), img2 (2, n), image img1 has m characteristic point,Image img2 has n characteristic point;
2), taking image img1 as reference base picture, in image img2, be characteristic point searching in image img1Pairing, supposes to be paired into P;
3), taking image img2 as reference base picture, in image img1, be characteristic point searching in image img2Pairing, supposes to be paired into Q;
4) by contrast P, it is right in Q, to mate, if simultaneously at P, when Q coupling, it is right just to think to mate.
" characteristic matching " in described step S103 adopts medium filtering concrete steps to be:
1) by characteristic matching to connecting into vector, find nearest k vector of each vectorial neighborhood (k=10);
2) adopt angle filtering, calculate angle average, from k vector, choose the individual angle of approach average of nVector;
3) adopt length filtering, calculate the length average of n vector, judge that whether n vector error be at thresholdWithin the scope of value, float, if met, think correct characteristic matching pair, otherwise be wrong characteristic matchingRight.
" characteristic matching " in described step S103 adopts RANSAC algorithm specific as follows:
Taking adjacent two two field pictures as example, establishing matching characteristic point is N to number, and the set of matching characteristic point is designated as respectivelyP (1, N), P (2, N), the characteristic point set that wherein P (1, N) is benchmark image, P (2, N) is to be matchedThe characteristic point set of image, concrete steps are as follows:
1) from initial N to choosing at random 4 pairs of matching characteristic points matching characteristic point;
2) calculate mapping matrix M12 between benchmark image and image to be matched by 4 pairs of matching characteristic points choosing,Utilize transform matrix M 12 treat N-2 characteristic point P of residue in the characteristic point set P (2, N) of matching image (2,N-2) transform under benchmark image coordinate system, be designated as P ' (2, N-2);
3) error of coordinate between the characteristic point P ' after computational transformation (2, N-2) and characteristic point P (1, N-2);
4) from N, matching characteristic point centering is found out to the characteristic point of error of coordinate in certain error threshold value to number,Be designated as i;
5) iterate 1) to 4) step n time, find the set of i maximum for imperial palace point set, be interior point,All the other N-i are Mismatching point, are exterior point. Reduce error by least square method, removed mistake couplingImpact, obtains spatial alternation matrix M.
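The RANSAC loop of steps 1) to 5) can be sketched as follows. To keep the sketch short, a 2-D translation stands in for the patent's mapping matrix M12 (so the minimal sample is one pair instead of four); the hypothesize / count-inliers / keep-best / least-squares-refit structure is the same:

```python
import random

def ransac_translation(pts1, pts2, thresh=1.0, iters=100, seed=0):
    """RANSAC skeleton: each iteration hypothesises a model from a
    minimal sample, counts pairs within the error threshold, keeps
    the largest inlier set, then refits on the inliers."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        k = rng.randrange(len(pts1))            # minimal sample: 1 pair
        tx = pts2[k][0] - pts1[k][0]
        ty = pts2[k][1] - pts1[k][1]
        inliers = [i for i, (p, q) in enumerate(zip(pts1, pts2))
                   if abs(p[0] + tx - q[0]) + abs(p[1] + ty - q[1]) < thresh]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # least-squares refit on the inlier set (mean translation)
    n = len(best_inliers)
    tx = sum(pts2[i][0] - pts1[i][0] for i in best_inliers) / n
    ty = sum(pts2[i][1] - pts1[i][1] for i in best_inliers) / n
    return (tx, ty), best_inliers
```

A single gross mismatch never enters the final model: it can only win an iteration as a one-element inlier set, which loses to the consistent majority.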
Further, in the panoramic stitching method of the embodiment of the invention, the concrete steps of "calculating the initial depth of field and rotation matrix" in step S106 using the least squares method are:
From the matching feature pairs between images, the mapping relation from image to image is computed and expressed in matrix form, i.e. the homography matrix. The homography matrix is a 3*3 matrix with 8 degrees of freedom, as follows:
H = | h11 h12 h13 |
    | h21 h22 h23 |
    | h31 h32  1  |
and (x', y', 1)ᵀ ~ H·(x, y, 1)ᵀ;
x = (x, y, 1)ᵀ and x' = (x', y', 1)ᵀ are matching coordinates of the two images; as the system below shows, 4 matching pairs are needed to solve for the homography matrix:
| x1 y1 1  0  0  0 −x1'x1 −x1'y1 |   | h11 |   | x1' |
| 0  0  0 x1 y1 1 −y1'x1 −y1'y1 |   | h12 |   | y1' |
| x2 y2 1  0  0  0 −x2'x2 −x2'y2 |   | h13 |   | x2' |
| 0  0  0 x2 y2 1 −y2'x2 −y2'y2 | · | h21 | = | y2' |
| x3 y3 1  0  0  0 −x3'x3 −x3'y3 |   | h22 |   | x3' |
| 0  0  0 x3 y3 1 −y3'x3 −y3'y3 |   | h23 |   | y3' |
| x4 y4 1  0  0  0 −x4'x4 −x4'y4 |   | h31 |   | x4' |
| 0  0  0 x4 y4 1 −y4'x4 −y4'y4 |   | h32 |   | y4' |
After at least 4 matching pairs are extracted, the above system can be solved by the least squares method.
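With exactly 4 correspondences the system above is square, so it can be solved directly; the sketch below (`homography_from_4` is an illustrative name) builds the 8x8 matrix row by row and solves it by Gaussian elimination with partial pivoting, a stand-in for the general least-squares solve:

```python
def homography_from_4(pairs):
    """Solve the 8x8 system above for h11..h32 (h33 = 1) from four
    (x, y) -> (x', y') correspondences."""
    A, b = [], []
    for (x, y), (xp, yp) in pairs:
        A.append([x, y, 1, 0, 0, 0, -xp * x, -xp * y]); b.append(xp)
        A.append([0, 0, 0, x, y, 1, -yp * x, -yp * y]); b.append(yp)
    n = 8
    M = [row + [rhs] for row, rhs in zip(A, b)]   # augmented matrix
    for col in range(n):                          # forward elimination
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    h = [0.0] * n                                 # back substitution
    for r in range(n - 1, -1, -1):
        h[r] = (M[r][n] - sum(M[r][c] * h[c] for c in range(r + 1, n))) / M[r][r]
    h11, h12, h13, h21, h22, h23, h31, h32 = h
    return [[h11, h12, h13], [h21, h22, h23], [h31, h32, 1.0]]
```

With more than 4 pairs the same rows are stacked into a tall system and solved in the least-squares sense, as the text prescribes.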
Further, in the panoramic stitching method of the embodiment of the invention, the "sorting" in step S104 specifically is: sorting the two or more sub-images according to their numbers of matching feature-point pairs.
Suppose images a, b, c and d have matching relations with one another, and b has the largest total number of matching feature-point pairs with the other images; then b serves as the reference-plane image, and a, b, c and d are sorted by their numbers of matching feature-point pairs.
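The reference selection and ordering above can be sketched as follows (`pick_reference_and_order` and the pairwise count matrix are illustrative):

```python
def pick_reference_and_order(match_counts):
    """match_counts[i][j] = number of matching feature pairs between
    sub-images i and j (symmetric, 0 on the diagonal).  The image with
    the largest total becomes the reference, as image b does in the
    a/b/c/d example above; all images are then ordered by their totals."""
    totals = [sum(row) for row in match_counts]
    order = sorted(range(len(totals)), key=lambda i: totals[i], reverse=True)
    return order[0], order
```

For four sub-images a, b, c, d this picks the most-connected image as the reference plane and yields the stitching order in one pass over the match counts.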
Further, in the panoramic stitching method of the embodiment of the invention, the "exposure compensation" in step S109 specifically is: obtaining the overlapping regions of the two or more sub-images, computing exposure compensation coefficients for the overlapping regions, and then applying exposure compensation to the sub-images according to the coefficients.
"Exposure compensation" method one in step S109:
1) Define a luminance difference function over each overlap, built from the exposure compensation parameters gi and gj and the brightness of the overlapping region between images i and j; in practice, the mean intensity of the overlap can be used in place of the per-pixel values I(ui);
2) Minimizing the luminance difference function directly would also admit the trivial solution g = 0, which makes e = 0; therefore, in practice a reference image is chosen first and its exposure parameter is fixed to g = 1.
As in Fig. 7, suppose g1 = 1; then from g1 × A1 = g2 × B1, g2 can be solved, and g3, g4, etc. are computed in turn. Likewise, supposing g2 = 1, g3 = 1 or g4 = 1, the values of g1, g2, g3 and g4 can be computed (a 4-way stitch is taken as the example here).
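The gain chain of Fig. 7 can be sketched as follows, where A_i and B_i denote the mean brightness of the shared overlap as seen in image i and in image i+1 (`chain_gains` and the (A, B) pair list are illustrative):

```python
def chain_gains(overlap_means):
    """Propagate exposure gains along a chain of images.

    overlap_means is a list of (A_i, B_i) pairs, the mean brightness of
    the overlap between image i and image i+1 as seen in each image.
    Fix g1 = 1 for the reference image, then solve
    g_i * A_i = g_{i+1} * B_i link by link, as in Fig. 7."""
    gains = [1.0]                      # reference image: g1 = 1
    for a, b in overlap_means:
        gains.append(gains[-1] * a / b)
    return gains
```

Fixing g1 = 1 is exactly the device used in step 2) above to exclude the trivial all-zero solution.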
" exposure compensating " method two in described step S109:
1) a luminance difference function of definition:
e = 1 2 &Sigma; i = 1 n &Sigma; j = 1 n N ij ( ( g i I &OverBar; ij - g j I &OverBar; ji ) 2 / &sigma; N 2 + ( 1 - g i ) 2 / &sigma; g 2 ) ,
Show i, the number of j doubling of the image area pixel, σNAnd σgFor standard deviation and mean square error; σN=10.0,σg=0.1, calculate gi and gj, make last e value for minimum;
" exposure compensating " method three in described step S109: obtain ISP parameter by headend equipment, logicalCross and use identical ISP exposure parameter, reach the same effect of brightness;
Seen from the above description,, by calculating such exposure compensating parameter g, make overlay region between each imageTerritory luminance difference minimum.
Further, in the panoramic stitching method of the embodiment of the invention, the "bundle adjustment" in step S107 specifically is: adding the images one by one into an adjuster, and simultaneously optimizing the transformation matrix of each image in the adjuster to the reference plane.
The concrete steps of "bundle adjustment" in step S107 are:
1) For a matching feature pair (xi, xj), xi is first projected onto the reference plane and then projected from the reference plane onto its adjacent image, its coordinates becoming xi'; the distance difference can be written as:
r_ij = || x_j − H_j⁻¹ H_i x_i ||;
2) Adding up the distance differences of all matching feature pairs in the matched image set gives the total objective function to be optimized:
E = Σ_{i=1}^{n} Σ_{j ∈ L(i)} Σ_{α ∈ F(i,j)} f(r_ij^α)²,
where F(i, j) denotes the matching feature pairs between images Ii and Ij and r_ij^α denotes the distance of a matching pair. By continually updating the objective function, the adjustment of Hi is completed;
3) Solve by gradient descent; when E is less than a certain threshold, the H matrices at that point are obtained.
Further, in the panoramic stitching method of the embodiment of the invention, the "calculating the perspective transformation matrix" in step S105 specifically is: projecting the two or more bundle-adjusted sub-images onto the same reference plane, the projection synthesis adopting a plane projection model, a cylindrical projection model or a spherical projection model.
The concrete steps of projection synthesis with the plane projection model are:
As in Fig. 8, the plane projection model first uses image registration to obtain the transformation relations among the images to be stitched, then selects a reference plane and projects the images onto it one by one, and finally uses image fusion to generate the panoramic image. The reference plane may be the plane of a reference image, or an arbitrary plane in space. Since this is a plane-to-plane transformation, the 8-parameter perspective transformation matrix H is used.
The concrete steps of projection synthesis with the cylindrical projection model are:
As in Fig. 9, all images are projected onto a cylindrical surface. A two-dimensional cylindrical coordinate system (Xc, Yc) is set up with the viewpoint D as origin; the radius of the cylindrical section is defined as f; plane S is the image plane; points P and Q are points on plane S; points M and N are points on the cylindrical surface and are the projections of P and Q on it; the plane image S takes C as its coordinate origin.
Xc = f × arctan(x / f)
Yc = y·f / sqrt(x² + f²);
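The cylindrical mapping above can be sketched directly (`cylindrical_coords` is an illustrative name; (x, y) is measured from the principal point C):

```python
import math

def cylindrical_coords(x, y, f):
    """Map an image-plane point (x, y) to cylindrical coordinates
    per the formulas above:
    Xc = f * arctan(x / f), Yc = y * f / sqrt(x^2 + f^2)."""
    xc = f * math.atan(x / f)
    yc = y * f / math.sqrt(x * x + f * f)
    return xc, yc
```

Points on the vertical axis through C are left unchanged, while points far from the axis are compressed, which is what lets horizontally rotated views line up on the cylinder.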
The concrete steps of projection synthesis with the spherical projection model are:
As in Fig. 10, all images are projected onto a spherical surface. A two-dimensional spherical coordinate system (Xs, Ys) is set up with the viewpoint D as origin; the radius of the sphere is defined as f; plane S is the image plane; points P and Q are points on plane S; points M and N are points on the sphere and are the projections of P and Q on it; the plane image S takes C as its coordinate origin.
With the spherical origin defined as (0, 0), the longitude-latitude coordinates of the point M corresponding to point P in the spherical coordinate system are:
Xs = f × arctan(x / f)
Ys = f × arctan( y / sqrt(x² + f²) );
As can be seen from the above description, for every image the distances of its matching feature points to those of its adjacent images after transformation to the reference plane are computed, and the transformation matrices between adjacent images are adjusted by minimizing these distance values, thereby achieving the adjustment.
Further, in the panorama joining method of the embodiment of the present invention, " the image in described step S108Stretching " be specially: by calculating overall rotating vector, to carry out image stretching;
" image is stretching " concrete steps in described step S108 are:
1) obtain the projective transformation matrix R of every width imagei,RiIt is the matrix of 3 × 3;
2) Define the 3 × 1 matrices r0_i and avgz_i: r0_i = [R_i[0], R_i[3], R_i[6]]^T, avgz_i = [R_i[2], R_i[5], R_i[8]]^T;
covsum = Σ_{i=0}^{i<N} r0_i · r0_i^T, avgzsum = Σ_{i=0}^{i<N} avgz_i, where N is the number of images;
3) Perform an SVD decomposition of covsum and take the orthogonal matrix; define r1 as the third row of this orthogonal matrix;
r0 = avgzsum × r1 (vector cross product); normalize r0; r2 = r0 × r1;
4) u = [r0 | r1 | r2];
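Steps 1) to 4) can be sketched as follows (a hedged NumPy sketch: it interprets "×" as the vector cross product and takes the orthogonal factor to be the Vᵀ returned by `numpy.linalg.svd`, both of which are assumptions about the text):

```python
import numpy as np

def straightening_matrix(rotations):
    """Build the global straightening matrix u from steps 1)-4):
    covsum accumulates the outer products of each camera's first column
    r0_i, avgzsum sums the third columns avgz_i; r1 is the third row of
    the SVD orthogonal factor, and r0, r2 follow by cross products."""
    r0s = [R[:, 0] for R in rotations]   # elements R_i[0], R_i[3], R_i[6]
    zs = [R[:, 2] for R in rotations]    # elements R_i[2], R_i[5], R_i[8]
    covsum = sum(np.outer(r, r) for r in r0s)
    avgzsum = sum(zs)
    _, _, vt = np.linalg.svd(covsum)
    r1 = vt[2]                           # third row of the orthogonal matrix
    r0 = np.cross(avgzsum, r1)
    r0 = r0 / np.linalg.norm(r0)
    r2 = np.cross(r0, r1)
    return np.stack([r0, r1, r2])        # u = [r0 | r1 | r2]
```

The resulting u is an orthonormal correction that can be composed with each R_i to remove the global "wave" in the camera rotations.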
Further, in the panorama stitching method of the embodiment of the present invention, the "image fusion" in step S110 adopts the average superposition method, multi-resolution fusion or weighted fusion.
The "image fusion" in step S110 using the average superposition method is specifically:
As shown in Fig. 11, in the image overlap region the gray value Pixel of a pixel is obtained by averaging the gray values Pixel_L and Pixel_R of the corresponding points in the two images:
Pixel = 0.5 × (Pixel_L + Pixel_R);
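The average superposition rule is a one-liner in NumPy (illustrative sketch; the function name is invented):

```python
import numpy as np

def average_blend(pixel_l, pixel_r):
    """Average superposition over the overlap: Pixel = 0.5*(Pixel_L + Pixel_R)."""
    return 0.5 * (np.asarray(pixel_l, float) + np.asarray(pixel_r, float))
```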
" image co-registration " in described step S110 adopts multi-resolution Fusion to be specially:
As shown in Fig. 12, an image pyramid hierarchy is first built and the input image is decomposed into a series of band-pass layers of different frequency bands; then each band-pass layer is fused separately to obtain the band-pass layers of the stitched image; finally, the final stitched image is reconstructed from the band-pass layers of the stitched image. Because image fusion is carried out separately in different frequency bands, ghosting can be effectively avoided; and because the multiband decomposition and reconstruction of an image are mutually inverse processes, the correctness of the resulting stitched image is guaranteed. The concrete steps are as follows:
Step 1: Build the image pyramid hierarchy: for each image, obtain the Gaussian pyramid G0 and the decomposed levels G1, G2, …, GN;
where G0 is the original image and w(m, n) = h(m)h(n) is the weighting function of a 5 × 5 window.
Here G0(x, y) is the original image and G1(x, y) is the first layer, so the scale-1 image G1 can be calculated according to the following formula:
G1(x, y) = Σ_{m=-2}^{2} Σ_{n=-2}^{2} h(m) h(n) G0(2x + m, 2y + n);
The choice of h obeys a Gaussian density distribution. From the parameters of G0 and G1 in the formula it can be seen that the width and height of G1 are half those of G0: each level in the sequence is obtained by low-pass filtering the previous level and then downsampling every other row and column. The Gaussian pyramid is thus a sequence of images whose dimensions halve level by level.
Similarly, the scale-2 image G2 can be calculated according to the following formula:
G2(x, y) = Σ_{m=-2}^{2} Σ_{n=-2}^{2} h(m) h(n) G1(2x + m, 2y + n);
The images at the other scales follow by analogy.
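The REDUCE step defined by the formulas above can be sketched with a separable 5-tap kernel. The kernel values [1, 4, 6, 4, 1]/16 are a common Gaussian-like choice and an assumption here, since the text only says that h obeys a Gaussian density:

```python
import numpy as np

H5 = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0   # assumed 5-tap h

def pyr_down(img):
    """One REDUCE step: separable low-pass with w(m,n) = h(m)h(n) over a
    5x5 window, then keep every other row and column, so the output is
    half the width and height of the input (edge padding at borders)."""
    h, w = img.shape
    pad = np.pad(img, 2, mode='edge')
    tmp = sum(H5[k] * pad[:, k:k + w] for k in range(5))   # filter columns
    low = sum(H5[k] * tmp[k:k + h, :] for k in range(5))   # filter rows
    return low[::2, ::2]
```

Repeated application of `pyr_down` yields G1, G2, …, GN from G0.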
Step 2: Using each low-pass layer of the image obtained above, decompose the image into its band-pass layers L0, L1, …, LN-1;
L_l(x, y) = G_l(x, y) − 4 Σ_{m=-2}^{2} Σ_{n=-2}^{2} h(m) h(n) G_{l+1}((x + m)/2, (y + n)/2);
Step 3: Carry out the image fusion operation separately in each band-pass layer, here realized with the weighted average method; for the current layer L_k:
L_kout(x, y) = [Σ_{i=0}^{N-1} L_ki(x′, y′) w_i(x′) w_i(y′)] / [Σ_{i=0}^{N-1} w_i(x′) w_i(y′)];
After this step, the corresponding band-pass layers L0out, L1out, …, LN-1out of the output image are obtained.
Step 4: Reconstruct from the band-pass layers obtained to get the final stitched image;
G_out = Σ_{k=0}^{N} L_kout.
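Steps 1 to 4 can be sketched end to end in NumPy (a toy sketch: the EXPAND operator is replaced by nearest-neighbour upsampling for brevity, which is an assumption rather than the classical Burt-Adelson expand, and the 5-tap kernel is the same assumed h as above):

```python
import numpy as np

def _reduce(img, h=np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0):
    pad = np.pad(img, 2, mode='edge')
    t = sum(h[k] * pad[:, k:k + img.shape[1]] for k in range(5))
    return sum(h[k] * t[k:k + img.shape[0], :] for k in range(5))[::2, ::2]

def _expand(img, shape):
    # nearest-neighbour upsampling stands in for the EXPAND operator here
    up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return up[:shape[0], :shape[1]]

def multiband_blend(img_a, img_b, weight, levels=3):
    """Step 1: Gaussian pyramids of both images and of the weight map;
    Step 2: band-pass layers L_l = G_l - EXPAND(G_{l+1});
    Step 3: blend each band with the weight pyramid;
    Step 4: reconstruct the stitched image from the blended bands."""
    ga = [np.asarray(img_a, float)]
    gb = [np.asarray(img_b, float)]
    gw = [np.asarray(weight, float)]
    for _ in range(levels):
        ga.append(_reduce(ga[-1]))
        gb.append(_reduce(gb[-1]))
        gw.append(_reduce(gw[-1]))
    out = gw[-1] * ga[-1] + (1.0 - gw[-1]) * gb[-1]      # blended top level
    for l in range(levels - 1, -1, -1):
        la = ga[l] - _expand(ga[l + 1], ga[l].shape)     # band-pass layers
        lb = gb[l] - _expand(gb[l + 1], gb[l].shape)
        out = _expand(out, ga[l].shape) + gw[l] * la + (1.0 - gw[l]) * lb
    return out
```

A typical weight map is 1 over pixels belonging to image A, 0 over image B, with a ramp across the overlap; blending each frequency band separately is what suppresses visible seams and ghosting.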
" image co-registration " in described step S110 adopts Weighted Fusion to be specially:
In the image overlap region, the gray value Pixel of a pixel is obtained as the weighted average of the gray values Pixel_L and Pixel_R of the corresponding points in the two images, that is: Pixel = k × Pixel_L + (1 − k) × Pixel_R, where k is an adjustable factor;
Under normal circumstances 0 < k < 1. Within the overlap region, along the direction from image 1 to image 2, k gradually changes from 1 to 0, thereby achieving a smooth stitch across the overlap. To establish a stronger correlation between a point in the overlap region and the two images, let k = d1/(d1 + d2), where d1 and d2 are the distances from the point to the left border and the right border of the overlap region, respectively. The seam is processed with the formula Pixel = d1/(d1 + d2) × Pixel_L + d2/(d1 + d2) × Pixel_R, and the non-overlapping regions are copied directly from the original images.
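The distance-weighted rule can be sketched per column of the overlap strip (an illustrative sketch that follows the Pixel = d1/(d1+d2) × Pixel_L + d2/(d1+d2) × Pixel_R formula above; the function name is invented):

```python
import numpy as np

def feather_blend(overlap_l, overlap_r):
    """Weighted fusion over the overlap with k = d1/(d1+d2), where d1 and
    d2 are a column's distances to the left and right borders of the
    overlap region."""
    overlap_l = np.asarray(overlap_l, float)
    overlap_r = np.asarray(overlap_r, float)
    w = overlap_l.shape[1]
    x = np.arange(w, dtype=float)
    d1, d2 = x + 1.0, w - x              # distances to left / right border
    k = (d1 / (d1 + d2)).reshape(1, w)   # broadcast over rows
    return k * overlap_l + (1.0 - k) * overlap_r
```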
Please refer to Fig. 13, which is a structural schematic diagram of a panorama stitching device in the embodiment of the present invention, concretely as follows:
A panorama stitching device comprises, connected in sequence, an image preprocessing module 10, a feature extraction module 11, a feature matching module 12, a sorting module 13, a perspective transformation matrix calculation module 14, an initial depth-of-field and rotation matrix calculation module 15, a bundle adjustment module 16, an image straightening module 17, an exposure compensation module 18 and an image fusion module 19;
The image preprocessing module 10 is used for performing image preprocessing on the two or more collected sub-images;
The feature extraction module 11 is used for performing feature extraction on the two or more sub-images obtained by the image preprocessing module 10;
The feature matching module 12 is used for performing feature matching on the two or more sub-images obtained by the feature extraction module 11;
The sorting module 13 is used for sorting the two or more sub-images obtained by the feature matching module 12;
The perspective transformation matrix calculation module 14 is used for calculating perspective transformation matrices for the two or more sub-images obtained by the sorting module 13;
The initial depth-of-field and rotation matrix calculation module 15 is used for calculating the initial depth of field and rotation matrices for the two or more sub-images obtained by the perspective transformation matrix calculation module 14;
The bundle adjustment module 16 is used for performing bundle adjustment on the two or more sub-images obtained by the initial depth-of-field and rotation matrix calculation module 15;
The image straightening module 17 is used for performing image straightening on the two or more sub-images obtained by the bundle adjustment module 16;
The exposure compensation module 18 is used for performing exposure compensation on the two or more sub-images obtained by the image straightening module 17;
The image fusion module 19 is used for performing image fusion on the panorama obtained by the exposure compensation module 18.
From the above description, with an image preprocessing module, a feature extraction module, a feature matching module, a sorting module, a perspective transformation matrix calculation module, an initial depth-of-field and rotation matrix calculation module, a bundle adjustment module, an image straightening module, an exposure compensation module and an image fusion module connected in sequence, two or more sub-images can be stitched into a panorama.
In summary, in the panorama stitching method and device provided by the present invention, two or more sub-images successively undergo image preprocessing, feature extraction, feature matching, sorting, perspective transformation matrix calculation, initial depth-of-field and rotation matrix calculation, bundle adjustment, image straightening, exposure compensation and image fusion, so that the two or more sub-images can be stitched into a panorama. The result is a 360° panorama with no viewing-angle dead zone and high definition; details are expressed more perfectly, and the sense of reality and of presence is enhanced. The data volume of the panorama is small, stitching sub-images into a panorama places low demands on hardware, and the huge data volume produced by surveillance cameras during long-term operation can be handled. The image histogram equalization achieves a clearer image and helps extract image features. For image denoising, mean filtering effectively suppresses noise; median filtering protects edges, and the larger the window, the stronger the effect; the k-nearest-neighbor mean filter keeps edges smooth. In "feature extraction", when extracting SIFT feature points, "fitting a three-dimensional quadratic function to accurately determine the position and scale of keypoints while removing low-contrast keypoints and unstable edge response points" enhances matching stability and improves noise resistance; when extracting SURF feature points, "interpolating in scale space and image space" allows sub-pixel localization of candidate feature points.
By calculating the exposure compensation parameters in this way, the brightness difference between the overlap regions of the images is minimized. By calculating, for each image and its adjacent image, the distance values of matched feature point pairs after transformation to the reference plane, and minimizing those distance values to adjust the transformation matrices between adjacent images, the adjustment objective is achieved.
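As a toy end-to-end illustration of the stitching idea (not the patented method: registration is reduced to a brute-force one-dimensional shift search instead of feature matching and bundle adjustment, and fusion to distance weighting; all names are invented):

```python
import numpy as np

def stitch_pair(left, right, min_overlap=8):
    """Register two horizontally overlapping strips by searching the
    horizontal overlap that minimizes the mean squared difference, then
    fuse the overlap with distance weighting and copy the rest."""
    best_shift, best_err = None, np.inf
    for ov in range(min_overlap, min(left.shape[1], right.shape[1])):
        err = np.mean((left[:, -ov:] - right[:, :ov]) ** 2)
        if err < best_err:
            best_err, best_shift = err, ov
    ov = best_shift
    # weight of the right image rises from ~0 to ~1 across the overlap
    k = ((np.arange(ov) + 1.0) / (ov + 1.0))[None, :]
    blended = (1.0 - k) * left[:, -ov:] + k * right[:, :ov]
    return np.hstack([left[:, :-ov], blended, right[:, ov:]])
```

The real pipeline replaces the shift search with steps S101 to S107 and the simple feathering with steps S108 to S110, but the overall register-then-fuse structure is the same.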
The foregoing is merely embodiments of the present invention and does not thereby limit the scope of the claims of the present invention. Any equivalent transformation made using the contents of the description and drawings of the present invention, or any direct or indirect application in related technical fields, is likewise included within the scope of patent protection of the present invention.

Claims (10)

1. A panorama stitching method, characterized by comprising:
S101, performing image preprocessing on the two or more collected sub-images;
S102, performing feature extraction on the two or more preprocessed sub-images;
S103, performing feature matching on the two or more feature-extracted sub-images;
S104, sorting the two or more feature-matched sub-images;
S105, calculating perspective transformation matrices for the two or more sorted sub-images;
S106, calculating the initial depth of field and rotation matrices for the two or more sub-images for which perspective transformation matrices have been calculated;
S107, performing bundle adjustment on the two or more sub-images for which the initial depth of field and rotation matrices have been calculated;
S108, performing image straightening on the two or more bundle-adjusted sub-images;
S109, performing exposure compensation on the two or more straightened sub-images;
S110, performing image fusion on the two or more exposure-compensated sub-images.
2. The panorama stitching method according to claim 1, characterized in that the image preprocessing in step S101 comprises image histogram equalization and image denoising;
the image histogram equalization is specifically: widening the gray levels that contain many pixels in the image and compressing the gray levels that contain few pixels;
the image denoising adopts mean filtering, median filtering or k-nearest-neighbor mean filtering.
3. The panorama stitching method according to claim 1, characterized in that the "feature extraction" in step S102 extracts SIFT, SURF or Harris feature points.
4. The panorama stitching method according to claim 1, characterized in that the "feature matching" in step S103 adopts Euclidean distance, bidirectional matching, median filtering or the RANSAC algorithm to obtain matched pairs; the "calculating the initial depth of field and rotation matrix" in step S104 adopts the least squares method.
5. The panorama stitching method according to claim 1, characterized in that the "sorting" in step S104 is specifically: sorting according to the number of matched feature point pairs of the two or more sub-images.
6. The panorama stitching method according to claim 1, characterized in that the "bundle adjustment" in step S107 is specifically: images are added to the adjuster one by one, and the transformation matrices from every image in the adjuster to the reference plane are optimized and revised simultaneously; the "calculating perspective transformation matrix" in step S105 is specifically: the two or more bundle-adjusted sub-images are projected onto the same reference plane, the projection synthesis being carried out with a plane projection model, a cylindrical projection model or a spherical projection model.
7. The panorama stitching method according to claim 1, characterized in that the "image straightening" in step S108 is specifically: performing image straightening by calculating a global rotation vector.
8. The panorama stitching method according to claim 1, characterized in that the "exposure compensation" in step S109 is specifically: obtaining the overlap regions of the two or more sub-images, calculating the exposure compensation coefficients of the overlap regions, and then performing exposure compensation on the sub-images according to the exposure compensation coefficients.
9. The panorama stitching method according to claim 1, characterized in that the "image fusion" in step S110 adopts the average superposition method, multi-resolution fusion or weighted fusion.
10. A panorama stitching device, characterized by comprising, connected in sequence, an image preprocessing module, a feature extraction module, a feature matching module, a sorting module, a perspective transformation matrix calculation module, an initial depth-of-field and rotation matrix calculation module, a bundle adjustment module, an image straightening module, an exposure compensation module and an image fusion module;
the image preprocessing module is used for performing image preprocessing on the two or more collected sub-images;
the feature extraction module is used for performing feature extraction on the two or more sub-images obtained by the image preprocessing module;
the feature matching module is used for performing feature matching on the two or more sub-images obtained by the feature extraction module;
the sorting module is used for sorting the two or more sub-images obtained by the feature matching module;
the perspective transformation matrix calculation module is used for calculating perspective transformation matrices for the two or more sub-images obtained by the sorting module;
the initial depth-of-field and rotation matrix calculation module is used for calculating the initial depth of field and rotation matrices for the two or more sub-images obtained by the perspective transformation matrix calculation module;
the bundle adjustment module is used for performing bundle adjustment on the two or more sub-images obtained by the initial depth-of-field and rotation matrix calculation module;
the image straightening module is used for performing image straightening on the two or more sub-images obtained by the bundle adjustment module;
the exposure compensation module is used for performing exposure compensation on the two or more sub-images obtained by the image straightening module;
the image fusion module is used for performing image fusion on the panorama obtained by the exposure compensation module.
CN201410667610.7A 2014-11-20 2014-11-20 Method and device for panoramic stitching Pending CN105608667A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410667610.7A CN105608667A (en) 2014-11-20 2014-11-20 Method and device for panoramic stitching


Publications (1)

Publication Number Publication Date
CN105608667A true CN105608667A (en) 2016-05-25

Family

ID=55988583


Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105107190A (en) * 2015-09-15 2015-12-02 清华大学 Image collecting and processing system applied to Chinese billiards and image processing method
CN106339981A (en) * 2016-08-25 2017-01-18 安徽协创物联网技术有限公司 Panorama stitching method
CN106780309A (en) * 2016-12-21 2017-05-31 中国航空工业集团公司雷华电子技术研究所 A kind of diameter radar image joining method
CN107124588A (en) * 2017-05-24 2017-09-01 成都跟驰科技有限公司 A kind of monitoring system of use multi-cam system
CN107123090A (en) * 2017-04-25 2017-09-01 无锡中科智能农业发展有限责任公司 It is a kind of that farmland panorama system and method are automatically synthesized based on image mosaic technology
CN107197209A (en) * 2017-06-29 2017-09-22 中国电建集团成都勘测设计研究院有限公司 The dynamic method for managing and monitoring of video based on panorama camera
CN107497621A (en) * 2017-09-20 2017-12-22 王晓东 Extended pattern is atomized regulating system and method
CN107580175A (en) * 2017-07-26 2018-01-12 济南中维世纪科技有限公司 A kind of method of single-lens panoramic mosaic
CN107645646A (en) * 2017-07-07 2018-01-30 罗普特(厦门)科技集团有限公司 A kind of regional monitoring system implementation method
CN107665479A (en) * 2017-09-05 2018-02-06 平安科技(深圳)有限公司 A kind of feature extracting method, panorama mosaic method and its device, equipment and computer-readable recording medium
CN107918927A (en) * 2017-11-30 2018-04-17 武汉理工大学 A kind of matching strategy fusion and the fast image splicing method of low error
CN107968918A (en) * 2016-10-20 2018-04-27 现代自动车株式会社 Apparatus for controlling exposure of multi-view camera, system and method thereof
CN108648150A (en) * 2018-05-10 2018-10-12 句容康泰膨润土有限公司 A kind of image split-joint method
CN108830804A (en) * 2018-05-23 2018-11-16 长春理工大学 Virtual reality fusion Fuzzy Consistent processing method based on line spread function standard deviation
CN108921848A (en) * 2018-09-29 2018-11-30 长安大学 Bridge Defect Detecting device and detection image joining method based on more mesh cameras
CN109478318A (en) * 2016-09-08 2019-03-15 三星电子株式会社 360 deg video-splicing
CN110118530A (en) * 2019-04-22 2019-08-13 西京学院 A kind of big workpiece high-precision optical vision measuring method
WO2020007307A1 (en) * 2018-07-03 2020-01-09 深圳岚锋创视网络科技有限公司 Sky filter method for panoramic images and portable terminal
CN111192198A (en) * 2019-12-26 2020-05-22 台州学院 Pipeline panoramic scanning method based on pipeline robot
CN111275765A (en) * 2018-12-05 2020-06-12 杭州海康威视数字技术股份有限公司 Method and device for determining target GPS and camera
CN111402309A (en) * 2020-03-05 2020-07-10 中国科学院上海微系统与信息技术研究所 Calibration and depth image processing method for multi-view stereo camera
CN111738918A (en) * 2020-05-08 2020-10-02 普宙机器人科技(武汉)有限公司 Panorama splicing method and system based on unmanned aerial vehicle cloud server calculation
WO2021120405A1 (en) * 2019-12-17 2021-06-24 大连理工大学 Large angle of visual field image real-time splicing method based on calibration
WO2021120498A1 (en) * 2019-12-20 2021-06-24 Qualcomm Incorporated Image fusion
CN113298720A (en) * 2021-04-21 2021-08-24 重庆邮电大学 Self-adaptive overlapped image rotation method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102006425A (en) * 2010-12-13 2011-04-06 交通运输部公路科学研究所 Method for splicing video in real time based on multiple cameras
CN102968777A (en) * 2012-11-20 2013-03-13 河海大学 Image stitching method based on overlapping region scale-invariant feather transform (SIFT) feature points
CN103226822A (en) * 2013-05-15 2013-07-31 清华大学 Medical image stitching method
US20130208997A1 (en) * 2010-11-02 2013-08-15 Zte Corporation Method and Apparatus for Combining Panoramic Image
CN104125410A (en) * 2014-07-02 2014-10-29 深圳英飞拓科技股份有限公司 Panoramic video multi-lens exposure compensation method and device thereof


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MATTHEW BROWN et al.: "Automatic Panoramic Image Stitching using Invariant Features", International Journal of Computer Vision *
RUAN Peng: "Research on Multi-Image Stitching Algorithms", China Master's Theses Full-text Database, Information Science and Technology *



Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20160525
