CN109903227A - Panoramic image splicing method based on camera geometric position relation - Google Patents

Panoramic image splicing method based on camera geometric position relation

Info

Publication number
CN109903227A
Authority
CN
China
Prior art keywords
camera
panoramic image
image
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910129695.6A
Other languages
Chinese (zh)
Other versions
CN109903227B
Inventor
黄玉春
邱歆煜
田程硕
刘洋洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN201910129695.6A priority Critical patent/CN109903227B/en
Publication of CN109903227A publication Critical patent/CN109903227A/en
Application granted
Publication of CN109903227B publication Critical patent/CN109903227B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Processing (AREA)
  • Studio Devices (AREA)
  • Stereoscopic And Panoramic Photography (AREA)

Abstract

The present invention provides a panoramic image stitching method based on the geometric position relationship between cameras. The method comprises: step 1, assembling and calibrating a multi-camera system composed of multiple cameras; step 2, establishing a panoramic coordinate system and obtaining the extrinsic parameters between each camera and the panoramic coordinate system; step 3, correcting the distortion of the original distorted images to obtain corrected images satisfying the collinearity of object point, optical center, and image point, removing the overlapping regions between images captured by adjacent cameras, and mapping the de-overlapped images onto the panoramic sphere; step 4, unfolding the panoramic sphere to obtain the stitched panoramic image. The method of the present invention improves the stitching efficiency of panoramic images and can stitch panoramic images at a real-time level.

Description

Panoramic image splicing method based on camera geometric position relation
Technical field
The method belongs to the fields of photogrammetry and computer vision. It concerns the calibration of the interior and exterior parameters of the cameras used for imaging and the construction of a panoramic mapping model, and it obtains panoramic images without computing transformation matrices between the image sequences to be stitched or searching for optimal seam lines from the images' own features, which greatly improves stitching speed.
Background technique
Panoramic images provide 360-degree omnidirectional scene information and are widely used in street-view maps, digital cities, mobile mapping, and other fields, so obtaining panoramic images of different scenes quickly and accurately is particularly important. The traditional approach captures, with one or more cameras, a sequence of images with a certain pairwise overlap, computes the transformation matrices between images from feature matches in the overlapping regions, and finally stitches the images pairwise with these matrices to obtain the panoramic image. Such feature-based stitching must extract and match features for every pair of images to be stitched, which is computationally expensive and slow. Moreover, when the overlapping region between adjacent images lacks texture or is small, feature extraction or matching easily fails and the stitching cannot be completed. Another common approach detects the optimal seam line in the overlapping region between images and mosaics multiple images into a panoramic picture. This approach treats seam-line detection as an energy optimization problem and solves it by minimizing a specially designed energy function. The energy function is usually defined from color, gradient, and texture, and is optimized by algorithms such as snake models, Dijkstra's algorithm, and graph cuts. Because various energy costs must be computed and the optimal seam line determined by an optimization algorithm, these methods likewise suffer from large time overhead and low efficiency. Therefore, how to obtain panoramic images accurately and rapidly has always been a hot issue in photogrammetry and computer vision.
Summary of the invention
In order to obtain panoramic images more accurately and efficiently, the present invention discloses a panoramic image stitching method based on the mutual geometric position relationship of multiple cameras.
The technical scheme of the invention is to build a panoramic mapping model from the interior and exterior calibration parameters of the multiple cameras and the camera imaging geometry, and thereby to obtain the panoramic image, comprising the following steps:
Step 1: assemble a multi-camera system composed of multiple cameras and calibrate it;
Step 2: establish a panoramic coordinate system and obtain the extrinsic parameters between each camera and the panoramic coordinate system;
Step 3: correct the distortion of the original distorted images to obtain corrected images satisfying the collinearity of object point, optical center, and image point; then remove the overlapping regions between images captured by adjacent cameras and map the de-overlapped images onto the panoramic sphere;
Step 4: unfold the panoramic sphere to obtain the stitched panoramic image.
Further, the multi-camera system in step 1 comprises 6 cameras: one top camera whose optical axis points to the zenith, and 5 side cameras regularly arranged at the vertices of a regular pentagon to capture the scene in the horizontal direction, whose optical centers lie approximately on a circle.
Further, the parameters calibrated in step 1 include the intrinsic parameters of each camera in the multi-camera system and the extrinsic parameters of each camera relative to the calibration-field coordinate system at the moment of exposure. The intrinsic parameters include the focal length (fx, fy), the principal point (x0, y0), and the distortion coefficients (k1, k2, p1, p2), and describe the physical imaging characteristics and distortion of the camera itself; the extrinsic parameters include a translation vector (Xs, Ys, Zs) and three rotation angles (φ, ω, κ), and define the pose of the camera coordinate system relative to the reference coordinate system.
Further, the camera intrinsic and extrinsic parameters are calibrated as follows.
The multi-camera system is placed in a manually established high-precision close-range photogrammetric calibration field. Two or more cameras with a common field of view simultaneously photograph control points of known coordinates in the calibration field to obtain calibration images. The pixel coordinates of these control points are measured in the calibration images and used as observations, and the intrinsic and extrinsic parameters of the cameras are solved in a single adjustment from the collinearity equations of photogrammetry,
where equation (1) is the radial-tangential distortion model, (δx, δy) is the image-point offset caused by distortion, k1 and k2 are the radial distortion coefficients, p1 and p2 are the tangential distortion coefficients, Δx = xd - x0, Δy = yd - y0, r² = Δx² + Δy², and (xd, yd) are the measured pixel coordinates of a control point; equation (2) is the collinearity equation, where (XA, YA, ZA) are the three-dimensional coordinates of a control point and R is the rotation matrix defined by the three rotation angles (φ, ω, κ).
Further, the panoramic coordinate system in step 2 is established as follows.
Any one of the horizontal-direction cameras is taken as the reference camera; the Z axis of the panoramic coordinate system points to the zenith, the positive X axis is aligned with the optical axis of the reference camera, and the Y axis is determined by the right-hand rule.
Further, in step 2 the extrinsic parameters between each camera and the panoramic coordinate system are obtained from the extrinsic parameters between adjacent cameras, where the extrinsic parameters between adjacent cameras are computed as follows,
where the rotation matrix R is defined by its elements as follows,
b1=cos ω sin κ
b2=cos ω cos κ
b3=-sin ω
where the subscripts 1 and 2 of R and T denote the indices of the two adjacent cameras.
Further, the mapping of images onto the panoramic sphere in step 3 is implemented as follows.
Suppose object point A is imaged in camera On as a0, where n is the camera index and the pixel coordinates of a0 are (x, y); the coordinates of a0 in the camera coordinate system are then determined by the principal point and focal length of camera n, which are obtained by calibrating the multi-camera system. Since On, a0, and A are collinear, the mapping point a of a0 on the panoramic sphere lies on the ray from On through a0, scaled by a proportionality coefficient t and expressed in the panoramic coordinate system. The mapping position a(X, Y, Z) of pixel a0 on the panoramic sphere is obtained by solving equations (3) and (4) simultaneously, where r is the radius of the panoramic sphere;
X² + Y² + Z² = r²    (4)
where (Rn, Tn) denote the extrinsic parameters from camera n to the panoramic coordinate system.
Further, the unfolding of the panoramic sphere into the stitched panoramic image in step 4 is implemented as follows.
The image of each camera, after distortion correction and overlap removal, is mapped onto the panoramic sphere, yielding a set of discrete three-dimensional points over which a k-d tree, denoted K, is built. The position P(X, Y, Z) of each panoramic pixel p(u, v) in the panoramic coordinate system is then computed. Finally the nearest neighbor of P is queried in K and its (R, G, B) value is assigned to the panoramic pixel p, giving the panoramic image. The coordinates (X, Y, Z) in the panoramic coordinate system corresponding to (u, v) are defined by the following formula,
where (w, h) denote the width and height of the panoramic image.
The advantages of the present invention are as follows. Because the extrinsic parameters between all cameras are fixed once the multi-camera system is assembled and integrated, the interior and exterior parameters of the camera system only need to be accurately calibrated once. In addition, the present solution requires neither feature extraction and matching between the images to be stitched nor the definition of other features and construction of an energy function. Since the correspondence between the images to be stitched and the panoramic image is fixed, the panoramic image can be stitched quickly, essentially at a real-time level.
Brief description of the drawings
Fig. 1 shows (a) the structure of the multi-camera system and (b) the camera coordinate systems and the panoramic coordinate system.
Fig. 2 shows the manually established high-precision close-range photogrammetric calibration field.
Fig. 3 shows a pair of calibration images captured by two cameras with a common field of view.
Fig. 4 is a schematic diagram of the panoramic mapping model.
Fig. 5 illustrates overlap removal between two adjacent cameras.
Fig. 6 is an example of overlap-region processing, where (a) shows the image planes of cameras 1 and 2 after rotation transformation and (b) is the corresponding top view.
Fig. 7 is the overall flowchart of the embodiment of the present invention.
Specific embodiment
The technical solution of the present invention is analyzed and explained in detail below with reference to the drawings and an embodiment of the present invention. The flow of the implementation is shown in Fig. 7, and the implementation process of the embodiment can be summarized in the following steps:
Step 1: calibrate the multi-camera system.
In the embodiment of the present invention, six high-quality wide-angle cameras with similar imaging parameters and performance are assembled into a multi-camera system according to the structure shown in Fig. 1(a) and (b). Five of them are mounted on the side to capture the scene in the horizontal direction, with their optical centers distributed approximately on a circle (numbered 1, 2, 3, 4, 5 in clockwise order); the remaining camera is mounted on top with its optical axis pointing to the zenith (number 6). After the multi-camera system is integrated, the extrinsic parameters (rotation and translation) between the coordinate systems of the cameras remain unchanged. The parameters of the multi-camera system to be calibrated fall into two classes, intrinsic and extrinsic. The intrinsic parameters include the focal length (fx, fy), the principal point (x0, y0), and the distortion coefficients (k1, k2, p1, p2), and describe the physical imaging characteristics and distortion of the camera itself; the extrinsic parameters include a translation vector (Xs, Ys, Zs) and three rotation angles (φ, ω, κ), and define the pose of the camera coordinate system relative to the reference coordinate system.
During calibration, two or more cameras with a common field of view simultaneously photograph control points of known coordinates in the calibration field to obtain calibration images. The pixel coordinates of these control points are measured in the calibration images and used as observations, the intrinsic and extrinsic parameters of the cameras are solved in a single adjustment from the collinearity equations of photogrammetry, and the mutual extrinsic parameters between cameras are obtained with the calibration-field coordinate system as an intermediary. The details are as follows:
The multi-camera system is calibrated in a manually established high-precision close-range photogrammetric calibration field containing more than 200 control points, whose three-dimensional coordinates are accurately measured with a total station, see Fig. 2. Two adjacent cameras photograph the calibration field with a common field of view; the left and right images in Fig. 3 were captured by camera 1 and camera 2, respectively. The pixel coordinates of a sufficient number of control points are selected from the two images as observations, and the parameters to be calibrated are solved iteratively by least squares after linearizing the collinearity equations.
Equation (1) is the radial-tangential distortion model, where (δx, δy) is the image-point offset caused by distortion, k1 and k2 are the radial distortion coefficients, p1 and p2 are the tangential distortion coefficients, Δx = xd - x0, Δy = yd - y0, r² = Δx² + Δy², and (xd, yd) are the measured pixel coordinates of a control point. Equation (2) is the collinearity equation, where (XA, YA, ZA) are the three-dimensional coordinates of a control point and R is the rotation matrix defined by the three rotation angles (φ, ω, κ). The calibration parameters of camera 1 and camera 2 are computed separately from the two calibration images of Fig. 3, and the extrinsic parameters (R12, T12) between camera 1 and camera 2 are then computed with the calibration-field coordinate system as an intermediary, as follows,
where the rotation matrix R is defined by its elements as follows,
b1=cos ω sin κ
b2=cos ω cos κ
b3=-sin ω
Calibration images are captured with common views of the close-range photogrammetric calibration field for the camera pairs (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 1) in turn, and the six sets of extrinsic parameters (R12, T12), (R23, T23), (R34, T34), (R45, T45), (R56, T56), (R61, T61) are obtained through the collinearity equations.
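As an illustration only (the corresponding formulas appear in the source as equation images that are not reproduced here), the Python sketch below composes the relative extrinsic parameters of two adjacent cameras from their calibration-field extrinsics. The function name and the convention x_cam = R (X_world - C), with C the camera position in the calibration-field frame, are assumptions made for this example.

```python
import numpy as np

def relative_extrinsics(R1, C1, R2, C2):
    """Relative pose (R12, T12) taking camera-1 coordinates into camera-2
    coordinates, assuming each camera maps calibration-field points as
    x_cam = R @ (X_world - C), with C the camera position in the field."""
    # x2 = R2 @ (X - C2) and X = R1.T @ x1 + C1, hence:
    R12 = R2 @ R1.T
    T12 = R2 @ (C1 - C2)
    return R12, T12
```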
Step 2: establish the panoramic coordinate system from the calibration results of the multi-camera system, and obtain the extrinsic parameters between each camera coordinate system and the panoramic coordinate system.
The panoramic coordinate system is established as shown in Fig. 1(b): its origin is the geometric center of the optical centers of the five side cameras; its Z axis is parallel to the optical axis of the top camera, with the positive direction pointing to the zenith; its X axis is parallel to the optical axis of camera 1, with the positive direction consistent with the optical axis of camera 1; and its Y axis is determined by the right-hand rule. Two points should be noted: (1) because of small assembly and integration errors, the optical axis of camera 1 may not be exactly perpendicular to the optical axis of camera 6, but the deviation is tiny; (2) the present invention uses camera 1 as the reference when establishing the panoramic coordinate system, but any of the five side cameras could in fact serve as the reference. According to the definition of the panoramic coordinate system, the extrinsic parameters (R1, T1) between camera 1 and the panoramic coordinate system are obtained by the following formula,
where T21, T31, T41, T51 are the translation vectors in the extrinsic parameters from cameras 2, 3, 4, 5 to camera 1. Consider two coordinate systems, denoted i and j; if the extrinsic parameters from coordinate system i to coordinate system j are (Rij, Tij), then the extrinsic parameters (Rji, Tji) from coordinate system j to coordinate system i are obtained by the following formula,
In this way, from (R12, T12), (R23, T23), (R34, T34), (R45, T45), (R56, T56), (R61, T61) the extrinsic parameters (R21, T21), (R31, T31), (R41, T41), (R51, T51), (R61, T61) from cameras 2, 3, 4, 5, 6 to camera 1 are obtained, and the extrinsic parameters of the other cameras relative to the panoramic coordinate system are then obtained with camera 1 as an intermediary; in the present invention they are uniformly denoted (Rn, Tn), where n is the camera index.
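A minimal sketch of the extrinsic inversion and chaining just described, assuming the convention x_j = R_ij x_i + T_ij; the function names are illustrative and not taken from the patent.

```python
import numpy as np

def invert_extrinsics(R_ij, T_ij):
    # If x_j = R_ij @ x_i + T_ij, then x_i = R_ij.T @ x_j - R_ij.T @ T_ij.
    return R_ij.T, -R_ij.T @ T_ij

def chain_extrinsics(R_ab, T_ab, R_bc, T_bc):
    # Composition a -> b -> c: x_c = R_bc @ (R_ab @ x_a + T_ab) + T_bc.
    return R_bc @ R_ab, R_bc @ T_ab + T_bc

# Example: extrinsics from camera 3 to camera 1, obtained by inverting
# (R12, T12) and (R23, T23) and chaining 3 -> 2 -> 1.
# R21, T21 = invert_extrinsics(R12, T12)
# R32, T32 = invert_extrinsics(R23, T23)
# R31, T31 = chain_extrinsics(R32, T32, R21, T21)
```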
Step 3: correct the distortion of the original distorted images to obtain corrected images satisfying the collinearity of object point, optical center, and image point; remove the redundant overlapping regions between adjacent corrected images; and map the six de-overlapped, distortion-corrected images onto the panoramic sphere in the manner shown in Fig. 4.
First, the principal point (x0, y0) and distortion coefficients (k1, k2, p1, p2) from the calibration results of the multi-camera system are used to compute the image-point offset (δx, δy) caused by distortion according to equation (1); the ideal pixel coordinates are then recovered, completing the distortion correction of the original distorted images and yielding corrected images that satisfy the collinearity of object point, optical center, and image point.
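Equation (1) itself is given in the source only as an image; the sketch below uses the standard Brown radial-tangential model with the coefficient names from the text (k1, k2, p1, p2), which is assumed here to be the patent's model (references differ on how p1 and p2 are assigned to the two tangential terms).

```python
def distortion_offset(xd, yd, x0, y0, k1, k2, p1, p2):
    """Image-point offset (delta_x, delta_y) caused by lens distortion,
    using a standard Brown radial-tangential model (assumed form of the
    patent's equation (1))."""
    dx, dy = xd - x0, yd - y0
    r2 = dx * dx + dy * dy
    radial = k1 * r2 + k2 * r2 * r2
    delta_x = dx * radial + p1 * (r2 + 2 * dx * dx) + 2 * p2 * dx * dy
    delta_y = dy * radial + p2 * (r2 + 2 * dy * dy) + 2 * p1 * dx * dy
    return delta_x, delta_y

# The ideal (distortion-free) pixel position is then recovered as
# (xd - delta_x, yd - delta_y).
```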
Overlap processing is then carried out. As shown in Fig. 5, O1 and O2 are the optical centers of two adjacent cameras in the multi-camera system; they lie approximately on a circle of radius about 6 cm, and A1B1 and A2B2 are the corresponding image planes. If the optical centers of the adjacent cameras coincided during imaging, as for image planes A'1B'1 and A2B2, the overlapping regions between the two images would be A'1D1 and D2B2: the scene corresponding to segment A'1E of the right image is imaged on segment D2E of the left image, and the scene corresponding to segment EB2 of the left image is imaged on segment ED1 of the right image. The two image planes are transformed with the rotation matrices of the two camera coordinate systems relative to the panoramic coordinate system, and the intersection line of the two transformed planes is found; the intersection is located at point E. If the two half-overlapping segments A'1E and EB2 are removed along this intersection, no scene information is lost, and at the junction of the left and right images the scene information transitions continuously without interruption. However, the optical centers of two cameras cannot coincide when the multi-camera system is assembled; the actual imaging geometry is as shown on the left and right of Fig. 5, with a certain offset between the two optical centers. O1E' and O2E are parallel, and an object lying between these two parallel lines and within the common field of view of the two cameras is imaged on segment A1E' of the right image and on segment EB2 of the left image, so removing A1E' and EB2 loses a small part of the scene information. But because the optical centers of the six lenses lie approximately on a sphere of radius about 6 cm, the region between O1E' and O2E is a very small space, so only a little scene information is lost and the final panoramic image is not affected.
The procedure is illustrated with cameras 1 and 2. Denote the image planes of cameras 1 and 2 by P1 and P2; their equations are z1 = f1 and z2 = f2 with normal vectors n1 = (0, 0, 1) and n2 = (0, 0, 1), because each image plane is perpendicular to its camera's optical axis. After transformation by the rotation matrices the normals become n'1 = R1 n1 and n'2 = R2 n2. The two points p1(0, 0, f1) and p2(0, 0, f2) on P1 and P2 become p'1 = R1 p1 and p'2 = R2 p2 after the rotation transformation. Denoting the planes P1 and P2 after the rotation transformation by P'1 and P'2, their equations in point-normal form are n'1 · (x'1 - p'1) = 0 and n'2 · (x'2 - p'2) = 0. As shown in Fig. 6, (a) is the result of applying the rotation transformation to the image planes of cameras 1 and 2, and (b) is the top view. As described above, B1E and A2E are the parts to be retained (grey), while A1E and B2E are the parts to be removed (black). For a point x'1 on A1E, the vector formed by x'1 and the point p'2 on P'2 has an inner product with n'2 greater than zero, i.e. θ1 is an acute angle; for a point x'2 on B1E, the vector formed by x'2 and p'2 has an inner product with n'2 less than zero, i.e. θ2 is an obtuse angle. In the same way B2E is removed and A2E is retained. The overlapping regions of the images of the other cameras are processed similarly.
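The retain/discard decision just described reduces to the sign of an inner product. A minimal sketch, assuming the rotated quantities n'2 and p'2 have already been computed as above (the function name is illustrative):

```python
import numpy as np

def keep_point_on_plane1(x1_rot, p2_rot, n2_rot):
    """Return True if a rotated image-plane-1 point lies on the retained
    B1E side: the vector from p'2 to the point makes an obtuse angle with
    n'2 (negative inner product). A positive inner product marks the
    duplicated A1E side, which is discarded."""
    v = np.asarray(x1_rot, float) - np.asarray(p2_rot, float)
    return float(v @ np.asarray(n2_rot, float)) < 0.0
```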
Finally, the six processed images are mapped onto the panoramic sphere in the manner shown in Fig. 4. Object point A is imaged in camera On as a0 with pixel coordinates (x, y), so the coordinates of a0 in the camera coordinate system are determined by the principal point and focal length of the camera, which are obtained by calibrating the multi-camera system. Since On, a0, and A are collinear, the mapping point a of a0 on the panoramic sphere lies on the ray from On through a0, scaled by a proportionality coefficient t and expressed in the panoramic coordinate system. In this way the mapping position a(X, Y, Z) on the panoramic sphere of pixel a0 of the distortion-corrected image is obtained by solving the ray equation and the sphere equation (10) simultaneously, where r is the radius of the panoramic sphere.
X² + Y² + Z² = r²    (10)
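Because the ray equation is reproduced only as an image, the following sketch illustrates the mapping of a corrected pixel onto the panoramic sphere under assumed conventions: a pinhole ray direction ((x - x0)/fx, (y - y0)/fy, 1) in the camera frame, and extrinsic parameters (Rn, Tn) taking camera-n coordinates into the panoramic frame.

```python
import numpy as np

def map_pixel_to_sphere(x, y, x0, y0, fx, fy, R_n, T_n, r):
    """Map a distortion-corrected pixel (x, y) of camera n onto the
    panoramic sphere of radius r (assumed conventions, see above)."""
    d = R_n @ np.array([(x - x0) / fx, (y - y0) / fy, 1.0])  # ray direction in the panoramic frame
    c = np.asarray(T_n, dtype=float)  # optical center On expressed in the panoramic frame
    # Intersect the ray c + t*d (t > 0) with the sphere X^2 + Y^2 + Z^2 = r^2;
    # the positive root selects the forward intersection.
    a = d @ d
    b = 2.0 * (c @ d)
    k = c @ c - r * r
    t = (-b + np.sqrt(b * b - 4.0 * a * k)) / (2.0 * a)
    return c + t * d
```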
Step 4: unfold the panoramic sphere to obtain the panoramic image.
Step 3 yields the mapping onto the panoramic sphere of the six distortion-corrected images after overlap processing. Because the number of pixels in the original image sequence is limited, directly unfolding the mapped panoramic sphere would leave many panoramic pixels with unknown values in the unfolded panoramic image. The present invention instead transforms each panoramic pixel into the panoramic coordinate system and determines its value from the distance relationship between the panoramic pixel and the mapped points on the panoramic sphere. The specific procedure is as follows: the six distortion-corrected images are mapped onto the panoramic sphere to obtain a set of discrete three-dimensional points, over which a k-d tree, denoted K, is built; the position P(X, Y, Z) of each panoramic pixel p(u, v) in the panoramic coordinate system is then computed; finally the nearest neighbor of P is queried in K and its (R, G, B) value is assigned to the panoramic pixel p, giving the panoramic image. Assuming the width and height of the panoramic image are w and h, the coordinates (X, Y, Z) in the panoramic coordinate system corresponding to (u, v) are defined by the following formula,
The intrinsic and extrinsic parameters of the multi-camera system itself, for cameras n = 1, 2, 3, 4, 5, 6, are fixed once the system is assembled, and once the radius r of the panoramic sphere is determined (a radius of 10 meters or more is recommended), the point set mapped onto the panoramic sphere is also fixed. After the width and height (w, h) of the panoramic image are determined, the query point P(X, Y, Z) corresponding to each panoramic pixel p(u, v) is also constant according to equations (5) and (6). Therefore, for a given panoramic sphere radius and image size, the correspondence between panoramic pixels and the pixels of the distortion-corrected images is fixed, so the panoramic image of a different scene can be obtained quickly, reaching the level of real-time stitching.
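The unfolding of step 4 can be sketched as follows. The k-d tree and nearest-neighbor query follow the text, while the equirectangular pixel-to-sphere parameterization is an assumption standing in for the patent's equations (5) and (6), which are not reproduced here.

```python
import numpy as np
from scipy.spatial import cKDTree

def unfold_panorama(sphere_points, sphere_colors, w, h, r):
    """Unfold the mapped panoramic sphere into a w x h panoramic image by
    nearest-neighbor lookup in a k-d tree (step 4). sphere_points is the
    (N, 3) array of mapped points, sphere_colors the (N, 3) RGB values."""
    tree = cKDTree(sphere_points)          # the tree K over the mapped points
    v, u = np.mgrid[0:h, 0:w]              # pixel row (v) and column (u) indices
    lon = 2.0 * np.pi * u / w              # assumed equirectangular longitude
    lat = np.pi * v / h - np.pi / 2.0      # assumed equirectangular latitude
    X = r * np.cos(lat) * np.cos(lon)
    Y = r * np.cos(lat) * np.sin(lon)
    Z = r * np.sin(lat)
    query = np.stack([X.ravel(), Y.ravel(), Z.ravel()], axis=1)
    _, idx = tree.query(query)             # nearest mapped point for every pixel
    return sphere_colors[idx].reshape(h, w, 3)
```

Because the mapped point set and the pixel-to-sphere correspondence are fixed for a given rig, sphere radius, and image size, the index array idx can be computed once and reused for every new set of input images, which is what allows the stitching to run at a real-time level.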
The specific embodiment described herein merely illustrates the spirit of the present invention. Those skilled in the art to which the present invention belongs can make various modifications or additions to the described embodiment, or substitute it in a similar manner, without departing from the spirit of the present invention or exceeding the scope defined by the appended claims.

Claims (8)

1. A panoramic image stitching method based on the geometric position relationship between cameras, characterized by comprising the following steps:
Step 1: assemble a multi-camera system composed of multiple cameras and calibrate it;
Step 2: establish a panoramic coordinate system and obtain the extrinsic parameters between each camera and the panoramic coordinate system;
Step 3: correct the distortion of the original distorted images to obtain corrected images satisfying the collinearity of object point, optical center, and image point; then remove the overlapping regions between images captured by adjacent cameras and map the de-overlapped images onto the panoramic sphere;
Step 4: unfold the panoramic sphere to obtain the stitched panoramic image.
2. The panoramic image stitching method based on the geometric position relationship between cameras according to claim 1, characterized in that: the multi-camera system in step 1 comprises 6 cameras, wherein one top camera has its optical axis pointing to the zenith, and 5 side cameras are regularly arranged at the vertices of a regular pentagon to capture the scene in the horizontal direction, the optical centers of the 5 cameras lying approximately on a circle.
3. The panoramic image stitching method based on the geometric position relationship between cameras according to claim 1, characterized in that: the parameters calibrated in step 1 include the intrinsic parameters of each camera in the multi-camera system and the extrinsic parameters of each camera relative to the calibration-field coordinate system at the moment of exposure, wherein the intrinsic parameters include the focal length (fx, fy), the principal point (x0, y0), and the distortion coefficients (k1, k2, p1, p2), and describe the physical imaging characteristics and distortion of the camera itself; the extrinsic parameters include a translation vector (Xs, Ys, Zs) and three rotation angles (φ, ω, κ), and define the pose of the camera coordinate system relative to the reference coordinate system.
4. The panoramic image stitching method based on the geometric position relationship between cameras according to claim 3, characterized in that: the camera intrinsic and extrinsic parameters are calibrated as follows,
the multi-camera system is placed in a manually established high-precision close-range photogrammetric calibration field; two or more cameras with a common field of view simultaneously photograph control points of known coordinates in the calibration field to obtain calibration images; the pixel coordinates of these control points are measured in the calibration images and used as observations, and the intrinsic and extrinsic parameters of the cameras are solved in a single adjustment from the collinearity equations of photogrammetry,
where equation (1) is the radial-tangential distortion model, (δx, δy) is the image-point offset caused by distortion, k1 and k2 are the radial distortion coefficients, p1 and p2 are the tangential distortion coefficients, Δx = xd - x0, Δy = yd - y0, r² = Δx² + Δy², and (xd, yd) are the measured pixel coordinates of a control point; equation (2) is the collinearity equation, where (XA, YA, ZA) are the three-dimensional coordinates of a control point and R is the rotation matrix defined by the three rotation angles (φ, ω, κ).
5. The panoramic image stitching method based on the geometric position relationship between cameras according to claim 1, characterized in that: the panoramic coordinate system in step 2 is established as follows,
any one of the horizontal-direction cameras is taken as the reference camera; the Z axis of the panoramic coordinate system points to the zenith, the positive X axis is aligned with the optical axis of the reference camera, and the Y axis is determined by the right-hand rule.
6. The panoramic image stitching method based on the geometric position relationship between cameras according to claim 1, characterized in that: in step 2 the extrinsic parameters between each camera and the panoramic coordinate system are obtained from the extrinsic parameters between adjacent cameras, wherein the extrinsic parameters between adjacent cameras are computed as follows,
where the rotation matrix R is defined by its elements as follows,
b1 = cos ω sin κ
b2 = cos ω cos κ
b3 = -sin ω
where the subscripts 1 and 2 of R and T denote the indices of the two adjacent cameras.
7. The panoramic image stitching method based on the geometric position relationship between cameras according to claim 1, characterized in that: the mapping of images onto the panoramic sphere in step 3 is implemented as follows,
suppose object point A is imaged in camera On as a0, where n is the camera index and the pixel coordinates of a0 are (x, y); the coordinates of a0 in the camera coordinate system are then determined by the principal point and focal length of camera n, which are obtained by calibrating the multi-camera system; since On, a0, and A are collinear, the mapping point a of a0 on the panoramic sphere lies on the ray from On through a0, scaled by a proportionality coefficient t and expressed in the panoramic coordinate system; the mapping position a(X, Y, Z) of pixel a0 on the panoramic sphere is obtained by solving equations (3) and (4) simultaneously, where r is the radius of the panoramic sphere;
X² + Y² + Z² = r²    (4)
where (Rn, Tn) denote the extrinsic parameters from camera n to the panoramic coordinate system.
8. The panoramic image stitching method based on the geometric position relationship between cameras according to claim 1, characterized in that: the unfolding of the panoramic sphere into the stitched panoramic image in step 4 is implemented as follows,
the image of each camera, after distortion correction and overlap removal, is mapped onto the panoramic sphere, yielding a set of discrete three-dimensional points over which a k-d tree, denoted K, is built; the position P(X, Y, Z) of each panoramic pixel p(u, v) in the panoramic coordinate system is then computed; finally the nearest neighbor of P is queried in K and its (R, G, B) value is assigned to the panoramic pixel p, giving the panoramic image; the coordinates (X, Y, Z) in the panoramic coordinate system corresponding to (u, v) are defined by the following formula,
where (w, h) denote the width and height of the panoramic image.
CN201910129695.6A 2019-02-21 2019-02-21 Panoramic image splicing method based on camera geometric position relation Active CN109903227B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910129695.6A CN109903227B (en) 2019-02-21 2019-02-21 Panoramic image splicing method based on camera geometric position relation


Publications (2)

Publication Number Publication Date
CN109903227A true CN109903227A (en) 2019-06-18
CN109903227B CN109903227B (en) 2021-09-14

Family

ID=66945175

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910129695.6A Active CN109903227B (en) 2019-02-21 2019-02-21 Panoramic image splicing method based on camera geometric position relation

Country Status (1)

Country Link
CN (1) CN109903227B (en)



Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070206878A1 (en) * 2002-06-28 2007-09-06 Microsoft Corporation System and method for head size equalization in 360 degree panoramic images
CN104243776A (en) * 2013-06-18 2014-12-24 华硕电脑股份有限公司 Image processing method
CN104835117A (en) * 2015-05-11 2015-08-12 合肥工业大学 Spherical panorama generating method based on overlapping way
CN106651773A (en) * 2016-11-30 2017-05-10 努比亚技术有限公司 Picture processing method and device
CN106886976A (en) * 2017-03-14 2017-06-23 成都通甲优博科技有限责任公司 A kind of image generating method based on intrinsic parameter amendment flake camera
CN108200360A (en) * 2018-01-12 2018-06-22 深圳市粒视界科技有限公司 A kind of real-time video joining method of more fish eye lens panoramic cameras
CN108364252A (en) * 2018-01-12 2018-08-03 深圳市粒视界科技有限公司 A kind of correction of more fish eye lens panorama cameras and scaling method
CN109064404A (en) * 2018-08-10 2018-12-21 西安电子科技大学 It is a kind of based on polyphaser calibration panorama mosaic method, panoramic mosaic system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
H.Y. SHUM: "Systems and experiment paper: Construction of panoramic image mosaics with global and local alignment", International Journal of Computer Vision *
刘娜: "Research on calibration methods based on binocular fisheye lenses" (基于双目鱼眼镜头的标定方法研究), China Master's Theses Full-text Database, Information Science and Technology *
曹君 et al.: "Indirect panoramic image stitching for vehicle-mounted fisheye cameras" (车载鱼眼相机的间接法全景影像拼接), Geospatial Information (地理空间信息) *

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110443855A (en) * 2019-08-08 2019-11-12 Oppo广东移动通信有限公司 Multi-camera calibration, device, storage medium and electronic equipment
CN110827199A (en) * 2019-10-29 2020-02-21 武汉大学 Tunnel image splicing method and device based on guidance of laser range finder
CN110910457B (en) * 2019-11-22 2021-04-16 大连理工大学 Multispectral three-dimensional camera external parameter calculation method based on angular point characteristics
CN110910457A (en) * 2019-11-22 2020-03-24 大连理工大学 Multispectral three-dimensional camera external parameter calculation method based on angular point characteristics
CN111462304A (en) * 2020-03-16 2020-07-28 天目爱视(北京)科技有限公司 3D acquisition and size measurement method for space field
US11645780B2 (en) 2020-03-16 2023-05-09 Realsee (Beijing) Technology Co., Ltd. Method and device for collecting images of a scene for generating virtual reality data
CN111462304B (en) * 2020-03-16 2021-06-15 天目爱视(北京)科技有限公司 3D acquisition and size measurement method for space field
CN111432119A (en) * 2020-03-27 2020-07-17 贝壳技术有限公司 Image shooting method and device, computer readable storage medium and electronic equipment
WO2021190649A1 (en) * 2020-03-27 2021-09-30 Ke.Com (Beijing) Technology Co., Ltd. Method and device for collecting images of a scene for generating virtual reality data
CN111432119B (en) * 2020-03-27 2021-03-23 北京房江湖科技有限公司 Image shooting method and device, computer readable storage medium and electronic equipment
CN111770326A (en) * 2020-06-20 2020-10-13 武汉大势智慧科技有限公司 Indoor three-dimensional monitoring method for panoramic video projection
CN111770326B (en) * 2020-06-20 2022-03-29 武汉大势智慧科技有限公司 Indoor three-dimensional monitoring method for panoramic video projection
CN112308778B (en) * 2020-10-16 2021-08-10 香港理工大学深圳研究院 Method and terminal for assisting panoramic camera splicing by utilizing spatial three-dimensional information
CN112308778A (en) * 2020-10-16 2021-02-02 香港理工大学深圳研究院 Method and terminal for assisting panoramic camera splicing by utilizing spatial three-dimensional information
CN112437287A (en) * 2020-11-23 2021-03-02 成都易瞳科技有限公司 Panoramic image scanning and splicing method
CN112669199A (en) * 2020-12-16 2021-04-16 影石创新科技股份有限公司 Image stitching method, computer-readable storage medium and computer device
WO2022127875A1 (en) * 2020-12-16 2022-06-23 影石创新科技股份有限公司 Image splicing method, computer-readable storage medium, and computer device
CN113485058A (en) * 2021-06-18 2021-10-08 苏州小优智能科技有限公司 Compact high-precision three-dimensional face imaging device and three-dimensional face imaging method
CN113758423A (en) * 2021-11-10 2021-12-07 风脉能源(武汉)股份有限公司 Method for determining position of image acquisition equipment based on image inner scale
CN113758423B (en) * 2021-11-10 2022-02-15 风脉能源(武汉)股份有限公司 Method for determining position of image acquisition equipment based on image inner scale
CN114677842A (en) * 2022-03-13 2022-06-28 党荣斌 Freight road safety data acquisition is with night ground information shooting device

Also Published As

Publication number Publication date
CN109903227B (en) 2021-09-14

Similar Documents

Publication Publication Date Title
CN109903227A (en) Panoramic image splicing method based on camera geometric position relation
CN107316325B (en) Airborne laser point cloud and image registration fusion method based on image registration
CN107492069B (en) Image fusion method based on multi-lens sensor
CN110197466B (en) Wide-angle fisheye image correction method
US7479982B2 (en) Device and method of measuring data for calibration, program for measuring data for calibration, program recording medium readable with computer, and image data processing device
TWI555379B (en) An image calibrating, composing and depth rebuilding method of a panoramic fish-eye camera and a system thereof
CN107194991B (en) Three-dimensional global visual monitoring system construction method based on skeleton point local dynamic update
CN111243033B (en) Method for optimizing external parameters of binocular camera
CN112598740B (en) Rapid and accurate matching method for large-range multi-view oblique image connection points
CN109325981B (en) Geometric parameter calibration method for micro-lens array type optical field camera based on focusing image points
CN110874854B (en) Camera binocular photogrammetry method based on small baseline condition
CN106556414B (en) A kind of automatic digital orientation method of laser scanner
CN109685855A (en) A kind of camera calibration optimization method under road cloud monitor supervision platform
CN109443359A (en) A kind of geographic positioning of ground full-view image
CN106886976B (en) Image generation method for correcting fisheye camera based on internal parameters
CN113947638B (en) Method for correcting orthographic image of fish-eye camera
CN111879354A (en) Unmanned aerial vehicle measurement system that becomes more meticulous
CN101354796A (en) Omnidirectional stereo vision three-dimensional rebuilding method based on Taylor series model
CN108269234A (en) A kind of lens of panoramic camera Attitude estimation method and panorama camera
CN111968182B (en) Calibration method for nonlinear model parameters of binocular camera
CN109712200B (en) Binocular positioning method and system based on least square principle and side length reckoning
Wu Photogrammetry: 3-D from imagery
CN113029108B (en) Automatic relative orientation method and system based on sequence sea surface images
CN106447613A (en) Image local registration based method and system for removing blur shadow of panorama
CN114972013B (en) Fisheye image rapid orthorectification method based on spherical geometry single transformation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant