CN106331527A - Image splicing method and device - Google Patents

Image splicing method and device

Info

Publication number
CN106331527A
CN106331527A
Authority
CN
China
Prior art keywords
coordinate
camera head
image
pixel
photocentre
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610890008.9A
Other languages
Chinese (zh)
Other versions
CN106331527B (en)
Inventor
袁梓瑾
简伟华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Beijing Co Ltd
Original Assignee
Tencent Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Beijing Co Ltd filed Critical Tencent Technology Beijing Co Ltd
Priority to CN201610890008.9A priority Critical patent/CN106331527B/en
Publication of CN106331527A publication Critical patent/CN106331527A/en
Priority to PCT/CN2017/105657 priority patent/WO2018068719A1/en
Application granted granted Critical
Publication of CN106331527B publication Critical patent/CN106331527B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T5/80
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/698 Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95 Computational photography systems, e.g. light-field imaging systems
    • H04N23/951 Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2624 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects for obtaining an image which is composed of whole input images, e.g. splitscreen
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265 Mixing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Abstract

The invention discloses an image splicing method and device. The method includes: obtaining images captured by at least two cameras; for each camera, constructing a three-dimensional coordinate system whose origin is a preset common optical center of the cameras; for each pixel of the image captured by each camera, converting the first coordinates of the pixel in the two-dimensional coordinate system of the image into second coordinates in the three-dimensional coordinate system, and correcting the second coordinates according to the optical center of the camera and a target object point specified in the image to obtain third coordinates; and splicing all the images according to the third coordinates of all the pixels in all the images. With this technical scheme, a spliced image free of parallax can be provided, and the resource utilization of the image splicing device is improved.

Description

Image splicing method and device
Technical field
The present application relates to the technical field of image processing, and in particular to an image splicing method and device.
Background technology
At present, 360-degree panoramic video is becoming one of the main types of content in the field of virtual reality. Compared with conventional limited-field-of-view video, panoramic video can give the user a far more realistic and immersive viewing experience. Because single-lens systems capable of capturing panoramic video are still rare, panoramic video is usually produced by splicing videos captured by multiple cameras or multiple lens systems.
According to the optical perspective geometry of camera lenses, the two-dimensional images captured by two lens systems that do not share an optical center always exhibit a certain parallax in their overlapping field of view. Moreover, the amount of parallax differs between depth planes, which ultimately causes visually unacceptable flaws in the spliced image, such as ghosting, double images, and breaks or misalignments of continuous lines. The spliced image is therefore of poor quality, which degrades the user's viewing experience and reduces the resource utilization of the imaging device.
Summary of the invention
In view of this, the present invention provides an image splicing method and device, which can provide a parallax-free spliced image and improve the resource utilization of the image splicing device.
The technical scheme of the present invention is achieved as follows:
The present invention provides an image splicing method, including:
obtaining images respectively captured by at least two cameras;
for each camera, constructing a three-dimensional coordinate system of the camera with a preset common optical center of the at least two cameras as the origin;
for each pixel in the image captured by each camera, performing the following processing:
converting the first coordinates of the pixel in the two-dimensional coordinate system of the image into second coordinates in the three-dimensional coordinate system;
correcting the second coordinates according to the optical center of the camera and a target object point specified in the image, to obtain third coordinates; and
splicing all the images according to the third coordinates of each pixel in all the images.
The present invention also provides an image splicing device, including:
an acquisition module, configured to obtain images respectively captured by at least two cameras;
a coordinate system construction module, configured to, for each camera, construct a three-dimensional coordinate system of the camera with a preset common optical center of the at least two cameras as the origin;
a coordinate processing module, configured to perform the following processing for each pixel in the image captured by each camera: converting the first coordinates of the pixel in the two-dimensional coordinate system of the image into second coordinates in the three-dimensional coordinate system; and correcting the second coordinates according to the optical center of the camera and a target object point specified in the image, to obtain third coordinates; and
a splicing module, configured to splice all the images according to the third coordinates of each pixel in all the images.
Compared with the prior art, the method provided by the present invention is independent of the geometric properties of the captured objects, of the specific imaging geometry of the camera, and of the projection type used for the final splice. It provides a general technique for a parallax-free splicing depth plane: the depth at which the main content of the scene is located can be adaptively selected as the parallax-free splicing depth plane, so that a parallax-free spliced image is provided without additional parallax-removal processing, and the resource utilization of the image splicing device is improved.
Brief description of the drawings
To explain the technical schemes in the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Evidently, the drawings described below show only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort. In the drawings:
Fig. 1 is an exemplary flowchart of an image splicing method according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of constructing a Cartesian coordinate system according to an embodiment of the present invention;
Fig. 3 is an exemplary flowchart of an optical center offset compensation method according to an embodiment of the present invention;
Fig. 4a is a coordinate diagram of correcting the second coordinates according to an embodiment of the present invention;
Fig. 4b is a coordinate diagram of determining the offset according to an embodiment of the present invention;
Fig. 5 is an exemplary flowchart of an image splicing method according to another embodiment of the present invention;
Fig. 6a is a schematic diagram of a two-dimensional image before splicing according to an embodiment of the present invention;
Fig. 6b is a schematic diagram of a two-dimensional image after splicing according to an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of an image splicing device according to an embodiment of the present invention;
Fig. 8 is a schematic structural diagram of an image splicing device according to another embodiment of the present invention.
Detailed description of the invention
The technical schemes in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings in the embodiments of the present invention. Evidently, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The image splicing method and device in the embodiments of the present invention are applicable to any imaging system having at least two cameras in which the fields of view of two adjacent cameras share a common portion, i.e. an overlapping field of view, so that the images they capture overlap. According to the method of the embodiments of the present invention, the image captured by each camera is processed separately and the images of the whole imaging system are then spliced, so that a complete, parallax-free panoramic image can be obtained at the specified target object point (or depth plane).
Fig. 1 is an exemplary flowchart of an image splicing method according to an embodiment of the present invention. As shown in Fig. 1, the method may include the following steps:
Step 101: obtain the images respectively captured by at least two cameras.
In this step, the images captured by all the cameras in an imaging system are obtained first.
Step 102: for each camera, construct a three-dimensional coordinate system of the camera with a preset common optical center of the at least two cameras as the origin.
Since each camera has the optical center of its own lens, a common optical center is preset first in this step; that is, all the cameras are assumed to share one ideal optical center, which is used as the origin for constructing the three-dimensional coordinate system.
If the three-dimensional coordinate system is denoted (X, Y, Z), constructing the three-dimensional coordinate system of the camera with the preset common optical center as the origin specifically includes: establishing a two-dimensional coordinate system (X, Y) in the plane through the common optical center that is parallel to the imaging plane of the camera, and then determining the Z axis from the two-dimensional coordinate system (X, Y) according to the right-hand rule.
In one embodiment, the three-dimensional coordinate system is a Cartesian coordinate system. As the coordinate system of a camera, such a Cartesian coordinate system is also called a Cartesian world coordinate system. Fig. 2 is a schematic diagram of constructing a Cartesian coordinate system according to an embodiment of the present invention. As shown in Fig. 2, the X, Y and Z axes together form the Cartesian coordinate system of a camera A, and the common optical center O is the origin of the coordinate system. Incident light enters the lens system of camera A at an angle θ and, after being refracted by the lens, forms an image on the imaging plane x'o'y' of camera A. The XOY plane is parallel to the x'o'y' plane.
Step 103: for each pixel in the image captured by each camera, perform the following processing:
Step 1031: convert the first coordinates of the pixel in the two-dimensional coordinate system of the image into second coordinates in the three-dimensional coordinate system;
Step 1032: correct the second coordinates according to the optical center of the camera and the target object point specified in the image, to obtain third coordinates.
For step 1031, converting the first coordinates into the second coordinates specifically includes: determining the angular coordinate of the pixel from the first coordinates; determining, from the lens imaging geometric function of the camera and the first coordinates, the angle between the incident light and the Z axis of the three-dimensional coordinate system (X, Y, Z); and then calculating the second coordinates from the angle and the angular coordinate.
If the first coordinates of a pixel are denoted (x1, y1) and its angular coordinate is denoted φ, determining the angular coordinate of the pixel from the first coordinates includes determining the following trigonometric values of φ:
cos(φ) = (pw·x1) / √((pw·x1)² + (ph·y1)²),  sin(φ) = (ph·y1) / √((pw·x1)² + (ph·y1)²)   (1)
If the second coordinates are denoted (x2, y2, z2) and the angle is denoted θ, then x2, y2 and z2 of the second coordinates are calculated according to the following formulas:
x2 = sin(θ)·cos(φ),  y2 = sin(θ)·sin(φ),  z2 = cos(θ)   (2)
If the lens imaging geometric function of the camera is r(θ), then when the lens of the camera is rectilinear, r(θ) = f·tan(θ), and the angle is
θ = atan( √((pw·x1)² + (ph·y1)²) / f )   (3)
When the lens of the camera is equidistant, r(θ) = f·θ, and the angle is
θ = √((pw·x1)² + (ph·y1)²) / f   (4)
where atan(·) denotes the arctangent function, pw and ph denote the width and height of a pixel respectively, and f is the focal length of the lens (as shown in Fig. 2).
In Fig. 2, a pixel p1' on the imaging plane x'o'y' has first coordinates (x1, y1), and the angle between the line connecting p1' to the origin o' and the x'o' axis is φ. Transformed into the Cartesian coordinate system (X, Y, Z), p1' corresponds to an object point P1 whose three-dimensional coordinates are given by formula (2). The projection of P1 onto the XOY plane is p1, and the angle between the line connecting p1 to the origin O and the XO axis is also φ.
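By way of illustration only, the coordinate conversion of step 1031 (formulas (1) to (4)) can be sketched in Python as follows; the function and parameter names are illustrative rather than part of the embodiment, and the sketch assumes that the first coordinates (x1, y1) are measured from the principal point of the image and that NumPy is available.

import numpy as np

def pixel_to_unit_sphere(x1, y1, f, pw, ph, lens="equidistant"):
    # Physical position of the pixel on the sensor, relative to the principal point.
    u, v = pw * x1, ph * y1
    r = np.hypot(u, v)                    # radial distance on the sensor

    # Angular coordinate phi of the pixel, cf. formula (1).
    phi = np.arctan2(v, u)

    # Angle theta between the incident ray and the Z axis.
    if lens == "rectilinear":
        theta = np.arctan(r / f)          # r(theta) = f*tan(theta), formula (3)
    elif lens == "equidistant":
        theta = r / f                     # r(theta) = f*theta, formula (4)
    else:
        raise ValueError("unsupported lens model")

    # Second coordinates on the unit sphere, formula (2).
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

The returned vector always has modulus 1, which is consistent with the normalized Cartesian coordinate system discussed below.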
The common optical center O described above is unique for all the cameras. In practice, however, each camera has its own optical center O', so the image it forms needs to be compensated according to the deviation between the two optical centers, so that it is consistent with the image that would be formed with O as the origin.
Fig. 3 is an exemplary flowchart of an optical center offset compensation method according to an embodiment of the present invention. For step 1032, correcting the second coordinates according to the optical center of the camera and the target object point specified in the image to obtain the third coordinates specifically includes the following steps, as shown in Fig. 3:
Step 301: obtain the distance between the common optical center and the target object point, i.e. obtain the depth of the target object point.
In this step, the target object point may be specified by the user according to the object point of interest in the captured image, or may be specified according to the main target or content in the scene. After the target object point is specified, the distance between the common optical center and the target object point in the XOZ plane is estimated. For example, the depth of the target object point in a particular scene may be estimated with third-party software as 10 m, 20 m, etc.
Fig. 4a is a coordinate diagram of correcting the second coordinates according to an embodiment of the present invention. As shown in Fig. 4a, the target object point is the object point P1 on the incident ray, and the above distance is the length of the projection of OP1 onto the XOZ plane, i.e. the length from O to P', denoted R0. This distance is also called the depth of the object point P1.
Step 302: obtain the offset of the optical center of the camera relative to the common optical center.
In this step, considering that the images captured by two adjacent cameras in a panoramic imaging system overlap, the above offset can be determined by regression or simulation-based estimation from sample data of the overlapping images and the correspondence/matching relationship with the cameras. For example, in a panoramic (i.e. 360°) video system, multiple cameras are mounted in three-dimensional space, and each camera captures the image within a certain angular field of view.
Fig. 4b is a coordinate diagram of determining the offset according to an embodiment of the present invention. As shown in Fig. 4b, in the coordinate system ABC constructed on the three-dimensional sphere 400, cameras 401 and 402 are arranged at different positions, and the images they capture overlap. The offset between the optical center O' of each camera and the origin O can be determined from the sample data of the overlapping images. Returning to Fig. 4a, the offsets of the optical center O' relative to the origin O along the X, Y and Z axes are Tx, Ty and Tz respectively.
Step 303: calculate the third coordinates from the distance, the offset and the second coordinates.
The second coordinates are corrected, and the coordinate values x3, y3 and z3 of the third coordinates (x3, y3, z3) can be calculated according to the following formulas:
x3 = x2 + Tx/R,  y3 = y2 + Ty/R,  z3 = z2 + Tz/R   (5)
where R is the positive root of the depth constraint (Tx + R·x2)² + (Tz + R·z2)² = R0², i.e. R = (-B + √(B² - 4·A·C)) / (2·A), with A = x2² + z2², B = 2·(Tz·z2 + Tx·x2) and C = Tx² + Tz² - R0².
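As a non-limiting Python sketch of step 303 (the function name is illustrative), the correction of formula (5) may be written as below; since the full definition of R is only partly preserved in the text above, the sketch assumes that R is the positive root of the XOZ-plane depth constraint stated with formula (5).

import numpy as np

def compensate_optical_center(p2, offset, depth):
    # p2:     second coordinates (x2, y2, z2), a unit vector
    # offset: (Tx, Ty, Tz), optical center of this camera relative to the common optical center O
    # depth:  R0, distance from O to the target object point in the XOZ plane
    x2, y2, z2 = p2
    Tx, Ty, Tz = offset

    # Positive root of the quadratic implied by (Tx + R*x2)^2 + (Tz + R*z2)^2 = R0^2.
    A = x2 * x2 + z2 * z2
    B = 2.0 * (Tz * z2 + Tx * x2)
    C = Tx * Tx + Tz * Tz - depth * depth
    R = (-B + np.sqrt(B * B - 4.0 * A * C)) / (2.0 * A)

    # Third coordinates according to formula (5).
    return np.array([x2 + Tx / R, y2 + Ty / R, z2 + Tz / R])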
Step 104: splice all the images according to the third coordinates of each pixel in all the images.
After the above processing has been performed for each pixel of each image, the processed images are spliced according to the positions of the cameras in the imaging system and a chosen projection type, so that a panoramic image without any parallax in the depth plane where the target object point is located is obtained.
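Tying the two sketches above together, again purely for illustration and using the same assumed function names, the per-pixel processing of steps 1031 and 1032 for a single camera image could be driven as follows before the splicing of step 104.

import numpy as np

def third_coordinates_for_image(height, width, f, pw, ph, lens, offset, depth):
    # Uses pixel_to_unit_sphere and compensate_optical_center from the sketches above.
    # Produces one corrected unit-sphere direction per pixel of this camera's image.
    out = np.zeros((height, width, 3), np.float64)
    cy, cx = (height - 1) / 2.0, (width - 1) / 2.0
    for row in range(height):
        for col in range(width):
            x1, y1 = col - cx, row - cy                                   # first coordinates
            p2 = pixel_to_unit_sphere(x1, y1, f, pw, ph, lens)            # step 1031
            out[row, col] = compensate_optical_center(p2, offset, depth)  # step 1032
    return out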
In this embodiment, the images respectively captured by at least two cameras are obtained; for each camera, a three-dimensional coordinate system of the camera is constructed with a preset common optical center of the at least two cameras as the origin; for each pixel in the image captured by each camera, the first coordinates of the pixel in the two-dimensional coordinate system of the image are converted into second coordinates in the three-dimensional coordinate system, and the second coordinates are corrected according to the optical center of the camera and the target object point specified in the image to obtain third coordinates; and all the images are spliced according to the third coordinates of each pixel in all the images. This provides a technique for a parallax-free splicing depth plane: the depth at which the main content of the scene is located can be adaptively selected as the parallax-free splicing depth plane, so that the main content of the scene is spliced without parallax flaws.
In addition, the coordinate conversion and the optical center offset compensation in the above method are independent of the geometric properties of the target object point and do not depend on the shape of any specific target object, which makes the method particularly suitable for video applications whose content changes continuously over time. Compared with the prior art, the above method does not need to perform feature detection or feature matching on the scene content, so that the object point at a desired position, or the scene content, can be completely aligned quickly and flexibly according to the target object point specified by the user (or the specified parallax-free splicing depth plane), providing a parallax-free spliced image. Furthermore, the above method is independent of the specific imaging geometry of the camera and of the projection type of the final splice; it is therefore general-purpose and improves the resource utilization of the image splicing device.
Fig. 5 is an exemplary flowchart of an image splicing method according to another embodiment of the present invention. As shown in Fig. 5, the method includes the following steps:
Step 501: obtain the images respectively captured by at least two cameras.
Step 502: for each camera, construct a Cartesian coordinate system of the camera with a preset common optical center of the at least two cameras as the origin.
Step 503: for each pixel in the image captured by each camera, perform the following processing:
Step 5031: perform coordinate conversion:
convert the first coordinates of the pixel in the two-dimensional coordinate system of the image into second coordinates in the Cartesian coordinate system;
Step 5032: perform optical center offset compensation:
correct the second coordinates according to the optical center of the camera and the target object point specified in the image, to obtain the third coordinates.
It can be seen from formula (2) above that the modulus of the second coordinates is 1, i.e. x2² + y2² + z2² = 1; that is, the Cartesian coordinate system established here is a normalized Cartesian coordinate system. Since a normalized Cartesian coordinate system carries no depth information, object points at different depths on the same incident ray have identical normalized Cartesian coordinate values. As shown in Fig. 2, the object point corresponding to p1' after transformation into the normalized Cartesian coordinate system (X, Y, Z) is not only P1; besides P1, it may also be any other object point along the incident ray, such as P2 in Fig. 2. The object points P1 and P2 have different depths, i.e. different distances from the optical center O in the XOZ plane, but they have the same normalized Cartesian coordinate values (x2, y2, z2), and both correspond to p1' on the imaging plane x'o'y'.
Step 504: according to the position of each camera in the panoramic system, project the third coordinates onto a unit panorama sphere according to a preset projection type.
When all the cameras form a panoramic imaging system, the third coordinates are projected onto a unit panorama sphere. The preset projection type includes, but is not limited to: rectilinear, fisheye, equirectangular, orthographic, stereographic, etc.
Step 505: splice all the images on the unit panorama sphere to obtain a panoramic image.
Through the above steps, the spliced panoramic image achieves a parallax-free splicing depth plane at the position of the specified target object point: adjacent images are perfectly aligned and free of splicing flaws. When the image is displayed to the user, the three-dimensional panoramic image may be converted back into a two-dimensional image.
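Purely as an illustration of step 504 with the equirectangular option named above, a corrected direction can be mapped to a panorama canvas pixel as in the following Python sketch; the longitude/latitude convention and the function name are assumptions of this sketch, and the extrinsic rotation that places the camera in the panoramic system is assumed to have been applied to the third coordinates already.

import numpy as np

def direction_to_equirect(p3, pano_w, pano_h):
    x3, y3, z3 = p3 / np.linalg.norm(p3)      # keep the direction on the unit sphere
    lon = np.arctan2(x3, z3)                  # longitude in (-pi, pi]
    lat = np.arcsin(np.clip(y3, -1.0, 1.0))   # latitude in [-pi/2, pi/2]
    # Map longitude/latitude onto the equirectangular canvas.
    px = (lon / (2.0 * np.pi) + 0.5) * pano_w
    py = (lat / np.pi + 0.5) * pano_h
    return px, py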
Fig. 6 a is the two dimensional image schematic diagram before the splicing of foundation one embodiment of the invention.Wherein, in left Figure 60 0, mesh Mark object point is the first flagpole (as shown in arrow 601), corresponding to the P1-P ' shown in Fig. 4 a.Before optical centre bias compensates, Occur at this flagpole that upper and lower, the left images that cause due to parallax do not line up phenomenon.Right Figure 61 0 can be clearly seen that, Also there is unnecessary point 611 ' in the lower left on the top 611 of flagpole, and flag is originally used for the image shown in 612, but due to parallax, Cause being ultimately imaged for 612 '.
Fig. 6 b is the spliced two dimensional image schematic diagram according to one embodiment of the invention.Correspondingly, left Figure 62 0 is for passing through Coordinate transform, optical centre bias compensate after imaging, at flagpole on hypograph perfection alignment.Can be clearly in right Figure 63 0 See not having the image of alignment all to disappear outside top 611 and flag 612, shown flagpole clearly.It is visible, it is achieved Perfect alignment to main contents thing in scene (i.e. flagpole), in flagpole position, becomes no parallax splicing depth plane.
In a specific application, a reverse processing mode may also be used: on a blank panorama canvas, an inverse processing procedure is performed pixel by pixel (i.e. the optical center offset compensation described in step 5032 and then the coordinate conversion described in step 5031 are performed in the reverse direction) to find the pixel position, in the image captured by the camera, to which the canvas pixel corresponds, and the actual value of that pixel on the current panorama canvas is then obtained by interpolation.
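A minimal Python sketch of this reverse, per-canvas-pixel processing is given below. The three callback parameters are hypothetical stand-ins for: mapping a canvas pixel to its viewing ray (third coordinates), inverting the optical center offset compensation of step 5032, and inverting the coordinate conversion of step 5031 back to a source-image pixel; OpenCV's remap is used here only as one possible interpolation back end.

import numpy as np
import cv2

def render_camera_to_canvas(image, pano_w, pano_h,
                            canvas_to_ray, ray_to_direction, direction_to_pixel):
    # Build, for every canvas pixel, the coordinates of the corresponding source pixel
    # in this camera's image, then sample the source image by bilinear interpolation.
    map_x = np.zeros((pano_h, pano_w), np.float32)
    map_y = np.zeros((pano_h, pano_w), np.float32)
    for py in range(pano_h):
        for px in range(pano_w):
            ray = canvas_to_ray(px, py)            # third coordinates of this canvas pixel
            direction = ray_to_direction(ray)      # inverse of the offset compensation
            map_x[py, px], map_y[py, px] = direction_to_pixel(direction)
    return cv2.remap(image, map_x, map_y, cv2.INTER_LINEAR)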
Fig. 7 is a schematic structural diagram of an image splicing device according to an embodiment of the present invention. As shown in Fig. 7, the image splicing device 700 includes an acquisition module 710, a coordinate system construction module 720, a coordinate processing module 730 and a splicing module 740, wherein:
the acquisition module 710 is configured to obtain images respectively captured by at least two cameras;
the coordinate system construction module 720 is configured to, for each camera, construct a three-dimensional coordinate system of the camera with a preset common optical center of the at least two cameras as the origin;
the coordinate processing module 730 is configured to perform the following processing for each pixel in the image captured by each camera: convert the first coordinates of the pixel in the two-dimensional coordinate system of the image into second coordinates in the three-dimensional coordinate system; and correct the second coordinates according to the optical center of the camera and the target object point specified in the image, to obtain third coordinates; and
the splicing module 740 is configured to splice all the images according to the third coordinates of each pixel in all the images.
In one embodiment, the coordinate processing module 730 includes a conversion unit 731 configured to determine the angular coordinate of the pixel from the first coordinates; determine, from the lens imaging geometric function of the camera and the first coordinates, the angle between the incident light and the Z axis of the three-dimensional coordinate system (X, Y, Z); and calculate the second coordinates from the angular coordinate and the angle.
In one embodiment, if the first coordinates are denoted (x1, y1) and the angular coordinate is denoted φ, the conversion unit 731 is configured to determine the trigonometric values of φ given in formula (1).
The three-dimensional coordinate system is a Cartesian coordinate system; if the second coordinates are denoted (x2, y2, z2) and the angle is denoted θ, the conversion unit 731 is configured to calculate x2, y2 and z2 according to the following formulas:
x2 = sin(θ)·cos(φ),  y2 = sin(θ)·sin(φ),  z2 = cos(θ).
In one embodiment, the coordinate processing module 730 includes a correction unit 732 configured to obtain the distance between the common optical center and the target object point; obtain the offset of the optical center of the camera relative to the common optical center; and calculate the third coordinates from the distance, the offset and the second coordinates.
In one embodiment, if the distance is denoted R0, the offset is denoted (Tx, Ty, Tz), the second coordinates are denoted (x2, y2, z2) and the third coordinates are denoted (x3, y3, z3), the correction unit 732 is configured to calculate x3, y3 and z3 according to the following formulas:
x3 = x2 + Tx/R,  y3 = y2 + Ty/R,  z3 = z2 + Tz/R,
where R = (-B + √(B² - 4·A·C)) / (2·A), with A = x2² + z2², B = 2·(Tz·z2 + Tx·x2) and C = Tx² + Tz² - R0².
In one embodiment, the splicing module 740 is configured to project the third coordinates onto a unit panorama sphere according to a preset projection type and the position of each camera in the panoramic system, and to splice all the images on the unit panorama sphere to obtain a panoramic image.
Fig. 8 is a schematic structural diagram of an image splicing device according to another embodiment of the present invention. The image splicing device 800 may include: a processor 810, a memory 820, a port 830 and a bus 840. The processor 810 and the memory 820 are interconnected by the bus 840, and the processor 810 can receive and send data through the port 830. Wherein:
the processor 810 is configured to execute the machine-readable instruction modules stored in the memory 820;
the memory 820 stores machine-readable instruction modules executable by the processor 810; the instruction modules executable by the processor 810 include an acquisition module 821, a coordinate system construction module 822, a coordinate processing module 823 and a splicing module 824, wherein:
when executed by the processor 810, the acquisition module 821 may: obtain images respectively captured by at least two cameras;
when executed by the processor 810, the coordinate system construction module 822 may: for each camera, construct a three-dimensional coordinate system of the camera with a preset common optical center of the at least two cameras as the origin;
when executed by the processor 810, the coordinate processing module 823 may: for each pixel in the image captured by each camera, perform the following processing: convert the first coordinates of the pixel in the two-dimensional coordinate system of the image into second coordinates in the three-dimensional coordinate system; and correct the second coordinates according to the optical center of the camera and the target object point specified in the image, to obtain third coordinates; and
when executed by the processor 810, the splicing module 824 may: splice all the images according to the third coordinates of each pixel in all the images.
It can thus be seen that, when the instruction modules stored in the memory 820 are executed by the processor 810, the various functions of the acquisition module, the coordinate system construction module, the coordinate processing module and the splicing module in the foregoing embodiments can be realized.
In the above device and system embodiments, the specific manners in which the modules and units realize their own functions are described in the method embodiments and are not repeated here.
In addition, the functional modules in the embodiments of the present invention may be integrated into one processing unit, or each module may exist alone physically, or two or more modules may be integrated into one unit. The above integrated unit may be implemented either in the form of hardware or in the form of a software functional unit.
In addition, each embodiment of the present invention may be realized by a data processing program executed by a data processing device such as a computer. Evidently, such a data processing program constitutes the present invention. In addition, the data processing program, which is usually stored in a storage medium, is executed by reading the program directly out of the storage medium or by installing or copying the program onto a storage device (such as a hard disk and/or a memory) of the data processing device. Therefore, such a storage medium also constitutes the present invention. The storage medium may use any type of recording mode, for example a paper storage medium (such as paper tape), a magnetic storage medium (such as a floppy disk, hard disk or flash memory), an optical storage medium (such as CD-ROM), or a magneto-optical storage medium (such as MO).
Therefore, the present invention also discloses a storage medium in which a data processing program is stored, the data processing program being used to perform any embodiment of the above method of the present invention.
The above are only preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (14)

1. An image splicing method, characterized by comprising:
obtaining images respectively captured by at least two cameras;
for each camera, constructing a three-dimensional coordinate system of the camera with a preset common optical center of the at least two cameras as the origin;
for each pixel in the image captured by each camera, performing the following processing:
converting the first coordinates of the pixel in the two-dimensional coordinate system of the image into second coordinates in the three-dimensional coordinate system;
correcting the second coordinates according to the optical center of the camera and a target object point specified in the image, to obtain third coordinates; and
splicing all the images according to the third coordinates of each pixel in all the images.
2. The method according to claim 1, wherein, if the three-dimensional coordinate system is denoted (X, Y, Z), constructing the three-dimensional coordinate system of the camera with the preset common optical center of the at least two cameras as the origin comprises:
establishing a two-dimensional coordinate system (X, Y) in the plane through the common optical center that is parallel to the imaging plane of the camera; and
determining the Z axis from the two-dimensional coordinate system (X, Y) according to the right-hand rule.
3. The method according to claim 1, wherein converting the first coordinates of the pixel in the two-dimensional coordinate system of the image into the second coordinates in the three-dimensional coordinate system comprises:
determining the angular coordinate of the pixel from the first coordinates;
determining, from the lens imaging geometric function of the camera and the first coordinates, the angle between the incident light and the Z axis of the three-dimensional coordinate system (X, Y, Z); and
calculating the second coordinates from the angular coordinate and the angle.
4. The method according to claim 3, wherein, if the first coordinates are denoted (x1, y1) and the angular coordinate is denoted φ, determining the angular coordinate of the pixel from the first coordinates comprises determining the trigonometric values of φ as:
cos(φ) = (pw·x1) / √((pw·x1)² + (ph·y1)²),  sin(φ) = (ph·y1) / √((pw·x1)² + (ph·y1)²);
and wherein the three-dimensional coordinate system is a Cartesian coordinate system and, if the second coordinates are denoted (x2, y2, z2) and the angle is denoted θ, calculating the second coordinates from the angle and the angular coordinate comprises calculating x2, y2 and z2 according to the following formulas:
x2 = sin(θ)·cos(φ),  y2 = sin(θ)·sin(φ),  z2 = cos(θ).
5. The method according to claim 3 or 4, wherein determining, from the lens imaging geometric function r(θ) of the camera and the first coordinates (x1, y1), the angle θ between the incident light and the Z axis of the three-dimensional coordinate system (X, Y, Z) comprises:
when the lens of the camera is rectilinear, r(θ) = f·tan(θ), and θ = atan( √((pw·x1)² + (ph·y1)²) / f );
when the lens of the camera is equidistant, r(θ) = f·θ, and θ = √((pw·x1)² + (ph·y1)²) / f;
wherein atan(·) denotes the arctangent function, pw and ph denote the width and height of the pixel respectively, and f is the focal length of the lens.
6. The method according to claim 1, wherein correcting the second coordinates according to the optical center of the camera and the target object point specified in the image, to obtain the third coordinates, comprises:
obtaining the distance between the common optical center and the target object point;
obtaining the offset of the optical center of the camera relative to the common optical center; and
calculating the third coordinates from the distance, the offset and the second coordinates.
7. The method according to claim 6, wherein calculating the third coordinates (x3, y3, z3) from the distance R0, the offset (Tx, Ty, Tz) and the second coordinates (x2, y2, z2) comprises calculating x3, y3 and z3 according to the following formulas:
x3 = x2 + Tx/R,  y3 = y2 + Ty/R,  z3 = z2 + Tz/R,
where R = (-B + √(B² - 4·A·C)) / (2·A), with A = x2² + z2², B = 2·(Tz·z2 + Tx·x2) and C = Tx² + Tz² - R0².
8. The method according to any one of claims 1 to 7, wherein splicing all the images according to the third coordinates of each pixel in all the images comprises:
projecting the third coordinates onto a unit panorama sphere according to a preset projection type and the position of each camera in the panoramic system; and
splicing all the images on the unit panorama sphere to obtain a panoramic image.
9. An image splicing device, characterized by comprising:
an acquisition module, configured to obtain images respectively captured by at least two cameras;
a coordinate system construction module, configured to, for each camera, construct a three-dimensional coordinate system of the camera with a preset common optical center of the at least two cameras as the origin;
a coordinate processing module, configured to perform the following processing for each pixel in the image captured by each camera: converting the first coordinates of the pixel in the two-dimensional coordinate system of the image into second coordinates in the three-dimensional coordinate system; and correcting the second coordinates according to the optical center of the camera and a target object point specified in the image, to obtain third coordinates; and
a splicing module, configured to splice all the images according to the third coordinates of each pixel in all the images.
10. The device according to claim 9, wherein the coordinate processing module includes a conversion unit configured to determine the angular coordinate of the pixel from the first coordinates; determine, from the lens imaging geometric function of the camera and the first coordinates, the angle between the incident light and the Z axis of the three-dimensional coordinate system (X, Y, Z); and calculate the second coordinates from the angular coordinate and the angle.
11. The device according to claim 10, wherein, if the first coordinates are denoted (x1, y1) and the angular coordinate is denoted φ, the conversion unit is configured to determine the trigonometric values of φ as:
cos(φ) = (pw·x1) / √((pw·x1)² + (ph·y1)²),  sin(φ) = (ph·y1) / √((pw·x1)² + (ph·y1)²);
and wherein the three-dimensional coordinate system is a Cartesian coordinate system and, if the second coordinates are denoted (x2, y2, z2) and the angle is denoted θ, the conversion unit is configured to calculate x2, y2 and z2 according to the following formulas:
x2 = sin(θ)·cos(φ),  y2 = sin(θ)·sin(φ),  z2 = cos(θ).
12. The device according to claim 9, wherein the coordinate processing module includes a correction unit configured to obtain the distance between the common optical center and the target object point; obtain the offset of the optical center of the camera relative to the common optical center; and calculate the third coordinates from the distance, the offset and the second coordinates.
13. The device according to claim 12, wherein, if the distance is denoted R0, the offset is denoted (Tx, Ty, Tz), the second coordinates are denoted (x2, y2, z2) and the third coordinates are denoted (x3, y3, z3), the correction unit is configured to calculate x3, y3 and z3 according to the following formulas:
x3 = x2 + Tx/R,  y3 = y2 + Ty/R,  z3 = z2 + Tz/R,
where R = (-B + √(B² - 4·A·C)) / (2·A), with A = x2² + z2², B = 2·(Tz·z2 + Tx·x2) and C = Tx² + Tz² - R0².
14. The device according to any one of claims 9 to 13, wherein the splicing module is configured to project the third coordinates onto a unit panorama sphere according to a preset projection type and the position of each camera in the panoramic system, and to splice all the images on the unit panorama sphere to obtain a panoramic image.
CN201610890008.9A 2016-10-12 2016-10-12 Image splicing method and device Active CN106331527B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201610890008.9A CN106331527B (en) 2016-10-12 2016-10-12 Image splicing method and device
PCT/CN2017/105657 WO2018068719A1 (en) 2016-10-12 2017-10-11 Image stitching method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610890008.9A CN106331527B (en) 2016-10-12 2016-10-12 Image splicing method and device

Publications (2)

Publication Number Publication Date
CN106331527A true CN106331527A (en) 2017-01-11
CN106331527B CN106331527B (en) 2019-05-17

Family

ID=57820319

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610890008.9A Active CN106331527B (en) Image splicing method and device

Country Status (2)

Country Link
CN (1) CN106331527B (en)
WO (1) WO2018068719A1 (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018068719A1 (en) * 2016-10-12 2018-04-19 腾讯科技(深圳)有限公司 Image stitching method and apparatus
CN108470360A (en) * 2017-02-23 2018-08-31 钰立微电子股份有限公司 The image device and its correlation technique of depth map are generated using on-plane surface projected image
CN109889736A (en) * 2019-01-10 2019-06-14 深圳市沃特沃德股份有限公司 Based on dual camera, the image acquiring method of multi-cam, device and equipment
CN110072158A (en) * 2019-05-06 2019-07-30 复旦大学 Spherical surface equatorial zone double C-type panoramic video projecting method
CN110519774A (en) * 2018-05-21 2019-11-29 中国移动通信集团广东有限公司 Base station surveying method, system and equipment based on VR technology
CN111432119A (en) * 2020-03-27 2020-07-17 贝壳技术有限公司 Image shooting method and device, computer readable storage medium and electronic equipment
CN112449100A (en) * 2019-09-03 2021-03-05 中国科学院长春光学精密机械与物理研究所 Splicing method and device for aerial camera oblique images, terminal and storage medium
CN112669199A (en) * 2020-12-16 2021-04-16 影石创新科技股份有限公司 Image stitching method, computer-readable storage medium and computer device
CN112771842A (en) * 2020-06-02 2021-05-07 深圳市大疆创新科技有限公司 Imaging method, imaging apparatus, computer-readable storage medium
TWI764024B (en) * 2018-07-30 2022-05-11 瑞典商安訊士有限公司 Method and camera system combining views from plurality of cameras
US11645780B2 (en) 2020-03-16 2023-05-09 Realsee (Beijing) Technology Co., Ltd. Method and device for collecting images of a scene for generating virtual reality data

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111142825B (en) * 2019-12-27 2024-04-16 杭州拓叭吧科技有限公司 Multi-screen visual field display method and system and electronic equipment
CN113873220A (en) * 2020-12-03 2021-12-31 上海飞机制造有限公司 Deviation analysis method, device, system, equipment and storage medium
CN114554176A (en) * 2022-01-24 2022-05-27 北京有竹居网络技术有限公司 Depth camera
CN115781665B (en) * 2022-11-01 2023-08-08 深圳史河机器人科技有限公司 Mechanical arm control method and device based on monocular camera and storage medium
CN116643393B (en) * 2023-07-27 2023-10-27 南京木木西里科技有限公司 Microscopic image deflection-based processing method and system

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3889650B2 (en) * 2002-03-28 2007-03-07 三洋電機株式会社 Image processing method, image processing apparatus, computer program, and recording medium
CN103379267A (en) * 2012-04-16 2013-10-30 鸿富锦精密工业(深圳)有限公司 Three-dimensional space image acquisition system and method
CN106331527B (en) * 2016-10-12 2019-05-17 腾讯科技(北京)有限公司 Image splicing method and device

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010019363A1 (en) * 2000-02-29 2001-09-06 Noboru Katta Image pickup system and vehicle-mounted-type sensor system
US20020036649A1 (en) * 2000-09-28 2002-03-28 Ju-Wan Kim Apparatus and method for furnishing augmented-reality graphic using panoramic image with supporting multiuser
CN101521745A (en) * 2009-04-14 2009-09-02 王广生 Multi-lens optical center superposing type omnibearing shooting device and panoramic shooting and retransmitting method
CN101710932A (en) * 2009-12-21 2010-05-19 深圳华为通信技术有限公司 Image stitching method and device
CN101783883A (en) * 2009-12-26 2010-07-21 华为终端有限公司 Adjusting method in co-optical-center videography and co-optical-center camera system
CN102798350A (en) * 2012-07-10 2012-11-28 中联重科股份有限公司 Method, device and system for measuring deflection of arm support
US20140071227A1 (en) * 2012-09-11 2014-03-13 Hirokazu Takenaka Image processor, image processing method and program, and imaging system
CN104506764A (en) * 2014-11-17 2015-04-08 南京泓众电子科技有限公司 An automobile traveling recording system based on a spliced video image
CN105812640A (en) * 2016-05-27 2016-07-27 北京伟开赛德科技发展有限公司 Spherical omni-directional camera device and video image transmission method thereof

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018068719A1 (en) * 2016-10-12 2018-04-19 腾讯科技(深圳)有限公司 Image stitching method and apparatus
CN108470360A (en) * 2017-02-23 2018-08-31 钰立微电子股份有限公司 The image device and its correlation technique of depth map are generated using on-plane surface projected image
TWI660328B (en) * 2017-02-23 2019-05-21 鈺立微電子股份有限公司 Image device utilizing non-planar projection images to generate a depth map and related method thereof
US11508082B2 (en) 2017-02-23 2022-11-22 Eys3D Microelectronics, Co. Image device utilizing non-planar projection images to generate a depth map
CN108470360B (en) * 2017-02-23 2022-06-17 钰立微电子股份有限公司 Image device for generating depth map by using non-plane projection image and related method thereof
US10885650B2 (en) 2017-02-23 2021-01-05 Eys3D Microelectronics, Co. Image device utilizing non-planar projection images to generate a depth map and related method thereof
CN110519774A (en) * 2018-05-21 2019-11-29 中国移动通信集团广东有限公司 Base station surveying method, system and equipment based on VR technology
TWI764024B (en) * 2018-07-30 2022-05-11 瑞典商安訊士有限公司 Method and camera system combining views from plurality of cameras
CN109889736A (en) * 2019-01-10 2019-06-14 深圳市沃特沃德股份有限公司 Based on dual camera, the image acquiring method of multi-cam, device and equipment
CN110072158B (en) * 2019-05-06 2021-06-04 复旦大学 Spherical equator area double-C type panoramic video projection method
CN110072158A (en) * 2019-05-06 2019-07-30 复旦大学 Spherical surface equatorial zone double C-type panoramic video projecting method
CN112449100A (en) * 2019-09-03 2021-03-05 中国科学院长春光学精密机械与物理研究所 Splicing method and device for aerial camera oblique images, terminal and storage medium
CN112449100B (en) * 2019-09-03 2023-11-17 中国科学院长春光学精密机械与物理研究所 Aviation camera inclined image splicing method, device, terminal and storage medium
US11645780B2 (en) 2020-03-16 2023-05-09 Realsee (Beijing) Technology Co., Ltd. Method and device for collecting images of a scene for generating virtual reality data
CN111432119B (en) * 2020-03-27 2021-03-23 北京房江湖科技有限公司 Image shooting method and device, computer readable storage medium and electronic equipment
WO2021190649A1 (en) * 2020-03-27 2021-09-30 Ke.Com (Beijing) Technology Co., Ltd. Method and device for collecting images of a scene for generating virtual reality data
CN111432119A (en) * 2020-03-27 2020-07-17 贝壳技术有限公司 Image shooting method and device, computer readable storage medium and electronic equipment
CN112771842A (en) * 2020-06-02 2021-05-07 深圳市大疆创新科技有限公司 Imaging method, imaging apparatus, computer-readable storage medium
CN112669199A (en) * 2020-12-16 2021-04-16 影石创新科技股份有限公司 Image stitching method, computer-readable storage medium and computer device

Also Published As

Publication number Publication date
CN106331527B (en) 2019-05-17
WO2018068719A1 (en) 2018-04-19

Similar Documents

Publication Publication Date Title
CN106331527B (en) Image splicing method and device
CN101673395B (en) Image mosaic method and image mosaic device
CN110809786B (en) Calibration device, calibration chart, chart pattern generation device, and calibration method
KR101666959B1 (en) Image processing apparatus having a function for automatically correcting image acquired from the camera and method therefor
US20190012804A1 (en) Methods and apparatuses for panoramic image processing
JP2874710B2 (en) 3D position measuring device
CN107316273B (en) Panoramic image acquisition device and acquisition method
CN106709865B (en) Depth image synthesis method and device
CN202172446U (en) Wide angle photographing apparatus
CN109087244A (en) A kind of Panorama Mosaic method, intelligent terminal and storage medium
JP2007192832A (en) Calibrating method of fish eye camera
JP4680104B2 (en) Panorama image creation method
WO2013005265A1 (en) Three-dimensional coordinate measuring device and three-dimensional coordinate measuring method
Pathak et al. Dense 3D reconstruction from two spherical images via optical flow-based equirectangular epipolar rectification
CN109819169A (en) Panorama shooting method, device, equipment and medium
KR102200866B1 (en) 3-dimensional modeling method using 2-dimensional image
CN115049548A (en) Method and apparatus for restoring image obtained from array camera
KR20150002995A (en) Distortion Center Correction Method Applying 2D Pattern to FOV Distortion Correction Model
CN113034347A (en) Oblique photographic image processing method, device, processing equipment and storage medium
TWM594322U (en) Camera configuration system with omnidirectional stereo vision
JP4778569B2 (en) Stereo image processing apparatus, stereo image processing method, and stereo image processing program
WO2021093804A1 (en) Omnidirectional stereo vision camera configuration system and camera configuration method
CN111131689B (en) Panoramic image restoration method and system
Kudinov et al. The algorithm for a video panorama construction and its software implementation using CUDA technology
Xu et al. Image rectification for single camera stereo system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant