CN107333064A - Stitching method and system for spherical panoramic video - Google Patents

Stitching method and system for spherical panoramic video

Info

Publication number
CN107333064A
Authority
CN
China
Prior art keywords
image
video
corner point
point
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710607242.0A
Other languages
Chinese (zh)
Other versions
CN107333064B (en)
Inventor
罗立宏
Current Assignee
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date
Filing date
Publication date
Application filed by Guangdong University of Technology
Priority to CN201710607242.0A, granted as CN107333064B
Publication of CN107333064A
Application granted
Publication of CN107333064B
Legal status: Active
Anticipated expiration


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/698: Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/90: Determination of colour characteristics
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/222: Studio circuitry; studio devices; studio equipment
    • H04N5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; cameras specially adapted for the electronic generation of special effects
    • H04N5/265: Mixing

Abstract

The invention discloses a stitching method for spherical panoramic video, comprising: obtaining a video pair shot with fisheye lenses; taking each first image and the second image in one-to-one correspondence with it as an image pair, performing a stitching operation on each pair, and forming the stitched video. Automatic panoramic-video stitching is achieved by applying suitable coordinate transforms to the video shot with the fisheye lenses. Solving for the transform matrix involves corner detection, empirical-region search, and HSV colour-space matching to detect the match points automatically. The method needs few shots, stitches quickly, and requires no manual adjustment or intervention once stitching starts. The generated stitched video is of good quality: the seam at the centre line is accurate and the transition is natural, and the played-back video shows no image shake or misalignment. The invention also discloses a stitching system for spherical panoramic video with the same beneficial effects.

Description

Stitching method and system for spherical panoramic video
Technical field
The present invention relates to the technical field of image processing, and more particularly to a stitching method and system for spherical panoramic video.
Background art
Panoramic video is a new kind of video. Conventional video captures only one direction and a limited range of a scene; panoramic video, by contrast, records the full 360-degree omnidirectional information of the scene and supports a degree of user interaction, such as changing the viewing direction and switching scenes. It is assembled from ordinary videos shot in several different directions. Video stitching has important applications in fields such as the military, medicine, computer vision, video conferencing, and security. Panoramic video developed out of panoramic images, whose stitching has already been studied extensively.
By technique, the most common methods fall into three classes: frequency-domain methods, feature-based methods, and grayscale-gradient methods. Frequency-domain methods, represented by improvements of the fractional Fourier transform, handle registration under translation, rotation, and uniform scaling well, but cannot properly register perspective projective transforms. Feature-based methods, represented by the scale-invariant feature transform (SIFT) algorithm, can register all of these transforms including perspective, but their time complexity is very high and their registration results on textured images are still unsatisfactory. Grayscale-gradient methods solve for the projective transform parameters by minimizing image grayscale differences, typically estimating the perspective projection parameters with LM optimization, but they are easily affected by illumination changes.
By construction, panoramas (and likewise panoramic videos) can be divided into three kinds: cylindrical, cubic, and spherical. Cylindrical panoramas have been studied the most and pioneered many of the related techniques, but they have an inherent shortcoming: they cannot capture the complete spatial information, missing the top and bottom of the scene. Spherical panoramas capture the complete spatial information, but they are harder to produce than cylindrical panoramas and far less studied. On one hand, shooting them is cumbersome, requiring a dozen or even dozens of photographs; on the other hand, with so many photographs the errors accumulate until fully automatic stitching becomes impossible, so existing methods are semi-automatic and often require manual adjustment. How to provide a spherical panoramic video stitching method that needs few shots, is fully automatic, and produces high-quality stitched images, so that shooting panoramic video becomes simple while its quality improves, is therefore a technical problem that those skilled in the art need to solve.
Summary of the invention
It is an object of the present invention to provide a stitching method and system for spherical panoramic video that make stitching panoramic video simple and efficient while improving image quality.
To solve the above technical problem, the present invention provides a stitching method for spherical panoramic video, the method comprising:
obtaining a video pair shot with fisheye lenses, the video pair comprising a first video with N frames of first images and a second video with N frames of second images, the N first images corresponding one-to-one with the N second images;
respectively taking each first image and the second image in one-to-one correspondence with it as an image pair and performing a stitching operation on each pair, forming the stitched video; wherein the stitching operation comprises:
determining the corners of the first image of the image pair;
searching the empirical region corresponding to each corner in the second image of the image pair, and testing the search results with an HSV algorithm to determine the match point of each corner;
calculating the transform matrix of the image pair from the selected corners and their match points;
using the transform matrix to determine the colour value of each pixel in the non-overlapping regions of the image pair, and fusing the pixels of the overlapping region according to weight coefficients to determine the colour value of each pixel there;
filling all the colour values into the corresponding positions of the stitched video.
Optionally, determining the corners of the first image of the image pair comprises:
judging whether the first image of the image pair is the first frame of the first video;
if it is the first frame, performing corner detection on it to determine its corners;
if it is not the first frame, judging whether the image pair is one spaced at the predetermined frame interval;
if it is not at the predetermined interval, taking the corners of the first image of the previous image pair as the corners of the first image of this image pair;
if it is at the predetermined interval, performing corner detection on a selected overlap region of the first image to obtain shadow corners, and judging whether the difference between the number of shadow corners and the number of corners in the overlap region of the latitude-longitude image exceeds a threshold; if so, performing corner detection on the first image of the image pair to determine its corners.
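The frame-by-frame corner reuse policy above can be sketched as a small decision function. This is a minimal illustration under stated assumptions: the function name and signature are hypothetical, and the corner counts are assumed to have been measured elsewhere.

```python
def needs_corner_redetect(frame_idx, interval, shadow_corners, overlap_corners, threshold):
    """Decide whether to rerun full corner detection for this frame pair.

    frame_idx: 0-based index of the frame pair in the video.
    interval: redetection is only considered every `interval` frames.
    shadow_corners / overlap_corners: corner counts in the fisheye
    overlap band and in the latitude-longitude overlap band.
    threshold: allowed count difference before stitching is deemed degraded.
    """
    if frame_idx == 0:
        return True                      # first frame: always detect
    if frame_idx % interval != 0:
        return False                     # between checks: reuse previous corners
    # at a check frame: redetect only if the corner counts diverge
    return abs(shadow_corners - overlap_corners) > threshold
```

Between check frames the previous frame's corners are simply reused, which is what makes the method fast in the common case.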
Optionally, the corner detection comprises:
computing for each pixel the matrix M = Σ_{x,y} w(x, y)·[Ix² IxIy; IxIy Iy²] to obtain candidate corners M;
using the formula R = det(M) − k·(trace(M))² to compute the response value R of each candidate corner M;
sorting the response values R from high to low, and taking the candidate corners M with the top predetermined number of response values R as the corners selected by the corner detection;
wherein I is the first image, Ix and Iy are the gradient images obtained by filtering the image with horizontal and vertical difference operators, w(x, y) is a two-dimensional Gaussian function, det is the matrix determinant, trace is the matrix trace, and k is a constant.
Optionally, searching the empirical region corresponding to each corner in the second image of the image pair comprises:
determining the theoretical position in the second image of each corner selected by the corner detection, and searching for the corresponding match point in the empirical region obtained by expanding a preset range outward around each theoretical position.
Optionally, testing the search results with an HSV algorithm to determine the match point of each corner comprises:
using the formula E_{x,y} = Σ_{u,v} w(u, v)·(k_h·|H′_{s+u,t+v} − H″_{x+u,y+v}| + k_s·|S′_{s+u,t+v} − S″_{x+u,y+v}|) to compute, for each pixel in the empirical region, the weighted colour difference sum E_{x,y} between that pixel and the corner;
choosing the pixel with the smallest weighted colour difference sum E_{x,y} as the match point of the corner;
wherein E_{x,y} is the colour difference sum of the second image at (x, y) with respect to the first image, H′_{s+u,t+v} and S′_{s+u,t+v} are the hue and saturation of the first image at (s+u, t+v), H″_{x+u,y+v} and S″_{x+u,y+v} are the hue and saturation of the second image at (x+u, y+v), k_h and k_s are two coefficients, w(u, v) is a weighting function over the window pixel positions, u and v are the pixel positions within the window, and l is the size of the square window.
Optionally, calculating the transform matrix of the image pair from the selected corners and their match points comprises:
selecting, from the corners chosen by the corner detection, 5 corners according to a distance threshold as the selected corners;
using the transform formula with unknowns a11, a12, …, a43 to calculate the transform matrix of the image pair from the selected corners and their match points;
wherein a11, a12, …, a43 are 15 unknowns in total, and φL, θL, φR, θR are the longitudes and latitudes of the first and second images after their shooting coordinate systems are converted to spherical coordinates.
Optionally, the distance threshold rt is rt = kt·r0, where r0 is the turning radius and kt is a coefficient.
Optionally, using the transform matrix to determine the colour value of each pixel in the non-overlapping regions of the image pair, and fusing the pixels of the overlapping region according to weight coefficients to determine the colour value of each pixel there, comprises:
traversing the longitudes and latitudes in the transform matrix to determine the colour value c1 of each position in the first image and the colour value c2 of each position in the second image;
using the colour-value formula c = k1·c1 + k2·c2 to determine the colour value of each position of the image pair in the stitched image;
wherein c is the colour value of the position in the stitched image, and k1 and k2 are weight coefficients.
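The weighted fusion c = k1·c1 + k2·c2 can be sketched as follows. The weight rule in `blend_weights` (weights proportional to the distance from the opposite image's boundary) is an assumption for illustration; the patent only specifies that k1 and k2 are weight coefficients.

```python
def blend_color(c1, c2, k1, k2):
    # weighted fusion of an overlapping pixel: c = k1*c1 + k2*c2,
    # applied per channel; k1 + k2 is normally 1
    return tuple(k1 * a + k2 * b for a, b in zip(c1, c2))

def blend_weights(d1, d2):
    # one plausible choice (an assumption here): weights proportional
    # to each pixel's distance d1, d2 into its own image's overlap band,
    # so the blend transitions smoothly across the seam
    total = d1 + d2
    return d1 / total, d2 / total
```

Near the seam centre both weights approach 0.5, which is what produces the natural transition claimed for the seam region.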
The present invention also provides a stitching system for spherical panoramic video, the system comprising:
a video acquisition module for obtaining the video pair shot with fisheye lenses, the video pair comprising a first video with N frames of first images and a second video with N frames of second images, the N first images corresponding one-to-one with the N second images;
a stitching module for respectively taking each first image and the second image in one-to-one correspondence with it as an image pair, performing the stitching operation, and forming the stitched video;
the stitching module determining the corners of the first image of the image pair; searching the empirical region corresponding to each corner in the second image of the image pair, and testing the search results with an HSV algorithm to determine the match point of each corner; calculating the transform matrix of the image pair from the selected corners and their match points; using the transform matrix to determine the colour value of each pixel in the non-overlapping regions, and fusing the pixels of the overlapping region according to weight coefficients to determine the colour value of each pixel there; and filling all the colour values into the corresponding positions of the stitched video.
The stitching method for spherical panoramic video provided by the present invention comprises: obtaining a video pair shot with fisheye lenses; respectively taking each first image and the second image in one-to-one correspondence with it as an image pair and performing a stitching operation, forming the stitched video; wherein the stitching operation comprises: determining the corners of the first image of the image pair; searching the empirical region corresponding to each corner in the second image of the image pair, and testing the search results with an HSV algorithm to determine the match point of each corner; calculating the transform matrix of the image pair from the selected corners and their match points; using the transform matrix to determine the colour value of each pixel in the non-overlapping regions, and fusing the pixels of the overlapping region according to weight coefficients to determine the colour value of each pixel there; and filling all the colour values into the corresponding positions of the stitched video.
It can be seen that automatic panoramic-video stitching is achieved by applying suitable coordinate transforms to the video shot with the fisheye lenses. Solving for the transform matrix involves corner detection, empirical-region search, and HSV colour-space matching to detect the match points automatically. The method needs few shots, stitches fast, and requires no manual adjustment or intervention once stitching starts. The generated stitched video is of good quality: the seam at the centre line is accurate and the transition is natural, and over the whole played-back video there is no image shake or misalignment. The present invention also provides a stitching system for spherical panoramic video with the above beneficial effects, which will not be repeated here.
Brief description of the drawings
To explain the embodiments of the present invention or the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only embodiments of the present invention; those of ordinary skill in the art can obtain further drawings from them without creative effort.
Fig. 1 is a flowchart of the spherical panoramic video stitching method provided by an embodiment of the present invention;
Fig. 2 is a schematic diagram of the computation of longitude, latitude, and turning radius in a fisheye image provided by an embodiment of the present invention;
Fig. 3 is a schematic diagram of the shooting directions and coordinate systems of the spherical panoramic video provided by an embodiment of the present invention;
Fig. 4 is a schematic diagram of the conversion from rectangular coordinates to longitude and latitude provided by an embodiment of the present invention;
Fig. 5 is a schematic diagram of the conversion between the longitude-latitude and pixel coordinates of the unrolled panoramic image provided by an embodiment of the present invention;
Fig. 6 is a schematic diagram of the computation of the colour-blending weights provided by an embodiment of the present invention;
Fig. 7 is a schematic diagram of the stitching overlap region provided by an embodiment of the present invention;
Fig. 8 is a screenshot of a specific stitched video provided by an embodiment of the present invention;
Fig. 9 is a flowchart of a specific spherical panoramic video stitching method provided by an embodiment of the present invention;
Fig. 10 is a structural block diagram of the spherical panoramic video stitching system provided by an embodiment of the present invention.
Detailed description of the embodiments
The core of the present invention is to provide a stitching method and system for spherical panoramic video that make stitching panoramic video simple and efficient while improving image quality.
To make the purpose, technical scheme, and advantages of the embodiments of the present invention clearer, the technical scheme is described below clearly and completely with reference to the drawings of the embodiments. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art from the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
Referring to Fig. 1, a flowchart of the spherical panoramic video stitching method provided by an embodiment of the present invention, the method may comprise:
S100: obtain the video pair shot with fisheye lenses; wherein the video pair comprises a first video with N frames of first images and a second video with N frames of second images, and the N first images correspond one-to-one with the N second images.
Specifically, this embodiment stitches each pair of frames of the video pair shot with the fisheye lenses, and thereby completes the stitching of the whole video pair. The fisheye lenses themselves are not restricted: any lenses whose videos can be stitched into a panoramic video will do. "First video" and "second video" merely distinguish the two videos of the pair: either video may be taken as the first, and the other is then the second. Likewise, "first image" and "second image" merely distinguish the frames of the two videos. The value of N is not limited; it may be any integer greater than 0. The two videos have the same number of frames, corresponding one-to-one in shooting frame order. For example, if the first video has 5 first images (A1, A2, A3, A4, A5) and the second video has 5 second images (B1, B2, B3, B4, B5), then A1 and B1 form an image pair in one-to-one correspondence, as do A2 and B2, and so on.
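The one-to-one frame pairing described above can be sketched as a tiny helper. The function name is hypothetical; frames are represented here as opaque values for illustration.

```python
def pair_frames(video_a, video_b):
    """Pair frame i of the first video with frame i of the second.

    Both videos must have the same frame count N; the result is the
    list of image pairs (A_i, B_i), each stitched independently.
    """
    if len(video_a) != len(video_b):
        raise ValueError("videos must have the same number of frames")
    return list(zip(video_a, video_b))
```

With the 5-frame example from the text, the first pair is (A1, B1), the second (A2, B2), and so on.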
S110: respectively take each first image and the second image in one-to-one correspondence with it as an image pair, perform the stitching operation, and form the stitched video.
Specifically, the stitching of the whole video pair is completed once the stitching operation has been performed on every pair of frames; operating on each frame of the video with appropriate optimization ultimately generates the stitched video. This embodiment does not restrict the form in which the image pairs are processed: all frame pairs may be stitched simultaneously, or each frame pair may be stitched in turn in frame order, as the user prefers according to the actual situation. The stitching operation for each frame pair proceeds as follows.
The stitching operation comprises:
Step 1: determine the corners of the first image of the image pair.
Specifically, to obtain the subsequent transform matrix, the image corners must be determined. This embodiment first determines the corners of the first image and then determines the match point of each corner by matching.
This embodiment does not restrict how the corners of each frame pair's first image are determined, as long as the corners of each first image can be found. For example, the corners of the first image of every frame pair may be computed by corner detection; or, if the corners of the previous frame's first image are still accurate, the current frame may reuse them as the corners of its own first image. Preferably, to save computing resources and speed up stitching, determining the corners of the first image of the image pair may comprise:
judging whether the first image of the image pair is the first frame of the first video;
if it is the first frame, performing corner detection on it to determine its corners;
if it is not the first frame, judging whether the image pair is one spaced at the predetermined frame interval;
if it is not at the predetermined interval, taking the corners of the first image of the previous image pair as the corners of the first image of this image pair;
if it is at the predetermined interval, performing corner detection on a selected overlap region of the first image to obtain shadow corners, and judging whether the difference between the number of shadow corners and the number of corners in the overlap region of the latitude-longitude image exceeds a threshold; if so, performing corner detection on the first image of the image pair to determine its corners; if not, taking the corners of the first image of the previous image pair as the corners of the first image of this image pair.
Specifically, with the corner-determination method above, corner detection need be performed only on the first image of the first frame pair (i.e. the first frame); afterwards, the corner correction process is applied only to the first images of pairs spaced at the predetermined frame interval, for example one corner correction every 10 frames. This improves computational efficiency while keeping the corners accurate.
Further, this embodiment does not restrict the size of the selected overlap region: the shadow corners may be computed over the whole overlap region or over only part of it, though the latter is more efficient because the search region is smaller. Nor is the way the overlap region is selected restricted.
Optionally, the corner detection may proceed as follows: compute for each pixel the matrix M = Σ_{x,y} w(x, y)·[Ix² IxIy; IxIy Iy²] to obtain candidate corners M; use R = det(M) − k·(trace(M))² to compute the response value R of each candidate; sort the response values R from high to low, and take the candidates with the top predetermined number of response values as the corners selected by the detection.
Wherein I is the first image, Ix and Iy are the gradient images obtained by filtering the image with horizontal and vertical difference operators, w(x, y) is a two-dimensional Gaussian function, det is the matrix determinant, trace is the matrix trace, and k is a constant (typically 0.04 to 0.06). Where R is large, the pixel is a corner; where R < 0, the pixel lies on an edge; where |R| is small, the pixel lies in a flat region. Many positions in the image may qualify as corners, so the n largest R values (e.g. n = 30) can be kept for further screening; the predetermined number itself is not restricted.
Corner detection is repeated on the overlap region every certain number of frames (e.g. every 10 frames); the overlap region is illustrated in Fig. 7. Because of slight camera vibration and changes in scene distance, the transform matrix can change; when it does, the match points must be reassigned and the transform matrix recomputed. Whether this is needed can be learnt by checking the stitching of the image overlap region. As shown in Fig. 7, for a 2 × 180° shot the overlap region is the shaded part, which in the latitude-longitude image is a narrow central rectangle. If the images are stitched badly, this rectangle must contain many misaligned lines, and corners form where the misalignment occurs; corner detection can find them. Therefore, for a given frame, one can check whether the number of corners in the shaded region of the fisheye image (e.g. the first fisheye image) roughly equals the number in the shaded region of the latitude-longitude image. If they are roughly equal, the stitching is good; if the count in the latitude-longitude shaded region far exceeds that in the fisheye image, the stitching has degraded and 5 groups of match points must be taken again to recompute the transform matrix.
Not every frame needs this check; it suffices to check once at intervals (e.g. every 10 frames). Nor need the checked region cover the whole overlap: in the fisheye image it suffices to take the annulus between the circle n pixels (e.g. n = 5) outside the turning radius and the circle n pixels inside it; in the latitude-longitude image it suffices to take the rectangle between n pixels left and n pixels right of the centre line (the dotted line in the figure). In practice the corners produced by misalignment all lie on the centre line. The corner count detected in the fisheye overlap region is then compared with the corner count in the latitude-longitude overlap region; if the two counts differ greatly, corner detection is re-run.
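The two check regions (an annulus around the turning radius in the fisheye image, a narrow band around the centre line in the latitude-longitude image) can be extracted as follows. A minimal sketch under stated assumptions: function names are hypothetical, and the fisheye image centre is taken as the circle centre.

```python
import numpy as np

def annulus_mask(h, w, cx, cy, r0, n=5):
    """Boolean mask of the ring r0-n .. r0+n around (cx, cy): the band
    of the fisheye image checked for misalignment corners."""
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(xx - cx, yy - cy)
    return (r >= r0 - n) & (r <= r0 + n)

def seam_band(img, n=5):
    """Vertical band of width 2n around the centre line of the
    latitude-longitude (equirectangular) image."""
    mid = img.shape[1] // 2
    return img[:, mid - n: mid + n]
```

Restricting the corner check to these thin bands is what keeps the periodic quality check cheap.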
Step 2: search the empirical region corresponding to each corner in the second image of the image pair, and test the search results with an HSV algorithm to determine the match point of each corner.
Specifically, when shooting a panorama, if the camera's nodal point stays fixed and the shooting angles are known exactly, the position in the second image of any point of the first image can in fact be computed. For example, with a 2 × 180° shooting scheme, a point at polar coordinates (r′p, θ′) in the first image should have its match point at (2r0 − r′p, π − θ′) in the second image, where r0 is the turning radius. However, because the camera node cannot be kept strictly fixed, the shooting angles carry errors, and so on, the match point's position will be offset; and because of vibration and sharp changes of scene distance while shooting panoramic video, the match points also need readjusting. The empirical region corresponding to each corner is therefore searched in the second image of the image pair. This embodiment does not restrict the exact shape of the empirical region (also called the empirical position), nor its size. One possible method: take the theoretical position of the match point as the initial empirical position, and search within a specified range around it, e.g. a rectangle of l × l (l may be 0.1 to 0.3 times r0), to find the real match position. When stitching a new frame of the video, if the match position is found to have shifted, the former matched position is taken as the empirical position and the l × l rectangle around it is searched again. Determining the search range from the empirical position avoids searching the whole image, greatly narrowing the search and saving search time.
That is, optionally: determine the theoretical position in the second image of each corner selected by the corner detection, and search for the corresponding match point in the empirical region obtained by expanding a preset range outward around each theoretical position.
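The theoretical match position for the 2 × 180° case and the clamped l × l search window can be sketched as follows; the function names are hypothetical, and the window is expressed in pixel coordinates for illustration.

```python
import math

def theoretical_match(r_p, theta, r0):
    # for a 2 x 180 degree rig, a point at polar (r_p, theta) in the first
    # fisheye image should appear near (2*r0 - r_p, pi - theta) in the second
    return (2 * r0 - r_p, math.pi - theta)

def search_window(center, l, width, height):
    """Clamp an l x l search window around the empirical position so it
    stays inside a width x height image; returns (x0, y0, x1, y1)."""
    x, y = center
    half = l // 2
    x0, y0 = max(0, x - half), max(0, y - half)
    x1, y1 = min(width, x + half + 1), min(height, y + half + 1)
    return x0, y0, x1, y1
```

Each frame's search then starts from the previous frame's matched position rather than the whole image, which is where the time saving comes from.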
The HSV-based process of searching the second image to determine a match point can proceed as follows. A small window is placed on each of the two images: the center of the window in the first image is fixed at the corner being examined, while the center of the window in the second image is tried at every pixel within the l × l rectangular range, computing for each position the weighted color-difference sum against the first image's window.
For color comparison and matching, the HSV color model is more suitable than the RGB color model. Therefore the first image and the second image must first be converted to an HSV representation. The weighted color-difference sum of the window is computed as
E(x,y) = Σ_{u,v} w(u,v) · ( kh·|H′(s+u, t+v) − H″(x+u, y+v)| + ks·|S′(s+u, t+v) − S″(x+u, y+v)| ),
evaluated for each pixel (x, y) of the empirical region against the corner at (s, t);
the pixel with the minimum weighted color-difference sum E(x,y) is chosen as the corner's match point.
Here E(x,y) is the color-difference sum of the second image at (x, y) with respect to the first image; H′(s+u, t+v) and S′(s+u, t+v) are the hue and saturation at (s+u, t+v) in the first image; H″(x+u, y+v) and S″(x+u, y+v) are the hue and saturation at (x+u, y+v) in the second image; kh and ks are two coefficients that set the relative importance of hue and saturation in the result. One can take kh = 0.8 and ks = 0.2, i.e. hue is weighted four times as heavily as saturation. w(u, v) is the weighting function over window pixel positions and may be a two-dimensional Gaussian; u and v index the pixel position within the window, and l is the size of the square window.
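A sketch of this comparison with NumPy follows; the array layout (hue in channel 0, saturation in channel 1, window indices running over u, v ∈ [0, l)) and the helper names are assumptions for illustration:

```python
import numpy as np

def gaussian_window(l, sigma=None):
    """2-D Gaussian weighting function w(u, v) over an l x l window."""
    sigma = sigma or l / 3.0
    ax = np.arange(l) - (l - 1) / 2.0
    g = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    w = np.outer(g, g)
    return w / w.sum()

def color_diff_sum(hsv1, hsv2, corner, cand, l, kh=0.8, ks=0.2):
    """Weighted color-difference sum E between the l x l window anchored
    at `corner` (s, t) in image 1 and at `cand` (x, y) in image 2."""
    w = gaussian_window(l)
    s, t = corner
    x, y = cand
    dH = np.abs(hsv1[s:s+l, t:t+l, 0] - hsv2[x:x+l, y:y+l, 0])
    dS = np.abs(hsv1[s:s+l, t:t+l, 1] - hsv2[x:x+l, y:y+l, 1])
    return float(np.sum(w * (kh * dH + ks * dS)))

def best_match(hsv1, hsv2, corner, region, l):
    """Pixel in `region` (iterable of (x, y) candidates) minimizing E."""
    return min(region, key=lambda c: color_diff_sum(hsv1, hsv2, corner, c, l))
```

With kh = 0.8 and ks = 0.2 as in the text, hue dominates the comparison, which is what makes the match robust to the brightness differences between the two fisheye shots.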
Step 3: calculate the transition matrix of the image pair from the selected corners and the corresponding match points.
Specifically, this embodiment does not limit the number of selected corners and corresponding match points. However, since the transition matrix contains 15 unknowns, at least 5 match-point pairs are needed for the computation, and using exactly 5 keeps it efficient. Preferably, calculating the transition matrix of the image pair from the selected corners and corresponding match points can include:
selecting 5 corners from the corners picked by corner detection, according to a distance threshold, as the selected corners;
solving the transformation equation system (a system of equations in the 15 unknowns a11, a12 … a43 that relates (φL, θL) to (φR, θR)) with the selected corners and their corresponding match points to obtain the transition matrix of the image pair;
wherein a11, a12 … a43 are 15 unknowns in total, and φL, θL, φR, θR are the longitude and latitude of the first image and of the second image respectively, after converting their shooting coordinate systems to the spherical coordinate system.
Specifically, only 5 point pairs are truly needed. Besides preferably being corners with clear color change, these 5 match-point pairs should also be reasonably far apart from one another; otherwise the matrix solved for below will carry noticeable error. Therefore 5 corners must be selected, by distance, from the n corners above, using a distance threshold. The distance threshold rt is specifically rt = kt·r0, where r0 is the turning radius and kt is a coefficient that may be taken as 0.3–0.6. The n corners are traversed, checking their mutual distances, until 5 corners are found whose pairwise distances all exceed rt.
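A greedy traversal matching this description (the function name and the default kt = 0.45 inside the suggested 0.3–0.6 range are illustrative):

```python
import math

def select_corners(corners, r0, kt=0.45, need=5):
    """Greedily pick `need` corners whose pairwise distances all exceed
    the threshold rt = kt * r0; returns [] if no such set is found."""
    rt = kt * r0
    chosen = []
    for c in corners:
        if all(math.dist(c, p) > rt for p in chosen):
            chosen.append(c)
            if len(chosen) == need:
                return chosen
    return []
```

If the traversal fails to find 5 sufficiently separated corners, a caller could lower kt and retry, since the text only bounds kt to a range.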
Here the turning radius is defined as: in the coordinate system of the first image, the image radius corresponding to a 180° camera field of view. It can be shown that, when the two images are shot by rotating the camera through 180°, the turning radius equals the average, over the corresponding match-point pairs of the two fisheye images image1 and image2 (refer to Fig. 2), of each pair's mean distance to its image origin, i.e.
r0 = (1/(2n)) · Σ_{p=1..n} (r′p + r″p), with r′p = √(x′p² + y′p²) and r″p = √(x″p² + y″p²),
where r′p is the distance from the center of image 1 to pixel P′, and x′p, y′p are the rectangular coordinates of P′ relative to the image center; r″p is the distance from the center of image 2 to pixel P″, and x″p, y″p are the rectangular coordinates of P″ relative to the image center.
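A minimal sketch of this estimate (assuming match coordinates are given relative to each image center):

```python
import math

def turning_radius(pairs):
    """Estimate r0 as the average, over matched point pairs, of the mean
    distance of the two points to their respective image centers.
    `pairs` holds ((x1, y1), (x2, y2)) center-relative coordinates."""
    total = sum(math.hypot(*p1) + math.hypot(*p2) for p1, p2 in pairs)
    return total / (2 * len(pairs))
```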
The transformation equation system is the most important system of equations in solving the panoramic-image conversion. It contains 15 unknowns, a11, a12 … a43, so 5 match-point pairs suffice to compute all the matrix elements. The system can be solved with methods such as Gaussian elimination with column pivoting. In the equations, φL, θL, φR, θR are the longitude and latitude of the first image and of the second image respectively, after converting the shooting coordinate systems to the spherical coordinate system. To understand how they are computed, the coordinate systems and conversion computations in the figures below are needed:
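The equation system itself appears only as an image in the original. Under the assumption, consistent with the 15-unknown count a11 … a43, that the matrix is 4 × 4 homogeneous with a44 fixed to 1 and a projective scale w on the left side, each match-point pair contributes three linear equations and five pairs determine all 15 unknowns. The sketch below is illustrative of that structure, not the patent's verbatim equations:

```python
import numpy as np

def solve_transition(left_pts, right_pts):
    """Solve for the 4x4 matrix A (a44 = 1) mapping unit vectors
    q_R -> p_L up to the projective scale w = A[3] @ [q_R, 1].
    left_pts / right_pts: 5 corresponding 3-D unit vectors each."""
    M = np.zeros((15, 15))
    b = np.zeros(15)
    for i, (p, q) in enumerate(zip(left_pts, right_pts)):
        qh = np.array([q[0], q[1], q[2], 1.0])
        for r in range(3):                   # one equation per left coordinate
            row = 3 * i + r
            M[row, 4 * r:4 * r + 4] = qh     # a_{r1}..a_{r4} terms
            M[row, 12:15] = -p[r] * qh[:3]   # -p_r * a_{41}..a_{43} terms
            b[row] = p[r]                    # the a44 = 1 term moved across
    a = np.linalg.solve(M, b)
    A = np.ones((4, 4))
    A[:3, :] = a[:12].reshape(3, 4)
    A[3, :3] = a[12:]
    return A
```

Five pairs give 15 equations for the 15 unknowns; `np.linalg.solve` performs the pivoted Gaussian elimination the text mentions.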
The formation of a panorama can in fact be imagined like this: a person sits inside a huge glass sphere looking around the scene, with the eyes exactly at the sphere's center; every light ray from the scenery to the eyes must pass through the glass sphere, and can be imagined to form an image at that crossing point. In the end the entire scene is imaged on the glass sphere, and this glass sphere carrying the picture is exactly the panorama of the scene. A fisheye photo is an equi-angular planar image, while a panoramic image is a latitude-longitude image. Turning fisheye images into a panoramic image can therefore be achieved by coordinate-system transformation.
Fig. 3 shows the situation of the two fisheye images. The arrow shootL indicates the direction in which the camera shoots the first video, and the arrow shootR the direction in which it shoots the second video. Five coordinate systems are involved: (1) the world coordinate system XYZ, the coordinate system of the three-dimensional world, from which the longitude and latitude of the panoramic image are also converted; (2) the shooting coordinate system oxlylzl of the left hemisphere, whose shooting direction shootL for the first image is opposite to its zl axis; (3) the image coordinate system o′x′y′ of the left photo (image1), a plane coordinate system used when observing image1; (4) the shooting coordinate system oxryrzr of the right hemisphere, whose shooting direction shootR is opposite to its zr axis; (5) the image coordinate system o″x″y″ of the right photo (image2), a plane coordinate system used when observing image2. In addition to these five coordinate systems, there is also an unfolded-image coordinate system, obtained by unfolding the world-coordinate sphere by longitude and latitude.
Although most of the right hemisphere is not imaged in the first image, it can still be expressed in the coordinate system oxlylzl: rotating coordinate system oxryrzr yields coordinate system oxlylzl. The coordinates of every point on the right hemisphere in oxryrzr can be read from the second image, and the same point can be transformed into oxlylzl by multiplying by some matrix, written as
(xL, yL, zL, 1)ᵀ = a · (xR, yR, zR, 1)ᵀ,
where (xL, yL, zL) denotes coordinates in the system OXLYLZL and (xR, yR, zR) coordinates in OXRYRZR.
On the other hand, consider the conversion between the spherical coordinate system and the rectangular coordinate system, as in Fig. 4. The sphere radius in Fig. 3 and Fig. 4 is arbitrary and does not affect the imaging, so it can be assumed to be 1. The latitude-longitude pair (φ, θ) and the rectangular coordinates (x, y, z) then satisfy
x = cos φ·cos θ, y = cos φ·sin θ, z = sin φ,
and, in the other direction,
φ = arcsin(z), θ = atan2(y, x).
Substituting these conversions into the matrix equation rewrites it as the transformation equation system in (φL, θL) and (φR, θR) with the 15 unknowns a11, a12 … a43.
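These conversions can be written directly as functions (a minimal sketch of the relations above, with angles in radians):

```python
import math

def latlong_to_xyz(phi, theta):
    """Latitude phi, longitude theta -> point on the unit sphere."""
    return (math.cos(phi) * math.cos(theta),
            math.cos(phi) * math.sin(theta),
            math.sin(phi))

def xyz_to_latlong(x, y, z):
    """Unit-sphere point -> (latitude, longitude)."""
    return math.asin(z), math.atan2(y, x)
```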
Step 4: determine, using the transition matrix, the color value of each pixel in the non-overlapping region of the image pair, and determine the color value of each pixel in the overlapping region by fusing the overlapping pixels according to weight coefficients.
Specifically, a panoramic image is an image with a 2:1 aspect ratio, where the width direction represents longitude and the height direction represents latitude. To generate one frame of the panoramic video, longitude must therefore be traversed over 0–360° and latitude over −90° to 90°. So, traversing longitude θ and latitude φ, the color values at the corresponding positions of the corresponding frames of the two fisheye videos are looked up, the fused color is then computed, and the result is written into the corresponding frame of the generated video. The method is as follows:
The final panoramic image to be stitched is in fact the image obtained by unfolding the sphere in the world coordinate system by longitude and latitude. From a pixel's coordinates in the unfolded image the longitude and latitude can be recovered, and from the longitude and latitude the world coordinates in turn. For example, referring to Fig. 5, for an unfolded image of width W and height H (W = 2H), the latitude and longitude of the pixel in row i, column j can be computed as
θ = 360°·j/W, φ = 90° − 180°·i/H.
Then, from the latitude and longitude, the conversion x = cos φ·cos θ, y = cos φ·sin θ, z = sin φ recovers the rectangular coordinates (x, y, z) in the world coordinate system oxyz.
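A one-function sketch of this pixel-to-angle step; pixel-center offsets are ignored and the top row is taken as latitude +90°, both being assumptions about Fig. 5 rather than details stated in the text:

```python
def pixel_to_latlong(i, j, width, height):
    """Row i, column j of a 2:1 equirectangular image -> (latitude,
    longitude) in degrees, with latitude +90 at the top row."""
    theta = 360.0 * j / width
    phi = 90.0 - 180.0 * i / height
    return phi, theta
```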
For that pixel, its corresponding position in the left shooting coordinate system is then looked up. For this, the relation between the shooting coordinate system and the world coordinate system must first be considered: consider the conversion from coordinate system oxlylzl to oxyz in Fig. 3. The matrix a solved above transforms coordinate system oxryrzr into coordinate system oxlylzl, and the world coordinate system oxyz differs from both of them (see Fig. 2). oxlylzl can be transformed into oxyz by the following transformation: rotate 180° about the yl axis, then rotate 90° about the new xl axis. That is:
Rearranging the above formula gives:
The position of (x, y, z) in the left shooting coordinate system can be computed by the above formula. Correspondingly, in the right shooting coordinate system the matrix a is applied inverted (the superscript −1 denotes matrix inversion).
The shooting coordinate system shares its longitude and latitude with the corresponding photo coordinate system; therefore, using the equi-angular fisheye mapping together with x′ = r′·cos θ′ and y′ = r′·sin θ′, the pixel coordinates (x′, y′) in the first photo image can be computed, and likewise (x″, y″) in the second, after which the pixel colors can be looked up. In these formulas r0 is the turning radius; (x′, y′) and (φ′, θ′) are the image coordinates and angles of the first fisheye image, and (x″, y″) and (φ″, θ″) those of the second fisheye image. The parameters and their relations are shown in Fig. 2.
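The rotation matrices themselves appear only as images in the original. A sketch of the stated composition (rotate 180° about yl, then 90° about the new xl), assuming standard right-handed rotation matrices and that rotation about the new axis composes by left-multiplication, could be:

```python
import numpy as np

def rot_x(deg):
    """Right-handed rotation about the x axis."""
    a = np.radians(deg)
    return np.array([[1, 0, 0],
                     [0, np.cos(a), -np.sin(a)],
                     [0, np.sin(a),  np.cos(a)]])

def rot_y(deg):
    """Right-handed rotation about the y axis."""
    a = np.radians(deg)
    return np.array([[ np.cos(a), 0, np.sin(a)],
                     [0, 1, 0],
                     [-np.sin(a), 0, np.cos(a)]])

# oxlylzl -> oxyz: rotate 180 deg about yl, then 90 deg about the new xl.
R_left_to_world = rot_x(90) @ rot_y(180)
# The inverse direction (oxyz -> oxlylzl) is the transpose of a rotation.
R_world_to_left = R_left_to_world.T
```

Because the composite is a pure rotation, the matrix inversion mentioned above reduces to a transpose.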
The first fisheye image is colored at (x′, y′), or the second fisheye image at (x″, y″), or both: for a point of the unfolded image under investigation, sometimes only the first image has an imaging, sometimes only the second image, and sometimes both images do. The color value of each position of the stitched image of the image pair is determined using the color-value determination formula c = k1·c1 + k2·c2, where c is the color value of that position in the stitched image, c1 and c2 are the colors looked up in the first and second images, and k1, k2 are weight coefficients.
Here k1 + k2 = 1, and the values can be chosen as follows. Referring to Fig. 6, P1 and P2 are a matching point pair; P2e is the intersection of P2's radial direction with the fisheye boundary, and P2e1 is the corresponding point of P2e in the first fisheye image (computed by multiplying P2e by the transformation matrix). P1e is, analogously to P2e, the intersection of P1's radial direction with the fisheye boundary. k1 and k2 can then be computed from the distance relations of P1 to P1e and to P2e1:
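The exact weight formula appears only as an image in the original. A plausible linear-feather reading, with k1 falling to 0 as P1 approaches image 1's own boundary P1e and rising to 1 as it approaches P2e1, and with the point names as defined above, could look like:

```python
def blend_weights(d_to_p1e, d_to_p2e1):
    """Linear feathering: k1 = 0 at image 1's own fisheye boundary and
    k1 = 1 at the other image's boundary mapped into image 1; k1+k2=1."""
    total = d_to_p1e + d_to_p2e1
    k1 = d_to_p1e / total
    return k1, 1.0 - k1

def fuse(c1, c2, k1, k2):
    """Per-channel weighted color fusion c = k1*c1 + k2*c2."""
    return tuple(k1 * a + k2 * b for a, b in zip(c1, c2))
```

This makes each image's contribution fade out smoothly toward its own rim, which is what produces the natural seam transition claimed for the method.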
Step 5: fill all the color values into the corresponding positions in the stitched video.
At this point a panoramic video has been generated. Fig. 8 shows screenshot examples of a generated panoramic video played in an ordinary video player; a 360-degree VR effect can be obtained with a dedicated panoramic video player.
The above scheme is illustrated below with a concrete example. A video camera with a fisheye lens was used to shoot two video materials of a scene, each with a field of view greater than 180 degrees. A new video is processed and generated with the following steps; refer to Fig. 9.
1. Start, and begin performing the following operations on every frame of the two videos.
2. If this is the first frame, or this step was reached by jumping from step 11, continue downward from step 3; otherwise continue directly from step 9.
3. Perform corner detection on image 1 (i.e. the first image).
4. Search image 2 (i.e. the second image) based on the empirical range.
5. Determine the match points in image 2 based on HSV.
6. Select 5 pairs from the above n match-point pairs.
7. If this is the first frame, compute the turning radius; otherwise skip this step. The turning radius computed on the first frame is reused in the computations for subsequent frames.
8. Solve the equation system to compute the transition matrix.
9. A panoramic image has a 2:1 aspect ratio, with width representing longitude and height representing latitude. To generate one frame of the panoramic video, traverse longitude over 0–360° and latitude over −90° to 90°: for each longitude θ and latitude φ, look up the color values at the corresponding positions of the corresponding frames of the two fisheye videos, compute the fused color, and write it into the corresponding frame of the generated video.
10. Every certain number of frames (e.g. every 10 frames), perform corner detection on the overlap region again.
11. Compare the number of corners detected in step 10 in the fisheye-image overlap region with the number of corners in the latitude-longitude-image overlap region. If the two numbers differ greatly, return to step 2; otherwise jump to step 1, until all frames have been processed.
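The per-frame control flow of steps 1 to 11 could be sketched as follows; the per-step routines are passed in as placeholder callables, since their concrete signatures are not fixed by the text:

```python
def stitch_videos(frames1, frames2, detect, match, select, radius,
                  solve, render, drifted, recheck_every=10):
    """Frame loop of steps 1-11: corners and the transition matrix are
    recomputed only when the periodic overlap check reports drift; the
    turning radius is computed once on the first frame and reused."""
    r0 = matrix = None
    recompute = True
    out = []
    for idx, (img1, img2) in enumerate(zip(frames1, frames2)):
        if idx == 0 or recompute:                            # step 2
            corners = detect(img1)                           # step 3
            pairs = select(match(img1, img2, corners, r0), r0)  # steps 4-6
            if idx == 0:
                r0 = radius(pairs)                           # step 7
            matrix = solve(pairs)                            # step 8
            recompute = False
        out.append(render(img1, img2, matrix, r0))           # step 9
        if (idx + 1) % recheck_every == 0:                   # step 10
            recompute = drifted(img1, out[-1])               # step 11
    return out
```

The helpers would map onto the routines sketched earlier (corner detection, HSV matching, 5-pair selection, turning radius, the 15-unknown solve, and the latitude-longitude traversal).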
Based on the above technical scheme, the spherical-panorama-video stitching method provided by the embodiment of the present invention is an automatic stitching method suitable for the 2 × 180° shooting scheme. For two videos shot with fisheye lenses, corner detection is first performed on the images of the first video; the corresponding empirical positions in the corresponding frames of the second video are then searched and compared, and the match points in video 2 are determined by HSV-based comparison. Using 5 match-point pairs, the equation system is solved to compute the transition matrix of the video images. With the transition matrix, each corresponding pixel's color value is copied into the generated video, and the pixels of the overlapping region are fused according to the coefficients. Every frame of the video is processed in this way with appropriate optimizations, finally producing the stitched video. For panoramic-video stitching, the method provided by this embodiment needs few shots, stitches quickly, and requires no manual adjustment or intervention once stitching starts. The generated video stitches well: the seam position is accurate, the transition is natural, and viewed over the whole video there is no frame jitter or misalignment.
The spherical-panorama-video stitching system provided by the embodiment of the present invention is introduced below; the stitching system described below and the stitching method of the spherical panoramic video described above may refer to each other correspondingly.
Refer to Figure 10, a structural block diagram of the spherical-panorama-video stitching system provided by the embodiment of the present invention. The system can include:
a video acquisition module 100 for obtaining a video pair shot with fisheye lenses, wherein the video pair comprises a first video with N frames of first images and a second video with N frames of second images, and the N frames of first images and the N frames of second images are in one-to-one correspondence;
a stitching module 200 for performing, with each first image and second image in one-to-one correspondence taken as an image pair, the stitching operation to form the stitched video;
a stitching processing module 300 for determining the corners corresponding to the first image of the image pair; searching the empirical region corresponding to each corner in the second image of the image pair, and testing the search results with the HSV algorithm to determine the match point corresponding to each corner; calculating the transition matrix of the image pair from the selected corners and corresponding match points; determining, using the transition matrix, the color value of each pixel in the non-overlapping region of the image pair, and determining the color value of each pixel in the overlapping region by fusing the overlapping pixels according to weight coefficients; and filling all the color values into the corresponding positions in the stitched video.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the identical or similar parts among the embodiments may refer to one another. For the device disclosed in the embodiments, since it corresponds to the method disclosed in the embodiments, its description is relatively simple; for the relevant parts, refer to the description of the method.
Those skilled in the art will further appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of function. Whether these functions are performed in hardware or in software depends on the particular application and the design constraints of the technical solution. Skilled professionals may implement the described functions in different ways for each particular application, but such implementations should not be considered beyond the scope of the present invention.
The steps of the method or algorithm described in connection with the embodiments disclosed herein may be implemented directly in hardware, in a software module executed by a processor, or in a combination of the two. The software module may reside in random access memory (RAM), internal memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium well known in the technical field.
The stitching method and system of a spherical panoramic video provided by the present invention have been described in detail above. Specific examples are applied herein to expound the principle and embodiments of the present invention; the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. It should be pointed out that, for those of ordinary skill in the art, several improvements and modifications can be made to the present invention without departing from the principles of the invention, and these improvements and modifications also fall within the protection scope of the claims of the present invention.

Claims (9)

1. A stitching method of a spherical panoramic video, characterized in that the method comprises:
obtaining a video pair shot with fisheye lenses, wherein the video pair comprises a first video with N frames of first images and a second video with N frames of second images, and the N frames of first images and the N frames of second images are in one-to-one correspondence;
performing, with each first image and second image in the one-to-one correspondence taken as an image pair, a stitching operation to form a stitched video, wherein the stitching operation comprises:
determining the corners corresponding to the first image of the image pair;
searching an empirical region corresponding to each corner in the second image of the image pair, and testing the search results with an HSV algorithm to determine the match point corresponding to each corner;
calculating a transition matrix of the image pair from the selected corners and corresponding match points;
determining, using the transition matrix, the color value of each pixel in the non-overlapping region of the image pair, and determining the color value of each pixel in the overlapping region by fusing the overlapping pixels according to weight coefficients;
filling all the color values into the corresponding positions in the stitched video.
2. The method according to claim 1, characterized in that determining the corners corresponding to the first image of the image pair comprises:
judging whether the first image of the image pair is the first frame image of the first video;
if it is the first frame image, performing corner detection on the first frame image to determine the corners corresponding to the first frame image;
if it is not the first frame image, judging whether the image pair is an image pair at an interval of a predetermined number of frames;
if it is not an image pair at an interval of the predetermined number of frames, taking the corners of the first image of the previous image pair as the corners corresponding to the first image of the image pair;
if it is an image pair at an interval of the predetermined number of frames, performing corner detection on a selected overlap region of the first image of the image pair to obtain overlap-region corners, and judging whether the difference between the number of the overlap-region corners and the number of corners in the overlap region of the latitude-longitude image is greater than a threshold; if so, performing corner detection on the first image of the image pair to determine the corners corresponding to the first image of the image pair.
3. The method according to claim 2, characterized in that the corner detection process comprises:
performing corner detection using the formula M = Σ_{x,y} w(x,y) · [Ix², Ix·Iy; Ix·Iy, Iy²] to obtain candidate corners M;
calculating the response function value R of each candidate corner M using the formula R = det(M) − k·(trace(M))²;
sorting the response function values R from high to low, and choosing the candidate corners M corresponding to the first predetermined number of response function values R as the corners selected by corner detection;
wherein I is the first video, Ix and Iy are the gradient images obtained by filtering the image with horizontal and vertical difference operators respectively, w(x, y) is taken as a two-dimensional Gaussian function, det denotes the matrix determinant, trace denotes the trace of the matrix, and k is a constant.
4. The method according to claim 3, characterized in that searching the empirical region corresponding to each corner in the second image of the image pair comprises:
determining, for each corner selected by the corner detection, its corresponding theoretical position in the second image of the image pair, and searching for the corresponding match point in an empirical region obtained by expanding a preset range outward around each theoretical position.
5. The method according to claim 4, characterized in that testing the search results with the HSV algorithm to determine the match point corresponding to each corner comprises:
calculating, using the formula E(x,y) = Σ_{u,v} w(u,v) · ( kh·|H′(s+u, t+v) − H″(x+u, y+v)| + ks·|S′(s+u, t+v) − S″(x+u, y+v)| ), the weighted color-difference sum E(x,y) between each pixel of the empirical region and the corner corresponding to the empirical region;
choosing the pixel with the minimum weighted color-difference sum E(x,y) as the match point of the corner;
wherein E(x,y) is the color-difference sum of the second image at (x, y) with respect to the first image, H′(s+u, t+v) and S′(s+u, t+v) are the hue and saturation at (s+u, t+v) in the first image, H″(x+u, y+v) and S″(x+u, y+v) are the hue and saturation at (x+u, y+v) in the second image, kh and ks are two coefficients, w(u, v) is the weighting function over window pixel positions, u and v are the pixel position within the window, and l is the size of the square window.
6. The method according to claim 5, characterized in that calculating the transition matrix of the image pair from the selected corners and corresponding match points comprises:
selecting 5 corners from the corners selected by the corner detection, according to a distance threshold, as the selected corners;
solving the transformation equation system in the 15 unknowns a11, a12 … a43 with the selected corners and the corresponding match points to calculate the transition matrix of the image pair;
wherein a11, a12 … a43 are 15 unknowns in total, and φL, θL, φR, θR are the longitude and latitude of the first image and of the second image respectively, after converting the shooting coordinate systems to the spherical coordinate system.
7. The method according to claim 6, characterized in that the distance threshold rt is specifically: rt = kt·r0; wherein r0 is the turning radius and kt is a coefficient.
8. The method according to claim 7, characterized in that determining, using the transition matrix, the color value of each pixel in the non-overlapping region of the image pair, and determining the color value of each pixel in the overlapping region by fusing the overlapping pixels according to weight coefficients, comprises:
traversing the longitudes and latitudes in the transition matrix, and determining the color value c1 of each position in the first image and the color value c2 of each position in the second image;
determining the color value of each position of the stitched image of the image pair using the color-value determination formula c = k1·c1 + k2·c2;
wherein c is the color value of each position in the stitched image of the image pair, and k1, k2 are weight coefficients.
9. A stitching system of a spherical panoramic video, characterized in that the system comprises:
a video acquisition module for obtaining a video pair shot with fisheye lenses, wherein the video pair comprises a first video with N frames of first images and a second video with N frames of second images, and the N frames of first images and the N frames of second images are in one-to-one correspondence;
a stitching module for performing, with each first image and second image in the one-to-one correspondence taken as an image pair, a stitching operation to form a stitched video;
a stitching processing module for determining the corners corresponding to the first image of the image pair; searching an empirical region corresponding to each corner in the second image of the image pair, and testing the search results with an HSV algorithm to determine the match point corresponding to each corner; calculating a transition matrix of the image pair from the selected corners and corresponding match points; determining, using the transition matrix, the color value of each pixel in the non-overlapping region of the image pair, and determining the color value of each pixel in the overlapping region by fusing the overlapping pixels according to weight coefficients; and filling all the color values into the corresponding positions in the stitched video.
CN201710607242.0A 2017-07-24 2017-07-24 Spherical panoramic video splicing method and system Active CN107333064B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710607242.0A CN107333064B (en) 2017-07-24 2017-07-24 Spherical panoramic video splicing method and system


Publications (2)

Publication Number Publication Date
CN107333064A true CN107333064A (en) 2017-11-07
CN107333064B CN107333064B (en) 2020-11-13

Family

ID=60200645

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710607242.0A Active CN107333064B (en) 2017-07-24 2017-07-24 Spherical panoramic video splicing method and system

Country Status (1)

Country Link
CN (1) CN107333064B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101621634A (en) * 2009-07-24 2010-01-06 北京工业大学 Method for splicing large-scale video with separated dynamic foreground
CN103516995A (en) * 2012-06-19 2014-01-15 中南大学 A real time panorama video splicing method based on ORB characteristics and an apparatus
CN103679636A (en) * 2013-12-23 2014-03-26 江苏物联网研究发展中心 Rapid image splicing method based on point and line features
CN106210535A (en) * 2016-07-29 2016-12-07 北京疯景科技有限公司 The real-time joining method of panoramic video and device


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LUO LIHONG, TAN XIAMEI: "Automatic panoramic image stitching algorithm based on grayscale accumulation evaluation", Journal of Lanzhou University of Technology *
LUO LIHONG, TAN XIAMEI: "Automatic stitching of spherical panoramic images", Computer Applications and Software *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108154476A (en) * 2017-12-22 2018-06-12 成都华栖云科技有限公司 Video stitching correction method
CN109063632A (en) * 2018-07-27 2018-12-21 重庆大学 A parking space feature screening method based on binocular vision
CN109063632B (en) * 2018-07-27 2022-02-01 重庆大学 Parking space feature screening method based on binocular vision
CN110264397A (en) * 2019-07-01 2019-09-20 广东工业大学 A method and apparatus for extracting the effective region of fisheye images
US11533431B2 (en) * 2020-03-16 2022-12-20 Realsee (Beijing) Technology Co., Ltd. Method and device for generating a panoramic image
CN113793382A (en) * 2021-08-04 2021-12-14 北京旷视科技有限公司 Stitching-seam search method for video images, and video image stitching method and device
CN114390222A (en) * 2022-03-24 2022-04-22 北京唱吧科技股份有限公司 Switching method, device and storage medium for 180-degree panoramic video
CN114390222B (en) * 2022-03-24 2022-07-08 北京唱吧科技股份有限公司 Switching method, device and storage medium for 180-degree panoramic video

Also Published As

Publication number Publication date
CN107333064B (en) 2020-11-13

Similar Documents

Publication Publication Date Title
CN107333064A (en) A stitching method and system for spherical panoramic video
Yang et al. Object detection in equirectangular panorama
US10136055B2 (en) Method for stitching together images taken through fisheye lens in order to produce 360-degree spherical panorama
KR102227583B1 (en) Method and apparatus for camera calibration based on deep learning
CN110211043B (en) Registration method based on grid optimization for panoramic image stitching
CN104699842B (en) Picture display method and device
CN100437639C (en) Image processing apparatus and image processing method, storage medium, and computer program
JP4904264B2 (en) System and method for image processing based on 3D spatial dimensions
CN104246795B (en) Method and system for adaptive perspective correction of extra-wide-angle lens images
CN107369129B (en) Panoramic image splicing method and device and portable terminal
CN108122191A (en) Method and device for stitching fisheye images into panoramic images and panoramic video
CN111126304A (en) Augmented reality navigation method based on indoor natural scene image deep learning
CN107960121A (en) Stitching frames into a panoramic frame
Huang et al. Panoramic imaging: sensor-line cameras and laser range-finders
CN108200360A (en) A real-time video stitching method for multi-fisheye-lens panoramic cameras
CN106357976A (en) Omni-directional panoramic image generating method and device
JP2008086017A (en) Apparatus and method for generating panoramic image
CN109559349A (en) A method and apparatus for calibration
CN106408556A (en) Minimal object measurement system calibration method based on general imaging model
CN101394573A (en) Panorama generation method and system based on feature matching
CN106705849A (en) Calibration method for a line-structured-light sensor
CN107527336A (en) Lens relative position calibration method and device
CN104412298B (en) Method and apparatus for transforming an image
CN106504196A (en) A panoramic video stitching method and device based on a spatial sphere
CN107318010A (en) Method and apparatus for generating stereoscopic panoramic image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant