CN104506828B - A real-time stitching method for fixed-point, fixed-orientation video without effective overlap regions and with variable structure - Google Patents
- Publication number
- CN104506828B (application CN201510016447.2A)
- Authority
- CN
- China
- Prior art keywords
- video
- sub-image
- stitching
- hole
- crack
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Image Processing (AREA)
Abstract
The invention discloses a real-time stitching method for fixed-point, fixed-orientation video without effective overlap regions and with variable structure. The method includes: separately capturing video stream information at different positions; dividing the compressed video stream information, in time order, into a plurality of first static video frame groups; converting the still image corresponding to each video stream in a first static video frame group into a top view of the photographed object, forming a second static video frame group; and locating each video frame in the second static video frame group, coarsely stitching the panorama, compensating and fusing, and finely stitching the panorama to obtain a real-time panoramic video stream. The present invention stitches images of the same scene that are captured from different viewing angles and directions and that have no effective overlap and a variable structure, and then generates a video stream in real time. The stitching method of the present invention not only improves image stitching accuracy but also guarantees stitching efficiency, meeting the real-time requirement of video stream stitching.
Description
Technical field
The present invention relates to the field of real-time video image stitching, and more particularly to a real-time stitching method for fixed-point, fixed-orientation video without effective overlap regions and with variable structure.
Background technology
With the rapid development of video stitching technology in recent years, it has been widely applied in wide-angle high-definition imaging, virtual reality, medical imaging, remote sensing, and military fields. Video stitching technology mainly comprises image stitching technology and real-time video synthesis technology. First, image stitching is the core of video stitching and mainly includes two key links, image registration and image fusion: image registration is the basis of image stitching, and its goal is to match multiple images taken from different camera positions and angles; image fusion synthesizes a high-quality image by eliminating intensity or color discontinuities between adjacent images caused by geometric correction, dynamic scenes, or illumination changes. Second, real-time video synthesis improves the execution efficiency of image stitching algorithms by using parallel computing frameworks such as FPGA, Intel's IPP, and NVIDIA's CUDA.
From the perspective of image acquisition, stitching scenarios can be roughly divided into four classes: 1) a single camera with a fixed pivot rotates its lens to capture the same scene; 2) a single camera fixed on a slide rail translates in parallel to capture the same scene; 3) multiple cameras capture the same scene at different viewing angles and from different directions, and there are effective overlap regions between the images that can be used for stitching; 4) multiple cameras capture the same scene at different viewing angles and from different directions, there are no effective overlap regions usable for stitching, and there are even small gaps and holes between images. According to the practical problem studied by the present invention, the fourth class is the primary concern, i.e., video acquisition and stitching of the same scene using multiple fixed-point, fixed-orientation cameras.
From the point of view of image registration, the core technology of image stitching, there are currently two kinds of registration techniques: phase correlation methods and geometric feature methods. Phase correlation is scene-independent and can accurately align images related by pure two-dimensional translation. However, it is only suitable for registration problems involving pure translation or rotation; under affine and perspective transform models it cannot register images, and in practice it is impossible to keep camera positions and imaging planes absolutely parallel, so its practical value is low. Geometric feature methods stitch images based on low-level geometric features in the image, such as edge models, corner models, and vertex models. However, registration based on geometric features presupposes that the two images have a certain amount of overlap region and that the matched images have temporally coherent features; it is helpless for stitching the non-overlapping images of the same scene captured by multiple cameras at different viewing angles and from different directions.
From the perspective of image fusion, the purpose is to eliminate the stitching seams between images in terms of color, brightness, and structure. There are many image fusion methods. For image color and brightness, simple ones include intensity-weighted averaging and weighted-average fusion, while more sophisticated ones include weighted Voronoi methods and Gaussian spline methods. Their core idea is to first segment the image, use the overlap region as the matching standard, and then fuse the stitching seam by means such as image correction, color transformation, and pixel interpolation. For excessive differences in image structure, a simple feathering method is usually adopted, with weighted averaging by Euclidean distance, and filtering techniques are then used to eliminate the image blur caused by feathering and the ghosting in the stitched video. Clearly, for the seams of non-overlapping images of the same scene captured by multiple cameras at different viewing angles and from different directions, existing fusion methods cannot cope; and for the fusion of real-time video streams, the prior art also cannot meet the real-time requirement.
The invention patent with publication No. CN103501415A is a real-time video stitching method based on overlap-region structural deformation. Its working principle is to first compute the respective stitching seams of two images, then extract and match one-dimensional feature points on the two seams, move the matched feature points to the overlap position and record the displacements, diffuse the structural deformation within a set deformation influence range, and finally compute the gradient map after the structural deformation and complete image fusion in the gradient domain to obtain the final stitched image. To meet the real-time requirement of video stitching, the image stitching algorithm in that patent is implemented on an FPGA, achieving fast and efficient video stitching. Obviously, the method of that patent cannot stitch the non-overlapping images of the same scene captured by multiple cameras at different viewing angles and from different directions, and cannot satisfy the practical problem studied by the present invention.
The invention patent with publication No. CN101593350A is a depth-adaptive video stitching method, device, and system. Its video stitching system includes a camera array, a calibration device, a video splicer, and a pixel interpolator and blender. First, the camera array generates multiple source videos; then, the calibration device performs epipolar calibration and camera position calibration, determines the seam region of each pair of spatially adjacent images in the multiple source videos, and generates a pixel index table; next, the video splicer updates the pixel index table with a compensation term formed from the average offset; finally, the pixel interpolator and blender generate a panoramic video from the multiple source videos using the updated pixel index table. Obviously, that patent seeks a balance between stitching quality and stitching efficiency by simplifying the image stitching algorithm so as to stitch video streams; but given the diversity and complexity of captured video, a single serial image stitching algorithm simply cannot meet the real-time requirement of stitching video streams with large data volumes and complex computation, so its practicality is poor.
Summary of the invention
In view of the shortcomings in this field, the object of the present invention is to provide a method for real-time fixed-point, fixed-orientation stitching of images of the same scene, without effective overlap and with variable structure, captured by multiple cameras at different viewing angles and from different directions.
The technical scheme of the present invention for achieving the above object is:
The invention discloses a real-time stitching method for fixed-point, fixed-orientation video without effective overlap regions and with variable structure, the method comprising the following steps:

Step 1: installing a multi-camera video acquisition array, capturing video stream information at different positions respectively, and performing analog-to-digital conversion, synchronization, and compression on the video stream information;

Step 2: converting the compressed video stream information into the same video format and dividing it, in time order, into a plurality of first static video frame groups; wherein each first static video frame group includes the n video streams captured by the multi-camera video acquisition array at the same moment;

Step 3: converting the still image corresponding to each video stream in the first static video frame group, according to a side-view-to-top-view geometric model, into a top view of the photographed object, forming a second static video frame group;

Step 4: locating each video frame in the second static video frame group according to a location model and performing coarse panoramic stitching to obtain a coarse panoramic stitched image;

Step 5: determining, according to the location model, the positions in the coarse panoramic stitched image of the stitching seams with overlap regions, of the seamless, hole-free non-overlap stitching seams, and of the stitching seams in regions with holes or cracks;

Step 6: for the stitching seams with overlap regions or the seamless, hole-free non-overlap stitching seams, fusing the seams using a brightness and color interpolation algorithm;

Step 7: for the stitching seams in regions with holes or cracks, stitching as follows:
determining, according to the location model, the adjacency relations between the sub-images corresponding to the video frames in the second static video frame group and between the sub-images and the holes or cracks, and determining the hole or crack sub-images according to these adjacency relations;
extracting the line features of the sub-images adjacent to the hole or crack sub-image;
matching the extracted line features to obtain line feature pairs;
compensating the crack or hole by extrapolating boundary points from the adjacent-region line features;
fusing the stitching seams of the coarse panoramic stitched image after hole or crack compensation, using the brightness and color interpolation algorithm, to obtain a finely stitched panoramic video frame;

Step 8: processing the first static video frame groups of different moments according to Steps 3 to 7 to obtain panoramic video frames of different moments, and synthesizing the panoramic video frames in time order to obtain a real-time panoramic video stream.
Wherein, Step 2 is carried out after a synchronized video splitting command is received, and after Step 2 finishes, the first static video frame groups are stored in time order.
Wherein, the side-view-to-top-view geometric model in Step 3 is:

s·(x, y, 1)^T = K·[Rx Ry Rz t]·(X, Y, Z, 1)^T,  K = [fx 0 cx; 0 fy cy; 0 0 1]    (1)

where s is a scale factor, fx, fy are the focal lengths of the camera, cx, cy are image correction parameters, Rx, Ry, Rz are the three column vectors of the rotation matrix, t is the translation vector, (x, y) are the coordinates of an element in the side-view still image, and (X, Y, Z) are the top-view coordinates of the corresponding element.
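The pinhole mapping described by the parameters above can be sketched in a few lines of Python. This is a minimal illustration under standard pinhole assumptions; the function name `project` and the explicit matrix layout are ours, not part of the patent:

```python
# Sketch of the projection s*(x, y, 1)^T = K [R | t] (X, Y, Z, 1)^T,
# using the patent's parameter names (fx, fy, cx, cy, Rx/Ry/Rz, t, s).
# R is given as a 3x3 list of rows; its columns are Rx, Ry, Rz.

def project(point_xyz, fx, fy, cx, cy, R, t):
    """Map a 3-D point (X, Y, Z) to image coordinates (x, y); returns (x, y, s)."""
    X, Y, Z = point_xyz
    # Camera-frame coordinates: [R | t] * (X, Y, Z, 1)^T
    cam = [
        R[0][0] * X + R[0][1] * Y + R[0][2] * Z + t[0],
        R[1][0] * X + R[1][1] * Y + R[1][2] * Z + t[1],
        R[2][0] * X + R[2][1] * Y + R[2][2] * Z + t[2],
    ]
    s = cam[2]                  # scale factor: depth in the camera frame
    x = fx * cam[0] / s + cx    # intrinsic matrix K applied row by row
    y = fy * cam[1] / s + cy
    return x, y, s
```

With identity rotation and zero translation, a point on the optical axis projects to the principal point (cx, cy), which is a quick consistency check on the model.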
Wherein, the location model in Steps 4 and 5 is:

(x − x0)/(x1 − x0) = (y − y0)/(y1 − y0) = (z − z0)/(z1 − z0)    (camera center line)
(x − x0)/cos α = (y − y0)/cos β = (z − z0)/cos γ    (cone generatrix of the field)

where (x0, y0, z0) is the camera lens center point, (x1, y1, z1) is the intersection of the line to the photographed object with the image pickup plane xoy, (α, β, γ) are the direction angles of the cone generatrix of the camera's imaging field, (x2, y2, z2) is the intersection of a latitude circle of the imaging field with the cone generatrix, and (x, y, z) is the intersection of the camera's imaging field with the image pickup plane xoy.
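Operationally, the location model amounts to intersecting spatial lines with the image pickup plane z = 0. A minimal sketch under that assumption (the helper name `ray_ground_intersection` is hypothetical):

```python
def ray_ground_intersection(p0, dir_cos):
    """Intersect the line through lens center p0 = (x0, y0, z0), with
    direction cosines (cos a, cos b, cos g), with the plane z = 0."""
    x0, y0, z0 = p0
    ca, cb, cg = dir_cos
    if cg == 0:
        raise ValueError("ray is parallel to the xoy plane")
    lam = -z0 / cg  # line parameter at which z reaches 0
    return (x0 + lam * ca, y0 + lam * cb, 0.0)
```

Applying this to the camera center line gives the field center on the ground, and applying it to each cone generatrix traces the boundary curve of the camera's footprint.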
Wherein, the coarse panoramic stitching in Step 4 is specifically:
first, generating a blank image as large as the panoramic field of the photographed object;
secondly, locating each sub-image corresponding to a video frame in the second static video frame group using the location model, and determining the position, size, and orientation of each sub-image in the blank image;
then, filling the sub-images one by one into their corresponding places in the blank image according to the predetermined label order of the cameras in the multi-camera array and the location information of the sub-images they capture, realizing the coarse stitching of the panorama.
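The fill-into-blank-canvas step can be illustrated with a small sketch. Scalar pixel values and the overwrite-in-label-order policy are simplifying assumptions for illustration:

```python
def coarse_stitch(canvas_h, canvas_w, placements):
    """Fill located sub-images into a blank canvas.

    placements: list of (top, left, subimage), where subimage is a 2-D
    list of pixel values; entries are applied in camera label order,
    later entries overwriting earlier ones."""
    canvas = [[0] * canvas_w for _ in range(canvas_h)]
    for top, left, sub in placements:
        for r, row in enumerate(sub):
            for c, v in enumerate(row):
                if 0 <= top + r < canvas_h and 0 <= left + c < canvas_w:
                    canvas[top + r][left + c] = v
    return canvas
```

Canvas cells that no sub-image covers remain at the blank value; those are exactly the hole and crack regions that Steps 5 to 7 must demarcate and compensate.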
Wherein, in Step 7, extracting the line features of the sub-images adjacent to the hole or crack sub-image is specifically:
assume the pixel C(x, y) is the center, and let L(x, y) and R(x, y) be the average gray values of the left and right regions adjacent to point C(x, y) along some direction; the ratio-of-averages estimate is then given by formula (3);

RoA: C(x, y) = max{R(x, y)/L(x, y), L(x, y)/R(x, y)}    (3)

Then RoA: C(x, y) is compared with a predetermined threshold T0; when RoA: C(x, y) exceeds the threshold T0, point C is considered a boundary point.
The line feature fragments extracted by the above algorithm from the sub-images adjacent to the hole or crack sub-image are then recombined into line features.
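A one-dimensional sketch of the RoA boundary test of formula (3). The window width `w` and scanning along a single row are our simplifications; gray values are assumed positive so the ratios are defined:

```python
def roa(values, i, w):
    """Ratio of averages at index i of a 1-D gray-level profile,
    using w samples on each side (formula (3))."""
    left = values[i - w:i]
    right = values[i + 1:i + 1 + w]
    L = sum(left) / len(left)
    R = sum(right) / len(right)
    return max(R / L, L / R)

def boundary_points(values, w, t0):
    """Indices whose RoA estimate exceeds the threshold T0."""
    return [i for i in range(w, len(values) - w) if roa(values, i, w) > t0]
```

Because the test averages a neighborhood on each side rather than comparing single pixels, an isolated speckle spike barely moves either mean, which is the noise-robustness property the text claims for RoA.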
Wherein, in Step 7, matching the extracted line features is specifically:
each line feature is described by a corresponding line segment function. Assuming there are n sub-images surrounding the hole or crack, first the slopes of the line segment functions extracted from each sub-image form a set I, expressed by formula (4),

I = {k_1^(1), ..., k_m^(1); k_1^(2), ..., k_n^(2); ...; k_1^(n), ..., k_l^(n)}    (4)

where m, n, l denote the numbers of line features extracted from the corresponding sub-images;
the line feature matching between sub-images is then realized by formula (5),

|k_i^(a) − k_j^(b)| < T1, a ≠ b    (5)

where k_i^(a) and k_j^(b) are arbitrary elements of the set I from different sub-images, and T1 is the matching threshold; line features satisfying formula (5) are matched into pairs.
Wherein, in Step 7, compensating the crack or hole by extrapolating boundary points from the adjacent-region line features is specifically:
First, from the first line segment functions of a matched line feature pair, a second line segment function is constructed that fits all line features of the pair; this second line segment function is regarded as a reasonable fit of the line feature across the hole or crack.
Then, the second line segment function is extrapolated into the hole or crack, thereby determining, for each matched line feature pair, the position of the line feature inside the hole or crack.
Finally, for the extrapolated line features in the hole or crack, the colors and brightness of the corresponding matched line feature pairs in the sub-images adjacent to the hole or crack sub-image are used to compute color and brightness by interpolation, and the stitching seams are fused.
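The construction of the "second line segment function" and the extrapolation into the hole can be sketched as a least-squares line fit over the matched pair. Representing each line feature as a set of boundary points is our assumption:

```python
def fit_line(points):
    """Least-squares fit y = a*x + b through 2-D points: the 'second
    line segment function' constructed from a matched feature pair."""
    n = len(points)
    sx = sum(p[0] for p in points)
    sy = sum(p[1] for p in points)
    sxx = sum(p[0] * p[0] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def extrapolate_into_hole(left_pts, right_pts, hole_xs):
    """Predict where the fitted line feature crosses the hole columns."""
    a, b = fit_line(left_pts + right_pts)
    return [(x, a * x + b) for x in hole_xs]
```

Fitting both sides of the pair jointly, rather than extending either side alone, keeps the extrapolated segment consistent with the feature on both banks of the hole.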
Wherein, in Steps 6 and 7, fusing with the color and brightness interpolation algorithm is specifically:
assuming there are m sub-images adjacent to the hole or crack sub-image, the gray, color, and brightness values of a point P in the crack or hole can be computed from the gray, color, and brightness values of the points nearest to P in the m sub-images by formula (6),

g(p) = Σ_{i=1}^{m} ξ(d_i)·g_i(x_i, y_i) / Σ_{i=1}^{m} ξ(d_i)    (6)

where g(p) denotes any one of the gray, color, and brightness values of point P, g_i(x_i, y_i) denotes the value corresponding to g(p) at the point of the i-th image nearest to P, d_i is the distance from P to that point, and the function ξ(x) is a line weight function.
The fusion operation above is carried out pixel by pixel in the crack or hole to obtain the complete panoramic video frame.
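A minimal sketch of the distance-weighted blending that formula (6) describes. The particular weight function ξ(d) = 1/(1 + d) is a hypothetical choice, since the text only requires a line weight function:

```python
import math

def fuse_pixel(p, samples, xi=lambda d: 1.0 / (1.0 + d)):
    """Blend the value at hole point p from the nearest samples of the
    m adjacent sub-images, per formula (6).

    samples: list of ((x, y), value) pairs, one per adjacent sub-image.
    xi: line weight function of the distance (assumed form)."""
    num = den = 0.0
    for (x, y), g in samples:
        d = math.hypot(x - p[0], y - p[1])
        w = xi(d)
        num += w * g
        den += w
    return num / den
```

Each gray, color, or brightness channel is fused independently with the same weights, so nearer sub-image samples dominate and the fill fades smoothly toward each bank of the hole.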
Wherein, in Step 8, the panoramic video stream is compressed, stored, and displayed.
Corresponding to the method of the present invention, the device used includes a multi-camera video acquisition array U3, a multi-channel video synchronous acquisition unit U4, a multi-channel video synchronous splitting unit U5, a multi-channel video side-view-to-top-view unit U6, a GPGPU video frame locating, stitching, compensation and fusion unit U7, and a real-time panoramic video stream generation unit U9, as shown in Figure 2.
The multi-camera video acquisition array captures the video stream information of the photographed object at different positions and passes it to the multi-channel video synchronous acquisition unit. The multi-channel video synchronous acquisition unit performs analog-to-digital conversion, synchronization, and compression on the received video stream information and passes it to the multi-channel video synchronous splitting unit. Upon receiving the synchronized video splitting command of the multi-channel video synchronous acquisition unit, the multi-channel video synchronous splitting unit converts the received information into the same video format, divides it in time order into a plurality of first static video frame groups, and passes the first static video frame groups to the multi-channel video side-view-to-top-view unit; wherein each first static video frame group includes the n video streams captured by the multi-camera video acquisition array at the same moment. The multi-channel video side-view-to-top-view unit converts the still image corresponding to each video stream in the received first static video frame group into a top view of the photographed object, forming a second static video frame group, and passes the second static video frame group to the GPGPU video frame locating, stitching, compensation and fusion unit. The GPGPU video frame locating, stitching, compensation and fusion unit locates, stitches, compensates, and fuses each video frame in the second static video frame group, obtains the panoramic video frame of the photographed object, and passes the panoramic video frame to the real-time panoramic video stream generation unit. The real-time panoramic video stream generation unit synthesizes the received panoramic video frames of different moments, in time order, into a real-time panoramic video stream. The above GPGPU video frame locating, stitching, compensation and fusion unit is based on the CUDA parallel computing framework.
Multi-camera video acquisition array
The multi-camera video acquisition array is a camera array composed of n cameras installed according to fixed installation parameters, as shown in Fig. 2. The lens of each camera in the array uses a different viewing angle and a different shooting direction, realizing basic coverage of the captured scene U1. However, there is no effective stitching overlap region between the images U2 of the same scene U1 captured by the cameras, and there are even small gaps and holes.
Multi-channel video synchronous acquisition unit
This unit, shown in Fig. 2, is composed of several video capture cards with multi-channel synchronous acquisition capability. Its workflow is as follows: the multi-channel video synchronous acquisition unit U4 converts the n analog signals from the video sources of the multi-camera video acquisition array U3 into digital signals through the A/D conversion modules on the capture cards and stores them in the on-board memory; the video compression chips and video synchronization chips on the capture cards then synchronize and compress each video channel, so that the bulky video signals are synchronized and compressed into n smaller video streams, which are passed to the multi-channel video synchronous splitting unit U5, completing the whole workflow.
Multi-channel video synchronous splitting unit
This unit is an FPGA programmable hardware platform preloaded with a parallel image processing algorithm hardware logic circuit. The function of the image processing algorithm is to divide the n video streams passed from the multi-channel video synchronous acquisition unit U4 in Fig. 2 into several static sub-image groups in time order (in the present invention, "video frame" and "sub-image" are used without distinction), each group consisting of the n still images of the same moment from the n video streams; the image groups of successive moments are then transmitted in order to the multi-channel video frame side-view-to-top-view unit U6, completing the whole workflow of this unit.
Multi-channel video frame side-view-to-top-view unit
To save device cost and improve device integration, as shown in Fig. 2, this unit is realized on the same FPGA programmable hardware platform as the multi-channel video synchronous splitting unit U5. The image transformation hardware algorithm logic circuit of this unit is also preloaded in the platform, as the algorithm subsequent to the multi-channel video synchronous splitting unit U5, in order to convert the n video frames of the same moment from side views into the top views that the cameras would capture looking straight down at the photographed object. In the image transformation method, the present invention constructs an image geometric transformation model for side-view-to-top-view conversion based on the multi-camera installation parameters; its specific steps are as follows:
(1) According to the camera imaging principle, the coordinate transformation equation from the camera's physical side-view plane coordinate system (x, y, z) to the camera's virtual top-view coordinate system (X, Y, Z) can be established as shown below,

s·(x, y, 1)^T = K·[Rx Ry Rz t]·(X, Y, Z, 1)^T,  K = [fx 0 cx; 0 fy cy; 0 0 1]

where the meanings of the parameters are as follows:
s: scale factor
fx, fy: focal lengths of the camera
cx, cy: image correction parameters
Rx, Ry, Rz: three column vectors of the rotation matrix
t: translation vector
(2) According to the multi-camera installation parameters, standard images are shot with the cameras to calibrate the camera imaging parameters, obtaining the parameters required for the side-view-to-top-view conversion. The process is shown in Fig. 3: taking the black-and-white checkerboard U61 as an example, the required calibration points U63 are computed by the imaging calibration procedure; the position changes of the calibration points are used to set up the equation system needed to compute the parameters, thereby solving for the parameter values in the side-view-to-top-view transformation model and completing the calibration of the camera imaging parameters.
(3) Using the calibrated camera parameters and the side-view-to-top-view image geometric transformation model established on that basis, the black-and-white checkerboard side view U61 shown in Fig. 3 can be transformed into the black-and-white checkerboard top view U62.
In summary, the workflow of the multi-channel video frame side-view-to-top-view unit U6 is: first, it receives the sub-image group of a certain moment (a first static video frame group) transmitted by the multi-channel video synchronous splitting unit U5; then, in camera label order, it converts the side views of the original lens group into top views one by one using the image geometric transformation model based on the multi-camera installation parameters; it passes the converted sub-image group, as a new sub-image group, to the GPGPU video frame locating, stitching, compensation and fusion unit U7, and prepares to receive the sub-image group of the next moment, and so on, completing the whole workflow.
GPGPU video frame locating, stitching, compensation and fusion unit
This unit, the essential element of the present invention, is an image processing software system on a high-performance NVIDIA GPU hardware platform. It is developed on the CUDA parallel computing framework and consists of four function sub-modules: the sub-image locating function module U71, the sub-image group fixed-point fixed-orientation coarse panoramic stitching function module U72, the sub-image stitching seam compensation function module U73, and the sub-image group stitching seam fusion function module U74. Each of the above function sub-modules is an image processing algorithm proposed by the present invention; their principles are described below:
(1) Sub-image locating function module
Since the mounting of each camera in the multi-camera video acquisition array is fixed, as shown in Fig. 4, combined with the principle by which the imaging field is formed, the location model of the stitched sub-images can be established, thereby determining the specific region captured by each camera. Its steps are as follows:
1) First, from the fixed-point installation parameters of the single camera U711 in Fig. 4, the camera lens center point p0 U712, the camera center line l0 U714, and the intersection point p1 U713 of that line with the plane xoy of the photographed object can be determined. In the (x, y, z) coordinate system established in Fig. 4 the coordinates are p0(x0, y0, z0) and p1(x1, y1, z1); the spatial line equation of the camera center line l0 is then as shown below,

(x − x0)/(x1 − x0) = (y − y0)/(y1 − y0) = (z − z0)/(z1 − z0)
2) Secondly, from the fixed-orientation installation parameters of the single camera U711 in Fig. 4, the spatial direction angles of the camera lens center line U714 can be determined; combined with the camera's field-of-view angle, the direction angles (α, β, γ) of the cone generatrix l1 U712 of the imaging field formed by camera U711 can be determined. Since the generatrix l1 U712 passes through the lens center point p0(x0, y0, z0), the equation of the spatial line l1 U712 is as shown below,

(x − x0)/cos α = (y − y0)/cos β = (z − z0)/cos γ
3) Finally, by the latitude-circle method, the field boundary curve Γ2 U717 formed by camera U711 in the plane xoy can be obtained by eliminating the parameters x2, y2, z2 from the equations above, where the point M1 is the intersection of an arbitrary latitude circle Γ1 U715 with the generatrix l1, with coordinates (x2, y2, z2).
In summary, the sub-image locating function module only needs the fixed-point, fixed-orientation installation parameters of each camera in the multi-camera video acquisition array U3 in Fig. 2 to establish the field curve equation of each camera, thereby predicting the region and size of each stitched sub-image in the panorama and realizing the locating of the stitched sub-images in the panorama.
(2) Sub-image group fixed-point fixed-orientation coarse panoramic stitching function module
The function of this sub-module is to coarsely stitch the top-view sub-image group obtained from the multi-channel video frame side-view-to-top-view unit U6 into a panorama with the aid of the sub-image locating function module. Its specific workflow is: first, a blank image as large as the panoramic field is generated according to a preset; secondly, each top-view sub-image in the received group is located in turn by the sub-image locating function module U71 in Fig. 5, determining the position, size, and orientation of each sub-image in the blank image; next, the sub-images are filled one by one into their corresponding places in the blank image according to the predetermined label order of the cameras in the multi-camera array and the location information of the sub-images they capture, realizing the coarse stitching of the panorama; finally, stitching seam demarcation is performed on the coarsely stitched panorama, marking the specific positions, shapes, and sizes of the overlap regions, gap or hole regions, and seamless stitching regions on the stitching seams, completing the whole workflow of the sub-image group fixed-point fixed-orientation coarse panoramic stitching function module U72.
(3) Sub-image stitching seam compensation function module
As shown in Fig. 6, this sub-module consists of three sub-modules: the sub-image line feature extraction sub-module U731, the adjacent sub-image line feature matching sub-module U732, and the adjacent-region line feature boundary-point extrapolation sub-module U733 for compensating gaps and holes. Their working principles are described below.
1) Sub-image line-feature extraction submodule
In this submodule, to extract the two-dimensional line features of each sub-image, the step-change boundaries in the image must first be obtained. The present invention detects sub-image boundaries with the RoA (ratio of averages) algorithm, which decides whether a target pixel is an edge point by computing the ratio of the mean intensities of adjacent regions. Because the method uses the mean intensity of a region rather than single pixels, fluctuations of individual pixels caused by speckle noise are greatly suppressed, so the line features of the sub-image obtained by this method are highly reliable. To reduce computation, line features are extracted only within a certain region adjacent to the stitching seam of each sub-image. The algorithm works by comparing adjacent regions along a given direction. Its steps are as follows. First, take the pixel C(x, y) as the center, and let L(x, y) and R(x, y) be the mean gray values of the left and right regions adjacent to C(x, y) along some direction; the ratio of averages is then estimated as

RoA_C(x, y) = max{ R(x, y)/L(x, y), L(x, y)/R(x, y) }

Then RoA_C(x, y) is compared with a predetermined threshold T0; when RoA_C(x, y) is greater than the threshold, the point C is taken to be a boundary point. Finally, the line-feature fragments extracted from the sub-image by the above algorithm are recombined into meaningful line features, completing the function of this submodule.
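The RoA test above can be sketched as follows. This is a minimal illustration rather than the module's actual implementation; the function name `roa_edges`, the horizontal scan direction, and the window size are assumptions made for the example.

```python
import numpy as np

def roa_edges(img, win=3, t0=1.5):
    """Mark edge points with the ratio-of-averages (RoA) test along one
    direction: a pixel is a boundary point when the larger of the two
    ratios between the mean gray values of its left and right neighbor
    windows exceeds the threshold t0."""
    img = img.astype(np.float64) + 1e-6        # avoid division by zero
    h, w = img.shape
    edges = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(win, w - win):
            left = img[y, x - win:x].mean()            # L(x, y)
            right = img[y, x + 1:x + win + 1].mean()   # R(x, y)
            edges[y, x] = max(right / left, left / right) > t0
    return edges
```

For a step-intensity image the ratio near the jump is far above 1, so those pixels are marked as boundary points, while a uniform image yields no edges; this mirrors why region means suppress single-pixel speckle fluctuations.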
2) Adjacent sub-image line-feature matching submodule
In this submodule, to match the two-dimensional line features extracted from adjacent sub-images, the line features must first be expressed mathematically. The present invention uses mathematical fitting to describe each line feature extracted from the adjacent sub-images by a line-segment function. Taking a stitching hole as an example, suppose n sub-images surround the hole. First, the slopes of the line-segment functions extracted from each sub-image form the set

I = { k(1)1, …, k(1)m; k(2)1, …, k(2)n; … }

where k(i)j denotes the slope of the j-th line segment extracted from the i-th sub-image, and the subscripts m, n, l denote the numbers of line features extracted from the corresponding sub-images. The line-feature matching between sub-images is then realized by the condition

| k(p)i − k(q)j | < T1,  p ≠ q,

where k(p)i and k(q)j are arbitrary elements of the set I taken from different sub-images, and T1 is the matching threshold, a small positive number. Finally, fully matched line features are recombined into line-feature pairs, completing the line-feature matching of the sub-images.
Taking the stitching hole T734 shown in Fig. 7 as an example, the above matching process proceeds as follows. First, the line features extracted from the left image T731, the right image T732 and the lower image T733 are {T7311, T7312, T7313, T7314}, {T7321, T7322} and {T7331, T7332, T7333}, respectively. Then the line-feature matching algorithm above is applied to the line features extracted from the three images. Finally, the matched line features are recombined into the line-feature pairs {(T7311, T7331), (T7312, T7332), (T7313, T7322, T7333), (T7314, T7321)}, completing the line-feature matching of the three images adjacent to the stitching hole shown in Fig. 7.
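The slope-based matching can be sketched as below. The data layout (one dict of named slopes per sub-image), the grouping of multi-way matches into one tuple, and the threshold value are illustrative assumptions, not the patented procedure itself.

```python
from itertools import combinations

def match_line_features(slopes, t1=0.05):
    """Match line features across sub-images by slope similarity.
    `slopes` maps a sub-image id to {feature_name: slope}.  Two features
    from *different* sub-images match when their slopes differ by less
    than the threshold t1; matched features are grouped into tuples."""
    groups = []
    for (img_a, feats_a), (img_b, feats_b) in combinations(slopes.items(), 2):
        for na, ka in feats_a.items():
            for nb, kb in feats_b.items():
                if abs(ka - kb) < t1:
                    # merge into an existing group or start a new pair
                    for g in groups:
                        if na in g or nb in g:
                            g.update((na, nb))
                            break
                    else:
                        groups.append({na, nb})
    return [tuple(sorted(g)) for g in groups]
```

Because groups are merged whenever a feature already belongs to one, a feature matched in three sub-images ends up in a single tuple, as in the (T7313, T7322, T7333) pair of the Fig. 7 example.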
3) Adjacent-region line-feature extrapolation submodule for seam and hole compensation
In this submodule, because the stitching hole T734 shown in Fig. 7 exists, the image inside the hole T734 must be compensated, which requires extrapolation from the existing image information. In the present invention, the line-feature pairs of Fig. 7 obtained in the adjacent sub-image line-feature matching submodule U732 are used for the compensation. Specifically, first, from the line-segment functions of a matched line-feature pair, a new line-segment function is constructed that fits all line features of the pair; this function is also taken to be a reasonable fit of the line feature inside the hole. Then the newly constructed line-segment function is extrapolated into the stitching hole, which determines the position of the line feature corresponding to the matched pair inside the hole. Finally, the extrapolated line features in the hole are fused and filled using the gray values, colors and brightness of the corresponding matched line-feature pairs in the original stitched sub-images. After all matched line-feature pairs are processed in this way, the line-feature image inside the stitching hole of Fig. 7 is compensated, as shown by T7341, T7342, T7343 and T7344. Comparison with the original image T735 at the stitching hole shows that the hole image compensated by this submodule remains substantially consistent with the original image.
In summary, the workflow of the sub-image seam compensation functional module is shown in Fig. 6. First, the sub-image line-feature extraction submodule U731 extracts the line features of all sub-images in the coarsely stitched video-frame panorama, and the line-segment slopes of each sub-image form a set. Second, the adjacent sub-image line-feature matching submodule U732 completes the line-feature matching of all sub-images, yielding all matched line-feature pairs between sub-images. Third, the adjacent-region line-feature extrapolation submodule U733 constructs, from the line-segment functions of the matched pairs, a new line-segment function that fits all line features of each pair, and extrapolates it to fit the line features in the holes and seams, thereby determining the positions of the line features inside them. Finally, the line features and images inside the holes are fused and filled using the gray values, colors and brightness of the original stitched sub-images, yielding a video-frame panorama free of stitching seams and holes and completing the whole workflow of this functional module.
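The extrapolation step, fitting one line through a matched pair of segments and sampling it across the hole, might look like this; the helper name `extrapolate_pair` and the least-squares straight-line fit are assumptions of the sketch (the patent only requires a line-segment function fitting both features of the pair).

```python
import numpy as np

def extrapolate_pair(seg_a, seg_b, hole_x):
    """Given a matched pair of line segments (each a sequence of (x, y)
    points) lying on either side of a stitching hole, fit one straight
    line through both segments and return the (x, y) points of that line
    sampled at the x-coordinates `hole_x` spanning the hole."""
    pts = np.vstack([seg_a, seg_b])
    k, b = np.polyfit(pts[:, 0], pts[:, 1], 1)  # least-squares line y = kx + b
    return np.column_stack([hole_x, k * hole_x + b])
```

The returned positions say where the missing line feature passes through the hole; the gray, color and brightness values filled in at those positions would then be taken from the matched features in the original sub-images, as described above.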
(4) Sub-image seam fusion functional module
As the above analysis shows, for the sub-image seam compensation functional module U73 to yield a natural, complete video-frame panorama without stitching seams or holes, the differences in gray level, color and brightness between adjacent sub-images, and between adjacent sub-images and the stitching seams and compensated holes, must be eliminated; gray-level, color and brightness fusion must therefore be performed between them. In the present invention, the sub-image seam fusion functional module U74 performs this fusion with Szeliski's method. Suppose m sub-images are adjacent to a given stitching seam or hole. Then the gray level, color and brightness of a point P in the seam or hole are computed from the gray levels, colors and brightness of the points closest to P within the regions of the m sub-images adjacent to the seam or hole, by

g(p) = Σᵢ₌₁ᵐ ξ(dᵢ) gᵢ(xᵢ, yᵢ) / Σᵢ₌₁ᵐ ξ(dᵢ)

where g(p) denotes any one of the gray value, color value and brightness value of the point P, and gᵢ(xᵢ, yᵢ) denotes the corresponding value at the point of the i-th image closest to P. The function ξ is a linear weight function determined by the distance dᵢ from P to the closest point of the i-th image: the larger the distance, the larger the weight, with weight 1 at the maximum distance and, conversely, weight 0 at the minimum distance. Performing this fusion operation pixel by pixel over the stitching seams and holes yields a natural, complete video-frame panorama without stitching seams or holes, realizing the function of the sub-image seam fusion functional module U74.
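A single-pixel version of this weighted fusion can be sketched as below. The normalization of the linear weight function between the minimum and maximum distances follows the description above, while the handling of equal distances is an added assumption.

```python
import numpy as np

def blend_point(values, dists):
    """Blend one seam/hole pixel from the values contributed by the m
    adjacent sub-images, weighting each contribution linearly by its
    distance as described above (weight 0 at the minimum distance,
    weight 1 at the maximum)."""
    values = np.asarray(values, dtype=np.float64)
    dists = np.asarray(dists, dtype=np.float64)
    span = dists.max() - dists.min()
    if span == 0:                      # all contributors equally far away
        return float(values.mean())
    w = (dists - dists.min()) / span   # linear weight function xi
    return float((w * values).sum() / w.sum())
```

Looping this over every pixel of a seam or hole, with per-pixel distances to each contributing sub-image, reproduces the pixel-by-pixel fusion operation of the module.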
Having described the operating principles of each functional submodule one by one, the workflow of the GPGPU video-frame alignment, stitching, compensation and fusion unit U7 can be described as shown in Fig. 5. First, the top-view sub-image group output by the side-view-to-top-view conversion unit U6 serves as the original image group for panorama stitching; under the sub-image positioning submodule U71, the region and size of each stitched sub-image in the panorama are predicted. Second, in the sub-image group fixed-point oriented panorama coarse-stitching submodule U72, the sub-image location information from U71 is used to coarsely stitch the panorama, yielding a video-frame panorama containing stitching seams and holes. Third, this panorama is sent to the sub-image seam compensation submodule U73, where the stitching seams and holes are compensated. Finally, the sub-image seam fusion submodule U74 fuses the stitching seams and holes, producing a natural, complete video-frame panorama without seams or holes and completing the whole workflow of the GPGPU video-frame alignment, stitching, compensation and fusion unit U7.
Real-time panoramic video stream generation unit
As shown in Fig. 2, this unit is the output unit of the present invention. The real-time panoramic video stream generation unit U9 is likewise a video-stream generation software system built on an NVIDIA high-performance GPU hardware platform. In this unit, a multithreaded scheduling mechanism and the CUDA parallel computing framework are used to assemble the natural, complete, seam- and hole-free video-frame panoramas (panoramic video frames) produced by the GPGPU video-frame alignment, stitching, compensation and fusion unit U7 into a video stream at 24 frames per second, in chronological order. At the same time, a simple video compression algorithm compresses the video stream into a conventional video format for storage, completing the whole workflow of this unit.
The beneficial effects of the present invention are:
The present invention provides a real-time video stitching method that stitches images of the same scene acquired from different viewing angles and different directions, with no effective overlap and varying structure, and then generates a video stream in real time. Based on the installation parameters and performance parameters of the acquisition devices and the line features of the acquired images, the method establishes location, transformation and compensation-fusion models of the images. By using these efficient image processing models, the stitching method of the present invention not only improves image stitching accuracy but also guarantees image stitching efficiency, meeting the real-time requirement of video stream stitching.
Brief description of the drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a flow diagram of a real-time stitching method for fixed-point oriented video with no effective overlap and varying structure according to the present invention;
Fig. 2 is a schematic diagram of the structure of a device that performs video stitching using the stitching method provided by an embodiment of the present invention;
Fig. 3 is a schematic diagram of converting a side view of a black-and-white checkerboard into a top view according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of the sub-image positioning functional module provided by an embodiment of the present invention;
Fig. 5 is a structural block diagram of the GPGPU video-frame alignment, stitching, compensation and fusion unit provided by an embodiment of the present invention;
Fig. 6 is a structural block diagram of the sub-image seam compensation functional module provided by an embodiment of the present invention;
Fig. 7 is a schematic diagram of the principle of the sub-image seam compensation functional module provided by an embodiment of the present invention;
Fig. 8 is a flow chart of a real-time stitching method for fixed-point oriented video with no effective overlap and varying structure according to an embodiment of the present invention.
Embodiment
The present invention is described in further detail below with reference to the accompanying drawings and embodiments. The following embodiments are used to illustrate the present invention but cannot be used to limit its scope.
With reference to Fig. 8, a specific implementation of the present invention is further detailed. The present invention is applied to a domestic 2650 m³ blast furnace, on which 3 cameras shooting the charge level from side views are mounted in different directions, to capture the disc-shaped charge level 8.2 meters in diameter. Because of the harsh environment inside the blast furnace (no light, high temperature and heavy dust), the whole charge level cannot be captured with a single camera, and each camera can only be mounted at a fixed point with a fixed orientation. This satisfies the use premise of the present invention: 3 cameras acquire images of the same charge level from different viewing angles and directions, the acquired images have no effective overlapping region for stitching, and small gaps and holes may even exist between them. First, the necessary equipment of the video stitching device is installed according to Fig. 2; after the equipment is installed, video stitching of the acquired video information is performed according to the flow of Fig. 1, with the following working steps:
1. According to the multi-camera installation parameters S61, calibrate the 3 installed charge-level cameras to obtain their imaging parameters S62, and on this basis build the side-view-to-top-view geometric transformation model S63 for use in the side-view-to-top-view conversion S6 of the sub-image groups;
2. Using the fixed-point orientation parameters of the 3 cameras and the plane S711 where the shot object lies, build the location model S712 of the stitched sub-images in the panorama, and use it to determine the positions S713 of the sub-images in the panoramic image, the positions S714 of sub-image stitching holes and cracks, the sub-image stitching overlap regions S715, and the adjacency relations S716 between sub-images and between sub-images and holes and cracks, for use in later stitching;
3. The 3 camera arrays mounted on the blast-furnace top acquire videos of different positions of the charge level, forming the multi-camera video sequence S4; the video stream synchronization splitting unit synchronously splits the video streams S51 to obtain the i-th frame sub-image group S52 to be stitched at the i-th moment;
4. Using the image geometric transformation model S63 built in step 1, convert the i-th frame sub-image group S52 to be stitched from side views to top views;
5. Using the sub-image positions S713 in the panoramic image determined in step 2, perform fixed-point oriented panorama coarse stitching S721 on the sub-image group converted to top views;
6. From the coarsely stitched panorama obtained in step 5, classify the seam regions by seam judgment S722 into three cases: seams with overlapping regions, non-overlapping seams without holes, and seams with holes or cracks;
7. For the seams with overlapping regions determined in step 6, use the specific locations of the sub-image stitching overlap regions S715 obtained in step 2 and stitch and fuse them directly with the traditional brightness and color similarity matching method S741;
8. For the non-overlapping seams without holes determined in step 6, stitch them with brightness and color blending at the seams S742;
9. For the seams with holes or cracks determined in step 6, first use the adjacency relations S716 between sub-images and between sub-images and holes and cracks determined in step 2 to determine the sub-images S731 adjacent to the stitching holes and seams; second, extract and match line features in the adjacent regions of the sub-images S732, and on this basis obtain the line features of the seams and holes by extrapolating boundary points of the adjacent-region line features S733, thereby compensating the line features of the holes and seams S734; finally, fuse the holes and other parts with color and brightness interpolation S735;
10. Through steps 7, 8 and 9, the complete panorama frame i for the i-th moment is obtained; meanwhile, return to step 3 to perform panoramic stitching on the sub-image group acquired by the 3 cameras at the (i+1)-th moment. Repeating this cycle yields the time-distributed image sequence of the whole blast-furnace panoramic charge level; the acquired time-distributed image sequence of the panoramic charge level is synthesized into a real-time video stream, giving the real-time video information of the blast-furnace panoramic charge level.
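The side-view-to-top-view conversion of step 4 is a projective (homography) mapping of image coordinates. A minimal sketch of applying such a mapping is shown below; the helper name `warp_points` and the matrix values are illustrative, and in practice the 3×3 matrix would come from the calibrated geometric transformation model S63.

```python
import numpy as np

def warp_points(H, pts):
    """Apply a 3x3 homography H to an array of (x, y) points, i.e. the
    projective map used when converting a side view to a top view."""
    pts = np.asarray(pts, dtype=np.float64)
    homog = np.column_stack([pts, np.ones(len(pts))])   # to homogeneous coords
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]               # back to Cartesian
```

Warping a whole sub-image amounts to applying this mapping (or its inverse, with interpolation) to every pixel coordinate of the top-view grid.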
The above embodiments are merely illustrative of the present invention and are not a limitation of it. Although the present invention has been described in detail with reference to the embodiments, those skilled in the art will understand that various combinations, modifications or equivalent substitutions of the technical solution of the present invention, made without departing from its spirit and scope, shall all be covered by the scope of the claims of the present invention.
Claims (10)
1. A real-time stitching method for fixed-point oriented video with no effective overlap and varying structure, characterized in that the method comprises the following steps:
Step 1: installing a multi-camera video acquisition array, acquiring video stream information of different positions respectively, and performing analog-to-digital conversion, synchronization and compression on the video stream information;
Step 2: converting the compressed video stream information into the same video format and dividing it chronologically into a plurality of first static video frame groups, wherein each first static video frame group comprises the n channels of video stream information acquired by the multi-camera video acquisition array at the same moment;
Step 3: converting the still image corresponding to each channel of video stream information in the first static video frame group into a top view of the shot object according to a side-view-to-top-view geometric model, forming a second static video frame group;
Step 4: positioning each video frame in the second static video frame group according to a location model and performing panorama coarse stitching to obtain a coarsely stitched panorama;
Step 5: determining, according to the location model, the positions in the coarsely stitched panorama of seams with overlapping regions, non-overlapping seams without holes, and seams with hole or crack regions;
Step 6: for the seams with overlapping regions and the non-overlapping seams without holes, fusing and stitching the seams with a brightness and color interpolation algorithm;
Step 7: for the seams with hole or crack regions, stitching as follows:
determining, according to the location model, the adjacency relations between the sub-images corresponding to the video frames in the second static video frame group and between the sub-images and the holes or cracks, and determining the hole or crack sub-images according to the adjacency relations;
extracting the line features of the sub-images adjacent to the hole or crack sub-images;
matching the extracted line features to obtain line-feature pairs;
compensating the cracks or holes by extrapolating boundary points of the adjacent-region line features;
fusing and stitching the seams of the compensated coarsely stitched panorama with the brightness and color interpolation algorithm, obtaining a finely stitched panoramic video frame;
Step 8: processing the first static video frame groups of different moments according to steps 3 to 7 to obtain panoramic video frames of different moments, and synthesizing the panoramic video frames chronologically to obtain a real-time panoramic video stream.
2. The method according to claim 1, characterized in that step 2 is performed after a synchronous video splitting command is received, and after step 2 finishes running, the first static video frames are stored in chronological order.
3. The method according to claim 2, characterized in that the side-view-to-top-view geometric model in step 3 is:

s·[x, y, 1]ᵀ = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]] · [Rx Ry Rz t] · [X, Y, Z, 1]ᵀ

wherein s is a scale factor, fx, fy are the focal lengths of the camera, cx, cy are the image rectification parameters, Rx, Ry, Rz are the three column vectors of the rotation matrix, t is the translation vector, (x, y, z) is the element coordinate of the side view of the still image, and (X, Y, Z) is the top-view coordinate of the corresponding element.
4. The method according to claim 3, characterized in that the location model in steps 4 and 5 is:
wherein x0, y0, z0 are the coordinates of the camera lens center, x1, y1, z1 are the coordinates of the intersection of the shot object and the imaging plane xoy, (α, β, γ) are the direction angles of the generatrix of the camera's field-of-view cone, x2, y2, z2 are the coordinates of the intersection of the camera field-of-view latitude circle and the cone generatrix, and x, y, z are the coordinates of the intersection of the camera field of view and the imaging plane xoy.
5. The method according to claim 4, characterized in that the panorama coarse stitching in step 4 is specifically:
first, generating a blank image as large as the panoramic field of view of the shot object;
second, positioning the sub-image corresponding to each video frame in the second static video frame group with the location model, determining the position, size and orientation of each sub-image in the blank image;
third, filling the sub-images one by one into their corresponding places in the blank image according to the predetermined label order of the cameras in the multi-camera array and the location information of the sub-images they shoot, realizing the coarse stitching of the panorama.
6. The method according to claim 5, characterized in that extracting the line features of the sub-images adjacent to the hole or crack sub-images in step 7 is specifically:
taking the pixel C(x, y) as the center, and letting L(x, y) and R(x, y) be the mean gray values of the left and right regions adjacent to the point C(x, y) along some direction, the ratio of averages is estimated as in formula (3):

RoA_C(x, y) = max{ R(x, y)/L(x, y), L(x, y)/R(x, y) }   (3)

then, RoA_C(x, y) is compared with a predetermined threshold T0; when RoA_C(x, y) is greater than the threshold T0, the point C is taken to be a boundary point;
the line-feature fragments extracted by the above algorithm from the sub-images adjacent to the hole or crack sub-images are recombined into line features.
7. The method according to claim 6, characterized in that matching the extracted line features in step 7 is specifically:
describing the line features with corresponding line-segment functions; supposing that n sub-images surround the hole or crack, first, the slopes of the line-segment functions extracted from each sub-image form the set I expressed by formula (4),

I = { k(1)1, …, k(1)m; k(2)1, …, k(2)n; … }   (4)

wherein m, n, l denote the numbers of line features extracted from the corresponding sub-images;
the line-feature matching between sub-images is realized by formula (5),

| k(p)i − k(q)j | < T1,  p ≠ q   (5)

wherein k(p)i and k(q)j are arbitrary elements of the set I taken from different sub-images, and T1 is the matching threshold; when formula (5) is satisfied, the line-feature pair is matched successfully.
8. The method according to claim 7, characterized in that compensating the cracks or holes by extrapolating boundary points of the adjacent-region line features in step 7 is specifically:
first, according to the first line-segment functions of the matched line-feature pairs, constructing a second line-segment function that fits all line features of the corresponding pair, the second line-segment function also being taken as a reasonable fit of the line feature in the hole or crack;
then, extrapolating the second line-segment function into the hole or crack, thereby determining the position of the line feature corresponding to the matched line-feature pair;
finally, fusing and stitching the extrapolated line features of the hole or crack at the seams with the color and brightness interpolation algorithm, using the colors and brightness of the corresponding matched line-feature pairs in the sub-images adjacent to the hole or crack sub-images.
9. The method according to claim 8, characterized in that fusing and stitching with the color and brightness interpolation algorithm in steps 6 and 7 is specifically:
supposing that m sub-images are adjacent to the hole or crack sub-image, the gray level, color and brightness of a point P in the crack or hole are computed by formula (6) from the gray levels, colors and brightness of the points closest to the point P in the m sub-images,

g(p) = Σᵢ₌₁ᵐ ξ(dᵢ) gᵢ(xᵢ, yᵢ) / Σᵢ₌₁ᵐ ξ(dᵢ)   (6)

wherein g(p) denotes any one of the gray value, color value and brightness value of the point P, gᵢ(xᵢ, yᵢ) denotes the gray value, color value or brightness value corresponding to g(p) at the point of the i-th sub-image closest to P, and ξ is a linear weight function;
performing the above fusion operation pixel by pixel in the cracks or holes yields the complete panoramic video frame.
10. The method according to claim 9, characterized in that in step 8, the panoramic video stream is compressed, stored and displayed.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510016447.2A CN104506828B (en) | 2015-01-13 | 2015-01-13 | A kind of fixed point orientation video real-time joining method of nothing effectively overlapping structure changes |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104506828A CN104506828A (en) | 2015-04-08 |
CN104506828B true CN104506828B (en) | 2017-10-17 |
Family
ID=52948542
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2008112776A2 (en) * | 2007-03-12 | 2008-09-18 | Conversion Works, Inc. | Systems and methods for filling occluded information for 2-d to 3-d conversion |
CN101479765A (en) * | 2006-06-23 | 2009-07-08 | 图象公司 | Methods and systems for converting 2D motion pictures for stereoscopic 3D exhibition |
WO2011121117A1 (en) * | 2010-04-02 | 2011-10-06 | Imec | Virtual camera system |
CN103763479A (en) * | 2013-12-31 | 2014-04-30 | 深圳英飞拓科技股份有限公司 | Splicing device for real-time high speed high definition panoramic video and method thereof |
CN103985254A (en) * | 2014-05-29 | 2014-08-13 | 四川川大智胜软件股份有限公司 | Multi-view video fusion and traffic parameter collecting method for large-scale scene traffic monitoring |
Non-Patent Citations (1)
Title |
---|
Research on Real-Time Video Stitching Technology Based on Fisheye Cameras; Sun Juhui; China Master's Theses Full-Text Database, Information Science and Technology Section; 20140915 (No. 9); full text * |