CN107274346A - Real-time panoramic video splicing system - Google Patents
- Publication number
- CN107274346A (application CN201710488347.9A)
- Authority
- CN
- China
- Prior art keywords
- mrow
- parameter
- msub
- video
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/04—Synchronising
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/80—Geometric correction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T7/85—Stereo camera calibration
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/42—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
- H04N19/436—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation using parallelised computational arrangements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/2624—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects for obtaining an image which is composed of whole input images, e.g. splitscreen
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/12—Picture reproducers
- H04N9/31—Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
- H04N9/3179—Video signal processing therefor
Abstract
The invention discloses a real-time panoramic video stitching system. Through a periodic parameter correction module, the exposure parameters are updated in time, so that the system can adapt to changes in scene lighting and similar factors, improving robustness and eliminating the visible seams caused by lighting changes, thereby ensuring the stitching quality of the panoramic video. Meanwhile, an adaptive fusion module updates the optimal seam whenever a moving object is detected at the seam, and combines the optimal seam with adaptive weighted fusion, avoiding the ghosting caused by frequently moving foreground objects in the overlap region. In addition, by building a heterogeneous CPU/GPU system, the entire stitching pipeline is accelerated on the GPU, increasing the speed of panoramic video stitching and satisfying the real-time requirement.
Description
Technical field
The present invention relates to the field of panoramic video stitching, and in particular to a real-time panoramic video stitching system.
Background technology
With the development of computer science and technology, computer image processing has also advanced rapidly, bringing convenience to people's lives. Thanks to its wide viewing angle and high resolution, panoramic video has been widely applied in many fields, including surveillance, urban transportation, virtual reality (VR), and online education.
Panoramic video stitching is based on image stitching, but because of the temporal continuity of video and the required encoding and decoding, video stitching is more complex and has stricter real-time requirements. Existing techniques accelerate stitching mainly in two respects: the stitching strategy and the computing capability.
1. Stitching strategy.
In the patent "A panoramic video stitching method", China Southern Power Grid Co., Ltd. approaches the problem from the stitching strategy, dividing stitching into an initialization stage and a video-stitching stage; the relevant parameters are computed in the initialization stage and, exploiting parameter consistency, redundant computation is avoided, the time spent in the stitching stage is reduced, and real-time performance is ensured. However, the exposure-compensation parameters actually need to adapt to the lighting variation of the captured scene; otherwise visible seams appear in the panoramic video. In the patent "A 360-degree panoramic video stitching system", Cheng Hong et al. of the University of Electronic Science and Technology of China compute the homography matrices of adjacent images by calibration in the initialization stage, avoiding the complex computation of image registration; but the approach applies only to stitching a small number of video images, and its matching accuracy is relatively low. In the patent "A panoramic video stitching method and device", Zhejiang Uniview Technologies Co., Ltd. determines the stitching position geometrically, comparing the corner points of adjacent images in the initialization stage and choosing the corner points closest to the two ends of the overlap to determine the final position information. Although stitching video images by geometric position saves complex computation, stitching by cropping yields accuracy that is too low to guarantee stitching quality.
2. Computing capability.
In the patent "A processing method and system for panoramic video stitching", Beijing Times Tuoling Technology uses cloud computing to complete the panoramic stitching in the cloud; although the computing resources of the cloud improve stitching speed, the transmission introduces delay, so real-time performance cannot be met. In the thesis "Research and Design of FPGA-based Real-time High-definition Panoramic Video Stitching", Chen Quanbing of the University of Electronic Science and Technology of China proposes a panoramic video surveillance system on an FPGA hardware platform, using an ARM core to complete image registration and stitching; but hardware acceleration of this kind is too costly and is unfavorable for extension and maintenance. In addition, because foreground objects move through the overlap regions of adjacent frames during panoramic video capture, the panoramic video produced by typical stitching systems exhibits "ghosting", which degrades the result. To eliminate this effect, Ying Lijian, in the thesis "Research on Multi-camera-based 360-degree Panoramic Image Stitching", proposes a fusion method combining multi-band blending with an optimal seam, building a Laplacian fusion pyramid at the seam; but constructing the pyramid is computationally complex and hard to run in real time.
Summary of the invention
The object of the present invention is to provide a real-time panoramic video stitching system that can ensure both the stitching quality and the real-time performance of panoramic video, while also avoiding the ghosting caused by frequently moving foreground objects in the overlap region.
The purpose of the present invention is achieved through the following technical solutions:
A real-time panoramic video stitching system, comprising:
a multi-channel video synchronous acquisition module, for synchronously acquiring multiple channels of video images and applying distortion correction to each channel of video image;
a parameter initialization module, for performing image registration on the received multiple channels of video images and computing initial values of the projective transformation parameters and seam information used for stitching the video images;
a periodic parameter correction module, for computing the exposure-compensation parameters from the initialized projective transformation parameters and periodically updating the exposure-compensation parameters during the stitching loop;
an adaptive fusion module, for judging from the initialized seam information whether a moving object is present at the seam between adjacent video image frames and, if so, recomputing the optimal seam information and then combining the optimal seam information with a weighted fusion algorithm to compute the image-fusion parameters;
a real-time stitching module, for stitching the multiple channels of video images into a panoramic video stream based on the projective transformation parameters, the exposure-compensation parameters, and the image-fusion parameters.
Performing image registration on the received multiple channels of video images, and computing initial values of the projective transformation parameters and seam information used for stitching, includes the following.
The image registration steps are: first, the feature points of the images are extracted with the SURF algorithm; then coarse matching is performed, measuring feature similarity by Euclidean distance and using the ratio of the Euclidean distance to the nearest feature point over the Euclidean distance to the second-nearest feature point as the matching criterion, keeping a match when the ratio is below a threshold and rejecting it otherwise; finally, the matches are purified with the RANSAC algorithm and the homography matrix is computed.
The camera parameters in the homography matrix H obtained after image registration are refined by bundle adjustment, so that the video images are initialized to the same rotation and focal length. For the projective transformation, a cylindrical, spherical, or cubic projection model is selected, and the projective transformation parameters are computed from the corrected camera parameters. When computing the seam information, a path is found in the overlap region of two adjacent video images such that the stitched image is coherent and free of overlap.
An argument structure is established in the parameter initialization module, containing the following parameters: the width and height of the video image; the x-axis and y-axis coordinates of the top-left corner after applying the mapping transformation defined by the projective transformation parameters; the x-axis and y-axis mapping tables after the mapping transformation; the image-fusion parameters; and the exposure-compensation parameters. The image-fusion and exposure-compensation parameters are updated in real time from the computation results of the periodic parameter correction module and the adaptive fusion module, respectively.
Computing the exposure-compensation parameters from the initialized projective transformation parameters, and periodically updating them during the stitching loop, includes the following.
Each video image is transformed with the initialized projective transformation parameters, and the transformed video images are partitioned into blocks; illumination compensation is performed for each block, and the corresponding mask is saved to obtain the gains; linear filtering is then applied to create the exposure-compensation gain weight matrix, which serves as the exposure-compensation parameter and updates the initial exposure-compensation parameter in the argument structure established by the parameter initialization module.
During the real-time stitching stage, the exposure-compensation parameters are periodically recalibrated according to a chosen calibration cycle, and the copy in the argument structure is updated synchronously.
Judging from the initialized seam information whether a moving object is present at the seam between adjacent video image frames includes the following.
Using the initialized seam information, the original gradient value of every seam pixel is stored; at time t the new gradient value of each pixel is computed, and then, within each seam region, the number N of pixels whose gradient value has changed by more than a preset value δ is counted:
N = Σ_i [ |g_it − g_i0| > δ ]
where g_i0 is the original gradient value of pixel p_i, g_it is its gradient value at time t, and the bracket equals 1 when the condition holds and 0 otherwise.
If N exceeds a set threshold, a moving object is judged to be present at the seam between the adjacent video image frames, and the similarity of color and structure at the optimal seam is then used to establish the following search criterion:
Ecolor(x, y) = I1(x, y) − I2(x, y);
Egeometry(x, y) = Diff(I1(x, y), I2(x, y));
E(x, y) = Ecolor(x, y)² + Egeometry(x, y);
where I1(x, y) and I2(x, y) are the adjacent video images to be fused, Ecolor(x, y) and Egeometry(x, y) are the color difference and structure difference of their overlap region, and Diff computes the gradient differences of the two images in the x and y directions. The optimal seam information is found by minimizing E(x, y), the sum of Ecolor(x, y)² and Egeometry(x, y).
Combining the optimal seam information with the weighted fusion algorithm to compute the image-fusion parameters includes the following.
With the optimal seam as reference, a transition region of width L is determined between the adjacent left and right images; the ratio of each pixel's distance to the region boundaries over the width L is used as the weight, and the pixels in the region are superposed by weighting:
P(x, y) = (d1 / L) · P1(x, y) + (d2 / L) · P2(x, y)
where P(x, y) is the value of pixel P in the transition region, P1(x, y) and P2(x, y) are the values of the corresponding pixels in the left and right images, d1 is the distance from pixel P to the right boundary xright of the region, d2 is the distance from pixel P to the left boundary xleft, and xseam is the x coordinate of the seam pixel.
The real-time stitching of the video images proceeds as follows.
A heterogeneous CPU/GPU system is built; the processing of the parameter initialization module, the periodic parameter correction module, and the adaptive fusion module all runs on the CPU.
Meanwhile, each channel of video images is read into the CPU with a DirectShow-based software scheme, and the video frames are converted from YUV to RGBA format; the exposure-compensation parameters are then superposed with the image-fusion parameters by weighting to generate the total weight matrix. The projective transformation parameters and the total weight matrix are then passed to the GPU through memory-management functions to perform the panoramic stitching, yielding the panoramic video frames; each panoramic frame is converted back from RGBA to YUV and passed back to the CPU for storage.
As can be seen from the technical solution provided above, the periodic parameter correction module updates the exposure parameters in time, so that the system can adapt to changes in scene lighting and similar factors, improving the robustness of the system and eliminating the visible seams caused by lighting changes, thereby ensuring the stitching quality of the panoramic video. Meanwhile, the adaptive fusion module updates the optimal seam whenever a moving object is detected at the seam and combines the optimal seam with adaptive weighted fusion, avoiding the ghosting caused by frequently moving foreground objects in the overlap region. In addition, by building a heterogeneous CPU/GPU system, the entire stitching pipeline is accelerated on the GPU, increasing the speed of panoramic video stitching and satisfying the real-time requirement.
Brief description of the drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic diagram of the composition of a real-time panoramic video stitching system provided by an embodiment of the present invention;
Fig. 2 is a workflow diagram of a real-time panoramic video stitching system provided by an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, rather than all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Fig. 1 is a schematic diagram of a real-time panoramic video stitching system provided by an embodiment of the present invention. As shown in Fig. 1, the system mainly comprises: a multi-channel video synchronous acquisition module, a parameter initialization module, a periodic parameter correction module, an adaptive fusion module, and a real-time stitching module; through the cooperation of these modules, real-time panoramic video stitching is finally achieved.
The workflow of the system is shown in Fig. 2. First, the multi-channel video synchronous acquisition module synchronously acquires multiple channels of video images and applies distortion correction to each channel. Then, the parameter initialization module checks whether the system has been initialized; if not, it performs image registration on the received multiple channels of video images and computes initial values of the projective transformation parameters and seam information used for stitching. Afterwards, the periodic parameter correction module computes the exposure-compensation parameters from the initialized projective transformation parameters and updates them periodically during the stitching loop. Next, the adaptive fusion module judges from the initialized seam information whether a moving object is present at the seam between adjacent video image frames; if so, it recomputes the optimal seam information and combines it with the weighted fusion algorithm to compute the image-fusion parameters. Finally, the real-time stitching module stitches the multiple channels of video images into a panoramic video stream based on the projective transformation, exposure-compensation, and image-fusion parameters; GPU parallel programming models are used during real-time stitching to guarantee real-time performance.
For ease of understanding, each module of the system is described in detail below.
1. Multi-channel video synchronous acquisition module.
In the embodiment of the present invention, the multi-channel video synchronous acquisition module uses a parallel algorithm to synchronously acquire each channel of video images, and applies distortion correction to the acquired video frames using camera parameters obtained through camera calibration. A conventional camera calibration method can be used: by continuously capturing a specific calibration target from different angles, the parameters of the camera model are solved. The intrinsic parameters include the principal point (u0, v0), the focal length f, and the distortion coefficients; the extrinsic parameters include the translation vector T and the rotation matrix R from the world coordinate system to the camera coordinate system.
As an example, six USB wide-angle cameras can synchronously acquire the video images, with an angle of 72 degrees between adjacent cameras, a common center, and fixed positions.
2. Parameter initialization module.
(1) Image registration.
The steps are as follows:
1) Extracting image feature points.
In the embodiment of the present invention, the feature points of the images are extracted with the SURF algorithm, whose computation speed is significantly higher than that of SIFT, which also improves processing speed.
2) image characteristic point is slightly matched.
Its similarity is weighed in the matching of characteristic point using Euclidean distance, is slightly matched using than value-based algorithm;Specifically such as
Under:By the Feature Points Matching that is used for of the Euclidean distance of time near characteristic point of the nearest characteristic point of Euclidean distance and Euclidean distance
Foundation, if ratio is less than threshold value, retains corresponding match point, otherwise, rejects.
3) Purifying the matched feature points and computing the homography matrix.
The matched feature points obtained from the coarse matching of step 2) may contain some mismatches, which must be removed to guarantee the accuracy of subsequent computation. In the embodiment of the present invention, the RANSAC algorithm is used to purify the feature points and compute the homography matrix H.
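The coarse-matching step (the ratio test over Euclidean distances) can be sketched as follows. This is an illustration, not the patent's code: the descriptors are synthetic arrays, and the threshold value 0.7 is an assumption, since the text does not state one.

```python
import numpy as np

def ratio_match(desc1, desc2, ratio=0.7):
    """Coarse matching: keep a pair only when the Euclidean distance to
    the nearest descriptor in desc2 is below `ratio` times the distance
    to the second-nearest, as described in step 2) above."""
    desc2 = np.asarray(desc2, dtype=float)
    matches = []
    for i, d in enumerate(np.asarray(desc1, dtype=float)):
        dists = np.linalg.norm(desc2 - d, axis=1)   # Euclidean distances
        order = np.argsort(dists)
        nearest, second = dists[order[0]], dists[order[1]]
        if nearest < ratio * second:                # ratio test
            matches.append((i, int(order[0])))
    return matches
```

An ambiguous pair (nearest and second-nearest almost equally far) fails the test and is rejected, which is exactly the mismatch filtering the ratio criterion provides before RANSAC purification.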
(2) Global parameter adjustment.
To guarantee stitching accuracy, a global adjustment is required: bundle adjustment is used to refine the camera parameters in the homography matrix H obtained after image registration, initializing the video images to the same rotation and focal length.
For the projective transformation, a projection model such as cylindrical, spherical, or cubic is selected, and the projective transformation parameters are computed from the corrected camera parameters. When computing the seam information, a path is found in the overlap region of two adjacent video images such that the stitched image is coherent and free of overlap.
In addition, an argument structure as shown in Table 1 is established in the parameter initialization module, containing the following parameters: the width and height of the video image; the x-axis and y-axis coordinates of the top-left corner after applying the mapping transformation defined by the projective transformation parameters; the x-axis and y-axis mapping tables after the mapping transformation; the image-fusion parameters (the image-fusion weight matrix); and the exposure-compensation parameters (the exposure-compensation gain weight matrix). The image-fusion and exposure-compensation parameters are updated in real time from the computation results of the periodic parameter correction module and the adaptive fusion module, respectively.
| Variable name | Meaning |
| height | Source image height |
| width | Source image width |
| corner_x | x coordinate of the top-left corner after the mapping transformation |
| corner_y | y coordinate of the top-left corner after the mapping transformation |
| xmap | x-axis mapping table after the mapping transformation |
| ymap | y-axis mapping table after the mapping transformation |
| blend_weight | Image-fusion weight matrix |
| co_weight | Exposure-compensation gain weight matrix |
Table 1. The argument structure
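The argument structure of Table 1 might be represented as follows. The field names come from the table; the Python types, defaults, and the use of a dataclass are assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class StitchParams:
    """Per-camera parameter structure mirroring Table 1."""
    height: int                                # source image height
    width: int                                 # source image width
    corner_x: int                              # top-left x after mapping transform
    corner_y: int                              # top-left y after mapping transform
    xmap: Optional[np.ndarray] = None          # x-axis mapping table
    ymap: Optional[np.ndarray] = None          # y-axis mapping table
    blend_weight: Optional[np.ndarray] = None  # image-fusion weight matrix
    co_weight: Optional[np.ndarray] = None     # exposure-compensation gain weights
```

The mapping tables and the two weight matrices default to None here because, in the workflow described above, they are filled in later: the maps at initialization, and the weights whenever the correction and fusion modules recompute them.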
3. Periodic parameter correction module.
In the embodiment of the present invention, each video image is transformed with the initialized projective transformation parameters, and the transformed video images are partitioned into blocks; illumination compensation is performed for each block, and the corresponding mask is saved to obtain the gains; linear filtering is then applied to create the exposure-compensation gain weight matrix, which serves as the exposure-compensation parameter and updates the initial exposure-compensation parameter in the argument structure established by the parameter initialization module.
During the real-time stitching stage, the exposure-compensation parameters are periodically recalibrated according to a chosen calibration cycle, and the copy in the argument structure is updated synchronously; the length of the cycle can be adjusted according to the CPU's processing speed and the stitching quality.
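A much-simplified stand-in for the block-wise illumination compensation above: each block of one image receives a gain that equalizes its mean intensity against the corresponding overlapping block of the neighbouring image. The equal-means criterion is an assumption; the real module also saves masks and linearly filters the gains into the co_weight matrix, which is omitted here.

```python
import numpy as np

def block_gains(blocks_a, blocks_b):
    """Per-block exposure gain so that each block of image A matches the
    mean intensity of the corresponding overlapping block of image B.
    Periodically recomputing these gains is what keeps the compensation
    adapted to changing scene lighting."""
    gains = []
    for a, b in zip(blocks_a, blocks_b):
        mean_a = max(float(np.mean(a)), 1e-6)   # guard against empty/dark blocks
        gains.append(float(np.mean(b)) / mean_a)
    return np.array(gains)
```

If image A's block averages 50 while B's overlapping block averages 100, the gain is 2.0, so A is brightened to match B at the seam.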
4. Adaptive fusion module.
Using the initialized seam information, the original gradient value of every seam pixel is stored; at time t the new gradient value of each pixel is computed, and then, within each seam region, the number N of pixels whose gradient value has changed by more than a preset value δ is counted. The criterion at pixel p_i is expressed by the formula:
N = Σ_i [ |g_it − g_i0| > δ ]
where g_i0 is the original gradient value of pixel p_i, g_it is its gradient value at time t, and the bracket equals 1 when the condition holds and 0 otherwise.
The above formula checks whether the gradient change of each pixel exceeds the preset value; every pixel that does is counted in N.
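The motion criterion above can be written directly in code: count the seam pixels whose gradient changed by more than the preset value δ, and flag motion when the count exceeds the threshold. This is a direct transcription of the counting rule, not the patent's implementation.

```python
import numpy as np

def moving_pixel_count(g0, gt, delta):
    """N = number of seam pixels p_i with |g_it - g_i0| > delta."""
    g0 = np.asarray(g0, dtype=float)
    gt = np.asarray(gt, dtype=float)
    return int(np.sum(np.abs(gt - g0) > delta))

def seam_has_motion(g0, gt, delta, threshold):
    """A moving object is judged present at the seam when N exceeds
    the set threshold."""
    return moving_pixel_count(g0, gt, delta) > threshold
```

With stored gradients [1, 1, 1, 1] and current gradients [1, 5, 1, 6] at δ = 2, two pixels changed by more than δ, so N = 2 and any threshold below 2 triggers the seam recomputation.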
If N, the number of pixels whose gradient change exceeds the preset value, is greater than the set threshold, a moving object is judged to be present at the seam between the adjacent video image frames, and the similarity of color and structure at the optimal seam is then used to establish the following search criterion:
Ecolor(x, y) = I1(x, y) − I2(x, y);
Egeometry(x, y) = Diff(I1(x, y), I2(x, y));
E(x, y) = Ecolor(x, y)² + Egeometry(x, y);
where I1(x, y) and I2(x, y) are the adjacent video images to be fused, Ecolor(x, y) and Egeometry(x, y) are the color difference and structure difference of their overlap region, and Diff computes the gradient differences of the two images in the x and y directions. The optimal seam information is found by minimizing E(x, y), the sum of Ecolor(x, y)² and Egeometry(x, y).
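One way the minimization of E(x, y) could be carried out is a dynamic-programming seam search over the overlap region. This is a sketch under assumptions the text does not state: grayscale overlaps, Diff approximated by numpy finite-difference gradients, and a row-by-row search allowing the seam column to shift by at most one per row.

```python
import numpy as np

def find_seam(I1, I2):
    """Minimum-energy vertical seam through the overlap of I1 and I2,
    with E = Ecolor^2 + Egeometry as in the criterion above."""
    I1 = np.asarray(I1, dtype=float)
    I2 = np.asarray(I2, dtype=float)
    Ecolor = I1 - I2
    gy1, gx1 = np.gradient(I1)                 # structure term via gradients
    gy2, gx2 = np.gradient(I2)
    Egeom = np.abs(gx1 - gx2) + np.abs(gy1 - gy2)
    E = Ecolor ** 2 + Egeom
    h, w = E.shape
    cost = E.copy()                            # cumulative cost, top to bottom
    for r in range(1, h):
        for c in range(w):
            lo, hi = max(c - 1, 0), min(c + 2, w)
            cost[r, c] += cost[r - 1, lo:hi].min()
    # Backtrack the minimum-cost column from the bottom row upward.
    seam = np.zeros(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for r in range(h - 2, -1, -1):
        c = seam[r + 1]
        lo, hi = max(c - 1, 0), min(c + 2, w)
        seam[r] = lo + int(np.argmin(cost[r, lo:hi]))
    return seam
```

On two overlaps that agree everywhere except a bright vertical stripe (a stand-in for a moving object), the seam routes around the stripe, since both the color and the gradient terms are large there.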
On the basis of the optimal seam information, a transition region of width L is determined between the adjacent left and right images. The ratio of each pixel's distance to the left/right boundary of the transition region to the width L is used as its weight, and the pixels in the transition region are blended by weighted superposition, expressed as:
P(x, y) = P_1(x, y),                               x_left < x < x_seam - L/2;
P(x, y) = (d_1/L)·P_1(x, y) + (d_2/L)·P_2(x, y),   |x - x_seam| ≤ L/2;
P(x, y) = P_2(x, y),                               x_seam + L/2 < x < x_right;
where P(x, y) is the value of pixel P in the transition region, P_1(x, y) and P_2(x, y) are the values of the corresponding pixel in the left and right images respectively, d_1 is the distance from pixel P to the right boundary x_right of the transition region, d_2 is the distance from pixel P to the left boundary x_left, and x_seam is the x coordinate of the seam pixel.
In the above processing, the weighted contribution of each pixel in the seam transition region is computed to obtain the final image-fusion weight matrix blend_weight; this weight matrix expresses the relation between every pixel in the transition region and the corresponding pixels in the left and right images.
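A minimal sketch of the width-L linear blend follows. Here d_1 is interpreted as the distance to the right edge of the width-L transition zone (so that the weights stay in [0, 1]); this is one reading of the patent's description, and the helper names are ours:

```python
def left_weight(x, x_seam, L):
    """Weight of the left image at column x, for a linear blend over a
    transition zone of width L centred on the seam column x_seam."""
    if x < x_seam - L / 2:       # left of the zone: left image only
        return 1.0
    if x > x_seam + L / 2:       # right of the zone: right image only
        return 0.0
    d1 = (x_seam + L / 2) - x    # distance to the zone's right boundary
    return d1 / L                # d1/L for the left image; d2/L = 1 - d1/L

def blend_pixel(p1, p2, x, x_seam, L):
    """Weighted superposition of the left and right pixel values."""
    w = left_weight(x, x_seam, L)
    return w * p1 + (1.0 - w) * p2
```

Evaluating `left_weight` for every (x, y) in the transition region yields the blend_weight matrix used later by the real-time stitching module.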
In the embodiment of the present invention, real-time detection of object movement at the seams allows the optimal seam to be adjusted dynamically whenever a foreground object moves, while the fusion is recalculated at the same time. The system can thus adjust its parameters automatically as the scene changes, achieving adaptive fusion.
5. Real-time stitching module.
In the embodiment of the present invention, the stitching process is completed with a GPU parallel computing framework according to the stitching parameters passed in in real time, realizing real-time stitching of the video images. Specifically: using the parameters in the parameter structure, the projective transformation of each image is performed through the mapping tables xmap and ymap; the exposure-gain weight matrix is combined with the image-fusion weight matrix to build a total fusion weight matrix; and the images are weighted directly with it to obtain the final panoramic video stream.
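The remap-then-fuse step above might be sketched as follows, with nearest-neighbour lookups on toy nested lists (a real implementation would run this per pixel on the GPU, or use an OpenCV-style remap; the function names are ours):

```python
def remap(img, xmap, ymap, fill=0.0):
    """Project an image through precomputed mapping tables: output pixel
    (y, x) is taken from input pixel (ymap[y][x], xmap[y][x])."""
    h, w = len(xmap), len(xmap[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sx, sy = xmap[y][x], ymap[y][x]
            if 0 <= sy < len(img) and 0 <= sx < len(img[0]):
                out[y][x] = img[sy][sx]
    return out

def fuse(imgs, weights):
    """Weighted superposition of projected images; `weights` holds one
    total weight matrix (exposure gain x fusion weight) per image."""
    h, w = len(imgs[0]), len(imgs[0][0])
    return [[sum(wm[y][x] * im[y][x] for im, wm in zip(imgs, weights))
             for x in range(w)] for y in range(h)]
```

Because both steps are pure per-pixel table lookups and multiply-adds, they parallelize directly across GPU threads.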
In a concrete implementation, a heterogeneous CPU/GPU system is built to meet the real-time requirement of stitching. The processing of the parameter initialization module, the parameter cycle correction module and the adaptive fusion module is all implemented on the CPU. Meanwhile, each video stream is read on the CPU with a DirectShow-based software scheme, and the video frames are converted from YUV format to RGBA format; the exposure compensation parameters are then combined with the image fusion parameters by weighted superposition to generate the total weight matrix. Then, the projective transformation parameters and the total weight matrix are passed to the GPU through memory management functions, the GPU performs the stitching of the panoramic video to obtain a panoramic video frame, and the panoramic video frame is converted from RGBA format back to YUV format and passed back to the CPU for storage.
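The per-pixel color conversion in this pipeline can be illustrated as below. The patent does not state which YUV variant is used, so full-range BT.601 coefficients are assumed:

```python
def yuv_to_rgba(y, u, v):
    """Convert one YUV pixel to an (R, G, B, A) tuple.
    Assumes full-range BT.601 coefficients; the patent does not
    specify the exact YUV variant."""
    d, e = u - 128, v - 128
    def clamp(t):
        return max(0, min(255, int(round(t))))
    r = clamp(y + 1.402 * e)
    g = clamp(y - 0.344136 * d - 0.714136 * e)
    b = clamp(y + 1.772 * d)
    return r, g, b, 255
```

In the described system this conversion (and its inverse on output) runs once per frame per stream, so it is usually fused into the GPU upload/download path.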
Exemplarily, the CUDA threading model can be used on the GPU, setting up thread blocks of 32×16 threads to accelerate the stitching process in parallel.
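For the 32×16 block layout, the launch geometry is just a ceiling division over the panorama size (a hypothetical helper for illustration, not from the patent):

```python
import math

def gpu_launch_shape(width, height, block=(32, 16)):
    """Number of 32x16 thread blocks needed to cover a width x height
    panorama, assuming one thread per output pixel."""
    return (math.ceil(width / block[0]), math.ceil(height / block[1]))
```

For a 1920×1080 panorama this gives a 60×68 grid of blocks; edge threads outside the image simply return without writing.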
In the above scheme of the embodiment of the present invention, the parameter cycle correction module updates the exposure parameters in time, so that the system can adapt to changes of factors such as lighting in the scene; this improves the robustness of the system and can eliminate the visible seams caused by changes of scene light, ensuring the stitching quality of the panoramic video. Meanwhile, the adaptive fusion module updates the optimal seam in time when a moving object is detected at a seam, and combines the optimal seam with adaptive weighted fusion, which avoids the ghosting caused by frequently moving foreground objects in the overlapping regions. In addition, by building a heterogeneous CPU/GPU system and accelerating the whole stitching process on the GPU, the speed of panoramic video stitching is improved and the real-time requirement of stitching is met.
It is apparent to those skilled in the art that, for convenience and simplicity of description, the division into the functional modules above is only an example; in practical applications, the above functions may be assigned to different functional modules as needed, i.e., the internal structure of the system may be divided into different functional modules to complete all or part of the functions described above.
The foregoing is only a preferred embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement readily conceivable by a person skilled in the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be defined by the appended claims.
Claims (8)
1. A real-time panoramic video splicing system, characterized by comprising:
a multi-channel video synchronous acquisition module, for synchronously acquiring multiple video streams and performing distortion correction on each video stream;
a parameter initialization module, for performing image registration on the received video images and initializing the projective transformation parameters and seam information for video image stitching;
a parameter cycle correction module, for calculating exposure compensation parameters from the projective transformation parameters obtained at initialization, and periodically updating the exposure compensation parameters during cyclic stitching;
an adaptive fusion module, for judging from the seam information obtained at initialization whether there is object movement at the seam between adjacent video frames, and if so, recalculating the optimal seam information and combining the optimal seam information with a weighted fusion algorithm to calculate the image fusion parameters;
a real-time stitching module, for stitching the video streams based on the projective transformation parameters, the exposure compensation parameters and the image fusion parameters to obtain a panoramic video stream.
2. The real-time panoramic video splicing system according to claim 1, characterized in that performing image registration on the received video images and initializing the projective transformation parameters and seam information for video image stitching comprises:
the image registration steps are as follows: first, the feature points of the images are extracted with the SURF algorithm; then coarse matching is performed: feature-point similarity is measured by Euclidean distance, and the ratio of the Euclidean distance to the nearest feature point over the Euclidean distance to the second-nearest feature point is used as the criterion for feature-point matching; if the ratio is below a threshold, the corresponding match is kept, otherwise it is discarded; finally, the matches are purified with the RANSAC algorithm and the homography matrix is calculated;
the camera parameters in the homography matrix H obtained after image registration are corrected by bundle adjustment, so that the video images are initialized to the same rotation and focal length; for the projective transformation, a cylindrical, spherical or cubic projection model is selected, and the projective transformation parameters are calculated in combination with the corrected camera parameters; when calculating the seam information, a path is found in the overlapping region of two adjacent video images such that the stitched image is coherent and free of overlap.
3. The real-time panoramic video splicing system according to claim 1, characterized in that a parameter structure is established in the parameter initialization module, comprising the following parameters: the width and height of the video image; the x-axis and y-axis coordinates of the upper-left corner of the video image after mapping transformation with the projective transformation parameters; the x-axis and y-axis mapping tables of the mapping transformation; the image fusion parameters; and the exposure compensation parameters. The image fusion parameters and the exposure compensation parameters therein are updated in real time according to the calculation results of the corresponding parameter cycle correction module and adaptive fusion module.
4. The real-time panoramic video splicing system according to claim 1, characterized in that calculating exposure compensation parameters from the projective transformation parameters obtained at initialization, and periodically updating the exposure compensation parameters during cyclic stitching, comprises:
transforming the corresponding video images with the projective transformation parameters obtained at initialization, then partitioning the transformed video images into blocks; performing illumination compensation for each block, preserving the corresponding mask, and obtaining the gains; then applying linear filtering and creating the exposure compensation gain weight matrix as the exposure compensation parameter, with which the initial exposure compensation parameter in the parameter structure established by the parameter initialization module is updated;
in the real-time stitching stage, the exposure compensation parameters are periodically corrected according to the determined correction cycle, and the exposure compensation parameters in the parameter structure are updated synchronously.
5. The real-time panoramic video splicing system according to claim 1, characterized in that judging from the seam information obtained at initialization whether there is object movement at the seam between adjacent video frames comprises:
using the seam information obtained at initialization, the original gradient value of each pixel along the seam is preserved; at time t, the new gradient value of each pixel is calculated, and for each seam region the number N of pixels whose gradient change exceeds a preset value δ is counted, with the formula:
N = { p_i | (g_it - g_i0) / g_i0 > δ };
where g_i0 denotes the original gradient value of pixel p_i, and g_it denotes the gradient value of pixel p_i at time t.
6. The real-time panoramic video splicing system according to claim 5, characterized in that:
if the number N of pixels whose gradient change exceeds the preset value is greater than the set threshold, it is judged that there is object movement at the seam between adjacent video frames, and the similarity of color and structure at the optimal seam is exploited by establishing the following search criterion:
E_color(x, y) = I_1(x, y) - I_2(x, y);
E_geometry(x, y) = Diff(I_1(x, y), I_2(x, y));
E(x, y) = E_color(x, y)^2 + E_geometry(x, y);
where I_1(x, y) and I_2(x, y) are the adjacent video images to be fused, E_color(x, y) and E_geometry(x, y) are the color and structural differences over the overlapping region of the adjacent video images to be fused, and Diff computes the gradient differences of the adjacent video images to be fused in the x and y directions; the optimal seam information is found by minimizing E(x, y), the sum of E_color(x, y)^2 and E_geometry(x, y).
7. The real-time panoramic video splicing system according to claim 6, characterized in that combining the optimal seam information with a weighted fusion algorithm to calculate the image fusion parameters comprises:
on the basis of the optimal seam information, a transition region of width L is determined between the adjacent left and right images; the ratio of each pixel's distance to the left/right boundary of the transition region to the width L is used as its weight, and the pixels in the transition region are blended by weighted superposition, expressed as:
P(x, y) = P_1(x, y),                               x_left < x < x_seam - L/2;
P(x, y) = (d_1/L)·P_1(x, y) + (d_2/L)·P_2(x, y),   |x - x_seam| ≤ L/2;
P(x, y) = P_2(x, y),                               x_seam + L/2 < x < x_right;
where P(x, y) is the value of pixel P in the transition region, P_1(x, y) and P_2(x, y) are the values of the corresponding pixel in the left and right images respectively, d_1 is the distance from pixel P to the right boundary x_right of the transition region, d_2 is the distance from pixel P to the left boundary x_left, and x_seam is the x coordinate of the seam pixel.
8. The real-time panoramic video splicing system according to claim 1, characterized in that the real-time video image stitching process is as follows:
a heterogeneous CPU/GPU system is built, and the processing of the parameter initialization module, the parameter cycle correction module and the adaptive fusion module is all implemented on the CPU;
meanwhile, each video stream is read on the CPU with a DirectShow-based software scheme, and the video frames are converted from YUV format to RGBA format; the exposure compensation parameters are then combined with the image fusion parameters by weighted superposition to generate the total weight matrix; then, the projective transformation parameters and the total weight matrix are passed to the GPU through memory management functions for stitching of the panoramic video, a panoramic video frame is obtained, and the panoramic video frame is converted from RGBA format to YUV format and passed back to the CPU for storage.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710488347.9A CN107274346A (en) | 2017-06-23 | 2017-06-23 | Real-time panoramic video splicing system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107274346A true CN107274346A (en) | 2017-10-20 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20171020 |