CN105245841B - A CUDA-based panoramic video monitoring system - Google Patents

A CUDA-based panoramic video monitoring system

Info

Publication number
CN105245841B
CN105245841B (application CN201510647067.9A)
Authority
CN
China
Prior art keywords
image
point
images
suture
overlapping region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510647067.9A
Other languages
Chinese (zh)
Other versions
CN105245841A (en)
Inventor
陶荷梦
禹晶
肖创柏
段娟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN201510647067.9A priority Critical patent/CN105245841B/en
Publication of CN105245841A publication Critical patent/CN105245841A/en
Application granted granted Critical
Publication of CN105245841B publication Critical patent/CN105245841B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Processing (AREA)

Abstract

A CUDA-based panoramic video monitoring system first estimates the overlapping regions between multiple video images using the phase correlation method, then extracts and registers SURF feature points only within those regions; this shortens the algorithm's running time and reduces subsequent mismatches. The invention then proposes a fusion algorithm combining an improved optimal seam line with a multi-resolution method, which eliminates edge jumps and ghosting and improves the visual quality of the video. Finally, the fusion stage is accelerated on the GPU, further increasing stitching speed. Experimental results show that the method achieves real-time stitching of 3 surveillance video streams at a frame rate of 20 fps, better satisfying the real-time requirements of video stitching than a traditional CPU implementation.

Description

A CUDA-based panoramic video monitoring system
Technical field
The invention belongs to the fields of computer vision and parallel computing, and relates to a CUDA-based panoramic video monitoring system.
Background technology
Humans acquire large amounts of information every day, and visual information is especially important. Images, as an important channel through which humans obtain visual information, are therefore particularly significant. However, as people demand scene information over an ever wider visual range, the viewing angle and field of view that ordinary cameras or video cameras can capture are very limited, and in many cases larger, more complete video images are needed to obtain the desired information. Therefore, to obtain a wider viewing angle, multi-camera video stitching is essential.
Video stitching takes video images acquired simultaneously by multiple capture devices at different positions and angles, with partially overlapping regions, and produces a panoramic video image through registration, fusion, and related techniques. The basis of video stitching technology is image stitching. The basic steps of image stitching are: image acquisition, image registration, projective transformation, and image fusion. Of these, image registration and image fusion are the most algorithmically complex and the most time-consuming steps.
Image registration is a fundamental problem in image processing. It matches images acquired at different times, from different viewpoints, by different sensors, or under different shooting conditions; its ultimate goal is to establish the correspondence between two images and determine the geometric transformation relating one image to the other.
Kuglin proposed the phase correlation registration method in 1975, which uses the translation property of the Fourier transform to register images. However, due to fundamental properties of the Fourier transform, this method is only suitable for pixel-level registration between two images related by a pure translation; the registration model applies only to parallel views and cannot successfully register images under affine or perspective transformation models. In practice it is also difficult to keep camera positions and imaging planes perfectly parallel; imaging planes generally meet at some angle, so a new method must be adopted.
In 2004, David Lowe summarized the existing invariant-based feature detection algorithms and formally proposed a scale-space-based image local feature that is invariant to image translation, rotation, scaling, and even affine transformation: the SIFT (Scale Invariant Feature Transform) feature detection algorithm. The basic idea of the algorithm: first extract candidate extreme points from the image using a difference-of-Gaussians pyramid, then take the principal direction of the gradient in each candidate point's neighborhood as the point's orientation, and finally extract a stable feature descriptor. In 2006, building on SIFT, Bay et al. proposed the SURF (Speeded Up Robust Features) feature detection algorithm. The main advantage of SURF is that it greatly reduces feature detection time while maintaining SIFT's high robustness, which is critical for a complete video stitching system.
Image fusion technology merges the registered images into a single panoramic image. During fusion, because exposure differs between images, directly stitched images exhibit obvious seams. Szeliski therefore proposed taking a weighted average of the pixel values in the overlapping region, assigning a weighting function so that the transition between images is smooth.
Weighted averaging of pixel values solves the transition problem to some extent, but when the exposure difference between two images is too large, the block-based exposure adjustment technique proposed by Uyttendaele can be used: the image is divided into several blocks, each block is exposure-corrected, and the corrected blocks are then fused again. This method handles the discontinuities caused by excessive exposure differences well, but if mismatches occur during registration, i.e. the projection matrix contains error, the fusion blurs the image and the overlapping region exhibits ghosting.
For the blurring and ghosting problems that appear in the overlapping region, a good solution is to find, within the overlapping region, a seam line across which the surrounding pixel variation is minimal, and on each side of this line to select the pixel values of only one image for projection, instead of simply weight-blending the two images. Efros used the idea of dynamic programming to solve for this optimal path.
Currently the most popular fusion method exploits the multi-resolution characteristics of images at different scales. The multi-resolution fusion method was proposed by Brown and Burt; its main idea is to use a Gaussian-Laplacian pyramid to decompose the image into high-frequency and low-frequency parts, with a different fusion strategy for each. The low-frequency part is blended by weighted summation, giving a smoothing effect; the high-frequency part keeps the information with maximum weight, preserving edges and other variations. The two parts are then recombined to obtain a well-fused result. For real-time video stream stitching, however, the processing speed of this algorithm still cannot meet real-time requirements, and ghosting likewise appears when registration is poor.
For video stitching, because video frames contain many pixels, processing speed has always been a difficulty, and algorithm-stage optimization alone cannot guarantee real-time stitching. Therefore, starting from the programming model, the video stitching system uses CUDA to accelerate the algorithms of the video image fusion stage. CUDA is a parallel computing architecture proposed by NVIDIA; based on the highly parallel GPU, it executes concurrently at high speed on the GPU and greatly improves the running speed of programs. A CUDA programming environment consists mainly of two parts, CPU and GPU: the CPU serves as the host (the Host side) and the GPU as the device (the Device side). The Host and Device sides communicate over a dedicated channel; the Host side handles logical transactions and controls serialized operations, while the Device side executes large-scale parallelized processing tasks. A CUDA parallel computing function that runs on the GPU is called a kernel function.
Summary of the invention
A CUDA-based panoramic video monitoring system is invented herein; under the premise of guaranteeing stitching quality, it improves the efficiency of the algorithm so that the real-time monitoring video is smoother.
Three identical IP cameras mounted in the same horizontal plane acquire video images of different angles and orientations, and the first frame of each camera is captured synchronously. Fig. 1 shows, from left to right, the three adjacent acquired video frame images I1(x, y), I2(x, y) and I3(x, y). First, the Fourier transform is used to compute the translational position relationship (Δx, Δy) between the video images; from the translation parameters (Δx, Δy), the overlapping region between the video images can be approximately computed.
Following the SURF algorithm, box filtering and integral images are used to construct a scale-space pyramid for the image. By changing the size of the box filter and convolving the original image with filters of different sizes in the x, y and xy directions, the multi-scale space functions Dxx, Dyy, Dxy can be formed, as shown in Fig. 2. A multiple of 6 is selected as the base scale interval, and the scale interval doubles at each subsequent layer.
After the scale-space pyramid is built, local extreme points need to be extracted. An expression Δ(H) that closely approximates det(H) is used for the test: if the value of Δ(H) is positive, the point can be judged to be a local extremum. After the local extrema are obtained, non-maximum suppression is performed on them in a 3 × 3 × 3 neighborhood, and the qualifying points are taken as feature points. Fig. 3 shows the feature points collected in the overlapping regions of the video frame images I1(x, y), I2(x, y) and I3(x, y).
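The integral-image machinery behind the box-filter responses can be sketched as follows. This is a minimal illustration under stated assumptions, not the patent's implementation: the integral image gives O(1) box sums for assembling the Dxx, Dyy, Dxy responses, and Δ(H) approximates det(H) with the 0.9 weight on Dxy taken from the SURF literature (the weight is not stated in this text). Function names are illustrative; numpy is assumed.

```python
import numpy as np

def integral_image(img):
    """S[i, j] holds the sum of img[:i+1, :j+1] (cumulative over rows and columns)."""
    return np.cumsum(np.cumsum(img, axis=0), axis=1)

def box_sum(S, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) using the integral image S."""
    def at(r, c):
        # Treat out-of-range (zero-th) rows/columns as an all-zero border.
        return S[r - 1, c - 1] if r > 0 and c > 0 else 0.0
    return at(r1, c1) - at(r0, c1) - at(r1, c0) + at(r0, c0)

def hessian_response(Dxx, Dyy, Dxy):
    """SURF-style approximation of det(H): Delta(H) = Dxx*Dyy - (0.9*Dxy)^2."""
    return Dxx * Dyy - (0.9 * Dxy) ** 2
```

Because every box sum costs four lookups regardless of filter size, enlarging the box filter to move up the scale pyramid costs nothing extra, which is the point of the SURF construction.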
After feature point detection, to ensure that the feature points have rotational and scale invariance, each feature point is assigned a principal direction using Haar wavelets. Within a 60-degree sector of the circular neighborhood around the feature point (the gray area), Haar wavelet responses of size 4σ are computed, where σ is the scale of the feature point. The responses are then placed in a coordinate system with axes dx and dy; each response point is mapped into this coordinate system and accumulated. The direction yielding the maximum response is finally taken as the principal direction.
With the feature point as center, the coordinate axes are rotated to the principal direction, and a square window of side 20σ is chosen and divided into 4 × 4 sub-windows. For each sub-window region of side 5σ, with sampling interval σ, the wavelet responses in the horizontal and vertical directions are computed separately; the resulting wavelet coefficients are denoted dx and dy. The response coefficients are then summed to give Σdx and Σdy, and the sums of their absolute values give Σ|dx| and Σ|dy|. Each sub-window thus yields a 4-dimensional vector (Σdx, Σdy, Σ|dx|, Σ|dy|), and the feature point descriptor is composed of the vectors of all surrounding sub-windows, so the feature vector length is 4 × 4 × 4 = 64. The resulting descriptor has good robustness to rotation, scale, brightness and contrast.
Since the overlapping regions of the two images are similar after SURF detection, when searching for matches of the SURF feature points, the search region is restricted to a neighborhood of the corresponding translated position. This neighborhood can be a circular region of radius 32; matching feature points need only be sought within this circular region, as shown in Fig. 4. This reduces the number of feature points that need to be compared and improves the algorithm's speed.
First, for a sample feature point P1 of image I1(x, y), the nearest feature point P12 and the second-nearest feature point P12′ are found within the radius-32 circle at the corresponding position of the overlapping region in I2(x, y); the ratio of the Euclidean distances between these two feature points and the sample point is then computed. If the ratio is less than a threshold N, the match is considered correct; otherwise it is a false match. Likewise, for a sample feature point P3 of image I3(x, y), the nearest-neighbor feature point P32 and second-nearest feature point P32′ are found within the radius-32 circle at the corresponding position of the overlapping region in I2(x, y), and the Euclidean distance ratio is computed to judge the match.
Nearest-neighbor matching thus yields a series of matched point pairs between the adjacent images, but due to the limitations of the algorithm, the set of feature points inevitably contains many false matches, and the precision of feature extraction also carries some error; this would affect the quality and efficiency of stitching. Therefore the RANSAC algorithm is used to purify the feature points and solve for the transformation matrix.
As shown in Fig. 5, the basic idea of the RANSAC algorithm is: for a given data set, first randomly select two points to determine a line; then set an allowable error threshold for the line, and take the points lying within the threshold as the line's inlier set. This random sampling process is iterated until the number of inliers is maximal and no longer changes; the inlier set determined at that point is the maximal inlier set.
After video image registration is completed, the next step is image fusion. Here a multi-resolution fusion method based on the optimal seam line is used. First the optimal seam line of the video images' overlapping region is found. After the seam is obtained, the Gaussian pyramid representation of each image is computed and the Laplacian pyramid is then derived from the Gaussian pyramid. Next, on each Laplacian pyramid level, a transition blending band is built around the seam line in the overlapping region and each level image is fused by weighted averaging. Finally, each pyramid level is expanded, and the expanded level images are accumulated to obtain the final stitched image.
Because of the stitching algorithm's running time, real-time requirements could not be met, and algorithm-stage optimization had essentially reached its limit. The invention therefore starts from the programming model, applying multithreading principles and the GPU programming model to optimize the image fusion stage on the GPU, as shown in Fig. 6, realizing a real-time panoramic video stitching system with intact stitching quality and smooth video; the stitching result is shown in Fig. 7.
Description of the drawings
Fig. 1 is the 3 video images acquired in the embodiment
Fig. 2 is a schematic diagram of scale-space construction
Fig. 3 is the SURF feature points detected in the overlapping region
Fig. 4 is the search for matching feature points within the circular domain
Fig. 5 is a diagram of the RANSAC algorithm principle
Fig. 6 is the CUDA-based image fusion flow
Fig. 7 is the panoramic video image stitching result
Detailed description of the embodiments
First, the overlapping region of the two images to be stitched is approximately computed with the phase correlation method. If image I1(x, y) and image I2(x, y) are related by a displacement (Δx, Δy), the relationship between the two images can be expressed as:
I1(x, y) = I2(x − Δx, y − Δy)
The normalized cross-power spectrum is defined as:
C(u, v) = F1(u, v)F2*(u, v) / |F1(u, v)F2*(u, v)| = e^(−j2π(uΔx + vΔy))
where F1(u, v) and F2(u, v) are the Fourier transforms of I1(x, y) and I2(x, y), and F2*(u, v) is the complex conjugate of F2(u, v). Taking the inverse Fourier transform of the above yields a two-dimensional impulse function:
δ(x − Δx, y − Δy) = F⁻¹[e^(−j2π(uΔx + vΔy))]
The position of function peak-peak corresponds to translation parameters (the Δ x, Δ y) between two images.If two images it Between only translation relation, then function peak-peak size reaction two images between correlation size, value range be [0, 1].It indicates that two images are identical when value is 1, indicates that two images are entirely different for 0.Pass through translation parameters (Δ x, Δ y) The overlapping region of two images can be gone out with approximate calculation.Extract the SURF characteristic points of each video image overlapping region.
After the feature points are extracted, image registration needs to be carried out. The specific steps are:
(1) First build a KD-tree index with priority.
(2) Traverse the feature point set P1, and find the nearest-neighbor and second-nearest-neighbor feature points within the radius-32 circular domain of the corresponding overlapping region of the video image to be matched.
(3) Compute the ratio of the nearest-neighbor distance to the second-nearest-neighbor distance, Ratio = (nearest-neighbor feature distance) ÷ (second-nearest-neighbor feature distance); when Ratio is less than 0.7, a match is considered found.
(4) Repeat the two steps above until the set P1 has been fully traversed.
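Steps (2)-(4) can be sketched as a brute-force ratio test; the priority KD-tree index of step (1) is omitted for brevity, and all names, radii and the toy descriptors in the test are illustrative rather than the patent's values.

```python
import numpy as np

def ratio_match(desc1, desc2, pts1, pts2, shift, radius=32.0, ratio=0.7):
    """Nearest/second-nearest ratio test; the candidate search for each point
    of image 1 is restricted to a circle of `radius` around its expected
    (translated) position in image 2, as in steps (2)-(3)."""
    matches = []
    expected = pts1 + shift                       # predicted positions in image 2
    for i, (d, e) in enumerate(zip(desc1, expected)):
        in_circle = np.where(np.linalg.norm(pts2 - e, axis=1) <= radius)[0]
        if len(in_circle) < 2:
            continue                              # need both a nearest and a second
        dists = np.linalg.norm(desc2[in_circle] - d, axis=1)
        order = np.argsort(dists)
        nearest, second = dists[order[0]], dists[order[1]]
        if nearest < ratio * second:              # Ratio = nearest / second < 0.7
            matches.append((i, int(in_circle[order[0]])))
    return matches
```

Restricting candidates to the translated circle is what makes the brute-force inner loop cheap enough here; a KD-tree would replace the linear descriptor scan for larger point sets.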
After the coarse feature matches are obtained, the RANSAC algorithm is used to compute the transformation matrix. Since the projection relationship of the same point in two spatial coordinate systems can be determined by a 3 × 3 matrix with 8 undetermined parameters, the 8 unknown parameters of the matrix can be computed from only 4 pairs of matched points. Let the set S consist of the N coarsely matched point pairs; the specific steps of the RANSAC algorithm are:
(1) Randomly select 4 non-collinear matched pairs from the set S, then compute the transformation matrix H1.
(2) Use H1 to compute the error for the remaining N − 4 samples and judge whether each is an inlier. The test is: if p and p′ are a pair of matched points, compute the sum of the distance from H1p to p′ and the distance from H1⁻¹p′ to p, called the mapping error; if it is less than a threshold T, the matched pair is considered an inlier.
(3) Repeat steps (1) and (2); when the number of inliers reaches a maximum and exceeds a threshold M, the initial transformation matrix H is obtained.
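The RANSAC steps above can be sketched as follows, assuming numpy. `fit_homography` solves the 8 unknowns from 4 (or more) pairs by the direct linear transform, and the error is the symmetric sum of forward and backward projection distances as in step (2); the iteration count and thresholds are illustrative, not the patent's values.

```python
import numpy as np

def fit_homography(src, dst):
    """Direct linear transform: solve the 3x3 homography (h33 normalized to 1)
    from >= 4 point pairs via the SVD null space."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def project(H, pts):
    """Apply a homography to an (n, 2) array of points."""
    p = np.c_[pts, np.ones(len(pts))] @ H.T
    return p[:, :2] / p[:, 2:3]

def ransac_homography(src, dst, iters=200, T=1.0, seed=0):
    """Sample 4 pairs, fit H1, count inliers by the symmetric mapping error,
    keep the model with the most inliers (steps (1)-(3))."""
    rng = np.random.default_rng(seed)
    best_H, best_inliers = None, 0
    for _ in range(iters):
        idx = rng.choice(len(src), 4, replace=False)
        H1 = fit_homography(src[idx], dst[idx])
        try:
            H1_inv = np.linalg.inv(H1)
        except np.linalg.LinAlgError:
            continue  # degenerate (near-collinear) sample
        err = (np.linalg.norm(project(H1, src) - dst, axis=1) +
               np.linalg.norm(project(H1_inv, dst) - src, axis=1))
        n = int((err < T).sum())
        if n > best_inliers:
            best_H, best_inliers = H1, n
    return best_H, best_inliers
```

A production version would refit H on the full inlier set after the loop; the sketch stops at the minimal-sample model, which is what steps (1)-(3) literally describe.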
The basic idea of the optimal seam line is: in the overlapping part of the two images, find a seam line such that the color difference and structural difference between the images on its two sides reach a minimum simultaneously, so that on each side of the seam only the pixels of one image are selected to synthesize the panoramic image. The seam solution criterion is therefore:
E(x, y) = Ecolor(x, y)² + Egeometry(x, y)
where Ecolor denotes the color difference of the overlapping-region pixels and Egeometry denotes the structural difference of the overlapping-region pixels. The concrete steps for finding the optimal seam are:
(1) Establish a seam with each pixel of the first row as a starting point, compute the intensity value E at each point, and extend to the next row;
(2) Compare the intensity values E of the 3 pixels in the next row adjacent to each seam's current point, take the point with the minimum intensity value as the seam's propagation direction, update the total intensity value of this seam, and update the seam's current point to the minimum-intensity point of the next row;
(3) From all the seams, select the one with the minimum total intensity value as the best seam.
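Steps (1)-(3) describe a greedy, row-by-row seam search (each seam always steps to the cheapest of its 3 neighbors below, rather than running full dynamic programming). A minimal sketch under that reading, with illustrative names:

```python
import numpy as np

def best_seam(E):
    """Greedy seam search over an energy map E: grow one seam from every
    pixel of the first row, stepping to the cheapest of the 3 neighbors in
    the next row, then keep the seam with the smallest total energy."""
    rows, cols = E.shape
    best_path, best_total = None, np.inf
    for start in range(cols):
        path, total, c = [start], float(E[0, start]), start
        for r in range(1, rows):
            lo, hi = max(0, c - 1), min(cols, c + 2)   # 3 neighbors, clipped
            c = lo + int(np.argmin(E[r, lo:hi]))
            path.append(c)
            total += float(E[r, c])
        if total < best_total:
            best_path, best_total = path, total
    return best_path, best_total
```

Greedy propagation can miss the globally cheapest seam that full dynamic programming would find, but it is cheap and matches the stepwise procedure described above.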
Before image fusion, the images must undergo projective transformation using the transformation matrix H obtained from the first group of synchronized frames. Doing this on the CPU alone takes too much computation time and cannot process frames in real time. To achieve real-time performance, the coordinate transformation process and the image fusion process are ported to the GPU for processing. Since the projective transformation applies the same coordinate transform to every pixel of the whole image and then copies the pixel color values, the process has good parallelism.
Let a pair of matched feature points obtained in feature matching be p and p′. By the pinhole imaging principle, a three-dimensional space point corresponds to pixels at different locations in the two images I1(x, y) and I2(x, y), so the pixels are in one-to-one correspondence, as shown in Fig. 3-7 above. Through the perspective projection mapping, a 3 × 3 homography matrix H can be used to register the images. The homography matrix computes the projected positions, in two different two-dimensional images, of points lying on the same three-dimensional plane; it is a one-to-one mapping. Its 8-parameter matrix form is:
H = [ h11 h12 h13
      h21 h22 h23
      h31 h32  1  ]
To ensure that no "image holes" appear in the coordinate-transformed image, the inverse of the coordinate transformation algorithm is used: for each pixel in the transformed image, its corresponding pixel in the original image is looked up, and that pixel's color value is assigned to the pixel in the transformed image. The inverse coordinate transform is as follows:
(x, y, 1)ᵀ ∼ H⁻¹(x′, y′, 1)ᵀ
where (x′, y′) is a pixel of the transformed image and (x, y) is its corresponding point in the original image.
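The inverse (backward) mapping described above can be sketched with nearest-neighbor sampling; it avoids holes because every output pixel is assigned exactly once. Bilinear interpolation, which a real implementation would likely use, is omitted for brevity, and all names are illustrative.

```python
import numpy as np

def inverse_warp(src, H, out_shape):
    """Backward mapping: for every output pixel (x', y'), look up its
    pre-image (x, y) ~ H^-1 (x', y', 1) in the source and copy that value."""
    H_inv = np.linalg.inv(H)
    h, w = out_shape
    ys, xs = np.mgrid[0:h, 0:w]
    p = np.stack([xs, ys, np.ones_like(xs)], axis=-1) @ H_inv.T  # (h, w, 3)
    sx = np.rint(p[..., 0] / p[..., 2]).astype(int)              # source x
    sy = np.rint(p[..., 1] / p[..., 2]).astype(int)              # source y
    out = np.zeros(out_shape, src.dtype)
    valid = (0 <= sx) & (sx < src.shape[1]) & (0 <= sy) & (sy < src.shape[0])
    out[valid] = src[sy[valid], sx[valid]]
    return out
```

This per-pixel loop-free formulation is also the shape a CUDA kernel would take: one thread per output pixel, each applying the same H⁻¹, which is the parallelism the text points out.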
The coordinate-transformed images are then downsampled to generate Gaussian pyramids; adjacent levels of the Gaussian pyramid, after interpolation (upsampling), are subtracted to obtain the Laplacian pyramid; the optimal seam algorithm is used to fuse each level of the Laplacian pyramid images; and finally the image is reconstructed by interpolating and accumulating the target image levels, generating the final panoramic video monitoring image.
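The pyramid build and reconstruction can be sketched as follows. For brevity this uses plain decimation and nearest-neighbor upsampling instead of Gaussian filtering, which makes the build/collapse round trip exactly invertible; in the method above, each Laplacian level of the two images would additionally be blended across the seam's transition band before collapsing. Names are illustrative.

```python
import numpy as np

def downsample(img):
    return img[::2, ::2]

def upsample(img, shape):
    up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return up[:shape[0], :shape[1]]   # crop to match odd-sized levels

def laplacian_pyramid(img, levels):
    """Gaussian pyramid by downsampling; each Laplacian level is the
    difference between a Gaussian level and the upsampled next level."""
    gp = [img]
    for _ in range(levels - 1):
        gp.append(downsample(gp[-1]))
    lp = [gp[i] - upsample(gp[i + 1], gp[i].shape) for i in range(levels - 1)]
    lp.append(gp[-1])                 # coarsest level kept as-is
    return lp

def collapse(lp):
    """Reconstruct: upsample from the coarsest level, adding back each
    Laplacian level on the way up."""
    img = lp[-1]
    for lev in reversed(lp[:-1]):
        img = lev + upsample(img, lev.shape)
    return img
```

Fusing per level before `collapse` is what gives multi-resolution blending its seam-hiding effect: low frequencies blend over a wide band while high frequencies switch sharply at the seam.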

Claims (1)

1. A CUDA-based panoramic video monitoring system, characterized in that: three identical IP cameras in the same horizontal plane acquire video images of different angles and orientations, and the first frame of each camera is captured synchronously; from left to right the acquired adjacent three video frame images are I1(x, y), I2(x, y) and I3(x, y); first, the translational position relationship (Δx, Δy) between the video images is computed using the Fourier transform, and from the translation parameters (Δx, Δy) the overlapping region between the video images can be approximately computed;
Following the SURF algorithm, box filtering and integral images are used to construct a scale-space pyramid for the image; by changing the size of the box filter, i.e. convolving the original image with filters of different sizes in the x, y and xy directions, the multi-scale space functions Dxx, Dyy, Dxy can be formed; a multiple of 6 is selected as the base scale interval, and the scale interval of each subsequent layer doubles;
After the scale-space pyramid is built, local extreme points need to be extracted; an expression Δ(H) approximating det(H) is used for the test: if the value of Δ(H) is positive, the point can be judged to be a local extremum; after the local extrema are obtained, non-maximum suppression is performed on them in a 3 × 3 × 3 neighborhood, and the qualifying points are taken as feature points;
After feature point detection, to ensure the feature points have rotational and scale invariance, each feature point is assigned a principal direction using Haar wavelets; within a 60-degree sector of the circular neighborhood around the feature point, Haar wavelet responses of size 4σ are computed, where σ is the scale of the feature point; the responses, i.e. the wavelet coefficients dx and dy, are placed in a coordinate system, and each response point is mapped into this coordinate system and accumulated; the direction yielding the maximum response is finally taken as the principal direction;
With the feature point as center, the coordinate axes are rotated to the principal direction, and a square window of side 20σ is chosen and divided into 4 × 4 sub-windows; for each sub-window region of side 5σ, with sampling interval σ, the wavelet responses in the horizontal and vertical directions are computed separately, and the resulting wavelet coefficients are denoted dx and dy; the response coefficients are then summed to give Σdx and Σdy, and the sums of their absolute values give Σ|dx| and Σ|dy|; each sub-window yields a 4-dimensional vector (Σdx, Σdy, Σ|dx|, Σ|dy|), and the feature point descriptor is composed of the vectors of all surrounding sub-windows, so the feature vector length is 4 × 4 × 4 = 64; the resulting descriptor has good robustness to rotation, scale, brightness and contrast;
Since the overlapping regions of the two images are similar after SURF detection, when searching for matches of the SURF feature points, the search region is restricted to a neighborhood of the corresponding translated position; this neighborhood is a circular region of radius 32; matching feature points need only be sought within this circular region; this reduces the number of feature points that need to be compared and improves the algorithm's speed;
First, for a sample feature point P1 of image I1(x, y), the nearest feature point P12 and the second-nearest feature point P12′ are found within the radius-32 circle at the corresponding position of the overlapping region in I2(x, y), and the ratio of the Euclidean distances between these two feature points and the sample point is computed; if the ratio is less than a threshold N, the match is considered correct, otherwise it is a false match; likewise, for a sample feature point P3 of image I3(x, y), the nearest-neighbor feature point P32 and second-nearest feature point P32′ are found within the radius-32 circle at the corresponding position of the overlapping region in I2(x, y), and the Euclidean distance ratio is computed to judge the match;
Nearest-neighbor matching thus yields a series of matched point pairs between the adjacent images, but due to the limitations of the algorithm, the set of feature points inevitably contains many false matches, and the precision of feature extraction also carries some error, which would affect the quality and efficiency of stitching; therefore the RANSAC algorithm is used to purify the feature points and solve for the transformation matrix;
The basic idea of the RANSAC algorithm is: for a given data set, first randomly select two points to determine a line; then set an allowable error threshold for the line, and take the points lying within the threshold as the line's inlier set; this random sampling process is iterated until the number of inliers is maximal and no longer changes, and the inlier set determined at that point is the maximal inlier set;
After video image registration is completed, the next step is image fusion; a multi-resolution fusion method based on the optimal seam line is used; first the optimal seam line of the video images' overlapping region is found; after the seam is obtained, the Gaussian pyramid representation of the image is computed, and the Laplacian pyramid of the image is then derived from the Gaussian pyramid; next, on each Laplacian pyramid level, a transition blending band is built around the seam line in the overlapping region, and each level image is fused by weighted averaging; finally, each pyramid level is expanded, and the expanded level images are accumulated to obtain the final stitched image;
Because of the stitching algorithm's running time, real-time requirements could not be met; algorithm-stage optimization had essentially reached its limit, so the system starts from the programming model, applying multithreading principles and the GPU programming model to optimize the image fusion stage on the GPU, realizing a real-time panoramic video stitching system with intact stitching quality and smooth video;
First, the overlapping region of the two images to be stitched is approximately computed with the phase correlation method; if image I1(x, y) and image I2(x, y) are related by a displacement (Δx, Δy), the relationship between the two images is expressed as:
I1(x, y) = I2(x − Δx, y − Δy)
The normalized cross-power spectrum is defined as:
C(u, v) = F1(u, v)F2*(u, v) / |F1(u, v)F2*(u, v)| = e^(−j2π(uΔx + vΔy))
where F1(u, v) and F2(u, v) are the Fourier transforms of I1(x, y) and I2(x, y), and F2*(u, v) is the complex conjugate of F2(u, v); taking the inverse Fourier transform of the above yields a two-dimensional impulse function:
δ(x − Δx, y − Δy) = F⁻¹[e^(−j2π(uΔx + vΔy))]
The position of the function's maximum peak corresponds to the translation parameters (Δx, Δy) between the two images; if there is only a translation relationship between the two images, the magnitude of the function's maximum peak reflects the degree of correlation between them, with value range [0, 1]; a value of 1 indicates the two images are identical, and 0 indicates they are entirely different; from the translation parameters (Δx, Δy) the overlapping region of the two images can be approximately computed; the SURF feature points of each video image's overlapping region are extracted;
It needs to carry out image registration after extraction characteristic point, the specific steps are:
(1) First build a KD-tree index with priority search;
(2) Traverse the feature point set P1 and, for each point, find the nearest and second-nearest neighboring feature points within the 32 circular search domains of the corresponding overlapping region of the video image to be matched;
(3) Compute the ratio of the nearest-neighbor feature distance to the second-nearest-neighbor feature distance, Ratio = (nearest distance) ÷ (second-nearest distance); when Ratio is less than 0.7, a matching pair is considered found;
(4) Repeat steps (2) and (3) until the set P1 has been fully traversed;
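The ratio test of steps (2)–(3) can be written as the following sketch, where a brute-force neighbor search stands in for the priority KD-tree used in the text (function name and descriptor layout are illustrative assumptions):

```python
import numpy as np

def ratio_match(desc1, desc2, ratio=0.7):
    """Ratio-test matching between two descriptor arrays (one row per
    feature descriptor). Returns (index-in-desc1, index-in-desc2) pairs."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)   # distance to every candidate
        order = np.argsort(dists)
        nearest, second = dists[order[0]], dists[order[1]]
        if nearest < ratio * second:                # Ratio = nearest/second < 0.7
            matches.append((i, order[0]))
    return matches
```

Restricting candidates to the estimated overlapping region, as the system does, both shortens this search and reduces mismatches.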
After the coarse feature matches are obtained, the transformation matrix is computed using the RANSAC algorithm. Since the projection relation of the same point between the two spatial coordinate systems is determined by a 3×3 matrix with 8 undetermined parameters, the 8 unknown parameters of the matrix can be computed from only 4 pairs of matched points. Let the set S consist of the N coarsely matched point pairs; the specific steps of the RANSAC algorithm are:
(1) Randomly select 4 pairs of non-collinear matched points from the set S and compute the transformation matrix H1;
(2) Use the transformation matrix H1 to compute the error for the remaining N−4 sample pairs and judge whether each is an inlier. The judgment step is: if p and p′ are a pair of matched points, compute the sum of the distances between the projection of each point and its counterpart, called the mapping error; if it is less than a threshold T, this matched pair is considered an inlier;
(3) Repeat steps (1) and (2); when the number of inliers reaches its maximum and exceeds a threshold M, the initial transformation matrix H is obtained;
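A minimal NumPy sketch of this RANSAC loop follows. The Direct Linear Transform fitting routine, the iteration count, and the thresholds are illustrative assumptions, not values taken from the patent:

```python
import numpy as np

def dlt_homography(src, dst):
    """Fit a 3x3 homography from >= 4 point correspondences (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]          # fix the 9th entry, leaving 8 unknowns

def project(H, pts):
    p = np.column_stack([pts, np.ones(len(pts))]) @ H.T
    return p[:, :2] / p[:, 2:3]

def ransac_homography(src, dst, iters=200, T=2.0, seed=0):
    """Sample 4 pairs, fit H, count inliers whose mapping error is
    below threshold T, and keep the best model, as in steps (1)-(3)."""
    rng = np.random.default_rng(seed)
    best_H, best_inliers = None, 0
    for _ in range(iters):
        idx = rng.choice(len(src), 4, replace=False)
        H = dlt_homography(src[idx], dst[idx])
        err = np.linalg.norm(project(H, src) - dst, axis=1)
        inliers = int((err < T).sum())
        if inliers > best_inliers:
            best_H, best_inliers = H, inliers
    return best_H, best_inliers
```

In the patent's pipeline the surviving inlier set defines the initial transformation matrix H used for all subsequent frames.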
The basic idea of the optimal seam line is to find, within the overlapping part of the two images, a seam such that the color difference and the structural difference between the images on its two sides are minimized simultaneously, so that only pixels from a single image are selected on each side of the seam when synthesizing the panoramic image. The seam selection criterion is therefore:
E (x, y)=Ecolor(x,y)2+Egeometry(x,y)
where Ecolor denotes the color difference of an overlapping-region pixel and Egeometry denotes its structural difference. The specific steps for finding the optimal seam are:
(1) Establish a seam starting from each pixel of the first row, compute the intensity value E at each point, and extend to the next row;
(2) For the current point of each seam, compare the intensity values E of the 3 adjacent pixels in the next row, take the point with the minimum intensity value as the propagation direction of that seam, add it to the seam's total intensity value, and update the seam's current point to that minimum-value point of the next row;
(3) From all the seams, select the one with the minimum total intensity value as the optimal seam;
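Steps (1)–(3) can be sketched as a greedy search over a precomputed criterion map E. This is a toy illustration under the assumption that E(x, y) has already been evaluated per pixel; the patent's Ecolor/Egeometry computation is not reproduced here:

```python
import numpy as np

def find_seam(E):
    """Grow one seam from every pixel of the first row, always stepping
    to the cheapest of the three adjacent pixels in the next row, then
    keep the seam with the lowest total intensity value."""
    rows, cols = E.shape
    best_path, best_cost = None, np.inf
    for start in range(cols):
        x, path, cost = start, [start], E[0, start]
        for y in range(1, rows):
            lo, hi = max(0, x - 1), min(cols, x + 2)   # the 3 adjacent pixels
            x = lo + int(np.argmin(E[y, lo:hi]))
            path.append(x)
            cost += E[y, x]
        if cost < best_cost:
            best_path, best_cost = path, cost
    return best_path, float(best_cost)
```

The returned path lists, per row, the column where the composite switches from one source image to the other.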
Before image fusion, the images must undergo projective transformation with the initial transformation matrix H obtained from the first group of synchronized frames. Doing this on the CPU alone takes too much computation time and cannot achieve real-time processing of image frames. To achieve real-time performance, the coordinate transformation and image fusion processes are ported to the GPU. Since the projective transformation applies an identical coordinate transform to every pixel of the whole image and then copies the pixel color values, the process has good parallelism.
Let a pair of matched feature points obtained in the feature matching step be p1 = (x1, y1) and p2 = (x2, y2). According to the pinhole imaging principle, a single three-dimensional spatial point corresponds to pixels at different locations in the two images I1(x, y) and I2(x, y), so these pixels are in one-to-one correspondence. Through the perspective projection mapping function, the 3×3 initial transformation matrix H is used to register the images. The homography matrix computes the projected positions, in two different two-dimensional images, of a point lying on the same three-dimensional plane, and it is a one-to-one mapping. Its 8-parameter matrix form is as follows:

H = | h11  h12  h13 |
    | h21  h22  h23 |
    | h31  h32   1  |

with the last entry normalized to 1, leaving 8 unknown parameters.
To ensure that no "image holes" appear in the coordinate-transformed image, the computation uses the inverse of the coordinate transformation: for each pixel of the transformed image, its corresponding pixel in the original image is looked up, and that point's color value is assigned to the pixel in the transformed image. The inverse coordinate transform is as follows:

(x, y, 1)^T ∝ H^(-1) · (x', y', 1)^T

where (x', y') is a pixel of the transformed image and (x, y) is its source location in the original image.
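The backward-mapping rule can be sketched as follows. This is a vectorized NumPy illustration of the per-pixel work that each GPU thread would perform independently; nearest-neighbor sampling and the function name are assumptions of this sketch:

```python
import numpy as np

def inverse_warp(src, H, out_shape):
    """Backward mapping: for every output pixel (x', y'), apply H^-1 to
    find its source coordinates, round to the nearest source pixel, and
    copy its value, so the output contains no holes."""
    h, w = out_shape
    ys, xs = np.mgrid[0:h, 0:w]
    dst = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    s = np.linalg.inv(H) @ dst                     # homogeneous source coords
    sx = np.rint(s[0] / s[2]).astype(int)
    sy = np.rint(s[1] / s[2]).astype(int)
    inside = (0 <= sx) & (sx < src.shape[1]) & (0 <= sy) & (sy < src.shape[0])
    flat = np.zeros(h * w, dtype=src.dtype)
    flat[inside] = src[sy[inside], sx[inside]]     # copy the color value
    return flat.reshape(out_shape)
```

Because every output pixel is computed independently, this loop maps naturally onto one CUDA thread per pixel, which is what makes the GPU port of this stage effective.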
The coordinate-transformed image is then downsampled to generate a Gaussian pyramid; each Gaussian pyramid layer is interpolated (upsampled) and subtracted from its adjacent layer to obtain the Laplacian pyramid. The optimal seam line algorithm is used to fuse every layer of the Laplacian pyramid images, and finally the image is reconstructed by interpolating and accumulating toward the target image, generating the final panoramic video monitoring image.
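The pyramid build/fuse/collapse cycle can be sketched as below. This is a toy NumPy illustration: box-filter downsampling stands in for Gaussian filtering, the per-level mask-weighted fusion stands in for the optimal-seam fusion, and all names are assumptions of this sketch:

```python
import numpy as np

def downsample(img):
    # 2x2 box filter then decimate: a minimal stand-in for Gaussian blur
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img, shape):
    # nearest-neighbor expansion back to the finer layer's size
    return np.kron(img, np.ones((2, 2)))[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels=3):
    pyr = []
    for _ in range(levels - 1):
        small = downsample(img)
        pyr.append(img - upsample(small, img.shape))  # band-pass residual
        img = small
    pyr.append(img)                                   # coarsest Gaussian level
    return pyr

def blend_reconstruct(pyr1, pyr2, masks):
    """Fuse each pyramid level with a per-level seam mask, then collapse
    the pyramid by upsampling and accumulating toward the target image."""
    img = None
    for l1, l2, m in zip(reversed(pyr1), reversed(pyr2), reversed(masks)):
        level = m * l1 + (1 - m) * l2
        img = level if img is None else level + upsample(img, level.shape)
    return img
```

Blending each frequency band separately along the seam is what removes the edge jumps and ghosting that single-band fusion leaves behind.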
CN201510647067.9A 2015-10-08 2015-10-08 A kind of panoramic video monitoring system based on CUDA Active CN105245841B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510647067.9A CN105245841B (en) 2015-10-08 2015-10-08 A kind of panoramic video monitoring system based on CUDA


Publications (2)

Publication Number Publication Date
CN105245841A CN105245841A (en) 2016-01-13
CN105245841B true CN105245841B (en) 2018-10-09

Family

ID=55043313

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510647067.9A Active CN105245841B (en) 2015-10-08 2015-10-08 A kind of panoramic video monitoring system based on CUDA

Country Status (1)

Country Link
CN (1) CN105245841B (en)

Families Citing this family (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105869120B (en) * 2016-06-16 2018-10-26 哈尔滨工程大学 A kind of real-time optimization method of image mosaic
CN106372598A (en) * 2016-08-31 2017-02-01 广州精点计算机科技有限公司 Image stabilizing method based on image characteristic detection for eliminating video rotation and jittering
CN107920252B (en) * 2016-10-11 2021-11-12 阿里巴巴集团控股有限公司 Panoramic video data processing method, device and system
CN106296719A (en) * 2016-11-01 2017-01-04 山东省科学院情报研究所 The intelligent safety check instrument of blending algorithm based on a kind of local invariant features and safety inspection method
CN106954044B (en) * 2017-03-22 2020-05-26 山东瀚岳智能科技股份有限公司 Video panorama processing method and system
CN106991665B (en) * 2017-03-24 2020-03-17 中国人民解放军国防科学技术大学 Parallel computing method based on CUDA image fusion
CN107301620B (en) * 2017-06-02 2019-08-13 西安电子科技大学 Method for panoramic imaging based on camera array
CN107240070A (en) * 2017-06-08 2017-10-10 广东容祺智能科技有限公司 A kind of unmanned plane image mosaic system and method based on emergency processing
CN107301618B (en) * 2017-06-21 2019-11-22 华中科技大学 Based on the GPU basis matrix accelerated parallel and homography matrix estimation method and system
CN111418213B (en) * 2017-08-23 2022-07-01 联发科技股份有限公司 Method and apparatus for signaling syntax for immersive video coding
CN107767410A (en) * 2017-10-27 2018-03-06 中国电子科技集团公司第三研究所 The multi-band image method for registering of the multispectral system acquisition of polyphaser parallel optical axis
CN107909637A (en) * 2017-10-31 2018-04-13 黑龙江省科学院自动化研究所 A kind of magnanimity monitor video uses and presentation mode
CN109800765A (en) * 2017-11-17 2019-05-24 三橡股份有限公司 A kind of forming process sampling check for quality method of ocean oil hose
CN109948398B (en) * 2017-12-20 2024-02-13 深圳开阳电子股份有限公司 Image processing method for panoramic parking and panoramic parking device
CN108093221B (en) * 2017-12-27 2020-09-25 南京大学 Suture line-based real-time video splicing method
CN108234924B (en) * 2018-02-02 2019-02-19 北京百度网讯科技有限公司 Video mixed flow method, apparatus, equipment and computer-readable medium
CN109493278A (en) * 2018-10-24 2019-03-19 北京工业大学 A kind of large scene image mosaic system based on SIFT feature
CN109474697B (en) * 2018-12-11 2019-07-26 长春金阳高科技有限责任公司 A kind of monitoring system audio/video transmission method
CN109618135B (en) * 2018-12-12 2019-10-29 苏州新海宜电子技术有限公司 A kind of quantum cryptography type video monitoring system for intelligent self-organized network communication
CN109859105B (en) * 2019-01-21 2023-01-03 桂林电子科技大学 Non-parameter image natural splicing method
CN110120012B (en) * 2019-05-13 2022-07-08 广西师范大学 Video stitching method for synchronous key frame extraction based on binocular camera
CN110136164B (en) * 2019-05-21 2022-10-25 电子科技大学 Method for removing dynamic background based on online transmission transformation and low-rank sparse matrix decomposition
CN110189322B (en) * 2019-06-04 2021-11-19 广州视源电子科技股份有限公司 Flatness detection method, device, equipment, storage medium and system
CN110728176B (en) * 2019-08-30 2022-11-11 长安大学 Unmanned aerial vehicle visual image feature rapid matching and extracting method and device
CN111383204A (en) * 2019-12-19 2020-07-07 北京航天长征飞行器研究所 Video image fusion method, fusion device, panoramic monitoring system and storage medium
CN111080525B (en) * 2019-12-19 2023-04-28 成都海擎科技有限公司 Distributed image and graphic primitive splicing method based on SIFT features
CN113129213A (en) * 2020-01-14 2021-07-16 中国计量大学 Automatic splicing and fusing method for digital holographic subaperture phase diagram
CN111898589B (en) * 2020-08-26 2023-11-14 中国水利水电科学研究院 Unmanned aerial vehicle image rapid registration method based on GPU+feature recognition
CN112163996B (en) * 2020-09-10 2023-12-05 沈阳风驰软件股份有限公司 Flat angle video fusion method based on image processing
CN112735198A (en) * 2020-12-31 2021-04-30 深兰科技(上海)有限公司 Experiment teaching system and method
CN113256492B (en) * 2021-05-13 2023-09-12 上海海事大学 Panoramic video stitching method, electronic equipment and storage medium
CN113506214B (en) * 2021-05-24 2023-07-21 南京莱斯信息技术股份有限公司 Multi-path video image stitching method

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101247513A (en) * 2007-12-25 2008-08-20 谢维信 Method for real-time generating 360 degree seamless full-view video image by single camera
CN101414379A (en) * 2007-10-17 2009-04-22 日电(中国)有限公司 Apparatus and method for generating panorama image
CN101840570A (en) * 2010-04-16 2010-09-22 广东工业大学 Fast image splicing method
CN102088569A (en) * 2010-10-13 2011-06-08 首都师范大学 Sequence image splicing method and system of low-altitude unmanned vehicle
CN103856727A (en) * 2014-03-24 2014-06-11 北京工业大学 Multichannel real-time video splicing processing system
CN103997609A (en) * 2014-06-12 2014-08-20 四川川大智胜软件股份有限公司 Multi-video real-time panoramic fusion splicing method based on CUDA
CN104301677A (en) * 2014-10-16 2015-01-21 北京十方慧通科技有限公司 Panoramic video monitoring method and device orienting large-scale scenes
CN104463859A (en) * 2014-11-28 2015-03-25 中国航天时代电子公司 Real-time video stitching method based on specified tracking points
CN104599258A (en) * 2014-12-23 2015-05-06 大连理工大学 Anisotropic characteristic descriptor based image stitching method
CN104680516A (en) * 2015-01-08 2015-06-03 南京邮电大学 Acquisition method for high-quality feature matching set of images

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7778461B2 (en) * 2006-05-05 2010-08-17 New Jersey Institute Of Technology System and/or method for image tamper detection


Also Published As

Publication number Publication date
CN105245841A (en) 2016-01-13

Similar Documents

Publication Publication Date Title
CN105245841B (en) A kind of panoramic video monitoring system based on CUDA
CN105957007B (en) Image split-joint method based on characteristic point plane similarity
Wang et al. 360sd-net: 360 stereo depth estimation with learnable cost volume
CN109064404A (en) It is a kind of based on polyphaser calibration panorama mosaic method, panoramic mosaic system
CN104599258B (en) A kind of image split-joint method based on anisotropic character descriptor
CN112085659B (en) Panorama splicing and fusing method and system based on dome camera and storage medium
Tang et al. ESTHER: Joint camera self-calibration and automatic radial distortion correction from tracking of walking humans
CN104392416B (en) Video stitching method for sports scene
CN107833179A (en) The quick joining method and system of a kind of infrared image
CN101853524A (en) Method for generating corn ear panoramic image by using image sequence
CN103856727A (en) Multichannel real-time video splicing processing system
CN107316275A (en) A kind of large scale Microscopic Image Mosaicing algorithm of light stream auxiliary
CN105005964B (en) Geographic scenes panorama sketch rapid generation based on video sequence image
CN107016646A (en) One kind approaches projective transformation image split-joint method based on improved
CN109118544B (en) Synthetic aperture imaging method based on perspective transformation
CN107767339B (en) Binocular stereo image splicing method
CN109064409A (en) A kind of the visual pattern splicing system and method for mobile robot
CN112184604B (en) Color image enhancement method based on image fusion
WO2022222077A1 (en) Indoor scene virtual roaming method based on reflection decomposition
CN111553841B (en) Real-time video splicing method based on optimal suture line updating
CN113221665A (en) Video fusion algorithm based on dynamic optimal suture line and improved gradual-in and gradual-out method
CN109697696B (en) Benefit blind method for panoramic video
CN111553939A (en) Image registration algorithm of multi-view camera
TW201926244A (en) Real-time video stitching method
JP4327919B2 (en) A method to recover radial distortion parameters from a single camera image

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant