CN104301677B - Method and device for panoramic video monitoring of large scenes - Google Patents

Method and device for panoramic video monitoring of large scenes

Info

Publication number
CN104301677B
CN104301677B CN201410547110.XA
Authority
CN
China
Prior art keywords
video
module
frame
splicing
parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201410547110.XA
Other languages
Chinese (zh)
Other versions
CN104301677A (en)
Inventor
刘启芳
黄美姜
陶荣伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BEIJING SHIFANG HUITONG TECHNOLOGY Co Ltd
Original Assignee
BEIJING SHIFANG HUITONG TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BEIJING SHIFANG HUITONG TECHNOLOGY Co Ltd filed Critical BEIJING SHIFANG HUITONG TECHNOLOGY Co Ltd
Priority to CN201410547110.XA
Publication of CN104301677A
Application granted
Publication of CN104301677B

Landscapes

  • Closed-Circuit Television Systems (AREA)

Abstract

The present invention provides a method and device for panoramic video monitoring of large scenes. The device comprises front-end video capture and encoding equipment and data transmission equipment; the large-scene monitoring system comprises hardware and software for data reception, video decoding, video processing, and output encoding. The method covers video data reception, video decoding, video registration, real-time GPU stitching, and color blending. The panoramic monitoring image sequence produced by stitching and fusion is output continuously to a display device over HDMI/DVI, while the panoramic image is also encoded in real time for transmission and storage over the network. The method and device guarantee the stitching effect and quality of the panoramic video, improve stitching efficiency to meet the real-time requirement, and yield a panoramic video that is more natural and true to the scene. In practical applications, they satisfy the requirements of subsequent panoramic video stitching, simplify the installation and arrangement of field equipment, and are easy to put into practice.

Description

Method and device for panoramic video monitoring of large scenes
Technical field
The present invention relates to the technical field of video surveillance, and in particular to a method and device for panoramic video monitoring of large scenes.
Background technology
In recent years, with changing social conditions, video surveillance has been applied ever more widely, and a variety of monitoring devices and systems with specific functions for different application scenarios have emerged. However, real-time video surveillance of large scenes, for which there is now an urgent demand, remains rare. In places such as large squares, large event venues, and major intersections, security staff must both see the details of any particular corner clearly and keep the overall situation of the large scene, including the main trajectories of moving subjects, in view at a glance. For such large-scene applications, a traditional multi-camera surveillance system can show the details of important regions clearly, but the views lack continuity, and the limited field of view of a single camera makes it impossible to grasp the global situation. A panoramic or fisheye surveillance camera offers an ultra-wide field of view and can capture the overall situation, but it cannot resolve details, suffers from severe distortion, and lacks real-time performance.
High-definition panoramic video stitching methods in the prior art suffer from low stitching efficiency, because HD video sources carry large volumes of data and the stitching must run online in real time, so the real-time requirement of video surveillance cannot be met. To address this, a real-time high-speed HD panoramic video stitching method was designed: first obtain multiple real-time video streams that satisfy the stitching condition, then stitch the video images, choose a projection surface, establish the coordinate mapping between each single video and the panoramic projection surface, and finally apply the coordinate mapping directly to the video streams to obtain the panoramic video in real time. This method has several limitations in practice. It must repeatedly check, before formal stitching, whether each pair of videos satisfies the stitching condition, and if not, the capture attitude of the front-end cameras must be adjusted further. Computing the stitching projection parameters only once cannot guarantee their accuracy. There is no synchronization between the stitched videos, so moving targets may appear ghosted or vanish without trace in the overlap regions. Finally, the exposure fusion focuses on brightness adjustment of the overlap region and cannot guarantee color consistency of the final panoramic mosaic. In view of this, the present invention makes targeted improvements and extensions.
Invention content
In view of the above deficiencies of the prior art, the object of the present invention is to propose a device and system for panoramic video monitoring of large scenes. The main technical problems to be solved are:
1. Improving the accuracy of the projection mapping used for panoramic video stitching. Computing the stitching projection parameters only once cannot guarantee their accuracy; in particular, when the scene is very simple or very cluttered, a shortage or an excess of feature points affects the registration accuracy between images.
2. Synchronizing the content of the multiple video streams to improve the authenticity and reasonableness of the monitoring. The clocks of the front-end capture and encoding devices may be out of step, and since the captured video data is mostly transmitted over an IP network, the data arriving at the receiver is often out of order. Stitching on such a basis produces various anomalies, particularly in the overlap regions.
3. Improving the efficiency of panoramic video stitching to meet the real-time demand of monitoring. Surveillance increasingly requires high-definition video sources, and the large data volume of HD video also increases the computational load of the stitching projection, so general-purpose processing stitches too slowly to meet the real-time requirement of video surveillance.
The technical solution of the present invention is as follows:
The present invention comprises a large-scene monitoring device and system. The large-scene monitoring device includes front-end video capture and encoding equipment and data transmission equipment; the large-scene monitoring system includes hardware and software for data reception, video decoding, video processing, and output encoding.
The front-end video capture device of the present invention obtains 3-4 video streams simultaneously; together these streams seamlessly cover a scene area of up to 180°. After encoding, a unified time synchronization signal is added to each stream, which is then transferred to the large-scene monitoring system by the sending module. The large-scene monitoring system performs video data reception, video decoding, video registration, real-time GPU stitching, and color blending. The panoramic monitoring image sequence produced by stitching and fusion is output continuously to a display device over HDMI/DVI, while the panoramic image is also encoded in real time for transmission and storage over the network.
The advantageous effects of the present invention are:
The present invention proposes a device and system for panoramic video monitoring of large scenes, realizing wide-area real-time panoramic video surveillance; the panorama output is a single continuous picture obtained through video stitching and color blending. Unlike generic panoramic video stitchers, panoramic monitoring cameras, and fisheye monitoring cameras, the present invention can deliver a smooth panoramic monitoring picture at more than 25 fps with an ultra-high-definition resolution of up to 7680*1080.
The large-scene monitoring device provided by the present invention reliably satisfies the requirements of subsequent panoramic video stitching, simplifies the installation and arrangement of field equipment, and is easy to put into practice.
In the large-scene monitoring system provided by the present invention, the video registration module computes and stores the video stitching parameters automatically to determine the best automatic registration parameters, and combines them with manual parameter configuration through human-computer interaction, guaranteeing the stitching effect and quality of the final panoramic video. It exploits the high-speed parallel processing of the GPU to improve stitching efficiency, so the output frame rate meets the real-time demand. Its time synchronization and color blending make the panoramic video stitched from the several streams more natural and true, as if it had been shot by a single camera.
Description of the drawings
Fig. 1 is the overall structure diagram of the device and system for large-scene panoramic video monitoring of the present invention
Fig. 2 is a schematic diagram of the fixed dial of the video capture device in the present invention
Fig. 3 is a schematic diagram of the effective field of view of the video capture device in the present invention
Fig. 4 is a flow chart of the panoramic video monitoring method for large scenes
Specific embodiment
To describe the technical solution of the present invention in detail, a detailed description is given below with reference to specific embodiments and the accompanying drawings.
The overall system structure of the proposed large-scene panoramic video monitoring scheme is shown in Fig. 1. The monitoring scheme comprises a large-scene monitoring device and a large-scene monitoring system.
The large-scene monitoring device includes a front-end video capture module, a video encoding module, and a data transmission module.
The front-end video capture module is a video capture device enclosed in a transparent spherical glass cover, composed of a horizontal dial and four specific cameras. It obtains the original video sequences of the front-end monitoring area and passes them to the video encoding module, where each is encoded separately. The horizontal dial is marked with the azimuth angles at which the cameras are placed, as shown in Fig. 2. The fan-shaped placement angles of the cameras are fixed at 22.5°, 67.5°, 112.5° and 157.5°; one camera is fixed along each graduation mark, so that 4 cameras are fixed on one dial. The cameras in the capture device are arrayed horizontally in a fan at these fixed angles, so that the normals of the camera imaging planes meet at a common central point and lie in the same plane. The effective field of view is shown in Fig. 3: each single camera has an effective field of view of 45°, and the fields of adjacent cameras overlap by 10°.
The video encoding module encodes each captured video stream separately. The video frames output by each sub-imaging device in the capture device are encoded with H.264, yielding one H.264-format stream per camera.
The data transmission module includes a clock module and a sending module. The clock module provides time synchronization between the video streams; its output time signal is added to each video stream in the form of a timestamp. The data transmission module sends the video frames in the order of the time signal.
Each sub-imaging device in the video capture module is connected to the video encoding module, and the video encoding module is connected to the data sending module.
The large-scene monitoring system includes a data reception module, a video decoding module, a video registration module, a database module, a real-time GPU stitching-fusion module, and an output encoding module.
The data reception module receives the IP stream of each video and submits the video frames in the order of the time synchronization signal carried in the stream, preserving temporal continuity within each video and spatial synchronization across the videos.
The video decoding module decodes each H.264 stream into a video frame sequence, enabling frame-by-frame stitching across the streams.
The video registration module is an offline processing module comprising an automatic registration module and a parameter adjustment module. The automatic registration module computes and stores the video stitching parameters automatically and is mainly used to determine the best automatic registration parameters. Once the parameters giving the best stitching effect are determined, the video registration module need not run again, because the attitude of the front-end capture device is fixed. The parameter adjustment module adjusts the stitching parameters of the videos manually to refine the automatic registration parameters. The final stitching parameters, obtained by combining automatic registration with manual adjustment, include the stitching homography matrices and the stitching-seam mask images. The computed stitching parameters are stored in a local database file for the real-time stitching module to retrieve. The automatic registration process matches corresponding points with SIFT feature points, rejects mismatches with the RANSAC algorithm, and computes a 3*3 homography matrix. Manual adjustment means modifying the 9 elements of the homography matrix individually to translate the image left/right or up/down, scale it, or rotate it.
The database module stores the registration and stitching parameters of each video in a database. The video registration module writes the registration and stitching parameters of each video into the database, and the parameter configuration module can read and modify the registration parameters of each video in the database. The registration and stitching parameters in the database are fed to the GPU stitching module to realize the panoramic video stitching.
The real-time GPU stitching-fusion module uses hardware-parallel acceleration to stitch the videos frame by frame in real time. The module obtains real-time video frames from the video decoding module and stitching parameters from the video stitching database, performs image mapping and edge fusion on the GPU, and outputs the stitched whole image. Color blending uses a three-level pyramid decomposition model to realize seam fusion.
The output encoding module includes a real-time display module and an encoding transmission module. The real-time display module outputs the stitched panoramic monitoring image continuously over HDMI/DVI for display and provides interactive user control. The encoding transmission module performs H.264 encoding and network transmission of the stitched panoramic monitoring image, so that users at remote clients can follow the monitoring picture in real time.
The data reception module is connected to the video decoding module, which is connected to the video registration module and to the real-time GPU stitching-fusion module. The video registration module and the parameter configuration module are connected, through the database module, to the real-time GPU stitching-fusion module, whose output is connected to the real-time display module and the encoding output module.
The invention also provides a corresponding panoramic video monitoring method for large scenes, shown in Fig. 4, comprising the following processing steps:
Step 1, video acquisition
Original video data is acquired through Video4Linux2, whose driver provides interface functions for picture format setting, frame buffer allocation, memory mapping, and other video device operations. After capture starts, the driver continuously writes video data into the allocated buffers; whenever a buffer is filled, the driver places it in the output queue to await processing by the application. To read data, the driver dequeues a buffer, and the application uses the buffer's index to obtain the corresponding buffer length and offset address in user space, thereby accessing the data; once processing is finished, the buffer re-enters the capture queue. The output of the capture module is YUV420 video frames.
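The dequeue/re-queue buffer cycle described above can be modeled with a small simulation (a Python sketch, not the real V4L2 ioctl sequence; the buffer count and frame payloads are hypothetical stand-ins for mmap'd YUV420 buffers):

```python
from collections import deque

class CaptureBufferPool:
    """Toy model of the V4L2 mmap buffer cycle: the driver fills buffers and
    queues them for output; the application dequeues a buffer by index, reads
    its data, and re-queues the buffer for capture."""
    def __init__(self, n_buffers=4):
        self.free = deque(range(n_buffers))   # buffers owned by the "driver"
        self.output = deque()                 # filled buffers awaiting the app
        self.data = [None] * n_buffers

    def driver_fill(self, frame):
        if not self.free:
            return False                      # all buffers busy: frame dropped
        idx = self.free.popleft()
        self.data[idx] = frame                # driver writes a YUV420 frame
        self.output.append(idx)
        return True

    def app_dequeue(self):
        idx = self.output.popleft()           # app gets the buffer index...
        return idx, self.data[idx]            # ...and accesses its data

    def app_requeue(self, idx):
        self.free.append(idx)                 # buffer re-enters capture queue

pool = CaptureBufferPool(n_buffers=2)
pool.driver_fill("frame-0")
pool.driver_fill("frame-1")
idx, frame = pool.app_dequeue()               # oldest frame first
pool.app_requeue(idx)                         # buffer becomes reusable
```

In the real driver the application identifies a buffer by the index returned from the dequeue ioctl and maps its length/offset into user space; the toy class only mirrors the ownership hand-off.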
Step 2, coding and transmission
The original video data captured in step 1 is encoded with an H.264 encoder. The encoding profile of the H.264 encoder is chosen as follows: the encoder uses the baseline profile, the number of reference frames is 1, CQP rate control is selected with quantization parameter QP=26, the DIA macroblock search pattern is selected, the motion estimation search range is set to 8 pixels, sub-pixel interpolation LEVEL=1 is used for inter-frame coding, and the P16x16 macroblock partition mode is selected. The H.264 parameter options are configured with the values chosen in the encoder design; YUV420-format video frames are the encoder input and NAL units are the encoder output.
Encoding yields an ES video stream, which is packetized into PES packets by the PES packetizer, then packed into a PS stream, and transmitted over RTP. The system clock is periodically synchronized to a remote reference clock, and the synchronization timestamp, as the unique clock information, is embedded in the video stream for video decoding and for time synchronization across the multiple video streams. Specifically, the presentation time stamp PTS (Presentation Time Stamp) and decoding time stamp DTS (Decoding Time Stamp) are embedded in the PES header, and the system clock reference SCR (System Clock Reference) is embedded in the PS header.
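As an illustration of the timestamp embedding, the PTS and DTS fields of a PES header carry a 33-bit value in five bytes with interleaved marker bits, per the MPEG-2 systems layout; the sketch below packs and unpacks that field (the sample value is arbitrary):

```python
def encode_pts(pts: int) -> bytes:
    """Pack a 33-bit timestamp into the 5-byte PES PTS field:
    a 4-bit prefix, then 3 + 15 + 15 timestamp bits, each group
    followed by a marker bit set to 1."""
    assert 0 <= pts < 1 << 33
    return bytes([
        (0b0010 << 4) | (((pts >> 30) & 0x07) << 1) | 1,
        (pts >> 22) & 0xFF,
        (((pts >> 15) & 0x7F) << 1) | 1,
        (pts >> 7) & 0xFF,
        ((pts & 0x7F) << 1) | 1,
    ])

def decode_pts(b: bytes) -> int:
    """Inverse of encode_pts: drop the prefix and marker bits."""
    return (((b[0] >> 1) & 0x07) << 30) | (b[1] << 22) | \
           (((b[2] >> 1) & 0x7F) << 15) | (b[3] << 7) | ((b[4] >> 1) & 0x7F)

# one hour at the 90 kHz PES clock
pts = 90_000 * 3600
assert decode_pts(encode_pts(pts)) == pts
```

The 33-bit width lets the 90 kHz clock run for roughly 26.5 hours before wrapping, which is why receivers must handle timestamp wrap-around.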
Step 3, decoding and synchronization
The PS stream sent in step 2 is parsed layer by layer, finally yielding the ES stream and the synchronization timestamps. The ES stream is decoded with an ffmpeg decoder to obtain YUV420-format video frames. A video buffer pool is established that buffers 25 frames per stream, each YUV420 frame being stored together with its synchronization timestamp. When submitting video, the current synchronization timestamp of each stream is read; taking the synchronization timestamp T1 of the first video as the reference, and letting Ti (i = 2, 3, ...) be the synchronization timestamp of the i-th video, the frame submission rule is defined as follows:
1) take the current YUV420 frame of the first video and submit it;
2) for i = 2, 3, ...:
if Ti − T1 > 20 ms, submit the preceding YUV420 frame of the i-th video again;
if −20 ms ≤ Ti − T1 ≤ 20 ms, read and submit the current YUV420 frame of the i-th video;
otherwise, read and submit the next YUV420 frame of the i-th video;
3) move the reading position of the first video down one frame and repeat operations 1) and 2).
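The submission rule can be sketched in Python as follows (a minimal model under the assumption that the "> 20 ms" branch re-submits the stream's preceding frame without advancing; the stream contents and timestamps are hypothetical):

```python
def submit_frames(streams, positions, tolerance_ms=20):
    """streams[i] is a list of (timestamp_ms, frame) for video i; positions[i]
    is its current read index. Returns the frames submitted for one panorama
    frame and advances the read positions per the rule above."""
    t_ref, ref_frame = streams[0][positions[0]]       # rule 1: reference video
    out = [ref_frame]
    for i in range(1, len(streams)):
        pos = positions[i]
        t_i = streams[i][pos][0]
        if t_i - t_ref > tolerance_ms:                # stream ahead: re-use previous
            out.append(streams[i][max(pos - 1, 0)][1])
        elif t_i - t_ref >= -tolerance_ms:            # within tolerance: current frame
            out.append(streams[i][pos][1])
            positions[i] = pos + 1
        else:                                         # stream behind: skip to next
            out.append(streams[i][pos + 1][1])
            positions[i] = pos + 2
    positions[0] += 1                                 # rule 3: reference moves on
    return out

streams = [[(0, 'a0'), (40, 'a1')],
           [(10, 'b0'), (50, 'b1')],
           [(-30, 'c0'), (5, 'c1'), (45, 'c2')]]
positions = [0, 0, 0]
print(submit_frames(streams, positions))   # → ['a0', 'b0', 'c1']
```

Here stream 2 is within the ±20 ms window, so its current frame is taken, while stream 3 lags by 30 ms and skips forward to its next frame.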
Step 4, automatic registration
After the video frames are obtained from step 3, the stitching parameters and the stitching seam are computed, with automatic image registration performed by SIFT feature matching. Each computed SIFT feature point carries a 128-dimensional descriptor (4×4 spatial bins with 8 orientation bins each), and the vector is normalized, giving robustness to illumination. Nearest-neighbor matching based on Euclidean distance is used: for each feature point in the low-resolution image, a K-D tree search finds the two feature points in the reference image nearest to it in Euclidean distance. If the nearest distance is d1, the second nearest is d2, and the threshold is w, then when d1/d2 < w the pair is kept as a candidate feature pair and otherwise rejected. In general a threshold in the interval [0.4, 0.8] is reasonable.
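The nearest-neighbor ratio test can be sketched with brute-force Euclidean distances (a Python stand-in: the patent searches with a K-D tree over 128-D SIFT descriptors, while the toy descriptors below are 2-D):

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, w=0.6):
    """For each descriptor in desc_a, find its two nearest neighbours in
    desc_b and keep the pair only if d1/d2 < w (the ratio test)."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]
        d1, d2 = dists[j1], dists[j2]
        if d2 > 0 and d1 / d2 < w:
            matches.append((i, int(j1)))
    return matches

# two distinctive points and one ambiguous point
desc_a = np.array([[0.0, 0.0], [10.0, 10.0], [5.0, 5.0]])
desc_b = np.array([[0.1, 0.0], [10.0, 10.1], [5.0, 4.0], [5.0, 6.0]])
print(ratio_test_matches(desc_a, desc_b, w=0.6))   # → [(0, 0), (1, 1)]
```

The third query descriptor is equidistant from two candidates (d1/d2 = 1), so the ratio test rejects it, which is exactly the ambiguity the threshold is meant to filter out.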
After the feature point pairs between the reference image and the image to be registered are obtained, the perspective coefficients between the two images must be estimated; the image to be registered is then resampled with an interpolation algorithm to realize the registration. If (u1, v1, 1) and (u2, v2, 1) are the homogeneous coordinates of a feature point pair, then according to the perspective matrix H, (u2, v2, 1)^T = H·(u1, v1, 1)^T, up to a scale factor.
Although the above coarse matching filters the feature points, mismatches inevitably remain and degrade registration accuracy, so the feature point pairs are further screened with the classical random sample consensus (RANSAC) algorithm. First, 3 feature point pairs are randomly selected from the candidate pairs to set up a system of equations and solve the 6 parameters of H. The distance between each candidate feature point and its counterpart transformed by H is computed; if the distance is below a given threshold the pair is an inlier, otherwise it is an outlier and rejected, and the inliers are counted. Then another 3 pairs are taken and the steps repeated; after several iterations, the point-pair set containing the most inliers is chosen, and the affine matrix H is finally solved over that set by least squares.
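A minimal version of this RANSAC loop for the 6-parameter (affine) case might look as follows (a Python sketch; the threshold, iteration count, and synthetic point set are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_affine(src, dst):
    """Solve the 6 affine parameters from >= 3 point pairs by least squares."""
    n = len(src)
    A = np.zeros((2 * n, 6)); b = np.zeros(2 * n)
    A[0::2, 0:2] = src; A[0::2, 2] = 1; b[0::2] = dst[:, 0]
    A[1::2, 3:5] = src; A[1::2, 5] = 1; b[1::2] = dst[:, 1]
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.array([[p[0], p[1], p[2]], [p[3], p[4], p[5]]])

def ransac_affine(src, dst, iters=200, thresh=1.0):
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 3, replace=False)   # 3 pairs -> 6 equations
        M = fit_affine(src[idx], dst[idx])
        proj = src @ M[:, :2].T + M[:, 2]
        inliers = np.linalg.norm(proj - dst, axis=1) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # final least-squares refit over the largest consensus set
    return fit_affine(src[best_inliers], dst[best_inliers]), best_inliers

# synthetic data: a translation by (5, -3) plus two gross mismatches
src = rng.uniform(0, 100, (20, 2))
dst = src + np.array([5.0, -3.0])
dst[0] += 40; dst[1] -= 40                             # two outliers
M, inliers = ransac_affine(src, dst)
```

With 18 of 20 pairs consistent, some 3-pair sample is almost certainly outlier-free within 200 iterations, so the recovered matrix matches the planted translation and the two mismatches are flagged as outliers.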
Seam segmentation is realized with a graph-based segmentation method. A graph (Graph) is denoted G = <γ, ε>, consisting of a set of nodes γ and a set of edges ε connecting the nodes of γ. For an image I, a graph G is constructed to correspond to it: each element γi in γ corresponds to a pixel xi in I, every two adjacent pixels xi and xj in I have a corresponding edge eij in ε, and each edge carries a non-negative weight w(vi, vj) as its energy. The energy of a segmentation is defined as the total weight of the edges it cuts,
E(A, B) = Σ w(vi, vj) over vi ∈ A, vj ∈ B,
so that the energy depends only on the edges crossing the cut. The graph G can then be divided into two subgraphs A and B such that A ∪ B = γ and A ∩ B = ∅.
The specific segmentation flow is as follows:
1. Initialization
(a) a rectangle surrounding the foreground area is given;
(b) the background area B0 is initialized to the pixels outside the rectangle, and the unknown area U0 to the pixels inside the rectangle;
(c) B0 and U0 are used to initialize the background and foreground GMMs;
2. Iterative segmentation
(a) each pixel in the unknown area is assigned to the closest Gaussian component in the GMMs;
(b) the GMM parameters are recalculated;
(c) the max-flow is computed to solve the minimum cut min E, and Ui, Bi and Fi are updated;
(d) 2(a)-2(c) are repeated until convergence;
(e) the optimal seam line is obtained.
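The max-flow computation of step 2(c) can be illustrated on a toy graph (an Edmonds-Karp sketch in Python; in the actual pipeline the nodes are pixels and the edge weights come from the GMM energy terms, which are omitted here):

```python
from collections import deque

def edmonds_karp(capacity, source, sink):
    """Max-flow = min-cut on a small directed graph.
    capacity: dict {u: {v: cap}} with every node present as a key."""
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():          # add zero-capacity reverse edges
        for v in nbrs:
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        parent = {source: None}               # BFS for an augmenting path
        q = deque([source])
        while q and sink not in parent:
            u = q.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if sink not in parent:
            break
        path, v = [], sink                    # walk back along parents
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:                     # push flow along the path
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck
    reach = {source}                          # source side of the min cut =
    q = deque([source])                       # nodes still reachable in residual
    while q:
        u = q.popleft()
        for v, cap in residual[u].items():
            if cap > 0 and v not in reach:
                reach.add(v)
                q.append(v)
    return flow, reach

capacity = {'s': {'a': 3, 'b': 2}, 'a': {'b': 1, 't': 2}, 'b': {'t': 3}, 't': {}}
flow, source_side = edmonds_karp(capacity, 's', 't')
```

By the max-flow/min-cut theorem the returned flow equals the minimum cut energy, and the nodes still reachable from the source in the residual graph form one side of the optimal segmentation.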
The optimal seam line is mapped back onto the original video frame, yielding two or more connected regions; the connected region corresponding to the effective video area is filled with 255 and the other areas with 0, giving the mask image of the video, denoted Mask_frame.
Step 5, parameter optimization
The homography matrix H computed in step 4 is written as
H = [h11 h12 h13; h21 h22 h23; h31 h32 h33].
Image translation, scaling, and rotation are realized by modifying the elements hij of H. Left-right translation: h'13 = h13 + Δx, where Δx greater than zero translates right and Δx less than zero translates left. Up-down translation: h'23 = h23 + Δy, where Δy greater than zero translates down and Δy less than zero translates up. Scaling: H' = S*H, where S = diag(s11, s22, 1); s11 controls horizontal scaling, with 0 < s11 < 1 shrinking and s11 > 1 enlarging horizontally, and s22 controls vertical scaling, with 0 < s22 < 1 shrinking and s22 > 1 enlarging vertically.
Rotation: H' = T*H, where T = [cosθ −sinθ 0; sinθ cosθ 0; 0 0 1] and θ is the rotation angle under a right-handed coordinate system. The modified homography matrix H' replaces the original homography matrix H obtained in step 4.
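The manual adjustments can be sketched as matrix operations (a Python/numpy illustration; the adjustment amounts are arbitrary sample values, and translation is written into the third-column entries following the homography convention):

```python
import numpy as np

def translate(H, dx=0.0, dy=0.0):
    """Shift the mapped image: dx > 0 moves right, dy > 0 moves down."""
    H2 = H.copy()
    H2[0, 2] += dx
    H2[1, 2] += dy
    return H2

def scale(H, sx=1.0, sy=1.0):
    """H' = S*H with S = diag(sx, sy, 1); factors < 1 shrink, > 1 enlarge."""
    return np.diag([sx, sy, 1.0]) @ H

def rotate(H, theta):
    """H' = T*H, rotation by theta radians in a right-handed frame."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return T @ H

def apply(H, x, y):
    """Map a point through H with the homogeneous divide."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

H = np.eye(3)
H = translate(H, dx=5.0)                 # move the image 5 px right
H = rotate(H, np.pi / 2)                 # then rotate 90 degrees
```

Because the adjustments are left-multiplied, they act on the already-mapped panorama coordinates, so the order of operations matters: the point (0, 0) is first carried to (5, 0) and then rotated to (0, 5).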
Step 6, real-time CUDA stitching
The transformation from a single image to the panorama is called image mapping and can be expressed as I_pano-frame = Warp(H_frame, I_frame), where I_pano-frame is the content the single frame maps to in the panorama, and I_frame and H_frame are the single frame and its corresponding homography matrix. The image mapping is realized with CUDA: each frame is first divided into blocks of 16*16 size and bilinear interpolation mapping is performed per block. The mask image of the video is mapped likewise: Mask'_frame = Warp(H_frame, Mask_frame). The effective single-frame content is then extracted as I'_pano-frame = Mask'_frame AND I_pano-frame. The image overlap region (Tl, Tr, Tt, Tb) is computed, where Tl and Tr are the left and right boundaries of the horizontal overlap region and Tt and Tb the top and bottom boundaries of the vertical overlap region. A three-level pyramid decomposition and reconstruction is performed in the overlap region to realize seam fusion. The specific fusion steps are as follows:
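A CPU stand-in for the mapping kernel, using inverse mapping with bilinear interpolation (a Python/numpy sketch; the CUDA version partitions the output into 16*16 thread blocks, a launch-geometry detail omitted here):

```python
import numpy as np

def warp(H, img, out_shape):
    """Map img into the panorama plane via homography H: for each output
    pixel, invert H to find the source location and sample bilinearly."""
    Hinv = np.linalg.inv(H)
    h_out, w_out = out_shape
    ys, xs = np.mgrid[0:h_out, 0:w_out]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
    src = Hinv @ pts
    u = src[0] / src[2]                     # homogeneous divide
    v = src[1] / src[2]
    h_in, w_in = img.shape
    u0 = np.floor(u).astype(int); v0 = np.floor(v).astype(int)
    du = u - u0; dv = v - v0
    valid = (u0 >= 0) & (u0 < w_in - 1) & (v0 >= 0) & (v0 < h_in - 1)
    val = np.zeros(u.shape)
    i, j = v0[valid], u0[valid]
    # blend the four neighbours with bilinear weights
    val[valid] = (img[i, j] * (1 - du[valid]) * (1 - dv[valid])
                  + img[i, j + 1] * du[valid] * (1 - dv[valid])
                  + img[i + 1, j] * (1 - du[valid]) * dv[valid]
                  + img[i + 1, j + 1] * du[valid] * dv[valid])
    return val.reshape(out_shape)

img = np.arange(16.0).reshape(4, 4)
pano = warp(np.eye(3), img, (4, 4))   # identity: interior reproduced exactly
```

Inverse mapping guarantees every output pixel receives a value (no holes), which is why it parallelizes cleanly: each CUDA thread owns one output pixel and reads, rather than scatters, source data.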
(1) Laplacian decomposition is applied to each source image participating in the fusion, yielding the corresponding Laplacian pyramids.
The source image G0 forms the bottom level of the pyramid, and the higher levels are decomposed as
G_l(i, j) = Σ_{m=-2}^{2} Σ_{n=-2}^{2} ω(m, n) G_{l-1}(2i + m, 2j + n), 0 < l ≤ N,
where G_l is the level-l image, C_l and R_l are the numbers of columns and rows of the level-l image, N is the number of levels of the top layer, and ω(m, n) is a 5*5 window function; each Laplacian level is the difference between a Gaussian level and the expanded version of the level above it.
(2) The corresponding levels of the pyramids are extracted and fused independently, each level with a fusion rule adapted to it; finally the fused levels are combined into the pyramid of the fused image.
(3) The inverse Laplacian transform is applied to reconstruct the image, yielding the fused image.
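The three fusion steps can be sketched with a small pyramid blend (Python/numpy; a 2x2 box filter and pixel replication stand in for the 5*5 window function ω and the exact expand operator, and the per-level fusion rule is simple mask-weighted averaging):

```python
import numpy as np

def down(img):
    """Halve resolution with a 2x2 box average (stand-in for the 5x5 window)."""
    return (img[0::2, 0::2] + img[1::2, 0::2]
            + img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def up(img, shape):
    """Upsample by pixel replication to the given shape."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels=3):
    pyr, cur = [], img.astype(float)
    for _ in range(levels - 1):
        nxt = down(cur)
        pyr.append(cur - up(nxt, cur.shape))  # step (1): Laplacian levels
        cur = nxt
    pyr.append(cur)                           # top level keeps the residual
    return pyr

def blend(a, b, mask, levels=3):
    """Blend a and b level by level with a mask downsampled alongside."""
    pa, pb = laplacian_pyramid(a, levels), laplacian_pyramid(b, levels)
    m, out = mask.astype(float), []
    for la, lb in zip(pa, pb):
        out.append(la * m + lb * (1 - m))     # step (2): per-level fusion
        m = down(m)
    img = out[-1]                             # step (3): inverse transform
    for lap in reversed(out[:-1]):
        img = up(img, lap.shape) + lap
    return img

a = np.arange(64.0).reshape(8, 8)
b = a[::-1].copy()
mask = np.zeros((8, 8)); mask[:, :4] = 1.0    # take the left half from a
fused = blend(a, b, mask)
```

Because each Laplacian level stores exactly the detail lost by downsampling, the reconstruction is an exact inverse: blending an image with itself, or with an all-ones mask, returns the original unchanged.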
Step 7, output encoding
The panoramic video frames output by step 6 are standardized in resolution, and images of two resolutions are output simultaneously: a large-bitrate stream at 7680*1080 and a base-bitrate stream at 1920*256. The large and base streams are encoded independently using the ffmpeg codec library, outputting H.264 video streams.
The above is only a preferred embodiment of the present invention. It should be noted that those of ordinary skill in the art can make several improvements and modifications without departing from the principles of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (7)

1. A method for panoramic video monitoring of large scenes, characterized in that the method comprises the following steps:
Step 1, video acquisition;
Step 2, encoding the original video data captured in step 1, adding a time synchronization signal to each encoded video stream, and sending them over the network;
Step 3, receiving the video data and decoding the encoded video frames in real time;
Step 4, registering the monitor videos to be stitched in left-right adjacent order, computing their automatic registration parameters and saving them in a database;
Step 5, adjusting the stitching parameters manually to refine the automatic registration parameters, and updating the database of step 4 with the refined results;
Step 6, reading the registration parameters from the video stitching database and stitching the monitor videos to be stitched;
Step 7, standardizing the stitched result into large and small bitstreams for output;
Wherein, the step in step 3 of decoding the encoded video frames in real time comprises: parsing the PS stream sent in step 2 layer by layer to finally obtain the ES stream and the synchronization timestamps; decoding the ES stream with an ffmpeg decoder to obtain YUV420-format video frames; establishing a video buffer pool buffering 25 frames per stream, each YUV420 frame being stored in the buffer together with its synchronization timestamp; and, when submitting video, reading the current synchronization timestamp of each stream, taking the synchronization timestamp T1 of the first video as the reference and letting Ti (i = 2, 3, ...) be the synchronization timestamp of the i-th video, the frame submission rule being defined as follows:
1) taking the current YUV420 frame of the first video and submitting it;
2) for i = 2, 3, ...: if Ti − T1 > 20 ms, submitting the preceding YUV420 frame of the i-th video again; if −20 ms ≤ Ti − T1 ≤ 20 ms, reading and submitting the current YUV420 frame of the i-th video; otherwise, reading and submitting the next YUV420 frame of the i-th video;
3) moving the reading position of the first video down one frame and repeating operations 1) and 2).
2. The method for panoramic video monitoring of large scenes according to claim 1, characterized in that the step in step 2 of encoding and adding a time synchronization signal to each encoded video stream comprises:
encoding to obtain an ES video stream, which is packetized into PES packets by the PES packetizer, then packed into a PS stream, and transmitted over the RTP protocol;
periodically synchronizing the system clock to a remote reference clock, and embedding the synchronization timestamp, as the unique clock information, in the video stream for video decoding and for time synchronization across the multiple video streams;
wherein a presentation time stamp and a decoding time stamp are embedded in the PES header, and a system clock reference is embedded in the PS header.
3. the method that the panoramic video according to claim 1 or 2 towards large scene monitors, which is characterized in that the step The step of monitor video to be spliced according to left and right adjacent sequential is registrated in rapid 4, its autoregistration parameter is calculated It is calculated including splicing parameter, the specific steps are:
From step 3 obtain video frame after, carry out automatic image registration by the way of Sift characteristic matchings, using based on it is European away from From arest neighbors Vectors matching method, for the characteristic point in low-resolution image, looked in a reference image using K-D treeˉsearch methods To the first two characteristic point nearest with low-resolution image characteristic point Euclidean distance, if minimum distance is d1, secondary is closely d2, If threshold value is wThen this is candidate feature point to characteristic point, is otherwise rejected;
After the candidate feature point pairs between the reference image and the image to be registered are obtained, the perspective coefficients between the two images are estimated, and the image to be registered is then resampled with an interpolation algorithm to realize the registration between the images; if (u1, v1, 1) and (u2, v2, 1) are the homogeneous coordinates of a point pair, then by the perspective matrix H: (u2, v2, 1)ᵀ ∝ H · (u1, v1, 1)ᵀ.
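Mapping a point through the 3×3 perspective matrix H and dehomogenizing can be sketched as (a minimal illustration; the function name is ours):

```python
def apply_homography(H, pt):
    """Map (u1, v1) through the 3x3 matrix H in homogeneous coordinates
    and dehomogenize to obtain (u2, v2)."""
    u, v = pt
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return (x / w, y / w)  # divide out the homogeneous scale
```

For a pure-translation H with h13 = 5 and h23 = −2, the point (1, 1) maps to (6, −1).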
4. The method for panoramic video monitoring of a large scene according to claim 3, characterized in that it further comprises screening the candidate feature point pairs using the classical RANSAC algorithm:
First, 3 feature point pairs are randomly selected from the candidate pairs to establish a system of equations and solve for the 6 parameters of H;
The distance between each feature point transformed by H and its candidate matching point is computed; if the distance is below a given threshold, the pair is an inlier, otherwise it is an outlier and is rejected; the number of inliers is counted;
Then another 3 feature point pairs are taken and the above steps are repeated; after several iterations, the point-pair set containing the most inliers is selected, and the least-squares method is applied to that set to solve the final affine matrix H.
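The hypothesize-count-refit loop of RANSAC can be sketched as below. For brevity this sketch hypothesizes a 2-parameter translation from a single pair instead of the claim's 6-parameter affine model from 3 pairs; the structure (random sample, inlier count, least-squares refit over the best inlier set) is the same:

```python
import random

def ransac_translation(pairs, thresh=1.0, iters=100, seed=0):
    """Minimal RANSAC: hypothesize a 2D translation from one random pair,
    count inliers within `thresh`, keep the model with the most inliers,
    then refit by least squares (the mean offset of the inliers)."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.choice(pairs)
        dx, dy = x2 - x1, y2 - y1          # hypothesized translation
        inliers = [p for p in pairs
                   if abs(p[1][0] - p[0][0] - dx) < thresh
                   and abs(p[1][1] - p[0][1] - dy) < thresh]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    n = len(best_inliers)
    dx = sum(q[0] - p[0] for p, q in best_inliers) / n  # least-squares refit
    dy = sum(q[1] - p[1] for p, q in best_inliers) / n
    return (dx, dy), best_inliers
```

With three pairs related by the translation (5, 3) plus one gross outlier, the outlier is excluded and the refit recovers (5, 3).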
5. The method for panoramic video monitoring of a large scene according to claim 1 or 2, characterized in that the step of optimizing the automatic registration parameters in step 5 comprises:
For the homography matrix H computed in step 4, the elements hij of H are modified to realize translation, scaling and rotation of the image; the modified homography matrix H′ then replaces the original homography matrix H obtained in step 4.
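The claim modifies the elements hij directly; one principled way to produce the same kind of correction (our assumption, not the patent's stated formula) is to left-compose H with a translation–scale–rotation matrix, H′ = T·S·R·H:

```python
import math

def adjust_homography(H, dx=0.0, dy=0.0, scale=1.0, angle_deg=0.0):
    """Compose a 3x3 homography H with a manual correction
    H' = T * S * R * H, i.e. rotate, scale, then translate the result."""
    c = math.cos(math.radians(angle_deg))
    s = math.sin(math.radians(angle_deg))
    # Combined translate*scale*rotate correction matrix.
    A = [[scale * c, -scale * s, dx],
         [scale * s,  scale * c, dy],
         [0.0, 0.0, 1.0]]
    # Plain 3x3 matrix product A * H.
    return [[sum(A[i][k] * H[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]
```

Applied to the identity homography, dx = 3 shifts the image right by 3 pixels (h13 becomes 3) and scale = 2 doubles it (h11 becomes 2), matching the element-wise effect the claim describes.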
6. The method for panoramic video monitoring of a large scene according to claim 1, characterized in that the step of stitching the monitor videos to be stitched in step 6 comprises:
Using CUDA, each frame image is first divided into blocks of 16×16 pixels for bilinear interpolation mapping;
The Mask image Maskframe corresponding to the video is mapped as Mask′frame = Warp(Hframe, Maskframe), and the effective single-frame video content is intercepted as I′pano-frame = Mask′frame AND Ipano-frame, where Ipano-frame denotes the content of the single frame mapped into the panorama, and Iframe and Hframe are the single-frame image and its corresponding homography matrix, respectively;
The image overlap region (Tl, Tr, Tt, Tb) is computed, where Tl and Tr denote the left and right boundaries of the overlap region in the horizontal direction, and Tt and Tb denote the top and bottom boundaries in the vertical direction;
Three-layer pyramid decomposition and reconstruction are performed in the overlap region to realize seam-line fusion.
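The overlap-region computation in the steps above can be sketched from the two warped masks; a minimal illustration using nested lists of 0/1 in place of GPU mask images (the function name is ours), after which the pyramid fusion would run only inside the returned bounds:

```python
def overlap_bounds(mask_a, mask_b):
    """Compute (Tl, Tr, Tt, Tb): the left/right column bounds and
    top/bottom row bounds of the region covered by both binary masks.
    Masks are equal-sized lists of rows of 0/1. Returns None if disjoint."""
    h, w = len(mask_a), len(mask_a[0])
    rows = [r for r in range(h)
            if any(mask_a[r][c] and mask_b[r][c] for c in range(w))]
    cols = [c for c in range(w)
            if any(mask_a[r][c] and mask_b[r][c] for r in range(h))]
    if not rows:
        return None
    return (min(cols), max(cols), min(rows), max(rows))
```

For a left mask covering columns 0–3 and a right mask covering columns 2–5, the overlap bounds are (Tl, Tr, Tt, Tb) = (2, 3, 0, 1).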
7. A device for panoramic video monitoring of a large scene, comprising a large-scene monitoring device and a large-scene monitoring system, characterized in that:
The large-scene monitoring device comprises a front-end video acquisition module, a video encoding module and a data transmission module, wherein the data transmission module comprises a clock module and a sending module; the clock module is used for time synchronization among the multiple video channels, its output time signal being added to each video code stream in the form of a timestamp, and the data transmission module sends the video frames one by one in the order of their time signals;
The large-scene monitoring system comprises a data reception module, a video decoding module, a video registration module, a database module, a real-time GPU stitching-and-fusion module and an output encoding module, wherein the real-time GPU stitching-and-fusion module uses a hardware-parallel accelerated design to stitch the video frame by frame in real time;
wherein each sub-imaging device in the video acquisition module is connected to the video encoding module, and the video encoding module is connected to the data sending module; the data reception module is connected to the video decoding module, and the video decoding module is connected to the video registration module and to the real-time GPU stitching-and-fusion module respectively; the video registration module is connected to the database module through the parameter configuration module and is finally connected to the real-time GPU stitching-and-fusion module; the real-time GPU stitching-and-fusion module, taking these as input, is connected to the real-time display module and to the output encoding module respectively;
The front-end video acquisition module is a video capture device enclosed in a transparent spherical glass cover, used to acquire the original video sequences of the front-end monitoring area and transmit them to the video encoding module for video encoding; it consists of one horizontal dial and four dedicated cameras;
wherein the horizontal dial is marked with position-angle information for camera placement, and the 4 cameras are fixed on the dial, the fixed fan-shaped placement angles of the cameras being 22.5°, 67.5°, 112.5° and 157.5° respectively; each of the four cameras is fixed along a graduation-mark direction, arranged in a horizontal fan at these angles, so that the normals of the camera imaging planes meet at a common central point and lie in the same plane; the effective field of view of a single camera is 45°, and the fields of view of adjacent cameras overlap by 10°;
The video registration module is an offline processing module comprising an automatic registration module and a parameter adjustment module:
wherein the automatic registration module realizes automatic computation and storage of the video stitching parameters and determines the best automatic video registration parameters; the automatic registration process matches corresponding points with SIFT feature points, rejects mismatched points using the RANSAC algorithm, and computes a 3×3 homography matrix;
the parameter adjustment module optimizes the automatic registration parameters and is used to adjust the video stitching parameters manually; manual adjustment means adjusting the 9 elements of the homography matrix individually to realize left-right and up-down translation, scaling and rotation of the image;
wherein, since the posture of the front-end video acquisition device is fixed, once the parameters giving the best stitching effect are determined, video registration need not be performed again; the final stitching parameters, comprising the stitching homography matrix and the stitching-edge Mask images, are computed by combining automatic registration with manual adjustment; the computed stitching parameters are stored in a local database file for the real-time stitching module to use.
CN201410547110.XA 2014-10-16 2014-10-16 The method and device monitored towards the panoramic video of large scene Expired - Fee Related CN104301677B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410547110.XA CN104301677B (en) 2014-10-16 2014-10-16 The method and device monitored towards the panoramic video of large scene


Publications (2)

Publication Number Publication Date
CN104301677A CN104301677A (en) 2015-01-21
CN104301677B true CN104301677B (en) 2018-06-15

Family

ID=52321213

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410547110.XA Expired - Fee Related CN104301677B (en) 2014-10-16 2014-10-16 The method and device monitored towards the panoramic video of large scene

Country Status (1)

Country Link
CN (1) CN104301677B (en)

Families Citing this family (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104796623B (en) * 2015-02-03 2016-02-24 中国人民解放军国防科学技术大学 Splicing video based on pyramid Block-matching and functional optimization goes structural deviation method
CN105072393B (en) * 2015-07-31 2018-11-30 深圳英飞拓科技股份有限公司 A kind of more camera lens panorama web cameras and joining method
CN105245841B (en) * 2015-10-08 2018-10-09 北京工业大学 A kind of panoramic video monitoring system based on CUDA
CN105407278A (en) * 2015-11-10 2016-03-16 北京天睿空间科技股份有限公司 Panoramic video traffic situation monitoring system and method
CN108353195A (en) * 2015-11-17 2018-07-31 索尼公司 Sending device, sending method, receiving device, method of reseptance and transmitting/receiving system
CN105282526A (en) * 2015-12-01 2016-01-27 北京时代拓灵科技有限公司 Panorama video stitching method and system
CN105493712B (en) * 2015-12-11 2017-12-01 湖北钟洋机电科技有限公司 A kind of agricultural seeding machine with Anti-seismic seeding wheel
CN107205158A (en) * 2016-03-18 2017-09-26 中国科学院宁波材料技术与工程研究所 A kind of multichannel audio-video frequency stream synchronous decoding method based on timestamp
CN107306347A (en) * 2016-04-18 2017-10-31 中国科学院宁波材料技术与工程研究所 A kind of real-time video streaming transmission method based on spliced panoramic camera
CN106027886B (en) * 2016-05-17 2019-08-06 深圳市极酷威视科技有限公司 A kind of panoramic video realizes the method and system of synchronization frame
CN107426507A (en) * 2016-05-24 2017-12-01 中国科学院苏州纳米技术与纳米仿生研究所 Video image splicing apparatus and its joining method
CN112738531B (en) * 2016-11-17 2024-02-23 英特尔公司 Suggested viewport indication for panoramic video
CN112770178A (en) * 2016-12-14 2021-05-07 上海交通大学 Panoramic video transmission method, panoramic video receiving method, panoramic video transmission system and panoramic video receiving system
CN108234820A (en) * 2016-12-21 2018-06-29 上海杰图软件技术有限公司 The method and system of real-time splicing panorama image based on the processing of single channel picture signal
CN106954030A (en) * 2017-03-20 2017-07-14 华平智慧信息技术(深圳)有限公司 Monitor the video-splicing method and system of cloud platform
CN107846604A (en) * 2017-11-09 2018-03-27 北京维境视讯信息技术有限公司 A kind of panoramic video processing manufacturing system and method
CN108419010A (en) * 2018-02-06 2018-08-17 深圳岚锋创视网络科技有限公司 Panorama camera and its in real time method and apparatus of output HDMI panoramic video streams
EP3606032B1 (en) * 2018-07-30 2020-10-21 Axis AB Method and camera system combining views from plurality of cameras
CN109166078A (en) * 2018-10-22 2019-01-08 广州微牌智能科技有限公司 Panoramic view joining method, device, viewing system, equipment and storage medium
CN109587408A (en) * 2018-12-14 2019-04-05 北京大视景科技有限公司 A kind of large scene video fusion covering method that single-column type monitoring camera is vertically installed
CN111343415A (en) * 2018-12-18 2020-06-26 杭州海康威视数字技术股份有限公司 Data transmission method and device
CN111510731B (en) * 2019-01-31 2022-03-25 杭州海康威视数字技术股份有限公司 System and method for splicing traffic images
CN112051975B (en) * 2020-08-11 2023-09-01 深圳市创凯智能股份有限公司 Adjustment method for spliced picture, splicing equipment and storage medium
CN112135068A (en) * 2020-09-22 2020-12-25 视觉感知(北京)科技有限公司 Method and device for fusion processing of multiple input videos
CN112272306B (en) * 2020-09-28 2023-03-28 天下秀广告有限公司 Multi-channel real-time interactive video fusion transmission method
CN112465702B (en) * 2020-12-01 2022-09-13 中国电子科技集团公司第二十八研究所 Synchronous self-adaptive splicing display processing method for multi-channel ultrahigh-definition video
CN112885096A (en) * 2021-02-05 2021-06-01 同济大学 Bridge floor traffic flow full-view-field sensing system and method depending on bridge arch ribs
CN112885110A (en) * 2021-02-05 2021-06-01 同济大学 Bridge floor traffic flow full-view-field sensing system and method depending on adjacent high-rise structure
CN112820112A (en) * 2021-02-05 2021-05-18 同济大学 Bridge floor traffic flow full-view-field sensing system and method depending on bridge tower column
CN114070828B (en) * 2022-01-17 2022-05-17 中央广播电视总台 Program stream fault detection method and device, computer equipment and readable storage medium
CN114827491B (en) * 2022-04-18 2023-02-14 鹰驾科技(深圳)有限公司 Wireless transmission panoramic view splicing technology

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012089989A (en) * 2010-10-18 2012-05-10 Hitachi Kokusai Electric Inc Video monitoring system
CN103905741A (en) * 2014-03-19 2014-07-02 合肥安达电子有限责任公司 Ultra-high-definition panoramic video real-time generation and multi-channel synchronous play system
CN103942811A (en) * 2013-01-21 2014-07-23 中国电信股份有限公司 Method and system for determining motion trajectory of characteristic object in distributed and parallel mode
CN104104911A (en) * 2014-07-04 2014-10-15 华中师范大学 Timestamp eliminating and resetting method in panoramic image generation process and system thereof



Similar Documents

Publication Publication Date Title
CN104301677B (en) The method and device monitored towards the panoramic video of large scene
CN204090039U (en) Integration large scene panoramic video monitoring device
US10757423B2 (en) Apparatus and methods for compressing video content using adaptive projection selection
US11064110B2 (en) Warp processing for image capture
US11341715B2 (en) Video reconstruction method, system, device, and computer readable storage medium
CN111279673B (en) System and method for image stitching with electronic rolling shutter correction
CN103763479B (en) The splicing apparatus and its method of real time high-speed high definition panorama video
CN107249096B (en) Panoramic camera and shooting method thereof
JP2020515937A (en) Method, apparatus and stream for immersive video format
WO2021093584A1 (en) Free viewpoint video generation and interaction method based on deep convolutional neural network
US10681272B2 (en) Device for providing realistic media image
CN103905741A (en) Ultra-high-definition panoramic video real-time generation and multi-channel synchronous play system
CN108886611A (en) The joining method and device of panoramic stereoscopic video system
CN104580933A (en) Multi-scale real-time monitoring video stitching device based on feature points and multi-scale real-time monitoring video stitching method
CN112365407B (en) Panoramic stitching method for camera with configurable visual angle
CN103337094A (en) Method for realizing three-dimensional reconstruction of movement by using binocular camera
CN101521823B (en) Spatial correlation panoramic data compressing method
CN107426491B (en) Implementation method of 360-degree panoramic video
CN102857739A (en) Distributed panorama monitoring system and method thereof
CN111557094A (en) Method, apparatus and stream for encoding/decoding a volumetric video
CN104809719A (en) Virtual view synthesis method based on homographic matrix partition
CN106447602A (en) Image mosaic method and device
KR101933037B1 (en) Apparatus for reproducing 360 degrees video images for virtual reality
KR102141319B1 (en) Super-resolution method for multi-view 360-degree image and image processing apparatus
CN104318604A (en) 3D image stitching method and apparatus

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20180615

Termination date: 20181016

CF01 Termination of patent right due to non-payment of annual fee