CN103795976B - A full spatio-temporal three-dimensional visualization method - Google Patents


Info

Publication number: CN103795976B
Application number: CN201310747292.0A
Authority: CN (China)
Prior art keywords: video, camera, full, data, time
Other versions: CN103795976A (Chinese-language publication)
Inventors: 张政, 周锋, 刘舟, 张贺, 何浩
Applicant/assignee: 北京正安维视科技股份有限公司
Status: granted


Abstract

The invention provides a full spatio-temporal three-dimensional visualization method, comprising: converting the encapsulation format of real-time video data collected by cameras; stitching and fusing the real-time fixed-angle video data into 3D GIS spatial data to form a panoramic stereoscopic video; using event targets in the panoramic stereoscopic video as the driver to realize cooperative camera tracking; and, based on the encapsulation-format conversion result, saving the real-time video data in a storage device to form historical video data, whose fixed-angle portion is likewise stitched and fused into the 3D GIS spatial data, realizing full spatio-temporal 3D visual display of historical video. In addition, to support intelligent video analysis over the whole scene and to realize multi-dimensional data fusion and 3D visual display, the invention also provides a whole-scene intelligent video analysis method and a full spatio-temporal 3D visualization method for multiple data types. The invention offers an effective, widely applicable means for macroscopic command and monitoring, global association, and integrated dispatching.

Description

A full spatio-temporal three-dimensional visualization method

Technical field

The present invention relates to the field of computer graphics, and in particular to a full spatio-temporal three-dimensional visualization method.

Background art

With the rapid development of information technology, application devices of many kinds — scanners, surveillance cameras, alarms, sensors and so on — collect massive amounts of data containing spatial positions, text, images, sound and other information. The ever-growing data volume poses a great challenge to data management. Since people obtain more than 80% of their information through vision, visualization technology has advanced significantly. Cities are now blanketed with surveillance cameras, alarms and sensors, yet the video pictures captured by the many cameras are discrete and independent, with poor correlation between pictures. Meanwhile, intelligent analysis of surveillance video has become a current research hotspot: computers are used to replace, or assist, people in spotting suspicious targets and potentially dangerous events in monitored scenes and in completing the corresponding control tasks. To achieve comprehensive intelligent monitoring of large scenes, so that observers can perceive changes in the surrounding scene faster and more accurately, it is desirable to see video imagery of a wider range through the cameras. The main existing approaches to this goal fall into the following categories:

1. Panoramic imaging technology

Traditional video capture obtains scene information from a single viewpoint in one observation direction, whereas panoramic imaging obtains scene information from a single viewpoint in all observation directions. Unlike a speed-dome camera, a panoramic camera observes and records multiple different regions simultaneously, achieving a wide monitoring range; within a given area it can replace many ordinary cameras, and having no mechanical parts it avoids high failure rates and reduces later maintenance work. Panoramic cameras are realized mainly in two ways: one uses a fisheye lens, giving 360-degree panoramic monitoring when ceiling-mounted or 180-degree panoramic monitoring when wall-mounted; the other stitches the views of multiple lenses into 360-degree coverage.

(1) Fisheye-lens panoramic imaging

Vendors such as Lancam, MOBOTIX, GeoVision, Hikvision and others have all developed this technology. A fisheye panoramic camera has a wider field of view and an advantage in capturing panoramic images. In the actual panorama-generation process, however, only the scenery at the picture center remains undistorted; due to the optics, scenery that should be horizontal or vertical is deformed, so the images gathered by a fisheye camera require complex correction and calibration. Even after processing, deformation of the fisheye image remains, and distortion at the edge regions is especially severe. Installation constraints further limit the range of application.

Fisheye cameras currently sit mainly in professional application markets, such as large-scene environments or settings suited to vertical mounting. As a complement to conventional cameras they have good development prospects, particularly in places where the concern is not fine detail but the overall process, such as crossroads, entrances and meeting rooms.

(2) Multi-lens panoramic imaging

Multi-lens panoramic cameras avoid the fisheye straightening problem, but the key issue is how to stitch the pictures from the several lenses into continuous panoramic monitoring with no dead angles or blind spots. Two classes of realization currently dominate: panoramic imaging based on stitching, and panoramic imaging based on catadioptric optics.

Stitching-based panoramic technology is comparatively mature; products of many well-known companies already offer panorama stitching, typical examples being Apple's QuickTime VR system, Microsoft's Surrounding Video system, and the Canon A710 IS and Kodak V705 digital cameras, while domestic institutions such as the National University of Defense Technology, Zhejiang University and Tsinghua University have proposed many new image-stitching methods and ideas, greatly enriching and refining mosaic-based panoramic imaging. However, panoramic images produced by stitching exhibit seams and brightness differences, the picture lacks coherence, and the generation process is complex. Applying stitched panoramas to video additionally requires a precisely calibrated, synchronously triggered multi-camera array; the equipment is complex, and, just as with stitched panoramic images, the poor picture coherence remains unsolved.

Panoramic imaging using catadioptric optical elements has also gradually matured; it is structurally simple, compatible with existing imaging devices, and easily extended to applications such as panoramic video capture.

Owing to various limiting factors, panoramic camera products and their domestic application remain at an early stage. The cost of current panoramic cameras is still too high for wide market adoption, and picture quality is another factor restricting development, covering two hard problems: panoramic resolution and image correction. A high-pixel panoramic product depends not only on sensor selection, ISP processing, encoding and the matching network support, but also on structure and manufacturing processes whose requirements are several times stricter than for ordinary cameras.

2. Integration of 3D GIS and surveillance video

With the release of 3D software such as Google Earth, Skyline Software, GeoFusion and the domestic "Image China", EV-Globe and VGEGIS, 3D GIS has become increasingly important in the foundational platforms of digital cities. How to integrate 3D GIS with the 2D video surveillance systems scattered across every corner of the city has become a research hotspot for location-based video surveillance. A surveillance system that integrates spatial position information can greatly enhance users' spatial awareness and assist emergency decision-making. The currently popular approach intermittently grabs snapshots from the live feeds of various locations and displays them as markers in the 3D scene; when a user wants to see the monitoring picture of a certain place, clicking the marker pops up a real-time snapshot. In this way camera data is introduced into the 3D GIS platform, realizing combined "dynamic" and "static" monitoring of the city. In practice, however, for reasons such as network transmission speed, this mode of 3D video monitoring falls far short of true fusion with the surrounding 3D information and loses a great deal of information.

3. Stereoscopic video monitoring technology

As research in computer vision, pattern recognition and related fields deepens, stereoscopic fusion of video has become a research hotspot. United States patent US2002191003 concerns video fusion with a limited number of video channels: the method can process only a limited number of channels simultaneously, cannot handle more than 16 channels of surveillance video in real time, is still far from large-scale application, and provides no video analysis after fusion. Because surveillance video volumes are huge, the inability to process more than 16 channels in real time restricts the method's applicable environments. Domestically, limited by the state of the art, no comparable product exists. Fusing massive video data in real time and presenting it as a stereoscopic panorama remains a world-class problem.

4. Intelligent analysis technology for video surveillance

For single-feed video analysis, countries such as the United States and Britain have carried out a large number of related projects. Since the mid-1990s, intelligent visual analysis has developed rapidly in Europe and America, chiefly studying automatic video-understanding techniques for monitoring battlefields and ordinary civilian scenes. Britain's ADVISOR project (Annotated Digital Video for Intelligent Surveillance and Optimized Retrieval) estimates crowd density and motion in subway environments from video data, analyzes the behavior of individuals and groups, and gives early warning of potentially dangerous or criminal events. Currently, IBM's S3 (Smart Surveillance System) team, Intel's IRISNET (Internet-scale Resource-intensive Sensor Network Services) team and others each lead different subfields of distributed intelligent monitoring. At home, institutions including Tsinghua University, Shanghai Jiao Tong University and the Institute of Computing Technology of the Chinese Academy of Sciences have carried out related intelligent visual-monitoring research; however, most domestic research remains at the laboratory stage, and multi-target tracking and recognition across outdoor multi-camera surveillance networks has yet to see large-scale application.

Stimulated by sharply rising market demand for intelligent video-analysis software, many foreign manufacturers now offer such products, and many of these are OEM products under the vendors' own brands built on ObjectVideo's image-analysis technology via the ObjectVideo OnBoard platform. On the solutions side there are many foreign success stories: San Francisco International Airport, for example, adopted Vidient's Smart Catch intelligent video-analysis system, which works with the airport's existing closed-circuit television to detect abnormal or suspicious behavior; when the software identifies an abnormal situation, it immediately sends video clips via pager, handheld computer, mobile phone or other communication device to responders, who come to investigate on site. But foreign products are essentially based on SD or CIF resolution, their monitoring field of view is small, and they are ill-suited to China's complex pedestrian flows and irregularly moving backgrounds; foreign military-grade intelligent analysis is comparatively successful, but the technology is restricted by secrecy. Domestic intelligent-analysis products, from vendors such as Wen'an Technology and others, mostly analyze single-channel surveillance video; products offering global intelligent analysis of a whole scene are rare.

In summary, video surveillance technology has evolved from local to panoramic monitoring and from planar to stereoscopic video monitoring, and important results have been achieved for large-scene surveillance. On the whole, however, there is as yet no widely deployable technology, at home or abroad, capable of globally monitoring a surveillance area, nor a general technology for whole-scene intelligent video analysis of a monitored area.

Summary of the invention

The aim of the invention is to overcome the above deficiencies of the prior art by providing a general, efficient method that automatically stitches and fuses, in real time, scattered videos from different viewing angles together with multi-sensor information, GPS information, access-control information and alarm information into 3D GIS spatial data, realizing full spatio-temporal 3D visual display and control within a panorama.

To achieve this goal, according to one aspect of the invention there is provided a full spatio-temporal 3D stitching-and-fusion method for surveillance video, comprising the following steps:

Step 1) Convert the encapsulation format of real-time video data collected by cameras. The real-time video data includes fixed-angle video data collected by box cameras and/or dome cameras at preset positions, as well as non-fixed-angle video data collected by dome cameras.

Step 2) Stitch and fuse the fixed-angle video data collected in real time into 3D GIS spatial data, forming a panoramic stereoscopic video and realizing real-time full spatio-temporal 3D visual display.

Step 3) Based on the non-fixed-angle video data preprocessed in step 1) and the panoramic stereoscopic video formed in step 2), use event targets in the panoramic view as the driver to realize cooperative camera tracking: when an observed target or position is clicked in the panoramic stereoscopic video or the 3D GIS spatial data, multiple surrounding cameras are called up to lock onto that region.

Step 4) Based on the encapsulation-format conversion result of step 1), save the real-time video data in a storage device, forming historical video data.

Step 5) Stitch and fuse the fixed-angle portion of the historical video data formed in step 4) into the 3D GIS spatial data, realizing full spatio-temporal 3D visual display of historical video so that historical video data can be traced back panoramically.
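Taken together, the five steps form a fixed control flow: convert, fuse, track, store, replay. A minimal sketch of that flow follows; every class and method name here (SpatioTemporalPipeline, convert_encapsulation, and so on) is hypothetical, standing in only for the components the steps describe, with each stage stubbed out to log itself.

```python
from dataclasses import dataclass, field

@dataclass
class SpatioTemporalPipeline:
    """Hypothetical skeleton of the five claimed steps; each stage only logs itself."""
    log: list = field(default_factory=list)

    def convert_encapsulation(self, raw):       # step 1: encapsulation-format conversion
        self.log.append("convert"); return raw
    def fuse_into_3dgis(self, fixed_angle):     # step 2: stitch/fuse into 3D GIS data
        self.log.append("fuse"); return "panoramic_stereoscopic_video"
    def cooperative_tracking(self, target):     # step 3: target-driven camera call-up
        self.log.append("track")
    def store_history(self, frames):            # step 4: persist converted streams
        self.log.append("store")
    def replay_history(self):                   # step 5: fuse stored fixed-angle video
        self.log.append("replay")

def run_cycle(p, fixed_angle, ptz, target):
    """One pass through steps 1-5 in the order the claims give them."""
    frames = p.convert_encapsulation(fixed_angle + ptz)
    panorama = p.fuse_into_3dgis(fixed_angle)
    p.cooperative_tracking(target)
    p.store_history(frames)
    p.replay_history()
    return panorama
```

The sketch only fixes the ordering and data handoffs between steps; each stub would be replaced by the corresponding subsystem described below.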

Further, the real-time full spatio-temporal 3D visual display realized in step 2) includes: large-scene monitoring of key areas, automatic cruising along key routes, linked display of two- and three-dimensional information, reverse camera association, and cooperative camera tracking.

Further, the full spatio-temporal 3D visual display realized in step 5) includes: large-scene monitoring of key areas, automatic cruising along key routes, linked display of two- and three-dimensional information, reverse camera association, and large-scene backtracking of historical events.

Further, the large-scene monitoring of key areas includes: monitoring the scene of a key area from user-preset observation points, and viewing the dynamics of the key area from a global perspective. The automatic cruising along key routes includes: defining a custom cruise track and cruising automatically at a set viewing angle and speed. The linked display of two- and three-dimensional information includes: synchronized display of the panoramic stereoscopic video with 2D GIS spatial data, synchronized display of the panoramic stereoscopic video with single-feed video data, and synchronized display of the 2D GIS spatial data with single-feed video data, with the positions and coverage areas of the cameras and the user's current observation point recorded in the 2D GIS spatial data. The reverse camera association includes: selecting a target or geographic position to be observed on the panoramic stereoscopic video or the 2D GIS spatial data, and associating, via that target or position, all cameras that cover it.

Further, cooperative camera tracking includes: interactively selecting an observed target or geographic position in the panoramic stereoscopic video or the 3D GIS spatial data, calling up multiple surrounding cameras to cover the region according to that target or position, and optically zooming the cameras so that detailed information is captured from all directions and angles.

Further, large-scene backtracking of historical events includes: reading the historical video data collected by multiple cameras from the storage device, visualizing the historical video data in the 3D GIS spatial data, and playing it forwards or backwards under full spatio-temporal retrieval — that is, setting a backtracking period and region and providing normal play, frame-by-frame forward and backward play, stop, fast forward, rewind and random-position play, so as to raise the efficiency of searching for historical events.
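The playback controls just listed can be sketched as a small controller over a list of stored frames. This is a hypothetical sketch (the class and method names are mine, not the patent's), ignoring timing, decoding and multi-camera synchronization; speed is measured in frames per tick.

```python
class HistoryPlayer:
    """Minimal sketch of the claimed playback controls over stored frames."""

    def __init__(self, frames):
        self.frames, self.pos, self.speed = frames, 0, 1

    def step(self):
        """Advance one tick; speed +/-1 is frame-by-frame, larger is FF/rewind."""
        self.pos = max(0, min(len(self.frames) - 1, self.pos + self.speed))
        return self.frames[self.pos]

    def play(self, backwards=False):
        self.speed = -1 if backwards else 1

    def fast_forward(self):
        self.speed = 4

    def rewind(self):
        self.speed = -4

    def seek(self, index):
        """Random-position play: jump directly to any stored frame."""
        self.pos = max(0, min(len(self.frames) - 1, index))
        return self.frames[self.pos]
```

In a real system the frame list would be an index into the storage device of step 4), and `step` would be driven by a render clock.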

Further, when fixed-angle video data is collected in step 1), the video coverage area captured by the camera does not change; when non-fixed-angle video data is collected, the video coverage area captured by the camera may change arbitrarily.

According to another aspect of the invention there is provided a whole-scene intelligent video analysis method, comprising the following steps:

Step 10) Select the region for intelligent video analysis.

Step 20) Using the full spatio-temporal 3D stitching-and-fusion method for surveillance video according to one of claims 1-7, establish the correspondence between the video data and three-dimensional space, carry out intelligent video analysis over the whole scene across camera lenses, and raise an alarm automatically when an anomaly is found.

Step 30) Display the analysis result of step 20) in the panoramic stereoscopic video according to one of claims 1-7, or automatically trigger cooperative camera tracking from the alarm information, viewing the detailed information of the alarm point through the cameras.

Further, in step 20) the intelligent video analysis includes cross-lens target tracking, pedestrian- and vehicle-flow density estimation, and abnormal-behavior detection. Abnormal-behavior detection includes tripwire-crossing detection, abnormal crowd-gathering detection, abandoned-object detection and abnormal-speed detection.
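Among the listed detections, tripwire-crossing has a compact geometric core: a tracked target crosses a virtual line when the segment between its positions in two consecutive frames intersects the tripwire segment. A minimal sketch under that assumption (the helper names are mine, not from the patent):

```python
def _side(p, q, r):
    """Signed orientation of point r relative to directed segment p->q."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def crosses_tripwire(p_prev, p_cur, a, b):
    """True if the motion segment p_prev->p_cur strictly crosses tripwire a-b."""
    return (_side(a, b, p_prev) * _side(a, b, p_cur) < 0 and
            _side(p_prev, p_cur, a) * _side(p_prev, p_cur, b) < 0)
```

A per-target alarm would call this on each new tracked position against every configured tripwire; direction of crossing can be read off the sign of `_side(a, b, p_cur)`.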

According to a further aspect of the invention there is provided a full spatio-temporal 3D visualization method for multiple data types, realizing fusion of panoramic video with sensor, GPS, access-control and alarm data, comprising the following steps:

Step 100) Input sensor information, GPS information, access-control information and alarm information.

Step 200) Using the full spatio-temporal 3D stitching-and-fusion method for surveillance video according to one of claims 1-7, establish the correspondence between the video data and three-dimensional space, and integrate the multi-sensor, GPS, access-control and alarm information into the panoramic stereoscopic video, realizing comprehensive multi-dimensional fusion and visual display.

According to yet another aspect of the invention there is provided a full spatio-temporal 3D visualization application of multiple data types linked with sensing, access-control and alarm systems, characterized in that the application is realized by the full spatio-temporal 3D visualization method for multiple data types according to claim 10.

By automatically stitching and fusing scattered camera videos with different viewing angles, together with sensor, GPS, access-control and alarm information from various locations, into 3D GIS to form a full spatio-temporal 3D visualization of multi-dimensional data, the invention lets users — without knowing the specific locations of the data-collection devices and without switching between video pictures — achieve panoramic stereoscopic monitoring, an organic combination of panoramic video display and detail control, panoramic backtracking of historical video, intelligent alarming and display under the whole scene, and comprehensive application of multiple data types. It provides an effective means for macroscopic command and monitoring, global association and integrated dispatching, and has wide applicability.

Brief description of the drawings

Embodiments of the invention are described in detail below with reference to the accompanying drawings, in which:

Fig. 1 is a flow chart of the method of the invention;

Fig. 2 is a schematic diagram of the feature-based image registration algorithm in an example;

Fig. 3 shows the panoramic stereoscopic video formed by stitching and fusing video into a three-dimensional scene model;

Fig. 4 is a schematic diagram of cooperative camera tracking;

Fig. 5 is a flow chart of panoramic intelligent analysis;

Fig. 6 is a schematic diagram of panoramic intelligent analysis results;

Fig. 7 is a schematic diagram of multi-type data fusion.

Detailed description of the embodiments

Specific embodiments of the invention are described below with reference to the accompanying drawings.

According to one embodiment of the invention there is provided a full spatio-temporal 3D stitching-and-fusion method for surveillance video; its flow chart is shown in Fig. 1. The method comprises the following steps.

Step 1: generate and/or obtain high-precision 2D/3D GIS spatial data, and collect and/or obtain video data.

In one embodiment, the 2D/3D GIS spatial data is generated from scene scan data, scene images, and CAD/architectural drawing data.

The 2D/3D GIS spatial data may be (but is not limited to) a two-dimensional map or a three-dimensional scene model. In a preferred embodiment, the 2D/3D GIS spatial data is a three-dimensional scene model.

In one embodiment, the video data is real-time video data collected by cameras; in another embodiment, it is real-time video data provided by a third-party platform. The video data includes fixed-angle video data collected by box cameras and/or dome cameras at preset positions, as well as non-fixed-angle video data collected by dome cameras.

According to a preferred embodiment of the invention, when the fixed-angle video data is collected, the video coverage area captured by the camera does not change; when non-fixed-angle video data is collected, the video coverage area captured by the camera may change arbitrarily.

Step 2: the video data is transmitted in real time to the command center over the network, where all types of data are preprocessed.

The main purpose of this step is to convert the encapsulation format of the externally obtained video data and distribute it in the internal data-transmission format.

In one embodiment, the video data is obtained through one of the following video access modes. In the first three modes, the external video data may be either camera-collected video (for example, the real-time camera video described above) or video provided by a third-party system (for example, the third-party real-time video described above); in the fourth mode, the external video data is camera-collected video only.

(1) RTSP mode (RTSP over TCP, RTSP over UDP)

Implemented according to the standard RTSP protocol: the H.264 data is parsed and encapsulated into an MPEG2-TS stream. Both the RTSP implementation and the MPEG2-TS implementation follow the standard protocols.

(2) RTP mode (RTP over UDP)

RTP is the encapsulation format used to carry H.264 data within the RTSP protocol; the H.264 data held in the RTP packets is parsed and encapsulated into an MPEG2-TS stream. RTP data is received by UDP multicast or unicast.

(3) MPEG-TS mode (MPEG-TS over UDP)

MPEG2-TS is the standard protocol within MPEG2; the H.264 data is parsed and encapsulated into an MPEG2-TS stream.

(4) Custom SDK development

When the camera to be accessed has no open protocol, it is accessed through the SDK provided by the camera vendor. If the SDK only provides display data such as YUV or RGB, that data is first encoded to H.264 and then encapsulated into MPEG2-TS.

In a preferred embodiment, the internal data-transfer format is MPEG-TS over multicast UDP, demultiplexed and distinguished by different TS PIDs.
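All four access modes thus converge on 188-byte MPEG2-TS packets demultiplexed by PID. A minimal sketch of building and parsing the fixed 4-byte TS header per ISO/IEC 13818-1 follows; the helper names are mine, not from the patent, and real packetization would also handle adaptation fields and PSI tables.

```python
TS_PACKET_SIZE = 188
SYNC_BYTE = 0x47

def parse_ts_header(packet: bytes) -> dict:
    """Parse the fixed 4-byte MPEG2-TS packet header (ISO/IEC 13818-1)."""
    if len(packet) != TS_PACKET_SIZE or packet[0] != SYNC_BYTE:
        raise ValueError("not a valid 188-byte TS packet")
    b1, b2, b3 = packet[1], packet[2], packet[3]
    return {
        "pusi": bool(b1 & 0x40),          # payload_unit_start_indicator
        "pid": ((b1 & 0x1F) << 8) | b2,   # 13-bit packet identifier
        "scrambling": (b3 >> 6) & 0x3,
        "adaptation": (b3 >> 4) & 0x3,    # 1=payload only, 2=adaptation only, 3=both
        "continuity": b3 & 0x0F,          # 4-bit continuity counter
    }

def make_ts_packet(pid: int, payload: bytes, pusi: bool = False, cc: int = 0) -> bytes:
    """Build a minimal payload-only TS packet, stuffing the body to 184 bytes."""
    header = bytes([SYNC_BYTE,
                    (0x40 if pusi else 0x00) | ((pid >> 8) & 0x1F),
                    pid & 0xFF,
                    0x10 | (cc & 0x0F)])
    return header + payload[:184].ljust(184, b"\xFF")
```

A demultiplexer along the lines described above would read `pid` from each arriving packet and route the payload to the stream (camera feed) registered under that PID.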

Step 3: automatically stitch and fuse the fixed-angle video data, collected and/or obtained in real time, into the three-dimensional scene model, forming a panoramic stereoscopic video and realizing real-time full spatio-temporal 3D visual display.

In a preferred embodiment, the automatic stitching and fusion proceeds as follows: all of the above fixed-angle video data is analyzed; foreground targets are detected and separated out; their spatial positions are computed accurately using camera calibration; and they are stitched and fused into the three-dimensional scene model according to their time-sequence information, realizing a stitched, fused display within the model in which the fused full spatio-temporal video is free of deformation and distortion.

Specifically, in this step the real-time fixed-angle video data is automatically stitched and fused into the 3D GIS spatial data to form the panoramic stereoscopic video; realizing the real-time full spatio-temporal 3D visual display involves several parts or sub-steps: foreground target detection, 3D reconstruction and fusion, video image normalization, and full spatio-temporal 3D visual display. Each part is illustrated below with reference to a preferred embodiment of the invention.

(1) Foreground target detection

Foreground means any significant moving target, assuming the background is static. Two aspects are of particular importance:

a. Multi-level foreground/background modeling

Background modeling is the key step of foreground-target extraction; the basic idea is to extract the foreground from the current frame, the aim being to keep the background model close to the background of the current video frame. The background is updated as a weighted average of the current frame and the current background of the video sequence. Owing to sudden illumination changes and other external influences, however, the background produced by ordinary modeling is not entirely clean; moreover, the speed of a moving target changes over time and the target may even become completely stationary, and updating such targets into the background would cause important targets to be missed. For this reason, the preferred embodiment uses multi-level Gaussian mixture models to detect, in real time and robustly, targets at all speeds (from moving to stationary). Extracting the background with a multi-level Gaussian mixture model robustly overcomes the influence of lighting, swaying branches and the like, and avoids losing track of moving objects that remain still for a long time.

The specific method characterizes each pixel of the image with K (for example, 3 to 5) Gaussian models. Concretely, the video sequence of a given pixel can be viewed as a time series

$$\{X_1, X_2, \ldots, X_t\} = \{\, I(x_0, y_0, i) : 1 \le i \le t \,\},$$

which is represented as a superposition of K Gaussian distributions, so the probability of the current value is

$$P(X_t) = \sum_{i=1}^{K} \omega_{i,t}\, \eta\!\left(X_t, \mu_{i,t}, \Sigma_{i,t}\right),$$

where K is the number of Gaussian components, $\omega_{i,t}$ is the weight coefficient of the i-th component, $\mu_{i,t}$ is the mean of the i-th Gaussian model, $\Sigma_{i,t}$ is the covariance matrix of the i-th Gaussian model, and $\eta$ is the Gaussian density, computed as

$$\eta(X_t, \mu, \Sigma) = \frac{1}{(2\pi)^{n/2}\,|\Sigma|^{1/2}} \exp\!\left(-\tfrac{1}{2}(X_t - \mu)^{T}\Sigma^{-1}(X_t - \mu)\right),$$

where $X_t$ is the value of the time series at time t.

After each new video frame is acquired, the Gaussian mixture model is updated: each pixel of the current image is matched against the mixture model, and from the matching result the pixel is judged to be a background point, a moving foreground point, or a stopped foreground point (such as a stopped vehicle, a pedestrian who has got out, or an abandoned package).
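The match-and-update cycle just described can be sketched for a single grayscale pixel. This is a simplified Stauffer-Grimson-style mixture, not the patent's exact multi-level model; all parameter values (K = 3, learning rate, 2.5-sigma match threshold, 0.7 background weight) are illustrative assumptions.

```python
import numpy as np

class PixelGMM:
    """Simplified K-Gaussian mixture for one grayscale pixel (Stauffer-Grimson style)."""

    def __init__(self, k=3, lr=0.05, var0=225.0, match_sigma=2.5, bg_thresh=0.7):
        self.lr, self.match_sigma, self.bg_thresh = lr, match_sigma, bg_thresh
        self.w = np.full(k, 1.0 / k)   # component weights
        self.mu = np.zeros(k)          # component means
        self.var = np.full(k, var0)    # component variances

    def update(self, x):
        """Match x against the mixture, update it, return True if x is foreground."""
        sd = np.sqrt(self.var)
        matched = np.abs(x - self.mu) < self.match_sigma * sd
        # background components: best-ranked by w/sd up to cumulative weight bg_thresh
        order = np.argsort(-(self.w / sd))
        n_bg = int(np.searchsorted(np.cumsum(self.w[order]), self.bg_thresh)) + 1
        is_foreground = not any(matched[j] for j in order[:n_bg])
        if matched.any():              # adapt the first matching component toward x
            i = int(np.argmax(matched))
            self.w *= (1.0 - self.lr)
            self.w[i] += self.lr
            self.mu[i] += self.lr * (x - self.mu[i])
            self.var[i] += self.lr * ((x - self.mu[i]) ** 2 - self.var[i])
        else:                          # no match: replace the weakest component with x
            i = int(np.argmin(self.w))
            self.mu[i], self.var[i], self.w[i] = float(x), 225.0, 0.05
        self.var = np.maximum(self.var, 4.0)   # variance floor for numeric stability
        self.w /= self.w.sum()
        return is_foreground
```

A full frame would hold one such model per pixel (vectorized in practice); distinguishing "moving" from "stopped" foreground, as the embodiment requires, would additionally track how long a pixel has stayed foreground.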

b. Moving-shadow suppression, noise elimination and missing-region compensation

Moving shadows are often mistakenly segmented as targets, causing wrong target segmentation and tracking. The system considers both color and texture information, determines the deformation a shadow causes from the color, spatial and texture properties of the shadow, and suppresses shadows by compensating the color deformation and correcting the texture. Noise and small missing target regions produced during target detection are quickly filtered and compensated using mathematical-morphology image processing.
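The mathematical-morphology noise filtering can be sketched as a 3x3 binary opening (erosion followed by dilation), which removes single-pixel speckle while restoring compact foreground blobs. This is an illustrative stand-in implemented with plain array shifts, not the patent's exact operator chain.

```python
import numpy as np

def binary_open(mask: np.ndarray, iterations: int = 1) -> np.ndarray:
    """3x3 morphological opening on a boolean mask (erosion, then dilation)."""
    h, w = mask.shape

    def erode(m):
        p = np.pad(m, 1, constant_values=False)
        out = np.ones((h, w), dtype=bool)
        for dy in (-1, 0, 1):          # pixel survives only if all 8 neighbours are set
            for dx in (-1, 0, 1):
                out &= p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        return out

    def dilate(m):
        p = np.pad(m, 1, constant_values=False)
        out = np.zeros((h, w), dtype=bool)
        for dy in (-1, 0, 1):          # pixel is set if any 8-neighbourhood pixel is set
            for dx in (-1, 0, 1):
                out |= p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        return out

    for _ in range(iterations):
        mask = dilate(erode(mask))
    return mask
```

The complementary operation, closing (dilation then erosion), would fill the small holes that the embodiment's "missing-region compensation" refers to.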

(2) Three-dimensional reconstruction and fusion

Three-dimensional reconstruction and fusion dynamically maps the captured two-dimensional video onto the three-dimensional scene model in real time, so that the real scene can be observed in three dimensions and monitored stereoscopically from all angles. To achieve this, the camera parameters of each video must first be computed; camera mapping and dynamic modeling of moving targets are then performed, yielding a deformation-free, distortion-free full space-time fusion.

In computer graphics, a physical camera can be described by the perspective projection model. Given the camera projection matrix, the pixel coordinates of any world point in the final projected image can be computed; a real camera likewise transforms the real scene into images and video through its projection matrix. Conversely, given existing image and video data, the data can be back-projected through the projection matrix onto the three-dimensional scene model, achieving distortion-free real-time three-dimensional rendering. In practice the camera projection matrix is usually unknown, while the video data and the three-dimensional scene model are known.
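The forward projection and its inverse (pixel ray intersected with a scene surface) can be sketched as follows. All numeric camera parameters here are toy assumptions; the back-projection intersects the pixel ray with a ground plane as a stand-in for the scene model surface.

```python
import numpy as np

# Toy pinhole camera: projection matrix P = K[R | t] maps world points to
# pixels; the inverse ray, intersected with a scene surface, maps video
# pixels back onto the 3D model. All numeric parameters are assumptions.
K = np.array([[800.0, 0.0, 320.0],    # focal lengths and principal point
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                         # camera axes aligned with the world
t = np.array([[0.0], [0.0], [5.0]])   # scene lies 5 units ahead
P = K @ np.hstack([R, t])             # 3x4 projection matrix

def project(Xw):
    """World point (3,) -> pixel coordinates (u, v)."""
    x = P @ np.append(Xw, 1.0)        # homogeneous multiply
    return x[:2] / x[2]

def backproject_to_plane(u, v, plane_z=0.0):
    """Back-project pixel (u, v) onto the world plane z = plane_z."""
    d = R.T @ (np.linalg.inv(K) @ np.array([u, v, 1.0]))  # ray direction
    c = (-R.T @ t).ravel()                                # camera center
    s = (plane_z - c[2]) / d[2]
    return c + s * d
```

The round trip `backproject_to_plane(*project(Xw))` recovers any point on the plane, which is exactly the property the fusion step relies on when it paints video onto the scene model.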

According to one preferred embodiment of the invention, feature points are first detected in the two-dimensional video and in the three-dimensional scene model; real-time stitching and fusion then requires image registration computed automatically and accurately in real time. In feature-based image registration, feature descriptors are used to measure the similarity between two images; a suitable feature descriptor is important for establishing the registration mapping between images and for improving registration accuracy. To adapt to scale changes between images and improve the accuracy of the registration algorithm, a multi-scale matching algorithm is introduced. In Fig. 2, Fig. 2(a) and Fig. 2(b) are images captured by cameras; the center of each circle marks the position of a feature point detected by the algorithm, and the radius of the circle indicates its scale. Fig. 2(c) shows the feature-point matching result between the two images. Assuming all parameters of Fig. 2(a) are known, the parameters of Fig. 2(b) can be obtained after matching via Fig. 2(c).

Then, through automatic or semi-supervised feature matching, the projection transformation matrix from the three-dimensional scene model to the two-dimensional video and the accurate parameters of the physical camera are recovered by inversion. A projection camera is created virtually in the three-dimensional scene, and the video is dynamically projected onto the scene surface, completing the space-time fusion.

By analyzing the video data, foreground targets are detected and separated. Using the camera parameters, the pixel coordinates of a target can be converted into a three-dimensional position, so that real-time dynamic three-dimensional modeling of the moving target can be carried out at that position. During fusion, the background only needs to be projected onto the static three-dimensional scene model, while foreground targets are projected onto dynamically reconstructed three-dimensional target models, achieving deformation-free, distortion-free full space-time fusion. This technique supports real-time processing of any number of video channels.

(3) Video image normalization

The steps above achieve stitching and fusion of large-scale camera video data in space and time. However, the video data may come from cameras of different brands, or from cameras using different photometric parameters such as exposure time, white balance, gamma correction and sensor sensitivity (ISO), all of which directly produce inconsistent color data. In addition, differences in camera installation time cause the video images to differ in color, brightness, saturation and contrast. To achieve a better visual stitching and fusion effect, the video images are normalized in color, brightness, saturation and contrast, improving the color consistency of the large-scale camera network. This is done in the following two steps:

A. Video color calibration

A Macbeth color chart is placed in the monitored area, and each camera is calibrated for gain and offset, minimizing differences in contrast and black level and ensuring linear response and white balance across the scene.

B. Color transfer between videos

The goal of normalization is a consistent color response rather than absolute color accuracy. It is therefore unnecessary to match every camera's video to a standard color; instead, pairs of camera videos are color-matched via color transfer. Specifically, the color characteristics of one video image are transferred to another, giving the target image colors similar to the source image.

Suppose two videos come from different viewpoints but have fixed illumination and different photometric parameters. Under the Lambertian assumption there exists a globally consistent color mapping between the two video images. Because the two images contain different regions, the color transfer method selects sample patches automatically: the target image and the source image are divided into corresponding sub-blocks using feature points, and the optimal color transfer function is computed by matching the color histograms of corresponding sub-blocks. For videos with different viewpoints, different illumination and different photometric parameters, no globally consistent color mapping exists between the videos; in that case a set of candidate color transfer functions is provided and the optimal result is chosen with human visual assistance. When a globally consistent color mapping exists, the estimated RMS error in each of the three RGB color channels is no more than 5%; when it does not exist, the result is still visually free of obvious color differences.
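A minimal global color transfer can be sketched by matching the per-channel mean and standard deviation of the target video frame to the source frame. This is a stand-in for the patent's block-wise histogram-matching method, shown only to illustrate the idea of a globally consistent color mapping:

```python
import numpy as np

def color_transfer(source, target):
    """Transfer the per-channel mean and standard deviation of `source`
    to `target` — a simple global color mapping of the kind assumed for
    Lambertian scenes with consistent illumination. Illustrative only,
    not the patent's sub-block histogram method."""
    out = np.empty_like(target, dtype=np.float64)
    for ch in range(3):
        s = source[..., ch].astype(np.float64)
        t = target[..., ch].astype(np.float64)
        t_std = t.std() if t.std() > 0 else 1.0
        # standardize the target channel, then rescale to the source statistics
        out[..., ch] = (t - t.mean()) / t_std * s.std() + s.mean()
    return np.clip(out, 0.0, 255.0)
```

After the transfer, the target frame's channel statistics match the source's, which is the "consistent color response" the normalization step aims for.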

(4) Full space-time three-dimensional visualization display

The display supports key-area large-scene monitoring, automatic cruising along critical paths, linked display of two- and three-dimensional information, camera reverse association, cooperative camera tracking, and large-scene playback of historical events.

Key-area large-scene monitoring: a key-area large scene is a scene covered by no fewer than two individual camera videos. The user monitors the key-area large scene from preset observation points, keeping a global view of its dynamics. Through the virtual projection camera in the three-dimensional scene, the observation viewpoint can be set arbitrarily and the key-area large scene monitored from the current view. Where high-mounted and low-mounted cameras coexist in the same scene, their videos are automatically stitched, fused and displayed together, with different video sources used for different viewpoints: for high, wide-field viewpoints, the high-mounted camera videos are stitched, fused and displayed; as the viewpoint is lowered, the low-mounted camera videos are used instead. Zooming is supported: a local region can be digitally zoomed and displayed.

Critical-path automatic cruising: user-defined cruise tracks are supported, and cruising proceeds automatically at the configured view angle and speed. A cruise path is composed of multiple path control points; by setting control points, straight, arc, circular, Catmull-Rom or compound paths can be formed, and the system cruises along the configured path at the configured speed.
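The Catmull-Rom path mentioned above interpolates smoothly between path control points. A minimal sketch of one segment, with control points assumed to be (x, y, z) tuples:

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Point on the Catmull-Rom segment between control points p1 and p2,
    for t in [0, 1]; p0 and p3 shape the tangents. Points are (x, y, z)
    path-control-point tuples, as used for camera cruise paths."""
    return tuple(
        0.5 * (2.0 * b
               + (-a + c) * t
               + (2.0 * a - 5.0 * b + 4.0 * c - d) * t * t
               + (-a + 3.0 * b - 3.0 * c + d) * t * t * t)
        for a, b, c, d in zip(p0, p1, p2, p3)
    )
```

The curve passes exactly through p1 at t = 0 and p2 at t = 1, so chaining segments over the control-point list yields a smooth cruise track that visits every point the user set.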

Linked display of two- and three-dimensional information: through three-dimensional reconstruction and fusion, synchronized interaction is established among the panoramic stereoscopic video, the 2D GIS spatial data and the individual camera videos. The panoramic stereoscopic video is displayed in synchronization with the 2D GIS spatial data and with the individual camera videos, and the 2D GIS spatial data is displayed in synchronization with the individual camera videos. The positions and coverage areas of the cameras and the position of the user's current observation point are recorded and can be displayed in the 2D GIS spatial data.

Camera reverse association: the user selects a target or geographic position to observe on the panoramic stereoscopic video or the 2D GIS spatial data, and all cameras covering that target or position are associated automatically according to the target or position.

Cooperative camera tracking: the user interactively selects an observation target or geographic position in the panoramic stereoscopic video or the 3D GIS spatial data; multiple surrounding cameras are then called to cover the region according to the target or position. The target size is adjustable, and adjusting it drives the cameras' optical zoom, so that detailed information is captured quickly from all directions and angles. For monitored areas containing both fixed and non-fixed cameras, the fixed-camera videos are uniformly stitched, fused and displayed; when the user interactively selects an observation target or geographic position, the fixed and non-fixed cameras are automatically associated, and the associated non-fixed cameras are called to cover the target area (see step 4).

Historical-event large-scene playback: the history video data collected by multiple cameras is read from the storage device and visualized in the 3D GIS spatial data, where it can be searched by forward or reverse playback in full space-time. A playback period and region are set, and forward play, frame-by-frame play, pause, fast forward, rewind and random-position play are provided to improve the efficiency of searching historical events (see step 5).

In Fig. 3, Fig. 3(a) shows multiple individual camera videos of a monitored scene, and Fig. 3(b) shows the panoramic stereoscopic video formed by fusing those videos into the three-dimensional scene model. The coverage of each camera and the monitoring blind areas can be seen intuitively in the panoramic stereoscopic video, so this method also provides an intuitive, scientific basis for designing camera placement and planning blind-area-free coverage.

Step 4: all non-fixed cameras in the scene are associated with the panoramic stereoscopic video. Driven by event targets in the panoramic monitoring view, cooperative camera tracking is performed: the user clicks an observation target or position in the panoramic stereoscopic video or the 3D GIS spatial data, and multiple surrounding cameras are automatically called to lock onto that region, giving targeted attention to details and combining a global overview with detailed control.

Cooperative linkage between the panoramic fusion video and the non-fixed cameras works as follows: when the user interactively selects an observation target or geographic position in the panoramic stereoscopic video, or selects a specific geographic position in the three-dimensional scene model, the system starts the cooperative tracking function and sends camera cooperation instructions; according to these instructions it automatically computes all non-fixed cameras associated with the region of interest and displays the output of the highest-priority camera videos.

This step comprises camera calibration, interaction setup, camera control and display output. Each of these parts is described below with reference to preferred embodiments of the invention.

(1) camera calibration

Camera calibration determines the relationship between the three-dimensional positions of points on the surface of a space object and their corresponding points in the image, establishing the geometric model of camera imaging. Camera calibration is the basis for cooperative camera tracking.

(2) Interaction setup

Human-computer interaction is implemented as follows: when the user notices a potential event of interest in the panoramic video, the event target serves as the driver. The user clicks the target directly in the panoramic video, without needing to know the positions, number or coverage of the cameras and without operating on individual cameras; multiple surrounding cameras are automatically dispatched according to the target position to cover the target area. The target size is adjustable, and adjusting it drives the optical zoom.

(3) camera control

Control instructions are sent to the cameras to achieve coordinated control. For a region covered by multiple cameras, the associated cameras are called according to a priority order based on the target area, camera coverage and other information — for example, preferring the camera whose field-of-view center contains the target region, and cameras evenly distributed around the target area.
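The priority-based dispatch described above can be sketched as follows. The camera records and the distance-to-center priority rule are illustrative assumptions; a real system would also weight coverage quality and camera load.

```python
import math

# Toy dispatch of associated cameras by priority: prefer cameras whose
# field-of-view center is closest to the target. All camera data assumed.
CAMERAS = {
    "cam1": {"center": (10.0, 10.0), "radius": 15.0},
    "cam2": {"center": (12.0, 8.0), "radius": 10.0},
    "cam3": {"center": (30.0, 30.0), "radius": 8.0},
}

def dispatch(target, k=2):
    """Return up to k cameras covering `target`, nearest center first."""
    covering = [name for name, cam in CAMERAS.items()
                if math.dist(target, cam["center"]) <= cam["radius"]]
    covering.sort(key=lambda n: math.dist(target, CAMERAS[n]["center"]))
    return covering[:k]
```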

(4) display output

The multiple camera videos associated with the selected area are displayed, with the panoramic stereoscopic video and the multiple camera videos shown in synchronization.

Fig. 4 is a schematic diagram of cooperative camera tracking: Fig. 4(a) is the panoramic stereoscopic video and Fig. 4(b) shows the associated camera videos.

Step 5: after video format conversion of the real-time video data, and/or after access-control and temperature/humidity sensor data are tagged with spatial positions, the data is stored in the storage device to form historical data. When historical events need to be reviewed, the saved fixed-angle video data and other types of data are stitched and fused into the three-dimensional scene model, achieving full space-time three-dimensional visualization of historical events (for example by forward or reverse playback), so that the history video data can be restored and reviewed panoramically.

According to one preferred embodiment of the invention, the stored video is H.264 encoded and encapsulated in an MPEG2-TS stream. Since the H.264 streams output by most cameras contain no B frames, the implementation below handles video without B frames. The access-control data and temperature/humidity sensor data are tagged with the spatial position attributes of the corresponding devices in the three-dimensional scene model. Video files are pre-allocated on disk; while historical data is stored, a recording index file is maintained that records the start and end times of each recording file.

The history playback function includes forward play, reverse play, accelerated play, slow play and random-position play. One or more channels can be selected from the camera list as needed, together with a specific time period, to play back the history video of a specific region over that period. The playback process is as follows:

(1) Video file retrieval

According to the time period selected by the user, the recording index file is loaded and searched for files satisfying the target condition, retrieving the information of the video files corresponding to that period.

(2) Building the video frame index

After the video files are retrieved, a frame index is built for them. The MPEG2-TS stream of each video file is parsed, and timestamps are used to judge whether the current read position falls within the target period. Once within the target period, the H.264 video data is parsed and the type, file start position and timestamp of each frame are recorded, until the current position leaves the target period. When the frame index is complete, an index table of video frames, file positions and timestamps is obtained.

(3) Fetching video frames

When a video request arrives, the frame index table is searched by the requested frame's timestamp to find the corresponding frame. If the frame is already in the YUV cache, its decoded YUV data is returned and displayed. If it is not in the cache, the frame must be decoded. If the current position is an I-P or I-I frame combination, decoding starts from the current frame, the decoded data is put into the YUV cache, and the frame's YUV data is returned. If it is a P frame, the index is traced back to the nearest preceding I-P or I-I combination and decoding proceeds forward from that frame, adding YUV data to the buffer, until the requested frame's YUV data is obtained.
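The lookup-then-walk-back logic can be sketched over the index table built in step (2). The index entries and field layout here are assumptions for illustration:

```python
import bisect

# Illustrative frame index built in step (2): (timestamp_ms, file_offset,
# frame_type) entries sorted by timestamp. All values are assumptions.
INDEX = [
    (0, 0, "I"), (40, 1200, "P"), (80, 2100, "P"),
    (120, 2900, "I"), (160, 4100, "P"), (200, 4900, "P"),
]

def frames_to_decode(request_ts):
    """Find the frame at or before `request_ts`, then walk back to the
    nearest I frame: a P frame can only be decoded starting from the
    preceding I frame."""
    stamps = [e[0] for e in INDEX]
    i = bisect.bisect_right(stamps, request_ts) - 1
    if i < 0:
        raise ValueError("timestamp precedes the recording")
    start = i
    while INDEX[start][2] != "I":
        start -= 1
    return INDEX[start:i + 1]   # decode in order; the last entry is the target
```

In the playback flow, each returned entry would be read from its file offset and decoded in sequence, with the decoded YUV frames cached so that nearby requests hit the cache instead of re-decoding.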

(4) Playback timeline

After the frame-fetching flow is complete, the playback timeline is established.

Forward and reverse play: frames are fetched according to the current time on the timeline, enabling frame-by-frame forward and reverse playback.

Variable-speed playback: the inter-frame scheduling interval is shortened or extended.

Random-position play: video frames are fetched according to the time position selected on the timeline.

According to another embodiment of the invention, a whole-scene intelligent video analysis method is provided, comprising the following steps:

Step 10): select the region for intelligent video analysis. According to one preferred embodiment, the region is the region of interest to be analyzed.

Step 20): using the aforementioned full space-time three-dimensional stitching and fusion method for monitor video, establish the correspondence between video data and three-dimensional space, perform cross-camera whole-scene intelligent video analysis, and raise an automatic alarm when an anomaly is found;

Step 30): display the analysis result of step 20) in the aforementioned panoramic stereoscopic video, or trigger cooperative camera tracking according to the alarm information and view the detailed information of the alarm point through the cameras.

In one preferred embodiment, in step 20), the intelligent video analysis includes cross-camera target tracking, crowd and vehicle density estimation, and abnormal behavior detection.

In the above scheme, many anomalous events — excessive crowd or vehicle density, abnormal crowd gathering, people moving against the flow, sudden acceleration or deceleration, and so on — span the areas monitored by multiple cameras, and recognizing such behavior patterns from a single viewpoint is often ineffective. In the full space-time three-dimensional visualization method, the correlation and complementarity among multiple cameras lay a solid foundation for accurately recognizing target behavior patterns. The invention therefore combines the panoramic stereoscopic video to perform intelligent video analysis across individual camera regions and automatically warn of various anomalous events; according to the warning type and level, the panoramic stereoscopic video of the warning location is automatically displayed, and continuous cross-camera tracking of suspicious targets, crowd and vehicle density estimation and abnormal behavior detection (including tripwire detection, abnormal crowd gathering detection, abandoned object detection and abnormal speed detection) are achieved, improving the effectiveness of the full space-time intelligent video monitoring system. The data processing flow, shown in Fig. 5, comprises the following steps:

(1) Single-camera abnormal behavior analysis

A. Preliminary anomaly screening

Abnormal behaviors are varied; it is technically impossible to define all behaviors in advance, and limited computing speed makes it impossible to detect and analyze every predefined abnormal behavior. The first stage of intelligent analysis therefore performs a preliminary screening of all possibly abnormal behavior and passes the anomalous results to the next stage for further analysis and processing. Although abnormal behavior cannot be enumerated exhaustively, a large number of normal scenes can be collected, accumulating massive amounts of normal motion data. Behavior patterns that these data cannot explain are, by definition, abnormal. The key to this method is discovering and categorizing normal behavior patterns from massive data.

Example-based matrix approximation and decomposition techniques can effectively monitor and analyze unusual fluctuations in large-scale data. A matrix approximation provides a small "sketch" of the original data, greatly improving computational efficiency; a low-rank approximation can also automatically extract structure from matrix-form data and suppress noise. A widely used low-rank approximation is the singular value decomposition, but SVD requires repeated matrix-vector multiplications, is slow, and cannot run in real time. Example-based matrix approximation and decomposition preserve the sparsity of the matrix and greatly improve computation speed, making real-time intelligent analysis possible. Specifically, given a motion matrix A, its low-rank approximation is typically expressed as A ≈ CUR, where C contains a group of representative columns of video-frame data selected from A — the examples. Likewise, R can be regarded as a subspace representing motion features, containing a group of rows selected from A. For time-varying data, the selections C and R are updated quickly on an incremental basis. The intermediate matrix U is computed by minimizing ||A − CUR||_F, where ||·||_F is the Frobenius norm. Normal and abnormal conditions can be distinguished by the sum-of-squares error (SSE) of the matrix approximation, giving early warning of abnormal behavior.
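The CUR decomposition and SSE-based anomaly score above can be sketched as follows. The pseudoinverse-based computation of U is a standard Frobenius-norm minimizer; the incremental column/row selection of the real system is omitted.

```python
import numpy as np

def cur_approx(A, col_idx, row_idx):
    """Example-based low-rank approximation A ~= C U R: C holds selected
    columns of A (the examples), R selected rows (the motion-feature
    subspace), and U = pinv(C) @ A @ pinv(R) minimizes ||A - C U R||_F."""
    C = A[:, col_idx]
    R = A[row_idx, :]
    U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
    return C, U, R

def sse(A, C, U, R):
    """Approximation error; a spike in this score flags motion the
    normal-behavior model cannot explain."""
    E = A - C @ U @ R
    return float((E * E).sum())
```

In use, C, U and R would be fit on normal motion data; frames whose SSE jumps well above the baseline are passed to the next stage as candidate anomalies.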

B. Abnormal behavior recognition and analysis

Abnormal behavior analysis depends on target tracking. Foreground target detection is implemented with the multi-level Gaussian mixture statistical learning method: moving targets are detected by computing the difference between the current frame and the background model, and the color features and corner features of detected targets are extracted. Using the color and corner features of the moving foreground, the motion trajectories of targets in the scene are extracted quickly, accurately and robustly; the method remains applicable in environments with severe occlusion. Combined with the panoramic stereoscopic video, this information enables comprehensive analysis of pedestrian and vehicle flows in a large scene, laying the foundation for real-time analysis of numerous targets, fast behavior-pattern analysis and fast target exclusion. Meanwhile, by statistically analyzing motion trajectories, a behavior distribution map is built and a series of abnormal behavior patterns — such as illegal intrusion, abnormal gathering, abandoned objects and abnormal speed — are predefined. When a preliminary alarm is raised, the abnormal behavior can be further recognized, classified and graded according to its statistical features.

(2) Space-time correlation across a large scene and multiple cameras

To achieve intelligent analysis of multi-channel monitor video across cameras in a large scene, a static Bayesian network is first used to model the spatial topology among the cameras, and a dynamic Bayesian network is then used to infer and predict the semantic associations among various behavior patterns.

The Bayesian network method is an uncertainty representation and inference model based on probability analysis and graph theory — an information representation framework that combines causal knowledge with probabilistic knowledge. A Bayesian network takes the form of a weighted causal network graph: each node represents a camera, and each directed arc between cameras represents a direct causal relationship between events. In a Bayesian network, qualitative information is expressed mainly by the network topology, and quantitative information mainly by the joint probability density over the nodes. A node with no incoming directed arc is a root node, for which a prior probability must be determined; a node with incoming arcs is a child node, for which conditional probabilities under the different states of its parents must be determined. As the basis for Bayesian network inference, the network parameters (prior and conditional probabilities) are assigned according to the spatial relations among the cameras.
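A minimal two-node instance of such a network can be sketched as follows: a root camera node with a prior, a child camera node with conditionals reflecting the cameras' spatial relation, and reverse inference by Bayes' rule. All probability values are illustrative assumptions.

```python
# Minimal two-node Bayesian network over cameras: root node A (event in
# camera A's area) with a prior; child node B (event observed by camera B)
# with conditionals derived from the cameras' spatial relation.
p_a = 0.1                  # prior probability of an event at camera A
p_b_given_a = 0.8          # event at A propagates into camera B's view
p_b_given_not_a = 0.05     # B raises an event with no cause at A

def posterior_a_given_b():
    """P(A | B): reverse inference from an observed child to its cause."""
    p_b = p_b_given_a * p_a + p_b_given_not_a * (1.0 - p_a)
    return p_b_given_a * p_a / p_b
```

Here observing an alarm at camera B raises the belief in an event at camera A from the prior 0.10 to about 0.64, which is the kind of cross-camera reasoning the network topology encodes.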

Based on the static Bayesian network, combined with the panoramic stereoscopic video, cross-camera, cross-region analysis of pedestrian and vehicle flows, personnel trajectories and speeds is achieved over the whole large scene.

A dynamic Bayesian network is the time-varying extension of a Bayesian network and can reflect the probabilistic dependencies among a series of behavior patterns across the cameras. Since the spatial topology of the camera network does not change over time, the cameras are assumed to satisfy the first-order Markov property, and the temporal continuity of behavior patterns is modeled on that assumption. The basic idea is that global behavior patterns are composed of a series of local behaviors; by recognizing local behaviors and their relations, the global scene and behavior can be predicted effectively. When one camera detects abnormal behavior, the dynamic Bayesian network quickly anticipates the associated cameras and behavior patterns, and the associated videos and information are displayed in the large scene.

When an abnormal alarm is raised, the target parameters are matched against predefined patterns to recognize the abnormal behavior and determine its priority level, and cross-camera intelligent alarms and results are displayed in real time.

At the same time, according to the priority of the abnormal alarm, the data of all cameras having space-time association with the alarm information is selected, and the panoramic stereoscopic video automatically focuses its display accordingly. Combined with the cooperative camera tracking function, cameras are automatically called to lock onto the region according to the alarm information.

In Fig. 6, Fig. 6(a) is the panoramic stereoscopic video formed by fusing monitor video into the three-dimensional scene model, and Fig. 6(b) is the crowd and vehicle density estimation result; the upper-left corner shows the density scale, dividing crowd and vehicle density into different grades. With intelligent crowd-density analysis, once the crowd density of a region becomes high, clicking the corresponding region of the density analysis result displays the panoramic video of that region.

According to still another embodiment of the invention, a full space-time three-dimensional visualization method for multi-source data is provided, comprising the following steps:

Step 100): input multiple kinds of sensor information, GPS information, access-control information and alarm information. In a preferred implementation, the above information is obtained and/or input from the region of interest to be analyzed.

Step 200): using the correspondence between video data and three-dimensional space established by the aforementioned full space-time three-dimensional stitching and fusion method for monitor video, incorporate the sensor information, GPS information, access-control information and alarm information into the panoramic stereoscopic video, achieving comprehensive multi-dimensional fusion and visual presentation.

Specifically, in the region of interest to be analyzed, the various data is first acquired and pre-processed: the access-control data and temperature/humidity sensor data are tagged with the spatial position attributes of the sensors in the three-dimensional model. Access-control information and sensor information come either from sensor SDKs or from external systems parsed through proprietary protocols. For directly acquired data, the SDK is called to fetch the data, which is then encapsulated into MPEG2-TS; for data obtained from an external system, acquisition may be by active polling or passive push, the data is obtained through the external system's proprietary parsing protocol, and it is likewise encapsulated into MPEG2-TS.

As for demultiplexing the different data: when the system receives MPEG2-TS data, the data includes video data, access-control data and temperature/humidity sensor data. The MPEG2-TS demultiplexing module distinguishes the type of each data stream by its PID and caches the different data separately; the demultiplexing module implements the standard MPEG2-TS protocol. The access-control data and temperature/humidity sensor data with spatial positions are stitched and fused into the three-dimensional scene model, establishing multi-dimensional data visualization on top of the panoramic fusion video and supporting inspection of multi-dimensional data attributes.
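The PID-based demultiplexing can be sketched over 188-byte transport-stream packets, whose 13-bit PID sits in bytes 1-2 of the header. The PID-to-stream assignments below are assumptions for illustration, and the sketch ignores adaptation fields:

```python
# Sketch of PID-based MPEG2-TS demultiplexing over 188-byte packets.
# The PID-to-stream assignments are illustrative assumptions.
TS_PACKET = 188
STREAMS = {0x100: "video", 0x200: "access_control", 0x300: "temp_humidity"}

def demux(ts_bytes):
    """Split a TS byte stream into per-type buffers using the 13-bit PID
    carried in bytes 1-2 of each packet header."""
    buffers = {name: bytearray() for name in STREAMS.values()}
    for off in range(0, len(ts_bytes) - TS_PACKET + 1, TS_PACKET):
        pkt = ts_bytes[off:off + TS_PACKET]
        if pkt[0] != 0x47:                    # TS sync byte
            continue
        pid = ((pkt[1] & 0x1F) << 8) | pkt[2]
        if pid in STREAMS:
            buffers[STREAMS[pid]] += pkt[4:]  # payload (no adaptation field)
    return buffers

def make_packet(pid, payload):
    """Build a toy TS packet (header flags simplified) for testing."""
    header = bytes([0x47, (pid >> 8) & 0x1F, pid & 0xFF, 0x10])
    return header + payload.ljust(TS_PACKET - 4, b"\x00")
```

A standard-compliant module would additionally honor the payload-unit-start and adaptation-field-control bits before extracting payload bytes.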

Fig. 7 is a schematic diagram of fusing video data, access-control data and temperature/humidity sensor data in the three-dimensional scene model. Through multi-source data fusion, multi-dimensional data attributes are displayed in the panoramic stereoscopic video, and coordinated control with other systems becomes possible: for example, when a temperature/humidity sensor detects a fire, the system automatically switches to the panoramic stereoscopic video of the alarm location and simultaneously links the access-control system, enabling remote control of access control from within the panoramic video and realizing rapid response to emergencies and macroscopic command. In Fig. 7, data is collected by camera 7-1, access controller 7-2, humidity sensor 7-3 and temperature sensor 7-4.

Through the above steps, video data captured at different positions and from different viewing angles, together with access control data and temperature/humidity sensor data, are stitched and fused into the three-dimensional scene model in real time, forming a panoramic stereoscopic visualization of multi-source data. This simultaneously enables panoramic recall and frame-by-frame playback of history video, the combination of panoramic video display with detail control, and intelligent alarming and display across the whole scene.

According to still another embodiment, there is provided a multi-source-data full space-time three-dimensional visualization application linked with a sensing system, an access control system, and an alarm system, the application being realized according to the aforementioned multi-source-data full space-time three-dimensional visualization method.

In the present embodiment, although the full space-time visualization is illustrated only for the three-dimensional scene model, camera video, access control data, and temperature/humidity sensor data, those of ordinary skill in the art should appreciate that the method of the present invention is equally applicable to 2D and 3D GIS spatial data, and that the video and various sensor data are not limited to data directly acquired by data acquisition devices but may also be multi-source data output by third-party platforms.

The above embodiments are merely illustrative of the technical solutions of the present invention and do not limit its scope. Without departing from the spirit of the present invention, various modifications and improvements made by those of ordinary skill in the art to the technical solutions of the present invention shall all fall within the protection scope determined by the claims of the present invention.

Claims (11)

1. A full space-time three-dimensional stitching and fusion method for monitor video, comprising the following steps: step 1), converting the encapsulation format of real-time video data collected by cameras, the real-time video data including fixed-angle video data collected by gun cameras and/or by dome cameras at preset positions, and non-fixed-angle video data collected by dome cameras; step 2), stitching and fusing the fixed-angle video data collected in real time into 3D GIS spatial data to form a panoramic stereoscopic video, realizing real-time full space-time three-dimensional visual display; step 3), according to the non-fixed-angle video data pre-processed in step 1) and the panoramic stereoscopic video formed in step 2), realizing coordinated camera tracking driven by event targets in the panoramic monitoring, i.e., clicking an observed target or position in the panoramic stereoscopic video or the 3D GIS spatial data causes multiple surrounding cameras to be called to lock onto that region; step 4), according to the encapsulation format conversion result of step 1), saving the real-time video data in a storage device to form history video data; step 5), stitching and fusing the fixed-angle video data in the history video data formed in step 4) into the 3D GIS spatial data, realizing full space-time three-dimensional visual display of the history video, so as to perform panoramic restoration and recall of the history video data; wherein said step 2) specifically includes foreground target detection, three-dimensional reconstruction and fusion, video image normalization, and full space-time three-dimensional visual display, and the foreground target detection includes:
A. Multi-level foreground/background modeling
Background modeling is a key step in foreground target extraction. Its basic idea is to extract the foreground from the current frame; its purpose is to make the background model more closely match the background of the current video frame, updating the background with a weighted average of the current frame and the current background frame of the video sequence. A multi-level Gaussian mixture model is used in order to detect targets of various speeds in real time and robustly; the method of extracting the background with a multi-level Gaussian mixture model robustly overcomes the influence of lighting changes, swaying branches and the like, and also overcomes the failure state that arises when a moving object remains stationary for a long time;
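The Gaussian-mixture background model can be illustrated, in greatly simplified form, for a single pixel. The parameter values (three components, learning rate 0.05, a 2.5-sigma match test, a 0.7 background weight threshold) are illustrative defaults in the spirit of the classic Stauffer-Grimson formulation, not values taken from the patent.

```python
class PixelGMM:
    """Per-pixel mixture of Gaussians; a sketch, not the patent's implementation."""
    def __init__(self, k=3, alpha=0.05, var0=36.0, t_bg=0.7):
        self.alpha, self.var0, self.t_bg = alpha, var0, t_bg
        self.w = [1.0 / k] * k      # component weights
        self.mu = [0.0] * k         # component means
        self.var = [var0] * k       # component variances

    def update(self, x):
        """Update the mixture with intensity x; return True if x is foreground."""
        matched = None
        for i in range(len(self.w)):
            if (x - self.mu[i]) ** 2 < 6.25 * self.var[i]:  # within 2.5 sigma
                matched = i
                break
        if matched is None:
            # replace the least-weighted component with a new Gaussian at x
            i = min(range(len(self.w)), key=lambda j: self.w[j])
            self.mu[i], self.var[i], self.w[i] = x, self.var0, self.alpha
        else:
            i = matched
            self.mu[i] += self.alpha * (x - self.mu[i])
            self.var[i] += self.alpha * ((x - self.mu[i]) ** 2 - self.var[i])
        # decay all weights, reinforce the matched/replaced component, renormalize
        for j in range(len(self.w)):
            self.w[j] = (1 - self.alpha) * self.w[j] + (self.alpha if j == i else 0.0)
        s = sum(self.w)
        self.w = [wj / s for wj in self.w]
        # background = highest-weight components covering t_bg of the total weight
        order = sorted(range(len(self.w)), key=lambda j: -self.w[j])
        acc, bg = 0.0, set()
        for j in order:
            bg.add(j)
            acc += self.w[j]
            if acc > self.t_bg:
                break
        return matched is None or matched not in bg
```

After the model has seen a stable intensity for a while, a sudden jump is flagged as foreground while the stable value remains background, which is the behavior the multi-level mixture relies on.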
B. Moving shadow suppression, noise elimination, and missing-region compensation
Moving shadows are often mistakenly segmented as targets, causing erroneous target segmentation and tracking. The system considers color and texture information, determines the deformation caused by shadows using their color, spatial and texture properties, and suppresses shadows through color deformation compensation and texture correction. For the noise and small missing target regions produced in target detection, image processing based on mathematical morphology quickly performs noise filtering and missing-region compensation;
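The morphological step can be sketched with plain set operations on a binary mask: an opening (erosion then dilation) removes speckle noise, and a closing (dilation then erosion) fills small missing regions. This is a hedged stand-in for the patent's unspecified morphological implementation.

```python
def dilate(mask):
    """3x3 dilation: grow each foreground pixel into its 8-neighbourhood."""
    return {(r + dr, c + dc) for (r, c) in mask
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)}

def erode(mask):
    """3x3 erosion: keep only pixels whose full 3x3 neighbourhood is foreground."""
    return {(r, c) for (r, c) in mask
            if all((r + dr, c + dc) in mask
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1))}

def clean(mask):
    """Opening (erode then dilate) strips speckle noise; closing (dilate then
    erode) fills small holes inside a detected target."""
    opened = dilate(erode(mask))
    return erode(dilate(opened))

# a 7x7 foreground blob with a one-pixel hole, plus one isolated noise pixel
mask = {(r, c) for r in range(7) for c in range(7)} - {(3, 3)}
mask.add((10, 10))
cleaned = clean(mask)
```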
Three-dimensional reconstruction and fusion dynamically maps the collected two-dimensional video information onto the three-dimensional scene model in real time, so that through three-dimensional virtual observation, real-time stereoscopic monitoring of the real scene from all angles and directions is realized. Specifically, feature points of the two-dimensional video and of the three-dimensional scene model are first detected; real-time stitching and fusion requires a fully automatic and accurate computational algorithm to realize real-time image registration. In feature-based image registration, feature descriptors are used to measure the similarity between the features of two images; a suitable feature descriptor is significant for establishing the registration mapping between images and for improving registration accuracy. To adapt to scale variations of the images and improve the accuracy of the registration algorithm, a multi-scale matching algorithm is introduced. Then, through automatic or semi-supervised feature matching, the projection transformation matrix from the three-dimensional scene model to the two-dimensional video and the accurate three-dimensional physical camera parameters are inversely computed; a projection camera is virtualized in the three-dimensional scene, and the video is dynamically projected onto the scene surface to complete the space-time fusion;
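One concrete (and deliberately simplified) way to perform the similarity measurement between feature descriptors mentioned above is nearest-neighbour matching with Lowe's ratio test, which accepts a match only when the best candidate is clearly better than the second best. The descriptors below are toy 2-D vectors rather than real SIFT/SURF-style descriptors; the ratio 0.8 is a common default, not a value from the patent.

```python
def match_features(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour descriptor matching with Lowe's ratio test.
    Assumes desc_b contains at least two candidate descriptors."""
    def dist2(p, q):
        # squared Euclidean distance between two descriptors
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q))
    matches = []
    for i, da in enumerate(desc_a):
        ranked = sorted(range(len(desc_b)), key=lambda j: dist2(da, desc_b[j]))
        best, second = ranked[0], ranked[1]
        # accept only if the best match is much closer than the runner-up
        if dist2(da, desc_b[best]) < ratio ** 2 * dist2(da, desc_b[second]):
            matches.append((i, best))
    return matches

matches = match_features([(0.0, 0.0), (5.0, 5.0)],
                         [(0.1, 0.0), (5.0, 5.1), (20.0, 20.0)])
```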
By analyzing the video data, foreground targets are detected and separated. With the aid of the camera parameters, the pixel coordinates of a target can be converted into three-dimensional position information, so that real-time dynamic three-dimensional modeling of dynamic targets can be performed at their three-dimensional positions. In the fusion process, only the background information needs to be projected onto the static three-dimensional scene model, while foreground targets are projected onto the dynamically reconstructed three-dimensional object models, thereby realizing full space-time fusion without deformation or distortion;
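The conversion from pixel coordinates to a three-dimensional position can be sketched for a calibrated pinhole camera by intersecting the viewing ray with the ground plane z = 0. The intrinsic values and the downward-looking rotation below are illustrative assumptions, not calibration data from the patent.

```python
def pixel_to_ground(u, v, fx, fy, cx, cy, cam_pos, R):
    """Back-project pixel (u, v) onto the ground plane z = 0.
    R is the world-to-camera rotation (rows = camera axes in the world frame)."""
    # viewing-ray direction in camera coordinates (pinhole model)
    d_cam = ((u - cx) / fx, (v - cy) / fy, 1.0)
    # rotate into world coordinates: d_world = R^T @ d_cam
    d = tuple(sum(R[k][i] * d_cam[k] for k in range(3)) for i in range(3))
    t = -cam_pos[2] / d[2]          # ray parameter where the ray reaches z = 0
    return tuple(cam_pos[i] + t * d[i] for i in range(3))

# assumed example: camera 10 m above the origin, looking straight down
R_down = ((1.0, 0.0, 0.0),
          (0.0, -1.0, 0.0),
          (0.0, 0.0, -1.0))
pt = pixel_to_ground(u=1500, v=500, fx=1000, fy=1000, cx=500, cy=500,
                     cam_pos=(0.0, 0.0, 10.0), R=R_down)
```

A pixel one focal length to the right of the principal point lands 10 m from the camera's ground footprint, as expected for a 10 m mounting height.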
The video image normalization specifically comprises the following two steps:
A. Video color calibration
A Macbeth color chart is placed in the monitored area, and each camera is calibrated by adjusting gain and offset to the limits of maximum contrast and minimum black level, ensuring a linear response and white balance of the scene;
B. Video color transfer
The goal of normalization is a consistent color response rather than absolute color accuracy; it is therefore unnecessary to match each camera's video to a standard color. Instead, camera videos are color-matched pairwise by color transfer: specifically, the color characteristics of one video image are transferred to another video image, so that the target image takes on colors similar to the source image.
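The pairwise color transfer can be illustrated per channel in the style of Reinhard et al.: each channel of the target image is shifted and scaled so that its mean and standard deviation match those of the source. A full implementation operates in the decorrelated lαβ color space; working directly on a single channel here is a simplification, not the patent's exact procedure.

```python
import statistics

def transfer_channel(source, target):
    """Match the mean/std of `target` values to those of `source` (one channel)."""
    mu_s, sd_s = statistics.mean(source), statistics.pstdev(source)
    mu_t, sd_t = statistics.mean(target), statistics.pstdev(target)
    scale = sd_s / sd_t if sd_t else 1.0
    return [mu_s + (x - mu_t) * scale for x in target]

# the target camera runs ~90 levels brighter; the transfer pulls its
# channel statistics onto those of the source camera
matched = transfer_channel([10, 20, 30], [100, 110, 120])
```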
2. The full space-time three-dimensional stitching and fusion method for monitor video according to claim 1, characterized in that the real-time full space-time three-dimensional visual display realized in said step 2) includes: key-area large-scene monitoring, key-route automatic cruising, associated display of two-dimensional and three-dimensional information, camera reverse association, and coordinated camera tracking.
3. The full space-time three-dimensional stitching and fusion method for monitor video according to claim 1, characterized in that the full space-time three-dimensional visual display realized in said step 5) includes: key-area large-scene monitoring, key-route automatic cruising, associated display of two-dimensional and three-dimensional information, camera reverse association, and large-scene recall of historical events.
4. The full space-time three-dimensional stitching and fusion method for monitor video according to claim 2 or 3, characterized in that the key-area large-scene monitoring includes: monitoring the scene of a key area from user-preset observation points and viewing the dynamics of the key area from a global perspective; the key-route automatic cruising includes: defining a custom cruise track and cruising automatically according to the set viewing angle and speed; the associated display of two-dimensional and three-dimensional information includes: synchronized display of the panoramic stereoscopic video with 2D GIS spatial data, synchronized display of the panoramic stereoscopic video with split-screen video data, and synchronized display of 2D GIS spatial data with split-screen video data, the positions and coverage areas of the cameras and the position of the user's current observation point being recorded in the 2D GIS spatial data; the camera reverse association includes: selecting a target or geographical position to be observed on the panoramic stereoscopic video or the 2D GIS spatial data, and associating, according to that target or geographical position, all cameras that cover it.
5. The full space-time three-dimensional stitching and fusion method for monitor video according to claim 2, characterized in that the coordinated camera tracking includes: interactively selecting an observed target or geographical position in the panoramic stereoscopic video or the 3D GIS spatial data, calling multiple surrounding cameras to cover that region according to the target or geographical position, and optionally applying optical zoom to the cameras to capture detailed information from all directions and multiple angles.
6. The full space-time three-dimensional stitching and fusion method for monitor video according to claim 3, characterized in that the large-scene recall of historical events includes: reading the history video data collected by multiple cameras from the storage device, visualizing the history video data in the 3D GIS spatial data, and searching forward or backward under the full space-time perspective, i.e., setting the period and region to be recalled and providing normal play, frame-by-frame play, stop, fast forward, frame-by-frame rewind, and play from an arbitrary position, so as to improve the query efficiency for historical events.
7. The full space-time three-dimensional stitching and fusion method for monitor video according to claim 1, characterized in that in said step 1), when fixed-angle video data are collected, the coverage area of the video collected by the camera does not change; when non-fixed-angle video data are collected, the coverage area of the video collected by the camera may change arbitrarily.
8. A whole-scene intelligent video analysis method, comprising the following steps: step 10), selecting a video intelligent analysis region; step 20), using the full space-time three-dimensional stitching and fusion method for monitor video according to any one of claims 1-7, establishing the correspondence between video data and three-dimensional space, performing intelligent video analysis across cameras over the whole scene, and alarming automatically when an abnormality is found; step 30), displaying the analysis result of step 20) in the panoramic stereoscopic video according to any one of claims 1-7, or automatically triggering coordinated camera tracking according to the alarm information and viewing the detailed information of the alarm point through the cameras.
9. The whole-scene intelligent video analysis method according to claim 8, characterized in that in said step 20), the intelligent video analysis includes cross-camera target tracking, pedestrian and vehicle density estimation, and abnormal behavior detection.
10. A multi-source-data full space-time three-dimensional visualization method, comprising the following steps: step 100), inputting sensor information, GPS information, access control information, and alarm information; step 200), using the correspondence between video data and three-dimensional space established by the full space-time three-dimensional stitching and fusion method for monitor video according to any one of claims 1-7, incorporating the multiple kinds of sensor information, GPS information, access control information, and alarm information into the panoramic stereoscopic video, so as to realize comprehensive multidimensional fusion and visual display.
11. A multi-source-data full space-time three-dimensional visualization application linked with a sensing system, an access control system, and an alarm system, characterized in that the application is realized according to the multi-source-data full space-time three-dimensional visualization method of claim 10.
CN201310747292.0A 2013-12-30 2013-12-30 A kind of full-time empty 3 d visualization method CN103795976B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310747292.0A CN103795976B (en) 2013-12-30 2013-12-30 A kind of full-time empty 3 d visualization method


Publications (2)

Publication Number Publication Date
CN103795976A CN103795976A (en) 2014-05-14
CN103795976B true CN103795976B (en) 2017-09-19

Family

ID=50671204

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310747292.0A CN103795976B (en) 2013-12-30 2013-12-30 A kind of full-time empty 3 d visualization method

Country Status (1)

Country Link
CN (1) CN103795976B (en)

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104063224B (en) * 2014-06-27 2017-06-09 广东威创视讯科技股份有限公司 Switch the method and device of multiple Precise control scenes based on three-dimension GIS
CN104301673B (en) * 2014-09-28 2017-09-05 北京正安维视科技股份有限公司 A kind of real-time traffic analysis and panorama visual method based on video analysis
CN104363427A (en) * 2014-11-28 2015-02-18 北京黎阳之光科技有限公司 Full-live-action video intelligent monitoring system
CN104615735B (en) * 2015-02-11 2019-03-15 中科星图股份有限公司 A kind of space time information method for visualizing based on geographical information space system
CN104639916A (en) * 2015-03-04 2015-05-20 合肥巨清信息科技有限公司 Large-scene multi-target tracking shooting video monitoring system and monitoring method thereof
CN106033623A (en) * 2015-03-16 2016-10-19 深圳市贝尔信智能系统有限公司 3D visualized mass data processing method, apparatus and system thereof
CN106034222A (en) * 2015-03-16 2016-10-19 深圳市贝尔信智能系统有限公司 Stereometric object capturing method, apparatus and system thereof
CN106034221A (en) * 2015-03-16 2016-10-19 深圳市贝尔信智能系统有限公司 Wisdom-city omnibearing video information acquisition method, apparatus and system thereof
CN106034220A (en) * 2015-03-16 2016-10-19 深圳市贝尔信智能系统有限公司 Smart city mass monitoring method, device and system
CN105100733A (en) * 2015-08-27 2015-11-25 广东威创视讯科技股份有限公司 Video playing method and system of mosaic display device
CN105204470A (en) * 2015-09-28 2015-12-30 山东电力工程咨询院有限公司 Live-action visual intelligent construction site managing system based on three-dimensional model
CN105262949A (en) * 2015-10-15 2016-01-20 浙江卓锐科技股份有限公司 Multifunctional panorama video real-time splicing method
CN105208372A (en) * 2015-10-15 2015-12-30 浙江卓锐科技股份有限公司 3D landscape generation system and method with interaction measurable function and reality sense
CN105611253A (en) * 2016-01-13 2016-05-25 天津中科智能识别产业技术研究院有限公司 Situation awareness system based on intelligent video analysis technology
CN106096502A (en) * 2016-05-27 2016-11-09 大连楼兰科技股份有限公司 Car networked virtual reality panorama playback system and method
CN106096501A (en) * 2016-05-27 2016-11-09 大连楼兰科技股份有限公司 Car networked virtual reality panorama playback platform
CN106210631A (en) * 2016-07-16 2016-12-07 惠州学院 The system for rapidly identifying of a kind of different angles video object and method
CN108174265B (en) * 2016-12-07 2019-11-29 华为技术有限公司 A kind of playback method, the apparatus and system of 360 degree of panoramic videos
CN106896736A (en) * 2017-03-03 2017-06-27 京东方科技集团股份有限公司 Intelligent remote nurses method and device
CN107018383A (en) * 2017-05-10 2017-08-04 合肥慧图软件有限公司 A kind of video frequency monitoring system being combined based on virtual reality with high accuracy positioning
CN108614853A (en) * 2018-03-15 2018-10-02 中国人民解放军63895部队 A kind of multi-data source synchronizing information mixing storage and playback system and method
CN108810517A (en) * 2018-07-05 2018-11-13 盎锐(上海)信息科技有限公司 Image processor with monitoring function and method
CN109040730A (en) * 2018-08-20 2018-12-18 武汉理工大学 A kind of dynamic spends extra large scene system and its working method

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5745126A (en) * 1995-03-31 1998-04-28 The Regents Of The University Of California Machine synthesis of a virtual video camera/image of a scene from multiple video cameras/images of the scene in accordance with a particular perspective on the scene, an object in the scene, or an event in the scene
CN102006456A (en) * 2010-10-28 2011-04-06 中星电子股份有限公司 Cloud platform camera, cloud platform monitoring system and method for carrying out direction orientation
CN102045549A (en) * 2010-12-28 2011-05-04 天津市亚安科技电子有限公司 Method and device for controlling linkage-tracking moving target of monitoring device
CN102118611A (en) * 2011-04-15 2011-07-06 中国电信股份有限公司 Digital video surveillance method, digital video surveillance system and digital video surveillance platform for moving object
CN102256154A (en) * 2011-07-28 2011-11-23 中国科学院自动化研究所 Method and system for positioning and playing three-dimensional panoramic video
CN103096032A (en) * 2012-04-17 2013-05-08 北京明科全讯技术有限公司 Panorama monitoring system and method thereof
CN103400371A (en) * 2013-07-09 2013-11-20 河海大学 Multi-camera synergistic monitoring equipment and method




Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
COR Change of bibliographic data
CB02 Change of applicant information

Address after: 100088, room 600, block C, 28 Xinjie street, Xinjie, Beijing, Xicheng District

Applicant after: BEIJING INNOVISGROUP TECHNOLOGY CO., LTD.

Address before: 100088, No. 1, block A, block 28, Xinjie street, Xicheng District, Beijing,

Applicant before: BEIJING ZHENGAN RONGHAN TECHNOLOGY CO., LTD.

GR01 Patent grant