CN101375599A - Method and system for performing video flashlight - Google Patents

Method and system for performing video flashlight

Info

Publication number
CN101375599A
CN101375599A · CNA2005800260173A · CN200580026017A
Authority
CN
China
Prior art keywords
video
viewpoint
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CNA2005800260173A
Other languages
Chinese (zh)
Inventor
S. Samarasekera
K. Hanna
H. Sawhney
R. Kumar
A. Arpa
V. Paragano
T. Germano
M. Aggarwal
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
L3 COMMUNICATION AVIAT RECORDE
L3 Technologies Inc
Original Assignee
L3 COMMUNICATION AVIAT RECORDE
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by L3 COMMUNICATION AVIAT RECORDE
Publication of CN101375599A
Legal status: Pending

Landscapes

  • Closed-Circuit Television Systems (AREA)
  • Alarm Systems (AREA)

Abstract

In an immersive surveillance system, video and other data from a large number of cameras and other sensors are managed and displayed by a video processing system that overlays the data within a rendered 2D or 3D model of a scene. The system has a viewpoint selector configured to allow a user to selectively identify a viewpoint from which to view the site. A video control system receives data identifying the viewpoint and, based on the viewpoint, automatically selects the subset of the plurality of cameras that is generating video relevant to the view from that viewpoint, and causes video from that subset of cameras to be transmitted to the video processing system. As the viewpoint changes, the cameras communicating with the video processor are changed, handing off to cameras generating video relevant to the new position. Playback in the immersive environment is provided by synchronizing time-stamped recordings of video. Navigation of the viewpoint along constrained paths in the model, as well as map-based navigation, is also provided.

Description

Method and system for performing video flashlight
Related application
This application claims priority to U.S. Provisional Application Ser. No. 60/575,895, entitled "METHOD AND SYSTEM FOR PERFORMING VIDEO FLASHLIGHT," filed June 1, 2004; U.S. Provisional Patent Application Ser. No. 60/575,894, entitled "METHOD AND SYSTEM FOR WIDE AREA SECURITY MONITORING, SENSOR MANAGEMENT AND SITUATIONAL AWARENESS," filed June 1, 2004; and U.S. Provisional Application Ser. No. 60/576,050, entitled "VIDEO FLASHLIGHT/VISION ALERT," filed June 1, 2004.
Technical field
The present invention relates generally to image processing, and more particularly to systems and methods for providing immersive surveillance, in which video from many cameras at a site or in an environment is managed by overlaying the video from those cameras on a 2D or 3D model of the scene.
Background technology
Immersive surveillance systems provide a view of a site from security cameras at the site. The video output of the cameras in an immersive system is combined with a rendered computer model of the site. These systems allow a user to move through the virtual model and automatically view associated video, including real-time video from camera feeds, presented in the immersive virtual environment. An example of such a system is the VIDEO FLASHLIGHT™ system shown in published U.S. patent application 2003/0085992, published May 8, 2003, which is herein incorporated by reference.
Such systems can encounter problems of communication bandwidth. An immersive surveillance system may be made up of tens, hundreds, or even thousands of cameras, all generating video simultaneously. When streamed or transmitted through the system's communication network to the central viewing station, terminal, or other display device at which the immersive system is viewed, this collectively constitutes a very large amount of streaming data. To accommodate such a data volume, a large number of cables, or other connection systems with a great deal of bandwidth, must be provided to carry all of the data; otherwise the system may run up against data-transmission-rate limits, meaning that some video of potential significance to security personnel may not be available at all at the viewing station or display terminal, reducing the effectiveness of the surveillance.
In addition, earlier immersive systems did not provide immersive playback of the system's video: they allowed the user only to view current video from the cameras, or could only replay previously displayed immersive video without any freedom to change the viewpoint position.
Furthermore, in such systems the user generally navigates essentially without limitation, controlling his or her viewpoint with a mouse or joystick. While this gives the user great freedom to investigate and move about, it can also leave the user lost in the scene being viewed, and make it difficult to move the viewpoint back to a useful position.
Summary of the invention
It is therefore an object of the present invention to provide systems and methods for immersive video systems that improve upon immersive video systems in these respects.
In one embodiment, the invention relates generally to systems and methods for managing a large number of videos by overlaying the videos on a 2D or 3D model of a scene, particularly in systems such as that shown in U.S. published application 2003/0085992, which is herein incorporated by reference.
According to one aspect of the invention, a surveillance system for a site has a plurality of cameras, each producing a respective video of a respective portion of the site. A viewpoint selector is configured to allow a user to selectively identify a viewpoint from which the site, or part of the site, can be viewed. A video processing system is connected with the viewpoint selector so as to receive from it data indicative of the viewpoint, and is connected with the plurality of cameras so as to receive video from them. The video processing system has access to a computer model of the site. From the computer model, the video processing system renders a real-time image corresponding to the view of the site seen from the viewpoint, with at least a portion of at least one of the videos overlaid on the computer model in the real-time image. The video processing system displays the image in real time to a viewer. A video control system receives the data identifying the viewpoint and, based on the viewpoint, automatically selects the subset of the plurality of cameras that is generating video relevant to the view of the site rendered by the video processing system, and causes video from that subset of cameras to be transmitted to the video processing system.
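The patent does not give an implementation of this camera-selection step; the following is a minimal sketch (all names, and the cone-shaped 2D visibility test, are assumptions) of how a video control system might pick the subset of cameras relevant to a viewpoint, so that only their video need be transmitted:

```python
import math

def select_relevant_cameras(cameras, viewpoint, view_dir, fov_deg=60.0, max_range=200.0):
    """Return camera ids whose ground position lies inside the viewing
    frustum, approximated here as a 2D cone from the viewpoint."""
    half_fov = math.radians(fov_deg) / 2.0
    vx, vy = view_dir
    norm = math.hypot(vx, vy)
    vx, vy = vx / norm, vy / norm
    selected = []
    for cam_id, (cx, cy) in cameras.items():
        dx, dy = cx - viewpoint[0], cy - viewpoint[1]
        dist = math.hypot(dx, dy)
        if dist > max_range or dist == 0.0:
            continue
        # Angle between the view direction and the direction to the camera.
        cos_angle = (dx * vx + dy * vy) / dist
        if math.acos(max(-1.0, min(1.0, cos_angle))) <= half_fov:
            selected.append(cam_id)
    return selected

cameras = {"gate": (10.0, 0.0), "fence": (0.0, 50.0), "lot": (-30.0, 0.0)}
print(select_relevant_cameras(cameras, (0.0, 0.0), (1.0, 0.0)))  # → ['gate']
```

A production system would test camera view frusta against the 3D model rather than this flat cone, but the selection principle, i.e., stream only what the current viewpoint can see, is the same.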
According to another aspect of the invention, a surveillance system for a site has a plurality of cameras, each generating a respective data stream. Each data stream comprises a series of video frames, each video frame corresponding to a real-time image of a portion of the site, and each frame has a time stamp indicating the time at which the real-time image was generated by the associated camera. A recorder system receives and records the data streams from the cameras. A video processing system is connected with the recorder and provides playback of the recorded data streams. The video processing system has a renderer that, during playback of the recorded data streams, renders images of the view seen from a playback viewpoint of a model of the site, applying to the view the recorded data streams from at least two cameras relevant to that view. The video processing system includes a synchronizer that receives the recorded data streams from the recorder system during playback. The synchronizer distributes the recorded streams to the renderer in synchronized form, so that each image is rendered using video frames that were all captured at the same time.
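As a rough illustration of the synchronizer described above (structure and names are hypothetical, not from the patent), recorded streams of time-stamped frames can be aligned so that each rendered image uses frames captured at the same moment:

```python
import bisect

def synchronize(streams, render_time):
    """streams: {camera_id: sorted list of (timestamp, frame)}.
    For each stream, return the latest frame at or before render_time,
    so every rendered image is composed of same-instant frames."""
    out = {}
    for cam_id, frames in streams.items():
        times = [t for t, _ in frames]
        i = bisect.bisect_right(times, render_time) - 1
        if i >= 0:
            out[cam_id] = frames[i][1]
    return out

streams = {
    "cam1": [(0.0, "a0"), (1.0, "a1"), (2.0, "a2")],
    "cam2": [(0.5, "b0"), (1.5, "b1")],
}
print(synchronize(streams, 1.2))  # → {'cam1': 'a1', 'cam2': 'b0'}
```

Advancing `render_time` forward or backward gives the synchronized forward/reverse playback described later in the DVR control section.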
According to another aspect of the invention, an immersive surveillance system has a plurality of cameras, each producing a respective video of a respective portion of a site. An image processor is connected with the plurality of cameras and receives the videos from them. The image processor produces, for a viewpoint, a rendered image based on a model of the site combined with the videos relevant to that viewpoint. A display device is connected with the image processor and displays the rendered image. A view controller connected to the image processor provides to it data defining the viewpoint to be displayed. The view controller is also connected with, and receives input from, an interactive navigation component that allows the user to selectively change the viewpoint.
According to a further aspect of the invention, a method is provided that comprises receiving, from an input device, data indicating a selection of a viewpoint and field of view for viewing at least some of the video from a plurality of cameras of a surveillance system. A subgroup of one or more of the cameras is identified as being positioned such that those cameras can generate video relevant to the field of view. Video from the subgroup of cameras is transmitted to a video processor. A video display is generated by the video processor by rendering an image from a computer model of the site, the image corresponding to the field of view seen from the viewpoint, with at least a portion of at least one of the videos overlaid on the computer model in the image. The image is displayed to a viewer, and video from at least some of the cameras not in the subgroup is not transmitted to the video rendering system, thereby reducing the amount of data transmitted to the video processor.
According to another aspect of the invention, a method for a surveillance system includes recording the data streams of the cameras on one or more recorders. The data streams are recorded together in synchronized form, each frame having a time stamp indicating the time at which the real-time image was generated by the associated camera. There is communication with the recorders so as to cause them to transmit the recorded data streams of the cameras to a video processor. The recorded data streams are received and their frames synchronized based on their time stamps. Data are received from an input device indicating a selection of a viewpoint and field of view for viewing at least some of the video from the cameras. A video display is generated by the video processor by rendering an image from a computer model of the site, the image corresponding to the field of view seen from the viewpoint, with at least a portion of at least two of the videos overlaid on the computer model in the image. For each rendered image, the video overlaid on it consists of frames whose time stamps all indicate the same period of time. The images are displayed to a viewer.
According to a further method of the invention, the recorded data streams of the cameras are transmitted to a video processor. Data are received from an input device indicating a selection of a viewpoint and field of view for viewing at least some of the video from the cameras. A video display is generated by the video processor by rendering an image from a computer model of the site. The image corresponds to the field of view seen from the viewpoint, with at least a portion of at least two of the videos overlaid on the computer model in the image. The image is displayed to a viewer. Input is received indicating a change of the viewpoint and/or field of view. The input is constrained so that the operator can enter only changes of viewpoint to new fields of view that fall within a limited subset of all possible changes, the limited subset corresponding to a path through the site.
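A minimal sketch of the constrained navigation this method describes, assuming the allowed path is given as an ordered list of waypoints (all names are hypothetical); the operator can only step along the path, never leave it, which prevents getting lost in free 3D space:

```python
class PathNavigator:
    """Viewpoint navigation constrained to a predefined path through the site."""

    def __init__(self, waypoints):
        self.waypoints = waypoints  # ordered 2D points defining the allowed path
        self.index = 0

    def move(self, steps):
        """Advance (or back up) along the path; motion is clamped at its ends."""
        self.index = max(0, min(len(self.waypoints) - 1, self.index + steps))
        return self.waypoints[self.index]

nav = PathNavigator([(0, 0), (10, 0), (10, 10), (20, 10)])
print(nav.move(2))   # → (10, 10)
print(nav.move(5))   # → (20, 10)  (clamped at the end of the path)
print(nav.move(-1))  # → (10, 10)
```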
Brief description of the drawings
Fig. 1 is a diagram illustrating how the conventional mode of operation in a camera control room is transformed into a viewing environment for global multi-camera visualization and effective breach handling;
Fig. 2 shows the module that provides a full set of tools for assessing a threat;
Fig. 3 shows rendered video overlays presented on a high-resolution screen with control interfaces to the DVR and PTZ units;
Fig. 4 shows information presented to the user as highlighted icons on a map display and as a text list view;
Fig. 5 shows regions color-coded to indicate whether an alarm is active;
Fig. 6 shows a scalable system architecture for video overlay systems with a few cameras or hundreds of cameras;
Fig. 7 shows a view selection system of the invention;
Fig. 8 is a diagram of synchronized data capture, replay, and display in a system of the invention;
Fig. 9 is a diagram of the data integrator and display in such a system;
Fig. 10 illustrates a map-based display used by the immersive video system; and
Fig. 11 illustrates the software architecture of the system.
To facilitate understanding, identical reference numerals have been used, wherever possible, to designate identical elements common to the figures.
Detailed description
The need for effective surveillance and security at military installations and other secure locations is more pressing than ever. Effective daily operation requires reliable security for perimeter and entry control, together with continuous and meaningful response to breaches. Video-based operation and monitoring are increasingly used at military bases and other sensitive sites.
For example, Campbell Barracks in Heidelberg, Germany has 54 installed cameras, and in the adjacent Mark Twain Village installation, the planned base will have more than 100 cameras. Current video operating practice allows video to be viewed only in the traditional manner, on TV monitors, without any comprehension of the overall 3D context of the environment. Furthermore, video-based breach detection is usually absent, and breach detection systems are not directly tied to the video visualization.
VIDEO FLASHLIGHT™ video assessment (VFA), alarm assessment (AA), and vision-based alarm (VBA) technologies can be used to: (i) provide comprehensive visualization of, for example, a perimeter area by seamlessly multiplexing and projecting multiple videos onto a 3D model of the environment; and (ii) provide robust motion detection and other intelligent alarms, such as detection of perimeter breaches, left-behind objects, and loitering at those locations.
In this application, reference will be made to the immersive surveillance system called VIDEO FLASHLIGHT™, which is an example of an environment in which the inventions herein may be used to advantage, although it should be understood that the inventions herein may be used in systems different from the VIDEO FLASHLIGHT™ system, yielding similar advantages. VIDEO FLASHLIGHT™ is a system in which live video is mapped onto, and combined with, a 2D or 3D computer model of a site, and the operator can move the viewpoint through the scene and view, from a number of viewpoints in the space of the scene, the combined rendered video with the live video suitably applied.
In such a surveillance system, cameras may provide comprehensive coverage of an area of concern. The video is recorded continuously. Video is seamlessly rendered onto a 3D model of an airport or other site to provide global situational visualization. Video-based automatic alarms detect, for example, security breaches at doorways and fences. The Blanket of Video Cameras (BVC) system is responsible for continuously tracking individuals, and will allow security personnel to navigate immersively through the space, move back in time to the moment of a security breach, and then fast-forward in time to follow that individual up to the current time. Fig. 1 illustrates how the conventional mode of operation in a camera control room is transformed into a viewing environment for global multi-camera visualization and effective breach handling.
In summary, the BVC system provides the following functions. A single unified display shows real-time video rendered seamlessly over a 3D model of the environment. The user can navigate freely through this environment while viewing video from multiple cameras against the 3D model. The user can quickly and intuitively roll back in time to review past events. The user can rapidly acquire high-resolution video of an incident simply by clicking on the model to direct one or more pan/tilt/zoom cameras at a location.
The system allows the operator to detect a security breach, and it lets the operator follow an individual by tracking with multiple cameras. The system also lets security personnel view current locations and alarm events through the immersive display or as archived video clips.
VIDEO FLASHLIGHT™ and vision-based alert modules
The VIDEO FLASHLIGHT™ and vision-based alert system comprises four distinct modules:
the Video Assessment (VIDEO FLASHLIGHT™ rendering) module;
the Vision Alert Alarm Module;
the Alarm Assessment module; and
the System Health Information module.
The Video Assessment module (VIDEO FLASHLIGHT™) provides an integrated interface for viewing the video overlaid on the 3D model. This lets the guard seamlessly navigate the site and rapidly assess any threat occurring over a large area. No other command-and-control system has this video overlay capability. The system overlays video from both fixed cameras and PTZ cameras, and uses DVR (digital video recorder) modules to record and play back events.
As best illustrated in Fig. 2, this module provides a full set of tools to assess a threat. An alarm situation generally divides into three phases:
Pre-assessment: An alarm occurs, and the events leading up to it must be assessed. Competing technologies use DVR devices or pre-alarm buffers to store information from before the alarm. However, pre-alarm buffers are often too short, and DVR devices show video from only one particular camera through a cumbersome control interface. The Video Assessment module, by contrast, uses an intuitive GUI that allows all video streams at any instant in time to be viewed synchronously and immersively.
Immediate assessment: An alarm occurs; the live video for the alarm must be located and displayed quickly, the situation assessed, and a rapid response made. In addition, the area surrounding the alarm must be monitored simultaneously for additional activity. Most existing systems present views of the scene on a bank of separate monitors, which takes time and requires familiarity with the camera views in order to find the surrounding areas by switching between cameras.
Post-assessment: The alarm situation is over, and the point of concern has moved out of the field of view of the fixed cameras. The point of concern must be followed through the scene. The VIDEO FLASHLIGHT™ module allows the PTZ cameras to be controlled quickly and easily using intuitive point-and-click control on the 3D model. As shown in Fig. 3, the video overlays are presented on a high-resolution screen with control interfaces to the DVR and PTZ units.
Inputs and outputs
The VIDEO FLASHLIGHT™ Video Assessment module takes image and sensor data placed into computer memory in a known format, takes the camera poses computed or estimated during initial model building, and overlays the data on the 3D model. In summary, the inputs and outputs of the Video Assessment module are:
Inputs:
video, in a known format, from fixed cameras at known locations;
video and positional information from PTZ camera stations;
the 3D pose of each camera with respect to the model (these 3D poses are recovered using a calibration procedure during system setup);
the 3D model of the scene (recovered using a pre-existing 3D model, commercial 3D modeling methods, or any other computer-model-building method); and
the desired view, provided by the operator using a joystick or keyboard, or through automatic control by alarms configured by the user.
Outputs:
a display image in memory of the monitored view from the desired viewpoint;
PTZ commands controlling the PTZ positions; and
DVR controls for rewinding and previewing past events.
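The inputs listed above can be pictured as plain records; the following sketch (field names are assumptions for illustration, not from the patent) shows one way a camera input, i.e., a video source plus the 3D pose recovered by calibration, might be described to the module:

```python
from dataclasses import dataclass

@dataclass
class CameraPose:
    position: tuple      # 3D position in site-model coordinates
    rotation: tuple      # orientation, e.g. (yaw, pitch, roll) in degrees

@dataclass
class CameraInput:
    camera_id: str
    stream_url: str      # where the video arrives, in a known format
    pose: CameraPose     # recovered by the calibration procedure at setup
    is_ptz: bool = False # PTZ cameras also report positional info per frame

cam = CameraInput("gate-1", "rtsp://recorder/gate-1",
                  CameraPose((0, 0, 5), (90, -15, 0)))
print(cam.camera_id, cam.is_ptz)  # → gate-1 False
```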
The principal features of the Video Assessment system are:
visualization of the 3D site model, providing rich 3D context (navigation in space);
overlay of real-time video on the 3D model, to provide video-based assessment;
synchronized control of multiple DVR units, to seamlessly retrieve and overlay video on the 3D model (navigation in time);
point-and-click control and overlay of PTZ video on the 3D model; the guard does not need to know where the cameras are in order to move a PTZ unit, since the system automatically determines which PTZ unit is best suited to view the region of concern;
automatic selection of video based on the selected viewpoint, allowing the system to integrate video matrix switchers so as to provide virtual access to a very large number of cameras; and
a level-of-detail rendering engine providing seamless navigation across very large 3D sites.
User interface for video assessment (VIDEO FLASHLIGHT™)
Visualization: In the Video Assessment module, two views are presented to the user: (a) a 3D rendered view and (b) a map inset view. The 3D rendered view shows the site model with video overlays or video billboards positioned in 3D space. This provides the detail of the site. The map inset view is a top-down view of the site with camera footprint overlays. This view provides the overall context of the site.
Navigation:
Navigation via preferred viewpoints: Navigation through the site is provided using a cycle of preferred viewpoints. The left and right arrow keys move between these key viewpoints. Multiple such viewpoint cycles are defined at different levels of detail (different zoom levels at a viewpoint); the up and down arrow keys navigate through these zoom levels.
Navigation with the mouse: The user can left-click on any video overlay to center that point in the preferred viewpoint. This lets the user easily track a moving target passing through the fields of view of overlapping cameras. The user can left-click on a video billboard to switch to its preferred overlay viewpoint.
Navigation with the map inset: The user can left-click on a footprint in the map inset to move to the preferred viewpoint of that particular camera. The user can also left-click and drag the mouse to select a group of footprints and obtain a preferred zoomed-out view of that part of the site.
PTZ control:
Moving a PTZ with the mouse: The user can shift-left-click on the model or on the map inset view to move a PTZ unit to a particular location. The system then automatically determines which PTZ units are suited to view that point and moves them to observe that position. While holding the shift key, the user can turn the mouse wheel to zoom in or out from the nominal zoom level previously chosen by the system. While PTZ video is being viewed, the system automatically centers the view on the initial PTZ viewpoint.
Moving between PTZ units: When multiple PTZ units see a given point, the PTZ unit nearest to that point is assigned the preferred view. Using the left and right arrow keys, the user can switch the preferred view to other PTZ units that see the point.
Controlling a PTZ from the bird's-eye view: In this mode, the user can control a PTZ while in a bird's-eye view showing all the fixed-camera views of the site. Using the up and down arrow keys, the guard can move between the bird's-eye view and the zoomed-in view of the PTZ video. The PTZ is controlled by shift-clicking on the site or inset map, as described above.
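The automatic PTZ assignment described above, i.e., choosing the unit nearest the clicked point among units that can see it, might look like this minimal sketch (names and the visibility callback are hypothetical):

```python
import math

def assign_ptz(ptz_units, target, can_see):
    """ptz_units: {name: (x, y)}; can_see(name, target) -> bool.
    Return the name of the closest unit with a view of the target, or None."""
    candidates = [(math.dist(pos, target), name)
                  for name, pos in ptz_units.items() if can_see(name, target)]
    return min(candidates)[1] if candidates else None

units = {"ptz-north": (0.0, 100.0), "ptz-gate": (5.0, 5.0)}
print(assign_ptz(units, (10.0, 0.0), lambda name, t: True))  # → ptz-gate
```

A real system would implement `can_see` as an occlusion test against the 3D site model; here it is just a placeholder callback.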
DVR control:
Selecting the DVR control panel: The user can press ctrl-V to bring up the panel controlling the DVR units in the system.
DVR playback control: By default, the DVR subsystem streams live video to the video assessment station; that is, the immersive display shows live video to the user. The user can select the pause button to stop the video at the current point in time, and is then switched into DVR mode. In DVR mode, the user can play synchronously forward or backward in time, up to the limits of the recorded video. While playing video in DVR mode, the user can navigate through the site as described in the Navigation section above.
DVR seek control: The user can seek all DVR-controlled video to some given time by specifying the time of interest to move to. The system moves all the video to that time point and then pauses until the user selects another DVR command.
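A sketch of the DVR playback states just described: live by default, switching to DVR mode on pause, with seeks clamped to the limits of the recorded video (class and method names are assumptions, not from the patent):

```python
class DvrControl:
    """Live-vs-playback state for the DVR subsystem at an assessment station."""

    def __init__(self, record_start, record_end):
        self.record_start, self.record_end = record_start, record_end
        self.mode = "live"
        self.position = record_end  # live playhead sits at the newest video

    def pause(self):
        # Pausing stops video at the current point and enters DVR mode.
        self.mode = "dvr"

    def seek(self, t):
        self.mode = "dvr"
        # Seeks are clamped to the limits of the recorded video.
        self.position = max(self.record_start, min(self.record_end, t))
        return self.position

dvr = DvrControl(0.0, 600.0)
dvr.pause()
print(dvr.mode, dvr.seek(1000.0))  # → dvr 600.0
```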
Alarm Assessment module
Map-based browser: overview
The map-based browser is a visualization tool for a wider area. Its main component is a scrollable, zoomable orthographic map containing components representing sensors (fixed cameras, PTZ cameras, fence sensors) and symbolic information (text, system health, perimeter lines, target movement over time).
Accompanying this view is a scaled-down instance of the map that can neither scroll nor zoom. Its purpose is to outline the extent of the viewport of the large view, to show the state of components that lie outside that viewport, and to provide an alternative way of moving the viewport of the large view.
Depending on the visualization application, components in the map-based display can have different appearances and functions. For alarm assessment, a component can change color and flash based on the alarm state of the sensor it represents. When a sensor has an unacknowledged alarm, it appears red and flashing on the map-based display. Once all alarms for that sensor have been acknowledged, the component remains red but no longer flashes. After all of the sensor's alarms have been cleared as safe, the component returns to its normal green. Sensors can also be disabled through the map-based component, after which they appear yellow until they are re-enabled.
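The color-and-flash convention described above amounts to a small state-to-appearance table; a sketch (the state names are assumptions for illustration):

```python
def sensor_display(state):
    """Map a sensor's alarm state to its (color, flashing) appearance
    on the map-based display."""
    table = {
        "unacknowledged": ("red", True),    # unacknowledged alarm: red, flashing
        "acknowledged":   ("red", False),   # acknowledged, not yet cleared
        "clear":          ("green", False), # all alarms cleared as safe
        "disabled":       ("yellow", False),# disabled until re-enabled
    }
    return table[state]

print(sensor_display("unacknowledged"))  # → ('red', True)
print(sensor_display("clear"))           # → ('green', False)
```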
Other modules can access the components in the map display by sending events via an API (application programming interface). The alert list is one such module: it aggregates alarms across many alarm stations and shows them to the user as a text list for alarm assessment. Using the API, the alert list can change the state of a map-based component, and through such changes the component will change color and flash. The alert list can sort alarms by time, priority, sensor name, or alarm type. It can also control VIDEO FLASHLIGHT™ to view the video generated at the time of an alarm. For video-based alarms, the alert list can show the video that triggered the alarm in a video viewing window, and can save that video to disk.
Interaction between the map-based browser and VIDEO FLASHLIGHT™
Components in the map-based browser have the ability, through an API exposed over a TCP/IP connection, to control the virtual view and the video feeds of the VIDEO FLASHLIGHT™ display. This provides the user with another way of navigating the 3D view of the video surveillance. Besides changing the virtual view, the map-based components can also control the DVRs and create virtual tours, in which the camera viewpoint changes position after a specified amount of time has elapsed. This last function allows the video surveillance system to create personalized tours through the 3D scene, for example following a person.
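The patent does not specify the API's wire format; as a purely illustrative sketch, viewpoint-change events for such a TCP/IP API might be encoded as small JSON messages, and a timed list of them forms a virtual tour (all field names are assumptions):

```python
import json

def make_viewpoint_event(x, y, z, yaw, pitch, dwell_seconds=0.0):
    """Build one API event asking the rendering station to move to a viewpoint."""
    return json.dumps({
        "type": "set_viewpoint",
        "position": [x, y, z],
        "orientation": {"yaw": yaw, "pitch": pitch},
        "dwell": dwell_seconds,  # events with dwell > 0, chained, form a tour
    })

# A two-stop virtual tour: hold each viewpoint for 5 seconds.
tour = [make_viewpoint_event(0, 0, 10, 0, -20, 5.0),
        make_viewpoint_event(50, 0, 10, 90, -20, 5.0)]
print(json.loads(tour[0])["type"])  # → set_viewpoint
```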
Map-based browser display
The Alarm Assessment station integrates multiple alarms across multiple machines and presents them to the guard. This information is presented to the user as highlighted icons on a map display and as a text list view (Fig. 4). The map view lets the guard recognize a threat in its correct spatial context. It also acts as a hyperlink to control the Video Assessment station to steer video so as to observe the region of interest. The list view lets the user assess an alarm by the type of alarm and the time of the alarm, and lets the user watch an annotated video clip of any alarm.
Key features and description
The key features of the alarm assessment (AA) station are as follows:
It presents to the user alarms from the visual alarm stations, from dry-contact inputs, and from other custom alarms integrated into the system.
Symbolic information is overlaid on a 2D site map to provide the spatial context in which an alarm occurred.
Text information, sorted by time or priority, is displayed to give details about any alarm.
It controls VIDEO FLASHLIGHT™ to browse automatically to a specific viewpoint, directed by a station alarm or by user input.
It previews annotated video clips of the actual alarm.
It saves video clips for later use.
The user can manage alarms by acknowledging them and, once the alarm condition has been resolved, resetting them. The user can also disable particular alarms so that planned activities can take place without generating alarms.
User interface of the alarm assessment module
Visualization:
Alert list view. Alarms from all visual alarm stations, external alarm message sources, and system failures are integrated into a single list. The list is updated in real time and can be sorted by time or by alarm priority.
Map view. Shows on the map where alarms are occurring. The user can scroll around the map or select a region using the inset map. The map view assigns alarms to labeled symbol regions to indicate where each alarm occurred. These regions are color-coded to indicate whether an alarm is active, as shown in Figure 5. The preferred color coding for the alarm symbols is: (a) red: an active danger alarm caused by suspicious behavior; (b) grey: an alarm caused by a system failure; (c) yellow: a video source is unavailable; and (d) green: all clear, no active alarms.
Video preview: For video-based alarms, an activity preview clip is also available. These clips can be previewed in a video clip window.
Alarm acknowledgement:
In the list view, the user can acknowledge an alarm to indicate that he has observed it. He can acknowledge alarms individually, one by one, or he can right-click in the map view to get a pop-up menu and choose to acknowledge all alarms for a particular sensor.
If the alarm condition has been resolved, the user can indicate this in the list view by selecting the secure option. Once an alarm has been secured, it is removed from the list view. The user can also right-click on a region to get a pop-up menu and select the secure option to secure all alarms for that sensor; this likewise removes all of that sensor's alarms from the list view.
In addition, the user can disable alarms from any sensor by using the pop-up menu and selecting the disable option. For all disabled sources, any new alarm will automatically be acknowledged and secured.
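The alarm lifecycle described above (acknowledge, secure, and automatic securing of alarms from disabled sensors) can be sketched as a small state machine. The class and method names below are illustrative, not taken from the patent.

```python
# Minimal sketch of the alarm lifecycle: new -> acknowledged -> secured,
# with alarms from disabled sensors auto-secured on arrival.

class AlarmList:
    def __init__(self):
        self.alarms = {}          # alarm id -> {"sensor", "state"}
        self.disabled = set()     # sensors whose alarms are suppressed

    def raise_alarm(self, alarm_id, sensor):
        # New alarms from a disabled source are secured automatically.
        state = "secured" if sensor in self.disabled else "new"
        self.alarms[alarm_id] = {"sensor": sensor, "state": state}

    def acknowledge(self, alarm_id):
        self.alarms[alarm_id]["state"] = "acknowledged"

    def secure(self, alarm_id):
        # Securing removes the alarm from the list view.
        self.alarms[alarm_id]["state"] = "secured"

    def secure_sensor(self, sensor):
        # Right-click menu: secure every alarm of one sensor at once.
        for a in self.alarms.values():
            if a["sensor"] == sensor:
                a["state"] = "secured"

    def disable_sensor(self, sensor):
        self.disabled.add(sensor)

    def active(self):
        return [i for i, a in self.alarms.items() if a["state"] != "secured"]
```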
Video assessment station control:
The user can move the video assessment station to a preferred view by left-clicking on the region labeled for a particular sensor in the map view. The map view control sends navigation commands to the video assessment station to move its view. The user will typically click on a region with an active alarm in order to assess the situation using the video assessment module.
Video surveillance system architecture & hardware implementation
A scalable system architecture has been developed to rapidly cover camera systems with anywhere from a few cameras to hundreds of cameras (Fig. 6). The invention is based on modular filters that can be interconnected so as to stream data between them. These filters can be sources (video capture devices, PTZ communicators, database readers, etc.), transforms (algorithm modules such as motion detectors and trackers), or sinks (such as render engines and database writers). All are built with intrinsic threading support that allows multiple components to run in parallel. This lets the system make optimal use of the available resources on a multiprocessor platform.
The architecture also provides sources and sinks that can send and receive streaming data across a network. This allows the system to be distributed easily across multiple PC workstations with a simple configuration change.
Filter modules are loaded dynamically at run time according to simple XML-based configuration files. These define the connectivity between the modules and the specific behavior of each filter. This allows an integrator to quickly configure different end-user applications spanning multiple machines without modifying any code.
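The XML configuration described above can be illustrated with a short loader. The schema shown here (`<filter>` and `<connect>` elements) is invented for illustration; the patent does not publish its actual configuration format.

```python
# Sketch of XML-driven filter wiring: modules and their stream connections
# are declared in a configuration file and validated at load time.
import xml.etree.ElementTree as ET

CONFIG = """
<graph>
  <filter name="cam1" type="source"/>
  <filter name="motion" type="transform"/>
  <filter name="render" type="sink"/>
  <connect from="cam1" to="motion"/>
  <connect from="motion" to="render"/>
</graph>
"""

def load_graph(xml_text):
    """Return ({filter name: type}, [(from, to), ...]) from a config."""
    root = ET.fromstring(xml_text)
    filters = {f.get("name"): f.get("type") for f in root.findall("filter")}
    edges = [(c.get("from"), c.get("to")) for c in root.findall("connect")]
    # Basic sanity check: every connection references a declared filter.
    for a, b in edges:
        assert a in filters and b in filters, "dangling connection"
    return filters, edges
```

A real loader would go on to instantiate each filter type and hand each edge a stream buffer; the point here is only that the graph topology lives entirely in configuration, not code.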
The key features of the system architecture are:
System scalability: it can connect across multiple processors and multiple machines.
Component modularity: the modular construction maintains a clean separation between software modules by using a streaming mechanism to pass data between them. Each module is defined as a filter with a common interface for streaming data to other filters.
Component upgradability: components of the system are easy to replace without affecting the remainder of the architecture.
Data-streaming architecture: data is passed between the modules of the system in the form of streams. The system has an intrinsic, system-wide understanding of time, and can synchronize and fuse data from multiple sources.
Storage architecture: each processor can synchronously record and play back multiple metadata streams. Seek and review functions are provided at each node, and can be driven by the map/model-based display and other clients. Storage is powered by a back-end SQL database engine.
The system of the invention provides efficient communication with the system's sensors, which are typically cameras but may also be sensors of other types, such as smoke or fire detectors, motion detectors, door-open sensors, or any of various security sensors. Similarly, the data from the sensors is typically video, but may also be data of other classes, such as alarm indications of detected motion, intrusion, or fire, or any other sensor data.
A key requirement of a surveillance system is choosing which data to observe at any given time. The cameras may stream tens, hundreds, or thousands of video sequences. The view selection system described here is the means by which this video data, and the data from other sensors, is visualized, managed, stored, played back, and analyzed.
The view selection system
Fig. 7 shows the selection criteria for video. Rather than taking individual sensor camera numbers as input (e.g., camera 1, camera 2, camera 3, etc.), the display of surveillance data is based on a viewpoint selector 3, which indicates to the system a selected virtual camera position or viewpoint, i.e., a set of data defining a point and the field of view seen from that point, identifying the appropriate real-time view of the surveillance data to be displayed. The virtual camera position can be derived from operator input (such as electronic data received from an interactive station having, for example, an input device such as a joystick), or it can be obtained as output from an alarm sensor responding dynamically to an event, not under operator control.
Once a viewpoint has been selected, the system automatically computes which sensors are relevant to the field of view of that viewpoint. In a preferred embodiment, the system computes which subset of the system's sensors has video coverage areas that fall within the field of view; this is done by video prioritizer/selector 5, which is connected to viewpoint selector 3 and receives from it the data defining the virtual camera viewpoint. The system then, via video prioritizer/selector 5 controlling video switcher 7, dynamically switches to the selected sensors, i.e., the subset of relevant sensors, and avoids switching to the other sensors of the system. Video switcher 7 is connected to the inputs of all sensors (including the cameras) of the system, which generate a large number of video or data feeds 9. Based on control from selector 5, switcher 7 switches its communication links so as to transmit the data feeds from the subset of relevant sensors and to stop transmitting the data feeds from the other sensors, so that only the reduced set of data relevant to the selected virtual camera viewpoint is transmitted to video overlay station 13 as feeds 11.
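The relevance computation above can be sketched in a much-simplified 2D form: pick the cameras that fall inside the virtual viewpoint's viewing wedge and range. A real prioritizer would intersect 3D view frusta with camera coverage footprints from the site model; testing camera positions against a 2D wedge only illustrates the idea, and all names and parameters below are illustrative.

```python
# 2D sketch of the video prioritizer/selector: which cameras are relevant
# to a virtual viewpoint (position, heading, field of view, range)?
import math

def relevant_cameras(view_pos, heading_deg, fov_deg, max_range, cameras):
    """cameras: dict name -> (x, y). Returns sorted list of relevant names."""
    chosen = []
    for name, (cx, cy) in cameras.items():
        dx, dy = cx - view_pos[0], cy - view_pos[1]
        if math.hypot(dx, dy) > max_range:
            continue
        bearing = math.degrees(math.atan2(dy, dx))
        # Smallest angular difference from the viewing direction.
        diff = abs((bearing - heading_deg + 180) % 360 - 180)
        if diff <= fov_deg / 2:
            chosen.append(name)
    return sorted(chosen)
```

The switcher would then route only the feeds of the returned names and cut off the rest.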
According to a preferred embodiment, switcher 7 is an analog matrix switch, controlled by video prioritizer/selector 5, that switches a smaller number of video feeds 11 from the initially larger set 9 into video overlay station 13. This arrangement is used especially when the analog video to be displayed is delivered to the video assessment station over a limited set of hard-wired lines. In such a system, the analog signal streams from cameras not relevant to the current field of view are cut off so that they do not enter the lines to the video assessment station, while the video feeds from the cameras relevant to the current field of view are physically connected through those lines.
Alternatively, the cameras may produce digital video, which can be delivered to digital video servers connected to a local area network, so that the digital video can be streamed over the network to the video assessment station, the local area network connecting the digital video servers to the video assessment station. In such a system, the video switcher is part of the video assessment station, and it communicates with the individual digital video servers over the network. If a server has a relevant camera, the switcher instructs the server to stream that video to the video assessment station. If the video is not relevant, the switcher sends a command to the video server not to transmit its video. The result is reduced traffic on the network and greater efficiency in delivering the relevant video to the video station for display.
The video is displayed rendered onto a 2D or 3D model of the scene (i.e., in an immersive video system such as that disclosed in U.S. published application 2003/0085992). Video overlay station 13 combines the relevant data, especially video feeds 11, with a real-time rendered image of the view generated by a rendering system using a 2D or, preferably, 3D model of the site, producing the video that constitutes the real-time immersive surveillance display. The 2D or preferably 3D model of the site, which may also be referred to generally as geospatial information, is preferably stored on a data storage device 15 accessible to the rendering components of video overlay station 13. The relevant geospatial information to render in each screen image to be displayed is determined by viewpoint selector 3.
Video overlay station 13 prepares each image of the display video by, for example, rendering the image with the relevant camera video applied as texture in the appropriate part of the field of view. Geospatial information is selected in the same manner: the viewpoint selector determines which geospatial information is displayed.
Once the video for display has been rendered and combined with the relevant sensor data streams, it is transmitted to the display unit to be shown to the operator.
These four blocks, viewpoint selector 3, video prioritizer/selector 5, video switcher 7, and video overlay station 13, provide the handling needed to display the views of potentially thousands of cameras.
Those of skill in the art will readily appreciate that these functions can be supported on a single computerized system that implements them primarily in software, or they can be distributed computerized components that each perform their own tasks. If the system relies on a network to transmit video to the video station, then preferably viewpoint selector 3, the video selector, video switcher 7, and video overlay station 13 are all implemented as separate software modules on the video station computer itself, together with the rendering station.
If the system instead relies more on hard-wired video feeds than on network or analog communications, then it is preferable for the components to be discrete circuits, with the video switcher connected by lines to actual physical switches near the video sources, so that a physical switch can be turned off to save bandwidth when its video is not relevant to the selected field of view.
Synchronized data capture, playback and display
Given the ability to visualize live data from thousands of sensors, there is a corresponding need to store the data in a way that allows it to be played back just as if it were live.
Most digital video systems store the data from each camera separately. According to the present embodiment, however, the system is configured to record the video data synchronously, read it back synchronously, and display it in the immersive surveillance (preferably VIDEO FLASHLIGHT™) display.
Fig. 2 shows a block diagram of synchronized data capture, playback, and display in VIDEO FLASHLIGHT™. A recorder controller 17 synchronizes the recording of all data, in which each stored frame includes the data and a timestamp identifying the time the data was generated. In a preferred embodiment, synchronized recording is carried out through Ethernet control of DVR devices 19, 21.
Recorder controller 17 also controls the playback of the DVR devices, and ensures that the recording and playback times begin at exactly the same moment. During playback, recorder controller 17 causes the DVR devices to begin playing, from a time point selected by the operator, the video relevant to the currently selected virtual camera viewpoint. The data is streamed over the local network to a data synchronization unit 23, which buffers the data being played so that any real-time fragment can be processed, reads information such as the timestamps to correctly synchronize the multiple data streams so that all frames of the several recorded streams come from the same time period, and then distributes the synchronized data to the immersive surveillance display system, e.g., VIDEO FLASHLIGHT™, and to any other components of the system, e.g., the rendering, processing, and data fusion components indicated generally at 27.
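The core job of the data synchronization unit, aligning several timestamped recorded streams so that every emitted set of frames comes from the same instant, can be sketched as follows. Exact-match integer-millisecond timestamps are an assumption made for brevity; a real unit would tolerate jitter and buffer late frames.

```python
# Sketch of the data synchronization unit: align several recorded streams
# by timestamp and emit, per tick, one frame from each stream.

def synchronize(streams):
    """streams: dict name -> list of (timestamp_ms, frame).
    Returns [(timestamp_ms, {name: frame})] for timestamps present in
    *every* stream, in time order."""
    common = set.intersection(*(set(t for t, _ in s) for s in streams.values()))
    by_stream = {n: dict(s) for n, s in streams.items()}
    return [(t, {n: by_stream[n][t] for n in streams}) for t in sorted(common)]
```

Each emitted tuple would then be handed to the rendering, processing, and fusion components exactly as live data is.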
In an analog embodiment, the analog video from a camera is sent to a circuit rack, where it is split. One part of the video goes to the map reader station, as described above. The other part, together with the video of three other cameras, passes through a junction box to a recorder that stores all four video feeds in a synchronized fashion. The video is recorded and, in addition, if it is relevant to the current viewpoint, it is transmitted over a hard-wired line to the video station to be rendered into the VIDEO FLASHLIGHT™ immersive display.
In a primarily digital environment, there are a number of digital video servers, each connected to roughly four to twelve cameras. The cameras are connected to a digital video server that is connected to the network of the surveillance system. A digital video server usually has, at the same physical location, a digital video recorder (DVR) connected to it, which stores the video from the cameras. If the video is relevant, the server streams it to the video station to be applied to the rendered image used in the immersive display; and if, as described above, the video switcher instructs the server not to transmit, then the server does not transmit the video.
In the same way that live video data is applied to the immersive surveillance display as described above, the recorded synchronized data is incorporated into a real-time immersive surveillance playback display shown to the operator. The operator can move through the model of the scene and view the scene rendered from a viewpoint of his choosing, using video or other data from the time period of interest.
The recorder controller and the data synchronization unit are preferably separate dedicated computer systems, but they can also be supported in one or more computer systems or electronic components, and their functions can be realized by hardware and/or software in those systems, as those of skill in the art will readily appreciate.
Data integrator and display
In addition to the video sensors, i.e., the cameras, the system may also include hundreds of thousands of non-video sensors. The visualization and management of these sensors is also very important.
As best seen in Fig. 3, a symbolic data integrator 27 collects, in real time, data from different metadata sources (such as visual alarms, access-control alarms, and target tracking). A rules engine 29 combines the many pieces of information to generate complex situational decisions, and makes various determinations as automatic responses, depending on the different sets of metadata inputs and the predefined response rules applied to those inputs. The rules can be based on, for example, the geographical positions of the sensors, and can also be based on dynamic operator input.
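The rule evaluation just described, metadata events matched against predefined response rules to produce automatic actions, can be sketched as predicate/action pairs. The event fields (`"type"`, `"zone"`, `"target"`) and the two sample rules are illustrative, echoing the corridor-A and area-B examples given later in this document, not an actual rule syntax from the patent.

```python
# Minimal sketch of rules engine 29: each rule pairs a predicate over a
# metadata event with a response action; matching rules fire automatically.

def evaluate(rules, event):
    """Return the list of response actions triggered by one metadata event."""
    return [action(event) for predicate, action in rules if predicate(event)]

rules = [
    # "Record video when a door is opened on corridor A."
    (lambda e: e["type"] == "door_open" and e["zone"] == "corridor_A",
     lambda e: ("record_video", e["zone"])),
    # "Track any target on area B with a PTZ camera."
    (lambda e: e["type"] == "track_started" and e["zone"] == "area_B",
     lambda e: ("ptz_follow", e["target"])),
]
```

A fired action tuple would then be handed to the symbolic information renderer and, when appropriate, to the view control interface of the video assessment station.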
A symbolic information renderer 31 determines how the decisions of rules engine 29 are presented to the user (e.g., as colors/icons). The result of a rules-engine decision is then used, when appropriate, to control the viewpoint of the video assessment station through the view control interface. For example, an alarm of a certain type alerts the operator and automatically causes the operator's display unit to at once show the immersive surveillance display view as seen from a virtual camera viewpoint that observes the position of the sensor that transmitted the metadata identifying the alarm condition.
The components of this system can be separate electronic hardware, but they can also be realized with appropriate software components located at, or shared with, the operator's display terminal.
Constrained browsing
An immersive surveillance display system provides unconstrained means of browsing in space and time. In everyday use, however, only particular locations in space and time are relevant to the application at hand. The system of the invention therefore provides constrained browsing of space and time in the VIDEO FLASHLIGHT™ system. An analogy can be drawn between a car and a train: a train can move only along certain paths in space, while a car can move along any number of paths.
One example of such an implementation restricts casual viewing of locations that have no sensor coverage. This is done by analyzing the desired viewpoint provided by the operator using an input device such as a joystick or a mouse click on the computer screen. The system computes the desired viewpoint by computing the change of viewing position that would place the clicked point at the center of the screen in 3D. The system then determines whether the viewpoint includes any sensors that are visible or potentially visible; in response to a determination that there are such sensors, it changes the viewpoint, and in response to a determination that there are none, it leaves the viewpoint unchanged.
As disclosed below, a hierarchy of constrained motion has also been developed.
Map- or event-based browsing
In addition to browsing within the immersive video display itself, such as by clicking on a point in the display or by using a joystick or the like, the system also allows the operator to browse using an external display of events.
For example, as seen in the screenshot of Fig. 4, the VIDEO FLASHLIGHT™ display shows, in addition to the rendered immersive video display 39, a map display 37. The map shows a list of alarms 41 and a map of the area. Simply by clicking on a listed alarm or on the map, the viewpoint is immediately changed to a new viewpoint corresponding to that position, and the VIDEO FLASHLIGHT™ display is rendered for the new viewpoint.
Map display 37 changes color or displays an icon to indicate a sensor event; as shown in Figure 4, a barrier breach has been detected. The operator can then click on this indicator on map display 37, and the viewpoint of immersive display 39 will immediately change to the pre-programmed viewpoint for that sensor event, which is then displayed.
PTZ control
The image processing system knows the (x, y, z) world coordinates of every pixel in every camera sensor and in the 3D model. When the user picks a point by clicking in the 2D or 3D immersive video model display, the system identifies the best camera for viewing a field of view centered on that point.
In some cases, the camera best positioned to view a given location is a pan/tilt/zoom (PTZ) camera that may be pointed in a direction different from the one required to view the desired position. In such cases, the system computes the positioning parameters (e.g., the commanded pan and tilt, and the sensor's mechanical pan, tilt, and zoom angles), directs the PTZ camera to that position by transmitting the appropriate electrical control signals over the network, and receives the PTZ video, which is inserted into the immersive surveillance display. The details of this processing are discussed further below.
PTZ switching
As noted above, the system knows the (x, y, z) world coordinates of every pixel in every camera sensor and in the 3D model. Because the positions of the camera sensors are known, the system can select which sensor to use based on the desired view. For example, in a preferred embodiment, when the scene contains more than one PTZ camera, the system fully or partially selects one or more PTZs automatically, based on the PTZ positions and the ground-projected 2D (e.g., latitude/longitude) or 3D coordinates of the point of interest.
In a preferred embodiment, the system computes the range from each PTZ to the target based on the 2D or 3D coordinates, and selects the PTZ nearest the target to view it. Additional rules account for occlusion by 3D objects modeled in the scene and for pan, tilt, and zoom no-go regions, and these rules are applied in determining which camera is the best one for viewing a particular selected point at the site.
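The nearest-PTZ rule with occlusion and no-go exclusions can be sketched as follows. The occlusion and forbidden-pan tests are stubbed as caller-supplied predicates, since the real checks would run against the 3D site model; all names are illustrative.

```python
# Sketch of automatic PTZ selection: nearest camera to the target wins,
# skipping cameras that are occluded or whose required pan is forbidden.
import math

def pick_ptz(target, ptzs, occluded, pan_forbidden):
    """target: (x, y, z); ptzs: dict name -> (x, y, z) camera position.
    occluded(name, target) and pan_forbidden(name, pan_deg) are predicates."""
    best, best_dist = None, float("inf")
    for name, pos in ptzs.items():
        # Ground-plane pan angle the camera would need to point at the target.
        pan = math.degrees(math.atan2(target[1] - pos[1], target[0] - pos[0]))
        if occluded(name, target) or pan_forbidden(name, pan):
            continue
        dist = math.dist(pos, target)
        if dist < best_dist:
            best, best_dist = name, dist
    return best
```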
PTZ calibration
A PTZ must be calibrated to the 3D scene. This calibration is performed by selecting a 3D point (x, y, z) in the VIDEO FLASHLIGHT™ model that is visible from the PTZ, pointing the PTZ at that position, and reading and storing the mechanical pan, tilt, and zoom values. This is repeated at several different points in the model, distributed around the position of the PTZ camera. A linear fit to these points is then performed independently in pan, tilt, and zoom space. The zoom space is sometimes nonlinear, and a manufacturer's or empirical look-up may be applied before the fit. The linear fit is performed dynamically each time a PTZ move is requested. When the PTZ is requested to point at a 3D position, the pan and tilt angles (phi, theta) for the desired position in the model space are computed relative to the PTZ position. Phi and theta are then computed, relative to the PTZ position, for all of the calibration points. A weighted least-squares fit, giving greater weight to those calibration phi and theta values closest to the phi and theta corresponding to the desired position, is then performed separately on the mechanical pan, tilt, and zoom values stored during calibration.
The least-squares fit uses the calibration phi and theta as the input x coordinates, and the pan, tilt, and zoom values measured from the PTZ as the y coordinate values. The fit then recovers the parameters that give the output y value for a given input x value. The phi and theta corresponding to the desired point (the x values) are then fed into a computer routine expressing the parameterized equation, which returns the mechanical pan (and tilt and zoom) pointing of the PTZ camera. These determined values are then used to determine the appropriate electrical control signals to be transmitted to the PTZ unit to control its position, direction, and zoom.
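The weighted least-squares step can be sketched for one axis (phi → mechanical pan); the patent performs the same fit independently for tilt and zoom. The closed-form weighted fit below is standard; the inverse-distance weighting scheme is an assumption, since the patent says only that nearer calibration points get greater weight.

```python
# Sketch of the calibration fit: mechanical pan as a weighted-linear
# function of model-space phi, weighted toward nearby calibration points.

def weighted_linear_fit(xs, ys, ws):
    """Return (a, b) minimizing sum w * (y - (a*x + b))**2."""
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    num = sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
    den = sum(w * (x - mx) ** 2 for w, x in zip(ws, xs))
    a = num / den
    return a, my - a * mx

def mechanical_pan(phi_req, calib):
    """calib: list of (phi, mechanical_pan) pairs from the calibration step.
    Nearby calibration points get larger weight (inverse-distance weighting,
    an assumption; the exact scheme is not specified)."""
    xs = [p for p, _ in calib]
    ys = [m for _, m in calib]
    ws = [1.0 / (abs(phi_req - p) + 1.0) for p in xs]
    a, b = weighted_linear_fit(xs, ys, ws)
    return a * phi_req + b
```

For a camera whose pan axis really is linear in phi, any weighting recovers the same line; the weighting matters when the mount deviates locally from linearity.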
Immersive surveillance display indexing
A benefit of integrating video and other information in the VIDEO FLASHLIGHT™ system is that data can be indexed in previously impossible ways. For example, if the VIDEO FLASHLIGHT™ system is connected to a license plate reader system installed at multiple checkpoints, then a simple query in the VIDEO FLASHLIGHT™ system (using the rule-based system described previously) can immediately display the video of every instance of a given vehicle. Generally speaking, this is a very difficult task.
VIDEO FLASHLIGHT™ is an "operating system" for sensors. The spatial fusion of sensors and algorithms greatly increases the probability of detecting a target, and of identifying it correctly, in a surveillance application. The sensors can be of any passive or active type, including video, acoustic, seismic, magnetic, IR, and so on.
Fig. 5 shows the software architecture of the system. Essentially all sensor information is fed into the system through sensor drivers, and these are shown at the bottom of the figure. Auxiliary sensors 45 are any active/passive sensors (such as those listed above) in effect monitoring the site. The relevant information from all of these sensors, together with live video from fixed and PTZ cameras 47 and 49, is fed into a metadata manager 51 that fuses all of this information.
Rule-based processing exists in this layer 51, which defines the basic artificial intelligence of the system. Rules have the ability to control any of the devices 45, 47, or 49 below metadata manager 51, and can be rules such as "record video only when a door is opened on corridor A," "automatically track any target on area B with a PTZ camera," or "make VIDEO FLASHLIGHT™ fly to and zoom in on an individual matching a facial-feature or iris criterion."
These rules have a direct influence on the view rendered by the 3D render engine 53 (which receives data from the metadata manager for display), because that view is normally the visual information that is ultimately verified; generally speaking, the user/guard wants to fly to the target of interest, zoom in, and assess the situation further using the visual feedback the system provides.
All of the above functionality can also be used remotely via an available TCP/IP service. This module 55 exposes an API to remote sites that may not physically have the equipment but want to use the service. Because the rendered imagery is transmitted to the remote site in real time, a remote user has the same ability to see the output of the application as a local user.
This also means that all of the information (video sensors, auxiliary sensors, and spatial information) is compressed into a single portable format (namely, the output of the real-time rendering program), since the user can assess all of this information remotely without needing any local equipment beyond a screen and an input device of some kind, such as a keyboard. One example is accessing all of this information with a handheld computer.
The system has a display terminal on which the various display modules of the system are presented to the user, as shown in Figure 6. The display unit includes a graphical user interface (GUI), which, among other things, displays the rendered surveillance video and the data for the viewpoint the operator selects, and accepts mouse, joystick, or other input to change the viewpoint or manage the system.
Viewpoint browsing control
In early designs of immersive surveillance systems, the user browsed freely in the 3D environment with no constraints on the viewpoint. In the present design, restrictions are imposed on the user's potential viewpoints, which increases visual quality and reduces the complexity of user interaction.
One drawback of completely free browsing is that, if the user is unfamiliar with 3D controls (not an easy task, since there are usually more than seven parameters to control, including position (x, y, z), rotation (tilt, azimuth, roll), and field of view), it is easy to get lost or to produce unsatisfactory viewpoints. This is why the system helps the user produce good viewpoints: the video projections are discrete parts of a continuous environment, and those parts should be visualized in the best possible way. This help can take such forms as a viewpoint hierarchy provided through the operator console, click-driven rotation and zoom, map-based browsing, and the like.
The point of observation hierarchy
The discontinuous character of having utilized video-projection is browsed in the point of observation layering, and depends on and use and basically the complexity of user interactions is reduced to about 4 or less than 4 dimensions from the 7+ dimension.This realizes by create the point of observation hierarchy in environment.A kind of method of possible this hierarchy of establishment is as follows: the lowermost layer representative of hierarchy accurately is equivalent to has the bigger what comes into a driver's scope of possibility with the camera position that obtains bigger border, field and the point of observation of direction in the scene.The point of observation of all the video camera projections in the scene is seen in the high node representative that more high-rise point of observation demonstrates increasing camera cluster and hierarchy.
Once this hierarchy is established, instead of controlling absolute quantities such as position and direction, the user simply decides where in the scene to look, and the system uses the hierarchy to decide on and create the best view for the user. The user can also explicitly ascend or descend the hierarchy, or move to a sibling node, i.e., a laterally spaced viewpoint at the same level of the hierarchy.
Because all nodes are good viewpoints, carefully selected in advance according to the customer's needs and the camera arrangement at the site, the user can navigate the scene from one view to another through simple, low-complexity selections, and the visual quality always remains above a controlled threshold.
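The hierarchy and the up/down/lateral moves described above can be sketched as a small tree structure. This is an illustrative assumption, not the patent's implementation; the node names, fields, and sibling-cycling rule are invented for the example.

```python
# Hypothetical sketch of the viewpoint hierarchy described above.
# Node names, fields, and the lateral-move rule are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ViewpointNode:
    name: str                      # e.g. "camera-12" or "parking-lot cluster"
    level: int                     # 0 = camera-level leaf; higher = larger clusters
    parent: "ViewpointNode | None" = None
    children: list = field(default_factory=list)

    def add(self, child):
        child.parent = self
        self.children.append(child)
        return child

def move_up(node):
    """Ascend one level in the hierarchy (to a wider, multi-camera view)."""
    return node.parent or node

def move_lateral(node, step=1):
    """Move to a sibling viewpoint at the same level of the hierarchy."""
    if node.parent is None:
        return node
    sibs = node.parent.children
    return sibs[(sibs.index(node) + step) % len(sibs)]

# Build a tiny two-level hierarchy: one site-wide root, two camera-level leaves.
root = ViewpointNode("whole site", level=1)
cam_a = root.add(ViewpointNode("camera A", level=0))
cam_b = root.add(ViewpointNode("camera B", level=0))
```

Navigation then reduces to discrete choices among pre-selected nodes, which is what keeps the interaction complexity low.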
Click-induced rotation & zoom
This navigation scheme makes user interface devices such as joysticks unnecessary for the system; a mouse is the preferred input device.
When the user is investigating the scene shown as the view from a viewpoint, he can refine the viewpoint further by clicking on a target of interest in the 3D scene. This input causes the viewpoint parameters to change so that the view rotates, placing the clicked target at the center of the view. Once the target is centered, it can be zoomed with additional mouse input. This object-centric navigation makes navigation thoroughly more intuitive.
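The rotation that centers a clicked target can be sketched as follows. The angle conventions (y as the up axis, azimuth measured in the x–z ground plane) are assumptions for illustration and are not specified in the text.

```python
# Illustrative sketch (not from the patent): given the viewpoint position and a
# clicked 3D target, compute the azimuth and elevation that center the target.
# Convention assumed: y is "up", azimuth is rotation about the up axis.
import math

def center_on_target(eye, target):
    dx, dy, dz = (t - e for t, e in zip(target, eye))
    azimuth = math.degrees(math.atan2(dx, dz))         # rotation about the up axis
    ground = math.hypot(dx, dz)                        # distance in the ground plane
    elevation = math.degrees(math.atan2(dy, ground))   # tilt above the horizon
    return azimuth, elevation
```

Setting the viewpoint's rotation to these two angles leaves the clicked target on the optical axis, after which zoom can be applied without the target drifting off-center.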
Map-based view & navigation
Sometimes, when the user is looking at a small part of the world, he needs to check the "big picture" with a larger field of view, i.e., to view a map of the site. This is particularly useful when the user is responding to an alarm and wants to switch quickly to another part of the 3D scene.
In the VIDEO FLASHLIGHT™ system, the user can access an orthographic map view of the scene. In this view, all resources in the scene, including the various sensors, are represented with their current states. The video sensors are among them, and the user can generate the desired best view of the 3D scene by selecting the displayed footprints of one or more video sensors on this map view; the system responds accordingly by automatically navigating to a viewpoint that shows all of the selected sensors.
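One simple way to realize "navigate to a viewpoint that shows all selected sensors" is to bound the selected footprints and place an overhead viewpoint high enough to contain them. The rectangular footprints, square field-of-view model, and function name below are assumptions made for the sketch.

```python
# Hedged sketch: footprints as axis-aligned ground rectangles, and an overhead
# viewpoint computed to cover them all. The pinhole/FOV model is an assumption.
import math

def fit_view(footprints, fov_deg=60.0):
    """Given sensor footprints as (xmin, ymin, xmax, ymax) ground rectangles,
    return an overhead viewpoint (cx, cy, height) whose assumed square field
    of view of fov_deg degrees contains all of them."""
    xmin = min(f[0] for f in footprints); ymin = min(f[1] for f in footprints)
    xmax = max(f[2] for f in footprints); ymax = max(f[3] for f in footprints)
    cx, cy = (xmin + xmax) / 2, (ymin + ymax) / 2
    half_extent = max(xmax - xmin, ymax - ymin) / 2
    # Height at which half_extent subtends half the field of view.
    height = half_extent / math.tan(math.radians(fov_deg / 2))
    return cx, cy, height
```

A production system would also respect the viewpoint hierarchy (snapping to the nearest pre-selected node) rather than emitting an arbitrary pose.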
PTZ navigation control
A pan-tilt-zoom (PTZ) camera is generally fixed at one position and has the ability to rotate and zoom. A PTZ camera can be calibrated to the 3D environment, as described in an earlier section.
Derivation of rotation & zoom parameters
Once calibration has been performed, an image can be generated for any point in the 3D environment, because the point and the PTZ position define a line that determines a unique pan/tilt/zoom combination. Here, the zoom can be adjusted to "track" a specific size (a person (~2 m), a car (~5 m), a truck (~15 m), etc.), and therefore depends on the distance from the PTZ to the point; the zoom is regulated accordingly. The zoom can be further adjusted afterwards depending on the situation.
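A minimal sketch of this derivation follows. The size table comes from the text (~2 m person, ~5 m car, ~15 m truck); the angle conventions, the `fill_fraction` parameter, and the expression of zoom as a required field of view are illustrative assumptions.

```python
# Sketch under assumed conventions: pan about the vertical (y) axis, tilt above
# the horizon, and "zoom" expressed as the horizontal field of view needed to
# make a target of a given physical size fill a chosen fraction of the frame.
import math

TARGET_SIZES_M = {"person": 2.0, "car": 5.0, "truck": 15.0}  # from the text

def ptz_params(ptz_pos, point, target="person", fill_fraction=0.5):
    dx, dy, dz = (p - c for p, c in zip(point, ptz_pos))
    pan = math.degrees(math.atan2(dx, dz))
    tilt = math.degrees(math.atan2(dy, math.hypot(dx, dz)))
    dist = math.dist(point, ptz_pos)
    # Field of view such that the target spans `fill_fraction` of the frame.
    size = TARGET_SIZES_M[target]
    fov = math.degrees(2 * math.atan2(size / fill_fraction / 2, dist))
    return pan, tilt, fov
```

As the text notes, the farther the point is from the PTZ, the narrower the required field of view (i.e., the stronger the zoom), and the result can still be adjusted afterwards for the situation.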
Controlling the PTZ & user interaction
In the VIDEO FLASHLIGHT™ system, to survey an area with the PTZ, the user clicks on the point in the rendered image of the 3D environment. The software uses this position to generate the rotation angles and an initial zoom. These parameters are sent to the PTZ controller unit, and the PTZ turns to and zooms on the point. At the same time, the PTZ unit sends back its instantaneous pan, tilt and zoom parameters along with its video feed. These parameters are converted back into the VIDEO FLASHLIGHT™ coordinate system so that the video is projected onto the correct point, with the live video used as the projected imagery. The whole construction is therefore a PTZ that can be swung from any point to any other, visualized with real-time imagery projected onto the 3D model.
Another possibility is to control the PTZ pan/tilt/zoom with keystrokes or any other input device, without using the 3D model. This proves useful for derivative movements, for example panning and tilting while tracking a person, in which case, instead of clicking on the person continuously, the user presses preassigned keys (e.g., the arrow keys left/right/up/down/shift-up/shift-down can be mapped to pan left/pan right/tilt up/tilt down/zoom in/zoom out).
Visualizing the scene while controlling the PTZ
The sections above described controlling the PTZ by clicking on the 3D model, and visualizing a slewing PTZ camera. The viewpoint from which this effect is visualized is, however, important. One desirable method is to have a viewpoint "locked" to the PTZ, in which case the viewpoint from which the user views the scene has the same position as the PTZ camera and rotates as the PTZ rotates. The field of view is usually larger than that of the actual camera, to provide the user with context.
Another useful PTZ visualization scheme is to select a viewpoint at a higher level of the viewpoint hierarchy (see "The viewpoint hierarchy"). In this way, multiple fixed and PTZ cameras can be visualized from a single viewpoint.
Multiple PTZs
When there are multiple PTZs in the scene, rules about which PTZ to use, and where and when to use it, can be applied in the system. These rules can take the form of range maps, pan/tilt/zoom maps, and the like. If a view of a specified point in the scene is desired, the set of all PTZs that pass these tests for that point is passed on for subsequent processing, such as displaying them in VIDEO FLASHLIGHT™ or sending them to a video matrix switcher.
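The rule application described above can be sketched as a simple filter over the PTZ inventory. The dictionary layout and the use of a maximum range as the only rule (a stand-in for the range maps mentioned in the text) are assumptions made for illustration.

```python
# Illustrative sketch of applying per-PTZ rules to pick cameras for a point.
# A single max-range rule stands in for the range/pan-tilt-zoom maps above;
# all names and data shapes are assumptions.
import math

def eligible_ptzs(ptzs, point):
    """Return the names of all PTZs whose rules pass for `point`, for
    subsequent processing (display, or routing to a video matrix switcher)."""
    chosen = []
    for name, (pos, max_range) in ptzs.items():
        if math.dist(pos, point) <= max_range:
            chosen.append(name)
    return chosen
```

In practice each rule (occlusion, angular coverage, zoom limits) would be a further predicate in the loop, and the surviving set is what gets handed to the display or the matrix switcher.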
The 3D-2D billboard
The VIDEO FLASHLIGHT™ render engine projects video onto the 3D scene for visualization. However, when the field of view of a camera is too small, and especially when the viewpoint and the camera differ too greatly, the video is badly distorted when projected onto the 3D environment. To still display the video while preserving the spatial context, billboards are introduced as a way of displaying the video feed in the scene. A billboard is displayed very close to the original camera position; the camera's coverage area is also displayed and linked to the billboard.
The distortion can be detected by a variety of measures, including a morphological analysis of shape between the original and projected images, image size differences, etc.
Each billboard is displayed as a video screen in the immersive environment, hung essentially perpendicular to the viewer's line of sight, and shows on it the video from the camera that would otherwise be displayed distorted in the immersive environment. Because the billboards are 3D objects, the farther a camera is from the viewpoint, the smaller its billboard, so the spatial context is well preserved.
In applications, billboards prove effective even with hundreds of cameras. On a 1600 × 1200 screen, up to about 250 billboards with an average size of roughly 100 × 75 pixels can be visible in one screenshot. Of course, at that magnitude, the billboards act essentially as a texture over the whole scene.
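The distance-dependent shrinking of billboards noted above is ordinary perspective scaling. The pinhole model and the focal length in pixels below are illustrative assumptions, not parameters from the patent.

```python
# Small sketch of the perspective sizing of billboards: on-screen size falls
# off inversely with distance from the viewpoint. Pinhole model assumed.
def billboard_pixels(width_m, height_m, distance_m, focal_px=1000.0):
    """Approximate on-screen size (in pixels) of a billboard facing the viewer."""
    scale = focal_px / distance_m
    return width_m * scale, height_m * scale
```

With these assumed numbers, a 4 m × 3 m billboard 40 m away occupies about 100 × 75 pixels, consistent with the average billboard size quoted above; doubling the distance halves each dimension.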
While the foregoing is directed to embodiments of the present invention, other and further embodiments of the invention may be devised without departing from its basic scope, which is indicated by the claims that follow.

Claims (28)

1. A surveillance system for a site, the system comprising:
a plurality of cameras, each producing a respective video of a respective portion of the site;
a viewpoint selector configured to allow user-selective identification of a viewpoint in the site from which the site, or a portion thereof, can be viewed;
a video processor connected with the plurality of cameras so as to receive the videos therefrom;
the video processor having access to a computer model of the site and rendering therefrom real-time images corresponding to the field of view of the site as seen from the viewpoint, at least a portion of at least one of the videos being overlaid on the computer model in the real-time images, the video processor displaying the images for real-time viewing by a user; and
a video control system that, based on the viewpoint, automatically selects a subset of the plurality of cameras that are generating video relevant to the field of view of the site seen from the viewpoint as rendered by the video processor, and causes video from the subset of cameras to be transmitted to the video processor.
2. The immersive surveillance system according to claim 1, wherein the video control system comprises a video switcher that allows transmission to the video processor of video from the subset of cameras selected as relevant to the view, and stops transmission to the video processor of video from at least some of the plurality of cameras not in the subset.
3. The immersive surveillance system according to claim 2, wherein the cameras transmit their video as streams over a network to the video processor via one or more servers, and the video switcher communicates with the servers so as to stop the streaming over the network of at least some of the video of the cameras not in the subset.
4. The immersive surveillance system according to claim 2, wherein the cameras transmit their video to the video processor via communication lines, and the video switcher is an analog matrix switcher that cuts off the flow along the communication lines of at least some of the video of the cameras not in the subset.
5. The immersive surveillance system according to claim 1, wherein the video control system determines the distance between the viewpoint and each of the plurality of cameras, and selects the subset of cameras so as to include the cameras at the shortest distances from the viewpoint.
6. The immersive surveillance system according to claim 1, wherein the viewpoint selector is an interactive display at a computer station by which the user, while viewing the images on a display device, can identify the viewpoint in the computer model.
7. The immersive surveillance system according to claim 1, wherein the computer model is a 3-D model of the site.
8. The immersive surveillance system according to claim 1, wherein the viewpoint selector receives operator input, or an automatic signal responsive to an event, and in response changes the viewpoint to a second viewpoint;
and the video control system, based on the second viewpoint, automatically selects a second subset of the plurality of cameras that are generating video relevant to the view of the site seen from the second viewpoint as rendered by the video processor, and causes video from the different subset of cameras to be transmitted to the video processor.
9. The immersive surveillance system according to claim 8, wherein the viewpoint selector receives operator input to change the viewpoint, the change being a continuous movement of the viewpoint to the second viewpoint, and the viewpoint selector constrains the continuous movement to a permitted viewing path, such that movement outside the viewing path is prohibited irrespective of any operator input directing such movement.
10. The immersive surveillance system according to claim 1, wherein at least one of the cameras is a PTZ camera with controllable direction or zoom parameters, and the video control system transmits control signals to the PTZ camera so as to cause it to adjust its direction or zoom parameters such that the PTZ camera provides data relevant to the field of view.
11. A surveillance system for a site, the system comprising:
a plurality of cameras, each generating a respective data stream comprising a series of video frames, each video frame corresponding to a real-time image of a portion of the site, and each frame having a time stamp indicating the time at which the real-time image was generated by the associated camera;
a recorder receiving and recording the data streams from the cameras;
a video processing system connected with the recorder and providing playback of the recorded data streams from the recorder, the video processing system having a renderer that, during playback of the recorded data streams, renders images of the view seen from a playback viewpoint of a model of the site, the renderer applying thereto recorded data streams from at least two cameras relevant to the view;
the video processing system including a synchronizer that receives the recorded data streams from the recorder system during playback and distributes the recorded streams to the renderer in synchronized form, such that each image is rendered with video frames that were all captured at the same time.
12. The immersive surveillance system according to claim 11, wherein the synchronizer synchronizes the data streams based on the time stamps of the video frames of the data streams.
13. The immersive surveillance system according to claim 12, wherein the recorder is connected to a controller that causes the recorder to store the plurality of data streams in synchronized form, and that reads the time stamps of the data streams so that the data streams can be synchronized.
14. The immersive surveillance system according to claim 11, wherein the model is a 3D model.
15. An immersive surveillance system comprising:
a plurality of cameras, each producing a respective video of a respective portion of a site;
an image processor connected with the plurality of cameras and receiving video therefrom, the image processor producing, for a viewpoint, rendered images based on a model of the site combined with a plurality of the videos relevant to the viewpoint;
a display device connected with the image processor and displaying the rendered images; and
a view controller connected to the image processor and providing thereto data defining the viewpoint to be displayed, the view controller being connected with, and receiving input from, an interactive navigation component that allows the user to selectively change the viewpoint, the interactive navigation component constraining changes of the viewpoint to a preselected set of viewpoints.
16. The immersive surveillance system according to claim 15, wherein the view controller calculates changes of the position of the viewing viewpoint.
17. The immersive surveillance system according to claim 15, wherein, when the user changes the viewpoint to a second viewpoint, the view controller determines whether any video other than the video relevant to the first viewpoint is relevant to the second viewpoint, and a second image is rendered using as second video any additional video identified by the view controller as relevant to the second viewpoint.
18. A method for an immersive surveillance system having a plurality of cameras, each producing a respective video of a respective portion of a site, and a viewing station having a display device displaying images for viewing by a user, the method comprising:
receiving data from an input device, the data indicating a selection of a viewpoint and a field of view from which to view at least some of the video from the cameras;
identifying a subgroup of one or more of the cameras positioned such that they can generate video relevant to the field of view;
transmitting video from the subgroup of cameras to a video processor;
generating a video display with the video processor by rendering an image from a computer model of the site, the image corresponding to the field of view of the site seen from the viewpoint, at least a portion of at least one of the videos being overlaid on the computer model in the image;
displaying the image to a viewer; and
causing video from at least some of the cameras not in the subgroup not to be transmitted to the video rendering system, thereby reducing the quantity of data transmitted to the video processor.
19. The method according to claim 18, wherein video from the subgroup of cameras is transmitted to the video processor over a network by servers associated with the cameras, and wherein the step of not transmitting video is accomplished by communicating over the network with at least one server associated with at least one camera not in the subgroup of cameras, so that the server does not transmit the video of the at least one camera.
20. The method according to claim 18, further comprising:
receiving input indicating a change of the viewpoint and/or the field of view, such that a new field of view and/or a new viewpoint is defined;
determining a second subgroup of the cameras that can generate video relevant to the new field of view or new viewpoint;
causing video from the second subgroup of cameras to be transmitted to the video processor;
the video processor rendering a new image of the new field of view or new viewpoint using the computer model and the received video; and
wherein video from at least some of the cameras not in the second subgroup is not transmitted to the video processor.
21. The method according to claim 20, wherein the first and second subgroups have at least one camera in common, and each subgroup has at least one camera not in the other subgroup.
22. The method according to claim 20, wherein each of the subgroups has only a respective one of the cameras therein.
23. The method according to claim 18, wherein one of the cameras in the subgroup is a camera with controllable direction or zoom, the method further comprising transmitting control signals to the camera so as to cause it to adjust its direction or zoom.
24. A method for a surveillance system of a site, the surveillance system having a plurality of cameras, each generating a respective data stream having a series of video frames, each video frame corresponding to a real-time image of a portion of the site, the method comprising:
recording the data streams of the cameras on one or more recorders, the data streams being recorded together in synchronized form, each frame having a time stamp indicating the time at which the real-time image was generated by the associated camera;
communicating with the recorders so as to cause them to transmit the recorded data streams of the cameras to a video processor;
receiving the recorded data streams and synchronizing their frames based on their time stamps;
receiving data from an input device, the data indicating a selection of a viewpoint and a field of view from which to view at least some of the video from the cameras;
generating a video display with the video processor by rendering an image from a computer model of the site, the image corresponding to the field of view of the site seen from the viewpoint, at least a portion of at least two of the videos being overlaid on the computer model in the image;
wherein, for each rendered image, the videos overlaid thereon are frames whose time stamps all indicate the same period of time; and
displaying the image to a viewer.
25. The method according to claim 24, wherein, responsive to received input, the video is selectively played forward and backward.
26. The method according to claim 25, wherein the playback is controlled from the video processor station by transmitting command signals to the recorder.
27. The method according to claim 24, further comprising receiving input indicating a change of the field of view and/or viewpoint to a new field of view, the video processor generating images from the computer model and the video for the new viewpoint and/or field of view.
28. A method for a surveillance system of a site, the surveillance system having a plurality of cameras, each generating a respective data stream of a series of video frames, each video frame corresponding to a real-time image of a portion of the site, the method comprising:
transmitting the recorded data streams of the cameras to a video processor;
receiving data from an input device, the data indicating a selection of a viewpoint and a field of view from which to view at least some of the video from the cameras;
generating a video display with the video processor by rendering an image from a computer model of the site, the image corresponding to the field of view of the site seen from the viewpoint, at least a portion of at least two of the videos being overlaid on the computer model in the image; and
displaying the image to a viewer;
receiving input indicating a change of the viewpoint and/or field of view, the input being constrained such that the operator can only input changes of the viewpoint to a new field of view that form a limited subset of all possible changes, the limited subset corresponding to paths through the site.
CNA2005800260173A 2004-06-01 2005-06-01 Method and system for performing video flashlight Pending CN101375599A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US57589404P 2004-06-01 2004-06-01
US60/575,895 2004-06-01
US60/576,050 2004-06-01
US60/575,894 2004-06-01

Publications (1)

Publication Number Publication Date
CN101375599A true CN101375599A (en) 2009-02-25

Family

ID=40214849

Family Applications (3)

Application Number Title Priority Date Filing Date
CNA2005800180164A Pending CN101375598A (en) 2004-06-01 2005-06-01 Video flashlight/vision alert
CNA200580018008XA Pending CN101341753A (en) 2004-06-01 2005-06-01 Method and system for wide area security monitoring, sensor management and situational awareness
CNA2005800260173A Pending CN101375599A (en) 2004-06-01 2005-06-01 Method and system for performing video flashlight

Family Applications Before (2)

Application Number Title Priority Date Filing Date
CNA2005800180164A Pending CN101375598A (en) 2004-06-01 2005-06-01 Video flashlight/vision alert
CNA200580018008XA Pending CN101341753A (en) 2004-06-01 2005-06-01 Method and system for wide area security monitoring, sensor management and situational awareness

Country Status (1)

Country Link
CN (3) CN101375598A (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011023032A1 (en) * 2009-08-26 2011-03-03 中兴通讯股份有限公司 Method for switching video monitoring scene and system for monitoring video
CN103096032A (en) * 2012-04-17 2013-05-08 北京明科全讯技术有限公司 Panorama monitoring system and method thereof
CN103281547A (en) * 2009-04-15 2013-09-04 索尼公司 Data structure, recording medium, playing device, playing method, and program
CN103858073A (en) * 2011-09-19 2014-06-11 视力移动技术有限公司 Touch free interface for augmented reality systems
CN103873453A (en) * 2012-12-18 2014-06-18 中兴通讯股份有限公司 Immersion communication client, immersion communication server and content view obtaining method
CN104137154A (en) * 2011-08-05 2014-11-05 霍尼韦尔国际公司 Systems and methods for managing video data
CN104463948A (en) * 2014-09-22 2015-03-25 北京大学 Seamless visualization method for three-dimensional virtual reality system and geographic information system
CN104521230A (en) * 2012-07-16 2015-04-15 埃吉迪姆技术公司 Method and system for reconstructing 3d trajectory in real time
WO2016086878A1 (en) * 2014-12-04 2016-06-09 Huawei Technologies Co., Ltd. System and method for generalized view morphing over a multi-camera mesh
CN105898212A (en) * 2014-12-16 2016-08-24 罗伯特·博世有限公司 Transcoder device and client-server architecture comprising the transcoder device
CN110730333A (en) * 2019-10-23 2020-01-24 深圳震有科技股份有限公司 Monitoring video switching processing method and device, computer equipment and medium
CN112929599A (en) * 2019-12-05 2021-06-08 安讯士有限公司 Video management system and method for dynamic display of video streams

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102156424A (en) * 2010-12-09 2011-08-17 广东高新兴通信股份有限公司 Sensing system and processing method of custom event and processing rule
US9398283B2 (en) * 2013-02-12 2016-07-19 Honeywell International Inc. System and method of alarm and history video playback
CN105653110B (en) * 2014-11-13 2021-02-12 安定宝公司 Method for creating rule based on electronic map of monitoring system
US9728071B2 (en) * 2015-03-12 2017-08-08 Honeywell International Inc. Method of performing sensor operations based on their relative location with respect to a user
CN109963111A (en) * 2017-12-14 2019-07-02 游丰安 Clustering safety management system
EP4055813A1 (en) * 2019-11-05 2022-09-14 Barco N.V. Zone-adaptive video generation

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103281547A (en) * 2009-04-15 2013-09-04 索尼公司 Data structure, recording medium, playing device, playing method, and program
CN103281547B (en) * 2009-04-15 2014-09-10 索尼公司 Data structure, recording medium, playing device, playing method, and program
WO2011023032A1 (en) * 2009-08-26 2011-03-03 中兴通讯股份有限公司 Method for switching video monitoring scene and system for monitoring video
CN104137154A (en) * 2011-08-05 2014-11-05 霍尼韦尔国际公司 Systems and methods for managing video data
CN103858073A (en) * 2011-09-19 2014-06-11 视力移动技术有限公司 Touch free interface for augmented reality systems
CN103858073B (en) * 2011-09-19 2022-07-29 视力移动技术有限公司 Augmented reality device, method of operating augmented reality device, computer-readable medium
CN103096032B (en) * 2012-04-17 2015-12-02 北京明科全讯技术有限公司 A kind of overall view monitoring system and method
CN103096032A (en) * 2012-04-17 2013-05-08 北京明科全讯技术有限公司 Panorama monitoring system and method thereof
CN104521230B (en) * 2012-07-16 2018-10-02 埃吉迪姆技术公司 Method and system for the tracks real-time reconstruction 3D
CN104521230A (en) * 2012-07-16 2015-04-15 埃吉迪姆技术公司 Method and system for reconstructing 3d trajectory in real time
WO2014094537A1 (en) * 2012-12-18 2014-06-26 中兴通讯股份有限公司 Immersion communication client and server, and method for obtaining content view
CN103873453B (en) * 2012-12-18 2019-05-24 中兴通讯股份有限公司 Immerse communication customer end, server and the method for obtaining content view
CN103873453A (en) * 2012-12-18 2014-06-18 中兴通讯股份有限公司 Immersion communication client, immersion communication server and content view obtaining method
CN104463948A (en) * 2014-09-22 2015-03-25 北京大学 Seamless visualization method for three-dimensional virtual reality system and geographic information system
CN104463948B (en) * 2014-09-22 2017-05-17 北京大学 Seamless visualization method for three-dimensional virtual reality system and geographic information system
WO2016086878A1 (en) * 2014-12-04 2016-06-09 Huawei Technologies Co., Ltd. System and method for generalized view morphing over a multi-camera mesh
US9900583B2 (en) 2014-12-04 2018-02-20 Futurewei Technologies, Inc. System and method for generalized view morphing over a multi-camera mesh
CN105898212A (en) * 2014-12-16 2016-08-24 罗伯特·博世有限公司 Transcoder device and client-server architecture comprising the transcoder device
CN110730333A (en) * 2019-10-23 2020-01-24 深圳震有科技股份有限公司 Monitoring video switching processing method and device, computer equipment and medium
CN112929599A (en) * 2019-12-05 2021-06-08 安讯士有限公司 Video management system and method for dynamic display of video streams
CN112929599B (en) * 2019-12-05 2022-07-29 安讯士有限公司 Video management system and method for dynamic display of video streams

Also Published As

Publication number Publication date
CN101375598A (en) 2009-02-25
CN101341753A (en) 2009-01-07

Similar Documents

Publication Publication Date Title
CN101375599A (en) Method and system for performing video flashlight
US20080291279A1 (en) Method and System for Performing Video Flashlight
US11095858B2 (en) Systems and methods for managing and displaying video sources
US7633520B2 (en) Method and apparatus for providing a scalable multi-camera distributed video processing and visualization surveillance system
Fan et al. Heterogeneous information fusion and visualization for a large-scale intelligent video surveillance system
US20190037178A1 (en) Autonomous video management system
AU2011201215B2 (en) Intelligent camera selection and object tracking
CN101946215B (en) Method for controlling an alarm management system
US20060279630A1 (en) Method and apparatus for total situational awareness and monitoring
Milosavljević et al. Integration of GIS and video surveillance
US20110109747A1 (en) System and method for annotating video with geospatially referenced data
KR20190024934A (en) Multi image displaying method, Multi image managing server, Multi image displaying system, Computer program and Recording medium storing computer program for the same
JP4722537B2 (en) Monitoring device
CN112449093A (en) Three-dimensional panoramic video fusion monitoring platform
KR101954951B1 (en) Multi image displaying method
EP2093999A1 (en) Integration of video information
CN113573024A (en) AR real scene monitoring system suitable for Sharing VAN station
CN114549796A (en) Park monitoring method and park monitoring device
KR20120103328A (en) Surveillance apparatus for event tracking and event tracking method using the same

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 1127891

Country of ref document: HK

C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Open date: 20090225

REG Reference to a national code

Ref country code: HK

Ref legal event code: WD

Ref document number: 1127891

Country of ref document: HK