WO2015101047A1 - Method and device for extracting surveillance video - Google Patents

Method and device for extracting surveillance video

Info

Publication number
WO2015101047A1
WO2015101047A1 (application PCT/CN2014/084295)
Authority
WO
WIPO (PCT)
Prior art keywords
camera
lens
extracting
video
visible field
Prior art date
Application number
PCT/CN2014/084295
Other languages
English (en)
French (fr)
Inventor
胡景翔
林福荣
赵均树
Original Assignee
杭州海康威视系统技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 杭州海康威视系统技术有限公司 filed Critical 杭州海康威视系统技术有限公司
Priority to EP14876452.5A priority Critical patent/EP3091735B1/en
Priority to US15/109,756 priority patent/US9736423B2/en
Publication of WO2015101047A1 publication Critical patent/WO2015101047A1/zh

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • H04N5/765Interface circuits between an apparatus for recording and another apparatus
    • H04N5/77Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73Querying
    • G06F16/732Query formulation
    • G06F16/7335Graphical querying, e.g. query-by-region, query-by-sketch, query-by-trajectory, GUIs for designating a person/face/object as a query predicate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00Burglar, theft or intruder alarms
    • G08B13/18Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00Burglar, theft or intruder alarms
    • G08B13/18Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19639Details of the system layout
    • G08B13/19645Multiple cameras, each having view on one of a plurality of scenes, e.g. multiple cameras for multi-room surveillance or for tracking an object by view hand-over
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00Burglar, theft or intruder alarms
    • G08B13/18Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19665Details related to the storage of video surveillance data
    • G08B13/19671Addition of non-video data, i.e. metadata, to video stream
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00Burglar, theft or intruder alarms
    • G08B13/18Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19678User interface
    • G08B13/19682Graphic User Interface [GUI] presenting system data to the user, e.g. information on a screen helping a user interacting with an alarm system
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • H04N5/91Television signal processing therefor
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/62Control of parameters via user interfaces

Definitions

  • the present invention relates to the field of video surveillance, and in particular, to a technique for extracting surveillance video.
  • the lens refers to a video surveillance gun camera or a dome camera.
  • the direction of a gun camera is fixed, while a dome camera can rotate, so its direction is not fixed.
  • the existing GIS (Geographic Information System) map-based video viewing method works as follows: first, a point, line, or area is set on the GIS map; then, taking that point, line, or area as the center, a region of a preset radius is drawn and all lenses within it are found to form a list; finally, the video of those lenses is displayed on the large screen of a video mosaic wall.
  • GIS Geographic Information System
  • this solution is based on GIS spatial search, similar to the nearby search of Baidu Maps; however, the nearby search of Baidu Maps targets restaurants, banks, hotels, and the like, whereas the above scheme uses the lens as the search target.
  • even so, the scheme still requires manually reviewing long stretches of video recorded near the incident point to determine which lenses illuminated the incident area, the incident route, or the suspect's route; in urban areas where video lenses are very dense, this often costs a great deal of manpower and time.
  • the object of the present invention is to provide a method and device for extracting surveillance video: by computing the intersection of the target position specified by the user with the visible fields of the selected cameras, the cameras related to the target are found, so that qualifying video useful for the actual application can be extracted directly from the relevant cameras, reducing the effort and time spent manually screening video recordings.
  • to this end, an embodiment of the present invention discloses a method for extracting surveillance video, which includes the following step: collecting and storing the lens visible fields of the camera and the illumination period corresponding to each lens visible field;
  • Embodiments of the present invention also disclose a method for extracting surveillance video, which includes the following steps:
  • Embodiments of the present invention also disclose an apparatus for extracting surveillance video, including the following units:
  • a first visible field collection unit configured to collect and store the lens visible fields of the camera and the illumination period corresponding to each lens visible field;
  • a first visible field extraction unit configured to extract the lens visible fields whose illumination periods intersect the query time period;
  • an intersection calculation unit configured to calculate the intersection relationship between the extracted lens visible fields and the target position;
  • a set acquisition unit configured to acquire the set of cameras corresponding to the lens visible fields that intersect the target position;
  • a first video extraction unit configured to extract, for each camera in the camera set, the video captured during its illumination periods.
  • Embodiments of the present invention also disclose an apparatus for extracting surveillance video, including the following units:
  • a second visible field extraction unit configured to extract the lens visible field of each camera in the query time period;
  • an intersection calculation unit configured to calculate the intersection relationship between the extracted lens visible fields and the target position;
  • a set acquisition unit configured to acquire the set of cameras corresponding to the lens visible fields that intersect the target position;
  • a second video extraction unit configured to extract the video captured by each camera in the camera set during the query time period.
  • in this way, the sub-periods within the specified time period during which a camera actually illuminated the target position can be found more precisely, and only the video captured during those sub-periods is extracted;
  • this leaves fewer recordings to review in the end, which further improves the efficiency of staff during criminal investigation.
  • Traditional video surveillance only controls the camera dome in one direction; the current direction information is never obtained back from the dome, and no applications are built on that direction information.
  • the innovation of the present invention is this reverse usage mode: by computing the intersection of the target position specified by the user with the visible fields of the selected cameras, the cameras related to the target are found, so that qualifying video useful for the actual application can be extracted directly from the relevant cameras, greatly reducing the labor, energy, and time spent screening video recordings and improving the efficiency of investigators in criminal investigations.
  • in addition, all shooting areas within the camera's variable range can be collected and stored together as the camera's lens visible field, which removes the calculation otherwise needed to distinguish visible fields for different illumination periods and thereby reduces the amount of calculation of the entire process.
  • FIG. 1 is a schematic flow chart of a method for extracting a surveillance video in the first embodiment of the present invention;
  • FIG. 2 is a schematic flow chart of a method for extracting a surveillance video in the second embodiment of the present invention;
  • FIG. 3 is a schematic flow chart of a method for collecting and storing a visible field in the third embodiment of the present invention;
  • FIG. 4 is a diagram of the actual orientation of video data collected by a camera in the third embodiment of the present invention;
  • FIG. 5 is a flow chart for calculating the position coordinates of a visible field in the third embodiment of the present invention;
  • FIG. 6 is an example schematic diagram of a camera's visible field in the third embodiment of the present invention;
  • FIG. 7 is a schematic diagram of the visible field of a camera lens in the third embodiment of the present invention;
  • FIG. 8 is a schematic flow chart of a method for extracting surveillance video in the fourth embodiment of the present invention;
  • FIG. 9 is a schematic diagram showing the lens visible field on the GIS map in the fourth embodiment of the present invention;
  • FIG. 10 is a schematic diagram of calibrating a target position on the GIS map in the fourth embodiment of the present invention;
  • FIG. 11 is a schematic diagram of calibrating a target position on the GIS map in the fourth embodiment of the present invention;
  • FIG. 12 is a schematic diagram of calibrating a target position on the GIS map in the fourth embodiment of the present invention;
  • FIG. 13 is a schematic diagram of specifying a query time period in the fourth embodiment of the present invention;
  • FIG. 14 is a representation of a camera set in the fourth embodiment of the present invention;
  • FIG. 15 is a schematic diagram of a camera set in the fourth embodiment of the present invention;
  • FIG. 16 is a schematic structural diagram of an apparatus for extracting surveillance video in the fifth embodiment of the present invention.
  • FIG. 1 is a schematic flow chart of a method for extracting a surveillance video.
  • the method for extracting the surveillance video includes the following steps: In step 101, the lens visible field of the camera and the illumination period corresponding to each lens visible field are collected and stored.
  • the visual field refers to the range of the area that the user can see through the lens when the video lens illuminates an area.
  • the method further includes the following steps: Get the query time period and target location from the input device.
  • the target location referred to in the present invention refers to a target point, line or plane selected by the user in the electronic map to be queried.
  • In step 102, the lens visible fields whose illumination periods intersect the query time period are extracted. In other modes of the present invention, a larger query area containing the target location may be specified first; the lens visible fields are pre-selected by the query area, and the illumination-period extraction is then performed only on the pre-selected fields, reducing the amount of calculation.
  • In step 103, the intersection relationship between the extracted lens visible fields and the target position is calculated.
  • in this step the intersection relationship is calculated using the spatial computing capability of the GIS engine: two graphic objects are passed to the engine, which returns the intersection result of the two graphics.
  • the calculation method of the intersection relationship may be implemented in other manners, and is not limited thereto.
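As an illustration of one such alternative, the point case can be sketched with a plain ray-casting test in Python (the coordinates and the trapezoid below are purely hypothetical; an actual deployment would pass the two graphic objects to a GIS engine as described above):

```python
def point_in_polygon(pt, poly):
    """Even-odd ray-casting test: is the point inside the polygon?"""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # count edges whose crossing of the horizontal ray lies to the right
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# hypothetical trapezoidal visible field with vertices d2, d5, d6, d3
field = [(2.0, 1.0), (6.0, 3.0), (6.0, -3.0), (2.0, -1.0)]
print(point_in_polygon((4.0, 0.0), field))   # True: the target point is covered
print(point_in_polygon((0.0, 0.0), field))   # False: the pole sits in the blind zone
```

The same predicate generalizes to lines and areas by also testing edge-to-edge intersections; a GIS engine bundles all of these cases behind a single intersection call.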
  • In step 104, the set of cameras corresponding to the lens visible fields that intersect the target position is obtained.
  • In step 105, for each camera in the camera set, the video captured during its illumination periods is extracted. The process then ends.
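Steps 101 to 105 can be summarized in a short sketch (the record layout, camera names, and times are hypothetical; the spatial test of step 103 is abstracted into a `covers_target` callback standing in for the GIS engine):

```python
from datetime import datetime

t = datetime.fromisoformat
# Hypothetical stored data (step 101): each camera's illumination periods,
# i.e. the time spans for which a particular visible field was recorded.
RECORDS = {
    "cam-1": [(t("2013-07-10 10:35"), t("2013-07-10 10:41"))],
    "cam-2": [(t("2013-07-10 14:00"), t("2013-07-10 15:00"))],
}

def periods_overlap(a, b):
    """Step 102: does an illumination period intersect the query period?"""
    return a[0] < b[1] and b[0] < a[1]

def find_cameras(records, covers_target, query_period):
    """Steps 102-104: keep cameras whose visible field covered the target
    during an illumination period overlapping the query period."""
    hits = {}
    for cam, periods in records.items():
        matched = [p for p in periods
                   if periods_overlap(p, query_period) and covers_target(cam, p)]
        if matched:
            hits[cam] = matched   # step 105 would fetch video for these periods
    return hits

query = (t("2013-07-10 10:00"), t("2013-07-10 12:00"))
result = find_cameras(RECORDS, lambda cam, p: True, query)
print(sorted(result))  # ['cam-1']: only cam-1 overlaps the query period
```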
  • a second embodiment of the present invention relates to a method of extracting a surveillance video.
  • 2 is a schematic flow chart of a method for extracting the surveillance video.
  • the method for extracting the surveillance video includes the following steps: In step 201, the lens visible field of each camera in the query time period is extracted. Before this step, the following step is also included: collect and store the lens visible field of each camera.
  • the collected and stored lens visible field of a camera is the union of all areas the camera can shoot within its variable range.
  • as in the first embodiment, a larger query area containing the target location may be specified first; the lens visible fields are pre-selected by the query area before the per-time-period extraction, reducing the amount of calculation. Thereafter, in step 202, the intersection relationship between the extracted lens visible fields and the target position is calculated. In this step, the intersection is computed with the spatial computing capability of the GIS engine: two graphic objects are passed to the engine, which returns whether the two graphics intersect.
  • the calculation method of the intersection may be implemented in other manners, and is not limited thereto.
  • the method further includes the steps of: obtaining a query time period and a target location from the input device.
  • the target location refers to the target point, line or polygon selected by the user in the electronic map that needs to be queried.
  • Traditional video surveillance only controls the camera dome in one direction; the current direction information is never obtained back from the dome, and no applications are built on that direction information.
  • the innovation of the present invention is this reverse usage mode: by computing the intersection of the target position specified by the user with the visible fields of the selected cameras, the cameras related to the target are found, so that qualifying video useful for practical application can be extracted directly from the relevant cameras,
  • greatly reducing the labor, energy, and time spent screening video recordings, and improving the efficiency of investigators in criminal investigations.
  • the third embodiment of the present invention relates to a method for collecting and storing a visible field.
  • FIG. 3 is a schematic flow chart of the method for collecting and storing the visible field.
  • the method includes the following steps:
  • In step 301, setting parameters are acquired from the camera.
  • the visual field display system can use the transmission channel between the camera and the monitoring client to access the camera; when necessary, the camera sends the setting parameters to the visual field display system through this transmission channel.
  • the specific content of the setting parameters can be selected as needed.
  • In step 302, geometric operations are performed using the setting parameters, the camera height, and the camera point information to obtain the position coordinates of the camera's visible range in the horizontal direction.
  • the visual field display system can also acquire other operating parameters, including the camera height and the camera point information; the camera point information is the position coordinates of the camera's pole. The camera height and camera point information can also be uploaded to the visual field display system by the staff.
  • the coordinate position of the visible range can be specifically the coordinates of each edge point of the visible field.
  • a visual field is generated from the combination of position coordinates of the visual range.
  • the generated visual fields are superimposed and displayed on the electronic map.
  • the visual field can be marked significantly, for example, with a transparent area with color.
  • the method includes:
  • geometric operations are performed using the setting parameters, the camera height, and the camera point information to obtain the position coordinates of the camera's blind zone in the horizontal direction; the blind zone is generated from the combination of the position coordinates of the blind zone range; and the generated blind zone is superimposed and displayed on the electronic map.
  • geometric operations can thus be used to calculate the position coordinates of both the visible range and the blind zone in the horizontal direction.
  • there are many possible calculation methods, which can be chosen as needed; the following is one example:
  • the visible region is a trapezoidal region
  • the blind region is a triangular region
  • the setting parameters include: a horizontal field of view angle, a vertical field of view angle, a depression angle T, and a horizontal angle P; the four vertices of the trapezoidal region are denoted d2, d3, d5 and d6; the triangle area is the area composed of M, d2 and d3, where M is the position of the camera's pole.
  • FIG. 4 shows the actual orientation of the video data collected by the camera in this embodiment.
  • In step 501, the height of the triangle and the height of the trapezoid are calculated from the depression angle, the vertical field of view angle, and the height of the pole:
  • angle a = 90° - depression angle - half of the vertical field of view; the depression angle is the angle between the bisector of the vertical field of view and the ground, as shown;
  • height of the triangle = height of the pole × tan a;
  • angle b = depression angle - half of the vertical field of view;
  • height of the trapezoid = (height of the pole × cot b) - height of the triangle.
  • In step 502, r1 and r2 are calculated from the height of the triangle, the height of the trapezoid, and the horizontal field of view; r1 and r2 are half the lengths of the upper and lower bases of the trapezoid:
  • r1 = height of the triangle × tan (half of the horizontal field of view);
  • r2 = (height of the triangle + height of the trapezoid) × tan (half of the horizontal field of view). Then, in step 503, taking the bisector of the horizontal field of view as the x-axis, the coordinates of d1 are calculated from the camera point information and the height of the triangle, where d1 is the intersection of the bisector of the horizontal field of view and the upper base of the trapezoid; the two vertices d2 and d3 of the upper base are calculated from d1 and the horizontal field of view; then d2 and d3 are converted into coordinates with the horizontal angle P facing upward.
  • d1 and d4 are the intersections of the bisector of the horizontal angle of view and the two parallel sides of the trapezoid.
  • the camera point information includes the camera's abscissa mapPoint.x and the camera's ordinate mapPoint.y.
  • the d2 and d3 obtained at this time are calculated from the bisector of the horizontal angle of view as the X-axis.
  • the camera is set to have a 0-degree reference angle, and the camera's current orientation relative to that 0-degree angle is the horizontal angle; therefore, d2 and d3 must be converted into coordinates with the horizontal angle P facing upward, a geometric coordinate transformation easily realized by those skilled in the art and not repeated here.
  • In step 504, the coordinates of d4 are calculated from the camera point information, the height of the triangle, and the height of the trapezoid, where d4 is the intersection of the bisector of the horizontal field of view and the lower base of the trapezoid; the two vertices d5 and d6 of the lower base are calculated from d4 and the horizontal field of view angle; then d5 and d6 are converted into coordinates with the horizontal angle P facing upward.
  • d4 can be calculated from mapPoint, the height of the triangle, and the height of the trapezoid, and the coordinates of d5 and d6 can be calculated from the triangle formula, which is not repeated here.
  • the blind zone and visible field can then be generated on the electronic map, including:
  • generating the triangle area (blind zone): the triangle area is generated from the point information of the camera and the points d2 and d3;
  • generating the trapezoidal region (visible field): the trapezoidal region is generated by combining the points d2, d3, d5 and d6.
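The geometry of steps 501 to 504 can be sketched as follows (a minimal sketch: the degree units and the rotation convention for the horizontal angle P are assumptions, and it requires vfov/2 < depression < 90° - vfov/2 so that both edges of the view meet the ground):

```python
import math

def lens_field(map_x, map_y, pole_h, depression, vfov, hfov, pan):
    """Compute the blind-zone triangle and visible-field trapezoid vertices.

    All angles are in degrees; (map_x, map_y) is the pole position M.
    """
    a = math.radians(90 - depression - vfov / 2)
    near = pole_h * math.tan(a)          # height of the triangle (blind zone)
    b = math.radians(depression - vfov / 2)
    far = pole_h / math.tan(b)           # pole height x cot(b) = near + trapezoid height
    half = math.radians(hfov / 2)
    r1, r2 = near * math.tan(half), far * math.tan(half)  # half upper/lower bases

    def place(x, y):
        # rotate the bisector-as-x-axis frame by the horizontal angle pan (P),
        # then translate to the pole position M
        t = math.radians(pan)
        return (map_x + x * math.cos(t) - y * math.sin(t),
                map_y + x * math.sin(t) + y * math.cos(t))

    d2, d3 = place(near, r1), place(near, -r1)
    d5, d6 = place(far, r2), place(far, -r2)
    blind = [(map_x, map_y), d2, d3]     # triangle M-d2-d3
    visible = [d2, d5, d6, d3]           # trapezoid d2-d5-d6-d3
    return blind, visible
```

With pan = 0 the trapezoid opens along the positive x-axis: d2 and d3 lie on the upper base at distance `near` from the pole, and d5 and d6 on the lower base at distance `far`.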
  • FIG. 6 it is an example of a schematic diagram of a visible field of a camera.
  • the camera is displayed at the actual position of the electronic map, wherein the white triangle portion is a blind zone, and the gray trapezoidal portion is a visible domain.
  • the setting parameters can also be adjusted to control the camera, including: receiving parameter adjustment information containing a change value; determining an adjustment parameter from the parameter adjustment information and transmitting it to the camera so that the camera adjusts accordingly; and updating the acquired setting parameters with the adjustment parameter, returning to step 302 in the flow of FIG. 3.
  • the parameter adjustment information may be set as needed, for example including the adjusted focal length; correspondingly, determining the adjustment parameter from the parameter adjustment information includes obtaining a horizontal field of view angle and a vertical field of view angle by focal-length conversion, and using the converted angles as the adjustment parameters.
  • the focal length determines the values of the horizontal and vertical fields of view; after the focal length is determined, the horizontal and vertical fields of view can be calculated by combining it with certain fixed setting parameters of the camera.
  • the parameter adjustment information may also include an angle by which to rotate the camera horizontally; determining the adjustment parameter from the parameter adjustment information then includes calculating the corresponding horizontal angle from the rotation angle and using the calculated horizontal angle as the adjustment parameter. Specifically, assuming due east is the 0-degree angle and clockwise is the positive direction, if the current horizontal angle is 90 degrees (facing due south) and the parameter adjustment information includes rotating the camera 90 degrees clockwise in the horizontal direction, the calculated horizontal angle is 180 degrees.
  • As shown in the figure, the user can operate each eye button to adjust the setting parameters of the camera;
  • the rightmost eye button is the focus adjustment button: for example, dragging it to the left adjusts to a larger focal length and dragging it to the right to a smaller one;
  • the middle eye button is the horizontal angle adjustment button: the user can rotate the eye button clockwise or counterclockwise;
  • the leftmost eye button is the vertical direction adjustment button: the user can drag the eye button up or down to rotate the camera in the vertical direction.
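The two parameter conversions described above can be sketched as follows (the pinhole formula and the sensor dimensions are assumptions for illustration; the patent only states that the focal length is combined with other setting parameters):

```python
import math

def fov_from_focal(focal_mm, sensor_w_mm, sensor_h_mm):
    # pinhole-camera conversion: fov = 2 * atan(sensor_dim / (2 * focal))
    hfov = 2 * math.degrees(math.atan(sensor_w_mm / (2 * focal_mm)))
    vfov = 2 * math.degrees(math.atan(sensor_h_mm / (2 * focal_mm)))
    return hfov, vfov

def new_horizontal_angle(current_deg, rotate_deg):
    # due east = 0 degrees, clockwise positive, wrapped into [0, 360)
    return (current_deg + rotate_deg) % 360

h, v = fov_from_focal(3.2, 6.4, 4.8)   # hypothetical 6.4 x 4.8 mm sensor
print(round(h, 1))                     # 90.0
print(new_horizontal_angle(90, 90))    # 180, as in the example above
```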
  • in summary, the setting parameters are acquired from the camera, the position coordinates of the visible field in the horizontal direction are calculated from the setting parameters, the trapezoidal area is generated by combining the position coordinates of the visible field, and the generated trapezoidal area is superimposed and displayed on the electronic map.
  • with the solution of the invention, not only can the position of the camera be displayed on the electronic map, but its visible field can be displayed on the map at the same time; the visible field is thus shown visually without the user having to view the corresponding video data in the monitoring client, which simplifies operation and enriches the information of the electronic map, further satisfying user demand.
  • a fourth embodiment of the present invention relates to a method for extracting a surveillance video.
  • Fig. 8 is a flow chart of the method for extracting the surveillance video. The solution is based on the lens visible field and is used to search for information such as the suspect's location, escape route, and hiding area.
  • the lens visible field is intersected with the incident point, escape route, or incident area set by the user to determine whether the lens illuminated these areas within the specified time period.
  • the video system refers to a software system that manages a large number of lenses, stores and forwards the video of the lens, and provides the user with real-time lens monitoring, video playback, and pan/tilt control functions.
  • PTZ control refers to operations such as up, down, left and right rotation and lens focal length control of the dome camera.
  • the general flow of the method for extracting the surveillance video is: the visual field collection and storage of the lens; the user-specified query condition; the visual domain search and the video extraction.
  • Lens visible field collection and storage: by visualizing the visible field of the lens, the visible field shown in Figure 9 can be obtained on the GIS map.
  • the video system stores a large amount of direction information of each time period of the lens, and can restore any lens and the visible field at any time point according to the information.
  • User-specified query conditions On the GIS map, the user can calibrate a point (as shown in Figure 10), or a line (as shown in Figure 11), or an area (as shown in Figure 12), and then Specify a time period, such as: 10:00 to 12:00 on July 10, 2013, as shown in Figure 13.
  • 3) Visible field search. The visible fields of the lenses within a certain range of the query position during the specified time period (i.e., the query period) are retrieved and intersected with the point, line, or polygon specified by the user (i.e., the intersection relationship is calculated). If a lens visible field intersects the user-specified point, line, or surface during this period, that lens covered the user-specified position (i.e., the target position). The calculation yields a lens list (i.e., the set of cameras) and sub-periods (i.e., the illumination periods corresponding to the visible fields), for example:
  • Lens 1: 10:35:00-10:41:00 and 11:21:06-11:34:56
  • Lens 2: 10:00:00-11:30:00
  • Lens 3: ...
  • This result is represented on the GIS map as shown in Fig. 14, and the resulting lens list is shown in Fig. 15.
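The sub-periods in the list above are simply the overlap of each stored illumination period with the query period. One way to sketch that interval intersection (the times and values below are illustrative, not taken from the patent):

```python
def overlap(a, b):
    """Intersection of two (start, end) intervals, or None if they are disjoint."""
    start, end = max(a[0], b[0]), min(a[1], b[1])
    return (start, end) if start < end else None

# Illumination periods of one lens, in minutes since midnight (hypothetical values).
illumination = [(635, 641), (681, 695), (400, 500)]
query = (600, 720)  # 10:00 to 12:00

# Keep only the parts of each illumination period that fall inside the query period.
sub_periods = [p for p in (overlap(q, query) for q in illumination) if p]
```

Only lenses with a non-empty `sub_periods` list would proceed to the geometric intersection step.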
  • The intersection of a visible field with the user-specified target point (or line or surface) uses the spatial computing capability of the GIS engine: two graphic objects are passed to the GIS engine, which returns whether the two graphics intersect.
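The pattern (pass two geometries, get back a boolean) can be sketched in miniature with a hand-rolled ray-casting containment test standing in for the GIS engine's spatial computation. The trapezoid coordinates and target points below are invented for illustration; a real deployment would call its GIS engine instead:

```python
def point_in_polygon(pt, poly):
    """Ray-casting test: does the polygon (list of (x, y) vertices) contain the point?"""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this polygon edge crosses the horizontal line at y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Hypothetical trapezoidal visible field and user-calibrated target points on the map.
visible_field = [(2, 1), (6, 1), (5, 4), (3, 4)]
hit = point_in_polygon((4, 2), visible_field)     # target inside the field
miss = point_in_polygon((20, 20), visible_field)  # target far outside the field
```

Lines and polygons require the corresponding segment-intersection and polygon-overlap predicates, which is exactly what delegating to a GIS engine avoids reimplementing.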
  • Beyond finding the lenses that covered the target point (or line or surface), the calculation further determines the sub-periods in which they did so (because a lens rotates, there may be many smaller time slices within the specified period during which its field of view is on the set target). This results in fewer extracted recordings and a shorter recording duration to process manually, relieving the workload of staff. To avoid overly slow calculation when there are too many lenses, only the lens visible fields within a certain range around the target point (or line or surface) are taken into the calculation, excluding the vast majority of lenses in the system. Using only the lenses around the target point for the visible field intersection calculation excludes a large number of lenses and computes with only a small subset, greatly reducing the calculation amount and making the response faster.
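The range pre-filter described above is a cheap distance check run before any geometry intersection. A sketch under assumed camera records (the positions, field names, and radius below are illustrative only):

```python
import math

def nearby(cameras, target, radius):
    """Keep only cameras whose mounting point lies within radius of the target point."""
    tx, ty = target
    return [c for c in cameras if math.hypot(c["x"] - tx, c["y"] - ty) <= radius]

# Hypothetical camera mounting points on the map.
cameras = [
    {"id": "lens-1", "x": 10, "y": 10},
    {"id": "lens-2", "x": 11, "y": 9},
    {"id": "lens-3", "x": 500, "y": 40},
]
candidates = nearby(cameras, target=(10, 10), radius=50)
```

Only the surviving `candidates` would then go through the more expensive visible-field intersection calculation.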
  • 4) Video extraction. After the lens list and illumination periods are obtained, a video extraction request can be sent directly to the video system to obtain the video files of these lenses for the corresponding time periods.
  • At present, video intelligent analysis algorithms have made no practical breakthrough and cannot achieve machine recognition. The traditional video surveillance method is, after an event has occurred, to manually view long recordings near the incident point to determine which lenses covered the incident area, the incident point, and the suspect's walking route. In urban areas where video lenses are very dense, this often costs a great deal of manpower and time.
  • The video extraction method of the invention can save the user a large amount of time screening lenses by directly extracting the video that meets the conditions, greatly reducing the recordings to be checked manually and improving the work efficiency of case handlers. It brings the powerful computing capability of computers into video screening.
  • The method embodiments of the present invention can all be implemented in software, hardware, firmware, and the like. Regardless of whether the invention is implemented in software, hardware, or firmware, the instruction code can be stored in any type of computer-accessible memory (for example, permanent or modifiable, volatile or non-volatile, solid-state or non-solid-state, fixed or replaceable media, and so on). Similarly, the memory may be, for example, Programmable Array Logic ("PAL"), Random Access Memory ("RAM"), Programmable Read-Only Memory ("PROM"), Read-Only Memory ("ROM"), Electrically Erasable Programmable ROM ("EEPROM"), a magnetic disk, an optical disc, a Digital Versatile Disc ("DVD"), and so on.
  • A fifth embodiment of the present invention relates to an apparatus for extracting surveillance videos. FIG. 16 shows the structure of this apparatus.
  • Specifically, as shown in Fig. 16, the apparatus for extracting surveillance videos includes the following units: a first visible field collection unit, for collecting and storing the lens visible fields of cameras and the illumination period corresponding to each lens visible field.
  • A first visible field extraction unit, for extracting the lens visible fields corresponding to the illumination periods that intersect the query period.
  • An intersection calculation unit, for calculating the intersection relationship between the extracted lens visible fields and the target position.
  • A set obtaining unit, for obtaining the set of cameras corresponding to the lens visible fields that intersect the target position.
  • A first video extraction unit, for extracting the video captured by each camera in the set of cameras according to its illumination periods.
  • In this embodiment, the apparatus further includes the following unit: a parameter obtaining unit, for obtaining the query period and the target position from an input device.
  • A sixth embodiment of the present invention relates to an apparatus for extracting surveillance videos.
  • Figure 17 is a block diagram showing the structure of the apparatus for extracting the surveillance video.
  • Specifically, as shown in Fig. 17, the apparatus for extracting surveillance videos includes: a second visible field collection unit, for collecting and storing the lens visible fields of cameras.
  • the parameter obtaining unit is configured to obtain a query time period and a target position from the input device.
  • The second visible field extraction unit is configured to extract the lens visible field of each camera during the query period.
  • The intersection calculation unit is configured to calculate the intersection relationship between the extracted lens visible fields and the target position.
  • The set obtaining unit is configured to obtain the set of cameras corresponding to the lens visible fields that intersect the target position.
  • the second video extracting unit is configured to extract a video captured by each camera in the camera set during the query period.
  • The collected and stored lens visible field of a camera consists of all the shootable areas of that camera within its variable range.
  • It should be noted that each unit mentioned in the device embodiments of the present invention is a logical unit. Physically, a logical unit may be one physical unit, a part of a physical unit, or a combination of multiple physical units. The physical implementation of these logical units is not in itself the most important point; rather, the combination of the functions they implement is the key to solving the technical problem raised by the present invention. In addition, in order to highlight the innovative part of the present invention, the above device embodiments do not introduce units that are less closely related to solving that technical problem.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Library & Information Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Studio Devices (AREA)

Abstract

The present invention relates to the field of video surveillance and discloses a method and apparatus for extracting surveillance videos. In the present invention, the method for extracting surveillance videos comprises the following steps: collecting and storing the lens visible fields of cameras and the illumination period corresponding to each lens visible field; extracting the lens visible fields corresponding to the illumination periods that intersect the query period; calculating the intersection relationship between the extracted lens visible fields and the target position; obtaining the set of cameras corresponding to the lens visible fields that intersect the target position; and extracting the video captured by each camera in the set according to its illumination periods. By intersecting the target position specified by the user for a given period with the selected camera visible fields, the cameras related to the target can be found, so that qualifying video useful for the actual application can be extracted directly from the relevant cameras, reducing the effort and time spent manually screening video recordings.

Description

Method and apparatus for extracting surveillance videos
Technical Field
The present invention relates to the field of video surveillance, and in particular to a technique for extracting surveillance videos.
Background
A lens refers to a bullet-type or dome-type video surveillance camera; the direction of a bullet camera is fixed, while a dome camera can rotate and its direction is not fixed. The existing method of viewing video points based on a GIS (Geographic Information System) map works as follows: first, a point, line, or region is set on the GIS map; then, taking the point, line, or surface set in the first step as the center, a region is drawn with a preset radius and all lenses within this region are found to form a list; finally, the video of these lenses is displayed on the large screen of a video wall.
This scheme searches based on GIS space, similar to the nearby search of Baidu Maps, except that Baidu Maps searches for restaurants, banks, hotels, and the like, while the above scheme searches for lenses. However, after all lenses within the specified region are found, finding the video of a specific event within a specified time period still requires manually viewing long recordings near the incident point to determine which lenses covered the incident area, the incident point, the suspect's walking route, and so on; in urban areas where video lenses are very dense, this often costs a great deal of manpower and time.
Summary of the Invention
The purpose of the present invention is to provide a method and apparatus for extracting surveillance videos, which intersect the target position specified by the user with the selected camera visible fields to find the cameras related to the target, so that qualifying video useful for the actual application can be extracted directly from the relevant cameras, reducing the effort and time spent manually screening video recordings.
To solve the above technical problem, an embodiment of the present invention discloses a method for extracting surveillance videos, comprising the following steps:
collecting and storing the lens visible fields of cameras and the illumination period corresponding to each lens visible field;
extracting the lens visible fields corresponding to the illumination periods that intersect the query period;
calculating the intersection relationship between the extracted lens visible fields and the target position;
obtaining the set of cameras corresponding to the lens visible fields that intersect the target position;
extracting the video captured by each camera in the set of cameras according to its illumination periods.
An embodiment of the present invention also discloses a method for extracting surveillance videos, comprising the following steps:
extracting the lens visible field of each camera during the query period;
calculating the intersection relationship between the extracted lens visible fields and the target position;
obtaining the set of cameras corresponding to the lens visible fields that intersect the target position;
extracting the video captured during the query period by each camera in the set of cameras.
An embodiment of the present invention also discloses an apparatus for extracting surveillance videos, comprising the following units:
a first visible field collection unit, for collecting and storing the lens visible fields of cameras and the illumination period corresponding to each lens visible field;
a first visible field extraction unit, for extracting the lens visible fields corresponding to the illumination periods that intersect the query period;
an intersection calculation unit, for calculating the intersection relationship between the extracted lens visible fields and the target position;
a set obtaining unit, for obtaining the set of cameras corresponding to the lens visible fields that intersect the target position;
a first video extraction unit, for extracting the video captured by each camera in the set of cameras according to its illumination periods.
An embodiment of the present invention also discloses an apparatus for extracting surveillance videos, comprising the following units:
a second visible field extraction unit, for extracting the lens visible field of each camera during the query period;
an intersection calculation unit, for calculating the intersection relationship between the extracted lens visible fields and the target position;
a set obtaining unit, for obtaining the set of cameras corresponding to the lens visible fields that intersect the target position;
a second video extraction unit, for extracting the video captured during the query period by each camera in the camera set.
Compared with the prior art, the main differences and effects of the embodiments of the present invention are as follows: by intersecting the target position specified by the user with the selected camera visible fields, the cameras related to the target can be found, so that qualifying video useful for the actual application can be extracted directly from the relevant cameras, reducing the effort and time spent manually screening video recordings. Meanwhile, since the lens visible field of a camera changes during the query period, sub-periods smaller than the specified period in which the target position was actually captured can be found more precisely from the illumination periods, and the video captured during those sub-periods can be extracted, so that even less recorded video is finally extracted, further improving the work efficiency of staff, for example in criminal investigation.
Traditional video surveillance only controls a camera dome in one direction; it does not obtain the current direction information back from the dome and build applications on that information. The present invention innovatively proposes this reverse usage: by intersecting the target position specified by the user with the selected camera visible fields, the cameras related to the target are found, and qualifying video useful for the actual application is extracted directly from them, greatly reducing the effort and time of manually screening video recordings and improving the work efficiency of case handlers, for example in criminal investigation.
Further, collecting and storing all shootable areas within a camera's variable range as that camera's lens visible field removes the calculation needed to filter the lens visible fields corresponding to different illumination periods, thereby reducing the calculation amount of the whole flow.
Brief Description of the Drawings
Fig. 1 is a flow chart of a method for extracting surveillance videos in the first embodiment of the present invention;
Fig. 2 is a flow chart of a method for extracting surveillance videos in the second embodiment of the present invention;
Fig. 3 is a flow chart of a method for collecting and storing visible fields in the third embodiment of the present invention;
Fig. 4 is an actual orientation diagram of a camera collecting video data in the third embodiment of the present invention;
Fig. 5 is a flow chart of calculating the position coordinates of the visible field in the third embodiment of the present invention;
Fig. 6 is a first schematic diagram showing the camera lens visible field in the third embodiment of the present invention;
Fig. 7 is a second schematic diagram showing the camera lens visible field in the third embodiment of the present invention;
Fig. 8 is a flow chart of a method for extracting surveillance videos in the fourth embodiment of the present invention;
Fig. 9 is a schematic diagram of lens visible fields on a GIS map in the fourth embodiment of the present invention;
Fig. 10 is a schematic diagram of calibrating a target position on a GIS map in the fourth embodiment of the present invention;
Fig. 11 is a schematic diagram of calibrating a target position on a GIS map in the fourth embodiment of the present invention;
Fig. 12 is a schematic diagram of calibrating a target position on a GIS map in the fourth embodiment of the present invention;
Fig. 13 is a schematic diagram of specifying a query period in the fourth embodiment of the present invention;
Fig. 14 is a schematic diagram of the camera set represented on a GIS map in the fourth embodiment of the present invention;
Fig. 15 is a list of the camera set in the fourth embodiment of the present invention;
Fig. 16 is a structural diagram of an apparatus for extracting surveillance videos in the fifth embodiment of the present invention;
Fig. 17 is a structural diagram of an apparatus for extracting surveillance videos in the sixth embodiment of the present invention.
Detailed Description
In the following description, many technical details are presented to help the reader better understand the present application. However, a person of ordinary skill in the art will understand that the technical solutions claimed in the claims of the present application can be implemented even without these technical details and with various changes and modifications based on the following embodiments.
To make the purpose, technical solutions, and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
A first embodiment of the present invention relates to a method for extracting surveillance videos. Fig. 1 is a flow chart of this method.
Specifically, as shown in Fig. 1, the method includes the following steps:
In step 101, the lens visible fields of cameras and the illumination period corresponding to each lens visible field are collected and stored. Here, the visible field is the area range that a user can see clearly through a lens when the lens covers a certain area. Before this step, the method further includes the following step: obtaining the query period and the target position from an input device. In practical applications, the target position in the present invention refers to the target point, line, or surface that the user selects on an electronic map for querying.
The flow then proceeds to step 102, in which the lens visible fields corresponding to the illumination periods that intersect the query period are extracted. In other modes of the present invention, a larger query region containing the target position may be specified first; the lens visible fields are first filtered according to this query region, and the extraction by query period is then performed on the filtered lens visible fields, to reduce the calculation amount.
The flow then proceeds to step 103, in which the intersection relationship between the extracted lens visible fields and the target position is calculated. In this embodiment, the intersection calculation in this step is implemented by using the spatial computing capability of a geographic information system engine: two graphic objects are input to the engine, and the engine returns the intersection result of the two graphics. It can be understood that in other embodiments of the present invention the intersection calculation may be implemented in other ways, and is not limited to this.
The flow then proceeds to step 104, in which the set of cameras corresponding to the lens visible fields that intersect the target position is obtained.
The flow then proceeds to step 105, in which the video captured by each camera in the set of cameras is extracted according to its illumination periods. The flow then ends.
By intersecting the target position specified by the user with the selected camera visible fields, the cameras related to the target can be found, so that qualifying video useful for the actual application can be extracted directly from the relevant cameras, reducing the effort and time spent manually screening video recordings. Meanwhile, since the lens visible field of a camera changes during the query period, sub-periods smaller than the specified period in which the target position was actually captured can be found more precisely from the illumination periods and the video captured during those sub-periods extracted, so that even less recorded video is finally extracted, further improving the work efficiency of staff, for example in criminal investigation.
A second embodiment of the present invention relates to a method for extracting surveillance videos. Fig. 2 is a flow chart of this method.
Specifically, as shown in Fig. 2, the method includes the following steps:
In step 201, the lens visible field of each camera during the query period is extracted. Before this step, the method further includes the following step: collecting and storing the lens visible fields of the cameras. In this embodiment, the collected and stored lens visible field is all the shootable areas of the corresponding camera within that camera's variable range. Collecting and storing all shootable areas within a camera's variable range as that camera's lens visible field removes the calculation needed to filter the lens visible fields corresponding to different illumination periods, thereby reducing the calculation amount of the whole flow. In other embodiments of the present invention, a larger query region containing the target position may also be specified first; the lens visible fields are first filtered according to this query region, and the extraction by query period is then performed on the filtered lens visible fields, to reduce the calculation amount.
The flow then proceeds to step 202, in which the intersection relationship between the extracted lens visible fields and the target position is calculated. In this step, the intersection calculation is implemented by using the spatial computing capability of the engine of a geographic information system: two graphic objects are input to the engine, and the engine returns the intersection result of the two graphics. It can be understood that in other embodiments of the present invention the intersection calculation may be implemented in other ways, and is not limited to this.
The flow then proceeds to step 203, in which the set of cameras corresponding to the lens visible fields that intersect the target position is obtained.
The flow then proceeds to step 204, in which the video captured during the query period by each camera in the set of cameras is extracted.
In this embodiment, before step 201, the method further includes the following step: obtaining the query period and the target position from an input device. In practical applications, the target position refers to the target point, line, or surface that the user selects on an electronic map for querying.
Traditional video surveillance only controls a camera dome in one direction; it does not obtain the current direction information back from the dome and build applications on that information. The present invention innovatively proposes this reverse usage: by intersecting the target position specified by the user with the selected camera visible fields, the cameras related to the target are found, and qualifying video useful for the actual application is extracted directly from them, greatly reducing the effort and time of manually screening video recordings and improving the work efficiency of case handlers, for example in criminal investigation.
A third embodiment of the present invention relates to a method for collecting and storing visible fields. Fig. 3 is a flow chart of this method.
Specifically, as shown in Fig. 3, the method includes the following steps:
In step 301, setting parameters are obtained from the camera. In a specific implementation, the visible field display system may use the transmission channel between the camera and the monitoring client to access the camera; when needed, the camera sends its setting parameters to the visible field display system through this channel. The specific content of the setting parameters can be selected as needed.
The flow then proceeds to step 302, in which geometric calculation is performed from the setting parameters, the camera pole height, and the camera point information to obtain the position coordinates of the camera's horizontal visible range. Besides obtaining setting parameters from the camera, the visible field display system can also obtain other parameters for the calculation, including the camera pole height and the camera point information, the latter being the position coordinates of the camera's pole; the pole height and point information may also be uploaded to the visible field display system by staff. The coordinate positions of the visible range may specifically be the coordinates of the edge points of the visible field.
The flow then proceeds to step 303, in which the visible field is generated from the combination of the position coordinates of the visible range.
The flow then proceeds to step 304, in which the generated visible field is superimposed on the electronic map. When superimposing, the visible field can be marked distinctly, for example with a colored transparent region.
Not only the visible field but also the blind zone of the camera can be displayed on the electronic map. Specifically, the method includes: performing geometric calculation from the setting parameters, the camera pole height, and the camera point information to obtain the position coordinates of the camera's horizontal blind zone range; generating the blind zone from the combination of these position coordinates; and superimposing the generated blind zone on the electronic map.
In the above flow, after the setting parameters, camera pole height, and camera point information are obtained, the position coordinates of the camera's horizontal visible range and blind zone range can be calculated through geometric calculation. There are many such calculation methods, which can be set as needed; an example is given below:
In this embodiment, the visible field is a trapezoidal region and the blind zone is a triangular region. The setting parameters include: the horizontal field-of-view angle, the vertical field-of-view angle, the depression angle T, and the horizontal angle P. The four vertices of the trapezoidal region are denoted d2, d3, d5, and d6; the triangular region is then the region formed by M, d2, and d3, where M is the position of the camera pole. Fig. 4 is the actual orientation diagram of the camera collecting video data in this embodiment. Fig. 5 is a flow chart of calculating the position coordinates of the visible field in this embodiment, which includes the following steps:
In step 501, the height of the triangle and the height of the trapezoid are calculated from the depression angle, the vertical field-of-view angle, and the pole height.
1) Height of the triangle:
First find the angle a: angle a = 90 - depression angle - half of the vertical field-of-view angle; the depression angle is the angle between the bisector of the vertical field-of-view angle and the ground, as shown in the figure;
then find the height of the triangle: height of the triangle = pole height * tan a.
2) Height of the trapezoid:
First find the angle b: angle b = depression angle - half of the vertical field-of-view angle;
then find the height of the trapezoid: height of the trapezoid = (pole height * cot b) - height of the triangle.
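The two height formulas of step 501 can be written out directly. This sketch covers the stated geometry only; the function and variable names are ours, and the depression angle must exceed half the vertical field of view for the far edge to be finite:

```python
import math

def field_depths(pole_height, depression_deg, vfov_deg):
    """Blind-zone depth (triangle height) and visible-field depth (trapezoid height).

    depression_deg is the angle between the bisector of the vertical
    field-of-view angle and the ground, as in the patent's Fig. 4.
    """
    # Angle a: between the pole and the steeper (near) boundary ray.
    a = math.radians(90 - depression_deg - vfov_deg / 2)
    # Angle b: between the shallower (far) boundary ray and the ground.
    b = math.radians(depression_deg - vfov_deg / 2)
    triangle = pole_height * math.tan(a)              # height of the triangle
    trapezoid = pole_height / math.tan(b) - triangle  # height of the trapezoid
    return triangle, trapezoid

# Illustrative values: a 10 m pole, 45-degree depression, 30-degree vertical FOV.
triangle_h, trapezoid_h = field_depths(pole_height=10, depression_deg=45, vfov_deg=30)
```

With these numbers the blind zone extends about 5.77 m from the pole and the visible field a further 11.55 m, matching the cot-based formula above since cot b = 1 / tan b.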
The flow then proceeds to step 502, in which r1 and r2, the half-lengths of the upper and lower bases of the trapezoid, are calculated from the height of the triangle, the height of the trapezoid, and the horizontal field-of-view angle:
r1 = height of the triangle * tan(half of the horizontal field-of-view angle);
r2 = (height of the triangle + height of the trapezoid) * tan(half of the horizontal field-of-view angle).
The flow then proceeds to step 503. Taking the bisector of the horizontal field-of-view angle as the x-axis, the coordinates of d1 are calculated from the camera point information and the height of the triangle, where d1 is the intersection of the bisector of the horizontal field-of-view angle with the upper base of the trapezoid; the two vertices d2 and d3 of the upper base are calculated from d1 and the horizontal field-of-view angle; d2 and d3 are then converted to coordinates oriented along the horizontal angle P.
In Fig. 5, d1 and d4 are the intersections of the bisector of the horizontal field-of-view angle with the two parallel sides of the trapezoid. The camera point information (mapPoint) is known, comprising the camera abscissa mapPoint.x and ordinate mapPoint.y. First calculate the coordinates of d1, whose abscissa is d1.x and ordinate is d1.y: d1.x = mapPoint.x + height of the triangle; d1.y = mapPoint.y. With d1 calculated, the coordinates of d2 and d3 can be calculated from the triangle formulas. The d2 and d3 obtained at this point are calculated with the bisector of the horizontal field-of-view angle as the x-axis, whereas the camera has a 0-degree reference and its current orientation relative to that reference is the horizontal angle; therefore, d2 and d3 need to be converted to coordinates oriented along the horizontal angle P, a geometric coordinate conversion easily implemented by those skilled in the art and not detailed here.
The flow then proceeds to step 504, in which the coordinates of d4 are calculated from the camera point information, the height of the triangle, and the height of the trapezoid, where d4 is the intersection of the bisector of the horizontal field-of-view angle with the lower base of the trapezoid; the two vertices d5 and d6 of the lower base are calculated from d4 and the horizontal field-of-view angle; d5 and d6 are then converted to coordinates oriented along the horizontal angle P.
Similarly to the calculation of d2 and d3 in step 503, d4 can be calculated from mapPoint, the height of the triangle, and the height of the trapezoid, and the coordinates of d5 and d6 can then be calculated from the triangle formulas, which is not repeated here. After that, the blind zone and the visible field can be generated on the electronic map, specifically including:
A. Generating the triangular region (blind zone):
The triangular region is generated by combining the camera point information with points d2 and d3.
B. Generating the trapezoidal region (visible field):
The trapezoidal region is generated by combining points d2, d3, d5, and d6.
C. Finally, the triangular region and the trapezoidal region are combined and loaded onto the electronic map.
Fig. 6 is an example display of a camera's visible field: the camera is shown at its actual position on the electronic map, the white triangular part being the blind zone and the gray trapezoidal part the visible field.
After the generated blind zone and visible field are superimposed on the electronic map, the setting parameters can also be adjusted to control the camera, specifically including: receiving parameter adjustment information containing changed values; determining adjustment parameters from the parameter adjustment information, sending the adjustment parameters to the camera so that it adjusts accordingly, updating the obtained setting parameters according to the adjustment parameters, and returning to step 302 of the flow in Fig. 3 so that the blind zone and visible field on the electronic map are adjusted at the same time.
The parameter adjustment information can be set as needed, for example containing the adjusted zoom factor; correspondingly, determining the adjustment parameters from the parameter adjustment information includes: converting the zoom factor into the horizontal field-of-view angle and the vertical field-of-view angle, and using these converted angles as the adjustment parameters. The zoom factor determines the values of the horizontal and vertical field-of-view angles; once it is determined, the two angles can be calculated in combination with certain setting parameters, including the camera's focal length and the horizontal width and height of the camera's image sensor (CCD, Charge-Coupled Device); this calculation is prior art and is not detailed here.
As another example, the parameter adjustment information contains the angle by which the camera is rotated horizontally; determining the adjustment parameters then includes calculating the corresponding horizontal angle from that rotation angle and using it as the adjustment parameter. Specifically, assume due east is the 0-degree angle, clockwise is the positive direction, and the current horizontal angle is 90 degrees, i.e., facing due south; if the parameter adjustment information specifies rotating the camera 90 degrees clockwise in the horizontal direction, the calculated horizontal angle is 180 degrees.
Fig. 7 shows three eye buttons that the user can operate to adjust the camera's setting parameters: the rightmost eye button adjusts the zoom factor, for example dragging left to increase it and dragging right to decrease it; the middle eye button adjusts the horizontal angle and can be rotated clockwise or counterclockwise; the leftmost eye button adjusts the vertical direction and can be dragged up or down to rotate the camera vertically.
In the present invention, setting parameters are obtained from the camera, the position coordinates of the camera's horizontal visible field are calculated from them, and the trapezoidal region is generated by combining those coordinates; the generated trapezoidal region is then superimposed on the electronic map. With the solution of the present invention, not only can the camera's position be shown on the electronic map, but its visible field can be shown on the map at the same time, so that the visible field is displayed intuitively without viewing the corresponding video data at a monitoring client, simplifying operation, enriching the information of the electronic map, and further satisfying demand.
Moreover, with the solution of the present invention, the camera can be controlled remotely by operating on the electronic map, without adjusting camera parameters on site, which simplifies operation.
A fourth embodiment of the present invention relates to a method for extracting surveillance videos. Fig. 8 is a flow chart of this method.
This solution is based on the lens visible field: when the user has obtained search information such as locations where a suspect appeared, an escape route, or a hiding area, and needs to find in the video system the recorded clips in which the suspect appears, the visible fields are intersected with the incident point, escape route, or incident area set by the user to determine whether a lens covered these areas within the specified time period. Video can then be extracted from those lenses directly, saving the large amount of time of manually checking lenses one by one. Here, a video system is a software system that manages a large number of lenses, stores and forwards their video, and provides users with functions such as real-time lens monitoring, recording playback, and PTZ control. PTZ control refers to operations such as panning and tilting a dome camera and controlling the lens focal length.
Specifically, as shown in Fig. 8, the general flow of the method for extracting surveillance videos is: collection and storage of lens visible fields; user-specified query conditions; visible field search and video extraction.
1) Lens visible field collection and storage. By collecting and modeling the visible fields of the lenses, the visible fields shown in Fig. 9 can be obtained on the GIS map. The video system stores a large amount of direction information for each lens over each time period, from which the visible field of any lens at any point in time can be restored.
2) User-specified query conditions: on the GIS map, the user can calibrate a point (as shown in Fig. 10), a line (as shown in Fig. 11), or a region (as shown in Fig. 12), and then specify a time period, for example 10:00 to 12:00 on July 10, 2013, as shown in Fig. 13.
3) Visible field search. The visible fields of the lenses within a certain range of the query position during the specified time period (i.e., the query period) are taken out and intersected with the point, line, or surface specified by the user (i.e., the intersection relationship is calculated). If a lens visible field intersects the user-specified point, line, or surface during this period, that lens covered the user-specified position (i.e., the target position). The calculation yields a lens list (i.e., the set of cameras) and sub-periods (i.e., the illumination periods corresponding to the visible fields), for example:
Lens 1: 10:35:00-10:41:00 and 11:21:06-11:34:56
Lens 2: 10:00:00-11:30:00
Lens 3: ...
This is represented on the GIS map as shown in Fig. 14, and the resulting lens list is shown in Fig. 15. The intersection of a visible field with the user-specified target point (or line or surface) uses the spatial computing capability of the GIS engine: two graphic objects are passed to the GIS engine, which returns whether the two graphics intersect.
Beyond finding the lenses that covered the target point (or line or surface), the calculation further determines the sub-periods in which they did so (because a lens rotates, there may be many smaller time slices within the specified period during which its field of view is on the set target). This results in fewer extracted recordings and a shorter recording duration to process manually, relieving the workload of staff. To avoid overly slow calculation when there are too many lenses, only the lens visible fields within a certain range around the target point (or line or surface) are taken into the calculation, excluding the vast majority of lenses in the system. Using only the lenses around the target point for the visible field intersection calculation excludes a large number of lenses and computes with only a small subset, greatly reducing the calculation amount and making the response faster.
4) Video extraction. After the lens list and illumination periods are obtained, a video extraction request can be sent directly to the video system to obtain the video files of these lenses for the corresponding periods.
At present, video intelligent analysis algorithms have made no practical breakthrough and cannot achieve machine recognition. The traditional method of video surveillance is, after an event has occurred, to manually view long recordings near the incident point to determine which lenses covered the incident area, the incident point, and the suspect's walking route. In urban areas where video lenses are very dense, this often costs a great deal of manpower and time. The video extraction method of the present invention can save the user a large amount of time screening lenses by directly extracting the qualifying recordings, greatly reducing the recordings to be checked manually and improving the work efficiency of case handlers, bringing the powerful computing capability of computers into video screening.
The method embodiments of the present invention can all be implemented in software, hardware, firmware, and the like. Regardless of whether the invention is implemented in software, hardware, or firmware, the instruction code can be stored in any type of computer-accessible memory (for example, permanent or modifiable, volatile or non-volatile, solid-state or non-solid-state, fixed or replaceable media, and so on). Similarly, the memory may be, for example, Programmable Array Logic ("PAL"), Random Access Memory ("RAM"), Programmable Read-Only Memory ("PROM"), Read-Only Memory ("ROM"), Electrically Erasable Programmable ROM ("EEPROM"), a magnetic disk, an optical disc, a Digital Versatile Disc ("DVD"), and so on.
A fifth embodiment of the present invention relates to an apparatus for extracting surveillance videos. Fig. 16 is a structural diagram of this apparatus.
Specifically, as shown in Fig. 16, the apparatus for extracting surveillance videos includes the following units:
a first visible field collection unit, for collecting and storing the lens visible fields of cameras and the illumination period corresponding to each lens visible field;
a first visible field extraction unit, for extracting the lens visible fields corresponding to the illumination periods that intersect the query period;
an intersection calculation unit, for calculating the intersection relationship between the extracted lens visible fields and the target position;
a set obtaining unit, for obtaining the set of cameras corresponding to the lens visible fields that intersect the target position;
a first video extraction unit, for extracting the video captured by each camera in the set of cameras according to its illumination periods.
In this embodiment, the apparatus further includes the following unit:
a parameter obtaining unit, for obtaining the query period and the target position from an input device.
This embodiment can be implemented in cooperation with the first embodiment. The relevant technical details mentioned in the first embodiment remain valid in this embodiment and, to reduce repetition, are not repeated here; correspondingly, the relevant technical details mentioned in this embodiment can also be applied in the first embodiment.
A sixth embodiment of the present invention relates to an apparatus for extracting surveillance videos. Fig. 17 is a structural diagram of this apparatus.
Specifically, as shown in Fig. 17, the apparatus for extracting surveillance videos includes:
a second visible field collection unit, for collecting and storing the lens visible fields of cameras;
a parameter obtaining unit, for obtaining the query period and the target position from an input device;
a second visible field extraction unit, for extracting the lens visible field of each camera during the query period;
an intersection calculation unit, for calculating the intersection relationship between the extracted lens visible fields and the target position;
a set obtaining unit, for obtaining the set of cameras corresponding to the lens visible fields that intersect the target position;
a second video extraction unit, for extracting the video captured during the query period by each camera in the camera set.
In this embodiment, the collected and stored lens visible field is all the shootable areas of the corresponding camera within that camera's variable range.
This embodiment can be implemented in cooperation with the second embodiment. The relevant technical details mentioned in the second embodiment remain valid in this embodiment and, to reduce repetition, are not repeated here; correspondingly, the relevant technical details mentioned in this embodiment can also be applied in the second embodiment.
It should be noted that each unit mentioned in the device embodiments of the present invention is a logical unit. Physically, a logical unit may be one physical unit, a part of a physical unit, or a combination of multiple physical units; the physical implementation of these logical units is not in itself the most important point, while the combination of the functions they implement is the key to solving the technical problem raised by the present invention. In addition, in order to highlight the innovative part of the present invention, the above device embodiments do not introduce units that are less closely related to solving the technical problem raised by the present invention.
It should be noted that in the claims and description of this patent, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a" does not exclude the existence of additional identical elements in the process, method, article, or device that includes that element.
Although the present invention has been illustrated and described with reference to certain preferred embodiments thereof, those of ordinary skill in the art will understand that various changes in form and detail may be made without departing from the spirit and scope of the invention.

Claims

1. A method for extracting surveillance videos, comprising the following steps:
collecting and storing lens visible fields of cameras and an illumination period corresponding to each of the lens visible fields;
extracting the lens visible fields corresponding to the illumination periods that intersect a query period;
calculating an intersection relationship between the extracted lens visible fields and a target position;
obtaining a set of cameras corresponding to the lens visible fields that intersect the target position;
extracting video captured by each camera in the set of cameras according to its illumination periods.
2. A method for extracting surveillance videos, comprising the following steps:
extracting a lens visible field of each camera during a query period;
calculating an intersection relationship between the extracted lens visible fields and a target position;
obtaining a set of cameras corresponding to the lens visible fields that intersect the target position;
extracting video captured during the query period by each camera in the set of cameras.
3. The method for extracting surveillance videos according to claim 2, wherein before the step of extracting the lens visible field of each camera during the query period, the method further comprises the following step: collecting and storing the lens visible fields of the cameras.
4. The method for extracting surveillance videos according to claim 3, wherein the collected and stored lens visible field is all the shootable areas of the corresponding camera within that camera's variable range.
5. The method for extracting surveillance videos according to claim 2, wherein in the step of calculating the intersection relationship between the extracted lens visible fields of the cameras and the specified target position, the intersection calculation is implemented by using the spatial computing capability of an engine of a geographic information system: two graphic objects are input to the engine, and the engine returns the intersection result of the two graphics.
6. The method for extracting surveillance videos according to any one of claims 2 to 5, wherein before the step of extracting the lens visible field of each camera during the specified query period, the method further comprises the following step: obtaining the query period and the target position from an input device.
7. An apparatus for extracting surveillance videos, comprising the following units:
a first visible field collection unit, for collecting and storing lens visible fields of cameras and an illumination period corresponding to each of the lens visible fields;
a first visible field extraction unit, for extracting the lens visible fields corresponding to the illumination periods that intersect a query period;
an intersection calculation unit, for calculating an intersection relationship between the extracted lens visible fields and a target position;
a set obtaining unit, for obtaining a set of cameras corresponding to the lens visible fields that intersect the target position;
a first video extraction unit, for extracting video captured by each camera in the set of cameras according to its illumination periods.
8. An apparatus for extracting surveillance videos, comprising the following units:
a second visible field extraction unit, for extracting a lens visible field of each camera during a query period;
an intersection calculation unit, for calculating an intersection relationship between the extracted lens visible fields and a target position;
a set obtaining unit, for obtaining a set of cameras corresponding to the lens visible fields that intersect the target position;
a second video extraction unit, for extracting video captured during the query period by each camera in the camera set.
9. The apparatus for extracting surveillance videos according to claim 8, further comprising the following units:
a second visible field collection unit, for collecting and storing the lens visible fields of the cameras;
a parameter obtaining unit, for obtaining the query period and the target position from an input device.
10. The apparatus for extracting surveillance videos according to claim 9, wherein the collected and stored lens visible field is all the shootable areas of the corresponding camera within that camera's variable range.
PCT/CN2014/084295 2014-01-03 2014-08-13 监控录像视频的提取方法及其装置 WO2015101047A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP14876452.5A EP3091735B1 (en) 2014-01-03 2014-08-13 Method and device for extracting surveillance record videos
US15/109,756 US9736423B2 (en) 2014-01-03 2014-08-13 Method and apparatus for extracting surveillance recording videos

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410005931.0A CN104717462A (zh) 2014-01-03 2014-01-03 监控录像视频的提取方法及其装置
CN201410005931.0 2014-01-03

Publications (1)

Publication Number Publication Date
WO2015101047A1 true WO2015101047A1 (zh) 2015-07-09

Family

ID=53416360

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/084295 WO2015101047A1 (zh) 2014-01-03 2014-08-13 监控录像视频的提取方法及其装置

Country Status (4)

Country Link
US (1) US9736423B2 (zh)
EP (1) EP3091735B1 (zh)
CN (1) CN104717462A (zh)
WO (1) WO2015101047A1 (zh)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106713822A (zh) * 2015-08-14 2017-05-24 杭州海康威视数字技术股份有限公司 用于视频监控的摄像机及监控系统
CN105389375B (zh) * 2015-11-18 2018-10-02 福建师范大学 一种基于可视域的图像索引设置方法、系统及检索方法
CN106331618B (zh) * 2016-08-22 2019-07-16 浙江宇视科技有限公司 一种自动确认摄像机可视域的方法及装置
CN106803934A (zh) * 2017-02-22 2017-06-06 王培博 一种基于监控视频的儿童平安上学路监测系统
EP3804513A1 (en) * 2019-10-08 2021-04-14 New Direction Tackle Ltd. Angling system
CN111325791B (zh) * 2020-01-17 2023-01-06 中国人民解放军战略支援部队信息工程大学 基于规则格网dem的osp空间参考线通视域分析方法
CN113362392B (zh) * 2020-03-05 2024-04-23 杭州海康威视数字技术股份有限公司 可视域生成方法、装置、计算设备及存储介质
CN112732975B (zh) * 2020-12-31 2023-02-24 杭州海康威视数字技术股份有限公司 一种对象追踪方法、装置、电子设备及系统
CN113111843B (zh) * 2021-04-27 2023-12-29 北京赛博云睿智能科技有限公司 一种图像数据的远程采集方法及系统
CN116069976B (zh) * 2023-03-06 2023-09-12 南京和电科技有限公司 一种区域视频分析方法及系统

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004297298A (ja) * 2003-03-26 2004-10-21 Fuji Photo Film Co Ltd 画像表示装置
CN101883261A (zh) * 2010-05-26 2010-11-10 中国科学院自动化研究所 大范围监控场景下异常目标检测及接力跟踪的方法及系统
CN102685460A (zh) * 2012-05-17 2012-09-19 武汉大学 一种集成可量测实景影像和电子地图的视频监控巡航方法
CN102929993A (zh) * 2012-10-23 2013-02-13 常州环视高科电子科技有限公司 视频查找系统

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB0110480D0 (en) * 2001-04-28 2001-06-20 Univ Manchester Metropolitan Methods and apparatus for analysing the behaviour of a subject
JP3834641B2 (ja) * 2003-01-31 2006-10-18 国立大学法人大阪大学 カメラ映像データ管理システム、カメラ映像データ管理方法、およびプログラム
US8570373B2 (en) * 2007-06-08 2013-10-29 Cisco Technology, Inc. Tracking an object utilizing location information associated with a wireless device
CN101576926B (zh) * 2009-06-04 2011-01-26 浙江大学 一种基于地理信息系统的监控视频检索方法
CN103365848A (zh) * 2012-03-27 2013-10-23 华为技术有限公司 一种视频查询方法、装置与系统
CN103491339B (zh) * 2012-06-11 2017-11-03 华为技术有限公司 视频获取方法、设备及系统
WO2014121340A1 (en) * 2013-02-07 2014-08-14 Iomniscient Pty Ltd A surveillance system
CN103414872B (zh) * 2013-07-16 2016-05-25 南京师范大学 一种目标位置驱动ptz摄像机的方法
CN103414870B (zh) * 2013-07-16 2016-05-04 南京师范大学 一种多模式警戒分析方法

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004297298A (ja) * 2003-03-26 2004-10-21 Fuji Photo Film Co Ltd 画像表示装置
CN101883261A (zh) * 2010-05-26 2010-11-10 中国科学院自动化研究所 大范围监控场景下异常目标检测及接力跟踪的方法及系统
CN102685460A (zh) * 2012-05-17 2012-09-19 武汉大学 一种集成可量测实景影像和电子地图的视频监控巡航方法
CN102929993A (zh) * 2012-10-23 2013-02-13 常州环视高科电子科技有限公司 视频查找系统

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3091735A4 *

Also Published As

Publication number Publication date
EP3091735A1 (en) 2016-11-09
US20160323535A1 (en) 2016-11-03
CN104717462A (zh) 2015-06-17
EP3091735B1 (en) 2019-05-01
US9736423B2 (en) 2017-08-15
EP3091735A4 (en) 2016-11-16

Similar Documents

Publication Publication Date Title
WO2015101047A1 (zh) 监控录像视频的提取方法及其装置
US10949995B2 (en) Image capture direction recognition method and server, surveillance method and system and image capture device
JP6907325B2 (ja) 内部空間の3dグリッド表現からの2d間取り図の抽出
US10165179B2 (en) Method, system, and computer program product for gamifying the process of obtaining panoramic images
US10867437B2 (en) Computer vision database platform for a three-dimensional mapping system
CN102148965B (zh) 多目标跟踪特写拍摄视频监控系统
US20140152651A1 (en) Three dimensional panorama image generation systems and methods
TWI548276B (zh) 用於在現存的靜態影像中視覺化視頻的方法與電腦可讀取媒體
CN106156199B (zh) 一种视频监控图像存储检索方法
CN101511004A (zh) 一种摄像监控的方法及装置
TWI587241B (zh) Method, device and system for generating two - dimensional floor plan
WO2007124664A1 (fr) Appareil et procédé permettant d'obtenir une représentation panoramique contenant des informations de position et procédé de création, d'annotation et d'affichage d'un service de cartographie électrique panoramique
CN104038740A (zh) 一种遮蔽ptz监控摄像机隐私区域的方法及装置
KR101778744B1 (ko) 다중 카메라 입력의 합성을 통한 실시간 모니터링 시스템
CN102291527A (zh) 基于单个鱼眼镜头的全景视频漫游方法及装置
TW201215118A (en) Video summarization using video frames from different perspectives
CN103942820A (zh) 一种多角度仿真三维地图的方法及装置
CN103533313A (zh) 基于地理位置的电子地图全景视频合成显示方法和系统
CN110245199A (zh) 一种大倾角视频与2d地图的融合方法
CN112053415A (zh) 一种地图构建方法和自行走设备
CN103175527A (zh) 一种应用于微小卫星的大视场低功耗的地球敏感器系统
TWI279142B (en) Picture capturing and tracking method of dual cameras
JP5259853B2 (ja) 顕微鏡システム
KR101317428B1 (ko) 카메라(cctv) 제어기능을 갖는 공간 정보 시스템 및 그 동작방법
Levy et al. The art of implementing SfM for reconstruction of archaeological sites in Greece: Preliminary applications of cyber-archaeological recording at Corinth

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14876452

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 15109756

Country of ref document: US

REEP Request for entry into the european phase

Ref document number: 2014876452

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2014876452

Country of ref document: EP