CN111709972B - Space constraint-based method for quickly concentrating wide-area monitoring video - Google Patents


Info

Publication number
CN111709972B
CN111709972B
Authority
CN
China
Prior art keywords
video
boundary
target
space
monitoring area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010530175.9A
Other languages
Chinese (zh)
Other versions
CN111709972A (en)
Inventor
张云佐
李汶轩
杨攀亮
郭亚宁
黄富瑜
张嘉煜
李怡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shijiazhuang Tiedao University
Original Assignee
Shijiazhuang Tiedao University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shijiazhuang Tiedao University filed Critical Shijiazhuang Tiedao University
Priority to CN202010530175.9A priority Critical patent/CN111709972B/en
Publication of CN111709972A publication Critical patent/CN111709972A/en
Application granted granted Critical
Publication of CN111709972B publication Critical patent/CN111709972B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/292Multi-camera tracking
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30232Surveillance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30241Trajectory

Abstract

The invention discloses a space-constraint-based method for quickly concentrating wide-area monitoring video, and relates to the technical field of video image processing methods. The method comprises the following steps: detecting sensitive targets crossing the boundary of the monitoring area by a space-time slicing method; matching the targets by incorporating the spatial information of the monitoring-area boundaries; and, according to the spatial position information of the cameras, constructing the overall background of the monitoring area, labeling the target motion information and constructing a wide-area concentrated monitoring video. The method concentrates video efficiently and helps the user quickly obtain the motion process of each target.

Description

Space constraint-based method for quickly concentrating wide-area monitoring video
Technical Field
The invention relates to the technical field of image processing methods, in particular to a method for quickly concentrating a wide-area monitoring video based on space constraint.
Background
Nowadays, with the rapid construction and development of smart cities, a single monitoring camera can hardly meet people's demand for a stable and safe life. Cross-camera networks have been installed in every corner of production and daily life, and their round-the-clock monitoring generates a huge amount of surveillance video. Unlike conventional videos, these videos are characterized by high redundancy, a fixed group of users, no obvious shot transitions, and the like. Storing and browsing massive cross-camera surveillance video simply and efficiently is therefore an urgent practical need.
Currently, researchers have done a great deal of work on surveillance-video concentration for a single camera. Single-camera concentration methods fall into two categories according to the media attributes they exploit. The first is static video concentration based on key frames: various features of the targets are extracted to select one or more frames that summarize the main content of the video, and these frames are combined in various forms into the final static concentrated video. The second is dynamic video concentration based on target motion trajectories: the motion trajectories of the targets are extracted and shifted and recombined in time and space to reduce spatio-temporal redundancy, forming the final dynamic concentrated video. For single-camera surveillance-video concentration a mature theoretical system has already formed, and in the last two years the concentration of cross-camera surveillance video has become an important research direction in the surveillance-video field. Most existing cross-camera concentration methods present the concentrated video of each camera to the user in various combinations. For example, Zhu et al. display the concentrated videos of all cameras side by side on the screen, while Leo et al. show the concentrated video of one camera in a main window and the related concentrated videos in sub-windows. Although existing cross-camera concentration methods can preserve the complete motion trajectory of each target and compress the video length to some extent, they share a common problem: it is difficult to find a display form that users can easily understand.
Disclosure of Invention
The invention aims to solve the technical problem of how to provide a space-constraint-based method for quickly concentrating wide-area monitoring video that concentrates the video efficiently and helps the user quickly grasp the motion process of each target.
In order to solve the above technical problem, the technical scheme adopted by the invention is as follows: a method for quickly concentrating wide-area monitoring video based on space constraint, characterized by comprising the following steps:
detecting sensitive targets crossing the boundary of the monitoring area by adopting a space-time slicing method;
matching the targets by incorporating the spatial information of the monitoring-area boundaries;
and, according to the spatial position information of the cameras, constructing the overall background of the monitoring area, labeling the target motion information and constructing a wide-area concentrated monitoring video.
A further technical scheme is that the method for detecting sensitive targets crossing the monitoring-area boundary is as follows:
extracting space-time slices at the four boundaries (top, bottom, left and right) of the video;
performing Gaussian-mixture background modeling on the four space-time slices to extract foreground targets;
traversing the modeled space-time slices row by row and column by column, finding the rows or columns in which the number of white pixels exceeds a given threshold, and thereby locating a target;
and sampling a secondary boundary adjacent and parallel to the video boundary: a target entering the monitoring area crosses the boundary first and the secondary boundary second, whereas a target leaving the area crosses the secondary boundary first and the boundary second; the crossing order therefore gives the target's direction of motion at that side of the video, i.e. entering or leaving the monitoring area.
A further technical scheme is that the method for matching targets by incorporating the spatial information of the monitoring-area boundaries is as follows:
under the constraint of the spatial positions of the cameras in the camera network, first calculating the vector distance between targets with a pedestrian-matching method based on a colour spatial-distribution model, and adding the camera spatial-information constraint into this calculation;
regarding each boundary of a monitoring area as a point, connecting all points within the same camera, and connecting adjacent points between two cameras, linking only the two closest points;
giving a higher weight between spatially close space-time slices so that targets there obtain a higher similarity;
weight normalization:

$$\hat{w}_{A_i^{t_1}} = \frac{w_{A_i^{t_1}}}{\mathrm{sum}}$$

where $A_i^{t_1}$ denotes boundary A of the i-th camera, $t_1$ denotes the direction of the target, $\mathrm{sum}$ is the sum of $w_{A_i^{t_1}}$ and all the other boundary weights, and $\hat{w}_{A_i^{t_1}}$ is the normalized weight between the two boundaries;
the normalized weight is then introduced as a weighting factor into the calculation of the y-axis vector distance between the MC classes of the two targets, yielding the target vector distance with the monitoring-area boundary weight incorporated.
A further technical scheme is that the method for constructing the wide-area concentrated monitoring video is as follows:
constructing a background image of the entire monitoring area according to the spatial positions of the monitoring cameras; representing different targets by triangles of different colours, drawing each triangle at the boundary where its target appears, with the triangle's orientation giving the target's moving direction; and displaying the target image and its start time separately in another window.
A further technical scheme is that a video space-time slice is formed by extracting a given row or column from every frame of the video, completely preserving the information along the video's time dimension. If the 1st column of every frame is extracted to form space-time slice A, each frame is of size m × n, and the video has l frames in total, then slice A is given by

$$A = \begin{bmatrix} p_{1,1}^{1} & p_{1,1}^{2} & \cdots & p_{1,1}^{l} \\ p_{2,1}^{1} & p_{2,1}^{2} & \cdots & p_{2,1}^{l} \\ \vdots & \vdots & \ddots & \vdots \\ p_{m,1}^{1} & p_{m,1}^{2} & \cdots & p_{m,1}^{l} \end{bmatrix}$$

where $p_{i,j}^{k}$ denotes the pixel in the i-th row and j-th column of the k-th frame of the video.
Space-time slices are extracted in three ways: regarding the video as a three-dimensional image sequence with axes x, y and t, a vertical slice is a section parallel to the y-axis, a horizontal slice is parallel to the x-axis, and a diagonal slice is taken along a line parallel to y = x.
The beneficial effects of the above technical scheme are as follows: the method detects moving targets by extracting only the video-boundary pixels, which greatly increases the detection speed; it matches targets under the constraint of the boundary spatial positions of the camera monitoring areas; and it constructs the monitoring background of the entire monitoring area and uses the colour and orientation of triangles to indicate the type and moving direction of each target, so that the user can conveniently and quickly grasp the whole journey of every moving target, giving a better user experience.
Drawings
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
FIG. 1 is a schematic representation of three spatiotemporal slices in a method according to an embodiment of the present invention;
FIG. 2 is a spatiotemporal slice diagram of video left boundary formation in a method according to an embodiment of the present invention;
FIG. 3 is a spatiotemporal slice after Gaussian-mixture background modeling in a method according to an embodiment of the invention;
FIG. 4 is a spatiotemporal slice diagram after object localization in a method according to an embodiment of the present invention;
FIG. 5 is a graph of boundary point connections in a method according to an embodiment of the invention;
FIG. 6 is a cross-camera condensed video map in a method according to an embodiment of the invention;
FIG. 7 is a diagram of the spatial position of the camera and its monitoring area in the method according to the embodiment of the present invention;
FIG. 8 is a diagram illustrating the detection effects of spatio-temporal slicing and three-frame difference methods in the method according to the embodiment of the present invention;
FIG. 9 is a flow chart of a method according to an embodiment of the invention;
Detailed Description
The technical solutions in the embodiments of the present invention are clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, but the present invention may be practiced in other ways than those specifically described and will be readily apparent to those of ordinary skill in the art without departing from the spirit of the present invention, and therefore the present invention is not limited to the specific embodiments disclosed below.
As shown in fig. 9, an embodiment of the present invention discloses a method for quickly concentrating a wide-area surveillance video based on spatial constraint, which includes the following steps:
detecting sensitive targets crossing the boundary of the monitoring area by adopting a space-time slicing method;
matching the targets by incorporating the spatial information of the monitoring-area boundaries;
and, according to the spatial position information of the cameras, constructing the overall background of the monitoring area, labeling the target motion information and constructing a wide-area concentrated monitoring video.
The above method is described in detail below.
Detecting a cross-boundary target:
the video space-time slice is formed by extracting a certain row or a certain column of all frames of the video, and completely retains the information on the video time dimension. Assuming that the 1 st column of all frames in the video is extracted to form a spatio-temporal slice a, and the size of each frame is m × n, and the video has a total of l frames, the spatio-temporal slice a is shown as formula 1.
Figure BDA0002535112360000051
Wherein the content of the first and second substances,
Figure BDA0002535112360000052
and the pixel points at the k frame, the ith row and the jth column in the video are represented.
Space-time slices are generally extracted in three ways: vertical slices, horizontal slices and diagonal slices. The video is regarded as a three-dimensional image sequence with axes x, y and t: a vertical slice is a section parallel to the y-axis, a horizontal slice is parallel to the x-axis, and a diagonal slice is taken along a line parallel to y = x. The three kinds of space-time slice are shown in fig. 1.
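To make the slice construction concrete, the following minimal Python sketch (OpenCV and NumPy; the video file name and the grayscale conversion are assumptions, not part of the patent) builds the four boundary space-time slices used in the detection step below: each slice stacks one boundary row or column per frame, so one axis is time and the other is space.

```python
import cv2
import numpy as np

def extract_boundary_slices(video_path):
    """Build the four space-time slices formed by the top, bottom, left and
    right boundary lines of every frame of a video (one line per frame)."""
    cap = cv2.VideoCapture(video_path)
    top, bottom, left, right = [], [], [], []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        top.append(gray[0, :])      # first row of the frame
        bottom.append(gray[-1, :])  # last row
        left.append(gray[:, 0])     # first column (slice A in the text)
        right.append(gray[:, -1])   # last column
    cap.release()
    # Each slice has shape (l frames, m or n boundary pixels)
    return {name: np.stack(lines) for name, lines in
            [("top", top), ("bottom", bottom), ("left", left), ("right", right)]}

# Hypothetical usage:
# slices = extract_boundary_slices("video1.avi")
# left_slice = slices["left"]   # corresponds to the slice shown in fig. 2
```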
The most common moving-object detection methods at present are the optical-flow method, the frame-difference method and the background-difference method, but they are easily affected by the background environment and are slow. From a new angle, the proposed method uses space-time slicing to detect all moving targets that appear in the monitoring area. The main steps are as follows (a code sketch of steps 2)-4) is given after the list):
1) Extract space-time slices at the four boundaries (top, bottom, left and right) of the video. Every target that enters or leaves the monitoring area leaves an image trace in one of these four slices, so all sensitive targets crossing the monitoring area can be detected. The space-time slice formed by sampling the left boundary of the video is shown in fig. 2.
2) Perform Gaussian-mixture background modeling on the four space-time slices to extract foreground targets. As an extension and improvement of the single-Gaussian model, the Gaussian-mixture background model has some robustness to jitter, illumination changes and the like. A space-time slice after Gaussian-mixture background modeling is shown in fig. 3.
3) Traverse the modeled space-time slice row by row and column by column, find the rows or columns in which the number of white pixels exceeds a given threshold, and thereby locate the target. The space-time slice after target localization is shown in fig. 4.
4) Sample a secondary boundary adjacent and parallel to the video boundary. A target entering the monitoring area crosses the boundary first and the secondary boundary second, whereas a target leaving the area crosses the secondary boundary first and the boundary second; the crossing order therefore gives the target's direction of motion at that side of the video, i.e. entering or leaving the monitoring area.
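A minimal sketch of steps 2)-4), assuming each boundary line is fed to OpenCV's MOG2 Gaussian-mixture background subtractor as a one-pixel-high frame and the resulting masks are stacked into a foreground slice; the pixel-count threshold and the use of the first foreground frame as the crossing time are illustrative assumptions, not the patent's exact procedure.

```python
import cv2
import numpy as np

def foreground_slice(boundary_lines):
    """Step 2): Gaussian-mixture background modeling on a boundary space-time
    slice. boundary_lines is an iterable of 1-D arrays, one boundary row or
    column per frame; returns a binary slice (frames x boundary pixels)."""
    mog = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    masks = [mog.apply(line.reshape(1, -1)).ravel() for line in boundary_lines]
    return np.stack(masks)

def locate_targets(fg_slice, min_white=10):
    """Step 3): traverse the modeled slice row by row and column by column and
    keep the rows (frame indices) and columns (boundary positions) whose
    number of white pixels exceeds the threshold."""
    white = fg_slice > 0
    frames = np.where(white.sum(axis=1) > min_white)[0]
    positions = np.where(white.sum(axis=0) > min_white)[0]
    return frames, positions

def crossing_direction(fg_boundary, fg_secondary):
    """Step 4): compare the first crossing time on the boundary slice with the
    first crossing time on the adjacent, parallel secondary-boundary slice.
    Boundary first -> the target is entering; secondary first -> leaving."""
    t_b = np.argmax(fg_boundary.sum(axis=1) > 0)
    t_s = np.argmax(fg_secondary.sum(axis=1) > 0)
    if t_b < t_s:
        return "entering"
    if t_b > t_s:
        return "leaving"
    return "undetermined"
```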
Space constraint
The method matches the targets appearing in the space-time slices by incorporating the spatial position information of the camera monitoring-area boundaries. Under the constraint of the spatial positions of the cameras in the camera network, the method first computes the vector distance between targets with a pedestrian-matching method based on a colour spatial-distribution model, and adds the camera spatial-information constraint into this calculation.
Each boundary of a monitoring area is regarded as one point; all points within the same camera are connected, and adjacent points between two cameras are connected, linking only the two closest points. The connection pattern is shown in fig. 5.
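The connection rule can be sketched as follows (the boundary naming and the camera layout are hypothetical; the patent specifies only the rule, not a data structure): all boundaries of one camera are linked to each other, and between two adjacent cameras only the two spatially closest boundary points are linked, every edge carrying weight 1.

```python
from itertools import combinations
from collections import defaultdict

def boundary_graph(cameras, adjacent_pairs):
    """cameras: {camera_id: [boundary names]}, e.g. {"cam1": ["A", "B", "C", "D"]}.
    adjacent_pairs: the closest boundary pairs between neighbouring cameras,
    e.g. [(("cam1", "D"), ("cam2", "B"))]. Returns edge weights (all 1)."""
    w = defaultdict(dict)
    # connect all boundary points within the same camera
    for cam, bounds in cameras.items():
        for a, b in combinations(bounds, 2):
            w[(cam, a)][(cam, b)] = 1.0
            w[(cam, b)][(cam, a)] = 1.0
    # connect only the two closest points between adjacent cameras
    for u, v in adjacent_pairs:
        w[u][v] = 1.0
        w[v][u] = 1.0
    return w
```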
A higher weight is given between spatially close space-time slices so that targets there obtain a higher similarity. Each edge in fig. 5 carries weight 1; the weights between boundary A1 of camera 1 and the remaining boundaries are then as listed in Table 1.
TABLE 1 Weights between some space-time slice boundaries
Weight normalization:

$$\hat{w}_{A_i^{t_1}} = \frac{w_{A_i^{t_1}}}{\mathrm{sum}}$$

where $A_i^{t_1}$ denotes boundary A of the i-th camera, $t_1$ denotes the direction of the target, $\mathrm{sum}$ is the sum of $w_{A_i^{t_1}}$ and all the other boundary weights, and $\hat{w}_{A_i^{t_1}}$ is the normalized weight between the two boundaries.
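Under these definitions, normalization simply divides each edge weight by the total weight incident to the boundary; a small sketch, reusing the graph representation assumed in the previous block, could look like this:

```python
def normalize_weights(w):
    """w: {boundary: {neighbour: weight}} as built above. Each neighbour weight
    is divided by the sum of all weights incident to the boundary, giving the
    normalized weight between the two boundaries."""
    return {node: {nbr: wt / sum(nbrs.values()) for nbr, wt in nbrs.items()}
            for node, nbrs in w.items()}

# Hypothetical example: a boundary with three unit-weight neighbours ends up
# with a normalized weight of 1/3 towards each of them.
```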
The normalized weight is then introduced as a weighting factor into the calculation of the y-axis vector distance between the MC classes of the two targets, yielding the target vector distance with the monitoring-area boundary weight incorporated.
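The exact weighted-distance expression is not reproduced here; purely as an illustrative assumption (not the patent's formula), one way to fold the normalized boundary weight into the colour-model distance is to shrink the distance when the two boundaries are strongly connected:

```python
def weighted_mc_distance(d_mc, w_norm):
    """d_mc: y-axis vector distance between the MC classes of two targets.
    w_norm: normalized weight between the boundaries they cross.
    Illustrative assumption only: a larger boundary weight yields a smaller
    effective distance, favouring matches between well-connected boundaries."""
    return d_mc * (1.0 - w_norm)
```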
Visualization of sensitive objects:
the method comprises the steps of obtaining the running direction of a target at the boundary of each monitoring area through two steps of sensitive target detection and matching, constructing a novel cross-camera monitoring video concentration mode under the constraint of the spatial position of a camera, firstly constructing a background image under the whole monitoring area according to the spatial position of the monitoring camera, secondly representing different targets by utilizing triangles with different colors, marking the triangles at the boundary where the targets appear, enabling the directions of the triangles to be the moving directions of the targets, independently displaying the target image and the starting time in another window, and finally displaying the extensive concentrated video as shown in figure 6.
Experimental results and analysis:
the actual measurement video is taken as an experimental object, and the camera and the monitoring area thereof are shown in fig. 7. The three cameras respectively generate three video segments of video1, video2 and video3, wherein 3 moving objects appear in the three video segments, the first object enters from the upper boundary of the video1, the lower boundary of the video exits from a monitoring area, the second object enters from the lower boundary of the video1 monitoring area, the upper boundary of the video exits, the third object enters from the right boundary of the video2, exits from the left boundary of the video2 monitoring area, enters from the left boundary of the video3 to the monitoring area of the camera 3, and exits from the right boundary of the video 3.
To verify the superiority of the proposed space-time-slice-based moving-object detection method, it is compared with the classical three-frame difference method in terms of running speed and detection effect; the performance comparison is given in Table 2 and the target extraction results in fig. 8.
TABLE 2 comparison of spatio-temporal slicing and three-frame difference methods
As can be seen from Table 2, with the same number of detected targets the space-time slicing method has a clear advantage in running speed, greatly improving target-detection efficiency. As can be seen from fig. 8, the targets detected by the three-frame difference method contain many holes, while the space-time slicing method preserves the complete shape of the targets well.
The final static summary of the cross-camera surveillance video consisting of video1, video2 and video3 is shown in fig. 6. The basic motion trajectory of each target can be read clearly from the figure, and a user can locate the position where a target of interest appears as needed, giving good interactivity; the summary concentrates many multi-frame videos into a single image, greatly reducing the data volume.

Claims (2)

1. A method for quickly concentrating wide-area monitoring video based on space constraint, characterized by comprising the following steps:
detecting sensitive targets crossing the boundary of the monitoring area by adopting a space-time slicing method;
matching the targets by incorporating the spatial information of the monitoring-area boundaries;
according to the spatial position information of the cameras, constructing the overall background of the monitoring area, labeling the target motion information and constructing a wide-area concentrated monitoring video;
the method of detecting sensitive objects crossing the border of the surveillance area is as follows:
extracting space-time slices at the upper, lower, left and right 4 boundaries of the video;
performing mixed Gaussian background modeling on the 4 space-time slices to extract a foreground target;
traversing the space-time slice after the mixed Gaussian background modeling row by row and column by column, finding out the rows or columns of which the number of white pixel points of each row or each column in the space-time slice is more than a certain threshold value, and positioning a target;
sampling a secondary boundary which is close to and parallel to the video boundary, if the target enters the monitoring area, passing through the boundary first and then passing through the secondary boundary, and if the target exits the monitoring area, passing through the secondary boundary first and then passing through the boundary, thereby obtaining the motion direction of the target in the video side, namely entering or exiting the monitoring area;
the method for matching the target by adding the boundary space information of the monitoring area comprises the following steps:
under the constraint of the spatial position of a camera in a camera network, firstly, calculating the vector distance between targets by using a pedestrian matching method based on a color space distribution model, and adding camera spatial information constraint in the calculation process;
each boundary of the monitoring area is regarded as a point, all points in the same camera are connected, adjacent points between the two cameras are connected, and only two points with smaller distance are connected;
giving higher weight between the similar space-time slices to ensure that the targets have higher similarity;
weight normalization:

$$\hat{w}_{A_i^{t_1}} = \frac{w_{A_i^{t_1}}}{\mathrm{sum}}$$

where $A_i^{t_1}$ denotes boundary A of the i-th camera, $t_1$ denotes the direction of the target, $\mathrm{sum}$ is the sum of $w_{A_i^{t_1}}$ and all the other boundary weights, and $\hat{w}_{A_i^{t_1}}$ is the normalized weight between the two boundaries;
the normalized weight being introduced as a weighting factor into the calculation of the y-axis vector distance between the MC classes of the two targets, yielding the target vector distance with the monitoring-area boundary weight incorporated;
the method of constructing the wide-area concentrated monitoring video being as follows:
constructing a background image of the entire monitoring area according to the spatial positions of the monitoring cameras; representing different targets by triangles of different colours, drawing each triangle at the boundary where its target appears, with the triangle's orientation giving the target's moving direction; and displaying the target image and its start time separately in another window.
2. The method for quickly concentrating wide-area monitoring video based on space constraint according to claim 1, wherein:
a video space-time slice is formed by extracting a given row or column from every frame of the video and completely preserves the information along the video's time dimension; space-time slice A is formed by extracting the 1st column of every frame of the video, each frame being of size m × n and the video having l frames in total, as given by

$$A = \begin{bmatrix} p_{1,1}^{1} & p_{1,1}^{2} & \cdots & p_{1,1}^{l} \\ p_{2,1}^{1} & p_{2,1}^{2} & \cdots & p_{2,1}^{l} \\ \vdots & \vdots & \ddots & \vdots \\ p_{m,1}^{1} & p_{m,1}^{2} & \cdots & p_{m,1}^{l} \end{bmatrix}$$

where $p_{i,j}^{k}$ denotes the pixel in the i-th row and j-th column of the k-th frame of the video;
space-time slices are extracted in three ways: regarding the video as a three-dimensional image sequence with axes x, y and t, a vertical slice is a section parallel to the y-axis, a horizontal slice is parallel to the x-axis, and a diagonal slice is taken along a line parallel to y = x.
CN202010530175.9A 2020-06-11 2020-06-11 Space constraint-based method for quickly concentrating wide-area monitoring video Active CN111709972B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010530175.9A CN111709972B (en) 2020-06-11 2020-06-11 Space constraint-based method for quickly concentrating wide-area monitoring video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010530175.9A CN111709972B (en) 2020-06-11 2020-06-11 Space constraint-based method for quickly concentrating wide-area monitoring video

Publications (2)

Publication Number Publication Date
CN111709972A CN111709972A (en) 2020-09-25
CN111709972B (en) 2022-03-11

Family

ID=72539818

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010530175.9A Active CN111709972B (en) 2020-06-11 2020-06-11 Space constraint-based method for quickly concentrating wide-area monitoring video

Country Status (1)

Country Link
CN (1) CN111709972B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116647690B * 2023-05-30 2024-03-01 Shijiazhuang Tiedao University Video concentration method based on space-time rotation

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012019417A1 (en) * 2010-08-10 2012-02-16 中国科学院自动化研究所 Device, system and method for online video condensation
CN103092963A (en) * 2013-01-21 2013-05-08 信帧电子技术(北京)有限公司 Video abstract generating method and device
CN105469425A (en) * 2015-11-24 2016-04-06 上海君是信息科技有限公司 Video condensation method
CN105721826A (en) * 2014-12-02 2016-06-29 四川浩特通信有限公司 Intelligent combat system
KR101822443B1 (en) * 2016-09-19 2018-01-30 서강대학교산학협력단 Video Abstraction Method and Apparatus using Shot Boundary and caption
CN111159471A (en) * 2018-11-08 2020-05-15 北京航天长峰科技工业集团有限公司 Monitoring video concentration processing method

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012019417A1 (en) * 2010-08-10 2012-02-16 中国科学院自动化研究所 Device, system and method for online video condensation
CN103092963A (en) * 2013-01-21 2013-05-08 信帧电子技术(北京)有限公司 Video abstract generating method and device
CN105721826A (en) * 2014-12-02 2016-06-29 四川浩特通信有限公司 Intelligent combat system
CN105469425A (en) * 2015-11-24 2016-04-06 上海君是信息科技有限公司 Video condensation method
KR101822443B1 (en) * 2016-09-19 2018-01-30 서강대학교산학협력단 Video Abstraction Method and Apparatus using Shot Boundary and caption
CN111159471A (en) * 2018-11-08 2020-05-15 北京航天长峰科技工业集团有限公司 Monitoring video concentration processing method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Tracking interacting objects optimally using integer; Wang X et al; 《Computer Vision–ECCV 2014》; 20141231; pp. 17-32 *
A method for extracting camera motion based on spatio-temporal slices; Li Yong, Liu Yu et al.; 《电视技术》(Video Engineering); 20041117; pp. 75-78 *
Based on a spatio-temporal nearest-neighbour trajectory analysis algorithm; Li Bin; 《中国优秀博硕士学位论文全文数据库(硕士)信息科技辑》(China Excellent Master's Theses Full-text Database, Information Science and Technology series); 20170515; pp. 1-63 *

Also Published As

Publication number Publication date
CN111709972A (en) 2020-09-25

Similar Documents

Publication Publication Date Title
Li et al. Reconstructing building mass models from UAV images
Cui et al. Automatic 3-D reconstruction of indoor environment with mobile laser scanning point clouds
CN103347167A (en) Surveillance video content description method based on fragments
US9626585B2 (en) Composition modeling for photo retrieval through geometric image segmentation
CN110347870A (en) The video frequency abstract generation method of view-based access control model conspicuousness detection and hierarchical clustering method
Zhang et al. Coarse-to-fine object detection in unmanned aerial vehicle imagery using lightweight convolutional neural network and deep motion saliency
CN108830327A (en) A kind of crowd density estimation method
CN110874592A (en) Forest fire smoke image detection method based on total bounded variation
CN102034267A (en) Three-dimensional reconstruction method of target based on attention
CN101365072A (en) Subtitle region extracting device and method
CN110688905A (en) Three-dimensional object detection and tracking method based on key frame
CN108198202A (en) A kind of video content detection method based on light stream and neural network
CN109978935A (en) A kind of picture depth algorithm for estimating analyzed based on deep learning and Fourier
CN112257549B (en) Floor danger detection early warning method and system based on computer vision
CN111145222A (en) Fire detection method combining smoke movement trend and textural features
CN111709972B (en) Space constraint-based method for quickly concentrating wide-area monitoring video
Hao et al. Slice-based building facade reconstruction from 3D point clouds
Bagheri et al. Temporal mapping of surveillance video for indexing and summarization
Recky et al. Window detection in complex facades
CN114422720A (en) Video concentration method, system, device and storage medium
CN113221976A (en) Multi-video-frame black smoke diesel vehicle detection method and system based on space-time optical flow network
CN104867129A (en) Light field image segmentation method
CN112465854A (en) Unmanned aerial vehicle tracking method based on anchor-free detection algorithm
CN105678268B (en) Subway station scene pedestrian counting implementation method based on double-region learning
Gao et al. Online building segmentation from ground-based LiDAR data in urban scenes

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CB03 Change of inventor or designer information

Inventor after: Zhang Yunzuo

Inventor after: Li Menxuan

Inventor after: Yang Panliang

Inventor after: Guo Yaning

Inventor after: Zhang Jiayu

Inventor after: Li Yi

Inventor before: Zhang Yunzuo

Inventor before: Li Menxuan

Inventor before: Yang Panliang

Inventor before: Guo Yaning

Inventor before: Huang Fuyu

Inventor before: Zhang Jiayu

Inventor before: Li Yi