CN113763435A - Tracking shooting method based on multiple cameras - Google Patents
Tracking shooting method based on multiple cameras
- Publication number
- CN113763435A (application CN202010493499.XA)
- Authority
- CN
- China
- Prior art keywords
- target
- camera
- path
- cameras
- information
- Prior art date
- 2020-06-02
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/292—Multi-camera tracking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/04—Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/181—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30241—Trajectory
Landscapes
- Engineering & Computer Science (AREA)
- Business, Economics & Management (AREA)
- Economics (AREA)
- Multimedia (AREA)
- Strategic Management (AREA)
- Human Resources & Organizations (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Game Theory and Decision Science (AREA)
- Development Economics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Signal Processing (AREA)
- Entrepreneurship & Innovation (AREA)
- Marketing (AREA)
- Operations Research (AREA)
- Quality & Reliability (AREA)
- Tourism & Hospitality (AREA)
- General Business, Economics & Management (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a tracking shooting method based on multiple cameras, which relates to the technical field of camera surveillance and solves the current problem that a suspicious target cannot be tracked effectively and in time. The method comprises the following steps: constructing a monitoring network of multiple cameras, and determining the information of each camera; determining a global network topology based on the information of each camera; identifying and tracking a target, and acquiring the motion trajectory of the target under a single camera; predicting the motion trajectory of the target based on the global network topology; and fusing the motion trajectories under the individual cameras to obtain the global motion trajectory of the target.
Description
Technical Field
The invention relates to the technical field of camera monitoring, in particular to a tracking shooting method based on multiple cameras.
Background
With the continuous development of monitoring networks, large numbers of cameras are installed in more and more places to ensure the safety of the monitored areas. Because the position of a single camera is relatively fixed and its field of view is limited, monitoring blind spots are difficult to avoid. Installing multiple cameras can, under certain conditions, enlarge the monitored field of view, but monitoring personnel are still required to track a moving object through the video monitoring system; owing to factors such as the large number of monitors and the complexity of the information, the moving object cannot be tracked effectively and in time.
Disclosure of Invention
The invention mainly aims to provide a tracking shooting method based on multiple cameras, so as to solve the current problem that a suspicious target cannot be tracked effectively and in time.
To achieve the above object, the present invention provides the following technical solution: a tracking shooting method based on multiple cameras, comprising the following steps:
constructing a monitoring network of multiple cameras, and determining the information of each camera;
determining a global network topology based on the information of each camera;
identifying and tracking a target, and acquiring the motion trajectory of the target under a single camera;
predicting the motion trajectory of the target based on the global network topology;
and fusing the motion trajectories under the individual cameras to obtain the global motion trajectory of the target.
With this technical solution, the positions of the cameras are set reasonably so that their monitoring range covers the entire area to be monitored. A global network topology is constructed based on the monitoring information acquired by each camera and is used to associate the images subsequently acquired by the cameras. Target recognition is performed on the images acquired by each camera; tracking starts as soon as the target is recognized, and the motion trajectory of the target in the scene of the corresponding camera is recorded. The motion trajectory of the target is predicted based on the spatio-temporal constraints of the global network topology, and the predicted position information of the target is sent to the corresponding camera, so that the camera locks onto the position of the monitored target in advance, which improves the tracking rate. Finally, the motion trajectories of the target under the individual cameras are fused to obtain the global motion trajectory of the target, so that the operators can follow the route taken by the target.
In an embodiment of the present application, the information of each camera includes: the monitoring area of each camera and/or the monitoring angle of each camera.
With this technical solution, the positions of the cameras are adjusted according to the information of all cameras, so that the monitored area is covered in its entirety and its safety is further ensured.
In an embodiment of the present application, determining the global network topology based on the information of each camera further comprises the following steps:
acquiring the color histogram information of the monitoring area corresponding to each camera;
clustering, based on the color histogram information, the image information acquired by each camera with a Mean Shift algorithm to obtain the image segmentation result corresponding to each camera;
determining the global network topology based on the corresponding image segmentation results.
With this technical solution, the network topology is constructed from color histogram information. Compared with constructing the network topology from target tracking, this places lower requirements on the cameras, and the topology can still be constructed accurately when the target is occluded. The regions in which adjacent cameras monitor overlapping views are determined; the image information monitored by each camera is clustered with a Mean Shift algorithm and the feature points of the images are extracted; the images are segmented with an optical flow method; the images monitored by the cameras are spatially transformed according to the segmentation results so as to calibrate the overlapping image regions; and the images are then stitched together based on the overlapping regions to obtain the global network topology.
In an embodiment of the present application, identifying and tracking the target and acquiring the motion trajectory of the target under a single camera further comprises the following steps:
detecting a moving target in the images acquired by each camera based on an OTSU segmentation method, and obtaining the initial information of the target;
extracting the local features of the target based on the initial image of the target acquired by the camera, and constructing an initial model of the target;
updating the initial model based on the images acquired by the cameras in real time to obtain a real-time model of the target;
and searching for the target in the images acquired by the cameras based on the real-time model of the target.
With this technical solution, once the target is detected, its initial model is refined by deep learning, that is, the model of the target is continuously updated to obtain a real-time model, and searching and tracking are then performed based on the real-time model, which effectively improves the accuracy of target tracking.
In an embodiment of the present application, predicting the motion trajectory of the target based on the global network topology further comprises the following step: predicting the motion trajectory of the target by combining a CamShift algorithm with a Kalman filtering algorithm.
With this technical solution, the CamShift algorithm is adopted and a Kalman filtering algorithm is introduced to predict the motion trajectory of the target; the method therefore has high real-time performance, effectively suppresses other interference, and is more robust.
The invention has the following beneficial effects: the initial model of the target is updated with a deep learning algorithm to obtain a real-time target model, which effectively improves the tracking precision of the cameras; and the target is recognized by combining local feature recognition with the spatio-temporal constraints of the global network topology, so that the target can still be correctly recognized and tracked when part of it is occluded, giving the method strong applicability.
Drawings
In order to illustrate the embodiments or the technical solutions of the present invention more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only exemplary embodiments of the present invention, and that those skilled in the art can obtain other drawings from the structures shown in these drawings without inventive effort,
wherein:
fig. 1 is a flow chart of a multi-camera based tracking shooting method of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. It is obvious that the described embodiments are only exemplary embodiments of the present invention and not the only possible ones. All other embodiments that a person skilled in the art can derive from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
The multi-camera based tracking shooting method, as shown in fig. 1, includes the following steps:
s1, a multi-camera monitoring network is constructed, and information of each camera is determined.
Specifically, the connection relation between the camera and the switch is reasonably configured, so that the real-time performance of monitoring is ensured; and planning the installation position, the installation number and the monitoring view angle of the cameras based on the terrain of the monitoring area and the physical parameters of the cameras, ensuring that the monitoring range comprises the whole area to be monitored, and acquiring the information of each path of cameras, including the monitoring area of each path of cameras, the monitoring angle of each path of cameras, the connection relation of the monitoring area of each path of cameras and the like. There may be some degree of overlap in the images acquired by the cameras.
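As an illustration only, the per-camera information gathered in S1 could be held in a record such as the following Python sketch; the field names are assumptions chosen for illustration and are not terms defined in the patent.

```python
# Illustrative sketch only: field names (cam_id, monitored_area, view_angle_deg, neighbours)
# are assumptions, not terminology from the patent.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class CameraInfo:
    cam_id: str                                            # identifier of the camera channel
    monitored_area: Tuple[float, float, float, float]      # ground-plane extent of its view
    view_angle_deg: float                                   # monitoring angle of the camera
    neighbours: List[str] = field(default_factory=list)     # cameras with overlapping coverage
```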
S2, the global network topology is determined based on the information of each camera.
S21, the color histogram information of the monitoring area corresponding to each camera is acquired;
Specifically, the images acquired by each camera are decomposed and the corresponding color histograms are generated.
S22, based on the color histogram information, the image information acquired by each camera is clustered with a Mean Shift algorithm to obtain the image segmentation result corresponding to each camera;
Specifically, based on the color information in the color histograms, the image information monitored by each camera is clustered with a Mean Shift algorithm, and the feature points of the images are extracted.
S23, the global network topology is determined based on the corresponding image segmentation results.
Specifically, the images are segmented with an optical flow method, and the images monitored by the cameras are spatially transformed according to the segmentation results so as to calibrate the overlapping image regions; the images are then stitched together based on the overlapping regions to obtain the global network topology.
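As a sketch of how the overlap between two adjacent cameras could be calibrated, the example below matches ORB features and estimates a homography with RANSAC. ORB, the match thresholds, and the choice of a homography as the spatial transform are assumptions; the patent itself only names feature points, an optical flow method, and a spatial transformation.

```python
import cv2
import numpy as np

def estimate_overlap_homography(frame_a, frame_b, min_matches=10):
    """Spatial transform from camera A's view to camera B's view over their overlapping region."""
    orb = cv2.ORB_create(1000)
    kp_a, des_a = orb.detectAndCompute(cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY), None)
    kp_b, des_b = orb.detectAndCompute(cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY), None)
    if des_a is None or des_b is None:
        return None
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_a, des_b)
    matches = sorted(matches, key=lambda m: m.distance)[:100]
    if len(matches) < min_matches:
        return None                      # too little overlap: treat as non-adjacent in the topology
    src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H                             # edge of the global network topology between A and B
```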
S3, the target is identified and tracked, and the motion trajectory of the target under a single camera is acquired.
S31, the images acquired by each camera are processed with an OTSU segmentation method to detect a moving target and obtain the initial information of the target;
Specifically, the images acquired by each camera are preprocessed, that is, converted with a preset normalized vegetation index into the corresponding gray-scale images; the gray-scale images are processed with the OTSU segmentation method to obtain the corresponding binary images; and the target is identified from these binary images, yielding the initial information of the target, including the position of the target, the scale of the target, and the like.
S32, the local features of the target are extracted based on the initial image of the target acquired by the camera, and an initial model of the target is constructed;
Specifically, the images acquired by the camera that first identifies the target are examined, the local feature points in each frame are extracted, and the initial model of the target is constructed.
S33, the initial model is updated based on the images acquired by the cameras in real time to obtain a real-time model of the target;
Specifically, local feature points are extracted from the image information about the target subsequently acquired by each camera in real time, and they are used for matching and for updating the real-time model of the target.
Preferably, local feature points in the images are selected and an initial Markov model is constructed; the local feature points in the images acquired in real time are then used as training samples and fed into the initial Markov model, which is trained until the optimized, real-time model is obtained.
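The sketch below illustrates one possible form of the model construction and update in S32/S33, using ORB descriptors as the local features and a simple descriptor bank as the target model. It does not reproduce the Markov-model training described above, and all thresholds are assumptions.

```python
import cv2
import numpy as np

class TargetModel:
    """Target model as a bank of local-feature descriptors, refreshed from new observations."""
    def __init__(self, init_patch_bgr, max_descriptors=500):
        self.orb = cv2.ORB_create()
        self.matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        gray = cv2.cvtColor(init_patch_bgr, cv2.COLOR_BGR2GRAY)
        _, self.descriptors = self.orb.detectAndCompute(gray, None)   # initial model (S32)
        self.max_descriptors = max_descriptors

    def update(self, patch_bgr, match_ratio=0.3):
        """Fold descriptors from a newly acquired patch into the model if it matches well (S33)."""
        gray = cv2.cvtColor(patch_bgr, cv2.COLOR_BGR2GRAY)
        kp, des = self.orb.detectAndCompute(gray, None)
        if des is None or self.descriptors is None:
            return False
        matches = self.matcher.match(self.descriptors, des)
        if len(matches) < match_ratio * len(kp):
            return False                                    # probably not the same target; skip
        self.descriptors = np.vstack([self.descriptors, des])[-self.max_descriptors:]
        return True
```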
S34, the target is searched for in the images acquired by the cameras based on the real-time model of the target.
Specifically, the real-time model of the current target is distributed to each camera and used to search for the target in the images of each camera.
S4, the motion trajectory of the target is predicted based on the global network topology.
Specifically, based on the global network topology, that is, the network topology among the cameras, the set of cameras associated with the camera that has found the moving target is obtained from the topology; by analysing the spatial proximity relations among the cameras and the time differences at which the moving target appears, the corresponding cameras are locked as key monitoring cameras for tracking the moving target. A CamShift algorithm is then combined with a Kalman filtering algorithm: the edges of the moving target are detected and its relation to the surrounding environment is determined so that positioning can be performed, that is, the position of the target in its local coordinate system is converted into indoor coordinate system data by the CamShift algorithm, and the motion trajectory of the target is predicted, i.e. the camera in whose images the target will appear next is forecast.
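A hedged sketch of the CamShift-plus-Kalman combination: CamShift localizes the target from a hue back-projection, and a constant-velocity Kalman filter smooths the result and predicts the next centre position, which can then be mapped through the topology to the camera expected to see the target next. The conversion to an indoor coordinate system mentioned above is site-specific and omitted; roi_hist is assumed to be the target's hue histogram normalized to [0, 255], and the noise covariances are illustrative values.

```python
import cv2
import numpy as np

def make_kalman():
    kf = cv2.KalmanFilter(4, 2)                  # state: x, y, vx, vy; measurement: x, y
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
    return kf

def track_step(frame_bgr, roi_hist, track_window, kf):
    """One CamShift localization followed by a Kalman forecast of the next centre position."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    _, track_window = cv2.CamShift(back_proj, track_window, criteria)
    x, y, w, h = track_window
    kf.correct(np.array([[x + w / 2.0], [y + h / 2.0]], np.float32))
    predicted = kf.predict()                     # forecast of the centre in the next frame
    return track_window, (float(predicted[0, 0]), float(predicted[1, 0]))
```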
S5, the motion trajectories under the individual cameras are fused to obtain the global motion trajectory of the target.
Specifically, the corresponding images acquired by the cameras that have recognized the moving target are stitched together, whereby the global motion trajectory of the target is obtained.
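A minimal sketch of the fusion in S5, assuming each camera's homography into a common global frame is already available from the topology-building step; the data layout (per-camera lists of timestamped positions) is an illustrative assumption.

```python
import cv2
import numpy as np

def fuse_tracks(per_camera_tracks, homographies):
    """per_camera_tracks: {cam_id: [(t, x, y), ...]}; homographies: {cam_id: 3x3 H into the global frame}."""
    global_track = []
    for cam_id, track in per_camera_tracks.items():
        if not track:
            continue
        pts = np.float32([[x, y] for _, x, y in track]).reshape(-1, 1, 2)
        mapped = cv2.perspectiveTransform(pts, homographies[cam_id]).reshape(-1, 2)
        global_track += [(t, float(px), float(py))
                         for (t, _, _), (px, py) in zip(track, mapped)]
    return sorted(global_track)            # ordered by timestamp: the global motion trajectory
```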
The above description presents only preferred embodiments of the present invention, and the protection scope of the present invention is not limited to the above embodiments; all technical solutions embodying the idea of the present invention fall within the protection scope of the present invention. It should be noted that modifications and refinements made by those skilled in the art without departing from the principle of the present invention shall also be regarded as falling within the protection scope of the present invention.
Claims (5)
1. A tracking shooting method based on multiple cameras, characterized by comprising the following steps:
constructing a monitoring network of multiple cameras, and determining the information of each camera;
determining a global network topology based on the information of each camera;
identifying and tracking a target, and acquiring the motion trajectory of the target under a single camera;
predicting the motion trajectory of the target based on the global network topology;
and fusing the motion trajectories under the individual cameras to obtain the global motion trajectory of the target.
2. The multi-camera based tracking shooting method according to claim 1, characterized in that the information of each camera comprises: the monitoring area of each camera and/or the monitoring angle of each camera.
3. The multi-camera based tracking shooting method according to claim 2, characterized in that determining the global network topology based on the information of each camera further comprises the following steps:
acquiring the color histogram information of the monitoring area corresponding to each camera;
clustering, based on the color histogram information, the image information acquired by each camera with a Mean Shift algorithm to obtain the image segmentation result corresponding to each camera;
determining the global network topology based on the corresponding image segmentation results.
4. The multi-camera based tracking shooting method according to claim 3, characterized in that identifying and tracking the target and acquiring the motion trajectory of the target under a single camera further comprises the following steps:
detecting a moving target in the images acquired by each camera based on an OTSU segmentation method, and obtaining the initial information of the target;
extracting the local features of the target based on the initial image of the target acquired by the camera, and constructing an initial model of the target;
updating the initial model based on the images acquired by the cameras in real time to obtain a real-time model of the target;
and searching for the target in the images acquired by the cameras based on the real-time model of the target.
5. The multi-camera based tracking shooting method according to claim 4, characterized in that predicting the motion trajectory of the target based on the global network topology further comprises the following step: predicting the motion trajectory of the target by combining a CamShift algorithm with a Kalman filtering algorithm.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN202010493499.XA (CN113763435A) | 2020-06-02 | 2020-06-02 | Tracking shooting method based on multiple cameras |
Publications (1)

| Publication Number | Publication Date |
| --- | --- |
| CN113763435A | 2021-12-07 |
Family
ID=78783236
Family Applications (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN202010493499.XA (CN113763435A, Pending) | 2020-06-02 | 2020-06-02 | Tracking shooting method based on multiple cameras |
Country Status (1)

| Country | Link |
| --- | --- |
| CN | CN113763435A (en) |
Citations (8)

| Publication number | Priority date | Publication date | Assignee | Title |
| --- | --- | --- | --- | --- |
| CN101572804A (en) * | 2009-03-30 | 2009-11-04 | 浙江大学 | Multi-camera intelligent control method and device |
| CN102629385A (en) * | 2012-02-28 | 2012-08-08 | 中山大学 | Object matching and tracking system based on multiple camera information fusion and method thereof |
| CN103325121A (en) * | 2013-06-28 | 2013-09-25 | 安科智慧城市技术(中国)有限公司 | Method and system for estimating network topological relations of cameras in monitoring scenes |
| JP2016099941A (en) * | 2014-11-26 | 2016-05-30 | 日本放送協会 | System and program for estimating position of object |
| CN106709436A (en) * | 2016-12-08 | 2017-05-24 | 华中师范大学 | Cross-camera suspicious pedestrian target tracking system for rail transit panoramic monitoring |
| KR20180032400A (en) * | 2016-09-22 | 2018-03-30 | 한국전자통신연구원 | Multiple object tracking apparatus based on object information from multiple cameras and method therefor |
| CN110175583A (en) * | 2019-05-30 | 2019-08-27 | 重庆跃途科技有限公司 | Campus-wide security monitoring analysis method based on video AI |
| CN111080679A (en) * | 2020-01-02 | 2020-04-28 | 东南大学 | Method for dynamically tracking and positioning indoor personnel in large-scale place |
Legal Events
| Date | Code | Title | Description |
| --- | --- | --- | --- |
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |