CN113743380A - Active tracking method based on video image dynamic monitoring - Google Patents

Active tracking method based on video image dynamic monitoring

Info

Publication number
CN113743380A
CN113743380A (application CN202111292728.2A; granted as CN113743380B)
Authority
CN
China
Prior art keywords
image
monitoring
marked
dynamic monitoring
virtual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111292728.2A
Other languages
Chinese (zh)
Other versions
CN113743380B (en)
Inventor
周冰清
桂丽
赵峥来
魏雪燕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Bozidao Intelligent Industry Technology Research Institute Co ltd
Original Assignee
Jiangsu Bozidao Intelligent Industry Technology Research Institute Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Bozidao Intelligent Industry Technology Research Institute Co ltd
Priority to CN202111292728.2A
Publication of CN113743380A
Application granted
Publication of CN113743380B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person
    • G06T 2207/30201 Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention discloses an active tracking method based on video image dynamic monitoring, which relates to the technical field of image recognition and specifically comprises the following steps: collecting video images; judging whether face information in the dynamic image exists in a face feature template database; extracting feature image information of the marked person; recognizing dynamic changes between adjacent panoramic images, and training to obtain several pieces of feature image information of the dynamic object; comparing the feature image information, identifying and determining the marked person, and recording the focal-length parameter of the marked-person image; constructing a virtual two-dimensional plane, and outputting the marked person's actual coordinate point on the area map; outputting the marked person's actual movement track within the monitoring range of the device; outputting the marked person's actual movement track within the monitoring range of adjacent devices; and generating the marked person's path in the monitoring-area map. The invention achieves blind-spot-free monitoring and recognition across the monitored area, reduces the computation required for tracking and recognition, changes campus monitoring and tracking from passive to active, and improves the reliability and safety of campus security.

Description

Active tracking method based on video image dynamic monitoring
Technical Field
The invention belongs to the technical field of image recognition, and particularly relates to an active tracking method based on video image dynamic monitoring.
Background
Campus security is an important part of strengthening school safety management; it safeguards a school's normal teaching and living order and is currently an effective means of addressing campus safety. Existing security generally relies on manual supervision of cameras arranged around the campus: the supervision workload is heavy, lapses in monitoring occur easily, non-campus personnel cannot be discovered and tracked in time, and video recordings are often reviewed only after a campus safety incident has occurred, making security entirely passive. Existing positioning and tracking systems are generally realized with GPS positioning, meaning the tracked person must carry a device that can be monitored and tracked in real time; such methods cannot effectively track non-campus personnel in real time.
With the arrival of the 5G and artificial-intelligence era, video images serve as a primary means of visual information exchange, and image processing is widely applied in mobile internet, intelligent recognition, multimedia information exchange and other fields. Strong computing power and data-processing capability have greatly improved the dynamic monitoring, capture, processing and recording of video images, which also provides a breakthrough for exploring modern campus security.
In the prior art, tracking objects with video images is performed by frame-by-frame recognition, which results in a large amount of computation. To reduce it, Chinese patent CN201811386399.6, for example, discloses an object tracking method and device based on video images, which aims to narrow the recognition range when tracking objects across consecutive video images, thereby reducing the computation required for tracking and improving computational efficiency.
Disclosure of Invention
The invention aims to overcome the following defects: existing mature tracking systems cannot track non-campus personnel in real time, so campus security remains passive in monitoring and real-time tracking; video-image recognition requires a large amount of computation; and existing video-image acquisition directions are fixed, leaving acquisition blind spots.
In order to achieve the purpose, the invention adopts the following technical scheme:
The active tracking method based on video image dynamic monitoring employs a cloud server, a plurality of conventional monitors and a plurality of automatically zooming image dynamic monitoring devices, the monitors and devices being in communication connection with the cloud server. The conventional monitors are arranged in a circumferential matrix along the one-way road at the entrance of the monitoring area; the image dynamic monitoring devices are distributed in clusters within the monitoring area, their monitoring ranges being interlinked and together covering the entire monitoring area.
The active tracking method comprises the following specific steps:
s1, collecting video images;
s2, processing the video image, and judging whether the face information in the dynamic image exists in a face feature template database;
s3, extracting characteristic image information of the marked person;
s4, recognizing the dynamic change of the adjacent panoramic images, and training to obtain a plurality of characteristic image information of the dynamic object;
s5, comparing the characteristic image information, identifying and determining the marked person, and recording the focal length of the image of the marked person;
s6, constructing a virtual two-dimensional plane, and outputting the actual coordinate points of the marked characters on the regional map;
s7, outputting the actual movement track of the marked person in the monitoring range of the image dynamic monitoring device;
s8, outputting the actual movement track of the marked person in the monitoring range of the adjacent image dynamic monitoring device;
and S9, generating the marked person's path in the monitoring area map.
The method comprises the following specific steps:
step 1, acquiring a set of omnidirectional image information of each entering person, in a fixed pose, through the plurality of conventional monitors at the entrance of the monitoring area;
step 2, uploading the collected set of person image information to the cloud server, obtaining the face features in the person image information through deep-learning training with a convolutional neural network, comparing the face features against the face feature model database, and judging by a set similarity threshold whether the face information in the dynamic image exists in the face feature template database; if it exists, the person image information is not marked; if it does not exist, the person image information is marked, and marked-person information is generated and sent to the administrator;
step 3, the cloud server performs model training on the person image information found not to exist in step 2: the set of person image information is processed by an edge detection algorithm, the image is convolved with an image extraction matrix, a large amount of feature image information D is extracted and stored in a feature information database, and the trained feature image information is sent to each image dynamic monitoring device;
step 4, each image dynamic monitoring device in the monitoring area continuously acquires video images circumferentially at its monitoring point, generating a 360-degree panoramic image after each rotation; dynamic changes between adjacent panoramic images are identified by an embedded AI image recognition algorithm, and several pieces of feature image information D' of the dynamic object are obtained through embedded convolutional neural network training;
step 5, comparing the feature image information D' trained in step 4 with the feature image information D transmitted in step 3; when the number of matching pieces of feature image information exceeds a set threshold, the marked person is determined, and the focal-length parameter with which the image dynamic monitoring device acquired the marked-person image is recorded;
step 6, constructing a virtual camera centered on the image dynamic monitoring device and a virtual two-dimensional plane around the virtual camera; determining the radius value r of the marked person on the virtual two-dimensional plane from the shooting distance L corresponding to the focal-length parameter of step 5; vertically projecting the panoramic image collected in step 4 onto the virtual two-dimensional plane at radius r, so that the marked person in the panoramic image forms a coordinate point on the plane; and superimposing the virtual two-dimensional plane semi-transparently on the actual area map where the image dynamic monitoring device is located, the landing point of the coordinate point on the plane being the marked person's actual coordinate point A(x, y) on the area map;
step 7, according to step 6, after the marked person enters the monitoring range of the image dynamic monitoring device, outputting one coordinate value of the marked person after each 360-degree panoramic image acquisition, and outputting the marked person's actual coordinate trend within the device's monitoring range from the coordinate changes on the virtual two-dimensional plane;
step 8, taking the coordinate at which the marked person appears simultaneously in two adjacent image dynamic monitoring devices as the associated coordinate point A'(x', y'); the adjacent image dynamic monitoring devices execute steps 6 to 7, and the marked person's actual coordinate trend within their monitoring range is output from the coordinate changes on the two adjacent virtual two-dimensional planes;
step 9, taking the associated coordinate point A'(x', y') as the connection point, associating the marked person's actual coordinate points A(x, y) output by each image dynamic monitoring device on the area map, and generating the marked person's path on the monitored-area map in chronological order, thereby completing real-time tracking of the marked person.
Furthermore, there are at least four conventional monitors, arranged in a rectangular layout and symmetrically distributed on the two sides of the one-way road at the entrance of the monitoring area, so as to achieve omnidirectional, blind-spot-free image acquisition on the entrance road.
Furthermore, the image dynamic monitoring device comprises a rod-shaped body and a rotating pan-tilt head arranged at the top of the body; an auto-zoom camera for 360-degree video image acquisition is fixedly mounted on the pan-tilt head; a micro server, an embedded AI image recognition module and an embedded convolutional neural network module are arranged inside the body, the micro server having a CPU that supports the embedded AI image recognition module and the embedded convolutional neural network module.
Further, the embedded AI image recognition algorithm monitors moving objects using a frame difference method, specifically:

$$
D_{t+1}(x,y)=\begin{cases}1, & \left|f_{t+1}(x,y)-f_{t}(x,y)\right|>T\\ 0, & \left|f_{t+1}(x,y)-f_{t}(x,y)\right|\le T\end{cases}\qquad\text{formula (1)}
$$

where $D_{t+1}(x,y)$ is the difference image between two temporally adjacent panoramic images, $f_{t}(x,y)$ is the panoramic image at time $t$, $f_{t+1}(x,y)$ is the panoramic image at time $t+1$, and $T$ is the threshold selected for binarizing the difference image; $D_{t+1}(x,y)=1$ denotes foreground, meaning the pixel is in motion, and $D_{t+1}(x,y)=0$ denotes a background pixel.
Furthermore, the embedded convolutional neural network module at least comprises an input layer, a convolutional layer, a pooling layer, a fully connected layer and an output layer.
Further, in step 2, the face feature information in the face feature template database is formed through system acquisition and training.
Further, in step 3, the detection operator of the edge detection algorithm adopts one of the Canny, Prewitt, Sobel, LoG or Roberts operators.
Further, in step 3, the convolution kernel used by the image extraction matrix to convolve the image is a 36 × 36 matrix.
Further, in step 9, the coordinate points of the marked person are linearly connected.
Compared with the prior art, the active tracking method based on video image dynamic monitoring provided by the invention has the following beneficial effects:
(1) The invention identifies and marks non-campus personnel entering at the entrance of the monitoring area and issues early-warning information to remind campus security personnel; intelligent recognition thus reduces the monitoring pressure on security staff while effectively strengthening supervision.
(2) Through the clustered layout of the image dynamic monitoring devices and their acquisition of 360-degree panoramic images at each monitoring point, the monitored area is covered omnidirectionally without blind spots, eliminating monitoring dead angles in the area. Because each device generates one 360-degree panoramic image per rotation, only a single panoramic image needs to be recognized per rotation period, which greatly reduces the number of video images to recognize, cuts the computation required to track non-campus personnel, and improves computational efficiency.
(3) Using the focal-length parameter inside the camera, the method determines the marked person's actual coordinates in the monitored area by constructing a virtual two-dimensional plane and superimposing it on the actual area map, achieving accurate positioning of non-campus personnel; the continuous movement of the marked person's coordinate points then determines their direction of movement within the monitored area.
(4) The globally distributed image dynamic monitoring devices complete continuous tracking of non-campus personnel across adjacent devices, avoiding loss of the marked person or discontinuous tracking, and continuous monitoring and tracking is completed without tracking equipment such as GPS. Campus monitoring and tracking thus changes from passive to active, improving the reliability of campus security.
(5) The method tracks non-campus personnel in real time and automatically generates their path, so that security staff can grasp the movement track and direction of non-campus personnel within the monitoring area; this facilitates tracking and locating them, allows campus security staff to eliminate potential safety hazards at the first opportunity, and improves the safety of campus security.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a flow diagram of the active tracking method of the present invention;
FIG. 2 is a schematic diagram of the communication connections among the cloud server, the conventional monitors and the image dynamic monitoring devices;
FIG. 3 is a schematic structural diagram of the image dynamic monitoring device of the present invention;
FIG. 4 is a schematic diagram of the constructed virtual camera merged with the panoramic image to construct the virtual two-dimensional plane;
FIG. 5 is a schematic diagram of the marked person's movement track in the virtual two-dimensional plane constructed for the area of a single image dynamic monitoring device;
FIG. 6 is a schematic diagram of the marked person's movement tracks in the virtual two-dimensional planes constructed for the areas of multiple image dynamic monitoring devices.
Detailed Description
The invention will be further illustrated with reference to the following specific examples. These examples are intended to illustrate the invention and are not intended to limit its scope. In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted", "provided" and "connected" are to be interpreted broadly: for example, as a fixed connection, a detachable connection or an integral connection; as a mechanical or an electrical connection; as a direct connection or an indirect connection through an intermediate medium; or as internal communication between two elements. The specific meanings of these terms in the present invention can be understood by those skilled in the art according to the specific circumstances.
The structural features of the present invention will now be described in detail with reference to the accompanying figures 1-6 of the specification.
Referring to FIGS. 2-3, an active tracking method based on video image dynamic monitoring requires a cloud server and, in communication connection with it, a plurality of conventional monitors 1 and a plurality of automatically zooming image dynamic monitoring devices 2. The image dynamic monitoring device 2 comprises a rod-shaped body 21 and a rotating pan-tilt head 22 mounted on top of the body 21; an auto-zoom camera 23 for 360-degree video image acquisition is fixed on the pan-tilt head 22. A micro server, an embedded AI image recognition module and an embedded convolutional neural network module are arranged inside the body 21, the micro server having a CPU that supports the computation of the embedded AI image recognition module and the embedded convolutional neural network module. The embedded convolutional neural network module comprises at least an input layer, a convolutional layer, a pooling layer, a fully connected layer and an output layer.
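The patent fixes only the layer types of the embedded module, not its sizes or framework. As a rough illustration, below is a minimal sketch of such a five-layer network; PyTorch, the layer dimensions and the feature size are all assumptions.

```python
# Minimal sketch of the embedded convolutional neural network module:
# input layer -> convolutional layer -> pooling layer -> fully connected
# layer -> output layer. PyTorch and all sizes are assumptions.
import torch
import torch.nn as nn

class EmbeddedCNN(nn.Module):
    def __init__(self, num_features: int = 128):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)  # convolutional layer
        self.pool = nn.MaxPool2d(2)                             # pooling layer
        self.fc = nn.Linear(16 * 112 * 112, num_features)       # fully connected layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, 3, 224, 224) tensor fed to the input layer
        x = torch.relu(self.conv(x))
        x = self.pool(x)
        x = x.flatten(1)
        return self.fc(x)  # output layer: a feature vector such as D or D'

model = EmbeddedCNN()
features = model(torch.randn(1, 3, 224, 224))  # one image crop
print(features.shape)  # torch.Size([1, 128])
```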
Referring to fig. 1, the active tracking method includes the following specific steps:
s1, video image acquisition:
the method comprises the steps that 4 conventional monitors 1 are arranged at an entrance of a monitoring area in a circumferential matrix mode, the conventional monitors 1 are arranged on a one-way channel of the entrance of the monitoring area, video image collection is carried out on passing personnel in a fixed posture, a group of all-dimensional image information of the entering personnel is obtained, the all-dimensional image information specifically comprises information of two sides in front of the personnel and two sides in the back of the personnel, and an expanded personnel information image is synthesized through an image fusion technology.
S2, processing the video image, and judging whether the face information in the dynamic image exists in the face feature template database:
The collected set of person image information is uploaded to the cloud server, and the face features in the person image information are obtained through deep-learning training with a convolutional neural network and compared against the face feature model database; the face feature information in the face feature template database is formed through system acquisition and training. Whether the face information in the dynamic image exists in the face feature template database is judged by a set similarity threshold: if it exists, the person image information is not marked; if it does not exist, the person image information is marked, and marked-person information is generated and sent to the administrator.
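To make the decision rule concrete, below is a minimal sketch of this similarity-threshold test; the cosine-similarity measure, the threshold value and the embedding representation are assumptions, since the patent specifies none of them.

```python
# Sketch of the S2 decision rule: compare a face embedding against the face
# feature template database under a similarity threshold. The embedding
# extraction and the 0.8 threshold are assumed, not taken from the patent.
import numpy as np

SIMILARITY_THRESHOLD = 0.8  # assumed value; the patent leaves it configurable

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def exists_in_template_db(embedding: np.ndarray,
                          template_db: list[np.ndarray]) -> bool:
    """True if the face matches any stored template (person is not marked)."""
    return any(cosine_similarity(embedding, t) >= SIMILARITY_THRESHOLD
               for t in template_db)

# if not exists_in_template_db(face_embedding, template_db):
#     mark the person image information and notify the administrator
```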
S3, extracting characteristic image information of the marked person:
The cloud server performs model training on the person image information found not to exist in S2. The set of person image information is processed by an edge detection algorithm whose detection operator adopts one of the Canny, Prewitt, Sobel, LoG or Roberts operators; the image is convolved with an image extraction matrix whose convolution kernel is a 36 × 36 matrix; a large amount of feature image information D is extracted and stored in a feature information database, and the trained feature image information is sent to each image dynamic monitoring device 2. After receiving the feature image information D, each image dynamic monitoring device 2 stores it in the memory of its micro server for feature comparison and retrieval.
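A rough sketch of this processing chain follows; OpenCV's Canny stands in for the selectable detection operator, and the contents of the 36 × 36 extraction matrix are placeholder values, since the patent discloses only its size.

```python
# Sketch of S3: edge detection followed by convolution with a 36x36 image
# extraction matrix. The Canny thresholds and the matrix values are assumed.
import cv2
import numpy as np

def extract_feature_image(person_image: np.ndarray) -> np.ndarray:
    gray = cv2.cvtColor(person_image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)  # one of Canny/Prewitt/Sobel/LoG/Roberts
    # placeholder 36x36 extraction matrix (the patent does not give its values)
    kernel = np.random.default_rng(0).standard_normal((36, 36)).astype(np.float32)
    return cv2.filter2D(edges.astype(np.float32), -1, kernel)  # feature map D
```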
S4, recognizing the dynamic changes of adjacent panoramic images, and training to obtain several pieces of characteristic image information of the dynamic object:
A plurality of image dynamic monitoring devices 2 are arranged in clusters within the monitoring area, their monitoring ranges being interlinked and together covering the entire area. Each image dynamic monitoring device 2 continuously acquires video images circumferentially at its monitoring point, generating a 360-degree panoramic image after each rotation; dynamic changes between adjacent panoramic images are identified by the embedded AI image recognition algorithm, and several pieces of feature image information D' of the dynamic object are obtained through embedded convolutional neural network training.
The embedded AI image recognition algorithm monitors moving objects using a frame difference method, specifically:

$$
D_{t+1}(x,y)=\begin{cases}1, & \left|f_{t+1}(x,y)-f_{t}(x,y)\right|>T\\ 0, & \left|f_{t+1}(x,y)-f_{t}(x,y)\right|\le T\end{cases}\qquad\text{formula (1)}
$$

where $D_{t+1}(x,y)$ is the difference image between two temporally adjacent panoramic images, $f_{t}(x,y)$ is the panoramic image at time $t$, $f_{t+1}(x,y)$ is the panoramic image at time $t+1$, and $T$ is the threshold selected for binarizing the difference image; $D_{t+1}(x,y)=1$ denotes foreground, meaning the pixel is in motion, and $D_{t+1}(x,y)=0$ denotes a background pixel. When a dynamic object is detected, the grayscale panoramic image corresponding to time $t+1$ is imported into the embedded convolutional neural network module for model training, which yields the feature image information D' of the dynamic object.
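A minimal sketch of formula (1) follows, assuming OpenCV with grayscale conversion of the panoramas; the threshold T = 25 is an arbitrary placeholder.

```python
# Sketch of formula (1): binarized frame difference between two temporally
# adjacent panoramic images; nonzero pixels mark a moving (foreground) object.
import cv2
import numpy as np

def frame_difference(pano_t: np.ndarray, pano_t1: np.ndarray,
                     T: int = 25) -> np.ndarray:
    """Return D_{t+1}: 1 where |f_{t+1} - f_t| > T, else 0."""
    g_t = cv2.cvtColor(pano_t, cv2.COLOR_BGR2GRAY)
    g_t1 = cv2.cvtColor(pano_t1, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(g_t1, g_t)
    _, mask = cv2.threshold(diff, T, 1, cv2.THRESH_BINARY)
    return mask

# if frame_difference(prev_pano, cur_pano).any():
#     feed the grayscale panorama at time t+1 to the embedded CNN to obtain D'
```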
S5, comparing the characteristic image information, identifying and determining the marked person, and recording the focal-length parameter of the marked-person image:
The feature image information D' obtained by training in S4 is compared with the feature image information D transmitted in S3; when the number of matching pieces of feature image information exceeds the set threshold, the marked person is determined, and the focal-length parameter with which the image dynamic monitoring device 2 acquired the marked-person image is recorded.
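The counting rule can be sketched as follows; the per-feature similarity test and both threshold values are assumptions, as the patent states only that the match count must exceed a set threshold.

```python
# Sketch of the S5 decision: count the pieces of D' that match a piece of D
# and compare the count with a set threshold. Thresholds are assumed values.
import numpy as np

MATCH_COUNT_THRESHOLD = 10  # assumed value

def is_marked_person(features_d: list[np.ndarray],
                     features_d_prime: list[np.ndarray],
                     sim_thresh: float = 0.9) -> bool:
    def cos(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    matches = sum(1 for dp in features_d_prime
                  if any(cos(dp, d) >= sim_thresh for d in features_d))
    return matches > MATCH_COUNT_THRESHOLD
```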
S6, constructing a virtual two-dimensional plane, and outputting the marked person's actual coordinate point on the area map:
A virtual camera is constructed centered on the image dynamic monitoring device 2, and a virtual two-dimensional plane (see FIG. 4) is constructed around the virtual camera; the area ratio between the constructed virtual two-dimensional plane and the actual area map where the device 2 is located is 1:1. The radius value r of the marked person on the virtual two-dimensional plane is determined from the shooting distance L corresponding to the focal-length parameter recorded in S5. The panoramic image collected in S4 is vertically projected onto the virtual two-dimensional plane at radius r, so that the marked person in the panoramic image forms a coordinate point on the plane. The virtual two-dimensional plane is then superimposed semi-transparently on the actual area map where the device 2 is located, and the landing point of the coordinate point on the plane is the marked person's actual coordinate point A(x, y) on the area map.
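The geometry can be sketched as follows, under two assumptions the patent leaves open: the marked person's azimuth is read from their column position in the 360-degree panorama, and the focal-length-to-shooting-distance mapping is a hypothetical calibration table.

```python
# Sketch of S6: place the marked person on the 1:1 area map from the device
# position, the azimuth in the panorama and the radius r derived from the
# shooting distance L. The calibration table below is hypothetical.
import math

FOCAL_TO_DISTANCE_M = {4.0: 5.0, 8.0: 10.0, 12.0: 20.0}  # assumed: focal (mm) -> L (m)

def actual_coordinate_point(device_xy: tuple[float, float],
                            column: int, pano_width: int,
                            focal_mm: float) -> tuple[float, float]:
    """Return the marked person's actual coordinate point A(x, y)."""
    theta = 2.0 * math.pi * column / pano_width   # azimuth around the device
    r = FOCAL_TO_DISTANCE_M[focal_mm]             # radius on the virtual plane
    cx, cy = device_xy
    return (cx + r * math.cos(theta), cy + r * math.sin(theta))
```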
S7, outputting the marked person's actual movement track within the monitoring range of the image dynamic monitoring device:
According to S6, after the marked person enters the monitoring range of the image dynamic monitoring device 2, one coordinate value of the marked person is output after each 360-degree panoramic image acquisition. A series of consecutive scans by the device 2 yields a series of consecutive actual coordinate points A(x, y), and the marked person's actual coordinate trend within the device's monitoring range is output from the coordinate changes on the virtual two-dimensional plane (see FIG. 5).
S8, outputting the marked person's actual movement track within the monitoring range of the adjacent image dynamic monitoring device:
The coordinate at which the marked person appears simultaneously in two adjacent image dynamic monitoring devices 2 is taken as the associated coordinate point A'(x', y'). The adjacent device 2 executes S6 to S7, and the marked person's actual coordinate trend within the adjacent device's monitoring range is output from the coordinate changes on the two adjacent virtual two-dimensional planes (see FIG. 6).
S9, generating the marked person's path in the monitoring area map:
Taking the associated coordinate point A'(x', y') as the connection point, the marked person's actual coordinate points A(x, y) output by each image dynamic monitoring device 2 on the area map are associated, and the marked person's path is generated on the monitored-area map in chronological order, with the coordinate points linearly connected; real-time tracking of the marked person is thus completed. From the marked-person path generated in real time, security staff can grasp the movement track and direction of non-campus personnel within the monitoring area, which facilitates tracking and locating them, allows campus security staff to eliminate potential safety hazards at the first opportunity, and improves the safety of campus security.
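A sketch of this chaining step, assuming each device reports timestamped map points and that the associated coordinate point appears nearly identically in both adjacent devices' outputs:

```python
# Sketch of S9: merge per-device coordinate points into one chronological
# path, collapsing the shared associated point A'(x', y') that two adjacent
# devices report for the same instant. The 0.5 m tolerance is assumed.
def merge_marked_person_path(tracks: dict[str, list[tuple[float, float, float]]]
                             ) -> list[tuple[float, float]]:
    """tracks: device_id -> [(t, x, y), ...]; returns [(x, y), ...] in time order."""
    merged = sorted(p for pts in tracks.values() for p in pts)  # sort by time t
    path: list[tuple[float, float]] = []
    for _, x, y in merged:
        if path and abs(path[-1][0] - x) < 0.5 and abs(path[-1][1] - y) < 0.5:
            continue  # same associated coordinate point from the adjacent device
        path.append((x, y))
    return path  # render with linear connections between consecutive points
```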
According to the active tracking method based on video image dynamic monitoring, on one hand, non-campus personnel entering at the entrance of the monitoring area are identified and marked, and early-warning information is issued to remind campus security personnel; intelligent recognition thus relieves the monitoring pressure on security staff while effectively strengthening supervision. On the other hand, through the clustered layout of the image dynamic monitoring devices 2 and their acquisition of 360-degree panoramic images at each monitoring point, the monitored area is covered omnidirectionally without blind spots, eliminating monitoring dead angles in the area; since each device 2 generates one 360-degree panoramic image per rotation, only a single panoramic image needs to be recognized per rotation period, which greatly reduces the number of video images to recognize, cuts the computation required to track non-campus personnel, and improves computational efficiency. Meanwhile, using the focal-length parameter inside the camera, the invention determines the marked person's actual coordinates in the monitoring area by constructing a virtual two-dimensional plane and superimposing it on the actual area map, achieving accurate positioning of non-campus personnel; the continuous movement of the marked person's coordinate points then determines their direction of movement within the monitoring area. Finally, the globally distributed image dynamic monitoring devices complete continuous tracking of non-campus personnel across adjacent devices, avoiding loss of the marked person or discontinuous tracking, and continuous monitoring and tracking is completed without tracking equipment such as GPS; campus monitoring and tracking thus changes from passive to active, improving the reliability of campus security.
Although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that changes may be made in the embodiments and/or equivalents thereof without departing from the spirit and scope of the invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. An active tracking method based on video image dynamic monitoring, characterized by employing a cloud server, a plurality of conventional monitors (1) and a plurality of automatically zooming image dynamic monitoring devices (2), the conventional monitors (1) and the image dynamic monitoring devices (2) being in communication connection with the cloud server; the conventional monitors (1) are arranged in a circumferential matrix along the one-way road at the entrance of the monitoring area, and the image dynamic monitoring devices (2) are distributed in clusters within the monitoring area, their monitoring ranges being interlinked and together covering the entire monitoring area;
the active tracking method comprises the following specific steps:
1) acquiring a set of omnidirectional image information of each entering person, in a fixed pose, through the plurality of conventional monitors (1) at the entrance of the monitoring area;
2) uploading the collected set of person image information to the cloud server, obtaining the face features in the person image information through deep-learning training with a convolutional neural network, comparing the face features against the face feature model database, and judging by a set similarity threshold whether the face information in the dynamic image exists in the face feature template database; if it exists, the person image information is not marked; if it does not exist, the person image information is marked, and marked-person information is generated and sent to the administrator;
3) the cloud server performs model training on the person image information found not to exist in step 2): the person image information is processed by an edge detection algorithm, the image is convolved with an image extraction matrix, a large amount of feature image information D is extracted and stored in a feature information database, and the trained feature image information is sent to each image dynamic monitoring device (2);
4) each image dynamic monitoring device (2) in the monitoring area continuously acquires video images circumferentially at its monitoring point, generating a 360-degree panoramic image after each rotation; dynamic changes between adjacent panoramic images are identified by an embedded AI image recognition algorithm, and several pieces of feature image information D' of the dynamic object are obtained through embedded convolutional neural network training;
5) comparing the feature image information D' trained in step 4) with the feature image information D transmitted in step 3); when the number of matching pieces of feature image information exceeds a set threshold, the marked person is determined, and the focal-length parameter with which the image dynamic monitoring device (2) acquired the marked-person image is recorded;
6) constructing a virtual camera centered on the image dynamic monitoring device (2) and a virtual two-dimensional plane around the virtual camera; determining the radius value r of the marked person on the virtual two-dimensional plane from the shooting distance L corresponding to the focal-length parameter of step 5); vertically projecting the panoramic image acquired in step 4) onto the virtual two-dimensional plane at radius r, so that the marked person in the panoramic image forms a coordinate point on the plane; and superimposing the virtual two-dimensional plane semi-transparently on the actual area map where the image dynamic monitoring device (2) is located, the landing point of the coordinate point on the plane being the marked person's actual coordinate point A(x, y) on the area map;
7) according to step 6), after the marked person enters the monitoring range of the image dynamic monitoring device (2), outputting one coordinate value of the marked person after each 360-degree panoramic image acquisition, and outputting the marked person's actual coordinate trend within the monitoring range of the image dynamic monitoring device (2) from the coordinate changes on the virtual two-dimensional plane;
8) taking the coordinate at which the marked person appears simultaneously in two adjacent image dynamic monitoring devices (2) as the associated coordinate point A'(x', y'); the adjacent image dynamic monitoring devices (2) execute steps 6) to 7), and the marked person's actual coordinate trend within their monitoring range is output from the coordinate changes on the two adjacent virtual two-dimensional planes;
9) taking the associated coordinate point A'(x', y') as the connection point, associating the marked person's actual coordinate points A(x, y) output by each image dynamic monitoring device (2) on the area map, and generating the marked person's path on the monitored-area map in chronological order, thereby completing real-time tracking of the marked person.
2. The active tracking method based on video image dynamic monitoring according to claim 1, characterized in that there are at least four conventional monitors (1), arranged in a rectangular layout and symmetrically distributed on the two sides of the one-way road at the entrance of the monitored area, so as to achieve omnidirectional, blind-spot-free image acquisition on the entrance road.
3. The active tracking method based on video image dynamic monitoring according to claim 1, characterized in that the image dynamic monitoring device (2) comprises a rod-shaped body (21) and a rotating pan-tilt head (22) mounted on top of the body (21); an auto-zoom camera (23) for 360-degree video image acquisition is fixed on the pan-tilt head (22); a micro server, an embedded AI image recognition module and an embedded convolutional neural network module are arranged inside the body (21), the micro server having a CPU that supports the computation of the embedded AI image recognition module and the embedded convolutional neural network module.
4. The active tracking method based on video image dynamic monitoring of claim 3, characterized in that the embedded AI image recognition algorithm monitors moving objects using a frame difference method, specifically:

$$
D_{t+1}(x,y)=\begin{cases}1, & \left|f_{t+1}(x,y)-f_{t}(x,y)\right|>T\\ 0, & \left|f_{t+1}(x,y)-f_{t}(x,y)\right|\le T\end{cases}\qquad\text{formula (1)}
$$

where $D_{t+1}(x,y)$ is the difference image between two temporally adjacent panoramic images, $f_{t}(x,y)$ is the panoramic image at time $t$, $f_{t+1}(x,y)$ is the panoramic image at time $t+1$, and $T$ is the threshold selected for binarizing the difference image; $D_{t+1}(x,y)=1$ denotes foreground, meaning the pixel is in motion, and $D_{t+1}(x,y)=0$ denotes a background pixel.
5. The active tracking method based on video image dynamic monitoring of claim 3, characterized in that the embedded convolutional neural network module comprises at least an input layer, a convolutional layer, a pooling layer, a fully connected layer and an output layer.
6. The active tracking method based on video image dynamic monitoring according to claim 1, characterized in that, in step 2), the face feature information in the face feature template database is formed through system acquisition and training.
7. The active tracking method based on dynamic video image monitoring as claimed in claim 1, characterized in that, in step 3), the detection operator of the edge detection algorithm adopts one of the Canny, Prewitt, Sobel, LoG or Roberts operators.
8. The active tracking method based on video image dynamic monitoring as claimed in claim 1, characterized in that, in step 3), the convolution kernel used by the image extraction matrix to convolve the image is a 36 × 36 matrix.
9. The active tracking method based on video image dynamic monitoring as claimed in claim 1, wherein in step 6), the ratio of the area of the constructed virtual two-dimensional plane to the actual area map where the image dynamic monitoring device (2) is located is 1: 1.
10. The active tracking method based on video image dynamic monitoring as claimed in claim 1, characterized in that, in step 9), the coordinate points of the marked person are linearly connected.
CN202111292728.2A (priority date 2021-11-03, filing date 2021-11-03) Active tracking method based on video image dynamic monitoring; status: Active; granted as CN113743380B

Priority Applications (1)

Application Number Title
CN202111292728.2A Active tracking method based on video image dynamic monitoring (granted as CN113743380B)


Publications (2)

Publication Number Publication Date
CN113743380A 2021-12-03
CN113743380B 2022-02-15

Family

ID=78727221

Family Applications (1)

Application Number Title
CN202111292728.2A Active tracking method based on video image dynamic monitoring (Active; granted as CN113743380B)

Country Status (1)

CN: CN113743380B (granted)



Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101246600A (en) * 2008-03-03 2008-08-20 北京航空航天大学 Method for real-time generating reinforced reality surroundings by spherical surface panoramic camera
US20090324010A1 (en) * 2008-06-26 2009-12-31 Billy Hou Neural network-controlled automatic tracking and recognizing system and method
CN105913037A (en) * 2016-04-26 2016-08-31 广东技术师范学院 Face identification and radio frequency identification based monitoring and tracking system
CN105975956A (en) * 2016-05-30 2016-09-28 重庆大学 Infrared-panorama-pick-up-head-based abnormal behavior identification method of elderly people living alone
CN106648067A (en) * 2016-11-10 2017-05-10 四川赞星科技有限公司 Panoramic video information interaction method and system
CN106803286A (en) * 2017-01-17 2017-06-06 湖南优象科技有限公司 Mutual occlusion real-time processing method based on multi-view image
CN108055501A (en) * 2017-11-22 2018-05-18 天津市亚安科技有限公司 A kind of target detection and the video monitoring system and method for tracking
CN108682038A (en) * 2018-04-27 2018-10-19 腾讯科技(深圳)有限公司 Pose determines method, apparatus and storage medium
CN108875588A (en) * 2018-05-25 2018-11-23 武汉大学 Across camera pedestrian detection tracking based on deep learning
CN108848304A (en) * 2018-05-30 2018-11-20 深圳岚锋创视网络科技有限公司 A kind of method for tracking target of panoramic video, device and panorama camera
CN108776822A (en) * 2018-06-22 2018-11-09 腾讯科技(深圳)有限公司 Target area detection method, device, terminal and storage medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
RENE KAISER et al.: "Real-time Person Tracking in High-resolution Panoramic Video for Automated Broadcast Production", 2011 Conference for Visual Media Production *
ZHONG ZHOU et al.: "Static Object Tracking in Road Panoramic Videos", 2010 IEEE International Symposium on Multimedia *
ZHOU Xiaolong et al.: "Fish-eye video object tracking method fusing response template and feature combination", Journal of Computer-Aided Design & Computer Graphics *
ZHAO Jiwei: "Object recognition and tracking based on panoramic vision", Digital Technology & Application *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115914582A (en) * 2023-01-05 2023-04-04 百鸟数据科技(北京)有限责任公司 Small object detection optimization method based on fusion time sequence information
CN115914582B (en) * 2023-01-05 2023-04-28 百鸟数据科技(北京)有限责任公司 Small object detection optimization method based on fusion time sequence information

Also Published As

Publication number Publication date
CN113743380B 2022-02-15

Similar Documents

Publication Title
CN103942811B (en) Distributed parallel determines the method and system of characteristic target movement locus
CN109819208A (en) A kind of dense population security monitoring management method based on artificial intelligence dynamic monitoring
CN112216049A (en) Construction warning area monitoring and early warning system and method based on image recognition
CN111079600A (en) Pedestrian identification method and system with multiple cameras
CN109190508A (en) A kind of multi-cam data fusion method based on space coordinates
US20220180534A1 (en) Pedestrian tracking method, computing device, pedestrian tracking system and storage medium
CN107977656A (en) A kind of pedestrian recognition methods and system again
CN102243765A (en) Multi-camera-based multi-objective positioning tracking method and system
CN103632427B (en) A kind of gate cracking protection method and gate control system
CN112149513A (en) Industrial manufacturing site safety helmet wearing identification system and method based on deep learning
CN107241572A (en) Student's real training video frequency tracking evaluation system
CN114612823A (en) Personnel behavior monitoring method for laboratory safety management
CN111008993A (en) Method and device for tracking pedestrian across mirrors
CN112037252A (en) Eagle eye vision-based target tracking method and system
CN113743380B (en) Active tracking method based on video image dynamic monitoring
CN115188066A (en) Moving target detection system and method based on cooperative attention and multi-scale fusion
CN114511592A (en) Personnel trajectory tracking method and system based on RGBD camera and BIM system
CN107704851A (en) Character recognition method, Public Media exhibiting device, server and system
CN113963373A (en) Video image dynamic detection and tracking algorithm based system and method
CN106960193A (en) A kind of lane detection apparatus and method
CN110276379A (en) A kind of the condition of a disaster information rapid extracting method based on video image analysis
CN116912517B (en) Method and device for detecting camera view field boundary
CN115908493A (en) Community personnel track management and display method and system
CN206948499U (en) The monitoring of student's real training video frequency tracking, evaluation system
CN116358622A (en) Human-bridge mutual feedback displacement monitoring and early warning system and method based on vision technology

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant