CN108053427B - Improved multi-target tracking method, system and device based on KCF and Kalman - Google Patents

Improved multi-target tracking method, system and device based on KCF and Kalman Download PDF

Info

Publication number
CN108053427B
Authority
CN
China
Prior art keywords
target
targets
frame
tracking
detection
Prior art date
Legal status
Active
Application number
CN201711063087.7A
Other languages
Chinese (zh)
Other versions
CN108053427A (en)
Inventor
谢维信
王鑫
高志坚
Current Assignee
Shenzhen University
Original Assignee
Shenzhen University
Priority date
Filing date
Publication date
Application filed by Shenzhen University filed Critical Shenzhen University
Priority to CN201711063087.7A
Publication of CN108053427A
Application granted
Publication of CN108053427B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/277 Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/248 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/269 Analysis of motion using gradient-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an improved multi-target tracking method based on KCF and Kalman, which comprises the following steps: carrying out target detection by using a GoogLeNet network model and extracting a feature vector for each target; establishing an association matrix from the predicted position of each target in the previous frame's tracking chain on the current frame and the observed position of each target detected in the current frame, together with the overlap rate and the spatial distance of the feature vectors, and performing matching with a matching algorithm; directly updating the tracking frame and the corresponding feature vector of each tracking-chain entry that is matched successfully; carrying out local tracking with a KCF tracker for targets whose matching fails; updating the position obtained by weighted fusion of the KCF tracking result and the Kalman tracking result; and predicting the position of each target in the tracking chain in the next frame. In this way, feature vectors extracted by a CNN network are combined with KCF local tracking, the tracking effect is improved, and problems such as target occlusion and target false detection are handled well. In addition, the invention also provides a multi-target tracking system and a multi-target tracking device.

Description

Improved multi-target tracking method, system and device based on KCF and Kalman
Technical Field
The invention relates to the field of computer vision, in particular to an improved multi-target tracking method, system and device based on KCF and Kalman.
Background
With the development of computer vision technology and its wide application in the monitoring field, tracking and analysis of targets has become particularly important. The traditional tracking method is based on the Kalman filtering tracker (Kalman), but this method predicts target position information relatively poorly when the target is partially occluded, which reduces monitoring accuracy. Therefore, to meet the needs of the development of intelligent monitoring technology, a target tracking method and multi-target tracking system with high accuracy and good prediction performance are needed.
Disclosure of Invention
The invention mainly solves the technical problem of providing an improved multi-target tracking method based on KCF and Kalman, a multi-target tracking system, and a device with a storage function, which address the problems of low tracking accuracy and large computation amount.
In order to solve the technical problems, the invention adopts the technical scheme that an improved multi-target tracking method based on KCF and Kalman is provided, and the method comprises the following steps:
predicting a tracking frame of each target among the first plurality of targets in the current frame by combining a tracking chain with the detection frames corresponding to the first plurality of targets in the previous frame picture;
acquiring the tracking frames, in the current frame, corresponding to the first plurality of targets in the previous frame picture, and the detection frames of a second plurality of targets in the current frame picture;
establishing a target association matrix between the tracking frames of the first plurality of targets in the current frame and the detection frames of the second plurality of targets in the current frame;
and correcting with a target matching algorithm to obtain the actual positions corresponding to the first part of targets in the current frame.
In order to solve the technical problem, the invention adopts another technical scheme that: there is provided an apparatus having a storage function, storing program data which, when executed, implements a multi-target tracking method as described above.
In order to solve the technical problem, the invention adopts another technical scheme that: providing a multi-target tracking analysis system: comprises a processor and a memory which are electrically connected;
the processor is coupled to the memory, and when in operation, executes instructions to implement the method for multi-target tracking as described above, and stores processing results generated by the executed instructions in the memory.
The beneficial effects of the above technical scheme are: different from the prior art, the method predicts the tracking frame of each target among the first plurality of targets in the current frame by combining the tracking chain with the detection frames corresponding to the first plurality of targets in the previous frame picture; acquires the tracking frames of the first plurality of targets in the current frame and the detection frames of the second plurality of targets in the current frame picture; establishes a target association matrix between the tracking frames of the first plurality of targets and the detection frames of the second plurality of targets in the current frame; and corrects with a target matching algorithm to obtain the actual positions corresponding to the first part of targets in the current frame. In this way, a target tracking method and multi-target tracking system with high tracking accuracy and good real-time performance can be provided.
Drawings
FIG. 1 is a flow chart diagram of an embodiment of an improved multi-target tracking method based on KCF and Kalman of the present invention;
FIG. 2 is a schematic flow chart diagram of another embodiment of an improved multi-target tracking method based on KCF and Kalman in the present invention;
FIG. 3 is a schematic flow chart diagram illustrating another embodiment of an improved multi-target tracking method based on KCF and Kalman according to the present invention;
FIG. 4 is a schematic flow chart diagram illustrating another embodiment of an improved multi-target tracking method based on KCF and Kalman according to the present invention;
FIG. 5 is a schematic flow chart of another embodiment of an improved multi-target tracking method based on KCF and Kalman;
FIG. 6 is a schematic flow chart diagram illustrating another embodiment of an improved multi-target tracking method based on KCF and Kalman according to the present invention;
FIG. 7 is a schematic flow chart diagram illustrating another embodiment of an improved multi-target tracking method based on KCF and Kalman according to the present invention;
FIG. 8 is a schematic flow chart diagram illustrating one embodiment of step S243 in the embodiment provided in FIG. 7;
FIG. 9 is a schematic diagram of a motion spatiotemporal container in an embodiment of an improved multi-target tracking method based on KCF and Kalman of the present invention;
FIG. 10 is a schematic diagram of an embodiment of a multi-target tracking analysis system of the present invention;
FIG. 11 is a diagram of an embodiment of a device with storage function according to the invention.
Detailed Description
Hereinafter, exemplary embodiments of the present application will be described with reference to the accompanying drawings. Well-known functions or constructions are not described in detail for clarity or conciseness. Terms described below, which are defined in consideration of functions in the present application, may be different according to intentions or implementations of users and operators. Therefore, the terms should be defined based on the disclosure of the entire specification.
Fig. 1 is a schematic flow chart of a video monitoring method based on video structured data and deep learning according to a first embodiment of the present invention. The method comprises the following steps:
S10: The video is read.
Optionally, reading the video includes reading a real-time video captured by the camera and/or pre-recording data of the saved video. The camera for collecting the real-time video can be one of a USB camera and a network camera based on rtsp protocol stream, or other types of cameras.
In one embodiment, the read video is a video acquired by real-time shooting of a USB camera or a network camera based on rtsp protocol stream.
In another embodiment, the read video is a pre-recorded video, which is read by inputting from a local storage or an external storage device such as a usb disk, a hard disk, or a video called from a network, which is not described in detail herein.
S20: and carrying out structuring processing on the video to obtain structured data.
Optionally, performing a structuring process on the video to obtain structured data means converting the unstructured video data read in step S10 into structured data, where the structured data refers to the data that is important for subsequent analysis. Optionally, the structured data comprises at least one of the location of the object, the object category, object attributes, the object motion state, the object motion trajectory, and the object dwell time; it can be understood that the structured data may also comprise other categories of information required by the user (the person using the method or system described in the present invention). Other data is either not particularly important or can be mined from related information such as the structured data. The specific information included in the structured data depends on the requirements. How the video is processed to obtain the structured data is described in detail below.
S30: and uploading the structured data to a cloud server, and carrying out deep analysis on the structured data to obtain a preset result.
Optionally, after the video is structured in step S20, the resulting structured data is uploaded to the cloud server and stored in the storage area of the cloud server.
In one embodiment, the data obtained by the video structuring processing is directly saved in a storage area of a cloud server to retain files and also used as a database for perfecting the system.
Optionally, after the video is processed in step S20, the obtained structured data is uploaded to a cloud server, and the cloud server performs further deep analysis on the structured data.
Optionally, the cloud server performs further in-depth analysis on the structured data uploaded from each monitoring node, wherein the in-depth analysis includes target trajectory analysis and target traffic analysis or other required analysis, and the target includes at least one of a person, a vehicle, an animal and the like.
In an embodiment, the cloud server further deeply analyzes the structured data uploaded from each monitoring node, namely, trajectory analysis, and further determines whether the target is suspicious, whether the target is retained in a certain area for a long time, and whether abnormal behaviors such as area intrusion occur or not according to the rule of the trajectory of the uploaded target and the residence time of the scene.
In another embodiment, the cloud server further analyzes the structured data uploaded from each monitoring node in a deep manner to obtain target traffic analysis, and performs statistics on a target appearing at a certain monitoring point according to the structured data uploaded from each monitoring point, and obtains the traffic of the target in each time period of the monitoring node through the statistics. The target may be a pedestrian or a vehicle, and the peak time or the low peak time of the target flow rate may be obtained. The target flow related data is calculated to reasonably prompt pedestrians and drivers, avoid traffic rush hours and provide reference basis for public resources such as illumination.
According to the method, the video is subjected to structuring processing to obtain the structured data which is critical to deep analysis, and then only the structured data is uploaded to the cloud instead of transmitting the whole video to the cloud, so that the problems of high network transmission pressure and high data flow cost are solved.
In an embodiment, according to a preset setting, when each monitoring node uploads the structured data processed by the video processing system to the cloud server, the cloud server performs in-depth analysis on the structured data after storing the structured data.
In another embodiment, when each monitoring node uploads the structured data processed by the video processing system to the cloud server, the server needs the user to select whether to perform deep analysis after saving the structured data.
In yet another embodiment, when the user needs, the structured data that has completed one in-depth analysis at the time of initial upload can be re-analyzed again for the set in-depth analysis.
Optionally, the deep analysis of the structured data uploaded by each monitoring node further includes: and counting and analyzing the structured data to obtain the behavior types and abnormal behaviors of one or more targets, alarming the abnormal behaviors and the like, or analyzing and processing the contents required by other users.
With respect to how the video structured data is processed to obtain the structured data, the following elaborates that the present application also provides a method for video structured processing based on target behavior attributes. In one embodiment, the video structured data processing is an intelligent analysis module embedded with deep learning object detection and recognition algorithm, multi-object tracking algorithm, abnormal behavior recognition based on moving optical flow features, and other algorithms, and converts the unstructured video data read in step S10 into structured data.
Referring to fig. 2, a flowchart of an embodiment of a video processing method provided by the present application is shown, where the step S20 of the above embodiment includes steps S22 to S23.
S22: and carrying out target detection and identification on the single-frame picture.
Optionally, step S22 is to perform object detection and recognition on all objects in the single-frame picture. The target detection and identification object comprises pedestrian detection and identification, vehicle detection and identification, animal detection and identification and the like.
Optionally, the step S22 of performing target detection and identification on the single frame picture includes: and extracting the characteristic information of the target in the single-frame picture. Feature information of all objects, categories of the objects, position information of the objects and the like are extracted from the single-frame picture, wherein the objects can be pedestrians, vehicles, animals and the like.
In an embodiment, when only pedestrians are contained in a single-frame picture, the target detection identification is detection identification of the pedestrians, that is, feature information of all the pedestrians in the picture is extracted.
In another embodiment, when multiple types of objects such as pedestrians, vehicles, etc. are contained in the single-frame picture, the object detection identification is to perform detection identification on multiple types of pedestrians, vehicles, etc., that is, to extract feature information of pedestrians, vehicles, etc. in the single-frame picture, it can be understood that the type of object identification can be specified by the user's specification.
Optionally, the algorithm used in step S22 for performing target detection and identification on the single-frame picture is an optimized target detection algorithm based on deep learning. Specifically, the YOLOv2 deep learning target detection framework can be used for target detection and identification; the core of the algorithm is to use the whole image as the network input and directly regress the position of the bounding box and the category to which it belongs in the output layer.
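As an illustration of this detection step only, the following sketch runs a YOLOv2-style Darknet model through OpenCV's DNN module; the configuration and weight file names and the confidence and NMS thresholds are assumptions for illustration, not values given by the patent.

```python
import cv2
import numpy as np

# Assumed file names for a Darknet YOLOv2 model; adjust to the actual model used.
net = cv2.dnn.readNetFromDarknet("yolov2.cfg", "yolov2.weights")

def detect(frame, conf_thr=0.5, nms_thr=0.4):
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    out = net.forward(net.getUnconnectedOutLayersNames())[0]  # rows: [cx, cy, bw, bh, objectness, class scores...]
    boxes, scores, class_ids = [], [], []
    for row in out:
        class_scores = row[5:]
        cid = int(np.argmax(class_scores))
        conf = float(class_scores[cid])
        if conf < conf_thr:
            continue
        cx, cy, bw, bh = row[0] * w, row[1] * h, row[2] * w, row[3] * h
        boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
        scores.append(conf)
        class_ids.append(cid)
    keep = cv2.dnn.NMSBoxes(boxes, scores, conf_thr, nms_thr)
    return [(boxes[i], class_ids[i], scores[i]) for i in np.array(keep).flatten()]
```

The returned boxes are in (x, y, w, h) pixel coordinates, which is the form used later for the tracking and detection frames.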
Optionally, the target detection is composed of two parts of model training and model testing.
In one embodiment, in the aspect of model training, 50% of the pedestrian or vehicle images from the VOC data set and the COCO data set are taken, and the remaining 50% of the data are taken from real street, indoor aisle, square, etc. monitoring data. It can be understood that the ratio of the data in the common data set (VOC data set and COCO data set) and the data in the real monitoring data set used in the model training can be adjusted as required, wherein when the ratio of the data in the common data set is higher, the accuracy of the obtained data model in the real monitoring scene is relatively poor, and conversely, when the ratio of the data in the real monitoring data set is higher, the accuracy is relatively improved.
Optionally, in an embodiment, after the target is detected in the single-frame picture in step S22, the pedestrian target is placed in a tracking queue (hereinafter also referred to as a tracking chain), and then a target tracking algorithm is further used to perform preset tracking and analysis on the target.
Optionally, the step of extracting the feature information of the target in the single-frame picture further includes: a metadata structure is constructed. Optionally, the feature information of the target is extracted according to a metadata structure, that is, the feature information of the target in the single-frame picture is extracted according to the metadata structure.
In one embodiment, the metadata structure includes basic attribute units for pedestrians, such as: the system comprises at least one of a camera address, the time of the target entering and exiting the camera, track information of the target at the current monitoring node, the color worn by the target or a screenshot of the target. For example, a pedestrian's metadata structure may be seen in table 1 below, where the metadata structure may also include information needed by other users but not included in the table below.
Optionally, in an embodiment, in order to save resources of network transmission, the metadata structure only includes some basic attribute information, and other attributes may be obtained by mining and calculating related information such as a target trajectory.
TABLE 1 Pedestrian metadata structure

Attribute name | Type | Description
Camera ID | short | Camera node number
Target appearance time | long | Time at which the target enters the monitoring node
Target departure time | long | Time at which the target leaves the monitoring node
Target motion trajectory | point | Motion trail of the target at the current node
Object ID | short | Object ID identification number
Target jacket color | short | One of 10 predefined colors
Target pant color | short | One of 5 predefined colors
Target whole-body screenshot | image | Records the whole-body screenshot of the target
Target head-and-shoulder screenshot | image | Records the head screenshot of the target
In another embodiment, the metadata structure may further include basic attribute information of the vehicle, such as: the camera address, the time of the target entering and exiting the camera, the track information of the target at the current monitoring node, the appearance color of the target, the license plate number of the target or the screenshot of the target.
It is understood that the definition of the information specifically included in the metadata structure and the data type of the metadata may be initially set as needed, or may be specific attribute information that needs to be acquired, which is specifically specified in a set plurality of information according to the needs of a user after the initial setting.
In an embodiment, the structure of the metadata initially sets the category of a camera address, time for the target to enter and exit the camera, track information of the target at the current monitoring node, color worn by the target, or a screenshot of the target, and the like, and when identifying the target, the user can particularly specify the time for obtaining the target to enter and exit the camera according to the needs of the user.
In an embodiment, when the target in the single frame picture is a pedestrian, extracting feature information of the pedestrian according to a preset structure of metadata of the pedestrian, that is, extracting at least one of time when the pedestrian enters or exits the camera, a current camera address where the pedestrian is located, time when the pedestrian enters or exits the camera, trajectory information of the pedestrian at a current monitoring node, a color worn by the pedestrian, or a current screenshot of the pedestrian, or according to other target attribute information specifically specified by a user, such as time when the pedestrian enters or exits the camera, a wearing color of the pedestrian, and the like.
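For illustration only, the pedestrian metadata structure of Table 1 could be held in memory as follows; the field names are a direct transcription of the table, while the concrete Python types chosen for the "point" and "image" fields are assumptions.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

Point = Tuple[int, int]  # (x, y) trajectory point; assumed representation of the "point" type

@dataclass
class PedestrianMetadata:
    camera_id: int                                   # camera node number ("short" in Table 1)
    target_id: int                                   # object ID identification number
    appear_time: int                                 # entry time at the monitoring node ("long", e.g. a Unix timestamp)
    leave_time: Optional[int] = None                 # departure time, unknown until the target leaves
    trajectory: List[Point] = field(default_factory=list)  # motion trail at the current node
    jacket_color: Optional[int] = None               # index into 10 predefined colors
    pant_color: Optional[int] = None                 # index into 5 predefined colors
    full_snapshot: Optional[bytes] = None            # encoded whole-body screenshot ("image")
    head_shoulder_snapshot: Optional[bytes] = None   # encoded head-and-shoulder screenshot
```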
Alternatively, when an object is detected and recognized from a single-frame picture, an image of the object is cut out from the original video frame while its feature information is acquired, and model training is performed using a framework based on YOLOv2 (YOLOv2 is a deep-learning-based object detection and recognition method proposed by Joseph Redmon in 2016).
In one embodiment, when the detected object is a pedestrian when the single-frame picture is subjected to object detection, an image of the detected pedestrian is cut out from the original video frame, then the pedestrian is subjected to part segmentation by using a frame training model based on yolov2, clothes color information of the upper and lower body parts is judged, and the head and shoulder picture of the pedestrian is cut out.
In another embodiment, when the detected target is a vehicle when the single-frame picture is subjected to target detection, an image of the detected vehicle is cut out from an original video frame, then a detection model of the vehicle is trained by using a frame based on yolov2 to perform detection and identification on the vehicle, judge the appearance color of the vehicle body, identify the license plate information, and cut out the picture of the vehicle. It is understood that since the identified target category can be selected by user settings, the detection identification of the vehicle is decided by the administrator whether or not to proceed.
In another embodiment, when the detected object is an animal when the single frame picture is subjected to object detection, an image of the detected animal is cut out from the original video frame, then the animal is detected and identified by using a detection model of the animal trained based on the yolov2 framework, the appearance color, the variety and other information of the animal are judged, and the picture of the animal is cut out. It will be appreciated that the detection of the animal is determined by the user whether or not to proceed, since the target type of identification can be selected by the user setting.
Optionally, each single frame of picture identified by target detection may be one, or multiple single frames of pictures may be performed simultaneously.
In an embodiment, the single-frame picture for performing the target detection and identification each time is one, that is, only the target in one single-frame picture is subjected to the target detection and identification each time.
In another embodiment, the target detection and identification can be performed on multiple pictures at a time, that is, the target detection and identification can be performed on the targets in multiple single-frame pictures at the same time.
Optionally, ID (identity) labeling is performed on the targets detected by the model trained on the YOLOv2-based framework, to facilitate association during subsequent tracking. The ID numbers of the different object categories may be preset, and the upper limit of the ID numbers is set by the user.
Alternatively, the ID labeling may be performed automatically on the detected and identified object, or may be performed manually.
In one embodiment, the detected and identified objects are labeled, where the labeled ID numbers differ according to the type of the detected object; for example, the ID number of a pedestrian can be set as number + number, of a vehicle as capital letter + number, and of an animal as lowercase letter + number, to facilitate association during subsequent tracking. The rule can be set according to the habit and preference of the user, and is not described in detail here.
In another embodiment, the detected and identified objects are labeled, wherein the intervals to which the labeled ID numbers of the objects belong differ depending on the category of the detected object. For example, the ID number of the detected pedestrian object is set in the section 1 to 1000000, and the ID number of the detected vehicle object is set in the section 1000001 to 2000000. Specifically, the setting can be determined by the initial setting personnel, and the adjustment and the change can be carried out according to the requirement.
Alternatively, ID labeling of the detected target may be performed automatically by the system by presetting, or may be performed manually by the user.
In one embodiment, when an object is detected in a single frame picture that identifies a pedestrian or a vehicle, the system automatically labels the detected object by its category and then automatically labels the ID number that has been previously labeled.
In another embodiment, the user manually ID labels objects in the picture. The ID labeling can be carried out on the single-frame picture targets which do not pass through the system automatic ID labeling, or the ID labeling can be carried out by the user independently on the missed targets or other targets outside the preset detection target types.
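A minimal sketch of the interval-based automatic ID numbering described above: the pedestrian and vehicle intervals follow the example given in the text, while the animal interval is an illustrative assumption added for symmetry.

```python
from itertools import count

# Per-category ID counters; pedestrian and vehicle ranges follow the example
# in the text, the animal range is an assumed third interval.
_ID_COUNTERS = {
    "pedestrian": count(1),        # IDs 1 .. 1000000
    "vehicle": count(1000001),     # IDs 1000001 .. 2000000
    "animal": count(2000001),      # assumed interval
}
_ID_LIMITS = {"pedestrian": 1000000, "vehicle": 2000000, "animal": 3000000}

def next_id(category: str) -> int:
    """Return the next free ID for a detected target of the given category."""
    new_id = next(_ID_COUNTERS[category])
    if new_id > _ID_LIMITS[category]:
        raise ValueError(f"ID interval exhausted for category {category!r}")
    return new_id
```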
Optionally, referring to fig. 3, in an embodiment, before performing the target detection and identification on the single-frame picture in step S22, the method further includes:
S21: The video is sliced into single-frame pictures.
Alternatively, the step of cutting the video into single-frame pictures is to cut the video read in the step S10 into single-frame pictures, in preparation for the step S22.
Optionally, in an embodiment, the step of segmenting the video into single-frame pictures is segmenting the video read in step S10 into frames with equal intervals or frames with unequal intervals.
In one embodiment, the step of dividing the video into single-frame pictures is to divide the video read in step S10 with equally spaced frame skipping, where the number of skipped frames is the same each time, i.e. the same number of frames is skipped at equal intervals when cutting out single-frame pictures; the skipped frames are frames that do not contain important information, i.e. frames that can be ignored. For example, with 1 frame skipped at each equal interval, the t-th frame, the (t+2)-th frame and the (t+4)-th frame are taken, and the skipped frames are the (t+1)-th frame and the (t+3)-th frame; the skipped frames are frames judged not to contain important information, or frames that coincide, or largely coincide, with the frames that are taken.
In another embodiment, the step of dividing the video into single-frame pictures is to divide the video read in step S10 with unequally spaced frame skipping, that is, the number of skipped frames may differ each time; the skipped frames are frames that do not contain important information, i.e. frames that can be ignored, a frame being skipped only when it is judged to be genuinely unimportant. For example, with unequally spaced frame skipping, the t-th frame is taken, then 2 frames are skipped and the (t+3)-th frame is taken, then 1 frame is skipped and the (t+5)-th frame is taken, and then 3 frames are skipped and the (t+9)-th frame is taken; the skipped frames include the (t+1)-th, (t+2)-th, (t+4)-th, (t+6)-th, (t+7)-th and (t+8)-th frames, which are the frames judged not to contain the information required by the analysis.
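The equally spaced frame skipping described above could be implemented roughly as follows with OpenCV; the skip value of 1 frame mirrors the example, and treating every skipped frame as unimportant is a simplification of the judgement described in the text.

```python
import cv2

def split_video(path: str, skip: int = 1):
    """Yield (frame_index, frame) pairs, skipping `skip` frames between picks."""
    cap = cv2.VideoCapture(path)
    index = 0
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if index % (skip + 1) == 0:   # keep frames t, t+skip+1, t+2*(skip+1), ...
                yield index, frame
            index += 1
    finally:
        cap.release()

# Example (assumed local file and hypothetical downstream handler):
# for i, frame in split_video("monitoring.mp4", skip=1):
#     process_single_frame(i, frame)
```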
In different embodiments, the step of cutting the video into single-frame pictures may be that the system automatically cuts the read video into single-frame pictures, or the user selects whether to cut the video into single-frame pictures, or the user manually inputs the single-frame pictures that have been cut in advance.
Optionally, in an embodiment, after the step of segmenting the video into the single-frame pictures is completed, that is, when the step of segmenting the read-in video into the single-frame pictures is completed, step S22 is automatically performed on the segmented single-frame pictures, that is, the target detection and identification are performed on the segmented single-frame pictures, or the user selects and determines whether to perform the target detection and identification described in step S22 on the segmented single-frame pictures.
Optionally, in the process of detecting and identifying the targets, statistical calculation is performed on the values of the detection and identification of each target according to a certain rule.
In one embodiment, after step S22, for each detected object the total number of frames in which it appears at the current monitoring node is counted, together with the number of frames in which each attribute value is detected (for example, the number of frames in which value A is detected, the number of frames in which value B is detected, and so on; there may be one or more detected values, depending on the detection results), and the statistics are saved for later use.
Alternatively, the correction method is mainly divided into trajectory correction and target attribute correction.
Optionally, after the structured data of each target is obtained through target detection, the obtained structured data is corrected. That is, the false detection data in the structured data is corrected, the correction is performed according to the weight ratio, the data value of the probability of the majority is the accurate value, and the data value of the minority result is the false detection value.
In an embodiment, after the statistical calculation (calling up the statistical result), it is found that the object detected and identified in step S22 appears at the current monitoring node for 200 frames, of which 180 frames detect the jacket color of the object as red and 20 frames detect it as black; voting is performed according to the weight ratio, the accurate value of the attribute is finally corrected to red, the corresponding value in the structured data is modified to red, and the correction is completed.
Optionally, the trajectory correction is specifically as follows: assuming that a target appears for a period of T frames in a certain monitoring scene, a set of trajectory points G = {p1, p2, ..., pN} is obtained; the mean value and the deviation of the trajectory points on the X axis and the Y axis are calculated, and abnormal and noisy trajectory points are then eliminated.

[Formula images not reproduced: the per-axis mean and deviation of the trajectory points.]
in one embodiment, track points with small deviation or average value are eliminated in the track correction, and noise point interference is reduced.
Optionally, the target attribute correction is specifically as follows: the attribute values of the same target are corrected based on a weighted decision method. Let the jacket color label of a certain target be label = {"red", "black", "white", ...}, i.e. a certain attribute value has T classifications. First, it is converted into a digital code L = [m1, m2, m3, ..., mT]; then the code value mx with the highest frequency and its frequency F are obtained, and finally the attribute value Y (the accurate value) of the target is output directly. The specific expressions are as follows:

[Formula image not reproduced: selection of the most frequent code value mx.]

F = T − ||M − mx||0

Y = label[mx]

The above formulas must satisfy the following condition:

[Formula image not reproduced: the constraint on mx and F.]
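Since the formula images are not reproduced above, the sketch below only illustrates one plausible reading of this correction step: a per-axis mean and standard deviation for rejecting outlying trajectory points, and a simple majority (mode) vote for the attribute value. The 2-sigma rejection threshold is an assumption, not a value from the patent.

```python
from collections import Counter
from statistics import mean, pstdev
from typing import List, Tuple

Point = Tuple[float, float]

def correct_trajectory(points: List[Point], k: float = 2.0) -> List[Point]:
    """Drop trajectory points deviating more than k standard deviations per axis."""
    xs, ys = [p[0] for p in points], [p[1] for p in points]
    mx, my = mean(xs), mean(ys)
    sx, sy = pstdev(xs) or 1e-9, pstdev(ys) or 1e-9
    return [p for p in points
            if abs(p[0] - mx) <= k * sx and abs(p[1] - my) <= k * sy]

def correct_attribute(observations: List[str]) -> str:
    """Return the most frequent attribute value observed across the frames."""
    value, _freq = Counter(observations).most_common(1)[0]
    return value

# Example: 180 frames say "red", 20 say "black" -> corrected value is "red".
print(correct_attribute(["red"] * 180 + ["black"] * 20))
```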
optionally, in an embodiment, the present invention combines the YOLO target detection framework to perform target recognition and positioning, and uses the google lenet network to extract the feature vector of each target, so as to facilitate subsequent target matching. Google lenet is a 22-layer deep CNN neural network proposed by Google corporation in 2014, which is widely used in the fields of image classification, recognition and the like. The feature vectors extracted by the deep-level deep learning network have better robustness and differentiability, so that the accuracy of follow-up target tracking can be better improved by the steps.
S23: and tracking the target to obtain a tracking result.
Optionally, in the step of tracking the detected target to obtain a tracking result, the tracked target is the target detected in step S22 or another target specifically designated by the user, and step S23 further includes: and tracking the target, and recording the time when the target enters or leaves the monitoring node and each position where the target passes to obtain the motion track of the target. The application provides an improved multi-target tracking method based on KCF and Kalman, and details will be described below.
In another embodiment, the video processing method provided by the present application further includes step S24 on the basis that the above embodiment includes steps S21, S22, and S23, or the embodiment includes only steps S21, S22, and S24, see fig. 4 and 5. Step S24 is as follows:
s24: and detecting abnormal behaviors of the target.
Alternatively, step S24 is an operation of performing abnormal behavior detection on the target detected and identified in the above-described step S21.
Optionally, the abnormal behavior detection includes pedestrian abnormal behavior detection and vehicle abnormal behavior detection, where abnormal behavior of pedestrians includes running, fighting and harassment, and abnormal behavior of vehicles includes collision, overspeed, and the like.
The video is processed by the method to obtain important data, so that overlarge data volume can be avoided, and the pressure of network transmission is greatly reduced.
In one embodiment, when abnormal behavior detection is performed on the pedestrian targets detected in step S21 and running is determined for a preset number of people or more at a monitoring node, crowd disturbance may be determined to have occurred. For example, it may be set that when running abnormality is determined in step S24 for 10 or more people, crowd disturbance is determined; in other embodiments, the threshold number of people for determining a disturbance depends on the specific situation.
In another embodiment, it may be set that when it is determined in step S24 that collision abnormality occurs in 2 vehicles, it may be determined that a traffic accident occurs, and when it is determined in step S24 that collision abnormality occurs in more than 3 vehicles, it may be determined that a major car accident occurs. It will be appreciated that the number of vehicles determined may be adjusted as desired.
In another embodiment, when the speed of the vehicle is detected to exceed the preset speed value in step S24, the vehicle may be determined to be an overspeed vehicle, and the corresponding video of the vehicle may be stored in a screenshot form to identify the vehicle. Wherein the information of the vehicle includes a license plate number.
Optionally, in an embodiment, when the abnormal behavior is detected in step S24, the monitoring node performs an audible and visual alarm process.
In one embodiment, the content of the audible and visual alarm includes broadcasting voice prompts, for example "Please do not crowd, pay attention to safety!" or other predetermined voice prompt content; the audible and visual alarm content also includes turning on the warning lamp corresponding to the monitoring node to remind passing people and vehicles to pay attention to safety.
Optionally, the severity level of the abnormal behavior is set according to the number of people involved, and different levels correspond to different emergency measures. The severity level of abnormal behavior may be divided into yellow, orange and red. The emergency measure corresponding to a yellow-level abnormality is an audible and visual alarm; the measure corresponding to an orange-level abnormality is an audible and visual alarm while also notifying the security personnel responsible for the monitoring point; and the measure for a red-level abnormality is an audible and visual alarm while notifying the responsible security personnel and reporting to the police online.
In one embodiment, when the number of people exhibiting abnormal behavior is 3 or fewer, the event is set to the yellow level; when the number is more than 3 and at most 5, to the orange level; and when the number exceeds 5, to the red level. The specific thresholds can be adjusted according to actual needs and are not described in detail here.
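The three-level scheme in this embodiment could be expressed as a simple threshold mapping; the thresholds of 3 and 5 people follow the example above and are, as stated there, adjustable.

```python
def severity_level(num_people: int) -> str:
    """Map the number of people showing abnormal behavior to an alarm level."""
    if num_people <= 3:
        return "yellow"   # audible and visual alarm only
    if num_people <= 5:
        return "orange"   # alarm + notify on-site security personnel
    return "red"          # alarm + notify security + online police report
```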
Optionally, in an embodiment, the step of detecting the abnormal behavior of the target further includes the following steps: and if the abnormal behavior is detected, storing the screenshot of the current video frame image, packaging the screenshot and the characteristic information of the target with the detected abnormal behavior, and sending the characteristic information to the cloud server.
Optionally, the corresponding characteristic information of the target in which the abnormal behavior occurs may include: the camera ID, the type of the abnormal event, the occurrence of the abnormal behavior, the screenshot of the abnormal behavior, etc., and may also include other types of information as needed. The information contained in the metadata structure of the abnormal behavior sent to the cloud server includes the structure in table 2 below, and may also include other types of information.
TABLE 2 Metadata structure for abnormal behavior

Attribute name | Data type | Description
Camera ID | short | Unique ID of the camera
Abnormal event type | short | Two predefined abnormal behavior types
Abnormality occurrence time | long | Time at which the abnormal situation occurred
Abnormal situation screenshot | image | Screenshot recording the abnormal behavior
In one embodiment, when the abnormal behavior of the target is detected, if the abnormal behavior that a pedestrian sends a frame is detected, the corresponding screenshot of the current video frame image is stored, and the screenshot and the structured data corresponding to the target with the abnormal behavior are packaged and sent to the cloud server. And when the screenshot of the detected abnormal behavior is sent to the cloud server, the monitoring node performs sound-light alarm processing and starts corresponding emergency measures according to the grade of the abnormal behavior.
In another embodiment, when abnormal behaviors of a target are detected and crowd disturbance is detected, the current video frame image screenshot is stored and sent to the cloud server for further processing by the cloud server, and meanwhile, the monitoring node performs sound-light alarm and starts corresponding emergency measures according to the level of the abnormal behaviors.
Specifically, in an embodiment, the step of detecting the abnormal behavior of the target includes: and extracting optical flow motion information of a plurality of feature points of one or more targets, and carrying out clustering and abnormal behavior detection according to the optical flow motion information. Based on the above, the present application also provides an abnormal behavior detection method based on the clustered optical flow features, which will be described in detail below.
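Purely as an illustration of "optical flow motion information of feature points", the sketch below tracks Shi-Tomasi corners inside a target box with pyramidal Lucas-Kanade optical flow and returns per-point displacement vectors; the clustering and the abnormality decision rules described above are not specified here and are left to downstream code.

```python
import cv2
import numpy as np

def point_flows(prev_gray, curr_gray, box):
    """Return (point, displacement) pairs for corners tracked inside box = (x, y, w, h)."""
    x, y, w, h = (int(v) for v in box)
    mask = np.zeros_like(prev_gray)
    mask[y:y + h, x:x + w] = 255
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50, qualityLevel=0.01,
                                  minDistance=5, mask=mask)
    if pts is None:
        return []
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    flows = []
    for p0, p1, ok in zip(pts.reshape(-1, 2), nxt.reshape(-1, 2), status.reshape(-1)):
        if ok:
            flows.append((tuple(p0), tuple(p1 - p0)))   # start point and motion vector
    return flows
```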
Referring to fig. 6, a flow chart of an embodiment of an improved multi-target tracking method based on KCF and Kalman provided by the present application is also shown, and the method is also step S23 in the above embodiment, and specifically includes steps S231 to S234. The method specifically comprises the following steps:
s231: and predicting a tracking frame of each target in the first plurality of targets in the current frame by combining the tracking chain and the detection frames corresponding to the first plurality of targets in the picture of the previous frame.
Optionally, the tracking chain is calculated according to tracking of multiple targets in all single-frame pictures or partial continuous single-frame pictures segmented from the video before the current frame picture, and track information and empirical values of multiple targets in all previous pictures are collected.
In one embodiment, the tracking chain is calculated from the target tracking of all pictures before the current frame picture, and includes all the information of all the targets in all the pictures before the current frame picture.
In another embodiment, the tracking chain is calculated from target tracking of a partial sequence of consecutive pictures preceding the current frame picture. The more consecutive pictures included in the tracking calculation, the higher the accuracy of the prediction.
Optionally, in combination with feature information of the objects in the tracking chain and according to a detection frame corresponding to the first plurality of objects in the previous frame picture, a tracking frame of the tracked first plurality of objects in the current frame picture is predicted, for example, a position where the first plurality of objects may appear in the current frame is predicted.
In an embodiment, the above steps may predict the positions of the tracking frames of the first plurality of targets in the current frame, that is, obtain the predicted values of the first plurality of targets.
In another embodiment, the above steps may predict the position of the tracking frame of the first plurality of targets in a frame next to the current frame. And the predicted positions of the first plurality of targets in the tracking frame of the next frame of the current frame are slightly larger than the error of the predicted positions of the first plurality of targets in the tracking frame of the current frame.
Optionally, the first plurality of targets refers to all detected targets in the last frame of picture.
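As an illustration of this prediction step, a constant-velocity Kalman filter over the box centre can produce the predicted tracking-frame position for the current frame; the exact state model used by the patent is not given here, so the 4-state centre/velocity model and the noise covariances below are assumptions.

```python
import numpy as np
import cv2

def make_kalman(cx: float, cy: float) -> cv2.KalmanFilter:
    """Constant-velocity Kalman filter over the box centre (state: x, y, vx, vy)."""
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], dtype=np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], dtype=np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
    kf.statePost = np.array([[cx], [cy], [0], [0]], dtype=np.float32)
    return kf

# Predicting the centre of a target's tracking frame in the current frame:
kf = make_kalman(cx=100.0, cy=80.0)
pred = kf.predict()                       # predicted state before seeing the detection
print(float(pred[0, 0]), float(pred[1, 0]))
```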
S232: and acquiring a tracking frame corresponding to the first plurality of targets in the previous frame of picture in the current frame and a detection frame of the second plurality of targets in the current frame of picture.
Specifically, the second plurality of targets refers to all detected targets in the current frame picture.
Optionally, a tracking frame of the first plurality of targets in the previous frame picture in the current frame and a detection frame of the second plurality of targets in the current frame picture are obtained. Where the tracking box is a rectangular box, or other shaped box, that includes one or more objects in the box, in predicting where the first plurality of objects will appear in the current frame.
Optionally, when a tracking frame corresponding to the first plurality of targets in the previous frame of picture in the current frame and a detection frame of the second plurality of targets in the current frame of picture are obtained, the obtained tracking frame and detection frame include feature information of the targets corresponding to the tracking frame and the detection frame, respectively. Such as location information, color features, texture features, etc. of the object. Optionally, the corresponding feature information may be set by the user as needed.
S233: and establishing a target incidence matrix of a tracking frame of the first plurality of targets in the current frame and a detection frame of the second plurality of targets in the current frame.
Optionally, the target association matrix is established according to the tracking frame corresponding to the first plurality of targets in the previous frame of picture acquired in step S232 in the current frame and the detection frame corresponding to the second plurality of targets detected in the current frame of picture.
In one embodiment, for example, if the number of the first plurality of targets in the previous frame picture is N and the number of targets detected in the current frame is M, a target association matrix W of size M × N is established, where

W = [A(i, j)], 0 < i ≤ M, 0 < j ≤ N.

A(i, j) is determined by dist(i, j), IOU(i, j) and m(i, j). [Formula image not reproduced: the expression for A(i, j) in terms of d(i, j), IOU(i, j) and m(i, j).]

Here IW and Ih are the width and height of the image frame; dist(i, j) is the centroid distance between the tracking frame predicted for the current frame by the j-th target in the tracking chain obtained from the previous frame and the detection frame of the i-th target detected and identified in the current frame; d(i, j) is this centroid distance normalized by half the diagonal length of the image frame,

d(i, j) = dist(i, j) / (0.5 · sqrt(IW² + Ih²));

and m(i, j) is the Euclidean distance between the feature vectors of the two targets.

Feature vectors extracted with the GoogLeNet network, a CNN framework model, are more robust and more discriminative than traditional hand-crafted features. The purpose of the normalization is to ensure that d(i, j) and IOU(i, j) have a consistent influence on A(i, j). IOU(i, j) denotes the overlap rate between the tracking frame predicted in the current frame for the j-th target in the tracking chain of the previous frame and the detection frame of the i-th target detected and identified in the current frame, i.e. the intersection of the tracking frame and the detection frame divided by their union:

IOU(i, j) = area(Tj ∩ Di) / area(Tj ∪ Di),

where Tj is the predicted tracking frame and Di is the detection frame. Optionally, the value range of IOU(i, j) is 0 ≤ IOU(i, j) ≤ 1, and the larger the value, the larger the overlap between the tracking frame and the detection frame.
In one embodiment, when the target is stationary, the centroid positions of the same target detected in two consecutive frames should be at the same point or deviate only slightly, so the value of IOU should be approximately 1 and d(i, j) should tend to 0, so the corresponding entry A(i, j) indicates a likely match; when the targets match, the value of m(i, j) is also small, so the probability that the target with ID j in the tracking chain and the detected target with ID i are matched successfully is high. If the detection frames of the same target in two consecutive frames are far apart and do not overlap, the IOU is 0 and the values of d(i, j) and m(i, j) are large, so the probability that the target with ID j in the tracking chain and the detected target with ID i are matched successfully is small.
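Because the exact expression for A(i, j) is not reproduced above, the sketch below builds the association matrix from the three quantities the text names (IOU, normalized centroid distance, feature-vector distance) and combines them with an illustrative score IOU − d − m; the combination and any weights are assumptions, not the patent's formula.

```python
import numpy as np

def iou(box_a, box_b):
    """Boxes are (x, y, w, h); returns intersection over union in [0, 1]."""
    ax1, ay1, ax2, ay2 = box_a[0], box_a[1], box_a[0] + box_a[2], box_a[1] + box_a[3]
    bx1, by1, bx2, by2 = box_b[0], box_b[1], box_b[0] + box_b[2], box_b[1] + box_b[3]
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union > 0 else 0.0

def association_matrix(track_boxes, track_feats, det_boxes, det_feats, img_w, img_h):
    """W[i, j]: score between detection i and tracking-chain target j (higher = more likely the same target)."""
    half_diag = 0.5 * np.hypot(img_w, img_h)
    W = np.zeros((len(det_boxes), len(track_boxes)), dtype=np.float32)
    for i, (db, df) in enumerate(zip(det_boxes, det_feats)):
        for j, (tb, tf) in enumerate(zip(track_boxes, track_feats)):
            dc = np.hypot(db[0] + db[2] / 2 - (tb[0] + tb[2] / 2),
                          db[1] + db[3] / 2 - (tb[1] + tb[3] / 2))
            d = dc / half_diag                                           # normalized centroid distance
            m = float(np.linalg.norm(np.asarray(df) - np.asarray(tf)))   # feature-vector Euclidean distance
            W[i, j] = iou(db, tb) - d - m                                # illustrative combination only
    return W
```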
Optionally, the establishment of the target association matrix refers to the centroid distance, the IOU, and the Euclidean distance of the feature vectors of the targets, and may also refer to other feature information of the targets, such as color features and texture features. It can be understood that the accuracy is higher when more indices are referenced, but the real-time performance decreases slightly as the computation increases.
Optionally, in an embodiment, when it is required to ensure better real-time performance, the target association matrix is established only by referring to the position information of the target in the two taken images in most cases.
In one embodiment, a target association matrix of a tracking frame corresponding to a first plurality of targets and a detection frame of a current frame corresponding to a second plurality of targets is established with reference to position information of the targets and wearing colors of the targets (or appearance colors of the targets).
S234: and correcting by using a target matching algorithm to obtain the actual position corresponding to the first part of targets of the current frame.
Optionally, the target value is corrected by using a target matching algorithm according to the observed value of the actually detected target and the predicted value corresponding to the target detection frame in step S231, so as to obtain the actual positions of the first multiple targets in the current frame, that is, the actual positions of the second multiple targets, which are simultaneously present in the current frame, in the first multiple targets in the previous frame. It can be understood that, since the observed values of the second plurality of targets in the current frame have a certain error due to factors such as the sharpness of the split picture, the predicted positions of the first plurality of targets in the current frame are corrected by using the detection frame in which the tracking chain and the first plurality of targets in the previous frame are combined in the previous frame picture.
Optionally, the target matching algorithm is the Hungarian algorithm; the observed value is the feature information of the target obtained when the target is detected and identified in step S22, including the category of the target and the position information of the target, and the predicted value of the target is the position and other feature information of the target in the current frame predicted in step S231 by combining the tracking chain with the position of the target in the previous frame. The position information of the target serves as the primary judgment basis, and the other feature information as the secondary basis.
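The Hungarian step could then be applied to the association matrix, for example with SciPy's linear_sum_assignment, shown here for illustration; turning the score matrix into a cost by negation and the acceptance threshold of 0 are assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match(W: np.ndarray, min_score: float = 0.0):
    """Return (detection, track) index pairs plus unmatched detections and tracks."""
    det_idx, trk_idx = linear_sum_assignment(-W)     # maximize the association score
    pairs = [(i, j) for i, j in zip(det_idx, trk_idx) if W[i, j] > min_score]
    matched_d = {i for i, _ in pairs}
    matched_t = {j for _, j in pairs}
    unmatched_dets = [i for i in range(W.shape[0]) if i not in matched_d]
    unmatched_trks = [j for j in range(W.shape[1]) if j not in matched_t]
    return pairs, unmatched_dets, unmatched_trks
```

Unmatched detections correspond to the "second part" (potentially new) targets, and unmatched tracks to the tracking-chain targets handled by KCF local tracking below.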
Optionally, in an embodiment, a target in the second plurality of targets, which is successfully matched with the tracking frame of the first plurality of targets in the current frame, is defined as a first partial target, and a target in the first plurality of targets, which is successfully matched with the tracking frame of the current frame and the detection frame of the second plurality of targets in the current frame, is also defined as a first partial target, that is, each group of tracking frames and detection frames that are successfully matched are from the same target. It can be understood that, when the detection frame in the second plurality of targets is successfully matched with the tracking frame of the first plurality of targets in the current frame, the following steps are performed: the position information and other characteristic information are in one-to-one correspondence, or the corresponding item number is more, namely the matching is successful if the corresponding item number probability is higher.
In another embodiment, the number of the first part of the objects is smaller than that of the first plurality of objects, that is, only part of the tracking frames of the first plurality of objects in the current frame can be successfully matched with the detection frames of the second plurality of objects, and another part of the tracking frames of the first plurality of objects in the current frame cannot be successfully matched according to the feature information of the matching basis.
Optionally, in a different implementation, the step of successfully matching the detection frame of the second plurality of objects in the current frame with the tracking frame of the first plurality of objects in the previous frame in the current frame includes: and judging whether the matching is successful according to the centroid distance and/or the overlapping rate of the detection frame of the second plurality of targets in the current frame and the tracking frame of the first plurality of targets in the previous frame in the current frame.
In an embodiment, when the centroid distance of the detection frame of one or more of the second plurality of targets in the current frame and one or more of the first plurality of targets in the previous frame is close to the centroid distance of the tracking frame in the current frame, the overlap ratio is high, and the distance of the characteristic vector is close, it is determined that the target matching is successful. It can be understood that the time interval of the segmentation of the two adjacent frames of pictures is very short, that is, the distance that the target moves in the time interval is very small, so that it can be determined that the target in the two frames of pictures is successfully matched at this time.
Optionally, the second plurality of targets includes a first portion of targets and a second portion of targets, wherein, as can be seen from the above, the first portion of targets is: and matching the detection frame in the second plurality of targets with the tracking frame of the first plurality of targets in the current frame to obtain a successful target. The second part targets are: and the detection frame in the second plurality of targets and the target which is not successfully matched with the tracking frame of the first plurality of targets in the current frame define the target which is not recorded in the tracking chain in the second part of targets as a new target. It will be appreciated that, in the second partial target, there may be another type of target in addition to the new target: there are no targets in the first plurality that match successfully but have appeared in the tracking chain.
In an embodiment, the number of the second partial targets may be 0; that is, every detection frame of the second plurality of targets in the current frame is successfully matched with a tracking frame of the first plurality of targets in the current frame, so the number of the second partial targets is 0.
Optionally, after the step of performing correction by using a target matching algorithm to obtain the actual positions corresponding to the first partial targets in the current frame, the method includes: screening the newly added targets in the second partial targets; and adding the newly added targets to the tracking chain. Another embodiment further includes: initializing the corresponding filter tracker according to the initial position and/or feature information of each newly added target.
The filter tracker in one embodiment includes a Kalman filter, a kernelized correlation filter (KCF), and a filter combining the Kalman filter and the kernelized correlation filter, i.e., a structure that combines the structural features of both. The Kalman filter, the kernelized correlation filter, and the combined filter are all multi-target tracking algorithms implemented in software; the combined filter is realized by an algorithm structure that merges the structures of the Kalman filter and the kernelized correlation filter. In other embodiments, the filter tracker may be another type of filter as long as it can achieve the same function.
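A minimal sketch of initializing such a tracker pair for a newly added target, assuming OpenCV with the contrib tracking module (cv2.TrackerKCF_create) is available and using a constant-velocity Kalman model over the box centre; the state layout, noise covariances and dictionary field names are assumptions, not the patent's parameters.

```python
import cv2
import numpy as np

def init_trackers(frame, box):
    """box: (x, y, w, h) detection frame of the newly added target."""
    # KCF tracker for local appearance-based tracking
    kcf = cv2.TrackerKCF_create()
    kcf.init(frame, tuple(int(v) for v in box))

    # Kalman filter with state [cx, cy, vx, vy] and measurement [cx, cy]
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2     # assumed noise levels
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
    cx, cy = box[0] + box[2] / 2, box[1] + box[3] / 2
    kf.statePost = np.array([[cx], [cy], [0], [0]], np.float32)
    return {'kcf': kcf, 'kalman': kf, 'box': box, 'lost': 0}
```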
Optionally, the data in the tracking chain is accumulated from the data of the previous frame and of all frames before the previous frame, and the targets in the tracking chain include the first partial targets described above and third partial targets. Specifically, the first partial targets are the targets of the first plurality of targets whose tracking frames in the current frame are successfully matched with the second plurality of targets. The third partial targets are the targets in the tracking chain that are not successfully matched with the second plurality of targets.
It will be appreciated that the third portion of targets is substantially all targets in the tracking chain except for the first portion of targets that successfully match the second plurality of targets.
Optionally, the step of performing correction by using a target matching algorithm in step S234 to obtain the actual positions corresponding to the first partial targets in the current frame includes: adding 1 to the lost-frame count value corresponding to each of the third partial targets, and removing the corresponding target from the tracking chain when its lost-frame count value is greater than or equal to a preset threshold. It can be understood that the preset threshold of the lost-frame count value is set in advance and can be adjusted as required.
In an embodiment, when the count value of the number of lost frames corresponding to a certain target in the third part of targets is greater than or equal to a preset threshold, the certain target is removed from the current tracking chain.
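The track-management logic described above can be sketched as follows; the threshold value and the field names (`lost`, `max_lost`) are illustrative assumptions.

```python
def update_lost_tracks(tracking_chain, unmatched_ids, max_lost=10):
    """Increment the lost-frame counter of unmatched tracks and drop stale ones."""
    removed = []
    for tid in unmatched_ids:
        track = tracking_chain[tid]
        track['lost'] += 1                 # one more frame without a matched detection
        if track['lost'] >= max_lost:      # preset threshold reached
            removed.append(tracking_chain.pop(tid))
    return removed                          # e.g. upload their structured data to the cloud server
```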
Optionally, when a target is removed from the current tracking chain, the structured data corresponding to that target is uploaded to the cloud server, and the cloud server may further analyze the trajectory or abnormal behavior of the target based on its structured data or on empirical values stored in the database.
It can be understood that, when the structured data corresponding to a target removed from the tracking chain is sent to the cloud server, the system executing the method can choose whether to let the cloud server further analyze that target or to interrupt such analysis.
Optionally, the step of performing correction by using a target matching algorithm in step S234 to obtain the actual positions corresponding to the first partial targets in the current frame includes: adding 1 to the lost-frame count value corresponding to each of the third partial targets, and, when the count value is smaller than the preset threshold, locally tracking the third partial targets to obtain a current tracking value.
Further, in an embodiment, the current tracking value of the third partial targets and the predicted value corresponding to the third partial targets are fused to obtain the actual positions of the third partial targets. Specifically, in an embodiment, the current tracking value is obtained by locally tracking the third partial targets with the kernelized correlation filter and with the filter combining the Kalman filter and the kernelized correlation filter, and the predicted value is the position of the third partial targets predicted by the Kalman filter.
Alternatively, the targets detected in step S22 are tracked by a filter combining a Kalman filter tracker and a kernelized correlation filter (KCF) tracker.
In one embodiment, when all tracked targets can be matched, that is, when no suspected lost target is detected, only the Kalman filter tracker is invoked to complete the tracking of the targets.
In another embodiment, when a suspected lost target exists among the tracked targets, the filter combining the Kalman filter tracker and the kernelized correlation filter (KCF) tracker is invoked to complete the tracking of that target, or the Kalman filter tracker and the kernelized correlation filter tracker cooperate in sequence.
Optionally, in an embodiment, the step S234 of performing correction by using a target matching algorithm to obtain an actual position corresponding to the first part target of the current frame includes: and correcting each target in the first part of targets according to the predicted value corresponding to the current frame tracking frame corresponding to each target and the observed value corresponding to the current frame detection frame to obtain the actual position of each target in the first part of targets.
In an embodiment, the predicted value corresponding to the tracking frame of each of the first partial targets in the current frame can be understood as the position of that target in the current frame predicted from the empirical values in the tracking chain together with its position in the previous frame; the actual position of each of the first partial targets is then corrected by combining this prediction with the observed position (i.e., the observed value) of that target in the current frame. This operation reduces the inaccuracy of the resulting actual position caused by errors in the predicted or observed values.
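In a Kalman framework this prediction/observation correction is simply the predict and correct steps; the sketch below continues the assumed OpenCV-based structure from the earlier snippet and, for unmatched targets, fuses the local KCF result with the Kalman prediction by a fixed weight, which is an assumption rather than the patent's exact weighting scheme.

```python
import numpy as np

def refine_position(track, frame, detection_center=None, alpha=0.6):
    """Predict with Kalman; correct with the detection if matched,
    otherwise fall back to a weighted fusion with the local KCF result."""
    pred = track['kalman'].predict()[:2].ravel()           # predicted (cx, cy)
    if detection_center is not None:                        # matched target: use the observation
        meas = np.array(detection_center, np.float32).reshape(2, 1)
        return track['kalman'].correct(meas)[:2].ravel()
    ok, box = track['kcf'].update(frame)                    # unmatched target: local KCF tracking
    if not ok:
        return pred
    kcf_center = np.array([box[0] + box[2] / 2, box[1] + box[3] / 2], np.float32)
    fused = alpha * kcf_center + (1 - alpha) * pred         # assumed fixed fusion weight
    track['kalman'].correct(fused.reshape(2, 1))            # feed the fused position back as a measurement
    return fused
```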
Optionally, in an embodiment, the improved multi-target tracking method based on KCF and Kalman can track and analyze multiple targets, recording the time at which each target enters the monitoring node and each of its positions within the monitored scene, thereby generating a trajectory chain that clearly reflects the motion of the target at the current monitoring node.
Referring to fig. 10, the present invention further provides a multi-target tracking analysis system 300, comprising a processor 302 and a memory 304 electrically connected to each other. In operation, the processor 302 executes instructions to implement the improved multi-target tracking method based on KCF and Kalman described above and stores the processing results generated by executing the instructions in the memory. It is understood that the memory 304 stores the data corresponding to the algorithm instructions of the above method, including the data corresponding to the Kalman filter, the kernelized correlation filter, and the filter combining the two.
In an embodiment, the processor 302 performs a correction according to the current tracking value and the predicted value corresponding to the third portion of the target to obtain an actual position of the third portion of the target. The details thereof have been set forth above in detail and are not repeated herein.
Referring to fig. 11, the present invention further provides an apparatus 400 having a storage function, which stores program data that, when executed, implements an embodiment of the improved multi-target tracking method based on KCF and Kalman as described above. Specifically, the apparatus 400 having the storage function may be one of a memory, a personal computer, a server, a network device, or a usb disk.
Referring to fig. 7, the present application also provides a schematic flowchart of an embodiment of an abnormal behavior detection method based on clustered optical flow features; the method corresponds to step S24 of the above embodiment and includes steps S241 to S245, as follows:
s241: and carrying out optical flow detection on the area where the detection frame of the one or more targets is located.
Optionally, before abnormal behavior detection is performed on the targets, detection and identification of the targets are completed based on a preset algorithm, and the detection frame corresponding to each target in the single-frame picture, together with its position, is acquired; optical flow detection is then performed on the detection frames of one or more targets. The optical flow contains the motion information of the target. Alternatively, the preset algorithm may be the YOLOv2 algorithm or another algorithm with a similar function.
It can be understood that the center of the detection frame and the center of gravity of the target are approximately coincident with each other, so that the position information of each pedestrian target or other types of targets in each frame of image can be obtained.
In one embodiment, the essence of performing optical flow detection on one or more detection frames of the target is to acquire motion information of optical flow points in the detection frames corresponding to the target, including the speed magnitude and the motion direction of the motion of the optical flow points.
Alternatively, optical flow detection obtains the motion characteristic information of each optical flow point, and is performed by the LK (Lucas-Kanade) pyramid optical flow method or another optical flow method with the same or a similar function.
Alternatively, optical flow detection may be performed on the detection frame of one target in each frame, or on the detection frames of multiple targets in each frame at the same time; the number of targets processed per pass generally depends on the initial system setting. It is understood that this setting can be adjusted as needed: when fast optical flow detection is required, it can be set to process the detection frames of multiple targets in each frame simultaneously; when very fine optical flow detection is required, it can be set to process the detection frame of one target at a time in each frame.
Alternatively, in an embodiment, optical flow detection is performed on the detection frame of one object in consecutive multi-frame pictures at a time, or the detection frame of one object in a single-frame picture may be detected.
Optionally, in another embodiment, optical flow detection is performed on detection frames of a plurality of or all of the objects in consecutive multi-frame pictures at a time, or optical flow detection may be performed on detection frames of a plurality of or all of the objects in a single-frame picture at a time.
Alternatively, in an embodiment, before performing optical flow detection on the target, in the above step, an approximate position area of the target is detected, and then optical flow detection is directly performed on an area where the target appears (which may be understood as a target detection area) in two consecutive frame images. Two consecutive frames of images subjected to optical flow detection are images having the same size.
Optionally, in an embodiment, performing optical flow detection on the area where the detection frame of the target is located may perform optical flow detection on the area where the detection frame of the target is located in one frame of the picture, then store the obtained data and information in the local memory, and then perform optical flow detection on the area where the detection frame of the target is located in the picture in the next frame or in a preset frame.
In one embodiment, optical flow detection is performed on the detection frame and the area of one object at a time, and optical flow detection is performed on the detection frames of all the objects in the picture one by one.
In another embodiment, optical flow detection is performed on multiple objects in one picture at a time, that is, it can be understood that optical flow detection is performed on all or part of the detection frames of the objects in one single-frame picture at a time.
In yet another embodiment, optical flow detection is performed on detection frames of all objects in a plurality of single-frame pictures at a time.
In still another embodiment, optical flow detection is performed on target detection frames of the same category specified in a plurality of single-frame pictures at a time.
Alternatively, the optical flow information obtained after step S241 is added to the spatio-temporal model, so that the optical flow vector information of the preceding and following multi-frame images is obtained through statistical calculation.
S242: and extracting the optical flow motion information of the feature points corresponding to the detection frame in at least two continuous frames of images, and calculating the information entropy of the area where the detection frame is located.
Optionally, in step S242, the optical flow motion information of the feature points corresponding to the detection frame in at least two consecutive images is extracted and the information entropy of the area where the detection frame is located is calculated. The optical flow motion information refers to the motion direction and motion speed of each optical flow point; that is, the motion direction and motion distance of each optical flow point are extracted and its motion speed is then calculated. A feature point is a set of one or more pixel points that can represent the feature information of the object.
Alternatively, after the optical flow motion information of the feature points corresponding to the detection frame in the two consecutive images is extracted, the information entropy of the area where the detection frame is located is calculated from the extracted optical flow motion information. It can be understood that the information entropy is calculated from the optical flow information of all the optical flow points in the target detection area.
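One way to realize this entropy, assumed here for illustration, is to histogram the optical flow directions inside the detection area and compute the Shannon entropy of the normalized histogram; the 12-bin choice follows the HOF setting mentioned later in this description.

```python
import numpy as np

def flow_entropy(dx, dy, bins=12):
    """Shannon entropy of the optical-flow direction histogram inside one detection area.
    dx, dy: per-point displacements between two adjacent frames."""
    angles = np.arctan2(dy, dx)                          # direction of every optical flow point
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi))
    p = hist.astype(np.float64)
    p = p / max(p.sum(), 1e-9)                           # normalize to a probability distribution
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())                # information entropy H
```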
Alternatively, step S242 extracts the optical flow motion information of the feature points corresponding to the detection frames in at least two consecutive images and calculates the information entropy of the area where each detection frame is located: the pixel optical flow feature information in the rectangular frame area of adjacent frames that contains only the pedestrian target is extracted by the LK (Lucas-Kanade) pyramid optical flow method (hereinafter abbreviated as the LK optical flow method), and the LK optical flow extraction algorithm is accelerated by a Graphics Processing Unit (GPU), so that the pixel optical flow feature information is extracted online in real time. The optical flow feature information is optical flow vector information, which may be referred to as an optical flow vector for short.
Alternatively, the optical flow vector extracted by the optical flow algorithm is composed of two two-dimensional matrices, $\Delta x$ and $\Delta y$, i.e. the flow at each pixel is $(\Delta x, \Delta y)$, where each point in the matrices corresponds to a pixel position in the image; $\Delta x$ represents the pixel interval by which the same pixel point moves along the X axis between adjacent frames, and $\Delta y$ represents the pixel interval by which it moves along the Y axis.
Alternatively, the pixel interval refers to the distance that the feature point moves in the two adjacent frame images, and can be directly extracted by the LK optical flow extraction algorithm.
In an embodiment, in step S242, the optical flow motion information is calculated only for the feature points corresponding to the detection frame of each target obtained when target detection was completed on the single-frame image. A feature point can be understood as a point where the image grey value changes drastically, or a point of large curvature on an image edge (i.e., the intersection of two edges). This restriction reduces the amount of calculation and improves the calculation efficiency.
Alternatively, in step S242, the optical flow information of the feature points corresponding to all or some of the detection frames in two consecutive images may be extracted at the same time, or the optical flow information of the feature points corresponding to all the detection frames in more than two consecutive images may be calculated at the same time; the number of images processed each time is preset in the system and can be set as needed.
In one embodiment, step S242 calculates optical flow information of feature points corresponding to all detection frames in two consecutive images at the same time.
In another embodiment, step S242 calculates optical flow information of feature points corresponding to all detection frames in more than two consecutive images at the same time.
Alternatively, step S242 may simultaneously calculate optical flow information of detection frames corresponding to all objects in at least two consecutive images, or may simultaneously calculate optical flow information of detection frames of objects specifically specified and corresponding in at least two consecutive images.
In one embodiment, step S242 is to calculate optical flow information of detection frames corresponding to all objects in at least two consecutive images, such as: and optical flow information of detection frames corresponding to all the targets in the t frame and the t +1 frame images.
In another embodiment, step S242 calculates the optical flow information of the detection frames of specifically designated, corresponding targets in at least two consecutive images, for example: the optical flow information of the detection frames corresponding to the type-A targets in frame t and the type-A' targets in frame t+1, namely the targets with ID numbers 1 to 3; that is, the optical flow information of the detection frames of targets A1, A2, A3 and their corresponding targets A1', A2', A3' is extracted and calculated simultaneously.
S243: and establishing clustering points according to the optical flow motion information and the information entropy.
Alternatively, the clustering points are established from the optical flow motion information calculated in step S242 and the calculated information entropy. The optical flow motion information is information reflecting motion characteristics of an optical flow, and comprises the motion direction and the motion speed, and can also comprise other related motion characteristic information, and the information entropy is obtained by calculation according to the optical flow motion information.
In one embodiment, the optical flow motion information extracted in step S242 includes at least one of a direction of motion, a distance of motion, a speed of motion, and other related motion characteristic information.
Optionally, before step S243 establishes the clustering points according to the optical flow motion information and the calculated information entropy, the optical flow points are clustered by the K-means algorithm. The number of clustering points can be determined from the number of detection frames obtained during target detection, and the clustering criterion is that optical flow points with the same motion direction and motion speed are grouped into one clustering point. Optionally, in an embodiment, the value of K ranges from 6 to 9; of course, K may take other values, which are not described herein.
Optionally, the cluster point is a set of optical flow points with the same or approximately the same magnitude of motion direction and motion speed.
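A sketch of this clustering, assuming cv2.kmeans over per-point (direction, speed) features; the termination criteria and the default K are illustrative, with K in the 6–9 range mentioned above. Using the cosine and sine of the direction angle avoids the wrap-around at ±π that raw angles would suffer.

```python
import cv2
import numpy as np

def cluster_flow_points(dx, dy, k=6):
    """Group optical flow points with similar direction and speed into cluster points."""
    angle = np.arctan2(dy, dx)
    speed = np.hypot(dx, dy)
    feats = np.stack([np.cos(angle), np.sin(angle), speed], axis=1).astype(np.float32)
    k = min(k, len(feats))                                  # cannot have more clusters than points
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(feats, k, None, criteria, 5, cv2.KMEANS_PP_CENTERS)
    return labels.ravel(), centers       # cluster label per flow point, and cluster centres
```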
S244: and calculating the kinetic energy of the clustering points or the kinetic energy of the area where the target detection frame is located. Specifically, the kinetic energy of the clustering points established in step S245 is calculated in units of the clustering points established in step S243, or the kinetic energy of the region where the target detection box is located is calculated at the same time.
In one embodiment, at least one of the kinetic energy of the clustering points established in step S243 and the kinetic energy of the area where the target is located is calculated. It is understood that, in different embodiments, one of the two calculation modes may be configured according to specific requirements, or both may be configured at the same time; when only one of them is needed, the other can be manually deselected. Optionally, a motion space-time container is established from the motion vectors of the N frames before and after each clustering point according to its position, and the information entropy of the optical flow histogram (HOF) of the detection area where each clustering point is located and the average kinetic energy of the clustering point set are calculated.
Optionally, the kinetic energy of the region where the target detection frame is located is calculated as

$E = \frac{1}{k}\sum_{i=0}^{k-1} \frac{1}{2} m v_i^{2}$

where i = 0, …, k-1 indexes the optical flow points in the region of a single target detection frame, k is the total number of optical flow points of the single target region after clustering, v_i is the motion speed of the i-th optical flow point, and, for convenience of calculation, m is taken as 1. Optionally, in an embodiment, the value of K (the number of clusters) ranges from 6 to 9; of course, K may take other values, which are not described herein.
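Under the reconstruction above (m = 1), the kinetic energy of a detection area or of one cluster reduces to the mean of ½·v² over its optical flow points; a small sketch, with the frame interval T as an assumed parameter:

```python
import numpy as np

def kinetic_energy(dx, dy, t_interval=1.0):
    """Average kinetic energy of the optical flow points in one detection area or cluster.
    dx, dy: pixel displacements between adjacent frames; t_interval: time between the frames."""
    v = np.hypot(dx, dy) / t_interval            # speed magnitude of each optical flow point
    return float(np.mean(0.5 * 1.0 * v ** 2))    # m is taken as 1
```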
S245: and judging abnormal behaviors according to the kinetic energy and/or the information entropy of the clustering points.
Optionally, whether the target corresponding to a clustering point exhibits abnormal behavior is judged according to the kinetic energy of the clustering point or the kinetic energy of the area where the target detection frame is located. The abnormal behavior includes running, fighting and crowd disturbance when the target is a pedestrian, and collision and overspeed when the target is a vehicle.
Specifically, the two abnormal behaviors of fighting and running are related to the information entropy of the region where the target detection frame is located and the kinetic energy of the clustering point. That is, when the abnormal behavior is fighting, the entropy of the optical flow information of the area where the target detection frame is located is large, and the kinetic energy of the clustering point corresponding to the target or the kinetic energy of the area where the target is located is also large. When the abnormal behavior is running, the kinetic energy of the clustering point corresponding to the target or the kinetic energy of the area where the target is located is larger, and the entropy of the optical flow information of the area where the target detection frame is located is smaller. When no abnormal behavior occurs, the entropy of the optical flow information of the area where the detection frame corresponding to the target is located is small, and the kinetic energy of the clustering point corresponding to the target or the kinetic energy of the area where the target is located is also small.
Optionally, in an embodiment, the step of S245 determining the abnormal behavior according to the kinetic energy and/or the information entropy of the cluster point further includes: and if the entropy of the optical flow information of the area where the detection frame corresponding to the target is located is larger than or equal to a first threshold value, and the kinetic energy of the clustering point corresponding to the target or the kinetic energy of the area where the target detection frame is located is larger than or equal to a second threshold value, judging that the abnormal behavior is fighting.
Optionally, in another embodiment, the step of determining the abnormal behavior according to the kinetic energy and/or the information entropy of the clustering points further includes: if the information entropy of the area where the detection frame corresponding to the target is located is greater than or equal to the third threshold and smaller than the first threshold, and the kinetic energy of the clustering point corresponding to the target or the kinetic energy of the area where the target detection frame is located is greater than the second threshold, the abnormal behavior is judged to be running.
In one embodiment, for example, the information entropy is represented by H and the kinetic energy is represented by E.
Optionally, the target running behavior is determined from the ratio H/E of the optical flow information entropy H of the region where the target detection frame is located to the kinetic energy E of that region: running is judged when H/E falls within the value range obtained by training and the kinetic energy E exceeds the preset kinetic energy value λ1. In one embodiment, the present invention trains the value range of H/E for running behavior, and λ1 takes the value 3000, where H/E is the ratio of the optical flow information entropy H of the region where the target detection frame is located to the kinetic energy E of that region, and λ1 is a preset kinetic energy value.
Optionally, the target fighting behavior is determined in a similar way: fighting is judged when the ratio H/E falls within the value range obtained by training and the information entropy H exceeds the preset information entropy value λ2. In one embodiment, the present invention trains the value range of H/E for fighting behavior, and λ2 takes the value 3.0, where H/E expresses the ratio of the information entropy H to the kinetic energy E, and λ2 is a preset information entropy value.
Alternatively, normal behavior is judged when the kinetic energy E is below the preset kinetic energy value λ3 and the information entropy H is below the preset information entropy value λ4. In one embodiment, the values obtained by training in the present invention are λ3 = 1500 and λ4 = 1.85, where λ3 is a preset kinetic energy value smaller than λ1, and λ4 is a preset information entropy value smaller than λ2.
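A hedged sketch of this three-way decision rule. The exact comparison operators and the trained H/E ranges are not recoverable from the text, so the conditions used here are assumptions consistent with the qualitative description: fighting has large H and large E, running has large E with comparatively small H, and normal behavior has small H and small E.

```python
def classify_behavior(H, E,
                      lam1=3000.0,   # preset kinetic energy value for running
                      lam2=3.0,      # preset information entropy value for fighting
                      lam3=1500.0,   # normal-behavior kinetic energy bound (< lam1)
                      lam4=1.85):    # normal-behavior entropy bound (< lam2)
    """Classify one target area from its optical-flow information entropy H and kinetic energy E."""
    if H >= lam2 and E >= lam1:
        return "fighting"            # both entropy and kinetic energy are large
    if E >= lam1 and H < lam2:
        return "running"             # large kinetic energy, comparatively small entropy
    if E < lam3 and H < lam4:
        return "normal"
    return "uncertain"               # falls between the trained thresholds
```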
In an embodiment, when a certain pedestrian object runs, the optical flow kinetic energy of the clustering point corresponding to the pedestrian object is larger, and the optical flow information entropy is smaller.
Optionally, when crowd disturbance occurs, firstly, multiple pedestrian targets are detected in one single-frame picture, then when abnormal behavior detection is performed on the detected multiple pedestrian targets, it is found that running abnormality occurs on all the multiple targets, and at this time, the crowd disturbance can be determined to occur.
In one embodiment, when abnormal behavior detection is performed on a plurality of targets detected in a single-frame picture, the motion kinetic energy of cluster points corresponding to the targets exceeding a preset threshold number is larger, and the entropy of optical flow information is smaller; at this time, it can be judged that crowd disturbance may occur.
Alternatively, when the target is a vehicle, abnormal behavior is determined from the distance between the detected vehicles (which can be calculated from the position information) and from the dominant optical flow directions in the detection frames corresponding to the targets, in order to decide whether a collision has occurred. It is understood that when the dominant optical flow directions of the detection frames of two vehicle targets are opposite and the distance between the two vehicles is small, a suspected collision event can be determined.
Optionally, the result of the abnormal behavior determined in step S245 is saved and sent to the cloud server.
The method described in the above steps S241 to S245 can effectively improve the efficiency and real-time performance of detecting abnormal behavior.
Optionally, in an embodiment, the step S242 of extracting the optical flow motion information of the feature points corresponding to the detection frame in at least two consecutive images and calculating the information entropy of the area where the detection frame is located further includes: extracting the feature points of the at least two consecutive images.
Optionally, the feature points of at least two consecutive frames of images are extracted, the feature points of the target detection frame in the two consecutive frames of images may be extracted each time, or the feature points of the target detection frame in multiple frames (more than two frames) of consecutive images may be extracted each time, where the number of images extracted each time is set by initializing the system, and may be adjusted as needed. The feature point refers to a point where the image gray value changes drastically or a point where the curvature is large on the edge of the image (i.e., the intersection of two edges).
Optionally, in an embodiment, in step S242, extracting optical flow motion information of feature points corresponding to the detection frame in at least two consecutive images, and the step of calculating the information entropy of the area where the detection frame is located further includes: and calculating matched feature points of the targets in the two continuous frames of images by adopting a preset algorithm, and removing unmatched feature points in the two continuous frames of images.
Optionally, the image processing function goodFeaturesToTrack() is first called to extract the feature points (also called Shi-Tomasi corner points) in the target area already detected in the previous frame; the function calcOpticalFlowPyrLK() of the LK pyramid optical flow extraction algorithm is then called to compute the feature points of the target detected in the current frame that match the previous frame, and the feature points that do not move or cannot be matched between the previous and the next frame are removed, thereby obtaining the optical flow motion information of the pixel points. The feature points in this embodiment may be Shi-Tomasi corner points, or corner points for short.
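A minimal OpenCV sketch of this step; the corner-detection parameters and the minimum-motion threshold are illustrative assumptions.

```python
import cv2
import numpy as np

def lk_flow_in_box(prev_gray, cur_gray, box):
    """Extract Shi-Tomasi corners inside the detected box of the previous frame and
    track them into the current frame with pyramidal Lucas-Kanade optical flow."""
    x, y, w, h = [int(v) for v in box]
    mask = np.zeros_like(prev_gray)
    mask[y:y + h, x:x + w] = 255                      # restrict corners to the detection area
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01,
                                  minDistance=5, mask=mask)
    if pts is None:
        return np.empty(0), np.empty(0)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts, None)
    good = status.ravel() == 1                        # drop points that could not be matched
    dxdy = (nxt[good] - pts[good]).reshape(-1, 2)
    moving = np.hypot(dxdy[:, 0], dxdy[:, 1]) > 0.5   # drop points that did not move (assumed threshold)
    return dxdy[moving, 0], dxdy[moving, 1]           # per-point Δx, Δy
```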
Optionally, in an embodiment, before the step S243 of establishing clustering points according to the optical flow motion information, the method further includes: drawing the optical flow motion direction of the feature points in the image.
In an embodiment, the step of establishing the clustering points according to the optical flow motion information further includes, before the step of establishing the clustering points according to the optical flow motion information, drawing the optical flow motion direction of each feature point in each frame of image.
Optionally, referring to fig. 8, in an embodiment, after the step of establishing the cluster point according to the optical flow motion information in step S243, step S2431 and step S2432 are further included:
s2431: a spatiotemporal container is established based on the position and motion vectors of the target detection region.
Optionally, a space-time container is established based on the position information of the target detection area, i.e. the target detection frame, and the motion vector relationship of the clustering points in the detection frame between the previous frame and the next frame.
Alternatively, FIG. 9 is a schematic diagram of a motion spatiotemporal container in an embodiment, where AB is the two-dimensional height of the spatiotemporal container, BC is the two-dimensional width of the spatiotemporal container, and CE is the depth of the spatiotemporal container. The depth CE of the space-time container is the video frame number, ABCD represents the two-dimensional size of the space-time container, and the two-dimensional size represents the size of a target detection frame during target detection. It is understood that the model of the spatiotemporal container may be other graphics, and when the graphics of the target detection box change, the model of the spatiotemporal container changes accordingly.
Optionally, in an embodiment, when the graph of the target detection box changes, the corresponding created spatiotemporal container changes according to the graph change of the target detection box.
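The space-time container described here can be viewed simply as a fixed-depth buffer that stores, for each of the last N frames, the flow vectors falling inside the target's detection frame; the class below is an illustrative assumption of such a structure, not the patent's data layout.

```python
from collections import deque
import numpy as np

class SpatioTemporalContainer:
    """Two-dimensional size = the target detection frame (ABCD); depth = number of buffered frames (CE)."""
    def __init__(self, box, depth):
        self.box = box                       # (x, y, w, h) of the target detection frame
        self.frames = deque(maxlen=depth)    # depth in video frames

    def push(self, dx, dy):
        """Store the flow vectors of one frame for the region."""
        self.frames.append(np.stack([dx, dy], axis=1))

    def all_vectors(self):
        """Flow vectors over the buffered N frames, for HOF entropy / kinetic energy statistics."""
        return np.concatenate(list(self.frames), axis=0) if self.frames else np.empty((0, 2))
```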
S2432: and calculating the average information entropy and the average motion kinetic energy of the optical flow histogram of the detection frame corresponding to each clustering point.
Optionally, the average information entropy and the average kinetic energy of the optical flow histogram of the detection frame corresponding to each clustering point are calculated. The optical flow histogram HOF (Histogram of Oriented Optical Flow) counts the probability that optical flow points are distributed in each direction.
Optionally, the basic idea of the HOF is to project each optical flow point into the corresponding histogram bin according to its direction value and to weight it by the magnitude of the optical flow; in the present invention the number of bins is 12. The motion speed magnitude and direction of each optical flow point are calculated as

$v = \frac{\sqrt{\Delta x^{2} + \Delta y^{2}}}{T}, \qquad \theta = \arctan\!\left(\frac{\Delta y}{\Delta x}\right)$

where Δx and Δy are the pixel intervals defined above and T is the time between two adjacent frames of images.
In this case, the optical flow histogram is used to reduce the influence of factors such as the size of the target, the motion direction of the target, and noise in the video on the optical flow characteristics of the target pixels.
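A sketch of such a 12-bin, magnitude-weighted HOF, using the speed and direction definitions just given; the bin layout over (−π, π] is an assumption.

```python
import numpy as np

def hof(dx, dy, t_interval=1.0, bins=12):
    """Magnitude-weighted histogram of oriented optical flow for one detection area."""
    theta = np.arctan2(dy, dx)                       # direction of each optical flow point
    speed = np.hypot(dx, dy) / t_interval            # speed magnitude v
    hist, _ = np.histogram(theta, bins=bins, range=(-np.pi, np.pi), weights=speed)
    total = hist.sum()
    return hist / total if total > 0 else hist       # normalized HOF
```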
Optionally, the categories of abnormal behavior in the different embodiments include fighting, running, crowd disturbance, and traffic abnormality.
In one embodiment, when the target is a pedestrian, the abnormal behavior includes fighting, running, and crowd disturbance.
In another embodiment, when the target is a vehicle, the abnormal behavior includes collision and overspeed.
Optionally, in an embodiment, the average information entropy and the average kinetic energy of the optical flow histogram of the detection frame corresponding to each cluster point are calculated, which are substantially the average information entropy and the average kinetic energy of the optical flow of each cluster center in the previous and next N frames of images.
The abnormal behavior detection method can effectively improve the intelligence of the existing security, can also effectively reduce the calculated amount in the abnormal behavior detection process, and improves the efficiency, the real-time performance and the accuracy of the system for detecting the abnormal behavior of the target.
Optionally, the step of tracking the target to obtain the tracking result further includes: and sending the structured data of the target object which leaves the current monitoring node to the cloud server.
Optionally, when the target is tracked, when the feature information, particularly the position information, of a certain target is not updated within a preset time, it can be determined that the target has left the current monitoring node, and the structured data of the target is sent to the cloud server. The preset time may be set by a user, for example, 5 minutes or 10 minutes, and is not described herein.
In an embodiment, when the target is tracked, when it is found that the position information, i.e., the coordinate value, of a certain pedestrian is not updated within a certain preset time, it can be determined that the pedestrian has left the current monitoring node, and the structured data corresponding to the pedestrian is sent to the cloud server.
In another embodiment, when the target is tracked, when the position coordinate of a certain pedestrian or a certain vehicle is found to stay at the view angle edge of the monitoring node all the time, it can be determined that the pedestrian or the vehicle has left the current monitoring node, and the structured data of the pedestrian or the vehicle is sent to the cloud server.
Optionally, preset feature information (such as a target attribute value, a motion trajectory, a target screenshot, and other required information) of a target determined to leave the current monitoring node is packaged into a preset metadata structure, and then is encoded into a preset format and sent to the cloud server, and the cloud server analyzes the received packaged data, extracts metadata of the target, and stores the metadata in the database.
In one embodiment, the preset feature information of the target which is determined to leave the current node is packaged into a preset metadata structure, then the preset feature information is encoded into a JSON data format and sent to a cloud server through a network, the cloud server analyzes the received JSON data packet, the metadata structure is extracted, and the metadata structure is stored in a database of the cloud server. It can be understood that the preset feature information can be adjusted and set as needed, which is not described herein any more.
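A hedged sketch of this packaging step; the metadata fields, endpoint URL and use of the requests library are illustrative assumptions rather than the patent's actual format.

```python
import json
import requests  # assumed HTTP client for the upload

def upload_target_metadata(target, server_url="http://cloud.example.com/metadata"):
    """Package the preset feature information of a departed target and send it as JSON."""
    metadata = {
        "target_id": target["id"],
        "category": target["category"],            # e.g. pedestrian / vehicle
        "attributes": target.get("attributes", {}),
        "trajectory": target.get("trajectory", []),
        "snapshot": target.get("snapshot_path"),    # path of the target screenshot
    }
    payload = json.dumps(metadata)
    resp = requests.post(server_url, data=payload,
                         headers={"Content-Type": "application/json"}, timeout=5)
    resp.raise_for_status()                         # the cloud server parses and stores the metadata
    return resp.status_code
```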
Optionally, step S23 of tracking the targets to obtain a tracking result and step S24 of detecting abnormal behavior of the targets are both based on step S22, in which target detection and identification are performed on the single-frame picture; on this basis the targets can be tracked and their abnormal behavior detected.
Alternatively, the abnormal behavior detection of the target in step S24 may be performed directly after step S22 is completed, or simultaneously with step S23, or after step S23 and based on the tracking result in step S23.
Alternatively, when the abnormal behavior detection of the target in the step S24 is performed based on the tracking of the target in the step S23 to obtain the tracking result, the detection of the abnormal behavior of the target may be more accurate.
The method for video structuring processing based on the target behavior attribute in steps S21 to S24 can effectively reduce the pressure of network transmission of the monitoring video, effectively improve the real-time performance of the monitoring system, and greatly reduce the data traffic fee.
Optionally, the step of performing target detection and identification on the single-frame picture further includes extracting feature information of a target in the single-frame picture. It can be understood that after the read video is divided into a plurality of single-frame pictures, the target detection and identification are performed on the single-frame pictures after the division.
Optionally, feature information of an object in a single frame picture obtained by cutting the video is extracted, wherein the object includes pedestrians, vehicles and animals, and feature information of a building or a road and bridge can also be extracted according to needs.
In one embodiment, when the object is a pedestrian, the extracted feature information includes: the position of the pedestrian, the clothing color of the pedestrian, the sex of the pedestrian, the motion state, the motion track, the dwell time and other available information.
In another embodiment, when the target is a vehicle, the extracted feature information includes: the type of the vehicle, the color of the vehicle body, the running speed of the vehicle, the license plate number of the vehicle and the like.
In yet another embodiment, when the object is a building, the extracted feature information includes: basic information of the building: such as building story height, building appearance color, etc.
In still another embodiment, when the target is a road bridge, the extracted feature information includes: the width of the road, the name of the road, the speed limit value of the road and the like.
Optionally, the step of detecting abnormal behavior of the target includes: and extracting motion vectors of multiple pixel points of one or more targets, and detecting abnormal behaviors according to the relation between the motion vectors.
In one embodiment, for more details, reference is made to the abnormal behavior detection method described above.
In an embodiment, the structured data acquired in the video processing stage is initially set to include at least one of the position of the target, the category of the target, the attributes of the target, the motion state of the target, the motion trajectory of the target, and the residence time of the target. This can be adjusted according to the needs of the user, for example acquiring only the position information of the target in the video processing stage, or acquiring the position and the category of the target at the same time. It will be appreciated that the user may select which types of information need to be obtained during the video processing stage.
Optionally, after the video structuring is finished, the obtained structured data is uploaded to a cloud server, and the cloud server stores the structured data uploaded by each monitoring node and deeply analyzes the structured data uploaded by each monitoring node to obtain a preset result.
Optionally, the step of deeply analyzing the structured data uploaded by each monitoring node by the cloud server may be set to be performed automatically by the system, or may be performed manually by the user.
In an embodiment, basic analysis contents included in the in-depth analysis of the cloud server are preset, such as the number of statistical pedestrians, target trajectory analysis, whether abnormal behaviors occur in the target, and the number of targets in which the abnormal behaviors occur, and other contents that need to be specially selected by the user, such as the proportion of each time period of the target, the speed of the target, and the like.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes performed by the present specification and drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (8)

1. An improved multi-target tracking method based on KCF and Kalman is characterized by comprising the following steps:
predicting a tracking frame of each target in a first plurality of targets in a current frame by combining a tracking chain and a detection frame corresponding to the first plurality of targets in a previous frame picture;
acquiring a tracking frame corresponding to a first plurality of targets in a previous frame of picture in a current frame and a detection frame of a second plurality of targets in the current frame of picture;
establishing a target incidence matrix of a tracking frame of the first plurality of targets in the current frame and a detection frame of a second plurality of targets in the current frame;
correcting by using a target matching algorithm to obtain the actual position corresponding to the first part of targets of the current frame;
the targets in the tracking chain comprise the first part of targets and a third part of targets, and the targets in the tracking chain which are not successfully matched with the second plurality of targets are defined as third part of targets;
adding 1 to a target lost frame number count value corresponding to the third part of targets, and locally tracking the third part of targets to obtain a current tracking value when the count value is smaller than a preset threshold value;
and correcting according to the current tracking value and the predicted value corresponding to the third part of targets to obtain the actual position of the third part of targets.
2. The improved multi-target tracking method based on KCF and Kalman according to claim 1, characterized in that the targets whose detection frames in the second plurality of targets are successfully matched with the tracking frames of the first plurality of targets in the current frame are defined as the first part of targets.
3. The improved multi-target tracking method based on KCF and Kalman according to claim 2, wherein the step of successful matching between the detection frame of the second plurality of targets in the current frame picture and the tracking frame of the first plurality of targets in the previous frame picture in the current frame comprises: and judging whether the matching is successful according to the centroid distance and/or the overlapping rate of the detection frame of the second plurality of targets in the current frame picture and the tracking frame of the first plurality of targets in the previous frame picture in the current frame picture.
4. The improved multi-target tracking method based on KCF and Kalman according to claim 1, wherein the second plurality of targets comprises the first part of targets and a second part of targets, wherein the targets successfully matched with the detection frame of the current frame and the tracking frame of the previous frame in the first and second plurality of targets are defined as the first part of targets, the targets unsuccessfully matched with the detection frame of the current frame and the tracking frame of the previous frame in the first and second plurality of targets are defined as the second part of targets, and the targets not recorded in the tracking chain in the second part of targets are defined as new added targets;
the step of using the target matching algorithm to perform the correction to obtain the actual position corresponding to the first part of the targets of the current frame comprises the following steps:
screening new targets in the second part of targets;
and adding the newly added target into a tracking chain.
5. The improved multi-target tracking method based on KCF and Kalman according to claim 1,
the step of using the target matching algorithm to perform the correction to obtain the actual position corresponding to the first part of the targets of the current frame comprises the following steps:
and adding 1 to a target lost frame number count value corresponding to the third part of targets, and removing the corresponding target from the tracking chain when the target lost frame number count value is greater than or equal to a preset threshold value.
6. The improved multi-target tracking method based on KCF and Kalman according to claim 1, wherein the step of correcting by using a target matching algorithm to obtain the actual position corresponding to the first part of targets of the current frame comprises:
and correcting each target in the first part of targets according to the predicted value corresponding to the current frame tracking frame corresponding to each target and the observed value corresponding to the current frame detection frame to obtain the actual position of each target in the first part of targets.
7. An apparatus having a storage function, characterized in that program data are stored which, when executed, implement the method according to any one of claims 1 to 6.
8. A multi-target tracking analysis system is characterized by comprising a processor and a memory which are electrically connected;
the processor is coupled to the memory and in operation executes instructions to implement the method of any of claims 1-6 and stores in the memory a processing result resulting from the execution of the instructions.
CN201711063087.7A 2017-10-31 2017-10-31 Improved multi-target tracking method, system and device based on KCF and Kalman Active CN108053427B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711063087.7A CN108053427B (en) 2017-10-31 2017-10-31 Improved multi-target tracking method, system and device based on KCF and Kalman

Publications (2)

Publication Number Publication Date
CN108053427A CN108053427A (en) 2018-05-18
CN108053427B true CN108053427B (en) 2021-12-14

Family

ID=62119840

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711063087.7A Active CN108053427B (en) 2017-10-31 2017-10-31 Improved multi-target tracking method, system and device based on KCF and Kalman

Country Status (1)

Country Link
CN (1) CN108053427B (en)

Families Citing this family (64)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019180917A1 (en) * 2018-03-23 2019-09-26 日本電気株式会社 Object tracking device, object tracking method, and object tracking program
CN108776974B (en) * 2018-05-24 2019-05-10 南京行者易智能交通科技有限公司 A kind of real-time modeling method method suitable for public transport scene
CN108921873B (en) * 2018-05-29 2021-08-31 福州大学 Markov decision-making online multi-target tracking method based on kernel correlation filtering optimization
CN108875600A (en) * 2018-05-31 2018-11-23 银江股份有限公司 A kind of information of vehicles detection and tracking method, apparatus and computer storage medium based on YOLO
CN108810616B (en) 2018-05-31 2019-06-14 广州虎牙信息科技有限公司 Object localization method, image display method, device, equipment and storage medium
CN108985162B (en) * 2018-06-11 2023-04-18 平安科技(深圳)有限公司 Target real-time tracking method and device, computer equipment and storage medium
CN109195100B (en) * 2018-07-09 2020-12-01 南京邮电大学 Blind area data early warning method based on self-adaptive window
CN110751674A (en) * 2018-07-24 2020-02-04 北京深鉴智能科技有限公司 Multi-target tracking method and corresponding video analysis system
CN109087331A (en) * 2018-08-02 2018-12-25 阿依瓦(北京)技术有限公司 A kind of motion forecast method based on KCF algorithm
CN109316202B (en) * 2018-08-23 2021-07-02 苏州佳世达电通有限公司 Image correction method and detection device
CN109325961B (en) * 2018-08-27 2021-07-09 北京悦图数据科技发展有限公司 Unmanned aerial vehicle video multi-target tracking method and device
CN109118523B (en) * 2018-09-20 2022-04-22 电子科技大学 Image target tracking method based on YOLO
CN109448027B (en) * 2018-10-19 2022-03-29 成都睿码科技有限责任公司 Adaptive and persistent moving target identification method based on algorithm fusion
CN109558505A (en) * 2018-11-21 2019-04-02 百度在线网络技术(北京)有限公司 Visual search method, apparatus, computer equipment and storage medium
CN109615641B (en) * 2018-11-23 2022-11-29 中山大学 Multi-target pedestrian tracking system and tracking method based on KCF algorithm
CN109657575B (en) * 2018-12-05 2022-04-08 国网安徽省电力有限公司检修分公司 Intelligent video tracking algorithm for outdoor constructors
CN109784173A (en) * 2018-12-14 2019-05-21 合肥阿巴赛信息科技有限公司 A kind of shop guest's on-line tracking of single camera
CN109712171B (en) * 2018-12-28 2023-09-01 厦门瑞利特信息科技有限公司 Target tracking system and target tracking method based on correlation filter
US20200226763A1 (en) * 2019-01-13 2020-07-16 Augentix Inc. Object Detection Method and Computing System Thereof
CN109949336A (en) * 2019-02-26 2019-06-28 中科创达软件股份有限公司 Target fast tracking method and device in a kind of successive video frames
CN109902627A (en) * 2019-02-28 2019-06-18 中科创达软件股份有限公司 A kind of object detection method and device
CN110031597A (en) * 2019-04-19 2019-07-19 燕山大学 A kind of biological water monitoring method
CN110276783B (en) * 2019-04-23 2021-01-08 上海高重信息科技有限公司 Multi-target tracking method and device and computer system
CN110111363A (en) * 2019-04-28 2019-08-09 深兰科技(上海)有限公司 A kind of tracking and equipment based on target detection
CN110110649B (en) * 2019-05-02 2023-04-07 西安电子科技大学 Selective human face detection method based on speed direction
CN109859239B (en) * 2019-05-05 2019-07-19 深兰人工智能芯片研究院(江苏)有限公司 A kind of method and apparatus of target tracking
CN110310305B (en) * 2019-05-28 2021-04-06 东南大学 Target tracking method and device based on BSSD detection and Kalman filtering
CN110223325B (en) * 2019-06-18 2021-04-27 北京字节跳动网络技术有限公司 Object tracking method, device and equipment
CN110378259A (en) * 2019-07-05 2019-10-25 桂林电子科技大学 A kind of multiple target Activity recognition method and system towards monitor video
CN110290410B (en) * 2019-07-31 2021-10-29 合肥华米微电子有限公司 Image position adjusting method, device and system and adjusting information generating equipment
CN110414447B (en) 2019-07-31 2022-04-15 京东方科技集团股份有限公司 Pedestrian tracking method, device and equipment
CN110490148A (en) * 2019-08-22 2019-11-22 四川自由健信息科技有限公司 A kind of recognition methods for behavior of fighting
CN110610510B (en) * 2019-08-29 2022-12-16 Oppo广东移动通信有限公司 Target tracking method and device, electronic equipment and storage medium
CN110610512B (en) * 2019-09-09 2021-07-27 西安交通大学 Unmanned aerial vehicle target tracking method based on BP neural network fusion Kalman filtering algorithm
CN110660225A (en) * 2019-10-28 2020-01-07 上海眼控科技股份有限公司 Red light running behavior detection method, device and equipment
CN110533013A (en) * 2019-10-30 2019-12-03 图谱未来(南京)人工智能研究院有限公司 A kind of track-detecting method and device
CN110852283A (en) * 2019-11-14 2020-02-28 南京工程学院 Helmet wearing detection and tracking method based on improved YOLOv3
CN111161320B (en) * 2019-12-30 2023-05-19 浙江大华技术股份有限公司 Target tracking method, target tracking device and computer readable medium
CN111382784B (en) * 2020-03-04 2021-11-26 厦门星纵智能科技有限公司 Moving target tracking method
CN111462174B (en) * 2020-03-06 2023-10-31 北京百度网讯科技有限公司 Multi-target tracking method and device and electronic equipment
CN111428642A (en) * 2020-03-24 2020-07-17 厦门市美亚柏科信息股份有限公司 Multi-target tracking algorithm, electronic device and computer readable storage medium
CN111445501B (en) * 2020-03-25 2023-03-24 苏州科达科技股份有限公司 Multi-target tracking method, device and storage medium
CN112639872B (en) * 2020-04-24 2022-02-11 华为技术有限公司 Method and device for difficult mining in target detection
CN111551938B (en) * 2020-04-26 2022-08-30 北京踏歌智行科技有限公司 Unmanned technology perception fusion method based on mining area environment
CN111627045B (en) * 2020-05-06 2021-11-02 佳都科技集团股份有限公司 Multi-pedestrian online tracking method, device and equipment under single lens and storage medium
CN111768427B (en) * 2020-05-07 2023-12-26 普联国际有限公司 Multi-moving-object tracking method, device and storage medium
CN111709328B (en) * 2020-05-29 2023-08-04 北京百度网讯科技有限公司 Vehicle tracking method and device and electronic equipment
CN111582242B (en) * 2020-06-05 2024-03-26 上海商汤智能科技有限公司 Retention detection method, device, electronic equipment and storage medium
CN111681260A (en) * 2020-06-15 2020-09-18 深延科技(北京)有限公司 Multi-target tracking method and tracking system for aerial images of unmanned aerial vehicle
CN111709974B (en) * 2020-06-22 2022-08-02 苏宁云计算有限公司 Human body tracking method and device based on RGB-D image
CN111860373B (en) * 2020-07-24 2022-05-20 浙江商汤科技开发有限公司 Target detection method and device, electronic equipment and storage medium
CN112084867A (en) * 2020-08-10 2020-12-15 国信智能系统(广东)有限公司 Pedestrian positioning and tracking method based on human body skeleton point distance
CN111985379A (en) * 2020-08-13 2020-11-24 中国第一汽车股份有限公司 Target tracking method, device and equipment based on vehicle-mounted radar and vehicle
CN112052802B (en) * 2020-09-09 2024-02-20 上海工程技术大学 Machine vision-based front vehicle behavior recognition method
CN112232359B (en) * 2020-09-29 2022-10-21 中国人民解放军陆军炮兵防空兵学院 Visual tracking method based on mixed level filtering and complementary characteristics
CN112184772A (en) * 2020-09-30 2021-01-05 深兰人工智能(深圳)有限公司 Target tracking method and device
CN112561963A (en) * 2020-12-18 2021-03-26 北京百度网讯科技有限公司 Target tracking method and device, road side equipment and storage medium
CN112700469A (en) * 2020-12-30 2021-04-23 武汉卓目科技有限公司 Visual target tracking method and device based on ECO algorithm and target detection
CN112581507A (en) * 2020-12-31 2021-03-30 北京澎思科技有限公司 Target tracking method, system and computer readable storage medium
CN113516158B (en) * 2021-04-15 2024-04-16 西安理工大学 Graph model construction method based on Faster R-CNN
CN113112526B (en) * 2021-04-27 2023-09-22 北京百度网讯科技有限公司 Target tracking method, device, equipment and medium
CN113674317B (en) * 2021-08-10 2024-04-26 深圳市捷顺科技实业股份有限公司 Vehicle tracking method and device for high-level video
TWI790957B (en) * 2022-04-06 2023-01-21 淡江大學學校財團法人淡江大學 A high-speed data association method for multi-object tracking
CN116883413B (en) * 2023-09-08 2023-12-01 山东鲁抗医药集团赛特有限责任公司 Visual detection method for retention of waste picking and receiving materials

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104244113A (en) * 2014-10-08 2014-12-24 中国科学院自动化研究所 Method for generating video abstract based on deep learning technology
EP3118814A1 (en) * 2015-07-15 2017-01-18 Thomson Licensing Method and apparatus for object tracking in image sequences
CN106228112A (en) * 2016-07-08 2016-12-14 深圳市优必选科技有限公司 Face detection and tracking method, robot head rotation control method, and robot
CN106874854A (en) * 2017-01-19 2017-06-20 西安电子科技大学 Unmanned aerial vehicle wireless vehicle tracking based on embedded platform

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Tracking a Human Fast and Reliably Against Occlusion and Human-Crossing; Xuan-Phung Huynh et al.; PSIVT 2015: Image and Video Technology; 2016-02-04; pp. 462-467 *

Also Published As

Publication number Publication date
CN108053427A (en) 2018-05-18

Similar Documents

Publication Publication Date Title
CN108053427B (en) Improved multi-target tracking method, system and device based on KCF and Kalman
CN108009473B (en) Video structuralization processing method, system and storage device based on target behavior attribute
CN108052859B (en) Abnormal behavior detection method, system and device based on clustering optical flow characteristics
CN108062349B (en) Video monitoring method and system based on video structured data and deep learning
CN110738127B (en) Helmet identification method based on unsupervised deep learning neural network algorithm
TWI749113B (en) Methods, systems and computer program products for generating alerts in a video surveillance system
CN109948582B (en) Intelligent vehicle reverse running detection method based on tracking trajectory analysis
US9652863B2 (en) Multi-mode video event indexing
US8744125B2 (en) Clustering-based object classification
US9208675B2 (en) Loitering detection in a video surveillance system
Singh et al. Visual big data analytics for traffic monitoring in smart city
CN113011367B (en) Abnormal behavior analysis method based on target track
CN111814638B (en) Security scene flame detection method based on deep learning
Kumar et al. Study of robust and intelligent surveillance in visible and multi-modal framework
Li et al. Traffic anomaly detection based on image descriptor in videos
CN103986910A (en) Method and system for passenger flow statistics based on cameras with intelligent analysis function
CN103824070A (en) Rapid pedestrian detection method based on computer vision
CN104966304A (en) Kalman filtering and nonparametric background model-based multi-target detection tracking method
CN108376246A (en) Multi-face recognition and tracking system and method
KR102122850B1 (en) Solution for road analysis and vehicle license plate recognition employing deep learning
CN112396658A (en) Indoor personnel positioning method and positioning system based on video
KR20210062256A (en) Method, program and system to judge abnormal behavior based on behavior sequence
CN108830204B (en) Method for detecting abnormality in target-oriented surveillance video
CN114140745A (en) Method, system, device and medium for detecting personnel attributes of construction site
CN113920585A (en) Behavior recognition method and device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant