CN108053427A - An improved multi-object tracking method, system and device based on KCF and Kalman - Google Patents

An improved multi-object tracking method, system and device based on KCF and Kalman

Info

Publication number: CN108053427A
Application number: CN201711063087.7A
Authority: CN (China)
Prior art keywords: target, tracking, frame, KCF, detection
Legal status: Granted; currently active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN108053427B (en)
Inventors: 谢维信 (Xie Weixin), 王鑫 (Wang Xin), 高志坚 (Gao Zhijian)
Current and original assignee: Shenzhen University (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Filing: application filed by Shenzhen University as CN201711063087.7A; published as CN108053427A; granted and published as CN108053427B


Classifications

    • G (Physics) » G06 (Computing; calculating or counting) » G06T (Image data processing or generation, in general) » G06T 7/00 Image analysis » G06T 7/20 Analysis of motion » G06T 7/277 Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G06T 7/00 Image analysis » G06T 7/20 Analysis of motion » G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments » G06T 7/248 involving reference images or patches
    • G06T 7/00 Image analysis » G06T 7/20 Analysis of motion » G06T 7/269 Analysis of motion using gradient-based methods
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement » G06T 2207/10 Image acquisition modality » G06T 2207/10016 Video; image sequence

Landscapes

  • Engineering & Computer Science
  • Multimedia
  • Computer Vision & Pattern Recognition
  • Physics & Mathematics
  • General Physics & Mathematics
  • Theoretical Computer Science
  • Image Analysis

Abstract

The invention discloses an improved multi-object tracking method based on KCF and Kalman. The method includes: performing target detection using a GoogLeNet network model and extracting each target's feature vector; for each target in the tracking chain of the previous frame, establishing an association matrix from the predicted position in the current frame, the overlap ratio, and the feature-vector space distance with respect to the observed target positions in the current frame, and matching with a matching algorithm; updating the tracking box and corresponding feature vector of each tracking chain that matches directly and successfully; performing local tracking with a KCF tracker on targets that fail to match; fusing the KCF tracking result with the Kalman tracking result by weighting to update the target position; and predicting the next-frame position of each target in the tracking chain. By the above means, the present invention combines CNN-extracted feature vectors with KCF local tracking to improve tracking performance, and better solves problems such as target occlusion and target false detection. In addition, the present invention also provides a multi-object tracking system and device.

Description

An improved multi-object tracking method, system and device based on KCF and Kalman
Technical field
The present invention relates to the field of computer vision, and more particularly to an improved multi-object tracking method, system and device based on KCF and Kalman.
Background technology
With the development of computer vision technology and its wide application in the surveillance field, target tracking analysis has become particularly important. Traditional tracking methods are all based on the idea of the Kalman filter tracker (Kalman), but such methods predict position information poorly when the target is partially occluded, which reduces the accuracy of monitoring. Therefore, to meet the needs of the development of intelligent surveillance technology, a target tracking method and a multi-object tracking system with high accuracy and good prediction performance are required.
Summary of the invention
The technical problem to be solved by the present invention is to provide an improved multi-object tracking method based on KCF and Kalman, a multi-object tracking system, and a device with a storage function, which can solve the problems of low tracking accuracy and heavy computation.
To solve the above technical problem, the technical solution adopted by the present invention is to provide an improved multi-object tracking method based on KCF and Kalman, comprising the following steps:
predicting the tracking box, in the current frame, of each of a first plurality of targets by combining the tracking chain with the detection boxes corresponding to the first plurality of targets in the previous frame picture;
obtaining the tracking boxes, in the current frame, of the first plurality of targets of the previous frame picture and the detection boxes of a second plurality of targets in the current frame picture;
establishing a target association matrix between the tracking boxes of the first plurality of targets in the current frame and the detection boxes of the second plurality of targets in the current frame;
correcting with a target matching algorithm, to obtain the actual positions corresponding to a first portion of the targets in the current frame.
To solve the above technical problem, another technical solution adopted by the present invention is to provide a device with a storage function, which stores program data; when executed, the program data implements the multi-object tracking method described above.
To solve the above technical problem, a further technical solution adopted by the present invention is to provide a multi-object tracking analysis system, including a processor and a memory that are electrically connected;
the processor is coupled to the memory, and in operation the processor executes instructions to implement the multi-object tracking method described above, and stores the processing results generated by executing the instructions in the memory.
The beneficial effect of the above technical solution is: different from the prior art, the present invention predicts the tracking box of each of the first plurality of targets in the current frame by combining the tracking chain with the detection boxes corresponding to the first plurality of targets in the previous frame picture; obtains the tracking boxes of the first plurality of targets in the current frame and the detection boxes of the second plurality of targets in the current frame picture; establishes a target association matrix between the tracking boxes of the first plurality of targets in the current frame and the detection boxes of the second plurality of targets in the current frame; and corrects with a target matching algorithm to obtain the actual positions corresponding to the first portion of the targets in the current frame. By the above means, the present invention can provide a target tracking method and a multi-object tracking system with high tracking accuracy and good real-time performance.
Description of the drawings
Fig. 1 is a flow diagram of an embodiment of the improved multi-object tracking method based on KCF and Kalman of the present invention;
Fig. 2 is a flow diagram of another embodiment of the improved multi-object tracking method based on KCF and Kalman of the present invention;
Fig. 3 is a flow diagram of a further embodiment of the improved multi-object tracking method based on KCF and Kalman of the present invention;
Fig. 4 is a flow diagram of a further embodiment of the improved multi-object tracking method based on KCF and Kalman of the present invention;
Fig. 5 is a flow diagram of a further embodiment of the improved multi-object tracking method based on KCF and Kalman of the present invention;
Fig. 6 is a flow diagram of a further embodiment of the improved multi-object tracking method based on KCF and Kalman of the present invention;
Fig. 7 is a flow diagram of a further embodiment of the improved multi-object tracking method based on KCF and Kalman of the present invention;
Fig. 8 is a flow diagram of an embodiment of step S243 in the embodiment provided in Fig. 7;
Fig. 9 is a schematic diagram of the empty container during motion in an embodiment of the improved multi-object tracking method based on KCF and Kalman of the present invention;
Fig. 10 is a structural diagram of an embodiment of the multi-object tracking analysis system of the present invention;
Fig. 11 is a schematic diagram of an embodiment of the device with a storage function of the present invention.
Detailed description of embodiments
Hereinafter, exemplary embodiments of the present application will be described with reference to the accompanying drawings. For the sake of clarity and brevity, well-known functions and constructions are not described in detail. The terms used below, defined in view of their functions in this application, may vary according to the intentions or practices of users and operators; such terms should therefore be defined on the basis of the disclosure throughout this specification.
Referring to Fig. 1, which is a flow diagram of a first embodiment of the video monitoring method of the present invention based on video structured data and deep learning, the method includes:
S10: Read video.
Optionally, reading video includes reading real-time video captured by a camera and/or pre-recorded and saved video data. The camera capturing the real-time video may be a USB camera, an IP camera based on an RTSP protocol stream, or another type of camera.
In one embodiment, the video read is video captured in real time by a USB camera or by an IP camera based on an RTSP protocol stream.
In another embodiment, the video read is pre-recorded and saved video, read from local storage or from an external storage device such as a USB flash drive or hard disk, or retrieved from the network; these cases are not described in detail one by one here.
S20: Perform structuring processing on the video to obtain structured data.
Optionally, performing structuring processing on the video to obtain structured data specifically means converting the unstructured video data read in step S10 into organized data. Specifically, structured data refers to the data that matters most for subsequent analysis. Optionally, the structured data includes at least one of the target's position, target class, target attributes, target motion state, target movement track, and target dwell time; it is understood that the structured data may also include other categories of information needed by the user (a person using the method or system described in the present invention). Other data are either not particularly important or can be mined from related information such as the structured data. Exactly which information the structured information includes depends on the particular requirements. How the video is processed to obtain structured data is explained in detail below.
S30: Upload the structured data to a cloud server, and analyze the structured data in depth to obtain a preset result.
Optionally, after the video is structured in step S20, the obtained structured data is uploaded to the cloud server and stored in the cloud server's storage area.
In one embodiment, the data obtained by the video structuring processing is saved directly to the storage area of the cloud server, both to retain archives and to serve as a database for improving the system.
Optionally, after the video is processed in step S20, the obtained structured data is uploaded to the cloud server, and the cloud server further analyzes the structured data in depth.
Optionally, the cloud server performs further in-depth analysis on the structured data uploaded from each monitoring node, where the in-depth analysis includes target trajectory analysis and target flow analysis, or other required analyses, and the targets include at least one of people, vehicles, animals and the like.
In one embodiment, the further in-depth analysis that the cloud server performs on the structured data uploaded from each monitoring node is trajectory analysis: according to the pattern of the uploaded target's track and its dwell time in the scene, it is determined whether the target is suspicious, whether the target has lingered in a certain region for a long time, and whether abnormal behavior such as area intrusion has occurred.
In another embodiment, the further in-depth analysis that the cloud server performs on the structured data uploaded from each monitoring node is target flow analysis: according to the structured data uploaded by each monitoring point, the targets appearing at a certain monitoring point are counted, and statistics yield the target flow at that monitoring node in each time period. The targets may be pedestrians and vehicles, and the peak or off-peak periods of the target flow can also be obtained. The computed target-flow data provide a reference for reasonably advising pedestrians and drivers to avoid rush hours, or for allocating public resources such as lighting.
By processing the video into structured data that is critical for in-depth analysis, and then uploading only the structured data to the cloud rather than transmitting the entire video to the cloud, this method solves the problems of heavy network transmission pressure and high data traffic cost.
In one embodiment, according to an advance setting, when each monitoring node uploads the structured data produced by the video processing system to the cloud server, the cloud server stores the structured data and then analyzes it in depth.
In another embodiment, when each monitoring node uploads the structured data produced by the video processing system to the cloud server, the server stores the structured data and then requires the user to choose whether to perform in-depth analysis.
In another embodiment, whenever necessary, the user can subject initially uploaded structured data that has already undergone one round of in-depth analysis to another configured in-depth analysis.
Optionally, the in-depth analysis performed on the structured data uploaded by each monitoring node further comprises: performing statistics on the structured data, analyzing it to obtain the behavior types and abnormal behaviors of one or more targets, and raising an alarm on abnormal behaviors, or other analysis and processing the user needs.
The following details how the video data is processed into structured data; that is, the present application also provides a method of video structuring processing based on target behavior attributes. In one embodiment, the video structured-data processing uses an intelligent analysis module embedding a deep-learning target detection and recognition algorithm, a multi-object tracking algorithm, and an abnormal-behavior recognition algorithm based on motion optical-flow features, to convert the unstructured video data read in step S10 into structured data.
Referring to Fig. 2, which is a flow diagram of an embodiment of the video processing method provided by the present application, step S20 of the above embodiment includes steps S22 to S23.
S22: Perform target detection and recognition on a single-frame picture.
Optionally, step S22 performs target detection and recognition on all targets in the single-frame picture, where the objects of target detection and recognition include pedestrian detection and recognition, vehicle detection and recognition, animal detection and recognition, and so on.
Optionally, performing target detection and recognition on a single-frame picture in step S22 includes: extracting the targets' feature information from the single-frame picture, i.e., extracting the feature information of all targets, the target classes, the targets' position information in the single-frame picture, and so on, where the targets may be pedestrians, vehicles, animals, etc.
In one embodiment, when the single-frame picture contains only pedestrians, the target detection and recognition is pedestrian detection and recognition, extracting the feature information of all pedestrians in the picture.
In another embodiment, when the single-frame picture contains multiple types of targets such as pedestrians and vehicles, the target detection and recognition detects and recognizes the multiple categories such as pedestrians and vehicles, i.e., extracts the feature information of the pedestrians, vehicles, etc. in the single-frame picture; it is understood that the categories to be recognized can be specified by the user.
Optionally, the algorithm used by step S22 to perform target detection and recognition on the single-frame picture is an optimized deep-learning target detection algorithm. Specifically, the YOLOv2 deep-learning target detection framework may be employed; the core of this algorithm is to use the whole image as the network input and to directly regress, at the output layer, the position of each bounding box and the class the bounding box belongs to.
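The following is a minimal sketch, not the patent's implementation, of running a YOLOv2-style Darknet model through OpenCV's DNN module; the .cfg/.weights file names, the 416x416 input size, and the 0.5 confidence threshold are illustrative assumptions.

```python
# Sketch of whole-image YOLOv2 inference: the network regresses box positions
# and class scores directly from the full frame, as described above.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov2.cfg", "yolov2.weights")  # assumed paths

def detect(frame, conf_thresh=0.5):
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    out = net.forward()  # rows: [cx, cy, bw, bh, objectness, class scores...]
    boxes = []
    for row in out.reshape(-1, out.shape[-1]):
        scores = row[5:]
        cls = int(np.argmax(scores))
        conf = float(row[4] * scores[cls])
        if conf > conf_thresh:
            cx, cy, bw, bh = row[:4] * np.array([w, h, w, h])  # relative -> pixels
            boxes.append((int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh), cls, conf))
    return boxes
```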
Optionally, target detection consists of two parts: model training and model testing.
In one embodiment, for model training, 50% of the pedestrian or vehicle images are taken from the VOC and COCO data sets, and the remaining 50% of the data come from real surveillance footage of streets, indoor corridors, squares, and the like. It is understood that the ratio between data from the public data sets (the VOC and COCO data sets) and data from real surveillance footage used in model training can be adjusted as needed: the higher the proportion taken from the public data sets, the relatively poorer the resulting model's precision under real surveillance scenes; conversely, the higher the proportion taken from real surveillance data, the better the precision.
Optionally, in one embodiment, after step S22 detects a target in the single-frame picture, the target is placed into a tracking queue (hereinafter also called the tracking chain), and a target tracking algorithm may then be used to perform preset tracking and analysis on the target.
Optionally, before the step of extracting the targets' feature information from the single-frame picture, the method further comprises: constructing a metadata structure. Optionally, the targets' feature information is extracted according to the metadata structure, i.e., the feature information of the targets in the single-frame picture is extracted according to the metadata structure.
In one embodiment, the metadata structure includes the basic attribute units of a pedestrian, such as at least one of: the camera address, the times the target enters and leaves the camera, the target's track information at the current monitoring node, the colors the target wears, or screenshots of the target. For example, the metadata structure of a pedestrian may be as shown in Table 1 below; the metadata structure may also include other information needed by the user but not included in the table.
Optionally, in one embodiment, to save network transmission resources, the metadata structure contains only some basic attribute information; other attributes can be obtained by mining related information such as the target trajectory.
Table 1  Metadata structure of a pedestrian

Property name              Type    Description
Camera ID                  short   Camera node serial number
Target appearance time     long    Time the target enters the monitoring node
Target departure time      long    Time the target leaves the monitoring node
Target trajectory          point   Movement track of the target at the current node
Target ID                  short   Target ID identification number
Target jacket color        short   One of 10 predefined colors
Target trousers color      short   One of 5 predefined colors
Target whole-body shot     image   Recorded whole-body screenshot of the target
Target head-shoulder shot  image   Recorded head screenshot of the target
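As an illustration only, Table 1 could be carried in code as the following Python dataclass; the field names are translated from the table, and holding the image columns as numpy arrays is an assumption.

```python
# Sketch of Table 1: one record per pedestrian at one monitoring node.
from dataclasses import dataclass, field
from typing import List, Optional, Tuple
import numpy as np

@dataclass
class PedestrianMeta:
    camera_id: int                                   # camera node serial number (short)
    enter_time: int                                  # time target enters the node (long)
    leave_time: int                                  # time target leaves the node (long)
    trajectory: List[Tuple[int, int]] = field(default_factory=list)  # track points (point)
    target_id: int = 0                               # target ID number (short)
    jacket_color: int = 0                            # index into 10 predefined colors
    trousers_color: int = 0                          # index into 5 predefined colors
    full_crop: Optional[np.ndarray] = None           # whole-body screenshot (image)
    head_shoulder_crop: Optional[np.ndarray] = None  # head-and-shoulder screenshot (image)
```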
In another embodiment, the metadata structure may also include the basic attribute information of a vehicle, such as at least one of: the camera address, the times the target enters and leaves the camera, the target's track information at the current monitoring node, the target's exterior color, the target's license plate number, or screenshots of the target.
It is understood that the information the metadata structure specifically includes, and the definition of the metadata's data types, are initially set as needed, or, after the initial setting, are specified by the user from among the numerous configured items as the particular attribute information to be obtained.
In one embodiment, the initial setting of the metadata structure covers categories such as the camera address, the times the target enters and leaves the camera, the target's track information at the current monitoring node, the colors the target wears, or screenshots of the target; when performing target recognition, the user can specify, according to his own needs, that the times the target enters and leaves the camera be obtained.
In one embodiment, when the target in the single-frame picture is a pedestrian, the pedestrian's feature information is extracted according to the preset pedestrian metadata structure, i.e., at least one of the times the pedestrian enters and leaves the camera, the address of the camera where the pedestrian currently is, the pedestrian's track information at the current monitoring node, the colors the pedestrian wears, or a current screenshot of the pedestrian, or other target attribute information specially designated by the user, such as the times the pedestrian enters and leaves the camera and the colors the pedestrian wears.
Optionally, when a target is detected and recognized from the single-frame picture, the target's feature information is obtained and, at the same time, the target's image is cropped from the original video frame; model training is then performed using the YOLOv2 framework (YOLOv2 is a deep-learning-based target detection and recognition method proposed by Joseph Redmon in 2016).
In one embodiment, when the target detected in the single-frame picture is a pedestrian, the image of the detected pedestrian is cropped from the original video frame; then, using head-shoulder, upper-body and lower-body detection models trained with the YOLOv2 framework, the pedestrian is segmented into parts, the clothing color information of the upper and lower body is determined, and the pedestrian's head-shoulder picture is cropped out.
In another embodiment, when the target detected in the single-frame picture is a vehicle, the image of the detected vehicle is cropped from the original video frame; then a vehicle detection model trained with the YOLOv2 framework detects and recognizes the vehicle, determines its body color, recognizes the license plate information, and crops out the picture of the vehicle. It is understood that, because the categories to be recognized can be selected by the user, whether vehicle detection and recognition is performed is decided by the administrator.
In another embodiment, when the target detected in the single-frame picture is an animal, the image of the detected animal is cropped from the original video frame; then an animal detection model trained with the YOLOv2 framework detects and recognizes the animal, determines information such as its color and species, and crops out the picture of the animal. It is understood that, because the categories to be recognized can be selected by the user, whether animal detection and recognition is performed is decided by the user.
Optionally, target detection and recognition may be performed on one single-frame picture at a time or on multiple single-frame pictures simultaneously.
In one embodiment, one single-frame picture undergoes target detection and recognition at a time, i.e., only the targets in one single-frame picture are detected and recognized each time.
In another embodiment, target detection and recognition can be performed on multiple pictures at a time, i.e., the targets in multiple single-frame pictures are detected and recognized simultaneously.
Optionally, after model training with the YOLOv2 framework, the detected targets are given ID (identity) labels to facilitate association in subsequent tracking. The ID numbers of the different target classes can be preset, and the upper limit of the ID numbers is set by the user.
Optionally, the detected and recognized targets are labeled with IDs automatically or manually.
In one embodiment, the detected and recognized targets are labeled such that, depending on the class of the detected target, the marked ID numbers differ in form; for example, pedestrian IDs can be set as digit + digit, vehicle IDs as capital letter + digit, and animal IDs as lowercase letter + digit, which is convenient for association in subsequent tracking. The rules can be set according to the user's habits and preferences and are not repeated here one by one.
In another embodiment, the detected and recognized targets are labeled such that, depending on the class of the detected target, the interval to which the marked ID number belongs differs. For example, the ID labels of detected pedestrian targets are set in the interval 1 to 1,000,000, and the ID labels of detected vehicle targets are set in the interval 1,000,001 to 2,000,000. The specific intervals depend on the initial settings and can be adjusted and changed as needed; a sketch of such an interval scheme follows.
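A small sketch of the interval-based ID scheme just described; the sequential counter is an assumed allocation strategy, since the text only fixes the intervals.

```python
# Per-class ID intervals: pedestrians 1..1,000,000, vehicles 1,000,001..2,000,000.
ID_RANGES = {"pedestrian": (1, 1_000_000), "vehicle": (1_000_001, 2_000_000)}
_next_id = {cls: lo for cls, (lo, _) in ID_RANGES.items()}

def assign_id(cls: str) -> int:
    lo, hi = ID_RANGES[cls]
    new_id = _next_id[cls]
    if new_id > hi:
        raise RuntimeError(f"ID interval exhausted for class {cls}")
    _next_id[cls] = new_id + 1
    return new_id
```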
Optionally, ID labeling of the detected targets can be completed automatically by the system according to preset rules, or performed manually by the user.
In one embodiment, when a pedestrian or vehicle target is detected and recognized in the single-frame picture, the system automatically assigns an ID to the detected target according to the class of the target and the previously configured numbering.
In another embodiment, the user manually assigns IDs to the targets in the picture: targets in single-frame pictures that were not automatically labeled by the system, targets that were missed, or other targets outside the preset detection classes can all be labeled independently by the user.
Optionally, referring to Fig. 3, in one embodiment, before step S22 performs target detection and recognition on the single-frame picture, the method further includes:
S21: Slice the video into single-frame pictures.
Optionally, slicing the video into single-frame pictures means slicing the video read in step S10 into single-frame pictures, in preparation for step S22.
Optionally, in one embodiment, the step of slicing the video into single-frame pictures slices the video read in step S10 with equidistant frame skipping or non-equidistant frame skipping.
In one embodiment, the step of slicing the video into single-frame pictures slices the video read in step S10 with equidistant frame skipping: the number of skipped frames is identical each time, i.e., the same number of frames is skipped at equal intervals when cutting into single-frame pictures, where the skipped frames contain no important information, i.e., they can be ignored. For example, with one frame skipped in between, the video is sliced by taking frame t, frame t+2, frame t+4, and so on; the skipped frames are frames t+1, t+3, etc. The skipped frames are frames judged to contain no important information, or frames that coincide, or very largely coincide, with the frames taken.
In another embodiment, the step of slicing the video into single-frame pictures slices the video read in step S10 with non-equidistant frame skipping: the numbers of skipped frames can differ, i.e., different numbers of frames are skipped at unequal intervals when cutting into single-frame pictures, where the skipped frames contain no important information and are negligible, having been judged, with certainty, to be unimportant. For example, in non-equidistant frame skipping, frame t is taken, then 2 frames are skipped and frame t+3 is taken, then 1 frame is skipped and frame t+5 is taken, then 3 frames are skipped and frame t+9 is taken; the skipped frames are t+1, t+2, t+4, t+6, t+7, t+8, etc., all judged not to contain the information required for the analysis. A sketch of the equidistant case follows.
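A minimal OpenCV sketch of equidistant frame skipping, assuming a fixed step of 2 (keep frame t, skip frame t+1); the step size is a tunable assumption.

```python
# Slice a video into single-frame pictures, keeping every `step`-th frame.
import cv2

def slice_video(path, step=2):
    cap = cv2.VideoCapture(path)
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:        # skip (step - 1) frames between kept ones
            frames.append(frame)
        idx += 1
    cap.release()
    return frames
```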
In various embodiments, the step of slicing the video into single-frame pictures may be performed automatically by the system on the video being read, or the user may choose whether to slice the video into single-frame pictures, or the user may manually input single-frame pictures that have already been sliced.
Optionally, in one embodiment, after the step of slicing the video into single-frame pictures is completed, i.e., once the read video has been cut into single-frame pictures, step S22 is executed automatically on the resulting pictures, i.e., target detection and recognition is performed on the sliced single-frame pictures; alternatively, the user selects the sliced single-frame pictures and decides whether to perform the target detection and recognition of step S22.
Optionally, during detection and recognition, the detected values of each target can be statistically accumulated according to certain rules.
In one embodiment, after step S22, for a detected target, the total number of frames in which it appears at the current monitoring node is accumulated, including statistics such as the number of frames with detected value A and the number of frames with detected value B (there may be one detected value or many, as determined by the detection results), and the statistical results are saved for later use.
Optionally, the method for correction is broadly divided into trajectory corrector and objective attribute target attribute correction.
Optionally, after obtaining the structural data of each target to target detection, resulting structures data are carried out Correction.It is being corrected to the flase drop data in structural data, correction is voted according to weight ratio, final most Probability data value for exact value, the data value of a small number of results is flase drop value.
In one embodiment, (call above-mentioned statistical result) after statistics calculates, it is found that detection recognizes certain in step S22 The frame number occurred in current monitor node of one target is 200 frames, wherein there is 180 frames to detect that the jacket color of the target is red Color detects that the jacket color of the target for black, is voted according to weight ratio in 20 frames, the correction of a final proof target it is accurate It is worth jacket color for red, and corresponding value in structural data is revised as red, is finally completed correction.
Optionally, trajectory correction is specifically as follows. Suppose a target appears in a certain monitoring scene for T frames, so that its track point set is obtained as G = {p_1, p_2, ..., p_N}. The mean and deviation of the track points on the X axis and the Y axis are computed, and abnormal and noisy track points are then rejected; with

$$\mu_x=\frac{1}{N}\sum_{i=1}^{N}x_i,\qquad \sigma_x=\sqrt{\frac{1}{N}\sum_{i=1}^{N}(x_i-\mu_x)^2}$$

(and with μ_y, σ_y defined likewise on the Y axis), a track point p_i = (x_i, y_i) is rejected as abnormal when |x_i − μ_x| > λσ_x or |y_i − μ_y| > λσ_y for a preset factor λ.
In one embodiment, trajectory correction rejects the track points that deviate excessively from the mean, reducing the interference of noise points; a sketch follows.
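A minimal numpy sketch of the trajectory correction above, under the assumption that the rejection rule is per-axis deviation beyond λ standard deviations; λ = 2 is an assumed value.

```python
# Reject abnormal/noisy track points by per-axis mean and standard deviation.
import numpy as np

def correct_trajectory(points, lam=2.0):
    """points: (N, 2) array of (x, y) track points; returns the inliers only."""
    pts = np.asarray(points, dtype=float)
    mu, sigma = pts.mean(axis=0), pts.std(axis=0)
    keep = np.all(np.abs(pts - mu) <= lam * np.maximum(sigma, 1e-9), axis=1)
    return pts[keep]
```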
Optionally, objective attribute target attribute correction is specific as follows:Objective attribute target attribute school is based on weighting criterion and corrects same mesh Target property value.Assuming that the jacket color label of some target is label={ " red ", " black ", " white " ... ... }, I.e. some property value has T classification.First it is converted into digital coding L=[m1,m2,m3,……,mT];Then frequency is obtained Highest encoded radio x and its frequency F finally directly exports the property value Y (exact value) of target.Expression is as follows:
F=T- | | M-mx||0
Y=label [mx]
Above formula needs to meet,
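A sketch of the weighted-vote attribute correction, reusing the 180-red/20-black example above; the 0.5 minimum-frequency fraction is an assumed threshold standing in for the preset condition.

```python
# Majority-vote attribute correction: the most frequent per-frame code wins,
# guarded by a minimum-frequency check.
from collections import Counter

def correct_attribute(per_frame_codes, labels, min_frac=0.5):
    T = len(per_frame_codes)
    m_x, F = Counter(per_frame_codes).most_common(1)[0]  # F = T - ||M - m_x||_0
    if F < min_frac * T:
        return None                # no sufficiently dominant value
    return labels[m_x]

# e.g. 180 frames "red" (code 0), 20 frames "black" (code 1) -> "red"
print(correct_attribute([0] * 180 + [1] * 20, ["red", "black", "white"]))
```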
Optionally, in one embodiment, the present invention combines the YOLO target detection framework for target recognition and localization, and uses the GoogLeNet network to extract each target's feature vector for subsequent target matching. GoogLeNet is a 22-layer-deep CNN neural network proposed by Google in 2014, widely used in fields such as image classification and recognition. Because the feature vectors extracted by such a deep learning network have good robustness and discriminability, the above steps can considerably improve the accuracy of the subsequent target tracking.
S23: Track the targets to obtain tracking results.
Optionally, in the step of tracking the detected targets to obtain tracking results, the tracked targets are the targets detected in step S22 or other targets specially designated by the user. Step S23 further comprises: tracking the targets, and recording the times the targets enter or leave the monitoring node and every position the targets pass, to obtain the targets' movement tracks. How exactly the targets are tracked to obtain the tracking results is addressed by the improved KCF- and Kalman-based multi-object tracking method that the present application provides on this basis, elaborated below.
In another embodiment, the video processing method provided by the present application further comprises step S24 on the basis of steps S21, S22 and S23 of the above embodiment, or comprises only steps S21, S22 and S24; see Fig. 4 and Fig. 5. Step S24 is as follows:
S24: Perform abnormal behavior detection on the targets.
Optionally, step S24 is the operation of performing abnormal behavior detection on the targets detected and recognized in the above step S21.
Optionally, abnormal behavior detection includes pedestrian abnormal behavior detection and vehicle abnormal behavior detection, where pedestrian abnormal behaviors include running, fighting and rioting, and traffic abnormal behaviors include collision, speeding and so on.
Processing the video by the above method to obtain the significant data avoids an excessive data volume and greatly relieves the pressure of network transmission.
In one embodiment, when abnormal behavior detection is performed on the pedestrian targets detected in step S21 and it is determined that at least a preset number of people at a monitoring node are running, a crowd riot can be determined. For example, it can be set that when step S24 determines that 10 people exhibit the running anomaly, a crowd riot is determined to have occurred; in other embodiments, the head-count threshold for determining a riot depends on the specific situation.
In another embodiment, it can be set that when step S24 determines that 2 vehicles have collided abnormally, a traffic accident is determined, and when step S24 determines that more than 3 vehicles exhibit abnormal collision behavior, a major traffic accident is determined. It is understood that the vehicle counts used in the determination can be adjusted as needed.
In another embodiment, when step S24 detects that a vehicle's speed exceeds a preset speed value, the vehicle is determined to be speeding; a screenshot of the video corresponding to the vehicle is saved and the vehicle's information, including its license plate number, is recognized.
Optionally, in one embodiment, when step S24 detects abnormal behavior, the monitoring node can perform acousto-optic (sound-and-light) alarm processing.
In one embodiment, the content of the acousto-optic alarm includes broadcasting voice prompts such as "Please don't crowd, mind your safety!" or other preset voice prompts; the acousto-optic alarm further includes turning on the warning lamp of the corresponding monitoring node to remind passing crowds and vehicles to be careful.
Optionally, the severity grade of the abnormal behavior is set according to the number of people exhibiting the abnormal behavior, with different severity grades corresponding to different emergency handling measures. The severity grades of abnormal behavior can be divided into yellow, orange and red. The emergency measure corresponding to a yellow-grade abnormal behavior is to raise an acousto-optic alarm; for an orange grade, it is to raise an acousto-optic alarm and contact the security personnel responsible for the monitoring point; for a red early warning, it is to raise an acousto-optic alarm, contact the security personnel responsible for the monitoring point, and at the same time promptly report to the police online.
In one embodiment, when the number of people exhibiting abnormal behavior is 3 or fewer, it is set as a yellow-grade crowd abnormal behavior; when the number is more than 3 and no more than 5, an orange-grade crowd abnormal behavior; and when the number exceeds 5, a red-grade crowd abnormal behavior. The specific numbers can be adjusted according to actual needs and are not repeated here one by one; a sketch of this grading follows.
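A sketch of the yellow/orange/red grading described above, using the thresholds of this embodiment (3 and 5 people); the string return values are illustrative.

```python
# Map the count of people exhibiting abnormal behavior to a severity grade.
def severity(num_abnormal: int) -> str:
    if num_abnormal <= 3:
        return "yellow"   # acousto-optic alarm only
    if num_abnormal <= 5:
        return "orange"   # alarm + contact on-site security personnel
    return "red"          # alarm + security personnel + online police report
```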
Optionally, in one embodiment, after the step of performing abnormal behavior detection on the targets, the method further comprises: if abnormal behavior is detected, saving a screenshot of the current video frame image and transmitting it, together with the detected feature information of the target exhibiting the abnormal behavior, to the cloud server.
Optionally, the feature information corresponding to the target exhibiting abnormal behavior can include information such as the camera ID, the abnormal event type, the time the abnormal behavior occurred, and a screenshot of the abnormal behavior, and can also include other required kinds of information. The information contained in the metadata structure of the abnormal behavior sent to the cloud server includes the structure in Table 2 below, and can include information of other categories.
Table 2  Metadata structure of an abnormal behavior

Property name             Data type  Description
Camera ID                 short      Unique ID of the camera
Abnormal event type       short      Two predefined kinds of abnormal behavior
Abnormal occurrence time  long       Time the abnormal situation occurred
Abnormal situation shot   image      Recorded screenshot of the abnormal behavior
In one embodiment, when abnormal behavior detection finds a pedestrian exhibiting the fighting anomaly, the corresponding screenshot of the current video frame image is saved, and the screenshot, together with the structured data corresponding to the target exhibiting the behavior, is transmitted to the cloud server. While the screenshot of the detected abnormal behavior is being sent to the cloud server, this monitoring node performs acousto-optic alarm processing and starts the corresponding emergency measures according to the grade of the abnormal behavior.
In another embodiment, when abnormal behavior detection finds a crowd riot occurring, the screenshot of the current video frame image is saved and sent to the cloud server for further processing by the cloud server; at the same time, the monitoring node raises an acousto-optic alarm and starts the corresponding emergency measures according to the grade of the abnormal behavior.
Specifically, in one embodiment, the step of performing abnormal behavior detection on the targets includes: extracting the optical-flow motion information of multiple feature points of one or more targets, and performing clustering and abnormal behavior detection according to the optical-flow motion information. On this basis, the present application also provides an abnormal behavior detection method based on clustered optical-flow features, elaborated below.
Referring to Fig. 6, which is a flow diagram of an embodiment of the improved KCF- and Kalman-based multi-object tracking method also provided by the present application (this method is at the same time step S23 of the above embodiment), the method specifically includes steps S231 to S234, as follows:
S231: Predict the tracking box, in the current frame, of each of the first plurality of targets by combining the tracking chain with the detection boxes corresponding to the first plurality of targets in the previous frame picture.
Optionally, the tracking chain is computed from the multi-object tracking of all, or of a consecutive subset of, the single-frame pictures obtained by slicing the video before the current frame picture, gathering the track information and empirical values of the multiple targets in all preceding pictures.
In one embodiment, the tracking chain is computed from the target tracking of all the pictures before the current frame picture and contains all the information of all targets in all frame pictures before the current frame.
In another embodiment, the tracking chain is computed from the target tracking of a consecutive subset of the pictures before the current frame picture; the more consecutive pictures enter the tracking computation, the higher the prediction accuracy.
Optionally, by combining the targets' feature information in the tracking chain with the detection boxes corresponding to the first plurality of targets in the previous frame picture, the tracking boxes of the tracked first plurality of targets in the current frame picture are predicted, e.g., the positions where the first plurality of targets are likely to appear in the current frame.
In one embodiment, the above step can predict the positions of the tracking boxes of the first plurality of targets in the current frame, i.e., obtain the predicted values of the first plurality of targets.
In another embodiment, the above step can predict the positions of the tracking boxes of the first plurality of targets in the frame after the current frame. The error of the predicted positions of the first plurality of targets' tracking boxes in the frame after the current frame is larger than that of the predicted positions of their tracking boxes in the current frame.
Optionally, the first plurality of targets refers to all the targets detected in the previous frame picture. A Kalman-prediction sketch follows.
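A sketch of the prediction step with OpenCV's Kalman filter under a constant-velocity model over the box centre; the noise covariances are assumed values, and extending the state to track box width/height follows the same pattern.

```python
# One Kalman filter per tracked target: state [cx, cy, vx, vy], measurement [cx, cy].
import cv2
import numpy as np

def make_kalman(cx, cy):
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)   # constant velocity
    kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2      # assumed value
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1  # assumed value
    kf.statePost = np.array([[cx], [cy], [0], [0]], np.float32)
    return kf

# Per frame: kf.predict()[:2] gives the predicted box centre in the current
# frame; after a successful match, call kf.correct(np.array([[mx], [my]], np.float32)).
```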
S232: Obtain the tracking boxes, in the current frame, of the first plurality of targets of the previous frame picture, and the detection boxes of the second plurality of targets in the current frame picture.
Specifically, the second plurality of targets refers to all the targets detected in the current frame picture.
Optionally, the tracking boxes of the first plurality of targets of the previous frame picture in the current frame and the detection boxes of the second plurality of targets in the current frame picture are obtained, where a tracking box is a rectangular box (or a box of another shape) at the position where one of the first plurality of targets is predicted to appear in the current frame, and a box contains one or more targets.
Optionally, when the tracking boxes of the first plurality of targets in the current frame and the detection boxes of the second plurality of targets in the current frame picture are obtained, the obtained tracking boxes and detection boxes carry the feature information of the targets they respectively correspond to, such as the targets' position information, color features, texture features and the like. Optionally, the corresponding feature information can be set by the user as needed.
S233: Establish the target association matrix between the tracking boxes of the first plurality of targets in the current frame and the detection boxes of the second plurality of targets in the current frame.
Optionally, the target association matrix is established between the tracking boxes, in the current frame, of the first plurality of targets of the previous frame picture obtained in step S232 and the detection boxes corresponding to the second plurality of targets detected in the current frame picture.
In one embodiment, suppose the first plurality of targets in the previous frame picture number N and the targets detected in the current frame number M; a target association matrix W of size M × N is then established, whose entries A_ij (0 < i ≤ M, 0 < j ≤ N) are determined by dist(i, j), IOU(i, j) and m(i, j), for example as:

$$A_{ij}=d(i,j)+m(i,j)+\bigl(1-\mathrm{IOU}(i,j)\bigr)$$

Here, I_W and I_h are the width and height of the image frame; dist(i, j) is the centroid distance between the next-frame tracking box predicted for the j-th target of the tracking chain obtained in the previous frame and the detection box of the i-th target obtained by detection and recognition in the current frame; d(i, j) is that centroid distance normalized by 1/2 of the image-frame diagonal,

$$d(i,j)=\frac{\mathrm{dist}(i,j)}{\frac{1}{2}\sqrt{I_W^{2}+I_h^{2}}}$$

and m(i, j) is the Euclidean distance between the two targets' feature vectors, the feature vectors being extracted with the GoogLeNet network; compared with traditional hand-crafted feature extraction, the features of this CNN framework model are more robust and more discriminative. The purpose of the normalization is mainly to ensure that d(i, j) and IOU(i, j) influence A_ij consistently. IOU(i, j) denotes the overlap ratio between the tracking box predicted in the current frame for the j-th target of the previous frame's tracking chain and the detection box of the i-th target obtained by detection and recognition in the current frame, i.e., the intersection of the tracking box and the detection box over their union:

$$\mathrm{IOU}(i,j)=\frac{\operatorname{area}(T_j\cap D_i)}{\operatorname{area}(T_j\cup D_i)}$$

Optionally, the value range of IOU(i, j) is 0 ≤ IOU(i, j) ≤ 1; the larger the value, the larger the overlap between the tracking box and the detection box. A code sketch follows.
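A numpy sketch of building the M × N association matrix; the (x, y, w, h) box format and the equal-weight cost A_ij = d + m + (1 − IOU) mirror the example formula above and are assumptions rather than a definitive weighting.

```python
# Build the association matrix from normalized centroid distance, IOU,
# and feature-vector Euclidean distance.
import numpy as np

def iou(a, b):
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def association_matrix(dets, trks, det_feats, trk_feats, frame_w, frame_h):
    half_diag = 0.5 * np.hypot(frame_w, frame_h)     # normalizer for dist(i, j)
    W = np.zeros((len(dets), len(trks)))             # M x N cost matrix
    for i, (db, df) in enumerate(zip(dets, det_feats)):
        for j, (tb, tf) in enumerate(zip(trks, trk_feats)):
            dc = np.hypot(db[0] + db[2] / 2 - tb[0] - tb[2] / 2,
                          db[1] + db[3] / 2 - tb[1] - tb[3] / 2)
            d = dc / half_diag                                   # d(i, j)
            m = np.linalg.norm(np.asarray(df) - np.asarray(tf))  # m(i, j)
            W[i, j] = d + m + (1.0 - iou(db, tb))                # A_ij
    return W
```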
In one embodiment, when a target is stationary, the centroid positions detected for the same target in two consecutive frames should be at the same point or deviate very little, so the value of IOU should be approximately 1 and d(i, j) should tend to 0, making the value of A_ij small; and during target matching the value of m(i, j) is also small, so when matching, the possibility that the target with ID = j in the tracking chain matches successfully with the detected target with ID = i in the detection chain is greater. If the positions of the same target's detection boxes in two consecutive frames differ greatly and do not overlap, IOU should be 0 and m(i, j) is large, so the value of d(i, j) is large and the possibility that the target with ID = j in the tracking chain matches successfully with the detected target with ID = i in the detection chain is small.
Optionally, besides the centroid distance, IOU, and the Euclidean distance of the targets' feature vectors, the establishment of the target association matrix can also refer to other feature information of the targets, such as color features, texture features and the like. It is understood that the more indices are referenced, the higher the accuracy, but real-time performance will decline accordingly because of the increased computation.
Optionally, in one embodiment, when good real-time performance must be guaranteed, in most cases only the position information of the targets in the two frame images is referenced to establish the target association matrix.
In one embodiment, the target association matrix between the tracking boxes corresponding to the first plurality of targets and the detection boxes of the current frame corresponding to the second plurality of targets can be established with reference to the targets' position information and the colors the targets wear (or the targets' exterior colors).
S234: Correct with the target matching algorithm, to obtain the actual positions corresponding to the first portion of the targets in the current frame.
Optionally, using the target matching algorithm, the target values are corrected according to the actually detected observations of the targets and the predicted values corresponding to the target detection boxes in step S231, to obtain the actual positions in the current frame of the first plurality of targets, that is, of those targets of the first plurality in the previous frame that also appear among the second plurality of targets of the current frame. It should be understood that, because the observations of the second plurality of targets in the current frame can carry a certain error due to factors such as the clarity of the sliced pictures, the positions of the first plurality of targets in the current frame, predicted by combining the tracking chain with the detection boxes of the first plurality of targets in the previous frame picture, are used to correct the actual positions of the second plurality of targets.
Optionally, the target matching algorithm is the Hungarian algorithm (Hungarian); the observation is the target feature information obtained during target detection and recognition in step S22, including the target's class, the target's position information and the like; and the target's predicted value is the target's position value in the current frame, and other feature information, predicted in step S231 by combining the tracking chain with the target's position in the previous frame. The target's position information serves as the primary basis of judgment, and the other feature information as secondary bases. A matching sketch follows.
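A sketch of the Hungarian matching step using SciPy on the association matrix built above; the cost gate used to discard poor matches is an assumed tunable, not a value from the patent.

```python
# Hungarian assignment over the M x N association matrix W.
from scipy.optimize import linear_sum_assignment

def match(W, gate=1.0):
    rows, cols = linear_sum_assignment(W)   # minimizes total association cost
    matched = [(i, j) for i, j in zip(rows, cols) if W[i, j] <= gate]
    unmatched_dets = set(range(W.shape[0])) - {i for i, _ in matched}  # 2nd portion
    unmatched_trks = set(range(W.shape[1])) - {j for _, j in matched}  # 3rd portion
    return matched, unmatched_dets, unmatched_trks
```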
Optionally, in one embodiment, a target whose detection box among the second plurality of targets matches successfully with the tracking box of one of the first plurality of targets in the current frame is defined as belonging to the first portion of targets; every such matched pair, i.e., every pair of a tracking box of the first plurality of targets in the current frame and a detection box of the current frame that matches it successfully, is likewise defined as belonging to the first portion of targets, and each successfully matched pair of tracking box and detection box comes from the same target. It is understood that a detection box among the second plurality of targets matching successfully with a tracking box of the first plurality of targets in the current frame means that the position information and the other feature information correspond one to one, or that relatively many items correspond; the more items correspond, the more probable the match is successful.
In another embodiment, the number of targets in the first portion is smaller than the number of the first plurality of targets: only some of the tracking boxes of the first plurality of targets in the current frame can match successfully with the detection boxes of the second plurality of targets, while the rest cannot match successfully in the current frame according to the feature information used as the matching basis.
Optionally, in different implementations, the step of matching the detection boxes of the second plurality of targets in the current frame with the tracking boxes, in the current frame, of the first plurality of targets of the previous frame includes: judging whether the match succeeds by the centroid distance and/or the overlap ratio between the detection boxes of the second plurality of targets in the current frame and the tracking boxes of the first plurality of targets in the current frame.
In one embodiment, when the centroid distance between the detection box of one or more of the second plurality of targets in the current frame and the tracking box, in the current frame, of one or more of the first plurality of targets of the previous frame is small, the overlap ratio is very high, and the feature vectors are close in distance, the target match is judged successful. It is understood that two adjacent sliced frames are separated by a very short time, i.e., the distance a target moves within this interval is very small, so the targets in the two frame pictures can then be judged to match successfully.
Optionally, the second plurality of targets includes first-part targets and second-part targets. As stated above, a first-part target is a target in the second plurality whose detection box matches a tracking box of the first plurality in the current frame. A second-part target is a target in the second plurality whose detection box fails to match any tracking box of the first plurality in the current frame; a second-part target that has no record in the tracking chain is defined as a new target. It can be understood that, besides new targets, the second-part targets may contain another kind of target: one that failed to match the first plurality but has already appeared in the tracking chain.
In one embodiment, the number of second-part targets may be 0; that is, every detection box of the second plurality in the current frame can be matched with a tracking box of the first plurality in the current frame, so that no second-part target remains.
Optionally, after the step of performing correction analysis with the target matching algorithm to obtain the actual positions of the first-part targets of the current frame, the method includes: filtering out the new targets among the second-part targets, and adding the new targets to the tracking chain. Another embodiment further includes: initializing a corresponding filter tracker with the initial position and/or feature information of each new target.
In one embodiment, the filter trackers include a Kalman filter (kalman), a kernelized correlation filter (kcf), and a filter that combines the Kalman filter with the kernelized correlation filter (a structure that merges the design features of both filters and schedules them jointly). The Kalman filter, the kernelized correlation filter, and the combined filter are all multi-target tracking algorithms implemented in software. The combined filter is a filter structure realized by joining the algorithmic structures of the Kalman filter and the kernelized correlation filter. In other embodiments, the filter tracker may also be another kind of filter, as long as it can realize the same function.
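A minimal sketch of such a combined per-target tracker, assuming OpenCV with the contrib modules (for cv2.TrackerKCF_create) and a constant-velocity Kalman model; the noise settings are illustrative, not the invention's parameters:

    import cv2
    import numpy as np

    class KalmanKCFTracker:
        """Sketch: pair a constant-velocity Kalman filter with OpenCV's
        KCF tracker for one target (opencv-contrib assumed)."""

        def __init__(self, frame, box):
            # State: (x, y, vx, vy); measurement: (x, y) of the box center.
            self.kf = cv2.KalmanFilter(4, 2)
            self.kf.transitionMatrix = np.array(
                [[1, 0, 1, 0], [0, 1, 0, 1],
                 [0, 0, 1, 0], [0, 0, 0, 1]], np.float32)
            self.kf.measurementMatrix = np.array(
                [[1, 0, 0, 0], [0, 1, 0, 0]], np.float32)
            self.kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
            x, y, w, h = box
            self.kf.statePost = np.array(
                [[x + w / 2.0], [y + h / 2.0], [0], [0]], np.float32)
            self.kcf = cv2.TrackerKCF_create()
            self.kcf.init(frame, box)

        def predict(self):
            # Kalman prediction of the next center position.
            return self.kf.predict()[:2].ravel()

        def correct(self, center):
            # Fuse an observed center (from a matched detection, or from
            # the KCF local tracker) back into the Kalman filter.
            self.kf.correct(np.float32(center).reshape(2, 1))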
Optionally, the data of the tracking chain is computed from the data of the previous frame and of all frames before it. The targets in the tracking chain include the first-part targets described above and third-part targets. Specifically, a first-part target is a target in the first plurality whose tracking box in the current frame matches a detection box of the second plurality; a third-part target is a target in the tracking chain that failed to match the second plurality.
It should be understood that the third-part targets are, in essence, all targets in the tracking chain except the first-part targets that successfully matched the second plurality.
Optionally, after the step S234 of performing correction analysis with the target matching algorithm to obtain the actual positions of the first-part targets of the current frame, the method includes: adding 1 to the lost-frame count of each third-part target, and removing a target from the tracking chain when its lost-frame count is greater than or equal to a preset threshold. It should be understood that the preset threshold of the lost-frame count is configured in advance and can be adjusted as needed.
In one embodiment, when the lost-frame count of a certain third-part target is greater than or equal to the preset threshold, that target is removed from the current tracking chain.
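A minimal sketch of this lost-frame bookkeeping, assuming a tracking chain of target objects with id and lost_frames attributes; the threshold value and the reset-on-match behavior are assumptions:

    LOST_FRAME_THRESHOLD = 10  # assumed preset value, adjustable as needed

    def update_lost_counts(tracking_chain, matched_ids):
        # Matched targets reset their counter (an assumption); third-part
        # targets gain one lost frame and leave the chain at the threshold.
        for target in list(tracking_chain):
            if target.id in matched_ids:
                target.lost_frames = 0
            else:
                target.lost_frames += 1
                if target.lost_frames >= LOST_FRAME_THRESHOLD:
                    tracking_chain.remove(target)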
Optionally, when a target is removed from the current tracking chain, the structured data corresponding to that target is uploaded to the cloud server; the cloud server can then combine the structured data with the empirical values in its database to perform an in-depth analysis of the target's trajectory or abnormal behavior.
It can be understood that, when the structured data of a target removed from the tracking chain is sent to the cloud server, the system performing this method may optionally interrupt the cloud server's in-depth analysis of that target.
Optionally, after the step S234 of performing correction analysis with the target matching algorithm to obtain the actual positions of the first-part targets of the current frame, the method includes: adding 1 to the lost-frame count of each third-part target, and, when the count is below the preset threshold, locally tracking the third-part target to obtain a current tracking value.
Further, in an embodiment, a correction is made according to the current tracking value of a third-part target and the predicted value corresponding to that target, to obtain the actual position of the third-part target. Specifically, in an embodiment, the current tracking value is obtained when the filter combining the kernelized correlation filter with the Kalman filter locally tracks the third-part target, and the predicted value is the position of the third-part target predicted by the Kalman filter (kalman).
Optionally, the tracking of the targets detected in the above step S22 is jointly completed by the Kalman filter tracker (kalman) and the kernelized correlation filter tracker (kcf) in combination.
In one embodiment, when every tracked target can be matched, i.e. when there is no suspected lost target, calling the Kalman filter tracker (kalman) alone is sufficient to complete the tracking.
In another embodiment, when a suspected lost target appears among the tracked targets, the filter combining the Kalman filter tracker (kalman) with the kernelized correlation filter tracker (kcf) is called to complete the tracking jointly, or the Kalman filter tracker (kalman) and the kernelized correlation filter tracker (kcf) complete it in succession.
Optionally, in an embodiment, the step S234 of performing correction with the target matching algorithm to obtain the actual positions of the first-part targets of the current frame includes: for each target among the first-part targets, correcting according to the predicted value of its current-frame tracking box and the observed value of its current-frame detection box, to obtain the actual position of each first-part target.
In one embodiment, for each first-part target, the predicted value of its tracking box in the current frame can be understood as follows: combining the empirical values in the tracking chain with the position information in the previous frame, the target's position in the current frame is predicted; the actual position of the first-part target is then obtained by combining this prediction with the position observed in the current frame (i.e. the observed value). This operation reduces the inaccuracy in measuring each target's true value that would otherwise be caused by errors in the prediction or in the observation.
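Using a Kalman filter set up as in the sketch above, the predict-then-correct cycle for one first-part target could look as follows; this is a sketch of the standard Kalman update, not the invention's exact procedure:

    import numpy as np

    def correct_position(kf, observed_center):
        # kf: a cv2.KalmanFilter configured as in the sketch above.
        # The Kalman prediction supplies the tracking-box predicted value;
        # the matched detection box supplies the observed value; correct()
        # fuses the two into the corrected (actual) center position.
        predicted = kf.predict()[:2].ravel()
        kf.correct(np.float32(observed_center).reshape(2, 1))
        return kf.statePost[:2].ravel(), predicted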
Optionally, in one embodiment, the above improved KCF- and Kalman-based multi-object tracking method can also perform trajectory analysis on multiple targets, recording the time each target enters and leaves the monitoring node and its successive positions within the monitored scene, thereby generating a track chain that clearly reflects the target's motion within the current monitoring node.
Referring to Figure 10, the present invention also provides a multi-target tracking analysis system 300, which includes an electrically connected processor 302 and memory 304. In operation, the processor 302 executes instructions to implement the improved multi-object tracking method based on KCF combined with Kalman described above, and saves the processing results produced by executing the instructions in the memory. It should be understood that the memory 304 stores the data corresponding to the algorithm instructions of the above method, including the data for the Kalman filter, the kernelized correlation filter, and the joint scheduling algorithm of the filter combining the Kalman filter with the kernelized correlation filter.
In one embodiment, the processor 302 performs a correction according to the current tracking value and the predicted value corresponding to a third-part target, to obtain the actual position of the third-part target. The details have been elaborated above and are not repeated here.
Referring to Figure 11, the present invention also provides a device 400 with a storage function, which stores program data; when the program data is executed, it implements an embodiment of the improved multi-object tracking method based on KCF and Kalman described above. Specifically, the device 400 with a storage function may be one of a memory, a personal computer, a server, a network device, a USB flash drive, or the like.
Referring to Fig. 7, which is a flow diagram of an embodiment of the anomaly detection method based on clustered optical-flow features that the application also provides (this method is, at the same time, step S24 of the above embodiment), the method includes steps S241 to S245, as follows:
S241: performing optical-flow detection on the detection-box regions of one or more targets.
Optionally, before anomaly detection is performed on the targets, detection and recognition of the targets has already been completed based on a preset algorithm, and the detection box of each target and its position have been obtained when target detection was performed on the single-frame picture; optical-flow detection is then performed on the detection boxes of one or more targets. The optical flow contains the motion information of the target. Optionally, the preset algorithm may be the YOLOv2 algorithm or another algorithm with a similar function.
It will be appreciated that, given the detection box of each target and the region where it lies in the acquired single-frame picture, and since the center of a detection box nearly coincides with the target's center of gravity, the position of each pedestrian target (or target of another type) in each frame image can be obtained from the detection boxes.
In one embodiment, the essence of performing optical-flow detection on the detection boxes of one or more targets is to obtain the motion information of the optical-flow points within each target's detection box, including the speed magnitude and the direction of motion of those points.
Optionally, the optical-flow detection that obtains the motion information of each optical-flow point is completed by the LK (Lucas-Kanade) pyramid optical-flow method or another optical-flow method with the same or a similar function.
Optionally, optical-flow detection may be performed each time on the detection box of a single target in each frame, or simultaneously on the detection boxes of multiple targets in each frame; in general, the number of targets processed per pass is determined by the initial system setting. It can be understood that this setting can be adjusted as needed: when fast optical-flow detection is required, it can be set to detect the boxes of multiple targets in each frame simultaneously; when very fine optical-flow detection is required, it can be set to process the detection box of one target per frame at a time.
Optionally, in one embodiment, optical-flow detection is performed each time on the detection box of one target across consecutive frames, or on the detection box of one target in a single frame.
Optionally, in another embodiment, optical-flow detection is performed each time on the detection boxes of multiple or all targets across consecutive frames, or on the detection boxes of multiple or all targets in a single frame.
Optionally, in one embodiment, before optical-flow detection is performed on a target, the approximate region of the target is first detected in the above steps, and optical-flow detection is then carried out directly on the region where the target appears (that is, the target detection region) in two consecutive frames. The two consecutive frames used for optical-flow detection are images of identical size.
Optionally, in one embodiment, performing optical-flow detection on the detection-box region of a target may mean performing it on the detection-box region of the target in one frame, storing the resulting data and information in local storage, and then performing optical-flow detection on the detection-box region of the target in the next frame or in a preset frame.
In one embodiment, optical-flow detection is performed each time on the detection box and region of one target, proceeding one by one through the detection boxes of all targets in the picture.
In another embodiment, optical-flow detection is performed on multiple targets in one picture at the same time; that is, it is performed each time on the detection boxes of all, or some, of the targets in a single-frame picture.
In another embodiment, optical-flow detection is performed each time on the detection boxes of all targets in multiple single-frame pictures.
In another embodiment, optical-flow detection is performed each time on the detection boxes of specifically designated targets of the same category across multiple single-frame pictures.
Optionally, after step S241 the obtained optical-flow information is added to a spatio-temporal model, so that the optical-flow vector information of several preceding and following frames can be obtained by statistical computation.
S242: extracting the optical-flow motion information of the feature points corresponding to the detection boxes in at least two consecutive frames, and computing the information entropy of the detection-box regions.
Optionally, in step S242, extracting the optical-flow motion information of the feature points corresponding to the detection boxes in at least two consecutive frames and computing the information entropy of the detection-box regions means computing over the feature points corresponding to the detection-box regions in those frames. The optical-flow motion information refers to the direction of motion of the optical-flow points and the magnitude of their speed: the direction and distance of motion of each point are extracted, and its speed is then computed. A feature point is a set of one or more pixels that can represent the target's feature information.
Optionally, after the optical-flow motion information of the feature points corresponding to the detection boxes in the two consecutive frames is extracted, the information entropy of each detection-box region is computed from the extracted optical-flow motion information. It is to be understood that the information entropy is computed from the optical-flow information of all optical-flow points within the target detection region.
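As a sketch of how such an information entropy could be computed, assuming the optical-flow points of one detection-box region are given as direction angles and magnitudes; the magnitude weighting and the 12-bin layout follow the HOF description later in this text, and the base-2 logarithm is an assumption:

    import numpy as np

    def flow_information_entropy(angles, magnitudes, bins=12):
        # Magnitude-weighted direction histogram of the region's flow
        # points, normalized to a distribution; Shannon entropy follows.
        hist, _ = np.histogram(angles, bins=bins, range=(0.0, 2 * np.pi),
                               weights=magnitudes)
        total = hist.sum()
        if total == 0:
            return 0.0
        p = hist[hist > 0] / total
        return float(-np.sum(p * np.log2(p)))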
Optionally, in step S242 the LK (Lucas-Kanade) pyramid optical-flow method (abbreviated to the LK optical-flow method below) extracts the pixel optical-flow feature information within the rectangular box region of consecutive frames containing only the pedestrian target, and the LK optical-flow extraction algorithm is accelerated with a graphics processor (Graphics Processing Unit), so that the optical-flow feature information of the pixels can be extracted online in real time. Here, optical-flow feature information refers to optical-flow vector information, abbreviated as the optical-flow vector.
Optionally, the optical-flow vector $\vec{V}$ extracted by the optical-flow algorithm is composed of two two-dimensional matrices $V_x$ and $V_y$, i.e. $\vec{V} = (V_x, V_y)$, where each entry of the matrices corresponds to one pixel position in the image: $V_x(i, j)$ represents the pixel displacement along the X axis of the same pixel between adjacent frames, and $V_y(i, j)$ represents its pixel displacement along the Y axis.
Optionally, the pixel displacement refers to the distance a feature point moves between two adjacent frames, and it can be obtained directly by the LK optical-flow extraction algorithm.
In one embodiment, step S242 computes, for single-frame images on which target detection has been completed and the detection boxes obtained, the optical-flow motion information of the feature points corresponding to each target's detection box. A feature point can also be construed as a point where the image gray value changes sharply, or a point of large curvature on an image edge (i.e. the intersection of two edges). This operation reduces the amount of computation and improves computational efficiency.
Optionally, step S242 can simultaneously extract the optical-flow information of the feature points corresponding to all detection boxes, or to some detection boxes, in two consecutive frames; it can also simultaneously compute the optical-flow information of the feature points corresponding to all detection boxes in more than two consecutive images. The number of images computed per pass is set in the system in advance and can be adjusted as needed.
In one embodiment, step S242 simultaneously computes the optical-flow information of the feature points corresponding to all detection boxes in two consecutive frames.
In another embodiment, step S242 simultaneously computes the optical-flow information of the feature points corresponding to all detection boxes in more than two consecutive images.
Optionally, step S242 can simultaneously compute the optical-flow information of the detection boxes corresponding to all targets in at least two consecutive frames, or simultaneously compute the optical-flow information of the detection boxes of designated, mutually corresponding targets in at least two consecutive frames.
In one embodiment, step S242 simultaneously computes the optical-flow information of the detection boxes corresponding to all targets in at least two consecutive frames, e.g. the optical-flow information of the detection boxes of all targets in frame t and in frame t+1.
In another embodiment, step S242 simultaneously computes the optical-flow information of the detection boxes of designated, mutually corresponding targets in at least two consecutive frames, e.g. for the class-A targets of frame t and the class-A' targets of frame t+1 whose IDs are labeled 1 to 3: the optical-flow information of the detection boxes of targets A1, A2, A3 and of their corresponding targets A1', A2', A3' is extracted and computed simultaneously.
S243: establishing cluster points according to the optical-flow motion information and the information entropy.
Optionally, cluster points are established according to the optical-flow motion information extracted in step S242 and the information entropy computed from it. The optical-flow motion information is the information reflecting the motion characteristics of the optical flow, including the direction of motion and the speed magnitude, and may also include other relative-motion feature information; the information entropy is obtained by computation from the optical-flow motion information.
In one embodiment, the optical-flow motion information extracted in step S242 includes at least one of the direction of motion, the distance of motion, the speed magnitude, and other relative-motion feature information.
Optionally, before step S243 establishes cluster points according to the optical-flow motion information and the computed information entropy, the optical flow is first clustered with the K-means algorithm (k-means). The number of cluster points can be determined from the number of detection boxes at target detection, and the clustering criterion is that optical-flow points with the same direction of motion and speed magnitude are grouped into one cluster point. Optionally, in one embodiment, the value of K ranges from 6 to 9; K may of course take other values, which are not enumerated here.
Optionally, a cluster point is a set of optical-flow points whose directions of motion and speed magnitudes are identical or approximately identical.
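A minimal sketch of this clustering step with OpenCV's cv2.kmeans, encoding each optical-flow point as a velocity vector so that points with similar direction and speed land in one cluster (k = 6 follows the 6-to-9 range above; the other parameter values are assumed):

    import cv2
    import numpy as np

    def cluster_flow_points(angles, speeds, k=6):
        # Encoding direction and speed as a 2D velocity vector avoids the
        # wrap-around problem of clustering raw angles directly.
        data = np.float32(np.column_stack((speeds * np.cos(angles),
                                           speeds * np.sin(angles))))
        criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER,
                    20, 1.0)
        _, labels, centers = cv2.kmeans(data, k, None, criteria,
                                        5, cv2.KMEANS_PP_CENTERS)
        return labels.ravel(), centers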
S244: computing the kinetic energy of the cluster points or the kinetic energy of the target detection-box region. Specifically, taking the cluster points established in step S243 as units, the kinetic energy of those cluster points is computed, or the kinetic energy within the target detection-box region is computed.
In one embodiment, at least one of the kinetic energy of the cluster points established in step S243 and the kinetic energy of the target region is computed. It can be understood that, in different embodiments, whichever of the two computations is needed can be configured according to the specific requirements, or both can be configured at once; when only one is needed, the other can be manually deselected. Optionally, a spatio-temporal container of the motion is established from the position of each cluster point using the motion vectors of its preceding and following N frames, and the information entropy of the optical-flow histogram (HOF) of the detection region where each cluster point lies, together with the mean kinetic energy of the cluster point set, is computed.
Optionally, the kinetic energy of the target detection-box region takes the standard form

$$E = \sum_{i=0}^{k-1} \frac{1}{2} m v_i^2,$$

where $i = 0, \ldots, k-1$ indexes the optical flows within the region of a single target's detection box, $k$ is the total number of optical flows in that region after clustering, $v_i$ is the speed of the $i$-th optical flow, and, for convenience of computation, $m = 1$. Optionally, in one embodiment, the value of K ranges from 6 to 9; K may of course take other values, which are not repeated here.
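A direct transcription of this formula, assuming the per-flow speed magnitudes of one detection-box region are supplied:

    import numpy as np

    def region_kinetic_energy(speeds, m=1.0):
        # speeds: the k per-flow speed magnitudes of one detection-box
        # region after clustering; m = 1 as in the text above.
        return float(0.5 * m * np.sum(np.square(speeds)))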
S245: judging abnormal behavior according to the kinetic energy of the cluster points and/or the information entropy.
Optionally, whether the target corresponding to a cluster point exhibits abnormal behavior is judged from the kinetic energy of the cluster point, or from the kinetic energy of the target detection-box region, computed in step S244. When the target is a pedestrian, the abnormal behaviors include running, fighting and rioting; when the target is a vehicle, the abnormal behaviors include collision and speeding.
Specifically, both fighting and running are related to the information entropy of the target detection-box region and to the kinetic energy of the cluster points. When the abnormal behavior is fighting, the optical-flow information entropy of the detection-box region is large, and the kinetic energy of the target's cluster points, or of the target region, is also large. When the abnormal behavior is running, the kinetic energy of the target's cluster points, or of the target region, is large while the optical-flow information entropy of the detection-box region is small. When no abnormal behavior occurs, both the optical-flow information entropy of the detection-box region corresponding to the target and the kinetic energy of its cluster points, or of the target region, are small.
Optionally, in an embodiment, the step S245 of judging abnormal behavior according to the kinetic energy of the cluster points and/or the information entropy further includes: if the optical-flow information entropy of the detection-box region corresponding to a target is greater than or equal to a first threshold, and the kinetic energy of the target's cluster points, or of the target detection-box region, is greater than or equal to a second threshold, the abnormal behavior is judged to be fighting.
Optionally, in another embodiment, the step of judging abnormal behavior according to the kinetic energy of the cluster points and/or the information entropy further includes: if the information entropy of the detection-box region corresponding to a target is greater than or equal to a third threshold but smaller than the first threshold, while the kinetic energy of the target's cluster points, or of the target detection-box region, is greater than the second threshold, the abnormal behavior is judged to be running.
In one embodiment, for example, the information entropy is denoted H and the kinetic energy is denoted E.
Optionally, the judgment of running behavior is based on the ratio of the information entropy to the kinetic energy together with a kinetic-energy threshold. In one embodiment, the value range of the ratio $\frac{H}{E}$ for running behavior is obtained by training in the present invention, and $\lambda_1$ takes the value 3000, where $\frac{H}{E}$ denotes the ratio of the optical-flow information entropy H of the target detection-box region to the kinetic energy E of that region, and $\lambda_1$ is a preset kinetic energy value.
Optionally, the judgment of fighting behavior is likewise based on this ratio. In one embodiment, the value range of the ratio $\frac{H}{E}$ for fighting behavior is obtained by training in the present invention, and $\lambda_2$ takes the value 3.0, where $\frac{H}{E}$ denotes the ratio of the information entropy H to the kinetic energy E, and $\lambda_2$ is a preset information entropy value.
Optionally, the judgment of normal behavior uses a corresponding pair of thresholds. In one embodiment, the values obtained by training in the present invention are $\lambda_3 = 1500$ and $\lambda_4 = 1.85$, where $\lambda_3$ is a preset kinetic energy value smaller than $\lambda_1$, and $\lambda_4$ is a preset information entropy value smaller than $\lambda_2$.
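A sketch of the threshold logic of the two embodiments above; the arguments t1, t2, t3 stand for the first, second, and third thresholds and are assumed to be supplied by the caller:

    def classify_behavior(H, E, t1, t2, t3):
        # Fighting: entropy and kinetic energy both high; running:
        # entropy moderate but kinetic energy high (t1 > t3 assumed).
        if H >= t1 and E >= t2:
            return "fighting"
        if t3 <= H < t1 and E > t2:
            return "running"
        return "normal"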
In one embodiment, when a certain pedestrian target is running, the optical-flow kinetic energy of the cluster points corresponding to that pedestrian is large, and the optical-flow information entropy is small.
Optionally, when a crowd riot occurs, multiple pedestrian targets are first detected in a single-frame picture; then, when anomaly detection is performed on the multiple detected pedestrian targets and it is found that several of them exhibit the running anomaly, it can be judged that a crowd riot is occurring.
In one embodiment, when anomaly detection is performed on the multiple targets detected in a single-frame picture and the motion kinetic energy of the cluster points corresponding to more than a preset threshold number of targets is large while the optical-flow information entropy is small, it can be judged that a crowd riot may have occurred.
Optionally, when the target is a vehicle, the judgment of abnormal behavior is likewise based on the dominant optical-flow directions within the detection boxes corresponding to the targets and on the distance between the detected vehicles (which can be derived from their position information), from which it is judged whether a collision has occurred. It can be understood that when the dominant optical-flow directions of the detection boxes of two vehicle targets are opposite and the two vehicles are close together, a suspected collision can be judged.
Optionally, the result of the abnormal-behavior judgment of step S245 is saved and sent to the cloud server.
The method described in steps S241 to S245 can effectively improve the efficiency and real-time performance of anomaly detection.
Optionally, in an embodiment, before the step S242 of extracting the optical-flow motion information of the feature points corresponding to the detection boxes in at least two consecutive frames and computing the information entropy of the detection-box regions, the method further includes: extracting the feature points of the at least two consecutive frames.
Optionally, extracting the feature points of at least two consecutive frames may mean extracting, each time, the feature points of the target detection boxes in two consecutive images, or extracting, each time, the feature points of the target detection boxes in more than two consecutive images; the number of images extracted per pass is set when the system is initialized and can be adjusted as needed. A feature point refers to a point where the image gray value changes sharply, or a point of large curvature on an image edge (i.e. the intersection of two edges).
Optionally, in an embodiment, the step S242 of extracting the optical-flow motion information of the feature points corresponding to the detection boxes in at least two consecutive frames and computing the information entropy of the detection-box regions further includes: computing, with a preset algorithm, the feature points that match between the targets in consecutive frames, and removing the unmatched feature points from the two consecutive frames.
Optionally, first the image-processing function goodFeaturesToTrack() is called to extract the feature points (also called Shi-Tomasi corners) within the target region detected in the previous frame image; then the function calcOpticalFlowPyrLK() of the LK-pyramid optical-flow extraction algorithm is called to compute the feature points of the target detected in the current frame that match the previous frame, and the feature points that did not move between the two frames are removed, thereby obtaining the optical-flow motion information of the pixels. The feature points in this embodiment can be Shi-Tomasi corners, or simply corners.
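A sketch of this extraction step with OpenCV; the detector and flow parameter values are assumed for illustration:

    import cv2
    import numpy as np

    def roi_flow(prev_gray, curr_gray, box):
        # Sparse LK pyramid flow inside one detection box (x, y, w, h).
        x, y, w, h = box
        prev_roi = prev_gray[y:y + h, x:x + w]
        curr_roi = curr_gray[y:y + h, x:x + w]
        p0 = cv2.goodFeaturesToTrack(prev_roi, maxCorners=100,
                                     qualityLevel=0.01, minDistance=5)
        if p0 is None:
            return np.empty((0, 2))
        p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_roi, curr_roi, p0,
                                                 None, winSize=(15, 15),
                                                 maxLevel=2)
        ok = status.ravel() == 1
        disp = (p1[ok] - p0[ok]).reshape(-1, 2)
        moved = np.linalg.norm(disp, axis=1) > 1e-3  # drop static points
        return disp[moved]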
Optionally, in an embodiment, before the step S243 of establishing cluster points according to the optical-flow motion information, the method further includes: drawing the optical-flow motion directions of the feature points in the image.
In one embodiment, before the step of establishing cluster points according to the optical-flow motion information, the optical-flow motion direction of each feature point is drawn in each frame image.
Optionally, referring to Fig. 8, in an embodiment, after the step S243 of establishing cluster points according to the optical-flow motion information, the method further includes steps S2431 and S2432:
S2431: establishing a spatio-temporal container based on the position of the target detection region and the motion vectors.
Optionally, the spatio-temporal container is established based on the position information of the target detection region, i.e. of the target detection box, and on the motion-vector relation of the cluster points within the detection box across the preceding and following frames.
Optionally, Fig. 9 is a schematic diagram of the spatio-temporal container of the motion in an embodiment, in which AB is the two-dimensional height of the container, BC is its two-dimensional width, and CE is its depth. The depth CE of the container is the number of video frames, and ABCD represents the container's two-dimensional size, which is the size of the target detection box at target detection. It can be understood that the spatio-temporal container model can also take other shapes; when the shape of the target detection box changes, the container model can change accordingly.
Optionally, in one embodiment, when the shape of the target detection box changes, the corresponding established spatio-temporal container changes with the shape of the target detection box.
S2432: computing the mean information entropy of the optical-flow histogram of the detection box corresponding to each cluster point, and the mean motion kinetic energy.
Optionally, the mean information entropy of the optical-flow histogram of the detection box corresponding to each cluster point and the mean kinetic energy are computed. The optical-flow histogram, HOF (Histogram of Oriented Optical Flow), records the probability that the optical-flow points are distributed in each particular direction.
Optionally, the basic idea of the HOF is to project each optical-flow point into the corresponding histogram bin according to its direction value, weighted by the amplitude of the flow; in the present invention the number of bins is 12. The speed magnitude and direction of each optical-flow point are computed as

$$v = \frac{\sqrt{V_x^2 + V_y^2}}{T}, \qquad \theta = \arctan\left(\frac{V_y}{V_x}\right),$$

where T is the time interval between two adjacent frames.
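A sketch of building such a 12-bin magnitude-weighted HOF from the displacement matrices Vx, Vy and the frame interval T, following the formulas above:

    import numpy as np

    def hof_descriptor(vx, vy, T, bins=12):
        # Per-point speed and direction from the displacement matrices,
        # per the formulas above, then a speed-weighted direction histogram.
        speed = np.sqrt(vx ** 2 + vy ** 2) / T
        angle = np.arctan2(vy, vx) % (2 * np.pi)
        hist, _ = np.histogram(angle, bins=bins, range=(0.0, 2 * np.pi),
                               weights=speed)
        return hist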
Using the optical-flow histogram can reduce the influence of factors such as the target's size, its direction of motion, and noise in the video on the optical-flow features of the target's pixels.
Optionally, in different embodiments, the kinds of abnormal behavior include one of fighting, running, rioting, or a traffic anomaly.
In one embodiment, when the target is a pedestrian, the abnormal behaviors include fighting, running and rioting.
In another embodiment, when the target is a vehicle, the abnormal behaviors include, for example, collision and speeding.
Optionally, in one embodiment, computing the mean information entropy of the optical-flow histogram of the detection box corresponding to each cluster point and the mean kinetic energy essentially means computing the mean information entropy and the mean kinetic energy of the optical flow of each cluster center over the preceding and following N frames.
The above anomaly detection method can effectively improve the intelligence of present-day security systems, and at the same time can effectively reduce the amount of computation during anomaly detection, improving the efficiency, real-time performance, and accuracy with which the system detects the abnormal behavior of targets.
Optionally, after the step of tracking the targets to obtain the tracking results, the method further comprises: sending the structured data of the target objects that have left the current monitoring node to the cloud server.
Optionally, during the tracking of a target, when a certain target's feature information, especially its position information, has not been updated within a preset time, it can be judged that the target has left the current monitoring node, and the target's structured data is sent to the cloud server. The preset time can be set by the user, for example 5 minutes or 10 minutes; the options are not enumerated here.
In one embodiment, during tracking, when the position information, i.e. the coordinate values, of a certain pedestrian is found not to have been updated within a certain preset time, it can be judged that this pedestrian has left the current monitoring node, and the structured data corresponding to the pedestrian is sent to the cloud server.
In another embodiment, during tracking, when the position coordinates of a certain pedestrian or vehicle are found to stay at the edge of the monitoring node's field of view, it can be judged that the pedestrian or vehicle has left the current monitoring node, and the structured data of that pedestrian or vehicle is sent to the cloud server.
Optionally, the preset feature information of a target determined to have left the current monitoring node (such as the target's attribute values, motion trajectory, target screenshots, and other required information) is packaged into a preset metadata structure, then encoded into a preset format and sent to the cloud server; the cloud server parses the received packaged data, extracts the target's metadata, and saves it to the database.
In one embodiment, the preset feature information of a target determined to have left the current node is packaged into the preset metadata structure, then encoded into the JSON data format and sent to the cloud server over the network; the cloud server parses the received JSON data packets, extracts the metadata structure, and saves it to the cloud server's database. It should be understood that the preset feature information can be adjusted as needed; the possibilities are not enumerated here.
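As an illustration of this packaging step, the following sketch encodes a hypothetical metadata structure into JSON with Python; the field names are illustrative and are not the invention's actual schema:

    import json

    # Hypothetical metadata for a target that has left the node.
    metadata = {
        "target_id": 17,
        "target_class": "pedestrian",
        "attributes": {"gender": "unknown", "clothing_color": "blue"},
        "trajectory": [[102, 240], [110, 238], [121, 236]],
        "enter_time": "2017-10-31T08:15:02",
        "leave_time": "2017-10-31T08:17:45",
    }
    packet = json.dumps(metadata).encode("utf-8")  # JSON bytes for upload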
Optionally, step S23, tracking the targets to obtain the tracking results, and step S24, performing anomaly detection on the targets, are both based on the target detection and recognition that step S22 performs on the single-frame pictures; only on that basis can the targets be tracked and their abnormal behavior be detected.
Optionally, step S24, the anomaly detection on the targets, can be carried out directly after step S22 is completed, can be carried out at the same time as step S23, or can be carried out after step S23 and on the basis of the tracking result of step S23.
Optionally, when step S24 performs anomaly detection on the basis of the tracking result obtained by step S23, the detection of the targets' abnormal behavior can be more accurate.
The method of video structuring based on target behavior attributes described in steps S21 to S24 can effectively reduce the network transmission pressure of surveillance video, effectively improve the real-time performance of the monitoring system, and significantly cut data traffic fees.
Optionally, the step of performing target detection and recognition on the single-frame pictures further comprises extracting the feature information of the targets in each single-frame picture. It can be understood that after the read video is sliced into multiple single-frame pictures, target detection and recognition is performed on the sliced single-frame pictures.
Optionally, the feature information of the targets in the single-frame pictures obtained by slicing the video is extracted; the targets include pedestrians, vehicles and animals, and the feature information of buildings, roads and bridges can also be extracted as needed.
In one embodiment, when the target is a pedestrian, the extracted feature information includes characterization information such as the pedestrian's position, clothing color, gender, motion state, motion trajectory, dwell time, and other obtainable information.
In another embodiment, when the target is a vehicle, the extracted feature information includes the vehicle's model, the color of the body, the vehicle's travel speed, its license plate number, and so on.
In another embodiment, when the target is a building, the extracted feature information includes the basic information of the building, such as its floor height, the height of the building, and the building's exterior color.
In another embodiment, when the target is a road or bridge, the extracted feature information includes information such as the width of the road, the road's name, and the road's speed limit.
Optionally, the step of performing anomaly detection on the targets includes: extracting the motion vectors of multiple pixels of one or more targets, and performing anomaly detection according to the relations between the motion vectors.
In one embodiment, for details refer to the anomaly detection method described above.
In one embodiment, the structured data obtained in the video-processing stage is initially set to include at least one of the target's position, target class, target attributes, target motion state, target trajectory, and target appearance time. This can be adjusted according to the user's needs: for example, only the target's position information is obtained in the video-processing stage, or the target's position and class are obtained together. It can be understood that the categories of information obtained in the video-processing stage are selected by the user.
Optionally, after the video structuring ends, the obtained structured data is uploaded to the cloud server; the cloud server can save the structured data uploaded by each monitoring node and analyze the structured data uploaded by each monitoring node in depth, to obtain the preset results.
Optionally, the step in which the cloud server analyzes the structured data uploaded by each monitoring node in depth can be configured automatically by the system or set manually by the user.
In one embodiment, the basic analysis content included in the cloud server's in-depth analysis is preset, such as counting the number of pedestrians, analyzing target trajectories, determining whether targets exhibit abnormal behavior, and counting the number of targets exhibiting abnormal behavior; the in-depth analysis also includes further content specially selected by the user, such as the proportion of targets in each time period and the speed of targets.
The foregoing is merely embodiments of the present invention and is not intended to limit the scope of the invention; any equivalent structure or equivalent process transformation made using the contents of the description and the accompanying drawings of the invention, whether used directly or indirectly in other related technical fields, is likewise included within the scope of protection of the invention.

Claims (10)

1. An improved multi-object tracking method based on KCF and Kalman, characterized by comprising:
predicting, with reference to a tracking chain and the detection boxes corresponding to a first plurality of targets in a previous frame picture, the tracking box of each of the first plurality of targets in the current frame;
obtaining the tracking boxes, in the current frame, corresponding to the first plurality of targets of the previous frame picture, and the detection boxes of a second plurality of targets in the current frame picture;
establishing a target association matrix between the tracking boxes, in the current frame, of the first plurality of targets and the detection boxes of the second plurality of targets of the current frame;
performing correction with a target matching algorithm, to obtain the actual positions corresponding to the first-part targets of the current frame.
2. The improved multi-object tracking method based on KCF and Kalman according to claim 1, characterized in that a target in the second plurality whose detection box matches a tracking box, in the current frame, of the first plurality of targets is defined as a first-part target.
3. The improved multi-object tracking method based on KCF and Kalman according to claim 2, characterized in that the step of matching the detection boxes of the second plurality of targets in the current frame picture with the tracking boxes, in the current frame, of the first plurality of targets of the previous frame picture includes: judging whether the match succeeds according to the centroid distance and/or the overlap ratio between a detection box of the second plurality of targets in the current frame picture and a tracking box, in the current frame picture, of the first plurality of targets of the previous frame picture.
4. The improved multi-object tracking method based on KCF and Kalman according to claim 1, characterized in that the second plurality of targets includes first-part targets and second-part targets: among the first and second pluralities of targets, a target whose detection box in the current frame successfully matches a tracking box from the previous frame is defined as a first-part target, a target whose detection box in the current frame does not match any tracking box from the previous frame is defined as a second-part target, and a second-part target that has no record in the tracking chain is defined as a new target;
after the step of performing correction analysis with the target matching algorithm to obtain the actual positions corresponding to the first-part targets of the current frame, the method includes:
filtering out the new targets among the second-part targets;
adding the new targets to the tracking chain.
5. The improved multi-object tracking method based on KCF and Kalman according to claim 1, characterized in that the targets in the tracking chain include first-part targets and third-part targets, a target in the tracking chain that does not match the second plurality of targets being defined as a third-part target;
after the step of performing correction analysis with the target matching algorithm to obtain the actual positions corresponding to the first-part targets of the current frame, the method includes:
adding 1 to the lost-frame count corresponding to each third-part target, and removing the corresponding target from the tracking chain when its lost-frame count is greater than or equal to a preset threshold.
6. The improved multi-object tracking method based on KCF and Kalman according to claim 1, characterized in that the targets in the tracking chain include first-part targets and third-part targets, a target in the tracking chain that does not match the second plurality of targets being defined as a third-part target.
7. The improved multi-object tracking method based on KCF and Kalman according to claim 1, characterized in that, after the step of performing correction analysis with the target matching algorithm to obtain the actual positions corresponding to the first-part targets of the current frame, the method includes:
adding 1 to the lost-frame count corresponding to each third-part target, and, when the count is below a preset threshold, locally tracking the third-part target to obtain a current tracking value;
performing correction according to the local tracking value and the predicted value corresponding to the third-part target, to obtain the actual position of the third-part target.
8. The improved multi-object tracking method based on KCF and Kalman according to claim 1, characterized in that the step of performing correction with the target matching algorithm to obtain the actual positions corresponding to the first-part targets of the current frame includes:
for each target among the first-part targets, performing correction according to the predicted value corresponding to the target's current-frame tracking box and the observed value corresponding to its current-frame detection box, to obtain the actual position of each target among the first-part targets.
9. A device with a storage function, characterized in that it stores program data which, when executed, implements the method according to any one of claims 1 to 8.
10. A multi-target tracking analysis system, characterized by comprising an electrically connected processor and memory;
the processor is coupled to the memory, and in operation the processor executes instructions to implement the method according to any one of claims 1 to 8, and stores the processing results produced by executing the instructions in the memory.
CN201711063087.7A 2017-10-31 2017-10-31 Improved multi-target tracking method, system and device based on KCF and Kalman Active CN108053427B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711063087.7A CN108053427B (en) 2017-10-31 2017-10-31 Improved multi-target tracking method, system and device based on KCF and Kalman

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711063087.7A CN108053427B (en) 2017-10-31 2017-10-31 Improved multi-target tracking method, system and device based on KCF and Kalman

Publications (2)

Publication Number Publication Date
CN108053427A true CN108053427A (en) 2018-05-18
CN108053427B CN108053427B (en) 2021-12-14

Family

ID=62119840

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711063087.7A Active CN108053427B (en) 2017-10-31 2017-10-31 Improved multi-target tracking method, system and device based on KCF and Kalman

Country Status (1)

Country Link
CN (1) CN108053427B (en)

Cited By (66)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108776974A (en) * 2018-05-24 2018-11-09 南京行者易智能交通科技有限公司 A kind of real-time modeling method method suitable for public transport scene
CN108875600A (en) * 2018-05-31 2018-11-23 银江股份有限公司 A kind of information of vehicles detection and tracking method, apparatus and computer storage medium based on YOLO
CN108921873A (en) * 2018-05-29 2018-11-30 福州大学 The online multi-object tracking method of Markovian decision of filtering optimization is closed based on nuclear phase
CN109087331A (en) * 2018-08-02 2018-12-25 阿依瓦(北京)技术有限公司 A kind of motion forecast method based on KCF algorithm
CN109118523A (en) * 2018-09-20 2019-01-01 电子科技大学 A kind of tracking image target method based on YOLO
CN109195100A (en) * 2018-07-09 2019-01-11 南京邮电大学 A kind of blind area data method for early warning based on self-adapting window
CN109316202A (en) * 2018-08-23 2019-02-12 苏州佳世达电通有限公司 Image correcting method and detection device
CN109325961A (en) * 2018-08-27 2019-02-12 北京悦图遥感科技发展有限公司 UAV Video multi-object tracking method and device
CN109448027A (en) * 2018-10-19 2019-03-08 成都睿码科技有限责任公司 A kind of adaptive, lasting motion estimate method based on algorithm fusion
CN109558505A (en) * 2018-11-21 2019-04-02 百度在线网络技术(北京)有限公司 Visual search method, apparatus, computer equipment and storage medium
CN109615641A (en) * 2018-11-23 2019-04-12 中山大学 Multiple target pedestrian tracking system and tracking based on KCF algorithm
CN109657575A (en) * 2018-12-05 2019-04-19 国网安徽省电力有限公司检修分公司 Outdoor construction personnel's intelligent video track algorithm
CN109712171A (en) * 2018-12-28 2019-05-03 上海极链网络科技有限公司 A kind of Target Tracking System and method for tracking target based on correlation filter
CN109784173A (en) * 2018-12-14 2019-05-21 合肥阿巴赛信息科技有限公司 A kind of shop guest's on-line tracking of single camera
CN109859239A (en) * 2019-05-05 2019-06-07 深兰人工智能芯片研究院(江苏)有限公司 A kind of method and apparatus of target tracking
CN109902627A (en) * 2019-02-28 2019-06-18 中科创达软件股份有限公司 A kind of object detection method and device
CN109949336A (en) * 2019-02-26 2019-06-28 中科创达软件股份有限公司 Target fast tracking method and device in a kind of successive video frames
CN110031597A (en) * 2019-04-19 2019-07-19 燕山大学 A kind of biological water monitoring method
CN110110649A (en) * 2019-05-02 2019-08-09 西安电子科技大学 Alternative method for detecting human face based on directional velocity
CN110111363A (en) * 2019-04-28 2019-08-09 深兰科技(上海)有限公司 A kind of tracking and equipment based on target detection
CN110223325A (en) * 2019-06-18 2019-09-10 北京字节跳动网络技术有限公司 Method for tracing object, device and equipment
CN110276783A (en) * 2019-04-23 2019-09-24 上海高重信息科技有限公司 A kind of multi-object tracking method, device and computer system
CN110290410A (en) * 2019-07-31 2019-09-27 安徽华米信息科技有限公司 Image position adjusting method, device, system and adjusting information generating device
CN110378259A (en) * 2019-07-05 2019-10-25 桂林电子科技大学 A kind of multiple target Activity recognition method and system towards monitor video
CN110414447A (en) * 2019-07-31 2019-11-05 京东方科技集团股份有限公司 Pedestrian tracting method, device and equipment
CN110490148A (en) * 2019-08-22 2019-11-22 四川自由健信息科技有限公司 A kind of recognition methods for behavior of fighting
CN110533013A (en) * 2019-10-30 2019-12-03 图谱未来(南京)人工智能研究院有限公司 A kind of track-detecting method and device
WO2019228387A1 (en) * 2018-05-31 2019-12-05 广州虎牙信息科技有限公司 Target positioning method and apparatus, video display method and apparatus, device, and storage medium
WO2019237536A1 (en) * 2018-06-11 2019-12-19 平安科技(深圳)有限公司 Target real-time tracking method and apparatus, and computer device and storage medium
CN110610512A (en) * 2019-09-09 2019-12-24 西安交通大学 Unmanned aerial vehicle target tracking method based on BP neural network fusion Kalman filtering algorithm
CN110610510A (en) * 2019-08-29 2019-12-24 Oppo广东移动通信有限公司 Target tracking method and device, electronic equipment and storage medium
CN110660225A (en) * 2019-10-28 2020-01-07 上海眼控科技股份有限公司 Red light running behavior detection method, device and equipment
CN110751674A (en) * 2018-07-24 2020-02-04 北京深鉴智能科技有限公司 Multi-target tracking method and corresponding video analysis system
CN110852283A (en) * 2019-11-14 2020-02-28 南京工程学院 Helmet wearing detection and tracking method based on improved YOLOv3
CN111161320A (en) * 2019-12-30 2020-05-15 浙江大华技术股份有限公司 Target tracking method, target tracking device and computer readable medium
CN111382784A (en) * 2020-03-04 2020-07-07 厦门脉视数字技术有限公司 Moving target tracking method
CN111428642A (en) * 2020-03-24 2020-07-17 厦门市美亚柏科信息股份有限公司 Multi-target tracking algorithm, electronic device and computer readable storage medium
CN111435962A (en) * 2019-01-13 2020-07-21 多方科技(广州)有限公司 Object detection method and related computer system
CN111462174A (en) * 2020-03-06 2020-07-28 北京百度网讯科技有限公司 Multi-target tracking method and device and electronic equipment
CN111551938A (en) * 2020-04-26 2020-08-18 北京踏歌智行科技有限公司 Unmanned technology perception fusion method based on mining area environment
CN111582242A (en) * 2020-06-05 2020-08-25 上海商汤智能科技有限公司 Retention detection method, retention detection device, electronic apparatus, and storage medium
CN111627045A (en) * 2020-05-06 2020-09-04 佳都新太科技股份有限公司 Multi-pedestrian online tracking method, device and equipment under single lens and storage medium
CN111681260A (en) * 2020-06-15 2020-09-18 深延科技(北京)有限公司 Multi-target tracking method and tracking system for aerial images of unmanned aerial vehicle
CN111709974A (en) * 2020-06-22 2020-09-25 苏宁云计算有限公司 Human body tracking method and device based on RGB-D image
CN111709328A (en) * 2020-05-29 2020-09-25 北京百度网讯科技有限公司 Vehicle tracking method and device and electronic equipment
CN111768427A (en) * 2020-05-07 2020-10-13 普联国际有限公司 Multi-moving-target tracking method and device and storage medium
CN111814783A (en) * 2020-06-08 2020-10-23 三峡大学 Accurate license plate positioning method based on license plate vertex deviation estimation
CN111860373A (en) * 2020-07-24 2020-10-30 浙江商汤科技开发有限公司 Target detection method and device, electronic equipment and storage medium
CN111985379A (en) * 2020-08-13 2020-11-24 中国第一汽车股份有限公司 Target tracking method, device and equipment based on vehicle-mounted radar and vehicle
CN112052802A (en) * 2020-09-09 2020-12-08 上海工程技术大学 Front vehicle behavior identification method based on machine vision
CN112084867A (en) * 2020-08-10 2020-12-15 国信智能系统(广东)有限公司 Pedestrian positioning and tracking method based on human body skeleton point distance
CN112184772A (en) * 2020-09-30 2021-01-05 深兰人工智能(深圳)有限公司 Target tracking method and device
US20210004967A1 (en) * 2018-03-23 2021-01-07 Nec Corporation Object tracking device, object tracking method, and object tracking program
CN112232359A (en) * 2020-09-29 2021-01-15 中国人民解放军陆军炮兵防空兵学院 Visual tracking method based on mixed level filtering and complementary characteristics
CN112561963A (en) * 2020-12-18 2021-03-26 北京百度网讯科技有限公司 Target tracking method and device, road side equipment and storage medium
CN112581507A (en) * 2020-12-31 2021-03-30 北京澎思科技有限公司 Target tracking method, system and computer readable storage medium
CN110310305B (en) * 2019-05-28 2021-04-06 东南大学 Target tracking method and device based on BSSD detection and Kalman filtering
CN112639872A (en) * 2020-04-24 2021-04-09 华为技术有限公司 Method and device for difficult mining in target detection
CN112700469A (en) * 2020-12-30 2021-04-23 武汉卓目科技有限公司 Visual target tracking method and device based on ECO algorithm and target detection
WO2021189825A1 (en) * 2020-03-25 2021-09-30 苏州科达科技股份有限公司 Multi-target tracking method and apparatus, and storage medium
CN113516158A (en) * 2021-04-15 2021-10-19 西安理工大学 Graph model construction method based on fast R-CNN
CN113674317A (en) * 2021-08-10 2021-11-19 深圳市捷顺科技实业股份有限公司 Vehicle tracking method and device of high-order video
WO2022227771A1 (en) * 2021-04-27 2022-11-03 北京百度网讯科技有限公司 Target tracking method and apparatus, device and medium
TWI790957B (en) * 2022-04-06 2023-01-21 淡江大學學校財團法人淡江大學 A high-speed data association method for multi-object tracking
CN116883413A (en) * 2023-09-08 2023-10-13 山东鲁抗医药集团赛特有限责任公司 Visual detection method for retention of waste picking and receiving materials
CN111814783B (en) * 2020-06-08 2024-05-24 深圳市富浩鹏电子有限公司 Accurate license plate positioning method based on license plate vertex offset estimation

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104244113A (en) * 2014-10-08 2014-12-24 中国科学院自动化研究所 Method for generating video abstract on the basis of deep learning technology
EP3118814A1 (en) * 2015-07-15 2017-01-18 Thomson Licensing Method and apparatus for object tracking in image sequences
CN106228112A (en) * 2016-07-08 2016-12-14 深圳市优必选科技有限公司 Face detection and tracking method, robot head rotation control method, and robot
CN106874854A (en) * 2017-01-19 2017-06-20 西安电子科技大学 Unmanned aerial vehicle wireless vehicle tracking based on embedded platform

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XUAN-PHUNG HUYNH et al.: "Tracking a Human Fast and Reliably Against Occlusion and Human-Crossing", PSIVT 2015: Image and Video Technology *

Cited By (99)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11494922B2 (en) * 2018-03-23 2022-11-08 Nec Corporation Object tracking device, object tracking method, and object tracking program
US20210004967A1 (en) * 2018-03-23 2021-01-07 Nec Corporation Object tracking device, object tracking method, and object tracking program
CN108776974A (en) * 2018-05-24 2018-11-09 南京行者易智能交通科技有限公司 A kind of real-time target tracking method suitable for public transport scenes
CN108921873A (en) * 2018-05-29 2018-11-30 福州大学 Markov decision online multi-object tracking method based on kernel correlation filtering optimization
CN108921873B (en) * 2018-05-29 2021-08-31 福州大学 Markov decision-making online multi-target tracking method based on kernel correlation filtering optimization
CN108875600A (en) * 2018-05-31 2018-11-23 银江股份有限公司 A kind of vehicle information detection and tracking method, apparatus and computer storage medium based on YOLO
WO2019228387A1 (en) * 2018-05-31 2019-12-05 广州虎牙信息科技有限公司 Target positioning method and apparatus, video display method and apparatus, device, and storage medium
US11284128B2 (en) 2018-05-31 2022-03-22 Guangzhou Huya Information Technology Co., Ltd. Object positioning method, video display method, apparatus, device, and storage medium
WO2019237536A1 (en) * 2018-06-11 2019-12-19 平安科技(深圳)有限公司 Target real-time tracking method and apparatus, and computer device and storage medium
CN109195100B (en) * 2018-07-09 2020-12-01 南京邮电大学 Blind area data early warning method based on self-adaptive window
CN109195100A (en) * 2018-07-09 2019-01-11 南京邮电大学 A kind of blind area data early warning method based on self-adaptive window
CN110751674A (en) * 2018-07-24 2020-02-04 北京深鉴智能科技有限公司 Multi-target tracking method and corresponding video analysis system
CN109087331A (en) * 2018-08-02 2018-12-25 阿依瓦(北京)技术有限公司 A kind of motion prediction method based on the KCF algorithm
CN109316202A (en) * 2018-08-23 2019-02-12 苏州佳世达电通有限公司 Image correcting method and detection device
CN109325961B (en) * 2018-08-27 2021-07-09 北京悦图数据科技发展有限公司 Unmanned aerial vehicle video multi-target tracking method and device
CN109325961A (en) * 2018-08-27 2019-02-12 北京悦图遥感科技发展有限公司 UAV video multi-object tracking method and device
CN109118523B (en) * 2018-09-20 2022-04-22 电子科技大学 Image target tracking method based on YOLO
CN109118523A (en) * 2018-09-20 2019-01-01 电子科技大学 A kind of image target tracking method based on YOLO
CN109448027A (en) * 2018-10-19 2019-03-08 成都睿码科技有限责任公司 A kind of adaptive and persistent moving target identification method based on algorithm fusion
CN109448027B (en) * 2018-10-19 2022-03-29 成都睿码科技有限责任公司 Adaptive and persistent moving target identification method based on algorithm fusion
CN109558505A (en) * 2018-11-21 2019-04-02 百度在线网络技术(北京)有限公司 Visual search method, apparatus, computer equipment and storage medium
US11348254B2 (en) 2018-11-21 2022-05-31 Baidu Online Network Technology (Beijing) Co., Ltd. Visual search method, computer device, and storage medium
CN109615641A (en) * 2018-11-23 2019-04-12 中山大学 Multi-target pedestrian tracking system and tracking method based on KCF algorithm
CN109615641B (en) * 2018-11-23 2022-11-29 中山大学 Multi-target pedestrian tracking system and tracking method based on KCF algorithm
CN109657575B (en) * 2018-12-05 2022-04-08 国网安徽省电力有限公司检修分公司 Intelligent video tracking algorithm for outdoor constructors
CN109657575A (en) * 2018-12-05 2019-04-19 国网安徽省电力有限公司检修分公司 Intelligent video tracking algorithm for outdoor construction personnel
CN109784173A (en) * 2018-12-14 2019-05-21 合肥阿巴赛信息科技有限公司 A kind of single-camera online tracking method for shop customers
CN109712171B (en) * 2018-12-28 2023-09-01 厦门瑞利特信息科技有限公司 Target tracking system and target tracking method based on correlation filter
CN109712171A (en) * 2018-12-28 2019-05-03 上海极链网络科技有限公司 A kind of target tracking system and target tracking method based on correlation filter
CN111435962A (en) * 2019-01-13 2020-07-21 多方科技(广州)有限公司 Object detection method and related computer system
CN109949336A (en) * 2019-02-26 2019-06-28 中科创达软件股份有限公司 A kind of fast target tracking method and device in consecutive video frames
CN109902627A (en) * 2019-02-28 2019-06-18 中科创达软件股份有限公司 A kind of object detection method and device
CN110031597A (en) * 2019-04-19 2019-07-19 燕山大学 A kind of biological water monitoring method
CN110276783A (en) * 2019-04-23 2019-09-24 上海高重信息科技有限公司 A kind of multi-object tracking method, device and computer system
CN110111363A (en) * 2019-04-28 2019-08-09 深兰科技(上海)有限公司 A kind of tracking method and equipment based on target detection
CN110110649A (en) * 2019-05-02 2019-08-09 西安电子科技大学 Alternative method for detecting human face based on directional velocity
CN110110649B (en) * 2019-05-02 2023-04-07 西安电子科技大学 Selective human face detection method based on speed direction
CN109859239A (en) * 2019-05-05 2019-06-07 深兰人工智能芯片研究院(江苏)有限公司 A kind of method and apparatus of target tracking
CN110310305B (en) * 2019-05-28 2021-04-06 东南大学 Target tracking method and device based on BSSD detection and Kalman filtering
CN110223325A (en) * 2019-06-18 2019-09-10 北京字节跳动网络技术有限公司 Object tracking method, device and equipment
CN110223325B (en) * 2019-06-18 2021-04-27 北京字节跳动网络技术有限公司 Object tracking method, device and equipment
CN110378259A (en) * 2019-07-05 2019-10-25 桂林电子科技大学 A kind of multi-target activity recognition method and system for surveillance video
CN110414447A (en) * 2019-07-31 2019-11-05 京东方科技集团股份有限公司 Pedestrian tracking method, device and equipment
CN110414447B (en) * 2019-07-31 2022-04-15 京东方科技集团股份有限公司 Pedestrian tracking method, device and equipment
US11830273B2 (en) 2019-07-31 2023-11-28 Boe Technology Group Co., Ltd. Multi-target pedestrian tracking method, multi-target pedestrian tracking apparatus and multi-target pedestrian tracking device
CN110290410A (en) * 2019-07-31 2019-09-27 安徽华米信息科技有限公司 Image position adjusting method, device, system and adjusting information generating device
CN110290410B (en) * 2019-07-31 2021-10-29 合肥华米微电子有限公司 Image position adjusting method, device and system and adjusting information generating equipment
CN110490148A (en) * 2019-08-22 2019-11-22 四川自由健信息科技有限公司 A kind of recognition method for fighting behavior
CN110610510A (en) * 2019-08-29 2019-12-24 Oppo广东移动通信有限公司 Target tracking method and device, electronic equipment and storage medium
CN110610512B (en) * 2019-09-09 2021-07-27 西安交通大学 Unmanned aerial vehicle target tracking method based on BP neural network fusion Kalman filtering algorithm
CN110610512A (en) * 2019-09-09 2019-12-24 西安交通大学 Unmanned aerial vehicle target tracking method based on BP neural network fusion Kalman filtering algorithm
CN110660225A (en) * 2019-10-28 2020-01-07 上海眼控科技股份有限公司 Red light running behavior detection method, device and equipment
CN110533013A (en) * 2019-10-30 2019-12-03 图谱未来(南京)人工智能研究院有限公司 A kind of trajectory detection method and device
CN110852283A (en) * 2019-11-14 2020-02-28 南京工程学院 Helmet wearing detection and tracking method based on improved YOLOv3
CN111161320A (en) * 2019-12-30 2020-05-15 浙江大华技术股份有限公司 Target tracking method, target tracking device and computer readable medium
CN111382784A (en) * 2020-03-04 2020-07-07 厦门脉视数字技术有限公司 Moving target tracking method
CN111382784B (en) * 2020-03-04 2021-11-26 厦门星纵智能科技有限公司 Moving target tracking method
CN111462174A (en) * 2020-03-06 2020-07-28 北京百度网讯科技有限公司 Multi-target tracking method and device and electronic equipment
CN111462174B (en) * 2020-03-06 2023-10-31 北京百度网讯科技有限公司 Multi-target tracking method and device and electronic equipment
CN111428642A (en) * 2020-03-24 2020-07-17 厦门市美亚柏科信息股份有限公司 Multi-target tracking algorithm, electronic device and computer readable storage medium
WO2021189825A1 (en) * 2020-03-25 2021-09-30 苏州科达科技股份有限公司 Multi-target tracking method and apparatus, and storage medium
CN112639872B (en) * 2020-04-24 2022-02-11 华为技术有限公司 Method and device for hard example mining in target detection
CN112639872A (en) * 2020-04-24 2021-04-09 华为技术有限公司 Method and device for hard example mining in target detection
CN111551938A (en) * 2020-04-26 2020-08-18 北京踏歌智行科技有限公司 Unmanned technology perception fusion method based on mining area environment
CN111551938B (en) * 2020-04-26 2022-08-30 北京踏歌智行科技有限公司 Unmanned technology perception fusion method based on mining area environment
CN111627045B (en) * 2020-05-06 2021-11-02 佳都科技集团股份有限公司 Multi-pedestrian online tracking method, device and equipment under single lens and storage medium
CN111627045A (en) * 2020-05-06 2020-09-04 佳都新太科技股份有限公司 Multi-pedestrian online tracking method, device and equipment under single lens and storage medium
CN111768427B (en) * 2020-05-07 2023-12-26 普联国际有限公司 Multi-moving-object tracking method, device and storage medium
CN111768427A (en) * 2020-05-07 2020-10-13 普联国际有限公司 Multi-moving-target tracking method, device and storage medium
CN111709328A (en) * 2020-05-29 2020-09-25 北京百度网讯科技有限公司 Vehicle tracking method and device and electronic equipment
CN111709328B (en) * 2020-05-29 2023-08-04 北京百度网讯科技有限公司 Vehicle tracking method and device and electronic equipment
CN111582242A (en) * 2020-06-05 2020-08-25 上海商汤智能科技有限公司 Retention detection method, retention detection device, electronic apparatus, and storage medium
CN111582242B (en) * 2020-06-05 2024-03-26 上海商汤智能科技有限公司 Retention detection method, device, electronic equipment and storage medium
CN111814783A (en) * 2020-06-08 2020-10-23 三峡大学 Accurate license plate positioning method based on license plate vertex deviation estimation
CN111814783B (en) * 2020-06-08 2024-05-24 深圳市富浩鹏电子有限公司 Accurate license plate positioning method based on license plate vertex offset estimation
CN111681260A (en) * 2020-06-15 2020-09-18 深延科技(北京)有限公司 Multi-target tracking method and tracking system for aerial images of unmanned aerial vehicle
CN111709974A (en) * 2020-06-22 2020-09-25 苏宁云计算有限公司 Human body tracking method and device based on RGB-D image
CN111709974B (en) * 2020-06-22 2022-08-02 苏宁云计算有限公司 Human body tracking method and device based on RGB-D image
WO2022017140A1 (en) * 2020-07-24 2022-01-27 浙江商汤科技开发有限公司 Target detection method and apparatus, electronic device, and storage medium
CN111860373B (en) * 2020-07-24 2022-05-20 浙江商汤科技开发有限公司 Target detection method and device, electronic equipment and storage medium
CN111860373A (en) * 2020-07-24 2020-10-30 浙江商汤科技开发有限公司 Target detection method and device, electronic equipment and storage medium
CN112084867A (en) * 2020-08-10 2020-12-15 国信智能系统(广东)有限公司 Pedestrian positioning and tracking method based on human body skeleton point distance
CN111985379A (en) * 2020-08-13 2020-11-24 中国第一汽车股份有限公司 Target tracking method, device and equipment based on vehicle-mounted radar and vehicle
CN112052802A (en) * 2020-09-09 2020-12-08 上海工程技术大学 Front vehicle behavior identification method based on machine vision
CN112052802B (en) * 2020-09-09 2024-02-20 上海工程技术大学 Machine vision-based front vehicle behavior recognition method
CN112232359B (en) * 2020-09-29 2022-10-21 中国人民解放军陆军炮兵防空兵学院 Visual tracking method based on mixed level filtering and complementary characteristics
CN112232359A (en) * 2020-09-29 2021-01-15 中国人民解放军陆军炮兵防空兵学院 Visual tracking method based on mixed level filtering and complementary characteristics
CN112184772A (en) * 2020-09-30 2021-01-05 深兰人工智能(深圳)有限公司 Target tracking method and device
CN112561963A (en) * 2020-12-18 2021-03-26 北京百度网讯科技有限公司 Target tracking method and device, road side equipment and storage medium
CN112700469A (en) * 2020-12-30 2021-04-23 武汉卓目科技有限公司 Visual target tracking method and device based on ECO algorithm and target detection
CN112581507A (en) * 2020-12-31 2021-03-30 北京澎思科技有限公司 Target tracking method, system and computer readable storage medium
CN113516158B (en) * 2021-04-15 2024-04-16 西安理工大学 Graph model construction method based on Faster R-CNN
CN113516158A (en) * 2021-04-15 2021-10-19 西安理工大学 Graph model construction method based on Faster R-CNN
WO2022227771A1 (en) * 2021-04-27 2022-11-03 北京百度网讯科技有限公司 Target tracking method and apparatus, device and medium
CN113674317A (en) * 2021-08-10 2021-11-19 深圳市捷顺科技实业股份有限公司 Vehicle tracking method and device of high-order video
CN113674317B (en) * 2021-08-10 2024-04-26 深圳市捷顺科技实业股份有限公司 Vehicle tracking method and device for high-level video
TWI790957B (en) * 2022-04-06 2023-01-21 淡江大學學校財團法人淡江大學 A high-speed data association method for multi-object tracking
CN116883413B (en) * 2023-09-08 2023-12-01 山东鲁抗医药集团赛特有限责任公司 Visual detection method for retention of waste picking and receiving materials
CN116883413A (en) * 2023-09-08 2023-10-13 山东鲁抗医药集团赛特有限责任公司 Visual detection method for retention of waste picking and receiving materials

Also Published As

Publication number Publication date
CN108053427B (en) 2021-12-14

Similar Documents

Publication Publication Date Title
CN108053427A (en) A kind of modified multi-object tracking method, system and device based on KCF and Kalman
CN108009473A (en) Video structured processing method, system and storage device based on target behavior attributes
CN108062349A (en) Video frequency monitoring method and system based on video structural data and deep learning
CN108052859A (en) A kind of anomaly detection method, system and device based on clustered optical flow features
CN110738127B (en) Helmet identification method based on unsupervised deep learning neural network algorithm
Liu et al. A computer vision system for early stage grape yield estimation based on shoot detection
CN110852283A (en) Helmet wearing detection and tracking method based on improved YOLOv3
CN102521565B (en) Garment identification method and system for low-resolution video
CN110717414A (en) Target detection tracking method, device and equipment
CN111814638B (en) Security scene flame detection method based on deep learning
CN104036236B (en) A kind of face gender identification method based on multiparameter exponential weighting
CN111488804A (en) Labor insurance product wearing condition detection and identity identification method based on deep learning
Zaki et al. Automated analysis of pedestrian group behavior in urban settings
CN109165685B (en) Expression and action-based method and system for monitoring potential risks of prisoners
CN111091098B (en) Training method of detection model, detection method and related device
CN109902560A (en) A kind of fatigue driving early warning method based on deep learning
CN107133607B (en) Demographics' method and system based on video monitoring
CN104361327A (en) Pedestrian detection method and system
CN106355154B (en) Method for detecting frequent passing of people in surveillance video
CN103824070A (en) Rapid pedestrian detection method based on computer vision
CN107256017B (en) Route planning method and system
CN105844245A (en) Fake face detection method and system for realizing the same
CN111401310B (en) Kitchen sanitation safety supervision and management method based on artificial intelligence
Fang et al. Traffic police gesture recognition by pose graph convolutional networks
CN115917589A (en) Climbing behavior early warning method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant