CN110163889A - Target tracking method, target tracking apparatus, and target tracking equipment - Google Patents
- Publication number: CN110163889A
- Application number: CN201811198327.9A
- Authority
- CN
- China
- Prior art keywords
- frame
- prediction box
- target
- head
- target detection
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/246 — Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T2207/10016 — Video; image sequence
- G06T2207/30196 — Human being; person
- G06T2207/30201 — Face
- G06T2207/30232 — Surveillance
- G06T2207/30241 — Trajectory
Abstract
Disclosed are a target tracking method, a target tracking apparatus, and target tracking equipment. The target tracking method includes: performing target detection on the current frame of a video image to obtain a head detection box set and a body detection box set; associating the head detection box set with the body detection box set to obtain a target detection box set in the current frame; determining a prediction box set in the current frame according to the trajectory velocity of each existing trajectory in the video image; and, for each target detection box in the target detection box set, determining a matching prediction box in the prediction box set and performing target tracking based on the matching result of the target detection box and the prediction box. By tracking with both head and body boxes, the target tracking accuracy in video images can be effectively improved, achieving real-time and high-precision target tracking.
Description
Technical field
This disclosure relates to the field of image processing, and more particularly to a target tracking method, a target tracking apparatus, and target tracking equipment.
Background technique
With the wide application of image processing in civilian and commercial fields, multi-target tracking plays an increasingly important role in intelligent video surveillance, autonomous driving, unmanned supermarkets, and other fields, so target tracking, and multi-target tracking in particular, faces ever higher requirements. At present, multi-target tracking can be realized with a local association algorithm between adjacent frames: target detection is first performed on every frame of the video, and the detection boxes are then associated in the time domain by a suitable algorithm. Alternatively, a global association algorithm over all frames can complete the association of all target detection boxes to realize target tracking.
However, a local association algorithm between adjacent frames has less information available, so its tracking accuracy is lower, and in complex scenes with high pedestrian density or severe occlusion between targets the tracking effect is poor. A global association algorithm over all frames achieves relatively high tracking accuracy but cannot track in real time; its time complexity is often high, and when the number of targets in the video is large the algorithm is time-consuming.
Therefore, a multi-target tracking method with high tracking accuracy under the premise of real-time tracking is needed.
Summary of the invention
In view of the above problems, the present disclosure provides a target tracking method, apparatus, equipment, and medium. The target tracking method provided by the disclosure can effectively improve the target tracking accuracy in video images on the basis of real-time tracking, achieving real-time and high-precision target tracking, and the method has good robustness.
According to one aspect of the disclosure, a target tracking method is proposed, comprising: performing target detection on the current frame of a video image to obtain a head detection box set and a body detection box set; associating the head detection box set with the body detection box set to obtain a target detection box set in the current frame, wherein each target detection box comprises its head detection box and its body detection box; determining a prediction box set in the current frame according to the trajectory velocity of each existing trajectory in the video image, wherein each prediction box comprises its head prediction box and its body prediction box; and, for each target detection box in the target detection box set, determining a matching prediction box in the prediction box set, and performing target tracking based on the matching result of the target detection box and the prediction box.
According to another aspect of the disclosure, a target tracking apparatus is provided, comprising: a head-and-body detection module configured to perform target detection on the current frame of a video image to obtain a head detection box set and a body detection box set; a head-body association module configured to associate the head detection box set with the body detection box set to obtain a target detection box set in the current frame, wherein each target detection box comprises its head detection box and its body detection box; a prediction box calculation module configured to determine a prediction box set in the current frame according to the trajectory velocity of each existing trajectory in the video image, wherein each prediction box comprises its head prediction box and its body prediction box; and a target association module configured to, for each target detection box in the target detection box set, determine a matching prediction box in the prediction box set and perform target tracking based on the matching result of the target detection box and the prediction box.
According to another aspect of the disclosure, target tracking equipment is provided, wherein the equipment comprises a processor and a memory, the memory containing a set of instructions that, when executed by the processor, cause the target tracking equipment to perform operations comprising: performing target detection on the current frame of a video image to obtain a head detection box set and a body detection box set; associating the head detection box set with the body detection box set to obtain a target detection box set in the current frame, wherein each target detection box comprises its head detection box and its body detection box; determining a prediction box set in the current frame according to the trajectory velocity of each existing trajectory in the video image, wherein each prediction box comprises its head prediction box and its body prediction box; and, for each target detection box in the target detection box set, determining a matching prediction box in the prediction box set, and performing target tracking based on the matching result of the target detection box and the prediction box.
According to another aspect of the disclosure, a computer-readable storage medium is provided, having computer-readable instructions stored thereon, wherein the above method is performed when the instructions are executed by a computer.
With the target tracking method provided by the disclosure, real-time tracking of multiple targets in a video image can be accomplished well, in particular with high tracking accuracy, and the algorithm has good robustness.
Brief description of the drawings
To describe the technical solutions of the embodiments of the present disclosure more clearly, the drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the disclosure, and those of ordinary skill in the art may obtain other drawings from them without creative effort. The drawings are not deliberately scaled to actual size; the emphasis is on showing the gist of the disclosure.
Fig. 1A shows an exemplary flowchart of a target tracking method according to an embodiment of the disclosure;
Fig. 1B shows a schematic diagram of the body detection box set and head detection box set obtained by a target tracking method according to an embodiment of the disclosure;
Fig. 2 shows an exemplary flowchart of associating the head detection box set with the body detection box set to obtain the target detection box set in the current frame, according to an embodiment of the disclosure;
Fig. 3 shows a flowchart of an exemplary method of verifying the matching precision of the head detection box set and the body detection box set, according to an embodiment of the disclosure;
Fig. 4 shows a flowchart of an exemplary method of calculating the trajectory velocity of each trajectory currently existing in the video image to obtain the set of predicted trajectory-position boxes in the current frame, according to an embodiment of the disclosure;
Fig. 5 shows a position-change diagram of several trajectories in a video image as the video frame number changes;
Fig. 6 shows an exemplary flowchart of determining, for each target detection box in the target detection box set, a matching prediction box in the prediction box set, and performing target tracking based on the matching result of the target detection box and the prediction box;
Fig. 7 shows a schematic diagram of computing the intersection-over-union of a body detection box and a body prediction box;
Fig. 8 shows an exemplary flowchart of verifying body-feature similarity between a target detection box and a trajectory;
Fig. 9 shows an exemplary block diagram of a target tracking apparatus according to an embodiment of the disclosure;
Fig. 10 shows an exemplary block diagram of target tracking equipment according to an embodiment of the disclosure.
Specific embodiments
The technical solutions in the embodiments of the present disclosure are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the disclosure. Based on the embodiments of the disclosure, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the scope of protection of the disclosure.
As used in the application and the claims, unless the context clearly indicates otherwise, words such as "a", "an", and/or "the" do not refer specifically to the singular and may also include the plural. In general, the terms "include" and "comprise" indicate only the clearly identified steps and elements, which do not constitute an exclusive list; a method or apparatus may also include other steps or elements.
Although the application makes various references to certain modules in a system according to embodiments of the application, any number of different modules may be used and run on a user terminal and/or server. The modules are merely illustrative, and different modules may be used in different aspects of the system and method.
Flowcharts are used herein to illustrate the operations performed by a system according to embodiments of the application. It should be understood that the preceding or following operations are not necessarily performed exactly in order; instead, the steps may be processed in reverse order or simultaneously as needed. Other operations may also be added to these processes, or one or more steps may be removed from them.
Fig. 1A shows an exemplary flowchart of a target tracking method 100 according to an embodiment of the disclosure.
First, in step S101, target detection is performed on the current frame of the video image to obtain a head detection box set and a body detection box set.
The video image may be an image captured in real time by a camera or camera device, or it may be a video image obtained in advance in another manner. The embodiments of the disclosure are not limited by the source or acquisition manner of the video image. For example, it may be an image shot directly by a road camera, a surveillance camera in an unmanned supermarket, or the like, or it may be a video image obtained after preprocessing by a computer.
The current frame of the video image is the image frame to be analyzed at the current moment, for example an image frame captured in real time at the current moment. Based on the current frame image captured in real time, target detection on the image can be realized by a deep learning algorithm, for example by a two-stage target detection algorithm such as the region-based convolutional neural network algorithm (R-CNN) or the faster region-based convolutional neural network algorithm (Faster R-CNN), or by a one-stage target detection algorithm such as You Only Look Once (YOLO) or the Single Shot MultiBox Detector (SSD). The embodiments of the disclosure are not limited by the chosen target detection method.
Fig. 1B schematically illustrates the body detection box set and head detection box set obtained by applying a target detection algorithm to the current frame of the video image.
Referring to Fig. 1B, based on the chosen target detection algorithm, the head image regions and body image regions in the video image of the current frame are detected, and multiple head detection boxes and body detection boxes are further marked in the current video image, yielding the head detection box set and the body detection box set. For example, the head detection box is the rectangular box of minimum area that contains the complete image of a human head; the body detection box is the rectangular box of minimum area that contains the complete image of a whole human body.
In step S102, the head detection box set and the body detection box set are associated to obtain the target detection box set in the current frame. Each target detection box in the target detection box set obtained by the association algorithm is composed of its head detection box and its body detection box, i.e., the head detection box and body detection box contained in each target detection box are associated with each other.
After the target detection box set is obtained, in step S103, the prediction box set in the current frame is determined according to the trajectory velocity of each existing trajectory in the video image. Each prediction box in the obtained prediction box set is composed of a head prediction box and a one-to-one corresponding/associated body prediction box.
Based on the target detection box set and prediction box set obtained above, in step S104, for each target detection box in the target detection box set, a matching prediction box is determined in the prediction box set, and target tracking is performed based on the matching result of the target detection box and the prediction box, whereby high-precision target tracking can be achieved in a real-time tracking scenario.
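The matching in step S104 compares detection boxes against prediction boxes; as Fig. 7 later illustrates, a body detection box and a body prediction box can be scored by their intersection-over-union (IoU). A minimal sketch, assuming boxes are given as (center-x, center-y, width, height) tuples in line with the center/width/height parameterization used throughout this description:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes.
    Boxes are (center_x, center_y, width, height) tuples."""
    ax1, ay1 = box_a[0] - box_a[2] / 2, box_a[1] - box_a[3] / 2
    ax2, ay2 = box_a[0] + box_a[2] / 2, box_a[1] + box_a[3] / 2
    bx1, by1 = box_b[0] - box_b[2] / 2, box_b[1] - box_b[3] / 2
    bx2, by2 = box_b[0] + box_b[2] / 2, box_b[1] + box_b[3] / 2
    # Overlap extents, clamped at zero for disjoint boxes.
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union > 0 else 0.0
```

A detection box can then be matched to the prediction box with the highest IoU; the cutoff value and tie-breaking policy are not specified in this part of the description.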
Fig. 2 shows an exemplary flowchart of a process 200 of associating the head detection box set with the body detection box set to obtain the target detection box set in the current frame, according to an embodiment of the disclosure.
First, in step S201, for each head detection box in the head detection box set, a body detection box matching that head detection box is determined in the body detection box set.
For example, the head detection box set can be matched with the body detection box set by a bipartite graph matching algorithm, where the bipartite graph matching algorithm may be, for example, the Kuhn-Munkres method (KM algorithm), or may be a greedy method. The disclosure imposes no restriction on the chosen bipartite graph matching algorithm.
More specifically, the KM algorithm can be chosen to associate/match the head detection box set with the body detection box set. According to the KM algorithm, the number of head detection boxes in the head detection box set and the number of body detection boxes in the body detection box set are first determined, and the minimum of the two is found. The detection box set with this minimum is taken as the benchmark detection box set X, and the other detection box set is taken as the detection box set Y to be compared. For each detection box in the benchmark detection box set X, the corresponding matching detection box in the set Y to be compared is determined.
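The disclosure leaves the choice of bipartite matching algorithm open (KM or greedy). As an illustration, the greedy variant can be sketched as follows, taking an affinity matrix (rows: head boxes, columns: body boxes) and repeatedly committing the highest remaining affinity pair; the matrix values in the usage note are purely hypothetical:

```python
def greedy_match(affinity):
    """Greedy bipartite matching over an affinity matrix given as a list
    of rows: take the highest-affinity pair, remove its row and column,
    and repeat until no unused pair remains."""
    candidates = sorted(
        ((score, j, k)
         for j, row in enumerate(affinity)
         for k, score in enumerate(row)),
        key=lambda t: -t[0],
    )
    used_heads, used_bodies, pairs = set(), set(), []
    for score, j, k in candidates:
        if j not in used_heads and k not in used_bodies:
            pairs.append((j, k))
            used_heads.add(j)
            used_bodies.add(k)
    return sorted(pairs)
```

For example, `greedy_match([[0.9, 0.1, 0.0], [0.2, 0.8, 0.1]])` pairs head 0 with body 0 and head 1 with body 1. An optimal assignment (as the KM algorithm produces) can instead be obtained with, e.g., `scipy.optimize.linear_sum_assignment` on the negated affinity matrix; for well-separated detections the greedy result usually coincides.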
In some embodiments, in step S201, after the head detection box set is matched with the body detection box set, a process 300 of verifying the matching precision may further be included, the detailed flow of which is shown in Fig. 3.
Referring to Fig. 3, first, in step S301, based on the position information of the head detection box set H and the body detection box set B, the affinity matrix A of the head detection box set H and the body detection box set B is calculated. Specifically, each affinity element A_jk in the affinity matrix A is given by the following affinity formula:
Wherein x_j and y_j are the center-point abscissa and center-point ordinate of the j-th head detection box H_j in the head detection box set H, and w_j and h_j are the width and height of the j-th head detection box H_j; x_k and y_k are the center-point abscissa and center-point ordinate of the k-th body detection box B_k in the body detection box set B, and w_k and h_k are the width and height of the k-th body detection box B_k.
In this way, an affinity matrix can be established between the head detection box set H and the body detection box set B, which characterizes the affinity between each detection box in H and each detection box in B.
The above process can be described more specifically. For example, suppose the 5th head detection box in the head detection box set H has center abscissa x_5 = 5, center ordinate y_5 = 8, width w_5 = 4, and height h_5 = 3, and the 7th body detection box in the body detection box set B has center abscissa x_7 = 5, center ordinate y_7 = 5, width w_7 = 10, and height h_7 = 12. Then, based on the above formula, the affinity of the two can be calculated.
Based on the above affinity matrix, further, in step S302, for each head detection box and body detection box matched by the bipartite graph matching algorithm, the corresponding affinity value is obtained and compared with a preset affinity threshold. If the affinity value of the mutually matched head detection box and body detection box is greater than or equal to the preset affinity threshold, the two are matched in step S304; if the affinity value of the mutually matched head detection box and body detection box is less than the preset affinity threshold, the two are not matched, as shown in step S303. The preset affinity threshold may be designed, for example, as 0.5, or may also be designed as 0.8.
To describe this more specifically: when the preset affinity threshold is 0.5, if the affinity value between the matched head detection box H_2 and body detection box B_6 is 0.7, which is greater than 0.5, the two are matched, and the matched head detection box and body detection box may, for example, be correspondingly renamed H_2' and B_2', together forming the target detection box M_2.
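The verification in steps S302 to S304 amounts to gating each matched pair by its affinity value. A small sketch under the same assumptions (the affinity matrix here is hypothetical, since the patent's affinity formula is given only as a figure):

```python
def verify_matches(pairs, affinity, threshold=0.5):
    """Keep a matched (head, body) pair only if its affinity value is
    greater than or equal to the preset threshold (step S304);
    otherwise the pair is rejected (step S303)."""
    kept = [(j, k) for j, k in pairs if affinity[j][k] >= threshold]
    rejected = [(j, k) for j, k in pairs if affinity[j][k] < threshold]
    return kept, rejected
```

With the preset threshold of 0.5, a pair with affinity 0.7 is confirmed while a pair with affinity 0.4 is rejected, mirroring the H_2/B_6 example above.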
Based on the above process, multiple head detection boxes and the body detection boxes associated/matched with them are obtained after matching. Further, in step S202, the target detection box set is generated based on the multiple mutually associated/matched body and head detection boxes. In this process, first, in step S2021, it is judged whether a head detection box and a body detection box in the current frame are matched. A matched head detection box and body detection box together constitute a target detection box in step S2023, wherein each target detection box contains a pair of matched head detection box and body detection box. An unmatched head detection box or body detection box may be deleted directly, as in step S2022, or may be placed into another set for convenient subsequent disambiguation. Based on the above steps, all the obtained target detection boxes finally form the target detection box set M described in the disclosure.
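The assembly of step S202 can be sketched as follows; the dictionary layout and names are illustrative, not from the patent:

```python
def build_target_box_set(head_boxes, body_boxes, kept_pairs):
    """Form the target detection box set M (step S2023): each target box
    bundles one matched head box with its body box. Unmatched boxes are
    set aside in a separate collection (step S2022) rather than deleted,
    to ease subsequent disambiguation."""
    targets = [{"head": head_boxes[j], "body": body_boxes[k]}
               for j, k in kept_pairs]
    matched_heads = {j for j, _ in kept_pairs}
    matched_bodies = {k for _, k in kept_pairs}
    unmatched = {
        "heads": [h for j, h in enumerate(head_boxes) if j not in matched_heads],
        "bodies": [b for k, b in enumerate(body_boxes) if k not in matched_bodies],
    }
    return targets, unmatched
```

Keeping the unmatched boxes, rather than deleting them, follows the second option the text mentions for step S2022.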
After the target detection box set is generated, further in step S103, the prediction box set in the current frame is determined according to the trajectory velocity of each existing trajectory in the video image, wherein each prediction box contains its head prediction box and its body prediction box.
Fig. 4 shows a flowchart of an exemplary method 400 of calculating the trajectory velocity of each trajectory currently existing in the video image to obtain the set of predicted trajectory-position boxes in the current frame, according to an embodiment of the disclosure.
Referring to Fig. 4, first, in step S401, for each existing trajectory, the body prediction box of that trajectory in the current frame and the head prediction box associated therewith are determined based on the trajectory velocity of the existing trajectory and the body box position and head box position at the end of the existing trajectory. The end of an existing trajectory is given by the body detection box position and head detection box position in the video frame in which the head/body box of the existing trajectory was last detected before the current frame (i.e., the end frame of the existing trajectory). For example, the position of the body detection box can be represented by the position of the body box center point, and the position of the head detection box can be represented by the position of the head box center point.
In the above process, since the area of the body box is relatively large, selecting the position information of the body box to calculate the trajectory velocity makes the relative error smaller, further improving detection accuracy. Fig. 5 shows a position-change diagram of several trajectories in a video image as the video frame number changes.
Next, the trajectory calculation and the determination of the body box and head box positions are described with reference to Fig. 5. Over consecutive frames of the video image, the traces of several trajectories s1, s2, and s3 are as shown; for example, if the frame number of the current frame is 6, then as shown in Fig. 5, the ends of the existing trajectories are shown at O1, O2, and O3. For a specific existing trajectory, its trajectory velocity can be calculated using formula (2):
Wherein l represents the length of this existing trajectory, which can be calculated as the number of body detection boxes contained in the trajectory, i.e., the number of video frames in which a body detection box corresponding to this trajectory exists. The video frames in which this trajectory exists may, in time order, be consecutive or non-consecutive, and they are sequentially numbered 1, 2, 3, ..., l-1, l in time order. It will be appreciated that these sequential numbers do not denote the actual frame numbers of the video frames. For example, the video frame numbered 1 may have actual frame number f_1 = 5, the video frame numbered 2 may have actual frame number f_2 = 6, the video frame numbered 3 may have actual frame number f_3 = 8, and the video frame numbered 4 may have actual frame number f_4 = 10.
Wherein f_l represents the actual frame number of the end frame l of this trajectory, f_{l-1} represents the actual frame number of the frame l-1 immediately preceding the end frame, f_{l-2} represents the actual frame number of the frame l-2 two before the end frame, and f_{l-3} represents the actual frame number of the frame l-3 three before the end frame.
Wherein p_l represents the center point coordinate of the body detection box in the end frame l of this trajectory, p_{l-1} represents the center point coordinate of the body detection box in the frame l-1 immediately preceding the end frame, p_{l-2} represents the center point coordinate of the body detection box in the frame l-2 two before the end frame, and p_{l-3} represents the center point coordinate of the body detection box in the frame l-3 three before the end frame.
Next in conjunction with three kinds of tracks in Fig. 5, it is specifically described.Such as the case where present frame frame number is 5
Under, the path velocity in present frame can be calculated based on frame number 5 and before.
For the s1 of track, frame number 5 and before frame number in, human testing, which outlines, in video image has showed 5 times, then
The quantity for knowing human testing frame is 5, that is, path length l is 5.It substitutes into formula (2), due to its l >=4, then based on differentiation item
Part is chosen following formula and is calculated:
If wherein central point of the known human testing frame central point in the 5th frame of video frame, the 4th frame, the 3rd frame, the 2nd frame is sat
Mark is respectively (24,5), (19,5), (15,5), (10,5), then substitutes into calculating it is found that its speed in the y-axis direction is 0, in x
Speed vx in axis direction is 4.5.
For track s2, within frame 5 and the preceding frames, the human body detection box has appeared 3 times in the video images, so the number of human body detection boxes is 3; that is, the track length l is 3. Substituting into formula (2), since 1 < l < 4, the corresponding branch of the formula is selected for the calculation.
If the center-point coordinates of the human body detection box of track s2 in frames 5, 4, and 3 of the video are known to be (24, 16), (19, 12), and (15, 8) respectively, substitution shows that the velocity in the y-axis direction is 3 and the velocity v_x in the x-axis direction is 4.
For track s3, within frame 5 and the preceding frames, the human body detection box has appeared only once in the video images, so the number of human body detection boxes is 1; that is, the track length l is 1. Substituting into the formula, its velocity is calculated to be 0.
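Because the per-branch expressions of formula (2) appear as images in the original publication and are not reproduced in this text, the following Python sketch only illustrates the general idea of estimating a track velocity from the recent box centers. The mean-of-consecutive-displacements rule used here is an assumption for illustration, not the patent's exact formula (for track s1 it yields roughly 4.67 rather than the 4.5 stated above).

```python
def track_velocity(centers):
    """Estimate a track's velocity from the center points of its
    human body detection boxes in consecutive frames (oldest first).

    NOTE: this mean-of-consecutive-displacements rule is an illustrative
    assumption; the patent's formula (2) selects different expressions
    depending on the track length l and is not reproduced here.
    """
    l = len(centers)
    if l <= 1:
        return (0.0, 0.0)  # a track seen only once has velocity 0 (track s3)
    dxs = [b[0] - a[0] for a, b in zip(centers, centers[1:])]
    dys = [b[1] - a[1] for a, b in zip(centers, centers[1:])]
    return (sum(dxs) / (l - 1), sum(dys) / (l - 1))
```

A track with a single detection, like s3, returns zero velocity, matching the l = 1 case above.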
Based on the track velocity obtained from the above formulas, the positions of the head box and human body box of the track in the current frame can be predicted. The prediction formula is as follows:

p' = p + v(i − f_l)    (3)

In the above formula, p' is the center-point coordinate of the head prediction box / human body prediction box in the current frame, p is the center-point coordinate of the head detection box / human body detection box in the last frame of the track, and v is the track velocity, which can be obtained from the foregoing formulas; i is the frame number of the current frame, and f_l is the frame number of the last frame of the track. The height and width of the human body prediction box / head prediction box are kept the same as the height and width of the human body detection box / head detection box in the last frame of the track.
Therefore, for tracks s1, s2, and s3, the predicted positions of their human body boxes and head boxes in frame 6 can be obtained from formula (3), as shown by the broken lines in Fig. 5. The human body prediction box and head prediction box of track s3 in the current frame coincide with its human body box and head box in frame 5. The center-point coordinate of the human body prediction box of track s2 is (28, 19), and that of track s1 is (28.5, 5). The coordinates of the head prediction boxes can likewise be calculated from the above formula and are not repeated here.
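As a minimal sketch of formula (3): the box sizes used below are hypothetical, and the function simply carries the last frame's width and height over unchanged, as the text specifies.

```python
def predict_box(center, size, v, current_frame, last_frame):
    """Predict a box in the current frame per formula (3):
    p' = p + v * (i - f_l); width/height carry over from the last frame."""
    dt = current_frame - last_frame  # i - f_l
    cx, cy = center
    vx, vy = v
    return ((cx + vx * dt, cy + vy * dt), size)
```

With the example values above, track s1's center (24, 5) and velocity (4.5, 0) give (28.5, 5) in frame 6, and track s2's center (24, 16) and velocity (4, 3) give (28, 19), matching the predicted positions stated in the text.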
After the human body prediction boxes and the associated head prediction boxes of the existing tracks in the current frame have been obtained, in step S402 a prediction box set is generated. The prediction box set characterizes the predicted position of each existing track in the current frame, and each prediction box in the set includes the human body prediction box and head prediction box of one existing track.
After the prediction box set and the target detection box set in the current frame have been determined, in step S104, for each target detection box in the target detection box set, a matching prediction box is determined in the prediction box set, and target tracking is performed based on the matching result of the target detection boxes and the prediction boxes.
Fig. 6 shows a flowchart of an exemplary method of determining, for each target detection box in the target detection box set, a matching prediction box in the prediction box set, and performing target tracking based on the matching result of the target detection boxes and the prediction boxes.
Referring to Fig. 6, first, in step S601, based on each target detection box in the target detection box set and each prediction box in the prediction box set, for the head detection box in each target detection box, the head-box similarity between that head detection box and the head prediction box in a prediction box is determined. The similarity can be determined by the following formula, where x_n and y_n are the abscissa and ordinate of the center point of the n-th head detection box, x_q' and y_q' are the abscissa and ordinate of the center point of the q-th head prediction box, w_n is the width of the head detection box, and h_n is its height.
The above process can be described more concretely. For example, suppose the center-point coordinate of the 2nd head detection box H_2 in the currently obtained head detection box set is (17, 26), the center-point coordinate of the 7th head prediction box H_7' in the currently obtained head prediction box set is (19, 30), and the width w_2 and height h_2 of head detection box H_2 are both 4. Substituting into the above formula then yields the head-box similarity value of head detection box H_2 and head prediction box H_7'. Based on the above formula, the similarity value between the head detection box and the head prediction box can be obtained for each target detection box in the target detection box set and each prediction box in the prediction box set.
Further, in step S602, based on each target detection box in the target detection box set and each prediction box in the prediction box set, for the human body detection box in each target detection box, the body-box similarity between that human body detection box and the human body prediction box in a prediction box is determined. The similarity can be determined using the formula below, where IOU denotes intersection-over-union, that is, the ratio of the intersection area of the human body detection box and the human body prediction box to the area of their union; B_q is the human body prediction box of the q-th prediction box in the prediction box set P, and B_n is the human body detection box of the n-th detection box in the detection box set M. For each target detection box in the target detection box set M and each prediction box in the prediction box set P, the body-box similarity between the human body detection box in the target detection box and the human body prediction box in the prediction box can be obtained with the above formula.
Fig. 7 shows a schematic diagram of computing the intersection-over-union of a human body detection box and a human body prediction box.
Referring to Fig. 7, the above process can be described more concretely. For example, suppose the center-point coordinate of the 5th human body detection box B_m^5 in the currently obtained human body detection box set is (22, 150), the center-point coordinate of the 1st human body prediction box B_p^1 in the currently obtained human body prediction box set is (47, 250), and both boxes have width 50 and height 200. According to formula (5), the intersection area of the human body detection box and the human body prediction box is 25 × 100 = 2500, the union area is 2 × (50 × 200) − 2500 = 17500, and their intersection-over-union is therefore 2500 / 17500 ≈ 0.14. Based on the above process, the intersection-over-union of B_m^5 and B_p^1, that is, the similarity of the two, is obtained.
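The intersection-over-union computation above can be sketched in a few lines of Python; boxes are given by center point and size, as in the example.

```python
def iou(center_a, size_a, center_b, size_b):
    """Intersection-over-union of two axis-aligned boxes given by
    center point (cx, cy) and size (w, h), as in formula (5)."""
    def bounds(c, s):
        return (c[0] - s[0] / 2, c[0] + s[0] / 2,
                c[1] - s[1] / 2, c[1] + s[1] / 2)
    ax1, ax2, ay1, ay2 = bounds(center_a, size_a)
    bx1, bx2, by1, by2 = bounds(center_b, size_b)
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))  # overlap width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))  # overlap height
    inter = iw * ih
    union = size_a[0] * size_a[1] + size_b[0] * size_b[1] - inter
    return inter / union if union > 0 else 0.0
```

For the two 50 × 200 boxes centered at (22, 150) and (47, 250), this gives 2500 / 17500 ≈ 0.14, the value worked out above.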
It will be appreciated that the operations of steps S601 and S602 can be performed in parallel or sequentially; no restriction is made on this here.
After the similarity between the human body prediction box and the human body detection box and the similarity between the head prediction box and the head detection box have been obtained, further, in step S603, the total similarity of the target detection box and the prediction box is determined according to the head-box similarity and the body-box similarity. The total similarity can, for example, be obtained by summing the head similarity and the body similarity with corresponding weights, or it can be obtained directly by averaging the head similarity and the body similarity, as follows:

S = (S_h + S_b) / 2    (6)
The above process can be described more concretely. For example, when computing the total similarity of target detection box M_1 and prediction box P_4, if the head similarity value of the head detection box of M_1 and the head prediction box of P_4 is 0.34, and the body similarity of the human body detection box of M_1 and the human body prediction box of P_4 is 0.21, then with the averaging method of formula (6) the total similarity of M_1 and P_4 is 0.275.
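Both variants described above (weighted sum and plain average) can be captured in one small helper; with equal weights of 0.5 it reduces to formula (6).

```python
def total_similarity(s_head, s_body, w_head=0.5, w_body=0.5):
    """Total similarity of a target detection box and a prediction box.
    Equal weights (0.5, 0.5) give the average of formula (6); other
    weights give the weighted-sum variant mentioned in the text."""
    return w_head * s_head + w_body * s_body
```

For head similarity 0.34 and body similarity 0.21, the equal-weight average is 0.275.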
After the total similarity of the target detection boxes and the prediction boxes has been obtained, in step S604, the prediction box matching each target detection box is determined in the prediction box set according to the similarity.
In step S604, the target detection box set and the prediction box set can first be matched. This matching process can be realized, for example, by matching the target detection box set with the prediction box set via a bipartite graph matching algorithm. The bipartite graph matching algorithm can be, for example, the Kuhn-Munkres method (KM algorithm), or a greedy method. The present disclosure places no restriction on the bipartite graph matching algorithm selected.
More specifically, the KM algorithm can be chosen to associate/match the target detection box set and the prediction box set. According to the KM algorithm, the number of target detection boxes in the target detection box set and the number of prediction boxes in the prediction box set are first determined, and the minimum of the two counts is found; the set with the minimum count is taken as the reference set X and the other set as the set Y to be compared, and for each box in the reference set X the matching box is determined in the set Y to be compared.
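Since the text allows a greedy method as an alternative to the KM algorithm, a minimal greedy matcher over a similarity matrix might look as follows; the matrix values are made up for the example.

```python
def greedy_match(sim):
    """Greedily match rows (target detection boxes) to columns
    (prediction boxes) by descending similarity; each row and column
    is used at most once. A simple alternative to the KM algorithm."""
    pairs = sorted(((sim[r][c], r, c)
                    for r in range(len(sim))
                    for c in range(len(sim[0]))), reverse=True)
    used_r, used_c, matches = set(), set(), []
    for s, r, c in pairs:
        if r not in used_r and c not in used_c:
            matches.append((r, c, s))
            used_r.add(r)
            used_c.add(c)
    return matches
```

The greedy result is not always globally optimal, which is why the KM algorithm is named as the primary choice; the subsequent total-similarity threshold check applies to either matcher.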
After the matching of the target detection box set and the prediction box set is completed, further, the matching accuracy can be verified. For example, a total similarity threshold can be set, and the total similarity value of each matched pair of target detection box and prediction box is compared with it. If the total similarity value of a target detection box and prediction box marked as matched by the KM algorithm is greater than or equal to the preset total similarity threshold, the two are matched; if it is less than the preset total similarity threshold, the two are not matched. The preset total similarity threshold can, for example, be designed as 0.5, or as 0.8.
More concretely, for example, when the preset total similarity threshold is 0.8: if the total similarity value of a target detection box and prediction box marked as matched by the KM algorithm is 0.97, which is greater than 0.8, the two are matched; if the total similarity value is 0.62, which is less than 0.8, the two are not matched.
It should be appreciated that the present disclosure is not limited to verifying matching accuracy based on total similarity. In some embodiments, the above matching process can be verified based only on head similarity: a head similarity threshold is set, and if the similarity value of the head prediction box and the head detection box of a marked matching pair is greater than or equal to the preset head similarity threshold, the target detection box and the prediction box are matched; otherwise, no match is made.
In other embodiments, the above matching process can be verified based only on body similarity: a body similarity threshold is set, and if the similarity value of the human body detection box and the human body prediction box contained in a marked matching pair is greater than or equal to the preset body similarity threshold, the target detection box and the prediction box are matched; otherwise, no match is made.
In addition, further, in step S604, for each target detection box in the target detection box set for which a matching prediction box exists, a human-feature similarity check can also be performed between the target detection box and the track.
Fig. 8 shows an exemplary flowchart of the human-feature similarity check between a target detection box and a track.
Referring to Fig. 8, for each target detection box in the target detection box set for which a matching prediction box exists, first, in step S801, the human-feature similarity between the image in the target detection box and the track object of the corresponding track is determined, where the track object is the human object forming the track. The human-feature similarity characterizes how similar the image in the human body detection box of the target detection box and the track object of the associated existing track are in terms of human physical characteristics and image content features. Using this similarity is beneficial in crowded scenes: when a person behind is wholly or largely occluded by a person in front, the target detection box of the person in front may be wrongly associated with the prediction box of the track of the person behind, causing track-following mistakes and errors, and the similarity check helps avoid this.
In step S801, the human-feature similarity can be obtained by means of image processing. For example, for each human body detection box, the human-feature similarity between the image in that human body detection box and the image in the human body detection box in the last frame of the corresponding track is determined.
For example, the image in the human body detection box in the last frame of the track is taken as the reference image, and the image in the human body detection box in the current frame is taken as the image to be recognized. The reference image and the image to be recognized are input to a deep learning network to obtain, respectively, a reference image feature vector f and a to-be-recognized image feature vector g of fixed dimension. The deep learning network can, for example, be a deep convolutional neural network or a fully connected neural network, to meet different practical application requirements; the present disclosure does not limit the type of neural network selected.
Thereafter, the similarity of the reference image feature vector and the to-be-recognized image feature vector can be calculated by cosine similarity to obtain the human-feature similarity. The cosine similarity formula that can be adopted is as follows:

S_a = (f · g) / (‖f‖ · ‖g‖)

where f and g respectively denote the reference image feature vector and the to-be-recognized image feature vector.
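The cosine similarity above can be computed directly from the two feature vectors; a minimal sketch:

```python
import math

def cosine_similarity(f, g):
    """Cosine similarity of two feature vectors:
    S_a = (f . g) / (|f| * |g|). Returns 0 for a zero vector."""
    dot = sum(a * b for a, b in zip(f, g))
    nf = math.sqrt(sum(a * a for a in f))
    ng = math.sqrt(sum(b * b for b in g))
    return dot / (nf * ng) if nf > 0 and ng > 0 else 0.0
```

Identical feature vectors score 1, orthogonal ones score 0, so the result fits naturally against a preset feature similarity threshold.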
According to the embodiment of the present disclosure, in step S604 the detection box set B_m is matched with the prediction box set B_p, yielding an existing-track detection box set B_R containing R detection boxes. For each detection box B_r^R among the R detection boxes obtained in step S604, the human-feature similarity S_ar between the image in that human body detection box and the image in the human body detection box in the last frame of the corresponding track can be determined by the above process. For example, if the existing-track detection box set B_R includes 30 detection boxes, then for the 5th detection box its cosine similarity S_a5 can be computed.
Thereafter, further, in step S802, when the human-feature similarity meets a predetermined threshold, the target detection box and the matching prediction box are determined as a matched target detection box and prediction box. For example, a feature similarity threshold can be preset: when the calculated similarity of the reference image feature vector and the to-be-recognized image feature vector is greater than or equal to the preset feature similarity threshold, the target detection box and the matching prediction box are matched; when the calculated similarity is less than the preset feature similarity threshold, the target detection box and the prediction box are not matched.
After the matching prediction box has been determined in the prediction box set for each target detection box, target tracking can be performed based on the matching result of the target detection boxes and the prediction boxes. For each mutually matched target detection box and prediction box, the track associated with the prediction box can be updated with the target detection box, and the matched target is merged into the associated track, facilitating subsequent tracking; moreover, based on the target detection box in the current frame, the track velocity of the corresponding track in the current frame can be determined with reference to the schematic diagram in Fig. 5 and the foregoing formulas, for use in the next frame, realizing real-time tracking. Unmatched target detection boxes and prediction boxes can be deleted directly or placed in another set, so that they are distinguished from the detection and prediction boxes of current tracks.
In some embodiments, for each target detection box in the target detection box set for which no matching prediction box exists, the target detection box is taken as the starting point of a new track. Turning an unmatched target detection box into an independent track helps retain, to the greatest extent, the targets detected in the image and improves the efficiency of target detection.
In other embodiments, for each existing track that has no matching target detection box in the current frame, the number of consecutive frames in which the existing track has disappeared can first be determined. Thereafter, if the number of consecutive disappearance frames is less than or equal to a preset first threshold, the existing track is retained; if the number of consecutive disappearance frames is greater than the preset first threshold, the existing track is terminated. The preset first threshold can, for example, be set to 5 frames, or to 8 frames, and limits how long a track is kept alive.
The process can be described more concretely. For example, suppose the preset first threshold is set to 5 frames, and in the current frame there is a prediction box M_78 for which no matching target detection box exists. Then track 78 associated with it can be checked to determine the number of frames in which it has consecutively disappeared in the current frame and the preceding video image frames. If track 78 has consecutively disappeared for 5 frames, which equals the preset first threshold, the existing track is retained; if, when the next frame arrives, still no target detection box matching prediction box M_78 appears, the track has then consecutively disappeared for 6 frames, which is greater than the preset first threshold, and the existing track is terminated.
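The retention rule above can be sketched as a small state update (the counter and threshold names are illustrative):

```python
def update_track_state(missed_frames, matched_this_frame, first_threshold=5):
    """Track lifecycle per the rule above: a match resets the
    disappearance counter; otherwise the counter is incremented, and
    the track is terminated once it exceeds the preset first threshold.

    Returns (new_missed_frames, keep_track).
    """
    if matched_this_frame:
        return 0, True
    missed_frames += 1
    keep = missed_frames <= first_threshold
    return missed_frames, keep
```

With the default threshold of 5, a track that reaches exactly 5 missed frames is kept, and a 6th consecutive miss terminates it, matching the M_78 example.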
The above process can well solve the problem of poor tracking in complex scenes with high crowd density or severe occlusion between targets. When an occluded target detection box in the current frame cannot be matched with a track, retaining tracks for a predetermined number of frames improves the robustness of target tracking, and reduces cases in which the continuous track of one target is mistaken for multiple truncated tracks because the track was interrupted by occlusion, improving detection accuracy.
Fig. 9 shows an exemplary block diagram of the target tracking apparatus according to an embodiment of the present disclosure.
The target tracking apparatus 900 shown in Fig. 9 includes: a head-and-body detection module 910, a head-body detection association module 920, a prediction box computation module 930, and a target association module 940.
The head-and-body detection module 910 is configured to perform target detection on the current frame of the video image to obtain a head detection box set and a human body detection box set.
The head-body detection association module 920 is configured to associate the head detection box set with the human body detection box set to obtain the target detection box set in the current frame, where each target detection box includes its head detection box and its human body detection box.
The prediction box computation module 930 is configured to determine, according to the track velocity of each existing track in the video image, the prediction box set in the current frame, where each prediction box includes its head prediction box and its human body prediction box.
The target association module 940 is configured to determine, for each target detection box in the target detection box set, a matching prediction box in the prediction box set, and to perform target tracking based on the matching result of the target detection boxes and the prediction boxes.
The video image can be an image captured in real time by a road camera or an automatic surveillance camera, or it can be a video image obtained in advance by other means. Embodiments of the present disclosure are not limited by the source of the video image or the way it is acquired. For example, it can be an image shot directly by a road speed-detection camera or the surveillance camera of an unmanned supermarket, or a video image obtained after preprocessing by a computer.
The current frame of the video image is the image frame of the video to be analyzed at the current moment, for example an image frame captured in real time at the current moment. Based on the current frame image captured in real time, the process of performing target detection on the image can be realized by deep learning algorithms, for example by two-stage target detection algorithms such as the region-based convolutional neural network algorithm (R-CNN) and the faster region-based convolutional neural network algorithm (Faster R-CNN), or by one-stage target detection algorithms such as You Only Look Once (YOLO) and the Single Shot MultiBox Detector (SSD). Embodiments of the present disclosure are not limited by the target detection method selected.
In the head-body detection association module 920, the process shown in Fig. 2 can be executed to associate the head detection box set with the human body detection box set and obtain the target detection box set in the current frame. It can further include a head-body matching module 921 and a target position set generation module 922.
The head-body matching module 921 is configured to perform an operation such as step S201 in Fig. 2: for each head detection box in the head detection box set, determining in the human body detection box set the human body detection box matching that head detection box. For example, the head detection box set can be matched with the human body detection box set by a bipartite graph matching algorithm, which can be, for example, the Kuhn-Munkres method (KM algorithm), or a greedy method. The present disclosure places no restriction on the bipartite graph matching algorithm selected.
More specifically, the KM algorithm can be chosen to associate/match the head detection box set and the human body detection box set. According to the KM algorithm, the number of head detection boxes in the head detection box set and the number of human body detection boxes in the human body detection box set are first determined, and the minimum of the two counts is found; the detection box set with the minimum count is taken as the reference detection box set X and the other as the detection box set Y to be compared, and for each detection box in the reference set X the matching detection box is determined in the set Y to be compared.
In addition, the head-body matching module 921 can further include a matching accuracy verification module, which can perform the verification process shown in Fig. 3 to obtain, after verification, head detection boxes and human body detection boxes that are mutually matched with higher confidence.
The target position set generation module 922 is configured to perform an operation such as step S202 in Fig. 2: generating the target detection box set, where each target detection box in the set includes a pair of matched head detection box and human body detection box.
Further, the prediction box computation module 930 shown in Fig. 9 includes a prediction box position computation module 931 and a prediction box set generation module 932.
The prediction box position computation module 931 can perform the process of step S401 in Fig. 4: for each existing track, based on the track velocity of the existing track and the positions of the human body box and head box at the end of the existing track, determining the human body prediction box and the associated head prediction box of the existing track in the current frame. The end of an existing track is the video frame, among the frames before the current frame, in which a human body detection box associated with the existing track was last detected.
In the above process, since the area of the human body box is relatively large, using the position information of the human body box to calculate the track velocity keeps the relative error small, further improving detection accuracy.
The prediction box set generation module 932 performs the operation of step S402 in Fig. 4: generating the prediction box set, which characterizes the predicted position of each existing track in the current frame, where each prediction box in the set includes the human body prediction box and head prediction box of one existing track.
The target association module 940 can perform the process shown in Fig. 6: for each target detection box in the target detection box set, determining a matching prediction box in the prediction box set, and performing target tracking based on the matching result of the target detection boxes and the prediction boxes. The target association module 940 can include a similarity calculation module 941 and a target matching module 942.
The similarity calculation module 941 can perform the operations of steps S601 to S603 in Fig. 6, and the target matching module 942 can perform the operation of step S604 in Fig. 6.
In addition, further, the target matching module 942 can also include a human-feature similarity verification module which, for each target detection box in the target detection box set for which a matching prediction box exists, can perform the process shown in Fig. 8 to carry out the human-feature similarity check between the target detection box and the track.
The target tracking device 950 shown in Fig. 10 can be implemented as modules or components of one or more dedicated or general-purpose computer systems, such as a PC, a laptop, a tablet computer, a mobile phone, a personal digital assistant (PDA), or any smart portable device. The target tracking device 950 can include at least one processor 960 and a memory 970.
The at least one processor is configured to execute program instructions. In the target tracking device 950, the memory 970 can exist as program storage units and data storage units in different forms, such as a hard disk, read-only memory (ROM), or random access memory (RAM), and can be used to store the various data files used by the processor in processing and/or performing target tracking, as well as the possible program instructions executed by the processor. Although not shown, the target tracking device 950 can also include an input/output component supporting the input/output data flow between the target tracking device 950 and other components (such as an image capture device 980). The target tracking device 950 can also send and receive information and data from a network through a communication port.
In some embodiments, when a set of instructions stored in the memory 970 is executed by the processor 960, the target tracking device 950 is caused to perform operations including: performing target detection on the current frame of the video image to obtain a head detection box set and a human body detection box set; associating the head detection box set with the human body detection box set to obtain the target detection box set in the current frame, where each target detection box includes its head detection box and its human body detection box; determining the prediction box set in the current frame according to the track velocity of each existing track in the video image, where each prediction box includes its head prediction box and its human body prediction box; and, for each target detection box in the target detection box set, determining a matching prediction box in the prediction box set and performing target tracking based on the matching result of the target detection boxes and the prediction boxes.
In some embodiments, the target tracking device 950 can receive video images collected by an image acquisition device external to the target tracking device 950, and perform the target tracking described above on the received image data, realizing the functions of the target tracking apparatus described above.
The video capture device can be, for example, a road camera or the automatic surveillance equipment of an unmanned supermarket.
Although the processor 960 and the memory 970 are presented as separate modules in Fig. 10, those skilled in the art will understand that the above device modules may be implemented as separate hardware devices or integrated into one or more hardware devices. As long as the principles described in the present disclosure can be realized, the specific implementation of different hardware devices should not be taken as a factor limiting the protection scope of the present disclosure.
According to another aspect of the present disclosure, a non-volatile computer-readable storage medium is also provided, on which computer-readable instructions are stored; when the instructions are executed by a computer, the foregoing method can be performed.
Program portions of the technology may be considered "products" or "articles of manufacture" in the form of executable code and/or associated data, engaged in or realized through computer-readable media. Tangible, permanent storage media can include the memory or storage used by any computer, processor, or similar device, or associated modules, for example various semiconductor memories, tape drives, disk drives, or any other device that can provide a storage function for software.
All or part of the software may at times be communicated through a network, such as the Internet or another communication network. Such communication can load the software from one computer device or processor to another: for example, from a server or host computer of the target tracking device to the hardware platform of a computer environment, or to another system implementing the environment, or to a system with similar functions related to providing the information required for target tracking. Therefore, another kind of medium capable of transmitting software elements can also be used as a physical connection between local devices, such as light waves, radio waves, or electromagnetic waves, propagated through cables, optical cables, or the air. The physical media used for such carrier waves, such as cables, wireless connections, or optical cables and similar devices, can also be considered media that carry the software. As used herein, unless tangible "storage" media are specifically designated, other terms referring to computer- or machine-"readable media" all denote media that participate in the process of a processor executing any instruction.
This application uses specific words to describe its embodiments. Terms such as "first/second embodiment", "one embodiment", and/or "some embodiments" refer to a feature, structure, or characteristic related to at least one embodiment of the application. Therefore, it should be emphasized that two or more references to "an embodiment", "one embodiment", or "an alternative embodiment" in different places in this specification do not necessarily refer to the same embodiment. Furthermore, certain features, structures, or characteristics of one or more embodiments of the application may be combined as appropriate.
In addition, those skilled in the art will understand that aspects of the application may be illustrated and described in terms of several patentable categories or situations, including any new and useful process, machine, article of manufacture, or composition of matter, or any new and useful improvement thereof. Accordingly, aspects of the application may be carried out entirely by hardware, entirely by software (including firmware, resident software, microcode, etc.), or by a combination of hardware and software. The above hardware or software may be referred to as a "data block", "module", "engine", "unit", "component", or "system". Furthermore, aspects of the application may take the form of a computer product embodied in one or more computer-readable media, the product comprising computer-readable program code.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meanings as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms such as those defined in commonly used dictionaries should be interpreted as having meanings consistent with their meanings in the context of the relevant art, and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The above is a description of the invention and should not be regarded as a limitation thereof. Although several exemplary embodiments of the invention have been described, those skilled in the art will readily appreciate that many modifications can be made to the exemplary embodiments without materially departing from the teachings and advantages of the invention. Accordingly, all such modifications are intended to be included within the scope of the invention as defined by the claims. It should be understood that the foregoing describes the invention without limiting it to the particular embodiments disclosed, and that modifications to the disclosed embodiments and other embodiments are intended to be included within the scope of the appended claims. The invention is defined by the claims and their equivalents.
Claims (15)
1. A target tracking method, comprising:
performing target detection on a current frame of a video image to obtain a head detection box set and a body detection box set;
associating the head detection box set with the body detection box set to obtain a target detection box set for the current frame, wherein each target detection box comprises its head detection box and its body detection box;
determining a prediction box set for the current frame according to the trajectory velocity of each existing trajectory in the video image, wherein each prediction box comprises its head prediction box and its body prediction box; and
for each target detection box in the target detection box set, determining a matching prediction box in the prediction box set, and performing target tracking based on the matching result between target detection boxes and prediction boxes.
2. The target tracking method of claim 1, wherein associating the head detection box set with the body detection box set to obtain the target detection box set for the current frame comprises:
for each head detection box in the head detection box set, determining a body detection box in the body detection box set that matches the head detection box; and
generating the target detection box set, wherein each target detection box in the target detection box set comprises a pair of matched head and body detection boxes.
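One plausible implementation of this head-to-body association is a greedy pairing by how well each head box is contained in a candidate body box. This is only an illustrative sketch; the containment criterion, the 0.5 threshold, and the function names are assumptions, not the claimed rule:

```python
def associate_heads_to_bodies(head_boxes, body_boxes):
    """Greedily pair each head box with the body box that best contains it.
    Boxes are (x1, y1, x2, y2); returns (head_box, body_box) pairs,
    each pair forming one target detection box."""
    def containment(head, body):
        # Fraction of the head box's area lying inside the body box.
        ix1, iy1 = max(head[0], body[0]), max(head[1], body[1])
        ix2, iy2 = min(head[2], body[2]), min(head[3], body[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        head_area = (head[2] - head[0]) * (head[3] - head[1])
        return inter / head_area if head_area > 0 else 0.0

    pairs = []
    free_bodies = list(body_boxes)  # each body box is used at most once
    for head in head_boxes:
        scored = [(containment(head, b), b) for b in free_bodies]
        score, best = max(scored, default=(0.0, None))
        if best is not None and score > 0.5:  # assumed acceptance threshold
            pairs.append((head, best))
            free_bodies.remove(best)
    return pairs
```

Unpaired heads or bodies are simply left out of the target detection box set in this sketch; an implementation could also keep them as single-box targets.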
3. The target tracking method of claim 1, wherein determining the prediction box set for the current frame according to the trajectory velocity of each existing trajectory in the video image comprises:
for each existing trajectory, determining the body prediction box, and the head prediction box associated therewith, of the existing trajectory in the current frame, based on the trajectory velocity of the existing trajectory and the body box position and head box position at the end of the existing trajectory; and
generating the prediction box set, wherein each prediction box in the prediction box set comprises the body prediction box and head prediction box of an existing trajectory.
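A constant-velocity motion model is the simplest reading of this step: shift the trajectory's last head and body boxes by the estimated per-frame velocity. A sketch under that assumption (the track dictionary layout is invented for illustration):

```python
def predict_boxes(track, frames_elapsed=1):
    """Shift the last head/body boxes of a trajectory by its estimated
    per-frame velocity (vx, vy) to obtain the prediction boxes for the
    current frame. Boxes are (x1, y1, x2, y2)."""
    vx, vy = track["velocity"]
    dx, dy = vx * frames_elapsed, vy * frames_elapsed

    def shift(box):
        return (box[0] + dx, box[1] + dy, box[2] + dx, box[3] + dy)

    return {"head": shift(track["last_head"]), "body": shift(track["last_body"])}
```

A real system might instead use a Kalman filter, but any predictor that outputs a paired head box and body box per trajectory fits the claim language.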
4. The target tracking method of claim 1, wherein, for each target detection box in the target detection box set, determining the matching prediction box in the prediction box set, and performing target tracking based on the matching result between target detection boxes and prediction boxes, comprises:
for each target detection box in the target detection box set and each prediction box in the prediction box set: determining, for the head detection box in the target detection box, a head-box similarity between the head detection box and the head prediction box in the prediction box; determining, for the body detection box in the target detection box, a body-box similarity between the body detection box and the body prediction box in the prediction box; and determining a total similarity between the target detection box and the prediction box according to the head-box similarity and the body-box similarity; and
determining, according to the total similarities, the prediction box in the prediction box set that matches the target detection box.
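One way the total-similarity matching could be realized is a weighted sum of head-box and body-box similarities followed by greedy assignment. The equal weights, the minimum-similarity cutoff, and the greedy strategy are all assumptions (an implementation could equally use the Hungarian algorithm):

```python
def match_detections(detections, predictions, box_sim,
                     w_head=0.5, w_body=0.5, min_sim=0.3):
    """Match target detection boxes to prediction boxes by a weighted
    sum of head-box and body-box similarities, taking the highest-scoring
    pair first. Each detection/prediction is a dict with 'head' and
    'body' boxes; box_sim scores one pair of boxes."""
    scores = []
    for i, det in enumerate(detections):
        for j, pred in enumerate(predictions):
            total = (w_head * box_sim(det["head"], pred["head"])
                     + w_body * box_sim(det["body"], pred["body"]))
            scores.append((total, i, j))
    scores.sort(reverse=True)  # best total similarity first

    matches, used_det, used_pred = [], set(), set()
    for total, i, j in scores:
        if total < min_sim:
            break  # remaining pairs are too dissimilar to match
        if i not in used_det and j not in used_pred:
            matches.append((i, j))
            used_det.add(i)
            used_pred.add(j)
    return matches
```

Detections left unmatched here would start new trajectories, and unmatched predictions would increment their trajectory's missing-frame counter, consistent with the later claims.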
5. The target tracking method of claim 1, further comprising:
determining, based on each target detection box in the current frame, the trajectory velocity in the current frame of the trajectory corresponding to that target detection box.
6. The target tracking method of claim 5, wherein determining the trajectory velocity in the current frame of the trajectory corresponding to the target detection box comprises:
determining the trajectory velocity based on the body detection box in the target detection box.
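Since this claim bases the velocity on the body detection box, a straightforward estimate is the frame-to-frame displacement of the body-box center. A sketch under that assumption (the publication does not specify the estimator):

```python
def body_center(box):
    """Center point of a (x1, y1, x2, y2) box."""
    return ((box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0)

def update_velocity(prev_body_box, curr_body_box):
    """Per-frame trajectory velocity as the displacement of the
    body-box center between consecutive matched frames."""
    (px, py), (cx, cy) = body_center(prev_body_box), body_center(curr_body_box)
    return (cx - px, cy - py)
```

Smoothing the estimate over several frames (e.g. an exponential moving average) would make it more robust to detector jitter.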
7. The target tracking method of claim 1, further comprising: for each target detection box in the target detection box set for which a matching prediction box exists,
determining a body-feature similarity between the target detection box and the trajectory to which the matching prediction box belongs; and
in a case where the body-feature similarity meets a predetermined threshold, confirming the target detection box and the matching prediction box as a matched target detection box and prediction box.
8. The target tracking method of claim 1, further comprising:
for each target detection box in the target detection box set, in a case where no matching prediction box exists, taking the target detection box as the starting point of a new trajectory.
9. The target tracking method of claim 1, further comprising:
for each existing trajectory, in a case where the existing trajectory has no matching target detection box in the current frame, determining the number of consecutive frames for which the existing trajectory has been missing;
in a case where the number of consecutive missing frames is less than or equal to a preset first threshold, retaining the existing trajectory; and
in a case where the number of consecutive missing frames is greater than the preset first threshold, terminating the existing trajectory.
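The trajectory-lifecycle rule of this claim can be sketched as a per-frame update of a missing-frame counter. The threshold value 30 and the track dictionary layout are placeholders, not values from the publication:

```python
def update_track_status(tracks, matched_track_ids, max_missing=30):
    """Reset the missing-frame counter of matched trajectories;
    increment it for unmatched ones, retaining a trajectory while the
    counter stays within the threshold and terminating it otherwise."""
    surviving = []
    for track in tracks:
        if track["id"] in matched_track_ids:
            track["missing"] = 0
            surviving.append(track)
        else:
            track["missing"] += 1
            if track["missing"] <= max_missing:
                surviving.append(track)  # retained despite the miss
            # otherwise the trajectory is terminated (dropped)
    return surviving
```

Keeping briefly-missing trajectories alive lets the tracker bridge short occlusions without fragmenting identities.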
10. A target tracking apparatus, comprising:
a head-and-body detection module configured to perform target detection on a current frame of a video image to obtain a head detection box set and a body detection box set;
a head-body association module configured to associate the head detection box set with the body detection box set to obtain a target detection box set for the current frame, wherein each target detection box comprises its head detection box and its body detection box;
a prediction box computation module configured to determine a prediction box set for the current frame according to the trajectory velocity of each existing trajectory in the video image, wherein each prediction box comprises its head prediction box and its body prediction box; and
a target association module configured to, for each target detection box in the target detection box set, determine a matching prediction box in the prediction box set, and perform target tracking based on the matching result between target detection boxes and prediction boxes.
11. The target tracking apparatus of claim 10, wherein the head-body association module comprises:
a head-body matching module configured to, for each head detection box in the head detection box set, determine a body detection box in the body detection box set that matches the head detection box; and
a target box set generation module configured to generate the target detection box set, wherein each target detection box in the target detection box set comprises a pair of matched head and body detection boxes.
12. The target tracking apparatus of claim 10, wherein the target association module comprises:
a similarity computation module configured to, for each target detection box in the target detection box set and each prediction box in the prediction box set: determine, for the head detection box in the target detection box, a head-box similarity between the head detection box and the head prediction box in the prediction box; determine, for the body detection box in the target detection box, a body-box similarity between the body detection box and the body prediction box in the prediction box; and determine a total similarity between the target detection box and the prediction box according to the head-box similarity and the body-box similarity; and
a target matching module configured to determine, according to the total similarities, the prediction box in the prediction box set that matches the target detection box.
13. A target tracking device, comprising a processor and a memory, the memory containing a set of instructions that, when executed by the processor, cause the target tracking device to perform operations comprising:
performing target detection on a current frame of a video image to obtain a head detection box set and a body detection box set;
associating the head detection box set with the body detection box set to obtain a target detection box set for the current frame, wherein each target detection box comprises its head detection box and its body detection box;
determining a prediction box set for the current frame according to the trajectory velocity of each existing trajectory in the video image, wherein each prediction box comprises its head prediction box and its body prediction box; and
for each target detection box in the target detection box set, determining a matching prediction box in the prediction box set, and performing target tracking based on the matching result between target detection boxes and prediction boxes.
14. The target tracking device of claim 13, wherein the operation of associating the head detection box set with the body detection box set to obtain the target detection box set for the current frame comprises:
for each head detection box in the head detection box set, determining a body detection box in the body detection box set that matches the head detection box; and
generating the target detection box set, wherein each target detection box in the target detection box set comprises a pair of matched head and body detection boxes.
15. The target tracking device of claim 13, wherein, for each target detection box in the target detection box set, determining the matching prediction box in the prediction box set, and performing target tracking based on the matching result between target detection boxes and prediction boxes, comprises:
for each target detection box in the target detection box set and each prediction box in the prediction box set: determining, for the head detection box in the target detection box, a head-box similarity between the head detection box and the head prediction box in the prediction box; determining, for the body detection box in the target detection box, a body-box similarity between the body detection box and the body prediction box in the prediction box; and determining a total similarity between the target detection box and the prediction box according to the head-box similarity and the body-box similarity; and
determining, according to the total similarities, the prediction box in the prediction box set that matches the target detection box.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811198327.9A CN110163889A (en) | 2018-10-15 | 2018-10-15 | Method for tracking target, target tracker, target following equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110163889A true CN110163889A (en) | 2019-08-23 |
Family
ID=67645068
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811198327.9A Pending CN110163889A (en) | 2018-10-15 | 2018-10-15 | Method for tracking target, target tracker, target following equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110163889A (en) |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101231755A (en) * | 2007-01-25 | 2008-07-30 | 上海遥薇实业有限公司 | Moving target tracking and quantity statistics method |
JP2013149177A (en) * | 2012-01-22 | 2013-08-01 | Suzuki Motor Corp | Optical flow processor |
JP2013219531A (en) * | 2012-04-09 | 2013-10-24 | Olympus Imaging Corp | Image processing device, and image processing method |
CN103714553A (en) * | 2012-10-09 | 2014-04-09 | 杭州海康威视数字技术股份有限公司 | Multi-target tracking method and apparatus |
CN104081757A (en) * | 2012-02-06 | 2014-10-01 | 索尼公司 | Image processing apparatus, image processing method, program, and recording medium |
US20150009323A1 (en) * | 2013-07-03 | 2015-01-08 | Zmodo Technology Shenzhen Corp. Ltd | Multi-target tracking method for video surveillance |
US8958602B1 (en) * | 2013-09-27 | 2015-02-17 | The United States Of America As Represented By The Secretary Of The Navy | System for tracking maritime domain targets from full motion video |
CN105046220A (en) * | 2015-07-10 | 2015-11-11 | 华为技术有限公司 | Multi-target tracking method, apparatus and equipment |
US20160227106A1 (en) * | 2015-01-30 | 2016-08-04 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method, and image processing system |
CN108108697A (en) * | 2017-12-25 | 2018-06-01 | 中国电子科技集团公司第五十四研究所 | A kind of real-time UAV Video object detecting and tracking method |
CN108460787A (en) * | 2018-03-06 | 2018-08-28 | 北京市商汤科技开发有限公司 | Method for tracking target and device, electronic equipment, program, storage medium |
Cited By (44)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110648490A (en) * | 2019-09-26 | 2020-01-03 | 华南师范大学 | Multi-factor flame identification method suitable for embedded platform |
CN110648490B (en) * | 2019-09-26 | 2021-07-27 | 华南师范大学 | Multi-factor flame identification method suitable for embedded platform |
CN111091123A (en) * | 2019-12-02 | 2020-05-01 | 上海眼控科技股份有限公司 | Text region detection method and equipment |
CN111063083A (en) * | 2019-12-16 | 2020-04-24 | 腾讯科技(深圳)有限公司 | Access control method and device, computer readable storage medium and computer equipment |
CN111008631A (en) * | 2019-12-20 | 2020-04-14 | 浙江大华技术股份有限公司 | Image association method and device, storage medium and electronic device |
CN111008631B (en) * | 2019-12-20 | 2023-06-16 | 浙江大华技术股份有限公司 | Image association method and device, storage medium and electronic device |
CN111161320A (en) * | 2019-12-30 | 2020-05-15 | 浙江大华技术股份有限公司 | Target tracking method, target tracking device and computer readable medium |
CN111310595B (en) * | 2020-01-20 | 2023-08-25 | 北京百度网讯科技有限公司 | Method and device for generating information |
CN111310595A (en) * | 2020-01-20 | 2020-06-19 | 北京百度网讯科技有限公司 | Method and apparatus for generating information |
CN111428607A (en) * | 2020-03-19 | 2020-07-17 | 浙江大华技术股份有限公司 | Tracking method and device and computer equipment |
CN111428607B (en) * | 2020-03-19 | 2023-04-28 | 浙江大华技术股份有限公司 | Tracking method and device and computer equipment |
CN111445501A (en) * | 2020-03-25 | 2020-07-24 | 苏州科达科技股份有限公司 | Multi-target tracking method, device and storage medium |
CN111680587A (en) * | 2020-05-26 | 2020-09-18 | 河海大学常州校区 | Multi-target tracking-based chicken flock activity real-time estimation method and system |
CN111709975A (en) * | 2020-06-22 | 2020-09-25 | 上海高德威智能交通系统有限公司 | Multi-target tracking method and device, electronic equipment and storage medium |
CN111709975B (en) * | 2020-06-22 | 2023-11-03 | 上海高德威智能交通系统有限公司 | Multi-target tracking method, device, electronic equipment and storage medium |
WO2021259055A1 (en) * | 2020-06-22 | 2021-12-30 | 苏宁易购集团股份有限公司 | Human body tracking method and device based on rgb-d image |
CN111814612A (en) * | 2020-06-24 | 2020-10-23 | 浙江大华技术股份有限公司 | Target face detection method and related device thereof |
CN111783618A (en) * | 2020-06-29 | 2020-10-16 | 联通(浙江)产业互联网有限公司 | Garden brain sensing method and system based on video content analysis |
CN114092845A (en) * | 2020-07-03 | 2022-02-25 | 顺丰科技有限公司 | Target detection method, device and system and computer readable storage medium |
CN112037253A (en) * | 2020-08-07 | 2020-12-04 | 浙江大华技术股份有限公司 | Target tracking method and device thereof |
WO2022127180A1 (en) * | 2020-12-17 | 2022-06-23 | 深圳云天励飞技术股份有限公司 | Target tracking method and apparatus, and electronic device and storage medium |
CN113284168B (en) * | 2020-12-17 | 2024-08-27 | 深圳云天励飞技术股份有限公司 | Target tracking method, device, electronic equipment and storage medium |
CN113284168A (en) * | 2020-12-17 | 2021-08-20 | 深圳云天励飞技术股份有限公司 | Target tracking method and device, electronic equipment and storage medium |
CN112488057A (en) * | 2020-12-17 | 2021-03-12 | 北京航空航天大学 | Single-camera multi-target tracking method utilizing human head point positioning and joint point information |
CN112529942B (en) * | 2020-12-22 | 2024-04-02 | 深圳云天励飞技术股份有限公司 | Multi-target tracking method, device, computer equipment and storage medium |
CN112529942A (en) * | 2020-12-22 | 2021-03-19 | 深圳云天励飞技术股份有限公司 | Multi-target tracking method and device, computer equipment and storage medium |
WO2022135027A1 (en) * | 2020-12-22 | 2022-06-30 | 深圳云天励飞技术股份有限公司 | Multi-object tracking method and apparatus, computer device, and storage medium |
CN112668487B (en) * | 2020-12-29 | 2022-05-27 | 杭州晨安科技股份有限公司 | Teacher tracking method based on fusion of body fitness and human similarity |
CN112668487A (en) * | 2020-12-29 | 2021-04-16 | 杭州晨安科技股份有限公司 | Teacher tracking method based on fusion of body fitness and human similarity |
CN113544701A (en) * | 2020-12-29 | 2021-10-22 | 商汤国际私人有限公司 | Method and device for detecting associated object |
CN112883819A (en) * | 2021-01-26 | 2021-06-01 | 恒睿(重庆)人工智能技术研究院有限公司 | Multi-target tracking method, device, system and computer readable storage medium |
CN112883819B (en) * | 2021-01-26 | 2023-12-08 | 恒睿(重庆)人工智能技术研究院有限公司 | Multi-target tracking method, device, system and computer readable storage medium |
CN113393419A (en) * | 2021-04-29 | 2021-09-14 | 北京迈格威科技有限公司 | Video processing method and device and electronic system |
CN113223051A (en) * | 2021-05-12 | 2021-08-06 | 北京百度网讯科技有限公司 | Trajectory optimization method, apparatus, device, storage medium, and program product |
CN113192048A (en) * | 2021-05-17 | 2021-07-30 | 广州市勤思网络科技有限公司 | Multi-mode fused people number identification and statistics method |
CN113822211A (en) * | 2021-09-27 | 2021-12-21 | 山东睿思奥图智能科技有限公司 | Interactive person information acquisition method |
CN114549578A (en) * | 2021-11-05 | 2022-05-27 | 北京小米移动软件有限公司 | Target tracking method, device and storage medium |
CN114217626A (en) * | 2021-12-14 | 2022-03-22 | 集展通航(北京)科技有限公司 | Railway engineering detection method and system based on unmanned aerial vehicle inspection video |
CN114120160B (en) * | 2022-01-25 | 2022-04-29 | 成都合能创越软件有限公司 | Object space distinguishing method and device based on fast-RCNN, computer equipment and storage medium |
CN114120160A (en) * | 2022-01-25 | 2022-03-01 | 成都合能创越软件有限公司 | Object space distinguishing method and device based on fast-RCNN, computer equipment and storage medium |
CN116935074A (en) * | 2023-07-25 | 2023-10-24 | 苏州驾驶宝智能科技有限公司 | Multi-target tracking method and device based on adaptive association of depth affinity network |
CN116935074B (en) * | 2023-07-25 | 2024-03-26 | 苏州驾驶宝智能科技有限公司 | Multi-target tracking method and device based on adaptive association of depth affinity network |
CN116824549A (en) * | 2023-08-29 | 2023-09-29 | 所托(山东)大数据服务有限责任公司 | Target detection method and device based on multi-detection network fusion and vehicle |
CN116824549B (en) * | 2023-08-29 | 2023-12-08 | 所托(山东)大数据服务有限责任公司 | Target detection method and device based on multi-detection network fusion and vehicle |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110163889A (en) | Method for tracking target, target tracker, target following equipment | |
CN110188720A (en) | A kind of object detection method and system based on convolutional neural networks | |
CN108520229A (en) | Image detecting method, device, electronic equipment and computer-readable medium | |
CN108460362B (en) | System and method for detecting human body part | |
CN109271888A (en) | Personal identification method, device, electronic equipment based on gait | |
CN105760836A (en) | Multi-angle face alignment method based on deep learning and system thereof and photographing terminal | |
Deepa et al. | Comparison of yolo, ssd, faster rcnn for real time tennis ball tracking for action decision networks | |
CN109344899A (en) | Multi-target detection method, device and electronic equipment | |
CN108986138A (en) | Method for tracking target and equipment | |
US20110142282A1 (en) | Visual object tracking with scale and orientation adaptation | |
TWI581207B (en) | Computing method for ridesharing path, computing apparatus and recording medium using the same | |
CN108280843A (en) | A kind of video object detecting and tracking method and apparatus | |
CN109711241A (en) | Object detecting method, device and electronic equipment | |
CN106295598A (en) | A kind of across photographic head method for tracking target and device | |
CN110648363A (en) | Camera posture determining method and device, storage medium and electronic equipment | |
CN109583505A (en) | A kind of object correlating method, device, equipment and the medium of multisensor | |
CN109087337B (en) | Long-time target tracking method and system based on hierarchical convolution characteristics | |
JP2019212291A (en) | Indoor positioning system and method based on geomagnetic signals in combination with computer vision | |
CN106326853A (en) | Human face tracking method and device | |
CN109154938A (en) | Using discrete non-trace location data by the entity classification in digitized map | |
KR20170036747A (en) | Method for tracking keypoints in a scene | |
CN105513083A (en) | PTAM camera tracking method and device | |
CN111027555B (en) | License plate recognition method and device and electronic equipment | |
CN110046212A (en) | Road condition change information determines method, apparatus, computer equipment and storage medium | |
Cancela et al. | Unsupervised trajectory modelling using temporal information via minimal paths |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| WD01 | Invention patent application deemed withdrawn after publication | ||
Application publication date: 20190823 |