CN103699908A - Joint reasoning-based video multi-target tracking method - Google Patents
- Publication number
- CN103699908A (application CN201410016404.XA); granted as CN103699908B
- Authority
- CN
- China
- Prior art keywords
- target
- tracking
- image
- frame
- image block
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Image Analysis (AREA)
Abstract
The invention discloses a joint reasoning-based video multi-target tracking method in the technical field of video processing. The method comprises the following steps: first, a frame of a video file is read and rasterized into image blocks; an online detector and a KLT tracking algorithm serving as the tracker then mark candidate positions of the targets, and their results are screened separately and merged; next, the resulting candidate positions are given quantified scores; finally, a joint function describes the tracking situation of all targets, and the optimal solution of the joint function is taken as each target's position in the frame, thereby achieving target tracking. The method handles both the combination of detection and tracking algorithms and the treatment of inter-target relationships in multi-target tracking: by describing the relationships among targets with a joint function, it not only fuses detection and tracking results but also integrates the relations between targets from a global perspective, yielding a globally optimal solution.
Description
Technical field
The present invention relates to a method in the technical field of video processing, specifically a joint reasoning-based video multi-target tracking method.
Background art
With the development and popularization of camera devices, video tracking plays an increasingly important role in production and daily life. In video surveillance systems in particular, tracking algorithms can effectively reduce labor costs and save time. However, the imperfection of tracking techniques themselves and the complex, changeable tracking environment affect the accuracy of tracking results and restrict the application of tracking algorithms.
Target tracking in video is a very challenging problem because the tracking process involves many sources of uncertainty. For example, a complex background that resembles the tracked target can mislead the algorithm's judgement of the target's position; occlusion of the target and rapid changes of its shape markedly alter the target's appearance in the image and can cause the algorithm to lose the tracked target. For multi-target tracking, in addition to the above, the similarity among targets, their interactions, and their mutual occlusions all make correct tracking more difficult.
A common way to handle these problems is to combine a detection algorithm with the tracking algorithm, that is, to use detection results to improve the final tracking. However, when the correctness of the detection algorithm itself is hard to guarantee, whether this combination actually improves the tracking rate remains an open question.
A search of the prior art shows that many scholars use adaptive learning to maintain the correctness of the detector and so reconcile detection results with tracking results. Zdenek Kalal et al., "Tracking-Learning-Detection", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, pp. 1409-1422, 2012, uses the Tracking-Learning-Detection (TLD) framework, which handles the tracking problem by combining tracking and detection. Specifically: an input frame is first rasterized into a large number of image blocks; the tracker then filters out the blocks that satisfy its requirements, while the detector marks all blocks that may contain the target by appearance; finally, by linking tracking and detection, the detector's result can reinitialize the tracker when tracking fails, while the tracking results expand the training sample set used to train the online detector, improving its precision. The experimental results published for that technique show that TLD performs well for long-term tracking. The method nevertheless has notable limitations: 1) it only applies to single-target tracking; 2) if the target's appearance changes substantially or the target is fully occluded, the method performs poorly, because in such cases the online detector cannot provide correct candidate positions of the target.
Chinese patent application CN103176185A, published 2013.06.26, discloses a method and system for road obstacle detection: a first obstacle-detection model based on a camera, a second based on a camera and millimetre-wave radar, and a third based on a three-dimensional laser radar and an infrared camera are combined into complementary detection by a rough-set-based fuzzy neural network, so as to obtain the characteristics of road obstacles in real time. However, that technique has a high equipment cost and cannot track the obstacles effectively, i.e., it cannot use historical information to refine the detection results.
Summary of the invention
In view of the above shortcomings of the prior art, the present invention proposes a joint reasoning-based video multi-target tracking method. It addresses both the combination of detection and tracking algorithms and the handling of inter-target relationships in multi-target tracking: a joint function describes the relationships among the targets, so the method not only solves the fusion of detection and tracking results but also considers the scene globally, combining the relations between targets to obtain a globally optimal solution.
The present invention is achieved by the following technical solution. First, a frame of the video file is read and rasterized; an online detector and a KLT tracking algorithm serving as the tracker then mark candidate positions of the targets, which are screened separately and merged. Next, the resulting candidate positions are given quantified scores. Finally, a joint function describes the tracking situation, and the optimal solution of the joint function is taken as each target's position in this frame, achieving target tracking.
When the image is the first frame of the video file, an initialization operation is performed before rasterization: the user manually inputs the number of targets to be tracked and then manually draws a bounding box around each target.
The initialization operation initializes the detector and the tracker: the detector is updated by online random-fern learning, and the KLT-based tracker simultaneously selects feature points within each target region for tracking in the next frame.
The image rasterization scans the whole frame with sliding windows of different sizes to obtain image blocks of different positions and sizes, which serve as candidate targets. Specifically: a series of sliding windows of different scales is first derived geometrically from the size of the initialized target, with scale factors ranging from 1.2^-10 to 1.2^10; each sliding window then traverses the entire image from left to right and top to bottom, with a displacement of 0.1 of the window size. In this way many image blocks of different positions and sizes are obtained.
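As a rough illustration, the multi-scale sliding-window enumeration described above can be sketched in Python; the function name, rounding, and bounds handling are choices of this sketch rather than details from the patent:

```python
def sliding_windows(img_w, img_h, init_w, init_h,
                    scale=1.2, scale_range=10, step_frac=0.1):
    """Enumerate multi-scale sliding-window boxes (x, y, w, h).

    Scales run over scale**k for k in [-scale_range, scale_range],
    derived geometrically from the initial target size, and each
    window moves by 10% of its own size, as the text describes.
    """
    boxes = []
    for k in range(-scale_range, scale_range + 1):
        w = int(round(init_w * scale ** k))
        h = int(round(init_h * scale ** k))
        if w < 1 or h < 1 or w > img_w or h > img_h:
            continue  # skip scales that do not fit inside the frame
        dx = max(1, int(round(step_frac * w)))
        dy = max(1, int(round(step_frac * h)))
        for y in range(0, img_h - h + 1, dy):
            for x in range(0, img_w - w + 1, dx):
                boxes.append((x, y, w, h))
    return boxes
```

For a 100x100 frame and a 20x20 initial target this already yields several thousand candidate blocks, which is why the variance filter below is applied before any classifier.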
The candidate positions are obtained in the following manner:
First, the grey-level variance of each image block is computed, and blocks whose variance is too small are excluded; the template image of the tracked target is obtained at first-frame initialization. The variance of a block is computed as var(p_i) = E(p_i^2) - (E(p_i))^2, where p_i is the grey-level image of the i-th image block and E(.) is the averaging function. The i-th image block is retained when var(p_i) > l * var(p_template), where l is a preset parameter and var(p_template) is the variance of the template image.
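A minimal sketch of this variance filter, assuming greyscale patches stored as NumPy arrays; the value l = 0.5 is an illustrative guess, since the patent leaves l as a preset parameter:

```python
import numpy as np

def patch_variance(patch):
    """Grey-level variance via var(p) = E[p^2] - (E[p])^2."""
    p = patch.astype(np.float64)
    return (p ** 2).mean() - p.mean() ** 2

def keep_by_variance(patches, template, l=0.5):
    """Keep the i-th patch only if var(p_i) > l * var(template).

    `l` is the preset parameter from the text; 0.5 is a guessed value.
    """
    thresh = l * patch_variance(template)
    return [p for p in patches if patch_variance(p) > thresh]
```

Flat, low-information blocks (sky, walls) have near-zero variance and are discarded before the more expensive fern classifier runs.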
Then, binary features are extracted from each input block, and the classifier obtained by online random-fern training estimates the similarity between each block that passed the variance test and the tracked target. The similarity is P(c_1|x) = (1/m) * sum_{i=1..m} P_i(c_1|x), where c_1 denotes the training class of samples similar to the tracked target (training uses only two classes: c_1 for similar, c_0 for dissimilar) and P_i(c_1|x) is the posterior probability given by the i-th fern. Finally, the posteriors of all ferns are averaged to obtain the final posterior value; when the similarity P(c_1|x) > 50%, the input block is judged similar to the tracked target and retained.
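The averaging of per-fern posteriors can be sketched as below; the table-of-counts representation of each fern and the 0.5 value for unseen feature codes are assumptions of this sketch, as the patent only specifies the average and the 50% threshold:

```python
def fern_posterior(fern_tables, fern_codes, c1=1):
    """Average the per-fern posteriors P_i(c1|x) to get P(c1|x).

    fern_tables[i][code] holds (pos_count, neg_count) learned online;
    each fern's posterior is pos / (pos + neg), 0.5 when unseen.
    """
    posts = []
    for table, code in zip(fern_tables, fern_codes):
        pos, neg = table.get(code, (0, 0))
        posts.append(0.5 if pos + neg == 0 else pos / (pos + neg))
    return sum(posts) / len(posts)

def is_similar(fern_tables, fern_codes, threshold=0.5):
    """Retain the patch when the averaged posterior exceeds 50%."""
    return fern_posterior(fern_tables, fern_codes) > threshold
```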
The marking of candidates by the tracker works as follows: at the first frame of the image the KLT algorithm is only initialized, without tracking; for each subsequent frame, the KLT algorithm picks feature points of the tracked target from its position in the previous frame and finds the corresponding feature regions in the current frame. The tracker then decides whether to retain each image block according to the number of feature points inside it: if the number of feature points in a block exceeds an empirical threshold, the block is judged a candidate and retained.
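This feature-point-count gating can be sketched as follows, assuming the KLT step has already produced a list of tracked (x, y) points; the name min_points stands in for the empirical threshold mentioned in the text:

```python
def keep_by_point_count(boxes, tracked_points, min_points=3):
    """Keep candidate boxes containing enough KLT-tracked points.

    `tracked_points` are (x, y) feature points carried over from the
    previous frame; `min_points` stands in for the empirical threshold.
    """
    kept = []
    for (bx, by, bw, bh) in boxes:
        n = sum(1 for (px, py) in tracked_points
                if bx <= px < bx + bw and by <= py < by + bh)
        if n >= min_points:
            kept.append((bx, by, bw, bh))
    return kept
```

In practice the tracked points would come from an optical-flow routine such as OpenCV's `calcOpticalFlowPyrLK`; the gating itself is independent of how the points were obtained.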
The quantified scoring extracts the Haar features of each candidate image block and evaluates the authenticity of the candidate position with a cascade AdaBoost classifier; the quantified score of a detected rectangle is given by the layer of the cascade AdaBoost classifier at which the detected target is stopped. At layer L the classifier evaluates F_L = sum_{i in s(L)} w_{i,L} * h_i, where h_i is a weak classifier of the cascade, s(L) is the set of weak classifiers of layer L, and the weak-classifier weights w_{i,L} are learned by the AdaBoost learning method; when F_L exceeds an empirical threshold the candidate passes that layer, otherwise it cannot pass.
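One plausible reading of this scoring, sketched in Python: each layer sums weighted weak-classifier outputs, and the quantified score is the depth the candidate reaches in the cascade (deeper means more target-like). The layer-depth score is an interpretation consistent with the surrounding description, not the patent's verbatim formula:

```python
def cascade_score(weak_outputs, weights, thresholds):
    """Score a candidate box by how deep it gets into an AdaBoost cascade.

    weak_outputs[L][i] is weak classifier i's output h_i(x) at layer L,
    weights[L][i] the learned weight w_{i,L}; a box passes layer L when
    F_L = sum_i w_{i,L} * h_i(x) exceeds thresholds[L].  The score is
    the index of the layer where the box is stopped.
    """
    for L, (outs, ws, th) in enumerate(zip(weak_outputs, weights, thresholds)):
        F = sum(w * h for w, h in zip(ws, outs))
        if F <= th:
            return L  # stopped at layer L
    return len(weak_outputs)  # passed every layer
```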
The joint function is composed of the spatial position relations between the targets and the scores of each target's candidate positions; by building this joint-function model and finding the optimal solution of the function, the best candidate position is obtained as the tracking result.
The spatial position relation rests on the observation that, in the multi-target case, the positional relationships among targets change little between consecutive frames, so the mutual positions of the targets in the previous frame can serve as a reference to improve the tracking result of this frame. On this principle, the positional-relation probability between target i and target N at time t is described by a pairwise term.
The joint-function model is the joint probability of all targets, combining for each target a state-space confidence term with weighted pairwise terms, where γ is a weight parameter, one function expresses the state-space confidence of target i at time t, and the other function describes the spatial position information between target i and target N at time t, including the change of distance and the change of angle.
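The two kinds of terms in the joint function can be sketched as follows. The exponential pairwise form and the log-sum combination are assumptions consistent with the description (the patent's exact functional forms are not reproduced on this page); β1, β2, and γ take the values used in the application examples:

```python
import math

def spatial_score(pi_prev, pj_prev, pi_cur, pj_cur, beta1=0.1, beta2=2.0):
    """Pairwise term: penalise changes in inter-target distance and angle.

    beta1/beta2 weight the distance and angle changes; the exponential
    form is an assumption consistent with the probabilistic wording.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    def angle(a, b):
        return math.atan2(b[1] - a[1], b[0] - a[0])
    dd = abs(dist(pi_cur, pj_cur) - dist(pi_prev, pj_prev))
    da = abs(angle(pi_cur, pj_cur) - angle(pi_prev, pj_prev))
    return math.exp(-(beta1 * dd + beta2 * da))

def joint_score(confidences, prev_pos, cur_pos, gamma=60.0):
    """Joint function: per-target confidences plus weighted pairwise terms."""
    total = sum(math.log(c) for c in confidences)
    n = len(cur_pos)
    for i in range(n):
        for j in range(i + 1, n):
            total += gamma * math.log(
                spatial_score(prev_pos[i], prev_pos[j], cur_pos[i], cur_pos[j]))
    return total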
The optimal solution is found with the standard belief-propagation algorithm, which solves the multi-target tracking problem efficiently and describes the relation between the targets with a tree structure. The tracked targets are nodes of a Markov random field; one target (in this method the last target, N) is taken as the root node, and all the other targets are child nodes of the tree that pass messages toward the root. A message-passing function from each child node to the root node is constructed; every leaf node passes its information to the root node, so the belief of the root node, i.e. of tracked target N, can be expressed as its own confidence multiplied by all incoming messages. The configuration at which this belief attains its maximum is the tracking result of this frame's multi-target tracking.
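Max-product belief propagation on the star-shaped tree described above can be sketched as follows, with a discrete candidate set per target; the data layout and function names are choices of this sketch:

```python
from math import prod

def star_bp(unaries, pairwise, root):
    """Max-product belief propagation on a star-structured MRF.

    unaries[t][k]         - confidence of candidate k for target t
    pairwise(t, k_t, k_r) - spatial compatibility between candidate k_t
                            of leaf t and candidate k_r of the root
    Returns the best candidate index per target.
    """
    leaves = [t for t in unaries if t != root]
    # each leaf sends max_{k_t} unary * pairwise to every root candidate
    msgs = {t: [max(unaries[t][kt] * pairwise(t, kt, kr)
                    for kt in range(len(unaries[t])))
                for kr in range(len(unaries[root]))]
            for t in leaves}
    # root belief = own unary times all incoming messages; pick the argmax
    beliefs = [unaries[root][kr] * prod(msgs[t][kr] for t in leaves)
               for kr in range(len(unaries[root]))]
    best_root = max(range(len(beliefs)), key=beliefs.__getitem__)
    # back-track each leaf's best candidate given the chosen root state
    result = {root: best_root}
    for t in leaves:
        result[t] = max(range(len(unaries[t])),
                        key=lambda kt: unaries[t][kt] * pairwise(t, kt, best_root))
    return result
```

Because the graph is a tree, one upward pass plus a back-tracking pass is exact; no iterative message scheduling is needed.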
Technical effect
Although an online detector can detect candidate positions of the target rapidly, it produces more erroneous results and has a higher false-alarm rate. Moreover, an online detector is only sensitive to targets whose appearance changes slowly; with few training samples it is unstable for rotating targets, non-rigid targets, and the like. Compared with an online detector, offline detection is more accurate but slower. To balance the effects of the two kinds of detectors, the present invention uses online detection to mark candidate target positions and offline detection to score those candidates, taking advantage of both techniques.
The present invention not only balances offline and online detection, giving the detector better robustness and better detection results used to improve the final tracking effect; it also proposes describing the tracking situation with a joint function, thereby integrating the structural information among multiple targets while effectively fusing the results of the detector and the tracker. Compared with general tracking methods, the invention solves the tracking problem from a global perspective, better handles problems such as occlusion and appearance change in multi-target tracking, and has wide application prospects in daily life and production.
Brief description of the drawings
Fig. 1 is a schematic flowchart of an embodiment of the present invention.
Fig. 2 is a schematic diagram of application example 1 of the present invention.
Fig. 3 is a schematic diagram of application example 2 of the present invention.
Embodiment
An embodiment of the invention is described in detail below. The embodiment is implemented on the premise of the technical solution of the present invention, and detailed implementation and concrete operating steps are given, but the protection scope of the invention is not limited to the following embodiment.
Embodiment 1
The implementation environment of this embodiment is as follows: video is captured indoors or outdoors by a movable or fixed camera, and the present invention performs the tracking. Concretely: first, a fixed or movable camera is set up at the place to be monitored to collect data; then the collected video is input to a computer and analysed with the present invention, which at the first frame of the video stream asks the user to input the number of targets to be tracked and to mark each tracked target manually with a rectangle; finally, the invention automatically and accurately tracks and marks the selected targets in the video stream.
As shown in Fig. 1, this embodiment comprises the following steps: first, a frame of the input video is read, and initialization is performed if it is the first frame; the frame is then rasterized and input to the online detector and the tracker, which mark candidate target positions; next, the candidate positions are input to the scorer, where an offline classifier gives each candidate a quantified score; finally, the quantified candidate information and the spatial-structure information of the targets are input to the joint function, and the optimal solution yields the output tracking result of each target.
As shown in Fig. 1, the user first inputs the video to be tracked; a frame is read, and at the first frame initialization is performed: the user selects the targets to be tracked, which may be one or several. Once the initial targets have been selected, the invention tracks them automatically.
Next, the frame is rasterized into many image blocks of different sizes, which serve as preliminary candidate positions and are input separately to the online detector and the tracker. The online detector first excludes blocks with too small a grey-level variance, i.e. blocks carrying little information, then extracts the features of the remaining blocks; the classifier obtained by online training retains the blocks that meet the requirement as candidate target positions. In the tracker, key points are first chosen from the previous frame's target result and the corresponding key points are found in this frame; according to an empirical threshold fixed in experiments, the number of key points tracked into each block decides which blocks are retained as candidate positions.
After the candidate positions have been obtained, the program gives each candidate block a quantified score with the offline classifier, then, combined with the spatial-relation model between the targets, inputs this to the joint function, which describes the tracking situation of all targets. The program outputs the optimal solution of the joint function as the final tracking result. After the tracking result of the current frame is obtained, the program also trains the online detector to guarantee the stability of the next frame's detection.
When every frame of the video has been processed, the embodiment ends.
Fig. 2 shows an application example of this embodiment. In this example the parameters mentioned in the formulas above are set to λ=10, γ=60, β1=0.1, β2=2. In the scene, four targets walk simultaneously from the left of the shot to the right; the targets are very close to each other and occlude one another, and the profiles of the four tracked targets are rather similar, which makes tracking difficult. Because this method uses the structural inference function and considers all targets' tracking results from a global perspective to find the optimal solution, the influence of mutual occlusion between targets is reduced: in frame #187 the green target is not affected by the similar objects around it during tracking, and all four targets are tracked accurately at the same time. Frame #220 clearly shows that the method performs well in this scene.
Fig. 3 shows another application example of the present invention, with the same parameter settings: λ=10, γ=60, β1=0.1, β2=2. To give a direct sense of the tracking model, a structural object is tracked and tested. The test video shown in Fig. 3 was recorded by the inventors; the tracked targets are the eyes, nose, and mouth corners of a human face. A face can be viewed as a structural object composed of five fixed parts: the nose at the centre, with two eyes and two mouth corners around it. The proposed algorithm handles the occlusion problem well here: even with momentary occlusion, the tracking model does not lose the targets. This is because the structural-reasoning joint probability function exploits the mutual relationships among the targets rather than considering the probability of a single target only; even when the tracker and detector lose a target, its position can still be inferred from the mutual spatial relations among the targets.
Claims (10)
1. A joint reasoning-based video multi-target tracking method, characterized in that: a frame of the video file is first read and rasterized; an online detector and a KLT tracking algorithm serving as the tracker then mark candidate positions of the targets, which are screened separately and merged; next, the resulting candidate positions are given quantified scores; finally, a joint function describes the tracking situation, and the optimal solution of the joint function is taken as each target's position in this frame, achieving target tracking.
2. The method according to claim 1, characterized in that an initialization operation initializes the detector and the tracker: the detector is updated by online random-fern learning, and the KLT-based tracker simultaneously selects feature points within each target region for tracking in the next frame.
3. The method according to claim 1, characterized in that the image rasterization scans the whole frame with sliding windows of different sizes to obtain image blocks of different positions and sizes, which serve as candidate targets; specifically, a series of sliding windows of different scales is derived geometrically from the size of the initialized target, with scale factors ranging from 1.2^-10 to 1.2^10, and each window traverses the entire image from left to right and top to bottom with a displacement of 0.1 of the window size, yielding many image blocks of different positions and sizes.
4. The method according to claim 1, characterized in that the candidate positions are obtained in the following manner:
First, the grey-level variance of each image block is computed and blocks with too small a variance are excluded; the template image of the tracked target is obtained at first-frame initialization; the variance of a block is var(p_i) = E(p_i^2) - (E(p_i))^2, where p_i is the grey-level image of the i-th image block and E(.) is the averaging function; the i-th image block is retained when var(p_i) > l * var(p_template), where l is a preset parameter and var(p_template) is the variance of the template image;
Then, binary features are extracted from each input block, and the classifier obtained by online random-fern training estimates the similarity between each block that passed the variance test and the tracked target, with P(c_1|x) = (1/m) * sum_{i=1..m} P_i(c_1|x), where c_1 denotes the training class similar to the tracked target (training uses only two classes, c_1 similar and c_0 dissimilar) and P_i(c_1|x) is the posterior probability given by the i-th fern;
Finally, the posteriors of all ferns are averaged to obtain the final posterior value; when the similarity P(c_1|x) > 50%, the input block is judged similar to the tracked target and retained.
5. The method according to claim 1, characterized in that the marking of candidates by the tracker works as follows: at the first frame of the image the KLT algorithm is only initialized, without tracking; for each subsequent frame, the KLT algorithm picks feature points of the tracked target from its position in the previous frame and finds the corresponding feature regions in the current frame; the tracker then decides whether to retain each image block according to the number of feature points inside it, and if the number of feature points in a block exceeds an empirical threshold, the block is judged a candidate and retained.
6. The method according to claim 1, characterized in that the quantified scoring extracts the Haar features of each candidate image block and evaluates the authenticity of the candidate position with a cascade AdaBoost classifier; the quantified score of a detected rectangle is given by the layer of the cascade AdaBoost classifier at which the detected target is stopped; at layer L the classifier evaluates F_L = sum_{i in s(L)} w_{i,L} * h_i, where h_i is a weak classifier of the cascade, s(L) is the set of weak classifiers of layer L, and the weak-classifier weights w_{i,L} are learned by the AdaBoost learning method; when F_L exceeds an empirical threshold the candidate passes that layer, otherwise it cannot pass.
7. The method according to claim 1, characterized in that the joint function is composed of the spatial position relations between the targets and the scores of each target's candidate positions; by building this joint-function model and finding the optimal solution of the function, the best candidate position is obtained as the tracking result.
8. The method according to claim 7, characterized in that the spatial position relation means that, in the multi-target case, the mutual positions of the targets in the previous frame serve as a reference to improve the tracking result of this frame, the positional-relation probability between target i and target N at time t being described by a pairwise term.
9. The method according to claim 7, characterized in that the joint-function model is the joint probability of all targets, combining for each target a state-space confidence term with weighted pairwise terms, wherein ε and γ are weight parameters, one function expresses the state-space confidence of target i at time t, and the other function describes the spatial position information between target i and target N at time t, including the change of distance and the change of angle.
10. The method according to claim 7, characterized in that the optimal solution is found with the standard belief-propagation algorithm, which solves the multi-target tracking problem efficiently and describes the relation between the targets with a tree structure; the tracked targets are nodes of a Markov random field, one target (the last target, N) being taken as the root node and all the other targets as child nodes of the tree that pass messages toward the root; a message-passing function from each child node to the root node is constructed, every leaf node passes its information to the root node, and the belief of the root node, i.e. of tracked target N, is expressed as its own confidence multiplied by all incoming messages; the configuration at which this belief attains its maximum is the tracking result of this frame's multi-target tracking.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410016404.XA CN103699908B (en) | 2014-01-14 | 2014-01-14 | Video multi-target tracking based on associating reasoning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410016404.XA CN103699908B (en) | 2014-01-14 | 2014-01-14 | Video multi-target tracking based on associating reasoning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103699908A true CN103699908A (en) | 2014-04-02 |
CN103699908B CN103699908B (en) | 2016-10-05 |
Family
ID=50361430
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410016404.XA Expired - Fee Related CN103699908B (en) | 2014-01-14 | 2014-01-14 | Video multi-target tracking based on associating reasoning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103699908B (en) |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104156734A (en) * | 2014-08-19 | 2014-11-19 | 中国地质大学(武汉) | Fully-autonomous on-line study method based on random fern classifier |
CN104268536A (en) * | 2014-10-11 | 2015-01-07 | 烽火通信科技股份有限公司 | Face detection method through images |
CN105701840A (en) * | 2015-12-31 | 2016-06-22 | 上海极链网络科技有限公司 | System for real-time tracking of multiple objects in video and implementation method |
CN105787498A (en) * | 2014-12-25 | 2016-07-20 | 财团法人车辆研究测试中心 | Pedestrian detection system |
CN106534967A (en) * | 2016-10-25 | 2017-03-22 | 司马大大(北京)智能系统有限公司 | Video editing method and device |
CN106537420A (en) * | 2014-07-30 | 2017-03-22 | 三菱电机株式会社 | Method for transforming input signals |
CN106934332A (en) * | 2015-12-31 | 2017-07-07 | 中国科学院深圳先进技术研究院 | A kind of method of multiple target tracking |
CN107292908A (en) * | 2016-04-02 | 2017-10-24 | 上海大学 | Pedestrian tracting method based on KLT feature point tracking algorithms |
CN107578368A (en) * | 2017-08-31 | 2018-01-12 | 成都观界创宇科技有限公司 | Multi-object tracking method and panorama camera applied to panoramic video |
CN108447079A (en) * | 2018-03-12 | 2018-08-24 | 中国计量大学 | A kind of method for tracking target based on TLD algorithm frames |
CN109858402A (en) * | 2019-01-16 | 2019-06-07 | 腾讯科技(深圳)有限公司 | A kind of image detecting method, device, terminal and storage medium |
CN110411447A (en) * | 2019-06-04 | 2019-11-05 | 恒大智慧科技有限公司 | Personnel positioning method, platform, server and storage medium |
CN111553934A (en) * | 2020-04-24 | 2020-08-18 | 哈尔滨工程大学 | Multi-ship tracking method adopting multi-dimensional fusion |
CN113570637A (en) * | 2021-08-10 | 2021-10-29 | 中山大学 | Multi-target tracking method, device, equipment and storage medium |
CN114119674A (en) * | 2022-01-28 | 2022-03-01 | 深圳佑驾创新科技有限公司 | Static target tracking method and device and storage medium |
CN114937060A (en) * | 2022-04-26 | 2022-08-23 | 南京北斗创新应用科技研究院有限公司 | Monocular pedestrian indoor positioning prediction method guided by map meaning |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102385690B (en) * | 2010-09-01 | 2014-01-15 | 汉王科技股份有限公司 | Target tracking method and system based on video image |
CN102831618B (en) * | 2012-07-20 | 2014-11-12 | 西安电子科技大学 | Hough forest-based video target tracking method |
Non-Patent Citations (6)
Title |
---|
CHENYUAN ZHANG et al.: "A KLT-Based Approach for Occlusion Handling in Human Tracking", Picture Coding Symposium |
Sun Chen: "Research on tracking algorithms based on semi-supervised online learning", China Master's Theses Full-text Database (Information Science and Technology) |
Sun Delu: "Research on real-time tracking algorithms for moving targets in video images based on random ferns", China Master's Theses Full-text Database (Information Science and Technology) |
Wang Shouchao et al.: "Target tracking algorithm based on online learning and structural constraints", Computer Engineering |
Qian Zhiming et al.: "Progress in video-based vehicle detection and tracking", Journal of Central South University (Science and Technology) |
Huang Li et al.: "Video tracking algorithm based on online Boosting and LK optical flow", Journal of Southwest University of Science and Technology |
Cited By (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106537420A (en) * | 2014-07-30 | 2017-03-22 | 三菱电机株式会社 | Method for transforming input signals |
CN106537420B (en) * | 2014-07-30 | 2019-06-11 | 三菱电机株式会社 | Method for converted input signal |
CN104156734B (en) * | 2014-08-19 | 2017-06-13 | 中国地质大学(武汉) | A kind of complete autonomous on-line study method based on random fern grader |
CN104156734A (en) * | 2014-08-19 | 2014-11-19 | 中国地质大学(武汉) | Fully-autonomous on-line study method based on random fern classifier |
CN104268536A (en) * | 2014-10-11 | 2015-01-07 | 烽火通信科技股份有限公司 | Face detection method through images |
CN104268536B (en) * | 2014-10-11 | 2017-07-18 | 南京烽火软件科技有限公司 | A kind of image method for detecting human face |
CN105787498B (en) * | 2014-12-25 | 2019-05-10 | 财团法人车辆研究测试中心 | Pedestrian's detecting system |
CN105787498A (en) * | 2014-12-25 | 2016-07-20 | 财团法人车辆研究测试中心 | Pedestrian detection system |
CN105701840A (en) * | 2015-12-31 | 2016-06-22 | 上海极链网络科技有限公司 | System for real-time tracking of multiple objects in video and implementation method |
CN106934332A (en) * | 2015-12-31 | 2017-07-07 | 中国科学院深圳先进技术研究院 | Multi-target tracking method |
CN107292908A (en) * | 2016-04-02 | 2017-10-24 | 上海大学 | Pedestrian tracting method based on KLT feature point tracking algorithms |
CN106534967A (en) * | 2016-10-25 | 2017-03-22 | 司马大大(北京)智能系统有限公司 | Video editing method and device |
CN107578368A (en) * | 2017-08-31 | 2018-01-12 | 成都观界创宇科技有限公司 | Multi-object tracking method and panorama camera applied to panoramic video |
CN108447079A (en) * | 2018-03-12 | 2018-08-24 | 中国计量大学 | Target tracking method based on the TLD algorithm framework |
CN109858402A (en) * | 2019-01-16 | 2019-06-07 | 腾讯科技(深圳)有限公司 | Image detection method, device, terminal and storage medium |
CN110411447A (en) * | 2019-06-04 | 2019-11-05 | 恒大智慧科技有限公司 | Personnel positioning method, platform, server and storage medium |
CN111553934A (en) * | 2020-04-24 | 2020-08-18 | 哈尔滨工程大学 | Multi-ship tracking method adopting multi-dimensional fusion |
CN111553934B (en) * | 2020-04-24 | 2022-07-15 | 哈尔滨工程大学 | Multi-ship tracking method adopting multi-dimensional fusion |
CN113570637A (en) * | 2021-08-10 | 2021-10-29 | 中山大学 | Multi-target tracking method, device, equipment and storage medium |
CN113570637B (en) * | 2021-08-10 | 2023-09-19 | 中山大学 | Multi-target tracking method, device, equipment and storage medium |
CN114119674A (en) * | 2022-01-28 | 2022-03-01 | 深圳佑驾创新科技有限公司 | Static target tracking method and device and storage medium |
CN114119674B (en) * | 2022-01-28 | 2022-04-26 | 深圳佑驾创新科技有限公司 | Static target tracking method and device and storage medium |
CN114937060A (en) * | 2022-04-26 | 2022-08-23 | 南京北斗创新应用科技研究院有限公司 | Monocular pedestrian indoor positioning prediction method guided by map semantics |
Also Published As
Publication number | Publication date |
---|---|
CN103699908B (en) | 2016-10-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103699908A (en) | Joint reasoning-based video multi-target tracking method | |
Wang et al. | Automatic laser profile recognition and fast tracking for structured light measurement using deep learning and template matching | |
Liu et al. | A vision-based pipeline for vehicle counting, speed estimation, and classification | |
Gavrila et al. | Vision-based pedestrian detection: The PROTECTOR system |
US7409076B2 (en) | Methods and apparatus for automatically tracking moving entities entering and exiting a specified region | |
CN106373143A (en) | Adaptive method and system | |
Ellis | Performance metrics and methods for tracking in surveillance | |
Lookingbill et al. | Reverse optical flow for self-supervised adaptive autonomous robot navigation | |
Levinson | Automatic laser calibration, mapping, and localization for autonomous vehicles | |
CN102447835A (en) | Non-blind-area multi-target cooperative tracking method and system | |
CN110136186B (en) | Detection target matching method for mobile robot target ranging | |
CN115113206B (en) | Pedestrian and obstacle detection method for assisting driving of underground rail car | |
CN114022910A (en) | Swimming pool drowning prevention supervision method and device, computer equipment and storage medium | |
CN109003290A (en) | Video tracking method for a monitoring system |
CN106228570A (en) | Ground-truth data determination method and apparatus |
CN114067384A (en) | Roadside device for fever detection of highway epidemic prevention and control personnel and vehicle trajectory tracking |
CN109636834A (en) | Video vehicle target tracking algorithm based on an improved TLD algorithm |
Bacca et al. | Long-term mapping and localization using feature stability histograms | |
Zhao et al. | Dynamic object tracking for self-driving cars using monocular camera and lidar | |
Martirena et al. | Automated annotation of lane markings using lidar and odometry | |
CN111967443A (en) | Method based on image processing and BIM for analyzing regions of interest in archives |
Qing et al. | A novel particle filter implementation for a multiple-vehicle detection and tracking system using tail light segmentation | |
Heimsch et al. | Re-identification for multi-target-tracking systems using multi-camera, homography transformations and trajectory matching | |
Sheh et al. | Extracting terrain features from range images for autonomous random stepfield traversal | |
KR101032098B1 (en) | Stand-alone traffic detection system using thermal infrared imaging |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 2016-10-05; Termination date: 2020-01-14 |