CN103116896B - Visual saliency model based automatic detecting and tracking method - Google Patents

Visual saliency model based automatic detecting and tracking method

Info

Publication number
CN103116896B
CN103116896B (application CN201310071858.2A)
Authority
CN
China
Prior art keywords
tracking
target
tracks
model
region
Prior art date
Legal status
Active
Application number
CN201310071858.2A
Other languages
Chinese (zh)
Other versions
CN103116896A (en)
Inventor
徐智勇
金炫
魏宇星
Current Assignee
Institute of Optics and Electronics of CAS
Original Assignee
Institute of Optics and Electronics of CAS
Priority date: 2013-03-07
Filing date: 2013-03-07
Publication date: 2015-07-15
Application filed by Institute of Optics and Electronics of CAS
Priority to CN201310071858.2A
Publication of CN103116896A (2013-05-22)
Application granted
Publication of CN103116896B (2015-07-15)

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses an automatic detection and tracking method based on a visual saliency model. The method comprises: computing color, intensity, and orientation saliency maps of the input video image with a visual saliency model, and classifying the scene as simple or complex from the weighted saliency map; when a simple scene is detected, building a rectangular box around the salient region and taking it as the target to be tracked; when a complex scene is detected, correcting a manually selected tracking box according to the weights of the salient regions; tracking the box with a tracking-learning-detection algorithm and detecting tracking failure; after a failure, applying the visual saliency model to each subsequent frame, performing histogram matching between each region of the saliency map and the online model saved before the failure, and tracking the region with the highest similarity; and when several regions have similar scores, feeding them into the target detector simultaneously and repeating the tracking, detection, and histogram-comparison steps on subsequent frames until the target is detected again and tracking resumes.

Description

Automatic detection and tracking method based on a visual saliency model
Technical field
The invention belongs to the interdisciplinary field of computer vision and biological vision. It relates to a method that uses a visual saliency model to improve the tracking-learning-detection algorithm, achieving fully automatic detection and tracking of multiple target classes in simple scenes and semi-automatic detection and tracking in complex scenes, and performing well under target attitude changes and during re-detection after occlusion.
Background art
Commonly used tracking methods include the frame-difference method, background modeling, and optical flow. When facing complex backgrounds and many target types, however, changes in illumination, attitude, and shape cause drastic changes in the target features, and tracking fails. Under partial occlusion or fast motion, part of the target's feature information is lost, feature matching breaks down, and tracking fails or drifts.
To improve tracking accuracy, a single tracker is not sufficient. Tracking-learning-detection (TLD) therefore fuses a tracker, a detector, and a learner into one framework to solve the online real-time tracking problem. It addresses many weaknesses of earlier trackers, but it still has obvious shortcomings in feature selection and in handling attitude changes.
Summary of the invention
In view of the deficiencies of the prior art, the object of the invention is to track targets automatically with high tracking accuracy and good robustness, so that the method can serve as a practical detection and tracking system in engineering applications. To this end, the invention proposes an automatic detection and tracking method based on a visual saliency model.
To achieve this object, the technical solution of the invention comprises the following steps:
Step S1: Apply the visual saliency model to the first frame of the input video to obtain a coarse understanding of the scene: compute the color, intensity, and orientation saliency maps of the image, normalize them, and fuse them into a weighted saliency map of the image scene. If the weight of one salient region accounts for more than 80% of the total, the scene is defined as simple; otherwise, salient regions whose weight is below 10% of the total are filtered out and the scene is defined as complex.
Step S2: When the program detects that the video sequence is a simple scene, it directly builds a rectangular box around the salient region and tracks it as the target.
Step S3: When the program detects that the video sequence is a complex scene, a dialog box prompts the user to select the target. The weight of each salient region is then recomputed with the distance between the manually selected tracking box and that region added in, and the manually selected box is corrected according to the resulting weights.
Step S4: Track the box with the tracking-learning-detection algorithm. When the target leaves the field of view or is occluded, the tracking failure is detected and fed back immediately; otherwise the target continues to be tracked stably while failure detection runs in real time.
Step S5: When tracking fails, apply the visual saliency model to each subsequent frame to obtain its saliency map. Perform histogram matching between each region of the saliency map and the online model saved before the failure. When the best-matching region's similarity is much larger than the second best, track that region directly; when several regions have similar scores, feed them into the target detector simultaneously, and repeat step S4 until the target is detected again and tracking resumes.
Preferably, the visual saliency model is computed on the first frame of the video input: the color, intensity, and orientation saliency maps are computed separately and normalized to obtain the weighted saliency map.
Preferably, for a simple scene, the center of the salient region is taken as the center of the tracking box and the area of the salient region as the area of the box, so that the target is selected and tracked automatically.
Preferably, the center and size of the manually selected tracking box are fed as two parameters into the computation of the salient-region weights, and according to the result the box center and size are pulled toward the salient regions with higher weights, thereby correcting the manually selected box. Since the center and size of a manually selected box differ from one selection to the next, target initialization carries a random error, whereas the saliency computation on the first frame is always the same; using it to revise the manually selected box therefore removes the random error of manual selection and yields stable tracking.
Preferably, when the target is occluded, deforms drastically, or leaves the field of view, the forward-backward error of most tracked points inside the box becomes very large, and tracking is then declared failed.
Preferably, when tracking fails, the saliency map of the current frame is computed, and a histogram is built for each region of the map and compared with the histogram of the online model saved before the failure, so as to find the target after its attitude has changed or after it reappears.
Beneficial effects: Since the visual saliency model was proposed, it has quickly found wide application in computer vision. Because it imitates how the human visual cortex processes targets in the field of view and computes the regions of interest, it lets a computer recognize interesting targets automatically, as the human eye does. The invention combines the visual saliency model with tracking-learning-detection into a dual-model automatic detection and tracking algorithm: the saliency model improves the TLD algorithm, achieving automatic detection and tracking of multiple target classes in simple scenes and semi-automatic detection and tracking in complex scenes, and behaving well under attitude changes and during re-detection after occlusion. The method improves tracking accuracy, suppresses the influence of attitude change and rotation of non-rigid objects on tracking, and realizes automatic or semi-automatic tracking in use.
Description of the drawings
Fig. 1a is the flow chart of the automatic detection and tracking method of the invention.
Fig. 1b is the flow chart of a specific embodiment of the method.
Fig. 2 is the processing structure of the modified visual saliency model used in the invention.
Fig. 3 shows the region of interest obtained with the visual saliency model in a simple scene.
Figs. 4a-4d show the color feature map, intensity feature map, orientation feature map, and weighted feature map of the visual saliency model computed on Fig. 3 under the simple scene.
Fig. 5 shows the region of interest obtained with the visual saliency model in a complex scene.
Figs. 6a-6d show the color feature map, intensity feature map, orientation feature map, and weighted feature map of the visual saliency model computed on Fig. 5 under the complex scene.
Figs. 7a-7c illustrate the forward-backward error detector used in the tracking-learning-detection algorithm.
Fig. 8 shows the three cascaded filters of the target detector in the tracking-learning-detection algorithm.
In Fig. 9, panels 1-6 are frames in which the prior-art tracking-learning-detection algorithm fails when the target attitude changes.
In Fig. 10, panels 1-6 show the regions of the saliency map computed by the visual saliency model after the failure.
In Fig. 11, panels 1-6 show the improved result obtained by histogram comparison after the saliency computation.
Embodiment
Embodiments of the invention are described in detail below with reference to the drawings. The embodiment is implemented on the premise of the technical solution of the invention and gives a detailed implementation and concrete operating procedure, but the protection scope of the invention is not limited to the following embodiment.
As Figs. 1a and 1b show, this embodiment realizes automatic target detection and tracking; the input is an ordinary sequence of video frames. The example uses the visual saliency model to select the tracking target automatically or semi-automatically, handles tracking losses caused by attitude change or rotation, and clearly improves re-detection after tracking failure.
The step of the automatic detecting and tracking method of the present invention comprises step S1 to step S5, wherein:
Step S1: Apply the visual saliency model to the first frame of the input video to obtain a coarse understanding of the scene: compute the color, intensity, and orientation saliency maps of the image, normalize them, and fuse them into a weighted saliency map of the image scene. If the weight of one salient region accounts for more than 80% of the total, the scene is defined as simple; otherwise, salient regions whose weight is below 10% of the total are filtered out and the scene is defined as complex.
Preprocessing comes first. In preprocessing the input image is classified as a simple or a complex scene, which is done by computing the weighted saliency map of the image. The computation of the modified visual saliency model is shown in Fig. 2: color, intensity, and orientation features are extracted with linear filters; after Gaussian pyramids, center-surround operators, and normalization, 12 color saliency maps, 6 intensity saliency maps, and 24 orientation saliency maps are formed, normalized, and fused into the weighted saliency map.
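By way of illustration, the following minimal Python/OpenCV sketch follows this Itti-style pipeline (feature channels, Gaussian pyramids, center-surround differences, normalization, fusion). The pyramid depth, center-surround scale pairs, Gabor parameters, and the simple max-based normalization are illustrative assumptions, not the exact parameters of the disclosed model with its 12 color, 6 intensity, and 24 orientation maps.

```python
# A minimal sketch of an Itti-style weighted saliency map, assuming OpenCV.
import cv2
import numpy as np

def gaussian_pyramid(img, levels=6):
    pyr = [img.astype(np.float32)]
    for _ in range(levels - 1):
        pyr.append(cv2.pyrDown(pyr[-1]))
    return pyr

def center_surround(pyr, c, s, shape):
    # |center - surround| with the coarse level resized to the fine one
    surround = cv2.resize(pyr[s], (pyr[c].shape[1], pyr[c].shape[0]))
    diff = cv2.absdiff(pyr[c], surround)
    return cv2.resize(diff, shape)

def saliency_map(bgr):
    h, w = bgr.shape[:2]
    b, g, r = cv2.split(bgr.astype(np.float32) / 255.0)
    intensity = (b + g + r) / 3.0
    # opponent color channels (red-green, blue-yellow)
    rg = r - g
    by = b - (r + g) / 2.0
    maps = []
    for channel in (intensity, rg, by):
        pyr = gaussian_pyramid(channel)
        for c, s in [(1, 3), (1, 4), (2, 4), (2, 5)]:  # assumed scale pairs
            maps.append(center_surround(pyr, c, s, (w, h)))
    # orientation maps from Gabor filters at four orientations
    for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
        kern = cv2.getGaborKernel((9, 9), 2.5, theta, 6.0, 0.5)
        maps.append(np.abs(cv2.filter2D(intensity, cv2.CV_32F, kern)))
    # normalize each map to [0, 1] and sum into the weighted saliency map
    fused = sum(m / (m.max() + 1e-6) for m in maps)
    return fused / fused.max()
```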
Fig. 4a shows the color feature map, Fig. 4b the intensity feature map, Fig. 4c the orientation feature map, and Fig. 4d the weighted feature map of the visual saliency model computed on Fig. 3 under a simple scene. After combination and normalization these feature maps form the color, intensity, and orientation saliency maps, which are fused into one weighted saliency map. The weights of the different regions are then compared: if one region's weight accounts for 80% of the total, the scene is defined as simple and only that salient region is retained; Fig. 3 shows the region of interest obtained this way in a simple scene. If, as in Fig. 5, there are several salient regions with similar weights, the scene is judged complex, and only the regions whose weight exceeds 10% of the total are retained. Fig. 6a shows the color saliency map, Fig. 6b the intensity saliency map, Fig. 6c the orientation saliency map, and Fig. 6d the weighted saliency map computed on Fig. 5 under the complex scene.
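A minimal sketch of this decision rule follows, assuming salient regions are segmented from the weighted saliency map by thresholding and connected components, and that a region's "weight" is the sum of saliency values inside it; both are illustrative assumptions.

```python
# Simple/complex scene decision from salient-region weight shares.
import cv2
import numpy as np

def classify_scene(saliency, thresh=0.5):
    mask = (saliency > thresh * saliency.max()).astype(np.uint8)
    n, labels = cv2.connectedComponents(mask)
    weights = [saliency[labels == i].sum() for i in range(1, n)]
    total = sum(weights) + 1e-9
    shares = sorted((w / total for w in weights), reverse=True)
    if shares and shares[0] > 0.8:
        return "simple", shares[:1]          # one dominant region
    # complex scene: keep only regions holding more than 10% of the weight
    return "complex", [s for s in shares if s > 0.1]
```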
Step S2: automatic target selection in a simple scene. When the program detects that the video sequence is a simple scene, it directly builds a rectangular box around the salient region and tracks it as the target.
Step S3: target selection and correction in a complex scene. When the program detects that the video sequence is a complex scene, a dialog box prompts the user to select the target; the weight of each salient region is then recomputed with the distance between the manually selected tracking box and that region added in, and the box is corrected according to the resulting weights, as sketched below.
Step S4: target tracking.
The tracking box is tracked with the tracking-learning-detection algorithm. When the target leaves the field of view or is occluded, the tracking failure is detected and fed back immediately; otherwise the target continues to be tracked stably while failure detection runs in real time.
Target tracking comprises four parts: a tracker, a failure detector, a target detector, and a learner.
The video frame with the rectangular tracking box obtained in step S2 or step S3 is fed simultaneously to the tracker (step S41) and the target detector (step S42). The tracking and detection results are fed in real time to the learner (step S43), which builds the online model (step S44) from the correct tracking and detection results and updates the tracker (S41) and the detector (S42). Meanwhile, the failure detector (step S45) runs independently and feeds back when it judges that tracking has failed.
Tracker: in the tracking-learning-detection algorithm, the target is tracked with an algorithm that combines optical flow with an image pyramid, and the failure detector is built by computing the forward-backward error.
The combined optical flow and image pyramid algorithm works as follows: pixels are subsampled from the video image at several scales to obtain images of multiple resolutions; sparse optical flow is tracked on each image, and adjacent scales are compared, the maximum giving the final tracking result. This overcomes three shortcomings of plain sparse optical flow: (1) it cannot handle illumination change; (2) target motion must not be too large; (3) local motion must be consistent.
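A minimal sketch of this pyramidal sparse optical-flow step, using OpenCV's built-in pyramid implementation (cv2.calcOpticalFlowPyrLK); the grid spacing and the Lucas-Kanade window and pyramid-level parameters are illustrative assumptions.

```python
# Track a sparse grid of points inside the tracking box with pyramidal LK.
import cv2
import numpy as np

def track_box_points(prev_gray, next_gray, box, step=5):
    x, y, w, h = box
    # seed a sparse grid of points inside the tracking box
    xs, ys = np.meshgrid(np.arange(x, x + w, step), np.arange(y, y + h, step))
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float32)
    pts = pts.reshape(-1, 1, 2)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, pts, None,
        winSize=(15, 15), maxLevel=4,  # 4 pyramid levels (assumed)
        criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 20, 0.03))
    ok = status.ravel() == 1
    return pts[ok], nxt[ok]
```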
Failure detection device: its algorithm is that t, t+1 moment order is inputted tracker, then t+1, t order are inputted tracker, and on the same frame that contrast two times result obtains, the tracking results of same location of pixels, judges whether this point is effectively followed the tracks of.When the error of major part point all exceedes threshold value, be judged to be that blocking appears in target, or follow the tracks of unsuccessfully, in the present invention as shown in Fig. 7 a and Fig. 7 b, use the front and back item error detector legend in tracking-study-detection algorithm.
Point in tracking box in Fig. 7 a is initial trace point, is namely a little trace point, and the point in Fig. 7 b, for after item error before and after calculating, remains the point that error is less.Can find out, the point retained in Fig. 7 b is exactly in tracing process, target object does not have vicissitudinous point, and t is time constant, and what refer at t in the present invention is the video frame images of any time t, and namely t+1 is the next frame image of t frame of video.Namely item error physical meaning in the present invention in front and back is video positive sequence and inverted order are inputted, and contrasts, obtains error amount.
Fig. 7c gives the concrete computation of the forward-backward error: tracked points are computed over the frame sequence forward and then backward, and the pixel distance between the corresponding points is taken as the forward-backward error. Let I_t denote the video frame at time t, I_{t+1} its next frame, and I_{t+k} the frame k steps later. X_t, X_{t+1}, ..., X_{t+k} are the two-dimensional coordinates of a tracked point during forward tracking from t to t+k, and X̂_{t+k} = X_{t+k}, X̂_{t+1}, ..., X̂_t are the coordinates of the same point during backward tracking from t+k back to t. The forward-backward error at time t is then e_FB(t) = ||X_t − X̂_t||.
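A minimal sketch of this computation: points are tracked forward from frame t to t+k, the results are tracked backward to t, and e_FB is the Euclidean distance between each starting point and its returned position. The median-based failure test mirrors the "most points" criterion above; its threshold is an assumption.

```python
# Forward-backward error over a short frame window.
import cv2
import numpy as np

def forward_backward_error(frames, pts):
    """frames: list of grayscale frames t..t+k; pts: Nx1x2 float32 points."""
    fwd = pts
    for a, b in zip(frames[:-1], frames[1:]):               # forward pass
        fwd, _, _ = cv2.calcOpticalFlowPyrLK(a, b, fwd, None)
    bwd = fwd
    rev = frames[::-1]
    for a, b in zip(rev[:-1], rev[1:]):                     # backward pass
        bwd, _, _ = cv2.calcOpticalFlowPyrLK(a, b, bwd, None)
    # e_FB(t) = ||X_t - X_hat_t|| for every seeded point
    return np.linalg.norm((pts - bwd).reshape(-1, 2), axis=1)

def tracking_failed(errors, thresh=10.0):
    # declare failure when most points drift: median error above threshold
    return np.median(errors) > thresh
```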
Target detector: in essence three filters in series. As Fig. 8 shows, the target detector of the tracking-learning-detection (TLD) algorithm in the invention uses a variance filter, a maximum a posteriori filter, and a nearest-neighbor filter.
The detection process is shown in Fig. 8. The first image on the left is a video frame; the rectangle in it is a scanning window. The scanning window traverses the image, and the variance of the pixels inside each window is computed: windows with low variance are regarded as background, and windows with higher variance as candidate target regions, yielding a series of image patches. This is the variance-filtering stage.
Image features are then extracted from the patches passed by the variance filter using feature templates. The four cells in the figure represent four pixels, compared pairwise (top/bottom and left/right); each comparison has four possible outcomes, encoded with 2 bits as 00, 01, 10, 11, each corresponding to a posterior probability. Patches whose averaged posterior exceeds 50% are kept, while patches below 50% are judged background (background here means everything in the image except the tracking target). This is the maximum a posteriori filtering stage.
Finally, the surviving patches are compared with the target selected at tracking initialization (the online model during tracking) by nearest-neighbor filtering. The rightmost image in Fig. 8 shows the nearest-neighbor classifier finding the classification boundary: the white point is the point of the current patch on the boundary, d' is its minimum distance to the nearest black point, d'' its minimum distance to the nearest red point, and the question mark asks which of d' and d'' is smaller. This is the nearest-neighbor filtering stage.
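A minimal sketch of the first and third stages of this cascade in Python/OpenCV: the variance filter computed from integral images, and a nearest-neighbor similarity against the positive and negative patches of the online model. The 2-bit posterior (fern) stage is omitted for brevity; the NCC-based distance and all thresholds are illustrative assumptions.

```python
# Variance filter and nearest-neighbor filter of the detector cascade.
import cv2
import numpy as np

def window_variance(gray, x, y, w, h, ii=None, ii2=None):
    # integral images let every window variance be computed in O(1)
    ii = cv2.integral(gray) if ii is None else ii
    ii2 = cv2.integral(np.square(gray.astype(np.float64))) if ii2 is None else ii2
    n = w * h
    s = ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]
    s2 = ii2[y + h, x + w] - ii2[y, x + w] - ii2[y + h, x] + ii2[y, x]
    return s2 / n - (s / n) ** 2          # Var = E[p^2] - E[p]^2

def nn_similarity(patch, positives, negatives):
    def ncc(a, b):
        b = cv2.resize(b, (a.shape[1], a.shape[0]))
        return cv2.matchTemplate(a, b, cv2.TM_CCOEFF_NORMED)[0, 0]
    dp = 1.0 - max(ncc(patch, p) for p in positives)   # distance to positives
    dn = 1.0 - max(ncc(patch, n) for n in negatives)   # distance to negatives
    return dn / (dp + dn + 1e-9)   # close to 1 => patch looks like the target
```

In use, only windows whose variance exceeds a fraction of the initial target's variance reach the later stages, which is what makes the exhaustive scanning-window traversal affordable.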
Learner: samples are divided into positive and negative sets. The within-class distance is computed continuously for both sets, samples above a threshold are removed, and the sample domains are repartitioned. By continually correcting the erroneous samples in the positive and negative sets, the online model is continually updated, keeping tracking stable and accurate. A rough sketch of this bookkeeping is given below.
All of these parts run simultaneously during tracking and continually update the online model; the online model is dynamic and evolves as tracking proceeds.
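A loose sketch of the learner's sample bookkeeping: positive and negative patch sets from which samples lying too far (in NCC distance) from their own class are pruned, so the online model keeps updating. The set capacity and pruning threshold are assumptions, and the actual P-N learning bookkeeping of TLD is more involved than this.

```python
# Online model as pruned positive/negative patch sets.
import cv2
import numpy as np

class OnlineModel:
    def __init__(self, max_each=100, prune_dist=0.5):
        self.pos, self.neg = [], []
        self.max_each, self.prune_dist = max_each, prune_dist

    def _dist(self, a, b):
        b = cv2.resize(b, (a.shape[1], a.shape[0]))
        return 1.0 - cv2.matchTemplate(a, b, cv2.TM_CCOEFF_NORMED)[0, 0]

    def add(self, patch, positive):
        bank = self.pos if positive else self.neg
        bank.append(patch)
        # prune samples whose within-class distance exceeds the threshold
        if len(bank) > 2:
            keep = [p for p in bank
                    if np.median([self._dist(p, q) for q in bank if q is not p])
                    < self.prune_dist]
            bank[:] = keep[-self.max_each:]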
Step S5: re-detection after failure. When tracking fails, the visual saliency model is applied to each subsequent frame to obtain its saliency map. Histogram matching is performed between each region of the saliency map and the online model saved before the failure (the online model updates the tracking template in real time from the tracking results and changes continually, unlike the saliency model, which is computed on a single frame). When the best-matching region's similarity is much larger than the second best, that region is tracked directly; when several regions have similar scores, they are fed into the target detector simultaneously, and step S4 is repeated until the target is detected again and tracking resumes.
The original re-detector in tracking-learning-detection is simply the target detector. Its re-detection efficiency after a failure is low, and when the target attitude changes the failure detector often misjudges and declares tracking failed.
For these situations the visual saliency model is used again, implemented as follows:
When the failure detector feeds back a tracking failure, the visual saliency model is applied to each subsequent frame to obtain its saliency map. A histogram is built for each region of the saliency map and compared with the histogram of the online model saved before the failure. Case one: when the best-matching region's similarity is much larger than the second best, that region is tracked directly. Case two: when several regions have similar scores, they are fed into the target detector simultaneously. Step S4 is repeated until the target is detected again and tracking resumes.
Panels 1-6 of Fig. 9 show frames in which the prior-art tracking-learning-detection algorithm fails when the target attitude changes; panels 1-6 of Fig. 10 show the regions of the saliency map computed by the saliency model after the failure; panels 1-6 of Fig. 11 show the improved result obtained by histogram comparison after the saliency computation. The six panels of Fig. 9 are six frames of a video processed by the original tracking-learning-detection algorithm; the forward-backward error indicates that tracking has failed, so no tracking box is drawn although the target is still in the video, which shows that a drastic attitude change of the target made the algorithm fail. The blue points are the tracked points, which stay in place after the failure. The panels of Fig. 10 are the results of the saliency model built into the failure-detection process for this problem: the yellow areas are the salient regions computed by the model, and the target can be seen to lie inside one of them. The panels of Fig. 11 show the effect of adding histogram comparison after the failure detector to pick the target up and track it again, i.e. the actual tracking process after the improvement: the target in the red tracking box is the salient region whose histogram is closest to the online model, which solves the tracking failures caused by drastic attitude changes. After the saliency model is used to improve the tracking-learning-detection algorithm, the tracking target can be selected automatically or semi-automatically at the front end, the previously intractable tracking failures caused by attitude change can be handled, and tracking performance is clearly improved.
The above is only an embodiment of the invention, but the protection scope of the invention is not limited thereto; any person skilled in the art can conceive of transformations or substitutions within the technical scope disclosed by the invention, and all of these shall be covered by the scope of the invention.

Claims (6)

1. An automatic detection and tracking method based on a visual saliency model, characterized by comprising the following concrete steps:
Step S1: applying the visual saliency model to the first frame of the input video to obtain a coarse understanding of the scene, namely computing the color, intensity, and orientation saliency maps of the image, normalizing them, and fusing them into a weighted saliency map of the image scene; if the weight of one salient region accounts for more than 80% of the total, defining the scene as simple; otherwise filtering out the salient regions whose weight is below 10% of the total and defining the scene as complex;
Step S2: when the program detects that the video sequence is a simple scene, directly building a rectangular box around the salient region and tracking it as the target;
Step S3: when the program detects that the video sequence is a complex scene, popping up a dialog box for the user to select the target, then recomputing the weight of each salient region with the distance between the manually selected tracking box and that region added in, and correcting the manually selected box according to the resulting weights;
Step S4: tracking the box with the tracking-learning-detection algorithm; when the target leaves the field of view or is occluded, detecting the tracking failure and feeding it back immediately; otherwise continuing to track the target stably while detecting in real time whether tracking has failed;
Step S5: when tracking fails, applying the visual saliency model to each subsequent frame to obtain its saliency map; performing histogram matching between each region of the saliency map and the online model saved before the failure; when the best-matching region's similarity is much larger than the second best, tracking that region directly; when several regions have similar scores, feeding them into the target detector simultaneously, and repeating step S4 until the target is detected again and tracking resumes; the online model updating the tracking template in real time from the tracking results during tracking.
2. The automatic detection and tracking method according to claim 1, characterized in that the visual saliency model is computed on the first frame of the video input, the color, intensity, and orientation saliency maps being computed separately and normalized to obtain the weighted saliency map.
3. The automatic detection and tracking method according to claim 1, characterized in that, for a simple scene, the center of the salient region is taken as the center of the tracking box and the area of the salient region as the area of the tracking box, so that the tracking target is selected and tracked automatically.
4. The automatic detection and tracking method according to claim 1, characterized in that the center and size of the manually selected tracking box are fed as two parameters into the computation of the salient-region weights, and according to the result the center and size of the box are pulled toward the salient regions with higher weights, thereby correcting the manually selected box; since the center and size of a manually selected box differ from one selection to the next, target initialization carries a random error, whereas the saliency computation on the first frame of the video is always the same, so using it to revise the manually selected box removes the random error of manual selection and yields stable tracking.
5. The automatic detection and tracking method according to claim 1, characterized in that, when the tracking target is occluded, deforms drastically, or leaves the field of view, the forward-backward error of most tracked points inside the tracking box becomes very large, and tracking is then declared failed.
6. The automatic detection and tracking method according to claim 1, characterized in that, when tracking fails, the saliency map of the current frame is computed, a histogram is built for each region of the saliency map and compared with the histogram of the online model saved before the failure, so as to find the target after its attitude has changed or after it reappears.
CN201310071858.2A 2013-03-07 2013-03-07 Visual saliency model based automatic detecting and tracking method Active CN103116896B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310071858.2A CN103116896B (en) 2013-03-07 2013-03-07 Visual saliency model based automatic detecting and tracking method


Publications (2)

Publication Number Publication Date
CN103116896A CN103116896A (en) 2013-05-22
CN103116896B 2015-07-15

Family

ID=48415260

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310071858.2A Active CN103116896B (en) 2013-03-07 2013-03-07 Visual saliency model based automatic detecting and tracking method

Country Status (1)

Country Link
CN (1) CN103116896B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106875396A (en) * 2016-12-28 2017-06-20 深圳信息职业技术学院 The extracting method and device in the notable area of video based on kinetic characteristic

Families Citing this family (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI514327B (en) * 2013-06-26 2015-12-21 Univ Nat Taiwan Science Tech Method and system for object detection and tracking
CN103747258B (en) * 2014-01-27 2015-02-04 中国科学技术大学 Encryption processing method for high-performance video coding standard
CN105654454B (en) * 2014-11-10 2018-08-10 中国船舶重工集团公司第七二三研究所 A kind of Contrast tracking method of fast and stable
CN104463907A (en) * 2014-11-13 2015-03-25 南京航空航天大学 Self-adaptation target tracking method based on vision saliency characteristics
CN104700431B (en) * 2015-02-10 2017-08-11 浙江工业大学 A kind of natural contour tracing method of the flexible object based on significance
CN104637038B (en) * 2015-03-11 2017-06-09 天津工业大学 A kind of improvement CamShift trackings based on weighted histogram model
CN105825168B (en) * 2016-02-02 2019-07-02 西北大学 A kind of Rhinopithecus roxellana face detection and method for tracing based on S-TLD
CN105787962B (en) * 2016-02-25 2018-10-30 哈尔滨工程大学 A kind of monocular vision tracking recycled under water based on UUV
CN106778570B (en) * 2016-12-05 2018-08-24 清华大学深圳研究生院 A kind of pedestrian detection and tracking in real time
CN109213672A (en) * 2017-07-07 2019-01-15 博彦科技股份有限公司 Dialog box removing method, device, storage medium and processor
CN109003290A (en) * 2017-12-11 2018-12-14 罗普特(厦门)科技集团有限公司 A kind of video tracing method of monitoring system
CN108986045A (en) * 2018-06-30 2018-12-11 长春理工大学 A kind of error correction tracking based on rarefaction representation
CN110954922B (en) * 2018-09-27 2021-08-24 千寻位置网络有限公司 Method and device for automatically identifying scene of GNSS dynamic drive test
CN109584269A (en) * 2018-10-17 2019-04-05 龙马智芯(珠海横琴)科技有限公司 A kind of method for tracking target
CN111104948A (en) * 2018-10-26 2020-05-05 中国科学院长春光学精密机械与物理研究所 Target tracking method based on adaptive fusion of double models
CN109658440A (en) * 2018-11-30 2019-04-19 华南理工大学 A kind of method for tracking target based on target significant characteristics
CN109864806A (en) * 2018-12-19 2019-06-11 江苏集萃智能制造技术研究所有限公司 The Needle-driven Robot navigation system of dynamic compensation function based on binocular vision
CN109785661A (en) * 2019-02-01 2019-05-21 广东工业大学 A kind of parking guide method based on machine learning
CN109887005B (en) * 2019-02-26 2023-05-30 天津城建大学 TLD target tracking method based on visual attention mechanism
CN110796012B (en) * 2019-09-29 2022-12-27 北京达佳互联信息技术有限公司 Image processing method and device, electronic equipment and readable storage medium
CN110677635B (en) * 2019-10-07 2020-10-30 董磊 Data parameter field setting system
CN111027505B (en) * 2019-12-19 2022-12-23 吉林大学 Hierarchical multi-target tracking method based on significance detection
CN111325124B (en) * 2020-02-05 2023-05-12 上海交通大学 Real-time man-machine interaction system under virtual scene
CN112270657A (en) * 2020-11-04 2021-01-26 成都寰蓉光电科技有限公司 Sky background-based target detection and tracking algorithm
CN112465871B (en) * 2020-12-07 2023-10-17 华中光电技术研究所(中国船舶重工集团公司第七一七研究所) Evaluation method and system for accuracy of visual tracking algorithm
CN112634332A (en) * 2020-12-21 2021-04-09 合肥讯图信息科技有限公司 Tracking method based on YOLOv4 model and DeepsORT model
CN113327272B (en) * 2021-05-28 2022-11-22 北京理工大学重庆创新中心 Robustness long-time tracking method based on correlation filtering
CN113065559B (en) * 2021-06-03 2021-08-27 城云科技(中国)有限公司 Image comparison method and device, electronic equipment and storage medium
CN113793260B (en) * 2021-07-30 2022-07-22 武汉高德红外股份有限公司 Method and device for semi-automatically correcting target tracking frame and electronic equipment


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102034096A (en) * 2010-12-08 2011-04-27 中国科学院自动化研究所 Video event recognition method based on top-down motion attention mechanism
CN102184557A (en) * 2011-06-17 2011-09-14 电子科技大学 Salient region detection method for complex scene
CN102521844A (en) * 2011-11-30 2012-06-27 湖南大学 Particle filter target tracking improvement method based on vision attention mechanism
CN102881024A (en) * 2012-08-24 2013-01-16 南京航空航天大学 Tracking-learning-detection (TLD)-based video object tracking method


Also Published As

Publication number Publication date
CN103116896A (en) 2013-05-22


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant