CN106909885A - Target tracking method and device based on target candidates - Google Patents
Target tracking method and device based on target candidates - Download PDF
- Publication number
- CN106909885A (application CN201710038722.XA)
- Authority
- CN
- China
- Prior art keywords
- target
- frame image
- object candidate
- tracked target
- current frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
- G06V20/42—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
Landscapes
- Engineering & Computer Science (AREA)
- Computational Linguistics (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
- Closed-Circuit Television Systems (AREA)
Abstract
The present invention discloses a target tracking method and device based on target candidates. The method comprises the following steps: determining a current frame image containing the tracked target; obtaining the tracked-target region in the current frame image; obtaining the next frame image after the current frame; obtaining multiple object candidate regions in the next frame image; calculating the similarity between the tracked-target region and each object candidate region; and determining the target tracking region among the multiple object candidate regions according to the similarity. By locating the tracked object through target candidates, the present invention can accurately detect the tracked object during tracking, effectively improving tracking stability and avoiding tracking failure.
Description
Technical field
The present invention relates to the field of image processing, and in particular to a target tracking method and device based on target candidates.
Background technology
The purpose of target tracking is to obtain the motion trajectory of a specific target in a video sequence. With the rapid spread of computer network video, target tracking has long been a hot topic in the field of computer vision and plays a key role in many practical vision systems. When tracking a target object, candidate regions for the tracked target generally need to be selected in the video images before tracking can be completed.
Target tracking methods in the prior art mainly track the target object after extracting its feature information through the detection approach of a learning and classification task. However, the image information in a video stream is highly varied, so searching all video images for targets whose feature information matches the tracked target is complex, and the candidate regions of the video target image cannot be determined. During tracking, external conditions such as illumination changes and shape deformation therefore easily cause tracking failure.
The content of the invention
The technical problem to be solved by the embodiments of the present invention is therefore that prior-art target tracking methods, which track the target object after extracting its feature information through detection, cannot determine target candidate regions in the video images, so external conditions during tracking easily cause tracking failure.
The embodiments of the present invention therefore provide the following technical solutions:
An embodiment of the present invention provides a target tracking method based on target candidates, comprising the following steps:
determining a current frame image containing the tracked target;
obtaining the tracked-target region in the current frame image;
obtaining the next frame image after the current frame;
obtaining multiple object candidate regions in the next frame image;
calculating the similarity between the tracked-target region and each object candidate region;
determining the target tracking region among the multiple object candidate regions according to the similarity.
Optionally, obtaining multiple object candidate regions in the next frame image includes:
obtaining positive samples and negative samples of the current frame image;
detecting the edge information of the next frame image;
detecting, in the current frame image, the edge information where the tracked-target region coincides with the image edge information.
Optionally, detecting the edge information of the next frame image includes:
obtaining the boundary responses of the edge information of the next frame image;
filtering out non-maximal boundary responses to determine an edge peak map;
grouping the information of the edge peak map.
Optionally, in determining the edge peak map, straight boundaries have high affinity, while curved or disconnected boundaries have low affinity.
Optionally, calculating the similarity between the tracked-target region and each object candidate region includes:
obtaining the bounding rectangle containing each object candidate region;
inputting the tracked-target region and the bounding rectangles into a comparison neural network model;
obtaining the object candidate region with the maximum score.
An embodiment of the present invention provides a target tracking device based on target candidates, including:
a first determining unit for determining a current frame image containing the tracked target;
a first acquiring unit for obtaining the tracked-target region in the current frame image;
a second acquiring unit for obtaining the next frame image after the current frame;
a third acquiring unit for obtaining multiple object candidate regions in the next frame image;
a computing unit for calculating the similarity between the tracked-target region and each object candidate region;
a second determining unit for determining the target tracking region among the multiple object candidate regions according to the similarity.
Optionally, the third acquiring unit includes:
a first acquiring module for obtaining positive and negative samples of the current frame image;
a first detecting module for detecting the edge information of the next frame image;
a second detecting module for detecting, in the current frame image, the edge information where the tracked-target region coincides with the image edge information.
Optionally, the first detecting module includes:
a first acquiring submodule for obtaining the boundary responses of the edge information of the next frame image;
a determining submodule for filtering out non-maximal boundary responses to determine the edge peak map;
an executing module for grouping the information of the edge peak map.
Optionally, in the first acquiring submodule, straight boundaries of the edge peak map have high affinity, while curved or disconnected boundaries have low affinity.
Optionally, the computing unit includes:
a second acquiring module for obtaining the bounding rectangle containing each object candidate region;
an input module for inputting the tracked-target region and the bounding rectangles into the comparison neural network model;
a third acquiring module for obtaining the object candidate region with the maximum score.
The technical solutions of the embodiments of the present invention have the following advantages:
The present invention provides a target tracking method and device based on target candidates. The method comprises the following steps: determining a current frame image containing the tracked target; obtaining the tracked-target region in the current frame image; obtaining the next frame image after the current frame; obtaining multiple object candidate regions in the next frame image; calculating the similarity between the tracked-target region and each object candidate region; and determining the target tracking region among the multiple object candidate regions according to the similarity. By locating the tracked object through target candidates, the tracked object can be detected accurately during tracking, which effectively improves tracking stability and avoids tracking failure.
Brief description of the drawings
To explain the specific embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed for describing the specific embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flow chart of the target tracking method based on target candidates in Embodiment 1 of the present invention;
Fig. 2 is a flow chart of obtaining multiple object candidate regions in the target tracking method of Embodiment 1;
Fig. 3 is a flow chart of detecting the edge information of the next frame image in the target tracking method of Embodiment 1;
Fig. 4 is a flow chart of calculating the object candidate regions in the target tracking method of Embodiment 1;
Fig. 5 is a structural block diagram of the target tracking device based on target candidates in Embodiment 2 of the present invention;
Fig. 6 is a structural block diagram of the third acquiring unit of the target tracking device in Embodiment 2;
Fig. 7 is a structural block diagram of the first detecting module of the target tracking device in Embodiment 2;
Fig. 8 is a structural block diagram of the computing unit of the target tracking device in Embodiment 2.
Specific embodiment
The technical solutions of the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
In the description of the embodiments of the present invention, it should be noted that terms of orientation or positional relationship such as "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner" and "outer" indicate orientations or positional relationships based on the drawings. They are used only to facilitate and simplify the description of the embodiments, and do not indicate or imply that the referenced device or element must have a particular orientation or be constructed and operated in a particular orientation; they therefore cannot be construed as limiting the present invention. In addition, the terms "first", "second" and "third" are used only for purposes of description and cannot be understood as indicating or implying relative importance.
In the description of the embodiments of the present invention, it should also be noted that, unless otherwise expressly specified and limited, the terms "mounted", "connected" and "coupled" should be understood broadly: the connection may be fixed, detachable or integral; mechanical or electrical; direct, or indirect through an intermediary, or internal between two elements; wireless or wired. Those of ordinary skill in the art can understand the specific meanings of these terms in the present invention according to the specific circumstances.
Furthermore, the technical features involved in the different embodiments of the invention described below can be combined with each other as long as they do not conflict with each other.
Embodiment 1
An embodiment of the present invention provides a target tracking method based on target candidates. As shown in Fig. 1, the method comprises the following steps:
S1. Determine the current frame image containing the tracked target. A complete video is constituted only by the sequence of continuous images input in the video stream, and these images are temporally correlated sequential data. Tracking can therefore only be completed by obtaining the tracked-target image together with the current frame, and the tracked target can only be found once the current frame image containing it has been located in the video stream.
Specifically, target tracking usually means that, given the initial state of the tracked target in the first video frame, the state of the target in subsequent frames is estimated automatically. The human eye can follow a specific target fairly easily over a period of time, but for a machine this task is far from trivial: during tracking the target may undergo drastic deformation, be occluded by other targets, or be disturbed by similar objects, among other complicated situations. The current frame above contains the image information of the initial frame input from the video stream, or of the previous or next frame at the current time, and this information includes the corresponding position and size.
S2. Obtain the tracked-target region in the current frame image, i.e. detect in the current frame image the region of the tracked target (the current patch), which is an image block made up of multiple pixels.
S3. Obtain the next frame image after the current frame. The purpose of tracking is to keep up with the object, and a specific object can only be tracked by obtaining the image information frame by frame; the tracked-target region obtained in the current frame is therefore used in the next frame to complete further tracking.
S4. Obtain multiple object candidate regions in the next frame image. Target proposal positions are generated in a proposal-generating manner; the purpose is to generate a relatively small set of candidate boxes, which serve as the multiple object candidate regions.
As one implementation of the target tracking method based on target candidates in this embodiment, as shown in Fig. 2, step S4 of obtaining multiple object candidate regions in the next frame image includes:
S41. Obtain positive samples and negative samples of the current frame image. A positive sample contains only the target that needs to be found in the picture, i.e. sample information related to the tracked target; a negative sample contains no target that needs to be found, i.e. unnecessary, irrelevant sample information. Candidate regions other than the correct target region can also be used as negative samples.
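The positive/negative split in S41 can be sketched in Python. The helper names `iou` and `split_samples`, the `(x, y, w, h)` box format and the 0.3 overlap threshold are illustrative assumptions, not details given in the text:

```python
def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    ix = max(0, min(ax2, bx2) - max(a[0], b[0]))
    iy = max(0, min(ay2, by2) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def split_samples(target_box, candidate_boxes, neg_max_iou=0.3):
    """Label candidate boxes of the current frame: boxes overlapping the
    target count as positives, the rest as negatives -- one plausible
    reading of 'candidate regions other than the correct target region
    can also be used as negative samples'."""
    pos = [b for b in candidate_boxes if iou(b, target_box) >= neg_max_iou]
    neg = [b for b in candidate_boxes if iou(b, target_box) < neg_max_iou]
    return pos, neg
```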
S42. Detect the edge information of the next frame image; some candidate regions of possible target positions are obtained from the edge information of the next frame.
As one implementation of the target tracking method based on target candidates in this embodiment, as shown in Fig. 3, step S42 of detecting the edge information of the next frame image includes:
S421. Obtain the boundary responses of the edge information of the next frame image. A structured edge detector is used to obtain the boundary response of each pixel in the image, which yields a dense boundary response map.
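As an illustrative stand-in for the structured edge detector named above (which this sketch does not implement), a dense per-pixel boundary response can be approximated in Python with Sobel-style gradient magnitude; this is an assumption, not the patent's detector:

```python
import numpy as np

def boundary_response(gray):
    """Dense per-pixel boundary response for a grayscale image,
    approximated by Sobel gradient magnitude (stand-in for the
    structured edge detector of step S421)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(gray.astype(float), 1, mode="edge")
    h, w = gray.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 3, j:j + 3]  # 3x3 neighbourhood of (i, j)
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)  # gradient magnitude = boundary response
```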
S422. Filter out non-maximal boundary responses to determine the edge peak map. Edge peaks are found by non-maximum suppression, which yields a sparse edge map, namely the edge peak map.
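A minimal Python sketch of this suppression step, under the simplifying assumption that each pixel is compared only with its four axis-aligned neighbours rather than along the true gradient direction:

```python
import numpy as np

def edge_peaks(response, thresh=1e-3):
    """Suppress non-maximal responses to obtain a sparse edge-peak map.
    A pixel survives only if it is above thresh and a local maximum
    along at least one axis (simplified non-maximum suppression)."""
    h, w = response.shape
    peaks = np.zeros((h, w), dtype=bool)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            v = response[i, j]
            if v < thresh:
                continue
            if (v >= response[i, j - 1] and v >= response[i, j + 1]) or \
               (v >= response[i - 1, j] and v >= response[i + 1, j]):
                peaks[i, j] = True  # local max along a row or a column
    return peaks
```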
S423. Group the information of the edge peak map: the sparse edge map above is divided into straight boundaries, curved boundaries and disconnected boundaries. In determining the edge peak map, straight boundaries have high affinity, while curved or disconnected boundaries have low affinity.
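The straight-high/curved-low affinity rule can be illustrated with an Edge Boxes-style formula (an assumption; the patent does not give a formula): for two edge groups with mean orientations theta_i and theta_j, joined along direction theta_ij, the affinity is near 1 when the three angles agree (a straight boundary) and drops toward 0 as they diverge (a curved boundary), with disconnected groups scored 0:

```python
import math

def group_affinity(theta_i, theta_j, theta_ij, gamma=2.0, connected=True):
    """Affinity between two edge groups (Edge Boxes-style sketch).
    Collinear (straight) groups score near 1; curved boundaries score
    lower; disconnected groups get zero affinity."""
    if not connected:
        return 0.0
    a = abs(math.cos(theta_i - theta_ij) * math.cos(theta_j - theta_ij))
    return a ** gamma
```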
S43. Detect, in the current frame image, the edge information where the tracked-target region coincides with the image edge information. Next, a sliding-window approach is used to detect, within the current frame image region, the object candidate regions that have more edge information coinciding with the multiple object candidate regions, so as to obtain the specific object candidate regions.
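A Python sketch of the sliding-window scoring described above; the window size, the stride, and the use of a raw edge-pixel count as the score are assumptions for illustration:

```python
import numpy as np

def best_window(edge_map, win_h, win_w, stride=1):
    """Slide a window over a binary edge map and return the window
    (top, left, score) containing the most edge pixels -- a stand-in
    for scoring candidate regions by coinciding edge information."""
    h, w = edge_map.shape
    best = (0, 0, -1)
    for top in range(0, h - win_h + 1, stride):
        for left in range(0, w - win_w + 1, stride):
            score = int(edge_map[top:top + win_h, left:left + win_w].sum())
            if score > best[2]:
                best = (top, left, score)
    return best
```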
S5. Calculate the similarity between the tracked-target region and each object candidate region, i.e. the similarity between the tracked-target region obtained in the current frame image and the multiple object candidate regions. This can also be understood as calculating the similarity between the patch of the previous frame and the target candidate patches of the next frame, and choosing the object candidate region with the highest similarity score as the target position tracked in the current frame.
As one implementation of the tracking method in this embodiment, as shown in Fig. 4, step S5 of calculating the similarity between the tracked-target region and each object candidate region includes:
S51. Obtain the bounding rectangle containing each object candidate region. Among the object candidate regions obtained with more coinciding edge information, the maximal bounding rectangle containing each object candidate region (the rectangle patch) is obtained.
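A minimal Python helper for the bounding rectangle of a candidate-region mask; the `(x, y, w, h)` return convention is an assumption:

```python
import numpy as np

def bounding_rect(mask):
    """Minimal axis-aligned rectangle enclosing all True pixels of a
    candidate-region mask, returned as (x, y, w, h); None if empty."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    x0, x1 = xs.min(), xs.max()
    y0, y1 = ys.min(), ys.max()
    return (int(x0), int(y0), int(x1 - x0 + 1), int(y1 - y0 + 1))
```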
S52. Input the tracked-target region and the bounding rectangles into a comparison neural network model. This comparison neural network model mainly computes the similarity between the tracked-target region and the object candidate regions in one pass while sharing convolutional layers; the convolutional layers are mainly used to map the bounding rectangles of the object candidate regions to the convolutional features at the corresponding positions. A pyramid pooling layer then converts convolutional features of differing dimensions into fully-connected inputs of a consistent dimension; this pooling layer mainly unifies the dimensions of the differing convolutional features, thereby reducing the features output by the convolutional layers. Fully-connected layers then connect all nodes of the previous layer; these layers mainly integrate the convolutional features. Finally, the decision network uses softmax to calculate the similarity between the tracked-target region in the current frame image and each candidate region.
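The text gives no layer sizes or weights, so the comparison network itself cannot be reproduced here; the following Python sketch illustrates only the two dimension-critical pieces it describes: a pyramid pooling that maps convolutional features of any spatial size to a fixed-length vector, and a softmax decision over candidate scores. All names and the pyramid levels are assumptions:

```python
import numpy as np

def spatial_pyramid_pool(feat, levels=(1, 2)):
    """Max-pool a C x H x W feature map over a 1x1 and a 2x2 grid,
    giving a fixed-length vector regardless of H and W -- the role of
    the pyramid pooling layer in the described comparison network."""
    c, h, w = feat.shape
    out = []
    for n in levels:
        for i in range(n):
            for j in range(n):
                # integer cell bounds, clamped to at least one pixel
                y0, y1 = (i * h) // n, max((i + 1) * h // n, i * h // n + 1)
                x0, x1 = (j * w) // n, max((j + 1) * w // n, j * w // n + 1)
                out.append(feat[:, y0:y1, x0:x1].max(axis=(1, 2)))
    return np.concatenate(out)

def softmax(scores):
    """Softmax decision over the candidate-region scores."""
    z = scores - scores.max()  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()
```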
S53. Obtain the object candidate region with the maximum score. After the calculation in step S52, the object candidate region with the maximum score is taken as the final tracking result.
S6. Determine the target tracking region among the multiple object candidate regions according to the similarity. For example, when tracking a certain vehicle on a highway, with all vehicles acquired as positive samples and pedestrians as negative samples, the similarity of the tracked vehicle is obtained; the region where all vehicles highly similar to the tracked one are located is then the target tracking region to focus on.
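Steps S1 to S6 can be summarised as a per-frame loop. `propose` and `similarity` below stand for the proposal generator of S4 and the comparison network of S5; treating them as callables is an assumption made for this sketch:

```python
def track(frames, init_box, propose, similarity):
    """Per-frame tracking loop following steps S1-S6: take the
    tracked-target region from the current frame, generate candidate
    regions in the next frame, and keep the most similar candidate."""
    box = init_box              # S1/S2: target region in the current frame
    result = [box]
    for frame in frames[1:]:    # S3: next frame image
        candidates = propose(frame)                        # S4: candidates
        scores = [similarity(box, c) for c in candidates]  # S5: similarity
        box = candidates[scores.index(max(scores))]        # S6/S53: argmax
        result.append(box)
    return result
```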
Embodiment 2
This embodiment provides a target tracking device based on target candidates, corresponding to the target tracking method based on target candidates in Embodiment 1. As shown in Fig. 5, it includes:
a first determining unit 41 for determining a current frame image containing the tracked target;
a first acquiring unit 42 for obtaining the tracked-target region in the current frame image;
a second acquiring unit 43 for obtaining the next frame image after the current frame;
a third acquiring unit 44 for obtaining multiple object candidate regions in the next frame image;
a computing unit 45 for calculating the similarity between the tracked-target region and each object candidate region;
a second determining unit 46 for determining the target tracking region among the multiple object candidate regions according to the similarity.
As one implementation of the target tracking device based on target candidates in this embodiment, as shown in Fig. 6, the third acquiring unit 44 includes:
a first acquiring module 441 for obtaining positive and negative samples of the current frame image;
a first detecting module 442 for detecting the edge information of the next frame image;
a second detecting module 443 for detecting, in the current frame image, the edge information where the tracked-target region coincides with the image edge information.
As one implementation, as shown in Fig. 7, the first detecting module 442 includes:
a first acquiring submodule 4421 for obtaining the boundary responses of the edge information of the next frame image;
a determining submodule 4422 for filtering out non-maximal boundary responses to determine the edge peak map;
an executing submodule 4423 for grouping the information of the edge peak map.
As one implementation of the target tracking device based on target candidates in this embodiment, in the first acquiring submodule 4421, straight boundaries of the edge peak map have high affinity, while curved or disconnected boundaries have low affinity.
As one implementation, as shown in Fig. 8, the computing unit 45 includes:
a second acquiring module 451 for obtaining the bounding rectangle containing each object candidate region;
an input module 452 for inputting the tracked-target region and the bounding rectangles into the comparison neural network model;
a third acquiring module 453 for obtaining the object candidate region with the maximum score.
Obviously, the above embodiments are only examples given for clarity of illustration and are not a limitation on the implementations. Those of ordinary skill in the art can make changes or variations in other forms on the basis of the above description. It is neither necessary nor possible to exhaust all implementations here, and the obvious changes or variations derived therefrom remain within the scope of protection of the present invention.
Claims (10)
1. A target tracking method based on target candidates, characterised by comprising the following steps:
determining a current frame image containing the tracked target;
obtaining the tracked-target region in the current frame image;
obtaining the next frame image after the current frame;
obtaining multiple object candidate regions in the next frame image;
calculating the similarity between the tracked-target region and each object candidate region;
determining the target tracking region among the multiple object candidate regions according to the similarity.
2. The method according to claim 1, characterised in that obtaining multiple object candidate regions in the next frame image includes:
obtaining positive samples and negative samples of the current frame image;
detecting the edge information of the next frame image;
detecting, in the current frame image, the edge information where the tracked-target region coincides with the image edge information.
3. The method according to claim 2, characterised in that detecting the edge information of the next frame image includes:
obtaining the boundary responses of the edge information of the next frame image;
filtering out non-maximal boundary responses to determine an edge peak map;
grouping the information of the edge peak map.
4. The method according to claim 3, characterised in that, in determining the edge peak map, straight boundaries have high affinity, while curved or disconnected boundaries have low affinity.
5. The method according to claim 1, characterised in that calculating the similarity between the tracked-target region and each object candidate region includes:
obtaining the bounding rectangle containing each object candidate region;
inputting the tracked-target region and the bounding rectangles into a comparison neural network model;
obtaining the object candidate region with the maximum score.
6. A target tracking device based on target candidates, characterised by including:
a first determining unit for determining a current frame image containing the tracked target;
a first acquiring unit for obtaining the tracked-target region in the current frame image;
a second acquiring unit for obtaining the next frame image after the current frame;
a third acquiring unit for obtaining multiple object candidate regions in the next frame image;
a computing unit for calculating the similarity between the tracked-target region and each object candidate region;
a second determining unit for determining the target tracking region among the multiple object candidate regions according to the similarity.
7. The device according to claim 6, characterised in that the third acquiring unit includes:
a first acquiring module for obtaining positive and negative samples of the current frame image;
a first detecting module for detecting the edge information of the next frame image;
a second detecting module for detecting, in the current frame image, the edge information where the tracked-target region coincides with the image edge information.
8. The device according to claim 7, characterised in that the first detecting module includes:
a first acquiring submodule for obtaining the boundary responses of the edge information of the next frame image;
a determining submodule for filtering out non-maximal boundary responses to determine the edge peak map;
an executing module for grouping the information of the edge peak map.
9. The device according to claim 8, characterised in that, in the first acquiring submodule, straight boundaries of the edge peak map have high affinity, while curved or disconnected boundaries have low affinity.
10. The device according to claim 6, characterised in that the computing unit includes:
a second acquiring module for obtaining the bounding rectangle containing each object candidate region;
an input module for inputting the tracked-target region and the bounding rectangles into the comparison neural network model;
a third acquiring module for obtaining the object candidate region with the maximum score.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710038722.XA CN106909885A (en) | 2017-01-19 | 2017-01-19 | Target tracking method and device based on target candidates
Publications (1)
Publication Number | Publication Date |
---|---|
CN106909885A true CN106909885A (en) | 2017-06-30 |
Family
ID=59207273
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710038722.XA Pending CN106909885A (en) | Target tracking method and device based on target candidates | 2017-01-19 | 2017-01-19
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106909885A (en) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108113750A (en) * | 2017-12-18 | 2018-06-05 | 中国科学院深圳先进技术研究院 | Flexibility operation instrument tracking method, apparatus, equipment and storage medium |
CN108491816A (en) * | 2018-03-30 | 2018-09-04 | 百度在线网络技术(北京)有限公司 | The method and apparatus for carrying out target following in video |
CN108596957A (en) * | 2018-04-26 | 2018-09-28 | 北京小米移动软件有限公司 | Object tracking methods and device |
CN108960213A (en) * | 2018-08-16 | 2018-12-07 | Oppo广东移动通信有限公司 | Method for tracking target, device, storage medium and terminal |
CN110147768A (en) * | 2019-05-22 | 2019-08-20 | 云南大学 | A kind of method for tracking target and device |
CN110222632A (en) * | 2019-06-04 | 2019-09-10 | 哈尔滨工程大学 | A kind of waterborne target detection method of gray prediction auxiliary area suggestion |
CN110648327A (en) * | 2019-09-29 | 2020-01-03 | 无锡祥生医疗科技股份有限公司 | Method and equipment for automatically tracking ultrasonic image video based on artificial intelligence |
CN111242973A (en) * | 2020-01-06 | 2020-06-05 | 上海商汤临港智能科技有限公司 | Target tracking method and device, electronic equipment and storage medium |
CN111612822A (en) * | 2020-05-21 | 2020-09-01 | 广州海格通信集团股份有限公司 | Object tracking method and device, computer equipment and storage medium |
CN112070036A (en) * | 2020-09-11 | 2020-12-11 | 联通物联网有限责任公司 | Target detection method and device based on multi-frame pictures and storage medium |
CN112505683A (en) * | 2020-09-25 | 2021-03-16 | 扬州船用电子仪器研究所(中国船舶重工集团公司第七二三研究所) | Radar and electronic chart information fusion detection method |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101290681A (en) * | 2008-05-26 | 2008-10-22 | 华为技术有限公司 | Video frequency object tracking method, device and automatic video frequency following system |
CN104200236A (en) * | 2014-08-22 | 2014-12-10 | 浙江生辉照明有限公司 | Quick target detection method based on DPM (deformable part model) |
CN104200210A (en) * | 2014-08-12 | 2014-12-10 | 合肥工业大学 | License plate character segmentation method based on parts |
CN105224947A (en) * | 2014-06-06 | 2016-01-06 | 株式会社理光 | Sorter training method and system |
CN105260997A (en) * | 2015-09-22 | 2016-01-20 | 北京好运到信息科技有限公司 | Method for automatically obtaining target image |
CN105632186A (en) * | 2016-03-11 | 2016-06-01 | 博康智能信息技术有限公司 | Method and device for detecting vehicle queue jumping behavior |
CN105678338A (en) * | 2016-01-13 | 2016-06-15 | 华南农业大学 | Target tracking method based on local feature learning |
CN105933678A (en) * | 2016-07-01 | 2016-09-07 | 湖南源信光电科技有限公司 | Multi-focal length lens linkage imaging device based on multi-target intelligent tracking |
- 2017-01-19: application CN201710038722.XA filed in China; published as CN106909885A; status: Pending
Non-Patent Citations (3)
Title |
---|
NAM H et al.: "Learning Multi-Domain Convolutional Neural Networks for Visual Tracking", Computer Science * |
ZITNICK C L et al.: "Edge Boxes: Locating object proposals from edges", Computer Vision – ECCV 2014 * |
KONG Jun et al.: "A Compressive Sensing Target Tracking Algorithm for Difference-of-Gaussian Maps", Journal of Infrared and Millimeter Waves * |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108113750A (en) * | 2017-12-18 | 2018-06-05 | 中国科学院深圳先进技术研究院 | Flexibility operation instrument tracking method, apparatus, equipment and storage medium |
CN108491816A (en) * | 2018-03-30 | 2018-09-04 | 百度在线网络技术(北京)有限公司 | The method and apparatus for carrying out target following in video |
CN108596957A (en) * | 2018-04-26 | 2018-09-28 | 北京小米移动软件有限公司 | Object tracking methods and device |
CN108596957B (en) * | 2018-04-26 | 2022-07-22 | 北京小米移动软件有限公司 | Object tracking method and device |
CN108960213A (en) * | 2018-08-16 | 2018-12-07 | Oppo广东移动通信有限公司 | Method for tracking target, device, storage medium and terminal |
CN110147768A (en) * | 2019-05-22 | 2019-08-20 | 云南大学 | A kind of method for tracking target and device |
CN110147768B (en) * | 2019-05-22 | 2021-05-28 | 云南大学 | Target tracking method and device |
CN110222632A (en) * | 2019-06-04 | 2019-09-10 | 哈尔滨工程大学 | A kind of waterborne target detection method of gray prediction auxiliary area suggestion |
CN110648327A (en) * | 2019-09-29 | 2020-01-03 | 无锡祥生医疗科技股份有限公司 | Method and equipment for automatically tracking ultrasonic image video based on artificial intelligence |
CN110648327B (en) * | 2019-09-29 | 2022-06-28 | 无锡祥生医疗科技股份有限公司 | Automatic ultrasonic image video tracking method and equipment based on artificial intelligence |
CN111242973A (en) * | 2020-01-06 | 2020-06-05 | 上海商汤临港智能科技有限公司 | Target tracking method and device, electronic equipment and storage medium |
CN111612822A (en) * | 2020-05-21 | 2020-09-01 | 广州海格通信集团股份有限公司 | Object tracking method and device, computer equipment and storage medium |
CN111612822B (en) * | 2020-05-21 | 2024-03-15 | 广州海格通信集团股份有限公司 | Object tracking method, device, computer equipment and storage medium |
CN112070036A (en) * | 2020-09-11 | 2020-12-11 | 联通物联网有限责任公司 | Target detection method and device based on multi-frame pictures and storage medium |
CN112505683A (en) * | 2020-09-25 | 2021-03-16 | 扬州船用电子仪器研究所(中国船舶重工集团公司第七二三研究所) | Radar and electronic chart information fusion detection method |
CN112505683B (en) * | 2020-09-25 | 2024-05-03 | 扬州船用电子仪器研究所(中国船舶重工集团公司第七二三研究所) | Radar and electronic chart information fusion detection method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106909885A (en) | A kind of method for tracking target and device based on target candidate | |
CN111460926B (en) | Video pedestrian detection method fusing multi-target tracking clues | |
CN106920248A (en) | A kind of method for tracking target and device | |
US11288818B2 (en) | Methods, systems, and computer readable media for estimation of optical flow, depth, and egomotion using neural network trained using event-based learning | |
CN106920247A (en) | A kind of method for tracking target and device based on comparison network | |
CN109059895B (en) | Multi-mode indoor distance measurement and positioning method based on mobile phone camera and sensor | |
CN101633356B (en) | System and method for detecting pedestrians | |
TWI332453B (en) | The asynchronous photography automobile-detecting apparatus and method thereof | |
WO2019129255A1 (en) | Target tracking method and device | |
CN102750527A (en) | Long-time stable human face detection and tracking method in bank scene and long-time stable human face detection and tracking device in bank scene | |
CN109579868A (en) | The outer object localization method of vehicle, device and automobile | |
Bertozzi et al. | IR pedestrian detection for advanced driver assistance systems | |
CN108960115A (en) | Multi-direction Method for text detection based on angle point | |
CN111967396A (en) | Processing method, device and equipment for obstacle detection and storage medium | |
CN107862713A (en) | Video camera deflection for poll meeting-place detects method for early warning and module in real time | |
CN104376323B (en) | A kind of method and device for determining target range | |
JP2011513876A (en) | Method and system for characterizing the motion of an object | |
CN113591722B (en) | Target person following control method and system for mobile robot | |
JP6798609B2 (en) | Video analysis device, video analysis method and program | |
CN109344685A (en) | A kind of wisdom pallet and its intelligent positioning method for tracing | |
CN112364793A (en) | Target detection and fusion method based on long-focus and short-focus multi-camera vehicle environment | |
CN108460724A (en) | The Adaptive image fusion method and system differentiated based on mahalanobis distance | |
CN109544594A (en) | Target tracking method and system under multiple nonlinear distorted lenses | |
US20230186506A1 (en) | Object Detection Device and Object Detection Method | |
CN104182993B (en) | Target tracking method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20170630 |