CN108182388A - An image-based moving target tracking method - Google Patents

An image-based moving target tracking method

Info

Publication number
CN108182388A
CN108182388A
Authority
CN
China
Prior art keywords
image
feature
networks
network
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711337556.XA
Other languages
Chinese (zh)
Inventor
马立勇
马城宽
谢玮
孙明健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Institute of Technology Weihai
Original Assignee
Harbin Institute of Technology Weihai
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Institute of Technology Weihai filed Critical Harbin Institute of Technology Weihai
Priority to CN201711337556.XA priority Critical patent/CN108182388A/en
Publication of CN108182388A publication Critical patent/CN108182388A/en

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/262Analysis of motion using transform domain methods, e.g. Fourier domain methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection

Abstract

The present invention provides an image-based moving target tracking method, addressing the problem that existing tracking methods based on convolutional neural networks fail when the size or scale of the moving target changes. The invention applies deep learning: features are extracted from the image with a pre-trained network, and the target point is tracked with a correlation filter. The invention uses a C3D network to exploit temporal correlation information and improve tracking accuracy, and combines a DenseNet with a feature pyramid network to provide multi-scale tracking capability, thereby achieving accurate tracking when the size or scale of the moving target changes. The invention can be widely applied to image-based moving target tracking.

Description

An image-based moving target tracking method
Technical field
The present invention relates to an image-based moving target tracking method.
Background art
With the falling cost of image sensors and the development of information processing technology, image-based moving target tracking has been widely applied, playing an increasingly important role in fields such as security surveillance, autonomous driving, environmental monitoring, field investigation and observation, and human-computer interaction.
Since target tracking is the foundation of higher-level image-based vision tasks, such as pedestrian or vehicle flow statistics and abnormal behavior detection, it has developed rapidly in recent years. Traditional target tracking methods mainly comprise generative target appearance modeling methods and discriminative target appearance modeling methods. These methods mostly use hand-crafted features and shallow classifier structures, and suffer from strong dependence on the feature selection approach, strong dependence on the dataset, poor adaptability of the learned model, and low recognition accuracy. Following the success of deep learning in fields such as speech recognition and object classification, deep learning methods based on convolutional neural networks (CNNs) have also been applied to target tracking. Compared with traditional tracking methods, they extract features automatically, require no manual feature selection, and achieve higher tracking accuracy. However, when the size or scale of the moving target changes, CNN-based deep learning methods fail and tracking is lost.
Summary of the invention
To address the problem that existing CNN-based tracking methods fail when the size or scale of the moving target changes, the present invention provides an image-based moving target tracking method. The method applies deep learning: features are extracted from the image with a pre-trained network, and the target point is tracked with a correlation filter. The advantages of the invention are as follows. First, transfer learning is performed with a pre-trained deep learning network: the trained model parameters are transferred to a new model to assist training, so the new task does not require a very large training dataset, and excellent learning performance is obtained by appropriately fine-tuning a model trained on a similar task. Second, the C3D network overcomes the inability of two-dimensional convolutional neural networks to extract temporal-domain features, making full use of temporal correlation information to improve tracking accuracy. Third, a network structure combining DenseNet with a feature pyramid network (FPN) eliminates the randomness and uncertainty of manually selecting features from a deep learning network, while the pyramid features provide multi-scale tracking capability, achieving accurate tracking when the size or scale of the moving target changes.
The present invention performs target tracking based on a correlation filter. The main idea is as follows: first, a region of interest to be tracked is specified on the original image of the first frame, a search region of suitable size is determined centered on that region, and a label is created for the target position. Then transfer learning is performed with a pre-trained convolutional neural network to extract the features of the search region, and a correlation filter is trained using the features and the label as training samples. In the next frame the image is filtered with the correlation filter to obtain a correlation response map, and the coordinates of its maximum give the new target position. The new position is then taken as the region of interest for tracking, the above steps are repeated and the filter is updated to obtain the new target position in the following frame, and so on until the last frame.
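The loop described above (learn a filter on one frame, locate the response peak in the next) can be sketched as a minimal single-channel toy, with a random texture standing in for the CNN features; this is an illustrative reconstruction, not the patent's implementation, and all function names are ours:

```python
import numpy as np

def gaussian_label(M, N, sigma=2.0):
    # Desired response: a Gaussian peak at the origin, wrapped circularly.
    dm = np.minimum(np.arange(M), M - np.arange(M))[:, None]
    dn = np.minimum(np.arange(N), N - np.arange(N))[None, :]
    return np.exp(-(dm**2 + dn**2) / (2 * sigma**2))

def train_filter(x, y, lam=1e-4):
    # Learn the correlation filter in the frequency domain (single channel).
    X, Y = np.fft.fft2(x), np.fft.fft2(y)
    return (Y * np.conj(X)) / (X * np.conj(X) + lam)

def response(W, z):
    # Correlate a new search region z with the learned filter.
    return np.real(np.fft.ifft2(np.fft.fft2(z) * W))

# Frame 1: a random texture; frame 2: the same patch shifted by (3, 5).
rng = np.random.default_rng(0)
x1 = rng.standard_normal((64, 64))
x2 = np.roll(x1, (3, 5), axis=(0, 1))

W = train_filter(x1, gaussian_label(64, 64))
dy, dx = np.unravel_index(np.argmax(response(W, x2)), (64, 64))
print(dy, dx)  # → 3 5 (the target's displacement between frames)
```

The peak of the response map moves exactly with the target, which is why the argmax of the correlation map gives the new position in the next frame.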
Convolutional neural networks are typically applied to single images using two-dimensional convolution, which cannot effectively extract feature information in the temporal dimension; yet temporal correlation is very important for accurate image-based tracking in practical applications. The present invention therefore introduces the C3D network [2], which extracts the temporal and spatial features of the video simultaneously through 3D convolution kernels. These 3D feature extractors operate in both the spatial and temporal dimensions and can therefore effectively capture the motion information of the video stream.
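As a toy illustration of why a 3D kernel can capture motion that a 2D kernel cannot (this sketch is ours, not the C3D implementation of [2]), a temporal-difference kernel applied over a two-frame stack responds only where intensity changes over time:

```python
import numpy as np

def conv3d_valid(vol, k):
    # Naive 'valid' 3D cross-correlation over (time, height, width).
    T, H, W = vol.shape
    t, h, w = k.shape
    out = np.zeros((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for l in range(out.shape[2]):
                out[i, j, l] = np.sum(vol[i:i+t, j:j+h, l:l+w] * k)
    return out

# Two frames: a pixel turns on between frame 0 and frame 1 (motion).
vol = np.zeros((2, 5, 5))
vol[1, 2, 2] = 1.0

# Temporal-difference kernel: zero on static regions, nonzero on change.
k = np.zeros((2, 1, 1))
k[0, 0, 0], k[1, 0, 0] = -1.0, 1.0

resp = conv3d_valid(vol, k)
print(resp.shape)     # (1, 5, 5): time collapsed, space preserved
print(resp[0, 2, 2])  # 1.0 at the moving pixel, 0.0 everywhere else
```

A purely spatial 2D kernel applied frame by frame would see two nearly identical frames and could not produce this change-localized response.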
The tracking target can be localized with the convolutional features of a convolutional neural network, and convolutional layers of different depths have different characteristics for tracking: high-level features are good at distinguishing objects of different categories and are very robust to target deformation and occlusion, but poor at discriminating between objects within a class. Which convolutional layer's features are selected therefore has a considerable influence on tracking performance. To solve this problem, the present invention introduces the DenseNet structure [3], which, to maximize information flow between all layers of the network, connects every pair of layers so that each layer receives the features of all preceding layers as input. This structure has two properties. First, it mitigates gradient vanishing during training: the gradient near the input layer does not shrink as network depth increases, so low-level features are also reflected in the high layers, which resolves the uncertainty of choosing which layer's convolutional features to select. Second, because a large number of features are reused, a small number of convolution kernels can generate many features, and the final model is smaller.
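The dense connectivity pattern can be sketched as follows (a linear stand-in for DenseNet's conv+ReLU layers; the layer count, widths, and `growth` parameter here are illustrative assumptions, not values from [3]):

```python
import numpy as np

def dense_block(x, num_layers=3, growth=4, rng=None):
    # Each layer receives the concatenation of ALL previous feature maps
    # and emits `growth` new channels (a 1x1 "conv" + ReLU stand-in).
    rng = rng or np.random.default_rng(0)
    features = [x]  # x: (H, W, C0)
    for _ in range(num_layers):
        inp = np.concatenate(features, axis=-1)
        w = rng.standard_normal((inp.shape[-1], growth)) * 0.1
        features.append(np.maximum(inp @ w, 0.0))
    # The block's output also keeps every intermediate feature map.
    return np.concatenate(features, axis=-1)

x = np.zeros((8, 8, 16))
out = dense_block(x)
print(out.shape)  # (8, 8, 28): 16 input channels + 3 layers * 4 new each
```

Because every layer's output is carried forward, early (low-level) features remain directly visible at the end of the block, which is the property the text relies on.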
In target detection, under a limited computation budget, the depth of the network and the size of the detectable target typically conflict: a common network structure generally handles targets above a certain size, say 32 pixels on a side, and small objects below that size suffer a sharp drop in detection performance. To solve this problem, the present invention introduces the feature pyramid network (FPN) [4]. FPN modifies the original network directly: the feature map at each resolution is added element-wise to the feature map of the next (coarser) resolution upsampled by a factor of two. Through such connections, the prediction feature map at each level fuses features of different resolutions and different semantic strengths, and the fused feature map at each resolution performs object detection for objects of the corresponding size. This guarantees that every level has both suitable resolution and strong semantics. Moreover, since the method only adds extra lateral connections to the original network, in practice it adds almost no extra computation time or cost.
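The top-down merge just described can be sketched with plain arrays (nearest-neighbour upsampling, and a random matrix standing in for the 1x1 lateral convolution; the shapes and output width `d` are illustrative, not the parameters of [4]):

```python
import numpy as np

def upsample2(x):
    # Nearest-neighbour 2x upsampling of an (H, W, C) feature map.
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def fpn_merge(c_maps, d=8, rng=None):
    # c_maps: backbone feature maps ordered fine to coarse resolution.
    # Each pyramid level = 1x1 lateral projection + upsampled coarser level.
    rng = rng or np.random.default_rng(0)
    laterals = []
    for c in c_maps:
        w = rng.standard_normal((c.shape[-1], d)) * 0.1  # 1x1 conv stand-in
        laterals.append(c @ w)
    p = laterals[-1]            # start from the coarsest, most semantic level
    pyramid = [p]
    for lat in reversed(laterals[:-1]):
        p = lat + upsample2(p)  # element-wise add of lateral + top-down path
        pyramid.append(p)
    return pyramid[::-1]        # fine to coarse, one output per resolution

c2 = np.zeros((32, 32, 16)); c3 = np.zeros((16, 16, 32)); c4 = np.zeros((8, 8, 64))
for p in fpn_merge([c2, c3, c4]):
    print(p.shape)  # (32, 32, 8), (16, 16, 8), (8, 8, 8)
```

Every output level keeps its own spatial resolution but inherits semantics from the coarser levels through the additions, which is what lets small objects be detected at the fine levels.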
An image-based moving target tracking method, characterized in that the method comprises the following steps:
1) A region of interest to be tracked is specified on the current frame t of the image, and a search region is determined centered on that region. Centered on the initial target position (x_{t-1}, y_{t-1}), convolutional features are extracted using spatial interpolation. Denote the feature map by h; the feature vector of the image x obtained by interpolated upsampling at layer l has size M × N × D. At position i, with interpolation weights α_{ik} of the feature map at k, the feature vector at the i-th position is
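The image for formula (1) did not survive extraction. Given the symbols defined above (feature map h, interpolation weights α_{ik}), the interpolation form used in [1], which this step presumably follows, is:

```latex
x_i = \sum_{k} \alpha_{ik}\, h(k) \qquad (1)
```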
2) Let the image have D channels. A correlation filter w of size M × N × D is obtained by learning; the learned frequency-domain filter of the d-th channel is
where 1 ≤ d ≤ D, t is the frame number, W is the Fourier transform of w, X is the Fourier transform of x, ⊙ denotes the Hadamard product, X̄ denotes the complex conjugate of X, Y denotes the Fourier transform of the Gaussian label function, and λ is a regularization parameter. With F^{-1} denoting the inverse Fourier transform, the correlation response map f_l of size M × N at layer l is computed as follows:
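The images for formulas (2) and (3) did not survive extraction. With the symbols just defined, the standard hierarchical-correlation-filter formulation of [1], which the text presumably follows, reads:

```latex
W^{d} = \frac{Y \odot \bar{X}^{d}}{\sum_{i=1}^{D} X^{i} \odot \bar{X}^{i} + \lambda} \qquad (2)
```

```latex
f_{l} = F^{-1}\!\left(\sum_{d=1}^{D} W^{d} \odot \bar{X}^{d}\right) \qquad (3)
```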
The deep learning network used to extract the convolutional features above consists of a C3D network, a DenseNet and a feature pyramid network, connected as follows: the C3D network is directly connected to the DenseNet, and the feature pyramid network is applied directly on top of the DenseNet. For each layer of the deep learning network, f_l is computed according to formula (3).
3) First, the maximum position is searched on the correlation response map f_l of the shallowest convolutional layer of the feature pyramid network for frame t, and a sub-region centered on that point is cut out. On that sub-region, the maximum position is searched again on the response map f_l of the intermediate convolutional layer of the feature pyramid network and a smaller sub-region is cut out. Finally, on that sub-region, the maximum position (x_t, y_t) of the response map f_l of the deepest convolutional layer of the feature pyramid network is determined as the target position traced in frame t;
4) Denote the numerator and the denominator of formula (2) separately; t is the frame number and η is the learning rate. The correlation filter w is updated according to the following computation:
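The image for the update formula (4) did not survive extraction. Writing A_t^d for the numerator and B_t for the denominator of formula (2), as the text indicates, the running-average update of [1] presumably reads:

```latex
A_{t}^{d} = (1-\eta)\,A_{t-1}^{d} + \eta\, Y \odot \bar{X}_{t}^{d}
```

```latex
B_{t} = (1-\eta)\,B_{t-1} + \eta \sum_{i=1}^{D} X_{t}^{i} \odot \bar{X}_{t}^{i}
```

```latex
W_{t}^{d} = \frac{A_{t}^{d}}{B_{t} + \lambda} \qquad (4)
```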
The position (x_t, y_t) then serves as the initial target position (x_{t-1}, y_{t-1}) for frame t+1. Taking frame t+1 as frame t, the above steps are repeated, with the updated numerator and denominator used in computing the correlation response map f_l of formula (3), until the target position has been determined in the last frame of the image.
The present invention uses correlation filter technology and transfer learning for moving target tracking; combined with the C3D network, DenseNet and feature pyramid network, it can accurately track the target when the scale and size of the target change. Specific implementation examples and advantages are further described below with reference to the accompanying drawings.
Description of the drawings
Fig. 1 Feature pyramid network
Specific embodiment
A specific embodiment of the image-based moving target tracking method is described below with reference to the accompanying drawings:
In this implementation example, λ = 10^{-4}, η = 0.45, and the kernel width of the Gaussian label function is set to 0.1. The C3D implementation follows the method in [5], the DenseNet implementation follows the method in [6], and the feature pyramid network uses the scheme and parameters of document [4], applied directly on the DenseNet in the network structure of the present invention. The method is implemented in a Python environment on an Ubuntu system; transfer learning uses the VGG-Net-19 model pre-trained on ImageNet.
The method of the present invention was compared with the method of document [1] on the Singer2 sequence of the OTB50 test dataset: the method of [1] deviates severely while tracking the target, whereas the method of the present invention tracks the target accurately. It can be seen that by using correlation filter technology and transfer learning for moving target tracking, combined with the C3D network, DenseNet and feature pyramid network, the present invention achieves accurate tracking under scale and size variation.
Bibliography
[1] Chao Ma, Patras I. Hierarchical Convolutional Features for Visual Tracking. Proceedings of the IEEE International Conference on Computer Vision (ICCV), IEEE Publisher, 2016: 322-337.
[2] Du T, Bourdev L, Fergus R, Torresani L. Learning Spatiotemporal Features with 3D Convolutional Networks. Proceedings of the IEEE International Conference on Computer Vision (ICCV), IEEE Publisher, 2015: 4489-4497.
[3] Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Weinberger. Densely Connected Convolutional Networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017: 2261-2269.
[4] Lin TY, Dollar P, Girshick R, He K, Hariharan B. Feature Pyramid Networks for Object Detection. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE Publisher, 2017: 936-944.
[5] https://github.com/facebook/C3D
[6] https://github.com/liuzhuang13/DenseNet

Claims (1)

1. An image-based moving target tracking method, characterized in that the method comprises the following steps:
1) A region of interest to be tracked is specified on the current frame t of the image, and a search region is determined centered on that region. Centered on the initial target position (x_{t-1}, y_{t-1}), convolutional features are extracted using spatial interpolation. Denote the feature map by h; the feature vector of the image x obtained by interpolated upsampling at layer l has size M × N × D. At position i, with interpolation weights α_{ik} of the feature map at k, the feature vector at the i-th position is
2) Let the image have D channels. A correlation filter w of size M × N × D is obtained by learning; the learned frequency-domain filter of the d-th channel is
where 1 ≤ d ≤ D, t is the frame number, W is the Fourier transform of w, X is the Fourier transform of x, ⊙ denotes the Hadamard product, X̄ denotes the complex conjugate of X, Y denotes the Fourier transform of the Gaussian label function, and λ is a regularization parameter. With F^{-1} denoting the inverse Fourier transform, the correlation response map f_l of size M × N at layer l is computed as follows:
The deep learning network used to extract the convolutional features above consists of a C3D network, a DenseNet and a feature pyramid network, connected as follows: the C3D network is directly connected to the DenseNet, and the feature pyramid network is applied directly on top of the DenseNet; for each layer of the deep learning network, f_l is computed according to formula (3).
3) First, the maximum position is searched on the correlation response map f_l of the shallowest convolutional layer of the feature pyramid network for frame t, and a sub-region centered on that point is cut out; on that sub-region, the maximum position is searched again on the response map f_l of the intermediate convolutional layer and a smaller sub-region is cut out; finally, on that sub-region, the maximum position (x_t, y_t) of the response map f_l of the deepest convolutional layer is determined as the target position traced in frame t;
4) Denote the numerator and the denominator of formula (2) separately; t is the frame number and η is the learning rate. The correlation filter w is updated according to the following computation:
The position (x_t, y_t) then serves as the initial target position (x_{t-1}, y_{t-1}) for frame t+1; taking frame t+1 as frame t, the above steps are repeated, with the updated numerator and denominator used in computing the correlation response map f_l of formula (3), until the target position has been determined in the last frame of the image.
CN201711337556.XA 2017-12-14 2017-12-14 A kind of motion target tracking method based on image Pending CN108182388A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711337556.XA CN108182388A (en) 2017-12-14 2017-12-14 A kind of motion target tracking method based on image


Publications (1)

Publication Number Publication Date
CN108182388A true CN108182388A (en) 2018-06-19

Family

ID=62545942

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711337556.XA Pending CN108182388A (en) 2017-12-14 2017-12-14 A kind of motion target tracking method based on image

Country Status (1)

Country Link
CN (1) CN108182388A (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105184779A (en) * 2015-08-26 2015-12-23 电子科技大学 Rapid-feature-pyramid-based multi-dimensioned tracking method of vehicle
CN107016689A (en) * 2017-02-04 2017-08-04 中国人民解放军理工大学 A kind of correlation filtering of dimension self-adaption liquidates method for tracking target
CN107154024A (en) * 2017-05-19 2017-09-12 南京理工大学 Dimension self-adaption method for tracking target based on depth characteristic core correlation filter
CN107369166A (en) * 2017-07-13 2017-11-21 深圳大学 A kind of method for tracking target and system based on multiresolution neutral net

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ALI DIBA 等: "Temporal 3D ConvNets: New Architecture and Transfer Learning for Video Classification", 《COMPUTER SCIENCE》 *
CHAO MA 等: "Hierarchical Convolutional Features for Visual Tracking", 《2015 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV)》 *

Cited By (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108820233A (en) * 2018-07-05 2018-11-16 西京学院 A kind of fixed-wing unmanned aerial vehicle vision feels land bootstrap technique
CN109005409A (en) * 2018-07-27 2018-12-14 浙江工业大学 A kind of intelligent video coding method based on object detecting and tracking
CN109005409B (en) * 2018-07-27 2021-04-09 浙江工业大学 Intelligent video coding method based on target detection and tracking
CN109272530A (en) * 2018-08-08 2019-01-25 北京航空航天大学 Method for tracking target and device towards space base monitoring scene
CN109272530B (en) * 2018-08-08 2020-07-21 北京航空航天大学 Target tracking method and device for space-based monitoring scene
US10719940B2 (en) 2018-08-08 2020-07-21 Beihang University Target tracking method and device oriented to airborne-based monitoring scenarios
CN109214505A (en) * 2018-08-29 2019-01-15 中山大学 A kind of full convolution object detection method of intensive connection convolutional neural networks
CN109035779A (en) * 2018-08-30 2018-12-18 南京邮电大学 Freeway traffic flow prediction technique based on DenseNet
CN109035779B (en) * 2018-08-30 2021-01-19 南京邮电大学 DenseNet-based expressway traffic flow prediction method
CN109615640A (en) * 2018-11-19 2019-04-12 北京陌上花科技有限公司 Correlation filtering method for tracking target and device
CN109934236A (en) * 2019-01-24 2019-06-25 杰创智能科技股份有限公司 A kind of multiple dimensioned switch target detection algorithm based on deep learning
CN109934846A (en) * 2019-03-18 2019-06-25 南京信息工程大学 Deep integrating method for tracking target based on time and spatial network
CN109934846B (en) * 2019-03-18 2023-06-06 南京信息工程大学 Depth integrated target tracking method based on time and space network
CN110084124A (en) * 2019-03-28 2019-08-02 北京大学 Feature based on feature pyramid network enhances object detection method
CN110163887A (en) * 2019-05-07 2019-08-23 国网江西省电力有限公司检修分公司 The video target tracking method combined with foreground segmentation is estimated based on sport interpolation
CN110163887B (en) * 2019-05-07 2023-10-20 国网江西省电力有限公司检修分公司 Video target tracking method based on combination of motion interpolation estimation and foreground segmentation
CN110188753A (en) * 2019-05-21 2019-08-30 北京以萨技术股份有限公司 One kind being based on dense connection convolutional neural networks target tracking algorism
CN110378288A (en) * 2019-07-19 2019-10-25 合肥工业大学 A kind of multistage spatiotemporal motion object detection method based on deep learning
CN110378288B (en) * 2019-07-19 2021-03-26 合肥工业大学 Deep learning-based multi-stage space-time moving target detection method
CN110738231A (en) * 2019-07-25 2020-01-31 太原理工大学 Method for classifying mammary gland X-ray images by improving S-DNet neural network model
CN110728214A (en) * 2019-09-26 2020-01-24 中国科学院大学 Weak and small figure target detection method based on scale matching
CN110689559A (en) * 2019-09-30 2020-01-14 长安大学 Visual target tracking method based on dense convolutional network characteristics
CN110689559B (en) * 2019-09-30 2022-08-12 长安大学 Visual target tracking method based on dense convolutional network characteristics
CN111046964A (en) * 2019-12-18 2020-04-21 电子科技大学 Convolutional neural network-based human and vehicle infrared thermal image identification method
CN111383249A (en) * 2020-03-02 2020-07-07 西安理工大学 Target tracking method based on multi-region layer convolution characteristics
CN111383249B (en) * 2020-03-02 2023-02-28 西安理工大学 Target tracking method based on multi-region layer convolution characteristics
CN112084931B (en) * 2020-09-04 2022-04-15 厦门大学 DenseNet-based leukemia cell microscopic image classification method and system
CN112084931A (en) * 2020-09-04 2020-12-15 厦门大学 DenseNet-based leukemia cell microscopic image classification method and system
CN112016535A (en) * 2020-10-26 2020-12-01 成都合能创越软件有限公司 Vehicle-mounted garbage traceability method and system based on edge calculation and block chain
CN112801068A (en) * 2021-04-14 2021-05-14 广东众聚人工智能科技有限公司 Video multi-target tracking and segmenting system and method
CN113947144A (en) * 2021-10-15 2022-01-18 北京百度网讯科技有限公司 Method, apparatus, device, medium and program product for object detection
US11620815B2 (en) 2021-10-15 2023-04-04 Beijing Baidu Netcom Science Technology Co., Ltd. Method and device for detecting an object in an image

Similar Documents

Publication Publication Date Title
CN108182388A (en) A kind of motion target tracking method based on image
CN110738697B (en) Monocular depth estimation method based on deep learning
CN108830285B (en) Target detection method for reinforcement learning based on fast-RCNN
CN110555433B (en) Image processing method, device, electronic equipment and computer readable storage medium
CN106952288B (en) Based on convolution feature and global search detect it is long when block robust tracking method
CN108053419A (en) Inhibited and the jamproof multiscale target tracking of prospect based on background
CN105160310A (en) 3D (three-dimensional) convolutional neural network based human body behavior recognition method
CN108171112A (en) Vehicle identification and tracking based on convolutional neural networks
CN107154024A (en) Dimension self-adaption method for tracking target based on depth characteristic core correlation filter
CN111476159B (en) Method and device for training and detecting detection model based on double-angle regression
CN106570893A (en) Rapid stable visual tracking method based on correlation filtering
CN107423760A (en) Based on pre-segmentation and the deep learning object detection method returned
CN106548169B (en) Fuzzy literal Enhancement Method and device based on deep neural network
CN110084836A (en) Method for tracking target based on the response fusion of depth convolution Dividing Characteristics
CN108647694A (en) Correlation filtering method for tracking target based on context-aware and automated response
CN109035300B (en) Target tracking method based on depth feature and average peak correlation energy
CN107689052A (en) Visual target tracking method based on multi-model fusion and structuring depth characteristic
CN109087337B (en) Long-time target tracking method and system based on hierarchical convolution characteristics
Abdollahi et al. SC-RoadDeepNet: A new shape and connectivity-preserving road extraction deep learning-based network from remote sensing data
CN107452022A (en) A kind of video target tracking method
CN109886356A (en) A kind of target tracking method based on three branch's neural networks
CN110310305A (en) A kind of method for tracking target and device based on BSSD detection and Kalman filtering
CN112634369A (en) Space and or graph model generation method and device, electronic equipment and storage medium
Bhattacharya et al. Interleaved deep artifacts-aware attention mechanism for concrete structural defect classification
CN112634368A (en) Method and device for generating space and OR graph model of scene target and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20180619