CN117437261A - Tracking method suitable for edge-end remote target

Tracking method suitable for edge-end remote target

Info

Publication number
CN117437261A
Authority
CN
China
Prior art keywords
tracking, target, matching, feature, information
Prior art date
2023-10-08
Legal status
Granted
Application number
CN202311291891.6A
Other languages
Chinese (zh)
Other versions
CN117437261B (en)
Inventor
Sang Minghua (桑明华)
Gu Xianjun (顾先军)
Tian Tian (田甜)
Chen Mengxiang (陈梦香)
Current Assignee
Nanjing Weixiang Science and Technology Co., Ltd.
Original Assignee
Nanjing Weixiang Science and Technology Co., Ltd.
Priority date
2023-10-08
Filing date
2023-10-08
Publication date
2024-01-23
Application filed by Nanjing Weixiang Science and Technology Co., Ltd.
Priority to CN202311291891.6A
Publication of CN117437261A: 2024-01-23
Application granted: publication of CN117437261B, 2024-05-31
Legal status: Active


Classifications

    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/254 Analysis of motion involving subtraction of images
    • G06T7/277 Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G06N3/082 Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V10/806 Fusion of extracted features at the sensor, preprocessing, feature extraction or classification level
    • G06V10/82 Image or video recognition or understanding using neural networks
    • G06V2201/07 Target detection


Abstract

The invention discloses a tracking method suitable for remote targets at the edge end, belonging to the technical field of image processing. The method performs target detection with a Nanodet+ algorithm and tracks targets with a StrongSORT+-based target tracking algorithm, solving the technical problem of improving the detection rate of remote targets: the features of the target are enhanced and background noise is suppressed; real-time performance at the edge end is guaranteed; the detection rate of distant targets lacking texture information is greatly improved; and the real-time problem of edge-end inference of the target tracking algorithm is solved while tracking accuracy is maintained.

Description

Tracking method suitable for edge-end remote target
Technical Field
The invention belongs to the technical field of image processing and relates to a tracking method suitable for remote targets at the edge end.
Background
Security systems are widely applied in many fields of society, and their preventive capability plays a key role in protecting national and collective property and personal safety. Traditional security systems rely on means such as manual patrol and video monitoring, which demand large labor costs while suffering from low supervision efficiency and poor preventive capability. Real-time monitoring of camera feeds through deep learning can improve the autonomous analysis capability of a security system, quickly identify, judge, and report suspicious information in the scene, reduce the missed and false alarms caused by subjective human factors, and enable real-time investigation of potential safety hazards.
However, the target detection networks used in common security monitoring cannot effectively mitigate the loss of feature information of remote targets during downsampling, and the resulting missed detection of remote targets causes problems such as a limited monitoring range and a short early-warning reaction time.
Disclosure of Invention
The invention aims to provide a tracking method suitable for remote targets at the edge end, which solves the technical problem of improving the detection rate of remote targets.
In order to achieve the above purpose, the invention adopts the following technical scheme:
a tracking method suitable for an edge-end remote target comprises the following steps:
Step 1: a target detection module acquires video data collected by a camera, performs target detection with the Nanodet+ algorithm on the current frame image in the video data, and detects candidate target information in the image;
in the Nanodet+ algorithm, an MLC module is newly added to the Backbone part of the Nanodet target detection algorithm to process the shallower feature maps, and the maximum local contrast across different scales is selected at each position; in the Neck part, the feature balancing of Libra R-CNN is introduced on top of PAN feature fusion to obtain the balanced semantic features of the feature maps generated by the PAN, ensuring the importance of shallow feature information in the fusion process;
Step 2: a target tracking module acquires the candidate target information and feeds the coordinate frame information of each target together with the current frame image into a StrongSORT+-based target tracking algorithm to predict the coordinate frame of the same target in the next frame of the continuous sequence.
Preferably, in executing step 1, the Nanodet+ algorithm comprises the following steps:
Step 1-1: an MLC module is newly added in the Backbone part of the Nanodet target detection algorithm; the MLC module performs multi-scale local contrast measurement on the shallower feature maps and selects the maximum local contrast across different scales at each position;
Step 1-2: the feature balancing of Libra R-CNN is introduced in the Neck part of the Nanodet target detection algorithm on top of PAN feature fusion; the balanced semantic features of the feature maps generated by the PAN feature fusion are obtained, feature integration is enhanced through a Gaussian non-local attention module, and finally the top-down feature fusion of the PAN is performed.
Preferably, in executing step 2, the StrongSORT+-based target tracking algorithm specifically comprises the following steps:
Step 2-1: after the candidate target information is obtained, the image region corresponding to each coordinate frame is cropped from the frame image and sent to a feature extraction module to obtain the feature map corresponding to each target;
Step 2-2: the type of the current frame image is judged: if it is the initial frame, all detected target objects of the current frame image are taken as unconfirmed-state tracking objects, track initialization is completed with Kalman filtering, the tracking objects are added to the tracking queue, and step 2-3 is executed; otherwise, step 2-4 is executed;
Step 2-3: one round of Kalman-filtering track prediction is performed using the motion tracks stored by the tracking objects in the tracking queue; the prediction frame information of the motion tracks is used for IoU matching against the target frame information of the current frame, and the successfully matched tracking objects are output as the tracking result;
Step 2-4: for all candidate targets in the current frame image, the cosine distance between their feature maps and those of the confirmed-state tracking objects is calculated; the optimal matching combination is obtained by minimum-cost matching, and the successful matching combinations, the leftover unmatched confirmed-state tracking objects, and the leftover unmatched detection targets are output;
Step 2-5: IoU matching is performed between the leftover unmatched confirmed-state tracking objects plus the unconfirmed-state tracking objects on one side and the leftover unmatched detection targets on the other; the minimum-cost matching over the IoU of prediction frames and detection frames is computed to obtain the optimal matching combination, and the leftover unmatched tracking objects, the leftover unmatched detection targets, and the successful matching combinations are output;
Step 2-6: for the leftover unmatched tracking objects, according to their occlusion status, the confirmed tracking objects that have not exceeded the maximum tracking count are kept and their tracks are updated with Kalman filtering, while the remaining tracking objects are discarded from the tracking queue;
for the leftover unmatched detection targets, they are taken as unconfirmed-state tracking objects, track initialization is completed with Kalman filtering, and they are added to the tracking queue;
for the successfully matched tracking objects, the tracking information is updated with the feature map and coordinate information of the corresponding current-frame detection target, and track updating is completed through Kalman filtering using the current-frame detection frame information;
Step 2-7: after the successfully matched tracking objects complete the update of the tracking information, the updated prediction frame information and category information are output as the tracking result of the current frame.
Preferably, in executing step 1-2, specifically: after the PAN generates a series of feature maps from bottom to top, the multi-level feature maps are resized to an intermediate size by interpolation and max pooling and then weighted-averaged to obtain the balanced semantic features.
The invention discloses a tracking method suitable for remote targets at the edge end, which solves the technical problem of improving the detection rate of remote targets. Based on the lightweight Nanodet target detection algorithm, an MLC module is added to compute multi-scale contrast on the shallower feature maps, enhancing the features of the target and suppressing background noise. The PAN feature fusion part introduces the feature weighted averaging of the Libra R-CNN feature balancing idea together with a Gaussian non-local attention module to ensure shallow-feature fusion and feature integration, greatly improving the detection rate of distant targets lacking texture information while guaranteeing real-time performance at the edge end.
Drawings
FIG. 1 is a block diagram of the main algorithm flow of the present invention;
FIG. 2 is a schematic view of the MLC module according to the present invention;
FIG. 3 is a schematic diagram of the feature balancing module introduced into the PAN according to the present invention;
FIG. 4 is the flow of the StrongSORT+ target tracking algorithm of the present invention.
Detailed Description
In this embodiment, both the Nanodet algorithm and the StrongSORT target tracking algorithm are improved. As shown in figs. 1-4, the tracking method for edge-end remote targets comprises the following steps:
Step 1: a target detection module acquires video data collected by a camera, performs target detection with the Nanodet+ algorithm on the current frame image in the video data, and detects candidate target information in the image;
in the Nanodet+ algorithm, an MLC module is newly added to the Backbone part of the Nanodet target detection algorithm to process the shallower feature maps, and the maximum local contrast across different scales is selected at each position; in the Neck part, the feature balancing of Libra R-CNN is introduced on top of PAN feature fusion to obtain the balanced semantic features of the feature maps generated by the PAN, ensuring the importance of shallow feature information in the fusion process;
The Nanodet+ algorithm includes the following steps:
Step 1-1: an MLC module is newly added in the Backbone part of the Nanodet target detection algorithm; the MLC module performs multi-scale local contrast measurement on the shallower feature maps and selects the maximum local contrast across different scales at each position;
In this embodiment, the local contrast is calculated using the dilated-convolution idea: the dilation rate controls the size of the local contrast computation window, the window is divided into 9 blocks, and the LCM is computed sequentially as the window slides, dividing the maximum pixel value of the central sub-window by the average pixel value of the eight neighboring windows around it to obtain the contrast map.
The LCM is calculated as follows:
C_i = L_i / M_i
where C_i represents the likelihood that the central region is the target at scale i, L_i represents the maximum pixel value of the central sub-window, and M_i represents the average pixel value of the surrounding neighborhood windows; the final response at each position is the maximum of C_i over the different scales.
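For concreteness, a minimal NumPy sketch of this multi-scale local contrast computation is given below; the scale set `cells`, the stabilizing epsilon, and the function names are illustrative assumptions rather than details taken from the patent, and the patent's dilated windows are approximated here by growing the sub-window size:

```python
import numpy as np

def local_contrast_map(img, cell):
    """Single-scale LCM: slide a 3x3 grid of (cell x cell) sub-windows over
    a single-channel map and, at each position, divide the max pixel of the
    central sub-window by the mean of its eight neighbours (C = L / M)."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.float32)
    for y in range(cell, h - 2 * cell + 1):
        for x in range(cell, w - 2 * cell + 1):
            L = img[y:y + cell, x:x + cell].max()
            neigh = [img[y + dy:y + dy + cell, x + dx:x + dx + cell].mean()
                     for dy in (-cell, 0, cell) for dx in (-cell, 0, cell)
                     if (dy, dx) != (0, 0)]
            out[y, x] = L / (np.mean(neigh) + 1e-6)  # epsilon avoids /0
    return out

def mlc(feature_map, cells=(1, 3, 5)):
    """Multi-scale local contrast: per-pixel maximum over the scales,
    matching 'select the maximum local contrast across different scales'."""
    f = feature_map.astype(np.float32)
    return np.maximum.reduce([local_contrast_map(f, c) for c in cells])
```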
Step 1-2: the Neck part of the Nanodet target detection algorithm introduces the feature balancing of Libra R-CNN on top of PAN feature fusion, obtains the balanced semantic features of the feature maps generated by the PAN feature fusion, enhances feature integration through a Gaussian non-local attention module, and finally performs the top-down feature fusion of the PAN.
In this embodiment, the balanced semantic features of the feature maps generated by the PAN feature fusion are obtained as follows: after the PAN generates a series of feature maps from bottom to top, the multi-level feature maps are resized to an intermediate size by interpolation and max pooling and then weighted-averaged to obtain the balanced semantic features; feature integration is enhanced by the Gaussian non-local attention module, and finally the top-down feature fusion of the PAN is performed, ensuring the importance of shallow feature information in the fusion process. A sketch of this balancing step follows.
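Below is a hedged PyTorch sketch of the balancing step; equal fusion weights, nearest-neighbor resizing, and the parameter-free embedded-Gaussian non-local block are simplifying assumptions (a trained module would normally use learned projections), with Libra R-CNN's balanced feature pyramid as the reference idea:

```python
import torch
import torch.nn.functional as F

def non_local_gaussian(x):
    """Parameter-free embedded-Gaussian non-local attention over all spatial
    positions; the (N x N) attention matrix is only cheap on small maps."""
    b, c, h, w = x.shape
    flat = x.reshape(b, c, h * w)                              # (B, C, N)
    attn = torch.softmax(flat.transpose(1, 2) @ flat, dim=-1)  # (B, N, N)
    y = (flat @ attn.transpose(1, 2)).reshape(b, c, h, w)
    return x + y                                               # residual add

def balanced_semantic_features(feats):
    """feats: PAN feature maps ordered shallow (large) -> deep (small), all
    with the same channel count; returns one refined map per input level."""
    h, w = feats[len(feats) // 2].shape[-2:]   # intermediate target size
    resized = [F.adaptive_max_pool2d(f, (h, w)) if f.shape[-1] > w  # pool down
               else F.interpolate(f, size=(h, w), mode='nearest')  # upsample
               for f in feats]
    balanced = torch.stack(resized).mean(dim=0)  # equal-weight average
    balanced = non_local_gaussian(balanced)      # enhance feature integration
    # redistribute the balanced map back onto every level, residually
    return [f + F.interpolate(balanced, size=f.shape[-2:], mode='nearest')
            for f in feats]
```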
Step 2: a target tracking module acquires the candidate target information and feeds the coordinate frame information of each target together with the current frame image into a StrongSORT+-based target tracking algorithm to predict the coordinate frame of the same target in the next frame of the continuous sequence.
The StrongSORT+-based target tracking algorithm specifically comprises the following steps:
Step 2-1: after the candidate target information is obtained, the image region corresponding to each coordinate frame is cropped from the frame image and sent to a feature extraction module to obtain the feature map corresponding to each target;
Step 2-2: the type of the current frame image is judged: if it is the initial frame, all detected target objects of the current frame image are taken as unconfirmed-state tracking objects, track initialization is completed with Kalman filtering, the tracking objects are added to the tracking queue, and step 2-3 is executed; otherwise, step 2-4 is executed;
Step 2-3: one round of Kalman-filtering track prediction is performed using the motion tracks stored by the tracking objects in the tracking queue; the prediction frame information of the motion tracks is used for IoU matching against the target frame information of the current frame, and the successfully matched tracking objects are output as the tracking result;
Step 2-4: for all candidate targets in the current frame image, the cosine distance between their feature maps and those of the confirmed-state tracking objects is calculated; the optimal matching combination is obtained by minimum-cost matching, and the successful matching combinations, the leftover unmatched confirmed-state tracking objects, and the leftover unmatched detection targets are output;
Step 2-5: IoU matching is performed between the leftover unmatched confirmed-state tracking objects plus the unconfirmed-state tracking objects on one side and the leftover unmatched detection targets on the other; the minimum-cost matching over the IoU of prediction frames and detection frames is computed to obtain the optimal matching combination, and the leftover unmatched tracking objects, the leftover unmatched detection targets, and the successful matching combinations are output;
The tracking objects are divided into two groups, confirmed-state and unconfirmed-state. Detection targets and confirmed-state tracking objects are matched many-to-many using the feature information they carry: a successful match pairs each confirmed-state tracking object with one detection target, while a failed match leaves some detection targets and confirmed-state tracking objects over. The confirmed-state tracking objects therefore first pass through the feature-map matching; the confirmed-state objects left over from that matching, together with the unconfirmed-state tracking objects that have no features to match, form the tracking-object input of the IoU stage, where the leftover detection targets are matched using the frame information they carry, as sketched after step 2-7 below.
Step 2-6: for the leftover unmatched tracking objects, according to their occlusion status, the confirmed tracking objects that have not exceeded the maximum tracking count are kept and their tracks are updated with Kalman filtering, while the remaining tracking objects are discarded from the tracking queue;
for the leftover unmatched detection targets, they are taken as unconfirmed-state tracking objects, track initialization is completed with Kalman filtering, and they are added to the tracking queue;
for the successfully matched tracking objects, the tracking information is updated with the feature map and coordinate information of the corresponding current-frame detection target, and track updating is completed through Kalman filtering using the current-frame detection frame information;
Step 2-7: after the successfully matched tracking objects complete the update of the tracking information, the updated prediction frame information and category information are output as the tracking result of the current frame.
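A minimal sketch of the two matching stages (the cosine-distance matching of step 2-4 and the IoU matching of step 2-5), assuming SciPy's Hungarian solver; the gating threshold `max_dist` and the function names are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def cosine_cost(track_feats, det_feats):
    """Cosine-distance cost matrix: tracking-object features (rows) versus
    current-frame detection features (columns)."""
    t = track_feats / np.linalg.norm(track_feats, axis=1, keepdims=True)
    d = det_feats / np.linalg.norm(det_feats, axis=1, keepdims=True)
    return 1.0 - t @ d.T

def iou_cost(tb, db):
    """(1 - IoU) cost matrix for boxes given as (x1, y1, x2, y2)."""
    tb, db = np.asarray(tb, float), np.asarray(db, float)
    x1 = np.maximum(tb[:, None, 0], db[None, :, 0])
    y1 = np.maximum(tb[:, None, 1], db[None, :, 1])
    x2 = np.minimum(tb[:, None, 2], db[None, :, 2])
    y2 = np.minimum(tb[:, None, 3], db[None, :, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_t = (tb[:, 2] - tb[:, 0]) * (tb[:, 3] - tb[:, 1])
    area_d = (db[:, 2] - db[:, 0]) * (db[:, 3] - db[:, 1])
    return 1.0 - inter / (area_t[:, None] + area_d[None, :] - inter + 1e-9)

def match(cost, max_dist=0.4):
    """Minimum-cost assignment; pairs whose cost exceeds the gate stay
    unmatched. Returns (matches, leftover tracks, leftover detections)."""
    rows, cols = linear_sum_assignment(cost)
    un_t, un_d = set(range(cost.shape[0])), set(range(cost.shape[1]))
    matches = []
    for r, c in zip(rows, cols):
        if cost[r, c] <= max_dist:
            matches.append((r, c))
            un_t.discard(r)
            un_d.discard(c)
    return matches, sorted(un_t), sorted(un_d)
```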
The invention first uses Kalman filtering to predict the tracks of the tracking objects, then performs cascade matching and IoU matching, updates the matched tracks, inspects and screens the unmatched tracks, and initializes tracks with Kalman filtering for the unmatched detection targets.
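The patent does not spell out the filter's state layout; SORT-family trackers commonly use a constant-velocity model over the box parameters, so the following minimal sketch is written under that assumption:

```python
import numpy as np

class BoxKalman:
    """Constant-velocity Kalman filter over a 4-number box state plus its
    velocities (an assumed layout; the patent does not specify one)."""
    def __init__(self, box):
        self.x = np.r_[box, np.zeros(4)].astype(float)  # position + velocity
        self.P = np.eye(8) * 10.0        # state covariance
        self.F = np.eye(8)               # transition: position += velocity
        self.F[:4, 4:] = np.eye(4)
        self.H = np.eye(4, 8)            # we observe the box only
        self.Q = np.eye(8) * 1e-2        # process noise
        self.R = np.eye(4) * 1e-1        # measurement noise

    def predict(self):
        """Track prediction step; returns the predicted box."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:4]

    def update(self, box):
        """Track update step with the matched detection box."""
        S = self.H @ self.P @ self.H.T + self.R       # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)      # Kalman gain
        self.x = self.x + K @ (np.asarray(box, float) - self.H @ self.x)
        self.P = (np.eye(8) - K @ self.H) @ self.P
```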
Because remote targets lack detail features, complex feature extraction by convolution is unnecessary; the feature extraction module in StrongSORT is therefore replaced with a traditional feature extraction approach that emphasizes the external contour features of the remote target, and the Mahalanobis-distance computation between track prediction frames and detection target frames is removed, avoiding matching time that would otherwise greatly affect real-time performance at the edge end.
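The patent does not name the traditional descriptor it substitutes; log-scaled Hu moments of the dominant external contour are one classical choice consistent with the emphasis on outer-contour features, so the sketch below uses them as an assumption:

```python
import cv2
import numpy as np

def contour_feature(crop):
    """Traditional contour descriptor for a BGR crop of a detected target:
    log-scaled Hu moments of the largest external contour (an illustrative
    choice; the patent does not name its descriptor)."""
    gray = cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return np.zeros(7, dtype=np.float32)
    hu = cv2.HuMoments(cv2.moments(max(contours, key=cv2.contourArea))).ravel()
    # log scaling makes the moments comparable across target sizes
    return (-np.sign(hu) * np.log10(np.abs(hu) + 1e-12)).astype(np.float32)
```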
As shown in fig. 4, in this embodiment, the specific flow of the StrongSORT+ target tracking algorithm is as follows:
Step S1: acquire a frame image;
Step S2: extract the in-frame (cropped target) images from the frame image;
Step S3: generate the contour feature maps;
Step S4: judge whether the frame is the initial frame (in this embodiment, judged by the frame number): if so, execute step S5; if not, execute step S6;
Step S5: initialize the tracking objects and place them into the tracking queue;
Step S6: confirm the tracking objects from the tracking queue, specifically by means of Kalman-filtering prediction;
perform contour feature matching between the confirmed tracking objects and the candidate targets in the current frame: if the matching succeeds, execute step S7; if not, execute step S8;
Step S7: complete the track update of the tracking queue through Kalman filtering;
Step S8: perform IoU matching between the leftover confirmed-state tracking objects from the matching result plus the unconfirmed-state tracking objects obtained by the Kalman-filtering prediction of step S6, and the leftover detection targets: if the matching succeeds, execute step S9; if not, execute step S10;
Step S9: as in step S7, complete the track update of the tracking queue through Kalman filtering;
Step S10: the matching result leaves leftover detection targets (i.e., detection targets that were not successfully matched) and leftover tracking objects (i.e., tracking objects that were not successfully matched):
for the leftover detection targets: execute step S5;
for the leftover tracking objects: according to the occlusion status of each tracking object, keep the confirmed tracking objects that have not exceeded the maximum tracking count max_age and update their tracks using Kalman filtering, and discard the remaining tracking objects from the tracking queue. A skeleton of this per-frame loop is sketched below.
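Tying steps S1 to S10 together, the skeleton below shows one per-frame iteration; `detect` stands in for the Nanodet+ detector plus contour feature extraction, `Track` is assumed to wrap `BoxKalman` from the sketch above with `confirmed`/`misses` bookkeeping, and `appearance_match`/`iou_match` are assumed wrappers around the cost functions and `match` routine sketched earlier (none of these names come from the patent):

```python
def track_frame(frame, tracks, frame_idx, max_age=30):
    """One per-frame iteration of the S1-S10 flow described above."""
    dets = detect(frame)                  # S1-S3: boxes + contour features
    if frame_idx == 0:                    # S4 -> S5: initial frame
        return [Track(d) for d in dets]   # initialise tracks via Kalman

    for t in tracks:                      # S6: Kalman track prediction
        t.predict()
    confirmed = [t for t in tracks if t.confirmed]
    unconfirmed = [t for t in tracks if not t.confirmed]

    # S6 -> S7: contour-feature (cosine) matching against confirmed tracks
    m1, left_tracks, left_dets = appearance_match(confirmed, dets)
    # S8 -> S9: IoU matching for the leftovers plus unconfirmed tracks
    m2, lost, new_dets = iou_match(left_tracks + unconfirmed, left_dets)

    for t, d in m1 + m2:                  # S7/S9: Kalman track update
        t.update(d)
        t.misses = 0
    for t in lost:                        # S10: leftover tracking objects
        t.misses += 1
    kept = [t for t in tracks if t.misses < max_age]
    return kept + [Track(d) for d in new_dets]   # S10 -> S5: new tracks
```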
The invention discloses a tracking method suitable for remote targets at the edge end, which solves the technical problem of improving the detection rate of remote targets. Based on the lightweight Nanodet target detection algorithm, an MLC module is added to compute multi-scale contrast on the shallower feature maps, enhancing the features of the target and suppressing background noise. The PAN feature fusion part introduces the feature weighted averaging of the Libra R-CNN feature balancing idea together with a Gaussian non-local attention module to ensure shallow-feature fusion and feature integration, greatly improving the detection rate of distant targets lacking texture information while guaranteeing real-time performance at the edge end.

Claims (4)

1. A tracking method suitable for a remote target at an edge end, characterized by comprising the following steps:
Step 1: a target detection module acquires video data collected by a camera, performs target detection with the Nanodet+ algorithm on the current frame image in the video data, and detects candidate target information in the image;
in the Nanodet+ algorithm, an MLC module is newly added to the Backbone part of the Nanodet target detection algorithm to process the shallower feature maps, and the maximum local contrast across different scales is selected at each position; in the Neck part, the feature balancing of Libra R-CNN is introduced on top of PAN feature fusion to obtain the balanced semantic features of the feature maps generated by the PAN, ensuring the importance of shallow feature information in the fusion process;
Step 2: a target tracking module acquires the candidate target information and feeds the coordinate frame information of each target together with the current frame image into a StrongSORT+-based target tracking algorithm to predict the coordinate frame of the same target in the next frame of the continuous sequence.
2. The tracking method suitable for a remote target at an edge end according to claim 1, wherein: in executing step 1, the Nanodet+ algorithm comprises the following steps:
Step 1-1: an MLC module is newly added in the Backbone part of the Nanodet target detection algorithm; the MLC module performs multi-scale local contrast measurement on the shallower feature maps and selects the maximum local contrast across different scales at each position;
Step 1-2: the feature balancing of Libra R-CNN is introduced in the Neck part of the Nanodet target detection algorithm on top of PAN feature fusion; the balanced semantic features of the feature maps generated by the PAN feature fusion are obtained, feature integration is enhanced through a Gaussian non-local attention module, and finally the top-down feature fusion of the PAN is performed.
3. The tracking method suitable for a remote target at an edge end according to claim 1, wherein: in executing step 2, the StrongSORT+-based target tracking algorithm specifically comprises the following steps:
Step 2-1: after the candidate target information is obtained, the image region corresponding to each coordinate frame is cropped from the frame image and sent to a feature extraction module to obtain the feature map corresponding to each target;
Step 2-2: the type of the current frame image is judged: if it is the initial frame, all detected target objects of the current frame image are taken as unconfirmed-state tracking objects, track initialization is completed with Kalman filtering, the tracking objects are added to the tracking queue, and step 2-3 is executed; otherwise, step 2-4 is executed;
Step 2-3: one round of Kalman-filtering track prediction is performed using the motion tracks stored by the tracking objects in the tracking queue; the prediction frame information of the motion tracks is used for IoU matching against the target frame information of the current frame, and the successfully matched tracking objects are output as the tracking result;
Step 2-4: for all candidate targets in the current frame image, the cosine distance between their feature maps and those of the confirmed-state tracking objects is calculated; the optimal matching combination is obtained by minimum-cost matching, and the successful matching combinations, the leftover unmatched confirmed-state tracking objects, and the leftover unmatched detection targets are output;
Step 2-5: IoU matching is performed between the leftover unmatched confirmed-state tracking objects plus the unconfirmed-state tracking objects on one side and the leftover unmatched detection targets on the other; the minimum-cost matching over the IoU of prediction frames and detection frames is computed to obtain the optimal matching combination, and the leftover unmatched tracking objects, the leftover unmatched detection targets, and the successful matching combinations are output;
Step 2-6: for the leftover unmatched tracking objects, according to their occlusion status, the confirmed tracking objects that have not exceeded the maximum tracking count are kept and their tracks are updated with Kalman filtering, while the remaining tracking objects are discarded from the tracking queue;
for the leftover unmatched detection targets, they are taken as unconfirmed-state tracking objects, track initialization is completed with Kalman filtering, and they are added to the tracking queue;
for the successfully matched tracking objects, the tracking information is updated with the feature map and coordinate information of the corresponding current-frame detection target, and track updating is completed through Kalman filtering using the current-frame detection frame information;
Step 2-7: after the successfully matched tracking objects complete the update of the tracking information, the updated prediction frame information and category information are output as the tracking result of the current frame.
4. The tracking method suitable for a remote target at an edge end according to claim 2, wherein: in executing step 1-2, specifically: after the PAN generates a series of feature maps from bottom to top, the multi-level feature maps are resized to an intermediate size by interpolation and max pooling and then weighted-averaged to obtain the balanced semantic features.
CN202311291891.6A, filed 2023-10-08, granted as CN117437261B (Active): Tracking method suitable for edge-end remote target

Priority Applications (1)

Application CN202311291891.6A, filed 2023-10-08, granted as CN117437261B: Tracking method suitable for edge-end remote target

Applications Claiming Priority (1)

Application CN202311291891.6A, filed 2023-10-08, granted as CN117437261B: Tracking method suitable for edge-end remote target

Publications (2)

CN117437261A (application publication): 2024-01-23
CN117437261B (grant publication): 2024-05-31



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220280087A1 (en) * 2021-03-02 2022-09-08 Shenzhen Xiangsuling Intelligent Technology Co., Ltd. Visual Perception-Based Emotion Recognition Method
CN113674328A (en) * 2021-07-14 2021-11-19 南京邮电大学 Multi-target vehicle tracking method
WO2023065395A1 (en) * 2021-10-18 2023-04-27 中车株洲电力机车研究所有限公司 Work vehicle detection and tracking method and system
CN116310482A (en) * 2022-12-01 2023-06-23 航天时代飞鸿技术有限公司 Target detection and recognition method and system based on domestic chip multi-target real-time tracking
CN116823878A (en) * 2023-05-26 2023-09-29 哈尔滨工业大学(威海) Visual multi-target tracking method based on fusion paradigm

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
KUAN-HAO YEH et al.: "An Aerial Crowd-Flow Analyzing System for Drone Under YOLOv5 and StrongSort", IEEE, 31 December 2022 *
WANG Zhiyu: "Research on Multi-Target Tracking Algorithms for Complex Scenes Based on Feature Fusion", Software Guide (软件导刊), no. 04, 26 November 2019 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant