WO2018121286A1 - Target tracking method and device - Google Patents

Target tracking method and device

Info

Publication number
WO2018121286A1
Authority
WO
WIPO (PCT)
Prior art keywords
tracking
target
confidence
tracking target
frame image
Prior art date
Application number
PCT/CN2017/116329
Other languages
French (fr)
Chinese (zh)
Inventor
唐矗
卿明
吴庆
孙晓路
Original Assignee
纳恩博(北京)科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 纳恩博(北京)科技有限公司
Publication of WO2018121286A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G06T7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20212 - Image combination
    • G06T2207/20221 - Image fusion; Image merging

Definitions

  • the present invention relates to the field of video image processing, and in particular to a target tracking method and apparatus.
  • visual tracking technology based on online learning has risen in recent years and has become a hot topic in visual tracking.
  • such methods extract a feature template from the tracking target specified in the initial frame picture, without any prior experience from offline learning, and train a model that is used to track that target in the subsequent video.
  • this type of method requires no offline training and can track any object specified by the user, so it is highly versatile.
  • however, because real application scenarios are complex, the performance of the tracking algorithm degrades severely under obvious illumination changes, complex backgrounds, or interference from similar objects.
  • in addition, such methods detect candidate target regions in every frame of the image; in this way, after a lost target reappears, the chance of re-finding it is small, and more often it cannot be retrieved (because the tracking has been wrong all along) or a wrong target (such as a different person) is found, making it difficult to form a tracking system that is stable over a long time.
  • At least some embodiments of the present invention provide a target tracking method and apparatus, to at least solve the technical problem in the prior art that, when tracking a target in a video image, it is difficult to determine whether the target is lost, or to retrieve the target after it is lost.
  • a target tracking method is provided, including: acquiring a tracking target and a target template corresponding to the tracking target; tracking the tracking target in a video image according to the target template to obtain a tracking result of the current frame image in the video image, where the tracking result includes: the region of the tracking target in the current frame image and the corresponding confidence; determining the tracking state of the tracking target according to the confidence; and, if the tracking state of the tracking target is lost, relocating the tracking target and continuing tracking after the tracking target is relocated.
  • the current frame image is scanned with the target template to obtain the region of the tracking target; information of the tracking target is collected by preset sensors; and the information of the tracking target collected by the preset sensors is fused to obtain the confidence.
  • the depth confidence, tracking confidence, and color confidence of the tracking result in the current frame image are obtained; weight values corresponding to the depth confidence, tracking confidence, and color confidence are set; and the confidence is obtained from the depth confidence, tracking confidence, and color confidence and their corresponding weight values.
  • the depth confidence is obtained by comparing the depth of the region of the tracking target in the current frame image with the average depth of that region in each frame image; the similarity between the region of the tracking target in the current frame image and the target template is determined as the tracking confidence; and the similarity between the color values of that region and a preset color model is determined as the color confidence.
  • a preset confidence threshold is obtained, and the tracking state of the tracking target is determined by comparing the confidence with the confidence threshold, where the tracking state includes: tracking, low confidence, and lost.
  • candidate targets are detected in the new video image by a target detector to determine the candidate target regions, and the tracking target is identified from the candidate target regions.
  • the image region of the tracking target is acquired; feature information is extracted from the image region and a feature model is constructed from the feature information; while it is determined from the tracking state that the tracking target is not lost, the feature model is updated according to the tracking result of the current frame image, until the tracking target is lost; and the candidate target regions are compared with the last updated feature model to identify the tracking target.
  • the preset sensor comprises any one or more of the following: a depth camera, an infrared sensor, and an ultra-wideband positioning sensor.
  • if the tracking target is not lost, the target template is updated according to the tracking result of the current frame image.
  • a target tracking device is provided, including: a first obtaining module configured to acquire a tracking target and a target template corresponding to the tracking target; a second obtaining module configured to track the tracking target in a video image according to the target template to obtain a tracking result of the current frame image in the video image, where the tracking result includes: the region of the tracking target in the current frame image and the corresponding confidence; a determining module configured to determine the tracking state of the tracking target according to the confidence; and a relocation module configured to, if the tracking state of the tracking target is lost, relocate the tracking target and continue tracking after the tracking target is relocated.
  • the second obtaining module includes: a scanning submodule configured to scan the current frame image with the target template to obtain the region of the tracking target; a collecting submodule configured to collect information of the tracking target by preset sensors; and a fusion processing submodule configured to fuse the collected information of the tracking target to obtain the confidence.
  • the fusion processing sub-module includes: a first obtaining unit configured to acquire the depth confidence, tracking confidence, and color confidence of the tracking result in the current frame image; a setting unit configured to set weight values corresponding to the depth confidence, the tracking confidence, and the color confidence; and a second obtaining unit configured to obtain the confidence from the depth confidence, tracking confidence, and color confidence and their corresponding weight values.
  • the first obtaining unit includes: an acquiring subunit configured to obtain the depth confidence by comparing the depth of the region of the tracking target in the current frame image with the average depth of that region in each frame image; a first determining subunit configured to determine the similarity between the region of the tracking target in the current frame image and the target template as the tracking confidence; and a second determining subunit configured to determine the similarity between the color values of that region and a preset color model as the color confidence.
  • the determining module includes: an obtaining submodule configured to obtain a preset confidence threshold; and a determining submodule configured to determine the tracking state of the tracking target by comparing the confidence with the confidence threshold, where the tracking state includes: tracking, low confidence, and lost.
  • the relocation module comprises: a detection submodule configured to detect the candidate target in the new video image by the target detector to determine the candidate target region; and the identification submodule configured to identify the tracking target from the candidate target region.
  • the identifying sub-module includes: a third acquiring unit configured to acquire the image region of the tracking target; a constructing unit configured to extract feature information from the image region of the tracking target and construct a feature model from the feature information; an updating unit configured to update the feature model according to the tracking result of the current frame image while the tracking state indicates that the target is not lost, until the tracking target is lost; and a comparison unit configured to compare the candidate target regions with the last updated feature model to identify the tracking target.
  • if the tracking target is not lost, the target template is updated according to the tracking result of the current frame image.
  • in at least some embodiments, a tracking target and the target template corresponding to it are acquired, the tracking target is tracked in the video image according to the target template to obtain the tracking result of the current frame image, the tracking state of the tracking target is determined according to the confidence, and, if the tracking state is lost, the tracking target is relocated and tracking continues after relocation.
  • the above solution determines the next step by continuously judging the tracking state during tracking, thereby enabling the tracking system to adapt to the various states that appear during long-term tracking, and solves the technical problem in the prior art that, when tracking a target in a video image, it is difficult to judge whether the target is lost, or to retrieve it after it is lost.
  • FIG. 1 is a flow chart of a target tracking method according to an embodiment of the present invention.
  • FIG. 2 is an information interaction diagram of a target tracking method according to an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of a target tracking device in accordance with one embodiment of the present invention.
  • an embodiment of a target tracking method is provided. It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system, such as a set of computer-executable instructions, and, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from the one described here.
  • FIG. 1 is a flow chart of a target tracking method according to an embodiment of the present invention. As shown in FIG. 1, the method includes the following steps:
  • Step S102: Acquire a tracking target and a target template corresponding to the tracking target.
  • the foregoing tracking target may be specified manually or determined by a pedestrian detector in the initial frame image of the video, and the target template may be a shape template of the tracking target.
  • Step S104: Track the tracking target in the video image according to the target template to obtain a tracking result of the current frame image in the video image, where the tracking result includes: the region of the tracking target in the current frame image and the corresponding confidence.
  • the confidence in the tracking result of the current frame image indicates the similarity between the region of the tracking target in the tracking result and the target template; the similarity may be based on a single parameter or on several parameters.
  • Step S106: Determine the tracking state of the tracking target according to the confidence.
  • the tracking state of the tracking target may include, but is not limited to: tracking, low confidence, and lost; the tracking state may be determined by one or more confidence thresholds, or by several confidence ranges.
  • Step S108: If the tracking state of the tracking target is lost, relocate the tracking target and continue tracking after the tracking target is relocated.
  • the foregoing relocation is used to retrieve the initially set tracking target in new video images.
  • relocation requires two steps, target detection and target re-identification: target detection uses a target detector obtained by offline training to detect targets in the video frame and obtain candidate target regions, and target re-identification identifies the tracking target among the candidate regions.
  • target detection is not limited to a specific detector type: it may be a traditional template-plus-classifier sliding-window detector or a deep-learning-based detector. The re-identification method is likewise not limited to a specific method: a simple feature-plus-distance-metric approach may be used, or a deep learning model may be trained offline to obtain the feature extraction and similarity measurement methods.
  • it should also be noted that, when the target is retrieved and target tracking resumes, a larger learning rate can be used in the initial stage to increase the update amplitude of the tracking template, so that stable tracking is quickly re-established.
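To make the learning-rate idea concrete, here is a minimal Python/NumPy sketch of such a template update; the function name and the specific rate values are illustrative assumptions, not values fixed by the patent.

```python
import numpy as np

def update_template(template, new_patch, just_recovered,
                    base_lr=0.02, recovery_lr=0.2):
    """Blend the current template with the newest tracked patch.

    A larger learning rate (recovery_lr, assumed value) is used right
    after the target has been re-acquired, so the template quickly
    re-adapts; afterwards a small rate (base_lr) keeps updates stable.
    """
    lr = recovery_lr if just_recovered else base_lr
    return (1.0 - lr) * template + lr * new_patch
```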
  • it can be seen from the above that the steps of the present application acquire a tracking target and the target template corresponding to it, track the target in the video image according to the template to obtain the tracking result of the current frame image, determine the tracking state of the tracking target according to the confidence, and, if the tracking state is lost, relocate the tracking target and continue tracking after relocation.
  • by continuously judging the tracking state during tracking to determine the next step, the above scheme enables the tracking system to adapt to the various states that appear during long-term tracking, and solves the technical problem in the prior art that, when tracking a target in a video image, it is difficult to judge whether the target is lost, or to retrieve it after it is lost.
  • optionally, according to the above embodiment of the present application, step S104, tracking the tracking target in the video image according to the target template to obtain the tracking result of the current frame image in the video image, includes:
  • Step S1041: Scan the current frame image with the target template to obtain the region of the tracking target.
  • in the above step, the target template is scanned over the current frame image to find the strongest response peak, and the peak region with the strongest response is taken as the region of the tracking target in the current frame image. It should be noted that the target template always yields a strongest-response region, whether or not the current frame actually contains the tracking target; when the current frame image does not contain the tracking target, the strongest-response region is not really the target's region, so the tracking result needs to be judged further after it is obtained.
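The patent does not fix a particular matching operator for this scan; the sketch below uses normalized cross-correlation over a brute-force sliding window as one possibility (real trackers typically use FFT-based correlation filters for speed). Note how a peak is always returned, even if the target has left the frame.

```python
import numpy as np

def scan_frame(frame, template):
    """Slide the template over a grayscale frame; return the position
    and value of the strongest normalized cross-correlation response.

    A peak is always found, even when the target is absent, which is
    why the peak value is later checked against confidence thresholds.
    """
    fh, fw = frame.shape
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-8)
    best_score, best_pos = -np.inf, (0, 0)
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            w = frame[y:y + th, x:x + tw]
            w = (w - w.mean()) / (w.std() + 1e-8)
            score = float((w * t).mean())  # NCC score in roughly [-1, 1]
            if score > best_score:
                best_score, best_pos = score, (x, y)
    return best_pos, best_score
```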
  • Step S1043: Collect information of the tracking target by the preset sensors.
  • the foregoing sensors may include, but are not limited to, a depth camera, an infrared sensor, a UWB sensor, and the like; the preset sensors may be several different sensors or several sensors of the same type. The above step uses different sensors to obtain different kinds of information about the tracking target.
  • Step S1045: Fuse the information of the tracking target collected by the preset sensors to obtain the confidence.
  • in the prior art, the confidence of the tracking result is determined only by the detection result of a single vision sensor (for example, a camera); the above step collects information about the tracking target with multiple preset sensors, which improves the accuracy of the confidence and prevents changes in the environment and lighting from affecting the confidence of the tracking result.
  • optionally, in step S1045, fusing the information of the tracking target collected by the preset sensors to obtain the confidence includes:
  • Step S10451: Acquire the depth confidence, tracking confidence, and color confidence of the tracking result in the current frame image.
  • Step S10453: Set weight values corresponding to the depth confidence, the tracking confidence, and the color confidence.
  • Step S10455: Obtain the confidence from the depth confidence, the tracking confidence, and the color confidence and their corresponding weight values.
  • for example, if the depth confidence, tracking confidence, and color confidence are s_depth, s_tracking, and s_color, and the corresponding weight values are ω1, ω2, and ω3 respectively, the fused confidence can be obtained as the weighted sum s = ω1·s_depth + ω2·s_tracking + ω3·s_color.
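A direct rendering of that weighted fusion in Python; the weight values here are illustrative and would be tuned for the actual sensor setup.

```python
def fuse_confidence(s_depth, s_tracking, s_color, weights=(0.3, 0.4, 0.3)):
    """Weighted fusion of the three normalized confidences (all in [0, 1])."""
    w1, w2, w3 = weights
    return w1 * s_depth + w2 * s_tracking + w3 * s_color
```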
  • optionally, in step S10451, acquiring the depth confidence, tracking confidence, and color confidence of the tracking result in the current frame image includes:
  • Step S10451a: Obtain the depth confidence by comparing the depth of the region of the tracking target in the current frame image with the average depth of that region in each frame image.
  • in the above step, the depth of the tracking target's region can be obtained with a depth camera mounted on the robot. Because the depth of the target does not change greatly across consecutive frames during continuous tracking, the frame-by-frame change in depth is compared to obtain the normalized depth confidence s_depth.
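The patent only requires that the frame-to-frame depth change be normalized into a confidence; the exponential mapping below is one assumed way to do that, not the mandated formula.

```python
import numpy as np

def depth_confidence(current_depth, depth_history, scale=0.5):
    """Normalized depth confidence from depth consistency over frames.

    current_depth: depth (e.g., in meters) of the tracked region in the
    current frame; depth_history: depths of that region in recent frames.
    A large jump from the running average lowers the confidence; the
    exponential mapping and the scale value are illustrative choices.
    """
    avg = float(np.mean(depth_history))
    return float(np.exp(-abs(current_depth - avg) / scale))  # in (0, 1]
```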
  • Step S10451b: Determine the similarity between the region of the tracking target in the current frame image and the target template as the tracking confidence.
  • in each frame, the tracking algorithm uses the target template to search near the region where the target appeared in the previous frame, finds the region most similar to the target template, and normalizes the result to obtain the tracking confidence s_tracking.
  • Step S10451c: Determine the similarity between the color values of the region of the tracking target in the current frame image and the preset color model as the color confidence.
  • the target region detected in the preceding embodiment may be sent to the target-loss determination module (that is, the module configured to determine the tracking state), which computes, according to the preset color model, the normalized similarity s_color between the current target region and the color model.
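The patent leaves the form of the color model open; one common concrete choice, sketched here, is a normalized hue histogram compared with the Bhattacharyya coefficient, which already yields a similarity in [0, 1].

```python
import numpy as np

def color_confidence(region_hsv, color_model, bins=16):
    """Normalized similarity s_color between the tracked region's hue
    histogram and a preset color model (a normalized hue histogram of
    the same bin count).

    Assumes OpenCV-style HSV with hue in [0, 180); the histogram form
    and the Bhattacharyya coefficient are illustrative choices.
    """
    hist, _ = np.histogram(region_hsv[..., 0], bins=bins, range=(0, 180))
    hist = hist / max(hist.sum(), 1)
    return float(np.sum(np.sqrt(hist * color_model)))
```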
  • optionally, in step S106, determining the tracking state of the tracking target according to the confidence includes:
  • Step S1061: Acquire a preset confidence threshold.
  • Step S1063: Determine the tracking state of the tracking target by comparing the confidence with the confidence threshold, where the tracking state includes: tracking, low confidence, and lost.
  • the foregoing preset confidence thresholds may include a first confidence threshold and a second confidence threshold, where the first confidence threshold is greater than the second, so the two thresholds divide the confidence into intervals: a tracking result whose confidence is greater than or equal to the first threshold corresponds to the state tracking; a confidence greater than the second threshold but less than the first corresponds to low confidence; and a confidence less than or equal to the second threshold corresponds to lost.
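The two-threshold rule maps directly onto a small state function; the numeric threshold values and the exact boundary handling below are illustrative assumptions, not values from the patent.

```python
def tracking_state(confidence, first_thr=0.6, second_thr=0.3):
    """Map the fused confidence to one of the three tracking states.

    Requires first_thr > second_thr; both values are assumed here.
    """
    if confidence >= first_thr:
        return "tracking"
    if confidence > second_thr:  # between the two thresholds
        return "low_confidence"
    return "lost"
```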
  • if the tracking state of the tracking result is tracking, the target template is updated with the tracking result and tracking continues; if the tracking state is lost, the tracking target is relocated.
  • optionally, in step S108, relocating the tracking target and continuing tracking after the tracking target is relocated includes:
  • Step S1081: Detect candidate targets in the new video image by the target detector to determine the candidate target regions.
  • the foregoing step performs target detection. The target detector may be a pedestrian detector, which detects pedestrians in the video to obtain the candidate target regions.
  • Step S1083: Identify the tracking target from the candidate target regions.
  • the above steps perform target re-identification, the process of finding and recognizing a pre-selected target across different scenes.
  • one way to re-identify the target is to use paired images of the same target in different scenes together with paired images of different targets, extract specified features such as color histograms from them as feature vectors, and then learn a similarity measure function by metric learning; the similarity measure function is used to compute the similarity between two targets and thus decide whether they are the same target.
  • optionally, according to the above embodiment of the present application, step S1083, identifying the tracking target from the candidate target regions, may further include:
  • Step S10831: Acquire the image region of the tracking target.
  • Step S10833: Extract feature information from the image region of the tracking target and construct a feature model from the feature information.
  • in the above step, the extracted features may be color features, edge features, and the like of the image. Because the tracking target usually moves in the video, tracking by its shape alone is difficult and inaccurate; for the continuous images of a video, however, although the target's shape changes continuously over time, its image features remain largely consistent, so the above step builds the model from the extracted image features.
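As an illustration of such a feature model, the sketch below builds a simple appearance vector from per-channel color histograms plus a coarse edge-magnitude histogram. The patent leaves the concrete features open (color, edge, or learned features), so this is only one lightweight possibility.

```python
import numpy as np

def extract_features(region_rgb, bins=8):
    """Build an appearance feature vector for the tracked region:
    per-channel color histograms plus an edge-magnitude histogram."""
    feats = []
    for c in range(3):  # normalized color histograms
        h, _ = np.histogram(region_rgb[..., c], bins=bins, range=(0, 256))
        feats.append(h / max(h.sum(), 1))
    gray = region_rgb.mean(axis=2)
    gy, gx = np.gradient(gray)            # simple edge responses
    mag = np.hypot(gx, gy)
    h, _ = np.histogram(mag, bins=bins, range=(0, 128))  # coarse fixed range
    feats.append(h / max(h.sum(), 1))
    return np.concatenate(feats)
```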
  • Step S10835: If it is determined from the tracking state that the tracking target is not lost, update the feature model according to the tracking result of the current frame image, until the tracking target is lost.
  • Step S10837: Compare the candidate target regions with the last updated feature model to identify the tracking target.
  • in the above step, when it is determined that the target has been lost in the current frame image, it can be retrieved based on the last updated feature model, that is, the model from the previous frame: since the previous frame was not in the lost state, its feature model was still updated after its tracking result was obtained, so the feature model used to retrieve the tracking target is the most recent one.
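Putting the two pieces together, here is a minimal sketch of the online feature-model update and of matching candidates against the last updated model. The running-average update, the cosine similarity, and the threshold are assumptions; the patent does not fix the similarity measure (it may also be learned by metric learning, as noted above).

```python
import numpy as np

def update_feature_model(model, new_features, lr=0.1):
    """Running update of the feature model while the target is tracked;
    the value from the last non-lost frame is used for retrieval."""
    return (1.0 - lr) * model + lr * new_features

def reidentify(candidate_features, model, min_similarity=0.7):
    """Return the index of the candidate region whose features best
    match the last updated model, or None if none is similar enough.

    Cosine similarity is used here for illustration; a learned metric
    could replace it.
    """
    best, best_sim = None, min_similarity
    for idx, feats in enumerate(candidate_features):
        sim = float(np.dot(feats, model) /
                    (np.linalg.norm(feats) * np.linalg.norm(model) + 1e-8))
        if sim > best_sim:
            best, best_sim = idx, sim
    return best
```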
  • it should be noted that the tracking target is usually moving in the video, and other environmental information in the video also changes over time; that is, the shape of the tracking target keeps changing while the lighting and surroundings change as well, so tracking or retrieval based only on the shape of the target is very difficult, and tracking with the feature model of the initially determined tracking target does not yield accurate results.
  • the continuously updated feature model introduced by the above scheme can therefore effectively remove the influence of environmental changes and of changes in the target's shape during tracking and retrieval, thereby improving the robustness of the tracking model.
  • in the above steps, feature information is extracted from the image region of the tracking target and a feature model is constructed from it; while the tracking state indicates that the target is not lost, the feature model is updated according to the tracking result of the current frame image until the tracking target is lost, and the candidate target regions are compared with the last updated feature model to identify the tracking target.
  • by constructing the feature model from the features of the target's image region and continuously updating it with the tracking results, the above scheme improves the robustness of the tracking model, which further addresses the technical problem in the prior art that it is difficult to judge whether a target tracked in a video image is lost, or to retrieve it after it is lost.
  • the preset sensor includes any one or more of the following: a depth camera, an infrared sensor, and an ultra-wideband positioning sensor.
  • if the tracking target is not lost, the target template is updated according to the tracking result of the current frame image.
  • in the above step, the tracking template is updated according to the tracking result, so that an online-updated template is maintained during tracking and the template incorporates the features of the images closest to the current frame.
  • in an optional embodiment, as shown in FIG. 2, the tracking target 20 is selected by the target detector 10, the target template 30 of the tracking target is extracted from the target selected by the target detector 10, and the tracking target is tracked in the video image according to the target template 30. After each frame is tracked 40, a tracking result is obtained, and the tracking state is determined from it, that is, tracking-loss determination 50 is performed. If the tracking state corresponding to the tracking result is tracking, the tracking result is used to update the target template 30 and tracking continues; if the tracking state is lost, target retrieval 60 is performed.
  • target retrieval 60 includes two steps, target detection and target re-identification: target detection can be performed by the pedestrian detector 10, followed by target re-identification. If the tracking target is successfully retrieved, the target template 30 is updated with the retrieved target; if it is not retrieved, the retrieval process continues.
  • FIG. 3 is a schematic diagram of a target tracking device according to an embodiment of the present invention. As shown in FIG. 3, the device includes:
  • the first obtaining module 10 is configured to acquire a tracking target and a target template corresponding to the tracking target.
  • the second obtaining module 20 is configured to track the tracking target in the video image according to the target template to obtain the tracking result of the current frame image in the video image, where the tracking result includes: the region of the tracking target in the current frame image and the corresponding confidence.
  • the determining module 30 is configured to determine a tracking state of the tracking target according to the confidence level.
  • the relocation module 40 is configured to reposition the tracking target and continue tracking after repositioning the tracking target if the tracking state of the tracking target is lost.
  • the device of the present application acquires the tracking target and its corresponding target template through the first obtaining module, tracks the target in the video image according to the target template through the second obtaining module to obtain the tracking result of the current frame image, determines the tracking state of the tracking target according to the confidence through the determining module, and, if the tracking state is lost, relocates the tracking target through the relocation module and continues tracking after relocation.
  • the above solution determines the next step by continuously judging the tracking state during tracking, thereby enabling the tracking system to adapt to the various states that appear during long-term tracking, and solves the technical problem in the prior art that it is difficult to judge whether a target tracked in a video image is lost, or to retrieve it after it is lost.
  • the second obtaining module includes:
  • a scanning submodule configured to scan the current frame image with the target template to obtain the region of the tracking target.
  • the collecting submodule is configured to collect information of the tracking target by using a preset sensor.
  • the fusion processing sub-module is configured to perform fusion processing on the information of the tracking target collected by the preset sensor to obtain the confidence level.
  • the fusion processing submodule includes:
  • the first obtaining unit is configured to acquire depth confidence, tracking confidence, and color confidence of the tracking result in the current frame image.
  • a setting unit configured to set weight values corresponding to the depth confidence, the tracking confidence, and the color confidence.
  • the second obtaining unit is configured to obtain the confidence according to the depth confidence, the tracking confidence, the color confidence, and a corresponding weight value.
  • the first obtaining unit includes:
  • an acquiring subunit configured to obtain the depth confidence by comparing the depth of the region of the tracking target in the current frame image with the average depth of that region in each frame image.
  • the first determining subunit is configured to determine that the similarity between the area of the tracking target in the current frame image and the target template is the tracking confidence.
  • the second determining subunit is configured to determine that the similarity between the color value of the region of the tracking target in the current frame image and the preset color model is the color confidence.
  • the determining module includes:
  • an obtaining submodule configured to obtain a preset confidence threshold; and a determining submodule configured to determine the tracking state of the tracking target by comparing the confidence with the confidence threshold, where the tracking state includes: tracking, low confidence, and lost.
  • the relocation module includes:
  • a detection submodule configured to detect candidate targets in a new video image by the target detector to determine the candidate target regions.
  • An identification sub-module configured to identify the tracking target from the candidate target area.
  • the identifying submodule includes:
  • a third acquiring unit configured to acquire an image area of the tracking target.
  • a construction unit configured to extract feature information from an image region of the tracking target and construct a feature model based on the feature information.
  • an updating unit configured to update the feature model according to the tracking result of the current frame image until the tracking target is lost, if it is determined that the tracking target is not lost according to the tracking state.
  • the comparison unit is configured to compare the candidate target area with the last updated feature model to identify the tracking target.
  • optionally, if the tracking target is not lost, the target template is updated according to the tracking result of the current frame image.
  • in the above scheme, after the tracking result of each frame image is obtained, tracking-loss determination is performed on it to establish the tracking state, so that tracking can be stopped in time when the tracking state becomes lost.
  • a storage medium is further provided, where the storage medium includes a stored program, and the device where the storage medium is located is controlled to execute the above target tracking method when the program runs.
  • the above storage medium may include, but is not limited to, media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
  • a processor configured to execute a program, wherein the target tracking method is executed when the program runs.
  • the above processor may include, but is not limited to, a processing device such as a microcontroller unit (MCU) or a programmable logic device such as a field-programmable gate array (FPGA).
  • the disclosed technical contents may be implemented in other manners.
  • the device embodiments described above are merely illustrative; the division into units may be a division by logical function, and in practice there may be other ways of division: for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, unit or module, and may be electrical or otherwise.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium.
  • the technical solution of the present invention, in essence, or the part of it that contributes over the prior art, or all or part of the technical solution, may be embodied in the form of a software product.
  • the computer software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention.
  • in summary, the target tracking method and device provided by at least some embodiments of the present invention have the following beneficial effect: by continuously judging the tracking state during tracking to determine the subsequent steps, the tracking system can adapt to the various states that appear during the long-term tracking process.

Abstract

A target tracking method and device. The method comprises: acquiring a tracking target and a target template corresponding to the tracking target (S102); tracking the tracking target in a video image according to the target template to obtain a tracking result of the current frame image in the video image (S104), where the tracking result comprises: the region of the tracking target in the current frame image and a corresponding confidence; determining the tracking state of the tracking target according to the confidence (S106); and, if the tracking state of the tracking target is lost, relocating the tracking target and continuing tracking after the tracking target is relocated (S108). The solution solves the technical problem in the prior art that, when a target is tracked in a video image, it is difficult to determine whether the target is lost, or to retrieve it after it is lost.

Description

Target tracking method and device
Technical field
The present invention relates to the field of video image processing, and in particular to a target tracking method and apparatus.
Background
Visual tracking technology based on online learning has risen in recent years and has become a hot topic in visual tracking. Such methods extract a feature template from the tracking target specified in the initial frame picture, without any prior experience from offline learning, and train a model used to track that target in the subsequent video. This type of method requires no offline training and can track any object specified by the user, so it is highly versatile. However, because real application scenarios are complex, the performance of the tracking algorithm degrades severely under obvious illumination changes, complex backgrounds, or interference from similar objects. In addition, because the features and template of the tracking target are singular, it is difficult to judge accurately during tracking whether the target has been lost; correspondingly, after the target is lost, continuous updating of the tracking template amplifies the error, making the target difficult to retrieve. Moreover, such methods detect candidate target regions in every frame of the image; in this way, after a lost target reappears, the chance of re-finding it is small, and more often it cannot be retrieved (because the tracking has been wrong all along) or a wrong target (such as a different person) is found, making it difficult to form a tracking system that is stable over a long time.
Most current visual tracking methods use an online-updated template: the template is matched against every frame, and is then updated according to the matching result. A stable long-term tracking system cannot be built on this kind of method alone, because:
1. The method cannot judge by itself whether the target has been lost;
2. Tracking after the target is lost keeps amplifying the error;
3. After the target is lost, it is difficult to accurately retrieve the pre-loss tracking target by relying on the detection mechanism of this kind of tracking method.
For the problem in the prior art that, when a target object is tracked in a video image, tracking continues even after the target is lost, making the target object difficult to relocate, no effective solution has been proposed so far.
Summary of the invention
At least some embodiments of the present invention provide a target tracking method and apparatus, to at least solve the technical problem in the prior art that, when tracking a target in a video image, it is difficult to determine whether the target is lost, or to retrieve it after it is lost.
According to an embodiment of the present invention, a target tracking method is provided, including: acquiring a tracking target and a target template corresponding to the tracking target; tracking the tracking target in a video image according to the target template to obtain a tracking result of the current frame image in the video image, where the tracking result includes: the region of the tracking target in the current frame image and the corresponding confidence; determining the tracking state of the tracking target according to the confidence; and, if the tracking state of the tracking target is lost, relocating the tracking target and continuing tracking after the tracking target is relocated.
Optionally, the current frame image is scanned with the target template to obtain the region of the tracking target; information of the tracking target is collected by preset sensors; and the information collected by the preset sensors is fused to obtain the confidence.
Optionally, the depth confidence, tracking confidence, and color confidence of the tracking result in the current frame image are acquired; weight values corresponding to the depth confidence, tracking confidence, and color confidence are set; and the confidence is obtained from the three confidences and their corresponding weight values.
Optionally, the depth confidence is obtained by comparing the depth of the region of the tracking target in the current frame image with the average depth of that region in each frame image; the similarity between the region of the tracking target in the current frame image and the target template is determined as the tracking confidence; and the similarity between the color values of that region and a preset color model is determined as the color confidence.
Optionally, a preset confidence threshold is acquired, and the tracking state of the tracking target is determined by comparing the confidence with the confidence threshold, where the tracking state includes: tracking, low confidence, and lost.
Optionally, candidate targets are detected in the new video image by a target detector to determine the candidate target regions, and the tracking target is identified from the candidate target regions.
Optionally, the image region of the tracking target is acquired; feature information is extracted from the image region and a feature model is constructed from the feature information; while it is determined from the tracking state that the tracking target is not lost, the feature model is updated according to the tracking result of the current frame image, until the tracking target is lost; and the candidate target regions are compared with the last updated feature model to identify the tracking target.
Optionally, the preset sensors include any one or more of the following: a depth camera, an infrared sensor, and an ultra-wideband positioning sensor.
Optionally, if the tracking target is not lost, the target template is updated according to the tracking result of the current frame image.
According to an embodiment of the present invention, a target tracking device is further provided, including: a first obtaining module configured to acquire a tracking target and a target template corresponding to the tracking target; a second obtaining module configured to track the tracking target in a video image according to the target template to obtain a tracking result of the current frame image in the video image, where the tracking result includes: the region of the tracking target in the current frame image and the corresponding confidence; a determining module configured to determine the tracking state of the tracking target according to the confidence; and a relocation module configured to, if the tracking state of the tracking target is lost, relocate the tracking target and continue tracking after the tracking target is relocated.
Optionally, the second obtaining module includes: a scanning submodule configured to scan the current frame image with the target template to obtain the region of the tracking target; a collecting submodule configured to collect information of the tracking target by preset sensors; and a fusion processing submodule configured to fuse the collected information of the tracking target to obtain the confidence.
Optionally, the fusion processing submodule includes: a first obtaining unit configured to acquire the depth confidence, tracking confidence, and color confidence of the tracking result in the current frame image; a setting unit configured to set weight values corresponding to the depth confidence, the tracking confidence, and the color confidence; and a second obtaining unit configured to obtain the confidence from the three confidences and their corresponding weight values.
Optionally, the first obtaining unit includes: an acquiring subunit configured to obtain the depth confidence by comparing the depth of the region of the tracking target in the current frame image with the average depth of that region in each frame image; a first determining subunit configured to determine the similarity between the region of the tracking target in the current frame image and the target template as the tracking confidence; and a second determining subunit configured to determine the similarity between the color values of that region and a preset color model as the color confidence.
Optionally, the determining module includes: an obtaining submodule configured to obtain a preset confidence threshold; and a determining submodule configured to determine the tracking state of the tracking target by comparing the confidence with the confidence threshold, where the tracking state includes: tracking, low confidence, and lost.
Optionally, the relocation module includes: a detection submodule configured to detect candidate targets in a new video image by a target detector to determine the candidate target regions; and an identification submodule configured to identify the tracking target from the candidate target regions.
Optionally, the identification submodule includes: a third acquiring unit configured to acquire the image region of the tracking target; a construction unit configured to extract feature information from the image region and construct a feature model from the feature information; an updating unit configured to update the feature model according to the tracking result of the current frame image while the tracking state indicates that the target is not lost, until the tracking target is lost; and a comparison unit configured to compare the candidate target regions with the last updated feature model to identify the tracking target.
Optionally, if the tracking target is not lost, the target template is updated according to the tracking result of the current frame image.
In at least some embodiments of the present invention, a tracking target and its corresponding target template are acquired, the tracking target is tracked in a video image according to the target template to obtain the tracking result of the current frame image, the tracking state of the tracking target is determined according to the confidence, and, if the tracking state is lost, the tracking target is relocated and tracking continues after relocation. By continuously judging the tracking state during tracking to determine the next step, the tracking system can adapt to the various states that appear during long-term tracking, which solves the technical problem in the prior art that, when tracking a target in a video image, it is difficult to judge whether the target is lost, or to retrieve it after it is lost.
Brief description of the drawings
The drawings described here are provided for further understanding of the present invention and form a part of the present application; the exemplary embodiments of the present invention and their description are used to explain the present invention and do not constitute an improper limitation of it. In the drawings:
FIG. 1 is a flowchart of a target tracking method according to an embodiment of the present invention;
FIG. 2 is an information interaction diagram of a target tracking method according to an embodiment of the present invention; and
FIG. 3 is a schematic diagram of a target tracking device according to an embodiment of the present invention.
Detailed description
To make the solution of the present invention better understood by those skilled in the art, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
It should be noted that the terms "first", "second", and the like in the specification, claims, and drawings of the present invention are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data so used may be interchanged where appropriate, so that the embodiments of the invention described here can be implemented in orders other than those illustrated or described. Moreover, the terms "comprise" and "have" and their variants are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device comprising a series of steps or units is not necessarily limited to the listed steps or units, but may include other steps or units not expressly listed or inherent to such a process, method, product, or device.
实施例1Example 1
根据本发明其中一实施例,提供了一种目标跟踪方法的实施例,需要说明的是,在附图的流程图示出的步骤可以在诸如一组计算机可执行指令的计算机系统中执行,并且,虽然在流程图中示出了逻辑顺序,但是在某些情况下,可以以不同于此处的顺序执行所示出或描述的步骤。In accordance with an embodiment of the present invention, an embodiment of a target tracking method is provided, it being noted that the steps illustrated in the flowchart of the figures may be performed in a computer system such as a set of computer executable instructions, and Although the logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in a different order than the ones described herein.
图1是根据本发明其中一实施例的目标跟踪方法的流程图,如图1所示,该方法包括如下步骤: 1 is a flow chart of a target tracking method according to an embodiment of the present invention. As shown in FIG. 1, the method includes the following steps:
步骤S102,获取跟踪目标以及所述跟踪目标对应的目标模板。Step S102: Acquire a tracking target and a target template corresponding to the tracking target.
具体的,上述跟踪目标可以通过人为指定或通过行人检测器在视频的初始帧图像确定,上述目标模板可以是跟踪目标的外形模板。Specifically, the foregoing tracking target may be determined by an artificially specified or by a pedestrian detector in an initial frame image of the video, and the target template may be a shape template of the tracking target.
步骤S104,根据所述目标模板在视频图像中跟踪所述跟踪目标,得到所述视频图像中当前帧图像的跟踪结果,其中,所述跟踪结果包括:当前帧图像中所述跟踪目标的区域和所对应的置信度。Step S104, tracking the tracking target in the video image according to the target template, and obtaining a tracking result of the current frame image in the video image, where the tracking result includes: a region of the tracking target in the current frame image Corresponding confidence.
具体的,上述当前帧图像的跟踪结果中的置信度用于确定跟踪结果中的跟踪目标的区域与目标模板的相似度。该相似度可以包括一种参数相似度,也可以包括多种参数相似度。Specifically, the confidence in the tracking result of the current frame image is used to determine the similarity between the region of the tracking target in the tracking result and the target template. The similarity may include a parameter similarity, and may also include multiple parameter similarities.
步骤S106,根据所述置信度确定所述跟踪目标的跟踪状态。Step S106, determining a tracking state of the tracking target according to the confidence level.
具体的,上述跟踪目标的跟踪状态可以包括:跟踪中、低置信度、丢失等状态但不限于此,其中,可以通过一个或多个置信度阈值,或多个置信度范围来确定跟踪状态。Specifically, the tracking status of the tracking target may include: a status of tracking, low confidence, loss, etc., but is not limited thereto, wherein the tracking status may be determined by one or more confidence thresholds or multiple confidence ranges.
Step S108: if the tracking state of the tracking target is lost, relocate the tracking target and continue tracking after the tracking target has been relocated.

Specifically, the relocation is used to retrieve the originally designated tracking target in new video images. Relocation requires two steps: target detection and target re-identification. Target detection uses a target detector obtained by offline training to detect targets in a video frame and obtain candidate target regions; target re-identification then identifies the tracking target among the candidate target regions. Target detection is not limited to a specific detector type: it may be a traditional template-plus-classifier sliding-window detector or a detector based on deep learning. Likewise, target re-identification is not limited to a specific method: it may use simple features with a distance metric, or feature extraction and similarity measures learned offline with a deep learning model.

It should be noted here that the above scheme accommodates the various situations that arise during long-term tracking and provides complete state judgment and state-transition mechanisms for each state, thereby forming a robust tracking system.

It should also be noted that once the target has been retrieved and the method re-enters the tracking step, a larger learning rate can be used in the initial stage to increase the update amplitude of the tracking template, so that stable tracking is quickly re-established.
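As a minimal sketch of this idea (the function names, the linear blend formula, and the learning-rate values below are illustrative assumptions, not prescribed by this application), a template update whose learning rate is raised for the first few frames after the target is retrieved might look as follows:

import numpy as np

def update_template(template, observation, lr):
    # Blend the stored template toward the latest observation;
    # lr near 1 adapts quickly, lr near 0 keeps the template stable.
    return (1.0 - lr) * template + lr * observation

template = np.zeros((64, 32), dtype=np.float32)        # placeholder appearance template
patches = [np.random.rand(64, 32).astype(np.float32)   # stand-ins for tracked patches
           for _ in range(10)]
for i, patch in enumerate(patches):
    lr = 0.5 if i < 5 else 0.02                        # larger rate right after re-acquisition
    template = update_template(template, patch, lr)

A larger lr lets the template absorb the retrieved appearance quickly; dropping back to a small lr afterwards keeps the template stable against drift.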
As can be seen from the above, the steps of the present application acquire a tracking target and a target template corresponding to the tracking target, track the tracking target in the video image according to the target template to obtain a tracking result of the current frame image, determine the tracking state of the tracking target according to the confidence, and, if the tracking state is lost, relocate the tracking target and continue tracking after relocation. By continuously judging the tracking state during tracking to decide the next step, the tracking system can adapt to the various states that arise during long-term tracking, which solves the technical problem in the related art that, when tracking a target in a video image, it is difficult to judge whether the target has been lost, or difficult to retrieve the target after it has been lost.
Optionally, according to the above embodiment of the present application, step S104 of tracking the tracking target in the video image according to the target template to obtain the tracking result of the current frame image includes:

Step S1041: scan the current frame image with the target template to obtain the region of the tracking target.

In this step, the target template is scanned over the current frame image to find the region with the strongest response peak, and that region is taken as the region in which the tracking target is located in the current frame image. Note that the template always yields a strongest-response region, regardless of whether the current frame actually contains the tracking target; when it does not, the strongest-response region is not really the target's region, so the tracking result must be judged further after it is obtained.
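A minimal sketch of this scanning step, assuming normalized cross-correlation as the response measure (a common choice; this application does not fix a particular matcher):

import cv2
import numpy as np

def locate_strongest_response(frame_gray, template_gray):
    # Slide the template over the frame and return the peak-response
    # location and its normalized score. A peak always exists, even when
    # the target is absent, so the score must still be judged afterwards.
    response = cv2.matchTemplate(frame_gray, template_gray,
                                 cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(response)
    x, y = max_loc
    h, w = template_gray.shape
    return (x, y, w, h), float(max_val)

frame = np.random.randint(0, 255, (480, 640), dtype=np.uint8)  # stand-in frame
templ = frame[100:164, 200:232].copy()                         # stand-in target patch
region, score = locate_strongest_response(frame, templ)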
Step S1043: collect information about the tracking target with a preset sensor.

Specifically, the sensor may include, but is not limited to, a depth camera, an infrared sensor, a UWB sensor, and the like; the preset sensor may consist of several different sensor types or of several sensors of the same type. Different sensors are used to obtain different kinds of information about the tracking target.

Step S1045: fuse the information about the tracking target collected by the preset sensor to obtain the confidence.

Ordinarily the confidence of a tracking result is determined only from the output of a single visual sensor (for example, a camera). Collecting information about the tracking target with the preset sensors in this step improves the accuracy of the confidence and reduces the influence of changes in environment and lighting on it.
Optionally, according to the above embodiment of the present application, step S1045 of fusing the information about the tracking target collected by the preset sensor to obtain the confidence includes:

Step S10451: acquire the depth confidence, tracking confidence, and color confidence of the tracking result in the current frame image.

Step S10453: set weight values corresponding to the depth confidence, the tracking confidence, and the color confidence.

Step S10455: obtain the confidence according to the depth confidence, the tracking confidence, the color confidence, and the corresponding weight values.

In an optional embodiment, with the depth confidence, tracking confidence, and color confidence denoted s_depth, s_tracking, and s_color, and the corresponding weight values σ1, σ2, and σ3, the confidence s can be computed by the following formula: s = σ1·s_depth + σ2·s_tracking + σ3·s_color.
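A direct transcription of this fusion formula (the weight values below are illustrative; the application leaves σ1, σ2, and σ3 as free parameters):

def fuse_confidence(s_depth, s_tracking, s_color, w=(0.3, 0.4, 0.3)):
    # s = σ1·s_depth + σ2·s_tracking + σ3·s_color
    return w[0] * s_depth + w[1] * s_tracking + w[2] * s_color

s = fuse_confidence(0.8, 0.9, 0.7)  # -> 0.81 with the illustrative weights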
Optionally, according to the above embodiment of the present application, step S10451 of acquiring the depth confidence, tracking confidence, and color confidence of the tracking result in the current frame image includes:

Step S10451a: obtain the depth confidence by comparing the depth of the tracking target's region in the current frame image with the average depth of the tracking target's region over the frame images.

In an optional embodiment, a depth camera mounted on a robot can measure the average depth of the tracking target's region. Because the target's depth does not change drastically between consecutive frames during continuous tracking, the depth change is compared frame by frame to obtain the normalized depth confidence s_depth.
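A minimal sketch of this judgment, assuming an exponential falloff as the normalization (the application requires only that small frame-to-frame depth changes map to high confidence; the falloff shape and the scale value are assumptions):

import math

def depth_confidence(region_depth, avg_depth, scale=0.3):
    # Relative change between the current region's mean depth and the
    # running average over previous frames; small change -> confidence near 1.
    rel_change = abs(region_depth - avg_depth) / max(avg_depth, 1e-6)
    return math.exp(-rel_change / scale)

s_depth = depth_confidence(region_depth=2.5, avg_depth=2.4)  # ~0.87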
Step S10451b: take the similarity between the tracking target's region in the current frame image and the target template as the tracking confidence.

In an optional embodiment, in each frame the tracking algorithm uses the target template to search near the region where the target appeared in the previous frame, finds the target region most similar to the template, and obtains the normalized tracking confidence s_tracking.

Step S10451c: take the similarity between the color values of the tracking target's region in the current frame image and a preset color model as the color confidence.

In an optional embodiment, the target region detected in the previous embodiment can be fed to a target-loss judgment module (that is, a module configured to judge the tracking state), which computes the normalized similarity s_color between the current target region and the preset color model.
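A sketch of the color-confidence computation, assuming a hue histogram in HSV space compared with the Bhattacharyya measure (illustrative choices; the application does not fix the color model or the similarity measure):

import cv2
import numpy as np

def color_confidence(region_bgr, model_hist):
    # Compare the region's hue histogram against the stored color model.
    # Bhattacharyya distance is 0 for identical histograms, so the
    # normalized confidence is taken as 1 - distance.
    hsv = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [32], [0, 180])
    cv2.normalize(hist, hist, 0, 1, cv2.NORM_MINMAX)
    dist = cv2.compareHist(model_hist, hist, cv2.HISTCMP_BHATTACHARYYA)
    return 1.0 - float(dist)

ref = np.random.randint(0, 255, (64, 32, 3), dtype=np.uint8)   # stand-in target patch
ref_hsv = cv2.cvtColor(ref, cv2.COLOR_BGR2HSV)
model_hist = cv2.calcHist([ref_hsv], [0], None, [32], [0, 180])
cv2.normalize(model_hist, model_hist, 0, 1, cv2.NORM_MINMAX)
s_color = color_confidence(ref, model_hist)                    # ~1.0 against itself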
Optionally, according to the above embodiment of the present application, step S106 of determining the tracking state of the tracking target according to the confidence includes:

Step S1061: acquire preset confidence thresholds.

Step S1063: determine the tracking state of the tracking target by comparing the confidence with the confidence thresholds, where the tracking state includes: tracking, low confidence, and lost.

In an optional embodiment, the preset confidence thresholds may include a first confidence threshold and a second confidence threshold, the first being greater than the second, so that the two thresholds delimit an interval. A tracking result whose confidence is greater than or equal to the first threshold corresponds to the tracking state; one whose confidence is greater than the second threshold but below the first corresponds to the low-confidence state; one whose confidence is less than the second threshold corresponds to the lost state.

Still in the above embodiment, if the tracking state of a tracking result is tracking or low confidence, the target template is updated with the tracking result and tracking continues; if the tracking state is lost, the tracking target is relocated.
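A minimal sketch of this two-threshold judgment (the threshold values are illustrative; the application only requires that the first threshold exceed the second):

TRACKING, LOW_CONFIDENCE, LOST = "tracking", "low_confidence", "lost"

def tracking_state(confidence, t_high=0.6, t_low=0.3):
    # Two thresholds (t_high > t_low) split the fused score into three states.
    if confidence >= t_high:
        return TRACKING
    if confidence > t_low:
        return LOW_CONFIDENCE
    return LOST

state = tracking_state(0.45)  # -> "low_confidence"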
Optionally, according to the above embodiment of the present application, step S108 of relocating the tracking target and continuing tracking after relocation includes:

Step S1081: detect candidate targets in new video images with a target detector to determine candidate target regions.

Specifically, this is the target detection step. In an optional embodiment, taking the detection of a target pedestrian in a video as an example, the target detector may be a pedestrian detector that detects pedestrians in the video to obtain candidate target regions.
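As an illustration, OpenCV's stock HOG-plus-linear-SVM pedestrian detector is one example of the template-plus-classifier sliding-window family mentioned above (using it here is an assumption for the sketch, not part of this application):

import cv2
import numpy as np

# Stock HOG + linear-SVM pedestrian detector (a classic sliding-window detector).
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)  # stand-in frame
rects, weights = hog.detectMultiScale(frame, winStride=(8, 8))
candidates = [tuple(r) for r in rects]  # (x, y, w, h) candidate target regions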
Step S1083: identify the tracking target among the candidate target regions.

Specifically, this step is the target re-identification process, namely finding and recognizing a previously designated target across different scenes. One re-identification approach is to take paired image data of the same target in different scenes together with paired image data of different targets, extract a specified feature from each pair, such as a color histogram, as a feature vector, and then learn a similarity measure function by metric learning. In application, this similarity measure function computes the similarity of two targets and thus decides whether they are the same target. Optionally, according to the above embodiment of the present application, step S1083 of identifying the tracking target among the candidate target regions may further include:
Step S10831: acquire the image region of the tracking target.

Step S10833: extract feature information from the image region of the tracking target and construct a feature model according to the feature information.

Specifically, the extracted features may be color features, edge features, and the like of the image. Because the tracking target usually moves in the video, tracking based only on the target's shape is difficult and inaccurate; however, for consecutive images in a video, although the target's instantaneous appearance keeps changing from timestamp to timestamp, the image features generally remain consistent. This step therefore constructs the model from the extracted image features.

Step S10835: while it is determined according to the tracking state that the tracking target is not lost, update the feature model according to the tracking result of the current frame image, until the tracking target is lost.

Step S10837: compare the candidate target regions with the feature model obtained by the last update, so as to identify the tracking target.

In an optional embodiment, once the current frame image is judged to have lost the target, the most recently updated tracking model, i.e., the model from the previous frame, can be used to retrieve the tracking target. Since the previous frame was still in a not-lost state, the feature model of the tracking target was updated after its tracking result was obtained, so the feature model used to retrieve the target is the closest available one.
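A compact sketch combining steps S10833 to S10837 (the class name, the update rate, and the use of Euclidean distance as a stand-in for the learned similarity measure are all assumptions):

import numpy as np

class FeatureModel:
    # Feature model for the tracking target, updated online every frame
    # while the target is not lost (step S10835).
    def __init__(self, feature, lr=0.05):
        self.feature = np.asarray(feature, dtype=np.float32)
        self.lr = lr

    def update(self, feature):
        # Blend in the newest tracked appearance so the stored model
        # always reflects the last pre-loss state of the target.
        f = np.asarray(feature, dtype=np.float32)
        self.feature = (1.0 - self.lr) * self.feature + self.lr * f

    def best_match(self, candidate_features):
        # Step S10837: compare candidates against the last-updated model.
        # Euclidean distance stands in for a learned similarity measure.
        if not candidate_features:
            return None
        dists = [np.linalg.norm(self.feature - np.asarray(c, dtype=np.float32))
                 for c in candidate_features]
        return int(np.argmin(dists))

model = FeatureModel(np.zeros(32))                  # e.g. a 32-bin color histogram
model.update(np.random.rand(32))                    # per-frame update while tracking
idx = model.best_match([np.random.rand(32) for _ in range(3)])

While the state is not lost, update is called every frame, so best_match always compares candidates against the last pre-loss appearance, exactly as described above.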
It should be noted here that, because the tracking target may be moving in the video and other environmental conditions also change over time, that is, the target's appearance keeps changing, as do lighting and surroundings, tracking or retrieving the target purely by its shape is very difficult; nor does always tracking with the feature model determined at the very beginning give accurate results. The feature model of the tracking target introduced by the above scheme can therefore effectively remove the influence of environmental changes and changes in the target's appearance during tracking and retrieval, which improves the robustness of the tracking model.

As can be seen from the above, the present application extracts feature information from the image region of the tracking target, constructs a feature model according to the feature information, updates the feature model according to the tracking result of the current frame image while the tracking state indicates that the target is not lost, and, once the target is lost, compares the candidate target regions with the feature model obtained by the last update to identify the tracking target. This scheme builds the feature model from the feature information of the target's image region, keeps updating it with successive tracking results, and uses the feature model as the tracking model, thereby improving the robustness of the tracking model and solving the technical problem in the related art that, when tracking a target in a video image, it is difficult to judge whether the target has been lost, or difficult to retrieve the target after it has been lost.
Optionally, according to the above embodiment of the present application, the preset sensor includes any one or more of the following: a depth camera, an infrared sensor, an ultra-wideband positioning sensor.

Optionally, according to the above embodiment of the present application, if the tracking target is not lost, the target template is updated according to the tracking result of the current frame image.

In this step, if the tracking state is tracking, i.e., the not-lost state, the tracking template is updated with each detected tracking result, so that a template updated online is maintained throughout tracking and the template always incorporates the features of the image closest to the current frame.

FIG. 2 is an information interaction diagram of an optional target tracking method according to an embodiment of the present application. As shown in FIG. 2, in an optional embodiment, the tracking target 20 is selected by the target detector 10, the target template 30 of the tracking target is extracted from the target selected by the detector 10, and the tracking target is tracked (40) in the video image according to the target template 30. After tracking each frame image (40) yields a tracking result, the tracking state is judged from the tracking result, i.e., tracking-loss judgment (50) is performed. If the tracking state corresponding to the tracking result is tracking, the tracking result is used to update the target template 30 and tracking continues; if the tracking state is lost, target retrieval (60) is performed. Target retrieval 60 comprises the two steps of target detection and target re-identification, where detection may be performed by the pedestrian detector 10 followed by re-identification. If the tracking target is successfully retrieved, the retrieved target is used to update the target template 30; if not, the retrieval step continues.
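The control flow of FIG. 2 can be summarized in a short loop (all callables here are hypothetical stand-ins wired up only so the sketch executes; they are not interfaces defined by this application):

def run_tracking_loop(frames, track, detect, reidentify, update_template):
    # Control flow of FIG. 2: track, judge loss, update the template while
    # not lost, otherwise fall back to detection + re-identification.
    lost = False
    for frame in frames:
        if not lost:
            region, confidence = track(frame)
            if confidence < 0.3:            # illustrative 'lost' threshold
                lost = True
            else:
                update_template(frame, region)
        else:
            target = reidentify(detect(frame))
            if target is not None:          # target retrieved: resume tracking
                lost = False
                update_template(frame, target)

# Minimal stand-ins so the sketch runs end to end:
run_tracking_loop(
    frames=range(5),
    track=lambda f: ((0, 0, 10, 10), 0.9),
    detect=lambda f: [(0, 0, 10, 10)],
    reidentify=lambda candidates: candidates[0] if candidates else None,
    update_template=lambda f, region: None,
)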
Embodiment 2
According to an embodiment of the present invention, an embodiment of a target tracking device is provided. FIG. 3 is a schematic diagram of a target tracking device according to an embodiment of the present invention. As shown in FIG. 3, the device includes:

a first acquisition module 10, configured to acquire a tracking target and a target template corresponding to the tracking target;

a second acquisition module 20, configured to track the tracking target in the video image according to the target template to obtain a tracking result of the current frame image in the video image, where the tracking result includes the region of the tracking target in the current frame image and the corresponding confidence;

a determination module 30, configured to determine the tracking state of the tracking target according to the confidence;

a relocation module 40, configured to, if the tracking state of the tracking target is lost, relocate the tracking target and continue tracking after the tracking target has been relocated.
As can be seen from the above, the device of the present application acquires a tracking target and its corresponding target template through the first acquisition module, tracks the tracking target in the video image according to the target template through the second acquisition module to obtain the tracking result of the current frame image, determines the tracking state of the tracking target according to the confidence through the determination module, and, if the tracking state is lost, relocates the tracking target through the relocation module and continues tracking after relocation. By continuously judging the tracking state during tracking to decide the next step, the tracking system can adapt to the various states that arise during long-term tracking, which solves the technical problem in the related art that, when tracking a target in a video image, it is difficult to judge whether the target has been lost, or difficult to retrieve the target after it has been lost.
Optionally, according to the above embodiment of the present application, the second acquisition module includes:

a scanning sub-module, configured to scan the current frame image with the target template to obtain the region of the tracking target;

a collection sub-module, configured to collect information about the tracking target with a preset sensor;

a fusion processing sub-module, configured to fuse the information about the tracking target collected by the preset sensor to obtain the confidence.

Optionally, according to the above embodiment of the present application, the fusion processing sub-module includes:

a first acquisition unit, configured to acquire the depth confidence, tracking confidence, and color confidence of the tracking result in the current frame image;

a setting unit, configured to set weight values corresponding to the depth confidence, the tracking confidence, and the color confidence;

a second acquisition unit, configured to obtain the confidence according to the depth confidence, the tracking confidence, the color confidence, and the corresponding weight values.

Optionally, according to the above embodiment of the present application, the first acquisition unit includes:

an acquisition sub-unit, configured to obtain the depth confidence by comparing the depth of the tracking target's region in the current frame image with the average depth of the tracking target's region over the frame images;

a first determination sub-unit, configured to determine the similarity between the tracking target's region in the current frame image and the target template as the tracking confidence;

a second determination sub-unit, configured to determine the similarity between the color values of the tracking target's region in the current frame image and a preset color model as the color confidence.
Optionally, according to the above embodiment of the present application, the determination module includes:

an acquisition sub-module, configured to acquire preset confidence thresholds;

a determination sub-module, configured to determine the tracking state of the tracking target by comparing the confidence with the confidence thresholds, where the tracking state includes: tracking, low confidence, and lost.

Optionally, according to the above embodiment of the present application, the relocation module includes:

a detection sub-module, configured to detect candidate targets in new video images with a target detector to determine candidate target regions;

an identification sub-module, configured to identify the tracking target among the candidate target regions.

Optionally, according to the above embodiment of the present application, the identification sub-module includes:

a third acquisition unit, configured to acquire the image region of the tracking target;

a construction unit, configured to extract feature information from the image region of the tracking target and construct a feature model according to the feature information;

an updating unit, configured to, while it is determined according to the tracking state that the tracking target is not lost, update the feature model according to the tracking result of the current frame image, until the tracking target is lost;

a comparison unit, configured to compare the candidate target regions with the feature model obtained by the last update, so as to identify the tracking target.
Optionally, according to the above embodiment of the present application, if the tracking target is not lost, the target template is updated according to the tracking result of the current frame image.

In the above embodiment, while the target template is continuously updated, the tracking result of each frame image is also subjected to tracking-loss judgment to determine the tracking state. When the tracking state is lost, tracking stops in time and the tracking results are no longer used to update the target template; instead, the target retrieval step is entered until the tracking target is retrieved, after which tracking resumes and the target template is updated again.
According to an embodiment of the present invention, a storage medium is also provided. The storage medium includes a stored program which, when run, controls the device on which the storage medium resides to execute the above target tracking method. The storage medium may include, but is not limited to, various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.

According to an embodiment of the present invention, a processor is also provided. The processor is configured to run a program which, when run, executes the above target tracking method. The processor may include, but is not limited to, a processing device such as a microcontroller unit (MCU) or a programmable logic device such as an FPGA.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.

In the above embodiments of the present invention, the description of each embodiment has its own emphasis; for parts not detailed in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technical content may be implemented in other ways. The device embodiments described above are only illustrative. For example, the division into units may be a division by logical function; in actual implementation there may be other divisions, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, units, or modules, and may be electrical or take other forms.

The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.

In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.

If implemented in the form of a software functional unit and sold or used as an independent product, the integrated unit may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention.
The above is only a preferred embodiment of the present invention. It should be pointed out that those of ordinary skill in the art may make several improvements and refinements without departing from the principles of the present invention, and these improvements and refinements shall also fall within the scope of protection of the present invention.
Industrial Applicability

As described above, the target tracking method and device provided by at least some embodiments of the present invention have the following beneficial effect: the subsequent step is determined by continuously judging the tracking state during tracking, so that the tracking system can adapt to the various states that arise during long-term tracking.

Claims (19)

  1. A target tracking method, comprising:
    acquiring a tracking target and a target template corresponding to the tracking target;
    tracking the tracking target in a video image according to the target template to obtain a tracking result of a current frame image in the video image, wherein the tracking result comprises: a region of the tracking target in the current frame image and a corresponding confidence;
    determining a tracking state of the tracking target according to the confidence;
    if the tracking state of the tracking target is lost, relocating the tracking target and continuing tracking after the tracking target has been relocated.
  2. The method according to claim 1, wherein tracking the tracking target in the video image according to the target template to obtain the tracking result of the current frame image in the video image comprises:
    scanning the current frame image with the target template to obtain the region of the tracking target;
    collecting information about the tracking target with a preset sensor;
    fusing the information about the tracking target collected by the preset sensor to obtain the confidence.
  3. The method according to claim 2, wherein fusing the information about the tracking target collected by the preset sensor to obtain the confidence comprises:
    acquiring a depth confidence, a tracking confidence, and a color confidence of the tracking result in the current frame image;
    setting weight values corresponding to the depth confidence, the tracking confidence, and the color confidence;
    obtaining the confidence according to the depth confidence, the tracking confidence, the color confidence, and the corresponding weight values.
  4. The method according to claim 3, wherein acquiring the depth confidence, tracking confidence, and color confidence of the tracking result in the current frame image comprises:
    obtaining the depth confidence by comparing the depth of the region of the tracking target in the current frame image with the average depth of the region of the tracking target in each frame image;
    determining the similarity between the region of the tracking target in the current frame image and the target template as the tracking confidence;
    determining the similarity between the color values of the region of the tracking target in the current frame image and a preset color model as the color confidence.
  5. The method according to any one of claims 1 to 4, wherein determining the tracking state of the tracking target according to the confidence comprises:
    acquiring preset confidence thresholds;
    determining the tracking state of the tracking target by comparing the confidence with the confidence thresholds, wherein the tracking state comprises: tracking, low confidence, and lost.
  6. The method according to claim 5, wherein relocating the tracking target and continuing tracking after the tracking target has been relocated comprises:
    detecting candidate targets in a new video image with a target detector to determine candidate target regions;
    identifying the tracking target from the candidate target regions.
  7. The method according to claim 6, wherein identifying the tracking target from the candidate target regions comprises:
    acquiring an image region of the tracking target;
    extracting feature information from the image region of the tracking target, and constructing a feature model according to the feature information;
    in a case where it is determined according to the tracking state that the tracking target is not lost, updating the feature model according to the tracking result of the current frame image, until the tracking target is lost;
    comparing the candidate target regions with the feature model obtained by the last update to identify the tracking target.
  8. The method according to claim 2, wherein the preset sensor comprises any one or more of the following: a depth camera, an infrared sensor, an ultra-wideband positioning sensor.
  9. The method according to claim 1, wherein, if the tracking target is not lost, the target template is updated according to the tracking result of the current frame image.
  10. A target tracking device, comprising:
    a first acquisition module, configured to acquire a tracking target and a target template corresponding to the tracking target;
    a second acquisition module, configured to track the tracking target in a video image according to the target template to obtain a tracking result of a current frame image in the video image, wherein the tracking result comprises: a region of the tracking target in the current frame image and a corresponding confidence;
    a determination module, configured to determine a tracking state of the tracking target according to the confidence;
    a relocation module, configured to, if the tracking state of the tracking target is lost, relocate the tracking target and continue tracking after the tracking target has been relocated.
  11. The device according to claim 10, wherein the second acquisition module comprises:
    a scanning sub-module, configured to scan the current frame image with the target template to obtain the region of the tracking target;
    a collection sub-module, configured to collect information about the tracking target with a preset sensor;
    a fusion processing sub-module, configured to fuse the information about the tracking target collected by the preset sensor to obtain the confidence.
  12. The device according to claim 11, wherein the fusion processing sub-module comprises:
    a first acquisition unit, configured to acquire a depth confidence, a tracking confidence, and a color confidence of the tracking result in the current frame image;
    a setting unit, configured to set weight values corresponding to the depth confidence, the tracking confidence, and the color confidence;
    a second acquisition unit, configured to obtain the confidence according to the depth confidence, the tracking confidence, the color confidence, and the corresponding weight values.
  13. The device according to claim 12, wherein the first acquisition unit comprises:
    an acquisition sub-unit, configured to obtain the depth confidence by comparing the depth of the region of the tracking target in the current frame image with the average depth of the region of the tracking target in each frame image;
    a first determination sub-unit, configured to determine the similarity between the region of the tracking target in the current frame image and the target template as the tracking confidence;
    a second determination sub-unit, configured to determine the similarity between the color values of the region of the tracking target in the current frame image and a preset color model as the color confidence.
  14. The device according to any one of claims 10 to 13, wherein the determination module comprises:
    an acquisition sub-module, configured to acquire preset confidence thresholds;
    a determination sub-module, configured to determine the tracking state of the tracking target by comparing the confidence with the confidence thresholds, wherein the tracking state comprises: tracking, low confidence, and lost.
  15. The device according to claim 14, wherein the relocation module comprises:
    a detection sub-module, configured to detect candidate targets in a new video image with a target detector to determine candidate target regions;
    an identification sub-module, configured to identify the tracking target from the candidate target regions.
  16. The device according to claim 15, wherein the identification sub-module comprises:
    a third acquisition unit, configured to acquire an image region of the tracking target;
    a construction unit, configured to extract feature information from the image region of the tracking target and construct a feature model according to the feature information;
    an updating unit, configured to, in a case where it is determined according to the tracking state that the tracking target is not lost, update the feature model according to the tracking result of the current frame image, until the tracking target is lost;
    a comparison unit, configured to compare the candidate target regions with the feature model obtained by the last update to identify the tracking target.
  17. The device according to claim 10, wherein, if the tracking target is not lost, the target template is updated according to the tracking result of the current frame image.
  18. A storage medium, comprising a stored program, wherein, when the program runs, a device on which the storage medium resides is controlled to execute the target tracking method according to any one of claims 1 to 9.
  19. A processor, configured to run a program, wherein, when the program runs, the target tracking method according to any one of claims 1 to 9 is executed.
WO2018121287A1 (en) Target re-identification method and device
CN110264493B (en) Method and device for tracking multiple target objects in motion state
KR101557376B1 (en) Method for Counting People and Apparatus Therefor
CN104598883B (en) Target re-identification method in a multi-camera monitoring network
WO2019071664A1 (en) Human face recognition method and apparatus combined with depth information, and storage medium
KR101697161B1 (en) Device and method for tracking pedestrian in thermal image using online random fern learning
CN104992453A (en) Target tracking method in complex backgrounds based on extreme learning machine
WO2019042195A1 (en) Method and device for recognizing identity of human target
US10991124B2 (en) Determination apparatus and method for gaze angle
JP6803525B2 (en) Face detection device, face detection system equipped with this, and face detection method
WO2020258978A1 (en) Object detection method and device
US20160210756A1 (en) Image processing system, image processing method, and recording medium
CN112016353A (en) Method and device for identity recognition of face images based on video
US20220366570A1 (en) Object tracking device and object tracking method
WO2013075295A1 (en) Clothing identification method and system for low-resolution video
KR20170053807A (en) A method of detecting objects in an image with a moving background
US10762659B2 (en) Real time multi-object tracking apparatus and method using global motion
CN113657250A (en) Flame detection method and system based on monitoring video

Legal Events

Date Code Title Description
121  Ep: the epo has been informed by wipo that ep was designated in this application
     Ref document number: 17887296 | Country of ref document: EP | Kind code of ref document: A1
NENP Non-entry into the national phase
     Ref country code: DE
122  Ep: pct application non-entry in european phase
     Ref document number: 17887296 | Country of ref document: EP | Kind code of ref document: A1