WO2020082258A1 - Multi-objective real-time tracking method and apparatus, and electronic device - Google Patents


Info

Publication number
WO2020082258A1
Authority
WO
WIPO (PCT)
Prior art keywords
information, target, current, match, feature
Prior art date
Application number
PCT/CN2018/111589
Other languages
French (fr)
Chinese (zh)
Inventor
肖梦秋
Original Assignee
深圳鲲云信息科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳鲲云信息科技有限公司 filed Critical 深圳鲲云信息科技有限公司
Priority to CN201880083620.2A priority Critical patent/CN111512317B/en
Priority to PCT/CN2018/111589 priority patent/WO2020082258A1/en
Publication of WO2020082258A1 publication Critical patent/WO2020082258A1/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition

Definitions

  • the invention relates to the field of software development, and more specifically, to a multi-target real-time tracking method, device and electronic equipment.
  • Tracking can determine the trajectory of the target (object or person).
  • current single-target tracking algorithms include those based on kernelized correlation filters (KCF).
  • current tracking algorithms have a real-time problem when applied to multi-target tracking.
  • combining multiple single-target tracking algorithms requires a large amount of computation and incurs a high data-processing delay, so the tracking accuracy obtained with this tracking method is low.
  • in view of the above defects in the prior art, the purpose of the present invention is to provide a multi-target real-time tracking method, apparatus, and electronic device that solve the problem of low tracking accuracy.
  • a multi-target real-time tracking method includes:
  • acquiring image information, where the image information includes current frame information and previous frame information of multiple targets;
  • a second match is performed to determine whether the at least one target is successfully matched twice, and the second match includes at least one of feature matching and distance matching;
  • the information of the at least one target with a successful first match and/or a successful second match is formed into output information, where the output information includes current presence information and identification information.
  • performing a second match to determine whether the at least one target is successfully matched a second time further includes:
  • if the second match is unsuccessful, the current frame information of the remaining target is regenerated to obtain new image information, where the new image information includes the current frame information and the next frame information.
  • the current frame information includes current detection information of the multiple targets, and the previous frame information includes historical existence information and corresponding identification information of the multiple targets;
  • matching the current frame information of the multiple targets with the previous frame information once to determine whether at least one of the multiple targets is successfully matched once includes:
  • according to the degree of overlap, it is determined whether the at least one target is successfully matched once.
  • the judging whether the at least one target is successfully matched according to the overlapping degree includes:
  • the current frame information includes current detection information of the multiple targets, and the previous frame information includes historical existence information and corresponding identification information of the multiple targets;
  • the performing second matching to determine whether the at least one target is successfully matched twice includes:
  • the judging whether the at least one target is successfully matched twice according to the characteristic value and the distance value of the at least one target includes:
  • forming the output information of the at least one target with a successful first match and/or a successful second match includes:
  • the output information of the at least one target is formed according to the current presence information and the corresponding identification information.
  • a multi-target real-time tracking device includes:
  • An acquisition module for acquiring image information, the image information including current frame information and previous frame information of multiple targets;
  • the first matching module is configured to perform a first match based on the current frame information and the previous frame information of the multiple targets, and determine whether at least one of the multiple targets is successfully matched once;
  • the second matching module is used to perform secondary matching if the at least one target has not been matched once, and determine whether the at least one target has been successfully matched twice.
  • the secondary matching includes at least one of feature matching and distance matching;
  • the output module is configured to form the information of the at least one target with a successful first match and/or a successful second match into output information, where the output information includes current presence information and identification information.
  • an electronic device includes a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the steps in the multi-target real-time tracking method provided by the embodiments of the present invention are implemented.
  • a computer-readable storage medium is provided, on which a computer program is stored; when the computer program is executed by a processor, the steps in the multi-target real-time tracking method provided by the embodiments of the present invention are implemented.
  • beneficial effects brought by the present invention: image information is acquired, where the image information includes current frame information and previous frame information of multiple targets; a first match is performed according to the current frame information and the previous frame information of the multiple targets to determine whether at least one target among the multiple targets is matched successfully once; if the at least one target is not matched successfully once, a second match is performed to determine whether the at least one target is matched successfully twice; the information of the at least one target for which the first match and/or the second match is successful is formed into output information, where the output information includes current presence information and identification information.
  • the tracking accuracy can be increased.
  • FIG. 1 is a schematic flowchart of a multi-target real-time tracking method according to an embodiment of the present invention
  • FIG. 2 is a schematic diagram of current information according to an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of image information according to an embodiment of the present invention.
  • FIG. 4 is a schematic flowchart of another multi-target real-time tracking method provided by an embodiment of the present invention.
  • FIG. 5 is a schematic flowchart of another multi-target real-time tracking method according to an embodiment of the present invention.
  • FIG. 6 is a schematic diagram of a multi-target real-time tracking device provided by an embodiment of the present invention.
  • FIG. 7 is a schematic diagram of another multi-target real-time tracking device provided by an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of another multi-target real-time tracking device provided by an embodiment of the present invention.
  • FIG. 9 is a schematic diagram of another multi-target real-time tracking device provided by an embodiment of the present invention.
  • FIG. 10 is a schematic diagram of another multi-target real-time tracking device provided by an embodiment of the present invention.
  • FIG. 11 is a schematic diagram of another multi-target real-time tracking device provided by an embodiment of the present invention.
  • the invention provides a multi-target real-time tracking method, device and electronic equipment.
  • FIG. 1 is a schematic flowchart of a multi-target real-time tracking method according to an embodiment of the present invention. As shown in FIG. 1, the method includes the following steps:
  • the above image information may be the image information of the video frame collected by the camera, and the image information may be identified according to the time of the video frame.
  • for example, if one frame of the video is acquired at 15.6789 seconds, that frame image may be identified as 15S6789. The identifier may also be the frame's sequence number among all video frames; for example, if the frame is the 14567th frame of the video, the frame image may be identified as 14567.
  • the embodiments of the present invention are not limited to the above two identification methods, and may also be other identification methods, such as a time stamp with a date and a sequential identification with a camera number.
  • the above current frame information includes the feature coordinate values, feature range values, and confidence values of multiple targets in the image.
  • the previous frame information includes the identification, feature coordinate values, feature range values, and confidence levels of multiple targets.
  • the feature coordinate value and the feature range value may be measured in pixels or measured in actual size, which is not specifically limited in the embodiment of the present invention.
  • the current frame information can be obtained by real-time detection of the multiple targets in the original image of the current frame. If the image contains target information, a current feature box is used to represent the target, and the center coordinate information, length and width (width and height) information, and confidence of the current feature box are obtained. The confidence measures the credibility of the target's existence: the higher the confidence, the more credible the current feature box. The confidence can be obtained when the image is detected in real time.
  • the above feature range value includes the area value occupied by the feature image in the feature frame.
  • the above information of the previous frame may be the identification, feature coordinate value, feature range value, confidence value, etc. of multiple targets in the image in the previous frame.
  • the identifiers of the multiple targets in the current frame are all different from the identifiers of the multiple targets in the previous frame; in other words, the identifiers of the multiple targets in the current frame do not overlap with the identifiers of the multiple targets in the previous frame.
  • for example, the identifiers of the multiple targets in the current frame may be A, B, C, and D, and the identifiers of the multiple targets in the previous frame may be A', B', C', and D', where A and A' can be different targets.
  • the information of the previous frame can be obtained by real-time detection of multiple targets in the original image of the previous frame, or by real-time detection of multiple targets in the processed image of the previous frame.
  • the target information in the previous frame information is represented by a previous feature box that is drawn differently from the feature box in the current frame information.
  • the current feature box is a solid box
  • the previous feature box is a dotted box.
  • the feature boxes can also be distinguished by their identifiers, for example by associating each target's identifier with its feature box and configuring two different identifier sets for the feature boxes in the current frame information and the feature boxes in the previous frame information.
  • the above image information includes the original image of the current frame, the current feature box, and the previous feature box.
  • the current feature box includes the current identifier, current center coordinate information, current length and width (width and height) information, and the confidence of appearance.
  • the previous feature box includes the previous identifier, the previous center coordinate information, and the previous length and width (width and height) information.
  • the feature frame may also be referred to as a target frame
  • the aforementioned identifier may also be referred to as an ID
  • the aforementioned current feature frame may also be referred to as a detection frame
  • the aforementioned previous feature frame may also be referred to as a tracking frame
  • the aforementioned feature range value is the area value occupied by the feature image in the feature box.
  • the above-mentioned real-time detection can be performed in a tracker or obtained by a tracking algorithm; trackers and tracking algorithms are known to those skilled in the art and are not repeated here.
  • the current frame information includes the current feature boxes of multiple targets or the current feature vectors of multiple targets.
  • the previous frame information includes the previous feature boxes of multiple targets or the previous feature vectors of multiple targets.
  • the detection algorithm matches at least one target among multiple targets.
  • the current frame information in step 102 includes multiple current feature frames corresponding to multiple targets
  • the previous frame information includes multiple previous feature frames corresponding to multiple targets.
  • real-time detection is performed to obtain a set of image information that includes multiple current feature boxes and multiple previous feature boxes. Each previous feature box is placed among the multiple current feature boxes for matching: the overlapping area between that previous feature box and each current feature box is calculated, and from the overlapping area the degree of overlap (Intersection-over-Union, IoU, also known as the intersection-over-union ratio) between the previous feature box and each current feature box is calculated. The current feature box with the maximum degree of overlap with the previous feature box is selected to form a group with it, as shown in FIG. 3.
  • the maximum overlap degree of each previous feature box is then compared with a preset overlap threshold. If the maximum overlap degree meets the overlap threshold, the pair is recorded as a successful match; if the maximum overlap degree does not meet the overlap threshold, the pair is recorded as an unsuccessful match, and the previous feature box enters step 103 for secondary matching.
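The first-match step described above can be sketched in code. The box format (center x, center y, width, height), the dictionary-based bookkeeping, and the 0.6 threshold are illustrative assumptions, not details taken from the patent:

```python
# Sketch of the first match: for each previous feature box, find the current
# feature box with maximum IoU, then compare that IoU against a threshold.

def iou(a, b):
    """Intersection-over-Union of two boxes given as (cx, cy, w, h)."""
    ax1, ay1 = a[0] - a[2] / 2, a[1] - a[3] / 2
    ax2, ay2 = a[0] + a[2] / 2, a[1] + a[3] / 2
    bx1, by1 = b[0] - b[2] / 2, b[1] - b[3] / 2
    bx2, by2 = b[0] + b[2] / 2, b[1] + b[3] / 2
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))   # intersection width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))   # intersection height
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def first_match(prev_boxes, curr_boxes, iou_threshold=0.6):
    """Pair each previous box with its max-IoU current box; return the
    successful (prev_id, curr_id) pairs and the previous-box ids that
    fall through to the second match."""
    matched, unmatched_prev = [], []
    for pid, pbox in prev_boxes.items():
        best_cid = max(curr_boxes, key=lambda cid: iou(pbox, curr_boxes[cid]))
        if iou(pbox, curr_boxes[best_cid]) >= iou_threshold:
            matched.append((pid, best_cid))
        else:
            unmatched_prev.append(pid)
    return matched, unmatched_prev
```

For example, a previous box at (50, 50) pairs with a nearby current box at (52, 50) of the same size, while a distant detection is ignored.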
  • the similarity of the feature frames may also be compared to match the current feature frame corresponding to the feature frame of the previous frame. Similarity includes: area similarity, length-width (width-height) similarity, etc.
  • the first match may also be called one-time matching, first matching, one-time tracking, first tracking, etc., or may be called tracking directly.
  • the second match includes at least one of feature matching and distance matching.
  • the target that did not match successfully in step 102 is subjected to secondary matching, and the secondary matching includes at least one of feature matching and distance matching.
  • Feature matching includes obtaining the current feature vector of the current feature box, and acquiring the previous feature vector of the previous feature box, and calculating the similarity between the previous feature vector and the current feature vector.
  • Distance matching includes obtaining the distance value between the previous feature box and the current feature box.
  • the second matching may also be referred to as secondary matching, re-matching, second tracking, re-tracking, and so on.
  • the above-mentioned current presence information includes the current feature box. For example, if the successful match is between current feature box A and previous feature box A', the information of current feature box A is output.
  • the information of the current feature box includes the center coordinate information, length and width (width and height) information, etc. of the current feature box.
  • the above identification information includes the identification information of the current feature box, which is used to indicate the current feature box. For example, if the current feature box is A, output A.
  • the identification information of the current feature box is associated with the current feature box and is used to indicate the current feature box.
  • step 102 if the target is successfully matched, the target information of the successful match may be placed in an active set, and the target information of unsuccessful match may be placed in a lost set.
  • in step 103, after the targets in the lost set are matched a second time, targets with a successful match are obtained; the information of these targets may be added to the active set, and the information of the targets in the active set is output.
  • A is the identification of the current feature box
  • A' is the identifier of the previous feature box
  • if A and A' are a pair of current and previous feature boxes that match successfully, the identifier of the current feature box is changed from A to A', the previous feature box is then deleted from the image information, and the current feature box A' is recorded into the active set; the output information is then the identifier A' of the current feature box together with its center coordinate information, length and width (width and height) information, and other information.
  • image information is obtained, where the image information includes current frame information and previous frame information of multiple targets; a first match is performed according to the current frame information and the previous frame information of the multiple targets to determine whether at least one target among the multiple targets is matched successfully once; if the at least one target is not matched successfully once, a second match is performed to determine whether the at least one target is matched successfully twice; the information of the at least one target for which the first match and/or the second match is successful is formed into output information, where the output information includes current presence information and identification information. Processing at least one target in a frame of image simultaneously can increase the efficiency of tracking.
  • the multi-target real-time tracking method provided by an embodiment of the present invention can be applied to devices capable of multi-target real-time tracking, such as computers, servers, mobile phones, and other such devices.
  • FIG. 4 is a schematic flowchart of another multi-target real-time tracking method provided by an embodiment of the present invention. As shown in FIG. 4, the method includes the following steps:
  • the second match includes at least one of feature matching and distance matching.
  • step 202 if the target is successfully matched, the target information of the successful match may be placed in an active set, and the target information of unsuccessful match may be placed in a lost set.
  • in step 203, after the targets in the lost set are matched a second time, targets with a successful match can be obtained; the information of these targets can be added to the active set, and the information of the targets in the active set can be output. If the match is not successful, go to step 205.
  • in step 205, for the current feature boxes and previous feature boxes that have not been successfully matched in the second match, target feature boxes are regenerated; the regenerated feature boxes are recorded in the active set and kept consistent with the identification information in the active set. For example, suppose the active set contains the two elements current feature box A' and current feature box D'; B is a current feature box without a successful match; B' is a previous feature box with a successful match; C is the current feature box that successfully matched B'; and C' is a previous feature box without a successful match. The identifier C of the successfully matched current feature box is changed to the identifier B' and recorded in the active set, yielding current feature box B'; the active set then contains the three elements current feature box A', current feature box B', and current feature box D', and the previous feature box B' is deleted. For the current feature box B and the previous feature box C' without a match, the previous feature box C' is deleted, and the identifier B of the current feature box is regenerated as E' and recorded in the active set; the active set then contains the four elements current feature box A', current feature box B', current feature box D', and current feature box E', which form the current frame information.
  • obtaining new image information includes acquiring new original image information, performing real-time detection on the new original image to obtain the next frame information, and adding the current feature boxes in the active set to the new image information; the new tracking process is performed cyclically to obtain all tracking results, as shown in FIG. 5.
  • step 201 to step 205 can also be executed cyclically, and multiple targets can be tracked.
  • step 205 is optional. In some embodiments, it is only necessary to form the output information output for the information of the at least one target with a successful match and / or a successful second match.
  • the current frame information includes current detection information of the multiple targets, and the previous frame information includes historical existence information and corresponding identification information of the multiple targets;
  • matching the current frame information of the multiple targets with the previous frame information once to determine whether at least one of the multiple targets is successfully matched once includes:
  • according to the degree of overlap, it is determined whether the at least one target is successfully matched once.
  • the above current detection information includes the current feature box information obtained by real-time detection of the current original image and the generated identifier of each current feature box; the current feature box information includes center coordinate information, length and width (width and height) information, and confidence information.
  • the identifier of the current feature box may be a unique identifier, such as a unique number, a unique letter, and so on.
  • the above-mentioned historical existence information may be the feature box information existing in the image of the previous frame, and the corresponding identification information is the unique identifier of the feature box existing in the image of the previous frame, which may also be called the unique identifier of the previous feature box.
  • the degree of overlap may be the degree of overlap between the current feature frame and the previous feature frame, including the degree of overlap of the length, width (width and height) coordinates, and the degree of overlap of the area.
  • the degree of overlap may also be the similarity of feature vectors or the similarity of feature frames.
  • when the above degree of overlap or similarity is greater than a preset threshold, the at least one target can be judged as successfully matched once; when the degree of overlap or similarity is less than the preset threshold, the at least one target can be judged as unsuccessfully matched once.
  • the judging whether the at least one target is successfully matched according to the overlapping degree includes:
  • each previous feature box is placed among the multiple current feature boxes for matching: the overlapping area between each previous feature box and the multiple current feature boxes is calculated, and from the overlapping area the degree of overlap (Intersection-over-Union, IoU, also known as the intersection-over-union ratio) between each previous feature box and the multiple current feature boxes is calculated. The current feature box with the maximum degree of overlap with a previous feature box forms a group with it, and the maximum overlap degree of the previous feature box is compared with the preset overlap threshold. For example: A' and B' are identifiers of previous feature boxes, and A and B are identifiers of current feature boxes. If the overlap of A' and A is 0.4 and the overlap of A' and B is 0.8, then A' and B have the maximum overlap and are recorded as a group; if the overlap of B' and A is 0.4 and the overlap of B' and B is 0.5, then B' and B have the maximum overlap and are recorded as a group. If the maximum overlap meets the overlap threshold, the pair is recorded as a successful match: assuming the overlap threshold is 0.6, A' matches B successfully, while the overlap between B' and B is less than 0.6, so the match between B' and B is unsuccessful. If the maximum degree of overlap does not meet the overlap threshold, the pair is recorded as an unsuccessful first match, and the previous feature box proceeds to step 203 for the second match.
  • the current frame includes current detection information of the multiple targets, and the previous frame information includes historical existence information and corresponding identification information of the multiple targets;
  • the performing second matching to determine whether the at least one target is successfully matched twice includes:
  • the above-mentioned current detection information includes the current feature frame information and identification
  • the historical presence information includes the previous feature frame information and identification.
  • the current feature vector can be obtained by extracting the Histogram of Oriented Gradients (HOG) of the current feature box.
  • the previous feature vector can be obtained by extracting the histogram of oriented gradients of the previous feature box; the cosine similarity between the current HOG feature vector and the previous HOG feature vector is then calculated.
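A sketch of this feature-matching step is given below. Note that the descriptor here is a deliberately simplified, single-histogram stand-in for a full HOG descriptor (no cells or block normalization), and the bin count of 9 is an assumption:

```python
import numpy as np

# Simplified feature matching: extract a gradient-orientation histogram from
# each box crop, then compare the two descriptors with cosine similarity.

def orientation_histogram(patch, bins=9):
    """Crude HOG-like descriptor: one orientation histogram over the whole
    patch, weighted by gradient magnitude and L2-normalized."""
    patch = patch.astype(float)
    gy, gx = np.gradient(patch)                          # image gradients
    mag = np.hypot(gx, gy)                               # gradient magnitude
    ang = np.mod(np.degrees(np.arctan2(gy, gx)), 180.0)  # unsigned orientation
    hist, _ = np.histogram(ang, bins=bins, range=(0, 180), weights=mag)
    n = np.linalg.norm(hist)
    return hist / n if n > 0 else hist

def cosine_similarity(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(np.dot(u, v) / denom) if denom > 0 else 0.0
```

Two crops of the same target should yield descriptors with cosine similarity near 1, which is what the second match thresholds against.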
  • the current feature box includes the current center coordinate information, current length and width (width and height) information
  • the previous feature box includes the previous center coordinate information, the previous length and width (width and height) information
  • the distance value between the current feature box and the previous feature box It can be calculated by the following formula:
  • D is the distance between the current feature box and the previous feature box
  • x1, y1, w1 belong to the current feature box
  • x1, y1 are the center coordinates of the current feature box
  • w1 is the width of the current feature box
  • x2, y2, w2 belong to the previous feature box
  • x2 and y2 are the center coordinates of the previous feature box
  • w2 is the width of the previous feature box.
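The formula itself is not reproduced in this text; only its variables survive. Since those variables are the center coordinates and widths of the two boxes, one plausible reading is a width-normalized Euclidean distance between centers. The sketch below is that assumption, not the patent's exact formula:

```python
import math

# Assumed distance between box centers, normalized by the mean box width so
# the value is scale-invariant. This is a guess consistent with the listed
# variables (x1, y1, w1, x2, y2, w2), not the patent's published formula.

def center_distance(x1, y1, w1, x2, y2, w2):
    euclid = math.hypot(x1 - x2, y1 - y2)   # distance between centers
    return euclid / ((w1 + w2) / 2.0)       # normalize by mean box width
```

Under this reading, a distance threshold on D accepts pairs whose centers moved by less than a few box widths between frames.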
  • the judging whether the at least one target is successfully matched twice according to the characteristic value and the distance value of the at least one target includes:
  • the judgment rule includes: when the cosine similarity is greater than a preset cosine-similarity threshold, the second match can be considered successful; when the distance value is less than a set distance threshold, the second match can also be considered successful; of course, when the cosine similarity is greater than the preset cosine-similarity threshold and the distance value is less than the set distance threshold, the second match can likewise be considered successful.
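The judgment rule above can be sketched as a small predicate. All threshold values and the option flag are illustrative assumptions:

```python
# Second-match decision rule: feature similarity alone, distance alone, or
# (optionally) both together may declare the second match successful.

def second_match_ok(cos_sim, dist,
                    cos_threshold=0.8, dist_threshold=1.0,
                    require_both=False):
    feat_ok = cos_sim > cos_threshold   # feature matching succeeded
    dist_ok = dist < dist_threshold     # distance matching succeeded
    return (feat_ok and dist_ok) if require_both else (feat_ok or dist_ok)
```

With `require_both=False` the rule mirrors the "and/or" wording: either criterion rescues a target that failed the first match.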
  • forming the output information of the at least one target with a successful first match and/or a successful second match includes:
  • the output information of the at least one target is formed according to the current presence information and the corresponding identification information.
  • the above current presence information includes the current feature box. For example, if the matching is successful between the current feature box A and the previous feature box A ', the information of the current feature box A is output.
  • the information of the current feature box includes the center coordinate information, length and width (width and height) information, etc. of the current feature box.
  • the above identification information includes the identification information of the current feature box, which is used to represent the current feature box; for example, if the current feature box is A, A is output, and if the current feature box is A', A' is output. The identification information of the current feature box is associated with the current feature box.
  • step 202 if the target is successfully matched, the target information of the successful match may be placed in an active set, and the target information of unsuccessful match may be placed in a lost set.
  • in step 203, after the targets in the lost set are matched a second time, targets with a successful match are obtained; the information of these targets can be added to the active set, and the information of the targets in the active set is output.
  • A is the identification of the current feature box
  • A' is the identifier of the previous feature box
  • if A and A' are a pair of current and previous feature boxes that match successfully, the identifier of the current feature box is changed from A to A', the previous feature box is then deleted from the image information, and the current feature box A' is recorded into the active set; the output information is then the identifier A' of the current feature box together with its center coordinate information, length and width (width and height) information, and other information.
  • a multi-target real-time tracking device includes:
  • the obtaining module 401 is used to obtain image information, where the image information includes current frame information and previous frame information of multiple targets;
  • the first matching module 402 is configured to perform a match based on the current frame information of the multiple targets and the previous frame information, and determine whether at least one of the multiple targets is successfully matched at one time;
  • the second matching module 403 is configured to perform secondary matching if the at least one target has not been matched successfully once, and determine whether the at least one target has succeeded in secondary matching.
  • the secondary matching includes at least one of feature matching and distance matching.
  • the output module 404 is configured to form the information of the at least one target that is matched successfully the first time and/or the second time into output information, where the output information includes current presence information and identification information.
  • the device further includes:
  • the generating module 405 is configured to, if the second match is unsuccessful, regenerate the current frame information of the remaining targets to obtain new image information, where the new image information includes the current frame information and next frame information.
  • the current frame includes current detection information of the multiple targets, and the previous frame information includes historical presence information and corresponding identification information of the multiple targets;
  • the first matching module 402 includes:
  • the first processing unit 4021 is configured to calculate the degree of overlap between the current detection information of at least one of the multiple targets and the historical presence information of at least one of the multiple targets, to obtain the degree of overlap between the current detection information of the at least one target and the historical presence information of the at least one target;
  • the first determining unit 4022 is configured to determine, according to the degree of overlap, whether the at least one target is matched successfully the first time.
  • the first determining unit 4022 includes:
  • the comparison subunit 40221 is configured to select the maximum degree of overlap and compare it with a preset overlap threshold, to determine whether the maximum degree of overlap is greater than the overlap threshold;
  • the judgment subunit 40222 is configured to determine that the first match is successful if the maximum degree of overlap is greater than the overlap threshold, and that the first match is unsuccessful if the maximum degree of overlap is less than the overlap threshold.
  • the current frame includes current detection information of the multiple targets, and the previous frame information includes historical presence information and corresponding identification information of the multiple targets;
  • the second matching module 403 includes:
  • the second processing unit 4031 is configured to extract a current feature vector from the current detection information of at least one of the multiple targets, extract a historical feature vector from the historical presence information of at least one of the multiple targets, and compute the current feature vector against the historical feature vector to obtain the cosine similarity of the at least one target;
  • the third processing unit 4032 is configured to extract the current coordinates from the current detection information of the at least one target and the historical coordinates from the historical presence information of the at least one target, and compute the current coordinates against the historical coordinates to obtain the distance value of the at least one target;
  • the second determining unit 4033 is configured to determine, according to the cosine similarity and the distance value of the at least one target, whether the at least one target is matched successfully the second time.
  • the output module includes:
  • the updating unit 4041 is configured to update the current detection information of the at least one target to the current presence information, and associate the corresponding identification information of the at least one target with the current detection information;
  • the output unit 4042 is configured to form the output information of the at least one target according to the current presence information and the corresponding identification information.
  • an embodiment of the present invention provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor, when executing the computer program, implements the steps in the multi-target real-time tracking method provided by the embodiments of the present invention.
  • an embodiment of the present invention provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements the steps in the multi-target real-time tracking method provided by the embodiments of the present invention.

Abstract

Provided are a multi-objective real-time tracking method and apparatus, and an electronic device. The method comprises: acquiring image information (101), wherein the image information comprises information of the current frame and information of the previous frame of a plurality of objectives; matching the information of the current frame against the information of the previous frame of the plurality of objectives, and determining whether at least one objective among the plurality of objectives is successfully matched a first time (102); if the at least one objective is not successfully matched a first time, performing a second match and determining whether the at least one objective is successfully matched a second time (103); and forming the information of the at least one objective that is successfully matched a first time and/or a second time into output information (104), wherein the output information comprises current existence information and identification information. Matching at least one objective twice within one image frame can increase the tracking accuracy.

Description

Multi-target real-time tracking method, device and electronic equipment
Technical Field
The present invention relates to the field of software development, and more specifically to a multi-target real-time tracking method, device and electronic equipment.
Background
Tracking determines the motion trajectory of a target (an object or a person). Current single-target tracking algorithms, such as those based on correlation filtering (KCF), can perform real-time single-target tracking on low-power devices. However, on terminal devices, power-consumption constraints leave current tracking algorithms with a real-time problem in multi-target tracking. In particular, once the total number of targets exceeds 10, combining multiple single-target tracking algorithms requires a large amount of computation and incurs high data-processing latency, so the tracking accuracy obtained with this approach is low.
Summary of the Invention
The purpose of the present invention is to address the above-mentioned defects of the prior art by providing a multi-target real-time tracking method, device and electronic device, solving the problem of low tracking accuracy.
The purpose of the present invention is achieved by the following technical solutions:
In a first aspect, a multi-target real-time tracking method is provided. The method includes:
acquiring image information, where the image information includes current frame information and previous frame information of multiple targets;
performing a first match based on the current frame information and the previous frame information of the multiple targets, and determining whether at least one of the multiple targets is matched successfully the first time;
if the at least one target is not matched successfully the first time, performing a second match and determining whether the at least one target is matched successfully the second time, where the second match includes at least one of feature matching and distance matching;
forming the information of the at least one target that is matched successfully the first time and/or the second time into output information, where the output information includes current presence information and identification information.
Optionally, after the performing a second match if the at least one target is not matched successfully the first time and determining whether the at least one target is matched successfully the second time, the method further includes:
if the second match is unsuccessful, regenerating the current frame information of the remaining targets to obtain new image information, where the new image information includes the current frame information and next frame information.
Optionally, the current frame includes current detection information of the multiple targets, and the previous frame information includes historical presence information and corresponding identification information of the multiple targets;
the performing a first match based on the current frame information and the previous frame information of the multiple targets and determining whether at least one of the multiple targets is matched successfully the first time includes:
calculating the degree of overlap between the current detection information of at least one of the multiple targets and the historical presence information of at least one of the multiple targets, to obtain the degree of overlap between the current detection information of the at least one target and the historical presence information of the at least one target;
determining, according to the degree of overlap, whether the at least one target is matched successfully the first time.
Optionally, the determining, according to the degree of overlap, whether the at least one target is matched successfully includes:
selecting the maximum degree of overlap and comparing it with a preset overlap threshold, to determine whether the maximum degree of overlap is greater than the overlap threshold;
if the maximum degree of overlap is greater than the overlap threshold, the first match is successful; if the maximum degree of overlap is less than the overlap threshold, the first match is unsuccessful.
Optionally, the current frame includes current detection information of the multiple targets, and the previous frame information includes historical presence information and corresponding identification information of the multiple targets;
the performing a second match and determining whether the at least one target is matched successfully the second time includes:
extracting a current feature vector from the current detection information of at least one of the multiple targets, extracting a historical feature vector from the historical presence information of at least one of the multiple targets, and computing the current feature vector against the historical feature vector to obtain the cosine similarity of the at least one target;
extracting the current coordinates from the current detection information of the at least one target and the historical coordinates from the historical presence information of the at least one target, and computing the current coordinates against the historical coordinates to obtain the distance value of the at least one target;
determining, according to the cosine similarity and the distance value of the at least one target, whether the at least one target is matched successfully the second time.
Optionally, the determining, according to the feature value and the distance value of the at least one target, whether the at least one target is matched successfully the second time includes:
comparing the cosine similarity with a preset cosine similarity threshold and comparing the distance value with a preset distance threshold, to obtain a comparison result;
determining, according to the comparison result and a preset judgment rule, whether the at least one target is matched successfully the second time.
Optionally, the forming the information of the at least one target that is matched successfully the first time and/or the second time into output information includes:
updating the current detection information of the at least one target to the current presence information, and associating the corresponding identification information of the at least one target with the current detection information;
forming the output information of the at least one target according to the current presence information and the corresponding identification information.
In a second aspect, a multi-target real-time tracking device is provided. The device includes:
an acquisition module, configured to acquire image information, where the image information includes current frame information and previous frame information of multiple targets;
a first matching module, configured to perform a first match based on the current frame information and the previous frame information of the multiple targets, and determine whether at least one of the multiple targets is matched successfully the first time;
a second matching module, configured to perform a second match if the at least one target is not matched successfully the first time, and determine whether the at least one target is matched successfully the second time, where the second match includes at least one of feature matching and distance matching;
an output module, configured to form the information of the at least one target that is matched successfully the first time and/or the second time into output information, where the output information includes current presence information and identification information.
In a third aspect, an electronic device is provided, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor, when executing the computer program, implements the steps in the multi-target real-time tracking method provided by the embodiments of the present invention.
In a fourth aspect, a computer-readable storage medium is provided, storing a computer program which, when executed by a processor, implements the steps in the multi-target real-time tracking method provided by the embodiments of the present invention.
Beneficial effects of the present invention: image information is acquired, where the image information includes current frame information and previous frame information of multiple targets; a first match is performed based on the current frame information and the previous frame information of the multiple targets, and it is determined whether at least one of the multiple targets is matched successfully the first time; if the at least one target is not matched successfully the first time, a second match is performed, and it is determined whether the at least one target is matched successfully the second time; the information of the at least one target that is matched successfully the first time and/or the second time is formed into output information, where the output information includes current presence information and identification information. Matching at least one target twice within one image frame can increase the tracking accuracy.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of a multi-target real-time tracking method provided by an embodiment of the present invention;
FIG. 2 is a schematic diagram of current information according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of image information according to an embodiment of the present invention;
FIG. 4 is a schematic flowchart of another multi-target real-time tracking method provided by an embodiment of the present invention;
FIG. 5 is a schematic flowchart of another multi-target real-time tracking method provided by an embodiment of the present invention;
FIG. 6 is a schematic diagram of a multi-target real-time tracking device provided by an embodiment of the present invention;
FIG. 7 is a schematic diagram of another multi-target real-time tracking device provided by an embodiment of the present invention;
FIG. 8 is a schematic diagram of another multi-target real-time tracking device provided by an embodiment of the present invention;
FIG. 9 is a schematic diagram of another multi-target real-time tracking device provided by an embodiment of the present invention;
FIG. 10 is a schematic diagram of another multi-target real-time tracking device provided by an embodiment of the present invention;
FIG. 11 is a schematic diagram of another multi-target real-time tracking device provided by an embodiment of the present invention.
Detailed Description
The preferred embodiments of the present invention are described below. Those of ordinary skill in the art will be able to implement them using related technologies in the art according to the following description, and will better appreciate the innovations and benefits of the present invention.
The present invention provides a multi-target real-time tracking method, device and electronic equipment.
The purpose of the present invention is achieved by the following technical solutions:
In a first aspect, referring to FIG. 1, FIG. 1 is a schematic flowchart of a multi-target real-time tracking method provided by an embodiment of the present invention. As shown in FIG. 1, the method includes the following steps:
101. Acquire image information, where the image information includes current frame information and previous frame information of multiple targets.
In this step, the image information may be image information of video frames collected by a camera, and the image information may be identified according to the time of the video frame. For example, if one frame of the video was acquired at 15.6789 seconds, that frame may be identified as 15S6789. Alternatively, the frame's sequence number among the total video frames may be used; for example, if the frame is the 14567th frame of the video, it may be identified as 14567. Of course, embodiments of the present invention are not limited to these two identification methods; other identification methods may also be used, such as a timestamp with a date or a sequential identifier with a camera number.
The current frame information includes, for the multiple targets in the image, feature coordinate values, feature range values, confidence values, and similar information; the previous frame information includes the identifications, feature coordinate values, feature range values, and confidence values of the multiple targets. Feature coordinate values and feature range values may be measured in pixels or in real-world dimensions, which is not specifically limited in the embodiments of the present invention. As shown in FIG. 2, the current frame information can be obtained by detecting the multiple targets in the original image of the current frame in real time. When target information is detected in the image, a current feature box is used to represent that target information, that is, the target; the center coordinate information, length and width (width and height) information, and confidence of the current feature box are obtained. The confidence measures the credibility of the target's existence: the higher the confidence, the more likely the target exists and the more credible the current feature box is. The confidence can be obtained during real-time detection of the image. The feature range value includes the area occupied by the feature image within the feature box.
The previous frame information may be the identifications, feature coordinate values, feature range values, confidence values, and similar information of the multiple targets in the previous frame. It should be noted that the identifications of the multiple targets in the current frame differ from those in the previous frame; in other words, the identifications in the current frame do not overlap with those in the previous frame. For example, if the identifications of the targets in the current frame are A, B, C and D, the identifications of the targets in the previous frame may be A', B', C' and D', where A and A' may be different targets. As shown in FIG. 3, the previous frame information can be obtained by real-time detection of the multiple targets in the original image of the previous frame, or in the processed image of the previous frame. The target information in the previous frame information is represented by a previous feature box that is distinguished from the feature boxes in the current frame information; for example, if the current feature box is a solid-line box, the previous feature box is a dashed-line box. Of course, feature-box identifiers can also be used to distinguish them, for example by associating the target's identification with the feature box, or by configuring two different kinds of identifiers for the feature boxes of the current frame information and of the previous frame information.
Specifically and optionally, the image information includes the original image of the current frame, the current feature boxes, the previous feature boxes, and similar information. A current feature box includes a current identification, current center coordinate information, current length and width (width and height) information, and the confidence of its appearance; a previous feature box includes a previous identification, previous center coordinate information, and previous length and width (width and height) information.
It should be noted that a feature box may also be called a target box, the above identification may also be called an ID, the above current feature box may also be called a detection box, and the above previous feature box may also be called a tracking box. The feature range value is the area occupied by the feature image within the feature box. The above real-time detection may be performed in a tracker or obtained through a tracking algorithm; since trackers and tracking algorithms are known to those skilled in the art, they are not described in detail here.
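The feature-box information described above can be illustrated with a small record type. The layout below is a hypothetical sketch, not a data structure prescribed by the patent: the field names (`box_id`, `cx`, `cy`, `w`, `h`, `confidence`) are illustrative, and pixel units are assumed.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FeatureBox:
    # Hypothetical layout for one feature (target) box; names are illustrative.
    box_id: str                          # identification (ID), e.g. "A" or "A'"
    cx: float                            # center x coordinate
    cy: float                            # center y coordinate
    w: float                             # width
    h: float                             # height
    confidence: Optional[float] = None   # detection confidence (current boxes)

# A detection box from the current frame carries a confidence; a tracking
# box from the previous frame carries the ID assigned in earlier frames.
current_box = FeatureBox("A", 120.0, 80.0, 40.0, 90.0, confidence=0.93)
previous_box = FeatureBox("A'", 118.0, 78.0, 41.0, 88.0)
```

Under this sketch, the feature range value (the area of the box) follows directly as `w * h`.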
102. Perform a first match based on the current frame information and the previous frame information of the multiple targets, and determine whether at least one of the multiple targets is matched successfully the first time.
The current frame information includes the current feature boxes or current feature vectors of the multiple targets, and the previous frame information includes the previous feature boxes or previous feature vectors of the multiple targets. A deep-learning target detection algorithm is used to match at least one of the multiple targets.
In this embodiment of the present invention, the current frame information in step 102 includes multiple current feature boxes corresponding to the multiple targets, and the previous frame information includes multiple previous feature boxes corresponding to the multiple targets. The information obtained by real-time detection is output to form a set of image information including multiple current feature boxes and multiple previous feature boxes. Each previous feature box is matched against the multiple current feature boxes: the overlapping area between that previous feature box and each current feature box is computed, the degree of overlap (Intersection-over-Union, IoU) of each pair is calculated from the overlapping area, and the current feature box with the maximum overlap with the previous feature box is selected to form a pair, as shown in FIG. 3. The maximum overlap of the previous feature box is then compared with a preset overlap threshold: if the maximum overlap satisfies the threshold, the first match is recorded as successful; if it does not, the first match is recorded as unsuccessful, and the previous feature box proceeds to step 103 for the second match.
In some possible implementations, the similarity of the feature boxes can also be compared to match the current feature box corresponding to the feature box of the previous frame. Similarity includes area similarity, length and width (width and height) similarity, and so on.
It should be noted that the above first match may also be called the first-time match, initial match, first tracking, or initial tracking, or simply tracking.
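The first matching pass of step 102 can be sketched as follows. This is a minimal illustration, assuming axis-aligned boxes given as `(cx, cy, w, h)` tuples; the 0.5 threshold and the per-box greedy maximum are illustrative choices rather than values fixed by the method, and a fuller implementation would also remove each matched current box from the candidate pool.

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as (cx, cy, w, h)."""
    ax1, ay1 = box_a[0] - box_a[2] / 2, box_a[1] - box_a[3] / 2
    ax2, ay2 = box_a[0] + box_a[2] / 2, box_a[1] + box_a[3] / 2
    bx1, by1 = box_b[0] - box_b[2] / 2, box_b[1] - box_b[3] / 2
    bx2, by2 = box_b[0] + box_b[2] / 2, box_b[1] + box_b[3] / 2
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))   # width of the intersection
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))   # height of the intersection
    inter = iw * ih
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union > 0 else 0.0

def first_match(prev_boxes, curr_boxes, iou_threshold=0.5):
    """For each previous feature box, select the current box with maximum
    IoU and accept the pair only if that maximum exceeds the threshold;
    unmatched previous boxes go on to the second match (step 103)."""
    matched, lost = [], []
    for p in prev_boxes:
        best = max(range(len(curr_boxes)),
                   key=lambda j: iou(p, curr_boxes[j]),
                   default=None)
        if best is not None and iou(p, curr_boxes[best]) > iou_threshold:
            matched.append((p, curr_boxes[best]))
        else:
            lost.append(p)
    return matched, lost
```

For example, a previous box `(10, 10, 10, 10)` against a current box `(11, 10, 10, 10)` overlaps with IoU 90/110, above the threshold, so the pair is accepted, while a distant previous box falls into the lost list.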
103. If the at least one target is not matched successfully the first time, perform a second match to determine whether the at least one target is matched successfully the second time, where the second match includes at least one of feature matching and distance matching.
In this step, the targets that were not matched successfully in step 102 undergo a second match, which includes at least one of feature matching and distance matching. Feature matching includes obtaining the current feature vector of the current feature box and the previous feature vector of the previous feature box, and calculating the similarity between the previous feature vector and the current feature vector. Distance matching includes obtaining the distance value between the previous feature box and the current feature box. A previous feature box and a current feature box whose similarity is greater than a preset similarity threshold are matched; or a previous feature box and a current feature box whose distance value is less than a preset distance threshold are matched; or a previous feature box and a current feature box that simultaneously satisfy both conditions (similarity greater than the preset similarity threshold and distance value less than the preset distance threshold) are matched.
It should be noted that the above second match may also be called the second-time match, rematch, second tracking, second-time tracking, retracking, and so on.
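A minimal sketch of this second matching pass is given below, using cosine similarity on appearance feature vectors plus Euclidean distance between box centers. The thresholds (0.8 and 50.0) and the rule requiring both conditions are illustrative assumptions; as described above, the method also allows either criterion alone.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu > 0 and nv > 0 else 0.0

def center_distance(p, q):
    """Euclidean distance between two (x, y) box centers."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def second_match(prev_feat, curr_feat, prev_center, curr_center,
                 sim_threshold=0.8, dist_threshold=50.0):
    """Declare the second match successful when the appearance similarity
    is high enough AND the centers are close enough (one possible rule)."""
    sim = cosine_similarity(prev_feat, curr_feat)
    dist = center_distance(prev_center, curr_center)
    return sim > sim_threshold and dist < dist_threshold

# Similar appearance vectors and nearby centers -> second match succeeds.
ok = second_match([1.0, 0.0, 1.0], [0.9, 0.1, 1.1], (100, 100), (104, 103))
```

The gating on both appearance and distance follows the description above: a lost target is re-associated only with a detection that both looks like it and has not moved implausibly far.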
104. Form the information of the at least one target that is matched successfully the first time and/or the second time into output information, where the output information includes current presence information and identification information.
The above current presence information includes the current feature box. For example, if current feature box A and previous feature box A' are matched successfully, the information of current feature box A is output; the information of the current feature box includes its center coordinate information, length and width (width and height) information, and so on. The above identification information includes the identification information of the current feature box, used to represent the current feature box; for example, if the current feature box is A, A is output. The identification information of the current feature box is associated with the current feature box.
在步骤102中匹配成功的目标,可以将匹配成功的目标信息放入一个活跃集合中,将匹配不成功的目标信息放入一个丢失集合中。在步骤103中对丢失集合中的目标进行二次匹配后,得到匹配成功的目标,可以将匹配成功的目标信息添加进活跃集合中,将活跃集合中目标的信息进行输出。In step 102, if the target is successfully matched, the target information of the successful match may be placed in an active set, and the target information of unsuccessful match may be placed in a lost set. In step 103, after the target in the missing set is matched for a second time, a target with a successful match is obtained, and the target information of the successful match may be added to the active set, and the information of the target in the active set is output.
将匹配成功的目标进行更新，得到当前存在信息及标识信息。对于匹配成功的当前特征框以及上一特征框，将上一特征框的标识信息更新到当前特征框，使当前特征框的标识与上一特征框的标识统一，用于表示同一个目标，也就是说对于该目标跟踪成功；同时，删除匹配成功的上一特征框，使活跃集合中只存在当前特征框的信息，形成目标的当前存在信息与标识信息。例如：A为当前特征框的标识，A'为上一特征框的标识，A和A'为匹配成功的一对当前特征框及上一特征框，将当前特征框的标识由A更改为A'，然后将上一特征框在图像信息中删除，将当前特征框A'记入活跃集合中，则输出的信息为当前特征框的标识A'以及当前特征框的中心坐标信息、长宽（宽高）信息等信息。The successfully matched target is updated to obtain the current presence information and identification information. For a successfully matched pair of current feature box and previous feature box, the identifier of the previous feature box is propagated to the current feature box, so that the current feature box carries the same identifier as the previous one, representing the same target; that is, the target is tracked successfully. At the same time, the matched previous feature box is deleted, so that only the information of the current feature box remains in the active set, forming the target's current presence information and identification information. For example, let A be the identifier of the current feature box and A' the identifier of the previous feature box, A and A' being a successfully matched pair: the identifier of the current feature box is changed from A to A', the previous feature box is deleted from the image information, and current feature box A' is recorded in the active set; the output information is then the identifier A' together with the center coordinate and length-width (width-height) information of the current feature box.
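The identifier propagation described above (the matched current box inheriting the previous box's identifier, the stale entry being removed, and only the current box remaining in the active set) can be sketched as follows. The dictionary-based active set and the function name are illustrative assumptions, not the patented implementation:

```python
def update_on_match(active, detections, prev_id, curr_id):
    # After a successful match, the current box (detected under the
    # temporary identifier `curr_id`) inherits the previous-frame
    # identifier `prev_id`, so the same target keeps a single ID.
    # The detection entry is removed so only one copy remains.
    active[prev_id] = detections.pop(curr_id)
    return active
```

Calling `update_on_match(active, dets, "A'", "A")` implements the A → A' renaming in the example above.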
在本发明实施例中，获取图像信息，所述图像信息包括多个目标的当前帧信息及上一帧信息；根据所述多个目标的当前帧信息与上一帧信息进行一次匹配，判断所述多个目标中的至少一个目标是否一次匹配成功；若所述至少一个目标没有一次匹配成功，则进行二次匹配，判断所述至少一个目标是否二次匹配成功；将一次匹配成功和/或二次匹配成功的所述至少一个目标的信息形成输出信息，所述输出信息包括当前存在信息及标识信息。在一帧图像中同时处理至少一个目标，可以增加跟踪的效率。In the embodiment of the present invention, image information is obtained, the image information including the current frame information and previous frame information of multiple targets; a first match is performed according to the current frame information and previous frame information of the multiple targets, to determine whether at least one of the multiple targets is successfully matched a first time; if the at least one target is not successfully matched a first time, a second match is performed to determine whether the at least one target is successfully matched a second time; and the information of the at least one target with a successful first match and/or a successful second match is formed into output information, the output information including current presence information and identification information. Processing at least one target simultaneously in one frame of image can increase tracking efficiency.
需要说明的是，本发明实施例提供的多目标实时跟踪方法可以应用于多目标实时跟踪设备，例如：计算机、服务器、手机等可以进行多目标实时跟踪的设备。It should be noted that the multi-target real-time tracking method provided by the embodiments of the present invention can be applied to multi-target real-time tracking equipment, such as computers, servers, mobile phones, and other devices capable of multi-target real-time tracking.
请参见图4,图4是本发明实施例提供的另一种多目标实时跟踪方法的流程示意图,如图4所示,所述方法包括以下步骤:Please refer to FIG. 4. FIG. 4 is a schematic flowchart of another multi-target real-time tracking method provided by an embodiment of the present invention. As shown in FIG. 4, the method includes the following steps:
201、获取图像信息,所述图像信息包括多个目标的当前帧信息及上一帧信息;201. Acquire image information, where the image information includes current frame information and previous frame information of multiple targets;
202、根据所述多个目标的当前帧信息与上一帧信息进行一次匹配，判断所述多个目标中的至少一个目标是否一次匹配成功；202. Performing a first match according to the current frame information and previous frame information of the multiple targets, and determining whether at least one of the multiple targets is successfully matched a first time;
203、若所述至少一个目标没有一次匹配成功,则进行二次匹配,判断所述至少一个目标是否二次匹配成功,所述二次匹配包括特征匹配、距离匹配中至少一项;203. If none of the at least one target is successfully matched, perform a second match to determine whether the at least one target is successfully matched twice. The second match includes at least one of feature matching and distance matching.
204、将一次匹配成功和/或二次匹配成功的所述至少一个目标的信息形成输出信息，所述输出信息包括当前存在信息及标识信息；204. Forming the information of the at least one target with a successful first match and/or a successful second match into output information, the output information including current presence information and identification information;
205、若二次匹配不成功,则重新生成剩余目标的当前帧信息,获取新的图像信息,所述新的图像信息包括所述当前帧信息与下一帧信息。205. If the secondary matching is unsuccessful, regenerate the current frame information of the remaining target to obtain new image information, where the new image information includes the current frame information and the next frame information.
在步骤202中匹配成功的目标，可以将匹配成功的目标信息放入一个活跃集合中，将匹配不成功的目标信息放入一个丢失集合中。在步骤203中对丢失集合中的目标进行二次匹配后，得到匹配成功的目标，可以将匹配成功的目标信息添加进活跃集合中，将活跃集合中的目标信息进行输出；若匹配不成功，则转入步骤205。For the targets successfully matched in step 202, the matched target information may be placed in an active set, and the unmatched target information may be placed in a lost set. After the targets in the lost set undergo secondary matching in step 203, the information of the successfully matched targets may be added to the active set, and the target information in the active set is output; if the match is unsuccessful, the process proceeds to step 205.
在步骤205中，对于二次匹配仍未成功的当前特征框与上一特征框，则重新生成目标的特征框，将重新生成的特征框记入活跃集合中，并为该些特征框重新生成符合活跃集合的标识信息。例如：假设活跃集中存在当前特征框A'和当前特征框D'两个元素，B为没有匹配成功的当前特征框，B'为匹配成功的上一特征框，C为与B'匹配成功的当前特征框，C'为没有匹配成功的上一特征框，则将匹配成功的当前特征框的标识C更改为标识B'记入活跃集，得到当前特征框B'，此时活跃集中存在当前特征框A'、当前特征框B'和当前特征框D'三个元素，删除上一特征框B'；对于没有匹配上的当前特征框B与上一特征框C'，删除上一特征框C'，将当前特征框的标识B重新生成为E'，记入活跃集中，则此时活跃集中存在当前特征框A'、当前特征框B'、当前特征框D'和当前特征框E'四个元素，得到当前帧信息。In step 205, for a current feature box and a previous feature box that still fail the secondary matching, the target's feature box is regenerated, the regenerated feature box is recorded in the active set, and new identification information consistent with the active set is generated for these feature boxes. For example, suppose the active set contains two elements, current feature box A' and current feature box D'; B is a current feature box without a successful match, B' is a successfully matched previous feature box, C is the current feature box that matched B', and C' is a previous feature box without a successful match. The identifier C of the successfully matched current feature box is changed to B' and recorded in the active set, yielding current feature box B'; the active set then contains three elements, A', B', and D', and the previous feature box B' is deleted. For the unmatched current feature box B and previous feature box C', the previous feature box C' is deleted, and the identifier B of the current feature box is regenerated as E' and recorded in the active set; the active set then contains four elements, A', B', D', and E', giving the current frame information.
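The regeneration in step 205, where each still-unmatched current feature box receives a fresh identifier and joins the active set, might be sketched as below. The "E1", "E2", … identifier scheme is a hypothetical stand-in for the patent's unspecified ID generation:

```python
import itertools

_id_counter = itertools.count(1)  # illustrative global ID source

def regenerate_unmatched(active, unmatched_boxes):
    # Unmatched current boxes are treated as new targets: each receives
    # a fresh identifier and joins the active set, so it can take part
    # in matching against the next frame.
    for box in unmatched_boxes:
        active[f"E{next(_id_counter)}"] = box
    return active
```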
获取新的图像信息包括获取新的原始图像信息，对新的原始图像进行实时检测，得到下一帧信息，将活跃集中的当前特征框加入新的图像信息，可以循环执行新的跟踪过程以得到全部跟踪结果，如图5所示。另外，也可以循环执行步骤201到步骤205，对多目标进行跟踪。Obtaining new image information includes acquiring new original image information, performing real-time detection on the new original image to obtain the next frame information, and adding the current feature boxes in the active set to the new image information; the new tracking process can be executed cyclically to obtain all tracking results, as shown in FIG. 5. Alternatively, steps 201 to 205 can be executed cyclically to track multiple targets.
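One full iteration of steps 201 to 205 can be sketched as a per-frame loop. The matcher callbacks stand in for the first match (by overlap) and second match (by feature/distance) described above; all names and the return conventions are illustrative assumptions:

```python
def track_frame(active, detections, first_match_fn, second_match_fn, new_id_fn):
    # One tracking iteration (steps 201 to 205, sketched):
    # 1) first match by overlap, 2) second match of the leftovers by
    # feature/distance, 3) remaining current boxes become new targets.
    matched, lost_prev, lost_curr = first_match_fn(active, detections)
    matched2, lost_prev, lost_curr = second_match_fn(lost_prev, lost_curr)
    next_active = {}
    for prev_id, curr_id in matched + matched2:
        next_active[prev_id] = detections[curr_id]  # ID carried forward
    for curr_id in lost_curr:
        next_active[new_id_fn()] = detections[curr_id]  # fresh identifier
    return next_active
```

Running this once per incoming frame, with the returned `next_active` fed back in as `active`, reproduces the cyclic execution of steps 201 to 205.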
需要说明的是，步骤205为可选的，在一些实施例中，只需要将一次匹配成功和/或二次匹配成功的所述至少一个目标的信息形成输出信息输出即可。It should be noted that step 205 is optional; in some embodiments, it is only necessary to form the information of the at least one target with a successful first match and/or a successful second match into output information and output it.

可选的，所述当前帧包括所述多个目标的当前检测信息，所述上一帧信息包括所述多个目标的历史存在信息及对应的标识信息；Optionally, the current frame includes the current detection information of the multiple targets, and the previous frame information includes the historical presence information and corresponding identification information of the multiple targets;
所述根据所述多个目标的当前帧信息与上一帧信息进行一次匹配，判断所述多个目标中的至少一个目标是否一次匹配成功包括：The performing a first match according to the current frame information and previous frame information of the multiple targets and judging whether at least one of the multiple targets is successfully matched a first time includes:
将所述多个目标的至少一个目标的当前检测信息与所述多个目标的至少一个目标的历史存在信息进行重叠度计算，得到所述至少一个目标的当前检测信息与所述至少一个目标的历史存在信息的重叠度；Calculating the degree of overlap between the current detection information of at least one of the multiple targets and the historical presence information of that at least one target, to obtain the degree of overlap between the current detection information and the historical presence information of the at least one target;
根据重叠度判断所述至少一个目标是否一次匹配成功。Judging, according to the degree of overlap, whether the at least one target is successfully matched a first time.
上述当前检测信息包括对当前原始图像实时检测所得的当前特征框信息以及生成的各当前特征框的标识，当前特征框信息包括中心坐标信息、长宽（宽高）信息及置信度等信息，当前特征框的标识可以是数字唯一标识、字母唯一标识等唯一标识。上述的历史存在信息可以是上一帧图像中存在的特征框信息，上述对应的标识信息为上一帧图像中存在的特征框的唯一标识，也可以说是上一特征框的唯一标识。重叠度可以是当前特征框与上一特征框的重叠度，包括长宽（宽高）的坐标重叠度、面积重叠度等。The above current detection information includes the current feature box information obtained by real-time detection on the current original image, together with the generated identifier of each current feature box. The current feature box information includes center coordinate information, length-width (width-height) information, confidence, and the like; the identifier of the current feature box may be a unique identifier such as a unique numeric or alphabetic identifier. The above historical presence information may be the feature box information present in the previous frame image, and the corresponding identification information is the unique identifier of the feature box present in the previous frame image, i.e. the unique identifier of the previous feature box. The degree of overlap may be the overlap between the current feature box and the previous feature box, including coordinate overlap in length and width (width and height), area overlap, and so on.
在一些可能的实施例中,重叠度也可以是特征向量的相似度或是特征框的相似度等。In some possible embodiments, the degree of overlap may also be the similarity of feature vectors or the similarity of feature frames.
当上述重叠度或是相似度大于预先设置的阈值时，可以判断所述至少一个目标为一次匹配成功；当上述重叠度或是相似度小于预先设置的阈值时，可以判断所述至少一个目标为一次匹配不成功。When the above overlap or similarity is greater than a preset threshold, the at least one target can be judged as a successful first match; when the overlap or similarity is less than the preset threshold, the at least one target can be judged as an unsuccessful first match.
可选的，所述根据重叠度判断所述至少一个目标是否一次匹配成功包括：Optionally, the judging whether the at least one target is successfully matched a first time according to the degree of overlap includes:
选取最大的重叠度与预先设置的重叠度阈值进行对比,判断所述最大的重叠度是否大于所述重叠度阈值;Selecting the maximum overlap degree to compare with the preset overlap degree threshold to determine whether the maximum overlap degree is greater than the overlap degree threshold;
若所述最大的重叠度大于所述重叠度阈值,则一次匹配成功,若所述最大的重叠度小于所述重叠度阈值,则一次匹配不成功。If the maximum overlap degree is greater than the overlap degree threshold, a match is successful, and if the maximum overlap degree is less than the overlap degree threshold, a match is unsuccessful.
对于每一个上一特征框，将其与多个当前特征框进行匹配，计算每一个上一特征框与多个当前特征框的重叠面积，根据重叠面积计算每一个上一特征框与多个当前特征框的重叠度（Intersection-over-Union，IoU，也称交并比），选取与上一特征框重叠度最大的一个当前特征框成为一组，并将该最大重叠度与预先设置的重叠度阈值进行比对。例如：A'与B'为上一特征框的标识，A与B为当前特征框的标识，A'与A的重叠度为0.4，A'与B的重叠度为0.8，则A'与B拥有最大重叠度，记为一组；B'与A的重叠度为0.4，B'与B的重叠度为0.5，则B'与B拥有最大重叠度，记为一组。最大重叠度满足重叠度阈值的记为一次匹配成功，假设重叠度阈值为0.6，则A'与B匹配成功；B'与B的重叠度小于0.6，则B'与B匹配不成功。若最大重叠度不满足重叠度阈值，则记为一次匹配不成功，对应的上一特征框进入步骤203中进行二次匹配。For each previous feature box, it is matched against the multiple current feature boxes: the overlapping area between the previous feature box and each current feature box is computed, and from it the degree of overlap (Intersection-over-Union, IoU) is calculated. The current feature box with the maximum overlap with the previous feature box is selected as a pair, and that maximum overlap is compared with a preset overlap threshold. For example, let A' and B' be identifiers of previous feature boxes, and A and B identifiers of current feature boxes. If the overlap of A' with A is 0.4 and with B is 0.8, then A' and B have the maximum overlap and are recorded as a pair; if the overlap of B' with A is 0.4 and with B is 0.5, then B' and B have the maximum overlap and are recorded as a pair. A pair whose maximum overlap meets the overlap threshold is recorded as a successful first match: assuming a threshold of 0.6, A' matches B successfully, while the overlap of B' and B is less than 0.6, so B' and B do not match. If the maximum overlap does not meet the overlap threshold, the match is recorded as unsuccessful, and the previous feature box proceeds to step 203 for secondary matching.
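The maximum-IoU pairing described here can be sketched as follows, assuming boxes given as (center-x, center-y, width, height) and a greedy per-previous-box search; the 0.6 threshold mirrors the example above but is configurable:

```python
def iou(box_a, box_b):
    # Boxes as (cx, cy, w, h); convert to corner coordinates first.
    ax1, ay1 = box_a[0] - box_a[2] / 2, box_a[1] - box_a[3] / 2
    ax2, ay2 = box_a[0] + box_a[2] / 2, box_a[1] + box_a[3] / 2
    bx1, by1 = box_b[0] - box_b[2] / 2, box_b[1] - box_b[3] / 2
    bx2, by2 = box_b[0] + box_b[2] / 2, box_b[1] + box_b[3] / 2
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union > 0 else 0.0

def first_match(prev_boxes, curr_boxes, iou_threshold=0.6):
    # For each previous box, pair it with the current box of maximum IoU;
    # the pair counts as a successful first match only above the threshold.
    matched, unmatched = [], []
    for pid, pbox in prev_boxes.items():
        best_cid, best_iou = None, 0.0
        for cid, cbox in curr_boxes.items():
            ov = iou(pbox, cbox)
            if ov > best_iou:
                best_cid, best_iou = cid, ov
        if best_cid is not None and best_iou > iou_threshold:
            matched.append((pid, best_cid))
        else:
            unmatched.append(pid)  # goes on to secondary matching
    return matched, unmatched
```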
可选的,所述当前帧包括所述多个目标的当前检测信息,所述上一帧信息包括所述多个目标的历史存在信息及对应的标识信息;Optionally, the current frame includes current detection information of the multiple targets, and the previous frame information includes historical existence information and corresponding identification information of the multiple targets;
所述进行二次匹配,判断所述至少一个目标是否二次匹配成功包括:The performing second matching to determine whether the at least one target is successfully matched twice includes:
提取所述多个目标的至少一个目标的当前检测信息的当前特征向量，提取所述多个目标的至少一个目标的历史存在信息的历史特征向量，将所述当前特征向量与所述历史特征向量进行计算，得到所述至少一个目标的余弦相似度；Extracting a current feature vector from the current detection information of at least one of the multiple targets, extracting a historical feature vector from the historical presence information of that at least one target, and computing with the current feature vector and the historical feature vector to obtain the cosine similarity of the at least one target;
提取所述至少一个目标的当前检测信息的当前坐标与所述至少一个目标的历史存在信息的历史坐标，将所述至少一个目标的当前坐标与历史坐标进行计算，得到所述至少一个目标的距离值；Extracting the current coordinates from the current detection information of the at least one target and the historical coordinates from the historical presence information of the at least one target, and computing with the current coordinates and historical coordinates to obtain the distance value of the at least one target;
根据所述至少一个目标的余弦相似度及距离值,判断所述至少一个目标是否二次匹配成功。According to the cosine similarity and distance values of the at least one target, it is determined whether the at least one target is successfully matched twice.
上述的当前检测信息包括当前特征框信息及标识，历史存在信息包括上一特征框信息及标识。当前特征向量可以通过提取当前特征框的方向梯度直方图（Histogram of Oriented Gradient，简称HOG）得到当前HOG特征向量；同样，上一特征向量可以通过提取上一特征框的方向梯度直方图得到上一HOG特征向量，计算得到当前HOG特征向量与上一HOG特征向量的余弦相似度。The above current detection information includes the current feature box information and identifier, and the historical presence information includes the previous feature box information and identifier. The current feature vector can be obtained by extracting the Histogram of Oriented Gradient (HOG) of the current feature box, yielding the current HOG feature vector; likewise, the previous feature vector can be obtained by extracting the HOG of the previous feature box, yielding the previous HOG feature vector. The cosine similarity between the current and previous HOG feature vectors is then computed.
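The cosine-similarity step can be sketched as below. HOG extraction itself is library-dependent (for example `skimage.feature.hog`), so the sketch assumes the HOG feature vectors have already been extracted:

```python
import math

def cosine_similarity(vec_a, vec_b):
    # Cosine similarity between two (HOG) feature vectors:
    # dot(a, b) / (|a| * |b|), in [-1, 1] for arbitrary vectors.
    dot = sum(a * b for a, b in zip(vec_a, vec_b))
    norm_a = math.sqrt(sum(a * a for a in vec_a))
    norm_b = math.sqrt(sum(b * b for b in vec_b))
    if norm_a == 0 or norm_b == 0:
        return 0.0  # degenerate vector: treat as no similarity
    return dot / (norm_a * norm_b)
```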
当前特征框包括当前中心坐标信息、当前长宽（宽高）信息，上一特征框包括上一中心坐标信息、上一长宽（宽高）信息，当前特征框与上一特征框的距离值可以通过以下公式进行计算：The current feature box includes current center coordinate information and current length-width (width-height) information, and the previous feature box includes the previous center coordinate information and previous length-width (width-height) information. The distance value between the current feature box and the previous feature box can be calculated by the following formula:
D = sqrt((x1 - x2)² + (y1 - y2)²) / min(w1, w2)
其中，D为当前特征框与上一特征框的距离值，x1、y1、w1属于当前特征框，x1、y1为当前特征框的中心坐标，w1为当前特征框的宽；x2、y2、w2属于上一特征框，x2、y2为上一特征框的中心坐标，w2为上一特征框的宽。Here D is the distance value between the current feature box and the previous feature box; x1, y1, and w1 belong to the current feature box, where x1, y1 are its center coordinates and w1 is its width; x2, y2, and w2 belong to the previous feature box, where x2, y2 are its center coordinates and w2 is its width.
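The distance formula above translates directly to code; boxes are assumed given as (center-x, center-y, width):

```python
import math

def box_distance(curr_box, prev_box):
    # D = sqrt((x1-x2)^2 + (y1-y2)^2) / min(w1, w2):
    # Euclidean center distance normalized by the smaller box width.
    x1, y1, w1 = curr_box
    x2, y2, w2 = prev_box
    return math.sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2) / min(w1, w2)
```

The normalization by `min(w1, w2)` makes the distance value scale-relative, so the same threshold works for near and far targets.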
可选的，所述根据所述至少一个目标的余弦相似度及距离值，判断所述至少一个目标是否二次匹配成功包括：Optionally, the judging whether the at least one target is successfully matched a second time according to the cosine similarity and distance value of the at least one target includes:
将所述余弦相似度与预先设置的余弦相似度阈值进行比较，将所述距离值与预先设置的距离阈值进行比较，得到比较结果；Comparing the cosine similarity with a preset cosine similarity threshold, and comparing the distance value with a preset distance threshold, to obtain a comparison result;
根据所述比较结果,按照预先设置的判断规则判断所述至少一个目标是否二次匹配成功。According to the comparison result, it is determined whether the at least one target is successfully matched twice according to a preset judgment rule.
判断规则包括:当余弦相似度大于预先设定的余弦相似度阈值时,则可以认为二次匹配成功。另外,当距离值小于设定的距离阈值时,则也可以认为二次匹配成功。当然,当同时满足余弦相似度大于预先设定的余弦相似度阈值以及距离值小于设定的距离阈值时,也可以认为二次匹配成功。The judgment rule includes: when the cosine similarity is greater than the preset cosine similarity threshold, it can be considered that the second match is successful. In addition, when the distance value is less than the set distance threshold, it can also be considered that the secondary matching is successful. Of course, when the cosine similarity is greater than the preset cosine similarity threshold and the distance value is less than the set distance threshold, it can also be considered that the second match is successful.
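The judgment rule can be sketched as a small predicate; the threshold values and the `rule` parameter ("either" vs "both") are illustrative assumptions covering the alternatives described above:

```python
def second_match_ok(cos_sim, dist, sim_threshold=0.8, dist_threshold=1.0,
                    rule="either"):
    # "either": one satisfied criterion suffices (similarity above its
    # threshold, or distance below its threshold);
    # "both": require both criteria simultaneously.
    sim_ok = cos_sim > sim_threshold
    dist_ok = dist < dist_threshold
    return (sim_ok and dist_ok) if rule == "both" else (sim_ok or dist_ok)
```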
可选的，所述将一次匹配成功和/或二次匹配成功的所述至少一个目标的信息形成输出信息包括：Optionally, the forming the information of the at least one target with a successful first match and/or a successful second match into output information includes:
将所述至少一个目标的当前检测信息更新为所述当前存在信息,将所述至少一个目标的对应的标识信息关联到所述当前检测信息;Updating the current detection information of the at least one target to the current presence information, and associating the corresponding identification information of the at least one target to the current detection information;
根据所述当前存在信息及所述对应的标识信息,形成所述至少一个目标的输出信息。The output information of the at least one target is formed according to the current presence information and the corresponding identification information.
上述的当前存在信息包括当前特征框，比如，匹配成功的是当前特征框A和上一特征框A'，则输出当前特征框A的信息，当前特征框的信息包括：当前特征框的中心坐标信息、长宽（宽高）信息等信息；上述标识信息包括当前特征框的标识信息，用于表示当前特征框，比如当前特征框为A，则输出A，当前特征框为A'，则输出A'，当前特征框的标识信息与当前特征框进行关联。The above current presence information includes the current feature box. For example, if the match is between current feature box A and previous feature box A', the information of current feature box A is output; the information of the current feature box includes its center coordinate information and length-width (width-height) information. The above identification information includes the identifier of the current feature box, which denotes that box: if the current feature box is A, then A is output; if it is A', then A' is output. The identifier of the current feature box is associated with the current feature box.
在步骤202中匹配成功的目标,可以将匹配成功的目标信息放入一个活跃集合中,将匹配不成功的目标信息放入一个丢失集合中。在步骤203中对丢失集合中的目标进行二次匹配后,得到匹配成功的目标,可以将匹配成功的目标信息添加进活跃集合中,将活跃集合中目标的信息进行输出。In step 202, if the target is successfully matched, the target information of the successful match may be placed in an active set, and the target information of unsuccessful match may be placed in a lost set. In step 203, after the target in the missing set is matched a second time, a target with a successful match is obtained, and the target information of the successful match can be added to the active set, and the information of the target in the active set is output.
将匹配成功的目标进行更新，得到当前存在信息及标识信息。对于匹配成功的当前特征框以及上一特征框，将上一特征框的标识信息更新到当前特征框，使当前特征框的标识与上一特征框的标识统一，用于表示同一个目标，也就是说对于该目标跟踪成功；同时，删除匹配成功的上一特征框，使活跃集合中只存在当前特征框的信息，形成目标的当前存在信息与标识信息。例如：A为当前特征框的标识，A'为上一特征框的标识，A和A'为匹配成功的一对当前特征框及上一特征框，将当前特征框的标识由A更改为A'，然后将上一特征框在图像信息中删除，将当前特征框A'记入活跃集合中，则输出的信息为当前特征框的标识A'以及当前特征框的中心坐标信息、长宽（宽高）信息等信息。The successfully matched target is updated to obtain the current presence information and identification information. For a successfully matched pair of current feature box and previous feature box, the identifier of the previous feature box is propagated to the current feature box, so that the current feature box carries the same identifier as the previous one, representing the same target; that is, the target is tracked successfully. At the same time, the matched previous feature box is deleted, so that only the information of the current feature box remains in the active set, forming the target's current presence information and identification information. For example, let A be the identifier of the current feature box and A' the identifier of the previous feature box, A and A' being a successfully matched pair: the identifier of the current feature box is changed from A to A', the previous feature box is deleted from the image information, and current feature box A' is recorded in the active set; the output information is then the identifier A' together with the center coordinate and length-width (width-height) information of the current feature box.
第二方面,如图6所示,提供一种多目标实时跟踪装置,所述装置包括:In a second aspect, as shown in FIG. 6, a multi-target real-time tracking device is provided. The device includes:
获取模块401,用于获取图像信息,所述图像信息包括多个目标的当前帧信息及上一帧信息;The obtaining module 401 is used to obtain image information, where the image information includes current frame information and previous frame information of multiple targets;
第一匹配模块402，用于根据所述多个目标的当前帧信息与上一帧信息进行一次匹配，判断所述多个目标中的至少一个目标是否一次匹配成功；The first matching module 402 is configured to perform a first match according to the current frame information and previous frame information of the multiple targets, and determine whether at least one of the multiple targets is successfully matched a first time;
第二匹配模块403,用于若所述至少一个目标没有一次匹配成功,则进行二次匹配,判断所述至少一个目标是否二次匹配成功,所述二次匹配包括特征匹配、距离匹配中至少一项;The second matching module 403 is configured to perform secondary matching if the at least one target has not been matched successfully once, and determine whether the at least one target has succeeded in secondary matching. The secondary matching includes at least one of feature matching and distance matching. One item
输出模块404，用于将一次匹配成功和/或二次匹配成功的所述至少一个目标的信息形成输出信息，所述输出信息包括当前存在信息及标识信息。The output module 404 is configured to form the information of the at least one target with a successful first match and/or a successful second match into output information, the output information including current presence information and identification information.
可选的,如图7所示,在所述若所述至少一个目标没有一次匹配成功,则进行二次匹配,判断所述至少一个目标是否二次匹配成功之后,所述装置还包括:Optionally, as shown in FIG. 7, after the at least one target is not successfully matched once, a second match is performed to determine whether the at least one target is successfully matched twice, the device further includes:
生成模块405,用于若二次匹配不成功,则重新生成剩余目标的当前帧信息,获取新的图像信息,所述新的图像信息包括所述当前帧信息与下一帧信息。The generating module 405 is configured to regenerate the current frame information of the remaining target to obtain new image information if the second matching is unsuccessful, and the new image information includes the current frame information and the next frame information.
可选的,如图8所示,所述当前帧包括所述多个目标的当前检测信息,所述上一帧信息包括所述多个目标的历史存在信息及对应的标识信息;Optionally, as shown in FIG. 8, the current frame includes current detection information of the multiple targets, and the previous frame information includes historical existence information and corresponding identification information of the multiple targets;
所述第一匹配模块402包括:The first matching module 402 includes:
第一处理单元4021，用于将所述多个目标的至少一个目标的当前检测信息与所述多个目标的至少一个目标的历史存在信息进行重叠度计算，得到所述至少一个目标的当前检测信息与所述至少一个目标的历史存在信息的重叠度；The first processing unit 4021 is configured to calculate the degree of overlap between the current detection information of at least one of the multiple targets and the historical presence information of that at least one target, to obtain the degree of overlap between the current detection information and the historical presence information of the at least one target;
第一判断单元4022,用于根据重叠度判断所述至少一个目标是否一次匹配成功。The first determining unit 4022 is configured to determine whether the at least one target is successfully matched at a time according to the degree of overlap.
可选的,如图9所示,所述第一判断单元4022包括:Optionally, as shown in FIG. 9, the first judgment unit 4022 includes:
对比子单元40221,用于选取最大的重叠度与预先设置的重叠度阈值进行对比,判断所述最大的重叠度是否大于所述重叠度阈值;The comparison subunit 40221 is used to select a maximum overlap degree and compare with a preset overlap degree threshold to determine whether the maximum overlap degree is greater than the overlap degree threshold;
判断子单元40222，用于若所述最大的重叠度大于所述重叠度阈值，则判定一次匹配成功，若所述最大的重叠度小于所述重叠度阈值，则判定一次匹配不成功。The judging subunit 40222 is configured to determine a successful first match if the maximum overlap is greater than the overlap threshold, and an unsuccessful first match if the maximum overlap is less than the overlap threshold.
可选的,如图10所示,所述当前帧包括所述多个目标的当前检测信息,所述上一帧信息包括所述多个目标的历史存在信息及对应的标识信息;Optionally, as shown in FIG. 10, the current frame includes current detection information of the multiple targets, and the previous frame information includes historical existence information and corresponding identification information of the multiple targets;
所述第二匹配模块403包括:The second matching module 403 includes:
第二处理单元4031,用于提取所述多个目标的至少一个目标的当前检测信息的当前特征向量,提取所述多个目标的至少一个目标的历史存在信息的历史特征向量,将所述当前特征向量与所述历史特征向量进行计算,得到所述至少一个目标的余弦相似度;The second processing unit 4031 is configured to extract a current feature vector of current detection information of at least one target of the plurality of targets, extract a historical feature vector of historical presence information of at least one target of the plurality of targets, and convert the current Calculating the feature vector and the historical feature vector to obtain the cosine similarity of the at least one target;
第三处理单元4032，用于提取所述至少一个目标的当前检测信息的当前坐标与所述至少一个目标的历史存在信息的历史坐标，将所述至少一个目标的当前坐标与历史坐标进行计算，得到所述至少一个目标的距离值；The third processing unit 4032 is configured to extract the current coordinates from the current detection information of the at least one target and the historical coordinates from the historical presence information of the at least one target, and compute with the current and historical coordinates to obtain the distance value of the at least one target;
第二判断单元4033,用于根据所述至少一个目标的余弦相似度及距离值,判断所述至少一个目标是否二次匹配成功。The second determining unit 4033 is configured to determine whether the at least one target is successfully matched twice according to the cosine similarity and distance values of the at least one target.
可选的,如图11所示,所述输出模块包括:Optionally, as shown in FIG. 11, the output module includes:
更新单元4041,用于将所述至少一个目标的当前检测信息更新为所述当前存在信息,将所述至少一个目标的对应的标识信息关联到所述当前检测信息;The updating unit 4041 is configured to update the current detection information of the at least one target to the current presence information, and associate the corresponding identification information of the at least one target to the current detection information;
输出单元4042,根据所述当前存在信息及所述对应的标识信息,形成所述至少一个目标的输出信息。The output unit 4042 forms the output information of the at least one target according to the current presence information and the corresponding identification information.
第三方面，本发明实施例提供一种电子设备，包括：存储器、处理器及存储在所述存储器上并可在所述处理器上运行的计算机程序，所述处理器执行所述计算机程序时实现本发明实施例提供的多目标实时跟踪方法中的步骤。In a third aspect, an embodiment of the present invention provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor; when the processor executes the computer program, the steps in the multi-target real-time tracking method provided by the embodiments of the present invention are implemented.
第四方面，本发明实施例提供一种计算机可读存储介质，所述计算机可读存储介质上存储有计算机程序，所述计算机程序被处理器执行时实现本发明实施例提供的多目标实时跟踪方法中的步骤。In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the steps in the multi-target real-time tracking method provided by the embodiments of the present invention are implemented.
以上内容是结合具体的优选实施方式对本发明所作的进一步详细说明,不能认定本发明的具体实施方式只局限于这些说明。对于本发明所属技术领域的普通技术人员来说,在不脱离本发明构思的前提下,还可以做出若干简单推演或替换,都应当视为属于本发明的保护范围。The above is a further detailed description of the present invention in conjunction with specific preferred embodiments, and it cannot be assumed that the specific embodiments of the present invention are limited to these descriptions. For a person of ordinary skill in the technical field to which the present invention belongs, without deviating from the concept of the present invention, several simple deductions or replacements can be made, which should be regarded as falling within the protection scope of the present invention.

Claims (10)

  1. 一种多目标实时跟踪方法,其特征在于,所述方法包括:A multi-target real-time tracking method, characterized in that the method includes:
    获取图像信息,所述图像信息包括多个目标的当前帧信息及上一帧信息;Obtain image information, where the image information includes current frame information and previous frame information of multiple targets;
    根据所述多个目标的当前帧信息与上一帧信息进行一次匹配，判断所述多个目标中的至少一个目标是否一次匹配成功；Performing a first match according to the current frame information and previous frame information of the multiple targets, and determining whether at least one of the multiple targets is successfully matched a first time;
    若所述至少一个目标没有一次匹配成功,则进行二次匹配,判断所述至少一个目标是否二次匹配成功,所述二次匹配包括特征匹配、距离匹配中至少一项;If the at least one target is not matched successfully once, then a second match is performed to determine whether the at least one target is successfully matched twice, and the second match includes at least one of feature matching and distance matching;
    将一次匹配成功和/或二次匹配成功的所述至少一个目标的信息形成输出信息,所述输出信息包括当前存在信息及标识信息。The information of the at least one target with a successful match and / or a successful second match is formed as output information, and the output information includes current presence information and identification information.
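The two-stage flow recited in claim 1 can be sketched in a few lines of Python. This is an illustrative sketch only, not the claimed implementation: the `first_match`/`second_match` predicates, the `{id, presence}` output shape, and the return conventions are all assumptions made for illustration.

```python
def track_step(detections, tracks, first_match, second_match):
    """One frame of the two-stage matching flow: try the first match
    (e.g. overlap-based), fall back to the second match (feature and/or
    distance based), and emit {id, presence} output info for every
    successful match. first_match/second_match are caller-supplied
    functions returning a track id on success or None on failure."""
    output, unmatched = [], []
    for det in detections:
        hit = first_match(det, tracks)
        if hit is None:               # no first match: attempt the second match
            hit = second_match(det, tracks)
        if hit is None:
            unmatched.append(det)     # per claim 2, these would be regenerated
        else:
            output.append({"id": hit, "presence": det})
    return output, unmatched
```

With stub matchers, a detection matched by either stage keeps an identity, and everything else falls through to the unmatched list.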
  2. The method according to claim 1, wherein after performing the second match and determining whether the at least one target is successfully matched in the second match, the method further comprises:
    if the second match is unsuccessful, regenerating the current frame information of the remaining targets and acquiring new image information, the new image information comprising the current frame information and next frame information.
  3. The method according to claim 2, wherein the current frame information comprises current detection information of the plurality of targets, and the previous frame information comprises historical presence information and corresponding identification information of the plurality of targets; and
    performing the first match according to the current frame information and the previous frame information of the plurality of targets and determining whether at least one of the plurality of targets is successfully matched in the first match comprises:
    calculating a degree of overlap between the current detection information of at least one of the plurality of targets and the historical presence information of at least one of the plurality of targets, to obtain the degree of overlap between the current detection information and the historical presence information of the at least one target; and
    determining, according to the degree of overlap, whether the at least one target is successfully matched in the first match.
  4. The method according to claim 3, wherein determining according to the degree of overlap whether the at least one target is successfully matched comprises:
    comparing the maximum degree of overlap with a preset overlap threshold, and determining whether the maximum degree of overlap is greater than the overlap threshold; and
    if the maximum degree of overlap is greater than the overlap threshold, the first match is successful; if the maximum degree of overlap is less than the overlap threshold, the first match is unsuccessful.
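The overlap computation in claims 3 and 4 is, in essence, intersection-over-union (IoU) between a current detection box and each historical presence box, with the best overlap compared against a threshold. Below is a minimal sketch, assuming axis-aligned `(x1, y1, x2, y2)` boxes and a hypothetical threshold of 0.5 (the claims leave the threshold value open):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def first_match(detection, history_boxes, iou_threshold=0.5):
    """Return the index of the best-overlapping history box if its overlap
    exceeds the threshold (claim 4), otherwise None."""
    if not history_boxes:
        return None
    overlaps = [iou(detection, h) for h in history_boxes]
    best = max(range(len(overlaps)), key=lambda i: overlaps[i])
    return best if overlaps[best] > iou_threshold else None
```

For example, two unit-offset 2x2 boxes overlap by 1/7, which would fail a 0.5 threshold, while identical boxes score 1.0 and pass.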
  5. The method according to claim 2, wherein the current frame information comprises current detection information of the plurality of targets, and the previous frame information comprises historical presence information and corresponding identification information of the plurality of targets; and
    performing the second match and determining whether the at least one target is successfully matched in the second match comprises:
    extracting a current feature vector from the current detection information of at least one of the plurality of targets, extracting a historical feature vector from the historical presence information of at least one of the plurality of targets, and computing with the current feature vector and the historical feature vector to obtain a cosine similarity of the at least one target;
    extracting current coordinates from the current detection information of the at least one target and historical coordinates from the historical presence information of the at least one target, and computing with the current coordinates and the historical coordinates to obtain a distance value of the at least one target; and
    determining, according to the cosine similarity and the distance value of the at least one target, whether the at least one target is successfully matched in the second match.
  6. The method according to claim 5, wherein determining according to the cosine similarity and the distance value of the at least one target whether the at least one target is successfully matched in the second match comprises:
    comparing the cosine similarity with a preset cosine similarity threshold, and comparing the distance value with a preset distance threshold, to obtain a comparison result; and
    determining, according to the comparison result and a preset judgment rule, whether the at least one target is successfully matched in the second match.
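Claims 5 and 6 describe the second match as a cosine similarity between feature vectors plus a distance between coordinates, each compared against its own preset threshold. A minimal sketch follows; the "both checks must pass" judgment rule, the threshold values, and the use of Euclidean distance are assumptions, since claim 6 only refers to a preset judgment rule:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity of two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def euclidean_distance(p, q):
    """Euclidean distance between two (x, y) coordinates."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def second_match(cur_feat, hist_feat, cur_xy, hist_xy,
                 sim_threshold=0.8, dist_threshold=50.0):
    """Hypothetical judgment rule: the feature check and the distance
    check must both pass for the second match to succeed."""
    sim = cosine_similarity(cur_feat, hist_feat)
    dist = euclidean_distance(cur_xy, hist_xy)
    return sim >= sim_threshold and dist <= dist_threshold
```

A stricter or looser rule (for example, either check passing) would also be consistent with the claim language, which lists feature matching and distance matching as alternatives.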
  7. The method according to claim 5, wherein forming the output information from the information of the at least one target successfully matched in the first match and/or the second match comprises:
    updating the current detection information of the at least one target as the current presence information, and associating the corresponding identification information of the at least one target with the current detection information; and
    forming the output information of the at least one target according to the current presence information and the corresponding identification information.
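The output-formation step of claim 7 amounts to a small record update: the matched detection becomes the track's current presence information while its existing identification is retained. In this sketch the field names `id` and `presence` are hypothetical:

```python
def form_output(track, current_detection):
    """Claim 7 sketch: update the track's presence info with the matched
    detection and keep its existing identification info."""
    updated = dict(track)                    # do not mutate the caller's record
    updated["presence"] = current_detection  # detection becomes presence info
    return {"id": updated["id"], "presence": updated["presence"]}
```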
  8. A multi-target real-time tracking apparatus, characterized in that the apparatus comprises:
    an acquisition module, configured to acquire image information, the image information comprising current frame information and previous frame information of a plurality of targets;
    a first matching module, configured to perform a first match according to the current frame information and the previous frame information of the plurality of targets, and determine whether at least one of the plurality of targets is successfully matched in the first match;
    a second matching module, configured to perform a second match if the at least one target is not successfully matched in the first match, and determine whether the at least one target is successfully matched in the second match, the second match comprising at least one of feature matching and distance matching; and
    an output module, configured to form output information from the information of the at least one target successfully matched in the first match and/or the second match, the output information comprising current presence information and identification information.
  9. An electronic device, characterized by comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the multi-target real-time tracking method according to any one of claims 1 to 7.
  10. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and the computer program, when executed by a processor, implements the steps of the multi-target real-time tracking method according to any one of claims 1 to 7.
PCT/CN2018/111589 2018-10-24 2018-10-24 Multi-objective real-time tracking method and apparatus, and electronic device WO2020082258A1 (en)

Priority Applications (2)

- CN201880083620.2A (CN111512317B): Multi-target real-time tracking method and device and electronic equipment; priority date 2018-10-24, filing date 2018-10-24
- PCT/CN2018/111589 (WO2020082258A1): Multi-objective real-time tracking method and apparatus, and electronic device; priority date 2018-10-24, filing date 2018-10-24

Applications Claiming Priority (1)

- PCT/CN2018/111589 (WO2020082258A1): Multi-objective real-time tracking method and apparatus, and electronic device; priority date 2018-10-24, filing date 2018-10-24

Publications (1)

- WO2020082258A1, published 2020-04-30

Family ID: 70330247

Family Applications (1)

- PCT/CN2018/111589 (WO2020082258A1): Multi-objective real-time tracking method and apparatus, and electronic device; priority date 2018-10-24, filing date 2018-10-24

Country Status (2)

- CN: CN111512317B
- WO: WO2020082258A1

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111914653A (en) * 2020-07-02 2020-11-10 泰康保险集团股份有限公司 Personnel marking method and device
CN112037256A (en) * 2020-08-17 2020-12-04 中电科新型智慧城市研究院有限公司 Target tracking method and device, terminal equipment and computer readable storage medium
CN112084914A (en) * 2020-08-31 2020-12-15 的卢技术有限公司 Multi-target tracking method integrating spatial motion and apparent feature learning
CN112101223A (en) * 2020-09-16 2020-12-18 北京百度网讯科技有限公司 Detection method, device, equipment and computer storage medium
CN112634327A (en) * 2020-12-21 2021-04-09 合肥讯图信息科技有限公司 Tracking method based on YOLOv4 model
CN113238209A (en) * 2021-04-06 2021-08-10 宁波吉利汽车研究开发有限公司 Road sensing method, system, equipment and storage medium based on millimeter wave radar
CN113361456A (en) * 2021-06-28 2021-09-07 北京影谱科技股份有限公司 Face recognition method and system
CN113723311A (en) * 2021-08-31 2021-11-30 浙江大华技术股份有限公司 Target tracking method
CN114155275A (en) * 2021-11-17 2022-03-08 深圳职业技术学院 IOU-Tracker-based fish tracking method and device
CN115223135A (en) * 2022-04-12 2022-10-21 广州汽车集团股份有限公司 Parking space tracking method and device, vehicle and storage medium

Families Citing this family (2)

Publication number Priority date Publication date Assignee Title
CN112070802B (en) * 2020-09-02 2024-01-26 合肥英睿系统技术有限公司 Target tracking method, device, equipment and computer readable storage medium
CN114185034A (en) * 2020-09-15 2022-03-15 郑州宇通客车股份有限公司 Target tracking method and system for millimeter wave radar

Citations (6)

Publication number Priority date Publication date Assignee Title
CN104778465A (en) * 2015-05-06 2015-07-15 北京航空航天大学 Target tracking method based on feature point matching
CN106033613A (en) * 2015-03-16 2016-10-19 北京大学 Object tracking method and device
CN106097391A (en) * 2016-06-13 2016-11-09 浙江工商大学 A kind of multi-object tracking method identifying auxiliary based on deep neural network
CN106203274A (en) * 2016-06-29 2016-12-07 长沙慧联智能科技有限公司 Pedestrian's real-time detecting system and method in a kind of video monitoring
CN107316322A (en) * 2017-06-27 2017-11-03 上海智臻智能网络科技股份有限公司 Video tracing method and device and object identifying method and device
CN108154118A (en) * 2017-12-25 2018-06-12 北京航空航天大学 A kind of target detection system and method based on adaptive combined filter with multistage detection

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
CN104424638A (en) * 2013-08-27 2015-03-18 深圳市安芯数字发展有限公司 Target tracking method based on shielding situation
CN104517275A (en) * 2013-09-27 2015-04-15 株式会社理光 Object detection method and system
CN104765886A (en) * 2015-04-29 2015-07-08 百度在线网络技术(北京)有限公司 Information acquisition method and device based on images
CN108664930A (en) * 2018-05-11 2018-10-16 西安天和防务技术股份有限公司 A kind of intelligent multi-target detection tracking


Also Published As

- CN111512317A, published 2020-08-07
- CN111512317B, published 2023-06-06


Legal Events

- 121 (EP): The EPO has been informed by WIPO that EP was designated in this application. Ref document number: 18937754; country of ref document: EP; kind code: A1.
- NENP: Non-entry into the national phase. Ref country code: DE.
- 32PN (EP): Public notification in the EP bulletin as the address of the addressee cannot be established. Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 31/08/2021).
- 122 (EP): PCT application non-entry in the European phase. Ref document number: 18937754; country of ref document: EP; kind code: A1.