CN112037253A - Target tracking method and device thereof


Info

Publication number
CN112037253A
CN112037253A (application CN202010791467.8A)
Authority
CN
China
Prior art keywords
face
tracking
tracker
shoulder
head
Prior art date
Legal status
Pending
Application number
CN202010791467.8A
Other languages
Chinese (zh)
Inventor
杨希
李照亮
俞旭锋
Current Assignee
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202010791467.8A
Publication of CN112037253A


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/20: Analysis of motion
    • G06T7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10016: Video; Image sequence
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30196: Human being; Person
    • G06T2207/30201: Face


Abstract

The application provides a target tracking method and a related device. The target tracking method includes the following steps: performing face detection and body detection on a target in an acquired image to be detected, and establishing a face tracker and a body tracker, where the face tracker contains face information and the body tracker contains body information; associating the face tracker with the body tracker, so that the face tracker holds body information and the body tracker holds face information; and tracking the target in the continuously acquired images to be detected based on the face tracker and the body tracker, obtaining body information when only the face is tracked, or face information when only the body is tracked. The method and the device can alleviate lost-target and identity-switch problems and improve the robustness of target tracking.

Description

Target tracking method and device thereof
Technical Field
The present application relates to the field of image detection technologies, and in particular, to a target tracking method and a related apparatus.
Background
With the rapid development of computer vision, artificial intelligence, and related technologies, target tracking methods are applied ever more widely. However, a target's pose is changeable, and its appearance is easily affected by clothing, scale changes, occlusion, and shooting angle; pedestrians weaving past one another also occlude each other. As a result, target tracking suffers from lost targets, identity switches, and similar problems.
Disclosure of Invention
The application provides a target tracking method and a related device thereof, which can alleviate the problems of lost targets and identity switches in target tracking and improve the robustness of target tracking.
To achieve the above object, the present application provides a target tracking method, including:
performing face detection and body detection on a target in an acquired image to be detected, and establishing a face tracker and a body tracker, wherein the face tracker contains face information and the body tracker contains body information;
correlating the face tracker and the body tracker such that the face tracker has body information therein and the body tracker has face information therein;
tracking the target in the continuously acquired images to be detected based on the face tracker and the body tracker, and obtaining the body information when only the face is tracked or obtaining the face information when only the body is tracked.
Wherein the body tracker includes a head-shoulder tracker and a whole-body tracker, and associating the face tracker with the body tracker includes:
associating the head-shoulder tracker, the whole-body tracker, and the face tracker with one another, so that the face tracker holds head-shoulder information and whole-body information, the head-shoulder tracker holds face information and whole-body information, and the whole-body tracker holds face information and head-shoulder information; and/or,
associating the face tracker and the head-shoulder tracker with each other, so that the face tracker holds head-shoulder information and the head-shoulder tracker holds face information; and/or,
associating the face tracker and the whole-body tracker with each other, so that the face tracker holds whole-body information and the whole-body tracker holds face information.
Wherein associating the head-shoulder tracker, the whole-body tracker, and the face tracker with one another includes:
associating the head-shoulder tracker with the face tracker, so that the head-shoulder tracker holds face information and the face tracker holds head-shoulder information; associating the head-shoulder tracker with the whole-body tracker, so that the head-shoulder tracker holds whole-body information and the whole-body tracker holds head-shoulder information;
and re-associating the set of head-shoulder and face trackers that were successfully associated with the set of head-shoulder and whole-body trackers that were successfully associated, to complete the association of the head-shoulder tracker, the face tracker, and the whole-body tracker.
Wherein re-associating the successfully associated set of head-shoulder and face trackers with the successfully associated set of head-shoulder and whole-body trackers includes the following steps:
confirming whether the head-shoulder tracker stores both face information and whole-body information;
and if so, storing the face information held by the head-shoulder tracker into the whole-body tracker associated with it, and storing the whole-body information held by the head-shoulder tracker into the face tracker associated with it.
Wherein the head-shoulder tracker includes a head-shoulder frame, the face tracker includes a face frame, and the whole-body tracker includes a whole-body frame; associating the head-shoulder tracker with the face tracker includes:
calculating the intersection ratio of the head-shoulder frame and the face frame;
associating with the head-shoulder frame the face frame whose intersection ratio is the largest and exceeds a first threshold;
and storing the face information corresponding to the face frame into the head-shoulder tracker corresponding to the head-shoulder frame associated with the face frame, and storing the head-shoulder information corresponding to the head-shoulder frame into the face tracker corresponding to the face frame;
and associating the head-shoulder tracker with the whole-body tracker includes:
calculating the intersection ratio of the head-shoulder frame and the whole-body frame;
associating with the head-shoulder frame the whole-body frame whose intersection ratio is the largest and exceeds a second threshold;
and storing the whole-body information corresponding to the whole-body frame into the head-shoulder tracker corresponding to the head-shoulder frame associated with the whole-body frame, and storing the head-shoulder information corresponding to the head-shoulder frame into the whole-body tracker corresponding to the whole-body frame.
Wherein, after the step of associating the head-shoulder tracker with the face tracker and the step of associating the head-shoulder tracker with the whole-body tracker, the method further includes:
associating the face tracker that is not associated with any head-shoulder tracker with the whole-body tracker that is not associated with any head-shoulder tracker, so that the whole-body tracker holds face information and the face tracker holds whole-body information.
Wherein the face tracker includes a face frame and the whole-body tracker includes a whole-body frame, and associating the face tracker that is not associated with a head-shoulder tracker with the whole-body tracker that is not associated with a head-shoulder tracker includes:
calculating the intersection ratio of the face frame and the whole-body frame;
associating with the whole-body frame the face frame whose intersection ratio is the largest and exceeds a third threshold;
and storing the whole-body information corresponding to the whole-body frame into the face tracker corresponding to the face frame associated with the whole-body frame, and storing the face information corresponding to the face frame into the whole-body tracker corresponding to the whole-body frame;
wherein the intersection ratio of the face frame and the whole-body frame is the ratio of the area of the intersection region of the face frame and the whole-body frame to the area of the face frame.
Wherein the body tracker and the face tracker are both referred to as trackers;
a tracker is created when the corresponding part frame of the same target is detected in multiple consecutive images to be detected and the confidence of the part frame detected in each of those images exceeds a fourth threshold;
and the tracker includes position information, identity information, and type information of the corresponding part frame.
To achieve the above object, the present application provides a target tracking apparatus including a memory and a processor; the memory has stored therein a computer program for execution by the processor to perform the steps of the above method.
To achieve the above object, the present application provides a readable storage medium having a computer program stored thereon, where the computer program is executed by a processor to implement the steps of the above method.
The beneficial effects of the present application are as follows. The face tracker and the body tracker are associated with each other, so that the face tracker holds body information and the body tracker holds face information. When a target is tracked across continuously acquired images to be detected, the body tracker stores the face information of its associated face tracker; therefore, even when only the target's body is tracked in one or more frames, the face information can still be obtained. When the target's face is tracked again in a subsequent frame, the face information preserved in the associated body tracker can be re-assigned to the face tracker corresponding to the re-tracked face, so that the face information of the target remains unchanged. Symmetrically, the face tracker stores the body information of its associated body tracker, so that when only the target's face is tracked, the body information can still be obtained; when the body is tracked again, the preserved body information can be re-assigned to the corresponding body tracker, keeping the body information unchanged. Consequently, even if the body or the face of the target briefly goes undetected and is lost during continuous tracking, the method can still determine the target's body and face accurately, which alleviates lost targets and identity switches and improves the robustness of target tracking. In addition, the association strategy of the method requires no part-feature extraction; its computation cost is extremely small, and its execution efficiency is greatly improved.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are obviously only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic flow chart diagram of a first embodiment of a target tracking method according to the present application;
FIG. 2 is a schematic diagram of the establishment of a tracker in the target tracking method of the present application;
FIG. 3 is a schematic flow chart diagram illustrating a second embodiment of the target tracking method of the present application;
FIG. 4 is a schematic diagram illustrating a face frame and a head-shoulder frame intersection ratio calculated in the target tracking method of the present application;
FIG. 5 is a schematic diagram illustrating calculation of the intersection ratio of the whole body frame and the head-shoulder frame in the target tracking method of the present application;
FIG. 6 is a schematic diagram of calculating the intersection ratio of the whole body frame and the face frame in the target tracking method of the present application;
FIG. 7 is a schematic diagram of an application example of the target tracking method of the present application;
FIG. 8 is a schematic diagram of another application example of the target tracking method of the present application;
FIG. 9 is a schematic diagram of an embodiment of a target tracking device according to the present application;
FIG. 10 is a schematic structural diagram of an embodiment of the readable storage medium of the present application.
Detailed Description
To help those skilled in the art better understand the technical solutions of the present application, the target tracking method and the related apparatus provided by the present application are described in detail below with reference to the accompanying drawings and specific embodiments.
Referring to fig. 1, fig. 1 is a schematic flow chart of a first embodiment of a target tracking method according to the present application. The target tracking method of the present embodiment includes the following steps.
S101: and carrying out face detection and body detection on the target in the acquired image to be detected, and establishing a face tracking body and a body tracking body.
The target may be various targets having a face and a body, such as a human or an animal.
Specifically, face detection and body detection are performed on the target in the acquired image to be detected; a face tracker is then established based on the detected face frame, and a body tracker is established based on the detected body frame.
Face detection and body detection may be performed on the target in the image to be detected by the same detection algorithm. The detection algorithm may be, but is not limited to, the YOLO algorithm. Both the face tracker and the body tracker may be referred to as trackers, and the face frame and the body frame may both be referred to as detection frames.
In one application scenario, the face tracker and the body tracker may be created directly from multiple frames of images to be detected. In the creation process, the unordered detection frames may first be divided into several types according to target type, such as body frames and face frames; as shown in fig. 2, the body frames and the face frames are each arranged in order, and a tracker is created from each type of ordered detection frames.
In another application scenario, it may first be confirmed whether a tracker matching a detection frame in the image to be detected already exists. If so, the detection frame is added to the matching tracker; if not, a new tracker corresponding to the detection frame is created.
It should be understood that adding a detection frame to its matching tracker may mean adding the detection frame information to that tracker. The detection frame information may include identity information, such as the ID number of the detection frame, and may further include position information, for example the upper-left corner coordinates (x_min, y_min) and lower-right corner coordinates (x_max, y_max) of the detection frame. In addition, the detection frame information may include the target type, so that the tracker can be identified as a face tracker, a head-shoulder tracker, or a whole-body tracker based on the target type.
In one implementation, determining whether there is a tracker matching a detection frame detected in the image to be detected may include the following. Based on the position information of all detection frames in the previous frame, confirm whether the previous frame contains a detection frame matching the detection frame in the current frame. If such a detection frame exists and has a corresponding tracker, match the detection frame of the current frame with that tracker; if such a detection frame exists but has no corresponding tracker, then there is no tracker matching the detection frame of the current frame. Determining whether the previous frame contains a matching detection frame may include: calculating the intersection ratio between the detection frame in the current frame and each detection frame of the previous frame, where the detection frame of the previous frame whose intersection ratio is the largest and exceeds a fifth threshold matches the detection frame of the current frame.
In another implementation, confirming whether there is a tracker matching the detection frame detected in the image to be detected may include: directly reading the position information of the tracking frame in each tracker, and determining, from the position information of the tracking frame and the position information of the detection frame in the current frame, whether a matching tracker exists. This determination may include: calculating the intersection ratio between the detection frame in the current frame and each tracking frame, where the tracker whose tracking frame has the largest intersection ratio exceeding a sixth threshold matches the detection frame of the current frame. It should be understood that, after the matching tracker is confirmed, the position information of the detection frame may be stored into the tracker, replacing the tracking-frame position information originally stored there, so as to judge whether the next frame contains a detection frame matching the tracker.
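To make the matching concrete, the following is a minimal sketch of the intersection-ratio matching used by both implementations above. The box representation, function names, and the example threshold value are illustrative assumptions and are not prescribed by the application.

```python
from typing import List, Optional, Tuple

Box = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max)

def iou(a: Box, b: Box) -> float:
    """Standard intersection-over-union of two axis-aligned boxes."""
    ix_min, iy_min = max(a[0], b[0]), max(a[1], b[1])
    ix_max, iy_max = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix_max - ix_min) * max(0.0, iy_max - iy_min)
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def best_match(det: Box, candidates: List[Box], threshold: float = 0.5) -> Optional[int]:
    """Index of the candidate box (previous-frame detection frame or tracking
    frame) whose IOU with `det` is the largest and exceeds the threshold
    (the fifth/sixth threshold above), else None."""
    best_idx: Optional[int] = None
    best_val = threshold
    for i, c in enumerate(candidates):
        v = iou(det, c)
        if v > best_val:
            best_idx, best_val = i, v
    return best_idx
```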
Creating a tracker corresponding to a detection frame may further include: creating the tracker only when the detection frames of the corresponding part of the same target are detected in multiple consecutive images to be detected and the confidences of those detection frames all exceed a fourth threshold. "Multiple consecutive images" may mean two or more consecutive images to be detected. The fourth threshold can be adjusted according to the actual situation and may optionally be 0.5, 0.6, 0.7, or the like.
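As a minimal illustration of this creation rule (the frame count and the concrete threshold value here are assumed, not prescribed by the application):

```python
from typing import List

def should_create_tracker(confidences: List[float],
                          min_frames: int = 2,
                          fourth_threshold: float = 0.6) -> bool:
    """`confidences` holds the detection confidences of the same part of the
    same target over the most recent consecutive images to be detected.
    A tracker is created only when the part was seen in enough consecutive
    frames and every confidence exceeds the fourth threshold."""
    return (len(confidences) >= min_frames
            and all(c > fourth_threshold for c in confidences))
```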
S102: the face tracker and the body tracker are correlated.
Associating the face tracker with the body tracker includes: determining whether the face tracker can be associated with the body tracker; and, when they can be associated, exchanging information between them to complete the association, so that the face tracker holds body information and the body tracker holds face information. The body tracker includes a body frame and the face tracker includes a face frame, and whether the two trackers can be associated may be confirmed by calculating the intersection ratio of the body frame and the face frame.
The body tracker may include at least one of a head-shoulder tracker, an upper-body tracker, and a whole-body tracker.
For example, the body tracker may include only a head-shoulder tracker; associating the face tracker with the body tracker then means associating the face tracker with the head-shoulder tracker.
As another example, the body tracker may include only an upper-body tracker; associating the face tracker with the body tracker then means associating the face tracker with the upper-body tracker.
As another example, the body trackers may include a head-shoulder tracker and a whole-body tracker. In this case, associating the face tracker with the body trackers may mean associating the face tracker, the whole-body tracker, and the head-shoulder tracker with one another. It should be understood that, during this process, a target may at first have only some of the trackers. If the target has only a face tracker and a whole-body tracker, these two may be associated first; once at least one image containing the target's head and shoulders appears and the head-shoulder tracker is successfully created, the face tracker, the whole-body tracker, and the head-shoulder tracker are then associated with one another. Likewise, the face tracker and the head-shoulder tracker may be associated first until at least one image containing the target's whole body appears and the whole-body tracker is created; or the whole-body tracker and the head-shoulder tracker may be associated first until at least one image containing the target's face appears and the face tracker is created. In each case, the three trackers are then associated with one another.
In addition, associating the face tracker, the whole-body tracker, and the head-shoulder tracker with one another may include: associating the head-shoulder tracker with the face tracker, and associating the head-shoulder tracker with the whole-body tracker; and then exchanging information between the face tracker and the whole-body tracker that are associated with the same head-shoulder tracker, to complete the mutual association of the three. Furthermore, a face tracker that is not associated with any head-shoulder tracker and a whole-body tracker that is not associated with any head-shoulder tracker may be associated with each other, which covers the case where a target has only a face tracker and a whole-body tracker and the two could otherwise not be associated.
Of course, in other implementations, associating the face tracker, the whole-body tracker, and the head-shoulder tracker may include: associating the head-shoulder tracker with the face tracker, and associating the face tracker with the whole-body tracker; and then exchanging information between the head-shoulder tracker and the whole-body tracker that are associated with the same face tracker, to complete the mutual association of the three.
For another example, the body trackers may include a head-shoulder tracker and an upper-body tracker; associating the face tracker with the body trackers then means associating the face tracker, the upper-body tracker, and the head-shoulder tracker with one another.
S103: and tracking the target in the continuously acquired image to be detected based on the face tracking body and the body tracking body.
This enables body information to be obtained simultaneously when only the face is tracked, or face information to be obtained simultaneously when only the body is tracked.
In the present embodiment, the face tracker and the body tracker are associated with each other, so that the face tracker holds body information and the body tracker holds face information. When a target is tracked across continuously acquired images to be detected, the body tracker stores the face information of its associated face tracker, so the face information can be obtained even when only the target's body is tracked in one or more frames. When the target's face is tracked again in a subsequent frame, the face information preserved in the associated body tracker can be re-assigned to the face tracker corresponding to the re-tracked face, so that the face information of the target remains unchanged. Symmetrically, the face tracker stores the body information of its associated body tracker, so the body information can be obtained when only the target's face is tracked; when the body is tracked again, the preserved body information can be re-assigned to the corresponding body tracker, keeping the body information unchanged. Therefore, even if the body or face of the target briefly goes undetected and is lost during continuous tracking, the method can still determine them accurately, which alleviates target loss and improves tracking robustness. In addition, the association strategy requires no part-feature extraction; its computation cost is extremely small, and its execution efficiency is greatly improved.
Referring to fig. 3, fig. 3 is a flowchart illustrating a second embodiment of the target tracking method of the present application. The target tracking method of the present embodiment includes the following steps.
S201: and carrying out face detection, head and shoulder detection and whole body detection on the obtained target in the image to be detected, and establishing a face tracking body, a head and shoulder tracking body and a body tracking body.
The face tracker includes face information, the head-shoulder tracker includes head-shoulder information, and the whole-body tracker includes whole-body information.
Optionally, the face information may include face identity information, the head-shoulder information may include head-shoulder identity information, and the whole-body information may include whole-body identity information. In that case, while tracking a target across continuously acquired images to be detected, the head-shoulder and whole-body identity information can be obtained when only the face is tracked; the face and head-shoulder identity information can be obtained when only the whole body is tracked; and the face and whole-body identity information can be obtained when only the head and shoulders are tracked. This effectively alleviates lost targets and improves tracking robustness.
Further, the face information, the head-shoulder information, and the whole-body information may also include the target type, so that the three kinds of information can be distinguished within successfully associated and partially associated trackers, which facilitates extracting the face, whole-body, or head-shoulder identity information from a successfully associated tracker.
Optionally, the face information, the head-shoulder information, and the whole-body information may further include the position information of the face frame, the head-shoulder frame, and the whole-body frame, respectively.
It should be understood that the head-shoulder tracker, the whole-body tracker, and the face tracker may be collectively referred to as trackers. A tracker may include its own identity information, its target type, IOU information (the maximum IOU value between this tracker and same-type targets in the previous frame), association information (a Link_Info field storing the tracker's association with the trackers of the other two body parts), and the position information of its tracking frame (x_min, y_min, x_max, y_max, representing the upper-left and lower-right corner coordinates of the tracking frame). Information of the other two body parts can thus be stored in the association information.
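The following is a minimal sketch of such a tracker record. Apart from Link_Info, which the description names, the field and type names are assumptions for illustration.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Dict, Tuple

class TargetType(Enum):
    FACE = "face"
    HEAD_SHOULDER = "head_shoulder"
    WHOLE_BODY = "whole_body"

@dataclass
class Tracker:
    track_id: int                           # the tracker's own identity information
    target_type: TargetType                 # face / head-shoulder / whole-body
    box: Tuple[float, float, float, float]  # tracking frame (x_min, y_min, x_max, y_max)
    max_iou: float = 0.0                    # IOU info: max IOU with same-type targets in the previous frame
    link_info: Dict[str, int] = field(default_factory=dict)
    # Link_Info: association with the trackers of the other two body parts,
    # e.g. {"face": 1, "whole_body": 5} for a head-shoulder tracker.
```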
All the obtained trackers can be divided, according to target type, into three types: face trackers, head-shoulder trackers, and whole-body trackers.
All face trackers and all head-shoulder trackers are sent to step S202; all the head-shoulder trackers and all the whole-body trackers are also sent to step S203.
S202: and associating the head and shoulder tracking bodies with the face tracking body.
The head-shoulder tracker includes a head-shoulder frame, and the face tracker includes a face frame.
Whether a head-shoulder frame can be associated with a face frame can be confirmed by calculating the IOU of the two frames. If they can be associated, the face information corresponding to the face frame is stored into the head-shoulder tracker corresponding to the head-shoulder frame, and the head-shoulder information corresponding to the head-shoulder frame is stored into the face tracker corresponding to the face frame, so that the head-shoulder tracker holds face information and the face tracker holds head-shoulder information, completing the association between them.
Specifically, the IOU of each head-shoulder frame with each face frame is calculated, and the face frame whose IOU with a given head-shoulder frame is the largest and exceeds the first threshold is associated with that head-shoulder frame. When calculating these IOUs, only the face frames that actually intersect a head-shoulder frame need be considered, which saves computation time and resources; of course, the IOU of each head-shoulder frame with all face frames may also be calculated. The first threshold can be adjusted as needed and may optionally be 0.7, but is not limited thereto.
As shown in fig. 4, the IOU of the head-shoulder frame and the face frame may be the ratio of the area of the intersection region of the face frame and the head-shoulder frame to the area of the face frame. Of course, the calculation is not limited to this; for example, the ratio of the intersection area to the area of the head-shoulder frame may also be used.
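A sketch of this modified intersection ratio follows: the intersection area is normalized by the area of one designated frame rather than by the union. Which frame serves as the denominator varies by step (the face frame here; per the description, the whole-body frame in S203). The function name and box convention are assumptions.

```python
Box = tuple  # (x_min, y_min, x_max, y_max), as in the sketch above

def part_iou(inner: Box, outer: Box) -> float:
    """Intersection area of `inner` and `outer` divided by the area of `inner`.

    For S202, part_iou(face_box, head_shoulder_box) approaches 1 when the
    face frame lies almost entirely inside the head-shoulder frame."""
    ix_min, iy_min = max(inner[0], outer[0]), max(inner[1], outer[1])
    ix_max, iy_max = min(inner[2], outer[2]), min(inner[3], outer[3])
    inter = max(0.0, ix_max - ix_min) * max(0.0, iy_max - iy_min)
    inner_area = (inner[2] - inner[0]) * (inner[3] - inner[1])
    return inter / inner_area if inner_area > 0 else 0.0
```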
It should be understood that, while associating head-shoulder trackers and face trackers, some head-shoulder trackers may end up not associated with any face tracker, and some face trackers not associated with any head-shoulder tracker. This step therefore outputs at most four kinds of data: head-shoulder trackers not associated with a face tracker, face trackers not associated with a head-shoulder tracker, head-shoulder trackers associated with a face tracker, and face trackers associated with a head-shoulder tracker. The latter two may be collectively referred to as associated trackers; the former two may be referred to simply as unassociated head-shoulder trackers and unassociated face trackers, respectively.
The unassociated head-shoulder trackers are sent to step S207.
The unassociated face trackers are sent to step S206.
The associated trackers are sent to step S204.
S203: and associating the head and shoulder tracking bodies with the face tracking body.
The head-shoulder tracker includes a head-shoulder frame, and the whole-body tracker includes a whole-body frame.
Whether a head-shoulder frame can be associated with a whole-body frame can be confirmed by calculating the IOU of the two frames. If they can be associated, the whole-body information corresponding to the whole-body frame is stored into the head-shoulder tracker corresponding to the head-shoulder frame, and the head-shoulder information corresponding to the head-shoulder frame is stored into the whole-body tracker corresponding to the whole-body frame, so that the head-shoulder tracker holds whole-body information and the whole-body tracker holds head-shoulder information, completing the association between them.
Specifically, the IOU of each head-shoulder frame with each whole-body frame is calculated, and the whole-body frame whose IOU with a given head-shoulder frame is the largest and exceeds the second threshold is associated with that head-shoulder frame. Again, only the whole-body frames that intersect a head-shoulder frame need be considered, saving computation time and resources; of course, the IOU of each head-shoulder frame with all whole-body frames may also be calculated. The second threshold can be adjusted as needed and may optionally be 0.7, but is not limited thereto.
As shown in fig. 5, the IOU of the head-shoulder frame and the whole-body frame may be the ratio of the area of the intersection region of the whole-body frame and the head-shoulder frame to the area of the whole-body frame. Of course, the calculation is not limited to this; for example, the ratio of the intersection area to the area of the head-shoulder frame may also be used.
It should be understood that, while associating head-shoulder trackers and whole-body trackers, some head-shoulder trackers may end up not associated with any whole-body tracker, and some whole-body trackers not associated with any head-shoulder tracker. This step therefore outputs at most four kinds of data: head-shoulder trackers not associated with a whole-body tracker, whole-body trackers not associated with a head-shoulder tracker, head-shoulder trackers associated with a whole-body tracker, and whole-body trackers associated with a head-shoulder tracker. The latter two may be collectively referred to as associated trackers; the former two may be referred to simply as unassociated head-shoulder trackers and unassociated whole-body trackers, respectively.
The unassociated head-shoulder trackers are sent to step S207.
The unassociated whole-body trackers are sent to step S206.
The associated trackers are sent to step S204.
In addition, step S203 and step S202 may be performed in parallel.
S204: the head and shoulder tracking body, the face tracking body and the whole body tracking body are related.
The set of head-shoulder and face trackers that were successfully associated is re-associated with the set of head-shoulder and whole-body trackers that were successfully associated, completing the mutual association of the head-shoulder tracker, the face tracker, and the whole-body tracker.
This may include the following. Judge whether each head-shoulder tracker stores both face information and whole-body information. If both are stored, store the face information of the face tracker associated with the head-shoulder tracker into the whole-body tracker associated with the head-shoulder tracker, and store the whole-body information of the whole-body tracker associated with the head-shoulder tracker into the face tracker associated with the head-shoulder tracker, so that the face tracker holds whole-body information and the whole-body tracker holds face information. If the head-shoulder tracker stores only face information or only whole-body information, the three-way association does not succeed.
It should be understood that the face information stored into the whole-body tracker may come either from the head-shoulder tracker itself or from the face tracker associated with it; likewise, the whole-body information stored into the face tracker may come from the head-shoulder tracker or from the whole-body tracker associated with it, as sketched below.
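A hedged sketch of this cross-storage, reusing the Tracker record sketched in S201 (the link_info keys are assumptions):

```python
def associate_three_way(head_shoulder: "Tracker",
                        face: "Tracker",
                        whole_body: "Tracker") -> bool:
    """Complete the three-way association of S204 through the head-shoulder
    tracker; returns False when the trackers are only partially associated."""
    if not ("face" in head_shoulder.link_info
            and "whole_body" in head_shoulder.link_info):
        return False  # face or whole-body info missing: association fails
    # Store face info into the whole-body tracker and whole-body info into
    # the face tracker, so each of the three holds the other two parts' info.
    whole_body.link_info["face"] = head_shoulder.link_info["face"]
    face.link_info["whole_body"] = head_shoulder.link_info["whole_body"]
    whole_body.link_info["head_shoulder"] = head_shoulder.track_id
    face.link_info["head_shoulder"] = head_shoulder.track_id
    return True
```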
It should be understood that, during this re-association, some successfully associated head-shoulder/face tracker pairs may fail to match any successfully associated head-shoulder/whole-body tracker pair, and vice versa. This step therefore outputs at most seven kinds of data: head-shoulder trackers associated only with a whole-body tracker; head-shoulder trackers associated only with a face tracker; head-shoulder trackers associated with both a whole-body tracker and a face tracker; face trackers associated only with a head-shoulder tracker; face trackers associated with both a head-shoulder tracker and a whole-body tracker; whole-body trackers associated only with a head-shoulder tracker; and whole-body trackers associated with both a head-shoulder tracker and a face tracker.
Among these, head-shoulder trackers associated only with a whole-body tracker, head-shoulder trackers associated only with a face tracker, face trackers associated only with a head-shoulder tracker, and whole-body trackers associated only with a head-shoulder tracker may be collectively referred to as partially associated trackers. The partially associated trackers are sent to step S207.
Head-shoulder trackers associated with both a whole-body tracker and a face tracker, face trackers associated with both a head-shoulder tracker and a whole-body tracker, and whole-body trackers associated with both a head-shoulder tracker and a face tracker may be collectively referred to as successfully associated trackers. The successfully associated trackers are sent to step S205.
The state of a successfully associated tracker can be changed to matched-tracking, so that in subsequent images to be detected the target can be tracked based on this tracker and the two types of trackers associated with it.
S205: and judging whether the boundary deletion condition is met.
Judge whether the successfully associated tracker meets the boundary deletion condition; if so, delete it, and otherwise continue with the iterative tracking of the next frame. The boundary deletion condition judges whether the target is at the edge of the image to be detected, and can be confirmed from the aspect ratio of the face frame. For example, when the aspect ratio of the face frame is greater than a seventh threshold, the face tracker corresponding to that face frame, together with the whole-body tracker and head-shoulder tracker associated with it, may be judged to meet the boundary deletion condition and be deleted.
If the successfully associated tracker satisfies the boundary deletion condition, the process proceeds to step S208.
If the successfully associated tracker does not satisfy the boundary deletion condition, the process proceeds to step S209.
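A minimal sketch of the boundary check follows. The application only states that the condition can be confirmed from the aspect ratio of the face frame; the ratio orientation and the example seventh threshold here are assumptions.

```python
def meets_boundary_deletion(face_box: tuple, seventh_threshold: float = 1.5) -> bool:
    """Treat an extreme height-to-width ratio of the face frame (typical of a
    frame clipped by the image border) as evidence the target is at the edge."""
    w = face_box[2] - face_box[0]
    h = face_box[3] - face_box[1]
    if w <= 0 or h <= 0:
        return True  # degenerate frame: delete
    return (h / w) > seventh_threshold
```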
S206: the face tracker and the whole-body tracker are correlated.
A whole-body tracker that is not associated with the head-shoulder tracker and a face tracker that is not associated with the head-shoulder tracker are associated with each other.
Whether a face frame can be associated with a whole-body frame can be confirmed by calculating the IOU of the two frames. If they can be associated, the whole-body information corresponding to the whole-body frame is stored into the face tracker corresponding to the face frame, and the face information corresponding to the face frame is stored into the whole-body tracker corresponding to the whole-body frame, so that the face tracker holds whole-body information and the whole-body tracker holds face information, completing the association between them.
Specifically, the IOU of each face frame with each whole-body frame is calculated, and the whole-body frame whose IOU with a given face frame is the largest and exceeds the third threshold is associated with that face frame. Again, only the whole-body frames that intersect a face frame need be considered, saving computation time and resources; of course, the IOU of each face frame with all whole-body frames may also be calculated. The third threshold can be adjusted as needed and may optionally be 0.8, but is not limited thereto.
As shown in fig. 6, the IOU of the face frame and the whole-body frame may be the ratio of the area of the intersection region of the face frame and the whole-body frame to the area of the face frame. Of course, the calculation is not limited to this; for example, the ratio of the intersection area to the area of the whole-body frame may also be used.
It should be understood that, while associating face trackers and whole-body trackers, some face trackers may end up not associated with any whole-body tracker, and some whole-body trackers not associated with any face tracker. This step therefore outputs at most four kinds of data: face trackers not associated with a whole-body tracker, whole-body trackers not associated with a face tracker, face trackers associated with a whole-body tracker, and whole-body trackers associated with a face tracker. The first two may be collectively referred to as unassociated trackers; the latter two may be collectively referred to as partially associated trackers.
The unassociated trackers are sent to the temporary library of unassociated parts, and the process goes to step S207.
The partially associated trackers are sent to the partially associated target library, and the process goes to step S207.
S207: and judging whether the target deleting condition is met.
The unassociated head and shoulder trackers may also be referred to as unassociated trackers.
If an unassociated tracker and/or a partially associated tracker fails to associate in N consecutive frames of images to be detected, it satisfies the target deletion condition, and the process proceeds to step S208.
If the number of consecutive frames in which an unassociated tracker and/or a partially associated tracker fails to associate does not exceed N, the target deletion condition is not met, and the process proceeds to step S209.
S208: the trace volume is deleted.
S209: and (5) iterative association and tracking.
Unassociated trackers and partially associated trackers that do not meet the target deletion condition, as well as successfully associated trackers that do not meet the boundary deletion condition, continue with iterative association and tracking.
In the iterative association and tracking process, for each target type, the IOU of the current frame's tracking frame with each of the current frame's detection frames is calculated, and the detection frame with the largest IOU is taken as the detection frame matching the tracker corresponding to the tracking frame. The tracking-frame information in the tracker can then be updated with that detection frame's information; that is, the detection frame with the largest IOU becomes the current tracking frame of the target.
Here the IOU of the tracking frame and the detection frame may be the ratio of the area of their intersection region to the area of their union.
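A sketch of one per-type update step, reusing the iou() and Tracker sketches above; all names are assumptions:

```python
from typing import List

def iterate_tracking(tracker: "Tracker", detections: List["Box"]) -> bool:
    """Match the current tracking frame against the current frame's detection
    frames by standard IOU and adopt the best match as the new tracking frame."""
    best_box, best_val = None, 0.0
    for det in detections:
        v = iou(tracker.box, det)  # intersection area / union area
        if v > best_val:
            best_box, best_val = det, v
    if best_box is None:
        return False  # no overlapping detection: association failed this frame
    tracker.box = best_box  # the detection frame with the largest IOU becomes the tracking frame
    return True
```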
It should be understood that, when the target corresponding to a partially associated or successfully associated tracker is detected again, the newly detected detection frame can be re-associated with the detection frame of its associated tracker, and the information that was preserved in the associated tracker is then re-assigned to the tracker corresponding to the new detection frame, so that the information of the partially or successfully associated tracker does not change. This resolves target loss, identity switches, repeated snapshots, and similar problems caused by occlusion by foreign objects or by pedestrians occluding one another, so the tracked target can be determined accurately and the robustness of target tracking is greatly improved. For example, suppose face tracker No. 1 is associated with whole-body tracker No. 5. In frame 10 of the images to be detected, the face frame of tracker No. 1 is not detected, but the whole-body frame of tracker No. 5 is. In frame 11, the face frame is detected again, together with the whole-body frame; the two are re-associated, and the face information preserved in whole-body tracker No. 5 is assigned to the new face frame and stored in face tracker No. 1. Thus, even if the target's face goes undetected for a frame, its face information remains unchanged, which solves the tracking-loss problem. A minimal sketch of this write-back is given below.
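The write-back itself can be as small as the following, again reusing the Tracker sketch (the link_info key and the choice to restore the ID via track_id are assumptions):

```python
def restore_face_identity(new_face: "Tracker", whole_body: "Tracker") -> None:
    """When the face is re-detected after a gap, assign the face information
    preserved in the associated whole-body tracker back to the new face
    tracker, so the face ID (e.g. No. 1 in the example above) is unchanged."""
    preserved_id = whole_body.link_info.get("face")
    if preserved_id is not None:
        new_face.track_id = preserved_id
```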
Further, the target tracking method in the above embodiment may be applied to face detection.
For example, fig. 7 shows three key frames from one tracking sequence. In the frame of fig. 7-a, the face tracker, head-shoulder tracker, and whole-body tracker are successfully associated. In the frame of fig. 7-b, the target twists around and the face cannot be detected, but key information such as the face ID from fig. 7-a is already recorded in the Link_Info fields of the head-shoulder tracker and the whole-body tracker. By the frame of fig. 7-c, the face tracker, head-shoulder tracker, and whole-body tracker can be associated again, and the face ID recorded in the head-shoulder or whole-body tracker is assigned to the newly detected face, so the face ID is maintained throughout. With pure face-tracker tracking alone, the face IDs in fig. 7-a and fig. 7-c would generally differ in this situation.
As another example, fig. 8 also shows three key frames from a tracking sequence. In the frame of fig. 8-a, the face tracker, head-shoulder tracker, and whole-body tracker are successfully associated. In the frame of fig. 8-b, the target turns around, so neither the face nor the head and shoulders can be detected, and the head-shoulder tracker alone cannot resolve the situation; however, the Link_Info field of the whole-body tracker still records key information such as the face ID from fig. 8-a. By the frame of fig. 8-c, the face tracker, head-shoulder tracker, and whole-body tracker can again be associated, and the face ID information is maintained throughout.
The target tracking method above is generally implemented by a target tracking device, so the present application also provides a target tracking device. Referring to fig. 9, fig. 9 is a schematic structural diagram of an embodiment of the target tracking device of the present application. The target tracking device 10 of this embodiment includes a processor 12 and a memory 11; the memory 11 stores program instructions implementing the target tracking method described above, and the processor 12 executes the program instructions stored in the memory 11.
The logical process of the target tracking method described above can be embodied in a computer program which, when sold or used as a stand-alone software product, may be stored in a computer storage medium; the present application therefore also proposes a readable storage medium. Referring to fig. 10, fig. 10 is a schematic structural diagram of an embodiment of the readable storage medium of the present application. The readable storage medium 20 of this embodiment stores a computer program 21 which, when executed by a processor, implements the steps of the target tracking method above.
The readable storage medium 20 may be any medium capable of storing the computer program, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk; it may also be a server that stores the computer program and can either send it to another device for execution or run it itself. Physically, the readable storage medium 20 may also be a combination of several entities, for example a group of servers, a server plus a memory, or a memory plus a removable hard disk.
The above embodiments are merely examples and do not limit the scope of the present application; any equivalent modification or variation made using the contents of the specification and drawings of the present application, whether applied directly or indirectly in other related technical fields, likewise falls within the scope of the present application.

Claims (10)

1. A method of target tracking, the method comprising:
performing face detection and body detection on a target in an acquired image to be detected, and establishing a face tracker and a body tracker, wherein the face tracker contains face information and the body tracker contains body information;
associating the face tracker and the body tracker, so that the face tracker has body information therein and the body tracker has face information therein;
tracking the target in subsequently acquired images to be detected based on the face tracker and the body tracker, and obtaining the body information when only the face is tracked, or obtaining the face information when only the body is tracked.
2. The target tracking method of claim 1, wherein the body tracker comprises a head-shoulder tracker and a whole-body tracker, and wherein associating the face tracker and the body tracker comprises:
associating the head-shoulder tracker, the whole-body tracker and the face tracker with one another, so that the face tracker has head-shoulder information and whole-body information, the head-shoulder tracker has face information and whole-body information, and the whole-body tracker has face information and head-shoulder information; and/or,
associating the face tracker with the head-shoulder tracker, so that the face tracker has head-shoulder information and the head-shoulder tracker has face information; and/or,
associating the face tracker with the whole-body tracker, so that the face tracker has whole-body information and the whole-body tracker has face information.
3. The target tracking method of claim 2, wherein the associating of the head-shoulder tracker, the whole-body tracker and the face tracker with one another comprises:
associating the head-shoulder tracker with the face tracker, so that the head-shoulder tracker has face information and the face tracker has head-shoulder information; and associating the head-shoulder tracker with the whole-body tracker, so that the head-shoulder tracker has whole-body information and the whole-body tracker has head-shoulder information;
and re-associating the successfully associated pair of the head-shoulder tracker and the face tracker with the successfully associated pair of the head-shoulder tracker and the whole-body tracker, so as to complete the association of the head-shoulder tracker, the face tracker and the whole-body tracker.
4. The target tracking method of claim 3, wherein re-associating the successfully associated pair of the head-shoulder tracker and the face tracker with the successfully associated pair of the head-shoulder tracker and the whole-body tracker comprises:
confirming whether the head-shoulder tracker stores both face information and whole-body information;
if so, storing the face information corresponding to the head-shoulder tracker into the whole-body tracker associated with the head-shoulder tracker, and storing the whole-body information corresponding to the head-shoulder tracker into the face tracker associated with the head-shoulder tracker.
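By way of illustration only (not part of the claim), this step might look as follows in Python, continuing the hypothetical Tracker sketch from the description above; all names remain assumptions:

```python
def complete_three_way(hs, face, body):
    """If the head-shoulder tracker already stores both face and whole-body
    information, copy the face info into the whole-body tracker and the
    whole-body info into the face tracker."""
    if "face" in hs.link_info and "whole_body" in hs.link_info:
        body.link_info["face"] = hs.link_info["face"]
        face.link_info["whole_body"] = hs.link_info["whole_body"]
```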
5. The target tracking method of claim 3, wherein the head-shoulder tracker includes a head-shoulder frame, the face tracker includes a face frame, and the whole-body tracker includes a whole-body frame, and wherein the associating of the head-shoulder tracker with the face tracker comprises:
calculating the intersection ratio of the head-shoulder frame and the face frame;
associating the face frame whose intersection ratio is the largest and exceeds a first threshold with the head-shoulder frame; and
storing the face information corresponding to the face frame into the head-shoulder tracker corresponding to the associated head-shoulder frame, and storing the head-shoulder information corresponding to the head-shoulder frame into the face tracker corresponding to the face frame;
and the associating of the head-shoulder tracker with the whole-body tracker comprises:
calculating the intersection ratio of the head-shoulder frame and the whole-body frame;
associating the whole-body frame whose intersection ratio is the largest and exceeds a second threshold with the head-shoulder frame; and
storing the whole-body information corresponding to the whole-body frame into the head-shoulder tracker corresponding to the associated head-shoulder frame, and storing the head-shoulder information corresponding to the head-shoulder frame into the whole-body tracker corresponding to the whole-body frame.
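By way of illustration only (not part of the claim), a minimal Python sketch of this greedy intersection-ratio association follows; the threshold value is an assumption, and boxes are (x, y, w, h) tuples:

```python
def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def associate(head_shoulder_box, candidate_boxes, threshold=0.3):
    """Return the candidate whose intersection ratio with the head-shoulder
    box is the largest and exceeds the threshold, or None if none qualifies."""
    best = max(candidate_boxes, key=lambda c: iou(head_shoulder_box, c),
               default=None)
    if best is not None and iou(head_shoulder_box, best) > threshold:
        return best
    return None
```

The same routine would serve both associations in this claim: called with face frames it applies the first threshold, and with whole-body frames the second.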
6. The target tracking method of claim 2, wherein, after the associating of the head-shoulder tracker with the face tracker and the associating of the head-shoulder tracker with the whole-body tracker, the method further comprises:
associating a face tracker that has not been associated with the head-shoulder tracker with a whole-body tracker that has not been associated with the head-shoulder tracker, so that the whole-body tracker has face information within it and the face tracker has whole-body information within it.
7. The target tracking method of claim 6, wherein the face tracker includes a face frame and the whole-body tracker includes a whole-body frame, and wherein associating the face tracker that is not associated with the head-shoulder tracker with the whole-body tracker that is not associated with the head-shoulder tracker comprises:
calculating the intersection ratio of the face frame and the whole-body frame;
associating the face frame whose intersection ratio is the largest and exceeds a third threshold with the whole-body frame; and
storing the whole-body information corresponding to the whole-body frame into the face tracker corresponding to the associated face frame, and storing the face information corresponding to the face frame into the whole-body tracker corresponding to the whole-body frame;
wherein the intersection ratio of the face frame and the whole-body frame is the ratio of the area of the face frame to the area of the union of the face frame and the whole-body frame.
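By way of illustration only (not part of the claim), this modified ratio, which divides the face-frame area by the union area, presumably because the face frame normally lies inside the much larger whole-body frame, might be computed as follows; boxes are (x, y, w, h) tuples:

```python
def face_body_ratio(face, body):
    """Ratio of the face-frame area to the area of the union of the face
    frame and the whole-body frame (the intersection ratio of this claim)."""
    fx2, fy2 = face[0] + face[2], face[1] + face[3]
    bx2, by2 = body[0] + body[2], body[1] + body[3]
    iw = max(0, min(fx2, bx2) - max(face[0], body[0]))
    ih = max(0, min(fy2, by2) - max(face[1], body[1]))
    inter = iw * ih
    union = face[2] * face[3] + body[2] * body[3] - inter
    return (face[2] * face[3]) / union if union else 0.0
```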
8. The target tracking method of claim 1, wherein the body tracker and the face tracker are both referred to as trackers;
the tracking body is created when corresponding part frames of the same target are detected from continuous multiple frames of images to be detected, and the confidence degrees of the corresponding part frames detected from the continuous multiple frames of images to be detected are all larger than a fourth threshold value;
the tracking body comprises position information, identity information and type information of the corresponding position frame.
9. A target tracking apparatus, characterized in that the target tracking apparatus comprises a memory and a processor; the memory has stored therein a computer program for execution by the processor to implement the steps of the method according to any one of claims 1-8.
10. A readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 8.
CN202010791467.8A 2020-08-07 2020-08-07 Target tracking method and device thereof Pending CN112037253A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010791467.8A CN112037253A (en) 2020-08-07 2020-08-07 Target tracking method and device thereof

Publications (1)

Publication Number Publication Date
CN112037253A true CN112037253A (en) 2020-12-04

Family

ID=73582820

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010791467.8A Pending CN112037253A (en) 2020-08-07 2020-08-07 Target tracking method and device thereof

Country Status (1)

Country Link
CN (1) CN112037253A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018195979A1 (en) * 2017-04-28 2018-11-01 深圳市大疆创新科技有限公司 Tracking control method and apparatus, and flight vehicle
CN110163889A (en) * 2018-10-15 2019-08-23 腾讯科技(深圳)有限公司 Method for tracking target, target tracker, target following equipment
CN109740516A (en) * 2018-12-29 2019-05-10 深圳市商汤科技有限公司 A kind of user identification method, device, electronic equipment and storage medium
CN110852269A (en) * 2019-11-11 2020-02-28 青岛海信网络科技股份有限公司 Cross-lens portrait correlation analysis method and device based on feature clustering
CN111161320A (en) * 2019-12-30 2020-05-15 浙江大华技术股份有限公司 Target tracking method, target tracking device and computer readable medium
CN111428607A (en) * 2020-03-19 2020-07-17 浙江大华技术股份有限公司 Tracking method and device and computer equipment
CN111353473A (en) * 2020-03-30 2020-06-30 浙江大华技术股份有限公司 Face detection method and device, electronic equipment and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114219832A (en) * 2021-11-29 2022-03-22 浙江大华技术股份有限公司 Face tracking method and device and computer readable storage medium
CN114782993A (en) * 2022-04-29 2022-07-22 上海商汤智能科技有限公司 Image processing method and device, electronic equipment and computer readable storage medium

Legal Events

Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20201204)