CN111047626A - Target tracking method and device, electronic equipment and storage medium - Google Patents

Target tracking method and device, electronic equipment and storage medium

Info

Publication number: CN111047626A
Authority: CN (China)
Prior art keywords: target, optical flow, tracking point, frame, flow tracking
Legal status: Granted; Active
Application number: CN201911374132.XA, filed by Shenzhen Intellifusion Technologies Co Ltd
Other languages: Chinese (zh)
Other versions: CN111047626B (granted publication)
Inventor: 曾佐祺
Current Assignee: Shenzhen Intellifusion Technologies Co Ltd
Original Assignee: Shenzhen Intellifusion Technologies Co Ltd

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/269 Analysis of motion using gradient-based methods
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the application discloses a target tracking method and apparatus, an electronic device, and a storage medium. The method includes the following steps: acquiring a first target frame of at least one target object in a current image frame of a target video; marking overlapping areas of the first target frame to obtain the occlusion condition of the first target frame; acquiring an optical flow tracking point set of the current image frame according to the occlusion condition of the first target frame and the category of the target object; performing sparse optical flow calculation on the optical flow tracking point set to obtain, for each optical flow tracking point in the set, a corresponding target tracking point in the next image frame of the current image frame; and acquiring a second target frame of the target object in the next image frame according to the target tracking points. The method and apparatus help improve the efficiency and effect of tracking multiple targets in a video.

Description

Target tracking method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of video target tracking technologies, and in particular, to a target tracking method and apparatus, an electronic device, and a storage medium.
Background
With the development of machine vision theory and technology, identifying and understanding video content has become a research hotspot, and since video-based single-target tracking products reached the market, the demand for tracking multiple targets in a video has grown steadily. Many video-based target tracking methods exist. For example, methods that associate targets in adjacent video frames according to physical measures such as the distance between target center points or the intersection-over-union of target areas are unsuitable in some scenes. The more common optical flow tracking method is also widely applied at present, but it is mostly used to track a single target in a video image and selects tracking points uniformly, so background pixels are inevitably chosen as tracking points; as a result, current optical flow tracking methods suffer from poor efficiency and limited effect in multi-target tracking.
Disclosure of Invention
In order to solve the above problems, the present application provides a target tracking method, apparatus, electronic device, and storage medium, which help improve the efficiency and effect of tracking multiple targets in a video.
A first aspect of an embodiment of the present application provides a target tracking method, where the target tracking method includes:
acquiring a first target frame of at least one target object in a current image frame of a target video;
marking an overlapping area of the first target frame to obtain the occlusion condition of the first target frame;
acquiring an optical flow tracking point set of the current image frame according to the occlusion condition of the first target frame and the category of the target object;
performing sparse optical flow calculation on the optical flow tracking point set to obtain a target tracking point corresponding to each optical flow tracking point in the optical flow tracking point set in a next image frame of the current image frame;
and acquiring a second target frame of the target object in the next image frame according to the target tracking point.
With reference to the first aspect, in one possible example, the acquiring the set of optical flow tracking points of the current image frame according to the occlusion condition of the first target frame and the category of the target object includes:
selecting the optical flow tracking points from a preset area of each first target frame according to the occlusion condition of each first target frame and the category of the target object corresponding to each first target frame;
and adding the optical flow tracking points selected from each first target frame into the same set to obtain the optical flow tracking point set.
With reference to the first aspect, in a possible example, the selecting the optical flow tracking point from the preset area of each first target frame according to the occlusion condition of each first target frame and the category of the target object corresponding to each first target frame includes:
for the first target frame belonging to a first class of target objects, if the first target frame is not occluded, reducing the first target frame, centered on its center, by a first preset proportion to obtain a first selection window, and selecting the optical flow tracking points in the first selection window; if the first selection window is occluded, selecting the optical flow tracking points in the non-occluded area of the first selection window;
for the first target frame belonging to a second class of target objects, if the first target frame is not occluded, selecting the optical flow tracking points at a first preset height and a second preset height of the first target frame; if the first preset height or the second preset height is occluded, selecting the optical flow tracking points in the non-occluded areas at the first preset height and the second preset height;
for the first target frame belonging to a third class of target objects, if the first target frame is not occluded, reducing the first target frame, centered on its center, by a second preset proportion to obtain a second selection window, and selecting the optical flow tracking points in the second selection window; and if the second selection window is occluded, selecting the optical flow tracking points in the non-occluded area of the second selection window.
With reference to the first aspect, in one possible example, the performing sparse optical flow computation on the set of optical-flow tracking points to obtain a corresponding target tracking point of each optical-flow tracking point in the set of optical-flow tracking points in an image frame next to the current image frame includes:
converting the current image frame into a grayscale image, and constructing an image pyramid from the grayscale image of the current image frame; the bottom layer of the image pyramid is the current image frame;
acquiring coordinates of each optical flow tracking point in the optical flow tracking point set on each layer of image of an image pyramid;
calculating an optical flow of each optical flow tracking point in the set of optical flow tracking points;
and obtaining the target tracking point through the coordinates of each optical flow tracking point in the optical flow tracking point set on the current image frame and the optical flow.
With reference to the first aspect, in one possible example, the acquiring a second target frame of the target object in the next image frame according to the target tracking point includes:
calculating the displacement of each optical flow tracking point in the X direction and the Y direction according to the coordinate of each optical flow tracking point in the optical flow tracking point set and the coordinate of the target tracking point corresponding to each optical flow tracking point;
calculating a first distance between every two optical flow tracking points and taking an absolute value, calculating a second distance between two target tracking points corresponding to the two optical flow tracking points and taking an absolute value, obtaining a distance ratio between the first distance after the absolute value is taken and the second distance after the absolute value is taken, and determining a median of the distance ratios as a scaling size;
selecting a first median value of displacement of each optical flow tracking point in the optical flow tracking point set in the X direction and a second median value of displacement of each optical flow tracking point in the Y direction;
and calculating to obtain the second target frame according to the coordinate, the width and the height of the central point of the first target frame, the scaling size, the first median and the second median.
A second aspect of the embodiments of the present application provides a multi-target tracking apparatus, including:
the target detection module is used for acquiring a first target frame of at least one target object in a current image frame of a target video;
the occlusion detection module is used for marking the overlapping area of the first target frame to obtain the occlusion condition of the first target frame;
a tracking point set acquisition module, configured to acquire an optical flow tracking point set of the current image frame according to the occlusion condition of the first target frame and the category of the target object;
the optical flow calculation module is used for performing sparse optical flow calculation on the optical flow tracking point set to obtain a target tracking point corresponding to each optical flow tracking point in the optical flow tracking point set in a next image frame of the current image frame;
and the target position determining module is used for acquiring a second target frame of the target object in the next image frame according to the target tracking point.
A third aspect of embodiments of the present application provides an electronic device, which includes an input device, an output device, and a processor adapted to implement one or more instructions; and
a computer storage medium storing one or more instructions adapted to be loaded by the processor to perform the steps of the method of the first aspect.
A fourth aspect of embodiments of the present application provides a computer storage medium storing one or more instructions adapted to be loaded by a processor to perform the steps of the method of the first aspect.
It can be seen that, in the technical solution provided by the embodiment of the present application, a first target frame of at least one target object in a current image frame of a target video is acquired; overlapping areas of the first target frame are marked to obtain the occlusion condition of the first target frame; an optical flow tracking point set of the current image frame is acquired according to the occlusion condition of the first target frame and the category of the target object; sparse optical flow calculation is performed on the optical flow tracking point set to obtain, for each optical flow tracking point in the set, a corresponding target tracking point in the next image frame of the current image frame; and a second target frame of the target object in the next image frame is acquired according to the target tracking points. In this way, the optical flow tracking points of each object are selected according to the occlusion condition of each target object in the current image frame, and all selected optical flow tracking points are then placed in one set for a single sparse optical flow calculation, which helps improve multi-target tracking efficiency; meanwhile, because the optical flow tracking points are selected according to the occlusion condition of the target object, background pixels are prevented from being selected as tracking points, which helps improve the multi-target tracking effect.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a diagram of an application architecture provided by an embodiment of the present application;
fig. 2 is a schematic flowchart of a target tracking method according to an embodiment of the present application;
FIG. 3 is a diagram illustrating an example of an occlusion handling unit according to an embodiment of the present application;
fig. 4 is a schematic flowchart of a process for obtaining a second target frame according to an embodiment of the present application;
fig. 5 is a schematic flowchart of another target tracking method provided in an embodiment of the present application;
FIG. 6a is an exemplary diagram of selecting optical flow tracking points when a first target frame is not occluded according to the present application;
FIG. 6b is a diagram illustrating an example of selecting optical flow tracking points when a first target frame is occluded according to the present application;
FIG. 7a is a diagram illustrating another example of selecting optical flow tracking points when a first target frame is not occluded according to the embodiment of the present application;
FIG. 7b is a diagram illustrating another example of selecting optical flow tracking points when a first target frame is occluded according to the embodiment of the present application;
FIG. 8a is a diagram illustrating another example of selecting optical flow tracking points when a first target frame is not occluded according to the embodiment of the present application;
FIG. 8b is a diagram illustrating another example of selecting optical flow tracking points when a first target frame is occluded according to the embodiment of the present application;
fig. 9 is a schematic structural diagram of a target tracking apparatus according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
The terms "comprising" and "having," and any variations thereof, as appearing in the specification, claims and drawings of this application, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus. Furthermore, the terms "first," "second," and "third," etc. are used to distinguish between different objects and are not used to describe a particular order.
First, an application architecture to which the scheme of the embodiments of the present application may be applied is described with reference to the accompanying drawings. Referring to fig. 1, fig. 1 is an application architecture diagram provided in an embodiment of the present application. As shown in fig. 1, the architecture includes a user terminal, a server, a database, and an image acquisition device, all connected and communicating through a network, which provides a reliable system architecture for the target tracking method of the present application.
The user terminal provides a human-machine interface for sending user instructions or requests to the server, such as a target tracking request, a convolutional neural network training request, or a video image acquisition request; it receives the results returned after the server processes the instruction or request and displays them in a display window, for example, the target frame of a pedestrian obtained during target tracking.
The server is the execution body of the whole target tracking method; it performs a series of target tracking operations on the objects in the acquired video images according to the instructions or requests sent by the user terminal, such as target detection, tracking point selection, and algorithm execution. The server includes, but is not limited to, a local server, a cloud server, or a server cluster.
The database may belong to the server or be independent of it, for example a cloud database or an open-source database. It stores data sets usable for target tracking experiments, such as complete video sequences and adjacent image frames, and also stores the videos captured by the image acquisition device, such as videos collected by residential-area surveillance equipment or road speed cameras.
The image acquisition device may be any device capable of capturing video images. The captured video may be displayed at the user terminal, sent to the server in real time so that the server can perform the target tracking operation, or stored in the database for later use; this is not specifically limited.
Based on the application architecture shown in fig. 1, an embodiment of the present application provides a target tracking method, which can be executed by an electronic device and is applicable not only to single-target tracking scenes in video images but also to multi-target tracking scenes and scenes in which a target is occluded. Referring to fig. 2, the target tracking method may include the following steps:
s21, a first target frame of at least one target object in the current image frame of the target video is obtained.
In this embodiment of the present application, the target video is a video acquired by an image acquisition device. It may be acquired in real time, for example a live surveillance feed from a street or industrial-park camera, or it may be a historical video acquired by the image acquisition device and stored in a database; for example, when testing the effect of the target tracking method, real-time video is unnecessary, and any segment of historical video serves the purpose. The current image frame is the image frame of the target video appearing in the display window of the user terminal at the current time; in some specific scenes it may also be a user-selected image frame, for example, during a criminal investigation the police select the image frame of the target video in which a suspect first appears as the current image frame.
Specifically, the target object may be a pedestrian, a human face, a vehicle, or the like in any image frame of the target video. After the current image frame is acquired, it is input into a pre-trained neural network for feature extraction and target detection, and the output is the detection frames O_i of all target objects in the current image frame, i.e., the first target frames, where i represents the unique tracking identity of the i-th target object in the current image frame. The pre-trained neural network may be Fast R-CNN (Fast Region-based Convolutional Neural Network), MTCNN (Multi-task Cascaded Convolutional Neural Network), OR-CNN (Occlusion-aware Region-based Convolutional Neural Network), or the like.
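For illustration only, such a detection step can be sketched with an off-the-shelf pretrained detector. The patent does not prescribe a particular framework, so the torchvision model, the score threshold, and the helper name below are assumptions.

```python
# A minimal detection sketch (not the patent's exact network): obtain the
# first target frames O_i of one image frame with a pretrained detector.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

def detect_first_target_frames(frame_rgb, score_thresh=0.7):
    """Return a list of (box, label) per target; box = (x1, y1, x2, y2)."""
    with torch.no_grad():
        out = model([to_tensor(frame_rgb)])[0]
    detections = []
    for box, label, score in zip(out["boxes"], out["labels"], out["scores"]):
        if score >= score_thresh:  # keep confident detections only
            detections.append((tuple(box.tolist()), int(label)))
    return detections
```

The label returned by the detector can serve as the category of the target object used later in step S23.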
And S22, marking the overlapping area of the first target frame to obtain the occlusion condition of the first target frame.
In this embodiment, overlapping areas may be marked according to the center-point coordinates, width, and height of each first target frame. For example, the intersection-over-union of every two first target frames is calculated; the intersection area of the two first target frames is the overlapping area, and the front-back order of the two frames is then judged, so that the occlusion condition of each first target frame is obtained, namely whether the first target frame is occluded, its occluded area, and its non-occluded area. Optionally, the occlusion processing unit provided in OR-CNN may also be used to mark overlapping areas: after the first target frame is obtained by OR-CNN in step S21, it is divided into a preset number (for example, 5) of target regions, the features of these regions are extracted separately, and the extracted features are input into the occlusion processing unit shown in fig. 3, where they undergo three 3 × 3 convolutions and binary classification by a softmax classifier, and an occlusion score is output for each of the preset number of target regions. When the occlusion score of a target region is smaller than a threshold (for example, 0.9 or 0.8), the region is marked as an overlapping region, which indicates that the first target frame is occluded and that the occluded area is the marked region.
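A minimal sketch of the IoU-based marking path described above, assuming axis-aligned boxes given as (x1, y1, x2, y2); the OR-CNN scoring path is omitted, and deciding which of two overlapping frames is in front is application-specific and left open here.

```python
# Mark overlapping regions between every pair of first target frames.
def iou(a, b):
    """Return (IoU, intersection rectangle) of two boxes, or (0.0, None)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    if inter == 0.0:
        return 0.0, None
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter), (ix1, iy1, ix2, iy2)

def mark_overlaps(boxes):
    """Return {box index: list of overlap rectangles} over all box pairs."""
    overlaps = {i: [] for i in range(len(boxes))}
    for i in range(len(boxes)):
        for j in range(i + 1, len(boxes)):
            score, region = iou(boxes[i], boxes[j])
            if score > 0.0:
                # the intersection rectangle is the overlapping area; which
                # frame is occluded depends on the judged front-back order
                overlaps[i].append(region)
                overlaps[j].append(region)
    return overlaps
```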
S23, acquiring the optical flow tracking point set of the current image frame according to the occlusion condition of the first target frame and the category of the target object.
In this embodiment of the present application, when selecting the optical flow tracking points in a first target frame, both the occlusion condition of the first target frame and the category of the target object may be considered. For example, the target object may be a human face, a human body, or another object such as a vehicle; the category of the target object is obtained during target detection in step S21, and the area in which optical flow tracking points are selected differs for each category of target object. The selection rule is: if the first target frame is not occluded, m × n feature points are selected as optical flow tracking points in a preset area, where m and n may be equal; if the preset area of the first target frame is occluded, the non-occluded feature points of the preset area are selected as optical flow tracking points. Finally, all selected optical flow tracking points form the optical flow tracking point set {p_ij} of the current image frame, where j denotes the j-th optical flow tracking point of the i-th target object.
And S24, performing sparse optical flow calculation on the optical flow tracking point set to obtain a target tracking point corresponding to each optical flow tracking point in the optical flow tracking point set in the next image frame of the current image frame.
In the embodiment of the present application, the target tracking points are the points in the next image frame of the current image frame that correspond to the selected optical flow tracking points. For the obtained optical flow tracking point set {p_ij}, a sparse optical flow algorithm is applied to all optical flow tracking points p_ij in the set in a single computation, yielding for each optical flow tracking point p_ij its target tracking point p'_ij in the next image frame.
The performing sparse optical flow calculation on the optical flow tracking point set to obtain a target tracking point corresponding to each optical flow tracking point in the optical flow tracking point set in a next image frame of the current image frame includes:
converting the current image frame into a grayscale image, and constructing an image pyramid from the grayscale image of the current image frame; the bottom layer of the image pyramid is the current image frame;
acquiring coordinates of each optical flow tracking point in the optical flow tracking point set on each layer of image of an image pyramid;
calculating an optical flow of each optical flow tracking point in the set of optical flow tracking points;
and obtaining the target tracking point through the coordinates of each optical flow tracking point in the optical flow tracking point set on the current image frame and the optical flow.
Specifically, the grayscale image of the current image frame is scaled to obtain the layers of an image pyramid, numbered L = 0, 1, 2, ..., L_m, where the lowest-resolution layer L_m is the top layer and the original image of the current image frame is the bottom layer of the image pyramid. The optical flow algorithm starts from the topmost layer of the image pyramid and calculates the optical flow on the topmost image; the result of the topmost layer is passed down as the initial value of the next layer (layer L_{m-1}), the optical flow of layer L_{m-1} is then calculated from that initial value, and the result of layer L_{m-1} is passed down in turn as the initial value of the following layer, and so on until the bottommost layer; the optical flow calculated at the bottommost layer is the final optical flow result. The calculation at each layer uses the coordinates of every optical flow tracking point p_ij on that layer, so the coordinates of each optical flow tracking point p_ij on every layer of the image pyramid, including the original image of the current image frame, must first be obtained. Denoting the optical flow calculated at layer L_{m-1} by d^{L_{m-1}} and its initial value by g^{L_{m-1}}, the initial value of layer L_{m-2} is
g^{L_{m-2}} = 2(g^{L_{m-1}} + d^{L_{m-1}}),
so that, iterating layer by layer, the optical flow d of each optical flow tracking point p_ij on the bottom image of the image pyramid (i.e., the original image of the current image frame) can be calculated. Adding the optical flow d to the coordinates of each optical flow tracking point p_ij on the original image of the current image frame yields its target tracking point p'_ij in the next image frame.
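This single-pass pyramidal computation is what OpenCV's pyramidal Lucas-Kanade routine performs internally, so the step can be sketched with it; the window size and pyramid depth below are illustrative assumptions, not values taken from the patent.

```python
# Sparse optical flow over the pooled tracking point set in one call.
import cv2
import numpy as np

def track_points(curr_frame_bgr, next_frame_bgr, points_xy):
    """points_xy: (N, 2) float32 array holding every p_ij of the set.
    Returns the (N, 2) target points p'_ij and a per-point status flag
    (1 = tracked successfully)."""
    gray0 = cv2.cvtColor(curr_frame_bgr, cv2.COLOR_BGR2GRAY)
    gray1 = cv2.cvtColor(next_frame_bgr, cv2.COLOR_BGR2GRAY)
    pts = points_xy.reshape(-1, 1, 2).astype(np.float32)
    next_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        gray0, gray1, pts, None,
        winSize=(21, 21), maxLevel=3,
        criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
    return next_pts.reshape(-1, 2), status.reshape(-1)
```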
And S25, acquiring a second target frame of the target object in the next image frame according to the target tracking point.
In an embodiment of the present application, as shown in fig. 4, the above acquiring a second target frame of the target object in the next image frame according to the target tracking point includes steps S2501 to S2504:
s2501, calculating the displacement of each optical flow tracking point in the X direction and the displacement of each optical flow tracking point in the Y direction according to the coordinate of each optical flow tracking point in the optical flow tracking point set and the coordinate of the target tracking point corresponding to each optical flow tracking point;
s2502, calculating a first distance between every two optical flow tracking points and taking an absolute value, calculating a second distance between two target tracking points corresponding to the two optical flow tracking points and taking an absolute value, obtaining a distance ratio between the first distance after the absolute value is taken and the second distance after the absolute value is taken, and determining a median of the distance ratios as a scaling size;
s2503, selecting a first median value of displacement of each optical flow tracking point in the optical flow tracking point set in the X direction and a second median value of displacement of each optical flow tracking point in the Y direction;
s2504, calculating according to the coordinate, the width and the height of the center point of the first target frame, the scaling size, the first median and the second median to obtain the second target frame.
As can be understood, after each optical flow tracking point p_ij and its corresponding target tracking point p'_ij are obtained, the coordinates of p_ij and p'_ij are used to calculate the displacement dx_ij of each optical flow tracking point in the X direction and its displacement dy_ij in the Y direction; the median of the dx_ij is selected as the first median Δx, and the median of the dy_ij is selected as the second median Δy. Then the distance between every two optical flow tracking points p_ij is calculated and defined as the first distance a, the distance between the two corresponding target tracking points p'_ij is calculated and defined as the second distance b, and the median of the distance ratios |b|/|a| between the absolute value |b| of b and the absolute value |a| of a is taken as the scaling size scale. Using the formulas x' = x + Δx + width × (1 - scale)/2 and y' = y + Δy + height × (1 - scale)/2, the coordinates (x', y') of the center point of the second target frame are calculated, where (x, y) represents the coordinates of the center point of the first target frame and width and height are the width and height of the first target frame; similarly, the width of the second target frame is width × scale and its height is height × scale. The position of the second target frame in the next image frame of the current image frame is then determined from the coordinates of its center point, its width, and its height.
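A minimal sketch of steps S2501 to S2504 under the formulas above, assuming a box represented by its reference point (x, y) plus width and height; the helper name and array layout are illustrative.

```python
# Update one target frame from its tracked points.
import numpy as np

def update_box(box, pts, next_pts):
    """box = (x, y, w, h); pts and next_pts are (N, 2) arrays of the
    points p_ij and their tracked counterparts p'_ij (N >= 2)."""
    x, y, w, h = box
    disp = next_pts - pts                     # per-point (dx_ij, dy_ij)
    dx_med, dy_med = np.median(disp, axis=0)  # first and second medians
    # pairwise point distances before (a) and after (b) tracking
    i, j = np.triu_indices(len(pts), k=1)
    a = np.linalg.norm(pts[i] - pts[j], axis=1)
    b = np.linalg.norm(next_pts[i] - next_pts[j], axis=1)
    keep = a > 0
    scale = float(np.median(b[keep] / a[keep]))   # scaling size
    x2 = x + dx_med + w * (1.0 - scale) / 2.0
    y2 = y + dy_med + h * (1.0 - scale) / 2.0
    return (x2, y2, w * scale, h * scale)
```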
It can be seen that, in the embodiment of the present application, a first target frame of at least one target object in a current image frame of a target video is acquired; overlapping areas of the first target frame are marked to obtain the occlusion condition of the first target frame; an optical flow tracking point set of the current image frame is acquired according to the occlusion condition of the first target frame and the category of the target object; sparse optical flow calculation is performed on the optical flow tracking point set to obtain, for each optical flow tracking point in the set, a corresponding target tracking point in the next image frame of the current image frame; and a second target frame of the target object in the next image frame is acquired according to the target tracking points. In this way, the optical flow tracking points of each object are selected according to the occlusion condition of each target object in the current image frame, and all selected optical flow tracking points are then placed in one set for a single sparse optical flow calculation, which helps improve multi-target tracking efficiency; meanwhile, because the optical flow tracking points are selected according to the occlusion condition of the target object, background pixels are prevented from being selected as tracking points, which helps improve the multi-target tracking effect.
Referring to fig. 5, fig. 5 is a schematic flowchart of another target tracking method provided in the embodiment of the present application, and as shown in fig. 5, the method includes steps S51-S56:
s51, acquiring a first target frame of at least one target object in the current image frame of the target video;
s52, marking the overlapping area of the first target frame to obtain the occlusion condition of the first target frame;
s53, selecting the optical flow tracking points from the preset area of each first target frame according to the occlusion condition of each first target frame and the category of the target object corresponding to each first target frame;
In the specific embodiment of the application, different optical flow tracking point selection areas are preset for three classes of target objects. For a first target frame belonging to the first class of target objects, if the first target frame is not occluded, the first target frame is reduced, centered on its center, by a first preset proportion to obtain a first selection window, and the optical flow tracking points are selected in the first selection window; if the first selection window is occluded, the optical flow tracking points are selected in the non-occluded area of the first selection window. For example, if the first target frame is a face detection frame and the face detection frame is not occluded, the face detection frame is reduced, centered on its center, by a first preset proportion (for example, to 50%-80% of its width and height) to obtain the first selection window shown in fig. 6a, and 5 × 5 feature points covering the eyes, nose, and mouth are selected in the first selection window as optical flow tracking points. As shown in fig. 6b, if the face detection frame is occluded and the occluded area is the lower-right part of the first selection window obtained as above, the feature points in the non-occluded area of the first selection window are selected as optical flow tracking points; in short, the non-occluded points among the 5 × 5 feature points are selected as optical flow tracking points.
For a first target frame belonging to the second class of target objects, if the first target frame is not occluded, the optical flow tracking points are selected at a first preset height and a second preset height of the first target frame; if the first preset height or the second preset height is occluded, the optical flow tracking points are selected in the non-occluded areas at the first preset height and the second preset height. As shown in fig. 7a, if the first target frame is a human body detection frame and the detection frame is not occluded, 1/4 of the height of the human body detection frame (near the head) is taken as the first preset height, and 5 × 3 feature points centered on the midpoint of the first preset height are selected as optical flow tracking points; meanwhile, 1/2 of the height of the human body detection frame (near the chest and abdomen) is taken as the second preset height, and 5 × 5 feature points centered on the midpoint of the second preset height are selected as optical flow tracking points. As shown in fig. 7b, if the human body detection frame is occluded and the occluded area covers most of the left feature points at the second preset height, 5 × 3 feature points are still selected as optical flow tracking points at the first preset height, while the non-occluded points among the 5 × 5 feature points are selected at the second preset height; similarly, if the first preset height is also occluded, the non-occluded points among the 5 × 3 feature points at the first preset height are selected as optical flow tracking points.
For a first target frame belonging to the third class of target objects, if the first target frame is not occluded, the first target frame is reduced, centered on its center, by a second preset proportion to obtain a second selection window, and the optical flow tracking points are selected in the second selection window; if the second selection window is occluded, the optical flow tracking points are selected in the non-occluded area of the second selection window. Taking a vehicle detection frame as an example, if the vehicle detection frame is not occluded, the vehicle detection frame is reduced, centered on its center, by the second preset proportion (for example, to 80% of its width and height) to obtain the second selection window shown in fig. 8a, and 5 × 5 feature points are selected in the second selection window as optical flow tracking points; as shown in fig. 8b, if the vehicle detection frame is occluded and the occluded area is part of the second selection window obtained as above, the feature points in the non-occluded area of the second selection window are likewise selected as optical flow tracking points. A sketch covering all three classes is given below.
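A minimal sketch of the three-class selection just described, assuming boxes given as (x1, y1, x2, y2) and occluded areas given as a list of rectangles in the same form; the grid sizes and shrink ratios follow the examples in the text, while the band height used at the two preset heights is an assumption.

```python
# Select optical flow tracking points per target category and occlusion.
import numpy as np

def grid_points(cx, cy, w, h, nx, ny):
    """nx-by-ny grid of points centered on (cx, cy) in a w-by-h window."""
    xs = np.linspace(cx - w / 2, cx + w / 2, nx)
    ys = np.linspace(cy - h / 2, cy + h / 2, ny)
    return [(float(px), float(py)) for py in ys for px in xs]

def drop_occluded(points, occluded_regions):
    def inside(p, r):
        return r[0] <= p[0] <= r[2] and r[1] <= p[1] <= r[3]
    return [p for p in points if not any(inside(p, r) for r in occluded_regions)]

def select_tracking_points(box, category, occluded_regions):
    """box = (x1, y1, x2, y2); category in {'face', 'body', 'vehicle'}."""
    x1, y1, x2, y2 = box
    w, h = x2 - x1, y2 - y1
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    if category == 'face':       # first class: shrunken 5 x 5 window
        pts = grid_points(cx, cy, 0.6 * w, 0.6 * h, 5, 5)
    elif category == 'body':     # second class: bands at h/4 and h/2
        pts = (grid_points(cx, y1 + h / 4, 0.8 * w, 0.1 * h, 5, 3)
               + grid_points(cx, y1 + h / 2, 0.8 * w, 0.1 * h, 5, 5))
    else:                        # third class (e.g. vehicle): 80% window
        pts = grid_points(cx, cy, 0.8 * w, 0.8 * h, 5, 5)
    return drop_occluded(pts, occluded_regions)
```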
S54, adding the optical flow tracking points selected from each first target frame into the same set to obtain the optical flow tracking point set;
In the embodiment of the present application, the optical flow tracking points selected for each first target frame in step S53 are added to the same set, namely the optical flow tracking point set, which makes the subsequent single sparse optical flow calculation convenient.
S55, performing sparse optical flow calculation on the optical flow tracking point set to obtain a target tracking point corresponding to each optical flow tracking point in the optical flow tracking point set in a next image frame of the current image frame;
and S56, acquiring a second target frame of the target object in the next image frame according to the target tracking point.
Some steps in the embodiment shown in fig. 5 have already been described in the embodiment shown in fig. 2 and achieve the same or similar beneficial effects; they are not repeated here.
Based on the description of the above target tracking method embodiments, the present application also provides a target tracking apparatus, which may be a computer program (including program code) running in a terminal. The target tracking device may perform the methods illustrated in fig. 2 or fig. 5. Referring to fig. 9, the target tracking apparatus includes:
the target detection module 91 is configured to acquire a first target frame of at least one target object in a current image frame of a target video;
the occlusion detection module 92 is configured to mark an overlapping area of the first target frame to obtain an occlusion condition of the first target frame;
a tracking point set obtaining module 93, configured to obtain an optical flow tracking point set of the current image frame according to the occlusion condition of the first target frame and the category of the target object;
an optical flow calculation module 94, configured to perform sparse optical flow calculation on the optical flow tracking point set to obtain a target tracking point corresponding to each optical flow tracking point in the optical flow tracking point set in a next image frame of the current image frame;
a target position determining module 95, configured to obtain a second target frame of the target object in the next image frame according to the target tracking point.
In one possible example, the tracking point set obtaining module 93 is specifically configured to, in terms of obtaining the optical flow tracking point set of the current image frame according to the occlusion condition of the first target frame and the category of the target object:
selecting the optical flow tracking points from a preset area of each first target frame according to the occlusion condition of each first target frame and the category of the target object corresponding to each first target frame;
and adding the optical flow tracking points selected from each first target frame into the same set to obtain the optical flow tracking point set.
In a possible example, the tracking point set obtaining module 93 is specifically configured to, in terms of selecting the optical flow tracking points from a preset area of each first target frame according to the occlusion condition of each first target frame and the category of the target object corresponding to each first target frame:
for the first target frame belonging to a first class of target objects, if the first target frame is not occluded, reducing the first target frame, centered on its center, by a first preset proportion to obtain a first selection window, and selecting the optical flow tracking points in the first selection window; if the first selection window is occluded, selecting the optical flow tracking points in the non-occluded area of the first selection window;
for the first target frame belonging to a second class of target objects, if the first target frame is not occluded, selecting the optical flow tracking points at a first preset height and a second preset height of the first target frame; if the first preset height or the second preset height is occluded, selecting the optical flow tracking points in the non-occluded areas at the first preset height and the second preset height;
for the first target frame belonging to a third class of target objects, if the first target frame is not occluded, reducing the first target frame, centered on its center, by a second preset proportion to obtain a second selection window, and selecting the optical flow tracking points in the second selection window; and if the second selection window is occluded, selecting the optical flow tracking points in the non-occluded area of the second selection window.
In one possible example, the optical flow calculation module 94 is specifically configured to perform sparse optical flow calculation on the set of optical flow tracking points to obtain a corresponding target tracking point of each optical flow tracking point in the set of optical flow tracking points in an image frame next to the current image frame:
converting the current image frame into a grayscale image, and constructing an image pyramid from the grayscale image of the current image frame; the bottom layer of the image pyramid is the current image frame;
acquiring coordinates of each optical flow tracking point in the optical flow tracking point set on each layer of image of an image pyramid;
calculating an optical flow of each optical flow tracking point in the set of optical flow tracking points;
and obtaining the target tracking point through the coordinates of each optical flow tracking point in the optical flow tracking point set on the current image frame and the optical flow.
In one possible example, the occlusion detection module 92 is specifically configured to, in terms of marking the overlapping area of the first target frame:
dividing the first target frame into a preset number of target areas;
obtaining the occlusion score of each target area in the preset number of target areas;
labeling the target region with the occlusion score less than a threshold as an overlapping region.
In one possible example, the target position determination module 95, in acquiring a second target frame of the target object in the next image frame according to the target tracking point, includes:
calculating the displacement of each optical flow tracking point in the X direction and the Y direction according to the coordinate of each optical flow tracking point in the optical flow tracking point set and the coordinate of the target tracking point corresponding to each optical flow tracking point;
calculating a first distance between every two optical flow tracking points and taking an absolute value, calculating a second distance between two target tracking points corresponding to the two optical flow tracking points and taking an absolute value, obtaining a distance ratio between the first distance after the absolute value is taken and the second distance after the absolute value is taken, and determining a median of the distance ratios as a scaling size;
selecting a first median value of displacement of each optical flow tracking point in the optical flow tracking point set in the X direction and a second median value of displacement of each optical flow tracking point in the Y direction;
and calculating to obtain the second target frame according to the coordinate, the width and the height of the central point of the first target frame, the scaling size, the first median and the second median.
According to an embodiment of the present application, the units of the target tracking apparatus shown in fig. 9 may be separately or entirely combined into one or several other units, or one (or more) of them may be further split into multiple functionally smaller units, all without affecting the technical effects of the embodiments of the present invention. The units are divided on the basis of logical functions; in practical applications, the function of one unit may be realized by multiple units, or the functions of multiple units may be realized by one unit. In other embodiments of the present invention, the target tracking apparatus may likewise include other units; in practical applications, these functions may also be realized with the assistance of other units and through the cooperation of multiple units.
According to another embodiment of the present application, the apparatus shown in fig. 9 may be constructed by running a computer program (including program code) capable of executing the steps of the methods shown in fig. 2 or fig. 5 on a general-purpose computing device, such as a computer comprising processing elements (e.g., a central processing unit (CPU)) and storage elements (e.g., a random access memory (RAM) and a read-only memory (ROM)), thereby implementing the above-described method of the embodiment of the present invention. The computer program may be recorded on, for example, a computer-readable recording medium, and loaded into and executed by the above computing device via the computer-readable recording medium.
Based on the description of the method embodiment and the device embodiment, the embodiment of the invention also provides electronic equipment. Referring to fig. 10, the electronic device includes at least a processor 1001, an input device 1002, an output device 1003, and a computer storage medium 1004. The processor 1001, the input device 1002, the output device 1003, and the computer storage medium 1004 in the electronic device may be connected by a bus or other means.
A computer storage medium 1004 may reside in the memory of the electronic device; the computer storage medium 1004 is used to store a computer program comprising program instructions, and the processor 1001 is used to execute the program instructions stored in the computer storage medium 1004. The processor 1001 (or CPU) is the computing and control core of the electronic device; it is adapted to implement one or more instructions and, in particular, to load and execute the one or more instructions so as to realize the corresponding method flow or function.
In one embodiment, the processor 1001 of the electronic device provided in the embodiment of the present application may be configured to perform a series of target tracking processes, including:
acquiring a first target frame of at least one target object in a current image frame of a target video;
marking an overlapping area of the first target frame to obtain the occlusion condition of the first target frame;
acquiring an optical flow tracking point set of the current image frame according to the occlusion condition of the first target frame and the category of the target object;
performing sparse optical flow calculation on the optical flow tracking point set to obtain a target tracking point corresponding to each optical flow tracking point in the optical flow tracking point set in a next image frame of the current image frame;
and acquiring a second target frame of the target object in the next image frame according to the target tracking point.
In one embodiment, the processor 1001 performs the acquiring the set of optical flow tracking points of the current image frame according to the occlusion condition of the first target frame and the category of the target object, including:
selecting the optical flow tracking points from a preset area of each first target frame according to the occlusion condition of each first target frame and the category of the target object corresponding to each first target frame;
and adding the optical flow tracking points selected from each first target frame into the same set to obtain the optical flow tracking point set.
In one embodiment, the processor 1001 selects the optical flow tracking points from the preset area of each first target frame according to the occlusion condition of each first target frame and the category of the target object corresponding to each first target frame, including:
for the first target frame belonging to a first class of target objects, if the first target frame is not occluded, reducing the first target frame, centered on its center, by a first preset proportion to obtain a first selection window, and selecting the optical flow tracking points in the first selection window; if the first selection window is occluded, selecting the optical flow tracking points in the non-occluded area of the first selection window;
for the first target frame belonging to a second class of target objects, if the first target frame is not occluded, selecting the optical flow tracking points at a first preset height and a second preset height of the first target frame; if the first preset height or the second preset height is occluded, selecting the optical flow tracking points in the non-occluded areas at the first preset height and the second preset height;
for the first target frame belonging to a third class of target objects, if the first target frame is not occluded, reducing the first target frame, centered on its center, by a second preset proportion to obtain a second selection window, and selecting the optical flow tracking points in the second selection window; and if the second selection window is occluded, selecting the optical flow tracking points in the non-occluded area of the second selection window.
In one embodiment, the performing the sparse optical flow calculation on the set of optical flow tracking points by the processor 1001 to obtain a corresponding target tracking point of each optical flow tracking point in the set of optical flow tracking points in a next image frame of the current image frame includes:
converting the current image frame into a grayscale image, and constructing an image pyramid from the grayscale image of the current image frame; the bottom layer of the image pyramid is the current image frame;
acquiring coordinates of each optical flow tracking point in the optical flow tracking point set on each layer of image of an image pyramid;
calculating an optical flow of each optical flow tracking point in the set of optical flow tracking points;
and obtaining the target tracking point through the coordinates of each optical flow tracking point in the optical flow tracking point set on the current image frame and the optical flow.
In one embodiment, the processor 1001 performs the overlapping area marking on the first target box, including:
dividing the first target frame into a preset number of target areas;
obtaining the occlusion score of each target area in the preset number of target areas;
labeling the target region with the occlusion score less than a threshold as an overlapping region.
In one embodiment, the processor 1001 performs the acquiring a second target frame of the target object in the next image frame according to the target tracking point, including:
calculating the displacement of each optical flow tracking point in the X direction and the Y direction according to the coordinate of each optical flow tracking point in the optical flow tracking point set and the coordinate of the target tracking point corresponding to each optical flow tracking point;
calculating a first distance between every two optical flow tracking points and taking an absolute value, calculating a second distance between two target tracking points corresponding to the two optical flow tracking points and taking an absolute value, obtaining a distance ratio between the first distance after the absolute value is taken and the second distance after the absolute value is taken, and determining a median of the distance ratios as a scaling size;
selecting a first median value of displacement of each optical flow tracking point in the optical flow tracking point set in the X direction and a second median value of displacement of each optical flow tracking point in the Y direction;
and calculating to obtain the second target frame according to the coordinate, the width and the height of the central point of the first target frame, the scaling size, the first median and the second median.
Illustratively, the electronic device may be a computer, a notebook computer, a tablet computer, a palmtop computer, a server, or the like. The electronic device may include, but is not limited to, the processor 1001, the input device 1002, the output device 1003, and the computer storage medium 1004. It will be appreciated by those skilled in the art that the schematic diagram is merely an example of an electronic device and does not limit the electronic device, which may include more or fewer components than shown, combine certain components, or use different components.
It should be noted that, since the processor 1001 of the electronic device implements the steps of the target tracking method by executing the computer program, all the embodiments of the target tracking method are applicable to the electronic device and achieve the same or similar beneficial effects.
An embodiment of the present application further provides a computer storage medium (Memory), which is a storage device in the electronic device used to store programs and data. It is understood that the computer storage medium here may include the storage medium built into the terminal, and may also include an extended storage medium supported by the terminal. The computer storage medium provides a storage space that stores the operating system of the terminal. Also stored in this storage space are one or more instructions, which may be one or more computer programs (including program code), suitable for loading and execution by the processor 1001. The computer storage medium may be a high-speed RAM memory, or a non-volatile memory such as at least one disk memory; alternatively, it may be at least one computer storage medium located remotely from the processor 1001. In one embodiment, the one or more instructions stored in the computer storage medium may be loaded and executed by the processor 1001 to perform the corresponding steps of the target tracking method described above.
It should be noted that, since the computer program on the computer storage medium implements the steps of the target tracking method when executed by the processor, all the embodiments or implementations of the target tracking method described above are applicable to the computer storage medium and achieve the same or similar beneficial effects.
The embodiments of the present application have been described in detail above in order to illustrate the principles and implementations of the present application; the above description of the embodiments is provided only to help understand the method and the core concept of the present application. Meanwhile, a person skilled in the art may, following the idea of the present application, make changes to the specific embodiments and the application scope. In summary, the content of this specification should not be construed as limiting the present application.

Claims (10)

1. A method of target tracking, the method comprising:
acquiring a first target frame of at least one target object in a current image frame of a target video;
marking an overlapping area of the first target frame to obtain the shielding condition of the first target frame;
acquiring an optical flow tracking point set of the current image frame according to the shielding condition of the first target frame and the category of the target object;
performing sparse optical flow calculation on the optical flow tracking point set to obtain a target tracking point corresponding to each optical flow tracking point in the optical flow tracking point set in a next image frame of the current image frame;
and acquiring a second target frame of the target object in the next image frame according to the target tracking point.
2. The method of claim 1, wherein the obtaining the set of optical flow tracking points for the current image frame according to the occlusion condition of the first target frame and the category of the target object comprises:
selecting the optical flow tracking point from a preset area of each first target frame according to the shielding condition of each first target frame and the category of the target object corresponding to each first target frame;
and adding the optical flow tracking points selected from each first target frame into the same set to obtain the optical flow tracking point set.
3. The method according to claim 2, wherein said selecting the optical flow tracking points from the preset area of each of the first target frames according to the occlusion condition of each of the first target frames and the category of the target object corresponding to each of the first target frames comprises:
for a first target frame belonging to a first-type target object, if the first target frame is not shielded, shrinking the first target frame about its center by a first preset proportion to obtain a first selection window, and selecting the optical flow tracking points within the first selection window; if the first selection window is shielded, selecting the optical flow tracking points in the unshielded area of the first selection window;
for a first target frame belonging to a second-type target object, if the first target frame is not shielded, selecting the optical flow tracking points at a first preset height and a second preset height of the first target frame; if the first preset height or the second preset height is shielded, selecting the optical flow tracking points in the unshielded areas at the first preset height and the second preset height;
for a first target frame belonging to a third-type target object, if the first target frame is not shielded, shrinking the first target frame about its center by a second preset proportion to obtain a second selection window, and selecting the optical flow tracking points within the second selection window; and if the second selection window is shielded, selecting the optical flow tracking points in the unshielded area of the second selection window.
4. The method of claim 1, wherein the performing sparse optical flow computations on the set of optical flow tracking points to obtain a corresponding target tracking point in a next image frame of the current image frame for each optical flow tracking point in the set of optical flow tracking points comprises:
converting the current image frame into a grayscale image, and constructing an image pyramid from the grayscale image of the current image frame, wherein the bottom layer of the image pyramid is the current image frame;
acquiring the coordinates of each optical flow tracking point in the optical flow tracking point set on each layer of the image pyramid;
calculating the optical flow of each optical flow tracking point in the optical flow tracking point set;
and obtaining the target tracking points from the coordinates of each optical flow tracking point in the optical flow tracking point set on the current image frame and the calculated optical flow.
5. The method of claim 1, wherein the marking the overlapping area of the first target frame comprises:
dividing the first target frame into a preset number of target areas;
obtaining the occlusion score of each target area in the preset number of target areas;
and labeling each target area whose occlusion score is less than a threshold as an overlapping area.
6. The method of claim 1, wherein said obtaining a second target frame of the target object in the next image frame according to the target tracking point comprises:
calculating the displacement of each optical flow tracking point in the X direction and the Y direction according to the coordinates of each optical flow tracking point in the optical flow tracking point set and the coordinates of the target tracking point corresponding to each optical flow tracking point;
calculating a first distance between every two optical flow tracking points and taking its absolute value, calculating a second distance between the two corresponding target tracking points and taking its absolute value, obtaining the distance ratio of the first distance to the second distance after the absolute values are taken, and determining the median of the distance ratios as the scaling size;
selecting a first median of the displacements of the optical flow tracking points in the optical flow tracking point set in the X direction and a second median of the displacements in the Y direction;
and calculating the second target frame from the center point coordinates, the width and the height of the first target frame, the scaling size, the first median and the second median.
7. A multi-target tracking apparatus, the apparatus comprising:
the target detection module is used for acquiring a first target frame of at least one target object in a current image frame of a target video;
the occlusion detection module is used for marking the overlapping area of the first target frame to obtain the occlusion condition of the first target frame;
a tracking point set acquisition module, configured to acquire an optical flow tracking point set of the current image frame according to the occlusion condition of the first target frame and the category of the target object;
the optical flow calculation module is used for performing sparse optical flow calculation on the optical flow tracking point set to obtain a target tracking point corresponding to each optical flow tracking point in the optical flow tracking point set in a next image frame of the current image frame;
and the target position determining module is used for acquiring a second target frame of the target object in the next image frame according to the target tracking point.
8. The apparatus according to claim 7, wherein the tracking point set obtaining module, in obtaining the set of optical flow tracking points of the current image frame according to the occlusion condition of the first target frame and the category of the target object, is specifically configured to:
selecting the optical flow tracking point from a preset area of each first target frame according to the shielding condition of each first target frame and the category of the target object corresponding to each first target frame;
and adding the optical flow tracking points selected from each first target frame into the same set to obtain the optical flow tracking point set.
9. An electronic device, comprising an input device and an output device, and further comprising:
a processor adapted to implement one or more instructions; and
a computer storage medium having stored thereon one or more instructions adapted to be loaded by the processor and to perform the method of any of claims 1-6.
10. A computer storage medium having stored thereon one or more instructions adapted to be loaded by a processor and to perform the method of any of claims 1-6.
CN201911374132.XA 2019-12-26 2019-12-26 Target tracking method, device, electronic equipment and storage medium Active CN111047626B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911374132.XA CN111047626B (en) 2019-12-26 2019-12-26 Target tracking method, device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911374132.XA CN111047626B (en) 2019-12-26 2019-12-26 Target tracking method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111047626A true CN111047626A (en) 2020-04-21
CN111047626B CN111047626B (en) 2024-03-22

Family

ID=70240454

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911374132.XA Active CN111047626B (en) 2019-12-26 2019-12-26 Target tracking method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111047626B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104599286A (en) * 2013-10-31 2015-05-06 展讯通信(天津)有限公司 Optical flow based feature tracking method and device
US20170178357A1 (en) * 2015-12-21 2017-06-22 Zerotech (Shenzhen) Intelligence Robot Co., Ltd Moving object tracking apparatus, method and unmanned aerial vehicle using the same
CN109558815A (en) * 2018-11-16 2019-04-02 恒安嘉新(北京)科技股份公司 A kind of detection of real time multi-human face and tracking
CN109785363A (en) * 2018-12-29 2019-05-21 中国电子科技集团公司第五十二研究所 A kind of unmanned plane video motion Small object real-time detection and tracking

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113660443A (en) * 2020-05-12 2021-11-16 武汉Tcl集团工业研究院有限公司 Video frame insertion method, terminal and storage medium
EP4207066A4 (en) * 2020-09-10 2024-06-26 Huawei Technologies Co., Ltd. Object tracking method and apparatus, device, and a computer-readable storage medium
WO2022052853A1 (en) * 2020-09-10 2022-03-17 华为技术有限公司 Object tracking method and apparatus, device, and a computer-readable storage medium
CN112784680A (en) * 2020-12-23 2021-05-11 中国人民大学 Method and system for locking dense contacts in crowded place
CN112784680B (en) * 2020-12-23 2024-02-02 中国人民大学 Method and system for locking dense contactors in people stream dense places
CN112686204B (en) * 2021-01-12 2022-09-02 昆明理工大学 Video flow measurement method and device based on sparse pixel point tracking
CN112686204A (en) * 2021-01-12 2021-04-20 昆明理工大学 Video flow measurement method and device based on sparse pixel point tracking
CN112699854A (en) * 2021-03-22 2021-04-23 亮风台(上海)信息科技有限公司 Method and device for identifying stopped vehicle
CN112699854B (en) * 2021-03-22 2021-07-20 亮风台(上海)信息科技有限公司 Method and device for identifying stopped vehicle
CN113160149A (en) * 2021-03-31 2021-07-23 杭州海康威视数字技术股份有限公司 Target display method and device, electronic equipment and endoscope system
CN113160149B (en) * 2021-03-31 2024-03-01 杭州海康威视数字技术股份有限公司 Target display method and device, electronic equipment and endoscope system
CN113284167B (en) * 2021-05-28 2023-03-07 深圳数联天下智能科技有限公司 Face tracking detection method, device, equipment and medium
CN113284167A (en) * 2021-05-28 2021-08-20 深圳数联天下智能科技有限公司 Face tracking detection method, device, equipment and medium
CN113516093A (en) * 2021-07-27 2021-10-19 浙江大华技术股份有限公司 Marking method and device of identification information, storage medium and electronic device
CN115049954A (en) * 2022-05-09 2022-09-13 北京百度网讯科技有限公司 Target identification method, device, electronic equipment and medium
CN115049954B (en) * 2022-05-09 2023-09-22 北京百度网讯科技有限公司 Target identification method, device, electronic equipment and medium

Also Published As

Publication number Publication date
CN111047626B (en) 2024-03-22

Similar Documents

Publication Publication Date Title
CN111047626A (en) Target tracking method and device, electronic equipment and storage medium
CN110427905B (en) Pedestrian tracking method, device and terminal
US10198689B2 (en) Method for object detection in digital image and video using spiking neural networks
CN110443210B (en) Pedestrian tracking method and device and terminal
CN107358149B (en) Human body posture detection method and device
CN108062525B (en) Deep learning hand detection method based on hand region prediction
US11205276B2 (en) Object tracking method, object tracking device, electronic device and storage medium
CN108986152B (en) Foreign matter detection method and device based on difference image
CN110363817B (en) Target pose estimation method, electronic device, and medium
CN111860398B (en) Remote sensing image target detection method and system and terminal equipment
KR101409810B1 (en) Real-time object tracking method in moving camera by using particle filter
CN109543641A (en) A kind of multiple target De-weight method, terminal device and the storage medium of real-time video
CN111209774A (en) Target behavior recognition and display method, device, equipment and readable medium
Tang et al. Multiple-kernel based vehicle tracking using 3D deformable model and camera self-calibration
Gu et al. Embedded and real-time vehicle detection system for challenging on-road scenes
Sharma Feature-based efficient vehicle tracking for a traffic surveillance system
Zhang et al. Real-time lane detection by using biologically inspired attention mechanism to learn contextual information
Haggui et al. Centroid human tracking via oriented detection in overhead fisheye sequences
Li et al. Occluded pedestrian detection through bi-center prediction in anchor-free network
CN112819889B (en) Method and device for determining position information, storage medium and electronic device
CN113689475A (en) Cross-border head trajectory tracking method, equipment and storage medium
Gad et al. Crowd density estimation using multiple features categories and multiple regression models
CN115953744A (en) Vehicle identification tracking method based on deep learning
Truong et al. Single object tracking using particle filter framework and saliency-based weighted color histogram
EP3516592A1 (en) Method for object detection in digital image and video using spiking neural networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant