CN111476065A - Target tracking method and device, computer equipment and storage medium - Google Patents

Target tracking method and device, computer equipment and storage medium

Info

Publication number
CN111476065A
CN111476065A CN201910065114.7A
Authority
CN
China
Prior art keywords
target
tracking
pixel value
tracked
tracking area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910065114.7A
Other languages
Chinese (zh)
Inventor
何军林
刘洛麒
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Qihoo Technology Co Ltd
Original Assignee
Beijing Qihoo Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Qihoo Technology Co Ltd filed Critical Beijing Qihoo Technology Co Ltd
Priority to CN201910065114.7A priority Critical patent/CN111476065A/en
Publication of CN111476065A publication Critical patent/CN111476065A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention discloses a target tracking method and device, computer equipment and a storage medium. The target tracking method comprises the following steps: acquiring a tracking area of a target to be tracked; reducing the pixel value in the tracking area to a preset pixel value, and extracting the characteristics of the target to be tracked from the reduced tracking area; and tracking the characteristics through a preset target tracking algorithm. Because the characteristics are extracted only after the pixel value in the tracking area has been reduced to the preset pixel value, interference from other factors can be avoided, the characteristics of the target to be tracked can be extracted accurately and quickly, and the tracking speed is thereby improved.

Description

Target tracking method and device, computer equipment and storage medium
Technical Field
The embodiment of the invention relates to the field of video detection, in particular to a target tracking method, a target tracking device, computer equipment and a storage medium.
Background
In recent years, target tracking algorithms based on correlation filtering have become popular due to characteristics such as real-time tracking, relatively high precision, and low hardware requirements. Correlation-filtering-based target tracking algorithms, such as KCF, have the characteristic of real-time tracking.
However, in some scenarios, the tracking speed of a correlation-filtering-based target tracking algorithm is low and cannot meet actual requirements, which further limits the application range of the target tracking algorithm.
Disclosure of Invention
The embodiment of the invention provides a target tracking method, a target tracking device, computer equipment and a storage medium.
In order to solve the above technical problem, the embodiment of the present invention adopts a technical solution that: provided is a target tracking method, including the steps of:
acquiring a tracking area of a target to be tracked;
reducing the pixel value in the tracking area to a preset pixel value, and extracting the characteristics of the target to be tracked from the reduced tracking area;
and tracking the characteristics by a preset target tracking algorithm.
Optionally, the acquiring a tracking area of the target to be tracked includes:
extracting video frames from a video needing target tracking processing;
acquiring a frame of the target to be tracked from the video frame;
and acquiring the movement speed of the target to be tracked in the video frame, and determining the tracking area of the target to be tracked according to the movement speed.
Optionally, the determining a tracking area of the target to be tracked according to the motion speed includes:
determining a speed range in which the movement speed of the target to be tracked conforms;
searching for the magnification factor corresponding to the speed range in a preset information table;
and taking the target to be tracked as a center, amplifying the boundary according to the amplification factor, and taking the amplified region as the tracking region.
Optionally, the reducing the pixel values in the tracking area to the preset pixel values includes:
acquiring the current pixel value of the tracking area;
judging whether the pixel value is larger than a preset pixel value or not;
and when the pixel value is larger than a preset pixel value, reducing the pixel value of the tracking area to the preset pixel value.
Optionally, the extracting the feature of the target to be tracked from the reduced tracking area includes:
acquiring a horizontal gradient image and a vertical gradient image of the tracking area;
obtaining a gradient amplitude image matrix according to the horizontal gradient image and the vertical gradient image, and calculating a gradient amplitude integral graph of each image in the gradient amplitude image matrix;
and performing addition operation on the gradient amplitude integral image of each image to obtain the directional gradient histogram characteristics of the target to be tracked.
Optionally, the tracking the feature by using a preset target tracking algorithm includes:
tracking in the first video frame group by using the characteristic as a tracking target through a correlation filtering algorithm;
acquiring the characteristics of the target to be tracked from the first frame in the second video frame group;
tracking in the second set of video frames with the acquired feature, wherein the first set of video frames is contiguous with the second set of video frames.
Optionally, after determining whether the pixel value is greater than a preset pixel value, the method further includes:
and when the pixel value is smaller than a preset pixel value, reserving the pixel value of the tracking area.
In order to solve the above technical problem, an embodiment of the present invention further provides a target tracking apparatus, including:
the acquisition module is used for acquiring a tracking area of a target to be tracked;
the processing module is used for reducing the pixel value in the tracking area to a preset pixel value and extracting the characteristics of the target to be tracked from the reduced tracking area;
and the execution module is used for tracking the characteristics through a preset target tracking algorithm.
Optionally, the obtaining module includes:
the first acquisition submodule is used for extracting a video frame from a video needing target tracking processing;
the second obtaining submodule is used for obtaining a frame of the target to be tracked from the video frame;
and the third acquisition submodule is used for acquiring the movement speed of the target to be tracked in the video frame and determining the tracking area of the target to be tracked according to the movement speed.
Optionally, the third obtaining sub-module includes:
the first processing submodule is used for determining a speed range which is met by the movement speed of the target to be tracked;
the second processing submodule is used for searching the amplification factor corresponding to the speed range in a preset information table;
and the first execution submodule is used for amplifying the boundary according to the amplification factor by taking the target to be tracked as a center, and taking the amplified region as the tracking region.
Optionally, the processing module includes:
the fourth obtaining submodule is used for obtaining the current pixel value of the tracking area;
the third processing submodule is used for judging whether the pixel value is larger than a preset pixel value or not;
and the second execution submodule is used for reducing the pixel value of the tracking area to the preset pixel value when the pixel value is larger than the preset pixel value.
Optionally, the processing module includes:
the fifth acquisition submodule is used for acquiring a horizontal gradient image and a vertical gradient image of the tracking area;
the fourth processing submodule is used for obtaining a gradient amplitude image matrix according to the horizontal gradient image and the vertical gradient image and calculating a gradient amplitude integral image of each image in the gradient amplitude image matrix;
and the third execution submodule is used for performing addition operation on the gradient amplitude integral image of each image to obtain the directional gradient histogram characteristics of the target to be tracked.
Optionally, the execution module includes:
the fifth processing submodule is used for tracking in the first video frame group by using the characteristics as a tracking target through a correlation filtering algorithm;
a sixth obtaining submodule, configured to obtain a feature of the target to be tracked from a first frame in the second video frame group;
a fourth execution sub-module for tracking in the second group of video frames with the obtained features, wherein the first group of video frames is consecutive to the second group of video frames.
Optionally, further comprising:
and the sixth processing submodule is used for reserving the pixel value of the tracking area when the pixel value is smaller than a preset pixel value.
In order to solve the above technical problem, an embodiment of the present invention further provides a computer device, including a memory and a processor, where the memory stores computer-readable instructions, and the computer-readable instructions, when executed by the processor, cause the processor to execute the steps of the target tracking method.
To solve the above technical problem, an embodiment of the present invention further provides a storage medium storing computer-readable instructions, which, when executed by one or more processors, cause the one or more processors to execute the steps of the target tracking method.
The embodiment of the invention has the following beneficial effects: because the characteristics are extracted only after the pixel value in the tracking area has been reduced to the preset pixel value, interference from other factors can be avoided, the characteristics of the target to be tracked can be extracted accurately and quickly, and the tracking speed is thereby improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic basic flow chart of a target tracking method according to an embodiment of the present invention;
fig. 2 is a schematic basic flowchart of a method for acquiring a tracking area according to an embodiment of the present invention;
fig. 3 is a schematic basic flowchart of a method for obtaining a tracking area of a target to be tracked according to a magnification of a boundary determined by a movement speed according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a basic flow chart of a method for reducing pixel values in a tracking area to predetermined pixel values according to an embodiment of the present invention;
fig. 5 is a schematic basic flowchart of a method for extracting features of an object to be tracked from a reduced tracking area according to an embodiment of the present invention;
fig. 6 is a schematic basic flowchart of a method for tracking features by a preset target tracking algorithm according to an embodiment of the present invention;
fig. 7 is a block diagram of a basic structure of a target tracking apparatus according to an embodiment of the present invention;
fig. 8 is a block diagram of a basic structure of a computer device according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention.
In some of the flows described in the present specification and claims and in the above figures, a number of operations are included that occur in a particular order, but it should be clearly understood that these operations may be performed out of order or in parallel as they occur herein, with the order of the operations being indicated as 101, 102, etc. merely to distinguish between the various operations, and the order of the operations by themselves does not represent any order of performance. Additionally, the flows may include more or fewer operations, and the operations may be performed sequentially or in parallel. It should be noted that, the descriptions of "first", "second", etc. in this document are used for distinguishing different messages, devices, modules, etc., and do not represent a sequential order, nor limit the types of "first" and "second" to be different.
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Examples
As will be appreciated by those skilled in the art, "terminal" as used herein includes both devices that are wireless signal receivers, devices that have only wireless signal receivers without transmit capability, and devices that include receive and transmit hardware, devices that have receive and transmit hardware capable of performing two-way communication over a two-way communication link. Such a device may include: a cellular or other communication device having a single line display or a multi-line display or a cellular or other communication device without a multi-line display; PCS (Personal Communications Service), which may combine voice, data processing, facsimile and/or data communication capabilities; a PDA (Personal Digital Assistant), which may include a radio frequency receiver, a pager, internet/intranet access, a web browser, a notepad, a calendar and/or a GPS (Global Positioning System) receiver; a conventional laptop and/or palmtop computer or other device having and/or including a radio frequency receiver. As used herein, a "terminal" or "terminal device" may be portable, transportable, installed in a vehicle (aeronautical, maritime, and/or land-based), or situated and/or configured to operate locally and/or in a distributed fashion at any other location(s) on earth and/or in space. As used herein, a "terminal Device" may also be a communication terminal, a web terminal, a music/video playing terminal, such as a PDA, an MID (Mobile Internet Device) and/or a Mobile phone with music/video playing function, or a smart tv, a set-top box, etc.
The client terminal in this embodiment is the above terminal.
Specifically, referring to fig. 1, fig. 1 is a basic flow chart of the target tracking method according to the present embodiment.
As shown in fig. 1, the target tracking method includes the steps of:
s1100, acquiring a tracking area of a target to be tracked;
the tracking area is an area where the target to be tracked is located, which is determined in a video frame in the video needing target tracking processing. In general, the tracking area is an area of a preset shape including the tracking target, for example, a rectangular area, a square area, or the like including the tracking target. The video to be subjected to target tracking processing may be a video acquired in an application scene where a target needs to be monitored, for example, a video recorded in an application environment by using a camera when a person needs to be monitored in security application. It should be noted that the target to be tracked may be various types of objects, such as people and objects, which are set, for example, a confidential environment is monitored, and the monitored people are tracked in the monitored video.
In this embodiment, when a tracking area is obtained, a video frame is first extracted from a video, a target to be tracked is identified in the video frame through a preset identification model, and an area where the target is located in a preset size ratio is used as a target area.
The recognition model is a model obtained by training a neural network model in advance using sample data. For example, when the target to be tracked is a human, images of various forms of human, such as human faces, human body forms, and the like, may be used as sample data, and the neural network model algorithm may be trained by such sample data.
In some embodiments, when determining a tracking area where an object to be tracked is located, taking the identified object to be tracked as a center, and taking a shape of a size of a preset proportion including the object as the tracking area, for example, taking an area where a minimum rectangle containing the object to be tracked is located as the tracking area or taking a circumscribed circle of the object to be tracked as the tracking area.
S1200, reducing the pixel value in the tracking area to a preset pixel value, and extracting the characteristics of the target to be tracked from the reduced tracking area;
in this embodiment, the pixel size of the tracking area is obtained, the pixel size is compared with a preset pixel value, and when the pixel size of the tracking area is larger than the preset pixel value, the pixel size of the tracking area is reduced to the preset pixel value.
In practical applications, when tracking a target in a video, the tracking speed is often low and the application is limited. To increase the tracking speed, in this embodiment the pixel value of the tracking area is reduced, and the characteristics of the target are extracted from the reduced tracking area for tracking, so that the tracking speed can be increased. Preferably, the preset pixel value may be 30 × 30. In other embodiments, an appropriate pixel value can be set according to the speed of the target.
In some embodiments, when extracting the feature of the tracking area, the histogram feature may be extracted, for example, by performing gray processing on a video frame, equally dividing the gray value from 0 to 255 into 8 sections, traversing each pixel in the image, counting the number of pixels respectively falling into each section, and finally dividing the number of pixels in each of the 8 sections by the sum of the pixels and performing normalization, thereby obtaining the histogram feature.
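As an illustrative sketch (not part of the disclosed embodiments), the 8-section gray histogram feature described above could be computed as follows; the function and variable names are assumptions, and the region's gray values are assumed to be given as a flat buffer:

```cpp
#include <array>
#include <cstdint>
#include <vector>

// Split gray values 0-255 into 8 equal sections of width 32, count the
// pixels falling into each section, then normalize by the total pixel
// count, as described in the text above.
std::array<double, 8> grayHistogramFeature(const std::vector<std::uint8_t>& gray) {
    std::array<double, 8> hist{};
    for (std::uint8_t v : gray)
        hist[v / 32] += 1.0;          // 0-31 -> bin 0, ..., 224-255 -> bin 7
    if (!gray.empty())
        for (double& h : hist) h /= static_cast<double>(gray.size());
    return hist;
}
```

The division by the pixel total performs the normalization step, so the 8 entries sum to 1 for any non-empty region.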
In other embodiments, Local Binary Pattern (LBP) features may also be extracted: the central pixel of the tracking-area image is used as a threshold, and the gray values of the 8 adjacent pixels are compared with the central pixel. If a peripheral pixel value is greater than the central pixel value, that position is marked as 1; otherwise it is marked as 0. This generates an 8-bit binary number, which is the LBP feature of the central pixel of the tracking-area image.
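The LBP comparison for a single 3×3 patch can be sketched as follows; the bit ordering (clockwise from the top-left neighbour) is a convention assumed here, not specified in the text:

```cpp
#include <array>
#include <cstdint>

// Compare each of the 8 neighbours with the centre pixel; a neighbour
// strictly greater than the centre contributes bit 1, otherwise bit 0,
// yielding the 8-bit LBP code of the centre pixel.
std::uint8_t lbpCode(const std::array<std::array<std::uint8_t, 3>, 3>& patch) {
    const std::uint8_t c = patch[1][1];
    const int dy[8] = {-1, -1, -1, 0, 1, 1, 1, 0};   // clockwise from top-left
    const int dx[8] = {-1, 0, 1, 1, 1, 0, -1, -1};
    std::uint8_t code = 0;
    for (int k = 0; k < 8; ++k) {
        std::uint8_t n = patch[1 + dy[k]][1 + dx[k]];
        code = static_cast<std::uint8_t>((code << 1) | (n > c ? 1 : 0));
    }
    return code;
}
```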
And S1300, tracking the characteristics through a preset target tracking algorithm.
The preset target tracking algorithm comprises: tracking algorithms based on correlation filtering, such as the Kalman filter tracking algorithm and the particle filter tracking algorithm.
In some embodiments, to improve the tracking accuracy, the features may be updated according to the number of video frames, for example, the features are extracted from the tracking area again every 3-5 frames and used as updated features, and the features are tracked by the above listed tracking algorithm of the correlation filtering.
In the target tracking method, the pixel value in the tracking area is reduced to the preset pixel value, and the target to be tracked is extracted from the reduced tracking area.
In practical application, when the moving speed of a target in a video is too fast, the target cannot be tracked in time, that is, the tracking speed is too slow. Therefore, in order to solve this problem, an embodiment of the present invention further provides a method for acquiring a tracking area, as shown in fig. 2, fig. 2 is a basic flowchart diagram of the method for acquiring a tracking area provided by the embodiment of the present invention.
Specifically, as shown in fig. 2, step S1100 includes the steps of:
s1110, extracting video frames from a video needing target tracking processing;
s1120, acquiring a frame of a target to be tracked from the video frame;
the video that needs to be subjected to the target tracking processing may be a video acquired in an application scene where a target needs to be monitored, for example, a video recorded in an application environment by using a camera when a person needs to be monitored in security application.
In the embodiment, each frame of video frame is sequentially extracted from the acquired video according to the playing sequence, whether a target to be tracked exists in each frame of video frame is judged by using the target identification model, and the video frame of the target to be tracked which is firstly identified in the video is extracted.
It should be noted that the target recognition model may be a model that is trained to converge on the convolutional neural network algorithm in advance through sample data of the target to be tracked. The target recognition model may also be a feature extraction model, for example, a feature of the target to be tracked is extracted from a video frame, and the feature is compared with a feature extracted from a next frame of video frame, and if the comparison is consistent, the target to be tracked is obtained.
In this embodiment, a target to be tracked in a video frame is identified by using a target identification model, and coordinates and a size of a currently identified frame to be tracked are obtained. The frame of the target to be tracked is a preset boundary of an area which can include the target to be tracked, and in this embodiment, the frame is a boundary of a minimum area of the target to be tracked, and is preferably a rectangle.
S1130, the movement speed of the target to be tracked in the video frame is obtained, and the tracking area of the target to be tracked is determined according to the movement speed.
The video frames are input into preset application software, such as Adobe Premiere Pro Cs, which calculates the movement speed of the target to be tracked.
In practical application, when the movement speed of the target to be tracked is high, the tracking easily fails to keep up with the movement speed, that is, the tracking speed is low. When the movement speed is low, the target can be tracked with a small magnification factor.
An embodiment of the present invention further provides a method for obtaining a tracking area of a target to be tracked according to the magnification of the boundary determined by the motion speed, as shown in fig. 3, fig. 3 is a schematic basic flow diagram of the method for obtaining the tracking area of the target to be tracked according to the magnification of the boundary determined by the motion speed, provided by the embodiment of the present invention.
Specifically, as shown in fig. 3, step S1130 includes the steps of:
s1131, determining a speed range which meets the motion speed of the target to be tracked;
s1132, searching for the amplification factor corresponding to the speed range in a preset information table;
in the embodiment of the invention, a plurality of speed ranges are preset, after the movement speed is obtained, the movement speed is compared with the preset plurality of speed ranges, and the speed range matched with the movement speed is obtained. The preset information table records the corresponding relation between the speed range and the magnification factor.
For example, suppose the obtained movement speed of the target to be tracked is 1.5 and the preset speed ranges are 0-1, 1-2 and 2-3, where the information table maps the range 0-1 to a factor of 1.5, the range 1-2 to a factor of 2, and the range 2-3 to a factor of 2.5. The movement speed 1.5 falls within the range 1-2, and the information table gives a magnification factor of 2 for that range.
Note that the magnification is a magnification of the size of the frame.
And S1133, taking the target to be tracked as the center, amplifying the boundary according to the amplification factor, and taking the amplified region as a tracking region.
And when the boundary is enlarged, fixing the coordinates of the center of the target to be tracked, enlarging the size of the frame according to the magnification factor, and taking the enlarged area as a tracking area.
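Steps S1131-S1133 can be sketched together as follows. The ranges and factors mirror the worked example in the text (0-1 → 1.5×, 1-2 → 2×, 2-3 → 2.5×); the struct and function names, and the half-open (lo, hi] intervals, are illustrative assumptions:

```cpp
#include <vector>

struct Box { double cx, cy, w, h; };   // centre coordinates + frame size

// S1131/S1132: find the speed range the movement speed falls into and
// look up the corresponding magnification factor in a small table.
double magnificationFor(double speed) {
    struct Range { double lo, hi, factor; };
    const std::vector<Range> table{{0, 1, 1.5}, {1, 2, 2.0}, {2, 3, 2.5}};
    for (const Range& r : table)
        if (speed > r.lo && speed <= r.hi) return r.factor;
    return table.back().factor;        // clamp speeds beyond the last range
}

// S1133: keep the centre fixed and enlarge the frame by the factor; the
// enlarged region is the tracking area.
Box trackingArea(const Box& frame, double speed) {
    const double m = magnificationFor(speed);
    return {frame.cx, frame.cy, frame.w * m, frame.h * m};
}
```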
An embodiment of the present invention provides a method for reducing a pixel value in a tracking area to a predetermined pixel value, as shown in fig. 4, where fig. 4 is a basic flowchart of the method for reducing a pixel value in a tracking area to a predetermined pixel value according to the embodiment of the present invention.
Specifically, as shown in fig. 4, step S1200 includes the steps of:
s1211, acquiring a current pixel value of the tracking area;
when acquiring the pixel value of the tracking area, the pointer operation mode can be adopted to double-loop traverse all the pixel values in the tracking area. The implementation code is as follows:
for (int i ═ 0; < rowNumber; i + +// line cycle)
{
Uchar data output purtimeger. ptr < Uchar > (i); // obtaining the head Address of the ith line
For (int j ═ 0; j < colNumber; j + + >)// column cycle
{
/[ begin processing each pixel ]
Data[j]=data[j]/div*div+div/2;
/[ end of treatment ]
End of row/line processing
}
S1212, judging whether the pixel value is larger than a preset pixel value;
s1213, reducing the pixel value of the tracking area to a preset pixel value when the pixel value is larger than the preset pixel value;
and after the pixel value of the tracking area is obtained, comparing the obtained pixel value with a preset pixel value. Preferably, the preset pixel value may be set to 30 × 30. When the pixel value is greater than 30 × 30, the pixel value of the tracking area is reduced to 30 × 30.
And S1214, when the pixel value is less than or equal to the preset pixel value, keeping the pixel value of the tracking area.
By reducing the pixel value of the tracking area, on one hand, the interference of other factors in the tracking area can be eliminated, the characteristics of the target to be tracked can be conveniently obtained, and the accuracy of target tracking is improved, and on the other hand, the speed of tracking the target can be improved due to the fact that the pixel is low.
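The comparison of the region against a preset 30 × 30 value reads as a region-size check; under that reading, a minimal sketch of steps S1211-S1214 is shown below, shrinking an oversized region with nearest-neighbour sampling and otherwise keeping it unchanged. This interpretation and all names are assumptions, not the patent's stated implementation:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// If the w x h region exceeds the preset size, shrink it to presetW x
// presetH by nearest-neighbour sampling; otherwise keep it (step S1214).
std::vector<std::uint8_t> reduceRegion(const std::vector<std::uint8_t>& src,
                                       int w, int h, int presetW, int presetH) {
    if (w <= presetW && h <= presetH) return src;   // keep the region as-is
    std::vector<std::uint8_t> dst(static_cast<std::size_t>(presetW) * presetH);
    for (int y = 0; y < presetH; ++y)
        for (int x = 0; x < presetW; ++x)
            dst[y * presetW + x] = src[(y * h / presetH) * w + (x * w / presetW)];
    return dst;
}
```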
An embodiment of the present invention provides a method for extracting a feature of an object to be tracked from a reduced tracking area, as shown in fig. 5, fig. 5 is a basic flowchart of the method for extracting a feature of an object to be tracked from a reduced tracking area according to an embodiment of the present invention.
Specifically, as shown in fig. 5, step S1200 includes the steps of:
s1221, acquiring a horizontal gradient image and a vertical gradient image of the tracking area;
first, the images of the tracking area are normalized. That is, the image is converted into a gray scale image, and the image in the tracking area is normalized in color space by a Gamma correction method, so that the contrast of the image is adjusted, the influence of local shadows and illumination changes in the image is reduced, and the noise interference is suppressed.
A horizontal gradient image and a vertical gradient image of the corrected image of the tracking area are calculated. And the gradient direction value of each pixel position is calculated through the horizontal gradient image and the vertical gradient image obtained through calculation, so that not only can contour, shadow and texture information be captured, but also the influence of illumination can be further weakened.
Suppose G_x(x, y) is the horizontal gradient image and G_y(x, y) is the vertical gradient image.
Then G_x(x, y) = H(x+1, y) - H(x-1, y);
G_y(x, y) = H(x, y+1) - H(x, y-1);
where G_x(x, y), G_y(x, y), and H(x, y) respectively represent the horizontal gradient, the vertical gradient, and the pixel value at pixel point (x, y) of the input image.
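The central-difference gradients defined above can be sketched directly; the layout (row-major grayscale buffer, borders left at zero) and the names are assumptions made for illustration:

```cpp
#include <vector>

struct Gradients { std::vector<double> gx, gy; };

// Evaluate Gx(x,y) = H(x+1,y) - H(x-1,y) and Gy(x,y) = H(x,y+1) - H(x,y-1)
// at every interior pixel of a w x h row-major image H.
Gradients centralGradients(const std::vector<double>& H, int w, int h) {
    Gradients g{std::vector<double>(H.size(), 0.0),
                std::vector<double>(H.size(), 0.0)};
    for (int y = 1; y < h - 1; ++y)
        for (int x = 1; x < w - 1; ++x) {
            g.gx[y * w + x] = H[y * w + (x + 1)] - H[y * w + (x - 1)];
            g.gy[y * w + x] = H[(y + 1) * w + x] - H[(y - 1) * w + x];
        }
    return g;
}
```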
S1222, obtaining a gradient amplitude image matrix according to the horizontal gradient image and the vertical gradient image, and calculating a gradient amplitude integral graph of each image in the gradient amplitude image matrix;
The cartToPolar function is used to calculate the angle matrix image angleMat and the gradient amplitude matrix image magnMat corresponding to the horizontal gradient image and the vertical gradient image. The pixel intensity values in the angle matrix image are normalized into 9 ranges with intensities in [0, 9), each range representing one bin of a Histogram of Oriented Gradients (HOG). Using the angle as an index, the gradient amplitude image matrix is split into 9 gradient amplitude image matrices according to the 9 directions, and the integral image corresponding to each of the 9 images is calculated from the gradient amplitude image matrix of each angle using the integral function in OpenCV.
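Two pieces of the step above can be sketched without OpenCV: quantizing a gradient angle into one of the 9 bins, and building an integral image (summed-area table) so that cell sums can later be read in constant time. The bin boundaries (0-180 degrees split into 9 bins of 20 degrees) are an assumption; the text only says the angles are normalized into 9 ranges:

```cpp
#include <cmath>
#include <vector>

// Map an angle in degrees to one of 9 HOG bins of 20 degrees each
// (unsigned orientation, 0-180 degrees assumed).
int hogBin(double angleDeg) {
    int b = static_cast<int>(std::fmod(angleDeg, 180.0) / 20.0);
    return b > 8 ? 8 : b;
}

// Summed-area table of a w x h row-major image:
// I(y,x) = sum of img over all rows <= y and columns <= x.
std::vector<double> integralImage(const std::vector<double>& img, int w, int h) {
    std::vector<double> I(img.size(), 0.0);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            double up   = y > 0 ? I[(y - 1) * w + x] : 0.0;
            double left = x > 0 ? I[y * w + (x - 1)] : 0.0;
            double diag = (y > 0 && x > 0) ? I[(y - 1) * w + (x - 1)] : 0.0;
            I[y * w + x] = img[y * w + x] + up + left - diag;
        }
    return I;
}
```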
S1223, performing an addition operation on the gradient amplitude integral image of each image to obtain the directional gradient histogram features of the target to be tracked.
A cacHOGCell function performs the addition operation on the gradient amplitude integral image of each image to obtain the directional gradient histogram features of each image, and the features of the images are concatenated end to end to obtain the directional gradient histogram features of the target to be tracked.
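The cacHOGCell step is not defined in the text; the sketch below assumes it reads each bin's amplitude sum over one cell from the corresponding integral image with the standard four-corner lookup, yielding a 9-bin histogram per cell (cell histograms would then be concatenated end to end):

```python
import numpy as np

def cell_hog(integrals, x0, y0, x1, y1):
    """Hypothetical stand-in for the cacHOGCell step: read each bin's
    amplitude sum over the cell [x0, x1) x [y0, y1) from its integral
    image in O(1), yielding one HOG histogram per cell."""
    hist = []
    for ii in integrals:
        # Standard four-corner integral-image lookup for a rectangle sum.
        s = ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
        hist.append(s)
    return np.array(hist)
```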
An embodiment of the present invention provides a method for tracking the features through a preset target tracking algorithm, as shown in fig. 6. Fig. 6 is a basic flow diagram of this method according to the embodiment of the present invention.
Specifically, as shown in fig. 6, step S1300 includes the steps of:
S1310, tracking in the first video frame group by using the features as the tracking target through a correlation filtering algorithm;
the first video frame group is a plurality of video frames with preset number extracted from the video to be tracked. Firstly, inputting a first frame of a first video frame group into a terminal or a server, extracting features from an area where a target to be tracked is located, and training to obtain a correlation filter. Then, for other video frames in the first video frame group, firstly cutting the predicted area, extracting the characteristics, performing FFT (Fourier transform) on the characteristics, then multiplying the characteristics by a related filter, performing IFFT (inverse Fourier transform) on the obtained result, wherein the area with the maximum response point in the other video frames in the first video frame group is the position to be tracked.
S1320, acquiring the characteristics of the target to be tracked from the first frame in the second video frame group;
S1330, tracking in a second video frame group with the obtained features, wherein the first video frame group is consecutive to the second video frame group.
The second video frame group follows the first video frame group and contains the same preset number of video frames. In this embodiment, in order to improve tracking accuracy, after the first video frame group has been tracked, the features of the target to be tracked are extracted again from the first frame of the second video frame group, and the correlation filter is retrained with these features so as to update it, facilitating prediction of the target to be tracked in subsequent video frames.
In some embodiments, a third video frame group, a fourth video frame group, and so on are further included; that is, the features of the target to be tracked are extracted with a period of the preset number of video frames, and the correlation filter is updated accordingly.
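The periodic update scheme can be sketched as a driver loop. Here `extract`, `train`, and `predict` are caller-supplied stand-ins for the feature extraction, filter training, and response steps; they are not functions named in the text:

```python
def track_video(frames, group_size, extract, train, predict):
    """Process frames in consecutive groups of `group_size`. At the
    first frame of each group, re-extract the target's features and
    retrain (update) the correlation filter; every frame is then
    tracked with the current filter."""
    positions = []
    flt = None
    for i, frame in enumerate(frames):
        if i % group_size == 0:          # first frame of a group
            flt = train(extract(frame))  # update the correlation filter
        positions.append(predict(frame, flt))
    return positions
```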
In order to solve the above technical problems, an embodiment of the present invention further provides a target tracking apparatus. Referring to fig. 7, fig. 7 is a block diagram of a basic structure of the target tracking device in the present embodiment.
As shown in fig. 7, an object tracking apparatus includes: an acquisition module 2100, a processing module 2200, and an execution module 2300. The acquiring module 2100 is configured to acquire a tracking area of a target to be tracked; a processing module 2200, configured to reduce the pixel value in the tracking area to a preset pixel value, and extract a feature of the target to be tracked from the reduced tracking area; and the executing module 2300 is used for tracking the features through a preset target tracking algorithm.
According to the target tracking device, the pixel value in the tracking area is reduced to the preset pixel value, and the features of the target to be tracked are extracted from the reduced tracking area.
In some embodiments, the obtaining module comprises: the first acquisition submodule is used for extracting a video frame from a video needing target tracking processing; the second obtaining submodule is used for obtaining a frame of the target to be tracked from the video frame; and the third acquisition submodule is used for acquiring the movement speed of the target to be tracked in the video frame and determining the tracking area of the target to be tracked according to the movement speed.
In some embodiments, the third acquisition submodule comprises: the first processing submodule is used for determining a speed range which is met by the movement speed of the target to be tracked; the second processing submodule is used for searching the amplification factor corresponding to the speed range in a preset information table; and the first execution submodule is used for amplifying the boundary according to the amplification factor by taking the target to be tracked as a center, and taking the amplified region as the tracking region.
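The speed-to-magnification lookup performed by these submodules can be sketched as follows. The speed ranges and factors in SPEED_TABLE are illustrative assumptions, since the contents of the preset information table are not given in the text:

```python
# Hypothetical preset information table: (speed upper bound, factor).
SPEED_TABLE = [(5.0, 1.5), (15.0, 2.0), (float("inf"), 3.0)]

def magnification_for_speed(speed):
    """Return the magnification factor for the range the speed falls in."""
    for upper, factor in SPEED_TABLE:
        if speed < upper:
            return factor
    return SPEED_TABLE[-1][1]

def tracking_area(cx, cy, w, h, speed):
    """Enlarge the target's bounding box about its center by the factor,
    returning (x, y, width, height) of the tracking area."""
    k = magnification_for_speed(speed)
    return (cx - k * w / 2, cy - k * h / 2, k * w, k * h)
```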
In some embodiments, the processing module comprises: the fourth obtaining submodule is used for obtaining the current pixel value of the tracking area; the third processing submodule is used for judging whether the pixel value is larger than a preset pixel value or not; and the second execution submodule is used for reducing the pixel value of the tracking area to the preset pixel value when the pixel value is larger than the preset pixel value.
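The pixel-reduction step these submodules perform (lowering values above the preset pixel value to that value while keeping smaller values unchanged) can be written in one line with NumPy. The preset value 128 is an assumed example:

```python
import numpy as np

def clamp_pixels(region, preset=128):
    """Reduce any pixel value above the preset threshold to the preset
    value; pixels at or below the threshold are retained unchanged."""
    return np.minimum(region, preset)
```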
In some embodiments, the processing module comprises: the fifth acquisition submodule is used for acquiring a horizontal gradient image and a vertical gradient image of the tracking area; the fourth processing submodule is used for obtaining a gradient amplitude image matrix according to the horizontal gradient image and the vertical gradient image and calculating a gradient amplitude integral image of each image in the gradient amplitude image matrix; and the third execution submodule is used for performing addition operation on the gradient amplitude integral image of each image to obtain the directional gradient histogram characteristics of the target to be tracked.
In some embodiments, the execution module comprises: the fifth processing submodule is used for tracking in the first video frame group by using the characteristics as a tracking target through a correlation filtering algorithm; a sixth obtaining submodule, configured to obtain a feature of the target to be tracked from a first frame in the second video frame group; a fourth execution sub-module for tracking in the second group of video frames with the obtained features, wherein the first group of video frames is consecutive to the second group of video frames.
In some embodiments, the apparatus further comprises a sixth processing submodule, configured to retain the pixel value of the tracking area when the pixel value is smaller than the preset pixel value.
In order to solve the above technical problem, an embodiment of the present invention further provides a computer device. Referring to fig. 8, fig. 8 is a block diagram of a basic structure of a computer device according to the present embodiment.
Fig. 8 schematically illustrates the internal structure of the computer device. As shown in fig. 8, the computer device includes a processor, a non-volatile storage medium, a memory, and a network interface connected through a system bus. The non-volatile storage medium stores an operating system, a database, and computer-readable instructions; the database can store control information sequences, and the computer-readable instructions, when executed by the processor, can cause the processor to implement a target tracking method. The processor provides the calculation and control capability supporting the operation of the whole computer device. The memory may store computer-readable instructions that, when executed by the processor, cause the processor to perform the target tracking method. The network interface is used for connecting and communicating with a terminal. Those skilled in the art will appreciate that the architecture shown in fig. 8 is merely a block diagram of some of the structures associated with the disclosed aspects and does not limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In this embodiment, the processor executes the specific contents of the obtaining module 2100, the processing module 2200, and the execution module 2300 in fig. 7, and the memory stores the program codes and data required for executing these modules. The network interface is used for data transmission to and from a user terminal or server. The memory in this embodiment stores the program codes and data required for executing all the sub-modules of the target tracking method, and the server can call these program codes and data to execute the functions of all the sub-modules.
The computer device reduces the pixel value in the tracking area to the preset pixel value and extracts the features of the target to be tracked from the reduced tracking area.
The present invention also provides a storage medium storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the object tracking method of any of the above embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and can include the processes of the embodiments of the methods described above when the computer program is executed. The storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disk, a Read-Only Memory (ROM), or a Random Access Memory (RAM).
It should be understood that, although the steps in the flowcharts of the figures are shown in an order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, there is no strict order restriction, and the steps may be performed in other orders. Moreover, at least a portion of the steps in the flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and not necessarily in sequence, but possibly in turn or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
The foregoing describes only some embodiments of the present invention. It should be noted that those skilled in the art can make various improvements and refinements without departing from the principle of the present invention, and such improvements and refinements shall also fall within the protection scope of the present invention.

Claims (10)

1. A target tracking method, comprising the steps of:
acquiring a tracking area of a target to be tracked;
reducing the pixel value in the tracking area to a preset pixel value, and extracting the characteristics of the target to be tracked from the reduced tracking area;
and tracking the characteristics by a preset target tracking algorithm.
2. The target tracking method according to claim 1, wherein the acquiring the tracking area of the target to be tracked comprises:
extracting video frames from a video needing target tracking processing;
acquiring a frame of the target to be tracked from the video frame;
and acquiring the movement speed of the target to be tracked in the video frame, and determining the tracking area of the target to be tracked according to the movement speed.
3. The target tracking method according to claim 2, wherein the determining the tracking area of the target to be tracked according to the movement speed comprises:
determining a speed range to which the movement speed of the target to be tracked conforms;
searching for the magnification factor corresponding to the speed range in a preset information table;
and taking the target to be tracked as a center, amplifying the boundary according to the amplification factor, and taking the amplified region as the tracking region.
4. The target tracking method of claim 1, wherein the reducing the pixel values within the tracking area to a preset pixel value comprises:
acquiring the current pixel value of the tracking area;
judging whether the pixel value is larger than a preset pixel value or not;
and when the pixel value is larger than a preset pixel value, reducing the pixel value of the tracking area to the preset pixel value.
5. The target tracking method according to claim 1, wherein the extracting the feature of the target to be tracked from the reduced tracking area comprises:
acquiring a horizontal gradient image and a vertical gradient image of the tracking area;
obtaining a gradient amplitude image matrix according to the horizontal gradient image and the vertical gradient image, and calculating a gradient amplitude integral graph of each image in the gradient amplitude image matrix;
and performing addition operation on the gradient amplitude integral image of each image to obtain the directional gradient histogram characteristics of the target to be tracked.
6. The target tracking method according to claim 1, wherein the tracking the features by a preset target tracking algorithm comprises:
tracking in the first video frame group by using the features as a tracking target through a correlation filtering algorithm;
acquiring the characteristics of the target to be tracked from the first frame in the second video frame group;
tracking in the second set of video frames with the acquired feature, wherein the first set of video frames is contiguous with the second set of video frames.
7. The target tracking method of claim 1, wherein after determining whether the pixel value is greater than a preset pixel value, further comprising:
and when the pixel value is smaller than a preset pixel value, reserving the pixel value of the tracking area.
8. An object tracking device, comprising:
the acquisition module is used for acquiring a tracking area of a target to be tracked;
the processing module is used for reducing the pixel value in the tracking area to a preset pixel value and extracting the characteristics of the target to be tracked from the reduced tracking area;
and the execution module is used for tracking the characteristics through a preset target tracking algorithm.
9. A computer device comprising a memory and a processor, the memory having stored therein computer readable instructions which, when executed by the processor, cause the processor to perform the steps of the object tracking method of any one of claims 1 to 7.
10. A storage medium storing computer readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the object tracking method of any one of claims 1 to 7.
CN201910065114.7A 2019-01-23 2019-01-23 Target tracking method and device, computer equipment and storage medium Pending CN111476065A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910065114.7A CN111476065A (en) 2019-01-23 2019-01-23 Target tracking method and device, computer equipment and storage medium


Publications (1)

Publication Number Publication Date
CN111476065A true CN111476065A (en) 2020-07-31

Family

ID=71743496


Country Status (1)

Country Link
CN (1) CN111476065A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112308871A (en) * 2020-10-30 2021-02-02 地平线(上海)人工智能技术有限公司 Method and device for determining motion speed of target point in video
CN113223057A (en) * 2021-06-04 2021-08-06 北京奇艺世纪科技有限公司 Face tracking method and device, electronic equipment and storage medium
CN113781416A (en) * 2021-08-30 2021-12-10 武汉理工大学 Conveyer belt tearing detection method and device and electronic equipment

Citations (16)

Publication number Priority date Publication date Assignee Title
US20030035051A1 (en) * 2001-08-07 2003-02-20 Samsung Electronics Co., Ltd. Device for and method of automatically tracking a moving object
CN1988653A (en) * 2005-12-21 2007-06-27 中国科学院自动化研究所 Night target detecting and tracing method based on visual property
CN102136147A (en) * 2011-03-22 2011-07-27 深圳英飞拓科技股份有限公司 Target detecting and tracking method, system and video monitoring device
JP2011196940A (en) * 2010-03-23 2011-10-06 Mitsubishi Electric Corp Tracking device
JP2012073997A (en) * 2010-09-01 2012-04-12 Ricoh Co Ltd Object tracking device, object tracking method, and program thereof
CN102663366A (en) * 2012-04-13 2012-09-12 中国科学院深圳先进技术研究院 Method and system for identifying pedestrian target
CN102831617A (en) * 2012-07-17 2012-12-19 聊城大学 Method and system for detecting and tracking moving object
CN103489199A (en) * 2012-06-13 2014-01-01 通号通信信息集团有限公司 Video image target tracking processing method and system
CN104376576A (en) * 2014-09-04 2015-02-25 华为技术有限公司 Target tracking method and device
CN104637038A (en) * 2015-03-11 2015-05-20 天津工业大学 Improved CamShift tracing method based on weighted histogram model
CN105913453A (en) * 2016-04-01 2016-08-31 海信集团有限公司 Target tracking method and target tracking device
CN108198205A (en) * 2017-12-22 2018-06-22 湖南源信光电科技股份有限公司 A kind of method for tracking target based on Vibe and Camshift algorithms
CN108198201A (en) * 2017-12-19 2018-06-22 深圳市深网视界科技有限公司 A kind of multi-object tracking method, terminal device and storage medium
CN108876816A (en) * 2018-05-31 2018-11-23 西安电子科技大学 Method for tracking target based on adaptive targets response
CN108876818A (en) * 2018-06-05 2018-11-23 国网辽宁省电力有限公司信息通信分公司 A kind of method for tracking target based on like physical property and correlation filtering
CN108898057A (en) * 2018-05-25 2018-11-27 广州杰赛科技股份有限公司 Track method, apparatus, computer equipment and the storage medium of target detection


Non-Patent Citations (2)

Title
Kuang Yang (chief editor): "Film and Television Special Effects and Post-Production Compositing", 31 January 2010, China Central Radio and TV University Press, page 257 *
Yin Xiaogang (editor): "New After Effects CC Standard Tutorial", 30 April 2014, Ocean Press, page 91 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination