CN111402291A - Method and apparatus for tracking a target - Google Patents

Method and apparatus for tracking a target

Info

Publication number
CN111402291A
Authority
CN
China
Prior art keywords
frame image
current frame
tracking
tracking target
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion.)
Granted
Application number
CN202010142613.4A
Other languages
Chinese (zh)
Other versions
CN111402291B (en)
Inventor
李映辉
Current Assignee (The listed assignees may be inaccurate.)
Apollo Zhilian Beijing Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010142613.4A priority Critical patent/CN111402291B/en
Publication of CN111402291A publication Critical patent/CN111402291A/en
Application granted granted Critical
Publication of CN111402291B publication Critical patent/CN111402291B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]

Abstract

The embodiment of the application discloses a method and an apparatus for tracking a target. One embodiment of the method comprises: performing target detection on the acquired current frame image to determine a tracking target; extracting an integrable feature of the current frame image and calculating an integral map of the integrable feature; determining a region of interest of the tracking target in the current frame image; extracting a transverse feature integral and a longitudinal feature integral of the region of interest from the integral map; and tracking the tracking target based on the transverse feature integral and the longitudinal feature integral of the region of interest to obtain the position of the tracking target in the current frame image. This embodiment converts the two-dimensional feature of the tracking target into two one-dimensional features and performs target tracking based on them, which reduces the computational load of the tracking algorithm by an order of magnitude and thereby reduces the complexity of the tracking algorithm.

Description

Method and apparatus for tracking a target
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a method and a device for tracking a target.
Background
Visual target tracking is an important research direction in computer vision with wide applications, such as video surveillance, human-computer interaction, and unmanned driving. Visual target tracking technology has advanced greatly over the past decades; in recent years in particular, target tracking methods based on deep learning have achieved satisfactory results, bringing breakthrough progress to the technology.
At present, a common target tracking approach is to extract two-dimensional Histogram of Oriented Gradients (HOG) features and realize target tracking with a tracking algorithm. However, tracking algorithms based on two-dimensional features are computationally complex.
Disclosure of Invention
The embodiment of the application provides a method and a device for tracking a target.
In a first aspect, an embodiment of the present application provides a method for tracking a target, including: performing target detection on the acquired current frame image to determine a tracking target; extracting an integrable feature of the current frame image and calculating an integral map of the integrable feature; determining a region of interest of the tracking target in the current frame image; extracting a transverse feature integral and a longitudinal feature integral of the region of interest from the integral map; and tracking the tracking target based on the transverse feature integral and the longitudinal feature integral of the region of interest to obtain the position of the tracking target in the current frame image.
In some embodiments, extracting the integrable feature of the current frame image comprises: extracting edge features from the current frame image using a Canny operator.
In some embodiments, extracting edge features from the current frame image using a Canny operator comprises: dividing the current frame image into four triangular image blocks using its diagonals; and solving for a gradient threshold over the lower triangular image block among the four triangular image blocks, and binarizing the current frame image based on the gradient threshold to determine the edge points in the current frame image.
In some embodiments, determining the region of interest of the tracking target in the current frame image includes: determining the region of interest of the tracking target in the current frame image according to the region of the tracking target in the previous frame image, wherein the region of interest is centered on the center of the region of the tracking target in the previous frame image, and its size is a preset multiple of the size of the region of the tracking target in the previous frame image.
In some embodiments, tracking the tracking target based on the transverse feature integral and the longitudinal feature integral of the region of interest to obtain the position of the tracking target in the current frame image includes: acquiring the transverse feature integral and the longitudinal feature integral of the tracking target in the integral map corresponding to the previous frame image; and calculating the transverse and longitudinal positions of the tracking target using a tracking algorithm, according to the transverse and longitudinal feature integrals of the tracking target and those of the region of interest, and combining them to obtain the position of the tracking target in the current frame image.
In some embodiments, the method further comprises: updating the feature template of the tracking target according to the position of the tracking target in the current frame image.
In a second aspect, an embodiment of the present application provides an apparatus for tracking a target, including: a detection unit configured to perform target detection on the acquired current frame image and determine a tracking target; a first extraction unit configured to extract an integrable feature of the current frame image and calculate an integral map of the integrable feature; a determination unit configured to determine a region of interest of the tracking target in the current frame image; a second extraction unit configured to extract a transverse feature integral and a longitudinal feature integral of the region of interest from the integral map; and a tracking unit configured to track the tracking target based on the transverse feature integral and the longitudinal feature integral of the region of interest to obtain the position of the tracking target in the current frame image.
In some embodiments, the first extraction unit comprises: an extraction subunit configured to extract edge features from the current frame image using a Canny operator.
In some embodiments, the extraction subunit is further configured to: divide the current frame image into four triangular image blocks using its diagonals; and solve for a gradient threshold over the lower triangular image block among the four triangular image blocks, and binarize the current frame image based on the gradient threshold to determine the edge points in the current frame image.
In some embodiments, the determining unit is further configured to: determine the region of interest of the tracking target in the current frame image according to the region of the tracking target in the previous frame image, wherein the region of interest is centered on the center of the region of the tracking target in the previous frame image, and its size is a preset multiple of the size of the region of the tracking target in the previous frame image.
In some embodiments, the tracking unit is further configured to: acquire the transverse feature integral and the longitudinal feature integral of the tracking target in the integral map corresponding to the previous frame image; and calculate the transverse and longitudinal positions of the tracking target using a tracking algorithm, according to the transverse and longitudinal feature integrals of the tracking target and those of the region of interest, and combine them to obtain the position of the tracking target in the current frame image.
In some embodiments, the apparatus further comprises: an updating unit configured to update the feature template of the tracking target according to the position of the tracking target in the current frame image.
In a third aspect, an embodiment of the present application provides an electronic device, including: one or more processors; a storage device having one or more programs stored thereon; when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method as described in any implementation of the first aspect.
In a fourth aspect, the present application provides a computer-readable medium, on which a computer program is stored, which, when executed by a processor, implements the method as described in any implementation manner of the first aspect.
According to the method and apparatus for tracking a target provided by the embodiments of the application, target detection is first performed on the acquired current frame image to determine a tracking target; then an integrable feature of the current frame image is extracted and an integral map of the integrable feature is calculated; a region of interest of the tracking target in the current frame image is determined, and the transverse feature integral and the longitudinal feature integral of the region of interest are extracted from the integral map; finally, the tracking target is tracked based on the transverse feature integral and the longitudinal feature integral of the region of interest to obtain its position in the current frame image. The two-dimensional feature of the tracking target is thus converted into two one-dimensional features, and target tracking is performed on these two one-dimensional features. Because the scale changes in the two directions are tracked independently, this handles targets whose scales change inconsistently in the two directions; it also reduces the computational load of the tracking algorithm by an order of magnitude, thereby reducing its complexity. In addition, the integral map can be used to extract the features of different targets in different directions simultaneously, so multi-target tracking can be realized.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture to which the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of a method for tracking a target according to the present application;
FIG. 3 shows a schematic of an integral map;
FIG. 4 shows a schematic of transverse feature integration and longitudinal feature integration;
FIG. 5 is a flow diagram of yet another embodiment of a method for tracking a target according to the present application;
FIG. 6 shows a one-dimensional feature tracking schematic of a tracked target;
FIG. 7 is a schematic block diagram illustrating one embodiment of an apparatus for tracking a target according to the present application;
FIG. 8 is a block diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the present method for tracking a target or apparatus for tracking a target may be applied.
As shown in fig. 1, the system architecture 100 may include photographing devices 101, 102, 103, a network 104, and a server 105. The network 104 is used to provide a medium of communication links between the photographing devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The photographing devices 101, 102, 103 may interact with the server 105 through the network 104 to receive or transmit messages and the like. The photographing devices 101, 102, 103 may be hardware or software. When they are hardware, they may be various electronic devices supporting image or video capture, including but not limited to video cameras, smart phones, and the like. When they are software, they may be installed in the above-described electronic devices and implemented as multiple pieces of software or software modules, or as a single piece of software or software module; this is not specifically limited herein.
The server 105 may provide various services. For example, the server 105 may analyze and process data such as the current frame image acquired from the photographing devices 101, 102, 103, and generate a processing result (for example, the position of a tracking target in the current frame image).
The server 105 may be hardware or software. When the server 105 is hardware, it may be implemented as a distributed server cluster composed of a plurality of servers, or as a single server. When the server 105 is software, it may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services), or as a single piece of software or software module; this is not specifically limited herein.
It should be noted that the method for tracking the target provided by the embodiment of the present application is generally performed by the server 105, and accordingly, the apparatus for tracking the target is generally disposed in the server 105.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method for tracking a target in accordance with the present application is shown. The method for tracking a target includes the steps of:
Step 201, performing target detection on the acquired current frame image and determining a tracking target.
In this embodiment, the execution body of the method for tracking a target (for example, the server 105 shown in fig. 1) may perform target detection on a current frame image acquired by a shooting device (for example, the shooting devices 101, 102, 103 shown in fig. 1) to determine a tracking target.
In practice, the shooting device can collect images and send them to the execution body in real time. The image received by the execution body at the current moment is the current frame image. The execution body can perform target detection on the current frame image and determine a tracking target. For example, the execution body may detect whether a specific target exists in the current frame image and, if it does, determine that specific target as the tracking target, realizing single-target tracking. For another example, the execution body may detect all targets existing in the current frame image and determine all the detected targets as tracking targets, realizing multi-target tracking. In addition, the execution body may assign an identifier to each detected tracking target; generally, each tracking target corresponds to a unique identifier.
Step 202, extracting the integrable feature of the current frame image and calculating an integral map of the integrable feature.
In this embodiment, the execution body may extract an integrable feature of the current frame image and calculate an integral map of the integrable feature. Here, an integrable feature refers to a two-dimensional feature that can be accumulated directly, including but not limited to edge features, grayscale features, gradient features, binarized edge point features, and the like. Fig. 3 shows a schematic diagram of an integral map. As shown in fig. 3, the value at each point in the integral map is the sum of the elements within the rectangular box determined by that point. For example, the value at (i, j) in fig. 3 is the sum of the elements within the rectangular box determined by (i, j), and the value at (m, n) is the sum of the elements within the rectangular box determined by (m, n).
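As a concrete illustration (not part of the original disclosure), a minimal sketch of computing such an integral map and querying an arbitrary rectangle sum from it is given below; the function names and the use of NumPy are assumptions made for the example:

```python
import numpy as np

def integral_map(feature: np.ndarray) -> np.ndarray:
    """Integral map of a 2-D integrable feature: entry (i, j) holds the sum
    of all feature values in the rectangle from (0, 0) to (i, j) inclusive."""
    return feature.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii: np.ndarray, top: int, left: int, bottom: int, right: int) -> float:
    """Sum of the feature over rows top..bottom and columns left..right,
    obtained in O(1) via the four-corner identity of the integral map."""
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return float(total)
```

For instance, with a binarized edge map as the feature, rect_sum counts the edge points inside any box without rescanning its pixels.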
In some optional implementations of this embodiment, the execution body may extract edge features from the current frame image using a Canny operator. Specifically, the execution body may divide the current frame image into four triangular image blocks using its diagonals, solve for a gradient threshold over the lower triangular image block among the four, and binarize the current frame image based on the gradient threshold to determine the edge points in the current frame image. Generally, binarizing the current frame image based on the gradient threshold preserves the portion of pixels with larger gradients (e.g., the largest 90% of pixels); the preserved pixels are the edge points.
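A minimal sketch of this thresholding step is shown below, assuming the gradient magnitude has already been computed as a NumPy array; the diagonal split and the 90% retention ratio follow the description above, while the helper names are illustrative:

```python
import numpy as np

def lower_triangle_threshold(grad_mag: np.ndarray, keep_ratio: float = 0.9) -> float:
    """Gradient threshold estimated from the lower triangular block that the
    two image diagonals cut out (the triangle nearest the bottom edge)."""
    h, w = grad_mag.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Below the main diagonal and below the anti-diagonal (normalized coords).
    lower = (ys / h > xs / w) & (ys / h > 1 - xs / w)
    vals = np.sort(grad_mag[lower])
    # Keep the keep_ratio fraction of pixels with the largest gradients.
    return float(vals[int(len(vals) * (1 - keep_ratio))])

def binarize_edges(grad_mag: np.ndarray, threshold: float) -> np.ndarray:
    """Edge points are the pixels whose gradient reaches the threshold."""
    return (grad_mag >= threshold).astype(np.uint8)
```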
Step 203, determining the region of interest of the tracking target in the current frame image.
In this embodiment, the execution body may determine a region of interest (ROI) of the tracking target in the current frame image, i.e., a region that includes the tracking target. For example, according to the tracking target detected in the current frame image, the execution body may outline a region including the tracking target (the region of interest) in the form of a box, circle, ellipse, irregular polygon, or the like.
Step 204, extracting the transverse feature integral and the longitudinal feature integral of the region of interest from the integral map.
In this embodiment, the execution body may extract the transverse feature integral and the longitudinal feature integral of the region of interest from the integral map. The features of the tracking target in the transverse and longitudinal directions can be extracted simultaneously from the integral map, converting the two-dimensional feature of the tracking target into two one-dimensional features. Fig. 4 shows a schematic diagram of the transverse feature integral and the longitudinal feature integral: the horizontal curve is the transverse feature integral, and the vertical curve is the longitudinal feature integral.
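As an illustrative sketch (the names are assumed, and an integral map in the sense of Fig. 3 is taken as input), both one-dimensional projections of an ROI can be read out of the integral map by differencing adjacent rows and columns, without revisiting the ROI pixels:

```python
import numpy as np

def roi_projections(ii: np.ndarray, top: int, left: int, bottom: int, right: int):
    """Transverse (per-column) and longitudinal (per-row) feature integrals of
    the ROI spanning rows top..bottom and columns left..right, from integral
    map ii (ii[i, j] = sum of the feature over rows 0..i and columns 0..j)."""
    # Pad so that ii_p[i, j] = sum over rows < i and columns < j.
    ii_p = np.pad(ii, ((1, 0), (1, 0)))
    # Column sums restricted to the ROI rows -> horizontal projection vector.
    col_strip = ii_p[bottom + 1, :] - ii_p[top, :]
    transverse = col_strip[left + 1:right + 2] - col_strip[left:right + 1]
    # Row sums restricted to the ROI columns -> vertical projection vector.
    row_strip = ii_p[:, right + 1] - ii_p[:, left]
    longitudinal = row_strip[top + 1:bottom + 2] - row_strip[top:bottom + 1]
    return transverse, longitudinal
```

Each vector costs O(width) or O(height) of the ROI, rather than O(width × height) for direct summation.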
Step 205, tracking the tracking target based on the transverse feature integral and the longitudinal feature integral of the region of interest to obtain the position of the tracking target in the current frame image.
In this embodiment, the execution body may track the tracking target based on the transverse feature integral and the longitudinal feature integral of the region of interest to obtain the position of the tracking target in the current frame image. Performing target tracking on these two one-dimensional features reduces the computational load of the tracking algorithm by an order of magnitude and thus reduces the complexity of the tracking algorithm.
According to the method for tracking a target provided by the embodiments of the application, target detection is first performed on the acquired current frame image to determine a tracking target; then an integrable feature of the current frame image is extracted and an integral map of the integrable feature is calculated; a region of interest of the tracking target in the current frame image is determined, and the transverse feature integral and the longitudinal feature integral of the region of interest are extracted from the integral map; finally, the tracking target is tracked based on the transverse feature integral and the longitudinal feature integral of the region of interest to obtain its position in the current frame image. The two-dimensional feature of the tracking target is thus converted into two one-dimensional features, and target tracking is performed on these two one-dimensional features. Because the scale changes in the two directions are tracked independently, this handles targets whose scales change inconsistently in the two directions; it also reduces the computational load of the tracking algorithm by an order of magnitude, thereby reducing its complexity. In addition, the integral map can be used to extract the features of different targets in different directions simultaneously, so multi-target tracking can be realized.
With further reference to FIG. 5, a flow 500 of yet another embodiment of a method for tracking a target in accordance with the present application is illustrated. The method for tracking a target includes the steps of:
Step 501, performing target detection on the acquired current frame image and determining a tracking target.
Step 502, extracting the integrable feature of the current frame image, and calculating the integral map of the integrable feature.
In the present embodiment, the specific operations of steps 501-502 have been described in detail in steps 201-202 of the embodiment shown in fig. 2 and are not repeated here.
Step 503, determining the region of interest of the tracking target in the current frame image according to the region of the tracking target in the previous frame image of the current frame image.
In the present embodiment, the execution body of the method for tracking a target (e.g., the server 105 shown in fig. 1) may determine the region of interest of the tracking target in the current frame image according to the region of the tracking target in the previous frame image. The region of interest may be centered at the center of the region of the tracking target in the previous frame image, and its size may be a preset multiple (e.g., 4 times) of the size of the region of the tracking target in the previous frame image.
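A small sketch of this step is given below under stated assumptions: the target region is represented as a center/width/height box, the preset multiple of 4 is applied to the area (i.e., each side is doubled), and the ROI is clipped to the image; all names are illustrative:

```python
def region_of_interest(prev_box, img_w, img_h, side_scale=2.0):
    """prev_box = (cx, cy, w, h) of the target in the previous frame image.
    The ROI keeps the same center; scaling each side by side_scale=2.0
    makes the ROI 4 times the size (area) of the previous target region."""
    cx, cy, w, h = prev_box
    rw, rh = w * side_scale, h * side_scale
    left = max(0, int(cx - rw / 2))
    top = max(0, int(cy - rh / 2))
    right = min(img_w - 1, int(cx + rw / 2))
    bottom = min(img_h - 1, int(cy + rh / 2))
    return left, top, right, bottom
```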
Step 504, extracting the transverse feature integral and the longitudinal feature integral of the region of interest from the integral map.
In this embodiment, the specific operation of step 504 is described in detail in step 204 in the embodiment shown in fig. 2, and is not described herein again.
Step 505, acquiring the transverse feature integral and the longitudinal feature integral of the tracking target in the integral map corresponding to the previous frame image.
In this embodiment, the execution body may acquire the transverse feature integral and the longitudinal feature integral of the tracking target in the integral map corresponding to the previous frame image.
Step 506, calculating the transverse and longitudinal positions of the tracking target using a tracking algorithm, according to the transverse feature integral and the longitudinal feature integral of the tracking target and those of the region of interest, and combining them to obtain the position of the tracking target in the current frame image.
In this embodiment, the execution body may use a tracking algorithm to calculate the transverse and longitudinal positions of the tracking target, according to the transverse feature integral and the longitudinal feature integral of the tracking target in the integral map corresponding to the previous frame image and the transverse feature integral and the longitudinal feature integral of the region of interest in the integral map corresponding to the current frame image, and combine the two positions to obtain the position of the tracking target in the current frame image. The tracking algorithm may include, but is not limited to, KCF (Kernelized Correlation Filter), DSST (Discriminative Scale Space Tracker), and the like.
Here, taking the binarized edge point feature as an example: each edge point has a pixel value of 1, and projecting the region of interest of the current frame image in the horizontal and vertical directions yields two histograms, which are expressed as vectors. The transverse feature integral of the region of interest of the current frame image is the horizontal projection vector Fh1, and its longitudinal feature integral is the vertical projection vector Fv1; the transverse feature integral of the tracking target in the previous frame image is Fh, and its longitudinal feature integral is Fv. The execution body may use the DSST tracking algorithm to calculate the range and position of Fh within Fh1, and similarly the range and position of Fv within Fv1. Unlike the standard DSST tracking algorithm, which first extracts HOG features from the image and organizes them into a two-dimensional matrix of features, here the histogram is used directly as a one-dimensional feature; the remaining steps are consistent with the DSST tracking algorithm.
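The embodiment relies on DSST-style correlation filtering; as a simplified, assumed illustration of the same one-dimensional search (not the patented algorithm itself), the sketch below locates a template projection such as Fh inside an ROI projection such as Fh1 by normalized cross-correlation over a few candidate scales:

```python
import numpy as np

def match_1d(template: np.ndarray, projection: np.ndarray,
             scales=(0.95, 1.0, 1.05)):
    """Find where the 1-D template (e.g., Fh) best fits inside the ROI
    projection (e.g., Fh1), searching position and a few relative scales."""
    best = (-np.inf, 0, 1.0)  # (score, offset, scale)
    for s in scales:
        n = max(2, int(round(len(template) * s)))
        if n > len(projection):
            continue
        # Resample the template to the candidate scale.
        t = np.interp(np.linspace(0, len(template) - 1, n),
                      np.arange(len(template)), template)
        t = (t - t.mean()) / (t.std() + 1e-9)
        for off in range(len(projection) - n + 1):
            win = projection[off:off + n]
            win = (win - win.mean()) / (win.std() + 1e-9)
            score = float(np.dot(t, win)) / n  # normalized cross-correlation
            if score > best[0]:
                best = (score, off, s)
    return best  # offset and scale give the target's 1-D range in the ROI
```

Running match_1d once on (Fh, Fh1) and once on (Fv, Fv1) yields the horizontal and vertical ranges, which are then combined into the target box.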
Step 507, updating the feature template of the tracking target according to the position of the tracking target in the current frame image.
In this embodiment, the execution body may update the feature template of the tracking target according to the position of the tracking target in the current frame image. The feature template of the tracking target may be updated as:
F(n) = F(n-1) × a + x(n) × (1-a);
where n is a positive integer, F(n-1) is the feature template of the tracking target obtained from the previous frame image, x(n) is the histogram feature of the tracking target extracted from the current frame image, and a is the update coefficient, taking a value between 0 and 1.
For example, suppose the feature template is F(n-1) = f[0…N-1] and the feature vector of the region of interest of the current frame image is x = x[0…N-1]. If the DSST tracking algorithm locates the target range as x[m…m+k], then x[m…m+k] is first resized by interpolation to the length of F(n-1), giving x(n) = x'[0…N-1], and the feature template is updated as F(n) = F(n-1) × a + x(n) × (1-a) = f[0…N-1] × a + x'[0…N-1] × (1-a), where N, m, and k are positive integers.
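A sketch of this update under the formula above is given here, with the tracked segment interpolated to the template length before blending; the function name and default value are assumptions:

```python
import numpy as np

def update_template(template: np.ndarray, tracked: np.ndarray,
                    a: float = 0.9) -> np.ndarray:
    """F(n) = F(n-1) * a + x(n) * (1 - a): blend the previous template with
    the newly tracked 1-D feature x[m..m+k], resized to the template length."""
    n_old, n_new = len(template), len(tracked)
    # Interpolate the tracked segment to the template length N.
    x_n = np.interp(np.linspace(0, n_new - 1, n_old),
                    np.arange(n_new), tracked)
    return template * a + x_n * (1 - a)
```

A larger a makes the template change more slowly, trading adaptability for stability.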
As can be seen from fig. 5, compared with the embodiment corresponding to fig. 2, the flow 500 of the method for tracking a target in the present embodiment highlights the steps of target tracking. Therefore, the scheme described in this embodiment tracks the tracking target in the current frame image in combination with the position information of the tracking target in the previous frame image, and improves the accuracy of the tracking result.
With further reference to fig. 7, as an implementation of the methods shown in the above-mentioned figures, the present application provides an embodiment of an apparatus for tracking a target, which corresponds to the method embodiment shown in fig. 2, and which is particularly applicable to various electronic devices.
As shown in fig. 7, the apparatus 700 for tracking a target of the present embodiment may include: a detection unit 701, a first extraction unit 702, a determination unit 703, a second extraction unit 704, and a tracking unit 705. The detection unit 701 is configured to perform target detection on the acquired current frame image and determine a tracking target; the first extraction unit 702 is configured to extract an integrable feature of the current frame image and calculate an integral map of the integrable feature; the determination unit 703 is configured to determine a region of interest of the tracking target in the current frame image; the second extraction unit 704 is configured to extract the transverse feature integral and the longitudinal feature integral of the region of interest from the integral map; and the tracking unit 705 is configured to track the tracking target based on the transverse feature integral and the longitudinal feature integral of the region of interest to obtain the position of the tracking target in the current frame image.
In the apparatus 700 for tracking a target of the present embodiment, for the detailed processing and technical effects of the detection unit 701, the first extraction unit 702, the determination unit 703, the second extraction unit 704, and the tracking unit 705, reference may be made to the related descriptions of steps 201-205 in the embodiment corresponding to fig. 2, which are not repeated here.
In some optional implementations of the present embodiment, the first extraction unit 702 includes: an extraction subunit (not shown in the figure) configured to extract edge features from the current frame image using a Canny operator.
In some optional implementations of this embodiment, the extraction subunit is further configured to: divide the current frame image into four triangular image blocks using its diagonals; and solve for a gradient threshold over the lower triangular image block among the four triangular image blocks, and binarize the current frame image based on the gradient threshold to determine the edge points in the current frame image.
In some optional implementations of this embodiment, the determining unit 703 is further configured to: determine the region of interest of the tracking target in the current frame image according to the region of the tracking target in the previous frame image, wherein the region of interest is centered on the center of the region of the tracking target in the previous frame image, and its size is a preset multiple of the size of the region of the tracking target in the previous frame image.
In some optional implementations of the present embodiment, the tracking unit 705 is further configured to: acquire the transverse feature integral and the longitudinal feature integral of the tracking target in the integral map corresponding to the previous frame image; and calculate the transverse and longitudinal positions of the tracking target using a tracking algorithm, according to the transverse and longitudinal feature integrals of the tracking target and those of the region of interest, and combine them to obtain the position of the tracking target in the current frame image.
In some optional implementations of this embodiment, the apparatus 700 for tracking a target further includes: an updating unit (not shown in the figure) configured to update the feature template of the tracking target according to the position of the tracking target in the current frame image.
Referring now to FIG. 8, shown is a block diagram of a computer system 800 suitable for use in implementing the electronic device of an embodiment of the present application. The electronic device shown in fig. 8 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 8, the computer system 800 includes a central processing unit (CPU) 801 that can perform various appropriate actions and processes in accordance with a program stored in a read-only memory (ROM) 802 or a program loaded from a storage section 808 into a random access memory (RAM) 803. The RAM 803 also stores various programs and data necessary for the operation of the system 800. The CPU 801, the ROM 802, and the RAM 803 are connected to each other via a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
The following components are connected to the I/O interface 805: an input section 806 including a keyboard, a mouse, and the like; an output section 807 including a cathode ray tube (CRT) or liquid crystal display (LCD), a speaker, and the like; a storage section 808 including a hard disk and the like; and a communication section 809 including a network interface card such as a LAN card, a modem, and the like. The communication section 809 performs communication processing via a network such as the Internet. A drive 810 is also connected to the I/O interface 805 as necessary. A removable medium 811, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 810 as necessary, so that a computer program read from it can be installed into the storage section 808 as needed.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 809 and/or installed from the removable medium 811. The computer program performs the above-described functions defined in the method of the present application when executed by the Central Processing Unit (CPU) 801.
It should be noted that the computer readable medium described herein can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, or C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or hardware. The described units may also be provided in a processor, which may be described as: a processor including a detection unit, a first extraction unit, a determination unit, a second extraction unit, and a tracking unit. The names of these units do not, in some cases, constitute a limitation on the units themselves; for example, the detection unit may also be described as a "unit for performing target detection on the acquired current frame image and determining a tracking target".
As another aspect, the present application also provides a computer-readable medium, which may be contained in the electronic device described in the above embodiments, or may exist separately without being assembled into the electronic device. The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: perform target detection on the acquired current frame image to determine a tracking target; extract an integrable feature of the current frame image and calculate an integral map of the integrable feature; determine a region of interest of the tracking target in the current frame image; extract a transverse feature integral and a longitudinal feature integral of the region of interest from the integral map; and track the tracking target based on the transverse feature integral and the longitudinal feature integral of the region of interest to obtain the position of the tracking target in the current frame image.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention herein disclosed is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (14)

1. A method for tracking a target, comprising:
carrying out target detection on the acquired current frame image to determine a tracking target;
extracting an integrable feature of the current frame image and calculating an integral map of the integrable feature;
determining a region of interest of the tracking target in the current frame image;
extracting a transverse feature integral and a longitudinal feature integral of the region of interest from the integral map;
and tracking the tracking target based on the transverse feature integral and the longitudinal feature integral of the region of interest to obtain the position of the tracking target in the current frame image.
2. The method of claim 1, wherein said extracting integrable features of the current frame image comprises:
extracting edge features from the current frame image using a Canny operator.
3. The method of claim 2, wherein said extracting edge features from the current frame image using a Canny operator comprises:
dividing the current frame image into four triangular image blocks by using diagonal lines;
solving for a gradient threshold over the lower triangular image block among the four triangular image blocks, and binarizing the current frame image based on the gradient threshold to determine edge points in the current frame image.
4. The method of claim 1, wherein the determining a region of interest of the tracking target in the current frame image comprises:
determining the region of interest of the tracking target in the current frame image according to the region of the tracking target in the previous frame image of the current frame image, wherein the region of interest is centered on the center of the region of the tracking target in the previous frame image and has a size that is a preset multiple of the size of the region of the tracking target in the previous frame image.
5. The method according to claim 1, wherein the tracking of the tracking target based on the transverse feature integral and the longitudinal feature integral of the region of interest to obtain the position of the tracking target in the current frame image comprises:
acquiring a transverse feature integral and a longitudinal feature integral of the tracking target in the integral map corresponding to the previous frame image;
and calculating the positions of the tracking target in the transverse direction and the longitudinal direction using a tracking algorithm, according to the transverse feature integral and the longitudinal feature integral of the tracking target and the transverse feature integral and the longitudinal feature integral of the region of interest, and combining them to obtain the position of the tracking target in the current frame image.
6. The method of claim 5, wherein the method further comprises:
and updating the feature template of the tracking target according to the position of the tracking target in the current frame image.
7. An apparatus for tracking a target, comprising:
the detection unit is configured to perform target detection on the acquired current frame image and determine a tracking target;
a first extraction unit configured to extract an integrable feature of the current frame image and calculate an integral map of the integrable feature;
a determination unit configured to determine a region of interest of the tracking target in the current frame image;
a second extraction unit configured to extract a transverse feature integral and a longitudinal feature integral of the region of interest from the integral map;
a tracking unit configured to track the tracking target based on the transverse feature integral and the longitudinal feature integral of the region of interest to obtain the position of the tracking target in the current frame image.
8. The apparatus of claim 7, wherein the first extraction unit comprises:
an extraction subunit configured to extract edge features from the current frame image using a Canny operator.
9. The apparatus of claim 8, wherein the extraction subunit is further configured to:
dividing the current frame image into four triangular image blocks by using diagonal lines;
solving for a gradient threshold over the lower triangular image block among the four triangular image blocks, and binarizing the current frame image based on the gradient threshold to determine edge points in the current frame image.
10. The apparatus of claim 7, wherein the determination unit is further configured to:
determining the region of interest of the tracking target in the current frame image according to the region of the tracking target in the previous frame image of the current frame image, wherein the region of interest is centered on the center of the region of the tracking target in the previous frame image and has a size that is a preset multiple of the size of the region of the tracking target in the previous frame image.
11. The apparatus of claim 7, wherein the tracking unit is further configured to:
acquiring a transverse feature integral and a longitudinal feature integral of the tracking target in the integral map corresponding to the previous frame image;
and calculating the positions of the tracking target in the transverse direction and the longitudinal direction using a tracking algorithm, according to the transverse feature integral and the longitudinal feature integral of the tracking target and the transverse feature integral and the longitudinal feature integral of the region of interest, and combining them to obtain the position of the tracking target in the current frame image.
12. The apparatus of claim 11, wherein the apparatus further comprises:
and an updating unit configured to update the feature template of the tracking target according to the position of the tracking target in the current frame image.
13. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-6.
14. A computer-readable medium, on which a computer program is stored, wherein the computer program, when being executed by a processor, carries out the method according to any one of claims 1-6.
CN202010142613.4A 2020-03-04 2020-03-04 Method and apparatus for tracking a target Active CN111402291B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010142613.4A CN111402291B (en) 2020-03-04 2020-03-04 Method and apparatus for tracking a target

Publications (2)

Publication Number Publication Date
CN111402291A true CN111402291A (en) 2020-07-10
CN111402291B CN111402291B (en) 2023-08-29

Family

ID=71432179

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010142613.4A Active CN111402291B (en) 2020-03-04 2020-03-04 Method and apparatus for tracking a target

Country Status (1)

Country Link
CN (1) CN111402291B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106570892A (en) * 2015-08-18 2017-04-19 航天图景(北京)科技有限公司 Moving-target active tracking method based on edge enhancement template matching
WO2017152794A1 (en) * 2016-03-10 2017-09-14 Zhejiang Shenghui Lighting Co., Ltd. Method and device for target tracking
US20180211104A1 (en) * 2016-03-10 2018-07-26 Zhejiang Shenghui Lighting Co., Ltd Method and device for target tracking
CN107507222A (en) * 2016-06-13 2017-12-22 浙江工业大学 A kind of anti-particle filter method for tracking target based on integration histogram blocked
US20190236771A1 (en) * 2016-10-13 2019-08-01 Mccain Foods Limited Method, medium, and system for detecting potato virus in a crop image
WO2018121286A1 (en) * 2016-12-30 2018-07-05 纳恩博(北京)科技有限公司 Target tracking method and device
CN107123088A (en) * 2017-04-21 2017-09-01 山东大学 A kind of method of automatic replacing photo background color
CN108198201A (en) * 2017-12-19 2018-06-22 深圳市深网视界科技有限公司 A kind of multi-object tracking method, terminal device and storage medium
CN109671103A (en) * 2018-12-12 2019-04-23 易视腾科技股份有限公司 Method for tracking target and device
CN109993052A (en) * 2018-12-26 2019-07-09 上海航天控制技术研究所 The method for tracking target and system of dimension self-adaption under a kind of complex scene
CN110517291A (en) * 2019-08-27 2019-11-29 南京邮电大学 A kind of road vehicle tracking based on multiple feature spaces fusion

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
CALAFATE, CT et al.: "An Integral Model for Target Tracking Based on the Use of a WSN" *
刘哲, 高广珠, 余理富: "A fast and effective method for locating facial features" *
程卫亮; 王向军; 万子敬; 郭志翼: "Research and implementation of a compressed-domain target tracking algorithm on a miniaturized DSP platform" *

Also Published As

Publication number Publication date
CN111402291B (en) 2023-08-29

Similar Documents

Publication Publication Date Title
CN109508681B (en) Method and device for generating human body key point detection model
CN108509915B (en) Method and device for generating face recognition model
CN109410218B (en) Method and apparatus for generating vehicle damage information
CN109035319B (en) Monocular image depth estimation method, monocular image depth estimation device, monocular image depth estimation apparatus, monocular image depth estimation program, and storage medium
CN108229419B (en) Method and apparatus for clustering images
CN107220652B (en) Method and device for processing pictures
CN109255767B (en) Image processing method and device
CN110675407B (en) Image instance segmentation method and device, electronic equipment and storage medium
CN108229418B (en) Human body key point detection method and apparatus, electronic device, storage medium, and program
CN109389072B (en) Data processing method and device
CN109300151B (en) Image processing method and device and electronic equipment
CN109344762B (en) Image processing method and device
CN109118456B (en) Image processing method and device
CN109711508B (en) Image processing method and device
CN109377508B (en) Image processing method and device
CN111882565B (en) Image binarization method, device, equipment and storage medium
CN111767750A (en) Image processing method and device
CN108597034B (en) Method and apparatus for generating information
CN108492284B (en) Method and apparatus for determining perspective shape of image
CN112435223A (en) Target detection method, device and storage medium
CN108734718B (en) Processing method, device, storage medium and equipment for image segmentation
CN113902932A (en) Feature extraction method, visual positioning method and device, medium and electronic equipment
CN111563916B (en) Long-term unmanned aerial vehicle tracking and positioning method, system and device based on stereoscopic vision
CN108664948B (en) Method and apparatus for generating information
CN111402291B (en) Method and apparatus for tracking a target

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20211009

Address after: 100176 101, floor 1, building 1, yard 7, Ruihe West 2nd Road, Beijing Economic and Technological Development Zone, Daxing District, Beijing

Applicant after: Apollo Zhilian (Beijing) Technology Co.,Ltd.

Address before: 2 / F, baidu building, No. 10, Shangdi 10th Street, Haidian District, Beijing 100085

Applicant before: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY Co.,Ltd.

GR01 Patent grant