CN108154522B - Target tracking system - Google Patents


Info

Publication number: CN108154522B
Application number: CN201611105151.9A
Authority: CN (China)
Prior art keywords: training, target, module, feature, candidate
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Original language: Chinese (zh)
Other versions: CN108154522A (application publication)
Inventor: 姚颂 (Yao Song)
Original and current assignee: Xilinx Technology Beijing Ltd
Application filed by Xilinx Technology Beijing Ltd
Priority to CN201611105151.9A
Published as CN108154522A; granted as CN108154522B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10016: Video; image sequence
    • G06T2207/20: Special algorithmic details
    • G06T2207/20084: Artificial neural networks [ANN]

Abstract

The invention provides a target tracking system comprising: a target feature training module that trains image information of a target region in the current video image frame in the Fourier domain to obtain a current training feature; a candidate feature training module that trains image information of a plurality of candidate target regions in the next video image frame in the Fourier domain to obtain a plurality of candidate training features; and a target region calculation module that selects a predicted next target region from the plurality of candidate target regions based on the current training feature and the plurality of candidate training features. The target feature training module and the candidate feature training module share the same Fourier transform module, implemented in logic hardware, to perform the Fourier-domain training. Feature training in the Fourier domain keeps computational complexity extremely low while incorporating a large number of samples to improve tracking accuracy, and the multiplexed FFT hardware unit further ensures extremely low power consumption and hardware resource usage.

Description

Target tracking system
Technical Field
The present application relates to image processing, and more particularly, to a target tracking system implemented by image processing.
Background
Target detection and tracking has been an important research direction in academia and industry. For example, computer vision-based pedestrian detection is one of the most active research topics in the current computer vision and smart vehicle fields due to its important application value in vehicle-assisted driving systems. Moreover, object detection and tracking has tremendous utility and potential implications in areas such as security, transportation, and gaming.
In recent years, a major breakthrough in object tracking research has been the widespread adoption of discriminative learning methods. The object tracking task can be treated as an online learning problem: given an initial image region containing the object, a classifier is trained to distinguish the object's appearance from its environment. For discriminative learning, learning from negative samples (the background environment) is as important as learning from positive samples (the tracked target).
For accurate tracking, it is desirable to include more negative samples so that more environmental characteristics are learned, but real-time tracking requires keeping the computational load low. In the traditional implementation, the picture information is uploaded to a server, which performs the computation and returns the tracking result for local display. This approach has two disadvantages: first, a great deal of time is spent on data transmission, resulting in poor tracking performance and making real-time tracking difficult; second, because the tracking algorithm is computationally heavy, a small processor struggles to complete the tracking computation, so target tracking is hard to achieve well on small embedded devices. How to bring target detection and tracking to mobile platforms such as unmanned aerial vehicles, automobiles, robots, and mobile phones therefore remains a research hotspot in the industry.
Disclosure of Invention
In view of at least one of the above problems, the present invention provides a target tracking scheme that implements a Fourier-domain tracking algorithm in logic hardware. The algorithm tracks quickly while incorporating a large number of sample features, and implementing the Fourier transform in highly parallel logic hardware achieves very high computational efficiency with very little storage and power consumption, laying the foundation for miniaturized applications.
According to an aspect of the present invention, there is provided a target tracking system comprising: a target feature training module that trains image information of a target region in the current video image frame in the Fourier domain to obtain a current training feature; a candidate feature training module that trains image information of a plurality of candidate target regions in the next video image frame in the Fourier domain to obtain a plurality of candidate training features; and a target region calculation module that selects a predicted next target region from the plurality of candidate target regions according to the current training feature and the plurality of candidate training features. The target feature training module and the candidate feature training module share the same Fourier transform module, implemented in logic hardware, to perform the Fourier-domain training.
Preferably, the Fourier-domain training comprises the calculation of a two-dimensional Fast Fourier Transform (FFT), and the Fourier transform module comprises two, or more preferably only one, FFT units to perform that calculation. The FFT unit is also capable of performing the IFFT calculations required by the Fourier-domain training.
The processing objects of the target tracking system are preferably matrices: the training modules obtain feature training matrices from the input image matrices, and the optimal target region is determined from the candidate regions by operations on those matrices. Region-calculation result matrices may be computed from the candidate training feature matrices and the current training feature matrix, and the most suitable target region is selected by finding the largest element in the result matrices. The values of the elements neighbouring the largest element can also be recorded to further refine the selected region. The determination of the largest element and this recording may preferably be realized with a shift register whose capacity holds only two rows of the result matrix, further reducing storage requirements while preserving computational efficiency. In addition, the IFFT calculation may transpose the matrix (transposed during the second FFT pass) back to its original orientation while performing the inverse operation, further streamlining the computation and improving efficiency.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing in greater detail exemplary embodiments thereof with reference to the attached drawings, in which like reference numerals generally represent like parts throughout.
Fig. 1 shows a general flow of target tracking.
Fig. 2 shows a schematic view of an object tracking system according to the invention.
Fig. 3 illustrates an optimization scheme of the object tracking system shown in fig. 2.
Fig. 4 shows a schematic diagram of the operation of a complete object detection and tracking system.
Fig. 5 is a hardware functional diagram of a target tracking device according to one embodiment of the invention.
Detailed Description
Preferred embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While the preferred embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Replacing manpower with machines has long been a direction of technological effort, and target tracking is a stage machines must pass through on the way to intelligence. The key factor restricting the application of vision technology to target tracking is insufficient real-time performance caused by massive data processing. To solve this problem, research institutions at home and abroad have mainly started from two directions: adopting higher-performance processors, and proposing new visual processing algorithms.
The inventor selected a tracking algorithm particularly suitable for highly parallel hardware implementation and used logic hardware to replace at least part of the target tracking functions otherwise implemented in software on a processor, realizing a real-time tracking system that satisfies practical applications while keeping power consumption low enough to enable miniaturization.
Fig. 1 shows the general flow of target tracking. The current target region is first input to the target tracking unit. A partial image containing the tracked target, cut out from the current video image frame, may be input directly. Preferably, to reduce the data dimension the subsequent tracking algorithm must process, the local image may first pass through an image feature extraction unit that extracts target-related image features. Feature extraction yields features reflecting the essential attributes of the patterns, which helps the subsequent tracking algorithm follow the specific target. The target tracking system then predicts the target position in the next frame based on the current input; finally the processor renders the tracking result (e.g., a box on the image frame) on the display and supplies the picture input for the next frame.
Target tracking systems typically employ a specific tracking algorithm to track a given target. Since object tracking can be treated as an online learning problem, a classifier is trained, given the initial image region containing the object, to distinguish the object's appearance from its environment. For discriminative learning, negative samples (the background environment) matter as much as positive samples (the tracked target). Because the computational load of common tracking algorithms must stay at an acceptable level, only a small number of environment regions are randomly selected per frame for negative-sample training. This scarcity of negative samples is often a major factor limiting tracking performance.
To resolve this dilemma, Fourier-domain training has been proposed. With a suitable model, some learning algorithms actually become easier in the Fourier domain as large numbers of samples are added. By transforming a complex algebraic solution process into simple operations in the Fourier domain, training that includes a large number of samples can be solved cheaply, algorithmically ensuring tracking accuracy while increasing tracking speed.
Fig. 2 shows a schematic view of an object tracking system according to the invention. As shown, the target tracking system 200 includes a target feature training module 210, a candidate feature training module 220, and a target region calculation module 230; the target feature training module 210 and the candidate feature training module 220 share a common Fourier transform module, preferably a Fast Fourier Transform (FFT) module, implemented in logic hardware.
The target feature training module 210 may train image information of the target region in the current video image frame in the Fourier domain to obtain the current training feature. The training may combine samples (e.g., a plurality of positive samples for the target and negative samples for the environment), and the current training feature may also incorporate historical training features, further enriching the sample training. The candidate feature training module 220 may train image information of a plurality of candidate target regions in the next video image frame in the Fourier domain to obtain a plurality of candidate training features. The target region calculation module 230 may then select a predicted next target region from the candidate target regions based on the current training feature and the candidate training features. The shared FFT module performs the Fourier-domain training for the target feature training module 210 and the candidate feature training module 220 in different time periods.
Depending on the implementation of the particular tracking algorithm, the Fourier-domain training may include the computation of a two-dimensional FFT. The Fourier transform module may thus comprise two Fast Fourier Transform (FFT) units to perform the two-dimensional FFT. Preferably, the Fourier transform module may comprise only one FFT unit, e.g., a known one-dimensional FFT IP core, and implement the two-dimensional FFT by specific configuration and multiplexing of the one-dimensional discrete Fourier transform (DFT).
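The patent gives no code; as an illustrative sketch of the multiplexing idea (NumPy's 1-D FFT stands in for the hardware FFT IP core), a two-dimensional FFT can be built by applying a single row-wise 1-D FFT pass twice with a transpose in between. The second pass naturally leaves the spectrum transposed, which a later inverse stage can undo:

```python
import numpy as np

def fft2_via_1d(x):
    """2-D FFT built from one row-wise 1-D FFT unit applied twice.

    np.fft.fft(..., axis=1) stands in for the multiplexed hardware 1-D
    FFT core. The result is the transpose of np.fft.fft2(x), which is
    how a row-oriented hardware pipeline would naturally leave it.
    """
    rows = np.fft.fft(x, axis=1)        # first pass: FFT of every row
    return np.fft.fft(rows.T, axis=1)   # second pass: columns, via transpose

x = np.random.rand(8, 8)
assert np.allclose(fft2_via_1d(x), np.fft.fft2(x).T)
```

The single 1-D unit is reused for both passes, which is the storage and area saving the text describes.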
The Fourier-domain training may also include the computation of an Inverse Fast Fourier Transform (IFFT) to convert the results of the Fourier-domain operations back to the algebraic domain. The FFT unit(s) described above, whether one or several, may also be used to implement the IFFT computation.
The inputs to feature training are preferably matrices, so that computation in the Fourier domain can be exploited to simplify matrix operations (e.g., matrix inversion) that would otherwise be expensive. In one embodiment, the target feature training module 210 and the candidate feature training module 220 train the feature matrices of the target region and the candidate target regions, respectively, in the Fourier domain to obtain the current training feature matrix and a plurality of candidate training feature matrices. More preferably, a matrix transposition function may be folded into the IFFT: the IFFT calculation may transpose the matrix (transposed during the second FFT pass) back to its original orientation while performing the inverse operation, further streamlining the computation and improving efficiency.
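The fused-transpose idea can be sketched the same way (again with NumPy standing in for the hardware unit): if the forward pipeline leaves the spectrum transposed, the inverse pipeline's read-out restores the original orientation for free:

```python
import numpy as np

def ifft2_with_transpose(X_t):
    """Inverse 2-D FFT whose read-out undoes the transpose left by a
    row-oriented forward pipeline, so the output comes back in the
    original orientation."""
    cols = np.fft.ifft(X_t, axis=1)       # invert the second forward pass
    return np.fft.ifft(cols.T, axis=1)    # invert the first pass; .T restores orientation

x = np.random.rand(8, 8)
# Transposed spectrum, as produced by two row-FFT passes with a transpose:
X_t = np.fft.fft(np.fft.fft(x, axis=1).T, axis=1)
assert np.allclose(ifft2_with_transpose(X_t), x)
```

No explicit transpose step is needed outside the two IFFT passes, matching the optimization described in the text.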
The target region calculation module 230 selects the next target region based on a plurality of region-calculation result matrices obtained from the candidate training feature matrices and the current training feature matrix. In the next video image frame, n (n being an integer of 2 or more) image regions may be selected as candidate regions around the position of the current target region according to a predetermined rule. The image feature matrix of each candidate region is trained to obtain a corresponding candidate training feature matrix. The current training feature matrix is then combined with each of the n candidate training feature matrices, yielding n region-calculation result matrices, one per candidate region. Finally, the next target region is chosen among the n candidate regions according to these n result matrices.
In one embodiment, the next target region is determined by selecting the largest element across the n region-calculation result matrices. The target region calculation module 230 may thus further include a maximum element selection unit that selects the largest element among all elements of the result matrices. For example, the largest of all 16 × 16 × 3 elements across the 16 × 16 region-calculation result matrices of n = 3 candidate regions is selected, and the region containing that element is taken as the predicted next target region.
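As a small illustrative sketch (the data here is made up, and 4 × 4 matrices are used instead of the text's 16 × 16 for brevity), the selection amounts to a global argmax over the stacked result matrices:

```python
import numpy as np

# n = 3 region-calculation result matrices; candidate region 2 holds the peak
results = np.zeros((3, 4, 4))
results[2, 1, 3] = 0.9

n_idx, row, col = np.unravel_index(np.argmax(results), results.shape)
# n_idx names the winning candidate region; (row, col) locates the peak in it
assert (n_idx, row, col) == (2, 1, 3)
```

The region whose matrix contains the global maximum becomes the predicted next target region.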
The maximum element selection unit may further record the values of the largest element and its neighbouring elements, together with the position of the largest element. The target region calculation module then further comprises a target region optimization unit that adjusts the next target region based on the values and position recorded by the maximum element selection unit, obtaining an optimized next target region. The target region optimization unit may find a position offset by quadratic fitting over the neighbours of the largest element (e.g., the four elements above, below, left, and right, or the 8 surrounding elements) and adjust the position and size of the selected next target region accordingly.
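A minimal sketch of such a quadratic fit (one standard formulation; the patent does not give the exact formula): fit a parabola through the three samples at positions -1, 0, +1 along each axis and take its vertex as the sub-element offset:

```python
def quadratic_peak_offset(left, center, right):
    """Sub-element peak offset in [-0.5, 0.5] from a 1-D quadratic fit
    through three samples at positions -1, 0, +1 (vertex of the parabola)."""
    denom = left - 2.0 * center + right
    return 0.0 if denom == 0 else (left - right) / (2.0 * denom)

# Sampling the parabola y = -(x - 0.3)^2 at -1, 0, 1 recovers the true peak:
f = lambda x: -(x - 0.3) ** 2
offset = quadratic_peak_offset(f(-1), f(0), f(1))
assert abs(offset - 0.3) < 1e-9
```

Applied once with the left/right neighbours and once with the up/down neighbours of the maximum, this refines the predicted position beyond element resolution.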
As shown in fig. 3, the maximum element selection unit is preferably implemented with a shift register. The shift register need only have capacity for two rows of elements of the region-calculation result matrix; by streaming all elements of the result matrices serially through the shift register for comparison, the maximum element can be determined and its value and position acquired.
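The two-row buffering can be illustrated in software (a behavioural sketch, not the hardware design): only the previous and current rows are ever held, yet the maximum, its position, and all four neighbours can be recovered, because the "down" neighbour arrives at the start of the following row:

```python
def streaming_peak(rows):
    """Stream a result matrix row by row keeping only two rows buffered,
    as the shift register would, returning the max element, its position,
    and its up/left/right/down neighbours (None at the borders)."""
    best = None          # (value, row, col)
    nb = {}              # neighbour values of the current best
    prev = None
    pending_down = None  # column whose 'down' neighbour is still unseen
    for r, row in enumerate(rows):
        if pending_down is not None:
            nb["down"] = row[pending_down]
            pending_down = None
        for c, v in enumerate(row):
            if best is None or v > best[0]:
                best = (v, r, c)
                nb = {"up": prev[c] if prev is not None else None,
                      "left": row[c - 1] if c > 0 else None,
                      "right": row[c + 1] if c + 1 < len(row) else None,
                      "down": None}
                pending_down = c
        prev = row       # only two rows are ever held
    return best, nb

m = [[1, 3, 2],
     [4, 9, 5],
     [6, 7, 8]]
best, nb = streaming_peak(m)
assert best == (9, 1, 1)
assert nb == {"up": 3, "left": 4, "right": 5, "down": 7}
```

This is exactly the information the target region optimization unit needs for the quadratic fit, obtained without storing the whole result matrix.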
Referring back to fig. 1, the current target region is the basis for determining the next target region, and that next target region in turn serves as the basis for the one after it. The training features of any current target region may therefore also preferably build on the historical training features from previous training. In one embodiment, current training feature = a × historical training feature + b × current image feature training result (0 < a < 1, 0 < b < 1, a + b = 1), and the coefficients a and b may be adjusted flexibly according to the specific implementation to weight the historical and current contributions.
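The update rule above is an exponential moving average; a minimal sketch (the coefficient value is chosen arbitrarily for illustration):

```python
# current feature = a * historical feature + b * new training result, a + b = 1
def update_feature(historical, new_result, a=0.8):
    b = 1.0 - a
    return [a * h + b * n for h, n in zip(historical, new_result)]

hist = [1.0, 2.0]
new = [3.0, 4.0]
updated = update_feature(hist, new)
# expected: [0.8*1 + 0.2*3, 0.8*2 + 0.2*4] = [1.4, 2.4]
assert all(abs(u - e) < 1e-9 for u, e in zip(updated, [1.4, 2.4]))
```

A larger a makes the tracker more stable against occlusion and noise; a larger b adapts faster to appearance change.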
Preferably, the object tracking system of the present invention is particularly adapted to tracking with the kernelized correlation filter (KCF) algorithm. The basic idea of KCF is to cyclically shift the tracked target region, constructing a large number of samples with which to train a classifier. The similarity between each candidate region and the tracked target is computed through a kernel function, the candidate region with the greatest similarity is selected as the new tracked target, and the discrete Fourier transform is used to reduce the computation in both the training and detection phases of the classifier. The training samples are ingeniously constructed through cyclic shifts, so that the data matrix becomes a circulant matrix. The solution is then transformed into the discrete Fourier domain using the properties of circulant matrices, avoiding matrix inversion and cutting algorithmic complexity by several orders of magnitude. Although the present invention is particularly suitable for KCF-based target tracking, it is understood that the tracking system may use other correlation methods capable of training image features in the Fourier domain and obtain similar gains in tracking accuracy and real-time performance.
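A minimal single-channel, linear-kernel sketch of this idea (in the spirit of KCF but much simplified relative to the patent's multi-channel kernelized implementation): training over all cyclic shifts reduces to element-wise division in the Fourier domain, and detection to element-wise multiplication, with no matrix inversion anywhere:

```python
import numpy as np

def train(x, y, lam=1e-4):
    """Fourier-domain ridge solution over all cyclic shifts of x
    (linear kernel); lam is the ridge regularizer."""
    X = np.fft.fft(x)
    return np.conj(X) * np.fft.fft(y) / (X * np.conj(X) + lam)

def detect(h_hat, z):
    """Response map for candidate signal z; its peak gives the shift."""
    return np.real(np.fft.ifft(np.fft.fft(z) * h_hat))

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
y = np.zeros(64)
y[0] = 1.0                              # desired response peaks at zero shift
h_hat = train(x, y)
resp = detect(h_hat, np.roll(x, 5))     # target moved by 5 samples
assert int(np.argmax(resp)) == 5        # the response peak recovers the shift
```

All training samples (every cyclic shift of x) are included, yet cost stays at O(N log N), which is the algorithmic property the hardware design exploits.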
In one embodiment, the target tracking system of the present invention can track multiple targets simultaneously. For example, after the prediction for target A in the current frame is completed, the prediction result for target A is stored, the input related to target B is read, and tracking prediction is performed for target B, and so on. The target feature training module, the candidate feature training module, and the target region calculation module thus repeat the feature-training and region-selection operations for the different targets, tracking multiple targets in the same video image frame.
Preferred modular embodiments of the object tracking system of the present invention are described above in connection with figs. 2 and 3. Although the figures illustrate the target feature training module, the candidate feature training module, and the target region calculation module, each of these may be implemented only in part by logic hardware. It should be understood that, apart from the Fourier transform module implemented in logic hardware, the target feature training module and the candidate feature training module may be implemented in software and/or hardware depending on the specific implementation. Similarly, the target region calculation module may be implemented in whole or in part in software or in logic hardware.
Preferably, the logic hardware used by the present invention may be an FPGA, an ASIC, another hardware platform, or any combination thereof. The target tracking system may be implemented on a system on a chip (SoC) that includes a general-purpose processor and logic hardware. The object tracking system of the present invention may also be part of a complete object detection and tracking system. Fig. 4 shows a schematic diagram of the operation of such a complete system; the object tracking system 200 (and 300) of the present invention may be implemented as the object tracking module in that figure.
The modules described above may be combined in different ways in specific applications. In one embodiment, the target feature training module may be regarded as a sample training module that performs sample training on the image features of the target region of the current video frame to obtain feature mapping parameters. Meanwhile, part of the target region calculation module can be merged with the candidate feature training module into a target detection module that directly produces the region-calculation result matrices. The remaining part of the target region calculation module, i.e., the maximum element selection unit (and preferably also the target region optimization unit), may be regarded as a separate detection result generation module or incorporated into the target detection module.
FIG. 4 illustrates a system that achieves high-performance real-time target detection and tracking with little hardware resource consumption. In particular, the system may be a hardware-software co-designed system for real-time target detection and tracking on an FPGA-based system on a chip, where the inputs and outputs on the FPGA are preferably both fixed-point values to further simplify computation and improve efficiency. The system comprises a target detection module that globally locates the position and size of a target, and re-locates the target when the system starts, on user input, when a timer expires, or when the target is lost. In this example, the target detection module is implemented with a convolutional neural network (CNN).
The system also includes a target tracking module: once the target is detected (i.e., a local image containing the target is obtained), it is tracked in real time. In this example, image features are extracted with the histogram of oriented gradients (HOG) algorithm, and the target position and size are predicted in real time from those features with the kernelized correlation filter (KCF) algorithm.
A control module controls the operation of the whole system, e.g., video image input and output and module invocation. The image input module described above, for example, may be part of the control module.
In operation, the CNN is called first to locate the target in the input full image, obtaining the target's position and size in the video image (i.e., local image detection). The HOG algorithm then computes features on the corresponding partial image according to the position and size found in the previous step (per the requirements of the KCF algorithm, this module runs four times on the same image). The HOG module sends the computed feature map to the KCF calculation module. Within the KCF calculation module, the Fourier transform, inverse Fourier transform, and point-wise multiplication blocks, which suit parallel processing, are preferably realized on the FPGA, while the overall KCF computation flow can be controlled by a microprocessor. The KCF module sends the computed target position and size to the controller, which marks them on the output video image. After each KCF computation, the controller compares the confidence probability attached to the result with a preset threshold: if the confidence is below the threshold, tracking is considered lost and the system re-invokes the CNN to re-locate the target; if it is above the threshold, operation is considered normal and the HOG feature calculation proceeds.
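The detect-then-track control flow can be sketched as follows. This is a hypothetical outline, not code from the patent: `detect`, `hog`, `kcf`, and `CONF_THRESHOLD` are placeholders for the CNN detector, the HOG feature extractor, the KCF tracker, and the preset confidence threshold.

```python
CONF_THRESHOLD = 0.4  # assumed value; the patent only says "preset threshold"

def run(frames, detect, hog, kcf):
    box = None
    results = []
    for frame in frames:
        if box is None:                 # start-up, or tracking was lost
            box = detect(frame)         # CNN: global position + size
        feats = hog(frame, box)         # HOG features of the local image
        box, conf = kcf(feats, box)     # KCF: predicted position + confidence
        if conf < CONF_THRESHOLD:       # tracking lost: re-detect next frame
            box = None
        results.append(box)
    return results

# Toy stand-ins: the "detector" returns a fixed box, the "tracker" keeps it
frames = [1, 2, 3]
out = run(frames,
          detect=lambda f: (0, 0, 8, 8),
          hog=lambda f, b: f,
          kcf=lambda feats, b: (b, 0.9))
assert out == [(0, 0, 8, 8)] * 3
```

The expensive global CNN detection runs only at start-up or on loss, while the cheap Fourier-domain KCF step runs every frame.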
The object tracking system of the present invention has been described above in terms of functional modules in connection with figs. 2-4. Its hardware implementation is now described with reference to fig. 5, which shows a hardware functional diagram of the object tracking device according to the present invention. The target tracking device comprises a receiving module, preferably implemented as a FIFO memory, that receives and stores externally input picture information; a sample training module that performs feature training on the input partial picture information; a DFT calculation module that computes the discrete Fourier transform of the picture information; a target detection module that performs position detection on the input picture information; and an output module, also preferably implemented as a FIFO memory, that sends the predicted tracking-target information to the general-purpose processor.
The steps involved in the target tracking process are as follows:
step 1: and the general processor determines a screenshot area according to the detection result and sends the picture characteristics of the screenshot area to the target tracking device.
Step 2: The receiving module receives the image information of the detection region input from the general-purpose processor, preferably a multi-channel feature matrix, and passes the multi-channel image feature matrix to the sample training module, which processes it into a feature mapping matrix.
a. The sample training module receives the multi-channel picture feature matrix. It first takes the feature matrix data from the input FIFO memory and then performs a two-dimensional fast Fourier transform on the feature matrix using the multiplexed DFT calculation module.
b. The Fourier transform result is stored in block memory (BLOCK RAM, also called BRAM); a complex multiplier multiplies the result by its conjugate, and the product is stored in BRAM.
c. The complex products of the channels are accumulated and the running sum stored in BRAM until the features of all channels have been processed. An inverse Fourier transform is then applied to the accumulated result.
d. The inverse-transform result is rearranged as a matrix, the imaginary part of the complex matrix is discarded, and a real matrix is output. This matrix is combined with the historically obtained feature mapping matrix to produce the updated feature mapping matrix.
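Steps a-d above can be sketched numerically (NumPy standing in for the FFT unit, complex multiplier, and BRAM accumulator; the multiply-by-conjugate followed by the inverse transform is a circular autocorrelation per channel):

```python
import numpy as np

def train_step(channels):
    """Steps a-d for one training pass: per-channel 2-D FFT, multiply by
    the conjugate, accumulate over channels, inverse transform, keep the
    real part (the imaginary part is numerical residue)."""
    acc = np.zeros_like(channels[0], dtype=complex)
    for ch in channels:                  # a. one channel at a time
        F = np.fft.fft2(ch)              # (multiplexed FFT unit in hardware)
        acc += F * np.conj(F)            # b-c. conjugate product, accumulated
    return np.real(np.fft.ifft2(acc))    # d. IFFT, imaginary part dropped

# Sanity check: by Parseval's theorem the (0, 0) element of a single
# channel's result equals the sum of squares of that channel.
x = np.random.rand(4, 4)
out = train_step([x])
assert np.allclose(out[0, 0], np.sum(x ** 2))
```

In hardware only the running sum lives in BRAM, so the per-channel FFT results never need to be stored simultaneously.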
And step 3: the receiving module receives the image information of the prediction region input from the general processor, preferably a multi-channel feature matrix, transmits the image information to the target detection module, and obtains the prediction position information after the image information is processed by the target detection module.
a. The target detection module receives three or more multi-channel image matrices of the region to be detected. The input data is first Fourier-transformed using the multiplexed DFT calculation module.
b. The Fourier transforms of two consecutive input image matrices are complex-multiplied, and the product is stored in BRAM.
c. The complex products of the channels are accumulated and the running sum stored in BRAM until all channel image matrices have been processed. An inverse Fourier transform is then applied to the accumulated result.
d. The ridge regression coefficient matrix is added to the inverse-transform result, and the sum is multiplied by the feature mapping matrix obtained by the sample training module, yielding the prediction result for the region to be predicted.
Step 4: The position fitting module receives the output of the target detection module; the final predicted position information is obtained after the quadratic fitting implemented with the shift register.
And 5: and the output module sends the final prediction result to the general processor through the FIFO, and the general processor displays the final tracking result according to the prediction information.
To improve efficiency, a single set of DFT resources can be used within the sample training module, operating directly on the DFT result and its transpose, which eliminates the duplicated DFT passes of the original algorithm. In addition, because the entire KCF tracking algorithm runs serially, the sample training module and the target detection module can repeatedly share one DFT computation unit.
In addition, the implementation framework of the invention allows the configured precision parameters to be modified for different accuracy requirements, providing discrete Fourier transform precisions of 16, 24, or 32 bits.
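The effect of the configurable precision can be illustrated with a toy software model that quantizes the DFT twiddle factors to a given word length (the split between integer and fraction bits here is an assumption for illustration, not a detail of the implementation):

```python
import numpy as np

def dft_with_quantized_twiddles(x, total_bits):
    """Naive DFT whose twiddle factors are rounded to fixed point."""
    n = len(x)
    frac_bits = total_bits - 2          # assumed 2 sign/integer bits
    scale = float(1 << frac_bits)
    k = np.arange(n)
    tw = np.exp(-2j * np.pi * np.outer(k, k) / n)
    # quantize real and imaginary parts of each twiddle factor
    tw_q = (np.round(tw.real * scale) + 1j * np.round(tw.imag * scale)) / scale
    return tw_q @ x

x = np.random.default_rng(0).standard_normal(64)
errs = {b: np.max(np.abs(dft_with_quantized_twiddles(x, b) - np.fft.fft(x)))
        for b in (16, 24, 32)}
```

Wider word lengths shrink the quantization error at the cost of wider multipliers and more on-chip memory, which is the trade-off the configurable parameter exposes.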
The object tracking system and its implementation method according to the present invention have been described in detail above with reference to the accompanying drawings. The system can be regarded as a hardware-based target tracking accelerator and can be implemented on a field-programmable gate array (FPGA), on an application-specific integrated circuit (ASIC), or as a hard core in a chip alongside an ARM core, CPU, or GPU.
The target tracking system and implementation method of the present invention overcome the problem of insufficient CPU computing performance, effectively increase the computation speed, and improve the real-time performance of target tracking. By exploiting the serial execution of the KCF tracking algorithm as a whole, the reusability of the discrete Fourier transform computation module is fully utilized, greatly reducing the amount of on-chip computation resources used. Different parameters can also be configured to achieve different tracking performance for different accuracy requirements. The target tracking system and tracking method of the invention therefore achieve an optimized tracking implementation with very few hardware resources; the computational load of the algorithm, the required hardware resources, and the power consumption are all greatly reduced, so that excellent real-time tracking performance can be achieved on small devices.
It should be appreciated that the preferred features of the embodiments described above with reference to fig. 2-5 may be combined or split to provide a new embodiment. Embodiments combining these features are intended to be within the scope of the present invention as defined by the appended claims.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems and methods according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present invention, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (14)

1. An object tracking system, comprising:
the target feature training module is used for training image information of a target area in a current video image frame in a Fourier domain to obtain current training features, wherein the image information of the target area is a multi-channel feature matrix detected by the target detection module;
the candidate feature training module is used for circularly shifting the tracking target area to construct a plurality of samples so as to train image information of a plurality of candidate target areas in the next video image frame in a Fourier domain to obtain a plurality of candidate training features;
a target region calculation module that selects a predicted next target region from a plurality of candidate target regions according to the current training feature and the plurality of candidate training features;
wherein the target feature training module and the candidate feature training module use the same Fourier transform module implemented by logic hardware for Fourier domain training,
wherein the selection of the next target region is determined by selecting the largest element among the elements of the candidate target region calculation result matrices, and the target region calculation module comprises a maximum element selection unit for selecting the largest of all elements of the plurality of region calculation result matrices,
wherein the maximum element selection unit is implemented with a shift register, and all elements of the plurality of region calculation result matrices are input serially into the shift register for comparison, whereby the maximum element is determined and its value and position are obtained.
2. The system of claim 1, wherein the training of the fourier domain comprises computation of a two-dimensional Fast Fourier Transform (FFT), and
the fourier transform module includes two or only one Fast Fourier Transform (FFT) module unit to perform the computation of the two-dimensional FFT.
3. The system of claim 2, wherein the training of the fourier domain further comprises a computation of an Inverse Fast Fourier Transform (IFFT), and the FFT module unit further performs the computation of the IFFT.
4. The system of claim 3, wherein the target feature training module and the candidate feature training module train feature matrices of the target region and of the candidate target regions in the Fourier domain, obtaining a current training feature matrix and a plurality of candidate training feature matrices, respectively.
5. The system of claim 4, wherein the computation of the two-dimensional FFT transposes the matrix it operates on, and the computation of the IFFT transposes the transposed matrix back to the orientation of the feature matrix.
6. The system of claim 5, wherein the target region calculation module selects the next target region based on a plurality of region calculation result matrices derived from a plurality of candidate training feature matrices and the current training feature matrix, respectively.
7. The system of claim 6, wherein,
the target region calculation module selects, as the predicted next target region, the candidate target region corresponding to the region calculation result matrix containing the largest element.
8. The system of claim 7, wherein the maximum element selection unit further records the values of the maximum element and its neighboring elements and the position of the maximum element, and
the target area calculation module further includes a target area optimization unit that adjusts the next target area based on the value and position recorded by the maximum element selection unit to obtain the optimized next target area.
9. The system of claim 8, wherein the shift register has a capacity to accommodate two rows of element information of the region calculation result matrix.
10. The system of claim 1, wherein the current training features are further based on prior historical training features.
11. The system of claim 1, wherein the system employs a kernelized correlation filter (KCF) algorithm for target tracking.
12. The system of claim 1, wherein the logic hardware may be at least one of:
an FPGA;
an ASIC; and
other hardware platforms.
13. The system of claim 1, wherein the system is implemented on a system on a chip (SoC) that includes a general purpose processor and logic hardware.
14. The system of claim 1, wherein the target feature training module, the candidate feature training module, and the target region calculation module repeat the operations of feature training and region selection for different targets to track multiple targets in the same video image frame.
CN201611105151.9A 2016-12-05 2016-12-05 Target tracking system Active CN108154522B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611105151.9A CN108154522B (en) 2016-12-05 2016-12-05 Target tracking system

Publications (2)

Publication Number Publication Date
CN108154522A (en) 2018-06-12
CN108154522B true CN108154522B (en) 2022-07-12

Family

ID=62471055

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611105151.9A Active CN108154522B (en) 2016-12-05 2016-12-05 Target tracking system

Country Status (1)

Country Link
CN (1) CN108154522B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110705334A (en) * 2018-07-09 2020-01-17 翔升(上海)电子技术有限公司 Target tracking method, device, equipment and medium
CN110517285B (en) * 2019-08-05 2021-09-10 西安电子科技大学 Large-scene minimum target tracking based on motion estimation ME-CNN network
CN112183493A (en) * 2020-11-05 2021-01-05 北京澎思科技有限公司 Target tracking method, device and computer readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103886325A (en) * 2014-02-18 2014-06-25 浙江大学 Cyclic matrix video tracking method with partition
CN104392469A (en) * 2014-12-15 2015-03-04 辽宁工程技术大学 Target tracking method based on soft characteristic theory
CN105787964A (en) * 2016-02-29 2016-07-20 深圳电科技有限公司 Target tracking method and device
CN105894538A (en) * 2016-04-01 2016-08-24 海信集团有限公司 Target tracking method and target tracking device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8520956B2 (en) * 2009-06-09 2013-08-27 Colorado State University Research Foundation Optimized correlation filters for signal processing

Also Published As

Publication number Publication date
CN108154522A (en) 2018-06-12

Similar Documents

Publication Publication Date Title
CN111192292B (en) Target tracking method and related equipment based on attention mechanism and twin network
CN111950453B (en) Random shape text recognition method based on selective attention mechanism
CN114529825B (en) Target detection model, method and application for fire fighting access occupied target detection
CN112132844A (en) Recursive non-local self-attention image segmentation method based on lightweight
CN111260037B (en) Convolution operation method and device of image data, electronic equipment and storage medium
CN110222760A (en) A kind of fast image processing method based on winograd algorithm
CN108154522B (en) Target tracking system
CN112085056B (en) Target detection model generation method, device, equipment and storage medium
CN102375987B (en) Image processing device and image feature vector extracting and image matching method
CN110348531B (en) Deep convolution neural network construction method with resolution adaptability and application
CN111898735A (en) Distillation learning method, distillation learning device, computer equipment and storage medium
CN110399826B (en) End-to-end face detection and identification method
CN111210446A (en) Video target segmentation method, device and equipment
CN111179270A (en) Image co-segmentation method and device based on attention mechanism
CN112149526A (en) Lane line detection method and system based on long-distance information fusion
CN110135435B (en) Saliency detection method and device based on breadth learning system
CN111914596A (en) Lane line detection method, device, system and storage medium
CN104616035B (en) Visual Map fast matching methods based on image overall feature and SURF algorithm
CN116432736A (en) Neural network model optimization method and device and computing equipment
CN114049491A (en) Fingerprint segmentation model training method, fingerprint segmentation device, fingerprint segmentation equipment and fingerprint segmentation medium
CN111914809A (en) Target object positioning method, image processing method, device and computer equipment
CN113704276A (en) Map updating method and device, electronic equipment and computer readable storage medium
CN113177546A (en) Target detection method based on sparse attention module
Wang et al. EASNet: searching elastic and accurate network architecture for stereo matching
CN114973410A (en) Method and device for extracting motion characteristics of video frame

Legal Events

Date Code Title Description
PB01 Publication
TA01 Transfer of patent application right

Effective date of registration: 20180606

Address after: 100083, 17 floor, four building four, 1 Wang Zhuang Road, Haidian District, Beijing.

Applicant after: BEIJING DEEPHI INTELLIGENT TECHNOLOGY Co.,Ltd.

Address before: 100083, 8 floor, 807 building, four building, 1 Wang Zhuang Road, Haidian District, Beijing.

Applicant before: BEIJING DEEPHI INTELLIGENCE TECHNOLOGY Co.,Ltd.

SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200901

Address after: Unit 01-19, 10 / F, 101, 6 / F, building 5, yard 5, Anding Road, Chaoyang District, Beijing 100029

Applicant after: Xilinx Electronic Technology (Beijing) Co.,Ltd.

Address before: 100083, 17 floor, four building four, 1 Wang Zhuang Road, Haidian District, Beijing.

Applicant before: BEIJING DEEPHI INTELLIGENT TECHNOLOGY Co.,Ltd.

GR01 Patent grant
GR01 Patent grant