CN108282597A - A kind of real-time target tracing system and method based on FPGA - Google Patents
- Publication number: CN108282597A
- Application number: CN201810011296.5A
- Authority: CN (China)
- Legal status: Pending
Classifications
- H04N5/144: Movement detection (H: Electricity; H04N: Pictorial communication, e.g. television; H04N5/14: Picture signal circuitry for video frequency region)
- H04N5/21: Circuitry for suppressing or minimising disturbance, e.g. moiré or halo
Abstract
The invention discloses an FPGA-based real-time target tracking system comprising a mean filter, a motion compensator, a dynamic-block detection module, a suggestion-box search module and a target matching module. In the tracking method, the mean filter divides the video data into multiple image blocks and computes the mean of each block; high-likelihood image blocks are obtained after processing by the motion compensator and the dynamic-block detection module; and the target matching module performs matching computation on the high-likelihood image blocks optimised by the suggestion-box search module to obtain the final tracking target. The invention optimises the system architecture and applies multiple forms of parallel computation throughout the processing pipeline, raising the processing speed and achieving the goal of real-time tracking.
Description
Technical field
The present invention relates to the field of vision applications, and in particular to an FPGA-based real-time target tracking system and method.
Background technology
Real-time target tracking systems are widely used in fields such as human-computer interaction, security monitoring and augmented reality. To obtain good tracking performance and strong robustness to complex external environments, a tracking system must carry out complicated processing at a large computational cost, while certain platforms impose strict power constraints, making real-time tracking difficult to achieve.
Existing target tracking schemes, although jointly optimised across algorithm and architecture, still fall short of real-time performance because of slow processing. There is therefore an urgent need for a high-performance, energy-efficient system that realises real-time target tracking at low power.
Summary of the invention
To solve the above problems, the object of the present invention is to provide an FPGA-based real-time target tracking system and method that processes data faster with lower hardware overhead, striking a good balance among real-time performance, accuracy and power consumption.
The technical solution adopted by the present invention to solve the problem is as follows.
An FPGA-based real-time target tracking system comprises an FPGA chip containing:
a mean filter, which performs mean processing on the video data of a dynamic camera to obtain the dynamic-camera video data mean, and on the video data of a static camera to obtain the static-camera video data mean;
a motion compensator, which compensates the dynamic-camera video data mean to obtain motion-vector-compensated video frames;
a dynamic-block detection module, which processes the static-camera video data mean and the motion-vector-compensated video frames to obtain high-likelihood image blocks;
a suggestion-box search module, which optimises the high-likelihood image blocks;
a target matching module, which matches the optimised high-likelihood image blocks to obtain the position and size of the target object.
The mean filter is connected to the motion compensator and to the dynamic-block detection module; the motion compensator is connected to the dynamic-block detection module; and the dynamic-block detection module, the suggestion-box search module and the target matching module are data-connected to one another.
Further, the system also includes a first data buffer and a second data buffer for caching data. The mean filter is connected to the motion compensator and the dynamic-block detection module through the first data buffer; the second data buffer is connected to the dynamic-block detection module, the suggestion-box search module and the target matching module; the target matching module is connected to the first data buffer and stores its output matching results there; the second data buffer stores the high-likelihood image blocks produced by the dynamic-block detection module, and the target matching module reads the optimised high-likelihood image blocks from the second data buffer for processing. Using the first and second data buffers to hold intermediate data reduces both the extra power consumed by moving data off-chip and the demand on communication bandwidth.
Further, the first data buffer and the second data buffer are implemented as distributed on-chip caches for intermediate data.
Further, the mean filter contains parallel processing units that process the video data concurrently. Processing video data with multiple parallel units raises the throughput of the computation while reducing latency and resource consumption.
An FPGA-based real-time target tracking method comprises the following steps:
A. the mean filter divides the video data into image blocks and computes the mean of each block;
B. the motion compensator receives the computed dynamic-camera video data mean from the mean filter and, after its own computation, passes the result to the dynamic-block detection module, while the mean filter transmits the computed static-camera video data mean directly to the dynamic-block detection module;
C. the dynamic-block detection module computes on the video data means to obtain high-likelihood image blocks;
D. the suggestion-box search module adjusts and optimises the high-likelihood image blocks;
E. the target matching module determines the position and size of the target object after matching each optimised high-likelihood image block against the target object model.
Further, in step A the mean filter divides each frame of the video data into image blocks of size S*S with a 25% overlap between adjacent blocks, computes the mean of each block, and then selectively transmits the video data means to the dynamic-block detection module or the motion compensator according to the camera type. The block size S*S is configured according to the size of the tracked object. Dividing the frame into multiple image blocks facilitates subsequent detection and matching.
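The block division and per-block mean computation of step A can be sketched in software as follows; `block_means` and its stride arithmetic are illustrative choices, not part of the patent (a minimal sketch assuming a 2-D list of pixel values and a stride of 3S/4 to realise the 25% overlap):

```python
def block_means(frame, s):
    """Divide a frame into s*s blocks with 25% overlap and return each block's mean.

    `frame` is a 2-D list of pixel values; `s` is the block size S chosen from
    the tracked-object size. A 25% overlap means the stride between adjacent
    blocks is three quarters of the block size.
    """
    stride = max(1, (3 * s) // 4)  # 25% overlap between neighbouring blocks
    h, w = len(frame), len(frame[0])
    means = {}
    for y in range(0, h - s + 1, stride):
        for x in range(0, w - s + 1, stride):
            total = sum(frame[y + dy][x + dx] for dy in range(s) for dx in range(s))
            means[(y, x)] = total / (s * s)  # mean of the block anchored at (y, x)
    return means
```

On the FPGA the same means are produced by the mean filter's parallel processing units rather than by nested loops.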
Further, in step B the motion compensator receives the computed dynamic-camera video data mean from the mean filter, computes the motion-vector-compensated video frame using the ARPS algorithm, and transmits it to the dynamic-block detection module. The data flows differ between static and dynamic cameras: the dynamic-camera video data mean must be compensated on its way to the dynamic-block detection module so that the target object can be identified more accurately in later analysis.
Further, in step C the dynamic-block detection module computes on the video data means to obtain high-likelihood image blocks. When the module receives a static-camera video data mean, the high-likelihood image blocks are obtained by comparing it with the mean information of the previous frame; when it receives motion-vector-compensated video frames from the motion compensator, the high-likelihood image blocks are obtained by comparing the difference between the block means of consecutive frames against a preset threshold. The high-likelihood image blocks produced by the dynamic-block detection module are what the target matching module uses for the final matching computation.
Further, in step D the suggestion-box search module adjusts and optimises the high-likelihood image blocks. Specifically, by comparing the high-likelihood image blocks with one another, it iteratively adjusts their sizes so as to optimise them for the subsequent matching computation.
Further, in step E the target matching module determines the position and size of the target object after matching each optimised high-likelihood image block against the target object model. Specifically, the high-likelihood image blocks and the target object are modelled, the L1 distances between the target image model and the models of all high-likelihood image blocks are computed, and the high-likelihood image block with the smallest L1 distance is output as the tracked object, giving the position and size of the target. Because the target matching module obtains the final target object through modelling, the computation is both accurate and fast, and the resulting target object is more precise.
The beneficial effects of the invention are as follows. In the FPGA-based real-time target tracking system, the mean filter processes the video data into image blocks and computes each block's mean; the motion compensator and the dynamic-block detection module then process the video data means to obtain high-likelihood image blocks, which, after optimisation by the suggestion-box search module, are sent to the target matching module; the target matching module finally analyses them to obtain the target object. Through these optimisations the system tracks targets quickly and obtains the target object rapidly.
In the FPGA-based real-time target tracking method, the mean filter first divides the video data into multiple image blocks and computes each block's mean; the video data means are then routed to different modules according to the camera type. For a dynamic camera, the mean is sent to the motion compensator to compute motion-vector-compensated video frames, which are then passed to the dynamic-block detection module to detect high-likelihood image blocks; for a static camera, the mean is transmitted directly to the dynamic-block detection module for detection. The dynamic-block detection module transmits all high-likelihood image blocks to the suggestion-box search module for optimisation, and the target matching module performs the final matching computation on the optimised blocks to obtain the target object. The method optimises the whole computation, and its speed allows tracking to reach real-time performance.
Description of the drawings
The invention is further described below with reference to the accompanying drawings and examples.
Fig. 1 is a structural block diagram of the FPGA-based real-time target tracking system of the present invention;
Fig. 2 is a hardware architecture diagram of the FPGA-based real-time target tracking system of the present invention;
Fig. 3 is a schematic diagram of dividing image blocks;
Fig. 4 is a structural schematic diagram of the mean filter;
Fig. 5 is a schematic diagram of the system of the present invention applied to target tracking;
Fig. 6 is a flow diagram of the FPGA-based real-time target tracking method of the present invention;
Fig. 7 is a system architecture diagram of the FPGA-based real-time target tracking system of the present invention.
Detailed description of the embodiments
Referring to Figs. 1 to 5, in the FPGA-based real-time target tracking system of the invention the mean filter 1 processes each frame of the video data, obtains the mean of each video frame, and stores the video data means of all frames in the first data buffer 6. The motion compensator 2 reads the dynamic-camera video data mean from the first data buffer 6, processes it, and passes it to the dynamic-block detection module 3, which at the same time reads the static-camera video data mean from the first data buffer 6. The dynamic-block detection module 3 processes the static-camera video data mean together with the dynamic-camera video data processed by the motion compensator 2, obtains high-likelihood image blocks after computation, and stores them in the second data buffer 7. The suggestion-box search module 4 reads the data in the second data buffer 7, optimises the high-likelihood image blocks, and stores the optimised blocks back into the second data buffer 7. The target matching module 5 reads the data of the second data buffer 7, models the optimised high-likelihood image blocks, and compares these models with the model of the target object, thereby obtaining the target object.
Before performing the division and mean computations, the mean filter 1 first smooths the image, which removes most of the noise and improves the tracking success rate. Fig. 3 is a schematic diagram of dividing image blocks: the mean filter 1 divides each frame into multiple S*S image blocks with a 25% overlap between adjacent blocks. The mean filter 1 in the present invention computes not only the overall filtered mean but also the mean of each divided image block; the processed means are used by the dynamic-block detection module 3 to judge whether an image block contains the target object. The block size S is updated according to the tracked-object size L, which is obtained during target matching.
The mean filter 1 in the present invention computes both the image smoothing and the image-block means; Fig. 4 shows its structure. The mean filter 1 computes the average over a 4*4 kernel, so it has 16 adjacent shift registers whose pixels are reused by the computing unit, and only 4 pixels need to be loaded per cycle. The adder tree in the mean filter 1 sums the 16 values in the shift registers, and its output is added in the first mean unit. The second mean unit computes the image-block mean: the output of the first mean unit is reused to compute the average of the pixels in the image block, and the image-block calculator controls the second mean unit to capture the correct output of the first mean unit according to the target size.
The mean filter 1 averages 16 pixels; compared with a conventional divider, this design requires less hardware. The 4*4 kernel of the mean filter is equivalent to 4 processing units, each of which can compute an image-block mean. Video data fed into the mean filter 1 is processed frame by frame by the 4 parallel processing units, which raises the throughput of the computation and reduces latency.
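The adder tree and power-of-two mean described above can be mimicked in software; `adder_tree_sum` and `window_mean_4x4` are hypothetical names, and the sketch only illustrates why the 4*4 kernel is cheap in hardware (dividing by 16 becomes a 4-bit right shift, so no general-purpose divider is needed):

```python
def adder_tree_sum(values):
    """Pairwise adder tree: sums 16 values in log2(16) = 4 levels, mirroring
    the hardware adder tree that feeds the first mean unit."""
    level = list(values)
    while len(level) > 1:
        level = [level[i] + level[i + 1] for i in range(0, len(level), 2)]
    return level[0]

def window_mean_4x4(window):
    """Integer mean of a 4*4 pixel window, as the hardware computes it."""
    flat = [p for row in window for p in row]  # the 16 shift-register values
    # dividing by 16 is a 4-bit right shift, which is why a power-of-two
    # kernel avoids a conventional divider
    return adder_tree_sum(flat) >> 4
```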
In the present invention the motion compensator 2 ensures that the real-time tracking system can support a dynamic camera. It computes the scene-motion result according to the ARPS algorithm and applies the motion-vector compensation to the previous video frame, guaranteeing that two successive frames share the same scene background, so that the same tracking solution as for a static camera can be used for a moving camera.
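The patent relies on the ARPS (adaptive rood pattern search) algorithm, whose adaptive search pattern is not reproduced here; the sketch below substitutes a brute-force search over a small window as a stand-in to illustrate what motion-vector estimation computes, with `estimate_motion` an illustrative name:

```python
def estimate_motion(prev, curr, search=2):
    """Simplified global motion estimate: exhaustively test small (dy, dx)
    offsets and keep the one minimising the sum of absolute differences (SAD).
    The returned offset maps curr[y][x] onto prev[y + dy][x + dx]; ARPS would
    reach a similar minimum by probing a rood pattern adaptively instead of
    testing every offset."""
    h, w = len(curr), len(curr[0])
    best, best_sad = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            sad = 0
            # only compare the region where the shifted frames overlap
            for y in range(max(0, -dy), min(h, h - dy)):
                for x in range(max(0, -dx), min(w, w - dx)):
                    sad += abs(curr[y][x] - prev[y + dy][x + dx])
            if sad < best_sad:
                best, best_sad = (dy, dx), sad
    return best
```

ARPS reaches a comparable minimum while testing far fewer candidate offsets, which is what makes it attractive for a real-time FPGA pipeline.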
The first data buffer 6 and the second data buffer 7 are distributed on-chip caches that hold the intermediate results of the tracking computation. After the mean filter 1 has processed the image frames in the video data, the generated mean data are all stored in the distributed on-chip cache to await the next processing stage; the dynamic-block detection module 3 likewise stores the high-likelihood image blocks in the distributed on-chip cache for the next stage. Buffering the intermediate values of the tracking computation on chip reduces the extra power consumed by moving data off-chip and the demand on communication bandwidth.
The system architecture of the invention combines a highly concurrent computation framework with distributed buffering of the intermediate values of the tracking algorithm: the whole computation is optimised, multiple parallel mechanisms handle the different computing tasks inside the tracking algorithm to improve efficiency and reduce latency, and intermediate data are stored in the distributed on-chip cache to lower the system's power consumption.
Fig. 5 is a schematic diagram of the real-time target tracking system of the invention applied to target tracking; it comprises a field-programmable gate array (Xilinx ZC706 FPGA) and a host computer, connected to each other through an interface.
The host computer includes a display and an external memory: the video data are stored in the external memory, and the display shows the tracked object in the video in real time.
The FPGA includes a loading controller and the tracking-system module. The loading controller configures and controls the communication between the FPGA interface and the tracking system, and controls the transfer of the video stream from the interface to the tracking-system module in the FPGA. The tracking-system module extracts latent target regions from each frame of the video image sequence, performs target matching on those regions, and sends the matched target-region results to the host computer for display. The specific steps of the target tracking method realised by the real-time tracking system are: the control instructions of the loading controller load the video data into the real-time tracking system; the system performs tracking on each frame of the video data and determines the position and size of the target object; the tracking result is passed to the host computer over the bus and shown in real time on the host display.
Referring to the FPGA-based real-time target tracking method shown in Fig. 6, the mean filter 1 divides each frame of the video data into multiple image blocks and computes the mean of each block. Depending on the camera type, the dynamic-camera video data mean is processed by the motion compensator 2 before being passed to the dynamic-block detection module 3, while the static-camera video data mean is obtained by the dynamic-block detection module 3 directly from the mean filter 1. The dynamic-block detection module 3 processes the received data to obtain high-likelihood image blocks, the suggestion-box search module 4 optimises them, and the target matching module 5 performs the final matching computation on the optimised blocks to obtain the target object.
The mean computation is carried out by multiple parallel processing units, and some of the intermediate data in the whole computation are stored in the distributed on-chip cache; this reduces the system's power consumption while raising its processing speed, finally achieving real-time tracking.
The detection process of the dynamic-block detection module 3 has two parts. The first handles the video data of a static camera: the dynamic-block detection module 3 compares the mean information computed by the mean filter 1 with the mean information of the previous frame to find high-likelihood image blocks. For a static camera, when the mean information of the current frame differs from that of the previous frame by within 80%, the corresponding image block of the current frame is considered a block with a high probability of containing the target, i.e. a high-likelihood image block.
For the video data of a dynamic camera, the video data mean is first processed by the motion compensator 2, which computes the motion-vector-compensated video frame according to the ARPS algorithm and feeds it to the dynamic-block detection module 3. The module computes the absolute difference between the mean of each region of the current frame and the mean of the corresponding region of the previous frame and compares it with a preset threshold T_diff: regions exceeding the threshold are considered blocks with a high probability of containing the target, i.e. high-likelihood image blocks, while regions below the threshold are considered blocks with a low probability of containing the target. This detection screens out image blocks with low target probability and reduces the subsequent computation. Once the dynamic-block detection module 3 has detected the high-likelihood image blocks, they are stored in the distributed on-chip cache to await the next processing step.
For the preset threshold T_diff, the calculation formula is:

D_n^i = |mean(B_n^i) - mean(B_{n-1}^i)|

where D_n^i denotes the result of comparing the n-th frame with the (n-1)-th frame, B_n^i denotes the i-th image block of the n-th frame, and B_{n-1}^i denotes the i-th image block of the (n-1)-th frame; the i-th image block is taken as a high-likelihood image block when D_n^i exceeds T_diff.
The present invention designs dedicated target-detection hardware, which reduces the computation cost, and also optimises the HOG algorithm: the gradient-magnitude computation is simplified to the following formula:
G(x, y) = abs(H(x+1, y) - H(x-1, y)) + abs(H(x, y+1) - H(x, y-1)).
The implementation divides 0°-180° into 6 regions and votes the linear gradient into these 6 regions; compared with the 9-region method, this loses only very little precision.
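The simplified gradient and 6-bin voting can be sketched as follows; `hog_cell_histogram` is an illustrative name, and the use of `atan2` for the orientation is an assumption, since the patent does not state how the hardware quantises the angle:

```python
import math

def hog_cell_histogram(img):
    """Simplified HOG histogram for one cell: the magnitude is |dx| + |dy|
    (the patent's optimisation, avoiding a square root), and the orientation
    over 0-180 degrees is voted into 6 bins of 30 degrees each."""
    h, w = len(img), len(img[0])
    hist = [0.0] * 6
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            dx = img[y][x + 1] - img[y][x - 1]   # [-1, 0, 1] filter, horizontal
            dy = img[y + 1][x] - img[y - 1][x]   # [-1, 0, 1] filter, vertical
            mag = abs(dx) + abs(dy)              # G(x, y) = |dx| + |dy|
            ang = math.degrees(math.atan2(dy, dx)) % 180.0
            hist[min(5, int(ang // 30))] += mag  # vote into one of 6 bins
    return hist
```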
Referring to the system architecture diagram of the FPGA-based real-time target tracking system shown in Fig. 7, the HOG gradient in the tracking system is computed with a [-1, 0, 1] filter, and each gradient filter requires 4 pixels: H(x+1, y), H(x-1, y), H(x, y+1) and H(x, y-1). These 4 pixel values can be obtained from three adjacent image lines, which also means that each image line must be processed three times by the mean filter 1.
The present invention uses data reuse: to obtain the best reuse pattern, the shift registers of the three rows are connected into one global chain with a single serial input port, ensuring that each line of the image is read only once.
As shown in Fig. 7, the four shift registers R12, R32, R21 and R23 on the left feed the registers Ry+1, Rx-1, Ry-1 and Rx+1 of the four computing units on the right, and four pixels are shifted out in each cycle.
The HOG processing unit computes the gradient-orientation-histogram data to generate the HOG features of an image block, then matches the HOG features by computing their L1 distance to the HOG model of the target object; after the computation, the high-likelihood image block with the smallest L1 distance is output as the tracked object. The steps of the target-detection method are as follows: establish the HOG model of the target object according to its features; let the HOG processing unit compute the gradient-orientation-histogram data and generate the HOG features of each image block; compute the L1 distance between each block model and the previously established target model; and among the many high-likelihood image blocks, select the one with the smallest L1 distance and output it as the tracked object.
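The final matching step, selecting the high-likelihood image block whose model is nearest to the target model under the L1 distance, can be sketched as follows; `match_target` is an illustrative name, and the models are assumed to be flat feature vectors such as HOG histograms:

```python
def match_target(target_model, candidate_models):
    """Return the index of the candidate whose feature model is closest to
    the target model under the L1 (sum of absolute differences) distance."""
    def l1(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))
    return min(range(len(candidate_models)),
               key=lambda i: l1(target_model, candidate_models[i]))
```

The position and size of the winning high-likelihood image block then give the position and size of the target object.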
The above are only preferred embodiments of the present invention; the invention is not limited to these embodiments. Any implementation that achieves the technical effect of the present invention by the same means shall fall within the scope of protection of the present invention.
Claims (10)
1. An FPGA-based real-time target tracking system comprising an FPGA chip, characterised in that the FPGA chip comprises:
a mean filter (1), which performs mean processing on the video data of a dynamic camera to obtain the dynamic-camera video data mean, and on the video data of a static camera to obtain the static-camera video data mean;
a motion compensator (2), which compensates the dynamic-camera video data mean to obtain motion-vector-compensated video frames;
a dynamic-block detection module (3), which processes the static-camera video data mean and the motion-vector-compensated video frames to obtain high-likelihood image blocks;
a suggestion-box search module (4), which optimises the high-likelihood image blocks;
a target matching module (5), which matches the optimised high-likelihood image blocks to obtain the position and size of the target object;
wherein the mean filter (1) is connected to the motion compensator (2) and the dynamic-block detection module (3), the motion compensator (2) is connected to the dynamic-block detection module (3), and the dynamic-block detection module (3), the suggestion-box search module (4) and the target matching module (5) are data-connected to one another.
2. The FPGA-based real-time target tracking system according to claim 1, characterised in that it further comprises a first data buffer (6) and a second data buffer (7) for caching data; the mean filter (1) is connected to the motion compensator (2) and the dynamic-block detection module (3) through the first data buffer (6); the second data buffer (7) is connected to the dynamic-block detection module (3), the suggestion-box search module (4) and the target matching module (5); the target matching module (5) is connected to the first data buffer (6) and stores its output matching results in the first data buffer (6); the second data buffer (7) stores the high-likelihood image blocks generated by the dynamic-block detection module (3); and the target matching module (5) reads the optimised high-likelihood image blocks from the second data buffer (7) for processing.
3. The FPGA-based real-time target tracking system according to claim 2, characterised in that the first data buffer (6) and the second data buffer (7) are implemented as distributed on-chip caches for intermediate data.
4. The FPGA-based real-time target tracking system according to claim 1, characterised in that the mean filter (1) comprises parallel processing units for handling video data, the processing units processing the video data in parallel.
5. A method using the FPGA-based real-time target tracking system of any one of claims 1 to 4, characterised in that it comprises the following steps:
A. the mean filter (1) divides the video data into image blocks and computes the mean of each block;
B. the motion compensator (2) receives the computed dynamic-camera video data mean from the mean filter (1) and, after its own computation, transfers the result to the dynamic-block detection module (3), while the mean filter (1) transmits the computed static-camera video data mean to the dynamic-block detection module (3);
C. the dynamic-block detection module (3) computes on the video data means to obtain high-likelihood image blocks;
D. the suggestion-box search module (4) adjusts and optimises the high-likelihood image blocks;
E. the target matching module (5) determines the position and size of the target object after matching each optimised high-likelihood image block against the target object model.
6. The FPGA-based real-time target tracking method according to claim 5, characterised in that in step A the mean filter (1) divides each frame of the video data into image blocks of size S*S with a 25% overlap between adjacent blocks, computes the mean of each block, and then selectively transmits the video data means to the dynamic-block detection module (3) or the motion compensator (2) according to the camera type, the block size S*S being configured according to the size of the tracked object.
7. The FPGA-based real-time target tracking method according to claim 5, characterised in that in step B the motion compensator (2) receives the computed dynamic-camera video data mean from the mean filter (1), computes the motion-vector-compensated video frame using the ARPS algorithm, and transmits it to the dynamic-block detection module (3).
8. The FPGA-based real-time target tracing method according to claim 5, characterized in that in step C the dynamic block detection module (3) obtains high likelihood image blocks from the video data mean values as follows: when the dynamic block detection module (3) receives the video data mean values of the still camera, the still camera's mean information is compared with the mean information of the previous frame to obtain high likelihood image blocks; when the dynamic block detection module (3) receives motion-compensated video frames from the motion compensator (2), the differences between the image block means of consecutive frames are compared against a preset threshold to obtain high likelihood image blocks.
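The threshold test of claim 8 amounts to flagging an image block as high likelihood when the absolute difference of its mean between consecutive frames exceeds a preset value. A minimal sketch, assuming block means keyed by block position (as the mean filter of claim 6 might emit them):

```python
def high_likelihood_blocks(prev_means, cur_means, threshold):
    """Return positions of blocks whose inter-frame mean difference
    exceeds the preset threshold (claim 8's dynamic-block test)."""
    return [pos for pos, m in cur_means.items()
            if abs(m - prev_means[pos]) > threshold]
```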
9. The FPGA-based real-time target tracing method according to claim 5, characterized in that in step D the suggestion box search module (4) adjusts and optimizes the high likelihood image blocks by the following specific steps: the size of each high likelihood image block is iteratively adjusted by comparing the high likelihood image blocks with one another, thereby optimizing the size of each high likelihood image block.
10. The FPGA-based real-time target tracing method according to claim 5, characterized in that in step E the object matching module (5) matches each optimized high likelihood image block against the target object model and determines the position and size of the target object by the following specific steps: the high likelihood image blocks and the target object are modelled, the L1 distance between the target model and the model of every high likelihood image block is computed, and the high likelihood image block with the minimum L1 distance is selected and output as the tracked object, yielding the position and size of the target object.
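The matching step of claim 10 can be sketched as follows. This is a minimal software illustration: the feature vectors standing in for the block and target "models" are hypothetical, since the patent does not specify the form of the model, only that candidates are compared by L1 distance.

```python
def l1_distance(a, b):
    """L1 (city-block) distance between two equal-length feature vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

def match_target(target_model, candidates):
    """candidates: dict mapping block position -> model vector.

    Returns (position, distance) of the high likelihood block whose
    model is nearest to the target model in L1 distance (claim 10)."""
    pos = min(candidates, key=lambda p: l1_distance(target_model, candidates[p]))
    return pos, l1_distance(target_model, candidates[pos])
```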
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810011296.5A CN108282597A (en) | 2018-01-05 | 2018-01-05 | A kind of real-time target tracing system and method based on FPGA |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108282597A true CN108282597A (en) | 2018-07-13 |
Family
ID=62803214
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810011296.5A Pending CN108282597A (en) | 2018-01-05 | 2018-01-05 | A kind of real-time target tracing system and method based on FPGA |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108282597A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101393609A (en) * | 2008-09-18 | 2009-03-25 | 北京中星微电子有限公司 | Target detection tracking method and device |
CN101640761A (en) * | 2009-08-31 | 2010-02-03 | 杭州新源电子研究所 | Vehicle-mounted digital television signal processing method |
CN103413326A (en) * | 2013-08-12 | 2013-11-27 | 上海盈方微电子股份有限公司 | Method and device for detecting feature points in Fast approximated SIFT algorithm |
US20170109582A1 (en) * | 2015-10-19 | 2017-04-20 | Disney Enterprises, Inc. | Incremental learning framework for object detection in videos |
Non-Patent Citations (5)
Title |
---|
SHENG TANG: "Object Localization Based on Proposal Fusion", IEEE Transactions on Multimedia * |
SHUHAN CHEN ET AL: "Saliency Detection for Improving Object Proposals", 2016 IEEE International Conference on Digital Signal Processing * |
SIYANG LI ET AL: "Box Refinement: Object Proposal Enhancement and Pruning", 2017 IEEE Winter Conference on Applications of Computer Vision * |
XIAOBAI CHEN ET AL: "A fast and energy efficient FPGA-based system for real-time object tracking", Proceedings of APSIPA Annual Summit and Conference 2017 * |
HU XIUHUA et al: "A Target Tracking Algorithm Using Objectness Detection", Journal of Xidian University * |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110728617A (en) * | 2019-09-30 | 2020-01-24 | 上海电机学院 | FPGA-based dynamic target identification and real-time tracking system |
CN111008305A (en) * | 2019-11-29 | 2020-04-14 | 百度在线网络技术(北京)有限公司 | Visual search method and device and electronic equipment |
US11704813B2 (en) | 2019-11-29 | 2023-07-18 | Baidu Online Network Technology (Beijing) Co., Ltd. | Visual search method, visual search device and electrical device |
WO2023151385A1 (en) * | 2022-02-10 | 2023-08-17 | Oppo广东移动通信有限公司 | Image processing method and device, terminal, and readable storage medium |
CN114820630A (en) * | 2022-07-04 | 2022-07-29 | 国网浙江省电力有限公司电力科学研究院 | Target tracking algorithm model pipeline acceleration method and circuit based on FPGA |
CN114820630B (en) * | 2022-07-04 | 2022-09-06 | 国网浙江省电力有限公司电力科学研究院 | Target tracking algorithm model pipeline acceleration method and circuit based on FPGA |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Zou et al. | Df-net: Unsupervised joint learning of depth and flow using cross-task consistency | |
CN108282597A (en) | A kind of real-time target tracing system and method based on FPGA | |
CN102148934B (en) | Multi-mode real-time electronic image stabilizing system | |
CN102509071B (en) | Optical flow computation system and method | |
WO2018219931A1 (en) | Block-matching optical flow and stereo vision for dynamic vision sensors | |
RU2623806C1 (en) | Method and device of processing stereo images | |
Ding et al. | Real-time stereo vision system using adaptive weight cost aggregation approach | |
Fan et al. | F-C3D: FPGA-based 3-dimensional convolutional neural network | |
Zhao et al. | Real-time stereo on GPGPU using progressive multi-resolution adaptive windows | |
CN104240217B (en) | Binocular camera image depth information acquisition methods and device | |
Boikos et al. | A high-performance system-on-chip architecture for direct tracking for SLAM | |
Xiong et al. | Self-supervised monocular depth and visual odometry learning with scale-consistent geometric constraints | |
Zhao et al. | FP-Stereo: Hardware-efficient stereo vision for embedded applications | |
Stumpp et al. | Harms: A hardware acceleration architecture for real-time event-based optical flow | |
CN102932643A (en) | Expanded variable block movement estimation circuit suitable for HEVC (high efficiency video coding) standard | |
Ding et al. | Improved real-time correlation-based FPGA stereo vision system | |
CN101227611A (en) | AVS-based motion estimation apparatus and searching method | |
Chen et al. | A fast and energy efficient FPGA-based system for real-time object tracking | |
Gudis et al. | Multi-resolution real-time dense stereo vision processing in FPGA | |
Bui et al. | A hardware/software co-design approach for real-time object detection and tracking on embedded devices | |
CN114584785A (en) | Real-time image stabilizing method and device for video image | |
Tomasi et al. | A novel architecture for a massively parallel low level vision processing engine on chip | |
CN109493349B (en) | Image feature processing module, augmented reality equipment and corner detection method | |
CN104820652B (en) | A kind of image template coalignment using AXI buses | |
Saldaña-González et al. | FPGA based acceleration for image processing applications |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20180713 |