CN106204660A - Ground target tracking device based on feature matching - Google Patents

Ground target tracking device based on feature matching

Info

Publication number
CN106204660A
CN106204660A (application CN201610596748.1A)
Authority
CN
China
Prior art keywords
point
image
module
characteristic
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610596748.1A
Other languages
Chinese (zh)
Other versions
CN106204660B (en)
Inventor
钟胜
喻鹏
张清洋
崔宗阳
董太行
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong University of Science and Technology filed Critical Huazhong University of Science and Technology
Priority to CN201610596748.1A priority Critical patent/CN106204660B/en
Publication of CN106204660A publication Critical patent/CN106204660A/en
Application granted granted Critical
Publication of CN106204660B publication Critical patent/CN106204660B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00: General purpose image data processing
    • G06T1/20: Processor architectures; Processor configuration, e.g. pipelining
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00: General purpose image data processing
    • G06T1/60: Memory management

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a ground target tracking device based on image key-point feature matching, comprising a programmable gate array (FPGA) and a digital signal processor (DSP). The FPGA extracts image features from the image sequence fed by an external camera, completes feature matching between adjacent frames, and sends the successful inter-frame matching results to the DSP; according to the inter-frame geometric transformation fed back by the DSP, it then performs accurate cross-correlation matching. The DSP computes the transformation relation between adjacent image frames from the feature matching results output by the FPGA. The invention implements the complete complex feature-point-based target tracking method on an embedded FPGA and computes the inter-frame transformation on the DSP, balancing algorithm complexity against the low-power requirements of an embedded board, processing large amounts of image data in real time and, compared with the prior art, improving processing speed by an order of magnitude.

Description

Ground target tracking device based on feature matching
Technical field
The invention belongs to the field of digital image signal processing, and specifically relates to a ground target tracking device based on feature matching.
Background art
Target tracking with computer vision has many research applications. In embedded systems, a DSP is typically used as the core tracking device, but under the power and real-time constraints of embedded platforms a DSP alone has difficulty running complex target tracking algorithms. Image feature points are invariant to rotation, scale and illumination, so they are widely applied in image registration, target tracking and related fields. However, feature-point algorithms are themselves computationally complex, which creates many obstacles to their use in embedded applications. For example, feature extraction algorithms such as SIFT and SURF must build a scale space, which requires Gaussian-filtered images of a single frame at multiple scales; in an embedded scenario this conflicts with the demands of low power consumption, limited computing resources and high real-time performance. To meet the tracking requirements of embedded scenarios, dedicated hardware such as an ASIC or an FPGA can be used to assist image processing and accelerate the algorithm.
Wang, J., et al., "An Embedded System-on-Chip Architecture for Real-time Visual Detection and Matching", IEEE Transactions on Circuits & Systems for Video Technology, 2014, 24(3): 525-538, proposes a real-time visual feature matching system implemented on a single FPGA. The system adopts a SIFT+BRIEF architecture, implements the entire algorithm on one FPGA, and registers consecutive frames in real time, reaching 60 FPS for 720p images. Its main characteristics are that the hardware verification platform relies only on a single FPGA, that features of consecutive frames can be matched on chip in real time, and that resource usage is low. The limitation of the approach, however, is that the work is restricted to accelerated feature extraction and inter-frame feature matching; it does not rationally coordinate FPGA and DSP resources to form a complete tracking system.
Summary of the invention
To address the defects and technical needs of the prior art, the invention provides a hardware implementation scheme and device for ground target tracking based on feature matching. The complete complex feature-point-based target tracking algorithm is implemented on an embedded FPGA, while the inter-frame transformation is computed on a DSP, balancing algorithm complexity against the low-power requirements of an embedded board. With the FPGA as the core of the implementation, large amounts of image data are processed in real time and, compared with the prior art, processing speed is improved by an order of magnitude.
A ground target tracking device based on image key-point feature matching comprises: a programmable gate array (FPGA) and a digital signal processor (DSP);
the FPGA comprises a feature extraction unit, a tracking-point coordinate calculation unit and an accurate matching unit; the feature extraction unit extracts image features from each frame of the externally input image sequence, completes feature matching between adjacent frames according to the image features, and sends the successful inter-frame matching results to the DSP; the tracking-point coordinate calculation unit computes the tracking-point coordinates in the current frame from the previous frame's tracking-point coordinates according to the inter-frame geometric transformation fed back by the DSP; the accurate matching unit performs template-based cross-correlation matching in the neighborhood of the tracking-point position computed by the DSP for the current frame and obtains the accurate tracking-point coordinates;
the DSP computes the transformation relation between adjacent image frames from the feature matching results output by the FPGA and feeds it back to the FPGA.
Further, the feature extraction unit comprises a feature detection module, a feature description module, a feature storage module and a frame-to-frame feature point matching module;
the feature detection module performs multi-scale Gaussian filtering and differencing on the image data, determines extreme points, rejects low-response points, and obtains the coordinates of the detected image feature points;
the feature description module extracts image information from the neighborhood of each detected feature point according to its coordinates and obtains the description vector of the feature point;
the feature storage module caches the feature point coordinates and description vectors of each frame; it contains two dual-port RAMs, RAMA and RAMB, operated in ping-pong fashion so that the previous frame and the current frame are both cached: the feature point coordinates and description vectors of frame N-1 are stored in RAMA, and those of frame N are stored in RAMB (a short software sketch of this ping-pong arrangement is given below);
the frame-to-frame feature point matching module completes feature matching between adjacent frames according to the feature point coordinates and description vectors, and sends the successful matching results to the DSP.
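The ping-pong arrangement of RAMA and RAMB described above can be illustrated with a minimal C sketch; it is a software analogue of the two dual-port RAMs rather than the actual hardware, and the structure layout, the MAX_FEATS limit and the function names are illustrative assumptions rather than part of the patent.

```c
#include <string.h>

#define MAX_FEATS 512
#define DESC_BITS 256

typedef struct {
    int x, y;                          /* feature point coordinates        */
    unsigned char desc[DESC_BITS / 8]; /* 256-bit BRIEF description vector */
} Feature;

typedef struct {
    Feature feats[MAX_FEATS];
    int     count;
} FeatureBank;

/* Two banks emulate RAMA/RAMB; 'write_sel' selects the bank that receives
 * the current frame while the other bank still holds the previous frame.  */
static FeatureBank bank[2];
static int write_sel = 0;

void store_current_frame(const Feature *feats, int n) {
    FeatureBank *cur = &bank[write_sel];
    if (n > MAX_FEATS) n = MAX_FEATS;
    memcpy(cur->feats, feats, (size_t)n * sizeof(Feature));
    cur->count = n;
}

const FeatureBank *current_frame(void)  { return &bank[write_sel];     }
const FeatureBank *previous_frame(void) { return &bank[write_sel ^ 1]; }

/* Called at each frame boundary: the bank holding frame N-1 becomes the
 * next one to be overwritten, exactly as RAMA and RAMB alternate above.   */
void swap_banks(void) { write_sel ^= 1; }
```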
Further,
The feature detection module comprises a down-sampling module and two structurally identical feature point detection modules; each feature point detection module contains multiple Gaussian filtering units, multiple difference calculation units, multiple window generation units and one feature point selection unit;
the Gaussian filtering units of the first feature point detection module apply, in parallel, Gaussian filters with different scale parameters to each frame of the image sequence produced by the analog camera device; a difference calculation unit subtracts the Gaussian-filtered images of two adjacent scales to obtain a difference-of-Gaussian image; a window generation unit generates a window centered on a pixel of the difference-of-Gaussian image and bounded by its neighborhood; the feature point selection unit determines extreme points within the generated windows, takes the extreme points as candidate feature points, deletes low-contrast points and edge points from the candidates, and retains the remaining candidate feature points as the final feature points (a software sketch of the multi-scale filtering and differencing step is given below);
the Gaussian filtering unit of intermediate scale is selected from the multiple Gaussian filtering units, and its output Gaussian-filtered image is fed to the down-sampling module; the down-sampling module down-samples the input image and outputs the down-sampled image to the second feature point detection module, which determines feature points in the same way as the first feature point detection module.
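A minimal C sketch of the scale-space construction performed by one feature point detection module follows: Gaussian filters with increasing σ are applied to the same frame and adjacent results are subtracted to give difference-of-Gaussian images. This is a sequential software analogue of the parallel filtering units described above; the 5×5 kernel size, the border handling and the function names are assumptions made for illustration.

```c
#include <math.h>

#define K 5  /* 5x5 Gaussian kernel, illustrative */

/* Build a normalized 5x5 Gaussian kernel for a given sigma. */
static void make_kernel(float sigma, float k[K][K]) {
    float sum = 0.0f;
    for (int i = 0; i < K; i++)
        for (int j = 0; j < K; j++) {
            float dx = (float)(j - K / 2), dy = (float)(i - K / 2);
            k[i][j] = expf(-(dx * dx + dy * dy) / (2.0f * sigma * sigma));
            sum += k[i][j];
        }
    for (int i = 0; i < K; i++)
        for (int j = 0; j < K; j++) k[i][j] /= sum;
}

/* Convolve one pixel of a row-major gray image; borders are clamped. */
static float conv_at(const unsigned char *img, int w, int h,
                     int x, int y, const float k[K][K]) {
    float acc = 0.0f;
    for (int i = 0; i < K; i++)
        for (int j = 0; j < K; j++) {
            int yy = y + i - K / 2, xx = x + j - K / 2;
            if (yy < 0) yy = 0;
            if (yy >= h) yy = h - 1;
            if (xx < 0) xx = 0;
            if (xx >= w) xx = w - 1;
            acc += k[i][j] * (float)img[yy * w + xx];
        }
    return acc;
}

/* Produce n_scales Gaussian images and n_scales-1 difference-of-Gaussian images. */
void build_dog(const unsigned char *img, int w, int h,
               const float *sigmas, int n_scales,
               float **gauss /* [n_scales][w*h] */,
               float **dog   /* [n_scales-1][w*h] */) {
    float k[K][K];
    for (int s = 0; s < n_scales; s++) {
        make_kernel(sigmas[s], k);
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                gauss[s][y * w + x] = conv_at(img, w, h, x, y, k);
    }
    for (int s = 0; s + 1 < n_scales; s++)   /* adjacent-scale difference */
        for (int p = 0; p < w * h; p++)
            dog[s][p] = gauss[s + 1][p] - gauss[s][p];
}
```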
Further,
The feature description module comprises a data control module and a description vector calculation module;
the data control module reads the feature point coordinates and, taking each feature point as a reference together with the offsets stored in a random-number cache, extracts a fixed amount of image pixel data;
the description vector calculation module performs pairwise gray-value comparisons on the extracted image pixel data to obtain a binary description vector.
Further,
The frame-to-frame feature point matching module comprises a description vector distance calculator, a read-interrupt generator and a matched-pair FIFO memory;
the description vector distance calculator uses a first state machine to read the feature points of the current frame and the previous frame and a second state machine to read the feature point description vectors of the current frame and the previous frame; the distance between the description vectors of the current frame and the previous frame is computed, and if it is below a threshold the two feature points are regarded as a successful match (a software sketch of this matching step is given below);
the matched-pair FIFO memory stores the successfully matched point pairs;
the read-interrupt generator issues an interrupt signal to the DSP when the matching of the two frames is finished and waits for the DSP to respond.
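As a software analogue of the matching step referenced above, the following C sketch scans the previous frame for each current-frame descriptor, computes the Hamming distance, and keeps only pairs below a threshold. The threshold value of 30 follows the embodiment described later in the text; the data layout and function names are illustrative.

```c
#include <stdint.h>
#include <stddef.h>

#define DESC_WORDS 8          /* 256-bit descriptor = 8 x 32-bit words */
#define HAMMING_THRESHOLD 30  /* per the embodiment described later    */

typedef struct {
    int      x, y;
    uint32_t desc[DESC_WORDS];
} Feat;

typedef struct { int cur_idx, prev_idx; } Match;

/* Count differing bits between two 256-bit descriptors. */
static int hamming256(const uint32_t *a, const uint32_t *b) {
    int d = 0;
    for (int w = 0; w < DESC_WORDS; w++) {
        uint32_t x = a[w] ^ b[w];
        while (x) { d += (int)(x & 1u); x >>= 1; }  /* portable popcount */
    }
    return d;
}

/* For every current-frame feature, find the closest previous-frame feature;
 * keep the pair only if the best distance is below the threshold.          */
size_t match_frames(const Feat *cur, size_t n_cur,
                    const Feat *prev, size_t n_prev,
                    Match *out, size_t max_out) {
    size_t n_out = 0;
    for (size_t i = 0; i < n_cur && n_out < max_out; i++) {
        int best = HAMMING_THRESHOLD, best_j = -1;
        for (size_t j = 0; j < n_prev; j++) {
            int d = hamming256(cur[i].desc, prev[j].desc);
            if (d < best) { best = d; best_j = (int)j; }
        }
        if (best_j >= 0) {
            out[n_out].cur_idx  = (int)i;
            out[n_out].prev_idx = best_j;
            n_out++;
        }
    }
    return n_out;
}
```

In the hardware described here the bit count is not a software loop but a parallel comparison of the 256-bit vectors; the sketch only reproduces the decision rule.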
Further,
The accurate matching unit comprises a search-region cache module, a correlation matching module, and a template cache and update module;
the search-region cache module caches the image input from the external interface; a new frame overwrites the previous frame;
the correlation matching module creates a region to be matched in the current frame, centered on the tracking-point coordinates, extracts the template from the template cache and update module, and performs gray-level correlation by traversing the region with a sliding window; the window center corresponding to the maximum of the correlation result is the best match position, and the window corresponding to that maximum is written back to the template cache and update module;
the template update module caches and updates the template.
Further,
The DSP, after capturing the interrupt signal sent by the FPGA, initiates an enhanced direct memory access (EDMA) transfer to receive the successfully matched feature point pairs from the FPGA, uses random sample consensus (RANSAC) to compute the transformation matrix reflecting the inter-frame geometric relation between the point pairs, sends an interrupt signal to the FPGA, and, after receiving a response, feeds the transformation matrix back to the FPGA.
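RANSAC itself is a generic hypothesize-and-verify loop; the C sketch below shows its shape using, purely for brevity, a translation-only model estimated from a single point pair. The device described here computes a full geometric transformation matrix, so the model-fitting step would differ; the rand()-based sampling, iteration count and inlier tolerance are illustrative assumptions.

```c
#include <stdlib.h>
#include <math.h>

typedef struct { float cx, cy, px, py; } PairXY;  /* current / previous coordinates */
typedef struct { float dx, dy; } Translation;      /* simplified inter-frame model   */

/* Estimate a translation from matched pairs with RANSAC. */
Translation ransac_translation(const PairXY *pairs, int n,
                               int iterations, float inlier_tol) {
    Translation best = {0.0f, 0.0f};
    int best_inliers = -1;
    for (int it = 0; it < iterations && n > 0; it++) {
        /* 1. sample a minimal set (one pair suffices for a translation model) */
        const PairXY *s = &pairs[rand() % n];
        Translation cand = { s->cx - s->px, s->cy - s->py };
        /* 2. count inliers under the candidate model */
        int inliers = 0;
        for (int i = 0; i < n; i++) {
            float ex = pairs[i].px + cand.dx - pairs[i].cx;
            float ey = pairs[i].py + cand.dy - pairs[i].cy;
            if (sqrtf(ex * ex + ey * ey) < inlier_tol) inliers++;
        }
        /* 3. keep the model with the largest consensus set */
        if (inliers > best_inliers) { best_inliers = inliers; best = cand; }
    }
    return best;
}
```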
In general, compared with the prior art, the above technical scheme conceived by the present invention has the following advantages:
Through the decomposition of the feature-point algorithm and the rational cooperation of the FPGA and the DSP, the present invention realizes a highly parallel ground target tracking device based mainly on feature matching; the method uses the parallel acceleration of the FPGA and the efficient cooperation of the FPGA and the DSP to achieve ground target tracking under a complex algorithm framework. Compared with the traditional use of a DSP as the main signal processor, more complex algorithms can be realized: the feature-point algorithm, the correlation matching algorithm and the sample consensus algorithm are organically combined, good real-time tracking performance is obtained, and a rate of 50 frames per second can be reached.
By decomposing the feature algorithm into modules and using the FPGA as the core processing device, the present invention implements the complete complex feature-point-based target tracking algorithm and the accurate cross-correlation matching algorithm on the embedded FPGA, balancing algorithm complexity against the low-power requirements of an embedded board; the large amount of computation in the algorithm is realized through the parallel design of the FPGA, so that large volumes of data can be processed in real time, and, compared with a pure DSP implementation, processing speed is improved by an order of magnitude.
The present invention adopts a dynamic buffering structure to link the different processing components; the use of FIFOs and synchronous memories effectively solves the interconnection problems caused by differences in data width, data rate and interface, reducing resource consumption and improving the resource utilization of the system.
The present invention can receive images synchronously and achieve real-time image tracking; using the FPGA as both the architectural core and the data processing core, the module interfaces are standardized and easy to reconfigure, and the device offers real-time throughput of large data volumes, low power consumption and small size, so that it can be effectively applied to fields such as target tracking, navigation and recognition.
Brief description of the drawings
Fig. 1 is an overall block diagram of the ground target tracking device based on image key-point feature matching of the invention;
Fig. 2 is a detailed block diagram of the device;
Fig. 3 is a block diagram of the SIFT feature detection of the device;
Fig. 4 is a block diagram of the BRIEF feature extraction of the device;
Fig. 5 is a block diagram of the adjacent-frame feature matching of the device;
Fig. 6 is a schematic diagram of the FPGA-DSP communication of the device;
Fig. 7 shows the detailed structure of the gray-level cross-correlation matching of the device;
Fig. 8 is the workflow diagram of the device.
Detailed description of the invention
In order to make the purpose, technical scheme and advantages of the present invention clearer, the present invention is further described below in conjunction with the drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention and are not intended to limit it. In addition, the technical features involved in the embodiments of the invention described below may be combined with each other as long as they do not conflict.
As shown in Fig. 1, the ground target tracking device based on image key-point feature matching of the present invention comprises: a programmable gate array (FPGA) and a digital signal processor (DSP). The interface between the FPGA and the DSP is an EMIF interface, the interface between the FPGA and the analog camera unit is a Camera Link interface, and the interface between the FPGA and the external host computer is an RS-422 serial interface.
The FPGA comprises a feature extraction unit, a tracking-point coordinate calculation unit and an accurate matching unit. The feature extraction unit extracts image features from each frame of the image sequence produced by the analog camera device, completes feature matching between adjacent frames according to the image features, and sends the inter-frame matching results to the DSP. The tracking-point coordinate calculation unit computes the tracking-point coordinates in the current frame from the previous frame's tracking-point coordinates according to the inter-frame geometric transformation fed back by the DSP. The accurate matching unit performs template-based cross-correlation matching in the neighborhood of the current-frame tracking-point position and obtains the accurate tracking-point coordinates.
The DSP comprises a transformation relation calculation unit, which computes the transformation relation between adjacent image frames from the feature matching results output by the FPGA.
Fig. 2 shows a preferred embodiment of the FPGA. The FPGA comprises a feature extraction unit, a tracking-point coordinate calculation unit and an accurate matching unit.
The feature extraction unit in turn comprises a feature detection module, a feature description module, a feature storage module and a frame-to-frame feature point matching module.
The feature detection module of the present invention preferably extracts SIFT (Scale Invariant Feature Transform) features.
The feature detection module performs multi-scale Gaussian filtering and differencing on the image data, determines extreme points and rejects low-response points, obtaining the coordinate information of the detected image feature points. As shown in Fig. 3, the feature detection module comprises a down-sampling module and two structurally identical feature point detection modules. Each feature point detection module contains multiple Gaussian filtering units, multiple difference calculation units, multiple window generation units and one feature point selection unit. The Gaussian filtering units of the first feature point detection module apply, in parallel, Gaussian filters with different scale parameters to each frame of the image sequence produced by the analog camera device. A difference calculation unit subtracts the Gaussian-filtered images of two adjacent scales to obtain a difference-of-Gaussian image. A window generation unit generates a window centered on a pixel of the difference-of-Gaussian image and bounded by its neighborhood. The feature point selection unit determines extreme points within the generated windows, takes them as candidate feature points, deletes low-contrast points and edge points, and retains the remaining candidates as the final feature points. The Gaussian filtering unit of intermediate scale is selected from the multiple Gaussian filtering units, and its output Gaussian-filtered image is fed to the down-sampling module; the down-sampling module down-samples the input image and outputs the result to the second feature point detection module, which determines feature points in the same way as the first.
The feature point detection stage of the SIFT algorithm requires Gaussian filtering of the image under multiple scale parameters σ together with down-sampled images, forming a Gaussian scale space. Here, multiple Gaussian filtering templates are instantiated inside the FPGA; driven by the pixel clock, the image data enters the templates pixel by pixel and is convolved with the image, so several Gaussian convolutions can run simultaneously, and the convolution results are also produced under the pixel clock with a fixed delay of a certain number of clock cycles relative to the raw image data. After the whole image has been input, and after this fixed delay, the filtered results for all scale parameters σ, i.e. the multiple Gaussian-filtered images, are produced simultaneously; this constitutes the Gaussian scale-space image group. The gray-level differences between adjacent layers of the Gaussian scale-space image group are then computed, generating the difference-of-Gaussian (DoG) scale space. Within the DoG scale space, the top and bottom layers are discarded; for each pixel of the remaining layers, its 26 neighboring pixels are examined, i.e. the 3×3×3 region around the pixel. If the pixel is a gray-level extremum within its 26-neighborhood, either a maximum or a minimum, it is preliminarily judged to be a feature point. For each preliminarily judged feature point, a Hessian matrix is used to decide whether the point is an edge-response point; edge-response points are rejected. It should be noted that the feature point detection algorithm selected here is SIFT, but the invention is not limited to SIFT: other widely used feature point detection methods such as SURF, Harris corners and FAST corners may also be used.
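The two tests just described, the 26-neighbor extremum check in the DoG stack and the Hessian-based edge rejection, can be sketched in C as follows. The edge-ratio threshold r = 10 is the value commonly used in SIFT and is an assumption, not a value stated in the patent; the caller is assumed to keep the candidate point away from the image and scale borders.

```c
#include <stdbool.h>

/* dog[s] is the DoG image at scale s, row-major, width w. */

/* True if dog[s][y*w+x] is a maximum or minimum over its 3x3x3 neighborhood
 * (26 neighbors across the previous, current and next scale).              */
bool is_extremum(float **dog, int w, int x, int y, int s) {
    float v = dog[s][y * w + x];
    bool is_max = true, is_min = true;
    for (int ds = -1; ds <= 1; ds++)
        for (int dy = -1; dy <= 1; dy++)
            for (int dx = -1; dx <= 1; dx++) {
                if (ds == 0 && dy == 0 && dx == 0) continue;
                float n = dog[s + ds][(y + dy) * w + (x + dx)];
                if (n >= v) is_max = false;
                if (n <= v) is_min = false;
            }
    return is_max || is_min;
}

/* Reject edge responses using the 2x2 Hessian of the DoG image:
 * keep the point only if tr^2/det < (r+1)^2/r, with r = 10 assumed.        */
bool passes_edge_test(const float *d, int w, int x, int y) {
    const float r = 10.0f;
    float dxx = d[y * w + x + 1] + d[y * w + x - 1] - 2.0f * d[y * w + x];
    float dyy = d[(y + 1) * w + x] + d[(y - 1) * w + x] - 2.0f * d[y * w + x];
    float dxy = (d[(y + 1) * w + x + 1] - d[(y + 1) * w + x - 1]
               - d[(y - 1) * w + x + 1] + d[(y - 1) * w + x - 1]) * 0.25f;
    float tr  = dxx + dyy;
    float det = dxx * dyy - dxy * dxy;
    if (det <= 0.0f) return false;   /* curvatures of opposite sign: unstable point */
    return tr * tr / det < (r + 1.0f) * (r + 1.0f) / r;
}
```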
The description module caches the image data and, according to the coordinates of the detected image feature points, extracts image information from the neighborhood of each point according to the BRIEF (Binary Robust Independent Elementary Features) algorithm to obtain the description information of the feature point. The feature description module comprises a data control module and a description vector calculation module. The data control module reads the feature point coordinates and, taking each feature point as a reference together with the offsets stored in the random-number cache, extracts a fixed amount of image pixel data. The description vector calculation module performs pairwise gray-value comparisons on the extracted pixel data to obtain a binary description vector.
The idea of the BRIEF algorithm is to take an image region around the feature point, usually a square centered on the feature point, and to store a set of coordinate pairs, typically 256 pairs. For each coordinate pair, the two corresponding pixels are taken from the image block and their gray values compared: if the former is larger, the comparison result is 1, otherwise 0. After the 256 pairs have been compared, a 256-bit binary sequence is obtained; this sequence is the BRIEF description vector of the feature point. The implementation inside the FPGA is shown in Fig. 4: the image data of each frame is buffered in the image cache DPRAM, the detected feature points are buffered in a FIFO, the random generation module produces 256 random coordinate pairs after reset, and these coordinate pairs are stored in a DPRAM. For each feature point coordinate in the FIFO, the comparison module reads the corresponding gray values from the image cache DPRAM according to the feature point coordinate and the random coordinate pairs cached in the DPRAM, executes the BRIEF algorithm steps, and obtains the BRIEF description information of the current feature point, i.e. a 256-bit binary vector. It should be pointed out that the extraction of description information is not limited to the BRIEF algorithm; SIFT descriptors, SURF descriptors and the like may also be used.
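A C sketch of the BRIEF step just described: 256 pre-generated coordinate pairs are compared inside a patch around the feature point, producing a 256-bit vector. The patch radius and the way the random offsets are drawn here are illustrative; the hardware implementation instead reads the offsets from the random-number DPRAM mentioned above.

```c
#include <stdint.h>
#include <stdlib.h>

#define BRIEF_PAIRS 256
#define PATCH_RADIUS 15   /* illustrative: offsets stay inside a 31x31 patch */

typedef struct { int8_t x1, y1, x2, y2; } BriefPair;

static BriefPair pairs[BRIEF_PAIRS];

/* Generate the 256 random coordinate pairs once (after reset in hardware). */
void brief_init(unsigned seed) {
    srand(seed);
    for (int i = 0; i < BRIEF_PAIRS; i++) {
        pairs[i].x1 = (int8_t)(rand() % (2 * PATCH_RADIUS + 1) - PATCH_RADIUS);
        pairs[i].y1 = (int8_t)(rand() % (2 * PATCH_RADIUS + 1) - PATCH_RADIUS);
        pairs[i].x2 = (int8_t)(rand() % (2 * PATCH_RADIUS + 1) - PATCH_RADIUS);
        pairs[i].y2 = (int8_t)(rand() % (2 * PATCH_RADIUS + 1) - PATCH_RADIUS);
    }
}

/* Compute the 256-bit descriptor of the feature point at (fx, fy).
 * img is a row-major gray image of width w; the caller must ensure the
 * whole patch lies inside the image. desc receives 8 x 32 bits.           */
void brief_describe(const unsigned char *img, int w,
                    int fx, int fy, uint32_t desc[8]) {
    for (int i = 0; i < 8; i++) desc[i] = 0;
    for (int i = 0; i < BRIEF_PAIRS; i++) {
        unsigned char a = img[(fy + pairs[i].y1) * w + (fx + pairs[i].x1)];
        unsigned char b = img[(fy + pairs[i].y2) * w + (fx + pairs[i].x2)];
        if (a > b)                    /* "the former is larger" -> bit = 1 */
            desc[i / 32] |= (uint32_t)1u << (i % 32);
    }
}
```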
The feature storage module caches the feature point coordinates and description information of each frame. It uses dual-port RAM (Random Access Memory) with ping-pong operation, so that the previous frame and the current frame are always cached in two dual-port RAMs, RAMA and RAMB: the feature point coordinates and description information of frame N-1 are stored in RAMA, those of frame N are stored in RAMB, and those of frame N+1 are stored in RAMA again, overwriting the data of frame N-1 originally stored there, and so on.
The frame-to-frame feature point matching module comprises a BRIEF description vector distance calculator, a read-interrupt generator and a matched-pair FIFO memory. The image feature point matching part is shown in Fig. 2; its flow is first to read one feature point description vector of the current frame, then to read, one by one, the previous-frame feature point description vectors cached in the other RAM block, and to compute the distance between the two vectors; if the distance is below a threshold, the two points are regarded as a match. In the embodiment using BRIEF description vectors, two description vectors whose Hamming distance is below 30 are considered a successful match. The distance computation, shown in Fig. 5, is carried out by two state machines: state machine 1 reads the feature point description vectors of the current frame and the previous frame, and state machine 2 processes the previous-frame feature point description vectors. All successfully matched pairs are buffered into the matched-pair FIFO; when the matching process finishes, the FPGA issues an interrupt signal and waits for the DSP to respond.
The matching information is transferred to the DSP through the EMIF interface. After the DSP receives the matching information, it uses the random sample consensus algorithm (RANSAC, RANdom SAmple Consensus) to compute the transformation relation corresponding to these matches. The computed transformation is a matrix; different algorithms yield different matrix types. The result is transferred back through the EMIF interface and fed to the accurate matching unit of the FPGA.
The accurate matching unit computes the tracking-point position of the current frame from the matrix transformation relation and the cached tracking-point coordinates of the previous frame. It uses the computed tracking-point position as the center of a search region and performs template matching inside a neighborhood, for example a 7×7 neighborhood, finding the coordinate that best matches the cached template as the tracking coordinate. As shown in Fig. 2, the accurate matching unit comprises a search-region cache module, a correlation matching module, and a template cache and update module.
Search-region cache module: the control module that caches the image. The image data input from the external Camera Link interface is written into the DPRAM pixel by pixel until a whole frame has been written; when a new frame arrives, the previous frame is overwritten. The write port of the dual-port RAM receives the image data stream from the previous stage, and the read port serves the template matching module that follows.
Correlation matching: a region to be matched is created in the current frame, centered on the tracking-point coordinates; the template is extracted from the template cache and update module, and gray-level correlation is computed by traversing the region with a sliding window. The window center corresponding to the maximum of the correlation result is the best match position, and the window corresponding to that maximum is written back to the template cache and update module. For example, referring to Fig. 7, the XY coordinates of the tracking point in the current frame are read, and when the external image data arrives a 15×15 window centered on XY is generated. While this 15×15 region is being cached, a 7×7 match window is generated at the same time; as the pixels arrive one by one, the window moves forward one pixel at a time. The data inside this window is correlated with the template of the previous frame (also a 7×7 template), and the computation proceeds as a pipeline; after the image data of the whole search region has arrived, and after a delay of several clock cycles, the correlation results are complete. Whenever the result of a correlation match becomes valid, its matching degree is evaluated, and the region center point corresponding to the currently largest matching degree is cached, until the results of all points have been computed. When the whole traversal matching process ends, the position of the best match is available, and the cached point coordinates are passed as NewXY to the template update module.
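The traversal just described can be sketched in C as follows: a 7×7 template slides over the 15×15 search region cached around the predicted point, a similarity score is computed at each offset, and the center of the best-scoring window becomes the NewXY candidate. The sum of absolute differences is used here purely for brevity in place of the gray-level correlation computed by the hardware (so the minimum is kept instead of the maximum); the fixed array sizes follow the example in the text.

```c
#include <limits.h>
#include <stdlib.h>

#define SEARCH 15   /* search region side, as in the example */
#define TMPL    7   /* template side, as in the example      */

typedef struct { int x, y; } Coord;

/* Slide the 7x7 template over the 15x15 search region and return the
 * center of the best-matching window, relative to the search region's
 * top-left corner.  SAD (lower is better) stands in for the gray-level
 * correlation score (higher is better) used by the hardware.            */
Coord best_match(const unsigned char search[SEARCH][SEARCH],
                 const unsigned char tmpl[TMPL][TMPL]) {
    Coord best = { TMPL / 2, TMPL / 2 };
    long  best_score = LONG_MAX;
    for (int oy = 0; oy + TMPL <= SEARCH; oy++) {
        for (int ox = 0; ox + TMPL <= SEARCH; ox++) {
            long score = 0;
            for (int ty = 0; ty < TMPL; ty++)
                for (int tx = 0; tx < TMPL; tx++)
                    score += labs((long)search[oy + ty][ox + tx]
                                  - (long)tmpl[ty][tx]);
            if (score < best_score) {   /* keep the running best, as the
                                           hardware keeps the running maximum */
                best_score = score;
                best.x = ox + TMPL / 2; /* window center = NewXY candidate */
                best.y = oy + TMPL / 2;
            }
        }
    }
    return best;
}
```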
The template update module is responsible for updating the corresponding template according to the NewXY coordinates fed to it; the updated template then serves as the template for the next traversal match.
Fig. 5 shows a preferred embodiment of the frame-to-frame feature point matching. The description vector of a feature point is a 256-bit binary vector; together with the coordinates and scale information of the feature point itself (32 bits of data), they are merged into a 288-bit merged description. Through ping-pong buffering, the merged descriptions of the current frame and of the previous frame are cached in two DPRAM blocks. Fig. 5 shows the internal data processing flow of this preferred embodiment. The matching process is carried out by two finite state machines (FSMs). The first state machine treats the reading of one description of the current frame as one cycle; the second state machine finds the best match point within each cycle. In the figure,
state machine 1 is triggered by each new feature point coming from the feature description module; it continuously iterates over the feature points of the previous frame and provides one pair of feature points per cycle. The processing of each step is as follows:
OTHER: the state after undefined behavior or reset. Once this state is entered, the machine immediately jumps to the WAIT state.
WAIT: wait until the new-feature-point signal is valid, then jump to the READ CURRENT FRAME state.
READ CURRENT FRAME: read one feature point from the current-frame feature point DPRAM cache, then jump to the READ PREVIOUS FRAME state.
READ PREVIOUS FRAME: take one feature point out of the previous-frame feature point DPRAM cache, then generate a NewResult signal for state machine 2. In addition, this state is responsible for judging whether all feature points of the previous frame have been iterated. If the iteration is finished, jump to the WRITE state; otherwise continue reading the next feature point of the previous frame.
WRITE: write the matched feature point output by state machine 2 into the match-point DPRAM, then jump to the WAIT state.
State machine 2 receives the Hamming distance between two feature points and then selects the feature point pair with the shortest distance. Pairs that satisfy the requirement are stored in the match-point DPRAM. The functions of the individual states of state machine 2 are as follows:
OTHER: the state after an undefined state or reset; once entered, the machine immediately jumps to the WAIT state.
WAIT: wait until the NewResult signal is valid, then jump to the FIND_MIN state and initialize a minimum-distance register (MIN_DIST) as the threshold for judging whether a match exists; only feature point pairs whose distance after comparison is below this threshold are considered matched pairs.
FIND_MIN: this state reads one distance result and compares it with MIN_DIST. If the distance is smaller than MIN_DIST, the current distance is assigned to MIN_DIST and the current feature point pair is marked as a successfully matched pair. If the distance is larger than MIN_DIST, nothing is done. This state iterates until all distances have been compared, and then the state machine jumps to the WRITE state.
WRITE: if the feature point pair found by the FIND_MIN state is signalled as a valid pair, write this feature point pair into the feature-point-pair DPRAM. Once this state is entered, the next state is the WAIT state.
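The behavior of state machine 2 can be mirrored in a compact C sketch in which one function call corresponds to one clock cycle; the state names follow the description above, while the signal and field names, and the use of the 30-count threshold of the embodiment as the initial MIN_DIST value, are illustrative assumptions.

```c
#include <stdbool.h>

/* Software mirror of state machine 2: one call = one clock cycle.
 * Distance results arrive one per cycle from the distance calculator.   */
typedef enum { S_OTHER, S_WAIT, S_FIND_MIN, S_WRITE } Fsm2State;

typedef struct {
    Fsm2State state;
    int  min_dist;    /* MIN_DIST register                            */
    int  best_index;  /* previous-frame feature with minimum distance */
    bool have_match;  /* valid-pair flag checked in WRITE             */
} Fsm2;

#define MATCH_THRESHOLD 30  /* initial MIN_DIST, per the embodiment */

/* new_result: NewResult strobe from state machine 1
 * dist, prev_index: current distance result and its previous-frame index
 * last_dist: true when the last previous-frame feature has been compared
 * Returns true (and fills *matched_index) in the cycle a pair is written. */
bool fsm2_step(Fsm2 *f, bool new_result, int dist, int prev_index,
               bool last_dist, int *matched_index) {
    switch (f->state) {
    case S_OTHER:                     /* reset or undefined -> WAIT      */
        f->state = S_WAIT;
        break;
    case S_WAIT:
        if (new_result) {             /* initialize the MIN_DIST register */
            f->min_dist   = MATCH_THRESHOLD;
            f->have_match = false;
            f->state      = S_FIND_MIN;
        }
        break;
    case S_FIND_MIN:
        if (dist < f->min_dist) {     /* keep the running minimum         */
            f->min_dist   = dist;
            f->best_index = prev_index;
            f->have_match = true;
        }
        if (last_dist) f->state = S_WRITE;
        break;
    case S_WRITE:
        f->state = S_WAIT;
        if (f->have_match) {          /* write the pair to the DPRAM      */
            *matched_index = f->best_index;
            return true;
        }
        break;
    }
    return false;
}
```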
Fig. 6 is a schematic diagram of the FPGA-DSP communication loop. A certain number of clock cycles after each frame has been input, all matched point pairs between the current frame and the previous frame are cached inside the FPGA; the FPGA then generates an interrupt signal and passes it to the DSP. After the DSP captures the interrupt, it initiates an EDMA transfer according to its configuration and moves the cached point pairs from the FPGA into the DSP. From the transferred point-pair data, the DSP computes the corresponding transformation matrix between the points using random sample consensus (RANSAC). The matrix obtained by the DSP is in turn announced through an interrupt signal; the FPGA captures the interrupt and reads the matrix parameters.
Referring to Fig. 8, the working process of the device is as follows: the external analog camera inputs image data in Camera Link format; driven by the pixel clock, the image data enters the SIFT feature detection module of the next stage; the detected feature point information is passed to the BRIEF feature description extraction module, which extracts the description vector of each feature point; the obtained description vectors are matched between adjacent frames in the feature matching module; feature matching produces, group by group, the successfully matched point pairs, which are stored in a FIFO; the DSP takes out the matched point data and uses the RANSAC algorithm to compute, in matrix form, the geometric transformation between consecutive frames; the FPGA then obtains this matrix data and, according to the transformation relation combined with the previous-frame tracking-point coordinates, obtains the tracking-point coordinate range in the current frame; the gray-level correlation accurate matching module performs gray-level correlation matching in the neighborhood of this coordinate to locate the tracking point precisely. The final tracking point is superimposed on the image and output for display.
It will be readily understood by those skilled in the art that the above description is only a preferred embodiment of the present invention and is not intended to limit the invention; any modification, equivalent substitution and improvement made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (7)

1. A ground target tracking device based on image key-point feature matching, characterized by comprising: a programmable gate array (FPGA) and a digital signal processor (DSP);
the FPGA comprises a feature extraction unit, a tracking-point coordinate calculation unit and an accurate matching unit; the feature extraction unit is configured to extract image features from each frame of the externally input image sequence, complete feature matching between adjacent frames according to the image features, and send the successful inter-frame matching results to the DSP; the tracking-point coordinate calculation unit is configured to compute the tracking-point coordinates in the current frame from the previous frame's tracking-point coordinates according to the inter-frame geometric transformation fed back by the DSP; the accurate matching unit is configured to perform template-based cross-correlation matching in the neighborhood of the tracking-point position computed by the DSP for the current frame, and to obtain the accurate tracking-point coordinates;
the DSP is configured to compute the transformation relation between adjacent image frames from the feature matching results output by the FPGA and feed it back to the FPGA.
2. The ground target tracking device based on image key-point feature matching according to claim 1, characterized in that the feature extraction unit comprises a feature detection module, a feature description module, a feature storage module and a frame-to-frame feature point matching module;
the feature detection module is configured to perform multi-scale Gaussian filtering and differencing on the image data, determine extreme points, reject low-response points, and obtain the coordinates of the detected image feature points;
the feature description module is configured to extract image information from the neighborhood of each feature point according to the image feature point coordinates and obtain the description vector of the feature point;
the feature storage module is configured to cache the feature point coordinates and description vectors of each frame; it comprises two dual-port RAMs, RAMA and RAMB, operated in ping-pong fashion to cache the previous frame and the current frame, i.e. the feature point coordinates and description vectors of frame N-1 are stored in RAMA, and those of frame N are stored in RAMB;
the frame-to-frame feature point matching module is configured to complete feature matching between adjacent frames according to the feature point coordinates and description vectors of the images and send the successful matching results to the DSP.
3. The ground target tracking device based on image key-point feature matching according to claim 2, characterized in that
the feature detection module comprises a down-sampling module and two structurally identical feature point detection modules; each feature point detection module comprises multiple Gaussian filtering units, multiple difference calculation units, multiple window generation units and one feature point selection unit;
the Gaussian filtering units of the first feature point detection module are configured to apply, in parallel, Gaussian filters with different scale parameters to each frame of the image sequence produced by the analog camera device; a difference calculation unit is configured to subtract the Gaussian-filtered images of two adjacent scales to obtain a difference-of-Gaussian image; a window generation unit is configured to generate a window centered on a pixel of the difference-of-Gaussian image and bounded by its neighborhood; the feature point selection unit is configured to determine extreme points within the generated windows, take the extreme points as candidate feature points, delete low-contrast points and edge points from the candidate feature points, and retain the remaining candidate feature points as the final feature points;
the Gaussian filtering unit of intermediate scale is selected from the multiple Gaussian filtering units and its output Gaussian-filtered image is fed to the down-sampling module; the down-sampling module is configured to down-sample the input image and output the down-sampled image to the second feature point detection module, and the second feature point detection module determines feature points in the same way as the first feature point detection module.
4. The ground target tracking device based on image key-point feature matching according to claim 2, characterized in that the feature description module comprises a data control module and a description vector calculation module;
the data control module is configured to read the feature point coordinates and, taking each feature point as a reference together with the offsets stored in a random-number cache, extract a fixed amount of image pixel data;
the description vector calculation module is configured to perform pairwise gray-value comparisons on the extracted image pixel data to obtain a binary description vector.
5. The ground target tracking device based on image key-point feature matching according to claim 2, characterized in that the frame-to-frame feature point matching module comprises a description vector distance calculator, a read-interrupt generator and a matched-pair FIFO memory;
the description vector distance calculator uses a first state machine to read the feature points of the current frame and the previous frame and a second state machine to read the feature point description vectors of the current frame and the previous frame; the distance between the description vectors of the current frame and the previous frame is computed, and if it is below a threshold the feature points of the two frames are regarded as successfully matched;
the matched-pair FIFO memory is configured to store the successfully matched point pairs;
the read-interrupt generator is configured to issue an interrupt signal to the DSP when the matching of the two frames is finished and to wait for the DSP to respond.
6. The ground target tracking device based on image key-point feature matching according to claim 1, characterized in that the accurate matching unit comprises a search-region cache module, a correlation matching module, and a template cache and update module;
the search-region cache module is configured to cache the image input from the external interface, a new frame overwriting the previous frame;
the correlation matching module is configured to create a region to be matched in the current frame, centered on the tracking-point coordinates, extract the template from the template cache and update module, and perform gray-level correlation by traversing the region with a sliding window; the window center corresponding to the maximum of the correlation result is the best match position, and the window corresponding to that maximum is written back to the template cache and update module;
the template update module is configured to cache and update the template.
7. The ground target tracking device based on image key-point feature matching according to claim 1, characterized in that the DSP is configured to, after capturing the interrupt signal sent by the FPGA, initiate an enhanced direct memory access transfer, receive the successfully matched feature point pairs from the FPGA, compute, using random sample consensus, the transformation matrix reflecting the inter-frame geometric relation between the feature point pairs, send an interrupt signal to the FPGA, and, after receiving a response, feed the transformation matrix back to the FPGA.
CN201610596748.1A 2016-07-26 2016-07-26 Ground target tracking device based on feature matching Active CN106204660B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610596748.1A CN106204660B (en) 2016-07-26 2016-07-26 Ground target tracking device based on feature matching

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610596748.1A CN106204660B (en) 2016-07-26 2016-07-26 Ground target tracking device based on feature matching

Publications (2)

Publication Number Publication Date
CN106204660A true CN106204660A (en) 2016-12-07
CN106204660B CN106204660B (en) 2019-06-11

Family

ID=57495902

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610596748.1A Active CN106204660B (en) 2016-07-26 2016-07-26 Ground target tracking device based on feature matching

Country Status (1)

Country Link
CN (1) CN106204660B (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103065131A (en) * 2012-12-28 2013-04-24 中国航天时代电子公司 Method and system of automatic target recognition tracking under complex scene
CN103646232A (en) * 2013-09-30 2014-03-19 华中科技大学 Aircraft ground moving target infrared image identification device
CN104978749A (en) * 2014-04-08 2015-10-14 南京理工大学 FPGA (Field Programmable Gate Array)-based SIFT (Scale Invariant Feature Transform) image feature extraction system
JP2015210677A (en) * 2014-04-25 2015-11-24 国立大学法人 東京大学 Information processor and information processing method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Wu Ping et al.: "Research on an accelerated pulmonary nodule detection algorithm based on template matching", Computer Engineering and Applications *
Zhang Haopeng: "Design and implementation of a real-time target tracking system based on a cross-correlation computing accelerator", China Master's Theses Full-text Database, Information Science and Technology *
Wang Jianhui: "Research on hardware architectures for real-time visual feature detection and matching", China Doctoral Dissertations Full-text Database, Information Science and Technology *
Zhong Luming: "Video target tracking method under dynamic background based on SIFT", Journal of Nanchang Institute of Technology *

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107123078A (en) * 2017-04-25 2017-09-01 北京小米移动软件有限公司 The method and device of display image
CN107316038B (en) * 2017-05-26 2020-04-28 中国科学院计算技术研究所 SAR image ship target statistical feature extraction method and device
CN107316038A (en) * 2017-05-26 2017-11-03 中国科学院计算技术研究所 A kind of SAR image Ship Target statistical nature extracting method and device
CN107516296A (en) * 2017-07-10 2017-12-26 昆明理工大学 A kind of moving object detection tracking system and method based on FPGA
CN107657175A (en) * 2017-09-15 2018-02-02 北京理工大学 A kind of homologous detection method of malice sample based on image feature descriptor
CN107992100A (en) * 2017-12-13 2018-05-04 中国科学院长春光学精密机械与物理研究所 High frame frequency image tracking method based on programmable logic array
CN107992100B (en) * 2017-12-13 2021-01-15 中国科学院长春光学精密机械与物理研究所 High frame rate image tracking method and system based on programmable logic array
CN109146918A (en) * 2018-06-11 2019-01-04 西安电子科技大学 A kind of adaptive related objective localization method based on piecemeal
CN109246331A (en) * 2018-09-19 2019-01-18 郑州云海信息技术有限公司 A kind of method for processing video frequency and system
CN109801207A (en) * 2019-01-08 2019-05-24 桂林电子科技大学 The image feature high speed detection and matching system of CPU-FPGA collaboration
CN110956178A (en) * 2019-12-04 2020-04-03 深圳先进技术研究院 Plant growth measuring method and system based on image similarity calculation and electronic equipment
CN110956178B (en) * 2019-12-04 2023-04-18 深圳先进技术研究院 Plant growth measuring method and system based on image similarity calculation and electronic equipment
CN111460941A (en) * 2020-03-23 2020-07-28 南京智能高端装备产业研究院有限公司 Visual navigation feature point extraction and matching method in wearable navigation equipment
CN111369650A (en) * 2020-03-30 2020-07-03 广东精鹰传媒股份有限公司 Method for realizing object connecting line effect of two-dimensional space and three-dimensional space
CN112182042A (en) * 2020-10-12 2021-01-05 上海扬灵能源科技有限公司 Point cloud feature matching method and system based on FPGA and path planning system
CN112233252A (en) * 2020-10-23 2021-01-15 上海影谱科技有限公司 AR target tracking method and system based on feature matching and optical flow fusion
CN112233252B (en) * 2020-10-23 2024-02-13 上海影谱科技有限公司 AR target tracking method and system based on feature matching and optical flow fusion
CN112529016A (en) * 2020-12-21 2021-03-19 浙江欣奕华智能科技有限公司 Method and device for extracting feature points in image
CN112926593A (en) * 2021-02-20 2021-06-08 温州大学 Image feature processing method and device for dynamic image enhancement presentation
CN113838089A (en) * 2021-09-20 2021-12-24 哈尔滨工程大学 Bubble trajectory tracking method based on feature matching algorithm
CN113838089B (en) * 2021-09-20 2023-12-15 哈尔滨工程大学 Bubble track tracking method based on feature matching algorithm
CN114283065A (en) * 2021-12-28 2022-04-05 北京理工大学 ORB feature point matching system and matching method based on hardware acceleration
CN114283065B (en) * 2021-12-28 2024-06-11 北京理工大学 ORB feature point matching system and method based on hardware acceleration
CN116433887A (en) * 2023-06-12 2023-07-14 山东鼎一建设有限公司 Building rapid positioning method based on artificial intelligence
CN116433887B (en) * 2023-06-12 2023-08-15 山东鼎一建设有限公司 Building rapid positioning method based on artificial intelligence

Also Published As

Publication number Publication date
CN106204660B (en) 2019-06-11

Similar Documents

Publication Publication Date Title
CN106204660A (en) A kind of Ground Target Tracking device of feature based coupling
CN111476709B (en) Face image processing method and device and electronic equipment
Qian et al. BADet: Boundary-aware 3D object detection from point clouds
CN105096377B (en) A kind of image processing method and device
US20230030020A1 (en) Defining a search range for motion estimation for each scenario frame set
CN106331723B (en) Video frame rate up-conversion method and system based on motion region segmentation
CN104508680B (en) Improved video signal is tracked
Kim et al. Exposing fake faces through deep neural networks combining content and trace feature extractors
CN113095106A (en) Human body posture estimation method and device
CN103955682A (en) Behavior recognition method and device based on SURF interest points
Peng et al. RGB-D human matting: A real-world benchmark dataset and a baseline method
US12094240B2 (en) Object tracking method and object tracking device
CN111882581A (en) Multi-target tracking method for depth feature association
CN118658062A (en) Occlusion environment pose estimation method based on foreground probability
Stahl et al. Ist-style transfer with instance segmentation
WO2023056833A1 (en) Background picture generation method and apparatus, image fusion method and apparatus, and electronic device and readable medium
CN113724176B (en) Multi-camera motion capture seamless connection method, device, terminal and medium
CN114863199A (en) Target detection method based on optimized anchor frame mechanism
CN115660969A (en) Image processing method, model training method, device, equipment and storage medium
CN114821482A (en) Vector topology integrated passenger flow calculation method and system based on fisheye probe
CN114724209A (en) Model training method, image generation method, device, equipment and medium
CN115210758A (en) Motion blur robust image feature matching
Suo et al. Neural3d: Light-weight neural portrait scanning via context-aware correspondence learning
Wang et al. ST-PixLoc: a scene-agnostic network for enhanced camera localization
Fang et al. CAMION: Cascade multi-input multi-output network for skeleton extraction

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant