CN106204660A - A Ground Target Tracking Device Based on Feature Matching - Google Patents

A Ground Target Tracking Device Based on Feature Matching

Info

Publication number
CN106204660A
Authority
CN
China
Prior art keywords
feature
image
point
matching
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610596748.1A
Other languages
Chinese (zh)
Other versions
CN106204660B (en)
Inventor
钟胜
喻鹏
张清洋
崔宗阳
董太行
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong University of Science and Technology filed Critical Huazhong University of Science and Technology
Priority to CN201610596748.1A priority Critical patent/CN106204660B/en
Publication of CN106204660A publication Critical patent/CN106204660A/en
Application granted granted Critical
Publication of CN106204660B publication Critical patent/CN106204660B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00: General purpose image data processing
    • G06T1/20: Processor architectures; Processor configuration, e.g. pipelining
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00: General purpose image data processing
    • G06T1/60: Memory management

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a ground target tracking device based on image key-point feature matching, comprising a programmable gate array (FPGA) and a digital signal processor (DSP). The FPGA is used to extract image features from the image sequence supplied by an external camera, to perform feature matching between adjacent frames, and to transmit the successful inter-frame feature matching results to the DSP; it then performs precise cross-correlation matching according to the inter-frame geometric transformation relationship fed back by the DSP. The digital signal processor DSP is used to calculate the transformation relationship between adjacent image frames according to the feature matching results output by the FPGA. The invention implements the complex feature-point-based target tracking method entirely on an embedded FPGA and the computation of the inter-frame transformation relationship on the DSP, balancing algorithmic complexity against the low-power requirements of an embedded board, so that large volumes of image data can be processed in real time; compared with the prior art, the processing speed of the algorithm is improved by orders of magnitude.

Description

A Ground Target Tracking Device Based on Feature Matching

Technical Field

The invention belongs to the field of digital image signal processing, and in particular relates to a ground target tracking device based on feature matching.

Background Art

Target tracking in computer vision has a wide range of research applications. In the embedded field, a DSP is commonly used as the core device on which tracking methods are implemented; however, under power-consumption and real-time constraints, it is difficult to realize complex target tracking algorithms on an embedded DSP. Image feature points are invariant to rotation, scale, and illumination, which makes them widely applicable in image registration, target tracking, and related fields. The feature point algorithms themselves, however, are relatively complex, which places many obstacles in the way of their application in the embedded field. For example, feature extraction algorithms such as SIFT and SURF must build a scale space and obtain Gaussian-filtered images at multiple scales for every single frame, which conflicts with the limited computing resources, strict power budgets, and high real-time requirements of embedded scenarios. To meet the tracking requirements of embedded applications, dedicated hardware such as ASICs and FPGAs can be used to assist image processing and accelerate the algorithms.

Wang, J., et al., "An Embedded System-on-Chip Architecture for Real-time Visual Detection and Matching," IEEE Transactions on Circuits & Systems for Video Technology, 2014, 24(3): pp. 525-538, proposed a real-time visual feature matching system implemented on a single FPGA. The system adopts a SIFT+BRIEF architecture, implements the entire algorithm on one FPGA, and performs registration of consecutive frame images in real time, reaching 60 FPS on 720p images. The main features of the system are that the hardware verification platform relies on a single FPGA alone, that frame-to-frame feature matching is realized on chip in real time, and that resource occupation is low. The limitation of the method is that the work is confined to feature extraction acceleration and inter-frame feature matching; it does not go further to coordinate FPGA and DSP resources rationally into a complete tracking system.

Summary of the Invention

Aiming at the defects of the prior art and the corresponding technical requirements, the present invention provides a hardware implementation method and device for ground target tracking based on feature matching. The complex feature-point-based target tracking algorithm is implemented entirely on an embedded FPGA, and the computation of the inter-frame transformation relationship is implemented on a DSP, balancing algorithmic complexity against the low-power requirements of an embedded board. With the FPGA as the core algorithm implementation device, large volumes of image data can be processed in real time; compared with the prior art, the processing speed of the algorithm is improved by orders of magnitude.

A ground target tracking device based on image key-point feature matching, comprising: a programmable gate array (FPGA) and a digital signal processor (DSP);

The programmable gate array FPGA comprises a feature extraction unit, a tracking point coordinate calculation unit, and a precise matching unit. The feature extraction unit is used to extract image features from each frame of an externally input image sequence, to complete feature matching between adjacent frames according to the image features, and to transmit the successful inter-frame feature matching results to the DSP. The tracking point coordinate calculation unit is used to compute the tracking point coordinates in the current frame from the tracking point coordinates of the previous frame, according to the inter-frame geometric transformation relationship fed back by the DSP. The precise matching unit is used to perform template-based cross-correlation matching in the neighborhood of the current-frame tracking point coordinates computed via the DSP, taking those coordinates as the center, to obtain the precise coordinates of the tracking point;

The digital signal processor DSP is used to calculate the transformation relationship between adjacent image frames according to the feature matching results output by the FPGA, and to feed it back to the FPGA.

Further, the feature extraction unit comprises a feature detection module, a feature description module, a feature storage module, and an inter-frame feature point registration module;

The feature detection module is used to perform Gaussian filtering at multiple scales and differencing on the image data, to identify extremum points, and to discard points of low response, obtaining the coordinates of the detected image feature points;

The feature description module is used to extract image information from the neighborhood of each feature point according to the image feature point coordinates, obtaining the description vector of the feature point;

The feature storage module is used to cache the feature point coordinates and the feature point description information of each frame of image; it comprises two dual-port random access memories, RAMA and RAMB, operated in ping-pong fashion to cache the previous-frame information and the current-frame information: if the feature point coordinates and description vectors of frame N-1 are stored in RAMA, then the feature point coordinates and description information of frame N are stored in RAMB;

The inter-frame feature point registration module is used to complete the feature matching between adjacent frames according to the feature point coordinates and the feature point description information of the images, and to transmit the successful inter-frame feature matching results to the DSP.

Further,

The feature detection module comprises a downsampling module and two structurally identical groups of feature point detection modules; each feature point detection module comprises a plurality of Gaussian filtering units, a plurality of difference calculation units, a plurality of window generation units, and one feature point selection unit;

The plurality of Gaussian filtering units of the first group of feature point detection modules are used to apply, in parallel, Gaussian filters with different scale parameters to each frame of the image sequence produced by the analog camera device. The difference calculation units are used to perform a difference operation on pairs of images filtered at two adjacent scales to obtain difference-of-Gaussian images. The window generation units are used to generate windows centered on the pixels of a difference-of-Gaussian image and bounded by their neighborhoods. The feature point selection unit is used to determine extremum points within the generated windows as candidate feature points, to delete low-contrast points and edge points from the candidates, and to retain the remaining candidate feature points as the final feature points;

A Gaussian filtering unit of intermediate scale is selected from the plurality of Gaussian filtering units, and the Gaussian-filtered image it outputs is fed to the downsampling module; the downsampling module downsamples the input image and outputs the downsampled image to the second group of feature point detection modules, which determine feature points in the same manner as the first group.

Further,

The feature description module comprises a data control module and a description vector calculation module;

The data control module is used to read the feature point coordinates and to extract a set quantity of image pixel data, taking each feature point as a reference together with the offsets stored in a random-number cache;

The description vector calculation module is used to obtain a binary description vector by comparing the gray values of the extracted image pixels pairwise.

Further,

The inter-frame feature point registration module comprises a description vector distance calculator, a read-interrupt generator, and a matching point pair FIFO memory;

The description vector distance calculator uses a first state machine to read the feature points of the current frame and the previous frame, and a second state machine to read the feature point description vectors of the current frame and the previous frame; it computes the distance between the description vectors of the two frames, and if this distance is less than a given threshold the feature points of the two frames are deemed successfully matched;

The matching point pair FIFO memory is used to store the successfully matched point pairs;

The read-interrupt generator is used to issue an interrupt signal to the DSP when the inter-frame feature point registration ends, and to wait for the DSP to respond.

Further,

The precise matching unit comprises a search area cache module, a correlation matching module, and a template cache and update module;

The search area cache module is used to cache the image input from the external interface; when a new frame arrives, it overwrites the image of the previous frame;

The correlation matching module is used to create a region to be matched centered on the tracking point coordinates in the current frame, to extract the template from the template cache and update module, and to perform gray-level correlation operations by traversing windows over the region; the center of the window corresponding to the maximum of the gray-level correlation results is the best matching position, and that window is simultaneously written to the template cache and update module as the updated template;
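The window-traversal gray-level correlation described above can be modeled in software as a sketch. The function below is illustrative, not the FPGA implementation; the name `correlate_match` and the choice of normalized cross-correlation as the gray-level correlation measure are assumptions, since the text does not fix the exact correlation formula.

```python
import numpy as np

def correlate_match(search_area, template):
    """Slide the template over the search area and return the center of the
    best-matching window, using normalized cross-correlation (NCC)."""
    th, tw = template.shape
    sh, sw = search_area.shape
    t_norm = template.astype(np.float64) - template.mean()
    best_score, best_pos = -np.inf, (0, 0)
    for y in range(sh - th + 1):
        for x in range(sw - tw + 1):
            w = search_area[y:y + th, x:x + tw].astype(np.float64)
            w_norm = w - w.mean()
            denom = np.sqrt((w_norm ** 2).sum() * (t_norm ** 2).sum())
            score = (w_norm * t_norm).sum() / denom if denom > 0 else 0.0
            if score > best_score:
                best_score = score
                best_pos = (y + th // 2, x + tw // 2)  # window center
    return best_pos, best_score
```

In the hardware the same traversal is pipelined; the double loop here simply makes the window-by-window structure explicit.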

The template cache and update module is used to cache the template.

Further,

The DSP is used to initiate an enhanced direct memory access (EDMA) transfer after capturing the interrupt signal sent by the FPGA, to receive the successfully matched feature point pairs from the FPGA, and to compute, by random sample consensus, the transformation matrix between the feature point pairs that reflects the inter-frame geometric transformation relationship; it then sends an interrupt signal to the FPGA and, after receiving a response, feeds the transformation matrix back to the FPGA.
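The random sample consensus computation performed by the DSP can be sketched as follows. This is a software model under the assumption of an affine inter-frame transformation (the text does not fix the model class), and all function names are illustrative.

```python
import random
import numpy as np

def estimate_affine(src, dst):
    """Least-squares affine transform (2x3 matrix) mapping src -> dst."""
    n = len(src)
    A = np.zeros((2 * n, 6))
    b = np.zeros(2 * n)
    for i, ((x, y), (u, v)) in enumerate(zip(src, dst)):
        A[2 * i] = [x, y, 1, 0, 0, 0]
        A[2 * i + 1] = [0, 0, 0, x, y, 1]
        b[2 * i], b[2 * i + 1] = u, v
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p.reshape(2, 3)

def ransac_affine(src, dst, iters=200, tol=2.0, seed=0):
    """RANSAC: repeatedly fit an affine model to 3 random point pairs, keep
    the model with the most inliers, then refit on all inliers."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        idx = rng.sample(range(len(src)), 3)
        M = estimate_affine([src[i] for i in idx], [dst[i] for i in idx])
        inliers = []
        for i, ((x, y), (u, v)) in enumerate(zip(src, dst)):
            pu, pv = M @ np.array([x, y, 1.0])
            if (pu - u) ** 2 + (pv - v) ** 2 < tol ** 2:
                inliers.append(i)
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return estimate_affine([src[i] for i in best_inliers],
                           [dst[i] for i in best_inliers]), best_inliers
```

The mismatched pairs produced by descriptor matching play the role of outliers; the consensus step ensures the fed-back transformation matrix is estimated only from geometrically consistent pairs.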

In general, compared with the prior art, the above technical solutions conceived by the present invention have the following advantages:

Through decomposition of the feature point algorithm and rational FPGA/DSP cooperation, the present invention realizes a highly parallelized ground target tracking device based mainly on feature matching. The method exploits the parallel acceleration of the FPGA and efficient FPGA-DSP cooperation to achieve ground target tracking under a complex algorithmic architecture. Compared with the traditional approach of using a DSP as the main signal processor, more complex algorithms can be realized: the feature point algorithm, the correlation matching algorithm, and the sample consensus algorithm are organically combined, yielding better real-time performance in ground tracking, up to 50 frames per second.

By decomposing the feature algorithm into modules and using the FPGA as the core processing device, the present invention implements the complex feature-point-based target tracking algorithm and the cross-correlation fine matching algorithm entirely on an embedded FPGA, balancing algorithmic complexity against the low-power requirements of the embedded board. The bulk of the algorithm's computation is realized through the parallelized design of the FPGA, so that large amounts of data can be processed in real time; compared with a pure-DSP implementation of the algorithm, the processing speed is improved by orders of magnitude.

The present invention adopts a dynamic cache structure to link the different processing components. The use of FIFOs and synchronous memories effectively solves the interconnection problems caused by differences in data width, data rate, and interface, reducing resource consumption and improving the resource utilization of the system.

The present invention can receive and process real-time images synchronously and achieve real-time image tracking. With the FPGA serving as both the architectural core and the data processing core of the system, the module interfaces are standardized and easy to reconfigure. Featuring real-time high-volume data throughput, low power consumption, and small size, the device can be effectively applied in target tracking, navigation, recognition, and related fields.

Brief Description of the Drawings

Fig. 1 is an overall structural block diagram of the ground target tracking device based on image key-point feature matching of the present invention;

Fig. 2 is a detailed structural block diagram of the ground target tracking device based on image key-point feature matching of the present invention;

Fig. 3 is a block diagram of the SIFT feature detection part of the ground target tracking device based on image key-point feature matching of the present invention;

Fig. 4 is a structural block diagram of the BRIEF feature extraction of the ground target tracking device based on image key-point feature matching of the present invention;

Fig. 5 is a structural block diagram of the feature matching between adjacent frames of the ground target tracking device based on image key-point feature matching of the present invention;

Fig. 6 is a schematic diagram of the communication between the FPGA and the DSP of the ground target tracking device based on image key-point feature matching of the present invention;

Fig. 7 is a detailed structural implementation diagram of the gray-level cross-correlation matching of the ground target tracking device based on image key-point feature matching of the present invention;

Fig. 8 is a working flowchart of the ground target tracking device based on image key-point feature matching of the present invention.

Detailed Description of the Embodiments

In order to make the objects, technical solutions, and advantages of the present invention clearer, the present invention is further described in detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the present invention and are not intended to limit it. In addition, the technical features involved in the various embodiments of the present invention described below may be combined with each other as long as they do not conflict.

As shown in Fig. 1, the ground target tracking device based on image key-point feature matching of the present invention comprises a programmable gate array (FPGA) and a digital signal processor (DSP). The interface between the FPGA and the DSP is an EMIF interface, the interface between the FPGA and the analog camera unit is a Camera Link interface, and the interface between the FPGA and the external host computer is an RS-422 serial interface.

The programmable gate array FPGA comprises a feature extraction unit, a tracking point coordinate calculation unit, and a precise matching unit. The feature extraction unit is used to extract image features from each frame of the image sequence produced by the analog camera device and to complete the feature matching between adjacent frames according to the image features, transmitting the inter-frame feature matching results to the DSP. The tracking point coordinate calculation unit is used to compute the tracking point coordinates in the current frame from the tracking point coordinates of the previous frame, according to the inter-frame geometric transformation relationship fed back by the DSP. The precise matching unit is used to perform template-based cross-correlation matching in the neighborhood of the current-frame tracking point coordinates, taking them as the center, to obtain the precise coordinates of the tracking point.

The digital signal processor DSP comprises a transformation relationship calculation unit, which is used to calculate the transformation relationship between adjacent image frames according to the feature matching results output by the FPGA.

Fig. 2 shows a preferred embodiment of the programmable gate array FPGA, which comprises a feature extraction unit, a tracking point coordinate calculation unit, and a precise matching unit.

The feature extraction unit in turn comprises a feature detection module, a feature description module, a feature storage module, and an inter-frame feature point registration module.

In the present invention, the feature detection module preferably extracts SIFT (Scale Invariant Feature Transform) features.

The feature detection module performs Gaussian filtering at multiple scales and differencing on the image data, identifies extremum points, and discards points of low response, obtaining the coordinate information of the detected image feature points. As shown in Fig. 3, the feature detection module comprises a downsampling module and two structurally identical groups of feature point detection modules. Each feature point detection module comprises multiple Gaussian filtering units, multiple difference calculation units, multiple window generation units, and one feature point selection unit. The multiple Gaussian filtering units of the first group apply, in parallel, Gaussian filters with different scale parameters to each frame of the image sequence produced by the analog camera device; the difference calculation units subtract pairs of images filtered at two adjacent scales to obtain difference-of-Gaussian images; the window generation units generate windows centered on the pixels of a difference-of-Gaussian image and bounded by their neighborhoods; the feature point selection unit determines extremum points within the generated windows as candidate feature points, deletes low-contrast points and edge points from the candidates, and retains the remainder as the final feature points. A Gaussian filtering unit of intermediate scale is selected from the multiple units, and the Gaussian-filtered image it outputs is fed to the downsampling module, which downsamples the input image and passes the result to the second group of feature point detection modules; the second group determines feature points in the same manner as the first.

The feature point detection stage of the SIFT algorithm requires Gaussian filtering of the image under multiple scale parameters σ, together with downsampling of the image, to form the Gaussian scale space. Here, multiple Gaussian filter templates are set up inside the FPGA; driven by the pixel clock, the image data enters the templates pixel by pixel to be convolved with the image, so that multiple Gaussian convolutions proceed simultaneously, and the convolution results likewise emerge pixel by pixel under the pixel clock, delayed by a certain number of clock cycles relative to the original image data. After all the image data has been input, following this fixed delay, the filtering results under all scale parameters σ, i.e., multiple Gaussian-filtered images, are produced simultaneously, generating the Gaussian scale-space image group. Within this group, gray-level differences are taken between each pair of adjacent layers to generate the difference-of-Gaussian scale space. In the difference-of-Gaussian scale space, excluding the top and bottom layers, each pixel of every remaining layer has a neighborhood of 26 pixels, i.e., a 3×3×3 region around the pixel. If a pixel is a gray-level extremum (a maximum or a minimum) within this 26-pixel neighborhood, it is preliminarily judged to be a feature point. Each preliminarily determined feature point is then tested with the Hessian matrix to decide whether it is an edge response point, and edge response points are eliminated. It should be noted that the feature point detection algorithm chosen here is SIFT, but the invention is not limited to SIFT; other widely used feature point detection methods, such as the SURF algorithm, Harris corners, and FAST corners, may also be employed.
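The 26-neighborhood extremum test and low-response rejection described above can be sketched as a behavioral model. The sketch assumes the difference-of-Gaussian layers are already stacked into a 3-D array; the contrast threshold value is illustrative and stands in for the "low response" rejection.

```python
import numpy as np

def dog_extrema(dog, contrast_thresh=0.03):
    """Find candidate feature points: pixels that are strict extrema of their
    26-pixel 3x3x3 neighborhood in the difference-of-Gaussian stack `dog`
    (shape: scales x height x width), excluding the top and bottom layers,
    and whose response magnitude exceeds a contrast threshold."""
    s, h, w = dog.shape
    points = []
    for k in range(1, s - 1):          # skip top and bottom layers
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                v = dog[k, y, x]
                if abs(v) < contrast_thresh:
                    continue           # discard low-response points
                cube = dog[k-1:k+2, y-1:y+2, x-1:x+2]  # 3x3x3 region
                if v == cube.max() and (cube == v).sum() == 1:
                    points.append((k, y, x))           # strict maximum
                elif v == cube.min() and (cube == v).sum() == 1:
                    points.append((k, y, x))           # strict minimum
    return points
```

The Hessian-based edge-response test that follows in the hardware pipeline is omitted here for brevity.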

The feature description module caches the image data and, according to the detected image feature point coordinates, extracts image information from the neighborhood of each point following the BRIEF (Binary Robust Independent Elementary Features) algorithm to obtain the description information of the feature points. The feature description module comprises a data control module and a description vector calculation module. The data control module is used to read the feature point coordinates and to extract a set quantity of image pixel data, taking each feature point as a reference together with the offsets stored in the random-number cache. The description vector calculation module is used to obtain a binary description vector by comparing the gray values of the extracted image pixels pairwise.

The idea of the BRIEF algorithm is to take a region of the image around the feature point, generally a square centered on the point, and to store a set of coordinate pairs, commonly 256 of them. For each coordinate pair, the two pixels are fetched from the image block and their gray values compared; if the former is larger, the comparison result is 1, otherwise 0. After comparing all 256 pairs, a 256-bit binary sequence is obtained, which is the BRIEF description vector of the feature point. The implementation flow in the FPGA hardware architecture is shown in Fig. 4. The image data of each frame is cached in the image-cache DPRAM, and the output of the feature point detection module is cached in a FIFO. After reset, the random generation module produces 256 random coordinate-point pairs, which are stored in a DPRAM. For each feature point coordinate arriving from the FIFO, the point-pair comparison module reads the corresponding image gray values from the image-cache DPRAM according to the feature point coordinates and the random-number pairs cached in the DPRAM, executes the BRIEF algorithm steps, and obtains the BRIEF description information of the current feature point, namely a 256-bit binary vector. It may be pointed out that the extraction of description information is not limited to the BRIEF algorithm; SIFT descriptor extraction, SURF descriptor extraction, and others may also be used.
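The point-pair comparison steps above can be sketched as follows. The helper names are hypothetical, the descriptor is returned as a list of bits rather than a packed 256-bit word, and the uniform random pattern in `make_pairs` merely stands in for the offsets held in the hardware random-number cache.

```python
import numpy as np

def make_pairs(n=256, patch=31, seed=0):
    """Generate n random offset pairs inside a patch x patch window,
    standing in for the random-number cache written after reset."""
    rng = np.random.default_rng(seed)
    half = patch // 2
    offs = rng.integers(-half, half + 1, size=(n, 4))
    return [((int(a), int(b)), (int(c), int(d))) for a, b, c, d in offs]

def brief_descriptor(image, keypoint, pairs):
    """Bit i of the descriptor is 1 if the gray value at the first offset of
    pair i (relative to the keypoint) is larger than at the second offset."""
    r, c = keypoint
    return [1 if image[r + dy1, c + dx1] > image[r + dy2, c + dx2] else 0
            for (dy1, dx1), (dy2, dx2) in pairs]
```

In hardware the same pattern is fixed once at reset, so the descriptors of all frames are computed against identical offsets, which is what makes them comparable across frames.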

The feature storage module buffers the feature-point coordinates and description information of every frame of the image. It is built from dual-port random access memory (RAM) operated in ping-pong fashion: two dual-port RAMs, RAMA and RAMB, always hold the previous-frame information and the current-frame information. That is, if the feature-point coordinates and descriptions of frame N-1 are stored in RAMA, those of frame N are stored in RAMB; at frame N+1 the coordinates and descriptions are written to RAMA, overwriting the frame N-1 data originally held there, and so on.
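The alternation of the two RAM banks can be sketched in software terms as follows (illustrative only; the class and method names are assumptions, with Python lists standing in for RAMA and RAMB):

```python
class PingPongStore:
    """Two banks alternate roles each frame: one holds the previous
    frame's features (read side) while the other receives the current
    frame's (write side), emulating the RAMA/RAMB scheme."""

    def __init__(self):
        self.banks = [[], []]   # stand-ins for RAMA and RAMB
        self.write_idx = 0      # bank being written this frame

    def new_frame(self):
        # Swap roles; the incoming frame overwrites the older bank,
        # just as frame N+1 overwrites frame N-1 in the text.
        self.write_idx ^= 1
        self.banks[self.write_idx] = []

    def store(self, feature):
        self.banks[self.write_idx].append(feature)

    def previous_frame(self):
        # The bank not being written always holds the previous frame.
        return self.banks[self.write_idx ^ 1]
```

The benefit of the scheme is that the matcher can read a complete, stable copy of frame N-1 while frame N is still streaming in.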

The inter-frame feature-point registration module comprises a BRIEF description-vector distance calculator, a read-interrupt generator and a matched-pair FIFO. As shown in the image feature-point registration part of Figure 2, the flow first reads one feature-point description vector of the current frame, then traverses one by one the previous frame's description vectors buffered in the other RAM, computing the feature distance between the two vectors; if the distance falls below a threshold, the two points are deemed a successful match. For example, this embodiment uses BRIEF description vectors, and two vectors whose Hamming distance is below 30 are considered matched. The distance calculation, shown in Figure 5, is carried out by two state machines: state machine 1 reads the description vectors of the current frame and iterates over those of the previous frame, while state machine 2 selects the best match from the resulting distances. All successfully matched pairs are buffered in the matched-pair FIFO; when the matching process ends, the FPGA raises an interrupt and waits for the DSP to respond.
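A minimal software sketch of this matching step (illustrative only; function names and the integer encoding of descriptors are assumptions — in the hardware the distance is computed bitwise on 256-bit vectors):

```python
def hamming(d1, d2):
    """Hamming distance between two descriptors held as integers:
    the popcount of their XOR, matching a bitwise comparator."""
    return bin(d1 ^ d2).count("1")

def match_frames(curr, prev, threshold=30):
    """For each current-frame descriptor, scan every previous-frame
    descriptor (as the two nested state machines do) and keep the
    closest one, provided its distance is under the threshold.

    Returns (current_index, previous_index, distance) triples.
    """
    matches = []
    for i, dc in enumerate(curr):
        best_j, best_dist = None, threshold
        for j, dp in enumerate(prev):
            dist = hamming(dc, dp)
            if dist < best_dist:          # strictly closer than best
                best_j, best_dist = j, dist
        if best_j is not None:            # at least one pair beat it
            matches.append((i, best_j, best_dist))
    return matches
```

Initialising `best_dist` to the threshold makes the threshold test and the minimum search a single comparison, which is also what allows the hardware to use one MIN_DIST register for both purposes.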

The registration information is transmitted to the DSP through the EMIF interface. On receiving it, the DSP may use the Random Sample Consensus algorithm (RANSAC) to compute the transformation relating the registered point pairs. The computed transformation is a matrix whose type differs according to the algorithm used. The result is transmitted back to the FPGA through the EMIF interface and fed to the FPGA's precise matching unit.
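The consensus loop the DSP runs can be sketched as follows. This is illustrative only: the patent computes a general transformation matrix, whereas for brevity this sketch uses the simplest possible motion model, a pure 2-D translation (one matched pair per sample); the function name and tolerance are assumptions.

```python
import random

def ransac_translation(pairs, n_iter=100, tol=2.0, seed=0):
    """RANSAC sketch with a pure-translation model; the consensus
    loop (sample, hypothesise, count inliers, keep the best) is the
    same regardless of the model. `pairs` holds ((x, y), (x2, y2))
    matched points between the previous and current frame.
    """
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(n_iter):
        (x, y), (xp, yp) = rng.choice(pairs)      # minimal sample
        dx, dy = xp - x, yp - y                   # candidate model
        inliers = [p for p in pairs
                   if abs(p[0][0] + dx - p[1][0]) <= tol
                   and abs(p[0][1] + dy - p[1][1]) <= tol]
        if len(inliers) > len(best_inliers):      # better consensus
            best_model, best_inliers = (dx, dy), inliers
    return best_model, best_inliers
```

Mismatched pairs from the Hamming-distance stage gather almost no consensus, so the surviving model reflects only the correctly matched points — which is why RANSAC is the natural follow-up to a thresholded descriptor matcher.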

The precise matching unit computes the tracking-point coordinates in the current frame from the matrix transformation and the buffered tracking-point coordinates of the previous frame. Using the computed tracking-point coordinates as the centre of a search region, it performs template matching within a surrounding neighborhood, for example a 7×7 area, and takes the coordinates best matching the buffered template as the tracking coordinates. As shown in Figure 2, the precise matching unit comprises a search-area buffer module, a correlation matching module, and a template buffer-and-update module.

Search-area buffer module: the control module that buffers the image. Image data arrives from the external Camera Link interface and is written pixel by pixel into a DPRAM, one whole frame at a time; a new frame overwrites the image of the previous one. The write port of this dual-port RAM takes the image data stream from the preceding stage, while the read port serves the downstream template matching module, which fetches its data from here.

Correlation matching: a region to be matched is created, centred on the tracking-point coordinates in the current frame; the template is fetched from the template buffer-and-update module, and grey-level correlation is computed by traversing windows over the region. The window whose centre gives the maximum correlation value is the best matching position, and that window is written back to the template buffer-and-update module. For example, referring to Figure 7, the XY coordinates of the tracking point in the current frame are read, and as the external image data arrives a 15×15 window centred on XY is generated. While this 15×15 region is being buffered, a 7×7 matching window is generated at the same time; as the pixels arrive one by one, the window advances pixel by pixel. The data in this window is correlated with the buffered template of the previous frame (also a 7×7 template), the computation proceeding in pipelined fashion, so that a few clock cycles after the last pixel of the search region arrives, the correlation results are complete. Each time a correlation result becomes valid, its matching score is examined, and the centre point of the region with the largest score so far is kept, until the results for all points have been computed. When the whole traversal ends, the best-matching position is thus obtained, and on completion the buffered point coordinates are passed to the template update module as NewXY.
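The traversal can be sketched in software as follows (illustrative only; the hardware computes the scores in a pipeline as pixels stream in, whereas this sketch iterates explicitly, and the plain product-sum used here is one simple form of the grey-level correlation the text refers to):

```python
def correlate(window, template):
    """Grey-level correlation score: sum of element-wise products."""
    return sum(w * t for wr, tr in zip(window, template)
                     for w, t in zip(wr, tr))

def best_match(image, cx, cy, template, search=15):
    """Slide the template over a `search`-by-`search` region centred
    on (cx, cy) (15x15 with a 7x7 template in the text) and return
    the centre of the highest-scoring window, i.e. NewXY.

    `image` is a 2-D list of grey values; `template` is a t-by-t
    2-D list with t odd, small enough to fit inside the region.
    """
    t = len(template)
    half_s, half_t = search // 2, t // 2
    best_score, best_pos = None, None
    # Window centres such that the template stays inside the region.
    for y in range(cy - half_s + half_t, cy + half_s - half_t + 1):
        for x in range(cx - half_s + half_t, cx + half_s - half_t + 1):
            window = [row[x - half_t: x + half_t + 1]
                      for row in image[y - half_t: y + half_t + 1]]
            score = correlate(window, template)
            if best_score is None or score > best_score:
                best_score, best_pos = score, (x, y)   # keep running max
    return best_pos
```

Keeping only the running maximum, rather than all 81 scores, is exactly the trick that lets the hardware decide the best position with a single comparator and one buffered centre point.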

The template update module updates the corresponding template according to the supplied NewXY coordinates; the updated template then serves as the template for the next traversal match.

Figure 5 shows a preferred embodiment of the inter-frame feature-point registration of the present invention. The description vector of a feature point is a 256-bit binary vector; together with 32 bits of data holding the point's own coordinates and scale information, it is merged into a 288-bit combined description. Through ping-pong buffering, the combined descriptions of the current frame and of the previous frame are held in two DPRAMs. Figure 5 shows the internal data-processing flow of this preferred example. Matching is carried out by two finite state machines (FSMs): the first state machine reads one description of the current frame per cycle of its loop, and the second finds the best matching point within each such cycle.

State machine 1 is triggered by each new feature point that the feature description module produces for the current frame; it iterates continuously over the feature points of the previous frame, issuing one pair of feature points per cycle. Its states are as follows:

OTHERS: the undefined state, or the state after reset. On entry, the machine immediately jumps to the wait state.

WAIT: wait until the new-feature-point signal is valid, then jump to the read-current-frame state.

READ CURRENT FRAME: read one feature point from the current frame's feature-point DPRAM buffer, then jump to the read-previous-frame state.

READ PREVIOUS FRAME: fetch a feature point from the previous frame's feature-point DPRAM buffer and generate a NewResult signal for state machine 2. This state is also responsible for deciding whether all feature points of the previous frame have been iterated: if so, it jumps to the write state; otherwise it continues with the next feature point of the previous frame.

WRITE: wait for state machine 2 to write the matched feature points it outputs into the matched-point DPRAM, then jump to the wait state.

State machine 2 receives the Hamming distance between two feature points and selects the feature-point pair with the shortest distance. Qualifying pairs are stored in the matched-point DPRAM. The function of each state of state machine 2 is as follows:

OTHERS: the undefined state, or the state after reset; on entry the machine immediately jumps to the wait state.

WAIT: wait until the NewResult signal is valid, then jump to the find-minimum state and initialise a minimum-distance register (MIN_DIST) as the threshold for judging a match; only feature-point pairs whose compared distance is below this threshold are regarded as matched pairs.

FIND MINIMUM: read one distance result and compare it with MIN_DIST. If the distance is smaller than MIN_DIST, assign the current distance to MIN_DIST and mark the current pair as the successfully matched pair; if it is larger, do nothing. This state loops until all distances have been compared, whereupon the machine jumps to the WRITE state.

WRITE: if the feature-point pair found by the FIND_MIN state carries a valid pair signal, write that pair into the feature-point-pair DPRAM. From this state the machine always proceeds to the wait state.
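One pass of state machine 2 for a single current-frame feature point can be sketched as a function (illustrative only; the state labels appear as comments, the function name is an assumption, and `distances` stands in for the Hamming-distance stream produced by state machine 1's iteration over the previous frame):

```python
def match_one_feature(distances, threshold=30):
    """Emulate state machine 2's WAIT -> FIND_MIN -> WRITE sequence.

    `distances[i]` is the Hamming distance from the current feature
    to the i-th previous-frame feature. Returns (index, distance)
    of the closest previous-frame feature, or None if nothing beat
    the threshold.
    """
    # WAIT: NewResult asserted; initialise the MIN_DIST register.
    min_dist, best_idx = threshold, None
    # FIND_MIN: loop until every distance result has been compared.
    for idx, dist in enumerate(distances):
        if dist < min_dist:               # smaller than MIN_DIST?
            min_dist, best_idx = dist, idx
    # WRITE: emit the pair only if it is a valid (sub-threshold) pair.
    if best_idx is None:
        return None
    return (best_idx, min_dist)
```

Seeding MIN_DIST with the threshold rather than with infinity is what makes the final validity check trivial: any latched index is, by construction, already below the threshold.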

Figure 6 is a schematic of the FPGA-DSP communication loop. A fixed number of clock cycles after each frame of image input completes, the FPGA has buffered all matched pairs between the current frame and the previous one; the FPGA then generates an interrupt signal and passes it to the DSP. On capturing the interrupt, the DSP, as configured, launches an EDMA transfer that moves the pairs buffered in the FPGA into the DSP. From the transferred pair data, the DSP computes the transformation matrix between the pairs using Random Sample Consensus (RANSAC). The DSP likewise signals the availability of the resulting matrix with an interrupt; the FPGA captures it and reads out the matrix parameters.

Referring to Figure 8, the device operates as follows: an external analog camera inputs image data in Camera Link format; driven by the pixel clock, the image data enters the SIFT feature detection module of the next stage; the detected feature-point information is passed to the BRIEF feature description extraction module, which extracts a description vector for each feature point; in the feature matching module, the obtained description vectors are matched between adjacent frames, and the matching yields groups of successfully matched point pairs, which are stored in a FIFO; the DSP fetches the matched-point data and, using the RANSAC algorithm, computes the geometric transformation between adjacent frames in matrix form; the FPGA then receives this matrix data and, combining the transformation with the previous frame's tracking-point coordinates, obtains the tracking-point coordinate range in the current frame; in the neighborhood of these coordinates, the grey-level correlation precise matching module performs precise correlation matching to locate the tracking point. The final tracking point is overlaid on the image and displayed as output.

Those skilled in the art will readily understand that the above are merely preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall fall within its scope of protection.

Claims (7)

1. A ground target tracking device based on image key-point feature matching, characterised by comprising a programmable gate array FPGA and a digital signal processor DSP. The programmable gate array FPGA comprises a feature extraction unit, a tracking-point coordinate calculation unit and a precise matching unit: the feature extraction unit extracts image features from each frame of an externally input image sequence, completes feature matching between adjacent frames on the basis of those features, and transmits the successful inter-frame matching results to the DSP; the tracking-point coordinate calculation unit computes the tracking-point coordinates in the current frame from the previous frame's tracking-point coordinates, according to the inter-frame geometric transformation fed back by the DSP; the precise matching unit, taking the current-frame tracking-point coordinates thus obtained as centre, performs template-based cross-correlation matching in their neighborhood to obtain the precise tracking-point coordinates. The digital signal processor DSP computes the transformation between adjacent image frames from the feature matching results output by the FPGA and feeds it back to the FPGA.

2. The ground target tracking device based on image key-point feature matching of claim 1, wherein the feature extraction unit comprises a feature detection module, a feature description module, a feature storage module and an inter-frame feature-point registration module. The feature detection module applies multi-scale Gaussian filtering and differencing to the image data, identifies extremum points and discards low-response points, obtaining the coordinates of the detected image feature points. The feature description module extracts image information from the neighborhood of each feature point according to the feature-point coordinates, obtaining the feature point's description vector. The feature storage module buffers the feature-point coordinates and description information of each frame; it comprises two dual-port random access memories, RAMA and RAMB, operated in ping-pong fashion to buffer the previous-frame and current-frame information, i.e. if the feature-point coordinates and description vectors of frame N-1 are stored in RAMA, those of frame N are stored in RAMB. The inter-frame feature-point registration module completes feature matching between adjacent frames from the feature-point coordinates and description information, transmitting the successful inter-frame matching results to the DSP.

3. The ground target tracking device based on image key-point feature matching of claim 2, wherein the feature detection module comprises a down-sampling module and two groups of structurally identical feature-point detection modules; each feature-point detection module comprises multiple Gaussian filter units, multiple difference calculation units, multiple window generation units and one feature-point selection unit. The Gaussian filter units of the first group apply, in parallel, Gaussian filtering with different scale parameters to every frame of the image sequence produced by the analog camera device; a difference calculation unit subtracts the filtered images of two adjacent scales to obtain a difference-of-Gaussian image; a window generation unit generates a window centred on each pixel of the difference-of-Gaussian image and bounded by its neighborhood; the feature-point selection unit determines the extremum points within the generated windows, takes them as candidate feature points, deletes low-contrast points and edge points from the candidates, and retains the remainder as the final feature points. The Gaussian filter unit of intermediate scale is selected from the multiple Gaussian filter units and its filtered image is fed to the down-sampling module, which down-samples the input image and outputs the result to the second group of feature-point detection modules; the second group determines feature points in the same manner as the first group.

4. The ground target tracking device based on image key-point feature matching of claim 2, wherein the feature description module comprises a data control module and a description-vector calculation module. The data control module reads the feature-point coordinates and fetches a certain amount of image pixel data, taking each feature point as the reference together with the offsets stored in the random-number buffer. The description-vector calculation module compares the grey values of the fetched pixels pairwise to obtain a binary description vector.

5. The ground target tracking device based on image key-point feature matching of claim 2, wherein the inter-frame feature-point registration module comprises a description-vector distance calculator, a read-interrupt generator and a matched-pair FIFO. The description-vector distance calculator uses a first state machine to read the feature points of the current and previous frames and a second state machine to read the feature-point description vectors of the current and previous frames, computing the distance between the description vectors of the two frames; a distance below a threshold indicates that the feature points of the two frames match successfully. The matched-pair FIFO stores the successfully matched pairs. The read-interrupt generator raises an interrupt signal to the DSP when the inter-frame feature-point registration ends and waits for the DSP to respond.

6. The ground target tracking device based on image key-point feature matching of claim 1, wherein the precise matching unit comprises a search-area buffer module, a correlation matching module and a template buffer-and-update module. The search-area buffer module buffers the image input from the external interface, a new frame overwriting the image of the previous one. The correlation matching module creates a region to be matched, centred on the current frame's tracking-point coordinates, fetches the template from the template buffer-and-update module, performs grey-level correlation by window traversal, takes the window centre corresponding to the maximum correlation result as the best matching position, and at the same time writes the window corresponding to that maximum back to the template buffer-and-update module. The template buffer-and-update module buffers the template.

7. The ground target tracking device based on image key-point feature matching of claim 1, wherein the DSP, after capturing the interrupt signal sent by the FPGA, launches an enhanced direct memory access transfer, receives the successfully matched feature-point pairs from the FPGA, and computes, using random sample consensus, the transformation matrix between the feature-point pairs that reflects the inter-frame geometric transformation; it sends an interrupt signal to the FPGA and, once a response is received, feeds the transformation matrix back to the FPGA.
CN201610596748.1A 2016-07-26 2016-07-26 A ground target tracking device based on feature matching Active CN106204660B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610596748.1A CN106204660B (en) 2016-07-26 2016-07-26 A ground target tracking device based on feature matching

Publications (2)

Publication Number Publication Date
CN106204660A true CN106204660A (en) 2016-12-07
CN106204660B CN106204660B (en) 2019-06-11

Family

ID=57495902

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610596748.1A Active CN106204660B (en) 2016-07-26 2016-07-26 A ground target tracking device based on feature matching

Country Status (1)

Country Link
CN (1) CN106204660B (en)

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107123078A (en) * 2017-04-25 2017-09-01 北京小米移动软件有限公司 The method and device of display image
CN107316038A (en) * 2017-05-26 2017-11-03 中国科学院计算技术研究所 A kind of SAR image Ship Target statistical nature extracting method and device
CN107516296A (en) * 2017-07-10 2017-12-26 昆明理工大学 An FPGA-based moving target detection and tracking system and method
CN107657175A (en) * 2017-09-15 2018-02-02 北京理工大学 A kind of homologous detection method of malice sample based on image feature descriptor
CN107992100A (en) * 2017-12-13 2018-05-04 中国科学院长春光学精密机械与物理研究所 High frame frequency image tracking method based on programmable logic array
CN109146918A (en) * 2018-06-11 2019-01-04 西安电子科技大学 A kind of adaptive related objective localization method based on piecemeal
CN109246331A (en) * 2018-09-19 2019-01-18 郑州云海信息技术有限公司 A kind of method for processing video frequency and system
CN109801207A (en) * 2019-01-08 2019-05-24 桂林电子科技大学 The image feature high speed detection and matching system of CPU-FPGA collaboration
CN110956178A (en) * 2019-12-04 2020-04-03 深圳先进技术研究院 A plant growth measurement method, system and electronic device based on image similarity calculation
CN111369650A (en) * 2020-03-30 2020-07-03 广东精鹰传媒股份有限公司 Method for realizing object connecting line effect of two-dimensional space and three-dimensional space
CN111460941A (en) * 2020-03-23 2020-07-28 南京智能高端装备产业研究院有限公司 Visual navigation feature point extraction and matching method in wearable navigation equipment
CN112182042A (en) * 2020-10-12 2021-01-05 上海扬灵能源科技有限公司 Point cloud feature matching method and system based on FPGA and path planning system
CN112233252A (en) * 2020-10-23 2021-01-15 上海影谱科技有限公司 AR target tracking method and system based on feature matching and optical flow fusion
CN112529016A (en) * 2020-12-21 2021-03-19 浙江欣奕华智能科技有限公司 Method and device for extracting feature points in image
CN112926593A (en) * 2021-02-20 2021-06-08 温州大学 Image feature processing method and device for dynamic image enhancement presentation
CN113838089A (en) * 2021-09-20 2021-12-24 哈尔滨工程大学 A Bubble Trajectory Tracking Method Based on Feature Matching Algorithm
CN114022345A (en) * 2021-11-12 2022-02-08 中国科学院长春光学精密机械与物理研究所 A dynamic windowing method and device for multi-channel large area array images
CN114283065A (en) * 2021-12-28 2022-04-05 北京理工大学 A hardware-accelerated ORB feature point matching system and matching method
CN116433887A (en) * 2023-06-12 2023-07-14 山东鼎一建设有限公司 Building rapid positioning method based on artificial intelligence
CN117726921A (en) * 2023-11-06 2024-03-19 上海科技大学 FPGA-implemented ORB feature extraction accelerator based on stream processing and non-blocking

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103065131A (en) * 2012-12-28 2013-04-24 中国航天时代电子公司 Method and system of automatic target recognition tracking under complex scene
CN103646232A (en) * 2013-09-30 2014-03-19 华中科技大学 Aircraft ground moving target infrared image identification device
CN104978749A (en) * 2014-04-08 2015-10-14 南京理工大学 FPGA (Field Programmable Gate Array)-based SIFT (Scale Invariant Feature Transform) image feature extraction system
JP2015210677A (en) * 2014-04-25 2015-11-24 国立大学法人 東京大学 Information processing apparatus and information processing method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Wu Ping et al., "Research on an accelerated pulmonary nodule detection algorithm based on template matching", Computer Engineering and Applications *
Zhang Haopeng, "Design and implementation of a real-time target tracking system based on a cross-correlation computation accelerator", China Master's Theses Full-text Database, Information Science and Technology *
Wang Jianhui, "Research on hardware architectures for real-time visual feature detection and matching", China Doctoral Dissertations Full-text Database, Information Science and Technology *
Zhong Luming, "A video target tracking method under dynamic background based on SIFT", Journal of Nanchang Institute of Technology *

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107123078A (en) * 2017-04-25 2017-09-01 Beijing Xiaomi Mobile Software Co., Ltd. Image display method and device
CN107316038B (en) * 2017-05-26 2020-04-28 Institute of Computing Technology, Chinese Academy of Sciences SAR image ship target statistical feature extraction method and device
CN107316038A (en) * 2017-05-26 2017-11-03 Institute of Computing Technology, Chinese Academy of Sciences SAR image ship target statistical feature extraction method and device
CN107516296A (en) * 2017-07-10 2017-12-26 Kunming University of Science and Technology FPGA-based moving target detection and tracking system and method
CN107657175A (en) * 2017-09-15 2018-02-02 Beijing Institute of Technology Malicious sample homology detection method based on image feature descriptors
CN107992100A (en) * 2017-12-13 2018-05-04 Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences High frame rate image tracking method based on a programmable logic array
CN107992100B (en) * 2017-12-13 2021-01-15 Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences High frame rate image tracking method and system based on a programmable logic array
CN109146918A (en) * 2018-06-11 2019-01-04 Xidian University Block-based adaptive correlation target localization method
CN109246331A (en) * 2018-09-19 2019-01-18 Zhengzhou Yunhai Information Technology Co., Ltd. Video processing method and system
CN109801207A (en) * 2019-01-08 2019-05-24 Guilin University of Electronic Technology CPU-FPGA collaborative high-speed image feature detection and matching system
CN110956178A (en) * 2019-12-04 2020-04-03 Shenzhen Institute of Advanced Technology Plant growth measurement method and system based on image similarity calculation, and electronic device
CN110956178B (en) * 2019-12-04 2023-04-18 Shenzhen Institute of Advanced Technology Plant growth measurement method and system based on image similarity calculation, and electronic device
CN111460941A (en) * 2020-03-23 2020-07-28 Nanjing Intelligent High-end Equipment Industry Research Institute Co., Ltd. Visual navigation feature point extraction and matching method for wearable navigation equipment
CN111369650A (en) * 2020-03-30 2020-07-03 Guangdong Jingying Media Co., Ltd. Method for rendering object connecting-line effects in two-dimensional and three-dimensional space
CN112182042A (en) * 2020-10-12 2021-01-05 Shanghai Yangling Energy Technology Co., Ltd. FPGA-based point cloud feature matching method and system, and path planning system
CN112233252A (en) * 2020-10-23 2021-01-15 Shanghai Yingpu Technology Co., Ltd. AR target tracking method and system based on fusion of feature matching and optical flow
CN112233252B (en) * 2020-10-23 2024-02-13 Shanghai Yingpu Technology Co., Ltd. AR target tracking method and system based on fusion of feature matching and optical flow
CN112529016A (en) * 2020-12-21 2021-03-19 Zhejiang Xinyihua Intelligent Technology Co., Ltd. Method and device for extracting feature points from images
CN112926593A (en) * 2021-02-20 2021-06-08 Wenzhou University Image feature processing method and device for dynamic image enhancement presentation
CN113838089A (en) * 2021-09-20 2021-12-24 Harbin Engineering University Bubble trajectory tracking method based on a feature matching algorithm
CN113838089B (en) * 2021-09-20 2023-12-15 Harbin Engineering University Bubble trajectory tracking method based on a feature matching algorithm
CN114022345A (en) * 2021-11-12 2022-02-08 Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences Dynamic windowing method and device for multi-channel large-area-array images
CN114283065A (en) * 2021-12-28 2022-04-05 Beijing Institute of Technology Hardware-accelerated ORB feature point matching system and matching method
CN114283065B (en) * 2021-12-28 2024-06-11 Beijing Institute of Technology Hardware-accelerated ORB feature point matching system and matching method
CN116433887A (en) * 2023-06-12 2023-07-14 Shandong Dingyi Construction Co., Ltd. Rapid building localization method based on artificial intelligence
CN116433887B (en) * 2023-06-12 2023-08-15 Shandong Dingyi Construction Co., Ltd. Rapid building localization method based on artificial intelligence
CN117726921A (en) * 2023-11-06 2024-03-19 ShanghaiTech University Stream-processing-based non-blocking ORB feature extraction accelerator implemented on an FPGA

Also Published As

Publication number Publication date
CN106204660B (en) 2019-06-11

Similar Documents

Publication Publication Date Title
CN106204660A (en) Ground target tracking device based on feature matching
CN103646232B (en) Aircraft ground moving target infrared image identification device
CN111583097A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN103279952B (en) Target tracking method and device
CN112364865B (en) A detection method for moving small objects in complex scenes
CN105761233A (en) FPGA-based real-time panoramic image mosaic method
CN102509071B (en) Optical flow computation system and method
CN111275746B (en) An FPGA-based dense optical flow computing system and method
Ding et al. Real-time stereo vision system using adaptive weight cost aggregation approach
CN114118181A (en) A high-dimensional regression point cloud registration method, system, computer equipment and application
Huang et al. S3: Learnable sparse signal superdensity for guided depth estimation
Sun et al. A 42fps full-HD ORB feature extraction accelerator with reduced memory overhead
CN117726921B (en) Stream-processing-based non-blocking ORB feature extraction accelerator implemented on an FPGA
Liu et al. MobileSP: An FPGA-based real-time keypoint extraction hardware accelerator for mobile VSLAM
CN106651923A (en) Method and system for video image target detection and segmentation
CN119478274B (en) Scanning data generation method, electronic device and storage medium
CN108282597A (en) FPGA-based real-time target tracking system and method
WO2023109086A1 (en) Character recognition method, apparatus and device, and storage medium
CN119048675A (en) Point cloud construction method and device, electronic equipment and readable storage medium
CN114419103B (en) A skeleton detection and tracking method, device and electronic equipment
CN113033256A (en) Training method and device for fingertip detection model
Sulzbachner et al. An optimized silicon retina stereo matching algorithm using time-space correlation
CN112288817B (en) Three-dimensional reconstruction processing method and device based on image
Li et al. Stereo matching accelerator with re-computation scheme and data-reused pipeline for autonomous vehicles
CN111369425B (en) Image processing method, apparatus, electronic device, and computer readable medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant