CN117237417A - System for realizing optical flow tracking based on image and imu data hardware - Google Patents

System for realizing optical flow tracking based on image and imu data hardware

Info

Publication number
CN117237417A
Authority
CN
China
Prior art keywords
image
module
optical flow
imu
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311500139.8A
Other languages
Chinese (zh)
Inventor
吴杰
姜爱鹏
郑明肖
陈钊
储继慎
杨恪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Yaoyu Vision Core Technology Co ltd
Original Assignee
Nanjing Yaoyu Vision Core Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Yaoyu Vision Core Technology Co ltd filed Critical Nanjing Yaoyu Vision Core Technology Co ltd
Priority to CN202311500139.8A priority Critical patent/CN117237417A/en
Publication of CN117237417A publication Critical patent/CN117237417A/en
Pending legal-status Critical Current


Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Image Processing (AREA)

Abstract

The invention relates to a system for realizing optical flow tracking in hardware based on image and imu data, comprising: an imu and image synchronization module, a rotation matrix calculation module, an image pyramid module, a feature point prediction module, an LK optical flow calculation module, a mismatch rejection module, a feature point update module and a DDR control module. The imu and image synchronization module time-synchronizes the imu data and the image data, outputting the synchronized imu data stream to the rotation matrix calculation module and the synchronized image stream to the image pyramid module; the image pyramid module establishes an image pyramid data stream after receiving the image stream; the DDR control module controls storage of the image pyramid data stream. The beneficial effects of the invention are as follows: the system for realizing optical flow tracking in hardware based on image and imu data achieves low power consumption, low cost, real-time operation and high performance.

Description

System for realizing optical flow tracking based on image and imu data hardware
Technical Field
The invention relates to a system for realizing optical flow tracking in hardware based on image and imu data. It is applied at the front end of SLAM in the tracking stage to calculate the optical flow of feature points from the previous frame to the current frame; the camera pose is then calculated from the optical flow result.
Background
Optical flow tracking is currently realized on embedded systems or on FPGAs. On an embedded platform, device resources are limited; real-time operation and low power consumption can be ensured through optimization, but accuracy suffers to some extent. Development on an FPGA platform, on the other hand, is costly, and its power consumption is not small either.
Disclosure of Invention
The present invention is directed to a system for realizing optical flow tracking in hardware based on image and imu data, so as to solve the problems set forth in the background art.
In order to achieve the above purpose, the present invention provides the following technical solutions:
a system for hardware-implemented optical flow tracking based on image and imu data, comprising: imu and image synchronization module, rotation matrix calculation module, image pyramid module, characteristic point prediction module, LK optical flow calculation module, error match eliminating module, characteristic point update module and DDR control module;
the imu and image synchronization module is used for time-synchronizing the imu data and the image data, outputting the synchronized imu data stream to the rotation matrix calculation module and the synchronized image stream to the image pyramid module; the image pyramid module establishes an image pyramid data stream after receiving the image stream; the DDR control module controls storage of the image pyramid data stream; the rotation matrix calculation module integrates the imu gyro data between two frames of images and then obtains a rotation matrix in the camera coordinate system through coordinate-system conversion; the feature point prediction module predicts the coordinates of the feature points of the previous frame on the current frame image according to the rotation matrix in the camera coordinate system and outputs them to the LK optical flow calculation module, wherein the feature points of the previous frame are output by the feature point update module; the LK optical flow calculation module calculates the optical flow of the predicted tracking points by the pyramid Lucas-Kanade optical flow method, obtains the tracking point coordinates of the current frame and outputs them to the mismatch rejection module; the LK optical flow calculation module reads the image pyramid data of the current frame and the previous frame through the DDR control module; the mismatch rejection module filters out mismatched feature points; the feature point update module updates the feature points processed by the mismatch rejection module with the feature points extracted from the current frame, outputs the resulting feature point information to the next-stage module, and feeds it back to the feature point prediction module as the feature points of the previous frame.
As a further scheme of the invention: the hardware implementation method for obtaining the rotation matrix of the rotation matrix calculation module comprises the following steps:
the integration segment control engine obtains the imu timestamps of each segment, the image timestamps and the imu gyro data, generates an integration enable pulse to trigger the integration operation of each segment, then performs the accumulation operation and generates an accumulation end pulse to finish the accumulation, obtaining a rotation vector; the rotation vector is converted into a rotation matrix through an exponential operation, and the rotation matrix in the camera coordinate system is obtained through a rotation coordinate-system transformation operation.
As a further scheme of the invention: and the image pyramid module performs Gaussian filtering and 2 times downsampling operation on the image synchronized by the imu and the image synchronization module to obtain a three-layer pyramid image.
As a further scheme of the invention: the method for predicting the coordinates of the characteristic points of the previous frame on the current frame image by the characteristic point predicting module comprises the following steps:
the feature point coordinates tracked in the previous frame are converted from pixel coordinates to normalized coordinates and undistorted with the distortion model corresponding to the optics, yielding the undistorted feature point coordinates; considering the rotation part of the Euclidean transformation, the predicted normalized feature point coordinates are obtained and then converted back into pixel coordinates through a distortion-adding operation for use by the LK optical flow calculation module.
As a further scheme of the invention: the computing method of the LK optical flow computing module comprises the following steps:
when calculating the optical flow, computation starts from the top-layer image, and the tracking result of the upper layer is used as the initial optical flow value of the layer below;
the optical flow estimate of each layer is computed through multiple iterations until the termination condition is met, and during the iterations the optical flow vector is continuously updated so that it more accurately represents the position of the feature point in the next frame.
As a further scheme of the invention: the LK optical flow calculation module includes an engine management, a plurality of LK calculation engines, an image window data arbiter, and an output buffer unit:
the engine manages and schedules idle LK calculation engines, and gives the characteristic point coordinates of the scheduled LK calculation engines; each LK calculation engine performs LK optical flow calculation according to the assigned feature point coordinates; the image window data arbiter receives the requests of the image window data sent by each LK calculation engine in a polling mode, and then obtains the data through the DDR control module and returns the data to the corresponding LK calculation engine; the output buffer unit outputs the results of the parallel LK calculation engines in series.
As a further scheme of the invention: the method by which the mismatch rejection module filters mismatched feature points comprises the following steps:
all feature point coordinates are normalized to increase numerical stability; the previous-frame coordinate and the current-frame coordinate form a point pair, the distance of each point pair is computed, and if the distance is larger than the normalized-plane distance corresponding to one pixel, the feature point is judged to be mismatched and removed.
As a further scheme of the invention: the method by which the mismatch rejection module filters mismatched feature points further comprises the following steps:
after all point pairs have been judged, the average distance is computed; if fewer than 3 feature points remain or the average distance is larger than the normalized-plane distance corresponding to one pixel, the matching is considered failed and all feature points are removed; if at least 3 feature points remain and the average distance is smaller than that threshold, the remaining point pairs are further screened with a two-point RANSAC method.
As a further scheme of the invention: the method for updating the feature points by the feature point updating module comprises the following steps:
dividing the image into a number of grids, the number of feature points in each grid being at most 4;
each grid compares the distances between the feature points extracted from the current frame and the tracked feature points, keeping only those extracted points whose distance to every tracked feature point in the same grid is larger than a threshold; the feature points processed by the mismatch rejection module and the selected extracted points are then sorted by ID from small to large, and the first 4 are output as the tracked feature points of the grid.
As a further scheme of the invention: imu and image synchronization module defines an offset of a timeshift parameter to an image timestamp to align imu timestamps and image timestamps.
Compared with the prior art, the invention has the following beneficial effects: the system for realizing optical flow tracking in hardware based on image and imu data achieves low power consumption, low cost, real-time operation and high performance.
The invention adopts hardware implementation methods such as data-flow driving, module-level parallel computing and a multi-engine LK optical flow calculator to fully utilize hardware resources and provide highly optimized performance and efficiency. Under a 28nm process, a 150MHz processing clock can be realized; the delay from the end of the image data stream to the output of all feature point information of the whole module is less than 1ms, and the area is 1.6 mm².
Other features and advantages of the present invention will be disclosed in the following detailed description of the invention and the accompanying drawings.
Drawings
FIG. 1 is a block diagram of a system for implementing optical flow tracking based on hardware of image and imu data;
FIG. 2 is a schematic diagram of the imu and image timestamps of the system of FIG. 1;
FIG. 3 is a schematic diagram of the data processing of the imu and image synchronization module of the system of FIG. 1;
FIG. 4 is a schematic diagram of the imu data and timestamps between two frames of images for the rotation matrix calculation module of the system of FIG. 1;
FIG. 5 is a hardware implementation block diagram of the rotation matrix calculation module of the system of FIG. 1;
FIG. 6 is a schematic diagram of the three-layer pyramid image obtained by the image pyramid module of the system of FIG. 1;
FIG. 7 is a schematic diagram of the acquisition of 5x5 image window data in the image pyramid module of the system of FIG. 1;
FIG. 8 is a schematic diagram of the DDR control module of the system of FIG. 1 writing to DDR;
FIG. 9 is a schematic diagram of the multi-compute-engine architecture of the LK optical flow calculation module of the system of FIG. 1.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1 to 9, in an embodiment of the present invention, as shown in fig. 1, a system for realizing optical flow tracking in hardware based on image and imu data includes: an imu and image synchronization module, a rotation matrix calculation module, an image pyramid module, a feature point prediction module, an LK optical flow calculation module, a mismatch rejection module, a feature point update module and a DDR control module.
The imu and image synchronization module time-synchronizes the imu data and the image data, outputting the synchronized imu data stream to the rotation matrix calculation module and the synchronized image stream to the image pyramid module. The image pyramid module establishes an image pyramid data stream after receiving the image stream. The DDR control module controls storage of the image pyramid data stream; the image pyramid data stream is stored into DDR3 through the DDR control module. The rotation matrix calculation module integrates the imu gyro data between two frames of images and then obtains a rotation matrix in the camera coordinate system through coordinate-system conversion. The feature point prediction module predicts the coordinates of the feature points of the previous frame on the current frame image according to the rotation matrix in the camera coordinate system and outputs them to the LK optical flow calculation module, wherein the feature points of the previous frame are output by the feature point update module. The LK optical flow calculation module calculates the optical flow of the predicted tracking points by the pyramid Lucas-Kanade optical flow method, obtains the tracking point coordinates of the current frame and outputs them to the mismatch rejection module. The LK optical flow calculation module reads the image pyramid data of the current frame and the previous frame through the DDR control module. The mismatch rejection module filters out mismatched feature points. The feature point update module updates the feature points processed by the mismatch rejection module with the feature points extracted from the current frame, outputs the resulting feature point information to the next-stage module, and feeds it back to the feature point prediction module as the feature points of the previous frame.
The input data streams are: a timestamp information stream (including the imu timestamps and the image timestamps), an imu gyro data stream, an image data stream, and a feature point information stream (coordinates and IDs of the feature points extracted from the current frame).
The output data stream is the feature point information stream after optical flow tracking (feature point coordinates and IDs).
imu and image synchronization module
The imu timestamp is the time point at which an imu sample is acquired, and the image timestamp is the time point at which each frame of image starts. The imu is sampled at a relatively high frequency, such as, but not limited to, 1000 times per second, whereas the image timestamp changes once per frame, such as, but not limited to, 30 frames per second. To ensure that the imu data and the image are acquired at the same time, the imu and image synchronization module defines a timeshift parameter as an offset to the image timestamps, aligning the imu timestamps with the image timestamps.
Referring to fig. 2, the imu and image synchronization module ensures that the timestamp of the i-th imu sample is smaller than start_time, i.e., start_time lies between the i-th and the (i+1)-th imu samples. Similarly, end_time is obtained by converting the original timestamp end_time_origin of the next frame image, and end_time lies between the (j-1)-th and the j-th imu samples.
Referring to fig. 3, the imu gyro data and the imu timestamps enter a FIFO buffer. Meanwhile, it is determined whether the first imu timestamp is smaller than the image timestamp processed with timeshift; if this condition is satisfied, the current timestamp and the image data are sent to the synchronized image timestamp and image data interface for output. At the same time, imu timestamps are read from the FIFO and consecutive imu samples are compared against the condition; when it is satisfied, the imu gyro data and the imu timestamp are sent to the synchronized imu timestamp and gyro data interface for output.
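As a loose behavioral sketch of this synchronization logic (the FIFO contents, data layout and the exact forwarding order are assumptions; only the timeshift comparison comes from the description):

```python
from collections import deque

def synchronize(imu_samples, images, timeshift):
    """Loose behavioral model of the imu and image synchronization module.

    imu_samples: list of (imu_timestamp, gyro_xyz), ascending timestamps
    images:      list of (image_timestamp, image_data)
    timeshift:   assumed offset added to image timestamps for alignment
    Yields synchronized ("imu", ts, gyro) and ("image", ts, data) records.
    """
    fifo = deque(imu_samples)                # imu data buffered in a FIFO
    for img_ts, img in images:
        start_time = img_ts + timeshift      # timeshift-processed image timestamp
        # Forward buffered imu samples that precede the shifted image
        # timestamp, mirroring the FIFO comparison in the description.
        while fifo and fifo[0][0] < start_time:
            imu_ts, gyro = fifo.popleft()
            yield ("imu", imu_ts, gyro)
        yield ("image", start_time, img)     # synchronized image timestamp + data
```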
Rotation matrix calculation module
Referring to fig. 4, discrete integration of the imu gyro data between the two frames first yields a rotation vector $\boldsymbol{\theta}$ containing three components. The implementation of the discrete integration is described below:

Multiple imu samples $\omega_1, \omega_2, \dots, \omega_n$ are acquired between the two frames of images, with corresponding timestamps $t_1, t_2, \dots, t_n$; the timestamps of the two images are start_time and end_time (where the image timestamps have already been synchronized using timeshift). The formula is as follows:

$$\boldsymbol{\theta} = \sum_i \bar{\omega}_i \, \Delta t_i$$

where $\bar{\omega}_i$ is the mean of the imu gyro data in each segment and $\Delta t_i$ is the integration time of each segment.

After the rotation vector $\boldsymbol{\theta}$ is obtained, it is converted into a rotation matrix $R$ through the exponential operation:

$$R = \cos\theta \, I + (1-\cos\theta)\, \boldsymbol{n}\boldsymbol{n}^T + \sin\theta \, [\boldsymbol{n}]_\times$$

where $\theta = \|\boldsymbol{\theta}\|$, $\boldsymbol{n} = \boldsymbol{\theta}/\theta$, $[\boldsymbol{n}]_\times$ is the antisymmetric matrix of $\boldsymbol{n}$, and $I$ is the identity matrix.

The rotation matrix in the camera coordinate system is then obtained through the rotation coordinate-system transformation.
Referring to fig. 5, the hardware implementation of the rotation matrix calculation module proceeds as follows:
the integration segment control engine obtains the imu timestamps of each segment, the image timestamps and the imu gyro data, generates an integration enable pulse to trigger the integration operation of each segment, then performs the accumulation operation and generates an accumulation end pulse to finish the accumulation, yielding the rotation vector $\boldsymbol{\theta}$; the rotation matrix $R$ is then obtained through the exponential operation, and the rotation matrix in the camera coordinate system through the rotation coordinate-system transformation operation. In the hardware implementation of the exponential operation, $\sin\theta$ and $\cos\theta$ are simplified with first-order Taylor approximations (i.e., $\sin\theta \approx \theta$ and $\cos\theta \approx 1$, which is reasonable because the rotation between two frames is small), reducing the hardware cost while preserving the algorithm accuracy.
Image pyramid module
Referring to fig. 6, the image pyramid module performs Gaussian filtering and 2x downsampling twice on the image synchronized by the imu and image synchronization module, obtaining a three-layer pyramid image.
Referring to fig. 7, a dual-port SRAM with a depth of 640 and a bit width of 32 bits is used: the data of the previous 4 rows in the same column as the current pixel are read out from the SRAM, the current pixel and the data of the previous 3 rows are combined through a shift operation and written back into the SRAM. The current pixel and the 4 values read from the SRAM then pass through 20 D flip-flops, producing the 5x5 window data of the image at the same time.
Exploiting the symmetry of the Gaussian filter template, pixel values corresponding to identical template coefficients are added first, and the multiplications are converted into shift-and-add operations to realize the weighted summation in hardware.
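A behavioral sketch of the window formation and the shift-add filtering follows (the 5x5 binomial kernel [1 4 6 4 1] is an assumption; the patent only states that the template symmetry is exploited):

```python
def windows_5x5(frame):
    """Neighborhoods a line buffer would emit, one per pixel (behavioral).

    In hardware the 4 previous rows come from the dual-port SRAM and the
    5 most recent pixels of each row sit in the 20 D flip-flops; here the
    stored rows are simply indexed.
    """
    H, W = len(frame), len(frame[0])
    for y in range(4, H):
        for x in range(4, W):
            yield y, x, [[frame[y - 4 + r][x - 4 + c] for c in range(5)]
                         for r in range(5)]

def gauss5x5_shift_add(win):
    """Weighted sum of a 5x5 integer window with shift-and-add arithmetic.

    The assumed separable kernel [1 4 6 4 1] sums to 16, so the 2-D
    normalization 16*16 = 256 becomes a right shift by 8. Pixels under
    equal coefficients are added first, exploiting the symmetry.
    """
    def row_sum(row):
        # 1*a + 4*b + 6*c + 4*d + 1*e using shifts: 4x = x<<2, 6x = (x<<2)+(x<<1)
        s4 = (row[1] + row[3]) << 2
        s6 = (row[2] << 2) + (row[2] << 1)
        return row[0] + row[4] + s4 + s6
    col = [row_sum(r) for r in win]
    return (col[0] + col[4] + ((col[1] + col[3]) << 2)
            + (col[2] << 2) + (col[2] << 1)) >> 8
```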
DDR control module
Referring to fig. 8, this module implements the control logic for writing the 3-layer pyramid images into DDR and for reading from DDR the 12x12 image window data required by the LK calculation; ping-pong logic is designed to rapidly switch the stored image data of the current frame and the previous frame by switching addresses.
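A minimal sketch of the ping-pong address switching (the base addresses and the per-frame region size are assumptions):

```python
class PingPongDDR:
    """Models the current/previous frame swap by switching base addresses."""
    FRAME_BYTES = 0x100000                  # assumed size reserved per pyramid set

    def __init__(self):
        self.bases = [0x0000_0000, self.FRAME_BYTES]
        self.cur = 0                        # buffer holding the current frame

    def write_base(self):
        return self.bases[self.cur]         # current-frame pyramid is written here

    def read_base(self):
        return self.bases[self.cur ^ 1]     # previous-frame pyramid is read here

    def next_frame(self):
        self.cur ^= 1                       # swap roles: only addresses change, no copy
```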
Feature point prediction module
The method for predicting the coordinates of the characteristic points of the previous frame on the current frame image by the characteristic point predicting module comprises the following steps:
the feature point coordinates tracked in the previous frame are converted from pixel coordinates to normalized coordinates and undistorted with the distortion model corresponding to the optics, yielding the undistorted feature point coordinates; considering the rotation part of the Euclidean transformation, the predicted normalized feature point coordinates are obtained and then converted back into pixel coordinates through a distortion-adding operation for use by the LK optical flow calculation module.
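A minimal sketch of this prediction under a pinhole model (the radial distortion model and the one-step undistortion are assumptions; the patent only refers to "the distortion model corresponding to the optics"):

```python
import numpy as np

def predict_points(pts_px, K, dist, R):
    """Predict previous-frame feature points on the current frame.

    pts_px: (n, 2) tracked pixel coordinates of the previous frame
    K:      3x3 camera intrinsics;  dist: (k1, k2) radial coefficients
    R:      rotation matrix (previous camera -> current camera)
    """
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    k1, k2 = dist
    out = []
    for u, v in pts_px:
        x, y = (u - cx) / fx, (v - cy) / fy     # pixel -> normalized coordinates
        r2 = x * x + y * y
        d = 1 + k1 * r2 + k2 * r2 * r2
        xu, yu = x / d, y / d                   # one-step undistortion (approximate)
        p = R @ np.array([xu, yu, 1.0])         # rotation part of the Euclidean transform
        xp, yp = p[0] / p[2], p[1] / p[2]       # predicted normalized coordinates
        r2 = xp * xp + yp * yp                  # distortion-adding operation
        d = 1 + k1 * r2 + k2 * r2 * r2
        out.append((fx * xp * d + cx, fy * yp * d + cy))
    return np.array(out)
```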
LK optical flow calculation module
The computing method of the LK optical flow computing module comprises the following steps:
when calculating the optical flow, computation starts from the top-layer image, and the tracking result of the upper layer is used as the initial optical flow value of the layer below;
the optical flow estimate of each layer is computed through multiple iterations until the termination condition is met, and during the iterations the optical flow vector is continuously updated so that it more accurately represents the position of the feature point in the next frame.
Specifically, when calculating the optical flow, the calculation starts from the top-layer image, and the tracking result of the upper layer is used as the initial optical flow value of the layer below:

$$g^{L-1} = 2\,(g^{L} + d^{L})$$

where $g^{L}$ is the initial optical flow value of layer $L$ and $d^{L}$ is the optical flow obtained at layer $L$ after $N$ iterations of the optical flow algorithm. The initial optical flow value of the top layer (the second layer) is one quarter of the predicted feature point coordinates.
The optical flow estimate of each layer is computed through multiple iterations until the termination condition is met; during the iterations the optical flow vector is continuously updated so that it more accurately represents the position of the feature point in the next frame. At each iteration, the increment obtained from the optical flow algorithm formula,

$$\eta_k = G^{-1} b_k, \qquad G = \sum \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix}, \qquad b_k = \sum \delta I_k \begin{bmatrix} I_x \\ I_y \end{bmatrix}$$

is added to the current optical flow vector to obtain the updated optical flow vector, where $I_x$ is the x-direction derivative of the previous frame image, $I_y$ is the y-direction derivative of the previous frame image, and $\delta I_k$ is the gray-level difference between the current frame and the previous frame image.

Iteration stops when the iteration count reaches its maximum or the increment of the optical flow vector becomes small, and the calculation proceeds to the next pyramid layer.
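A minimal single-level sketch of this iterative solve (the window half-size, iteration cap, termination threshold and central-difference gradients are assumptions; border handling is omitted):

```python
import numpy as np

def lk_level(prev, cur, pt, g, iters=10, eps=0.01, half=5):
    """One pyramid level of Lucas-Kanade for a single feature point.

    prev, cur: float 2-D images of this level
    pt: (x, y) feature point in prev;  g: initial optical flow of this level
    Returns the refined flow d, so that pt + g + d matches in cur.
    """
    x, y = int(pt[0]), int(pt[1])
    win = np.s_[y - half:y + half + 1, x - half:x + half + 1]
    # Central-difference derivatives of the previous frame, restricted to the window.
    Ix = (0.5 * (np.roll(prev, -1, 1) - np.roll(prev, 1, 1)))[win]
    Iy = (0.5 * (np.roll(prev, -1, 0) - np.roll(prev, 1, 0)))[win]
    G = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    Ginv = np.linalg.inv(G)                      # computed once, reused per iteration
    d = np.zeros(2)
    for _ in range(iters):
        dx, dy = (np.asarray(g) + d).round().astype(int)  # integer shift for simplicity
        cur_win = cur[y - half + dy:y + half + 1 + dy,
                      x - half + dx:x + half + 1 + dx]
        dI = prev[win] - cur_win                 # gray-level difference
        b = np.array([np.sum(dI * Ix), np.sum(dI * Iy)])
        eta = Ginv @ b                           # increment from the LK formula
        d += eta
        if np.linalg.norm(eta) < eps:            # small increment: stop iterating
            break
    return d
```

Across the pyramid the result would propagate as `g = 2 * (g + d)` when moving down one level, matching the initial-value formula above.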
Referring to fig. 9, since the LK optical flow calculation requires multiple iterations to obtain a result, the invention designs a multi-calculation-engine architecture to increase the calculation rate.
The LK optical flow calculation module includes an engine management unit, a plurality of LK calculation engines, an image window data arbiter and an output buffer unit:
the engine management unit schedules idle LK calculation engines and assigns feature point coordinates to the scheduled engines; each LK calculation engine performs the LK optical flow calculation for its assigned feature point coordinates; the image window data arbiter receives the image-window data requests sent by the LK calculation engines in a polling manner, obtains the data through the DDR control module and returns them to the corresponding engine; the output buffer unit serializes the results of the parallel LK calculation engines.
The data flow of each LK calculation engine is as follows: the feature point window image data of the previous frame (a 12x12 matrix) is divided by a sliding-window operation into 81 clocks of 4x4 matrix data; the x-direction derivative $I_x$ and the y-direction derivative $I_y$ of the previous frame image are obtained with a pipeline; the previous frame image data ival is then accumulated and matrix-inverted to obtain the $G^{-1}$ matrix, while $I_x$, $I_y$ and ival are merged and cached for the iterative optical flow computation; the feature point window image data of the current frame (a 12x12 matrix) and the cached $I_x$, $I_y$, ival matrices are divided by the sliding-window operation into 81 clocks of 4x4 matrix data with the corresponding $I_x$, $I_y$, ival data, from which $\delta I$ is obtained with a pipeline and accumulated to obtain $b_k$; finally a matrix multiplication yields the increment $\eta_k$.
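As a behavioral sketch of the engine management, polling and output buffering (the scheduling granularity is an assumption; the model runs engines sequentially, whereas the hardware engines run in parallel):

```python
def lk_dispatch(points, num_engines, lk_engine):
    """Behavioral model of engine management, polling arbiter and output buffer.

    points:    feature point coordinates awaiting optical flow computation
    lk_engine: function(pt) -> tracked point, standing in for one LK engine
    """
    pending = list(points)
    busy = [None] * num_engines              # None marks an idle engine
    output_buffer = []
    while pending or any(b is not None for b in busy):
        for i in range(num_engines):         # arbiter polls the engines in turn
            if busy[i] is None:
                if pending:
                    busy[i] = pending.pop(0) # manager schedules an idle engine
            else:
                # engine finishes (window data requests hidden inside lk_engine)
                output_buffer.append(lk_engine(busy[i]))
                busy[i] = None               # engine returns to the idle pool
    return output_buffer                     # parallel results serialized in order
```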
Mismatch rejection module
The feature points of the previous frame are projected onto the current frame through the rotation matrix obtained by the rotation matrix module, giving the feature point vector pts1; the feature point vector output by the LK optical flow calculation module is pts2. First, all feature point coordinates are normalized to increase numerical stability; the previous-frame coordinate and the current-frame coordinate form a point pair, the distance of each point pair is computed, and if the distance is larger than the normalized-plane distance corresponding to one pixel, the feature point is judged to be mismatched and removed.
After all point pairs have been judged, the average distance is computed. If fewer than 3 feature points remain or the average distance is larger than the normalized-plane distance corresponding to one pixel, the matching is considered failed and all feature points are removed; if at least 3 feature points remain and the average distance is smaller than that threshold, the remaining point pairs are further screened with a two-point RANSAC method.
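A minimal sketch of the distance test (using the focal length to convert one pixel to a normalized-plane distance is an assumption; the two-point RANSAC stage is omitted):

```python
import numpy as np

def reject_mismatches(pts1, pts2, fx, min_pts=3):
    """Filter mismatched point pairs on the normalized plane.

    pts1: (n, 2) previous-frame points projected onto the current frame
    pts2: (n, 2) points output by the LK optical flow calculation module
    fx:   focal length in pixels; 1/fx approximates the normalized-plane
          distance corresponding to one pixel
    Returns a boolean mask of the point pairs kept.
    """
    thresh = 1.0 / fx
    dist = np.linalg.norm(np.asarray(pts1) - np.asarray(pts2), axis=1)
    keep = dist <= thresh                        # farther than one pixel: mismatch
    if keep.sum() < min_pts or dist.mean() > thresh:
        return np.zeros(len(dist), dtype=bool)   # matching failed: drop all points
    # The surviving pairs would further pass through two-point RANSAC here.
    return keep
```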
Feature point update module
The method for updating the feature points by the feature point updating module comprises the following steps:
the image is divided into a plurality of grids, specifically, the image is equally divided into 40 grids, and the pixel size of each grid is 80×96, wherein redundant operation is not generated, and the number of characteristic points of each grid is limited to be not more than 4.
The 40 grids are traversed; in each grid the distances between the feature points extracted from the current frame and the tracked feature points are compared, and only those extracted points whose distance to every tracked feature point in the same grid is larger than a threshold are kept; the feature points processed by the mismatch rejection module and the selected extracted points are then sorted by ID from small to large, and the first 4 are output as the tracked feature points of the grid.
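A minimal sketch of this grid-based update (the 80x96 grid size and the per-grid cap of 4 come from the description; the distance threshold value is an assumption):

```python
import numpy as np

GRID_W, GRID_H, MAX_PER_GRID = 80, 96, 4

def update_features(tracked, extracted, dist_thresh=10.0):
    """Merge tracked points with newly extracted ones, at most 4 per grid.

    tracked, extracted: lists of (x, y, point_id)
    """
    grids = {}
    def cell(p):
        return (int(p[0]) // GRID_W, int(p[1]) // GRID_H)
    for p in tracked:
        grids.setdefault(cell(p), {"trk": [], "new": []})["trk"].append(p)
    for p in extracted:
        g = grids.setdefault(cell(p), {"trk": [], "new": []})
        # Keep a new point only if it is far from every tracked point of the grid.
        if all(np.hypot(p[0] - q[0], p[1] - q[1]) > dist_thresh for q in g["trk"]):
            g["new"].append(p)
    out = []
    for g in grids.values():
        cand = sorted(g["trk"] + g["new"], key=lambda p: p[2])  # sort by ID ascending
        out.extend(cand[:MAX_PER_GRID])       # first 4 become the grid's tracked points
    return out
```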
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.
Furthermore, it should be understood that although the present description is organized in terms of embodiments, not every embodiment contains only a single independent technical solution; this manner of description is adopted merely for clarity, and the description should be taken as a whole. The technical solutions in the individual embodiments may also be suitably combined to form other embodiments understandable to those skilled in the art.

Claims (10)

1. A system for hardware-implemented optical flow tracking based on image and imu data, comprising: an imu and image synchronization module, a rotation matrix calculation module, an image pyramid module, a feature point prediction module, an LK optical flow calculation module, a mismatch rejection module, a feature point update module and a DDR control module;
the imu and image synchronization module is used for time-synchronizing the imu data and the image data, outputting the synchronized imu data stream to the rotation matrix calculation module and the synchronized image stream to the image pyramid module; the image pyramid module establishes an image pyramid data stream after receiving the image stream; the DDR control module controls storage of the image pyramid data stream; the rotation matrix calculation module integrates the imu gyro data between two frames of images and then obtains a rotation matrix in the camera coordinate system through coordinate-system conversion; the feature point prediction module predicts the coordinates of the feature points of the previous frame on the current frame image according to the rotation matrix in the camera coordinate system and outputs them to the LK optical flow calculation module, wherein the feature points of the previous frame are output by the feature point update module; the LK optical flow calculation module calculates the optical flow of the predicted tracking points by the pyramid Lucas-Kanade optical flow method, obtains the tracking point coordinates of the current frame and outputs them to the mismatch rejection module; the LK optical flow calculation module reads the image pyramid data of the current frame and the previous frame through the DDR control module; the mismatch rejection module filters out mismatched feature points; and the feature point update module updates the feature points processed by the mismatch rejection module with the feature points extracted from the current frame, outputs the resulting feature point information to the next-stage module, and feeds it back to the feature point prediction module as the feature points of the previous frame.
2. The system for hardware-implemented optical flow tracking based on image and imu data of claim 1, wherein,
the hardware implementation method for obtaining the rotation matrix of the rotation matrix calculation module comprises the following steps:
the integration segment control engine obtains the imu timestamps of each segment, the image timestamps and the imu gyro data, generates an integration enable pulse to trigger the integration operation of each segment, then performs the accumulation operation and generates an accumulation end pulse to finish the accumulation, obtaining a rotation vector; the rotation vector is converted into a rotation matrix through an exponential operation, and the rotation matrix in the camera coordinate system is obtained through a rotation coordinate-system transformation operation.
3. The system for hardware-implemented optical flow tracking based on image and imu data of claim 1, wherein,
the image pyramid module performs Gaussian filtering and 2x downsampling twice on the image synchronized by the imu and image synchronization module, obtaining a three-layer pyramid image.
4. The system for hardware-implemented optical flow tracking based on image and imu data of claim 1, wherein,
the method for predicting the coordinates of the characteristic points of the previous frame on the current frame image by the characteristic point predicting module comprises the following steps:
the feature point coordinates tracked in the previous frame are converted from pixel coordinates to normalized coordinates and undistorted with the distortion model corresponding to the optics, yielding the undistorted feature point coordinates; considering the rotation part of the Euclidean transformation, the predicted normalized feature point coordinates are obtained and then converted back into pixel coordinates through a distortion-adding operation for use by the LK optical flow calculation module.
5. The system for hardware-implemented optical flow tracking based on image and imu data of claim 1, wherein,
the computing method of the LK optical flow computing module comprises the following steps:
when calculating the optical flow, computation starts from the top-layer image, and the tracking result of the upper layer is used as the initial optical flow value of the layer below;
the optical flow estimate of each layer is computed through multiple iterations until the termination condition is met, and during the iterations the optical flow vector is continuously updated so that it more accurately represents the position of the feature point in the next frame.
6. The system for hardware-implemented optical flow tracking based on image and imu data of claim 1, wherein,
the LK optical flow calculation module includes an engine management unit, a plurality of LK calculation engines, an image window data arbiter, and an output buffer unit:
the engine management unit schedules idle LK calculation engines and assigns feature point coordinates to the scheduled engines; each LK calculation engine performs the LK optical flow calculation for its assigned feature point coordinates; the image window data arbiter receives the image-window data requests sent by the LK calculation engines in a polling manner, obtains the data through the DDR control module and returns them to the corresponding engine; the output buffer unit serializes the results of the parallel LK calculation engines.
7. The system for hardware-implemented optical flow tracking based on image and imu data of claim 1, wherein,
the method by which the mismatch rejection module filters mismatched feature points comprises the following steps:
all feature point coordinates are normalized to increase numerical stability; the previous-frame coordinate and the current-frame coordinate form a point pair, the distance of each point pair is computed, and if the distance is larger than the normalized-plane distance corresponding to one pixel, the feature point is judged to be mismatched and removed.
8. The hardware-implemented optical flow tracking system based on image and imu data of claim 7, wherein,
the method by which the mismatch rejection module filters mismatched feature points further comprises the following steps:
after all point pairs have been judged, the average distance is computed; if fewer than 3 feature points remain or the average distance is larger than the normalized-plane distance corresponding to one pixel, the matching is considered failed and all feature points are removed; if at least 3 feature points remain and the average distance is smaller than that threshold, the remaining point pairs are further screened with a two-point RANSAC method.
9. The system for hardware-implemented optical flow tracking based on image and imu data of claim 1, wherein,
the method for updating the feature points by the feature point updating module comprises the following steps:
dividing the image into a number of grids, the number of feature points in each grid being at most 4;
each grid compares the distances between the feature points extracted from the current frame and the tracked feature points, keeping only those extracted points whose distance to every tracked feature point in the same grid is larger than a threshold; the feature points processed by the mismatch rejection module and the selected extracted points are then sorted by ID from small to large, and the first 4 are output as the tracked feature points of the grid.
10. The system for hardware-implemented optical flow tracking based on image and imu data of claim 1, wherein,
the imu and image synchronization module defines a timeshift parameter as an offset to the image timestamps, so as to align the imu timestamps with the image timestamps.
CN202311500139.8A 2023-11-13 2023-11-13 System for realizing optical flow tracking based on image and imu data hardware Pending CN117237417A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311500139.8A CN117237417A (en) 2023-11-13 2023-11-13 System for realizing optical flow tracking based on image and imu data hardware

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311500139.8A CN117237417A (en) 2023-11-13 2023-11-13 System for realizing optical flow tracking based on image and imu data hardware

Publications (1)

Publication Number Publication Date
CN117237417A true CN117237417A (en) 2023-12-15

Family

ID=89098675

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311500139.8A Pending CN117237417A (en) 2023-11-13 2023-11-13 System for realizing optical flow tracking based on image and imu data hardware

Country Status (1)

Country Link
CN (1) CN117237417A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104978728A (en) * 2014-04-08 2015-10-14 南京理工大学 Image matching system of optical flow method
CN110009681A (en) * 2019-03-25 2019-07-12 中国计量大学 A kind of monocular vision odometer position and posture processing method based on IMU auxiliary
CN115457127A (en) * 2022-09-01 2022-12-09 东南大学 Self-adaptive covariance method based on feature observation number and IMU pre-integration


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination