CN109919825B - ORB-SLAM hardware accelerator - Google Patents


Info

Publication number: CN109919825B (application CN201910084078.9A)
Authority: CN (China)
Prior art keywords: module, orb, unit, cache, image
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN109919825A (application publication)
Inventors: Yang Jianlei (杨建磊), Liu Runze (刘润泽), Zhao Weisheng (赵巍胜)
Current assignee: Beihang University
Original assignee: Beihang University
Application filed by Beihang University; priority to CN201910084078.9A
Publication of application CN109919825A; grant published as CN109919825B

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses an ORB-SLAM hardware accelerator comprising an FPGA hardware acceleration module for accelerating feature extraction and feature matching; a sensor module for capturing images; and a processor system that, acting as host, controls the FPGA hardware acceleration module and the sensor module and is responsible for pose estimation, pose optimization and map updating. The invention uses the FPGA hardware acceleration module to accelerate the most computation- and time-intensive stages of the ORB-SLAM pipeline, which effectively raises the running speed of ORB-SLAM, reduces power consumption, greatly improves the performance-per-watt, and lowers the barrier to deploying ORB-SLAM on power-constrained platforms.

Description

ORB-SLAM hardware accelerator
Technical Field
The invention relates to the field of autonomous navigation, in particular to an ORB-SLAM hardware accelerator.
Background
SLAM (simultaneous localization and mapping) is one of the most critical technologies in the field of autonomous navigation: it enables an autonomous system, placed in an unknown environment, to incrementally build a map of its surroundings from sensor data while simultaneously determining its own position within that environment. SLAM is widely applied in autonomous driving, autonomous navigation robots, virtual reality, augmented reality and other fields.
ORB-SLAM is a feature-based visual SLAM system built on ORB descriptors. It is a highly efficient and robust SLAM system that has received extensive research attention. Existing ORB-SLAM systems can only run on traditional computing platforms (CPUs, GPUs), so their performance-to-power ratio is relatively low. Because of the performance and power limits of these platforms, running ORB-SLAM in real time at low power has long been a difficult problem. ORB-SLAM consumes a large amount of computing resources during feature extraction and matching, and high-performance CPUs and GPUs are often required to guarantee real-time operation, which brings substantial power overhead. If embedded CPUs and GPUs are used instead to reduce power consumption, however, the frame rate becomes too low for real-time operation.
Disclosure of Invention
The invention provides an ORB-SLAM hardware accelerator to solve the problem that ORB-SLAM is difficult to run in real time at low power consumption.
An ORB-SLAM hardware accelerator comprises a sensor module, an FPGA hardware acceleration module and a processor system;
the sensor module is used for acquiring image data;
the FPGA hardware acceleration module is used for extracting ORB features from the image acquired by the sensor module and matching the extracted features with map points in the global map;
and the processor system is used for calculating the camera pose and maintaining the global map according to the ORB features extracted by the FPGA hardware acceleration module and the matching result of the ORB features and map points in the global map.
Preferably, the FPGA hardware acceleration module includes: the device comprises an image down-sampling module, a feature extraction module and a feature matching module;
the image down-sampling module is used for generating an image pyramid;
the feature extraction module is used for extracting ORB features on each layer of the image pyramid;
the feature matching module is used for matching the ORB features extracted by the feature extraction module with map points in the global map.
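The image pyramid produced by the down-sampling module can be sketched in software as repeated down-sampling. Note the scale factor (1.2) and level count (8) below are the common ORB-SLAM defaults, not values fixed by this patent:

```python
def downsample(img, scale):
    """Nearest-neighbour down-sampling of a grayscale image (list of rows)."""
    h, w = len(img), len(img[0])
    nh, nw = max(1, int(h / scale)), max(1, int(w / scale))
    return [[img[int(r * scale)][int(c * scale)] for c in range(nw)]
            for r in range(nh)]

def build_pyramid(img, levels=8, scale=1.2):
    """Each pyramid level is the previous level shrunk by `scale`."""
    pyramid = [img]
    for _ in range(levels - 1):
        pyramid.append(downsample(pyramid[-1], scale))
    return pyramid
```

Feature extraction then runs independently on every level, which is what gives ORB its scale invariance.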
Preferably, the feature extraction module includes: a key point extracting unit, a non-maximum suppression unit, a Gaussian filtering unit, a characteristic direction calculating unit, a descriptor calculating unit and a first cache unit;
the key point extracting unit is used for extracting a FAST corner on an input image and calculating a Harris response value of the FAST corner;
the non-maximum suppression unit is used for performing non-maximum suppression on the FAST corner according to the Harris response value of the FAST corner, and reserving the FAST corner with the maximum Harris response value in the neighborhood;
the Gaussian filtering unit is used for carrying out Gaussian blur processing on the input image, removing noise in the image and generating a smooth image;
the characteristic direction calculating unit is used for calculating a characteristic direction on the image after Gaussian filtering;
the descriptor calculation unit is used for calculating a BRIEF descriptor of the feature on the image after Gaussian filtering according to the feature direction;
the first buffer unit is used for temporarily storing input data, intermediate calculation results and final results.
Preferably, the work flow of the feature direction calculating unit is as follows: firstly, calculating the pixel gray scale centroid coordinates of the neighborhood where the features are located; and then, calculating the ratio of the horizontal coordinate to the vertical coordinate of the gray centroid, and obtaining the direction of the features according to a lookup table.
Preferably, the first cache unit includes: input image caching, smooth image caching, response value caching, and feature caching. The input image cache, the smooth image cache and the response value cache adopt a ping-pong-like structure, are composed of a plurality of same caches, and can simultaneously process the input and the output of data. The feature cache adopts a maximum heap architecture, is used for screening the features while preserving the features, and only reserves partial features with large Harris response values.
Preferably, each unit of the feature extraction module adopts a pipelined computing architecture: all computing units run in parallel, only part of the data is held in the caches, and data is discarded as soon as it has been used.
Preferably, the feature matching module includes: the matching unit and the second cache unit;
the matching unit is used for matching the features extracted by the feature extraction module with map points in the global map. The working process is as follows: first, the hamming distance between any two descriptors in two sets of descriptors (descriptors of map points in the global map and descriptors of features extracted from the previous frame at the present time) is calculated; then, matching the two groups of descriptors according to the Hamming distance by using a violent searching method;
the second cache unit comprises a descriptor cache and a result cache, and is used for temporarily storing two groups of descriptors to be matched and matching results.
The processor system includes a general-purpose processor, a memory and a memory controller. The general-purpose processor is used for calculating the camera pose from the extracted image features and their matches against map points in the global map, and for updating and maintaining the global map.
The FPGA hardware acceleration module and the processor system communicate via an AXI bus. A general processor in the processor system can directly configure an instruction register in the FPGA hardware acceleration module through an AXI bus; the FPGA hardware acceleration module may also directly read data from the memory in the processor system through the AXI bus, or store the calculation result in the memory.
The FPGA hardware acceleration module and the processor system run in parallel in a pipeline mode.
Advantages and effects of the ORB-SLAM hardware accelerator: the most computation-intensive stages of ORB-SLAM are accelerated by a dedicated hardware acceleration module, raising the frame rate of the whole system while reducing power consumption, which makes low-power real-time operation of ORB-SLAM possible.
Drawings
Fig. 1 is a schematic structural diagram of an ORB-SLAM hardware accelerator according to an embodiment of the present invention.
Fig. 2 is a schematic structural diagram of a feature extraction module of an ORB-SLAM hardware accelerator according to an embodiment of the present invention.
Fig. 3 is a schematic structural diagram of a feature matching module of an ORB-SLAM hardware accelerator according to an embodiment of the present invention.
Fig. 4 is a schematic diagram of a pipeline when the FPGA hardware acceleration module of the ORB-SLAM hardware accelerator runs in parallel with the processor system according to an embodiment of the present invention.
Detailed Description
The invention provides an ORB-SLAM hardware accelerator aimed at the problem that existing ORB-SLAM implementations are difficult to run in real time on low-power platforms. The accelerator offloads the most computation-intensive stages of the ORB-SLAM pipeline to a dedicated hardware acceleration module, improving the system's overall performance-per-watt and making low-power real-time operation possible. The present invention is described in further detail below with reference to specific embodiments and the accompanying drawings.
Fig. 1 is a schematic structural diagram of an ORB-SLAM hardware accelerator according to an embodiment of the present invention. As shown in fig. 1, the accelerator consists of three parts: an FPGA hardware acceleration module, a processor system and a sensor module. The processor system acts as the host; the sensor module is connected to the processor system via a universal serial bus, and the FPGA hardware acceleration module is connected to the processor system via an AXI bus. The processor system comprises a general-purpose processor and a memory; the FPGA hardware acceleration module comprises an image down-sampling module, a feature extraction module and a feature matching module.
The sensor module captures images at a fixed frequency; each captured image is transferred to the memory in the processor system for temporary storage. The general-purpose processor then notifies the feature extraction module to begin operation. After receiving the notification, the feature extraction module starts the data transfer without any involvement of the general-purpose processor, loads the picture stored in memory into the input image cache, and begins extracting features from the image. Meanwhile, the feature extraction module instructs the image down-sampling module to down-sample the picture stored in memory, generating an image pyramid for further feature extraction. After feature extraction finishes, the feature extraction module stores the extracted features in the memory and in the descriptor cache of the feature matching module, and notifies the general-purpose processor via an interrupt. The general-purpose processor then notifies the feature matching module to begin operation. The feature matching module reads the map points of the global map from memory, matches them against the features obtained from the feature extraction module, stores the result in memory after matching finishes, and notifies the general-purpose processor. After that, the general-purpose processor performs pose estimation, pose optimization and keyframe decision; if the frame is a keyframe, the global map is updated.
Fig. 2 is a schematic structural diagram of the feature extraction module of an ORB-SLAM hardware accelerator according to an embodiment of the present invention, where NMS denotes the non-maximum suppression unit. The feature extraction module extracts ORB features from the input image: it reads a picture from memory through the AXI interface, extracts ORB features from it, writes the results back to memory through the AXI interface, and sends the feature descriptors to the feature matching module. As shown in fig. 2, the feature extraction module includes:
AXI interface: for communicating with the processor system. The AXI interface includes a slave interface, through which the general-purpose processor in the processor system issues instructions, and a master interface, used to read and write the memory in the processor system. It should be noted that when the feature extraction module reads and writes memory through the AXI master interface, no involvement of the general-purpose processor is needed, so the general-purpose processor can process other tasks at the same time.
A key point extraction unit: for extracting FAST keypoints from the image and calculating the Harris response value of each keypoint. The keypoint extraction unit takes a 7 × 7 pixel region as input and compares the gray value of the center pixel with the gray values of the pixels on the surrounding circle of radius 3 to decide whether the pixel is a FAST keypoint. The response value is computed from the gray-value differences between the center pixel and the pixels on the circle.
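The segment test this unit performs can be sketched in software as follows. The contiguous-arc length of 9 follows the common FAST-9 variant and the threshold `t` is an assumed parameter; the patent itself only specifies the 7 × 7 window and the radius-3 circle:

```python
# 16 pixel offsets (dx, dy) on the Bresenham circle of radius 3
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_fast_corner(img, r, c, t=20, arc=9):
    """Segment test: `arc` contiguous circle pixels all brighter than
    center+t, or all darker than center-t (FAST-9 when arc=9)."""
    center = img[r][c]
    labels = []
    for dx, dy in CIRCLE:
        p = img[r + dy][c + dx]
        labels.append(1 if p > center + t else (-1 if p < center - t else 0))
    doubled = labels + labels  # duplicate to handle wrap-around on the circle
    for sign in (1, -1):
        run = 0
        for v in doubled:
            run = run + 1 if v == sign else 0
            if run >= arc:
                return True
    return False
```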
A Gaussian filtering unit: for Gaussian blurring of the image. The Gaussian filtering unit stores a 7 × 7 Gaussian kernel and generates the smoothed image by convolving the input image's pixels with this kernel.
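A software sketch of the 7 × 7 Gaussian smoothing; the value of sigma below is an assumption, as the patent does not state it:

```python
import math

def gaussian_kernel(size=7, sigma=1.5):
    """Normalized size x size Gaussian kernel (coefficients sum to 1)."""
    half = size // 2
    k = [[math.exp(-(x * x + y * y) / (2 * sigma * sigma))
          for x in range(-half, half + 1)] for y in range(-half, half + 1)]
    s = sum(map(sum, k))
    return [[v / s for v in row] for row in k]

def smooth_pixel(img, r, c, kernel):
    """One output pixel of the convolution of `img` with `kernel`."""
    half = len(kernel) // 2
    return sum(kernel[i][j] * img[r - half + i][c - half + j]
               for i in range(len(kernel)) for j in range(len(kernel)))
```

In the hardware unit the same multiply-accumulate is performed by a fixed arithmetic tree fed from the line buffers, one output pixel per cycle.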
A non-maximum suppression unit: for non-maximum suppression of keypoints. It filters keypoints according to their Harris response values, retaining only the single keypoint with the maximum response value within any 3 × 3 neighborhood.
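The 3 × 3 suppression rule can be sketched as follows; how ties between equal responses are broken is an assumed detail:

```python
def nms_3x3(resp):
    """Keep a response only if it is the maximum of its 3x3 neighbourhood.
    A response of 0 means 'not a keypoint', matching the cache encoding."""
    h, w = len(resp), len(resp[0])
    keep = [[0] * w for _ in range(h)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            v = resp[r][c]
            if v > 0 and all(v >= resp[r + dr][c + dc]
                             for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                             if (dr, dc) != (0, 0)):
                keep[r][c] = v
    return keep
```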
A characteristic direction calculation unit: for computing feature directions on the smoothed image. The feature direction calculation unit takes a circular region of radius 15 pixels around the feature position as an input, first calculates the grayscale centroid of the region, and then takes the vector of the feature position to the grayscale centroid position as the feature direction.
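The grayscale-centroid computation amounts to the image moments m10 and m01 over the circular patch; the hardware replaces the final `atan2` with the ratio-plus-lookup-table scheme described earlier:

```python
import math

def feature_orientation(img, r, c, radius=15):
    """Intensity-centroid orientation: angle of the vector from the
    feature position to the grayscale centroid of the circular patch."""
    m10 = m01 = 0.0
    for dr in range(-radius, radius + 1):
        for dc in range(-radius, radius + 1):
            if dr * dr + dc * dc <= radius * radius:
                p = img[r + dr][c + dc]
                m10 += dc * p  # weighted horizontal offset
                m01 += dr * p  # weighted vertical offset
    return math.atan2(m01, m10)
```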
A descriptor calculation unit: for computing feature descriptors on the smoothed image. It stores 256 pairs of test positions and produces a 256-bit descriptor by comparing the pixel gray values at these test positions within the 31 × 31 neighborhood around the feature. When computing a descriptor, the test positions are rotated to align with the feature point's direction, which guarantees the rotation invariance of the feature.
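A software sketch of the steered-BRIEF computation. The real ORB test pattern is learned offline and stored in the unit, so a seeded random pattern stands in for the 256 stored pairs here:

```python
import math, random

# Stand-in for the 256 test-position pairs fixed at design time.
random.seed(0)
PAIRS = [((random.randint(-13, 13), random.randint(-13, 13)),
          (random.randint(-13, 13), random.randint(-13, 13)))
         for _ in range(256)]

def brief_descriptor(img, r, c, angle):
    """256-bit steered BRIEF: rotate each test pair by `angle`, then
    compare the smoothed intensities at the two rotated positions."""
    ca, sa = math.cos(angle), math.sin(angle)
    desc = 0
    for (x1, y1), (x2, y2) in PAIRS:
        # rotating the test offsets makes the descriptor rotation-invariant
        p1 = img[r + round(-sa * x1 + ca * y1)][c + round(ca * x1 + sa * y1)]
        p2 = img[r + round(-sa * x2 + ca * y2)][c + round(ca * x2 + sa * y2)]
        desc = (desc << 1) | (1 if p1 < p2 else 0)
    return desc
```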
A first cache unit: the image processing method is characterized by comprising an input image cache, a smooth image cache, a response value cache and a characteristic cache, wherein:
Input image cache, smoothed image cache, response value cache: used to buffer the input image, the smoothed image and the keypoint response values, respectively. These three caches adopt a ping-pong-like design composed of several identical caches: while some caches are busy streaming data out and cannot accept input, the remaining caches can receive input data. Note that because the feature extraction module adopts a pipelined computing architecture, data held in a cache can be discarded once its use is complete, so only a small amount of data needs to be stored in these three caches. Taking the input image cache as an example, only 16 lines of the image's pixels need to be kept; the whole picture never needs to be stored.
Feature cache: for storing the extracted features and screening them. The feature cache adopts a max-heap architecture: as features arrive they are heap-ordered by their response values, and only the features with the largest response values are retained.
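The screening behaviour of the feature cache can be sketched in software. The patent describes a max-heap in hardware; the usual software equivalent for keeping the top-N items is a bounded min-heap whose root is the weakest retained feature:

```python
import heapq

class FeatureCache:
    """Retains only the `capacity` features with the largest responses."""
    def __init__(self, capacity):
        self.capacity = capacity
        self._heap = []  # min-heap of (response, feature)

    def push(self, response, feature):
        if len(self._heap) < self.capacity:
            heapq.heappush(self._heap, (response, feature))
        elif response > self._heap[0][0]:
            # new feature beats the weakest kept one: swap them in O(log N)
            heapq.heapreplace(self._heap, (response, feature))

    def features(self):
        """Kept features, strongest first."""
        return sorted(self._heap, reverse=True)
```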
The feature extraction module extracts ORB features from a picture as follows. First, part of the image's pixels are stored into the input image cache via the AXI interface. The keypoint extraction unit and the Gaussian filtering unit then begin, respectively, extracting FAST keypoints from these pixels and applying Gaussian blur. While extracting FAST keypoints, the keypoint extraction unit computes each keypoint's Harris response value and stores it in the response value cache (note that the data in the response value cache encodes not only the response value but also whether a pixel is a keypoint: a response value of 0 means the pixel is not a keypoint, any other value means it is). The Gaussian filtering unit stores the generated smoothed image in the smoothed image cache. Next, the non-maximum suppression (NMS) unit performs non-maximum suppression on the extracted keypoints, and the feature direction calculation unit computes the feature directions of the keypoints on the smoothed image. The descriptor calculation unit then computes the descriptors of the features according to the feature directions and stores the results in the feature cache. After all computation finishes, the results held in the feature cache are sent back to the memory and forwarded to the feature matching module. It should be noted that during actual operation the individual computing units and caches in the feature extraction module do not run serially, but in parallel in a pipelined manner.
Fig. 3 is a schematic structural diagram of a feature matching module of an ORB-SLAM hardware accelerator according to an embodiment of the present invention. The feature matching module is used for matching the features extracted from the image by the feature extraction module with map points in the global map, and comprises:
AXI interface: consistent with the AXI interface in the feature extraction module.
A matching unit: for matching two sets of descriptors (the descriptors of map points in the global map and the descriptors of the features extracted from the most recent frame). The matching unit comprises multiple Hamming-distance calculation units and comparators. It first computes the Hamming distance between descriptors, which measures their similarity; the most similar descriptors from the two sets are then matched with each other.
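The Hamming-distance brute-force matching can be sketched as follows; the distance threshold `max_dist` is an assumed parameter, not a value from the patent:

```python
def hamming(a, b):
    """Hamming distance between two 256-bit descriptors held as ints."""
    return bin(a ^ b).count("1")

def brute_force_match(map_descs, frame_descs, max_dist=64):
    """For every map-point descriptor, pick the nearest frame descriptor,
    rejecting matches whose distance exceeds `max_dist`."""
    matches = []
    for i, d1 in enumerate(map_descs):
        best_j, best_d = -1, max_dist + 1
        for j, d2 in enumerate(frame_descs):
            d = hamming(d1, d2)
            if d < best_d:
                best_j, best_d = j, d
        if best_j >= 0:
            matches.append((i, best_j, best_d))
    return matches
```

In hardware the inner loop is unrolled across the parallel Hamming-distance units, so many candidate pairs are scored per cycle.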
Descriptor cache and result cache: for caching the descriptors and the matching results, respectively. The two sets of descriptors stored in the descriptor cache (map-point descriptors and descriptors of features extracted from the most recent frame) come from the memory and from the feature extraction module, respectively. Note that, to reduce data-transfer overhead, the map-point descriptors stored in the descriptor cache are updated if and only if the global map is updated.
Fig. 4 is a schematic diagram of a pipeline when an FPGA hardware acceleration module of an ORB-SLAM hardware accelerator and a processor system run in parallel according to an embodiment of the present invention, where a rectangle represents each process in a SLAM workflow, PE represents pose estimation, PO represents pose optimization, FE represents feature extraction, FM represents feature matching, and MU represents global map update. Further, PS stands for processor system.
When the ORB-SLAM hardware accelerator processes an ordinary frame, feature extraction, feature matching, pose estimation and pose optimization are carried out in sequence. Feature extraction and feature matching run on the FPGA hardware acceleration module, while pose estimation and pose optimization run on the processor system. To let the FPGA hardware acceleration module and the processor system run in parallel and improve throughput, the FPGA hardware acceleration module starts feature extraction and matching for the next frame while the processor system performs pose estimation and pose optimization for the current one.
Processing a keyframe differs from processing an ordinary frame in that the processor system must additionally update the global map after pose estimation and pose optimization. Because feature matching for the next frame needs the updated global map, the FPGA hardware acceleration module runs only the feature extraction of the next frame in parallel with the processor system, and starts feature matching only after the global map update has completed.
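The throughput benefit of overlapping the FPGA stages (FE + FM) with the processor stages (PE + PO) on ordinary frames can be illustrated with a toy timing model; the stage durations are made-up numbers, since the patent gives no timings:

```python
def serial_time(n, fe, fm, pe, po):
    """All stages of all frames run back-to-back on one device."""
    return n * (fe + fm + pe + po)

def pipelined_time(n, fe, fm, pe, po):
    """FPGA (FE+FM) of frame i+1 overlaps PS (PE+PO) of frame i, so in
    steady state one frame completes every max(FPGA, PS) time units."""
    fpga, ps = fe + fm, pe + po
    return fpga + ps + (n - 1) * max(fpga, ps)
```

With equal 1 ms stages and 10 frames, the serial schedule needs 40 ms while the pipelined one needs 22 ms, i.e. the frame interval drops from the sum of all four stages to the slower of the two devices.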
The invention is further illustrated above using specific embodiments. It should be noted that the above-mentioned embodiments are only specific embodiments of the present invention, and should not be construed as limiting the present invention. Any modification, replacement, improvement and the like within the idea of the present invention should be within the protection scope of the present invention.

Claims (6)

1. An ORB-SLAM hardware accelerator, characterized in that the accelerator comprises a sensor module, an FPGA hardware acceleration module and a processor system;
the sensor module is used for acquiring image data;
the FPGA hardware acceleration module is used for extracting ORB features from the image acquired by the sensor module and matching the extracted features with map points in the global map;
the processor system is used for calculating the camera pose and maintaining the global map according to the ORB features extracted by the FPGA hardware acceleration module and the matching result of the ORB features and map points in the global map;
the FPGA hardware acceleration module comprises: the device comprises an image down-sampling module, a feature extraction module and a feature matching module;
the image down-sampling module is used for generating an image pyramid;
the feature extraction module is used for extracting ORB features on each layer of the image pyramid;
the feature matching module is used for matching the ORB features extracted by the feature extraction module with map points in the global map;
the feature extraction module includes: the device comprises a key point extracting unit, a non-maximum value inhibiting unit, a Gaussian filtering unit, a characteristic direction calculating unit, a descriptor calculating unit and a first cache unit;
the key point extracting unit is used for extracting a FAST corner on an input image and calculating a Harris response value of the FAST corner;
the non-maximum suppression unit is used for performing non-maximum suppression on the FAST corner according to the Harris response value of the FAST corner, and reserving the FAST corner with the maximum Harris response value in the neighborhood;
the Gaussian filtering unit is used for carrying out Gaussian blur processing on the input image, removing noise in the image and generating a smooth image;
the characteristic direction calculating unit is used for calculating a characteristic direction on the image after Gaussian filtering;
the descriptor calculation unit is used for calculating a BRIEF descriptor of the feature on the image after Gaussian filtering according to the feature direction;
the first cache unit is used for temporarily storing input data, intermediate calculation results and final results;
the work flow of the characteristic direction calculating unit is as follows: firstly, calculating the pixel gray scale centroid coordinates of the neighborhood where the features are located; then, calculating the ratio of the horizontal coordinate to the vertical coordinate of the gray centroid, and obtaining the direction of the features according to a lookup table;
the first cache unit includes: inputting an image cache, a smooth image cache, a response value cache and a characteristic cache; the input image cache, the smooth image cache and the response value cache adopt a ping-pong-like architecture, are composed of a plurality of same caches and can simultaneously process the input and the output of data; the feature cache adopts a maximum heap architecture, is used for screening the features while preserving the features, and only reserves partial features with large Harris response values.
2. The ORB-SLAM hardware accelerator of claim 1, wherein: each unit of the feature extraction module adopts a pipelined computing architecture, all computing units run in parallel, only part of the data is held in the cache, and data is discarded immediately after use.
3. The ORB-SLAM hardware accelerator of claim 1, wherein: the feature matching module includes: the matching unit and the second cache unit;
the matching unit is used for matching the features extracted by the feature extraction module with map points in the global map; its working process is as follows: first, the Hamming distance between every pair of descriptors from the two descriptor sets is calculated; then the two sets of descriptors are matched by brute-force search according to the Hamming distance; the two sets of descriptors are the descriptors of map points in the global map and the descriptors of features extracted from the most recent frame;
the second cache unit comprises a descriptor cache and a result cache, and is used for temporarily storing two groups of descriptors to be matched and matching results.
4. The ORB-SLAM hardware accelerator of claim 1, wherein: the processor system comprises a general processor, a memory and a memory controller; and the general processor is used for calculating the camera pose according to the extracted image features and the matching relation between the features and map points in the global map, and updating and maintaining the global map.
5. The ORB-SLAM hardware accelerator of claim 4, wherein: the processor system and the FPGA hardware acceleration module communicate via an AXI bus; a general processor in the processor system can directly configure an instruction register in the FPGA hardware acceleration module through an AXI bus; the FPGA hardware acceleration module may also directly read data from the memory in the processor system through the AXI bus, or store the calculation result in the memory.
6. The ORB-SLAM hardware accelerator of claim 1, wherein: the FPGA hardware acceleration module and the processor system run in parallel in a pipeline mode.
CN201910084078.9A 2019-01-29 2019-01-29 ORB-SLAM hardware accelerator Active CN109919825B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910084078.9A CN109919825B (en) 2019-01-29 2019-01-29 ORB-SLAM hardware accelerator


Publications (2)

Publication Number Publication Date
CN109919825A CN109919825A (en) 2019-06-21
CN109919825B true CN109919825B (en) 2020-11-27

Family

ID=66961065


Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110991291B (en) * 2019-11-26 2021-09-07 清华大学 Image feature extraction method based on parallel computing
CN113052750A (en) * 2021-03-31 2021-06-29 广东工业大学 Accelerator and accelerator for task tracking in VSLAM system
CN113112394A (en) * 2021-04-13 2021-07-13 北京工业大学 Visual SLAM front-end acceleration method based on CUDA technology
CN113536024B (en) * 2021-08-11 2022-09-09 重庆大学 ORB-SLAM relocation feature point retrieval acceleration method based on FPGA
CN114283065B (en) * 2021-12-28 2024-06-11 北京理工大学 ORB feature point matching system and method based on hardware acceleration

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1468399A (en) * 2000-10-10 2004-01-14 Nazomi Communications, Inc. Java hardware accelerator using microcode engine
CN102446085A (en) * 2010-10-01 2012-05-09 英特尔移动通信技术德累斯顿有限公司 Hardware accelerator module and method for setting up same
CN104062977A (en) * 2014-06-17 2014-09-24 天津大学 Full-autonomous flight control method for quadrotor unmanned aerial vehicle based on vision SLAM
CN105022401A (en) * 2015-07-06 2015-11-04 南京航空航天大学 SLAM method through cooperation of multiple quadrotor unmanned planes based on vision
CN108846867A (en) * 2018-08-29 2018-11-20 安徽云能天智能科技有限责任公司 A kind of SLAM system based on more mesh panorama inertial navigations

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103400388B (en) * 2013-08-06 2016-12-28 中国科学院光电技术研究所 A kind of method utilizing RANSAC to eliminate Brisk key point error matching points pair
CN108171734B (en) * 2017-12-25 2022-01-07 西安因诺航空科技有限公司 ORB feature extraction and matching method and device
CN108960251A (en) * 2018-05-22 2018-12-07 东南大学 A kind of images match description generates the hardware circuit implementation method of scale space


Also Published As

Publication number Publication date
CN109919825A (en) 2019-06-21


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant