US11080864B2 - Feature detection, sorting, and tracking in images using a circular buffer - Google Patents
- Publication number
- US11080864B2 (application US15/864,029)
- Authority
- US
- United States
- Prior art keywords
- feature
- image
- features
- pixels
- circular buffer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/60—Memory management
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
- G06K9/2054; G06K9/36; G06K9/4642; G06K9/6201; G06K9/78
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T7/248—Analysis of motion using feature-based methods involving reference images or patches
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
- G06V10/62—Extraction of image or video features relating to a temporal dimension, e.g. time-based feature extraction; Pattern tracking
Definitions
- Image feature detection and tracking are used in a wide variety of computer vision applications.
- computer vision applications may include object tracking, image or video panorama, video stabilization, Structure from Motion, Visual Inertial Odometry (VIO), and Simultaneous Localization and Mapping (SLAM).
- a VIO or SLAM algorithm may be used to determine user head position or movement to deliver relevant augmented or virtual content correctly to a display.
- Image feature detection and tracking are some of the primary steps involved in detection of a user head position or a camera pose.
- FIG. 1 is a block diagram illustrating an example system for processing features in images using a circular buffer
- FIG. 2 is a block diagram illustrating an example apparatus for detecting, tracking, and sorting features in images using a circular buffer
- FIG. 3 is a block diagram illustrating an example apparatus including a group of data producers and consumers working using a circular buffer
- FIG. 4 is a block diagram illustrating an example implementation of an apparatus for detecting and tracking features in images using a circular buffer in a shared L2 SRAM;
- FIG. 5 is a block diagram illustrating an example system for detecting and sorting features in images using a circular buffer
- FIG. 6 is a sequence diagram illustrating a sequence of operation between an example feature detector and example feature sorter
- FIG. 7 is a timing diagram of an example operation between an example feature detector and example feature sorter
- FIG. 8 is a flow chart illustrating a method for detecting and tracking features in images using a circular buffer
- FIG. 9 is block diagram illustrating an example computing device that can detect, track, and sort features in images using a circular buffer.
- FIG. 10 is a block diagram showing computer readable media that store code for detecting, tracking, and sorting features in images using a circular buffer.
- one or more image sensors or cameras may be used to capture a three dimensional (3D) world around a user as temporal streams of images.
- the images may then be processed frame-by-frame to detect image features within each of the frames and then track the features in subsequent frames.
- the tracked features can be used to simultaneously estimate a 3D depth of a captured scene and six degrees of freedom (6DOF) pose of the camera using multi-view geometry and Kalman filters.
- such techniques may use a large on-chip SRAM buffer to store full image frames.
- the present disclosure relates generally to techniques for detecting and tracking features in images.
- the techniques described herein include an apparatus, method and system for detecting and tracking features in images using a circular buffer.
- a circular buffer refers to a data structure that uses a single, fixed-size buffer as if it were connected end-to-end. For example, data may be written over previous data in the circular buffer as new data is received at the circular buffer.
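The wraparound behavior described above can be sketched as a fixed-size line buffer in which a write beyond the buffer depth overwrites the oldest stored line. The class and method names below are illustrative, not from the patent:

```python
class CircularLineBuffer:
    """Fixed-size buffer of image lines; new lines overwrite the oldest."""

    def __init__(self, depth):
        self.depth = depth          # N: number of lines the buffer holds
        self.lines = [None] * depth
        self.count = 0              # total lines ever written

    def write_line(self, line):
        # The write position wraps around, so line N overwrites line 0.
        self.lines[self.count % self.depth] = line
        self.count += 1

    def read_line(self, index):
        # Only the most recent `depth` lines are still available.
        assert self.count - self.depth <= index < self.count, "line overwritten"
        return self.lines[index % self.depth]
```

For example, with a depth of 3, writing lines 0 through 4 leaves only lines 2, 3, and 4 readable; lines 0 and 1 have been overwritten.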
- the features can include corner points. Corner points, as used herein, refer to points in an image with a change in image intensity in one or more directions.
- An example apparatus includes an image data receiver to receive initial image data corresponding to an image from a camera and store the image data in a circular buffer.
- the apparatus includes a feature detector to detect features in the image data.
- the apparatus further includes a feature sorter to sort the detected features to generate sorted feature points.
- the apparatus also includes a feature tracker to track the sorted feature points in subsequent image data corresponding to the image received at the image data receiver.
- the techniques described herein thus enable processing, including feature detection and feature tracking, of an image frame to start in parallel with an image frame transmission.
- the frame transmission may be over a MIPI CSIx interface.
- This parallel processing may result in a significant reduction of end-result latency in detecting image features and feature tracks, and consequently in estimating pose.
- the techniques may enable on-the-fly selection of a predetermined number k of top feature points among all features detected over the entire image, in parallel with feature detection, without requiring large on-chip storage for intermediate detected features or their descriptors.
- the techniques described herein may be used to create embedded imaging or vision processing SoCs that can reduce overall system latency of image feature detection/tracking based tasks and reduce ASIC and SoC costs by eliminating the use of large on-chip SRAM or completely eliminating the need for external SDRAM.
- the techniques described herein may eliminate the need for storing an incoming image in external SDRAM and subsequently fetching the entire image.
- the SoCs may be used for sensor processing in augmented reality (AR)/virtual reality (VR) applications, such as 6DOF head pose estimation.
- FIG. 1 is a block diagram illustrating an example system for processing features in images using a circular buffer.
- the example system is referred to generally by the reference number 100 and can be implemented in the computing device 900 below in FIG. 9 using the method 800 of FIG. 8 below.
- the example system 100 includes a computing device 102 that is communicatively coupled to one or more cameras 104 .
- the cameras 104 can be integrated into the computing device 102 as shown in FIG. 1 , or external to the computing device 102 .
- the computing device 102 also includes an image data receiver 106 to receive one or more images from the cameras 104 .
- the computing device 102 includes an image signal processor (ISP) 108 communicatively coupled to the image data receiver 106 .
- the computing device 102 further includes a system-on-chip (SoC) Interconnect 110 communicatively coupled to the ISP 108 .
- the computing device 102 also includes an inertial measurement unit (IMU) 112 , an external host processor 114 , and a host CPU 116 , all communicatively coupled to the SoC Interconnect 110 .
- the computing device 102 also includes a clock, reset, and interrupt module 118 that is communicatively coupled to a Visual Inertial SLAM Hardware Accelerator (VIS HWA) 120 .
- the VIS HWA 120 includes an L1 SRAM 122 and is communicatively coupled to the SoC Interconnect 110 .
- the computing device 102 further includes an L2 SRAM 124 .
- the L2 SRAM 124 may be on-chip memory rather than off-chip memory.
- image data corresponding to a portion of an image may be received from the one or more cameras 104 at the ISP 108 via the image data receiver 106 .
- the ISP 108 can process the image data and send the processed image data to the VIS HWA 120 via the SoC Interconnect 110 .
- the ISP 108 can perform sensor color format conversion, color space conversion, and noise reduction, among other possible image processing.
- the VIS HWA 120 can then detect one or more features in the image data and send the image data and the detected features to the L2 SRAM via the SoC Interconnect 110 as indicated by another arrow 128 .
- the L2 SRAM 124 may thus be used to store image data temporarily.
- the L2 SRAM 124 may store a limited number of lines of an image at a time.
- a subsequent line of the image may be received and written over the previous line that was stored in the L2 SRAM 124 .
- no external or off-chip SDRAM may be used to store the entire image, resulting in a more efficient design in addition to lower latency, as images can be processed in parallel or "on-the-fly" as they are received, as described in greater detail below.
- FIG. 1 The diagram of FIG. 1 is not intended to indicate that the example system 100 is to include all of the components shown in FIG. 1 . Rather, the example system 100 can be implemented using fewer or additional components not illustrated in FIG. 1 (e.g., additional feature trackers, feature detectors, feature processors, data masters, memory, etc.).
- FIG. 2 is a block diagram illustrating an example apparatus for detecting, tracking, and sorting features in images using a circular buffer.
- the example apparatus is generally referred to by the reference number 200 and can be implemented in the system 100 above or the computing device 900 below.
- the apparatus 200 can be implemented in the VIS HWA 120 of the system 100 of FIG. 1 , the image processor 928 of the computing device 900 of FIG. 9 below, or using the computer readable media 1000 of FIG. 10 below.
- FIG. 2 shows an input clock and reset data 202 into a VIS HWA 120 .
- the VIS HWA 120 is shown outputting interrupts 204 .
- the VIS HWA 120 is further communicatively coupled to a configuration slave 206 and a data slave 208 .
- the VIS HWA 120 is also communicatively coupled to one or more data masters, including for example, data masters 210 A, 210 B, and 210 C.
- the VIS HWA 120 includes control and status registers 212 that are communicatively coupled to the configuration slave 206 .
- the VIS HWA 120 also includes a circular buffer manager 214 that is communicatively coupled to the control and status registers 212 and the data slave 208 .
- the VIS HWA 120 also further includes a feature tracker 216 , a feature detector 218 , and a feature sorter 220 , that are each communicatively coupled to the control and status registers 212 and the circular buffer manager 214 .
- the VIS HWA 120 further includes an L1 SRAM 122 communicatively coupled to the feature sorter 220 .
- the VIS HWA 120 also further includes an Interconnect 222 communicatively coupled to the data masters (for example, data master 210 A, data master 210 B, data master 210 C), the circular buffer manager 214 , the feature tracker 216 , the feature detector 218 , and the feature sorter 220 .
- the VIS HWA 120 can provide slave and master interfaces for image/data access and a programming and configuration interface to an external host CPU (not shown).
- the configuration slave 206 may provide configuration data to be stored in the control and status registers.
- the configuration slave 206 may provide configuration data via an advanced peripheral bus (APB).
- the configuration data may be in 32-bit format.
- the circular buffer manager 214 of the VIS HWA 120 can manage storage and consumption of image data.
- the circular buffer manager 214 can retrieve data from data slave 208 to be processed by the feature tracker 216 , the feature detector 218 , and the feature sorter 220 .
- the circular buffer manager 214 can retrieve one line of an image at a time from the data slave 208 and store the line in one or more data masters 210 A, 210 B, 210 C. The circular buffer manager 214 can then retrieve an additional line of an image in response to detecting that the previous line has been processed.
- a limited amount of on-chip storage can be used to provide temporary storage of streaming input image data for consumption by the feature tracker 216 , the feature detector 218 , and the feature sorter 220 .
- the operation of the circular buffer manager is discussed in detail below with respect to FIGS. 3 and 4 .
- the feature tracker 216 can perform feature tracking.
- feature tracking may include matching corresponding image features.
- the feature tracker 216 can match image features detected by the feature detector 218 in previous image data with image features in image data currently stored in a circular buffer.
- the feature tracker 216 can track sorted feature points sorted by the feature sorter 220 as described below. For example, the sorted feature points may be a subset of the image features.
- the feature detector 218 can perform feature detection and descriptor computation.
- the feature detector 218 can detect FAST9 features in image data.
- In FAST9 feature detection, an image pixel can be detected as a corner in response to detecting that at least 9 consecutive image pixels along a Bresenham Circle of radius 3 around it are all brighter or darker than the pixel by more than a predetermined threshold.
- the Bresenham Circle of radius 3 may be the ring of 16 pixels on the periphery of a 7⁇7 pixel grid centered on the pixel.
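The FAST9 rule can be sketched in plain Python as below. This is an illustration of the test as described, not code from the patent; real implementations add optimizations such as the high-speed partial test, and the function and variable names are assumptions:

```python
# Offsets of the 16 pixels on a Bresenham circle of radius 3 around a pixel.
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_fast9_corner(img, x, y, threshold):
    """Return True if at least 9 consecutive circle pixels are all brighter
    or all darker than the center pixel by more than `threshold`.
    `img` is a 2D list indexed img[row][col]."""
    center = img[y][x]
    # Classify each circle pixel: +1 brighter, -1 darker, 0 similar.
    signs = []
    for dx, dy in CIRCLE:
        p = img[y + dy][x + dx]
        if p > center + threshold:
            signs.append(1)
        elif p < center - threshold:
            signs.append(-1)
        else:
            signs.append(0)
    # Look for a run of >= 9 equal nonzero signs; the circle wraps around,
    # so scan the doubled sequence.
    run, prev = 0, 0
    for s in signs + signs:
        if s != 0 and s == prev:
            run += 1
        else:
            run = 1 if s != 0 else 0
            prev = s
        if run >= 9:
            return True
    return False
```

A pixel with 10 consecutive brighter circle neighbors passes the test; one with only 8 does not.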
- the feature sorter 220 can sort the detected features to generate sorted feature points.
- the sorted feature points may be a pruned list of features.
- the feature sorter 220 can prune detected FAST9 features based on their strength.
- the feature sorter 220 can perform dynamic heap based sorting to produce sorted feature points and select a few top points to implement pruning in sync with on-the-fly feature detection, as described in greater detail with respect to FIGS. 5-7 below.
- the Interconnect 222 can perform arbitration and serialization of image data and other data traffic.
- the Interconnect 222 can connect the feature tracker 216 , the feature detector 218 , and the feature sorter 220 and provide read and write interfaces.
- the feature tracker 216 may use three read interfaces and one write interface
- the feature detector may use two read and two write interfaces
- the feature sorter 220 may use two read interfaces and one write interface
- the circular buffer manager 214 may use one write interface.
- the apparatus 200 may thus enable on-the-fly feature detection, tracking, and sorting.
- the feature tracker 216 , the feature detector 218 , and the feature sorter 220 can use the image data in the circular buffer storage during the time the line of image data is available and finish processing the data by the time it is overwritten with additional image data.
- FIG. 2 The diagram of FIG. 2 is not intended to indicate that the example apparatus 200 is to include all of the components shown in FIG. 2 . Rather, the example apparatus 200 can be implemented using fewer or additional components not illustrated in FIG. 2 (e.g., additional feature trackers, feature detectors, feature sorters, memory, data masters, etc.).
- FIG. 3 is a block diagram illustrating an example apparatus including a group of data producers and consumers working using a circular buffer.
- the example apparatus is generally referred to by the reference number 300 and can be implemented in the computing device 900 below.
- the apparatus 300 can be implemented in the VIS HWA 120 of the system 100 of FIG. 1 , the image processor 928 of the computing device 900 of FIG. 9 below, or using the computer readable media 1000 of FIG. 10 below.
- FIG. 3 shows a circular buffer manager 214 communicatively coupled to a producer 302 , a data buffer 304 , and a number of consumers 306 A, 306 B, and 306 C.
- the data buffer 304 may be a circular buffer.
- the consumers 306 A, 306 B, 306 C may be feature detectors, feature trackers or feature sorters, among other image data processors.
- the circular buffer manager 214 includes a consumption history 308 .
- the circular buffer manager 214 also includes a stream write controller 310 communicatively coupled to the producer 302 and the consumption history 308 .
- the circular buffer manager 214 further includes a stream read controller 312 communicatively coupled to the consumption history 308 , the stream write controller 310 and the consumers 306 A, 306 B, and 306 C.
- the circular buffer manager 214 can be used to implement a scheme for managing an input data stream of a single producer 302 and multiple consumers 306 A, 306 B, and 306 C.
- the circular buffer manager 214 can maintain a data buffer 304 .
- the data buffer 304 may be an N-Image-Line deep circular buffer of storage of incoming streaming image data.
- the data buffer 304 can be used to implement a configurable N-image-line deep sliding window of an image frame.
- the circular buffer manager 214 can thus keep track of production and consumption rates and synchronize data buffer availability for the consumers.
- the data buffer availability may indicate readiness of a data processing task.
- the consumption rate of each of the consumers 306 A, 306 B, and 306 C may be both variable and different with respect to each other and that of the producer 302 .
- the synchronization may not use a simple first-in first-out (FIFO) scheme.
- the circular buffer manager 214 can employ a voting scheme using consumption rate information from each of the consumers 306 A, 306 B, and 306 C, stored in the consumption history 308 to determine when all consumers have completed consumption of a particular data buffer entry. The circular buffer manager 214 may then cause the data buffer 304 to be populated with a subsequent image line to be processed.
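A minimal sketch of such a voting scheme, assuming a simple per-consumer progress counter (class and method names are illustrative, not from the patent):

```python
class CircularBufferManager:
    """One producer, several consumers, over a buffer `depth` lines deep.

    A buffer entry is recycled only after every consumer has voted
    (reported) that it finished consuming that entry.
    """

    def __init__(self, depth, num_consumers):
        self.depth = depth
        self.produced = 0                      # lines written so far
        self.consumed = [0] * num_consumers    # per-consumer progress

    def can_produce(self):
        # The slowest consumer bounds how far ahead the producer may write.
        return self.produced - min(self.consumed) < self.depth

    def produce_line(self):
        assert self.can_produce(), "buffer full: a consumer is lagging"
        self.produced += 1

    def consume_lines(self, consumer, count):
        # A consumer votes that it has finished `count` more lines.
        assert self.consumed[consumer] + count <= self.produced
        self.consumed[consumer] += count
```

The producer stalls until the slowest consumer catches up, which is why a plain FIFO is insufficient when consumption rates differ per consumer.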
- FIG. 3 is not intended to indicate that the example apparatus 300 is to include all of the components shown in FIG. 3 . Rather, the example apparatus 300 can be implemented using fewer or additional components not illustrated in FIG. 3 (e.g., additional producers, consumers, data buffers, etc.).
- FIG. 4 is a block diagram illustrating an example implementation of an apparatus for detecting and tracking features in images using a circular buffer in a shared L2 SRAM.
- the example apparatus is generally referred to by the reference number 400 and can be implemented in the computing device 900 below.
- the apparatus 400 can be implemented using the VIS HWA 120 and the L2 SRAM 124 of the system 100 of FIG. 1 , the image processor 928 of the computing device 900 of FIG. 9 below, or using the processor 1002 and computer readable media 1000 of FIG. 10 below.
- the apparatus 400 of FIG. 4 includes similarly numbered elements from FIGS. 1, 2, and 3 .
- the apparatus 400 includes an image data interface 402 communicatively coupled to the stream write controller 310 of the circular buffer manager 214 .
- the image data interface may be a Mobile Industry Processor Interface (MIPI) standard-compliant Camera Serial Interface (CSIx) or an Image Signal Processor (ISP) Direct Memory Access (DMA) interface.
- the Image Data Interface 402 may be a MIPI CSIx interface.
- the stream read controller 312 of the circular buffer manager 214 is communicatively coupled to a number of consumers including a feature detector 218 and feature trackers 216 A and 216 B, all of which are included in a VIS HWA 120 . In some examples, any number of additional feature trackers may be included.
- the data buffer 304 is included inside a shared L2 SRAM 124 .
- the data buffer 304 may be a circular buffer.
- the circular buffer manager 214 can implement an image pixel data buffer in a shared L2 SRAM 124 and synchronize the consumption of one or more instances of the feature detector 218 and feature trackers 216 A and 216 B that consume and process the image pixels.
- the producer of the image pixel data may be an image sensor connected via an image data interface 402 .
- the image data interface 402 may be a chip-level interface or an on-chip Image Signal Processor (ISP) DMA.
- An example chip-level interface is the Mobile Industry Processor Interface (MIPI) Camera Serial Interface (CSI)-2SM interface (version 2.0 released in March 2017) or the MIPI CSI-3SM (version 1.1 released in March 2014).
- the feature detector 218 can perform FAST9 feature detection. For example, an image pixel can be detected as a corner in response to detecting that at least 9 consecutive image pixels along the periphery of a Bresenham Circle of radius 3 are all brighter or darker than the pixel by more than a predetermined threshold.
- the feature trackers 216 A, 216 B can also perform a best pixel correspondence search using Normalized Cross-Correlation (NCC) of pixel patches around candidate points for feature tracking.
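Normalized cross-correlation of two equal-sized pixel patches can be sketched as below; this is the generic NCC formula, not code from the patent:

```python
def ncc(patch_a, patch_b):
    """Normalized cross-correlation of two equal-sized patches (flat lists).

    Returns a value in [-1, 1]; 1 means a perfect match up to an affine
    change in brightness and contrast, which is why NCC is robust for
    comparing candidate patches across frames.
    """
    n = len(patch_a)
    mean_a = sum(patch_a) / n
    mean_b = sum(patch_b) / n
    da = [a - mean_a for a in patch_a]
    db = [b - mean_b for b in patch_b]
    num = sum(x * y for x, y in zip(da, db))
    den = (sum(x * x for x in da) * sum(y * y for y in db)) ** 0.5
    return num / den if den else 0.0
```

For example, a patch and a brighter, higher-contrast copy of it score 1.0, while a patch and its intensity-reversed counterpart score -1.0.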
- the processing sequence and data access pattern of the feature detector 218 and feature trackers 216 A and 216 B may use a number α of consecutive image lines of an image frame to process β image lines.
- α consecutive image lines of an image frame may be available to process β image lines, where β≤α.
- α and β can be different for each consumer.
- each of the feature detector 218 and feature trackers 216 A and 216 B may use different values of α and β.
- the feature detector 218 may have an α value of 32 and a β value of 26.
- the feature trackers 216 A and 216 B may, for example, have α values of 17 and 16 and β values of 1 and 10, respectively.
- the circular buffer manager 214 can be configured with a depth N that is higher than the largest α value of the consumers. For example, once enough new image lines are available in the data buffer 304 to cover a consumer's window of αi lines, the circular buffer manager 214 can enable the corresponding consumer to process the next βi lines of an image. In some examples, the corresponding consumer may respond with a "processing done" indication back to the circular buffer manager 214 . For example, the "processing done" message may indicate that βi lines have been consumed by the corresponding consumer. In some examples, the "processing done" message can be used in the circular buffer manager 214 to manage the number of unprocessed new lines of image available in the data buffer 304 .
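This line-window scheduling can be simulated as follows. The sketch assumes a consumer may run whenever a full window of α lines (starting at its current position) fits within the lines produced so far, advancing by β lines per step; the function name and structure are illustrative, not from the patent:

```python
def schedule_steps(total_lines, consumers):
    """Simulate alpha/beta line scheduling for one image frame.

    `consumers` maps a name to (alpha, beta): a consumer needs `alpha`
    consecutive lines available to process its next `beta` lines.
    Returns how many lines each consumer processed as lines stream in.
    """
    processed = {name: 0 for name in consumers}
    for available in range(1, total_lines + 1):   # producer writes line by line
        for name, (alpha, beta) in consumers.items():
            # Run while the consumer's window [processed, processed + alpha)
            # lies fully inside the lines produced so far.
            while processed[name] + alpha <= available:
                processed[name] += beta
    return processed
```

With a 480-line frame, a detector with α=32, β=26 finishes 468 lines (the remainder is the unavoidable window border), while a tracker with α=16, β=10 finishes 470.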
- N image lines worth of storage in the on-chip SRAM 124 may thus be used to process a full image frame.
- feature detection and feature tracking of an image frame can start in parallel with image frame transmission, which may result in a significant reduction of end-result latency. For example, latency may be reduced for detecting image features and feature tracks, and consequently for pose estimation in a Visual Inertial Odometry (VIO) application and any other result that relies on image feature detection or feature tracking.
- FIG. 4 is not intended to indicate that the example apparatus 400 is to include all of the components shown in FIG. 4 . Rather, the example apparatus 400 can be implemented using fewer or additional components not illustrated in FIG. 4 (e.g., additional feature trackers, data buffers, memory, etc.).
- FIG. 5 is a block diagram illustrating an example system for detecting and sorting features in images using a circular buffer.
- the example system is generally referred to by the reference number 500 and can be implemented in the computing device 900 below.
- the system 500 can be implemented in the system 100 of FIG. 1 , the image processor 928 of the computing device 900 of FIG. 9 below, or using the processor 1002 and the computer readable media 1000 of FIG. 10 below.
- the system 500 of FIG. 5 includes a feature detector 218 communicatively coupled to a feature sorter 220 and a shared L2 SRAM 124 .
- the feature detector 218 includes a detection and scoring array 502 communicatively coupled to a masking logic 504 and a sliding window logic 506 .
- the feature detector 218 also includes a control logic 508 , an output packing logic 510 , and a patch extracting logic 512 , communicatively coupled to the detection and scoring array 502 .
- the feature detector 218 also further includes an interface 516 communicatively coupled to the shared L2 SRAM 124 .
- the feature sorter 220 includes a control logic 518 communicatively coupled to the control logic 508 of the feature detector 218 .
- the feature sorter 220 also includes a histogram update logic 520 that is communicatively coupled to an interface 522 .
- the interface 522 is communicatively coupled to the interface 514 of the feature detector 218 and an input corner memory 524 .
- the feature sorter 220 also includes a histogram memory 526 that is communicatively coupled to the histogram update logic 520 .
- the feature sorter 220 also includes a sorted corner table (SCT) memory 528 communicatively coupled to a sorted feature table update logic 530 and the interface 522 .
- the input corner memory 524 , the histogram memory 526 , and the SCT memory 528 may be implemented in an L1 SRAM, such as the L1 SRAM described above in FIGS. 1 and 2 .
- the feature sorter 220 further includes a patch copy and output packing logic 532 that is communicatively coupled to the histogram update logic 520 , the sorted feature table update logic 530 , and the shared L2 SRAM 124 .
- the shared L2 SRAM 124 includes a stored mask 534 that can be retrieved by the masking logic 504 .
- the shared L2 SRAM 124 also includes a circular buffer image data 536 that can be retrieved by the sliding window 506 .
- the shared L2 SRAM 124 also includes a sorter input corner patch list 538 that can be stored from the interface 516 and retrieved by the patch copy and output packing logic 532 .
- the shared L2 SRAM 124 also further includes a detected corner list 540 that can be received from the interface 514 and retrieved by the interface 522 .
- the shared L2 SRAM 124 also further includes a sorted corner table (SCT) patch list 542 that can be received from the patch copy and output packing logic 532 and retrieved by the patch copy and output packing logic 532 .
- the shared L2 SRAM 124 further includes a detector sorted corner patch list 544 that may be received from the patch copy and output packing logic 532 .
- the feature sorter 220 of system 500 can sort a batch of detected image features using a heap sort mechanism to enable a predetermined number n of top features to be selected.
- the selected predetermined number n of top sorted features are also referred to herein as sorted feature points.
- the feature sorter 220 can process each batch of detected image features on-the-fly in parallel with feature detection by the feature detector 218 and rearrange a heap to keep a predetermined number of features and corresponding descriptors using a small shared storage.
- the storage may be a shared L2 SRAM storage 124 .
- the top number of features may be selected based on a predetermined characteristic. For example, in the example of FIG. 5 , the top number of features is selected based on corner strength.
- Corner strength refers to a measure of cornerness or goodness of the feature. Cornerness, as used herein, refers to a change in image intensity in one or more directions at the feature. Goodness, as used herein, refers to how suitable a corner is as a candidate for reliable tracking.
- the predetermined top number of features may then be used for tracking in a subsequent frame using a feature tracker (not shown).
- both the number of features in a batch W and the top number of features to be selected K may be configurable/programmable quantities, making it possible to select the top K feature points out of all features detected over the entire image. For example, the values of K and W may vary within a maximum value chosen as a design-time parameter.
- the feature detector 218 can detect image features on-the-fly using a circular buffer manager scheme, as described above with respect to FIGS. 2-4 .
- the masking logic 504 of the feature detector 218 may receive a mask 534 as indicated by an arrow 546 .
- the sliding window 506 of the feature detector 218 may receive circular buffer image data 536 in which to detect one or more images features as indicated by arrow 548 .
- the detection and scoring array 502 can use the mask data 534 to detect one or more image features only within certain sub-regions of the image data received at the sliding window 506 .
- the feature detector 218 can stream out detected feature points to the feature sorter 220 over the interface A 514 .
- the feature points may include pixel co-ordinates (X, Y) and an integer score S.
- the feature points may be streamed out from the output packing logic 510 of the feature detector 218 via the interface 514 to the input corner memory 524 and the interface 522 of the feature sorter, as indicated by arrows 552 and 554 , respectively.
- the feature detector 218 can also write out detected feature descriptors to the Sorter Input Corner Patch List 538 memory portion in the Shared L2 SRAM 124 via the interface B 516 as indicated by an arrow 550 .
- the patch extractor 512 can extract one or more feature descriptors from the image data in the sliding window 506 and send the one or more extracted feature descriptors to the interface 516 .
- a feature descriptor may be an M ⁇ M pixel patch centered at a pixel coordinate (X, Y).
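Extracting such an M⁇M patch descriptor can be sketched as below (an assumption-level illustration; the function name is not from the patent, and M is assumed odd so the patch centers exactly on the pixel):

```python
def extract_patch(img, x, y, m):
    """Extract an m x m pixel patch centered at (x, y) as the feature's
    descriptor. `img` is indexed img[row][col]; the caller must ensure
    the patch lies fully inside the image."""
    r = m // 2
    return [row[x - r:x + r + 1] for row in img[y - r:y + r + 1]]
```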
- the feature sorter 220 can populate received detected feature points into a histogram based on their score Si.
- the histogram update logic 520 may receive the feature points via the interface 522 and store the populated histogram in histogram memory 526 .
- the histogram may contain 2^γ bins, where γ is the bit-width of the score S.
- the feature sorter 220 can store the incoming feature points, including pixel co-ordinates X,Y and the integer score S, into the input corner memory 524 as indicated by arrow 552 .
- the input corner memory 524 may be a local buffer in on-chip L1 SRAM.
- the feature detector 218 can trigger an intermediate sorting job on the feature sorter 220 .
- the feature detector 218 can trigger the intermediate sorting job in response to detecting that a threshold number W of detected features has been exceeded.
- the control logic 508 may send a SortingStart trigger to the control logic 518 of the feature sorter 220 , as described below in FIG. 6 .
- the feature sorter 220 can traverse down the histogram beginning from the largest-valued bin and compute a cumulative histogram. The feature sorter 220 can then determine the bin index θk at which the cumulative count crosses K. The feature sorter 220 can then read the saved feature points from the input corner memory 524, as indicated by arrow 558, and select the feature points whose score Si falls above the bin index θk. The feature sorter 220 can then temporarily store the selected feature points in the SCT memory 528.
- the SCT memory 528 may be located in an on-chip L1 SRAM.
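A minimal sketch of this intermediate sort: walk the histogram downward from the largest-valued bin, accumulate counts until the cumulative total crosses K, then keep the points scoring at or above the resulting bin index θk (the at-or-above comparison is an assumption; the text says only "falling above").

```python
def find_threshold_bin(histogram, k):
    """Traverse the histogram from the largest-valued bin downward,
    computing a cumulative count; return the bin index theta_k at which
    the cumulative histogram crosses k."""
    cumulative = 0
    for bin_index in range(len(histogram) - 1, -1, -1):
        cumulative += histogram[bin_index]
        if cumulative >= k:
            return bin_index
    return 0

def select_top_points(feature_points, theta_k):
    """Keep the feature points whose score falls at or above theta_k."""
    return [p for p in feature_points if p[2] >= theta_k]
```

Note that the selection may keep slightly more than K points when several points share the threshold score; the final ordering pass resolves the exact top-K list.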
- the feature sorter 220 can signal completion of the intermediate sorting job back to the feature detector 218 .
- the control logic 518 can send a SortingDone trigger to the control logic 508 .
- the feature detector 218 can progress further to complete feature detection over the entire image. For example, the feature detector can schedule a new intermediate sorting job on the feature sorter 220 in response to detecting that a threshold number W of new detected feature points has been exceeded.
- the sorted corner table in the SCT memory 528 can also be traversed to find a corner entry that may fall below the updated θk value applicable to the processing of the current batch.
- the sorted feature table update can evict this corner entry from the SCT memory 528 and write the next new feature point from the input corner memory 524 in its place. For example, the next new feature point may be one that qualifies by scoring above the current θk.
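The eviction step can be sketched as follows; the function name, tuple layout, and capacity parameter are illustrative rather than the hardware design.

```python
def update_sct(sct, new_points, theta_k, capacity):
    """Sorted-corner-table update sketch: entries whose score now falls
    below the updated theta_k are evicted, and qualifying new points from
    the input corner memory are written into the freed slots."""
    kept = [p for p in sct if p[2] >= theta_k]   # evict stale entries
    for p in new_points:
        if len(kept) >= capacity:
            break
        if p[2] >= theta_k:                      # qualifying new point
            kept.append(p)
    return kept
```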
- the feature detector 218 can trigger a final job of ordering the temporarily stored feature points in descending order of their scores. For example, the feature detector 218 can send an OrderingStart trigger via the control logic 508 to the control logic 518 of the feature sorter 220, as described below.
- the content of the sorted corner table in the SCT memory 528 may not be sorted in any ascending or descending order of corner strength. For example, the sorted corner table may just maintain the top K features/corners, which may not necessarily be in any order.
- the feature sorter 220 in response to detecting the OrderingStart trigger, can read the temporarily stored feature points in the SCT memory 528 as indicated by arrow 564 , and corresponding descriptors in the SCT patch list 542 as indicated by arrow 562 .
- the feature sorter 220 can sort the top K feature points by traversing the latest histogram downwards from the largest-valued bin to find the number of feature points ηi corresponding to each bin up to the index θk.
- the feature sorter 220 can use the values ηi to determine the address offset of the feature points having the same score as a given index/bin.
- the unordered feature points and their descriptors can be saved temporarily in the SCT memory 528 and the SCT patch list 542, respectively, and can then be read sequentially and written out at the appropriate addresses, thereby producing an ordered list in the final result memory, referred to herein as the detector sorted corner patch table memory 544, in the shared L2 SRAM 124.
- the ordered set of top K image feature points and their descriptors may thus be saved to the detector sorted corner patch list 544 .
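This final ordering pass amounts to a counting-sort write-out. In the sketch below the per-bin counts (the ηi values) give the write offset for each score, and the unordered points are scattered to those offsets to produce a descending-score list; the assumption is that the histogram was built from the same points being ordered.

```python
def order_by_score(points, histogram, theta_k):
    """Counting-sort-style ordering: per-bin counts determine the write
    offset for points sharing a score; scattering each point to its offset
    yields a list in descending score order."""
    offsets, next_offset = {}, 0
    for s in range(len(histogram) - 1, theta_k - 1, -1):
        offsets[s] = next_offset     # first slot for this score bin
        next_offset += histogram[s]
    ordered = [None] * next_offset
    for p in points:
        ordered[offsets[p[2]]] = p   # write point at its bin's next slot
        offsets[p[2]] += 1
    return ordered
```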
- the feature sorter 220 can re-write the final ordered output of the top K feature points in a packed format into the detector sorted corner patch list 544, as indicated by an arrow 566.
- the packed format may include a feature descriptor followed by the feature point pixel co-ordinates (X, Y) and the score S.
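One possible encoding of such a record is sketched below. The 16-bit co-ordinate and 8-bit score field widths, and the 8×8 patch size, are assumptions for illustration, not the patent's layout.

```python
import struct

PATCH = 8  # assumed M for the M x M descriptor patch

def pack_feature(descriptor, x, y, score):
    """Pack one output record: descriptor bytes first, then the (X, Y)
    pixel co-ordinates and the integer score S (assumed field widths)."""
    assert len(descriptor) == PATCH * PATCH
    return bytes(descriptor) + struct.pack('<HHB', x, y, score)
```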
- the feature sorter 220 can also save all detected feature points and their descriptors from the feature detector 218 for further processing in the host.
- sorting may be performed on-the-fly using the detected features available at the time. Therefore, the sorting process may be performed in parallel with feature detection, thus reducing latency. Moreover, a small amount of on-chip SRAM may be used, since not all features and descriptors detected over the entire image may need to be saved at any point in time. The use of smaller amounts of on-chip memory may result in increased efficiency in terms of both area and power cost. Moreover, the number of feature points that are to be extracted can be configurable or programmable. Thus, the system 500 can be adaptable to a particular application.
- The diagram of FIG. 5 is not intended to indicate that the example system 500 is to include all of the components shown in FIG. 5. Rather, the example system 500 can be implemented using fewer or additional components not illustrated in FIG. 5 (e.g., additional shared memory, feature processing components, etc.).
- FIG. 6 is a sequence diagram illustrating a sequence of operation between an example feature detector and example feature sorter.
- the example sequence diagram is generally referred to by the reference number 600 and can be implemented in the system 500 above or the computing device 900 below.
- the sequence diagram 600 includes a configuration map memory register (MMR) 602 communicatively coupled to a feature detector 218 .
- the feature detector 218 is also coupled to a feature sorter 220.
- the configuration MMR 602 sends a DetectorStart message to the feature detector 218 .
- the DetectorStart message may be used to start a feature detection operation.
- the configuration MMR 602 receives a DetectorDone message from the feature detector 218 .
- the feature detector 218 may perform feature detection and cause feature sorting to be performed as described below.
- the feature detector 218 may then send the DetectorDone message in response to detecting that the feature detection and sorting is completed.
- the feature detector 218 sends a SortingInit message to the feature sorter 220 .
- the SortingInit message may be sent to the feature sorter 220 to initialize a feature sorting process.
- the feature detector 218 sends a SortingStart message to the feature sorter 220 .
- the SortingStart message may be sent to the feature sorter 220 to start an intermediate sorting job.
- the feature detector 218 may send the SortingStart message in response to detecting that a number of features exceeds a threshold number.
- the threshold number may be a configurable number of features.
- the feature sorter 220 sends a SortingDone message to the feature detector 218 .
- the feature sorter 220 may send the SortingDone message to the feature detector 218 in response to detecting that the intermediate sorting job has completed.
- the feature detector 218 sends an OrderingStart message to the feature sorter 220 .
- the feature detector 218 may send the OrderingStart message to the feature sorter 220 in response to detecting that an entire frame has been processed by intermediate sorting jobs.
- the OrderingStart message may be sent by the feature detector 218 to start an ordering process at the feature sorter 220 .
- the feature sorter 220 may order the temporarily stored feature points according to descending value of their scores.
- the feature sorter 220 sends an OrderingDone message to the feature detector 218 .
- the feature sorter 220 can send the OrderingDone message to the feature detector 218 in response to detecting that the ordering process has finished.
- the ordering process may include reading temporarily stored feature points in a SCT memory 528 and corresponding feature descriptors in a SCT patch list 542 memory and re-writing the final ordered output of top K feature points into final result memory in a packed format.
- the ordering process may generate an ordered set of top K image feature points and corresponding descriptors.
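The message sequence above can be summarized as a simple trace. The function below is a behavioral sketch only, assuming one intermediate sorting job per slice of W features.

```python
def handshake_trace(num_slices):
    """Return the detector/sorter message order for a frame split into
    num_slices intermediate sorting jobs, per the sequence diagram."""
    trace = ["DetectorStart", "SortingInit"]
    for _ in range(num_slices):
        trace.append("SortingStart")   # threshold W of new features exceeded
        trace.append("SortingDone")    # intermediate sorting job complete
    trace.append("OrderingStart")      # entire frame processed
    trace.append("OrderingDone")       # final descending-score ordering done
    trace.append("DetectorDone")
    return trace
```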
- This process flow diagram is not intended to indicate that the blocks of the example sequence diagram 600 are to be executed in any particular order, or that all of the blocks are to be included in every case. Further, any number of additional blocks not shown may be included within the example sequence diagram 600 , depending on the details of the specific implementation.
- FIG. 7 is a timing diagram of an example operation between an example feature detector and an example feature sorter.
- the example timing diagram is generally referred to by the reference number 700 and can be implemented in the computing device 900 below.
- the timing diagram 700 can be used to implement the system 100 of FIG. 1 above, the image processor 928 of the computing device 900 of FIG. 9 below, or the instructions in the computer readable media 1000 of FIG. 10 below.
- FIG. 7 shows a set of signals, including a reset signal 702 , a DetectorStart signal 704 , a DetectorDone signal 706 , an InputImage signal 708 , a DetectorActive signal 710 , a SorterInit signal 712 , a SortingStart signal 714 , a SortingDone signal 716 , an OrderingStart signal 718 , an OrderingDone signal 720 , and a SorterActive signal 722 .
- a sync reset may be performed between a DetectorStart signal 704 and a SorterInit signal 712 to synchronize timing between a feature detector and a feature sorter.
- the feature detector may begin to receive an input image as indicated by the InputImage signal 708 .
- a first slice 0 may be detected by a feature detector.
- the slice may include one or more detected image features.
- the slice may include a W number of features.
- the SortingStart signal 714 may initiate a sorting process in which slice 0 may be sorted by the sorting process as shown in the second block of the SorterActive signal 722 at time 724 .
- the SortingDone signal 716 may indicate that the sorting process of slice 0 is finished in response to detecting that the sorting of slice 0 is complete.
- a second slice 1 may be detected by the feature detector.
- the SortingStart signal 714 may then initiate another sorting process for slice 1, resulting in the sorting of slice 1 as shown in the SorterActive signal 722 at time 726.
- the SortingDone signal 716 may similarly indicate that the sorting process of slice 1 is finished in response to detecting that the sorting of slice 1 is complete.
- a third slice 2 may similarly be detected by the feature detector.
- the SortingStart signal 714 may then similarly initiate yet another sorting process for slice 2, resulting in the sorting of slice 2 as shown in the SorterActive signal 722 at time 728.
- the SortingDone signal 716 may again similarly indicate that the sorting process of slice 2 is finished in response to detecting that the sorting of slice 2 is complete.
- a fourth slice 3 may be detected by the feature detector.
- the SortingStart signal 714 may then similarly initiate a further sorting process for slice 3, resulting in the sorting of slice 3 as shown in the SorterActive signal 722 at time 730.
- an OrderingStart signal 718 may initiate an ordering process at the end of time 730 .
- an ordering process is initiated in the SorterActive signal 722 in response to the OrderingStart signal 718 .
- the SortingDone signal 716 may similarly indicate that the sorting process of slice 3 is finished in response to detecting that the sorting of slice 3 is complete.
- the OrderingDone signal 720 may indicate completion of the ordering process.
- the DetectorDone signal 706 may then indicate that the detection process is complete in response to detecting the spike in the OrderingDone signal 720.
- FIG. 7 is not intended to indicate that the example timing diagram 700 is to include all of the components shown in FIG. 7 . Rather, the example timing diagram 700 can be implemented using fewer or additional components not illustrated in FIG. 7 (e.g., additional signals, slices, etc.).
- FIG. 8 is a flow chart illustrating a method for detecting and tracking features in images using a circular buffer.
- the example method is generally referred to by the reference number 800 and can be implemented in the system 100 of FIG. 1 above, the processor 902 of the computing device 900 of FIG. 9 below, or the processor 1002 and computer readable media 1000 of FIG. 10 below.
- a processor receives initial image data corresponding to an image from a camera and stores the image data in a circular buffer.
- the initial image data may be a line of an image to be processed.
- the processor detects features in the image data. For example, the processor can detect a feature in a sliding window using a mask. In some examples, the processor can detect features as described in FIGS. 4 and 5 above.
- the processor sorts the detected features to generate sorted feature points. For example, the processor can perform on-the-fly dynamic heap sorting using the circular buffer. In some examples, the processor can perform an intermediate sorting job in response to detecting that a threshold number of detected features has been exceeded. For example, the processor can populate the detected features into a histogram based on score and store the detected features into an input corner memory including an on-chip L1 SRAM. The processor can also traverse down a histogram populated with detected features based on score, beginning from the largest-valued bin, and compute a cumulative histogram. The processor can further pack the features into a packed format including a feature descriptor, feature point pixel co-ordinates, and an integer score. The feature descriptor can be a pixel patch centered at the feature point pixel co-ordinates. For example, the processor can perform sorting according to the example feature sorter and sorting process described in FIGS. 5-7 above.
- the processor tracks the sorted feature points in subsequent image data corresponding to the image received at the image data receiver. For example, the processor can match the sorted feature points with image features detected in the subsequent image data. In some examples, the processor can write the subsequent image data over the initial image data in the circular buffer. For example, the process may begin again at block 802 .
- This process flow diagram is not intended to indicate that the blocks of the example process 800 are to be executed in any particular order, or that all of the blocks are to be included in every case. Further, any number of additional blocks not shown may be included within the example process 800 , depending on the details of the specific implementation.
- the processor may receive additional image data and the process may repeat at blocks 802 - 808 until all the lines of an image have been processed. Thus, the image may be completely processed line by line using blocks 802 - 808 .
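The line-by-line flow through the circular buffer can be sketched as follows; the buffer depth and the window-based detection callback are assumptions for illustration.

```python
BUFFER_LINES = 8  # assumed circular-buffer depth, in image lines

def process_lines(lines, detect):
    """Stream image rows through a circular buffer: each new row overwrites
    the oldest slot, and detection runs once a full window is resident."""
    buffer = [None] * BUFFER_LINES
    results = []
    for i, line in enumerate(lines):
        buffer[i % BUFFER_LINES] = line   # new line replaces the oldest
        if i >= BUFFER_LINES - 1:
            results.append(detect(list(buffer)))
    return results
```

Because each row overwrites the oldest slot, only BUFFER_LINES rows are resident at any time, which mirrors the small on-chip memory footprint described above.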
- the computing device 900 may be, for example, a laptop computer, desktop computer, tablet computer, mobile device, or wearable device, among others.
- the computing device 900 may be a VIO or SLAM system.
- the computing device 900 may include a central processing unit (CPU) 902 that is configured to execute stored instructions, as well as a memory device 904 that stores instructions that are executable by the CPU 902 .
- the CPU 902 may be coupled to the memory device 904 by a bus 906 .
- the CPU 902 can be a single core processor, a multi-core processor, a computing cluster, or any number of other configurations.
- the computing device 900 may include more than one CPU 902 .
- the CPU 902 may be a system-on-chip (SoC) with a multi-core processor architecture.
- the CPU 902 can be a specialized digital signal processor (DSP) used for image processing.
- the memory device 904 can include random access memory (RAM), read only memory (ROM), flash memory, or any other suitable memory systems.
- the memory device 904 may include dynamic random access memory (DRAM).
- the computing device 900 may also include a graphics processing unit (GPU) 908 .
- the CPU 902 may be coupled through the bus 906 to the GPU 908 .
- the GPU 908 may be configured to perform any number of graphics operations within the computing device 900 .
- the GPU 908 may be configured to render or manipulate graphics images, graphics frames, videos, or the like, to be displayed to a user of the computing device 900 .
- the memory device 904 may include device drivers 910 that are configured to execute the instructions for detecting, tracking, and sorting features in images using a circular buffer.
- the device drivers 910 may be software, an application program, application code, or the like.
- the CPU 902 may also be connected through the bus 906 to an input/output (I/O) device interface 912 configured to connect the computing device 900 to one or more I/O devices 914 .
- the I/O devices 914 may include, for example, a keyboard and a pointing device, wherein the pointing device may include a touchpad or a touchscreen, among others.
- the I/O devices 914 may be built-in components of the computing device 900 , or may be devices that are externally connected to the computing device 900 .
- the memory 904 may be communicatively coupled to I/O devices 914 through direct memory access (DMA).
- the CPU 902 may also be linked through the bus 906 to a display interface 916 configured to connect the computing device 900 to a display device 918 .
- the display device 918 may include a display screen that is a built-in component of the computing device 900 .
- the display device 918 may also include a computer monitor, television, or projector, among others, that is internal to or externally connected to the computing device 900 .
- the computing device 900 also includes a storage device 920 .
- the storage device 920 is a physical memory such as a hard drive, an optical drive, a thumbdrive, an array of drives, a solid-state drive, or any combinations thereof.
- the storage device 920 may also include remote storage drives.
- the computing device 900 may also include a network interface controller (NIC) 922 .
- the NIC 922 may be configured to connect the computing device 900 through the bus 906 to a network 924 .
- the network 924 may be a wide area network (WAN), local area network (LAN), or the Internet, among others.
- the device may communicate with other devices through a wireless technology.
- the device may communicate with other devices via a wireless local area network connection.
- the device may connect and communicate with other devices via Bluetooth® or similar technology.
- the computing device 900 further includes a depth camera 926 .
- the depth camera may include one or more depth sensors.
- the depth camera may include a processor to generate depth information.
- the depth camera 926 may include functionality such as RealSense™ technology.
- the computing device 900 further includes an image processor 928 .
- the image processor 928 can be used to detect, sort, and track image features in received images on-the-fly and in parallel.
- the image processor 928 can include an image data receiver 930 , a feature detector 932 , a feature sorter 934 , and a feature tracker 936 .
- each of the components 930 - 936 of the image processor 928 may be a microcontroller, embedded processor, or software module.
- the image data receiver 930 can receive image data corresponding to an image from a camera and store the image data in a circular buffer.
- the image data may be a line of an image.
- the image data receiver 930 can receive subsequent lines of an image and store each subsequent line over the previous line in the circular buffer.
- the image data receiver 930 can receive subsequent image data from a camera and replace the initial image data with the subsequent image data in the circular buffer.
- the circular buffer may be an on-chip L2 static random-access memory (SRAM) that is communicatively coupled with the feature detector, the feature tracker, and the feature sorter.
- the feature detector, the feature tracker, and the feature sorter are to process initial image data as described below before subsequent image data is stored in the circular buffer.
- the feature detector 932 can detect features in the image data.
- the feature sorter 934 can sort the detected features to generate sorted feature points.
- the sorted feature points may include an ordered set of a top number of image feature points and corresponding feature descriptors.
- the sorted feature points each may be formatted in a packed format and include a feature descriptor, feature point pixel co-ordinates, and an integer score.
- the feature descriptor can include a pixel patch centered at the feature point pixel co-ordinates.
- the feature sorter 934 can perform on-the-fly dynamic heap sorting using the circular buffer.
- the feature tracker 936 can track the sorted feature points in subsequent image data corresponding to the image received at the image data receiver. For example, the feature tracker 936 can match the detected image features with image features detected in the subsequent image data.
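As one hedged sketch of the matching step (the text does not fix a matching cost), a sum-of-absolute-differences comparison over the descriptor patches can pair tracked features with newly detected ones:

```python
def sad(patch_a, patch_b):
    """Sum of absolute differences between two equal-length pixel patches."""
    return sum(abs(a - b) for a, b in zip(patch_a, patch_b))

def match_features(tracked, candidates):
    """Pair each tracked feature (descriptor, x, y) with the candidate in
    the subsequent image data whose descriptor has the lowest SAD cost."""
    matches = []
    for desc, x, y in tracked:
        best = min(candidates, key=lambda c: sad(desc, c[0]))
        matches.append(((x, y), (best[1], best[2])))
    return matches
```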
- the block diagram of FIG. 9 is not intended to indicate that the computing device 900 is to include all of the components shown in FIG. 9 . Rather, the computing device 900 can include fewer or additional components not illustrated in FIG. 9 , such as additional buffers, additional processors, and the like.
- the computing device 900 may include any number of additional components not shown in FIG. 9 , depending on the details of the specific implementation.
- any of the functionalities of the image data receiver 930 , the feature detector 932 , the feature sorter 934 , or the feature tracker 936 may be partially, or entirely, implemented in hardware and/or in the processor 902 .
- the functionality may be implemented with an application specific integrated circuit, in logic implemented in the processor 902 , or in any other device.
- any of the functionalities of the CPU 902 may be partially, or entirely, implemented in hardware and/or in a processor.
- the functionality of the image processor 928 may be implemented with an application specific integrated circuit, in logic implemented in a processor, in logic implemented in a specialized graphics processing unit such as the GPU 908 , or in any other device.
- FIG. 10 is a block diagram showing computer readable media 1000 that store code for detecting, tracking, and sorting features in images using a circular buffer.
- the computer readable media 1000 may be accessed by a processor 1002 over a computer bus 1004 .
- the computer readable media 1000 may include code configured to direct the processor 1002 to perform the methods described herein.
- the computer readable media 1000 may be non-transitory computer readable media.
- the computer readable media 1000 may be storage media.
- an image data receiver module 1006 may be configured to receive initial image data corresponding to an image from a camera and store the image data in a circular buffer.
- a feature detector module 1008 may be configured to detect one or more features in the image data. In some examples, the feature detector module 1008 may be configured to detect the feature in a sliding window using a mask.
- a feature sorter module 1010 may be configured to sort the detected features to generate sorted feature points. In some examples, the feature sorter module 1010 may be configured to perform on-the-fly dynamic heap sorting using the circular buffer.
- the feature sorter module 1010 may be configured to perform an intermediate sorting job in response to detecting that a threshold number of detected features has been exceeded.
- the feature sorter module 1010 may be configured to traverse down a histogram populated with detected features based on score beginning from a largest-valued-bin and computing a cumulative histogram.
- the feature sorter module 1010 may be configured to pack the features into a packed format including a feature descriptor, feature point pixel co-ordinates, and an integer score.
- the feature descriptor can include a pixel patch centered at the feature point pixel co-ordinates.
- a feature tracker module 1012 may be configured to track the sorted feature points in subsequent image data corresponding to the image received at the image data receiver. In some examples, the feature tracker module 1012 may be configured to match the detected image features with image features detected in the subsequent image data. In some examples, the subsequent image data may replace the initial image data in the circular buffer. For example, the image data receiver module 1006 may be configured to write the subsequent image data over the initial image data in the circular buffer. In some examples, the image data receiver module 1006 may be configured to process additional received image data corresponding to the image until the image is completely processed.
- The block diagram of FIG. 10 is not intended to indicate that the computer readable media 1000 is to include all of the components shown in FIG. 10. Further, the computer readable media 1000 may include any number of additional components not shown in FIG. 10, depending on the details of the specific implementation.
- Example 1 is an apparatus for tracking features in image data.
- the apparatus includes an image data receiver to receive initial image data corresponding to an image from a camera and store the image data in a circular buffer.
- the apparatus also includes a feature detector to detect features in the image data.
- the apparatus further includes a feature sorter to sort the detected features to generate sorted feature points.
- the apparatus also further includes a feature tracker to track the sorted feature points in subsequent image data corresponding to the image received at the image data receiver.
- Example 2 includes the apparatus of example 1, including or excluding optional features.
- the circular buffer includes an on-chip L2 static random-access memory (SRAM) that is communicatively coupled with the feature detector, the feature tracker, and the feature sorter.
- Example 3 includes the apparatus of any one of examples 1 to 2, including or excluding optional features.
- the image data receiver is to receive the subsequent image data from a camera and replace the initial image data with the subsequent image data in the circular buffer.
- Example 4 includes the apparatus of any one of examples 1 to 3, including or excluding optional features.
- the feature detector, the feature tracker, and the feature sorter are to process the initial image data before the subsequent image data is stored in the circular buffer.
- Example 5 includes the apparatus of any one of examples 1 to 4, including or excluding optional features.
- the sorted feature points include an ordered set of a top number of image feature points and corresponding feature descriptors.
- Example 6 includes the apparatus of any one of examples 1 to 5, including or excluding optional features.
- the sorted feature points each include a packed format including a feature descriptor, feature point pixel co-ordinates, and an integer score.
- the feature descriptor includes a pixel patch centered at the feature point pixel co-ordinates.
- Example 7 includes the apparatus of any one of examples 1 to 6, including or excluding optional features.
- the feature sorter is to perform on-the-fly dynamic heap sorting using the circular buffer.
- Example 8 includes the apparatus of any one of examples 1 to 7, including or excluding optional features.
- the feature tracker is to match the detected image features with image features detected in the subsequent image data.
- Example 9 includes the apparatus of any one of examples 1 to 8, including or excluding optional features.
- the apparatus includes a circular buffer manager to maintain the circular buffer, keep track of production and consumption rates of the feature detector, the feature tracker, and the feature sorter, and synchronize data buffer availability for the feature detector, the feature tracker, and the feature sorter.
- Example 10 includes the apparatus of any one of examples 1 to 9, including or excluding optional features.
- the initial image data and subsequent image data each include a line of the image.
- Example 11 is a method for tracking features in image data.
- the method includes receiving, via a processor, initial image data corresponding to an image from a camera and storing the image data in a circular buffer.
- the method also includes detecting, via the processor, features in the image data.
- the method further includes sorting, via the processor, the detected features to generate sorted feature points.
- the method also further includes tracking, via the processor, the sorted feature points in subsequent image data corresponding to the image received at the image data receiver.
- Example 12 includes the method of example 11, including or excluding optional features.
- the method includes writing the subsequent image data over the initial image data in the circular buffer.
- Example 13 includes the method of any one of examples 11 to 12, including or excluding optional features.
- detecting the feature in the image data includes detecting the feature in a sliding window using a mask.
- Example 14 includes the method of any one of examples 11 to 13, including or excluding optional features.
- sorting the detected features includes performing on-the-fly dynamic heap sorting using the circular buffer.
- Example 15 includes the method of any one of examples 11 to 14, including or excluding optional features.
- tracking the detected features includes matching the detected image features with image features detected in the subsequent image data.
- Example 16 includes the method of any one of examples 11 to 15, including or excluding optional features.
- sorting the detected features includes populating the detected features into a histogram based on score, and storing the detected features into an input corner memory including an on-chip L1 SRAM.
- Example 17 includes the method of any one of examples 11 to 16, including or excluding optional features.
- sorting the detected features includes performing an intermediate sorting job in response to detecting that a threshold number of detected features has been exceeded.
- Example 18 includes the method of any one of examples 11 to 17, including or excluding optional features.
- sorting the detected features includes traversing down a histogram populated with detected features based on score beginning from a largest-valued-bin and computing a cumulative histogram.
- Example 19 includes the method of any one of examples 11 to 18, including or excluding optional features.
- sorting the detected features includes packing the features into a packed format including a feature descriptor, feature point pixel co-ordinates, and an integer score.
- the feature descriptor includes a pixel patch centered at the feature point pixel co-ordinates.
- Example 20 includes the method of any one of examples 11 to 19, including or excluding optional features.
- the method includes processing additional received image data corresponding to the image until the image is completely processed.
- Example 21 is at least one computer readable medium for tracking features in image data having instructions stored therein that direct a processor to receive initial image data corresponding to an image from a camera and store the image data in a circular buffer.
- the computer-readable medium also includes instructions that direct the processor to detect one or more features in the image data.
- the computer-readable medium further includes instructions that direct the processor to sort the detected features to generate sorted feature points.
- the computer-readable medium also further includes instructions that direct the processor to track the sorted feature points in subsequent image data corresponding to the image received at the image data receiver. The subsequent image data is to replace the initial image data in the circular buffer.
- Example 22 includes the computer-readable medium of example 21, including or excluding optional features.
- the computer-readable medium includes instructions to write the subsequent image data over the initial image data in the circular buffer.
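The overwrite behavior of example 22 can be illustrated with a minimal line-oriented circular buffer. The slot count and the list-of-pixels line representation are assumptions for the sketch:

```python
class LineCircularBuffer:
    """Fixed-capacity buffer of image lines: once full, each newly received
    line overwrites the oldest stored line."""
    def __init__(self, num_lines):
        self.lines = [None] * num_lines
        self.next_slot = 0

    def write_line(self, line):
        slot = self.next_slot
        self.lines[slot] = line  # subsequent data replaces initial data
        self.next_slot = (slot + 1) % len(self.lines)
        return slot

buf = LineCircularBuffer(num_lines=3)
for row in range(5):             # stream 5 image lines into 3 slots
    buf.write_line([row] * 4)
print([line[0] for line in buf.lines])  # [3, 4, 2]: rows 3 and 4 overwrote rows 0 and 1
```

Only a small window of recent lines is resident at any time, which is what allows the buffer to fit in on-chip SRAM rather than holding the whole frame.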
- Example 23 includes the computer-readable medium of any one of examples 21 to 22, including or excluding optional features.
- the computer-readable medium includes instructions to detect the feature in a sliding window using a mask.
- Example 24 includes the computer-readable medium of any one of examples 21 to 23, including or excluding optional features.
- the computer-readable medium includes instructions to perform on-the-fly dynamic heap sorting using the circular buffer.
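The on-the-fly dynamic heap sorting of example 24 could be sketched with a bounded min-heap over a streaming feature source. This is an illustrative reading rather than the patent's implementation, and the field names are assumed:

```python
import heapq

def stream_top_n(feature_stream, top_n):
    """Maintain the top-N features on the fly with a bounded min-heap, so
    selection keeps pace with detection instead of sorting at end of frame."""
    heap = []  # min-heap of (score, sequence, feature); smallest is evicted
    for seq, feat in enumerate(feature_stream):
        item = (feat["score"], seq, feat)
        if len(heap) < top_n:
            heapq.heappush(heap, item)
        elif item > heap[0]:
            heapq.heapreplace(heap, item)  # evict the current weakest feature
    return [feat for _, _, feat in sorted(heap, reverse=True)]

stream = ({"id": i, "score": s} for i, s in enumerate([5, 1, 8, 3, 9, 2]))
print([f["score"] for f in stream_top_n(stream, top_n=3)])  # [9, 8, 5]
```

Each incoming feature costs O(log N) at most, so the selection work is spread across the scan of the image rather than concentrated after it.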
- Example 25 includes the computer-readable medium of any one of examples 21 to 24, including or excluding optional features.
- the computer-readable medium includes instructions to match the detected image features with image features detected in the subsequent image data.
- Example 26 includes the computer-readable medium of any one of examples 21 to 25, including or excluding optional features.
- the computer-readable medium includes instructions to populate the detected features into a histogram based on score, and store the detected features into an input corner memory including an on-chip L1 SRAM.
- Example 27 includes the computer-readable medium of any one of examples 21 to 26, including or excluding optional features.
- the computer-readable medium includes instructions to perform an intermediate sorting job in response to detecting that a threshold number of detected features has been exceeded.
- Example 28 includes the computer-readable medium of any one of examples 21 to 27, including or excluding optional features.
- the computer-readable medium includes instructions to traverse down a histogram populated with detected features based on score, beginning from the largest-valued bin, and compute a cumulative histogram.
- Example 29 includes the computer-readable medium of any one of examples 21 to 28, including or excluding optional features.
- the computer-readable medium includes instructions to pack the features into a packed format including a feature descriptor, feature point pixel co-ordinates, and an integer score.
- the feature descriptor includes a pixel patch centered at the feature point pixel co-ordinates.
- Example 30 includes the computer-readable medium of any one of examples 21 to 29, including or excluding optional features.
- the computer-readable medium includes instructions to process additional received image data corresponding to the image until the image is completely processed.
- Example 31 is a system for tracking features in image data.
- the system includes an image data receiver to receive initial image data corresponding to an image from a camera and store the image data in a circular buffer.
- the system also includes a feature detector to detect features in the image data.
- the system further includes a feature sorter to sort the detected features to generate sorted feature points.
- the system also further includes a feature tracker to track the sorted feature points in subsequent image data corresponding to the image received at the image data receiver.
- Example 32 includes the system of example 31, including or excluding optional features.
- the circular buffer includes an on-chip L2 static random-access memory (SRAM) that is communicatively coupled with the feature detector, the feature tracker, and the feature sorter.
- Example 33 includes the system of any one of examples 31 to 32, including or excluding optional features.
- the image data receiver is to receive the subsequent image data from a camera and replace the initial image data with the subsequent image data in the circular buffer.
- Example 34 includes the system of any one of examples 31 to 33, including or excluding optional features.
- the feature detector, the feature tracker, and the feature sorter are to process the initial image data before the subsequent image data is stored in the circular buffer.
- Example 35 includes the system of any one of examples 31 to 34, including or excluding optional features.
- the sorted feature points include an ordered set of a top number of image feature points and corresponding feature descriptors.
- Example 36 includes the system of any one of examples 31 to 35, including or excluding optional features.
- the sorted feature points each include a packed format including a feature descriptor, feature point pixel co-ordinates, and an integer score.
- the feature descriptor includes a pixel patch centered at the feature point pixel co-ordinates.
- Example 37 includes the system of any one of examples 31 to 36, including or excluding optional features.
- the feature sorter is to perform on-the-fly dynamic heap sorting using the circular buffer.
- Example 38 includes the system of any one of examples 31 to 37, including or excluding optional features.
- the feature tracker is to match the detected image features with image features detected in the subsequent image data.
- Example 39 includes the system of any one of examples 31 to 38, including or excluding optional features.
- the system includes a circular buffer manager to maintain the circular buffer, keep track of production and consumption rates of the feature detector, the feature tracker, and the feature sorter, and synchronize data buffer availability for the feature detector, the feature tracker, and the feature sorter.
- Example 40 includes the system of any one of examples 31 to 39, including or excluding optional features.
- the initial image data and subsequent image data each include a line of the image.
- Example 41 is a system for tracking features in image data.
- the system includes means for receiving initial image data corresponding to an image from a camera and storing the image data in a circular buffer.
- the system also includes means for detecting features in the image data.
- the system further includes means for sorting the detected features to generate sorted feature points.
- the system also further includes means for tracking the sorted feature points in subsequent image data corresponding to the image received at the image data receiver.
- Example 42 includes the system of example 41, including or excluding optional features.
- the circular buffer includes an on-chip L2 static random-access memory (SRAM) that is communicatively coupled with the means for detecting the feature, the means for sorting the detected features, and the means for tracking the sorted feature points.
- Example 43 includes the system of any one of examples 41 to 42, including or excluding optional features.
- the means for receiving the initial image data is to receive the subsequent image data from a camera and replace the initial image data with the subsequent image data in the circular buffer.
- Example 44 includes the system of any one of examples 41 to 43, including or excluding optional features.
- the means for detecting the features, the means for sorting the detected features, and the means for tracking the sorted feature points are to process the initial image data before the subsequent image data is stored in the circular buffer.
- Example 45 includes the system of any one of examples 41 to 44, including or excluding optional features.
- the sorted feature points include an ordered set of a top number of image feature points and corresponding feature descriptors.
- Example 46 includes the system of any one of examples 41 to 45, including or excluding optional features.
- the sorted feature points each include a packed format including a feature descriptor, feature point pixel co-ordinates, and an integer score.
- the feature descriptor includes a pixel patch centered at the feature point pixel co-ordinates.
- Example 47 includes the system of any one of examples 41 to 46, including or excluding optional features.
- the means for sorting the detected features is to perform on-the-fly dynamic heap sorting using the circular buffer.
- Example 48 includes the system of any one of examples 41 to 47, including or excluding optional features.
- the means for tracking the sorted feature points is to match the detected image features with image features detected in the subsequent image data.
- Example 49 includes the system of any one of examples 41 to 48, including or excluding optional features.
- the system includes means for maintaining the circular buffer, keeping track of production and consumption rates of the means for detecting the features, the means for tracking the sorted feature points, and the means for sorting the detected features, and synchronizing data buffer availability for the means for detecting the features, the means for tracking the sorted feature points, and the means for sorting the detected features.
- Example 50 includes the system of any one of examples 41 to 49, including or excluding optional features.
- the initial image data and subsequent image data each include a line of the image.
- in some cases, the elements may each have the same reference number or a different reference number to suggest that the elements represented could be different and/or similar.
- an element may be flexible enough to have different implementations and work with some or all of the systems shown or described herein.
- the various elements shown in the figures may be the same or different. Which one is referred to as a first element and which is called a second element is arbitrary.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Image Analysis (AREA)
Abstract
Description
Claims (22)
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/864,029 US11080864B2 (en) | 2018-01-08 | 2018-01-08 | Feature detection, sorting, and tracking in images using a circular buffer |
DE102018131730.1A DE102018131730A1 (en) | 2018-01-08 | 2018-12-11 | CHARACTERIZATION, SORTING AND TRACKING IN PICTURES USING A RING STORAGE |
US17/387,697 US20210358135A1 (en) | 2018-01-08 | 2021-07-28 | Feature detection, sorting, and tracking in images using a circular buffer |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/864,029 US11080864B2 (en) | 2018-01-08 | 2018-01-08 | Feature detection, sorting, and tracking in images using a circular buffer |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/387,697 Continuation US20210358135A1 (en) | 2018-01-08 | 2021-07-28 | Feature detection, sorting, and tracking in images using a circular buffer |
Publications (2)
Publication Number | Publication Date |
---|---|
US20190043204A1 US20190043204A1 (en) | 2019-02-07 |
US11080864B2 true US11080864B2 (en) | 2021-08-03 |
Family
ID=65230365
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/864,029 Active 2038-05-05 US11080864B2 (en) | 2018-01-08 | 2018-01-08 | Feature detection, sorting, and tracking in images using a circular buffer |
US17/387,697 Abandoned US20210358135A1 (en) | 2018-01-08 | 2021-07-28 | Feature detection, sorting, and tracking in images using a circular buffer |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/387,697 Abandoned US20210358135A1 (en) | 2018-01-08 | 2021-07-28 | Feature detection, sorting, and tracking in images using a circular buffer |
Country Status (2)
Country | Link |
---|---|
US (2) | US11080864B2 (en) |
DE (1) | DE102018131730A1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10832417B1 (en) * | 2019-06-04 | 2020-11-10 | International Business Machines Corporation | Fusion of visual-inertial-odometry and object tracker for physically anchored augmented reality |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6091420A (en) * | 1996-12-13 | 2000-07-18 | Sony Corporation | Method for approximating shape data, drawing apparatus and information recording medium |
US20110136676A1 (en) * | 2008-04-24 | 2011-06-09 | Greene Eric C | Geometric patterns and lipid bilayers for dna molecule organization and uses thereof |
US20120162454A1 (en) * | 2010-12-23 | 2012-06-28 | Samsung Electronics Co., Ltd. | Digital image stabilization device and method |
US20120212481A1 (en) * | 2005-09-29 | 2012-08-23 | Apple Inc. | Video Acquisition With Integrated GPU Processing |
US20120249956A1 (en) * | 2011-03-30 | 2012-10-04 | Carl Zeiss Meditec, Inc. | Systems and methods for efficiently obtaining measurements of the human eye using tracking |
US20130128735A1 (en) * | 2010-12-17 | 2013-05-23 | Microsoft Corporation | Universal rate control mechanism with parameter adaptation for real-time communication applications |
US20140266803A1 (en) * | 2013-03-15 | 2014-09-18 | Xerox Corporation | Two-dimensional and three-dimensional sliding window-based methods and systems for detecting vehicles |
US20150067008A1 (en) * | 2013-09-05 | 2015-03-05 | Texas Instruments Incorporated | Determining Median Value of an Array on Vector SIMD Architectures |
US20150131848A1 (en) * | 2013-11-08 | 2015-05-14 | Analog Devices Technology | Support vector machine based object detection system and associated method |
US20160163091A1 (en) * | 2014-12-09 | 2016-06-09 | Industrial Technology Research Institute | Electronic apparatus and method for incremental pose estimation and photographing thereof |
US20160379375A1 (en) * | 2014-03-14 | 2016-12-29 | Huawei Technologies Co., Ltd. | Camera Tracking Method and Apparatus |
US20180075593A1 (en) * | 2016-09-15 | 2018-03-15 | Qualcomm Incorporated | Automatic scene calibration method for video analytics |
US20180121819A1 (en) * | 2016-10-31 | 2018-05-03 | Salesforce.Com, Inc. | Jaccard similarity estimation of weighted samples: circular smearing with scaling and randomized rounding sample selection |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100958379B1 (en) * | 2008-07-09 | 2010-05-17 | (주)지아트 | Methods and Devices for tracking multiple 3D object, Storage medium storing the same |
DE102011011931A1 (en) * | 2011-02-18 | 2012-08-23 | Hella Kgaa Hueck & Co. | Method for evaluating a plurality of time-shifted images, device for evaluating images, monitoring system |
US8861893B2 (en) * | 2011-09-27 | 2014-10-14 | The Boeing Company | Enhancing video using super-resolution |
- 2018
  - 2018-01-08 US US15/864,029 patent/US11080864B2/en active Active
  - 2018-12-11 DE DE102018131730.1A patent/DE102018131730A1/en active Pending
- 2021
  - 2021-07-28 US US17/387,697 patent/US20210358135A1/en not_active Abandoned
Also Published As
Publication number | Publication date |
---|---|
DE102018131730A1 (en) | 2019-07-11 |
US20190043204A1 (en) | 2019-02-07 |
US20210358135A1 (en) | 2021-11-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10282805B2 (en) | Image signal processor and devices including the same | |
US11620757B2 (en) | Dense optical flow processing in a computer vision system | |
US10242294B2 (en) | Target object classification using three-dimensional geometric filtering | |
US10867390B2 (en) | Computer vision processing | |
US10070134B2 (en) | Analytics assisted encoding | |
JP2020129386A (en) | Low power computer imaging | |
US11682212B2 (en) | Hierarchical data organization for dense optical flow processing in a computer vision system | |
US20210358135A1 (en) | Feature detection, sorting, and tracking in images using a circular buffer | |
CN105427235B (en) | A kind of image browsing method and system | |
US10769753B2 (en) | Graphics processor that performs warping, rendering system having the graphics processor, and method of operating the graphics processor | |
US9852092B2 (en) | System and method for memory access | |
US20180227581A1 (en) | Intelligent MSI-X Interrupts for Video Analytics and Encoding | |
US20140362094A1 (en) | System, method, and computer program product for recovering from a memory underflow condition associated with generating video signals | |
WO2013062514A1 (en) | Multiple stream processing for video analytics and encoding | |
US8749567B2 (en) | Apparatus for and method of processing vertex | |
US11924537B2 (en) | Image signal processor and image processing system performing interrupt control | |
US10609379B1 (en) | Video compression across continuous frame edges | |
US20180095877A1 (en) | Processing scattered data using an address buffer |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment | Owner name: INTEL IP CORPORATION, CALIFORNIA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MANDAL, DIPAN KUMAR;C, NAGADASTAGIRI REDDY;MAMIDIPAKA, MAHESH;AND OTHERS;SIGNING DATES FROM 20171219 TO 20171223;REEL/FRAME:044556/0586 |
FEPP | Fee payment procedure | ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
STPP | Information on status: patent application and granting procedure in general | DOCKETED NEW CASE - READY FOR EXAMINATION |
STPP | Information on status: patent application and granting procedure in general | NON FINAL ACTION MAILED |
STPP | Information on status: patent application and granting procedure in general | FINAL REJECTION MAILED |
STPP | Information on status: patent application and granting procedure in general | ADVISORY ACTION MAILED |
STPP | Information on status: patent application and granting procedure in general | NON FINAL ACTION MAILED |
STCB | Information on status: application discontinuation | ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |
STCC | Information on status: application revival | WITHDRAWN ABANDONMENT, AWAITING EXAMINER ACTION |
STPP | Information on status: patent application and granting procedure in general | NON FINAL ACTION MAILED |
STPP | Information on status: patent application and granting procedure in general | NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
AS | Assignment | Owner name: INTEL CORPORATION, CALIFORNIA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:INTEL IP CORPORATION;REEL/FRAME:056322/0723. Effective date: 20210512 |
STPP | Information on status: patent application and granting procedure in general | PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED |
STPP | Information on status: patent application and granting procedure in general | PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
STCF | Information on status: patent grant | PATENTED CASE |