CN117392518B - Low-power-consumption visual positioning and mapping chip and method thereof - Google Patents

Low-power-consumption visual positioning and mapping chip and method thereof

Info

Publication number
CN117392518B
Authority
CN
China
Prior art keywords
module
data
image
chip
visual positioning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311710836.6A
Other languages
Chinese (zh)
Other versions
CN117392518A (en)
Inventor
吴俊
姜爱鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Yaoyu Vision Core Technology Co ltd
Original Assignee
Nanjing Yaoyu Vision Core Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Yaoyu Vision Core Technology Co ltd filed Critical Nanjing Yaoyu Vision Core Technology Co ltd
Priority to CN202311710836.6A
Publication of CN117392518A
Application granted
Publication of CN117392518B

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/94Hardware or software architectures specially adapted for image or video understanding
    • G06V10/955Hardware or software architectures specially adapted for image or video understanding using specific electronic processors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/38Information transfer, e.g. on bus
    • G06F13/40Bus structure
    • G06F13/4063Device-to-bus coupling
    • G06F13/4068Electrical coupling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/38Information transfer, e.g. on bus
    • G06F13/42Bus transfer protocol, e.g. handshake; Synchronisation
    • G06F13/4282Bus transfer protocol, e.g. handshake; Synchronisation on a serial bus, e.g. I2C bus, SPI bus
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/94Hardware or software architectures specially adapted for image or video understanding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2213/00Indexing scheme relating to interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F2213/0042Universal serial bus [USB]

Abstract

The invention relates to a low-power-consumption visual positioning and mapping chip and a method thereof. The chip comprises: an image preprocessing module, a feature point processing module, a descriptor generation module, a feature point tracking module, a filtering tracking module, a visual compensation module, a time stamp module, a data packaging module, a data interface module and a CPU module. The beneficial effects of the invention are as follows: the low-power-consumption visual positioning and mapping chip effectively reduces the computing load of general-purpose CPU (central processing unit) and DSP (digital signal processor) chips and improves display real-time performance and the performance-to-power-consumption ratio; and by time-stamping the multiple data streams, software can conveniently synchronize and fuse the data.

Description

Low-power-consumption visual positioning and mapping chip and method thereof
Technical Field
The invention relates to the field of AR and VR vision processing, in particular to a low-power-consumption vision positioning and mapping chip and a method thereof.
Background
Current AR glasses and VR head-mounted display products run the visual real-time localization and mapping algorithm (SLAM algorithm) on general-purpose CPU and DSP chips. This results in high power consumption of the glasses/headset, short battery run time and standby time, large image latency, and difficulty in synchronizing the multiple data streams.
Disclosure of Invention
The invention aims to provide a low-power-consumption visual positioning and mapping chip and a method thereof, which are used for solving the problems in the background technology.
In order to achieve the above purpose, the present invention provides the following technical solutions:
a low power vision positioning and mapping chip comprising:
the image preprocessing module is used for preprocessing the image acquired by the camera;
the characteristic point processing module is used for extracting and screening characteristic points of the image preprocessed by the image preprocessing module;
the descriptor generation module calculates descriptors of the characteristics of the screened feature points;
the feature point tracking module is used for comparing the screened feature points between consecutive frames and finding out the motion track of the feature points;
the filtering tracking module corrects and fuses the output of the characteristic point tracking module by using IMU data;
the visual compensation module is used for compensating rendering distortion of the output image caused by processing delay;
the time stamp module is used for sending out a real time stamp;
the data packaging module acquires the data of each module of the chip through the internal bus of the chip, combines the acquired data into a data packet according to the packet format requirement, adds a time stamp, and then outputs the data packet to the data interface module;
the data interface module receives the data packet of the data packaging module and sends the data packet to the host;
and the CPU module acquires data and commands from the host through the data interface module, executes the host commands through the internal bus of the chip and controls each module of the chip.
As a further scheme of the invention: the data interface module is a USB module; the USB module sends the data generated by the data packaging module to the host in the format of the USB protocol; the USB module receives the data and configuration issued by the host; the low-power-consumption visual positioning and mapping chip is controlled by the host through the USB module.
As a further scheme of the invention: the low-power-consumption visual positioning and mapping chip is provided with an SPI interface; the SPI interface is used for connecting the IMU unit; the SPI interface is inside the chip and is accessed by the CPU module through a peripheral bus; the CPU module reads the IMU data of the IMU unit through the SPI interface and sends the IMU data to the data packaging module through the internal bus.
As a further scheme of the invention: the data acquired by the data packaging module comprises: the feature values processed by the feature point processing module, the descriptor generation module, the feature point tracking module or the filtering tracking module; the real-time stamp sent out by the time stamp module; and the IMU data;
the data packaging module packs the data and uploads it to the host through the data interface module.
As a further scheme of the invention: the image preprocessing module preprocesses the image acquired by the camera, and comprises the following steps:
step 1, cutting an image acquired by a camera to adjust the specification of the image;
step 2, processing the edges of the cut image to ensure that inconsistent image edge data does not influence the accuracy of subsequent processing;
and 3, straightening: by collecting statistics of the gray levels of the image data, regions with low gray values are raised and regions with high gray values are lowered, so that the gray levels of the whole image are evenly distributed.
As a further scheme of the invention: the feature point processing module performs feature point extraction and screening operation on the image preprocessed by the image preprocessing module, and comprises the following steps:
step 1, dividing the image preprocessed by the image preprocessing module into blocks of fixed size, and computing, pixel by pixel within each block, the gray-level difference against the surrounding points; if the difference reaches a configurable preset threshold, preliminarily taking the pixel as a candidate feature point;
step 2, performing lens-distortion removal on the candidate feature points to obtain the feature point coordinates in the real scene;
step 3, predicting the positions of the feature points in this frame according to the feature points of the previous frame and the IMU data between frames;
step 4, retaining the candidate feature points near the feature point positions predicted for this frame;
and step 5, outputting the screened feature points.
As a further scheme of the invention: the descriptor generating module calculates descriptors of characteristics of the screened feature points, and comprises the following steps:
step 1, downsampling the image to generate 7 downsampled layers in addition to the original image, the downsampled images simulating views of the same scene captured by the camera at different distances;
step 2, calculating the characteristics of the same feature point in the different pyramid layers to generate 8 layers of feature values;
step 3, calculating the angle between the gray level of the feature point and the gray levels of the surrounding points, to be used as the rotation angle of the image;
and step 4, transmitting the calculated per-layer feature points, layer indices and rotation angles to the later-stage modules for processing.
As a further scheme of the invention: the filtering tracking module corrects and fuses the output of the characteristic point tracking module by using IMU data, and the filtering tracking module comprises the following steps:
step 1, acquiring real-time IMU data, and obtaining through integration the velocity and angle information of the image at each calculation instant;
step 2, calculating the real-time pose from the pose computed for the previous frame image and the currently computed velocity and angular-velocity information;
step 3, estimating the pose of the current image from the global image information and the IMU data;
and 4, obtaining corrected feature point coordinates by Kalman filtering the data obtained in step 2 and step 3.
As a further scheme of the invention: the visual compensation module compensates rendering distortion of the output image caused by processing delay, and comprises the following steps:
step 1, predicting the pose of a camera at the current moment according to the pose of a previous frame image and IMU data;
and 2, regenerating the images to be drawn according to the real-time pose of the camera to form an image based on the actual observation point.
As a further scheme of the invention: a method for performing VST processing on images by a low-power-consumption visual positioning and mapping chip comprises the following steps:
step 1, packaging images acquired by left and right eyes and adding a time stamp;
step 2, uploading the data to a host through a data interface module;
step 3, the host re-models from the left-eye and right-eye images to generate a three-dimensional image;
and 4, re-projecting the three-dimensional image to the visual planes of the left eye and the right eye.
Compared with the prior art, the invention has the following beneficial effects: the low-power-consumption visual positioning and mapping chip effectively reduces the computing load of general-purpose CPU (central processing unit) and DSP (digital signal processor) chips and improves display real-time performance and the performance-to-power-consumption ratio; by time-stamping the multiple data streams, software can conveniently synchronize and fuse the data.
As a dedicated vision-processing chip, the low-power-consumption visual positioning and mapping chip solves the problems of low processing efficiency, high power consumption and poor real-time performance of general-purpose hardware. The left-eye and right-eye image data are packed and sent to the AP (application processor) side through the USB interface, where the integrated GPU completes three-dimensional modeling and image delivery. This reduces the computing-power requirement on the chip side.
On the basis of conventional VSLAM (visual simultaneous localization and mapping), the low-power-consumption visual positioning and mapping chip adds a VST (Video See-Through) function, faithfully restoring the on-site scene through the color camera.
Other features and advantages of the present invention will be disclosed in the following detailed description of the invention and the accompanying drawings.
Drawings
FIG. 1 is a flow chart of vision processing by the low-power-consumption visual positioning and mapping chip of the present invention;
FIG. 2 is a flow chart of the image preprocessing module of the low-power-consumption visual positioning and mapping chip of the present invention;
FIG. 3 is a flow chart of the feature point processing module of the low-power-consumption visual positioning and mapping chip of the present invention;
FIG. 4 is a flow chart of the descriptor generation module of the low-power-consumption visual positioning and mapping chip of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1 to 4, in an embodiment of the present invention, a low-power-consumption visual positioning and mapping chip comprises: an image preprocessing module, a feature point processing module, a descriptor generation module, a feature point tracking module, a filtering tracking module, a visual compensation module, a time stamp module, a data packaging module, a data interface module and a CPU module.
The image preprocessing module preprocesses the image acquired by the camera. The feature point processing module performs feature point extraction and screening on the image preprocessed by the image preprocessing module. The descriptor generation module calculates descriptors of the characteristics of the screened feature points. The feature point tracking module compares the screened feature points between consecutive frames and finds the motion track of the feature points. The filtering tracking module corrects and fuses the output of the feature point tracking module using IMU data. The visual compensation module compensates rendering distortion of the output image caused by processing delay. The time stamp module sends out a real time stamp. The data packaging module acquires the data of each module of the chip through the internal bus of the chip, combines the acquired data into a data packet according to the packet format requirement, adds a time stamp, and outputs the data packet to the data interface module. The data interface module receives the data packets of the data packaging module and sends them to the host. The CPU module acquires data and commands from the host through the data interface module, executes the host commands through the internal bus of the chip, and controls each module of the chip.
Image preprocessing module
The image preprocessing module preprocesses the image, reducing the complexity and instability that differences in the image's various characteristics would otherwise introduce into subsequent processing.
The image preprocessing module preprocesses the image acquired by the camera, and comprises the following steps:
step 1, cutting an image acquired by a camera to adjust the specification of the image;
step 2, processing the edges of the cut image to ensure that inconsistent image edge data does not influence the accuracy of subsequent processing;
and 3, straightening: by collecting statistics of the gray levels of the image data, regions with low gray values are raised and regions with high gray values are lowered, so that the gray levels of the whole image are evenly distributed; this facilitates subsequent processing and copes well with brightness variations in the image.
Parameters of different cameras may be inconsistent; cropping uniformly normalizes the images to the specification accepted by the subsequent processing modules, reducing the complexity of subsequent processing.
Through the image preprocessing module, the pictures acquired by the left and right cameras are passed to the subsequent modules in the form of multi-layer images; a code sketch of these steps follows.
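As a concrete illustration of the three preprocessing steps, the following is a minimal Python/NumPy sketch, assuming 8-bit grayscale input and assuming that "straightening" is a histogram-equalization-style redistribution of gray levels; the crop size, border width and mean-fill strategy are illustrative, not the chip's actual parameters.

```python
import numpy as np

def preprocess(frame: np.ndarray, out_h: int = 480, out_w: int = 640,
               border: int = 8) -> np.ndarray:
    # Step 1: center-crop to the specification expected downstream
    # (the real chip's output specification is configurable).
    h, w = frame.shape
    y0, x0 = (h - out_h) // 2, (w - out_w) // 2
    img = frame[y0:y0 + out_h, x0:x0 + out_w].copy()

    # Step 2: neutralize the border so inconsistent edge data cannot
    # bias subsequent feature extraction (mean fill is an assumption).
    m = np.uint8(img.mean())
    img[:border, :] = m
    img[-border:, :] = m
    img[:, :border] = m
    img[:, -border:] = m

    # Step 3: "straighten" the gray levels (raise low values, lower
    # high ones) so the histogram is evenly distributed.
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    lut = np.round((cdf - cdf.min()) / (cdf.max() - cdf.min()) * 255.0)
    return lut.astype(np.uint8)[img]
```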
Feature point processing module
The feature point processing module performs feature point extraction and screening operation on the image preprocessed by the image preprocessing module. The extracted feature points are used as VIO (visual inertial odometry) feature points for subsequent modules.
The feature point processing module performs feature point extraction and screening operation on the image preprocessed by the image preprocessing module, and comprises the following steps:
step 1, dividing the image preprocessed by the image preprocessing module into blocks of fixed size, and computing, pixel by pixel within each block, the gray-level difference against the surrounding points; if the difference reaches a configurable preset threshold, preliminarily taking the pixel as a candidate feature point;
step 2, performing lens-distortion removal on the candidate feature points to obtain the feature point coordinates in the real scene;
step 3, predicting the positions of the feature points in this frame according to the feature points of the previous frame and the IMU data between frames;
step 4, retaining the candidate feature points near the feature point positions predicted for this frame; this screening ensures that only a few feature points remain in each small block, the specific number being software-configurable;
and step 5, outputting the screened feature points.
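A minimal software sketch of these five steps (omitting step 2's lens undistortion, which depends on the camera model) might look as follows; the block size, threshold, search radius and per-block cap are hypothetical, software-configurable values, and `predicted` stands for the IMU-predicted feature positions of this frame.

```python
import numpy as np

def detect_features(img: np.ndarray, predicted: list[tuple[int, int]],
                    block: int = 32, thresh: int = 20,
                    radius: int = 12, max_per_block: int = 4):
    """Block-wise candidate test plus IMU-guided screening (sketch)."""
    h, w = img.shape
    kept = []
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            cands = []
            for y in range(by + 3, by + block - 3):
                for x in range(bx + 3, bx + block - 3):
                    # Step 1: gray-level difference against surrounding points.
                    ring = (int(img[y - 3, x]) + int(img[y + 3, x]) +
                            int(img[y, x - 3]) + int(img[y, x + 3])) / 4.0
                    if abs(int(img[y, x]) - ring) >= thresh:
                        cands.append((abs(int(img[y, x]) - ring), x, y))
            # Steps 3-4: keep only candidates near an IMU-predicted position.
            near = [(d, x, y) for d, x, y in cands
                    if any((x - px) ** 2 + (y - py) ** 2 <= radius ** 2
                           for px, py in predicted)]
            # Cap the count per block, strongest responses first.
            near.sort(reverse=True)
            kept.extend((x, y) for _, x, y in near[:max_per_block])
    return kept  # Step 5: the screened feature points
```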
Descriptor generation module
The descriptor generation module calculates the descriptors of the characteristics of the screened feature points; the generated descriptors and feature points are sent through the data packaging module to the software algorithm unit for further processing.
The descriptor generating module calculates descriptors of characteristics of the screened feature points, and comprises the following steps:
step 1, downsampling the image to generate 7 downsampled layers in addition to the original image, the downsampled images simulating views of the same scene captured by the camera at different distances;
step 2, calculating the characteristics of the same feature point in the different pyramid layers to generate 8 layers of feature values;
step 3, calculating the angle between the gray level of the feature point and the gray levels of the surrounding points, to be used as the rotation angle of the image;
and step 4, transmitting the calculated per-layer feature points, layer indices and rotation angles to the later-stage modules for processing.
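A sketch of the pyramid and the rotation angle is given below. Computing the orientation from the intensity centroid of the surrounding patch (ORB-style) is an assumption; the patent only states that an angle between the feature point's gray level and that of its surroundings is used.

```python
import numpy as np

def build_pyramid(img: np.ndarray, levels: int = 8) -> list[np.ndarray]:
    """Original image plus 7 downsampled layers, simulating views of the
    same scene captured at increasing distances (naive 2x decimation)."""
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(pyr[-1][::2, ::2])
    return pyr

def rotation_angle(img: np.ndarray, x: int, y: int, r: int = 7) -> float:
    """Rotation angle at a feature point, via the intensity centroid of
    the surrounding patch (assumed formulation); the point is assumed
    to lie at least r pixels from the image border."""
    patch = img[y - r:y + r + 1, x - r:x + r + 1].astype(np.float64)
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
    m10, m01 = (xs * patch).sum(), (ys * patch).sum()
    return float(np.degrees(np.arctan2(m01, m10)))
```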
Feature point tracking module
The feature point tracking module compares the screened feature points between consecutive frames and finds out the motion track of the feature points, namely the corresponding pose change of the camera.
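The frame-to-frame comparison can be sketched as descriptor matching between consecutive frames; Hamming distance over binary descriptors is an assumed criterion, since the patent does not specify the metric.

```python
import numpy as np

def track(desc_prev: np.ndarray, desc_curr: np.ndarray,
          max_dist: int = 40) -> list[tuple[int, int]]:
    """Match binary descriptors (uint8 rows) between consecutive frames
    by Hamming distance; each pair returned is (index_prev, index_curr)."""
    matches = []
    for i, d in enumerate(desc_prev):
        # Hamming distance = number of differing bits after XOR.
        dists = np.unpackbits(desc_curr ^ d, axis=1).sum(axis=1)
        j = int(dists.argmin())
        if dists[j] <= max_dist:
            matches.append((i, j))
    return matches
```

The displacement of the matched points across frames is the change track from which the camera's pose change is inferred.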
Filtering tracking module
And the filtering tracking module corrects and fuses the output of the characteristic point tracking module by using IMU data.
The filtering tracking module corrects and fuses the output of the characteristic point tracking module by using IMU data, and the filtering tracking module comprises the following steps:
step 1, acquiring real-time IMU data, and obtaining through integration the velocity and angle information of the image at each calculation instant;
step 2, calculating the real-time pose from the pose computed for the previous frame image and the currently computed velocity and angular-velocity information;
step 3, estimating the pose of the current image from the global image information and the IMU data;
and 4, obtaining corrected feature point coordinates by Kalman filtering the data obtained in step 2 and step 3.
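Steps 2 to 4 amount to fusing an IMU-propagated pose with a pose estimated from the global image information. The per-axis Kalman update below is a minimal sketch of step 4; a real implementation would maintain a full filter state (e.g., an EKF) rather than independent scalar variances.

```python
import numpy as np

def kalman_fuse(pred: np.ndarray, pred_var: np.ndarray,
                meas: np.ndarray, meas_var: np.ndarray):
    """Fuse the IMU-propagated pose (steps 1-2) with the pose estimated
    from global image information (step 3), independently per axis."""
    k = pred_var / (pred_var + meas_var)   # Kalman gain
    fused = pred + k * (meas - pred)       # corrected estimate
    fused_var = (1.0 - k) * pred_var       # reduced uncertainty
    return fused, fused_var

# Example with a hypothetical x/y/yaw pose: IMU prediction vs. image estimate.
pose, var = kalman_fuse(np.array([1.00, 2.00, 0.10]), np.array([0.04, 0.04, 0.01]),
                        np.array([1.05, 1.98, 0.12]), np.array([0.02, 0.02, 0.02]))
```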
Visual compensation module
The visual compensation module compensates rendering distortion of the output image caused by processing delay. The current 3D rendering result is ensured to be synchronous with the pose corresponding to the current image acquisition time.
The visual compensation module compensates rendering distortion of the output image caused by processing delay, and comprises the following steps:
step 1, predicting the pose of a camera at the current moment according to the pose of a previous frame image and IMU data;
and 2, regenerating the images to be drawn according to the real-time pose of the camera (the position and angle of the observer) to form an image based on the actual observation point, rather than rendering the currently displayed image from the position and angle of the previous frame image.
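This is the same idea as late-stage reprojection ("timewarp") in VR pipelines. Assuming a pure rotation between the render-time pose and the IMU-predicted current pose, the compensation reduces to a homography; the OpenCV sketch below is illustrative and not the chip's hardware implementation.

```python
import numpy as np
import cv2

def compensate(frame: np.ndarray, K: np.ndarray,
               R_render: np.ndarray, R_now: np.ndarray) -> np.ndarray:
    """Re-draw the rendered frame from the IMU-predicted current rotation
    instead of the pose it was rendered at (rotation-only assumption).
    K is the 3x3 camera intrinsics; R_* are world-to-camera rotations."""
    # Pixel mapping between two rotated views sharing a camera center:
    # x_now ~ K @ R_now @ R_render^T @ K^-1 @ x_render
    H = K @ R_now @ R_render.T @ np.linalg.inv(K)
    h, w = frame.shape[:2]
    return cv2.warpPerspective(frame, H, (w, h))
```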
Time stamp module
The time stamp module sends the real time stamp to the data packaging module, so that the subsequent packing and processing modules know the exact time at which the data was generated, facilitating synchronization and compensation.
Data packing module
The data packaging module obtains the data of each module of the chip through the internal bus of the chip, combines the obtained data into a data packet according to the packet format requirement, adds a time stamp, and then outputs the data packet to the data interface module.
The data packaging module obtains the feature values, time stamps and IMU data from the feature extraction modules, the time stamp module and the CPU through the internal bus of the chip, combines the data paths into a data packet according to the packet format requirement, adds a time stamp, and outputs the packet to the USB module. When multiple data paths need to be output, the order of the output packets is arbitrated.
The data acquired by the data packaging module comprises: the feature values processed by the feature point processing module, the descriptor generation module, the feature point tracking module or the filtering tracking module; the real-time stamp sent out by the time stamp module; and the IMU data. The data packaging module packs the data and uploads it to the host through the data interface module.
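The on-chip packet format is not disclosed. As a hypothetical illustration only, a packing routine with a magic word, stream identifier, nanosecond time stamp and payload length might look like this:

```python
import struct
import time

# Hypothetical packet layout (the real on-chip format is not disclosed):
# 2-byte magic, 1-byte stream id, 8-byte timestamp in ns, 4-byte length.
HEADER = struct.Struct("<HBQI")
STREAM_FEATURES, STREAM_IMU, STREAM_IMAGE = 1, 2, 3

def pack(stream_id: int, payload: bytes, ts_ns: int | None = None) -> bytes:
    """Combine one data path into a packet and add a time stamp."""
    ts = time.monotonic_ns() if ts_ns is None else ts_ns
    return HEADER.pack(0xA55A, stream_id, ts, len(payload)) + payload

# Arbitration when several paths are ready, e.g. IMU packets go first.
queue = sorted([(STREAM_FEATURES, b"\x03"), (STREAM_IMU, b"\x01\x02")],
               key=lambda p: p[0] != STREAM_IMU)
stream = b"".join(pack(sid, data) for sid, data in queue)
```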
Data interface module
The data interface module is a USB module. The USB module receives the data packets of the data packaging module and sends them to the host in the format of the USB protocol; it also receives the data and configuration issued by the host, completing chip initialization and command issuing. The whole chip is controlled by the host through the USB interface.
SPI interface
The low-power-consumption visual positioning and mapping chip is provided with an SPI interface. The SPI interface connects the IMU unit and completes functions such as IMU initialization and data reading. The SPI interface is inside the chip and is accessed by the CPU module through the peripheral bus. The CPU module reads the IMU data of the IMU unit through the SPI interface and sends it to the data packaging module through the internal bus.
CPU module
The CPU module serves as the control center of the whole chip: it obtains data and commands from the host through the data interface (USB) module, executes the host commands through the internal bus of the chip, and controls the different chip modules. It also reads the IMU data over the SPI bus and sends it to the data packaging module through the internal bus.
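On a Linux host, the equivalent of the CPU module's IMU access can be sketched with the spidev userspace driver. The register address and 14-byte burst below follow the common MPU-6xxx register style and are assumptions; the actual IMU's datasheet governs.

```python
import spidev  # Linux userspace SPI driver, standing in for on-chip firmware

REG_DATA, READ_FLAG, N_BYTES = 0x3B, 0x80, 14  # assumed MPU-6xxx-style layout

def read_imu(spi: spidev.SpiDev):
    """Burst-read accel, temperature and gyro words over SPI; the
    temperature word (index 3) is skipped in the return value."""
    raw = spi.xfer2([REG_DATA | READ_FLAG] + [0x00] * N_BYTES)[1:]
    words = [int.from_bytes(bytes(raw[i:i + 2]), "big", signed=True)
             for i in range(0, N_BYTES, 2)]
    return words[0:3], words[4:7]  # accel xyz, gyro xyz (raw counts)

spi = spidev.SpiDev()
spi.open(0, 0)                     # bus 0, chip-select 0 (assumed wiring)
spi.max_speed_hz = 1_000_000
accel, gyro = read_imu(spi)
```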
A method for performing VST processing on images by a low-power-consumption visual positioning and mapping chip comprises the following steps:
step 1, packaging images acquired by left and right eyes and adding a time stamp;
step 2, uploading the data to a host through a data interface module;
step 3, the host re-models from the left-eye and right-eye images to generate a three-dimensional image;
and 4, re-projecting the three-dimensional image to the visual planes of the left eye and the right eye.
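Steps 3 and 4 run on the host. As a CPU-side sketch of the re-modeling step, OpenCV's semi-global matching can recover depth from the timestamped stereo pair (parameters are illustrative; inputs are rectified 8-bit grayscale images):

```python
import numpy as np
import cv2

def remodel(left: np.ndarray, right: np.ndarray,
            focal_px: float, baseline_m: float) -> np.ndarray:
    """Step 3 sketch: depth from the left/right pair via Z = f * B / d."""
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=9)
    disp = sgbm.compute(left, right).astype(np.float32) / 16.0  # fixed-point
    disp[disp <= 0] = np.nan               # invalid / occluded pixels
    return focal_px * baseline_m / disp    # depth map for re-projection
```

The resulting depth map, together with each eye's extrinsics, is what step 4 re-projects onto the left and right visual planes.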
The image is issued to the chip side for processing by the vision compensation module.
The position and orientation of the glasses are calculated from two paths of monochrome-camera data, one path of depth-camera data and one path of 9-axis IMU data.
The chip processes the monochrome-camera data, extracts the feature values of the images, generates a data packet in the specified format, adds a time stamp, and transmits the packet to the host through USB.
The depth camera (DTOF) collects depth data of objects in the picture; the chip generates a data packet in the specified format, adds a time stamp, and transmits it to the host through USB.
For the IMU unit (nine-axis sensor), the CPU in the chip reads out the IMU data through the SPI interface and sends it to the packaging module, which generates a data packet in the specified format, adds a time stamp, and transmits it to the host through USB.
The color-camera data undergoes image processing, is packed together with the other data by the packaging module, and is transmitted to the host through USB for data fusion.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.
Furthermore, it should be understood that although the present specification is described in terms of embodiments, not every embodiment contains only one independent technical solution; this manner of description is adopted only for clarity. The specification should be taken as a whole, and the technical solutions in the various embodiments may be combined as appropriate to form other implementations that will be apparent to those skilled in the art.

Claims (8)

1. A low power vision positioning and mapping chip, comprising:
the image preprocessing module is used for preprocessing the image acquired by the camera;
the characteristic point processing module is used for extracting and screening characteristic points of the image preprocessed by the image preprocessing module;
the descriptor generation module calculates descriptors of the characteristics of the screened feature points;
the feature point tracking module is used for comparing the screened feature points between consecutive frames and finding out the motion track of the feature points;
the filtering tracking module is used for correcting and fusing the output of the characteristic point tracking module by using IMU data;
the visual compensation module is used for compensating rendering distortion of the output image caused by processing delay;
the time stamp module is used for sending out a real time stamp;
the data packaging module acquires the data of each module of the chip through the internal bus of the chip, combines the acquired data into a data packet according to the packet format requirement, adds a time stamp, and then outputs the data packet to the data interface module;
the data interface module receives the data packet of the data packaging module and sends the data packet to the host;
the CPU module acquires data and commands from the host through the data interface module, executes the host commands through the internal bus of the chip and controls each module of the chip;
the image preprocessing module preprocesses images acquired by the camera, and comprises the following steps:
step 1, cutting an image acquired by a camera to adjust the specification of the image;
step 2, processing the edges of the cut image to ensure that inconsistent image edge data does not influence the accuracy of subsequent processing;
step 3, straightening: by collecting statistics of the gray levels of the image data, raising regions with low gray values and lowering regions with high gray values, so that the gray levels of the whole image are evenly distributed;
the descriptor generating module calculates descriptors of characteristics of the screened characteristic points, and comprises the following steps:
step 1, downsampling the image to generate 7 downsampled layers in addition to the original image, the downsampled images simulating views of the same scene captured by the camera at different distances;
step 2, calculating the characteristics of the same feature point in the different pyramid layers to generate 8 layers of feature values;
step 3, calculating the angle between the gray level of the feature point and the gray levels of the surrounding points, to be used as the rotation angle of the image;
and step 4, transmitting the calculated per-layer feature points, layer indices and rotation angles to the later-stage modules for processing.
2. The low power visual positioning and mapping chip of claim 1, wherein,
the data interface module is a USB module; the USB module sends the data generated by the data packaging module to a host according to the format of a USB protocol; the USB module receives data and configuration issued by a host; the low-power-consumption visual positioning and mapping chip is controlled by the host through the USB module.
3. The low power visual positioning and mapping chip of claim 1, wherein,
the low-power-consumption visual positioning and mapping chip is provided with an SPI interface; the SPI interface is used for connecting the IMU unit; the SPI interface is arranged inside the chip and is accessed by the CPU module through a peripheral bus; and the CPU module reads the IMU data of the IMU unit through the SPI interface and sends the IMU data to the data packaging module through an internal bus.
4. The low power visual positioning and mapping chip of claim 1, wherein,
the data acquired by the data packaging module comprises: the feature values processed by the feature point processing module, the descriptor generation module, the feature point tracking module or the filtering tracking module; the real-time stamp sent out by the time stamp module; and the IMU data;
and the data packaging module packages data and uploads the data to the host through the data interface module.
5. A visual positioning and mapping method using the low-power-consumption visual positioning and mapping chip as claimed in any one of claims 1 to 4, characterized in that,
the characteristic point processing module performs characteristic point extraction and screening operation on the image preprocessed by the image preprocessing module, and comprises the following steps:
step 1, dividing the image preprocessed by the image preprocessing module into blocks of fixed size, and computing, pixel by pixel within each block, the gray-level difference against the surrounding points; if the difference reaches a configurable preset threshold, preliminarily taking the pixel as a candidate feature point;
step 2, performing lens-distortion removal on the candidate feature points to obtain the feature point coordinates in the real scene;
step 3, predicting the positions of the feature points in this frame according to the feature points of the previous frame and the IMU data between frames;
step 4, retaining the candidate feature points near the feature point positions predicted for this frame;
and step 5, outputting the screened characteristic points.
6. The method for visual positioning and mapping of a low power consumption visual positioning and mapping chip of claim 5, wherein,
the filtering tracking module corrects and fuses the output of the characteristic point tracking module by using IMU data, and the filtering tracking module comprises the following steps:
step 1, acquiring real-time IMU data, and obtaining through integration the velocity and angle information of the image at each calculation instant;
step 2, calculating the real-time pose from the pose computed for the previous frame image and the currently computed velocity and angular-velocity information;
step 3, estimating the pose of the current image from the global image information and the IMU data;
and 4, obtaining corrected feature point coordinates by Kalman filtering the data obtained in step 2 and step 3.
7. The method for visual positioning and mapping of a low power consumption visual positioning and mapping chip of claim 5, wherein,
the visual compensation module compensates rendering distortion of an output image caused by processing delay, and comprises the following steps:
step 1, predicting the pose of a camera at the current moment according to the pose of a previous frame image and IMU data;
and 2, regenerating the images to be drawn according to the real-time pose of the camera to form an image based on the actual observation point.
8. The method for visual positioning and mapping of a low power consumption visual positioning and mapping chip of claim 5, wherein,
the method for performing VST processing on the image by the low-power-consumption visual positioning and mapping chip comprises the following steps:
step 1, packaging images acquired by left and right eyes and adding a time stamp;
step 2, uploading the data to a host through the data interface module;
step 3, the host re-models from the left-eye and right-eye images to generate a three-dimensional image;
and 4, re-projecting the three-dimensional image to the visual planes of the left eye and the right eye.
CN202311710836.6A 2023-12-13 2023-12-13 Low-power-consumption visual positioning and mapping chip and method thereof Active CN117392518B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311710836.6A CN117392518B (en) 2023-12-13 2023-12-13 Low-power-consumption visual positioning and mapping chip and method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311710836.6A CN117392518B (en) 2023-12-13 2023-12-13 Low-power-consumption visual positioning and mapping chip and method thereof

Publications (2)

Publication Number Publication Date
CN117392518A CN117392518A (en) 2024-01-12
CN117392518B (en) 2024-04-09

Family

ID=89441457

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311710836.6A Active CN117392518B (en) 2023-12-13 2023-12-13 Low-power-consumption visual positioning and mapping chip and method thereof

Country Status (1)

Country Link
CN (1) CN117392518B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103826105A (en) * 2014-03-14 2014-05-28 贵州大学 Video tracking system and realizing method based on machine vision technology
CN105953796A (en) * 2016-05-23 2016-09-21 北京暴风魔镜科技有限公司 Stable motion tracking method and stable motion tracking device based on integration of simple camera and IMU (inertial measurement unit) of smart cellphone
CN106600600A (en) * 2016-12-26 2017-04-26 华南理工大学 Wafer defect detection method based on characteristic matching
CN107255476A (en) * 2017-07-06 2017-10-17 青岛海通胜行智能科技有限公司 A kind of indoor orientation method and device based on inertial data and visual signature
CN109579844A (en) * 2018-12-04 2019-04-05 电子科技大学 Localization method and system
CN109816696A (en) * 2019-02-01 2019-05-28 西安全志科技有限公司 A kind of robot localization and build drawing method, computer installation and computer readable storage medium
CN110058602A (en) * 2019-03-27 2019-07-26 天津大学 Multi-rotor unmanned aerial vehicle autonomic positioning method based on deep vision
CN112000225A (en) * 2020-08-25 2020-11-27 唯羲科技有限公司 Positioning mapping optimization method and device and positioning mapping optimization chip

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI357582B (en) * 2008-04-18 2012-02-01 Univ Nat Taiwan Image tracking system and method thereof


Also Published As

Publication number Publication date
CN117392518A (en) 2024-01-12

Similar Documents

Publication Publication Date Title
US11024092B2 (en) System and method for augmented reality content delivery in pre-captured environments
US11727626B2 (en) Damage detection from multi-view visual data
US11024093B2 (en) Live augmented reality guides
EP3051525B1 (en) Display
CN102959616B (en) Interactive reality augmentation for natural interaction
US8675048B2 (en) Image processing apparatus, image processing method, recording method, and recording medium
WO2017173735A1 (en) Video see-through-based smart eyeglasses system and see-through method thereof
CN109743626B (en) Image display method, image processing method and related equipment
EP2533191B1 (en) Image processing system, image processing method, and program
CN106896925A (en) The device that a kind of virtual reality is merged with real scene
WO2012153447A1 (en) Image processing device, image processing method, program, and integrated circuit
US11783443B2 (en) Extraction of standardized images from a single view or multi-view capture
US20230410332A1 (en) Structuring visual data
KR100560464B1 (en) Multi-view display system with viewpoint adaptation
WO2012020558A1 (en) Image processing device, image processing method, display device, display method and program
CN113518996A (en) Damage detection from multiview visual data
WO2020090316A1 (en) Information processing device, information processing method, and program
CN113253845A (en) View display method, device, medium and electronic equipment based on eye tracking
CN106981100A (en) The device that a kind of virtual reality is merged with real scene
CN107016730A (en) The device that a kind of virtual reality is merged with real scene
US20220408019A1 (en) Viewpoint path modeling
CN117392518B (en) Low-power-consumption visual positioning and mapping chip and method thereof
WO2019044123A1 (en) Information processing device, information processing method, and recording medium
CN111915739A (en) Real-time three-dimensional panoramic information interactive information system
TWM630947U (en) Stereoscopic image playback apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant