CN114216461A - Panoramic camera-based indoor positioning method and system for mobile robot - Google Patents

Panoramic camera-based indoor positioning method and system for mobile robot

Info

Publication number
CN114216461A
CN114216461A (application CN202111153320.7A)
Authority
CN
China
Prior art keywords
descriptor
descriptors
panoramic
feature point
mobile robot
Prior art date
Legal status
Pending
Application number
CN202111153320.7A
Other languages
Chinese (zh)
Inventor
张志萌
曹颂
钟星
Current Assignee
Hangzhou Turing Video Technology Co ltd
Original Assignee
Hangzhou Turing Video Technology Co ltd
Priority date: 2021-09-29
Filing date: 2021-09-29
Publication date: 2022-03-22
Application filed by Hangzhou Turing Video Technology Co ltd
Priority to CN202111153320.7A
Publication of CN114216461A
Legal status: Pending

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G01C21/206 Instruments for performing navigational calculations specially adapted for indoor navigation
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30 Map- or contour-matching
    • G01C21/32 Structuring or formatting of map data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/232 Non-hierarchical techniques
    • G06F18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering

Abstract

The invention discloses a panoramic camera-based indoor positioning method and system for a mobile robot, wherein the method comprises the following steps. Step S1: acquiring video data, constructing a feature point map, and performing dimension reduction and clustering on the feature point map to generate an index file. Step S2: performing dimension reduction, denoising and search processing on the panoramic image to obtain 2D-3D coordinate pairs, and then performing RANSAC EPnP computation on the coordinate pairs to obtain the position (t) and attitude (R) of the robot. The method does not use geometric information of the environment; it collects image information from the environment and performs texture-based global positioning, adding more information to the positioning process and greatly improving positioning robustness.

Description

Panoramic camera-based indoor positioning method and system for mobile robot
Technical Field
The invention relates to a mobile robot indoor positioning method and system based on a panoramic camera, and belongs to the technical field of robot positioning.
Background
When positioned outdoors, a robot can obtain passive measurements from GPS; indoors, however, GPS signals cannot be received, and the robot can only localize actively with its on-board sensors.
At present, the most widely used indoor positioning approach is based on lidar: the lidar collects geometric information of the indoor environment and matches it against a pre-built map. This is feasible in most cases, but once positioning is lost, relocalization becomes extremely difficult.
Disclosure of Invention
The invention aims to overcome the technical defects in the prior art and provides a panoramic camera-based indoor positioning method and system for a mobile robot, so as to solve the problem that lidar-based positioning is unreliable.
The invention specifically adopts the following technical scheme: a mobile robot indoor positioning method based on a panoramic camera comprises the following steps:
step S1: acquiring video data, constructing a feature point map, and performing dimension reduction and clustering on the feature point map to generate an index file;
step S2: performing dimension reduction, denoising and search processing on the panoramic image to obtain 2D-3D coordinate pairs, and then performing RANSAC EPnP computation on the 2D-3D coordinate pairs to obtain the position (t) and attitude (R).
As a preferred embodiment, the step S1 specifically includes:
step SS 11: collecting video data of the whole environment through a panoramic camera arranged at the top of the mobile robot;
step SS12: constructing a feature point map of the whole environment by visual SLAM from the continuous-frame panoramic video data obtained in step SS11;
step SS13: storing the descriptors F = {f0, f1, …} and the spatial coordinates P = {p0, p1, …} corresponding to the feature point map obtained in step SS12;
step SS14: performing dimension reduction on the descriptor of each feature point obtained in step SS13;
step SS15: according to the pre-trained cluster centers, finding the nearest cluster center for the front half and the back half of each reduced descriptor respectively, and then building the front and back cluster-center indices of each descriptor into an index file in the form of a Hash Map: H = {h0, h1, …};
As a preferred embodiment, the step S2 specifically includes:
step SS26: collecting a panoramic image through the panoramic camera, extracting feature points from the image, then performing descriptor dimension reduction on those feature points, and recording the reduced descriptor set of the current panoramic frame as S = {s0, s1, …};
step SS27: for each descriptor in the set S, searching the index file H for its similar descriptors to generate a similar-descriptor set C = {c0, c1, …};
step SS28: checking the retrieved similar-descriptor set C, removing clearly erroneous descriptors, and generating a denoised similar-descriptor set C';
step SS29: extracting the coordinates corresponding to the descriptors in the similar-descriptor set C' and in the descriptor set S respectively, where the descriptors in S carry the two-dimensional pixel coordinates of the current image and the descriptors in C' carry the three-dimensional spatial coordinates of the map, forming 2D-3D coordinate pairs from these coordinates according to the descriptor matching relation, and then performing RANSAC EPnP computation on the 2D-3D coordinate pairs to obtain the position (t) and attitude (R).
As a preferred embodiment, step SS27 specifically includes: firstly, for each current descriptor in the set S, finding the nearest cluster center of its front half and of its back half respectively; then building the Hash index from the indices of these cluster centers and locating the position of the current descriptor in the Hash Map; and taking from that position the 10 descriptors most similar to the current descriptor.
As a preferred embodiment, step SS28 specifically includes: computing the panoramic image index corresponding to each retrieved descriptor, counting the distribution of the retrieved descriptors over the panoramic image indices, and deleting from the search results the descriptors belonging to panoramic images whose count is below a set threshold, so as to obtain the final descriptor set C'.
The invention also provides a mobile robot indoor positioning system based on the panoramic camera, which comprises the following components:
the characteristic point map and index file acquisition module specifically executes the following steps: acquiring video data, constructing a feature point map, and performing dimension reduction and clustering on the feature point map to generate an index file;
the panoramic image execution positioning module specifically executes: performing dimension reduction, denoising and search processing on the panoramic image to obtain 2D-3D coordinate pairs, and then performing RANSAC EPnP computation on the 2D-3D coordinate pairs to obtain the position (t) and attitude (R).
As a preferred embodiment, the feature point map and index file obtaining module specifically includes:
the video acquisition module specifically executes: collecting video data of the whole environment through a panoramic camera arranged at the top of the mobile robot;
the characteristic point map building module specifically executes the following steps: constructing a feature point map of the whole environment in a visual SLAM mode according to panoramic video data of continuous frames obtained by a video acquisition module;
the storage module specifically executes: storing the descriptors F = {f0, f1, …} and the spatial coordinates P = {p0, p1, …} corresponding to the feature point map obtained by the feature point map construction module;
The data dimension reduction module specifically executes: performing data dimension reduction on the descriptor of each feature point obtained by the storage module;
a clustering module that specifically executes: according to the pre-trained cluster centers, finding the nearest cluster center for the front half and the back half of each reduced descriptor respectively, and then building the front and back cluster-center indices of each descriptor into an index file in the form of a Hash Map: H = {h0, h1, …}.
As a preferred embodiment, the panoramic image execution positioning module specifically includes:
the image extraction and dimension reduction module specifically executes: collecting a panoramic image through the panoramic camera, extracting feature points from the image, then performing descriptor dimension reduction on those feature points, and recording the reduced descriptor set of the current panoramic frame as S = {s0, s1, …};
the search module specifically executes: for each descriptor in the set S, searching the index file H for its similar descriptors to generate a similar-descriptor set C = {c0, c1, …};
the denoising module specifically executes: checking the retrieved similar-descriptor set C, removing clearly erroneous descriptors, and generating a denoised similar-descriptor set C';
the coordinate pair generation module specifically executes: extracting the coordinates corresponding to the descriptors in the similar-descriptor set C' and in the descriptor set S respectively, where the descriptors in S carry the two-dimensional pixel coordinates of the current image and the descriptors in C' carry the three-dimensional spatial coordinates of the map, forming 2D-3D coordinate pairs from these coordinates according to the descriptor matching relation, and then performing RANSAC EPnP computation on the 2D-3D coordinate pairs to obtain the position (t) and attitude (R).
As a preferred embodiment, the search module specifically executes: firstly, for each current descriptor in the set S, finding the nearest cluster center of its front half and of its back half respectively; then building the Hash index from the indices of these cluster centers and locating the position of the current descriptor in the Hash Map; and taking from that position the 10 descriptors most similar to the current descriptor.
As a preferred embodiment, the denoising module specifically executes: computing the panoramic image index corresponding to each retrieved descriptor, counting the distribution of the retrieved descriptors over the panoramic image indices, and deleting from the search results the descriptors belonging to panoramic images whose count is below a set threshold, so as to obtain the final descriptor set C'.
The invention achieves the following beneficial effects. The most widely used existing indoor positioning approach is based on lidar, which collects geometric information of the indoor environment and matches it against a pre-built map; this is feasible in most cases, but once positioning is lost, relocalization becomes extremely difficult. To address this, the invention acquires video data, constructs a feature point map, and performs dimension reduction and clustering on the feature point map to generate an index file; it then performs dimension reduction, denoising and search processing on a panoramic image to obtain 2D-3D coordinate pairs, and runs RANSAC EPnP on these pairs to obtain R and t. The invention does not use geometric information of the environment; it collects image information from the environment and performs texture-based global positioning, which adds more information to the positioning process and greatly improves positioning robustness.
Drawings
Fig. 1 is a flow chart of the indoor positioning method of the mobile robot based on the panoramic camera of the invention.
Fig. 2 is a topological schematic diagram of the panoramic camera-based mobile robot indoor positioning system of the invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The following examples are only for illustrating the technical solutions of the present invention more clearly, and the protection scope of the present invention is not limited thereby.
Example 1: as shown in fig. 1, a mobile robot indoor positioning method based on a panoramic camera includes the following steps:
step S1: acquiring video data, constructing a feature point map, and performing dimension reduction and clustering on the feature point map to generate an index file;
step S2: performing dimension reduction, denoising and search processing on the panoramic image to obtain 2D-3D coordinate pairs, and then performing RANSAC EPnP computation on the 2D-3D coordinate pairs to obtain the position (t) and attitude (R).
As a preferred embodiment, the step S1 specifically includes:
step SS11: collecting video data of the whole environment through a panoramic camera mounted on top of the mobile robot;
step SS12: constructing a feature point map of the whole environment by visual SLAM from the continuous-frame panoramic video data obtained in step SS11; when constructing the feature point map, a typical choice of feature point is FAST + BRIEF or FAST + FREAK (illustrated in the code sketch following step SS15);
step SS13: storing the descriptors F = {f0, f1, …} and the spatial coordinates P = {p0, p1, …} corresponding to the feature point map obtained in step SS12;
step SS14: performing dimension reduction on the descriptor of each feature point obtained in step SS13; for the dimension reduction, Principal Component Analysis (PCA) can be used, or a dimension-reduction matrix can be trained with a machine learning method (see the sketch below);
step SS15: according to the pre-trained cluster centers, finding the nearest cluster center for the front half and the back half of each reduced descriptor respectively, and then building the front and back cluster-center indices of each descriptor into an index file in the form of a Hash Map: H = {h0, h1, …};
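A minimal sketch of how steps SS12-SS15 could look in practice, assuming OpenCV (with the contrib module for BRIEF), scikit-learn and NumPy, assuming that "front and back parts" refers to the first and second halves of the reduced descriptor vector, and assuming PCA as the dimension-reduction method; all function and variable names are illustrative and not taken from the patent:

```python
import numpy as np
import cv2
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def extract_fast_brief(gray):
    """FAST corners + BRIEF descriptors (one option named in step SS12; needs opencv-contrib)."""
    fast = cv2.FastFeatureDetector_create()
    brief = cv2.xfeatures2d.BriefDescriptorExtractor_create()
    kps = fast.detect(gray, None)
    kps, des = brief.compute(gray, kps)
    return kps, des                        # des: N x 32 uint8 (256-bit BRIEF)

def reduce_dim(descriptors, n_components=32):
    """Step SS14: unpack the binary descriptor to bits and reduce it with PCA."""
    bits = np.unpackbits(descriptors, axis=1).astype(np.float32)   # N x 256
    pca = PCA(n_components=n_components)
    return pca.fit_transform(bits), pca    # keep pca so query descriptors use the same projection

def train_centers(reduced, n_clusters=64):
    """Stand-in for the pre-trained cluster centers of step SS15, one k-means per half."""
    half = reduced.shape[1] // 2
    front_km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(reduced[:, :half])
    back_km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(reduced[:, half:])
    return front_km, back_km

def build_index(reduced, front_km, back_km):
    """Step SS15: Hash Map index keyed by (front-center id, back-center id)."""
    half = reduced.shape[1] // 2
    front_ids = front_km.predict(reduced[:, :half])
    back_ids = back_km.predict(reduced[:, half:])
    index = {}
    for i, (f, b) in enumerate(zip(front_ids, back_ids)):
        index.setdefault((int(f), int(b)), []).append(i)   # i indexes into F and P
    return index

# Usage on one synthetic frame (a real pipeline would accumulate descriptors over SLAM keyframes):
gray = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
_, F = extract_fast_brief(gray)
reduced, pca = reduce_dim(F)
front_km, back_km = train_centers(reduced)
H = build_index(reduced, front_km, back_km)
print(len(H), "hash buckets for", len(F), "map descriptors")
```

In a real system the PCA model and the two sets of cluster centers would be trained once offline and saved together with the index file H, the descriptors F and the coordinates P.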
As a preferred embodiment, the step S2 specifically includes:
step SS26: collecting a panoramic image through the panoramic camera, extracting feature points from the image, then performing descriptor dimension reduction on those feature points, and recording the reduced descriptor set of the current panoramic frame as S = {s0, s1, …}; when extracting the feature points of the panoramic image, an 8-level image pyramid is built from the image, and 2000 feature points are extracted over the eight pyramid levels in decreasing numbers per level;
step SS27: for each descriptor in the set S, searching the index file H for its similar descriptors to generate a similar-descriptor set C = {c0, c1, …};
step SS28: checking the retrieved similar-descriptor set C, removing clearly erroneous descriptors, and generating a denoised similar-descriptor set C';
step SS29: extracting the coordinates corresponding to the descriptors in the similar-descriptor set C' and in the descriptor set S respectively, where the descriptors in S carry the two-dimensional pixel coordinates of the current image and the descriptors in C' carry the three-dimensional spatial coordinates of the map, forming 2D-3D coordinate pairs from these coordinates according to the descriptor matching relation, and then performing RANSAC EPnP computation on the 2D-3D coordinate pairs to obtain the position (t) and attitude (R).
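For the pose computation in step SS29, OpenCV's solvePnPRansac can run EPnP inside a RANSAC loop. The sketch below uses synthetic 2D-3D correspondences and an assumed pinhole intrinsic matrix (the patent does not specify the camera model used for the panoramic image), so all numbers are purely illustrative:

```python
import numpy as np
import cv2

# Assumed pinhole intrinsics for the (rectified) panoramic view; the values are illustrative.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                      # assume lens distortion already removed

# 3D map points (from P, via C') matched to 2D pixel observations (from S) -- synthetic here.
object_points = np.random.uniform([-2.0, -2.0, 4.0], [2.0, 2.0, 8.0], (30, 3)).astype(np.float32)
rvec_true = np.array([0.1, -0.2, 0.05])
tvec_true = np.array([0.1, 0.0, 0.5])
image_points, _ = cv2.projectPoints(object_points, rvec_true, tvec_true, K, dist)

# RANSAC + EPnP: returns the rotation as a Rodrigues vector and the translation t.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    object_points, image_points, K, dist,
    iterationsCount=100, reprojectionError=3.0, flags=cv2.SOLVEPNP_EPNP)

R, _ = cv2.Rodrigues(rvec)              # attitude R as a 3x3 rotation matrix
print("solved:", ok, "inliers:", 0 if inliers is None else len(inliers))
print("R =\n", R, "\nt =", tvec.ravel())
```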
As a preferred embodiment, step SS27 specifically includes: firstly, for each current descriptor in the set S, finding the nearest cluster center of its front half and of its back half respectively; then building the Hash index from the indices of these cluster centers and locating the position of the current descriptor in the Hash Map; and taking from that position the 10 descriptors most similar to the current descriptor.
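A sketch of the lookup just described, reusing the index built in the earlier sketch; front_centers and back_centers stand for the pre-trained cluster-center arrays (e.g. front_km.cluster_centers_), the helper name is an assumption, and a production version would also probe neighbouring buckets when a bucket is empty:

```python
import numpy as np

def query_top10(query, reduced_map, index, front_centers, back_centers, k=10):
    """Return the (up to) k map-descriptor indices most similar to one reduced query descriptor."""
    half = query.shape[0] // 2
    # Nearest pre-trained cluster center for the front half and for the back half.
    f_id = int(np.argmin(np.linalg.norm(front_centers - query[:half], axis=1)))
    b_id = int(np.argmin(np.linalg.norm(back_centers - query[half:], axis=1)))
    bucket = index.get((f_id, b_id), [])            # position of the descriptor in the Hash Map
    if not bucket:
        return []                                   # empty bucket: no candidates here
    cand = reduced_map[bucket]                      # candidate map descriptors
    dists = np.linalg.norm(cand - query, axis=1)    # rank candidates by full-vector distance
    order = np.argsort(dists)[:k]
    return [bucket[i] for i in order]               # indices into F and P

# Example call, reusing H, reduced and the k-means models from the index-building sketch:
# top10 = query_top10(query_desc, reduced, H, front_km.cluster_centers_, back_km.cluster_centers_)
```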
As a preferred embodiment, step SS28 specifically includes: computing the panoramic image index corresponding to each retrieved descriptor, counting the distribution of the retrieved descriptors over the panoramic image indices, and deleting from the search results the descriptors belonging to panoramic images whose count is below a set threshold, so as to obtain the final descriptor set C'.
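The denoising step amounts to a vote over source panoramas. A small sketch, assuming each map descriptor records the index of the panoramic keyframe it came from (desc_to_image); the vote threshold here is arbitrary:

```python
from collections import Counter

def denoise_matches(matches, desc_to_image, min_votes=3):
    """Drop retrieved descriptors whose source panorama collected fewer than min_votes hits.

    matches       : map-descriptor indices returned by the search step (the set C)
    desc_to_image : maps a map-descriptor index to the panoramic keyframe it came from
    returns       : the filtered index list corresponding to C'
    """
    votes = Counter(desc_to_image[m] for m in matches)             # hits per panorama
    keep = {img for img, n in votes.items() if n >= min_votes}     # panoramas above threshold
    return [m for m in matches if desc_to_image[m] in keep]

# Example: descriptors 0-2 come from panorama 7, descriptor 3 is a stray hit from panorama 12.
desc_to_image = {0: 7, 1: 7, 2: 7, 3: 12}
print(denoise_matches([0, 1, 2, 3], desc_to_image, min_votes=2))   # -> [0, 1, 2]
```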
Example 2: as shown in fig. 2, the present invention further provides a mobile robot indoor positioning system based on a panoramic camera, including:
the characteristic point map and index file acquisition module specifically executes the following steps: acquiring video data, constructing a feature point map, and performing dimension reduction and clustering on the feature point map to generate an index file;
the panoramic image execution positioning module specifically executes: performing dimension reduction, denoising and search processing on the panoramic image to obtain 2D-3D coordinate pairs, and then performing RANSAC EPnP computation on the 2D-3D coordinate pairs to obtain the position (t) and attitude (R).
As a preferred embodiment, the feature point map and index file obtaining module specifically includes:
the video acquisition module specifically executes: collecting video data of the whole environment through a panoramic camera arranged at the top of the mobile robot;
the characteristic point map building module specifically executes the following steps: constructing a feature point map of the whole environment in a visual SLAM mode according to panoramic video data of continuous frames obtained by a video acquisition module;
the storage module specifically executes: storing the descriptors F = {f0, f1, …} and the spatial coordinates P = {p0, p1, …} corresponding to the feature point map obtained by the feature point map construction module;
The data dimension reduction module specifically executes: performing data dimension reduction on the descriptor of each feature point obtained by the storage module;
a clustering module that specifically executes: according to the pre-trained cluster centers, finding the nearest cluster center for the front half and the back half of each reduced descriptor respectively, and then building the front and back cluster-center indices of each descriptor into an index file in the form of a Hash Map: H = {h0, h1, …}.
As a preferred embodiment, the panoramic image execution positioning module specifically includes:
the image extraction and dimension reduction module specifically executes: collecting a panoramic image through the panoramic camera, extracting feature points from the image, then performing descriptor dimension reduction on those feature points, and recording the reduced descriptor set of the current panoramic frame as S = {s0, s1, …};
the search module specifically executes: for each descriptor in the set S, searching the index file H for its similar descriptors to generate a similar-descriptor set C = {c0, c1, …};
the denoising module specifically executes: checking the retrieved similar-descriptor set C, removing clearly erroneous descriptors, and generating a denoised similar-descriptor set C';
the coordinate pair generation module specifically executes: extracting the coordinates corresponding to the descriptors in the similar-descriptor set C' and in the descriptor set S respectively, where the descriptors in S carry the two-dimensional pixel coordinates of the current image and the descriptors in C' carry the three-dimensional spatial coordinates of the map, forming 2D-3D coordinate pairs from these coordinates according to the descriptor matching relation, and then performing RANSAC EPnP computation on the 2D-3D coordinate pairs to obtain the position (t) and attitude (R).
As a preferred embodiment, the search module specifically executes: firstly, for each current descriptor in the set S, finding the nearest cluster center of its front half and of its back half respectively; then building the Hash index from the indices of these cluster centers and locating the position of the current descriptor in the Hash Map; and taking from that position the 10 descriptors most similar to the current descriptor.
As a preferred embodiment, the denoising module specifically executes: computing the panoramic image index corresponding to each retrieved descriptor, counting the distribution of the retrieved descriptors over the panoramic image indices, and deleting from the search results the descriptors belonging to panoramic images whose count is below a set threshold, so as to obtain the final descriptor set C'.
The above description is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make several modifications and variations without departing from the technical principle of the present invention, and such modifications and variations should also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. A mobile robot indoor positioning method based on a panoramic camera is characterized by comprising the following steps:
step S1: acquiring video data, constructing a feature point map, and performing dimension reduction and clustering on the feature point map to generate an index file;
step S2: performing dimension reduction, denoising and search processing on the panoramic image to obtain 2D-3D coordinate pairs, and then performing RANSAC EPnP computation on the 2D-3D coordinate pairs to obtain the position (t) and attitude (R).
2. The indoor positioning method for the mobile robot based on the panoramic camera of claim 1, wherein the step S1 specifically includes:
step SS 11: collecting video data of the whole environment through a panoramic camera arranged at the top of the mobile robot;
step SS12: constructing a feature point map of the whole environment by visual SLAM from the continuous-frame panoramic video data obtained in step SS11;
step SS13: storing the descriptors F = {f0, f1, …} and the spatial coordinates P = {p0, p1, …} corresponding to the feature point map obtained in step SS12;
step SS14: performing dimension reduction on the descriptor of each feature point obtained in step SS13;
step SS15: according to the pre-trained cluster centers, finding the nearest cluster center for the front half and the back half of each reduced descriptor respectively, and then building the front and back cluster-center indices of each descriptor into an index file in the form of a Hash Map: H = {h0, h1, …}.
3. The indoor positioning method for the mobile robot based on the panoramic camera of claim 1, wherein the step S2 specifically includes:
step SS26: collecting a panoramic image through the panoramic camera, extracting feature points from the image, then performing descriptor dimension reduction on those feature points, and recording the reduced descriptor set of the current panoramic frame as S = {s0, s1, …};
step SS27: for each descriptor in the set S, searching the index file H for its similar descriptors to generate a similar-descriptor set C = {c0, c1, …};
step SS28: checking the retrieved similar-descriptor set C, removing clearly erroneous descriptors, and generating a denoised similar-descriptor set C';
step SS29: extracting the coordinates corresponding to the descriptors in the similar-descriptor set C' and in the descriptor set S respectively, where the descriptors in S carry the two-dimensional pixel coordinates of the current image and the descriptors in C' carry the three-dimensional spatial coordinates of the map, forming 2D-3D coordinate pairs from these coordinates according to the descriptor matching relation, and then performing RANSAC EPnP computation on the 2D-3D coordinate pairs to obtain the position (t) and attitude (R).
4. The panoramic camera-based indoor positioning method for the mobile robot as set forth in claim 3, wherein step SS27 specifically comprises: firstly, for each current descriptor in the set S, finding the nearest cluster center of its front half and of its back half respectively; then building the Hash index from the indices of these cluster centers and locating the position of the current descriptor in the Hash Map; and taking from that position the 10 descriptors most similar to the current descriptor.
5. The panoramic camera-based indoor positioning method for the mobile robot as set forth in claim 3, wherein step SS28 specifically comprises: computing the panoramic image index corresponding to each retrieved descriptor, counting the distribution of the retrieved descriptors over the panoramic image indices, and deleting from the search results the descriptors belonging to panoramic images whose count is below a set threshold, so as to obtain the final descriptor set C'.
6. A mobile robot indoor positioning system based on a panoramic camera is characterized by comprising:
the characteristic point map and index file acquisition module specifically executes the following steps: acquiring video data, constructing a feature point map, and performing dimension reduction and clustering on the feature point map to generate an index file;
the panoramic image execution positioning module specifically executes: performing dimension reduction, denoising and search processing on the panoramic image to obtain 2D-3D coordinate pairs, and then performing RANSAC EPnP computation on the 2D-3D coordinate pairs to obtain the position (t) and attitude (R).
7. The panoramic camera-based mobile robot indoor positioning system of claim 6, wherein the feature point map and index file acquisition module specifically comprises:
the video acquisition module specifically executes: collecting video data of the whole environment through a panoramic camera arranged at the top of the mobile robot;
the characteristic point map building module specifically executes the following steps: constructing a feature point map of the whole environment in a visual SLAM mode according to panoramic video data of continuous frames obtained by a video acquisition module;
the storage module specifically executes: storing the descriptors F = {f0, f1, …} and the spatial coordinates P = {p0, p1, …} corresponding to the feature point map obtained by the feature point map construction module;
The data dimension reduction module specifically executes: performing data dimension reduction on the descriptor of each feature point obtained by the storage module;
a clustering module that specifically executes: according to the pre-trained cluster centers, finding the nearest cluster center for the front half and the back half of each reduced descriptor respectively, and then building the front and back cluster-center indices of each descriptor into an index file in the form of a Hash Map: H = {h0, h1, …}.
8. The panoramic camera-based mobile robot indoor positioning system of claim 7, wherein the panoramic image execution positioning module specifically comprises:
the image extraction and dimension reduction module specifically executes: collecting a panoramic image through the panoramic camera, extracting feature points from the image, then performing descriptor dimension reduction on those feature points, and recording the reduced descriptor set of the current panoramic frame as S = {s0, s1, …};
the search module specifically executes: for each descriptor in the set S, searching the index file H for its similar descriptors to generate a similar-descriptor set C = {c0, c1, …};
the denoising module specifically executes: checking the retrieved similar-descriptor set C, removing clearly erroneous descriptors, and generating a denoised similar-descriptor set C';
the coordinate pair generation module specifically executes: extracting the coordinates corresponding to the descriptors in the similar-descriptor set C' and in the descriptor set S respectively, where the descriptors in S carry the two-dimensional pixel coordinates of the current image and the descriptors in C' carry the three-dimensional spatial coordinates of the map, forming 2D-3D coordinate pairs from these coordinates according to the descriptor matching relation, and then performing RANSAC EPnP computation on the 2D-3D coordinate pairs to obtain the position (t) and attitude (R).
9. The panoramic camera-based mobile robot indoor positioning system of claim 8, wherein the search module specifically executes: firstly, for each current descriptor in the set S, finding the nearest cluster center of its front half and of its back half respectively; then building the Hash index from the indices of these cluster centers and locating the position of the current descriptor in the Hash Map; and taking from that position the 10 descriptors most similar to the current descriptor.
10. The panoramic camera-based mobile robot indoor positioning system of claim 8, wherein the denoising module specifically executes: computing the panoramic image index corresponding to each retrieved descriptor, counting the distribution of the retrieved descriptors over the panoramic image indices, and deleting from the search results the descriptors belonging to panoramic images whose count is below a set threshold, so as to obtain the final descriptor set C'.
CN202111153320.7A, filed 2021-09-29 (priority 2021-09-29): Panoramic camera-based indoor positioning method and system for mobile robot; publication CN114216461A, status Pending.

Priority Applications (1)

Application Number: CN202111153320.7A; Priority Date: 2021-09-29; Filing Date: 2021-09-29; Title: Panoramic camera-based indoor positioning method and system for mobile robot


Publications (1)

Publication Number: CN114216461A; Publication Date: 2022-03-22

Family ID: 80696027


Country Status (1): CN, CN114216461A

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102254015A (en) * 2011-07-21 2011-11-23 上海交通大学 Image retrieval method based on visual phrases
CN104794219A (en) * 2015-04-28 2015-07-22 杭州电子科技大学 Scene retrieval method based on geographical position information
CN109583457A (en) * 2018-12-03 2019-04-05 荆门博谦信息科技有限公司 A kind of method and robot of robot localization and map structuring
CN109816686A (en) * 2019-01-15 2019-05-28 山东大学 Robot semanteme SLAM method, processor and robot based on object example match
CN111489393A (en) * 2019-01-28 2020-08-04 速感科技(北京)有限公司 VS L AM method, controller and mobile device
CN111899334A (en) * 2020-07-28 2020-11-06 北京科技大学 Visual synchronous positioning and map building method and device based on point-line characteristics



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination