CN110796706A - Visual positioning method and system - Google Patents


Info

Publication number
CN110796706A
Authority
CN
China
Prior art keywords
key frame
image
parameters
mobile terminal
camera
Prior art date
Legal status
Pending
Application number
CN201911089425.3A
Other languages
Chinese (zh)
Inventor
刘孟红
赵鹏博
展华益
Current Assignee
Sichuan Changhong Electric Co Ltd
Original Assignee
Sichuan Changhong Electric Co Ltd
Priority date
Filing date
Publication date
Application filed by Sichuan Changhong Electric Co Ltd filed Critical Sichuan Changhong Electric Co Ltd
Priority to CN201911089425.3A priority Critical patent/CN110796706A/en
Publication of CN110796706A publication Critical patent/CN110796706A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C 11/04 Interpretation of pictures
    • G01C 11/06 Interpretation of pictures by comparison of two or more pictures of the same area
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 11/00 Systems for determining distance or velocity not using reflection or reradiation
    • G01S 11/12 Systems for determining distance or velocity not using reflection or reradiation using electromagnetic waves other than radio waves
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery
    • G06T 7/55 Depth or shape recovery from multiple images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence

Abstract

The invention relates to the technical field of positioning and aims to solve the high deployment and operation costs of prior-art navigation schemes for large-scale indoor environments. It provides a visual positioning method comprising the following steps: constructing an environment map, acquiring the first camera internal parameters and first distortion parameters used to construct the environment map, and establishing an image key frame database; receiving image data sent by a mobile terminal, together with the second camera internal parameters and second distortion parameters of the mobile terminal; re-projecting the image data according to the first camera internal parameters, the first distortion parameters, the second camera internal parameters and the second distortion parameters to obtain a re-projected image; and matching the re-projected image against the image key frame database, finding a positioning target key frame, and obtaining the global position information of the image data sent by the mobile terminal from the global position information of the positioning target key frame. The invention reduces the deployment cost and the operation cost of the navigation scheme.

Description

Visual positioning method and system
Technical Field
The invention relates to the technical field of positioning, in particular to a visual positioning method and a visual positioning system.
Background
With the rapid development of society and the economy, large buildings such as shopping malls, museums, office buildings and amusement parks have multiplied. In places with large scenes and complex layouts, quickly finding an exit, a particular store, or one's car in a parking lot has become a strong demand.
Map-based GPS navigation is widely used, but in indoor environments, and especially in multi-story buildings, GPS performs poorly.
Among prior-art indoor navigation techniques, WiFi-based navigation is a feasible solution. However, it has the following problem: a large number of WiFi base stations (APs, access points) must be deployed, and network and power supply lines must be installed, resulting in high deployment and operation costs.
Disclosure of Invention
The invention aims to solve the problems of high deployment cost and high operation cost of a navigation scheme in a large-scale indoor environment in the prior art, and provides a visual positioning method and a system.
The technical scheme adopted by the invention for solving the technical problems is as follows: a visual positioning method, comprising the steps of:
step 1, constructing an environment map, acquiring first camera internal parameters and first distortion parameters for constructing the environment map, and establishing an image key frame database, wherein the image key frame database stores all key frame information, and the key frame information at least comprises global position information and depth information of key frames;
step 2, receiving image data sent by the mobile terminal, and second camera internal parameters and second distortion parameters of the mobile terminal;
step 3, carrying out re-projection on the image data according to the first camera internal parameter, the first distortion parameter, the second camera internal parameter and the second distortion parameter to obtain a re-projected image;
and 4, matching the re-projected image in an image key frame database, searching a positioning target key frame, calculating the relative position information of the re-projected image and the positioning target key frame, obtaining the global position information of the positioning target key frame, and obtaining the global position information of the image data sent by the mobile terminal according to the global position information of the positioning target key frame.
Further, in step 1, the constructing the environment map includes:
and fusing the IMU by a depth camera, a binocular camera or a monocular camera to construct a three-dimensional environment map.
Further, in step 3, the re-projecting the image data according to the first camera internal parameter, the first distortion parameter, the second camera internal parameter, and the second distortion parameter includes:
and calculating camera coordinates corresponding to the pixel points of the image data sent by the mobile terminal according to the second camera internal parameters and the second distortion parameters, and performing projection imaging on the camera coordinates according to the first camera internal parameters and the first distortion parameters.
Further, in step 4, the matching the re-projected image in the image key frame database, and finding a positioning target key frame includes:
and extracting first characteristic point information of the re-projected image, matching the first characteristic point information with second characteristic point information of all key frames in an image key frame database, and determining the key frame corresponding to the second characteristic point information with the matching degree of the first characteristic point information exceeding a first preset value as a positioning target key frame.
Further, in step 4, the matching the re-projected image in the image key frame database, and finding a positioning target key frame includes:
extracting feature point information of the reprojected image, converting the feature point information into a first bag-of-words vector, matching the first bag-of-words vector with second bag-of-words vectors of all key frames in an image key frame database, and determining a key frame corresponding to the second bag-of-words vector with the matching degree of the first bag-of-words vector exceeding a second preset value as a positioning target key frame.
Further, in step 4, the method for calculating the relative position information between the reprojected image and the positioning target key frame at least includes: the RANSAC PnP algorithm or the Bundle Adjustment algorithm.
Further, the method also comprises the following steps:
receiving destination information sent by a mobile terminal;
calculating a planned path to the destination according to the current global position information of the mobile terminal obtained by calculation;
and feeding back the planned path to the mobile terminal.
The invention also proposes a visual positioning system comprising:
the construction module is used for constructing an environment map, acquiring the first camera internal parameters and first distortion parameters used to construct the environment map, and establishing an image key frame database, wherein the key frame database stores all key frame information, and the key frame information at least comprises global position information and depth information of the key frames;
the first receiving module is used for receiving the image data sent by the mobile terminal and second camera internal parameters and second distortion parameters of the mobile terminal;
the image re-projection module is used for re-projecting the image data according to the first camera internal parameter, the first distortion parameter, the second camera internal parameter and the second distortion parameter to obtain a re-projected image;
and the global positioning module is used for matching the re-projected image in the image key frame database, searching a positioning target key frame, calculating the relative position information of the re-projected image and the positioning target key frame, obtaining the global position information of the positioning target key frame, and obtaining the global position information of the image data sent by the mobile terminal according to the global position information of the positioning target key frame.
Further, the re-projecting the image data according to the first camera intrinsic parameters, the first distortion parameters, the second camera intrinsic parameters, and the second distortion parameters includes:
and calculating camera coordinates corresponding to the pixel points of the image data sent by the mobile terminal according to the second camera internal parameters and the second distortion parameters, and performing projection imaging on the camera coordinates according to the first camera internal parameters and the first distortion parameters.
Further, the method also comprises the following steps:
the second receiving module is used for receiving the destination information sent by the mobile terminal;
the path planning module is used for calculating a planned path to the destination according to the current global position information of the mobile terminal obtained through calculation;
and the feedback module is used for feeding the planned path back to the mobile terminal.
The invention has the beneficial effects that: with the visual positioning method and system, in a large-scale indoor environment a user can quickly learn his or her position and a path to a destination simply by taking a picture with a mobile phone. No large number of WiFi base stations needs to be deployed and no network or power supply lines need to be installed, which reduces the deployment cost and the operation cost of the navigation scheme.
Drawings
Fig. 1 is a schematic flowchart of a visual positioning method according to a first embodiment of the present invention;
fig. 2 is a schematic structural diagram of a visual positioning system according to a second embodiment of the present invention.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
The visual positioning method comprises the following steps: step 1, constructing an environment map, acquiring first camera internal parameters and first distortion parameters for constructing the environment map, and establishing an image key frame database, wherein the image key frame database stores all key frame information, and the key frame information at least comprises global position information and depth information of key frames; step 2, receiving image data sent by the mobile terminal, and second camera internal parameters and second distortion parameters of the mobile terminal; step 3, carrying out re-projection on the image data according to the first camera internal parameter, the first distortion parameter, the second camera internal parameter and the second distortion parameter to obtain a re-projected image; and 4, matching the re-projected image in an image key frame database, searching a positioning target key frame, calculating the relative position information of the re-projected image and the positioning target key frame, obtaining the global position information of the positioning target key frame, and obtaining the global position information of the image data sent by the mobile terminal according to the global position information of the positioning target key frame.
First, an environment map and an image key frame database are established; the database stores key frame information together with the corresponding global position and depth information. In use, the user photographs the environment with a mobile terminal. After the image data of that photograph is received, it is re-projected according to the camera internal parameters and distortion parameters to obtain a re-projected image. A positioning target key frame matching the re-projected image is then found in the image key frame database, and its global position information is retrieved, from which the global position information of the mobile terminal is obtained. This localizes the mobile terminal, and the user learns his or her position from the mobile terminal's global position information.
Example one
Fig. 1 shows a flowchart of a visual positioning method according to a first embodiment of the present invention, which includes:
s1, constructing an environment map, acquiring first camera internal parameters and first distortion parameters for constructing the environment map, and establishing an image key frame database, wherein the image key frame database stores all key frame information, and the key frame information at least comprises global position information and depth information of key frames;
the method for constructing the environment map at least comprises the following steps: the three-dimensional environment map is constructed by fusing IMU (inertial measurement Unit) with a depth camera or a binocular camera or a monocular camera, and the common method is to adopt an SLAM algorithm, wherein a key frame database is established in the SLAM process, so that the global position information of key frames is acquired in the SLAM process, and the number and distribution of the key frames can be adjusted according to the situation of a scene.
In order to accurately calculate the global position of the mobile terminal in the subsequent steps, the depth information of the key frame is required to be acquired, and the depth camera and the binocular camera can directly give a depth map. The monocular camera has no scale, needs IMU to assist calculation, and can adopt a depth filter to acquire depth information.
S2, receiving image data sent by the mobile terminal, and second camera internal parameters and second distortion parameters of the mobile terminal;
the user can take a picture of the surrounding environment through the mobile terminal to obtain image data, and the mobile terminal can be a smart phone, a tablet computer and the like.
S3, carrying out re-projection on the image data according to the first camera internal parameter, the first distortion parameter, the second camera internal parameter and the second distortion parameter to obtain a re-projected image;
the specific implementation method of the reprojection can be as follows:
and calculating camera coordinates corresponding to all pixel points of the image sent by the mobile terminal according to camera internal parameters and distortion parameters of the mobile terminal of the user, and performing projection imaging on the camera coordinates according to first camera internal parameters and first distortion parameters used for image construction to obtain a re-projected image.
Since a camera is adopted when the map and image key frame database is constructed, and the camera of the mobile terminal is different from the camera, the camera needs to be unified through the step 2 and the step 3 so as to facilitate accurate calculation of the global position of the mobile terminal.
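This unification step can be sketched in isolation. The toy sketch below assumes undistorted pinhole cameras (both distortion parameter sets effectively zero) and invented intrinsic values; a real implementation would additionally undo the mobile terminal's distortion model and apply the mapping camera's, e.g. with OpenCV's undistortPoints:

```python
def reproject_pixel(u, v, K2, K1):
    """Map a pixel observed by the mobile camera (intrinsics K2) into the
    image plane of the mapping camera (intrinsics K1).

    K = (fx, fy, cx, cy). Distortion is ignored for simplicity: a real
    system would first undistort with the second distortion parameters
    and then re-distort with the first ones.
    """
    fx2, fy2, cx2, cy2 = K2
    fx1, fy1, cx1, cy1 = K1
    # Back-project to normalized camera coordinates (depth-independent).
    x = (u - cx2) / fx2
    y = (v - cy2) / fy2
    # Re-project with the mapping camera's intrinsics.
    return fx1 * x + cx1, fy1 * y + cy1

# A pixel at the principal point of camera 2 lands on the principal
# point of camera 1.
print(reproject_pixel(320.0, 240.0, (500.0, 500.0, 320.0, 240.0),
                      (800.0, 800.0, 400.0, 300.0)))  # (400.0, 300.0)
```

Because the mapping is depth-independent for the pinhole part, the whole image can be warped pixel by pixel this way before matching.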
And S4, matching the re-projected image in an image key frame database, searching for a positioning target key frame, calculating the relative position information of the re-projected image and the positioning target key frame, obtaining the global position information of the positioning target key frame, and obtaining the global position information of the image data sent by the mobile terminal according to the global position information of the positioning target key frame.
The method for finding the key frame of the positioning target can be as follows:
and extracting first characteristic point information of the re-projected image, matching the first characteristic point information with second characteristic point information of all key frames in an image key frame database, and determining the key frame corresponding to the second characteristic point information with the matching degree of the first characteristic point information exceeding a first preset value as a positioning target key frame.
The feature point information includes key points and descriptors. Specifically, the image features may be SIFT, SURF, ORB or the like.
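The matching-degree criterion can be sketched minimally as follows, assuming binary descriptors (as ORB produces) packed into Python integers. The descriptor values and the max_dist threshold are invented for illustration; a production system would use a proper matcher such as OpenCV's BFMatcher with the Hamming norm:

```python
def hamming(d1, d2):
    """Hamming distance between two binary descriptors stored as ints."""
    return bin(d1 ^ d2).count("1")

def match_descriptors(query, keyframe, max_dist=16):
    """Greedy nearest-neighbour matching: return the fraction of query
    descriptors whose best key-frame match lies within max_dist
    (a simple 'matching degree')."""
    matched = 0
    for d in query:
        best = min(hamming(d, k) for k in keyframe)
        if best <= max_dist:
            matched += 1
    return matched / len(query)

# Key frames whose matching degree exceeds the preset value become
# positioning target candidates.
query = [0b10110010, 0b01011100, 0b11110000]
keyframe = [0b10110011, 0b00000000, 0b11110001]
print(match_descriptors(query, keyframe, max_dist=2))
```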
The method for finding the key frame of the positioning target can also comprise the following steps:
extracting feature point information of the reprojected image, converting the feature point information into a first bag-of-words vector, matching the first bag-of-words vector with second bag-of-words vectors of all key frames in an image key frame database, and determining a key frame corresponding to the second bag-of-words vector with the matching degree of the first bag-of-words vector exceeding a second preset value as a positioning target key frame.
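The bag-of-words comparison can be sketched as follows. The toy visual-word ids and vocabulary size are invented; real systems quantize descriptors against a trained vocabulary tree (e.g. DBoW2) and often score with a weighted L1 metric rather than the plain cosine similarity used here:

```python
import math
from collections import Counter

def bow_vector(word_ids, vocab_size):
    """Histogram of visual-word occurrences, normalized to unit length."""
    counts = Counter(word_ids)
    vec = [counts.get(w, 0) for w in range(vocab_size)]
    norm = math.sqrt(sum(c * c for c in vec)) or 1.0
    return [c / norm for c in vec]

def bow_similarity(v1, v2):
    """Cosine similarity between two unit bag-of-words vectors."""
    return sum(a * b for a, b in zip(v1, v2))

query_words = [0, 2, 2, 5]     # visual words seen in the re-projected image
keyframe_words = [0, 2, 5, 5]  # visual words of a database key frame
v_q = bow_vector(query_words, vocab_size=8)
v_k = bow_vector(keyframe_words, vocab_size=8)
print(bow_similarity(v_q, v_k))  # high score -> candidate target key frame
```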
The method for calculating the relative position information of the reprojected image and the positioning target key frame at least comprises the following steps: the RANSAC PnP algorithm or the Bundle Adjustment algorithm.
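Once RANSAC PnP (or Bundle Adjustment) has produced the relative pose of the re-projected image with respect to the positioning target key frame, the rest of step 4 is a composition of homogeneous transforms. A minimal sketch, with made-up pose values:

```python
def mat_mul(A, B):
    """Multiply two 4x4 homogeneous transforms given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# T_world_kf: global pose of the positioning target key frame (from the map).
# T_kf_query: relative pose of the re-projected image w.r.t. that key frame,
#             as estimated e.g. by RANSAC PnP. All values are invented.
T_world_kf = [[1, 0, 0, 10.0],
              [0, 1, 0,  2.0],
              [0, 0, 1,  0.0],
              [0, 0, 0,  1.0]]
T_kf_query = [[0, -1, 0, 1.0],
              [1,  0, 0, 0.5],
              [0,  0, 1, 0.0],
              [0,  0, 0, 1.0]]
T_world_query = mat_mul(T_world_kf, T_kf_query)
# Global position of the mobile camera = translation column of the result.
print([row[3] for row in T_world_query[:3]])  # [11.0, 2.5, 0.0]
```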
On the basis of the four steps, a navigation function can be provided for a user, and the specific method comprises the following steps:
1) receiving destination information sent by a mobile terminal;
2) calculating a planned path to the destination according to the current global position information of the mobile terminal obtained by calculation;
3) and feeding back the planned path to the mobile terminal.
After the user inputs destination information through the mobile terminal, navigation information from the user's position to the destination is generated automatically. The destination information received from the mobile terminal may be text or an image; in the latter case, the system calculates the global position of the destination according to steps S2-S4.
After the current global position of the user and the global position of the destination are obtained, a planned path to the destination can be computed by various methods, such as the Dijkstra, A* or D* algorithms.
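As one concrete possibility, the Dijkstra option can be sketched over a hypothetical indoor graph (node names and edge lengths invented for illustration):

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path on a weighted graph {node: [(neighbor, cost), ...]}.
    Returns (total cost, node list), or (inf, []) if goal is unreachable."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nbr, w in graph.get(node, []):
            if nbr not in visited:
                heapq.heappush(queue, (cost + w, nbr, path + [nbr]))
    return float("inf"), []

# Hypothetical mall corridor graph: current position -> destination store.
graph = {
    "entrance": [("atrium", 20.0), ("corridor_a", 35.0)],
    "atrium": [("corridor_a", 10.0), ("store_17", 25.0)],
    "corridor_a": [("store_17", 12.0)],
}
print(dijkstra(graph, "entrance", "store_17"))
```

The returned node list is what the feedback module would send back to the mobile terminal as the planned path.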
According to the method provided by the embodiment of the invention, in a large-scale indoor environment, a user can quickly know the position of the user and the path to the destination only by taking a picture by using a mobile phone without deploying a large number of WiFi base stations and installing network lines and power supply lines, so that the deployment cost and the operation cost of a navigation scheme are reduced.
Example two
Fig. 2 shows a schematic structural diagram of a visual positioning system according to a second embodiment of the present invention, including:
the construction module is used for constructing an environment map, acquiring the first camera internal parameters and first distortion parameters used to construct the environment map, and establishing an image key frame database, wherein the key frame database stores all key frame information, and the key frame information at least comprises global position information and depth information of the key frames;
the first receiving module is used for receiving the image data sent by the mobile terminal and second camera internal parameters and second distortion parameters of the mobile terminal;
the image re-projection module is used for re-projecting the image data according to the first camera internal parameter, the first distortion parameter, the second camera internal parameter and the second distortion parameter to obtain a re-projected image;
and the global positioning module is used for matching the re-projected image in the image key frame database, searching a positioning target key frame, calculating the relative position information of the re-projected image and the positioning target key frame, obtaining the global position information of the positioning target key frame, and obtaining the global position information of the image data sent by the mobile terminal according to the global position information of the positioning target key frame.
Wherein the reprojecting the image data according to the first camera intrinsic parameters, the first distortion parameters, the second camera intrinsic parameters, and the second distortion parameters comprises:
and calculating camera coordinates corresponding to the pixel points of the image data sent by the mobile terminal according to the second camera internal parameters and the second distortion parameters, and performing projection imaging on the camera coordinates according to the first camera internal parameters and the first distortion parameters.
To satisfy the user's navigation demand, the system further comprises:
the second receiving module is used for receiving the destination information sent by the mobile terminal;
the path planning module is used for calculating a planned path to the destination according to the current global position information of the mobile terminal obtained through calculation;
and the feedback module is used for feeding the planned path back to the mobile terminal.
It can be understood that, since the visual positioning system of the second embodiment implements the visual positioning method of the first embodiment and corresponds to the method disclosed there, its description is comparatively brief; for relevant details, refer to the description of the method. Because the visual positioning method reduces the deployment cost and the operation cost of the navigation scheme, the system implementing it reduces those costs as well.

Claims (10)

1. A visual positioning method, characterized by comprising the following steps:
step 1, constructing an environment map, acquiring first camera internal parameters and first distortion parameters for constructing the environment map, and establishing an image key frame database, wherein the image key frame database stores all key frame information, and the key frame information at least comprises global position information and depth information of key frames;
step 2, receiving image data sent by the mobile terminal, and second camera internal parameters and second distortion parameters of the mobile terminal;
step 3, carrying out re-projection on the image data according to the first camera internal parameter, the first distortion parameter, the second camera internal parameter and the second distortion parameter to obtain a re-projected image;
and 4, matching the re-projected image in an image key frame database, searching a positioning target key frame, calculating the relative position information of the re-projected image and the positioning target key frame, obtaining the global position information of the positioning target key frame, and obtaining the global position information of the image data sent by the mobile terminal according to the global position information of the positioning target key frame.
2. The visual positioning method of claim 1, wherein in step 1, said constructing the environment map comprises:
and fusing the IMU by a depth camera, a binocular camera or a monocular camera to construct a three-dimensional environment map.
3. The visual localization method of claim 1, wherein said re-projecting the image data according to the first camera parameters, the first distortion parameters, the second camera parameters, and the second distortion parameters in step 3 comprises:
and calculating camera coordinates corresponding to the pixel points of the image data sent by the mobile terminal according to the second camera internal parameters and the second distortion parameters, and performing projection imaging on the camera coordinates according to the first camera internal parameters and the first distortion parameters.
4. The visual positioning method of claim 1, wherein in step 4, the re-projected image is matched in the image key frame database, and finding the positioning target key frame comprises:
and extracting first characteristic point information of the re-projected image, matching the first characteristic point information with second characteristic point information of all key frames in an image key frame database, and determining the key frame corresponding to the second characteristic point information with the matching degree of the first characteristic point information exceeding a first preset value as a positioning target key frame.
5. The visual positioning method of claim 1, wherein in step 4, the re-projected image is matched in the image key frame database, and finding the positioning target key frame comprises:
extracting feature point information of the reprojected image, converting the feature point information into a first bag-of-words vector, matching the first bag-of-words vector with second bag-of-words vectors of all key frames in an image key frame database, and determining a key frame corresponding to the second bag-of-words vector with the matching degree of the first bag-of-words vector exceeding a second preset value as a positioning target key frame.
6. The visual positioning method of claim 1, wherein in step 4, the method for calculating the relative position information of the reprojected image and the positioning target key frame at least comprises: the RANSAC PnP algorithm or the Bundle Adjustment algorithm.
7. The visual positioning method of any of claims 1 to 6, further comprising:
receiving destination information sent by a mobile terminal;
calculating a planned path to the destination according to the current global position information of the mobile terminal obtained by calculation;
and feeding back the planned path to the mobile terminal.
8. A visual positioning system, comprising:
the construction module is used for constructing an environment map, acquiring the first camera internal parameters and first distortion parameters used to construct the environment map, and establishing an image key frame database, wherein the key frame database stores all key frame information, and the key frame information at least comprises global position information and depth information of the key frames;
the first receiving module is used for receiving the image data sent by the mobile terminal and second camera internal parameters and second distortion parameters of the mobile terminal;
the image re-projection module is used for re-projecting the image data according to the first camera internal parameter, the first distortion parameter, the second camera internal parameter and the second distortion parameter to obtain a re-projected image;
and the global positioning module is used for matching the re-projected image in the image key frame database, searching a positioning target key frame, calculating the relative position information of the re-projected image and the positioning target key frame, obtaining the global position information of the positioning target key frame, and obtaining the global position information of the image data sent by the mobile terminal according to the global position information of the positioning target key frame.
9. The visual positioning system of claim 8, wherein the re-projecting the image data according to the first camera intrinsic parameters, the first distortion parameters, the second camera intrinsic parameters, and the second distortion parameters comprises:
calculating camera coordinates corresponding to the pixel points of the image data sent by the mobile terminal according to the second camera intrinsic parameters and the second distortion parameters, and projecting the camera coordinates onto the image plane according to the first camera intrinsic parameters and the first distortion parameters.
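The two-stage re-projection of claim 9 can be illustrated with a minimal pinhole model. This sketch assumes a single radial distortion coefficient k1 and a first-order inverse of the distortion; the function names and calibration values are hypothetical, not part of the patent.

```python
def pixel_to_camera(u, v, fx, fy, cx, cy, k1):
    """Back-project a pixel to normalized camera coordinates,
    undoing one radial distortion term (first-order approximation)."""
    xd = (u - cx) / fx          # distorted normalized coordinates
    yd = (v - cy) / fy
    r2 = xd * xd + yd * yd
    # one-step inverse of x_d = x * (1 + k1 * r^2)
    return xd / (1.0 + k1 * r2), yd / (1.0 + k1 * r2)

def camera_to_pixel(x, y, fx, fy, cx, cy, k1):
    """Apply radial distortion and project to pixel coordinates."""
    r2 = x * x + y * y
    return fx * x * (1.0 + k1 * r2) + cx, fy * y * (1.0 + k1 * r2) + cy

def reproject_pixel(u, v, second_params, first_params):
    """Map a pixel from the mobile-terminal camera (second parameters)
    into the map-building camera's image plane (first parameters)."""
    x, y = pixel_to_camera(u, v, *second_params)
    return camera_to_pixel(x, y, *first_params)

# Hypothetical calibrations: (fx, fy, cx, cy, k1)
second = (500.0, 500.0, 320.0, 240.0, 0.0)   # mobile-terminal camera
first = (600.0, 600.0, 310.0, 250.0, 0.0)    # map-building camera

# The mobile camera's principal point maps to the map camera's principal point
print(reproject_pixel(320.0, 240.0, second, first))
```

Re-projecting every pixel this way makes the query image look as if it had been taken by the map-building camera, so descriptor matching against the key frame database is not biased by differing intrinsics or distortion.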
10. The visual positioning system of claim 8, further comprising:
a second receiving module for receiving destination information sent by the mobile terminal;
a path planning module for calculating a planned path to the destination according to the calculated current global position information of the mobile terminal;
and a feedback module for feeding the planned path back to the mobile terminal.
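As one possible realization of the path planning module in claim 10, a breadth-first search over an occupancy grid yields a shortest cell path from the terminal's current position to the destination. The grid representation and API here are illustrative assumptions; the claim does not specify a planning algorithm.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search on a 2D occupancy grid (0 = free, 1 = occupied).
    Returns a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}          # also serves as the visited set
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []             # walk predecessors back to start
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None                   # destination unreachable

# A small map with a wall forcing a detour through the right column
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(plan_path(grid, (0, 0), (2, 0)))
```

The feedback module would then send this cell sequence (converted to global map coordinates) back to the mobile terminal.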
CN201911089425.3A 2019-11-08 2019-11-08 Visual positioning method and system Pending CN110796706A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911089425.3A CN110796706A (en) 2019-11-08 2019-11-08 Visual positioning method and system

Publications (1)

Publication Number Publication Date
CN110796706A (en) 2020-02-14

Family

ID=69443428

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911089425.3A Pending CN110796706A (en) 2019-11-08 2019-11-08 Visual positioning method and system

Country Status (1)

Country Link
CN (1) CN110796706A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112102410A (en) * 2020-09-24 2020-12-18 Sichuan Changhong Electric Co., Ltd. Mobile robot positioning method and device based on particle filter and vision assistance
WO2022078240A1 (en) * 2020-10-14 2022-04-21 PCI Technology Group Co., Ltd. Camera precise positioning method applied to electronic map, and processing terminal

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105469405A (en) * 2015-11-26 2016-04-06 Tsinghua University Simultaneous localization and map construction method based on visual ranging
CN108717710A (en) * 2018-05-18 2018-10-30 BOE Technology Group Co., Ltd. Positioning method, apparatus and system in an indoor environment
CN108748184A (en) * 2018-06-13 2018-11-06 Sichuan Changhong Electric Co., Ltd. Robot patrol method and robot device based on regional map marking
CN110189373A (en) * 2019-05-30 2019-08-30 Sichuan Changhong Electric Co., Ltd. Fast relocation method and device based on visual semantic information
CN110322500A (en) * 2019-06-28 2019-10-11 Guangdong Oppo Mobile Telecommunications Co., Ltd. Optimization method and device for simultaneous localization and mapping, medium, and electronic device

Similar Documents

Publication Publication Date Title
US10740975B2 (en) Mobile augmented reality system
KR102145109B1 (en) Methods and apparatuses for map generation and moving entity localization
US8872851B2 (en) Augmenting image data based on related 3D point cloud data
US9558559B2 (en) Method and apparatus for determining camera location information and/or camera pose information according to a global coordinate system
JP5736526B2 (en) Location search method and apparatus based on electronic map
US9129429B2 (en) Augmented reality on wireless mobile devices
CN108921894B (en) Object positioning method, device, equipment and computer readable storage medium
US10291898B2 (en) Method and apparatus for updating navigation map
CN111627114A (en) Indoor visual navigation method, device and system and electronic equipment
US20140300775A1 (en) Method and apparatus for determining camera location information and/or camera pose information according to a global coordinate system
JP2020091273A (en) Position update method, position and navigation route display method, vehicle and system
US10733777B2 (en) Annotation generation for an image network
KR101413011B1 (en) Augmented Reality System based on Location Coordinates and Augmented Reality Image Providing Method thereof
CN102831816B (en) Device for providing real-time scene graph
CN110796706A (en) Visual positioning method and system
CN109034214B (en) Method and apparatus for generating a mark
WO2018103544A1 (en) Method and device for presenting service object data in image
WO2023088127A1 (en) Indoor navigation method, server, apparatus and terminal
US20160086339A1 Method of providing cartographic information of an electrical component in a power network
KR102272757B1 (en) System and method for producing panoramic image and video
CN113763561B (en) POI data generation method and device, storage medium and electronic equipment
CN110276837B (en) Information processing method and electronic equipment
Chang et al. An Automatic Indoor Positioning Robot System Using Panorama Feature Matching.
CN117115244A (en) Cloud repositioning method, device and storage medium
JP2021038958A (en) Generation device, generation method, and generation program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200214