CN113554754A - Indoor positioning method based on computer vision - Google Patents


Info

Publication number
CN113554754A
CN113554754A (application CN202110873083.5A)
Authority
CN
China
Prior art keywords
indoor
camera
coordinates
terminal
computer vision
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110873083.5A
Other languages
Chinese (zh)
Inventor
李爽
蔚保国
李隽
赵茜
梁晓虎
祝瑞辉
李雅宁
张衡
黄璐
贾浩男
程建强
陈冲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CETC 54 Research Institute
Original Assignee
CETC 54 Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CETC 54 Research Institute
Priority to CN202110873083.5A
Publication of CN113554754A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/10 Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30244 Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Geometry (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Graphics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Remote Sensing (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an indoor positioning method based on computer vision, and relates to the technical field of computer-vision indoor positioning. The method first establishes an indoor image library; an image collected by an indoor terminal is compared against the library to find the most similar picture, feature points are extracted and matched, and the relationship between the three-dimensional coordinates of the feature points in the world coordinate system and their two-dimensional coordinates in the image coordinate system is established; the pose obtained from this relationship is the current pose of the terminal. The method is easy to implement and can effectively improve the accuracy of indoor navigation positioning.

Description

Indoor positioning method based on computer vision
Technical Field
The invention relates to the field of visual navigation, in particular to visual indoor positioning, and specifically to an indoor positioning method based on computer vision.
Background
High-precision positioning in indoor environments has long been a difficult technical problem. Limited by multipath effects and the complex, changeable indoor environment, traditional indoor positioning methods suffer from one or more of the following problems and cannot achieve high-precision positioning:
positioning that directly solves pseudolite data is affected by multipath, so positioning accuracy cannot be guaranteed;
positioning methods based on Wi-Fi and Bluetooth signal-strength values are strongly affected by changes in the indoor environment, so positioning accuracy cannot be guaranteed;
signal-fingerprint positioning methods require a long time to build the fingerprint database, and the fingerprints vary over time.
Positioning methods based on computer vision have the advantages of short positioning time and accurate positioning results, and have good application prospects inside buildings; however, the prior art contains no such attempt.
Disclosure of Invention
In view of the above, the invention provides an indoor positioning method based on computer vision. It realizes positioning through deep-learning-based image matching and the solution of the n-point perspective projection problem, effectively addresses the poor positioning performance of the prior art in complex indoor environments, and improves the accuracy of indoor navigation positioning.
To achieve this purpose, the invention adopts the following technical solution:
an indoor positioning method based on computer vision comprises the following steps:
(1) collecting and storing indoor RGB (red, green and blue) images and depth images, recording the pose of the camera when the images are collected, and establishing an indoor map library T_MN;
(2) Inputting the RGB images stored in the map library into a deep learning network model as training samples, training the network model, and storing network model parameters when the loss function value is not reduced any more;
(3) when a terminal enters a room, downloading parameters of a deep learning network model, shooting an indoor photo by using a terminal camera, identifying a matching picture most similar to the photo shot by the terminal from a map library by using the deep learning network, and extracting a corresponding depth image and a camera pose;
(4) extracting characteristic points of the terminal shot picture and the matched picture, matching the characteristic points and calculating to obtain coordinates of the matched points in a world coordinate system;
(5) according to the coordinates of the matching points in the world coordinate system and the image coordinates, solving the pose of the terminal in the world coordinate system when the terminal takes a photo by using an n-point perspective projection model solving method, converting the coordinates into a real position according to an indoor map and displaying the real position on the map;
and completing indoor positioning based on computer vision.
Further, the specific manner of step (1) is as follows: the indoor space is divided into M×N grids according to its area, the plane coordinates of the center point of each grid are acquired with a laser range finder, and an RGB-D camera is erected at the center of each grid at a height of H meters above the ground and kept parallel to the ground; RGB images and depth images are collected by the camera and the camera pose at collection time is recorded, so that an indoor map library of size M×N is established.
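As an illustration of the map-library structure described above, the following is a minimal Python sketch: one record per grid cell holding the RGB image, the depth image and the recorded camera pose. The MapEntry and build_map_library names, the field types, and the grid_records input format are assumptions introduced here for illustration, not part of the patent.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class MapEntry:
    rgb: np.ndarray      # H x W x 3 color image collected at the grid center
    depth: np.ndarray    # H x W depth image (meters)
    pose_R: np.ndarray   # 3 x 3 camera-to-world rotation recorded at collection time
    pose_t: np.ndarray   # 3-vector camera position in world coordinates

def build_map_library(grid_records):
    """Assemble the M x N indoor map library T_MN from surveyed grid cells.
    `grid_records` is assumed to be an iterable of
    (m, n, rgb, depth, pose_R, pose_t) tuples collected with the RGB-D
    camera placed at each grid center at height H above the ground."""
    T = {}
    for m, n, rgb, depth, pose_R, pose_t in grid_records:
        T[(m, n)] = MapEntry(rgb, depth, np.asarray(pose_R), np.asarray(pose_t))
    return T
```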
Further, the deep learning network model is combined with LBP (local binary pattern) features to perform scene recognition; when the network model is trained in step (2), LBP features are extracted using the following formulas:
LBP(x_c, y_c) = Σ_{n=0}^{N-1} s(i_n - i_c) · 2^n

s(x) = 1 if x ≥ 0, and s(x) = 0 otherwise

where (x_c, y_c) are the coordinates of the central pixel, N = 8, i_c and i_n are the gray values of the central pixel and of its neighborhood pixels respectively, and s(·) is the sign function;

the purpose of the training process is to learn the parameter values of each layer of the network so as to fit the given training data; a log-likelihood function is established and its value on the training set is maximized, yielding the network layer parameters; assuming the training set contains N samples, the log-likelihood function is:

θ* = argmax_θ Σ_{i=1}^{N} ln P(v^(i) | θ)

where P(v^(i) | θ) represents the joint distribution of the visible and hidden units, argmax returns the argument at which the function attains its maximum, and θ is the network parameter.
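The following is a minimal Python sketch of the 8-neighborhood LBP operator defined by the two formulas above. The clockwise neighbor ordering and the zero-filled border handling are assumptions, since the text does not specify them.

```python
import numpy as np

def lbp_8_neighborhood(gray):
    """Compute LBP(x_c, y_c) = sum_{n=0}^{7} s(i_n - i_c) * 2^n for every
    interior pixel of a 2D grayscale array, with s(x) = 1 if x >= 0 else 0.
    Border pixels are left as zero."""
    h, w = gray.shape
    out = np.zeros((h, w), dtype=np.uint8)
    # 8-neighborhood offsets (dy, dx); the exact ordering used in the
    # patent is not specified, so a clockwise order is assumed here.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            center = gray[y, x]
            code = 0
            for n, (dy, dx) in enumerate(offsets):
                if gray[y + dy, x + dx] >= center:  # s(i_n - i_c) = 1
                    code |= (1 << n)
            out[y, x] = code
    return out
```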
Further, the specific manner of step (4) is as follows:
SURF feature points are respectively extracted from the picture taken by the terminal and from the matched picture, and are matched;
mismatched points are eliminated using the random sample consensus (RANSAC) method;
the pixel coordinates of the matched points are extracted, the depths of the corresponding points are looked up in the depth image, and the coordinates of the matched points in the world coordinate system are computed from the camera imaging model and the camera pose.
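A minimal OpenCV-based sketch of this step is given below. It assumes the opencv-contrib build (for cv2.xfeatures2d.SURF_create), uses Lowe's ratio test plus a homography-based RANSAC check for outlier rejection, and back-projects the matched database pixels through a pinhole model with intrinsics K and a stored camera-to-world pose (R_db, t_db); the function name, the ratio threshold and the RANSAC parameters are assumptions, not values from the patent.

```python
import cv2
import numpy as np

def match_and_lift_to_world(img_query, img_db, depth_db, K, R_db, t_db):
    """SURF matching + RANSAC outlier rejection, then lifting the matched
    database pixels to world coordinates using the depth image and the
    recorded camera pose of the database view (camera-to-world assumed)."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp_q, des_q = surf.detectAndCompute(img_query, None)
    kp_d, des_d = surf.detectAndCompute(img_db, None)

    # Nearest-neighbour matching with Lowe's ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des_q, des_d, k=2)
    good = [m for m, n in knn if m.distance < 0.7 * n.distance]

    pts_q = np.float32([kp_q[m.queryIdx].pt for m in good])
    pts_d = np.float32([kp_d[m.trainIdx].pt for m in good])

    # Random sample consensus (RANSAC) to discard mismatched points.
    _, mask = cv2.findHomography(pts_q, pts_d, cv2.RANSAC, 5.0)
    inliers = mask.ravel().astype(bool)

    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    world_pts, image_pts = [], []
    for (u_q, v_q), (u_d, v_d) in zip(pts_q[inliers], pts_d[inliers]):
        z = depth_db[int(round(v_d)), int(round(u_d))]
        if z <= 0:  # no valid depth at this pixel
            continue
        # Pinhole back-projection into the database camera frame ...
        p_cam = np.array([(u_d - cx) * z / fx, (v_d - cy) * z / fy, z])
        # ... and transformation into the world frame via the stored pose.
        world_pts.append(R_db @ p_cam + t_db)
        image_pts.append([u_q, v_q])
    return np.array(world_pts), np.array(image_pts)
```

The returned 3D world points and the corresponding 2D pixels of the terminal photo are exactly the 3D/2D correspondences consumed by step (5).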
Compared with the prior art, the invention has the following beneficial effects:
1. The method establishes a map library by collecting RGB and depth images at different indoor positions and orientations, retrieves similar pictures with a deep-learning-network-based method, and achieves high-precision indoor navigation positioning by constructing and solving an n-point perspective projection model equation for the camera pose.
2. The invention adopts deep-learning-based image matching and the solution of the n-point perspective projection problem to realize indoor positioning; compared with existing indoor positioning methods, the positioning accuracy is greatly improved.
3. The method effectively addresses the poor positioning performance of the prior art in complex indoor environments, improves the accuracy of indoor navigation positioning, and provides a new idea for solving the problem of high-precision indoor positioning.
Drawings
FIG. 1 is a flow chart of a method according to an embodiment of the present invention.
Detailed Description
The present invention will be further described with reference to the accompanying drawings.
An indoor positioning method based on computer vision comprises the following steps:
(1) dividing indoor space into planar grids of MxN, acquiring planar coordinates of central point of each grid by using a laser range finder, keeping the height of a camera from the ground by H meters by using an RGB-D camera at the center of each grid and keeping the camera parallel to the ground, collecting RGB images and depth images, storing, recording the pose of the camera during picture collection, and establishing an indoor map library T of MxN sizeMN
(2) Inputting the RGB images stored in the map library into the constructed deep learning network model as training samples, training the network, and storing the network model parameters when the loss function value no longer decreases;
(3) downloading parameters of a deep learning network model after any terminal enters a room, shooting indoor pictures by using a terminal camera, identifying the most similar pictures according to the deep learning network, and extracting a corresponding depth map and a corresponding camera pose;
(4) and respectively extracting SURF characteristic points of the terminal shot picture and the picture output by the network model, matching and removing the error matching points, extracting pixel coordinates of the matching points, searching the depth of the corresponding points in the depth map, and calculating to obtain the coordinates of the matching points in a world coordinate system according to the camera imaging model and the camera pose.
(5) After the coordinates of the matching points in the world coordinate system and their image coordinates are obtained, the pose of the terminal in the world coordinate system at the moment the photo was taken can be solved with an n-point perspective projection method, and the solved coordinates are then converted into the real position according to the indoor map and displayed on the map;
and completing indoor positioning based on computer vision.
The network model training in step (2) specifically comprises LBP feature extraction, DBN training, and so on. The LBP operator is formulated as:
LBP(x_c, y_c) = Σ_{n=0}^{N-1} s(i_n - i_c) · 2^n, with s(x) = 1 if x ≥ 0 and s(x) = 0 otherwise

where (x_c, y_c) are the coordinates of the central pixel; N = 8; i_c and i_n are the gray values of the central pixel and of its neighborhood pixels respectively; and s(·) is the sign function.

The purpose of the training process is to learn the parameter values of each layer of the RBM network to fit the given training data. Specifically, a log-likelihood function is established and its value on the training set is maximized, which yields the network layer parameters. Assuming the training set contains N samples, the log-likelihood function is:

θ* = argmax_θ Σ_{i=1}^{N} ln P(v^(i) | θ)

where P(v^(i) | θ) represents the joint distribution of the visible and hidden units, θ is the network parameter, and argmax is the standard arg-max function from computing and mathematics, which is not described further here.
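The log-likelihood above cannot be maximized in closed form for an RBM; in practice it is usually maximized approximately with contrastive divergence. The following is a generic CD-1 sketch for a binary RBM layer; the patent does not describe the optimizer, so the update rule, learning rate and variable names here are assumptions.

```python
import numpy as np

def cd1_step(v0, W, b, c, lr=0.01, rng=np.random.default_rng(0)):
    """One contrastive-divergence (CD-1) update for a binary RBM layer.
    v0: batch of visible vectors (B x V); W: V x H weight matrix;
    b: visible bias (V,); c: hidden bias (H,). Returns updated parameters."""
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

    # Positive phase: hidden activations and samples given the data.
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)

    # Negative phase: one Gibbs step back to the visible layer and up again.
    pv1 = sigmoid(h0 @ W.T + b)
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + c)

    # Approximate gradient of the log-likelihood and parameter update.
    batch = v0.shape[0]
    W = W + lr * (v0.T @ ph0 - v1.T @ ph1) / batch
    b = b + lr * (v0 - v1).mean(axis=0)
    c = c + lr * (ph0 - ph1).mean(axis=0)
    return W, b, c
```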
In step (5), the pose of the terminal in the world coordinate system at the moment the photo is taken is solved via the n-point perspective projection problem; the specific procedure is as follows:
Assuming a calibrated camera (i.e. the focal length, optical center position and distortion parameters of the camera are known), given spatial reference points P_i (i = 1, ..., n) in the world coordinate system and the corresponding reference points v_i (i = 1, ..., n) in the camera coordinate system, the conversion equation between the two coordinate systems obtained from the 3D/2D reference-point correspondences is:
λ_i v_i = R P_i + t    (3)
where λ_i is the depth of the point v_i, v_i satisfies ||v_i|| = 1, R is a rotation matrix, and t is a translation vector.
The projection of the spatial reference point P_i in the image coordinate system is p_i' = [x' y']^T. There is a certain uncertainty in the observation of the projection points, and this uncertainty is described by a two-dimensional covariance matrix. The calculation formula is shown as (4):
[Equation (4): two-dimensional covariance matrix of the projection-point observation p_i' (not reproduced in the text)]
converting points in the image coordinate system into points in the camera coordinate system through a coordinate system transformation matrix A:
[Equation: transformation of image-coordinate points into camera-coordinate points via the matrix A (not reproduced in the text)]
where J_A is the Jacobian matrix of the coordinate transformation A; the uncertainty of the point p in the camera coordinate system is therefore expressed as:
Σ_p = J_A Σ_{p'} J_A^T
the rank of the covariance matrix is 2, and the covariance matrix is a singular matrix and is irreversible. Normalizing the coordinates of the points into a vector:
Figure BDA0003189397800000054
where its covariance matrix is derived from the following equation:
[Equation: covariance matrix of the normalized vector v (not reproduced in the text)]
Similarly, this covariance matrix is singular, and the correlation between its components violates the element-independence requirement of the maximum-likelihood solution, so a null space is introduced to represent the vector v:
[Equation: null-space representation of the vector v (not reproduced in the text)]
where the function f(·) is a singular value decomposition function and the associated Jacobian matrix describes the transformation into the vector v_r:
[Equation (not reproduced in the text)]
The deterministic transformation from the vector v to the vector v_r is expressed as:
[Equation: transformation from v to v_r (not reproduced in the text)]
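The covariance bookkeeping of equation (4) onward can be illustrated with a first-order (Jacobian) propagation sketch: a pixel observation with isotropic noise σ is mapped to a normalized bearing vector, and its 2D covariance is pushed through each transformation. The intrinsic matrix K, the isotropic noise model and the omission of the null-space reparameterization are assumptions made here for illustration; the patent's exact formulas are not reproduced in the text.

```python
import numpy as np

def pixel_to_bearing_with_cov(u, v, K, sigma_px=1.0):
    """Back-project a pixel observation (u, v) to a unit bearing vector and
    propagate its 2D covariance by first-order (Jacobian) propagation."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]

    # 2D covariance of the pixel observation (isotropic noise assumed here).
    Sigma_px = (sigma_px ** 2) * np.eye(2)

    # Pixel -> normalized camera coordinates p = [x, y, 1]^T.
    p = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])

    # Jacobian of the pixel-to-camera-plane mapping (3 x 2, rank 2), so the
    # propagated 3 x 3 covariance is singular, as noted in the text.
    J_A = np.array([[1.0 / fx, 0.0],
                    [0.0, 1.0 / fy],
                    [0.0, 0.0]])
    Sigma_p = J_A @ Sigma_px @ J_A.T

    # Normalize to a unit bearing vector v = p / ||p|| and propagate again;
    # the Jacobian of the normalization is (I - v v^T) / ||p||.
    norm = np.linalg.norm(p)
    v_dir = p / norm
    J_n = (np.eye(3) - np.outer(v_dir, v_dir)) / norm
    Sigma_v = J_n @ Sigma_p @ J_n.T
    return v_dir, Sigma_v
```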
the pose of the camera can be obtained by the following formula.
[Equation: formula from which the camera pose is obtained (not reproduced in the text)]
Expanding the above formula, where P_i = [p_x p_y p_z]^T, gives:
[Equation: expanded form of the constraint (not reproduced in the text)]
the above equation is expressed as a homogeneous system of linear equations:
B u = 0    (14)
where B is the coefficient matrix of the homogeneous equation system,
[Equation: structure of the coefficient matrix B and of the unknown vector u (not reproduced in the text)]
Solving for the camera rotation matrix and translation vector from the above equation requires at least 6 pairs of 2D/3D corresponding points, since the unknown vector u contains the nine entries of R and the three components of t while each point pair contributes two independent equations.
Adding the uncertainty description matrix of the reference points to the homogeneous linear equation gives:
[Equation: homogeneous linear system weighted by the reference-point uncertainty matrix C (not reproduced in the text)]
the final expression is:
B^T C B u = N u = 0    (16)
where u satisfies the constraint condition ||u|| = 1. Performing singular value decomposition on the coefficient matrix:
N = U D V^T    (17)
where the rotation matrix and the translation vector are recovered from the eigenvector solved from the above equation:
[Equation: extraction of the rotation matrix and the translation vector from the solution vector (not reproduced in the text)]
Since the translation vector obtained in this way represents only a direction, the scale factor is calculated by the following equation and the translation vector becomes:
[Equation: scale factor and scaled translation vector (not reproduced in the text)]
the rotation matrix obtained by singular value decomposition is:
Figure BDA0003189397800000077
R=URVR T (21)
The rotation matrix and the translation vector of the camera are thus obtained through the above calculation process.
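The linear solution described in equations (14) through (21) can be sketched as follows: the unknown depth λ_i is eliminated with a cross product, the 12 unknowns (the entries of R and t) are stacked into u, B u = 0 is solved by SVD, and the rotation estimate is re-orthogonalized. This sketch omits the covariance weighting B^T C B of equation (16); the function name and the scale-recovery details are assumptions.

```python
import numpy as np

def dlt_pose(P_world, v_cam):
    """Linear (DLT-style) pose estimate from >= 6 non-degenerate 3D/2D
    correspondences: P_world are points P_i in the world frame, v_cam are
    the corresponding unit bearing vectors v_i in the camera frame."""
    B = []
    for P, v in zip(P_world, v_cam):
        P = np.asarray(P, dtype=float)
        # Eliminate the unknown depth lambda_i via the cross product:
        # [v_i]_x (R P_i + t) = 0 gives three equations (rank 2) per point,
        # all linear in the entries of R and t.
        vx = np.array([[0.0, -v[2], v[1]],
                       [v[2], 0.0, -v[0]],
                       [-v[1], v[0], 0.0]])
        for row in vx:
            B.append(np.concatenate([row[0] * P, row[1] * P, row[2] * P, row]))
    B = np.asarray(B)

    # u = [r11..r33, tx, ty, tz] with ||u|| = 1 is the right singular vector
    # of B associated with the smallest singular value (cf. B u = 0).
    _, _, Vt = np.linalg.svd(B)
    u = Vt[-1]
    R_hat = u[:9].reshape(3, 3)
    t_hat = u[9:]

    # The solution is defined up to scale and sign; det(R_hat) = s^3 for a
    # scaled rotation, so dividing by its cube root restores both.
    scale = np.cbrt(np.linalg.det(R_hat))
    R_hat, t_hat = R_hat / scale, t_hat / scale

    # Re-orthogonalize the rotation via SVD, analogous to R = U_R V_R^T.
    U_R, _, V_Rt = np.linalg.svd(R_hat)
    R = U_R @ V_Rt
    if np.linalg.det(R) < 0:
        R = U_R @ np.diag([1.0, 1.0, -1.0]) @ V_Rt
    return R, t_hat
```

In practice the same n-point perspective projection problem can also be handed to an off-the-shelf routine such as OpenCV's cv2.solvePnP, which takes these 3D/2D correspondences together with the camera intrinsics.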
The following is a more specific example:
as shown in fig. 1, an indoor positioning method based on computer vision includes the following steps:
step 1: establishing an indoor map library, dividing an indoor space into M multiplied by N grids, acquiring a plane coordinate of a central point of each grid by adopting a laser range finder, utilizing an RGB-D camera at the center of each grid to enable the height of the camera to be H meters away from the ground and keep the camera parallel to the ground, acquiring RGB images and depth images for storage, recording the pose of the camera during image acquisition, and establishing an M multiplied by N indoor map library TMN
Step 2: inputting the RGB images stored in the map library into the constructed deep learning network model as training samples, training the network, and storing the network model parameters when the loss function value no longer decreases;
the network training process is described as: the file name and the corresponding label of each image in the map library are obtained, a training model (comprising initialization parameters, parameters such as convolution, pooling layer and the like and a network) is defined, then training is started, when the numerical value of the loss function is not reduced obviously any more, the classification accuracy of the network model parameters is optimal, and the network parameters are stored.
Step 3: after any terminal enters the room, the parameters of the deep learning network model are downloaded via WiFi, an indoor picture is taken with the terminal camera, the most similar picture is identified by the deep learning network, and the corresponding depth map and camera pose are extracted;
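A minimal sketch of this retrieval step is shown below: the trained network is assumed to produce a fixed-length descriptor for each map-library image and for the terminal photo, and the most similar library image is found by cosine similarity. The descriptor definition and the similarity measure are assumptions; the patent only states that the most similar picture is identified by the deep learning network.

```python
import numpy as np

def retrieve_most_similar(query_descriptor, library_descriptors):
    """Return the index of the map-library image whose descriptor is most
    similar (by cosine similarity) to the descriptor of the terminal photo.
    library_descriptors: array of shape (num_images, descriptor_dim)."""
    q = query_descriptor / (np.linalg.norm(query_descriptor) + 1e-12)
    lib = library_descriptors / (np.linalg.norm(library_descriptors, axis=1,
                                                keepdims=True) + 1e-12)
    similarity = lib @ q  # cosine similarity against every library image
    return int(np.argmax(similarity))
```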
Step 4: SURF feature points are respectively extracted from the picture taken by the terminal and from the picture output by the network model and matched, mismatched points are removed, the pixel coordinates of the matched points are extracted, the depths of the corresponding points are looked up in the depth map, and the coordinates of the matched points in the world coordinate system are computed from the camera imaging model and the camera pose.
Step 5: after the coordinates of the matching points in the world coordinate system and their image coordinates are obtained, the pose of the terminal in the world coordinate system at the moment the photo was taken is solved with an n-point perspective projection method; the result is converted into true longitude and latitude coordinates according to the longitude and latitude corresponding to the reference ground truth, and the true longitude and latitude coordinates are displayed on the indoor map.
And completing indoor positioning based on computer vision.

Claims (4)

1. An indoor positioning method based on computer vision is characterized by comprising the following steps:
(1) collecting and storing indoor RGB (red, green and blue) images and depth images, recording the pose of the camera when the images are collected, and establishing an indoor map library T_MN;
(2) Inputting the RGB images stored in the map library into a deep learning network model as training samples, training the network model, and storing network model parameters when the loss function value is not reduced any more;
(3) when a terminal enters a room, downloading parameters of a deep learning network model, shooting an indoor photo by using a terminal camera, identifying a matching picture most similar to the photo shot by the terminal from a map library by using the deep learning network, and extracting a corresponding depth image and a camera pose;
(4) extracting characteristic points of the terminal shot picture and the matched picture, matching the characteristic points and calculating to obtain coordinates of the matched points in a world coordinate system;
(5) according to the coordinates of the matching points in the world coordinate system and the image coordinates, solving the pose of the terminal in the world coordinate system when the terminal takes a photo by using an n-point perspective projection model solving method, converting the coordinates into a real position according to an indoor map and displaying the real position on the map;
and completing indoor positioning based on computer vision.
2. The indoor positioning method based on computer vision according to claim 1, characterized in that the specific manner of step (1) is: the indoor space is divided into M×N grids according to its area, the plane coordinates of the center point of each grid are acquired with a laser range finder, and an RGB-D camera is erected at the center of each grid at a height of H meters above the ground and kept parallel to the ground; RGB images and depth images are collected by the camera and the camera pose at collection time is recorded, so that an indoor map library of size M×N is established.
3. The computer vision-based indoor positioning method of claim 1, wherein the deep learning network model combines LBP features for scene recognition; when the network model is trained in the step (2), LBP feature extraction is carried out by using the following formula:
LBP(x_c, y_c) = Σ_{n=0}^{N-1} s(i_n - i_c) · 2^n

s(x) = 1 if x ≥ 0, and s(x) = 0 otherwise

where (x_c, y_c) are the coordinates of the central pixel, N = 8, i_c and i_n are the gray values of the central pixel and of its neighborhood pixels respectively, and s(·) is the sign function;

the purpose of the training process is to learn the parameter values of each layer of the network so as to fit the given training data; a log-likelihood function is established and its value on the training set is maximized, yielding the network layer parameters; assuming the training set contains N samples, the log-likelihood function is:

θ* = argmax_θ Σ_{i=1}^{N} ln P(v^(i) | θ)

where P(v^(i) | θ) represents the joint distribution of the visible and hidden units, argmax returns the argument at which the function attains its maximum, and θ is the network parameter.
4. The indoor positioning method based on computer vision as claimed in claim 1, wherein the specific manner of step (4) is as follows:
respectively extracting SURF characteristic points of the terminal shot picture and the matched picture and matching;
eliminating mismatched points by using a random sample consensus method;
extracting pixel coordinates of the matching points, searching the depth of the corresponding points in the depth image, and calculating to obtain the coordinates of the matching points in a world coordinate system according to the camera imaging model and the camera pose.
CN202110873083.5A 2021-07-30 2021-07-30 Indoor positioning method based on computer vision Pending CN113554754A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110873083.5A CN113554754A (en) 2021-07-30 2021-07-30 Indoor positioning method based on computer vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110873083.5A CN113554754A (en) 2021-07-30 2021-07-30 Indoor positioning method based on computer vision

Publications (1)

Publication Number Publication Date
CN113554754A true CN113554754A (en) 2021-10-26

Family

ID=78133417

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110873083.5A Pending CN113554754A (en) 2021-07-30 2021-07-30 Indoor positioning method based on computer vision

Country Status (1)

Country Link
CN (1) CN113554754A (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109431381A (en) * 2018-10-29 2019-03-08 北京石头世纪科技有限公司 Localization method and device, electronic equipment, the storage medium of robot
CN109671119A (en) * 2018-11-07 2019-04-23 中国科学院光电研究院 A kind of indoor orientation method and device based on SLAM
CN110095752A (en) * 2019-05-07 2019-08-06 百度在线网络技术(北京)有限公司 Localization method, device, equipment and medium
CN110136175A (en) * 2019-05-21 2019-08-16 杭州电子科技大学 A kind of indoor typical scene matching locating method neural network based

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
李爽 (Li Shuang): "基于视觉的室内定位算法研究" [Research on Vision-Based Indoor Positioning Algorithms], 《万方数据库》 (Wanfang Data) *
胡婷婷 (Hu Tingting): "基于深度学习的移动机器人重定位算法研究" [Research on Deep-Learning-Based Relocalization Algorithms for Mobile Robots], 《CNKI》 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023082797A1 (en) * 2021-11-09 2023-05-19 Oppo广东移动通信有限公司 Positioning method, positioning apparatus, storage medium, and electronic device

Similar Documents

Publication Publication Date Title
CN110163064B (en) Method and device for identifying road marker and storage medium
CN106529538A (en) Method and device for positioning aircraft
CN110009674B (en) Monocular image depth of field real-time calculation method based on unsupervised depth learning
CN106373088B (en) The quick joining method of low Duplication aerial image is tilted greatly
CN111028358B (en) Indoor environment augmented reality display method and device and terminal equipment
CN104268935A (en) Feature-based airborne laser point cloud and image data fusion system and method
CN108171715B (en) Image segmentation method and device
CN110322507B (en) Depth reprojection and space consistency feature matching based method
CN112613397B (en) Method for constructing target recognition training sample set of multi-view optical satellite remote sensing image
CN112946679B (en) Unmanned aerial vehicle mapping jelly effect detection method and system based on artificial intelligence
CN111144349A (en) Indoor visual relocation method and system
CN112132900A (en) Visual repositioning method and system
CN115471748A (en) Monocular vision SLAM method oriented to dynamic environment
CN103679740A (en) ROI (Region of Interest) extraction method of ground target of unmanned aerial vehicle
CN112884841B (en) Binocular vision positioning method based on semantic target
CN114708309A (en) Vision indoor positioning method and system based on building plan prior information
CN113554754A (en) Indoor positioning method based on computer vision
CN115330876B (en) Target template graph matching and positioning method based on twin network and central position estimation
CN111735447A (en) Satellite-sensitive-simulation type indoor relative pose measurement system and working method thereof
CN116758419A (en) Multi-scale target detection method, device and equipment for remote sensing image
Billy et al. Adaptive SLAM with synthetic stereo dataset generation for real-time dense 3D reconstruction
CN114199250A (en) Scene matching navigation method and device based on convolutional neural network
CN114387532A (en) Boundary identification method and device, terminal, electronic equipment and unmanned equipment
CN116188586B (en) Positioning system and method based on light distribution
Wang et al. Stereo Rectification Based on Epipolar Constrained Neural Network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20211026