CN105469405A - Visual ranging-based simultaneous localization and map construction method - Google Patents

Visual ranging-based simultaneous localization and map construction method

Info

Publication number
CN105469405A
CN105469405A (application CN201510845988.6A)
Authority
CN
China
Prior art keywords
feature point
binocular image
coordinate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510845988.6A
Other languages
Chinese (zh)
Other versions
CN105469405B (en)
Inventor
谷梦媛
郭开元
汪玉
王文强
杨华中
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Chaoxing Future Technology Co., Ltd
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN201510845988.6A priority Critical patent/CN105469405B/en
Publication of CN105469405A publication Critical patent/CN105469405A/en
Application granted granted Critical
Publication of CN105469405B publication Critical patent/CN105469405B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T2207/10 — Image acquisition modality
    • G06T2207/10004 — Still image; Photographic image

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a visual ranging-based simultaneous localization and map construction method. The method comprises the following steps: a binocular image is acquired and rectified to obtain a distortion-free binocular image; feature extraction is performed on the distortion-free binocular image to generate feature point descriptors; the feature point matching relations of the binocular image are established; the horizontal disparity of the matched feature points is obtained from the matching relations, and the real spatial depth is calculated from this disparity and the parameters of the binocular image capture system; the feature points of the current frame are matched against the feature points in the world map; wrongly matched feature points are removed, leaving the successfully matched feature points; the transformation matrix between the coordinates of the successfully matched feature points in the world coordinate system and their three-dimensional coordinates in the current reference frame is calculated, and a pose-change estimate of the binocular image capture system relative to its initial position is obtained from this transformation matrix; and the world map is built and updated. The visual ranging-based simultaneous localization and map construction method of the invention has low computational complexity, centimeter-level positioning accuracy, and unbiased position estimation.

Description

Simultaneous localization and mapping method based on visual ranging
Technical field
The present invention relates to the field of computer and electronic information technology, and in particular to a simultaneous localization and mapping method based on visual ranging.
Background technology
In recent years, mobile working platforms such as unmanned aerial vehicles and mobile robots have become a focus of research. These devices are highly maneuverable and are widely used in scenarios such as disaster relief and geological exploration. The key technologies of an autonomous robot navigation system include building a three-dimensional map of the spatial environment, self-localization, route planning, and obstacle avoidance. Among these, constructing the three-dimensional map of the environment and self-localization are the core of the problem; route planning and obstacle avoidance rely on the results of the vehicle's cognition of the environment and of its localization.
Outdoor unmanned aerial vehicles already have mature navigation systems, such as GPS and inertial navigation. Indoors, however, complex enclosed environments shield and interfere with wireless signals, so satellite-based positioning systems fail; meanwhile, the cumulative error of inertial navigation systems and their sensitivity to initial values make them unsuitable for high-precision indoor positioning. Active sensors such as laser scanners and sonar can measure one-dimensional distance and deflection angle for position estimation, but they are usually heavy, expensive, and power-hungry, and so are unsuitable for small mobile platforms; in addition, their single data type and low data rate limit improvements in indoor positioning accuracy.
As the price of vision sensors has gradually fallen in recent years, they have been widely applied to visual navigation tasks. Visual information is the most intuitive means of localization for living beings, and vision sensors provide much richer data: they support not only estimation of the robot's own position and attitude, but also cognition of the surrounding space. Correct and dense spatial cognition can be applied effectively to related problems such as route planning and obstacle avoidance. In particular, a binocular camera system can recover real depth and achieve localization without any prior information about the environment, which extends the range of application of mobile platforms in unknown environments.
Visual ranging (visual odometry) is a basic algorithm in the field of robot navigation. Its main idea is to obtain inter-frame motion information by computing the coordinate transformation between the same feature points in successive frames, and to estimate the robot's pose relative to its initial position by accumulating these transformations frame by frame. Practical experiments show, however, that although localization based on visual ranging offers a low-complexity pose estimation method, its frame-by-frame accumulation causes the position estimate to drift over time, i.e., it produces cumulative error.
Simultaneous Localization and Mapping (SLAM) has been a research hotspot in the field of robot navigation in recent years. It refers to a robot moving in an unknown environment recovering its own trajectory from sensor measurements while building an incremental map of the space. In this process the robot's own position and the spatial landmarks carry a dual uncertainty, which is handled by establishing a probabilistic model. SLAM based on the extended Kalman filter (EKF) has been shown to be the best-converging SLAM algorithm to date, but its obvious defect is very high computational complexity, proportional to the square of the number of feature points contained in the world map. This quadratic complexity limits the scale of the world map, and hence the number of features and the area of the environment, on the one hand, and constrains the running speed of the localization and navigation algorithm on the other. Given the low-payload, low-power character of indoor mobile robots, it is difficult for a traditional SLAM system to reach real-time processing speed on a constrained mobile platform.
Summary of the invention
The present invention aims to solve, at least to some extent, one of the technical problems in the above related art.
To this end, the object of the invention is to propose a simultaneous localization and mapping method based on visual ranging that navigates using binocular vision alone, without fusing other sensor data and without prior knowledge of the environment, and that has low computational complexity, centimeter-level positioning accuracy, and unbiased position estimation.
To achieve these goals, embodiments of the invention propose a simultaneous localization and mapping method based on visual ranging, comprising the following steps:
S1: synchronously capture a binocular image with a binocular image capture system, and rectify the binocular image to obtain a distortion-free binocular image;
S2: perform feature point extraction on the distortion-free binocular image, and generate feature point descriptors from the extracted feature points;
S3: establish the feature point matching relations of the binocular image according to the feature point descriptors;
S4: obtain the horizontal disparity of the matched feature points from the matching relations of the binocular image, and compute the real spatial depth of the matched feature points from this disparity and the parameters of the binocular image capture system;
S5: obtain the feature points of the current binocular frame, and match them against the feature points in the world map;
S6: analyze the matching results between the current-frame feature points and the world-map feature points to remove wrongly matched feature points, leaving the successfully matched feature points;
S7: compute the transformation matrix between the coordinates of the successfully matched feature points in the world coordinate system and their three-dimensional coordinates in the current reference frame, and obtain from this transformation matrix the pose-change estimate of the binocular image capture system relative to its initial position;
S8: on the first-frame computation, initialize the world map; on non-first-frame computations, use the transformation matrix to update the three-dimensional coordinates and descriptors of the world-map feature points successfully matched in the current frame, and add the feature points newly observed in the current frame, with their descriptors, to the world map.
In addition, the simultaneous localization and mapping method based on visual ranging according to the above embodiment of the invention may have the following additional technical features:
In some examples, S1 further comprises: extracting the imaging parameters of the binocular image capture system, and rectifying the binocular image according to these parameters to obtain a distortion-free binocular image.
In some examples, in S2, feature point extraction is performed on the distortion-free binocular image by the scale-invariant feature transform method.
In some examples, in S3, the matching relation between feature points of the binocular image is established by judging whether the angle between their descriptor vectors is below a specified threshold.
In some examples, the image capture system comprises a left camera and a right camera, and S4 further comprises: obtaining the pixel coordinates of the imaging points of each matched feature point in the left camera and in the right camera; computing the horizontal disparity between the left and right imaging points from the coordinates of the camera principal points; and obtaining the real spatial depth of the matched feature point from this horizontal disparity and the spatial geometry, where the real spatial depth is computed as:

Z_p = (B_L × f_C) / Disparity

where Z_p is the real spatial depth of the matched feature point, B_L is the baseline, f_C is the focal length, and Disparity is the horizontal disparity.
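As an illustrative sketch (not part of the patent text), the depth formula above can be written directly in Python; the baseline, focal length, and disparity values below are hypothetical, not taken from the patent:

```python
def stereo_depth(baseline_m: float, focal_px: float, disparity_px: float) -> float:
    """Z_p = (B_L * f_C) / Disparity for a rectified binocular rig."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return baseline_m * focal_px / disparity_px

# Hypothetical rig: 0.12 m baseline, 700 px focal length, 35 px disparity.
depth = stereo_depth(0.12, 700.0, 35.0)   # 2.4 m
```

Note that depth is inversely proportional to disparity, so depth resolution degrades for distant points, where the disparity approaches zero.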
In some examples, S5 further comprises: storing the descriptors of the current-frame feature points by rows in a first matrix, and the descriptors of the world-map feature points by rows in a second matrix; traversing all feature points in the world map to extract, for each current-frame feature point, the world-map feature point most similar to it; and building a mapping table from the current-frame feature points to the world-map feature points according to these most-similar points.
In some examples, the similarity of feature points is determined by the angle between their descriptor vectors; when the angle between the descriptor vectors is smallest, the similarity of the feature points is highest.
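For illustration (an assumption-laden sketch, not the patent's implementation), the angle-based similarity and nearest-descriptor search can be expressed as:

```python
import math

def descriptor_angle(d1, d2):
    """Angle (radians) between two feature-descriptor vectors;
    a smaller angle means higher similarity."""
    dot = sum(a * b for a, b in zip(d1, d2))
    n1 = math.sqrt(sum(a * a for a in d1))
    n2 = math.sqrt(sum(b * b for b in d2))
    cos_t = max(-1.0, min(1.0, dot / (n1 * n2)))  # clamp for float safety
    return math.acos(cos_t)

def best_match(query, map_descriptors):
    """Index of the map descriptor with the smallest angle to `query`."""
    return min(range(len(map_descriptors)),
               key=lambda j: descriptor_angle(query, map_descriptors[j]))
```

In practice the descriptors would be 128-dimensional SIFT vectors; short toy vectors are used here only to keep the sketch readable.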
In some examples, in S6, the matching results between the current-frame feature points and the world-map feature points are analyzed by the random sample consensus (RANSAC) method to remove wrongly matched feature points, leaving the successfully matched ones. Specifically:
S61: randomly choose m groups of matched feature points from the N groups of successful matches, set up the global matching equations, and from these compute the transformation matrix that maps the coordinates of the m matched feature points from the world coordinate system to the current camera reference frame;
S62: inverse-map the coordinates of the remaining N−m matched feature points from the current camera reference frame back to the world coordinate system using this transformation matrix, obtaining the inverse-mapped coordinates CoorTemp, and compute the Euclidean distance between CoorTemp and the actually observed coordinates CoorGlobal of each feature point in the world coordinate system;
S63: judge whether the Euclidean distance exceeds a predetermined threshold; if it does, classify the corresponding feature point as an outlier, otherwise classify it as an inlier, and count the number of inliers;
S64: iterate S61 to S63, recording the number of inliers and the coordinate transformation matrix of each iteration;
S65: choose the iteration with the largest number of inliers, take its inliers as the successfully matched feature points and its outliers as wrong matches, and remove the wrongly matched feature points.
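The loop S61–S65 can be sketched as follows. This is an illustrative simplification, not the patent's method: the model is reduced to a pure 2-D translation (minimal sample size 1) so the fit step stays trivial, whereas the patent fits a full rigid transform:

```python
import random

def ransac_translation(src, dst, iters=200, threshold=0.5, seed=0):
    """RANSAC sketch of steps S61-S65 for a pure-translation model:
    sample a minimal set, fit, count inliers by distance, keep the best."""
    rng = random.Random(seed)
    best_inliers, best_t = [], None
    for _ in range(iters):
        i = rng.randrange(len(src))                  # S61: minimal sample
        t = (dst[i][0] - src[i][0], dst[i][1] - src[i][1])
        inliers = []
        for j, (p, q) in enumerate(zip(src, dst)):   # S62-S63: score all points
            ex = q[0] - (p[0] + t[0])
            ey = q[1] - (p[1] + t[1])
            if (ex * ex + ey * ey) ** 0.5 <= threshold:
                inliers.append(j)
        if len(inliers) > len(best_inliers):         # S64-S65: keep best model
            best_inliers, best_t = inliers, t
    return best_t, best_inliers
```

With three consistent correspondences and one gross mismatch, the consensus set recovers the true translation and flags the mismatch as an outlier.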
In some examples, S7 further comprises: obtaining the coordinates CoorGlobal of the successfully matched feature points in the world coordinate system and their coordinates CoorCamera in the current image capture system reference frame; and computing the rotation matrix R and translation matrix T between the three-dimensional coordinates of the successfully matched feature points, from which the pose-change estimate of the binocular image capture system relative to its initial position is obtained, where R and T satisfy:

min_{R,T} Σ_{inliers} ‖ CoorCamera − T − R × CoorGlobal ‖².
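The least-squares problem above has a closed form. As an illustration only (an assumption of this sketch: the problem is reduced to two dimensions, where the optimal rotation angle follows from the centered cross- and dot-products; the patent's 3-D case is usually solved via SVD), it can be written as:

```python
import math

def fit_rigid_2d(global_pts, camera_pts):
    """Least-squares R, T with CoorCamera ~ R*CoorGlobal + T,
    i.e. minimising sum ||CoorCamera - T - R*CoorGlobal||^2 (2-D case)."""
    n = len(global_pts)
    gx = sum(p[0] for p in global_pts) / n           # centroid of world points
    gy = sum(p[1] for p in global_pts) / n
    cx = sum(p[0] for p in camera_pts) / n           # centroid of camera points
    cy = sum(p[1] for p in camera_pts) / n
    s_dot = s_cross = 0.0
    for (x, y), (u, v) in zip(global_pts, camera_pts):
        ax, ay = x - gx, y - gy                      # centred world point
        bx, by = u - cx, v - cy                      # centred camera point
        s_dot += ax * bx + ay * by
        s_cross += ax * by - ay * bx
    th = math.atan2(s_cross, s_dot)                  # optimal rotation angle
    R = ((math.cos(th), -math.sin(th)), (math.sin(th), math.cos(th)))
    T = (cx - (R[0][0] * gx + R[0][1] * gy),
         cy - (R[1][0] * gx + R[1][1] * gy))
    return R, T
```

Accumulating the (R, T) estimate of each frame yields the pose change relative to the initial position.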
In some examples, S8 further comprises: on the first-frame computation, initializing the world map, defining the image capture system reference frame of the first frame as the world coordinate system of this localization run, and adding the three-dimensional coordinates and descriptors of all first-frame feature points to the world map; on non-first-frame computations, inverse-mapping the three-dimensional coordinates CoorCamera of the feature points successfully matched in the current frame to world coordinates CoorGlobal′ according to the transformation matrix, with the conversion formula:

CoorGlobal′ = inv(R) × (CoorCamera − T);

and updating the coordinates of each successfully matched feature point in the world coordinate system to the mean of its repeated observations, where, when the feature point is observed for the (n+1)-th time, its coordinate update formula in the world map is:

CoorGlobal = (CoorGlobal × n + CoorGlobal′) / (n + 1);
The feature point descriptor of each successfully matched feature point is updated to the most recently observed descriptor, and the three-dimensional coordinates and descriptors of the feature points newly observed in the current frame are added to the world map.
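As an illustrative sketch of the running-average update above (the coordinate values are hypothetical), the per-axis incremental mean can be written as:

```python
def update_map_coord(coor_global, n, coor_global_new):
    """Incremental mean of S8: after the (n+1)-th observation,
    CoorGlobal <- (CoorGlobal*n + CoorGlobal') / (n + 1), per axis."""
    return tuple((c * n + cn) / (n + 1)
                 for c, cn in zip(coor_global, coor_global_new))

# Three noisy observations of the same landmark converge to their mean:
p = (1.0, 2.0, 3.0)                      # first observation initialises the entry
p = update_map_coord(p, 1, (1.2, 2.2, 3.2))
p = update_map_coord(p, 2, (0.7, 1.7, 2.7))
```

Averaging repeated observations is what keeps the world-map coordinates from inheriting the frame-by-frame drift of the raw measurements.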
According to the simultaneous localization and mapping method based on visual ranging of the embodiments of the invention, the binocular camera positioning system can recover the real depth of spatial feature points in an indoor environment without prior knowledge; using visual ranging for pose estimation effectively reduces the computational complexity of the simultaneous localization and mapping algorithm, improving both computing speed and world-map scale, so the method is suitable for real-time localization of mobile platforms and can operate in larger indoor spaces; at the same time, wrongly matched feature points are effectively eliminated by the random sample consensus method, avoiding the introduction of erroneous information into the pose estimate; in addition, the accumulation of measurement error is effectively eliminated by building and updating the world map, keeping indoor positioning accuracy at the centimeter level without drift over time; moreover, the constructed world map contains dense spatial feature information that can be used for subsequent obstacle avoidance and path planning.
Additional aspects and advantages of the invention will be given in part in the following description; in part they will become obvious from the description, or will be learned through practice of the invention.
Brief description of the drawings
The above and/or additional aspects and advantages of the invention will become obvious and easy to understand from the following description of the embodiments in conjunction with the accompanying drawings, in which:
Fig. 1 is a flowchart of the simultaneous localization and mapping method based on visual ranging according to an embodiment of the invention;
Fig. 2 is a schematic diagram of the spatial geometry of three-dimensional reconstruction according to an embodiment of the invention;
Fig. 3 is a flowchart of the random sample consensus method according to an embodiment of the invention;
Fig. 4 is a schematic diagram of the relation between the world coordinate system and the camera coordinate system according to an embodiment of the invention;
Fig. 5 is a schematic diagram of the indoor localization trajectory and world map according to an embodiment of the invention;
Fig. 6 is a schematic diagram of indoor unbiasedness test results according to an embodiment of the invention.
Detailed description of the embodiments
Embodiments of the invention are described in detail below; examples of the embodiments are shown in the drawings, where the same or similar labels throughout denote the same or similar elements, or elements with the same or similar functions. The embodiments described below with reference to the drawings are exemplary, intended only to explain the invention, and are not to be construed as limiting it.
The simultaneous localization and mapping method based on visual ranging according to the embodiments of the invention is described below with reference to the accompanying drawings.
Fig. 1 is a flowchart of the simultaneous localization and mapping method based on visual ranging according to an embodiment of the invention. As shown in Fig. 1, the method comprises the following steps:
Step S1: capture a binocular image with the binocular image capture system, and rectify the binocular image to obtain a distortion-free binocular image.
In a concrete example, the binocular image capture system is a binocular camera composed of two cameras placed in parallel at a certain distance; the distance between the two cameras is the baseline. In particular, the two cameras are triggered by the same signal, guaranteeing that the pictures are captured at the same instant. In this example, the binocular image capture system captures a photo stream at 30 frames per second while the mobile platform is in motion, serving as the input to the positioning system.
Further, a charge-coupled device (CCD) camera can produce lens-induced distortion during imaging, such as radial distortion and tangential distortion. In a binocular image capture system, each single camera has its own optical distortion, and the fact that the two camera viewing angles are not exactly identical also complicates subsequent image data processing. More specifically, a rotation and translation exist between the two camera reference frames; this relation makes the pixel coordinates of the same spatial feature point differ between the two cameras, adding to the computational complexity of recovering real depth by three-dimensional reconstruction. Therefore, in one embodiment of the invention, the imaging parameters of the binocular camera are extracted, for example with the binocular camera calibration toolbox provided by MATLAB, and the captured binocular images are rectified in advance according to these parameters, so that the rectified pictures have no optical distortion and the pixel coordinates of the same feature point in the two images share the same vertical coordinate, differing only in the horizontal direction; this yields the distortion-free binocular image.
Step S2: perform feature point extraction that is invariant to rotation, scaling, and illumination on the distortion-free binocular image, and generate feature point descriptors from the extracted feature points.
Specifically, this step performs feature point detection on each of the rectified binocular images, obtains the pixel coordinates of the feature points in the two-dimensional images, and extracts the feature point descriptors. Feature extraction methods may include the scale-invariant feature transform (SIFT), speeded-up robust features (SURF), and the like.
In one embodiment of the invention, the scale-invariant feature transform is used for feature point extraction from the distortion-free binocular image. SIFT is an algorithm that extracts feature points and establishes matching relations between them, and it remains relatively stable under image rotation, scaling, and brightness changes. Indoor scenes are near-field environments, so the scaling of spatial objects as the capture platform moves is fairly pronounced, and indoor illumination conditions are complex; the algorithm is therefore well suited to feature point extraction and matching in indoor binocular and inter-frame images, since the same feature point can be extracted, and a matching relation established, under different capture conditions and viewing angles. In SIFT, each feature point descriptor is a 128-dimensional vector, and two feature points are considered to match when the angle between their descriptor vectors is below a specified threshold. It should be noted that embodiments of the invention are not limited to SIFT for feature point extraction from the distortion-free binocular image; other feature point extraction methods can also be used, and SIFT is presented here only as an example.
Step S3: establish the feature point matching relations of the binocular image according to the feature point descriptors. This is the binocular matching step: feature points successfully matched here can have their spatial depth recovered in the three-dimensional reconstruction step, while feature points for which no match is established are eliminated in this step.
Further, the binocular matching step determines the matching relation between feature points by judging whether the angle between their descriptor vectors is below a specified threshold; its concrete operation is similar to the later step S5.
Specifically, suppose the descriptors of the feature points extracted from the left image are stored by rows in matrix L ([DscpL_1, DscpL_2, …, DscpL_m]^T), and the descriptors of the feature points in the right image in matrix R ([DscpR_1, DscpR_2, …, DscpR_n]^T). For each feature point in the left image, all feature points of the right image are traversed to find the most similar one, establishing the feature point mapping from the left image to the right image. Further, the measure of similarity is given by the angle between the descriptor vectors: the smaller the angle, the more similar the two vectors; the more similar the two feature point descriptors, the greater the matching confidence of the feature points.
Step S4: obtain the horizontal disparity of the matched feature points from the matching relations of the binocular image, and compute the real spatial depth of the matched feature points from this disparity and the parameters of the binocular image capture system (the binocular camera). The parameters of the binocular camera include, for example, its focal length, baseline, and principal point.
In one embodiment of the invention, the image capture system comprises a left camera and a right camera, and step S4 further comprises: obtaining the pixel coordinates of the imaging points of each matched feature point in the left camera and in the right camera; computing the horizontal disparity between the left and right imaging points from the coordinates of the camera principal points; and obtaining the real spatial depth of the matched feature point from this horizontal disparity and the spatial geometry, where the real spatial depth is computed as:

Z_p = (B_L × f_C) / Disparity

where Z_p is the real spatial depth of the matched feature point, B_L is the baseline, f_C is the focal length, and Disparity is the horizontal disparity.
Specifically, in past work, computing the real depth of feature points usually required some prior knowledge of the environment, for example an object of known depth in the scene, or markers inserted into the environment; this limits the range of application of such algorithms in unknown environments. In one embodiment of the invention, without knowing the depth of spatial objects in advance or interfering with the environment, three-dimensional reconstruction of the captured spatial feature points through the imaging characteristics of the binocular camera yields their three-dimensional spatial coordinates relative to the camera reference frame at the capture instant.
Further, Fig. 2 shows a schematic diagram (top view) of the spatial geometry of three-dimensional reconstruction. The camera reference frames of the rectified binocular camera are denoted (x_1, y_1, z_1) and (x_2, y_2, z_2), and they share the same imaging plane. A spatial point P images at P_L in the left camera and at P_R in the right camera. The binocular camera parameters, such as the focal length (f_C), baseline (B_L), and principal point, have been obtained during the calibration of step S1. In particular, for feature points matched across the binocular image, because the images have been rectified in advance, the imaging points P_L and P_R differ only in horizontal pixel value; their vertical pixel coordinates are identical. The real-space distance corresponding to the left and right imaging points, i.e., the horizontal disparity (Disparity) of the two imaging points, is computed from the camera imaging resolution and the coordinates of the principal points. Further, the real spatial depth Z_p of a matched feature point (such as point P) follows from the spatial geometry:

Z_p = (B_L × f_C) / Disparity.
Further, according to the camera pinhole imaging principle, the three-dimensional coordinates (X_p, Y_p, Z_p) of a feature point in the left camera reference frame (subsequent calculations take the left camera as the reference camera) can be computed from the real spatial depth Z_p, the pixel coordinates (x_p, y_p), and the camera focal length f_C, as:

X_p = x_p × Z_p / f_C,  Y_p = y_p × Z_p / f_C.
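For illustration (the focal length and pixel offsets below are hypothetical, not from the patent), the pinhole back-projection above can be written as:

```python
def backproject(x_p, y_p, z_p, f_c):
    """Pinhole back-projection: pixel offsets (x_p, y_p) from the principal
    point, depth Z_p, and focal length f_C give camera-frame (X_p, Y_p, Z_p)."""
    return (x_p * z_p / f_c, y_p * z_p / f_c, z_p)

# Hypothetical numbers: f_C = 700 px, a point 70 px right and 35 px below
# the principal point, at 2.4 m depth.
X, Y, Z = backproject(70.0, 35.0, 2.4, 700.0)
```

Note the pixel coordinates here are measured relative to the principal point, consistent with the formula above.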
Step S5: obtain the feature points of the current binocular frame, and match them against the feature points in the world map. In other words, this is the world-map matching step: matching relations are established between the feature points extracted in the current frame and the feature points contained in the world map.
In one embodiment of the invention, step S5 further comprises: storing the descriptors of the current-frame feature points by rows in a first matrix, and the descriptors of the world-map feature points by rows in a second matrix; traversing all feature points in the world map to extract, for each current-frame feature point, the world-map feature point most similar to it; and building a mapping table from the current-frame feature points to the world map according to these most-similar points.
Specifically, suppose the descriptors of the feature points extracted in the current frame are stored by rows in a first matrix A ([DscpC_1, DscpC_2, …, DscpC_m]^T), and the descriptors of all feature points contained in the world map in a second matrix B ([DscpG_1, DscpG_2, …, DscpG_n]^T). For each current-frame feature point, all feature points in the world map are traversed to find the most similar one, establishing the feature point mapping from the current frame to the world map. In particular, the similarity of feature points is determined by the angle between their descriptor vectors: the smaller the angle, the more similar the two vectors; when the angle between the descriptor vectors is smallest, the similarity of the feature points is highest.
Further, global-map matching comprises the following operations: multiply the second matrix B by the descriptor column vector DscpC_i of feature point i; the result is an n × 1 column vector whose n elements are the cosines of the angles between DscpC_i and DscpG_j (j = 1...n). The vector angles can then be obtained by the arccosine formula and stored in an n × 1 column vector Res_i, where element Res_i(l) (l = 1...n) is the angle between the descriptor of the i-th feature point extracted in the current frame and the descriptor of the l-th feature point in the global map.
Further, traverse the elements of Res_i, denoting the smallest element Res_i(l_1) and the second smallest Res_i(l_2), where l_1 and l_2 are the positions of the smallest and second smallest elements in Res_i (i.e., their positions among the feature points contained in the global map). The i-th feature point of the current frame is successfully matched to the l_1-th feature point of the global map if and only if the following condition holds:
Res_i(l_1) < 0.8 × Res_i(l_2).
Further, multiplying the global-map descriptor matrix B by each current-frame descriptor DscpC_i in turn can be accelerated as a matrix operation. In particular, the matrix product B × A^T computes the cosine matrix of the angles between all current-frame descriptors and all global-map descriptors in a single command. Further, the arccosine can be approximated, for example via its Maclaurin expansion, as a series of matrix multiplications and additions. Matrix libraries such as MKL (Intel Math Kernel Library) and cuBLAS (NVIDIA CUDA Basic Linear Algebra Subroutines) provide high-performance matrix computation on the CPU (Central Processing Unit) and GPU (Graphics Processing Unit). Since the global map is large, this method effectively improves the software-side efficiency of the global-map matching and binocular ranging steps, raises the frame rate of image processing, and makes real-time localization of the mobile platform possible.
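The matrix-multiplication matching with the 0.8 ratio test can be sketched in NumPy as follows. This is a hedged illustration, not the patent's MKL/cuBLAS implementation; the descriptor values are random stand-ins:

```python
import numpy as np

def match_to_map(A, B, ratio=0.8):
    """A: m x p current-frame descriptors (rows); B: n x p map descriptors.
    Returns {current index: map index} for matches passing the angle ratio test."""
    # Normalize rows so that B @ A.T holds cosines of descriptor angles.
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    cos = np.clip(B @ A.T, -1.0, 1.0)        # n x m cosine matrix, one multiply
    ang = np.arccos(cos)                     # vector angles; column i is Res_i
    matches = {}
    for i in range(A.shape[0]):
        order = np.argsort(ang[:, i])
        l1, l2 = order[0], order[1]
        if ang[l1, i] < ratio * ang[l2, i]:  # Res_i(l_1) < 0.8 * Res_i(l_2)
            matches[i] = int(l1)
    return matches

rng = np.random.default_rng(0)
B = rng.normal(size=(100, 64))                        # map of 100 descriptors
A = B[[3, 17, 42]] + 0.01 * rng.normal(size=(3, 64))  # noisy copies of 3 map entries
print(match_to_map(A, B))
```

The noisy copies should match back to map indices 3, 17, and 42, since their descriptor angles to the originals are far smaller than to any other map entry.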
Step S6: analyze the matching result between the feature points of the current binocular frame and the feature points in the global map, remove the incorrectly matched feature points, and retain the successfully matched feature points. In particular, global-map matching produces some incorrect matches: feature points with similar descriptors that do not belong to the same scene point. Incorrectly matched feature points introduce error into the pose estimate and make the localization result inaccurate. Therefore, in one embodiment of the invention, the random sample consensus (RANSAC) method is used to analyze the matching result between the feature points of the current binocular frame and those in the global map, and to remove the incorrect matches. Specifically, Fig. 3 shows the flowchart of the RANSAC method. RANSAC assumes that the coordinate transformation between the world-frame coordinates of correctly matched feature points (inliers) and their coordinates in the current camera reference frame can be described by a single model, while incorrectly matched feature points (outliers) do not fit this transformation model. Meanwhile, incorrect matches are anomalies of the measurement and constitute a small-probability event within the whole feature-point set.
Specifically, in one embodiment of the invention, removing incorrectly matched feature points by RANSAC comprises the following steps:
S61: randomly choose m groups of matched feature points from the N groups of successful matches, set up a global registration equation, and according to it compute the transformation matrix that maps the coordinates of the m groups of matched feature points from the world coordinate system to the current camera reference frame.
S62: map the coordinates, in the current camera reference frame, of the N − m groups of matched feature points not chosen in step S61 back to the world coordinate system by the inverse of the transformation matrix computed in step S61, obtaining the inverse-mapped coordinates CoorTemp; then compute the Euclidean distance between CoorTemp and the actually observed world-frame coordinates CoorGlobal of the feature points, called the matching cost.
S63: judge whether the Euclidean distance (matching cost) in step S62 exceeds a predetermined threshold. If it exceeds the threshold, the corresponding feature point is judged an outlier, i.e., it does not fit the coordinate transformation matrix of step S61; if it is below the threshold, the corresponding feature point is judged an inlier, and the number of inliers in this round is counted. The predetermined threshold can be measured by a preliminary experiment in the environment or set from experience, taking into account the process noise of the experimental environment and of the measurement.
S64: iterate steps S61 to S63 (i.e., execute them repeatedly), recording the number of inliers and the coordinate transformation matrix of each iteration;
S65: from the iterations, choose the one containing the largest number of inliers; treat the inliers of that iteration as successfully matched feature points and the outliers as incorrectly matched feature points, and remove the incorrectly matched feature points (outliers) from subsequent calculations.
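Steps S61 to S65 can be sketched as follows. This is a hedged illustration under simplifying assumptions: the "global registration equation" is solved here by a standard SVD rigid fit, which the patent does not specify, and the threshold, sample size, and iteration count are illustrative:

```python
import numpy as np

def fit_rigid(P, Q):
    """Least-squares R, T with Q ~ R @ P + T (standard SVD/Kabsch fit)."""
    cp, cq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    U, _, Vt = np.linalg.svd((Q - cq) @ (P - cp).T)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])  # guard against reflection
    R = U @ D @ Vt
    T = cq - R @ cp
    return R, T

def ransac_pose(world, cam, m=3, iters=100, thresh=0.05, seed=0):
    """world, cam: 3 x N matched coordinates. Returns best (R, T, inlier mask)."""
    rng = np.random.default_rng(seed)
    N = world.shape[1]
    best = (None, None, np.zeros(N, bool))
    for _ in range(iters):
        idx = rng.choice(N, size=m, replace=False)    # S61: random m groups
        R, T = fit_rigid(world[:, idx], cam[:, idx])
        back = R.T @ (cam - T)                        # S62: inverse-map to world
        cost = np.linalg.norm(back - world, axis=0)   # matching cost
        inl = cost < thresh                           # S63: threshold test
        if inl.sum() > best[2].sum():                 # S64/S65: keep best model
            best = (R, T, inl)
    return best

# Synthetic check: 30 points under a known motion, 5 corrupted as outliers.
rng = np.random.default_rng(1)
world = rng.uniform(-1, 1, size=(3, 30))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
T_true = np.array([[0.2], [0.1], [0.0]])
cam = R_true @ world + T_true
cam[:, :5] += 1.0                     # gross errors -> outliers
R, T, inliers = ransac_pose(world, cam)
print(inliers.sum())                  # expect 25 inliers recovered
```

With 100 iterations and a 25/30 inlier ratio, at least one all-inlier sample is drawn with overwhelming probability, so the recovered rotation agrees with the true one.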
Step S7: compute the transformation matrix between the coordinates of the successfully matched feature points in the world coordinate system and their three-dimensional coordinates in the current reference frame, and from this transformation matrix obtain the estimated pose change of the binocular image capture system (binocular camera) relative to its initial position.
Specifically, Fig. 4 shows the relationship between the world coordinate system and the camera coordinate system. The world coordinate system is defined as the camera coordinate system at the moment the first frame is captured, and does not change in subsequent calculations. As the mobile platform carrying the binocular image capture system moves, the camera reference frame rotates and translates relative to the world coordinate system; the goal of pose estimation is to obtain the transformation between the two coordinate systems, i.e., the motion of the mobile platform relative to its initial position. Further, through steps S1 to S6, the coordinates of the successfully matched feature points in the world coordinate system and in the camera reference frame have been obtained. By geometry, the transformation between the two coordinate systems and the transformation of the same point's coordinates between them are inverse processes of each other; this is the basis of pose estimation.
Further, the coordinates CoorGlobal of the successfully matched feature points in the world coordinate system and their coordinates CoorCamera in the current image-capture-system reference frame are obtained. The transformation between three-dimensional coordinates can be described by a rotation matrix R and a translation matrix T; therefore, computing R and T between the three-dimensional coordinates of the successfully matched feature points yields the estimated pose change of the binocular image capture system relative to its initial position. In particular, pose estimation seeks the coordinate transformation (R, T) that transforms the matched feature-point set from the world coordinate system into the camera reference frame with the minimal sum of squared errors against the actually observed coordinates in the current-frame camera reference frame; that is, R and T satisfy the following relational expression:
min_{R,T} Σ_{inliers} ‖ CoorCamera − T − R × CoorGlobal ‖².
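This least-squares problem has a well-known closed-form solution (the orthogonal Procrustes/Kabsch solution — a standard result, not spelled out in the patent). Writing p_i = CoorGlobal and q_i = CoorCamera over the inliers:

```latex
\bar{p} = \tfrac{1}{n}\sum_i p_i, \qquad \bar{q} = \tfrac{1}{n}\sum_i q_i,
\\[4pt]
H = \sum_i (q_i - \bar{q})(p_i - \bar{p})^{\top} = U \Sigma V^{\top} \quad \text{(SVD)},
\\[4pt]
R = U \,\mathrm{diag}\!\left(1,\,1,\,\det(U V^{\top})\right) V^{\top}, \qquad T = \bar{q} - R\,\bar{p}.
```

The determinant correction in the middle factor guards against returning a reflection instead of a proper rotation.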
Step S8: on the first frame, initialize the global map; on subsequent frames, update, according to the transformation matrix computed in step S7, the three-dimensional coordinates and descriptors of the feature points in the global map that are successfully matched with the current frame, and add the newly observed feature points of the current frame and their descriptors to the global map. The global map contains the world-frame coordinates and descriptors of the feature points extracted from the captured area. When the current frame is processed, the world coordinates of the successfully matched feature points are updated, and newly observed feature points are added to the global map. Maintaining the global map, on the one hand, combines the historical information of each frame to obtain an accurate estimate of the feature points' three-dimensional coordinates in the world coordinate system; on the other hand, through the global-map matching step it establishes the data association between the current frame and historical frames, so that measurement error and accumulated error are effectively reduced by error compensation. In addition, the constructed global map contains dense spatial features and can be used for subsequent obstacle avoidance and path planning.
Specifically, on the first frame the global map is initialized: the image-capture-system reference frame of the first frame is defined as the world coordinate system of this localization, and the three-dimensional coordinates and descriptors of all feature points observed in the first frame are added to the global map as its initial construction. Once determined, the world coordinate system does not change in subsequent calculations.
Further, among the feature points extracted in the current frame, those that have successfully established a matching relationship with the global map are inversely projected back into the world coordinate system from their three-dimensional coordinates in the current camera reference frame, using the coordinate transformation (R, T) obtained by the pose estimation of step S7. This gives a new observation CoorGlobal′ of the feature point, with conversion formula:
CoorGlobal′ = inv(R) × (CoorCamera − T).
Further, the three-dimensional world-frame coordinates of the successfully matched feature points are updated to the average of the repeated observations. Specifically, if the coordinate of a feature point in the global map is the average after its n-th observation, and the current frame observes it for the (n+1)-th time with observation CoorGlobal′, the update formula for its three-dimensional coordinates in the global map is:
CoorGlobal = (CoorGlobal × n + CoorGlobal′) / (n + 1).
In particular, by the central limit theorem, measurement error approximately follows a normal distribution whose mean is the true value. By taking the arithmetic mean over the observations of each feature point, measurement errors compensate each other and are effectively reduced, so the three-dimensional coordinates of the feature points in the global map move closer to the true values.
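The inverse projection and the running-mean update above can be sketched together. This is a hedged illustration: the function signature and variable layout are assumptions for demonstration, not the patent's data structures:

```python
import numpy as np

def update_map_point(coor_global, n, coor_camera, R, T):
    """One matched map point: inverse-project the current observation into the
    world frame and fold it into the running mean of n previous observations."""
    coor_new = np.linalg.inv(R) @ (coor_camera - T)  # CoorGlobal' = inv(R)(CoorCamera - T)
    return (coor_global * n + coor_new) / (n + 1), n + 1

# Illustrative check: with R = I and T = 0 the camera observation is already in
# world coordinates, so the update reduces to a plain incremental average.
p_map = np.array([1.0, 2.0, 3.0])   # mean after n = 3 observations
obs = np.array([1.4, 2.0, 3.0])     # 4th observation
p_map, n = update_map_point(p_map, 3, obs, np.eye(3), np.zeros(3))
print(p_map)  # (1.0*3 + 1.4)/4 = 1.1 in the first coordinate
```

Because each observation enters with weight 1/(n+1), older observations are never recomputed, which keeps the per-frame map maintenance cost constant per matched point.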
Further, the descriptors of the successfully matched feature points in the global map are updated with the descriptors extracted from the most recent frame. Considering temporal and spatial continuity, the most recent frame and subsequently captured images share closer imaging conditions, so the most recent descriptor is more likely to match successfully against descriptors extracted from subsequent images.
Further, feature points of the current frame that did not match the global map are treated as newly added. Most likely they appear in a region newly visible after the viewpoint change, or are supplementary extractions within the overlapping region. Their three-dimensional coordinates in the camera reference frame are likewise inversely projected into the world coordinate system and added to the global map, together with their descriptors.
Further, embodiments of the invention have lower computational complexity than the extended-Kalman-filter simultaneous localization and mapping algorithm (EKF-SLAM).
In particular, both the method of this embodiment and the EKF-based simultaneous localization and mapping method include the global-map matching described in step S5, whose computational complexity is as follows: if the global map contains n feature points, each descriptor is a p-dimensional vector, and m feature points of the current frame participate in matching against the global map, then the matrix-multiplication complexity of this step is O(m × n × p). In one specific embodiment, if m_c feature points of the current frame establish matches with the global map, the complexity of the RANSAC in step S6 and the pose estimation in step S7 is about O(3 × m_c × k), where k is the number of RANSAC iterations. The EKF-based method, by contrast, must maintain the covariance matrix of the global map in its pose-estimation step, with complexity about O(n × n). Since the iteration count k and the number m_c of current-frame matches are far smaller than the number n of feature points in the global map, the method of this embodiment has lower computational complexity than the EKF-based method; considering the low computing power and low power consumption of mobile platforms, the method is better suited to real-time localization tasks on small mobile platforms.
To verify the performance of the proposed visual-ranging-based simultaneous localization and mapping method, an embodiment of the invention captured real indoor scenes with the binocular image capture system and implemented a software verification of the method in MATLAB R2014b. The binocular image capture system was mounted on a horizontal slide rail to guarantee straight-line motion of the mobile platform. Table 1 gives the pose estimates produced by the method of this embodiment when the system moves different distances and rotates different angles, where X/Y/Z are the position coordinates of the binocular image capture system in the world coordinate system and Pitch/Yaw/Roll are its pitch/yaw/roll angles. Ground truth was measured from the slide rail's ruler markings and a protractor. When the system is stationary, the ± values give the standard deviation of the estimates; when it moves, the estimates in Table 1 are the mean of ten estimations. As Table 1 shows, the motion estimation error in this embodiment is below two centimeters, close to the localization accuracy of the EKF-based simultaneous localization and mapping algorithm. Further, the estimation error in the stationary state was computed, indicating the drift of the motion estimation results of this embodiment:
Error_T = mean( √(X² + Y² + Z²) ) = 1.23 cm,
Table 1
Further, Fig. 5 gives a schematic of the indoor localization trajectory and global map of one embodiment of the invention. In this embodiment, the binocular image capture system was placed on a cart fitted with pulleys. The cart was pushed forward about 80 cm at 5 cm/s, rotated counterclockwise about 65°, and then pushed forward again in that direction. As can be seen, the proposed pose estimation method gives an accurate pose estimate. Meanwhile, the constructed global map contains three-dimensional information for a large number of spatial feature points, usable for subsequent obstacle avoidance, path planning, and similar work.
Further, Fig. 6 gives a schematic of the indoor unbiasedness test results of one embodiment of the invention. In this embodiment, the binocular image capture system moved back and forth 20 times over the same distance on the horizontal slide rail, and the depth-direction distance estimate exhibited no unidirectional drift over the repeated calculations. This shows that the proposed localization method does not accumulate error over time and drift gradually from the true value. The data association established through the global map effectively reduces the accumulated error of closed-loop motion estimation in space.
In summary, according to the visual-ranging-based simultaneous localization and mapping method of the embodiments of the invention: the binocular camera positioning system can recover the true depth of spatial feature points in an indoor environment without prior knowledge; using visual ranging for pose estimation effectively reduces the computational complexity of the simultaneous localization and mapping algorithm, improving computing speed and global-map scale, making the method suitable for real-time localization of mobile platforms and for operation in larger indoor spaces; meanwhile, RANSAC effectively eliminates incorrectly matched feature points, avoiding the introduction of erroneous information into the pose estimate; in addition, establishing and updating the global map effectively eliminates the accumulation of measurement error, keeping indoor localization accuracy at the centimeter level without drift over time; furthermore, the constructed global map contains dense spatial feature information usable for subsequent obstacle avoidance and path planning.
In the description of the invention, it should be understood that terms indicating orientation or positional relationships, such as "center", "longitudinal", "transverse", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", "axial", "radial", and "circumferential", are based on the orientations or positional relationships shown in the drawings, are used only for convenience and simplicity of description, and do not indicate or imply that the referenced device or element must have a specific orientation or be constructed and operated in a specific orientation; they therefore cannot be construed as limiting the invention.
In addition, the terms "first" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the invention, "a plurality of" means at least two, for example two or three, unless specifically limited otherwise.
In the invention, unless otherwise expressly specified and limited, terms such as "mounted", "linked", "connected", and "fixed" should be understood broadly: for example, a connection may be fixed, detachable, or integral; mechanical or electrical; direct, or indirect through an intermediary; and may be an internal communication between two elements or an interaction between two elements, unless otherwise expressly limited. For those of ordinary skill in the art, the specific meanings of the above terms in the invention can be understood according to the circumstances.
In the invention, unless otherwise expressly specified and limited, a first feature being "on" or "under" a second feature may mean that the first and second features are in direct contact, or in indirect contact through an intermediary. Moreover, a first feature being "on", "above", or "over" a second feature may mean that the first feature is directly above or obliquely above the second feature, or merely that the first feature is at a higher level than the second. A first feature being "under", "below", or "beneath" a second feature may mean that the first feature is directly below or obliquely below the second feature, or merely that the first feature is at a lower level than the second.
In the description of this specification, reference to the terms "an embodiment", "some embodiments", "an example", "a specific example", or "some examples" means that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may combine the different embodiments or examples described in this specification, and the features of those embodiments or examples, provided they do not conflict.
Although embodiments of the invention have been shown and described above, it is to be understood that the above embodiments are exemplary and cannot be construed as limiting the invention; those of ordinary skill in the art may change, modify, replace, and vary the above embodiments within the scope of the invention.

Claims (10)

1. A simultaneous localization and mapping method based on visual ranging, characterized by comprising the following steps:
S1: synchronously acquiring binocular images by a binocular image capture system, and correcting the binocular images to obtain undistorted binocular images;
S2: extracting feature points from the undistorted binocular images, and generating feature-point descriptors from the extracted feature points;
S3: establishing a feature-point matching relationship between the binocular images according to the feature-point descriptors;
S4: obtaining the horizontal disparity of matched feature points according to the feature-point matching relationship of the binocular images, and calculating the true spatial depth of the matched feature points according to their horizontal disparity and the parameters of the binocular image capture system;
S5: obtaining the feature points of the current binocular frame, and matching the feature points of the current binocular frame against the feature points in a global map;
S6: analyzing the matching result between the feature points of the current binocular frame and the feature points in the global map, removing incorrectly matched feature points, and retaining successfully matched feature points;
S7: calculating the transformation matrix between the coordinates of the successfully matched feature points in the world coordinate system and their three-dimensional coordinates in the current reference frame, and obtaining, according to the transformation matrix, an estimated pose change of the binocular image capture system relative to its initial position;
S8: on the first frame, initializing the global map; and on subsequent frames, updating, according to the transformation matrix, the three-dimensional coordinates and descriptors of the feature points in the global map that are successfully matched with the current frame, and adding the newly observed feature points of the current frame and their descriptors to the global map.
2. The simultaneous localization and mapping method based on visual ranging according to claim 1, characterized in that S1 further comprises:
extracting the imaging parameters of the binocular image capture system, and correcting the binocular images according to the imaging parameters to obtain undistorted binocular images.
3. The simultaneous localization and mapping method based on visual ranging according to claim 1, characterized in that, in S2, feature points are extracted from the undistorted binocular images by the scale-invariant feature transform method.
4. The simultaneous localization and mapping method based on visual ranging according to claim 1, characterized in that, in S3, the matching relationship between the feature points of the binocular images is established by judging whether the vector angle between the feature-point descriptors of the binocular images is smaller than a specific threshold.
5. The simultaneous localization and mapping method based on visual ranging according to claim 1, characterized in that the binocular image capture system comprises a left camera and a right camera, and S4 further comprises:
obtaining the pixel coordinates of the imaging points of the matched feature point in the left camera and the right camera respectively;
calculating the horizontal disparity between the imaging point in the left camera and the imaging point in the right camera according to the camera imaging resolution and the coordinates of the center point;
obtaining the true spatial depth of the matched feature point according to the horizontal disparity between the imaging points in the left and right cameras and the spatial geometric relationship, wherein the true spatial depth is calculated by the following formula:
Z_p = (B_L × f_C) / Disparity
wherein Z_p is the true spatial depth of the matched feature point, B_L is the baseline, f_C is the focal length, and Disparity is the horizontal disparity.
6. The simultaneous localization and mapping method based on visual ranging according to claim 1, characterized in that S5 further comprises:
storing the descriptors of the current-frame feature points row by row in a first matrix, and storing the descriptors of the feature points in the global map row by row in a second matrix;
traversing all feature points in the global map, and extracting from the global map, for each feature point in the current frame, the feature point with the highest similarity;
establishing a feature-point mapping table from the current-frame feature points to the global map according to the feature points with the highest similarity.
7. The simultaneous localization and mapping method based on visual ranging according to claim 6, characterized in that the similarity of the feature points is determined according to the angle between the feature-point descriptor vectors, wherein the similarity of the feature points is highest when the angle between the feature-point descriptor vectors is smallest.
8. The simultaneous localization and mapping method based on visual ranging according to claim 1, characterized in that, in S6, the matching result between the feature points of the current frame and the feature points in the global map is analyzed by the random sample consensus method to remove incorrectly matched feature points and retain successfully matched feature points, specifically comprising:
S61: randomly choosing m groups of matched feature points from N groups of successfully matched feature points, setting up a global registration equation, and calculating, according to the global registration equation, a transformation matrix that transforms the coordinates of the m groups of matched feature points from the world coordinate system to the current camera reference frame;
S62: mapping the coordinates, in the current image-capture-system reference frame, of the N − m groups of matched feature points not chosen back to the world coordinate system by the inverse of the transformation matrix, obtaining an inverse-mapping result CoorTemp, and calculating the Euclidean distance between CoorTemp and the actually observed coordinates CoorGlobal of the feature points in the world coordinate system;
S63: judging whether the Euclidean distance is greater than a predetermined threshold; if the Euclidean distance is greater than the predetermined threshold, judging the corresponding feature point to be an outlier; if the Euclidean distance is less than the predetermined threshold, judging the corresponding feature point to be an inlier, and counting the number of inliers;
S64: iteratively performing S61 to S63, and recording the number of inliers and the coordinate transformation matrix of each iteration;
S65: choosing, from the iterations, the iteration containing the largest number of inliers, treating the inliers of that iteration as successfully matched feature points and the outliers as incorrectly matched feature points, and removing the incorrectly matched feature points.
9. The simultaneous localization and mapping method based on visual ranging according to claim 1, characterized in that S7 further comprises:
obtaining the coordinates CoorGlobal of the successfully matched feature points in the world coordinate system and their coordinates CoorCamera in the current image-capture-system reference frame;
calculating the rotation matrix R and translation matrix T between the three-dimensional coordinates of the successfully matched feature points, and obtaining, according to the rotation matrix R and translation matrix T, the estimated pose change of the binocular image capture system relative to its initial position, wherein the rotation matrix R and translation matrix T satisfy the following relational expression:
min_{R,T} Σ_{inliers} ‖ CoorCamera − T − R × CoorGlobal ‖².
10. the simultaneous localization and mapping method of view-based access control model ranging according to claim 9, is characterized in that, described S8 comprises further:
When first frame calculates, world map described in initialization, is defined as the world coordinate system of this location, the three-dimensional coordinate of whole for first frame unique point and Feature Descriptor is added world map by the image capture system reference frame of first frame;
when a frame other than the first is processed, inverse-mapping the three-dimensional coordinate CoorCamera of each successfully matched feature point of the current frame to the world coordinate CoorGlobal′ according to the transformation matrix, wherein the conversion formula is:
CoorGlobal′=inv(R)×(CoorCamera-T);
updating the coordinate of each successfully matched feature point in the world coordinate system to the average of its repeated observations, wherein, when the feature point is observed for the (n+1)-th time, its coordinate in the world map is updated according to:
CoorGlobal = (CoorGlobal × n + CoorGlobal′) / (n + 1);
updating the feature descriptor of each successfully matched feature point to the most recently observed descriptor, and adding the three-dimensional coordinates and feature descriptors of feature points newly observed in the current frame to the world map.
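The inverse mapping and running-average update of S8 amount to an incremental mean per landmark. A minimal NumPy sketch, assuming the per-landmark observation count n is tracked alongside each map entry (function and argument names are illustrative, not from the patent):

```python
import numpy as np

def update_landmark(coor_global, n, coor_camera, R, T):
    """S8 map-update sketch: inverse-map a camera-frame observation into the
    world frame, then fold it into the running average of the landmark.

    coor_global : current world-frame estimate, the mean of n observations.
    Returns the mean of n + 1 observations.
    """
    # CoorGlobal' = inv(R) x (CoorCamera - T); for a proper rotation matrix
    # inv(R) equals R.T, but inv() mirrors the formula in the claim.
    coor_new = np.linalg.inv(R) @ (coor_camera - T)
    # CoorGlobal <- (CoorGlobal x n + CoorGlobal') / (n + 1)
    return (coor_global * n + coor_new) / (n + 1)
```

Averaging this way means each landmark coordinate converges to the mean of all its observations without storing the observation history, at the cost of one stored counter per landmark.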
CN201510845988.6A 2015-11-26 2015-11-26 Visual ranging-based simultaneous localization and map construction method Active CN105469405B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510845988.6A CN105469405B (en) 2015-11-26 2015-11-26 Visual ranging-based simultaneous localization and map construction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510845988.6A CN105469405B (en) 2015-11-26 2015-11-26 Visual ranging-based simultaneous localization and map construction method

Publications (2)

Publication Number Publication Date
CN105469405A true CN105469405A (en) 2016-04-06
CN105469405B CN105469405B (en) 2018-08-03

Family

ID=55607064

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510845988.6A Active CN105469405B (en) Visual ranging-based simultaneous localization and map construction method

Country Status (1)

Country Link
CN (1) CN105469405B (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102608618A (en) * 2011-01-03 2012-07-25 德国福维克控股公司 Method for simultaneous location and map creation
US20140244094A1 (en) * 2013-02-28 2014-08-28 Korea Advanced Institute Of Science And Technology Mobile robot and method of localization and mapping of the same


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
卢维: "Research on Key Technologies of High-Precision Real-Time Visual Localization", China Doctoral Dissertations Full-text Database, Information Science and Technology Series *
姜雪菲: "Research on Binocular Stereo Vision SLAM" *
武二永: "Vision-Based Simultaneous Localization and Mapping for Robots", China Doctoral Dissertations Full-text Database, Information Science and Technology Series *
龚云隆: "Research on Embedded Vision-Based Indoor Robot Navigation and Localization", China Master's Theses Full-text Database, Information Science and Technology Series *

Cited By (103)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106408654B (en) * 2016-09-07 2019-05-03 武汉工程大学 A kind of creation method and system of three-dimensional map
CN106408654A (en) * 2016-09-07 2017-02-15 武汉工程大学 Three-dimensional map creation method and system
CN106447585A (en) * 2016-09-21 2017-02-22 武汉大学 Urban area and indoor high-precision visual positioning system and method
WO2018098811A1 (en) * 2016-12-02 2018-06-07 深圳前海达闼云端智能科技有限公司 Localization method and device
CN106767785A (en) * 2016-12-23 2017-05-31 成都通甲优博科技有限责任公司 The air navigation aid and device of a kind of double loop unmanned plane
CN106767785B (en) * 2016-12-23 2020-04-07 成都通甲优博科技有限责任公司 Navigation method and device of double-loop unmanned aerial vehicle
CN106851095A (en) * 2017-01-13 2017-06-13 深圳拓邦股份有限公司 A kind of localization method, apparatus and system
CN106851095B (en) * 2017-01-13 2019-12-24 深圳拓邦股份有限公司 Positioning method, device and system
CN106813672A (en) * 2017-01-22 2017-06-09 深圳悉罗机器人有限公司 The air navigation aid and mobile robot of mobile robot
CN106960454A (en) * 2017-03-02 2017-07-18 武汉星巡智能科技有限公司 Depth of field barrier-avoiding method, equipment and unmanned vehicle
CN106960454B (en) * 2017-03-02 2021-02-12 武汉星巡智能科技有限公司 Depth of field obstacle avoidance method and equipment and unmanned aerial vehicle
CN106931961B (en) * 2017-03-20 2020-06-23 成都通甲优博科技有限责任公司 Automatic navigation method and device
CN106931961A (en) * 2017-03-20 2017-07-07 成都通甲优博科技有限责任公司 A kind of automatic navigation method and device
CN106931962A (en) * 2017-03-29 2017-07-07 武汉大学 A kind of real-time binocular visual positioning method based on GPU SIFT
CN109147374A (en) * 2017-06-15 2019-01-04 深圳市金溢科技股份有限公司 Parking lot management method, system, on board unit and background server
CN107742311A (en) * 2017-09-29 2018-02-27 北京易达图灵科技有限公司 A kind of method and device of vision positioning
CN107742311B (en) * 2017-09-29 2020-02-18 北京易达图灵科技有限公司 Visual positioning method and device
CN110520694A (en) * 2017-10-31 2019-11-29 深圳市大疆创新科技有限公司 A kind of visual odometry and its implementation
CN107818592A (en) * 2017-11-24 2018-03-20 北京华捷艾米科技有限公司 Method, system and the interactive system of collaborative synchronous superposition
CN107833245B (en) * 2017-11-28 2020-02-07 北京搜狐新媒体信息技术有限公司 Monocular visual feature point matching-based SLAM method and system
CN107833245A (en) * 2017-11-28 2018-03-23 北京搜狐新媒体信息技术有限公司 SLAM method and system based on monocular vision Feature Points Matching
CN108326845B (en) * 2017-12-11 2020-06-26 浙江捷尚人工智能研究发展有限公司 Robot positioning method, device and system based on binocular camera and laser radar
CN108326845A (en) * 2017-12-11 2018-07-27 浙江捷尚人工智能研究发展有限公司 Robot localization method, apparatus and system based on binocular camera and laser radar
CN109727288B (en) * 2017-12-28 2021-10-01 北京京东尚科信息技术有限公司 System and method for monocular simultaneous localization and mapping
CN109727288A (en) * 2017-12-28 2019-05-07 北京京东尚科信息技术有限公司 System and method for monocular simultaneous localization and mapping
CN110119189A (en) * 2018-02-05 2019-08-13 浙江商汤科技开发有限公司 The initialization of SLAM system, AR control method, device and system
CN110119189B (en) * 2018-02-05 2022-06-03 浙江商汤科技开发有限公司 Initialization method, AR control method, device and system of SLAM system
CN110186467A (en) * 2018-02-23 2019-08-30 通用汽车环球科技运作有限责任公司 Group's sensing points cloud map
CN108427438A (en) * 2018-04-11 2018-08-21 北京木业邦科技有限公司 Flight environment of vehicle detection method, device, electronic equipment and storage medium
CN108534782A (en) * 2018-04-16 2018-09-14 电子科技大学 A kind of instant localization method of terrestrial reference map vehicle based on binocular vision system
CN108665508A (en) * 2018-04-26 2018-10-16 腾讯科技(深圳)有限公司 A kind of positioning and map constructing method, device and storage medium immediately
CN108665508B (en) * 2018-04-26 2022-04-05 腾讯科技(深圳)有限公司 Instant positioning and map construction method, device and storage medium
JP2021520540A (en) * 2018-06-19 2021-08-19 テンセント・テクノロジー・(シェンジェン)・カンパニー・リミテッド Camera positioning methods and devices, terminals and computer programs
WO2019242418A1 (en) * 2018-06-19 2019-12-26 腾讯科技(深圳)有限公司 Camera localization method and apparatus, and terminal and storage medium
JP7026819B2 (en) 2018-06-19 2022-02-28 テンセント・テクノロジー・(シェンジェン)・カンパニー・リミテッド Camera positioning method and equipment, terminals and computer programs
US11210810B2 (en) 2018-06-19 2021-12-28 Tencent Technology (Shenzhen) Company Limited Camera localization method and apparatus, terminal, and storage medium
CN110660098B (en) * 2018-06-28 2022-08-12 北京京东叁佰陆拾度电子商务有限公司 Positioning method and device based on monocular vision
CN110660098A (en) * 2018-06-28 2020-01-07 北京京东尚科信息技术有限公司 Positioning method and device based on monocular vision
CN108932587A (en) * 2018-06-29 2018-12-04 大连民族大学 Vertical view pedestrian's risk quantification system of two-dimensional world coordinate system
CN109636916B (en) * 2018-07-17 2022-12-02 北京理工大学 Dynamic calibration large-range virtual reality roaming system and method
CN109636916A (en) * 2018-07-17 2019-04-16 北京理工大学 A kind of a wide range of virtual reality roaming system and method for dynamic calibration
CN109146773A (en) * 2018-08-02 2019-01-04 广州市鑫广飞信息科技有限公司 By river map maps to the method and device of Web map
CN109146773B (en) * 2018-08-02 2022-10-11 广州市鑫广飞信息科技有限公司 Method and device for mapping river channel map to Web map
CN109141433A (en) * 2018-09-20 2019-01-04 江阴市雷奥机器人技术有限公司 A kind of robot indoor locating system and localization method
CN109543694A (en) * 2018-09-28 2019-03-29 天津大学 A kind of visual synchronization positioning and map constructing method based on the sparse strategy of point feature
TWI679512B (en) * 2018-10-05 2019-12-11 東元電機股份有限公司 Automated guided vehicle
CN111089579A (en) * 2018-10-22 2020-05-01 北京地平线机器人技术研发有限公司 Heterogeneous binocular SLAM method and device and electronic equipment
CN109462746A (en) * 2018-10-26 2019-03-12 北京双髻鲨科技有限公司 A kind of method and device of image debounce
CN111260538A (en) * 2018-12-03 2020-06-09 北京初速度科技有限公司 Positioning and vehicle-mounted terminal based on long-baseline binocular fisheye camera
CN111260698B (en) * 2018-12-03 2024-01-02 北京魔门塔科技有限公司 Binocular image feature matching method and vehicle-mounted terminal
CN111260698A (en) * 2018-12-03 2020-06-09 北京初速度科技有限公司 Binocular image feature matching method and vehicle-mounted terminal
CN111260538B (en) * 2018-12-03 2023-10-03 北京魔门塔科技有限公司 Positioning and vehicle-mounted terminal based on long-baseline binocular fisheye camera
CN109540148A (en) * 2018-12-04 2019-03-29 广州小鹏汽车科技有限公司 Localization method and system based on SLAM map
CN109540148B (en) * 2018-12-04 2020-10-16 广州小鹏汽车科技有限公司 Positioning method and system based on SLAM map
CN109348415A (en) * 2018-12-10 2019-02-15 郑州联睿电子科技有限公司 Dead point convergence process method based on average homogeneity
CN109752003A (en) * 2018-12-26 2019-05-14 浙江大学 A kind of robot vision inertia dotted line characteristic positioning method and device
CN109785731A (en) * 2018-12-29 2019-05-21 北京诺亦腾科技有限公司 A kind of map constructing method, system and storage medium
CN109978933A (en) * 2019-01-03 2019-07-05 北京中科慧眼科技有限公司 The confidence level detection method of parallax information data, device and automated driving system
CN109887087A (en) * 2019-02-22 2019-06-14 广州小鹏汽车科技有限公司 A kind of SLAM of vehicle builds drawing method and system
CN109887087B (en) * 2019-02-22 2021-02-19 广州小鹏汽车科技有限公司 SLAM mapping method and system for vehicle
CN110084851A (en) * 2019-04-19 2019-08-02 湖北亿咖通科技有限公司 A kind of binocular point cloud generation method and system
CN110044358A (en) * 2019-04-29 2019-07-23 清华大学 Method for positioning mobile robot based on live field wire feature
CN110097064A (en) * 2019-05-14 2019-08-06 驭势科技(北京)有限公司 One kind building drawing method and device
CN110132278A (en) * 2019-05-14 2019-08-16 驭势科技(北京)有限公司 A kind of instant method and device for positioning and building figure
CN110097064B (en) * 2019-05-14 2021-05-11 驭势科技(北京)有限公司 Picture construction method and device
CN110260857A (en) * 2019-07-02 2019-09-20 北京百度网讯科技有限公司 Calibration method, device and the storage medium of vision map
CN110490900B (en) * 2019-07-12 2022-04-19 中国科学技术大学 Binocular vision positioning method and system under dynamic environment
CN110490900A (en) * 2019-07-12 2019-11-22 中国科学技术大学 Binocular visual positioning method and system under dynamic environment
CN110853095B (en) * 2019-10-11 2021-06-18 北京达佳互联信息技术有限公司 Camera positioning method and device, electronic equipment and storage medium
CN110853095A (en) * 2019-10-11 2020-02-28 北京达佳互联信息技术有限公司 Camera positioning method and device, electronic equipment and storage medium
CN110796706A (en) * 2019-11-08 2020-02-14 四川长虹电器股份有限公司 Visual positioning method and system
WO2021098079A1 (en) * 2019-11-21 2021-05-27 大连理工大学 Method for using binocular stereo camera to construct grid map
CN111028350A (en) * 2019-11-21 2020-04-17 大连理工大学 Method for constructing grid map by using binocular stereo camera
CN111028350B (en) * 2019-11-21 2022-05-20 大连理工大学 Method for constructing grid map by using binocular stereo camera
US11315318B2 (en) 2019-11-21 2022-04-26 Dalian University Of Technology Method for constructing grid map by using binocular stereo camera
CN110823233A (en) * 2019-11-28 2020-02-21 广东电网有限责任公司 Map model construction method and system based on coordinate transformation
CN110956630A (en) * 2019-12-18 2020-04-03 浙江大学 Method, device and system for detecting plane printing defects
CN111240321A (en) * 2020-01-08 2020-06-05 广州小鹏汽车科技有限公司 High-frequency positioning method based on SLAM map and vehicle control system
CN111337947B (en) * 2020-05-18 2020-09-22 深圳市智绘科技有限公司 Instant mapping and positioning method, device, system and storage medium
CN111337947A (en) * 2020-05-18 2020-06-26 深圳市智绘科技有限公司 Instant mapping and positioning method, device, system and storage medium
CN111521186A (en) * 2020-05-27 2020-08-11 广州小鹏车联网科技有限公司 Vehicle positioning method and device, vehicle and storage medium
CN111862672B (en) * 2020-06-24 2021-11-23 北京易航远智科技有限公司 Parking lot vehicle self-positioning and map construction method based on top view
CN111862672A (en) * 2020-06-24 2020-10-30 北京易航远智科技有限公司 Parking lot vehicle self-positioning and map construction method based on top view
CN112083403B (en) * 2020-07-21 2023-10-10 青岛小鸟看看科技有限公司 Positioning tracking error correction method and system for virtual scene
CN112083403A (en) * 2020-07-21 2020-12-15 青岛小鸟看看科技有限公司 Positioning tracking error correction method and system for virtual scene
CN112146647A (en) * 2020-09-11 2020-12-29 珠海市一微半导体有限公司 Binocular vision positioning method and chip for ground texture
CN112235562B (en) * 2020-10-12 2023-09-15 聚好看科技股份有限公司 3D display terminal, controller and image processing method
CN112235562A (en) * 2020-10-12 2021-01-15 聚好看科技股份有限公司 3D display terminal, controller and image processing method
CN112556703B (en) * 2020-12-01 2023-06-06 北京罗克维尔斯科技有限公司 Method, device and system for updating high-precision map
CN112556703A (en) * 2020-12-01 2021-03-26 北京罗克维尔斯科技有限公司 Method, device and system for updating high-precision map
CN112614191B (en) * 2020-12-16 2024-05-24 江苏智库智能科技有限公司 Loading and unloading position detection method, device and system based on binocular depth camera
CN112614191A (en) * 2020-12-16 2021-04-06 江苏智库智能科技有限公司 Loading and unloading position detection method, device and system based on binocular depth camera
CN113763468A (en) * 2021-01-21 2021-12-07 北京京东乾石科技有限公司 Positioning method, device, system and storage medium
CN113763468B (en) * 2021-01-21 2023-12-05 北京京东乾石科技有限公司 Positioning method, device, system and storage medium
CN112907742A (en) * 2021-02-18 2021-06-04 湖南国科微电子股份有限公司 Visual synchronous positioning and mapping method, device, equipment and medium
CN112801077A (en) * 2021-04-15 2021-05-14 智道网联科技(北京)有限公司 Method for SLAM initialization of autonomous vehicles and related device
CN113256718B (en) * 2021-05-27 2023-04-07 浙江商汤科技开发有限公司 Positioning method and device, equipment and storage medium
CN113256718A (en) * 2021-05-27 2021-08-13 浙江商汤科技开发有限公司 Positioning method and device, equipment and storage medium
CN113392909A (en) * 2021-06-17 2021-09-14 深圳市睿联技术股份有限公司 Data processing method, data processing device, terminal and readable storage medium
CN116524008B (en) * 2023-04-14 2024-02-02 公安部第一研究所 Target object matching and spatial position estimation method for security inspection CT intelligent identification
CN116524008A (en) * 2023-04-14 2023-08-01 公安部第一研究所 Target object matching and spatial position estimation method for security inspection CT intelligent identification
CN117437552A (en) * 2023-12-21 2024-01-23 绘见科技(深圳)有限公司 Method, device, equipment and storage medium for constructing visual positioning map
CN117437552B (en) * 2023-12-21 2024-03-26 绘见科技(深圳)有限公司 Method, device, equipment and storage medium for constructing visual positioning map

Also Published As

Publication number Publication date
CN105469405B (en) 2018-08-03

Similar Documents

Publication Publication Date Title
CN105469405A (en) Visual ranging-based simultaneous localization and map construction method
US9946264B2 (en) Autonomous navigation using visual odometry
Olson et al. Robust stereo ego-motion for long distance navigation
CN112197770B (en) Robot positioning method and positioning device thereof
EP2503510B1 (en) Wide baseline feature matching using collaborative navigation and digital terrain elevation data constraints
US8588471B2 (en) Method and device of mapping and localization method using the same
CN107741234A A vision-based offline map construction and localization method
KR20210040217A (en) EXTRINSIC CALIBRATION METHOD OF PLURALITY OF 3D LiDAR SENSORS FOR AUTONOMOUS NAVIGATION SYSTEM
CN105910604A (en) Multi-sensor-based autonomous obstacle avoidance navigation system
EP2922023A1 (en) Three-dimensional object recognition device and three-dimensional object recognition method
JP6782903B2 (en) Self-motion estimation system, control method and program of self-motion estimation system
CN105424006A (en) Unmanned aerial vehicle hovering precision measurement method based on binocular vision
AliAkbarpour et al. Parallax-tolerant aerial image georegistration and efficient camera pose refinement—without piecewise homographies
Wang et al. D-liom: Tightly-coupled direct lidar-inertial odometry and mapping
Xian et al. Fusing stereo camera and low-cost inertial measurement unit for autonomous navigation in a tightly-coupled approach
CN104166995A (en) Harris-SIFT binocular vision positioning method based on horse pace measurement
CN103791901A (en) Data processing system of star sensor
Saeedi et al. 3D motion tracking of a mobile robot in a natural environment
Ye et al. Robust and efficient vehicles motion estimation with low-cost multi-camera and odometer-gyroscope
Veth et al. Image‐Aided Navigation–Concepts and Applications
Kaneko et al. Stata Center Frame: A Novel World Assumption for Self-Localization
Xia et al. YOLO-Based Semantic Segmentation for Dynamic Removal in Visual-Inertial SLAM
Zsedrovits et al. Towards real-time visual and IMU data fusion
Geng et al. Egomotion estimation and reconstruction with Kalman filters and GPS integration
Wang et al. Real-time visual odometry estimation based on principal direction detection on ceiling vision

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20191231

Address after: 100083 401a, floor 4, building 6, yard 1, Zhongguancun East Road, Haidian District, Beijing

Patentee after: Beijing Chaoxing Future Technology Co., Ltd

Address before: 100084 Haidian District 100084-82 mailbox Beijing

Patentee before: Tsinghua University