CN109493385A - Indoor autonomous positioning method for a mobile robot combining scene point and line features - Google Patents

Indoor autonomous positioning method for a mobile robot combining scene point and line features

Info

Publication number
CN109493385A
CN109493385A (application CN201811166280.8A)
Authority
CN
China
Prior art keywords: feature, scene, mobile robot, point, line feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811166280.8A
Other languages
Chinese (zh)
Inventor
田应仲
李昂松
李龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Shanghai for Science and Technology
Original Assignee
University of Shanghai for Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Shanghai for Science and Technology
Priority to CN201811166280.8A
Publication of CN109493385A
Legal status: Pending (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T5/70
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20024 Filtering details
    • G06T2207/20028 Bilateral filtering

Abstract

The present invention relates to an indoor autonomous positioning method for a mobile robot that combines scene point and line features. The method has strong scene adaptability: a depth camera mounted on the mobile robot captures scene video data, and the point and line features of every frame are extracted to complete the subsequent positioning computation. In general, point features perform well in scenes with clear textures, abundant feature points, and no occlusion. However, in scenes with few textured feature points, feature points are scarce, and positioning methods that rely on point features alone lack robustness. The present invention extracts the more stable line features of objects in texture-poor scenes, guaranteeing richer feature information for the visual odometry computation, so that the position of the mobile robot is obtained and autonomous positioning is achieved.

Description

Indoor autonomous positioning method for a mobile robot combining scene point and line features
Technical field
The invention belongs to the field of mobile robot autonomous navigation and relates to an indoor autonomous positioning method for a mobile robot that combines scene point and line features; it performs indoor autonomous positioning and pose acquisition using a depth camera mounted on the robot. The method makes full use of indoor scene data by extracting both the point and line features of the scene. Point features perform well when textures are clear, feature points are abundant, and the scene is unobstructed; however, in scenes with few textured feature points, methods based on point features alone lack robustness. This method extracts the more stable line features of objects in texture-poor scenes, guaranteeing richer feature information for the depth-camera-based visual odometry computation and thereby obtaining the position of the mobile robot.
Background technique
Simultaneous localization and mapping (SLAM) technology for mobile robots is considered the key to achieving truly autonomous robots: a mobile robot must be able to navigate and position itself without relying on a global navigation satellite system. Performing SLAM with a vision camera not only reduces system cost but also yields rich three-dimensional scene information that can later be used to build more versatile navigation maps. Visual odometry, also called the front end of a SLAM system, is an important component of visual SLAM: it estimates the motion of the camera from adjacent image frames of the scene video stream captured by a camera fixed on the robot, so that positioning and pose estimation of the mobile robot can be achieved with the vision camera alone. Chinese patent CN 107025668 discloses a visual odometry design method based on a depth camera that combines the sparse direct method with the feature point method; it improves the real-time performance and robustness of visual odometry in particular scenes, but it is demanding on environmental point features, and its performance in scenes with sparse feature points is unknown. Chinese patent CN 107356252A discloses an indoor robot positioning method that fuses visual odometry with a physical (wheel) odometer; the fusion alleviates the accumulated drift of the physical odometer to some extent, but the method is computationally expensive and costly, the sensors involved are not a single vision sensor so data-association problems arise, and the core of its algorithm is still point feature extraction, so the algorithm is unsuitable for scenes where feature points are scarce.
Summary of the invention
It is an object of the invention to overcome the deficiencies of the prior art and provide an indoor autonomous positioning method for a mobile robot that combines scene point and line features. It improves on the currently mainstream point-feature-based visual odometry positioning for mobile robots by making full use of the point and line features of scene objects to complete the visual odometry positioning process, so that a subsequent mobile robot SLAM system can realize the autonomous localization and navigation functions of the mobile robot.
To achieve the above object, the idea of the invention is as follows:
The mobile robot carries a depth vision camera. First, the mobile robot acquires an RGB-D (color plus depth) video stream of its surroundings through the depth camera. ORB (Oriented FAST and Rotated BRIEF) feature points of the scene are extracted adaptively from the color images, while bilateral filtering is applied to the depth images to mitigate the black holes and inaccurate depth values that depth images suffer from. The adaptive ORB feature extraction improves the efficiency and accuracy of the positioning system in texture-rich settings. At the same time, the method introduces scene line features, which carry structural information, to handle positioning in scene environments where feature points are few or even scarce. The method combines point features and line features to locate the camera in the environment, so that autonomous positioning and pose estimation of the mobile robot can be achieved with only a single depth camera sensor.
According to the above idea, the invention adopts the following technical solution:
An indoor autonomous positioning method for a mobile robot combining scene point and line features, characterized by the following concrete operation steps:
1) Equipment installation and data acquisition: the depth camera sensor is fixed on top of the mobile robot, universal wheels that can move in any direction are installed on the bottom, and a computer is placed inside to process the environmental data acquired by the depth camera. After the environmental data are obtained, the positioning method of the invention makes full use of the environmental data acquired by the depth camera and the background computation of the computer to realize the autonomous positioning of the mobile robot.
2) Point and line feature extraction and matching for the scene around the robot: the depth camera obtains color and depth images of the scene, the acquired images are preprocessed for noise suppression, and the point and line features of the color images are extracted and matched.
3) Computation of feature depth information and handling of missing depth: the depth camera computes depth information, and the problem of partial image data lacking depth information is resolved.
4) Definition of the matching error of scene point and line features: after the depth information of the extracted point and line features has been computed in step 3), the re-projection errors of the point and line features are defined.
5) The optimized mobile robot pose is obtained by minimizing the re-projection error: the re-projection error defined in the previous step is optimized with a nonlinear optimization method so as to minimize it. Since the variables to be optimized include the camera pose, the pose information of the mobile robot is obtained accordingly, realizing the autonomous positioning of the robot.
In step 1), the depth camera and the universal-wheel mobile chassis communicate with the computer through Robot Operating System (ROS) interfaces. All data processing and computation are carried out in the ROS system.
In step 2), the extracted point features are ORB features described with BRIEF descriptors, and feature point matching is performed on the binary descriptors of the feature points with the K-nearest-neighbor (KNN) algorithm. The extracted line features are LSD features described with LBD descriptors, and effective line segment matching is performed using the appearance and geometric constraints of the line features. LSD is a line segment detection algorithm that yields sub-pixel-accuracy results in linear time; its core idea is to merge pixels with similar gradient directions. It extracts line features faster than the traditional Hough transform, meeting the real-time requirement of autonomous positioning.
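The following is a minimal sketch of this extraction-and-matching step, assuming Python with OpenCV (the patent names the algorithms, not an implementation). LBD description and line matching live in opencv-contrib's line_descriptor module and are omitted here; the helper name extract_and_match is an illustration only.

```python
import cv2

def extract_and_match(img1, img2):
    gray1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
    gray2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)

    # ORB point features with binary, BRIEF-style descriptors
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(gray1, None)
    kp2, des2 = orb.detectAndCompute(gray2, None)

    # KNN matching on Hamming distance; Lowe's ratio test drops ambiguous matches
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in (p for p in pairs if len(p) == 2)
            if m.distance < 0.75 * n.distance]

    # LSD line segments (requires an OpenCV build with the LSD detector enabled)
    lsd = cv2.createLineSegmentDetector()
    lines1 = lsd.detect(gray1)[0]  # N x 1 x 4 array of (x1, y1, x2, y2)
    lines2 = lsd.detect(gray2)[0]
    return kp1, kp2, good, lines1, lines2
```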
In step 3), when depth information is present, the depth of the extracted point and line features is obtained directly from the depth camera. When depth information is missing, a sampling re-projection method recovers the depth of enough point and line features algorithmically, avoiding the positioning failures caused by missing depth. From the depth images acquired by the on-board depth camera, the spatial points and spatial line parameters corresponding to the detected scene features can be obtained. However, this performs poorly when depth information is missing or the perspective is ambiguous: the depth of the acquired images is not always available, and, because of perspective projection, a 2D line segment in the color image does not always correspond to a 3D line segment in space. Therefore, when the three-dimensional depth of the 2D segment endpoints is missing, the invention recovers the 3D segment by sampling re-projection. Since a line segment consists of many points, the detected segment feature is uniformly sampled into n_l points. Sampled points without depth values are then discarded, and the 3D coordinates of the remaining points are computed from the depth image. Because the data are noisy, the resulting 3D coordinates do not necessarily form a straight line in three-dimensional space, so a three-dimensional line is next fitted to the 3D points with RANSAC combined with the Mahalanobis distance, excluding outliers at the same time.
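A minimal numpy sketch of this sampling re-projection, under an assumed pinhole model with intrinsics fx, fy, cx, cy; a plain Euclidean point-to-line threshold stands in for the Mahalanobis distance named in the patent.

```python
import numpy as np

def backproject_segment(p0, p1, depth, fx, fy, cx, cy, n_samples=20, thresh=0.02):
    # Uniformly sample n_samples pixels along the detected 2D segment p0 -> p1
    ts = np.linspace(0.0, 1.0, n_samples)
    pts2d = (1 - ts)[:, None] * np.asarray(p0, float) + ts[:, None] * np.asarray(p1, float)

    pts3d = []
    for u, v in pts2d:
        z = depth[int(round(v)), int(round(u))]
        if z <= 0:                      # discard samples with missing depth
            continue
        pts3d.append([(u - cx) * z / fx, (v - cy) * z / fy, z])
    pts3d = np.array(pts3d)
    if len(pts3d) < 2:
        return None

    # RANSAC: hypothesize a line from 2 points, count inliers by point-to-line distance
    best = None
    rng = np.random.default_rng(0)
    for _ in range(100):
        i, j = rng.choice(len(pts3d), size=2, replace=False)
        d = pts3d[j] - pts3d[i]
        if np.linalg.norm(d) < 1e-6:
            continue
        d = d / np.linalg.norm(d)
        diff = pts3d - pts3d[i]
        dist = np.linalg.norm(diff - (diff @ d)[:, None] * d, axis=1)
        inliers = dist < thresh
        if best is None or inliers.sum() > best.sum():
            best = inliers
    if best is None:
        return None

    # Refit the line to the inliers (centroid + principal direction), excluding outliers
    P = pts3d[best]
    centroid = P.mean(axis=0)
    direction = np.linalg.svd(P - centroid)[2][0]
    return centroid, direction
```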
In step 4), the matching error of the scene point and line features is defined with a 3D-2D method. Representing spatial lines in Plücker coordinates lets the linear projection model be applied to line features conveniently. When the relative pose transformation of the depth camera is solved by nonlinear optimization, the re-projection error representations of the point and line features must first be determined. Each 3D point is projected according to the currently estimated pose, and the projected position is compared with the matched pixel coordinate to give the error, which can be expressed as:

$e_{p,i} = u_i - \frac{1}{s_i} K \exp(\xi^{\wedge}) P_i$

where $u_i = (u_i, v_i, 1)^T$ is the matched pixel coordinate at which the 3D spatial point $P_i$ projects onto the 2D image, $s_i$ is the depth of the projected point, $K$ is the intrinsic matrix of the camera, and $\exp(\xi^{\wedge})$ is the Lie-algebra form of the transformation matrix $T_{kw}$.
The 3D-2D approach first re-projects the spatial line $L$ onto the image and then computes, in the 2D image, the error $e_l$ between the projected line $l_c$ and the line segment $l$ obtained by the matching described above. Suppose the line feature in the 2D image is represented by its two endpoints, $a$ and $b$. The re-projection error $e_l$ of segment $L$ can then be expressed as the distance from the endpoints $a, b$ of the matched segment $l$ to the projected line $l_c$:

$e_l = d(a, l_c) + d(b, l_c), \qquad d(p, l_c) = \frac{p^{T} l_c}{\sqrt{l_1^2 + l_2^2}}$

where $a, b$ are the endpoint coordinates in the homogeneous coordinate system, $d(l, l_c)$ denotes the distance between segment $l$ and line $l_c$, and the projected line has coordinates $l_c = [l_1, l_2, l_3]^{T}$.
Because of noise, the projection of the spatial line and the observed line do not coincide, which yields the observation error. Suppose the camera pose at the time of frame $k$ is $T_{kw}$ and a spatial line observed in frame $k$ is $L_{wj}$; the re-projection error of this line is then

$e_{l,kj} = d(l_{kj},\, n_c[\mathcal{H}_{kw} L_{wj}])$

where $l_{kj}$ is the observed line matched to it in frame $k$, $\mathcal{H}_{kw}$ is the line transformation matrix under the current camera pose $T_{kw}$, and $n_c[\cdot]$ extracts the moment vector of the line in Plücker coordinates, whose image projection gives the projected line used in the distance $d(\cdot,\cdot)$ above.
Assuming the observation errors are Gaussian, the final re-projection error cost function is

$C = \sum_i \rho_p\left(e_{p,i}^{T} \Sigma_{p,i}^{-1} e_{p,i}\right) + \sum_{k,j} \rho_l\left(e_{l,kj}^{T} \Sigma_{l,kj}^{-1} e_{l,kj}\right)$

where $\Sigma_{p,i}^{-1}$ and $\Sigma_{l,kj}^{-1}$ are the information matrices of the point and line features respectively, and $\rho_p$, $\rho_l$ are robust Huber kernel functions.
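A small sketch of the two residuals above, assuming numpy and a pinhole model; R and t are the rotation and translation of the currently estimated pose T_kw, and the function names are illustrative.

```python
import numpy as np

def point_residual(u, P_w, R, t, K):
    """e_p: observed pixel u minus the projection of world point P_w under pose (R, t)."""
    P_c = R @ P_w + t              # transform the point into the camera frame
    proj = K @ (P_c / P_c[2])      # divide by the depth s_i, then apply intrinsics K
    return u - proj[:2]

def line_residual(a, b, l_c):
    """e_l = d(a, l_c) + d(b, l_c) for homogeneous endpoints a, b of the matched
    segment and projected line l_c = [l1, l2, l3]^T."""
    return (a @ l_c + b @ l_c) / np.hypot(l_c[0], l_c[1])
```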
In step 5), the state quantities to be estimated are solved by the Levenberg-Marquardt method so that the sum of the re-projection errors of all spatial points and lines is minimized; the resulting optimal variables are the optimized pose solution, giving the optimal pose data of the mobile robot.
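A hedged sketch of this robust minimization, assuming scipy (the patent names the Levenberg-Marquardt method; SLAM systems typically implement this with g2o or Ceres). scipy's least_squares only supports a robust Huber loss with its 'trf'/'dogbox' solvers, so 'trf' stands in for LM here; point_residual is the sketch above, and line residuals would be appended the same way.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def optimize_pose(x0, obs_px, pts_3d, K):
    """x = [axis-angle rotation (3), translation (3)]; minimizes the point term of C."""
    def residuals(x):
        R = Rotation.from_rotvec(x[:3]).as_matrix()
        t = x[3:]
        res = []
        for u, P_w in zip(obs_px, pts_3d):
            res.extend(point_residual(u, P_w, R, t, K))
        return np.asarray(res)

    sol = least_squares(residuals, x0, loss='huber', f_scale=1.0, method='trf')
    return sol.x   # optimized camera pose; the robot pose follows from the mounting
```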
Compared with the prior art, the invention has the following evident and substantive technical advances:
The invention solves the autonomous positioning problem of mobile robots. Compared with positioning methods that depend on external markers such as QR codes or road signs, or on hardware such as global positioning systems, lidar, ultrasonic sensors, or inertial measurement units, the only sensor the invention requires is a relatively inexpensive depth camera, and the positioning method achieves autonomous positioning of the mobile robot without external markers. The positioning equipment of the invention is therefore cheap and simple, and the operation is convenient. Moreover, because the positioning method fully combines the point and line features of the scene, it solves the positioning failures that scarce scene point features cause for traditional point-feature-only positioning methods, and it has higher accuracy and stronger robustness.
Brief description of the drawings
Fig. 1 is the operation flow chart of the indoor autonomous positioning method for a mobile robot combining scene point and line features of the invention.
Fig. 2 is a schematic diagram of the detailed technical principle of the indoor autonomous positioning method for a mobile robot combining scene point and line features of the invention.
Fig. 3 is a schematic diagram of the sampling re-projection used to obtain depth information when the depth information of a line feature is missing.
Fig. 4 is a schematic diagram of the 3D-2D re-projection error of the invention.
Specific embodiments
The preferred embodiments of the invention are elaborated below with reference to the accompanying drawings.
Embodiment one: referring to Figs. 1 to 4, this indoor autonomous positioning method for a mobile robot combining scene point and line features has the following specific steps:
1) Equipment installation and data acquisition: the depth camera sensor is fixed on top of the mobile robot, universal wheels that can move in any direction are installed on the robot's bottom, and a computer is placed inside to process the environmental data acquired by the depth camera;
2) Point and line feature extraction and matching for the scene around the robot: the depth camera obtains color and depth images of the scene, the acquired images are preprocessed for noise suppression, and the point and line features of the color images are extracted and matched;
3) Computation of feature depth information and handling of missing depth: the depth camera computes depth information, and the problem of partial image data lacking depth information is resolved;
4) Definition of the re-projection error of scene point and line features: after the depth information of the extracted point and line features has been computed in step 3), the re-projection errors of the point and line features are defined;
5) The optimized mobile robot pose is obtained by minimizing the re-projection error: the re-projection error defined in the previous step is optimized with a nonlinear optimization method so as to minimize it; since the variables to be optimized include the camera pose, the pose information of the mobile robot is obtained accordingly, realizing the autonomous positioning of the robot.
Embodiment two: this embodiment is basically the same as embodiment one, with the following particulars:
In step 1), the depth camera and the universal-wheel mobile chassis communicate with the computer through ROS interfaces; all data processing and computation are carried out in the ROS system; the depth images acquired by the depth camera device are denoised with a bilateral filtering algorithm, as sketched below.
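A minimal sketch of this bilateral-filter preprocessing, assuming OpenCV; the parameter values are illustrative assumptions. Bilateral filtering smooths depth noise while preserving the sharp depth discontinuities at object edges, unlike a plain Gaussian blur.

```python
import cv2
import numpy as np

def denoise_depth(depth_raw):
    depth = depth_raw.astype(np.float32)
    # d: neighborhood diameter; sigmaColor: depth-similarity scale (same units as
    # the depth values); sigmaSpace: spatial scale in pixels.
    return cv2.bilateralFilter(depth, d=5, sigmaColor=30.0, sigmaSpace=5.0)
```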
In step 2), the extracted point features are ORB features described with BRIEF descriptors, and feature point matching is performed on the binary descriptors of the feature points with the K-nearest-neighbor (KNN) algorithm. The extracted line features are LSD features described with LBD descriptors, and effective line segment matching is performed using the appearance and geometric constraints of the line features; the ORB feature extraction algorithm uses adaptive region segmentation.
In step 3), when depth information is present, the depth of the extracted point and line features is obtained directly from the depth camera; when depth information is missing, a sampling re-projection method recovers the depth of enough point and line features algorithmically, avoiding the positioning failures caused by missing depth.
In step 4), the matching error of the scene point and line features is defined with a 3D-2D method; the spatial parameters of scene line features are represented in Plücker coordinates.
In step 5), the state quantities to be estimated are solved by the Levenberg-Marquardt method so that the sum of the re-projection errors of all spatial points and lines is minimized; the resulting optimal variables are the optimized pose solution, giving the optimal pose data of the mobile robot. In the sampling re-projection algorithm, two-dimensional points are uniformly sampled on the extracted line segment features and back-projected into three-dimensional space using the depth information; a three-dimensional line is fitted to the 3D points with RANSAC combined with the Mahalanobis distance while excluding outliers, thereby obtaining the three-dimensional depth information.
Embodiment three: this indoor autonomous positioning method for a mobile robot combining scene point and line features proceeds as follows:
As shown in Fig. 2, a depth (RGB-D) camera is mounted on the mobile robot platform. In a specific embodiment, the depth camera can be Microsoft's inexpensive Kinect v2, which offers a 1920 × 1080 color image resolution, a 512 × 424 depth image resolution, a 70-degree horizontal field of view, and a 60-degree vertical field of view, meeting the requirements of the positioning method of the invention. The mobile platform uses a Kobuki mobile chassis with a 3-axis digital gyroscope whose measurement range is ±250 degrees per second; this mobile chassis meets the requirements of autonomous positioning and subsequent navigation. The acquisition of sensor data, the processing and computation of the data, and the back-end data optimization are all completed by the computer built into the mobile robot. The specific embodiment of the invention is built on the ROS framework under the Ubuntu operating system.
In a preferred embodiment of the invention, the Kinect v2 depth camera acquires color and depth images of the scene environment, the acquired images are preprocessed for noise suppression, and the depth images are processed with the bilateral filtering method. For the color images, ORB features are extracted with an adaptive thresholding method, and feature point matching is performed on the binary descriptors of the feature points with the K-nearest-neighbor (KNN) algorithm. Afterwards, the 3D coordinates of the matched ORB feature points are computed and an initial transformation is solved. To prevent scarce feature points in low-texture environments from reducing the matching precision or even causing the odometry to fail, scene edge line segment features are introduced. The specific embodiment of the invention extracts line features with the LSD (Line Segment Detector) algorithm, a line detection and segmentation algorithm that obtains sub-pixel-accuracy results in linear time; its core idea is to merge pixels with similar gradient directions, and it extracts line features faster than the traditional Hough transform, meeting the real-time requirement of autonomous positioning. Lines are described with the LBD (Line Band Descriptor) operator; similar to BRIEF descriptors, the binary LBD feature vector consists of many 0s and 1s, and effective line segment matching uses the appearance and geometric constraints of the line features. Spatial lines are represented in Plücker coordinates, which lets the linear projection model be applied to line features conveniently. Inter-frame motion is computed with the 3D-2D method to realize the autonomous positioning of the robot. During the inter-frame motion computation, because missing depth in the acquired depth images can make the inter-frame tracking error of line features grow or even cause tracking to fail, the invention proposes the sampling re-projection method to guarantee the use of line features and to handle scenes where feature points are scarce, improving the scene adaptability and robustness of the system. Finally, the specific embodiment of the invention optimizes the combined point-line re-projection error function to realize the position estimation and trajectory tracking of the mobile robot; a skeleton of this per-frame loop is sketched after this paragraph.
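The per-frame loop of this embodiment can be summarized with the hedged skeleton below; the helper names (denoise_depth, extract_and_match, backproject_segment, optimize_pose) refer to the illustrative sketches earlier in this description and are assumptions, not the patent's API.

```python
def visual_odometry_step(prev_frame, cur_frame, K, pose_prev):
    # 1) preprocess: bilateral filtering of the acquired depth image
    depth = denoise_depth(cur_frame.depth)
    # 2) extract and match ORB points and LSD line segments between frames
    kp1, kp2, matches, lines_prev, lines_cur = extract_and_match(
        prev_frame.color, cur_frame.color)
    # 3) lift matched features to 3D; use backproject_segment() where the
    #    depth of segment endpoints is missing (sampling re-projection)
    # 4)-5) minimize the combined point-line re-projection error, e.g. with
    #    optimize_pose(), and chain the result onto pose_prev
    ...
```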
As shown in Fig. 3, when depth information is available, the specific embodiment obtains the spatial line parameters corresponding to the detected scene features from the depth images acquired by the on-board Kinect v2 depth camera. However, this performs poorly when depth information is missing or the perspective is ambiguous: the image depth acquired by the Kinect is not always available, and, because of perspective projection, a 2D line segment in the color image does not always correspond to a 3D line segment in space. Therefore, when the three-dimensional depth of the 2D segment endpoints is missing, the specific embodiment of the invention recovers the 3D segment by sampling re-projection. Since a line segment consists of many points, the detected segment feature is uniformly sampled into n_l points. Sampled points without depth values are then discarded, and the 3D coordinates of the remaining points are computed from the depth image. Because the data are noisy, the resulting 3D coordinates do not necessarily form a straight line in three-dimensional space, so a three-dimensional line is next fitted to the 3D points with RANSAC combined with the Mahalanobis distance, excluding outliers at the same time.
As shown in Fig. 4, the point and line features in the specific embodiment of the invention define the matching error in the 3D-2D manner, which achieves higher precision than 3D-3D methods. When the relative pose transformation of the depth camera is solved by nonlinear optimization, the re-projection error representations of the point and line features must first be determined. Each 3D point is projected according to the currently estimated pose, and the projected position is compared with the matched pixel coordinate to give the error:

$e_{p,i} = u_i - \frac{1}{s_i} K \exp(\xi^{\wedge}) P_i$

where $u_i = (u_i, v_i, 1)^T$ is the matched pixel coordinate at which the 3D spatial point $P_i$ projects onto the 2D image, $s_i$ is the depth of the projected point, $K$ is the intrinsic matrix of the camera, and $\exp(\xi^{\wedge})$ is the Lie-algebra form of the transformation matrix $T_{kw}$.
The 3D-2D approach first re-projects the spatial line $L$ onto the image and then computes, in the 2D image, the error $e_l$ between the projected line $l_c$ and the line segment $l$ obtained by the matching described above. Suppose the line feature in the 2D image is represented by its two endpoints, $a$ and $b$. The re-projection error $e_l$ of segment $L$ can then be expressed as the distance from the endpoints $a, b$ of the matched segment $l$ to the projected line $l_c$:

$e_l = d(a, l_c) + d(b, l_c), \qquad d(p, l_c) = \frac{p^{T} l_c}{\sqrt{l_1^2 + l_2^2}}$

where $a, b$ are the endpoint coordinates in the homogeneous coordinate system, $d(l, l_c)$ denotes the distance between segment $l$ and line $l_c$, and the projected line has coordinates $l_c = [l_1, l_2, l_3]^{T}$.
Because of noise, the projection of the spatial line and the observed line do not coincide, which yields the observation error. Suppose the camera pose at the time of frame $k$ is $T_{kw}$ and a spatial line observed in frame $k$ is $L_{wj}$; the re-projection error of this line is then

$e_{l,kj} = d(l_{kj},\, n_c[\mathcal{H}_{kw} L_{wj}])$

where $l_{kj}$ is the observed line matched to it in frame $k$, $\mathcal{H}_{kw}$ is the line transformation matrix under the current camera pose $T_{kw}$, and $n_c[\cdot]$ extracts the moment vector of the line in Plücker coordinates, whose image projection gives the projected line used in the distance $d(\cdot,\cdot)$ above.
Assuming the observation errors are Gaussian, the final cost function is

$C = \sum_i \rho_p\left(e_{p,i}^{T} \Sigma_{p,i}^{-1} e_{p,i}\right) + \sum_{k,j} \rho_l\left(e_{l,kj}^{T} \Sigma_{l,kj}^{-1} e_{l,kj}\right)$

where $\Sigma_{p,i}^{-1}$ and $\Sigma_{l,kj}^{-1}$ are the information matrices of the point and line features respectively, and $\rho_p$, $\rho_l$ are robust Huber kernel functions. The state quantities to be estimated are solved so that the sum of the re-projection errors of all spatial points and lines is minimized; this problem is solved with the Levenberg-Marquardt method.
By optimizing the above state quantities, the optimal pose of the depth camera in each state of the specific embodiment is obtained. Since the camera is mounted on the mobile robot, the mobile robot can thus complete the autonomous positioning and pose estimation process relying on the depth camera sensor alone, laying the foundation for subsequent autonomous navigation.

Claims (6)

1. An indoor autonomous positioning method for a mobile robot combining scene point and line features, characterized by the following concrete operation steps:
1) Equipment installation and data acquisition: the depth camera sensor is fixed on top of the mobile robot, universal wheels that can move in any direction are installed on the robot's bottom, and a computer is placed inside to process the environmental data acquired by the depth camera;
2) Point and line feature extraction and matching for the scene around the robot: the depth camera obtains color and depth images of the scene, the acquired images are preprocessed for noise suppression, and the point and line features of the color images are extracted and matched;
3) Computation of feature depth information and handling of missing depth: the depth camera computes depth information, and the problem of partial image data lacking depth information is resolved;
4) Definition of the re-projection error of scene point and line features: after the depth information of the extracted point and line features has been computed in step 3), the re-projection errors of the point and line features are defined;
5) The optimized mobile robot pose is obtained by minimizing the re-projection error: the re-projection error defined in the previous step is optimized with a nonlinear optimization method so as to minimize it; since the variables to be optimized include the camera pose, the pose information of the mobile robot is obtained accordingly, realizing the autonomous positioning of the robot.
2. The indoor autonomous positioning method for a mobile robot combining scene point and line features according to claim 1, characterized in that: in step 1), the depth camera and the universal-wheel mobile chassis communicate with the computer through Robot Operating System (ROS) interfaces; all data processing and computation are carried out in the ROS system; the depth images acquired by the depth camera device are denoised with a bilateral filtering algorithm.
3. The indoor autonomous positioning method for a mobile robot combining scene point and line features according to claim 1, characterized in that: in step 2), the extracted point features are ORB features described with BRIEF descriptors, and feature point matching is performed on the binary descriptors of the feature points with the K-nearest-neighbor (KNN) algorithm; the extracted line features are LSD features described with LBD descriptors, and effective line segment matching is performed using the appearance and geometric constraints of the line features; the ORB feature extraction algorithm uses adaptive region segmentation.
4. The indoor autonomous positioning method for a mobile robot combining scene point and line features according to claim 1, characterized in that: in step 3), when depth information is present, the depth of the extracted point and line features is obtained directly from the depth camera; when depth information is missing, a sampling re-projection method recovers the depth of enough point and line features algorithmically, avoiding the positioning failures caused by missing depth.
5. The indoor autonomous positioning method for a mobile robot combining scene point and line features according to claim 1, characterized in that: in step 4), the matching error of the scene point and line features is defined with a 3D-2D method; the spatial parameters of scene line features are represented in Plücker coordinates.
6. The indoor autonomous positioning method for a mobile robot combining scene point and line features according to claim 1, characterized in that: in step 5), the state quantities to be estimated are solved by the Levenberg-Marquardt method so that the sum of the re-projection errors of all spatial points and lines is minimized; the resulting optimal variables are the optimized pose solution, giving the optimal pose data of the mobile robot; in the sampling re-projection algorithm, two-dimensional points are uniformly sampled on the extracted line segment features and back-projected into three-dimensional space using the depth information; a three-dimensional line is fitted to the 3D points with RANSAC combined with the Mahalanobis distance while excluding outliers, thereby obtaining the three-dimensional depth information.
CN201811166280.8A 2018-10-08 2018-10-08 Indoor autonomous positioning method for a mobile robot combining scene point and line features Pending CN109493385A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811166280.8A 2018-10-08 2018-10-08 Indoor autonomous positioning method for a mobile robot combining scene point and line features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811166280.8A 2018-10-08 2018-10-08 Indoor autonomous positioning method for a mobile robot combining scene point and line features

Publications (1)

Publication Number Publication Date
CN109493385A true CN109493385A (en) 2019-03-19

Family

ID=65690105

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811166280.8A Pending 2018-10-08 2018-10-08 Indoor autonomous positioning method for a mobile robot combining scene point and line features

Country Status (1)

Country Link
CN (1) CN109493385A (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110375732A (en) * 2019-07-22 2019-10-25 中国人民解放军国防科技大学 Monocular camera pose measurement method based on inertial measurement unit and point line characteristics
CN110570474A (en) * 2019-09-16 2019-12-13 北京华捷艾米科技有限公司 Pose estimation method and system of depth camera
CN110673543A (en) * 2019-09-10 2020-01-10 中国人民解放军63920部队 System deviation processing method and device for maintaining and controlling mission orbit of collinear translational point
CN110866497A (en) * 2019-11-14 2020-03-06 合肥工业大学 Robot positioning and image building method and device based on dotted line feature fusion
CN111283730A (en) * 2020-03-03 2020-06-16 广州赛特智能科技有限公司 Robot initial pose acquisition method based on point-line characteristics and starting self-positioning method
CN112884834A (en) * 2019-11-30 2021-06-01 华为技术有限公司 Visual positioning method and system
CN113514067A (en) * 2021-06-24 2021-10-19 上海大学 Mobile robot positioning method based on point-line characteristics
CN114972514A (en) * 2022-05-30 2022-08-30 歌尔股份有限公司 SLAM positioning method, device, electronic equipment and readable storage medium
CN114993293A (en) * 2022-07-28 2022-09-02 南京航空航天大学 Synchronous positioning and mapping method for mobile unmanned system in indoor weak texture environment
CN116499455A (en) * 2023-06-19 2023-07-28 煤炭科学研究总院有限公司 Positioning method and device

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106909877A * 2016-12-13 2017-06-30 Zhejiang University Simultaneous visual mapping and localization method based on combined point and line features

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106909877A * 2016-12-13 2017-06-30 Zhejiang University Simultaneous visual mapping and localization method based on combined point and line features

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
李帅鑫: "机组合的3D SLAM技术研究", 《道客巴巴》 *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110375732A (en) * 2019-07-22 2019-10-25 中国人民解放军国防科技大学 Monocular camera pose measurement method based on inertial measurement unit and point line characteristics
CN110673543A (en) * 2019-09-10 2020-01-10 中国人民解放军63920部队 System deviation processing method and device for maintaining and controlling mission orbit of collinear translational point
CN110570474A (en) * 2019-09-16 2019-12-13 北京华捷艾米科技有限公司 Pose estimation method and system of depth camera
CN110570474B (en) * 2019-09-16 2022-06-10 北京华捷艾米科技有限公司 Pose estimation method and system of depth camera
CN110866497B (en) * 2019-11-14 2023-04-18 合肥工业大学 Robot positioning and mapping method and device based on dotted line feature fusion
CN110866497A (en) * 2019-11-14 2020-03-06 合肥工业大学 Robot positioning and image building method and device based on dotted line feature fusion
CN112884834A (en) * 2019-11-30 2021-06-01 华为技术有限公司 Visual positioning method and system
CN111283730A (en) * 2020-03-03 2020-06-16 广州赛特智能科技有限公司 Robot initial pose acquisition method based on point-line characteristics and starting self-positioning method
CN113514067A (en) * 2021-06-24 2021-10-19 上海大学 Mobile robot positioning method based on point-line characteristics
CN114972514A (en) * 2022-05-30 2022-08-30 歌尔股份有限公司 SLAM positioning method, device, electronic equipment and readable storage medium
CN114993293A (en) * 2022-07-28 2022-09-02 南京航空航天大学 Synchronous positioning and mapping method for mobile unmanned system in indoor weak texture environment
CN116499455A (en) * 2023-06-19 2023-07-28 煤炭科学研究总院有限公司 Positioning method and device
CN116499455B (en) * 2023-06-19 2023-11-14 煤炭科学研究总院有限公司 Positioning method and device

Similar Documents

Publication Publication Date Title
CN109493385A (en) Indoor autonomous positioning method for a mobile robot combining scene point and line features
CN105856230B (en) A kind of ORB key frames closed loop detection SLAM methods for improving robot pose uniformity
US6985620B2 (en) Method of pose estimation and model refinement for video representation of a three dimensional scene
CN109166149A (en) A kind of positioning and three-dimensional wire-frame method for reconstructing and system of fusion binocular camera and IMU
CN104704384B (en) Specifically for the image processing method of the positioning of the view-based access control model of device
Meilland et al. A spherical robot-centered representation for urban navigation
CN103400409A (en) 3D (three-dimensional) visualization method for coverage range based on quick estimation of attitude of camera
WO2006083297A2 (en) Method and apparatus for aligning video to three-dimensional point clouds
WO2018019272A1 (en) Method and apparatus for realizing augmented reality on the basis of plane detection
CN108052103A (en) The crusing robot underground space based on depth inertia odometer positions simultaneously and map constructing method
Zhang et al. Building a partial 3D line-based map using a monocular SLAM
Zhao et al. Reconstruction of textured urban 3D model by fusing ground-based laser range and CCD images
Bülow et al. Fast and robust photomapping with an unmanned aerial vehicle (uav)
CN102999895B (en) Method for linearly solving intrinsic parameters of camera by aid of two concentric circles
CN104166995B (en) Harris-SIFT binocular vision positioning method based on horse pace measurement
Se et al. Instant scene modeler for crime scene reconstruction
Fang et al. Ground-texture-based localization for intelligent vehicles
Park et al. Hand-held 3D scanning based on coarse and fine registration of multiple range images
CN109443320A (en) Binocular vision speedometer and measurement method based on direct method and line feature
Jokinen et al. Lower bounds for as-built deviations against as-designed 3-D Building Information Model from single spherical panoramic image
EP1890263A2 (en) Method of pose estimation adn model refinement for video representation of a three dimensional scene
CN113487726A (en) Motion capture system and method
Sourimant et al. GPS, GIS and video registration for building reconstruction
Sun et al. Hybrid tracking for augmented reality GIS registration
Lee et al. A single-view based framework for robust estimation of height and position of moving people

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190319