CN109934094B - System and method for improving monocular vision environment exploration reliability of mobile robot - Google Patents
Abstract
The invention discloses a system and a method for improving the reliability of monocular-vision environment exploration by a mobile robot. The invention can extend the reliable running time of a monocular-vision-based mobile robot system and establishes a mobile robot exploration system suited to real-world application scenarios.
Description
Technical Field
The invention relates to the technical field of robot vision, and in particular to a system and method for improving the reliability of monocular-vision environment exploration by a mobile robot.
Background
In the field of mobile robots that use monocular vision for environment exploration and modeling, image-based environment modeling is unreliable because image information is not robust to changes in illumination and viewing angle. In commercial applications such as indoor floor-sweeping robots and service robots, system reliability is critical, and whether the system can operate continuously for a long time is an important criterion for the commercial adoption of vision-based mobile robots.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a system for improving the reliability of monocular-vision environment exploration by a mobile robot. Reliability is improved from three aspects: environment modeling, motion control, and modeling-integrity evaluation.
In order to achieve the purpose, the technical scheme provided by the invention is as follows:
a system for improving the reliability of monocular visual environment exploration of a mobile robot comprises an active controller, a visual modeling module and an integrity evaluation module which are sequentially connected;
the active controller is used for driving a monocular vision sensor carried on the mobile robot to acquire a current image, and planning a motion target based on the acquired current image to realize motion control of the mobile robot; continuously updating the current image in real time in the motion process of the mobile robot;
the visual modeling module expresses the environment in blocks using a multi-subgraph environment representation, and establishes an environment expression model;
and the integrity evaluation module evaluates the integrity of the modeled part using the images stored during the visual exploration process.
To achieve the above object, the present invention further provides a method for the above system for improving the reliability of monocular visual environment exploration of a mobile robot, comprising the steps of:
s1: the active controller is used for driving a monocular vision sensor carried on the mobile robot to acquire a current image;
s2: planning a moving target based on the obtained current image;
s3: continuously updating the current image in real time in the moving process of the mobile robot;
s4: the visual modeling module uses an environment expression mode of multiple subgraphs, and utilizes the obtained image information to perform block expression on the environment portrayal so as to establish an environment expression model;
s5: and the integrity evaluation module is used for evaluating the integrity of the modeled part by utilizing the image storage condition established in the visual exploration process, judging whether the environment modeling is completed or not, and stopping exploration when the requirements are met, thereby improving the integrity of the finally established environment expression model data.
Further, in step S2, a motion target is planned based on the obtained current image, with the following specific steps:
s2-1: taking the obtained current image as algorithm input, and performing ground segmentation by using a depth convolution neural network to obtain a travelable area under the view angle of the currently obtained image;
s2-2: according to the segmentation result, planning a motion control target point under the current view angle by counting the pixel distribution of the travelable-region segmentation result;
s2-3: and after the motion control target point is obtained, assuming the image origin as the position of the current mobile robot, and realizing mobile robot control through the active controller according to the relative position of the motion control target point and the current mobile robot obtained in the image space.
Further, in step S2-2, statistics of the pixel distribution of the travelable-region segmentation result yield the distribution of pixels along the X and Y directions of image space, and the motion control target point is then planned along the most likely motion direction according to these statistics;
the most likely motion direction is solved as follows: the extent of the ground in different directions in pixel coordinates is represented by the standard deviations v_x and v_y, which give the magnitude of robot motion in each direction; using the Gaussian distribution statistics of the pixels, the most likely motion direction is obtained as:
η_direction = (v_x, v_y).
Further, in step S2-2, the motion control target point is planned using a discrete set of search lines in image space; the specific steps are as follows:
using the Gaussian distribution statistics and the pixel distribution in different directions of the image, a search line l_c is constructed starting from the point o_image = (m_x, m_y) with η_direction as its direction;
with l_c as the center and o_image as the base point, l_c is rotated left and right i times by a given angle θ, giving a discrete set of search lines l centered on l_c that discretely and uniformly cover the whole image;
each line element of l is searched in the image starting from (m_x, m_y) until the last valid ground-segmentation point is reached, giving the candidate point set p_p collected over all elements of l;
from p_p, the point farthest from (m_x, m_y) in image space is selected as the final motion planning target point p_g, i.e., the motion target point is solved according to the following cost function:
p_g = arg max_{p_i ∈ p_p} d(p_i, o_image)
where the function d() computes the Euclidean distance, and m_x, m_y are the means of the ground-pixel distribution along the X and Y directions.
Further, in step S4, visual tracking is implemented using feature-point-based data association; with multi-subgraph maintenance, a new subgraph is created when tracking fails, so that modeling continues within the new subgraph; in addition, data association between subgraphs is established through similar images shared by the subgraphs, enabling subgraph alignment and fusion, and thus breakpoint resumption of the visual modeling data.
Further, during graph optimization, Pose-graph optimization is used: the actual observation obtained by the mobile robot's monocular vision sensor is denoted z_i, while the observation predicted by the robot's motion model is denoted h(x_i), where x_i is the estimated mobile robot state; the difference e_i between the predicted and actual observation is the estimation error of the robot state x_i:
e_i = z_i - h(x_i);
because the mobile robot is in continuous motion, there are multiple actual and predicted observations in one-to-one correspondence, and the error of the whole graph is expressed as the sum of squared errors:
F(x) = Σ_i e_i^T e_i;
taking F(x) as the cost function, with each estimated robot state x_i as a variable, the optimization model is formulated probabilistically: that is, given the known actual sensor observations z_i, a maximum a posteriori estimation problem is solved to find the optimal estimate x* = arg min_x F(x) that minimizes the cost function.
Further, in step S5, the integrity evaluation includes a density-based integrity evaluation and a distribution-based integrity evaluation;
when the density-based integrity evaluation meets the requirement, the distribution-based integrity evaluation stage is entered;
the total judgment result is: J_termination = J_distribution & J_density.
Further, the density-based integrity evaluation is specifically:
each time a new image k is collected, a new node is established and the number of edges e_k that can be established between this node and the existing nodes is counted; the denser the node distribution, the larger e_k. When the number of edges that can be established is greater than a given threshold T_e, the density evaluation is considered satisfied and the distribution-based integrity evaluation stage is entered; the criterion result J_density is expressed as:
J_density = 1 if e_k > T_e, and 0 otherwise.
Further, the specific process of the distribution-based integrity evaluation is as follows:
a: gridding the currently established environment expression model;
b: in the gridded environment expression, assigning all collected image nodes to their corresponding grids;
c: counting the number of images in each grid and their pose distribution, and evaluating the distribution integrity of image acquisition within each grid;
d: from the per-grid distribution integrity scores obtained in step C, counting the distribution integrity score n_c of the whole visual environment model;
e: combining the whole-model distribution integrity score n_c with a given distribution-evaluation threshold T_c to obtain the distribution criterion result J_distribution.
Compared with the prior art, the principles and advantages of this scheme are as follows:
1. Combining the multi-subgraph environment modeling module with the active controller module improves the system reliability of visual exploration from the perspective of breakpoint resumption of data. Because the active controller can perform appropriate motion control using only the current image, the mobile robot can move safely in any situation; because the multi-subgraph modeling module supports breakpoint resumption, modeling can continue within a subgraph even when tracking fails, so the mobile robot can keep its map updated at all times using monocular vision. With the two modules combined and the acquired images continuously updated, uninterrupted and highly reliable exploration and modeling of the environment can be achieved.
2. An environment-integrity evaluation mechanism achieves complete modeling of the environment during continuous exploration, and the integrity-verified environment model enables reliable visual localization, improving the reliability of the visual exploration system against visual data-association failures. Environment-integrity evaluation quantitatively assesses the completeness of the environment depiction using the density and distribution of the acquired images. When the acquired images are dense enough and evenly distributed in the environment, exploration and modeling stop and the built model is saved. The integrity-verified environment model guarantees image coverage at every position and angle, improving monocular-vision localization capability and thus the reliability of safe robot operation in the environment.
Drawings
FIG. 1 is a schematic diagram of a system for improving reliability of monocular visual environment exploration of a mobile robot according to the present invention;
FIG. 2 is a flow chart of a method of a system for improving monocular visual environment exploration reliability for a mobile robot of the present invention;
FIG. 3 is a diagram of the multi-subgraph graph-theory model according to the present invention.
Detailed Description
The invention will be further illustrated with reference to specific examples:
referring to fig. 1, the system for improving the reliability of the monocular visual environment exploration of the mobile robot according to the embodiment includes an active controller 1, a visual modeling module 2, and a integrity evaluation module 3, which are sequentially connected.
Referring to fig. 2, the specific steps of the working principle are as follows:
s1: the active controller 1 is used for driving a monocular vision sensor carried on the mobile robot to acquire a current image;
s2: planning a moving target based on the obtained current image:
s2-1: taking the obtained current image as algorithm input, and performing ground segmentation by using a depth convolution neural network to obtain a travelable area under the view angle of the currently obtained image;
s2-2: according to the segmentation result, the pixel distribution of the travelable-region segmentation result is counted to obtain the distribution of pixels along the X and Y directions of image space, and the motion control target point is then planned along the most likely motion direction according to these statistics;
the most likely motion direction is solved as follows: the extent of the ground in different directions in pixel coordinates is represented by the standard deviations v_x and v_y, which give the magnitude of robot motion in each direction; using the Gaussian distribution statistics of the pixels, the most likely motion direction is obtained as:
η_direction = (v_x, v_y);
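The direction statistic above can be sketched in Python. This is a hedged illustration rather than the patent's implementation: the function name `most_likely_direction` and the binary ground-mask input format are assumptions; the means of the ground pixels give o_image = (m_x, m_y) and the standard deviations give η_direction = (v_x, v_y).

```python
import numpy as np

def most_likely_direction(ground_mask):
    """Estimate the most likely motion direction from a binary
    ground-segmentation mask (H x W, 1 marking travelable pixels),
    using the Gaussian statistics of the ground-pixel positions."""
    ys, xs = np.nonzero(ground_mask)
    # Means of the ground-pixel positions: o_image = (m_x, m_y).
    m_x, m_y = xs.mean(), ys.mean()
    # Standard deviations measure how far the ground extends along
    # each image axis; they form the direction vector
    # eta_direction = (v_x, v_y).
    v_x, v_y = xs.std(), ys.std()
    return (m_x, m_y), (v_x, v_y)
```

For a mask whose ground pixels extend mostly along the image Y axis, v_y dominates v_x, so the resulting direction vector points the planner along the vertical image direction.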
specifically, in step S2-2, the motion control target point is planned using a discrete set of search lines in image space; the specific steps are as follows:
using the Gaussian distribution statistics and the pixel distribution in different directions of the image, a search line l_c is constructed starting from the point o_image = (m_x, m_y) with η_direction as its direction;
with l_c as the center and o_image as the base point, l_c is rotated left and right i times by a given angle θ, giving a discrete set of search lines l centered on l_c that discretely and uniformly cover the whole image;
each line element of l is searched in the image starting from (m_x, m_y) until the last valid ground-segmentation point is reached, giving the candidate point set p_p collected over all elements of l;
from p_p, the point farthest from (m_x, m_y) in image space is selected as the final motion planning target point p_g, i.e., the motion target point is solved according to the following cost function:
p_g = arg max_{p_i ∈ p_p} d(p_i, o_image)
where the function d() computes the Euclidean distance, and m_x, m_y are the means of the ground-pixel distribution along the X and Y directions;
s2-3: after the motion control target point is obtained, assuming the image origin as the position of the current mobile robot, and realizing mobile robot control through the active controller according to the relative position of the motion control target point and the current mobile robot obtained in the image space;
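A minimal sketch of the discrete-search-line planner of step S2-2, under stated assumptions: the function name, the binary ground-mask input, and the default values of θ and i are illustrative, and the pixel-by-pixel march along each line is one plausible way to find the last valid ground-segmentation point.

```python
import numpy as np

def plan_target_point(ground_mask, origin, direction, theta=np.pi / 12, i_steps=5):
    """Rotate the central search line l_c (through `origin`, along
    `direction`) left and right `i_steps` times by `theta`, march
    along each resulting line until the ground segmentation ends,
    and return the farthest reachable candidate point p_g."""
    h, w = ground_mask.shape
    base = np.arctan2(direction[1], direction[0])
    angles = [base + k * theta for k in range(-i_steps, i_steps + 1)]
    candidates = []
    for a in angles:
        dx, dy = np.cos(a), np.sin(a)
        last_valid = None
        # March outward pixel by pixel until leaving the mask or image.
        for t in range(1, max(h, w)):
            x = int(round(origin[0] + t * dx))
            y = int(round(origin[1] + t * dy))
            if not (0 <= x < w and 0 <= y < h) or ground_mask[y, x] == 0:
                break
            last_valid = (x, y)
        if last_valid is not None:
            candidates.append(last_valid)
    if not candidates:
        return None
    # p_g = argmax over candidates of the Euclidean distance d(p_i, o_image).
    return max(candidates, key=lambda p: np.hypot(p[0] - origin[0], p[1] - origin[1]))
```

Choosing the farthest candidate implements the arg-max cost function of step S2-2: the robot is steered toward the most distant reachable ground point under the current view.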
s3: continuously updating the current image in real time in the moving process of the mobile robot;
s4: the visual modeling module 2 uses an environment expression mode of a plurality of subgraphs to express the environment description in blocks and establish an environment expression model;
since the visual tracking and modeling process is fragile, tracking failures often occur. Therefore, in this step, visual tracking is implemented using feature-point-based data association; with multi-subgraph maintenance, a new subgraph is created when tracking fails, so that modeling continues within the new subgraph. In addition, data association between subgraphs is established through similar images shared by the subgraphs, enabling subgraph alignment and fusion, and thus improving system reliability from the perspective of breakpoint resumption of the visual modeling data;
during graph optimization, Pose-graph optimization is used: the actual observation obtained by the mobile robot's monocular vision sensor is denoted z_i, while the observation predicted by the robot's motion model is denoted h(x_i), where x_i is the estimated mobile robot state; the difference e_i between the predicted and actual observation is the estimation error of the robot state x_i:
e_i = z_i - h(x_i);
because the mobile robot is in continuous motion, there are multiple actual and predicted observations in one-to-one correspondence, and the error of the whole graph is expressed as the sum of squared errors:
F(x) = Σ_i e_i^T e_i;
taking F(x) as the cost function, with each estimated robot state x_i as a variable, the optimization model is formulated probabilistically: that is, given the known actual sensor observations z_i, a maximum a posteriori estimation problem is solved to find the optimal estimate x* = arg min_x F(x) that minimizes the cost function;
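The residual and cost above can be written out as a short sketch. This assumes a generic motion-model function h() and unit information matrices, since the patent states only a plain sum of squared errors:

```python
import numpy as np

def graph_error(states, observations, motion_model):
    """Pose-graph cost F(x): for each pair (x_i, z_i) the residual is
    e_i = z_i - h(x_i), and F(x) is the sum of squared residuals.
    `motion_model` plays the role of the prediction function h()."""
    total = 0.0
    for x_i, z_i in zip(states, observations):
        e_i = np.asarray(z_i) - np.asarray(motion_model(x_i))
        total += float(np.dot(e_i, e_i))  # e_i^T e_i
    return total
```

In practice, the minimization of F(x) over the states x_i would be carried out by a nonlinear least-squares solver (e.g., Gauss-Newton or Levenberg-Marquardt), which is what Pose-graph optimization libraries implement.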
the multi-subgraph graph-theory model is shown in fig. 3, in which large circles represent nodes, i.e., the estimated states x_i of the mobile robot at various times; arrows represent edges, i.e., the error-term constraints e_i; small circles represent the data associations used for alignment between subgraphs, which are also constraints e_i. Large circles with the same number are nodes within the same subgraph. Through the above steps, the explored environment expression model is obtained;
s5: the integrity evaluation module 3 carries out integrity evaluation on the modeled part by utilizing the image storage condition established in the visual exploration process;
the integrity evaluation comprises density-based integrity evaluation and distribution-based integrity evaluation;
when the integrity evaluation based on the density meets the requirement, entering a distribution-based integrity evaluation stage;
the density-based integrity evaluation specifically includes:
each time a new image k is collected, a new node is established and the number of edges e_k that can be established between this node and the existing nodes is counted; the denser the node distribution, the larger e_k. When the number of edges that can be established is greater than a given threshold T_e, the density evaluation is considered satisfied and the distribution-based integrity evaluation stage is entered; the criterion result J_density is expressed as:
J_density = 1 if e_k > T_e, and 0 otherwise.
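A hedged sketch of the density criterion: how an edge between a new image node and an existing node is detected is not specified here, so the generic `similarity` function and the matching threshold `t_match` below are assumptions; the J_density rule itself follows the stated e_k > T_e test.

```python
def count_edges(new_descriptor, existing_descriptors, similarity, t_match=0.8):
    """Count the edges e_k a newly collected image node can establish
    with existing nodes: an edge exists when the image similarity
    exceeds a matching threshold (both the similarity function and
    t_match are illustrative assumptions)."""
    return sum(1 for d in existing_descriptors
               if similarity(new_descriptor, d) > t_match)

def density_criterion(e_k, t_e):
    # J_density = 1 when e_k > T_e (density requirement met), else 0.
    return 1 if e_k > t_e else 0
```

Once `density_criterion` returns 1 for a newly added node, the system moves on to the distribution-based integrity evaluation stage.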
the specific procedure of integrity evaluation based on distribution is as follows:
a: gridding the currently established environment expression model;
b: in the gridding environment expression, all the collected image nodes are classified into a corresponding grid;
c: counting the number of images of each grid and the distribution posture of the images, and evaluating the distribution integrity of image acquisition in each grid;
specifically, a parameterized expression of the uniform distribution is used as the integrity evaluation criterion of the visual composition, as follows:
the mean and standard deviation of the set of angles of all image poses in one grid are counted, and their Euclidean distance to the mean m_ideal and standard deviation v_ideal of an ideal uniform distribution is computed, giving the distance between the pose distribution of the modeled part in the grid and the ideal pose distribution; the ideal uniform distribution is defined here as U(-π, π), from whose parameterization m_ideal and v_ideal are obtained;
from this distance, the distribution integrity of image acquisition in a single grid is evaluated: the larger the distance, the larger the gap between the current modeling and the ideal modeling, and the lower the integrity of the modeling;
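The per-grid comparison with the ideal uniform distribution U(-π, π) can be sketched as follows; m_ideal = 0 and v_ideal = 2π/√12 are the mean and standard deviation of U(-π, π), and the function name is an illustrative assumption:

```python
import numpy as np

def grid_distribution_distance(pose_angles):
    """Distance between the pose-angle distribution inside one grid
    cell and the ideal uniform distribution U(-pi, pi): compare the
    (mean, std) of the observed angles with (m_ideal, v_ideal) via
    Euclidean distance.  A larger distance means the grid's pose
    coverage is farther from ideal, i.e. less complete."""
    m_ideal = 0.0                      # mean of U(-pi, pi)
    v_ideal = 2 * np.pi / np.sqrt(12)  # std of U(-pi, pi)
    a = np.asarray(pose_angles, dtype=float)
    return float(np.hypot(a.mean() - m_ideal, a.std() - v_ideal))
```

A grid whose images were all taken from nearly the same viewing angle scores a large distance, while a grid covered from angles spread across the full circle scores close to zero.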
d: from the per-grid distribution integrity scores obtained in step C, the distribution integrity score n_c of the whole visual environment model is computed by aggregating the scores over all j grids in the environment;
e: combining the whole-model distribution integrity score n_c with a given distribution-evaluation threshold T_c yields the distribution criterion result J_distribution;
the total judgment result is: J_termination = J_distribution & J_density;
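Putting the two criteria together, a hedged sketch of the stop test; the direction of the n_c > T_c comparison is an assumption, since the exact J_distribution formula is not reproduced in this text:

```python
def distribution_criterion(n_c, t_c):
    # J_distribution = 1 when the whole-model distribution score n_c
    # passes the threshold T_c (the inequality direction is an
    # illustrative assumption), else 0.
    return 1 if n_c > t_c else 0

def termination(j_density, j_distribution):
    # J_termination = J_distribution & J_density: exploration stops
    # only when both the density and the distribution criteria hold.
    return j_density & j_distribution
```

With this conjunction, exploration continues as long as either the image density or the pose-distribution coverage of the model is still insufficient.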
In step S5, integrity evaluation is performed on the modeled part, whether environment modeling is completed or not is determined, and exploration is stopped when the requirement is met, so that integrity of the finally established environment expression model data is improved, and reliability of monocular visual positioning is improved.
The above-mentioned embodiments are merely preferred embodiments of the present invention, and the scope of the present invention is not limited thereto, so that variations based on the shape and principle of the present invention should be covered within the scope of the present invention.
Claims (10)
1. A system for improving the reliability of a mobile robot in monocular visual environment exploration is characterized by comprising an active controller (1), a visual modeling module (2) and an integrity evaluation module (3), which are sequentially connected;
the active controller (1) is used for driving a monocular vision sensor carried on the mobile robot to acquire a current image, and planning a motion target based on the acquired current image to realize motion control of the mobile robot; continuously updating the current image in real time in the motion process of the mobile robot;
the visual modeling module (2) uses an environment expression mode of a plurality of subgraphs to express the environment description in blocks and establish an environment expression model;
and the integrity evaluation module (3) carries out integrity evaluation on the modeled part by utilizing the image storage condition established in the visual exploration process.
2. A method for a system for improving reliability of monocular visual environment exploration of a mobile robot according to claim 1, comprising the steps of:
s1: the active controller is used for driving a monocular vision sensor carried on the mobile robot to acquire a current image;
s2: planning a moving target based on the obtained current image;
s3: continuously updating the current image in real time in the moving process of the mobile robot;
s4: the visual modeling module uses an environment expression mode of multiple subgraphs, and utilizes the obtained image information to perform block expression on the environment portrayal so as to establish an environment expression model;
s5: and the integrity evaluation module is used for evaluating the integrity of the modeled part by utilizing the image storage condition established in the visual exploration process, judging whether the environment modeling is completed or not, and stopping exploration when the requirements are met, thereby improving the integrity of the finally established environment expression model data.
3. The method according to claim 2, wherein the step S2 of planning the moving object based on the obtained current image comprises the following steps:
s2-1: taking the obtained current image as algorithm input, and performing ground segmentation by using a depth convolution neural network to obtain a travelable area under the current image view angle;
s2-2: planning a motion control target point under the current visual angle by counting the distribution condition of the segmentation result pixels of the travelable region according to the segmentation result so as to obtain the motion control target point;
s2-3: and after the motion control target point is obtained, assuming the image origin as the position of the current mobile robot, and realizing mobile robot control through the active controller according to the relative position of the motion control target point and the current mobile robot obtained in the image space.
4. The method according to claim 3, wherein in step S2-2, statistics of the pixel distribution of the travelable-region segmentation result yield the distribution of pixels along the X and Y directions of image space, and the motion control target point is then planned along the most likely motion direction according to these statistics;
the most likely motion direction is solved as follows: the extent of the ground in different directions in pixel coordinates is represented by the standard deviations v_x and v_y, which give the magnitude of robot motion in each direction; using the Gaussian distribution statistics of the pixels, the most likely motion direction is obtained as:
η_direction = (v_x, v_y).
5. The method according to claim 3, wherein in step S2-2 the motion control target point is planned using a discrete set of search lines in image space; the specific steps are as follows:
using the Gaussian distribution statistics and the pixel distribution in different directions of the image, a search line l_c is constructed starting from the point o_image = (m_x, m_y) with η_direction as its direction;
with l_c as the center and o_image as the base point, l_c is rotated left and right i times by a given angle θ, giving a discrete set of search lines l centered on l_c that discretely and uniformly cover the whole image;
each straight-line element of l is searched in the image starting from (m_x, m_y) until the last valid ground-segmentation point is reached, giving the candidate point set p_p collected over all elements of l;
from p_p, the point farthest from (m_x, m_y) in image space is selected as the final motion control target point p_g, i.e., the motion control target point is solved according to the following cost function:
p_g = arg max_{p_i ∈ p_p} d(p_i, o_image)
where the function d() computes the Euclidean distance, and m_x, m_y are the means of the ground-pixel distribution along the X and Y directions.
6. The method according to claim 2, wherein in step S4, visual tracking is implemented by using a data association method based on feature points; using multi-subgraph maintenance, and reconstructing a subgraph under the condition of failed tracking, thereby realizing continuous modeling under the subgraph; in addition, data association between sub-images is established through similar images between the sub-images, alignment fusion of the sub-images is achieved, and therefore data breakpoint continuous transmission from visual modeling is achieved.
7. The method as claimed in claim 6, wherein during graph optimization, Pose-graph optimization is used: the actual observation obtained by the mobile robot's monocular vision sensor is denoted z_i, while the observation predicted by the robot's motion model is denoted h(x_i), where x_i is the estimated mobile robot state; the difference e_i between the predicted and actual observation is the estimation error of the estimated robot state x_i:
e_i = z_i - h(x_i)
because the mobile robot is in continuous motion, there are multiple actual and predicted observations in one-to-one correspondence, and the error of the whole graph is expressed as the sum of squared errors:
F(x) = Σ_i e_i^T e_i
taking F(x) as the cost function, with each estimated robot state x_i as a variable, the optimization model is formulated probabilistically.
8. The method according to claim 2, wherein in the step S5, the integrity evaluation includes a density-based integrity evaluation and a distribution-based integrity evaluation;
when the integrity evaluation based on the density meets the requirement, entering a distribution-based integrity evaluation stage;
the overall integrity assessment result is: J_termination = J_distribution & J_density.
9. The method according to claim 8, wherein the density-based integrity evaluation is specifically:
each time a new image k is collected, a new node is established and the number of edges e_k that can be established between this node and the existing nodes is counted; the denser the node distribution, the larger e_k. When the number of edges that can be established is greater than a given threshold T_e, the density evaluation is considered satisfied and the distribution-based integrity evaluation stage is entered; the density-based integrity evaluation result J_density is expressed as:
J_density = 1 if e_k > T_e, and 0 otherwise.
10. the method according to claim 8, wherein the distribution-based integrity assessment is performed as follows:
a: gridding the currently established environment expression model;
b: in the gridding environment expression, all the collected image nodes are classified into a corresponding grid;
c: counting the number of images of each grid and the distribution posture of the images, and evaluating the distribution integrity of image acquisition in each grid;
d: according to the distribution integrity score of each grid obtained in step c, the distribution integrity score n_c of the whole visual environment model is counted;
e: combining the distribution integrity score n_c of the whole visual environment model with a given threshold T_c of the distribution evaluation, the distribution-based integrity evaluation result J_distribution is obtained, which is specifically:
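Steps a-e of claim 10 can be sketched as below, together with claim 8's final combination J_termination = J_distribution & J_density. The per-cell score (fraction of heading sectors covered by the images in that cell) and the aggregation by averaging are assumptions for illustration; the claim only fixes the overall structure of gridding, per-grid scoring, aggregation into n_c, and thresholding with T_c.

```python
from collections import defaultdict
from math import floor, pi

def distribution_check(images, cell=2.0, headings=4, T_c=0.5):
    """Hypothetical grid-based distribution-integrity evaluation.

    images: iterable of (x, y, theta) image poses.
    """
    cells = defaultdict(set)
    for x, y, theta in images:                       # steps a-b: grid the model,
        key = (floor(x / cell), floor(y / cell))     # bin each image node
        sector = int((theta % (2 * pi)) / (2 * pi / headings))
        cells[key].add(sector)                       # record viewing direction
    # step c: per-cell score = fraction of heading sectors covered
    scores = [len(s) / headings for s in cells.values()]
    n_c = sum(scores) / len(scores)                  # step d: model-wide score
    return int(n_c > T_c), n_c                       # step e: J_distribution

images = [(0.1, 0.1, 0.0), (0.2, 0.3, 2.0), (0.5, 0.2, 4.0),
          (1.0, 0.1, 5.5), (3.0, 0.5, 0.0), (3.2, 0.4, 3.1)]
j_distribution, n_c = distribution_check(images)
j_density = 1                                        # assumed output of the density stage
j_termination = j_distribution & j_density           # claim 8's combination
print(j_termination, round(n_c, 2))
```

The first grid cell is observed from all four heading sectors (score 1.0) and the second from two (score 0.5), so n_c = 0.75 exceeds T_c and exploration may terminate.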
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910057142.4A CN109934094B (en) | 2019-01-22 | 2019-01-22 | System and method for improving monocular vision environment exploration reliability of mobile robot |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109934094A CN109934094A (en) | 2019-06-25 |
CN109934094B true CN109934094B (en) | 2022-04-19 |
Family
ID=66985171
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910057142.4A Active CN109934094B (en) | 2019-01-22 | 2019-01-22 | System and method for improving monocular vision environment exploration reliability of mobile robot |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109934094B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102542563A (en) * | 2011-11-24 | 2012-07-04 | 广东工业大学 | Modeling method of forward direction monocular vision of mobile robot |
CN104864849A (en) * | 2014-02-24 | 2015-08-26 | 电信科学技术研究院 | Visual navigation method and device and robot |
CN105843223A (en) * | 2016-03-23 | 2016-08-10 | 东南大学 | Mobile robot three-dimensional mapping and obstacle avoidance method based on space bag of words model |
CN106840148A (en) * | 2017-01-24 | 2017-06-13 | 东南大学 | Wearable positioning and path guide method based on binocular camera under outdoor work environment |
Non-Patent Citations (1)
Title |
---|
Submap-based Pose-graph Visual SLAM: A Robust Visual Exploration and Localization System; Weinan Chen et al.; International Conference on Intelligent Robots and Systems (IROS); 2018-10-05; pp. 6851-6856 * |
Also Published As
Publication number | Publication date |
---|---|
CN109934094A (en) | 2019-06-25 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right | ||
Effective date of registration: 2021-09-02
Address after: Room 3, 803, Floor 8, Block 3, Tian'an Center, No. 31 Jihua East Road, Guicheng Street, Nanhai District, Foshan City, Guangdong Province, 528253 (residence declaration)
Applicant after: Jiutian Innovation (Guangdong) Intelligent Technology Co., Ltd.
Address before: No. 100 Waihuan West Road, University Town, Guangzhou, Guangdong 510062
Applicant before: GUANGDONG UNIVERSITY OF TECHNOLOGY
GR01 | Patent grant | ||