CN109934094A - System and method for improving the reliability of mobile robot monocular vision environment exploration - Google Patents
System and method for improving the reliability of mobile robot monocular vision environment exploration
- Publication number
- CN109934094A (application CN201910057142.4A)
- Authority
- CN
- China
- Prior art keywords
- image
- mobile robot
- distribution
- environment
- completeness
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
- Image Analysis (AREA)
- Manipulator (AREA)
Abstract
The invention discloses a system and method for improving the reliability of mobile robot monocular vision environment exploration. Starting from the sensing characteristics of monocular vision, it combines an environmental modeling strategy, a control strategy, and a modeling completeness assessment strategy to establish a visual navigation scheme with breakpoint-resume and failure-prevention abilities. The invention can extend the reliable operating time of monocular-vision mobile robot systems and establishes a mobile robot exploration system well suited to real-world application scenarios.
Description
Technical field
The present invention relates to the technical field of robot vision, and more particularly to a system for improving the reliability of mobile robot monocular vision environment exploration.
Background art
In the field of mobile robots that explore and model their environment with monocular vision, image-based environmental modeling has poor robustness to illumination and viewpoint changes, so its reliability is weak. Yet in indoor commercial applications such as sweeping robots and service robots, system reliability is vital: whether the system can run continuously over long periods is an important criterion for judging whether a vision-based mobile robot is ready for commercial deployment.
Summary of the invention
It is an object of the present invention to overcome the deficiencies of the prior art and to propose a system for improving the reliability of mobile robot monocular vision environment exploration. The invention addresses three aspects, environmental modeling, motion control, and modeling completeness assessment, to improve the reliability of mobile robots in monocular vision environment exploration.
To achieve the above object, the technical solution provided by the present invention is as follows:
A system for improving the reliability of mobile robot monocular vision environment exploration, comprising an active controller, a visual modeling module, and a completeness evaluation module, connected in sequence;
wherein the active controller drives a monocular vision sensor mounted on the mobile robot to acquire the current image, plans a motion target based on the acquired current image, and realizes the motion control of the mobile robot, the current image being continuously updated in real time during the motion of the mobile robot;
the visual modeling module uses a multi-submap environment representation to describe the environment in blocks and establish an environment representation model;
the completeness evaluation module assesses the completeness of the modeled portion using the images saved during visual exploration.
To achieve the above object, the present invention further provides a method for the above system for improving the reliability of mobile robot monocular vision environment exploration, comprising the following steps:
S1: the active controller drives the monocular vision sensor mounted on the mobile robot to acquire the current image;
S2: motion target planning is performed based on the acquired current image;
S3: the current image is continuously updated in real time during the motion of the mobile robot;
S4: the visual modeling module uses the acquired image information and a multi-submap environment representation to describe the environment in blocks and establish an environment representation model;
S5: the completeness evaluation module assesses the completeness of the modeled portion using the images saved during visual exploration, judges whether environmental modeling is complete, and stops the exploration only once the requirement is met, thereby improving the completeness of the finally established environment representation model data.
Further, the specific steps of step S2, performing motion target planning based on the acquired current image, are as follows:
S2-1: the acquired current image is taken as the algorithm input, and ground segmentation is performed with a deep convolutional neural network to obtain the drivable region under the current image view;
S2-2: according to the segmentation result, the pixel distribution of the drivable region is counted, and the motion control target point under the current view is planned, thereby obtaining the motion control target point;
S2-3: after the motion control target point is obtained, the image origin is assumed to be the current position of the mobile robot, and according to the relative position between the motion control target point and the current robot position obtained in image space, the active controller realizes the motion control of the mobile robot.
Further, in step S2-2, the pixel distribution of the drivable region segmentation result is counted to obtain its distribution statistics along the image-space X and Y directions; then, according to these statistics, the motion control target point is planned along the direction of maximum possible motion.
The direction of maximum possible motion is solved as follows: the ground extent along each direction in pixel coordinates is expressed by the standard deviations v_x and v_y, which represent the amplitude over which the mobile robot can move in that direction; using Gaussian statistics of the pixel distribution, the direction of maximum possible motion is obtained as:
$\eta_{direction} = (v_x, v_y)$.
Further, step S2-2 plans the motion control target point in image space according to a discrete set of search lines; the specific steps are as follows:
after the pixel distribution of the image along different directions has been obtained with Gaussian statistics, in order to plan the motion target point along this direction, a central search line l_c is designed, starting from the point o_image = (m_x, m_y) and oriented along η_direction;
with l_c as the center and o_image as the rotation pivot, the line is rotated i times each to the left and to the right with a given angular step θ, yielding a discrete search line set l; the set l is spread out around l_c and covers the whole image discretely and evenly;
along each element of l, starting from the point (m_x, m_y), the image is searched until the last valid ground-segmentation point is reached, yielding the candidate point set p_p collected over all elements of l;
from p_p, the point farthest from (m_x, m_y) in image space is selected as the final motion planning target point p_g, i.e. the motion target point is solved according to the following cost function:
$p_g = \arg\max_{p \in p_p} d(p, (m_x, m_y))$
where d(·) is the Euclidean distance function, and m_x, m_y are the means of the ground pixel distribution along the X and Y directions.
Further, in step S4, visual tracking is realized using feature-point-based data association; multi-submap maintenance is used so that, when tracking fails, a new submap is created, realizing continuous modeling across submaps; in addition, data association between submaps is established through similar images between them, and the submaps are aligned and fused, thereby resuming modeling from the data breakpoint of the visual modeling.
Further, the graph optimization uses pose-graph optimization. The actual observation of the monocular vision sensor of the mobile robot is denoted z_i, while the observation predicted from the mobile robot motion model is denoted h(x_i), where x_i is the estimated mobile robot state; the difference e_i between the predicted and actual observations is the estimation error of the mobile robot state x_i:
$e_i = z_i - h(x_i)$;
since the mobile robot is in continuous motion, there are multiple actual observations and multiple predicted observations, whose differences can be positive or negative, so the error of the whole graph is expressed as the sum of squares of the individual errors:
$F(x) = \sum_i e_i^2$
Taking F(x) as the cost function, with each estimated robot state x_i as a variable, the optimization model is built with probability theory: given the known real sensor observations z_i, a maximum a posteriori estimation problem is solved to find the optimal estimate $x^* = \arg\min_x F(x)$ that minimizes the value of the cost function.
Further, in step S5, the completeness assessment includes a density-based completeness assessment and a distribution-based completeness assessment;
only after the density-based completeness assessment is satisfied does the evaluation enter the distribution-based completeness stage;
the overall judgment is: $J_{termination} = J_{distribution} \,\&\, J_{density}$.
Further, the density-based completeness assessment is as follows:
every time a new image k is acquired, a new node is established, and the number of edges that this node can establish with the existing nodes is counted; the higher the node distribution density, the larger the number e_k of edges that can be established; when the number of edges that can be established exceeds a given threshold T_e, the density assessment is considered satisfied, and the evaluation enters the distribution-based completeness stage; the criterion result J_density is expressed as:
$J_{density} = \begin{cases} 1, & e_k > T_e \\ 0, & \text{otherwise} \end{cases}$
Further, the distribution-based completeness assessment proceeds as follows:
A: the currently established environment representation model is gridded;
B: in the gridded environment representation, every acquired image node is assigned to its corresponding grid cell;
C: the number of images in each grid cell and their pose distribution are counted to assess the distribution completeness of image acquisition within each cell;
D: from the per-cell distribution completeness scores $n_c^{g_i}$ obtained in step C, the distribution completeness score n_c of the whole visual environment model is computed;
E: combining the distribution completeness score n_c of the whole visual environment model with the given distribution-assessment threshold T_c, the result J_distribution of the distribution criterion is obtained as:
$J_{distribution} = \begin{cases} 1, & n_c > T_c \\ 0, & \text{otherwise} \end{cases}$
Compared with the prior art, the principles and advantages of this scheme are as follows:
1. By combining the characteristics of the multi-submap environmental modeling module and the active controller module, the system reliability of visual exploration is improved from the angle of breakpoint resume. Since the active controller can realize suitable motion control with only the current image, the mobile robot can move safely under any circumstances; since the multi-submap modeling module has breakpoint-resume ability and keeps modeling when tracking fails within a submap, the mobile robot can keep its map up to date at any time using monocular vision. With the two modules combined and images acquired and updated continuously, uninterrupted, highly reliable environment exploration and modeling can be achieved.
2. Using the environment completeness evaluation mechanism, complete modeling of the environment during continuous exploration is realized; using an environment model verified for completeness, reliable visual localization is realized, which prevents visual data association failures and improves the reliability of the visual exploration system. The environment completeness assessment uses the density and distribution of the acquired images to quantitatively evaluate the completeness of the environment model: only when the acquired image density in the environment is sufficiently high and the distribution sufficiently uniform does the exploration stop and the built model get saved. Because the distribution of images over positions and angles in a completeness-verified environment model is guaranteed, the localization ability based on monocular vision is improved, and so is the reliability of the mobile robot working safely in the environment.
Brief description of the drawings
Fig. 1 is a structural diagram of the system for improving the reliability of mobile robot monocular vision environment exploration according to the present invention;
Fig. 2 is a flow chart of the method for the system for improving the reliability of mobile robot monocular vision environment exploration according to the present invention;
Fig. 3 is a schematic diagram of the multi-submap graph in the present invention.
Specific embodiment
The present invention is further explained below with reference to a specific embodiment:
Referring to Fig. 1, the system for improving the reliability of mobile robot monocular vision environment exploration described in this embodiment includes an active controller 1, a visual modeling module 2, and a completeness evaluation module 3, connected in sequence.
Referring to Fig. 2, the working principle comprises the following specific steps:
S1: the active controller 1 drives the monocular vision sensor mounted on the mobile robot to acquire the current image;
S2: motion target planning is performed based on the acquired current image:
S2-1: the acquired current image is taken as the algorithm input, and ground segmentation is performed with a deep convolutional neural network to obtain the drivable region under the current image view;
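As one illustrative reading of step S2-1, the sketch below turns per-pixel class scores from a generic segmentation network into a boolean drivable-ground mask; the score array shape and the ground class index are assumptions, since the patent does not fix a particular network.

```python
import numpy as np

def drivable_mask(logits, ground_class=0):
    """Turn per-pixel class scores from an (assumed) segmentation CNN
    into a boolean drivable-ground mask.

    logits: array of shape (num_classes, H, W), e.g. the raw output of a
    deep convolutional segmentation network run on the current image.
    """
    return np.argmax(logits, axis=0) == ground_class

# Usage with a dummy two-class score map:
# logits = np.random.randn(2, 480, 640)
# mask = drivable_mask(logits)   # boolean (480, 640) ground mask
```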
S2-2: according to the segmentation result, the pixel distribution of the drivable region segmentation result is counted to obtain its distribution statistics along the image-space X and Y directions; then, according to these statistics, the motion control target point is planned along the direction of maximum possible motion.
The direction of maximum possible motion is solved as follows: the ground extent along each direction in pixel coordinates is expressed by the standard deviations v_x and v_y, which represent the amplitude over which the mobile robot can move in that direction; using Gaussian statistics of the pixel distribution, the direction of maximum possible motion is obtained as:
$\eta_{direction} = (v_x, v_y)$;
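A small sketch of the Gaussian statistics just described, assuming a boolean ground mask as input: the per-axis means give the point o_image = (m_x, m_y) and the per-axis standard deviations give η_direction = (v_x, v_y).

```python
import numpy as np

def motion_direction(mask):
    """Fit per-axis Gaussians to the ground-pixel coordinates of a
    boolean mask: returns o_image = (m_x, m_y) and (v_x, v_y).
    """
    ys, xs = np.nonzero(mask)           # pixel coordinates of ground points
    m_x, m_y = xs.mean(), ys.mean()     # distribution means (search origin)
    v_x, v_y = xs.std(), ys.std()       # standard deviations = ground extents
    return (m_x, m_y), (v_x, v_y)
```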
Specifically, step S2-2 plans the motion control target point in image space according to a discrete set of search lines; the specific steps are as follows:
after the pixel distribution of the image along different directions has been obtained with Gaussian statistics, in order to plan the motion target point along this direction, a central search line l_c is designed, starting from the point o_image = (m_x, m_y) and oriented along η_direction;
with l_c as the center and o_image as the rotation pivot, the line is rotated i times each to the left and to the right with a given angular step θ, yielding a discrete search line set l; the set l is spread out around l_c and covers the whole image discretely and evenly;
along each element of l, starting from the point (m_x, m_y), the image is searched until the last valid ground-segmentation point is reached, yielding the candidate point set p_p collected over all elements of l;
from p_p, the point farthest from (m_x, m_y) in image space is selected as the final motion planning target point p_g, i.e. the motion target point is solved according to the following cost function:
$p_g = \arg\max_{p \in p_p} d(p, (m_x, m_y))$
where d(·) is the Euclidean distance function, and m_x, m_y are the means of the ground pixel distribution along the X and Y directions;
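A sketch of the discrete search-line planning under the same assumptions (boolean ground mask, origin and direction from the previous step); the one-pixel ray step, the angular step, and the number of rotations are illustrative parameters.

```python
import numpy as np

def plan_target(mask, origin, direction, theta=np.radians(5), i_steps=8):
    """Pick the motion target p_g by scanning a discrete search-line set.

    Rays start at origin = (m_x, m_y); the central ray l_c points along
    eta_direction, and 2*i_steps further rays are obtained by rotating it
    left and right in increments of theta. Each ray is walked until it
    leaves the ground mask; the farthest last-valid point is returned.
    """
    h, w = mask.shape
    base = np.arctan2(direction[1], direction[0])     # angle of the central ray
    candidates = []
    for k in range(-i_steps, i_steps + 1):
        ang = base + k * theta
        dx, dy = np.cos(ang), np.sin(ang)
        last = None
        for r in range(1, max(h, w)):                 # walk along the ray
            x = int(origin[0] + r * dx)
            y = int(origin[1] + r * dy)
            if not (0 <= x < w and 0 <= y < h) or not mask[y, x]:
                break                                 # left the image or the ground
            last = (x, y)                             # last valid ground point
        if last is not None:
            candidates.append(last)
    if not candidates:
        return None
    # p_g = argmax over candidates of the Euclidean distance d(p, origin)
    return max(candidates,
               key=lambda p: np.hypot(p[0] - origin[0], p[1] - origin[1]))
```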
S2-3: after the motion control target point is obtained, the image origin is assumed to be the current position of the mobile robot, and according to the relative position between the motion control target point and the current robot position obtained in image space, the active controller realizes the motion control of the mobile robot;
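The patent leaves open how the image-space relative position is mapped to actuator commands; the proportional mapping below, including its gains, is purely an illustrative assumption.

```python
import numpy as np

def velocity_command(target_px, k_lin=0.002, k_ang=1.5):
    """Map the image-space offset between the assumed robot position
    (the image origin) and the target point p_g to a proportional
    velocity command; the gains and the image-to-motion mapping are
    illustrative assumptions, not taken from the patent.
    """
    dx, dy = target_px                 # relative position in pixels
    heading = np.arctan2(dx, dy)       # bearing of the target in the image
    linear = k_lin * np.hypot(dx, dy)  # farther target -> faster forward speed
    angular = k_ang * heading          # turn toward the target
    return linear, angular
```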
S3: the current image is continuously updated in real time during the motion of the mobile robot;
S4: the visual modeling module 2 uses a multi-submap environment representation to describe the environment in blocks and establish an environment representation model;
since the visual tracking and modeling process is very fragile and tracking failures occur frequently, this step realizes visual tracking with feature-point-based data association; multi-submap maintenance is used so that, when tracking fails, a new submap is created, realizing continuous modeling across submaps; in addition, data association between submaps is established through similar images between them, and the submaps are aligned and fused, so that modeling resumes from the data breakpoint of the visual modeling and the system reliability is improved;
In the graph optimization, pose-graph optimization is used. The actual observation of the monocular vision sensor of the mobile robot is denoted z_i, while the observation predicted from the mobile robot motion model is denoted h(x_i), where x_i is the estimated mobile robot state; the difference e_i between the predicted and actual observations is the estimation error of the mobile robot state x_i:
$e_i = z_i - h(x_i)$;
since the mobile robot is in continuous motion, there are multiple actual observations and multiple predicted observations, whose differences can be positive or negative, so the error of the whole graph is expressed as the sum of squares of the individual errors:
$F(x) = \sum_i e_i^2$
Taking F(x) as the cost function, with each estimated robot state x_i as a variable, the optimization model is built with probability theory: given the known real sensor observations z_i, a maximum a posteriori estimation problem is solved to find the optimal estimate $x^* = \arg\min_x F(x)$ that minimizes the value of the cost function;
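A toy numerical example of this least-squares formulation, using scalar 1-D poses and scipy for the minimization; the edge measurements and the prior that anchors the first pose are made-up illustration data, not part of the patent.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy 1-D pose graph: states x_i are scalar positions, each edge (i, j, z)
# measures the offset x_j - x_i. The residuals e = z - h(x) are exactly
# the error terms above; least_squares minimises F(x) = sum of e^2.

edges = [(0, 1, 1.0), (1, 2, 1.1), (2, 0, -2.0)]  # (from, to, measured offset)

def residuals(x):
    res = [x[0]]                                   # prior anchoring x_0 at 0
    res += [z - (x[j] - x[i]) for i, j, z in edges]
    return np.array(res)

x0 = np.zeros(3)                    # initial guess for the three poses
sol = least_squares(residuals, x0)  # finds x* = argmin F(x)
print(sol.x)                        # optimised pose estimates
```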
The multi-submap graph is shown in Fig. 3, in which each large circle represents a node, i.e. the estimated robot state x_i at a given moment; each arrow represents an edge, i.e. an error-term constraint e_i; each small circle represents a data association used for alignment between submaps, which is likewise a constraint e_i. Large circles with the same label denote nodes in the same submap. This step yields the representation model of the explored environment;
S5: the completeness evaluation module 3 assesses the completeness of the modeled portion using the images saved during visual exploration;
the completeness assessment includes a density-based completeness assessment and a distribution-based completeness assessment;
only after the density-based completeness assessment is satisfied does the evaluation enter the distribution-based completeness stage;
The density-based completeness assessment is as follows:
every time a new image k is acquired, a new node is established, and the number of edges that this node can establish with the existing nodes is counted; the higher the node distribution density, the larger the number e_k of edges that can be established; when the number of edges that can be established exceeds a given threshold T_e, the density assessment is considered satisfied, and the evaluation enters the distribution-based completeness stage; the criterion result J_density is expressed as:
$J_{density} = \begin{cases} 1, & e_k > T_e \\ 0, & \text{otherwise} \end{cases}$
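A sketch of this density criterion with one assumption made explicit: whether a new node can establish an edge with an existing node is approximated here by a distance-radius test, since the patent does not spell out the edge test.

```python
import numpy as np

def density_criterion(node_positions, new_position, radius=1.0, T_e=3):
    """Density criterion J_density for one newly acquired image node.

    e_k counts the edges the new node can establish, approximated as the
    number of existing nodes within a matching radius of the new node.
    """
    nodes = np.asarray(node_positions, dtype=float)
    if nodes.size == 0:
        return False                   # no existing nodes, no edges
    dists = np.linalg.norm(nodes - np.asarray(new_position, dtype=float), axis=1)
    e_k = int(np.sum(dists < radius))  # number of edges that can be established
    return e_k > T_e                   # J_density = 1 iff e_k exceeds T_e
```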
The distribution-based completeness assessment proceeds as follows:
A: the currently established environment representation model is gridded;
B: in the gridded environment representation, every acquired image node is assigned to its corresponding grid cell;
C: the number of images in each grid cell and their pose distribution are counted to assess the distribution completeness of image acquisition within each cell;
specifically, the parameters of a uniform distribution are used as the completeness evaluation standard for the visual map; the steps are as follows:
compute the statistical distribution of the angular poses of the images acquired in each grid cell g_i;
count the mean m_i and standard deviation v_i of the set of angular poses of all images in the cell, and use these two values together with the mean m_ideal and standard deviation v_ideal of the desired uniform distribution to compute a Euclidean distance, i.e. the distance between the pose distribution of the modeled cell and the ideal pose distribution; the desired uniform distribution is defined here as U(−π, π), from which the parameters m_ideal and v_ideal of the U(−π, π) distribution are obtained;
according to this distance, the distribution completeness of image acquisition in the single cell is assessed: the larger the distance, the larger the gap between the current modeling and the idealized modeling, and the lower the completeness; the completeness $n_c^{g_i}$ of cell g_i is accordingly defined as a decreasing function of this distance;
D: from the per-cell distribution completeness scores $n_c^{g_i}$ obtained in step C, the distribution completeness score n_c of the whole visual environment model is computed as
$n_c = \frac{1}{j} \sum_{i=1}^{j} n_c^{g_i}$
where j is the number of all grid cells in the environment;
E: combining the distribution completeness score n_c of the whole visual environment model with the given distribution-assessment threshold T_c, the result J_distribution of the distribution criterion is obtained as:
$J_{distribution} = \begin{cases} 1, & n_c > T_c \\ 0, & \text{otherwise} \end{cases}$
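A sketch of steps A to E under stated assumptions: the gridded model is represented as a mapping from cell id to the heading angles of the images in that cell, and the per-cell score uses exp(-d) as one possible decreasing function of the distance to the ideal U(-π, π) parameters, since the patent text does not reproduce its exact scoring formula.

```python
import numpy as np

def distribution_criterion(grid_angles, T_c=0.8):
    """Distribution-based completeness: returns (n_c, J_distribution).

    grid_angles: dict mapping each occupied grid cell to the list of
    angular poses (radians) of the images assigned to it (steps A-B).
    """
    m_ideal = 0.0                      # mean of U(-pi, pi)
    v_ideal = np.pi / np.sqrt(3.0)     # standard deviation of U(-pi, pi)
    scores = []
    for angles in grid_angles.values():            # step C: per-cell statistics
        a = np.asarray(angles, dtype=float)
        d = np.hypot(a.mean() - m_ideal, a.std() - v_ideal)
        scores.append(np.exp(-d))                  # larger distance, lower score
    n_c = float(np.mean(scores))                   # step D: average over j cells
    return n_c, n_c > T_c                          # step E: J_distribution
```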
The overall judgment is: $J_{termination} = J_{distribution} \,\&\, J_{density}$;
in this step S5, completeness assessment is performed on the modeled portion to judge whether environmental modeling is complete; the exploration stops only when the requirement is met, thereby improving the completeness of the finally established environment representation model data and, in turn, the reliability of monocular visual localization.
The implementation examples above are only preferred embodiments of the present invention, and the scope of implementation of the present invention is not limited thereby; therefore, all changes made according to the shapes and principles of the present invention shall fall within the protection scope of the present invention.
Claims (10)
1. A system for improving the reliability of mobile robot monocular vision environment exploration, characterized by comprising an active controller (1), a visual modeling module (2), and a completeness evaluation module (3), connected in sequence;
wherein the active controller (1) drives a monocular vision sensor mounted on the mobile robot to acquire a current image, plans a motion target based on the acquired current image, and realizes the motion control of the mobile robot, the current image being continuously updated in real time during the motion of the mobile robot;
the visual modeling module (2) uses a multi-submap environment representation to describe the environment in blocks and establish an environment representation model;
the completeness evaluation module (3) assesses the completeness of the modeled portion using the images saved during visual exploration.
2. A method for the system for improving the reliability of mobile robot monocular vision environment exploration according to claim 1, characterized by comprising the following steps:
S1: the active controller drives the monocular vision sensor mounted on the mobile robot to acquire the current image;
S2: motion target planning is performed based on the acquired current image;
S3: the current image is continuously updated in real time during the motion of the mobile robot;
S4: the visual modeling module uses the acquired image information and a multi-submap environment representation to describe the environment in blocks and establish an environment representation model;
S5: the completeness evaluation module assesses the completeness of the modeled portion using the images saved during visual exploration, judges whether environmental modeling is complete, and stops the exploration only once the requirement is met, thereby improving the completeness of the finally established environment representation model data.
3. The method for the system for improving the reliability of mobile robot monocular vision environment exploration according to claim 2, characterized in that the specific steps of performing motion target planning based on the acquired current image in step S2 are as follows:
S2-1: the acquired current image is taken as the algorithm input, and ground segmentation is performed with a deep convolutional neural network to obtain the drivable region under the current image view;
S2-2: according to the segmentation result, the pixel distribution of the drivable region is counted, and the motion control target point under the current view is planned, thereby obtaining the motion control target point;
S2-3: after the motion control target point is obtained, the image origin is assumed to be the current position of the mobile robot, and according to the relative position between the motion control target point and the current robot position obtained in image space, the active controller realizes the motion control of the mobile robot.
4. The method for the system for improving the reliability of mobile robot monocular vision environment exploration according to claim 3, characterized in that in step S2-2 the pixel distribution of the drivable region segmentation result is counted to obtain its distribution statistics along the image-space X and Y directions, and then, according to these statistics, the motion control target point is planned along the direction of maximum possible motion;
the direction of maximum possible motion is solved as follows: the ground extent along each direction in pixel coordinates is expressed by the standard deviations v_x and v_y, which represent the amplitude over which the mobile robot can move in that direction; using Gaussian statistics of the pixel distribution, the direction of maximum possible motion is obtained as:
$\eta_{direction} = (v_x, v_y)$.
5. The method for the system for improving the reliability of mobile robot monocular vision environment exploration according to claim 3, characterized in that step S2-2 plans the motion control target point in image space according to a discrete set of search lines, the specific steps being as follows:
after the pixel distribution of the image along different directions has been obtained with Gaussian statistics, in order to plan the motion target point along this direction, a central search line l_c is designed, starting from the point o_image = (m_x, m_y) and oriented along η_direction;
with l_c as the center and o_image as the rotation pivot, the line is rotated i times each to the left and to the right with a given angular step θ, yielding a discrete search line set l; the set l is spread out around l_c and covers the whole image discretely and evenly;
along each element of l, starting from the point (m_x, m_y), the image is searched until the last valid ground-segmentation point is reached, yielding the candidate point set p_p collected over all elements of l;
from p_p, the point farthest from (m_x, m_y) in image space is selected as the final motion planning target point p_g, i.e. the motion target point is solved according to the following cost function:
$p_g = \arg\max_{p \in p_p} d(p, (m_x, m_y))$
where d(·) is the Euclidean distance function, and m_x, m_y are the means of the ground pixel distribution along the X and Y directions.
6. The method for the system for improving the reliability of mobile robot monocular vision environment exploration according to claim 2, characterized in that in step S4 visual tracking is realized using feature-point-based data association; multi-submap maintenance is used so that, when tracking fails, a new submap is created, realizing continuous modeling across submaps; in addition, data association between submaps is established through similar images between them, and the submaps are aligned and fused, thereby resuming modeling from the data breakpoint of the visual modeling.
7. The method for the system for improving the reliability of mobile robot monocular vision environment exploration according to claim 6, characterized in that the graph optimization uses pose-graph optimization; the actual observation of the monocular vision sensor of the mobile robot is denoted z_i, while the observation predicted from the mobile robot motion model is denoted h(x_i), where x_i is the estimated mobile robot state; the difference e_i between the predicted and actual observations is the estimation error of the mobile robot state x_i:
$e_i = z_i - h(x_i)$;
since the mobile robot is in continuous motion, there are multiple actual observations and multiple predicted observations, whose differences can be positive or negative, so the error of the whole graph is expressed as the sum of squares of the individual errors:
$F(x) = \sum_i e_i^2$;
taking F(x) as the cost function, with each estimated robot state x_i as a variable, the optimization model is built with probability theory: given the known real sensor observations z_i, a maximum a posteriori estimation problem is solved to find the optimal estimate $x^* = \arg\min_x F(x)$ that minimizes the value of the cost function.
8. The method for the system for improving the reliability of mobile robot monocular vision environment exploration according to claim 2, characterized in that in step S5 the completeness assessment includes a density-based completeness assessment and a distribution-based completeness assessment;
only after the density-based completeness assessment is satisfied does the evaluation enter the distribution-based completeness stage;
the overall judgment is: $J_{termination} = J_{distribution} \,\&\, J_{density}$.
9. The method for the system for improving the reliability of mobile robot monocular vision environment exploration according to claim 8, characterized in that the density-based completeness assessment is as follows:
every time a new image k is acquired, a new node is established, and the number of edges that this node can establish with the existing nodes is counted; the higher the node distribution density, the larger the number e_k of edges that can be established; when the number of edges that can be established exceeds a given threshold T_e, the density assessment is considered satisfied, and the evaluation enters the distribution-based completeness stage; the criterion result J_density is expressed as:
$J_{density} = \begin{cases} 1, & e_k > T_e \\ 0, & \text{otherwise} \end{cases}$.
10. The method for the system for improving the reliability of mobile robot monocular vision environment exploration according to claim 8, characterized in that the distribution-based completeness assessment proceeds as follows:
A: the currently established environment representation model is gridded;
B: in the gridded environment representation, every acquired image node is assigned to its corresponding grid cell;
C: the number of images in each grid cell and their pose distribution are counted to assess the distribution completeness of image acquisition within each cell;
D: from the per-cell distribution completeness scores $n_c^{g_i}$ obtained in step C, the distribution completeness score n_c of the whole visual environment model is computed;
E: combining the distribution completeness score n_c of the whole visual environment model with the given distribution-assessment threshold T_c, the result J_distribution of the distribution criterion is obtained as:
$J_{distribution} = \begin{cases} 1, & n_c > T_c \\ 0, & \text{otherwise} \end{cases}$.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201910057142.4A (granted as CN109934094B) | 2019-01-22 | 2019-01-22 | System and method for improving monocular vision environment exploration reliability of mobile robot
Publications (2)

Publication Number | Publication Date
---|---
CN109934094A | 2019-06-25
CN109934094B | 2022-04-19
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
2021-09-02 | TA01 | Transfer of patent application right | Applicant after: Jiutian innovation (Guangdong) Intelligent Technology Co.,Ltd., 528253 room 3, 803, floor 8, block 3, Tian'an center, No. 31, Jihua East Road, Guicheng Street, Nanhai District, Foshan City, Guangdong Province (residence declaration); applicant before: GUANGDONG University OF TECHNOLOGY, No. 100, Waihuan West Road, University Town, Guangzhou, Guangdong 510062
| GR01 | Patent grant |