CN109062211B - Method, device and system for identifying adjacent space based on SLAM and storage medium - Google Patents


Info

Publication number
CN109062211B
CN109062211B
Authority
CN
China
Prior art keywords
space
robot
plane
information
slam
Prior art date
Legal status
Active
Application number
CN201810909618.8A
Other languages
Chinese (zh)
Other versions
CN109062211A (en)
Inventor
李昌檀
Current Assignee
Hyperception Technology Beijing Co ltd
Original Assignee
Hyperception Technology Beijing Co ltd
Priority date
Filing date
Publication date
Application filed by Hyperception Technology Beijing Co ltd
Priority to CN201810909618.8A
Publication of CN109062211A
Application granted
Publication of CN109062211B

Classifications

    • G — PHYSICS
    • G05 — CONTROLLING; REGULATING
    • G05D — SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 — Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 — Control of position or course in two dimensions
    • G05D1/021 — Control of position or course in two dimensions specially adapted to land vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

A method, device, system and computer storage medium for identifying an adjacent space based on landmark information in SLAM. The method comprises: acquiring the landmark information obtained by the robot through SLAM, and identifying the adjacent space of the space where the robot is located according to the landmark information. The method further comprises planning the walking path of the robot according to the adjacent space, so that the robot travels along an efficient path. The device is correspondingly provided with a landmark information acquisition module, an adjacent space identification module and a path planning module. The method and the device optimize the walking path of the robot, improve its working efficiency, and are simple to operate and easy to implement.

Description

Method, device and system for identifying adjacent space based on SLAM and storage medium
Technical Field
The invention relates to the field of robots, in particular to a method, a device and a system for identifying an adjacent space based on SLAM and a storage medium.
Background
With the widespread use of automated devices (i.e., robots) for personal or commercial purposes such as cleaning, there is an increasing demand for robots to work efficiently and intelligently. For example, early cleaning robots could only complete cleaning tasks by random path exploration; future products are expected to avoid repeated and inefficient cleaning through intelligent and efficient path planning. In the prior art, one approach uses deep learning to recognize target objects and thereby build a three-dimensional map for path optimization. This approach places high demands on the complexity of the equipment, and the distances obtained by recognizing objects through deep learning are not accurate enough.
Disclosure of Invention
In order to solve the problems of low path planning efficiency, low precision and low intelligence in the prior art, the invention provides a method, a device, a system and a computer storage medium for identifying an adjacent space based on SLAM, so that the walking path of the robot is optimized and its working efficiency is improved, while remaining simple to operate and easy to implement.
In order to solve the above problem, a first aspect of the present invention provides a method for recognizing a neighboring space based on SLAM, including:
acquiring landmark information obtained by the robot through SLAM;
and identifying the adjacent space of the space where the robot is located according to the landmark information.
In some embodiments, the identifying a space in the vicinity of the space where the robot is located according to the landmark information includes:
performing plane division on the space where the robot is located according to the landmark information to obtain plane information;
and identifying the adjacent space of the space where the robot is located according to the plane information.
In some embodiments, the identifying a space in the vicinity of the space where the robot is located according to the plane information includes:
and identifying the adjacent space of the space where the robot is located according to the change of the plane information.
In some embodiments, the proximity space is a walkable space connected to a space in which the robot is located.
In some embodiments, the method further comprises:
and planning the walking path of the robot according to the adjacent space.
In some embodiments, the walking path comprises: preferentially walking to the adjacent space.
In some embodiments, the walking path comprises: and when the walking of the space is finished, walking to the adjacent space.
A second aspect of the present invention provides an apparatus for recognizing a near space based on SLAM, including:
a landmark information acquisition module, configured to acquire the landmark information obtained by the robot through SLAM;
and the near space identification module is used for identifying the near space of the space where the robot is located according to the landmark information.
In some embodiments, the adjacent space identification module performs plane division on the space where the robot is located according to the landmark information to obtain plane information, and identifies the adjacent space of the space where the robot is located according to the plane information.
In some embodiments, the identifying a space in the vicinity of the space where the robot is located according to the plane information includes:
and identifying the adjacent space of the space where the robot is located according to the change of the plane information.
In some embodiments, the proximity space is a walkable space connected to a space in which the robot is located.
In some embodiments, the apparatus further comprises:
and the path planning module is used for planning the walking path of the robot according to the adjacent space.
In some embodiments, the walking path comprises: preferentially walking to the adjacent space.
In some embodiments, the walking path comprises: and when the walking of the space is finished, walking to the adjacent space.
A third aspect of the present invention provides a system for recognizing a near space based on SLAM, the system comprising:
a memory and one or more processors;
wherein the memory is communicatively coupled to the one or more processors and has stored therein instructions executable by the one or more processors to cause the one or more processors to perform the method as previously described.
A fourth aspect of the invention provides a computer-readable storage medium having stored thereon computer-executable instructions operable, when executed by a computing device, to perform a method as previously described.
In summary, the present invention provides a method, an apparatus, a system and a computer storage medium for identifying an adjacent space based on landmark information in SLAM. The method comprises: acquiring the landmark information obtained by the robot through SLAM, and identifying the adjacent space of the space where the robot is located according to the landmark information. The walking path of the robot is then planned according to the adjacent space, so that the robot travels along an efficient path. Identifying the position of the adjacent space from the landmarks in SLAM optimizes the walking path of the robot, improves its working efficiency, and is simple to operate and easy to implement.
The technical scheme of the invention has the following beneficial technical effects:
the position of the adjacent space is identified according to the landmark information in SLAM, and the approximate boundary of the space and the approximate positions of obstacles are output to assist the robot in path planning; no complex target-object recognition method is needed, which improves path planning efficiency. Meanwhile, the distance information carried by the landmarks in SLAM is better suited to path planning and locates obstacles more accurately than a target-object recognition method does.
Drawings
FIG. 1 is a flow chart of a method for identifying adjacent space based on SLAM according to the present invention;
FIG. 2 is a flow chart of a method of obtaining plane information of the present invention;
FIG. 3 is a block diagram of an apparatus for SLAM-based identification of near space according to the present invention;
FIG. 4 is a block diagram of the spatial identification module architecture of the present invention;
FIG. 5 is a structure and a flowchart of embodiment 1 of the present invention;
FIG. 6 is an exemplary schematic diagram of landmarks in a visual SLAM according to embodiment 1 of the present invention;
FIG. 7 is a schematic view of three-dimensional landmarks in a visual SLAM according to embodiment 1 of the present invention;
FIG. 8 is a schematic view of the three-dimensional landmarks in the visual SLAM from a different angle according to embodiment 1 of the present invention;
FIG. 9a is a top view of the planes formed by the landmarks of embodiment 1 of the present invention; FIG. 9b is a rotated view of the three sets of planes;
fig. 10 is a schematic view of a robot in a state at different times according to embodiment 2 of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings in conjunction with the following detailed description. It should be understood that the description is intended to be exemplary only, and is not intended to limit the scope of the present invention. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present invention.
Interpretation of terms:
slam (simultaneous localization and Mapping), also known as cml (current localization and localization), performs instantaneous positioning and Mapping, or performs simultaneous Mapping and positioning. The problem can be described as: putting a robot into an unknown position in an unknown environment, and whether a method exists for the robot to gradually draw a complete map of the environment, wherein the complete map refers to every corner which can be accessed by a room without obstacles.
A landmark refers to the coordinate value of a point stored in a GPS receiver's memory. Landmarks are the core of GPS data and the basis for forming "routes".
A first aspect of the present invention provides a method 100 for recognizing a neighboring space based on SLAM, the method comprising the steps of, as shown in fig. 1:
Step 110: acquire the landmark information obtained by the robot through SLAM.
Specifically, parameter information and landmark information may be obtained by performing SLAM measurement on an indoor space. The invention imposes no specific requirements on the SLAM equipment, which may be a monocular camera (combined with an IMU, wheel odometry or the like) or a binocular camera. In visual SLAM, each feature point is a landmark: a feature point observed in multiple measurements is associated with the landmark and placed in the map.
The parameter information includes: camera pose, image key frame, and camera calibration parameters.
The camera pose refers to the position and orientation of the camera in the world coordinate system, denoted C = {C_1, C_2, ..., C_N};
the image keyframes correspond to the camera poses, denoted I = {I_1, I_2, ..., I_N};
the camera calibration parameter K is the 3×3 camera intrinsic matrix.
The landmark information comprises a set X of landmarks, X = {X_1, X_2, ..., X_P}, where X_1, X_2, ..., X_P are coordinate values along the three axes x, y, z.
N and P are natural numbers.
Wherein the definition of the world coordinate system is: since the camera can be placed at any position in the environment, a reference coordinate system is selected in the environment to describe the position of the camera and to use it to describe the position of any object in the environment, which is called the world coordinate system.
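For concreteness, the parameter and landmark information listed above can be gathered into a single container. The sketch below is only illustrative (the class and field names are assumptions, not part of the patent); it assumes N keyframes with poses and P landmarks stored as 3-D points:

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class SlamOutput:
        """Container for the SLAM measurement results described above.

        All names are illustrative; the patent specifies the quantities, not a data layout.
        """
        poses: list            # C = {C_1 ... C_N}: camera poses (position + orientation in the world frame)
        keyframes: list        # I = {I_1 ... I_N}: image keyframes, one per pose
        K: np.ndarray          # 3x3 camera calibration (intrinsic) matrix
        landmarks: np.ndarray  # X = {X_1 ... X_P}: (P, 3) array of landmark coordinates (x, y, z)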
Step 120: identify the adjacent space of the space where the robot is located according to the landmark information. Specifically, in step 121 the space identification module receives the landmark information and divides the space where the robot is located into planes according to the landmark information, using an image processing algorithm, to obtain plane information; in step 122 the adjacent space of the space where the robot is located is identified according to the plane information.
As shown in fig. 2, the flow 200 of the plane information calculation method includes the following steps:
Step 210: calculate the Manhattan directions [n_1, n_2, n_3] from the parameter information and the landmark information. The Manhattan directions [n_1, n_2, n_3] are the medians of the three vanishing directions represented by the image vanishing points in the image keyframes, i.e., the three dominant three-dimensional directions.
Specifically, the Manhattan directions are calculated as follows.
For the j-th image keyframe I_j (j = 1, 2, ..., N), obtain the vectors of its three image vanishing points v_j^1, v_j^2, v_j^3.
Calculate the vanishing direction of each image vanishing point in the world coordinate system:
n_j^k = R_j^{-1} K^{-1} v_j^k / || R_j^{-1} K^{-1} v_j^k ||,
where R is a 3×3 rotation matrix (orthogonal, with determinant 1) and K is the calibration parameter; the inverse of the rotation matrix rotates a point P_c in the camera coordinate system to the point P_w in the world coordinate system: P_w = R^{-1} · P_c.
Here the image vanishing point is the intersection of parallel lines. In physical space parallel straight lines meet only at infinity, so the vanishing point lies at infinity; in a perspective image, however, two parallel lines meet at a finite point, which is the image vanishing point.
Take the median of each vanishing direction over all keyframes:
n_k = median_j ( n_j^k ), k = 1, 2, 3,
and define [n_1, n_2, n_3] as the Manhattan directions.
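Step 210 can be illustrated with a short sketch. It assumes the image vanishing points of each keyframe have already been extracted (that extraction is not shown), and all function and variable names are illustrative; each per-frame direction is mapped to the world coordinate system via K^{-1} and R^{-1}, and the component-wise median is taken:

    import numpy as np

    def manhattan_directions(vanishing_points, rotations, K):
        """Estimate the Manhattan directions [n1, n2, n3] (step 210).

        vanishing_points : (N, 3, 3) array; vanishing_points[j, k] is the homogeneous
                           image vanishing point k of keyframe I_j
        rotations        : list of N 3x3 rotation matrices R_j (orthogonal, det = 1)
        K                : 3x3 camera calibration matrix
        Returns a (3, 3) array whose rows are the three Manhattan directions.
        """
        K_inv = np.linalg.inv(K)
        dirs = np.zeros_like(vanishing_points, dtype=float)
        for j, R in enumerate(rotations):
            for k in range(3):
                d = R.T @ (K_inv @ vanishing_points[j, k])  # back-project into the world frame (R^-1 = R^T)
                dirs[j, k] = d / np.linalg.norm(d)          # keep only the direction
        # component-wise median over all keyframes (assumes sign-consistent directions),
        # then renormalise each of the three median directions
        n = np.median(dirs, axis=0)
        return n / np.linalg.norm(n, axis=1, keepdims=True)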
Step 220, projecting the set X of the road signs to the Manhattan direction, and respectively obtaining two road sign points with the farthest distances in three directions.
The specific implementation method comprises the following steps:
cycling in the manhattan direction:
Figure BDA0001761448440000064
obtaining the maximum and minimum distance:
Figure BDA0001761448440000065
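Step 220 then amounts to projecting every landmark onto each Manhattan direction and keeping the extreme values. A minimal sketch, continuing the assumed names above:

    import numpy as np

    def landmark_extent(landmarks, manhattan):
        """Project the landmark set X onto each Manhattan direction (step 220).

        landmarks : (P, 3) array of landmark points X_i
        manhattan : (3, 3) array whose rows are the Manhattan directions n_k
        Returns (d_min, d_max): the smallest and largest projection in each of the three
        directions, i.e. the positions of the two farthest landmark points along n_k.
        """
        proj = landmarks @ manhattan.T      # proj[i, k] = n_k . X_i
        return proj.min(axis=0), proj.max(axis=0)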
Step 230: obtain equally spaced planes along each Manhattan direction from the Manhattan direction and the two farthest landmark points, and add each equally spaced plane to the plane set Π_k (k = 1, 2, 3) of the corresponding Manhattan direction; a plane set is the set of planes corresponding to one dimension.
Specifically, planes with equal spacing are generated along the different Manhattan directions between the two farthest landmark points, where the spacing can be preset:
[η_k1, η_k2, ..., η_kd], k = 1, 2, 3;
η = [n, d]^T,
where n is the normal vector of the plane and d is the distance from the plane to the origin.
The score of each candidate plane η_k1, η_k2, ..., η_kd is calculated from the landmark point set X; when the score of η_kd is greater than a set threshold σ_1, η_kd is added to the plane set Π_k of the corresponding Manhattan direction.
The score is calculated as follows:
initialization: score = 0; inputs: X, η.
For each landmark X_i, calculate its distance d_i to the plane η:
d_i = n · X_i − d.
If |d_i| > a set threshold σ_2, then score += 1, for i = 1, 2, ..., P.
The plane set Π_k (k = 1, 2, 3) is then the set of planes corresponding to one dimension. For example, the set corresponding to the axis perpendicular to the floor is the collection of horizontal planes, with the floor being the nearest plane and the ceiling the farthest.
Step 240: output the plane sets as the plane information.
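Steps 230 and 240 can be sketched as follows, continuing the helpers above. Every name is illustrative; the sketch represents each candidate plane as η = [n, d] and, as an assumption, scores a candidate by the number of landmarks lying within σ_2 of it, keeping those whose score exceeds σ_1:

    import numpy as np

    def plane_sets(landmarks, manhattan, spacing, sigma1, sigma2):
        """Build the plane sets along the Manhattan directions (steps 230-240).

        landmarks : (P, 3) landmark points X
        manhattan : (3, 3) Manhattan directions (rows n_k, unit length)
        spacing   : preset interval between candidate planes
        sigma1    : minimum score for a candidate plane to be kept
        sigma2    : distance threshold used when scoring a candidate plane
        Returns a list of three lists; each kept plane is stored as eta = (n, d).
        """
        d_min, d_max = landmark_extent(landmarks, manhattan)   # from the sketch above
        sets = [[], [], []]
        for k in range(3):
            n = manhattan[k]
            proj = landmarks @ n                               # signed distance of each landmark along n
            # candidate planes at equal spacing between the two farthest landmark points
            for d in np.arange(d_min[k], d_max[k] + spacing, spacing):
                # assumption: a candidate's score is the number of landmarks lying within sigma2 of it
                score = int(np.sum(np.abs(proj - d) < sigma2))
                if score > sigma1:
                    sets[k].append((n, d))                     # eta = [n, d]^T
        return sets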
Through steps 210-240, the plane information of the space where the SLAM measuring device is located is obtained; from it the approximate boundary of the space and the approximate positions of obstacles can be known, providing data support for the subsequent path planning.
Step 122: identify the adjacent space of the space where the robot is located according to the plane information.
Specifically, each plane is judged in turn to be a plane of the current space or a plane of an adjacent space according to the change of the plane set; if a plane is a new plane that does not belong to the previous plane set, it is a plane of an adjacent space, and an adjacent space therefore exists.
Specifically, when the robot walks past a doorway or a window, the SLAM system acquires landmark information outside the space where the robot is located. The space recognition module then obtains a new plane set and judges, plane by plane, whether each plane belongs to the current space or to an adjacent space according to the change of the plane set. This is because, before the robot encounters an adjacent space, the landmarks inside that space are not visible, so no plane corresponding to those landmark points exists in the previous plane set; when such a new plane appears, it means the robot has observed an adjacent space. Therefore, if a new plane is found in the plane set, its spatial position is passed to the path planning module.
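The comparison between the previous and the new plane sets described here reduces to a set difference over the plane parameters. A minimal sketch under the same assumptions (a plane counts as "new" when no previously known plane in the same direction lies within a small tolerance of it; the tolerance value is illustrative):

    def new_planes(previous_sets, current_sets, tol=0.05):
        """Return the planes present in current_sets but absent from previous_sets.

        Each argument is a list of three lists of (n, d) planes, one list per Manhattan
        direction, as produced by plane_sets() above. Any plane returned here indicates
        that an adjacent space has been observed; its position can be handed to the
        path planning module.
        """
        found = []
        for k in range(3):
            known = [d for _, d in previous_sets[k]]
            for n, d in current_sets[k]:
                if all(abs(d - d_known) > tol for d_known in known):
                    found.append((k, n, d))
        return found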
Further, the method may include a step 130 of planning a walking path of the robot according to the adjacent space.
The adjacent space can be a walkable space or a non-walkable space connected to the current space, and the specific judgment is as follows:
a depth threshold σ is set; if the distance d from the plane to the origin of the world coordinate system is less than σ, the region (the current space and the adjacent space) is walkable for the robot. Conversely, if the distance from the plane to the origin of the world coordinate system is too large, the adjacent space is open and is not suitable for the robot to walk in.
Further, the path planning comprises preferentially walking to the adjacent space or walking to the adjacent space after the current space is finished.
Through the above steps 110-130, the adjacent space of the space where the robot is located is identified from the landmark information and the walking path is planned accordingly, so that the robot travels along an efficient path.
Another aspect of the present invention provides an apparatus 300 for recognizing a neighboring space based on SLAM, as shown in fig. 3, including:
a landmark information obtaining module 310, configured to obtain landmark information obtained by the robot through SLAM;
and the proximity space identification module 320 is configured to identify a proximity space of a space where the robot is located according to the landmark information.
Further, a path planning module 330 may be included to plan a walking path of the robot according to the adjacent space.
The adjacent space identification module 320, as shown in fig. 4, further includes:
a Manhattan direction calculating unit 321, configured to calculate the Manhattan directions [n_1, n_2, n_3], which are the medians of the three vanishing directions represented by the image vanishing points in the image keyframes;
a plane set obtaining unit 322, configured to project the set X of landmarks onto the Manhattan directions and obtain, in each of the three directions, the two farthest landmark points; to obtain equally spaced planes along each Manhattan direction from the Manhattan direction and the two farthest landmark points; and to add each equally spaced plane to the plane set Π_k of the corresponding Manhattan direction, a plane set being the set of planes corresponding to one dimension;
a plane set output unit 323, configured to output the plane sets as the plane information.
Further, the adjacent space identification module 320 includes an adjacent space determination unit, which judges, one by one according to the change of the plane information, whether each plane is a plane of the current space or a plane of an adjacent space; if a plane is a new plane that does not belong to the previous plane set, it is a plane of an adjacent space, and an adjacent space exists.
Further, the adjacent space determination unit determines whether the adjacent space is a walkable space or a non-walkable space connected to the current space as follows:
a depth threshold σ is set; if the distance d from the plane of the adjacent space to the origin of the world coordinate system is less than σ, the region is walkable; conversely, if the distance from the plane of the adjacent space to the origin is greater than σ, the adjacent space is a non-walkable space.
A third aspect of the present invention provides a system for recognizing a near space based on SLAM, the system comprising:
a memory and one or more processors;
wherein the memory is communicatively coupled to the one or more processors and has stored therein instructions executable by the one or more processors to cause the one or more processors to perform the method as previously described.
A fourth aspect of the invention provides a computer-readable storage medium having stored thereon computer-executable instructions operable, when executed by a computing device, to perform a method as previously described.
Example 1:
In embodiment 1 of the invention, the robot is a sweeping robot, and the SLAM measuring module acquires the parameter information and the landmark information. The device for identifying the adjacent space based on SLAM comprises the SLAM measuring module, for which no specific equipment is required; the space identification module, which receives the landmark information and outputs the space boundary information; and the path planning module, which receives the boundary information and outputs an optimized travel path, as shown in fig. 5.
Fig. 6 shows an example of landmarks in a visual SLAM, where the small squares represent the landmarks recorded by the SLAM measuring module.
Fig. 7 and fig. 8 show the three-dimensional landmark information rendered from different angles; fig. 7 is close to a top view.
The space recognition module recognizes the spatial layout from the landmark information: (1) acquire the parameter information and the landmark information from the SLAM module; (2) calculate the Manhattan directions and generate the space plane hypotheses; (3) obtain the plane sets Π_1, Π_2, Π_3 along the Manhattan directions.
Fig. 9a is a top view of the landmarks. One group of planes marked in fig. 9a belongs to the set Π_1 and can be understood as a group of planes along one direction of the horizontal plane on which the robot travels. The plane marked ⑦ belongs to the set Π_2 and can be understood as a group of planes along the robot's walking plane, perpendicular to those in Π_1. The planes in the set Π_3 are the planes parallel to the robot's walking plane. Fig. 9b is a rotated view of the three sets of planes.
The path planning module judges the current space and the adjacent space according to the position information of the marked planes in the three-dimensional map, and completes the walking plan. Specifically, for example, when the robot is started it is located at one of the marked planes; at this time it cannot obtain the landmark information of the region between planes ⑤ and ⑥, which is an open corridor. When the robot walks to the entrance of the corridor, i.e., the position between ⑤ and ⑥ in the map, it can observe new landmark information between ⑤ and ⑥. At this time a new plane, beyond those previously present, appears in the first plane set. Since this plane was not in the previous first plane set, the robot has found a new adjacent space. Further, since the distance information of the new plane is smaller than the preset depth threshold, the space is determined to be an adjacent walkable space. If the distance information were greater than the depth threshold and the corresponding distance in the third plane set were also greater than the depth threshold, the adjacent space would be an open area and would therefore be determined to be an adjacent non-walkable space.
Example 2:
The benefits of implementing the invention in a specific robot product are illustrated by the example of fig. 10. As shown in fig. 10, the robot has moved from time 1 to time 2; at time 2, the robot's SLAM finds a new landmark point, which causes a change of plane in the corresponding plane set. By judging the changed plane, the robot finds a new adjacent space at time 2.
A prior-art robot that starts, for example, at the position of time 1 in fig. 10 and performs a cleaning task can continuously track its position using the localization information provided by the SLAM measuring module. However, its path planning module cannot use the information from the SLAM measuring module, and a common approach is to clean along an I-shaped (back-and-forth) coverage path. After finishing the cleaning of the current area the robot may end up in a corner of the current space and consider the cleaning complete. With the method provided by the invention, the robot can still clean the current space along an I-shaped coverage path; during cleaning, however, the SLAM measuring module not only provides localization and mapping but also outputs information about adjacent spaces to the navigation module. The robot then records the corresponding adjacent-space information and coordinates within the task. After the robot finishes cleaning the current space, the path planning module recognizes that an adjacent space has not yet been cleaned and that it is a walkable space; the path planning module then drives the robot to that adjacent space first, and the cleaning of that area is completed along the I-shaped path. By repeating this process, the robot can clean all the walkable spaces while avoiding open areas such as the outdoors. Some conventional robots enter an adjacent space only with a certain probability through random walking; compared with the method provided by the invention, such methods have low reliability and cannot guarantee the result.
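The workflow of this example (clean the current space, record the adjacent walkable spaces reported during cleaning, then drive to an uncleaned one and repeat) can be summarised by the following sketch. All of the robot methods and attributes used here are hypothetical placeholders, and the coverage pattern itself is left abstract:

    def cleaning_mission(robot):
        """Illustrative top-level loop: clean the current space, queue the adjacent walkable
        spaces reported while cleaning, then drive to the next uncleaned one and repeat.
        Every method and attribute of `robot` used here is a hypothetical placeholder."""
        pending = []                                    # adjacent spaces recorded during the task
        cleaned = set()
        current = robot.current_space()
        while True:
            robot.clean_with_coverage_path(current)     # e.g. a back-and-forth coverage pattern
            cleaned.add(current)
            # adjacent spaces reported by the space recognition module while cleaning
            for space in robot.reported_adjacent_spaces():
                if space.walkable and space.id not in cleaned:
                    pending.append(space)
            if not pending:
                break                                   # every walkable space has been cleaned
            nxt = pending.pop(0)
            robot.drive_to(nxt.entry_coordinate)        # walk to the adjacent space first
            current = nxt.id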
In summary, the present invention provides a method, an apparatus, a system and a computer storage medium for identifying the position condition of an adjacent space (whether there is a wall or a doorway, whether there is an obstacle at a certain location, etc.) according to the landmark information in SLAM. The method comprises: acquiring the landmark information obtained by the robot through SLAM, and identifying the adjacent space of the space where the robot is located according to the landmark information. The method further comprises planning the walking path of the robot according to the adjacent space, so that the robot travels along an efficient path. The device is correspondingly provided with a landmark information acquisition module, an adjacent space identification module and a path planning module. Identifying the position condition of the adjacent space from the landmarks in SLAM optimizes the walking path of the robot, improves its working efficiency, and is simple to operate and easy to implement.
It is to be understood that the above-described embodiments of the present invention are merely illustrative of or explaining the principles of the invention and are not to be construed as limiting the invention. Therefore, any modification, equivalent replacement, improvement and the like made without departing from the spirit and scope of the present invention should be included in the protection scope of the present invention. Further, it is intended that the appended claims cover all such variations and modifications as fall within the scope and boundaries of the appended claims or the equivalents of such scope and boundaries.

Claims (14)

1. A method for identifying a neighboring space based on SLAM, comprising:
acquiring parameter information and landmark information obtained by a robot performing SLAM measurement on an indoor space;
calculating Manhattan directions according to the parameter information and the landmark information;
projecting the set of landmarks onto the Manhattan directions, and respectively obtaining, in each of the three directions, the two landmark points with the farthest distance;
obtaining equally spaced planes along each Manhattan direction according to the Manhattan direction and the two farthest landmark points, adding each equally spaced plane to the plane set of the corresponding Manhattan direction, and outputting the plane set as plane information;
and identifying the adjacent space of the space where the robot is located according to the plane information.
2. The method of claim 1, wherein identifying the adjacent space of the space where the robot is located according to the plane information comprises:
and identifying the adjacent space of the space where the robot is located according to the change of the plane information.
3. The method of claim 1, wherein the proximity space is a walkable space connected to a space where the robot is located.
4. The method of claim 1, wherein the method further comprises:
and planning the walking path of the robot according to the adjacent space.
5. The method of claim 4, wherein the walking path comprises: preferentially walking to the adjacent space.
6. The method of claim 4, wherein the walking path comprises: and when the walking of the space is finished, walking to the adjacent space.
7. An apparatus for recognizing a neighboring space based on SLAM, comprising:
a landmark information acquisition module, configured to acquire parameter information and landmark information obtained by a robot performing SLAM measurement on an indoor space;
an adjacent space identification module, configured to calculate Manhattan directions according to the parameter information and the landmark information;
project the set of landmarks onto the Manhattan directions, and respectively obtain, in each of the three directions, the two landmark points with the farthest distance;
obtain equally spaced planes along each Manhattan direction according to the Manhattan direction and the two farthest landmark points, add each equally spaced plane to the plane set of the corresponding Manhattan direction, and output the plane set as plane information; and
identify the adjacent space of the space where the robot is located according to the plane information.
8. The device of claim 7, wherein identifying the adjacent space of the space where the robot is located according to the plane information comprises:
and identifying the adjacent space of the space where the robot is located according to the change of the plane information.
9. The device of claim 8, wherein the proximity space is a walkable space connected to a space where the robot is located.
10. The apparatus for recognizing a neighboring space based on SLAM of claim 7, further comprising:
a path planning module, configured to plan a walking path of the robot according to the adjacent space.
11. The apparatus for recognizing a neighboring space based on SLAM of claim 10, wherein the walking path comprises: preferentially walking to the adjacent space.
12. The apparatus for recognizing a neighboring space based on SLAM of claim 10, wherein the walking path comprises: walking to the adjacent space when the walking of the current space is finished.
13. A system for recognizing a near space based on SLAM, the system comprising:
a memory and one or more processors;
wherein the memory is communicatively coupled to the one or more processors and has stored therein instructions executable by the one or more processors to cause the one or more processors to perform the method of any of claims 1-6.
14. A computer-readable storage medium having stored thereon computer-executable instructions operable, when executed by a computing device, to perform the method of any of claims 1-6.
CN201810909618.8A 2018-08-10 2018-08-10 Method, device and system for identifying adjacent space based on SLAM and storage medium Active CN109062211B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810909618.8A CN109062211B (en) 2018-08-10 2018-08-10 Method, device and system for identifying adjacent space based on SLAM and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810909618.8A CN109062211B (en) 2018-08-10 2018-08-10 Method, device and system for identifying adjacent space based on SLAM and storage medium

Publications (2)

Publication Number Publication Date
CN109062211A CN109062211A (en) 2018-12-21
CN109062211B true CN109062211B (en) 2021-12-10

Family

ID=64683313

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810909618.8A Active CN109062211B (en) 2018-08-10 2018-08-10 Method, device and system for identifying adjacent space based on SLAM and storage medium

Country Status (1)

Country Link
CN (1) CN109062211B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8331652B2 (en) * 2007-12-17 2012-12-11 Samsung Electronics Co., Ltd. Simultaneous localization and map building method and medium for moving robot
CN102866706A (en) * 2012-09-13 2013-01-09 深圳市银星智能科技股份有限公司 Cleaning robot adopting smart phone navigation and navigation cleaning method thereof
CN103942832A (en) * 2014-04-11 2014-07-23 浙江大学 Real-time indoor scene reconstruction method based on on-line structure analysis
CN105974928A (en) * 2016-07-29 2016-09-28 哈尔滨工大服务机器人有限公司 Robot navigation route planning method
CN106003052A (en) * 2016-07-29 2016-10-12 哈尔滨工大服务机器人有限公司 Creation method of robot visual navigation map
CN106952338A (en) * 2017-03-14 2017-07-14 网易(杭州)网络有限公司 Method, system and the readable storage medium storing program for executing of three-dimensional reconstruction based on deep learning
CN107390681A (en) * 2017-06-21 2017-11-24 华南理工大学 A kind of mobile robot real-time location method based on laser radar and map match
CN107913039A (en) * 2017-11-17 2018-04-17 北京奇虎科技有限公司 Block system of selection, device and robot for clean robot
CN107943058A (en) * 2017-12-26 2018-04-20 北京面面俱到软件有限公司 Sweeping robot and its cleaning paths planning method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5638116A (en) * 1993-09-08 1997-06-10 Sumitomo Electric Industries, Ltd. Object recognition apparatus and method
US9928594B2 (en) * 2014-07-11 2018-03-27 Agt International Gmbh Automatic spatial calibration of camera network
KR101575597B1 (en) * 2014-07-30 2015-12-08 엘지전자 주식회사 Robot cleaning system and method of controlling robot cleaner


Also Published As

Publication number Publication date
CN109062211A (en) 2018-12-21


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant