EP4033324B1 - Verfahren und Vorrichtung zur Erfassung von Hindernisinformation für einen mobilen Roboter (Method and device for sensing obstacle information for a mobile robot)
- Publication number
- EP4033324B1 (application EP20865409.5A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- mobile robot
- dimensional
- obstacle
- contour
- expansion
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/18—Image warping, e.g. rearranging pixels individually
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0246—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
- G05D1/0251—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0212—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
- G05D1/0214—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1656—Programme controls characterised by programming, planning systems for manipulators
- B25J9/1664—Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
- B25J9/1666—Avoiding collision or forbidden zones
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/89—Lidar systems specially adapted for specific applications for mapping or imaging
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/93—Lidar systems specially adapted for specific applications for anti-collision purposes
- G01S17/931—Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/18—Extraction of features or characteristics of the image
- G06V30/1801—Detecting partial patterns, e.g. edges or contours, or configurations, e.g. loops, corners, strokes or intersections
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
Definitions
- the present application relates to the field of robot positioning and navigation, and in particular to obstacle information sensing method and device for a mobile robot.
- the commonly used sensing methods for sensing obstacles for a mobile robot include ultrasound, infrared and laser.
- ultrasound cannot sense the exact location and range of obstacles; infrared can only sense a single direction; a two-dimensional laser can sense detailed obstacle information in the surrounding environment, but only obstacles in its installation plane, so a low obstacle or a higher suspended obstacle relative to the installation plane cannot be sensed.
- WO2018054080A1 discloses a method and device for updating a planned path of a robot, the method comprising: acquiring a current carrier coordinate in real time in a movement process of a robot, and converting the current carrier coordinate into a current global coordinate of the robot (S101); calculating a movement speed estimation and a position estimation of the robot moving to a next site according to the current global coordinate and a current movement speed (S102); acquiring a three-dimensional coordinate point of a pixel point in a depth image representing an obstacle in a surrounding environment of the robot, and converting the three-dimensional coordinate point into a two-dimensional coordinate point so as to obtain a projection outline of the obstacle in the surrounding environment of the robot (S103); acquiring an angular point of the obstacle in the surrounding environment according to the two-dimensional coordinate point (S104); and finally, updating a planned path of the robot according to the position estimation and the angular point of the obstacle (S105).
- CN110202577A provides an autonomous mobile robot capable of realizing obstacle detection, and a method thereof.
- the robot comprises a Kinect sensor, a depth information acquisition module, a first three-dimensional coordinate acquisition module, a coordinate conversion module, a grid map generation module and an obstacle distance generation module.
- the method comprises the steps of acquiring a depth image of a current environment through a vision sensor; determining a first three-dimensional coordinate based on a coordinate system of the vision sensor according to the depth information; projecting a second three-dimensional coordinate of an entity onto a two-dimensional grid map, and obtaining a two-dimensional coordinate of the entity; coinciding the two-dimensional grid map with the plane on which the bottom surface of the robot is located; and acquiring a position relation between the entity and the robot according to the two-dimensional coordinate.
- the autonomous mobile robot capable of realizing obstacle detection, and the method thereof provided by the invention, solve the technical problem that obstacles cannot be accurately judged by the robot due to the adoption of an infrared sensor and an obstacle analysis method.
- US20050131581A1 discloses an environment recognizing device and an environment recognizing method that can draw an environment map for judging if it is possible to move a region where one or more than one steps are found above or below a floor, a route planning device and a route planning method that can appropriately plan a moving route, using such an environment map and a robot equipped with such an environment recognizing device and a route planning device.
- the robot comprises an environment recognizing section including a plurality of plane extracting section 401 adapted to compute plane parameters from a parallax image or a distance image and extract a plurality of planes including the floor surface, an obstacle recognizing section 402 adapted to recognize obstacles on the plurality of planes including the floor surface and an environment map updating section 403 adapted to draw an environment map (obstacle map) for each of the planes on the basis of the result of recognition of the obstacle recognizing section 402 and update the existing environment maps and a route planning section 404 adapted to plan a route on the basis of the environment maps.
- the route planning section 404 selects a plane as route coordinate when an obstacle is found on it in the environment map of the floor surface but not found in the environment map of the plane.
- the present application provides a method and device for sensing obstacle information for a mobile robot, which can sense obstacle information in a surrounding space stereoscopically.
- the present application provides a method for sensing obstacle information for a mobile robot, the method comprises:
- obtaining three-dimensional point coordinates of the pixel in the coordinate system of the mobile robot through the camera external parameter, the depth value of the pixel, the camera internal parameter, and the coordinate value of the pixel comprises:
- converting the data in the coordinate system of the mobile robot into the projection in the moving plane of the mobile robot to obtain the two-dimensional data comprises:
- establishing the search window for detecting obstacles within the range of the travel route of the mobile robot, and searching for the two-dimensional data within the range of the search window comprises:
- converting the data in the coordinate system of the mobile robot into the projection in the moving plane of the mobile robot to obtain the two-dimensional data comprises:
- the carrying surface for the mobile robot is the ground, and retaining the data corresponding to an obstacle that may be contacted during the movement process comprises: retaining three-dimensional point(s) whose z-value in the three-dimensional point coordinates in the coordinate system of the mobile robot is greater than a first threshold and less than a second threshold.
- the method further comprises: determining, according to a location of the nearest obstacle in the search window, whether the mobile robot needs to avoid the obstacle, and if yes, triggering to obtain a passable area for the mobile robot to travel.
- Obtaining the passable area for the mobile robot to travel comprises:
- setting, for any two-dimensional point of the contour of the obstacle, the expansion circumference by taking the two-dimensional point as the center of the circle and the extension distance as the radius comprises:
- taking the projection center of the body of the mobile robot on the carrying surface as the starting point to set at least one ray at equal angles further comprises: starting from a boundary of an image field of view and taking the projection center of the body of the mobile robot on the carrying surface as the starting point to take rays at equal angles to form grids, wherein forming the expansion boundary of the current two-dimensional point based on the arc point, comprises:
- the present application provides a device for sensing obstacle information for a mobile robot, the device comprises:
- the present application further provides a mobile robot, a body of the mobile robot comprising:
- the present application further provides a computer readable storage medium, wherein the computer readable storage medium stores thereon computer programs that, when executed by a processor, cause the processor to implement the steps of any one of the methods for sensing obstacle information for a mobile robot.
- the present application further provides a computer program product comprising instructions which, when run on a computer, cause the computer to implement the steps of any one of the methods for sensing obstacle information for a mobile robot.
- the depth image data may be converted into the projection in the moving plane of the mobile robot, and a search is performed based on the two-dimensional projection data to obtain obstacle information, yielding data similar to that of a two-dimensional laser; this overcomes the defect that conventional obstacle avoidance sensors cannot perform full stereo sensing and obstacle avoidance, and enables sensing obstacles over a large range and at multiple levels.
- the depth data is projected into the moving plane of the mobile robot, therefore in application, the mature technology for obstacle avoidance control in the two-dimensional environment can be widely used, not only the three-dimensional sensing information of the three-dimensional sensor can be used, but also a quick route planning can be performed based on the two-dimensional data.
- Depth image data acquired by a depth camera is data that characterizes the distance values of spatial points by means of pixel gray values based on pixel coordinates, and may be expressed as p(u, v, d), where u and v are the coordinates of a pixel in the image coordinate system; specifically, u and v are the pixel row coordinate and the pixel column coordinate of the pixel in the depth image, respectively.
- d is a depth value of a spatial three-dimensional point corresponding to the pixel, where the depth value is also the distance value, that is, the distance between the spatial three-dimensional point corresponding to the pixel and the depth camera.
- a three-dimensional point set obtained by the depth camera is projected into a two-dimensional plane to obtain a two-dimensional point set. Not only are the points of the obstacles in the projection plane; the obstacle information of different height levels is preserved in the projection plane, and thereby the obstacle information of the three-dimensional space environment is also preserved. In other words, the three-dimensional space information is preserved in the form of two-dimensional points. In this way, for a mobile robot moving in a plane, route planning and control can be carried out in a two-dimensional plane, the processing complexity is reduced, and responsiveness is improved.
- three-dimensional points within a certain height range in the depth image are converted into three-dimensional points of the coordinate system of the mobile robot according to the depth image data, and the three-dimensional points of the coordinate system of the mobile robot are reduced to a two-dimensional plane for processing; in the two-dimensional plane, the closest point on the polar coordinates is taken as the contour of the obstacle, and the data similar to the two-dimensional laser is obtained for obstacle avoidance, not only the three-dimensional sensing information of the three-dimensional sensor can be used, but also a route planning in the two-dimensional environment can be made.
- An embodiment of the present application provides a method for sensing obstacle information for a mobile robot, the method includes: acquiring depth image data; converting the depth image data into data in the coordinate system of the mobile robot; converting the data in the coordinate system of the mobile robot into a projection in a moving plane of the mobile robot to obtain two-dimensional data; detecting the two-dimensional data within a range of travel route of the mobile robot; and obtaining obstacle information based on the detected two-dimensional data.
- the depth image data may be converted into the projection in the moving plane of the mobile robot, and a search is performed based on the two-dimensional projection data to obtain obstacle information, yielding data similar to that of a two-dimensional laser; this overcomes the defect that conventional obstacle avoidance sensors cannot perform full stereo sensing and obstacle avoidance, and enables sensing obstacles over a large range and at multiple levels.
- the general method for processing depth data is to establish a three-dimensional model of the obstacle based on the depth data. Different from this general method, in the embodiment of the present application the depth data is projected into the moving plane of the mobile robot to obtain two-dimensional data, so that the mature technology for obstacle avoidance control in the two-dimensional environment can be applied and the solution of the present application can be widely used. Therefore, in the solution of the present application, not only can the three-dimensional sensing information of the three-dimensional sensor be used, but quick route planning can also be performed based on the two-dimensional data.
- FIG. 1 is a flow diagram of sensing obstacle information based on depth image data according to an embodiment of the present application.
- Step 101 acquiring current depth image data;
- the depth image data of the current physical space is obtained in real time by a depth camera installed on the body of the mobile robot, so as to obtain a dense depth map;
- the depth camera may be a depth camera based on stereo vision or a depth camera based on time of flight (TOF).
- Step 102 converting coordinates (u, v, d) of each pixel in the depth image data into three-dimensional point coordinates in a coordinate system of the mobile robot, that is, converting the depth image data into data in the coordinate system of the mobile robot;
- the origin of the coordinate system of the mobile robot coincides with the center of the mobile robot
- the coordinate system of the mobile robot is the coordinate system of the robot body
- the advance direction of the robot is the x-axis
- a direction which is on the same plane as the x-axis, perpendicular to the x-axis and to the left is the y-axis
- the direction perpendicular to the plane where the x-axis and y-axis are located is the z-axis.
- the plane where the x-axis and y-axis are located can be regarded as the moving plane of the mobile robot, which is parallel to the carrying surface for the body of the mobile robot.
- the carrying surface for the body of the mobile robot is usually the ground
- the plane where the x-axis and the y-axis are located is parallel to the ground.
- R_bc is the rotation matrix of the depth camera coordinate system relative to the coordinate system of the mobile robot, which is a 3×3 matrix
- T_bc is the translation matrix of the depth camera coordinate system relative to the coordinate system of the mobile robot, which is a 3×1 matrix.
- coordinate transformation is performed to obtain the three-dimensional point coordinates P b (x, y, z) in the coordinate system of the mobile robot; after the coordinate transformation is performed on all the pixels in the depth image data, a three-dimensional point set is obtained, the set is also called three-dimensional point cloud.
- P_b = R_bc · d · K⁻¹ · (u, v, 1)ᵀ + T_bc, where d is a scalar, namely the depth value of the pixel, K is the camera internal parameter matrix, (u, v, 1)ᵀ is a matrix composed of the pixel coordinate values, and u and v are the pixel row coordinate and the pixel column coordinate of the pixel in the depth image, respectively.
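The conversion from a depth pixel to a three-dimensional point in the robot's coordinate system can be sketched in pure Python. This is a minimal illustration; the function name, the nested-list matrix format, and the standard pinhole form of K are assumptions of the sketch, not text from the patent.

```python
def pixel_to_robot_point(u, v, d, K, R_bc, T_bc):
    """Back-project pixel (u, v) with depth d through the inverse intrinsic
    matrix K^-1, then transform from the camera frame into the robot frame:
    P_b = R_bc * (d * K^-1 * (u, v, 1)^T) + T_bc."""
    fx, fy = K[0][0], K[1][1]   # focal lengths of a standard pinhole K
    cx, cy = K[0][2], K[1][2]   # principal point
    # Camera-frame point: d * K^-1 * (u, v, 1)^T for K = [[fx,0,cx],[0,fy,cy],[0,0,1]]
    p_c = [(u - cx) / fx * d, (v - cy) / fy * d, float(d)]
    # Rotate into the robot body frame and add the camera mounting offset T_bc.
    return [sum(R_bc[i][j] * p_c[j] for j in range(3)) + T_bc[i] for i in range(3)]
```

Applying this to every pixel of the depth image yields the three-dimensional point cloud in the robot's coordinate system described above.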
- Step 103 screening three-dimensional points in a three-dimensional point set.
- the remaining three-dimensional points are all three-dimensional points corresponding to obstacles in the scene that may affect the movement of the mobile robot, including three-dimensional points on the robot's walking route and those not on the robot's walking route.
- a three-dimensional point with a coordinate z-value of the three-dimensional point greater than a first threshold and less than a second threshold may be retained, the first threshold is less than the second threshold.
- h_low < z < h_high, where h_low is the first threshold; its specific value may be the difference between the z-value corresponding to the ground in the coordinate system of the mobile robot and the z-value of the origin coordinate.
- h_high is the second threshold; its specific value may be the difference between the z-value of the highest point of the mobile robot in the coordinate system of the mobile robot and the z-value of the origin coordinate.
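The height screening of step 103 amounts to a band-pass filter on the z coordinate. A minimal sketch follows; the helper name and the tuple-based point representation are assumptions:

```python
def screen_points(points, h_low, h_high):
    """Retain only 3-D points whose z-value lies between the two thresholds
    (h_low < z < h_high), i.e. points at heights where they could touch the
    robot body during movement."""
    return [p for p in points if h_low < p[2] < h_high]
```

Points on or below the ground and points above the robot's highest point are discarded, leaving only points of obstacles that could actually be contacted.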
- Step 104 performing two-dimensional projection regarding the screened three-dimensional points.
- the screened point cloud is still three-dimensional, and it is the point cloud corresponding to obstacles that may be contacted by the mobile robot during its movement. That is, no matter what height a screened point corresponds to, it always belongs to an obstacle that the mobile robot may contact during movement, so for a mobile robot moving in a plane, the height information of the screened point cloud may be considered redundant.
- the z-axis coordinate value in the coordinates may be discarded or set to 0, and only the x and y coordinate values are retained, thereby obtaining a two-dimensional point coordinate P_p(x, y); in this way a two-dimensional projection is performed on the screened three-dimensional points.
- the z-axis coordinate values of all screened three-dimensional point coordinates are discarded, or the z-axes of all screened three-dimensional point coordinates are set to 0, so as to acquire a two-dimensional point set for characterizing obstacles using two-dimensional points, thereby facilitating searching for a passable area for a robot.
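The projection of step 104 is then a one-line transformation. A minimal sketch, with an assumed helper name:

```python
def project_to_plane(points3d):
    """Discard the redundant z coordinate of each screened 3-D point to
    obtain the 2-D projection P_p(x, y) in the robot's moving plane."""
    return [(x, y) for (x, y, _z) in points3d]
```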
- Step 105 establishing a search window in an advance direction of the mobile robot, so as to detect obstacles within a range of a travel route of the mobile robot.
- FIG. 2 is a schematic diagram of a mobile robot sensing the closest obstacle within an obstacle avoidance range through a search window.
- the search window is three-dimensional, and the dotted frame is the projection of the search window on the moving plane, that is, the top view of the search window.
- the relative location of the search window (shown in the dotted line in FIG. 2 ) and the mobile robot is fixed, and the size of the section of the search window perpendicular to the traveling direction of the mobile robot is greater than or equal to the projection size of the mobile robot on this section.
- the search window is a rectangular frame, and the length of the rectangular frame is set according to a required obstacle avoidance distance.
- the length of the rectangular frame is the length in the traveling direction of the mobile robot.
- the search window may also be a two-dimensional window; the plane where the window is located is parallel to the carrying surface for the mobile robot, the shape of the search window may be a rectangle, the length of the rectangle is set according to the required obstacle avoidance distance, and the width of the rectangle is greater than or equal to a passable width of the body of the mobile robot in the advance direction.
- the length of the rectangle is the length in the traveling direction of the mobile robot.
- When the mobile robot moves, it searches based on the two-dimensional point cloud within the range of the search window. If the mobile robot detects an obstacle within a certain distance, it needs to perform obstacle avoidance, where this certain distance is the obstacle avoidance range of the mobile robot. By detecting the position of the closest obstacle within the obstacle avoidance range, it may be determined whether the robot currently needs to decelerate, stop, or take other obstacle avoidance measures.
- the other obstacle avoidance measures may include turning, U-turn, etc.
- a first distance threshold and a second distance threshold may be preset, and the first distance threshold is less than the second distance threshold.
- the mobile robot may perform a parking action to avoid the obstacle.
- the mobile robot may not perform obstacle avoidance temporarily.
- the mobile robot may perform a deceleration action or take other obstacle avoidance measures for avoiding obstacles.
- The two-dimensional point A in the search box is the two-dimensional point corresponding to the obstacle nearest to the mobile robot; according to the location of the two-dimensional point A, it may then be determined whether the robot currently needs to decelerate, park, or take other obstacle avoidance measures.
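Steps 105 and 106 can be sketched as a rectangular-window search followed by a two-threshold decision. The function names, the frame convention (x forward, y left, window centered on the x-axis) and the action labels are assumptions of this sketch:

```python
import math

def nearest_obstacle_in_window(points2d, window_length, window_width):
    """Return the 2-D point inside the rectangular search window that is
    closest to the robot's projection center, or None if the window is empty."""
    half_w = window_width / 2.0
    in_window = [(x, y) for (x, y) in points2d
                 if 0.0 <= x <= window_length and -half_w <= y <= half_w]
    if not in_window:
        return None
    return min(in_window, key=lambda p: math.hypot(p[0], p[1]))

def avoidance_action(distance, first_threshold, second_threshold):
    """Two preset distance thresholds (first < second): park when the nearest
    obstacle is very close, decelerate (or take other measures) when it is
    moderately close, otherwise continue without avoidance."""
    if distance < first_threshold:
        return "park"
    if distance < second_threshold:
        return "decelerate"
    return "continue"
```

The returned nearest point plays the role of point A in FIG. 2; its distance drives the park/decelerate/continue decision.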
- Step 106 determining, according to a location of the nearest obstacle in the search window, whether the mobile robot needs to avoid obstacles, if yes, executing step 107, otherwise, returning to step 101.
- Step 107 expanding contour(s) of obstacle(s).
- contour(s) of obstacle(s) may be expanded to form a passable area for the mobile robot to travel in real time; the contour is expanded by extending a certain distance outward from the sensed obstacle boundary, and the area enclosed by the extended obstacle boundary represents an impassable area: the center of the mobile robot must not enter this area, otherwise a collision will occur.
- the distance that the obstacle boundary extends outward is at least the maximum distance between a projection center of the robot body on the carrying surface and the contour.
- the projection center of the body of the mobile robot on the carrying surface is taken as the starting point to make multiple rays, and the two-dimensional point(s) that are on the rays in the two-dimensional point cloud and are closest to the mobile robot are taken as the obstacle boundary to obtain two-dimensional points of the contour of the obstacle; the two-dimensional points of the contour are expanded to obtain the two-dimensional points of the expansion contour; the two-dimensional points of the expansion contour are taken as a boundary between the passable area and the impassable area for the mobile robot.
- Both the search window and the search box are referred to as the search window.
- FIG. 3 is a schematic diagram of sensing a contour of an obstacle.
- the projection center of the robot body on the carrying surface is taken as the starting point to make rays at equal angles to form a grid
- the range formed by the boundary of the field of view in FIG. 3 is the range of the image from boundary 1 to boundary 2
- only a few rays are shown in FIG. 3 , but not all rays are shown.
- the two-dimensional point(s) that are on these rays and closest to the projection center of the robot body on the carrying surface are taken as the obstacle boundary, that is, the contour of the obstacle.
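The equal-angle ray grid can be approximated by binning the 2-D points by polar angle around the robot's projection center and keeping the nearest point in each bin, which yields two-dimensional-laser-like contour data. A sketch, where the helper name and the default angular step are assumptions:

```python
import math

def obstacle_contour(points2d, angle_step_deg=1.0):
    """Bin 2-D points by polar angle around the robot's projection center
    (the pole) and keep only the point nearest to the pole in each angular
    bin, emulating rays taken at equal angles."""
    nearest = {}  # bin index -> (range, point)
    for (x, y) in points2d:
        ang = math.degrees(math.atan2(y, x))
        rng = math.hypot(x, y)
        b = int(ang // angle_step_deg)
        if b not in nearest or rng < nearest[b][0]:
            nearest[b] = (rng, (x, y))
    # The retained nearest points form the sensed contour of the obstacles.
    return [pt for (_rng, pt) in nearest.values()]
```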
- FIG. 4 is a schematic diagram of an expansion contour.
- In the advance direction of the mobile robot, for any two-dimensional point on the contour of the obstacle, the two-dimensional point is taken as the center of a circle and the expanded extension distance is taken as the radius to form an expansion circumference; the arc sandwiched between the two tangent lines that pass through the projection center of the robot body on the carrying surface and are tangent to the expansion circumference is taken as the expansion boundary of the two-dimensional point; the central angle of the sandwiched arc is less than or equal to 180°.
- the expansion circumferences are taken one by one and the arcs sandwiched between the tangent lines are taken one by one to obtain the expansion boundaries of all two-dimensional points.
- the envelope of the expansion boundaries of all two-dimensional points may be taken as the expanded contour, so as to obtain the passable area for the mobile robot to travel and the impassable area.
- FIG. 5 is a flow diagram of expanding a contour according to an embodiment of the present application.
- the projection center of the robot body on the carrying surface is taken as the pole and the advance direction of the robot is taken as the polar axis, so as to establish a polar coordinate system;
- Step 501 determining a tangent line angle range for the expansion circumference of a current two-dimensional point of the contour of the obstacle.
- Step 502 calculating coordinates of the arc points sandwiched between the tangent lines on the circumference, specifically: let the polar angle θ satisfy −arccos(√(1 − r²/R²)) + θ_p ≤ θ ≤ arccos(√(1 − r²/R²)) + θ_p, where (R, θ_p) are the polar coordinates of the current two-dimensional point and r is the extension distance; set the value of the polar angle θ by using the angle Δθ as a step angle, and substitute each set value of the polar angle θ into the equation of the expansion circumference to solve the value of the radius vector ρ.
- the value of the polar angle θ may be continuously set according to the step angle δ, and each set value of the polar angle θ may be substituted into the equation of the expansion circumference to solve for a value of the radius vector ρ; that is, step 502 is repeated until a setting requirement is met.
- the setting requirement is that the number of polar coordinates of arc points obtained by calculation reaches a set threshold for the total number of arc coordinate points, thereby obtaining the set of polar coordinates of the arc points sandwiched between the tangent lines; the arc represented by this set is taken as the expansion boundary of the current two-dimensional point of the contour of the obstacle.
- the polar angle θ may be divided equally according to the angle used for dividing the field of view into grids, so as to obtain the step angle δ.
- the step angle may also be set directly. The smaller the step angle is, the more times the above step 502 is performed and the more arc polar coordinate points are obtained.
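Steps 501 and 502 can be sketched together as follows, assuming a polar frame with the robot's projection center as the pole. The tangent-line half-angle is computed here as arcsin(r/R), the angle at which the discriminant of the polar circle equation ρ² − 2ρR·cos(θ − θ_p) + R² − r² = 0 vanishes; the step angle and the returned point format are illustrative assumptions.

```python
import math

def arc_points(center_rho, center_phi, r, step=math.radians(1.0)):
    """Sample the near arc of an expansion circumference (center at polar
    coordinates (center_rho, center_phi), radius r) sandwiched between
    the two tangent lines drawn from the pole (the robot's projection
    center). Returns a list of polar coordinates (theta, rho)."""
    R = center_rho
    if r >= R:
        raise ValueError("pole lies on or inside the circle; no tangents")
    half = math.asin(r / R)  # tangent-line half angle from the pole
    pts = []
    theta = center_phi - half
    while theta <= center_phi + half + 1e-12:
        # polar circle equation: rho^2 - 2*rho*R*cos(theta - phi) + R^2 - r^2 = 0
        c = math.cos(theta - center_phi)
        disc = max(R * R * c * c - (R * R - r * r), 0.0)
        rho = R * c - math.sqrt(disc)  # smaller root = arc facing the robot
        pts.append((theta, rho))
        theta += step
    return pts
```

Taking the smaller root of the quadratic selects the arc half facing the robot, consistent with a central angle of at most 180°.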
- Step 503: extracting a next two-dimensional point of the contour of the obstacle; if not all points of the contour of the obstacle have been extracted, returning to step 501 until all the points of the contour of the obstacle have been processed, that is, until the coordinates of the arc points sandwiched between the tangent lines on the circumferences corresponding to all the two-dimensional points of the contour of the obstacle have been calculated.
- two-dimensional points of the contour of the obstacle may be extracted at equal intervals. If all points of the contour of the obstacle have been extracted, step 504 is executed.
- Step 504: forming the two-dimensional points of the expansion contour based on the arc coordinate points, that is, screening the two-dimensional points of the expansion contour from all arc coordinate points.
- the envelope of all arc coordinate points is taken as the expansion contour.
- the two-dimensional points of the expansion contour are screened from all the arc coordinate points.
- the range of the horizontal field of view of the depth camera is divided into grids; that is, within the range formed by the boundary of the field of view, multiple rays at equal angles are set by taking the projection center of the robot body on the carrying surface as the starting point, so as to form grids. If there is no two-dimensional point of the expansion contour in the current grid, the arc coordinate point located in the grid among the arc coordinate points calculated in step 502 is taken as the two-dimensional point of the expansion contour for the grid. If there is already a two-dimensional point of an existing expansion contour in the grid, it is determined which of the arc coordinate point located in the grid (among the arc coordinate points calculated in step 502) and the two-dimensional point of the existing expansion contour is closer to the projection center of the robot body on the carrying surface, and the closer point is taken as the two-dimensional point of the expansion contour for the grid.
- FIG. 6 is a schematic diagram of a scene where a mobile robot is located.
- FIG. 7 is a schematic diagram of a two-dimensional point cloud sensed based on depth image data of the scene shown in FIG. 6 and a contour expansion result determined based on the sensed two-dimensional point cloud.
- One application of the sensing-based expansion result for the contour of the obstacle is that the mobile robot may perform route planning, movement control, and obstacle avoidance control based on it.
- the area without a contour expansion for an obstacle is a passable area, which is consistent with the actual scene. That is to say, the expansion contour is taken as the boundary: the area on the side close to the mobile robot is the passable area, and the area on the other side is an impassable area.
- the point cloud is screened according to set height thresholds, and the ground point cloud and the point cloud above the height of the mobile robot in the actual environment are eliminated, so that only the point cloud corresponding to the obstacle(s) that may be contacted during the movement process is used for obstacle sensing.
- when the mobile robot moves in a plane, the information in its height direction is redundant.
- the screened three-dimensional point cloud is projected onto a two-dimensional plane. In this way, the two-dimensional point cloud still retains the obstacle information of the three-dimensional environment, but stores it only in the form of two-dimensional points.
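The screening-and-projection step might look like the following sketch; the two height thresholds are illustrative placeholders for the configured ground limit and robot-height limit.

```python
def to_planar_cloud(points_3d, z_min=0.02, z_max=0.5):
    """Drop ground points (z <= z_min) and points above the robot's
    height (z >= z_max), then project the remainder into the moving
    plane by discarding the z coordinate."""
    return [(x, y) for (x, y, z) in points_3d if z_min < z < z_max]
```

The result is laser-scan-like two-dimensional data that still encodes every obstacle the robot could physically contact.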
- the expansion contour of the obstacle is obtained by sampling circumferences directly on the point cloud corresponding to the obstacle, which requires no rasterization, avoids the increased time consumption and precision loss of the rasterization process, and can provide accurate expansion contour data for obstacle avoidance control.
- FIG. 8 is a schematic diagram of an obstacle avoidance system for a mobile robot based on the depth camera according to an embodiment of the present application.
- the depth camera installed on the body of the mobile robot obtains the current depth image data
- the obstacle sensing module obtains obstacle information through the depth data, and generates the expansion contour of the obstacle
- the movement control module performs route planning and movement control based on the expansion contour of the obstacle, and generates a movement instruction
- the actuator module travels according to the movement instruction, thereby driving the mobile robot to avoid obstacles.
- the movement instruction is also referred to as the motion instruction.
- the above-mentioned expansion contour is the contour of the expanded obstacle.
- FIG. 9 is a schematic diagram of an obstacle sensing module according to an embodiment of the present application.
- the obstacle sensing module includes:
- the obstacle sensing module further includes: a screening module, configured for screening the data converted by the first data conversion module and retaining the data belonging to the moving plane of the mobile robot. That is to say, the data corresponding to the obstacles that may be contacted by the mobile robot during the movement process are retained.
- the obstacle sensing module further includes: a passable area acquisition module, configured for taking a projection center of a body of the mobile robot on a carrying surface as a starting point to set at least one ray at equal angles, and taking the two-dimensional point(s) on the at least one ray that is closest to the starting point as a contour of the obstacle to obtain two-dimensional point(s) of the contour of the obstacle.
- an expansion circumference is set by taking the two-dimensional point as the center of a circle and an expansion distance as a radius; two tangent lines passing through the starting point and tangent to the expansion circumference are obtained; an arc sandwiched between the two tangent lines is taken as an expansion boundary of the two-dimensional point of the contour of the obstacle, where a central angle of the sandwiched arc is less than or equal to 180°, and the expansion distance is at least the maximum distance between the projection center of the body of the mobile robot on the carrying surface and the contour.
- the expansion boundaries of all two-dimensional points of the contour of the obstacle are taken, and an envelope of the expansion boundaries of all two-dimensional points is taken as the two-dimensional points of the expansion contour, so as to obtain a passable area in which the mobile robot can travel and an impassable area.
- the first data conversion module is specifically configured for acquiring, for any pixel in the depth image data, three-dimensional point coordinates of the pixel in the coordinate system of the mobile robot through a camera external parameter, a depth value of the pixel, a camera internal parameter, and a coordinate value of the pixel; and acquiring, after converting all pixels in the depth image data into three-dimensional point coordinates in the coordinate system of the mobile robot, a three-dimensional point cloud composed of three-dimensional points in the coordinate system of the mobile robot.
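The per-pixel conversion described above — combining the extrinsic transformation matrix, the depth value, the inverse intrinsic matrix, and the pixel coordinate vector — can be sketched as follows; the pinhole parameter names (fx, fy, cx, cy) and the nested-list matrix layout are assumptions for illustration, not the patent's notation.

```python
def pixel_to_robot(u, v, depth, fx, fy, cx, cy, T):
    """Back-project pixel (u, v) with its depth value into the camera
    frame using the analytic inverse of the pinhole intrinsic matrix K,
    then apply the 4x4 extrinsic transform T (camera -> robot frame),
    given as nested lists, to obtain robot-frame coordinates."""
    # camera-frame point: depth * K^-1 * [u, v, 1]^T
    xc = (u - cx) * depth / fx
    yc = (v - cy) * depth / fy
    zc = depth
    p = (xc, yc, zc, 1.0)  # homogeneous coordinates
    # transform into the coordinate system of the mobile robot
    return tuple(sum(T[i][j] * p[j] for j in range(4)) for i in range(3))
```

Applying this to every pixel of the depth image yields the three-dimensional point cloud in the robot's coordinate system.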
- the second data conversion module is specifically configured for discarding, for any three-dimensional point coordinates in the coordinate system of the mobile robot, a coordinate value perpendicular to the moving plane of the mobile robot in the three-dimensional point coordinates; alternatively, setting the coordinate value perpendicular to the moving plane of the mobile robot in the three-dimensional point coordinates to 0, to obtain a two-dimensional projection of the three-dimensional point in the moving plane of the mobile robot; and obtaining, after converting all three-dimensional point coordinates in the mobile robot coordinate system into two-dimensional projections in the moving plane of the mobile robot, a two-dimensional point cloud composed of the two-dimensional projections.
- the sensing module is specifically configured for establishing a search window in an advance direction of the mobile robot, and a relative location of the search window and the mobile robot is fixed;
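A minimal sketch of the fixed search window check follows; the window dimensions are arbitrary assumptions, with the robot's advance direction taken as the +x axis of the moving plane.

```python
def obstacle_in_window(points_2d, width=0.6, depth=1.0):
    """Check whether any two-dimensional point falls inside a search
    window fixed ahead of the robot: a rectangle of the given width
    centered on the advance direction (+x), extending `depth` forward."""
    return any(0.0 <= x <= depth and abs(y) <= width / 2.0
               for (x, y) in points_2d)
```

Because the window is fixed relative to the robot, it moves with the robot; a non-empty intersection with the two-dimensional point cloud means an obstacle has been detected on the travel path.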
- the passable area acquisition module is specifically configured for converting the two-dimensional point coordinates of the current contour of the obstacle into polar coordinates
- An embodiment of the present application further provides a mobile robot based on a depth camera, including a memory and a processor; the memory stores computer programs that, when executed by the processor, cause the processor to execute the steps of any of the above-mentioned methods for sensing obstacle information for a mobile robot.
- the memory may include a Random Access Memory (RAM), or may include a Non-Volatile Memory (NVM), for example at least one disk memory.
- the memory may also be at least one storage device located away from the processor.
- the processor may be a general-purpose processor, such as a Central Processing Unit (CPU), a Network Processor (NP), or the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
- An embodiment of the present application further provides another mobile robot, and the body of the mobile robot includes:
- An embodiment of the present application further provides a computer readable storage medium; the computer readable storage medium stores computer programs that, when executed by a processor, cause the processor to implement the following steps:
- the depth image data may be converted into the projection in the moving plane of the mobile robot, and a search is performed based on the two-dimensional projection data to obtain obstacle information; data similar to that of a two-dimensional laser is obtained, which overcomes the defect that conventional obstacle avoidance sensors cannot perform full stereo sensing and obstacle avoidance, and enables obstacles to be sensed over a large range and at multiple levels.
- the depth data is projected into the moving plane of the mobile robot; therefore, in application, the mature technology for obstacle avoidance control in the two-dimensional environment can be applied to the two-dimensional data obtained by projection in the solution of the present application, so that the solution can be widely used. Thus, in the solution of the present application, not only can the three-dimensional sensing information of the three-dimensional sensor be used, but quick route planning can also be performed based on the two-dimensional data.
- An embodiment of the present application further provides a computer program product comprising instructions which, when run on a computer, cause the computer to execute the steps of any of the methods for sensing obstacle information for a mobile robot.
- As for the embodiments of the mobile robot, the storage medium, and the computer program product, since they are similar to the embodiments of the methods, their description is relatively simple; for the related parts, reference may be made to the description of the embodiments of the methods.
Claims (13)
1. A method for sensing obstacle information for a mobile robot, the method comprising: obtaining (101) depth image data; converting (102) the depth image data into data in a coordinate system of the mobile robot; converting the data in the coordinate system of the mobile robot into a projection in a moving plane of the mobile robot to obtain two-dimensional data; detecting the two-dimensional data within a range of a travel path of the mobile robot; and obtaining the obstacle information based on the detected two-dimensional data; the method being characterized in that detecting two-dimensional data within the range of the travel path of the mobile robot comprises: establishing a search window for detecting obstacles within the range of the travel path of the mobile robot, and searching for two-dimensional data within a range of the search window; the method further comprising: determining (106), according to a position of the closest obstacle in the search window, whether the mobile robot needs to avoid the obstacle, and if so, triggering acquisition of a passable area in which the mobile robot can travel; wherein obtaining the passable area in which the mobile robot can travel comprises: taking a projection center of a body of the mobile robot on the carrying surface as a starting point to set at least one ray at equal angles, and taking a two-dimensional point on the ray that is closest to the starting point as a contour of the obstacle, to obtain a two-dimensional point of the contour of the obstacle; setting an expansion circumference for each two-dimensional point of the contour of the obstacle by taking the two-dimensional point as the center of a circle and an expansion distance as a radius; obtaining two tangent lines passing through the starting point and tangent to the expansion circumference; taking an arc sandwiched between the two tangent lines as an expansion boundary of the two-dimensional point of the contour of the obstacle, wherein a central angle of the sandwiched arc is less than or equal to 180° and the expansion distance is at least the maximum distance between the projection center of the body of the mobile robot on the carrying surface and the contour; obtaining the expansion boundaries of all two-dimensional points of the contour of the obstacle, taking an envelope of the expansion boundaries of all two-dimensional points as an expansion contour, and taking the expansion contour as a boundary to set the area on the side close to the mobile robot as the passable area and the area on the other side as an impassable area.
2. The method according to claim 1, wherein converting the depth image data into data in the coordinate system of the mobile robot comprises: for each pixel in the depth image data, obtaining three-dimensional point coordinates of the pixel in the coordinate system of the mobile robot through a camera extrinsic parameter, a depth value of the pixel, a camera intrinsic parameter, and a coordinate value of the pixel; and after converting all pixels in the depth image data into three-dimensional point coordinates in the coordinate system of the mobile robot, obtaining a three-dimensional point cloud composed of three-dimensional points in the coordinate system of the mobile robot.
3. The method according to claim 2, wherein obtaining three-dimensional point coordinates of the pixel in the coordinate system of the mobile robot through the camera extrinsic parameter, the depth value of the pixel, the camera intrinsic parameter, and the coordinate value of the pixel comprises: obtaining, according to an extrinsic parameter of a depth camera, a transformation relationship matrix from the coordinate system of the mobile robot to a coordinate system of the depth camera; and calculating a product of the transformation relationship matrix, the depth value of the pixel, an inverse matrix of a camera intrinsic parameter matrix, and a matrix formed by pixel coordinate values, to obtain the three-dimensional point coordinates of the pixel in the coordinate system of the mobile robot.
4. The method according to claim 2, wherein converting the data in the coordinate system of the mobile robot into the projection in the moving plane of the mobile robot to obtain the two-dimensional data comprises: for each three-dimensional point coordinate in the coordinate system of the mobile robot, discarding a coordinate value perpendicular to the moving plane of the mobile robot in the three-dimensional point coordinates to obtain a two-dimensional projection of the three-dimensional point in the moving plane of the mobile robot; alternatively, setting the coordinate value perpendicular to the moving plane of the mobile robot in the three-dimensional point coordinates to 0 to obtain the two-dimensional projection of the three-dimensional point in the moving plane of the mobile robot; and after converting all three-dimensional point coordinates in the coordinate system of the mobile robot into two-dimensional projections in the moving plane of the mobile robot, obtaining a two-dimensional point cloud composed of the two-dimensional projections.
5. The method according to claim 1, wherein establishing the search window for detecting obstacles within the range of the travel path of the mobile robot and searching for the two-dimensional data within the range of the search window comprises: establishing (105) the search window in an advance direction of the mobile robot, wherein a relative position of the search window and the mobile robot is fixed; and obtaining the obstacle information based on the detected two-dimensional data comprises: detecting whether there is a two-dimensional point cloud within the range of the search window, and if so, determining that an obstacle has been detected.
6. The method according to any one of claims 1 to 3, wherein converting the data in the coordinate system of the mobile robot into the projection in the moving plane of the mobile robot to obtain the two-dimensional data comprises: screening the converted data in the coordinate system of the mobile robot and retaining data corresponding to an obstacle that may be contacted during a movement process; and converting the retained data corresponding to the obstacle that may be contacted during the movement process into the projection in the moving plane of the mobile robot to obtain the two-dimensional data.
7. The method according to claim 6, wherein a carrying surface for the mobile robot is the ground, and retaining the data corresponding to the obstacle that may be contacted during the movement process comprises: retaining a three-dimensional point/three-dimensional points whose z value(s) in the three-dimensional point coordinates in the coordinate system of the mobile robot is/are greater than a first threshold and less than a second threshold.
8. The method according to claim 1, wherein setting the expansion circumference for each two-dimensional point of the contour of the obstacle by taking the two-dimensional point as the center of the circle and the expansion distance as the radius comprises: converting two-dimensional point coordinates of a current contour of the obstacle into polar coordinates; and determining an equation of the expansion circumference in a polar coordinate system according to the converted polar coordinates and the expansion distance; wherein obtaining the two tangent lines passing through the starting point and tangent to the expansion circumference comprises: determining (501), according to the equation of the expansion circumference in the polar coordinate system, a tangent line angle range for the expansion circumference of the two-dimensional point of the current contour of the obstacle; wherein taking the arc sandwiched between the two tangent lines as the expansion boundary of the two-dimensional point of the contour of the obstacle comprises: within the tangent line angle range, substituting a set current polar angle value into the equation of the expansion circumference in the polar coordinate system and solving radius vector values for an arc point; extracting the smaller value from the radius vector values to obtain polar coordinates of the arc point sandwiched between the tangent lines; setting a next polar angle value and returning to the step of substituting the set current polar angle value into the equation of the expansion circumference in the polar coordinate system, until the calculation of the arc coordinate points for an arc segment sandwiched between the tangent lines is completed; and forming the expansion boundary of the current two-dimensional point based on the arc points.
9. The method according to claim 8, wherein taking the projection center of the body of the mobile robot on the carrying surface as the starting point to set at least one ray at equal angles further comprises: starting from a boundary of a field of view and taking the projection center of the body of the mobile robot on the carrying surface as the starting point to set rays at equal angles, so as to form grids; and wherein forming the expansion boundary of the current two-dimensional point based on the arc points comprises: if there is no two-dimensional point of the expansion contour in a current grid, taking an arc coordinate point located in the grid among the calculated arc coordinate points as the two-dimensional point of the expansion contour for the grid; if there is a two-dimensional point of an existing expansion contour in the current grid, determining which of the arc coordinate point located in the grid among the calculated arc coordinate points and the two-dimensional point of the existing expansion contour is closer to the projection center of the body of the mobile robot on the carrying surface, and taking the point closer to the projection center as the two-dimensional point of the expansion contour for the grid; and taking the two-dimensional points of the expansion contours for all grids as the expansion boundary of the current two-dimensional point.
10. An apparatus for sensing obstacle information for a mobile robot, the apparatus comprising: a first data conversion module configured to convert depth image data into data in a coordinate system of the mobile robot; a second data conversion module configured to convert the data in the coordinate system of the mobile robot into a projection in a moving plane of the mobile robot to obtain two-dimensional data; and a sensing module configured to detect the two-dimensional data within a range of a travel path of the mobile robot and obtain the obstacle information based on the detected two-dimensional data; characterized in that the apparatus further comprises: a passable area acquisition module configured to take a projection center of a body of the mobile robot on a carrying surface as a starting point to set at least one ray at equal angles, and take a two-dimensional point on the at least one ray that is closest to the starting point as a contour of the obstacle to obtain two-dimensional point(s) of the contour of the obstacle; to set an expansion circumference for each two-dimensional point of the contour of the obstacle by taking the two-dimensional point as the center of a circle and an expansion distance as a radius; to obtain two tangent lines passing through the starting point and tangent to the expansion circumference; to take an arc sandwiched between the two tangent lines as an expansion boundary of the two-dimensional point of the contour of the obstacle, wherein a central angle of the sandwiched arc is less than or equal to 180° and the expansion distance is at least the maximum distance between the projection center of the body of the mobile robot on the carrying surface and the contour; and to take the expansion boundaries of all two-dimensional points of the contour of the obstacle and take an envelope of the expansion boundaries of all two-dimensional points as two-dimensional expansion contour points, so as to obtain a passable area in which the mobile robot can travel and an impassable area.
11. A mobile robot, wherein a body of the mobile robot comprises: a depth camera installed on the body of the mobile robot and configured to obtain depth image data; an obstacle sensing module configured to convert the depth image data into data in a coordinate system of the mobile robot; convert the data in the coordinate system of the mobile robot into a projection in a moving plane of the mobile robot to obtain two-dimensional data; detect the two-dimensional data within a range of a travel path of the mobile robot; and obtain obstacle information based on the detected two-dimensional data; a movement control module configured to form a movement instruction according to the obstacle information; and an actuator module configured to execute the movement instruction; characterized in that the obstacle sensing module further comprises: a passable area acquisition module configured to take a projection center of the body of the mobile robot on a carrying surface as a starting point to set at least one ray at equal angles, and take a two-dimensional point on the at least one ray that is closest to the starting point as a contour of the obstacle to obtain two-dimensional point(s) of the contour of the obstacle; to set an expansion circumference for each two-dimensional point of the contour of the obstacle by taking the two-dimensional point as the center of a circle and an expansion distance as a radius; to obtain two tangent lines passing through the starting point and tangent to the expansion circumference; to take an arc sandwiched between the two tangent lines as an expansion boundary of the two-dimensional point of the contour of the obstacle, wherein a central angle of the sandwiched arc is less than or equal to 180° and the expansion distance is at least the maximum distance between the projection center of the body of the mobile robot on the carrying surface and the contour; and to take the expansion boundaries of all two-dimensional points of the contour of the obstacle and take an envelope of the expansion boundaries of all two-dimensional points as two-dimensional expansion contour points, so as to obtain a passable area in which the mobile robot can travel and an impassable area.
12. A computer readable storage medium, wherein the computer readable storage medium stores computer programs which, when executed by a processor, cause the processor to implement the steps of any of the methods for sensing obstacle information for a mobile robot according to claims 1 to 9.
13. A computer program product comprising instructions, wherein the instructions, when run on a computer, cause the computer to execute the steps of any of the methods for sensing obstacle information for a mobile robot according to claims 1 to 9.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910892943.2A CN112631266A (zh) | 2019-09-20 | 2019-09-20 | 一种移动机器人感知障碍信息的方法、装置 |
PCT/CN2020/115818 WO2021052403A1 (zh) | 2019-09-20 | 2020-09-17 | 一种移动机器人感知障碍信息的方法、装置 |
Publications (3)
Publication Number | Publication Date |
---|---|
EP4033324A1 EP4033324A1 (de) | 2022-07-27 |
EP4033324A4 EP4033324A4 (de) | 2022-11-02 |
EP4033324B1 true EP4033324B1 (de) | 2024-05-01 |
Family
ID=74883386
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP20865409.5A Active EP4033324B1 (de) | 2019-09-20 | 2020-09-17 | Verfahren und vorrichtung zur erfassung von hindernisinformation für einen mobilen roboter |
Country Status (5)
Country | Link |
---|---|
EP (1) | EP4033324B1 (de) |
JP (1) | JP7314411B2 (de) |
KR (1) | KR20220066325A (de) |
CN (1) | CN112631266A (de) |
WO (1) | WO2021052403A1 (de) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113110457B (zh) * | 2021-04-19 | 2022-11-15 | 杭州视熵科技有限公司 | Autonomous coverage inspection method for an intelligent robot in a complex dynamic indoor environment |
CN113190003B (zh) * | 2021-05-06 | 2024-05-31 | 珠海格力智能装备有限公司 | AGV obstacle avoidance method and device, computer-readable storage medium, and processor |
CN113568003B (zh) * | 2021-07-26 | 2022-11-01 | 奥特酷智能科技(南京)有限公司 | Anti-collision early-warning system and method for airport ground support vehicles |
CN114265412B (zh) * | 2021-12-29 | 2023-10-24 | 深圳创维数字技术有限公司 | Vehicle control method, apparatus, device, and computer-readable storage medium |
CN114509064A (zh) * | 2022-02-11 | 2022-05-17 | 上海思岚科技有限公司 | Method, interface, and device for autonomously extensible sensor data processing |
CN114255252B (zh) * | 2022-02-28 | 2022-05-17 | 新石器慧通(北京)科技有限公司 | Obstacle contour acquisition method, apparatus, device, and computer-readable storage medium |
CN115406445B (zh) * | 2022-08-18 | 2024-05-17 | 四川华丰科技股份有限公司 | Multi-sensor data fusion processing method and robot obstacle avoidance method |
CN116009559B (zh) * | 2023-03-24 | 2023-06-13 | 齐鲁工业大学(山东省科学院) | Inspection robot and inspection method for the inner wall of a water pipeline |
CN116185044B (zh) * | 2023-04-26 | 2023-06-27 | 威康(深圳)智能有限公司 | Control method, apparatus, device, and system for a robot cluster system |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3994950B2 (ja) * | 2003-09-19 | 2007-10-24 | Sony Corporation | Environment recognition device and method, route planning device and method, and robot device |
JP5803043B2 (ja) * | 2010-05-20 | 2015-11-04 | iRobot Corporation | Mobile robot system and method for operating a mobile robot |
EP2776216B1 (de) * | 2011-11-11 | 2022-08-31 | iRobot Corporation | Robot device and control method for resuming operation after an interruption |
CN105629989B (zh) * | 2015-12-28 | 2018-04-17 | 电子科技大学 | Obstacle region division method based on the minimum enclosing circle and the maximum inscribed circle |
CN106441275A (zh) * | 2016-09-23 | 2017-02-22 | 深圳大学 | Method and device for updating a robot's planned path |
CN106599108B (zh) * | 2016-11-30 | 2019-12-31 | 浙江大学 | Multi-modal environment map construction method in a three-dimensional environment |
CN106969770B (zh) * | 2017-05-31 | 2021-04-06 | 深圳中智卫安机器人技术有限公司 | Robot, navigation method thereof, and computer-readable storage medium |
JP7103359B2 (ja) * | 2017-08-04 | 2022-07-20 | Sony Group Corporation | Control device, control method, program, and mobile body |
CN108733045B (zh) * | 2017-09-29 | 2022-01-04 | 北京猎户星空科技有限公司 | Robot, obstacle avoidance method therefor, and computer-readable storage medium |
CN108256430B (zh) * | 2017-12-20 | 2021-01-29 | 北京理工大学 | Obstacle information acquisition method and device, and robot |
CN109048926A (zh) * | 2018-10-24 | 2018-12-21 | 河北工业大学 | Stereo-vision-based intelligent robot obstacle avoidance system and method |
CN109407705A (zh) * | 2018-12-14 | 2019-03-01 | 厦门理工学院 | Method, device, equipment, and storage medium for obstacle avoidance by an unmanned aerial vehicle |
CN109947097B (zh) * | 2019-03-06 | 2021-11-02 | 东南大学 | Robot localization method based on vision and laser fusion, and navigation application |
CN110202577A (zh) * | 2019-06-15 | 2019-09-06 | 青岛中科智保科技有限公司 | Autonomous mobile robot implementing obstacle detection, and method thereof |
- 2019
  - 2019-09-20 CN CN201910892943.2A patent/CN112631266A/zh active Pending
- 2020
  - 2020-09-17 KR KR1020227012806A patent/KR20220066325A/ko unknown
  - 2020-09-17 WO PCT/CN2020/115818 patent/WO2021052403A1/zh unknown
  - 2020-09-17 JP JP2022517836A patent/JP7314411B2/ja active Active
  - 2020-09-17 EP EP20865409.5A patent/EP4033324B1/de active Active
Also Published As
Publication number | Publication date |
---|---|
EP4033324A1 (de) | 2022-07-27 |
JP2022548743A (ja) | 2022-11-21 |
CN112631266A (zh) | 2021-04-09 |
WO2021052403A1 (zh) | 2021-03-25 |
JP7314411B2 (ja) | 2023-07-25 |
KR20220066325A (ko) | 2022-05-24 |
EP4033324A4 (de) | 2022-11-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP4033324B1 (de) | Method and device for acquiring obstacle information for a mobile robot | |
US10712747B2 (en) | Data processing method, apparatus and terminal | |
CN108362295B (zh) | Vehicle path guidance apparatus and method | |
EP2209091B1 (de) | System and method for object motion detection based on multiple 3D warping, and vehicle equipped with such a system | |
EP3349143B1 (de) | Information processing device, information processing method, and computer-readable medium | |
EP4141737A1 (de) | Target detection method and apparatus | |
US11898855B2 (en) | Assistance control system that prioritizes route candidates based on unsuitable sections thereof | |
KR20180041176A (ko) | High-precision map data processing method, apparatus, storage medium, and device | |
US10872228B1 (en) | Three-dimensional object detection | |
JP2022542289A (ja) | Map creation method, map creation apparatus, electronic device, storage medium, and computer program product | |
KR102117313B1 (ko) | Gradient estimation device, gradient estimation method, computer program, and control system | |
CN111381594A (zh) | 3D-vision-based AGV spatial obstacle avoidance method and system | |
CN112346463B (zh) | Speed-sampling-based path planning method for an unmanned vehicle | |
EP3324359B1 (de) | Image processing device and image processing method | |
EP4124829B1 (de) | Map construction method, apparatus, device, and storage medium | |
KR20200046437A (ko) | Positioning method and device based on image and map data | |
Konrad et al. | Localization in digital maps for road course estimation using grid maps | |
US20220254062A1 (en) | Method, device and storage medium for road slope predicating | |
Kellner et al. | Road curb detection based on different elevation mapping techniques | |
US20230251097A1 (en) | Efficient map matching method for autonomous driving and apparatus thereof | |
JP7010535B2 (ja) | Information processing device | |
CN113551679A (zh) | Map information construction method and construction device in a teaching process | |
WO2021074660A1 (ja) | Object recognition method and object recognition device | |
US20220155455A1 (en) | Method and system for ground surface projection for autonomous driving | |
Zhou et al. | Positioning System Based on Lidar Fusion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
 | PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | Free format text: ORIGINAL CODE: 0009012 |
 | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
 | 17P | Request for examination filed | Effective date: 20220414 |
 | AK | Designated contracting states | Kind code of ref document: A1; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
 | A4 | Supplementary search report drawn up and despatched | Effective date: 20220929 |
 | RIC1 | Information provided on ipc code assigned before grant | Ipc: G05D 1/02 20200101AFI20220923BHEP |
 | RAP3 | Party data changed (applicant data changed or rights of an application transferred) | Owner name: HANGZHOU HIKROBOT CO., LTD. |
 | DAV | Request for validation of the european patent (deleted) | |
 | DAX | Request for extension of the european patent (deleted) | |
 | GRAP | Despatch of communication of intention to grant a patent | Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
 | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: GRANT OF PATENT IS INTENDED |
 | RIC1 | Information provided on ipc code assigned before grant | Ipc: G06V 30/18 20220101ALI20231117BHEP; Ipc: G06V 20/58 20220101ALI20231117BHEP; Ipc: G06V 20/10 20220101ALI20231117BHEP; Ipc: G01S 17/931 20200101ALI20231117BHEP; Ipc: G01S 17/89 20200101ALI20231117BHEP; Ipc: G05D 1/02 20200101AFI20231117BHEP |
 | INTG | Intention to grant announced | Effective date: 20231208 |
 | GRAS | Grant fee paid | Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
 | GRAA | (expected) grant | Free format text: ORIGINAL CODE: 0009210 |
 | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE PATENT HAS BEEN GRANTED |
 | AK | Designated contracting states | Kind code of ref document: B1; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
 | REG | Reference to a national code | Ref country code: GB; Ref legal event code: FG4D |
 | REG | Reference to a national code | Ref country code: CH; Ref legal event code: EP |
 | REG | Reference to a national code | Ref country code: IE; Ref legal event code: FG4D |
 | REG | Reference to a national code | Ref country code: DE; Ref legal event code: R096; Ref document number: 602020030406; Country of ref document: DE |