CN114029953B - Method for determining ground plane based on depth sensor, robot and robot system - Google Patents


Info

Publication number
CN114029953B
Authority
CN
China
Prior art keywords
plane
robot
ground plane
point cloud
coordinate system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111366540.8A
Other languages
Chinese (zh)
Other versions
CN114029953A (en)
Inventor
董济铭
何林
蔡龙生
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Keenlon Intelligent Technology Co Ltd
Original Assignee
Shanghai Keenlon Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Keenlon Intelligent Technology Co Ltd filed Critical Shanghai Keenlon Intelligent Technology Co Ltd
Priority to CN202111366540.8A
Publication of CN114029953A
Application granted
Publication of CN114029953B

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1674Programme controls characterised by safety, monitoring, diagnostic
    • B25J9/1676Avoiding collision or forbidden zones
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00Controls for manipulators
    • B25J13/08Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • B25J13/087Controls for manipulators by means of sensing devices, e.g. viewing or touching devices for sensing other physical parameters, e.g. electrical or chemical properties
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02Sensing devices
    • B25J19/021Optical sensing devices
    • B25J19/023Optical sensing devices including video camera means

Abstract

The invention provides a method for determining a ground plane based on a depth sensor, comprising the following steps: acquiring a three-dimensional point cloud of the surrounding environment based on the depth sensor; fitting the three-dimensional point cloud to a measurement plane; acquiring the deviation between the measurement plane and the ground plane at the previous moment; and determining the ground plane at the current moment based on the deviation. By adopting the technical scheme of the invention, the robot can cope more flexibly with scenes such as uneven ground, slopes, and steps, and the impact of robot shaking or tilting on the stability and accuracy of distance measurement is reduced, which benefits subsequent obstacle identification, path planning, obstacle avoidance operations, and the like.

Description

Method for determining ground plane based on depth sensor, robot and robot system
Technical Field
The present disclosure relates to the field of robot technology, and in particular, to a method for determining a ground plane based on a depth sensor, a robot, and a robot system.
Background
With the rapid development of robotics, robots are used in ever more settings, for example as welcome robots, delivery robots, educational robots, and biomimetic robots. In practical applications, a traveling robot inevitably encounters uneven ground, slopes, road sections with continuously changing gradients, steps, and the like. The robot judges such uneven ground conditions from sensor measurements and adopts an obstacle avoidance strategy accordingly. However, before path planning and obstacle avoidance, the robot must first determine the current reference ground plane, and only then determine obstacle locations, passable paths, and so on. Reducing errors caused by robot shaking or tilting, and identifying and determining the ground plane at each moment, is therefore a prerequisite for any further operation.
The statements in this background section merely represent techniques known to the public and do not necessarily represent the prior art.
Disclosure of Invention
In view of one or more of the existing drawbacks, the present invention is directed to a method for determining a ground plane based on a depth sensor, comprising:
acquiring a three-dimensional point cloud of a surrounding environment based on the depth sensor;
fitting the three-dimensional point cloud to a measurement plane;
acquiring the deviation between the measurement plane and the ground plane at the previous moment;
determining the ground plane at the current moment based on the deviation.
According to one aspect of the invention, the depth sensor is mounted on a robot, the method further comprising: establishing a robot coordinate system based on the ground plane of the previous moment and the standing direction of the robot; establishing a sensor coordinate system based on the installation height and the installation angle of the depth sensor; mapping the three-dimensional point cloud from the sensor coordinate system to the robot coordinate system.
According to one aspect of the invention, the step of fitting the three-dimensional point cloud to a measurement plane comprises: converting the three-dimensional point cloud mapped to the robot coordinate system into a two-dimensional point cloud, acquiring a first distance between any point in the two-dimensional point cloud and a straight line fitted to the other points, removing points whose first distance is greater than a first distance threshold from the two-dimensional point cloud, restoring the remaining points to three dimensions, and fitting them into a measurement plane.
According to one aspect of the invention, the step of fitting the three-dimensional point cloud to a measurement plane comprises: fitting any three points of the three-dimensional point cloud mapped to the robot coordinate system into a plane, acquiring second distances between the remaining points and that plane, removing points whose second distance is greater than a second distance threshold from the three-dimensional point cloud, and fitting the remaining points into a measurement plane.
According to an aspect of the invention, the step of acquiring the deviation between the measurement plane and the ground plane at the previous moment further comprises: acquiring the rotation angle between the measurement plane and the ground plane at the previous moment and/or the distance between the measurement plane and the origin of the robot coordinate system.
According to an aspect of the invention, the step of determining the ground plane at the current moment based on the deviation comprises: when the rotation angle is smaller than or equal to a first angle threshold and/or the distance between the measurement plane and the origin of the robot coordinate system is smaller than or equal to a third distance threshold, taking the measurement plane as the ground plane at the current moment.
According to an aspect of the invention, the step of determining the ground plane at the current moment based on the deviation comprises: adjusting the measurement plane, and taking the adjusted measurement plane as the ground plane at the current moment.
According to an aspect of the invention, adjusting the measurement plane comprises: rotating the measurement plane by a preset angle and/or translating it by a preset distance, and taking the rotated and/or translated measurement plane as the ground plane at the current moment.
According to an aspect of the invention, the step of determining the ground plane at the current moment based on the deviation comprises: when the rotation angle is greater than a first angle threshold and/or the distance between the measurement plane and the origin of the robot coordinate system is greater than a third distance threshold, taking the ground plane at the previous moment as the ground plane at the current moment.
According to one aspect of the invention, the method further comprises: when the rotation angle is greater than a first angle threshold and/or the distance between the measurement plane and the origin of the robot coordinate system is greater than a third distance threshold, reporting prompt information.
The invention also relates to a robot comprising:
at least one depth sensor configured to acquire a three-dimensional point cloud of the robot's surroundings; and
a processor coupled to the at least one depth sensor configured to implement the method as described above to determine a ground plane of the robot.
According to one aspect of the invention, the depth sensor is a depth camera, a binocular camera or a distance sensor.
The invention also relates to a robot system comprising:
at least one robot comprising at least one depth sensor configured to acquire a three-dimensional point cloud of an environment surrounding the robot; and
a dispatch server in communication with the at least one robot and configured to implement the method as described above to determine a ground plane of the at least one robot.
By adopting the technical scheme of the invention, the robot can cope more flexibly with scenes such as uneven ground, slopes, and steps, and the impact of robot shaking or tilting on the stability and accuracy of distance measurement is reduced, thereby facilitating subsequent obstacle identification, path planning, obstacle avoidance operations, and the like.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure, illustrate embodiments of the disclosure and, together with the description, serve to explain the disclosure; they do not limit the disclosure. In the drawings:
FIG. 1 illustrates a flow diagram of a method for determining a ground plane based on a depth sensor according to an embodiment of the present invention;
FIG. 2 shows a schematic view of a sensor coordinate system and a robot coordinate system of one embodiment of the present invention;
FIG. 3A shows a three-dimensional point cloud representation of a sensor coordinate system of one embodiment of the present invention;
FIG. 3B shows a schematic diagram of the three-dimensional point cloud of FIG. 3A transformed coordinate system and projected onto the YOZ plane;
FIG. 3C shows a schematic representation of the three-dimensional point cloud of FIG. 3A converted to a robot coordinate system;
FIG. 4 shows a schematic view of a robotic system of one embodiment of the invention.
Detailed Description
In the following, only certain exemplary embodiments are briefly described. As those skilled in the art will recognize, the described embodiments may be modified in various different ways, all without departing from the spirit or scope of the present invention. Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as restrictive.
In the description of the present invention, it is to be understood that terms such as "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", and "counterclockwise" indicate orientations and positional relationships based on those shown in the drawings, and are used only for convenience and simplicity of description; they do not indicate or imply that the device or element referred to must have a particular orientation or be constructed and operated in a particular orientation, and therefore should not be construed as limiting the present invention. Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of such features. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
In the description of the present invention, it should be noted that, unless otherwise explicitly stated or limited, the terms "mounted," "connected," and "coupled" are to be construed broadly, e.g., as a fixed connection, a removable connection, or an integral connection; as a mechanical or electrical connection, or as communication with each other; as a direct connection, or an indirect connection through an intervening medium, or an internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those of ordinary skill in the art according to the specific situation.
In the present invention, unless expressly stated or limited otherwise, a first feature being "on" or "under" a second feature may mean that the first and second features are in direct contact, or that they are not in direct contact but contact each other via a further feature between them. Moreover, the first feature being "on," "above," or "over" the second feature includes the first feature being directly above or obliquely above the second feature, or simply means that the first feature is at a higher level than the second feature. The first feature being "under," "beneath," or "below" the second feature includes the first feature being directly below or obliquely below the second feature, or simply means that the first feature is at a lower level than the second feature.
The following disclosure provides many different embodiments or examples for implementing different features of the invention. To simplify the disclosure of the present invention, the components and arrangements of specific examples are described below. Of course, they are merely examples and are not intended to limit the present invention. Furthermore, the present invention may repeat reference numerals and/or letters in the various examples, such repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed. In addition, the present invention provides examples of various specific processes and materials, but one of ordinary skill in the art may recognize applications of other processes and/or uses of other materials.
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
Fig. 1 shows a flowchart of a method for determining a ground plane based on a depth sensor according to an embodiment of the present invention. The method 10 includes steps S11-S14. Before describing the steps, the planes referred to below are first defined. The measurement plane is a plane fitted from the three-dimensional point cloud in order to determine the ground plane at the current moment; it may or may not be the correct ground plane, which requires further judgment. The ground plane at the previous moment is the ground plane on which the robot stands at the initial moment, or the ground plane determined at the previous moment by the method 10; it serves as the reference plane at the current moment for judging whether the measurement plane is correct. The ground plane at the current moment is determined by the method 10 and can serve as the reference plane for the next moment. With continued reference to fig. 1, the method 10 includes the following steps.
in step S11, a three-dimensional point cloud of the surrounding environment is acquired based on the depth sensor. The depth sensor may collect three-dimensional point cloud data of the robot's surroundings. The depth sensor is, for example, an RGB-D camera for capturing color images (RGB images) and infrared ranging to obtain depth images (D images), or a TOF camera, a structured light depth camera, a binocular camera, or a lidar, etc., and the invention is not limited to the type of the depth sensor. The three-dimensional point cloud data is a set of three-dimensional coordinates corresponding to sampling points on the ground or object surface in the surrounding environment. The range of collecting the three-dimensional point cloud depends on the installation position, the installation angle, the installation number of the depth sensors, the performance parameters of the depth sensors and the like.
According to a preferred embodiment of the invention, the collected three-dimensional point cloud data is filtered to remove noise points and reduce the amount of computation.
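As an illustration of this filtering step, the sketch below applies statistical outlier removal to a point cloud. The description does not prescribe a particular filter; the neighbour count and standard-deviation ratio here are illustrative parameters, not values from the patent.

```python
import numpy as np

def filter_noise(points, k=5, std_ratio=2.0):
    """Statistical outlier removal: a point is dropped when the mean
    distance to its k nearest neighbours is far above the average over
    the whole cloud. O(n^2) brute force for clarity; a production
    system would use a KD-tree instead of a full distance matrix."""
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    dists.sort(axis=1)
    mean_knn = dists[:, 1:k + 1].mean(axis=1)   # column 0 is the self-distance
    mu, sigma = mean_knn.mean(), mean_knn.std()
    return points[mean_knn <= mu + std_ratio * sigma]
```

A ground patch with one stray point far above it keeps the patch and drops the stray point.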
According to a preferred embodiment of the present invention, pixels of the depth sensor that are prone to error are recorded, and these pixels are marked or removed in subsequent calculations.
According to a preferred embodiment of the invention, a first distance range is determined based on one or more of the robot's current speed, traveling direction, and obstacle-crossing capability, and points in the three-dimensional point cloud beyond the first distance range are filtered out, further reducing the amount of computation, lowering system resource consumption, and increasing reaction speed. For example, if the field of view of the depth sensor is 5 meters and the robot is traveling at a slower speed, the first distance range may be set to 4 meters, and only the point cloud within that range is processed. The obstacle-crossing capability is the robot's ability to pass over an obstacle. Taking an indoor robot as an example, common obstacles include deep grooves (elevator gaps), thresholds (sliding-door tracks), sundries (small stones), steps, slopes, and accumulated water; when the robot encounters such an obstacle, it either passes directly over it or performs an obstacle avoidance operation, depending on the size of the obstacle and the robot's obstacle-crossing capability.
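The speed-dependent first distance range can be sketched as follows, mirroring the 5 m / 4 m example above. The speed cutoff value and the use of Euclidean distance from the sensor origin are assumptions for illustration.

```python
import numpy as np

def clip_to_range(points, speed, full_range=5.0, slow_range=4.0, slow_speed=0.5):
    """Limit the cloud to a first distance range chosen from the robot's
    current speed: a shorter range when traveling slowly, the sensor's
    full field of view otherwise. Distances are measured from the
    sensor origin; the 0.5 m/s cutoff is purely illustrative."""
    limit = slow_range if speed <= slow_speed else full_range
    d = np.linalg.norm(points, axis=1)
    return points[d <= limit]
```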
The three-dimensional point cloud is fitted to a measurement plane in step S12. For example, the depth sensor is mounted on a robot; information about the largest plane is extracted from the three-dimensional point cloud data acquired by the depth sensor, and a plane is fitted to the corresponding points to obtain the measurement plane. The method of fitting the measurement plane is described further below.
According to a preferred embodiment of the present invention, the method 10 further comprises: establishing a robot coordinate system based on the ground plane at the previous moment and the standing direction of the robot; establishing a sensor coordinate system based on the installation height and installation angle of the depth sensor; and mapping the three-dimensional point cloud from the sensor coordinate system to the robot coordinate system. For example, a depth sensor is installed on the front of the robot. Referring to fig. 2, a robot coordinate system is constructed from the ground plane at the previous moment and the standing direction of the robot. Specifically, the center point where the bottom of the robot contacts the ground plane is the origin O of the robot coordinate system, the standing direction of the robot is the Z axis, the direction straight ahead of the robot is the Y axis, and the X axis (perpendicular to the paper and pointing inward, not shown in fig. 2) is determined by the right-hand rule. At an initial moment, for example immediately after the robot is turned on or restarted, the ground plane on which the robot stands is manually designated as the ground plane at the previous moment. With continued reference to fig. 2, a sensor coordinate system is established based on the installation height and installation angle of the depth sensor, where the installation angle comprises a pitch angle, a yaw angle, and a roll angle. Specifically, the center of the depth sensor is the origin O' of the sensor coordinate system, the central axis of the depth sensor toward the front of the robot is the Y' axis, the direction perpendicular to the Y' axis in the pitch direction is the Z' axis, and the direction perpendicular to the Y' axis in the roll direction is the X' axis (perpendicular to the paper and pointing inward, not shown in fig. 2); the X', Y', and Z' axes form a right-handed coordinate system. When the yaw angle and roll angle of the depth sensor are both 0 degrees and the pitch angle is negative, the central axis of the depth sensor tilts downward, the field of view faces downward, and three-dimensional point cloud data of the area in front of the robot toward the ground can be collected. Robot and sensor coordinate systems constructed in this way make it convenient to derive the transformation between the two. Specifically, the translation matrix of the transformation is determined from the installation height of the depth sensor, that is, the distance between the origin O' of the sensor coordinate system and the origin O of the robot coordinate system; the rotation matrix of the transformation is determined from the pitch angle of the depth sensor, that is, the angle between the Y' axis of the sensor coordinate system and the Y axis of the robot coordinate system. Based on the transformation matrix, the three-dimensional point cloud is mapped from the sensor coordinate system to the robot coordinate system. The invention does not limit the manner of constructing each coordinate system or of converting between coordinate systems.
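The transformation just described can be sketched as follows for the yaw = roll = 0 case of fig. 2: a rotation about the X axis by the pitch angle, followed by a translation up by the mounting height along Z. The sign conventions of a real system may differ; this is an illustrative sketch, not the patent's prescribed implementation.

```python
import numpy as np

def sensor_to_robot(points_s, mount_height, pitch):
    """Map points from the sensor frame to the robot frame, assuming
    yaw = roll = 0. Angles in radians; a negative pitch tilts the
    sensor's field of view toward the ground."""
    c, s = np.cos(pitch), np.sin(pitch)
    R = np.array([[1.0, 0.0, 0.0],
                  [0.0,   c,  -s],
                  [0.0,   s,   c]])        # rotation about X by the pitch angle
    t = np.array([0.0, 0.0, mount_height]) # translation by the mounting height
    return points_s @ R.T + t
```

As a sanity check, a point 1 m straight ahead of a sensor mounted 1 m high and pitched straight down lands at the robot origin.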
The preferred embodiment above describes how to construct the coordinate systems and how to convert the three-dimensional point cloud data between them, yielding the three-dimensional point cloud mapped to the robot coordinate system. The step of fitting the three-dimensional point cloud to the measurement plane is described next.
According to a preferred embodiment of the present invention, the step of fitting the three-dimensional point cloud to the measurement plane in the method 10 comprises: converting the three-dimensional point cloud mapped to the robot coordinate system into a two-dimensional point cloud, acquiring a first distance between any point in the two-dimensional point cloud and a straight line fitted to the other points, removing points whose first distance is greater than a first distance threshold from the two-dimensional point cloud, restoring the remaining points to three dimensions, and fitting them into a measurement plane. Fig. 3A shows a schematic three-dimensional point cloud in the sensor coordinate system according to an embodiment of the present invention, corresponding to a hotel corridor scene. First, the three-dimensional point cloud is converted from the sensor coordinate system to the robot coordinate system; then all its points are gathered into a point set and projected onto the YOZ plane of the robot coordinate system (see fig. 3B), realizing the dimension-reduction conversion from a three-dimensional point cloud to a two-dimensional one. The points in the dashed box in the figure are noticeably higher than the others and correspond to an obstacle; in this step, points beyond the threshold range, i.e., the points corresponding to the obstacle, must be filtered out. Generally, the points that make up the measurement plane are the majority of the point cloud, and points deviating from the majority are filtered out in this step to obtain an accurate measurement plane. Optionally, for a point in the point set, for example point a, a straight line is fitted to the other points in the YOZ plane, and the first distance between point a and the fitted line is acquired. When the first distance is greater than the first distance threshold, the point is removed from the point set. All points in the point set are traversed in this way; the remaining points, whose first distances are less than or equal to the first distance threshold, are raised from the YOZ plane back to the robot coordinate system and fitted into a plane. This fitted plane is the measurement plane, which at this stage is only a possible ground plane. The first distance is a positive value in the Z-axis direction: when point a lies above the fitted line, the first distance is the height of point a minus the height of the line; when point a lies below the fitted line, the first distance is the height of the line minus the height of point a. The first distance threshold is related to the obstacle-crossing capability of the robot: the stronger the robot's obstacle-crossing capability, the larger the obstacles it can cross, and the larger the first distance threshold may be set. Preferably, it is set after training or surveying according to the application scenario.
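A minimal sketch of this dimension-reduction fit, assuming ordinary least squares for both the 2-D line and the final plane (the description does not prescribe a particular fitting method, and the residual test here is applied to all points at once rather than point-by-point):

```python
import numpy as np

def fit_plane_via_2d(points, first_dist_thresh):
    """Project the cloud (already in the robot frame) onto the YOZ plane,
    fit a line z = a*y + b, drop points whose height difference from the
    line exceeds the first distance threshold, then fit a plane
    z = p*x + q*y + r to the remaining 3-D points."""
    y, z = points[:, 1], points[:, 2]
    a, b = np.polyfit(y, z, 1)            # least-squares 2-D line fit
    first_dist = np.abs(z - (a * y + b))  # height offset from the line
    kept = points[first_dist <= first_dist_thresh]
    A = np.c_[kept[:, 0], kept[:, 1], np.ones(len(kept))]
    p, q, r = np.linalg.lstsq(A, kept[:, 2], rcond=None)[0]
    return (p, q, r), kept
```

With flat ground at z = 0 and a small obstacle at z = 0.5, the obstacle points are rejected and the fitted plane stays at z ≈ 0.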
According to a preferred embodiment of the present invention, the step of fitting the three-dimensional point cloud to the measurement plane in the method 10 comprises: fitting any three points of the three-dimensional point cloud mapped to the robot coordinate system into a plane, acquiring second distances between the remaining points and that plane, removing points whose second distance is greater than a second distance threshold, and fitting the remaining points into a measurement plane. Referring to fig. 3C, the three-dimensional point cloud is converted from the sensor coordinate system to the robot coordinate system, and all its points are gathered into a point set. Optionally, three points in the set, for example points b, c, and d, are fitted into a plane, and the second distances between the remaining points in the set and the fitted plane are acquired. Points whose second distance is greater than the second distance threshold are removed from the point set. In general, the points lying in the measurement plane are the majority of the point cloud, and points deviating from the majority are filtered out in this step. All points in the point set are traversed, and the remaining points, whose second distances are less than or equal to the second distance threshold, are fitted into a plane. This plane is the measurement plane at the current moment. The second distance is a positive value in the Z-axis direction: when a point lies above the fitted plane, the second distance is the height of the point minus the height of the plane; when the point lies below the fitted plane, the second distance is the height of the plane minus the height of the point. The height of the fitted plane is the maximum of the heights of points b, c, and d. The second distance threshold is related to the obstacle-crossing capability of the robot: the stronger the robot's obstacle-crossing capability, the larger the second distance threshold may be set. Preferably, it is set after training or surveying according to the application scenario.
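This three-point fitting with outlier rejection is essentially RANSAC plane fitting. A minimal sketch follows; the iteration count, the point-to-plane distance metric, and the SVD refit of the consensus set are illustrative choices, not details from the description.

```python
import numpy as np

def ransac_plane(points, second_dist_thresh, iters=200, seed=0):
    """Repeatedly fit a candidate plane through three random points,
    count points within the second distance threshold, keep the largest
    consensus set, and refit a plane to it by least squares.
    Returns a unit normal n and offset d with n·x + d = 0, plus inliers."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue                      # the three sampled points were collinear
        n /= norm
        dist = np.abs(points @ n - n @ p0)
        mask = dist <= second_dist_thresh
        if best is None or mask.sum() > best.sum():
            best = mask
    inliers = points[best]
    centroid = inliers.mean(axis=0)
    _, _, vt = np.linalg.svd(inliers - centroid)  # least-squares refit
    n = vt[-1]                                    # smallest singular vector
    return n, -n @ centroid, inliers
```

On a flat grid with a few raised obstacle points, the recovered normal is vertical and the obstacle points are excluded.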
The two preferred embodiments above describe how to find the points that may belong to the ground and fit them into a measurement plane. The remaining steps of the method 10 are described below.
In step S13, the deviation between the measurement plane and the ground plane at the previous moment is acquired. The measurement plane, which may be the ground plane, is obtained as described above; a deviation value is then obtained from the deviation between the measurement plane and the ground plane at the previous moment, and the ground plane at the current moment is determined based on that deviation value.
According to a preferred embodiment of the present invention, the step of acquiring the deviation between the measurement plane and the ground plane at the previous moment in the method 10 further comprises: acquiring the rotation angle between the measurement plane and the ground plane at the previous moment and/or the distance between the measurement plane and the origin of the robot coordinate system. The XOY plane of the robot coordinate system corresponds to the ground plane at the previous moment. The rotation angle is the pitch angle corresponding to the Z-axis direction, the heading angle corresponding to the Y-axis direction, or the roll angle corresponding to the X-axis direction of the robot coordinate system. Preferably, the rotation angle is the pitch angle corresponding to the Z-axis direction and/or the roll angle corresponding to the X-axis direction. The distance from the origin of the robot coordinate system is the distance in the Z-axis direction. For example, when the traveling robot encounters a shallow pit and tilts, the measurement plane obtained from the three-dimensional point cloud rotates correspondingly in the robot coordinate system, and the rotation angle between the measurement plane and the ground plane at the previous moment must be acquired. As another example, when the robot encounters a step while traveling, the measurement plane obtained from the three-dimensional point cloud may be the plane of the step, and the distance between the measurement plane and the ground plane, that is, the distance between the measurement plane and the origin of the robot coordinate system, must be acquired. Either the rotation angle or the distance alone may be acquired as needed, or both parameters may be acquired simultaneously, improving the stability and accuracy of the ranging result.
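Writing the measurement plane as n·x + d = 0 in the robot frame, the two deviation quantities can be sketched as below. Taking the previous ground plane to be the robot XOY plane follows the description; the plane parameterization itself is an assumption for illustration.

```python
import numpy as np

def plane_deviation(n, d):
    """Deviation of the measurement plane n·x + d = 0 from the previous
    ground plane (the robot XOY plane, reference normal (0, 0, 1)):
    the rotation angle between the two plane normals, and the distance
    from the robot origin O to the measurement plane."""
    n = np.asarray(n, float)
    norm = np.linalg.norm(n)
    angle = np.arccos(np.clip(abs(n[2]) / norm, 0.0, 1.0))
    distance = abs(d) / norm   # |n·0 + d| / ||n||
    return angle, distance
```

A plane parallel to the ground but 5 cm below the origin gives a zero angle and a 0.05 m distance; a plane tilted 45 degrees through the origin gives a pi/4 angle.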
Based on the deviation, the ground plane at the current moment is determined in step S14. The measurement plane obtained in the previous step may or may not be the correct ground plane, so it must be compared with a reference plane, namely the ground plane at the previous moment (the initial ground plane, or the ground plane determined at the previous moment according to the method 10), to determine a deviation value and thereby determine the ground plane at the current moment. For example, when the robot encounters a step while traveling, the measurement plane fitted from the three-dimensional point cloud is the plane of the step, which deviates considerably from the plane on which the robot currently stands. If the height of the step exceeds the obstacle-crossing capability of the robot so that the robot cannot climb it, the step plane cannot serve as the ground plane at the current moment.
According to a preferred embodiment of the present invention, the step of determining the ground plane at the current moment based on the deviation in the method 10 comprises: when the rotation angle is smaller than or equal to a first angle threshold and/or the distance from the origin of the robot coordinate system is smaller than or equal to a third distance threshold, taking the measurement plane as the ground plane at the current moment. The first angle threshold and the third distance threshold are related to the obstacle-crossing capability of the robot. If the rotation angle and/or the distance satisfy this condition, then even though the measurement plane is somewhat rotated in the robot coordinate system or lies at some distance from the current ground plane, the correct ground can be considered found, and the measurement plane can be used directly as the current ground plane.
According to a preferred embodiment of the present invention, the step of determining the ground plane at the current moment based on the deviation in the method 10 comprises: adjusting the measurement plane, and taking the adjusted measurement plane as the ground plane at the current moment. If the measurement plane deviates from the ground plane at the previous moment, it can be adjusted appropriately to reduce the error caused by shaking while passing over an obstacle, and the adjusted measurement plane is then used as the ground plane at the current moment. The amount of adjustment applied for a given deviation value may be set manually or trained from the operating scenario.
According to a preferred embodiment of the present invention, the adjusting of the measurement plane comprises: rotating the measurement plane by a preset angle and/or translating it by a preset distance, and taking the rotated and/or translated measurement plane as the ground plane at the current moment. If the deviation indicates that the measurement plane is rotated to some degree relative to the ground plane at the previous moment, the measurement plane can be rotated by a preset angle and the rotated plane used as the ground plane at the current moment. If the deviation indicates that the measurement plane is translated relative to the ground plane at the previous moment, the measurement plane can be translated by a preset distance and the translated plane used as the ground plane at the current moment. If the measurement plane is both rotated and translated relative to the ground plane at the previous moment, it can be rotated by a preset angle and translated by a preset distance, so as to reduce errors caused by changes in the terrain (such as changes of the ground surface). The preset angle and the preset distance are related to the deviation, and can be trained for the application scenario or set according to the obstacle-crossing capability of the robot. For example, if a thick carpet is laid in the robot's passage area, the sensor-data jitter caused by shaking at the edge of the carpet as the robot moves on or off it is likewise fixed, so a preset angle or preset distance can be obtained through repeated measurements to eliminate the influence of this change in the ground.
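A minimal sketch of such an adjustment, assuming the plane is stored as a unit normal n and offset d (n·x + d = 0); the rotation axis, default values, and function name are illustrative assumptions:

```python
import numpy as np

def adjust_plane(normal, d, preset_angle=0.0, axis=(1.0, 0.0, 0.0), preset_dist=0.0):
    """Rotate the plane normal by preset_angle (radians) about `axis` using
    Rodrigues' rotation formula, and translate the plane by preset_dist along
    its own normal; returns the adjusted (normal, d)."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    k = np.asarray(axis, dtype=float)
    k = k / np.linalg.norm(k)
    c, s = np.cos(preset_angle), np.sin(preset_angle)
    # Rodrigues' formula: rotate n about the unit axis k
    n_rot = n * c + np.cross(k, n) * s + k * (k @ n) * (1 - c)
    # Translating the plane by t along its normal maps d -> d - t,
    # since points x' = x + t*n then satisfy n.x' + (d - t) = 0.
    return n_rot, d - preset_dist
```

Rotating about the X-axis corresponds to a roll correction and rotating about the Y-axis to a pitch correction, matching the angle components discussed above.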
The foregoing can be illustrated by the preferred embodiments as follows: when the deviation between the measurement plane and the ground plane at the previous moment is within a certain range, it is determined that the correct ground plane has been found. For example, when at the previous moment the robot is on a small bump on the plane, in a small depression in the plane, or on a gentle slope rising or falling from the plane, the robot may shake or tilt; to improve the stability and accuracy of distance measurement and reduce errors, the measurement plane can be used as the ground plane at the current moment, either directly or after being adjusted.
According to a preferred embodiment of the present invention, the step of determining the ground plane at the current moment based on the deviation in the method 10 comprises: when the rotation angle is larger than the first angle threshold and/or the distance from the origin of the robot coordinate system is larger than the third distance threshold, taking the ground plane at the previous moment as the ground plane at the current moment.
According to a preferred embodiment of the present invention, the method 10 further comprises: when the rotation angle is larger than the first angle threshold and/or the distance from the origin of the robot coordinate system is larger than the third distance threshold, reporting a prompt message.
The above can be illustrated by the preferred embodiments as follows: when the deviation between the measurement plane and the ground plane at the previous moment exceeds a certain range, the ground plane is considered lost. For example, when the robot encounters an obstacle it cannot cross at the previous moment, such as a steep slope or a step, the measurement plane obtained corresponds to the plane of the slope or of the step. If the robot cannot continue to pass in this situation, the measurement plane cannot be used as the ground plane at the current moment; it must be discarded, and the ground plane at the previous moment is used as the ground plane at the current moment for obstacle-avoidance processing, error reporting, and the like. If the robot can still pass, for example when the plane is a slight slope on which the robot can travel smoothly for some distance, the measurement plane is taken as the ground plane at the current moment and the ground plane at the previous moment is no longer considered, because the robot has already left it.
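Combining the embodiments above, the acceptance test and the fallback to the previous ground plane can be sketched in a few lines. Requiring both conditions for acceptance is one possible reading of the "and/or" language in the claims, and all names and thresholds below are illustrative assumptions:

```python
def decide_ground_plane(measured, previous, rotation_angle, z_distance,
                        angle_threshold, dist_threshold):
    """Return (ground_plane, alert). Accept the measured plane when the
    deviation is within the robot's obstacle-crossing limits; otherwise keep
    the ground plane from the previous moment and raise an alert so that
    obstacle avoidance or error reporting can be triggered."""
    if rotation_angle <= angle_threshold and z_distance <= dist_threshold:
        return measured, False   # correct ground found (possibly after adjustment)
    return previous, True        # ground lost: keep last plane, report a prompt
```

The thresholds would be chosen from the robot's obstacle-crossing capability, as the description states; a variant that accepts on either condition alone would swap the `and` for `or`.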
In summary, the method 10 has been described above through steps S11 to S14, and the technical solution of the present application can be summarized as follows: a three-dimensional point cloud of the surrounding environment is acquired by a depth sensor; the three-dimensional point cloud data are processed to fit a measurement plane that may be the ground; and the deviation between the measurement plane and the ground plane at the previous moment is evaluated to determine the ground plane at the current moment or to report a prompt message. It will be understood by those skilled in the art that the numbering of steps S11-S14 does not limit the order in which the steps of the method 10 may be performed.
The invention also relates to a robot 20, with reference to fig. 4, comprising:
at least one depth sensor 21 configured to acquire a three-dimensional point cloud of the environment surrounding the robot 20; and
a processor 22, coupled to the at least one depth sensor 21, configured to implement the method 10 as described above, to determine a ground plane of the robot 20.
According to a preferred embodiment of the invention, the depth sensor 21 is a depth camera, a binocular camera or a distance sensor.
The invention also relates to a robot system 30 comprising:
at least one robot 20, said robot 20 comprising at least one depth sensor 21, said at least one depth sensor 21 being configured to acquire a three-dimensional point cloud of an environment surrounding said robot 20; and
a dispatch server 31 in communication with the at least one robot 20 and configured to implement the method 10 as described above to determine a ground plane of the at least one robot 20.
Finally, it should be noted that: although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that changes may be made in the embodiments and/or equivalents thereof without departing from the spirit and scope of the invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (11)

1. A method of determining a ground plane based on a depth sensor, comprising:
acquiring a three-dimensional point cloud of a surrounding environment based on the depth sensor;
extracting maximum plane information from the three-dimensional point cloud, and performing plane fitting on the corresponding points to obtain a measurement plane, wherein the step of obtaining the measurement plane comprises: converting the three-dimensional point cloud mapped to the robot coordinate system into a two-dimensional point cloud, acquiring a first distance between any point in the two-dimensional point cloud and a straight line fitted to the other points, removing points whose first distance is greater than a first distance threshold from the two-dimensional point cloud, converting the remaining points back to three dimensions, and fitting them into the measurement plane; or fitting any three points in the three-dimensional point cloud mapped to the robot coordinate system into a plane, acquiring second distances between the remaining points and the plane fitted to said any three points, removing points whose second distance is greater than a second distance threshold from the three-dimensional point cloud mapped to the robot coordinate system, and fitting the remaining points into the measurement plane;
acquiring the deviation between the measurement plane and the ground plane at the previous moment;
based on the deviation, a ground plane for the current time instant is determined.
2. The method of claim 1, the depth sensor being mounted on a robot, the method further comprising: establishing a robot coordinate system based on the ground plane of the previous moment and the standing direction of the robot; establishing a sensor coordinate system based on the installation height and the installation angle of the depth sensor; mapping the three-dimensional point cloud from the sensor coordinate system to the robot coordinate system.
3. The method of any of claims 1-2, wherein the step of acquiring the deviation between the measurement plane and the ground plane at the previous moment further comprises: acquiring the rotation angle between the measurement plane and the ground plane at the previous moment and/or the distance of the measurement plane from the origin of the robot coordinate system.
4. The method of claim 3, wherein the step of determining the ground plane at the current moment based on the deviation comprises: when the rotation angle is smaller than or equal to a first angle threshold and/or the distance of the measurement plane from the origin of the robot coordinate system is smaller than or equal to a third distance threshold, taking the measurement plane as the ground plane at the current moment.
5. The method of claim 1, wherein the step of determining the ground plane at the current moment based on the deviation comprises: adjusting the measurement plane, and taking the adjusted measurement plane as the ground plane at the current moment.
6. The method of claim 5, wherein the adjusting of the measurement plane comprises: rotating the measurement plane by a preset angle and/or translating it by a preset distance, and taking the rotated and/or translated measurement plane as the ground plane at the current moment.
7. The method of claim 3, wherein the step of determining the ground plane at the current moment based on the deviation comprises: when the rotation angle is larger than a first angle threshold and/or the distance of the measurement plane from the origin of the robot coordinate system is larger than a third distance threshold, taking the ground plane at the previous moment as the ground plane at the current moment.
8. The method of claim 3, further comprising: when the rotation angle is larger than a first angle threshold and/or the distance of the measurement plane from the origin of the robot coordinate system is larger than a third distance threshold, reporting a prompt message.
9. A robot, comprising:
at least one depth sensor configured to acquire a three-dimensional point cloud of the robot's surroundings; and
a processor, coupled with the at least one depth sensor, configured to implement the method of any of claims 1-8 to determine a ground plane of the robot.
10. The robot of claim 9, the depth sensor being a depth camera, a binocular camera, or a distance sensor.
11. A robotic system, comprising:
at least one robot comprising at least one depth sensor configured to acquire a three-dimensional point cloud of an environment surrounding the robot; and
a dispatch server in communication with the at least one robot and configured to implement the method of any of claims 1-8 to determine a ground plane of the at least one robot.
CN202111366540.8A 2021-11-18 2021-11-18 Method for determining ground plane based on depth sensor, robot and robot system Active CN114029953B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111366540.8A CN114029953B (en) 2021-11-18 2021-11-18 Method for determining ground plane based on depth sensor, robot and robot system

Publications (2)

Publication Number Publication Date
CN114029953A CN114029953A (en) 2022-02-11
CN114029953B true CN114029953B (en) 2022-12-20

Family

ID=80144835

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111366540.8A Active CN114029953B (en) 2021-11-18 2021-11-18 Method for determining ground plane based on depth sensor, robot and robot system

Country Status (1)

Country Link
CN (1) CN114029953B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116901085B (en) * 2023-09-01 2023-12-22 苏州立构机器人有限公司 Intelligent robot obstacle avoidance method and device, intelligent robot and readable storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008099652A1 (en) * 2007-02-13 2008-08-21 Toyota Jidosha Kabushiki Kaisha Environment map drawing method and mobile robot
WO2020006765A1 (en) * 2018-07-06 2020-01-09 深圳前海达闼云端智能科技有限公司 Ground detection method, related device, and computer readable storage medium
WO2020006764A1 (en) * 2018-07-06 2020-01-09 深圳前海达闼云端智能科技有限公司 Path detection method, related device, and computer readable storage medium
WO2020019130A1 (en) * 2018-07-23 2020-01-30 深圳市大疆创新科技有限公司 Motion estimation method and mobile device
WO2021051346A1 (en) * 2019-09-19 2021-03-25 深圳市大疆创新科技有限公司 Three-dimensional vehicle lane line determination method, device, and electronic apparatus
CN113155057A (en) * 2021-03-16 2021-07-23 广西大学 Line structured light plane calibration method using non-purpose-made target

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090196527A1 (en) * 2008-02-01 2009-08-06 Hiwin Mikrosystem Corp. Calibration method of image planar coordinate system for high-precision image measurement system

Also Published As

Publication number Publication date
CN114029953A (en) 2022-02-11

Similar Documents

Publication Publication Date Title
JP6759307B2 (en) Adaptive mapping using spatial aggregation of sensor data
US11579623B2 (en) Mobile robot system and method for generating map data using straight lines extracted from visual images
CN110928301B (en) Method, device and medium for detecting tiny obstacle
WO2021093240A1 (en) Method and system for camera-lidar calibration
US11361469B2 (en) Method and system for calibrating multiple cameras
KR101083394B1 (en) Apparatus and Method for Building and Updating a Map for Mobile Robot Localization
US10129521B2 (en) Depth sensing method and system for autonomous vehicles
US8467902B2 (en) Method and apparatus for estimating pose of mobile robot using particle filter
CN114035584B (en) Method for detecting obstacle by robot, robot and robot system
WO2015024407A1 (en) Power robot based binocular vision navigation system and method based on
EP2144131A2 (en) Apparatus and method of building map for mobile robot
US20220383484A1 (en) Tunnel defect detecting method and system using unmanned aerial vehicle
Zou et al. Real-time full-stack traffic scene perception for autonomous driving with roadside cameras
CN114029953B (en) Method for determining ground plane based on depth sensor, robot and robot system
JP4539388B2 (en) Obstacle detection device
JP5819257B2 (en) Moving object position estimation method and moving object
KR20180098945A (en) Method and apparatus for measuring speed of vehicle by using fixed single camera
CN112720408B (en) Visual navigation control method for all-terrain robot
CN114355894A (en) Data processing method, robot and robot system
CN113910265B (en) Intelligent inspection method and system for inspection robot
Fourre et al. Autonomous rgbd-based industrial staircase localization from tracked robots
KR20050011053A (en) The methode of self-localization for cleaning robot using ceiling image
KR102555708B1 (en) Method of position recognition and driving control for an autonomous mobile robot that tracks tile grid pattern
JP2018014064A (en) Position measuring system of indoor self-propelled robot
CN117152210A (en) Image dynamic tracking method and related device based on dynamic observation field angle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant