CN112161622B - Robot footprint planning method and device, readable storage medium and robot - Google Patents

Info

Publication number
CN112161622B
CN112161622B (application CN202010887219.3A)
Authority
CN
China
Prior art keywords
robot
target space
footprint
planning
walkable
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010887219.3A
Other languages
Chinese (zh)
Other versions
CN112161622A (en)
Inventor
熊友军
罗志平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ubtech Robotics Corp
Original Assignee
Ubtech Robotics Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ubtech Robotics Corp filed Critical Ubtech Robotics Corp
Priority to CN202010887219.3A priority Critical patent/CN112161622B/en
Publication of CN112161622A publication Critical patent/CN112161622A/en
Application granted granted Critical
Publication of CN112161622B publication Critical patent/CN112161622B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations

Abstract

The application belongs to the technical field of robots, and particularly relates to a robot footprint planning method and device, a computer-readable storage medium, and a robot. The method comprises: performing a global scan of a specified target space with a preset 3D vision sensor to obtain depth information of the target space; performing three-dimensional reconstruction of the target space according to the depth information to obtain a voxelized terrain environment surface map of the target space; dividing the terrain environment surface map into walkable regions; triangulating each walkable region to obtain a three-dimensional navigation grid of the target space; and planning the footprint of the robot according to the three-dimensional navigation grid and the biped joint kinematic constraints of the robot. Through the embodiments of the application, the geometric properties of complex terrain surfaces can be estimated accurately, and the biped joint kinematic constraints of the robot are taken into account during footprint planning, so the method is directly applicable to humanoid robots.

Description

Robot footprint planning method and device, readable storage medium and robot
Technical Field
The application belongs to the technical field of robots, and particularly relates to a method and a device for planning the footprint of a robot, a computer-readable storage medium, and a robot.
Background
In the prior art, robot footprint planning is generally performed by converting a three-dimensional point cloud into an octree map (OctoMap), as shown in fig. 1, and planning based on a path-finding algorithm. However, the spatial representation fidelity of OctoMap cannot meet the requirement of accurately reconstructing the surface geometric properties of a complex terrain map, so the accuracy is low; moreover, the method does not consider the motion characteristics of a humanoid robot, so it is difficult to apply directly to humanoid robots.
Disclosure of Invention
In view of this, embodiments of the present application provide a method and an apparatus for planning the footprint of a robot, a computer-readable storage medium, and a robot, so as to address the problems that existing robot footprint planning methods have low accuracy and are difficult to apply directly to humanoid robots.
A first aspect of an embodiment of the present application provides a robot footprint planning method, which may include:
globally scanning a specified target space by using a preset 3D visual sensor to obtain depth information of the target space;
Performing three-dimensional reconstruction of the target space according to the depth information to obtain a voxelized terrain environment surface map of the target space;
dividing the terrain environment surface map into walkable regions;
extracting the contour polygon of each walkable region; for each contour polygon, taking each of its vertices as a starting point and successively adding new points in the direction of the contour polygon's center point, the distance between adjacent points being the maximum stride of the robot; connecting the points to obtain decomposed polygons; and determining the circumscribed-circle centers of the decomposed polygons and connecting these centers to obtain a three-dimensional navigation grid of the target space;
and planning the footprint of the robot according to the three-dimensional navigation grid and the biped joint kinematic constraints of the robot.
Further, the three-dimensional reconstruction of the target space according to the depth information to obtain a voxelized terrain environment surface map of the target space includes:
carrying out voxelization processing on the target space to obtain a voxelized target space;
mapping the depth information into the voxelized target space and calculating a TSDF value for each voxel cube in the voxelized target space;
And constructing a voxel cube with the TSDF value of 0 as a voxelized terrain environment surface map of the target space.
Further, the dividing of the terrain environment surface map into walkable regions includes:
dividing the terrain environment surface map into preliminary walkable regions, where a preliminary walkable region is a region formed by continuous voxel cubes that satisfy a preset height error range;
and splitting preliminary walkable regions that contain holes and splitting walkable regions that are concave polygons, to obtain walkable regions that are hole-free convex polygons.
Further, the splitting of preliminary walkable regions that contain holes includes:
connecting adjacent holes within the preliminary walkable region that contains holes, and connecting each hole to the edge voxel cube closest to it, thereby splitting off new regions.
Further, the planning of the footprint of the robot according to the three-dimensional navigation grid and the biped joint kinematic constraints of the robot includes:
planning a path in the three-dimensional navigation grid by using a preset navigation grid algorithm to obtain a target path of the robot;
and performing, on the target path, footprint planning that satisfies the biped joint kinematic constraints.
Further, the biped joint kinematic constraint is a constraint relation among the foot-lift amplitude, the forward swing-angle amplitude, and the footprint distance of the biped joints.
A second aspect of an embodiment of the present application provides a robot footprint planning apparatus, which may include:
a visual scanning module, configured to perform a global scan of a specified target space using a preset 3D vision sensor to obtain depth information of the target space;
the three-dimensional reconstruction module is used for performing three-dimensional reconstruction on the target space according to the depth information to obtain a voxelized topographic environment surface map of the target space;
a walkable region dividing module, configured to divide the terrain environment surface map into walkable regions;
a navigation grid generating module, configured to extract the contour polygon of each walkable region; for each contour polygon, take each of its vertices as a starting point and successively add new points in the direction of the contour polygon's center point, the distance between adjacent points being the maximum stride of the robot; connect the points to obtain decomposed polygons; and determine the circumscribed-circle centers of the decomposed polygons and connect these centers to obtain a three-dimensional navigation grid of the target space;
and a footprint planning module, configured to plan the footprint of the robot according to the three-dimensional navigation grid and the biped joint kinematic constraints of the robot.
Further, the three-dimensional reconstruction module may include:
the voxelization processing unit is used for carrying out voxelization processing on the target space to obtain a voxelized target space;
a depth information mapping unit for mapping the depth information into the voxelized target space and calculating a TSDF value of each voxel cube in the voxelized target space;
and the terrain environment surface map construction unit is used for constructing a voxel cube with the TSDF value of 0 into a voxelized terrain environment surface map of the target space.
Further, the walkable region division module may include:
a preliminary walkable region dividing unit, configured to divide the terrain environment surface map into preliminary walkable regions, where a preliminary walkable region is a region formed by continuous voxel cubes that satisfy a preset height error range;
and a walkable region optimization processing unit, configured to split preliminary walkable regions that contain holes and split walkable regions that are concave polygons, to obtain walkable regions that are hole-free convex polygons.
Further, the walkable region optimization processing unit may include:
a hole processing subunit, configured to connect adjacent holes within the preliminary walkable region that contains holes and connect each hole to the edge voxel cube closest to it, thereby splitting off new regions.
Further, the footprint planning module may include:
the target path determining unit is used for planning a path in the three-dimensional navigation grid by using a preset navigation grid algorithm to obtain a target path of the robot;
and a footprint planning unit, configured to perform, on the target path, footprint planning that satisfies the biped joint kinematic constraints.
Further, the biped joint kinematic constraint is a constraint relation among the foot-lift amplitude, the forward swing-angle amplitude, and the footprint distance of the biped joints.
A third aspect of embodiments of the present application provides a computer-readable storage medium, which stores a computer program that, when executed by a processor, implements the steps of any one of the above-mentioned robot footprint planning methods.
A fourth aspect of the embodiments of the present application provides a robot, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of any one of the above-mentioned robot footprint planning methods when executing the computer program.
A fifth aspect of embodiments of the present application provides a computer program product, which, when run on a robot, causes the robot to perform the steps of any one of the above-mentioned methods for robot footprint planning.
Compared with the prior art, the embodiments of the present application have the following beneficial effects: a preset 3D vision sensor performs a global scan of a specified target space to obtain depth information of the target space; three-dimensional reconstruction of the target space is performed according to the depth information to obtain a voxelized terrain environment surface map of the target space; the terrain environment surface map is divided into walkable regions; each walkable region is triangulated to obtain a three-dimensional navigation grid of the target space; and the footprint of the robot is planned according to the three-dimensional navigation grid and the biped joint kinematic constraints of the robot. In the embodiments of the application, a 3D vision sensor is used for global dense mapping and a three-dimensional mesh map is generated, so the geometric properties of complex terrain surfaces can be estimated accurately; and because the biped joint kinematic constraints of the robot are taken into account during footprint planning, the method is directly applicable to humanoid robots.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of an OctoMap;
FIG. 2 is a flowchart illustrating an embodiment of a method for planning a footprint of a robot according to an embodiment of the present application;
FIG. 3 is a schematic illustration of a surface map of a voxelized terrain environment;
FIG. 4 is a schematic view of an AABB bounding box;
FIG. 5 is a schematic view of a preliminary walkable region;
FIG. 6 is a schematic diagram of a split of a preliminary walkable region in which voids exist;
FIG. 7 is a schematic diagram of extracting outline polygons;
FIG. 8 is a schematic diagram of a triangularization process;
FIG. 9 is a schematic diagram of a three-dimensional navigation grid;
FIG. 10 is a schematic diagram of the biped joint kinematic constraints of the robot;
FIG. 11 is a block diagram of an embodiment of a robot footprint planning apparatus according to the embodiments of the present application;
fig. 12 is a schematic block diagram of a robot according to an embodiment of the present application.
Detailed Description
In order to make the objects, features and advantages of the present application more apparent and understandable, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the embodiments described below are only a part of the embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
In addition, in the description of the present application, the terms "first," "second," "third," and the like are used solely to distinguish one from another, and are not to be construed as indicating or implying relative importance.
Referring to fig. 2, an embodiment of a method for planning footprint of a robot in an embodiment of the present application may include:
step S201, performing global scanning on a specified target space by using a preset 3D vision sensor to obtain depth information of the target space.
In the embodiment of the application, a 3D vision sensor may be configured on the humanoid robot in advance to perform a global scan of the specified target space; the 3D vision sensor may include, but is not limited to, an RGB-D camera or a TOF camera. To ensure a complete reconstruction of the terrain environment of the entire target space, the target space may be scanned back and forth several times.
Step S202, performing three-dimensional reconstruction of the target space according to the depth information to obtain a voxelized terrain environment surface map of the target space.
In the embodiment of the present application, any three-dimensional reconstruction algorithm in the prior art may be adopted to perform three-dimensional reconstruction of the target space, and these three-dimensional reconstruction algorithms may include, but are not limited to, dense three-dimensional reconstruction algorithms such as InfiniTAM v3 and KinectFusion.
Specifically, the target space may first be voxelized to obtain a voxelized target space; the depth information is then mapped into the voxelized target space, and a TSDF (Truncated Signed Distance Function) value is calculated for each voxel cube (voxel) in the voxelized target space; finally, only the voxel cubes belonging to the terrain environment surface are retained, i.e., the voxel cubes with a TSDF value of 0 are constructed as the voxelized terrain environment surface map of the target space, as shown in fig. 3.
Step S203, dividing the terrain environment surface map into walkable regions.
Specifically, the preliminary walkable regions may first be marked out in the terrain environment surface map. A preliminary walkable region is a region formed by continuous voxel cubes that satisfy a preset height error range. As shown in fig. 4, an AABB (axis-aligned) bounding box of the entire target space may be generated, with the Z-axis scale of the bounding box set to the voxel cube size (e.g., 1 cm) and a certain height error range (e.g., ±0.2 cm) reserved; continuous voxel cubes at the same scale level can then be grouped into the same region, i.e., the preliminary walkable regions shown in fig. 5.
Then, preliminary walkable regions that contain holes are split, and walkable regions that are concave polygons are split, to obtain hole-free, convex-polygon walkable regions that do not overlap one another.
As shown in fig. 6, for a preliminary walkable region that contains holes, adjacent holes are connected, and each hole is connected to the edge voxel cube closest to it, thereby splitting off new regions; here an edge voxel cube is a voxel cube on the edge of the preliminary walkable region. Specifically, as shown in the left figure, if an edge connected to a hole crosses the boundary of the region, a new connection vertex needs to be added at the intersection.
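The height-tolerance grouping that yields the preliminary walkable regions can be sketched as a flood fill over grid cells. This is an illustrative reconstruction under assumed data structures (a dict mapping cell coordinates to heights), not the patent's actual implementation:

```python
def group_by_height(cells, tol=0.002):
    """Group 4-connected grid cells whose heights differ by at most `tol`
    (here ±0.2 cm) into the same preliminary walkable region.

    cells: {(x, y): height} for the surface voxel cubes.
    Returns a list of regions, each a list of cell coordinates.
    """
    seen, regions = set(), []
    for start in cells:
        if start in seen:
            continue
        seen.add(start)
        region, stack = [], [start]
        while stack:
            cur = stack.pop()
            region.append(cur)
            cx, cy = cur
            for nb in ((cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)):
                if nb in cells and nb not in seen and abs(cells[nb] - cells[cur]) <= tol:
                    seen.add(nb)
                    stack.append(nb)
        regions.append(region)
    return regions

# Two flat patches at different heights yield two separate regions.
cells = {(0, 0): 0.100, (1, 0): 0.101, (3, 0): 0.300, (4, 0): 0.300}
regions = group_by_height(cells)
```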
Step S204, triangulating each walkable region to obtain a three-dimensional navigation grid of the target space.
Specifically, the contour polygon of each walkable region may first be extracted, converting the huge volume of voxelized data into very compact polygon data (vertices, edges, vertex height values, etc.). Any polygon-fitting method in the prior art may be adopted for contour extraction, which is not specifically limited in this embodiment of the present application. As shown in fig. 7, each contour polygon is a convex polygon, so every vertex can be reached in a straight line.
The embodiment of the application comprehensively considers the stride of the robot and the motion planning trajectory in each direction during triangulation. As shown in fig. 8, for each contour polygon, new points (marked with black dots in the figure) are successively added from each vertex in the direction of the contour polygon's center point (marked with a five-pointed star in the figure), the distance between adjacent points being the maximum stride of the robot; the points are connected to obtain the decomposed polygons; and the circumscribed-circle centers of the decomposed polygons (marked with triangles in the figure) are determined and connected to obtain the triangulation result, i.e., the three-dimensional navigation grid. Fig. 9 is a schematic view of a complete three-dimensional navigation grid of the target space.
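The two geometric ingredients of this decomposition, inserting stride-spaced points toward the polygon center and taking circumscribed-circle centers of the resulting triangles, can be sketched as follows. This is an illustrative reconstruction; the function names and the 2-D simplification (heights would be carried along separately) are assumptions:

```python
import math

def stride_points(vertex, center, stride):
    """Points from a contour-polygon vertex toward its center point,
    spaced by the robot's maximum stride (2-D for simplicity)."""
    vx, vy = vertex
    cx, cy = center
    dist = math.hypot(cx - vx, cy - vy)
    n = int(dist // stride)  # number of stride-spaced points that fit
    return [(vx + (cx - vx) * stride * k / dist,
             vy + (cy - vy) * stride * k / dist) for k in range(n + 1)]

def circumcenter(a, b, c):
    """Center of the circumscribed circle of triangle abc (a navigation node),
    via the standard perpendicular-bisector formula."""
    ax, ay = a
    bx, by = b
    cx, cy = c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax * ax + ay * ay) * (by - cy) + (bx * bx + by * by) * (cy - ay)
          + (cx * cx + cy * cy) * (ay - by)) / d
    uy = ((ax * ax + ay * ay) * (cx - bx) + (bx * bx + by * by) * (ax - cx)
          + (cx * cx + cy * cy) * (bx - ax)) / d
    return (ux, uy)

pts = stride_points((0.0, 0.0), (1.0, 0.0), 0.25)  # vertex toward center, 0.25 m stride
node = circumcenter((0.0, 0.0), (2.0, 0.0), (0.0, 2.0))
```

Connecting the circumcenters of adjacent decomposed polygons then gives the edges of the navigation grid.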
Step S205, planning the footprint of the robot according to the three-dimensional navigation grid and the biped joint kinematic constraints of the robot.
Specifically, a preset navigation grid algorithm may be used to perform path planning in the three-dimensional navigation grid to obtain a target path of the robot. In the embodiment of the present application, any navigation grid algorithm in the prior art may be used, and this is not specifically limited in the embodiment of the present application. In the process, the influence of the terrain and the obstacles is comprehensively considered, and a global optimal path moving from a starting point to a target point, namely the target path, is searched in the three-dimensional navigation grid.
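The patent leaves the navigation grid algorithm open ("any navigation grid algorithm in the prior art"); one common choice is A* over the grid's nodes, sketched here with straight-line distance as the heuristic. The toy node and edge layout is an assumption for illustration:

```python
import heapq
import math

def astar(nodes, edges, start, goal):
    """A* search over navigation-grid nodes (e.g. circumscribed-circle centers).

    nodes: {id: (x, y, z)}; edges: {id: list of neighbour ids}.
    Returns the node sequence of a shortest path, or None if unreachable.
    """
    def dist(a, b):
        return math.dist(nodes[a], nodes[b])

    frontier = [(dist(start, goal), 0.0, start, [start])]  # (f, g, node, path)
    best = {start: 0.0}
    while frontier:
        _, g, cur, path = heapq.heappop(frontier)
        if cur == goal:
            return path
        for nb in edges[cur]:
            ng = g + dist(cur, nb)
            if ng < best.get(nb, float("inf")):
                best[nb] = ng
                heapq.heappush(frontier, (ng + dist(nb, goal), ng, nb, path + [nb]))
    return None

# Two routes from A to C: via B (length 2) or via the detour D (longer).
nodes = {"A": (0, 0, 0), "B": (1, 0, 0), "C": (2, 0, 0), "D": (1, 2, 0)}
edges = {"A": ["B", "D"], "B": ["A", "C"], "C": ["B", "D"], "D": ["A", "C"]}
path = astar(nodes, edges, "A", "C")
```

In a real planner the edge costs would also fold in terrain and obstacle penalties, as the paragraph above notes.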
After the target path is obtained, footprint planning satisfying the biped joint kinematic constraints can be performed on the target path. As shown in fig. 10, the biped joint kinematic constraint is a constraint relation among the foot-lift amplitude, the forward swing-angle amplitude, and the footprint distance of the biped joints. The biped joint kinematic constraints are added to the planning optimization objective, which can be solved using conventional joint kinematic equations.
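A minimal sketch of such a constraint check follows. The simple box-constraint form and the limit values are assumptions for illustration only; the patent states that foot-lift amplitude, forward swing-angle amplitude and footprint distance are mutually constrained inside the optimization objective rather than checked independently:

```python
def footstep_feasible(step_length, foot_lift, max_stride=0.30, max_lift=0.10):
    """Accept a candidate footstep only if it respects (simplified) biped
    joint limits: step length within the maximum stride and foot-lift
    amplitude within the maximum lift. Limit values are illustrative."""
    return 0.0 < step_length <= max_stride and 0.0 <= foot_lift <= max_lift

feasible = footstep_feasible(0.20, 0.05)    # 20 cm step, 5 cm lift
infeasible = footstep_feasible(0.50, 0.05)  # step exceeds the stride limit
```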
In summary, in the embodiment of the present application, a preset 3D vision sensor performs a global scan of a specified target space to obtain depth information of the target space; three-dimensional reconstruction of the target space is performed according to the depth information to obtain a voxelized terrain environment surface map of the target space; the terrain environment surface map is divided into walkable regions; each walkable region is triangulated to obtain a three-dimensional navigation grid of the target space; and the footprint of the robot is planned according to the three-dimensional navigation grid and the biped joint kinematic constraints of the robot. In the embodiment of the application, a 3D vision sensor is used for global dense mapping and a three-dimensional mesh map is generated, so the geometric properties of complex terrain surfaces can be estimated accurately; and because the biped joint kinematic constraints of the robot are taken into account during footprint planning, the method is directly applicable to humanoid robots.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 11 is a structural diagram of an embodiment of a robot footprint planning apparatus according to the embodiment of the present application, which corresponds to the robot footprint planning method according to the above embodiment.
In this embodiment, a robot footprint planning apparatus may include:
the visual scanning module 1101 is configured to perform global scanning on a specified target space by using a preset 3D visual sensor, so as to obtain depth information of the target space;
a three-dimensional reconstruction module 1102, configured to perform three-dimensional reconstruction of the target space according to the depth information to obtain a voxelized surface map of the terrain environment of the target space;
a walkable region dividing module 1103, configured to divide the terrain environment surface map into walkable regions;
a navigation grid generation module 1104, configured to triangulate each walkable region to obtain a three-dimensional navigation grid of the target space;
and a footprint planning module 1105, configured to plan the footprint of the robot according to the three-dimensional navigation grid and the biped joint kinematic constraints of the robot.
Further, the three-dimensional reconstruction module may include:
the voxelization processing unit is used for carrying out voxelization processing on the target space to obtain a voxelized target space;
a depth information mapping unit for mapping the depth information into the voxelized target space and calculating a TSDF value of each voxel cube in the voxelized target space;
and the terrain environment surface map construction unit is used for constructing a voxel cube with the TSDF value of 0 into a voxelized terrain environment surface map of the target space.
Further, the walkable region division module may include:
a preliminary walkable region dividing unit, configured to divide the terrain environment surface map into preliminary walkable regions, where a preliminary walkable region is a region formed by continuous voxel cubes that satisfy a preset height error range;
and a walkable region optimization processing unit, configured to split preliminary walkable regions that contain holes and split walkable regions that are concave polygons, to obtain walkable regions that are hole-free convex polygons.
Further, the walkable region optimization processing unit may include:
a hole processing subunit, configured to connect adjacent holes within the preliminary walkable region that contains holes and connect each hole to the edge voxel cube closest to it, thereby splitting off new regions.
Further, the navigation grid generation module may include:
a contour polygon extraction unit for extracting a contour polygon of each walkable region;
the new point adding unit is used for sequentially adding new points in the direction towards the center point of each outline polygon by taking each vertex of each outline polygon as a starting point, and the distance between every two adjacent points is the maximum stride of the robot;
a decomposed polygon construction unit for interconnecting the points to obtain decomposed polygons;
and the navigation grid generating unit is used for determining the circle centers of the circumscribed circles of the decomposition polygons and connecting the circle centers to obtain the three-dimensional navigation grid.
Further, the footprint planning module may include:
the target path determining unit is used for planning a path in the three-dimensional navigation grid by using a preset navigation grid algorithm to obtain a target path of the robot;
and a footprint planning unit, configured to perform, on the target path, footprint planning that satisfies the biped joint kinematic constraints.
Further, the biped joint kinematic constraint is a constraint relation among the foot-lift amplitude, the forward swing-angle amplitude, and the footprint distance of the biped joints.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses, modules and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Fig. 12 shows a schematic block diagram of a robot provided in an embodiment of the present application, and only a part related to the embodiment of the present application is shown for convenience of explanation.
As shown in fig. 12, the robot 12 of this embodiment includes: a processor 120, a memory 121, and a computer program 122 stored in said memory 121 and executable on said processor 120. The processor 120, when executing the computer program 122, implements the steps in the above-mentioned embodiments of the robot footprint planning method, such as the steps S201 to S205 shown in fig. 2. Alternatively, the processor 120, when executing the computer program 122, implements the functions of each module/unit in each apparatus embodiment described above, for example, the functions of the modules 1101 to 1105 shown in fig. 11.
Illustratively, the computer program 122 may be partitioned into one or more modules/units that are stored in the memory 121 and executed by the processor 120 to accomplish the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 122 in the robot 12.
Those skilled in the art will appreciate that fig. 12 is merely an example of a robot 12 and does not constitute a limitation of robot 12 and may include more or fewer components than shown, or some components in combination, or different components, e.g., robot 12 may also include input output devices, network access devices, buses, etc.
The Processor 120 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The storage 121 may be an internal storage unit of the robot 12, such as a hard disk or a memory of the robot 12. The memory 121 may also be an external storage device of the robot 12, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, provided on the robot 12. Further, the memory 121 may also include both an internal storage unit and an external storage device of the robot 12. The memory 121 is used to store the computer program and other programs and data required by the robot 12. The memory 121 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/robot and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/robot are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow of the methods in the embodiments described above may be implemented by a computer program, which may be stored in a computer-readable storage medium and which, when executed by a processor, implements the steps of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, etc. The computer-readable storage medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable storage medium may be appropriately increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer-readable storage media may not include electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
The above-mentioned embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the embodiments of the present application, and they should be construed as being included in the present application.

Claims (9)

1. A robot footprint planning method, comprising:
globally scanning a specified target space by using a preset 3D visual sensor to obtain depth information of the target space;
performing three-dimensional reconstruction of the target space according to the depth information to obtain a voxelized terrain environment surface map of the target space;
respectively dividing each walkable area in the terrain environment surface map;
extracting outline polygons of all walkable areas; for each outline polygon, respectively taking each vertex of the outline polygon as a starting point, sequentially adding new points in the direction towards the central point of the outline polygon, wherein the distance between adjacent points is the maximum stride of the robot; connecting the points to obtain each decomposition polygon; determining the circle centers of the circumscribed circles of the decomposition polygons and connecting the circle centers to obtain a three-dimensional navigation grid of the target space;
and planning the footprint of the robot according to the three-dimensional navigation grid and the bipedal joint kinematics constraints of the robot.
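To make the geometry of claim 1 concrete, the following Python sketch illustrates two of its building blocks: seeding points from a contour-polygon vertex toward the centroid at maximum-stride spacing, and locating the circumcircle center of a triangular decomposition polygon. The function names (`inward_points`, `circumcenter`) and the restriction to triangles are illustrative assumptions, not part of the claims.

```python
import math

def circumcenter(a, b, c):
    # Circumcircle center of triangle abc in 2D, via the standard
    # perpendicular-bisector intersection formula.
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy)

def inward_points(vertex, centroid, max_stride):
    # Starting at a contour vertex, add points toward the centroid with
    # adjacent points spaced by the robot's maximum stride.
    vx, vy = vertex
    cx, cy = centroid
    dist = math.hypot(cx - vx, cy - vy)
    if dist == 0:
        return [vertex]
    pts = [vertex]
    for i in range(1, int(dist // max_stride) + 1):
        t = i * max_stride / dist
        pts.append((vx + (cx - vx) * t, vy + (cy - vy) * t))
    return pts
```

Connecting such points yields the decomposition polygons, and connecting their circumcircle centers yields the three-dimensional navigation grid described in the claim.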
2. The method according to claim 1, wherein the three-dimensional reconstruction of the target space according to the depth information to obtain a voxelized terrain environment surface map of the target space comprises:
carrying out voxelization processing on the target space to obtain a voxelized target space;
mapping the depth information into the voxelized target space and calculating a TSDF value for each voxel cube in the voxelized target space;
and constructing, from the voxel cubes whose TSDF value is 0, the voxelized terrain environment surface map of the target space.
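As an illustration of the TSDF step in claim 2, the sketch below computes a truncated signed distance for one voxel along a camera ray. The normalization to [-1, 1] and the parameter names follow a common TSDF formulation and are assumptions, not taken from the patent; voxels whose value is 0 lie on the reconstructed surface.

```python
def tsdf_value(voxel_depth, measured_depth, trunc):
    # Signed distance along the camera ray, normalized by the truncation
    # distance and clamped to [-1, 1]: positive for voxels in front of the
    # observed surface, negative for voxels behind it, 0 on the surface.
    sdf = (measured_depth - voxel_depth) / trunc
    return max(-1.0, min(1.0, sdf))
```

In a full pipeline each voxel would accumulate a weighted average of such values over all depth frames mapped into the voxelized target space.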
3. The robot footprint planning method according to claim 1, wherein said respectively dividing each walkable area in the terrain environment surface map comprises:
respectively dividing each preliminary walkable region in the terrain environment surface map, wherein a preliminary walkable region is a region formed by continuous voxel cubes that satisfy a preset height error range;
and splitting preliminary walkable regions in which holes exist, and splitting preliminary walkable regions that are concave polygons, so as to obtain walkable regions that are hole-free convex polygons.
4. The method for planning the footprint of a robot according to claim 3, wherein said splitting the preliminary walkable region in which the hole exists comprises:
and connecting adjacent holes in a preliminary walkable region in which holes exist, and connecting each hole with the edge voxel cube closest to it, thereby splitting out new regions.
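One way to realize the connection step of claim 4 is to cut the region along the shortest segment between a hole and the region boundary. The brute-force helper below finds such a segment between two point sets; it is an illustrative interpretation of the splitting step, not the patented procedure.

```python
import math

def closest_bridge(hole_pts, edge_pts):
    # Return the closest (hole point, edge point) pair and its distance.
    # Cutting the region along this segment places the hole on the new
    # boundary, so each resulting sub-region is hole-free.
    best, best_d = None, math.inf
    for h in hole_pts:
        for e in edge_pts:
            d = math.hypot(h[0] - e[0], h[1] - e[1])
            if d < best_d:
                best_d, best = d, (h, e)
    return best, best_d
```

For large regions a spatial index would replace the quadratic scan, but the principle is the same.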
5. The robot footprint planning method according to any one of claims 1 to 4, wherein said planning the footprint of the robot according to the three-dimensional navigation grid and the bipedal joint kinematics constraints of the robot comprises:
planning a path in the three-dimensional navigation grid by using a preset navigation grid algorithm to obtain a target path of the robot;
and performing, on the target path, footprint planning that satisfies the bipedal joint kinematics constraints.
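Claim 5 leaves the "preset navigation grid algorithm" open; A* over the graph of connected circumcircle centers is one common choice. The sketch below runs A* with a Euclidean heuristic over such a graph (the node and edge layouts, and the use of A* itself, are assumptions for illustration).

```python
import heapq
import math

def astar(nodes, edges, start, goal):
    # A* over a navigation graph: `nodes` maps id -> (x, y, z) circumcircle
    # center, `edges` maps id -> neighbour ids. Edge cost and heuristic are
    # both straight-line (Euclidean) distance, so the heuristic is admissible.
    def h(a, b):
        return math.dist(nodes[a], nodes[b])

    open_set = [(h(start, goal), 0.0, start, [start])]  # (f, g, node, path)
    best_g = {start: 0.0}
    while open_set:
        _, g, cur, path = heapq.heappop(open_set)
        if cur == goal:
            return path
        for nxt in edges[cur]:
            ng = g + h(cur, nxt)
            if ng < best_g.get(nxt, math.inf):
                best_g[nxt] = ng
                heapq.heappush(open_set,
                               (ng + h(nxt, goal), ng, nxt, path + [nxt]))
    return None  # goal unreachable
```

The returned node sequence is the target path on which per-step footprints are then planned subject to the kinematics constraints.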
6. The robot footprint planning method according to claim 5, wherein said bipedal joint kinematics constraints are constraint relationships among the elevation amplitude, the frontal elevation angle amplitude, and the footprint distance of the bipedal joints.
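The constraint relationship in claim 6 can be pictured as a per-step feasibility test: a candidate footprint is accepted only if its distance, elevation amplitude, and frontal elevation angle all fall within the joints' limits. The threshold values and function name below are purely illustrative, not taken from the patent.

```python
def step_feasible(footprint_dist, elevation, frontal_angle_deg,
                  max_stride=0.4, max_elevation=0.15, max_angle=20.0):
    # Accept a candidate step only when all three amplitudes are within
    # the (hypothetical) bipedal joint kinematics limits.
    return (footprint_dist <= max_stride
            and elevation <= max_elevation
            and abs(frontal_angle_deg) <= max_angle)
```

A footprint planner would apply such a test to every consecutive pair of footprints along the target path.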
7. A robot footprint planning apparatus, comprising:
a visual scanning module, used for globally scanning a specified target space with a preset 3D visual sensor to obtain the depth information of the target space;
the three-dimensional reconstruction module is used for performing three-dimensional reconstruction on the target space according to the depth information to obtain a voxelized terrain environment surface map of the target space;
the walkable area dividing module is used for respectively dividing each walkable area in the terrain environment surface map;
the navigation grid generation module is used for extracting the outline polygons of all the walkable areas; for each outline polygon, respectively taking each vertex of the outline polygon as a starting point, sequentially adding new points in the direction towards the central point of the outline polygon, wherein the distance between adjacent points is the maximum stride of the robot; connecting the points to obtain each decomposition polygon; determining the circle centers of the circumscribed circles of the decomposition polygons and connecting the circle centers to obtain a three-dimensional navigation grid of the target space;
and the footprint planning module is used for planning the footprint of the robot according to the three-dimensional navigation grid and the bipedal joint kinematics constraints of the robot.
8. A computer-readable storage medium, having a computer program stored thereon, which, when being executed by a processor, carries out the steps of the robot footprint planning method according to any one of claims 1 to 6.
9. A robot comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, carries out the steps of the robot footprint planning method according to any one of claims 1 to 6.
CN202010887219.3A 2020-08-28 2020-08-28 Robot footprint planning method and device, readable storage medium and robot Active CN112161622B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010887219.3A CN112161622B (en) 2020-08-28 2020-08-28 Robot footprint planning method and device, readable storage medium and robot

Publications (2)

Publication Number Publication Date
CN112161622A CN112161622A (en) 2021-01-01
CN112161622B true CN112161622B (en) 2022-07-19

Family

ID=73859380

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010887219.3A Active CN112161622B (en) 2020-08-28 2020-08-28 Robot footprint planning method and device, readable storage medium and robot

Country Status (1)

Country Link
CN (1) CN112161622B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112987724B (en) * 2021-02-04 2023-05-02 京东科技信息技术有限公司 Path optimization method, path optimization device, robot and storage medium
WO2022226720A1 (en) * 2021-04-26 2022-11-03 深圳市大疆创新科技有限公司 Path planning method, path planning device, and medium
CN113741414A (en) * 2021-06-08 2021-12-03 北京理工大学 Safe motion planning method and device based on mobile robot contour
CN115774452B (en) * 2023-02-13 2023-05-05 南京航空航天大学 Three-dimensional model surface coverage path planning method based on shape constraint

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107610212A (en) * 2017-07-25 2018-01-19 深圳大学 Scene reconstruction method, device, computer equipment and computer-readable storage medium
CN109387204A * 2018-09-26 2019-02-26 Northeastern University Simultaneous localization and mapping method for a mobile robot in an indoor dynamic environment
CN109432776A * 2018-09-21 2019-03-08 Suzhou Snail Digital Technology Co., Ltd. A free pathfinding method in space

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10062203B2 (en) * 2015-12-18 2018-08-28 Autodesk, Inc. Voxelization of mesh representations
US10046820B2 * 2016-06-27 2018-08-14 Massachusetts Institute of Technology Bipedal isotropic lattice locomoting explorer: robotic platform for locomotion and manipulation of discrete lattice structures and lightweight space structures
US10643333B2 (en) * 2018-04-12 2020-05-05 Veran Medical Technologies Apparatuses and methods for navigation in and Local segmentation extension of anatomical treelike structures

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on an Optimized Hierarchical A* Pathfinding Algorithm for Large 3D Scenes; Zhu Changlong et al.; 《软件导刊》 (Software Guide); 2019-05-31; Vol. 18, No. 05; pp. 13-16+2 *

Also Published As

Publication number Publication date
CN112161622A (en) 2021-01-01

Similar Documents

Publication Publication Date Title
CN112161622B (en) Robot footprint planning method and device, readable storage medium and robot
CN108401461B (en) Three-dimensional mapping method, device and system, cloud platform, electronic equipment and computer program product
CN109948189B (en) Material volume and weight measuring system for excavator bucket
CN109509210B (en) Obstacle tracking method and device
JP5430456B2 (en) Geometric feature extraction device, geometric feature extraction method, program, three-dimensional measurement device, object recognition device
Wei et al. A non-contact measurement method of ship block using image-based 3D reconstruction technology
US20170352163A1 (en) Method and system for determining cells traversed by a measuring or visualization axis
CN107833273B (en) Oblique photography three-dimensional model objectification application method based on three-dimensional simulation model
CN104809759A (en) Large-area unstructured three-dimensional scene modeling method based on small unmanned helicopter
Xu et al. Robust segmentation and localization of structural planes from photogrammetric point clouds in construction sites
Holenstein et al. Watertight surface reconstruction of caves from 3D laser data
CN107622530B (en) Efficient and robust triangulation network cutting method
F Laefer et al. Processing of terrestrial laser scanning point cloud data for computational modelling of building facades
Park et al. Reverse engineering with a structured light system
Park et al. Segmentation of Lidar data using multilevel cube code
Garrote et al. 3D point cloud downsampling for 2D indoor scene modelling in mobile robotics
Jarvis et al. 3D shape reconstruction of small bodies from sparse features
Guo et al. Improved marching tetrahedra algorithm based on hierarchical signed distance field and multi-scale depth map fusion for 3D reconstruction
Sappa et al. Incremental multiview integration of range images
Mortazavi et al. Voxel-based point cloud localization for smart spaces management
CN114677435A (en) Point cloud panoramic fusion element extraction method and system
Johnson et al. Probabilistic 3D Data Fusion for Adaptive Resolution Surface Generation.
CN116012613B (en) Method and system for measuring and calculating earthwork variation of strip mine based on laser point cloud
Teo Parametric reconstruction for complex building from LIDAR and vector maps using a divide-and-conquer strategy
JP2002366935A (en) Method and device for generating three-dimensional shape data and computer program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant