CN111993425B - Obstacle avoidance method, device, mechanical arm and storage medium - Google Patents

Obstacle avoidance method, device, mechanical arm and storage medium

Info

Publication number
CN111993425B
CN111993425B (application number CN202010862102.XA)
Authority
CN
China
Prior art keywords
sdf
mechanical arm
dimensional
visual environment
current visual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010862102.XA
Other languages
Chinese (zh)
Other versions
CN111993425A (en)
Inventor
罗志平
熊友军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ubtech Robotics Corp
Original Assignee
Ubtech Robotics Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ubtech Robotics Corp filed Critical Ubtech Robotics Corp
Priority to CN202010862102.XA priority Critical patent/CN111993425B/en
Publication of CN111993425A publication Critical patent/CN111993425A/en
Application granted granted Critical
Publication of CN111993425B publication Critical patent/CN111993425B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B25J 9/16: Programme controls (programme-controlled manipulators)
    • B25J 9/1674: Programme controls characterised by safety, monitoring, diagnostic
    • B25J 9/1676: Avoiding collision or forbidden zones
    • G06F 17/16: Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 3/06: Topological mapping of higher dimensional structures onto lower dimensional surfaces
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 2207/10028: Range image; depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Optimization (AREA)
  • Mechanical Engineering (AREA)
  • Robotics (AREA)
  • Software Systems (AREA)
  • Algebra (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides an obstacle avoidance method and device, a mechanical arm and a storage medium, relating to the technical field of control. Three-dimensional reconstruction is performed on the current visual environment obtained by the mechanical arm to obtain three-dimensional point cloud data corresponding to the current visual environment and the current pose parameters of the mechanical arm; a coordinate transformation matrix is generated according to the obtained resolution parameter and the current pose parameters; the three-dimensional point cloud data are then converted into corresponding two-dimensional space data using the coordinate transformation matrix, so that the mechanical arm avoids obstacles using the two-dimensional space data. In this way, the amount of data used when the mechanical arm avoids obstacles can be reduced, the obstacle avoidance efficiency improved, the amount of data stored during obstacle avoidance reduced, and the processing resources consumed during obstacle avoidance reduced.

Description

Obstacle avoidance method, device, mechanical arm and storage medium
Technical Field
The application relates to the technical field of control, in particular to an obstacle avoidance method, an obstacle avoidance device, a mechanical arm and a storage medium.
Background
The mechanical arm is widely applied in scenarios such as industrial robots and home service robots. Grasping control of the mechanical arm needs to consider obstacle avoidance with respect to the surrounding environment, both when moving toward a target object and when placing a grasped object at its destination.
However, with some existing obstacle avoidance schemes, the amount of data the mechanical arm must process is large, the obstacle avoidance efficiency is low, and considerable processing resources are consumed.
Disclosure of Invention
The application aims to provide an obstacle avoidance method, an obstacle avoidance device, a mechanical arm and a storage medium, which can improve the obstacle avoidance efficiency of the mechanical arm and reduce consumed processing resources.
In order to achieve the purpose, the technical scheme adopted by the application is as follows:
in a first aspect, the application provides an obstacle avoidance method, which is applied to a mechanical arm; the method comprises the following steps:
performing three-dimensional reconstruction on the current visual environment obtained by the mechanical arm to obtain three-dimensional point cloud data corresponding to the current visual environment and current pose parameters of the mechanical arm;
generating a coordinate transformation matrix according to the obtained resolution parameters and the current pose parameters; wherein the coordinate transformation matrix is used for transforming data in a three-dimensional space to a two-dimensional space;
and converting the three-dimensional point cloud data into corresponding two-dimensional space data by using the coordinate transformation matrix so that the mechanical arm avoids obstacles by using the two-dimensional space data.
In a second aspect, the application provides an obstacle avoidance device, which is applied to a mechanical arm; the device comprises:
the processing module is used for performing three-dimensional reconstruction on the current visual environment obtained by the mechanical arm to obtain three-dimensional point cloud data corresponding to the current visual environment and current pose parameters of the mechanical arm;
the processing module is further used for generating a coordinate transformation matrix according to the obtained resolution parameters and the current pose parameters; wherein the coordinate transformation matrix is used for transforming data in a three-dimensional space to a two-dimensional space;
and the conversion module is used for converting the three-dimensional point cloud data into corresponding two-dimensional space data by using the coordinate transformation matrix so that the mechanical arm can avoid obstacles by using the two-dimensional space data.
In a third aspect, the present application provides a robotic arm comprising a memory for storing one or more programs; a processor; the one or more programs, when executed by the processor, implement the above-described obstacle avoidance method.
In a fourth aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the above-mentioned obstacle avoidance method.
According to the obstacle avoidance method and device, the mechanical arm and the storage medium provided by the application, three-dimensional reconstruction is performed on the current visual environment obtained by the mechanical arm to obtain three-dimensional point cloud data corresponding to the current visual environment and the current pose parameters of the mechanical arm; a coordinate transformation matrix is generated according to the obtained resolution parameter and the current pose parameters; the three-dimensional point cloud data are then converted into corresponding two-dimensional space data using the coordinate transformation matrix, so that the mechanical arm avoids obstacles using the two-dimensional space data. In this way, the amount of data used when the mechanical arm avoids obstacles can be reduced, the obstacle avoidance efficiency improved, the amount of data stored during obstacle avoidance reduced, and the processing resources consumed during obstacle avoidance reduced.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to explain the technical solutions of the present application more clearly, the drawings needed for the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting its scope; those skilled in the art can derive other related drawings from these drawings without inventive effort.
Fig. 1 shows a schematic diagram of an obstacle avoidance scheme based on a 3D vision technology;
FIG. 2 illustrates a schematic block diagram of a robotic arm provided herein;
fig. 3 shows a schematic flow chart of an obstacle avoidance method provided by the present application;
FIG. 4 shows a schematic diagram of a three-dimensional grid;
FIG. 5 illustrates a schematic of two-dimensional image data for a robotic arm for obstacle avoidance;
FIG. 6 is a schematic diagram of a trajectory of a robotic arm in obstacle avoidance;
FIG. 7 shows a schematic diagram of normal line finding;
FIG. 8 shows a schematic of trilinear interpolation;
fig. 9 shows a schematic structural block diagram of an obstacle avoidance device provided by the present application.
In the figure: 100-a robotic arm; 101-a memory; 102-a processor; 103-a communication interface; 300-obstacle avoidance device; 301-a processing module; 302-conversion module.
Detailed Description
To make the purpose, technical solutions and advantages of the present application clearer, the technical solutions in the present application will be clearly and completely described below with reference to the accompanying drawings in some embodiments of the present application, and it is obvious that the described embodiments are some, but not all embodiments of the present application. The components of the present application, as generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, as presented in the figures, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments obtained by a person of ordinary skill in the art based on a part of the embodiments in the present application without any creative effort belong to the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between those entities or actions. Also, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to it. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises it.
Some embodiments of the present application will be described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
In scenarios such as the grasping control described above, some strategies perform grasping with a scheme based on 3D vision technology.
A scheme based on 3D vision technology offers high reliability and high grasping accuracy: the pose of the object to be grasped is identified and calculated, and that pose is used to determine the terminal pose of the mechanical arm at the moment of grasping; the angle values of all joints of the mechanical arm at that terminal pose are solved inversely using the inverse kinematics of the mechanical arm; the motion trajectory of the mechanical arm is then planned in combination with the detected environmental obstacles, and a servo system controls the mechanical arm to move to the position of the object and complete the grasp.
As described above, while the mechanical arm moves toward the object to be grasped, its motion trajectory must be planned in combination with the detected environmental obstacles; that is, the grasping control of the mechanical arm must consider obstacle avoidance with respect to the surrounding environment.
In the 3D vision-based implementation described above, one obstacle avoidance strategy is as follows: obtain the distance from the mechanical arm to the surrounding environment surfaces through laser or 3D vision technology, construct a three-dimensional point cloud of the surroundings of the mechanical arm, and then convert the three-dimensional point cloud into an OctoMap-type obstacle avoidance map as shown in fig. 1.
In an obstacle avoidance map such as that shown in fig. 1, each cube indicates that an obstacle exists at the corresponding location in the environment. In addition, the height of the obstacle corresponding to each cube can be indicated with color information; for example, red can represent taller obstacles, with a deeper red indicating a greater height above the ground. The mechanical arm avoids obstacles by avoiding all the cubes in the OctoMap.
However, when the mechanical arm generates an obstacle avoidance map such as that of fig. 1, the OctoMap stores three-dimensional cube data, so the amount of data to be processed is large and the obstacle avoidance efficiency is low; moreover, because the processed data volume is large, the amount of data the mechanical arm must store during obstacle avoidance is also large, occupying more memory and consuming more processing resources.
Therefore, based on the defects of the above scheme, the present application provides a possible implementation as follows: three-dimensional reconstruction is performed on the current visual environment obtained by the mechanical arm to obtain three-dimensional point cloud data corresponding to the current visual environment and the current pose parameters of the mechanical arm; a coordinate transformation matrix is generated according to the obtained resolution parameter and the current pose parameters; the three-dimensional point cloud data are then converted into corresponding two-dimensional space data using the coordinate transformation matrix, so that the mechanical arm avoids obstacles using the two-dimensional space data. In this way, the obstacle avoidance efficiency of the mechanical arm can be improved and the processing resources consumed reduced.
Referring to fig. 2, fig. 2 shows a schematic block diagram of a robot arm 100 provided in the present application; in some embodiments, the robotic arm 100 may include a memory 101, a processor 102, and a communication interface 103, the memory 101, the processor 102, and the communication interface 103 being electrically connected to one another, directly or indirectly, to enable the transfer or interaction of data. For example, the components may be electrically connected to each other via one or more communication buses or signal lines.
The memory 101 may be configured to store software programs and modules, such as program instructions/modules corresponding to the obstacle avoidance apparatus provided in the present application, and the processor 102 executes various functional applications and data processing by executing the software programs and modules stored in the memory 101, so as to execute the steps of the obstacle avoidance method provided in the present application. The communication interface 103 may be used for communicating signaling or data with other node devices.
The memory 101 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like.
The processor 102 may be an integrated circuit chip with signal processing capabilities. The processor 102 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
It will be appreciated that the configuration shown in figure 2 is merely illustrative and that the robotic arm 100 may include more or fewer components than shown in figure 2 or may have a different configuration than shown in figure 2. The components shown in fig. 2 may be implemented in hardware, software, or a combination thereof.
The obstacle avoidance method provided by the present application is exemplarily described below with the robot arm 100 shown in fig. 2 as an exemplary implementation body.
Referring to fig. 3, fig. 3 shows a schematic flow chart of the obstacle avoidance method provided by the present application, and in some embodiments, the obstacle avoidance method provided by the present application may include the following steps:
step 201, performing three-dimensional reconstruction on the current visual environment obtained by the mechanical arm to obtain three-dimensional point cloud data corresponding to the current visual environment and current pose parameters of the mechanical arm.
And 203, generating a coordinate transformation matrix according to the obtained resolution parameters and the current pose parameters.
Step 205, converting the three-dimensional point cloud data into corresponding two-dimensional space data by using a coordinate transformation matrix, so that the mechanical arm avoids obstacles by using the two-dimensional space data.
In some embodiments, for obstacle avoidance, the current environment of the mechanical arm may be captured by equipment mounted on the arm, such as a 3D vision camera; the mechanical arm can then perform three-dimensional reconstruction of the obtained current visual environment based on, for example, a scheme of cube depth-map fusion, and obtain the three-dimensional point cloud data and current pose parameters corresponding to the current visual environment.
When performing three-dimensional reconstruction, the mechanical arm can define the surrounding spatial environment in which it is located as a spatial cube and divide that environment into a large number of smaller cubes of preset size (such as 1 cm). Combining the inter-frame poses of the 3D vision camera, each frame of its depth map is fused into the voxelized cubes using a pose-mapping scheme, yielding dense three-dimensional point cloud data of the surface of the environment where the mechanical arm is located. In addition, in some embodiments, the mechanical arm may also mesh the obtained three-dimensional point cloud data to obtain a three-dimensional mesh as shown in fig. 4.
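To make the fusion step concrete, the following is a minimal numpy sketch of fusing one depth frame into a voxelized SDF grid. All names (fuse_depth_frame, T_wc, trunc) and the truncated-SDF update rule are illustrative assumptions, not the patent's exact scheme:

```python
import numpy as np

def fuse_depth_frame(sdf, weight, origin, voxel_size, depth, K, T_wc, trunc=0.04):
    """Fuse one depth frame into a voxelized SDF grid by projecting every voxel
    centre into the image (a sketch under assumed conventions).
    sdf, weight: (nx, ny, nz) float arrays updated in place; origin: world
    position of voxel (0, 0, 0); K: 3x3 intrinsics; T_wc: 4x4 world-to-camera."""
    nx, ny, nz = sdf.shape
    ii, jj, kk = np.meshgrid(np.arange(nx), np.arange(ny), np.arange(nz), indexing="ij")
    pts_w = origin + voxel_size * np.stack([ii, jj, kk], axis=-1).reshape(-1, 3)
    pts_c = (T_wc[:3, :3] @ pts_w.T + T_wc[:3, 3:4]).T   # camera-frame coordinates
    z = pts_c[:, 2]
    front = z > 1e-6                                      # voxels in front of the camera
    u = np.round(K[0, 0] * pts_c[front, 0] / z[front] + K[0, 2]).astype(int)
    v = np.round(K[1, 1] * pts_c[front, 1] / z[front] + K[1, 2]).astype(int)
    h, w = depth.shape
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    idx = np.flatnonzero(front)[inside]                   # flat voxel indices with a depth sample
    measured = depth[v[inside], u[inside]]
    tsdf = np.clip((measured - z[idx]) / trunc, -1.0, 1.0)  # truncated signed distance
    keep = measured > 0                                     # ignore missing depth pixels
    idx, tsdf = idx[keep], tsdf[keep]
    f_sdf, f_w = sdf.reshape(-1), weight.reshape(-1)        # views onto the grids
    f_sdf[idx] = (f_sdf[idx] * f_w[idx] + tsdf) / (f_w[idx] + 1.0)
    f_w[idx] += 1.0
```

Repeating this per frame accumulates the dense surface description from which the point cloud and mesh of fig. 4 can be extracted.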
In some embodiments, the three-dimensional point cloud data obtained by the mechanical arm may be three-dimensional data in a world coordinate system. The mechanical arm may also construct a two-dimensional pixel coordinate system and generate a coordinate transformation matrix according to the obtained resolution parameter and the current pose parameters; this coordinate transformation matrix can be used to transform data in three-dimensional space into two-dimensional space, such as transforming coordinates in the three-dimensional world coordinate system into the two-dimensional pixel coordinate system.
In some embodiments, when generating the coordinate transformation matrix, the mechanical arm may receive a resolution input by the user and construct the matrix by combining the current pose parameters with internal parameters of the 3D vision camera, such as the focal length and the pixel-space origin.
Exemplarily, taking the pinhole camera model as the projection from three-dimensional space coordinates (X, Y, Z) to pixel-space coordinates (u, v), the coordinate transformation matrix described above may satisfy the following formula:

$$ s\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R & t \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} $$

where (u, v) represents a two-dimensional coordinate in the pixel coordinate system; s is a preset scale parameter; (c_x, c_y) represents the image center point, whose value may be half the resolution; f_x and f_y represent the focal length parameters; R and t represent the current pose parameters; and (X, Y, Z) represents a three-dimensional coordinate in the world coordinate system.
It is understood that the above-mentioned resolution may be received by the mechanical arm from user input, for example a preset resolution of 640 × 480. In other possible embodiments, the resolution may be obtained in other ways: the mechanical arm may receive a resolution sent by other control devices, or, when no input resolution is available, fall back to a default resolution for the calculation.
Next, the mechanical arm can convert the obtained three-dimensional point cloud data in the world coordinate system into corresponding two-dimensional space data using the coordinate transformation matrix, and avoid obstacles using the resulting two-dimensional space data. This reduces the amount of data used during obstacle avoidance, improves obstacle avoidance efficiency, reduces the amount of data stored, and reduces the processing resources consumed.
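As an illustration of the projection formula above, the following numpy sketch assembles the coordinate transformation matrix K[R|t] and maps world points to pixel coordinates. The intrinsic values (focal length 525, a 640 × 480 resolution) are hypothetical:

```python
import numpy as np

def make_projection(fx, fy, cx, cy, R, t):
    """Assemble the 3x4 matrix K[R|t] from the formula above."""
    K = np.array([[fx, 0.0, cx],
                  [0.0, fy, cy],
                  [0.0, 0.0, 1.0]])
    return K @ np.hstack([R, t.reshape(3, 1)])

def project_points(P, pts_w):
    """Map Nx3 world-coordinate points to Nx2 pixel coordinates (u, v)."""
    homo = np.hstack([pts_w, np.ones((len(pts_w), 1))])
    suv = (P @ homo.T).T              # rows are (s*u, s*v, s)
    return suv[:, :2] / suv[:, 2:3]   # divide out the scale s

# Hypothetical numbers: 640x480 resolution, so (cx, cy) = (320, 240).
P = make_projection(525.0, 525.0, 320.0, 240.0, np.eye(3), np.zeros(3))
print(project_points(P, np.array([[0.1, 0.2, 1.5]])))   # -> [[355. 310.]]
```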
In addition, in a scheme that avoids obstacles with a three-dimensional obstacle avoidance map such as fig. 1, all obstacles in the space where the mechanical arm is located are represented by voxels, so the mechanical arm can only obtain obstacle occupancy information; it cannot obtain a geometric measurement of the environment surface and cannot achieve refined obstacle avoidance. For example, in a small space, the mechanical arm can only judge the direction of an obstacle in order to avoid it, but cannot move along the environment surface.
Based on this, in some embodiments, after performing step 201 the mechanical arm may calculate a surface normal SDF vector of the current visual environment from the obtained three-dimensional point cloud data, by computing Signed Distance Function (SDF) values of the spatial environment surface; the normal and height information of the environment surface is then obtained from the calculated surface normal SDF vector.
Moreover, in some embodiments, after obtaining the two-dimensional space data in step 205, the mechanical arm may further encode the surface normal SDF vector into the two-dimensional space data, for example by mapping the component values of the surface normal SDF vector in the X, Y and Z directions to the three RGB channels of the pixel space, generating the two-dimensional image data shown in fig. 5; the mechanical arm can then perform obstacle avoidance using this two-dimensional image data.
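A minimal sketch of this XYZ-to-RGB encoding follows; the [-1, 1] to [0, 255] scaling convention is an assumption, since the text only fixes the channel mapping:

```python
import numpy as np

def encode_normals_to_rgb(normal_map):
    """Encode per-pixel unit surface normals (H, W, 3), components in [-1, 1],
    into an 8-bit RGB image: X -> R, Y -> G, Z -> B."""
    rgb = (normal_map * 0.5 + 0.5) * 255.0          # [-1, 1] -> [0, 255]
    return np.clip(np.round(rgb), 0, 255).astype(np.uint8)

def decode_rgb_to_normals(rgb):
    """Inverse mapping, recovering approximate normal components."""
    return rgb.astype(np.float32) / 255.0 * 2.0 - 1.0
```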
In some embodiments, when the two-dimensional image data illustrated in fig. 5 is used for obstacle avoidance, each square block in the figure may store the same amount of data; storing data with the block, rather than the pixel, as the minimum granularity reduces the amount of data stored. In addition, because the adjacency between blocks is easy to obtain, data structures such as a two-dimensional octree can be used to quickly retrieve points of the three-dimensional space, improving search speed, saving the time spent looking up normal and height information during obstacle avoidance, and improving obstacle avoidance efficiency.
Thus, as shown in fig. 6, the broken line indicates the normal direction, the upper solid line the movement trajectory of the mechanical arm, and the lower solid line the obstacle surface. When grasping in a narrow space such as a refrigerator or a semi-enclosed area, the mechanical arm can cast a ray (ray casting) along its current motion direction and speed to obtain an intersection point with the reconstructed environment in two-dimensional image data such as that of fig. 5, and retrieve the normal and height information corresponding to that intersection point from the obstacle avoidance map. Using the retrieved normal and height information, the mechanical arm can move closely along the obstacle surface, which improves its control precision, enables it to complete obstacle avoidance in narrow spaces, and broadens the scenarios to which it is applicable.
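The ray casting step might be sketched as follows, here as generic sphere tracing against a signed distance field. The function name and step rule are illustrative assumptions, and sdf_at stands for any SDF lookup, such as the trilinear interpolation described later:

```python
import numpy as np

def ray_cast_sdf(sdf_at, p0, direction, max_dist=2.0, eps=1e-3):
    """Sphere-trace a ray from p0 along `direction` until the signed distance
    falls below eps. sdf_at: callable returning the SDF value at a 3D point."""
    d = direction / np.linalg.norm(direction)
    t = 0.0
    while t < max_dist:
        p = p0 + t * d
        dist = sdf_at(p)
        if dist < eps:
            return p          # intersection with the reconstructed surface
        t += max(dist, eps)   # safe step: never overshoots the surface
    return None               # no obstacle within max_dist
```

Once the intersection point is found, the normal and height stored at the corresponding pixel of the two-dimensional image data can be retrieved for surface-following motion.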
It is understood that, as in the example of fig. 4, the three-dimensional point cloud data obtained in step 201 may include data of a plurality of voxelized cubes. When calculating the surface normal of the current visual environment, in line with the example of fig. 7, one could first calculate the vertex normals in the tangent space of the three-dimensional mesh and then convert them from tangent space into the world coordinate system, obtaining the surface normals of the current visual environment of the mechanical arm.
However, this approach requires the mechanical arm to convert between coordinate systems and cannot yield an expression of the surface normal of the current visual environment in real time.
In addition, in the voxelized representation of the current visual environment of the mechanical arm shown in fig. 3, obstacles are expressed with voxelized cubes, so environment coordinate points in the actual space differ from the surface points of the cubes; the SDF values of the environment surface in the current visual environment therefore carry a calculation error.
To this end, in some embodiments, the mechanical arm may work directly with parameters in the world coordinate system when calculating the surface normal SDF vector of the current visual environment.
For example, the mechanical arm may calculate initial SDF values for all vertices of each cube directly from the three-dimensional point cloud data in the world coordinate system.
Next, in connection with the example of fig. 8, for each surface point (x, y, z) in the current visual environment, the mechanical arm may calculate a target SDF value for that surface point from the initial SDF values of all vertices of the cube in which it lies, for example by trilinear interpolation.
Illustratively, assume that the initial SDF values of the eight vertices of the cube in which the surface point (x, y, z) lies in fig. 8 are denoted V_000, V_100, V_010, V_001, V_101, V_011, V_110 and V_111, with (x, y, z) the fractional position of the surface point within the cube. The target SDF value of the surface point can then be calculated by the following formula:

$$ V_{xyz} = V_{000}(1-x)(1-y)(1-z) + V_{100}\,x(1-y)(1-z) + V_{010}(1-x)\,y(1-z) + V_{001}(1-x)(1-y)\,z + V_{101}\,x(1-y)\,z + V_{011}(1-x)\,yz + V_{110}\,xy(1-z) + V_{111}\,xyz $$
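A direct transcription of this trilinear interpolation into Python (the 2x2x2 vertex layout is an assumption for illustration):

```python
def trilinear_sdf(v, x, y, z):
    """Trilinearly interpolate the eight vertex SDF values of a cube at the
    fractional offsets (x, y, z) in [0, 1], following the formula above.
    v is a 2x2x2 nested structure with v[i][j][k] = V_ijk."""
    return (v[0][0][0] * (1 - x) * (1 - y) * (1 - z)
          + v[1][0][0] * x * (1 - y) * (1 - z)
          + v[0][1][0] * (1 - x) * y * (1 - z)
          + v[0][0][1] * (1 - x) * (1 - y) * z
          + v[1][0][1] * x * (1 - y) * z
          + v[0][1][1] * (1 - x) * y * z
          + v[1][1][0] * x * y * (1 - z)
          + v[1][1][1] * x * y * z)
```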
In this way, using the target SDF value of each surface point in the current visual environment, the mechanical arm can calculate the gradient value centered at each surface point (x, y, z), thereby obtaining the surface normal SDF vector of the current visual environment.
For example, as a possible implementation, the mechanical arm may calculate the gradient value of each surface point in the directions of the coordinate axes according to the target SDF value of each surface point and the size parameter of the cube.
Illustratively, taking the gradient of the surface point (x, y, z) along the X axis as an example: denote by V_-1 the target SDF value of the surface point (x-1, y, z) nearest to (x, y, z) in the negative X direction, and by V_1 the target SDF value of the surface point (x+1, y, z) nearest to it in the positive X direction. The gradient value of the surface point along the X axis can then be calculated by the following formula:

$$ N_x = 0.5 \times \mathrm{voxelsize} \times (V_1 - V_{-1}) $$

where voxelsize denotes the edge length of a cube.
Therefore, the target SDF value of each surface point in the current visual environment is calculated by interpolating the initial SDF values of all vertices of the cube in which that point lies, and the surface normal SDF vector of the current visual environment is then calculated from the interpolated target SDF values; this improves the calculation precision of the surface normal SDF vector.
In addition, it can be understood that the gradient values of a surface point (x, y, z) along the Y and Z axes can be calculated in the same manner as the X-axis gradient above; for brevity, the details are not repeated here.
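Putting the three axis gradients together, the following is a sketch of estimating the surface normal at a point. The final normalization is an addition for convenience, since the constant factor cancels when the vector is normalized; sdf_at is assumed to return the interpolated target SDF value at a 3D point:

```python
import numpy as np

def surface_normal(sdf_at, p, voxel_size):
    """Estimate the surface normal at point p from SDF differences along each
    coordinate axis, mirroring the per-axis formula above."""
    n = np.empty(3)
    for axis in range(3):
        step = np.zeros(3)
        step[axis] = voxel_size
        # Difference between the neighbouring samples on this axis.
        n[axis] = 0.5 * voxel_size * (sdf_at(p + step) - sdf_at(p - step))
    norm = np.linalg.norm(n)
    return n / norm if norm > 0 else n
```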
In some embodiments, in order to improve the calculation accuracy of the target SDF value of each surface point, every time the mechanical arm calculates the initial SDF values of all vertices of each cube in the three-dimensional point cloud data corresponding to one frame of the visual environment image, it may save all the calculated SDF values as historical SDF values of those vertices. After the initial SDF values of all vertices of each cube in the three-dimensional point cloud data of the current visual environment have been calculated, the mechanical arm may then update them using the stored historical SDF values, for example by taking a weighted average of the initial and historical SDF values of each vertex, to obtain the updated SDF values of all vertices of each cube in the three-dimensional point cloud data. When calculating the target SDF value of each surface point in the current visual environment, the mechanical arm then uses these updated SDF values, improving the calculation accuracy of the target SDF value.
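A minimal sketch of this weighted-average update for a single vertex; the per-frame weight of 1.0 is an assumption, as the text does not fix the weights:

```python
def update_vertex_sdf(history_sdf, history_weight, initial_sdf, frame_weight=1.0):
    """Fuse a newly computed initial SDF value with the stored historical value
    by weighted averaging, returning the updated value and accumulated weight."""
    total = history_weight + frame_weight
    updated = (history_sdf * history_weight + initial_sdf * frame_weight) / total
    return updated, total
```

Applying this per vertex each frame keeps the stored SDF grid a running average over all observed frames, which smooths out per-frame measurement noise before the target SDF values are interpolated.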
In addition, based on the same inventive concept as the above-mentioned obstacle avoidance method provided in the present application, as shown in fig. 9, the present application further provides an obstacle avoidance apparatus 300 applied to the mechanical arm shown in fig. 2, where the obstacle avoidance apparatus 300 includes a processing module 301 and a converting module 302; wherein:
the processing module 301 is configured to perform three-dimensional reconstruction on the current visual environment obtained by the mechanical arm to obtain three-dimensional point cloud data corresponding to the current visual environment and a current pose parameter of the mechanical arm;
the processing module 301 is further configured to generate a coordinate transformation matrix according to the obtained resolution parameter and the current pose parameter; the coordinate transformation matrix is used for transforming data in a three-dimensional space to a two-dimensional space;
the conversion module 302 is configured to convert the three-dimensional point cloud data into corresponding two-dimensional space data by using a coordinate transformation matrix, so that the robot arm avoids an obstacle by using the two-dimensional space data.
Optionally, as a possible implementation manner, after the processing module 301 performs three-dimensional reconstruction on the current visual environment obtained by the mechanical arm to obtain three-dimensional point cloud data corresponding to the current visual environment and the current pose parameter of the mechanical arm, the processing module 301 is further configured to:
calculating a surface normal SDF vector of the current visual environment according to the three-dimensional point cloud data;
after the conversion module 302 converts the three-dimensional point cloud data into corresponding two-dimensional space data by using the coordinate transformation matrix, the conversion module 302 is further configured to:
and encoding the surface normal SDF vector into the two-dimensional space data to generate two-dimensional image data, so that the mechanical arm can avoid obstacles using the two-dimensional image data.
Optionally, as a possible embodiment, the three-dimensional point cloud data includes data of a plurality of voxelized cubes;
when calculating the surface normal SDF vector of the current visual environment according to the three-dimensional point cloud data, the processing module 301 is specifically configured to:
calculating initial SDF values of all vertexes of each cube in the three-dimensional point cloud data;
aiming at each surface point in the current visual environment, calculating a target SDF value of the surface point by using the initial SDF values of all vertexes of the cube where the surface point is located;
and calculating the gradient value corresponding to each surface point in the current visual environment by using the target SDF value of each surface point in the current visual environment to obtain the surface normal SDF vector of the current visual environment.
Optionally, as a possible implementation manner, when calculating the gradient value corresponding to each surface point in the current visual environment by using the target SDF value of each surface point in the current visual environment, the processing module 301 is specifically configured to:
and calculating corresponding gradient values of the surface points in the directions of the coordinate axes according to the respective target SDF values of the two adjacent surface points of each surface point on the coordinate axes and the size parameter of the cube.
Optionally, as a possible implementation, after the processing module 301 calculates the initial SDF values of all the vertices of each cube in the three-dimensional point cloud data, the processing module 301 is further configured to:
and updating the initial SDF values of all the vertexes of each cube by using the historical SDF values corresponding to all the vertexes of each cube in the stored three-dimensional point cloud data to obtain the updated SDF values of all the vertexes of each cube in the three-dimensional point cloud data, so that the mechanical arm calculates the target SDF value of each surface point in the current visual environment by using the updated SDF values.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to some embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s).
It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in some embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to perform all or part of the steps of the method according to some embodiments of the present application. And the aforementioned storage medium includes: u disk, removable hard disk, read only memory, random access memory, magnetic or optical disk, etc. for storing program codes.
The above description is only a few examples of the present application and is not intended to limit the present application, and those skilled in the art will appreciate that various modifications and variations can be made in the present application. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.

Claims (8)

1. An obstacle avoidance method is characterized by being applied to a mechanical arm; the method comprises the following steps:
performing three-dimensional reconstruction on the current visual environment obtained by the mechanical arm to obtain three-dimensional point cloud data corresponding to the current visual environment and current pose parameters of the mechanical arm;
calculating a surface normal Signed Distance Function (SDF) vector of the current visual environment according to the three-dimensional point cloud data;
generating a coordinate transformation matrix according to the obtained resolution parameters and the current pose parameters; wherein the coordinate transformation matrix is used for transforming data in a three-dimensional space to a two-dimensional space;
and converting the three-dimensional point cloud data into corresponding two-dimensional space data by using the coordinate transformation matrix, encoding the surface normal SDF vector to the two-dimensional space data, and generating two-dimensional image data so that the mechanical arm can avoid the obstacle by using the two-dimensional image data.
2. The method of claim 1, wherein the three-dimensional point cloud data comprises data of a plurality of voxelized cubes;
calculating a surface normal SDF vector of the current visual environment according to the three-dimensional point cloud data, including:
calculating initial SDF values of all vertexes of each cube in the three-dimensional point cloud data;
aiming at each surface point in the current visual environment, calculating a target SDF value of the surface point by using the initial SDF values of all vertexes of the cube where the surface point is located;
and calculating the gradient value corresponding to each surface point in the current visual environment by using the target SDF value of each surface point in the current visual environment to obtain the surface normal SDF vector of the current visual environment.
3. The method of claim 2, wherein calculating the gradient value corresponding to each surface point in the current visual environment using the target SDF value for each surface point in the current visual environment comprises:
and calculating corresponding gradient values of the surface points in the directions of the coordinate axes according to the respective target SDF values of the two adjacent surface points of each surface point on the coordinate axes and the size parameters of the cube.
4. The method of claim 2, wherein after the calculating initial SDF values for all vertices of each cube in the three-dimensional point cloud data, the method further comprises:
and updating the initial SDF values of all the vertexes of each cube by using the stored historical SDF values corresponding to all the vertexes of each cube in the three-dimensional point cloud data to obtain the updated SDF values of all the vertexes of each cube in the three-dimensional point cloud data, so that the mechanical arm calculates the target SDF value of each surface point in the current visual environment by using the updated SDF values.
5. An obstacle avoidance device is characterized by being applied to a mechanical arm; the device comprises:
the processing module is used for performing three-dimensional reconstruction on the current visual environment obtained by the mechanical arm to obtain three-dimensional point cloud data corresponding to the current visual environment and current pose parameters of the mechanical arm;
the processing module is further used for calculating a surface normal Signed Distance Function (SDF) vector of the current visual environment according to the three-dimensional point cloud data;
the processing module is further used for generating a coordinate transformation matrix according to the obtained resolution parameters and the current pose parameters; wherein the coordinate transformation matrix is used for transforming data in a three-dimensional space to a two-dimensional space;
and the conversion module is used for converting the three-dimensional point cloud data into corresponding two-dimensional space data by using the coordinate transformation matrix, encoding the surface normal SDF vector to the two-dimensional space data, and generating two-dimensional image data so that the mechanical arm can avoid the obstacle by using the two-dimensional image data.
6. The apparatus of claim 5, in which the three-dimensional point cloud data comprises data of a plurality of voxelized cubes;
the processing module is specifically configured to, when calculating the surface normal SDF vector of the current visual environment according to the three-dimensional point cloud data:
calculating initial SDF values of all vertexes of each cube in the three-dimensional point cloud data;
aiming at each surface point in the current visual environment, calculating a target SDF value of the surface point by using the initial SDF values of all vertexes of the cube where the surface point is located;
and calculating the gradient value corresponding to each surface point in the current visual environment by using the target SDF value of each surface point in the current visual environment to obtain the surface normal SDF vector of the current visual environment.
7. A robot arm, comprising:
a memory for storing one or more programs;
a processor;
the one or more programs, when executed by the processor, implement the method of any of claims 1-4.
8. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-4.
CN202010862102.XA 2020-08-25 2020-08-25 Obstacle avoidance method, device, mechanical arm and storage medium Active CN111993425B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010862102.XA CN111993425B (en) 2020-08-25 2020-08-25 Obstacle avoidance method, device, mechanical arm and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010862102.XA CN111993425B (en) 2020-08-25 2020-08-25 Obstacle avoidance method, device, mechanical arm and storage medium

Publications (2)

Publication Number Publication Date
CN111993425A CN111993425A (en) 2020-11-27
CN111993425B true CN111993425B (en) 2021-11-02

Family

ID=73471627

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010862102.XA Active CN111993425B (en) 2020-08-25 2020-08-25 Obstacle avoidance method, device, mechanical arm and storage medium

Country Status (1)

Country Link
CN (1) CN111993425B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112508933A (en) * 2020-12-21 2021-03-16 航天东方红卫星有限公司 Flexible mechanical arm movement obstacle avoidance method based on complex space obstacle positioning
CN113951761B (en) * 2021-10-20 2022-10-14 杭州景吾智能科技有限公司 Mechanical arm motion planning method and system for cleaning rectangular area in space

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004202627A (en) * 2002-12-25 2004-07-22 Yaskawa Electric Corp Interference checking device and method of horizontal multi-articulated robot
CN1883887A (en) * 2006-07-07 2006-12-27 中国科学院力学研究所 Robot obstacle-avoiding route planning method based on virtual scene
CN101008571A (en) * 2007-01-29 2007-08-01 中南大学 Three-dimensional environment perception method for mobile robot
CN106949893A (en) * 2017-03-24 2017-07-14 华中科技大学 The Indoor Robot air navigation aid and system of a kind of three-dimensional avoidance
CN109144072A (en) * 2018-09-30 2019-01-04 亿嘉和科技股份有限公司 A kind of intelligent robot barrier-avoiding method based on three-dimensional laser
CN109658373A (en) * 2017-10-10 2019-04-19 中兴通讯股份有限公司 A kind of method for inspecting, equipment and computer readable storage medium
CN110893617A (en) * 2018-09-13 2020-03-20 深圳市优必选科技有限公司 Obstacle detection method and device and storage device

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004202627A (en) * 2002-12-25 2004-07-22 Yaskawa Electric Corp Interference checking device and method of horizontal multi-articulated robot
CN1883887A (en) * 2006-07-07 2006-12-27 中国科学院力学研究所 Robot obstacle-avoiding route planning method based on virtual scene
CN101008571A (en) * 2007-01-29 2007-08-01 中南大学 Three-dimensional environment perception method for mobile robot
CN106949893A (en) * 2017-03-24 2017-07-14 华中科技大学 The Indoor Robot air navigation aid and system of a kind of three-dimensional avoidance
CN109658373A (en) * 2017-10-10 2019-04-19 中兴通讯股份有限公司 A kind of method for inspecting, equipment and computer readable storage medium
CN110893617A (en) * 2018-09-13 2020-03-20 深圳市优必选科技有限公司 Obstacle detection method and device and storage device
CN109144072A (en) * 2018-09-30 2019-01-04 亿嘉和科技股份有限公司 A kind of intelligent robot barrier-avoiding method based on three-dimensional laser

Also Published As

Publication number Publication date
CN111993425A (en) 2020-11-27

Similar Documents

Publication Publication Date Title
CN112859859B (en) Dynamic grid map updating method based on three-dimensional obstacle object pixel object mapping
EP3250347B1 (en) Specialized robot motion planning hardware and methods of making and using same
CN108986161B (en) Three-dimensional space coordinate estimation method, device, terminal and storage medium
Driess et al. Learning multi-object dynamics with compositional neural radiance fields
CN111993425B (en) Obstacle avoidance method, device, mechanical arm and storage medium
Zhang et al. A visual distance approach for multicamera deployment with coverage optimization
Rodenberg et al. Indoor A* pathfinding through an octree representation of a point cloud
JP2019159940A (en) Point group feature extraction device, point group feature extraction method, and program
Mojtahedzadeh Robot obstacle avoidance using the Kinect
Kruzhkov et al. Meslam: Memory efficient slam based on neural fields
US20230115521A1 (en) Device and method for training a machine learning model for recognizing an object topology of an object from an image of the object
Van Pabst et al. Multisensor data fusion of points, line segments, and surface segments in 3D space
Gao et al. Multi-view sensor fusion by integrating model-based estimation and graph learning for collaborative object localization
Kim et al. Recursive estimation of motion and a scene model with a two-camera system of divergent view
Zhang et al. Affordance-driven next-best-view planning for robotic grasping
CN114494312A (en) Apparatus and method for training a machine learning model for identifying object topology of an object from an image of the object
US20210156710A1 (en) Map processing method, device, and computer-readable storage medium
Watkins-Valls et al. Mobile manipulation leveraging multiple views
Schaub et al. 6-dof grasp detection for unknown objects
Saleh et al. Estimating the 2d static map based on moving stereo camera
JP6162526B2 (en) 3D environment restoration device
CN114029940B (en) Motion path planning method, device, equipment, medium and mechanical arm
Dai et al. PlaneSLAM: Plane-based LiDAR SLAM for Motion Planning in Structured 3D Environments
JP2022078979A (en) Device and method for controlling robot for picking up object in various pose situations
Jiang et al. FFPA-Net: Efficient feature fusion with projection awareness for 3D object detection

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant