CN111113428B - Robot control method, robot control device and terminal equipment - Google Patents


Info

Publication number
CN111113428B
CN111113428B (application CN201911416634.4A)
Authority
CN
China
Prior art keywords
collision
detected
pair
pose
robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911416634.4A
Other languages
Chinese (zh)
Other versions
CN111113428A (en)
Inventor
林泽才
安昭辉
胡锡钦
刘益彰
庞建新
熊友军
Current Assignee
Beijing Youbixuan Intelligent Robot Co ltd
Original Assignee
Ubtech Robotics Corp
Priority date
Filing date
Publication date
Application filed by Ubtech Robotics Corp filed Critical Ubtech Robotics Corp
Priority claimed from CN201911416634.4A
Publication of CN111113428A
Application granted
Publication of CN111113428B

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1674: Programme controls characterised by safety, monitoring, diagnostic
    • B25J9/1676: Avoiding collision or forbidden zones
    • B25J9/1602: Programme controls characterised by the control system, structure, architecture
    • B25J9/161: Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • B25J13/00: Controls for manipulators
    • B25J13/08: Controls for manipulators by means of sensing devices, e.g. viewing or touching devices

Abstract

The robot control method provided by the application includes: acquiring target joint pose information of at least one designated joint of the robot; acquiring spatial information of at least two bounding boxes corresponding to the robot, where the three-dimensional space corresponding to each bounding box contains a preset component of the robot and the preset components corresponding to the bounding boxes differ from one another; determining, according to the target joint pose information and the spatial information, the collision situation of at least one group of collision pairs to be detected when the at least one designated joint is in the target joint pose, where each group of collision pairs to be detected comprises two bounding boxes; and, if the collision situation indicates that at least one group of collision pairs to be detected collides, adjusting the end pose of a bounding box in the colliding collision pair to be detected to obtain an adjustment result. With this method, collisions that may occur between the components of the robot can be detected and adjusted accurately and efficiently.

Description

Robot control method, robot control device and terminal equipment
Technical Field
The present application relates to the field of robotics, and in particular, to a robot control method, a robot control apparatus, a terminal device, and a computer-readable storage medium.
Background
To ensure the safety of a robot, collisions between the robot's own components, such as between its arms or legs and its torso, must be prevented during its motion.
However, when a collision between components is about to occur or has already occurred, existing approaches, owing to the complexity of the environment, the robot's structure, and the control scheme, often simply stop the robot's current motion and hold the robot stationary. This severely reduces the robot's execution efficiency.
Disclosure of Invention
The embodiment of the application provides a robot control method, a robot control device, a terminal device and a computer readable storage medium, which can accurately and efficiently detect and adjust the possible collision among all components of the robot.
In a first aspect, an embodiment of the present application provides a robot control method, including:
acquiring target joint pose information of at least one appointed joint of the robot;
acquiring space information of at least two bounding boxes corresponding to the robot, wherein a three-dimensional space corresponding to each bounding box comprises a preset component of the robot, and the preset components corresponding to the bounding boxes are different from each other;
determining the collision condition of at least one group of collision pairs to be detected when the at least one designated joint is in the target joint pose according to the target joint pose information and the space information, wherein each group of collision pairs to be detected comprises two bounding boxes;
and if the collision condition indicates that at least one group of collision pairs to be detected collide, adjusting the terminal pose of the bounding box in the collided collision pair to be detected to obtain an adjustment result.
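As an illustrative sketch only, the four steps of the method can be composed into one control cycle. Every callable name below is a hypothetical placeholder for a robot-specific implementation, not an API defined by the application:

```python
def control_step(get_joint_targets, get_boxes, collision_pairs, collides, adjust):
    """One control cycle following the four claimed steps (illustrative)."""
    # Step 1: acquire target joint pose information for the designated joints.
    joint_targets = get_joint_targets()
    # Step 2: acquire spatial information of the bounding boxes at that pose.
    boxes = get_boxes(joint_targets)
    # Step 3: determine the collision situation of each pair to be detected.
    colliding = [pair for pair in collision_pairs(boxes) if collides(pair)]
    # Step 4: adjust the end pose of a bounding box in each colliding pair.
    adjustments = [adjust(pair) for pair in colliding]
    return joint_targets, adjustments
```

Any concrete system would substitute real inverse kinematics, bounding-box updates, and collision tests for the placeholders.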
In a second aspect, an embodiment of the present application provides a robot control apparatus, including:
a first acquisition module, configured to acquire target joint pose information of at least one designated joint of the robot;
the second acquisition module is used for acquiring the space information of at least two bounding boxes corresponding to the robot, wherein the three-dimensional space corresponding to each bounding box comprises a preset component of the robot, and the preset components corresponding to the bounding boxes are different from each other;
the determining module is used for determining the collision condition of at least one group of collision pairs to be detected when the at least one designated joint is in the target joint pose according to the target joint pose information and the space information, wherein each group of collision pairs to be detected comprises two bounding boxes;
and the adjusting module is used for adjusting the terminal pose of the bounding box in the collided collision pair to be detected to obtain an adjusting result if the collision condition indicates that at least one group of collision pairs to be detected collide.
In a third aspect, an embodiment of the present application provides a terminal device, which includes a memory, a processor, a display, and a computer program stored in the memory and executable on the processor, where the processor implements the robot control method according to the first aspect when executing the computer program.
In a fourth aspect, the present application provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the robot control method according to the first aspect.
In a fifth aspect, the present application provides a computer program product, which when run on a terminal device, causes the terminal device to execute the robot control method described in the first aspect.
Compared with the prior art, the embodiments of the application have the following advantages. By acquiring the spatial information of at least two bounding boxes corresponding to the robot, the components of the robot can be combined and/or divided through the bounding boxes; in some cases, regular spatial information can be obtained by presetting the shapes of the bounding boxes, so that each preset component is abstracted into a regular three-dimensional space, which reduces the amount of computation and improves efficiency in subsequent calculations. In addition, after the target joint pose information of at least one designated joint of the robot is acquired, the collision situation of at least one group of collision pairs to be detected when the at least one designated joint is in the target joint pose is determined according to the target joint pose information and the spatial information, and whether the target joint pose of the designated joint needs to be adjusted is determined according to that collision situation. If the collision situation indicates that at least one group of collision pairs to be detected collides, the end pose of a bounding box in the colliding pair is adjusted to obtain an adjustment result; by setting the collision pairs to be detected reasonably, collisions that may occur during the robot's motion can be estimated while invalid operations and detections are avoided. Through the embodiments of the application, collisions that may occur between the preset components of the robot can be detected and adjusted accurately and efficiently, thereby ensuring the stability and safety of the robot's operation.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
Fig. 1 is a schematic flowchart of a robot control method according to an embodiment of the present application;
Fig. 2 is a schematic diagram of the bounding boxes provided by an embodiment of the present application;
fig. 3 is a schematic structural diagram of a robot control device according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
Specifically, fig. 1 shows a flowchart of a first robot control method provided in an embodiment of the present application.
The robot control method provided in the embodiment of the present application may be applied to a robot, and may also be applied to other terminal devices coupled to the robot, for example, a server, a mobile phone, a tablet computer, a wearable device, a vehicle-mounted device, an augmented reality (AR)/virtual reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), with the control of the robot implemented through those other terminal devices. The embodiment of the present application does not limit the specific type of the terminal device. The specific arrangement of the robot's hardware, software and firmware may also take many forms, and is not limited here.
As shown in fig. 1, the robot control method includes:
step S101, target joint pose information of at least one appointed joint of the robot is obtained.
In the embodiment of the present application, the target joint pose information may include information of a target joint pose of the specified joint. Illustratively, one or more of a target joint angle, a target joint velocity, and a target spatial position of the specified joint may be included. The designated joint may be determined according to a current desired pose of the end of the robot, or the like. Accordingly, the specific arrangement position of the designated joint on the robot can be various.
In some embodiments, a current desired end pose of the robot may be obtained. The desired end pose may be that of the end of a particular component of the robot, and in some cases the actions to be performed differ for each particular component. For example, a humanoid robot may include a left arm, a right arm, a torso, a left leg and a right leg, and the designated joints to be controlled may differ according to the current desired end pose of the humanoid robot. For example, if the current desired end pose only involves motions of the left arm and the right arm, the designated joints may include at least some joints of the left arm and of the right arm, respectively. The target joint pose information of the at least one designated joint may be obtained in advance by inverse kinematics or the like according to the current desired end pose of the robot. Of course, the target joint pose information may also be acquired by the terminal device of the embodiment of the present application from another terminal through a specific information transmission channel or the like. The specific acquisition manner of the target joint pose information is not limited here.
Step S102, obtaining space information of at least two bounding boxes corresponding to the robot, wherein a three-dimensional space corresponding to each bounding box comprises a preset component of the robot, and the preset components corresponding to the bounding boxes are different from each other.
In the embodiment of the present application, for example, the preset component may be obtained by combining and/or dividing the components of the robot. The preset components may be divided in various ways. For example, one or a plurality of components may be combined to obtain a predetermined component, and a part of the structure in one component may also be combined to form a predetermined component. The three-dimensional space of the bounding box may be of regular shape, e.g. may be one or more of a cuboid, a cube, a cylinder, a sphere, etc. The shapes corresponding to different bounding boxes may be different or the same. The space information of the bounding box may be predetermined, and there may be a plurality of ways to obtain the bounding box, for example, the bounding box may correspond to a minimum three-dimensional space (for example, a minimum cuboid or the like) which can include a corresponding preset component under a specific shape, and at this time, it may also be referred to as performing envelope processing on each preset component to obtain a corresponding bounding box.
In some embodiments, the shape of the bounding box may be determined according to the category of the preset component corresponding to the bounding box.
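As a sketch of how per-category bounding volumes of this kind might be represented, the following uses hypothetical type and mapping names; the shape assignment mirrors the example of Fig. 2 but is otherwise an assumption:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class SphereBV:
    center: Tuple[float, float, float]
    radius: float

@dataclass
class CylinderBV:
    start: Tuple[float, float, float]   # one axis endpoint (e.g. a joint position)
    end: Tuple[float, float, float]     # the other axis endpoint
    radius: float

@dataclass
class BoxBV:
    center: Tuple[float, float, float]
    half_extents: Tuple[float, float, float]  # half-lengths along local axes

# One possible category-to-shape assignment, following Fig. 2:
SHAPE_BY_CATEGORY = {
    "torso": BoxBV, "hand": BoxBV,
    "head": SphereBV, "neck": SphereBV,
    "upper_arm": CylinderBV, "forearm": CylinderBV, "thigh": CylinderBV,
}
```

Abstracting each preset component into one of these regular shapes is what keeps the subsequent collision computations cheap.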
Fig. 2 is an exemplary schematic diagram of the bounding boxes. The robot may include a torso, a left arm, a right arm, a left leg, a right leg, a head and a neck, where the left arm specifically includes a left upper arm, a left forearm and a left hand, the right arm specifically includes a right upper arm, a right forearm and a right hand, the left leg includes a left thigh, a left shank and a left foot, and the right leg includes a right thigh, a right shank and a right foot. In this example, the bounding boxes corresponding to the torso, the left hand and the right hand are cuboid, the bounding boxes corresponding to the head and the neck are spherical, and the bounding boxes corresponding to the left upper arm, the left forearm, the right upper arm, the right forearm, the left thigh and the right thigh are cylindrical. Based on the positional relationships among the preset components of the robot, no bounding boxes are set for the left shank, the right shank, the left foot and the right foot, which cannot collide with other parts.
It should be noted that fig. 2 is only an exemplary arrangement of the bounding box, and does not limit the bounding box.
Step S103, determining the collision condition of at least one group of collision pairs to be detected when the at least one designated joint is in the target joint pose according to the target joint pose information and the space information, wherein each group of collision pairs to be detected comprises two bounding boxes.
The collision situation can indicate whether a collision occurs between the two bounding boxes of at least one group of collision pairs to be detected, the poses of the bounding boxes at the moment of collision, and so on. One bounding box can be contained in a plurality of different collision pairs to be detected; it is not limited to being contained in only one.
In the embodiment of the present application, the collision situation may be determined in various ways. For example, it may be determined by simulating the motion of each collision pair to be detected in three-dimensional space, or by calling a collision detection library such as the Flexible Collision Library (FCL).
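For the regular primitives involved (spheres, cylinders), the pairwise tests can also be sketched directly as distance checks; in practice a dedicated library would normally be used instead. The function names below are illustrative, and the cylinder is approximated by its axis segment:

```python
import math

def sphere_sphere_collide(c1, r1, c2, r2):
    # Two spheres collide when the distance between centers is below
    # the sum of their radii.
    return math.dist(c1, c2) < r1 + r2

def segment_point_distance(a, b, p):
    # Distance from point p to the segment ab (used as a cylinder axis).
    ab = [b[i] - a[i] for i in range(3)]
    ap = [p[i] - a[i] for i in range(3)]
    denom = sum(x * x for x in ab)
    t = 0.0 if denom == 0 else max(0.0, min(1.0, sum(ap[i] * ab[i] for i in range(3)) / denom))
    closest = [a[i] + t * ab[i] for i in range(3)]
    return math.dist(closest, p)

def sphere_cylinder_collide(center, radius, axis_a, axis_b, cyl_radius):
    # Sphere vs. cylinder, treating the cylinder as a capsule around its axis.
    return segment_point_distance(axis_a, axis_b, center) < radius + cyl_radius
```

The same shrink-to-axis idea extends to cylinder-vs-cylinder tests via segment-to-segment distance.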
In an exemplary embodiment, when the at least one designated joint is in the target joint pose, the target end poses respectively corresponding to the bounding boxes are determined according to the target joint pose information and the spatial information, so as to determine the collision situations of at least one group of collision pairs to be detected.
An exemplary calculation method of the target end poses respectively corresponding to the bounding boxes is described below as a specific example.
In some embodiments, the left and right arms of the robot are each a robotic arm having an upper arm (e.g., the left or right upper arm), a forearm (e.g., the left or right forearm), and an elbow between the upper arm and the forearm. The robotic arm may be connected to a hand. Bounding boxes may correspond to the upper arm, the forearm, the elbow, and so on, respectively.
Specifically, the robotic arm has one redundant degree of freedom and 7 joints: a first joint, a second joint, a third joint, a fourth joint, a fifth joint, a sixth joint and a seventh joint. The upper arm has 3 degrees of freedom, corresponding to 3 joints: the first, second and third joints; the forearm has 3 degrees of freedom, corresponding to 3 joints: the fifth, sixth and seventh joints; the elbow has one degree of freedom, corresponding to one joint: the fourth joint. The first, second and third joints can form a shoulder joint combination, the fourth joint is the elbow joint, and the fifth, sixth and seventh joints can form a wrist joint combination.
At this point, the pose representation 0R_Uarm of the upper arm in the robot body base coordinate system is determined by means of Denavit-Hartenberg (D-H) modeling or the like, where the first formula used for determining 0R_Uarm is:

0R_Uarm = 0R_1 · 1R_2(q1) · 2R_3(q2) · 3R_4(q3)

where 0R_1 is the pose representation of the first joint coordinate system of the upper arm relative to the robot body base coordinate system; since the trunk of the robot has no degree of freedom, 0R_1 is typically a fixed value; iR_{i+1}(q_i) is the pose representation of the (i+1)-th joint coordinate system relative to the i-th joint coordinate system.

In the same way, the pose representation 0R_Farm of the forearm and the pose representation 0R_Hand of the hand may respectively be:

0R_Farm = 0R_Uarm · 4R_5(q4)
0R_Hand = 0R_Farm · 5R_6(q5) · 6R_7(q6) · 7R_8(q7)

The coordinate position P_Uarm of the geometric center of the upper arm may be:

P_Uarm = P_s + (1/2) · 0R_Uarm · 3r_se

where P_s is the position of the first feature position indicated by the shoulder joint combination, and 3r_se is the vector from the first feature position indicated by the shoulder joint combination to the fourth joint in the third joint coordinate system, for example

3r_se = [0, 0, l_se]^T

where l_se is the length of the upper arm.

The coordinate position P_Farm of the geometric center of the forearm may be:

P_Farm = P_e + (1/2) · 0R_Farm · 4r_ew

where P_e is the position of the fourth joint, and 4r_ew is the vector from the fourth joint to the second feature position indicated by the wrist joint combination in the fourth joint coordinate system, for example

4r_ew = [0, 0, l_ew]^T

where l_ew is the length of the forearm.

The coordinate position P_Hand of the geometric center of the hand may be:

P_Hand = P_w + 0R_Hand · 7r_hand

where P_w is the position of the second feature position indicated by the wrist joint combination, and 7r_hand is the vector from the second feature position indicated by the wrist joint combination to the hand center in the seventh joint coordinate system of the arm, for example

7r_hand = [0, 0, l_hand]^T

where l_hand is the distance from the second feature position to the center of the hand.
In addition, the robot also includes legs, each leg including a thigh and a shank; the joint connecting the thigh to the trunk may have 3 degrees of freedom.
According to Denavit-Hartenberg (D-H) modeling, the pose representation 0R_leg of the thigh relative to the robot body base coordinate system is determined as:

0R_leg = 0W_1 · 1W_2(q1) · 2W_3(q2) · 3W_4(q3)

where 0W_1 is the pose representation of the first joint coordinate system of the leg relative to the robot body base coordinate system; since the trunk of the robot has no degree of freedom, 0W_1 is typically a fixed value; iW_{i+1}(q_i) is the pose representation of the (i+1)-th joint coordinate system of the leg relative to its i-th joint coordinate system.

The coordinate position P_leg of the geometric center of the thigh may be:

P_leg = P_hip + (1/2) · 0R_leg · 3r_leg

where P_hip is the position of the first leg joint, and 3r_leg is the vector from the first leg joint (e.g., the hip joint) to the second leg joint (e.g., the knee joint) in the third joint coordinate system of the leg, for example

3r_leg = [0, 0, l_leg]^T

where l_leg is the length of the thigh.
It should be noted that the above calculation method is only a specific example of the embodiment of the present application, and is not a limitation of the present application.
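The upper-arm computation described above, chaining per-joint rotations and then offsetting the shoulder position by half the shoulder-to-elbow vector, can be sketched as follows. The z/x/z axis assignment and the direction of the offset vector are illustrative assumptions, since the D-H parameter table is not given:

```python
import math

def rot_z(q):
    c, s = math.cos(q), math.sin(q)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def rot_x(q):
    c, s = math.cos(q), math.sin(q)
    return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(A, v):
    return tuple(sum(A[i][k] * v[k] for k in range(3)) for i in range(3))

def upper_arm_pose(R01, q1, q2, q3):
    # 0R_Uarm = 0R_1 * 1R_2(q1) * 2R_3(q2) * 3R_4(q3), with an assumed
    # z/x/z alternation of joint axes for the shoulder combination.
    R = matmul(R01, rot_z(q1))
    R = matmul(R, rot_x(q2))
    return matmul(R, rot_z(q3))

def upper_arm_center(p_shoulder, R_uarm, l_se):
    # Geometric center: shoulder position plus half the shoulder-to-elbow
    # vector, rotated into the base frame.
    offset = matvec(R_uarm, (0.0, 0.0, -l_se / 2.0))
    return tuple(p + o for p, o in zip(p_shoulder, offset))
```

The forearm, hand, and thigh centers follow the same pattern with their own rotation chains and offset vectors.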
Step S104, if the collision situation indicates that at least one group of collision pairs to be detected collides, the end pose of a bounding box in the colliding collision pair to be detected is adjusted to obtain an adjustment result.
The collision of the collision pair to be detected in the embodiment of the present application may be a collision between two bounding boxes included in the collision pair to be detected. Specifically, each collision pair to be detected, in which a collision occurs, may be adjusted separately. For example, in each adjustment, only one bounding box is adjusted along a certain direction in a specific coordinate system, and meanwhile, whether the bounding box after adjustment still collides can be judged by projecting the bounding box to a specific plane in the specific coordinate system. Alternatively, the end poses of the bounding boxes in the collision pair to be detected may also be adjusted according to the distance between the bounding box and the trunk, the distance between the two bounding boxes of the collision pair to be detected, and the like.
In some embodiments, the adjustment result may indicate whether the adjusted collision pair to be detected has collided, and/or include information such as an end pose of a bounding box in the adjusted collision pair to be detected.
In some embodiments, after obtaining the adjustment result, the method may further include:
and if the adjustment result indicates that the adjusted collision pair to be detected does not collide, adjusting the target joint pose of at least one appointed joint according to the adjusted collision pair to be detected.
In the embodiment of the application, according to the adjusted information such as the end pose of the bounding box in the collision pair to be detected, the target pose of the preset component corresponding to the corresponding bounding box can be determined, so that the target joint pose of the relevant joint (i.e. at least one designated joint) in the preset component can be determined by an inverse kinematics solution and other methods.
In some embodiments, after adjusting the target joint pose of the at least one designated joint, the robot may be controlled according to the adjusted target joint pose.
In the embodiment of the present application, for example, if the terminal device executing the embodiment of the present application is the robot itself, a control circuit in the robot may issue a relevant instruction to drive the corresponding joint to move according to the corresponding target joint angle. Of course, in some embodiments, the robot may also be controlled by other terminals coupled to the robot. At this time, the other terminal may send a corresponding control instruction to the robot through a preset information transmission manner to instruct the robot to perform a corresponding operation. In the embodiments of the present application, the specific manner of controlling the robot is not limited herein.
In some embodiments, the types of bounding boxes include a torso portion bounding box and a non-torso portion bounding box, the collision-to-be-detected pair includes a first collision-to-be-detected pair including one torso portion bounding box and one non-torso portion bounding box and a second collision-to-be-detected pair including two non-torso portion bounding boxes;
the step S103 specifically includes:
detecting, according to the target joint pose information and the spatial information, whether at least one group of first collision pairs to be detected collides;
the step S104 includes:
if at least one group of first collision pairs to be detected collide, adjusting the terminal pose of a non-trunk part enclosing box in the collided first collision pairs to be detected, so that the adjusted first collision pairs to be detected do not collide;
if each first collision pair to be detected does not collide, detecting whether at least one group of second collision pairs to be detected collide;
and if at least one group of second collision pairs to be detected collide, adjusting the terminal pose of the non-trunk part enclosing box in the second collision pair to be detected to obtain an adjustment result.
In the embodiment of the present application, the type of bounding box for a preset component may be determined according to the motion characteristics of that component. The motion range corresponding to a trunk portion bounding box is usually small and its position relatively fixed; in this case, the preset component corresponding to the trunk portion bounding box often serves as the base of the preset components corresponding to the non-trunk portion bounding boxes. For example, the preset component corresponding to the trunk portion bounding box may include the torso of the robot, and may also include one or more of the head, the neck, and the like. The preset components corresponding to the non-trunk portion bounding boxes may include, for example, one or more of the left arm, the right arm, the left leg, the right leg, and the like. Generally, a non-trunk portion bounding box has a larger movement range and a relatively flexible position. Accordingly, different collision detection and adjustment methods may be adopted for different types of bounding boxes.
There are two ways in which the first collision pairs may be found collision-free: after detection is performed according to the target joint pose information and the spatial information, each first collision pair to be detected may be determined not to collide; alternatively, after the end pose of the non-trunk portion bounding box in a colliding first collision pair has been adjusted, the adjusted first collision pairs and the unadjusted first collision pairs may all be determined not to collide.
In the embodiment of the application, whether each first collision pair to be detected, which comprises a trunk portion bounding box and a non-trunk portion bounding box, collides can be detected first, and only after no colliding first collision pair remains are the second collision pairs to be detected handled. This prevents the adjustment of multiple collision pairs from causing confusion, makes the adjustment more orderly, reduces the amount of computation, and improves the adjustment efficiency.
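The two-phase ordering, trunk-vs-limb pairs first, limb-vs-limb pairs only once the first phase is clear, can be sketched as follows; `collides` and `adjust` are placeholders for the detection and adjustment routines:

```python
def resolve_collisions(first_pairs, second_pairs, collides, adjust):
    # Phase 1: handle every pair containing a trunk portion bounding box.
    for pair in first_pairs:
        if collides(pair):
            adjust(pair)
    # Phase 2 runs only when all first pairs are collision-free.
    if any(collides(pair) for pair in first_pairs):
        return False  # phase-1 adjustment did not clear all collisions
    for pair in second_pairs:
        if collides(pair):
            adjust(pair)
    return True
```

Deferring the limb-vs-limb pairs keeps one adjustment from invalidating another mid-pass.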
Optionally, in some embodiments, if at least one group of first collision pairs to be detected collides, adjusting the end pose of the non-trunk portion bounding box in the colliding first collision pair so that the adjusted pair no longer collides includes:
if at least one group of first collision pairs to be detected collides, projecting the two bounding boxes of the colliding first collision pair onto the three projection planes of the coordinate system corresponding to that pair, and obtaining a projection result;
and adjusting, according to the projection result, the end pose of the non-trunk portion bounding box in the first collision pair to be detected so that the adjusted pair does not collide.
In the embodiment of the present application, the above projection processing may be performed for each first collision pair to be detected in which a collision occurs. By projecting the two bounding boxes of a colliding first collision pair onto the three projection planes of the coordinate system corresponding to that pair, the specific collision state of the two bounding boxes can be seen intuitively and quickly from the projection results, and the result of adjusting the end pose of the non-trunk portion bounding box can be determined accordingly. For example, when the end pose of the non-trunk portion bounding box is adjusted according to the projection result, it may be adjusted along a specific direction, and whether the adjustment is complete may be judged by checking whether the projections of the two bounding boxes still overlap in the projection plane corresponding to that direction, that is, whether the adjusted first collision pair to be detected no longer collides.
Optionally, in some embodiments, each projection plane corresponds to one direction to be adjusted;
the adjusting, according to the projection result, the end pose of the non-trunk portion bounding box in the first collision pair to be detected so that the adjusted first collision pair to be detected does not collide includes:
determining a target adjustment direction and a target movement distance for the non-trunk portion bounding box in the first collision pair to be detected, wherein the target adjustment direction is one of the directions to be adjusted, and the projection plane corresponding to the target adjustment direction is a target projection plane; the target movement distance is the minimum movement distance along the target adjustment direction at which the projections of the two bounding boxes in the first collision pair to be detected no longer overlap on the target projection plane, and this distance is smaller than the corresponding minimum movement distances for the other projection planes;
and adjusting the end pose of the non-trunk portion bounding box in the first collision pair to be detected according to the target adjustment direction and the target movement distance, so that the adjusted first collision pair to be detected does not collide.
In this embodiment, the directions to be adjusted may be perpendicular to each other. The direction to be adjusted corresponding to any projection plane may lie within that projection plane or be parallel to it, so that when the non-trunk portion bounding box moves along the direction to be adjusted, the movement is reflected on the corresponding projection plane.
In general, in the embodiment of the present application, moving the target movement distance along the target adjustment direction, rather than along one of the other directions to be adjusted, allows the non-trunk portion bounding box in the colliding first collision pair to be detected to avoid collision with the corresponding trunk portion bounding box with the minimum movement, thereby reducing the adjustment range of the robot, shortening the time taken to adjust it, and improving control efficiency.
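As an illustrative sketch only (the patent gives no code), the projection-based choice of target adjustment direction and target movement distance can be realised for axis-aligned bounding boxes as follows; the names `Box`, `overlap_1d` and `target_direction` are assumptions, not taken from the embodiment:

```python
from dataclasses import dataclass

@dataclass
class Box:
    lo: tuple  # (x, y, z) minimum corner
    hi: tuple  # (x, y, z) maximum corner

def overlap_1d(a_lo, a_hi, b_lo, b_hi):
    """Length of the overlap of two intervals (0 if they are disjoint)."""
    return max(0.0, min(a_hi, b_hi) - max(a_lo, b_lo))

def target_direction(a: Box, b: Box):
    """Return (axis, distance): the direction to be adjusted along which moving
    box `a` by `distance` removes the projected overlap, minimised over the
    three axes. Removing the overlap on any single axis separates the boxes,
    so the smallest per-axis overlap is the target movement distance."""
    seps = [overlap_1d(a.lo[i], a.hi[i], b.lo[i], b.hi[i]) for i in range(3)]
    if min(seps) == 0.0:
        return None  # the boxes already do not collide
    axis = seps.index(min(seps))
    return axis, seps[axis]
```

For example, two boxes overlapping by 0.5 along x but by 1.0 along y and z would be separated along x, matching the minimum-movement rule described above.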
In some embodiments, if at least one group of second collision pairs to be detected collides, adjusting the end pose of a non-trunk portion bounding box in the colliding second collision pair to be detected to obtain an adjustment result includes:
if at least one group of second collision pairs to be detected collides, determining the respective distances between the two non-trunk portion bounding boxes in the colliding second collision pair to be detected and the trunk portion bounding box;
taking the non-trunk portion bounding box that is farther from the trunk portion bounding box as the non-trunk portion bounding box to be adjusted;
and adjusting the end pose of the non-trunk portion bounding box to be adjusted until the two adjusted non-trunk portion bounding boxes do not collide, or until the number of adjustments of the non-trunk portion bounding box to be adjusted exceeds a preset number of adjustments.
In the embodiment of the present application, considering the difference in movement between the trunk portion bounding box and the non-trunk portion bounding boxes, the adjustment for a colliding second collision pair to be detected may differ from that for a colliding first collision pair to be detected. Specifically, of the two non-trunk portion bounding boxes, the one farther from the trunk portion bounding box is taken as the non-trunk portion bounding box to be adjusted; because this box is farther from the trunk portion bounding box, the possibility of a new collision when it is moved is low, which ensures safety and accuracy during adjustment. In addition, by setting the preset number of adjustments, the robot can be prevented from falling into an infinite adjustment loop.
In some embodiments, if the two adjusted non-trunk portion bounding boxes do not collide, the target joint pose of the at least one designated joint is adjusted according to the two adjusted non-trunk portion bounding boxes, and the adjusted target joint pose is obtained.
In some embodiments, if the two adjusted non-trunk portion bounding boxes still collide after the number of adjustments of the non-trunk portion bounding box to be adjusted exceeds the preset number of adjustments, the target joint angle values of the at least one designated joint of the robot are each determined to be the corresponding joint angle values at the previous moment.
In this embodiment of the application, if the number of adjustments of the non-trunk portion bounding box to be adjusted exceeds the preset number of adjustments and the two adjusted non-trunk portion bounding boxes still collide, each designated joint of the robot may be maintained in its state at the previous moment.
In some embodiments, adjusting the end pose of the non-trunk portion bounding box to be adjusted until the two adjusted non-trunk portion bounding boxes do not collide, or until the number of adjustments exceeds a preset number of adjustments, includes:
taking the feature points respectively corresponding to the two non-trunk portion bounding boxes in the colliding second collision pair to be detected as a first feature point and a second feature point;
determining a first distance of the first feature point from the trunk portion bounding box and a second distance of the second feature point from the trunk portion bounding box;
determining the smaller and the larger of the first distance and the second distance, and generating a direction vector pointing from the feature point corresponding to the smaller distance to the feature point corresponding to the larger distance;
and adjusting the end pose of the non-trunk portion bounding box to be adjusted in the direction indicated by the direction vector until the two adjusted non-trunk portion bounding boxes do not collide, or until the number of adjustments exceeds the preset number of adjustments.
In the embodiment of the present application, the feature points may be, for example, the geometric center or an end vertex of each non-trunk portion bounding box. The first distance may be the shortest distance between the first feature point and the trunk portion bounding box, and the second distance may be the shortest distance between the second feature point and the trunk portion bounding box.
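The shortest point-to-box distance mentioned above can be computed, for an axis-aligned bounding box, by clamping the point to the box. This is a hedged illustration with assumed names, not the patent's implementation:

```python
import math

def point_box_distance(p, box_lo, box_hi):
    """Euclidean distance from point p to the closest point of the
    axis-aligned box [box_lo, box_hi] (0 if p lies inside the box)."""
    # Clamp each coordinate of p into the box's interval on that axis.
    closest = [min(max(pi, lo), hi) for pi, lo, hi in zip(p, box_lo, box_hi)]
    return math.dist(p, closest)
```

Comparing the two feature points' distances computed this way yields the smaller and larger distances used to build the direction vector.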
Illustratively, if the non-trunk portion bounding boxes include a left-hand bounding box and a right-hand bounding box, the geometric center of the left-hand bounding box is taken as the first feature point with position coordinate P_Hand_left, and the geometric center of the right-hand bounding box is taken as the second feature point with position coordinate P_Hand_right. If the first distance is less than the second distance, the direction vector is:
d = (P_Hand_right − P_Hand_left) / ||P_Hand_right − P_Hand_left||
and if the first distance is greater than the second distance, the direction vector is:
d = (P_Hand_left − P_Hand_right) / ||P_Hand_left − P_Hand_right||
If the first distance is equal to the second distance, the direction vector may be selected at random as either

d = (P_Hand_right − P_Hand_left) / ||P_Hand_right − P_Hand_left||

or

d = (P_Hand_left − P_Hand_right) / ||P_Hand_left − P_Hand_right||
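The direction vector and the bounded adjustment loop of this embodiment can be sketched as follows; `collides` and `translate` are hypothetical stand-ins for the collision test and the end-pose update, and the fixed step size is an assumption:

```python
import math

def direction_vector(p_near, p_far):
    """Unit vector pointing from the smaller-distance feature point
    toward the larger-distance feature point."""
    d = [b - a for a, b in zip(p_near, p_far)]
    n = math.sqrt(sum(c * c for c in d))
    return [c / n for c in d]

def adjust(box, direction, step, collides, translate, max_adjustments):
    """Move the box-to-be-adjusted along `direction` in fixed steps.
    Returns True once the boxes no longer collide; returns False when the
    preset number of adjustments is exceeded (the caller then falls back
    to the joint angle values of the previous moment)."""
    for _ in range(max_adjustments):
        box = translate(box, [step * c for c in direction])
        if not collides(box):
            return True
    return False
```

The `max_adjustments` bound plays the role of the preset number of adjustments, preventing the infinite adjustment loop discussed above.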
In some embodiments, before determining at least one set of collision pairs to be detected according to the preset collision relation information between the bounding boxes, the method further includes:
and obtaining preset collision relation information among the bounding boxes according to the sizes corresponding to the bounding boxes respectively and/or the movement ranges of the corresponding preset components.
In this embodiment, the preset collision relation information may include the collision possibility between each non-trunk portion bounding box and each of the other bounding boxes. At least one group of collision pairs to be detected may then be determined based on these collision possibilities.
In some embodiments, before determining, according to the target joint pose information and the spatial information, collision situations of at least one group of collision pairs to be detected when the at least one designated joint is in the target joint pose, the method further includes:
and determining at least one group of collision pairs to be detected according to the preset collision relation information between the bounding boxes.
In the embodiment of the present application, the preset collision relation information may be determined according to the size and the spatial position of the bounding box, the movement range of the corresponding preset component, and the like. The collision pairs to be detected can be stored in a table, a list or the like.
For example, Table 1 shows an exemplary arrangement of the collision pairs to be detected in some embodiments. The robot may include a trunk, a left arm, a right arm, a left leg, a right leg, a head and a neck, where the left arm includes a left upper arm, a left forearm and a left hand, the right arm includes a right upper arm, a right forearm and a right hand, the left leg includes a left thigh, a left shank and a left foot, and the right leg includes a right thigh, a right shank and a right foot. In some cases, based on the preset motion ranges of the left and right upper arms of the robot, the left and right upper arms do not collide with the trunk, so neither forms a collision pair to be detected with the trunk; and because of the size limitation of the left and right upper arms, they do not form a collision pair to be detected with each other. In addition, in some cases, such as when the robot stands and walks normally, neither the left arm nor the right arm will contact the left or right shank. Therefore, based on the sizes and spatial positions of the bounding boxes, the movement ranges of the corresponding preset components, and the like, an exemplary arrangement of the collision pairs to be detected as shown in Table 1 can be obtained, in which the bounding boxes of the two preset components corresponding to a cell marked 1 constitute a group of collision pairs to be detected.
Table 1: an exemplary arrangement of the pair of collision to be detected
(Table 1 appears as an image in the original publication: a symmetric matrix over the preset components in which a cell marked 1 indicates that the bounding boxes of the corresponding two components form a group of collision pairs to be detected.)
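As an assumed fragment only (the full Table 1 is an image in the original publication), the preset collision relation information can be stored as a symmetric table and expanded into the groups of collision pairs to be detected:

```python
def build_collision_pairs(components, relation):
    """relation[(a, b)] == 1 marks a possible collision between the bounding
    boxes of components a and b; symmetry is assumed, so each unordered pair
    is emitted once."""
    pairs = []
    for i, a in enumerate(components):
        for b in components[i + 1:]:
            if relation.get((a, b), relation.get((b, a), 0)) == 1:
                pairs.append((a, b))
    return pairs

# Hypothetical fragment of Table 1: names and entries are illustrative.
components = ["trunk", "left_forearm", "left_hand", "right_forearm", "right_hand"]
relation = {
    ("trunk", "left_hand"): 1,       # first collision pairs: trunk vs non-trunk box
    ("trunk", "right_hand"): 1,
    ("left_hand", "right_hand"): 1,  # second collision pair: two non-trunk boxes
}
pairs = build_collision_pairs(components, relation)
```

The resulting pair list can then be stored in a table or list, as noted above, and iterated over for each target joint pose.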
In the embodiments of the present application, by acquiring the spatial information of at least two bounding boxes corresponding to the robot, the components of the robot can be combined and/or divided through the bounding boxes. In some cases, regular spatial information can be obtained by presetting the shapes of the bounding boxes, so that each preset component is abstracted into a regular three-dimensional space, which reduces the amount of computation and improves efficiency in subsequent calculations. In addition, after the target joint pose information of at least one designated joint of the robot is acquired, the collision situations of at least one group of collision pairs to be detected when the designated joint is in the target joint pose are determined according to the target joint pose information and the spatial information, and whether the target joint pose of the designated joint needs to be adjusted is determined according to the collision situations. If the collision situation indicates that at least one group of collision pairs to be detected collides, the end pose of a bounding box in the colliding pair is adjusted to obtain an adjustment result. By reasonably setting the collision pairs to be detected, collisions that may occur during the motion of the robot can be estimated while invalid operations and detections are avoided. Through the embodiments of the present application, collisions that may occur between the preset components of the robot can be detected and adjusted accurately and efficiently, ensuring the stability and safety of the robot's operation.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 3 shows a block diagram of a robot control device according to an embodiment of the present application, which corresponds to the robot control method according to the above embodiment, and only shows portions related to the embodiment of the present application for convenience of description.
Referring to fig. 3, the robot controller 3 includes:
a first obtaining module 301, configured to obtain target joint pose information of at least one specified joint of the robot;
a second obtaining module 302, configured to obtain spatial information of at least two bounding boxes corresponding to the robot, where a three-dimensional space corresponding to each bounding box includes a preset component of the robot, and the preset components corresponding to the bounding boxes are different from each other;
a determining module 303, configured to determine, according to the target joint pose information and the spatial information, collision conditions of at least one group of collision pairs to be detected when the at least one designated joint is in the target joint pose, where each group of collision pairs to be detected includes two bounding boxes;
and an adjusting module 304, configured to, if the collision condition indicates that at least one group of collision pairs to be detected collides, adjust the end pose of a bounding box in the colliding pair to be detected to obtain an adjustment result.
Optionally, the types of bounding boxes include a torso-part bounding box and a non-torso-part bounding box, the collision pairs to be detected include a first collision pair to be detected and a second collision pair to be detected, the first collision pair to be detected includes one torso-part bounding box and one non-torso-part bounding box, and the second collision pair to be detected includes two non-torso-part bounding boxes;
the determining module 303 is specifically configured to:
detecting whether at least one group of first collision pairs to be detected collide or not according to the pose information and the spatial information of the target joint;
the adjusting module 304 specifically includes:
a first adjusting unit, configured to, if at least one group of first collision pairs to be detected collides, adjust the end pose of the non-trunk portion bounding box in the colliding first collision pair to be detected, so that the adjusted first collision pair to be detected does not collide;
a first detection unit, configured to detect whether at least one group of second collision pairs to be detected collides if none of the first collision pairs to be detected collides;
and a second adjusting unit, configured to, if at least one group of second collision pairs to be detected collides, adjust the end pose of a non-trunk portion bounding box in the colliding second collision pair to be detected to obtain an adjustment result.
Optionally, the second adjusting unit specifically includes:
a determining subunit, configured to, if at least one group of second collision pairs to be detected collides, determine the respective distances between the two non-trunk portion bounding boxes in the colliding second collision pair to be detected and the trunk portion bounding box;
a processing subunit, configured to take the one of the two non-trunk portion bounding boxes that is farther from the trunk portion bounding box as the non-trunk portion bounding box to be adjusted;
and a first adjusting subunit, configured to adjust the end pose of the non-trunk portion bounding box to be adjusted until the two adjusted non-trunk portion bounding boxes do not collide, or until the number of adjustments of the non-trunk portion bounding box to be adjusted exceeds a preset number of adjustments.
Optionally, the robot controller 3 further includes:
and a second determining module, configured to determine, if the two adjusted non-trunk portion bounding boxes still collide after the number of adjustments of the non-trunk portion bounding box to be adjusted exceeds the preset number of adjustments, that the target joint angle values of the at least one designated joint of the robot are each the corresponding joint angle values at the previous moment.
Optionally, the first adjusting subunit is specifically configured to:
take the feature points respectively corresponding to the two non-trunk portion bounding boxes in the colliding second collision pair to be detected as a first feature point and a second feature point;
determine a first distance of the first feature point from the trunk portion bounding box and a second distance of the second feature point from the trunk portion bounding box;
determine the smaller and the larger of the first distance and the second distance, and generate a direction vector pointing from the feature point corresponding to the smaller distance to the feature point corresponding to the larger distance;
and adjust the end pose of the non-trunk portion bounding box to be adjusted in the direction indicated by the direction vector until the two adjusted non-trunk portion bounding boxes do not collide, or until the number of adjustments exceeds the preset number of adjustments.
Optionally, the first adjusting unit specifically includes:
a projecting subunit, configured to, if at least one group of first collision pairs to be detected collides, project the two bounding boxes in the colliding first collision pair to be detected onto the three projection planes of the coordinate system corresponding to that pair to obtain a projection result;
and a second adjusting subunit, configured to adjust, according to the projection result, the end pose of the non-trunk portion bounding box in the first collision pair to be detected so that the adjusted first collision pair to be detected does not collide.
Optionally, each projection plane corresponds to one direction to be adjusted;
the second adjustment subunit is specifically configured to:
determine a target adjustment direction and a target movement distance for the non-trunk portion bounding box in the first collision pair to be detected, wherein the target adjustment direction is one of the directions to be adjusted, and the projection plane corresponding to the target adjustment direction is a target projection plane; the target movement distance is the minimum movement distance along the target adjustment direction at which the projections of the two bounding boxes in the first collision pair to be detected no longer overlap on the target projection plane, and this distance is smaller than the corresponding minimum movement distances for the other projection planes;
and adjust the end pose of the non-trunk portion bounding box in the first collision pair to be detected according to the target adjustment direction and the target movement distance, so that the adjusted first collision pair to be detected does not collide.
Optionally, the robot controller 3 further includes:
and the second adjusting module is used for adjusting the target joint pose of at least one appointed joint according to the adjusted collision pair to be detected if the adjusted collision pair to be detected does not collide.
Optionally, the robot controller 3 further includes:
and the third determining module is used for determining at least one group of collision pairs to be detected according to the preset collision relation information between the bounding boxes.
In the embodiments of the present application, by acquiring the spatial information of at least two bounding boxes corresponding to the robot, the components of the robot can be combined and/or divided through the bounding boxes. In some cases, regular spatial information can be obtained by presetting the shapes of the bounding boxes, so that each preset component is abstracted into a regular three-dimensional space, which reduces the amount of computation and improves efficiency in subsequent calculations. In addition, after the target joint pose information of at least one designated joint of the robot is acquired, the collision situations of at least one group of collision pairs to be detected when the designated joint is in the target joint pose are determined according to the target joint pose information and the spatial information, and whether the target joint pose of the designated joint needs to be adjusted is determined according to the collision situations. If the collision situation indicates that at least one group of collision pairs to be detected collides, the end pose of a bounding box in the colliding pair is adjusted to obtain an adjustment result. By reasonably setting the collision pairs to be detected, collisions that may occur during the motion of the robot can be estimated while invalid operations and detections are avoided. Through the embodiments of the present application, collisions that may occur between the preset components of the robot can be detected and adjusted accurately and efficiently, ensuring the stability and safety of the robot's operation.
As shown in fig. 4, the terminal device 4 of this embodiment includes: at least one processor 40 (only one shown in fig. 4), a memory 41, and a computer program 42 stored in the memory 41 and executable on the at least one processor 40, the steps in any of the various robot control method embodiments described above being implemented when the computer program 42 is executed by the processor 40.
The terminal device 4 may be a computing device such as a robot, a desktop computer, a notebook computer, a palmtop computer or a cloud server; when the terminal device 4 is a desktop computer, notebook computer, palmtop computer, cloud server or similar computing device, it may be coupled with the robot to control it. The terminal device may include, but is not limited to, the processor 40 and the memory 41. Those skilled in the art will appreciate that Fig. 4 is merely an example of the terminal device 4 and does not constitute a limitation of it; the terminal device may include more or fewer components than shown, combine some components, or use different components, and may, for example, also include input devices, output devices and network access devices. The input devices may include a touch pad, a fingerprint sensor (for collecting a user's fingerprint information and fingerprint direction information), a microphone, a camera and the like, and the output devices may include a display, a speaker and the like.
The processor 40 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 41 may be an internal storage unit of the terminal device 4, such as a hard disk or memory of the terminal device 4. In other embodiments, the memory 41 may also be an external storage device of the terminal device 4, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card provided on the terminal device 4. Further, the memory 41 may include both an internal storage unit and an external storage device of the terminal device 4. The memory 41 is used to store an operating system, application programs, a boot loader, data and other programs, such as the program code of the above computer program, and may also be used to temporarily store data that has been or is to be output.
In addition, although not shown, the terminal device 4 may further include network connection modules, such as a Bluetooth module, a Wi-Fi module, a cellular network module and the like, which are not described herein again.
In this embodiment, when the processor 40 executes the computer program 42 to implement the steps in any of the robot control method embodiments, by acquiring the spatial information of at least two bounding boxes corresponding to the robot, the components of the robot can be combined and/or divided through the bounding boxes. In some cases, regular spatial information can be obtained by presetting the shapes of the bounding boxes, so that each preset component is abstracted into a regular three-dimensional space, which reduces the amount of computation and improves efficiency in subsequent calculations. In addition, after the target joint pose information of at least one designated joint of the robot is acquired, the collision situations of at least one group of collision pairs to be detected when the designated joint is in the target joint pose are determined according to the target joint pose information and the spatial information, and whether the target joint pose of the designated joint needs to be adjusted is determined according to the collision situations. If the collision situation indicates that at least one group of collision pairs to be detected collides, the end pose of a bounding box in the colliding pair is adjusted to obtain an adjustment result. By reasonably setting the collision pairs to be detected, collisions that may occur during the motion of the robot can be estimated while invalid operations and detections are avoided. In this way, collisions that may occur between the preset components of the robot can be detected and adjusted accurately and efficiently, ensuring the stability and safety of the robot's operation.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps in the above method embodiments.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (11)

1. A robot control method, comprising:
acquiring target joint pose information of at least one appointed joint of the robot;
acquiring space information of at least two bounding boxes corresponding to the robot, wherein a three-dimensional space corresponding to each bounding box comprises a preset component of the robot, and the preset components corresponding to the bounding boxes are different from each other;
determining collision conditions of at least one group of collision pairs to be detected when the at least one designated joint is in the target joint pose according to the target joint pose information and the space information, wherein each group of collision pairs to be detected comprises two bounding boxes, the types of the bounding boxes comprise a trunk part bounding box and a non-trunk part bounding box, the collision pairs to be detected comprise a first collision pair to be detected and a second collision pair to be detected, the first collision pair to be detected comprises a trunk part bounding box and a non-trunk part bounding box, the second collision pair to be detected comprises two non-trunk part bounding boxes, and the types of the bounding boxes are determined according to the motion conditions of the preset components;
if the collision condition indicates that at least one group of collision pairs to be detected collide, adjusting the terminal pose of the bounding box in the collided collision pair to be detected to obtain an adjustment result;
determining the collision condition of at least one group of collision pairs to be detected when the at least one designated joint is in the target joint pose according to the target joint pose information and the spatial information, wherein the determining comprises the following steps:
detecting whether at least one group of first collision pairs to be detected collide or not according to the pose information and the spatial information of the target joint;
and if each first collision pair to be detected does not collide, detecting whether at least one group of second collision pairs to be detected collide.
2. The robot control method according to claim 1, wherein said determining, based on the target joint pose information and the spatial information, collision situations of at least one set of collision pairs to be detected when the at least one designated joint is in the target joint pose comprises:
detecting whether at least one group of first collision pairs to be detected collide or not according to the pose information and the spatial information of the target joint;
if the collision condition indicates that at least one group of collision pairs to be detected collide, adjusting the terminal pose of the bounding box in the collided collision pair to be detected to obtain an adjustment result, comprising:
if at least one group of first collision pairs to be detected collide, adjusting the terminal pose of a non-trunk part enclosing box in the collided first collision pairs to be detected, so that the adjusted first collision pairs to be detected do not collide;
and if at least one group of second collision pairs to be detected collide, adjusting the terminal pose of the non-trunk part enclosing box in the second collision pair to be detected to obtain an adjustment result.
3. The robot control method according to claim 2, wherein, if at least one group of second collision pairs to be detected collides, adjusting the end pose of the non-trunk portion bounding box in the colliding second collision pair to be detected to obtain an adjustment result comprises:
if at least one group of second collision pairs to be detected collides, determining the distance between two non-trunk part enclosure boxes in the collided second collision pairs to be detected and the trunk part enclosure boxes respectively;
one non-trunk part enclosing box which is far away from the trunk part enclosing box is taken as a non-trunk part enclosing box to be adjusted;
adjusting the terminal pose of the to-be-adjusted non-trunk part enclosure box until the two adjusted non-trunk part enclosure boxes do not collide, or until the adjusting times of the to-be-adjusted non-trunk part enclosure box exceed the preset adjusting times.
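Claim 3's selection rule (move the non-trunk box that sits farther from the trunk box, under a bounded retry count) admits a compact sketch. The callbacks `torso_dist`, `adjust_once`, and `collide` are illustrative assumptions standing in for the distance metric, the single end-pose adjustment step, and the overlap test.

```python
def adjust_second_pair(box_a, box_b, torso_dist, adjust_once, collide,
                       max_adjustments):
    """Of the two colliding non-trunk boxes, move the one farther from the
    trunk bounding box, stopping when the pair separates or when the preset
    number of adjustments is exhausted (claim 3). Returns the final boxes
    and whether separation succeeded."""
    if torso_dist(box_a) >= torso_dist(box_b):
        moving, fixed = box_a, box_b
    else:
        moving, fixed = box_b, box_a
    for _ in range(max_adjustments):
        if not collide(moving, fixed):
            break
        moving = adjust_once(moving)
    return moving, fixed, not collide(moving, fixed)
```

Keeping the box closer to the trunk fixed is a sensible design choice: moving it would risk creating a new first-type (trunk vs. non-trunk) collision.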
4. The robot control method according to claim 3, further comprising:
and if the two adjusted non-trunk portion bounding boxes still collide after the number of adjustments of the non-trunk portion bounding box to be adjusted exceeds the preset number of adjustments, determining the target joint angle value of the at least one designated joint of the robot to be the corresponding joint angle value at the previous moment.
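The fallback in claim 4 (keep the previous moment's joint angles when the bounded adjustment fails) reduces to a small guard. `try_adjust` is a hypothetical callback wrapping the capped adjustment loop; it returns the adjusted angles and a success flag.

```python
def resolve_target_angles(target_q, last_q, try_adjust):
    """Claim 4 fallback, sketched: attempt the bounded pose adjustment; if
    the colliding pair could not be separated within the preset number of
    adjustments, the robot holds the joint angle values of the previous
    moment instead of the new target."""
    adjusted_q, separated = try_adjust(target_q)
    return adjusted_q if separated else last_q
```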
5. The robot control method according to claim 3, wherein the adjusting the end pose of the non-trunk portion bounding box to be adjusted until the two adjusted non-trunk portion bounding boxes do not collide, or until the number of adjustments of the non-trunk portion bounding box to be adjusted exceeds the preset number of adjustments, comprises:
taking the feature points respectively corresponding to the two non-trunk portion bounding boxes in the colliding second collision pair to be detected as a first feature point and a second feature point;
determining a first distance from the first feature point to the trunk portion bounding box and a second distance from the second feature point to the trunk portion bounding box;
determining the smaller distance and the larger distance of the first distance and the second distance, and generating a direction vector pointing from the feature point corresponding to the smaller distance to the feature point corresponding to the larger distance;
and adjusting the end pose of the non-trunk portion bounding box to be adjusted in the direction indicated by the direction vector until the two adjusted non-trunk portion bounding boxes do not collide, or until the number of adjustments of the non-trunk portion bounding box to be adjusted exceeds the preset number of adjustments.
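The direction-vector construction in claim 5 (from the feature point nearer the trunk bounding box toward the farther one) can be written out directly; point and function names here are illustrative.

```python
import math

def adjustment_direction(p_near, p_far):
    """Unit vector pointing from the feature point whose distance to the
    trunk bounding box is smaller toward the one whose distance is larger.
    Points are 3-tuples of coordinates."""
    v = tuple(b - a for a, b in zip(p_near, p_far))
    norm = math.sqrt(sum(c * c for c in v))
    return tuple(c / norm for c in v)

def claim5_direction(p1, d1, p2, d2):
    """Order the two feature points by their distances to the trunk box
    (claim 5), then build the direction along which the non-trunk box to be
    adjusted is moved: away from the trunk, so the limbs separate without
    being pushed into the torso."""
    if d1 <= d2:
        return adjustment_direction(p1, p2)
    return adjustment_direction(p2, p1)
```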
6. The robot control method according to claim 2, wherein the adjusting, if at least one group of first collision pairs to be detected collides, the end pose of the non-trunk portion bounding box in the colliding first collision pair to be detected so that the adjusted first collision pair to be detected does not collide comprises:
if at least one group of first collision pairs to be detected collides, projecting the two bounding boxes in the colliding first collision pair to be detected onto three projection planes of a coordinate system corresponding to the colliding first collision pair to be detected, to obtain a projection result;
and adjusting, according to the projection result, the end pose of the non-trunk portion bounding box in the first collision pair to be detected so that the adjusted first collision pair to be detected does not collide.
7. The robot control method according to claim 6, wherein each projection plane corresponds to a direction to be adjusted;
and the adjusting, according to the projection result, the end pose of the non-trunk portion bounding box in the first collision pair to be detected so that the adjusted first collision pair to be detected does not collide comprises:
determining a target adjustment direction and a target movement distance corresponding to the non-trunk portion bounding box in the first collision pair to be detected, wherein the target adjustment direction is one of the directions to be adjusted, the projection plane corresponding to the target adjustment direction is a target projection plane, in the target adjustment direction the minimum movement distance at which the projections of the two bounding boxes in the first collision pair to be detected no longer overlap on the target projection plane is the target movement distance, and the target movement distance is smaller than the minimum movement distances corresponding to the projection planes other than the target projection plane;
and adjusting the end pose of the non-trunk portion bounding box in the first collision pair to be detected according to the target adjustment direction and the target movement distance, so that the adjusted first collision pair to be detected does not collide.
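One concrete reading of the projection-plane rule in claims 6 and 7, for axis-aligned boxes given as (min corner, max corner): on each coordinate plane, the smallest in-plane movement that removes the projected overlap is the smaller of the two per-axis overlaps, and the target plane is the one with the smallest such movement. This is an illustrative sketch under that axis-aligned assumption, not the patent's exact construction.

```python
def target_plane(box_a, box_b):
    """Pick the projection plane on which the projected overlap of two
    colliding axis-aligned boxes can be removed with the smallest movement
    (mirroring claim 7), returning the plane name and that distance."""
    # Per-axis overlap widths; all positive for boxes that collide in 3D.
    overlap = [min(box_a[1][k], box_b[1][k]) - max(box_a[0][k], box_b[0][k])
               for k in range(3)]
    planes = {"xy": (0, 1), "yz": (1, 2), "xz": (0, 2)}
    # Minimum in-plane movement per plane = smaller of its two axis overlaps.
    dists = {name: min(overlap[i], overlap[j])
             for name, (i, j) in planes.items()}
    best = min(dists, key=dists.get)
    return best, dists[best]
```

Moving the non-trunk box by this distance along the corresponding direction separates the projections on the target plane, which suffices to separate the boxes themselves.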
8. The robot control method according to any one of claims 1 to 7, further comprising, after obtaining the adjustment result:
and if the adjustment result indicates that the adjusted collision pair to be detected does not collide, adjusting the target joint pose of the at least one designated joint according to the adjusted collision pair to be detected.
9. A robot control apparatus, comprising:
a first acquisition module, configured to acquire target joint pose information of at least one designated joint of a robot;
a second acquisition module, configured to acquire spatial information of at least two bounding boxes corresponding to the robot, wherein the three-dimensional space corresponding to each bounding box contains a preset component of the robot, and the preset components corresponding to the bounding boxes are different from each other;
a determining module, configured to determine, according to the target joint pose information and the spatial information, collision situations of at least one group of collision pairs to be detected when the at least one designated joint is in a target joint pose, wherein each group of collision pairs to be detected comprises two bounding boxes, the types of the bounding boxes comprise a trunk portion bounding box and a non-trunk portion bounding box, the collision pairs to be detected comprise a first collision pair to be detected and a second collision pair to be detected, the first collision pair to be detected comprises one trunk portion bounding box and one non-trunk portion bounding box, the second collision pair to be detected comprises two non-trunk portion bounding boxes, and the types of the bounding boxes are determined according to motion situations of the preset components;
and an adjusting module, configured to adjust, if the collision situation indicates that at least one group of collision pairs to be detected collides, the end pose of the bounding box in the colliding collision pair to be detected to obtain an adjustment result;
wherein the determining module is specifically configured to:
detect, according to the target joint pose information and the spatial information, whether any of the at least one group of first collision pairs to be detected collides;
and if none of the first collision pairs to be detected collides, detect whether any of the at least one group of second collision pairs to be detected collides.
10. A terminal device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the robot control method according to any one of claims 1 to 8.
11. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the robot control method according to any one of claims 1 to 8.
CN201911416634.4A 2019-12-31 2019-12-31 Robot control method, robot control device and terminal equipment Active CN111113428B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911416634.4A CN111113428B (en) 2019-12-31 2019-12-31 Robot control method, robot control device and terminal equipment


Publications (2)

Publication Number Publication Date
CN111113428A CN111113428A (en) 2020-05-08
CN111113428B true CN111113428B (en) 2021-08-27

Family

ID=70506882

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911416634.4A Active CN111113428B (en) 2019-12-31 2019-12-31 Robot control method, robot control device and terminal equipment

Country Status (1)

Country Link
CN (1) CN111113428B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112060087B * 2020-08-28 2021-08-03 佛山隆深机器人有限公司 Point cloud collision detection method for robot grasping scenes
CN112245014B (en) * 2020-10-30 2023-06-02 上海微创医疗机器人(集团)股份有限公司 Medical robot, method for detecting collision of mechanical arm and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104850699A * 2015-05-19 2015-08-19 天津市天锻压力机有限公司 Anti-collision control method for transfer robots of a stamping production line
CN106166750A * 2016-09-27 2016-11-30 北京邮电大学 An improved D* dynamic obstacle avoidance path planning method for a robotic arm
CN106625662A * 2016-12-09 2017-05-10 南京理工大学 Virtual-reality-based anti-collision protection method for a live-working robotic arm
CN107414835A * 2017-08-31 2017-12-01 智造未来(北京)机器人系统技术有限公司 Robotic arm control method and manned mecha
CN107803831A * 2017-09-27 2018-03-16 杭州新松机器人自动化有限公司 An AOAAE bounding volume hierarchy (BVH) collision detection method
CN109877836A * 2019-03-13 2019-06-14 浙江大华技术股份有限公司 Path planning method and device, robotic arm controller, and readable storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9037295B2 (en) * 2008-03-07 2015-05-19 Perception Raisonnement Action En Medecine Dynamic physical constraint for hard surface emulation



Similar Documents

Publication Publication Date Title
US20220134559A1 (en) Method and apparatus for motion planning of robot, method and apparatus for path planning of robot, and method and apparatus for grasping of robot
US9827675B2 (en) Collision avoidance method, control device, and program
CN108801255B (en) Method, device and system for avoiding robot collision
CN111113428B (en) Robot control method, robot control device and terminal equipment
CN113119098B (en) Mechanical arm control method, mechanical arm control device and terminal equipment
JP6348097B2 (en) Work position and orientation calculation device and handling system
US11926052B2 (en) Robot control method, computer-readable storage medium and biped robot
CN113119104B (en) Mechanical arm control method, mechanical arm control device, computing equipment and system
EP3967461A1 (en) Information processing method, information processing system, and program
JP6563596B2 (en) Image processing apparatus, image processing method, and program
US11181972B2 (en) Image processing apparatus, image processing method, and program
US20210200224A1 (en) Method for controlling a robot and its end-portions and device thereof
JP6515828B2 (en) Interference avoidance method
US11331802B2 (en) Method for imitation of human arm by robotic arm, computer readable storage medium, and robot
CN113246145B (en) Pose compensation method and system for nuclear industry grabbing equipment and electronic device
JP2004163990A5 (en)
US20210187731A1 (en) Robotic arm control method and apparatus and terminal device using the same
US20220415094A1 (en) Method and system for estimating gesture of user from two-dimensional image, and non-transitory computer-readable recording medium
WO2017128029A1 (en) Robot control method, control device and system
CN107614208B (en) Robot control method and control equipment
US20230356392A1 (en) Robot control system, control device, and robot control method
CN117084788A (en) Method and device for determining target gesture of mechanical arm and storage medium
CN115690279A (en) Method, device, equipment and medium for acquiring virtual object joint rotation angle
CN115934636A (en) Dynamic capture data structure conversion method and device, electronic equipment and medium
CN115026808A (en) Hand-eye calibration method, hand-eye calibration system, computer equipment and storage device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231208

Address after: Room 601, 6th Floor, Building 13, No. 3 Jinghai Fifth Road, Beijing Economic and Technological Development Zone (Tongzhou), Tongzhou District, Beijing, 100176

Patentee after: Beijing Youbixuan Intelligent Robot Co.,Ltd.

Address before: 518000 16th and 22nd Floors, C1 Building, Nanshan Zhiyuan, 1001 Xueyuan Avenue, Nanshan District, Shenzhen City, Guangdong Province

Patentee before: Shenzhen Youbixuan Technology Co.,Ltd.
