CN116442221A - Robot control method and device, storage medium and electronic equipment

Info

Publication number
CN116442221A
Authority
CN
China
Prior art keywords
robot
track
target
execution process
environment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310342815.7A
Other languages
Chinese (zh)
Inventor
马世奎
王秋林
付强
黄博涵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cloudminds Shanghai Robotics Co Ltd
Original Assignee
Cloudminds Shanghai Robotics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cloudminds Shanghai Robotics Co Ltd filed Critical Cloudminds Shanghai Robotics Co Ltd
Priority to CN202310342815.7A priority Critical patent/CN116442221A/en
Publication of CN116442221A publication Critical patent/CN116442221A/en
Pending legal-status Critical Current

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1602Programme controls characterised by the control system, structure, architecture
    • B25J9/161Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Manipulator (AREA)

Abstract

The disclosure relates to a robot control method and apparatus, a storage medium, and an electronic device. The method includes: acquiring an end motion trajectory of the robot; demonstrating, in a virtual space, a first execution process of the robot executing the end motion trajectory, where the virtual space includes a virtual mapping environment synchronized with the physical environment in which the robot is located; receiving, in the virtual mapping environment, trajectory adjustment data based on the first execution process; generating a target motion trajectory according to the trajectory adjustment data and the end motion trajectory; and sending the target motion trajectory to the robot. The end motion trajectory is thus simulated in the virtual space, its correctness is verified by monitoring the first execution process, and the trajectory is adjusted via the trajectory adjustment data, which improves the success rate of robot task execution.

Description

Robot control method and device, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of automatic control, and in particular, to a method and apparatus for controlling a robot, a storage medium, and an electronic device.
Background
When a robot performs tasks in complex and changeable working environments, for example moving, fetching, and placing objects in homes or factories, traditional motion planning algorithms struggle to cover all application scenarios and to cope with dynamic changes in the working environment, and the robot's motion trajectory cannot be optimized from a traditional planning result. Moreover, different kinds of objects require different grasping poses, which traditional planning algorithms cannot handle, so the robot fails to execute the corresponding mechanical task.
Disclosure of Invention
The present disclosure aims to provide a robot control method and apparatus, a storage medium, and an electronic device, so as to solve the technical problem in the related art of a low task-execution success rate caused by inaccurate robot trajectory planning.
In order to achieve the above object, a first aspect of the present disclosure provides a robot control method, the method comprising:
acquiring an end motion trajectory of the robot;
demonstrating, in a virtual space, a first execution process of the robot executing the end motion trajectory, wherein the virtual space comprises a virtual mapping environment synchronized with the physical environment in which the robot is located;
receiving, in the virtual mapping environment, trajectory adjustment data based on the first execution process;
generating a target motion trajectory according to the trajectory adjustment data and the end motion trajectory;
and sending the target motion trajectory to the robot.
Optionally, the receiving, in the virtual mapping environment, trajectory adjustment data based on the first execution process includes:
in response to receiving a drag instruction for a movable joint of the robot in the virtual mapping environment, generating adjustment data of the movable joint according to the drag instruction;
and generating the trajectory adjustment data according to the first execution process and the adjustment data.
Optionally, the receiving, in the virtual mapping environment, trajectory adjustment data based on the first execution process includes:
receiving a trajectory abnormality instruction, wherein the trajectory abnormality instruction is sent by a user when the user determines, based on the first execution process, that the end motion trajectory is unreasonable and/or risky;
and in response to the trajectory abnormality instruction, monitoring trajectory adjustment data in the virtual mapping environment, wherein the trajectory adjustment data is sent by the user in the virtual mapping environment based on the first execution process.
Optionally, the trajectory adjustment data is sent by the user in the virtual mapping environment through a mouse device and/or a keyboard device.
Optionally, the demonstrating, in a virtual space, a first execution process of the robot executing the end motion trajectory includes:
acquiring environment target data collected and generated by the robot for the physical environment;
constructing the virtual mapping environment in the virtual space according to the environment target data;
and demonstrating the first execution process in the virtual mapping environment.
Optionally, the generating a target motion trajectory according to the trajectory adjustment data and the end motion trajectory includes:
determining an adjustment pose from the end motion trajectory according to the trajectory adjustment data, and determining a node adjustment path corresponding to the adjustment pose;
generating a target pose according to the adjustment pose and the node adjustment path;
and generating the target motion trajectory according to the end motion trajectory and the target pose.
Optionally, the generating the target motion trajectory according to the end motion trajectory and the target pose includes:
determining a starting-point pose and an end-point pose of the robot according to the end motion trajectory;
generating a first motion trajectory based on the starting-point pose and the target pose, and generating a second motion trajectory based on the target pose and the end-point pose;
and generating the target motion trajectory according to the first motion trajectory and the second motion trajectory.
Optionally, the sending the target motion trajectory to the robot includes:
demonstrating, in the virtual space, a second execution process of the robot executing the target motion trajectory;
and in response to receiving, in the virtual mapping environment, a trajectory determination instruction based on the second execution process, sending the target motion trajectory to the robot.
According to a second aspect of embodiments of the present disclosure, there is provided a robot control device, the device including:
an acquisition module, configured to acquire an end motion trajectory of the robot;
a demonstration module, configured to demonstrate, in a virtual space, a first execution process of the robot executing the end motion trajectory, wherein the virtual space comprises a virtual mapping environment synchronized with the physical environment in which the robot is located;
a receiving module, configured to receive, in the virtual mapping environment, trajectory adjustment data based on the first execution process;
a generation module, configured to generate a target motion trajectory according to the trajectory adjustment data and the end motion trajectory;
and a sending module, configured to send the target motion trajectory to the robot.
According to a third aspect of embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium having a computer program stored thereon which, when executed by a processor, implements the steps of the robot control method according to any one of the first aspect of the present disclosure.
According to a fourth aspect of embodiments of the present disclosure, there is provided an electronic device comprising:
a memory having a computer program stored thereon;
a processor, configured to execute the computer program in the memory to implement the steps of the robot control method according to any one of the first aspect of the present disclosure.
According to the above technical solution, the end motion trajectory of the robot is acquired; a first execution process of the robot executing the end motion trajectory is demonstrated in a virtual space, where the virtual space includes a virtual mapping environment synchronized with the physical environment in which the robot is located; trajectory adjustment data based on the first execution process is received in the virtual mapping environment; a target motion trajectory is generated according to the trajectory adjustment data and the end motion trajectory; and the target motion trajectory is sent to the robot. The end motion trajectory is thus simulated in the virtual space, its correctness is verified by monitoring the first execution process, and the trajectory is adjusted via the trajectory adjustment data, which improves the success rate of robot task execution.
Additional features and advantages of the present disclosure will be set forth in the detailed description which follows.
Drawings
The accompanying drawings are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification; they illustrate the disclosure and, together with the description, serve to explain it, without limiting it. In the drawings:
Fig. 1 is a flowchart illustrating a robot control method according to an exemplary embodiment.
Fig. 2 is an exemplary diagram illustrating adjustment of a robot end motion trajectory according to an exemplary embodiment.
Fig. 3 is a flowchart illustrating a method of generating a target motion trajectory according to an exemplary embodiment.
Fig. 4 is a block diagram illustrating a control apparatus of a robot according to an exemplary embodiment.
Fig. 5 is a block diagram of an electronic device, according to an example embodiment.
Detailed Description
Specific embodiments of the present disclosure are described in detail below with reference to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating and illustrating the disclosure, are not intended to limit the disclosure.
It should be noted that all actions in the present disclosure that acquire signals, information, or data are performed in compliance with the applicable data protection laws and policies of the relevant jurisdiction and with authorization from the owner of the corresponding device.
Fig. 1 is a flowchart illustrating a robot control method according to an exemplary embodiment. The method is applied to a cloud server and, as shown in Fig. 1, includes the following steps.
Step S101, acquiring an end motion trajectory of the robot.
It should be noted that this embodiment is applied to a cloud server. The cloud server exchanges data with the robot over a wireless link, obtaining the robot's operating data and environment data; the environment data describes the working conditions the robot currently faces, and the operating data describes its current operating state. The cloud server analyzes the robot's current state from these data and issues control instructions accordingly, so that the robot executes the corresponding task based on the control instructions. Because of limits on its storage capacity and on the control algorithms it can load, the robot is constrained in how it can plan motion trajectories while executing tasks, so it may be a semi-automatic robot. Under simple working conditions, the semi-automatic robot can execute tasks with its own onboard control program: for a simple cruising task, for example, it can complete cruising within a preset range using its automatic cruising program. Under more complex working conditions, when the robot cannot complete a task with its onboard program, it collects data about the current environment and working conditions to generate target data and uploads the target data to the cloud server. The cloud server plans the robot's execution process and steps from the target data, generates control instructions, and sends them to the robot, which completes the corresponding task under the cloud server's guidance. For a high-precision docking task, for example, the robot sends the target data collected in the current environment to the cloud server; the server decomposes the robot's moving route based on the target data and the docking task, generates execution instructions for several stages, and sends them to the robot step by step, so that the robot completes the docking task under the cloud server's control.
In general, a robot's task execution can be divided into three stages. In the earlier stages the robot can adjust itself based on the current working conditions, but in the terminal docking stage the control of the robot's end trajectory must meet the docking requirements, so this control demands higher precision. If the precision does not reach the preset standard, docking is likely to fail and the task cannot be completed. In this embodiment, the cloud server therefore monitors the robot's end motion trajectory and corrects it when it cannot meet the preset docking requirements. The end motion trajectory may be generated by the robot itself, using its onboard trajectory planning algorithm and the target data for the current environment, and then uploaded to the cloud server over the wireless link; the target data may include the bearing, distance, and pose of the relevant target object in the robot's current environment. Alternatively, the end motion trajectory may be generated by the cloud server with a trajectory planning algorithm from the target data uploaded by the robot. This embodiment does not limit how the end motion trajectory is acquired.
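To make the data flow concrete, the following Python sketch shows one plausible shape for the end motion trajectory payload the robot might upload over its wireless link; the EndPose and EndTrajectory structures, field names, and JSON-like schema are illustrative assumptions, not part of the disclosed method.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class EndPose:
    """One sampled pose of the robot end: when, where, and how it is oriented."""
    t: float                                  # timestamp within the trajectory, seconds
    xyz: Tuple[float, float, float]           # position in the robot base frame, metres
    quat: Tuple[float, float, float, float]   # orientation quaternion (qx, qy, qz, qw)

@dataclass
class EndTrajectory:
    """End motion trajectory exchanged between the robot and the cloud server."""
    robot_id: str
    poses: List[EndPose]

def to_payload(traj: EndTrajectory) -> dict:
    """Serialize the trajectory for upload over the wireless link (hypothetical schema)."""
    return {
        "robot_id": traj.robot_id,
        "poses": [{"t": p.t, "xyz": p.xyz, "quat": p.quat} for p in traj.poses],
    }
```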
Step S102, demonstrating, in a virtual space, a first execution process of the robot executing the end motion trajectory, wherein the virtual space comprises a virtual mapping environment synchronized with the physical environment in which the robot is located.
It should be noted that, for the cloud server to assess the robot's end motion trajectory, it must continuously monitor the robot's current working environment and judge whether the end motion trajectory is reasonable for that environment. The criteria are whether the robot would encounter obstacles while executing the end motion trajectory and whether it could complete the corresponding task with it. In this embodiment, the cloud server may simulate the robot's current physical environment in the virtual space from the target data uploaded by the robot. For example, the robot may capture images of the physical objects within a preset range with its camera, measure the depth of each imaged object with its lidar, generate the corresponding target data for the physical environment, and upload it to the server; the server then reconstructs the robot and its current environment in the virtual space from the target data and the robot's known dimensions. As the robot moves or the environment itself changes, the target data collected by the robot's camera and lidar changes continuously; the robot keeps synchronizing the collected target data to the cloud server, and the cloud server keeps simulating the robot's current environment in the virtual space, so that the virtual mapping environment stays synchronized with the physical environment.
For example, in this embodiment the virtual space may be a metaverse space. The cloud server selects, from a preset database, a preset template matching the robot's current environment according to the uploaded target data, and constructs a virtual-reality scene of the robot in the metaverse space according to the positions of the objects in the physical environment. After the virtual mapping environment corresponding to the robot's physical environment has been constructed in the virtual space, the first execution process of the end motion trajectory is demonstrated in the virtual space using the virtual robot and the end motion trajectory. The cloud server may judge the reasonableness of the robot's end motion trajectory from the first execution process demonstrated in the virtual space; it may also be connected to a display device on which the first execution process is shown, so that technicians can judge from it whether the current end motion trajectory would cause a physical collision and whether the robot can complete the current task.
Optionally, in one embodiment, step S102 includes:
acquiring environment target data collected and generated by the robot for the physical environment;
constructing the virtual mapping environment in the virtual space according to the environment target data;
and demonstrating the first execution process in the virtual mapping environment.
For example, in this embodiment the robot collects data about its current physical environment with its acquisition devices and generates the corresponding environment target data, which includes image information about the robot's current environment together with the distance and size information for each imaged object. After receiving the environment target data, the cloud server constructs, in the virtual space, the virtual mapping environment corresponding to the physical environment, and demonstrates in it the first execution process of the robot executing the end motion trajectory.
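As a minimal sketch of this construction step, the code below rebuilds a virtual mapping environment from per-object environment target data (label, position from camera image plus lidar depth, and size); all class and field names are assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class EnvironmentTarget:
    """One detected physical object: what it is, where it is, how large it is."""
    label: str
    position: Tuple[float, float, float]  # from camera image + lidar depth, robot frame
    size: Tuple[float, float, float]      # bounding-box dimensions, metres

class VirtualMappingEnvironment:
    """Virtual-space scene kept synchronized with the robot's physical environment."""
    def __init__(self) -> None:
        self.objects: Dict[str, EnvironmentTarget] = {}

    def sync(self, targets: List[EnvironmentTarget]) -> None:
        # Rebuild the scene from the latest upload, so that the virtual
        # mapping environment tracks the physical environment.
        self.objects = {t.label: t for t in targets}
```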
Step S103, receiving, in the virtual mapping environment, trajectory adjustment data based on the first execution process.
It should be noted that, after the first execution process has been demonstrated in the virtual space, if the cloud server determines from it that the robot's end motion trajectory is reasonable, it sends the end motion trajectory to the robot, which executes it to complete the corresponding task. If the cloud server determines from the first execution process that the end motion trajectory is unreasonable, meaning the robot might suffer a physical collision or be unable to perform the corresponding docking task, the end motion trajectory must be adjusted based on the first execution process. For example, the first execution process is demonstrated in the virtual space, and when a staff member judges from it that the end motion trajectory is unreasonable, the staff member can adjust it with an input tool such as a mouse and/or keyboard: the travel route of the end motion trajectory can be adjusted with the mouse, and the adjusted parameters keyed in with the keyboard. The trajectory adjustment data for the end motion trajectory is then generated from the data the staff member enters through the input tool.
Alternatively, in another embodiment, the step S103 includes:
in response to receiving a drag instruction for a movable joint of the robot in the virtual mapping environment, generating adjustment data of the movable joint according to the drag instruction;
and generating the trajectory adjustment data according to the first execution process and the adjustment data.
By way of example, in this embodiment the simulated robot in the virtual space has several movable joints at its key parts, and a staff member can adjust the spatial positions of these joints with an input tool, thereby adjusting the robot's actions in the virtual space. For instance, the staff member can adjust the robot's movement trajectory in the virtual space by dragging a movable joint, and the cloud server generates adjustment data for each movable joint from the received drag instructions. The movable joints on the same part of the robot are linked: when one joint is dragged, the robot's structure and the linkage cause the other joints on that part to move with it. The cloud server records the movement path of each joint and generates its adjustment data accordingly. The trajectory adjustment data for the end motion trajectory is then generated from the first execution process of the robot in the virtual space and the adjustment data of each movable joint.
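The linkage behaviour described above can be pictured with a small sketch: one drag instruction displaces the dragged joint, the linked joints on the same part follow, and the recorded per-joint displacements form the adjustment data. The fixed follow ratios used here are a deliberate simplification of whatever kinematic linkage a real implementation would use.

```python
from typing import Dict, Tuple

def apply_drag(linkage: Dict[str, float], dragged: str,
               delta: Tuple[float, float, float]) -> Dict[str, tuple]:
    """Return per-joint adjustment data produced by one drag instruction.

    linkage maps each movable joint on the same part to a follow ratio;
    dragged is the joint the user dragged, delta its (dx, dy, dz) displacement.
    """
    adjustments = {}
    for name, ratio in linkage.items():
        k = 1.0 if name == dragged else ratio  # linked joints follow the dragged one
        adjustments[name] = tuple(d * k for d in delta)
    return adjustments

# Example: dragging the wrist also moves the linked finger joints.
linkage = {"wrist": 1.0, "finger_base": 0.6, "finger_tip": 0.3}
print(apply_drag(linkage, "wrist", (0.0, 0.02, 0.05)))
```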
Alternatively, in another embodiment, the step S103 includes:
receiving a trajectory abnormality instruction, wherein the trajectory abnormality instruction is sent by a user when the user determines, based on the first execution process, that the end motion trajectory is unreasonable and/or risky;
in response to the trajectory abnormality instruction, monitoring trajectory adjustment data in the virtual mapping environment, where the trajectory adjustment data is sent by the user in the virtual mapping environment based on the first execution process.
By way of example, in this embodiment the first execution process is demonstrated on the server side so that staff can examine the robot's end motion trajectory and judge its reasonableness and risk. When judging the end motion trajectory from the first execution process, staff generally consider whether the robot collides with other virtual objects in the virtual mapping environment during the first execution process, and whether the robot can complete the corresponding task, as the basis for deciding whether the first execution process is risky and/or unreasonable. When a staff member determines from the first execution process that the end motion trajectory is risky and/or unreasonable, a trajectory abnormality instruction is sent to the server; after receiving it, the server keeps the end motion trajectory locally and waits for further modification instructions based on the first execution process. The staff member sends the trajectory adjustment data to the server by revising the first execution process online in the virtual mapping environment. For example, the revision may be done from a mobile terminal: the server sends the first execution process to the bound mobile terminal, the staff member revises it there, and the trajectory adjustment data generated by the revision is fed back to the server, enabling remote revision of the end motion trajectory.
Alternatively, in another embodiment, the trajectory adjustment data is sent by the user in the virtual mapping environment through a mouse device and/or a keyboard device.
For example, a staff member may send modification instructions to the server through an input device such as a keyboard and/or mouse connected to the server, and the server generates the trajectory adjustment data for the end motion trajectory from the modification instructions and the first execution process. The end motion trajectory can thus be revised online: staff can adjust it in real time based on the first execution process to generate the trajectory adjustment data.
Step S104, generating a target motion trajectory according to the trajectory adjustment data and the end motion trajectory.
The robot's end motion trajectory is a three-dimensional motion trajectory that contains the travel route of each key node of the robot in three-dimensional space. It is generally constrained by the robot's own structure, and the key nodes of the same part are linked: for example, when the end height of a robot finger is adjusted in the end motion trajectory, the structure and linkage cause all the key nodes of the hand to be raised together, completing the task of raising the fingertip. Therefore, in this embodiment, when a staff member determines through the first execution process that the end motion trajectory is unreasonable, the staff member can use the mouse to adjust, within the end motion trajectory, the position in the virtual space of the unreasonable key node of the corresponding part, so that the robot avoids the obstacle in the virtual mapping environment and completes the corresponding docking task. The robot's structure makes the other key nodes of the same part move in linkage, and the cloud server generates the trajectory adjustment data by monitoring the motion path of each key node in the virtual space. It should be noted that the trajectory adjustment data may correspond to a particular segment of the end motion trajectory; replacing that segment with the trajectory adjustment data yields the robot's target motion trajectory.
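Viewed as data, the replacement just described is a splice: the segment of the end motion trajectory judged unreasonable is cut out, and the segment derived from the trajectory adjustment data is inserted in its place. The sketch below assumes the trajectory is a list of pose samples and that the unreasonable segment has already been located as an index range; both assumptions are illustrative.

```python
from typing import List, Tuple

Pose = Tuple[float, float, float]  # position-only pose sample, for brevity

def splice_trajectory(end_traj: List[Pose], start: int, stop: int,
                      adjusted: List[Pose]) -> List[Pose]:
    """Build the target motion trajectory by replacing end_traj[start:stop]
    with the segment derived from the trajectory adjustment data."""
    return end_traj[:start] + adjusted + end_traj[stop:]

# Example: replace the two middle samples of a four-sample trajectory.
traj = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.2, 0.0, 0.0), (0.3, 0.0, 0.0)]
print(splice_trajectory(traj, 1, 3, [(0.1, 0.05, 0.0), (0.2, 0.05, 0.0)]))
```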
For example, in this embodiment the target motion trajectory may be the motion path of a certain part of the robot, or the grasping pose of the robot's palm. Fig. 2 is an exemplary diagram illustrating adjustment of a robot end motion trajectory according to an exemplary embodiment. As shown in Fig. 2, the cloud server may adjust the robot's final docking grasp pose based on the target motion trajectory: based on the first execution process, a staff member drags the robot's palm in the virtual mapping space with the mouse so that the palm's final resting position moves from position A to position B in the figure, bringing the robot closer to the target object C and making it easier for the robot to complete the task of grasping C.
Step S105, sending the target motion trajectory to the robot.
In this embodiment, after the target motion trajectory has been generated through the above steps, it is sent to the robot, and the robot executes it to complete the corresponding task.
Alternatively, in another embodiment, the step S105 includes:
demonstrating, in the virtual space, a second execution process of the robot executing the target motion trajectory;
and in response to receiving, in the virtual mapping environment, a trajectory determination instruction based on the second execution process, sending the target motion trajectory to the robot.
In this embodiment, after the target motion trajectory has been generated through the above steps, a second execution process of the robot executing the target motion trajectory is demonstrated in the virtual space, so that technicians can review the target motion trajectory. Once the review passes, a trajectory determination instruction is entered into the cloud server through the input tool, and the cloud server sends the target motion trajectory to the robot accordingly.
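A hedged sketch of this review-then-send flow follows; the demonstrate, await_instruction, and send callables stand in for whatever rendering and communication layers an actual cloud server would use, and the "confirm" token is a hypothetical encoding of the trajectory determination instruction.

```python
def review_and_dispatch(target_traj, demonstrate, await_instruction, send) -> bool:
    """Demonstrate the second execution process, then send the target motion
    trajectory to the robot only after a trajectory determination instruction."""
    demonstrate(target_traj)            # second execution process in the virtual space
    instruction = await_instruction()   # e.g. entered via keyboard/mouse
    if instruction == "confirm":
        send(target_traj)               # dispatch to the robot over the wireless link
        return True
    return False                        # otherwise, return to further adjustment
```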
According to the above technical solution, the end motion trajectory of the robot is acquired; a first execution process of the robot executing the end motion trajectory is demonstrated in a virtual space, where the virtual space includes a virtual mapping environment synchronized with the physical environment in which the robot is located; trajectory adjustment data based on the first execution process is received in the virtual mapping environment; a target motion trajectory is generated according to the trajectory adjustment data and the end motion trajectory; and the target motion trajectory is sent to the robot. The end motion trajectory is thus simulated in the virtual space, its correctness is verified by monitoring the first execution process, and the trajectory is adjusted via the trajectory adjustment data, which improves the success rate of robot task execution.
Fig. 3 is a flowchart illustrating a method for generating a target motion trajectory according to an exemplary embodiment. As shown in Fig. 3, the above step S104 includes the following steps.
Step S201, determining an adjustment pose from the end motion trajectory according to the trajectory adjustment data, and determining a node adjustment path corresponding to the adjustment pose.
In general, some segments of the end motion trajectory may be unreasonable. The adjustment pose that needs to be adjusted can therefore be determined within the end motion trajectory according to the trajectory adjustment data received by the cloud server, and the adjustment path of each key node in the adjustment pose determined by analysis. An adjustment path may be the movement path of a key node associated with a particular adjustment pose on the end motion trajectory.
Step S202, generating a target pose according to the adjustment pose and the node adjustment path.
Step S203, generating the target motion trajectory according to the end motion trajectory and the target pose.
For example, in this embodiment, the adjustment pose is adjusted along the node adjustment path to produce the target pose at that moment of the end motion trajectory. The corresponding target motion trajectory is then generated from the end motion trajectory and the target pose in accordance with the robot's motion principles.
Optionally, in one embodiment, step S203 includes:
determining a starting-point pose and an end-point pose of the robot according to the end motion trajectory;
generating a first motion trajectory based on the starting-point pose and the target pose, and generating a second motion trajectory based on the target pose and the end-point pose;
and generating the target motion trajectory according to the first motion trajectory and the second motion trajectory.
In this embodiment, the starting-point pose and the end-point pose of the robot are determined from the end motion trajectory; the first motion trajectory, from the starting-point pose to the target pose, is determined according to the robot's cooperative motion mode; the second motion trajectory is generated from the target pose and the end-point pose; and the target motion trajectory is generated by combining the first and second motion trajectories.
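To illustrate the two sub-trajectories, the sketch below uses straight-line pose interpolation, a deliberate simplification of the robot's actual cooperative motion mode, and treats poses as position vectors only.

```python
import numpy as np

def interpolate(p0: np.ndarray, p1: np.ndarray, steps: int) -> list:
    """Linear interpolation between two poses (position-only, for brevity)."""
    return [p0 + (p1 - p0) * i / (steps - 1) for i in range(steps)]

def target_trajectory(start: np.ndarray, target: np.ndarray, end: np.ndarray,
                      steps: int = 10) -> list:
    first = interpolate(start, target, steps)   # first motion trajectory
    second = interpolate(target, end, steps)    # second motion trajectory
    return first + second[1:]                   # merge, dropping the duplicate target pose

traj = target_trajectory(np.zeros(3), np.array([0.2, 0.1, 0.3]),
                         np.array([0.4, 0.0, 0.5]))
```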
In this way, the robot's end motion trajectory is adjusted based on the adjustment pose to generate the target motion trajectory, the robot can complete the corresponding task based on the target motion trajectory, and the success rate of task execution is improved.
Fig. 4 is a block diagram illustrating a robot control apparatus according to an exemplary embodiment. As shown in Fig. 4, the apparatus 100 includes: an acquisition module 110, a demonstration module 120, a receiving module 130, a generation module 140, and a sending module 150.
The acquisition module 110 is configured to acquire an end motion trajectory of the robot.
The demonstration module 120 is configured to demonstrate, in a virtual space, a first execution process of the robot executing the end motion trajectory, where the virtual space includes a virtual mapping environment synchronized with the physical environment in which the robot is located.
The receiving module 130 is configured to receive, in the virtual mapping environment, trajectory adjustment data based on the first execution process.
The generation module 140 is configured to generate a target motion trajectory according to the trajectory adjustment data and the end motion trajectory.
The sending module 150 is configured to send the target motion trajectory to the robot.
Optionally, the receiving module 130 is configured to:
generate, in response to receiving a drag instruction for a movable joint of the robot in the virtual mapping environment, adjustment data of the movable joint according to the drag instruction;
and generate the trajectory adjustment data according to the first execution process and the adjustment data.
Optionally, the receiving module 130 is configured to:
receive a trajectory abnormality instruction, wherein the trajectory abnormality instruction is sent by a user when the user determines, based on the first execution process, that the end motion trajectory is unreasonable and/or risky;
and, in response to the trajectory abnormality instruction, monitor trajectory adjustment data in the virtual mapping environment, where the trajectory adjustment data is sent by the user in the virtual mapping environment based on the first execution process.
Optionally, the trajectory adjustment data is sent by the user in the virtual mapping environment through a mouse device and/or a keyboard device.
Optionally, the demonstration module 120 is configured to:
acquire environment target data collected and generated by the robot for the physical environment;
construct the virtual mapping environment in the virtual space according to the environment target data;
and demonstrate the first execution process in the virtual mapping environment.
Optionally, the generation module 140 includes:
a determining submodule, configured to determine an adjustment pose from the end motion trajectory according to the trajectory adjustment data and to determine a node adjustment path corresponding to the adjustment pose;
a first generation submodule, configured to generate a target pose according to the adjustment pose and the node adjustment path;
and a second generation submodule, configured to generate the target motion trajectory according to the end motion trajectory and the target pose.
Optionally, the second generation submodule is configured to:
determine a starting-point pose and an end-point pose of the robot according to the end motion trajectory;
generate a first motion trajectory based on the starting-point pose and the target pose, and generate a second motion trajectory based on the target pose and the end-point pose;
and generate the target motion trajectory according to the first motion trajectory and the second motion trajectory.
Optionally, the sending module 150 is configured to:
demonstrate, in the virtual space, a second execution process of the robot executing the target motion trajectory;
and, in response to receiving, in the virtual mapping environment, a trajectory determination instruction based on the second execution process, send the target motion trajectory to the robot.
The specific manner in which the modules of the apparatus in the above embodiments perform their operations has been described in detail in the method embodiments and will not be elaborated here.
Fig. 5 is a block diagram of an electronic device 500 according to an exemplary embodiment. For example, the electronic device 500 may be provided as a server. Referring to Fig. 5, the electronic device 500 includes a processor 522 (of which there may be one or more) and a memory 532 for storing computer programs executable by the processor 522. The computer program stored in the memory 532 may include one or more modules, each corresponding to a set of instructions. Further, the processor 522 may be configured to execute the computer program to perform the robot control method described above.
In addition, the electronic device 500 may further include a power component 526 and a communication component 550; the power component 526 may be configured to manage the power of the electronic device 500, and the communication component 550 may be configured to enable wired or wireless communication for the electronic device 500. The electronic device 500 may also include an input/output (I/O) interface 558 and may run an operating system stored in the memory 532.
In another exemplary embodiment, a computer-readable storage medium is also provided, comprising program instructions which, when executed by a processor, implement the steps of the robot control method described above. For example, the non-transitory computer-readable storage medium may be the memory 532 containing the program instructions, which are executable by the processor 522 of the electronic device 500 to perform the robot control method.
In another exemplary embodiment, a computer program product is also provided, comprising a computer program executable by a programmable apparatus; the computer program has code portions for performing the robot control method described above when executed by the programmable apparatus.
The preferred embodiments of the present disclosure have been described in detail above with reference to the accompanying drawings, but the present disclosure is not limited to the specific details of these embodiments. Various simple modifications may be made to the technical solutions of the present disclosure within the scope of its technical concept, and all such simple modifications fall within its protection scope.
In addition, the specific features described in the above embodiments may be combined in any suitable manner, provided there is no contradiction.
Moreover, the various embodiments of the present disclosure may be combined arbitrarily, as long as the combination does not depart from the idea of the present disclosure; such combinations shall likewise be regarded as content disclosed herein.

Claims (11)

1. A method of controlling a robot, the method comprising:
acquiring an end motion trajectory of the robot;
demonstrating, in a virtual space, a first execution process of the robot executing the end motion trajectory, wherein the virtual space comprises a virtual mapping environment synchronized with the physical environment in which the robot is located;
receiving, in the virtual mapping environment, trajectory adjustment data based on the first execution process;
generating a target motion trajectory according to the trajectory adjustment data and the end motion trajectory;
and sending the target motion trajectory to the robot.
2. The control method according to claim 1, wherein the receiving, in the virtual mapping environment, trajectory adjustment data based on the first execution process comprises:
in response to receiving a drag instruction for a movable joint of the robot in the virtual mapping environment, generating adjustment data of the movable joint according to the drag instruction;
and generating the trajectory adjustment data according to the first execution process and the adjustment data.
3. The control method according to claim 1, wherein the receiving, in the virtual mapping environment, trajectory adjustment data based on the first execution process comprises:
receiving a trajectory abnormality instruction, wherein the trajectory abnormality instruction is sent by a user when the user determines, based on the first execution process, that the end motion trajectory is unreasonable and/or risky;
and in response to the trajectory abnormality instruction, monitoring trajectory adjustment data in the virtual mapping environment, wherein the trajectory adjustment data is sent by the user in the virtual mapping environment based on the first execution process.
4. The control method according to claim 3, wherein the trajectory adjustment data is sent by the user in the virtual mapping environment via a mouse device and/or a keyboard device.
5. The control method according to claim 1, wherein the demonstrating, in a virtual space, a first execution process of the robot executing the end motion trajectory comprises:
acquiring environment target data collected and generated by the robot for the physical environment;
constructing the virtual mapping environment in the virtual space according to the environment target data;
and demonstrating the first execution process in the virtual mapping environment.
6. The control method according to claim 1, wherein the generating a target motion trajectory according to the trajectory adjustment data and the end motion trajectory comprises:
determining an adjustment pose from the end motion trajectory according to the trajectory adjustment data, and determining a node adjustment path corresponding to the adjustment pose;
generating a target pose according to the adjustment pose and the node adjustment path;
and generating the target motion trajectory according to the end motion trajectory and the target pose.
7. The control method according to claim 6, wherein the generating the target motion trajectory according to the end motion trajectory and the target pose comprises:
determining a starting-point pose and an end-point pose of the robot according to the end motion trajectory;
generating a first motion trajectory based on the starting-point pose and the target pose, and generating a second motion trajectory based on the target pose and the end-point pose;
and generating the target motion trajectory according to the first motion trajectory and the second motion trajectory.
8. The control method according to claim 1, wherein the sending the target motion trajectory to the robot comprises:
demonstrating, in the virtual space, a second execution process of the robot executing the target motion trajectory;
and in response to receiving, in the virtual mapping environment, a trajectory determination instruction based on the second execution process, sending the target motion trajectory to the robot.
9. A control device for a robot, the device comprising:
an acquisition module, configured to acquire an end motion trajectory of the robot;
a demonstration module, configured to demonstrate, in a virtual space, a first execution process of the robot executing the end motion trajectory, wherein the virtual space comprises a virtual mapping environment synchronized with the physical environment in which the robot is located;
a receiving module, configured to receive, in the virtual mapping environment, trajectory adjustment data based on the first execution process;
a generation module, configured to generate a target motion trajectory according to the trajectory adjustment data and the end motion trajectory;
and a sending module, configured to send the target motion trajectory to the robot.
10. A non-transitory computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the steps of the method according to any one of claims 1-8.
11. An electronic device, comprising:
a memory having a computer program stored thereon;
a processor, configured to execute the computer program in the memory to implement the steps of the method according to any one of claims 1-8.
CN202310342815.7A 2023-03-31 2023-03-31 Robot control method and device, storage medium and electronic equipment Pending CN116442221A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310342815.7A CN116442221A (en) 2023-03-31 2023-03-31 Robot control method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310342815.7A CN116442221A (en) 2023-03-31 2023-03-31 Robot control method and device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN116442221A (en) 2023-07-18

Family

ID=87133023

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310342815.7A Pending CN116442221A (en) 2023-03-31 2023-03-31 Robot control method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN116442221A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination