US20220019939A1 - Method and system for predicting motion-outcome data of a robot moving between a given pair of robotic locations - Google Patents

Method and system for predicting motion-outcome data of a robot moving between a given pair of robotic locations

Info

Publication number
US20220019939A1
US20220019939A1 (application US17/295,541, also published as US 2022/0019939 A1)
Authority
US
United States
Prior art keywords
data
robotic
motion
outcome
locations
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
US17/295,541
Inventor
Moshe Hazan
Maxim Zemsky
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Siemens Industry Software SRL
Siemens Industry Software Inc
Original Assignee
SIEMENS INDUSTRY SOFTWARE Ltd
Siemens Industry Software SRL
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SIEMENS INDUSTRY SOFTWARE Ltd, Siemens Industry Software SRL filed Critical SIEMENS INDUSTRY SOFTWARE Ltd
Priority to US17/295,541 priority Critical patent/US20220019939A1/en
Assigned to SIEMENS INDUSTRY SOFTWARE S.R.L. reassignment SIEMENS INDUSTRY SOFTWARE S.R.L. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ZEMSKY, Maxim, HAZAN, MOSHE
Publication of US20220019939A1 publication Critical patent/US20220019939A1/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1602Programme controls characterised by the control system, structure, architecture
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1628Programme controls characterised by the control loop
    • B25J9/163Programme controls characterised by the control loop learning, adaptive, model based, rule based expert control
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/02Knowledge representation; Symbolic representation
    • G06N5/022Knowledge engineering; Knowledge acquisition
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • B25J9/1666Avoiding collision or forbidden zones
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1671Programme controls characterised by programming, planning systems for manipulators characterised by simulation, either to verify existing program or to create and verify new program, CAD/CAM oriented, graphic oriented programming systems
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/39Robotics, robotics to robotics hand
    • G05B2219/39298Trajectory learning
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/40Robotics, robotics mapping to robotics vision
    • G05B2219/40499Reinforcement learning algorithm

Definitions

  • the present disclosure is directed, in general, to computer-aided design, visualization, and manufacturing (“CAD”) systems, product lifecycle management (“PLM”) systems, product data management (“PDM”) systems, and similar systems, that manage data for products and other items (collectively, “Product Data Management” systems or PDM systems). More specifically, the disclosure is directed to production environment simulation.
  • robotic simulation platforms and systems include, but are not limited to, Computer Assisted Robotic (“CAR”) tools, Process Simulate (a product of Siemens PLM software suite), robotic simulations tools, and other systems and virtual stations for industrial robotic simulation.
  • In order to improve the accuracy of generic robotic MOtion Planner modules (MOPs), robot vendors agreed on a Realistic Robot Simulation (“RRS”) protocol, so that Robot Controller Software (RCS) modules may now be supplied providing, among other functionalities, motion-outcome prediction. Vendor-specific Virtual Robot Controllers (“VRC”) are other examples of modules simulating robotic motion trajectories and motion-outcome data.
  • outcome data from robots' motions include but are not limited to:
  • swept volume data, e.g. useful in industrial automation for detecting potential robotic collisions;
  • cycle time data, e.g. useful in industrial automation for scheduling and for efficiently operating and/or maintaining the robotic process;
  • energy consumption data, e.g. useful in industrial automation for achieving cost reductions and/or meeting environmental regulations;
  • robot's joint movements data, e.g. useful in industrial automation for optimizing maintenance;
  • intermediate robotic locations data, e.g. useful in industrial automation for predicting a robot trajectory.
  • The robot's swept volume mentioned above represents the entire 3 Dimensional (“3D”) space generated by the motion of the robot and its attached objects, e.g. tools, along their path during a specific robotic program or robotic operation. Determining the robot's swept volume is particularly useful for collision detection purposes, in order to avoid collisions between the robot and other robotic cell entities such as pieces of equipment, objects and/or humans.
  • For example, in some scenarios, optimization algorithms are required to find an optimal collision-free path optimizing one or more parameters, e.g. robotic cycle time, energy, joint movements, robot trajectory, robot intermediate locations and/or other robotic motion outcomes.
  • In such scenarios, the optimization algorithms are then required to run a large number of different simulations, gather their motion-outcomes, process their results and provide the optimal solutions.
  • a method includes receiving data on a given pair of robotic locations as input data.
  • the method includes applying a function trained by a machine learning algorithm to the input data, wherein a related robotic motion-outcome data is generated as output.
  • the method further includes providing as output data the robotic motion-outcome data.
  • A method includes receiving a plurality of robotic location pair data as input training data.
  • The method includes receiving a plurality of motion-outcome data as output training data, wherein the plurality of motion-outcome data is related to the plurality of robotic location pair data.
  • The method further includes training by a machine learning algorithm a function based on the input training data and output training data.
  • The method further includes providing the trained function as a motion-outcome prediction module for predicting as output data motion-outcome data of a robot moving between a corresponding pair of robotic locations.
  • The method further includes predicting motion-outcome data by applying the motion-outcome prediction module to a given robotic location pair as input data.
  • Various disclosed embodiments include methods, systems, and computer readable mediums for providing a function trained by a machine learning algorithm.
  • the method includes receiving a plurality of robotic location pair data as input training data.
  • The method further includes receiving a plurality of motion-outcome data as output training data, wherein the plurality of motion-outcome data is related to the plurality of robotic location pair data.
  • the method further includes training by a machine learning algorithm a function based on the input training data and on the output training data.
  • the method further includes providing the trained function for predicting motion-outcome data of a robot moving between a corresponding pair of robotic locations.
  • FIG. 1 illustrates a block diagram of a data processing system in which an embodiment can be implemented.
  • FIG. 2 is a flow chart schematically illustrating training and using of a robotic motion-outcome prediction module in accordance with disclosed embodiments.
  • FIG. 3 is a graph schematically illustrating how to increase training data in accordance with disclosed embodiments.
  • FIG. 4 is a block diagram schematically illustrating a plurality of robotic motion-outcome prediction modules in accordance with disclosed embodiments.
  • FIG. 5 is a drawing schematically illustrating an example of a swept volume of a virtual robot.
  • FIG. 6 is a drawing schematically illustrating a cloud of points representing a space around a robot in accordance with disclosed embodiments.
  • FIG. 7 is a drawing schematically illustrating highlighted points representing a swept volume of a robot in accordance with disclosed embodiments.
  • FIG. 8 illustrates a flowchart for predicting a motion-outcome data of a moving robot in accordance with disclosed embodiments.
  • FIGS. 1 through 8 discussed below, and the various embodiments used to describe the principles of the present disclosure in this patent document are by way of illustration only and should not be construed in any way to limit the scope of the disclosure. Those skilled in the art will understand that the principles of the present disclosure may be implemented in any suitably arranged device. The numerous innovative teachings of the present application will be described with reference to exemplary non-limiting embodiments.
  • Embodiments enable prediction of motion-outcome data of a specific robot without the need to run a robot's simulation.
  • Embodiments avoid incurring some of the performance drawbacks of running robotic simulations.
  • Embodiments enable accurate prediction of motion-outcome data of a specific robot without launching an RCS service.
  • Embodiments enable fast data communication not depending on external client-server communication of RRS.
  • Embodiments enable time savings.
  • Embodiments enable accurate prediction of motion-outcome data of a specific robot having no RCS modules.
  • Embodiments enable accurate prediction of motion-outcome data of a broad spectrum of robots of a broad spectrum of robot's vendors.
  • Embodiments enable accurate prediction of motion-outcome data of robots having complex kinematics as, for example, delta/spider robots or other next generation robots.
  • Embodiments may be used for robot validation or optimization purposes.
  • Embodiments enable robotic collision detection validations.
  • Embodiments enable upgrading quality and/or performances of robotic planning and simulation applications.
  • Embodiments enable time savings: there is no need to perform inverse kinematic calculations, to run motion simulations, to communicate with external motion modules, and/or to calculate a swept volume during a robotic operation.
  • Advantageously, the provided machine learning motion-outcome prediction module delivers results quickly, independently of the robotic motion complexity.
  • Embodiments enable upgrading performances of kinematic/robotic simulation applications (e.g. “Robot Viewer”, “Swept Volume”, “Automatic Path Planner” and others) due to the time savings involved in avoiding running robotic simulations, whilst still enabling the calculation of the swept volume for collision detection validations.
  • Embodiments enable upgrading the quality of several existing kinematic/robotic static applications (e.g. “Robot Smart Place”, “Reach Tests”) that today do not use a robotic simulation for time-saving reasons and therefore just jump the robot to the end target location and check collisions only there.
  • Advantageously, with embodiments, static robotic applications can perform collision detection too.
  • FIG. 1 illustrates a block diagram of a data processing system 100 in which an embodiment can be implemented, for example as a PDM system particularly configured by software or otherwise to perform the processes as described herein, and in particular as each one of a plurality of interconnected and communicating systems as described herein.
  • the data processing system 100 illustrated can include a processor 102 connected to a level two cache/bridge 104 , which is connected in turn to a local system bus 106 .
  • Local system bus 106 may be, for example, a peripheral component interconnect (PCI) architecture bus.
  • Also connected to local system bus in the illustrated example are a main memory 108 and a graphics adapter 110 .
  • the graphics adapter 110 may be connected to display 111 .
  • Other peripherals, such as local area network (LAN)/Wide Area Network/Wireless (e.g. WiFi) adapter 112, may also be connected to local system bus 106.
  • Expansion bus interface 114 connects local system bus 106 to input/output (I/O) bus 116 .
  • I/O bus 116 is connected to keyboard/mouse adapter 118 , disk controller 120 , and I/O adapter 122 .
  • Disk controller 120 can be connected to a storage 126, which can be any suitable machine usable or machine readable storage medium, including but not limited to nonvolatile, hard-coded type mediums such as read only memories (ROMs) or erasable, electrically programmable read only memories (EEPROMs), magnetic tape storage, and user-recordable type mediums such as floppy disks, hard disk drives and compact disk read only memories (CD-ROMs) or digital versatile disks (DVDs), and other known optical, electrical, or magnetic storage devices.
  • Also connected to I/O bus 116 in the example shown is audio adapter 124, to which speakers (not shown) may be connected for playing sounds.
  • Keyboard/mouse adapter 118 provides a connection for a pointing device (not shown), such as a mouse, trackball, trackpointer, touchscreen, etc.
  • The hardware illustrated in FIG. 1 may vary for particular implementations.
  • For example, other peripheral devices, such as an optical disk drive and the like, also may be used in addition to or in place of the hardware illustrated.
  • the illustrated example is provided for the purpose of explanation only and is not meant to imply architectural limitations with respect to the present disclosure.
  • a data processing system in accordance with an embodiment of the present disclosure can include an operating system employing a graphical user interface.
  • the operating system permits multiple display windows to be presented in the graphical user interface simultaneously, with each display window providing an interface to a different application or to a different instance of the same application.
  • a cursor in the graphical user interface may be manipulated by a user through the pointing device. The position of the cursor may be changed and/or an event, such as clicking a mouse button, generated to actuate a desired response.
  • One of various commercial operating systems, such as a version of Microsoft Windows™, a product of Microsoft Corporation located in Redmond, Wash., may be employed if suitably modified.
  • the operating system is modified or created in accordance with the present disclosure as described.
  • LAN/WAN/Wireless adapter 112 can be connected to a network 130 (not a part of data processing system 100 ), which can be any public or private data processing system network or combination of networks, as known to those of skill in the art, including the Internet.
  • Data processing system 100 can communicate over network 130 with server system 140 , which is also not part of data processing system 100 , but can be implemented, for example, as a separate data processing system 100 .
  • FIG. 2 is a flow chart schematically illustrating training and using of a robotic motion-outcome prediction module in accordance with disclosed embodiments.
  • motion-outcome training data 211 , 212 , 213 , 214 , 215 , 216 can be received from different types of data sources 201 , 202 , 203 , 204 , 205 , 206 .
  • robot indicates an industrial robot, any other type of robot, or any type of kinematic machine.
  • a first type of training data sources 201 , 202 , 203 provide data retrieved from real motion of the specific physical robot and a second type of data sources 204 , 205 , 206 provide data retrieved from simulated motion of a virtual representation of the specific physical robot, herein called virtual robot.
  • hybrid data sources combining the above may be provided.
  • Robot vendors may advantageously provide input and output training data derived from different data sources.
  • Examples of physical data sources 201, 202, 203 include position tracking systems like a camera 201, robotic Application Program Interfaces (“APIs”) 202, and Internet of Things (“IoT”) sensors 203 providing robotic motion-outcome data during the motion of the specific physical industrial robot while moving between a pair of robotic locations.
  • the pair of robotic locations are a source location and a target location, for example a desired start point and end point of a robot's Tool Center Point Frame (“TCPF”) or, in other embodiments, of a robot's Tool Center Point (“TCP”).
  • Simulated data sources 204 , 205 , 206 include RCSs 204 , VRCs 205 and MOPs 206 .
  • Such three modules 204 , 205 , 206 simulate the robotic motion-outcome data during the motion of the virtual industrial robot between the robotic location pair.
  • the virtual robot (not shown) is loaded in a robotic simulation system where training motion data can be generated.
  • training data from virtual or real data are prepared.
  • motion training data from simulated data sources are generated by running a plurality of robot simulation processes, preferably in the background and/or in parallel. Conveniently, the required calculations may be done without graphics.
  • robots may run a plurality of simulations from reachable source locations to reachable target locations, hereinafter named “robotic location pairs”, each time with different robotic motion instructions and parameters.
  • robotic motion instructions include, but are not limited to, motion type, configuration, speed, acceleration and zone.
  • Data inputs to the simulations are data on position and/or robotic instructions on the robotic location pairs.
  • source location data may be given as 3D location coordinates with current robotic instructions data and target location data may be given as 3D location coordinates with its desired robotic instructions data.
  • location-pair position data may be given as current and target robot poses (e.g. respective joint values).
  • The motion-outcome training data 211, 212, 213, 214, 215, 216 received from one or more of the data sources 201, 202, 203, 204, 205, 206 are processed in a training-data processing module 220 so as to be organized into input and output training data for training a function, via a Machine Learning (“ML”) algorithm, based on the input training data and the output training data.
  • the input and output training data may be organized as follows:
  • input training data, x tuples: robotic location pair data (e.g. positions and robotic instructions), for example source location position (X, Y, Z, Rx, Ry, Rz) with its current robotic instructions (e.g. configuration, speed), and target location position (X, Y, Z, Rx, Ry, Rz) with its desired robotic instructions (e.g. motion type, configuration, speed, acceleration, zone);
  • output training data, y tuples: motion-outcome data (e.g. swept volume data, cycle-time data, energy consumption data, robot's joint movement data, intermediate robotic locations data and/or other) of the robot motion between the location pair, as in the sketch below.
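  • The following sketch illustrates one possible way to assemble such x and y tuples as flat numeric rows. It is an illustrative assumption only: the exact field order, the numeric encoding of the motion type and the made-up example values are not prescribed by the disclosure.

```python
# Sketch (assumption): flattening a robotic location pair into an x tuple and
# its motion-outcome data into a y tuple, as numeric rows for ML training.

MOTION_TYPE = {"PTP": 0, "LIN": 1, "CIRC": 2}   # hypothetical numeric encoding

def make_x_tuple(src_pose, src_cfg, src_speed,
                 tgt_pose, tgt_motion_type, tgt_cfg, tgt_speed, tgt_accel, tgt_zone):
    """Flatten a location pair (poses X,Y,Z,Rx,Ry,Rz plus robotic instructions)."""
    return (list(src_pose) + [src_cfg, src_speed]
            + list(tgt_pose)
            + [MOTION_TYPE[tgt_motion_type], tgt_cfg, tgt_speed, tgt_accel, tgt_zone])

def make_y_tuple(cycle_time, energy, swept_volume_flags):
    """Flatten motion-outcome data (here: cycle time, energy, swept-volume flags)."""
    return [cycle_time, energy] + list(swept_volume_flags)

# Made-up example values:
x = make_x_tuple((800, 0, 1200, 0, 90, 0), 1, 0.5,
                 (950, 300, 900, 0, 90, 45), "LIN", 1, 1.0, 0.8, 10)
y = make_y_tuple(cycle_time=1.42, energy=37.5, swept_volume_flags=[0, 1, 1, 0])
```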
  • the data set represents values at specific times during motion.
  • the data sets may preferably have the same time reference values.
  • the time references might be the sampling of the robot's state at fixed intervals.
  • Where the sampling times are predefined, the time of each sampled value can be deduced from its index in the y tuple.
  • In embodiments where values are sampled at non-predefined time intervals, the sampling time values may preferably be included in the y tuples together with the other intermediate location values, as in the sketch below.
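  • A minimal sketch of the two time-referencing variants described above, assuming a start time and a fixed sampling interval that are purely illustrative.

```python
# Sketch (assumption): relating sampled intermediate values to time stamps.
T_S = 0.0    # motion start time (illustrative)
DT = 0.05    # fixed sampling interval in seconds (illustrative)

def time_of_sample(index):
    """Predefined sampling: the time of a sample follows from its y-tuple index."""
    return T_S + index * DT

# Non-predefined sampling: store the time stamp next to each sampled pose so that
# the y tuple carries both the intermediate location values and their times.
sampled = [(0.00, (0, 10, 20, 0, 45, 0)),
           (0.07, (2, 12, 21, 0, 45, 1))]
y_with_times = [value for t, pose in sampled for value in (t, *pose)]
```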
  • the terms “pair of robotic locations” and “robotic location pair” indicate robotic source location and robotic target location referring respectively to a start of robotic motion and to an end of robotic motion during the robotic motion defining the desired robotic input bounds.
  • other robotic input bounds may be added.
  • For example, in the case of circular motion of the robot, a third location between the start point and the target point, usually called the circular point, is needed to define the robotic input bound. Therefore, in the case of circular motion, the x tuples describe a robotic location triplet: the location pair plus a third location, the circular location.
  • For ML training, the number of columns, denoted as M, should preferably be the same for all locations.
  • For y tuples describing intermediate robotic locations, the lists of intermediate poses may however have different lengths N0, N1, … (each at most M), due to the different numbers of robot poses necessary to reach a given target location departing from a given source location.
  • To obtain data sets with the same number of columns M, the last pose may be duplicated as many times as necessary to get to M columns, as in the padding sketch below.
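  • A minimal padding sketch under the assumption that each intermediate pose is a sextet of joint values; the trajectory values are made up.

```python
# Sketch (assumption): pad a list of intermediate poses to a fixed width M by
# duplicating the last pose, so that all y tuples have the same column count.
def pad_poses(poses, m):
    """Repeat the final pose until the list contains exactly m poses."""
    if not poses:
        raise ValueError("at least one pose is required")
    padded = list(poses)
    while len(padded) < m:
        padded.append(padded[-1])
    return padded[:m]

# A trajectory reaching the target in 3 poses, padded to M = 5 poses:
short_trajectory = [(0, 10, 20, 0, 45, 0), (5, 12, 25, 0, 40, 2), (8, 15, 30, 0, 35, 5)]
print(pad_poses(short_trajectory, 5))   # last pose repeated twice
```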
  • the processed data 221 entering ML training module 230 may preferably be in a numerical data format.
  • In embodiments where motion parameters are in a non-numerical data format, e.g. in a string format or in other non-numerical formats, a “hash of configuration” is generated to hold a numeric configuration index versus the configuration string, so that a numerical data form can be used.
  • In other embodiments, a new column is added for each configuration, holding a “0” or “1” value, so that in each line only one single “1” value is present and all the other values are “0”.
  • When a new configuration not present in the map is encountered, the new configuration is conveniently inserted in the table with a new index. Both encodings are illustrated in the sketch below.
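  • A brief sketch of the two encodings just described; the configuration strings are invented for illustration.

```python
# Sketch (assumption): turning configuration strings into numerical features.

# Variant 1: "hash of configuration" - a running numeric index per config string.
config_index = {}

def index_of(config):
    """Return the numeric index of a configuration, adding unseen ones to the map."""
    if config not in config_index:
        config_index[config] = len(config_index)   # new configuration, new index
    return config_index[config]

# Variant 2: one extra 0/1 column per configuration, a single "1" per line.
def one_hot(config, known_configs):
    return [1 if config == known else 0 for known in known_configs]

print(index_of("J5+ J6-"))                            # 0
print(index_of("J5- J6-"))                            # 1
print(one_hot("J5- J6-", ["J5+ J6-", "J5- J6-"]))     # [0, 1]
```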
  • the robotic pose of the source location is duplicated until reaching the desired M columns.
  • the processed data 221 are elaborated to train a function with a Machine Learning (ML) algorithm, preferably with a machine learning algorithm selected from the group of supervised learning algorithms.
  • The learned function f_ML is such that it maps, at its best, the x variable tuples into the y variable tuples.
  • the goal of the used ML algorithm is to approximate the mapping function so well that for a given input data (x tuple), the output variables (y tuple) can be predicted for that given input data.
  • the trained function data 231 are used to generate a prediction module 240 for predicting motion-outcome data of the specific robot.
  • The motion-outcome data prediction module 240 is used to predict the motion-outcome data of this specific robot, so that when receiving as input a robotic location pair 250, in the form of x variable tuple values, it provides as resulting output 260 the y variable tuple values describing the motion-outcome data of the specific robot moving between the inputted robotic location pair 250, as in the training-and-prediction sketch below.
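  • A minimal training-and-prediction sketch. The regressor choice is an assumption made for illustration; the disclosure only requires a function trained by a (preferably supervised) machine learning algorithm. The tiny inline data set and field layout are made up purely so the snippet runs end to end.

```python
# Sketch (assumption): training f_ML on x/y tuples and using it as prediction
# module 240. A multi-output random forest is one possible supervised choice.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Each x row: flattened location-pair data; each y row: flattened motion outcomes.
X_train = np.array([[800, 0, 1200, 0, 90, 0, 1, 0.5, 950, 300, 900, 0, 90, 45, 1, 1, 1.0, 0.8, 10],
                    [820, 10, 1180, 0, 85, 5, 1, 0.6, 940, 310, 910, 0, 88, 40, 1, 1, 0.9, 0.7, 10]])
Y_train = np.array([[1.42, 37.5, 0, 1, 1, 0],
                    [1.38, 36.1, 0, 1, 0, 0]])

f_ml = RandomForestRegressor(n_estimators=200, random_state=0)
f_ml.fit(X_train, Y_train)                  # ML training module 230

# Prediction: for a new robotic location pair 250, predict its motion outcome 260.
x_query = X_train[:1] + 5.0                 # a nearby, made-up location pair
y_predicted = f_ml.predict(x_query)         # predicted motion-outcome tuple values
```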
  • the motion-outcome data prediction module 240 may be used as a stand-alone module, e.g. a cloud service for robotic planning and validations.
  • the motion prediction module may be used as a stand-alone module by a virtual simulation system.
  • the motion prediction module may be embedded 240 within a virtual simulation system.
  • In “hybrid configurations”, the prediction module 240 can be used, standalone or embedded, in conjunction with one or more motion planning modules, for example prediction modules of generic motion planners, RCS modules, and others.
  • source locations other than the original main source location may be used as exemplified in the embodiment of FIG. 3 .
  • One or more intermediate robot locations can be captured along the motion trajectory, from the physical or virtual robot, and used to form additional location pairs for training purposes.
  • Multiple training data sets may be generated from a single virtual robot's motion or a single tracked physical robot's motion.
  • different training data sets can be generated by using different intermediate locations as start locations.
  • the target location is the original target of the tracked movement for all these sets.
  • input data for the start location are included in input training data—tuple x.
  • a conversion formatting table is generated.
  • the conversion formatting table maps the original format of the received training data into a numerical format for the x,y tuples suitable for machine learning, e.g. for training and/or usage purposes.
  • FIG. 3 is a graph schematically illustrating how to increase training data in accordance with disclosed embodiments.
  • An original main robotic location pair 301 , 302 is illustrated, comprising the original main source location 301 and the original main target location 302 .
  • a plurality of intermediate robotic locations 303 , 304 , 305 are collected along the motion trajectory of the robot's TCPF (not shown).
  • the robotic locations 301 , 302 , 303 , 304 , 305 are nodes of three directed graphs 310 , 311 , 312 each representing a robotic motion from a source node to a target node.
  • the original main graph 310 goes from the main source node 301 to main target node 302 .
  • The other two generated directed graphs 311, 312, obtained by adding the corresponding edges 311, 312, go from intermediate nodes 303, 304 to the end node 302.
  • other graphs may be generated.
  • The amount of training data sets can thus be advantageously increased by adding intermediate locations between the location pair along the robot's TCPF trajectory, as in the augmentation sketch below.
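  • A short sketch of this augmentation idea under simplifying assumptions: each captured location along one tracked motion becomes the source of an additional training pair whose target is the original motion target; how the corresponding outcome is sliced out of the tracked data (`outcome_from`) is left abstract.

```python
# Sketch (assumption): build extra training pairs from intermediate locations,
# mirroring the directed graphs of FIG. 3 (source 301 and intermediates such as
# 303, 304 all pointing to the original target 302).
def augment_pairs(locations, outcome_from):
    """locations: [source, intermediate..., target] captured along one trajectory.

    Returns one (start, target, outcome) training sample per possible start node:
    the original pair plus one pair per intermediate location.
    """
    target = locations[-1]
    return [(start, target, outcome_from(i))
            for i, start in enumerate(locations[:-1])]
```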
  • FIG. 4 is a block diagram schematically illustrating a plurality of robotic motion-outcome prediction modules in accordance with disclosed embodiments.
  • a robotic application 401 simulates the motion-outcome behavior of one or more specific robots of one or more specific robot vendors with a plurality of ML motion-outcome prediction modules 412 , 422 .
  • a first robot 410 may be a specific model of an ABB robot and the second robot 420 may be a specific model of a KUKA robot.
  • The virtual simulation system predicts the motion-outcome behavior of the specific robot 410, 420 by inputting the data on a given robotic location pair 411, 421 into the corresponding specific ML motion prediction module 412, 422 and by receiving the ML predicted motion-outcome 413, 423 of the specific robot during its motion between the inputted robotic location pair 411, 421.
  • Example Embodiment Algorithm: the Motion-Outcome Data is Swept-Volume Data
  • the below described example embodiment algorithm refers to the embodiment where the robotic motion-outcome data is swept volume data of a specific robot.
  • The skilled person can easily apply the teachings of this specific example embodiment to other embodiments with other robotic motion-outcome data, such as cycle-time data, energy consumption data, robot's joint movement data, intermediate robotic locations data and other robotic motion-outcome data.
  • FIG. 5 is a drawing schematically illustrating a swept volume of a robot in accordance with disclosed embodiments.
  • the robotic swept volume 500 is generated by the robot 501 with its attached tool during its motion from a source location 502 to a final target location 503 .
  • FIG. 6 is a drawing schematically illustrating a cloud of points representing a 3D space around a robot (not shown) in accordance with disclosed embodiments.
  • Each point 610, hereby illustrated with a sphere-shaped sub-volume, is a unit of a volume 600 of the 3D space around the robot.
  • the volume unit 610 may be represented with a point, with a cube-shaped sub-volume or with other types of shape sub-elements.
  • FIG. 7 is a drawing schematically illustrating highlighted points representing a swept volume of the robot in the 3D space around the robot.
  • the collection of the highlighted points 710 represents the swept volume of the robot (not shown).
  • the exemplary algorithm embodiment includes the following steps.
  • the robot's swept volume is calculated without running a robot's simulation.
  • The x tuple may further include differential information between the robot tool frame and the current TCPF.
  • The data on the location pair in the x tuple may be given relative to the robot base, as in a regular downloaded robotic program.
  • The quantity of points used for training the ML module may conveniently be reduced by marking only the outside points wrapping the robot's swept volume and by ignoring the points inside the swept volume, as in the occupancy sketch below.
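  • A compact sketch of the point-cloud representation of FIGS. 6-7: the 3D space around the robot is discretized into volume units and the y tuple marks the units swept during the motion. Grid size, resolution and the occupancy test are illustrative assumptions.

```python
# Sketch (assumption): encode a robot's swept volume as a flat 0/1 vector over a
# regular grid of volume units (the highlighted points of FIG. 7).
import numpy as np

GRID = 20    # 20 x 20 x 20 volume units around the robot (illustrative)
RES = 0.1    # edge length of each unit in metres (illustrative)

def swept_volume_flags(poses, robot_occupies):
    """Mark 1 for every volume unit occupied by the robot at any pose of the motion.

    `robot_occupies(pose, center)` is an assumed geometric test returning True if
    the robot (with its attached tool) covers the unit centered at `center`.
    """
    flags = np.zeros((GRID, GRID, GRID), dtype=np.uint8)
    for pose in poses:                                   # intermediate poses
        for idx in np.ndindex(GRID, GRID, GRID):
            center = (np.array(idx) + 0.5) * RES
            if robot_occupies(pose, center):
                flags[idx] = 1
    return flags.ravel()    # swept-volume part of the y tuple

# As noted above, training size can be reduced by keeping only the outer,
# "wrapping" units and dropping units strictly inside the swept volume.
```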
  • Embodiments include predicting motion-outcome data of a robot moving between a given pair of robotic locations, comprising: receiving a plurality of robotic location pair data as input training data; receiving a plurality of motion-outcome data as output training data, wherein the plurality of motion-outcome data is related to the plurality of robotic location pair data; training by a machine learning algorithm a function based on the input training data and output training data; providing the trained function as a motion-outcome prediction module for predicting as output data motion-outcome data of a robot moving between a corresponding pair of robotic locations; and predicting motion-outcome data by applying the motion-outcome prediction module to a given robotic location pair as input data.
  • Embodiments include a method for predicting motion-outcome data of a robot moving between a given pair of robotic locations, comprising: receiving training data of motion trajectories of the robot for a plurality of robotic location pairs; processing the training data so as to obtain x tuples and y tuples for machine learning purposes; wherein the x tuples describe the robotic location pair and the y tuples describe one or more intermediate robotic locations at specific time stamps during the motion of the robot between the location pair; learning from the processed data a function mapping the x tuples into the y tuples so as to generate a motion prediction module for the robot; for a given robotic location pair, predicting the robotic motion-outcome data between the given pair by getting the corresponding intermediate locations resulting from the motion prediction module.
  • FIG. 8 illustrates a flowchart 800 of a method for predicting a robotic motion-outcome of a specific robot moving between a given pair of robotic locations in accordance with disclosed embodiments. Such method can be performed, for example, by system 100 of FIG. 1 described above, but the “system” in the process below can be any apparatus configured to perform a process as described.
  • a function trained by a machine learning algorithm is applied to the input data, wherein a related robotic motion-outcome data is generated as output data.
  • the robotic motion-outcome data is provided as output data.
  • The motion-outcome data is selected from the group consisting of energy consumption data; cycle time data; robotic swept volume data; joint movement data; intermediate robotic locations data; other types of motion-outcome data; and any data set comprising any combination of the above data.
  • a conversion formatting table is generated for mapping the original format of the received input data into a numerical format suitable for applying the trained function to the input data.
  • the x tuples comprise one or more information pieces selected from the group consisting of: information on positions of the robotic location pair; information on robotic instructions at the robotic location pair; and/or information on a differential position between a robotic tool tip point and a robotic tool frame point.
  • information on positions of the robotic location pair of the x tuples are given as relative to a given reference frame of the robot.
  • the y tuples comprise information on position of the one or more intermediate locations and/or information on robotic instructions of the one or more intermediate robotic locations.
  • The x tuple describing the location pair comprises at least information on the positions of the location pair (e.g. poses, positions and/or directions of the location pair) and, optionally, it may additionally comprise information on the robotic motion at the location pair (e.g. speed, acceleration, tension, state and/or other robotic motion related information/instructions at the location pair).
  • the x tuple comprises the minimum information required for describing the desired motion planning of the specific robot.
  • position information of the location pair and/or of the intermediate locations may conveniently be given as relative to a given reference frame of the specific robot.
  • the given reference frame of the specific robot may be the robot base frame as it is typically the case with downloaded robotic programs.
  • Different base positions of specific robots can thus be flexibly supported by the ML prediction module; a minimal base-frame transform sketch follows.
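  • A short sketch of expressing a location position relative to the robot base frame, using a plain homogeneous transform; the world pose of the base and the example location are made-up values.

```python
# Sketch (assumption): convert a world-frame location into the robot base frame,
# as is typically the case with downloaded robotic programs.
import numpy as np

def to_base_frame(T_world_base, p_world):
    """Express a world-frame point in the robot base frame (4x4 homogeneous transform)."""
    p = np.append(np.asarray(p_world, dtype=float), 1.0)    # homogeneous coordinates
    return (np.linalg.inv(T_world_base) @ p)[:3]

# Robot base placed 2 m along world X, no rotation; location at (2.8, 0.3, 1.2):
T_world_base = np.eye(4)
T_world_base[0, 3] = 2.0
print(to_base_frame(T_world_base, [2.8, 0.3, 1.2]))   # -> [0.8 0.3 1.2]
```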
  • the x tuple describing the location pair comprises robotic tool-related information.
  • robot tool-related information may comprise robot's tool type and/or tool differential position information.
  • the differential tool position information may preferably be the delta between the robot's tool frame and the robot's TCPF frame.
  • different types of robotic tools/TCPFs can be flexibly supported by the ML prediction module and the related kinematic impact is then taken into account.
  • embodiments wherein different robot's tools are used in different location pairs can be flexibly supported by the ML prediction module, e.g. for one location pair robot's tool A is used and for another location pair robot's tool B is used.
  • The information describing the location position may be given in the form of spatial coordinates describing the robot's tip position independently of the robot type, or it may be given as robot poses (e.g. via robot's joint values). In other embodiments, the location position information may be given in other formats that may or may not be robot specific.
  • machine usable/readable or computer usable/readable mediums include: nonvolatile, hard-coded type mediums such as read only memories (ROMs) or erasable, electrically programmable read only memories (EEPROMs), and user-recordable type mediums such as floppy disks, hard disk drives and compact disk read only memories (CD-ROMs) or digital versatile disks (DVDs).

Abstract

Systems and a method for predicting motion-outcome data of a robot moving between a given pair of robotic locations. Data on a given pair of robotic locations are received as input data. A function trained by a machine learning algorithm is applied to the input data, wherein a related robotic motion-outcome data is generated as output data. The robotic motion-outcome data is provided as output data.

Description

    CROSS-REFERENCE TO RELATED APPLICATION
  • This application claims the priority, under 35 U.S.C. § 119, of U.S. application Ser. No. 16/196,156, filed Nov. 20, 2018; the prior application is herewith incorporated by reference in its entirety.
  • TECHNICAL FIELD
  • The present disclosure is directed, in general, to computer-aided design, visualization, and manufacturing (“CAD”) systems, product lifecycle management (“PLM”) systems, product data management (“PDM”) systems, and similar systems, that manage data for products and other items (collectively, “Product Data Management” systems or PDM systems). More specifically, the disclosure is directed to production environment simulation.
  • BACKGROUND OF THE DISCLOSURE
  • In industrial automation, applications performing robotic simulations are used for getting outcome data of robotic motions for validation and optimization purposes.
  • Examples of robotic simulation platforms and systems include, but are not limited to, Computer Assisted Robotic (“CAR”) tools, Process Simulate (a product of Siemens PLM software suite), robotic simulations tools, and other systems and virtual stations for industrial robotic simulation.
  • In order to improve the accuracy level of generic robotic MOtion Planner modules (MOPs), robotic vendors agreed on a Realistic Robot Simulation (“RRS”) protocol so that Robot Controller Software (RCS) modules may now be supplied for providing, among other functionalities, motion outcome prediction functionalities. Vendor-specific Virtual Robot Controllers (“VRC”) are other examples of modules simulating robotic motion trajectories and motion-outcome data.
  • Examples of outcome data from robots' motions include but are not limited to:
  • swept volume data; e.g. useful in industrial automation for detecting potential robotic collisions;
  • cycle time data; e.g. useful in industrial automation for scheduling and for efficiently operating and/or maintaining the robotic process;
  • energy consumption data; e.g. useful in industrial automation for achieving cost reductions and/or meeting environmental regulations;
  • robot's joint movements data; e.g. useful in industrial automation for optimizing maintenance;
  • intermediate robotic locations data; e.g. useful in industrial automation for predicting a robot trajectory.
  • The robot's swept volume mentioned above represents the entire 3 Dimensional (“3D”) space generated by the motion of the robot and its attached objects, e.g. tools, along their path during a specific robotic program or robotic operation. Determining the robot's swept volume is particularly useful for collision detection purposes in order to avoid collisions between the robots and other robotic cell entities like pieces of equipment, objects and/or humans.
  • Unfortunately, running robotic simulations is typically a very time-consuming task. This is even more true in robotic simulations for collision detection validations and optimizations. For example, in some scenarios, optimization algorithms are required to find an optimal collision-free path optimizing one or more parameters, e.g. robotic cycle time, energy, joint movements, robot trajectory, robot intermediate locations and/or other robotic motion outcomes. In such scenarios, the optimization algorithms are then required to run a large number of different simulations, gather their motion-outcomes, process their results and provide the optimal solutions.
  • Therefore, improved techniques for predicting motion outcomes of a specific robot are desirable.
  • SUMMARY OF THE DISCLOSURE
  • Various disclosed embodiments include methods, systems, and computer readable mediums for predicting motion-outcome data of a robot moving between a given pair of robotic locations. A method includes receiving data on a given pair of robotic locations as input data. The method includes applying a function trained by a machine learning algorithm to the input data, wherein a related robotic motion-outcome data is generated as output. The method further includes providing as output data the robotic motion-outcome data.
  • A method includes receiving a plurality of robotic location pair data as input training data. The method includes receiving a plurality of motion-outcome data as output training data, wherein the plurality of motion-outcome data is related to the plurality of robotic location pair data. The method further includes training by a machine learning algorithm a function based on the input training data and output training data. The method further includes providing the trained function as a motion-outcome prediction module for predicting as output data motion-outcome data of a robot moving between a corresponding pair of robotic locations. The method further includes predicting motion-outcome data by applying the motion-outcome prediction module to a given robotic location pair as input data.
  • Various disclosed embodiments include methods, systems, and computer readable mediums for providing a function trained by a machine learning algorithm. The method includes receiving a plurality of robotic location pair data as input training data. The method further includes receiving a plurality of motion-outcome data as output training data, wherein the plurality of motion-outcome data is related to the plurality of robotic location pair data. The method further includes training by a machine learning algorithm a function based on the input training data and on the output training data. The method further includes providing the trained function for predicting motion-outcome data of a robot moving between a corresponding pair of robotic locations.
  • The foregoing has outlined rather broadly the features and technical advantages of the present disclosure so that those skilled in the art may better understand the detailed description that follows. Additional features and advantages of the disclosure will be described hereinafter that form the subject of the claims. Those skilled in the art will appreciate that they may readily use the conception and the specific embodiment disclosed as a basis for modifying or designing other structures for carrying out the same purposes of the present disclosure. Those skilled in the art will also realize that such equivalent constructions do not depart from the spirit and scope of the disclosure in its broadest form.
  • Before undertaking the DETAILED DESCRIPTION below, it may be advantageous to set forth definitions of certain words or phrases used throughout this patent document: the terms “include” and “comprise,” as well as derivatives thereof, mean inclusion without limitation; the term “or” is inclusive, meaning and/or; the phrases “associated with” and “associated therewith,” as well as derivatives thereof, may mean to include, be included within, interconnect with, contain, be contained within, connect to or with, couple to or with, be communicable with, cooperate with, interleave, juxtapose, be proximate to, be bound to or with, have, have a property of, or the like; and the term “controller” means any device, system or part thereof that controls at least one operation, whether such a device is implemented in hardware, firmware, software or some combination of at least two of the same. It should be noted that the functionality associated with any particular controller may be centralized or distributed, whether locally or remotely. Definitions for certain words and phrases are provided throughout this patent document, and those of ordinary skill in the art will understand that such definitions apply in many, if not most, instances to prior as well as future uses of such defined words and phrases. While some terms may include a wide variety of embodiments, the appended claims may expressly limit these terms to specific embodiments.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • For a more complete understanding of the present disclosure, and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, wherein like numbers designate like objects, and in which:
  • FIG. 1 illustrates a block diagram of a data processing system in which an embodiment can be implemented.
  • FIG. 2 is a flow chart schematically illustrating training and using of a robotic motion-outcome prediction module in accordance with disclosed embodiments.
  • FIG. 3 is a graph schematically illustrating how to increase training data in accordance with disclosed embodiments.
  • FIG. 4 is a block diagram schematically illustrating a plurality of robotic motion-outcome prediction modules in accordance with disclosed embodiments.
  • FIG. 5 is a drawing schematically illustrating an example of a swept volume of a virtual robot.
  • FIG. 6 is a drawing schematically illustrating a cloud of points representing a space around a robot in accordance with disclosed embodiments.
  • FIG. 7 is a drawing schematically illustrating highlighted points representing a swept volume of a robot in accordance with disclosed embodiments.
  • FIG. 8 illustrates a flowchart for predicting a motion-outcome data of a moving robot in accordance with disclosed embodiments.
  • DETAILED DESCRIPTION
  • FIGS. 1 through 8, discussed below, and the various embodiments used to describe the principles of the present disclosure in this patent document are by way of illustration only and should not be construed in any way to limit the scope of the disclosure. Those skilled in the art will understand that the principles of the present disclosure may be implemented in any suitably arranged device. The numerous innovative teachings of the present application will be described with reference to exemplary non-limiting embodiments.
  • Previous techniques for predicting motion-outcome data of a specific robot moving between a given pair of robotic locations have some drawbacks. The embodiments disclosed herein provide numerous technical benefits, including but not limited to the following examples.
  • Embodiments enable prediction of motion-outcome data of a specific robot without the need to run a robot's simulation.
  • Embodiments avoid incurring some of the performance drawbacks of running robotic simulations.
  • Embodiments enable accurate prediction of motion-outcome data of a specific robot without launching an RCS service.
  • Embodiments enable fast data communication not depending on external client-server communication of RRS.
  • Embodiments enable time savings.
  • Embodiments enable accurate prediction of motion-outcome data of a specific robot having no RCS modules.
  • Embodiments enable accurate prediction of motion-outcome data of a broad spectrum of robots of a broad spectrum of robot's vendors.
  • Embodiments enable accurate prediction of motion-outcome data of robots having complex kinematics as, for example, delta/spider robots or other next generation robots.
  • Embodiments may be used for robot validation or optimization purposes.
  • Embodiments enable robotic collision detection validations.
  • Embodiments enable upgrading quality and/or performances of robotic planning and simulation applications.
  • Embodiments enable time savings: there is no need to perform inverse kinematic calculations, to run motion simulations, to communicate with external motion modules, and/or to calculate a swept volume during a robotic operation. Advantageously, with embodiments, the provided machine learning motion-outcome prediction module delivers results quickly, independently of the robotic motion complexity.
  • Embodiments enable upgrading performances of kinematic/robotic simulation applications (e.g. like “Robot Viewer”, “Swept Volume”, “Automatic Path Planner” and other) due to time savings involved in avoiding running robotic simulations whilst still enabling the calculation of the swept volume for collision detection validations.
  • Embodiments enable upgrading the quality of several existing kinematic/robotic static applications (e.g. “Robot Smart Place”, “Reach Tests”) that today do not use a robotic simulation for time-saving reasons and therefore just jump the robot to the end target location and check collisions only there. Advantageously, with embodiments, static robotic applications can perform collision detection too.
  • FIG. 1 illustrates a block diagram of a data processing system 100 in which an embodiment can be implemented, for example as a PDM system particularly configured by software or otherwise to perform the processes as described herein, and in particular as each one of a plurality of interconnected and communicating systems as described herein. The data processing system 100 illustrated can include a processor 102 connected to a level two cache/bridge 104, which is connected in turn to a local system bus 106. Local system bus 106 may be, for example, a peripheral component interconnect (PCI) architecture bus. Also connected to local system bus in the illustrated example are a main memory 108 and a graphics adapter 110. The graphics adapter 110 may be connected to display 111.
  • Other peripherals, such as local area network (LAN)/Wide Area Network/Wireless (e.g. WiFi) adapter 112, may also be connected to local system bus 106. Expansion bus interface 114 connects local system bus 106 to input/output (I/O) bus 116. I/O bus 116 is connected to keyboard/mouse adapter 118, disk controller 120, and I/O adapter 122. Disk controller 120 can be connected to a storage 126, which can be any suitable machine usable or machine readable storage medium, including but not limited to nonvolatile, hard-coded type mediums such as read only memories (ROMs) or erasable, electrically programmable read only memories (EEPROMs), magnetic tape storage, and user-recordable type mediums such as floppy disks, hard disk drives and compact disk read only memories (CD-ROMs) or digital versatile disks (DVDs), and other known optical, electrical, or magnetic storage devices.
  • Also connected to I/O bus 116 in the example shown is audio adapter 124, to which speakers (not shown) may be connected for playing sounds. Keyboard/mouse adapter 118 provides a connection for a pointing device (not shown), such as a mouse, trackball, trackpointer, touchscreen, etc.
  • Those of ordinary skill in the art will appreciate that the hardware illustrated in FIG. 1 may vary for particular implementations. For example, other peripheral devices, such as an optical disk drive and the like, also may be used in addition or in place of the hardware illustrated. The illustrated example is provided for the purpose of explanation only and is not meant to imply architectural limitations with respect to the present disclosure.
  • A data processing system in accordance with an embodiment of the present disclosure can include an operating system employing a graphical user interface. The operating system permits multiple display windows to be presented in the graphical user interface simultaneously, with each display window providing an interface to a different application or to a different instance of the same application. A cursor in the graphical user interface may be manipulated by a user through the pointing device. The position of the cursor may be changed and/or an event, such as clicking a mouse button, generated to actuate a desired response.
  • One of various commercial operating systems, such as a version of Microsoft Windows™, a product of Microsoft Corporation located in Redmond, Wash. may be employed if suitably modified. The operating system is modified or created in accordance with the present disclosure as described.
  • LAN/WAN/Wireless adapter 112 can be connected to a network 130 (not a part of data processing system 100), which can be any public or private data processing system network or combination of networks, as known to those of skill in the art, including the Internet. Data processing system 100 can communicate over network 130 with server system 140, which is also not part of data processing system 100, but can be implemented, for example, as a separate data processing system 100.
  • FIG. 2 is a flow chart schematically illustrating training and using of a robotic motion-outcome prediction module in accordance with disclosed embodiments.
  • For an industrial robot (not shown) of a specific type and vendor, motion-outcome training data 211, 212, 213, 214, 215, 216 can be received from different types of data sources 201, 202, 203, 204, 205, 206.
  • As used herein, the term robot indicates an industrial robot, any other type of robot, or any type of kinematic machine.
  • A first type of training data sources 201, 202, 203 provide data retrieved from real motion of the specific physical robot and a second type of data sources 204, 205, 206 provide data retrieved from simulated motion of a virtual representation of the specific physical robot, herein called virtual robot. In embodiments, hybrid data sources combining the above may be provided. Robot vendors may advantageously provide input and output training data derived from different data sources.
  • Examples of physical data sources 201, 202, 203 include position tracking systems like a camera 201, robotic Application Program Interfaces (“APIs”) 202, and Internet of Things (“IoT”) sensors 203 providing robotic motion-outcome data during the motion of the specific physical industrial robot while moving between a pair of robotic locations. The pair of robotic locations are a source location and a target location, for example a desired start point and end point of a robot's Tool Center Point Frame (“TCPF”) or, in other embodiments, of a robot's Tool Center Point (“TCP”).
  • Simulated data sources 204, 205, 206 include RCSs 204, VRCs 205 and MOPs 206. Such three modules 204, 205, 206 simulate the robotic motion-outcome data during the motion of the virtual industrial robot between the robotic location pair.
  • In embodiments, the virtual robot (not shown) is loaded in a robotic simulation system where training motion data can be generated.
  • In embodiments, training data from virtual or real data are prepared.
  • In embodiments, motion training data from simulated data sources are generated by running a plurality of robot simulation processes, preferably in the background and/or in parallel. Conveniently, the required calculations may be done without graphics.
  • Each process may run a plurality of simulations from reachable source locations to reachable target locations, hereinafter named “robotic location pairs”, each time with different robotic motion instructions and parameters. Examples of robotic motion instructions, hereinafter named “robotic instructions” include, but are not limited to, motion type, configuration, speed, acceleration and zone.
  • Data inputs to the simulations are data on position and/or robotic instructions on the robotic location pairs. For example, in embodiments, source location data may be given as 3D location coordinates with current robotic instructions data and target location data may be given as 3D location coordinates with its desired robotic instructions data. In other embodiments, location-pair position data may be given as current and target robot poses (e.g. respective joint values).
  • During each single simulation, a large amount of training data may preferably be gathered.
  • In embodiments, the motion-outcome training data 211, 212, 213, 214, 215, 216 received from one or more of the data sources 201, 202, 203, 204, 205, 206 are processed in a training-data processing module 220 so as to be organized into input and output training data for training a function, based on the input training data and the output training data, via a Machine Learning (“ML”) algorithm.
  • In an exemplary embodiment, the input and output training data may be organized as follows (a short illustrative sketch follows this list):
  • input training data, x tuples: robotic location pair data (e.g. position and robotic instructions), e.g. source location position (X,Y,Z,Rx,Ry,Rz) with its current robotic instructions (configuration, speed), target location (X,Y,Z,Rx,Ry,Rz) with its desired robotic instructions (e.g. motion type, configuration, speed, acceleration, zone);
  • output training data, y tuples: motion-outcome data (e.g. swept volume data, cycle-time data, energy consumption data, robot's joint movement data, intermediate robotic locations data and/or other data) of the robot motion between the location pair.
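  • By way of illustration only, the following is a minimal Python sketch of how one x tuple and one y tuple could be assembled from a single motion according to the organization above; the field names (e.g. "xyz_rxryrz", "config_index") are hypothetical placeholders and not terms used by this disclosure.

```python
import numpy as np

def build_x_tuple(source, target):
    """Flatten a robotic location pair (positions plus robotic instructions) into one x tuple."""
    return np.array([
        *source["xyz_rxryrz"],      # source position (X, Y, Z, Rx, Ry, Rz)
        source["config_index"],     # numeric configuration index (see hash of configurations)
        source["speed"],
        *target["xyz_rxryrz"],      # target position (X, Y, Z, Rx, Ry, Rz)
        target["motion_type"],
        target["config_index"],
        target["speed"],
        target["acceleration"],
        target["zone"],
    ], dtype=float)

def build_y_tuple(outcome):
    """Flatten a motion outcome, e.g. cycle time followed by the sampled joint poses."""
    poses = np.asarray(outcome["joint_poses"], dtype=float)   # shape (n_samples, 6)
    return np.concatenate(([outcome["cycle_time"]], poses.ravel()))
```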
  • In embodiments, when the output training data describe the intermediate robotic locations (e.g. the list of sextets of joint values, i.e. robotic pose values (j1, j2, j3, j4, j5, j6)) or when the output training data describe the robot's joint movement data, the data set represents values at specific times during the motion. In embodiments, the data sets may preferably have the same time reference values. For example, for a motion starting at time TS and ending at the latest at time TT, the time references might be the sampling of the robot's state at fixed intervals. In embodiments where the sampling times are predefined, the time of each sampled value can be deduced from its index in the y tuple. In other embodiments, where values may be sampled at non-predefined time intervals, the sampling time values may preferably be included in the y tuples together with the other intermediate location values.
  • As used herein, the terms “pair of robotic locations” and “robotic location pair” indicate a robotic source location and a robotic target location, referring respectively to the start and the end of the robotic motion, and defining the desired robotic input bounds.
  • In embodiments, other robotic input bounds may be added. For example, in case of circular motion of the robot, a third location between the start point and the target point, usually called the circular point, is needed to define the robotic input bound. Therefore, in case of circular motion, the x tuples describe a robotic location triplet: the location pair plus a third location, the circular location.
  • For ML training, the number of columns, denoted as M, should preferably be the same for all locations. For y tuples describing intermediate robotic locations, there might however be cases in which the lists of intermediate poses have different lengths N0, N1, . . . , each smaller than M, due to the different numbers of robot poses necessary to reach a given target location departing from a given source location. In embodiments, to obtain data sets with the same number of columns M for intermediate locations, the last pose may be duplicated as many times as necessary to get to M columns.
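  • The duplication of the last pose can be sketched as follows (an illustrative Python snippet, assuming each pose is a sextet of joint values and M is a multiple of six):

```python
import numpy as np

def pad_poses_to_m_columns(poses, m):
    """Duplicate the last 6-joint pose until the flattened pose list has exactly m columns."""
    poses = np.asarray(poses, dtype=float).reshape(-1, 6)
    target_rows = m // 6
    if len(poses) < target_rows:
        pad = np.repeat(poses[-1:], target_rows - len(poses), axis=0)
        poses = np.vstack([poses, pad])
    return poses[:target_rows].ravel()    # fixed-length segment of the y tuple
```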
  • In embodiments, the processed data 221 entering ML training module 230 may preferably be in a numerical data format. In embodiments where motion parameters are in a non-numerical data format, e.g. a string format or other non-numerical formats, a “hash of configuration” is generated holding a numeric configuration index versus the configuration string so that a numerical data form can be used. In other embodiments, a new column is added for each configuration, holding a “0” or “1” value, so that each row contains exactly one “1” and all other values are “0” (i.e. a one-hot encoding).
  • Illustrated below is a simplified example of a map of robotic configuration strings and the corresponding index:
  • TABLE 1
    Example of hash of configurations.
    Configuration string    Index
    J5 + J6 − OH−           1
    J5 − J6 + OH−           2
    J5 − J6 + OH+           3
    J5 + J6 − OH+           4
  • In embodiments, when a new configuration not present in the map is encountered, the new configuration is conveniently inserted in the table with a new index.
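  • A minimal sketch of such a hash of configurations, written in Python for illustration only, could look as follows; it assigns the next free index whenever a new configuration string is encountered, as described above:

```python
class ConfigurationHash:
    """Maps robotic configuration strings (e.g. "J5+J6-OH-") to numeric indexes."""
    def __init__(self):
        self._index = {}

    def encode(self, config_string):
        # Insert unseen configurations with a new index, mirroring Table 1.
        if config_string not in self._index:
            self._index[config_string] = len(self._index) + 1
        return self._index[config_string]
```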
  • In embodiments, if the specific robot cannot reach a desired target location from a desired source location, the robotic pose of the source location is duplicated until reaching the desired M columns.
  • The processed data 221 are used to train a function with a Machine Learning (ML) algorithm, preferably a machine learning algorithm selected from the group of supervised learning algorithms. The learned function fML maps the x variable tuples to the y variable tuples as accurately as possible.
  • The goal of the ML algorithm is to approximate the mapping function so well that, for a given input (x tuple), the corresponding output variables (y tuple) can be predicted.
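  • As a non-limiting illustration, the training step could be sketched with a generic supervised multi-output regressor; the particular estimator below is an assumption made for the sketch, not a choice prescribed by this disclosure:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def train_motion_outcome_function(x_tuples, y_tuples):
    """Learn a function fML mapping x tuples to y tuples from the processed training data."""
    f_ml = RandomForestRegressor(n_estimators=200, random_state=0)
    f_ml.fit(np.asarray(x_tuples), np.asarray(y_tuples))
    return f_ml

# Usage: y_predicted = f_ml.predict(x_new.reshape(1, -1)) for a new location pair x_new.
```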
  • The trained function data 231 are used to generate a prediction module 240 for predicting motion-outcome data of the specific robot.
  • The motion-outcome data prediction module 240 is used to predict the motion-outcome data of this specific robot so that, when receiving as input a robotic location pair 250 in the form of x variable tuple values, it provides as resulting output 260 the y variable tuple values describing the motion-outcome data of the specific robot moving between the inputted robotic location pair 250.
  • In embodiments, the motion-outcome data prediction module 240 may be used as a stand-alone module, e.g. a cloud service for robotic planning and validations. In other embodiments, the motion prediction module may be used as a stand-alone module by a virtual simulation system. In other embodiments, the motion prediction module 240 may be embedded within a virtual simulation system. In other embodiments, in the form of “hybrid configurations”, the prediction module 240 can be used, standalone or embedded, in conjunction with one or more motion planning modules, for example prediction modules of generic motion planners, RCS modules, and others.
  • In embodiments, to increase the amount of training data sets, source locations other than the original main source location may be used, as exemplified in the embodiment of FIG. 3. At least one intermediate robot location can be captured along the motion trajectory, from the physical or virtual robot, and used as the source of an additional location pair for training purposes.
  • Multiple training data sets may be generated from a single virtual robot's motion or a single tracked physical robot's motion. For the same original target location, different training data sets can be generated by using different intermediate locations as start locations. In embodiments, the target location is the original target of the tracked movement for all these sets. For each generated set, the input data for the start location are included in the input training data (tuple x).
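  • A minimal sketch of this data augmentation is given below, assuming caller-supplied encoders such as the hypothetical build_x_tuple and build_y_tuple helpers of the earlier sketch:

```python
def expand_training_sets(intermediate_locations, target_location, build_x, build_y):
    """Turn one tracked motion into several (x, y) training sets: each intermediate
    location becomes an alternative source location for the same original target."""
    trajectory = list(intermediate_locations) + [target_location]
    sets = []
    for i, start in enumerate(trajectory[:-1]):
        remaining_motion = trajectory[i:]       # poses from this start up to the target
        sets.append((build_x(start, target_location), build_y(remaining_motion)))
    return sets
```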
  • In embodiments, a conversion formatting table is generated. The conversion formatting table maps the original format of the received training data into a numerical format for the x,y tuples suitable for machine learning, e.g. for training and/or usage purposes.
  • FIG. 3 is a graph schematically illustrating how to increase training data in accordance with disclosed embodiments.
  • An original main robotic location pair 301, 302 is illustrated, comprising the original main source location 301 and the original main target location 302. Between the main location pair, a plurality of intermediate robotic locations 303, 304, 305 are collected along the motion trajectory of the robot's TCPF (not shown). For any pair of source location (first or intermediate) and target location (the original motion target), data are collected as a list of robot poses versus delta times, preferably including the corresponding robotic motion parameters.
  • In FIG. 3, the robotic locations 301, 302, 303, 304, 305 are nodes of three directed graphs 310, 311, 312, each representing a robotic motion from a source node to a target node. The original main graph 310 goes from the main source node 301 to the main target node 302. The other two generated directed graphs 311, 312, obtained by adding the corresponding edges, go from the intermediate nodes 303, 304 to the target node 302. In embodiments, other graphs may be generated.
  • Hence, as shown in FIG. 3, the amount of training data sets can be advantageously increased by adding intermediate locations between the location pair along the robot's TCPF trajectory.
  • FIG. 4 is a block diagram schematically illustrating a plurality of robotic motion-outcome prediction modules in accordance with disclosed embodiments.
  • In embodiments, a robotic application 401 simulates the motion-outcome behavior of one or more specific robots of one or more specific robot vendors with a plurality of ML motion-outcome prediction modules 412, 422.
  • In the exemplary embodiment of FIG. 4, a first robot 410 may be a specific model of an ABB robot and the second robot 420 may be a specific model of a KUKA robot.
  • The virtual simulation system predicts the motion-outcome behavior of the specific robot 410, 420 by inputting the data on a given robotic location pair 411, 421 into the corresponding specific ML motion prediction module 412, 422 and by receiving the ML predicted motion-outcome 413, 423 of the specific robot during its motion between the inputted robotic location pair 411, 421.
  • Example Embodiment Algorithm: The Motion-Outcome Data is Swept-Volume Data
  • The below described example embodiment algorithm refers to the embodiment where the robotic motion-outcome data is swept volume data of a specific robot. The skilled person can easily apply the teachings of this specific example embodiment to other embodiments with other robotic motion-outcome data, such as cycle-time data, energy consumption data, robot's joint movement data, intermediate robotic locations data and other robotic motion-outcome data.
  • FIG. 5 is a drawing schematically illustrating a swept volume of a robot in accordance with disclosed embodiments. The robotic swept volume 500 is generated by the robot 501 with its attached tool during its motion from a source location 502 to a final target location 503.
  • FIG. 6 is a drawing schematically illustrating a cloud of points representing a 3D space around a robot (not shown) in accordance with disclosed embodiments. Each point 610, here illustrated with a sphere-shaped sub-volume, is a unit of a volume 600 of the 3D space around the robot. In other embodiments, the volume unit 610 may be represented by a point, by a cube-shaped sub-volume, or by other types of sub-volume shapes.
  • FIG. 7 is a drawing schematically illustrating highlighted points representing a swept volume of the robot in the 3D space around the robot. The collection of the highlighted points 710 represents the swept volume of the robot (not shown).
  • The exemplary algorithm embodiment includes the following steps.
  • 1) Preparing the training data:
      • 1.a) mapping an envelope of a robot with a cloud of points 600;
      • 1.b) for each robotic task, simulating a single robotic motion from a source robotic location (with current values of robotic instructions) forward to a target robotic location (with desired values of robotic instructions);
      • 1.c) during the robot's motion, marking each point touched by the robot, all the marked points 710 representing the robot's swept volume (a simple marking sketch follows this list);
      • 1.d) optionally: in case of a robot without tool, collecting the robot Tool Frame trajectory list (e.g. position and orientation data set).
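  • A simplified sketch of the point marking of step 1.c is given below; it assumes the swept volume is approximated from positions sampled along the motion and a marking radius, which is a coarse simplification of checking the full robot geometry against the cloud of points:

```python
import numpy as np

def mark_swept_points(cloud_points, sampled_positions, radius):
    """Mark every point of the cloud lying within `radius` of any position sampled
    along the motion; the marked points approximate the robot's swept volume."""
    cloud = np.asarray(cloud_points, dtype=float)          # shape (N, 3)
    marked = np.zeros(len(cloud), dtype=bool)
    for p in np.asarray(sampled_positions, dtype=float):   # positions sampled during motion
        marked |= np.linalg.norm(cloud - p, axis=1) <= radius
    return marked                                          # boolean mask over the cloud
```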
  • 2) Data processing for preparing the x tuple and y tuple for Machine Learning:
      • 2.a) optionally: increasing the training data set by using computed intermediate locations as source locations;
      • 2.b) inserting data in each x tuple:
        • source location data (with its robotic instructions data) and target location data (with its motion instructions data);
      • 2.c) inserting data in each y tuple:
        • all the marked points, e.g. represented by their (X,Y,Z) coordinates or by their point indexes;
        • optionally: the robot Tool Frame list, e.g. represented by their position and orientation coordinates (X,Y,Z,Rx,Ry,Rz).
  • 3) Providing a trained ML module:
      • training a ML function with a supervised algorithm and exposing the resulting ML-trained function as a trained ML module.
  • 4) Detecting collisions by using the ML trained module:
      • 4.a) For each collision detection request, applying to the ML trained module an x tuple with data on the location pair and robotic instructions, and getting back as result the corresponding y tuple with data on the robot's swept volume points;
      • 4.b) optionally, in case of a mounted tool (e.g. a gun or even a gripper which holds a part), relocating it at the first Tool Frame (from the y tuple) and stepping it along the other Tool Frame entries of the list to obtain the swept volume of the robot with the mounted tool;
      • 4.c) checking the presence of possible collisions between the robotic swept volume and any possible obstacle (a simple point-inclusion check is sketched after this list);
      • 4.d) in case of detected collision, showing the collision points and the critical robotic locations pairs.
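  • The collision check of step 4.c can be illustrated, under the assumption that an obstacle is approximated by an axis-aligned bounding box, with the following sketch:

```python
import numpy as np

def detect_collisions(swept_points, obstacle_min, obstacle_max):
    """Return the predicted swept-volume points lying inside an obstacle's axis-aligned box."""
    pts = np.asarray(swept_points, dtype=float)
    lo = np.asarray(obstacle_min, dtype=float)
    hi = np.asarray(obstacle_max, dtype=float)
    inside = np.all((pts >= lo) & (pts <= hi), axis=1)
    return pts[inside]          # collision points to report; empty array means no collision
```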
  • Advantageously, by applying the ML trained module, the robot's swept volume is calculated without running a robot's simulation.
  • In embodiments, in order to support different robot TCPFs, the x tuple may further include differential information between the robot tool frame and the current TCPF.
  • In embodiments, in order to support different robot base positions, the data on locations pair in the x tuple may be given relative to the robot base like in a regular downloaded robotic program.
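  • For illustration, expressing a world-frame location relative to the robot base could be sketched as follows, assuming the base pose is available as a 4x4 homogeneous transform (an assumed convention, not mandated by this disclosure):

```python
import numpy as np

def to_robot_base_frame(location_xyz, base_pose_world):
    """Convert a world-frame position into coordinates relative to the robot base frame."""
    p_world = np.append(np.asarray(location_xyz, dtype=float), 1.0)   # homogeneous point
    p_base = np.linalg.inv(np.asarray(base_pose_world, dtype=float)) @ p_world
    return p_base[:3]
```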
  • In embodiments, the quantity of points used for training the ML module may conveniently be reduced by marking only the outside points wrapping the robot's swept volume and by ignoring the points inside the swept volume.
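  • One possible way to keep only the wrapping points, sketched under the assumption that the cloud of points is organized as a 3D boolean voxel grid, is to discard every marked cell whose full neighbourhood is also marked:

```python
import numpy as np
from scipy import ndimage

def keep_wrapping_voxels(marked_grid):
    """Keep only the outer shell of a swept volume given as a 3D boolean voxel grid:
    cells that are marked but have at least one unmarked face neighbour."""
    marked = np.asarray(marked_grid, dtype=bool)
    interior = ndimage.binary_erosion(marked)   # cells whose 6-neighbourhood is fully marked
    return marked & ~interior
```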
  • Embodiments include predicting motion-outcome data of a robot moving between a given pair of robotic locations, comprising: receiving a plurality of robotic location pair data as input training data; receiving a plurality of motion-outcome data as output training data, wherein the plurality of motion-outcome data is related to the plurality of robotic location pair data; training by a machine learning algorithm a function based on the input training data and output training data; providing the trained function as a motion-outcome prediction module for predicting as output data motion-outcome data of a robot moving between a corresponding pair of robotic locations; and predicting motion-outcome data by applying the motion-outcome prediction module to a given robotic location pair as input data.
  • Embodiments include a method for predicting motion-outcome data of a robot moving between a given pair of robotic locations, comprising: receiving training data of motion trajectories of the robot for a plurality of robotic location pairs; processing the training data so as to obtain x tuples and y tuples for machine learning purposes; wherein the x tuples describe the robotic location pair and the y tuples describe one or more intermediate robotic locations at specific time stamps during the motion of the robot between the location pair; learning from the processed data a function mapping the x tuples into the y tuples so as to generate a motion prediction module for the robot; for a given robotic location pair, predicting the robotic motion-outcome data between the given pair by getting the corresponding intermediate locations resulting from the motion prediction module.
  • FIG. 8 illustrates a flowchart 800 of a method for predicting a robotic motion-outcome of a specific robot moving between a given pair of robotic locations in accordance with disclosed embodiments. Such method can be performed, for example, by system 100 of FIG. 1 described above, but the “system” in the process below can be any apparatus configured to perform a process as described.
  • At act 805, data on a given pair of robotic locations is received as input data.
  • At act 810, a function trained by a machine learning algorithm is applied to the input data, wherein a related robotic motion-outcome data is generated as output data.
  • At act 815, the robotic motion-outcome data is provided as output data.
  • In embodiments, the motion-outcome data is selected from the group consisting of energy consumption data; cycle time data; robotic swept volume data; joint movement data; intermediate robotic locations data; other types of motion-outcome data; and any data set comprising any combination of the above data.
  • In embodiments, a conversion formatting table is generated for mapping the original format of the received input data into a numerical format suitable for applying the trained function to the input data.
  • In embodiments, the x tuples comprise one or more information pieces selected from the group consisting of: information on positions of the robotic location pair; information on robotic instructions at the robotic location pair; and/or information on a differential position between a robotic tool tip point and a robotic tool frame point. In embodiments, the information on the positions of the robotic location pair in the x tuples is given relative to a given reference frame of the robot. In embodiments, the y tuples comprise information on the positions of the one or more intermediate locations and/or information on robotic instructions of the one or more intermediate robotic locations.
  • In embodiments, the x tuple describing the location pair comprises at least information on the positions of the location pair (e.g. poses, positions and/or directions of the location pair) and, optionally, it may additionally comprise information on the robotic motion at the location pair (e.g. speed, acceleration, tension, state and/or other robotic motion related information/instructions at the location pair).
  • In embodiments, the x tuple comprises the minimum information required for describing the desired motion planning of the specific robot.
  • In embodiments, position information of the location pair and/or of the intermediate locations may conveniently be given as relative to a given reference frame of the specific robot. Preferably, the given reference frame of the specific robot may be the robot base frame as it is typically the case with downloaded robotic programs. Advantageously, in this manner, different base positions of specific robots can be flexibly supported by the ML prediction module.
  • In embodiments, the x tuple describing the location pair comprises robotic tool-related information. For example, robot tool-related information may comprise robot's tool type and/or tool differential position information. The differential tool position information may preferably be the delta between the robot's tool frame and the robot's TCPF frame. Advantageously, in this manner, different types of robotic tools/TCPFs can be flexibly supported by the ML prediction module and the related kinematic impact is then taken into account. Advantageously, embodiments wherein different robot's tools are used in different location pairs can be flexibly supported by the ML prediction module, e.g. for one location pair robot's tool A is used and for another location pair robot's tool B is used.
  • In embodiments, the information describing the location position may be given in the form of spatial coordinates describing the robot's tip position independently of the robot type, or it may be given as robot poses (e.g. via the robot's joint values). In other embodiments, the location position information may be given in other formats that may or may not be robot specific.
  • Of course, those of skill in the art will recognize that, unless specifically indicated or required by the sequence of operations, certain steps in the processes described above may be omitted, performed concurrently or sequentially, or performed in a different order.
  • Those skilled in the art will recognize that, for simplicity and clarity, the full structure and operation of all data processing systems suitable for use with the present disclosure is not being illustrated or described herein. Instead, only so much of a data processing system as is unique to the present disclosure or necessary for an understanding of the present disclosure is illustrated and described. The remainder of the construction and operation of data processing system 100 may conform to any of the various current implementations and practices known in the art.
  • It is important to note that while the disclosure includes a description in the context of a fully functional system, those skilled in the art will appreciate that at least portions of the present disclosure are capable of being distributed in the form of instructions contained within a machine-usable, computer-usable, or computer-readable medium in any of a variety of forms, and that the present disclosure applies equally regardless of the particular type of instruction or signal bearing medium or storage medium utilized to actually carry out the distribution. Examples of machine usable/readable or computer usable/readable mediums include: nonvolatile, hard-coded type mediums such as read only memories (ROMs) or erasable, electrically programmable read only memories (EEPROMs), and user-recordable type mediums such as floppy disks, hard disk drives and compact disk read only memories (CD-ROMs) or digital versatile disks (DVDs).
  • Although an exemplary embodiment of the present disclosure has been described in detail, those skilled in the art will understand that various changes, substitutions, variations, and improvements disclosed herein may be made without departing from the spirit and scope of the disclosure in its broadest form.
  • None of the description in the present application should be read as implying that any particular element, step, or function is an essential element which must be included in the claim scope: the scope of patented subject matter is defined only by the allowed claims.

Claims (16)

1-15. (canceled)
16. A method for predicting, by a data processing system, motion-outcome data of a robot moving between a given pair of robotic locations, which comprises the following steps of:
receiving data on the given pair of robotic locations as input data;
applying a function trained by a machine learning algorithm to the input data, wherein related robotic motion-outcome data is generated; and
providing the related robotic motion-outcome data as output data.
17. The method according to claim 16, wherein the related robotic motion-outcome data is selected from the group consisting of:
energy consumption data;
cycle time data;
robotic swept volume data;
joint movement data;
intermediate robotic locations data;
other types of motion-outcome data; and
any data set containing any combination of the above data.
18. The method according to claim 16, which further comprises generating a conversion formatting table to map an original format of the input data received into a numerical format suitable for applying the function to the input data.
19. The method according to claim 16, wherein the input data contains at least one information piece selected from the group consisting of:
information on positions of the given pair of robotic locations;
information on robotic instructions at the given pair of robotic locations; and
information on a differential position between a robotic tool tip point and a robotic tool frame point.
20. A data processing system, comprising:
a processor; and
an accessible memory, the data processing system configured to:
receive data on a given pair of robotic locations as input data;
apply a function trained by a machine learning algorithm to the input data, wherein related robotic motion-outcome data is generated; and
provide the related robotic motion-outcome data as output data.
21. The data processing system according to claim 20, wherein the related robotic motion-outcome data is selected from the group consisting of:
energy consumption data;
cycle time data;
robotic swept volume data;
joint movement data;
intermediate robotic locations data;
other types of motion-outcome data; and
any data set comprising any combination of the above data.
22. The data processing system according to claim 20, wherein the data processing system is further configured to generate a conversion formatting table to map an original format of the input data received into a numerical format suitable for applying the function to the input data.
23. The data processing system according to claim 20, wherein the input data contains at least one information piece selected from the group consisting of:
information on positions of the given pair of robotic locations;
information on robotic instructions at the given pair of robotic locations; and
information on a differential position between a robotic tool tip point and a robotic tool frame point.
24. A non-transitory computer-readable medium encoded with executable instructions that, when executed, cause at least one data processing system to:
receive data on a given pair of robotic locations as input data;
apply a function trained by a machine learning algorithm to the input data, wherein related robotic motion-outcome data is generated; and
provide the related robotic motion-outcome data as output data.
25. The non-transitory computer-readable medium according to claim 24, wherein the related robotic motion-outcome data is selected from the group consisting of:
energy consumption data;
cycle time data;
robotic swept volume data;
joint movement data;
intermediate robotic locations data;
other types of motion-outcome data; and
any data set comprising any combination of the above data.
26. The non-transitory computer-readable medium according to claim 24, wherein said at least one data processing system is further configured to generate a conversion formatting table to map an original format of the input data received into a numerical format suitable for applying the function to the input data.
27. The non-transitory computer-readable medium according to claim 24, wherein the input data contains at least one information piece selected from the group consisting of:
information on positions of the given pair of robotic locations;
information on robotic instructions at the given pair of robotic locations; and
information on a differential position between a robotic tool tip point and a robotic tool frame point.
28. A method for providing, by a data processing system, a function trained by a machine learning algorithm, which comprises the following steps of:
receiving a plurality of robotic location pair data as input training data;
receiving a plurality of motion-outcome data as output training data, wherein the plurality of the motion-outcome data is related to the plurality of robotic location pair data;
training by a machine learning algorithm a function based on the input training data and on the output training data; and
providing a trained function for predicting the motion-outcome data of a robot moving between a corresponding pair of robotic locations.
29. A method for predicting, by a data processing system, motion-outcome data of a robot moving between a given pair of robotic locations, which comprises the following steps of:
receiving a plurality of robotic location pair data as input training data;
receiving a plurality of motion-outcome data as output training data, wherein the plurality of motion-outcome data is related to the plurality of robotic location pair data;
training by a machine learning algorithm a function based on the input training data and the output training data;
providing a trained function as a motion-outcome prediction module for predicting, as the output data, motion-outcome data of a robot moving between a corresponding pair of robotic locations; and
predicting the motion-outcome data by applying the motion-outcome prediction module to a given robotic location pair as the input data.
30. A method for predicting, by a data processing system, motion-outcome data of a robot moving between a given pair of robotic locations, which comprises the following steps of:
receiving a plurality of robotic location pair data as input training data;
receiving a plurality of motion-outcome data as output training data, wherein the plurality of motion-outcome data is related to the plurality of robotic location pair data;
training by a machine learning algorithm a function based on the input training data and the output training data;
providing a trained function as motion-outcome prediction module for predicting as output data motion-outcome data of a robot moving between a corresponding pair of robotic locations; and
predicting the motion-outcome data by applying the motion-outcome prediction module to a given robotic location pair as the input data.
US17/295,541 2018-11-20 2019-07-18 Method and system for predicting motion-outcome data of a robot moving between a given pair of robotic locations Pending US20220019939A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/295,541 US20220019939A1 (en) 2018-11-20 2019-07-18 Method and system for predicting motion-outcome data of a robot moving between a given pair of robotic locations

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US16/196,156 US20200160210A1 (en) 2018-11-20 2018-11-20 Method and system for predicting a motion trajectory of a robot moving between a given pair of robotic locations
PCT/IB2019/056158 WO2020104864A1 (en) 2018-11-20 2019-07-18 Method and system for predicting motion-outcome data of a robot moving between a given pair of robotic locations
US17/295,541 US20220019939A1 (en) 2018-11-20 2019-07-18 Method and system for predicting motion-outcome data of a robot moving between a given pair of robotic locations

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US16/196,156 Continuation US20200160210A1 (en) 2018-11-20 2018-11-20 Method and system for predicting a motion trajectory of a robot moving between a given pair of robotic locations

Publications (1)

Publication Number Publication Date
US20220019939A1 true US20220019939A1 (en) 2022-01-20

Family

ID=67659413

Family Applications (2)

Application Number Title Priority Date Filing Date
US16/196,156 Pending US20200160210A1 (en) 2018-11-20 2018-11-20 Method and system for predicting a motion trajectory of a robot moving between a given pair of robotic locations
US17/295,541 Pending US20220019939A1 (en) 2018-11-20 2019-07-18 Method and system for predicting motion-outcome data of a robot moving between a given pair of robotic locations

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US16/196,156 Pending US20200160210A1 (en) 2018-11-20 2018-11-20 Method and system for predicting a motion trajectory of a robot moving between a given pair of robotic locations

Country Status (4)

Country Link
US (2) US20200160210A1 (en)
EP (2) EP3884345A4 (en)
CN (2) CN113056710A (en)
WO (1) WO2020104864A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210031360A1 (en) * 2019-07-31 2021-02-04 Fanuc Corporation Article transport system having plurality of movable parts
WO2023180785A1 (en) * 2022-03-22 2023-09-28 Siemens Industry Software Ltd. Method and system for enabling inspecting an industrial robotic simulation at a crucial virtual time interval

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114147703B (en) * 2021-11-16 2023-11-17 深圳市优必选科技股份有限公司 Robot obstacle avoidance method and device, computer readable storage medium and robot
CN114217594B (en) * 2021-11-26 2023-12-22 北京云迹科技股份有限公司 Method, device, medium and equipment for testing robot scheduling system

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160096272A1 (en) * 2014-10-02 2016-04-07 Brain Corporation Apparatus and methods for training of robots
US20170190051A1 (en) * 2016-01-06 2017-07-06 Disney Enterprises, Inc. Trained human-intention classifier for safe and efficient robot navigation

Family Cites Families (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5329856B2 (en) * 2008-06-27 2013-10-30 本田技研工業株式会社 Behavior estimation system
JP5313562B2 (en) * 2008-06-27 2013-10-09 本田技研工業株式会社 Behavior control system
US20110153080A1 (en) 2009-12-22 2011-06-23 Siemens Product Lifecycle Management Software Inc. Method and apparatus for industrial robotic paths cycle time optimization using fly by
CN102024180B (en) * 2010-12-23 2013-04-10 浙江大学 Support vector machine-based parameter-adaptive motion prediction method
CN102554938B (en) * 2010-12-31 2014-12-03 北京中科广视科技有限公司 Tracking method for mechanical arm tail end trajectory of robot
EP2845065B1 (en) * 2012-05-04 2019-09-18 Leoni Cia Cable Systems SAS Imitation learning method for a multi-axis manipulator
US8996175B2 (en) * 2012-06-21 2015-03-31 Rethink Robotics, Inc. Training and operating industrial robots
WO2014201422A2 (en) * 2013-06-14 2014-12-18 Brain Corporation Apparatus and methods for hierarchical robotic control and robotic training
US20150032258A1 (en) * 2013-07-29 2015-01-29 Brain Corporation Apparatus and methods for controlling of robotic devices
US9463571B2 (en) 2013-11-01 2016-10-11 Brain Corporation Apparatus and methods for online training of robots
CN103631142A (en) * 2013-12-09 2014-03-12 天津工业大学 Iterative learning algorithm for trajectory tracking of wheeled robot
EP2898996A1 (en) * 2014-01-23 2015-07-29 Plum Sp. z o.o. Method of controlling a robotic system and a robotic system controller for implementing this method
CN104049534B (en) * 2014-04-29 2017-01-25 河海大学常州校区 Self-adaption iterative learning control method for micro-gyroscope
DE102015204641B4 (en) * 2014-06-03 2021-03-25 ArtiMinds Robotics GmbH Method and system for programming a robot
US9815201B2 (en) * 2014-07-31 2017-11-14 Siemens Industry Software Limited Method and apparatus for industrial robotic energy saving optimization using fly-by
CN104635714B (en) * 2014-12-12 2018-02-27 同济大学 A kind of robot teaching orbit generation method based on time and space feature
WO2016141542A1 (en) * 2015-03-09 2016-09-15 深圳市道通智能航空技术有限公司 Aircraft tracing method and system
DE102015106227B3 (en) * 2015-04-22 2016-05-19 Deutsches Zentrum für Luft- und Raumfahrt e.V. Controlling and / or regulating motors of a robot
CN107645979B (en) * 2015-06-01 2022-02-11 Abb瑞士股份有限公司 Robot system for synchronizing the movement of a robot arm
US9707681B2 (en) * 2015-07-27 2017-07-18 Siemens Industry Software Ltd. Anti-collision management of overlapping robotic movements
WO2017134735A1 (en) * 2016-02-02 2017-08-10 株式会社日立製作所 Robot system, robot optimization system, and robot operation plan learning method
CN105773623B (en) * 2016-04-29 2018-06-29 江南大学 SCARA robotic tracking control methods based on the study of forecasting type Indirect iteration
WO2017201023A1 (en) * 2016-05-20 2017-11-23 Google Llc Machine learning methods and apparatus related to predicting motion(s) of object(s) in a robot's environment based on image(s) capturing the object(s) and based on parameter(s) for future robot movement in the environment
CN106600000A (en) * 2016-12-05 2017-04-26 中国科学院计算技术研究所 Method and system for human-robot motion data mapping
CN106774327B (en) * 2016-12-23 2019-09-27 中新智擎科技有限公司 A kind of robot path planning method and device
JP6705977B2 (en) * 2017-01-31 2020-06-03 株式会社安川電機 Robot path generation device and robot system
US10919153B2 (en) * 2017-03-06 2021-02-16 Canon Kabushiki Kaisha Teaching method for teaching operations to a plurality of robots and teaching system used therefor
JP6951659B2 (en) * 2017-05-09 2021-10-20 オムロン株式会社 Task execution system, task execution method, and its learning device and learning method
CN106965187B (en) * 2017-05-25 2020-11-27 北京理工大学 Method for generating feedback force vector when bionic hand grabs object
CN107225571B (en) * 2017-06-07 2020-03-31 纳恩博(北京)科技有限公司 Robot motion control method and device and robot
CN107102644B (en) * 2017-06-22 2019-12-10 华南师范大学 Underwater robot track control method and control system based on deep reinforcement learning
CN107511823B (en) * 2017-08-29 2019-09-27 重庆科技学院 The method of robot manipulating task track optimizing analysis
CN108255182B (en) * 2018-01-30 2021-05-11 上海交通大学 Service robot pedestrian perception obstacle avoidance method based on deep reinforcement learning
CN108481328B (en) * 2018-06-04 2020-10-09 浙江工业大学 Flexible iterative learning control method for joint space trajectory tracking of six-joint industrial robot

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160096272A1 (en) * 2014-10-02 2016-04-07 Brain Corporation Apparatus and methods for training of robots
US20170190051A1 (en) * 2016-01-06 2017-07-06 Disney Enterprises, Inc. Trained human-intention classifier for safe and efficient robot navigation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Täubig et al., "Real-time Swept Volume and Distance Computation for Self Collision Detection," 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (Year: 2011) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210031360A1 (en) * 2019-07-31 2021-02-04 Fanuc Corporation Article transport system having plurality of movable parts
US11752621B2 (en) * 2019-07-31 2023-09-12 Fanuc Corporation Article transport system having plurality of movable parts
WO2023180785A1 (en) * 2022-03-22 2023-09-28 Siemens Industry Software Ltd. Method and system for enabling inspecting an industrial robotic simulation at a crucial virtual time interval

Also Published As

Publication number Publication date
CN111195906B (en) 2023-11-28
EP3884345A4 (en) 2022-08-17
CN111195906A (en) 2020-05-26
US20200160210A1 (en) 2020-05-21
CN113056710A (en) 2021-06-29
EP3656513B1 (en) 2023-01-25
EP3884345A1 (en) 2021-09-29
WO2020104864A1 (en) 2020-05-28
EP3656513A1 (en) 2020-05-27

Similar Documents

Publication Publication Date Title
US20220019939A1 (en) Method and system for predicting motion-outcome data of a robot moving between a given pair of robotic locations
US9815201B2 (en) Method and apparatus for industrial robotic energy saving optimization using fly-by
EP3166084B1 (en) Method and system for determining a configuration of a virtual robot in a virtual environment
US9469029B2 (en) Method and apparatus for saving energy and reducing cycle time by optimal ordering of the industrial robotic path
US10414047B2 (en) Method and a data processing system for simulating and handling of anti-collision management for an area of a production plant
EP3166081A2 (en) Method and system for positioning a virtual object in a virtual simulation environment
EP2942685B1 (en) Method for robotic energy saving tool search
US9135392B2 (en) Semi-autonomous digital human posturing
EP2979825A1 (en) Method and apparatus for saving energy and reducing cycle time by using optimal robotic joint configurations
US10556344B2 (en) Method and system for determining a sequence of kinematic chains of a multiple robot
EP3546138A1 (en) Method and system to determine robot movement instructions.
EP2998078A1 (en) Method for improving efficiency of industrial robotic energy consumption and cycle time by handling orientation at task location
EP3753683A1 (en) Method and system for generating a robotic program for industrial coating
US20160275219A1 (en) Simulating an industrial system
US11514211B2 (en) Method and system for performing a simulation of a retraction cable motion
WO2023067374A1 (en) A method and a system for detecting possible collisions of objects in an industrial manufacturing environment
KR20240052808A (en) Multi-robot coordination using graph neural networks

Legal Events

Date Code Title Description
AS Assignment

Owner name: SIEMENS INDUSTRY SOFTWARE S.R.L., ISRAEL

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HAZAN, MOSHE;ZEMSKY, MAXIM;SIGNING DATES FROM 20210511 TO 20210626;REEL/FRAME:056686/0896

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: ADVISORY ACTION MAILED