WO2018172593A2 - Method for integrating new modules into modular robots, and associated robot component - Google Patents

Method for integrating new modules into modular robots, and associated robot component

Info

Publication number
WO2018172593A2
WO2018172593A2 (PCT/ES2018/070379)
Authority
WO
WIPO (PCT)
Prior art keywords
robot
component
modular
new
representation
Prior art date
Application number
PCT/ES2018/070379
Other languages
English (en)
Spanish (es)
Other versions
WO2018172593A3 (fr)
Inventor
Víctor MAYORAL VILCHES
Asier BILBAO CALVO
Irati ZAMALLOA UGARTE
Risto KOJCEV
Alejandro HERNÁNDEZ CORDERO
Aday MUÑIZ ROSAS
Original Assignee
Erle Robotics, S.L
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Erle Robotics, S.L filed Critical Erle Robotics, S.L
Priority to PCT/ES2018/070379 priority Critical patent/WO2018172593A2/fr
Publication of WO2018172593A2 publication Critical patent/WO2018172593A2/fr
Publication of WO2018172593A3 publication Critical patent/WO2018172593A3/fr

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1615Programme controls characterised by special kind of manipulator, e.g. planar, scara, gantry, cantilever, space, closed chain, passive/active joints and tendon driven manipulators
    • B25J9/1617Cellular, reconfigurable manipulator, e.g. cebot
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1628Programme controls characterised by the control loop
    • B25J9/163Programme controls characterised by the control loop learning, adaptive, model based, rule based expert control
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/33Director till display
    • G05B2219/33039Learn for different measurement types, create for each a neural net
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/40Robotics, robotics mapping to robotics vision
    • G05B2219/40304Modular structure

Definitions

  • the object of the invention belongs to the technical field of robotics.
  • the object of the current invention is a method for integrating new trained robot modules.
  • Document US20080082301A1 covers a method for designing and manufacturing a robot, in which a robot is created using 3D models.
  • US20080082301A1 designs and builds a robot using 3D modeling techniques and tools. However, this document does not describe how the robot's behavior can be reconfigured, or how unknown components can be included in the process.
  • US20130275091 provides a definition of the term "robot". This definition coincides with the more traditional concept of robots and provides a pure description of sensors and actuators, but does not classify them and does not consider that there are relevant differences between sensors and actuators. Similarly, document US20130275091 does not consider that, in addition to sensors and actuators, a robot can comprise other types of devices, such as those specialized in communication (both inter- and intra-robot), power, cognition/reasoning or user interaction, among others.
  • Document US20130275091 proposes a catalog of parts for a robot simulation, but this catalog only provides a set of defined modules and does not consider that unknown third-party elements are added to the simulation, which limits the scalability of the invention.
  • the method of document US20130275091 does not allow modifications of the predefined models or extensions to adapt such models to many different robots.
  • the "drag and drop" system known in the art limits the scalability of the approach since only components in the catalog can be used to construct these virtual representations.
  • US20130275091 does not consider the need to add capabilities to the simulation process to support previously unknown or undefined elements.
  • its approach lacks scalability because it focuses only on the process of building robots, without covering the training of these machines using neuromorphic approaches.
  • document US9671786B2 does not cover the generation of extended robotic devices using unknown or new components, nor does it describe a clear approach to determine how the robot's behavior should be coded, programmed or trained. In addition to this, although the document considers virtual models, it does not propose the use of virtual models to validate the proposed behavior.
  • WO2017153383A1 discloses a method for assembling robots to reconfigure generic or unspecified ("existing") non-functional components into specified components recognizable by the robot.
  • the respective firmware associated with each specific component is retrieved and optionally reconfigured, so that the generic component is deployed with retrieved or preconfigured firmware, making it compatible with the robot, always by reference to a predefined hardware abstraction layer.
  • a method for training modular robots that dynamically extend with new and unknown physical components is described.
  • the method disclosed herein enables that, given a new and unknown robot component hardware that attaches to the modular robot and sends its characteristics to it, the modular robot uses this new information to integrate said component, reestablish its physical configuration and retrain itself dynamically to perform a given task.
  • a robot apparatus or module that sends representative information about itself for automatic integration in robots, as described in the proposed method.
  • The use of communication standards, as well as the deployment of Artificial Neural Networks in the logic of the component, allows a complete integration of the new component into the logical model of the modular robot.
  • the use of Reinforcement Learning Techniques helps in the training process of the extended robot that includes known and new unknown components.
  • a method for training modular robots that dynamically extend with new and unknown physical components is described.
  • the method disclosed herein enables the modular robot, given a new and unknown robot component hardware that attaches to the modular robot and sends its features to it, to use this new information to integrate said component, reestablish its physical configuration and retrain itself dynamically to perform a given task.
  • a robot apparatus or module configured to send representative information about itself for automatic integration into robots as described in the method of the first aspect of the invention.
  • the use of communication standards, as well as the deployment of Artificial Neural Networks (ANN) in the logic of the component can allow a complete integration of the new component into the logical model of the modular robot.
  • the use of Reinforcement Learning Techniques helps in the process of training the extended robot, including known and new unknown components.
  • a method or system for retraining modular robots that dynamically extend with new and unknown hardware components is provided.
  • "new and unknown" is understood to mean hardware components that could come from any manufacturer and provide any functionality, but that have never been interconnected with the robot system before. Consequently, no prior information is available about their physical, mechanical, electrical or logical characteristics.
  • an unknown component is also "untrained", in the sense that it lacks the information necessary for its full integration into the modular robot. From the perspective of the components already integrated in the system, the new component is unknown because it has never undergone a training process alongside the rest of the components, and its specific characteristics are not yet known to the other components.
  • the new and unknown hardware component is provided with the ability to transmit such features to the modular robot through the use of the same communication methods as those used by the robot.
  • a communication method can consist of a communication middleware; a wide variety of methods can already be used for the same purpose, with the condition that they are known by both the robot and the new component.
  • the use of robot component hardware using common communication standards makes it easier to develop complex communication architectures between different components regardless of their type, status or origin.
  • the method of the invention allows the resulting robot, once the new module has been equipped, to perform the same task. This is feasible by implementing the method of the claims.
  • Figure 1 shows a diagram representing three different strategies for the construction of robots known in the art.
  • Figure 2 represents a 3D representation of a modular robot in an initial configuration.
  • Figure 3 represents a 3D representation of the modular robot of Figure 2 in which a new component is considered to be added.
  • Figure 4 represents a 3D representation of the new component considered to be added.
  • Figure 5 represents a 3D representation of the modular robot of Figure 2 in which the new component is added to the robot of Figure 2 and the resulting robot is still capable of executing a given task.
  • Figure 6 represents a 3D representation of the modular robot of Figure 2 in which the new component is added to the robot of Figure 2 and the robot, with its new configuration, is not physically capable of executing a given task.
  • Figure 7 represents a 3D representation of the new component, a LIDAR, that connects to an existing robot, a car, to help this robot car validate its autonomous driving capabilities.
  • Figure 8 shows a diagram representing the model of a robot, when a new and unknown component is physically connected.
  • Figure 9 shows a flow chart of the process that aims to validate and integrate any new components.
  • the method of the invention disclosed herein allows that, given a modular robot (R) performing a task (T) and a new and unknown robot component (C) that is coupled to the robot (R), said robot (R) updates its internal physical model to include the new component (C) and retrains itself to execute the task (T).
  • a user interface is also provided that displays a 3D model that represents the state of the assembled robot (R) and allows the training process to be monitored.
  • the physical modular robot (R) can be compared with the virtual representation of the robot (R), namely the robot representation (RR), enabling relevant corrections and assimilating indications about circumstances such as the weight of the components or the curvature or inclination of the entire robot, which the cognition component (processing unit) of the robot might not otherwise be able to recognize by itself.
  • the new component (C) is unknown to the robot (R) and comprises a set of logical rules with, among other robot-related capabilities, the following means that help improve the capabilities of the modular robot:
  • an information exchange logic or communication middleware can be used based on the publish/subscribe methodology.
  • Examples of such communication middleware include the Data Distribution Service (DDS), the most abstract Robot Operating System (ROS) middleware or the OPC UA binary protocol.
  • Other such communication middlewares, such as DDS for eXtremely Resource Constrained Environments (DDS-XRCE), could also be used to connect components with limited resources that would later join any of the publish/subscribe middlewares described above.
  • when the component (C) is coupled to the modular robot (R), it shares or publishes its characteristics on the data bus shared by all existing components, all of which subscribe to the published parameters that are useful for the operation of the modular robot (R), and may therefore learn of the addition of a new component (C) and its characteristics, as well as of other changes made to the modular robot (R), such as component substitutions or deletions.
  • the published data is treated dynamically, so it may or may not be useful for the purpose of the modular robot (R) at any given time. If it is not useful, it is simply ignored by the processing unit, but it remains available in case the desired behaviors or the physical characteristics of the modular robot (R) eventually change.
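The publish/subscribe exchange described above can be sketched as a minimal in-process data bus. The `DataBus` and `ProcessingUnit` classes, the topic name and the set of useful keys below are illustrative assumptions, not part of DDS, ROS or the claims:

```python
from collections import defaultdict

class DataBus:
    """Minimal in-process publish/subscribe bus (an illustrative
    stand-in for a middleware such as DDS or ROS topics)."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self._subscribers[topic]:
            callback(message)

class ProcessingUnit:
    """Cognition component: subscribes to component announcements and
    keeps only the parameters useful for the robot's operation."""
    def __init__(self, bus, useful_keys):
        self.known_components = {}
        self.useful_keys = set(useful_keys)
        bus.subscribe("component/announce", self.on_announce)

    def on_announce(self, crm):
        # Data not considered useful is simply ignored, as described.
        useful = {k: v for k, v in crm.items() if k in self.useful_keys}
        self.known_components[crm["name"]] = useful

bus = DataBus()
unit = ProcessingUnit(bus, useful_keys={"name", "type", "limits"})

# A new, previously unknown component couples to the robot and
# publishes its characteristics on the shared data bus.
bus.publish("component/announce", {
    "name": "A3", "type": "actuator",
    "limits": {"angle_deg": 180}, "color": "red",  # "color" is ignored
})
```

A real middleware would add discovery, QoS and serialization; the callback flow is the part this sketch keeps.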
  • the retraining process may include the use of techniques that mimic neurobiological architectures present in the mammalian nervous system, or neuromorphic techniques.
  • These neuromorphic techniques usually make use of a combination of Artificial Neural Networks (ANN) and Reinforcement Learning (RL) methods to represent and acquire the desired intelligence, thus creating mathematical models composed of variables and their relationships that capture the properties of the system and that can be dynamically retrained depending on the parts available and operational in the modular robot at any given time.
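As a hedged illustration of the reinforcement learning side, the sketch below retrains a tabular policy on a toy one-dimensional reach task; a real deployment would pair an ANN policy with an RL algorithm, but the reward/penalty update loop has the same shape (all names and hyperparameters here are illustrative):

```python
import random

def retrain(n_states=5, goal=4, episodes=200, alpha=0.5, gamma=0.9,
            epsilon=0.2, seed=0):
    """Toy Q-learning loop: the 'robot' moves left/right along a line
    of states and is rewarded for reaching the goal state (task T)."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]  # actions: 0=left, 1=right
    for _ in range(episodes):
        state = 0
        for _ in range(20):  # bounded episode length
            if rng.random() < epsilon:              # explore
                action = rng.randrange(2)
            else:                                   # exploit
                action = 0 if q[state][0] > q[state][1] else 1
            nxt = max(0, min(n_states - 1, state + (1 if action else -1)))
            reward = 1.0 if nxt == goal else -0.01  # reward vs. penalty
            # Standard temporal-difference update of the Q-table.
            q[state][action] += alpha * (
                reward + gamma * max(q[nxt]) - q[state][action])
            state = nxt
            if state == goal:
                break
    return q

def greedy_rollout(q, goal=4, max_steps=20):
    """Execute the learned policy; True means the task (T) succeeds."""
    state = 0
    for _ in range(max_steps):
        state = max(0, min(len(q) - 1,
                           state + (1 if q[state][1] >= q[state][0] else -1)))
        if state == goal:
            return True
    return False
```

When a new component changes the robot's configuration, the same loop would be rerun against the updated simulation environment.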
  • this process begins with the generation of a 3D environment comprising all the active components within the modular robot (R) that share information at a given time. This environment, together with the logic model generated, hereafter the model (M), will additionally be used to carry out simulations and robot trainings that aim to shape the behavior of the real modular robot (R) at a later stage. The more faithful the simulation is to the circumstances of reality, the more accurate the deployment of this simulation in the real modular robot (R) will be.
  • the retraining process can alternatively take place on the modular robot (R) itself, on a server or servers connected to the modular robot (R) on the same network, or on many servers available through the Internet (cloud services). Since the resulting model (M) can be processed as an exportable computer file, the result of the retraining process can be transferred to the robot (R) or to another party that needs it later. This also allows the deployment of a resulting logical model (M) and its parameters in other environments, serving as the basis of calculation for other robots (R) and their respective retraining processes.
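Treating the resulting model (M) as an exportable computer file can be sketched as plain serialization; the file layout and field names below are assumptions for illustration only:

```python
import json
import os
import tempfile

def export_model(parameters, path):
    """Serialize the trained logical model (M) as a portable file."""
    with open(path, "w") as f:
        json.dump({"format_version": 1, "parameters": parameters}, f)

def import_model(path):
    """Load a previously exported model (M), e.g. on the robot itself
    or on another robot using it as the basis for its own retraining."""
    with open(path) as f:
        return json.load(f)["parameters"]

# Train on a server, transfer the file, deploy on the robot (R).
params = {"joint_gains": [0.8, 1.2, 0.5]}  # hypothetical trained values
path = os.path.join(tempfile.mkdtemp(), "model_m.json")
export_model(params, path)
```

An actual ANN model would typically serialize weight tensors rather than JSON scalars, but the transfer/deploy flow is the same.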
  • the processing unit of the modular robot (R) maintains a representation of the robot (RR), or virtual representation of the modules physically present in the modular robot (R), including but not limited to their physical characteristics or type, which is dynamically updated as new components are added to the robot; such a robot representation (RR) would eventually need to be expanded with the information provided by the new component (C) once it is connected to the modular robot (R).
  • the processing unit, integrated as one of the robot (or "brain") components, detects the new component (C) and determines whether this component is known in advance or not. To do that, in some embodiments the processing unit subscribes to the information published by the new component (C) on a data bus in which all existing components share information according to a publish/subscribe methodology and that is accessible by the processing unit of the modular robot (R) at any given time.
  • the processing unit requests (“subscribes") the information necessary to integrate the new component (C).
  • This information can be published by said new component (C), since it uses the same communication method used in the modular robot (R).
  • a publish/subscribe communication pattern, whereby the new component (C) shares its relevant information with the rest of the components of the modular robot (R) and requests from them the data useful for carrying out the integration process, can finally be used, although many other possible forms of communication available to manufacturers or developers are also valid and are conceived to be understood by those skilled in the art.
  • this information can be encrypted using symmetric or asymmetric keys (or both at the same time), which the processing unit will also request while updating the Robot representation (RR).
  • the robot representation (RR) or, as defined above, the virtual representation of the components physically present in the modular robot (R), dynamically changes following a set of predefined rules and/or adapts itself according to a preexisting database. For that to take place, after the new component (C) is connected and the unknown information is shared using one of the mentioned communication methods, the virtual representation of the physically present components, or robot representation (RR), normally undergoes a physical reconfiguration process by which the relative positioning of all existing and newly coupled components is determined in an ordering index.
  • this ordering index plays an important role as it completes the 3D environment in which the simulations will take place.
  • this physical reconfiguration method may consist of employing inertial measurement units (IMUs) or network topologies, among other available methods that aim to calculate the relative positioning of the robot hardware components in a modular robot.
  • the new representation of the physical modular robot (R), namely the resulting robot representation (RR) is then used by the logic or model (M) of the modular robot (R) to reconfigure or retrain itself to execute the task (T).
  • the information received from the new component (C), which includes 3D representations, mathematical models, component operation characteristics, calculation means, firmware versions and, finally, symmetric or asymmetric keys, is validated and confirmed by the processing unit before its integration into the robot representation (RR).
  • a typical validation process, once the communication path has been secured through the validation of the symmetric or asymmetric key, involves:
  • the model (M) that feeds the robot logic uses the resulting version, namely, new or updated, of the robot representation (RR) to automatically adapt or retrain its behaviors according to the new robot physical configuration, as explained above.
  • This retraining process, which consists of Reinforcement Learning of Artificial Neural Networks, can take place either in the real robot or in simulation many times. Each simulation implies an indefinite number of iterations that aim to achieve a pre-established desired behavior in the neural network: if the robot achieves the objective it is rewarded, and it is penalized when it does not.
  • This retraining process can be monitored at any time through a user interface.
  • the user interface emits a signal (in the form of a light, sound or order signal) indicating the success or failure of the entire process, depending on the balance resulting from rewards and penalties received by the modular robot (R).
  • a success should be interpreted as meaning that the method achieved the adaptation of the modular robot (R) to the task (T) with the addition of the new component (C).
  • the new robot (R), with the addition of the new component (C), can accomplish the task (T), and the improved behaviors, labeled as "learned", can then be deployed in the real modular robot (R), if desired.
  • a failure signal should be understood as meaning that the modular robot (R) equipped with the new component (C), or modified modular robot (R), is not capable of executing the task (T) in its new configuration. Failures will usually be related to the fact that the modular robot (R) is not physically capable of accomplishing the task (T). Figure 6 shows such a case, in which the modular robot (R), with its new configuration, is not physically capable of executing the task (T); the integration has not been satisfactory, since the clamping member points outside the workspace. Incompatibilities such as an unspecified component length might not be detected until the simulation results in failure, and can be fixed automatically afterwards.
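The success/failure signal emitted by the user interface, as a function of the balance of rewards and penalties, can be sketched as follows (the function name and threshold are illustrative assumptions):

```python
def integration_signal(rewards, penalties, threshold=0.0):
    """Map the balance of rewards and penalties accumulated by the
    modular robot (R) during retraining to the success/failure signal
    shown by the user interface (light, sound or order signal)."""
    balance = sum(rewards) - sum(penalties)
    return "success" if balance > threshold else "failure"
```

A "failure" result would correspond to the Figure 6 case, where the task (T) is physically unreachable in the new configuration.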
  • the described method allows the modular robot (R) to automatically retrain itself for a given task (T) every time a new component (C) is added, and determines the feasibility of achieving such a task (T) with the existing configuration of the modular robot (R), or updated robot representation (RR). Therefore, the method described herein focuses on expanding the robot representation (RR) with unknown components (C) and adapting the logical model (M) of the robot (R) to perform a given task (T), regardless of a new and extended physical configuration, or updated robot representation (RR).
  • a robot apparatus that automatically sends representative information about its physical characteristics and its capabilities, or Component Robot Model (CRM).
  • this apparatus acts as the new and unknown component (C) to be coupled to the robot (R).
  • This apparatus provides the rest of the existing components (B, A1, A2, A4, EF) in the modular robot (R) with detection, actuation, cognition, power, communication, user-interface and other robot-related capabilities.
  • An illustrative apparatus or robot hardware component as disclosed herein can be defined as a robot module.
  • a robot module features an additional central processing unit (CPU) and additional programmable logic, usually realized in the form of an FPGA, volatile and non-volatile memory, a network interface, sensors, actuators, etc., and, in the context of a modular robot, it comprises an IMU code or program that reveals its current position and status and is constantly awaiting instructions.
  • An illustrative modular robot (R) may comprise four modules as in Figure 2, these being: a processing unit (B), a first actuator (A1), a second actuator (A2) and a clamping member (EF).
  • a communication unit that provides all components with power and communication means is conceived, as understood by those skilled in the art.
  • a third actuator (A3) is coupled to a modular robot (R) with a SCARA configuration that performs movements on a single axis.
  • the main function of the coupled third actuator (A3) includes rotating left and right through wider or narrower angles, adapting such movements to the operation of the entire modular robot (R), which aims to achieve the desired task (T), usually represented by the clamping member placing itself on the spatial point of the workspace labeled as "objective" in Figure 6.
  • Said third actuator (A3) when connected to the robot (R), sends information about itself so that the robot (R) can automatically integrate it and reconfigure its behaviors to include the same, as described above.
  • the coupled third-party actuator (A3) needs to transmit at least, but not limited to, the following information:
  • This 3D model comprises a realistic representation of the component itself, including, but not limited to its shape, size or length according to a set of values that emulate the characteristics of the components existing in the modular robot (R).
  • if the new component (C) is, for example, half the size of an already integrated component, its 3D model will also "graphically" match its proportions. This ensures reality-based 3D workspaces and modular robots (R) to work with and monitor through the user interface provided for that purpose.
  • a Component Robot Model that includes a representation of the submodules within the component that includes, but is not limited to its connections, physical characteristics, material or its type.
  • a simple Component Robot Model could include a description of the component along with its connections and their corresponding physical characteristics and, finally, with the objective of ensuring the authenticity, integrity and non-repudiation of the transmitted information, be provided with a symmetric or asymmetric key that will be verified and used by the processing unit before integration.
  • the Component Robot Model is also determined by, but not limited to, the following aspects: a) a predefined set of rules specified by the manufacturer, including a communication standard shared by the coupled third actuator (A3) and the existing components (B, A1, A2, A4, EF) of the robot (R); b) a graphical user interface in which a user can configure the "CRM" of the component; and c) an automatic mechanism by which the Component Robot Model (CRM) is automatically deduced from interactions with the robot (R).
  • the capabilities of the device, including but not limited to the interfaces and abstractions used to operate such a device, its name, version, mechanical limits (motor values) and electrical limits (energy readings), as well as any other programming-related information.
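A minimal Component Robot Model (CRM) record could be sketched as a data structure; the field names below mirror the information listed above (3D model, connections, physical characteristics, capabilities, key) but are assumptions, not a normative schema:

```python
from dataclasses import dataclass, field

@dataclass
class ComponentRobotModel:
    """Illustrative Component Robot Model (CRM): the information a new
    component (C) transmits on connection. Field names are assumed."""
    name: str
    component_type: str            # e.g. "actuator", "sensor"
    mesh_file: str                 # realistic 3D model of the component
    connections: list = field(default_factory=list)
    physical: dict = field(default_factory=dict)      # size, mass, material
    capabilities: dict = field(default_factory=dict)  # interfaces, limits
    public_key: str = ""           # for authenticity / non-repudiation

    def is_complete(self):
        """Basic completeness check performed before integration."""
        return bool(self.name and self.component_type and self.mesh_file)

crm = ComponentRobotModel(
    name="A3", component_type="actuator", mesh_file="a3.stl",
    connections=["A2"], physical={"length_mm": 120.0},
    capabilities={"angle_deg": [-90, 90]})
```

The processing unit would reject a CRM that fails `is_complete()` (or whose key cannot be verified) before touching the robot representation (RR).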
  • the information transmitted by the new component (C) will be available to any other component that participates in the communication network and publishes and subscribes to the information necessary to make the Robot work.
  • when the cognition component, which can be represented either by the CPU of a certain robot module (R) or by a computer network accessible by the robot (R), detects the new component (C), it will request the relevant information to validate and integrate it into the robot representation (RR), which updates the current 3D representation of the active components and their database.
  • the robot model (M) is therefore able to use its new configuration (RR updated with the CRM) to retrain itself to perform the desired task (T) using the techniques described above.
  • the robot (R) can comprise a robot representation (RR) that represents the modules inside the robot (R)
  • the component (C) has a Component Robot Model (CRM), which is a representation of each of the submodules within the component, considered individually, including, but not limited to its connections, physical characteristics, material or its type.
  • component (C) can be a robot apparatus which transmits at least, but without limitation, the following information when connected to the robot (R):
  • Figure 5 shows a 3D representation of a modular robot comprising five modules, in which the objective task (T) is also represented (as "objective"). Power and communication details as well as related components have not been represented.
  • This modular robot is derived from Figure 2 after the integration of the robot component (C) of Figures 3 and 4 has been achieved, leading to the achievement of the task despite the new physical configuration of the modular robot.
  • the robot (R) can perform the task (T) while moving its joints properly. Therefore, Figure 5 presents a satisfactory implementation after the robot representation (RR) was updated through the proposed method. Since the "objective" was reached, the task (T) was successful, thus the method would produce a positive result.
  • the integration of new component (C) into the modular robot (R) comprises the following steps:
  • the information sent from the new component (C), or Component Robot Model (CRM), including symmetric or asymmetric keys if desired, is received by the processing unit of the robot (R) and is validated and confirmed against a set of predefined rules; since the new component (C) shares a communication standard with the robot (R), communication is possible. Among other validation tasks, this includes ensuring that the Component Robot Model (CRM) is complete, authentic and compatible with the robot (R).
  • the Component Robot Model is integrated into the robot representation (RR) of the robot (R).
  • integration conflicts between the Component Robot Model (CRM) and the robot representation (RR) are automatically resolved by defaulting to the robot representation (RR) connections, structures and characteristics in case of disagreement.
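The three integration steps above (validate the received CRM, merge it into the robot representation (RR), and resolve conflicts by defaulting to the RR's existing values) can be sketched with plain dictionaries; the required fields and supported protocols are illustrative:

```python
def validate_crm(crm, supported_protocols):
    """Check the received CRM is complete and shares a communication
    standard with the robot (R). Field names are assumed."""
    required = {"name", "protocol", "model_3d", "connections"}
    return required <= crm.keys() and crm["protocol"] in supported_protocols

def integrate(robot_representation, crm):
    """Merge the CRM into the robot representation (RR); on conflict,
    the existing RR value wins by default, as described above."""
    merged = dict(crm)
    merged.update(robot_representation.get(crm["name"], {}))  # RR defaults win
    rr = dict(robot_representation)
    rr[crm["name"]] = merged
    return rr

rr = {"A1": {"type": "actuator"}}
crm = {"name": "A3", "protocol": "DDS", "model_3d": "a3.stl",
       "connections": ["A2"], "type": "actuator"}
rr2 = integrate(rr, crm)  # RR now also describes the new component A3
```

A real RR would hold 3D meshes and an ordering index rather than flat dictionaries, but the validate/merge/default flow is the same.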
  • Figure 6 shows a 3D representation of a modular robot comprising five modules and three joints in which the task (T) is also presented. Power and communication details as well as related components have not been represented.
  • This modular robot is obtained from Figure 2 after the assembly of the robot component (C) (represented in Figures 3 and 4) is completed, although with a slightly different configuration. This implies that the information sent by the new component (C), or Component Robot Model (CRM), is different when compared to Figure 5. In this case, the robot is unable to reach the target ("objective") of the task (T); thus the integration has not been satisfactory, since the clamping member points outside the workspace. In this particular case, the method failed to adapt the robot (R) to the task (T) and the result produced by the method should be negative.
  • Figure 7 represents a new and unknown component (C), a LIDAR, which automatically sends representative information about its physical model and its capabilities, and which connects to an existing robot (R), a car, to help such a robot car validate its autonomous driving capabilities.
  • the task to be performed by the robot (R) is coded as a set of routines to validate its capabilities of autonomous driving, while an illustrative routine could be defined by a simulated urban environment in which the car should drive and travel through the streets for a given amount of time without collisions. Another routine could be defined as a similar virtual environment in which the robot car should travel through an urban environment while respecting traffic rules.
  • a new and unknown component (C), in this case a LiDAR, is placed on the upper part of the robot car (R) to assist, through detection, in the task of autonomous driving described above. Upon connecting the new component (C) to the robot (R), the method proposed in this invention proceeds as follows:
  • the component (C) transmits its information, including symmetric or asymmetric keys if desired, to the robot (R).
  • the Component Robot Model (CRM) transmitted from the new component (C) is validated and confirmed against a predefined set of rules ensuring that the new component (C) is compatible with the robot (R).
  • the new version of the robot representation (RR) will be used to automatically adapt (retrain) the robot car (R). To do that, the routines that define the task (T) will be executed with the new robot representation (RR) many times.
  • Figure 8 schematically represents how a model (M) of a robot works (that is, the logic that feeds the robot (R) according to the method of the invention): when the unknown component (C) is physically connected, the robot representation (RR), or representation of the modules inside the robot, is updated with the information received from the new component (C) or Component Robot Model (CRM). Right after this occurs, the model (M) trains while receiving many inputs (i) together with the updated robot representation (RR) and generates many outputs (O) for the actuators, so that the robot (R) learns to achieve the given task (T) regardless of the number of modules present in the robot, provided it is physically viable.
  • the method object of the invention has been deployed in a multi-CPU environment, that is to say, using logic and programmable units provided in the robot (R) or in a client-server architecture accessible by at least one CPU of the robot (R). Any new component (C) equipped with sufficient computing power can perform the method of the invention itself, using a single unit, which could be a CPU that is the processor of the new component (C) itself.
  • Figure 9 is a flow chart representing the process that validates and integrates new components (C) in accordance with the method disclosed herein.
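The validation-and-integration flow of Figure 9 can be sketched in a few lines of Python. This is a minimal illustration only, not the patented implementation: the rule set, the CRM fields (`id`, `interface`, `power_w`) and all function names are hypothetical assumptions introduced here for clarity.

```python
# Minimal sketch of the Figure 9 flow: a new component (C) announces its
# Component Robot Model (CRM), which is validated against a predefined rule
# set before the robot representation (RR) is updated and retraining begins.
# All names and fields are illustrative assumptions, not the patent's API.

def validate_crm(crm, rules):
    """Check the transmitted CRM against every predefined compatibility rule."""
    return all(rule(crm) for rule in rules)

def integrate_component(robot_representation, crm, rules):
    """Return an updated robot representation, or None if the CRM is rejected."""
    if not validate_crm(crm, rules):
        return None  # incompatible component: the robot keeps its old representation
    updated = dict(robot_representation)
    updated[crm["id"]] = crm  # register the new module in the representation
    return updated

# Hypothetical rules: the component must declare a communication interface the
# robot supports and a power draw the robot's bus can actually supply.
rules = [
    lambda crm: crm.get("interface") in {"CAN", "Ethernet"},
    lambda crm: crm.get("power_w", float("inf")) <= 25.0,
]

robot_representation = {"base": {"id": "base", "interface": "CAN", "power_w": 10.0}}
lidar_crm = {"id": "lidar_1", "interface": "Ethernet", "power_w": 8.0}

new_rr = integrate_component(robot_representation, lidar_crm, rules)
if new_rr is not None:
    pass  # here the routines defining the task (T) would be re-executed to retrain
```

A rejected CRM simply leaves the previous representation in place, mirroring the flow chart's failure branch.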

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Orthopedic Medicine & Surgery (AREA)
  • Manipulator (AREA)

Abstract

The invention relates to a method for retraining modular robots dynamically equipped with new and unknown physical components that are integrated into them. Said method relies on a system whereby such components send their characteristics to the already-operating modular robot, and whereby this information is used to integrate the component, re-establish the physical configuration and retrain the logic of the robot, using reinforcement learning based on artificial networks, in order to perform a given task. The invention also relates to a robot apparatus (a robot module) that couples to modular robots and sends information representative of itself for its automatic integration into them, said apparatus being provided with an extensible communication standard within the context of the communication network present in the robot.
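The retraining scheme summarized in the abstract — a model that maps sensor inputs together with the updated robot representation (RR) to actuator outputs, adjusted by reinforcement — can be illustrated with a toy loop. Everything below is a hedged assumption for illustration: the reward function, the hill-climbing update (a crude stand-in for the reinforcement learning the abstract mentions) and the way the RR is summarized into the input vector are not the patented algorithm.

```python
import random

# Toy illustration of retraining after a representation change: the policy's
# input is a sensor reading concatenated with a summary of the robot
# representation (RR), so adding a module changes the input vector and forces
# re-adaptation. Random-perturbation hill climbing stands in, illustratively,
# for the reinforcement learning described in the abstract.

def act(weights, inputs):
    """Linear policy: weighted sum of the (sensor + RR summary) input vector."""
    return sum(w * x for w, x in zip(weights, inputs))

def reward(output, target=1.0):
    """Higher is better: actuator output close to the desired target."""
    return -abs(output - target)

def retrain(weights, inputs, steps=200, seed=0):
    """Keep perturbing the weights, accepting only reward improvements."""
    rng = random.Random(seed)
    best = list(weights)
    for _ in range(steps):
        candidate = [w + rng.uniform(-0.1, 0.1) for w in best]
        if reward(act(candidate, inputs)) > reward(act(best, inputs)):
            best = candidate
    return best

# A new component extends the RR summary, so the input vector grows with it.
inputs_before = [0.5, 1.0]       # sensor reading + old RR summary
inputs_after = [0.5, 1.0, 0.3]   # same, plus the new module's contribution
weights = retrain([0.0, 0.0, 0.0], inputs_after)
```

The point of the sketch is only structural: because the RR is part of the model's input, integrating a module changes what the model sees, and re-running the task routines drives the weights toward the new configuration.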
PCT/ES2018/070379 2018-05-25 2018-05-25 Procédé pour intégrer de nouveaux modules dans des robots modulaires, et composant de robot associé WO2018172593A2 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/ES2018/070379 WO2018172593A2 (fr) 2018-05-25 2018-05-25 Procédé pour intégrer de nouveaux modules dans des robots modulaires, et composant de robot associé

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/ES2018/070379 WO2018172593A2 (fr) 2018-05-25 2018-05-25 Procédé pour intégrer de nouveaux modules dans des robots modulaires, et composant de robot associé

Publications (2)

Publication Number Publication Date
WO2018172593A2 true WO2018172593A2 (fr) 2018-09-27
WO2018172593A3 WO2018172593A3 (fr) 2019-04-04

Family

ID=63556357

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/ES2018/070379 WO2018172593A2 (fr) 2018-05-25 2018-05-25 Procédé pour intégrer de nouveaux modules dans des robots modulaires, et composant de robot associé

Country Status (1)

Country Link
WO (1) WO2018172593A2 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3639983A1 (fr) * 2018-10-18 2020-04-22 Technische Universität München Mesures de securite anti-collision pour un robot modulaire reconfigurable
WO2022240906A1 (fr) * 2021-05-11 2022-11-17 Strong Force Vcn Portfolio 2019, Llc Systèmes, procédés, kits et appareils de mémorisation et d'interrogation distribués en périphérie dans des réseaux à chaîne de valeur
US12039559B2 (en) 2021-04-16 2024-07-16 Strong Force Vcn Portfolio 2019, Llc Control tower encoding of cross-product data structure

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE4225112C1 (de) 1992-07-30 1993-12-09 Bodenseewerk Geraetetech Einrichtung zum Messen der Position eines Instruments relativ zu einem Behandlungsobjekt
US6366293B1 (en) 1998-09-29 2002-04-02 Rockwell Software Inc. Method and apparatus for manipulating and displaying graphical objects in a computer display device
US6995536B2 (en) 2003-04-07 2006-02-07 The Boeing Company Low cost robot manipulator
US20080082301A1 (en) 2006-10-03 2008-04-03 Sabrina Haskell Method for designing and fabricating a robot
US20130275091A1 (en) 2010-07-22 2013-10-17 Cogmation Robotics Inc. Non-programmer method for creating simulation-enabled 3d robotic models for immediate robotic simulation, without programming intervention
US9671786B2 (en) 2007-01-12 2017-06-06 White Magic Robotics Inc. Method and system for robot generation
WO2017153383A1 (fr) 2016-03-07 2017-09-14 Softbank Robotics Europe Fabrication modulaire d'un robot

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110224828A1 (en) * 2010-02-12 2011-09-15 Neuron Robotics, LLC Development platform for robotic systems
US9751211B1 (en) * 2015-10-08 2017-09-05 Google Inc. Smart robot part
US11400587B2 (en) * 2016-09-15 2022-08-02 Google Llc Deep reinforcement learning for robotic manipulation

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3639983A1 (fr) * 2018-10-18 2020-04-22 Technische Universität München Mesures de securite anti-collision pour un robot modulaire reconfigurable
WO2020079272A3 (fr) * 2018-10-18 2020-07-02 Technische Universität München Mesures de sécurité anti-collision pour un robot modulaire
US12039559B2 (en) 2021-04-16 2024-07-16 Strong Force Vcn Portfolio 2019, Llc Control tower encoding of cross-product data structure
WO2022240906A1 (fr) * 2021-05-11 2022-11-17 Strong Force Vcn Portfolio 2019, Llc Systèmes, procédés, kits et appareils de mémorisation et d'interrogation distribués en périphérie dans des réseaux à chaîne de valeur

Also Published As

Publication number Publication date
WO2018172593A3 (fr) 2019-04-04

Similar Documents

Publication Publication Date Title
Xu et al. Opencda: an open cooperative driving automation framework integrated with co-simulation
Cortés et al. Coordinated control of multi-robot systems: A survey
JP7014368B2 (ja) プログラム、方法、装置、及びコンピュータ可読記憶媒体
JP7003355B2 (ja) スパイキングニューロモーフィックコンピュータを用いる自律型ナビゲーション
WO2018172593A2 (fr) Procédé pour intégrer de nouveaux modules dans des robots modulaires, et composant de robot associé
Kramer et al. Development environments for autonomous mobile robots: A survey
Jelisavcic et al. Real-world evolution of robot morphologies: A proof of concept
Coppola et al. Provable self-organizing pattern formation by a swarm of robots with limited knowledge
CN112947557B (zh) 一种切换拓扑下的多智能体容错跟踪控制方法
Heyden et al. Development of a design education platform for an interdisciplinary teaching concept
JP7110884B2 (ja) 学習装置、制御装置、学習方法、及び学習プログラム
Aloui et al. A new SysML model for UAV swarm modeling: UavSwarmML
Aitken et al. Adaptation of system configuration under the robot operating system
JP2015518597A (ja) デバイスコントロールシステム
Pignède et al. Toolchain for a Mobile Robot Applied on the DLR Scout Rover
Hercus et al. Control of an unmanned aerial vehicle using a neuronal network
Sofge et al. Challenges and opportunities of evolutionary robotics
US20220058318A1 (en) System for performing an xil-based simulation
Radecký et al. Intelligent agents for traffic simulation
CN109240694A (zh) 用于智能驾驶辅助系统控制算法的快速原型开发验证系统及方法
Ruediger et al. Simulation of Underwater Environments to Investigate Multi-Robot Systems for Marine Hull Inspection
WO2018154153A2 (fr) Méthode pour la configuration de robots modulaires
Grandi et al. Unibot remote laboratory: A scalable web-based set-up for education and experimental activities in robotics
Takács et al. Computational-Level Framework for Autonomous Systems: a Practical Approach
Brunete et al. Multi-drive control for in-pipe snakelike heterogeneous modular micro-robots

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18768925

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18768925

Country of ref document: EP

Kind code of ref document: A2