WO2021166918A1 - Robot - Google Patents

Robot Download PDF

Info

Publication number
WO2021166918A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
base
arm
robot
respect
Prior art date
Application number
PCT/JP2021/005770
Other languages
English (en)
Japanese (ja)
Inventor
ギレルメ マエダ
Original Assignee
株式会社Preferred Networks
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社Preferred Networks
Publication of WO2021166918A1

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00 - Controls for manipulators
    • B25J13/08 - Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J5/00 - Manipulators mounted on wheels or on carriages

Definitions

  • This disclosure relates to robot control technology.
  • A self-propelled robot is equipped with a robot arm mounted on a base that can travel freely on the floor.
  • The self-propelled robot can perform various tasks on a target with the end effector of the robot arm after moving to the vicinity of the target, or can convey the target by moving while the end effector holds it.
  • In such tasks, accurate position control of the base and the end effector is required.
  • The relative position of the end effector with respect to the base can be obtained from the encoders of the robot arm.
  • However, the self-propelled robot may slip on the floor surface while traveling. For this reason, wheel odometry using sensors provided on the wheels may not be able to accurately measure the amount of movement of the base.
  • In some cases the surrounding environment has few features, for example when the self-propelled robot faces a white wall. For this reason, LiDAR (Light Detection and Ranging, Laser Imaging Detection and Ranging) may not be able to accurately measure the position of the base.
  • An object of the present disclosure is to improve the accuracy of base position control of a self-propelled robot.
  • the robot according to the embodiment has the characteristics described in the claims.
  • A schematic diagram showing an example of the appearance of the robot according to the embodiment.
  • A block diagram showing an example of the functional configuration of the controller according to the embodiment.
  • A flowchart showing an example of the control process executed by the controller according to the first embodiment.
  • A flowchart showing an example of the arm control process according to the first embodiment.
  • A flowchart showing an example of the base control process according to the first embodiment.
  • A diagram for explaining the arm / base control process according to the first embodiment.
  • FIG. 1 is a schematic view showing an example of the appearance of the robot 1 according to the embodiment.
  • the robot 1 is a self-propelled robot provided with wheels and capable of freely traveling on a floor surface by the wheels, and is a robot that performs a predetermined work on a target.
  • the floor surface is, for example, a two-dimensional plane.
  • the floor surface is a surface on which the wheels roll for movement, and includes the ground. That is, it is a traveling surface on which the robot travels.
  • the robot 1 configured to be movable on a two-dimensional plane is illustrated, but the present invention is not limited to this.
  • the floor surface may be a curved surface having undulations.
  • the robot 1 may be configured to be movable in a three-dimensional space. That is, the moving body as the robot 1 according to the embodiment may be configured as a vehicle, a ship, an aircraft, or a combination thereof.
  • the robot 1 has a base 2, a main body 3, an arm 4, a camera 5, and a controller 6.
  • Base 2 supports the main body 3.
  • the base 2 and the main body 3 may be integrally formed.
  • the base 2 is configured to be movable on the floor surface.
  • the base 2 has a battery, a motor and wheels.
  • the battery powers the motor.
  • the motor is powered by the electric power supplied by the battery.
  • the wheels are driven by the power transmitted from the motor.
  • the speed and direction of movement of the base 2 are controlled according to the rotation direction of each wheel, the rotation speed of each wheel, the direction of the wheels, and the like.
  • The drive mechanism may be appropriately selected according to the usage environment of the robot 1, and may be wheels, two or more legs, crawlers, or a combination thereof.
  • Although a motor has been exemplified as the power generator, the present invention is not limited to this.
  • the power generator may be appropriately selected according to the usage environment of the robot 1, and may be a motor, an internal combustion engine, an external combustion engine, or a combination thereof. Further, the motor may be supplied with electric power not only from a battery but also from a commercial power source.
  • the main body 3 supports the arm 4 and the camera 5. Further, the main body 3 has a built-in controller 6.
  • Arm 4 is at least one robot arm. In the embodiment, a case where one robot arm is used as the arm 4 is illustrated. Further, in the embodiment, an articulated arm 4 having a plurality of links will be illustrated. Each link of the arm 4 is rotatably connected to each other. Each link is rotated by, for example, a power transmitted from a motor provided on the arm 4. In other words, the arm 4 is configured to be able to displace the joint angle between adjacent links.
  • the arm 4 does not have to have a joint.
  • the arm 4 may be fixed to the main body 3.
  • The target arm configuration means the target position of the arm 4 or the target position of the end effector 41.
  • a detachable end effector 41 is provided at the tip of the arm 4.
  • the end effector 41 performs work on the target.
  • a gripper configured to be able to grip an object is illustrated.
  • the end effector 41 may be integrally formed with the arm 4. Further, the end effector 41 may be appropriately selected according to the content and target of the work of the robot 1.
  • the camera 5 is an imaging unit that images an object and outputs visual information about the object.
  • the camera 5 is arranged on the upper part of the main body 3 as an example.
  • a camera 5 having an RGB-D sensor and an imaging optical system configured to be capable of measuring depth is illustrated. That is, the visual information about the object is three-dimensional point cloud data or image data regarding the imaging range of the camera 5 including the object.
  • the three-dimensional point cloud data or image data relating to the target is an example of the three-dimensional position information relating to the target.
  • Alternatively, a camera configured to be able to measure the three-dimensional position of the target (three-dimensional position information about the target) by LiDAR (Light Detection and Ranging, Laser Imaging Detection and Ranging) may be used.
  • Alternatively, a stereo camera capable of recording information in the depth direction of the target (that is, three-dimensional position information regarding the target) by, for example, shooting the target from a plurality of different directions at the same time may be used.
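  • As an illustrative aid (not part of the original disclosure), the following Python sketch shows one common way to recover a target's three-dimensional position in the camera frame from an RGB-D measurement, assuming a pinhole camera model; the pixel coordinates and intrinsic values fx, fy, cx, cy below are hypothetical.

        import numpy as np

        def pixel_depth_to_point(u, v, depth_m, fx, fy, cx, cy):
            """Back-project pixel (u, v) with measured depth (in metres)
            into a 3D point expressed in the camera frame (pinhole model)."""
            x = (u - cx) * depth_m / fx
            y = (v - cy) * depth_m / fy
            return np.array([x, y, depth_m])

        # Example: target detected at pixel (400, 260), 1.2 m away, with
        # assumed intrinsics of a 640x480 RGB-D sensor.
        t1 = pixel_depth_to_point(400, 260, 1.2, fx=525.0, fy=525.0, cx=319.5, cy=239.5)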
  • FIG. 2 is a block diagram showing an example of the functional configuration of the controller 6 according to the embodiment. As shown in FIG. 2, the controller 6 realizes functions as a camera controller 6a, a planner 6b, an arm controller 6c, and a base controller 6d.
  • the camera controller 6a controls the operation of the camera 5.
  • the camera controller 6a acquires visual information about the target from the camera 5.
  • the planner 6b calculates the position of the target 7 with respect to the base 2 based on the visual information.
  • the planner 6b calculates the target arm configuration based on the position of the target 7 with respect to the base 2.
  • The target arm configuration is, for example, the configuration of the joint angles of the links of the arm 4 with which the end effector 41 can perform work on the target 7 (that is, reach the target 7).
  • the position of the end effector 41 with respect to the base 2 is determined.
  • the planner 6b calculates the target position of the base 2 based on the target arm configuration.
  • the target position of the base 2 and the target arm configuration indicate the target position and posture of the robot 1.
  • the planner 6b calculates the position of the target 7 with respect to the target position of the base 2.
  • the position of the target 7 with respect to the target position of the base 2 is equal to the position of the end effector 41 with respect to the target position of the base 2.
  • the planner 6b controls the movement of the base 2 by executing feedback control that reduces the difference between the position of the target 7 with respect to the base 2 and the position of the target 7 with respect to the target position of the base 2. That is, this feedback control brings the base 2 closer to the target position.
  • the arm controller 6c moves the arm 4 by controlling the joint angle of each link of the arm 4 based on the target arm configuration supplied from the planner 6b.
  • the target arm configuration is an example of the control amount of the arm 4, and is calculated based on the target position of the end effector 41 with respect to the base 2.
  • the base controller 6d moves the base 2 based on the control amount of the base 2 supplied from the planner 6b.
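  • As a non-limiting illustration, the four functional blocks of the controller 6 can be sketched in Python as follows; the class and method names (acquire, plan, apply) are assumptions introduced only for this sketch.

        from dataclasses import dataclass
        import numpy as np

        @dataclass
        class RobotCommand:
            u_arm: np.ndarray    # target joint angles (q1, ..., qj)
            u_base: np.ndarray   # base control amount (u_x, u_y, u_theta)

        class Controller:
            """Skeleton mirroring camera controller 6a, planner 6b,
            arm controller 6c and base controller 6d."""

            def __init__(self, camera_ctrl, planner, arm_ctrl, base_ctrl):
                self.camera_ctrl = camera_ctrl   # 6a: acquires visual information
                self.planner = planner           # 6b: computes Xe, U_arm, U_base
                self.arm_ctrl = arm_ctrl         # 6c: drives the arm joints
                self.base_ctrl = base_ctrl       # 6d: drives the base

            def step(self) -> RobotCommand:
                visual = self.camera_ctrl.acquire()
                cmd = self.planner.plan(visual)
                self.arm_ctrl.apply(cmd.u_arm)
                self.base_ctrl.apply(cmd.u_base)
                return cmd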
  • FIG. 3 is a diagram for explaining an outline of the control executed by the robot 1 according to the present embodiment.
  • FIG. 4 is a flowchart showing an example of the control process executed by the controller 6 according to the present embodiment.
  • In FIG. 3, a process of making the end effector 41 reach a desired target (object 7) in the Cartesian space (position and orientation) by using the camera 5 mounted on the head of the robot 1 is exemplified. That is, the purpose (goal) of the control (task) in the following description is to bring the end effector 41 to a position where the target 7 can be gripped.
  • the Cartesian space is represented by the global coordinate system (world frame) of the origin Ow.
  • a reference robot (Reference Robot) RR and an estimated robot (Estimated Robot) ER are defined for the robot 1 according to the embodiment.
  • the reference robot RR and the estimation robot ER are arranged in the global coordinate system of the origin Ow.
  • the reference robot RR is a theoretical robot 1, that is, a virtual robot 1.
  • the reference robot RR provides a reference solution for reaching the goal.
  • the end effector 41 of the reference robot RR is always set at a position where the target 7 can be gripped. That is, the reference robot RR shows the target position of the base 2 and the target arm configuration of the arm 4. Therefore, when the actual robot 1 (real robot) reaches the same position and posture as the reference robot RR, the end effector 41 can grasp the desired target 7. From this, the reference robot RR can be expressed as a theoretical robot that indicates the target position and posture of the robot 1.
  • The reference robot RR has a reference coordinate system (reference frame).
  • the reference coordinate system is associated with the base 2 of the reference robot RR. Specifically, the origin Or of the reference coordinate system is set on the base 2 of the reference robot RR.
  • the estimated robot ER is the robot 1 estimated as the current position and posture of the actual robot 1.
  • the position and posture of the estimated robot ER are estimated based on the visual information acquired by the camera 5 mounted on the actual robot 1.
  • The visual information shows the relative relationship between the actual robot 1 and the object 7.
  • the estimation robot ER has an estimated coordinate system (estimated frame).
  • the estimated coordinate system is associated with base 2 of the estimated robot ER. Specifically, the origin Oe of the estimated coordinate system is set on the base 2 of the estimated robot ER.
  • the camera controller 6a causes the camera 5 to perform imaging on the target 7 at a predetermined cycle.
  • The predetermined cycle is on the order of several tens of Hz; as an example, 20 Hz can be used. Therefore, the camera 5 outputs visual information at the predetermined cycle.
  • the camera controller 6a acquires visual information from the camera 5 (S1).
  • the planner 6b calculates a position vector Xe indicating the position of the target 7 in the estimated coordinate system based on the visual information (S2).
  • Specifically, the planner 6b calculates the position T1 of the target 7 with respect to the camera 5 based on the visual information. Subsequently, the planner 6b calculates the position vector Xe, which indicates the position of the target 7 with respect to the origin Oe of the estimated coordinate system, by using the position T1 of the target 7 with respect to the camera 5 and the position T2 of the camera 5 with respect to the base 2 (the origin Oe of the estimated coordinate system).
  • the position T2 of the camera 5 with respect to the origin Oe of the estimated coordinate system is a known value, and is, for example, preset and stored in the storage area of the controller 6.
  • the position vector Xe is the position of the target 7 with respect to the estimation robot ER. That is, the planner 6b estimates the position (origin Oe) of the estimation robot ER in the global coordinate system based on the visual information. In other words, the planner 6b sets the estimation robot ER based on the visual information.
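  • As an illustrative sketch (not part of the disclosure), the composition of the known camera pose T2 with the measured target position T1 can be written with homogeneous transforms; the numeric values below are assumed examples.

        import numpy as np

        def make_transform(rotation, translation):
            """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
            T = np.eye(4)
            T[:3, :3] = rotation
            T[:3, 3] = translation
            return T

        # T2: pose of the camera 5 in the estimated base frame (origin Oe),
        # a known, pre-stored value according to the description.
        T2 = make_transform(np.eye(3), np.array([0.10, 0.0, 1.20]))   # assumed mounting

        # T1: position of the target 7 measured in the camera frame (homogeneous form).
        t1_cam = np.array([0.05, -0.02, 1.2, 1.0])

        # Xe: position of the target 7 with respect to the origin Oe of the estimated frame.
        Xe = (T2 @ t1_cam)[:3]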
  • the planner 6b executes the arm / base control process (S3).
  • the arm / base control process includes an arm control process and a base control process.
  • FIG. 5 is a flowchart showing an example of the arm control process according to the present embodiment.
  • FIG. 6 is a flowchart showing an example of the base control process according to the present embodiment.
  • FIGS. 7 to 11 are diagrams for explaining the arm / base control process according to the embodiment.
  • the order of arm control processing and base control processing can be set as appropriate. That is, the arm control process may be executed prior to the base control process, may be executed in parallel with the base control process, or may be executed after the base control process. However, the process of S11 may be executed only in the process that is executed first in each cycle of the arm control process and the base control process. In addition, one of the arm control process and the base control process may not be executed in each cycle.
  • In the following, a case where both the arm control process and the base control process are executed, and the arm control process is executed prior to the base control process, will be described as an example.
  • The planner 6b calculates a target arm configuration U_arm capable of gripping the target 7 (S11).
  • The target arm configuration U_arm is the joint angle of each link of the arm 4 of the reference robot RR.
  • The calculated target arm configuration U_arm is supplied to the arm controller 6c.
  • The target arm configuration U_arm, for the j joints of the links, is expressed as follows: U_arm = (q1, q2, ..., qj)
  • The arm controller 6c controls the joint angle of each link of the arm 4 of the robot 1 based on the command for controlling the arm 4, that is, the target arm configuration U_arm from the planner 6b (S12). It is assumed that the arm 4 has accurate joint encoders. Therefore, the arm 4 of the robot 1 can directly and accurately reproduce the posture of the arm 4 of the reference robot RR.
  • the arm control process ends after the arm 4 is moved by the arm controller 6c for one cycle time (for example, 50 milliseconds).
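  • A minimal sketch of this step, assuming a hypothetical hardware interface with set_joint_positions and read_encoders methods (these names are not part of the disclosure):

        import numpy as np

        def drive_arm_one_cycle(arm, u_arm, tol_rad=1e-3):
            """Command the target joint angles U_arm = (q1, ..., qj) for one
            control cycle (e.g. 50 ms) and check them against the joint encoders."""
            arm.set_joint_positions(np.asarray(u_arm))
            q_measured = np.asarray(arm.read_encoders())
            # With accurate joint encoders, the commanded configuration is assumed
            # to be reproduced directly (within tolerance).
            return np.allclose(q_measured, u_arm, atol=tol_rad)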
  • The planner 6b arranges the reference robot RR based on the target arm configuration U_arm calculated in S11 (S21). Specifically, the planner 6b plans the posture of the reference robot RR based on the joint angles of the links of the arm 4 of the reference robot RR, and calculates the coordinates (qx, qy, qt) that translate and rotate the base 2 of the reference robot RR, that is, the reference coordinate system, with respect to the global coordinate system.
  • the planner 6b calculates a position vector Xr indicating the position of the target 7 in the reference coordinate system (S22).
  • the position vector Xr is the position of the target 7 with respect to the reference robot RR.
  • The planner 6b calculates the control amount U_base of the base 2 so that the difference between the position vector Xe and the position vector Xr is reduced (S23).
  • The calculated U_base is supplied to the base controller 6d.
  • The proportional controller in FIG. 3 has a gain P.
  • The control amount U_base of the base 2 is generated by this proportional controller with the gain P acting on the difference between the position vector Xe and the position vector Xr.
  • The base 2 of the robot 1 illustrated in this embodiment is not displaced in the height direction (z direction). Therefore, u_z is not used. That is, in the robot 1 illustrated in the present embodiment, the control amount U_base of the base 2 is expressed as follows.
  • U_base = (u_x, u_y, u_θ)
  • In other words, the planner 6b generates a command for controlling the base 2, that is, the control amount U_base of the base 2, based on feedback control that reduces the distance (the distance between the relative positions) between the position of the target 7 with respect to the estimation robot ER and the position of the target 7 with respect to the reference robot RR.
  • The generated control amount U_base of the base 2 is represented, as an example, by a vector with reference to the estimation robot ER.
  • The base controller 6d controls the movement of the base 2 of the robot 1 based on the control amount U_base of the base 2 from the planner 6b (S24). After the base 2 is moved by the base controller 6d, the base control process ends.
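  • A minimal sketch of this feedback step, assuming a simple proportional law on the planar components of the error (the exact control law beyond the gain P is not spelled out in the text):

        import numpy as np

        def base_control(x_e, x_r, gain_p=1.0):
            """Proportional feedback on the relative-position error.

            x_e : position of the target 7 w.r.t. the estimation robot ER (vector Xe)
            x_r : position of the target 7 w.r.t. the reference robot RR (vector Xr)

            Returns an assumed U_base = (u_x, u_y, u_theta); only the planar
            translation is driven by the positional error here, and u_theta is
            left to a separate heading law (an assumption of this sketch)."""
            err = np.asarray(x_e) - np.asarray(x_r)
            u_x, u_y = gain_p * err[0], gain_p * err[1]
            u_theta = 0.0
            return np.array([u_x, u_y, u_theta])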
  • In this way, the planner 6b calculates the control amount U of the robot 1 in the arm / base control process (S3) and controls the movement of the robot 1.
  • The control amount U of the robot 1 is expressed as the combination of the two control amounts described above: U = (U_arm, U_base)
  • The planner 6b determines whether or not the end effector 41 has reached the target 7 (S4). As an example, the planner 6b determines that the end effector 41 has reached the target 7 when no control amount U_base is generated in S23, that is, when the difference between the position vector Xe and the position vector Xr is within a predetermined range. In addition, this determination may be performed based on the output of a tactile sensor mounted on the end effector 41.
  • The controller 6 updates the position vectors Xe and Xr each time new visual information arrives from the camera 5, that is, at the predetermined period (several tens of Hz), and continuously repeats the regeneration of the command for controlling the robot 1 so as to reduce the distance between the two relative positions. That is, by operating the controller 6 at a high frequency, as shown in FIG. 11, the difference between the position vector Xe and the position vector Xr can be reduced, and the end effector 41 can reach the target 7.
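  • The repeated sense-plan-act cycle can be sketched as follows; the planner method names and the reach tolerance are assumptions introduced for illustration only.

        import time

        def control_loop(camera_ctrl, planner, arm_ctrl, base_ctrl,
                         rate_hz=20.0, reach_tol=0.01):
            """Repeat the cycle at a fixed rate (e.g. ~20 Hz) until the end
            effector is judged to have reached the target (S4)."""
            period = 1.0 / rate_hz
            while True:
                visual = camera_ctrl.acquire()                        # S1
                x_e = planner.estimate_target_position(visual)        # S2: vector Xe
                u_arm = planner.plan_arm(x_e)                         # S11: target arm configuration
                x_r = planner.target_position_wrt_reference(u_arm)    # S21-S22: vector Xr
                arm_ctrl.apply(u_arm)                                 # S12
                base_ctrl.apply(planner.base_command(x_e, x_r))       # S23-S24
                if planner.error_norm(x_e, x_r) < reach_tol:          # S4: reached?
                    break
                time.sleep(period)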
  • Since the command is continuously regenerated, the technique according to the present embodiment can also handle a moving target 7. If the target 7 does not move, a part of the arm / base control process (S3), such as S11, S12, S21, and S22, need not be executed in each repetition. Alternatively, the flow of S11, S12, S21, S22, etc. may be executed only when a change in the inclination, shape, or the like of the object 7 is detected based on, for example, the visual information.
  • the posture including the arm configuration of the reference robot RR may be planned by any method.
  • the posture including the arm configuration of the reference robot RR can be calculated by inverse kinematics so that the end effector 41 reaches the object 7.
  • The posture including the arm configuration of the reference robot RR can also be calculated by motion planning or trajectory optimization in which the positions of the base 2 and the joints of the arm 4 are searched so that the end effector 41 reaches the desired target 7.
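  • As a purely illustrative example of such an inverse-kinematics computation (a two-link planar arm, which is not the arm of the disclosure), a closed-form solution can be written as:

        import numpy as np

        def two_link_ik(x, y, l1, l2):
            """Closed-form inverse kinematics for a planar two-link arm:
            returns joint angles (q1, q2) placing the tip at (x, y),
            using one of the two elbow branches."""
            c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
            if abs(c2) > 1.0:
                raise ValueError("target out of reach")
            q2 = np.arccos(c2)
            q1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(q2), l1 + l2 * np.cos(q2))
            return q1, q2

        # Example: reach a point 0.5 m ahead and 0.2 m up with two 0.4 m links.
        q1, q2 = two_link_ik(0.5, 0.2, 0.4, 0.4)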
  • FIG. 12 is a diagram for explaining an outline of the control executed by the controller 6 according to the present embodiment.
  • FIG. 13 is a flowchart showing an example of the arm control process according to the present embodiment.
  • FIG. 14 is a flowchart showing an example of the base control process according to the present embodiment.
  • the order and non-execution of the arm control process and the base control process can be appropriately set, which is the same as that of the first embodiment.
  • A case where the arm control process is executed prior to the base control process will be described as an example.
  • the planner 6b calculates the position vector Xo of the end effector 41 in the estimated coordinate system (S11a).
  • the position vector Xo calculated here indicates the current position of the end effector 41.
  • the position vector Xo may be calculated using the previous control amount of the arm 4, or may be calculated based on the output of the joint encoder of the arm 4.
  • The planner 6b calculates the target arm configuration U_arm, that is, the control amount of the arm 4, based on the position vector Xe and the position vector Xo (S11b). Specifically, assuming that the arm 4 is on a fixed base 2, the joint angle of each link of the arm 4 is calculated by inverse kinematics based on the position vector Xe of the object 7. The planner 6b supplies the calculated control amount (target arm configuration U_arm) to the arm controller 6c. After the joint angle of each link of the arm 4 is controlled (S12) by the arm controller 6c, the arm control process ends.
  • the position vector Xo calculated here indicates the position of the end effector 41 after the movement.
  • The position vector Xo may be calculated based on the output of the joint encoders of the arm 4. In the case where the base control process is executed after the arm control process but before the arm 4 has actually moved, the position vector Xo calculated here is the target position of the end effector 41 defined by the target arm configuration U_arm (the control amount of the arm 4).
  • That is, the position vector Xo may be calculated based on the target arm configuration U_arm (the control amount of the arm 4) calculated in S11b of the arm control process. Further, when the base control process is executed prior to the arm control process, the position vector Xo may be calculated using the previous control amount of the arm 4.
  • The calculated U_base is supplied to the base controller 6d. After the movement of the base 2 is controlled (S24) by the base controller 6d, the base control process ends.
  • The planner 6b determines that the end effector 41 has reached the target 7 when the difference between the position vector Xe and the position vector Xo approaches zero, that is, falls below a predetermined threshold value (S4).
  • In this way, the controller 6 calculates the target arm configuration U_arm by inverse kinematics based on the position of the target 7 with respect to the base 2 (position vector Xe), that is, calculates the target position of the end effector 41 with respect to the base 2 (position vector Xo), and controls the movement of the base 2 by executing feedback control that reduces the difference between the position vector Xe and the position vector Xo.
  • The controller 6 can make the end effector 41 reach the target 7 by executing the feedback control using the position vector Xo of the end effector 41 as an offset. According to this configuration, the same effect as that of the first embodiment can be obtained. In addition, the calculation cost can be reduced as compared with the first embodiment.
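  • A minimal sketch of this offset-based feedback, under the same proportional-control assumption as in the first-embodiment sketch (the exact law is not given in the text):

        import numpy as np

        def base_control_with_offset(x_e, x_o, gain_p=1.0, reach_tol=0.01):
            """Drive the base so that the end-effector position Xo converges to the
            target position Xe (both expressed in the estimated base frame).

            Returns (u_base, reached): an assumed U_base = (u_x, u_y, u_theta) and a
            flag set when the difference falls below the threshold (S4)."""
            err = np.asarray(x_e) - np.asarray(x_o)
            reached = bool(np.linalg.norm(err) < reach_tol)
            u_base = np.array([gain_p * err[0], gain_p * err[1], 0.0])
            return u_base, reached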
  • As described above, the robot 1 according to the embodiment has a controller that, based on visual information, calculates the position of the target 7 with respect to the base 2 and the target configuration of the arm 4 (target arm configuration U_arm), and controls the movement of the base 2 based on the position of the target 7 with respect to the base 2 and the target arm configuration. Therefore, according to the robot 1 of the embodiment, since the base position control is performed based on the relative position of the base 2 with respect to the target 7, the accuracy of the base position control of the self-propelled robot can be maintained even when the base 2 slips on the floor surface. Further, according to the robot 1 that uses the camera 5 capable of measuring depth, the base position control of the self-propelled robot can be realized regardless of the surrounding environment. Therefore, according to at least one embodiment described above, the accuracy of the base position control of the self-propelled robot can be improved.
  • As illustrated in FIGS. 3 and 12, the control method according to the embodiment uses one or more processors to, for the robot 1, calculate the position of the target 7 with respect to the base 2 and the target arm configuration U_arm based on the visual information, and to control the movement of the base 2 based on the position of the target 7 with respect to the base 2 and the target arm configuration U_arm. Since the processing procedure, processing content, effects, and the like of this control method are the same as those of the above-described embodiments, the description thereof is omitted.
  • The control program according to the embodiment causes one or a plurality of computers to calculate, for the robot 1 and based on the visual information, the position of the target 7 with respect to the base 2 and the target arm configuration U_arm, and to control the movement of the base 2 based on the position of the target 7 with respect to the base 2 and the target arm configuration U_arm. Since the processing procedure, processing content, effects, and the like realized by the control program are the same as those of the above-described embodiments, the description thereof is omitted.
  • A part or all of the controller 6 in the above-described embodiments may be configured by hardware, or may be configured by information processing of software (a program) executed by a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), or the like.
  • The software that realizes at least a part of the functions of the controller 6 in the above-described embodiments may be stored in a non-transitory storage medium (non-transitory computer-readable medium) such as a flexible disk, a CD-ROM (Compact Disc-Read Only Memory), or a USB (Universal Serial Bus) memory, and the information processing of the software may be executed by loading it into a computer.
  • the software may be downloaded via a communication network.
  • information processing may be executed by hardware by implementing the software in a circuit such as an ASIC (Application Specific Integrated Circuit) or an FPGA (Field Programmable Gate Array).
  • the type of storage medium that stores the software is not limited.
  • the storage medium is not limited to a removable one such as a magnetic disk or an optical disk, and may be a fixed storage medium such as a hard disk or a memory. Further, the storage medium may be provided inside the computer or may be provided outside the computer.
  • FIG. 15 is a block diagram showing an example of the hardware configuration of the object operation system 9 in the above-described embodiment.
  • The controller 6 may be realized as, for example, a computer 81 in which a processor 811, a main storage device 812 (memory), an auxiliary storage device 813 (memory), a network interface 814, and a device interface 815 are connected via a bus 816.
  • The computer 81 of FIG. 15 includes one of each component, but may include a plurality of the same component. Further, although one computer 81 is shown in FIG. 15, the software may be installed on a plurality of computers, and each of the plurality of computers may execute the same or a different part of the software. In this case, it may be a form of distributed computing in which each computer communicates via the network interface 814 or the like to execute processing. That is, the controller 6 in the above-described embodiments may be configured as a system that realizes functions by one or a plurality of computers executing instructions stored in one or a plurality of storage devices. Further, information transmitted from a terminal may be processed by one or a plurality of computers provided on the cloud, and the processing result may be transmitted to the terminal.
  • the various operations of the controller 6 in the above-described embodiment may be executed in parallel processing by using one or a plurality of processors or by using a plurality of computers via a network. Further, various operations may be distributed to a plurality of arithmetic cores in the processor and executed in parallel processing. In addition, some or all of the processes, means, etc. of the present disclosure may be executed by at least one of a processor and a storage device provided on the cloud capable of communicating with the computer 81 via the network. As described above, each device in the above-described embodiment may be in the form of parallel computing by one or a plurality of computers.
  • The processor 811 may be an electronic circuit (processing circuit, processing circuitry, CPU, GPU, FPGA, ASIC, or the like) including a control device and an arithmetic device of a computer. Further, the processor 811 may be a semiconductor device or the like including a dedicated processing circuit. The processor 811 is not limited to an electronic circuit using electronic logic elements, and may be realized by an optical circuit using optical logic elements. Further, the processor 811 may include an arithmetic function based on quantum computing.
  • the processor 811 can perform arithmetic processing based on the data and software (program) input from each device or the like of the internal configuration of the computer 81, and output the arithmetic result or the control signal to each device or the like.
  • the processor 811 may control each component constituting the computer 81 by executing an OS (Operating System) of the computer 81, an application, or the like.
  • the controller 6 in the above-described embodiment may be realized by one or a plurality of processors 811.
  • The processor 811 may refer to one or more electronic circuits arranged on one chip, or to one or more electronic circuits arranged on two or more chips or two or more devices. When a plurality of electronic circuits are used, each electronic circuit may communicate by wire or wirelessly.
  • the main storage device 812 is a storage device that stores instructions executed by the processor 811 and various data and the like, and the information stored in the main storage device 812 is read out by the processor 811.
  • the auxiliary storage device 813 is a storage device other than the main storage device 812. Note that these storage devices mean arbitrary electronic components capable of storing electronic information, and may be semiconductor memories.
  • the semiconductor memory may be either a volatile memory or a non-volatile memory.
  • the storage device for storing various data in the controller 6 in the above-described embodiment may be realized by the main storage device 812 or the auxiliary storage device 813, or may be realized by the built-in memory built in the processor 811.
  • the storage unit 102 in the above-described embodiment may be realized by the main storage device 812 or the auxiliary storage device 813.
  • A plurality of processors may be connected (coupled) to one storage device (memory), or a single processor may be connected.
  • A plurality of storage devices (memories) may be connected (coupled) to one processor.
  • The controller 6 in the above-described embodiments may include a configuration in which it is composed of at least one storage device (memory) and a plurality of processors connected (coupled) to the at least one storage device (memory), and at least one of the plurality of processors is connected (coupled) to the at least one storage device (memory). Further, this configuration may be realized by storage devices (memories) and processors included in a plurality of computers. Further, a configuration in which the storage device (memory) is integrated with the processor (for example, a cache memory including an L1 cache and an L2 cache) may be included.
  • the network interface 814 is an interface for connecting to the communication network 82 wirelessly or by wire.
  • the device interface 815 is an interface such as USB that directly connects to the external device 83b.
  • the external device 83a is a device connected to the computer 81 via a network.
  • the external device 83b is a device that is directly connected to the computer 81.
  • the external device 83a or the external device 83b may be an input device as an example.
  • the input device is, for example, a device such as a camera, motion capture, or various sensors, and gives the acquired information to the computer 81.
  • the external device 83a or the external device 83b may be a device having some functions of the components of each device (robot 1 or controller 6) in the above-described embodiment. That is, the computer 81 may transmit or receive a part or all of the processing result of the external device 83a or the external device 83b.
  • the external device 83a or the external device 83b may be at least one of the base 2, the arm 4, the end effector 41, the camera 5, and the controller 6 in the above-described embodiment.
  • When the expression "at least one of a, b and c" or "at least one of a, b or c" (including similar expressions) is used, it includes any of a, b, c, a-b, a-c, b-c, or a-b-c. It may also include multiple instances of any element, such as a-a, a-b-b, or a-a-b-b-c-c. It also includes adding elements other than the listed elements (a, b and c), such as a-b-c-d, which has d.
  • When it is described that the element A is configured to execute the operation B (including similar expressions), it includes the case where the physical structure of the element A can execute the operation B, and the case where a permanent or temporary setting (configuration) of the element A is set (configured) so as to actually execute the operation B.
  • For example, when the element A is a general-purpose processor, it suffices if the processor has a hardware configuration capable of executing the operation B and is configured to actually execute the operation B by a permanent or temporary program (instructions).
  • When the element A is a dedicated processor, a dedicated arithmetic circuit, or the like, it suffices if the circuit structure of the processor is implemented so as to actually execute the operation B, regardless of whether or not control instructions and data are actually attached.
  • When a plurality of pieces of hardware perform a predetermined process, the respective pieces of hardware may cooperate to perform the predetermined process, or some of the hardware may perform all of the predetermined process. Further, some hardware may perform a part of the predetermined process, and other hardware may perform the rest of the predetermined process.
  • When an expression such as "one or more pieces of hardware perform a first process and the one or more pieces of hardware perform a second process" is used, the hardware that performs the first process and the hardware that performs the second process may be the same or different. That is, the hardware that performs the first process and the hardware that performs the second process may be included in the one or more pieces of hardware.
  • the hardware may include an electronic circuit, a device including the electronic circuit, or the like.
  • Each storage device (memory) among the plurality of storage devices (memories) may store only a part of the data, or may store the entire data.

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Manipulator (AREA)

Abstract

According to one embodiment, the present invention relates to a robot that includes a base, an arm, an imaging unit, and a controller. The base is movable on a traveling surface. The arm is provided with an end effector that performs work on a target. The imaging unit captures an image of the target and outputs visual information relating to the target. The controller includes one or a plurality of processors, calculates the position of the target with respect to the base and a target configuration of the arm based on the visual information, and controls the movement of the base based on the position of the target with respect to the base and the target configuration of the arm.
PCT/JP2021/005770 2020-02-17 2021-02-16 Robot WO2021166918A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020-024777 2020-02-17
JP2020024777 2020-02-17

Publications (1)

Publication Number Publication Date
WO2021166918A1 (fr) 2021-08-26

Family

ID=77391928

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/005770 WO2021166918A1 (fr) 2020-02-17 2021-02-16 Robot

Country Status (1)

Country Link
WO (1) WO2021166918A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010214544A (ja) * 2009-03-17 2010-09-30 Toshiba Corp 移動マニピュレータの軌道生成システム
JP2012011531A (ja) * 2010-07-05 2012-01-19 Yaskawa Electric Corp ロボット装置およびロボット装置による把持方法
JP2015178141A (ja) * 2014-03-19 2015-10-08 トヨタ自動車株式会社 搬送ロボット及び搬送方法
JP2019185265A (ja) * 2018-04-05 2019-10-24 トヨタ自動車株式会社 動作計画装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 21757000
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 21757000
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: JP