WO2020044491A1 - Robot


Info

Publication number
WO2020044491A1
Authority
WO
WIPO (PCT)
Prior art keywords
control unit
camera
robot
image
installation surface
Prior art date
Application number
PCT/JP2018/032118
Other languages
English (en)
Japanese (ja)
Inventor
貴明 重光
Original Assignee
TechShare株式会社
Priority date
Filing date
Publication date
Application filed by TechShare株式会社
Priority to PCT/JP2018/032118
Priority to JP2020539946A (JP7055883B2)
Publication of WO2020044491A1

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00: Controls for manipulators
    • B25J13/08: Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • B25J19/00: Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02: Sensing devices
    • B25J19/04: Viewing devices

Definitions

  • the present invention relates to a technique for acquiring and processing an image in order to realize automation of a robot operation.
  • When a robot works, it is necessary to obtain information on the work target and its surroundings.
  • Methods are known in which an image is acquired using a camera and the image is processed to obtain information such as the shape and position of a work target and of obstacles around it, and the motion required for the robot to perform a desired work is then calculated.
  • In the technique described in Patent Document 1, a fixed camera 5 is installed in a state where it is fixed above a tray 7, which is the work target of the robot, and photographs the tray 7. The fixed camera coordinate system F is a known coordinate system. The positional shift amount of the work target is acquired using the image captured by the fixed camera 5, and this shift amount is used for controlling the robot.
  • In the technique described in Patent Document 2, the robot has two imaging units, a two-dimensional vision system 60 and a three-dimensional vision system 70, for processing a group of electric wires. According to the flowchart of FIG. 3 of that document, first image data D1, which is an image of a range including the entire wire group, is acquired using the two-dimensional vision system 60. A region (R2) determined based on the first image data D1 is then imaged by the three-dimensional vision system 70 to obtain second image data D2. From these data, the position and the like of the processing target are recognized, and an instruction is given to the processing robot.
  • In the technique described in Patent Document 3, a laser sensor 26 and a vision sensor 27 are provided at the tip of a robot 24 for sorting articles (FIG. 5 and the like). According to FIG. 12 (flowchart) and the like, the distance to the upper surface of each package is first acquired using the laser sensor 26, and the package with the highest upper surface is specified. After that, the outer shape information and the like of the specified package are acquired using the vision sensor 27.
  • In the technique described in Patent Document 1, the position information of the work target is obtained from an image captured by a camera fixed in the work environment of the robot. In the technique described in Patent Document 2 as well, the camera 62 for acquiring the first image data D1 in the first stage is supported by the camera support member 64 and is arranged so as to be able to image the region R1, which is the entire region of the electric wire group (paragraph 0036). In the technique described in Patent Document 3, both the laser sensor 26 and the vision sensor 27 are provided at the distal end of the robot, and their positions vary with the movement of the arm.
  • As described above, part of the conventional technology uses a fixed camera, and the camera must be installed at a position sufficiently higher than the robot in order to view the movable range of the robot widely.
  • In addition, since the camera is installed on a structure independent of the robot body, the positional relationship between the robot body and the installation surface and the positional relationship between the structure on which the camera is installed and the installation surface must be grasped in advance, and the robot must be operated on the premise of those relationships. During operation of the robot, it is also assumed that both the robot body and the structure on which the camera is installed are firmly fixed to the installation surface so that the positional relationships do not change.
  • Here, a relatively small robot is assumed: a robot that is not permanently or semi-permanently fixed in a work environment, but can be carried anywhere and used flexibly.
  • Such a small robot can be used simply by placing its pedestal on a floor surface, work table surface, or the like, and can then pick up and sort objects on the spot.
  • Alternatively, the robot may move autonomously over a wide area, picking up objects, sorting them onto different lines, and storing them in remote locations.
  • The problem to be solved by the present invention is to enable a portable robot or a mobile robot to obtain high-accuracy position information about its working environment without installing a fixed camera or the like.
  • In order to solve the above-described problems, a robot according to the present invention includes: a pedestal having a point in contact with an installation surface; a column vertically attached to the pedestal and rotatable around a vertical axis; an arm connected to the column and able to turn by means of one or more joints; a tip connected to the end of the arm and able to turn together with the arm; a plurality of first cameras provided at positions fixed to the column and capable of photographing the pedestal itself and the installation surface around the pedestal; and a control unit that acquires first images from the plurality of first cameras, performs coordinate transformation on the first images to generate partial plan views, aligns and combines the plurality of partial plan views to generate a plan view image including an image of the pedestal, and, based on the plan view image, controls the arm so as to change its orientation and move the tip to a predetermined position.
  • the above-mentioned robot further includes an irradiation unit fixedly provided to the distal end portion and irradiating light having a predetermined pattern in the installation surface direction.
  • The robot further includes a second camera provided at the distal end, and the control unit controls the arm so as to change its direction based on a second image acquired from the second camera.
  • the robot further includes an autonomous moving unit for autonomously moving on the installation surface.
  • The first camera and the second camera may photograph an object simultaneously, and the control unit controls the arm so as to change its direction based on the images obtained by the first camera and the second camera, respectively.
  • Alternatively, the first camera and the second camera photograph an object in different phases from each other, and for each of those phases the control unit performs control to change the direction of the arm based on the first image captured by the first camera and control to change the direction of the arm based on the second image captured by the second camera.
  • The robot further includes a plurality of the irradiation units, and at least some of the plurality of irradiation units irradiate the light in a direction different from that of the other irradiation units.
  • The control unit moves the tip so that the irradiation unit sequentially irradiates the light toward a plurality of predetermined positions on the installation surface, generates a plurality of plan view images corresponding to the timings at which the irradiation unit is irradiating the light toward each of those positions, and calculates a step or unevenness of the installation surface based on the positions of the light of the predetermined pattern included in the plurality of plan view images.
  • the robot of the present invention mounts the camera on the robot main body, so that no structure other than the robot is necessary for mounting the camera. Therefore, the robot can be operated by grasping only the positional relationship between the robot body and the installation surface. Furthermore, since the pedestal of the robot can be photographed together with the surrounding installation surface, the positional relationship between the robot body and the installation surface can be automatically grasped. Therefore, it is not necessary to perform an operation for setting the position even after moving the robot.
  • FIG. 1 is a side view illustrating a schematic configuration of the robot according to the first embodiment.
  • FIG. 2 is a schematic diagram illustrating a combined image generated based on an image captured by a first camera in the first embodiment.
  • FIG. 3 is a schematic diagram illustrating a combined image including a plurality of points irradiated on the installation surface by the laser irradiation unit in the first embodiment.
  • FIGS. 4 to 7 are flowcharts showing the procedure of the process for controlling the operation by which the robot according to the first embodiment grasps a target and carries it to the place position.
  • FIG. 15 is a perspective view illustrating an operation environment of a robot when an installation surface has a step in the fifth embodiment.
  • FIG. 15 is a schematic diagram illustrating a combined image generated based on an image captured by a first camera in a fifth embodiment.
  • FIG. 26 is a plan view corresponding to FIG. 25.
  • FIG. 26 is a schematic diagram in which the geometric relationship of each point in the side view of FIG. 25 is extracted.
  • FIG. 1 is a side view showing a schematic configuration of the robot according to the first embodiment.
  • The robot 1 includes a pedestal 11, a column 12, first cameras 14L and 14R, a first arm 15, a second arm 16, a first joint 21, a second joint 22, a third joint 23, a distal end portion (hand) 31, a laser irradiation unit 32, a finger 33, a second camera 34, and a control unit 40.
  • the left side of the figure is the front side of the robot 1, and the right side is the back side of the robot 1.
  • the robot 1 is a relatively small-sized device that can be carried by a person. The robot 1 can be used by placing it on the installation surface 5.
  • the bottom surface of the pedestal 11 is in contact with the installation surface 5. As long as no external force acts, the pedestal 11 is fixed at a predetermined position on the installation surface 5 and does not move.
  • the column 12 is a part of the arm, and the position with respect to the pedestal 11 is fixed. However, the column 12 can rotate around a rotation axis perpendicular to the installation surface 5.
  • first cameras 14L and 14R may be referred to as “stereo cameras”.
  • second camera 34 may be referred to as a “tip camera”.
  • the pedestal 11 supports the entire robot 1.
  • the bottom of the pedestal 11 is flat and can be placed on the installation surface 5. That is, the pedestal 11 has a point or a surface that is in contact with the installation surface 5.
  • The column 12 is fixed to the pedestal 11 and supports the first arm 15.
  • The first cameras 14L and 14R are cameras provided on the left and right sides of the column 12, respectively. In the present embodiment, the two first cameras 14L and 14R are fixed to the column 12. The first cameras 14L and 14R are provided so as to photograph the front side (the left side in FIG. 1) of the main body of the robot 1. The main axes of the respective lenses of the first cameras 14L and 14R face slightly downward toward the front side of the main body of the robot 1. Accordingly, the images captured by the first cameras 14L and 14R include a large portion of the installation surface 5. A part of the robot 1 itself (for example, a part of the pedestal 11, a part of the column 12, etc.) may also be included in the images captured by the first cameras 14L and 14R. That is, the first camera is provided at a position fixed with respect to the column 12, and is provided so as to be able to photograph the pedestal 11 itself and the installation surface 5 around the pedestal.
  • the number of first cameras may be three or more. As will be described later, the first camera is used for covering the entire periphery of the main body of the robot 1 as an imaging range. Therefore, in order to prevent a blind spot from occurring, a plurality of first cameras are provided. Further, at least a part of the plurality of first cameras may be fixed to the pedestal 11 instead of the support 12. In any of these cases, each of the plurality of first cameras acquires an image with a fixed angle of view with respect to the installation surface 5.
  • the plurality of first cameras may be collectively referred to as “first cameras 14”.
  • the first arm 15 and the second arm 16 are robot arms.
  • the first arm 15 and the second arm 16 are collectively simply referred to as “arm”.
  • the arm is directly or indirectly connected to the pedestal 11, and can be turned by one or more joints.
  • the first joint 21 is a joint that connects the column 12 and the first arm 15.
  • the second joint 22 is a joint that connects the first arm 15 and the second arm 16.
  • the third joint 23 is a joint that connects the second arm 16 and the distal end portion 31 (also referred to as a “hand”).
  • Each of the first joint 21, the second joint 22, and the third joint 23 enables rotation about a rotation axis parallel to the installation surface 5 or a rotation axis perpendicular to the installation surface 5. Any of the joints may rotate around a plurality of axes.
  • the robot 1 can move the distal end portion 31 to an arbitrary position within the movable range. Further, the robot 1 can direct the distal end portion 31 in an arbitrary direction within the movable range.
  • The state of each movable part of the robot 1, such as a joint (for example, its rotational displacement angle), can always be grasped by the control unit 40.
  • the robot 1 has two arms and three joints.
  • The number of arms and the number of joints are not limited to those illustrated here and may be arbitrary. An appropriate number of joints and arms may be used depending on the design and the like.
  • The tip (hand) 31 has the functions with which the robot 1 performs work. Specifically, a laser irradiation unit 32, a finger 33, and a second camera 34 are attached to the distal end portion 31. As described above, the distal end portion 31 can approach an object or the like (target) to be worked on by means of the arm and the joints. With the functions of the distal end portion 31, the robot 1 can acquire an image of the target, grasp the target, and carry the grasped target. That is, the distal end portion 31 is connected to the distal end of the arm via a joint or the like, and its orientation relative to the second arm 16 can be changed.
  • the laser irradiation unit 32 (irradiation unit) irradiates the laser beam mainly in a direction toward the installation surface 5 (perpendicularly or obliquely to the installation surface 5). Although only one laser irradiation unit 32 is shown in FIG. 1, the laser irradiation unit 32 may be provided at a plurality of locations. In addition, one laser irradiation unit 32 may irradiate laser beams in a plurality of directions simultaneously or in a time-division manner. As the laser irradiation unit 32, for example, a cross laser is used. The cross laser irradiates a cross-shaped pattern. That is, the laser irradiation unit 32 is provided fixed to the distal end portion 31 and irradiates light having a predetermined pattern in the (almost) installation surface direction.
  • the finger 33 is a member having a mechanism that can grasp an object to be worked.
  • the finger 33 can not only grasp an object but also push and move an object placed on the installation surface 5, for example. Further, the finger 33 may have a mechanism for adsorbing an object.
  • the second camera 34 is a camera fixedly attached to the distal end portion 31. While the above-described first camera acquires an image at a height position and an angle of view that are fixed with respect to the installation surface 5, the second camera 34 moves along with the movement of the distal end 31. Images can be obtained by approaching various places within the constraints of the movable range.
  • the second camera 34 normally captures an image in a downward direction (in a direction substantially facing the installation surface 5).
  • The image (second image) photographed by the second camera 34 can be used for grasping the position of an object or the state of the finger 33 (whether or not the finger 33 is grasping the object). Based on this information, the movement and operation of the arm of the robot 1 are planned. Although only one second camera 34 is shown in FIG. 1, a plurality of second cameras 34 may be provided at the distal end portion 31.
  • the robot 1 acquires visual information from the images captured by the first camera 14 and the second camera 34. Note that the robot 1 may acquire information other than visual information by having various sensors and the like.
  • the control unit 40 has a function for controlling the entire robot 1.
  • the control unit 40 is realized by, for example, an electronic circuit.
  • the control unit 40 may be realized using a computer (a microcomputer chip or the like).
  • a function for controlling the robot 1 can be described by a program.
  • the program may be stored in a storage unit (for example, a semiconductor memory or the like) of the control unit 40 and the computer may execute the program.
  • The position where the control unit 40 is provided is arbitrary; for example, the control unit 40 can be housed inside the pedestal 11 as illustrated. Further, the control unit 40 may be composed of a plurality of devices.
  • For example, a hardware board specialized for controlling the arm of the robot 1 may be housed inside the pedestal 11, while the image processing of the camera images and the arm control logic are realized in software on a computer outside the robot 1; in that configuration, the robot 1 communicates with the external computer to realize the processing of the control unit 40.
  • When the robot 1 operates, the control unit 40 obtains images captured by the first camera 14 and the second camera 34 and obtains signals from various sensors provided at various parts of the robot 1. The control unit 40 makes necessary determinations based on this information and these signals and, as a result, outputs control signals for operating each member of the robot 1.
  • the control unit 40 outputs, for example, an instruction signal for bringing the first joint 21, the second joint 22, and the third joint 23 into a desired state.
  • the control unit 40 outputs an instruction signal so that the laser irradiation unit 32 and the finger 33 included in the distal end portion 31 perform a predetermined operation.
  • The control unit 40 obtains first images from the plurality of first cameras, generates partial plan view images by performing coordinate transformation on each first image, aligns the plurality of partial plan views, and combines them to generate a plan view image including an image of the pedestal.
  • the control unit 40 controls to change the direction of the arm in order to move the distal end portion 31 to a predetermined position based on the plan view image.
  • the control unit 40 controls to change the direction of the arm based on the second image acquired from the second camera 34.
  • the control unit 40 generates an image corresponding to one plan view based on the images acquired by the plurality of first cameras 14L and 14R. For this purpose, the control unit 40 first performs coordinate transformation (affine transformation) on each of the images obtained from the first cameras 14L and 14R. Then, the control unit 40 generates one large image by combining (pasting) the images subjected to the coordinate conversion.
  • FIG. 2 is a schematic diagram showing one plan view image generated by the control unit 40 based on the images captured by the first cameras 14L and 14R, respectively.
  • reference numeral 101L is a partial plan view obtained by performing coordinate conversion on an image captured by the first camera 14L.
  • 101R is a partial plan view obtained by performing coordinate conversion on an image captured by the first camera 14R.
  • Each of the first cameras 14L and 14R captures an image with a lens directed obliquely downward, but by using a coordinate transformation that is an existing technique, an image of each camera can be used as a part of a plan view.
  • Such images are referred to as coordinate-transformed images 101L and 101R, respectively.
  • Each of the coordinate-transformed images 101L and 101R corresponds to an image of the installation surface 5 as viewed in plan from above.
  • The control unit 40 aligns the coordinate-transformed images 101L and 101R at appropriate positions and, for the region where the two images overlap, appropriately blends them or cuts away part of an image as necessary, thereby obtaining the combined image 101. In the combined image 101, a part of the pedestal 11 is reflected, and the installation surface 5 occupies most of the image. When objects or the like are placed within the photographing range on the installation surface 5, a plan view of those objects is also included in the combined image 101.
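As a rough illustration of this "around view" compositing, the following sketch assumes that each first camera has a precomputed 3x3 homography (obtained by calibration) mapping its image onto a common top-down ground-plane grid; the function name, the calibration-file names, and the simple maximum blend in the overlap region are illustrative assumptions, not details taken from the patent.

```python
import cv2
import numpy as np

def make_plan_view(images, homographies, plan_size=(800, 800)):
    """Warp each first-camera image onto a common top-down grid and combine.

    images       : list of BGR frames from the first cameras (e.g. 14L, 14R)
    homographies : list of 3x3 matrices mapping each image to the plan view
    plan_size    : (width, height) of the combined plan view image in pixels
    """
    plan = np.zeros((plan_size[1], plan_size[0], 3), dtype=np.uint8)
    for img, H in zip(images, homographies):
        warped = cv2.warpPerspective(img, H, plan_size)
        # Simple combination rule for overlapping regions: keep the brighter pixel.
        plan = np.maximum(plan, warped)
    return plan

if __name__ == "__main__":
    img_l = cv2.imread("first_camera_14L.png")   # hypothetical captured frames
    img_r = cv2.imread("first_camera_14R.png")
    H_l = np.load("homography_14L.npy")          # assumed calibration results
    H_r = np.load("homography_14R.npy")
    cv2.imwrite("plan_view.png", make_plan_view([img_l, img_r], [H_l, H_r]))
```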
  • In the present embodiment, the first cameras 14L and 14R are arranged on the left and right of the column 12 of the robot 1, respectively, but the arrangement of the plurality of first cameras 14 is not limited to left and right.
  • the number of first cameras 14 used is not limited to two, and for example, three or more first cameras may be used.
  • For example, when a total of four first cameras are provided at the front, rear, left, and right of the robot, the front, rear, left, and right may be photographed by those cameras and the captured images combined into a single plan view image.
  • the synthesized image 101 may cover the entire circumference (360 degrees in a plane) of the robot 1 or may cover only a part of the angle range.
  • the combined image 101 illustrated in FIG. 2 is a plan view image mainly showing the front (partly including the left and right sides).
  • the synthesized image 101 may be an image in which only a range related to the work of the robot 1 is captured.
  • the parameters (coefficients of the transformation matrix, etc.) when the control unit 40 performs the coordinate transformation may be set in advance for each robot 1, for example.
  • Alternatively, a marker indicating a reference position may be placed on the installation surface 5 to be photographed, and the coordinate conversion parameters may be determined dynamically based on the coordinate position of the marker in the captured image.
  • As the marker, for example, a barcode, a two-dimensional code, or the like is used. The same applies to the case where a specific object, place, or the like is recognized using a sign.
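As a sketch of how such markers could be used to determine the coordinate-conversion parameters dynamically, assume the pixel coordinates of four reference markers have already been detected in a first-camera image and that their true positions on the installation surface, expressed in plan-view coordinates, are known; the numeric values below are placeholders.

```python
import cv2
import numpy as np

# Detected pixel coordinates of four markers in the first-camera image (placeholder values).
marker_px = np.float32([[412, 655], [958, 640], [990, 910], [380, 930]])

# Known positions of the same markers in the plan-view coordinate system.
marker_plan = np.float32([[100, 100], [700, 100], [700, 500], [100, 500]])

# The resulting 3x3 homography plays the role of the coordinate-conversion parameters.
H, _ = cv2.findHomography(marker_px, marker_plan)
print(H)
```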
  • The coordinate transformation processing by which the control unit 40 generates (partial images of) the plan view image based on the images captured by the plurality of first cameras may be referred to as "around view processing".
  • the control unit 40 creates a plan view image based on the images captured by the plurality of first cameras 14.
  • the robot 1 can acquire information of a plan view of a relatively wide range around the robot 1 (for example, a range covering the movable region of the distal end portion 31) without a blind spot.
  • the information of the plan view obtained in this way includes information for planning the work to be executed by the robot 1, and the robot 1 can smoothly proceed with the work.
  • The column 12 can rotate around an axis perpendicular to the installation surface, and the control unit 40 can also grasp its rotation direction based on the plan view image.
  • The control unit 40 grasps the direction of the pedestal 11, the direction of each arm (the first arm 15 and the second arm 16), and the direction of the tip 31 based on the generated plan view image. Specifically, the control unit 40 grasps the orientation of the pedestal 11 within the surrounding environment in the generated plan view image. In other words, the control unit 40 grasps the positional relationship between peripheral objects and the like (which may include signs indicating known positions) and the pedestal 11 of its own robot 1 in the generated plan view image. The control unit 40 then grasps the direction of the first arm 15 from the position of the pedestal 11 and the state of the first joint 21 (the displacement angle about the rotation axis of the joint).
  • the control unit 40 grasps the direction of the second arm 16 from the direction of the first arm 15 and the state of the second joint 22 (the displacement angle about the rotation axis of the joint). Then, the control unit 40 grasps the direction of the distal end portion 31 from the direction of the second arm 16 and the state of the third joint 23 (the displacement angle about the rotation axis of the joint). Since the lengths of the first arm 15 and the second arm 16 are fixed, the control unit 40 can also grasp the positions of the second joint 22 and the third joint 23. Further, the control unit 40 can grasp the position of the distal end 31 based on the position of the third joint 23. The control unit 40 can appropriately store the direction of each arm, the position of each joint, and the position and direction of the distal end portion 31 obtained by the above method.
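The chained reasoning described above can be illustrated with a minimal forward-kinematics sketch that treats the arm as a planar two-link chain seen in the plan view; the link lengths, angle conventions, and example values are assumptions for illustration, not values from the patent.

```python
import math

def tip_pose(base_xy, base_yaw, joint1, joint2, joint3, len1=0.25, len2=0.20):
    """Propagate pose from the pedestal through the joints to the tip.

    base_xy  : (x, y) of the pedestal in the plan view frame
    base_yaw : rotation of the column about the vertical axis [rad]
    joint1-3 : displacement angles of the first, second and third joints [rad]
    len1/2   : assumed lengths of the first and second arms [m]
    """
    x, y = base_xy
    a1 = base_yaw + joint1                                        # direction of the first arm
    x2, y2 = x + len1 * math.cos(a1), y + len1 * math.sin(a1)     # position of the second joint
    a2 = a1 + joint2                                              # direction of the second arm
    x3, y3 = x2 + len2 * math.cos(a2), y2 + len2 * math.sin(a2)   # position of the third joint
    tip_heading = a2 + joint3                                     # direction of the tip (hand)
    return (x3, y3), tip_heading

# Example: pedestal at the plan-view origin, column rotated 30 degrees.
print(tip_pose((0.0, 0.0), math.radians(30), 0.2, -0.4, 0.1))
```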
  • the control unit 40 can grasp the position of the target (the object or the place to be worked) based on the position of the pedestal 11 of the robot 1 itself.
  • the control unit 40 can grasp the position of the target in the generated plan view image by learning the characteristics of the target as an image in advance.
  • the user may be able to specify the position of the target using a GUI (graphical user interface).
  • The control unit 40 displays the generated plan view image on the display of a terminal device operated by the user, and the user indicates the position of the target on the plan view image, whereby the control unit 40 comes to know the position of the target.
  • the control unit 40 can grasp the position of the target with reference to the own robot 1 with a predetermined accuracy. That is, the control unit 40 can make a plan to move, for example, the distal end portion 31 to a position near the target based on the position of the target that can be grasped.
  • the laser irradiator 32 provided on the tip 31 irradiates a laser.
  • the second camera 34 provided at the distal end portion 31 can also take an image in a state where the laser is irradiated.
  • the laser irradiating section 32 may irradiate a laser having a predetermined pattern (a laser that irradiates a projection surface with a predetermined shape).
  • the laser irradiating section 32 may irradiate only one kind of laser, or may irradiate the laser from a plurality of positions or at a plurality of angles.
  • only one second camera 34 may be provided, or a plurality of second cameras 34 may be provided.
  • the control unit 40 estimates the distance to the target object and the direction of the target object based on the laser light projected on the target object (target). At this time, the control unit 40 may refer to information on the shape of the target object acquired in advance.
  • the control unit 40 slightly changes the position and the direction of the distal end portion 31 by moving the arm by a predetermined amount.
  • The control unit 40 can then re-estimate the distance and direction to the target object based on the change in the image before and after moving the distal end portion 31 (including changes in the shape or size of the laser light projected on the target object).
  • The control unit 40 calculates the amount of arm movement for bringing the tip 31 closer to the target object based on information such as the distance and direction to the target object obtained by analyzing the image of the second camera 34. Further, when grasping the target object, the control unit 40 calculates the amount by which the third joint 23 is moved based on the appropriate orientation of the distal end portion 31 for that purpose. In addition, the control unit 40 calculates the amount of movement of the finger 33 for grasping the target object, if necessary. Then, the control unit 40 grasps the target object by actually performing control to move each joint and the finger based on the calculated values.
  • After performing the grasping operation, the control unit 40 further instructs the second camera 34 to capture an image, and acquires the image captured by the second camera 34.
  • images may be obtained from each of the plurality of cameras.
  • the control unit 40 can determine whether the finger 33 has been able to grasp the target object by analyzing the image. Note that the determination may be made by the first camera instead of by the second camera. Alternatively, a limit switch or a tactile sensor may be mounted on the finger 33, and whether or not the target object can be grasped may be determined based on an output signal thereof.
  • the control for the operation of moving the object from the state where the finger 33 of the robot 1 is grasping the object to the place position is as follows.
  • the “place” is a place designated for placing an object or the like, or a container or the like for storing the object or the like. Since the position where the finger 33 grasps the target object is almost the same height position as the installation surface, the arm is raised to a certain height position so as not to interfere with the installation surface and surroundings when moving the target object.
  • the first cameras 14L and 14R perform photographing.
  • the control unit 40 acquires images from the first cameras 14L and 14R, and generates a plan view image using those images. The method for generating the plan view image is as described above.
  • control unit 40 grasps the place position based on the generated plan view image.
  • the control unit 40 has learned the features of the image at the place position in advance, and automatically recognizes the place from the plan view image.
  • the user may be able to specify the location of the place using the GUI.
  • the control unit 40 displays the plan view image on the display of the terminal device, and acquires information on the position on the plan view image specified by the user.
  • the control unit 40 makes a plan to move the tip 31 from the current position to the upper part of the place position.
  • the process of creating the movement plan is the same as the above-described process of planning to move the distal end portion 31 to the upper part of the target object.
  • the control unit 40 moves the distal end portion 31 to an upper part of the place position by controlling the movement of the arm based on the created plan. This movement is a movement parallel to the xy plane.
  • the control unit 40 obtains an image captured by the second camera, and analyzes the image to confirm whether or not the distal end portion 31 is located above the place position. If the confirmation is successful, the control unit 40 performs control for storing the object at the place position. That is, the control unit 40 controls the finger 33 so as to store the object at that position (that is, release the grasped object) after performing control to lower the distal end portion 31 to the place position.
  • the robot 1 may perform not only the operation of grasping and releasing the object using the finger 33 but also the operation of, for example, pushing and moving the placed object. Further, the robot 1 may cause the object to be sucked using the suction mechanism of the finger 33, or may move the object in a state of being sucked. Also in this case, the control unit 40 appropriately moves the distal end portion 31 and operates the finger 33 appropriately.
  • FIG. 3 is a schematic diagram showing a state of irradiation by the laser irradiation unit 32.
  • the figure shows a situation in which two laser irradiation units 32 irradiate the installation surface 5 with two irradiation lights P1 and P2, respectively.
  • P1 and P2 are laser beams irradiated in a point-like or substantially point-like manner.
  • the distance between the point P1 and the point P2 on the installation surface 5 is defined as d.
  • the first cameras 14L and 14R can capture an image including these points P1 and P2 irradiated on the installation surface 5, and the control unit 40 can acquire the images.
  • FIG. 3 shows a synthesized image obtained by synthesizing those images.
  • the distance d on the installation surface 5 is obtained by performing an appropriate scale process on the image acquired by the control unit 40.
  • Based on this distance, the control unit 40 can calculate the amount of movement in the height direction when moving the distal end portion 31, regardless of whether or not the finger 33 is holding the target object.
  • the control unit 40 controls the robot to sequentially move the tip 31 to a plurality of preset positions on the installation surface 5.
  • At each set position, the control unit 40 causes the laser irradiation unit 32, which is provided so as to irradiate downward (perpendicular to the installation surface 5), to emit the laser beam.
  • the control unit 40 acquires the images captured by the first cameras 14L and 14R each time. Based on the photographed images, they can be combined by performing coordinate transformation and positioning to obtain a combined image. Irradiation light points should appear at the same position in all the synthesized images obtained by performing the above operation at each set position. However, when there is a step or unevenness on the installation surface 5, the position of the irradiation light point is shifted.
  • the control unit can detect a deviation in the height of the installation surface 5 by performing a process of superimposing the combined images obtained at the respective setting positions.
  • the unevenness of the installation surface 5 can be known.
  • the control unit 40 can determine the height at which the target is grasped by this calculation.
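A simplified sketch of this superposition check is shown below; it assumes the laser spot appears as the brightest red region in each combined plan view image and that the nominal spot positions for a flat installation surface are known from calibration. The detection heuristic, the tolerance, and the form of the report are illustrative assumptions only.

```python
import cv2
import numpy as np

def laser_point(plan_view_bgr):
    """Return the pixel position of the laser spot, assumed to be the
    brightest red region in a combined plan view image."""
    b, g, r = cv2.split(plan_view_bgr.astype(np.float32))
    redness = r - 0.5 * (g + b)
    _, _, _, max_loc = cv2.minMaxLoc(redness)
    return np.array(max_loc, dtype=np.float32)

def surface_deviations(plan_views, nominal_points, tol_px=3.0):
    """Compare the detected laser point in each plan view image with the
    position expected for a flat installation surface."""
    report = []
    for img, nominal in zip(plan_views, nominal_points):
        shift = np.linalg.norm(laser_point(img) - np.float32(nominal))
        report.append({"expected": nominal, "shift_px": float(shift),
                       "step_suspected": shift > tol_px})
    return report
```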
  • FIG. 4, FIG. 5, FIG. 6, and FIG. 7 are flowcharts showing the procedure of processing for controlling the operation of the robot 1 to grasp the target object and carry it to the place position. 4, 5, 6, and 7 are one flowchart combined by a connector. Hereinafter, the processing procedure will be described with reference to this flowchart.
  • In step S1 of FIG. 4, the control unit 40 acquires images captured by the first cameras 14L and 14R.
  • In step S2, the control unit 40 performs the around-view process (coordinate conversion process) on each image acquired in step S1 and combines those images. The control unit 40 thereby generates a plan view image of the periphery of the robot 1.
  • In step S3, the control unit 40 recognizes the pedestal 11 included in the plan view image generated in step S2.
  • the control unit 40 learns features of the pedestal 11 as an image in advance.
  • the control unit 40 recognizes the pedestal 11 by recognizing a sign (characteristic image) attached to a predetermined portion of the pedestal 11.
  • the control unit 40 previously acquires the approximate coordinate position of the pedestal 11 in the image captured by each of the first cameras (14L and 14R).
  • the control unit 40 may use a combination of a plurality of the methods exemplified here to recognize the pedestal 11.
  • In step S4, the control unit 40 uses the above-described method to calculate and store the directions and angles of the arms (the first arm 15 and the second arm 16), the direction of the column 12 (the rotation direction about its rotation axis), and the position and orientation of the distal end portion 31 of the robot 1.
  • the control unit 40 calculates the direction of each arm and the position and direction of the distal end portion 31 as a relative direction or position with respect to the pedestal 11. Since the pedestal 11 in the plan view image is recognized in step S3, the control unit 40 can grasp the direction of each arm and the position and direction of the distal end portion 31 based on the coordinates of the plan view image.
  • step S5 the control unit 40 recognizes a target (object or place) to be operated in the plan view image.
  • the control unit 40 can recognize the target in the plan view image using the characteristics of the target acquired in advance. Further, the control unit 40 may recognize the target by a user's designation (designation from the GUI).
  • step S6 the control unit 40 performs a movement plan of the arm and the distal end portion 31.
  • the control unit 40 plans the movement of the arm and the distal end portion 31 based on the current positions of the arm and the distal end portion 31 already known and the position of the target. For example, the control unit 40 plans the movement of the arm so that the tip 31 is moved above the target position by moving the tip 31 in parallel with the xy plane (installation surface 5).
  • the control unit 40 may perform the calculation using the rectangular coordinates (x-axis and y-axis), or may perform the calculation using the polar coordinates having the predetermined position of the pedestal 11 as a pole.
  • the “upper part of the target position” does not necessarily have to be a position directly above the target (a position moved upward from the target position on the xy plane perpendicularly to the xy plane).
  • a position at which the second camera 34 attached to the distal end portion 31 can easily recognize the target or a position at which the finger 33 can easily operate the target may be defined as “above the target position”.
  • the control unit 40 performs control for moving the arm based on the plan established in step S6. As a result, the tip 31 moves above the target position unless some unexpected situation occurs (for example, the robot 1 itself is lifted and its position is changed).
  • step S8 the control unit 40 acquires an image captured by the second camera 34.
  • step S9 the control unit 40 analyzes the image acquired in step S8, and recognizes a target in the image. Further, the control unit 40 grasps the position of the target in the image. The reason for recognizing the target in this step is to check whether or not the tip 31 is actually located above the target position targeted in the movement plan set in step S6.
  • In step S10, the control unit 40 determines whether or not the current position of the distal end portion 31 is appropriate with respect to the target, based on the position of the target grasped in step S9 (the position in the image captured by the second camera 34). If the position of the distal end portion 31 (the position of the arm) is appropriate (step S10: YES), the process proceeds to step S12. If the position of the distal end portion 31 is not appropriate (step S10: NO), the process proceeds to step S11 to correct the position.
  • In step S11, the control unit 40 determines whether or not the target is within the field of view of the second camera, that is, whether or not the target is present in the image acquired from the second camera 34 in step S8. If the target is within the field of view of the second camera (step S11: YES), the process proceeds to step S6 in order to correct the position by moving the arm again. If the target is not within the field of view of the second camera (step S11: NO), the process returns to step S1 to start over from capturing the target with the first camera.
  • step S12 the controller 40 controls the laser irradiator 32 to irradiate the laser.
  • step S13 the second camera 34 captures an image while the laser is being irradiated.
  • the control unit 40 acquires the image from the second camera 34.
  • step S14 the control unit 40 performs a process of estimating the three-dimensional structure of the target based on the image acquired in step S13.
  • the control unit 40 performs processing for estimating the distance to the target object, the direction of the object, and the like.
  • step S15 the control unit 40 determines whether or not the three-dimensional structure of the target can be estimated by the processing in step S14. If the three-dimensional structure can be estimated (step S15: YES), the process proceeds to step S17. If the three-dimensional structure cannot be estimated (step S15: NO), the process proceeds to step S16 in order to change the position of the distal end portion 31 and obtain an image again.
  • step S16 the control unit 40 controls to lower the arm slightly. In other words, the control unit 40 performs control to move the joint so that the position of the distal end portion 31 is slightly (a predetermined amount) lower than before. Accordingly, since the distal end portion 31 is closer to the target object, it is expected that a better image (image by the second camera 34) can be obtained. After step S16, the process returns to step S12.
  • step S17 the control unit 40 calculates a height and an angle for grasping the target object based on the result estimated in the processing in step S14. Based on the calculation result, the control unit 40 calculates the displacement amount of each joint for moving the distal end portion 31 to a position where the target object is grasped.
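As an illustration of how the displacement amount of each joint could be computed from a desired tip position, a textbook two-link inverse-kinematics sketch in the vertical plane containing the arm is given below; the link lengths, angle conventions, and example values are assumptions, and the patent itself does not specify the kinematic model.

```python
import math

def two_link_ik(r, z, len1=0.25, len2=0.20):
    """Joint angles (one of the two possible solutions) that place the end of
    a two-link arm at the point (r, z) in the vertical plane of the arm.

    r, z      : horizontal distance from the column and height of the target
    len1/len2 : assumed lengths of the first and second arms [m]
    """
    d2 = r * r + z * z
    cos_j2 = (d2 - len1**2 - len2**2) / (2 * len1 * len2)
    if abs(cos_j2) > 1.0:
        raise ValueError("target out of reach")
    j2 = math.acos(cos_j2)                               # second-joint angle
    j1 = math.atan2(z, r) - math.atan2(len2 * math.sin(j2),
                                       len1 + len2 * math.cos(j2))
    return j1, j2

# Example: bring the tip 0.30 m forward and 0.05 m above the shoulder height.
print(two_link_ik(0.30, 0.05))
```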
  • step S18 the control unit 40 controls the movement of each joint based on the result obtained in step S17, and moves the tip 31 to a desired position by moving the arm. Then, the control unit 40 controls the finger 33 to perform an operation of grasping the target object.
  • step S19 the control unit 40 acquires an image captured by the second camera 34, and analyzes the image.
  • the control unit 40 specifically analyzes whether or not the finger 33 has successfully grasped the target object.
  • step S20 the control unit 40 determines whether or not the target has been successfully grasped based on the analysis result in step S19. If the target has been successfully grasped (step S20: YES), the process proceeds to step S22. If the target has not been successfully grasped (step S20: NO), the process proceeds to step S21.
  • In step S21, the control unit 40 moves the arm so as to raise the distal end portion 31 (away from the target object), and then returns to step S12 to execute the series of procedures for grasping the target again.
  • In step S22, the control unit 40 performs control to move the arm so as to raise the distal end portion 31 with the finger 33 grasping the target object.
  • The processing from step S23 in FIG. 6 onward is control for carrying the object grasped by the robot 1 to the place position and placing it there.
  • the control unit 40 acquires an image captured by the first camera 14.
  • step S24 the control unit 40 performs a coordinate conversion process (around view process) and a synthesis process based on the image acquired in step S23, and generates a plan view image.
  • step S25 the control unit 40 recognizes a place position for placing the grasped object on the plan view image generated in step S24. Specifically, the control unit 40 automatically recognizes the place position using the features of the place position acquired in advance. Alternatively, the control unit 40 grasps information on the place position specified by the user through the GUI. Alternatively, the control unit 40 may be given coordinate information of the place position with respect to the pedestal 11 in advance.
  • step S26 the control unit 40 creates a plan for moving the distal end portion 31 from the current position to above the place position.
  • the movement here is to move the tip portion 31 in a direction parallel to the xy plane.
  • the control unit 40 obtains an amount by which each joint is displaced.
  • step S27 the control unit 40 performs control to move the distal end portion 31 based on the plan created in step S26.
  • step S28 the control unit 40 acquires an image captured by the second camera 34.
  • step S29 the control unit 40 performs a process of recognizing the position of the place based on the image acquired in step S28.
  • the control unit 40 automatically recognizes the place position using the features of the place position acquired in advance.
  • In step S30, the control unit 40 determines whether or not the positions of the arm and the distal end portion 31 are appropriate with respect to the position of the place recognized in step S29. If the positions of the arm and the distal end portion 31 are appropriate (step S30: YES), the process proceeds to step S32. If the positions of the arm and the distal end portion 31 are not appropriate (step S30: NO), the process proceeds to step S31.
  • step S31 the control unit 40 determines whether or not the position of the place is within the field of view of the second camera as a result of the recognition processing in step S29. In other words, the control unit 40 determines whether or not the place is included in the image acquired from the second camera 34 in step S28. If the position of the place is within the field of view of the second camera (step S31: YES), the process moves to step S26 in order to move the distal end portion 31 again. If the position of the place is not within the field of view of the second camera (step S31: NO), the process proceeds to step S23 in order to start over from the process of re-capturing a wide area image with the first camera 14.
  • step S32 the control unit 40 lowers the arm so that the distal end portion 31 approaches the place position.
  • step S33 the control unit 40 stores the target object (target) in the place by controlling the finger 33 to release the target object held by the finger 33.
  • After step S33, the entire control of the series of movements by which the robot 1 grasps the target and stores it at the place (the flowchart from FIG. 4 to FIG. 7) ends.
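Condensing the flowchart of FIGS. 4 to 7 into Python-style pseudocode gives the following sketch of the overall control flow. Every method named here (generate_plan_view, recognize_target, move_arm_above, and so on) is a hypothetical placeholder for the processing described in the corresponding steps, not an API defined in the patent.

```python
def pick_and_place(robot):
    # Steps S1-S11: locate the target with the first cameras and bring the
    # tip above it, checking the result with the second (tip) camera.
    positioned = False
    while not positioned:
        plan_view = robot.generate_plan_view()          # S1-S2 (around view)
        robot.recognize_pedestal_and_arm(plan_view)     # S3-S4
        target = robot.recognize_target(plan_view)      # S5
        while True:
            robot.move_arm_above(target)                # S6-S7
            image = robot.second_camera_image()         # S8
            if robot.tip_above_target(image):           # S9-S10: YES
                positioned = True
                break                                   # go on to S12
            if not robot.target_in_view(image):         # S11: NO
                break                                   # start over from S1

    # Steps S12-S22: estimate the target's 3D pose with the laser, grasp it,
    # and lift it; on failure, raise the arm and retry from S12.
    while True:
        robot.irradiate_laser()                         # S12
        image = robot.second_camera_image()             # S13
        if not robot.estimate_3d_structure(image):      # S14-S15
            robot.lower_arm_slightly()                  # S16
            continue
        robot.move_to_grasp_pose()                      # S17-S18 (lower and orient the tip)
        robot.grasp()                                   # S18 (close the finger 33)
        grasped = robot.grasp_succeeded()               # S19-S20 (check with the tip camera)
        robot.raise_arm()                               # S21 or S22
        if grasped:
            break

    # Steps S23-S33: find the place position in a new plan view, carry the
    # object above it, lower the arm, and release the object.
    plan_view = robot.generate_plan_view()              # S23-S24
    place = robot.recognize_place(plan_view)            # S25
    robot.move_arm_above(place)                         # S26-S27
    # (the confirmation loop of S28-S31 mirrors S8-S11 and is omitted here)
    robot.lower_arm_to(place)                           # S32
    robot.release()                                     # S33
```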
  • The first camera and the second camera photograph the object in different phases from each other, and for each of those phases the control unit performs control to change the direction of the arm based on the first image captured by the first camera and control to change the direction of the arm based on the second image captured by the second camera.
  • the plurality of first cameras 14 capture a wide area image.
  • the control unit 40 performs coordinate conversion on the image captured by the first camera to obtain a partial image of the plan view (around view processing). Further, the control unit 40 combines the plurality of partial images to generate one plan view image. By synthesizing a plan view image from a plurality of images, a plan view image without blind spots by the robot 1 itself (such as the pedestal 11) can be obtained. That is, the control unit 40 can grasp the situation around the robot 1. Further, the control unit 40 recognizes the pedestal 11 in the plan view image.
  • The control unit 40 can grasp the surrounding situation, the position and direction of the pedestal of the robot 1 itself, the direction of each arm, and the position and direction of the distal end portion 31 in association with each other. That is, the control unit 40 can accurately grasp the position information of the robot 1 and its surroundings. For example, even when the user carries the robot 1 or changes its direction, the control unit 40 of the robot 1 can automatically grasp its own position without relying on a camera or the like fixedly installed in the vicinity (work environment). Even if the position of the robot 1 is changed, it is therefore not necessary to perform a readjustment operation.
  • control unit 40 recognizes an operation target object, a place, and the like using an image captured by the second camera fixed to the distal end portion 31. That is, there is no need to perform an operation for setting the positional relationship between the robot and the camera.
  • the robot 1 uses the first camera that captures a relatively wide area and the second camera that is fixedly installed at the distal end portion 31 of the arm. Accordingly, the robot 1 can finely recognize the operation target object or the like at the front end while seeing and grasping the entire periphery. That is, the robot 1 can perform an accurate operation.
  • the resolution and accuracy of the first camera 14 need only be such that the position where the target object exists can be recognized from a long distance.
  • the resolution and accuracy of the second camera 34 need only be such that the shape of the target object can be recognized from a short distance.
  • This expands the degree of freedom in selecting camera modules. Approaching an operation target object to grasp its detailed situation and broadly looking over the entire surroundings are conflicting requirements.
  • The robot 1 of the present embodiment can satisfy both of these requirements without requiring a fixed camera to be installed. Further, even when the target object goes out of the field of view of a camera and the robot 1 loses sight of it, the camera attached to the arm can move with the arm and capture the target object again. Even if a blind spot occurs, it may be possible to eliminate it by moving the arm.
  • automation of robot operation can be realized without impairing the portability and mobility of a relatively small robot.
  • A feature of the second embodiment is that a plurality of laser irradiation units are provided at the distal end portion 31, and at least one of the plurality of laser irradiation units irradiates a laser in a direction different from that of the other laser irradiation units.
  • FIG. 8 is a schematic diagram (side view) showing a detailed configuration of the distal end portion 31b which is a characteristic portion according to the present embodiment.
  • This figure shows the relationship between the arrangement of the plurality of laser irradiation units and the surface on which the laser is irradiated in the present embodiment.
  • the distal end portion 31b includes a plurality of laser irradiation units. Specifically, the distal end portion 31b has laser irradiation portions 32-1 and 32-2. Further, the distal end portion 31b includes a second camera 34. Note that the finger 33 provided on the distal end portion 31b is omitted in this figure.
  • the direction of irradiation by the laser irradiation unit 32-1 (shown by a broken line A) is different from the direction of irradiation by the laser irradiation unit 32-2 (shown by a broken line B). That is, at least one of the plurality of laser irradiation units irradiates the laser in a direction different from that of the other laser irradiation units.
  • The laser beams from these two laser irradiation units 32-1 and 32-2 are projected on the installation surface 5. The distance on the installation surface between the points irradiated by the two lasers varies depending on the distance between the tip 31b and the installation surface 5.
  • FIG. 9 and FIG. 10 are schematic diagrams illustrating examples of the patterns that the laser irradiation units illustrated in FIG. 8 project on the installation surface 5.
  • FIG. 9 shows the pattern projected on the installation surface 5 when the laser irradiation units and the installation surface are separated by a certain distance. FIG. 10 shows the pattern projected on the installation surface 5 when the laser irradiation units are closer to the installation surface than in FIG. 9.
  • In each figure, the upper cross is the pattern irradiated by the laser irradiation unit 32-2, and the lower cross (a cross formed by a vertical line and a horizontal line) is the pattern irradiated by the laser irradiation unit 32-1. In FIG. 10, the cross shape indicated by a broken line indicates the position where the upper cross was projected in FIG. 9.
  • FIG. 11 is a schematic diagram (side view) showing a detailed configuration of the distal end portion 31b.
  • This figure shows the relationship between the arrangement of the plurality of laser irradiation units and the surface on which the laser is irradiated in the present embodiment.
  • the tip 31b is provided with two laser irradiation units 32-1 and 32-2.
  • the distance between the light sources of the laser irradiation units 32-1 and 32-2 is x.
  • a laser beam (light beam A) is irradiated from directly below the laser irradiation unit 32-1 (perpendicularly to the installation surface 5).
  • Laser light (light beam B) is emitted from the laser irradiation unit 32-2 in an oblique direction, that is, in a direction at an angle of θ downward from a horizontal plane (for example, the bottom surface of the distal end portion 31b).
  • As long as the positions of the laser irradiation units 32-1 and 32-2 and the laser irradiation directions do not change, the values of x and θ remain unchanged.
  • the second camera 34 attached to the distal end portion 31b captures an image in the downward direction, that is, in the direction of the installation surface 5.
  • The control unit 40 acquires the image captured by the second camera 34 and obtains the distance from the bottom surface of the distal end portion 31b (the distal end portion of the robot) to the installation surface 5 as follows. The control unit 40 performs a scaling process on the captured image and determines the distance on the installation surface 5 between the point irradiated by light beam A (from the laser irradiation unit 32-1) and the point irradiated by light beam B (from the laser irradiation unit 32-2).
  • The control unit 40 calculates the distance z from the bottom surface of the distal end portion 31b to the installation surface 5 by the following equation, in which tan() denotes the tangent function.
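The equation referenced here is not reproduced in this excerpt. From the quantities defined above (source separation x, light beam B inclined at the angle θ below the horizontal, and measured spot separation d on the installation surface), a plausible reconstruction for the arrangement of FIG. 11, assuming light beam B is inclined toward light beam A and the beams reach the surface without crossing, is:

$$ d = x - \frac{z}{\tan\theta} \quad\Longrightarrow\quad z = (x - d)\,\tan\theta $$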
  • the control unit 40 may measure the height of the distal end portion 31b according to the following modified example. For example, when attaching the laser irradiation unit 32-1 to the tip 31b, the irradiation direction does not have to be right below (perpendicular to the bottom surface of the tip 31b). Even when the irradiation direction of the laser irradiation unit 32-1 is oblique (however, irradiation at a known angle), the control unit 40 can appropriately calculate the height. Further, for example, in order to recognize the laser beam irradiated on the installation surface 5, an image captured by the first camera 14L or 14R may be used instead of an image captured by the second camera 34.
  • the laser light emitted from the laser irradiation units 32-1 and 32-2 may also be irradiated on the installation surface 5 after the two beams intersect.
  • FIG. 12 is a schematic view (side view) of the distal end portion 31b showing an example in which two laser beams are irradiated on the installation surface after intersecting.
  • the laser irradiation unit 32-1 irradiates a laser beam (light beam A) directly below (perpendicular to the installation surface 5).
  • the laser irradiation unit 32-2 irradiates a laser beam (light beam B) obliquely downward at an angle θ from the tip 31b. After light beams A and B cross each other, they are irradiated on the installation surface 5.
  • the distance between the irradiation points on the installation surface 5 is d.
  • the distance between the light sources of the laser irradiation units 32-1 and 32-2 is x.
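  • (Under the same assumed geometry as the sketch above, the crossed-beam arrangement of FIG. 12 would give the following relation between d, x, θ, and z; this is a hedged reconstruction, not an equation quoted from the source.)

$$ d = \frac{z}{\tan\theta} - x \quad\Longrightarrow\quad z = (x + d)\tan\theta $$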
  • the control unit 40 can automatically calculate the position at which the object is grasped. Therefore, even when the installation location of the robot is changed, the operation of setting the position at which the object is grasped does not need to be performed. That is, automation of the robot operation can be realized without impairing the portability of the small robot.
  • the robot has an autonomous moving unit.
  • the robot can move not only fixedly on the installation surface but also in a wide range.
  • FIG. 13 is a schematic diagram showing an outline of the movement operation of the robot 2 according to the present embodiment.
  • the robot 2a represents the position before the movement
  • the robot 2b represents the position after the movement.
  • a rail 201 is laid at a place where the robot 2 operates.
  • An object 202 to be operated by the robot 2 is placed near the rail 201.
  • the robot 2 can move in the front-rear direction along the rail 201 by driving wheels (not shown) by, for example, a motor.
  • the robot 2 may be freely movable not only on the rail 201 but also on a plane in any direction by wheels (for example, four wheels).
  • by causing the control unit 40 of the robot 2 to drive the wheels, the robot 2 can approach the object 202. Further, the robot 2 can move to a desired position according to the situation.
  • the autonomous moving unit is a function by which the robot moves autonomously on the installation surface.
  • the autonomous moving unit is controlled by the control unit 40.
  • the function of the autonomous moving unit allows the robot 2 to autonomously move to the vicinity of a desired object or place even when the object or place is at a position the arm cannot reach, or when it cannot be visually recognized from the first camera 14.
  • FIGS. 14, 15, 16, and 17 are flowcharts showing the processing procedure of the robot 2 in this embodiment. This flowchart shows a procedure of a process for controlling the operation of the robot 2 to grasp the target object and carry it to the place position.
  • FIG. 14, FIG. 15, FIG. 16, and FIG. 17 are one flowchart connected by a connector. Hereinafter, the processing procedure will be described with reference to this flowchart.
  • in step S106, the control unit 40 determines whether or not the target is within the range the arm can reach. In addition, even when the target is not found as a result of the target recognition processing (step S105), the control unit 40 determines that the target is not within reach of the arm. If it is determined that the arm can reach the target (step S106: YES), the process proceeds to step S108. If it is determined that the target is not within reach of the arm (step S106: NO), the process proceeds to step S107 in order to move the robot and recognize the target again.
  • in step S107, the control unit 40 controls the autonomous moving unit to move the robot 2 itself. Specifically, the robot 2 is moved to a predetermined position by driving its wheels. After this step ends, the process returns to step S101 and continues.
  • in step S108, the control unit 40 performs control to move the arm and recognizes the position of the target based on the image acquired from the second camera 34.
  • in step S112, the control unit 40 makes the same determination as in step S10 of FIG. 4 and branches according to the result. That is, when the position of the distal end portion 31 (the position of the arm) is appropriate (step S112: YES), the process proceeds to step S114 in FIG. 15. If the position of the distal end portion 31 is not appropriate (step S112: NO), the process proceeds to step S113 in order to correct the position.
  • in step S113, the control unit 40 makes the same determination as in step S11 of FIG. 4 and branches according to the result. That is, when the target is within the field of view of the second camera (step S113: YES), the process proceeds to step S108 to correct the position by moving the arm again. If the target is not within the field of view of the second camera (step S113: NO), the process returns to step S101 to start over from capturing the target with the first camera.
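  • (The loop of steps S101 to S113 described above can be summarized as in the following sketch; the `robot` methods are illustrative placeholders, not the patent's API.)

```python
def approach_target(robot):
    """Hedged sketch of the target-approach loop (steps S101-S113 as described above)."""
    while True:
        # S101-S105: shoot with the first cameras, build the plan-view (combined) image,
        # and try to recognize the target in it.
        plan_view = robot.capture_plan_view_with_first_cameras()
        target = robot.recognize_target(plan_view)

        # S106: "not found" and "out of the arm's reach" are both treated as unreachable.
        if target is None or not robot.arm_can_reach(target):
            robot.move_to_next_position()        # S107: autonomous movement on the wheels
            continue                             # back to S101

        while True:
            # S108: move the arm and recognize the target from the second camera image.
            robot.move_arm_above(target)
            view = robot.capture_with_second_camera()

            # S112: if the tip position is appropriate, hand over to the grasping phase (S114).
            if robot.tip_position_is_appropriate(view, target):
                return target

            # S113: retry the arm move while the target stays in the second camera's view;
            # otherwise start over from the first cameras (S101).
            if not robot.target_in_second_camera_view(view, target):
                break
```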
  • the processing from steps S114 to S124 in FIG. 15 is the same as the processing from steps S12 to S22 in FIG. 5, respectively.
  • when the process in step S124 in FIG. 15 ends, the process moves to step S125.
  • in step S128, the control unit 40 determines whether or not the place is within the range the arm can reach.
  • in addition, even when the place is not found as a result of the place recognition processing, the control unit 40 determines that the place is not within reach of the arm. If it is determined that the place is within reach of the arm (step S128: YES), the process proceeds to step S130.
  • if not (step S128: NO), the process proceeds to step S129 in order to move the robot and recognize the place again.
  • in step S129, the control unit 40 controls the autonomous moving unit to move the robot 2 itself. Specifically, the robot 2 is moved to a predetermined position by driving its wheels. After this step ends, the process returns to step S125 and continues.
  • when proceeding to step S130, the robot 2 performs the processing of steps S130 to S135.
  • the processing of steps S130 to S135 is the same as the processing of steps S26 to S31 in FIG. 6, respectively.
  • in step S134, when the control unit 40 determines that the arm position is appropriate for the place (step S134: YES), the process proceeds to step S136 in FIG. 17.
  • when the control unit 40 determines in step S134 that the arm position is not appropriate for the place (step S134: NO), the process proceeds to step S135.
  • in step S135, the same determination as in step S31 of FIG. 6 is performed, and the process returns to either step S125 or step S130 as appropriate.
  • proceeding to step S136 in FIG. 17, the robot 2 performs the processing of steps S136 and S137.
  • the processing in steps S136 and S137 is the same as the processing in steps S32 and S33 in FIG. 7, respectively.
  • when the processing in step S137 ends, the processing procedure of the entire flowchart ends.
  • when the robot 2 autonomously moves, it reacquires the surrounding image at the destination using the first camera, performs coordinate transformation, and generates a combined image.
  • the control unit 40 can recognize the coordinate relationship between the tip of the arm and the installation surface at the destination based on the position of the pedestal 11 in the planar image (combined image).
  • the first camera and the second camera are always installed at positions from which the periphery of the robot can be observed.
  • therefore, the position can be recognized even at the destination.
  • as a result, the robot operation can be automated without impairing the autonomous mobility of the mobile robot.
  • the feature of the fourth embodiment is that the robot is operated while using the first camera and the second camera at the same time. That is, in the present embodiment, under the control of the control unit 40, the first cameras 14L and 14R and the second camera 34 simultaneously photograph an object. Further, the control unit 40 controls to change the position and the direction of the arm based on the images simultaneously captured by the first cameras 14L and 14R and the second camera 34, respectively.
  • FIGS. 18, 19, and 20 are flowcharts showing the processing procedure of the robot 3 in this embodiment. This flowchart shows a procedure of a process for controlling the operation of the robot 3 to grasp the target object and carry it to the place position.
  • FIG. 18, FIG. 19, and FIG. 20 are one flowchart linked by a connector. Hereinafter, the processing procedure will be described with reference to this flowchart.
  • in step S208, the control unit 40 controls both the first cameras 14L and 14R and the second camera 34 to perform shooting.
  • the control unit 40 acquires images from the first cameras 14L and 14R and the second camera 34. That is, the control unit 40 uses the first cameras 14L and 14R and the second camera 34 simultaneously.
  • in step S209, the control unit 40 recognizes the position of the target using the images acquired from the first cameras 14L and 14R and the second camera 34.
  • a combined image is synthesized from the images captured by the first cameras 14L and 14R; based on this image, the control unit 40 can recognize in which direction the target is located.
  • in step S210, the control unit 40 determines whether the position of the distal end portion 31 of the arm is appropriate for the target (for example, whether the distal end portion 31 is above the target and can approach it). If the arm position is appropriate for the target (step S210: YES), the process proceeds to step S211 in FIG. 19. If the arm position is not appropriate (step S210: NO), the process returns to step S206 to move the arm again.
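  • (One control cycle of this simultaneous-camera scheme, sketched with assumed placeholder method names; this is not code from the source.)

```python
def simultaneous_camera_cycle(robot, goal):
    """Rough sketch of one cycle of the fourth embodiment (steps S206-S210 / S225-S229):
    the first cameras and the second camera shoot at the same time, and the arm is
    adjusted using both results."""
    left, right, tip_view = robot.capture_all_cameras()   # simultaneous shooting
    plan_view = robot.compose_plan_view(left, right)      # coordinate transform + left/right merge
    direction = robot.direction_of(goal, plan_view)       # coarse: which way the goal lies
    offset = robot.offset_of(goal, tip_view)              # fine: offset seen from the tip
    if robot.tip_position_is_appropriate(offset):         # the S210 / S229 check
        return True
    robot.move_arm(direction, offset)                     # otherwise adjust the arm and repeat
    return False
```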
  • the processing from step S211 to step S221 in FIG. 19 is the same as the processing from step S12 to step S22 in FIG. 5, respectively.
  • in step S227, the control unit 40 controls both the first cameras 14L and 14R and the second camera 34 to perform shooting.
  • the control unit 40 acquires images from the first cameras 14L and 14R and the second camera 34.
  • in step S228, the control unit 40 recognizes the position of the place using the images acquired from the first cameras 14L and 14R and the second camera 34.
  • a combined image is synthesized from the images captured by the first cameras 14L and 14R; based on this image, the control unit 40 can recognize in which direction the place is located.
  • in step S229, the control unit 40 determines whether or not the position of the distal end portion 31 of the arm is appropriate for the place. If the arm position is appropriate for the place (step S229: YES), the process proceeds to step S230. If the arm position is not appropriate (step S229: NO), the process returns to step S225 to move the arm again.
  • the processing in the next step S230 and step S231 is the same as the processing in step S32 and step S33 in FIG. 7, respectively.
  • the robot 3 ends all the processing in this flowchart.
  • the control unit 40 recognizes the position of the target and the position of the place by using the first camera and the second camera simultaneously.
  • the control unit 40 can control the distal end portion of the arm to move to the target or place position more quickly. That is, according to the present embodiment, automation of the robot operation can be realized without impairing the portability and mobility of a relatively small robot.
  • the feature of the robot 4 according to the fifth embodiment is that the installation surface irradiated by the laser irradiation unit is photographed, and any step in the installation surface is detected and measured based on the obtained image.
  • FIG. 21 is a perspective view showing the operating environment of the robot when the installation surface has a step.
  • the installation surface 5 has a step.
  • the upper stage of the step is 5U, and the lower stage is 5L.
  • the laser irradiating section 32 provided on the distal end portion 31 of the robot 4 irradiates a laser beam so as to form a cross-shaped pattern.
  • the first cameras 14L and 14R capture images of a stepped installation surface irradiated with a laser beam.
  • the control unit 40 can recognize that there is a step on the installation surface based on the shape of the laser light irradiation included in the captured image.
  • FIG. 22 shows a synthesized image obtained by performing coordinate transformation and synthesis processing based on images obtained by photographing an installation surface having a step with the first cameras 14L and 14R.
  • the upper stage of the installation surface is 5U
  • the lower stage is 5L.
  • 5G shown by a broken line represents a step.
  • FIG. 23 is a schematic view showing an example of an irradiation method by the laser irradiation unit 32.
  • the figure is a combined image obtained by combining images captured by the first cameras 14L and 14R at a plurality of timings.
  • the laser irradiating unit 32 irradiates a predetermined area of the installation surface 5 with a total of twelve points in three rows and four columns (P11 to P14, P21 to P24, and P31 to P34). In the example shown in the figure, there is no step or unevenness in that area of the installation surface 5.
  • the robot 4 performs imaging with the first cameras 14L and 14R while sequentially changing the position of the laser irradiation unit 32 provided on the distal end portion 31 by moving the arm. That is, the first cameras 14L and 14R first capture images when the laser irradiation unit 32 irradiates the position P11; they then capture images when the laser irradiation unit 32 irradiates the position P12, and the same is repeated thereafter. That is, while the laser irradiation unit 32 sequentially irradiates the 12 points from P11 to P34, the first cameras 14L and 14R perform imaging at the timing of irradiating each point.
  • the control unit 40 sequentially performs control for moving the arm and control for executing photographing by the first cameras 14L and 14R. Further, the control unit 40 generates a total of 12 combined images corresponding to each irradiation timing by performing coordinate transformation and left and right combination of the images acquired from the first cameras 14L and 14R. Further, the control unit 40 performs a combining process of superimposing all 12 images corresponding to the irradiation timing of each point. As a result, the control unit 40 generates one synthesized image including the twelve points (irradiated points) shown in FIG.
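  • (A condensed sketch of this scanning-and-compositing procedure is given below; the grid helpers and image size are illustrative assumptions, not the patent's API.)

```python
import numpy as np

def scan_surface_grid(robot, rows=3, cols=4, image_shape=(480, 640)):
    """Sketch of the FIG. 23 procedure: the arm aims the laser irradiation unit 32 at each
    of the 12 grid points in turn, the first cameras 14L/14R shoot at each timing, and the
    per-timing plan-view images are superimposed into one combined image."""
    composite = np.zeros(image_shape, dtype=np.float32)
    spot_positions = {}
    for i in range(rows):
        for j in range(cols):
            label = f"P{i + 1}{j + 1}"                        # P11 ... P34
            robot.aim_laser_at_grid_point(i, j)               # move the arm so unit 32 irradiates this point
            left, right = robot.capture_with_first_cameras()
            plan_view = robot.compose_plan_view(left, right)  # coordinate transform + left/right merge
            spot_positions[label] = robot.find_laser_spot(plan_view)
            composite = np.maximum(composite, plan_view)      # superimpose all 12 plan views
    return composite, spot_positions
```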
  • FIG. 24 is a schematic diagram showing a combined image generated in the same procedure as described with reference to FIG. 23 when the installation surface 5 has a step.
  • the image shown in FIG. 24 is a composite obtained by the control unit 40 moving the arm in the same procedure as described with reference to FIG. 23, so that the laser irradiation unit 32 sequentially irradiates the same 12 positions as in FIG. 23, and combining the captured images.
  • the image of FIG. 23 was obtained by photographing the laser light applied to the installation surface 5 having no step.
  • the image of FIG. 24 was obtained by photographing the installation surface 5 having the step shown in FIG. 21.
  • the points P11 to P14 appear at the same positions in the image of FIG. 23 and the image of FIG. 24.
  • the positions of the points P21 to P24 and P31 to P34 are shifted between the image of FIG. 23 and the image of FIG. 24.
  • FIG. 25, FIG. 26, and FIG. 27 are schematic diagrams for explaining the principle of occurrence of the above-described point shift.
  • FIG. 25 is a side view showing a relationship between the robot 4, the installation surface 5, points irradiated by the robot 4, and a camera (first camera) for photographing an area including the point.
  • FIG. 26 is a plan view corresponding to FIG.
  • the laser irradiation unit 32 of the robot 4 performs irradiation toward P21 which is a point on the installation surface 5U.
  • the laser beam emitted from the laser irradiation unit 32 irradiates a point on the surface 5L at a position lower than the surface 5U.
  • the first camera 14 captures an image of the surface having the step.
  • in the image captured by the first camera 14, the irradiated point appears as if it were located at the position P21', which is a point on the installation surface 5.
  • as shown in FIG. 26 (plan view), an image is generated that looks as if the laser beam directed toward the point P21 were irradiating the position of P21', which is another point.
  • FIG. 27 is a side view similar to FIG. 25, and is a side view in which information on a distance between representative positions appearing in FIG. 25 is extracted.
  • the height of the light source of the laser irradiation unit 32 with respect to the installation surface 5 is denoted by z.
  • the height of the light source of the laser irradiation unit 32 with respect to the surface 6 (below the installation surface 5) is defined as z′.
  • the horizontal distance from the point P21 to the principal point of the lens of the first camera 14 is r′.
  • the horizontal distance from the point P21′ to the principal point of the lens of the first camera 14 is represented by r.
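  • (The calculation itself is not reproduced in this text. The following illustrates the parallax principle behind FIGS. 25 to 27 under the simplifying assumption that the beam is aimed vertically at P21; the camera height h above the installation surface 5 is an additional quantity not named above.)

```python
def step_depth_from_shift(h: float, r_outer: float, r_inner: float) -> float:
    """Depth of the lower surface below the installation surface 5, estimated from the
    apparent shift of the laser spot (an illustrative reconstruction, not the patent's
    stated equation).

    h        -- height of the first camera's principal point above surface 5 (assumed known)
    r_outer  -- horizontal distance r' from the intended point P21 to the camera's principal point
    r_inner  -- horizontal distance r from the apparent point P21' to the camera's principal point

    Because the spot actually lands on the lower surface, the camera sees it pulled toward
    its own axis (r < r'); similar triangles then give the depth as h * (r' - r) / r.
    """
    return h * (r_outer - r_inner) / r_inner

# Example with illustrative numbers: camera 500 mm above surface 5, r' = 200 mm, r = 160 mm
# -> the lower surface lies 125 mm below surface 5.
print(step_depth_from_shift(h=500.0, r_outer=200.0, r_inner=160.0))
```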
  • the control unit 40 can grasp the state of steps and unevenness of the surface by performing the same calculation for all the irradiation points.
  • the control unit 40 can determine the height at which the target is grasped or released based on the calculated height information.
  • the irradiation pattern by the laser irradiation unit 32 may have another shape.
  • the irradiation pattern may be cross-shaped (by two intersecting line segments).
  • the laser irradiation unit 32 may include a plurality of light emitting units (light sources).
  • a plurality of points on the installation surface can be irradiated simultaneously.
  • the laser irradiation unit 32 can simultaneously irradiate some of the twelve points shown in FIG.
  • the laser irradiation unit 32 may irradiate a plurality of points in a time-division manner.
  • the number of irradiation points does not have to be 12.
  • the laser irradiation unit 32 does not necessarily need to irradiate the laser light perpendicularly to the installation surface.
  • the laser irradiation unit 32 may irradiate the installation surface in an oblique direction.
  • the control unit 40 controls the laser irradiation unit 32 while moving the distal end portion 31 such that the laser irradiation unit 32 sequentially irradiates light in directions of a plurality of predetermined positions on the installation surface 5. Further, the control unit 40 generates a plurality of combined images (plan view images) corresponding to the respective timings at which the laser irradiation unit 32 irradiates light in the directions of the plurality of predetermined positions. In addition, the control unit 40 calculates a step or unevenness of the installation surface 5 based on the positions of the light of the predetermined pattern included in the plurality of synthesized images (plan view images). In this process, a process of superimposing the synthesized images at the plurality of timings may be performed to generate one synthesized image (plan view image).
  • the robot 4 can automatically grasp the steps and the undulations of the installation surface. Therefore, even when the installation location of the robot 4 is changed, it is not necessary to perform an operation of setting a position such as a height at which the target is grasped. That is, automation of the robot operation can be realized without impairing the portability of the small robot.
  • the functions of the control unit 40 of the robot in each of the above-described embodiments can be realized by a computer.
  • a program for realizing this function may be recorded on a computer-readable recording medium, and the program recorded on this recording medium may be read and executed by a computer system.
  • the “computer system” includes an OS and hardware such as peripheral devices.
  • the “computer-readable recording medium” refers to a portable medium such as a flexible disk, a magneto-optical disk, a ROM, a CD-ROM, a DVD-ROM, or a USB memory, or to a storage device such as a hard disk built into a computer system.
  • a "computer-readable recording medium” is a medium that temporarily and dynamically holds a program, such as a communication line for transmitting a program through a network such as the Internet or a communication line such as a telephone line.
  • a program that holds a program for a certain period of time such as a volatile memory in a computer system serving as a server or a client, may be included.
  • the above-mentioned program may be for realizing a part of the above-mentioned functions, or may be for realizing the above-mentioned functions in combination with a program already recorded in a computer system.
  • the present invention can be used for all kinds of robots, for example, for industrial use or entertainment.
  • the scope of use of the present invention is not limited to those exemplified here.

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Manipulator (AREA)

Abstract

The invention relates to a robot (1) comprising: a pedestal (11) having a surface that comes into contact with an installation surface (5); a support (12) that stands vertically on the pedestal (11) and can rotate about a vertical axis; arms (15, 16) that are connected to the support (12) and whose orientation can be changed by means of one or more joints (21, 22, 23) provided on them; a distal end portion (31) that is connected to the distal end of the arms and whose orientation relative to the arms can be changed; a plurality of first cameras (14L, 14R) that are arranged on the support at fixed positions and with which the pedestal itself and the installation surface surrounding the pedestal can be imaged; and a control unit (40) that acquires first images from the plurality of first cameras, generates partial plan-view images by coordinate transformation of the first images, aligns and combines a plurality of such partial plan views to generate a plan-view image including an image of the pedestal, and then performs control so as to change the orientation of the arms in order to move the distal end portion to a specified position based on the plan-view image.
PCT/JP2018/032118 2018-08-30 2018-08-30 Robot WO2020044491A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/JP2018/032118 WO2020044491A1 (fr) 2018-08-30 2018-08-30 Robot
JP2020539946A JP7055883B2 (ja) Robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2018/032118 WO2020044491A1 (fr) 2018-08-30 2018-08-30 Robot

Publications (1)

Publication Number Publication Date
WO2020044491A1 true WO2020044491A1 (fr) 2020-03-05

Family

ID=69643551

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/032118 WO2020044491A1 (fr) 2018-08-30 2018-08-30 Robot

Country Status (2)

Country Link
JP (1) JP7055883B2 (fr)
WO (1) WO2020044491A1 (fr)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4636137A (en) * 1980-10-24 1987-01-13 Lemelson Jerome H Tool and material manipulation apparatus and method
JP2001252883A (ja) * 2000-03-09 2001-09-18 Denso Corp 移動ロボットシステム
JP2006021300A (ja) * 2004-07-09 2006-01-26 Sharp Corp 推定装置および把持装置
JP2011051056A (ja) * 2009-09-01 2011-03-17 Kawada Kogyo Kk 吊下げ型協調作業ロボット
JP2014089095A (ja) * 2012-10-30 2014-05-15 Fujitsu Ltd 移動体の姿勢検出方法及び移動体の姿勢検出装置並びに部品組立装置
JP2014148040A (ja) * 2014-05-21 2014-08-21 Seiko Epson Corp 位置制御方法、ロボット
JP2014188600A (ja) * 2013-03-26 2014-10-06 Toshiba Corp 遠隔視認装置および遠隔視認操作システム
JP2016078184A (ja) * 2014-10-17 2016-05-16 ファナック株式会社 ロボットの干渉領域設定装置
WO2017056269A1 (fr) * 2015-09-30 2017-04-06 株式会社小松製作所 Procédé de génération de données d'image

Also Published As

Publication number Publication date
JP7055883B2 (ja) 2022-04-18
JPWO2020044491A1 (ja) 2021-08-10

Similar Documents

Publication Publication Date Title
JP5290324B2 (ja) Method and system for highly accurate positioning of at least one object in a final pose in space
KR102458415B1 (ko) System and method for automatic hand-eye calibration of a vision system for robot motion
JP6855492B2 (ja) Robot system, robot system control device, and robot system control method
TWI670153B (zh) Robot and robot system
JP6427972B2 (ja) Robot, robot system, and control device
US8244402B2 (en) Visual perception system and method for a humanoid robot
US10754307B2 (en) Teaching device and control information generation method
JP2012528016A (ja) Method and system for highly accurate positioning of at least one object in a final pose in space
JP2017100240A (ja) Control device, robot, and robot system
JP2016099257A (ja) Information processing apparatus and information processing method
JP6900290B2 (ja) Robot system
RU2669200C2 (ru) Obstacle detection device using intersecting planes and detection method using such a device
JP6741537B2 (ja) Robot, robot control device, and robot position teaching method
JP6741538B2 (ja) Robot, robot control device, and robot position teaching method
JP7155660B2 (ja) Robot control device and robot system
JP6869159B2 (ja) Robot system
CN113508012A (zh) Vision system for robotic machinery
JP6328796B2 (ja) Manipulator control method, system, and manipulator
JP2008168372A (ja) Robot apparatus and shape recognition method
JP2014149182A (ja) Method of correlative positioning with a workpiece
JP6725344B2 (ja) Press brake and angle detection device
WO2020044491A1 (fr) Robot
JP2016182648A (ja) Robot, robot control device, and robot system
JP6499272B2 (ja) Teaching device and control information generation method
JPH0545117A (ja) Optical three-dimensional position measurement method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18931516

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020539946

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18931516

Country of ref document: EP

Kind code of ref document: A1