WO2016113836A1 - Manipulator control method and system, and manipulator - Google Patents

Manipulator control method and system, and manipulator

Info

Publication number
WO2016113836A1
WO2016113836A1 (application PCT/JP2015/050616)
Authority
WO
WIPO (PCT)
Prior art keywords
manipulator
mirror
camera
image
posture
Prior art date
Application number
PCT/JP2015/050616
Other languages
English (en)
Japanese (ja)
Inventor
潔人 伊藤
宣隆 木村
敬介 藤本
Original Assignee
株式会社日立製作所
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社日立製作所
Priority to JP2016569144A (granted as JP6328796B2)
Priority to PCT/JP2015/050616
Publication of WO2016113836A1

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00Controls for manipulators
    • B25J13/08Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • The present invention relates to the configuration and control of a manipulator, and more particularly to the configuration and control of a mobile manipulator that uses images.
  • A moving manipulator includes a moving carriage unit that moves itself by traveling forward, backward, and turning; an arm unit, attached to the moving carriage unit, that has one or more joints; and a gripper unit attached to the tip of the arm unit.
  • The gripper unit can hold or move a specific object. Such gripping and movement of an object by the gripper unit is referred to as an operation on the object.
  • A stationary manipulator fixed on a gantry in a factory or the like operates on objects at predetermined positions in a specific work space. Accordingly, a method for operating an object can be designed in advance so that the manipulator does not collide or interfere with surrounding obstacles.
  • A moving manipulator, by contrast, moves around, and the working space of its arm unit and gripper unit cannot be limited. That is, for a moving manipulator, an unknown space in which work objects and obstacles are cluttered becomes the work space. In such an unknown work space, the moving manipulator needs to operate the arm unit, the gripper unit, and the target object without colliding with or interfering with obstacles. For this purpose, it must acquire the positional relationship between the manipulator itself (in particular the arm unit and gripper unit, which interact with objects) and the target object and surrounding obstacles.
  • Patent Document 1 discloses a technology in which a camera installed in the work space captures an image (monitoring image) of a working robot, and an operator performs operations while viewing both a first-person image and the monitoring image.
  • Patent Document 2 discloses an invention in which a first camera is provided on the trunk of a robot having a manipulator, a second camera different from the first camera is provided on the manipulator, and the images of the first camera and the second camera are synthesized.
  • In that invention, the area occluded by the manipulator in the first camera's image is complemented by the second camera, so that the blind spot caused by the manipulator is removed and the position of the grasped object is specified by the first camera.
  • Patent Document 3 discloses a technique that uses an identification surface provided in a work space, a reflecting mirror provided at a position facing the identification surface, and a camera that images the identification surface via the reflecting mirror, and that detects an intruder in the work space based on the images of the identification surface captured by the camera.
  • However, the technique described in Patent Document 3 is premised on the position of the reflecting mirror and the position of the camera relative to it being fixed in advance in a predetermined positional relationship in the work space. If the position of the camera or reflecting mirror changes, the identification surface can no longer be photographed. That is, whenever the work space changes, the reflecting mirror and the camera must be re-installed in the predetermined positional relationship. The technique therefore cannot be applied to cases where the work area changes arbitrarily, as with a moving manipulator.
  • The problem to be solved by the present invention is to capture, without blind spots, the positional relationship among the gripper unit, the object, and the surrounding environment in the work space of a moving manipulator, and, based on that, to provide a method for controlling the position of the gripper unit so that it does not collide with obstacles.
  • A representative one of the inventions disclosed in the present application includes a procedure for detecting the posture of a mirror from an image and a procedure for detecting the posture of an object from an image; the posture of the object is detected from the image reflected in the detected mirror, and the manipulator is controlled based on the detected posture of the object.
  • Another aspect of the present invention is a manipulator control method for operating a movable manipulator in a work space where an object is arranged.
  • In this method, a mirror placed in or near the work space is imaged with a camera mounted on the manipulator, and the mirror posture is detected based on the camera posture and the image captured by the camera.
  • A mirror region is then detected in the image captured by the camera, and at least a partial image of the object is detected within the detected mirror region.
  • The posture of the object in the work space is calculated based on the mirror posture and the object image, and the manipulator is operated based on the calculated posture information of the object.
  • In one approach, the mirror image viewpoint of the camera is calculated based on the camera posture and the mirror posture, and the posture of the object in the work space is calculated based on the camera's mirror image viewpoint and the image of the object reflected in the mirror.
  • In another approach, the virtual-image posture of the object in the mirror image space is calculated based on the camera posture and the image of the object reflected in the mirror, and the posture of the object in the work space is calculated based on the virtual-image posture of the object and the mirror posture.
  • In one configuration, the object is placed on a shelf, the manipulator is moved to the vicinity of a predetermined shelf based on shelf arrangement information, and the postures of the camera, the mirror, and the object are calculated with reference to an origin fixed to the manipulator. Since the manipulator itself moves, control is facilitated by unifying the coordinate axes on the manipulator at the time of work.
  • From the image reflected in the mirror, not only the operation target but also the image of the manipulator itself or of other objects may be used. Further, more information can be obtained by integrating the image reflected in the mirror with the image of the area other than the mirror captured directly by the camera.
  • A marker can be arranged on the mirror or in the work space, and the posture of the mirror can be detected based on the marker information in the image captured by the camera.
  • In another aspect, the manipulator includes a moving mechanism that moves the manipulator, an operation mechanism that operates on the object, a camera that images the work space, a camera control mechanism that changes the posture of the camera, and a control unit.
  • The control unit has a function of detecting the posture of the camera and a function of detecting the posture of a mirror in the captured image based on the posture of the camera and the image captured by the camera.
  • The operation mechanism is operated based on the calculated posture information of the object.
  • As a specific example of the operation mechanism, a configuration having an arm unit connected by at least one joint mechanism and a gripper unit, held by the arm unit, that can grasp and move an object can be considered.
  • In this case, the camera can be arranged on the gripper unit.
  • As a specific example, the control unit controls the operation of the operation mechanism, stores geometric data on the shape of the operation mechanism, and can detect the posture of the camera using the geometric data.
  • The work space in which the object is arranged can be defined with reference to the shelf on which the object is placed, and a mirror can be arranged on the shelf.
  • In a configuration with a plurality of shelves, the manipulator is configured to be movable to the vicinity of an arbitrary shelf among them.
  • The control unit stores shelf arrangement information on the arrangement of the plurality of shelves, shelf shape information on the shape of the shelves, and object information on the objects arranged on each shelf. Based on the shelf arrangement information and the object information, the control unit can move the manipulator to the vicinity of a predetermined shelf and image the work space with the camera.
  • Alternatively, a marker is arranged in the work space, and the relative positional relationship between the marker and the mirror is stored in the control unit; the posture of the mirror in the captured image can then be detected based on the marker.
  • Still another aspect of the present invention is a manipulator that operates in a work space in which an object is arranged.
  • the manipulator includes a moving mechanism for moving the manipulator, a gripper unit for operating the object, an arm unit for moving the gripper unit, a camera for imaging the work space, and a camera control mechanism for changing the posture of the camera. And having a control unit.
  • the control unit has a function of detecting the posture of the camera and a function of detecting the posture of the mirror in the captured image based on the posture of the camera and the image captured by the camera.
  • The control unit further has a function of detecting at least a partial image of the object from the image captured by the camera, a function of calculating the posture of the object in the work space based on the posture of the mirror and the image of the object, and a function of operating the gripper unit based on the calculated posture information of the object.
  • The posture of the mirror in the captured image may also be detected using information on the posture of a marker together with the image of the marker in the image captured by the camera.
  • The control unit may be configured as a single computer, or any part of its input device, output device, processing device, and storage device may be provided by another computer connected via a network. Specifically, the control unit may be a computer mounted on the manipulator, or some or all of its functions may be arranged on a server that can communicate with the manipulator.
  • the function of the control unit can be realized by software executed by the processing device. Functions equivalent to those configured by software can also be realized by hardware such as FPGA (Field Programmable Gate Array) and ASIC (Application Specific Integrated Circuit). Such an embodiment is also included in the scope of the present invention.
  • FIG. 1 is a diagram showing a picking operation of an object by the moving manipulator.
  • FIG. 2 is a perspective view schematically showing an example of the overall configuration of the moving manipulator according to Embodiment 1.
  • FIG. 3 is a side view showing an example of the configuration of the gripper unit of the moving manipulator according to Embodiment 1.
  • FIG. 4 is a perspective view schematically showing an example of the structure of the shelf according to Embodiment 1.
  • FIG. 5 is a front view schematically showing an example of the configuration of the plane mirror according to Embodiment 1.
  • FIG. 6 is a block diagram illustrating an example of the configuration of the control system of the moving manipulator according to Embodiment 1.
  • FIG. 7 is a flowchart illustrating an example of the operation procedure of the moving manipulator according to Embodiment 1.
  • FIG. 8 is a side view schematically showing an example of the work space as seen from the side in step S702 of FIG. 7.
  • FIG. 9 is an image diagram illustrating an example of the photographed image 141 taken by the camera 14 in FIG. 8.
  • FIG. 10 is a side view schematically showing another example of the work space as seen from the side in step S702 of FIG. 7.
  • FIG. 11 is an image diagram showing an example of the captured image 142 taken in the state of FIG. 10.
  • FIG. 12 is a principle diagram schematically showing a method of calculating the posture of the plane mirror according to Embodiment 1.
  • A further drawing shows the configuration of a checker pattern marker according to Embodiment 1.
  • FIG. 18 is a schematic diagram illustrating an example of the configuration of a convex mirror according to Embodiment 1.
  • FIG. 21 is a flowchart illustrating an example of the operation procedure of the moving manipulator according to Embodiment 2.
  • FIG. 22 is a schematic diagram illustrating the configuration of the tray according to Embodiment 2.
  • FIG. 23 is a perspective view showing an example of the configuration of the gripper unit of the moving manipulator according to Embodiment 3.
  • FIG. 24 is a side view showing an example of the method of calculating the reflecting surface of the plane mirror according to Embodiment 3.
  • A further perspective view schematically shows the structure of the shelf according to Embodiment 3.
  • FIG. 26 is a flowchart showing a part of an example of the operation procedure of the moving manipulator according to a fourth embodiment (modification).
  • A further side view shows an example of the method of calculating the reflecting surface of the plane mirror according to the fourth embodiment.
  • In the following description, notations such as "first", "second", and "third" are attached to identify constituent elements and do not necessarily limit their number or order.
  • Numbers identifying components are used per context; a number used in one context does not necessarily indicate the same configuration in another context. A component identified by one number may also serve as a component identified by another number.
  • FIG. 1 is a diagram showing a picking operation of an object by a moving manipulator.
  • In Embodiment 1 of the present invention, as shown in FIG. 1, an operation in which the moving manipulator 1 grasps and lifts an object 3 placed on a shelf 2 will be described.
  • a series of operations for gripping and lifting the object 3 by the moving manipulator 1 is referred to as a picking operation of the object 3.
  • FIG. 2 is a schematic perspective view illustrating a schematic configuration of the moving manipulator 1 according to the first embodiment.
  • The moving manipulator 1 includes a control unit 10, a moving carriage unit 11, an arm unit 12, a gripper unit 13 fixed to the tip of the arm unit 12, and a camera 14 provided at a predetermined portion of the gripper unit 13.
  • The control unit 10 receives instructions from the host control unit 6 (described later) by wire or wirelessly, and controls the operation of the moving manipulator 1.
  • Here, controlling the operation means not only controlling each part of the moving manipulator 1 based on the instructions, but also intelligent processing such as planning the operation of the moving manipulator 1 based on information such as images from the camera 14 of the moving manipulator 1.
  • the moving carriage unit 11 includes one or more moving mechanisms such as wheels 110, and moves the moving manipulator 1 to an arbitrary place on a flat ground by moving forward / backward / turning based on an operation command of the moving manipulator control unit 10.
  • the flat ground may include not only a simple plane but also a slope or a small step.
  • Such movement of the moving manipulator 1 by the moving carriage unit 11 is referred to as traveling of the moving manipulator 1.
  • the arm unit 12 is fixed to a predetermined portion of the movable carriage unit 11.
  • the arm unit 12 has one or more joint mechanisms, and moves the gripper unit 13 to a predetermined position and direction in a predetermined three-dimensional space based on an operation command of the moving manipulator control unit 10.
  • the position and direction of the gripper portion 13 are referred to as the posture of the gripper portion 13.
  • In this embodiment, the arm unit 12 has a vertical articulated mechanism in which a plurality of joint mechanisms are connected by arm links.
  • However, the arm unit 12 may have any configuration that can move the gripper unit 13 to a predetermined posture; it is not limited to this configuration.
  • For example, a horizontal multi-joint mechanism, an orthogonal multi-joint mechanism, a parallel link mechanism, or a combination thereof may be used.
  • The gripper unit 13 has a function of gripping a predetermined object.
  • In this embodiment, the gripper unit 13 grips an object with two finger mechanisms 130, but it is not limited to this configuration.
  • A configuration that grips the object with a vacuum suction mechanism or a magnet, without using finger mechanisms, may also be used.
  • FIG. 3 is a schematic side view of the distal end portion of the arm portion 12, the gripper portion 13, and the camera 14 in FIG.
  • The camera 14 is fixed offset at a predetermined angle with respect to the extending direction of the finger mechanisms 130 of the gripper unit 13. That is, the lower edge of the field of view V of the camera 14 is fixed parallel to the extending direction of the fingers. This prevents the finger mechanisms 130 from appearing in the field of view V.
  • The arm unit 12 is provided with a joint mechanism 121, so that the gripper unit 13 can be directed at an arbitrary angle along the arrow P in FIG. 3.
  • The arm unit 12 is also provided with a joint mechanism 122, so that the gripper unit 13 can be directed at an arbitrary angle along the arrow R in FIG. 3.
  • By combining these joint mechanisms, the arm unit 12 can direct the gripper unit 13 in an arbitrary direction.
  • the configuration shown in FIG. 3 is an example, and can be configured by combining various known joint mechanisms.
  • FIG. 4 is a schematic perspective view showing a schematic configuration of the shelf 2 according to the first embodiment of the present invention.
  • the shelf 2 is composed of one shelf board 21 and one shelf top board 22 supported by four columns 20. A plurality of picking objects 3 are placed on the shelf board 21.
  • Here, a work space is defined for the moving manipulator 1 according to Embodiment 1.
  • the work space is a predetermined space in which one or more objects 3 to be picked by the moving manipulator 1 are placed.
  • the work space of the moving manipulator 1 is defined as a three-dimensional space having a rectangular parallelepiped shape that is surrounded by columns 20 from the upper surface of the shelf plate 21 to the lower surface of the shelf top plate 22.
  • In this work space, a plane mirror 4 is arranged at a predetermined position.
  • the plane mirror 4 is fixed to a predetermined position of the shelf top plate 22 at a predetermined angle with its reflection surface directed toward the shelf plate 21.
  • The object 3 and the plane mirror 4 need not be strictly contained in the work space: a part of the object 3 or the plane mirror 4 may protrude from the work space, and the object 3 or the plane mirror 4 may even be placed entirely in the vicinity of the work space.
  • FIG. 5 is a drawing schematically showing a configuration in which the reflecting surface of the plane mirror 4 is viewed from the front.
  • the plane mirror 4 can be made of glass, metal, plastic, or the like, and reflects at least the wavelength of the region that can be imaged by the camera 14.
  • markers 5 having a predetermined shape are arranged at the four corners of the reflecting surface of the plane mirror 4.
  • all the markers 5 have the same shape.
  • The shape of the markers 5 may be any shape whose presence and number can be detected by predetermined image processing in an image captured by the camera 14, and which allows the center coordinates of each marker 5 in the image to be calculated by that image processing.
  • the marker 5 can be formed by adding protrusions on the surface or inside of the plane mirror, printing, marking, applying a seal, or the like.
  • the positional relationship between the markers 5 arranged on the plane mirror 4 is stored in advance in the control unit 10 or the upper control unit 6 (described later).
  • FIG. 6 is a diagram showing an example of the configuration of the control system for the mobile manipulator in the first embodiment.
  • The control unit 10 of the mobile manipulator 1 includes an overall control unit 100 that controls the entire manipulator 1, a data storage unit 101 that stores data necessary for control, a mobile cart control unit 102 that controls the moving carriage unit 11, an arm control unit 103 that controls the arm unit 12, a gripper control unit 104 that controls the gripper unit 13, and an image processing unit 105 that processes images obtained by the camera 14.
  • the control unit 10 of the mobile manipulator 1 is configured to be able to communicate with the host control unit 6 by wire or wireless.
  • the host control unit 6 includes an object data storage unit 60 that stores information such as map data 601, shelf shape data 602, and object data 603.
  • the host control unit 6 may be built in the housing of the mobile manipulator 1 or may be outside the mobile manipulator 1.
  • the upper control unit 6 and the control unit 10 need only be able to perform information processing in cooperation and control the mobile manipulator 1, and may be configured by a single computer, or may include an input device, an output device, a processing device, Any part of the storage device may be composed of a plurality of computers connected by a wired or wireless network.
  • the map data 601 is data describing a point where the shelf 2 is arranged in a space where the mobile carriage unit 11 of the mobile manipulator 1 can travel (hereinafter referred to as travel space).
  • The shelf shape data 602 describes information such as the width, depth, and height of the shelf 2 with reference to a predetermined part of the shelf 2.
  • The object data 603 stores information such as the type or identification number of each object 3 to be picked and the identification code of the shelf 2 on which the object 3 is arranged.
  • the object data 603 may further include characteristic data such as the shape, weight, and material of the object 3.
  • From these data, the host control unit 6 can determine where in the traveling space the work space containing the object 3 to be picked is located, and can transmit a picking command to the control unit 10 of the moving manipulator 1 based on this.
  • However, the exact posture (position and orientation, such as the position of the center of gravity) of the object 3 that is the target of the picking operation cannot be known by the host control unit 6. This is because the objects 3 on display may be picked by the moving manipulator 1 or by human workers in the traveling space, so their postures may change at any time.
  • the moving manipulator 1 needs to pick the object 3 after confirming the posture of the object 3 in the work space.
  • FIG. 7 is a diagram showing an overall outline of the picking operation procedure in the first embodiment. Each step in FIG. 7 may be executed by a predetermined program built into the control unit 10 of the moving manipulator 1, or the host control unit 6 may transmit commands to the control unit 10 of the moving manipulator 1 one by one or collectively.
  • In step S701, the moving manipulator 1 travels to the front of the shelf 2 on which the object 3 to be picked is displayed.
  • the upper control unit 6 stores the point of the shelf 2 where the object 3 to be picked is displayed, that is, the point of the work space, in the traveling space of the mobile manipulator 1.
  • the host control unit 6 transmits a picking command including information indicating the point of the work space to the control unit 10 of the mobile manipulator 1.
  • the mobile manipulator 1 travels by controlling the mobile carriage unit 11 to the point of the work space specified by the picking command.
  • an existing technique such as embedding a magnetic cable indicating a traveling route in the traveling space in advance and moving along the magnetic cable may be used.
  • an existing technology that moves autonomously using the map data 601 may be used.
  • In step S702, having traveled to the front of the shelf 2, the moving manipulator 1 changes the posture of the gripper unit 13 so that the camera 14 faces the work space through the front (opening) of the shelf, and takes a picture.
  • The position of the shelf board 21 of the work space relative to the origin of the moving manipulator 1 is known from the work-space point information and the shelf shape data 602 stored in advance in the host control unit 6 (the origin can be chosen arbitrarily; to simplify the description, the center of the moving manipulator 1 is treated as the origin below). Therefore, the moving manipulator 1 can control the arm unit 12 and the gripper unit 13 so that the camera 14 takes a predetermined posture (position and orientation) determined relative to the position of the shelf board 21.
  • For example, the center of the moving manipulator 1 is positioned at a predetermined distance from the shelf 2, and the arm unit 12 and the gripper unit 13 are positioned in a posture set according to the posture of the shelf board 21.
  • FIG. 8 is a diagram schematically showing a state viewed from the side of the work space in step S702.
  • In FIG. 8, the object 3b, which is not the target of the picking operation, is located behind the object 3a as viewed from the gripper unit 13. Therefore, when picking the object 3a with the gripper unit 13, the finger mechanisms 130 of the gripper unit 13 must be controlled so as not to interfere with the object 3b.
  • FIG. 9 is a diagram showing an example of a photographed image 141 photographed by the camera 14 in step S702.
  • In the photographed image 141, the object 3b is captured partially hidden behind the object 3a. It is generally difficult to accurately determine the posture of the object 3b from such a partially missing image. Furthermore, if the object 3b is completely hidden by the object 3a as viewed from the gripper unit 13, the presence of the object 3b cannot be detected at all from the state of FIG. 8.
  • In step S703, detection processing for the markers 5 at the four corners of the plane mirror 4 is performed on the captured image 141 (FIG. 9).
  • the shape and number of the markers 5 are stored in advance as a part of the shelf shape data 602, for example. Since the shape and number of the markers 5 are known, the number of markers 5 photographed in the photographed image 141 and the pixel coordinate values in the photographed image 141 can be specified by a known template matching process. Although the marker 5 may be omitted and the corners of the plane mirror 4 may be extracted instead of the marker by image processing or the like, the burden of image processing can be reduced by preparing and using the marker.
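As an illustration of this marker search, the following is a minimal sketch using OpenCV template matching. The function and variable names (detect_markers, marker_tpl) are hypothetical, and the matching threshold is a placeholder, not a value from the patent.

```python
# Sketch of the marker search in step S703 using template matching.
# Assumptions: a stored grayscale template of marker 5 is available.
import cv2
import numpy as np

def detect_markers(image_gray, marker_tpl, expected=4, threshold=0.8):
    """Return up to `expected` marker center coordinates in pixels."""
    res = cv2.matchTemplate(image_gray, marker_tpl, cv2.TM_CCOEFF_NORMED)
    h, w = marker_tpl.shape
    centers = []
    for _ in range(expected):
        _, max_val, _, max_loc = cv2.minMaxLoc(res)
        if max_val < threshold:
            break  # fewer markers visible -> step S704 changes the camera posture
        centers.append((max_loc[0] + w // 2, max_loc[1] + h // 2))
        # suppress this peak so the next-best match can be found
        cv2.rectangle(res, (max_loc[0] - w // 2, max_loc[1] - h // 2),
                      (max_loc[0] + w // 2, max_loc[1] + h // 2), -1.0, -1)
    return centers
```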
  • In this embodiment, the image processing unit 105 performs the image processing and calculations from step S703 onward; alternatively, the image processing unit 105, the overall control unit 100, and the host control unit 6 may share the processing.
  • If the four markers 5 are not detected from the image in step S703, the posture of the gripper unit 13 is changed (step S704).
  • For example, a control operation such as rotating the joint mechanism 121 of the arm unit 12 in the direction of the arrow P in FIG. 3 to change the posture of the gripper unit 13 is performed.
  • Then the work space is imaged again by the camera 14 (step S702), and the marker 5 detection processing (step S703) is performed again.
  • The processing from step S702 to step S704 is repeated until the known number of markers (for example, four) is detected in the captured image of the camera 14.
  • If the positional relationship between the shelf 2 and the moving manipulator 1 is appropriate, all the markers fall within the angle of view of the camera 14, and the markers should be detected within one or a predetermined number of trials. If they are still not detected after the predetermined number of trials, the operator can be notified by issuing an error signal or a warning.
  • FIG. 10 is a diagram schematically showing the working space viewed from the side when four markers 5 are detected from the image captured by the camera 14.
  • In this state, the mirror 4 is within the angle of view of the camera 14.
  • FIG. 11 is a drawing showing an example of a photographed image of the camera 14 in FIG.
  • In step S705, the posture of the plane mirror 4 in the work space is calculated based on the image captured by the camera 14 (FIG. 11). An example of the calculation method is shown in FIG. 12.
  • In FIG. 12, the origin of the moving manipulator 1 is denoted O, the viewpoint of the camera 14 is denoted C, and the center point of the plane mirror 4 is denoted M.
  • the plane mirror 4 is illustrated as a rectangle having no thickness, and the position of the marker 5 is the position of each vertex of the rectangle.
  • The posture of the viewpoint C of the camera 14 with respect to the origin O of the moving manipulator 1 (information indicating the position and tilt of the camera, that is, the position of the center point of the camera lens and the direction of the camera's optical axis) can be uniquely calculated from the angle of each joint of the arm unit 12 and the lengths of the links connecting the joints (not shown in FIG. 12).
  • Information relating to the moving manipulator 1 such as the angle of each joint of the arm unit 12 and the length of the arm connecting the joints is stored in advance in, for example, the data storage unit 101 of the control unit 10.
  • Hereinafter, the homogeneous transformation matrix indicating the posture of the viewpoint C of the camera 14 with respect to the origin O of the moving manipulator 1 is written ^O T_C.
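As a concrete illustration of how ^O T_C can be obtained by chaining per-joint homogeneous transforms, here is a minimal sketch. The single-axis joint model and all dimensions are hypothetical stand-ins; the patent does not specify the actual kinematics of the arm unit 12.

```python
# Minimal forward-kinematics sketch: ^O T_C as a product of 4x4 transforms.
import numpy as np

def rot_y(theta):
    """Homogeneous rotation about the y axis (hypothetical joint axis)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[ c, 0, s, 0],
                     [ 0, 1, 0, 0],
                     [-s, 0, c, 0],
                     [ 0, 0, 0, 1]])

def trans(x, y, z):
    """Homogeneous translation (e.g., along a link)."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def camera_pose(joint_angles, link_lengths, T_gripper_cam):
    """^O T_C: chain joint rotations and link translations, then apply the
    fixed gripper-to-camera offset (all values assumed known/calibrated)."""
    T = np.eye(4)
    for theta, length in zip(joint_angles, link_lengths):
        T = T @ rot_y(theta) @ trans(0, 0, length)
    return T @ T_gripper_cam
```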
  • the captured image (FIG. 11) captured at the viewpoint C of the camera 14 is shown as a captured image plane P.
  • the mutual positional relationship between the markers 5 of the plane mirror 4, that is, the lengths H and W of each side of the plane mirror 4, for example, is stored as a part of the shelf shape data 602 and is known.
  • The intersections of the captured image plane P with the four straight lines connecting each marker 5 to the viewpoint C of the camera 14 are the marker coordinates detected in step S703.
  • From these correspondences, the posture of the center point M of the plane mirror 4 with respect to the viewpoint C of the camera 14 (information indicating the position and tilt of the mirror, that is, the position of the center point of the mirror and the direction the mirror surface faces) can be estimated.
  • The homogeneous transformation matrix calculated in this way, indicating the posture of the center point M of the plane mirror 4 with respect to the viewpoint C of the camera 14, is written ^C T_M.
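One standard way to obtain ^C T_M from the four detected marker centers and the known mirror dimensions W and H is to solve a perspective-n-point problem. The sketch below uses OpenCV's solvePnP under the assumption, not stated in the text, that the camera intrinsics K were calibrated beforehand.

```python
# Sketch of estimating ^C T_M from the marker centers found in step S703.
import cv2
import numpy as np

def mirror_pose_from_markers(centers_px, W, H, K, dist=None):
    # Marker positions in the mirror frame: corners of a W x H rectangle
    # centered on M and lying in the mirror plane (z = 0).
    obj_pts = np.array([[-W/2,  H/2, 0], [ W/2,  H/2, 0],
                        [ W/2, -H/2, 0], [-W/2, -H/2, 0]], dtype=np.float64)
    img_pts = np.array(centers_px, dtype=np.float64)  # same corner order!
    ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist)
    R, _ = cv2.Rodrigues(rvec)
    T = np.eye(4)                     # ^C T_M as a 4x4 homogeneous matrix
    T[:3, :3], T[:3, 3] = R, tvec.ravel()
    return T

# ^O T_M then follows by composition with the camera pose from the kinematics:
#   T_O_M = T_O_C @ T_C_M
```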
  • From ^O T_C and ^C T_M, the homogeneous transformation matrix ^O T_M indicating the posture of the center point of the plane mirror 4 with respect to the origin O of the moving manipulator 1 can be calculated as ^O T_M = ^O T_C · ^C T_M.
  • The homogeneous transformation matrix ^O T_M corresponds to the posture of the plane mirror 4 in the work space (real space).
  • In this way, the posture of the plane mirror 4 in the work space is derived from the image of the plane mirror 4 captured by the camera 14.
  • Since the posture of the plane mirror 4 in the work space is calculated from the markers 5 arranged at its four corners, an accurate posture can be obtained.
  • The calculated posture of the plane mirror 4 is stored in the control unit 10 or the host control unit 6 (step S706).
  • In step S707, the work space is imaged again using the camera 14.
  • Here the camera posture is the same as in step S702, but any other posture may be used as long as the work space is photographed.
  • the photographed image is assumed to be an image 141 similar to that in FIG.
  • The image 141 is processed by two image processing branches, the process of step S708 and the process of steps S709 to S711, executed in parallel or in series.
  • In step S708, the object 3a that is the picking target is detected from the image 141 photographed in step S707, and its posture is calculated.
  • For the detection, a known image recognition algorithm may be applied, such as pattern matching against an image of the object 3 stored in advance (for example, as part of the object data 603).
  • The posture of the detected object may be calculated by storing the size of the object in advance and comparing it with the size appearing in the captured image 141, or by calculating a homography.
  • These image processes can be carried out, for example, in coordinates with the viewpoint C of the camera 14 as the origin, and the results may eventually be converted to coordinates based on the origin O of the moving manipulator 1 to unify the coordinate system.
  • At this time, the postures of the other objects 3b and of the gripper unit 13 may be calculated as well.
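As a toy illustration of the size-comparison idea, under a pinhole camera model the distance of a roughly fronto-parallel object follows from its known real width and its apparent width in pixels. All numbers below are hypothetical, not values from the patent.

```python
# Toy illustration of the size comparison in step S708 (pinhole model).
def distance_from_apparent_size(focal_px, real_width_m, width_px):
    """Z = f * W_real / w_pixels, valid for a roughly fronto-parallel object."""
    return focal_px * real_width_m / width_px

z = distance_from_apparent_size(focal_px=600.0, real_width_m=0.10, width_px=48.0)
# -> 1.25 m from the viewpoint C along the optical axis
```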
  • In step S709, the point C′ that is plane-symmetric to the viewpoint C of the camera 14 with respect to the reflecting surface M of the plane mirror 4 is calculated.
  • This viewpoint C′ is referred to as the mirror image viewpoint C′ of the camera 14.
  • FIG. 13 is a side view illustrating the mirror image viewpoint C′ of the camera with respect to the plane mirror.
  • The posture C′ that is plane-symmetric to the camera posture with respect to the reflecting surface M of the plane mirror 4 is the mirror image viewpoint C′ of the camera 14. That is, if the posture of the camera 14 and the posture of the mirror 4 are known, the posture of the mirror image viewpoint C′ of the camera 14 can be calculated.
  • the object reflected on the mirror 4 captured by the camera 14 corresponds to the image of the object viewed from the mirror image viewpoint C ′ of the camera 14.
  • The posture of the mirror 4 was stored in step S706 and can be expressed, for example, in coordinates based on the origin O of the moving manipulator 1. The posture of the camera 14 can likewise be calculated in the same coordinates from the data on the moving manipulator 1 stored in the data storage unit 101, as described above.
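A minimal sketch of this reflection follows: the mirror plane is represented by a unit normal n and a point p0 on the reflecting surface (both derivable from ^O T_M), and the camera pose is left-multiplied by the corresponding 4x4 reflection matrix. Function names are illustrative, not from the patent.

```python
# Sketch of step S709: reflect the camera pose across the mirror plane.
import numpy as np

def reflection_matrix(n, p0):
    """4x4 homogeneous reflection about the plane through p0 with unit normal n."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    d = n @ np.asarray(p0, dtype=float)        # plane: n . x = d
    S = np.eye(4)
    S[:3, :3] = np.eye(3) - 2.0 * np.outer(n, n)
    S[:3, 3] = 2.0 * d * n                     # maps x to x - 2 (n.x - d) n
    return S

def mirror_image_viewpoint(T_O_C, n, p0):
    # Note: a reflection has determinant -1, so the rotation part of the
    # result is left-handed; what the camera sees in the mirror is
    # correspondingly mirror-flipped.
    return reflection_matrix(n, p0) @ T_O_C
```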
  • In step S710, the region of the plane mirror 4 in the photographed image 141, that is, the mirror region image, is cut out.
  • An example of the mirror region image cutout process will be described with reference to FIG. 12 again.
  • The homogeneous transformation matrix ^O T_M indicating the posture of the plane mirror 4 with respect to the origin O of the moving manipulator 1 was stored in step S706 (it is assumed here that the moving manipulator 1 does not move after step S701). The homogeneous transformation matrix ^O T_C indicating the posture of the viewpoint C of the camera 14 with respect to the origin O can be calculated from the angle of each joint of the arm unit 12 and the lengths of the links connecting the joints. From these, the homogeneous transformation matrix ^C T_M = (^O T_C)^-1 · ^O T_M, indicating the posture of the plane mirror 4 with respect to the viewpoint C of the camera 14 in step S710 (FIG. 13), can be calculated; projecting the known outline of the plane mirror 4 onto the captured image plane P then gives the mirror region to cut out.
  • the mirror region image of the plane mirror 4 cut out in this way is an image of the visual field indicated by V ′ from the mirror image viewpoint C ′ of the camera 14 in FIG.
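A possible implementation of this cut-out, assuming the intrinsics K and the pose ^C T_M from the steps above, projects the four mirror corners into the image and rectifies the resulting quadrilateral with a perspective warp. The output-resolution parameter is an arbitrary choice, not a value from the patent.

```python
# Sketch of the step S710 cut-out with OpenCV.
import cv2
import numpy as np

def cut_mirror_region(image, T_C_M, W, H, K, out_px_per_m=1000):
    # Mirror corners in the mirror frame (z = 0 plane, W x H rectangle).
    corners_M = np.array([[-W/2,  H/2, 0], [ W/2,  H/2, 0],
                          [ W/2, -H/2, 0], [-W/2, -H/2, 0]], dtype=np.float64)
    rvec, _ = cv2.Rodrigues(T_C_M[:3, :3])
    tvec = T_C_M[:3, 3]
    img_pts, _ = cv2.projectPoints(corners_M, rvec, tvec, K, None)
    src = img_pts.reshape(4, 2).astype(np.float32)
    # Rectify the quadrilateral into an axis-aligned mirror region image.
    w_out, h_out = int(W * out_px_per_m), int(H * out_px_per_m)
    dst = np.float32([[0, 0], [w_out, 0], [w_out, h_out], [0, h_out]])
    Hmat = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, Hmat, (w_out, h_out))
```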
  • the object 3b is captured in the mirror region image without being shaded by the object 3a. That is, an object that has not been detected as a blind spot in step S708 is captured in the mirror region image.
  • In step S711, the object 3b is detected in the mirror region image cut out in step S710 by image processing similar to that of step S708. That is, the object 3b, which cannot be seen from the viewpoint C of the camera 14, is detected from the mirror region image, and its posture is calculated by pattern matching or homography calculation. As shown in FIG. 13, the content of the mirror region image is the object 3b as seen from the mirror image viewpoint C′ of the camera 14.
  • Accordingly, the posture of the object 3b is calculated as seen from the mirror image viewpoint C′ of the camera 14.
  • the calculation result can be indicated by coordinates with the mirror image viewpoint C ′ as the origin.
  • the postures of the other objects 3a and the gripper unit 13 may be calculated together.
  • the coordinate system may be unified by converting the coordinates into the coordinates based on the origin O of the moving manipulator 1.
  • In step S713, the posture information of the object 3a calculated in step S708 and the posture information of the object 3b obtained in step S711 are integrated.
  • If each posture is converted to coordinates based on the origin O of the moving manipulator 1 so that the coordinate system is unified, the control unit 10 can grasp the positional relationship of the objects in the work space in a unified manner.
  • For an object captured both directly and in the mirror, the three-dimensional posture can also be calculated by stereo processing.
  • Based on the integrated posture information, the gripper unit 13 is controlled so as to move to a posture in which the object 3a can be gripped (step S714), and the object is gripped by the gripper unit 13 (step S715).
  • As described above, in addition to the posture of the object 3a that is the target of the picking operation, the moving manipulator 1 can detect the posture of the object 3b located at a position that is a blind spot in the work space, hidden in the shadow of the object 3a as seen from the shelf opening. Therefore, the object 3a alone can be picked without the object 3b and the gripper unit 13 colliding.
  • Note that if the moving manipulator 1 stops slightly inclined with respect to the shelf 2 and this inclination is not accounted for, errors appear in the processing from step S708 onward: the posture of the object 3b viewed from the mirror image viewpoint C′ is shifted by twice the inclination angle of the moving manipulator 1 with respect to the shelf 2, and the mirror region in step S710 is also cut out from a region shifted by the inclination angle. Under these effects, the posture of the object 3b cannot be calculated correctly.
  • By providing step S705, which calculates the posture of the plane mirror 4 from the markers 5 or the like, a shift of the stop position accompanying travel, such as the moving manipulator 1 stopping inclined with respect to the shelf 2, is absorbed, and the calculation of the mirror image viewpoint C′ of the camera 14 via the plane mirror 4 and the cutting out of the mirror region image can be performed with high accuracy.
  • In addition, a step of photographing again with the gripper unit 13 brought sufficiently close to the object 3a may be provided.
  • When the gripper unit 13 is sufficiently close to the object 3a, only a small part of the object 3a can be photographed directly by the camera 14.
  • However, from the mirror image viewpoint C″ of the camera 14 in the state of FIG. 14, three things can be photographed: the object 3a, the object 3b, and the finger mechanisms 130 of the gripper unit 13. Accordingly, not only the positional relationship between the object 3a and the object 3b, but also the positional relationship between the object 3a and the gripper unit 13 and that between the object 3b and the gripper unit 13 can be detected from the mirror region image.
  • In this case, only steps S709 to S711 need be performed, without performing step S708. Thereby, the positional relationships among the object 3a, the object 3b, and the gripper unit 13 can all be grasped.
  • the procedure for calculating the attitude of the plane mirror 4 by the marker 5 in step S705 may be performed before the procedure in steps S709 to S711.
  • Step S714 for controlling the movement of the gripper unit 13 may be performed based on the result of the procedure of S711.
  • the camera 14 is not necessarily fixed to the gripper unit 13. An example is shown in FIG.
  • a configuration in which a joint mechanism 131 is further provided between the camera 14 and the gripper unit 13 and the angle T of the camera 14 with respect to the gripper unit 13 can be arbitrarily changed may be employed. If the angle T of the camera 14 with respect to the gripper unit 13 can be observed by the control unit 10, the attitude of the plane mirror 4 from the origin O of the moving manipulator 1 shown in FIG. 12 can be calculated. This eliminates the need to control the arm unit 12 in the search for the marker 5 in step S704.
  • In step S703 it was described that the markers 5 at the four corners of the plane mirror 4 are detected, but it is not always necessary to detect four markers 5.
  • An example is shown in FIG.
  • For example, a method using two types of markers 5 and 51 with different shapes arranged on the diagonals of the plane mirror 4 may be used. In this case, the posture of the plane mirror 4 can be calculated if at least three markers are detected, which reduces the number of repetitions of steps S702 to S704.
  • Alternatively, a checker pattern marker may be used; the posture of the plane mirror 4 can then be calculated by predetermined image processing, because each square vertex in the checker pattern plays the same role as a marker 5.
  • a convex mirror 41 having a known shape may be used as shown in FIG.
  • the convex mirror 41 in FIG. 18 has a checker pattern marker 52 attached to the center thereof.
  • In that case, after the mirror region image is cut out from the image in step S710 of FIG. 7, the mirror region image of the convex mirror 41 is converted into a planar image by a projective transformation determined by the shape of the convex mirror 41.
  • With the convex mirror 41, the number of calculation steps increases compared to the plane mirror 4, but a wider space can be observed from the mirror image viewpoint of the camera 14.
  • A configuration in which a plurality of mirrors are arranged may also be used.
  • The marker 5, the marker 52, and the marker 53 may differ in shape from one another, as long as they can be distinguished and detected by predetermined image processing. It goes without saying that the operation procedure shown in FIG. 7 can be applied to each of the plane mirrors 4a to 4c.
  • The plane mirrors 4a to 4c may also be used in combination.
  • For example, when the plane mirror 4b is used in combination with the plane mirror 4a, this may be realized by a recursive process that further cuts out the mirror region image of the plane mirror 4b from the mirror region image of the plane mirror 4a cut out in step S710. In this way, posture detection of objects with no blind spots can be performed by compound posture calculation using a plurality of plane mirrors.
  • In Embodiment 1, an example was described in which the arm unit 12 and the gripper unit 13 of the moving manipulator 1 are controlled using a plane mirror 4 provided with markers 5 at its four corners; however, this requires a plane mirror 4 with markers 5 to be placed in every work space. Therefore, in the present embodiment, an example is described in which the markers 5 are arranged not on the plane mirror 4 but at other places.
  • FIG. 20 is a diagram showing a configuration of the shelf 2 according to the present embodiment, and is a diagram contrasted with FIG. 4 in the first example.
  • FIG. 20 schematically shows a configuration in which the shelf 2 is viewed from the front in the left direction of FIG.
  • The shelf 2 of FIG. 20 differs from the shelf 2 shown in FIG. 4 in that the markers 5 are provided on parts of the shelf board 21 and the shelf top board 22. The plane mirror 4 is fixed at a predetermined angle at a predetermined position of the shelf top board 22 of the shelf 2 shown in FIG. 20, with its reflecting surface directed toward the shelf board 21.
  • the flat mirror 4 according to the present embodiment is different from the flat mirror 4 shown in FIG. 5 in that the markers 5 are not provided at the four corners.
  • the shelf shape data 602 stored in the object data storage unit 60 of the host control unit 6 includes the shape of the shelf 2 and the attitude of the plane mirror 4 based on a predetermined part of the shelf 2. Are stored. That is, in this embodiment, the relative positional relationship among the shelf 2, the marker 5, and the plane mirror 4 is known in advance.
  • FIG. 21 is a diagram illustrating a picking procedure of the object 3 by the moving manipulator 1 according to the present embodiment, and is compared with FIG. 7 in the first embodiment.
  • In step S7021, the entire shelf opening is photographed by the camera 14. The captured image is then searched to determine whether the four markers 5 affixed to the shelf board 21 and the shelf top board 22 shown in FIG. 20 appear in it (step S7031). If the four markers 5 are not detected in step S7031, the direction of the camera 14 is changed (step S704), and photographing is performed again (step S7021).
  • Next, the posture of the shelf 2 is calculated using the four markers 5 detected in the captured image of the camera 14 and the shelf shape data 602 (step S7051).
  • In step S7052, the posture of the plane mirror 4 fixed to the shelf 2 is calculated from the shelf shape data 602 based on the calculated posture of the shelf 2, and this posture is stored in the control unit 10 or the host control unit 6 (step S706).
  • As described above, in the present embodiment, the posture of the object supporting the plane mirror 4 is calculated from the shape of that supporting object, the posture of the plane mirror 4 relative to it, and the positions of the markers 5 attached to it; the posture of the plane mirror 4 is then calculated indirectly from the posture of the supporting object.
  • the fixed position of the plane mirror 4 and the marker 5 is not limited to the shelf 2.
  • FIG. 22 is a perspective view showing a schematic configuration of the tray 200 for storing the object 3 according to the present embodiment.
  • a two-dimensional code 53 is affixed to the outside of the front surface of the tray 200, and the plane mirror 4 is fixed to the inside of the back surface of the tray 200.
  • Information such as the contents of the tray 200 can be embedded in the two-dimensional code 53 using the pattern shape as an identification code.
  • Furthermore, since the shape of the two-dimensional code 53 is known, the posture can be calculated from the image using a single two-dimensional code 53.
  • If the shape data of the tray 200, including the position of the two-dimensional code 53 and the fixed posture of the plane mirror 4, is stored in the object data storage unit 60, then, in the same way as the posture of the plane mirror 4 is calculated from the posture of the shelf 2 in this embodiment, the posture of the plane mirror 4 fixed to the inside of the back of the tray 200 can be calculated from the two-dimensional code 53 affixed to the outside of the front of the tray 200. Needless to say, by using the calculated posture of this plane mirror 4, the object 3 in the tray 200 can be gripped by the gripper unit 13 without colliding with the walls of the tray 200.
  • As described above, in the present embodiment the markers 5 can be attached to places other than the plane mirror 4, at locations that are easy to photograph with the camera 14 fixed to the gripper unit 13.
  • FIG. 23 is a perspective view schematically showing the configuration of the gripper portion 13 of the moving manipulator 1 according to the present embodiment.
  • the gripper portion 13 according to the present embodiment is different from the first embodiment in that a checker pattern marker 52 is further provided at the center of the two finger mechanisms 130 of the gripper portion 13.
  • FIG. 24 is a drawing schematically showing a method for calculating the attitude of the plane mirror 4 in the moving manipulator 1 according to the present embodiment.
  • In FIG. 24, the field of view V of the camera 14 fixed to the gripper unit 13 captures the mirror image reflected in the plane mirror 4.
  • The posture indicated by the checker pattern marker 52 detected in the mirror image of the plane mirror 4 is denoted m1.
  • The plane containing the reflecting surface of the plane mirror 4 can be calculated as the perpendicular bisector plane between m1 and the posture of the checker pattern marker 52 calculated from the joint angles of the arm unit 12 and the link lengths between the joints.
  • Information on the attachment position of the marker 52 on the manipulator is stored as data in the data storage unit 101 or the like. Therefore, the posture of the marker 52 can be uniquely calculated from data such as the angle of each joint of the arm unit 12 and the lengths of the links connecting the joints, in the same way as the posture of the camera 14.
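In code, this perpendicular-bisector construction takes only a few lines. The sketch below assumes the marker position p (from the arm kinematics) and the position m1 of its detected mirror image are already expressed in the same coordinate frame; the function name is illustrative.

```python
# Sketch of the Embodiment 3 calculation: the reflecting plane is the
# perpendicular bisector between the marker position p and its mirror image m1.
import numpy as np

def mirror_plane_from_marker(p, m1):
    """Return (n, p0): unit normal and a point on the reflecting plane."""
    p, m1 = np.asarray(p, dtype=float), np.asarray(m1, dtype=float)
    n = m1 - p
    n = n / np.linalg.norm(n)   # normal points from the marker toward its image
    p0 = (p + m1) / 2.0         # the midpoint lies on the mirror surface plane
    return n, p0
```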
  • FIG. 25 is a perspective view of the work space in the present embodiment.
  • With this method, the plane containing the reflecting surface of the plane mirror 4 can be calculated, but the extent of the mirror region of the plane mirror 4 cannot.
  • Therefore, as shown in FIG. 25, the plane mirror 4 can be made large enough that, whenever the gripper unit 13 is in the work space, it covers the entire field of view of the camera 14 fixed to the gripper unit 13.
  • With such a plane mirror 4 in place, control is performed by regarding the entire captured image of the camera 14 as the mirror region image.
  • This modification is basically a modification of the embodiment described with reference to FIGS. 7 and 21 and is another example of the obstacle posture detection process.
  • FIG. 26 shows only a portion between steps S707 and S713 in FIGS. 7 and 21, which is different from FIGS.
  • In this modification, the mirror image viewpoint is not calculated; instead, the posture in the work space is derived from the posture, in the mirror image space, of the obstacle appearing in the mirror region of the image.
  • Figure 27 shows a conceptual diagram.
  • the object 3b is reflected in the mirror 4 as 3b '. Therefore, as in the embodiments of FIGS. 7 and 21, a mirror region in the image is cut out (step S710), and the virtual image 3b 'of the object 3b is detected by image processing similar to step S708. That is, the object 3b cannot be seen from the viewpoint C of the camera 14, but the virtual image 3b 'is detected from the mirror region image, and the posture thereof is calculated by pattern matching or homography calculation (step S711).
  • the image in the mirror region image is an image obtained by viewing the virtual image 3b 'of the object 3b from the viewpoint C of the camera 14.
  • The posture of the object 3b is then obtained by mirror-image conversion of the posture of the virtual image 3b′ with respect to the mirror (step S712).
  • the postures of the other objects 3a and the gripper unit 13 may be calculated together. Further, the calculation result may be finally converted into coordinates based on the origin O of the moving manipulator 1 to unify the coordinate system.
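The mirror-image conversion of step S712 can be sketched as the same plane reflection used for the mirror image viewpoint, now applied to the virtual-image pose; n and p0 again describe the mirror plane, and the function name is illustrative rather than from the patent.

```python
# Sketch of step S712: map the detected virtual-image pose 3b' back to the
# real pose of object 3b by reflecting it across the mirror plane (n, p0).
import numpy as np

def reflect_pose(T_virtual, n, p0):
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    S = np.eye(4)
    S[:3, :3] = np.eye(3) - 2.0 * np.outer(n, n)
    S[:3, 3] = 2.0 * (n @ np.asarray(p0, dtype=float)) * n
    T_real = S @ T_virtual  # left-multiply: mirror-space pose -> work space
    return T_real           # note: the rotation part changes handedness
```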
  • The embodiments above have been described for a moving manipulator, but the present invention is not limited to this.
  • the present invention can also be applied to a stationary manipulator in which the arm portion is moved by a linear motion mechanism fixed at a predetermined location.
  • It can be used to control various mobile manipulators.

Abstract

The invention provides a method for controlling the position of a mobile manipulator so as to avoid collisions with obstacles. The method comprises a procedure for detecting the orientation of a mirror, and a procedure for detecting the orientation of an object from an image. The orientation of an object is detected from an image reflected in the mirror, and the manipulator is controlled on the basis of the detected orientation of the object.
PCT/JP2015/050616 2015-01-13 2015-01-13 Manipulator control method and system, and manipulator WO2016113836A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2016569144A JP6328796B2 (ja) 2015-01-13 2015-01-13 Manipulator control method, system, and manipulator
PCT/JP2015/050616 WO2016113836A1 (fr) 2015-01-13 2015-01-13 Manipulator control method and system, and manipulator

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2015/050616 WO2016113836A1 (fr) 2015-01-13 2015-01-13 Manipulator control method and system, and manipulator

Publications (1)

Publication Number Publication Date
WO2016113836A1 (fr)

Family

ID=56405404

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/050616 WO2016113836A1 (fr) Manipulator control method and system, and manipulator

Country Status (2)

Country Link
JP (1) JP6328796B2 (fr)
WO (1) WO2016113836A1 (fr)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6167760B2 (ja) * 2013-08-26 2017-07-26 株式会社ダイフク Article position recognition device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05313744A * 1992-05-06 1993-11-26 Marker for position and posture measurement
JP2004338889A * 2003-05-16 2004-12-02 Image recognition apparatus
WO2006006624A1 * 2004-07-13 2006-01-19 Article holding system, robot, and method of controlling robot
JP2014161950A * 2013-02-25 2014-09-08 Robot system, robot control method, and robot calibration method
JP2014188617A * 2013-03-27 2014-10-06 Robot control system, robot, robot control method, and program

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018020423A (ja) * 2016-08-05 2018-02-08 株式会社日立製作所 Robot system and picking method
JP2020073302A (ja) * 2017-11-28 2020-05-14 ファナック株式会社 Robot and robot system
US11565421B2 (en) 2017-11-28 2023-01-31 Fanuc Corporation Robot and robot system
JP2020192648A (ja) * 2019-05-29 2020-12-03 株式会社日立製作所 End effector and picking system
JP7109699B1 (ja) * 2021-07-07 2022-07-29 三菱電機株式会社 Remote operation system
WO2023281648A1 (fr) * 2021-07-07 2023-01-12 三菱電機株式会社 Remote operation system

Also Published As

Publication number Publication date
JP6328796B2 (ja) 2018-05-23
JPWO2016113836A1 (ja) 2017-06-22

Similar Documents

Publication Publication Date Title
KR102442241B1 (ko) Robot charger docking localization
JP6811258B2 (ja) Position measurement of robot vehicles
EP3186777B1 (fr) Combination of stereo and structured-light processing
US9630321B2 (en) Continuous updating of plan for robotic object manipulation based on received sensor data
JP6855492B2 (ja) Robot system, robot system control device, and robot system control method
JP6328796B2 (ja) Manipulator control method, system, and manipulator
CN110640730B (zh) Method and system for generating a three-dimensional model for a robot scene
JP6359756B2 (ja) Manipulator, manipulator operation planning method, and manipulator control system
JP2021504793A (ja) Robot charger docking control
CN108177162B (zh) Interference region setting device for mobile robot
KR20220012921A (ko) Robot configuration with three-dimensional lidar
JP2016099257A (ja) Information processing apparatus and information processing method
JP6950638B2 (ja) Manipulator control device, manipulator control method, and manipulator control program
JP2007216350A (ja) Mobile robot
JP6855491B2 (ja) Robot system, robot system control device, and robot system control method
KR20190003643A (ko) Localization using negative mapping
JP2022521003A (ja) Multi-camera image processing
JP2020163502A (ja) Object detection method, object detection device, and robot system
KR100906991B1 (ko) Method for detecting non-visible obstacles for a robot
US20240003675A1 (en) Measurement system, measurement device, measurement method, and measurement program
CN117794704A (zh) Robot control device, robot control system, and robot control method
Chang et al. Vision-Based Cooperative Manipulation of Mobile Robots.
WO2023192295A1 (fr) Extrinsic calibration of a vehicle-mounted sensor using natural vehicle features
Hentout et al. Multi-agent control architecture of mobile manipulators: Extraction of 3D coordinates of object using an eye-in-hand camera

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15877791

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2016569144

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15877791

Country of ref document: EP

Kind code of ref document: A1