WO2016113836A1 - Manipulator control method, system, and manipulator - Google Patents

Manipulator control method, system, and manipulator

Info

Publication number
WO2016113836A1
Authority
WO
WIPO (PCT)
Prior art keywords
manipulator
mirror
camera
image
posture
Prior art date
Application number
PCT/JP2015/050616
Other languages
French (fr)
Japanese (ja)
Inventor
潔人 伊藤
宣隆 木村
敬介 藤本
Original Assignee
株式会社日立製作所
Priority date
Filing date
Publication date
Application filed by 株式会社日立製作所 filed Critical 株式会社日立製作所
Priority to PCT/JP2015/050616 priority Critical patent/WO2016113836A1/en
Priority to JP2016569144A priority patent/JP6328796B2/en
Publication of WO2016113836A1 publication Critical patent/WO2016113836A1/en

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00Controls for manipulators
    • B25J13/08Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • The present invention relates to the configuration and control of a manipulator, and more particularly to the configuration and control of a mobile manipulator using images.
  • A mobile manipulator includes a moving carriage unit that moves itself forward, backward, and by turning; an arm unit with one or more joints, attached to the moving carriage unit; and a gripper unit attached to the tip of the arm unit.
  • The gripper unit can grip or move a specific object. Such gripping and movement of an object by the gripper unit is referred to as an operation on the object.
  • A stationary manipulator fixed on a gantry in a factory or the like operates on objects at predetermined positions in a specific work space. Accordingly, a method for operating an object so that the manipulator does not collide or interfere with surrounding obstacles can be designed in advance.
  • A mobile manipulator, by contrast, moves around, so the working space of the arm unit and gripper unit cannot be limited. That is, for a mobile manipulator, the work space is an unknown space in which work objects and obstacles are cluttered. In such an unknown work space, the mobile manipulator must operate the arm unit, the gripper unit, and the target object without colliding or interfering with obstacles. To do so, it must acquire the positional relationship between the manipulator itself (in particular the arm unit and gripper unit, which interact with objects) and the target object and surrounding obstacles.
  • Patent Document 1 discloses a technology in which a camera installed in the work space captures an image (monitoring image) of a working robot, and an operator performs operations while viewing a first-person image and the monitoring image.
  • Patent Document 2 discloses an invention in which a first camera is provided on the trunk of a robot having a manipulator, a second camera is provided on the manipulator, and the images of the two cameras are synthesized. The region of the first camera's image occluded by the manipulator is complemented by the second camera, removing the blind spot caused by the manipulator, and the position of the grasped object is specified from the first camera.
  • Patent Document 3 discloses a technique comprising an identification surface provided in a work space, a reflecting mirror provided at a position facing the identification surface, and a camera that images the identification surface via the reflecting mirror; an intruder into the work space is detected based on the images of the identification surface captured by the camera.
  • The technique of Patent Document 3, however, presumes that the position of the reflecting mirror and the position of the camera relative to it are fixed in advance in a predetermined positional relationship within the work space. If the position of the camera or the reflecting mirror changes, the identification surface can no longer be photographed. That is, whenever the work space changes, the reflecting mirror and the camera must be re-installed in the predetermined positional relationship. The technique therefore cannot be applied where the work area changes arbitrarily, as with a mobile manipulator.
  • The problem to be solved by the present invention is to capture, without blind spots, the positional relationship between the gripper unit, the object, and the surrounding environment in the work space of a mobile manipulator, and, based on that, to provide a method for controlling the position of the gripper unit so that it does not collide with obstacles.
  • A representative one of the inventions disclosed in the present application includes a procedure for detecting the posture of a mirror from an image and a procedure for detecting the posture of an object from an image; the posture of an object is detected from the image reflected in the detected mirror, and the manipulator is controlled based on the detected posture of the object.
  • Another aspect of the present invention is a manipulator control method for operating a movable manipulator in a work space in which an object is arranged.
  • A mirror placed in or near the work space is imaged with a camera mounted on the manipulator, and the mirror posture is detected based on the camera posture and the image captured by the camera.
  • A mirror region is detected from the image captured by the camera, and at least a partial image of the object is detected from the detected mirror region.
  • The posture of the object in the work space is calculated based on the mirror posture and the object image, and the manipulator is operated based on the calculated posture information.
  • As one method of calculating the posture of the object in the work space from the mirror posture and the object image, the mirror image viewpoint of the camera is calculated based on the camera posture and the mirror posture, and the object posture in the work space is calculated based on the camera's mirror image viewpoint and the image of the object reflected in the mirror.
  • As another method, the virtual image posture of the object in the mirror image space is calculated based on the camera posture and the image of the object reflected in the mirror, and the posture of the object in the work space is calculated based on the virtual image posture of the object and the mirror posture.
  • The object may be placed on a shelf; the manipulator is moved to the vicinity of a predetermined shelf based on shelf arrangement information, and the postures of the camera, the mirror, and the object may be calculated with reference to an origin fixed to the manipulator. Since the manipulator itself moves, control is facilitated by unifying the coordinate axes on the manipulator at the time of work.
  • As the image reflected in the mirror, not only the image of the operation target but also images of the manipulator itself or of other objects may be used. More information can be obtained by integrating the image reflected in the mirror with the image of the area outside the mirror captured directly by the camera.
  • A marker can be arranged on the mirror or in the work space, and the posture of the mirror can be detected based on the marker information in the image captured by the camera.
  • In another aspect, the manipulator includes a moving mechanism that moves the manipulator, an operation mechanism that operates on the object, a camera that images the work space, a camera control mechanism that changes the posture of the camera, and a control unit.
  • The control unit has a function of detecting the posture of the camera, a function of detecting the posture of the mirror in the captured image based on the posture of the camera and the image captured by the camera, a function of detecting at least a partial image of the object from the captured image, and a function of calculating the posture of the object in the work space based on the mirror posture and the object image.
  • The operation mechanism is then operated based on the calculated posture information of the object.
  • As the operation mechanism, an example can be considered that has an arm unit connected by at least one joint mechanism and a gripper unit held by the arm unit and capable of grasping and moving an object.
  • In this case, the camera can be arranged on the gripper unit.
  • The control unit, which controls the operation of the operation mechanism, can store geometric data on the shape of the operation mechanism and detect the posture of the camera using that geometric data.
  • The work space in which the object is arranged can be defined with reference to a shelf on which the object is arranged, and a mirror can be arranged on the shelf.
  • When there are a plurality of shelves, the manipulator is configured to be movable to the vicinity of an arbitrary shelf among them.
  • The control unit stores shelf arrangement information on the arrangement of the plurality of shelves, shelf shape information on the shape of each shelf, and object information on the objects arranged on each shelf. Based on the shelf arrangement information and the object information, the control unit can move the manipulator to the vicinity of a predetermined shelf and image the work space with the camera; a sketch of one possible data layout is given below.
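As a rough illustration of how such stored data might be organized, the following is a minimal sketch using hypothetical Python dataclasses; every field name is an assumption, since the patent only names the data items (map data 601, shelf shape data 602, object data 603):

```python
from dataclasses import dataclass

@dataclass
class ShelfShape:
    """Shelf shape information (cf. shelf shape data 602); fields hypothetical."""
    width_m: float          # inner width of the work space
    depth_m: float          # depth from the shelf opening to the back
    height_m: float         # clearance between shelf board and shelf top board
    marker_layout_m: list   # marker positions relative to a reference part of the shelf

@dataclass
class ShelfInfo:
    """One entry of the shelf arrangement information (cf. map data 601)."""
    shelf_id: str
    position_xy: tuple      # shelf location in the travel-space map
    shape: ShelfShape

@dataclass
class ObjectInfo:
    """One entry of the object information (cf. object data 603)."""
    object_id: str
    shelf_id: str           # identification code of the shelf holding the object
    template_path: str      # stored image used later for pattern matching
```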
  • Alternatively, a marker is arranged in the work space and the relative positional relationship between the marker and the mirror is stored in the control unit; the posture of the mirror in the captured image can then be detected based on the marker.
  • Still another aspect of the present invention is a manipulator that operates in a work space in which an object is arranged.
  • The manipulator includes a moving mechanism for moving the manipulator, a gripper unit for operating on the object, an arm unit for moving the gripper unit, a camera for imaging the work space, a camera control mechanism for changing the posture of the camera, and a control unit.
  • The control unit has a function of detecting the posture of the camera, a function of detecting the posture of the mirror in the captured image based on the camera posture and the captured image, a function of detecting at least a partial image of the object from the captured image, and a function of calculating the posture of the object in the work space based on the mirror posture and the object image. The gripper unit is operated based on the calculated posture information of the object.
  • The posture of the mirror in the captured image may also be detected using stored information on the posture of the marker and the image of the marker in the captured image.
  • The control unit may be configured as a single computer, or any part of its input device, output device, processing device, and storage device may be another computer connected via a network. Specifically, the control unit may be a computer mounted on the manipulator, or part or all of its functions may be arranged on a server that can communicate with the manipulator.
  • The functions of the control unit can be realized by software executed by the processing device. Functions equivalent to those configured by software can also be realized by hardware such as an FPGA (Field Programmable Gate Array) or an ASIC (Application Specific Integrated Circuit). Such embodiments are also included in the scope of the present invention.
  • FIG. 2 is a perspective view schematically showing an example of the schematic configuration of the mobile manipulator according to Embodiment 1.
  • FIG. 3 is a side view showing an example of the configuration of the gripper unit of the mobile manipulator according to Embodiment 1.
  • FIG. 4 is a perspective view schematically showing an example of the structure of the shelf according to Embodiment 1.
  • FIG. 5 is a front view schematically showing an example of the configuration of the plane mirror according to Embodiment 1.
  • FIG. 6 is a block diagram illustrating an example of the configuration of the control system of the mobile manipulator according to Embodiment 1.
  • FIG. 7 is a flowchart illustrating an example of the operation procedure of the mobile manipulator according to Embodiment 1.
  • FIG. 8 is a side view schematically showing an example of the work space viewed from the side in step S702 of FIG. 7.
  • FIG. 9 is an image diagram illustrating an example of the captured image 141 captured by the camera 14 in FIG. 8.
  • FIG. 10 is a side view schematically showing another example of the work space viewed from the side in step S702 of FIG. 7.
  • FIG. 11 is an image diagram showing an example of the captured image 142 captured by the camera 14 in FIG. 10.
  • FIG. 12 is a principle diagram schematically showing the method of calculating the posture of the plane mirror according to Embodiment 1.
  • FIG. 17 is a diagram illustrating the configuration of the checker pattern marker according to Embodiment 1.
  • FIG. 18 is a schematic diagram illustrating an example of the configuration of the convex mirror according to Embodiment 1.
  • FIG. 21 is a flowchart illustrating an example of the operation procedure of the mobile manipulator according to Embodiment 2.
  • FIG. 22 is a schematic diagram illustrating the configuration of the tray according to Embodiment 2.
  • FIG. 23 is a perspective view showing an example of the configuration of the gripper unit of the mobile manipulator according to Embodiment 3.
  • FIG. 24 is a side view showing an example of the method of calculating the reflecting surface of the plane mirror according to Embodiment 3.
  • FIG. 25 is a perspective view schematically showing the structure of the shelf according to Embodiment 3.
  • FIG. 26 is a flowchart showing part of an example of the operation procedure of the mobile manipulator according to Embodiment 4.
  • FIG. 27 is a side view showing an example of the method of calculating the reflecting surface of the plane mirror according to Embodiment 4.
  • Notations such as "first", "second", and "third" are attached to identify constituent elements and do not necessarily limit their number or order.
  • A number identifying a component is used per context; a number used in one context does not necessarily indicate the same configuration in another context. Moreover, a component identified by one number is not precluded from also functioning as a component identified by another number.
  • FIG. 1 is a diagram showing the picking operation of an object by the mobile manipulator.
  • In Embodiment 1 of the present invention, as shown in FIG. 1, an operation of grasping and lifting an object 3 placed on a shelf 2 by the mobile manipulator 1 will be described.
  • Such a series of operations for gripping and lifting the object 3 by the mobile manipulator 1 is referred to as a picking operation on the object 3.
  • FIG. 2 is a perspective view schematically illustrating the configuration of the mobile manipulator 1 according to Embodiment 1.
  • The mobile manipulator 1 includes a control unit 10, a movable carriage unit 11, an arm unit 12, a gripper unit 13 fixed to the tip of the arm unit 12, and a camera 14 provided at a predetermined portion of the gripper unit 13.
  • The control unit 10 receives instructions from the host control unit 6 (described later) by wire or wirelessly, and controls the operation of the mobile manipulator 1.
  • Here, operation control includes not only controlling the operation of each part of the mobile manipulator 1 based on those instructions, but also intelligent processing such as planning the operation of the mobile manipulator 1 based on information from the camera 14 and other sources.
  • The moving carriage unit 11 includes one or more moving mechanisms such as wheels 110, and moves the mobile manipulator 1 to an arbitrary place on flat ground by moving forward, backward, and turning based on operation commands from the control unit 10.
  • Here, flat ground may include not only a simple plane but also a slope or a small step.
  • Such movement of the mobile manipulator 1 by the moving carriage unit 11 is referred to as traveling of the mobile manipulator 1.
  • The arm unit 12 is fixed to a predetermined portion of the movable carriage unit 11.
  • The arm unit 12 has one or more joint mechanisms, and moves the gripper unit 13 to a predetermined position and direction in three-dimensional space based on operation commands from the control unit 10.
  • Hereinafter, the position and direction of the gripper unit 13 are referred to as the posture of the gripper unit 13.
  • In the present embodiment, the arm unit 12 has a vertical articulated mechanism in which a plurality of joint mechanisms are connected by arms.
  • However, the arm unit 12 may have any configuration that can move the gripper unit 13 to a predetermined posture; the configuration is not limited to this.
  • For example, a horizontal articulated mechanism, an orthogonal mechanism, a parallel link mechanism, or a combination thereof may be used.
  • The gripper unit 13 has a function of gripping a predetermined object.
  • In the present embodiment, the gripper unit 13 grips an object with two finger mechanisms 130, but the configuration is not limited to this.
  • A configuration that grips the object with a vacuum suction mechanism or a magnet, without using finger mechanisms, may also be used.
  • FIG. 3 is a schematic side view of the distal end portion of the arm unit 12, the gripper unit 13, and the camera 14 of FIG. 2.
  • The camera 14 is offset and fixed at a predetermined angle with respect to the extending direction of the finger mechanisms 130 of the gripper unit 13; specifically, the lower edge of the field of view V of the camera 14 is fixed parallel to the extending direction of the fingers. This prevents the finger mechanisms 130 from being reflected in the field of view V.
  • The arm unit 12 is provided with a joint mechanism 121, and the gripper unit 13 can be directed at an arbitrary angle along the arrow P in FIG. 3.
  • The arm unit 12 is also provided with a joint mechanism 122, and the gripper unit 13 can be directed at an arbitrary angle along the arrow R in FIG. 3.
  • By combining these joint mechanisms, the arm unit 12 can direct the gripper unit 13 in an arbitrary direction.
  • The configuration shown in FIG. 3 is an example, and various known joint mechanisms can be combined.
  • FIG. 4 is a perspective view schematically showing the configuration of the shelf 2 according to Embodiment 1 of the present invention.
  • the shelf 2 is composed of one shelf board 21 and one shelf top board 22 supported by four columns 20. A plurality of picking objects 3 are placed on the shelf board 21.
  • Here, a work space is defined for the mobile manipulator 1 according to Embodiment 1.
  • The work space is a predetermined space in which one or more objects 3 to be picked by the mobile manipulator 1 are placed.
  • In the present embodiment, the work space of the mobile manipulator 1 is defined as the rectangular parallelepiped three-dimensional space surrounded by the columns 20, from the upper surface of the shelf board 21 to the lower surface of the shelf top board 22.
  • In this work space, the plane mirror 4 is arranged at a predetermined position.
  • In the present embodiment, the plane mirror 4 is fixed at a predetermined position on the shelf top board 22 at a predetermined angle, with its reflecting surface directed toward the shelf board 21.
  • The object 3 and the plane mirror 4 need not be strictly within the work space: part of the object 3 or the plane mirror 4 may protrude from the work space, and the object 3 or the plane mirror 4 may be placed entirely in the vicinity of the work space.
  • FIG. 5 is a drawing schematically showing the reflecting surface of the plane mirror 4 viewed from the front.
  • The plane mirror 4 can be made of glass, metal, plastic, or the like, and reflects at least the wavelengths that can be imaged by the camera 14.
  • Markers 5 having a predetermined shape are arranged at the four corners of the reflecting surface of the plane mirror 4.
  • In the present embodiment, all the markers 5 have the same shape.
  • The shape of the marker 5 may be any shape whose presence and number can be detected from an image captured by the camera 14 by predetermined image processing, and whose center coordinates in the image can be calculated by that image processing.
  • The markers 5 can be formed by adding protrusions on the surface or inside of the plane mirror, by printing, by marking, by applying a seal, or the like.
  • The positional relationship between the markers 5 arranged on the plane mirror 4 is stored in advance in the control unit 10 or the host control unit 6 (described later).
  • FIG. 6 is a diagram showing an example of the configuration of the control system for the mobile manipulator in the first embodiment.
  • The control unit 10 of the mobile manipulator 1 includes an overall control unit 100 that controls the entire manipulator 1, a data storage unit 101 that stores data necessary for control, a moving carriage control unit 102 that controls the moving carriage unit 11, an arm control unit 103 that controls the arm unit 12, a gripper control unit 104 that controls the gripper unit 13, and an image processing unit 105 that processes images obtained by the camera 14.
  • The control unit 10 of the mobile manipulator 1 is configured to be able to communicate with the host control unit 6 by wire or wirelessly.
  • The host control unit 6 includes an object data storage unit 60 that stores information such as map data 601, shelf shape data 602, and object data 603.
  • The host control unit 6 may be built into the housing of the mobile manipulator 1 or may be external to it.
  • The host control unit 6 and the control unit 10 need only be able to perform information processing in cooperation and control the mobile manipulator 1; they may be configured as a single computer, or any part of the input device, output device, processing device, and storage device may consist of a plurality of computers connected by a wired or wireless network.
  • The map data 601 is data describing the points where shelves 2 are arranged in the space where the moving carriage unit 11 of the mobile manipulator 1 can travel (hereinafter referred to as the travel space).
  • The shelf shape data 602 is data describing information such as the width, depth, and height of the shelf 2 with reference to a predetermined part of the shelf 2.
  • The object data 603 stores information such as the type or identification number of each object 3 to be picked and the identification code of the shelf 2 on which the object 3 is arranged.
  • The object data 603 may further include characteristic data such as the shape, weight, and material of the object 3.
  • From these data, the host control unit 6 can identify where in the travel space the work space containing the object 3 to be picked is located, and can transmit a picking command to the control unit 10 of the mobile manipulator 1 based on this.
  • However, the exact posture (position and orientation, such as the position of the center of gravity) of the target object 3 cannot be known by the host control unit 6. This is because objects 3 in the travel space are picked by the mobile manipulator 1 or by human workers, so their postures may change at any time.
  • Therefore, the mobile manipulator 1 needs to confirm the posture of the object 3 in the work space before picking it.
  • FIG. 7 is a diagram showing an overall outline of the picking operation procedure in Embodiment 1. Each step in FIG. 7 may be executed by a predetermined program built into the control unit 10 of the mobile manipulator 1, or the host control unit 6 may execute the steps and transmit commands to the control unit 10 one by one or collectively. A skeleton of the procedure is sketched below.
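As a reading aid, the flow of FIG. 7 can be summarized as a control skeleton. This is a sketch only: `manipulator`, `shelf`, and every method called on them are hypothetical placeholders, not an API defined by the patent.

```python
def picking_procedure(manipulator, shelf, target_id, max_tries=5):
    """Skeleton of steps S701-S715 of FIG. 7 (all callees are placeholders)."""
    manipulator.travel_to(shelf.position_xy)                 # S701: travel to the shelf
    for _ in range(max_tries):
        image = manipulator.camera.capture()                 # S702: image the work space
        markers = detect_markers(image)                      # S703: find the markers 5
        if len(markers) == 4:
            break
        manipulator.change_gripper_posture()                 # S704: change camera posture
    else:
        raise RuntimeError("markers not detected")           # error signal / warning
    T_om = estimate_mirror_pose(markers, manipulator)        # S705; stored in S706
    image = manipulator.camera.capture()                     # S707: image again
    pose_direct = detect_object(image, target_id)            # S708: direct view
    # S709: mirror image viewpoint C' follows from the camera and mirror postures
    mirror_img = cut_out_mirror_region(image, T_om)          # S710: mirror region image
    pose_mirror = detect_object(mirror_img, target_id)       # S711: view via the mirror
    poses = integrate_postures(pose_direct, pose_mirror)     # S713: unify coordinates
    manipulator.move_gripper(poses[target_id])               # S714: approach posture
    manipulator.grip()                                       # S715: grip the object
```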
  • In step S701, the mobile manipulator 1 travels to the front of the shelf 2 on which the object 3 to be picked is displayed.
  • The host control unit 6 stores the location of the shelf 2 on which the object 3 to be picked is displayed, that is, the location of the work space, within the travel space of the mobile manipulator 1.
  • The host control unit 6 transmits a picking command including information indicating the location of the work space to the control unit 10 of the mobile manipulator 1.
  • The mobile manipulator 1 travels to the location of the work space specified by the picking command by controlling the moving carriage unit 11.
  • For this traveling, an existing technique may be used, such as embedding a magnetic cable indicating the travel route in the travel space in advance and moving along the magnetic cable.
  • Alternatively, an existing technique of moving autonomously using the map data 601 may be used.
  • In step S702, after traveling to the front of the shelf 2, the mobile manipulator 1 changes the posture of the gripper unit 13 so that the camera 14 faces the work space from the front (opening) of the shelf, and captures an image.
  • The position of the shelf board 21 in the work space relative to the center of the mobile manipulator 1 is known from the work-space location information and the shelf shape data 602 stored in advance in the host control unit 6 (the center can be chosen arbitrarily; to simplify the description, it is hereinafter referred to as the origin of the mobile manipulator 1). Therefore, the mobile manipulator 1 can control the arm unit 12 and the gripper unit 13 so that the camera 14 takes a predetermined posture (position and orientation) determined relative to the position of the shelf board 21.
  • For example, the center of the mobile manipulator 1 is positioned at a predetermined distance from the shelf 2, and the arm unit 12 and the gripper unit 13 are positioned in a posture set according to the posture of the shelf board 21.
  • FIG. 8 is a diagram schematically showing the work space viewed from the side in step S702.
  • In FIG. 8, the object 3b, which is not the target of the picking operation, is positioned behind the object 3a as viewed from the gripper unit 13. Therefore, when picking the object 3a with the gripper unit 13, the finger mechanisms 130 of the gripper unit 13 must be controlled so as not to interfere with the object 3b.
  • FIG. 9 is a diagram showing an example of a photographed image 141 photographed by the camera 14 in step S702.
  • In the captured image 141, the image of the object 3b is partially hidden behind the object 3a. It is generally difficult to accurately specify the posture of an object from such a partially missing image. Moreover, if the object 3b were completely hidden by the object 3a as viewed from the gripper unit 13, its presence could not be detected at all from the state of FIG. 9.
  • In step S703, detection processing of the markers 5 at the four corners of the plane mirror 4 is performed on the captured image 141 (FIG. 9).
  • The shape and number of the markers 5 are stored in advance, for example as part of the shelf shape data 602. Since the shape and number of the markers 5 are known, the number of markers 5 appearing in the captured image 141 and their pixel coordinate values can be determined by known template matching processing, as sketched below. The markers 5 may be omitted and the corners of the plane mirror 4 extracted instead by image processing or the like, but preparing and using markers reduces the burden of image processing.
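A minimal sketch of such marker detection with OpenCV template matching; the threshold is an assumption, and duplicate hits around one marker would need non-maximum suppression in practice:

```python
import cv2
import numpy as np

def find_marker_centers(captured_image, marker_template, threshold=0.8):
    """Return center pixel coordinates of marker candidates (cf. step S703)."""
    gray = cv2.cvtColor(captured_image, cv2.COLOR_BGR2GRAY)
    tmpl = cv2.cvtColor(marker_template, cv2.COLOR_BGR2GRAY)
    scores = cv2.matchTemplate(gray, tmpl, cv2.TM_CCOEFF_NORMED)
    ys, xs = np.where(scores >= threshold)
    h, w = tmpl.shape
    # shift from each match's top-left corner to the marker center
    return [(x + w // 2, y + h // 2) for x, y in zip(xs, ys)]
```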
  • In the present embodiment, the image processing unit 105 performs the image processing and calculations from step S703 onward. The image processing unit 105, the overall control unit 100, and the host control unit 6 may also share the processing.
  • If the four markers 5 are not detected from the image in step S703, the posture of the gripper unit 13 is changed (step S704).
  • For example, a control operation is performed such as rotating the joint mechanism 121 of the arm unit 12 in the direction of arrow P in FIG. 3 to change the posture of the gripper unit 13.
  • Then the work space is imaged again by the camera 14 (step S702), and the marker 5 detection process (step S703) is performed.
  • The processing from step S702 to step S704 is repeated until a known number (for example, four) of markers are detected from the captured image of the camera 14.
  • In step S701, the positional relationship between the shelf 2 and the mobile manipulator 1 is assumed to have been made appropriate, so that all the markers fall within the angle of view of the camera 14; the markers should therefore be detected within one trial or a predetermined number of trials. If they are not, the operator can be notified by issuing an error signal or a warning.
  • FIG. 10 is a diagram schematically showing the work space viewed from the side when four markers 5 are detected from the image captured by the camera 14.
  • In this state, the mirror 4 is within the angle of view of the camera 14.
  • FIG. 11 is a drawing showing an example of the image captured by the camera 14 in FIG. 10.
  • In step S705, the posture of the plane mirror 4 in the work space is calculated based on the image captured by the camera 14 (FIG. 11). An example of the calculation method is shown in FIG. 12.
  • In FIG. 12, the origin of the mobile manipulator 1 is O, the viewpoint of the camera 14 is C, and the center point of the plane mirror 4 is M.
  • For simplicity, the plane mirror 4 is illustrated as a rectangle with no thickness, and the positions of the markers 5 are the vertices of the rectangle.
  • The posture of the viewpoint C of the camera 14 with respect to the origin O of the mobile manipulator 1 (information indicating the position and tilt of the camera, that is, the position of the center point of the camera lens and the direction of the camera's optical axis) can be uniquely calculated from the angle of each joint of the arm unit 12 and the lengths of the arms connecting the joints (not shown in FIG. 12).
  • Information on the mobile manipulator 1, such as the angle of each joint of the arm unit 12 and the lengths of the arms connecting the joints, is stored in advance, for example in the data storage unit 101 of the control unit 10.
  • Hereinafter, the homogeneous transformation matrix indicating the posture of the viewpoint C of the camera 14 with respect to the origin O of the mobile manipulator 1 is written as ^O T_C (the leading superscript denotes the reference frame and the subscript the target frame).
  • The image (FIG. 11) captured at the viewpoint C of the camera 14 is shown as the captured image plane P.
  • The mutual positional relationship between the markers 5 on the plane mirror 4, that is, the lengths H and W of the sides of the plane mirror 4, is stored, for example, as part of the shelf shape data 602 and is known.
  • The intersections of the four straight lines connecting each marker 5 to the viewpoint C of the camera 14 with the captured image plane P are the coordinates of the markers 5 detected in step S703.
  • From these relationships, the posture of the center point M of the plane mirror 4 with respect to the viewpoint C of the camera 14 (information indicating the position and tilt of the mirror, that is, the position of the mirror's center point and the direction its surface faces) can be estimated. The homogeneous transformation matrix calculated in this way is written as ^C T_M.
  • From ^O T_C and ^C T_M, the homogeneous transformation matrix ^O T_M indicating the posture of the center point of the plane mirror 4 with respect to the origin O of the mobile manipulator 1 can be calculated as ^O T_M = ^O T_C · ^C T_M. This matrix ^O T_M corresponds to the posture of the plane mirror 4 in the work space (real space). A sketch of this computation is given below.
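The patent does not name the algorithm that recovers ^C T_M from the four marker image coordinates; one standard choice is a perspective-n-point solver. A sketch assuming OpenCV, a calibrated camera (intrinsics K, distortion dist), and marker positions given by the known side lengths H and W:

```python
import cv2
import numpy as np

def mirror_pose_from_markers(image_pts, H, W, K, dist, T_oc):
    """Estimate ^O T_M from four marker pixel coordinates (cf. step S705).

    image_pts: 4x2 marker pixels in a known order; T_oc: 4x4 matrix ^O T_C.
    """
    # marker positions in the mirror frame, origin at the center point M
    obj_pts = np.array([[-W / 2, -H / 2, 0], [W / 2, -H / 2, 0],
                        [W / 2, H / 2, 0], [-W / 2, H / 2, 0]], np.float64)
    ok, rvec, tvec = cv2.solvePnP(obj_pts, np.asarray(image_pts, np.float64), K, dist)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)
    T_cm = np.eye(4)                       # ^C T_M: mirror pose in the camera frame
    T_cm[:3, :3], T_cm[:3, 3] = R, tvec.ravel()
    return T_oc @ T_cm                     # ^O T_M = ^O T_C · ^C T_M
```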
  • As described above, the posture of the plane mirror 4 in the work space is derived from the image of the plane mirror 4 captured by the camera 14.
  • Since the posture of the plane mirror 4 in the work space is calculated from the markers 5 arranged at its four corners, an accurate posture can be obtained.
  • The calculated posture of the plane mirror 4 is stored in the control unit 10 or the host control unit 6 (step S706).
  • In step S707, the work space is imaged again using the camera 14.
  • Here, the posture is the same as in step S702, but any other posture may be used as long as the work space is photographed.
  • Hereinafter, the captured image is assumed to be an image 141 similar to that of FIG. 9.
  • The image 141 is subjected to two image processes, the process of step S708 and the processes of steps S709 to S712, in parallel or in series.
  • In step S708, the object 3a to be picked is detected from the image 141 captured in step S707, and its posture is calculated.
  • For the detection, a known image recognition algorithm may be applied, such as pattern matching against an image of the object 3 stored in advance (for example, as part of the object data 603).
  • The posture of the detected object may be calculated by storing the size of the object in advance and comparing it with the size appearing in the captured image 141, or by calculating a homography; a sketch is given below.
  • These image processes can be computed, for example, in coordinates with the viewpoint C of the camera 14 as the origin; the result may eventually be converted into coordinates based on the origin O of the mobile manipulator 1 to unify the coordinate system.
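A sketch of homography-based localization of a stored object image, using ORB features as an assumed matching method (the patent names pattern matching and homography calculation but not a specific detector):

```python
import cv2
import numpy as np

def locate_object_homography(scene, template):
    """Homography mapping template pixels into the scene, or None (cf. step S708)."""
    orb = cv2.ORB_create()
    kp_t, des_t = orb.detectAndCompute(template, None)
    kp_s, des_s = orb.detectAndCompute(scene, None)
    if des_t is None or des_s is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_t, des_s), key=lambda m: m.distance)[:30]
    if len(matches) < 4:
        return None
    src = np.float32([kp_t[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_s[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    Hmat, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return Hmat   # object posture follows by decomposing Hmat with the intrinsics
```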
  • At this time, the postures of the other objects 3b and of the gripper unit 13 may be calculated together.
  • In step S709, for the viewpoint C of the camera 14, a point C′ that is plane-symmetric to C with respect to the reflecting surface M of the plane mirror 4 is calculated.
  • Hereinafter, the viewpoint C′ is referred to as the mirror image viewpoint C′ of the camera 14.
  • FIG. 13 is a side view illustrating the mirror image viewpoint C′ of the camera with respect to the plane mirror.
  • The posture of the point C′ that is plane-symmetric to the viewpoint C with respect to the reflecting surface M of the plane mirror 4 is the mirror image viewpoint C′ of the camera 14. That is, if the posture of the camera 14 and the posture of the mirror 4 are known, the posture of the mirror image viewpoint C′ of the camera 14 can be calculated.
  • The object reflected in the mirror 4 as captured by the camera 14 corresponds to the image of the object viewed from the mirror image viewpoint C′ of the camera 14.
  • The posture of the mirror 4 was stored in step S706 and can be expressed, for example, in coordinates based on the origin O of the mobile manipulator 1. The posture of the camera 14 can be calculated in the same coordinates from the data on the mobile manipulator 1 stored in the data storage unit 101, as described above. A sketch of this reflection is given below.
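A point reflection across the mirror plane can be written as the 4x4 transform S with rotation part I − 2nnᵀ and translation 2(p·n)n, where n is the unit surface normal and p a point on the plane. A sketch, assuming (a convention the patent does not fix) that the z axis of the mirror frame is the surface normal:

```python
import numpy as np

def reflect_across_mirror(T_om):
    """4x4 reflection across the plane of the mirror pose ^O T_M."""
    n = T_om[:3, 2]                         # assumed unit normal of the surface
    p = T_om[:3, 3]                         # center point M lies on the plane
    S = np.eye(4)
    S[:3, :3] = np.eye(3) - 2.0 * np.outer(n, n)
    S[:3, 3] = 2.0 * np.dot(p, n) * n
    return S                                # det of the rotation part is -1

def mirror_image_viewpoint(T_oc, T_om):
    """Posture of the mirror image viewpoint C' (cf. step S709).

    Note the reflected frame is left-handed, matching the flipped mirror view."""
    return reflect_across_mirror(T_om) @ T_oc
```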
  • In step S710, the region of the plane mirror 4 in the captured image 141, that is, the mirror region image, is cut out.
  • An example of the mirror region image cut-out process is described with reference to FIG. 12 again.
  • The homogeneous transformation matrix ^O T_M indicating the posture of the plane mirror 4 with respect to the origin O of the mobile manipulator 1 was stored in step S706 (it is assumed here that the mobile manipulator 1 does not move after step S701). The homogeneous transformation matrix ^O T_C indicating the posture of the viewpoint C of the camera 14 with respect to the origin O can be calculated from the angle of each joint of the arm unit 12 and the lengths of the arms connecting the joints. From these homogeneous transformation matrices ^O T_M and ^O T_C, the homogeneous transformation matrix ^C T_M indicating the posture of the plane mirror 4 with reference to the viewpoint C of the camera 14 in step S710 (FIG. 13) can be calculated as ^C T_M = (^O T_C)⁻¹ · ^O T_M; a sketch of the cut-out is given below.
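A sketch of the cut-out: the four mirror corners are projected through the camera model with ^C T_M, and everything outside the resulting polygon is masked (OpenCV assumed; corner layout as in the earlier PnP sketch):

```python
import cv2
import numpy as np

def cut_out_mirror_region(image, T_cm, H, W, K, dist):
    """Mask the captured image down to the mirror region (cf. step S710)."""
    corners_m = np.array([[-W / 2, -H / 2, 0], [W / 2, -H / 2, 0],
                          [W / 2, H / 2, 0], [-W / 2, H / 2, 0]], np.float64)
    rvec, _ = cv2.Rodrigues(T_cm[:3, :3])
    tvec = T_cm[:3, 3]
    px, _ = cv2.projectPoints(corners_m, rvec, tvec, K, dist)
    mask = np.zeros(image.shape[:2], np.uint8)
    cv2.fillConvexPoly(mask, px.reshape(-1, 2).astype(np.int32), 255)
    return cv2.bitwise_and(image, image, mask=mask)
```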
  • The mirror region image of the plane mirror 4 cut out in this way is an image of the field of view indicated by V′ from the mirror image viewpoint C′ of the camera 14 in FIG. 13.
  • As shown in FIG. 13, the object 3b is captured in the mirror region image without being shadowed by the object 3a. That is, an object that could not be detected in step S708 because it was in a blind spot is captured in the mirror region image.
  • In step S711, the object 3b is detected from the mirror region image cut out in step S710 by image processing similar to that of step S708. That is, the object 3b, which cannot be seen from the viewpoint C of the camera 14, is detected from the mirror region image, and its posture is calculated by pattern matching or homography calculation. As shown in FIG. 13, the image in the mirror region is an image of the object 3b viewed from the mirror image viewpoint C′ of the camera 14.
  • Accordingly, the posture of the object 3b is calculated as a posture viewed from the mirror image viewpoint C′ of the camera 14; the calculation result can be expressed in coordinates with the mirror image viewpoint C′ as the origin.
  • At this time, the postures of the other objects 3a and of the gripper unit 13 may be calculated together.
  • The coordinate system may be unified by converting the coordinates into coordinates based on the origin O of the mobile manipulator 1.
  • In step S713, the posture information of the object 3a calculated in step S708 and the posture information of the object 3b obtained in step S711 are integrated.
  • If each posture is converted into coordinates based on the origin O of the mobile manipulator 1 and the coordinate system is unified, the control unit 10 can grasp the positional relationship of the objects in the work space in a unified manner.
  • Furthermore, for an object captured both directly by the camera and in the mirror, its three-dimensional posture can be calculated by stereo processing; a sketch is given below.
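Since C and C′ act as a stereo pair seen through one physical camera, a point visible both directly and in the mirror can be triangulated. A sketch assuming pixel correspondences are already established (the mirror view is left-right flipped, which the correspondence step must account for):

```python
import cv2
import numpy as np

def triangulate_in_frame_O(K, T_oc, T_oc_mirror, pts_direct, pts_mirror):
    """3-D points in the manipulator frame O from viewpoints C and C'."""
    def projection(T_ow):                    # P = K [R | t], camera-from-frame-O
        T_wo = np.linalg.inv(T_ow)
        return K @ T_wo[:3, :4]
    pts4 = cv2.triangulatePoints(projection(T_oc), projection(T_oc_mirror),
                                 np.asarray(pts_direct, np.float64).T,
                                 np.asarray(pts_mirror, np.float64).T)
    return (pts4[:3] / pts4[3]).T            # Nx3 Euclidean coordinates
```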
  • Based on the integrated posture information, the gripper unit 13 is moved to a posture in which the object 3a can be gripped (step S714), and the object is gripped by the gripper unit 13 (step S715).
  • As described above, the mobile manipulator 1 can detect, in addition to the posture of the object 3a targeted by the picking operation, the posture of the object 3b located in a blind spot of the work space, hidden behind the object 3a as seen from the shelf opening. Therefore, only the object 3a can be picked, without the gripper unit 13 colliding with the object 3b.
  • Note that if the mobile manipulator 1 stops slightly tilted with respect to the shelf 2 and the mirror posture were simply assumed rather than measured, the posture of the object 3b viewed from the mirror image viewpoint C′ would be shifted by twice the tilt angle of the mobile manipulator 1 with respect to the shelf 2, and the mirror region in step S710 would also be cut out from a region shifted by the tilt angle; under these effects, the posture of the object 3b could not be calculated correctly.
  • Because step S705 calculates the posture of the plane mirror 4 using the markers 5 and the like, the calculation of the mirror image viewpoint C′ of the camera 14 with respect to the plane mirror 4 and the cut-out of the mirror region image can be performed with high accuracy even when the stop position shifts during travel, for example when the mobile manipulator 1 stops tilted with respect to the shelf 2.
  • Further, a step of photographing again with the gripper unit 13 sufficiently close to the object 3a may be provided. An example is shown in FIG. 14.
  • When the gripper unit 13 is sufficiently close to the object 3a, only a small part of the object 3a can be photographed directly by the camera 14.
  • However, from the mirror image viewpoint C″ of the camera 14 in the state of FIG. 14, three things can be photographed: the object 3a, the object 3b, and the finger mechanisms 130 of the gripper unit 13. Accordingly, not only the positional relationship between the object 3a and the object 3b, but also the positional relationships between the object 3a and the gripper unit 13 and between the object 3b and the gripper unit 13 can be detected from the mirror region image.
  • In this case, only steps S709 to S711 need be performed, without performing step S708. In this way, the positional relationships of the object 3a, the object 3b, and the gripper unit 13 can be grasped.
  • At this time, the procedure of calculating the posture of the plane mirror 4 using the markers 5 in step S705 may be performed before the procedure of steps S709 to S711, and step S714, which controls the movement of the gripper unit 13, may be performed based on the result of the procedure of step S711.
  • The camera 14 is not necessarily fixed to the gripper unit 13. An example is shown in FIG. 15.
  • For example, a joint mechanism 131 may be further provided between the camera 14 and the gripper unit 13 so that the angle T of the camera 14 with respect to the gripper unit 13 can be changed arbitrarily. If the angle T of the camera 14 with respect to the gripper unit 13 can be observed by the control unit 10, the posture of the plane mirror 4 from the origin O of the mobile manipulator 1 shown in FIG. 12 can still be calculated. This eliminates the need to control the arm unit 12 when searching for the markers 5 in step S704.
  • In step S703, it was described that the markers 5 at the four corners of the plane mirror 4 are detected, but it is not always necessary to detect four markers 5.
  • An example is shown in FIG. 16. For instance, two types of markers 5 and 51 with different shapes may be arranged on the diagonals of the plane mirror 4. In this case, the posture of the plane mirror 4 can be calculated if at least three markers are detected. This reduces the number of repetitions of steps S702 to S704.
  • In addition, if a checker pattern marker is used, the posture of the plane mirror 4 can be calculated by predetermined image processing; each square vertex of the checker pattern plays the same role as a marker 5.
  • The mirror is not limited to the plane mirror 4; a convex mirror 41 with a known shape may be used, as shown in FIG. 18.
  • The convex mirror 41 in FIG. 18 has a checker pattern marker 52 attached to its center.
  • When the convex mirror 41 is used, the mirror region image of the convex mirror 41 is cut out from the image in step S710 of FIG. 7 and then converted into a planar image by a projective transformation determined by the shape of the convex mirror 41.
  • With the convex mirror 41, the number of calculation steps increases compared to the plane mirror 4, but a wider space can be observed from the mirror viewpoint of the camera 14. A simplified sketch of the planar conversion is given below.
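As a simplified illustration of the planar conversion, the checker marker 52 seen in the mirror region can be mapped onto a canonical grid with a homography; this only approximates the correction for a locally planar patch, and a true convex-mirror unwarp would use the known mirror shape, which is omitted here. Corner ordering returned by OpenCV may need normalization:

```python
import cv2
import numpy as np

def flatten_checker_patch(mirror_region, pattern_size=(6, 6), square_px=40):
    """Warp the checker patch seen in the mirror onto a plane (illustrative only)."""
    found, corners = cv2.findChessboardCorners(mirror_region, pattern_size)
    if not found:
        return None
    cols, rows = pattern_size
    gx, gy = np.meshgrid(np.arange(cols), np.arange(rows))
    canonical = (np.stack([gx, gy], axis=-1).reshape(-1, 2) * square_px).astype(np.float32)
    Hmat, _ = cv2.findHomography(corners.reshape(-1, 2), canonical, cv2.RANSAC)
    out_size = (cols * square_px, rows * square_px)
    return cv2.warpPerspective(mirror_region, Hmat, out_size)
```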
  • A configuration in which a plurality of plane mirrors 4a to 4c are arranged may also be used.
  • The markers 5, 52, and 53 need only have mutually distinguishable shapes that can be discriminated and detected by predetermined image processing. It goes without saying that the operation procedure shown in FIG. 7 can be applied to each of the plane mirrors 4a to 4c.
  • The plane mirrors 4a to 4c may also be used in combination.
  • For example, when the plane mirror 4b is used in combination with the plane mirror 4a, this may be realized by a recursive process of further cutting out the mirror region image of the plane mirror 4b from the mirror region image of the plane mirror 4a cut out in step S710. In this way, posture detection of objects with no blind spots can be performed by composite posture calculation using a plurality of plane mirrors.
  • In Embodiment 1, an example was described in which the arm unit 12 and the gripper unit 13 of the mobile manipulator 1 are controlled using a plane mirror 4 with markers 5 at its four corners; however, a plane mirror 4 provided with markers 5 then needs to be placed in every work space. In the present embodiment (Embodiment 2), therefore, an example is described in which the markers 5 are not arranged on the plane mirror 4 but at a place other than the plane mirror 4.
  • FIG. 20 is a diagram showing the configuration of the shelf 2 according to the present embodiment, contrasted with FIG. 4 of Embodiment 1.
  • FIG. 20 schematically shows the configuration of the shelf 2 viewed from the front.
  • The shelf 2 of FIG. 20 differs from the shelf 2 shown in FIG. 4 in that markers 5 are provided on parts of the shelf board 21 and the shelf top board 22. The plane mirror 4 is fixed at a predetermined angle at a predetermined position on the shelf top board 22 of the shelf 2 shown in FIG. 20, with its reflecting surface directed toward the shelf board 21.
  • The plane mirror 4 according to the present embodiment differs from the plane mirror 4 shown in FIG. 5 in that markers 5 are not provided at its four corners.
  • The shelf shape data 602 stored in the object data storage unit 60 of the host control unit 6 includes the shape of the shelf 2 and the posture of the plane mirror 4 with reference to a predetermined part of the shelf 2, as well as the positions of the markers 5. That is, in the present embodiment, the relative positional relationships among the shelf 2, the markers 5, and the plane mirror 4 are known in advance.
  • FIG. 21 is a diagram illustrating the picking procedure of the object 3 by the mobile manipulator 1 according to the present embodiment, contrasted with FIG. 7 of Embodiment 1.
  • In the present embodiment, the entire shelf opening is photographed by the camera 14 in step S7021. It is then searched whether the four markers 5 affixed to the shelf board 21 and the shelf top board 22 shown in FIG. 20 appear in the captured image of the camera 14 (step S7031). If the four markers 5 are not detected in step S7031, the direction of the camera 14 is changed (step S704) and photographing is performed again (step S7021).
  • When the four markers 5 are detected, the posture of the shelf 2 is calculated using the markers detected from the captured image of the camera 14 and the shelf shape data 602 (step S7051).
  • Next, the posture of the plane mirror 4 fixed to the shelf 2 is calculated from the shelf shape data 602 based on the calculated posture of the shelf 2 (step S7052), and the posture is stored in the control unit 10 or the host control unit 6 (step S706).
  • As described above, in the present embodiment, using the shape of the supporting object of the plane mirror 4, the posture of the plane mirror 4 with respect to the supporting object, and the positions of the markers 5 attached to the supporting object, the posture of the supporting object is calculated, and the posture of the plane mirror 4 is calculated indirectly from the posture of the supporting object.
  • The place where the plane mirror 4 and the markers 5 are fixed is not limited to the shelf 2.
  • FIG. 22 is a perspective view showing a schematic configuration of the tray 200 for storing the object 3 according to the present embodiment.
  • A two-dimensional code 53 is affixed to the outside of the front surface of the tray 200, and the plane mirror 4 is fixed to the inside of the back surface of the tray 200.
  • Information such as the contents of the tray 200 can be embedded in the two-dimensional code 53, using its pattern shape as an identification code.
  • Moreover, the posture of the code can be calculated from the image using the single two-dimensional code 53.
  • By storing in the object data storage unit 60 the shape data of the tray 200, including the two-dimensional code 53 and the fixed posture of the plane mirror 4, the posture of the plane mirror 4 fixed to the inside of the back surface of the tray 200 can be calculated from the two-dimensional code 53 affixed to the outer front surface of the tray 200, in the same way as the procedure of calculating the posture of the plane mirror 4 from the posture of the shelf 2 in the present embodiment. It goes without saying that, using the calculated posture of the plane mirror 4, the object 3 in the tray 200 can then be gripped by the gripper unit 13 without colliding with the wall surfaces of the tray 200. A sketch of single-code pose estimation is given below.
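A sketch of estimating the pose of a single two-dimensional code, assuming the older opencv-contrib ArUco API; the patent does not specify the code type, so the dictionary and physical side length are assumptions:

```python
import cv2
import numpy as np

def tray_code_pose(image, K, dist, code_side_m=0.05):
    """Camera-frame pose of the 2-D code on the tray front, or None."""
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    corners, ids, _ = cv2.aruco.detectMarkers(image, dictionary)
    if ids is None:
        return None
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(corners, code_side_m, K, dist)
    R, _ = cv2.Rodrigues(rvecs[0])
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, tvecs[0].ravel()
    return T   # compose with the stored tray shape data to get the mirror posture
```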
  • As described above, in the present embodiment, the markers 5 can be attached at places other than the plane mirror 4, and can be attached at places that are easy to photograph with the camera 14 fixed to the gripper unit 13.
  • FIG. 23 is a perspective view schematically showing the configuration of the gripper unit 13 of the mobile manipulator 1 according to the present embodiment.
  • The gripper unit 13 according to the present embodiment differs from that of Embodiment 1 in that a checker pattern marker 52 is further provided at the center between the two finger mechanisms 130 of the gripper unit 13.
  • FIG. 24 is a drawing schematically showing the method of calculating the posture of the plane mirror 4 in the mobile manipulator 1 according to the present embodiment.
  • In the present embodiment, the field of view V of the camera 14 fixed to the gripper unit 13 captures the mirror image reflected in the plane mirror 4.
  • Let m1 denote the point indicated by the posture of the checker pattern marker 52 detected in the mirror image of the plane mirror 4.
  • The plane containing the reflecting surface of the plane mirror 4 can be calculated as the perpendicular bisector plane between m1 and the posture of the checker pattern marker 52 calculated from the joint angles of the arm unit 12 and the arm lengths between the joints.
  • Information on the attachment position of the marker 52 on the manipulator is stored as data in the data storage unit 101 or the like. Therefore, the posture of the marker 52 can be uniquely calculated from data such as the angle of each joint of the arm unit 12 and the lengths of the arms connecting the joints, in the same way as the posture of the camera 14. A sketch is given below.
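A sketch of the perpendicular-bisector construction: given the marker position from the arm kinematics and the position m1 of its mirror image, the reflecting plane follows directly:

```python
import numpy as np

def mirror_plane(p_marker, p_virtual):
    """Plane n.x + d = 0 of the reflecting surface, as the perpendicular
    bisector of the marker position and its mirror image m1 (both in frame O)."""
    p_marker = np.asarray(p_marker, float)
    p_virtual = np.asarray(p_virtual, float)
    n = p_virtual - p_marker
    n = n / np.linalg.norm(n)                # plane normal
    midpoint = 0.5 * (p_marker + p_virtual)  # lies on the reflecting surface
    d = -float(np.dot(n, midpoint))
    return n, d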
  • FIG. 25 is a perspective view of the work space in the present embodiment.
  • By this method, the plane containing the reflecting surface of the plane mirror 4 can be calculated, but the extent of the mirror region of the plane mirror 4 cannot.
  • Therefore, in the present embodiment, as shown in FIG. 25, the plane mirror 4 is arranged so that, whenever the gripper unit 13 is in the work space, it covers the entire field of view of the camera 14 fixed to the gripper unit 13. With such a plane mirror 4 arranged, control is performed by regarding the whole captured image of the camera 14 as the mirror region image.
  • This modification is a variation of the embodiments described with reference to FIGS. 7 and 21, and is another example of the obstacle posture detection process.
  • FIG. 26 shows only the portion between steps S707 and S713 of FIGS. 7 and 21 that differs from those figures.
  • In this modification, the mirror image viewpoint is not calculated; instead, the posture in the work space is derived from the posture, in the mirror image space, of the obstacle reflected in the mirror region of the image.
  • FIG. 27 shows a conceptual diagram.
  • In FIG. 27, the object 3b is reflected in the mirror 4 as a virtual image 3b′. As in the embodiments of FIGS. 7 and 21, the mirror region in the image is cut out (step S710), and the virtual image 3b′ of the object 3b is detected by image processing similar to that of step S708. That is, the object 3b cannot be seen from the viewpoint C of the camera 14, but the virtual image 3b′ is detected from the mirror region image, and its posture is calculated by pattern matching or homography calculation (step S711).
  • In this case, the image in the mirror region is an image of the virtual image 3b′ of the object 3b viewed from the viewpoint C of the camera 14.
  • The posture of the object 3b is then obtained by mirror-image conversion of the posture of the virtual image 3b′ with respect to the mirror (step S712); this is the same reflection transform sketched earlier for the mirror image viewpoint.
  • At this time, the postures of the other objects 3a and of the gripper unit 13 may be calculated together. The calculation results may finally be converted into coordinates based on the origin O of the mobile manipulator 1 to unify the coordinate system.
  • Although the embodiments described above use a mobile manipulator, the present invention is not limited to this.
  • For example, the present invention can also be applied to a stationary manipulator whose arm unit is moved by a linear motion mechanism fixed at a predetermined location.
  • The invention can be used to control various mobile manipulators.

Abstract

Provided is a method for controlling the position of a mobile manipulator so as to avoid collisions with obstacles. The method is provided with a procedure for detecting the orientation of a mirror, and a procedure for detecting the orientation of an object from an image. The orientation of an object is detected from an image reflected in the mirror, and the manipulator is controlled on the basis of the detected orientation of the object.

Description

マニプレータ制御方法、システム、およびマニプレータManipulator control method, system, and manipulator
 本発明はマニプレータの構成および制御方法に関し、特に画像を用いた移動型マニプレータの構成および制御方法に関する。 The present invention relates to a configuration and control method of a manipulator, and more particularly to a configuration and control method of a mobile manipulator using an image.
 移動マニプレータは、倉庫や工場などの施設や家庭内などにおける人間の活動を代替するものとして、実用化が期待されている。 移動マニプレータは、一般に、前進・後退・旋回などにより自身を移動させる移動台車部と、移動台車部に取り付けられた1つ以上の関節を有するアーム部と、アーム部の先端に取り付けられたグリッパ部とを有する。グリッパ部は、特定の物体を把持したり、移動させたりすることができる。こうした、グリッパ部による物体の把持や移動を、物体に対する操作と呼称する。 Mobile manipulators are expected to be put to practical use as alternatives to human activities in facilities such as warehouses and factories and in homes. In general, a moving manipulator includes a moving carriage unit that moves itself by moving forward, backward, and turning, an arm unit that has one or more joints attached to the moving carriage unit, and a gripper unit that is attached to the tip of the arm unit. And have. The gripper unit can hold or move a specific object. Such gripping and movement of the object by the gripper unit is referred to as an operation on the object.
 工場などにおいて架台に固定された据え付け型マニプレータは、特定の作業空間において、決められた位置にある対象物に対して操作を行う。従って、周囲の障害物とマニプレータが衝突・干渉しないように対象物を操作する方法を、あらかじめ設計することができる。 A stationary manipulator fixed on a gantry in a factory or the like operates an object at a predetermined position in a specific work space. Accordingly, it is possible to design in advance a method for operating an object so that surrounding obstacles and the manipulator do not collide or interfere with each other.
 一方、移動マニプレータは、据え置き型マニプレータとは異なり、それ自身が動き回るため、アーム部やグリッパ部の作業空間を限定することはできない。すなわち、移動型マニプレータは、作業対象物と障害物とが雑然と配置された未知の空間が作業空間となる。そうした未知の作業空間において、移動マニプレータは、障害物と衝突・干渉が発生しないように、アーム部やグリッパ部や対象物を操作する必要がある。そのためには、マニプレータ自身、特に物体とのインタラクションが生じるアーム部やグリッパ部と、対象物や周囲の障害物の位置関係を取得することが必要となる。 On the other hand, unlike the stationary manipulator, the moving manipulator moves around and cannot limit the working space of the arm part and the gripper part. That is, in the mobile manipulator, an unknown space in which work objects and obstacles are cluttered is a work space. In such an unknown work space, the moving manipulator needs to operate the arm part, the gripper part, and the target object so as not to collide with or interfere with the obstacle. For this purpose, it is necessary to acquire the positional relationship between the manipulator itself, in particular, the arm part or gripper part that causes interaction with the object, and the target object or surrounding obstacles.
Conventionally, there are techniques that use a camera to acquire the positional relationship between a robot, including a mobile manipulator, and an object or the surrounding environment.
For example, Patent Document 1 discloses a technique in which a camera installed in the work space captures an image of the working robot (a monitoring image), and an operator performs operations while viewing a first-person image and the monitoring image.
Patent Document 2 discloses an invention in which a robot having a manipulator is provided with a first camera on its trunk and a second camera, separate from the first camera, on the manipulator, and the image of the first camera and the image of the second camera are synthesized. The region of the manipulator that appears in the first camera's image is complemented by the second camera; this removes the blind spot caused by the manipulator in the first camera's view, and the position of the object to be grasped is specified by the first camera.
Patent Document 3 discloses a technique comprising an identification surface provided in a work space, a reflecting mirror provided at a position facing the identification surface, and a camera that photographs the identification surface via the reflecting mirror; an intruder into the work space is detected based on the image of the identification surface captured by the camera.
Patent Document 1: JP 2001-062766 A
Patent Document 2: JP 2010-131751 A
Patent Document 3: JP 2013-052485 A
However, with the technique described in Patent Document 1, the monitoring camera must be installed so that there is no blind spot in the work space. When the object is placed in a location such as a shelf, a blind spot inevitably arises.
With the technique described in Patent Document 2, even using the arm camera, the blind spot behind the object cannot be photographed. To photograph the area hidden behind the object, the arm camera must be moved around to the back of the object, and if there is an obstacle on the path along which the arm moves, the manipulator will collide with the obstacle.
The technique described in Patent Document 3, meanwhile, presupposes that the position of the reflecting mirror and the position of the camera relative to it are fixed in advance in a predetermined positional relationship within the work space. If the position of the camera or the reflecting mirror changes, the identification surface can no longer be photographed; that is, whenever the work space is changed, the reflecting mirror and the camera must be re-installed in the predetermined positional relationship. The technique therefore cannot be applied to systems whose work area changes arbitrarily, such as a moving manipulator.
In view of the above, the problem to be solved by the present invention is to provide a method for photographing, without blind spots, the positional relationship between the gripper unit and the object as well as the surrounding environment in the work space of a mobile manipulator, and for controlling the position of the gripper unit on that basis so that it does not collide with obstacles.
A representative one of the inventions disclosed in the present application comprises a procedure for detecting the posture of a mirror from an image and a procedure for detecting the posture of an object from an image; the posture of an object reflected in the mirror whose posture has been detected is itself detected, and a manipulator is controlled based on the detected posture of the object.
Another aspect of the present invention is a manipulator control method for operating a movable manipulator in a work space in which an object is arranged. In this method, a mirror placed in or near the work space is imaged by a camera mounted on the manipulator, and the posture of the mirror is detected based on the posture of the camera and the image captured by the camera. A mirror region is then detected from the captured image, and an image of at least part of the object is detected from the detected mirror region. The posture of the object in the work space is calculated based on the posture of the mirror and the image of the object, and the manipulator is operated based on the calculated posture information of the object.
Various methods are conceivable for calculating the posture of the object in the work space from the posture of the mirror and the image of the object. In one example, the mirror-image viewpoint of the camera is calculated from the posture of the camera and the posture of the mirror, and the posture of the object in the work space is calculated from the mirror-image viewpoint of the camera and the image of the object reflected in the mirror. In another example, the posture of the virtual image of the object in the mirror-image space is calculated from the posture of the camera and the image of the object reflected in the mirror, and the posture of the object in the work space is calculated from the posture of the virtual image of the object and the posture of the mirror.
In this way, by using the image reflected in the mirror, information that cannot be photographed from the camera's own position can be obtained and used for manipulator control.
As a more specific example, the object is placed on a shelf; the manipulator is moved to the vicinity of a predetermined shelf based on shelf arrangement information, and the postures of the camera, the mirror, and the object are calculated with reference to an origin fixed to the manipulator. Since the manipulator itself moves, control is facilitated by unifying, during work, the coordinate system on axes based on the manipulator.
The images reflected in the mirror are not limited to the operation target; images of the manipulator itself or of other objects may also be used. Furthermore, more information can be obtained by integrating the image reflected in the mirror with the image of the region outside the mirror photographed directly by the camera.
As a method for detecting the posture of the mirror, for example, markers can be arranged on the mirror or in the work space, and the posture of the mirror can be detected based on the marker information in the image captured by the camera.
Another aspect of the present invention is a control system that operates a movable manipulator in a work space in which an object is arranged. In this system, the manipulator has a moving mechanism that moves the manipulator, an operation mechanism that operates the object, a camera for imaging the work space, a camera control mechanism that changes the posture of the camera, and a control unit. The control unit has a function of detecting the posture of the camera and a function of detecting the posture of a mirror in the captured image based on the posture of the camera and the image captured by the camera. It further has a function of detecting the mirror region from the captured image and detecting an image of at least part of the object from the detected mirror region, and a function of calculating the posture of the object in the work space based on the posture of the mirror and the image of the object. The system then operates the operation mechanism based on the calculated posture information of the object.
As a configuration example of the operation mechanism, one may consider an arm unit connected by at least one joint mechanism and a gripper unit held by the arm unit and capable of grasping and moving the object. The camera can be arranged on the gripper unit.
As one example of detecting the posture of the camera, the control unit controls the operation of the operation mechanism and stores geometric data on the shape of the operation mechanism, and the posture of the camera can be detected using this geometric data.
In a preferred embodiment, the work space in which the object is arranged can be defined with reference to the shelf on which the object is placed, and the mirror can be arranged on the shelf.
In a further preferred embodiment, the manipulator is configured to be movable to the vicinity of an arbitrary shelf among a plurality of shelves. The control unit stores shelf arrangement information on the arrangement of the plurality of shelves, shelf shape information on the shape of the shelves, and object information on the objects placed on each shelf. Based on the shelf arrangement information and the object information, the control unit moves the manipulator to the vicinity of a predetermined shelf and images the work space with the camera.
As a specific example for making the posture of the mirror easier to detect, markers are arranged in the work space, the relative positional relationship between the markers and the mirror is stored in the control unit, and the posture of the mirror in the captured image can be detected based on the markers in the image captured by the camera.
Still another aspect of the present invention is a manipulator operated in a work space in which an object is arranged. This manipulator has a moving mechanism that moves the manipulator, a gripper unit that operates the object, an arm unit that moves the gripper unit, a camera for imaging the work space, a camera control mechanism that changes the posture of the camera, and a control unit. The control unit has a function of detecting the posture of the camera; a function of detecting the posture of a mirror in the captured image based on the posture of the camera and the image captured by the camera; a function of detecting an image of at least part of the object from the captured image; and a function of calculating the posture of the object in the work space based on the posture of the mirror and the image of the object. The gripper unit is operated based on the calculated posture information of the object. As a more specific configuration example, the manipulator has a marker arranged on the gripper unit, and in the function of detecting the posture of the mirror, the posture of the mirror in the captured image may be detected using information on the posture of the marker and the image of the marker in the image captured by the camera.
In the above, the control unit may be configured as a single computer, or arbitrary parts of the input device, output device, processing device, and storage device may be configured as other computers connected via a network. Specifically, the control unit may be a computer mounted on the manipulator, or some or all of its functions may be placed on a server that can communicate with the manipulator. The functions of the control unit can be realized by software executed on the processing device; functions equivalent to those configured in software can also be realized by hardware such as an FPGA (Field Programmable Gate Array) or an ASIC (Application Specific Integrated Circuit). Such aspects are also included in the scope of the present invention.
According to the present invention, a mobile manipulator can be controlled so as not to collide with obstacles in its work space. Problems, configurations, and effects other than those described above will be clarified by the following description of the embodiments.
Brief description of the drawings:
FIG. 1 is a perspective view schematically showing a picking operation of an object by a mobile manipulator.
FIG. 2 is a perspective view schematically showing an example of the overall configuration of the mobile manipulator according to Embodiment 1.
FIG. 3 is a side view showing an example of the configuration of the gripper unit of the mobile manipulator according to Embodiment 1.
FIG. 4 is a perspective view schematically showing an example of the configuration of the shelf according to Embodiment 1.
FIG. 5 is a front view schematically showing an example of the configuration of the plane mirror according to Embodiment 1.
FIG. 6 is a block diagram showing an example of the configuration of the control system of the mobile manipulator according to Embodiment 1.
FIG. 7 is a flowchart showing an example of the operation procedure of the mobile manipulator according to Embodiment 1.
FIG. 8 is a side view schematically showing an example of the work space seen from the side in step S702 of FIG. 7.
FIG. 9 is an image diagram showing an example of the captured image 141 taken by the camera 14 in FIG. 8.
FIG. 10 is a side view schematically showing another example of the work space seen from the side in step S702 of FIG. 7.
FIG. 11 is an image diagram showing an example of the captured image 142 taken by the camera 14 in FIG. 10.
FIG. 12 is a principle diagram schematically showing a method of calculating the posture of the plane mirror according to Embodiment 1.
FIG. 13 is a side view explaining the mirror-image viewpoint C' of the camera with respect to the plane mirror according to Embodiment 1.
FIG. 14 is a side view explaining the mirror-image viewpoint C'' of the camera with respect to the plane mirror according to Embodiment 1.
FIG. 15 is a side view showing another example of the configuration of the gripper unit of the mobile manipulator according to Embodiment 1.
FIG. 16 is a front view schematically showing another example of the configuration of the plane mirror according to Embodiment 1.
FIG. 17 is a diagram showing the configuration of the checker-pattern marker according to Embodiment 1.
FIG. 18 is a schematic diagram showing an example of the configuration of the convex mirror according to Embodiment 1.
FIG. 19 is a perspective view showing another example of the configuration of the shelf according to Embodiment 1.
FIG. 20 is a perspective view schematically showing the configuration of the shelf according to Embodiment 2.
FIG. 21 is a flowchart showing an example of the operation procedure of the mobile manipulator according to Embodiment 2.
FIG. 22 is a schematic diagram showing the configuration of the tray according to Embodiment 2.
FIG. 23 is a perspective view showing an example of the configuration of the gripper unit of the mobile manipulator according to Embodiment 3.
FIG. 24 is a side view showing an example of a method of calculating the reflecting surface of the plane mirror according to Embodiment 3.
FIG. 25 is a perspective view schematically showing the configuration of the shelf according to Embodiment 3.
FIG. 26 is a flowchart showing part of an example of the operation procedure of the mobile manipulator according to Embodiment 4.
FIG. 27 is a side view showing an example of a method of calculating the reflecting surface of the plane mirror according to Embodiment 4.
Hereinafter, embodiments of the present invention will be described with reference to the accompanying drawings. In all the drawings, the same or corresponding members are denoted by the same reference numerals even across different embodiments, and common descriptions are not repeated. The present invention is not to be construed as being limited to the descriptions of the embodiments below; those skilled in the art will readily understand that the specific configurations can be changed without departing from the spirit of the present invention.
In this specification and elsewhere, notations such as "first", "second", and "third" are attached to identify constituent elements and do not necessarily limit number or order. Numbers for identifying constituent elements are used per context, and a number used in one context does not necessarily indicate the same configuration in another context. A constituent element identified by one number is also not precluded from serving the function of a constituent element identified by another number.
The position, size, shape, range, and the like of each component shown in the drawings may not represent the actual position, size, shape, range, and the like, in order to facilitate understanding of the invention. The present invention is therefore not necessarily limited to the positions, sizes, shapes, ranges, and the like disclosed in the drawings.
FIG. 1 shows a picking operation of an object by a mobile manipulator. In Embodiment 1 of the present invention, as shown in FIG. 1, an operation in which the mobile manipulator 1 grasps and lifts an object 3 placed on a shelf 2 will be described. Hereinafter, the series of operations for grasping and lifting the object 3 by the mobile manipulator 1 is referred to as a picking operation on the object 3.
(Configuration)
FIG. 2 is a schematic perspective view showing an example of the overall configuration of the mobile manipulator 1 according to Embodiment 1. In FIG. 2, the mobile manipulator 1 comprises a control unit 10, a mobile carriage unit 11, an arm unit 12, a gripper unit 13 fixed to the tip of the arm unit 12, and a camera 14 provided at a predetermined location on the gripper unit 13.
The control unit 10 receives instructions from a host control unit 6 (described later) by wire or wirelessly and controls the operation of the mobile manipulator 1. Here, controlling the operation includes not only controlling each part of the mobile manipulator 1 based on those instructions, but also intelligent processing such as planning the operation of the mobile manipulator 1 based on information from the camera 14 and other sources.
The mobile carriage unit 11 has one or more moving mechanisms such as wheels 110 and, based on operation commands from the control unit 10, moves the mobile manipulator 1 to an arbitrary location on flat ground by moving forward, backward, turning, and the like. Here, flat ground includes not only a simple plane but also slopes and small steps. Hereinafter, movement of the mobile manipulator 1 over flat ground by the mobile carriage unit 11 is referred to as traveling of the mobile manipulator 1.
The arm unit 12 is fixed to a predetermined location on the mobile carriage unit 11. The arm unit 12 has one or more joint mechanisms and, based on operation commands from the control unit 10, moves the gripper unit 13 to a predetermined position and direction within a predetermined three-dimensional space. Hereinafter, the position and direction of the gripper unit 13 are referred to as the posture of the gripper unit 13. In FIG. 2, the arm unit 12 has a vertical articulated mechanism in which a plurality of joint mechanisms are connected by arm segments, but any configuration capable of moving the gripper unit 13 to a predetermined posture may be used; for example, a horizontal articulated mechanism, an orthogonal articulated mechanism, a parallel link mechanism, or a combination thereof.
The gripper unit 13 has a function of grasping a predetermined object. Although FIG. 2 shows a configuration in which the gripper unit 13 grasps an object by pinching it with two finger mechanisms 130, the configuration is not limited to this; for example, a configuration with three or more finger mechanisms, or one that grasps the object with a vacuum suction mechanism or a magnet instead of finger mechanisms, may be used.
FIG. 3 is a schematic side view of the tip of the arm unit 12, the gripper unit 13, and the camera 14 in FIG. 2. In FIG. 3, the camera 14 is fixed offset at a predetermined angle with respect to the extension direction of the finger mechanisms 130 of the gripper unit 13; that is, it is fixed so that the lower edge of the field of view V of the camera 14 is parallel to the extension direction of the fingers. This prevents the finger mechanisms 130 from appearing in the field of view V.
The arm unit 12 is provided with a joint mechanism 121 that can orient the gripper unit 13 at an arbitrary angle along the arrow P in FIG. 3, and likewise with a joint mechanism 122 that can orient it at an arbitrary angle along the arrow R in FIG. 3. Further, by means of other joint mechanisms of the arm unit 12 not shown, the arm unit 12 can orient the gripper unit 13 in an arbitrary direction along the arrow Y in FIG. 3. With these, the arm unit 12 can point the gripper unit 13 in an arbitrary direction. The configuration shown in FIG. 3 is only an example and can be realized by combining various known joint mechanisms.
FIG. 4 is a schematic perspective view showing the overall configuration of the shelf 2 according to Embodiment 1 of the present invention. The shelf 2 comprises one shelf board 21 and one shelf top board 22 supported by four columns 20. A plurality of picking objects 3 are placed on the shelf board 21.
Here, a work space is defined for the mobile manipulator 1 according to Embodiment 1. The work space is a predetermined space in which one or more objects 3 to be picked by the mobile manipulator 1 are placed. In FIG. 4, the work space of the mobile manipulator 1 is defined as the three-dimensional rectangular parallelepiped space surrounded on four sides by the columns 20 and extending from the upper surface of the shelf board 21 to the lower surface of the shelf top board 22.
In the work space according to Embodiment 1, a plane mirror 4 is arranged at a predetermined position. In FIG. 4, the plane mirror 4 is fixed to a predetermined location on the shelf top board 22 at a predetermined angle, with its reflecting surface facing the shelf board 21. The presence of the objects 3 and the plane mirror 4 is not strictly restricted to the inside of the work space; part of an object 3 or the plane mirror 4 may protrude from the work space, or the whole of an object 3 or the plane mirror 4 may be arranged in the vicinity of the work space.
FIG. 5 schematically shows the reflecting surface of the plane mirror 4 viewed from the front. The plane mirror 4 can be made of glass, metal, plastic, or the like, and reflects at least the wavelengths of the region that the camera 14 can image. In FIG. 5, markers 5 of a predetermined shape, all identical, are arranged at the four corners of the reflecting surface of the plane mirror 4. The shape of the markers 5 may be any shape such that their presence and the number photographed can be detected from an image taken by the camera 14 through predetermined image processing, and such that the center coordinates of each marker 5 in the image can be calculated in that image processing. The markers 5 can be formed by adding protrusions to the surface or interior of the plane mirror, by printing, engraving, applying stickers, or the like. The positional relationship of the markers 5 arranged on the plane mirror 4 is stored in advance in the control unit 10 or the host control unit 6 (described later).
FIG. 6 shows an example of the configuration of the control system of the mobile manipulator according to Embodiment 1. The control unit 10 of the mobile manipulator 1 includes an overall control unit 100 that controls the entire manipulator 1, a data storage unit 101 that stores data necessary for control, a mobile carriage control unit 102 that controls the mobile carriage unit 11, an arm control unit 103 that controls the arm unit 12, a gripper control unit 104 that controls the gripper unit 13, and an image processing unit 105 that processes images obtained by the camera 14. The control unit 10 of the mobile manipulator 1 is configured to communicate with the host control unit 6 by wire or wirelessly.
The host control unit 6 includes a data storage unit 60 that stores information such as map data 601, shelf shape data 602, and object data 603. The host control unit 6 may be built into the housing of the mobile manipulator 1 or may be external to it. Since the host control unit 6 and the control unit 10 only need to cooperate in information processing to control the mobile manipulator 1, they may be configured as a single computer, or arbitrary parts of the input device, output device, processing device, and storage device may be configured as a plurality of computers connected by a wired or wireless network.
Here, the map data 601 describes, within the space in which the mobile carriage unit 11 of the mobile manipulator 1 can travel (hereinafter, the travel space), the points at which shelves 2 are located. When a plurality of shelves 2 are arranged in the travel space, an individual identification code is assigned to each shelf 2, and identification codes and points are stored as pairs. The shelf shape data 602 describes information such as the width, depth, and height of the shelf board, referenced to a predetermined part of the shelf 2. The object data 603 stores information such as the type or identification number of each object 3 to be picked and the identification code of the shelf board on which that object 3 is placed; it may further include characteristic data such as the shape, weight, and material of the object 3. A sketch of one possible data layout is given below.
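Purely as an illustration of how the map data 601, shelf shape data 602, and object data 603 might be held, the Python records below sketch one possible layout; every field name and unit is a hypothetical choice, since the embodiment does not prescribe a data format.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class MapEntry:                      # map data 601: where a shelf 2 stands in the travel space
    shelf_id: str                    # identification code of the shelf
    position_xy: Tuple[float, float] # point of the shelf in travel-space coordinates
    heading_rad: float               # approach heading for the mobile carriage unit 11

@dataclass
class ShelfShape:                    # shelf shape data 602, referenced to a part of shelf 2
    width_m: float
    depth_m: float
    height_m: float
    mirror_w_m: float = 0.0          # side lengths W, H of the plane mirror 4, if present
    mirror_h_m: float = 0.0

@dataclass
class ObjectRecord:                  # object data 603
    object_id: str                   # type or identification number of the object 3
    shelf_id: str                    # identification code of the shelf it is displayed on
    template_path: str               # stored appearance used for pattern matching (S708)
    size_m: Tuple[float, float, float] = (0.0, 0.0, 0.0)  # optional shape data
```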
That is, by combining the map data 601, the shelf shape data 602, and the object data 603, the host control unit 6 can identify where in the travel space the work space containing the object 3 to be picked is located, and on that basis it can transmit a picking command to the control unit 10 of the mobile manipulator 1.
On the other hand, the host control unit 6 cannot know in what posture (position and orientation; for example, the position may be that of the center of gravity of a bottle that is the object, and the orientation whether the bottle is standing or lying down) the object 3 to be picked is displayed in the work space. This is because the displayed objects 3 are picked by the mobile manipulator 1 or by human workers in the travel space, so their postures may change at any time.
Therefore, the mobile manipulator 1 needs to confirm the posture of the object 3 in the work space before picking it.
(Operation)
Next, the procedure of the picking operation on the object 3 by the mobile manipulator according to Embodiment 1 will be described with reference to FIGS. 7 to 13.
FIG. 7 shows the overall outline of the picking operation procedure in Embodiment 1. Each step in FIG. 7 may be executed by a predetermined program built into the control unit 10 of the mobile manipulator 1, or may be executed by the host control unit 6, which then transmits commands to the control unit 10 either step by step or all at once.
The flow of FIG. 7 is started, for example, when an operator inputs to the host control unit 6 an instruction specifying the object 3 and requesting picking (START).
In step S701, the mobile manipulator 1 travels to the front of the shelf 2 on which the object 3 to be picked is displayed. As described above, the host control unit 6 stores, for the travel space of the mobile manipulator 1, the point of the shelf 2 on which the object 3 is displayed, that is, the point of the work space. The host control unit 6 transmits to the control unit 10 a picking command containing information indicating that point, and the mobile manipulator 1 controls the mobile carriage unit 11 to travel to the specified point. For this traveling, existing techniques may be used; for example, a magnetic cable indicating the travel route may be embedded in the travel space in advance and followed, or the manipulator may move autonomously using the map data 601.
In step S702, after traveling to the front of the shelf 2, the mobile manipulator 1 changes the posture of the gripper unit 13 so as to move the camera 14 to face the work space from the front (opening) of the shelf, and captures an image. From the work-space point information and the shelf shape data 602 stored in advance in the host control unit 6, the posture of the shelf board 21 in the work space relative to the center of the mobile manipulator 1 is known (the center can be defined arbitrarily; for simplicity, it is hereinafter called the origin of the mobile manipulator 1). The mobile manipulator 1 can therefore control the arm unit 12 and the gripper unit 13 so that the camera 14 assumes a predetermined posture (position and orientation) defined relative to the position of the shelf board 21. For example, the center of the mobile manipulator 1 is positioned at a predetermined distance from the shelf 2, and the arm unit 12 and the gripper unit 13 are positioned in a posture set according to the posture of the shelf board 21.
FIG. 8 schematically shows the work space viewed from the side in step S702.
In FIG. 8, on the shelf board 21, in addition to the object 3a that is the target of the picking operation, an object 3b that is not the picking target is located on the far side of the object 3a as seen from the gripper unit 13. Therefore, when picking the object 3a with the gripper unit 13, the finger mechanisms 130 of the gripper unit 13 must be controlled so as not to interfere with the object 3b.
FIG. 9 shows an example of the captured image 141 taken by the camera 14 in step S702.
As shown in FIG. 9, in the captured image 141 the object 3b appears in the shadow of the object 3a. It is generally difficult to accurately identify the posture of the object 3b from such an image, in which part of the object is occluded. Furthermore, if the object 3b is completely hidden by the object 3a as seen from the gripper unit 13, its presence cannot be detected at all from the situation of FIG. 8.
Returning to FIG. 7, the description continues.
In step S703, detection processing for the markers 5 at the four corners of the plane mirror 4 is performed on the captured image 141 (FIG. 9). The shape and number of the markers 5 are stored in advance, for example as part of the shelf shape data 602. Since the shape and number of the markers 5 are known, the number of markers 5 appearing in the captured image 141 and their pixel coordinate values can be identified by known template matching processing. The markers 5 may be omitted and the corners of the plane mirror 4 extracted instead by image processing or the like, but preparing and using markers reduces the image-processing burden. The image processing and calculations from step S703 onward are performed by the image processing unit 105, or may be shared among the image processing unit 105, the overall control unit 100, and the host control unit 6. A sketch of the marker search follows.
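A minimal sketch of the marker search in step S703 using OpenCV template matching; the match threshold, the suppression radius, and the helper name detect_markers are all assumptions, and marker_template stands in for the stored marker appearance.

```python
import cv2
import numpy as np

def detect_markers(image_gray, marker_template, threshold=0.8):
    """Return pixel coordinates of marker centers found by template matching."""
    scores = cv2.matchTemplate(image_gray, marker_template, cv2.TM_CCOEFF_NORMED)
    th, tw = marker_template.shape[:2]
    centers = []
    ys, xs = np.where(scores >= threshold)            # candidate top-left corners
    for x, y in sorted(zip(xs, ys), key=lambda c: -scores[c[1], c[0]]):
        c = (x + tw / 2.0, y + th / 2.0)
        # crude non-maximum suppression: keep only one hit per marker
        if all(np.hypot(c[0] - u, c[1] - v) > max(tw, th) for u, v in centers):
            centers.append(c)
    return centers                                    # step S703 expects exactly four
```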
If the four markers 5 are not detected in the image in step S703, the posture of the gripper unit 13 is changed (step S704); for example, the joint mechanism 121 of the arm unit 12 is rotated by a predetermined angle in the direction of the arrow P in FIG. 3. After the posture change, the work space is photographed again by the camera 14 (step S702) and the marker detection processing (step S703) is performed again.
The processing of steps S702 to S704 is repeated until the known number of markers (for example, four) is detected in the captured image of the camera 14. At the stage of step S701, it is desirable to position the shelf 2 and the mobile manipulator 1 in an appropriate relationship such that all the markers fall within the angle of view of the camera 14; in that case the markers should be detected in one or a small number of trials. If the markers cannot be detected within a predetermined number of attempts, the operator can be notified, for example by issuing an error signal or a warning. This retry logic is sketched after this paragraph.
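The retry behavior of steps S702 to S704 amounts to the loop below; capture, detect_markers, and tilt_gripper are hypothetical wrappers around the camera, the marker search above, and the joint command for the arrow-P rotation, and the retry limit and step angle are assumed values.

```python
MAX_TRIES = 5        # assumed limit before raising the error/warning mentioned above
STEP_RAD = 0.05      # assumed increment for re-aiming the camera (arrow P in FIG. 3)

def find_all_markers(capture, detect_markers, tilt_gripper, n_expected=4):
    for _ in range(MAX_TRIES):
        image = capture()                   # step S702: photograph the work space
        markers = detect_markers(image)     # step S703: marker detection
        if len(markers) == n_expected:
            return markers
        tilt_gripper(STEP_RAD)              # step S704: change the gripper posture
    raise RuntimeError("markers not found: notify the operator")
```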
FIG. 10 schematically shows the work space viewed from the side when the four markers 5 have been detected in the captured image of the camera 14; the mirror 4 is within the angle of view of the camera 14.
FIG. 11 shows an example of the image captured by the camera 14 in FIG. 10.
In step S705, the posture of the plane mirror 4 in the work space is calculated based on the image captured by the camera 14 (FIG. 11). An example of the calculation method is shown in FIG. 12.
In FIG. 12, let O be the origin of the mobile manipulator 1, C the viewpoint of the camera 14, and M the center point of the plane mirror 4. For simplicity, in FIG. 12 the plane mirror 4 is drawn as a rectangle with no thickness, and the positions of the markers 5 are taken to be the vertices of that rectangle.
In FIG. 12, the posture of the viewpoint C of the camera 14 relative to the origin O of the mobile manipulator 1 (information indicating the position and inclination of the camera; for example, the position of the center point of the camera lens and the direction of the camera's optical axis can be used) can be uniquely calculated from the angles of the joints of the arm unit 12 (not shown in FIG. 12) and the lengths of the arm segments connecting the joints. Information about the mobile manipulator 1, such as the joint angles of the arm unit 12 and the segment lengths, is stored in advance, for example in the data storage unit 101 of the control unit 10. Let ${}^{O}T_{C}$ denote the homogeneous transformation matrix, calculated in this way, representing the posture of the viewpoint C of the camera 14 relative to the origin O of the mobile manipulator 1. In FIG. 12, the image captured from the viewpoint C of the camera 14 (FIG. 11) is shown as the captured image plane P.
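As a sketch of how ${}^{O}T_{C}$ can be computed from the stored joint angles and arm-segment lengths, the forward kinematics below chains one homogeneous transform per joint; the planar joint layout and the fixed camera-mount transform are simplifying assumptions, not the actual arm of FIG. 2.

```python
import numpy as np

def rot_z(theta):
    """4x4 homogeneous rotation about the joint axis (taken here as z)."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    return T

def trans(x, y=0.0, z=0.0):
    """4x4 homogeneous translation along an arm segment."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def camera_pose(joint_angles, link_lengths, T_mount):
    """^O T_C: chain joint rotations and segment offsets, then the camera mount."""
    T = np.eye(4)
    for theta, length in zip(joint_angles, link_lengths):
        T = T @ rot_z(theta) @ trans(length)
    return T @ T_mount   # fixed, offset mounting of the camera 14 on the gripper 13
```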
On the other hand, the mutual positional relationship of the markers 5 on the plane mirror 4, that is, the side lengths H and W of the plane mirror 4, is stored for example as part of the shelf shape data 602 and is therefore known. The intersections of the four straight lines connecting the markers 5 to the viewpoint C of the camera 14 with the captured image plane P are the marker coordinates detected in step S703. By estimating a homography matrix from the coordinates of the markers 5 on the captured image plane P and the side lengths W and H of the mirror 4, the posture of the center point M of the plane mirror 4 relative to the viewpoint C of the camera 14 (information indicating the position and inclination of the mirror; for example, the position of the mirror's center point and the direction its surface faces can be used) can be calculated. Let ${}^{C}T_{M}$ denote the resulting homogeneous transformation matrix representing the posture of the center point M of the plane mirror 4 relative to the viewpoint C.
Then, by multiplying the two calculated homogeneous transformation matrices ${}^{O}T_{C}$ and ${}^{C}T_{M}$, the homogeneous transformation matrix ${}^{O}T_{M} = {}^{O}T_{C}\,{}^{C}T_{M}$, representing the posture of the center point of the plane mirror 4 relative to the origin O of the mobile manipulator 1, is obtained. The matrix ${}^{O}T_{M}$ corresponds to the posture of the plane mirror 4 in the work space (the real space). One concrete way to carry out this step is sketched below.
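One concrete way to realize steps S705 and S706 is OpenCV's solvePnP, which recovers ${}^{C}T_{M}$ from the four marker pixel coordinates and the known side lengths W and H, after which the product with ${}^{O}T_{C}$ gives ${}^{O}T_{M}$; the camera intrinsic matrix K and the zero-distortion assumption are inputs the embodiment does not spell out.

```python
import cv2
import numpy as np

def mirror_pose(marker_px, W, H, K, T_O_C):
    """Return ^O T_M from the four marker centers detected in step S703."""
    # 3-D marker positions in the mirror frame M: corners of a W x H rectangle
    obj = np.array([[-W / 2,  H / 2, 0], [ W / 2,  H / 2, 0],
                    [ W / 2, -H / 2, 0], [-W / 2, -H / 2, 0]], dtype=np.float64)
    img = np.asarray(marker_px, dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(obj, img, K, None)   # pose of M in the camera frame
    if not ok:
        raise RuntimeError("mirror pose estimation failed")
    T_C_M = np.eye(4)
    T_C_M[:3, :3], _ = cv2.Rodrigues(rvec)
    T_C_M[:3, 3] = tvec.ravel()
    return T_O_C @ T_C_M                               # ^O T_M = ^O T_C . ^C T_M
```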
In this way, the posture of the plane mirror 4 in the work space is derived from the image of the plane mirror 4 captured by the camera 14. Calculating the posture of the plane mirror 4 from the markers 5 arranged at its four corners yields an accurate posture. The calculated posture of the plane mirror 4 is stored in the control unit 10 or the host control unit 6 (step S706).
In step S707, the work space is photographed again using the camera 14. Here the same posture as in step S702 is used, but any other posture may be used as long as the work space is photographed. The captured image is assumed to be an image 141 similar to FIG. 9. On this image 141, two image-processing paths are executed, in parallel or in series: the processing of step S708, and the processing of steps S709 to S711.
In step S708, the object 3a, which is the picking target, is detected from the image 141 photographed in step S707, and its posture is calculated. For the detection of the object 3a, a known image-recognition algorithm may be applied, such as pattern matching against a previously stored image of the object 3 (stored, for example, as part of the object data 603). The posture of the detected object may be calculated, for example, by storing the size of the object in advance and comparing it with the size appearing in the captured image 141, or by computing a homography. These image-processing operations can be carried out, for example, in coordinates with the viewpoint C of the camera 14 as the origin, and finally converted into coordinates based on the origin O of the mobile manipulator 1 so as to unify the coordinate system. At the same time, the postures of the other object 3b and of the gripper unit 13 may also be calculated. One concrete realization is sketched below.
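For illustration, step S708's detection and homography-based pose calculation could be realized with feature matching as below; ORB features, the RANSAC threshold, and the intrinsic matrix K are assumptions standing in for whichever known image-recognition algorithm is actually used.

```python
import cv2
import numpy as np

def object_pose_candidates(template_gray, image_gray, K):
    """Detect the object by matching its stored template, then decompose the homography."""
    orb = cv2.ORB_create()
    kp_t, des_t = orb.detectAndCompute(template_gray, None)
    kp_i, des_i = orb.detectAndCompute(image_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_t, des_i)
    if len(matches) < 4:
        return None                       # object not found in this image
    src = np.float32([kp_t[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_i[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    # candidate rotations/translations of the (assumed planar) object face
    _, rotations, translations, _ = cv2.decomposeHomographyMat(H, K)
    return rotations, translations
```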
In step S709, the point C' that is plane-symmetric to the viewpoint C of the camera 14 with respect to the reflecting surface M of the plane mirror 4 is calculated. In the following description, this viewpoint C' is referred to as the mirror-image viewpoint C' of the camera 14.
FIG. 13 is a side view explaining the mirror-image viewpoint C' of the camera with respect to the plane mirror. As understood from FIG. 13, the posture C' that is plane-symmetric to the viewpoint C of the camera 14 with respect to the reflecting surface M of the plane mirror 4 is the mirror-image viewpoint C' of the camera 14. That is, if the posture of the camera 14 and the posture of the mirror 4 are known, the posture of the mirror-image viewpoint C' can be calculated, and the object reflected in the mirror 4 as imaged by the camera 14 corresponds to the image of the object seen from the mirror-image viewpoint C'. Here, the posture of the mirror 4 was stored in step S706 and can be expressed, for example, in coordinates based on the origin O of the mobile manipulator 1; the posture of the camera 14 can be calculated in the same coordinates from the data on the mobile manipulator 1 stored in the data storage unit 101 as described above. A reflection sketch follows.
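A sketch of the reflection in step S709: reading the mirror's unit normal and a point on its surface off ${}^{O}T_{M}$, the mirror-image viewpoint C' is obtained by one Householder-style reflection; taking the mirror frame's z-axis as the surface normal is an assumption about how the frame M is defined.

```python
import numpy as np

def reflect_pose_across_mirror(T_O_C, T_O_M):
    """Return ^O T_{C'}: the camera pose mirrored in the plane of the mirror 4."""
    n = T_O_M[:3, 2]                       # assumed: z-axis of frame M is the mirror normal
    p = T_O_M[:3, 3]                       # the mirror center M lies on the plane
    d = -float(np.dot(n, p))               # plane equation: n . x + d = 0
    S = np.eye(4)                          # reflection as a 4x4 homogeneous transform
    S[:3, :3] = np.eye(3) - 2.0 * np.outer(n, n)
    S[:3, 3] = -2.0 * d * n
    return S @ T_O_C                       # note: the rotation part has det -1 (left-handed)
```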
In step S710, the region of the plane mirror 4 in the captured image 141, that is, the mirror region image, is cut out. An example of this cutting-out process is described with reference again to FIG. 12.
First, the homogeneous transformation matrix ${}^{O}T_{M}$, representing the posture of the plane mirror 4 relative to the origin O of the mobile manipulator 1, was stored in step S706 (it is assumed here that the mobile manipulator 1 has not moved since step S701). The homogeneous transformation matrix ${}^{O}T_{C}$, representing the posture of the viewpoint C of the camera 14 relative to the origin O, can be calculated from the joint angles of the arm unit 12 and the lengths of the arm segments connecting the joints. From these matrices ${}^{O}T_{M}$ and ${}^{O}T_{C}$, the homogeneous transformation matrix ${}^{C}T_{M}$, representing the posture of the plane mirror 4 relative to the viewpoint C of the camera 14 in step S710 (FIG. 13), can be calculated.
Here, when the side lengths H and W of the plane mirror 4 are known, a homography matrix can be calculated from ${}^{C}T_{M}$, and the image m of the plane mirror 4 on the imaging plane p of the camera 14 can be obtained. This image m is the mirror region image; by cutting out only the region of the image m from the captured image 141, the mirror region image of the plane mirror 4 is extracted. The method of cutting out the image m is not limited to the above; for example, the markers at the four corners of the mirror 4 may be detected and the rectangular region enclosed by them taken as the image m. A cropping sketch follows.
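The cropping of step S710 can be sketched by projecting the four mirror corners through the camera model and masking the enclosed quadrilateral; as before, the intrinsic matrix K and zero distortion are assumptions.

```python
import cv2
import numpy as np

def crop_mirror_region(image, T_C_M, W, H, K):
    """Keep only the mirror region image m of the plane mirror 4 in the captured image."""
    corners_M = np.array([[-W / 2,  H / 2, 0], [ W / 2,  H / 2, 0],
                          [ W / 2, -H / 2, 0], [-W / 2, -H / 2, 0]], dtype=np.float64)
    rvec, _ = cv2.Rodrigues(T_C_M[:3, :3])
    px, _ = cv2.projectPoints(corners_M, rvec, T_C_M[:3, 3], K, None)
    quad = px.reshape(-1, 2).astype(np.int32)          # mirror corners on image plane p
    mask = np.zeros(image.shape[:2], dtype=np.uint8)
    cv2.fillConvexPoly(mask, quad, 255)                # the region of the image m
    return cv2.bitwise_and(image, image, mask=mask)
```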
The mirror region image cut out in this way is an image of the field of view V' from the mirror-image viewpoint C' of the camera 14 in FIG. 13. As is clear from FIG. 13, in the mirror region image the object 3b is imaged without being hidden behind the object 3a; that is, an object that was in a blind spot and not detected in step S708 is captured in the mirror region image.
In step S711, the object 3b is detected from the mirror region image cut out in step S710, by image processing similar to that of step S708. That is, the object 3b, which cannot be seen from the viewpoint C of the camera 14, is detected from the mirror region image, and its posture is calculated by pattern matching or homography computation. As shown in FIG. 13, the content of the mirror region image is the view of the object 3b from the mirror-image viewpoint C' of the camera 14.
If image processing similar to step S708 is performed in coordinates whose origin is the mirror-image viewpoint C', the object 3b is obtained in the posture seen from the mirror-image viewpoint C', and the calculation results can be expressed in coordinates with C' as the origin. At the same time, the postures of the other object 3a and of the gripper unit 13 may also be calculated. Finally, the results may be converted into coordinates based on the origin O of the mobile manipulator 1 to unify the coordinate system, as sketched below.
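Unifying the coordinate system then comes down to one matrix product; the helper below is a trivial sketch, with the caveat (cf. the mirror-image conversion of step S712) that a pose estimated inside the mirror region is left-handed and must be mirror-converted before use.

```python
def pose_in_origin_frame(T_O_Cp, T_Cp_obj):
    """^O T_obj = ^O T_{C'} . ^{C'} T_obj.

    T_O_Cp comes from reflect_pose_across_mirror(); because the mirror image is
    left-handed, a raw pose estimated from the mirror region image should first
    be mirror-converted back to a right-handed pose before taking this product.
    """
    return T_O_Cp @ T_Cp_obj
```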
 In step S713, the posture information of the object 3a and other objects computed in step S708 is integrated with the posture information of the object 3b and other objects obtained in step S711. This allows the control unit 10 to grasp the relative positional relationship in the work space between the object 3a, the object 3b, and, as necessary, the gripper unit 13 and other parts.
 As described above, if each posture is converted into coordinates referenced to the origin O of the mobile manipulator 1 and the coordinate systems are unified, the positional relationships of the objects in the work space can be grasped in a unified manner. Furthermore, when an object has feature points visible from both the viewpoint C and the mirror-image viewpoint C′ of the camera 14, the three-dimensional posture of the object can also be computed by stereo processing.
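 Such stereo processing can be sketched with OpenCV's triangulation routine, assuming the real viewpoint C and the mirror-image viewpoint C′ have been expressed as projection matrices. Note that the mirror view must first be flipped back to an ordinary right-handed view before feature matching, a step glossed over in this sketch.

```python
import numpy as np
import cv2

def triangulate_real_and_mirror(K, T_C_W, T_Cm_W, pts_c, pts_cm):
    """Triangulate feature points matched between the real view C and the
    mirror-image view C'. T_C_W and T_Cm_W are 4x4 world-to-camera
    transforms; pts_c and pts_cm are 2xN arrays of pixel coordinates."""
    P1 = K @ T_C_W[:3, :]   # 3x4 projection matrix of viewpoint C
    P2 = K @ T_Cm_W[:3, :]  # 3x4 projection matrix of viewpoint C'
    pts_h = cv2.triangulatePoints(P1, P2, pts_c, pts_cm)
    return (pts_h[:3] / pts_h[3]).T  # N x 3 points in the world frame
```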
 Then, based on the integrated posture information of each object, the gripper unit 13 is controlled so as to move to a posture in which the object 3a can be gripped (step S714), and the object is gripped by the gripper unit 13 (step S715).
 (Effects)
 According to the present embodiment described above, the mobile manipulator 1 can detect in its work space not only the posture of the object 3a that is the target of the picking operation, but also the posture of the object 3b, which lies in a blind spot behind the object 3a as seen from the shelf opening. It is therefore possible to pick only the object 3a without the gripper unit 13 colliding with the object 3b.
 Consider instead a method in which the posture of a plane mirror 4 placed in the work space is stored in advance. If the mobile manipulator 1 stops even slightly inclined with respect to the shelf 2, the posture of the object 3b as seen from the mirror-image viewpoint C′, detected in step S708, deviates by twice the inclination angle of the mobile manipulator 1 relative to the shelf 2. Moreover, the mirror region cut out in step S710 is also offset by that inclination angle. Because of these effects, the posture of the object 3b cannot be computed correctly. In contrast, according to the present embodiment, by providing step S705, in which the posture of the plane mirror 4 is computed from the markers 5 and the like, the computation of the mirror-image viewpoint C′ of the camera 14 and the cutting out of the mirror region image can be performed accurately regardless of a stop-position error caused by travel, such as the mobile manipulator 1 stopping inclined with respect to the shelf 2.
 (Modification)
 Note that the picking of the object 3a by the gripper unit 13 of the mobile manipulator 1 according to the present embodiment need not follow the procedure of FIG. 7 exactly, and may be modified as appropriate without departing from the scope of the invention. In FIG. 7, the work space is photographed only once, in step S707, between the computation of the posture of the plane mirror 4 in step S705 and the completion of the picking operation; however, the procedure is not limited to this.
 For example, a step of photographing again once the gripper unit 13 has come sufficiently close to the object 3a may be provided.
 As shown in FIG. 14, when the gripper unit 13 is sufficiently close to the object 3a, only a small part of the object 3a can be photographed by the camera 14. From the mirror-image viewpoint C″ of the camera 14 in the state of FIG. 14, however, three objects can be photographed: the object 3a, the object 3b, and the finger mechanisms 130 of the gripper unit 13. Accordingly, not only the positional relationship between the object 3a and the object 3b, but also the positional relationships between the object 3a and the gripper unit 13 and between the object 3b and the gripper unit 13, can be detected from the mirror region image.
 In this case, step S708 need not be performed; it suffices to perform steps S709 through S711. The positional relationships among the object 3a, the object 3b, and the gripper unit 13 can thereby be grasped.
 That is, in the operation procedure according to the present embodiment, it is sufficient that the procedure of computing the posture of the plane mirror 4 from the markers 5 in step S705 be performed before the procedure of steps S709 through S711, and that step S714, which controls the movement of the gripper unit 13, be performed based on the result of steps S709 through S711.
 Further, the camera 14 need not necessarily be fixed to the gripper unit 13. An example is shown in FIG. 15.
 As shown in FIG. 15, a configuration may be adopted in which a joint mechanism 131 is additionally provided between the camera 14 and the gripper unit 13, so that the angle T of the camera 14 relative to the gripper unit 13 can be changed arbitrarily. As long as the angle T of the camera 14 relative to the gripper unit 13 is observable by the control unit 10, the posture of the plane mirror 4 relative to the origin O of the mobile manipulator 1, shown in FIG. 12, can still be computed. This eliminates the need to control the arm unit 12 when searching for the markers 5 in step S704.
 Although it was explained that the markers 5 at the four corners of the plane mirror 4 are detected in step S703, it is not always necessary to detect four markers 5. An example is shown in FIG. 16.
 As shown in FIG. 16, a method using two types of markers, markers 5 and markers 51, arranged with different shapes on the diagonals of the plane mirror 4 may also be used. In this case, the posture of the plane mirror 4 can be computed as long as at least three markers are detected, since three non-collinear points are sufficient to determine the mirror plane. This reduces the number of iterations of the loop from step S702 to step S704.
 As shown in FIG. 17, if a single checker pattern marker 52, composed of squares of a predetermined, known size, is used, the posture of the plane mirror 4 can be computed by predetermined image processing. This is because each vertex of a square in the checker pattern plays the same role as a marker 5.
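 With OpenCV, this predetermined image processing can be approximated by the standard chessboard-corner detection followed by a PnP solve; the pattern size and square edge length below are assumptions for illustration only.

```python
import numpy as np
import cv2

def pose_from_checker_marker(gray, K, pattern=(6, 5), square=0.02):
    """Estimate the pose of a checker pattern with `pattern` inner corners
    and squares of edge length `square` (metres), in the camera frame.
    Returns a 4x4 homogeneous matrix, or None if the pattern is not found."""
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if not found:
        return None
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square
    ok, rvec, tvec = cv2.solvePnP(objp, corners, K, None)
    if not ok:
        return None
    T = np.eye(4)
    T[:3, :3] = cv2.Rodrigues(rvec)[0]
    T[:3, 3] = tvec.ravel()
    return T
```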
 Although the present embodiment has been described using the plane mirror 4, a mirror of a shape other than planar can also be used.
 As shown in FIG. 18, a convex mirror 41 of known shape may be used. The convex mirror 41 in FIG. 18 has a checker pattern marker 52 affixed to its center. When the convex mirror 41 is used, the mirror region image of the convex mirror 41 is converted into a planar image by applying, after cutting the mirror region image out of the captured image in step S710 of FIG. 7, a projective transformation determined by the shape of the convex mirror 41. Using the convex mirror 41 increases the number of computation steps compared with the plane mirror 4, but a wider space can be observed from the mirror viewpoint of the camera 14.
 The description so far has dealt with an example in which only one plane mirror 4 is placed in the work space. Naturally, a plurality of plane mirrors 4 may be placed in the work space. For example, as shown in FIG. 19, in addition to a plane mirror 4a with markers 5 at its four corners, a plane mirror 4b with markers 52 at its four corners and a plane mirror 4c with markers 53 at its four corners may also be placed in the work space. Here, it is sufficient that the markers 5, 52, and 53 differ in shape from one another so that they can be distinguished and detected by predetermined image processing. It goes without saying that the operation procedure shown in FIG. 7 can be applied to each of the plane mirrors 4a to 4c.
 In the work space of FIG. 19, the plane mirrors 4a to 4c may also be used in combination. For example, when the plane mirror 4b is used in combination with the plane mirror 4a, this can be realized by a recursive process in which the mirror region image of the plane mirror 4b is further cut out of the mirror region image of the plane mirror 4a cut out in step S710. Such compound posture computation using a plurality of plane mirrors enables posture detection of objects with even fewer blind spots.
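 The recursive cut-out can be organized as in the following sketch. The `find_region` callbacks that locate each mirror by its marker type and the depth limit are assumptions of this illustration; bookkeeping of the chained reflections is omitted.

```python
def detect_through_mirrors(image, mirror_finders, detect_fn, depth=0, max_depth=2):
    """Run the object detector on the image, then recurse into every mirror
    region found in it, so that mirrors reflected in mirrors are also used.
    `mirror_finders` is a list of callables returning a cropped mirror
    region image or None; `detect_fn` returns detected object postures."""
    results = list(detect_fn(image))
    if depth >= max_depth:
        return results
    for find_region in mirror_finders:
        region = find_region(image)
        if region is not None:
            results += detect_through_mirrors(region, mirror_finders,
                                              detect_fn, depth + 1, max_depth)
    return results
```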
 In Example 1, an example was described in which the arm unit 12 and the gripper unit 13 of the mobile manipulator 1 are controlled using a plane mirror 4 provided with markers 5 at its four corners; however, that approach requires a plane mirror 4 provided with markers 5 to be placed in every work space. In the present embodiment, therefore, an example will be described in which no markers 5 are placed on the plane mirror 4 itself, and the markers 5 are instead placed at locations other than the plane mirror 4.
 FIG. 20 is a diagram showing the configuration of the shelf 2 according to the present embodiment, to be contrasted with FIG. 4 of Example 1. FIG. 20 schematically shows the shelf 2 viewed from the front, that is, from the left-hand direction in FIG. 4.
 The shelf 2 shown in FIG. 20 differs from the shelf 2 shown in FIG. 4 in that markers 5 are provided on parts of the shelf board 21 and the shelf top board 22. In addition, a plane mirror 4 is fixed at a predetermined angle at a predetermined location on the shelf top board 22 of the shelf 2 shown in FIG. 20, with its reflecting surface facing the shelf board 21. The plane mirror 4 according to the present embodiment differs from the plane mirror 4 shown in FIG. 5 in that no markers 5 are provided at its four corners.
 In the present embodiment, the shelf shape data 602 stored in the object data storage unit 60 of the host control unit 6 stores, together with the shape of the shelf 2, the posture of the plane mirror 4 referenced to a predetermined part of the shelf 2. That is, in the present embodiment, the relative positional relationships among the shelf 2, the markers 5, and the plane mirror 4 are known in advance.
 FIG. 21 is a diagram explaining the procedure by which the mobile manipulator 1 according to the present embodiment picks the object 3, to be contrasted with FIG. 7 of Example 1.
 The operation procedure shown in FIG. 21 differs from the operation procedure shown in FIG. 7 in the following respects.
 First, after the mobile manipulator 1 travels to the front of the shelf 2 in step S701, the entire shelf opening is photographed by the camera 14 in step S7021. The captured image of the camera 14 can then be searched for the four markers 5 affixed to the shelf board 21 and the shelf top board 22 shown in FIG. 20 (step S7031). If four markers 5 are not detected in step S7031, the direction of the camera 14 is changed (step S704) and photographing is performed again (step S7021).
 Next, the posture of the shelf 2 is computed using the four markers 5 detected in the captured image of the camera 14 and the shelf shape data 602 (step S7051).
 Then, based on the computed posture of the shelf 2, the posture of the plane mirror 4 fixed to the shelf 2 is computed from the shelf shape data 602 (step S7052), and that posture is stored in the control unit 10 or the host control unit 6 (step S706).
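 Because the shelf-to-mirror transform is stored in the shelf shape data 602, step S7052 reduces to a single matrix product. A sketch follows, with T_C_S denoting the shelf pose estimated in step S7051 and T_S_M the stored fixed transform; both names are hypothetical.

```python
import numpy as np

def mirror_pose_via_shelf(T_C_S: np.ndarray, T_S_M: np.ndarray) -> np.ndarray:
    """Chain the estimated camera-to-shelf pose with the fixed
    shelf-to-mirror transform to obtain the mirror pose in the camera
    frame, with no marker on the mirror itself."""
    return T_C_S @ T_S_M
```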
 In this way, the present embodiment computes the posture of the supporting object of the plane mirror 4 from the shape of the supporting object, the posture of the plane mirror 4 referenced to the supporting object, and the positions of the markers 5 affixed to the supporting object, and then computes the posture of the plane mirror 4 indirectly from the posture of the supporting object.
 In the present embodiment, the locations at which the plane mirror 4 and the markers 5 are fixed are not limited to the shelf 2.
 FIG. 22 is a perspective view showing the schematic configuration of a tray 200 storing objects 3 according to the present embodiment. A two-dimensional code 53 is affixed to the outer front face of the tray 200, and a plane mirror 4 is fixed to the inner back face of the tray 200. Information such as the contents of the tray 200 can be embedded in the two-dimensional code 53, using its pattern shape as an identification code.
 Like the checker pattern marker 52 of Example 1, a single two-dimensional code 53 allows its posture to be computed from an image, provided its size is known.
 By storing in the object data storage unit 60 the shape data of the tray 200, including the fixed postures of the two-dimensional code 53 and the plane mirror 4, the posture of the plane mirror 4 fixed to the inner back face of the tray 200 can be computed from the two-dimensional code 53 affixed to the outer front face of the tray 200, in the same way as the posture of the plane mirror 4 is computed from the posture of the shelf 2 in this embodiment. It goes without saying that, by using the computed posture of the plane mirror 4 fixed to the inner back face of the tray 200, an object 3 inside the tray 200 can be gripped by the gripper unit 13 without colliding with the walls of the tray 200.
 In this way, in the operation procedure of the mobile manipulator 1 according to the present embodiment, the markers 5 can be affixed at locations other than the plane mirror 4, and can therefore be affixed at locations that are easier to photograph with the camera 14 fixed to the gripper unit 13.
 In the embodiments described above, examples were explained in which the markers 5 are placed on the plane mirror 4 or the shelf 2 in the work space. In Example 3, a configuration in which a marker is placed on a part of the mobile manipulator 1 itself will be described.
 FIG. 23 is a perspective view schematically showing the configuration of the gripper unit 13 of the mobile manipulator 1 according to the present embodiment. The gripper unit 13 according to the present embodiment differs from that of Example 1 in that a checker pattern marker 52 is additionally provided at the center between the two finger mechanisms 130 of the gripper unit 13.
 FIG. 24 schematically shows a method for computing the posture of the plane mirror 4 in the mobile manipulator 1 according to the present embodiment. In FIG. 24, the entire field of view V of the camera 14 fixed to the gripper unit 13 is a mirror image reflected in the plane mirror 4. Let m1 denote the point indicated by the posture of the checker pattern marker 52 detected in the mirror image of the plane mirror 4. Then the plane containing the reflecting surface of the plane mirror 4 can be computed as the perpendicular bisector plane between m1 and the posture of the checker pattern marker 52 computed from the joint angles of the arm unit 12 and the lengths of the links between the joints. Information on the attachment position of the marker 52 on the manipulator is stored as data in the data storage unit 105 or the like. Accordingly, like the posture of the camera 14, the posture of the marker 52 can be computed uniquely from data such as the angle of each joint of the arm unit 12 and the lengths of the links connecting the joints.
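 The perpendicular bisector plane can be computed directly from the two marker positions, as in this NumPy sketch; p_marker comes from forward kinematics and p_image from the detection in the mirror, both as 3-vectors in the origin frame O.

```python
import numpy as np

def mirror_plane_from_marker(p_marker: np.ndarray, p_image: np.ndarray):
    """Return the plane n . x = d containing the reflecting surface of the
    mirror, as the perpendicular bisector plane of the marker position and
    the position of its reflected image."""
    n = p_image - p_marker
    n = n / np.linalg.norm(n)          # unit normal of the mirror plane
    midpoint = 0.5 * (p_marker + p_image)
    return n, float(n @ midpoint)      # the plane passes through the midpoint
```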
 FIG. 25 shows a perspective view of the work space in the present embodiment. In the present embodiment, the plane containing the reflecting surface of the plane mirror 4 can be computed, but the position of the mirror region of the plane mirror 4 cannot. However, by using a plane mirror 4 that is sufficiently large compared with the work space, as with the shelf 2 shown in FIG. 25, the plane mirror 4 can be arranged so as to cover the entire field of view of the camera 14 fixed to the gripper unit 13 as long as the gripper unit 13 is in the work space. With such a plane mirror 4 in place, control is performed by treating the entire captured image of the camera 14 as the mirror region image.
 In this way, the present embodiment makes it possible to control the movement of the gripper unit 13 using the mirror-image viewpoint without placing markers 5 on the plane mirror 4 or the shelf 2.
 Another modification will now be described. This modification is basically a variation of the embodiments described with reference to FIGS. 7 and 21, and is another example of the obstacle posture detection process.
 FIG. 26 shows only the portion between steps S707 and S713 of FIGS. 7 and 21, which is where this example differs. In this example, the mirror-image viewpoint is not computed; instead, the posture of an obstacle in the work space is derived from its posture in the mirror-image space, as it appears in the mirror region of the image.
 FIG. 27 shows a conceptual diagram. Viewed from the viewpoint C of the camera 14, the object 3b appears reflected in the mirror 4 as 3b′. Therefore, as in the embodiments of FIGS. 7 and 21, the mirror region in the image is cut out (step S710), and the virtual image 3b′ of the object 3b is detected by the same image processing as in step S708. That is, although the object 3b is not visible from the viewpoint C of the camera 14, the virtual image 3b′ is detected in the mirror region image, and its posture is computed by pattern matching or homography calculation (step S711). As shown in FIG. 27, the content of the mirror region image is the virtual image 3b′ of the object 3b as seen from the viewpoint C of the camera 14.
 The posture of the object 3b is obtained from the virtual image 3b′ by applying a mirror-image transformation of the posture of 3b′ with respect to the mirror (step S712). At the same time, the postures of the other object 3a and of the gripper unit 13 may also be computed. The results may finally be converted into coordinates referenced to the origin O of the mobile manipulator 1 to unify the coordinate system.
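 The mirror-image transformation of step S712 can be written as a single 4×4 reflection matrix acting on the pose of the virtual image; a sketch follows, assuming the mirror plane n · x = d has already been obtained. Since a reflection flips handedness, the rotation block of the result has determinant −1; how it is re-orthonormalized into a proper rotation depends on the object model and is left out here.

```python
import numpy as np

def reflect_pose_across_mirror(T_virtual: np.ndarray, n: np.ndarray, d: float) -> np.ndarray:
    """Map the pose of the virtual image 3b' back to the real object 3b by
    reflecting it across the plane n . x = d. A point x maps to
    x - 2 * (n . x - d) * n, i.e. H = [[I - 2 n n^T, 2 d n], [0, 1]]."""
    H = np.eye(4)
    H[:3, :3] -= 2.0 * np.outer(n, n)
    H[:3, 3] = 2.0 * d * n
    return H @ T_virtual
```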
 Specific embodiments of the present invention have been described above in detail, but these are examples presented for a clear understanding of the disclosed invention. It therefore goes without saying that various further aspects, modifications, and combinations of the above-described embodiments can be formed without departing from the scope of the claims. For example, part of the configuration of one example can be replaced with the configuration of another example, and the configuration of another example can be added to the configuration of one example. It is also possible to add, delete, or replace part of the configuration of each example. Components expressed in the singular in this specification include the plural unless the context clearly indicates otherwise.
 The above embodiments have described the mobile manipulator 1 having the mobile carriage unit 11, but the present invention is not limited to this. For example, it goes without saying that the invention can also be applied to a stationary manipulator in which the arm unit is moved by a linear motion mechanism fixed at a predetermined location.
 The invention can be used for controlling a wide variety of mobile manipulators.
DESCRIPTION OF SYMBOLS
1 mobile manipulator
10 control unit
11 mobile carriage unit
110 wheel
12 arm unit
121, 122 joint mechanism
13 gripper unit
131 finger mechanism
14 camera
2 shelf
20 strut
21 shelf board
22 shelf top board
200 tray
3, 3a, 3b object
4 plane mirror
41 convex mirror
5, 51 marker
52 checker pattern marker
53 two-dimensional code
6 host control unit
60 object data storage unit

Claims (15)

  1.  A manipulator control method for operating a movable manipulator in a work space in which an object is placed, the method comprising:
     imaging a mirror placed in or near the work space with a camera mounted on the manipulator;
     detecting the posture of the mirror based on the posture of the camera and an image captured by the camera;
     detecting the region of the mirror in the image captured by the camera;
     detecting an image of at least a part of the object in the detected region of the mirror;
     computing the posture of the object in the work space based on the posture of the mirror and the image of the object; and
     operating the manipulator based on the computed posture information of the object.
  2.  The manipulator control method according to claim 1, wherein, in computing the posture of the object in the work space based on the posture of the mirror and the image of the object,
     a mirror-image viewpoint of the camera is computed based on the posture of the camera and the posture of the mirror, and
     the posture of the object in the work space is computed based on the mirror-image viewpoint of the camera and the image of the object.
  3.  The manipulator control method according to claim 1, wherein, in computing the posture of the object in the work space based on the posture of the mirror and the image of the object,
     the posture of a virtual image of the object in a mirror-image space is computed based on the posture of the camera and the image of the object, and
     the posture of the object in the work space is computed based on the posture of the virtual image of the object and the posture of the mirror.
  4.  The manipulator control method according to claim 1, wherein the object is placed on a shelf,
     the manipulator is moved to the vicinity of a predetermined shelf based on shelf arrangement information, and
     the postures of the camera, the mirror, and the object are computed with reference to an origin fixed to the manipulator.
  5.  The manipulator control method according to claim 1, wherein an image of at least a part of the manipulator is detected in the detected region of the mirror,
     the posture of the at least a part of the manipulator in the work space is computed based on the posture of the mirror and the image of the at least a part of the manipulator, and
     the manipulator is operated further based on the computed posture information of the at least a part of the manipulator.
  6.  The manipulator control method according to claim 1, wherein at least a part of the object is detected in a region other than the detected mirror,
     the posture of the object in the work space is computed based on the image of the at least a part of the object, and
     the manipulator is operated further based on the computed posture information of the object.
  7.  The manipulator control method according to claim 1, wherein at least a part of the manipulator is detected in a region other than the detected mirror,
     the posture of the at least a part of the manipulator in the work space is computed based on the image of the at least a part of the manipulator, and
     the manipulator is operated further based on the computed posture information of the at least a part of the manipulator.
  8.  The manipulator control method according to claim 1, wherein a marker is placed on the mirror or in the work space, and
     the posture of the mirror is detected based on information on the marker in the image captured by the camera.
  9.  A manipulator control system for operating a movable manipulator in a work space in which an object is placed, wherein
     the manipulator comprises a moving mechanism that moves the manipulator, an operating mechanism that manipulates the object, a camera for imaging the work space, a camera control mechanism that changes the posture of the camera, and a control unit, and
     the control unit has:
     a function of detecting the posture of the camera;
     a function of detecting the posture of a mirror in an image captured by the camera, based on the posture of the camera and the captured image;
     a function of detecting the region of the mirror in the image captured by the camera and detecting an image of at least a part of the object in the detected region of the mirror; and
     a function of computing the posture of the object in the work space based on the posture of the mirror and the image of the object,
     the control system operating the operating mechanism based on the computed posture information of the object.
  10.  The manipulator control system according to claim 9, wherein the operating mechanism has an arm unit connected by at least one joint mechanism and a gripper unit held by the arm unit and capable of grasping and moving the object,
     the camera is disposed on the gripper unit, and
     the control unit controls the operation of the operating mechanism, stores geometric data of the shape of the operating mechanism, and detects the posture of the camera using the geometric data.
  11.  The manipulator control system according to claim 9, wherein the work space in which the object is placed is a space defined with reference to a shelf on which the object is placed, and the mirror is disposed on the shelf.
  12.  The manipulator control system according to claim 11, wherein the manipulator is configured to be movable to the vicinity of an arbitrary shelf among a plurality of shelves,
     the control unit stores shelf arrangement information on the arrangement of the plurality of shelves and object information on the objects placed on each shelf, and
     the control unit moves the manipulator to the vicinity of a predetermined shelf based on the shelf arrangement information and the object information, and images the work space with the camera.
  13.  The manipulator control system according to claim 12, wherein a marker is placed in the work space, the relative positional relationship between the marker and the mirror is stored in the control unit, and the posture of the mirror in an image captured by the camera is detected based on the marker in the captured image.
  14.  A manipulator operated in a work space in which an object is placed, comprising:
     a moving mechanism that moves the manipulator;
     a gripper unit that manipulates the object;
     an arm unit that moves the gripper unit;
     a camera for imaging the work space;
     a camera control mechanism that changes the posture of the camera; and
     a control unit,
     wherein the control unit has:
     a function of detecting the posture of the camera;
     a function of detecting the posture of a mirror in an image captured by the camera, based on the posture of the camera and the captured image;
     a function of detecting an image of at least a part of the object in the image captured by the camera; and
     a function of computing the posture of the object in the work space based on the posture of the mirror and the image of the object,
     the manipulator operating the gripper unit based on the computed posture information of the object.
  15.  The manipulator according to claim 14, further comprising a marker disposed on the gripper unit,
     wherein, in the function of detecting the posture of the mirror, the posture of the mirror in the image captured by the camera is detected using information on the posture of the marker and the image of the marker in the captured image.