JPS5877473A - Visual recognizing handling system - Google Patents

Visual recognizing handling system

Info

Publication number
JPS5877473A
JPS5877473A, JP56172634A, JP17263481A
Authority
JP
Japan
Prior art keywords
hand
robot
target object
handling system
objects
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP56172634A
Other languages
Japanese (ja)
Other versions
JPS6150757B2 (en)
Inventor
Hiroshi Shionoya
Takashi Uchiyama
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujitsu Ltd
Original Assignee
Fujitsu Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujitsu Ltd filed Critical Fujitsu Ltd
Priority to JP56172634A priority Critical patent/JPS5877473A/en
Publication of JPS5877473A publication Critical patent/JPS5877473A/en
Publication of JPS6150757B2 publication Critical patent/JPS6150757B2/ja
Granted legal-status Critical Current

Links

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

(57) [Abstract] This publication contains application data filed before the introduction of electronic filing, so no abstract data is recorded.

Description

DETAILED DESCRIPTION OF THE INVENTION

(1) Technical Field of the Invention

The present invention relates to a visual recognition handling system that drives a robot on the basis of information obtained by visual recognition in order to handle a target object.

(2) Prior Art

Systems in which a robot hand is operated on the basis of visual recognition to handle samples, parts, and the like supplied on a belt conveyor are well known. The grasping operation in such systems is suitable when the target objects are spaced far enough apart to be treated as isolated objects. Fig. 1 shows an example of a handling system using such a robot. In the figure, robot 1 has joints 1-1, 1-2, 1-3, 1-4, 1-5, and 1-6 capable of rotary positioning, and can be freely positioned within the work space by control unit 4. At its tip it has a two-fingered hand 1-7.

Control unit 4 performs position control and grasping control of hand 1-7 of robot 1 on the basis of visual recognition by a TV camera placed vertically above target object 3. In this case, if target object 3 is close to or overlaps another object, then when hand 1-7 is set over the target object on the basis of the image seen from vertically above by the TV camera, the hand position overlaps the nearby object and the target cannot be grasped. The target object could of course be moved away from the other objects and realigned, but when many kinds of minute parts are present in large numbers, the time and labor required for such realignment become very large. It is therefore desirable for the hand itself to move the obstructing object away.

(3) Object of the Invention

The object of the present invention is to provide a visual recognition handling system in which a robot having a visual recognition function can, even when an obstructing object lies close to the target object, move the obstruction away by itself and easily grasp the target object.

(4) Structure of the Invention

To achieve the above object, the visual recognition handling system of the present invention is a system in which a target object is visually recognized from vertically above and grasped from above by a robot hand for handling, characterized in that, when a plurality of objects are the work targets and an object obstructing the robot hand's grasp of the target object is detected, the system comprises means for scanning between two points on the perpendicular bisector of the line segment connecting the center of gravity of the object to be grasped and the center of gravity of the obstructing object, thereby moving the two adjacent objects apart.

(5) Embodiment of the Invention

Fig. 2 is a block diagram explaining the configuration of an embodiment of the present invention. It differs from the conventional system in that control unit 4 of robot 1 is provided with means for detecting the presence of an obstructing object and scanning to move it away. Figs. 3(a) to 3(f) are explanatory diagrams of its operation.

In Fig. 2, TV camera 2 inputs an image of target object 3, and interface unit 11 converts the video signal into a digital signal and stores it in image memory 12.

Image memory 12 is composed of, for example, 256 × 256 pixels, and the value of each pixel can be referenced using its address. Position/orientation computing unit 13 refers to image memory 12 and computes the position and orientation required for robot 1 to grasp target object 3. For example, if the image of the target object is a rectangle as shown in Fig. 3(a), the position is defined as the coordinates of the center of gravity, and the orientation as the angle θ between the long side and the X axis. Hand region calculation unit 14 receives from hand information storage unit 15 the gripping width W of the robot hand and the dimensions R (b × l) of the hand bottom shown in Fig. 3(b), and from these and the position and orientation of the target object in Fig. 3(a) determines the region the hand occupies in the image when it grasps the target object from vertically above, as shown in Fig. 3(c). Obstacle detection unit 16 refers to image memory 12 and checks whether any point on another object lies within the hand region. If not, it judges that there is no obstructing object, and handling is performed on the basis of the computed position and orientation. If such a point exists, it judges that an obstruction is present, and the process of moving the two adjacent objects apart is started. Fig. 3(d) shows an example of the latter case.
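The processing of units 13, 14, and 16 described above can be sketched as follows. This is an illustrative reconstruction, not the patent's implementation: the moment-based pose estimate, the single-rectangle approximation of the hand region, and all function names are assumptions made for the sketch.

```python
import numpy as np

def pose_from_mask(mask):
    """Position (centroid) and orientation (major-axis angle theta versus
    the X axis) of a blob, via image moments -- one way to realize the
    measurement attributed to unit 13."""
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    theta = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)
    return (cx, cy), theta

def hand_region_mask(shape, center, theta, w, b, l):
    """Image region occupied by the hand when grasping at `center` with
    orientation `theta` (unit 14). The two b x l finger footprints at
    +/- W/2 are approximated here by one rotated (w + 2b) x l rectangle."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    dx, dy = xs - center[0], ys - center[1]
    u = dx * np.cos(theta) + dy * np.sin(theta)    # along the long side
    v = -dx * np.sin(theta) + dy * np.cos(theta)   # across the fingers
    return (np.abs(u) <= l / 2.0) & (np.abs(v) <= (w + 2.0 * b) / 2.0)

def obstacle_in_hand_region(occupancy, target_mask, region):
    """Unit 16's test: does any pixel of a *different* object fall
    inside the hand region?"""
    return bool(np.any(occupancy & region & ~target_mask))
```

With an isolated target the check returns False and grasping proceeds; a single occupied pixel of another object inside the rectangle flips it to True and triggers the separation process.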

That is, as shown in Fig. 3(e), obstacle center-of-gravity detection unit 17 finds the center of gravity S of the obstructing object, part of which is included in the hand region. Scan data calculation unit 18 connects the center of gravity P of the object to be grasped with the center of gravity S of the obstructing object shown in the same figure, and sets two arbitrary points A and B on the perpendicular bisector of line segment PS, B being the point nearer the center of the field of view.
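The geometry computed by units 17 and 18 is easy to state concretely. A minimal sketch, assuming the two centroids are already available as (x, y) pairs; the half-length of the scan segment and the function names are arbitrary choices for illustration, not values fixed by the patent:

```python
import numpy as np

def scan_points(p, s, half_len):
    """Two points on the perpendicular bisector of segment PS, each
    `half_len` from its midpoint, as in Fig. 3(e).
    p = centroid of the target object, s = centroid of the obstacle."""
    p, s = np.asarray(p, float), np.asarray(s, float)
    mid = (p + s) / 2.0
    d = s - p
    n = np.array([-d[1], d[0]]) / np.hypot(d[0], d[1])  # unit normal to PS
    return mid + half_len * n, mid - half_len * n

def order_scan_points(a, b, view_center):
    """Return (A, B) with B the point nearer the centre of the field of
    view, as the text specifies."""
    c = np.asarray(view_center, float)
    if np.linalg.norm(a - c) < np.linalg.norm(b - c):
        a, b = b, a
    return a, b
```

By construction both points are equidistant from P and S, so a sweep between them pushes along the direction separating the two objects.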

As shown in Fig. 3(f), robot control unit 19 sends a control signal to robot drive unit 20, closes both fingers of the hand, lowers it to the grasping height, and sweeps it between the two points so as to move the two adjacent objects apart. After this scanning operation, image input and recognition are performed again, and this time the handling is carried out completely.
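The overall control flow just described, recognize, test for obstruction, sweep and re-image if necessary, otherwise grasp, can be sketched as below. Everything here is an assumed interface (`StubRobot` and the injected callables) standing in for units 13 through 20; the patent defines the flow, not these names.

```python
class StubRobot:
    """Records commands instead of driving hardware (units 19-20 stand-in)."""
    def __init__(self):
        self.log = []
    def sweep(self, a, b):       # close hand, lower, move A -> B
        self.log.append(("sweep", a, b))
    def grasp(self, pose):       # normal handling at the computed pose
        self.log.append(("grasp", pose))

def handle(images, robot, recognize, obstructed, scan_segment):
    """One handling cycle per captured image: if the hand region is
    obstructed, sweep along the computed segment and re-recognize;
    otherwise grasp and finish."""
    for image in images:                          # each pass = one image capture
        pose = recognize(image)                   # unit 13
        if obstructed(image, pose):               # units 14 + 16
            robot.sweep(*scan_segment(image, pose))   # units 17-19
            continue                              # image input/recognition again
        robot.grasp(pose)
        return True
    return False
```

Driven with stub callables, a "crowded" frame triggers one sweep and the following "clear" frame is grasped, matching the two-pass behaviour of the embodiment.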

(6) Effect of the Invention

As described above, according to the present invention, even when an obstructing object lies close to the target object, a visual recognition robot can detect it and move it away by itself, and can then grasp the target object in an unobstructed state. In this way, a visual recognition handling system is obtained that is autonomous, requiring no human labor or other equipment, and at the same time highly versatile.

BRIEF DESCRIPTION OF THE DRAWINGS

Fig. 1 is a general explanatory diagram of a handling system, Fig. 2 is a block diagram of the configuration of an embodiment of the present invention, and Figs. 3(a) to 3(f) are explanatory diagrams of the operation of the main part of the embodiment of Fig. 2. In the figures, 1 denotes a robot, 1-7 a hand, 2 a TV camera, 3 a target object, 4 a control unit, 11 an interface unit, 12 an image memory, 13 a position/orientation computing unit, 14 a hand region calculation unit, 15 a hand information storage unit, 16 an obstacle detection unit, 17 an obstacle center-of-gravity detection unit, 18 a scan data calculation unit, 19 a robot control unit, and 20 a robot drive unit.

Patent applicant: Fujitsu Limited

Claims (1)

[Claims] A visual recognition handling system in which a target object is visually recognized from vertically above and grasped from above by a robot hand for handling, characterized in that, when a plurality of objects are the work targets and an object that obstructs the robot hand's grasp of the target object is detected, the system comprises means for scanning between two points on the perpendicular bisector of the line segment connecting the center of gravity of the object to be grasped and the center of gravity of the obstructing object, so that the two adjacent objects are moved apart.
JP56172634A 1981-10-28 1981-10-28 Visual recognizing handling system Granted JPS5877473A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP56172634A JPS5877473A (en) 1981-10-28 1981-10-28 Visual recognizing handling system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP56172634A JPS5877473A (en) 1981-10-28 1981-10-28 Visual recognizing handling system

Publications (2)

Publication Number Publication Date
JPS5877473A true JPS5877473A (en) 1983-05-10
JPS6150757B2 JPS6150757B2 (en) 1986-11-05

Family

ID=15945510

Family Applications (1)

Application Number Title Priority Date Filing Date
JP56172634A Granted JPS5877473A (en) 1981-10-28 1981-10-28 Visual recognizing handling system

Country Status (1)

Country Link
JP (1) JPS5877473A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS59182076A (en) * 1983-03-31 1984-10-16 Hitachi, Ltd. Measuring device for error on installation of robot
JPH0512111B2 (en) * 1983-03-31 1993-02-17 Hitachi Ltd
JPS63245387A (en) * 1987-03-30 1988-10-12 Toyoda Machine Works, Ltd. Visual recognizer for robot

Also Published As

Publication number Publication date
JPS6150757B2 (en) 1986-11-05

Similar Documents

Publication Publication Date Title
JP3768174B2 (en) Work take-out device
CN109955254A (en) The remote operating control method of Mobile Robot Control System and robot end's pose
Jo et al. Manipulative hand gesture recognition using task knowledge for human computer interaction
US10596707B2 (en) Article transfer device
CN110744544B (en) Service robot vision grabbing method and service robot
Kelley et al. Three vision algorithms for acquiring workpieces from bins
CN114670189A (en) Storage medium, and method and system for generating control program of robot
US20210401515A1 (en) Binding and non-binding articulation limits for robotic surgical systems
JPS5877473A (en) Visual recognizing handling system
JPS6134675A (en) Image recognizing method and its device
JP2000263482A (en) Attitude searching method and attitude searching device of work, and work grasping method and work grasping device by robot
JPH09225872A (en) Robot teaching device
JPS6257884A (en) Manipulator device
JPS58114892A (en) Visual device for robot
JP3396920B2 (en) Harvesting robot imaging method
JPH05108131A (en) Teaching device of robot
JPS5877486A (en) Visual recognizing device for robot
JPS5877487A (en) Visual recognizing device for robot
JPH05185388A (en) Parts supply device
CN108090430A (en) The method and its device of Face datection
JPS60123285A (en) Method of detecting force of robot
Yamauchi et al. On cooperative conveyance by two mobile robots
JPH02256485A (en) Robot with visual device
Jarvis Automatic grip site detection for robotic manipulators.
JPS63134187A (en) Manipulator controller