JPH02234013A - Fruit/vegetable position recognizing device - Google Patents

Fruit/vegetable position recognizing device

Info

Publication number
JPH02234013A
JPH02234013A (application JP5675689A)
Authority
JP
Japan
Prior art keywords
fruit
center
gravity
spot light
manipulator
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP5675689A
Other languages
Japanese (ja)
Other versions
JP2784585B2 (en)
Inventor
Taiji Mizukura
泰治 水倉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yanmar Co Ltd
Original Assignee
Yanmar Agricultural Equipment Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yanmar Agricultural Equipment Co Ltd filed Critical Yanmar Agricultural Equipment Co Ltd
Priority to JP1056756A priority Critical patent/JP2784585B2/en
Publication of JPH02234013A publication Critical patent/JPH02234013A/en
Application granted granted Critical
Publication of JP2784585B2 publication Critical patent/JP2784585B2/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Links

Landscapes

  • Harvesting Machines For Specific Crops (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

PURPOSE: To reduce the amount of image processing by irradiating the centroid of a target fruit with a light spot, imaging the fruit so that this centroid coincides with the center of the color camera's field of view, and recognizing the fruit's position from the resulting camera direction and hand attitude. CONSTITUTION: First, the fruit 10 to be harvested is detected by binarizing the captured image and its centroid is calculated; the manipulator 6 is then controlled so that a spot light falls on that centroid. Next, the camera actuator 13 is controlled so that the spot light comes to the center of the imaging field of view. By detecting the direction of the color video camera 11 and the attitude of the hand 7 at that moment, the distance between the manipulator 6 and the fruit 10 is obtained by triangulation or the like. From this, the amount and direction of movement of the manipulator 6 needed to harvest the fruit 10 are calculated, and the fruit is harvested.

Description

[Detailed Description of the Invention]

[Industrial Application Field]

This invention relates to a device for recognizing the position of fruits and vegetables when harvesting them with a manipulator such as a robot.

[Prior Art]

As fruit-position recognition devices for robots that harvest fruits such as apples, devices using the following two methods are conventionally known: (1) recognizing the fruit position by stereo ranging from two images obtained either with two cameras or with one camera moved a fixed distance, and (2) obtaining multiple images by visual feedback from a camera mounted at the tip of the robot hand and recognizing the fruit position from them.

[Problems to Be Solved by the Invention]

Image processing generally requires computation for every pixel, so the amount of computation is large. When, as in the two conventional methods above, several images must be processed to find the corresponding points they share before the target fruit's position can be recognized, the computation grows even larger and the processing takes a long time. Shortening it requires an expensive processor with high-speed computing capability, which has made practical application difficult.

This invention was made in view of these circumstances. Its object is to provide a fruit/vegetable position recognition device that irradiates the centroid of the target fruit with a light spot, images the fruit so that this centroid becomes the center of the color camera's field of view, and recognizes the fruit's position from the resulting camera direction and projector attitude, thereby reducing the amount of image processing and the image processing time, and hence the time required to recognize the fruit position.

[Means for Solving the Problems]

The fruit/vegetable position recognition device according to this invention is a device that images the fruits and vegetables to be harvested with a movable imaging means and recognizes their position by processing the obtained image. It comprises: light-projecting means for irradiating the fruit or vegetable with a spot light; moving means for moving the light-projecting means; means for calculating the centroid position of the imaged fruit or vegetable; means for controlling the moving means so that the spot light is directed at the calculated centroid position; means for controlling the position of the imaging means so that the spot light is imaged at a predetermined position within the imaging means' field of view; and means for calculating the position of the fruit or vegetable from the direction of the optical axis of the light incident on the imaging means and the attitude of the light-projecting means.

[Operation]

In this invention, when a fruit or vegetable is detected by the imaging means, its centroid position is calculated and the moving means is controlled so that the light spot falls on that centroid. The imaging means is then positioned so that the light spot occupies a predetermined position within its field of view, and the position of the fruit or vegetable is recognized from the direction of the imaging means' optical axis and the attitude of the light-projecting means at that moment.
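The two-stage servoing just described (steer the light spot onto the centroid, then center the camera's field of view on it) can be sketched as a pair of convergence loops. Everything below is a hypothetical illustration: the `Dummy*` classes and all method names are stand-ins for the patent's camera, manipulator and spot projector, not anything the patent specifies.

```python
# Hypothetical sketch of the two servoing loops; proportional steps stand in
# for the manipulator and camera-actuator drives described in the patent.

class DummyCamera:
    def __init__(self):
        self.center = [0.0, 0.0]   # current field-of-view center, image coords
    def aim_toward(self, target):  # one pan/tilt step toward the target
        self.center[0] += (target[0] - self.center[0]) * 0.5
        self.center[1] += (target[1] - self.center[1]) * 0.5

class DummySpot:
    def __init__(self):
        self.pos = [10.0, 10.0]    # where the spot currently lands, image coords
    def steer_toward(self, target):  # one manipulator step toward the target
        self.pos[0] += (target[0] - self.pos[0]) * 0.5
        self.pos[1] += (target[1] - self.pos[1]) * 0.5

def recognize(camera, spot, fruit_centroid, tol=0.1):
    """Drive the spot onto the fruit centroid, then center the camera on it."""
    # Stage 1: steer the manipulator until the spot sits on the centroid.
    while max(abs(spot.pos[0] - fruit_centroid[0]),
              abs(spot.pos[1] - fruit_centroid[1])) > tol:
        spot.steer_toward(fruit_centroid)
    # Stage 2: pan/tilt the camera until the centroid is the view center.
    while max(abs(camera.center[0] - fruit_centroid[0]),
              abs(camera.center[1] - fruit_centroid[1])) > tol:
        camera.aim_toward(fruit_centroid)
    return tuple(camera.center), tuple(spot.pos)

cam, sp = DummyCamera(), DummySpot()
center, pos = recognize(cam, sp, (4.0, 3.0))
```

Once both loops have converged, the camera direction and projector attitude read at that moment are all that the triangulation step needs.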

[Embodiment]

This invention will now be described with reference to the drawings showing one embodiment. Fig. 1 is a perspective view showing the appearance of a fruit-harvesting robot equipped with a fruit/vegetable position recognition device according to this invention.

In the figure, 1 is a vehicle body carrying a harvesting manipulator 6; the body 1 is steered automatically by two drive wheels 2 and one steering wheel 3. A steering control unit 4 and a harvesting control unit 5 are mounted at the rear of the body 1, and the manipulator 6 is mounted at the upper center of the body. The manipulator 6 is a five-axis articulated type, and a hand 7 for grasping a fruit 10 is attached to the tip of its arm.

The hand 7 has a harvesting section 8 that is opened and closed by a hand actuator 17 consisting of a motor; its grasping force is detected by a pressure sensor 15 provided on the inside of the harvesting section and is controlled to a predetermined value.

A spot-light projector 9 for emitting a spot light is also attached to the hand 7. The position and direction of the hand 7 are detected by a hand-attitude detector 12 from the values of the rotary encoders attached to the robot actuator 16, which consists of the five motors driving the axes of the manipulator 6. That is, the position of the hand 7 is obtained from the rotary encoders of the swivel, shoulder and elbow motors of the manipulator 6, while its swivel and tilt angles, and hence its direction, are obtained from the rotary encoders of the two wrist motors.
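Given the swivel (pan) and tilt angles read from a pair of rotary encoders, the pointing direction follows from elementary trigonometry. The sketch below is an illustration only; the angle conventions are assumptions, since the patent does not define its coordinate frames.

```python
import math

def direction_from_pan_tilt(pan_deg, tilt_deg):
    """Unit direction vector of a device aimed by pan (swivel) and tilt angles.

    Convention (an assumption, not from the patent): pan is measured in the
    horizontal x-y plane from the x axis, tilt upward from the horizontal.
    """
    pan, tilt = math.radians(pan_deg), math.radians(tilt_deg)
    return (math.cos(tilt) * math.cos(pan),
            math.cos(tilt) * math.sin(pan),
            math.sin(tilt))
```

The same conversion would apply to the hand's wrist encoders and to the camera actuator's encoders, each in its own frame, before the two axes are compared for triangulation.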

At the front of the vehicle body 1, a color video camera 11 serving as the imaging means is mounted so that it can swivel and tilt freely by means of a camera actuator 13 consisting of two motors with rotary encoders. The swivel and tilt angles are read from the rotary encoders, and from them a camera-direction detector 14 determines the direction of the camera.

Fig. 2 is a block diagram showing the schematic configuration of the harvesting control unit. The harvesting control unit 5 is built around a 16-bit CPU. Its input terminals receive the NTSC video signal from the color video camera, the pressure signal from the pressure sensor 15, the direction of the color video camera 11 from the camera-direction detector 14, and the position and direction of the hand 7 from the hand-attitude detector 12. Its output terminals supply separate drive signals for the robot actuator 16, the hand actuator 17 and the camera actuator 13.

The operation of the device configured as above will now be described. Fig. 3 is a principle diagram showing the fruit-position recognition procedure of this invention, and Figs. 4(a) and (b) are its flowcharts.

In this device, as shown in Fig. 3, the fruit 10 to be harvested is first detected by binarizing the captured image (Fig. 3(a)), its centroid position is calculated, and the manipulator 6 is controlled so that the spot light falls on that centroid (Fig. 3(b)). Next, the camera actuator 13 is controlled so that the spot light comes to the center of the imaging field of view (Fig. 3(c)). The direction of the color video camera 11 and the attitude of the hand 7 at that moment are then detected, and the distance between the manipulator 6 and the fruit 10 is obtained by a known method such as triangulation. From this, the amount and direction of movement of the manipulator 6 needed to harvest the fruit 10 are calculated, and the fruit can be harvested.

The operation will now be described in detail with reference to the flowcharts. First, the color video camera 11 starts imaging (step 1), the image is fed into the harvesting control unit 5, and the pixels whose red component R is at or above a predetermined threshold Rs are extracted and binarized (step 2).
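The red-threshold binarization of step 2, together with the area and centroid computation that the flowchart performs next, can be sketched as follows. This is a minimal illustration: the threshold value for Rs and the image representation (a 2-D list of red intensities) are assumptions, as the patent gives neither.

```python
def binarize_red(red_channel, threshold):
    """Flowchart step 2: mark pixels whose red value meets the threshold.

    `red_channel` is a 2-D list of red intensities; `threshold` plays the
    role of the patent's Rs (its actual value is not given).
    """
    return [[1 if r >= threshold else 0 for r in row] for row in red_channel]

def centroid(mask):
    """Flowchart steps 5-6: area (pixel count) and center of gravity."""
    pts = [(x, y) for y, row in enumerate(mask) for x, v in enumerate(row) if v]
    area = len(pts)
    cx = sum(x for x, _ in pts) / area
    cy = sum(y for _, y in pts) / area
    return area, (cx, cy)
```

Because only one image is thresholded and averaged per cycle, the per-pixel work stays linear in the image size, which is the computational saving the invention claims over corresponding-point search across multiple images.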

Next, the number of binarized pixels is counted and compared with a predetermined value Ps (step 3). If it does not exceed Ps, the camera actuator 13 is swiveled and tilted by a predetermined amount and the process returns to step 1. If it exceeds Ps, the area is obtained from the pixel count (step 5) and the centroid position is calculated from it (step 6). The spot-light projector 9 is then switched on to emit the spot light (step 7). An image is captured with the color video camera 11 (step 8) and binarized with a predetermined second threshold to extract the spot light (step 9). From the binarization result it is judged whether the spot light lies within the imaging field of view (step 10); if not, the manipulator 6 is driven by a predetermined amount (step 11) and the process returns to step 8. If the spot light is within the field of view, the deviation between the center of the spot light and the centroid position is calculated (step 12); here the deviation includes both distance and direction. The manipulator 6 is driven according to the calculated deviation (step 13), and by repeated imaging and binarization (steps 14 and 15) the spot light is brought onto the centroid position. In step 16 it is judged whether the center of the spot light coincides with the centroid position; if not, the process repeats from step 12. When they coincide, the deviation between the centroid position and the center of the field of view is calculated (step 17) and the camera actuator 13 is driven accordingly (step 18). Step 19 judges whether the center of the field of view coincides with the centroid position; if not, the process repeats from step 17. When they coincide, the spot-light projector 9 is immediately switched off, and the hand-attitude detector 12 calculates the position of the hand 7 from the values of the three rotary encoders at the swivel, shoulder and elbow of the robot actuator 16 and the direction of the hand 7 from the values of the two wrist rotary encoders, while the direction of the color video camera 11 is calculated from the values of the two rotary encoders attached to the camera actuator 13 (step 20).

At this moment the center of the optical axis of the color video camera 11 (the center of the field of view) and the spot light from the spot-light projector 9 both lie at the centroid of the fruit 10, so the camera's optical axis and the axis of the spot light lie in the same plane, and the triangle formed by the two axes is uniquely determined. The position of the fruit 10 is then calculated by a known triangulation method, giving the distance and direction between the fruit 10 and the hand 7 (step 21).
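The triangulation of step 21 can be sketched as finding where the camera's optical axis and the spot beam meet. The implementation below uses the standard closest-point-between-two-lines formula, which reduces to the exact intersection in the coplanar case described above; the positions and directions passed in are hypothetical stand-ins for values derived from the encoder readings of step 20.

```python
def triangulate(cam_pos, cam_dir, spot_pos, spot_dir):
    """Estimate the fruit position as the (near-)intersection of two rays.

    cam_pos/cam_dir:  camera position and optical-axis direction;
    spot_pos/spot_dir: spot projector position and beam direction.
    """
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    # Solve for t, s minimizing |(cam_pos + t*cam_dir) - (spot_pos + s*spot_dir)|.
    w = sub(cam_pos, spot_pos)
    a, b, c = dot(cam_dir, cam_dir), dot(cam_dir, spot_dir), dot(spot_dir, spot_dir)
    d, e = dot(cam_dir, w), dot(spot_dir, w)
    denom = a * c - b * b                      # zero only for parallel rays
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    p1 = tuple(p + t * v for p, v in zip(cam_pos, cam_dir))
    p2 = tuple(p + s * v for p, v in zip(spot_pos, spot_dir))
    # Midpoint of the closest points; equals the intersection when coplanar.
    return tuple((x + y) / 2 for x, y in zip(p1, p2))
```

With the fruit position known, the vector from the hand to the fruit gives the distance and direction the manipulator must move, which is exactly what the harvesting motion in the next paragraph consumes.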

Using these data, the manipulator 6 is driven to bring the hand 7 close to the fruit 10 to be harvested, the hand actuator 17 opens and closes the harvesting section 8, and the fruit 10 can be harvested. Although an articulated robot is used as the manipulator in this embodiment, the invention is not limited to it; any type of robot may be used as the manipulator as long as it can automatically capture fruits and vegetables.

Furthermore, with the configuration of this embodiment, not only the method of this invention but also the conventional visual-feedback and stereo-ranging methods can be used, so the position recognition method can be chosen according to the target crop and the environment, giving the device high versatility.

[Effects of the Invention]

As described above, according to this invention the position of a fruit or vegetable can be recognized by processing a single image, so the amount of computation needed to detect corresponding points is reduced, position recognition is faster, and the image processing can be performed by an inexpensive processor, among other excellent effects.

[Brief Description of the Drawings]

Fig. 1 is a perspective view showing the appearance of a fruit-harvesting robot equipped with a fruit/vegetable position recognition device according to this invention; Fig. 2 is a block diagram showing the configuration of the harvesting control unit; Fig. 3 is a principle diagram showing the fruit-position recognition procedure; and Figs. 4(a) and (b) are its flowcharts.

1: vehicle body; 5: harvesting control unit; 6: manipulator; 7: hand; 8: harvesting section; 9: spot-light projector; 10: fruit; 11: color video camera; 12: hand-attitude detector; 13: camera actuator; 14: camera-direction detector; 16: robot actuator.

Applicant: Yanmar Agricultural Equipment Co., Ltd. Agent: Noboru Kono, patent attorney.

Claims (1)

[Claims]

1. A fruit/vegetable position recognition device that images fruits and vegetables to be harvested with a movable imaging means and recognizes their position by processing the obtained image, comprising: light-projecting means for irradiating the fruit or vegetable with a spot light; moving means for moving the light-projecting means; means for calculating the centroid position of the imaged fruit or vegetable; means for controlling the moving means so that the spot light is directed at the calculated centroid position; means for controlling the position of the imaging means so that the spot light is imaged at a predetermined position within the imaging means' field of view; and means for calculating the position of the fruit or vegetable from the direction of the optical axis of the light incident on the imaging means and the attitude of the light-projecting means.
JP1056756A 1989-03-08 1989-03-08 Fruit and vegetable position recognition device Expired - Fee Related JP2784585B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP1056756A JP2784585B2 (en) 1989-03-08 1989-03-08 Fruit and vegetable position recognition device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP1056756A JP2784585B2 (en) 1989-03-08 1989-03-08 Fruit and vegetable position recognition device

Publications (2)

Publication Number Publication Date
JPH02234013A true JPH02234013A (en) 1990-09-17
JP2784585B2 JP2784585B2 (en) 1998-08-06

Family

ID=13036352

Family Applications (1)

Application Number Title Priority Date Filing Date
JP1056756A Expired - Fee Related JP2784585B2 (en) 1989-03-08 1989-03-08 Fruit and vegetable position recognition device

Country Status (1)

Country Link
JP (1) JP2784585B2 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008022737A (en) * 2006-07-19 2008-02-07 Kochi Univ Of Technology Harvesting robot
KR101405858B1 (en) * 2013-12-04 2014-06-16 안성훈 Robot apparatus for harvesting fruit
JP2019083750A (en) * 2017-11-07 2019-06-06 シブヤ精機株式会社 Fruit/vegetable harvesting device
CN111011003A (en) * 2019-11-12 2020-04-17 青岛大学 Strawberry picking system

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109729830A (en) * 2019-01-31 2019-05-10 安徽省鑫矿液压机械有限责任公司 A kind of backpack fruit picker

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS62194413A (en) * 1986-02-21 1987-08-26 Yamaha Motor Co Ltd Three-dimensional coordinate measuring instrument
JPS62278917A (en) * 1986-05-26 1987-12-03 株式会社クボタ Guide apparatus of working machine for fruits


Also Published As

Publication number Publication date
JP2784585B2 (en) 1998-08-06

Similar Documents

Publication Publication Date Title
Plebe et al. Localization of spherical fruits for robotic harvesting
US10913151B1 (en) Object hand-over between robot and actor
US4868752A (en) Boundary detecting method and apparatus for automatic working vehicle
CN204585197U (en) Can automatic obstacle-avoiding Work robot
CN109604777A (en) Welding seam traking system and method based on laser structure light
JP2022544007A (en) Visual Teaching and Repetition of Mobile Manipulation System
CN113814986B (en) Method and system for controlling SCARA robot based on machine vision
JP2802638B2 (en) Fruit and vegetable harvester
CN110842890B (en) Robot and control method thereof
JPH0798208A (en) Method and system for recognizing three-dimensional position and attitude on the basis of sense of sight
Almendral et al. Autonomous fruit harvester with machine vision
JPH02234013A (en) Fruit/vegetable position recognizing device
Harrell et al. Real-time vision-servoing of a robotic tree fruit harvester
Marshall et al. Uncalibrated visual servoing for intuitive human guidance of robots
Lin Combining stereo vision and fuzzy image based visual servoing for autonomous object grasping using a 6-DOF manipulator
Dario et al. The Agrobot project for greenhouse automation
JP3543329B2 (en) Robot teaching device
JPS6270916A (en) Boundary detecting method for self-traveling truck
Taylor et al. Flexible self-calibrated visual servoing for a humanoid robot
Boby Hand-eye calibration using a single image and robotic picking up using images lacking in contrast
JPH0679671A (en) Method and device for guiding gripping means of robot
JP7505674B2 (en) Picking System
Le et al. The system design of an autonomous mobile welding robot
Yip et al. Development of an omnidirectional mobile robot using a RGB-D sensor for indoor navigation
Vollmann et al. Manipulator control by calibration-free stereo vision

Legal Events

Date Code Title Description
LAPS Cancellation because of no payment of annual fees