TWI842416B - Crop harvester - Google Patents
Abstract
Description
The present invention relates to a crop harvester capable of automatically harvesting crops.
Asparagus cultivation is labor-intensive, and harvesting relies on a large workforce. In recent years, however, Taiwan's agricultural labor force has been shrinking. Furthermore, because harvestable asparagus spears grow near the ground, farmers must repeatedly bend and squat during harvesting, which over time causes serious physical strain and occupational injury.
Although asparagus harvesting aids currently exist, they focus mainly on improving the harvesting process itself: most are mechanical improvements that reduce labor time or refine the harvesting method, yet they still require manual identification and cannot fully automatically identify and harvest asparagus spears. It is therefore desirable to introduce a smart-agriculture framework into the asparagus industry to improve current harvesting practice, with the goals of saving labor and stabilizing quality.
A primary objective of the present invention is to provide a crop harvester comprising: a walking device for traveling over farmland planted with crops; a robot arm device disposed on the walking device; at least one camera device disposed on the walking device for capturing, in the farmland, a two-dimensional image and a three-dimensional image corresponding to the two-dimensional image; and a computing device disposed on the walking device and comprising: an image alignment module for aligning the two-dimensional image with the three-dimensional image; an object detection module for detecting the two-dimensional image so as to mark, in the two-dimensional image, at least one marked area corresponding to a crop and obtain the two-dimensional coordinates of the marked area; a coordinate analysis module for mapping the marked area of the two-dimensional image onto the three-dimensional image aligned with it, obtaining the depth coordinate of the marked area from the three-dimensional image, and combining the depth coordinate with the two-dimensional coordinates to form the three-dimensional coordinates for gripping the crop; and a control module that controls the robot arm device according to the three-dimensional coordinates to harvest the crop.
In the aforementioned crop harvester, the coordinate analysis module takes the smallest value among the depth information of the lower half of the marked area as the depth coordinate.
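This depth-selection rule can be sketched as follows. This is an illustrative sketch, assuming the depth image is available as a NumPy array in the same units as the depth coordinate; the function and variable names are ours, not the patent's.

```python
import numpy as np

def grasp_depth(depth_map, box):
    """Return the depth coordinate for one marked area.

    depth_map -- (H, W) array of depth readings (0 means no reading)
    box       -- (x1, y1, x2, y2) bounding box in pixel coordinates
    Following the rule above, the depth coordinate is the smallest
    valid depth inside the lower half of the box, i.e. the point of
    the spear closest to the camera.
    """
    x1, y1, x2, y2 = box
    y_mid = (y1 + y2) // 2                 # top row of the lower half
    lower = depth_map[y_mid:y2, x1:x2]     # lower-half region of the box
    valid = lower[lower > 0]               # discard missing readings
    return float(valid.min()) if valid.size else None
```

Taking the minimum over only the lower half ignores foliage above the spear, which tends to sit farther from the camera than the stem base.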
In the aforementioned crop harvester, the number of camera devices is two, configured to shoot from different viewing angles so as to obtain two two-dimensional images from different viewpoints for the image alignment module and the object detection module to align and detect.

In the aforementioned crop harvester, the computing device is an edge computing device that causes the camera device to shoot continuously while the walking device travels, forming a video stream for real-time detection by the object detection module.

In the aforementioned crop harvester, the object detection module sends a stop signal to the walking device when it is able to mark the marked area.

In the aforementioned crop harvester, after the robot arm device has harvested the crop, the control module sends a walking signal to the walking device so that the camera device resumes continuous shooting to form a video stream, allowing the object detection module to perform real-time detection again.

In the aforementioned crop harvester, the camera device is a depth camera.

In the aforementioned crop harvester, the object detection module is an artificial intelligence model trained with YOLOv5x6.

In the aforementioned crop harvester, the artificial intelligence model is trained with a plurality of images in which the crop is occluded or overlaps with other non-harvest targets.

In the aforementioned crop harvester, the artificial intelligence model is further trained with the plurality of images in which the crop is annotated according to its height.

In the aforementioned crop harvester, the crop corresponding to the marked area is an asparagus spear.

In the aforementioned crop harvester, the robot arm device has multi-axis degrees of freedom and is provided at its end with a clamping unit, which includes a soft pad and a blade.
In summary, the crop harvester of the present invention can accurately and effectively identify the crops to be harvested and then harvest them automatically and precisely, thereby improving the convenience and efficiency of asparagus harvesting and effectively achieving labor-saving harvesting.
1: Crop harvester
10: Walking device
11: Track mechanism
20: Robot arm device
21: Vertical lifting mechanism
22: First robot arm
23: Second robot arm
24: Third robot arm
25: Clamping unit
26: Soft pad
27: Blade
30: Camera device
40: Computing device
41: Image alignment module
42: Object detection module
43: Coordinate analysis module
44: Control module
50: Harvest box
B: Marked area
C: Crop
I2: Two-dimensional image
I3, I3': Three-dimensional images
P: Depth coordinate
S11-S15: Steps
Figure 1 is a schematic diagram of the crop harvester of the present invention.
Figure 2 is a schematic diagram of the clamping unit of the present invention.
Figure 3 is an architecture diagram of the computing device of the present invention.
Figure 4(a) is a schematic diagram of a two-dimensional image and a three-dimensional image of the present invention before alignment.
Figure 4(b) is a schematic diagram after the image alignment module of the present invention has aligned the two-dimensional image with the three-dimensional image.
Figure 5 is a schematic diagram in which the object detection module of the present invention marks the marked area after detecting the two-dimensional image.
Figure 6 is a flow chart of the operation of the crop harvester of the present invention.
The implementation of the present invention is illustrated below by specific embodiments. Those skilled in the art can readily understand other advantages and effects of the present invention from the content disclosed in this specification, and the invention can also be practiced or applied through other different specific embodiments.
Please refer to Figure 1. The crop harvester 1 of the present invention includes a walking device 10, a robot arm device 20, at least one camera device 30, a computing device 40, and a harvest box 50. The robot arm device 20, the camera device 30, the computing device 40, and the harvest box 50 are disposed on the walking device 10. The walking device 10 can travel over farmland planted with crops C, so that the camera device 30 can shoot in the farmland and the computing device 40 can analyze the images captured by the camera device 30 to control the robot arm device 20 to harvest the crops C into the harvest box 50, thereby realizing fully automatic crop harvesting.
As shown in Figure 1, the walking device 10 includes a track mechanism 11, and the rotation speed of the track mechanism 11 is preferably 250 rpm, although the invention is not limited thereto. With the track mechanism 11, the crop harvester 1 can travel stably over fragmented and uneven field ridges without excessive jolting that would impair its identification and harvesting of the crops C. In this embodiment, the walking device 10 is provided with an ultrasonic sensor (not shown) that senses the surrounding environment so that the walking device 10 automatically avoids obstacles and travels smoothly and steadily through the farmland. The invention is not limited thereto, however, and the walking device 10 may sense the surrounding environment in other ways.
As shown in Figure 1, the robot arm device 20 includes a vertical lifting mechanism 21, a first robot arm 22, a second robot arm 23, a third robot arm 24, and a clamping unit 25. The vertical lifting mechanism 21 is rotatably disposed on the walking device 10 and can rotate horizontally relative to it. One end of the first robot arm 22 is connected to the vertical lifting mechanism 21 and can be raised and lowered vertically relative to it. One end of the second robot arm 23 is connected to the other end of the first robot arm 22 and can rotate horizontally relative to it. One end of the third robot arm 24 is connected to the other end of the second robot arm 23 and can rotate horizontally relative to it. The clamping unit 25 is connected to the other end of the third robot arm 24 and can rotate vertically relative to it. With this design, the robot arm device 20 has multi-axis degrees of freedom and can plan different movement paths for crops C at different positions. In addition, the robot arm device 20, in coordination with the movement of the walking device 10, can avoid obstacles such as branches and leaves and precisely move the clamping unit 25 to the position of a crop C for harvesting. The robot arm device 20 of the present invention is not limited to the above structure; any robot arm with multi-axis degrees of freedom will suffice.
As shown in Figure 2, the clamping unit 25 is a V-shaped gripper that can open and close to grip a crop C. Each side of the V-shaped gripper is provided with a soft pad 26 and a blade 27. The blade 27 is located at the lower end of the clamping unit 25 to cut the base of the crop C, while the soft pad 26 covers the portion above the blade 27; in this embodiment, the soft pad 26 may be high-density sponge. When the two sides of the V-shaped gripper close and the blade 27 cuts the crop C, the clamping unit 25 holds the crop C with the soft pads 26 and delivers it to the harvest box 50 without damaging it.
In this embodiment, the camera device 30 is a depth camera (for example, an Intel RealSense D435i). The depth camera has two lenses and can obtain a two-dimensional image I2 and a three-dimensional image I3 corresponding to the two-dimensional image I2, as shown in Figure 4(a). In this embodiment, the depth camera obtains depth information by infrared ranging; for example, the D435i has two infrared receivers and synthesizes a depth image from two infrared parallax images, although the invention is not limited thereto. In addition, the number of camera devices 30 is two; the two camera devices 30 shoot from different viewing angles (in Figure 1, from the front and rear sides of the robot arm device 20, respectively) to obtain two two-dimensional images I2 from different viewpoints for the computing device 40 to analyze. Furthermore, the two-dimensional image I2 may be an RGB image with a resolution of 1920x1080 pixels, while the three-dimensional image I3 is a depth image with a resolution of 1280x720 pixels.
Please refer to Figure 3. The computing device 40 includes an image alignment module 41, an object detection module 42, a coordinate analysis module 43, and a control module 44. In this embodiment, the computing device 40 is an edge computing device (an Nvidia Jetson Xavier NX 16G). The camera device 30 shoots continuously while the walking device 10 travels, forming a video stream, and the computing device 40 receives the video stream for real-time analysis.
As shown in Figure 4(a), before calibration and alignment, the two-dimensional image I2 and the three-dimensional image I3 captured by the camera device 30 may not correspond in size or position, because the RGB image and the depth image captured by the depth camera have different resolutions. As shown in Figure 4(b), the image alignment module 41 aligns the three-dimensional image I3 with the two-dimensional image I2, adjusting it into a three-dimensional image I3' that corresponds to the two-dimensional image I2, for example by adjusting their scales to match and converting between the different coordinate systems. Here, the X and Y coordinates of the two-dimensional image I2 are in pixels, while the X and Y coordinates of the three-dimensional image I3' are in pixels and its Z coordinate is in centimeters.
In this embodiment, the coordinate conversion can be performed with the standard pinhole-camera chain:

Z_C [u, v, 1]^T = K [R | t] [X_W, Y_W, Z_W, 1]^T

where O_W(X_W, Y_W, Z_W) is the world coordinate system, whose origin is usually set at the robot base (that is, the robot arm device) and whose unit is millimeters (mm); O_C(X_C, Y_C, Z_C) is the camera coordinate system, whose origin is the camera's optical center and whose unit is millimeters; o(x, y) is the image coordinate system, whose unit is millimeters; and (u, v) is the pixel coordinate system, whose unit is pixels. In the above formula, the extrinsic matrix [R | t] converts the world coordinate system into the camera coordinate system; the perspective projection x = f X_C / Z_C, y = f Y_C / Z_C further converts the camera coordinate system into the image coordinate system; and u = x / dx + u_0, v = y / dy + v_0 converts the image coordinate system into the pixel coordinate system. K represents the camera's internal parameters, while [R | t] represents the camera's external parameters.
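A minimal numeric sketch of inverting this conversion chain (world to camera to image to pixel) to recover a grasp point from a pixel and its measured depth. It is an illustrative sketch assuming all quantities share consistent units; the function and parameter names are ours.

```python
import numpy as np

def pixel_to_world(u, v, Zc, K, R, t):
    """Invert the projection chain: recover world coordinates from a
    pixel (u, v) whose camera-frame depth Zc is known.

    K    -- (3, 3) intrinsic matrix (internal parameters)
    R, t -- (3, 3) rotation and (3,) translation (external parameters)
            taking world coordinates to camera coordinates
    """
    # Pixel -> camera: Zc * [u, v, 1]^T = K @ [Xc, Yc, Zc]^T
    p_cam = Zc * (np.linalg.inv(K) @ np.array([u, v, 1.0]))
    # Camera -> world: Xw = R^T @ (Xc - t), inverting Xc = R @ Xw + t
    return R.T @ (p_cam - t)
```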
As shown in Figure 5, the object detection module 42 detects the two-dimensional image I2 so as to mark at least one marked area B corresponding to a crop C in the two-dimensional image I2 and obtain the two-dimensional coordinates of the marked area B. When the object detection module 42 is able to mark a marked area B, it sends a stop signal to the walking device 10; after the walking device 10 stops, the robot arm device 20 can harvest the crop C. In this embodiment, the object detection module 42 is an artificial intelligence model trained with YOLOv5x6, but the invention is not limited thereto, and other kinds of artificial intelligence models may also be used. The training of the artificial intelligence model is detailed later.
The coordinate analysis module 43 maps the marked area B of the two-dimensional image I2 onto the three-dimensional image I3' aligned with the two-dimensional image I2, obtains the depth coordinate P of the marked area B from the three-dimensional image I3', and combines the depth coordinate P with the two-dimensional coordinates as the three-dimensional coordinates for gripping the crop C. In this embodiment, the coordinate analysis module 43 takes the smallest value among the depth information of the lower half of the marked area B (that is, the object closest to the camera device 30) as the depth coordinate P. As shown in Figure 5, even if an asparagus spear is tilted, the coordinate analysis module 43 can still correctly determine its position. In other embodiments, the coordinate analysis module 43 may instead use the depth information of the center point at the bottom of the marked area B as the depth coordinate P, but the invention is not limited thereto. Moreover, the invention is not limited to the above determination methods; a detection module such as YolactEdge may be used to outline the asparagus edge and directly detect the bottom point in the image, avoiding an incorrect gripping position.
The control module 44 controls the robot arm device 20 according to the three-dimensional coordinates, moving the clamping unit 25 of the robot arm device 20 to the crop C to harvest it and deliver the cut crop C to the harvest box 50 for collection. After the robot arm device 20 finishes harvesting the crop C, the control module 44 sends a walking signal to the walking device 10 and causes the camera device 30 to resume continuous shooting to form a video stream, so that the object detection module 42 again performs real-time detection; when the object detection module 42 can again mark a marked area B, it sends a stop signal to the walking device 10, which stops for harvesting.
In this embodiment, the crop C corresponding to the marked area B is an asparagus spear, but the invention is not limited thereto.
The training of the artificial intelligence model is further described below. In this embodiment, YOLOv5x6 is used to train the model. For the YOLOv5x6 training, asparagus spears are first photographed in the field to obtain a plurality of images containing asparagus spears. Labeling software such as LabelImg is then used to manually annotate, in those images, the marked areas corresponding to the asparagus spears. The images and their corresponding marked areas serve as the training set of the artificial intelligence model, which is thereby trained to recognize asparagus spears.
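The annotation step above produces, for each image, bounding boxes that YOLOv5 consumes in its plain-text label format. A small conversion sketch follows; the format ('class x_center y_center width height', normalized to [0, 1]) is the standard YOLO one, but the function name is ours.

```python
def to_yolo_label(box, img_w, img_h, cls=0):
    """Convert a LabelImg-style pixel box (x1, y1, x2, y2) into one
    line of the YOLO annotation format used to train YOLOv5:
    'class x_center y_center width height', all normalized to [0, 1].
    """
    x1, y1, x2, y2 = box
    xc = (x1 + x2) / 2 / img_w   # normalized box center, x
    yc = (y1 + y2) / 2 / img_h   # normalized box center, y
    w = (x2 - x1) / img_w        # normalized box width
    h = (y2 - y1) / img_h        # normalized box height
    return f"{cls} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"
```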
The plurality of images for training YOLOv5x6 may include images in which asparagus spears are occluded by foliage or overlap with mother stalks. Such images better reflect real application conditions and therefore yield better recognition accuracy in practice. In addition, when annotating the images used to train YOLOv5x6, the asparagus spears may also be labeled according to height, and the spears may further be classified into different harvest grades according to the height of the crop C, so that the object detection module 42 marks only spears meeting the harvest grade.
In addition, in the training of YOLOv5x6, the hyperparameters can be adjusted as shown in Table 1 below. The parameters and their values are examples only; the present invention is not limited thereto.

Table 1
The performance verification results of the artificial intelligence model trained with YOLOv5x6 as described above are as follows. A total of 2,573 images containing asparagus spears were used for training and verification, including 300 images each of special cases such as spears occluded by foliage or overlapping with mother stalks. The recognition accuracy for asparagus spears was about 96.6%, the positioning accuracy of the robot arm device 20 based on the three-dimensional coordinates of the spears was about 93.3%, and the computing device 40 processed a single frame of the video stream in as little as 0.3 seconds, enabling effective real-time detection.
Please refer to Figure 6. The operation flow of the crop harvester 1 of the present invention in actual use is as follows. First, in step S11, the walking device 10 travels through farmland planted with crops C while the camera device 30 shoots continuously to form a video stream, which the computing device 40 receives and analyzes in real time. The computing device 40 aligns the two-dimensional image I2 and the three-dimensional image I3 captured by the camera device 30 through the image alignment module 41, and detects the two-dimensional image I2 through the object detection module 42 to determine whether a marked area B corresponding to a crop C can be marked in the two-dimensional image I2. In step S12, when the object detection module 42 of the computing device 40 can mark a marked area B corresponding to a crop C, it sends a stop signal to the walking device 10. In step S13, the walking device 10 receives the stop signal, stops traveling, and returns a signal to the computing device 40.
In step S14, after the computing device 40 receives the signal that the walking device 10 has stopped, the coordinate analysis module 43 maps the marked area B of the two-dimensional image I2 onto the three-dimensional image I3' aligned with it, obtains the depth coordinate P of the marked area B from the three-dimensional image I3', and combines the depth coordinate P with the two-dimensional coordinates as the three-dimensional coordinates for gripping the crop C. The control module 44 then, based on the three-dimensional coordinates of the crop C analyzed by the coordinate analysis module 43, controls the robot arm device 20 to move the clamping unit 25 to the position of the crop C for harvesting. In step S15, after the robot arm device 20 finishes harvesting the crop C, it sends a walking signal to the walking device 10 and the computing device 40. The flow then returns to step S11 and the above steps repeat, realizing fully automatic crop harvesting.
In this embodiment, because the camera device 30 shoots continuously and the computing device 40 analyzes continuously while the walking device 10 travels, no crop C along the route is missed, unlike schemes that stop to shoot only after traveling a fixed distance. Moreover, the computing device 40 sends the stop signal as soon as it finds a crop C, so the position where the walking device 10 stops is the position of the crop C, avoiding the problem of the walking device 10 stopping too far from the crop C.
In summary, the crop harvester of the present invention can accurately and effectively identify the crops to be harvested and then harvest them automatically and precisely, thereby improving the convenience and efficiency of asparagus harvesting and effectively achieving labor-saving harvesting.
The above embodiments are merely illustrative of the technical principles, features, and effects of the present invention and are not intended to limit its practicable scope. Anyone skilled in the art may modify and vary the above embodiments without departing from the spirit and scope of the invention. Any equivalent modifications and variations accomplished using the teachings of the present invention shall nevertheless be covered by the claims below, and the scope of protection of the present invention shall be as set forth in the claims below.
Claims (12)
Publications (1)
Publication Number | Publication Date |
---|---|
TWI842416B true TWI842416B (en) | 2024-05-11 |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115648164A (en) | 2022-10-25 | 2023-01-31 | 中国农业科学院都市农业研究所 | Rotary recognition harvesting robot device and method |