TWI842416B - Crop harvester - Google Patents


Publication number
TWI842416B
Authority
TW
Taiwan
Application number
TW112107347A
Other languages
Chinese (zh)
Inventor
周呈霙
江昭皚
王人正
林弘人
莊詠竣
賴怡穎
黃偉豪
張善程
陳博炤
闕振祐
李承駿
葉紹翔
Original Assignee
國立臺灣大學 (National Taiwan University)
Application filed by 國立臺灣大學 (National Taiwan University)
Application granted
Publication of TWI842416B

Abstract

The present invention provides a crop harvester, comprising: a traveling device; a robotic arm device; at least one image-capturing device for capturing images in farmland to obtain a two-dimensional image and a corresponding three-dimensional image; and a computing device comprising: an image alignment module for aligning the three-dimensional image with the two-dimensional image; an object detection module for detecting, in the two-dimensional image, at least one marked area corresponding to a crop and obtaining the two-dimensional coordinates of the marked area; a coordinate analysis module for mapping the marked area of the two-dimensional image onto the three-dimensional image to obtain the depth coordinate of the marked area, and combining the depth coordinate with the two-dimensional coordinates into a three-dimensional coordinate for gripping the crop; and a control module for controlling the robotic arm device to harvest the crop according to the three-dimensional coordinate.

Description

Crop harvester

The present invention relates to a crop harvester capable of automatically harvesting crops.

Asparagus cultivation is labor-intensive, and harvesting relies on a large workforce. In recent years, however, Taiwan's agricultural labor force has been shrinking. Moreover, because harvestable asparagus spears grow close to the ground, farmers must frequently bend and squat during harvesting, which over time causes serious physical strain and occupational injury.

Although asparagus-harvesting aids currently exist, they focus on improving the harvesting process itself, mostly through mechanical refinements that reduce labor time or improve the harvesting method; identification still has to be done manually, so spears cannot be recognized and harvested fully automatically. It is therefore desirable to introduce a smart-agriculture framework into the asparagus industry to improve current harvesting practice, saving labor and stabilizing quality.

A main objective of the present invention is to provide a crop harvester comprising: a walking device for traveling over farmland planted with crops; a robotic arm device mounted on the walking device; at least one camera device mounted on the walking device for capturing images in the farmland to obtain a two-dimensional image and a corresponding three-dimensional image; and a computing device mounted on the walking device, comprising: an image alignment module for aligning the two-dimensional image with the three-dimensional image; an object detection module for detecting the two-dimensional image so as to mark in it at least one marked area corresponding to a crop and obtain the two-dimensional coordinates of the marked area; a coordinate analysis module for mapping the marked area of the two-dimensional image onto the aligned three-dimensional image to obtain the depth coordinate of the marked area, and combining the depth coordinate with the two-dimensional coordinates into a three-dimensional coordinate for gripping the crop; and a control module for controlling the robotic arm device, according to the three-dimensional coordinate, to harvest the crop.

In the aforementioned crop harvester, the coordinate analysis module takes the smallest depth value within the lower half of the marked area as the depth coordinate.

In the aforementioned crop harvester, there are two camera devices, arranged to shoot from different viewing angles so as to obtain two two-dimensional images from different viewpoints for the image alignment module and the object detection module to align and detect.

In the aforementioned crop harvester, the computing device is an edge computing device that has the camera device capture continuously while the walking device moves, forming a video stream on which the object detection module performs real-time detection.

In the aforementioned crop harvester, when the object detection module is able to mark the marked area, it sends a stop signal to the walking device.

In the aforementioned crop harvester, after the robotic arm device has harvested the crop, the control module sends a walking signal to the walking device so that the camera device resumes continuous capture, forming a video stream for the object detection module to detect in real time again.

In the aforementioned crop harvester, the camera device is a depth camera.

In the aforementioned crop harvester, the object detection module is an artificial intelligence model trained with YOLOv5x6.

In the aforementioned crop harvester, the artificial intelligence model is trained with a plurality of images in which the crop is occluded or overlaps with other non-harvest targets.

In the aforementioned crop harvester, the artificial intelligence model is further trained with the plurality of images in which the crop is labeled according to its height.

In the aforementioned crop harvester, the crop corresponding to the marked area is an asparagus spear.

In the aforementioned crop harvester, the robotic arm device has multiple axes of freedom, and its end is provided with a clamping unit comprising a soft pad and a blade.

In summary, the crop harvester of the present invention can accurately and effectively identify the crops to be harvested and harvest them automatically and precisely, thereby improving the convenience and efficiency of asparagus harvesting and effectively achieving labor-saving harvesting.

1: Crop harvester
10: Walking device
11: Track mechanism
20: Robotic arm device
21: Vertical lifting mechanism
22: First robotic arm
23: Second robotic arm
24: Third robotic arm
25: Clamping unit
26: Soft pad
27: Blade
30: Camera device
40: Computing device
41: Image alignment module
42: Object detection module
43: Coordinate analysis module
44: Control module
50: Harvesting box
B: Marked area
C: Crop
I2: Two-dimensional image
I3, I3′: Three-dimensional images
P: Depth coordinate
S11–S15: Steps

Figure 1 is a schematic diagram of the crop harvester of the present invention.

Figure 2 is a schematic diagram of the clamping unit of the present invention.

Figure 3 is an architecture diagram of the computing device of the present invention.

Figure 4(a) is a schematic diagram of the two-dimensional and three-dimensional images of the present invention before alignment.

Figure 4(b) is a schematic diagram after the image alignment module of the present invention has aligned the two-dimensional and three-dimensional images.

Figure 5 is a schematic diagram of the marked area marked by the object detection module of the present invention after detecting the two-dimensional image.

Figure 6 is a flow chart of the operation of the crop harvester of the present invention.

The following specific embodiments illustrate the implementation of the present invention. Those skilled in the art can readily understand the further advantages and effects of the present invention from the content disclosed in this specification, and the invention may also be practiced or applied through other different specific embodiments.

Please refer to Figure 1. The crop harvester 1 of the present invention comprises a walking device 10, a robotic arm device 20, at least one camera device 30, a computing device 40, and a harvesting box 50. The robotic arm device 20, the camera device 30, the computing device 40, and the harvesting box 50 are mounted on the walking device 10. The walking device 10 can travel through farmland planted with crops C, so that the camera device 30 can capture images in the farmland, and the computing device 40 can analyze those images to control the robotic arm device 20 to harvest crops C into the harvesting box 50, thereby achieving fully automatic crop harvesting.

As shown in Figure 1, the walking device 10 includes a track mechanism 11, whose rotation speed is preferably 250 rpm, although the invention is not limited thereto. With the track mechanism 11, the crop harvester 1 can travel stably across fragmented and uneven field beds without excessive bumps that would interfere with its identification and harvesting of crops C. In this embodiment, the walking device 10 is provided with an ultrasonic sensor (not shown) that senses the surrounding environment so that the walking device 10 automatically avoids obstacles and travels smoothly and steadily through the farmland. The invention is not limited thereto; the walking device 10 may also sense its surroundings by other means.

As shown in Figure 1, the robotic arm device 20 includes a vertical lifting mechanism 21, a first robotic arm 22, a second robotic arm 23, a third robotic arm 24, and a clamping unit 25. The vertical lifting mechanism 21 is rotatably mounted on the walking device 10 and can rotate horizontally relative to it. One end of the first robotic arm 22 is connected to the vertical lifting mechanism 21 and can be raised and lowered vertically relative to it. One end of the second robotic arm 23 is connected to the other end of the first robotic arm 22 and can rotate horizontally relative to it. One end of the third robotic arm 24 is connected to the other end of the second robotic arm 23 and can rotate horizontally relative to it. The clamping unit 25 is connected to the other end of the third robotic arm 24 and can rotate vertically relative to it. With this design, the robotic arm device 20 has multiple axes of freedom and can plan different movement paths for crops C at different positions. In addition, the robotic arm device 20 can work with the movement of the walking device 10 to avoid obstacles such as branches and leaves and move the clamping unit 25 precisely to the position of a crop C for harvesting. The robotic arm device 20 of the present invention is not limited to the above structure; any robotic arm with multiple axes of freedom may be used.

As shown in Figure 2, the clamping unit 25 is a V-shaped clamp that can open and close to grip a crop C. A soft pad 26 and a blade 27 are provided on each side of the V-shaped clamp: the blade 27 is at the lower end of the clamping unit 25 to cut the base of the crop C, and the soft pad 26 is above the blade 27; in this embodiment the soft pad 26 may be high-density sponge. When the two sides of the clamp close and the blade 27 cuts the crop C, the clamping unit 25 grips the crop C with the soft pads 26 and delivers it to the harvesting box 50 without damaging it.

In this embodiment, the camera device 30 is a depth camera (for example, an Intel RealSense D435i) with two lenses, able to obtain a two-dimensional image I2 and a corresponding three-dimensional image I3, as shown in Figure 4(a). In this embodiment, the depth camera obtains depth information by infrared ranging; for example, the D435i has two infrared receivers and synthesizes a depth image from two infrared parallax images, although the invention is not limited thereto. In addition, there are two camera devices 30, which shoot from different viewing angles (in Figure 1, from the front and rear sides of the robotic arm device 20, respectively) to obtain two two-dimensional images I2 from different viewpoints for analysis by the computing device 40. The two-dimensional image I2 may be an RGB image with a resolution of 1920×1080 pixels, and the three-dimensional image I3 is a depth image with a resolution of 1280×720 pixels.

Please refer to Figure 3. The computing device 40 includes an image alignment module 41, an object detection module 42, a coordinate analysis module 43, and a control module 44. In this embodiment, the computing device 40 is an edge computing device (an Nvidia Jetson Xavier NX 16G). The camera device 30 captures continuously while the walking device 10 moves, forming a video stream, and the computing device 40 receives the stream for real-time analysis.

As shown in Figure 4(a), before calibration and alignment, the two-dimensional image I2 and the three-dimensional image I3 captured by the camera device 30 may not correspond in size or position, because the RGB image and the depth image captured by the depth camera have different resolutions. As shown in Figure 4(b), the image alignment module 41 aligns the three-dimensional image I3 with the two-dimensional image I2, adjusting it into a three-dimensional image I3′ that corresponds to I2, for example by matching the scale and converting between the different coordinate systems. The X and Y coordinates of the two-dimensional image I2 are in pixels; the X and Y coordinates of the three-dimensional image I3′ are in pixels, and its Z coordinate is in centimeters.
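The scale-matching step described above can be sketched as follows. This is a simplified illustration that only handles the resolution difference between the depth and RGB images; a real RealSense pipeline would typically use the SDK's align processing block, which also compensates for the physical offset between the two sensors.

```python
import numpy as np

def align_depth_to_color(depth, color_shape):
    """Upscale a depth image to the color image's resolution by
    nearest-neighbor sampling, so that pixel (u, v) in both images
    refers to approximately the same scene point.

    Sketch only: it matches scale but not the sensor offset."""
    dh, dw = depth.shape
    ch, cw = color_shape
    # For every target pixel, pick the nearest source pixel.
    rows = np.arange(ch) * dh // ch
    cols = np.arange(cw) * dw // cw
    return depth[rows[:, None], cols[None, :]]

# Toy example with the resolutions from the text:
# a 1280x720 depth image upscaled to the 1920x1080 color resolution.
depth = np.random.rand(720, 1280).astype(np.float32)
aligned = align_depth_to_color(depth, (1080, 1920))
print(aligned.shape)  # (1080, 1920)
```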

In this embodiment, the coordinate conversion can be performed with the following formulas.

$$
Z_C \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= \begin{bmatrix} \tfrac{1}{dx} & 0 & u_0 \\ 0 & \tfrac{1}{dy} & v_0 \\ 0 & 0 & 1 \end{bmatrix}
  \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}
  \begin{bmatrix} R & t \\ 0^{\mathsf{T}} & 1 \end{bmatrix}
  \begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix}
$$

where $O_W(X_W, Y_W, Z_W)$ is the world coordinate system, whose origin is usually set at the robot base (i.e., the robotic arm device) and whose unit is the millimeter (mm); $O_C(X_C, Y_C, Z_C)$ is the camera coordinate system, whose origin is the camera's optical center and whose unit is the millimeter; $o(x, y)$ is the image coordinate system, in millimeters; and $(u, v)$ is the pixel coordinate system, in pixels. In the formula above, $\begin{bmatrix} R & t \\ 0^{\mathsf{T}} & 1 \end{bmatrix}$ converts the world coordinate system into the camera coordinate system; $\begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}$ further converts the camera coordinate system into the image coordinate system; and $\begin{bmatrix} \tfrac{1}{dx} & 0 & u_0 \\ 0 & \tfrac{1}{dy} & v_0 \\ 0 & 0 & 1 \end{bmatrix}$ further converts the image coordinate system into the pixel coordinate system. The product of the latter two matrices, $\begin{bmatrix} \tfrac{f}{dx} & 0 & u_0 \\ 0 & \tfrac{f}{dy} & v_0 \\ 0 & 0 & 1 \end{bmatrix}$, represents the camera intrinsic parameters, and $\begin{bmatrix} R & t \\ 0^{\mathsf{T}} & 1 \end{bmatrix}$ represents the camera extrinsic parameters.

As shown in Figure 5, the object detection module 42 detects the two-dimensional image I2, marks in it at least one marked area B corresponding to a crop C, and obtains the two-dimensional coordinates of the marked area B. When the object detection module 42 is able to mark a marked area B, it sends a stop signal to the walking device 10; after the walking device 10 stops, the robotic arm device 20 can harvest the crop C. In this embodiment, the object detection module 42 is an artificial intelligence model trained with YOLOv5x6, but the invention is not limited thereto; other kinds of artificial intelligence models may also be used. The training of the model is described in detail later.
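The stop-signal behavior can be sketched as a small loop over the video stream. The detector below is a stub standing in for the trained YOLOv5x6 model, and the walking-device class is a minimal mock; both are illustrative assumptions, not the actual implementation.

```python
def detect(frame):
    """Stub detector: returns a list of (x, y, w, h) bounding boxes.
    In the real system this would be the YOLOv5x6 model's inference."""
    return frame.get("boxes", [])

def process_stream(frames, walking_device):
    """Run detection on each frame; on the first frame where a marked
    area B can be marked, send the stop signal and return the boxes."""
    for frame in frames:
        boxes = detect(frame)
        if boxes:                  # a marked area B could be marked
            walking_device.stop()  # send the stop signal
            return boxes           # hand the boxes over for harvesting
    return []

class WalkingDevice:
    def __init__(self):
        self.moving = True
    def stop(self):
        self.moving = False

dev = WalkingDevice()
stream = [{"boxes": []}, {"boxes": [(100, 200, 30, 120)]}]
found = process_stream(stream, dev)
print(dev.moving, found)  # False [(100, 200, 30, 120)]
```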

The coordinate analysis module 43 maps the marked area B of the two-dimensional image I2 onto the three-dimensional image I3′ aligned with I2, obtains the depth coordinate P of the marked area B from I3′, and combines the depth coordinate P with the two-dimensional coordinates into the three-dimensional coordinate for gripping the crop C. In this embodiment, the coordinate analysis module 43 takes the smallest depth value in the lower half of the marked area B (i.e., the object closest to the camera device 30) as the depth coordinate P; as shown in Figure 5, even if an asparagus spear is tilted, the coordinate analysis module 43 can still correctly determine its position. In other embodiments, the coordinate analysis module 43 may instead use the depth value at the center point of the bottom of the marked area B as the depth coordinate P, but the invention is not limited thereto. Moreover, the invention is not limited to these determination methods; a detection module such as YolactEdge may also be used to outline the asparagus and directly detect the bottom point in the image, avoiding an incorrect gripping position.
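A minimal sketch of this depth-coordinate rule, assuming an aligned depth image and a bounding box given in pixel coordinates with the origin at the top left; the image contents are synthetic test data.

```python
import numpy as np

def depth_coordinate(depth_img, box):
    """Take the smallest depth value in the lower half of the marked
    area B (the point closest to the camera) as the depth coordinate P.
    box = (x, y, w, h) in pixels, origin at the top-left corner."""
    x, y, w, h = box
    lower_half = depth_img[y + h // 2 : y + h, x : x + w]
    return float(lower_half.min())

# Synthetic aligned depth image (values in cm): background at 90 cm,
# with a spear's base closer to the camera at 55 cm.
depth_img = np.full((720, 1280), 90.0)
depth_img[400:420, 210:214] = 55.0
print(depth_coordinate(depth_img, (200, 100, 30, 350)))  # 55.0
```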

The control module 44 controls the robotic arm device 20 according to the three-dimensional coordinate, moving the clamping unit 25 of the robotic arm device 20 to the crop C, harvesting it, and delivering the cut crop C to the harvesting box 50 for collection. After the robotic arm device 20 has harvested the crop C, the control module 44 sends a walking signal to the walking device 10 and has the camera device 30 resume continuous capture to form a video stream, on which the object detection module 42 performs real-time detection again until it can once more mark a marked area B, at which point it sends a stop signal to the walking device 10 to stop for harvesting.

In this embodiment, the crop C corresponding to the marked area B is an asparagus spear, but the invention is not limited thereto.

The training of the artificial intelligence model is further described below. In this embodiment, YOLOv5x6 is used to train the model. For this training, asparagus spears are first photographed in the field to obtain a plurality of images containing spears; labeling software such as LabelImg is then used to manually mark, in those images, the marked areas corresponding to the spears. The images and their corresponding marked areas serve as the training set, so that the artificial intelligence model learns to recognize asparagus spears.

The plurality of images used to train YOLOv5x6 may include images in which asparagus spears are occluded by leaves or overlap with mother stalks; this better matches real field conditions and thus yields better recognition accuracy in practice. In addition, when labeling the training images, the asparagus spears may also be annotated according to height, and the spears may further be classified into harvest grades in order of crop C height, so that the object detection module 42 marks only the spears that meet the harvest grade.
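The height-based grading could look like the following sketch. The grade names and centimeter thresholds are invented for illustration, since the patent says only that spears are labeled and classified by height, not which cut-off values are used.

```python
def grade_spear(height_cm):
    """Classify a spear into a harvest grade by its height.
    The thresholds below are hypothetical placeholders."""
    if height_cm >= 23:
        return "grade A"   # tall enough for the premium grade
    if height_cm >= 16:
        return "grade B"   # harvestable, lower grade
    return "too short"     # left in the field to keep growing

spears = [25.0, 18.5, 12.0]
print([grade_spear(h) for h in spears])  # ['grade A', 'grade B', 'too short']
```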

Furthermore, in training YOLOv5x6, the hyperparameters can be adjusted as shown in Table 1 below. The parameters and their values are examples only; the invention is not limited to them.

[Table 1: adjusted YOLOv5x6 training hyperparameters. The table is rendered as an image in the original publication and its values are not reproducible here.]

The performance of the artificial intelligence model trained with YOLOv5x6 as described above was verified as follows. The model was trained and verified with 2,573 images containing asparagus spears, including 300 images each of special cases such as spears occluded by leaves or overlapping with mother stalks. The recognition accuracy for asparagus spears was about 96.6%, the accuracy of the robotic arm device 20 in positioning spears from their three-dimensional coordinates was about 93.3%, and the computing device 40 could process a single frame of the video stream in 0.3 seconds, enabling effective real-time detection.

Please refer to Figure 6. The operation flow of the crop harvester 1 in practice is as follows. First, in step S11, the walking device 10 travels through the farmland planted with crops C while the camera device 30 captures continuously to form a video stream, which the computing device 40 receives for real-time analysis. The computing device 40 aligns the two-dimensional image I2 and the three-dimensional image I3 captured by the camera device 30 via the image alignment module 41, and detects the two-dimensional image I2 via the object detection module 42 to determine whether a marked area B corresponding to a crop C can be marked in it. In step S12, when the object detection module 42 of the computing device 40 can mark a marked area B corresponding to a crop C, it sends a stop signal to the walking device 10. In step S13, the walking device 10 receives the stop signal, stops, and sends a signal back to the computing device 40.

In step S14, after receiving the signal that the walking device 10 has stopped, the computing device 40 uses the coordinate analysis module 43 to map the marked area B of the two-dimensional image I2 onto the aligned three-dimensional image I3′, obtains the depth coordinate P of the marked area B from I3′, and combines the depth coordinate P with the two-dimensional coordinates into the three-dimensional coordinate for gripping the crop C; the control module 44 then, based on the three-dimensional coordinate of the crop C analyzed by the coordinate analysis module 43, controls the robotic arm device 20 to move the clamping unit 25 to the position of the crop C for harvesting. In step S15, after the robotic arm device 20 has finished harvesting the crop C, it sends a walking signal to the walking device 10 and the computing device 40. The process then returns to step S11 and the above steps repeat, thereby achieving fully automatic crop harvesting.
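The S11 to S15 loop can be sketched as follows, with all hardware interactions stubbed out as plain functions; the stubs and their signatures are illustrative assumptions, not the actual control software.

```python
def run_harvester(frames, detect, locate_3d, harvest):
    """Walk and stream frames (S11); stop when a spear is detected
    (S12-S13); compute the 3-D grasp coordinate and harvest (S14);
    then resume walking and detecting (S15)."""
    harvested = []
    for frame in frames:                   # S11: continuous capture while walking
        boxes = detect(frame)
        if not boxes:
            continue                       # nothing marked, keep walking
        # S12-S13: stop signal sent, walking device halts at the crop.
        for box in boxes:
            coord = locate_3d(frame, box)  # S14: depth + 2-D -> 3-D coordinate
            harvested.append(harvest(coord))
        # S15: walking signal sent, loop resumes with the next frames.
    return harvested

# Stub example: a stream of two frames, one containing a detection.
result = run_harvester(
    frames=[{"boxes": []}, {"boxes": [(5, 5, 2, 10)]}],
    detect=lambda f: f["boxes"],
    locate_3d=lambda f, b: (b[0], b[1], 42.0),  # (x, y, placeholder depth)
    harvest=lambda c: c,
)
print(result)  # [(5, 5, 42.0)]
```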

In this embodiment, because the camera device 30 keeps capturing and the computing device 40 keeps analyzing while the walking device 10 moves, no crop C is missed along the way, as could happen if the machine walked a fixed distance before stopping to capture. Moreover, the computing device 40 sends the stop signal as soon as it finds a crop C, so the walking device 10 stops right at the position of the crop C, and the problem of stopping too far from the crop C does not arise.

In summary, the crop harvester of the present invention can accurately and effectively identify the crops to be harvested and harvest them automatically and precisely, thereby improving the convenience and efficiency of asparagus harvesting and effectively achieving labor-saving harvesting.

The embodiments above merely illustrate the technical principles, features, and effects of the present invention and are not intended to limit its practicable scope. Anyone skilled in the art may modify and vary the above embodiments without departing from the spirit and scope of the invention. Any equivalent modification or variation accomplished using the teachings of the present invention shall nevertheless be covered by the claims below, and the scope of protection of the present invention shall be as set forth in the claims below.


Claims (12)

1. A crop harvester, comprising: a walking device for traveling over farmland planted with crops; a robotic arm device mounted on the walking device; at least one camera device mounted on the walking device for capturing images in the farmland to obtain a two-dimensional image and a corresponding three-dimensional image; and a computing device mounted on the walking device, comprising: an image alignment module for aligning the two-dimensional image with the three-dimensional image; an object detection module for detecting the two-dimensional image so as to mark in it at least one marked area corresponding to a crop and obtain the two-dimensional coordinates of the marked area; a coordinate analysis module for mapping the marked area of the two-dimensional image onto the three-dimensional image aligned with the two-dimensional image, so as to obtain the depth coordinate of the marked area from the three-dimensional image and combine the depth coordinate with the two-dimensional coordinates into a three-dimensional coordinate for gripping the crop; and a control module for controlling the robotic arm device, according to the three-dimensional coordinate, to harvest the crop.

2. The crop harvester of claim 1, wherein the coordinate analysis module takes the smallest depth value within the lower half of the marked area as the depth coordinate.
如請求項1所述之作物採收機,其中,該拍攝裝置之數量為二個,配置成能從不同視角進行拍攝,以取得不同視角之兩個該二維影像供該影像對齊模組及該物件偵測模組進行對齊及偵測。 The crop harvester as described in claim 1, wherein the number of the photographing devices is two, and the photographing devices are configured to be able to photograph from different viewing angles to obtain two two-dimensional images at different viewing angles for the image alignment module and the object detection module to align and detect. 如請求項1所述之作物採收機,其中,該運算裝置為邊緣運算裝置,用以令該拍攝裝置於該行走裝置行走時進行連續拍攝以形成影像串流,再供該物件偵測模組進行即時偵測。 The crop harvester as described in claim 1, wherein the computing device is an edge computing device, which is used to enable the shooting device to continuously shoot when the walking device is walking to form an image stream, and then provide the object detection module with real-time detection. 如請求項4所述之作物採收機,其中,該物件偵測模組在能標記出該標記區域時,送出停止訊號予該行走裝置。 A crop harvester as described in claim 4, wherein the object detection module sends a stop signal to the walking device when the marking area is marked. 如請求項5所述之作物採收機,其中,該控制模組於該機器手臂裝置採收完該作物之後,送出行走訊號予該行走裝置以令該拍攝裝置再次進行連續拍攝而形成影像串流,俾供該物件偵測模組再次進行即時偵測。 The crop harvester as described in claim 5, wherein the control module sends a walking signal to the walking device after the machine arm device has harvested the crop, so that the shooting device can continue shooting again to form an image stream, so that the object detection module can perform real-time detection again. 如請求項1所述之作物採收機,其中,該拍攝裝置為深度相機。 A crop harvester as described in claim 1, wherein the photographing device is a depth camera. 如請求項1所述之作物採收機,其中,該物件偵測模組為YOLOv5x6所訓練出之人工智慧模型。 A crop harvester as described in claim 1, wherein the object detection module is an artificial intelligence model trained by YOLOv5x6. 如請求項8所述之作物採收機,其中,該人工智慧模型係以包含該作物被阻擋或與其他非採收目標重疊的複數影像所訓練。 A crop harvester as described in claim 8, wherein the artificial intelligence model is trained with multiple images including the crop being obstructed or overlapped with other non-harvesting objects. 
如請求項9所述之作物採收機,其中,該人工智慧模型更由依照高度標註該作物的該複數影像所訓練。 A crop harvester as described in claim 9, wherein the artificial intelligence model is further trained by the plurality of images of the crop labeled according to its height. 如請求項1所述之作物採收機,其中,該標記區域所對應之該作物為蘆筍嫩莖。 A crop harvester as described in claim 1, wherein the crop corresponding to the marked area is asparagus stems. 如請求項1所述之作物採收機,其中,該機器手臂裝置具有多軸自由度,其末端設有夾持單元,該夾持單元包含軟質襯墊及刀片。 A crop harvester as described in claim 1, wherein the machine arm device has multi-axis degrees of freedom, and a clamping unit is provided at the end thereof, and the clamping unit includes a soft pad and a blade.
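Claims 1 and 2 describe how the coordinate analysis module turns a 2-D detection box into a 3-D clamping coordinate: map the marked area onto the aligned depth image, take the minimum depth in the lower half of the area, and combine it with the 2-D coordinate. A minimal sketch of that computation in Python with NumPy, assuming the depth map is already pixel-aligned to the RGB frame and encodes invalid readings as 0; the bottom-centre grasp pixel and the zero-means-invalid convention are illustrative assumptions, not specified by the claims:

```python
import numpy as np

def grasp_point_from_detection(bbox, depth_map):
    """Derive a 3-D clamping coordinate (u, v, z) from a 2-D
    detection box and a depth map aligned to the same image.

    bbox: (x1, y1, x2, y2) pixel corners of the marking area.
    depth_map: 2-D array, same resolution as the RGB frame,
               with invalid pixels encoded as 0.
    """
    x1, y1, x2, y2 = bbox
    # 2-D coordinate: bottom-centre of the box, roughly where an
    # asparagus spear meets the ground (an illustrative choice).
    u = (x1 + x2) // 2
    v = y2
    # Lower half of the marking area in the aligned depth image.
    lower = depth_map[(y1 + y2) // 2 : y2, x1:x2]
    valid = lower[lower > 0]
    if valid.size == 0:
        return None  # no usable depth reading in the region
    # Claim 2: the minimum depth in the lower half is taken as the
    # depth coordinate (the surface of the spear closest to the camera).
    z = float(valid.min())
    return (u, v, z)
```

Using the minimum rather than the mean makes the result robust to background soil pixels inside the box, which lie farther from the camera than the spear itself.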
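Claims 4 through 6 together describe a stop-and-go cycle: the edge computing device streams frames while the machine travels, a detection triggers a stop signal, the arm harvests at the computed coordinate, and a travel signal resumes the stream. A hypothetical sketch of that control loop, where every callable is a stand-in for one of the machine's subsystems rather than an actual API:

```python
from typing import Callable, Iterator, Optional, Tuple

BBox = Tuple[int, int, int, int]
Point3D = Tuple[int, int, float]

def harvest_loop(
    frames: Iterator,                                  # image stream from the camera
    detect: Callable[[object], Optional[BBox]],        # object detecting module
    locate: Callable[[BBox, object], Point3D],         # coordinate analysis module
    drive: Callable[[bool], None],                     # True = travel, False = stop
    harvest_at: Callable[[Point3D], None],             # robotic arm device
) -> int:
    """Stop-and-go harvesting cycle of claims 4-6: travel while
    detecting in real time, stop on a detection, harvest at the
    computed 3-D coordinate, then resume travel."""
    harvested = 0
    drive(True)                          # start travelling
    for frame in frames:
        bbox = detect(frame)
        if bbox is None:
            continue                     # nothing marked: keep travelling
        drive(False)                     # claim 5: stop signal on detection
        harvest_at(locate(bbox, frame))  # claims 1-2: clamp at the 3-D point
        harvested += 1
        drive(True)                      # claim 6: travel signal, resume stream
    return harvested
```

In a real machine the detection would run on the edge device against a live camera stream; the iterator here simply stands in for that stream so the control flow can be exercised in isolation.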
TW112107347A 2023-03-01 Crop harvester TWI842416B (en)

Publications (1)

Publication Number Publication Date
TWI842416B true TWI842416B (en) 2024-05-11

Family


Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115648164A (en) 2022-10-25 2023-01-31 中国农业科学院都市农业研究所 Rotary recognition harvesting robot device and method


Similar Documents

Publication Publication Date Title
CN103716594B (en) Panorama splicing linkage method and device based on moving target detecting
JP2011103870A (en) Automatic measurement system and method for plant features, and recording medium thereof
US8210122B2 (en) Arrangement and method for determining positions of the teats of a milking animal
CN109948563A A deep-learning-based method for detecting and locating trees killed by pine wood nematode
KR101898782B1 (en) Apparatus for tracking object
WO2023050783A1 (en) Weeding robot and method and apparatus for planning weeding path thereof, and medium
WO2021208407A1 (en) Target object detection method and apparatus, and image collection method and apparatus
CN111968074A (en) Method for detecting and harvesting lodging crops of harvester by combining binocular camera and IMU
KR20180086745A (en) Apparatus for processing plant images and method thereof
CN108470165A A fruit vision collaborative search method for a picking robot
TWI842416B (en) Crop harvester
CN110516563A (en) Agriculture transplanter intelligence method for path navigation based on DSP
CN108989686A (en) Captured in real-time device and control method based on humanoid tracking
CN113920190A (en) Ginkgo flower spike orientation method and system
Bulanon et al. Feedback control of manipulator using machine vision for robotic apple harvesting
CN115512351A (en) Multi-angle cognitive and positioning system for enhancing corn tassels
CN113114766B (en) Potted plant information detection method based on ZED camera
Roy et al. Robotic surveying of apple orchards
CN115457437A (en) Crop identification method, device and system and pesticide spraying robot
CN111784749A (en) Space positioning and motion analysis system based on binocular vision
TWI809993B Automatic rice transplanter and method applying image recognition therefor
Tarry et al. An integrated bud detection and localization system for application in greenhouse automation
CN114307100B (en) Shooting training method and system based on automatic cruise robot
WO2022208973A1 (en) Information processing device, information processing method, and program
KR102439922B1 (en) Simultaneous real-time acquisition system of RGB image/joint position label data pairs using heterogeneous cameras