TWI826777B - Row-crop type unmanned vehicle automatic navigation system and method thereof - Google Patents
- Publication number: TWI826777B (application TW110109893A)
- Authority: TW (Taiwan)
Landscapes
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
Abstract
Description
The present invention relates to an automatic navigation system, and in particular to an automatic navigation system and method for a row-crop unmanned vehicle.
As shown in Figure 2, a row-crop field 10 consists of raised beds 12 and recessed furrows 14. Crops are planted on the beds 12, while farmers walk, machinery travels, and irrigation water flows in the furrows 14. At present, tasks such as sowing, weeding, and fertilizing must all be performed manually. Various agricultural machines have since been developed to replace purely manual work, with a human driver operating the machine to reduce the labor burden.
Although such machinery can replace work on foot, it still requires a human driver. Even with machinery, farmers on vast fields (such as soybean farms) must still perform long hours of physical labor under the scorching sun. If only one person can operate the machine, every stage of cultivation falls on that single person, limiting productivity and farm-management efficiency; prolonged operation may also lead to driver fatigue, loss of concentration, and safety hazards. Automating agricultural machinery can therefore substantially improve agricultural productivity and management efficiency.
To improve agricultural productivity and management efficiency, the prior art has proposed patents related to agricultural-machinery automation. For example, Taiwan Patent Publication No. 202030497, "Navigation system and operation method of navigation system," emits a radar signal, computes the distance between the vehicle and an object from the time difference of the signal reflected back, and performs obstacle detection to obtain the spatial distribution of surrounding objects. In practical agricultural applications, however, both the radar device and the navigation device are costly, so mounting a radar on farm machinery would significantly increase cost. Another example is Taiwan Patent No. I389635, "Compound guided work vehicle," which runs the vehicle along predetermined rails to serve different tasks. This method, however, confines the work vehicle to physically installed rails: using the vehicle at a new site requires reinstalling the rails, which is highly inconvenient, and the rails themselves are an additional expense.
Accordingly, in view of the shortcomings of the above prior art and of future needs, the present invention proposes an automatic navigation system and method for a row-crop unmanned vehicle that navigates forward autonomously. The specific architecture and its implementations are detailed below.
The main objective of the present invention is to provide an automatic navigation system and method for a row-crop unmanned vehicle, in which an artificial-intelligence model is trained to recognize the terrain of the field, distinguish raised beds from furrows, and compute an appropriate movement trajectory for the mobile vehicle, so that the vehicle navigates forward automatically and the farmer's physical labor is reduced.
Another objective of the present invention is to provide an automatic navigation system and method for a row-crop unmanned vehicle in which an inference model, trained on a small number of road images collected under various environmental conditions (dry or wet soil, strong or weak light, etc.) and across the plants' growth stages, is combined with a navigation-information algorithm to achieve automatic navigation of the mobile vehicle, with navigation performance unaffected by factors such as sunlight intensity, plant size, or soil condition.
To achieve the above objectives, the present invention provides a row-crop unmanned-vehicle automatic navigation system installed on a mobile vehicle, comprising: at least one image capture device that captures a plurality of road-surface images; an annotation analysis module, connected to the image capture device, that generates a plurality of annotations on the road-surface images to identify the terrain therein; a semantic segmentation module, connected to the annotation analysis module and the at least one image capture device, that builds an inference model and trains it with at least one semantic-segmentation artificial-intelligence algorithm and the annotated road-surface images, so that the inference model outputs a terrain recognition result; a travel information module, connected to the semantic segmentation module, that computes a set of trajectory information from the terrain recognition result; and a travel control module, connected to the travel information module and the mobile vehicle, that receives at least part of the trajectory information and controls the movement of the vehicle accordingly.
According to an embodiment of the present invention, the system further comprises an image processing module, signal-connected to the image capture device, the annotation analysis module, and the semantic segmentation module, which receives the road-surface images transmitted by the image capture device, preprocesses them, and forwards them to the annotation analysis module or the semantic segmentation module.
According to an embodiment of the present invention, the preprocessing performed by the image processing module includes resizing the road-surface images, adjusting their color representation values, sampling them, or any combination of the three.
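The first two preprocessing operations can be sketched in a few lines. This is a minimal pure-Python illustration, assuming a grayscale image stored as a nested list of 0–255 pixel values; the function names and representation are illustrative, not the patent's implementation.

```python
def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize of an image given as a list of pixel rows."""
    in_h, in_w = len(img), len(img[0])
    return [[img[r * in_h // out_h][c * in_w // out_w]
             for c in range(out_w)]
            for r in range(out_h)]

def adjust_value(img, gain):
    """Crude brightness ('value') adjustment: scale and clamp to 0..255."""
    return [[min(255, int(p * gain)) for p in row] for row in img]
```

In practice a library such as OpenCV would handle resizing and HSV/HSL conversion, but the logic is the same: change the geometry, then the color representation, before the image is forwarded.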
According to an embodiment of the present invention, the terrain recognition result comprises a plurality of terrain blocks on the road-surface images, the terrain blocks including bed blocks and furrow blocks.
According to an embodiment of the present invention, the trajectory information includes the wheel trajectories, the centroid trajectory, and the traveling linear and angular velocities of the mobile vehicle, the linear velocity being preset. The travel information module takes the center points between the two side edges of each furrow block and connects them into a line to produce a wheel trajectory of the vehicle; it then takes the midpoints between the two wheel trajectories and connects them into a line segment as the centroid trajectory of the vehicle, and computes the traveling angular velocity from an offset of the centroid trajectory.
According to an embodiment of the present invention, the travel information module transmits the linear velocity and angular velocity of the trajectory information to the travel control module.
According to an embodiment of the present invention, the system further comprises a visualization module connected to the semantic segmentation module to display the terrain blocks, and to the travel information module to display the wheel trajectories and the centroid trajectory on the terrain blocks.
According to an embodiment of the present invention, the visualization module displays the trajectory information in different colors, color blocks, and lines.
The present invention further provides an automatic navigation method for a row-crop unmanned vehicle, applied to a row-crop unmanned-vehicle automatic navigation system installed on a mobile vehicle, comprising the steps of: capturing a plurality of road-surface images with at least one image capture device and transmitting them to an annotation analysis module, which generates a plurality of annotations on the images to identify the terrain therein; receiving the annotated road-surface images with a semantic segmentation module, building an inference model, and training the inference model with at least one semantic-segmentation artificial-intelligence algorithm and the received road-surface images so that it outputs a terrain recognition result; and computing, with a travel information module and based on the inference model's output, a set of trajectory information suitable for the vehicle's travel, and transmitting it to a travel control module that controls the movement of the vehicle according to the trajectory information.
The present invention provides a row-crop unmanned-vehicle automatic navigation system and method, applied to the travel of unmanned farm implements on row-crop fields. By collecting only a small number of images of such agricultural fields, the system detects and distinguishes beds and furrows and trains an AI inference model capable of recognizing the terrain; it provides explicit detection and localization of the furrows, thereby supplying the trajectory along which a mobile vehicle (such as an unmanned farm implement) moves between the furrows and navigating it.
Please refer to Figure 3, a schematic diagram of the mobile vehicle 16 moving on the farmland 10. The farmland 10 consists of long raised beds 12, with recessed furrows 14 between the beds 12 for walking or for irrigation water. The mobile vehicle 16 is a four-wheeled unmanned vehicle whose left and right wheels straddle at least one bed 12 and land in the two furrows 14 on either side; for example, the left and right wheels of the vehicle 16 may straddle two beds 12. At least one image capture device 22, such as a camera, video camera, or any lens, is mounted on the vehicle 16; one may be mounted at the front and one at the rear, the front device 22 capturing road-surface images when the vehicle 16 moves forward and the rear device 22 capturing them when it reverses. The vehicle 16 also carries a processor 24. The image capture device 22 and the processor 24 together form the row-crop unmanned-vehicle automatic navigation system 20 of the present invention; the software portions of the invention (image processing, AI training, trajectory calculation, and so on) all run on the processor 24.
Please also refer to Figure 1, a block diagram of the row-crop unmanned-vehicle automatic navigation system 20 of the present invention. The system 20 is installed on the mobile vehicle 16 and includes the image capture device 22 and the processor 24. The image capture device 22 captures road-surface images, covering the beds 12, the furrows 14, and any obstacles that may appear. The processor 24 further includes an image processing module 242, an annotation analysis module 244, a semantic segmentation module 246, a visualization module 247, a travel information module 248, and a travel control module 249. The image processing module 242 is connected to the image capture device 22 and preprocesses the road-surface images, for example resizing them to suit subsequent needs, or cropping only a region of interest from each image. Preprocessing also includes adjusting the color representation values of the images, such as hue, saturation, value, or lightness, or working in a color space containing those values, such as HSV or HSL.
Preprocessing further includes sampling the road-surface images, so that only a small number of images are forwarded to the annotation analysis module 244. For example, suppose the image capture device captures 30 road-surface images per second. In training mode, all kinds of conditions are taken into account, including the plants' growth stages, lighting variations, and so on, and several images are taken from each field; the number of images of each kind need not be large, and increasing the breadth of sampling enables complete training. In actual interpretation (rather than training) mode, more images are sampled and the various conditions are no longer considered: for example, 6 or 10 images per second, or 700 out of a total of 1,000, are fed into the inference model trained by the semantic segmentation module 246 to form a continuous travel result.
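The two sampling regimes described above can be sketched as follows. The rates (6 of 30 frames per second at inference time, a handful of frames per field at training time) are the examples from the text; the function names and the "handful" of five frames per field are illustrative assumptions.

```python
def sample_for_training(frames_by_field, per_field=5):
    """Take a few evenly spaced frames from each field's recording."""
    out = []
    for frames in frames_by_field:
        step = max(1, len(frames) // per_field)
        out.extend(frames[::step][:per_field])
    return out

def sample_for_inference(frames, keep_per_second=6, fps=30):
    """Keep `keep_per_second` evenly spaced frames per second of video."""
    return frames[::fps // keep_per_second]
```

Training thus samples broadly but sparsely across conditions, while inference samples densely from a single stream so the trajectory output stays continuous.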
The annotation analysis module 244 generates a plurality of annotations on the road-surface images; the annotations identify the terrain in the images. In one embodiment, the annotations are produced by the annotation analysis module 244 from results marked by a user: for example, a user interface (not shown) offers options such as "bed," "furrow," and others for the user to choose from, or an input device (not shown) such as a keyboard or writing tablet lets the user enter annotations on the interface, for example the "bed" and "furrow" labels in Figure 1. In one embodiment, annotating means that the user designates segments or line segments on a road-surface image and connects them into an enclosed region for selection, producing a terrain block.
The semantic segmentation module 246 is connected to the annotation analysis module 244 and trains an AI model. More specifically, the semantic segmentation module 246 first builds an inference model, typically a deep-learning neural network model, and then trains it with at least one semantic-segmentation artificial-intelligence algorithm and the annotated road-surface images, yielding an inference model with artificial intelligence. The inference model can therefore semantically segment an image using artificial intelligence, automatically recognize the terrain, and output a terrain recognition result; for example, the trained inference model can identify which parts of a road-surface image are beds and which are furrows. During training, the semantic segmentation module 246 can receive representative road-surface images covering the crop's different growth stages (germination, flowering, maturity, etc.), different lighting conditions, and soils in different dry or wet states.
The travel information module 248 is connected to the semantic segmentation module 246. Based on the terrain recognition result output by the semantic segmentation module 246, and combined with at least one navigation-information algorithm, the travel information module 248 computes a set of trajectory information suitable for the travel of the mobile vehicle 16, including the vehicle's wheel trajectories, centroid trajectory, and traveling linear and angular velocities. The terrain recognition result comprises terrain blocks on the road-surface image, including bed blocks and furrow blocks. In this embodiment, the operator of the application site presets a feasible linear velocity at which the vehicle travels. The two side edges of a furrow block are first determined, and the center points between them are found and connected into a line, producing a wheel trajectory of the vehicle 16 on that furrow block; the present invention produces two wheel trajectories, one on each of two furrow blocks. Next, the midpoints between these two wheel trajectories are connected into a line segment lying midway between them, which is the centroid trajectory of the vehicle 16; in general, this centroid trajectory falls on a bed block. The angular velocity of the vehicle 16 is then computed from the offset of the centroid trajectory. In another embodiment (not illustrated), after the terrain blocks are produced, other known navigation-information algorithms may be used to derive the trajectory information of the vehicle 16; the invention is not limited in this respect.
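The trajectory construction just described (furrow edges → wheel-trajectory center points → centroid midpoints) can be sketched directly. The 0/1 label mask (1 = furrow) standing in for the segmentation output, and the per-row representation, are illustrative assumptions.

```python
def furrow_runs(row):
    """Return (left_edge, right_edge) column pairs of each furrow run."""
    runs, start = [], None
    for c, v in enumerate(row + [0]):   # sentinel 0 closes a trailing run
        if v == 1 and start is None:
            start = c
        elif v != 1 and start is not None:
            runs.append((start, c - 1))
            start = None
    return runs

def trajectories(mask):
    """Per-row wheel points (left, right) and centroid column."""
    wheel, centroid = [], []
    for row in mask:
        runs = furrow_runs(row)
        if len(runs) < 2:               # wheel trajectory interrupted here
            wheel.append(None)
            centroid.append(None)
            continue
        left = sum(runs[0]) / 2         # centre between the left furrow's edges
        right = sum(runs[-1]) / 2       # centre between the right furrow's edges
        wheel.append((left, right))
        centroid.append((left + right) / 2)
    return wheel, centroid
```

Connecting the per-row points from bottom to top of the image yields the two wheel trajectories and the centroid trajectory described above.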
After the linear and angular velocities are produced, the travel information module 248 transmits all or part of the trajectory information to the travel control module 249, for example only the linear and angular velocities. The travel control module 249 is connected to the travel information module 248 and to the mobile vehicle 16; upon receiving part or all of the trajectory information, it automatically navigates the vehicle 16 accordingly. For example, the vehicle 16 has a plurality of wheels (not shown) and a chassis (not shown), and the travel control module 249 can control the rotational speed, direction, and so on of the individual wheels according to the trajectory information, thereby changing the vehicle's heading, speed, and so on.
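For a differential-drive chassis, turning the commanded pair (v, ω) into per-side wheel speeds is a one-line kinematic relation. The patent does not specify the drive type, so this is an assumed illustration for the common differential-drive case, with the track width W as a vehicle parameter:

```python
def wheel_speeds(v, omega, track_width):
    """Left/right wheel linear speeds for a differential-drive vehicle.

    v            forward linear velocity (m/s)
    omega        angular velocity (rad/s), positive = turn left
    track_width  lateral distance W between the wheel contact lines (m)
    """
    v_left = v - omega * track_width / 2.0
    v_right = v + omega * track_width / 2.0
    return v_left, v_right
```

With ω = 0 both wheels run at v and the vehicle tracks straight along the furrows; a nonzero ω speeds up one side to steer back toward the row center.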
Notably, the back end of the image processing module 242 is connected to both the annotation analysis module 244 and the semantic segmentation module 246. During the training stage of the inference model, the preprocessed road-surface images are sent to the annotation analysis module 244 to produce annotations, which are supplied to the semantic segmentation module 246 to train the inference model. Once the inference model has been trained and is in actual use, however, the annotation analysis module 244 no longer needs to supply annotations for training; the image processing module 242 then sends the preprocessed road-surface images directly to the semantic segmentation module 246, which performs terrain recognition with the inference model.
An embodiment is provided below to illustrate the navigation method of the row-crop unmanned-vehicle automatic navigation system 20 of the present invention. Please refer to Figures 2 through 4E. First, the image capture device 22 captures a number of road-surface images containing beds 12 and furrows 14, as shown in Figure 4A. The road-surface images are then preprocessed in the image processing module 242, for example resized (reduced or enlarged), as shown in Figure 4B. Next, as shown in Figure 4C, the annotation analysis module 244 annotates the preprocessed road-surface images, marking "bed" and "furrow." In Figure 4D, the semantic segmentation module 246 uses semantic-segmentation algorithms to frame a plurality of terrain blocks on the road-surface image, including bed blocks 32 and furrow blocks 34, which respectively represent the plantable beds 12 and the furrows 14 in which the vehicle 16 may move (as shown in Figure 2). The semantic segmentation module 246 trains a terrain-analyzing inference model from these annotated terrain blocks. After the inference model's training is complete, a newly captured road-surface image, preprocessed by the image processing module 242 (as in Figure 4B), can be sent directly to the semantic segmentation module 246, which uses the inference model to produce a plurality of terrain blocks on the image and outputs a terrain recognition result as in Figure 4D. Then, as shown in Figure 4E, the travel information module 248 builds on Figure 4D, combining the terrain recognition result with navigation-information algorithms to compute a set of trajectory information suitable for the vehicle's travel, including the wheel trajectories of the vehicle 16 on the furrow blocks 34, the centroid trajectory on the bed block 32, and the traveling linear and angular velocities.
Please refer to Figure 5, a schematic diagram of computing the wheel trajectories, centroid trajectory, and traveling linear and angular velocities on a road-surface image according to the present invention. This road-surface image has been preprocessed and resized. A wheel trajectory follows the center line of a furrow block 34, so points A and B are the center points, at the bottom of the frame, of the left and right furrow blocks 34 respectively. The centroid trajectory is the center line between the two wheel trajectories, so point C is the midpoint of points A and B. As Figure 5 shows, C does not coincide with the center position C' at the bottom of the frame but deviates from it by an offset d, indicating that the travel path of the vehicle 16 has drifted and needs correction. The offset d is therefore used to compute an angular velocity for the vehicle 16 that returns the bottom starting point C of the centroid trajectory to the center of the frame. The vehicle 16 normally advances at constant speed, so the linear velocity is fixed.
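One simple way to realize "use the offset d to compute the angular velocity" is a proportional controller: steer back harder the farther C sits from C'. The gain k and the pixel-to-rad/s scaling are illustrative assumptions; the patent specifies only that the angular velocity is derived from d.

```python
def centroid_x(a_x, b_x):
    """Point C: midpoint of the wheel-trajectory bottom points A and B."""
    return (a_x + b_x) / 2.0

def angular_velocity(c_x, frame_width, k=0.01):
    """Angular velocity (rad/s) from the centroid's pixel offset d.

    c_x          column of point C at the bottom of the frame
    frame_width  image width in pixels; C' sits at frame_width / 2
    k            assumed proportional gain (rad/s per pixel)
    """
    d = c_x - frame_width / 2.0   # signed offset of C from C'
    return -k * d                 # steer back toward the centre line
```

When C is exactly at C' the offset d is zero and the vehicle drives straight at the preset linear velocity; any drift produces a corrective turn proportional to d.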
Figure 5 cannot depict the linear and angular velocities, but only these two quantities need to be supplied to the travel control module 249, which can then control the heading and braking accordingly; the navigation information actually given to the travel control module 249 thus consists of the linear velocity and the angular velocity.
In one embodiment, the visualization module 247 of the present invention is connected to the semantic segmentation module 246 and the travel information module 248 to display the terrain blocks produced by the semantic segmentation module 246 and, on those terrain blocks, the wheel trajectories and centroid trajectory produced by the travel information module 248. The visualization module 247 includes a display interface that presents the terrain blocks, wheel trajectories, and centroid trajectory of Figures 4D and 4E to the user. In particular, the visualization module can render the terrain blocks, wheel trajectories, centroid trajectory, and other trajectory information in different colors, color blocks, and lines, so that the user can see the movement trajectory of the vehicle 16 at a glance.
If an obstacle lies ahead, as shown in Figure 6A, where an obstacle 30 appears in the right furrow 14, the terrain blocks produced by the semantic segmentation module 246 will appear as in Figure 6B: the right furrow 14 corresponds to furrow block 34a, a terrain block inferred by the semantic segmentation module 246 to contain a blank gap. When the travel information module 248 generates trajectory information from the terrain blocks contained in the terrain recognition result, it finds, while producing the wheel trajectories, that the wheel trajectory on furrow block 34a is interrupted, which signals that braking is needed. The linear and angular velocities that the travel information module 248 then provides to the travel control module 249 are both zero, preventing the vehicle 16 from hitting the obstacle 30.
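The braking behavior described above (an interrupted wheel trajectory forces both commanded velocities to zero) amounts to a guard in front of the controller. In this sketch the wheel trajectory is assumed to be a per-row list with None marking rows where the furrow, and hence the trajectory, is missing; the representation is illustrative.

```python
def command(v_preset, omega, wheel_trajectory):
    """Return (v, omega); brake to (0, 0) if the trajectory has a gap."""
    if any(point is None for point in wheel_trajectory):
        return 0.0, 0.0           # obstacle ahead: stop the vehicle
    return v_preset, omega
```

This keeps obstacle handling out of the segmentation model itself: the model merely fails to label the blocked region as furrow, and the gap in the derived trajectory is what triggers the stop.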
Therefore, with the row-crop type unmanned vehicle automatic navigation system and method provided by the present invention, only a small number of forward road images, captured under various environmental conditions (dry or wet soil, strong or weak light, etc.) and at various plant growth stages, are needed to train an inference model capable of recognizing terrain. Combined with the navigation information algorithm, this model can control the movement of the mobile vehicle, and the navigation performance is not degraded by factors such as sunlight intensity, plant size, or soil condition. The present invention not only reduces farmers' physical labor but also allows multiple mobile vehicles to operate simultaneously, improving agricultural productivity. The computed trajectory information can further be used to tally daily travel distance and track work progress, thereby improving farm management efficiency.
The above descriptions are merely preferred embodiments of the present invention and are not intended to limit its scope. All equivalent changes or modifications made in accordance with the features and spirit described in the claims of the present invention shall be included within the patent scope of the present invention.
10: Farmland
12: Bed (row-crop bed)
14: Furrow
16: Mobile vehicle
20: Row-crop type unmanned vehicle automatic navigation system
22: Image capture device
24: Processor
242: Image processing module
244: Annotation analysis module
246: Semantic segmentation module
247: Visualization module
248: Travel information module
249: Travel control module
30: Obstacle
32: Bed block
34, 34a: Furrow blocks
A: Center point at the bottom of the frame of the left furrow block
B: Center point at the bottom of the frame of the right furrow block
C: Midpoint of points A and B
C': Center position at the bottom of the frame
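The points A, B, C, and C' listed above imply the steering reference used on each road surface image: C is the midpoint of the bottom-of-frame centers of the left and right furrow blocks, and its lateral offset from C' (the frame-bottom center) indicates how far the vehicle sits from the lane center. A minimal sketch of that geometry, with an assumed sign convention since the patent does not fix one:

```python
def steering_offset(a, b, frame_width):
    """Compute the midpoint C of A and B and its lateral offset from C'.

    a, b        : (x, y) bottom-of-frame center points of the left and
                  right furrow blocks, in image coordinates.
    frame_width : image width in pixels; C' lies at x = frame_width / 2.
    Returns (c, offset); a negative offset means C lies left of C'
    (sign convention assumed for illustration).
    """
    cx = (a[0] + b[0]) / 2.0
    cy = (a[1] + b[1]) / 2.0
    c_prime_x = frame_width / 2.0
    return (cx, cy), cx - c_prime_x
```

A downstream controller would map this offset to the angular velocity handed to the travel control module, e.g. proportionally, while the linear velocity stays at its nominal value.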
Figure 1 is a block diagram of the row-crop type unmanned vehicle automatic navigation system of the present invention.
Figure 2 is a schematic diagram of farmland including beds and furrows.
Figure 3 is a schematic diagram of a mobile vehicle moving on farmland.
Figures 4A to 4E are schematic diagrams of one embodiment of the process of the row-crop type unmanned vehicle automatic navigation system of the present invention.
Figure 5 is a schematic diagram of computing the wheel trajectories, center-of-mass trajectory, and traveling linear and angular velocities on a road surface image according to the present invention.
Figures 6A and 6B are schematic diagrams of an obstacle ahead and of the corresponding terrain blocks.
Claims (16)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
TW110109893A TWI826777B (en) | 2021-03-19 | 2021-03-19 | Row-crop type unmanned vehicle automatic navigation system and method thereof |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
TW110109893A TWI826777B (en) | 2021-03-19 | 2021-03-19 | Row-crop type unmanned vehicle automatic navigation system and method thereof |
Publications (2)
Publication Number | Publication Date |
---|---|
TW202238303A TW202238303A (en) | 2022-10-01 |
TWI826777B true TWI826777B (en) | 2023-12-21 |
Family
ID=85460551
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
TW110109893A TWI826777B (en) | 2021-03-19 | 2021-03-19 | Row-crop type unmanned vehicle automatic navigation system and method thereof |
Country Status (1)
Country | Link |
---|---|
TW (1) | TWI826777B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108780318A (en) * | 2016-03-07 | 2018-11-09 | 洋马株式会社 | Coordinates measurement device |
WO2019040866A2 (en) * | 2017-08-25 | 2019-02-28 | The Board Of Trustees Of The University Of Illinois | Apparatus and method for agricultural data collection and agricultural operations |
US20200118425A1 (en) * | 2018-10-11 | 2020-04-16 | Toyota Research Institute, Inc. | System and method for roadway context learning by infrastructure sensors |
TW202020799A (en) * | 2018-11-23 | 2020-06-01 | 明創能源股份有限公司 | External coordinate-based real-time three-dimensional road condition auxiliary device for mobile vehicle, and system |
US10721859B2 (en) * | 2017-01-08 | 2020-07-28 | Dolly Y. Wu PLLC | Monitoring and control implement for crop improvement |
TW202037192A (en) * | 2019-03-20 | 2020-10-01 | 齊凌科技有限公司 | Vehicle with road condition analysis module |
- 2021-03-19: Application TW110109893A filed in Taiwan (patent TWI826777B, status: active)
Also Published As
Publication number | Publication date |
---|---|
TW202238303A (en) | 2022-10-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110243372B (en) | Intelligent agricultural machinery navigation system and method based on machine vision | |
WO2020140491A1 (en) | Automatic driving system for grain processing, and automatic driving method and path planning method therefor | |
CN111727457B (en) | Cotton crop row detection method and device based on computer vision and storage medium | |
US20200288625A1 (en) | Agricultural utility vehicle | |
Yang et al. | Real-time detection of crop rows in maize fields based on autonomous extraction of ROI | |
CA3125700C (en) | Automatic driving system for grain processing, automatic driving method and automatic identification method | |
CN115723138A (en) | Control method and device of agricultural robot, electronic equipment and storage medium | |
CN110081873A (en) | Recognition methods is got in region ready and agricultural machinery gets identifying system ready | |
Tillett et al. | A field assessment of a potential method for weed and crop mapping on the basis of crop planting geometry | |
Wang et al. | The seedling line extraction of automatic weeding machinery in paddy field | |
JP2006101816A (en) | Method and apparatus for controlling steering | |
TWI826777B (en) | Row-crop type unmanned vehicle automatic navigation system and method thereof | |
CN115451965B (en) | Relative heading information detection method for transplanting system of transplanting machine based on binocular vision | |
US20230094371A1 (en) | Vehicle row follow system | |
CN113587946A (en) | Visual navigation system and method for field agricultural machine | |
Türköz et al. | Computer vision-based guidance assistance concept for plowing using rgb-d camera | |
KR20210006068A (en) | Autonomous tractors for dry field farming | |
CN114485612B (en) | Route generation method and device, unmanned operation vehicle, electronic equipment and storage medium | |
KR102077219B1 (en) | Routing method and system for self-driving vehicle using tree trunk detection | |
Liu et al. | Method for the navigation line recognition of the ridge without crops via machine vision | |
Kunghun et al. | A rubber tree orchard mapping method via image processing | |
WO2023120183A1 (en) | Agricultural machine | |
WO2023127437A1 (en) | Agricultural machine | |
US20240004087A1 (en) | Center Path Estimator for Agricultural Windrows | |
Valero et al. | Single Plant Fertilization using a Robotic Platform in an Organic Cropping Environment |