TWI783748B - UAV obstacle avoidance flight control image recognition method, system and application using deep learning - Google Patents


Info

Publication number: TWI783748B
Application number: TW110139345A
Authority: TW (Taiwan)
Prior art keywords: flight, image, uav, deep learning, module
Other languages: Chinese (zh)
Other versions: TW202318271A (en)
Inventors: 李昆益, 李宗諺, 苗延浩, 陳健榮, 呂奇晏, 陳思樺, 周謙宇, 鐘翔安, 黃韋華, 劉兆祥, 林坤成
Original assignee: 中華學校財團法人中華科技大學
Priority/filing date: 2021-10-22 (the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed)
Application filed by 中華學校財團法人中華科技大學; priority to TW110139345A
Application granted; publication of TWI783748B (2022-11-11) and of TW202318271A (2023-05-01)


Landscapes

  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The present invention discloses a UAV obstacle-avoidance flight-control image recognition method and system using deep learning, and applications thereof. The system comprises a UAV, a wireless communication unit and an information processing unit. The UAV includes a flight control module and an image capture device: the flight control module controls the UAV's flight according to a flight path, and the image capture device continuously captures the UAV's flight state as flight images. The information processing unit receives the flight images through the wireless communication unit and includes a deep learning computation module and a feature database pre-loaded with object feature samples, each defined with an object name. The deep learning computation module extracts object features from each flight image and feeds them into the feature database to predict the probability that they match an object feature sample. When the match probability exceeds a preset probability, the module outputs the recognition result for the object whose name matches and judges whether the object is an obstacle; if so, it generates a correction control signal that modifies the flight path so that the UAV avoids the obstacle. The invention can thus recognize obstacles in real time within a small region of the flight path, accurately locate their actual positions, and effectively avoid them.

Description

UAV obstacle avoidance flight control image recognition method, system and application using deep learning

The present invention relates to a UAV obstacle-avoidance flight-control image recognition method and system using deep learning and applications thereof, and in particular to UAV obstacle-avoidance flight-control technology that can recognize obstacles in real time within a small region of the flight path and accurately locate their actual positions.

As drone manufacturing and flight-control technologies have matured, the range of drone applications has broadened accordingly. Tasks such as environmental data collection, patrolling of specific areas, land conservation, search and rescue, meteorological observation and communication relay can all be performed by drones. Operators have also used drones to collect photogrammetry and remote-sensing data; in both scientific and civilian applications, unmanned aerial vehicles support disaster search, pesticide spraying, atmospheric data collection, fixed-point photography and many other uses. In recent years, drones have therefore attracted considerable attention from research institutes, industry and the relevant government agencies.

A known example of using artificial intelligence to guide autonomous drone flight is Taiwan utility model No. M558760, "UAV artificial intelligence module". The control module of that patent includes a first wireless communication unit, an artificial-intelligence machine-learning processing unit electrically connected to the first wireless communication unit, a data storage unit electrically connected to the machine-learning processing unit, and an unmanned aerial vehicle. The unmanned aerial vehicle is provided with an image capture element, a second wireless communication unit electrically connected to the image capture element, and a power unit electrically connected to the drive assembly, the image capture element and the second wireless communication unit; the image capture element communicates with the machine-learning processing unit through the second and first wireless communication units. Although that patent's machine-learning processing unit can control the unmanned aerial vehicle based on the image information obtained by the image capture element, and its collision-avoidance detection part can detect obstacles in the vehicle's surroundings and report them back to the machine-learning processing unit so that a model of the actual three-dimensional space can be built and obstacles avoided, the patent still has the following shortcomings:

1. Its machine-learning processing unit has no obstacle image recognition capability, so a separate collision-avoidance detection part must be added, increasing component cost.

2. Its collision-avoidance detection part is fixed to one side of the drone and cannot change its detection direction as the drone's heading changes. Because the heading and the detection direction are not synchronized, collisions with obstacles are more likely, resulting in damage to the drone.

It follows that the obstacle-avoidance flight technology of that patent is not yet complete and still needs improvement. In view of the urgent needs of the related industries, the inventors, after continuous research and development, have developed the present invention, which differs from the prior art described above.

The first objective of the present invention is to provide a UAV obstacle-avoidance flight-control image recognition method, system and application using deep learning that can recognize obstacles in real time within a small region of the flight path, accurately locate their actual positions and effectively avoid them. The technical means adopted to achieve this first objective comprise a UAV, a wireless communication unit and an information processing unit. The UAV includes a flight control module and an image capture device: the flight control module controls the UAV's flight according to a flight path, and the image capture device continuously captures the UAV's flight state as flight images. The information processing unit receives the flight images through the wireless communication unit and includes a deep learning computation module and a feature database pre-loaded with object feature samples, each defined with an object name. The deep learning computation module extracts object features from each flight image and feeds them into the feature database to predict the probability that they match an object feature sample. When the match probability exceeds a preset probability, the module outputs the recognition result for the object whose name matches and judges whether the object is an obstacle; if so, it generates a correction control signal that modifies the flight path, so that the UAV flies according to the correction control signal and avoids the obstacle.

The second objective of the present invention is to provide a UAV obstacle-avoidance flight-control image recognition method, system and application using deep learning in which the image-capture direction changes with the UAV's direction of travel. Because the direction of travel and the image-capture direction are kept synchronized, obstacles ahead can be detected immediately, effectively preventing collisions. The technical means adopted to achieve this second objective comprise the same UAV, wireless communication unit and information processing unit described for the first objective, and further include an angle adjustment unit mounted on the UAV for steering the image capture device and a data acquisition module mounted on the UAV for acquiring flight-state data from the flight control module, the flight state being at least one of forward flight, climbing flight, turning flight and descending flight.

The third objective of the present invention is to provide a UAV obstacle-avoidance flight-control image recognition method, system and application using deep learning that combines image recognition with P3P computation to achieve landing positioning. The technical means adopted to achieve this third objective comprise the same UAV, wireless communication unit and information processing unit described for the first objective. In addition, the flight-state data (acquired as described for the second objective) are transmitted through the wireless communication unit to the information processing unit. When the information processing unit interprets the data as descending flight, it determines whether the recognized object is a landing platform. If so, it transmits an activation signal through the wireless communication unit to the flight control module to start the image capture device capturing a pattern marker (such as an ARUCO MARKER) on the landing platform as a pattern-marker image. The information processing unit receives the pattern-marker image through the wireless communication unit and applies a recognition technique to it to obtain the corresponding identification information and the distance between the UAV and the landing platform, then checks whether the identification information is correct. If the identification information is correct, it judges whether the distance is zero or close to zero; if so, the UAV may land on the landing platform; if not, a continue-descent control signal is transmitted through the wireless communication unit to the flight control module so that the UAV keeps descending until the distance equals or approaches zero. The recognition technique is an image recognition technique: the information processing unit includes an image recognition module and a pattern database, and when the image recognition module receives the pattern-marker image it extracts the image's features and performs feature-matching image recognition against the feature samples stored in the database to obtain the identification information and the distance value. The image recognition module uses the P3P (perspective-three-point) method: let P be the projection center of the image capture device and A, B, C be three-dimensional points in the world coordinate system. After feature matching, the three corresponding two-dimensional points u, v, w of the pattern-marker image in the image coordinate system are obtained, and the world points A, B, C are matched to the image points u, v, w. The lengths AB, BC and AC are known, as are uv, vw and uw. Simultaneous equations built from the law of cosines are solved for the distances PA, PB and PC, giving the camera pose in the world coordinate system, which serves as the distance value. Let the distances from the projection center P to the points be x = |PA|, y = |PB|, z = |PC|, and let α = ∠BPC, β = ∠APC, γ = ∠APB, p = 2cos α, q = 2cos β, r = 2cos γ, a' = |BC|, b' = |AC|, c' = |AB|.

10: UAV (drone)
11: Flight control module
12: Image capture device
13: Angle adjustment unit
130: Pivot seat
131: Mating pivot seat
132: Rotary drive mechanism
14: Flight drive unit
15: Power supply unit
20: Wireless communication unit
30: Information processing unit
31: Deep learning computation module
310: Deep learning model
311: Image recognition module
312: Pattern database
32: Feature database
40: Landing platform
41: Pattern marker
A: Obstacle
D1: Forward direction
D2: Downward direction
D3: Upward direction
dn: Original flight path
dn-1: Corrected flight path

Fig. 1 is a functional block diagram of the basic implementation of the present invention.

Fig. 2 is a functional block diagram of a specific implementation of the present invention.

Fig. 3 is a schematic diagram of flight-path correction when the UAV of the present invention encounters an obstacle.

Fig. 4 is a schematic flow-control diagram of the deep learning computation module of the present invention in the training phase.

Fig. 5 is a schematic flow-control diagram of the deep learning computation module of the present invention in the prediction phase.

Fig. 6 is a schematic diagram of the UAV of the present invention preparing to land on the landing platform.

Fig. 7 is a schematic diagram of the image capture device of the present invention being driven to change its image-capture direction.

Fig. 8 is a schematic diagram of the projection used in the P3P method of the present invention.

To enable the examiners to further understand the overall technical features of the present invention and the technical means by which its objectives are achieved, specific embodiments are described in detail below with reference to the drawings:

Referring to Figs. 1 and 3, a first embodiment for achieving the first objective of the present invention comprises a UAV 10, a wireless communication unit 20 and an information processing unit 30. Each UAV 10 includes a flight control module 11 and an image capture device 12. The flight control module 11 controls and drives the flight of the UAV 10 according to a flight path. The image capture device 12 continuously captures the flight state of the UAV 10 as a plurality of flight images. The information processing unit 30 (for example a computer or server at a ground station) receives the flight images through the wireless communication unit 20 (for example a 4G or 5G mobile communication system, or a UHF/VHF radio-frequency communication system). The information processing unit 30 includes a deep learning computation module 31 and a feature database 32 pre-loaded with a plurality of object feature samples, each defined with an object name. The deep learning computation module 31 extracts at least one object feature from each flight image and feeds the features sequentially into the feature database 32 to predict the probability that they match an object feature sample. When the match probability exceeds a preset probability, the corresponding object name is read out, the recognition result for the matching object is output, and the module judges whether the object is an obstacle A blocking the flight path. If so, a correction control signal that modifies the flight path is generated and transmitted wirelessly through the wireless communication unit 20 to the flight control module 11, so that the UAV 10 flies according to the correction control signal and avoids the obstacle A. As shown in Fig. 3, dn is the original flight path and dn-1 is the corrected flight path.
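The decision flow just described (compare each prediction's match probability against the preset probability, look up the object name, and issue a path-correction signal when an obstacle is found) can be sketched in a few lines of Python. This is an illustrative sketch only, not the patent's implementation; the detector interface, obstacle class list, threshold and message format are all assumptions.

```python
# Minimal sketch of the ground-station decision loop described above.
# `run_detector` and `send_to_flight_controller` are hypothetical stand-ins
# for the deep-learning module and the wireless link back to the UAV.
from dataclasses import dataclass

PRESET_PROBABILITY = 0.6          # the "preset probability" threshold (assumed value)
OBSTACLE_CLASSES = {"drone", "aircraft", "pole", "wire", "tower", "bird"}  # assumed

@dataclass
class Detection:
    name: str          # object name of the matched feature sample
    confidence: float  # predicted match probability
    bbox: tuple        # (x, y, w, h) in pixels

def handle_frame(frame, run_detector, send_to_flight_controller):
    """Process one flight image; if an obstacle is recognised, send a
    corrected-flight-path control signal back to the flight control module."""
    for det in run_detector(frame):                # iterable of Detection objects
        if det.confidence <= PRESET_PROBABILITY:   # match probability too low
            continue
        if det.name in OBSTACLE_CLASSES:           # recognised object blocks the path
            correction = {"type": "correct_path",  # assumed message format
                          "reason": det.name,
                          "offset_m": {"lateral": 5.0, "vertical": 0.0}}
            send_to_flight_controller(correction)
```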

Specifically, each object name (for example another UAV 10 or aircraft intruding into the flight path, or a utility pole, power line, insulator, transmission tower, bird or other object blocking the flight path) is defined with a preset contour size, so that the distance between the UAV 10 and the object can be estimated from the object's contour size in the image, and that contour size serves as the basis for correcting the flight path.
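Because each object name carries a preset physical contour size, the distance estimate mentioned above reduces to a simple pinhole-camera relation; the focal length and object width below are made-up illustrative values, not figures from the patent.

```python
# Pinhole-camera approximation: an object of known real width W (metres) imaged
# at w pixels by a camera with focal length f (pixels) lies at roughly f * W / w.
def estimate_distance_m(focal_px: float, real_width_m: float, bbox_width_px: float) -> float:
    return focal_px * real_width_m / bbox_width_px

# Example: a pole assumed 0.30 m wide, focal length 800 px, bounding box 40 px wide.
print(estimate_distance_m(800.0, 0.30, 40.0))  # -> 6.0 metres
```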

Referring to Figs. 4 and 5, execution of the deep learning computation module 31 includes the following steps:

(a) Training phase: at least one deep learning model 310 is established, and a large volume of object feature samples, obstacle A recognition parameters and image recognition parameters is input to the model. The deep learning model 310 then tests its image recognition accuracy and judges whether the accuracy is sufficient; if so, the recognition results are output and stored; if not, the deep learning model 310 performs self-corrective learning.

(b) Prediction phase: flight images captured in real time are input to the deep learning model 310, which computes the corresponding object features to predict the object name of the recognized object and whether it is an obstacle A.

More specifically, the deep learning model 310 is a training and prediction model based on the YOLO deep learning algorithm.
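As one possible realisation of steps (a) and (b), the following sketch uses the open-source ultralytics YOLO package; the dataset file obstacles.yaml, the accuracy threshold and the choice of mAP@0.5 as the accuracy measure are assumptions, not details taken from the patent.

```python
# Sketch of the training/prediction phases with the ultralytics YOLO package
# (one possible realisation; not the patent's own implementation).
from ultralytics import YOLO

# (a) Training phase: fine-tune pretrained weights (transfer learning) and keep
#     retraining until the validation accuracy is judged sufficient.
model = YOLO("yolov8n.pt")
ACCURACY_TARGET = 0.90                               # assumed "sufficient" threshold
while True:
    model.train(data="obstacles.yaml", epochs=50)    # obstacles.yaml is hypothetical
    metrics = model.val()
    if metrics.box.map50 >= ACCURACY_TARGET:         # mAP@0.5 used as the accuracy proxy
        break                                        # output and store the model

# (b) Prediction phase: run the trained model on a live flight image.
for result in model("flight_frame.jpg"):
    for box in result.boxes:
        name = model.names[int(box.cls)]
        conf = float(box.conf)
        print(name, conf)    # recognition result: object name and match probability
```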

In addition, the present invention mainly proposes a training model based on the You Only Look Once (YOLO) deep learning algorithm for estimating the actual distance to a specific target (such as an obstacle or a landing platform) in consecutive images. Ideal distance-detection methods usually rely on the correctness of geometric information, but an object loses its geometric features after rotation or camera-lens distortion. In the proposed method, the YOLO algorithm extracts the spatial information between targets as training data; the trained model captures the relationship between a target's shape as projected by the camera and its real-world coordinates, recovers the geometric errors caused by different viewing angles, and corrects the target's actual distance in the camera image. A feature of the invention is that, based on the YOLO deep learning method, targets can be recognized in real time within a small region and their positions located accurately. The research method is based on the YOLO network framework, supplemented by transfer learning to train the underwater-YOLO network for timely target recognition. Experimental results confirm that, compared with YOLO, the proposed method recognizes small and overlapping targets better.

Referring to Figs. 1 to 3 and Fig. 7, a second embodiment for achieving the second objective of the present invention includes, in addition to the entire technical content of the first embodiment, an angle adjustment unit 13 mounted on the UAV 10 for steering the image capture device 12, a flight drive unit 14 that drives the UAV 10 into the corresponding flight state in response to control commands from the flight control module 11, and a power supply unit 15 for supplying the required power. The flight state may be at least one of forward flight, climbing flight, turning flight and descending flight.

Specifically, as shown in Fig. 7, the angle adjustment unit 13 includes a pivot seat 130 mounted on the bottom of the UAV 10, a mating pivot seat 131 and a rotary drive mechanism 132. One end of the mating pivot seat 131 has a mating pivot portion that forms a rotatable pivot connection with the pivot portion of the pivot seat 130; the image capture device 12 is fixed to the far end of the mating pivot seat 131, and the rotary drive mechanism 132 is located on one side of the pivot seat 130. When the flight state is forward flight, the flight control module 11 triggers the rotary drive mechanism 132 to rotate the lens of the image capture device 12 toward the forward direction D1 to capture flight images ahead of the UAV 10. When the flight state is climbing flight, the flight control module 11 triggers the rotary drive mechanism 132 to rotate the lens toward the upward direction D3 to capture flight images above the UAV 10. When the flight state is descending flight, the flight control module 11 triggers the rotary drive mechanism 132 to rotate the lens toward the downward direction D2 to capture flight images below the UAV 10.
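The flight-state-to-camera-direction rule can be expressed as a small lookup table; the pitch angles and the gimbal callback below are illustrative assumptions rather than part of the patent.

```python
# Illustrative mapping of flight state to camera pitch (degrees), matching
# directions D1 (forward), D3 (up) and D2 (down) in Fig. 7.
CAMERA_PITCH_BY_STATE = {
    "forward": 0,     # D1: lens level, looking ahead
    "climb":   45,    # D3: lens tilted upward
    "descend": -90,   # D2: lens pointing straight down
}

def update_camera(flight_state: str, set_gimbal_pitch) -> None:
    """set_gimbal_pitch is a hypothetical callback driving the rotary mechanism 132."""
    pitch = CAMERA_PITCH_BY_STATE.get(flight_state)
    if pitch is not None:
        set_gimbal_pitch(pitch)
```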

Referring to Figs. 1 to 3 and Fig. 6, a third embodiment for achieving the third objective of the present invention includes the entire technical content of the first and second embodiments. In addition, the control commands of the flight control module are transmitted through the wireless communication unit 20 to the information processing unit 30. When the information processing unit 30 interprets a command as descending flight, the deep learning computation module 31 determines by image recognition whether the object is a landing platform 40. If so, an activation signal is transmitted through the wireless communication unit 20 to the flight control module 11 to start the image capture device 12 capturing a pattern marker 41 (such as an ARUCO MARKER or a QR CODE) on the landing platform 40 as a pattern-marker image. The information processing unit 30 receives the pattern-marker image through the wireless communication unit 20, and the deep learning computation module 31 applies a recognition technique to it to obtain the corresponding identification information (each UAV is assigned an identification code, which serves as the basis for deciding whether the UAV is authorized to land on the landing platform) and the distance between the UAV 10 and the landing platform 40, then checks whether the identification information is correct. If the identification information is correct, it judges whether the distance is zero or close to zero; if so, the UAV 10 may land on the landing platform 40; if not, a continue-descent control signal is transmitted through the wireless communication unit 20 to the flight control module 11 so that the UAV 10 keeps descending until the distance equals or approaches zero. The recognition technique is an image recognition technique: the deep learning computation module 31 further includes an image recognition module 311 and a pattern database 312, and when the image recognition module 311 receives the pattern-marker image it extracts the image's features and performs feature-matching image recognition against the feature samples stored in the pattern database 312 to obtain the identification information and the distance value.
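A rough sketch of this marker-guided landing check is given below, written against OpenCV's ArUco module (opencv-contrib-python, classic pre-4.7 cv2.aruco.detectMarkers API; newer versions expose cv2.aruco.ArucoDetector instead). The camera intrinsics, marker size, permitted-ID whitelist and landing threshold are assumptions, not values from the patent.

```python
# Sketch of one landing-check step on a downward-looking camera frame.
import cv2
import numpy as np

ARUCO_DICT = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
MARKER_SIZE_M = 0.20      # printed marker edge length in metres (assumed)
ALLOWED_IDS = {7}         # "identification information": IDs allowed to land (assumed)
LAND_THRESHOLD_M = 0.05   # what counts as "zero or close to zero" (assumed)

# 3D corner coordinates of the marker in its own frame (z = 0 plane),
# ordered top-left, top-right, bottom-right, bottom-left like cv2.aruco corners.
_S = MARKER_SIZE_M / 2.0
MARKER_CORNERS_3D = np.array([[-_S,  _S, 0], [ _S,  _S, 0],
                              [ _S, -_S, 0], [-_S, -_S, 0]], dtype=np.float32)

def landing_step(gray_frame, camera_matrix, dist_coeffs):
    """Return 'land', 'descend' or 'abort' for one frame."""
    corners, ids, _ = cv2.aruco.detectMarkers(gray_frame, ARUCO_DICT)
    if ids is None or int(ids[0][0]) not in ALLOWED_IDS:
        return "abort"                        # no marker, or identification info incorrect
    image_corners = corners[0].reshape(4, 2).astype(np.float32)
    ok, _rvec, tvec = cv2.solvePnP(MARKER_CORNERS_3D, image_corners,
                                   camera_matrix, dist_coeffs)
    if not ok:
        return "abort"
    distance = float(np.linalg.norm(tvec))    # camera-to-platform distance in metres
    return "land" if distance <= LAND_THRESHOLD_M else "descend"
```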

As shown in Fig. 8, the image recognition module uses the P3P (perspective-three-point) method: let P be the projection center of the image capture device 12 and A, B, C be three-dimensional points in the world coordinate system. After feature matching, the three corresponding two-dimensional points u, v, w of the pattern-marker image 41 in the image coordinate system are obtained, and the world points A, B, C are matched to the image points u, v, w. The lengths AB, BC and AC are known, as are uv, vw and uw. Simultaneous equations built from the law of cosines are solved for the distances PA, PB and PC, giving the camera pose in the world coordinate system, which serves as the distance value. Let the distances from the projection center P to the points be x = |PA|, y = |PB|, z = |PC|, and let α = ∠BPC, β = ∠APC, γ = ∠APB, p = 2cos α, q = 2cos β, r = 2cos γ, a' = |BC|, b' = |AC|, c' = |AB|. By the law of cosines, the simultaneous equations are:

$$\begin{cases} y^{2}+z^{2}-yzp=a'^{2}\\ x^{2}+z^{2}-xzq=b'^{2}\\ x^{2}+y^{2}-xyr=c'^{2} \end{cases}$$
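For illustration, this system can be solved numerically; the sketch below feeds the three law-of-cosines equations to scipy.optimize.fsolve using a toy geometry in which A, B and C are seen at right angles from P, so the expected solution is |PA| = |PB| = |PC| = 2.

```python
# Numerical illustration of the P3P law-of-cosines system above:
#   y^2 + z^2 - y*z*p = a'^2
#   x^2 + z^2 - x*z*q = b'^2
#   x^2 + y^2 - x*y*r = c'^2
# Toy geometry: A, B, C lie 2 m from P along mutually perpendicular rays,
# so alpha = beta = gamma = 90 degrees and |BC| = |AC| = |AB| = sqrt(8).
import numpy as np
from scipy.optimize import fsolve

alpha = beta = gamma = np.pi / 2
p, q, r = 2 * np.cos(alpha), 2 * np.cos(beta), 2 * np.cos(gamma)   # all ~0 here
a_, b_, c_ = np.sqrt(8.0), np.sqrt(8.0), np.sqrt(8.0)              # |BC|, |AC|, |AB|

def equations(v):
    x, y, z = v                       # x = |PA|, y = |PB|, z = |PC|
    return [y*y + z*z - y*z*p - a_*a_,
            x*x + z*z - x*z*q - b_*b_,
            x*x + y*y - x*y*r - c_*c_]

x, y, z = fsolve(equations, [1.0, 1.0, 1.0])
print(x, y, z)                        # expected: approximately 2.0, 2.0, 2.0
```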

In a spraying glue-supply system applying the deep-learning UAV obstacle-avoidance flight-control image recognition system of the present invention, the system includes a glue supply device (not shown) and a spray control module (not shown) mounted on the UAV 10. The glue supply device includes a glue supply tank (not shown) for holding a gel-like vegetation substrate and a nozzle (not shown) for spraying the gel-like vegetation substrate out. The flight control module 11 of the UAV 10 is preset with a glue-supply flight path for flying over an area to be sprayed, so that the UAV 10 can fly above that area along the path. The spray control module controls the operation of the glue supply device, including the timing and amount of the gel-like vegetation substrate sprayed from the nozzle toward the area, so that the surface of the area is coated with at least one layer of the gel-like vegetation substrate.

The above specific embodiments show that the present invention has the following features:

1. The present invention can recognize obstacles in real time within a small region of the flight path, accurately locate their actual positions and effectively avoid them.

2. The present invention can change the image-capture direction as the UAV's direction of travel changes. Because the direction of travel and the image-capture direction are kept synchronized, obstacles ahead can be detected immediately, effectively preventing collisions.

3. The present invention combines image recognition with P3P computation to achieve landing positioning.

The above description covers only feasible embodiments of the present invention and is not intended to limit its patent scope; all equivalent implementations and other variations based on the content, features and spirit of the following claims shall fall within the patent scope of the present invention. The structural features specifically defined in the claims are not found in similar products and are practical and inventive, satisfying the requirements for an invention patent; this application is therefore filed in accordance with the law, and the Office is respectfully requested to grant the patent so as to protect the applicant's legitimate rights.

10: UAV (drone)
11: Flight control module
12: Image capture device
14: Flight drive unit
15: Power supply unit
20: Wireless communication unit
30: Information processing unit
31: Deep learning computation module
32: Feature database

Claims (9)

1. A UAV obstacle-avoidance flight-control image recognition method using deep learning, comprising: providing at least one UAV, a wireless communication unit and an information processing unit, wherein each UAV comprises a flight control module and an image capture device; having the flight control module control and drive the flight of the at least one UAV according to a flight path; continuously capturing the flight state of the UAV with the image capture device as a plurality of flight images; and having the information processing unit receive the plurality of flight images through the wireless communication unit, the information processing unit comprising a deep learning computation module and a feature database pre-loaded with a plurality of object feature samples, each object feature sample being defined with an object name, the deep learning computation module extracting at least one object feature from each flight image and feeding the at least one object feature sequentially into the feature database to predict the probability that the input object feature matches an object feature sample; when the match probability is greater than a preset probability, reading out the corresponding object name, outputting the recognition result for the object that matches the object name, and judging whether the object is an obstacle blocking the flight path; when the judgment is affirmative, generating a correction control signal that modifies the flight path, the correction control signal being transmitted wirelessly through the wireless communication unit to the flight control module so that the UAV flies according to the correction control signal and avoids the obstacle; wherein the control commands of the flight control module are transmitted through the wireless communication unit to the information processing unit, and when the information processing unit interprets a control command as descending flight, the deep learning computation module determines by image recognition whether the object is a landing platform; when the judgment is affirmative, an activation signal is transmitted through the wireless communication unit to the flight control module to start the image capture device capturing a pattern marker on the landing platform as a pattern-marker image; the information processing unit receives the pattern-marker image through the wireless communication unit, and the deep learning computation module applies a recognition technique to the pattern-marker image to obtain the corresponding identification information and the distance between the UAV and the landing platform, and checks whether the identification information is correct; when the identification information is correct, it judges whether the distance is zero or close to zero; if so, the UAV may land on the landing platform; if not, a continue-descent control signal is transmitted through the wireless communication unit to the flight control module so that the UAV keeps descending until the distance equals zero or approaches zero.

2. The UAV obstacle-avoidance flight-control image recognition method using deep learning of claim 1, wherein execution of the deep learning computation module comprises the following steps: (a) a training-phase step, in which at least one deep learning model is established, a large volume of the object feature samples, obstacle recognition parameters and image recognition parameters is input to the at least one deep learning model, the deep learning model tests its image recognition accuracy, and it is judged whether the accuracy is sufficient; if so, the recognition results are output and stored; if not, the deep learning model performs self-corrective learning; and (b) a prediction-phase step, in which flight images captured in real time are input to the deep learning model, which computes the corresponding object features to predict the object name of the object and whether it is the obstacle.

3. The UAV obstacle-avoidance flight-control image recognition method using deep learning of claim 2, wherein the at least one deep learning model is a training and prediction model based on the YOLO deep learning algorithm.

4. A UAV obstacle-avoidance flight-control image recognition system using deep learning, comprising: at least one UAV, each comprising a flight control module and an image capture device, the flight control module controlling and driving the flight of the at least one UAV according to a flight path, and the image capture device continuously capturing the flight state of the UAV as a plurality of flight images; a wireless communication unit; and an information processing unit that receives the plurality of flight images through the wireless communication unit, the information processing unit comprising a deep learning computation module and a feature database pre-loaded with a plurality of object feature samples, each object feature sample being defined with an object name, the deep learning computation module extracting at least one object feature from each flight image and feeding it sequentially into the feature database to predict the probability that the input object feature matches an object feature sample; when the match probability is greater than a preset probability, the corresponding object name is read out, the recognition result for the matching object is output, and it is judged whether the object is an obstacle blocking the flight path; when the judgment is affirmative, a correction control signal that modifies the flight path is generated and transmitted wirelessly through the wireless communication unit to the flight control module, so that the UAV flies according to the correction control signal and avoids the obstacle; wherein the control commands of the flight control module are transmitted through the wireless communication unit to the information processing unit, and when the information processing unit interprets a control command as descending flight, the deep learning computation module determines by image recognition whether the object is a landing platform; when the judgment is affirmative, an activation signal is transmitted through the wireless communication unit to the flight control module to start the image capture device capturing a pattern marker on the landing platform as a pattern-marker image; the information processing unit receives the pattern-marker image through the wireless communication unit, and the deep learning computation module applies a recognition technique to the pattern-marker image to obtain the corresponding identification information and the distance between the UAV and the landing platform, and checks whether the identification information is correct; when the identification information is correct, it judges whether the distance is zero or close to zero; if so, the UAV may land on the landing platform; if not, a continue-descent control signal is transmitted through the wireless communication unit to the flight control module so that the UAV keeps descending until the distance equals zero or approaches zero.

5. The UAV obstacle-avoidance flight-control image recognition system using deep learning of claim 4, further comprising an angle adjustment unit mounted on the UAV for steering the image capture device and a flight drive unit that drives the UAV into the corresponding flight state in response to a plurality of control commands from the flight control module, the flight state being selected from forward flight, climbing flight, turning flight and descending flight.

6. The UAV obstacle-avoidance flight-control image recognition system using deep learning of claim 5, wherein the angle adjustment unit comprises a pivot seat mounted on the bottom of the UAV, a mating pivot seat and a rotary drive mechanism; one end of the mating pivot seat has a mating pivot portion that forms a rotatable pivot connection with a pivot portion of the pivot seat; the image capture device is fixed to the far end of the mating pivot seat, and the rotary drive mechanism is located on one side of the pivot seat; when the flight state is forward flight, the flight control module triggers the rotary drive mechanism to rotate a lens of the image capture device to face forward so as to capture the flight images ahead of the UAV; when the flight state is climbing flight, the flight control module triggers the rotary drive mechanism to rotate the lens to face upward so as to capture the flight images above the UAV; and when the flight state is descending flight, the flight control module triggers the rotary drive mechanism to rotate the lens to face downward so as to capture the flight images below the UAV.

7. The UAV obstacle-avoidance flight-control image recognition system using deep learning of claim 4, wherein the recognition technique is an image recognition technique, and the deep learning computation module further comprises an image recognition module and a pattern database; when the image recognition module receives the pattern-marker image, it extracts the image features of the pattern-marker image and performs feature-matching image recognition against the feature samples stored in the pattern database to obtain the identification information and the distance value.

8. The UAV obstacle-avoidance flight-control image recognition system using deep learning of claim 7, wherein, according to the P3P (perspective-three-point) method, the image recognition module takes P as the projection center of the image capture device and A, B, C as three-dimensional points in the world coordinate system; after feature matching, the three corresponding two-dimensional points u, v, w of the pattern-marker image in the image coordinate system are obtained, and the world points A, B, C are matched to the image points u, v, w, the lengths AB, BC and AC being known and uv, vw and uw also being known; simultaneous equations built from the law of cosines are solved for the distances PA, PB and PC to obtain the camera pose in the world coordinate system as the distance value; letting the distances from the projection center P to the points be x = |PA|, y = |PB|, z = |PC|, and α = ∠BPC, β = ∠APC, γ = ∠APB, p = 2cos α, q = 2cos β, r = 2cos γ, a' = |BC|, b' = |AC|, c' = |AB|, the simultaneous equations obtained from the law of cosines are:

$$\begin{cases} y^{2}+z^{2}-yzp=a'^{2}\\ x^{2}+z^{2}-xzq=b'^{2}\\ x^{2}+y^{2}-xyr=c'^{2} \end{cases}$$

9. A spraying glue-supply system applying the UAV obstacle-avoidance flight-control image recognition system using deep learning of claim 4, comprising a glue supply device, a nozzle and a spray control module mounted on the UAV; the glue supply device comprises a glue supply tank for holding a gel-like vegetation substrate and a nozzle for spraying the gel-like vegetation substrate out; the flight control module of the UAV is preset with a glue-supply flight path for flying over an area to be sprayed, so that the UAV can fly above the area along that path; and the spray control module controls the operation of the glue supply device, including the timing and amount of the gel-like vegetation substrate sprayed from the nozzle toward the area, so that the surface of the area is coated with at least one layer of the gel-like vegetation substrate.
TW110139345A (priority date 2021-10-22, filing date 2021-10-22): TWI783748B (en), UAV obstacle avoidance flight control image recognition method, system and application using deep learning

Priority Applications (1)

TW110139345A (priority date 2021-10-22, filing date 2021-10-22): UAV obstacle avoidance flight control image recognition method, system and application using deep learning

Applications Claiming Priority (1)

TW110139345A (priority date 2021-10-22, filing date 2021-10-22): UAV obstacle avoidance flight control image recognition method, system and application using deep learning

Publications (2)

Publication Number Publication Date
TWI783748B true TWI783748B (en) 2022-11-11
TW202318271A TW202318271A (en) 2023-05-01

Family ID: 85794447

Family Applications (1)

TW110139345A: TWI783748B (en), UAV obstacle avoidance flight control image recognition method, system and application using deep learning

Country Status (1)

TW: TWI783748B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107886099A (en) * 2017-11-09 2018-04-06 电子科技大学 Synergetic neural network and its construction method and aircraft automatic obstacle avoiding method
TW201925033A (en) * 2017-11-30 2019-07-01 財團法人工業技術研究院 Unmanned aerial vehicle, control system for unmanned aerial vehicle and control method thereof
TW202108448A (en) * 2019-07-23 2021-03-01 日商東洋製罐股份有限公司 Unmanned aerial vehicle
US20210191390A1 (en) * 2019-12-18 2021-06-24 Lg Electronics Inc. User equipment, system, and control method for controlling drone


Also Published As

Publication number Publication date
TW202318271A (en) 2023-05-01

Similar Documents

Publication Publication Date Title
CN105549614B (en) Unmanned plane target tracking
CN106054929B (en) A kind of unmanned plane based on light stream lands bootstrap technique automatically
CN110879601B (en) Unmanned aerial vehicle inspection method for unknown fan structure
AU2018388887B2 (en) Image based localization for unmanned aerial vehicles, and associated systems and methods
McGee et al. Obstacle detection for small autonomous aircraft using sky segmentation
CN103822635B (en) The unmanned plane during flying spatial location real-time computing technique of view-based access control model information
Lange et al. Autonomous landing for a multirotor UAV using vision
CN107729808A (en) A kind of image intelligent acquisition system and method for power transmission line unmanned machine inspection
CN107589758A (en) A kind of intelligent field unmanned plane rescue method and system based on double source video analysis
CN112270267B (en) Camera shooting identification system capable of automatically capturing line faults
JP2017537484A (en) System and method for detecting and tracking movable objects
CN111679695B (en) Unmanned aerial vehicle cruising and tracking system and method based on deep learning technology
CN112068539A (en) Unmanned aerial vehicle automatic driving inspection method for blades of wind turbine generator
CN105867397A (en) Unmanned aerial vehicle accurate position landing method based on image processing and fuzzy control
KR20170091352A (en) Method for detecting working area and performing continuous working in the detected working area and the unmanned air vehicle performing the same
CN107054654A (en) A kind of unmanned plane target tracking system and method
CN110362109A (en) A kind of cross-domain shutdown library landing method of unmanned plane and landing platform
CN106094876A (en) A kind of unmanned plane target locking system and method thereof
CN113156998B (en) Control method of unmanned aerial vehicle flight control system
CN112789672A (en) Control and navigation system, attitude optimization, mapping and positioning technology
JP2024061767A (en) Drone operation support system and drone operation support method
Xiang et al. UAV based target tracking and recognition
CN114815871A (en) Vision-based autonomous landing method for vertical take-off and landing unmanned mobile platform
US20210407129A1 (en) Method and device for assisting the driving of an aircraft moving on the ground
KR102349818B1 (en) Autonomous UAV Navigation based on improved Convolutional Neural Network with tracking and detection of road cracks and potholes