TWI802514B - Processing method of target identification for unmanned aerial vehicle (uav) - Google Patents

Processing method of target identification for unmanned aerial vehicle (UAV)

Info

Publication number
TWI802514B
TWI802514B TW111138301A
Authority
TW
Taiwan
Prior art keywords
target
image
color
length
person
Prior art date
Application number
TW111138301A
Other languages
Chinese (zh)
Other versions
TW202416231A (en)
Inventor
林俊良
陳文傑
陳泱億
Original Assignee
國立中興大學
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 國立中興大學 filed Critical 國立中興大學
Priority to TW111138301A priority Critical patent/TWI802514B/en
Application granted granted Critical
Publication of TWI802514B publication Critical patent/TWI802514B/en
Publication of TW202416231A publication Critical patent/TW202416231A/en

Links

Images

Landscapes

  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

This invention comprises the steps of image pre-processing, target image acquisition, target image tracking, color identification, body-segment identification with length calculation, target position identification, and target image refinement. Through these steps, an unmanned aerial vehicle (UAV) can track a target person using an on-board lightweight micro-controller unit (MCU). Using a color database, a dressing database, and target position prediction, the MCU calculates and sums the individual scores for color, length, and target position. The person with the highest total score in the image captured by the on-board camera is then recognized, locked on, and tracked. This invention therefore presents an effective, low-cost scoring method that can be realized within the on-board MCU of a UAV serving as a personal security guard. The accuracy of target identification is relatively high given the limited computational resources of the on-board MCU, and the parameters of the scoring system can be flexibly adjusted.

Description

無人機目標辨識之處理方法 The Processing Method of UAV Target Recognition

本發明係有關一種無人機目標辨識之處理方法，尤指一種兼具採計分制判別機制簡單快速、判斷之準確性高、可減少電腦運算資源及可以彈性微調參數之無人機目標辨識之處理方法。 The present invention relates to a processing method for UAV target identification, in particular one whose scoring-based discrimination mechanism is simple and fast, whose judgments are highly accurate, which reduces the demand on computing resources, and whose parameters can be flexibly fine-tuned.

近年來無人機的發展有極大的進步，不僅可應用於軍事偵察、警用搜救、商業攝影、農業應用(噴農藥或灑種籽)等領域，還可以應用於媒體(或稱為自錄節目之主持人或網路紅人)之追蹤跟拍。簡單來說，就是某一人走在前面，後面跟著一台無人機，一邊走一邊介紹(例如某城區景點之介紹導覽)或是做為個人保鑣用途，此時，無人機必須要有追蹤某一特定(人物)目標之能力，並且要一直保持固定的間距(例如跟在某人之後方5公尺處，高度維持在2公尺高)。 In recent years drones have advanced greatly. Beyond military reconnaissance, police search and rescue, commercial photography, and agricultural applications (spraying pesticides or sowing seeds), they can also be used by media personalities (hosts of self-recorded programs or Internet celebrities) for follow-and-film shooting. Simply put, a person walks in front while a drone follows behind, filming as the person gives, say, a guided tour of a city's attractions, or serving as a personal bodyguard. The drone must therefore be able to track one specific (human) target while always maintaining a fixed offset (for example, 5 meters behind the person at a height of 2 meters).

傳統之無人機目標辨識，若鎖定一人跟拍是毫無問題，但是，當有其他人混入無人機之攝影範圍時，就有可能混淆，而發生跟錯人的情形。 With traditional UAV target recognition, locking onto and following a single person poses no problem; however, when other people enter the drone's field of view, the tracker can become confused and follow the wrong person.

其次，無人機之體積小、重量有限且電池容量有限，無法搭載太重的影像處理(例如分辨特定目標)設備；以人工智慧(Artificial Intelligence，簡稱AI；且為公知AI技術，恕不贅述)演算為基礎之辨識為例，AI專用微電腦需浪費大量運算資源，不適用無人機空間及其有限的電池容量。 Secondly, a drone is small, weight-limited, and battery-limited, so it cannot carry heavy image-processing equipment (e.g., for distinguishing a specific target). Taking recognition based on artificial-intelligence (AI) algorithms as an example (well-known AI techniques, not elaborated here), a dedicated AI microcomputer consumes a large amount of computing resources and is unsuited to a drone's limited space and battery capacity.

但目前市面上並無此類可快速分辨不同人之無人機目標辨識技術。 However, no UAV target-recognition technology that can quickly distinguish different people is currently available on the market.

傳統之無人機目標辨識技術主要是以位置資訊為主，故，當目標人物與另一人物先重疊再分離時，無人機目標辨識就經常會出錯。 Traditional UAV target-recognition technology relies mainly on position information, so when the target person overlaps with another person and then separates, the recognition often fails.

因此，業界一直在尋求較簡單且準確性高之判別機制，來應用於無人機之目標辨識。 Therefore, the industry has been seeking a simpler and more accurate discrimination mechanism for UAV target recognition.

有鑑於此，必須研發出可解決上述習用缺點之技術。 In view of this, a technology that can overcome the above conventional shortcomings must be developed.

本發明之目的，在於提供一種無人機目標辨識之處理方法，其兼具採計分制判別機制簡單快速、判斷之準確性高、可減少電腦運算資源因而適用於機上電腦及可以彈性微調參數等優點。特別是，本發明所欲解決之問題係在於傳統上之無人機目標辨識技術主要是以位置資訊為主，故當目標人物與另一人物先重疊再分離時，無人機目標辨識經常會出錯，而以人工智慧(AI)演算為基礎之辨識和微電腦需浪費大量運算資源，不適用無人機空間與電池容量有限等問題。 The purpose of the present invention is to provide a processing method for UAV target recognition whose scoring-based discrimination mechanism is simple and fast, whose judgments are highly accurate, and which reduces computing-resource demands enough to run on an on-board computer while allowing flexible fine-tuning of parameters. In particular, the problem the present invention addresses is that traditional UAV target recognition relies mainly on position information, so recognition often fails when the target person overlaps with another person and then separates, while recognition based on artificial-intelligence (AI) algorithms wastes a large amount of computing resources on a dedicated microcomputer and is unsuited to a drone's limited space and battery capacity.

解決上述問題之技術手段係提供一種無人機目標辨識之處理方法，其包括下列步驟：一、準備步驟；二、擷取目標影像步驟；三、持續追蹤目標影像步驟；四、顏色辨識計算步驟；五、長度辨識計算步驟；六、位置辨識計算步驟；及七、完成辨識目標影像步驟。 The technical means for solving the above problems is a processing method for UAV target recognition comprising the following steps: 1. preparation; 2. capturing the target image; 3. continuously tracking the target image; 4. color-recognition calculation; 5. length-recognition calculation; 6. position-recognition calculation; and 7. completing identification of the target image.

本發明之上述目的與優點，不難從下述所選用實施例之詳細說明與附圖中，獲得深入瞭解。 The above objects and advantages of the present invention can be readily understood from the detailed description of the selected embodiments below and the accompanying drawings.

茲以下列實施例並配合圖式詳細說明本發明於後: The present invention is hereafter described in detail with the following embodiments and accompanying drawings:

10:無人機 10: Drone

11:顏色資料庫 11: Color database

12:服裝儀容資料庫 12: Clothing appearance database

13:辨識資料庫 13: Identification database

20:目標人物 20: target person

31:閒雜人物 31: Bystander

S1:準備步驟 S1: Preparatory steps

S2:擷取目標影像步驟 S2: the step of capturing the target image

S3:持續追蹤目標影像步驟 S3: Step of continuously tracking the target image

S4:顏色辨識計算步驟 S4: color recognition calculation steps

S5:長度辨識計算步驟 S5: length identification calculation steps

S6:位置辨識計算步驟 S6: Position identification calculation steps

S7:完成辨識目標影像步驟 S7: Complete the step of identifying the target image

M1:第一無人機視野影像 M1: The first drone view image

M2:第二無人機視野影像 M2: Second drone view image

A:目標人物影像 A: The image of the target person

A1:目標頭部影像 A1: target head image

A2:目標上半身影像 A2: Target upper body image

A3:目標下半身影像 A3: target lower body image

B1、B2:閒雜人物影像 B1, B2: Bystander images

B11:閒雜頭部影像 B11: Bystander head image

B12:閒雜上半身影像 B12: Bystander upper-body image

B13:閒雜下半身影像 B13: Bystander lower-body image

L11:目標頭部長度 L11: target head length

L12:目標下半身長度 L12: Target lower body length

L21:閒雜頭部長度 L21: Bystander head length

L22:閒雜下半身長度 L22: Bystander lower-body length

(Xp,Yp,Zp):目標三維座標 (Xp, Yp, Zp): 3D coordinates of the target

(Xb,Yb,Zb):閒雜人物三維座標 (Xb, Yb, Zb): 3D coordinates of the bystander

第1圖係本發明之處理方法之流程圖。 Fig. 1 is a flowchart of the processing method of the present invention.

第2圖係本發明之擷取目標影像之示意圖。 Fig. 2 is a schematic diagram of the captured target image of the present invention.

第3圖係第2圖之取得之第一無人機視野影像之示意圖。 Figure 3 is a schematic diagram of the first UAV field of view image obtained in Figure 2.

第4圖係第3圖之人物影像之放大之示意圖。 Figure 4 is an enlarged schematic diagram of the image of the person in Figure 3.

第5圖係本發明之持續追蹤目標人物影像之示意圖。 Fig. 5 is a schematic diagram of the continuous tracking target image of the present invention.

第6圖係第5圖之取得之第二無人機視野影像之示意圖。 Figure 6 is a schematic diagram of the second UAV field of view image obtained in Figure 5.

第7圖係第6圖之其中之一人物影像之放大之示意圖。 Fig. 7 is an enlarged schematic diagram of one of the figures in Fig. 6.

第8圖係本發明之S狀函數之曲線圖。 Fig. 8 is a graph of the sigmoid function of the present invention.

附件:係本發明之第二無人機視野影像之參考照片。 Attachment: It is a reference photo of the second UAV field of view image of the present invention.

參閱第1、第2、第3及第4圖,本發明係為一種無人機目標辨識之處理方法,其係包括: Referring to Figures 1, 2, 3 and 4, the present invention is a processing method for UAV target identification, which includes:

一、準備步驟S1：準備一無人機10，該無人機10係具有相互連通之一顏色資料庫11、一服裝儀容資料庫12及一辨識資料庫13。該顏色資料庫11係內建複數筆色系，該複數筆色系至少包括紅色、橙色、黃色、綠色、藍色、紫色、黑色及白色。該服裝儀容資料庫12係內建複數筆服裝儀容資訊，該複數筆儀容資訊至少包括長髮、短髮、長褲及短褲。 1. Preparation, step S1: prepare a drone 10 having an interconnected color database 11, clothing-appearance database 12, and identification database 13. The color database 11 holds a plurality of color classes, including at least red, orange, yellow, green, blue, purple, black, and white. The clothing-appearance database 12 holds a plurality of clothing-appearance entries, including at least long hair, short hair, long trousers, and shorts.
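As a minimal sketch (the data layout is hypothetical; the patent does not specify how the databases are stored), the three databases of step S1 can be modeled as simple Python structures, with the identification database holding the target's known attributes in terms of the other two:

```python
# Hypothetical layout of the three interconnected databases of step S1.
COLOR_DB = ["red", "orange", "yellow", "green", "blue", "purple", "black", "white"]
DRESS_DB = ["long hair", "short hair", "long trousers", "shorts"]

# The identification database is sketched here as the target's known
# attributes, expressed in terms of the two databases above.
ID_DB = {
    "head":  {"color": "black",  "dress": "short hair"},
    "upper": {"color": "orange"},
    "lower": {"color": "blue",   "dress": "long trousers"},
}

# Every attribute must come from one of the prepared databases.
for part in ID_DB.values():
    assert part["color"] in COLOR_DB
    if "dress" in part:
        assert part["dress"] in DRESS_DB
print("databases consistent")  # -> databases consistent
```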

二、擷取目標影像步驟S2：控制該無人機10用以在一時間=t時，追蹤一目標人物20而取得一第一無人機視野影像M1，該第一無人機視野影像M1係具有一目標人物影像A，該目標人物影像A係包括一目標頭部影像A1、一目標上半身影像A2及一目標下半身影像A3。其中：該目標頭部影像A1係具有一第一長度乘以一第一寬度個像素，該目標頭部影像A1中佔據最多像素數量之色系係被定義為一目標頭部顏色；該第一長度係被定義為一目標頭部長度L11。 2. Capturing the target image, step S2: the UAV 10 is controlled to track a target person 20 at time t and obtain a first UAV field-of-view image M1, which contains a target person image A comprising a target head image A1, a target upper-body image A2, and a target lower-body image A3. The target head image A1 spans a first length times a first width of pixels; the color class occupying the largest number of pixels in A1 is defined as the target head color, and the first length is defined as the target head length L11.

該目標上半身影像A2係具有一第二長度乘以一第二寬度個像素，該目標上半身影像A2中佔據最多像素數量之色系係被定義為一目標上半身顏色。 The target upper-body image A2 spans a second length times a second width of pixels; the color class occupying the largest number of pixels in A2 is defined as the target upper-body color.

該目標下半身影像A3係具有一第三長度乘以一第三寬度個像素，該目標下半身影像A3中佔據最多像素數量之色系係被定義為一目標下半身顏色，該第三長度係被定義為一目標下半身長度L12。 The target lower-body image A3 spans a third length times a third width of pixels; the color class occupying the largest number of pixels in A3 is defined as the target lower-body color, and the third length is defined as the target lower-body length L12.
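The "color class occupying the largest number of pixels" rule used for all three segments can be sketched as follows, assuming (hypothetically) that each segment's pixels have already been quantized into the eight color classes of the color database:

```python
from collections import Counter

# The eight colour classes of the colour database in step S1.
PALETTE = {"red", "orange", "yellow", "green", "blue", "purple", "black", "white"}

def dominant_color(segment):
    """Return the colour class covering the most pixels in a segment.

    `segment` is a 2-D list (length x width) of colour-class names,
    i.e. pixels already quantized to the palette.
    """
    counts = Counter(c for row in segment for c in row)
    color, _ = counts.most_common(1)[0]
    assert color in PALETTE, "pixels must be pre-quantized to the palette"
    return color

# A 2x3 head segment in which black hair dominates.
head = [["black", "black", "white"],
        ["black", "black", "black"]]
print(dominant_color(head))  # -> black
```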

又，取得該無人機10之拍攝角度，及該無人機10與該目標人物20之距離，即可由該辨識資料庫13換算出該目標人物20之空間位置，且將其定義為一目標三維座標(Xp,Yp,Zp)。 Further, given the UAV 10's shooting angle and the distance between the UAV 10 and the target person 20, the identification database 13 can convert these into the spatial position of the target person 20, defined as the target three-dimensional coordinate (Xp, Yp, Zp).
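One plausible angle-plus-distance conversion is a spherical-to-Cartesian transform; the sketch below assumes a pan angle measured in the horizontal plane and a tilt angle below the horizon, which is an assumption on our part since the patent does not spell out the exact convention:

```python
import math

def to_world(distance, pan_deg, tilt_deg, drone_pos=(0.0, 0.0, 0.0)):
    """Hypothetical conversion of shooting angles + distance to a 3-D point.

    pan_deg: horizontal angle of the camera; tilt_deg: angle below horizon.
    """
    pan, tilt = math.radians(pan_deg), math.radians(tilt_deg)
    dx = distance * math.cos(tilt) * math.cos(pan)
    dy = distance * math.cos(tilt) * math.sin(pan)
    dz = -distance * math.sin(tilt)  # looking downward lowers Z
    x0, y0, z0 = drone_pos
    return (x0 + dx, y0 + dy, z0 + dz)

# 5 m away, camera level and pointing straight ahead: 5 m in front of the drone.
print(to_world(5.0, 0.0, 0.0))  # -> (5.0, 0.0, 0.0)
```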

三、持續追蹤目標影像步驟S3：參閱第5圖，控制該無人機10用以在一時間=t+1時，持續追蹤該目標人物20而取得一第二無人機視野影像M2（如第6圖及附件所示），該第二無人機視野影像M2具有該目標人物影像A及至少一閒雜人物影像（如第6圖所示，例如B1、B2、…），該至少一閒雜人物影像係對應鄰近該目標人物20之一閒雜人物31，且該至少一閒雜人物影像係包括一閒雜頭部影像（參考第7圖，例如為B11）、一閒雜上半身影像（例如為B12）及一閒雜下半身影像（例如為B13）。其中：該閒雜頭部影像B11係具有一第四長度乘以一第四寬度個像素，該閒雜頭部影像B11中佔據最多像素數量之色系係被定義為一閒雜頭部顏色；該第四長度係被定義為一閒雜頭部長度L21。 3. Continuously tracking the target image, step S3: referring to Figure 5, the UAV 10 is controlled to keep tracking the target person 20 at time t+1 and obtain a second UAV field-of-view image M2 (as shown in Figure 6 and the Attachment). M2 contains the target person image A and at least one bystander image (e.g., B1, B2, ... in Figure 6), which corresponds to a bystander 31 near the target person 20 and comprises a bystander head image (e.g., B11 in Figure 7), a bystander upper-body image (e.g., B12), and a bystander lower-body image (e.g., B13). The bystander head image B11 spans a fourth length times a fourth width of pixels; the color class occupying the largest number of pixels in B11 is defined as the bystander head color, and the fourth length is defined as the bystander head length L21.

該閒雜上半身影像B12係具有一第五長度乘以一第五寬度個像素，該閒雜上半身影像B12中佔據最多像素數量之色系係被定義為一閒雜上半身顏色。 The bystander upper-body image B12 spans a fifth length times a fifth width of pixels; the color class occupying the largest number of pixels in B12 is defined as the bystander upper-body color.

該閒雜下半身影像B13係具有一第六長度乘以一第六寬度個像素，該閒雜下半身影像B13中佔據最多像素數量之色系係被定義為一閒雜下半身顏色，該第六長度係被定義為一閒雜下半身長度L22。 The bystander lower-body image B13 spans a sixth length times a sixth width of pixels; the color class occupying the largest number of pixels in B13 is defined as the bystander lower-body color, and the sixth length is defined as the bystander lower-body length L22.

又，取得該無人機10之拍攝角度，及該無人機10與該至少一閒雜人物31之距離，即可由該辨識資料庫13換算出該閒雜人物31之空間位置，且將其定義為一閒雜人物三維座標(Xb,Yb,Zb)。 Further, given the UAV 10's shooting angle and the distance between the UAV 10 and the at least one bystander 31, the identification database 13 can convert these into the spatial position of the bystander 31, defined as the bystander three-dimensional coordinate (Xb, Yb, Zb).

四、顏色辨識計算步驟S4：該辨識資料庫13係內建一顏色分數，其係預設為0分，並當該辨識資料庫13辨識該閒雜頭部顏色等於該目標頭部顏色時，則將該顏色分數加1分，否則加0分；當該辨識資料庫13辨識該閒雜上半身顏色等於該目標上半身顏色時，則加1分，否則加0分；當該辨識資料庫13辨識該閒雜下半身顏色等於該目標下半身顏色時，則加1分，否則加0分。 4. Color-recognition calculation, step S4: the identification database 13 maintains a color score, preset to 0. One point is added to the color score when the candidate's head color equals the target head color, otherwise 0; one point when the candidate's upper-body color equals the target upper-body color, otherwise 0; and one point when the candidate's lower-body color equals the target lower-body color, otherwise 0.
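The per-segment color scoring of step S4 can be sketched directly from the rule above (the dict layout is an assumed representation):

```python
def color_score(target, candidate):
    """Step S4: +1 for each body segment whose dominant colour matches.

    target/candidate: dicts with 'head', 'upper', 'lower' colour names.
    """
    return sum(1 for part in ("head", "upper", "lower")
               if candidate[part] == target[part])

# Attributes taken from the worked example later in the text:
target   = {"head": "black", "upper": "orange", "lower": "blue"}
person_2 = {"head": "black", "upper": "blue",   "lower": "blue"}
print(color_score(target, person_2))  # -> 2 (head and lower body match)
```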

五、長度辨識計算步驟S5：該辨識資料庫13係內建一長度分數，其係預設為0分，並當該辨識資料庫13辨識該閒雜頭部長度L21等於該目標頭部長度L11之±N%時，則將該長度分數加1分，否則加0分；且0<N<20；當該辨識資料庫13辨識該閒雜下半身長度L22等於該目標下半身長度L12之±N%時，則加1分，否則加0分；且0<N<20。 5. Length-recognition calculation, step S5: the identification database 13 maintains a length score, preset to 0. One point is added to the length score when the candidate head length L21 is within ±N% of the target head length L11 (with 0<N<20), otherwise 0; and one point when the candidate lower-body length L22 is within ±N% of the target lower-body length L12 (with 0<N<20), otherwise 0.
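The ±N% tolerance test of step S5 can be sketched as follows; N=10 and the pixel lengths used in the example are assumed values within the patent's stated range 0<N<20:

```python
def length_score(target_lengths, candidate_lengths, n_percent=10):
    """Step S5: +1 when a candidate length falls within +/-N% of the
    corresponding target length (head L11 vs L21, lower body L12 vs L22)."""
    score = 0
    for t, c in zip(target_lengths, candidate_lengths):
        if abs(c - t) <= (n_percent / 100.0) * t:
            score += 1
    return score

# Target head 40 px and lower body 90 px; candidate measures 42 px and 70 px.
print(length_score((40, 90), (42, 70)))  # -> 1 (head within 10%, lower body not)
```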

六、位置辨識計算步驟S6：該辨識資料庫13係至少內建下列(公式1)及(公式2)： 6. Position-recognition calculation, step S6: the identification database 13 has at least the following (Formula 1) and (Formula 2) built in:

position error = 1 / ((Xp−Xb)² + (Yp−Yb)² + (Zp−Zb)² + ε)  (公式1 / Formula 1)

其中：position error=位置誤差；ε=係數，其係0.0001<ε<0.1。 Where: position error = the position error; ε = a coefficient with 0.0001<ε<0.1.

PS=A×sigmoid(s(position error-bias))-A/2 (公式2) PS = A × sigmoid ( s ( position error-bias ))- A /2 (Formula 2)

其中:A=(顏色分數+長度分數)×2;s=陡峭度;及bias=偏移量。 Where: A=(color score+length score)×2; s=steepness; and bias=offset.

且該辨識資料庫13係利用上述(公式1)先算出位置誤差(position error)，再將偏移量預設為0、陡峭度預設為1，且A=(顏色分數+長度分數)×2，再代入公式2，即可算出位置分數(PS)。 The identification database 13 first computes the position error with (Formula 1); then, with the offset preset to 0, the steepness preset to 1, and A = (color score + length score) × 2, it substitutes into Formula 2 to obtain the position score (PS).
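The Formula 1/Formula 2 pipeline can be sketched as below. The standard logistic function is assumed for the S-curve (the patent defines its own S-shaped curve in Figure 8, which may differ); under that assumption the sketch reproduces the first worked example, where the position error is 100 and the position score is 5:

```python
import math

def position_error(p_target, p_other, eps=0.01):
    """Formula 1: inverse of the squared 3-D distance plus a small epsilon."""
    d2 = sum((a - b) ** 2 for a, b in zip(p_target, p_other))
    return 1.0 / (d2 + eps)

def position_score(color_s, length_s, pos_err, s=1.0, bias=0.0):
    """Formula 2: PS = A * sigmoid(s*(position error - bias)) - A/2,
    with A = (color score + length score) * 2. Standard logistic assumed."""
    A = (color_s + length_s) * 2
    sig = 1.0 / (1.0 + math.exp(-s * (pos_err - bias)))
    return A * sig - A / 2

# First worked example: candidate at the same position as the target.
err = position_error((0, 0, 0), (0, 0, 0))
print(err)                                  # -> 100.0
print(round(position_score(3, 2, err), 3))  # -> 5.0
```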

七、完成辨識目標影像步驟S7：該辨識資料庫13係內建下列(公式3)： 7. Completing identification of the target image, step S7: the identification database 13 has the following (Formula 3) built in:

S = CS + LS + PS；RR = w₁·(CS/CFS) + w₂·(LS/LFS) + w₃·(PS/PFS)  (公式3 / Formula 3)

其中:w 1w 2w 3是權重因子(w 1+w 2+w 3=1);S=總分;RR=辨識率;CS=顏色分數;CFS=顏色滿分;LS=長度分數;LFS=長度滿分;PS=位置分數;及PFS=位置滿分。 Among them: w 1 , w 2 and w 3 are weighting factors ( w 1 + w 2 + w 3 =1); S=total score; RR=recognition rate; CS=color score; CFS=color full score; LS=length score ; LFS = Length Full Score; PS = Position Score; and PFS = Position Full Score.

利用公式3,即可算出總分(S)。 Using formula 3, the total score (S) can be calculated.

該第二無人機視野影像M2中對應總分最高之人物影像，即為該目標人物影像A，進而完成辨識。 The person image with the highest total score in the second UAV field-of-view image M2 is the target person image A, completing the identification.
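Assuming, as in the worked examples that follow, that the total score sums the raw color, length, and position scores, the final selection over all people in the second frame is a simple argmax (the candidate values below are the scores from those examples):

```python
def total_score(cs, ls, ps):
    """Total score as used in the worked examples: raw sum of the three scores."""
    return cs + ls + ps

candidates = {  # (color score, length score, position score) per detected person
    "person_1": (3, 2, 5.0),
    "person_2": (2, 2, 0.0),
    "person_3": (1, 2, -2.625),
}
totals = {name: total_score(*v) for name, v in candidates.items()}
best = max(totals, key=totals.get)  # the locked-on target
print(best, totals[best])  # -> person_1 10.0
```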

舉例來講：首先，第一人為特定目標，已知條件為：黑髮（該顏色資料庫11）、短髮（該服裝儀容資料庫12）、橙色上衣（該顏色資料庫11及該服裝儀容資料庫12）、藍褲（該顏色資料庫11及該服裝儀容資料庫12）、長褲（該服裝儀容資料庫12）。 For example: the first person is the specific target, with known attributes: black hair (color database 11), short hair (clothing-appearance database 12), an orange top (databases 11 and 12), blue trousers (databases 11 and 12), and long trousers (database 12).

而照片中有6人待測（如第6圖所示），如下表1及下表2所示，舉前3人為例說明： Six people in the photo are to be tested (as shown in Figure 6), as summarized in Tables 1 and 2 below; the first three are used as examples:

[表1、表2：Tables 1 and 2 are rendered as images in the original document.]

則第1人：黑髮、短髮、橙上衣、藍褲、長褲；顏色分數得3分；長度分數得2分；所以共得五分（3+2=5）。 The first person has black hair, short hair, an orange top, blue trousers, and long trousers: the color score is 3 and the length score is 2, for a combined 5 points (3+2=5).

A=(顏色分數+長度分數)×2=5×2=10；s=1；bias=0；ε=0.01；由於目標人物三維座標(Xp,Yp,Zp)=(0,0,0)，而閒雜人物三維座標(Xb,Yb,Zb)=(0,0,0)，所以依據公式1：位置誤差=1/(0+0.01)=100；Sigmoid(100)=1（如第8圖所示）；位置分數=10×Sigmoid(1×(100−0))−10/2=10×1−5=5。 A = (color score + length score) × 2 = 5 × 2 = 10; s = 1; bias = 0; ε = 0.01. Since the target's 3D coordinate (Xp, Yp, Zp) = (0, 0, 0) and this person's coordinate (Xb, Yb, Zb) = (0, 0, 0), Formula 1 gives position error = 1/(0+0.01) = 100; Sigmoid(100) = 1 (as shown in Figure 8); position score = 10 × Sigmoid(1×(100−0)) − 10/2 = 10×1 − 5 = 5.

故,第一人之總分=3+2+5=10分。 Therefore, the total score of the first person = 3 + 2 + 5 = 10 points.

第2人：黑髮、短髮、藍上衣、藍褲、長褲；顏色分數得2分（黑髮及藍褲）；長度分數得2分（短髮及長褲）；共得4分。 The second person: black hair, short hair, blue top, blue trousers, long trousers; color score 2 (black hair and blue trousers); length score 2 (short hair and long trousers); 4 points combined.

A=(顏色分數+長度分數)×2=(2+2)×2=8；s=1；bias=0；由於目標人物三維座標(Xp,Yp,Zp)=(0,0,0)，而閒雜人物三維座標(Xb,Yb,Zb)=(0,1,0)；位置誤差=1/(1+0.01)=0.99≒1；Sigmoid(1)=0.5（如第8圖所示）；位置分數=8×Sigmoid(1×(1−0))−8/2=8×0.5−4=0。 A = (color score + length score) × 2 = (2+2) × 2 = 8; s = 1; bias = 0. Since the target is at (Xp, Yp, Zp) = (0, 0, 0) and this person at (Xb, Yb, Zb) = (0, 1, 0), position error = 1/(1+0.01) = 0.99 ≈ 1; Sigmoid(1) = 0.5 (per the S-curve of Figure 8); position score = 8 × Sigmoid(1×(1−0)) − 8/2 = 8×0.5 − 4 = 0.

故，第2人之總分=2+2+0=4分。 Therefore, the total score of the second person = 2 + 2 + 0 = 4 points.

第3人：黑髮、短髮、長褲；顏色分數得1分（黑髮）；長度分數得2分（短髮及長褲）；共得3分；A=3×2=6；s=1；bias=0；由於目標人物三維座標(Xp,Yp,Zp)=(0,0,0)，而閒雜人物三維座標(Xb,Yb,Zb)=(2,0,0)；位置誤差=1/(4+0.01)≒0.25；Sigmoid(0.25)=0.0625（如第8圖所示）；位置分數=6×Sigmoid(1×(0.25−0))−6/2=6×0.0625−3=−2.625；故，第3人之總分=1+2+(−2.625)=0.375分。 The third person: black hair, short hair, long trousers; color score 1 (black hair); length score 2 (short hair and long trousers), 3 points combined; A = 3 × 2 = 6; s = 1; bias = 0. Since the target is at (0, 0, 0) and this person at (2, 0, 0), position error = 1/(4+0.01) ≈ 0.25; Sigmoid(0.25) = 0.0625 (per Figure 8); position score = 6 × Sigmoid(1×(0.25−0)) − 6/2 = 6×0.0625 − 3 = −2.625; therefore the third person's total score = 1 + 2 + (−2.625) = 0.375 points.

本發明之優點及功效係如下所述： The advantages and effects of the present invention are as follows:

[1]採計分制判別相當特別。本案除了空間位置（相當於位置分數）之判斷外，還導入「顏色分數」與「長度分數」之判斷，有別於公知技術以整體影像進行辨識。故，採計分制判別相當特別。 [1] The scoring-based discrimination is quite distinctive. Besides judging spatial position (the position score), this invention also introduces a color score and a length score, unlike known techniques that recognize the whole image. The scoring-based discrimination is therefore quite distinctive.

[2]判斷之準確性高。本案主要採用「位置分數」、「顏色分數」與「長度分數」三者之綜合判斷，因此，整體之準確性高於傳統僅用空間位置（相當於位置分數）之判斷。 [2] High judgment accuracy. This invention combines the position score, color score, and length score, so its overall accuracy exceeds that of the traditional judgment based on spatial position (the position score) alone.

[3]可減少電腦運算資源因而適用於機上電腦。本案只擷取影像中之「位置分數」、「顏色分數」與「長度分數」進行運算比對，無需對大量的影像進行運算比對，可大幅減少運算量。故，可減少電腦運算資源因而適用於機上電腦。 [3] Reduced computing-resource demand, making it suitable for an on-board computer. Only the position, color, and length scores are extracted from the image for comparison, so there is no need to compare large amounts of image data, greatly reducing the computational load. The method therefore fits an on-board computer.

[4]可以彈性微調參數。由於本案採用S狀函數（Sigmoid函數或稱S函數），其中有一個陡峭度(s)及一個偏移量(bias)可以進行調整，若原有預設之陡峭度(s)=1及偏移量(bias)=0並不適合時，可以彈性修改陡峭度(s)及偏移量(bias)，就有可能調整為更佳之狀況。因此，可以彈性微調參數。 [4] Parameters can be flexibly fine-tuned. Because this invention uses an S-shaped (sigmoid) function, its steepness (s) and offset (bias) can be adjusted: when the defaults s=1 and bias=0 are unsuitable, they can be modified to reach a better operating point. The parameters can therefore be flexibly fine-tuned.
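A brief sketch of the effect of these two knobs, assuming the standard logistic curve (the patent's own S-curve in Figure 8 may differ):

```python
import math

def s_curve(x, s=1.0, bias=0.0):
    """Formula 2's squashing step, standard logistic function assumed."""
    return 1.0 / (1.0 + math.exp(-s * (x - bias)))

# Defaults s=1, bias=0: the curve crosses 0.5 at x=0.
print(s_curve(0.0))                                # -> 0.5
# A larger steepness sharpens the transition around the bias point...
print(s_curve(1.0, s=5.0) > s_curve(1.0, s=1.0))   # -> True
# ...and a positive bias moves the 0.5 crossing to x = bias.
print(s_curve(2.0, bias=2.0))                      # -> 0.5
```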

以上僅是藉由較佳實施例詳細說明本發明，對於該實施例所做的任何簡單修改與變化，皆不脫離本發明之精神與範圍。 The above describes the present invention in detail only through preferred embodiments; any simple modification or variation of these embodiments does not depart from the spirit and scope of the present invention.


Claims (3)

一種無人機目標辨識之處理方法,係包括:一、準備步驟:準備一無人機,該無人機係具有相互連通之一顏色資料庫、一服裝儀容資料庫及一辨識資料庫;該顏色資料庫係內建複數筆色系;該服裝儀容資料庫係內建複數筆服裝儀容資訊;二、擷取目標影像步驟:控制該無人機用以在一時間=t時,追蹤一目標人物而取得一第一無人機視野影像,該第一無人機視野影像係具有一目標人物影像,該目標人物影像係包括一目標頭部影像、一目標上半身影像及一目標下半身影像;其中:該目標頭部影像係具有一第一長度乘以一第一寬度個像素,該目標頭部影像中佔據最多像素數量之色系係被定義為一目標頭部顏色;該第一長度係被定義為一目標頭部長度;該目標上半身影像係具有一第二長度乘以一第二寬度個像素,該目標上半身影像中佔據最多像素數量之色系係被定義為一目標上半身顏色;該目標下半身影像係具有一第三長度乘以一第三寬度個像素,該目標下半身影像中佔據最多像素數量之色系係被定義為一目標下半身顏色,該第三長度係被定義為一目標下半身長度;又,取得該無人機之拍攝角度,及該無人機與該目標人物之距離,即可由該辨識資料庫換算出該目標人物之空間位置,且將其定義為一目標三維座標(Xp,Yp,Zp);三、持續追蹤目標影像步驟:控制該無人機用以在一時間=t+1時,持續追蹤該目標人物而取得一第二無人機視野影像,該第二無人機視野影像具有該目標 人物影像及至少一閒雜人物影像,該至少一閒雜人物影像係對應鄰近該目標人物之一閒雜人物,且該至少一閒雜人物影像係包括一閒雜頭部影像、一閒雜上半身影像及一閒雜下半身影像;其中:該閒雜頭部影像係具有一第四長度乘以一第四寬度個像素,該閒雜頭部影像中佔據最多像素數量之色系係被定義為一閒雜頭部顏色,該第四長度係被定義為一閒雜頭部長度;該閒雜上半身影像係具有一第五長度乘以一第五寬度個像素,該閒雜上半身影像中佔據最多像素數量之色系係被定義為一閒雜上半身顏色;該閒雜下半身影像係具有一第六長度乘以一第六寬度個像素,該閒雜下半身影像中佔據最多像素數量之色系係被定義為一閒雜下半身顏色,該第六長度係被定義為一閒雜下半身長度;又,取得該無人機之拍攝角度,及該無人機與該至少一閒雜人物之距離,即可由該辨識資料庫換算出該閒雜人物之空間位置,且將其定義為一閒雜人物三維座標(Xb,Yb,Zb);四、顏色辨識計算步驟:該辨識資料庫係內建一顏色分數,其係預設為0分,並當該辨識資料庫辨識該閒雜頭部顏色等於該目標頭部顏色時,則該顏色分數加1分,否則加0分;當該辨識資料庫辨識該閒雜上半身顏色等於該目標上半身顏色時,則加1分,否則加0分;當該辨識資料庫辨識該閒雜下半身顏色等於該目標下半身顏色時,則加1分,否則加0分; 五、長度辨識計算步驟:該辨識資料庫係內建一長度分數,其係預設為0分,並當該辨識資料庫辨識該閒雜頭部長度等於該目標頭部長度之±N%時,則將該長度分數加1分,否則加0分,且0<N<20;當該辨識資料庫辨識該閒雜下半身長度等於該目標下半身長度之±N%時,則加1分,否則加0分,且0<N<20;六、位置辨識計算步驟:該辨識資料庫係至少內建下列(公式1)及(公式2):
position error = 1 / ((Xp−Xb)² + (Yp−Yb)² + (Zp−Zb)² + ε)  (公式1)
其中:position error=位置誤差;ε=係數,其係0.0001<ε<0.1;PS=A×sigmoid(s(position error-bias))-A/2 (公式2)其中:A=(顏色分數+長度分數)×2;s=陡峭度;及bias=偏移量;且該辨識資料庫係利用上述(公式1),先算出位置誤差(position error),再將偏移量預設為0,陡峭度預設為1,且該A=顏色分數+長度分數,再代入公式2,即可算出位置分數(PS);七、完成辨識目標影像步驟:該辨識資料庫係內建下列(公式3):
S = CS + LS + PS；RR = w₁·(CS/CFS) + w₂·(LS/LFS) + w₃·(PS/PFS)  (公式3)
其中:w 1w 2w 3是權重因子(w 1+w 2+w 3=1);S=總分; RR=辨識率;CS=顏色分數;CFS=顏色滿分;LS=長度分數;LFS=長度滿分;PS=位置分數;及PFS=位置滿分;利用公式3,即可算出總分(S);該第二無人機視野影像中對應總分最高之人物影像,即為該目標人物影像,進而完成辨識。
A processing method for UAV target identification, which includes: 1. Preparing steps: prepare a UAV, which has a color database, a clothing and appearance database, and an identification database connected to each other; the color database A plurality of color systems are built in; the clothing and appearance database is built with a plurality of clothing and appearance information; 2. The step of capturing target images: control the drone to track a target person at a time = t to obtain a The first UAV field of view image, the first UAV field of view image has a target person image, and the target person image includes a target head image, a target upper body image and a target lower body image; wherein: the target head image It has a first length multiplied by a first width of pixels, and the color system occupying the largest number of pixels in the target head image is defined as a target head color; the first length is defined as a target head length; the target upper body image has a second length multiplied by a second width of pixels, and the color system occupying the largest number of pixels in the target upper body image is defined as a target upper body color; the target lower body image has a first Three lengths multiplied by a third width pixel, the color system occupying the largest number of pixels in the target lower body image is defined as a target lower body color, and the third length is defined as a target lower body length; The shooting angle of the drone and the distance between the drone and the target person can be converted from the recognition database to the spatial position of the target person, and defined as a target three-dimensional coordinates (Xp, Yp, Zp); 3. 
The step of continuously tracking the target image: controlling the UAV to continuously track the target person at a time=t+1 to obtain a second UAV field of view image, the second UAV field of view image has the target person image and at least An image of a person, the at least one image of a person corresponds to a person adjacent to the target person, and the at least one image of a person includes a head image, an image of an upper body and an image of a lower body; wherein: the The miscellaneous head image has a fourth length multiplied by a fourth width of pixels, the color system occupying the largest number of pixels in the miscellaneous head image is defined as a miscellaneous head color, and the fourth length is defined as A length of the head; the upper body image has a fifth length multiplied by a fifth width pixels, and the color system occupying the largest number of pixels in the upper body image is defined as an upper body color; the lower body image It has a sixth length multiplied by a sixth width pixel, the color system occupying the largest number of pixels in the lower body image is defined as a lower body color, and the sixth length is defined as a lower body length; and , obtain the shooting angle of the UAV, and the distance between the UAV and the at least one idle person, the spatial position of the idle person can be converted from the identification database, and be defined as a three-dimensional coordinate of an idle person (Xb, Yb, Zb); 4. 
Calculation steps for color recognition: a color score is built into the recognition database, which is preset to 0 points, and when the recognition database recognizes that the color of the idle head is equal to the color of the target head , then add 1 point to the color score, otherwise add 0 points; when the recognition database recognizes that the color of the upper body of the idler is equal to the color of the upper body of the target, add 1 point, otherwise add 0 points; when the recognition database identifies the lower body of the idler When the color is equal to the color of the lower body of the target, add 1 point, otherwise add 0 points; 5. Length identification calculation steps: The identification database has a built-in length score, which is 0 points by default, and is used as the identification database When it is identified that the head length of the idler is equal to ±N% of the head length of the target, add 1 point to the length score, otherwise add 0 points, and 0<N<20; when the identification database identifies the length of the lower body of the idler as equal to When the length of the lower body of the target is ±N%, add 1 point, otherwise add 0 points, and 0<N<20; 6. Position recognition calculation steps: The recognition database is built with at least the following (formula 1) and (formula 2):
position error = 1 / ((Xp−Xb)² + (Yp−Yb)² + (Zp−Zb)² + ε)  (Formula 1)
Among them: position error=position error; ε=coefficient, its system is 0.0001<ε<0.1; PS = A × sigmoid ( s ( position error-bias ))- A /2 (formula 2) where: A=(color score+ length fraction)×2; s=steepness; and bias=offset; and the identification database uses the above (formula 1), first calculates the position error (position error), and then presets the offset to 0, Steepness is preset to 1, and the A=color score+length score, and then substituted into formula 2, the position score (PS) can be calculated; 7. Complete the steps of identifying the target image: the identification database is built in the following (formula 3 ):
S = CS + LS + PS; RR = w₁·(CS/CFS) + w₂·(LS/LFS) + w₃·(PS/PFS)  (Formula 3)
Among them: w 1 , w 2 and w 3 are weighting factors ( w 1 + w 2 + w 3 =1); S=total score; RR=recognition rate; CS=color score; CFS=color full score; LS=length score ;LFS=Length Full Score; PS=Position Score; and PFS=Position Full Score; use formula 3 to calculate the total score (S); The image of the person, and then complete the identification.
The processing method for UAV target identification according to claim 1, wherein the plurality of color categories at least includes red, orange, yellow, green, blue, purple, black, and white.

The processing method for UAV target identification according to claim 1, wherein the plurality of pieces of clothing-and-appearance information at least includes long hair, short hair, trousers, and shorts.
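The dependent claims enumerate eight color categories for the color database. One simple way such a database could map a sampled pixel color to one of these categories is nearest-reference quantization in RGB space; this is a hypothetical sketch, not the patent's method, and the reference RGB values below are illustrative assumptions.

```python
# Hypothetical reference palette for the eight claimed color categories.
# The RGB anchor values are illustrative, not taken from the patent.
REFERENCE_COLORS = {
    "red": (255, 0, 0), "orange": (255, 165, 0), "yellow": (255, 255, 0),
    "green": (0, 128, 0), "blue": (0, 0, 255), "purple": (128, 0, 128),
    "black": (0, 0, 0), "white": (255, 255, 255),
}

def classify_color(rgb):
    """Assign a sampled (R, G, B) tuple to the nearest reference
    category by squared Euclidean distance in RGB space."""
    return min(REFERENCE_COLORS,
               key=lambda name: sum((a - b) ** 2
                                    for a, b in zip(rgb, REFERENCE_COLORS[name])))
```

Quantizing both the target's and each bystander's segment colors into the same eight categories is what makes the equality tests of the color-score step well defined on a lightweight MCU.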
TW111138301A 2022-10-07 2022-10-07 Processing method of target identification for unmanned aerial vehicle (uav) TWI802514B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW111138301A TWI802514B (en) 2022-10-07 2022-10-07 Processing method of target identification for unmanned aerial vehicle (uav)

Publications (2)

Publication Number Publication Date
TWI802514B true TWI802514B (en) 2023-05-11
TW202416231A TW202416231A (en) 2024-04-16

Family

ID=87424441

Family Applications (1)

Application Number Title Priority Date Filing Date
TW111138301A TWI802514B (en) 2022-10-07 2022-10-07 Processing method of target identification for unmanned aerial vehicle (uav)

Country Status (1)

Country Link
TW (1) TWI802514B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180350054A1 (en) * 2017-06-05 2018-12-06 Hana Resources, Inc. Organism growth prediction system using drone-captured images
US20180357834A1 (en) * 2015-12-15 2018-12-13 Intel Corporation Generation of synthetic 3-dimensional object images for recognition systems
US20210132612A1 (en) * 2019-03-08 2021-05-06 SZ DJI Technology Co., Ltd. Techniques for sharing mapping data between an unmanned aerial vehicle and a ground vehicle
CN113516713A (en) * 2021-06-18 2021-10-19 广西财经学院 Unmanned aerial vehicle self-adaptive target tracking method based on pseudo twin network
CN114758119A (en) * 2022-04-20 2022-07-15 北京航空航天大学 Sea surface recovery target detection method based on eagle eye-imitated vision and similarity
CN115115859A (en) * 2022-06-16 2022-09-27 天津大学 Long linear engineering construction progress intelligent identification and analysis method based on unmanned aerial vehicle aerial photography
