TWI817265B - Object tracking and guiding system on conveyor belt and method thereof

Object tracking and guiding system on conveyor belt and method thereof

Info

Publication number
TWI817265B
TWI817265B
Authority
TW
Taiwan
Prior art keywords
information
target object
depth
tracking
conveyor belt
Prior art date
Application number
TW110144380A
Other languages
Chinese (zh)
Other versions
TW202320919A (en)
Inventor
黃穎竹
高薇雅
黃任鴻
Original Assignee
財團法人工業技術研究院
Priority date
Filing date
Publication date
Application filed by 財團法人工業技術研究院 filed Critical 財團法人工業技術研究院
Priority to TW110144380A priority Critical patent/TWI817265B/en
Priority to CN202111473322.4A priority patent/CN116177148A/en
Publication of TW202320919A publication Critical patent/TW202320919A/en
Application granted granted Critical
Publication of TWI817265B publication Critical patent/TWI817265B/en

Classifications

    • B65G43/08 Control devices operated by article or material being fed, conveyed or discharged
    • B65G15/00 Conveyors having endless load-conveying surfaces, i.e. belts and like continuous members, to which tractive effort is transmitted by means other than endless driving elements of similar configuration
    • B65G2203/0216 Codes or marks on the article
    • B65G2203/0233 Position of the article
    • B65G2203/041 Camera
    • Y02P90/30 Computing systems specially adapted for manufacturing

Abstract

An object tracking and guiding system on a conveyor belt includes a data acquisition unit, a data computing unit and a display unit. The data acquisition unit includes a color/depth camera and a barcode reading device. The color/depth camera is used to capture color image information and depth information of a target object. The barcode reading device is used to read an object category of the target object. The data computing unit is used to receive the color image information and the depth information to track a position of the target object on the conveyor belt, and to generate mark information according to the object category of the target object. The display unit includes at least one projector, and the projector projects the mark information onto the target object according to the position of the target object.

Description

Object tracking and guiding system on conveyor belt and method thereof

The present invention relates to logistics management, and in particular to an object tracking and guidance system on a conveyor belt and a method thereof.

In traditional logistics management, items must be identified and sorted before they are delivered to consumers, and the sorted items are gathered in the corresponding freight area to await shipment. However, existing methods can sort only while the conveyor belt runs at a constant speed, because the position of each object is calculated from the belt speed and elapsed time, which reduces sorting efficiency. In addition, existing methods detect objects on the conveyor belt with a color camera alone, so a fast-moving object may be misjudged because of motion blur, and an object whose color is close to the color of the conveyor belt may not be detected at all.

The present invention relates to an object tracking and guidance system on a conveyor belt and a method thereof, which improve sorting efficiency and reduce misjudgment.

According to one aspect of the present invention, an object tracking and guidance system on a conveyor belt is provided, including a data acquisition unit, a data computing unit and a display unit. The data acquisition unit includes a color/depth camera and a barcode reading device; the color/depth camera captures an object image and depth information of a target object, and the barcode reading device reads an object category of the target object. The data computing unit receives the object image and the depth information to track a position of the target object on the conveyor belt, and generates mark information according to the object category of the target object. The display unit includes at least one projector, and the projector projects the mark information onto the target object according to the position of the target object.

The effect of the present invention is to improve sorting efficiency and reduce misjudgment.

In order to provide a better understanding of the above and other aspects of the present invention, embodiments are described in detail below with reference to the accompanying drawings:

100: object tracking and guidance system
101: target object
101a: first target object
101b: second target object
101c: third target object
110: data acquisition unit
112: color/depth camera
113: color image information
114: barcode reading device
115: depth information
117: category information
120: data computing unit
122: information processing module
123: binarization matrix
124: object detection and tracking module
125: position information
126: information matching module
127: mark information
130: display unit
132: projector
132a: first projector
132b: second projector
132c: third projector
A, B: marks

Figures 1A and 1B are schematic diagrams of an object tracking and guidance system according to an embodiment of the present invention; Figure 2 is a component block diagram of the object tracking and guidance system according to an embodiment of the present invention; Figure 3 is a schematic diagram of a data processing flow according to an embodiment of the present invention; Figure 4 is a flowchart of an object tracking and guidance method according to an embodiment of the present invention; and Figures 5A and 5B are block diagrams of the operation of the components of the data computing unit according to an embodiment of the present invention.

The technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application fall within the scope of protection of the present application. In the following description, the same or similar reference numerals denote the same or similar components.

Please refer to Figures 1A, 1B, 2 and 3. Figures 1A and 1B are a schematic side view and a schematic top view, respectively, of an object tracking and guidance system according to an embodiment of the present invention; Figure 2 is a component block diagram of the object tracking and guidance system 100 according to an embodiment of the present invention; and Figure 3 is a schematic diagram of a data processing flow according to an embodiment of the present invention. The object tracking and guidance system 100 can be used on a conveyor belt 103 and includes a data acquisition unit 110, a data computing unit 120 and a display unit 130. The data acquisition unit 110 may include a color/depth camera 112 and a barcode reading device 114. The data computing unit 120 is, for example, a combination of a microprocessor, a digital signal processor and/or a graphics processor, and the display unit 130 may include a projector 132.

In Figures 1A and 1B, before a logistics operation delivers items to consumers, the items must be identified and sorted, and the sorted items are gathered in the corresponding freight area to await shipment. The object tracking and guidance system 100 of this embodiment identifies and tracks the target object 101 through the color/depth camera 112 of the data acquisition unit 110, and then projects the corresponding mark information 127 (for example, marks A and B) onto the target object 101 through the projector 132 of the display unit 130, so that sorting personnel or sorting equipment can classify the objects.

The color/depth camera 112 is disposed above the conveyor belt 103, and one or more cameras may be used. Each color/depth camera 112 captures the target object 101 within a predetermined viewing-angle range, and the color/depth cameras 112 may be arranged at intervals, one after another, above the conveyor belt 103, so that the moving target object 101 can be identified and tracked.

In addition, the projector 132 is disposed above the conveyor belt 103, and one or more projectors may be used. Each projector 132 can project one or more pieces of mark information 127 onto at least one target object 101 according to the position of the target object 101, so that sorting personnel or sorting equipment can classify the objects.

Referring to Figures 2 and 3, the color/depth camera 112 of the data acquisition unit 110 identifies a target object 101 on the conveyor belt 103 and obtains color image information 113 and depth information 115 of the target object 101 from its image. The color image information 113 may include a red-green-blue (RGB) image or a red-green-blue-white (RGBW) image of the target object 101. The depth information 115 is, for example, a depth map: the distance between the target object 101 and the color/depth camera 112 is measured by time-of-flight ranging, and the relative distance between the target object 101 and the color/depth camera 112 is converted into corresponding depth values to obtain the depth map. In this embodiment, the data acquisition unit 110 determines the actual boundary of the target object 101 by referring to the color image information 113 of the target object 101, and then corrects the depth information 115 of the target object 101 accordingly to obtain a more accurate depth map.
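As a concrete illustration of this depth-map step, the Python/OpenCV sketch below converts per-pixel camera-to-object distances into an 8-bit depth map and uses edges from the color image to trim depth values that fall outside the object boundary. The distance array, the clipping range and the edge-based refinement rule are assumptions made for illustration; the patent does not specify the correction algorithm.

```python
import cv2
import numpy as np

# Minimal sketch (not the patented algorithm): map camera-to-surface distances from a
# generic RGB-D camera to 0-255 depth values, then keep only the depth pixels enclosed
# by edges found in the color image. `distance_m`, `color_bgr` and the 0.4-1.2 m range
# are illustrative assumptions.

def build_depth_map(distance_m: np.ndarray, near_m: float = 0.4, far_m: float = 1.2) -> np.ndarray:
    """Map camera-to-surface distances (meters) to 0-255 depth values (nearer = larger)."""
    clipped = np.clip(distance_m, near_m, far_m)
    normalized = (far_m - clipped) / (far_m - near_m)
    return (normalized * 255).astype(np.uint8)

def refine_with_color(depth_map: np.ndarray, color_bgr: np.ndarray) -> np.ndarray:
    """Illustrative boundary correction: keep depth only inside contours traced from color edges.
    Assumes the object outline forms a closed edge loop in the color image."""
    gray = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    mask = np.zeros_like(depth_map)
    cv2.drawContours(mask, contours, -1, 255, thickness=cv2.FILLED)
    return cv2.bitwise_and(depth_map, depth_map, mask=mask)
```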

In addition, in this embodiment, the data acquisition unit 110 can also use the depth information 115 to assist image recognition of the target object 101. This avoids misjudgment caused by motion blur when the target object 101 moves quickly, and it also avoids the problem that the target object 101 cannot be detected when its color is close to the color of the conveyor belt 103.

Referring to Figures 2 and 3, the barcode reading device 114 of the data acquisition unit 110 can also generate category information 117 according to the category of the target object 101. The target object 101 carries, for example, a two-dimensional barcode or a matrix code (such as a QR code) that records the product name, model, specification, item and other information of the target object 101. When the target object 101 passes beneath the barcode reading device 114, the barcode reading device 114 emits infrared light to scan the two-dimensional barcode or matrix code, and thereby obtains the category information 117 of the target object 101. The barcode reading device 114 is, for example, an optical scanner or a laser scanner.
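For illustration only, the snippet below parses a decoded code payload into category information 117. The field layout of the payload is a hypothetical assumption; the patent only states that the code records the product name, model, specification and item.

```python
from dataclasses import dataclass

# Hypothetical payload layout; the patent does not define how the fields are encoded.
# Assume the reader returns a string such as "name=Widget;model=W-12;spec=500g;category=2".

@dataclass
class CategoryInfo:
    name: str
    model: str
    spec: str
    category: str

def parse_payload(payload: str) -> CategoryInfo:
    fields = dict(pair.split("=", 1) for pair in payload.split(";") if "=" in pair)
    return CategoryInfo(
        name=fields.get("name", ""),
        model=fields.get("model", ""),
        spec=fields.get("spec", ""),
        category=fields.get("category", ""),
    )

info = parse_payload("name=Widget;model=W-12;spec=500g;category=2")
print(info.category)   # -> "2", later used to look up the projected mark
```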

Referring to Figures 2 and 3, the data computing unit 120 receives the color image information and the depth information 115 of the target object 101 to track the target object 101 and obtain position information 125 of the target object 101 on the conveyor belt 103. In other words, because the data computing unit 120 uses real-time image tracking and positioning as the source of the position information 125, the position of the target object 101 does not have to be calculated from the moving speed and travel time of the conveyor belt 103. Even when the conveyor belt 103 moves at a non-constant speed or intermittently (for example, it accelerates, or stops and then moves again), the target object 101 can still be tracked and located in real time.

In addition, the data computing unit 120 also generates mark information 127 according to the category information 117 of the target object 101. That is, once the category information 117 of the target object 101 has been obtained (for example, by scanning the two-dimensional barcode or matrix code), the data computing unit 120 generates mark information 127 corresponding to that category information 117. The mark information 127 may be, for example, a symbol, text, a number or a geometric pattern; the present invention is not limited in this respect.

Referring to Figures 2 and 3, after the data computing unit 120 obtains the position information 125 and the mark information 127 of the target object 101, it transmits them to the projector 132 of the display unit 130. The projector 132 of the display unit 130 then projects the mark information 127 (for example, marks A and B) onto the target object 101 according to the position information 125 of the target object 101, so that sorting personnel or sorting equipment can classify the objects.

Referring to Figure 1B, there are, for example, three projectors 132, and each projector 132 can project one piece of mark information (for example, mark A or B) onto a target object 101 that has moved to a predetermined position. For example, three target objects 101a, 101b and 101c move from right to left along the conveyor belt 103 in sequence and pass the three cameras. Whenever the first target object 101a moves beneath or adjacent to the first projector 132a, the second projector 132b or the third projector 132c, that projector projects mark A onto the first target object 101a according to its category. Likewise, whenever the second target object 101b moves beneath or adjacent to the first projector 132a, the second projector 132b or the third projector 132c, that projector projects mark B onto the second target object 101b according to its category; and whenever the third target object 101c moves beneath or adjacent to the first projector 132a, the second projector 132b or the third projector 132c, that projector projects mark C onto the third target object 101c according to its category.
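The patent does not describe how a tracked camera-space position is converted into projector coordinates. One common approach, assumed here purely for illustration, is a homography calibrated once between the camera image plane and the projector image plane over the flat belt surface; the point pairs below are placeholder calibration values.

```python
import cv2
import numpy as np

# Assumption for illustration: the camera and projector both view the (flat) belt
# surface, so a single homography H maps camera pixels to projector pixels.
# The four corresponding point pairs would come from a one-time calibration.
camera_pts = np.float32([[100, 80], [1180, 80], [1180, 640], [100, 640]])
projector_pts = np.float32([[0, 0], [1920, 0], [1920, 1080], [0, 1080]])
H, _ = cv2.findHomography(camera_pts, projector_pts)

def to_projector(position_xy):
    """Map a tracked object's camera-space position to projector space."""
    src = np.float32([[position_xy]])            # shape (1, 1, 2) as perspectiveTransform expects
    dst = cv2.perspectiveTransform(src, H)
    return tuple(dst[0, 0])

print(to_projector((640, 360)))   # where to draw the mark in the projector frame
```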

Afterward, the first target object 101a onto which mark A is projected can be sorted into a first category area (not shown), the second target object 101b onto which mark B is projected can be sorted into a second category area, and the third target object 101c onto which mark C is projected can be sorted into a third category area, completing the sorting of all the objects. Although this embodiment places the color/depth camera 112 and the projector 132 above the conveyor belt 103 as an example, the color/depth camera 112 and the projector 132 may also be placed elsewhere, as long as they are arranged along the moving direction of the target object 101 and can identify the target object 101, track its position and project the mark information 127; the present invention is not limited in this respect.

Although the embodiment above shows a single projector 132 projecting a single piece of mark information 127, the projector 132 can also project multiple pieces of mark information 127 for multiple target objects 101: as long as the data computing unit 120 simultaneously sends the position information 125 and mark information 127 of multiple target objects 101 to the projector 132, the projector 132 can project the corresponding mark information 127 onto each target object 101. Similarly, if multiple target objects 101 move beneath one projector 132 at the same time, the projector 132 can likewise project the corresponding mark information 127 onto each of them; the present invention is not limited in this respect.

Please refer to Figure 4, which is a flowchart of an object tracking and guidance method according to an embodiment of the present invention. The object tracking and guidance method includes the following steps. In step S210, a depth camera 112 captures color image information 113 and depth information 115 of the target object 101. In step S220, a barcode reading device 114 reads category information 117 of the target object 101. In step S230, the color image information 113 and the depth information 115 are received to track the target object 101 and obtain position information 125 of the target object 101 on the conveyor belt 103. In step S240, mark information 127 is generated according to the category information 117 of the target object 101. In step S250, the mark information 127 is projected onto the target object 101 according to the position information 125 of the target object 101.

Steps S210 and S220 above are performed by the data acquisition unit 110 in Figure 2 to obtain the color image information 113, depth information 115 and category information 117 of the target object 101; steps S230 and S240 are performed by the data computing unit 120 in Figure 2 to obtain the position information 125 and mark information 127 of the target object 101; and step S250 is performed by the display unit 130 in Figure 2 to project the mark information 127.
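A high-level sketch of how steps S210-S250 could run as a loop is shown below. All helper objects and methods (capture_frame, read_barcode, track_objects, lookup_mark, project_mark) are hypothetical placeholders standing in for the units described above, not real APIs.

```python
# End-to-end sketch of steps S210-S250 as a processing loop (hypothetical helpers).

def run_sorting_loop(camera, scanner, tracker, projector, mark_table):
    while True:
        color, depth = camera.capture_frame()                # S210: color image + depth information
        category = scanner.read_barcode()                    # S220: category information (may be None)
        positions = tracker.track_objects(color, depth)      # S230: positions on the conveyor belt
        for obj_id, position in positions.items():
            mark = mark_table.lookup_mark(obj_id, category)  # S240: category -> mark information
            projector.project_mark(mark, position)           # S250: project the mark onto the object
```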

Please refer to Figures 2, 5A and 5B, which further explain the functions of the modules of the data computing unit 120; Figures 5A and 5B are block diagrams of the operation of the components of the data computing unit 120 according to an embodiment of the present invention. In this embodiment, the data computing unit 120 includes an information processing module 122, an object detection and tracking module 124 and an information matching module 126. In Figure 5A, the information processing module 122 performs grayscale conversion and gamma correction on the color image information 113, and then performs image processing such as binarization and background segmentation to separate the part of the color image information 113 that belongs to the target object 101 from the background that does not. In addition, the information processing module 122 performs depth filtering and binarization on the depth information 115 to extract the depth information 115 of the target object 101, and then computes a weighted average of the processed color image information 113 and depth information 115 of the target object 101 to obtain a binarization matrix 123.
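A minimal OpenCV sketch of this preprocessing stage is shown below. The gamma value, thresholds, depth cutoff and the equal fusion weights are illustrative assumptions, not values taken from the patent.

```python
import cv2
import numpy as np

# Sketch of the preprocessing stage: grayscale + gamma correction, background
# segmentation and binarization on the color image, depth filtering/binarization on the
# depth map, then a weighted fusion of the two binary maps.

def binarization_matrix(color_bgr, depth_map, belt_background_gray, gamma=1.5):
    # Grayscale conversion + gamma correction of the color image
    gray = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2GRAY)
    lut = np.array([((i / 255.0) ** (1.0 / gamma)) * 255 for i in range(256)], dtype=np.uint8)
    gray = cv2.LUT(gray, lut)

    # Background segmentation against a stored image of the empty belt, then binarization
    diff = cv2.absdiff(gray, belt_background_gray)
    _, color_mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)

    # Depth filtering + binarization: anything closer than the belt surface is foreground
    _, depth_mask = cv2.threshold(depth_map, 40, 255, cv2.THRESH_BINARY)

    # Weighted average of the two binary maps, re-binarized into the final matrix
    # (a pixel flagged by either cue is kept here)
    fused = cv2.addWeighted(color_mask, 0.5, depth_mask, 0.5, 0)
    _, binary = cv2.threshold(fused, 127, 255, cv2.THRESH_BINARY)
    return binary
```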

In addition, the object detection and tracking module 124 applies morphological processing and contour detection of the target object 101 to the binarization matrix 123, and stores the contour image and position coordinates of the target object 101 for subsequent tracking and comparison. The object detection and tracking module 124 then combines the recognized object image with its corresponding category information 117 and performs an object tracking computation to obtain, in real time, the position information 125 of the target object 101 on the conveyor belt 103 together with its object category.

In one embodiment, the morphological processing dilates and erodes the binarization matrix 123, for example with a closing operation, to remove noise. The contour detection uses, for example, border following algorithms to perform a topological analysis of the binarization matrix 123 and find the contour of the target object 101. The tracking computation uses, for example, a channel and spatial reliability (CSRT) tracker. The CSRT tracker computes histogram features and color-names features of the selected region and compares them with the features of the previous frame to determine the current position of the object.
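The sketch below strings these operations together with OpenCV: a morphological closing, contour detection via cv2.findContours (which implements a border-following algorithm), and a CSRT tracker initialized on each detected bounding box. The kernel size and minimum contour area are assumptions; cv2.TrackerCSRT_create requires the opencv-contrib-python package, and its exact namespace varies across OpenCV versions (for example cv2.legacy.TrackerCSRT_create in some 4.x builds).

```python
import cv2
import numpy as np

# Sketch of the detection/tracking stage over the binarization matrix.

def detect_and_track(binary, color_bgr):
    kernel = np.ones((5, 5), np.uint8)
    cleaned = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)   # closing = dilation then erosion

    contours, _ = cv2.findContours(cleaned, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    trackers = []
    for contour in contours:
        if cv2.contourArea(contour) < 500:       # ignore small noise blobs (assumed threshold)
            continue
        x, y, w, h = cv2.boundingRect(contour)   # stored position coordinates
        tracker = cv2.TrackerCSRT_create()
        tracker.init(color_bgr, (x, y, w, h))
        trackers.append(tracker)
    return trackers

def update_positions(trackers, next_frame):
    positions = []
    for tracker in trackers:
        ok, box = tracker.update(next_frame)     # compares features against the previous frame
        if ok:
            positions.append(box)
    return positions
```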

In addition, referring to Figure 5B, the information matching module 126 receives the position information 125 and category information 117 of the target object 101, and uses the category information 117 to query the corresponding mark information 127 in a database. As shown in Figure 5B, the database stores a table that maps each object category to its mark information 127: for example, when the object category is category_1 the corresponding mark is projection mark_1, when the object category is category_2 the corresponding mark is projection mark_2, and so on. The information matching module 126 can therefore generate the corresponding mark information 127 from the category information 117 of the target object 101 by table lookup. The projector 132 then projects the corresponding mark information 127 onto the target object 101 according to the position information 125 of the target object 101, as described above and not repeated here.
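A table lookup of this kind can be sketched in a few lines; the table contents below are placeholders, since the actual categories and marks would live in the system's database.

```python
# Table-lookup sketch for the information matching module: category information from
# the barcode is mapped to the mark information to be projected.

MARK_TABLE = {
    "category_1": "projection_mark_1",   # e.g. the letter "A"
    "category_2": "projection_mark_2",   # e.g. the letter "B"
    "category_3": "projection_mark_3",   # e.g. the letter "C"
}

def lookup_mark(category_info: str, default: str = "projection_mark_unknown") -> str:
    return MARK_TABLE.get(category_info, default)

# Paired with the tracked position, this is what the projector receives:
# (lookup_mark("category_2"), (x, y)) -> ("projection_mark_2", (512, 300))
```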

As described above, the object tracking and guidance system and method of the embodiments of the present invention avoid misjudgment caused by motion blur when the target object moves quickly, and also avoid the problem that the target object cannot be detected when its color is close to the color of the conveyor belt. In addition, because the system and method use real-time image tracking and recognition as the source of the position information of the target object, the position of the target object does not have to be calculated from the moving speed and travel time of the conveyor belt. Even when the conveyor belt moves at a non-constant speed or intermittently (for example, it accelerates, or stops and then moves again), the position coordinates of the target object can still be calculated accurately, achieving real-time tracking and rapid positioning of the target object.

In summary, although the present invention has been disclosed above by way of embodiments, the embodiments are not intended to limit the present invention. A person of ordinary skill in the art to which the present invention pertains can make various changes and modifications without departing from the spirit and scope of the present invention. Therefore, the scope of protection of the present invention shall be defined by the appended claims.

100: object tracking and guidance system
110: data acquisition unit
112: color/depth camera
114: barcode reading device
117: category information
120: data computing unit
122: information processing module
124: object detection and tracking module
126: information matching module
130: display unit
132: projector

Claims (10)

1. An object tracking and guidance system for use on a conveyor belt, the system comprising: a data acquisition unit, including a color/depth camera and a barcode reading device, wherein the color/depth camera captures color image information and depth information of a target object, and the barcode reading device reads category information of the target object; a data computing unit, configured to receive the color image information and the depth information to track the target object and obtain position information of the target object on the conveyor belt, and to generate mark information according to the category information of the target object, wherein the data computing unit includes an information processing module that performs grayscale conversion and gamma correction on the color image information and then performs binarization and background-segmentation image processing to obtain a binarization matrix, an object detection and tracking module that performs morphological processing and contour detection of the target object on the binarization matrix and stores the contour image and position coordinates of the target object, and an information matching module that receives the position information and the category information of the target object and uses the category information to query the corresponding mark information in a database; and a display unit, including at least one projector, wherein the projector projects the mark information onto the target object according to the position information of the target object.

2. The system of claim 1, wherein the depth information is used to assist image recognition of the target object.

3. The system of claim 1, wherein the target object has a two-dimensional barcode or a matrix code, and the barcode reading device emits infrared light to scan the two-dimensional barcode or the matrix code.

4. The system of claim 1, wherein the data computing unit detects the position information of the target object through real-time image tracking and positioning.

5. The system of claim 1, wherein the data computing unit generates different mark information according to different category information of the target object.
6. An object tracking and guidance method for use on a conveyor belt, the method comprising: capturing color image information and depth information of a target object with a depth camera; reading category information of the target object with a barcode reading device; receiving the color image information and the depth information to track the target object and obtain position information of the target object on the conveyor belt; generating mark information according to the category information of the target object, wherein the color image information is subjected to grayscale conversion and gamma correction and then to binarization and background-segmentation image processing to obtain a binarization matrix, the binarization matrix is subjected to morphological processing and contour detection of the target object and the contour image and position coordinates of the target object are stored, and the position information and the category information of the target object are received and the category information is used to query the corresponding mark information in a database; and projecting the mark information onto the target object according to the position information of the target object.

7. The method of claim 6, wherein the depth information is used to assist image recognition of the target object.

8. The method of claim 6, wherein the target object has a two-dimensional barcode or a matrix code, and the barcode reading device emits infrared light to scan the two-dimensional barcode or the matrix code.

9. The method of claim 6, wherein the position information of the target object is detected through real-time image tracking and positioning.

10. The method of claim 6, wherein different mark information is generated according to different category information of the target object.
TW110144380A 2021-11-29 2021-11-29 Object tracking and guiding system on conveyor belt and method thereof TWI817265B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
TW110144380A TWI817265B (en) 2021-11-29 2021-11-29 Object tracking and guiding system on conveyor belt and method thereof
CN202111473322.4A CN116177148A (en) 2021-11-29 2021-12-03 Article tracking and guiding system on conveyor belt and method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW110144380A TWI817265B (en) 2021-11-29 2021-11-29 Object tracking and guiding system on conveyor belt and method thereof

Publications (2)

Publication Number Publication Date
TW202320919A TW202320919A (en) 2023-06-01
TWI817265B true TWI817265B (en) 2023-10-01

Family

ID=86446765

Family Applications (1)

Application Number Title Priority Date Filing Date
TW110144380A TWI817265B (en) 2021-11-29 2021-11-29 Object tracking and guiding system on conveyor belt and method thereof

Country Status (2)

Country Link
CN (1) CN116177148A (en)
TW (1) TWI817265B (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI505198B (en) * 2012-09-11 2015-10-21 Sintai Optical Shenzhen Co Ltd Bar code reading method and reading device
CN109711225A (en) * 2018-12-13 2019-05-03 珠海优特智厨科技有限公司 Recognition methods, device and the label of bar code

Also Published As

Publication number Publication date
CN116177148A (en) 2023-05-30
TW202320919A (en) 2023-06-01

Similar Documents

Publication Publication Date Title
US10628648B2 (en) Systems and methods for tracking optical codes
CN106767399A (en) The non-contact measurement method of the logistics measurement of cargo found range based on binocular stereo vision and dot laser
US8135172B2 (en) Image processing apparatus and method thereof
CN110390677B (en) Defect positioning method and system based on sliding self-matching
CN108764338B (en) Pedestrian tracking method applied to video analysis
CN111027526B (en) Method for improving detection and identification efficiency of vehicle target
CN110189343B (en) Image labeling method, device and system
CN114399675A (en) Target detection method and device based on machine vision and laser radar fusion
CN113538491A (en) Edge identification method, system and storage medium based on self-adaptive threshold
US11216905B2 (en) Automatic detection, counting, and measurement of lumber boards using a handheld device
TWI817265B (en) Object tracking and guiding system on conveyor belt and method thereof
CN114092682A (en) Small hardware fitting defect detection algorithm based on machine learning
Dewi et al. Object detection without color feature: Case study Autonomous Robot
Rother et al. What can casual walkers tell us about a 3D scene?
US11074472B2 (en) Methods and apparatus for detecting and recognizing graphical character representations in image data using symmetrically-located blank areas
US20200234453A1 (en) Projection instruction device, parcel sorting system, and projection instruction method
CN111062387B (en) Identification method, grabbing method and related equipment for articles on conveyor belt
TW201701190A (en) Text localization system for street view image and device thereof
CN114529555A (en) Image recognition-based efficient cigarette box in-and-out detection method
CN111091086B (en) Method for improving identification rate of single characteristic information of logistics surface by utilizing machine vision technology
Ahammed Basketball player identification by jersey and number recognition
Bauer et al. Intelligent predetection of projected reference markers for robot-based inspection systems
EP3872707A1 (en) Automatic detection, counting, and measurement of lumber boards using a handheld device
EP4235563A1 (en) Method and arrangements for removing erroneous points from a set of points of a 3d virtual object provided by 3d imaging
Kavitha et al. Vehicle tracking and speed estimation using view-independent traffic cameras