TWI675337B - Unmanned goods management system and unmanned goods management method

Info

Publication number
TWI675337B
Authority
TW
Taiwan
Prior art keywords
goods
tracking
product
person
data
Prior art date
Application number
TW107108461A
Other languages
Chinese (zh)
Other versions
TW201939383A (en)
Inventor
簡慧宜
Original Assignee
新漢股份有限公司
Priority date
Filing date
Publication date
Application filed by 新漢股份有限公司
Priority to TW107108461A
Publication of TW201939383A
Application granted
Publication of TWI675337B

Abstract

The present invention provides an unmanned goods management system and an unmanned goods management method that can automatically manage the goods in a tracking area. The method captures a tracking image of the tracking area, obtains the identity data of a person entering the tracking area, generates a posture model of the person from the tracking image, identifies the goods position of each item in a goods image, obtains the goods data of an item when, based on the posture model and the goods positions, the item is judged to have been picked up by the person, and links the identity data with the goods data. The invention effectively realizes unmanned goods management, saving manpower and thereby reducing management costs.

Description

Unmanned goods management system and unmanned goods management method

The present invention relates to goods management, and in particular to an unmanned goods management system and an unmanned goods management method.

In current unmanned stores, the management system cannot actively determine which products a consumer has picked up and therefore cannot check out automatically. Consumers must go to a checkout counter and perform self-checkout after picking up products, which results in a poor user experience.

In view of this, there is a pressing need for a management system that can actively determine which products a consumer has picked up.

The present invention provides an unmanned goods management system and an unmanned goods management method that can actively determine whether a person has picked up any item and, when the person does so, automatically link the picked-up item to the person who picked it up.

In one embodiment, an unmanned goods management method for managing a plurality of goods in a tracking area includes the following steps: capturing the tracking area with a tracking camera to obtain a tracking image; obtaining the identity data of a person entering the tracking area; generating a posture model of the person from the tracking image; identifying the goods position of each item in a goods image; obtaining the goods data of an item when, based on the posture model and the goods positions, the item is judged to have been picked up by the person; and linking the identity data with the goods data.
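
The following Python sketch strings the steps listed above together as a toy loop. Every function in it (capture_tracking_image, identify_person, and so on) is an illustrative stub standing in for the corresponding module of the embodiments, not an implementation of the claimed method.

```python
"""Toy walk-through of the claimed method: capture, identify, build a posture model,
locate goods, detect a pick-up, and link identity data with goods data.
All functions are illustrative stubs standing in for the real modules."""

def capture_tracking_image():
    # Stand-in for the tracking camera: a single frame with one person in it.
    return {"person_present": True, "face": "alice-face", "hand_xy": (10.0, 8.0)}

def identify_person(image, database):
    # Stand-in for the identity recognition module (face / RFID lookup).
    return database["identities"].get(image["face"], "unknown")

def build_posture_model(image):
    # Stand-in for the posture tracking module: here only the hand position.
    return {"hand_xy": image["hand_xy"]}

def locate_goods(database):
    # Stand-in for the goods identification module: goods id -> position.
    return {gid: g["xy"] for gid, g in database["goods"].items()}

def detect_pick_up(posture, goods_positions, pick_up_distance=30.0):
    # Stand-in for the pick-up analysis module: nearest item within the threshold.
    hx, hy = posture["hand_xy"]
    for gid, (gx, gy) in goods_positions.items():
        if ((hx - gx) ** 2 + (hy - gy) ** 2) ** 0.5 < pick_up_distance:
            return gid
    return None

database = {
    "identities": {"alice-face": "member-001"},
    "goods": {"red-triangle": {"xy": (12.0, 5.0), "price": 30}},
}

image = capture_tracking_image()
identity = identify_person(image, database)
posture = build_posture_model(image)
picked = detect_pick_up(posture, locate_goods(database))
cart = [picked] if picked else []          # link identity data and goods data
print(identity, cart)                       # member-001 ['red-triangle']
```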

In one embodiment, an unmanned goods management system for managing a plurality of goods in a tracking area includes a tracking camera, a database, and a control device. The tracking camera captures the tracking area to obtain a tracking image. The database stores goods data corresponding to each of the goods and identity data corresponding to each of a plurality of persons. The control device is connected to the tracking camera and the database, and includes an identity recognition module, a posture tracking module, a goods identification module, a pick-up analysis module, and a processing module. The identity recognition module obtains the identity data of a person entering the tracking area. The posture tracking module generates a posture model of the person from the tracking image. The goods identification module identifies the goods position of each item in a goods image and obtains the goods data of a picked-up item. The pick-up analysis module determines, from the posture model and the goods positions, whether any item has been picked up by the person. The processing module links the obtained identity data with the goods data of the picked-up item.

The invention effectively realizes unmanned goods management, saving manpower and thereby reducing management costs.

1‧‧‧Unmanned goods management system

10‧‧‧Control device

100‧‧‧Processing module

101‧‧‧Identity recognition module

102‧‧‧Posture tracking module

103‧‧‧Face tracking module

104‧‧‧Hand tracking module

105‧‧‧Pick-up analysis module

106‧‧‧Goods identification module

107‧‧‧Identity confirmation module

108‧‧‧Settlement module

11, 41-44‧‧‧Goods camera

12, 40‧‧‧Tracking camera

13‧‧‧Identification device

14‧‧‧Storage device

140‧‧‧Computer program

141, 210‧‧‧Database

15‧‧‧Communication device

20‧‧‧Network

21‧‧‧Host

30-32‧‧‧Shelves

34‧‧‧Tracking area

50-58‧‧‧Goods

530-580‧‧‧Pattern labels

6‧‧‧Person

60‧‧‧Posture model

S10-S16‧‧‧First management steps

S20-S24‧‧‧Put-back steps

S30-S32‧‧‧Settlement steps

S400-S412‧‧‧Second management steps

FIG. 1 is an architecture diagram of an unmanned goods management system according to an embodiment of the present invention.

FIG. 2 is an architecture diagram of a control device according to an embodiment of the present invention.

FIG. 3 is a flowchart of an unmanned goods management method according to a first embodiment of the present invention.

FIG. 4 is a partial flowchart of an unmanned goods management method according to a second embodiment of the present invention.

FIG. 5 is a partial flowchart of an unmanned goods management method according to a third embodiment of the present invention.

FIG. 6 is a flowchart of an unmanned goods management method according to a fourth embodiment of the present invention.

FIG. 7 is a first schematic diagram of the unmanned goods management of the present invention.

FIG. 8 is a second schematic diagram of the unmanned goods management of the present invention.

FIG. 9 is a third schematic diagram of the unmanned goods management of the present invention.

FIG. 10 is a fourth schematic diagram of the unmanned goods management of the present invention.

FIG. 11 is a fifth schematic diagram of the unmanned goods management of the present invention.

FIG. 12 is a sixth schematic diagram of the unmanned goods management of the present invention.

A preferred embodiment of the present invention is described in detail below with reference to the drawings.

The present invention provides an unmanned goods management technology that can manage a plurality of goods in a tracking area. The technology automatically obtains the identity data of a person entering the tracking area, automatically detects whether the person picks up any item, and, when a pick-up is detected, associates the picked-up item with the person who picked it up.

Furthermore, the unmanned goods management technology of the present invention is applicable to applications such as unmanned stores, unmanned rental shops, and unmanned warehouses.

Please refer to FIG. 1, an architecture diagram of an unmanned goods management system according to an embodiment of the present invention. The unmanned goods management system (hereinafter, the management system) 1 of the present invention mainly includes one or more tracking cameras 12, a storage device 14, and a control device 10 connected to these devices.

The tracking camera 12 captures the tracking area to obtain a tracking image. In one embodiment, the tracking camera 12 may include a color tracking camera (such as an RGB camera) and/or a depth tracking camera (such as a combination of an infrared emitter and an infrared camera, or a combination of an ultrasonic emitter and an ultrasonic receiver). The color tracking camera obtains a color tracking image of the tracking area, and the depth tracking camera obtains a depth tracking image of the tracking area.

Furthermore, when the tracking camera 12 includes both a color tracking camera and a depth tracking camera, it can function as a 3D camera. Specifically, the color tracking image and the depth tracking image captured by the tracking camera 12 can be used to analyze 3D spatial information of the tracking area (such as the number and size of objects in the area), so that various detection processes can be performed accurately (such as detecting whether a person enters or leaves the tracking area, or recognizing the person's current posture).
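
As a rough illustration of how a depth frame might support this kind of presence detection, the sketch below masks pixels that are closer than the empty-area background depth and reports a coarse position. The frame sizes, thresholds, and the idea of comparing against a background depth map are assumptions made for illustration, not the algorithm of the embodiments.

```python
import numpy as np

# Assumed inputs: an aligned color frame (H x W x 3) and depth frame (H x W, in metres).
H, W = 240, 320
color = np.zeros((H, W, 3), dtype=np.uint8)
background_depth = np.full((H, W), 3.0)          # depth of the empty tracking area
depth = background_depth.copy()
depth[80:200, 120:180] = 1.5                     # a person standing 1.5 m from the camera

# Pixels significantly closer than the background are treated as a foreground object.
foreground = (background_depth - depth) > 0.3

if foreground.sum() > 2000:                      # enough foreground pixels -> someone entered
    ys, xs = np.nonzero(foreground)
    print("person detected, roughly at pixel",
          (int(xs.mean()), int(ys.mean())),
          "at about", round(float(depth[foreground].mean()), 2), "m")
```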

The storage device 14 (such as random access memory, flash memory, a hard disk, a cache, or any combination of the above) stores data. The control device 10 (such as a processor or a microcontroller) controls the devices of the management system 1.

In one embodiment, the storage device 14 includes a local database 141, which stores the data required for unmanned goods management (such as the identity data and goods data described later).

In one embodiment, the data required for unmanned goods management is stored in a network database. Specifically, the management system 1 includes a communication device 15 (such as a network card) connected to the control device 10. The communication device 15 can connect to a network 20 (such as the Internet) and, via the network 20, to a database 210 of a remote host 21. The following description takes the case in which the data required for unmanned goods management is stored in the local database 141 as an example.

In one embodiment, the management system 1 may include one or more goods cameras 11 connected to the control device 10. The goods camera 11 captures the area of the tracking area where the goods are placed to obtain a goods image. The structure of the goods camera 11 is the same as or similar to that of the tracking camera 12 and is not repeated here.

In one embodiment, the administrator of the tracking area may define a plurality of goods zones in which goods are placed, and install one or more goods cameras 11 in each goods zone to obtain clear goods images of that zone for more accurate detection processing (described in detail later).

In one embodiment, the management system 1 may include an identification device 13 connected to the control device 10. The identification device 13 obtains the characteristic data of a person entering the tracking area.

For example, the identification device 13 may be an RFID card reader installed at the entrance of the tracking area. Before entering the tracking area, a person must present the RFID tag they hold to the identification device 13 so that the person's identification code stored in the RFID tag (i.e., the aforementioned characteristic data) is read in.

In another example, the identification device 13 may be a biometric device (such as a fingerprint scanner, an iris scanner, or a vein scanner) installed at the entrance of the tracking area. Before entering the tracking area, a person must input biometric data (i.e., the aforementioned characteristic data, such as fingerprint features, iris features, or vein-pattern features) to the identification device 13.

In one embodiment, the storage device 14 includes a non-transitory computer-readable medium storing a computer program 140, which records computer-readable code. The control device 10 can execute the computer program 140 to implement the unmanned goods management methods of the embodiments of the present invention.

Please also refer to FIG. 2, an architecture diagram of a control device according to an embodiment of the present invention. Specifically, the control device 10 interacts with the other devices by executing the computer program 140 to perform the functions of the unmanned goods management method of the present invention. The computer program 140 includes groups of code respectively corresponding to the following functional modules, and the control device 10 implements the following modules by executing the corresponding code. Processing module 100: links a person's identity data with the goods data of a picked-up item when the item is picked up, and removes the link between the person's identity data and the goods data of a returned item when the item is put back.

In one embodiment, the processing module 100 can determine whether a person is approaching the goods and, when the person approaches, control the goods camera 11 to start capturing the area where the goods are located.

Identity recognition module 101: obtains the identity data of a person entering the tracking area. In one embodiment, the identity recognition module 101 obtains characteristic data from the identification device 13 and queries the database 141 for the identity data corresponding to that characteristic data. In one embodiment, the identity recognition module 101 performs face recognition on the color tracking image captured by the tracking camera 12 to obtain the facial features (i.e., characteristic data) of the person entering the tracking area, and queries the database 141 for the corresponding identity data according to the characteristic data.

Posture tracking module 102: generates a posture model of the person from the tracking images captured by the tracking camera 12. In one embodiment, the posture tracking module 102 determines all or part of the person's joint positions from the color tracking image and the depth tracking image, and generates the posture model from the determined joint positions.

Face tracking module 103: tracks the position of the person's face from the color tracking images captured by the tracking camera 12.

Hand tracking module 104: generates a hand posture model of the person from the color goods image and the depth goods image captured by the goods camera 11. In one embodiment, the hand tracking module 104 determines a plurality of joint positions of the person's hand from the color goods image and the depth goods image, and generates the hand posture model from those joint positions.

Pick-up analysis module 105: determines, from the posture model (and/or the hand posture model) and the goods positions, whether any item has been picked up by the person.

Goods identification module 106: identifies the goods position of each item in the goods image, and obtains the goods data of a picked-up (or returned) item.

In one embodiment, the goods identification module 106 identifies the goods image within the tracking image, or obtains the goods image via the goods camera 11.

In one embodiment, the goods identification module 106 identifies the pattern label of each item in the color goods image, and determines the corresponding goods position from the position of each pattern label.

In one embodiment, the goods identification module 106 queries the database 141 for the corresponding goods data according to the appearance of the pattern label (such as the shape or color of the pattern) of the picked-up item.

Identity confirmation module 107: performs face recognition on the sharper color goods image to confirm the identity data of the person picking up an item.

Settlement module 108: performs settlement processing on all goods data linked to an identity record when a preset settlement condition is met.

Through the above functional modules, the present invention can effectively identify a person entering the tracking area, accurately identify the items the person picks up or puts back, and keep a record of them.

Please also refer to FIG. 3, a flowchart of an unmanned goods management method according to the first embodiment of the present invention. The unmanned goods management methods (hereinafter, the management methods) of the embodiments of the present invention can be implemented by the management system 1 shown in FIGS. 1 and 2. The management method of this embodiment includes the following steps.

Step S10: the control device 10 controls the tracking camera 12 to capture the tracking area (such as the tracking area 34 of FIGS. 7 to 12) to obtain a tracking image.

In one embodiment, the tracking camera 12 is a 3D camera (i.e., it includes a color tracking camera and a depth tracking camera) and can capture a color tracking image and a depth tracking image of the tracking area.

In one embodiment, the control device 10 controls the tracking camera 12 to start capturing the tracking area only after receiving a person's characteristic data from the identification device 13 or after detecting, via a person sensor (such as a PIR sensor), that a person has entered the tracking area.

In one embodiment, the tracking camera 12 continuously captures the tracking area, person detection is performed on the tracking images, and recording of the tracking images starts when a person is detected entering the tracking area.

Step S11: the control device 10 obtains, via the identity recognition module 101, the identity data of the person entering the tracking area. Specifically, the administrator may register all persons allowed to enter the tracking area in advance and store each person's identity data in the database 141, so that when any person is recognized entering the tracking area, the corresponding identity data can be retrieved.

In one embodiment, the database 141 may further store the characteristic data corresponding to each identity record. The control device 10 obtains, via the identification device 13, the characteristic data of a person entering (or about to enter) the tracking area, and queries the database 141 for the corresponding identity data according to the characteristic data.

Taking an RFID card reader as the identification device 13 as an example, before entering the tracking area a person presents an RFID tag storing their identification code (i.e., characteristic data) to the identification device 13 to input the identification code. The control device 10 then queries the database 141 for the person's identity data according to this identification code, completing the identification of the person.
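
A minimal sketch of this RFID flow is shown below, assuming the reader simply yields the identification code as a string and the database is a key-value lookup; both are assumptions for illustration only.

```python
# Identity data keyed by the identification code stored on each person's RFID tag.
identity_database = {
    "04:A1:6E:22": {"name": "member-001", "payment": "stored-value"},
    "04:B7:19:C3": {"name": "member-002", "payment": "credit-card"},
}

def identify_by_rfid(tag_code: str):
    """Look up the identity data for the code read from the RFID tag at the entrance."""
    identity = identity_database.get(tag_code)
    if identity is None:
        # Unknown tag: the embodiment keeps the access control device locked
        # and guides the person to register via the human-machine interface.
        return None
    return identity

print(identify_by_rfid("04:A1:6E:22"))   # {'name': 'member-001', ...}
print(identify_by_rfid("FF:FF:FF:FF"))   # None -> entry denied / registration flow
```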

Taking a biometric device (such as a fingerprint scanner) as the identification device 13 as an example, a person uses the identification device 13 to input fingerprint data before entering the tracking area. The control device 10 then analyzes the fingerprint data to obtain fingerprint features (i.e., characteristic data) and queries the database 141 for the person's identity data according to those features, completing the identification of the person.

In one embodiment, the control device 10 analyzes the tracking image captured by the tracking camera 12 to obtain the image features (i.e., characteristic data) of the person entering the tracking area, and queries the database 141 for the corresponding identity data according to those image features.

For example, the tracking image may include a color tracking image. The control device 10 performs face recognition on the color tracking image to obtain the facial features (i.e., characteristic data) of the person entering the tracking area, and queries the database 141 for the corresponding identity data according to this characteristic data.

In one embodiment, the management system 1 further includes a human-machine interface (such as a touch screen, a display, or a speaker) and an access control device (not shown) connected to the control device 10. The control device 10 can unlock (or open) the access control device to allow a person to enter the tracking area when the person's identity data is obtained, and lock (or close) the access control device to deny entry when the identity data cannot be obtained; it may further issue an unregistered-person notification via the human-machine interface.

Moreover, the control device 10 may further interact with the person via the human-machine interface to guide the person through registering identity data. An unregistered person can thus enter the tracking area smoothly after completing registration.

Step S12: the control device 10 generates, via the posture tracking module 102, a posture model of the person (such as the posture model 60 of FIGS. 8 to 12) from the tracking image.

In one embodiment, the tracking image includes a color tracking image and a depth tracking image. The control device 10 can identify the person's image in the color tracking image and the depth tracking image, recognize the person's current posture, construct a corresponding posture model, and continuously update the constructed posture model according to the subsequently captured color tracking images and/or depth tracking images. The control device 10 can thereby learn from the posture model the person's current position (such as whether the person is approaching or moving away from the goods) and actions (such as whether the person picks up or puts down an item).

In one embodiment, the control device 10 determines a plurality of the person's joint positions from the color tracking image and the depth tracking image, and constructs or updates the posture model from those joint positions.
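
The sketch below shows one way a joint-based posture model could be represented and updated from successive frames. The joint names and the simple exponential smoothing are illustrative assumptions, not the embodiment's actual model.

```python
from dataclasses import dataclass, field

@dataclass
class PostureModel:
    """A person's posture as a set of named joint positions (x, y, z) in metres."""
    joints: dict = field(default_factory=dict)

    def update(self, detected_joints: dict, alpha: float = 0.6):
        # Blend new detections with the previous estimate to smooth frame-to-frame jitter.
        for name, pos in detected_joints.items():
            if name in self.joints:
                old = self.joints[name]
                self.joints[name] = tuple(alpha * n + (1 - alpha) * o
                                          for n, o in zip(pos, old))
            else:
                self.joints[name] = pos

    def hand_position(self, side: str = "right"):
        return self.joints.get(f"{side}_hand")

# Joints as they might be estimated from one color + depth frame pair.
model = PostureModel()
model.update({"head": (0.1, 1.6, 2.0), "right_hand": (0.4, 1.1, 1.7)})
model.update({"right_hand": (0.45, 1.05, 1.6)})     # next frame
print(model.hand_position())                         # smoothed right-hand position
```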

In one embodiment, the control device 10 does not construct a full-body posture model of the person, but only a partial posture model (for example, a posture model of the hands only). With a partial posture model this embodiment can still determine whether the person picks up or puts down an item, while effectively reducing the computation load.

Step S13: the control device 10 obtains a goods image and identifies, via the goods identification module 106, the goods position of each item placed in the tracking area within the goods image.

In one embodiment, the control device 10 analyzes and crops the tracking image to obtain a goods image containing the goods.

In one embodiment, the control device 10 controls the goods camera 11 to capture images to obtain the goods image.

In one embodiment, the database 141 may further store the goods data corresponding to each item. The control device 10 may further recognize each item in the goods image and obtain the goods data of the recognized items.

In one embodiment, each item bears a pattern label different from the others (the pattern of a label may be a barcode, a specific shape, and/or a color). The database 141 may further store the label data represented by the pattern label of each goods record (such as the goods barcode number, or the shape and/or color of the pattern label).

For example, the pattern labels may be affixed to or printed on the goods. The control device 10 can identify the image of each pattern label within the goods image, analyze each label image to obtain the label data (i.e., the goods barcode number, or the shape and/or color of the pattern label), and then obtain the corresponding goods data from the label data. Moreover, the control device 10 can determine the position of an item from the position of its pattern label.

Thereby, the present invention can obtain the goods position and goods data of each item in the tracking area through image processing.
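
A rough OpenCV sketch of this kind of label recognition follows: it masks one label color in HSV space, approximates the contour to tell a triangle from a rectangle, and uses the contour center as the goods position. The color range, the synthetic test image, and the assumption of OpenCV 4 are for illustration only and do not reproduce the embodiment's recognition pipeline.

```python
import cv2
import numpy as np

# Synthetic color goods image: a red triangular label on a dark shelf background.
image = np.zeros((200, 300, 3), dtype=np.uint8)
triangle = np.array([[150, 40], [120, 100], [180, 100]], dtype=np.int32)
cv2.fillPoly(image, [triangle], (0, 0, 255))                       # BGR red

hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
red_mask = cv2.inRange(hsv, (0, 120, 120), (10, 255, 255))         # assumed "red" range

# OpenCV 4 returns (contours, hierarchy).
contours, _ = cv2.findContours(red_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for contour in contours:
    approx = cv2.approxPolyDP(contour, 0.04 * cv2.arcLength(contour, True), True)
    shape = {3: "triangle", 4: "rectangle"}.get(len(approx), "other")
    m = cv2.moments(contour)
    cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])     # label -> goods position
    print("label:", "red", shape, "at", (cx, cy))
    # A lookup such as {"red triangle": goods_data} would then yield the goods data.
```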

Step S14: the control device 10 determines, via the pick-up analysis module 105, whether any item has been picked up by the person, according to the posture model and the goods position of each item.

In one embodiment, the control device 10 determines that the person has picked up an item when it recognizes, from the goods image and the posture model, that the person's hand touches or keeps touching the item (for example, the hand position of the posture model continuously overlaps the item's goods position).

In one embodiment, the control device 10 determines that the person has picked up an item when it determines that the item's goods position moves along with the posture model and/or that the distance between the goods position and (the hand of) the posture model is smaller than a preset pick-up distance (such as 30 cm).
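
A small sketch of this pick-up test, assuming 2D positions in centimetres and the 30 cm threshold mentioned above; "moves along with the posture model" is approximated here by requiring the hand-to-item distance to stay small over several consecutive frames.

```python
import math

PICK_UP_DISTANCE_CM = 30.0

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def is_picked_up(hand_track, goods_track, frames_required=3):
    """hand_track / goods_track: per-frame (x, y) positions in cm.
    The item is judged picked up when it stays within the pick-up distance of the
    hand for several consecutive frames, i.e. it moves along with the posture model."""
    close_frames = sum(
        1 for hand, goods in zip(hand_track, goods_track)
        if distance(hand, goods) < PICK_UP_DISTANCE_CM
    )
    return close_frames >= frames_required

hand_track  = [(100, 50), (90, 52), (80, 55), (70, 60)]
goods_track = [(85, 48), (84, 50), (78, 54), (69, 59)]    # the item follows the hand
print(is_picked_up(hand_track, goods_track))               # True
```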

If the control device 10 determines that any item has been picked up by the person, step S15 is performed. Otherwise, the control device 10 performs step S14 again.

Step S15: the control device 10 first obtains, via the goods identification module 106, the goods data of the picked-up item, and then links, via the processing module 100, the identity data of the person picking up the item with the goods data of the picked-up item.

Taking an unmanned store as an example, the control device 10 may create a shopping cart list corresponding to the identity data in the storage device 14, and add the goods data to the shopping cart list to establish the link between the identity data and the goods data.
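
One simple way to hold these links is a per-identity shopping cart list, as sketched below; the dictionary layout is an assumption for illustration.

```python
# Shopping cart lists keyed by identity data, as the storage device might hold them.
carts = {}

def link(identity, goods_data):
    """Add the picked-up item's goods data to the person's shopping cart list."""
    carts.setdefault(identity, []).append(goods_data)

def unlink(identity, goods_id):
    """Remove a returned item's goods data from the person's shopping cart list."""
    carts[identity] = [g for g in carts.get(identity, []) if g["id"] != goods_id]

link("member-001", {"id": "red-triangle", "price": 30})
link("member-001", {"id": "green-triangle", "price": 25})
unlink("member-001", "green-triangle")            # the item was put back
print(carts["member-001"])                        # [{'id': 'red-triangle', 'price': 30}]
```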

Step S16: the control device 10 determines whether the person has finished picking up goods.

In one embodiment, the control device 10 determines that the person has finished picking up goods when it judges from the tracking image that the person has left the tracking area, and ends the management method.

In one embodiment, the control device 10 determines that the person has finished picking up goods when it judges from the posture model and the goods positions that both of the person's hands are holding goods, and ends the management method.

In one embodiment, the control device 10 determines that the person has finished picking up goods when it judges from the posture model and the goods positions that the person has neither picked up a new item nor put back any item for a preset period (such as 5 minutes), and ends the management method.

In one embodiment, the control device 10 determines that the person has finished picking up goods when it again receives the person's characteristic data from the identification device 13 (for example, the person taps the RFID tag again or inputs biometric data again), and ends the management method.

In one embodiment, the control device 10 determines that the person has finished picking up goods when it receives the person's settlement operation via the human-machine interface, and ends the management method.

By constructing and updating a posture model for each person, the present invention can accurately determine whether a person picks up goods. Moreover, compared with using RFID technology to identify picked-up goods (such as attaching an expensive RFID tag to each item and using an RFID reader to sense the tag of a picked-up item), the present invention uses inexpensive pattern labels and identifies goods through image recognition, which greatly reduces the system construction cost (no expensive RFID tags or RFID readers are required) and effectively realizes unmanned goods management of the tracking area, saving manpower and thereby reducing management costs.

Please also refer to FIGS. 3 and 4. FIG. 4 is a partial flowchart of an unmanned goods management method according to the second embodiment of the present invention. This embodiment further provides a put-back detection function that can automatically detect whether a person puts back an item. Compared with the management method shown in FIG. 3, the management method of this embodiment includes the following steps after step S15.

Step S20: the control device 10 obtains a goods image and identifies, via the goods identification module 106, the goods position of each item the person has picked up within the goods image. The identification used in step S20 is the same as or similar to that of step S13 and is not repeated here.

Step S21: the control device 10 determines, via the pick-up analysis module 105, whether the person has put back any of the picked-up items, according to the posture model and the goods positions of the items already picked up by the person.

In one embodiment, the control device 10 determines that the person has put back an item when it recognizes, from the goods image and the posture model, that the person's hand has released the item (for example, the hand position of the posture model and the item's goods position change from overlapping to non-overlapping).

In one embodiment, the control device 10 determines that the person has put back an item when it determines that the item's goods position no longer moves with the posture model and/or that the distance between the goods position and (the hand of) the posture model is not smaller than the preset pick-up distance.
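
A sketch of the complementary put-back test, under the same 2D, centimetre assumptions as the pick-up sketch earlier: an item that was being carried is judged put back once its distance from the hand grows beyond the pick-up distance again.

```python
import math

PICK_UP_DISTANCE_CM = 30.0

def is_put_back(hand_track, goods_track):
    """Judge a put-back: the carried item goes from overlapping the hand
    (distance below the pick-up distance) to staying away from it."""
    was_close = False
    for hand, goods in zip(hand_track, goods_track):
        close = math.hypot(hand[0] - goods[0], hand[1] - goods[1]) < PICK_UP_DISTANCE_CM
        if close:
            was_close = True
        elif was_close:
            return True          # transition from overlapping to non-overlapping
    return False

hand_track  = [(70, 60), (90, 55), (120, 50), (150, 48)]   # the hand moves away
goods_track = [(69, 59), (70, 58), (70, 58), (70, 58)]     # the item stays on the shelf
print(is_put_back(hand_track, goods_track))                 # True
```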

If the control device 10 determines that any item has been put back by the person, step S22 is performed. Otherwise, the control device 10 performs step S13 again.

Step S22: the control device 10 first obtains, via the goods identification module 106, the goods data of the returned item, and then removes, via the processing module 100, the link between the identity data of the person returning the item and the goods data of the returned item.

Taking an unmanned store as an example, the control device 10 may remove the goods data from the shopping cart list to release the link between the identity data and the goods data.

Step S23: the control device 10 determines, via the settlement module 108, whether a preset settlement condition is met.

In one embodiment, the settlement condition is that the person is detected leaving the tracking area, that a preset period has elapsed, that the person's characteristic data is received from the identification device 13 again, and/or that the person's settlement operation is received via the human-machine interface.

If the control device 10 determines that the preset settlement condition is met, step S24 is performed. Otherwise, the control device 10 performs step S13 again.

Step S24: the control device 10 performs settlement processing on all goods data linked to the identity data.

Taking an unmanned store as an example, the control device 10 may perform settlement processing to check out all the goods data in the shopping cart list.

Taking an unmanned warehouse as an example, the control device 10 may perform settlement processing to bind all linked goods data to the identity data (such as setting the ownership or holder of the goods data to this identity data).

Thereby, the present invention can effectively implement the put-back detection function and automatically perform settlement processing at the appropriate time.

Please also refer to FIGS. 3, 4, and 5. FIG. 5 is a partial flowchart of an unmanned goods management method according to the third embodiment of the present invention. The management method of this embodiment is used in an unmanned store and can automatically perform checkout when the settlement condition is met. Specifically, compared with the management methods shown in FIGS. 3 and 4, step S24 of the management method of this embodiment further includes the following steps for performing settlement processing.

Step S30: the control device 10 obtains, via the settlement module 108, the payment data corresponding to the identity data.

In one embodiment, the database 141 further stores a plurality of payment data (such as credit card data, stored-value card data, or account data) respectively corresponding to a plurality of identity records. The control device 10 obtains the corresponding payment data according to the identity data of the person meeting the settlement condition.

Step S31: the control device 10 calculates the total amount of the goods via the settlement module 108.

In one embodiment, the database 141 further stores price data for each goods record. The control device 10 obtains the price data of each goods record linked to the identity data and calculates the total amount of the goods from the obtained prices.

Step S32: the control device 10 performs the checkout, via the settlement module 108, according to the payment data and the total amount of the goods.

For example, the control device 10 may request payment of the calculated total from the credit card company corresponding to the credit card data, deduct the total from the balance of the stored-value card data, or deduct the total from the account data, without limitation.
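
A compact sketch of steps S30 to S32 is given below, assuming price data lives with each goods record and treating each payment method as a simple stub; real payment handling (credit-card capture, stored-value deduction) is outside the scope of this illustration.

```python
goods_prices = {"red-triangle": 30, "green-triangle": 25}        # price data per goods record

payment_database = {                                             # payment data per identity
    "member-001": {"method": "stored-value", "balance": 200},
    "member-002": {"method": "credit-card", "card": "****-1234"},
}

def settle(identity, cart):
    """Steps S30-S32: fetch payment data, total the linked goods, then charge."""
    payment = payment_database[identity]                         # S30: payment data
    total = sum(goods_prices[goods_id] for goods_id in cart)     # S31: total amount
    if payment["method"] == "stored-value":                      # S32: checkout
        payment["balance"] -= total                              # deduct from the balance
    elif payment["method"] == "credit-card":
        print(f"requesting {total} from card {payment['card']}") # stand-in for a charge request
    return total

print(settle("member-001", ["red-triangle", "green-triangle"]))  # 55
print(payment_database["member-001"]["balance"])                 # 145
```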

Thereby, the present invention can be effectively applied to unmanned stores and can perform checkout automatically.

Please also refer to FIG. 6, a flowchart of an unmanned goods management method according to a fourth embodiment of the present invention. In this embodiment, the pattern label of each item has an appearance of a different shape and/or color (such as the pattern labels 530-550 shown in FIG. 7).

Moreover, in this embodiment, the management system 1 further includes one or more goods cameras 11 (such as the goods cameras 41-44 shown in FIG. 7), each installed near the goods so that clearer goods images can be captured.

Moreover, in this embodiment, the database 141 further records the correspondence between the goods data of each item and the appearance of its pattern label. For example, the goods data of the first item may correspond to a red triangle, the goods data of the second item to a green triangle, the goods data of the third item to a red rectangle, and so on. The management method of this embodiment includes the following steps.

Step S400: the control device 10 controls the tracking camera 12 to capture the tracking area to obtain a tracking image.

Step S401: the control device 10 obtains, via the identity recognition module 101, the identity data of the person entering the tracking area.

In this embodiment, the database 141 stores the characteristic data corresponding to each identity record, and the tracking camera 12 is a 3D camera that can capture color tracking images and depth tracking images. The control device 10 performs face recognition on the color tracking image to obtain the facial features (i.e., characteristic data) of the person entering the tracking area, and queries the database 141 for the corresponding identity data according to the characteristic data. The control device 10 can then perform steps S402 and S403 simultaneously (or alternately).

Step S402: the control device 10 generates, via the posture tracking module 102, a posture model of the person from a plurality of continuously captured color tracking images and depth tracking images, and continuously updates the posture model.

Step S403: the control device 10 continuously recognizes, via the face tracking module 103, the person's face image in the continuously captured color tracking images to continuously track the position of the person's face.

Step S404: the control device 10 determines, via the processing module 100, whether the person is approaching the goods in the tracking area, according to the position of the posture model or the position of the face.

Specifically, the positions of the goods (or of the shelves on which they are placed) in the tracking area may be stored in the storage device 14 in advance, and the control device 10 determines that the person is approaching the goods when the position of the posture model or the face position is near one of these goods positions.

If the control device 10 determines that the person is approaching the goods, step S405 is performed. Otherwise, the control device 10 performs steps S402 and S403 again.

Step S405: the control device 10 controls the goods camera 11 to start capturing the area where the goods are located to continuously obtain goods images.

In one embodiment, the goods camera 11 is a 3D camera (i.e., it includes a color goods camera and a depth goods camera) and can capture color goods images and depth goods images, but this is not a limitation.

In one embodiment, the goods camera 11 may also include only one of a color goods camera and a depth goods camera.

The control device 10 can then perform steps S406, S407, and S408 simultaneously (or alternately).

Step S406: the control device 10 generates, via the hand tracking module 104, a hand posture model of the person from the color goods image and the depth goods image, and continuously updates the hand posture model according to the color goods images and depth goods images.

The hand posture model is generated in the same or a similar way as the aforementioned posture model, and the details are not repeated here.

Step S407: the control device 10 performs, via the identity confirmation module 107, face recognition on the color goods image to obtain the facial features (i.e., characteristic data) of the person captured by the goods camera 11, and queries the database 141 for the corresponding identity data according to this characteristic data.

In one embodiment, the control device 10 compares the facial features of the person currently or previously captured by the tracking camera 12 with the facial features of the person currently captured by the goods camera 11, and when it judges that the two sets of facial features belong to the same person (for example, the facial features are identical or extremely similar), it directly uses the identity data of the person captured by the tracking camera 12 as the identity data of the person captured by the goods camera 11. The person's identity data can thereby be confirmed without querying the database 141 again, reducing the load on the database 141.
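
The comparison described here could be done with any face-feature distance measure; below is a minimal cosine-similarity sketch over made-up feature vectors, with the 0.9 threshold chosen purely for illustration.

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def same_person(tracking_features, goods_features, threshold=0.9):
    """Reuse the tracking camera's identity when both cameras see the same face,
    so no second database query is needed."""
    return cosine_similarity(tracking_features, goods_features) >= threshold

tracking_face = [0.12, 0.80, 0.31, 0.45]       # features from the tracking camera frame
goods_face    = [0.10, 0.79, 0.33, 0.44]       # features from the goods camera frame
print(same_person(tracking_face, goods_face))  # True -> reuse the identity data
```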

Step S408: the control device 10 continuously recognizes, via the goods identification module 106, the appearance of the pattern label of each item in the continuously captured color goods images, and continuously determines the goods position of each item from the position of its pattern label in the color goods image.

It is worth mentioning that when a barcode is used as the pattern of a pattern label, the goods image must have excellent quality (such as very high resolution and very little noise) to successfully recognize the label data embedded in the barcode and identify the item, because barcodes are finely detailed and barcode recognition is easily affected by lighting and shadows; this would greatly increase the cost of the goods camera 11.

To solve the above problem, the present invention proposes using different shapes, colors, or combinations thereof as the patterns of the pattern labels. Compared with barcodes, pattern labels that use shape and color as the pattern are less finely detailed and less susceptible to lighting and shadows, so they can achieve a higher recognition success rate and can be used with lower-end goods cameras 11.

In one embodiment, the control device 10 continuously recognizes the shape and/or color (i.e., appearance) of each captured pattern label and further queries the database 141 for the corresponding goods data according to the appearance of the pattern label, but this is not a limitation.

In one embodiment, the control device 10 queries the corresponding goods data only when a captured item is picked up or put back by a person. The load on the database 141 can thereby be reduced. Step S409 is then performed.

Step S409: the control device 10 determines, via the pick-up analysis module 105, whether any item has been picked up by the person, according to the hand posture model and the goods position of each item. The determination of whether an item is picked up is as described above and is not repeated here.

If the control device 10 determines that any item has been picked up by the person, step S410 is performed. Otherwise, the control device 10 performs steps S402 and S403 again.

Step S410: the control device 10 queries, via the goods identification module 106, the database 141 for the corresponding goods data according to the appearance of the pattern label of the picked-up item (i.e., the shape, color, or combination thereof of the pattern label; this appearance may have been recognized in step S408).

In one embodiment, the control device 10 also confirms, via the identity confirmation module 107, the identity data of the person picking up the item (this identity data may have been recognized in step S407), but this is not a limitation.

In one embodiment, when there is only a single person in the tracking area, the control device 10 may skip step S407 and directly use the identity data obtained in step S401 as the identity data of the person picking up the item.

Step S411: the control device 10 links, via the processing module 100, the confirmed identity data with the goods data of the picked-up item (such as adding the goods data to the shopping cart list corresponding to this identity data).

Step S412: the control device 10 determines whether the person has left the tracking area according to the position of the person's posture model and/or the position of the face image.

If the control device 10 determines that the person has left the tracking area, the management method ends. Otherwise, the control device 10 performs steps S402 and S403 again.

In one embodiment, after determining that the person has left the tracking area, the control device 10 may automatically perform settlement processing on all goods data linked to the identity data (step S24 shown in FIG. 4, or steps S30-S32 shown in FIG. 5).

Thereby, the present invention can further improve the success rate of pick-up detection and realize unmanned goods management more effectively.

Please also refer to FIGS. 7 to 12. FIG. 7 is a first schematic diagram of the unmanned goods management of the present invention, FIG. 8 is a second schematic diagram, FIG. 9 is a third schematic diagram, FIG. 10 is a fourth schematic diagram, FIG. 11 is a fifth schematic diagram, and FIG. 12 is a sixth schematic diagram. FIGS. 7 to 12 illustrate one implementation of the unmanned goods management of the present invention by way of example.

As shown in FIG. 7, in this example the tracking camera 40 (a 3D camera in this example) captures the entire tracking area. Three levels of shelves are arranged in the tracking area 34. Goods 50-52 are placed on the first shelf 30, and a goods camera 41 photographs the first shelf 30 from above. Goods 53-55 are placed on the second shelf 31, and a goods camera 42 photographs the second shelf 31 from the side. Goods 56-58 are placed on the third shelf 32, and two goods cameras 43 photograph the third shelf 32 from the side. Moreover, in this example the goods cameras 41-44 are 3D cameras and can capture color goods images and depth goods images simultaneously.

In addition, in this example the pattern of each pattern label is a combination of a shape and a color. For example, the patterns of the pattern labels of goods 50, 53, and 56 are red triangles, those of goods 51, 54, and 57 are green triangles, and those of goods 52, 55, and 58 are red squares.

Furthermore, the pattern labels of the present invention may be combined with the goods in any of the following ways. In one example, the pattern labels of goods 50-52 are designed as outer boxes that hold goods 50-52. In one example, the pattern labels 530-550 are adhered to the surfaces of goods 53-55. In one example, the pattern labels 560-580 are printed directly on goods 56-58.

如圖8所示，於本例子中，管理系統1可持續對追蹤攝影機40所拍攝的彩色追蹤影像及/或深度追蹤影像進行人員偵測，並於偵測到人員6進入追蹤區34時對人員6執行臉部辨識以取得人員6的身份資料，並建構人員6的姿態模型60（於本例子中，姿態模型60是以人員6的骨架形式來呈現）。 As shown in FIG. 8, in this example, the management system 1 can continuously perform person detection on the color tracking image and/or the depth tracking image captured by the tracking camera 40. Upon detecting that the person 6 enters the tracking area 34, the management system 1 performs face recognition on the person 6 to obtain the identity data of the person 6, and constructs a pose model 60 of the person 6 (in this example, the pose model 60 is presented in the form of a skeleton of the person 6).
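
The patent does not prescribe how the skeleton-style pose model 60 is computed. As one illustrative stand-in, a ready-made pose estimator such as MediaPipe Pose can produce the joint positions from the color tracking image, and the depth tracking image can then be sampled at those joints to obtain 3D positions. The library choice and the selected keypoints below are assumptions.

```python
# Minimal sketch: derive a skeleton-style pose model from a color tracking frame.
# MediaPipe Pose is used here only as an assumed stand-in; the patent names no specific method.
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose
pose_estimator = mp_pose.Pose(static_image_mode=False)

def estimate_wrists(color_frame):
    """Return pixel coordinates of both wrists, or None if no person is detected."""
    h, w = color_frame.shape[:2]
    results = pose_estimator.process(cv2.cvtColor(color_frame, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks is None:
        return None
    lm = results.pose_landmarks.landmark
    to_px = lambda p: (int(p.x * w), int(p.y * h))
    return {
        "left_wrist": to_px(lm[mp_pose.PoseLandmark.LEFT_WRIST]),
        "right_wrist": to_px(lm[mp_pose.PoseLandmark.RIGHT_WRIST]),
    }
```

Sampling the depth tracking image at these wrist coordinates yields the hand positions that the take/return judgment compares against the goods positions.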

接著，如圖9所示，於人員6拿取貨品50的過程中，管理系統1可依據貨品攝影機41所拍攝的影像確認拿取貨品50的人員6的身份資料，並辨識被拿取的貨品50的圖案標籤為紅色三角形，進而取得對應的貨品資料。並且，管理系統1連結貨品50的貨品資料與人員6的身份資料（如將貨品50的貨品資料加入人員6的身份資料所對應的購物車清單）。 Next, as shown in FIG. 9, during the process of the person 6 taking the goods 50, the management system 1 can confirm, according to the image captured by the goods camera 41, the identity data of the person 6 who takes the goods 50, recognize that the pattern label of the taken goods 50 is a red triangle, and thereby obtain the corresponding goods data. Furthermore, the management system 1 links the goods data of the goods 50 with the identity data of the person 6 (for example, by adding the goods data of the goods 50 to the shopping cart list corresponding to the identity data of the person 6).
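
Functionally, the take judgment shown here combines two cues: a hand keypoint of the pose model comes within reach of a known goods position, and that goods' pattern label disappears from the shelf image. The sketch below encodes that rule together with the cart linkage; the distance threshold and the in-memory cart structure are illustrative assumptions rather than the patent's data model.

```python
# Minimal sketch: link a taken goods item to the person's identity (shopping cart list).
# The proximity threshold and in-memory structures are illustrative assumptions.
from dataclasses import dataclass, field
from math import dist

TAKE_DISTANCE_PX = 60  # assumed hand-to-goods distance threshold, in pixels

@dataclass
class Cart:
    person_id: str
    items: list = field(default_factory=list)

    def link(self, goods_data: dict):
        self.items.append(goods_data)

    def unlink(self, goods_id: str):
        self.items = [g for g in self.items if g["id"] != goods_id]

def hand_near_goods(hand_xy, goods_xy, threshold=TAKE_DISTANCE_PX) -> bool:
    return dist(hand_xy, goods_xy) <= threshold

def on_possible_take(cart: Cart, hand_xy, goods_xy, goods_data: dict, label_still_on_shelf: bool):
    """If the hand reached the goods and its label is gone from the shelf, treat it as taken."""
    if hand_near_goods(hand_xy, goods_xy) and not label_still_on_shelf:
        cart.link(goods_data)
```

Linking simply means appending the goods data to the list associated with the person's identity data; the put-back case described below releases that link again.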

接著，如圖10所示，於人員6拿取另一貨品51的過程中，管理系統1可依據貨品攝影機41所拍攝的影像確認拿取貨品51的人員6的身份資料，並辨識被拿取的貨品51的圖案標籤為綠色三角形，進而取得對應的貨品資料。並且，管理系統1連結貨品51的貨品資料與人員6的身份資料（如將貨品51的貨品資料加入人員6的身份資料所對應的購物車清單）。 Next, as shown in FIG. 10, during the process of the person 6 taking another goods 51, the management system 1 can confirm, according to the image captured by the goods camera 41, the identity data of the person 6 who takes the goods 51, recognize that the pattern label of the taken goods 51 is a green triangle, and thereby obtain the corresponding goods data. Furthermore, the management system 1 links the goods data of the goods 51 with the identity data of the person 6 (for example, by adding the goods data of the goods 51 to the shopping cart list corresponding to the identity data of the person 6).

接著，如圖11所示，於人員6放回貨品51的過程中，管理系統1可依據貨品攝影機41所拍攝的影像確認放回貨品51的人員6的身份資料，並辨識被放回的貨品51的圖案標籤為綠色三角形，進而取得對應的貨品資料。並且，管理系統1解除貨品51的貨品資料與人員6的身份資料之間的連結（如將貨品51的貨品資料自人員6的身份資料所對應的購物車清單中移除）。 Next, as shown in FIG. 11, during the process of the person 6 putting back the goods 51, the management system 1 can confirm, according to the image captured by the goods camera 41, the identity data of the person 6 who puts back the goods 51, recognize that the pattern label of the returned goods 51 is a green triangle, and thereby obtain the corresponding goods data. Furthermore, the management system 1 releases the link between the goods data of the goods 51 and the identity data of the person 6 (for example, by removing the goods data of the goods 51 from the shopping cart list corresponding to the identity data of the person 6).
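
Putting goods back is handled as the mirror image of taking: when the goods' pattern label is recognized on the shelf again while the same person's hand is in range, the earlier link is released. A short sketch, reusing the hypothetical Cart and hand_near_goods() helpers from the previous example:

```python
# Minimal sketch: release the identity-goods link when goods are put back.
# Reuses the hypothetical Cart and hand_near_goods() helpers from the earlier sketch.
def on_possible_return(cart, hand_xy, goods_xy, goods_data, label_back_on_shelf: bool):
    """If the hand reached the goods position and its label reappeared, treat it as returned."""
    if hand_near_goods(hand_xy, goods_xy) and label_back_on_shelf:
        cart.unlink(goods_data["id"])
```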

接著，如圖12所示，管理系統1可於偵測到人員6離開追蹤區34時對當前與人員6的身份資料連結的所有貨品（即貨品50）的貨品資料執行結算處理（如依據人員6的身份資料所對應的支付資料對購物車清單中的所有貨品資料進行結帳）。 Then, as shown in FIG. 12, when the management system 1 detects that the person 6 leaves the tracking area 34, it performs settlement processing on the goods data of all the goods currently linked to the identity data of the person 6 (here, the goods 50), for example by checking out all the goods data in the shopping cart list according to the payment data corresponding to the identity data of the person 6.
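
Settlement when the person leaves the tracking area then amounts to summing the prices of whatever goods data is still linked to the identity data and charging the stored payment data. A minimal sketch, with charge_payment() standing in as a hypothetical placeholder for the actual payment backend:

```python
# Minimal sketch: settle the cart once the person is detected leaving the tracking area.
# charge_payment() is a hypothetical placeholder for whatever payment backend is used.
def settle(cart, price_table: dict, payment_data: dict):
    """Sum the prices of all linked goods, charge the payment data, and clear the cart."""
    total = sum(price_table[item["id"]] for item in cart.items)
    if total > 0:
        charge_payment(payment_data, total)   # hypothetical payment gateway call
    cart.items.clear()
    return total
```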

藉此，本發明可有效判斷人員是否拿取或放回貨品，並實現無人化貨品管理。 In this way, the present invention can effectively determine whether a person takes or puts back goods, and thereby realize unmanned goods management.

以上所述僅為本發明的較佳具體實例，非因此即侷限本發明的專利範圍，故舉凡運用本發明內容所為的等效變化，均同理皆包含於本發明的範圍內，合予陳明。 The above description is merely a preferred specific example of the present invention and does not thereby limit the patent scope of the present invention. Accordingly, all equivalent changes made using the content of the present invention are likewise included within the scope of the present invention, as is hereby stated.

Claims (22)

1. 一種無人化貨品管理方法，用於對一追蹤區中的多個貨品進行管理，包括以下步驟：a)經由一追蹤攝影機拍攝該追蹤區以取得一追蹤影像；b)取得進入該追蹤區的一人員的一身份資料；c)依據該追蹤影像產生該人員的一姿態模型；d)經由一貨品攝影機拍攝該多個貨品所在區域以取得一彩色貨品影像；e)於該彩色貨品影像中辨識各該貨品的一圖案標籤，並依據各該圖案標籤的位置決定各該貨品的一貨品位置；f)於依據該姿態模型及各該貨品位置判斷任一該貨品被該人員拿取時取得被拿取的該貨品的一貨品資料；及g)連結該身份資料及該貨品資料。 An unmanned goods management method for managing a plurality of goods in a tracking area, comprising the following steps: a) capturing the tracking area via a tracking camera to obtain a tracking image; b) obtaining identity data of a person entering the tracking area; c) generating a pose model of the person according to the tracking image; d) capturing the area where the plurality of goods are located via a goods camera to obtain a color goods image; e) recognizing a pattern label of each of the goods in the color goods image, and determining a goods position of each of the goods according to the position of each pattern label; f) obtaining goods data of any of the goods when it is determined, according to the pose model and each goods position, that the goods has been taken by the person; and g) linking the identity data and the goods data.

2. 如請求項1所述的無人化貨品管理方法，其中更包括一步驟h)於一結算條件滿足時對已連結該身份資料的所有該貨品資料執行一結算處理。 The unmanned goods management method according to claim 1, further comprising a step h) of performing a settlement process on all the goods data linked to the identity data when a settlement condition is satisfied.

3. 如請求項2所述的無人化貨品管理方法，其中該步驟h)更包括以下步驟：h1)取得對應該身份資料的一支付資料；h2)依據所連結的各該貨品資料的一價格資料計算一貨品總額；及h3)依據該支付資料及該貨品總額進行結帳。 The unmanned goods management method according to claim 2, wherein the step h) further comprises the following steps: h1) obtaining payment data corresponding to the identity data; h2) calculating a goods total according to price data of each of the linked goods data; and h3) performing checkout according to the payment data and the goods total.

4. 如請求項2所述的無人化貨品管理方法，其中該結算條件是偵測到該人員離開該追蹤區。 The unmanned goods management method according to claim 2, wherein the settlement condition is that the person is detected leaving the tracking area.

5. 如請求項1所述的無人化貨品管理方法，其中該步驟b)是經由一辨識裝置取得該人員的一特徵資料，並依據該特徵資料於一資料庫查詢對應的該身份資料，其中該特徵資料是該人員的識別碼或生物資料。 The unmanned goods management method according to claim 1, wherein the step b) is obtaining feature data of the person via an identification device, and querying a database for the corresponding identity data according to the feature data, wherein the feature data is an identification code or biometric data of the person.

6. 如請求項1所述的無人化貨品管理方法，其中該追蹤影像包括一彩色追蹤影像；該步驟b)是對該彩色追蹤影像執行臉部辨識以取得該人員的臉部的一特徵資料，並依據該特徵資料於一資料庫查詢對應的該身份資料。 The unmanned goods management method according to claim 1, wherein the tracking image comprises a color tracking image; the step b) is performing face recognition on the color tracking image to obtain feature data of the person's face, and querying a database for the corresponding identity data according to the feature data.

7. 如請求項1所述的無人化貨品管理方法，其中該追蹤影像包括一彩色追蹤影像及一深度追蹤影像；該步驟c)是依據該彩色追蹤影像及該深度追蹤影像決定該人員的多個關節位置，並依據該多個關節位置產生該姿態模型。 The unmanned goods management method according to claim 1, wherein the tracking image comprises a color tracking image and a depth tracking image; the step c) is determining a plurality of joint positions of the person according to the color tracking image and the depth tracking image, and generating the pose model according to the plurality of joint positions.
8. 如請求項7所述的無人化貨品管理方法，其中於該步驟a)之後，該步驟e)之前更包括以下步驟：i1)依據該彩色追蹤影像追蹤該人員的一臉部位置；及i2)於依據該姿態模型的位置或該臉部位置判斷該人員接近該多個貨品時控制一貨品攝影機開始拍攝該多個貨品所在區域。 The unmanned goods management method according to claim 7, further comprising, after the step a) and before the step e), the following steps: i1) tracking a face position of the person according to the color tracking image; and i2) controlling a goods camera to start capturing the area where the plurality of goods are located when it is determined, according to the position of the pose model or the face position, that the person approaches the plurality of goods.

9. 如請求項1所述的無人化貨品管理方法，其中該步驟d)是經由該貨品攝影機取得該彩色貨品影像及一深度貨品影像；其中，該步驟f)包括：f1)依據該彩色貨品影像及該深度貨品影像產生該人員的一手部姿態模型；及f2)於依據該手部姿態模型及各該貨品位置判斷任一該貨品被該人員拿取時依據被拿取的該貨品的該圖案標籤的形狀或顏色於一資料庫查詢對應的該貨品資料。 The unmanned goods management method according to claim 1, wherein the step d) is obtaining the color goods image and a depth goods image via the goods camera; wherein the step f) comprises: f1) generating a hand pose model of the person according to the color goods image and the depth goods image; and f2) when it is determined, according to the hand pose model and each goods position, that any of the goods has been taken by the person, querying a database for the corresponding goods data according to the shape or color of the pattern label of the taken goods.

10. 如請求項9所述的無人化貨品管理方法，其中該步驟f)更包括以下步驟：f3)對該彩色貨品影像執行臉部辨識以確認拿取該貨品的該人員的該身份資料；其中，該步驟g)是連結已確認的該身份資料及該貨品資料。 The unmanned goods management method according to claim 9, wherein the step f) further comprises the following step: f3) performing face recognition on the color goods image to confirm the identity data of the person who takes the goods; wherein the step g) is linking the confirmed identity data and the goods data.

11. 如請求項1所述的無人化貨品管理方法，其中更包括以下步驟：j1)於依據該姿態模型及被拿取的該貨品的該貨品位置判斷該人員放回任一該貨品時取得被放回的該貨品的該貨品資料；及j2)解除該身份資料及被放回的該貨品的該貨品資料之間的連結。 The unmanned goods management method according to claim 1, further comprising the following steps: j1) obtaining the goods data of any of the goods that is put back when it is determined, according to the pose model and the goods position of the taken goods, that the person puts back the goods; and j2) releasing the link between the identity data and the goods data of the returned goods.

12. 一種無人化貨品管理系統，用於對一追蹤區中的多個貨品進行管理，包括：一追蹤攝影機，用以拍攝該追蹤區以取得一追蹤影像；一資料庫，用以儲存分別對應該多個貨品的多個貨品資料及分別對應多個人員的多個身份資料；一彩色貨品攝影機，用以拍攝該多個貨品所在區域以產生一彩色貨品影像；及一控制裝置，連接該追蹤攝影機、該資料庫及該彩色貨品攝影機，該控制裝置包括：一身份辨識模組，用以取得進入該追蹤區的一人員的該身份資料；一姿態追蹤模組，用以依據該追蹤影像產生該人員的一姿態模型；一貨品辨識模組，用以於該彩色貨品影像中辨識各該貨品的一圖案標籤，並依據各該圖案標籤的位置決定各該貨品的一貨品位置，並用以取得被拿取的該貨品的一貨品資料；一拿取分析模組，用以依據該姿態模型及各該貨品位置判斷是否任一該貨品被該人員拿取；及一處理模組，用以連結所取得的該身份資料及被拿取的該貨品的該貨品資料。 An unmanned goods management system for managing a plurality of goods in a tracking area, comprising: a tracking camera for capturing the tracking area to obtain a tracking image; a database for storing a plurality of goods data respectively corresponding to the plurality of goods and a plurality of identity data respectively corresponding to a plurality of persons; a color goods camera for capturing the area where the plurality of goods are located to generate a color goods image; and a control device connected to the tracking camera, the database, and the color goods camera, the control device comprising: an identity recognition module for obtaining the identity data of a person entering the tracking area; a pose tracking module for generating a pose model of the person according to the tracking image; a goods recognition module for recognizing a pattern label of each of the goods in the color goods image, determining a goods position of each of the goods according to the position of each pattern label, and obtaining goods data of the taken goods; a take analysis module for determining, according to the pose model and each goods position, whether any of the goods is taken by the person; and a processing module for linking the obtained identity data and the goods data of the taken goods.

13. 如請求項12所述的無人化貨品管理系統，其中該控制裝置更包括一結算模組，用以於一結算條件滿足時對連結該身份資料的所有該貨品資料執行一結算處理。 The unmanned goods management system according to claim 12, wherein the control device further comprises a settlement module for performing a settlement process on all the goods data linked to the identity data when a settlement condition is satisfied.

14. 如請求項13所述的無人化貨品管理系統，其中該資料庫更儲存分別對應該多個身份資料的多個支付資料及各該貨品資料的一價格資料，該結算模組取得該身份資料的該支付資料及連結該身份資料的所有該貨品資料的該價格資料，依據所取得的該價格資料計算一貨品總額，並依據該支付資料及該貨品總額進行結帳。 The unmanned goods management system according to claim 13, wherein the database further stores a plurality of payment data respectively corresponding to the plurality of identity data and price data of each of the goods data; the settlement module obtains the payment data of the identity data and the price data of all the goods data linked to the identity data, calculates a goods total according to the obtained price data, and performs checkout according to the payment data and the goods total.

15. 如請求項13所述的無人化貨品管理系統，其中該結算條件是偵測到該人員離開該追蹤區。 The unmanned goods management system according to claim 13, wherein the settlement condition is that the person is detected leaving the tracking area.

16. 如請求項12所述的無人化貨品管理系統，其中該資料庫更儲存分別對應該多個身份資料的多個特徵資料；其中，該無人化貨品管理系統更包括連接該控制裝置的一辨識裝置，該辨識裝置用以取得進入該追蹤區的該人員的該特徵資料；其中，該身份辨識模組是自該資料庫查詢對應該特徵資料的該身份資料；其中，該辨識裝置是RFID讀卡機或生物辨識裝置，該特徵資料是該人員的識別碼或生物資料。 The unmanned goods management system according to claim 12, wherein the database further stores a plurality of feature data respectively corresponding to the plurality of identity data; wherein the unmanned goods management system further comprises an identification device connected to the control device, the identification device being configured to obtain the feature data of the person entering the tracking area; wherein the identity recognition module queries the database for the identity data corresponding to the feature data; wherein the identification device is an RFID card reader or a biometric device, and the feature data is an identification code or biometric data of the person.
17. 如請求項12所述的無人化貨品管理系統，其中該資料庫更儲存分別對應該多個身份資料的多個特徵資料；其中，該追蹤攝影機包括用以產生一彩色追蹤影像的一彩色追蹤攝影機；其中，該身份辨識模組是對該彩色追蹤影像執行臉部辨識以取得進入該追蹤區的該人員的臉部的一特徵資料，並依據該特徵資料於該資料庫查詢對應的該身份資料。 The unmanned goods management system according to claim 12, wherein the database further stores a plurality of feature data respectively corresponding to the plurality of identity data; wherein the tracking camera comprises a color tracking camera for generating a color tracking image; wherein the identity recognition module performs face recognition on the color tracking image to obtain feature data of the face of the person entering the tracking area, and queries the database for the corresponding identity data according to the feature data.

18. 如請求項12所述的無人化貨品管理系統，其中該追蹤攝影機包括用以產生一彩色追蹤影像的一彩色追蹤攝影機及用以產生一深度追蹤影像的一深度追蹤攝影機，該姿態追蹤模組是依據該彩色追蹤影像及該深度追蹤影像決定該人員的多個關節位置，並依據該多個關節位置產生該姿態模型。 The unmanned goods management system according to claim 12, wherein the tracking camera comprises a color tracking camera for generating a color tracking image and a depth tracking camera for generating a depth tracking image; the pose tracking module determines a plurality of joint positions of the person according to the color tracking image and the depth tracking image, and generates the pose model according to the plurality of joint positions.

19. 如請求項18所述的無人化貨品管理系統，其中該控制裝置更包括一臉部追蹤模組，用以依據該彩色追蹤影像追蹤該人員的一臉部位置；其中，該處理模組於依據該姿態模型的位置或該臉部位置判斷該人員接近該多個貨品時控制該彩色貨品攝影機開始拍攝該多個貨品所在區域。 The unmanned goods management system according to claim 18, wherein the control device further comprises a face tracking module for tracking a face position of the person according to the color tracking image; wherein the processing module controls the color goods camera to start capturing the area where the plurality of goods are located when it is determined, according to the position of the pose model or the face position, that the person approaches the plurality of goods.

20. 如請求項12所述的無人化貨品管理系統，其中該無人化貨品管理系統更包括連接該控制裝置的一深度貨品攝影機，該深度貨品攝影機用以拍攝該多個貨品所在區域以產生一深度貨品影像；其中，該資料庫更儲存各該貨品資料的該圖案標籤的形狀或顏色；其中，該控制裝置更包括一手部追蹤模組，用以依據該彩色貨品影像及該深度貨品影像產生該人員的一手部姿態模型；其中，該拿取分析模組是依據該手部姿態模型及各該貨品位置判斷是否任一該貨品被該人員拿取；其中，該貨品辨識模組是依據被拿取的該貨品的該圖案標籤的形狀或顏色於該資料庫查詢對應的該貨品資料。 The unmanned goods management system according to claim 12, further comprising a depth goods camera connected to the control device, the depth goods camera being configured to capture the area where the plurality of goods are located to generate a depth goods image; wherein the database further stores the shape or color of the pattern label of each of the goods data; wherein the control device further comprises a hand tracking module for generating a hand pose model of the person according to the color goods image and the depth goods image; wherein the take analysis module determines, according to the hand pose model and each goods position, whether any of the goods is taken by the person; wherein the goods recognition module queries the database for the corresponding goods data according to the shape or color of the pattern label of the taken goods.

21. 如請求項12所述的無人化貨品管理系統，其中該控制裝置更包括一身份確認模組，該身份確認模組對該彩色貨品影像執行臉部辨識以確認拿取該貨品的該人員的該身份資料；其中，該處理模組是連結已確認的該身份資料及該貨品資料。 The unmanned goods management system according to claim 12, wherein the control device further comprises an identity confirmation module, the identity confirmation module performing face recognition on the color goods image to confirm the identity data of the person who takes the goods; wherein the processing module links the confirmed identity data and the goods data.
22. 如請求項12所述的無人化貨品管理系統，其中該拿取分析模組依據該姿態模型及被拿取的該貨品的該貨品位置判斷是否該人員放回任一該貨品；其中，該貨品辨識模組取得被放回的該貨品的該貨品資料；其中，該處理模組解除該身份資料及被放回的該貨品的該貨品資料之間的連結。 The unmanned goods management system according to claim 12, wherein the take analysis module determines, according to the pose model and the goods position of the taken goods, whether the person puts back any of the goods; wherein the goods recognition module obtains the goods data of the returned goods; wherein the processing module releases the link between the identity data and the goods data of the returned goods.
TW107108461A 2018-03-13 2018-03-13 Unmanned goods management system and unmanned goods management method TWI675337B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW107108461A TWI675337B (en) 2018-03-13 2018-03-13 Unmanned goods management system and unmanned goods management method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW107108461A TWI675337B (en) 2018-03-13 2018-03-13 Unmanned goods management system and unmanned goods management method

Publications (2)

Publication Number Publication Date
TW201939383A TW201939383A (en) 2019-10-01
TWI675337B true TWI675337B (en) 2019-10-21

Family

ID=69023191

Family Applications (1)

Application Number Title Priority Date Filing Date
TW107108461A TWI675337B (en) 2018-03-13 2018-03-13 Unmanned goods management system and unmanned goods management method

Country Status (1)

Country Link
TW (1) TWI675337B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6847998B1 (en) * 1998-08-12 2005-01-25 Alasi Di Arcieri Franco & C.S.A.S. Apparatus for control and certification of the delivery of goods
US7243074B1 (en) * 1999-12-30 2007-07-10 General Electric Company Capacity monitoring process for a goods delivery system
CN106781121A (en) * 2016-12-14 2017-05-31 朱明� The supermarket self-checkout intelligence system of view-based access control model analysis
CN107067510A (en) * 2017-03-27 2017-08-18 杭州赛狐科技有限公司 A kind of unattended Supermarket shopping system

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI771790B (en) * 2020-11-03 2022-07-21 財團法人工業技術研究院 Intelligent store system and intelligent store method
US11551289B2 (en) 2020-11-03 2023-01-10 Industrial Technology Research Institute Intelligent store system and intelligent store method
TWI807605B (en) * 2022-01-21 2023-07-01 國立勤益科技大學 Image identifying shopping system

Also Published As

Publication number Publication date
TW201939383A (en) 2019-10-01

Similar Documents

Publication Publication Date Title
KR102510679B1 (en) Target positioning system and positioning method
US11790433B2 (en) Constructing shopper carts using video surveillance
US10290031B2 (en) Method and system for automated retail checkout using context recognition
US11587149B2 (en) Associating shoppers together
US20200258070A1 (en) Purchased product checkout support system
RU2739542C1 (en) Automatic registration system for a sales outlet
CN108846621A (en) A kind of inventory management system based on policy module
US20210312772A1 (en) Storefront device, storefront management method, and program
JPWO2019181499A1 (en) Store management device and store management method
WO2021125357A1 (en) Information processing system
JP2022548730A (en) Electronic device for automatic user identification
CN110287676A (en) Picking employee recognition methods, robot and computer readable storage medium
JP2021512385A (en) Methods and systems to support purchasing in the physical sales floor
WO2019033635A1 (en) Purchase settlement method, device, and system
TWI675337B (en) Unmanned goods management system and unmanned goods management method
WO2020243984A1 (en) Goods perception system, goods perception method and electronic device
JP2023526196A (en) Electronic device for automatic identification of users
JP6687199B2 (en) Product shelf position registration program and information processing device
CN110689389A (en) Computer vision-based shopping list automatic maintenance method and device, storage medium and terminal
US20200104565A1 (en) Context-aided machine vision item differentiation
CA3231848A1 (en) Contactless checkout system with theft detection
US11651416B2 (en) Goods purchase analysis assist system
CN109448278A (en) Self-service shopping and goods picking system for unmanned store
KR102612284B1 (en) Method and system for providing auto payment service using cartrail
US11756036B1 (en) Utilizing sensor data for automated user identification