TWI796072B - Identification system, method and computer readable medium thereof


Info

Publication number
TWI796072B
Authority
TW
Taiwan
Prior art keywords
person
model
images
feature vector
identity recognition
Prior art date
Application number
TW110149709A
Other languages
Chinese (zh)
Other versions
TW202326517A (en)
Inventor
吳忠倫
賴辰瑜
Original Assignee
關貿網路股份有限公司
Priority date
Filing date
Publication date
Application filed by 關貿網路股份有限公司
Priority to TW110149709A
Application granted
Publication of TWI796072B
Publication of TW202326517A

Landscapes

  • Testing Of Coins (AREA)
  • Traffic Control Systems (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides an identification system and method that include a data processing module, a feature extraction module, and a classification module. The data processing module crops a person's image from received image data to form input data. After the feature extraction module extracts a feature vector from the input data, the classification module determines from the feature vector whether the person is a uniformed clerk or a non-uniformed consumer, so that the flow of people can be counted. The invention thereby solves the problem that conventional face recognition technology is not suited to crowd counting in retail stores. The present invention also provides a computer-readable medium for executing the method of the present invention.

Description

Identity recognition system, method and computer-readable medium thereof

The present invention relates to image recognition technology, and more particularly to an identity recognition system, method, and computer-readable medium for identifying the identity of a person.

For a store, the number of visitors reflects its popularity and creates opportunities for consumption, so correctly counting the number of visitors over a day or a given period is important for a store's self-assessment. More specifically, smart-retail applications usually need to collect consumer behavior as a basis for improving the retail service experience. Traditionally, consumer behavior has been collected through questionnaires or by having clerks carry identification devices such as counters to tally the flow of people. Today, the more common approach is to use image recognition for people counting, hot-zone identification, and consumer gender and age-group recognition, so that information about visiting consumers can be obtained without disturbing them.

When analyzing consumer-related information, the number of consumers visiting the store is the most basic metric. However, a store usually contains both consumers and the clerks who serve them, so distinguishing consumers from clerks in people counting, in order to obtain the true number of consumers, is one of the important topics in smart retail today. A common approach is to use face recognition to identify and exclude clerks whose Face IDs have been registered in advance. However, because of the COVID-19 pandemic, people wear masks in public, which degrades the performance of face recognition. Experiments by the National Institute of Standards and Technology (NIST) indicate that the error rate of face recognition on masked faces ranges from 5% to 50%. Even though this has improved as face recognition technology advances, face recognition is still affected by the angle of the face. Conventional face recognition technology therefore remains deficient for identity recognition.

In addition, since clerks in retail stores usually wear uniforms, another approach is to use clothes recognition to identify uniforms and distinguish clerks. However, models for recognizing uniforms place very high demands on the viewing angle and resolution of the input clothing. Moreover, such models are usually trained on static images of a single fashion model or a single outfit, whereas consumers and clerks in a retail store are usually in motion. Conventional clothes recognition is therefore not suitable for streaming video recorded by closed-circuit television (CCTV) in retail stores, where people are densely packed, viewed at arbitrary angles, or partially occluded. Similarly, methods that use logo recognition or optical character recognition to identify the logo or lettering on a uniform suffer from the same problem as the clothes recognition technique above: they still require clear and relatively complete images, and are likewise unsuitable for the retail-store setting.

In view of the above problems, how to identify people in large-scale streaming video, and in particular how to obtain sufficiently rich information for identification even when a person appears relatively small in the frame or is partially occluded, has become a goal eagerly pursued by those skilled in the art.

To solve the above problems of the prior art, the present invention discloses an identity recognition system comprising: a data processing module for receiving image data and tracking a person in the image data with a multiple object tracking algorithm, so as to crop a plurality of person images corresponding to the person from the image data and form input data; a feature extraction module, coupled to the data processing module, which uses a person re-identification model to extract a feature vector of the person from the input data; and a classification module, coupled to the feature extraction module, which inputs the feature vector into a classification model to determine the identity of the person.

In one embodiment, the data processing module includes a pose orientation estimation model for multi-directional sampling of the person images, wherein the pose orientation estimation model recognizes the orientation of the person images so as to select person images of different orientations to form the input data.

In another embodiment, the person images of different orientations include the front, back, left, and/or right orientations of the person.

In another embodiment, the data processing module anonymizes the person images by blurring the faces to form the input data.

In another embodiment, extracting the feature vector includes obtaining feature vectors of the uniform, hat, and/or identification badge worn by the person.

In yet another embodiment, during model training the data processing module first labels the person images according to whether the person is wearing a uniform, so as to form training samples with which the person re-identification model is trained.

The present invention further discloses an identity recognition method executed by a computer, the method comprising the steps of: receiving image data; tracking a person in the image data with a multiple object tracking algorithm, so as to crop a plurality of person images corresponding to the person from the image data and form input data; extracting a feature vector of the person from the input data with a person re-identification model; and determining, with a classification model and based on the feature vector, whether the person is a consumer or a clerk.

In one embodiment, the step of forming the input data further includes multi-directional sampling of the person images with a pose orientation estimation model, so as to select person images of different orientations to form the input data. In another embodiment, the person images of different orientations include the front, back, left, and/or right orientations of the person.

In another embodiment, the step of forming the input data further includes anonymizing the person images by blurring the faces to form the input data.

In another embodiment, extracting the feature vector includes obtaining feature vectors of the uniform, hat, and/or identification badge worn by the person.

In yet another embodiment, the present invention further includes, during model training, first labeling the person images according to whether the person is wearing a uniform, so as to form training samples with which the person re-identification model is trained.

The present invention further discloses a computer-readable medium, used in a computing device or a computer, that stores instructions for executing the above identity recognition method.

As can be seen from the above, in the identity recognition system, method, and computer-readable medium of the present invention, the data processing module crops persons from the received image data to form input data, the feature extraction module extracts feature vectors of the persons from the input data, and the classification module classifies and counts the persons according to the feature vectors, so that the identity of each person is determined through uniform recognition and the flow of people can be counted.

1: Identity recognition system

11: Data processing module

12: Feature extraction module

13: Classification module

S201~S207: steps

S501~S507: process steps

U, NU: spaces

FIG. 1 is a system architecture diagram of the identity recognition system of the present invention.

FIG. 2 is a flow chart of training the person re-identification model of the identity recognition system of the present invention.

FIG. 3 is a schematic diagram of classification training performed by the identity recognition system of the present invention.

FIGS. 4A-4B are schematic diagrams of classification inference performed by the identity recognition system of the present invention.

FIG. 5 is a flow chart of the identity recognition method of the present invention.

The following describes the technical content of the present invention through specific embodiments; those skilled in the art can readily understand the advantages and effects of the present invention from the disclosure of this specification. The present invention may also be implemented or applied in other different embodiments.

FIG. 1 is a system architecture diagram of the identity recognition system of the present invention. As shown in the figure, the identity recognition system 1 of the present invention includes a data processing module 11, a feature extraction module 12, and a classification module 13. The data processing module 11 crops person images from the received image data to form input data. After the feature extraction module 12 extracts feature vectors from the input data, the classification module 13 determines, from the feature vectors, the identity of each person in the person images, for example consumer or clerk, or even retail store manager, promotional model, or delivery person, and counts the flow of people. In short, identity is determined from extracted feature vectors, so how to extract and recognize the features of the persons in the images is the key. The following description uses the distinction between consumers and clerks as an example.

The identity recognition system 1 of the present invention is described in detail as follows.

The data processing module 11 receives image data. In one embodiment, the data processing module 11 receives, as image data, streaming video recorded by an external closed-circuit television (CCTV) system, image data stored in a database built for the identity recognition system 1 of the present invention, or image data captured by a camera. More specifically, the data processing module 11 uses multiple object tracking (MOT), implemented as a multiple object tracking algorithm, to track the persons (for example consumers or clerks) in the image data. Concretely, the data processing module 11 can display the tracking of a person with a bounding box drawn by the multiple object tracking algorithm. After a person has been tracked, the corresponding region of the image data is cropped according to the person's bounding box. Since the image data is a continuous streaming video, cropping a person produces multiple person images, which form the input data.
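A minimal sketch of the tracking-and-cropping step described above is shown below. The patent does not name a specific MOT algorithm, so the `tracker` object and its `update()` signature are assumptions; only the crop-per-track bookkeeping reflects the description.

```python
# Illustrative sketch only: "tracker" stands for any multiple object tracker
# whose update(frame) call returns (track_id, (x, y, w, h)) tuples for the
# persons found in that frame.
from collections import defaultdict

import cv2  # OpenCV, used here only to decode the streaming video


def crop_tracked_persons(video_path, tracker):
    """Return {track_id: [person crops]} gathered over the whole video."""
    crops = defaultdict(list)
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        for track_id, (x, y, w, h) in tracker.update(frame):
            # Crop the frame to the tracked bounding box of this person.
            crops[track_id].append(frame[y:y + h, x:x + w].copy())
    cap.release()
    return crops
```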

In one embodiment, the data processing module 11 includes a pose orientation estimation model for multi-directional sampling of a person from the multiple person images. After a single person has been cropped from the image data, there are usually many person images of that person available as samples, covering various orientations and angles, so even continuous or random sampling may yield samples that look almost identical. To obtain, as far as possible, samples of the person facing different directions, the present invention applies a pose orientation estimation (POE) model for further analysis: the data processing module 11 first recognizes the orientation of the person with the pose orientation estimation model, performs multi-directional sampling of the person images, and finally keeps only person images of different orientations, which then form the input data. In other words, the multi-directional sampling of the present invention samples the front, back, left, and/or right orientations of the person, producing, for example, samples in which the person faces forward, backward, to the left, and/or to the right. By providing person images of a single person from multiple orientations, the correctness of the recognition results of the identity recognition system 1 of the present invention is improved, and removing redundant, similar data helps reduce the amount of computation.
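A minimal sketch of the orientation-based sampling described above, assuming a pose orientation estimator that maps a person crop to one of the labels "front", "back", "left", "right"; the `estimate_orientation` callable is a hypothetical stand-in for the POE model.

```python
def sample_by_orientation(person_crops, estimate_orientation):
    """Keep at most one crop per orientation to remove near-duplicate samples."""
    selected = {}
    for crop in person_crops:
        direction = estimate_orientation(crop)  # e.g. "front", "back", "left", "right"
        if direction not in selected:
            # The first crop seen for each orientation wins; later look-alikes are dropped.
            selected[direction] = crop
    return selected
```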

In one embodiment, the data processing module 11 anonymizes the person images by blurring the faces to form the input data. That is, the processing performed by the data processing module 11 also includes blurring the face regions in the person images, so that the identity recognition system 1 does not need to recognize facial features. By blurring the faces of the persons in the person images, the present invention lets the feature extraction module 12 focus, when extracting feature vectors, on parts other than the face, such as clothes, hats, identification badges, or even the logo on the clothes, which makes the recognition of the identity recognition system 1 of the present invention more accurate. In addition, blurring the faces of the persons also serves to protect consumer privacy.
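A minimal sketch of the face anonymization step, assuming OpenCV's bundled Haar cascade as a stand-in face detector; the patent does not say which detector is actually used.

```python
import cv2

# Hypothetical choice of detector; any face detector that returns boxes works here.
_face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")


def anonymize_faces(person_crop):
    """Blur every detected face region so only clothing, hats and badges stay distinctive."""
    gray = cv2.cvtColor(person_crop, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in _face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4):
        face = person_crop[y:y + h, x:x + w]
        # A strong Gaussian blur makes the face unrecognizable.
        person_crop[y:y + h, x:x + w] = cv2.GaussianBlur(face, (51, 51), 0)
    return person_crop
```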

The feature extraction module 12 uses a person re-identification model to extract a feature vector of the person from the input data, where extracting the person's feature vector includes obtaining feature vectors of features other than the face that can be used to determine identity (for example clerk or consumer), such as the uniform, logo (for example the retail store's trademark or store emblem), hat, and/or identification badge worn by the person.
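The patent does not disclose the architecture of the person re-identification model; the sketch below assumes a plain ResNet-50 trunk from torchvision as a stand-in feature extractor that maps a person crop to a fixed-length vector.

```python
import torch
import torchvision.transforms as T
from torchvision.models import resnet50

# Stand-in re-identification backbone: drop the classifier head so the
# network outputs a 2048-dimensional full-body feature vector.
reid_model = resnet50(weights="IMAGENET1K_V1")
reid_model.fc = torch.nn.Identity()
reid_model.eval()

_preprocess = T.Compose([
    T.ToPILImage(),
    T.Resize((256, 128)),  # a common person re-ID input size (an assumption)
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])


@torch.no_grad()
def extract_feature(person_crop):
    """Return a 1-D feature vector for one (face-blurred) person crop."""
    x = _preprocess(person_crop).unsqueeze(0)  # shape (1, 3, 256, 128)
    return reid_model(x).squeeze(0).numpy()
```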

The classification module 13 inputs the feature vectors extracted by the person re-identification model into a classification model, so that the classification model determines, from the feature vectors, whether the person corresponding to the input data is wearing a uniform and what the person's identity is, and classifies the person accordingly. In particular, the present invention classifies persons into clerks and non-clerks (consumers) and can count the flow of people after classification, thereby counting the number of consumers.

FIG. 2 is a flow chart of training the person re-identification model of the identity recognition system of the present invention. As shown in the figure, the person re-identification model can be trained before the identity recognition system is put into operation; besides establishing the person re-identification model, it can also be adjusted and improved so that consumers in the image data are counted correctly and precisely. The training steps are as follows.

In step S201, data is prepared. In this step, pre-stored image data can be obtained from the database built for the identity recognition system of the present invention, or image data captured by a camera of the system or by an external CCTV device can be received; the persons in the image data are then cropped out by the multiple object tracking algorithm to serve as samples.

In step S202, multi-directional sampling is performed. In this step, the multiple person images corresponding to one person are passed through the pose orientation estimation model to recognize the person's orientation, so as to obtain, as samples, person images of the person facing different directions. In one embodiment, the result of multi-directional sampling keeps samples of the front, back, left, and right orientations.

In step S203, the faces are anonymized by blurring. Since the present invention wants the person re-identification model to focus its feature extraction on clerk-specific features such as the uniform, logo, hat, or identification badge, the faces are anonymously blurred; blurring the faces has almost no effect on the recognition ability of person re-identification. Moreover, since identity recognition does not require recognizing facial features, anonymously blurring the faces when processing the input data (the samples) improves the generalization of the person re-identification model of the present invention in real settings (for example in retail stores): on the one hand it protects consumer privacy, and on the other hand it lets the person re-identification model focus on clothing features for recognition and reduces its dependence on facial features.

In step S204, the data is labeled. In this step, the samples are labeled according to whether the person in the sample is wearing a uniform. In short, in order to distinguish samples with and without uniforms, and thereby estimate whether the person is a clerk, this step first labels the person in each sample as wearing a uniform or not wearing a uniform.

In step S205, features are extracted. In this step, the samples are input into the person re-identification model to extract feature vectors, so that the person re-identification model extracts feature vectors separately for persons with and without uniforms, yielding the feature vectors of uniformed persons.

In step S206, classification is performed. In this step, the feature vectors are input into a classification model such as a support vector machine (SVM) to divide the samples into the two classes clerk and consumer. In addition, because the number of clerks in a retail store is usually far smaller than the number of consumers, the collected data is imbalanced data; therefore, when training the support vector machine, class weighting is used (for example the balanced class-weight setting of sklearn.svm.SVC in machine learning) to automatically increase the weight of the clerk class, which has fewer samples, and thus keep the person re-identification model from tending to classify persons into the consumer class, which has more samples.
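A minimal sketch of step S206 with scikit-learn, assuming the re-ID feature vectors and uniform labels from the earlier steps are already available; the file names are placeholders, not from the patent.

```python
import numpy as np
from sklearn.svm import SVC

X = np.load("reid_features.npy")   # (n_samples, feature_dim) re-ID feature vectors
y = np.load("uniform_labels.npy")  # 1 = wearing uniform (clerk), -1 = no uniform (consumer)

# class_weight="balanced" automatically up-weights the minority clerk class,
# which is the imbalance handling described for the SVC balanced mode above.
clf = SVC(kernel="rbf", class_weight="balanced")
clf.fit(X, y)


def is_clerk(feature_vector):
    """Return True if the classifier places the feature vector in the uniform class."""
    return clf.predict(feature_vector.reshape(1, -1))[0] == 1
```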

For example, if the image data used for training contains 1,000 consumer samples and 100 clerk samples, then even if every clerk is misclassified as a consumer the error rate is only 100/(1,000+100), about 9.1%, and the errors may be overlooked. The present invention therefore increases the weight of the clerk class, setting the ratio of the consumer weight to the clerk weight to 1:10, so that 1,000 consumers carry the same importance as 100 clerks. With this setting, if every clerk is misclassified the error rate becomes 100*10/(1,000*1+100*10) = 50%; in other words, misclassifying every clerk is then equivalent to misclassifying half the data, which raises the severity of the misclassification. By adjusting and setting weights, the present invention increases the severity the model assigns to misclassifying samples of the clerk class, which has relatively few samples, and thus avoids the problems that follow from such misclassification. In a specific embodiment, the weights of the present invention can be adjusted automatically with the balanced mode, which uses the formula total number of samples / (number of classes * number of samples in the class) to adjust the weights automatically according to the sample counts. Continuing the example above, the consumer weight is (1,000+100)/(2*1,000) = 0.55 and the clerk weight is (1,000+100)/(2*100) = 5.5, so the ratio of the consumer weight to the clerk weight is 0.55:5.5, i.e. 1:10, the same as in the example above; in this way model imbalance caused by misclassification can be avoided.
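A short check of the balanced-weight formula, using the sample counts given in the example above:

```python
# Balanced class weight = total_samples / (n_classes * samples_in_class)
n_consumer, n_clerk = 1000, 100
total, n_classes = n_consumer + n_clerk, 2

w_consumer = total / (n_classes * n_consumer)  # 1100 / 2000 = 0.55
w_clerk = total / (n_classes * n_clerk)        # 1100 / 200  = 5.5
print(w_consumer, w_clerk, w_clerk / w_consumer)  # 0.55 5.5 10.0 -> the 1:10 ratio
```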

In step S207, the weights are optimized. This step computes a loss value from the classification results and the ground truth, and updates the weights of the person re-identification model and the classification model so that the loss decreases, repeating the training until a termination condition is reached. In the ground-truth computation, t is the true class of the sample and takes the value 1 or -1, representing clerk (1) and consumer (-1) respectively. If the classification model predicts y = 0.7 and the true class of the sample is t = 1, the loss is max(0, 1 - t*y) = max(0, 1 - 1*0.7) = 0.3; that is, the closer y is to t, the smaller the loss.
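The per-sample loss in step S207 is the standard hinge loss; a one-function sketch reproducing the worked example:

```python
def hinge_loss(t, y):
    """Hinge loss for a true label t in {1, -1} and a prediction score y."""
    return max(0.0, 1.0 - t * y)


assert abs(hinge_loss(1, 0.7) - 0.3) < 1e-9  # the example given in step S207
```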

Next, so that the person re-identification model of the identity recognition system of the present invention can classify correctly after training, the training proceeds as follows.

FIG. 3 is a schematic diagram of classification training performed by the identity recognition system of the present invention. So that the classification model performing identity recognition can classify correctly after training, the training objective is set to train the full-body feature extraction ability of the person re-identification model and to ensure that the following conditions hold: (a) when the clothing is similar and the images show the same person, the feature vectors should be almost identical; (b) when the clothing is similar, the feature vectors should be close even for different persons; and (c) when the clothing is different, the feature vectors should not be close.

First, the labeled uniform samples (samples 1-3) and non-uniform samples (samples 4-6) are input into the person re-identification model to train its ability to extract full-body features of persons, so that the feature vectors extracted by the person re-identification model are identical or nearly identical for similarly dressed images of the same person, similar for similarly dressed images of different persons, and dissimilar for differently dressed persons.

Then the classification model is trained to classify the feature vectors so that, under the above conditions, the feature vectors corresponding to uniformed and non-uniformed persons fall respectively into the space U (Uniform, i.e. wearing a uniform) and the space NU (Non-Uniform, i.e. not wearing a uniform) separated by the decision boundary.

After the classification model has been trained, inference can be performed, as shown in FIGS. 4A and 4B, which are schematic diagrams of classification inference performed by the identity recognition system of the present invention. The identity recognition system performs multi-directional sampling and face anonymization blurring on the input samples, infers feature vectors from the input samples with the person re-identification model, and then classifies the feature vectors with the classification model for samples with and without uniforms. As shown in FIG. 4A, if the feature vector falls in the space U (Uniform), the person is wearing a uniform; conversely, if it falls in the space NU (Non-Uniform), the person is not wearing a uniform, as shown in FIG. 4B.

FIG. 5 is a flow chart of the identity recognition method of the present invention. As shown in the figure, the identity recognition method of the present invention is executed in an electronic device such as a computer, a server, or a cloud system. Specifically, the identity recognition method of the present invention includes the following steps.

In step S501, image data is received. In this step, stored image data can be obtained from the database, or image data captured by an installed camera or CCTV device can be received.

In step S502, persons are tracked. In this step, the persons in the image data are tracked with a multiple object tracking algorithm, so that multiple person images corresponding to each person are cropped from the image data.

In step S503, multi-directional sampling is performed. In this step, the cropped person images are sampled in multiple directions with the pose orientation estimation model; specifically, the multi-directional sampling samples the front, back, left, and/or right orientations of the person, and the input data is formed from the person images selected by multi-directional sampling.

In step S504, the faces are anonymized by blurring. In this step, the faces in the person images are anonymously blurred so that the face regions are blurred and cannot be recognized, and the input data is formed on that basis; besides protecting personal data, this also reduces the influence of the face during feature extraction.

In step S505, the input data is formed. In this step, the person images obtained after cropping, multi-directional sampling, and face anonymization blurring form the input data.

In step S506, the feature vectors of the person in the input data are extracted with the person re-identification model. The feature vector extraction includes extracting vectors of the person's uniform, logo, hat, and/or identification badge.

In step S507, whether the person is wearing a uniform and the person's identity are determined from the feature vector with the classification model. In this step, the classification model is used to determine the identity of the person in the person image; in the present invention, the person is mainly classified as a consumer or a clerk, after which the flow of people can be counted.
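A minimal end-to-end sketch of steps S501 to S507, reusing the helper functions sketched earlier (crop_tracked_persons, sample_by_orientation, anonymize_faces, extract_feature, is_clerk); those names are illustrative placeholders rather than components named by the patent, and the majority vote over one person's samples is an assumption, since the patent does not say how per-sample decisions are combined for a tracked person.

```python
def count_consumers(video_path, tracker, estimate_orientation):
    """Count tracked persons classified as consumers (non-uniformed) in one video."""
    consumers = 0
    # S501-S502: receive the video and crop every tracked person.
    for track_id, crops in crop_tracked_persons(video_path, tracker).items():
        # S503-S505: one crop per orientation, faces blurred, forming the input data.
        samples = [anonymize_faces(c)
                   for c in sample_by_orientation(crops, estimate_orientation).values()]
        if not samples:
            continue
        # S506-S507: classify each sample and combine by majority vote (assumption).
        clerk_votes = sum(is_clerk(extract_feature(s)) for s in samples)
        if clerk_votes <= len(samples) / 2:
            consumers += 1
    return consumers
```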

In one embodiment, in step S506, before the person re-identification model is used it can first be trained, so as to obtain the person re-identification model or to optimize it subsequently. Specifically, the training steps include preparing data, multi-directional sampling, face anonymization blurring, labeling data, feature extraction, classification, and weight optimization; the details have been explained with reference to FIG. 2 and the corresponding paragraphs and are not repeated here.

In one embodiment, the identity recognition system and method of the present invention were applied in a real setting. In short, training was performed at 9 large retail stores with training samples comprising 5,077 non-uniform samples and 755 uniform samples, and the person re-identification model trained in this way was tested on 1,074 test samples from another retail store; the average precision and recall both exceeded 93%, as shown in Table 1 below.

[Table 1: per-class precision and recall of the trained model; presented as an image in the original publication]

In addition, the present invention also discloses a computer-readable medium, used in a computing device or computer having a processor (for example a CPU or GPU) and/or memory, that stores instructions; the computing device or computer can execute the computer-readable medium through the processor and/or memory, so that the above method and its steps are carried out when the computer-readable medium is executed.

In one embodiment, the modules, units, and devices of the present invention include a microprocessor and a memory, and the algorithms, data, and programs are stored in the memory or in a chip; the microprocessor can load the data, algorithms, or programs from the memory to perform data analysis, computation, or other processing. In other words, the identity recognition system of the present invention can be executed on an electronic device, for example an ordinary computer, a tablet, or a server, which performs data analysis and computation after receiving the image data; the procedures performed by the identity recognition system can therefore be designed in software and built on an electronic device having a processor, memory, and other components so as to run on various electronic devices. Alternatively, the modules or units of the identity recognition system can each be composed of independent components, for example designed as a computing unit, memory, storage, or firmware with a processing unit, any of which can serve as a component implementing the present invention; related models such as the person re-identification model and the classification model may likewise be realized as software, hardware, or firmware.

In summary, the identity recognition system, method, and computer-readable medium of the present invention distinguish consumers from clerks in a retail store by recognizing uniforms and related features: person re-identification extracts feature vectors of a person's full-body features, and the feature vectors are then input into the classification model used to recognize uniforms so as to determine the person's identity, for example consumer or clerk. In addition, during training the samples are labeled, for example as wearing a uniform or not wearing a uniform, so that the model can learn and recognize the common features of clerks such as uniforms, hats, and identification badges. The present invention therefore solves the previous problems that face recognition cannot identify clerks without a registered Face ID and that the recognition rate is affected by masks and face angles. In other words, even a new clerk who has never been registered in any system can still be recognized as a clerk by the identity recognition system as long as the new clerk wears the clerk uniform, even with the clerk's back to the camera.

The above embodiments are merely illustrative and are not intended to limit the present invention. Any person skilled in the art may modify and change the above embodiments without departing from the spirit and scope of the present invention. The scope of protection of the present invention is therefore defined by the appended claims, and anything that does not affect the effects and implementation purposes of the present invention should be covered by the technical content disclosed herein.

1: Identity recognition system

11: Data processing module

12: Feature extraction module

13: Classification module

Claims (11)

1. An identity recognition system, comprising: a data processing module for receiving image data and tracking a person in the image data with a multiple object tracking algorithm, so as to crop a plurality of person images corresponding to the person from the image data, and then anonymize the plurality of person images by blurring the faces to form input data; a feature extraction module, coupled to the data processing module, which uses a person re-identification model to extract a feature vector of the person from the input data; and a classification module, coupled to the feature extraction module, which inputs the feature vector into a classification model to determine the identity of the person, wherein, during model training, class weighting is used to automatically increase the weight of the person class having the feature vector, so as to adjust the person re-identification model and the classification model.
2. The identity recognition system of claim 1, wherein the data processing module includes a pose orientation estimation model for multi-directional sampling of the plurality of person images, and wherein the pose orientation estimation model recognizes the orientation of the plurality of person images so as to select person images of different orientations to form the input data.
3. The identity recognition system of claim 2, wherein the person images of different orientations include the front, back, left, or right orientations of the person.
4. The identity recognition system of claim 1, wherein extracting the feature vector includes obtaining feature vectors of the uniform, hat, or identification badge worn by the person.
5. The identity recognition system of claim 1, wherein, during model training, the data processing module first labels the plurality of person images according to whether the person is wearing a uniform, so as to form training samples with which the person re-identification model is trained.
6. An identity recognition method, executed by a computer, the method comprising the steps of: receiving image data; tracking a person in the image data with a multiple object tracking algorithm, so as to crop a plurality of person images corresponding to the person from the image data, and then anonymizing the plurality of person images by blurring the faces to form input data; extracting a feature vector of the person from the input data with a person re-identification model; and determining the identity of the person from the feature vector with a classification model, wherein, during model training, class weighting is used to automatically increase the weight of the person class having the feature vector, so as to adjust the person re-identification model and the classification model.
7. The identity recognition method of claim 6, wherein the step of forming the input data further includes multi-directional sampling of the plurality of person images with a pose orientation estimation model, so as to select person images of different orientations to form the input data.
8. The identity recognition method of claim 7, wherein the person images of different orientations include the front, back, left, or right orientations of the person.
9. The identity recognition method of claim 6, wherein extracting the feature vector includes obtaining feature vectors of the uniform, hat, or identification badge worn by the person.
10. The identity recognition method of claim 6, further comprising, during model training, first labeling the plurality of person images according to whether the person is wearing a uniform, so as to form training samples with which the person re-identification model is trained.
11. A computer-readable medium, used in a computing device or a computer, storing instructions for executing the identity recognition method of any one of claims 6 to 10.
TW110149709A 2021-12-30 2021-12-30 Identification system, method and computer readable medium thereof TWI796072B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW110149709A TWI796072B (en) 2021-12-30 2021-12-30 Identification system, method and computer readable medium thereof


Publications (2)

Publication Number Publication Date
TWI796072B true TWI796072B (en) 2023-03-11
TW202326517A TW202326517A (en) 2023-07-01

Family

ID=86692267

Family Applications (1)

Application Number Title Priority Date Filing Date
TW110149709A TWI796072B (en) 2021-12-30 2021-12-30 Identification system, method and computer readable medium thereof

Country Status (1)

Country Link
TW (1) TWI796072B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104091156A (en) * 2014-07-10 2014-10-08 深圳市中控生物识别技术有限公司 Identity recognition method and device
US20170243058A1 (en) * 2014-10-28 2017-08-24 Watrix Technology Gait recognition method based on deep learning
CN110348352A (en) * 2019-07-01 2019-10-18 深圳前海达闼云端智能科技有限公司 Training method, terminal and storage medium for human face image age migration network
CN111428662A (en) * 2020-03-30 2020-07-17 齐鲁工业大学 Advertisement playing change method and system based on crowd attributes
TW202036476A (en) * 2019-03-25 2020-10-01 大陸商上海商湯智能科技有限公司 Method, device and electronic equipment for image processing and storage medium thereof
TW202119274A (en) * 2019-11-01 2021-05-16 財團法人工業技術研究院 Face image reconstruction method and system


Also Published As

Publication number Publication date
TW202326517A (en) 2023-07-01
