TWI722705B - Method for automatically labelling muscle feature points on face - Google Patents


Info

Publication number
TWI722705B
TWI722705B TW108144950A
Authority
TW
Taiwan
Prior art keywords
feature points
face
image
muscle
muscle feature
Prior art date
Application number
TW108144950A
Other languages
Chinese (zh)
Other versions
TW202123074A (en)
Inventor
王鴻君
Original Assignee
麗寶大數據股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 麗寶大數據股份有限公司 filed Critical 麗寶大數據股份有限公司
Priority to TW108144950A priority Critical patent/TWI722705B/en
Application granted granted Critical
Publication of TWI722705B publication Critical patent/TWI722705B/en
Publication of TW202123074A publication Critical patent/TW202123074A/en


Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A method for automatically labelling muscle feature points on a face, implemented by a face image analysis apparatus, is disclosed and includes the following steps: obtaining an identified image showing a face of a user; performing a face recognition procedure on the identified image to obtain multiple strong reference points on the face; performing a fuzzy comparison procedure on the face of the identified image based on a pre-trained training model to generate a comparison result; automatically labelling multiple muscle feature points on the face according to the comparison result, wherein the muscle feature points are respectively located at multiple weak reference points of the face; and displaying the muscle feature points and the identified image on a display unit.

Description

Method for Automatically Marking Facial Muscle Feature Points

The present invention relates to a method for marking a face, and more particularly to a method for marking facial muscle feature points.

Because human muscles (especially facial muscles) gradually loosen and sag with age, some users turn to skin-care products to maintain their muscles and skin, use cosmetics to conceal sagging muscles, or rely on exercise, cosmetic medicine, and similar means to slow the onset of such loosening and sagging.

Generally, a user sits in front of a mirror to perform skin care and apply makeup, or uses a smartphone, tablet, or dedicated makeup-assisting device to help speed up these tasks.

However, such devices can only assist the user with skin care and makeup; they cannot actively analyze the user's current muscle state, so the user has no way of knowing whether the skin-care and makeup actions performed are actually effective.

In view of this, the technical field needs a novel method that can effectively analyze a user's facial image and automatically mark multiple muscle feature points representing the user's current muscle state, so that the user can easily understand that state and judge whether the skin-care measures taken are effective.

The main objective of the present invention is to provide a method for automatically marking facial muscle feature points, which can automatically mark, on a user's facial image, a plurality of muscle feature points representing the state of the facial muscles.

To achieve the above objective, the method for automatically marking facial muscle feature points of the present invention is mainly applied to a facial image analysis apparatus and includes the following steps: obtaining an image to be recognized that contains at least the user's face; performing a face recognition procedure on the image to obtain multiple strong reference points on the face; performing a fuzzy comparison on the image based on a pre-trained training model to generate a recognition result; marking, according to the recognition result, a plurality of muscle feature points on the image, the points respectively falling on a plurality of weak reference points of the face; and displaying the image and the muscle feature points, overlaid, on a display unit.
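Expressed as pseudocode, the claimed steps form a short pipeline. All function names and return values below are hypothetical placeholders sketching how data flows between the steps; they are not the patent's implementation.

```python
# Illustrative sketch of the claimed pipeline; every function here is a
# hypothetical stand-in for one of the patent's procedures.

def detect_strong_points(image):
    # Stand-in for the face recognition procedure that finds strong
    # reference points (organs such as eyes, nose, mouth).
    return {"left_eye": (30, 40), "right_eye": (70, 40), "nose": (50, 60)}

def fuzzy_compare(image, model):
    # Stand-in for the fuzzy comparison against the pre-trained model.
    return {"score": 0.9}

def label_muscle_points(comparison):
    # Stand-in: derive the weak-reference muscle feature points
    # from the comparison result (here: four fixed example points).
    return [(35, 75), (35, 95), (65, 75), (65, 95)]

def analyze(image, model):
    strong = detect_strong_points(image)
    result = fuzzy_compare(image, model)
    muscle_points = label_muscle_points(result)
    # In the apparatus, image and points are then overlaid on the display unit.
    return strong, muscle_points

strong, points = analyze(None, None)
```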

Compared with the related art, the present invention automatically marks at least four muscle feature points by recognizing the user's facial image, and the marked points can represent the muscle state of the face. The user can therefore quickly understand his or her current facial muscle state from the marked points and confirm whether the skin-care measures taken are effective.

1: Facial image analysis apparatus
10: Processor
11: Display unit
12: Image capture unit
13: Input unit
14: Wireless transmission unit
15: Storage unit
151: Artificial intelligence training algorithm
152: Training data
153: Training model
1531: First regression tree
1532: Second regression tree
1533: n-th regression tree
1534: Decision endpoint
1535: Weight
2: Image to be recognized
3, 4: Muscle feature points
31, 41: First muscle feature point
32, 42: Second muscle feature point
33, 43: Third muscle feature point
34, 44: Fourth muscle feature point
5: Face positioning frame
61: First tear-trough tangent
62: Second tear-trough tangent
63: First nasolabial-fold tangent
64: Second nasolabial-fold tangent
65: First marionette-line tangent
66: Second marionette-line tangent
67: First eye-corner vertical line
68: Second eye-corner vertical line
69: First mandible-ramus tangent
70: Second mandible-ramus tangent
80: Predicted feature point
81: First adjusted prediction point
82: Second adjusted prediction point
83: (n-1)-th adjusted prediction point
84: Final prediction point
S10~S20: Model-building steps
S30~S36: Recognition steps
S320~S330: Recognition steps

FIG. 1 is a schematic diagram of the facial image analysis apparatus according to a first embodiment of the present invention.

FIG. 2 is a block diagram of the facial image analysis apparatus according to the first embodiment.

FIG. 3 is a schematic diagram of the muscle feature points according to the first embodiment.

FIG. 4 is a flowchart of building the training model according to the first embodiment.

FIG. 5 is a schematic diagram of the marking positions of the muscle feature points according to the first embodiment.

FIG. 6 is a schematic diagram of the training data according to the first embodiment.

FIG. 7 is a recognition flowchart according to the first embodiment.

FIG. 8 is a recognition flowchart according to a second embodiment.

FIG. 9 is a schematic diagram of the training model according to the first embodiment.

FIG. 10 is a schematic diagram of feature-point adjustment according to the first embodiment.

A preferred embodiment of the present invention is described in detail below with reference to the drawings.

Please refer to FIG. 1 and FIG. 2, where FIG. 1 is a schematic diagram of the facial image analysis apparatus of the present invention and FIG. 2 is a block diagram of the same apparatus, both according to the first embodiment.

The present invention discloses a method for automatically marking facial muscle feature points (hereinafter, the marking method), which is mainly applied to the facial image analysis apparatus 1 shown in FIG. 1 and FIG. 2 (hereinafter, the analysis apparatus 1). Specifically, the analysis apparatus 1 obtains the user's facial image, recognizes it based on a pre-trained artificial intelligence model, and then, according to the recognition result, marks on the facial image multiple muscle feature points that represent the user's current facial muscle state. Through the points automatically marked by the analysis apparatus 1, the user can quickly and effectively understand his or her current facial muscle state and evaluate whether the cosmetics, skin-care products, and other measures currently used (such as exercise or cosmetic medicine) are effective.

The analysis apparatus 1 shown in FIG. 1 and FIG. 2 is mainly used to assist the user in applying makeup. Specifically, it can offer makeup suggestions before the user starts, provide assistance during makeup, and analyze and evaluate the result afterwards. Notably, when another electronic device (such as a smart mobile device or a tablet) has hardware similar to the analysis apparatus 1 and is installed with application software that can execute the steps of the marking method, the marking method of the present invention is not limited to the analysis apparatus 1 of FIG. 1 and FIG. 2 and can be implemented on various electronic devices.

As shown in FIG. 1 and FIG. 2, the analysis apparatus 1 mainly includes a processor 10, a display unit 11, an image capture unit 12, an input unit 13, a wireless transmission unit 14, and a storage unit 15. The processor 10 is electrically connected to the display unit 11, the image capture unit 12, the input unit 13, the wireless transmission unit 14, and the storage unit 15 through a bus to integrate and control these components.

The analysis apparatus 1 can capture a photo of the user in real time through the image capture unit 12 (mainly a photo containing the user's face), extract the facial image from the photo, and display it on the display unit 11. A main technical feature of the present invention is that the analysis apparatus 1 can recognize the captured facial image, automatically mark on it multiple muscle feature points that represent the user's current facial muscle state, and then display the facial image and the points together on the display unit 11. The user can thus quickly confirm his or her current muscle state from the information displayed on the analysis apparatus 1.

The input unit 13 is arranged on one side of the analysis apparatus 1 and can be a physical input unit or a touch input unit. Through the input unit 13, the user can interact with the analysis apparatus 1 to operate it and issue commands, for example selecting among its functions (such as the makeup-assist function or the facial muscle analysis function) or paging through the makeup steps or suggestions it provides (next page, previous page).

In one embodiment, the display unit 11 is a touch screen that accepts user operations. In this embodiment the input unit 13 is integrated into the display unit 11 rather than existing separately.

The wireless transmission unit 14 connects to a network, through which the analysis apparatus 1 can reach remote electronic devices or servers. In the present invention, the analysis apparatus 1 mainly uses one or more algorithms to run the training procedure of the artificial intelligence model, the facial image recognition procedure, and the muscle feature point marking procedure; these algorithms and the trained model may be stored in the analysis apparatus 1 itself or in a remote electronic device or server, without limitation. Furthermore, a user may operate a user terminal (not shown) to connect to the analysis apparatus 1 over the network and maintain or update its firmware directly from a remote location.

In one embodiment, the analysis apparatus 1 captures the user's facial image in real time through the image capture unit 12 and recognizes it to analyze the user's current facial muscle state. In another embodiment, the analysis apparatus 1 may instead use the wireless transmission unit 14 to download a photo the user took earlier from a remote electronic device or server and recognize the facial image in it, so as to evaluate the user's facial muscle state at the time the photo was taken. In yet another embodiment, the analysis apparatus 1 may read such a previously taken photo from the storage unit 15 and recognize the facial image in it for the same purpose.

The storage unit 15 stores the algorithms and models the analysis apparatus 1 needs to execute the marking method of the present invention; specifically, it stores at least the training model 153 used for the fuzzy comparison, the artificial intelligence training algorithm 151 used to train that model, and a large amount of training data 152, but is not limited thereto. As noted above, the artificial intelligence training algorithm 151, the training data 152, and the training model 153 may also be stored on a remote electronic device or server, which the analysis apparatus 1 can access over the network.

In another embodiment, the artificial intelligence training algorithm 151 may be embedded in the processor 10 as part of its firmware, but this is not a limitation.

In the present invention, the manufacturer of the analysis apparatus 1 can import a large amount of training data 152 into the storage unit 15 in advance. The training data 152 are mainly facial images of unspecified people, on which multiple muscle feature points defined by experts (such as physicians or beauticians) have been marked manually. The analysis apparatus 1 can then run the training procedure of the training model 153 on the training data 152 through the artificial intelligence training algorithm 151. In the present invention, the positions of the muscle feature points can be used to represent the muscle state of the face.

A main technical feature of the present invention is that when the analysis apparatus 1 obtains a new photo (for example, by capturing the user's facial image in real time through the image capture unit 12, or by reading a previously taken photo from the storage unit 15), it can perform a fuzzy comparison on the photo based on the training model 153 to automatically mark the positions of multiple muscle feature points on the facial image in the photo. The user can then judge his or her current facial muscle state from the positions automatically marked by the analysis apparatus 1.

Please also refer to FIG. 3, a schematic diagram of the muscle feature points according to the first embodiment. As shown in FIG. 3, after the analysis apparatus 1 obtains, through the image capture unit 12, the wireless transmission unit 14, or the storage unit 15, an image to be recognized 2 that contains the user's face, it performs a fuzzy comparison on the image 2 based on the pre-trained training model 153 to find the positions of a plurality of muscle feature points 3 on the facial image, marks the points 3 on the image 2, and displays the result on the display unit 11.
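The mark-and-display step can be illustrated with a minimal overlay routine that paints the found points onto an image array (a stand-in for rendering on the display unit 11; the coordinates and square marker style are assumptions for illustration):

```python
import numpy as np

def overlay_points(image, points, radius=2, value=255):
    """Mark each (x, y) feature point on a grayscale image array
    by painting a small square, clipped to the image bounds."""
    out = image.copy()
    h, w = out.shape
    for x, y in points:
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        out[y0:y1, x0:x1] = value
    return out

img = np.zeros((100, 100), dtype=np.uint8)          # blank stand-in image
marked = overlay_points(img, [(35, 75), (65, 75), (35, 90), (65, 90)])
```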

In the embodiment of FIG. 3, the analysis apparatus 1 automatically marks at least four muscle feature points 3 on the image to be recognized 2 based on the training model 153: a first muscle feature point 31 on the upper-left of the face (corresponding to the left smile muscle), a second muscle feature point 32 on the lower-left (corresponding to the left masseter), a third muscle feature point 33 on the upper-right (corresponding to the right smile muscle), and a fourth muscle feature point 34 on the lower-right (corresponding to the right masseter). FIG. 3 takes four muscle feature points 3 as an example, but in other embodiments their number is not limited to four.

Notably, within a certain range, the closer the first muscle feature point 31 and the third muscle feature point 33 lie to the inner side of the face (near the nose) and the upper side (near the eyes), the better the user's muscle state (for example, younger-looking, firmer muscles). Similarly, within a certain range, the closer the second muscle feature point 32 and the fourth muscle feature point 34 lie to the inner side of the face (near the mouth) and the upper side (near the nose tip), the better the user's muscle state.
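The qualitative trend above (points nearer the inner/upper reference landmarks indicate firmer muscles) could be turned into a number. The scoring formula below is purely an assumption for illustration; the patent states only the trend, not any specific metric, and the coordinates are hypothetical.

```python
import math

def firmness_score(point, reference, scale):
    """Smaller distance to the inner/upper reference -> higher score in [0, 1].

    Illustrative only: the document gives the qualitative trend, not
    this formula. `scale` normalizes the distance and is an assumption.
    """
    d = math.dist(point, reference)
    return max(0.0, 1.0 - d / scale)

nose = (50, 60)                                     # hypothetical nose tip
closer = firmness_score((45, 65), nose, scale=50)   # point near the nose
farther = firmness_score((30, 85), nose, scale=50)  # point farther out/down
```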

As described above, the marking method of the present invention recognizes the image to be recognized 2 based on the pre-trained training model 153 and, once recognition is complete, automatically marks at least four muscle feature points 3 on the image 2, effectively enabling the user to judge his or her current facial muscle state from the positions of those points.

Compared with the many strong reference points on a face (organs such as the eyes, nose, and mouth, or other clearly distinguishable parts), the muscle feature points 3 of the present invention mainly correspond to weak reference points on the face. Therefore, for the analysis apparatus 1 to implement the marking method and automatically mark the muscle feature points 3 on a user-supplied image to be recognized 2, it must first build the training model 153 used to perform the fuzzy comparison.

Please refer to FIG. 4, a flowchart of building the training model according to the first embodiment, which illustrates the procedure for building the training model 153 of the present invention.

As shown in FIG. 4, first, a large amount of training data 152 is obtained, each item being a facial image of the same or different people. Then, based on the facial muscle state actually shown in each image and following the feature-point setting rules defined by experts (such as physicians or beauticians), at least four muscle feature points are marked on each item of training data 152 (step S10). As noted above, the present invention uses at least four muscle feature points to represent the muscle state of a face; in other words, during the training phase the points can be marked manually on the training data 152 according to the muscle state of each facial image, to serve as the training basis of the training model 153.

Please also refer to FIG. 5, a schematic diagram of the marking positions of the muscle feature points according to the first embodiment. In the present invention, the muscle feature points 4 mainly include a first muscle feature point 41 and a second muscle feature point 42 on the left side of the face, and a third muscle feature point 43 and a fourth muscle feature point 44 on the right side. The at least four muscle feature points 4 lie on four weak reference points of the face, the number of weak reference points corresponding to the number of muscle feature points 4. Specifically, if more than four muscle feature points 4 are marked on the training data 152, each of them must still lie on a weak reference point of the face.

More specifically, as shown in FIG. 5, the facial image in the training data 152 can be analyzed and judged by eye or by an algorithm so as to obtain, from the actual muscle state of the face, a set of region-defining auxiliary lines: on the left side of the face, a first tear-trough tangent 61, a first nasolabial-fold tangent 63, a first marionette-line tangent 65, a first eye-corner vertical line 67, and a first mandible-ramus tangent 69; and on the right side, a second tear-trough tangent 62, a second nasolabial-fold tangent 64, a second marionette-line tangent 66, a second eye-corner vertical line 68, and a second mandible-ramus tangent 70. In the present invention, these auxiliary lines determine the specific position of each muscle feature point 4, and the at least four muscle feature points 4 are marked directly on the training data 152.

When marking the muscle feature points 4, the first muscle feature point 41 is made to fall within the region bounded by the first tear-trough tangent 61, the first nasolabial-fold tangent 63, the first eye-corner vertical line 67, and the first mandible-ramus tangent 69 on the left side of the face; the second muscle feature point 42 within the region bounded by the first nasolabial-fold tangent 63, the first marionette-line tangent 65, the first eye-corner vertical line 67, and the first mandible-ramus tangent 69; the third muscle feature point 43 within the region bounded by the second tear-trough tangent 62, the second nasolabial-fold tangent 64, the second eye-corner vertical line 68, and the second mandible-ramus tangent 70 on the right side of the face; and the fourth muscle feature point 44 within the region bounded by the second nasolabial-fold tangent 64, the second marionette-line tangent 66, the second eye-corner vertical line 68, and the second mandible-ramus tangent 70. The above is only one specific example of the present invention and is not limiting.
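Checking whether a marked point falls inside the region bounded by its four auxiliary lines amounts to a point-in-convex-polygon test. The sketch below assumes the region is given by its four corner points (the pairwise intersections of the tangent lines); this representation is an assumption for illustration, not the patent's stated data structure.

```python
def side(p, a, b):
    """Cross-product sign: > 0 if p is to the left of the directed line a -> b."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def in_region(p, corners):
    """True if p lies inside (or on the boundary of) the convex polygon
    whose corners are given in order; a stand-in for 'the point falls
    within the region bounded by the four auxiliary tangent lines'."""
    n = len(corners)
    signs = [side(p, corners[i], corners[(i + 1) % n]) for i in range(n)]
    return all(s >= 0 for s in signs) or all(s <= 0 for s in signs)

# Hypothetical region for one muscle feature point.
region = [(0, 0), (10, 0), (10, 10), (0, 10)]
inside = in_region((5, 5), region)    # correctly placed point
outside = in_region((15, 5), region)  # misplaced point -> would trigger a warning
```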

In addition, the analysis apparatus 1 can check whether each muscle feature point 4 falls within the region formed by its corresponding auxiliary lines, as a preliminary judgment of whether the user has mislabeled the point, so that mislabeled cases can be filtered out or a warning raised for the user to reconfirm. Likewise, after the training model 153 described below has been built, this auxiliary check can optionally be used to examine whether the model's automatically marked results are erroneous, which is particularly helpful when the training data 152 is relatively scarce.

Returning to FIG. 4: a preliminary face recognition procedure is also performed on the training data 152 by the analysis apparatus 1 or another electronic device, so as to locate facial features in the training data 152 that can serve as a basis for training. In one embodiment, the processor 10 runs the Histogram of Oriented Gradients (HOG) algorithm of the Dlib Face Landmark system to perform the face recognition procedure on the training data 152 and to generate, on each item, a face positioning frame that indicates the valid facial features (step S12). In one embodiment, at least some of these facial features are strong reference points of the face.

Please also refer to FIG. 6, a schematic diagram of the training data according to the first embodiment. As shown in FIG. 6, the HOG algorithm converts a photo into a plurality of gradient vectors and judges, from the combination of their magnitudes, directions, and shapes, where the facial image lies in the photo, generating a face positioning frame 5 that covers the entire facial image. In this embodiment, the four muscle feature points 4 all fall within the face positioning frame 5, and the lines connecting them form a rectangular or trapezoidal frame.
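As context for step S12, the core HOG idea can be sketched in a few lines of NumPy: per-pixel gradients are computed and their magnitudes accumulated into orientation bins. This toy version summarizes a single patch with one histogram; the real Dlib detector computes per-cell histograms with block normalization and scans the image with a sliding window, so this is only an illustration of the gradient vectors involved, not Dlib's implementation.

```python
import numpy as np

def hog_histogram(image, bins=9):
    """Toy HOG: accumulate gradient magnitudes into unsigned-orientation bins."""
    img = image.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]         # central horizontal gradient
    gy[1:-1, :] = img[2:, :] - img[:-2, :]         # central vertical gradient
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0   # unsigned orientation [0, 180)
    idx = np.minimum((ang / (180.0 / bins)).astype(int), bins - 1)
    hist = np.zeros(bins)
    np.add.at(hist, idx.ravel(), mag.ravel())      # weight bins by magnitude
    return hist

img = np.zeros((8, 8))
img[:, 4:] = 1.0          # vertical edge -> purely horizontal gradients
h = hog_histogram(img)    # all energy lands in the 0-degree bin
```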

Returning to FIG. 4: after step S12, the analysis apparatus 1 executes the artificial intelligence training algorithm 151 on the training data 152 (step S14) to build the training model 153. More specifically, the algorithm 151 runs the training procedure based on the training data 152, the muscle feature points 4 marked on it, the face positioning frame 5 on it, and the plural facial feature points within the face positioning frame 5.

It is worth mentioning that the user may, depending on the actual situation, either mark the muscle feature points 4 first or generate the face positioning frame 5 first. In other words, steps S10 and S12 have no fixed order of execution.

In the present invention, during the training procedure the artificial intelligence training algorithm 151 analyzes the training data 152 and records at least the correspondences among the multiple feature points within the face positioning frame 5, the correspondences among the plurality of muscle feature points 4 (for example, the size, shape, and angles of the rectangular or trapezoidal frame they form), and the correspondences between each muscle feature point 4 and one or more feature points (in particular, the strong reference points) within the face positioning frame 5 (step S16).

It is worth mentioning that, in step S16, while executing the training procedure the artificial intelligence training algorithm 151 may also tally the probability of each muscle feature point 4 appearing at each position (for example, the positions at which the muscle feature points 4 never appear), as well as the basic positioning rules of each muscle feature point 4 (for example, the first muscle feature point 41 must lie above the second muscle feature point 42, and the third muscle feature point 43 must lie to the right of the first muscle feature point 41). These serve as prediction reference values for the analysis device 1 when performing the fuzzy comparison on a newly input image 2 to be recognized (described in detail later).
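The statistics described in this step could be gathered as in the following sketch; the sample structure (`face_box`, `points`) and the grid resolution are illustrative assumptions, not the patented implementation.

```python
from collections import Counter

def tally_position_stats(samples, grid=4):
    """For each labeled muscle feature point, count which cells of a coarse
    grid over the face positioning box it falls into across all training
    samples. Grid cells with zero counts correspond to positions where a
    point 'never appears'. Each sample is a dict with a face box and four
    labeled points (hypothetical structure, for illustration only)."""
    counts = [Counter() for _ in range(4)]
    for s in samples:
        left, top, right, bottom = s["face_box"]
        for i, (px, py) in enumerate(s["points"]):
            cx = min(int((px - left) / (right - left) * grid), grid - 1)
            cy = min(int((py - top) / (bottom - top) * grid), grid - 1)
            counts[i][(cx, cy)] += 1
    return counts

def check_basic_rules(points):
    """Basic positioning rules named in the text: point 1 lies above point 2,
    and point 3 lies to the right of point 1 (image y grows downward)."""
    p1, p2, p3, _ = points
    return p1[1] < p2[1] and p3[0] > p1[0]
```

A candidate placement can then be rejected at prediction time if it lands in a zero-count cell or violates `check_basic_rules`.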

After step S16, the artificial intelligence training algorithm 151 generates a plurality of muscle feature point positioning rules from the analyzed correspondences (step S18), and builds a training model 153 based at least on the positioning rules, a judgment depth, and a regression count (step S20). Once the training model 153 has been built, the analysis device 1 can perform a fuzzy comparison on a newly input image 2 to be recognized, so as to automatically mark the plurality of muscle feature points 3 on the face image in the image 2 to be recognized based on the training model 153.

In the present invention, the training model 153 generated by the artificial intelligence training algorithm 151 is essentially a regressor containing a cascade of regression trees with identical structure. Each regression tree has a plurality of judgment endpoints, and the content of at least some of the judgment endpoints corresponds to the above-mentioned muscle feature point positioning rules (for example, as shown in FIG. 9 and described in detail later).

It is worth mentioning that, before executing the artificial intelligence training algorithm 151, the user can set the regression count and the number of judgment endpoints according to practical factors such as the hardware performance of the analysis device 1, the required recognition accuracy, and the acceptable execution time. In this embodiment, the number of regression trees equals the regression count, and the number of judgment endpoints equals the judgment depth. Setting aside hardware performance, recognition accuracy, and execution time, the larger the regression count and the judgment depth, the more accurate the recognition result will be.

Refer to FIG. 7, which shows a first specific embodiment of the recognition flowchart of the present invention. FIG. 7 discloses the execution steps of the marking method of the present invention, and the marking method is mainly applied to the analysis device 1 shown in FIG. 1 and FIG. 2.

When the user wants to recognize the muscle state of the face through the analysis device 1, the image 2 to be recognized (as shown in FIG. 3), which contains the user's face image, is input into the analysis device 1 so that the analysis device 1 obtains the user's image 2 to be recognized (step S30). In one embodiment, the analysis device 1 captures the image 2 to be recognized in real time through its image capture unit 12. In another embodiment, the analysis device 1 reads the image 2 to be recognized, stored in advance by the user, from the storage unit 15. In yet another embodiment, the analysis device 1 receives the image 2 to be recognized from an external source via the wireless transmission unit 14.

After obtaining the image 2 to be recognized, the analysis device 1 executes, through the processor 10, a face recognition procedure on the image 2 to be recognized, so as to generate the face positioning frame 5 shown in FIG. 6 (step S32). The face positioning frame 5 marks the face image within the image 2 to be recognized, and covers multiple strong reference points in the face image (for example, organs such as the eyes, nose, and mouth, or other distinct feature points).

In one embodiment, the processor 10 executes the face recognition procedure on the image 2 to be recognized through the Histogram of Oriented Gradients (HOG) algorithm of the Dlib Face Landmark system, and generates the face positioning frame 5 on the image 2 to be recognized. In the present invention, the processor 10 is trained only on the multiple feature points within the face positioning frame 5 of the training data 152; therefore, during the recognition procedure, the processor 10 likewise performs recognition only on the multiple feature points within the face positioning frame 5 of the image 2 to be recognized.
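The restriction to feature points inside the positioning frame can be sketched as a simple filter; the `(left, top, right, bottom)` tuple layout of the box is an assumption for illustration.

```python
def points_in_box(points, box):
    """Keep only the feature points that fall inside the face positioning
    frame; per the text, both training and recognition operate solely on
    these points. `box` is (left, top, right, bottom)."""
    left, top, right, bottom = box
    return [(x, y) for (x, y) in points
            if left <= x <= right and top <= y <= bottom]

# The point at (200, 50) lies outside the frame and is discarded.
inside = points_in_box([(10, 10), (200, 50), (30, 40)], (0, 0, 100, 100))
```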

After step S32, the processor 10 performs a fuzzy comparison on the image 2 to be recognized based on the training model 153 and produces a recognition result (step S34). The processor 10 can then directly mark the positions of the four muscle feature points 3 on the image 2 to be recognized according to the recognition result (step S36); these four muscle feature points 3 fall within the face positioning frame 5 on the image to be recognized and correspond respectively to at least four weak reference points on the face.

After step S36, the processor 10 controls the display unit 11 to display the image 2 to be recognized and the four muscle feature points 3 (step S38), with the marks of the four muscle feature points 3 overlaid on the corresponding positions of the face image in the image 2 to be recognized.

In the marking method of the present invention, the processor 10 performs the fuzzy comparison on the image 2 to be recognized based on the training model 153; therefore, the positions of the four muscle feature points 3 that the processor 10 automatically marks on the image 2 to be recognized correspond to the positioning rules derived from the analysis of a large amount of training data 152.

Specifically, the number of muscle feature points 3 that the processor 10 automatically marks on the image 2 to be recognized through the fuzzy comparison corresponds to the number of muscle feature points 4 marked on each piece of training data 152. In other words, the number of muscle feature points 3 marked in step S36 and displayed on the image 2 to be recognized in step S38 corresponds to the number of muscle feature points 4 marked on each piece of training data 152 in the storage unit 15 (i.e., it is plural), and is not limited to the four mentioned above.

In one embodiment, the four muscle feature points 3 automatically marked by the processor 10 include a first muscle feature point 31 and a second muscle feature point 32 located on the left side of the face image in the image 2 to be recognized, and a third muscle feature point 33 and a fourth muscle feature point 34 located on the right side of the face image, with the lines connecting these four muscle feature points 3 forming a rectangular or trapezoidal frame.

In another embodiment, the position of the first muscle feature point 31 automatically marked by the processor 10 falls within the previously described region bounded by the auxiliary lines on the left side of the face image, namely the first tear-trough tangent 61, the first nasolabial-fold tangent 63, the first eye-corner vertical line 67, and the first jawbone-ramus tangent 69. The position of the second muscle feature point falls within the region bounded by the auxiliary lines on the left side of the face image, namely the first nasolabial-fold tangent 63, the first marionette-line tangent 65, the first eye-corner vertical line 67, and the first jawbone-ramus tangent 69. The third muscle feature point falls within the region bounded by the auxiliary lines on the right side of the face image, namely the second tear-trough tangent 62, the second nasolabial-fold tangent 64, the second eye-corner vertical line 68, and the second jawbone-ramus tangent 70. The fourth muscle feature point falls within the region bounded by the auxiliary lines on the right side of the face image, namely the second nasolabial-fold tangent 64, the second marionette-line tangent 66, the second eye-corner vertical line 68, and the second jawbone-ramus tangent 70.

As described above, with the four muscle feature points 3 automatically marked by the analysis device 1, the user can quickly and effectively assess the current state of his or her facial muscles. For example, if the first muscle feature point 31 and the third muscle feature point 33 are located close to the eyes, the user's muscles are quite firm. As another example, if the second muscle feature point 32 and the fourth muscle feature point 34 are far from the mouth and close to the chin, the user's muscles are quite slack and in need of care.

Please continue to refer to FIG. 8, which shows a second specific embodiment of the recognition flowchart of the present invention. FIG. 8 further explains the specific content of step S34 in FIG. 7.

As mentioned above, the training model 153 of the present invention is essentially a regressor containing a cascade of regression trees, and when the artificial intelligence training algorithm 151 builds the training model 153, it analyzes a large amount of training data 152 and records the probability of each muscle feature point appearing at each position as well as the basic positioning rules. Before the processor 10 performs the fuzzy comparison on the image 2 to be recognized based on the training model 153, it first randomly generates a plurality of predicted feature points on the image 2 to be recognized according to the basic positioning rules and the recorded probabilities (step S340). In this embodiment, the plurality of predicted feature points constitute a random initial estimate, but positions that cannot occur (for example, above the eyebrows or inside the mouth) are excluded according to the recorded probabilities, and the points conform to the basic positioning rules (for example, all of the predicted feature points fall within the face positioning frame 5, and the lines connecting them form a rectangular or trapezoidal frame).
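Step S340 can be sketched as a constrained random draw: points are sampled inside the face positioning frame and redrawn until they satisfy the basic positioning rules. The specific rules encoded below (upper points above lower points, points forming a quadrilateral) are illustrative stand-ins for the rules learned from data.

```python
import random

def random_initial_estimate(face_box, rng, max_tries=1000):
    """Randomly draw four predicted feature points inside the face
    positioning box, rejecting draws that violate the basic positioning
    rules (here: upper points must lie strictly above lower points).
    Illustrative sketch; the patent's rules are learned from training data."""
    left, top, right, bottom = face_box
    for _ in range(max_tries):
        xs = sorted(rng.uniform(left, right) for _ in range(2))
        y_upper = rng.uniform(top, bottom)
        y_lower = rng.uniform(top, bottom)
        if y_upper >= y_lower:
            continue  # rule violated: redraw
        # Order: upper-left, lower-left, upper-right, lower-right,
        # so the connecting lines form a rectangle or trapezoid.
        return [(xs[0], y_upper), (xs[0], y_lower),
                (xs[1], y_upper), (xs[1], y_lower)]
    raise RuntimeError("could not satisfy positioning rules")

rng = random.Random(7)  # fixed seed for reproducibility
pts = random_initial_estimate((0, 0, 100, 100), rng)
```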

It is worth mentioning that the number of predicted feature points corresponds to the number of muscle feature points 4 marked on each piece of training data 152. In other words, the number of predicted feature points randomly generated by the processor 10 in step S340 equals the number of muscle feature points 4 marked on each piece of training data 152 used to train the training model 153 (i.e., it is plural), and this number is not limited to four.

Next, the processor 10 feeds the image 2 to be recognized and the plurality of predicted feature points into one of the regression trees of the training model 153 (step S342), and obtains a plurality of analysis results from the regression tree (step S344). After step S344, the processor 10 adjusts the plurality of predicted feature points according to the obtained analysis results and generates a plurality of adjusted feature points (step S346).

In the present invention, the analysis results mainly record the weights obtained by comparing the combination of the image 2 to be recognized and the plurality of predicted feature points against at least a part of the plurality of training data 152. More specifically, a weight indicates how similar the correspondence between one or more strong reference points in the image 2 to be recognized and each predicted feature point is to the correspondence between one or more strong reference points in the training data 152 and each muscle feature point 4. In other words, the analysis results indicate the similarity between the combination of the image 2 to be recognized with the current predicted feature points and the combination of each piece of training data 152 (or each summarized/classified data group) with its muscle feature points 4.

In one embodiment, the higher the similarity, the larger the weight, although this is not a limitation. Also, in step S346 described above, the processor 10 adjusts the coordinate values of each predicted feature point on the image 2 to be recognized according to the different weights (i.e., the plurality of analysis results), thereby generating the plurality of adjusted feature points. The number of adjusted feature points equals the number of predicted feature points (for example, four).

As mentioned above, the training model 153 contains a plurality of regression trees, and the number of regression trees corresponds to the regression count preset by the user. After step S346, the processor 10 determines whether all of the regression trees in the training model 153 have been executed (step S348); that is, it determines whether the number of executions of steps S342, S344, and S346 equals the regression count.

In the present invention, before all of the regression trees have been executed, the processor 10 replaces the aforementioned predicted feature points with the adjusted feature points generated in step S346, and executes steps S342, S344, and S346 again on the next regression tree based on the image 2 to be recognized and the adjusted feature points, so as to continuously adjust the plurality of feature points. When all of the regression trees have been executed, the processor 10 takes the adjusted feature points generated by the last execution of step S346 as the finally determined muscle feature points 3, and outputs them to end the fuzzy comparison procedure (step S350). In the present invention, the plurality of muscle feature points 3 automatically marked on the image 2 to be recognized by the analysis device 1 are the finally determined muscle feature points 3 output by the processor 10 after step S350.
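The loop over steps S342 to S346 and the termination at step S350 can be sketched as follows; the stages here are stand-in callables that return per-point offsets, whereas the real model's regression trees derive those offsets from image-based weights.

```python
def run_cascade(image, initial_points, regressors):
    """Pass the current point estimate through each regression stage in
    turn: each stage returns per-point offsets (steps S342/S344), and the
    adjusted points replace the prediction for the next stage (step S346).
    The output of the last stage is the final set of points (step S350).
    Stand-in sketch: `regressors` are plain callables, not real trees."""
    points = list(initial_points)
    for regress in regressors:
        offsets = regress(image, points)
        points = [(x + dx, y + dy)
                  for (x, y), (dx, dy) in zip(points, offsets)]
    return points

# Two toy stages that each move every point halfway toward (50, 50),
# mimicking successive refinement toward the true landmark positions.
def toward_center(image, points):
    return [((50 - x) * 0.5, (50 - y) * 0.5) for x, y in points]

final = run_cascade(None, [(0.0, 0.0), (100.0, 100.0)],
                    [toward_center, toward_center])
```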

Please refer to FIG. 9 and FIG. 10 together; FIG. 9 shows a first specific embodiment of the training-model schematic diagram of the present invention, and FIG. 10 shows a first specific embodiment of the feature-point-adjustment schematic diagram of the present invention. For ease of understanding, the following description uses four predicted feature points as an example in conjunction with FIG. 9 and FIG. 10 (i.e., the number of muscle feature points 4 marked on each piece of training data 152 is four).

As shown in the figures, before performing the fuzzy comparison, the processor 10 randomly generates four predicted feature points 80 on the image 2 to be recognized according to the basic positioning rules and the recorded probabilities. It should be noted that these four predicted feature points 80 merely conform to the basic positioning rules summarized and the probabilities tallied by the artificial intelligence training algorithm 151 during the training procedure; they cannot represent the actual muscle state of the face image in the image 2 to be recognized.

Next, the processor 10 feeds the image 2 to be recognized and the four predicted feature points 80 into the first regression tree 1531 of the plurality of regression trees of the training model 153. The first regression tree 1531 has multiple judgment endpoints 1534, each representing a rule, and these rules were obtained by the artificial intelligence training algorithm 151 from the plurality of training data 152 when training the training model 153.

In the present invention, the content of at least some of the judgment endpoints 1534 of the first regression tree 1531 corresponds to the muscle feature point positioning rules generated by the artificial intelligence training algorithm 151 in step S18 of FIG. 4. In other words, at each judgment endpoint 1534 the processor 10 determines whether the correspondence among the four predicted feature points 80 on the image 2 to be recognized (for example, the size and shape of the rectangular or trapezoidal frame they form) is similar to the rule indicated by that judgment endpoint 1534, or whether the correspondence between each predicted feature point 80 and one or more strong reference points within the face positioning frame 5 of the image 2 to be recognized is similar to the rule indicated by that judgment endpoint 1534; that is, it makes a YES or NO judgment.

It is worth mentioning that, when judging yes or no at each judgment endpoint 1534, the processor 10 mainly determines the probability of being similar to the content of that judgment endpoint 1534 (i.e., yes) and the probability of being dissimilar (i.e., no); a result of 100% similar (i.e., 0% dissimilar) or 100% dissimilar (i.e., 0% similar) does not occur. After every judgment endpoint 1534 in the regression tree 1531 has been evaluated, the processor 10 obtains multiple analysis results, each corresponding to a weight 1535.
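The never-exactly-0%-or-100% behavior at a judgment endpoint can be mimicked by clamping, as in this sketch; the similarity score in [0, 1] and the epsilon value are assumptions for illustration, not the patented formulation.

```python
def soft_split(similarity, eps=1e-3):
    """Turn a similarity score into the pair of probabilities assigned at
    a judgment endpoint: P(yes) and P(no). Clamping keeps both strictly
    inside (0, 1), matching the text's note that a result of exactly
    100% similar or 100% dissimilar does not occur."""
    p_yes = min(max(similarity, eps), 1.0 - eps)
    return p_yes, 1.0 - p_yes
```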

In the present invention, the weight 1535 indicates either the similarity between the correspondence of one or more strong reference points on the face image in the image 2 to be recognized with each predicted feature point 80 and the correspondence of one or more strong reference points in one or more pieces of training data 152 (or a group of similar training data 152) with each muscle feature point 4, or the similarity between the correspondence among the predicted feature points 80 on the face image in the image 2 to be recognized and the correspondence among the muscle feature points 4 in one or more pieces of training data 152 (or in the group).

For example, if the face image in the image 2 to be recognized is very similar to a first type of training data 152 (for example, both indicate a face shape with a large nose), the weight 1535 will be relatively high; if the face image in the image 2 to be recognized is not similar to a second type of training data 152 (for example, training data indicating a face shape with small eyes), the weight 1535 will be relatively low.

After all of the judgment endpoints 1534 in the first regression tree 1531 have been evaluated and multiple weights 1535 obtained, the processor 10 adjusts the coordinate values of each predicted feature point 80 on the image 2 to be recognized according to these weights, and generates first adjusted predicted points 81. For example, if the face image in the image 2 to be recognized is similar to the first type of training data 152 (for example, both indicate a face shape with a large nose), the coordinate values of the first adjusted predicted points 81 will move toward the positions of the muscle feature points 4 on the first type of training data 152, and the amount of movement will correspond to the obtained weight 1535 (the higher the weight 1535, the larger the adjustment).
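One plausible realization of the weight-proportional movement (an assumption for illustration, not the claimed implementation) is a weighted average of displacements toward the landmark positions of each matched training group:

```python
def weighted_adjust(point, group_targets, weights):
    """Move a predicted point toward the landmark positions of the matched
    training groups, each displacement scaled by that group's similarity
    weight and normalized by the total weight. A higher weight produces a
    larger pull toward that group's landmark, as described in the text.
    Hypothetical realization, for illustration only."""
    total = sum(weights)
    dx = sum(w * (tx - point[0])
             for (tx, _), w in zip(group_targets, weights)) / total
    dy = sum(w * (ty - point[1])
             for (_, ty), w in zip(group_targets, weights)) / total
    return (point[0] + dx, point[1] + dy)

# A high-weight "big nose" group at (60, 40) pulls the point much harder
# than a low-weight group at (20, 80).
adjusted = weighted_adjust((40.0, 60.0), [(60.0, 40.0), (20.0, 80.0)], [0.9, 0.1])
```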

As shown in FIG. 10, compared with the predicted feature points 80, the first adjusted predicted points 81 lie closer to the actual positions of the muscle feature points 3 on the face image in the image 2 to be recognized. The number of first adjusted predicted points 81 equals the number of the aforementioned predicted feature points 80.

Next, as shown in FIG. 9, after obtaining the four first adjusted predicted points 81, the processor 10 feeds the image 2 to be recognized and the four first adjusted predicted points 81 into the second regression tree 1532 of the training model 153. In the present invention, the second regression tree 1532 has the same number of judgment endpoints 1534 with the same content as the first regression tree 1531, but because the input parameters have changed (from the four predicted feature points 80 to the four first adjusted predicted points 81), the resulting analysis results differ (i.e., the obtained weights 1535 differ).

Similarly, after all of the judgment endpoints 1534 in the second regression tree 1532 have been evaluated and multiple weights 1535 obtained, the processor 10 adjusts the coordinate values of each first adjusted predicted point 81 on the image 2 to be recognized according to the obtained weights 1535, and generates second adjusted predicted points 82.

In the present invention, the processor 10 continues to perform the above actions. After the (n-1)-th regression tree (not shown) has been executed, the processor 10 generates four (n-1)-th adjusted predicted points 83. Next, the processor 10 feeds the four (n-1)-th adjusted predicted points 83 into the last regression tree of the training model 153 (i.e., the n-th regression tree 1533). After the n-th regression tree 1533 has been executed, the processor 10 again obtains multiple analysis results (i.e., multiple weights 1535); the processor 10 then adjusts the four (n-1)-th adjusted predicted points 83 according to the multiple weights 1535 and generates four final predicted points 84. In the present invention, the processor 10 outputs these four final predicted points 84 as the four muscle feature points 3 produced by the automatic analysis of the analysis device 1, and automatically marks the four muscle feature points 3 on the image 2 to be recognized.

It is worth mentioning that the number of regression trees depends on the hardware performance of the analysis device 1, the recognition accuracy required by the user, and the processing time acceptable to the user. Generally speaking, the greater the number of regression trees, the closer the positions of the final predicted points 84 will be to the positions where the actual muscle feature points 3 on the face image in the image 2 to be recognized should lie.

Through the marking method of the present invention, the analysis device 1 can automatically mark four muscle feature points on the user's face image based on artificial intelligence comparison, thereby allowing the user to clearly and quickly understand the current state of his or her facial muscles and to confirm whether the skin-care measures taken are effective, which is convenient and practical for the user.

The above are merely preferred specific embodiments of the present invention and do not thereby limit the patent scope of the present invention; accordingly, all equivalent variations made using the content of the present invention are likewise included within the scope of the present invention.

S30~S38: identification steps

Claims (9)

1. A method for automatically labelling muscle feature points on a face, applied to a facial image analysis apparatus having a processor, a display unit, and a storage unit, wherein the storage unit stores a pre-trained training model, the method comprising: a) obtaining an image to be recognized of a user; b) performing, by the processor, a face recognition procedure on the image to be recognized to generate a face positioning frame, wherein the face positioning frame marks a face in the image to be recognized and covers multiple strong reference points on the face; c) performing, by the processor, a fuzzy comparison on the image to be recognized based on the training model to generate a recognition result; d) labelling multiple muscle feature points on the image to be recognized according to the recognition result, wherein the muscle feature points respectively fall within the face positioning frame and correspond to multiple weak reference points on the face, the muscle feature points comprising a first muscle feature point and a second muscle feature point located on the left side of the face and a third muscle feature point and a fourth muscle feature point located on the right side of the face, the lines connecting the four muscle feature points forming a rectangular frame or a trapezoidal frame; and e) displaying the image to be recognized and the muscle feature points overlapped on the display unit, wherein the first muscle feature point falls within an area bounded by a first tear-trough tangent, a first nasolabial-fold tangent, a first eye-corner vertical line, and a first mandibular-ramus tangent on the left side of the face, and the third muscle feature point falls within an area bounded by a second tear-trough tangent, a second nasolabial-fold tangent, a second eye-corner vertical line, and a second mandibular-ramus tangent on the right side of the face.

2. The method for automatically labelling muscle feature points on a face according to claim 1, wherein the second muscle feature point falls within an area bounded by the first nasolabial-fold tangent, the first eye-corner vertical line, the first mandibular-ramus tangent, and a first marionette-line tangent on the left side of the face, and the fourth muscle feature point falls within an area bounded by the second nasolabial-fold tangent, the second eye-corner vertical line, the second mandibular-ramus tangent, and a second marionette-line tangent on the right side of the face.

3. The method for automatically labelling muscle feature points on a face according to claim 1, wherein step a) obtains the image to be recognized by capturing it in real time through an image capturing unit of the facial image analysis apparatus, by reading the pre-stored image from the storage unit, or by receiving the image from an external source through a wireless transmission unit of the facial image analysis apparatus, and wherein the image to be recognized contains at least a facial image of the user.
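The overall flow of steps a) through e) in claim 1 can be sketched as a small pipeline. This is only an illustrative skeleton: the three callables stand in for the face-recognition procedure, the trained-model comparison, and the display unit, none of which are specified beyond the claims, and all names here are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

Point = Tuple[float, float]

@dataclass
class LabelResult:
    face_box: Tuple[int, int, int, int]   # face positioning frame (x, y, w, h)
    muscle_points: List[Point]            # labelled weak-reference feature points

def label_muscle_points(image,
                        detect_face: Callable,
                        fuzzy_compare: Callable,
                        display: Callable) -> LabelResult:
    """Skeleton of steps a)-e) from claim 1 (illustrative, not the patent's code)."""
    face_box = detect_face(image)             # b) generate the face positioning frame
    points = fuzzy_compare(image, face_box)   # c)+d) model comparison -> feature points
    display(image, points)                    # e) overlay points on the display unit
    return LabelResult(face_box, points)
```

Injecting the three stages as callables keeps the sketch testable with stubs, which is also how one would unit-test such a pipeline before wiring in a real detector and model.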
4. The method for automatically labelling muscle feature points on a face according to claim 1, wherein step b) performs the face recognition procedure on the image to be recognized and generates the face positioning frame by a Histogram of Oriented Gradients (HOG) algorithm of the Dlib Face Landmark system.

5. The method for automatically labelling muscle feature points on a face according to claim 1, further comprising, before step a): a01) labelling the multiple muscle feature points on a plurality of training data, wherein the training data are facial images; a02) performing the face recognition procedure on each of the training data by a Histogram of Oriented Gradients (HOG) algorithm of the Dlib Face Landmark system, so as to generate the face positioning frame on each of the training data; a03) executing an artificial intelligence training algorithm on the training data, so as to analyze and record the correspondences among the muscle feature points in each of the training data, and the correspondences between each muscle feature point and one or more strong reference points within the face positioning frame; a04) generating multiple muscle feature point positioning rules according to the analyzed correspondences; and a05) establishing the training model at least according to the muscle feature point positioning rules, a decision depth, and a regression count.
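Claims 4 and 5 rely on HOG features for face detection. The toy sketch below computes per-cell gradient-orientation histograms, the core of the HOG descriptor that detectors such as Dlib's frontal face detector are built on; a real implementation adds block normalization and a sliding-window classifier, so this is a minimal illustration, not the patent's or Dlib's actual code.

```python
import numpy as np

def hog_descriptor(patch: np.ndarray, cell: int = 8, bins: int = 9) -> np.ndarray:
    """Simplified HOG: gradient-orientation histograms over square cells."""
    patch = patch.astype(float)
    gy, gx = np.gradient(patch)                      # image gradients
    mag = np.hypot(gx, gy)                           # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0     # unsigned orientation [0, 180)
    h, w = patch.shape
    hist = np.zeros((h // cell, w // cell, bins))
    for i in range(h // cell):
        for j in range(w // cell):
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel()
            a = ang[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel()
            # vote each pixel's magnitude into its orientation bin
            idx = np.minimum((a / (180.0 / bins)).astype(int), bins - 1)
            np.add.at(hist[i, j], idx, m)
    # L2-normalise each cell histogram so the descriptor is contrast-invariant
    norm = np.linalg.norm(hist, axis=-1, keepdims=True)
    return hist / np.maximum(norm, 1e-12)
```

In practice one would call Dlib's `get_frontal_face_detector()` rather than hand-rolling HOG; the sketch only shows what that detector's features look like internally.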
6. The method for automatically labelling muscle feature points on a face according to claim 5, wherein the training model is a regressor comprising multiple cascaded regression trees with identical content, each regression tree having multiple decision nodes, at least a part of the decision nodes corresponding to the muscle feature point positioning rules, wherein the number of regression trees equals the regression count and the number of decision nodes equals the decision depth.

7. The method for automatically labelling muscle feature points on a face according to claim 6, wherein step c) comprises: c01) randomly generating, on the image to be recognized, multiple predicted feature points corresponding to the muscle feature points according to a basic positioning rule and a probability indicated by the training model; c02) feeding the image to be recognized and the predicted feature points into one of the regression trees of the training model; c03) obtaining multiple analysis results from the regression tree; c04) adjusting the predicted feature points according to the analysis results to generate multiple adjusted predicted feature points; c05) before all of the regression trees have been executed, repeating steps c02) through c04) with the adjusted predicted feature points; and c06) after all of the regression trees have been executed, taking the adjusted predicted feature points as the muscle feature points of the image to be recognized.

8. The method for automatically labelling muscle feature points on a face according to claim 7, wherein each of the analysis results records a weight comparing the image to be recognized and the predicted feature points with at least a part of the training data, the weight being a similarity between the correspondence of one or more strong reference points in the image to be recognized to each predicted feature point and the correspondence of one or more strong reference points in the training data to each muscle feature point, and wherein step c04) adjusts a coordinate value of each predicted feature point on the image to be recognized according to the weights.
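The iterative refinement of claims 7 and 8 can be sketched as a cascade loop: start from an initial guess, feed the current points to each stage, and apply the returned adjustment. This is a minimal sketch in which each "regressor" is just a callable returning point offsets; a real cascade of regression trees derives the offsets from pixel comparisons against the training data, which the claims describe via weights.

```python
import numpy as np
from typing import Callable, Sequence

def cascade_fit(initial_points: np.ndarray,
                regressors: Sequence[Callable],
                image_features) -> np.ndarray:
    """Run predicted feature points through a cascade of regression stages.

    Mirrors steps c01)-c06) of claim 7 at the level of control flow only:
    each stage (regression tree) produces offsets that adjust the current
    predictions, and the output of the last stage is taken as the final
    muscle feature points.
    """
    points = np.asarray(initial_points, dtype=float)  # c01) initial prediction
    for stage in regressors:                          # c02)-c05) one pass per tree
        offsets = stage(points, image_features)       # c03) per-tree analysis result
        points = points + offsets                     # c04) adjusted predictions
    return points                                     # c06) final feature points
```

A toy stage that moves each point halfway toward a fixed target already shows the characteristic geometric convergence of such cascades.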
9. The method for automatically labelling muscle feature points on a face according to claim 1, wherein the processor identifies, within the face positioning frame, multiple region-forming auxiliary lines on the face, the auxiliary lines comprising a first tear-trough tangent, a first nasolabial-fold tangent, a first eye-corner vertical line, a first mandibular-ramus tangent, and a first marionette-line tangent on the left side of the face, and a second tear-trough tangent, a second nasolabial-fold tangent, a second eye-corner vertical line, a second mandibular-ramus tangent, and a second marionette-line tangent on the right side of the face, wherein the processor determines whether the muscle feature points respectively fall within the areas formed by the auxiliary lines, and determines that the recognition result is erroneous when any muscle feature point does not fall within its corresponding area.
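The validity check of claim 9 amounts to testing each labelled point against a region bounded by the auxiliary lines. A minimal sketch, assuming each boundary line is encoded as a half-plane `a*x + b*y + c >= 0` with the sign chosen so the region interior is the non-negative side; that encoding is an illustrative assumption, not the patent's representation of the tangent lines.

```python
from typing import Iterable, Sequence, Tuple

Line = Tuple[float, float, float]   # (a, b, c) for a*x + b*y + c >= 0
Point = Tuple[float, float]

def point_in_region(point: Point, boundary_lines: Iterable[Line]) -> bool:
    """True when the point is on the interior side of every boundary line."""
    x, y = point
    return all(a * x + b * y + c >= 0 for a, b, c in boundary_lines)

def recognition_valid(points: Sequence[Point],
                      regions: Sequence[Sequence[Line]]) -> bool:
    """Claim-9 style sanity check: every muscle feature point must fall
    inside its corresponding auxiliary-line region, otherwise the
    recognition result is judged erroneous."""
    return all(point_in_region(p, r) for p, r in zip(points, regions))
```

For example, the unit square is the intersection of the four half-planes `x >= 0`, `x <= 1`, `y >= 0`, `y <= 1`, and a point outside any one of them fails the check.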
TW108144950A 2019-12-09 2019-12-09 Method for automatically labelling muscle feature points on face TWI722705B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW108144950A TWI722705B (en) 2019-12-09 2019-12-09 Method for automatically labelling muscle feature points on face

Publications (2)

Publication Number Publication Date
TWI722705B true TWI722705B (en) 2021-03-21
TW202123074A TW202123074A (en) 2021-06-16

Family

ID=76036085

Family Applications (1)

Application Number Title Priority Date Filing Date
TW108144950A TWI722705B (en) 2019-12-09 2019-12-09 Method for automatically labelling muscle feature points on face

Country Status (1)

Country Link
TW (1) TWI722705B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWM488060U (en) * 2014-06-13 2014-10-11 Univ China Sci & Tech Computer input device for muscular dystrophy patient
CN104156970A (en) * 2014-08-21 2014-11-19 云南师范大学 Human body jugomaxillary muscle activity simulation method based on nuclear magnetic resonance image processing
CN109816601A (en) * 2018-12-26 2019-05-28 维沃移动通信有限公司 A kind of image processing method and terminal device

Also Published As

Publication number Publication date
TW202123074A (en) 2021-06-16

Similar Documents

Publication Publication Date Title
KR102014385B1 (en) Method and apparatus for learning surgical image and recognizing surgical action based on learning
JP6947759B2 (en) Systems and methods for automatically detecting, locating, and semantic segmenting anatomical objects
KR101870689B1 (en) Method for providing information on scalp diagnosis based on image
CN105426827B (en) Living body verification method, device and system
TW201814572A (en) Facial recognition-based authentication
CN108875452A (en) Face identification method, device, system and computer-readable medium
CN108875485A (en) A kind of base map input method, apparatus and system
CN105518708A (en) Method and equipment for verifying living human face, and computer program product
US20160092726A1 (en) Using gestures to train hand detection in ego-centric video
CN111382648A (en) Method, device and equipment for detecting dynamic facial expression and storage medium
CN111937082A (en) Guidance method and system for remote dental imaging
JP2019048026A (en) Biological information analysis device and hand skin analysis method
CN112633221A (en) Face direction detection method and related device
CN111639582A (en) Living body detection method and apparatus
CN108880815A (en) Auth method, device and system
CN110321778A (en) A kind of face image processing process, device and storage medium
TWI722705B (en) Method for automatically labelling muscle feature points on face
CN110545386B (en) Method and apparatus for photographing image
CN110766631A (en) Face image modification method and device, electronic equipment and computer readable medium
KR101734212B1 (en) Facial expression training system
CN113327212B (en) Face driving method, face driving model training device, electronic equipment and storage medium
CN111767829B (en) Living body detection method, device, system and storage medium
TW202122040A (en) Method for analyzing and estimating face muscle status
EP3836008A1 (en) Method for automatically marking muscle feature points on face
US11872050B1 (en) Image integrity and repeatability system

Legal Events

Date Code Title Description
MM4A Annulment or lapse of patent due to non-payment of fees