TW202016881A - Program, information processing device, quantification method, and information processing system - Google Patents
- Publication number: TW202016881A
- Application number: TW108126136A
- Authority: TW (Taiwan)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
Description
The present invention relates to a program, an information processing device, a quantification method, and an information processing system.
It is well known that the effect of makeup varies depending on the cosmetics and makeup techniques used. Historically, most makeup users have had few opportunities to see other people's makeup techniques. In recent years, however, many videos recording the process of applying makeup (makeup videos) have been shared on video-sharing sites and other websites. Through such makeup videos, users have increasing opportunities to study makeup techniques (see, for example, Non-Patent Document 1).

<Prior Art Documents>
<Non-Patent Documents>
Non-Patent Document 1: "Video Lessons" [online], Shiseido Co., Ltd. [retrieved April 20, 2018], Internet (URL: https://www.shiseido.co.jp/beauty/dictionary/lesson/index.html)
<Problems to Be Solved by the Invention>
By watching such makeup videos, a user can see the cosmetics used and the makeup motions performed by the person shown in the video. However, merely watching the makeup motions of the person shown in a makeup video makes it difficult for the user to imitate those motions accurately. If the makeup motions could be quantified from the makeup video, the makeup motions of the imitating user could easily be compared with those of the person shown in the video, which would be useful.
An object of one embodiment of the present invention is to provide a program capable of quantifying the makeup motions of a person shown in video data.

<Means for Solving the Problems>
To achieve the above object, a program according to one embodiment of the present invention causes a computer to function so as to analyze video data and output values that quantify the makeup motions of a person shown in the video data. The program causes the computer to function as: a first detection unit that detects, from the video data, a face region showing the person's face; a second detection unit that detects, from the video data, a hand region showing the person's hand; and an output unit that, based on the detected face region and hand region, outputs values quantifying the makeup motions of the person shown in the video data according to changes in the face region and the hand region.

<Effects of the Invention>
According to one embodiment of the present invention, the makeup motions of a person shown in video data can be quantified.
Embodiments of the present invention are described in detail below.
[First Embodiment]
<System Configuration>
FIGS. 1A and 1B are configuration diagrams of examples of the information processing system of this embodiment. The information processing system of FIG. 1A comprises a single, stand-alone information processing device 1. The information processing device 1 is, for example, a PC, smartphone, or tablet operated by a user, or a dedicated home or commercial machine for quantifying makeup motions.
In the information processing system of FIG. 1B, one or more client terminals 2 are connected to a server device 3 via a network 4 such as the Internet. A client terminal 2 is a terminal device such as a PC, smartphone, or tablet operated by a user, or a dedicated home or commercial machine for quantifying makeup motions. The server device 3 performs processing related to the quantification of makeup motions carried out on the client terminals 2.
As described above, the present invention can be applied not only to a client-server information processing system as shown in FIG. 1B but also to the single information processing device 1 as shown in FIG. 1A. Note that the information processing systems of FIGS. 1A and 1B are merely examples; various system configurations are of course possible depending on the application and purpose. For example, the server device 3 of FIG. 1B may be composed of a plurality of distributed computers.
<Hardware Configuration>
The information processing device 1, client terminals 2, and server device 3 of FIGS. 1A and 1B are each realized by, for example, a computer with the hardware configuration shown in FIG. 2. FIG. 2 is a hardware configuration diagram of an example of the computer of this embodiment.
The computer of FIG. 2 comprises an input device 501, an output device 502, an external I/F 503, RAM 504, ROM 505, a CPU 506, a communication I/F 507, an HDD 508, and the like, which are interconnected by a bus B.
The input device 501 is a keyboard, mouse, or the like for input operations. The output device 502 comprises a display, such as a liquid-crystal or organic-EL display capable of showing a screen, and a loudspeaker for outputting audio data such as voice or music. The communication I/F 507 is an interface for connecting the computer to the network 4. The HDD 508 is one example of a non-volatile storage device for storing programs and data.
The external I/F 503 is an interface for connecting to external devices. Through the external I/F 503, the computer can read from and/or write to a recording medium 503a. The recording medium 503a may be a DVD, an SD memory card, a USB memory, or the like.
The CPU 506 is an arithmetic device that reads programs and data from a storage device such as the ROM 505 or HDD 508 into the RAM 504 and processes them, thereby realizing overall control and the functions of the computer. The information processing device 1, client terminals 2, and server device 3 of this embodiment realize their various functions by executing programs on a computer with the above hardware configuration.
Note that FIG. 2 shows one example of a hardware configuration; various other configurations are of course possible depending on the application and purpose. For example, the input device 501 of the computer of FIG. 2 may further include a camera function capable of capturing video.
<Study of Makeup Motion Quantification>
One existing approach to quantifying makeup motions uses sensors: the person applying makeup wears sensors while performing the makeup motions. Such a sensor-based method can quantify a person's makeup motions from the sensor output data, but it cannot quantify the makeup motions of a person shown in a makeup video that has already been recorded.
If a recorded makeup video could be analyzed and the makeup motions of the person shown in it quantified, makeup videos from video-sharing sites and other websites could be used, no sensors would need to be worn, and natural makeup motions could be expected to be quantified. Therefore, in order to quantify the makeup motions of a person shown in a recorded makeup video, this embodiment examines which quantities to obtain from a makeup video and how to use them to quantify makeup motions.
<<Identifying the Quantities to Obtain from Makeup Videos>>
To identify the quantities to obtain from makeup videos, changes during subjects' makeup motions were measured with motion capture. Six sites were measured: the tip of the right middle finger, the base of the middle finger, the center of the back of the hand, the center of the wrist, the elbow, and the forehead. The quantities analyzed were the coordinates (displacement), velocity, acceleration, and angular velocity of each measurement site. The principal components of the subjects' movements obtained from this analysis were taken to be the main components of variation in makeup motions.
FIGS. 3A and 3B are distribution plots of examples of the measurement results. FIG. 3A shows an example of the fineness of hand movement and the hand-elbow coordination for each makeup motion. FIG. 3B shows an example of the fineness of hand movement and the facial movement for each makeup motion. As FIGS. 3A and 3B show, the principal components of the subjects' movements are the fineness of hand movement, hand-elbow coordination, and facial movement. In this embodiment, therefore, the quantities obtained from a makeup video are set to the coordinates and velocity of the hand and the coordinates and velocity of the face.
<<Quantifying Makeup Motions Using the Obtained Quantities>>
One way to obtain the coordinates and velocity of the hand and of the face during the makeup motions of a person shown in a makeup video is image recognition using a convolutional neural network (hereinafter, CNN). Since image recognition using a CNN can detect face regions and hand regions in two-dimensional images, tracking the face region and hand region detected in each frame image of the makeup video yields the coordinates and velocity of the hand and of the face during the person's makeup motions. Image recognition using a CNN in this embodiment is described in detail below.
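For illustration only, the sketch below shows one minimal way tracked per-frame detections could be converted into the coordinates and velocity of a region; the function name, the sample values, and the use of pixels per second are assumptions for the example, not part of the disclosure.

```python
import numpy as np

def centers_to_velocity(centers, fps):
    """Turn a per-frame sequence of tracked region centers (x, y), in pixels,
    into per-frame velocity vectors in pixels per second."""
    c = np.asarray(centers, dtype=float)   # shape (T, 2)
    return np.diff(c, axis=0) * fps        # finite difference between frames

# Hypothetical centers of a tracked hand region over four frames at 30 fps.
hand_centers = [(320, 240), (324, 238), (331, 235), (339, 233)]
print(centers_to_velocity(hand_centers, fps=30.0))
```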
<Software Configuration>
<<Functional Blocks>>
The software configuration of the information processing system of this embodiment is described below, using the information processing device 1 shown in FIG. 1A as an example. FIG. 4 is a functional block diagram of an example of the information processing system of this embodiment. By executing a program, the information processing device 1 realizes the functions of an operation accepting unit 10, a region detection unit 12, a quantification unit 14, a post-processing unit 16, and a video data storage unit 18.
The operation accepting unit 10 accepts various operations from the user. The video data storage unit 18 stores makeup videos; it may also be provided outside the information processing device 1. Makeup videos stored in the video data storage unit 18, as well as makeup videos captured with the camera function, are input to the region detection unit 12. For each frame image of an input makeup video, the region detection unit 12 detects the face region and hand region of the person shown in that frame image, as described below.
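As a minimal sketch of how this per-frame processing could be driven, assuming OpenCV for video decoding; `detect_face` and `detect_hands` are placeholders standing in for the learned detectors described below and are not defined here:

```python
import cv2

def detect_regions(video_path, detect_face, detect_hands):
    """Run externally supplied face/hand detectors on every frame of a
    makeup video and collect the per-frame detection results."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    results = []
    while True:
        ok, frame = cap.read()
        if not ok:                         # end of video
            break
        results.append({
            "face": detect_face(frame),    # e.g. an (x, y, w, h) rectangle
            "hands": detect_hands(frame),  # e.g. a list of rectangles
        })
    cap.release()
    return fps, results
```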
The quantification unit 14 quantifies the makeup motions of the person shown in the makeup video by obtaining, from the face regions and hand regions detected by the region detection unit 12, the coordinates and velocity of the hand and of the face during the person's makeup motions. The post-processing unit 16 post-processes the results of the region detection unit 12 and the quantification unit 14 and outputs them to the output device 502 or elsewhere.
For example, the post-processing unit 16 performs post-processing that surrounds the face region and hand region with rectangles during the makeup motions shown in the video. The post-processing unit 16 also performs post-processing that visually represents the fineness of hand movement, hand-elbow coordination, facial movement, and the like, based on the coordinates and velocity of the hand and of the face during the makeup motions of the person shown in the video.
Furthermore, the post-processing unit 16 can quantify and compare the makeup motions of the persons shown in two makeup videos and output the comparison result. For example, a user of the information processing system of this embodiment can quantify and compare their own makeup motions with those of a highly skilled person such as a makeup artist, making it easy to understand how their own technique differs. The information processing system of this embodiment can thereby provide a service for improving the user's makeup technique.
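The disclosure does not specify a scoring formula. As one hypothetical example, the quantified feature vectors of the two videos could be reduced to a score in [0, 100] using a relative-error measure:

```python
import numpy as np

def similarity_score(user_features, reference_features):
    """Score how closely the user's quantified motion features match a
    reference profile (e.g. a makeup artist's); 100 means identical."""
    u = np.asarray(user_features, dtype=float)
    r = np.asarray(reference_features, dtype=float)
    rel_err = np.abs(u - r) / (np.abs(r) + 1e-9)   # per-feature relative error
    return float(100.0 * np.exp(-rel_err.mean()))

# Hypothetical features: [hand-movement fineness, facial movement].
print(similarity_score([0.8, 0.2], [1.0, 0.25]))   # -> a score below 100
```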
The region detection unit 12 of FIG. 4, which detects the face region and hand region of the person shown in each frame image of a makeup video, has, for example, the configuration shown in FIG. 5. FIG. 5 is a configuration diagram of an example of the region detection unit. The region detection unit 12 of FIG. 5 comprises a framing unit 20, a face region detection unit 22, a hand region detection unit 24, and a facial feature point detection unit 26.
The framing unit 20 supplies the input makeup video, frame image by frame image, to the face region detection unit 22 and the hand region detection unit 24. The face region detection unit 22 has a face region learning model that includes a facial organ region learning model.
Here, the face region learning model of the face region detection unit 22 is created by machine learning that uses, as teaching data, two-dimensional images in which a hand region overlaps a face region. Such teaching data is created from two-dimensional images in which a hand appears in the foreground of a face: images from an annotated hand region learning data set are composited, with the hand as foreground, onto images from an annotated face region learning data set, producing teaching data in which the hand region overlaps the face region.
Using a face region learning model trained with a CNN whose learning data set includes teaching data in which part of the face region is occluded by a hand region (an occluded environment), the face region detection unit 22 achieves face region detection that is robust to overlap between the face region and the hand region.
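A minimal sketch of how such occlusion teaching data could be synthesized, assuming each hand crop comes with a binary foreground mask and three-channel images; the compositing rule is an assumption for the example:

```python
import numpy as np

def composite_hand_on_face(face_img, hand_img, hand_mask, top_left):
    """Paste a hand crop (with a binary foreground mask) onto a face image so
    the hand occludes part of the face, yielding one synthetic training image;
    the hand's bounding box and the face annotations carry over as labels."""
    out = face_img.copy()
    y, x = top_left
    h, w = hand_img.shape[:2]
    m = hand_mask[..., None].astype(bool)  # (h, w, 1) foreground mask
    out[y:y + h, x:x + w] = np.where(m, hand_img, out[y:y + h, x:x + w])
    return out
```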
The hand region detection unit 24 has a hand region learning model that includes a fingertip position region learning model. The hand region learning model of the hand region detection unit 24 is created using two-dimensional images of hands during makeup application as teaching data.
Here, the teaching data of hands during makeup application is created from an annotated hand region learning data set specialized for the hand shapes that occur during makeup application and an annotated fingertip position region learning data set specialized for fingertip positions during makeup application. Using a hand region learning model trained with a CNN whose learning data set includes this teaching data, the hand region detection unit 24 achieves highly accurate detection of the widely varying hand shapes and fingertip positions that occur during makeup motions.
The facial feature point detection unit 26 has a facial feature point learning model that includes a facial organ feature point learning model. The facial feature point detection unit 26 detects the facial feature points of the whole face using the facial feature point learning model, then detects the facial feature points of each facial organ, part by part, using the facial organ feature point learning model. By detecting the facial feature points of the eyes among the facial organs and correcting the positions of the facial feature points of the other parts (including the contour) based on the positions of the eye feature points, the facial feature point detection unit 26 achieves highly accurate facial feature point detection even at low resolution or in occluded environments.
<Processing>
<<Face Region Detection and Facial Feature Point Detection>>
The region detection unit 12 detects the face region of the person shown in a frame image, for example, as shown in FIG. 6. FIG. 6 is a conceptual diagram of an example of the process of detecting the face region of a person shown in a frame image.
Using the face region learning model described above, the face region detection unit 22 of the region detection unit 12 detects the face region showing the person's face in a frame image 1000 as a rectangle 1002. From the region of the rectangle 1002, the face region detection unit 22 detects facial organ regions using the facial organ region learning model and corrects the aspect ratio of the detected rectangle 1002, centered on the nose, into the form of a rectangle 1004.
Using the facial feature point learning model described above, the facial feature point detection unit 26 detects facial feature points from the region of the rectangle 1004, as shown in a rectangular region image 1006. The facial feature point detection unit 26 then uses the facial organ feature point learning model to detect, part by part, the feature points of the facial organs from the rectangular region image 1006, as shown in a rectangular region image 1008.
Here, by performing the processing shown in the flowchart of FIG. 7, the region detection unit 12 of FIG. 5 achieves highly accurate facial feature point detection even at low resolution and in occluded environments. FIG. 7 is a flowchart of an example of the facial feature point detection process.
In step S11, the facial feature point detection unit 26 of the region detection unit 12 detects facial feature points from the face region (whole face) detected by the face region detection unit 22, using the facial feature point learning model, and estimates the head pose.
Proceeding to step S12, the facial feature point detection unit 26, taking into account the head pose estimated in step S11, detects the eyes using the facial organ feature point learning model and corrects the estimated eye positions, improving the accuracy of the eye position estimates.
Proceeding to step S13, the facial feature point detection unit 26, taking into account the eye positions corrected in step S12, detects the feature points of the facial organs other than the eyes (including the contour) using the facial organ feature point learning model and corrects their estimated positions. The processing of the flowchart of FIG. 7 is effective, for example, when the facial contour is occluded by a hand.
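The disclosure does not give the correction formula. One minimal interpretation of steps S12 and S13 is to anchor on the refined eye positions and propagate their average correction to the remaining feature points:

```python
import numpy as np

def correct_landmarks(coarse_pts, eye_idx, refined_eyes):
    """Shift all landmarks (contour included) by the mean correction observed
    at the eyes, which are usually the most reliably detected organs."""
    pts = np.asarray(coarse_pts, dtype=float)        # (N, 2) whole-face estimate
    refined = np.asarray(refined_eyes, dtype=float)  # refined eye positions
    offset = (refined - pts[eye_idx]).mean(axis=0)   # average eye correction
    corrected = pts + offset                         # propagate to all points
    corrected[eye_idx] = refined                     # keep exact eye positions
    return corrected
```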
<<Hand Region Detection>>
The region detection unit 12 detects the hand region of the person shown in a frame image, for example, as shown in FIG. 8. FIG. 8 is a conceptual diagram of an example of the process of detecting the hand region of a person shown in a frame image.
Using the hand region learning model described above, the hand region detection unit 24 of the region detection unit 12 detects the hand regions showing the left and right hands of the person in a frame image 1100 as rectangles 1102. In addition, using the fingertip position region learning model, the hand region detection unit 24 detects, from the regions of the rectangles 1102, the left and right hand regions 1112 and 1114 and the left and right fingertip positions 1116 and 1118 of the person in a frame image 1110.
<<Output>>
The information processing device 1 of this embodiment can, for example, calculate and output the fineness of hand movement, hand-elbow coordination, and facial movement based on the coordinates and velocity of the hand and of the face during the makeup motions of the person shown in a makeup video. These outputs can be used for research on makeup motions and other purposes.
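The exact formulas are left open by the disclosure. As assumed working definitions, the fineness of hand movement could be measured as the mean frame-to-frame change in hand velocity (the jerkiness of fine strokes) and facial movement as the mean speed of the face center; hand-elbow coordination would additionally require an elbow track and is omitted here:

```python
import numpy as np

def motion_metrics(hand_centers, face_centers, fps):
    """Compute assumed metrics from tracked center sequences (>= 3 frames):
    hand fineness as mean |delta velocity|, facial movement as mean speed."""
    hv = np.diff(np.asarray(hand_centers, dtype=float), axis=0) * fps
    fv = np.diff(np.asarray(face_centers, dtype=float), axis=0) * fps
    fineness = float(np.linalg.norm(np.diff(hv, axis=0), axis=1).mean())
    facial_movement = float(np.linalg.norm(fv, axis=1).mean())
    return {"hand_fineness": fineness, "facial_movement": facial_movement}
```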
Moreover, since the information processing device 1 of this embodiment can quantify and compare the makeup motions of the persons shown in two makeup videos, it can quantify and compare the makeup motions of a user who wants to learn makeup techniques with those of a highly skilled person such as a makeup artist. The comparison result can be scored and provided to the user, or the difference between the user's makeup motions and those of the skilled person can be presented visually. Furthermore, based on the comparison result, the information processing device 1 of this embodiment can present makeup techniques to the user so that the user's makeup motions approach those of a highly skilled person such as a makeup artist.
For example, when a cosmetic product applied over a wide area, such as blush or foundation, is used, the information processing device 1 of this embodiment can provide the user with judgment of the applied area and guidance on the application range. When a technically demanding cosmetic product such as eyeliner, eye shadow, or concealer is used, the information processing device 1 of this embodiment can judge whether the motions are correct and provide guidance. The information processing device 1 of this embodiment can also provide guidance on how to apply hair wax (hair styling products), how to apply skin care products, or how to perform massage.
In addition, makeup methods such as how to apply blush (rouge) differ between users with angular facial bone structure and users with rounder faces, so the information processing device 1 of this embodiment can also provide recommended makeup looks suited to the user's face shape and technical guidance for achieving those looks.
[Second Embodiment]
The information processing system of the second embodiment is the same as that of the first embodiment except for part of its configuration, so duplicate description is omitted as appropriate. In the configuration of the information processing system of the second embodiment, the region detection unit 12 of FIG. 9 replaces the region detection unit 12 of FIG. 5. FIG. 9 is a configuration diagram of an example of the region detection unit. The region detection unit 12 of FIG. 9 comprises a framing unit 50, a skin color region extraction unit 52, a region segmentation unit 54, a face region detection unit 56, a hand region detection unit 58, a force field calculation unit 60, and a channel calculation unit 62.
The framing unit 50 supplies the input makeup video, frame image by frame image, to the skin color region extraction unit 52. The skin color region extraction unit 52 extracts skin color regions from each frame image. The region segmentation unit 54 segments the skin color regions extracted by the skin color region extraction unit 52 into candidate blocks and labels the candidate blocks. The region segmentation unit 54 supplies the labeled candidate blocks to the face region detection unit 56 and the hand region detection unit 58.
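A minimal sketch of this stage, assuming the commonly used YCrCb skin-color thresholds and connected-component labeling in OpenCV; the disclosure does not specify the color model or thresholds:

```python
import cv2

def skin_candidate_blocks(frame_bgr, min_area=500):
    """Extract skin-colored pixels and label connected components as
    candidate blocks with bounding boxes and centroids."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))  # heuristic range
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    blocks = []
    for i in range(1, n):                  # label 0 is the background
        x, y, w, h, area = stats[i]
        if area >= min_area:               # drop speckle noise
            blocks.append({"bbox": (int(x), int(y), int(w), int(h)),
                           "centroid": tuple(centroids[i])})
    return blocks
```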
The face region detection unit 56 classifies candidate blocks as face regions (detects the face region) based on the labels of the supplied candidate blocks (the features of the segmented skin color regions). The hand region detection unit 58 then classifies candidate blocks as hand regions (detects the hand regions) based on the labels of the candidate blocks supplied by the region segmentation unit 54 and on the face region candidate blocks classified by the face region detection unit 56.
As shown in FIG. 10, the face region detection unit 56 first classifies the face region candidate blocks, and after those are excluded, the hand region detection unit 58 classifies the hand region candidate blocks. This embodiment can thereby prevent false detection of hand regions.
When the face region detection unit 56 cannot classify a face region candidate block, the force field calculation unit 60 judges that the face region and a hand region interfere with each other and performs the processing described below. The force field calculation unit 60 is supplied with the face region and hand region candidate blocks of the previous frame (t-1) and the labeled candidate blocks of the current frame (t).
As shown in FIG. 11, the force field calculation unit 60 sets a plurality of channels within the candidate block image according to a force field. The channel calculation unit 62 calculates, for each channel, the distance moved relative to the previous frame, and treats the candidate blocks of channels with large movement distances as candidate blocks of the moving hand, thereby classifying the candidate blocks into hand regions and face regions.
Here, in addition to using the magnitude of the movement distance, the force field calculation unit 60 and channel calculation unit 62 of the region detection unit 12 can further prevent false classification of hand region and face region candidate blocks by clustering similar hand region movements in advance.
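The force field computation itself is not detailed in the disclosure. The sketch below substitutes a simpler nearest-neighbour movement test for the per-channel distance calculation, purely to show the classification idea; the threshold and matching rule are assumptions:

```python
import numpy as np

def split_hand_and_face(prev_centroids, curr_centroids, thresh=15.0):
    """Match each current candidate block to its nearest block in frame t-1;
    blocks that moved more than `thresh` pixels are treated as the moving
    hand, the rest as the (comparatively still) face."""
    prev = np.asarray(prev_centroids, dtype=float)
    hand, face = [], []
    for c in np.asarray(curr_centroids, dtype=float):
        dist = np.linalg.norm(prev - c, axis=1).min()  # nearest-neighbour motion
        (hand if dist > thresh else face).append(tuple(c))
    return hand, face
```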
(Summary)
As described above, according to these embodiments, the makeup motions of a person shown in a recorded makeup video can be quantified, and makeup techniques can be provided or taught, without wearing sensors or other equipment. The present invention is not limited to the embodiments specifically disclosed above, and various modifications and changes are possible without departing from the scope of the claims. For example, although two-dimensional video data has been described as an example in these embodiments, three-dimensional video data may also be used. In that case, by applying the same data analysis as for two-dimensional video data, or by combining two-dimensional video data analysis with analysis of the three-dimensional information, the makeup motions of a person shown in three-dimensional video data can be quantified, and makeup techniques can be provided or taught.
The present invention has been described above based on embodiments, but the present invention is not limited to the above embodiments, and various modifications can be made within the scope of the claims. This application claims priority based on Japanese basic application No. 2018-199739 filed with the Japan Patent Office on October 24, 2018, the entire contents of which are incorporated by reference.
1: information processing device
2: client terminal
3: server device
4: network
10: operation accepting unit
12: region detection unit
14: quantification unit
16: post-processing unit
18: video data storage unit
20: framing unit
22: face region detection unit
24: hand region detection unit
26: facial feature point detection unit
50: framing unit
52: skin color region extraction unit
54: region segmentation unit
56: face region detection unit
58: hand region detection unit
60: force field calculation unit
62: channel calculation unit
FIG. 1A is a configuration diagram of an example of the information processing system of this embodiment.
FIG. 1B is a configuration diagram of an example of the information processing system of this embodiment.
FIG. 2 is a hardware configuration diagram of an example of the computer of this embodiment.
FIG. 3A is a distribution plot of an example of measurement results.
FIG. 3B is a distribution plot of an example of measurement results.
FIG. 4 is a functional block diagram of an example of the information processing system of this embodiment.
FIG. 5 is a configuration diagram of an example of the region detection unit.
FIG. 6 is a conceptual diagram of an example of the process of detecting the face region of a person shown in a frame image.
FIG. 7 is a flowchart of an example of the facial feature point detection process.
FIG. 8 is a conceptual diagram of an example of the process of detecting the hand region of a person shown in a frame image.
FIG. 9 is a configuration diagram of an example of the region detection unit.
FIG. 10 is a conceptual diagram of an example of the processing of the face region detection unit and the hand region detection unit.
FIG. 11 is a conceptual diagram of an example of the processing of the force field calculation unit and the channel calculation unit.
Claims (11)
Applications Claiming Priority (2)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2018-199739 | 2018-10-24 | | |
| JP2018199739 | 2018-10-24 | | |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| TW202016881A | 2020-05-01 |
Family ID: 70331505
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| TW108126136A | TW202016881A (en) | 2018-10-24 | 2019-07-24 |
Country Status (4)

| Country | Link |
|---|---|
| JP (1) | JP7408562B2 (en) |
| CN (1) | CN112912925A (en) |
| TW (1) | TW202016881A (en) |
| WO (1) | WO2020084842A1 (en) |
Families Citing this family (2)

| Publication Number | Priority Date | Publication Date | Assignee | Title |
|---|---|---|---|---|
| KR102586144B1 | 2021-09-23 | 2023-10-10 | 주식회사 딥비전 | Method and apparatus for hand movement tracking using deep learning |
| CN114407913B | 2022-01-27 | 2022-10-11 | 星河智联汽车科技有限公司 | Vehicle control method and device |
Family Cites Families (5)

| Publication Number | Priority Date | Publication Date | Assignee | Title |
|---|---|---|---|---|
| JP2002072861A | 2000-09-01 | 2002-03-12 | Ibic Service:Kk | Hairdressing art lecture system |
| JP2003216955A | 2002-01-23 | 2003-07-31 | Sharp Corp | Method and device for gesture recognition, dialogue device, and recording medium with gesture recognition program recorded thereon |
| WO2015029372A1 | 2013-08-30 | 2015-03-05 | パナソニックIpマネジメント株式会社 | Makeup assistance device, makeup assistance system, makeup assistance method, and makeup assistance program |
| TWI669103B | 2014-11-14 | 2019-08-21 | 日商新力股份有限公司 | Information processing device, information processing method and program |
| JP6670677B2 | 2016-05-20 | 2020-03-25 | 日本電信電話株式会社 | Technical support apparatus, method, program and system |
2019
- 2019-07-08: WO application PCT/JP2019/027052 filed (WO2020084842A1, active)
- 2019-07-08: CN application 201980069716.8 filed (CN112912925A, pending)
- 2019-07-08: JP application 2020552524 filed (JP7408562B2, active)
- 2019-07-24: TW application 108126136 filed (TW202016881A)
Also Published As

| Publication Number | Publication Date |
|---|---|
| WO2020084842A1 | 2020-04-30 |
| JP7408562B2 | 2024-01-05 |
| CN112912925A | 2021-06-04 |
| JPWO2020084842A1 | 2021-10-07 |