TW202016881A - Program, information processing device, quantification method, and information processing system - Google Patents

Program, information processing device, quantification method, and information processing system

Info

Publication number
TW202016881A
Authority
TW
Taiwan
Prior art keywords: area, hand, makeup, animation data, face
Prior art date: 2018-10-24
Application number: TW108126136A
Other languages: Chinese (zh)
Inventors: 遠藤瞳, 豊田成人, 森雄一郎, 青木義満
Original Assignee: 日商資生堂股份有限公司 (Shiseido Company, Limited)
Priority date: 2018-10-24 (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date: 2019-07-24
Publication date: 2020-05-01
Application filed by 日商資生堂股份有限公司 (Shiseido Company, Limited)
Publication of TW202016881A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

Provided is a program for causing a computer to function to output a value acquired by analyzing moving image data and quantifying a makeup action of a person shown in the moving image data. The program causes the computer to function as: a first detection means for detecting a face area in which a face of the person is shown from the moving image data; a second detection means for detecting a hand area in which hands of the person are shown from the moving image data; and an output means for outputting a value acquired by quantifying a makeup action of the person shown in the moving image data from movements in the face area and the hand area on the basis of the detected face area and hand area.

Description

Program, information processing device, quantification method, and information processing system

The present invention relates to a program, an information processing device, a quantification method, and an information processing system.

It is well known that the effect of makeup varies depending on the cosmetics and makeup techniques used. Historically, most makeup users have had few opportunities to see other people's makeup techniques. In recent years, however, many videos recording the application of makeup (makeup videos) have been shared on video sites and other websites. Through such makeup videos, users have increasing opportunities to study makeup techniques (see, for example, Non-Patent Document 1).
<Prior Art Documents>
<Non-Patent Literature>

Non-Patent Document 1: "Video Lessons" [online], Shiseido Company, Limited, [retrieved April 20, 2018], Internet (URL: https://www.shiseido.co.jp/beauty/dictionary/lesson/index.html)

<Problems to Be Solved by the Invention>

By watching such makeup videos, a user can see the cosmetics used and the makeup movements of the person shown in the video. However, it is difficult for a user to accurately imitate those makeup movements merely by watching them. In this respect, if makeup movements could be quantified from a makeup video, the makeup movements of the imitating user could easily be compared with those of the person shown in the video, which would be useful.

An object of one embodiment of the present invention is to provide a program capable of quantifying the makeup movements of a person shown in moving image data.
<Technical Means for Solving the Problem>

To achieve the above object, a program according to one embodiment of the present invention causes a computer to function to analyze moving image data and output a value that quantifies the makeup movements of a person shown in the moving image data. The program causes the computer to function as: a first detection means for detecting, from the moving image data, a face area in which the face of the person is shown; a second detection means for detecting, from the moving image data, a hand area in which a hand of the person is shown; and an output means for outputting, based on the detected face area and hand area, a value that quantifies the makeup movements of the person shown in the moving image data from changes in the face area and the hand area.
<Effects of the Invention>

According to one embodiment of the present invention, the makeup movements of a person shown in moving image data can be quantified.

Embodiments of the present invention will be described in detail below.

[First Embodiment]
<System Configuration>
FIGS. 1A and 1B are configuration diagrams of examples of the information processing system of this embodiment. The information processing system of FIG. 1A consists of a single information processing device 1. The information processing device 1 is, for example, a PC, smartphone, or tablet operated by a user, or a dedicated home or business machine for quantifying makeup movements.

In the information processing system of FIG. 1B, one or more client terminals 2 are connected to a server device 3 via a network 4 such as the Internet. The client terminal 2 is a terminal device such as a PC, smartphone, or tablet operated by a user, or a dedicated home or business machine for quantifying makeup movements. The server device 3 performs processing related to the quantification of makeup movements carried out on the client terminal 2.

As described above, the present invention can be applied not only to a client-server information processing system as shown in FIG. 1B but also to the standalone information processing device 1 shown in FIG. 1A. The information processing systems of FIGS. 1A and 1B are merely examples; various other system configurations are of course possible depending on the use and purpose. For example, the server device 3 of FIG. 1B may be composed of a plurality of distributed computers.

<Hardware Configuration>
The information processing device 1, client terminal 2, and server device 3 of FIGS. 1A and 1B are realized by, for example, a computer with the hardware configuration shown in FIG. 2. FIG. 2 is a hardware configuration diagram of an example of the computer of this embodiment.

The computer of FIG. 2 includes an input device 501, an output device 502, an external I/F 503, a RAM 504, a ROM 505, a CPU 506, a communication I/F 507, an HDD 508, and the like, which are interconnected via a bus B.

The input device 501 is a keyboard, mouse, or the like for input operations. The output device 502 consists of a display, such as a liquid crystal or organic EL display, capable of displaying screens, a speaker for outputting audio data such as sound and music, and the like. The communication I/F 507 is an interface for connecting the computer to the network 4. The HDD 508 is an example of a nonvolatile storage device for storing programs and data.

The external I/F 503 is an interface for connecting to external devices. Through the external I/F 503, the computer can read from and/or write to a recording medium 503a. The recording medium 503a may be a DVD, an SD memory card, a USB memory, or the like.

The CPU 506 is an arithmetic device that reads programs and data from a storage device such as the ROM 505 or the HDD 508 into the RAM 504 and processes them, thereby realizing the overall control and functions of the computer. The information processing device 1, client terminal 2, and server device 3 of this embodiment realize various functions by executing programs on a computer with the above hardware configuration.

FIG. 2 shows one example of a hardware configuration; various other configurations are of course possible depending on the use and purpose. For example, in the computer of FIG. 2, the input device 501 may also include a camera function capable of capturing video.

<Study of Makeup Movement Quantification>
One method of quantifying makeup movements uses sensors: the person applying makeup wears sensors while performing the makeup movements. A sensor-based quantification method can quantify the person's makeup movements from the data output by the sensors, but it cannot quantify the makeup movements of a person shown in a makeup video that has already been recorded.

If a recorded makeup video could be analyzed and the makeup movements of the person shown in it quantified, makeup videos from video sites and similar sources could be used, no sensors would need to be worn, and natural makeup movements could be expected to be quantified. Therefore, in this embodiment, in order to quantify the makeup movements of a person shown in a recorded makeup video, we examined what quantities to obtain from the makeup video and how to quantify makeup movements using them.

<<Identifying the Quantities to Obtain from a Makeup Video>>
To identify the quantities to obtain from a makeup video, motion capture was used to measure the movements of subjects during makeup application. Six sites were measured: the tip of the right middle finger, the base of the middle finger, the center of the back of the hand, the center of the wrist, the elbow, and the forehead. The quantities analyzed were the coordinates (displacement), velocity, acceleration, and angular velocity of each measurement site. The principal components of the subjects' movements obtained from this analysis are presumed to be the main elements of variation in makeup movements.
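The analysis above is essentially a principal component analysis over per-site motion features. The following is a minimal sketch of that kind of analysis, assuming the motion-capture samples are arranged as rows of per-site displacement, velocity, acceleration, and angular velocity features; the file name, feature layout, and use of scikit-learn are illustrative assumptions, not the patent's actual pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical layout: one row per time step, columns are the displacement,
# velocity, acceleration, and angular velocity of the six measured sites
# (fingertip, finger base, back of hand, wrist, elbow, forehead).
features = np.load("mocap_features.npy")  # shape: (n_samples, n_features)

# Standardize so quantities with large numeric ranges do not dominate.
scaled = StandardScaler().fit_transform(features)

pca = PCA(n_components=3)
components = pca.fit_transform(scaled)

# The leading components are candidates for the "main elements of variation"
# in the makeup movement (e.g., fineness of hand motion, hand-elbow linkage,
# facial movement).
print(pca.explained_variance_ratio_)
```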

FIGS. 3A and 3B are distribution diagrams of examples of the measurement results. FIG. 3A is a distribution diagram showing an example of the fineness of the hand motion and the hand-elbow linkage for each makeup movement. FIG. 3B is a distribution diagram showing an example of the fineness of the hand motion and the facial movement for each makeup movement. As shown in FIGS. 3A and 3B, the principal components of the subjects' movements are the fineness of the hand motion, the hand-elbow linkage, and the facial movement. Accordingly, in this embodiment, the quantities to obtain from a makeup video are set to the coordinates and velocity of the hand and the coordinates and velocity of the face.

<<Quantifying Makeup Movements Using the Obtained Quantities>>
One method of obtaining the coordinates and velocity of the hand and of the face during the makeup movements of a person shown in a makeup video is image recognition using a convolutional neural network (hereinafter, CNN). Since image recognition with a CNN can detect face areas and hand areas in two-dimensional images, the coordinates and velocity of the hand and of the face during the person's makeup movements can be obtained by tracking the face and hand areas detected in the frame images of the makeup video. Image recognition using a CNN in this embodiment is described in detail below.
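As a minimal sketch of the tracking step just described, the code below derives hand coordinates and velocities from per-frame detections by following the center of the detected bounding box; the detector interface (a `detect_hand` function returning a box) is a hypothetical stand-in for the CNN detectors described later, and the pixels-per-second units assume a known frame rate.

```python
import numpy as np

def box_center(box):
    """Center (x, y) of a bounding box given as (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = box
    return np.array([(x0 + x1) / 2.0, (y0 + y1) / 2.0])

def track_hand(frames, detect_hand, fps):
    """Hypothetical tracker: detect_hand(frame) -> (x0, y0, x1, y1).

    Returns per-frame hand coordinates and velocities (pixels/second).
    """
    coords = np.array([box_center(detect_hand(f)) for f in frames])
    # Velocity via frame-to-frame differencing of the tracked center.
    velocities = np.diff(coords, axis=0) * fps
    return coords, velocities
```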

<Software Configuration>
<<Functional Blocks>>
The software configuration of the information processing system of this embodiment is described below, taking the information processing device 1 of FIG. 1A as an example. FIG. 4 is a functional block diagram of an example of the information processing system of this embodiment. By executing a program, the information processing device 1 realizes the functions of an operation acceptance unit 10, an area detection unit 12, a quantification unit 14, a post-processing unit 16, and a moving image data storage unit 18.

The operation acceptance unit 10 accepts various operations from the user. The moving image data storage unit 18 stores makeup videos; it may also be provided outside the information processing device 1. Makeup videos stored in the moving image data storage unit 18 and makeup videos captured with the camera function are input to the area detection unit 12. For each frame image constituting the input makeup video, the area detection unit 12 detects, as described below, the face area and hand area of the person shown in that frame image.

The quantification unit 14 quantifies the makeup movements of the person shown in the makeup video by obtaining, from the face and hand areas detected by the area detection unit 12, the coordinates and velocity of the hand and of the face during those movements. The post-processing unit 16 post-processes the results of the area detection unit 12 and the quantification unit 14 and outputs them to the output device 502 or the like.

For example, the post-processing unit 16 performs post-processing that surrounds the face area and hand area with rectangles during the makeup movements of the person shown in the makeup video. The post-processing unit 16 also performs post-processing that visually represents the fineness of the hand motion, the hand-elbow linkage, the facial movement, and so on, based on the coordinates and velocity of the hand and of the face during the person's makeup movements.
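A minimal sketch of the rectangle-overlay post-processing, assuming OpenCV, integer pixel box coordinates, and a hypothetical per-frame detection result; the colors and line widths are illustrative choices.

```python
import cv2

def draw_regions(frame, face_box, hand_boxes):
    """Overlay detected regions on a frame (boxes as (x0, y0, x1, y1))."""
    x0, y0, x1, y1 = face_box
    cv2.rectangle(frame, (x0, y0), (x1, y1), (0, 255, 0), 2)  # face: green
    for hx0, hy0, hx1, hy1 in hand_boxes:
        cv2.rectangle(frame, (hx0, hy0), (hx1, hy1), (0, 0, 255), 2)  # hands: red
    return frame
```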

In addition, the post-processing unit 16 can quantify and compare the makeup movements of the persons shown in two makeup videos and output the comparison result. For example, a user of the information processing system of this embodiment can quantify and compare their own makeup movements with those of a highly skilled person such as a makeup artist, making it easy to understand how their own technique differs. The information processing system of this embodiment can thus provide services for improving the user's makeup technique.
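The patent does not specify how the quantified values of two makeup videos are compared. As one plausible approach, the sketch below aligns two hand-speed sequences with dynamic time warping (DTW) and maps the alignment cost to a score; both the use of DTW and the scoring formula are assumptions.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    d = np.full((n + 1, m + 1), np.inf)
    d[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i, j] = cost + min(d[i - 1, j], d[i, j - 1], d[i - 1, j - 1])
    return d[n, m]

def similarity_score(user_speed, expert_speed):
    """Map a DTW distance to a 0-100 score (assumed formula)."""
    dist = dtw_distance(user_speed, expert_speed)
    return 100.0 / (1.0 + dist / max(len(user_speed), len(expert_speed)))
```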

The area detection unit 12 of FIG. 4, which detects the face area and hand area of the person shown in the frame images of a makeup video, has, for example, the configuration shown in FIG. 5. FIG. 5 is a configuration diagram of an example of the area detection unit. The area detection unit 12 of FIG. 5 includes a framing unit 20, a face area detection unit 22, a hand area detection unit 24, and a facial feature point detection unit 26.

The framing unit 20 supplies the input makeup video, frame image by frame image, to the face area detection unit 22 and the hand area detection unit 24. The face area detection unit 22 has a face area learning model that includes a facial organ area learning model.

Here, the face area learning model of the face area detection unit 22 is created by machine learning using, as teaching data, two-dimensional images in which a hand area overlaps a face area. Such teaching data is created from two-dimensional images in which a hand appears in front of the face: images from an annotated hand area learning data set are pasted, with the hand as the foreground, onto images from an annotated face area learning data set, as shown in the sketch below.
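A minimal sketch of synthesizing such occlusion teaching data by pasting an annotated hand crop, with its mask, onto an annotated face image; the file names, mask format, and paste position are hypothetical.

```python
from PIL import Image

def composite_occlusion(face_img_path, hand_img_path, hand_mask_path, position):
    """Paste a hand crop (with its binary mask) over a face image.

    The result is a training image in which the hand partially occludes
    the face; the face and hand annotations carry over unchanged.
    """
    face = Image.open(face_img_path).convert("RGB")
    hand = Image.open(hand_img_path).convert("RGB")
    mask = Image.open(hand_mask_path).convert("L")  # white = hand pixels
    face.paste(hand, position, mask)
    return face

# Hypothetical usage: place the hand crop near the cheek of the face image.
# sample = composite_occlusion("face_0001.png", "hand_0042.png",
#                              "hand_0042_mask.png", (120, 180))
```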

Using a face area learning model trained with a CNN whose learning data set includes teaching data in which part of the face area is occluded by a hand area (an occluded environment), the face area detection unit 22 achieves face area detection that is robust against overlap between the face area and the hand area.

The hand area detection unit 24 has a hand area learning model that includes a fingertip position area learning model. The hand area learning model of the hand area detection unit 24 is created using two-dimensional images of hands during makeup application as teaching data.

Here, the teaching data of hands during makeup application is created from an annotated hand area learning data set specialized for hand shapes during makeup application and an annotated fingertip position area learning data set specialized for fingertip positions during makeup application. Using a hand area learning model trained with a CNN whose learning data set uses this teaching data, the hand area detection unit 24 achieves hand area detection with high detection accuracy for hands and fingertip positions, whose shapes vary widely during makeup movements.

The facial feature point detection unit 26 has a facial feature point learning model that includes a facial organ feature point learning model. The facial feature point detection unit 26 detects the facial feature points of the whole face using the facial feature point learning model. It then uses the facial organ feature point learning model to detect the facial feature points of each facial organ, part by part. The facial feature point detection unit 26 detects the facial feature points of the eyes among the facial organs and, based on the positions of the eye feature points, corrects the positions of the facial feature points of parts other than the eyes (including the contour), thereby achieving highly accurate facial feature point detection even at low resolution or in occluded environments.

<Processing>
<<Face Area Detection and Facial Feature Point Detection>>
The area detection unit 12 detects the face area of the person shown in a frame image, for example, as shown in FIG. 6. FIG. 6 is a conceptual diagram of an example of the processing that detects the face area of the person shown in a frame image.

The face area detection unit 22 of the area detection unit 12 uses the face area learning model described above to detect, as a rectangle 1002, the face area of the person shown in frame image 1000. From the region of the rectangle 1002, the face area detection unit 22 detects the facial organ areas using the facial organ area learning model described above and corrects the aspect ratio of the detected rectangle 1002, centered on the nose, to obtain a rectangle 1004.

Using the facial feature point learning model described above, the facial feature point detection unit 26 detects the facial feature points shown in rectangular area image 1006 from the region of the rectangle 1004. Furthermore, using the facial organ feature point learning model described above, the facial feature point detection unit 26 detects, part by part, the feature points of the facial organs shown in rectangular area image 1008 from rectangular area image 1006.

By performing the processing shown in the flowchart of FIG. 7, the area detection unit 12 of FIG. 5 can achieve highly accurate facial feature point detection even at low resolution and in occluded environments. FIG. 7 is a flowchart of an example of the facial feature point detection processing.

In step S11, the facial feature point detection unit 26 of the area detection unit 12 detects facial feature points from the face area (whole face) detected by the face area detection unit 22 using the facial feature point learning model described above, and estimates the head pose.

Proceeding to step S12, the facial feature point detection unit 26, taking into account the head pose estimated in step S11, detects the eyes in the face area detected by the face area detection unit 22 using the facial organ feature point learning model, and corrects the estimated eye positions to improve their accuracy.

Proceeding to step S13, the facial feature point detection unit 26, taking into account the eye positions corrected in step S12, detects the feature points of the facial organs other than the eyes (including the contour) using the facial organ feature point learning model described above, and corrects the estimated positions of those facial organs. The processing of the flowchart of FIG. 7 is effective, for example, when the facial contour is occluded by a hand.
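A minimal sketch of the coarse-to-fine flow of steps S11 to S13; the three calls (`detect_landmarks`, `refine_eyes`, `refine_other_parts`) are hypothetical stand-ins for the facial feature point learning model and the facial organ feature point learning model.

```python
def detect_facial_feature_points(face_crop, models):
    """Coarse-to-fine landmark detection in the spirit of steps S11-S13.

    `models` is a hypothetical container for the learned models; the
    three calls below stand in for the facial feature point model and
    the facial organ feature point model.
    """
    # S11: whole-face landmarks and head pose from the face area.
    landmarks, head_pose = models.detect_landmarks(face_crop)

    # S12: refine the eye positions, taking the head pose into account.
    eyes = models.refine_eyes(face_crop, landmarks, head_pose)

    # S13: correct the remaining organ landmarks (including the contour)
    # relative to the refined eye positions; useful when the contour is
    # occluded by a hand.
    landmarks = models.refine_other_parts(face_crop, landmarks, eyes)
    return landmarks
```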

<<Hand Area Detection>>
The area detection unit 12 detects the hand area of the person shown in a frame image, for example, as shown in FIG. 8. FIG. 8 is a conceptual diagram of an example of the processing that detects the hand area of the person shown in a frame image.

The hand area detection unit 24 of the area detection unit 12 uses the hand area learning model described above to detect, as rectangles 1102, the hand areas of the left and right hands of the person shown in frame image 1100. Furthermore, using the fingertip position area learning model described above, the hand area detection unit 24 detects, from the regions of the rectangles 1102, the left and right hand areas 1112 and 1114 and the left and right fingertip positions 1116 and 1118 of the person shown in frame image 1110.

<<Output>>
Based on, for example, the coordinates and velocity of the hand and of the face during the makeup movements of the person shown in the makeup video, the information processing device 1 of this embodiment can calculate and output the fineness of the hand motion, the hand-elbow linkage, and the facial movement. This output can be used for research on makeup movements and the like.
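The patent does not define these output quantities mathematically. As one illustrative interpretation only, the sketch below measures fineness as the mean magnitude of frame-to-frame velocity change, hand-elbow linkage as the correlation of hand and elbow speed profiles (which would require an elbow track in addition to the hand and face tracks), and facial movement as mean face displacement; all three formulas are assumptions.

```python
import numpy as np

def fineness(hand_velocity):
    """Mean magnitude of frame-to-frame velocity change (assumed metric)."""
    return float(np.mean(np.linalg.norm(np.diff(hand_velocity, axis=0), axis=1)))

def hand_elbow_linkage(hand_speed, elbow_speed):
    """Correlation between hand and elbow speed profiles (assumed metric)."""
    return float(np.corrcoef(hand_speed, elbow_speed)[0, 1])

def facial_movement(face_coords):
    """Mean frame-to-frame displacement of the face (assumed metric)."""
    return float(np.mean(np.linalg.norm(np.diff(face_coords, axis=0), axis=1)))
```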

The information processing device 1 of this embodiment can quantify and compare the makeup movements of the persons shown in two makeup videos, and can therefore quantify and compare the makeup movements of a user who wants to learn makeup techniques with those of a highly skilled person such as a makeup artist. The comparison result may be scored and provided to the user, or the differences between the user's makeup movements and those of a highly skilled person such as a makeup artist may be presented visually. Furthermore, based on the comparison result, the information processing device 1 of this embodiment may present makeup techniques to the user so that the user's makeup movements approach those of a highly skilled person such as a makeup artist.

For example, when a makeup product applied over a wide area, such as blush or foundation, is used, the information processing device 1 of this embodiment can provide the user with judgments of the applied area and guidance on the application range. When a technically demanding makeup product such as eyeliner, eye shadow, or concealer is used, the information processing device 1 of this embodiment can judge whether the movements are correct and provide guidance. The information processing device 1 of this embodiment can also provide the user with guidance on how to apply hair wax (a hair styling product), how to apply skin care products, or how to massage.

In addition, makeup methods such as how to apply blush (rouge) differ between users with angular and rounded facial bone structures, so the information processing device 1 of this embodiment can also provide recommended makeup suited to the user's face shape and technical guidance for achieving that makeup.

[Second Embodiment]
The information processing system of the second embodiment is identical to that of the first embodiment except for part of its configuration, so overlapping description is omitted as appropriate. In the configuration of the information processing system of the second embodiment, the area detection unit 12 of FIG. 5 is replaced by the area detection unit 12 of FIG. 9. FIG. 9 is a configuration diagram of an example of the area detection unit. The area detection unit 12 of FIG. 9 includes a framing unit 50, a skin color area extraction unit 52, an area division unit 54, a face area detection unit 56, a hand area detection unit 58, a force field calculation unit 60, and a channel calculation unit 62.

The framing unit 50 supplies the input makeup video, frame image by frame image, to the skin color area extraction unit 52. The skin color area extraction unit 52 extracts skin color areas from the frame image. The area division unit 54 divides the skin color areas extracted by the skin color area extraction unit 52 into candidate blocks and labels them. The area division unit 54 supplies the labeled candidate blocks to the face area detection unit 56 and the hand area detection unit 58.
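A minimal sketch of the skin color extraction and candidate-block labeling with OpenCV, using connected-component labeling on a thresholded YCrCb mask; the threshold range is a commonly used heuristic, not a value from the patent.

```python
import cv2
import numpy as np

def skin_candidate_blocks(frame_bgr, min_area=500):
    """Extract skin-colored regions and label connected components.

    The YCrCb threshold below is a commonly used heuristic range,
    not a value taken from the patent.
    """
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    blocks = []
    for i in range(1, n):  # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            blocks.append({"label": i,
                           "bbox": stats[i, :4],       # x, y, w, h
                           "centroid": centroids[i]})
    return mask, labels, blocks
```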

The face area detection unit 56 classifies the candidate blocks of the face area (detects the face area) based on the labels of the supplied candidate blocks (the features of the divided skin color areas). The hand area detection unit 58 then classifies the candidate blocks of the hand areas (detects the hand areas) based on the labels of the candidate blocks supplied by the area division unit 54 (the features of the divided skin color areas) and on the face area candidate blocks classified by the face area detection unit 56.

As shown in FIG. 10, in the face area detection unit 56 and the hand area detection unit 58 of FIG. 9, the face area detection unit 56 first classifies the face area candidate blocks; those blocks are then excluded, and the hand area detection unit 58 classifies the hand area candidate blocks. This embodiment can therefore prevent erroneous detection of the hand areas.

When the face area detection unit 56 cannot classify the face area candidate blocks, the force field calculation unit 60 determines that the face area and a hand area are interfering and performs the processing described below. The force field calculation unit 60 is supplied with the face area and hand area candidate blocks of the previous frame (t-1) and the labeled candidate blocks of the current frame (t).

As shown in FIG. 11, the force field calculation unit 60 sets a plurality of channels in the image of a candidate block based on a force field. The channel calculation unit 62 calculates, for each channel, the movement distance relative to the previous frame and treats the candidate blocks of channels with large movement distances as candidate blocks of the moving hand, thereby classifying the hand area and face area candidate blocks.
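A minimal sketch of the per-channel movement-distance test, simplified so that each channel is represented by a 2-D position compared between frames t-1 and t; the reduction of a force-field channel to a single position and the pixel threshold are illustrative assumptions, not the patent's force-field formulation.

```python
import numpy as np

def classify_moving_blocks(prev_channels, curr_channels, threshold=15.0):
    """Tag current-frame channels whose positions moved far since t-1.

    Channels are simplified here to 2-D positions keyed by name, and the
    threshold (in pixels) is an assumed value. Channels that move a lot
    are taken as belonging to the moving hand, the rest to the face.
    """
    hand, face = [], []
    for name, pos in curr_channels.items():
        dist = np.linalg.norm(np.asarray(pos) - np.asarray(prev_channels[name]))
        (hand if dist > threshold else face).append(name)
    return hand, face
```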

Here, in addition to using the magnitude of the movement distance, the force field calculation unit 60 and the channel calculation unit 62 of the area detection unit 12 can further prevent erroneous detection of hand area and face area candidate blocks by clustering similar hand area movements in advance.

(Summary)
As described above, according to these embodiments, the makeup movements of a person shown in a recorded makeup video can be quantified, and makeup techniques can be provided or taught, without wearing sensors or other equipment. The present invention is not limited to the embodiments specifically disclosed above, and various modifications and changes can be made without departing from the scope of the claims. For example, although two-dimensional moving image data has been described as an example in these embodiments, three-dimensional moving image data may also be used. In that case, by performing the same data analysis as for two-dimensional moving image data, or by combining the analysis of two-dimensional moving image data with the analysis of three-dimensional information, the makeup movements of a person shown in three-dimensional moving image data can be quantified, and makeup techniques can be provided or taught.

Although the present invention has been described above based on embodiments, the present invention is not limited to the above embodiments, and various modifications can be made within the scope of the claims. This application claims priority based on Japanese basic application No. 2018-199739 filed with the Japan Patent Office on October 24, 2018, the entire contents of which are incorporated herein by reference.

1: information processing device
2: client terminal
3: server device
4: network
10: operation acceptance unit
12: area detection unit
14: quantification unit
16: post-processing unit
18: moving image data storage unit
20: framing unit
22: face area detection unit
24: hand area detection unit
26: facial feature point detection unit
50: framing unit
52: skin color area extraction unit
54: area division unit
56: face area detection unit
58: hand area detection unit
60: force field calculation unit
62: channel calculation unit

FIG. 1A is a configuration diagram of an example of the information processing system of this embodiment.
FIG. 1B is a configuration diagram of an example of the information processing system of this embodiment.
FIG. 2 is a hardware configuration diagram of an example of the computer of this embodiment.
FIG. 3A is a distribution diagram of an example of measurement results.
FIG. 3B is a distribution diagram of an example of measurement results.
FIG. 4 is a functional block diagram of an example of the information processing system of this embodiment.
FIG. 5 is a configuration diagram of an example of the area detection unit.
FIG. 6 is a conceptual diagram of an example of the processing that detects the face area of a person shown in a frame image.
FIG. 7 is a flowchart of an example of the facial feature point detection processing.
FIG. 8 is a conceptual diagram of an example of the processing that detects the hand area of a person shown in a frame image.
FIG. 9 is a configuration diagram of an example of the area detection unit.
FIG. 10 is a conceptual diagram of an example of the processing of the face area detection unit and the hand area detection unit.
FIG. 11 is a conceptual diagram of an example of the processing of the force field calculation unit and the channel calculation unit.

1: information processing device
10: operation acceptance unit
12: area detection unit
14: quantification unit
16: post-processing unit
18: moving image data storage unit

Claims (11)

1. A program for causing a computer to function to analyze moving image data and output a value that quantifies the makeup movements of a person shown in the moving image data, the program causing the computer to function as:
a first detection means for detecting, from the moving image data, a face area in which the face of the person is shown;
a second detection means for detecting, from the moving image data, a hand area in which a hand of the person is shown; and
an output means for outputting, based on the detected face area and hand area, a value that quantifies the makeup movements of the person shown in the moving image data from changes in the face area and the hand area.

2. The program according to claim 1, wherein
the first detection means comprises:
a face area detection means for detecting the face area of the person from the moving image data using a face area learning model; and
a facial organ area detection means for detecting facial organ areas from the face area using a facial organ feature point learning model, and
the second detection means comprises:
a hand area detection means for detecting the hand area of the person from the moving image data using a hand area learning model for makeup movements; and
a fingertip position area detection means for detecting a fingertip position area from the hand area using a fingertip position area learning model for makeup movements.

3. The program according to claim 2, wherein the face area learning model is a convolutional neural network trained with a face area learning data set that includes teaching data in which part of the face area is occluded by a hand area.

4. The program according to claim 2, wherein the hand area learning model is a convolutional neural network trained with a hand area learning data set that includes teaching data of hand shapes during makeup application, and the fingertip position area learning model is a convolutional neural network trained with a fingertip position area learning data set that includes teaching data of fingertip positions during makeup application.
5. The program according to claim 1, further causing the computer to function as:
an extraction means for extracting skin color areas from the moving image data; and
a division means for dividing the skin color areas into divided areas, wherein
the first detection means detects, from the skin color areas, a face area in which the face of the person is shown, based on feature quantities of the divided areas, and
the second detection means detects, based on the feature quantities of the divided areas, a hand area in which a hand of the person is shown from the skin color areas other than the skin color area detected as the face area in which the face of the person is shown.

6. The program according to claim 5, further causing the computer to function as a third detection means that, when the first detection means cannot detect a face area in which the face of the person is shown from the skin color areas, detects the movement distance of the divided areas between frame images constituting the moving image data and assumes that a divided area with a large movement distance is a hand area in which a hand of the person is shown.

7. The program according to claim 1, wherein the output means outputs, as the value that quantifies the makeup movements of the person shown in the moving image data, the coordinates and velocity of the hand and the coordinates and velocity of the face during the makeup movements.

8. The program according to claim 1, wherein the output means outputs, based on a comparison result of the values that quantify the makeup movements of the persons shown in two pieces of moving image data, an image that visually represents the differences between the makeup movements of the persons shown in the two pieces of moving image data.

9. An information processing device that analyzes moving image data and outputs a value that quantifies the makeup movements of a person shown in the moving image data, the information processing device comprising:
a first detection means for detecting, from the moving image data, a face area in which the face of the person is shown;
a second detection means for detecting, from the moving image data, a hand area in which a hand of the person is shown; and
an output means for outputting, based on the detected face area and hand area, a value that quantifies the makeup movements of the person shown in the moving image data from changes in the face area and the hand area.
10. A quantification method executed by an information processing device that analyzes moving image data and outputs a value that quantifies the makeup movements of a person shown in the moving image data, the quantification method comprising:
a first detection step of detecting, from the moving image data, a face area in which the face of the person is shown;
a second detection step of detecting, from the moving image data, a hand area in which a hand of the person is shown; and
an output step of outputting, based on the detected face area and hand area, a value that quantifies the makeup movements of the person shown in the moving image data from changes in the face area and the hand area.

11. An information processing system comprising a client terminal that accepts operations from a user and a server device that, in accordance with the operations accepted from the user by the client terminal, analyzes moving image data, outputs a value that quantifies the makeup movements of a person shown in the moving image data, and provides the value to the client terminal, the server device comprising:
a receiving means for receiving information on the operations accepted from the user by the client terminal;
a first detection means for detecting, from the moving image data, a face area in which the face of the person is shown, in accordance with the operations accepted from the user;
a second detection means for detecting, from the moving image data, a hand area in which a hand of the person is shown, in accordance with the operations accepted from the user;
an output means for outputting, based on the detected face area and hand area, a value that quantifies the makeup movements of the person shown in the moving image data from changes in the face area and the hand area; and
a transmitting means for transmitting the output value to the client terminal.
TW108126136A 2018-10-24 2019-07-24 Program, information processing device, quantification method, and information processing system TW202016881A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018-199739 2018-10-24
JP2018199739 2018-10-24

Publications (1)

Publication Number Publication Date
TW202016881A 2020-05-01

Family

ID=70331505

Family Applications (1)

Application Number Title Priority Date Filing Date
TW108126136A TW202016881A (en) 2018-10-24 2019-07-24 Program, information processing device, quantification method, and information processing system

Country Status (4)

Country Link
JP (1) JP7408562B2 (en)
CN (1) CN112912925A (en)
TW (1) TW202016881A (en)
WO (1) WO2020084842A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102586144B1 (en) * 2021-09-23 2023-10-10 주식회사 딥비전 Method and apparatus for hand movement tracking using deep learning
CN114407913B (en) * 2022-01-27 2022-10-11 星河智联汽车科技有限公司 Vehicle control method and device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002072861A (en) * 2000-09-01 2002-03-12 Ibic Service:Kk Hairdressing art lecture system
JP2003216955A (en) 2002-01-23 2003-07-31 Sharp Corp Method and device for gesture recognition, dialogue device, and recording medium with gesture recognition program recorded thereon
WO2015029372A1 (en) * 2013-08-30 2015-03-05 パナソニックIpマネジメント株式会社 Makeup assistance device, makeup assistance system, makeup assistance method, and makeup assistance program
TWI669103B (en) * 2014-11-14 2019-08-21 日商新力股份有限公司 Information processing device, information processing method and program
JP6670677B2 (en) * 2016-05-20 2020-03-25 日本電信電話株式会社 Technical support apparatus, method, program and system

Also Published As

Publication number Publication date
WO2020084842A1 (en) 2020-04-30
JP7408562B2 (en) 2024-01-05
CN112912925A (en) 2021-06-04
JPWO2020084842A1 (en) 2021-10-07
